When evaluating multiple prompt versions on a dataset, I'm seeing:

upstream connect error or disconnect/reset before headers. retried and the latest reset reason: remote connection failure, transport failure reason: delayed connect error: Connection refused

I suspect it's due to rate limiting on the LLM endpoint. Does Phoenix do any kind of retry when running prompts in bulk?
Yes, Phoenix implements retry mechanisms when running prompts in bulk, particularly in scenarios where rate limiting might be encountered. According to the Phoenix GitHub repository, Phoenix's evaluation verbose mode logs retries and failed requests, which helps you pinpoint whether a slowdown is caused by rate limits. Additionally, the rate_limiters.py file in the Phoenix codebase describes an adaptive rate limiter that adjusts the request rate based on the number of rate limit errors: it reduces the rate when errors occur and gradually increases it over time if no further errors are encountered.

Would you like more detail on how the retry mechanism is implemented or how to configure it?

Sources:
- Phoenix GitHub repository
- rate_limiters.py file
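To illustrate the adaptive behavior described above, here is a minimal sketch of that pattern: back off on rate-limit errors, recover gradually on success. The class and method names are hypothetical and this is not Phoenix's actual rate_limiters.py implementation.

```python
class AdaptiveRateLimiter:
    """Illustrative adaptive request-rate limiter (hypothetical, not Phoenix's code):
    cut the request rate when the endpoint returns rate-limit errors,
    and slowly probe back up toward the original rate on success."""

    def __init__(self, initial_rate=10.0, min_rate=0.5,
                 backoff_factor=0.5, recovery_factor=1.05):
        self.max_rate = initial_rate        # requests/second, upper bound
        self.rate = initial_rate
        self.min_rate = min_rate
        self.backoff_factor = backoff_factor
        self.recovery_factor = recovery_factor

    def on_rate_limit_error(self):
        # Halve the request rate (down to a floor) when the endpoint pushes back.
        self.rate = max(self.min_rate, self.rate * self.backoff_factor)

    def on_success(self):
        # Gradually ramp back toward the original rate if errors stop.
        self.rate = min(self.max_rate, self.rate * self.recovery_factor)

    @property
    def delay(self):
        # Seconds to sleep between requests at the current rate.
        return 1.0 / self.rate


limiter = AdaptiveRateLimiter(initial_rate=10.0)
limiter.on_rate_limit_error()      # e.g. a 429 or connection-refused response
print(limiter.rate)                # 5.0
limiter.on_success()
print(round(limiter.rate, 2))      # 5.25
```

Note how repeated successes can never push the rate above its starting value, so the limiter converges back to normal throughput once the endpoint stops rejecting requests.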
