Timeouts during experiment runs in Phoenix are a known issue, particularly with large datasets or long-running tasks. Common workarounds: increase the timeout parameter where a method exposes one, upgrade to the latest Phoenix release (pagination and batching efficiency have been actively improved in recent versions), or split large datasets into smaller batches. Some operations require adjusting server- or client-side timeout settings instead, since not every method currently exposes a timeout parameter in the API. See <https://github.com/Arize-ai/phoenix/issues/7991|issue 7991>, <https://github.com/Arize-ai/phoenix/issues/9321|issue 9321>, and <https://github.com/Arize-ai/phoenix/issues/7486|issue 7486> for details.
If the method you are using does not accept a timeout parameter, consider upgrading Phoenix (11.32.0+ includes pagination improvements) or splitting your workload. For experiment runs specifically, the timeout is often set internally, so check the latest documentation or source code for newly exposed options. Until more granular timeout controls are available in the API, batching and increasing server resources are the most reliable mitigations.
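As a minimal sketch of the batching workaround: split the dataset into fixed-size chunks and process each chunk as its own run, so no single call exceeds the internal timeout. The `run_batch` callable and the batch size of 50 below are assumptions for illustration; in real usage you would substitute your Phoenix experiment call (check your installed version's signature).

```python
# Sketch: split a large dataset into smaller batches so each experiment
# run stays under the (internal) timeout. `run_batch` is a stand-in for
# the actual Phoenix experiment call, not a real Phoenix API.

def chunked(items, size):
    """Yield successive slices of `items`, each at most `size` long."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def run_in_batches(examples, run_batch, batch_size=50):
    """Run `run_batch` over `examples` in chunks, collecting all results."""
    results = []
    for batch in chunked(examples, batch_size):
        # Each call covers only `batch_size` examples, so a slow task is
        # far less likely to hit a per-run timeout.
        results.extend(run_batch(batch))
    return results
```

The same pattern works whether the per-batch call is a local task function or a remote experiment submission; only `batch_size` needs tuning against your task's latency.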
Would you like more detail on how to set timeouts or batch your data?