Hi Arize team, how do we usually work around timeout issues when running experiments? I see it's coming from phoenix.experiments.functions.async_run_experiment
Timeouts during experiment runs in Phoenix are a known issue, especially with large datasets or long-running tasks. Workarounds include increasing the timeout parameter (if available), upgrading to the latest Phoenix version for improved pagination and efficiency, and splitting large datasets into smaller batches. For some operations you may need to adjust server- or client-side timeout settings, but not all methods currently expose a timeout parameter directly in the API. Pagination and batching are being actively improved in recent releases to address these issues. See issue 7991, issue 9321, and issue 7486 for details.

If you are using a method that does not support a timeout parameter, consider upgrading Phoenix (11.32.0+ for pagination improvements) or splitting your workload. For experiment runs specifically, the timeout is often set internally, and you may need to check the latest documentation or source code for new options. If you need to avoid errors, batching or increasing server resources may help until more granular timeout controls are available in the API.

Would you like more detail on how to set timeouts or batch your data?

Sources:
- issue 7991
- issue 9321
- issue 7486
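Since batching is one of the suggested workarounds, here is a minimal sketch of splitting a dataset's examples into smaller batches before running separate experiment runs over each one. The `chunk` helper is illustrative, not part of the Phoenix API; how you slice your actual dataset depends on how it was created.

```python
def chunk(items, size):
    """Yield successive fixed-size slices of a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Example: split 10 examples into batches of 3, then run an
# experiment per batch instead of one long-running experiment.
examples = list(range(10))
batches = list(chunk(examples, 3))
# batches -> [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```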
I have been doubling the timeout parameter in run_experiment
Might need your help here Elizabeth H.
hmm, I actually switched to the new way of initializing the Phoenix Client
Nico G., have you tried increasing the timeout from the default?
from phoenix.client import AsyncClient
async_client = AsyncClient()
experiment = await async_client.experiments.run_experiment(timeout=60, dataset=...)

I think the default was 5 and I tried with 10 and a little more. Let me try with a higher timeout.
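Besides raising the timeout, another option is to retry the call when it does time out. This is a generic asyncio retry-with-backoff wrapper, not part of the Phoenix API; you would pass it a zero-argument callable that creates a fresh `run_experiment(...)` coroutine on each attempt (a coroutine object can only be awaited once).

```python
import asyncio

async def run_with_retries(coro_factory, retries=3, base_delay=1.0):
    """Retry an async call on TimeoutError with exponential backoff.

    coro_factory: zero-arg callable returning a fresh coroutine per attempt,
    e.g. lambda: async_client.experiments.run_experiment(timeout=120, ...).
    """
    for attempt in range(retries):
        try:
            return await coro_factory()
        except asyncio.TimeoutError:
            if attempt == retries - 1:
                raise  # out of attempts, propagate the timeout
            # back off: 1s, 2s, 4s, ... before the next attempt
            await asyncio.sleep(base_delay * 2 ** attempt)
```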
