nest_asyncio seems to time out really easily --> this was a run of hallucination, QA, and relevance evals on 9 queries
The issue with nest_asyncio timing out during evaluations is likely due to the concurrency and synchronization primitives used in the llm_classify function. Running evaluations in a Django environment, which typically uses worker threads, can exacerbate this issue. To work around this, you can run llm_classify synchronously by setting run_sync=True. This will avoid using any synchronization primitives that can only be created in the main thread, but it may be slower.

```python
hallucination_classifications = llm_classify(
    dataframe=df,
    template=HALLUCINATION_PROMPT_TEMPLATE,
    model=azure_model,
    rails=rails,
    concurrency=20,
    run_sync=True,
)["label"].tolist()
```

Would you like more details on this workaround?

Sources:
- https://arize-ai.slack.com/archives/C04R3GXC8HK/p1713961598.350669
- https://github.com/Arize-ai/phoenix/issues/2309
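The main-thread restriction the answer refers to can be sketched with stdlib asyncio alone. This is a hypothetical stand-in, not Phoenix code: `classify_batch` mimics llm_classify's async fan-out, and `run_in_worker` mimics a Django worker thread, where each thread must create its own event loop rather than reuse primitives bound to the main thread's loop.

```python
import asyncio
import threading

def run_in_worker(coro_fn):
    # Simulates a Django worker thread: asyncio.run creates a fresh
    # event loop in this thread. Primitives (locks, semaphores) tied
    # to the main thread's loop cannot be reused here, which is why
    # a synchronous path (run_sync=True) sidesteps the problem.
    result = {}

    def target():
        result["value"] = asyncio.run(coro_fn())

    t = threading.Thread(target=target)
    t.start()
    t.join()
    return result["value"]

async def classify_batch(items):
    # Hypothetical stand-in for an async classification fan-out.
    async def classify_one(x):
        await asyncio.sleep(0)  # placeholder for an LLM call
        return f"label-{x}"

    return await asyncio.gather(*(classify_one(i) for i in items))

labels = run_in_worker(lambda: classify_batch(range(3)))
```

The sketch works because each worker thread owns its loop end to end; timeouts arise when nest_asyncio re-enters a loop whose synchronization primitives belong to a different thread.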
Note: I wasn't able to find highly relevant data sources, but above are a few potentially relevant links.
💡 Hint: Mention RunLLM in the thread for followups.
this was on run_evals
the 9 examples ran faster without nest_asyncio 🤷
