The error occurs because Azure OpenAI's o4-mini model expects the parameter max_completion_tokens, not max_tokens. Phoenix previously sent max_tokens, which is not supported for the o1/o3/o4 Azure models, resulting in the error you see. This was fixed in Phoenix 11.10.1 and later, but if you still encounter it, ensure Phoenix and all related packages are up to date, and that your code or environment isn't overriding this fix. See GitHub Issue #8631 and PR #8661 for details.

If you are on the latest versions and still see this, double-check that your deployment is not caching old code and that your model name is correct. The fix ensures the correct parameter (max_completion_tokens) is sent for o1/o3/o4 Azure models, resolving this error.

Would you like more detail or troubleshooting steps?

Sources:
- GitHub Issue #8631
- PR #8661
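For reference, the fix described above amounts to remapping the token-limit parameter for the Azure reasoning-model families before the request is sent. Here is a minimal sketch of that mapping; the helper name and the prefix check are illustrative, not Phoenix's actual implementation:

```python
def adapt_token_param(model: str, params: dict) -> dict:
    """Remap max_tokens to max_completion_tokens for Azure reasoning
    models (o1/o3/o4 families), which reject max_tokens.
    Illustrative sketch only, not the actual Phoenix code."""
    if model.startswith(("o1", "o3", "o4")) and "max_tokens" in params:
        params = dict(params)  # copy to avoid mutating the caller's dict
        params["max_completion_tokens"] = params.pop("max_tokens")
    return params

# o4-mini gets the remapped parameter; a non-reasoning model is untouched.
print(adapt_token_param("o4-mini", {"max_tokens": 256}))
# → {'max_completion_tokens': 256}
print(adapt_token_param("gpt-4o", {"max_tokens": 256}))
# → {'max_tokens': 256}
```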
💡 Hint: Mention RunLLM in the thread for follow-ups.
It seems likely that the playground fix (mapping max_tokens to max_completion_tokens for Azure o1/o3/o4 models) was initially specific to the playground client, as described in PR #8661. There is no explicit mention in the provided sources that the same fix has been applied to the llm_classify path or the core evaluation logic.

If you are still seeing this error with llm_classify, the fix may not yet be present in the evaluation pipeline. You may want to open an issue or check for recent updates in the evals package to confirm. Would you like a step-by-step on how to check, or a workaround?

Sources:
- PR #8661
We release quite often, so probably by the end of the day.
Awesome, looking forward to it. Thanks!
Would I have to update arize-phoenix-evals specifically, or will updating the main phoenix package do?
It's always better to update evals specifically.
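If in doubt, you can upgrade both packages together and then confirm the installed versions (package names as published on PyPI; pin versions as needed for your environment):

```shell
# Upgrade the main Phoenix package and the evals package together
pip install --upgrade arize-phoenix arize-phoenix-evals

# Verify which versions ended up installed
pip show arize-phoenix arize-phoenix-evals
```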
