Hi team, are the GPT-5 models via Azure OpenAI supported yet for phoenix-evals? While trying to use gpt-5-mini hosted on Azure OpenAI, I'm running into this error: BadRequestError('Error code: 400 - {\'error\': {\'message\': "Invalid type for \'max_tokens\': expected an unsupported value, but got null instead."
Phoenix supports GPT-5 in the Prompt Playground as of version 11.21+, but there is no explicit documentation confirming support for GPT-5 models via Azure OpenAI in phoenix-evals. Recent updates have focused on parameter compatibility (e.g., mapping max_tokens to max_completion_tokens for Azure o1/o3 models), but there is no mention of GPT-5-specific Azure support or of a fix for the error you hit with gpt-5-mini. The error suggests a parameter mismatch, which has been a recurring issue for new Azure models in Phoenix, but no official fix for GPT-5 on Azure is documented yet.

If you are on the latest Phoenix version and still see this error, it likely means GPT-5 models via Azure OpenAI are not yet fully supported in phoenix-evals, or that they require additional parameter handling not yet implemented. You may want to monitor the Phoenix GitHub issues and release notes for updates on Azure GPT-5 support and parameter compatibility improvements.

Would you like more detail on workarounds or tracking this feature?

Sources:
- fix: duplicate OpenAIReasoningStreamingClient for AzureOpenAIReasoningStreamingClient
- fix: azure max tokens update
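As an interim workaround, one sketch (not an official fix) is to translate the legacy max_tokens parameter to max_completion_tokens before the request reaches Azure, since newer Azure models reject both a null max_tokens and the old parameter name. The helper below is hypothetical, not part of phoenix-evals; the commented usage assumes the standard openai Python SDK with placeholder endpoint, key, and deployment values, and is mainly useful for confirming the deployment works outside phoenix-evals:

```python
def adapt_token_params(params: dict) -> dict:
    """Map the legacy max_tokens parameter to max_completion_tokens,
    dropping it entirely when it is None (the null value in the 400 error).
    Hypothetical helper for illustration only."""
    adapted = dict(params)
    max_tokens = adapted.pop("max_tokens", None)
    if max_tokens is not None:
        adapted.setdefault("max_completion_tokens", max_tokens)
    return adapted

# Example: verify the gpt-5-mini deployment responds outside phoenix-evals
# first (placeholders throughout; api_version is an assumption):
#
# from openai import AzureOpenAI
# client = AzureOpenAI(
#     azure_endpoint="https://<your-resource>.openai.azure.com",
#     api_key="<key>",
#     api_version="<api-version>",
# )
# resp = client.chat.completions.create(
#     model="gpt-5-mini",  # your Azure deployment name
#     messages=[{"role": "user", "content": "ping"}],
#     **adapt_token_params({"max_tokens": 64}),
# )
```

If the raw SDK call succeeds with max_completion_tokens, that points the finger at parameter handling inside phoenix-evals rather than at the Azure deployment itself.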
