I'm trying to use AzureOpenAI instead of the OpenAI model in the following code excerpt:

```python
with suppress_tracing():
    clarity_eval = llm_classify(
        dataframe=clarity_df,
        template=CLARITY_LLM_JUDGE_PROMPT,
        rails=['clear', 'unclear'],
        model=OpenAIModel(model="gpt-4o"),
        provide_explanation=True
    )
```

I am actually going through the Arize course on the DeepLearning.AI platform. Struggling with this simple issue... is there any example of how to use AzureOpenAI (from the openai Python library) with llm_classify?
To use Azure OpenAI with llm_classify, use the OpenAIModel class from Phoenix, passing the Azure-specific parameters (model, azure_endpoint, and api_version). There is no separate AzureOpenAI class in Phoenix; Azure support is handled via OpenAIModel with the correct arguments. Example:

```python
from phoenix.evals import OpenAIModel, llm_classify

azure_model = OpenAIModel(
    model="gpt-35-turbo-16k",  # your Azure deployment name
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",
    api_version="2023-09-15-preview",
    api_key="<your-azure-openai-api-key>"
)

clarity_eval = llm_classify(
    dataframe=clarity_df,
    template=CLARITY_LLM_JUDGE_PROMPT,
    rails=['clear', 'unclear'],
    model=azure_model,
    provide_explanation=True
)
```

Do not use a class named AzureOpenAI; use OpenAIModel with the Azure parameters as shown above. For more, see the Phoenix Azure OpenAI integration docs and API reference.

Would you like more detail on configuring authentication or troubleshooting common Azure errors?

Sources:
- Phoenix Azure OpenAI integration docs
- Phoenix API reference: evaluation models
- Phoenix evaluation models GitHub
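As an aside, the rails argument works the same regardless of provider: llm_classify constrains the judge model's raw text output to one of the allowed labels, falling back to a sentinel when the response can't be mapped. Here is a minimal stdlib-only sketch of that idea; `snap_to_rail` is a hypothetical helper for illustration, not Phoenix's actual implementation:

```python
import re

# Simplified illustration of mapping a judge's raw output onto a fixed
# set of rails. NOT Phoenix's real implementation, just the general idea.
def snap_to_rail(raw_response: str, rails: list[str],
                 unparsable: str = "NOT_PARSABLE") -> str:
    """Return the single rail mentioned in the response, else a sentinel."""
    text = raw_response.strip().lower()
    # Exact match first: the judge answered with just the label.
    for rail in rails:
        if text == rail.lower():
            return rail
    # Otherwise look for exactly one rail as a whole word in the response
    # (word boundaries keep "clear" from matching inside "unclear").
    mentioned = [rail for rail in rails
                 if re.search(rf"\b{re.escape(rail.lower())}\b", text)]
    if len(mentioned) == 1:
        return mentioned[0]
    return unparsable  # zero or ambiguous matches

print(snap_to_rail("clear", ["clear", "unclear"]))                  # clear
print(snap_to_rail("The answer is unclear.", ["clear", "unclear"]))  # unclear
```

This is why lessons in the course warn about judge prompts that ramble: if the response mentions several rails (or none), the row is marked unparsable rather than silently guessed.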
