Hi, I am trying to set up evaluations for my RAG system using Phoenix and trace them in the Phoenix UI. This is the notebook I am using for reference: evaluate_rag.ipynb
When I try to call the OpenAIModel from phoenix.experimental.evals using the necessary parameters, I keep getting this error:
If I don't provide the api_version and api_endpoint, I get the error "API Key Invalid".
These same parameters work fine for me when I use them to create an AzureOpenAI model from llama_index.llms.
Hey Samantha F. - the model names for Azure depend on your deployment. They don't have the same aliases as the public OpenAI API.
One way to check this is to see what the params are in the Azure OpenAI playground.
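For what it's worth, here's a rough sketch of how those Azure params would be wired into OpenAIModel. The keyword names (azure_endpoint, api_version) and the deployment/endpoint values are my assumptions - double-check them against the phoenix version you have installed:

```python
# Sketch only: assembling kwargs for phoenix.experimental.evals.OpenAIModel
# against an Azure deployment. Keyword names are assumptions; verify them
# against your installed phoenix version.
def azure_openai_model_kwargs(deployment: str, endpoint: str,
                              api_version: str, api_key: str) -> dict:
    # For Azure, "model" must be your *deployment* name (copy it from the
    # Azure OpenAI playground), not a public alias like "gpt-4".
    return {
        "model": deployment,
        "azure_endpoint": endpoint,
        "api_version": api_version,
        "api_key": api_key,
    }

# Hypothetical usage -- then: model = OpenAIModel(**kwargs)
kwargs = azure_openai_model_kwargs(
    deployment="my-gpt-4-deployment",               # hypothetical deployment name
    endpoint="https://my-resource.openai.azure.com/",  # hypothetical endpoint
    api_version="2023-07-01-preview",
    api_key="<your-azure-api-key>",
)
print(kwargs["model"])  # -> my-gpt-4-deployment
```

The point is just that the deployment name stands in for the model alias; everything else should match what the playground shows for your resource.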
Hmm, weird - I struggled with this too going off the Azure OpenAI deployment page; the only way I got it to work was by explicitly copying the values from the playground.
Quick question: I am facing another error, which seems to come from incorrect fallback values returned by the run_evals function, as can be seen in the attached picture. I saw an issue on GitHub that briefly addressed how to resolve this. Just want to check: is that the only way to do it right now, or can it be resolved differently?
Good catch, Samantha - I'm not quite sure of the details, but it looks like there's a fix out for that. I'll get our on-call engineer Xander S. to get it merged and deployed for ya.
Hi Mikyo and Xander S., thanks for the update. I upgraded Phoenix and have been trying to run the evaluations, but I'm receiving errors - both in the notebook environment and in scripts. Any help resolving this would be highly appreciated.
Hey Samantha F., thanks for the report. It's difficult to tell what's going on from the stacktrace alone. Are you running Phoenix evals in a background thread?