I’ve run the eval examples from deeplearning.ai on my local laptop. Everything worked great. Very cool! However, I’m unable to upload the feedback to the server. Do I need a license, or am I missing something else? The additional metric shows up as “Tool Calling Eval” with no values, and the feedback fields are not populated. I see them in the dataframe before uploading. If I do need a license, how can I go about purchasing one? Thanks so much!
You do not need a license to use Phoenix for uploading feedback or evaluations; Phoenix is open-source and free to use for self-hosted and most cloud scenarios.

If your feedback fields and metrics (like "Tool Calling Eval") appear in your dataframe but not in the UI after upload, the most common issue is that the evaluation dataframe is missing required columns, especially span_id, or the upload is not associated with the correct project or trace. Make sure your dataframe includes span_id and that you use the correct log_evaluations method as shown in the docs (see here).

If you are using Phoenix Cloud and encounter access or feature limitations, you may need to set an API key or configure your environment variables as described in the documentation. For enterprise features or support, you can contact Arize AI via their website, but for standard feedback/eval uploads, no license is required (see here).

Would you like more detail on troubleshooting missing feedback in the UI?

Sources:
- Connect to Phoenix (API keys, endpoints, etc.) - see here
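For reference, a minimal upload sketch (assuming eval_df is the output of llm_classify and is indexed by context.span_id, as it is when llm_classify is run on a spans dataframe exported from Phoenix):

```python
import phoenix as px
from phoenix.trace import SpanEvaluations

# eval_df is assumed to be the llm_classify output, indexed by context.span_id
# (llm_classify preserves the index of the spans dataframe it was given)
px.Client().log_evaluations(
    SpanEvaluations(
        eval_name="Tool Calling Eval",  # the metric name that appears in the UI
        dataframe=eval_df,
    )
)
```

If the index was reset or renamed at any point, the upload can succeed while attaching to nothing, which matches the "metric exists but is blank" symptom.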
Fantastic! Thank you so much, John. I really appreciate the quick reply. (Unrelated, but I have a call set up to speak with you all next week regarding AX. Really looking forward to it. Thanks again in advance!)
OK, so I am hosting locally and have expanded all spans, but I still don’t see the feedback. This is already helpful, though! At least I’m not blocked by an unknown licensing issue. Much appreciate the guidance! (Oh, and I do have the span_ids in the dataframe as well; I followed the tutorials exactly.)
P.S. Everything ran fine. I was able to export the evals, figured out how to run llm_classify using a bearer token provider for AzureOpenAI, dropped the non-tool-calling evals, and generated an output dataframe ready for upload. The upload created the ‘metric’, but it was blank.
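For anyone hitting the same thing later, here’s roughly what that setup looked like (a sketch, not my exact code: the endpoint and deployment name are placeholders, and the template/rails names are the ones I believe the Phoenix tutorial uses):

```python
import phoenix as px
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from phoenix.evals import (
    TOOL_CALLING_PROMPT_RAILS_MAP,
    TOOL_CALLING_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

# Azure AD bearer tokens instead of a static API key
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

model = OpenAIModel(
    model="gpt-4o-mini",  # Azure deployment name (placeholder)
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_version="2024-02-01",
    azure_ad_token_provider=token_provider,
)

# Spans exported from the local Phoenix instance, indexed by context.span_id
spans_df = px.Client().get_spans_dataframe()

eval_df = llm_classify(
    dataframe=spans_df,
    model=model,
    template=TOOL_CALLING_PROMPT_TEMPLATE,
    rails=list(TOOL_CALLING_PROMPT_RAILS_MAP.values()),
    provide_explanation=True,
)
```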
No no, not at all! I’m rerunning the eval as a sanity check (llm_classify on 200 spans with gpt-4o-mini is taking a bit of time). Is there a way I can look at some logs to see exactly what is missing?
Screenshot of the run, but the results are not showing up in the UI.
Thank you sir!
If you pick any one of these span_ids and search for it in Phoenix like below, do you see any result?
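Or, as an equivalent programmatic check (a sketch, assuming eval_df is the dataframe you uploaded), you can compare its span_ids against what your local Phoenix instance actually has:

```python
import phoenix as px

# Pull all spans from the running Phoenix instance; the returned dataframe
# is indexed by context.span_id
spans_df = px.Client().get_spans_dataframe()

# Any span_id present in the eval dataframe but absent from Phoenix is
# silently dropped on upload, which would explain an empty metric
missing = set(eval_df.index) - set(spans_df.index)
print(f"{len(missing)} of {len(eval_df)} eval rows have no matching span in Phoenix")
```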
