Hi folks, I'm currently working with the Phoenix SDK in Python and have run into some difficulty collecting traces from a Phoenix session I've deployed locally. I'm attempting to connect to this session from a different notebook to gather trace data, but it seems there may be a gap in my configuration or my understanding of the process.
Hey Soufiane A., thanks for trying Phoenix! From the snippet you sent, it's difficult to tell what the issue might be. The process for using Phoenix to collect traces is: first launch Phoenix with px.launch_app, instrument your application as in this example, and then invoke your application (in this case, by invoking the OpenAI client). Have you tried those steps?
Thanks Xander S. When I work in the same notebook it's fine, but if I launch from another notebook or session, how can I point Phoenix at a specific URL to extract traces and spans from a different notebook or Python script?
Hey Soufiane A., try running the launch_phoenix notebook followed by the instrument_and_invoke_client notebook. Let us know if that works!
I see your point now, but is this restricted to OpenAI only? How can I collect these traces with my own LLM using LlamaIndex?
Hey Soufiane A., give this tutorial a try!
The TLDR is you need to launch Phoenix, instrument your LlamaIndex application with set_global_handler("arize_phoenix"), and then run LlamaIndex as usual.
Dear Xander S., I have previously worked through this notebook and everything seems to function correctly. However, I believe my initial query might not have been explicit enough, so let me clarify my situation. Imagine I've deployed my Phoenix image and called launch_app in notebook_1. Now, in a separate script or notebook_2, my objective is to evaluate my spans. For this purpose, I need to use get_spans_dataframe(). My concern is how to do this effectively, given that px.active_session() in notebook_2 isn't running in the same session where launch_app was initiated. Is my question clearer now?
Hey Soufiane A., thanks for the clarification. When you say "Phoenix image", are you referring to our Docker image? Or are you just running Phoenix locally in a notebook?
Yes, the Docker image (and also sometimes running locally in a notebook).
If I'm understanding correctly, you're trying to run Phoenix in a Docker image, submit traces to the Docker image from a LlamaIndex application running in one notebook, and export those traces from the Docker container into a second notebook for evaluation. Does that sound right?
Yes, that's correct. My goal is to gather traces from the Docker image using a different script. Once collected, I want to evaluate those traces and then feed the evaluations back into the Phoenix application deployed via Docker, so that they're displayed within the application. For that, I'll be using px.log_evaluations(DocumentEvaluations(eval_name="Relevance", dataframe=relevance_eval_df)).
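For context, here is a minimal sketch of the evaluations dataframe I have in mind. The span IDs are hypothetical (in practice they would come from the exported spans), the `DocumentEvaluations` import path is an assumption, and the final px.log_evaluations call needs a live Phoenix session, so it is left commented out:

```python
import pandas as pd

# Hypothetical document-level relevance scores, indexed the way
# DocumentEvaluations expects: (context.span_id, document_position)
relevance_eval_df = pd.DataFrame(
    {"score": [1, 0], "label": ["relevant", "irrelevant"]},
    index=pd.MultiIndex.from_tuples(
        [("span_abc123", 0), ("span_abc123", 1)],
        names=["context.span_id", "document_position"],
    ),
)

# With a running Phoenix session, this would push the evals into the UI:
# import phoenix as px
# from phoenix.trace import DocumentEvaluations  # import path assumed
# px.log_evaluations(
#     DocumentEvaluations(eval_name="Relevance", dataframe=relevance_eval_df)
# )
```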
I see, thanks. As you're pointing out, I don't know if we currently have an easy way of connecting to a remote session for the purpose of downloading spans. Let me check and get back to you.
Thanks a lot
Or, if you have another alternative for evaluating and persisting within the Docker image (evaluating on the fly), that would also work.
