Hi Arize AI Support Team, I am following the two Colab notebooks below to use Tracing and Evaluations in LlamaIndex with Azure AI Search and Azure OpenAI:
However, I am encountering an issue where the get_spans_dataframe method returns an empty DataFrame, despite spans appearing in the UI. Here's the code snippet I'm using:
spans_df = px.Client(endpoint="http://127.0.0.1:6006").get_spans_dataframe()
Here is a screenshot showing the spans in the UI, and below it a screenshot showing the empty DataFrame result. Could you please help me resolve this issue? Thank you!
I’m running it in a Jupyter notebook on localhost.
Hmmm, I'm not sure what's not working for you. Here's a minimal working example of traces in Phoenix: https://colab.research.google.com/drive/1SbCmD_LBkEbHqOMHmEFJRkZaj-aJXSvo?usp=sharing You can also check whether you have the right parameters set for get_spans_dataframe:
def get_spans_dataframe(
    self,
    filter_condition: Optional[str] = None,
    *,
    start_time: Optional[datetime] = None,
    end_time: Optional[datetime] = None,
    limit: Optional[int] = DEFAULT_SPAN_LIMIT,
    root_spans_only: Optional[bool] = None,
    project_name: Optional[str] = None,
    timeout: Optional[int] = DEFAULT_TIMEOUT_IN_SECONDS,
) -> Optional[pd.DataFrame]:
Otherwise I'm a bit stumped, TBH. We'd probably need someone on the team to jump on a quick call to understand what might be going wrong. Any chance that would be possible?
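Since get_spans_dataframe defaults start_time, end_time, limit, and project_name, one hedged thing to try is making all of them explicit. A sketch (the project name "default" is Phoenix's default project; replace it with whatever name the UI shows):

```python
from datetime import datetime, timedelta, timezone

# Assumption: the empty DataFrame comes from the default time window,
# span limit, or project; these kwargs make all of them explicit.
kwargs = dict(
    start_time=datetime.now(timezone.utc) - timedelta(days=7),
    end_time=datetime.now(timezone.utc),
    limit=1000,
    project_name="default",  # replace with the project name shown in the UI
)

# With the Phoenix server running, you would then call:
# px.Client(endpoint="http://127.0.0.1:6006").get_spans_dataframe(**kwargs)
print(sorted(kwargs))
```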
To help with debugging, can you try running this to see if it works?
from phoenix.trace.dsl import SpanQuery
px.Client().query_spans(SpanQuery().select("input.value", "output.value"))
Weird, how about just the span_id?
from phoenix.trace.dsl import SpanQuery
px.Client().query_spans(SpanQuery().select("span_id"))
That didn't print anything in the console. 😞
I can still see everything in the UI, though.
is the endpoint correct?
are you using a proxy?
No, I am not using a proxy.
Let me try changing the endpoint to use LlamaTrace.
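For reference, here's a hedged sketch of pointing the client at a hosted collector via Phoenix's environment variables instead of the local server (the LlamaTrace URL is an assumption based on its hosted service, and the API key is a placeholder):

```python
import os

# Assumption: using Phoenix's standard environment variables to target
# the hosted LlamaTrace collector; "YOUR_API_KEY" is a placeholder.
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://llamatrace.com"
os.environ["PHOENIX_CLIENT_HEADERS"] = "api_key=YOUR_API_KEY"

# px.Client() created after this point would pick up the hosted endpoint.
print(os.environ["PHOENIX_COLLECTOR_ENDPOINT"])
```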
Farzad S., here’s a small example of adding metadata, plus screenshots showing how to filter for it.
from llama_index.core.llms import MockLLM
from openinference.instrumentation import using_metadata

questions = ["What is the capital of France?", "What is the capital of Germany?"]

if __name__ == "__main__":
    for question in questions:
        # Last word of the question, minus the trailing "?", is the country
        metadata = {"country": question.split()[-1][:-1]}
        # Attach the metadata to every span created inside this context
        with using_metadata(metadata):
            MockLLM().complete(question)
Here we also push the questions to different projects:
from llama_index.core.llms import MockLLM
from openinference.instrumentation import using_metadata
from phoenix.trace import using_project

questions = ["What is the capital of France?", "What is the capital of Germany?"]

if __name__ == "__main__":
    for question in questions:
        country = question.split()[-1][:-1]
        metadata = {"country": country}
        # Route each question's traces to a per-country project
        # and tag its spans with the country as metadata
        with using_project(country), using_metadata(metadata):
            MockLLM().complete(question)
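The `question.split()[-1][:-1]` bit in both examples just pulls out the country: the last word of the question, minus the trailing "?". A quick standalone check:

```python
questions = ["What is the capital of France?", "What is the capital of Germany?"]

# split()[-1] takes the last word ("France?"); [:-1] drops the "?"
countries = [q.split()[-1][:-1] for q in questions]
print(countries)  # ['France', 'Germany']
```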