John G. - I think a filter condition can work for me; let me check. Question: When you say you're trying to filter based on users, is that the user accessing the Phoenix dashboard, or some user parameter you're including in your traces? Answer: Yes, I want to include a user parameter in the trace. I have a separate OpenAI application that answers the questions. The responses are saved in the DB along with the username of whoever asked the question. Now I want to show these saved responses in the Arize trace view, with the usernames in a dropdown filter. Anyone can then select a username from the trace user filter, and only that user's questions and answers will be visualized in the Arize UI.
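For reference, one way the per-user filtering can work, as a minimal sketch rather than an exact Phoenix walkthrough: if each span is tagged with the username in its metadata, the spans pulled from Phoenix can be filtered per user on the client side. The column name `attributes.metadata`, the `"user"` key, and the sample rows below are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical spans dataframe, shaped roughly like what a Phoenix client
# call such as get_spans_dataframe() might return when each trace was tagged
# with a "user" key in its metadata (column name is an assumption).
spans_df = pd.DataFrame(
    {
        "name": ["ChatCompletion", "ChatCompletion", "ChatCompletion"],
        "attributes.metadata": [
            {"user": "alice"},
            {"user": "bob"},
            {"user": "alice"},
        ],
    }
)

def spans_for_user(df: pd.DataFrame, user: str) -> pd.DataFrame:
    """Client-side filter: keep only spans whose metadata names this user."""
    mask = df["attributes.metadata"].apply(lambda m: (m or {}).get("user") == user)
    return df[mask]

alice_spans = spans_for_user(spans_df, "alice")
```

The same idea applies in the Phoenix UI's filter box: once the username is stored on every span, a metadata-based filter condition can narrow the view to one user.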
Hi John G. - Thank you so much for the solution; I appreciate your quick response. I think your solution worked for me. Apologies for the late reply, I was busy integrating the Arize app. Now I have another requirement. This is the most important functionality we want in our Arize app. I have multiple users and want to add a user filter (dropdown) in the tracing UI beside the date filter (dropdown); based on the selected user, I want to show that user's OpenAI responses in tracing. Please let me know how I can add a filter (dropdown) in the tracing UI beside the date filter (dropdown). Please see the screenshots below.
John G. - I can run tracing using the test data at https://storage.googleapis.com/arize-assets/phoenix/datasets/unstructured/llm/context-retrieval/trace.jsonl. I want to use this data for evaluation, but I am getting "No retrieval documents found." and an empty dataframe for queries_df when running:

retrieved_documents_df = get_retrieved_documents(px.Client())
queries_df = get_qa_with_reference(px.Client())

Please take a look at the screenshots below, and at my arize_eval.ipynb Jupyter notebook.
Thanks, John G., I really appreciate your quick response. It is working as expected using OpenAIInstrumentor. I am facing another issue. I have questions and their answers in a Mongo database; the responses were saved there earlier by a different application. Now I want to create a trace from these responses. How can I log the data from Mongo into Arize's tracing and evaluation? Note: the questions and their answers are in the Mongo database. I found that I'll have to use a TraceDataset for tracing, and I was trying it with test data. I am using the test dataset

traces_url = "https://storage.googleapis.com/arize-assets/phoenix/datasets/unstructured/llm/context-retrieval/trace.jsonl"

which creates a trace as expected, but when running the code below

from phoenix.session.evaluation import get_qa_with_reference, get_retrieved_documents
retrieved_documents_df = get_retrieved_documents(px.active_session())
queries_df = get_qa_with_reference(px.active_session())

it says "No retrieval documents found.", the retrieved_documents_df does not have the expected "reference" and "document_score" columns, and there is no data in queries_df.
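For the Mongo-to-trace question, a minimal sketch of the first step, turning stored question/answer documents into span-like records. The field names below loosely mirror the trace.jsonl test data and OpenInference-style attribute keys, but the exact column schema Phoenix's TraceDataset expects is not reproduced here and should be checked against the Phoenix docs; all names are assumptions.

```python
import uuid
from datetime import datetime, timezone

# Hypothetical documents as they might come back from the Mongo collection.
mongo_docs = [
    {"username": "alice", "question": "What is covered?", "answer": "Fire and theft."},
]

def to_span_record(doc):
    """Convert one stored Q&A document into a span-like dict (fields assumed)."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "name": "llm",
        "span_kind": "LLM",
        "context": {
            "trace_id": uuid.uuid4().hex,
            "span_id": uuid.uuid4().hex[:16],
        },
        "start_time": now,
        "end_time": now,
        "attributes": {
            "input.value": doc["question"],
            "output.value": doc["answer"],
            "metadata": {"user": doc["username"]},
        },
    }

records = [to_span_record(d) for d in mongo_docs]
# From here, the records could be loaded into a dataframe and wrapped in a
# TraceDataset for Phoenix to visualize; see the Phoenix docs for the exact
# schema before relying on these field names.
```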
Traceback:
ValueError Traceback (most recent call last)
Cell In[45], line 24
12 hallucination_eval_df, qa_correctness_eval_df = run_evals(
13 dataframe=queries_df,
14 evaluators=[hallucination_evaluator, qa_correctness_evaluator],
15 provide_explanation=True,
16 )
17 relevance_eval_df = run_evals(
18 dataframe=retrieved_documents_df,
19 evaluators=[relevance_evaluator],
20 provide_explanation=True,
21 )[0]
23 px.Client().log_evaluations(
---> 24 SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_eval_df),
25 SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_eval_df),
26 DocumentEvaluations(eval_name="Relevance", dataframe=relevance_eval_df),
27 )
File <string>:6, in __init__(self, eval_name, dataframe)
File C:\.venvs\openai_venv\lib\site-packages\phoenix\trace\span_evaluations.py:117, in Evaluations.__post_init__(self)
115 def __post_init__(self) -> None:
116 dataframe = (
--> 117 pd.DataFrame() if self.dataframe.empty else self._clean_dataframe(self.dataframe)
118 )
119 object.__setattr__(self, "dataframe", dataframe)
File C:\.venvs\openai_venv\lib\site-packages\phoenix\trace\span_evaluations.py:141, in Evaluations._clean_dataframe(self, dataframe)
139 # Validate that the dataframe contains result columns of appropriate types.
140 if not self.is_valid_result_columns(dataframe.dtypes):
--> 141 raise ValueError(
142 f"The dataframe must contain one of these columns with appropriate "
143 f"value types: {self.result_column_names.keys()} "
144 )
146 # Un-alias to the preferred names.
147 preferred_names = [self.unalias(name) for name in dataframe.index.names]
ValueError: The dataframe must contain one of these columns with appropriate value types: dict_keys(['score', 'label', 'explanation'])

Hi Jason, I hope you are doing well. I tried to run the notebook you shared with me. I am getting None for the labels, scores, and explanations. I am using AzureOpenAI with Azure AD credentials. Please check the screenshots and the error message. Also, I want to use the AzureOpenAI client rather than llama_index. Please take a look at the code below. I am using openai==1.29.0:

from openai import AzureOpenAI

openai_client = AzureOpenAI(
    azure_endpoint=f"https://{AZURE_OPENAI_SERVICE}.openai.azure.com",
    api_version=OPENAI_API_VERSION,
    azure_ad_token_provider=token_provider,
)

response = openai_client.chat.completions.create(
    model=CHAT_MODEL_DEPLOYMENT_NAME,  # model = "deployment_name"
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": """### Insurance Policy Information Extraction System"""},
    ],
)
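For context on the ValueError above: it is raised when the evaluation dataframe passed to SpanEvaluations contains none of the columns score, label, or explanation (which is what happens when the evaluators return None for everything). A hand-built example of a dataframe that would satisfy the check; the span IDs, index name, and values are made up for illustration.

```python
import pandas as pd

# An eval dataframe with the columns named in the error message, indexed by
# span ID. The index name "context.span_id" and all values are assumptions.
hallucination_eval_df = pd.DataFrame(
    {
        "label": ["factual", "hallucinated"],
        "score": [1.0, 0.0],
        "explanation": [
            "Answer is grounded in the reference text.",
            "Answer invents a clause not present in the reference.",
        ],
    },
    index=pd.Index(["span-id-1", "span-id-2"], name="context.span_id"),
)

# At least one of these columns must be present, per the ValueError.
assert {"score", "label", "explanation"} & set(hallucination_eval_df.columns)
```

If the evaluators produce only None values, checking the underlying model call (e.g. whether the AzureOpenAI deployment returns content at all) is a reasonable first step before logging the evals.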
I am using arize-phoenix==4.5.0
I need a sample queries_df for https://docs.arize.com/phoenix/evaluation/evals#:~:text=dataframe%3Dqueries_df%2C. Can you please point me to a URL where I can download a sample dataset, so I can understand the data structure required for SpanEvaluations?
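For what it's worth, a hand-built stand-in for the shape queries_df typically takes when produced by get_qa_with_reference: input, output, and reference columns indexed by span ID. The index name and all values below are made up for illustration and should be verified against the Phoenix docs.

```python
import pandas as pd

# Illustrative queries_df: one row per Q&A span, with the retrieved reference
# text the evaluators compare the answer against (values are made up).
queries_df = pd.DataFrame(
    {
        "input": ["What does the policy cover?"],
        "output": ["It covers fire and theft."],
        "reference": ["Section 2: coverage includes fire and theft damage."],
    },
    index=pd.Index(["span-id-1"], name="context.span_id"),
)
```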
