hi, I would like to customize how my traces are displayed in the UI: I want the last LLM call's output to be visible in the output column, and in the feedback column I would like to display the combined evaluations from the spans in a particular trace. Which df should I pull and update/send back to achieve this? With get_spans_dataframe I can most likely set the output column, but I don't see a feedback column, and how would I update it?
To customize how traces are displayed in the Phoenix UI, you need to work with the spans DataFrame and the evaluations DataFrame. Here's a concise breakdown:

1. **Retrieve Spans DataFrame**: Use `get_spans_dataframe` to get the spans DataFrame.
2. **Update Output Column**: Modify the output column to include the last LLM call output.
3. **Combine Evaluations**: Aggregate evaluations from spans within a trace and update the feedback column. The feedback column can be customized by adding annotations to spans. According to this pull request, the feedback column now displays all span annotations.

Here's a step-by-step approach:

1. **Retrieve and Update Spans DataFrame**:
```
import phoenix as px

client = px.Client()
spans_df = client.get_spans_dataframe()

# Assuming an 'output' column exists and you want to update it with the last
# LLM call output (get_last_llm_output is a placeholder for your own logic)
spans_df['output'] = spans_df.apply(lambda row: get_last_llm_output(row), axis=1)
```

2. **Combine Evaluations**:
```
def combine_evaluations(trace_id):
    # Logic to combine evaluations for spans within a trace
    evaluations = get_evaluations_for_trace(trace_id)
    return combine(evaluations)

spans_df['feedback'] = spans_df['trace_id'].apply(combine_evaluations)
```

3. **Log Updated Evaluations**:
```
from phoenix.trace import SpanEvaluations

client.log_evaluations(SpanEvaluations(eval_name="Custom Feedback", dataframe=spans_df))
```

Would you like more detail on any specific part?

**Sources:**
- [feat(annotations): add feedback column to spans / traces tables with all annotations](https://github.com/Arize-ai/phoenix/pull/4100)
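The `combine_evaluations` helper above is left abstract; a concrete sketch using pandas might look like the following. The column names `trace_id`, `eval_name`, and `score` are assumptions about how your span-level evaluations frame is shaped, and the sample data is made up:

```python
import pandas as pd

# Hypothetical span-level evaluations: one row per evaluated span.
evals_df = pd.DataFrame({
    "trace_id": ["t1", "t1", "t2"],
    "eval_name": ["correctness", "toxicity", "correctness"],
    "score": [1.0, 0.0, 0.5],
})

# Combine per trace: the mean score plus a readable summary of which evals ran.
feedback = evals_df.groupby("trace_id").agg(
    mean_score=("score", "mean"),
    evals=("eval_name", lambda names: ", ".join(sorted(names))),
)

print(feedback.loc["t1", "mean_score"])  # 0.5
print(feedback.loc["t1", "evals"])       # correctness, toxicity
```

Any aggregation (max severity, worst label, concatenated explanations) can be swapped in for the mean, depending on what you want the feedback column to summarize.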
To add Feedback to your traces, you can follow our evals tutorial here: https://docs.arize.com/phoenix/evaluation/evals#id-2.-export-data-and-launch-phoenix
Dustin N. I execute this function to set up instrumentation:
```python
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from openinference.semconv.resource import ResourceAttributes
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import SimpleSpanProcessor


def setup_tracer(project_name, local=True):
    if local:
        # Phoenix receiver address (local) - requires starting the Phoenix server in the CLI
        endpoint = "http://127.0.0.1:6006/v1/traces"
    else:
        endpoint = "http://llamatrace.com/v1/traces"  # hosted
    resource = Resource(attributes={
        ResourceAttributes.PROJECT_NAME: project_name
    })
    tracer_provider = trace_sdk.TracerProvider(resource=resource)
    tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
    LlamaIndexInstrumentor().instrument(
        tracer_provider=tracer_provider,
        use_legacy_callback_handler=True,  # important so traces get displayed properly
    )
```
I logged span evals already - they are shown in the spans tab for the specific spans, just not in the traces tab sadly, which would also be desired
Hm, it's surprising that the outputs aren't showing up, are you using streaming outputs?
nope
Ah, can you try not using the legacy instrumentor?
yes, then the output is in the output column, however then the spans are not grouped into the trace. that was also the initial reason for using the legacy instrumentor
I see, thank you. Could you elaborate a little on what kind of LlamaIndex functionality you're using and how you expect the spans to be grouped? Sometimes spans will not be grouped if they have not yet been closed (like when a stream isn't fully consumed). For the time being, you can use OTel to manually start a containing span if you want to group related bits of functionality in your code
