Hi Mikyo, happy new year to you too! Thank you for your kind help 🙂 I have a structured-output LLM message with text fields, and I'd like to be able to render those fields as markdown instead of raw text. The issue is that Phoenix displays structured responses as JSON/dictionaries. That's fine for most use cases, but with long generations we lose a lot of readability.

If we're talking features, being able to focus on a text element and open a modal/side panel with the text prettified as markdown would be super useful. Long-text truncation with an expander/"read more" ellipsis would also really help when manually reviewing long conversations. Another piece of feedback: with long text generations I find it hard to quickly see the generation metadata (reasoning tokens, output tokens, etc.). I spend a lot of time scrolling, and Ctrl+F often struggles to find the text I'm searching for.
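For concreteness, here's roughly the kind of structured output I mean (a minimal sketch; the schema and field names are made up):

    from pydantic import BaseModel

    class Report(BaseModel):
        title: str
        # Long, markdown-formatted text: this is the field that currently
        # shows up as an escaped string inside one big JSON blob.
        body_markdown: str
        sources: list[str]

In the trace view, body_markdown is rendered as a raw escaped string inside the JSON, which is what makes long generations hard to review.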
Hi team! I'm working with Arize Phoenix for observability and I'm wondering: what's the recommended way to render markdown text in a pretty format when it's part of an LLM's structured output? I'm using LangChain with auto_instrument=True in phoenix.otel.register to register a provider. Currently I'm getting raw JSON in the traces, but I'd like to display nicely formatted markdown instead.
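For reference, the provider is registered roughly like this (a minimal sketch; the project name is made up):

    from phoenix.otel import register

    tracer_provider = register(
        project_name="my-agent",
        auto_instrument=True,  # picks up the installed LangChain instrumentor
    )

I've explored a few workarounds:

Option 1: Tool-based tracing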
    from langchain_core.tools import tool

    @tool
    def trace_pretty(content: str) -> str:
        """Trace some content in a pretty way."""
        return content

    def run_agent(...):
        result = agent.invoke(...)
        structured_output = result["structured_response"]
        # Invoke the no-op tool so the text shows up as a dedicated tool span
        trace_pretty.invoke(structured_output.text_field)

This works but feels like a workaround that pollutes the traces.

Option 2: Direct span tracing
    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)

    def run_agent(...):
        # Bind the span so its output can be set below
        with tracer.start_as_current_span(
            "run_agent", openinference_span_kind="tool"
        ) as span:
            result = agent.invoke(...)
            structured_output = result["structured_response"]
            span.set_output(structured_output.text_field)

This creates a new root span that's decoupled from the current tracing context, and it also feels like a workaround (we declare a tool span just to get pretty traces...).
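On top of Option 2, a variant I sketched (not sure it's the intended usage): set the OpenInference output attributes on the span directly, so the long text is stored as plain text rather than JSON. This assumes the openinference-semantic-conventions package; agent, structured_response, and text_field are the same as above, and the run_agent signature is made up:

    from openinference.semconv.trace import OpenInferenceMimeTypeValues, SpanAttributes
    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)

    def run_agent(inputs: dict):  # hypothetical signature
        with tracer.start_as_current_span("run_agent") as span:
            result = agent.invoke(inputs)  # `agent` as in the options above
            structured_output = result["structured_response"]
            # Store the long text as this span's output, flagged as plain
            # text instead of JSON, so the trace view shows readable text.
            span.set_attribute(SpanAttributes.OUTPUT_VALUE, structured_output.text_field)
            span.set_attribute(
                SpanAttributes.OUTPUT_MIME_TYPE, OpenInferenceMimeTypeValues.TEXT.value
            )
            return structured_output

It still feels like manual plumbing around the auto-instrumentation, though.

Option 3: Custom SpanProcessor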
    from phoenix.otel import SimpleSpanProcessor

    class MySpanProcessor(SimpleSpanProcessor):
        def on_end(self, span):
            # Span is read-only here, can't modify it
            super().on_end(span)

This doesn't work because the span is read-only in on_end, and in on_start the span doesn't yet contain the LLM answer. Is there a cleaner, officially supported way to achieve this? Thanks for any guidance!
Hi, I'm Tristan. I use Arize Phoenix as the AI tracing stack at the company where I work as a freelance AI engineer. Based in France 🇫🇷 Thanks for this great product!
