Rendering Markdown in LLM Outputs with Arize Phoenix and Langchain
Hi team! I'm working with Arize Phoenix for observability, and I'm wondering: what's the recommended way to render markdown text in a pretty format when it's part of an LLM's structured output? I'm using Langchain with `auto_instrument=True` in `phoenix.otel.register` to register a tracer provider. Currently I'm getting raw JSON in the traces, but I'd like to display nicely formatted markdown instead. I've explored a few workarounds.

**Option 1: Tool-based tracing**
```python
from langchain_core.tools import tool

@tool
def trace_pretty(content: str) -> str:
    """Trace some content in a pretty way."""
    return content

def run_agent(...):
    result = agent.invoke(...)
    structured_output = result["structured_response"]
    trace_pretty.invoke(structured_output.text_field)
```

This works, but it feels like a workaround that pollutes the traces.

**Option 2: Direct span tracing**
```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def run_agent(...):
    with tracer.start_as_current_span("run_agent", openinference_span_kind="tool") as span:
        result = agent.invoke(...)
        structured_output = result["structured_response"]
        span.set_output(structured_output.text_field)
```

This creates a new root span that's decoupled from the current tracing context, and it also feels like a workaround (we declare a tool span only to get pretty traces...).

**Option 3: Custom SpanProcessor**
```python
from phoenix.otel import SimpleSpanProcessor

class MySpanProcessor(SimpleSpanProcessor):
    def on_end(self, span):
        # Span is read-only here; it can't be modified
        ...
```

This doesn't work because the span is read-only in `on_end`, and in `on_start` the span doesn't yet contain the LLM answer. Is there a cleaner, officially supported way to achieve this? Thanks for any guidance!
