Hi Team, I'm trying to set up tracing using agno. I've successfully managed to set it up for agents that use OpenAI models, but for AWS Bedrock models, when streaming is enabled, the output is not "prettified." Specifically, with streaming enabled, I receive text chunks (as shown in the attached picture). When I disable streaming, the traces look correct and display the actual LLM output as expected, rather than JSON. Can someone please help me resolve this? P.S. Great booth at AI Engineer Paris; I'm here because of it!
When using Agno with AWS Bedrock models and streaming enabled, the trace output is not "prettified" because the instrumentation currently serializes and attaches all input/output, including streaming chunks, as raw, deeply nested metadata. This results in traces filled with JSON chunks rather than a clean, aggregated LLM output. There are no built-in configuration options to change this behavior; only non-streaming responses are aggregated and displayed as expected. To improve the trace output for streaming, you would need to preprocess or aggregate the streamed chunks yourself before they reach the instrumentor, or patch the serialization logic locally. This is a known limitation and a recurring pain point for users of AgnoInstrumentor with Bedrock streaming models, and there is no official workaround yet; feature requests for better streaming output handling are open with the maintainers (source).

Would you like more detail on possible workarounds or how to preprocess/aggregate the streaming output before tracing?

Sources:
- https://github.com/Arize-ai/openinference/issues/2045
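A minimal sketch of the "aggregate before tracing" idea mentioned above. The helper name `aggregate_stream` and the callback are illustrative, not part of the Agno or OpenInference APIs; the callback is where you would attach the joined text to the current span yourself instead of letting the instrumentor serialize each raw chunk:

```python
# Hypothetical helper: pass streamed chunks through unchanged while
# accumulating them, then hand the full text to a callback once the
# stream is exhausted. Pure stdlib; not an Agno/OpenInference API.
from typing import Callable, Iterable, Iterator


def aggregate_stream(
    chunks: Iterable[str],
    on_complete: Callable[[str], None],
) -> Iterator[str]:
    """Yield each chunk as-is, then call on_complete with the joined text.

    In practice on_complete would set a span attribute with the
    aggregated output rather than recording every raw chunk.
    """
    parts: list[str] = []
    for chunk in chunks:
        parts.append(chunk)
        yield chunk  # the caller still streams normally
    on_complete("".join(parts))
```

Usage: wrap the Bedrock streaming iterator with this generator so downstream code still sees a stream, while the callback receives one clean string at the end.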
💡 Hint: Mention RunLLM in the thread for followups.
I have the following packages installed in my requirements.txt
arize-phoenix==11.37.0
openinference-instrumentation-agno==0.1.15
arize-phoenix-otel==0.13.1
opentelemetry-exporter-otlp==1.37.0
openinference-instrumentation-openai==0.1.33
openinference-instrumentation-bedrock==0.1.27
openinference-instrumentation-anthropic==0.1.19

Correct: there is currently no built-in solution in Agno or OpenInference to aggregate and prettify streaming output from AWS Bedrock models in traces; streamed chunks are serialized as-is, resulting in verbose or raw JSON output (source). You would need to implement custom aggregation or patch the instrumentor if you want improved output formatting.

Would you like more detail on how to implement a workaround?

Sources:
- https://github.com/Arize-ai/openinference/issues/2045
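For the "patch the instrumentor" route, one generic pattern is to wrap whatever serializer the instrumentation calls and post-process its output. The function below is a sketch of that wrapper pattern only: the chunk shape (`{"text": ...}`) and the idea that you would locate and replace the real serializer inside openinference-instrumentation-agno are assumptions, not documented APIs:

```python
# Illustrative monkey-patch pattern. The target serializer and the
# streamed-chunk shape ({"text": ...}) are assumptions; inspect the
# installed openinference-instrumentation-agno source to find the
# actual function to wrap.
import functools
import json


def prettify_serializer(original):
    """Wrap a serializer so lists of streamed text chunks collapse to one string."""

    @functools.wraps(original)
    def wrapper(value):
        serialized = original(value)
        try:
            data = json.loads(serialized)
            # If the payload looks like a list of streamed chunk dicts,
            # join their text fields into a single readable string.
            if isinstance(data, list) and data and all(
                isinstance(c, dict) and "text" in c for c in data
            ):
                return "".join(c["text"] for c in data)
        except (TypeError, ValueError):
            pass
        return serialized  # anything else passes through untouched

    return wrapper
```

You would then assign the wrapped function back onto the module that defines the original serializer before enabling the instrumentor.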
Hey Andrei I., a human here 🙂. We co-maintain the instrumentation with the Agno team under this repo. Definitely file an issue with us and I can raise it with their team: https://github.com/Arize-ai/openinference. Thanks for reporting!
No problem, keep the feedback coming.
Mikyo, here is the issue: https://github.com/Arize-ai/openinference/issues/2236 (in case sending it here speeds up the process a bit 😂). Thanks again for the help.
