I auto-instrumented my LangGraph workflow using Phoenix's OpenInference. After running the workflow a couple of times, I opened my project in Phoenix's web console. All the traces were jumbled up and it was very hard to understand what was going on. I learned that a trace contains multiple spans. How do I instrument my workflow so that each execution of the workflow creates a single trace on the Phoenix dashboard, and when I click that trace, I can see all the node executions and LLM calls as spans?
To ensure each execution of your LangGraph workflow appears as a single trace (with all LLM and tool calls as child spans) in the Phoenix dashboard, you must invoke the workflow in a way that maintains a single root span per run. With Phoenix's OpenInference auto-instrumentation, this is achieved by calling your workflow as a callable (e.g., `workflow(args)`) rather than using `.forward(args)`, which can flatten or split traces. This groups all node executions and LLM calls as spans under one trace per workflow run, making the trace tree clear in the UI. No manual instrumentation is required if you follow this pattern and have auto-instrumentation enabled for LangGraph/LangChain.

If you still see jumbled traces, double-check that you are not breaking context propagation (e.g., by using non-callable invocation patterns, or by running nodes in separate processes without propagating context). For more, see the LangGraph Tracing guide and this Slack thread.

Would you like more detail or a code example?

Sources:
- LangGraph Tracing - Phoenix Docs
- Slack Discussion: dspy/langgraph trace grouping
💡 Hint: Mention RunLLM in the thread for followups.
Hi Chandrahaas J., can you share a code snippet?
I'm following since we're having the same problem.
hey Xander S., Samantha Z. In LangGraph we execute each workflow with certain thread IDs. These thread IDs can be used to create a session, which will group all the traces session-wise.
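One way to wire this up, assuming the Phoenix LangChain integration picks up a `session_id` from the run metadata (an assumption based on the Phoenix session docs; `graph` is a hypothetical name for your compiled LangGraph app):

```python
# Sketch: reuse the LangGraph thread_id as the Phoenix session id,
# so each conversation's traces are grouped into one session in the UI.
thread_id = "user-123"  # hypothetical example value

config = {
    # LangGraph uses this key for checkpointing/thread continuity.
    "configurable": {"thread_id": thread_id},
    # OpenInference reads session_id from run metadata (assumed key).
    "metadata": {"session_id": thread_id},
}

# result = graph.invoke({"messages": [...]}, config=config)
```

Each `invoke` with the same `thread_id` then lands in the same Phoenix session, while still producing one trace per execution.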
Thanks Chandrahaas J.. I'm not sure what the issue might be. If you can provide a code snippet that reproduces the issue, that would be a good start.
