I am following the LangGraph integration tutorial with auto-instrumentation. I see the traces fine, but the graph shows up as one trace and the LLM/tool invocations (from within graph nodes) as separate traces. How do I connect everything that runs as part of graph.invoke() into a single trace?
To connect all operations within a graph.invoke() call as a single trace, you need to ensure that the trace context is propagated correctly across all components of your application. This involves setting up the tracer provider and ensuring that all spans are part of the same trace. According to the LangGraph integration documentation, you should initialize the LangChainInstrumentor before your application code to ensure that all invocations are traced correctly. Here's a basic setup:

```python
from openinference.instrumentation.langchain import LangChainInstrumentor
from phoenix.otel import register

# Configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    endpoint="http://localhost:6006/v1/traces",
)

# Instrument LangChain
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
```

Ensure that the LangChainInstrumentor is initialized before any graph or LLM operations are invoked. This setup should help in capturing all operations under a single trace. If you still face issues, verify that the trace context is being correctly propagated across all components and that there are no manual span creations that might be causing separate traces.

Would you like more detail on any specific part?

Sources:
- LangGraph integration documentation
💡 Hint: Mention RunLLM in the thread for followups.
That is correct, [private user]. In the Traces tab, I see 3 traces for a single graph invocation: one for LangGraph, one for the LLM, and another for the Tool.
yes, no manual instrumentation. I tried adding the tracer.tool annotation but it still ends up as a separate trace. I am clearly missing something basic.

```
├── af-agent-sdk[bots, langchain, langgraph, tools] v0.6.10
│   ├── langgraph v0.3.21 (extra: langgraph)
│   │   ├── langgraph-checkpoint v2.0.23
│   │   ├── langgraph-prebuilt v0.1.7
│   │   │   └── langgraph-checkpoint v2.0.23 (*)
│   │   └── langgraph-sdk v0.1.60
│   └── openinference-instrumentation-langchain v0.1.37 (extra: langgraph) (*)

➜ genai-generic-agent git:(master) uv tree | grep -i arize
Resolved 193 packages in 9ms
├── arize-phoenix-otel v0.9.0
├── arize-phoenix v8.20.0
│   ├── arize-phoenix-client v1.1.0
│   ├── arize-phoenix-evals v0.20.4
│   └── arize-phoenix-otel v0.9.0 (*)
└── arize-phoenix v8.20.0 (group: dev) (*)
```
No, I have a custom graph. But when it was not working, I did come across that notebook. I didn't see anything different being done in it as far as instrumentation is concerned.
Can you point me to that base LangGraph example where it is working? I can try to reproduce it and see if I am doing anything different.
In my example, I am not doing bind_tools on the chat model because I am implementing ReWOO. So I understand if tools are not part of the same trace, but I don't understand why the chat model invocations are not part of the same trace either.
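(Editorial sketch, not from the thread: one common cause of detached LLM traces in custom LangGraph nodes is calling the model without forwarding the node's config, since LangChain's callback handlers — including the tracing instrumentation — travel inside that config. The names `make_model_node` and `call_model` below are made up for illustration; in a real app `llm` would be e.g. a `ChatOpenAI` instance rather than the injected stand-in used here.)

```python
# Hypothetical sketch of config propagation inside a LangGraph-style node.
# Forwarding `config` into llm.invoke(...) is what lets callback-based
# tracers parent the LLM span under the surrounding graph trace.

from typing import Any, Callable, Dict


def make_model_node(llm: Any) -> Callable[[Dict, Dict], Dict]:
    """Build a LangGraph-style node: (state, config) -> state update."""

    def call_model(state: Dict, config: Dict) -> Dict:
        # Forwarding `config` is the key step: dropping it detaches the
        # LLM invocation from the surrounding graph trace.
        response = llm.invoke(state["messages"], config=config)
        return {"messages": state["messages"] + [response]}

    return call_model
```

In an actual graph this node would be registered with `graph.add_node("model", call_model)`; LangGraph passes the per-invocation config as the second positional argument when the node signature accepts one.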
I see the LLM and LangGraph as two separate trace hierarchies for a single run.
I am using the ChatOpenAI model from LangChain.
No, not seeing the LLM spans within the LangGraph trace. Let me see if I can share a quick screenshot.
4 traces for a single graph run. The main LangGraph trace has multiple spans for the different graph nodes. A couple of these graph nodes make LLM/tool calls, and those show up as 3 separate traces. When I expand the LangGraph trace, I don't see any LLM invocation span within it.
I ask because one thing that could cause this would be multiple instrumentation packages in your environment that are both attempting to instrument your LLM calls.
Let me check this
Not another tracing instrumentation, but I had another callback registered with LangGraph. I tried disabling it; still the same issue.
yes, correct
