Good morning! I just saw this fantastic news and video from John G. this morning: https://www.linkedin.com/posts/johngilhuly_connect-otel-tracing-between-mcp-clients-activity-7319120266809298944-efRm/ I tried to set it up myself this morning, but it doesn't seem to be working for me. I am still getting two separate traces from my client and my server. The main differences I can spot are that I am:
Using Langchain with Bedrock instead of OpenAI
Using sse instead of stdio for my MCP server mode.
Mainly I just wanted to check whether I'm trying something unsupported before I go down any troubleshooting rabbit holes. Thanks!
To connect OpenTelemetry (OTEL) tracing between your client and server using LangChain with Bedrock, ensure that you have correctly set up the tracing instrumentation for both the client and the server. According to the LangChain documentation, you should use the LangChainInstrumentor to instrument your LangChain application. For Bedrock, you can use the BedrockInstrumentor as described in the Bedrock documentation.

The use of sse instead of stdio for your MCP server mode should not inherently cause issues with tracing, but ensure that your tracing context is correctly propagated across service boundaries. This typically involves passing the trace context in HTTP headers (or whatever communication protocol your client and server use).

Would you like more detail on setting up tracing for LangChain and Bedrock, or on ensuring trace context propagation?

Sources:
- Bedrock documentation
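For reference, a minimal setup sketch with `arize-phoenix-otel` and the OpenInference LangChain instrumentor might look like the following. This is an illustrative configuration, not your exact code: the project name is hypothetical, and it assumes a Phoenix collector reachable at the default local endpoint.

```python
# Sketch: register an OTel tracer provider with Phoenix, then instrument
# LangChain so chain/LLM calls emit spans. Assumes arize-phoenix-otel and
# openinference-instrumentation-langchain are installed.
from phoenix.otel import register
from openinference.instrumentation.langchain import LangChainInstrumentor

# Use the same project_name in both the MCP client and the MCP server
# processes so their spans land in the same Phoenix project.
tracer_provider = register(
    project_name="mcp-tracing-demo",             # hypothetical project name
    endpoint="http://localhost:6006/v1/traces",  # default local Phoenix endpoint
)

# Instrument LangChain (and therefore LangGraph) against that provider.
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
```

Running this in both processes is what determines whether client and server spans at least appear in one project, which is a prerequisite for seeing them joined into one trace.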
Thanks! If it helps at all, here are my current poetry dependencies:
python = ">=3.12,<3.14"
langchain = "^0.3.23"
langgraph = "^0.3.31"
fastmcp = "^2.2.0"
boto3 = "^1.37.37"
langchain-aws = "^0.2.21"
langchain-mcp-adapters = "^0.0.9"
grandalf = "^0.8"
fastapi = "^0.115.12"
openinference-instrumentation-langchain = "^0.1.41"
openinference-instrumentation-mcp = "^1.1.0"
arize-phoenix-otel = "^0.9.2"
mcp = "^1.6.0"
openinference-semantic-conventions = "^0.1.17"

Hey Grey L. - context propagation should work for client-to-server communication (we haven't solved the server-to-client side yet). LangChain / LangGraph is a bit of a strange beast in that span creation is not controlled by us: it's part of the LangChain tracer, which doesn't propagate context as well as natively instrumented packages do, so you might see issues there. Do you see spans for your MCP client and server under the same project? If so, I think you probably have the setup right, and I would guess the LangChain tracer is the culprit. This is the line in question (https://github.com/Arize-ai/openinference/blob/6f8ec172647c48d9a15218cf248651df37f[…]angchain/src/openinference/instrumentation/langchain/_tracer.py) - I can bring it up with the team to see if there's a way we can enable it. Thanks for trying this out and providing feedback!
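For background on what "propagating context" means here: client-to-server propagation in OTel works by injecting a W3C `traceparent` header on the outgoing request and extracting it on the server, so the server's spans share the client's trace ID. Below is a stdlib-only sketch of that header format; it is illustrative, not the actual `openinference-instrumentation-mcp` code (real applications should use an OTel propagator rather than formatting headers by hand).

```python
# Sketch of W3C Trace Context propagation over HTTP headers.
import secrets

def make_traceparent(trace_id: str, span_id: str, sampled: bool = True) -> str:
    """Build a W3C traceparent header: version-trace_id-parent_id-flags."""
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header: str) -> dict:
    """Extract the trace context a server would use to parent its spans."""
    version, trace_id, parent_id, flags = header.split("-")
    return {
        "trace_id": trace_id,    # 16-byte id shared by client and server spans
        "parent_id": parent_id,  # the client span the server span nests under
        "sampled": flags == "01",
    }

# Client side: generate ids (normally the SDK's job) and inject the header.
trace_id = secrets.token_hex(16)  # 32 hex chars
span_id = secrets.token_hex(8)    # 16 hex chars
headers = {"traceparent": make_traceparent(trace_id, span_id)}

# Server side: extract the context. Spans created here reuse trace_id,
# which is what makes client and server show up as one trace instead of two.
ctx = parse_traceparent(headers["traceparent"])
assert ctx["trace_id"] == trace_id
```

The failure mode described above is that the LangChain tracer creates spans without picking up this extracted context, so server-side spans start a fresh trace even when the header arrives correctly.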
Gotcha - thanks Grey L. - I'll bring it up with a few folks and see if we can't solve it elegantly somehow. Thanks for your patience! https://github.com/Arize-ai/openinference/issues/1544
Sounds great! Let me know if there is anything I can do to help!
