RunLLM How can I orchestrate opentelemetry / openinference propagation? I want to trace across services
To trace across services with OpenTelemetry/OpenInference, you must propagate the trace context (trace ID and span ID) between services, typically via HTTP headers. The calling service injects the current trace context into outgoing request headers, and the receiving service extracts that context to continue the trace, so all spans are grouped under a single trace in Phoenix. This is done with OpenTelemetry's `inject` and `extract` functions in your client and server code, respectively. For Python, see the following example:

```python
# Client: inject the current trace context into outgoing headers
import requests
from opentelemetry.propagate import inject

headers = {}
inject(headers)
requests.post("http://service/api", headers=headers, json=payload)
```

```python
# Server: extract the incoming context and continue the trace under it
from opentelemetry.propagate import extract

context = extract(request.headers)
with tracer.start_as_current_span("service_span", context=context):
    ...  # service logic
```

This approach works for any number of services and is the standard pattern for distributed tracing with Phoenix and OpenInference. For more, see the manual instrumentation guide and the Slack discussion linked below.

Would you like more detail or a language-specific example?

Sources:
- https://arize.com/docs/phoenix/tracing/how-to-tracing/setup-tracing/custom-spans#radix-«R25mjtt9lffffffb»
- https://arize-ai.slack.com/archives/C04R3GXC8HK/p1724796536.922089
💡 Hint: Mention RunLLM in the thread for followups.
No, propagation will not group spans into a single trace if each service logs to a different project; each project is a separate container for traces, so the trace will be split across projects. Spans are not duplicated between projects: each span appears only in the project it was logged to, and traces are not merged across projects in Phoenix. To get a unified trace across services, all services must log to the same project. For more, see Phoenix tracing concepts and GitHub Issue #3375.

Would you like more detail or guidance on project configuration?

Sources:
- https://github.com/Arize-ai/phoenix/issues/3375
- Phoenix tracing concepts
Hey Ben M., the concept you're looking for in OTel is called "baggage" https://opentelemetry.io/docs/concepts/signals/baggage/
What a name 😂 I'll look into it! Do you know offhand if spans will appear in both projects?
In terms of different services having different projects, I am actually not certain what will happen
Gotcha 👍
Maybe you can give it a try and let us know if something weird happens
Yeah, I'll set up a quick test and report back. Should be easy to validate locally
We probably expect them to share the same project.
Is my guess
I set up a quick test:
I call service a and start a span, hello-world-span
This service calls service b and propagates baggage
service b starts its own span in context, api2_span, and returns data to service a
Service A calls another local function, do_work which is itself a chain
Result: A service_a project does not get created and instead all work falls under service_b
If I use dangerously_using_project from service b, I can get everything to log under the original project though 🙂
Kind of an interesting paradigm. I would love it if the spans all ended up under the originating project in cases of propagation, but it's nice knowing that we can do this at all
