Does Phoenix support tracing spans to a single top-level trace_id for an initial application call that requires multiple service calls, each deployed in their own infrastructure? For example,
A user hits send on a chat bot application
Goes to main backend service that hits 1) moderation service 2) search + answer service
The main backend, moderation service, and search + answer service all emit Phoenix spans
Currently they all go to different top-level traces, each with their service-specific spans. I want them all wrapped under one trace. How can I achieve this? Is it even possible?
Yes, Phoenix supports tracing spans to a single top-level trace_id for an initial application call that requires multiple service calls. This can be achieved by manually creating a top-level span and invoking your service calls within that span. You can use the start_as_current_span method to create a top-level span and ensure all subsequent spans are nested under it.

Here's an example of how to create a top-level span and nest other spans under it:

```
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("top_level_trace") as span:
    span.set_attribute("openinference.span.kind", "CHAIN")

    # Call moderation service
    with tracer.start_as_current_span("moderation_service") as mod_span:
        # moderation service logic
        pass

    # Call search + answer service
    with tracer.start_as_current_span("search_answer_service") as search_span:
        # search + answer service logic
        pass
```

This approach ensures that all spans are grouped under a single top-level trace. For more details, refer to the manual instrumentation guide.

Would you like more detail on this process?

Sources:
- Slack Discussion
- manual instrumentation guide
💡 Hint: Mention RunLLM in the thread for followups.
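To make the nesting behavior above concrete, here is a minimal stdlib-only sketch of the idea behind `start_as_current_span`: the active trace_id lives in a context variable, a root span mints a new one, and every nested span inherits it. The names `start_span` and `_current_trace_id` are illustrative inventions for this sketch, not the OpenTelemetry or Phoenix API.

```python
import contextvars
import uuid
from contextlib import contextmanager

# Illustrative only: mimics how start_as_current_span makes child spans
# inherit the active trace_id. Not the real OpenTelemetry implementation.
_current_trace_id = contextvars.ContextVar("trace_id", default=None)

@contextmanager
def start_span(name):
    trace_id = _current_trace_id.get()
    is_root = trace_id is None
    if is_root:
        # Only the root span creates a new trace; children reuse it.
        trace_id = uuid.uuid4().hex
        token = _current_trace_id.set(trace_id)
    span = {"name": name, "trace_id": trace_id}
    try:
        yield span
    finally:
        if is_root:
            _current_trace_id.reset(token)

with start_span("top_level_trace") as root:
    with start_span("moderation_service") as mod:
        pass
    with start_span("search_answer_service") as search:
        pass

# All three spans share the root's trace_id.
assert mod["trace_id"] == root["trace_id"] == search["trace_id"]
```

Within a single process this is essentially all that is needed; the cross-service case discussed next additionally requires carrying that trace_id over the network.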
To achieve tracing across multiple services deployed in separate containers, you need to propagate the trace context (trace ID and span ID) across service boundaries. This involves passing the trace context from the main backend service to the moderation service and the search + answer service via HTTP headers.

Here's a step-by-step approach:

1. Main Backend Service: Start a top-level span and propagate the trace context in the HTTP headers when making REST API calls to other services.
2. Moderation Service and Search + Answer Service: Extract the trace context from the incoming HTTP headers and continue the trace.

**Main Backend Service**

```
import requests
from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("top_level_trace") as span:
    span.set_attribute("openinference.span.kind", "CHAIN")

    # Prepare headers for propagation
    headers = {}
    inject(headers)

    # Call moderation service (payload is your request body)
    response = requests.post("http://moderation-service/api", headers=headers, json=payload)

    # Call search + answer service
    response = requests.post("http://search-answer-service/api", headers=headers, json=payload)
```

**Moderation Service and Search + Answer Service**

```
from opentelemetry import trace
from opentelemetry.propagate import extract
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Setup tracer provider
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint="http://phoenix:6006/v1/traces"))
)
trace.set_tracer_provider(tracer_provider)
tracer = trace.get_tracer(__name__)

# Extract trace context from incoming request headers
def handle_request(request):
    context = extract(request.headers)
    with tracer.start_as_current_span("service_span", context=context) as span:
        span.set_attribute("openinference.span.kind", "CHAIN")
        # Service logic here
        pass
```

This approach ensures that all spans are grouped under a single top-level trace, even when the services are deployed in separate containers.

Would you like more detail on any specific part?

Sources:
- Slack Discussion
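What `inject`/`extract` actually put on the wire is, by default, the W3C Trace Context `traceparent` HTTP header. A stdlib-only sketch of that header's shape, assuming the default W3C propagator (the helper names `build_traceparent` and `parse_traceparent` are hypothetical, for illustration):

```python
import os

# W3C "traceparent" header: version-trace_id-parent_span_id-flags,
# e.g. 00-<32 hex chars>-<16 hex chars>-01. This is what OpenTelemetry's
# default propagator writes into the headers dict via inject().

def build_traceparent(trace_id: str, span_id: str, sampled: bool = True) -> str:
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header: str) -> dict:
    version, trace_id, span_id, flags = header.split("-")
    assert len(trace_id) == 32 and len(span_id) == 16
    return {"trace_id": trace_id, "span_id": span_id, "sampled": flags == "01"}

trace_id = os.urandom(16).hex()  # 32 hex chars
span_id = os.urandom(8).hex()    # 16 hex chars
header = build_traceparent(trace_id, span_id)
assert parse_traceparent(header)["trace_id"] == trace_id
```

Because every downstream service parses the same trace_id out of this header, all their spans land under one top-level trace in Phoenix. In practice you never build the header by hand; `inject`/`extract` do it for you.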
Hey Swaraj R. - you'd have to get OTEL context propagation set up and working. It's a bit out of the scope of Phoenix itself, but I'm sure there are some guides online that can help you: https://opentelemetry.io/docs/concepts/context-propagation/
Ah yes, was looking into this. Thanks! I saw you folks also have a using_session feature open? I
