My spans are not being grouped into a single trace. I have the most up-to-date versions of llama-index, openinference-instrumentation-llama-index, and arize-phoenix... Does anybody have a clue what this could be? Thanks
To ensure spans are grouped under a single trace, you need to propagate the trace context across service boundaries. This means passing the trace context (trace ID and span ID) from one service to another, typically via HTTP headers. Make sure you are using the correct instrumentation setup and that the trace context is being propagated correctly.

1. Main service: start a top-level span and inject the trace context into the HTTP headers when making REST API calls to other services.
2. Other services: extract the trace context from the incoming HTTP headers and continue the trace.

Here's a basic example of how to achieve this:

```python
from opentelemetry import trace
from opentelemetry.propagate import inject, extract
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Set up the tracer provider and export spans to Phoenix
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint="http://phoenix:6006/v1/traces"))
)
trace.set_tracer_provider(tracer_provider)
tracer = trace.get_tracer(__name__)

# Main service: inject the current trace context into outgoing headers
with tracer.start_as_current_span("top_level_trace") as span:
    headers = {}
    inject(headers)
    # Call other services with these headers

# Other services: extract the context and continue the trace
def handle_request(request):
    context = extract(request.headers)
    with tracer.start_as_current_span("service_span", context=context) as span:
        # Service logic here
        pass
```

Ensure that your services are correctly configured to propagate and extract the trace context. If the issue persists, verify that the instrumentation libraries are correctly initialized and that there are no network issues preventing the spans from being sent to Phoenix.

Would you like more detail on any specific part?

Sources:
- https://arize-ai.slack.com/archives/C04R3GXC8HK/p1724796536.922089
- https://arize-ai.slack.com/archives/C04R3GXC8HK/p1722778288.923159
After deeper investigation, I think I found the version that is breaking this. I'm on the latest versions of llama-index:
llama-index = "==0.12.19"
llama-index-storage-index-store-postgres = "==0.4.0"
llama-index-storage-kvstore-postgres = "==0.3.0"
llama-index-utils-workflow = "==0.3.0"
llama-index-vector-stores-postgres = "==0.4.2"

But when I use:

openinference-instrumentation-llama-index = "==3.0.3"

I get the traces correctly displaying in Phoenix (first image). If I update to:

openinference-instrumentation-llama-index = "==3.0.4"

and onwards, I get the result shown in the second image.
