Hi! I just deployed phoenix to a staging environment hooked up to a mastra app using the out of the box tracing, awesome! I'm noticing that my spans seem to be orphaned as I have no root spans compared to the same code I had running locally. Anyone experience this before? Any direction to start looking? Thanks!
Orphaned spans (no root spans) often occur when parent spans are missing or not ingested, which can happen if spans are sent out of order, dropped, or if context propagation is misconfigured across services. In Phoenix, a root span is defined as any span whose parent is not present in the collection; this can be temporary if the parent arrives later. This issue is more common in distributed or multi-service environments, especially if trace context (trace ID, parent span ID) is not properly propagated between services or containers, or if network or exporter configuration differs from your local setup.

To troubleshoot, check that your Mastra app and all services are correctly propagating trace context (e.g., via HTTP headers), and that all spans are being exported to Phoenix without being dropped. Also, ensure that the Phoenix collector endpoint is reachable and consistent across your staging environment. For more details on root span handling and context propagation, see the Phoenix custom spans guide and the Mastra tracing integration.

Would you like more detail on debugging context propagation or exporter setup?

Sources:
- Phoenix custom spans guide
- Mastra tracing integration
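To make that definition concrete, here is a minimal self-contained sketch (hypothetical `Span` shape, not Phoenix's actual ingestion code) of how "root span" falls out of the rule above, and why a dropped parent makes its children look orphaned:

```typescript
// Hypothetical minimal span shape; Phoenix's real model has more fields.
interface Span {
  spanId: string;
  parentSpanId: string | null;
}

// A root span is any span whose parent is not present in the collection.
// If the true root was dropped in transit, its children surface as "roots"
// with dangling parent IDs, i.e. they look orphaned.
function findRootSpans(spans: Span[]): Span[] {
  const ids = new Set(spans.map((s) => s.spanId));
  return spans.filter(
    (s) => s.parentSpanId === null || !ids.has(s.parentSpanId)
  );
}

// Complete trace: one genuine root.
const complete: Span[] = [
  { spanId: "a", parentSpanId: null },
  { spanId: "b", parentSpanId: "a" },
];

// Same trace with the root dropped: "b" now dangles.
const dropped: Span[] = [{ spanId: "b", parentSpanId: "a" }];
```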
💡 Hint: Mention RunLLM in the thread for followups.
nope! just a very basic mastra app using the telemetry setup from the guide the bot linked. I'd guess the difference may be in the mastra dev vs production build, as I'm using the Phoenix docker image with almost the same env setup locally vs deployed
actually, I believe this part of the setup causes the orphaning. It's sort of a larger problem that we have with things like vercel: https://github.com/Arize-ai/openinference/issues/1840
We just released a new version of the Mastra instrumentation today that may help with package version mismatches between our instrumentation and vercel ai sdk (which mastra uses). Can you try upgrading to that?
awesome! stoked to see the traction here
If not, then the linked issue would likely be the culprit, but the fact that you only observe this in production makes me unsure about that
I think this may have to do with how we are using the mastra API. Updated, and it didn't solve my issue in production
any repro would be helpful
I believe so. From your docs:

```typescript
telemetry: {
  serviceName: 'agent-service',
  enabled: true,
  export: {
    type: 'custom',
    exporter: new OpenInferenceOTLPTraceExporter({
      url: `${configuration.config.aiObservabilityUrl}/v1/traces`,
      spanFilter: isOpenInferenceSpan,
    }),
  },
},
```

That doesn't quite hold water as the culprit, though, since I still collect root spans via the mastra dev UI chat vs invoking the API via an external chat
Ah, I think that's the differentiator: hitting the prod API vs using the dev playground
Mastra might be doing something under the hood to gather the chat into a parent span. Any quick suggestions/guidance on how I might do the same via the API?
I do not have anything coming to mind immediately. I think we'll need to investigate further with a GitHub issue
