Hi there! I'm interested in using PydanticAI with Arize (or Phoenix locally). The framework uses model provider libraries behind the scenes (e.g. openai), but also offers its own instrumentation that follows OpenTelemetry GenAI semantic conventions. A few options come to mind:
Add PydanticAI support in OpenInference.
Or, support OTel GenAI in Arize/Phoenix.
Or, recommend underlying library instrumentation (e.g. openinference-instrumentation-openai).
To integrate PydanticAI with Phoenix, you can consider using the existing OpenInference instrumentation for underlying libraries like OpenAI. Phoenix supports OpenTelemetry, which means you can leverage OpenInference's auto-instrumentation for libraries like OpenAI to collect traces and send them to Phoenix. This approach aligns with your third option of recommending underlying library instrumentation.

If you are interested in adding PydanticAI support directly to OpenInference, you might need to contribute to the OpenInference project by creating a new instrumentor for PydanticAI. This would involve following the OpenTelemetry GenAI semantic conventions and ensuring compatibility with Phoenix's tracing capabilities.

Would you like more detail on how to proceed with either of these options?

Sources:
- OpenInference OpenAI Instrumentation
Hi Elliot! Thanks so much for your interest in using Arize+Phoenix! While OpenInference is not currently compatible with the GenAI semantic conventions (our conventions capture more), I believe we do hope one day to provide functionality to consume GenAI traces.
Thanks so much for the help, Dustin. A few quick follow-up questions:
Do you have a timeline for supporting OTel GenAI semantic conventions?
In the meantime, for PydanticAI, do you recommend using instrumentors for underlying provider libraries (e.g. the OpenAI Python library)?
Yeah! For the time being, please use the instrumentors we've built for each specific SDK. If you also need instrumentation that doesn't produce OpenInference spans, you can either:
as long as the traces are OTEL compliant, you can send them to Phoenix anyway (no promises on how they render)
configure those instrumentors with different tracer providers to send them to a different place where you can view them properly
I'm also interested in trace compatibility with other OTel observability tools like Logfire. Adopting a standard is always good in general.
