Arize Phoenix provides tracing and observability for OpenAI API calls (chat completions, completions, and embeddings) by auto-instrumenting your application. The OpenInference OpenAI instrumentor captures every OpenAI API call, including prompt/response pairs and associated metadata, and streams the resulting traces to Phoenix for analysis and evaluation. This lets you visualize, evaluate, and troubleshoot LLM responses in the Phoenix UI, as described in the Phoenix OpenAI integration docs and the Phoenix LLM tracing tutorial.

If you want to use OpenAI's Responses API specifically, Phoenix supports capturing and analyzing all response data via its OpenTelemetry-based tracing, but it does not provide a custom API for the Responses API beyond this instrumentation.

Would you like more detail on how to set up tracing for OpenAI responses in Phoenix?

Sources:
- Phoenix OpenAI integration docs
- Phoenix LLM tracing tutorial
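
A minimal sketch of that setup, assuming a local Phoenix server at the default `http://localhost:6006` endpoint and that the `arize-phoenix-otel`, `openinference-instrumentation-openai`, and `openai` packages are installed (the project name and prompt below are illustrative):

```python
# pip install arize-phoenix-otel openinference-instrumentation-openai openai
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import OpenAI

# Point the OTLP exporter at a running Phoenix instance
# (default local endpoint shown; adjust for your deployment).
tracer_provider = register(
    project_name="my-llm-app",  # hypothetical project name
    endpoint="http://localhost:6006/v1/traces",
)

# Auto-instrument the OpenAI client: every subsequent API call is traced.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

client = OpenAI()  # requires OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, Phoenix!"}],
)
# The prompt/response pair and its metadata now appear as a trace
# in the Phoenix UI under the "my-llm-app" project.
```

After this runs, each call shows up as a span in Phoenix with the model, token counts, and full input/output messages attached.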