Is there a plan to extend the semantics to the Responses API from OpenAI? E.g., I'm using the openai-agents SDK with openinference-instrumentation-openai-agents for a tool that just returns text, but the agents SDK wraps it inside {"type": .., "text": ..., "annotation": ..}, which makes it hard to read in Phoenix. I assume this is due to the Responses API, since it matches the structure in the response example at https://platform.openai.com/docs/api-reference/responses/create. I'm already using agents.OpenAIChatCompletionsModel in my code, but it looks like the agents SDK converts the chat completion format to the Responses format internally somehow.
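To illustrate the readability issue, here is a minimal sketch. The field names and the `output_text` type value are assumptions based on the Responses API example linked above, not an official schema, and `unwrap_text` is a hypothetical helper, not part of any SDK:

```python
# A tool result as the agents SDK reports it: a Responses-API-style content
# part instead of plain text (field names assumed from the question above).
wrapped = {"type": "output_text", "text": "42 degrees and sunny", "annotation": []}


def unwrap_text(part):
    """Hypothetical helper: pull the plain text out of a content part so it
    reads cleanly in a trace UI; fall back to the raw value otherwise."""
    if isinstance(part, dict) and "text" in part:
        return part["text"]
    return part


print(unwrap_text(wrapped))  # -> 42 degrees and sunny
```

A post-processing step like this could serve as a stopgap for readability until the instrumentation serializes these payloads natively.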
It seems likely that there is ongoing work and discussion about aligning or extending the Phoenix/OpenInference semantic conventions to better support OpenAI's Responses API, since the structure of agents SDK outputs (like {"type": ..., "text": ..., "annotation": ...}) differs from standard chat completions and can be harder to read in Phoenix. There are open feature requests about supporting OpenAI agent response payloads and unifying conventions across the different OpenAI APIs, but the data sources contain no explicit confirmation that this has been fully implemented or released yet. The behavior you observe is likely due to the agents SDK wrapping outputs to match the Responses API format, as you described, and this is a known area for improvement in Phoenix and OpenInference instrumentation for better readability and consistency in the UI.

Would you like more detail on the current state of these discussions or on possible workarounds?

Sources:
- https://github.com/Arize-ai/phoenix/issues/7710
- https://github.com/Arize-ai/openinference/blob/main/python/instrumentation/openinference-instrumentation-openai/src/openinference/instrumentation/openai/_response_attributes_extractor.py
💡 Hint: Mention RunLLM in the thread for followups.
That does look quite rough. Feel free to file us a ticket and we'll get it serialized properly.
Thank you! Added https://github.com/Arize-ai/openinference/issues/1808
