Phoenix supports displaying images in traces when the image appears in the LLM response as a base64-encoded string or as a URL, such as those returned by OpenAI's image generation APIs. However, the specific OpenAI `responses.create` API you referenced is not mentioned in the Phoenix documentation or examples; Phoenix's examples use the `chat.completions.create` API for multimodal/image outputs. As long as the image output is captured in the trace in a supported format (base64 or URL), Phoenix will display it in the UI <https://docs.arize.com/phoenix/tracing/how-to-tracing/advanced/multimodal-tracing#%3AR1sd9uucqfkvfa%3A|docs: multimodal tracing>.
If you are using OpenAI's image generation and the output is a base64-encoded image or a URL, you can instrument your application with Phoenix and see the generated images in the trace view. For best results, use the OpenAIInstrumentor and ensure your payloads include the image data in a supported format <https://docs.arize.com/phoenix/tracing/how-to-tracing/advanced/multimodal-tracing#%3AR1sd9uucqfkvfa%3A|docs: multimodal tracing>. Would you like more detail or a code example?
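As a starting point, here is a minimal sketch of what a traced multimodal payload can look like. The `register` / `OpenAIInstrumentor` setup lines follow the Phoenix docs but are commented out here because they require the `arize-phoenix` and `openinference-instrumentation-openai` packages plus a running Phoenix instance; the base64 bytes are a placeholder standing in for real image data, and the model name `gpt-4o` is an illustrative choice, not a requirement.

```python
import base64

# Instrumentation per the Phoenix docs (requires `arize-phoenix` and
# `openinference-instrumentation-openai`, plus a running Phoenix server):
#   from phoenix.otel import register
#   from openinference.instrumentation.openai import OpenAIInstrumentor
#   tracer_provider = register()
#   OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# Placeholder bytes -- in practice, read your real image file here.
image_bytes = b"\x89PNG placeholder"
image_b64 = base64.b64encode(image_bytes).decode()

# Phoenix renders images it finds in traced payloads when they are
# supplied as a URL or as a base64 data URI like this one.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{image_b64}"},
        },
    ],
}

# The traced call itself (commented out; needs OPENAI_API_KEY):
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(
#       model="gpt-4o", messages=[message]
#   )

print(message["content"][1]["image_url"]["url"][:22])
```

With the instrumentation lines enabled, the `chat.completions.create` call is captured automatically and the image appears in the Phoenix trace view.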
Sources:
- <https://docs.arize.com/phoenix/tracing/how-to-tracing/advanced/multimodal-tracing#%3AR1sd9uucqfkvfa%3A|docs: multimodal tracing>