Does Phoenix show generated images, or more generally output images, as in the OpenAI Responses API:

```python
from openai import OpenAI
import base64

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1-mini",
    input="Generate an image of gray tabby cat hugging an otter with an orange scarf",
    tools=[{"type": "image_generation"}],
)

# Save the image to a file
image_data = [
    output.result
    for output in response.output
    if output.type == "image_generation_call"
]

if image_data:
    image_base64 = image_data[0]
    with open("otter.png", "wb") as f:
        f.write(base64.b64decode(image_base64))
```
Phoenix supports displaying images in traces when the image is included in the LLM response as a base64-encoded string or as a URL, such as those returned by OpenAI's image generation APIs. However, the specific OpenAI `responses.create` API you referenced is not mentioned in the Phoenix documentation or examples; the Phoenix examples use the `chat.completions.create` API for multimodal/image outputs. As long as the image output is captured in the trace and provided in a supported format (base64 or URL), Phoenix will display it in the UI [docs: multimodal tracing].

If you are using OpenAI's image generation and the output is a base64-encoded image or a URL, you can instrument your application with Phoenix and see the generated images in the trace view. For best results, use the OpenAIInstrumentor and ensure your payloads include the image data in a supported format [docs: multimodal tracing].

Would you like more detail or a code example?

Sources:
- docs: multimodal tracing
💡 Hint: Mention RunLLM in the thread for followups.
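To make the instrumentation step above concrete: a minimal setup sketch, assuming the `arize-phoenix-otel` and `openinference-instrumentation-openai` packages are installed and a Phoenix instance is reachable at its default endpoint (the `project_name` here is an arbitrary example):

```python
# Sketch: route OpenAI traces to a running Phoenix instance.
# Assumes `arize-phoenix-otel` and `openinference-instrumentation-openai`
# are installed and Phoenix is running at its default endpoint.
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Register an OpenTelemetry tracer provider pointed at Phoenix.
tracer_provider = register(project_name="image-generation-demo")

# Auto-instrument OpenAI client calls so request/response payloads
# (including base64 image data, when present) land in the trace.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```

After this runs, subsequent OpenAI client calls in the same process are traced; whether the generated image renders in the UI still depends on the payload reaching the span in a supported format (base64 or URL).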
not for the example i showed
RunLLM could you share tutorials or guides showcasing generated images being displayed in phoenix
Yes, Phoenix provides tutorials and guides for displaying generated images in traces, specifically using OpenAI's `chat.completions.create` API with image outputs. These guides show how to instrument your application, send image data (as base64 or a URL), and view the images in the Phoenix UI. The `responses.create` API is not covered in the official tutorials [Phoenix multimodal tracing guide], [Tracing Images with Phoenix tutorial].

Would you like a step-by-step code example from these tutorials?

Sources:
- Phoenix multimodal tracing guide
- Tracing Images with Phoenix tutorial
💡 Hint: Mention RunLLM in the thread for followups.
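Phoenix renders images when span attributes follow the OpenInference message-content conventions. When auto-instrumentation does not capture a generated image, one fallback is to set those attributes on a manual span yourself. The helper below is a sketch that only builds the flattened attribute dictionary; the attribute keys follow the OpenInference spec as I understand it, so verify them against the `openinference` version you have installed before relying on this:

```python
def image_span_attributes(prompt: str, image_b64: str, mime: str = "image/png") -> dict:
    """Build OpenInference-style span attributes for one generated image.

    The flattened key names (llm.output_messages.N.message.contents.M...)
    are taken from the OpenInference semantic conventions; double-check
    them against your installed version.
    """
    prefix = "llm.output_messages.0.message"
    return {
        "openinference.span.kind": "LLM",
        "input.value": prompt,
        f"{prefix}.role": "assistant",
        f"{prefix}.contents.0.message_content.type": "image",
        # Phoenix can render data URLs as well as plain https:// URLs.
        f"{prefix}.contents.0.message_content.image.image.url": f"data:{mime};base64,{image_b64}",
    }
```

In practice you would apply the dictionary to a span, e.g. `for k, v in image_span_attributes(prompt, b64).items(): span.set_attribute(k, v)` inside a `tracer.start_as_current_span(...)` block.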
John G. I tried this as well and it did not work. Do you have any examples of the client handling output images, and of whatever produced the trace?

```python
prompt = """
A children's book drawing of a veterinarian using a stethoscope
to listen to the heartbeat of a baby otter.
"""

result = client.images.generate(
    model="gpt-image-1",
    prompt=prompt,
)
```
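On the client side, handling the output of an `images.generate` call like the one above typically means decoding the base64 payload in `result.data[0].b64_json` (gpt-image-1 returns base64-encoded image data). A minimal sketch, assuming that response shape:

```python
import base64

def save_b64_image(b64_json: str, path: str) -> int:
    """Decode a base64 image payload (e.g. result.data[0].b64_json from
    the OpenAI Images API) and write it to disk.

    Returns the number of decoded bytes written.
    """
    raw = base64.b64decode(b64_json)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)

# Usage with the call above (assumes the b64_json response shape):
# save_b64_image(result.data[0].b64_json, "otter.png")
```

Note that instrumentors focused on chat completions may not capture Images API calls at all, which could explain why no image appeared in the trace here; in that case a manual span carrying the image data is the workaround.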
