I'm trying to get this working: https://github.com/Arize-ai/phoenix/blob/f56cdb86136f9cb0af05edfeddd35ba2a0c524c4/docs/tracing/how-to-tracing/multimodal-tracing.md Where is that implemented in the code? do i need to send a message with a special format? right now we share the same data we send to openai as a trace attribute but phoenix still renders the base64 string with heading
To implement multimodal tracing in Phoenix, you need to ensure that your application includes either a base64-encoded image or an image URL in the call made to your LLM. Phoenix supports displaying images that are included in LLM traces, but the images must be properly encoded or referenced.

1. **Include Image in Trace**: Ensure your trace includes a base64-encoded image or an image URL. For example:

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
```

2. **Instrument Your Application**: Use the Phoenix instrumentation to capture and send traces.

```python
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register()
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider, skip_dep_check=True)
```

3. **Launch Phoenix**: Start the Phoenix application to view the traces.

```python
import phoenix as px

px.launch_app()
```

For more details, refer to the [Phoenix documentation on multimodal tracing](https://docs.arize.com/phoenix/tracing/how-to-tracing/multimodal-tracing).

Would you like more detail?

Sources:
- [Multimodal Tracing](https://docs.arize.com/phoenix/tracing/how-to-tracing/multimodal-tracing)
- [GitHub Issue #2895](https://github.com/Arize-ai/phoenix/issues/2895)
💡 Hint: Mention RunLLM in the thread for followups.
we don't use OpenAIInstrumentor().
from opentelemetry import trace as trace_api
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

endpoint = "http://localhost:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
trace_api.set_tracer_provider(tracer_provider)
tracer = trace_api.get_tracer(module_name)
that's just the tracer, which instrumentor do you use to collect traces from your source code?
with self.tracer.start_as_current_span("SotaAgent.get_action") as span:
    span.set_attributes(format_call_attributes(observation, info))
    self._agent_state.pre_update(observation)

we don't use an instrumentor, we update the span directly
ah, if that's the case can you check that the span follows our semantic conventions for including images in the response? If it doesn't I think we'll render the response as text: https://github.com/Arize-ai/openinference/blob/main/python/openinference-semantic-conventions/src/openinference/semconv/trace/__init__.py#L161-L180
type should be image_url for the openai schema
specifically ensure that message_content.type is "image" and message_content.image is a base64 encoded string
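In concrete terms, the span would need flattened attributes in the shape the openinference semantic conventions describe. Below is a minimal sketch of what that flattening might look like for a user message with a text part and a base64 data-URL image part; the attribute names are read off the semconv file linked above, so double-check them against the version you have installed, and `image_message_attributes` is an illustrative helper, not part of any library.

```python
# Sketch: build flattened openinference-style span attributes for a user
# message containing text plus a base64 data-URL image. Attribute names are
# assumptions based on the linked semconv file -- verify against your version.

def image_message_attributes(text: str, image_data_url: str) -> dict:
    prefix = "llm.input_messages.0"  # first input message
    return {
        f"{prefix}.message.role": "user",
        # first content part: the text
        f"{prefix}.message.contents.0.message_content.type": "text",
        f"{prefix}.message.contents.0.message_content.text": text,
        # second content part: typed "image" so Phoenix can render it
        f"{prefix}.message.contents.1.message_content.type": "image",
        f"{prefix}.message.contents.1.message_content.image.image.url": image_data_url,
    }

attrs = image_message_attributes(
    "What's in this image?",
    "data:image/jpeg;base64,/9j/4AAQSkZJRg...",  # truncated example payload
)
# attrs can then be passed to span.set_attributes(attrs) inside the
# start_as_current_span block shown earlier in the thread.
```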
do i need to convert it to image last?
"message": { "role": "user", "content": "data:image/jpeg;base64,/9j/4AAQSkZJRg..." }
this is what it looks like today
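Since the current payload stores the data URL as a bare string in `content`, one option is to restructure it into the OpenAI content-parts shape, where the image part carries `"type": "image_url"` as noted above. The helper name `to_content_parts` below is illustrative, not from the thread:

```python
# Sketch: convert the current flat message (a bare data-URL string as
# "content") into the OpenAI content-parts shape, where the image part has
# type "image_url". Illustrative helper, not an existing API.

def to_content_parts(message: dict) -> dict:
    content = message["content"]
    if isinstance(content, str) and content.startswith("data:image/"):
        # wrap the data URL in a structured image_url content part
        return {
            "role": message["role"],
            "content": [{"type": "image_url", "image_url": {"url": content}}],
        }
    return message  # plain-text messages pass through unchanged

current = {"role": "user", "content": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."}
converted = to_content_parts(current)
```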
