Yes, Phoenix supports image inputs for LLMs. According to the Multimodal Tracing documentation, Phoenix can display images included in LLM traces: connect to a Phoenix instance and include either a base64-encoded image or an image URL in the call made to your LLM. Additionally, the GitHub issue discusses an enhancement request for multimodal evaluations on dataframes, indicating ongoing support and development for image-based evaluations.

Would you like more detail on how to implement this?

Sources:
- Multimodal Tracing documentation
- GitHub issue
💡 Hint: Mention RunLLM in the thread for followups.
Hi Xiaohan W., like RunLLM mentioned, we have semantic conventions to support the tracing of images, and a subset of our auto-instrumentors support them.
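To make the base64 approach mentioned above concrete, here is a minimal sketch of building an OpenAI-style multimodal chat message with an inline base64 image. This only constructs the message payload (the function name `make_image_message` is hypothetical, not a Phoenix API); when you send such a message through an instrumented client, Phoenix's auto-instrumentation can capture and display the image in the trace.

```python
import base64


def make_image_message(prompt: str, image_bytes: bytes) -> dict:
    """Build a chat message combining text and an inline base64 image.

    Uses the OpenAI-style multimodal content format: a list of parts,
    where the image is embedded as a data URL. An http(s) image URL
    can be passed in the same "image_url" field instead.
    """
    # Base64-encode the raw image bytes and wrap them in a data URL.
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }
```

You would pass this message in the `messages` list of a chat completion call made through a client that one of the auto-instrumentors wraps; the exact setup depends on which instrumentor you use.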