Hi all! We have a dataset containing base64-encoded images that we've been iterating on with the Python library. Can we use it for experimentation in the Prompt Playground? I'm not seeing a way in the UI to tell the model that an input is an image rather than plain text.
To use base64-encoded images in Phoenix for experimentation, you need to ensure that the images are correctly formatted and recognized as images rather than text. According to the Phoenix image classification tutorial, Phoenix expects image data either as a base64-encoded string or as a URL. You can map the image data to the appropriate input keys when uploading your dataset to Phoenix.

If you are using a multi-modal model, make sure the input is structured correctly. For instance, with OpenAI's models you can set the input type to "image_url" and pass the base64 data as a data URL. This is demonstrated in the test_image_in_message.py file, where the image is included in the message content as a data URL.

Would you like more details on how to structure your input for Phoenix?

Sources:
- Phoenix image classification tutorial
- test_image_in_message.py
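As a minimal sketch of the message structure described above, here is how a base64-encoded image can be wrapped in an OpenAI-style multi-modal message. The helper names (`to_data_url`, `image_message`) and the PNG MIME type are illustrative assumptions, not from Phoenix or the thread; only the `{"type": "image_url", "image_url": {"url": ...}}` shape comes from the OpenAI chat message format the answer references.

```python
import base64


def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data URL (assumed PNG by default)."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"


def image_message(prompt: str, image_bytes: bytes) -> dict:
    """Build an OpenAI-style multi-modal user message.

    The "image_url" content part carries the data URL, which is what tells
    the model the payload is an image rather than plain text.
    """
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": to_data_url(image_bytes)},
            },
        ],
    }
```

A message built this way can then be passed in the `messages` list of a chat completion request to a vision-capable model.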
Hey Tristan, we don't have image support in the playground. We're working on making it easy to save prompts in Phoenix, but image support makes a ton of sense. If you file us a GitHub issue we'll let you know when the feature lands!