🔍 For those interested in experimenting with image classification in Phoenix, we've put together a new blog post that walks you through setting up and running your own experiment.
This guide covers the essentials and also highlights some of the recent updates we've rolled out:
🎨 Multimodal Tracing: Our new multimodal tracing feature allows you to visualize and debug your model’s decisions across various data types, including text and images.
🚤 Register Ergonomics: We've streamlined the OpenTelemetry (OTEL) register function, so you can now connect your app to Phoenix with a single line of code.
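As a rough sketch of what that one-liner looks like (assuming the `arize-phoenix-otel` package is installed; the `project_name` value here is just illustrative):

```python
# Assumes: pip install arize-phoenix-otel, and a Phoenix instance to send traces to.
from phoenix.otel import register

# A single call wires up an OpenTelemetry tracer provider pointed at Phoenix.
tracer_provider = register(project_name="image-classification")  # project name is a placeholder
```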
🧪 Experiments: Set up and manage LLM experiments directly in Phoenix. This should make tracking and comparing different runs much simpler—no more juggling Google Sheets.
If you're looking to get hands-on with these features, the blog post is a good starting point.
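For a feel of the experiments workflow, here's a minimal sketch assuming the `phoenix.experiments` module; `my_dataset`, `my_classifier`, and `my_evaluator` are placeholders you'd swap for your own dataset and functions:

```python
# Assumes: pip install arize-phoenix, a dataset already uploaded to Phoenix,
# and your own classifier + evaluator functions.
from phoenix.experiments import run_experiment

def task(example):
    # Run your model on one dataset example (my_classifier is a placeholder).
    return my_classifier(example.input)

# Each run is tracked in Phoenix, so different runs can be compared side by side.
experiment = run_experiment(
    my_dataset,              # placeholder: a dataset fetched/uploaded via the Phoenix client
    task,
    evaluators=[my_evaluator],  # placeholder: scores each task output
)
```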
Check it out: https://arize.com/blog/evaluate-image-classifier/