Currently, there are no official resources or documentation on using Arize Phoenix with video generation models such as veo3. Phoenix's support and investment are focused on LLM, NLP, CV (image), and tabular models; multimodal (including video) support is still under development and not yet generally available for video generation models like veo3. Most existing resources and integrations cover text, image, and structured data, not video generation models or pipelines.

If you are interested in multimodal or video support, the Phoenix team is ramping up internal efforts, but as of now the platform does not provide out-of-the-box support or examples for video generation models. You can follow the project or reach out to the team for updates on future support for these use cases. Would you like more detail or guidance on current multimodal capabilities or workarounds?

Sources:
- https://arize.com/docs/phoenix//changes/v6Zhm276x8LlKmwqElIA/gitbook/pdf#«R6f3afbnetrnb»
- https://arize.com/docs/phoenix/~~/changes/v6Zhm276x8LlKmwqElIA/quickstart/phoenix-inferences#«Rct38ut1letrnb»
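As an interim workaround, one option is to record each video-generation call as a span-like record with OpenInference-style attributes, storing a URI to the output rather than the raw video. This is a minimal stdlib sketch, not an official Phoenix integration: the `video.generate` span name and the reuse of the `LLM` span kind are assumptions, since no official schema for video generation exists yet.

```python
import json
import time
import uuid

def make_videogen_span(prompt: str, model: str, video_uri: str) -> dict:
    """Build a span-like record for a video-generation call.

    Attribute names loosely follow OpenInference LLM conventions;
    the video-specific choices here are assumptions, since Phoenix
    publishes no official schema for video generation yet."""
    now_ns = time.time_ns()
    return {
        "name": "video.generate",  # hypothetical span name
        "context": {
            "trace_id": uuid.uuid4().hex,
            "span_id": uuid.uuid4().hex[:16],
        },
        "start_time_ns": now_ns,
        "end_time_ns": now_ns,
        "attributes": {
            "openinference.span.kind": "LLM",  # closest existing kind
            "llm.model_name": model,
            "input.value": prompt,
            "output.value": video_uri,  # store a URI, not raw bytes
        },
    }

span = make_videogen_span("a cat surfing at sunset", "veo3", "s3://bucket/out.mp4")
print(json.dumps(span, indent=2))
```

Records shaped like this keep prompts, model names, and output locations queryable alongside your other traces, so they can be migrated into Phoenix once video support lands.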
💡 Hint: Mention RunLLM in the thread for followups.
