Closed-Loop Fine-Tuning using Phoenix Metrics
• How can we trigger model retraining or fine-tuning workflows based on specific performance thresholds flagged by Phoenix (e.g., LLM hallucination rate)?
• Can Phoenix push alerts to a CI/CD or retraining pipeline (e.g., via webhook or integration with Airflow/SageMaker/etc.)?
Phoenix enables teams to identify data points for fine-tuning based on production performance and user feedback (e.g., high hallucination rates) and supports exporting these datasets for retraining. However, Phoenix does not natively provide built-in automation to push alerts or trigger retraining workflows in CI/CD or orchestration tools (e.g., via webhook, Airflow, or SageMaker integration); such automation must be implemented externally by monitoring Phoenix metrics and programmatically exporting data as needed, according to the Phoenix documentation, be8694ec, and c26af0d7.

Would you like more detail on how to set up external automation for closed-loop retraining with Phoenix?

Sources:
- Phoenix LLM Observability: Testing/Staging & Production
- Phoenix documentation
- be8694ec
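The "implement it externally" path described above can be sketched as a small poll-and-notify script. Everything specific here is an assumption for illustration: the 5% threshold, the webhook URL, the `"hallucinated"` label values, and the helper names are all placeholders, and the Phoenix export step itself is omitted.

```python
import json
import urllib.request

# Assumptions (not part of Phoenix itself): a 5% hallucination-rate
# threshold and a hypothetical CI/CD webhook endpoint.
HALLUCINATION_THRESHOLD = 0.05
RETRAIN_WEBHOOK_URL = "https://ci.example.com/hooks/retrain"  # hypothetical

def hallucination_rate(eval_labels):
    """Fraction of responses labeled 'hallucinated' in an exported eval.

    `eval_labels` stands in for labels pulled out of a dataset exported
    from Phoenix; the export step itself is not shown here.
    """
    if not eval_labels:
        return 0.0
    return sum(1 for label in eval_labels if label == "hallucinated") / len(eval_labels)

def should_trigger_retraining(eval_labels, threshold=HALLUCINATION_THRESHOLD):
    """True when the observed rate exceeds the configured threshold."""
    return hallucination_rate(eval_labels) > threshold

def notify_pipeline(rate):
    """POST a small JSON payload to the (hypothetical) CI/CD webhook.

    In practice you would add auth, retries, and error handling; an
    Airflow DAG-trigger endpoint or a SageMaker pipeline start call
    could sit behind a URL like this.
    """
    payload = json.dumps({"event": "hallucination_threshold_exceeded",
                          "rate": rate}).encode()
    req = urllib.request.Request(RETRAIN_WEBHOOK_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# Example decision on stand-in labels (no network call is made here):
labels = ["factual", "hallucinated", "factual", "factual"]
if should_trigger_retraining(labels):
    pass  # notify_pipeline(hallucination_rate(labels)) would fire the pipeline
```

A cron job or lightweight scheduler running this check against a periodic Phoenix export is enough to close the loop until native alerting exists.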
Would be great if you could share more detail on how to set up external automation for closed-loop retraining with Phoenix!
John G. this seems like a very cool use case. We don't currently have any automation, but we've been thinking about webhooks for some time. If you have any thoughts about what this would look like, I'd love to hear about it! https://github.com/Arize-ai/phoenix/issues/6736
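As one possible input to the webhook discussion linked above, here is a hedged sketch of what a consumer of a Phoenix alert payload could look like. The payload schema (`metric`, `value`, `threshold`, `dataset_filter`) is entirely invented for illustration; Phoenix does not define a webhook format today.

```python
import json

def handle_phoenix_webhook(raw_body):
    """Decide what to do with a hypothetical Phoenix alert payload.

    The payload schema is invented for this sketch; Phoenix does not
    define a webhook format today.
    """
    event = json.loads(raw_body)
    exceeded = (
        event.get("metric") == "hallucination_rate"
        and event.get("value", 0.0) > event.get("threshold", float("inf"))
    )
    if exceeded:
        # Hand off to whatever retraining entry point you own
        # (CI job, Airflow DAG trigger, SageMaker pipeline start, ...).
        return {"action": "trigger_retraining",
                "dataset_filter": event.get("dataset_filter")}
    return {"action": "ignore"}

# Example of the kind of payload such a webhook might carry:
example = json.dumps({"metric": "hallucination_rate", "value": 0.12,
                      "threshold": 0.05,
                      "dataset_filter": "recent_production_spans"})
```

Including a filter expression in the payload would let the receiver pull exactly the flagged datapoints back out of Phoenix for the fine-tuning set, rather than re-deriving which spans tripped the alert.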
