Does Phoenix have a feature or concept similar to model serving in MLflow? E.g., once I have defined a task (https://arize.com/docs/phoenix/datasets-and-experiments/how-to-experiments/run-experiments) and I am happy with its performance on my evaluators, datasets, etc., I want to deploy this task to a production environment. Currently I just wrap the task (usually a call to an API like Gemini, including a pre-defined structured output, prompt, temperature, and other config) in a Docker container and deploy it to something like ECS or Lambda. But I'm wondering if there's a "proper" way Phoenix recommends for doing this, whether there are any features for it, etc.
Phoenix does not provide a built-in model serving feature analogous to MLflow's model serving. Phoenix focuses on experimentation, evaluation, tracing, and observability for LLM applications; it has no native mechanism to deploy or serve models or tasks as production endpoints.

The recommended approach is to use your own serving infrastructure (e.g., Docker, ECS, Lambda) to deploy the task, then instrument that service with Phoenix for observability and evaluation. There is no Phoenix-native serving or deployment abstraction: Phoenix is designed to observe and evaluate, not to serve models or manage production inference endpoints. You can keep wrapping your task logic in a container and deploying it as you do now, then use Phoenix to trace, evaluate, and monitor the production service by instrumenting your code and sending traces to Phoenix. This workflow is consistent with Phoenix's intended usage; the official docs and guides offer no additional serving features or deployment recommendations beyond it.

Sources:
- Phoenix documentation on experiments and deployment
- Phoenix self-hosting/deployment options
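As a concrete illustration of the "deploy it yourself, instrument it with Phoenix" workflow, here is a minimal sketch of wiring a production task to Phoenix via OpenTelemetry. It assumes the `arize-phoenix-otel` package is installed and that a Phoenix collector is reachable; the project name, endpoint URL, and the `run_task` function are all placeholders, not part of the Phoenix API.

```python
# Minimal sketch: send traces from a deployed task to a Phoenix collector.
# Assumes `pip install arize-phoenix-otel` and a reachable Phoenix instance;
# the project name and endpoint below are hypothetical placeholders.
from phoenix.otel import register

tracer_provider = register(
    project_name="my-production-task",         # hypothetical project name
    endpoint="http://phoenix:6006/v1/traces",  # hypothetical collector URL
)
tracer = tracer_provider.get_tracer(__name__)


def run_task(payload: dict) -> dict:
    # Wrap the same task logic you evaluated in your experiments;
    # each invocation is recorded as a span and shipped to Phoenix.
    with tracer.start_as_current_span("run_task") as span:
        span.set_attribute("input.value", str(payload))
        result = {"answer": "..."}  # call Gemini / your model here
        span.set_attribute("output.value", str(result))
        return result
```

The same container you already deploy to ECS or Lambda can run this setup code at startup; Phoenix then receives traces from production without being in the serving path.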
