What is the best way to register model providers (Baseten, TogetherAI, etc.) in Phoenix? They all use the OpenAI request format. I would like to track the model name and provider in my experiments.
It seems likely that the best way to register model providers like Baseten or TogetherAI in Phoenix, when they use the OpenAI-compatible API, is to configure them as custom OpenAI-compatible providers. You can do this by setting the OPENAI_BASE_URL environment variable (server-side) or by specifying the base URL in the prompt playground UI, and by using the provider's API key as the OPENAI_API_KEY. This allows you to use multiple OpenAI-compatible endpoints and track the model name in your experiments. However, Phoenix currently overloads the "OpenAI" provider for all OpenAI-compatible endpoints, so you cannot natively distinguish between different OpenAI-compatible providers in the UI or experiment metadata, though enhancements are being considered to support custom providers and better provider tracking in the future (configure-ai-providers.md, issue #7325).

To track the model name, you can pass any string as the model name in the UI or API, and it will be recorded in the experiment metadata. Tracking the provider is less direct: currently, the provider is inferred as "OpenAI" for all OpenAI-compatible endpoints, but future updates may allow explicit custom provider registration (issue #7325, issue #5427).

Would you like more detail or code examples for setting this up?

Sources:
- Configure AI Providers
- issue #7325
- issue #5427
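For the playground, a minimal sketch of the environment-variable approach described above, assuming you launch the Phoenix server yourself. The TogetherAI endpoint URL is an assumption for illustration; verify it against the provider's documentation.

```shell
# Point Phoenix's OpenAI integration at an OpenAI-compatible endpoint
# before starting the server. The URL is an assumed TogetherAI endpoint.
export OPENAI_BASE_URL="https://api.together.xyz/v1"
# Use the provider's key in place of an OpenAI key.
export OPENAI_API_KEY="${TOGETHER_API_KEY:-sk-placeholder}"
# phoenix serve   # then type any model name string in the playground UI
```

Note that with this setup the playground will still display the provider as "OpenAI"; only the model name string you enter is tracked.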
I would love to call them from the playground and from a notebook when I run experiments against those models.
If I update the base_url for OpenAI to point to Baseten, wouldn't it still track the experiment's provider as OpenAI?
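Yes, that is the current limitation, so a common workaround from a notebook is to record the real provider and model yourself. A minimal sketch, with assumed endpoint URLs, a hypothetical helper name, and an illustrative metadata shape; if your Phoenix version's `run_experiment` accepts an `experiment_metadata` argument, you could pass the dict through there.

```python
import os

# Assumed endpoint URLs for illustration -- check each provider's docs.
# Baseten in particular often uses per-deployment URLs.
OPENAI_COMPATIBLE_ENDPOINTS = {
    "togetherai": "https://api.together.xyz/v1",
    "baseten": "https://inference.baseten.co/v1",  # hypothetical example URL
}

def client_kwargs(provider: str) -> dict:
    """Build kwargs for openai.OpenAI(...) aimed at an OpenAI-compatible endpoint."""
    return {
        "base_url": OPENAI_COMPATIBLE_ENDPOINTS[provider],
        "api_key": os.environ.get(f"{provider.upper()}_API_KEY", "sk-placeholder"),
    }

def experiment_metadata(provider: str, model: str) -> dict:
    """Phoenix currently reports every OpenAI-compatible endpoint as 'OpenAI',
    so record the real provider and model yourself alongside the experiment."""
    return {"provider": provider, "model": model}

kwargs = client_kwargs("togetherai")
meta = experiment_metadata("togetherai", "meta-llama/Llama-3-70b-chat-hf")
```

This keeps the provider queryable in your own metadata even though the Phoenix UI labels the calls as OpenAI.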
