Hello, I have a question regarding Prompt Playground in Phoenix. If we have self-hosted LLMs (that are served in a OpenAI compatible manner), how can we use them in Prompt Playground? Also, Can we use any auto-instrumentation to collect traces in self-hosted Phoenix if we are using self hosted LLM deployed using vLLM and KServe?
It seems likely that you can use self-hosted LLMs in the Phoenix Prompt Playground if they are OpenAI compatible. You can point the OpenAI client at your self-hosted LLM endpoint by setting the OPENAI_BASE_URL environment variable to your LLM's endpoint, as mentioned in the Prompt Playground documentation.

Regarding auto-instrumentation for collecting traces in a self-hosted Phoenix setup, Phoenix supports tracing for LLM applications using OpenTelemetry. If your LLM is compatible with the OpenAI API format, you can use the LiteLLMInstrumentor for auto-instrumentation; this is supported by the LiteLLM integration in Phoenix. For vLLM and KServe, you might need to ensure that your setup can send OpenTelemetry traces to Phoenix.

Would you like more detail on any specific part? Sources:
- Configure AI Providers
- LiteLLM integration
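As a rough sketch of the environment-variable approach above: the base URL and API key below are placeholders for a self-hosted, OpenAI-compatible deployment (e.g. vLLM behind KServe), not real endpoints, and the tracing setup in the comments assumes the arize-phoenix-otel and openinference-instrumentation-openai packages are installed.

```python
import os

# Placeholder endpoint for a self-hosted, OpenAI-compatible server;
# adjust the host/port/path to match your vLLM or KServe deployment.
os.environ["OPENAI_BASE_URL"] = "http://my-vllm-service:8000/v1"
os.environ["OPENAI_API_KEY"] = "unused-placeholder"  # many local servers ignore the key

# The openai-python client picks these up automatically:
#   from openai import OpenAI
#   client = OpenAI()  # talks to the self-hosted endpoint above
#
# For sending traces to a self-hosted Phoenix, the usual pattern would be:
#   from phoenix.otel import register
#   from openinference.instrumentation.openai import OpenAIInstrumentor
#   tracer_provider = register(endpoint="http://my-phoenix-host:6006/v1/traces")
#   OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```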
yes, you can try setting the OPENAI_BASE_URL environment variable, or specify it in the UI
phoenix already collects the playground traces automatically, so you can take a look at those first
Where to set env variables? In Python code?
do you have config files for your self-hosted phoenix?
yes we do...
you can add it alongside others already in there, like the ones for the database, etc
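A hypothetical env-file fragment for that (a config sketch, not a definitive layout — the database line stands in for whatever settings are already in your self-hosted Phoenix config, and the URL is a placeholder):

```shell
# Existing self-hosted Phoenix settings (examples only)
export PHOENIX_SQL_DATABASE_URL="postgresql://user:pass@db-host/phoenix"

# Added alongside them: point the playground's OpenAI client
# at a self-hosted, OpenAI-compatible endpoint (placeholder URL).
export OPENAI_BASE_URL="http://my-vllm-service:8000/v1"
export OPENAI_API_KEY="unused-placeholder"
```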
but we may use more than one self-hosted model, in which case the base URL might be different for each model
oh i see. that would be a new feature
but I guess in the playground we can use the base URL field, where we can plug in different values
likewise in the code too..
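In code, one way to handle several self-hosted models with different base URLs is a small per-model registry; the model names and URLs here are made-up placeholders, and the commented-out lines show how the standard openai-python client would consume them.

```python
# Hypothetical registry of self-hosted, OpenAI-compatible endpoints.
# Each model can live behind its own base URL (e.g. separate vLLM
# or KServe deployments); names and URLs are placeholders.
MODEL_ENDPOINTS = {
    "llama-3-8b": "http://vllm-llama:8000/v1",
    "mistral-7b": "http://vllm-mistral:8000/v1",
}

def base_url_for(model: str) -> str:
    """Look up the base URL for a model, failing loudly on unknown names."""
    try:
        return MODEL_ENDPOINTS[model]
    except KeyError:
        raise ValueError(f"no endpoint configured for model {model!r}")

# With the openai-python client this would be used roughly like:
#   from openai import OpenAI
#   client = OpenAI(base_url=base_url_for("llama-3-8b"), api_key="unused")
#   client.chat.completions.create(model="llama-3-8b", messages=[...])
```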
yes
ok..got it..thank you
np!
will give it a try
