Hello, I have a question regarding Prompt Playground in Phoenix. If we have self-hosted LLMs (served in an OpenAI-compatible manner), how can we use them in Prompt Playground? Also, can we use any auto-instrumentation to collect traces in self-hosted Phoenix if we are using self-hosted LLMs deployed using vLLM and KServe?
yes, you can try setting the OPENAI_BASE_URL environment variable, or specify it in the UI
phoenix already collects the playground traces automatically, so you can take a look at those first
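As a minimal sketch of the environment-variable approach: set `OPENAI_BASE_URL` before the process that makes OpenAI-style calls starts. The service name and port below are made-up placeholders for a self-hosted vLLM/KServe endpoint.

```python
import os

# Hypothetical OpenAI-compatible endpoint of a self-hosted vLLM deployment;
# replace with your actual service URL (vLLM serves under /v1 by convention).
os.environ["OPENAI_BASE_URL"] = "http://my-vllm-service:8000/v1"

# Clients that follow the OpenAI convention read this variable when
# building requests, so no code change is needed beyond setting it.
print(os.environ["OPENAI_BASE_URL"])
```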
Where to set env variables? In Python code?
do you have config files for your self-hosted phoenix?
yes we do...
you can add it alongside others already in there, like the ones for the database, etc
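For example, the new variable could sit next to the existing ones in the deployment's env file. This is only a sketch: the database variable name and both URLs below are illustrative placeholders, not values from this thread.

```shell
# Existing Phoenix settings (illustrative placeholder values)
export PHOENIX_SQL_DATABASE_URL="postgresql://user:pass@db:5432/phoenix"
# New: point OpenAI-compatible calls at the self-hosted vLLM endpoint
export OPENAI_BASE_URL="http://my-vllm-service:8000/v1"
```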
but we may use more than one self-hosted model, in which case the base URL might be different for each model
oh i see. that would be a new feature
but I guess in the playground there's a base URL field where we can plug in different values per model
likewise in the code too..
yes
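The per-model case above can be sketched in code by keeping a small mapping from model name to its OpenAI-compatible endpoint; whatever the lookup returns is what you'd plug into the playground's base URL field (or a client). All model names and URLs here are made up for illustration.

```python
# Map each self-hosted model to its own OpenAI-compatible base URL.
# Names and URLs are illustrative placeholders.
MODEL_ENDPOINTS = {
    "llama-3-8b": "http://vllm-llama:8000/v1",
    "mistral-7b": "http://vllm-mistral:8000/v1",
}

def base_url_for(model: str) -> str:
    """Return the base URL to use for a given self-hosted model."""
    try:
        return MODEL_ENDPOINTS[model]
    except KeyError:
        raise ValueError(f"no endpoint configured for model {model!r}")

print(base_url_for("llama-3-8b"))
```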
ok..got it..thank you
np!
will give it a try
