Hello,
I have a question regarding the Prompt Playground in Phoenix. If we have self-hosted LLMs (served in an OpenAI-compatible manner), how can we use them in the Prompt Playground?
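For context, "OpenAI-compatible" here means the deployment exposes the standard `/v1/chat/completions` route, so any OpenAI-style client can reach it by overriding the base URL. A minimal stdlib sketch of the request shape (the host, port, and model name are placeholders for our internal deployment, not real values):

```python
import json
import urllib.request

# Placeholders for our internal deployment; not a real host or model.
BASE_URL = "http://my-vllm-service:8000/v1"
MODEL = "my-model"

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Hello"}],
}
request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # vLLM typically accepts any key unless --api-key is configured.
        "Authorization": "Bearer not-needed",
    },
)
# urllib.request.urlopen(request) would hit the live server, so it is
# omitted here; the question is how to point the Playground at BASE_URL.
```

So the question is essentially whether the Playground lets us override the provider's base URL in the same way.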
Also, can we use any auto-instrumentation to collect traces in a self-hosted Phoenix instance if we are using a self-hosted LLM deployed with vLLM and KServe?