Hi! I have set up a self-hosted Phoenix instance. Tracing works perfectly, but I'm struggling to set up the prompt playground. I need it to use custom models hosted in LiteLLM. I did set OPENAI_BASE_URL and OPENAI_API_KEY, which seems to work: when I try to submit a prompt with a standard OpenAI model, it errors with "selected model does not exists", which is expected (that model isn't configured in LiteLLM).
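For reference, this is roughly how I pass the variables to the Phoenix container (the URL and key below are placeholders from my setup, not literal values):

```shell
# Environment seen by the Phoenix server process (values are examples)
export OPENAI_BASE_URL="http://litellm:4000/v1"  # LiteLLM proxy's OpenAI-compatible endpoint
export OPENAI_API_KEY="sk-placeholder"           # a key accepted by the LiteLLM proxy
```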
I tried to add a new model (e.g. claude-3-5-sonnet) under the OpenAI provider, but I can't get it to show up in the model dropdowns in the playground or the prompt workshop.
Should this work? And if so, does anyone have a suggestion for how to debug it?