Definitely possible - if you're using something like vLLM, you can use the OpenAI SDKs or LiteLLM. Can't say we'll be much help past that, since we've mainly tested via tools like Ollama.
Thank you Mikyo. Is there any sample notebook for eval with the OpenAI SDK? I have an OpenAI-spec-compatible inference server, so I want to use that URL as the OpenAI endpoint and run evals against it.