Hi everyone, I have a question about Phoenix evaluations. Due to company policy, we access Azure OpenAI through an internal AI Gateway, which requires injecting a freshly refreshed token and a client certificate into every API call. I'm trying to use the OpenAIModel class in Phoenix to run evaluation workflows like the following:
from phoenix.evals import (
    HallucinationEvaluator,
    OpenAIModel,
)

eval_model = OpenAIModel(model="gpt-4-turbo-preview")
hallucination_evaluator = HallucinationEvaluator(eval_model)

Could you provide some guidance on how to support dynamic token and certificate injection in this scenario? Should I write a helper that refreshes the token and injects it into the headers before every call? Is it recommended to hook this logic into the _rate_limited_completion and _async_rate_limited_completion functions? Basically, what is the best way to pass a refreshed token and certificates to the underlying request? I'd appreciate any best practices or example extensions for integrating this into Phoenix's evaluation framework. Thank you in advance!
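For context, here is a sketch of the kind of helper I have in mind: a callable that caches a bearer token and refreshes it shortly before expiry. The `fetch_token` function and the `RefreshingTokenProvider` name are my own placeholders, not anything from Phoenix or the openai SDK; the idea is that a per-request callable like this could be handed to the underlying Azure client (the openai SDK accepts an `azure_ad_token_provider` callable, and a client certificate can go on an `httpx.Client(cert=...)` passed as `http_client`), assuming Phoenix's OpenAIModel exposes or can be made to expose those knobs:

```python
import time
from dataclasses import dataclass, field
from typing import Callable, Optional, Tuple


@dataclass
class RefreshingTokenProvider:
    """Callable that caches a bearer token and refreshes it before expiry.

    fetch_token is a user-supplied function (hypothetical here) that calls
    the internal gateway / identity provider and returns
    (token, expires_at_epoch_seconds).
    """

    fetch_token: Callable[[], Tuple[str, float]]
    skew_seconds: float = 60.0  # refresh this long before the real expiry
    _token: Optional[str] = field(default=None, init=False)
    _expires_at: float = field(default=0.0, init=False)

    def __call__(self) -> str:
        # Refresh if we have no token yet, or if it is about to expire.
        if self._token is None or time.time() >= self._expires_at - self.skew_seconds:
            self._token, self._expires_at = self.fetch_token()
        return self._token
```

I would then try passing an instance of this as the token provider when constructing the model, and the certificate via a custom HTTP client, but I'm not sure whether that is the intended extension point or whether hooking `_rate_limited_completion` is preferred.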
