Hi everyone, I have a question related to Phoenix evaluation. Due to company policy, we use an internal AI Gateway to access Azure OpenAI. This requires injecting a refreshed token and client certificate into every API call. I'm trying to use the OpenAIModel class in Phoenix to run evaluation workflows like the following:
from phoenix.evals import (
HallucinationEvaluator,
OpenAIModel,
)
eval_model = OpenAIModel(model="gpt-4-turbo-preview")
hallucination_evaluator = HallucinationEvaluator(eval_model)

Could you please provide some guidance on how to support dynamic token and certificate injection in this scenario? Should I write a helper that refreshes the token and injects it into the headers before every call? Is it recommended to hook this logic into the _rate_limited_completion and _async_rate_limited_completion functions? Basically, what is the best way to pass a refreshed token and certificates to the underlying request? I'd appreciate any best practices or example extensions for integrating this into Phoenix's evaluation framework. Thank you in advance!
Phoenix's OpenAIModel supports passing custom headers (including tokens and certificates) via the model_kwargs argument, which is forwarded to the underlying OpenAI client. You should implement a helper that refreshes your token and prepares the certificate, then inject these into the extra_headers and cert fields of model_kwargs before each evaluation call. This approach is preferred over modifying internal rate-limiting methods, as it keeps your integration clean and compatible with Phoenix updates. For example, you can populate model_kwargs dynamically like so:

from phoenix.evals import OpenAIModel

def get_fresh_token():
    # Your logic to fetch a new token
    return "refreshed_token"

def get_client_cert():
    # Your logic to fetch the cert file path or a (cert, key) tuple
    return ("/path/to/cert.pem", "/path/to/key.pem")

eval_model = OpenAIModel(
    model="gpt-4-turbo-preview",
    model_kwargs={
        "extra_headers": {"Authorization": f"Bearer {get_fresh_token()}"},
        "cert": get_client_cert(),
    },
)

You may need to re-instantiate the model or update its model_kwargs before each evaluation to ensure the token/cert are current. This method is documented as the way to pass custom headers and certificates for OpenAI-compatible endpoints in Phoenix evaluations. Would you like more detail or a more advanced example?

Sources:
- Phoenix evaluation models documentation
- OpenAIModel source code
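Because the token expires, building model_kwargs once at startup is not enough; wrapping the assembly in a small function and calling it each time you construct the model keeps the credentials current. The sketch below is a minimal illustration, not a documented Phoenix pattern: get_fresh_token and get_client_cert are hypothetical placeholders standing in for your gateway's real auth calls.

```python
def get_fresh_token() -> str:
    # Placeholder: call your AI Gateway's token-refresh endpoint here.
    return "refreshed_token"

def get_client_cert() -> tuple:
    # Placeholder: return the current (cert, key) file paths from your
    # certificate store.
    return ("/path/to/cert.pem", "/path/to/key.pem")

def fresh_model_kwargs() -> dict:
    """Assemble model_kwargs with a just-fetched token and certificate."""
    return {
        "extra_headers": {"Authorization": f"Bearer {get_fresh_token()}"},
        "cert": get_client_cert(),
    }

# Usage sketch (assumes phoenix is installed; re-create the model per run
# so each evaluation batch starts with current credentials):
#
# from phoenix.evals import OpenAIModel, HallucinationEvaluator
#
# eval_model = OpenAIModel(
#     model="gpt-4-turbo-preview",
#     model_kwargs=fresh_model_kwargs(),
# )
# hallucination_evaluator = HallucinationEvaluator(eval_model)
```

Re-creating the model before each batch is the simplest approach; mutating an existing model's model_kwargs in place also works but is easier to get wrong if evaluations run concurrently.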
