afaict, that would entail subclassing class LLM(*, provider=None, model=None, client=None, initial_per_second_request_rate=None, **kwargs) ? 🤔
or this:
Alternative: Fully Custom LLM Evaluator
Alternatively, for LLM-as-a-judge tasks that don't fit the classification paradigm, you can create a custom evaluator by subclassing the base LLMEvaluator class. This supports almost any complex eval that doesn't fit the classification type.
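A minimal sketch of what such a subclass could look like. Note the base class, the `evaluate` method name, and the `EvalResult` container below are all stand-ins I'm assuming for illustration; the real LLMEvaluator interface (method names, signatures, return types) may differ, so check the library's docs before copying this.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical stand-ins for the library's types; the real
# LLMEvaluator interface may differ.
@dataclass
class EvalResult:
    label: str
    score: float
    explanation: str

class LLMEvaluator(ABC):
    @abstractmethod
    def evaluate(self, record: dict) -> EvalResult:
        ...

class RubricEvaluator(LLMEvaluator):
    """A non-classification eval: scores a free-form answer against a
    rubric instead of mapping it onto a fixed set of labels."""

    def __init__(self, llm_fn):
        # llm_fn stands in for whatever LLM client call you use
        # (e.g. a chat-completion wrapper returning a string).
        self.llm_fn = llm_fn

    def evaluate(self, record: dict) -> EvalResult:
        prompt = (
            "Score the answer from 0 to 10 against the rubric.\n"
            f"Question: {record['question']}\n"
            f"Answer: {record['answer']}\n"
            "Reply as '<score>: <one-sentence reason>'."
        )
        reply = self.llm_fn(prompt)
        score_text, _, reason = reply.partition(":")
        score = float(score_text.strip()) / 10.0
        return EvalResult(
            label="pass" if score >= 0.5 else "fail",
            score=score,
            explanation=reason.strip(),
        )

# Usage with a stubbed LLM call:
fake_llm = lambda prompt: "8: The answer covers all rubric points."
result = RubricEvaluator(fake_llm).evaluate(
    {"question": "What is RAG?", "answer": "Retrieval-augmented generation ..."}
)
print(result.label, result.score)  # pass 0.8
```

The point is just that the judging logic (prompt construction, response parsing, scoring) lives entirely in your subclass, so it isn't constrained to the classification template.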