Integrating LiteLLM with LlamaIndex: Seeking a Minimal Working Example
I am trying to run an evaluation with Phoenix/Arize on a LlamaIndex pipeline, but using an open-source model. LiteLLM is supposedly supported, but when I use it within LlamaIndex I get errors such as "field default_concurrency missing", and once I added such a field, the message "reload-client missing". Needless to say, I must be doing something wrong or missing some important point.

Can somebody please point me to, or provide me with, an MWE using LlamaIndex and an open-source model, with the use of an evaluator, for example:

`hallucination_evaluator = HallucinationEvaluator(model=eval_llm)`

If I am posting in the wrong channel, I apologize.

Gordon
