Setting Up Arize Evals for Document Retrieval in LLM Applications
Hey guys, my organisation is just getting started with Arize and I'm struggling to set up tracing/evals for document retrieval. I'm sure I'm missing something obvious, so hoping someone can help.

I want to use UI-configured evals (running on Arize compute) to evaluate the relevance of the documents retrieved for an LLM application. The retrieval step takes an input query and returns a list of chunks/documents. I want to run the relevance eval once for every input-document pair, i.e. the same input paired with each of the retrieved documents in turn.

However, all I can figure out how to do is run the eval against a single document; I can't see how to iterate through the list. Is this possible with UI-configured evals, or do I need to write custom tracing that flattens the output into one span (or row) per document?
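For context, this is roughly what I mean by flattening, except as per-document attributes on a single retriever span rather than many spans. I believe Arize/Phoenix read the OpenInference retrieval convention (`retrieval.documents.<i>.document.*`), but the exact attribute keys here are my assumption, and `flatten_documents` is just a name I made up, not an Arize API:

```python
# Sketch: flatten a list of retrieved documents into span attributes using
# the (assumed) OpenInference convention retrieval.documents.<i>.document.*,
# so each document is individually addressable by an eval.
# flatten_documents is a hypothetical helper, not part of any Arize SDK.

def flatten_documents(docs):
    """Map [{'id': ..., 'content': ..., 'score': ...}, ...] to flat attributes."""
    attrs = {}
    for i, doc in enumerate(docs):
        prefix = f"retrieval.documents.{i}.document"
        attrs[f"{prefix}.id"] = doc.get("id", str(i))
        attrs[f"{prefix}.content"] = doc["content"]
        if "score" in doc:
            attrs[f"{prefix}.score"] = doc["score"]
    return attrs

docs = [
    {"id": "a", "content": "chunk one", "score": 0.91},
    {"id": "b", "content": "chunk two", "score": 0.77},
]
attrs = flatten_documents(docs)
# In real tracing code these would be set on the retriever span, e.g.:
#   with tracer.start_as_current_span("retrieve") as span:
#       span.set_attributes(flatten_documents(docs))
print(attrs["retrieval.documents.0.document.content"])  # "chunk one"
```

Is something like this the expected approach, or does the UI have a way to fan an eval out over the document list without custom instrumentation?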
