Hey guys, just a quick question. Sorry if I wasn't supposed to ask this in this channel; I looked around this Slack workspace and IMO this seemed like the best channel for it. Q: I've followed all the videos on Evals and some of the docs and examples, and I'm planning to use it on our RAG system to evaluate the outputs. From what I see in the docs and tutorials, we always need to fetch all the traces from the Arize dashboard, run the evals, and then push the results back to the platform again. Referred this - https://colab.research.google.com/github/Arize-ai/tutorials/blob/main/python/llm/evaluation/quickstart-evals.ipynb#scrollTo=gCIi4MPkq4i1 Is this the case every time? If not, what would be the best approach? Asking because every time we run the evals it fetches a lot of previous traces again, which isn't ideal IMO. Expectations: would it be possible to run the eval on a trace during the trace capture event and then send it to Arize? That would save multiple Arize platform calls and time. Looking forward to a response. THANK YOU
🔒[private user] Sure, will connect. Thank you guys for replying.
Got it, that's clear. I experimented with this using the open-source UI and it's working great.
TL;DR: Phoenix - run the evals as a sidecar and log them back.
I.e. run a filter-based fetch of spans from Phoenix into a dataframe, run the evals, and push the results back to the Phoenix UI periodically, right?
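For reference, a minimal sketch of what that periodic sidecar loop could look like, assuming the arize-phoenix package with the phoenix.evals helpers. Exact parameter names can vary by version, and the span filter, time window, column mapping, and eval template below are illustrative placeholders, not a prescribed setup:

```python
from datetime import datetime, timedelta, timezone

import phoenix as px
from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)
from phoenix.trace import SpanEvaluations
from phoenix.trace.dsl import SpanQuery

# Assumes a reachable Phoenix instance (e.g. via PHOENIX_COLLECTOR_ENDPOINT).
client = px.Client()

# Only pull spans newer than the last run instead of re-fetching everything;
# the one-hour window here is just a placeholder for your real bookmark.
last_run = datetime.now(timezone.utc) - timedelta(hours=1)

# Filter-based fetch of LLM spans into a dataframe. How you map span
# attributes to the template's input/reference/output columns depends on
# how your RAG spans are instrumented.
query = (
    SpanQuery()
    .where("span_kind == 'LLM'")
    .select(input="input.value", reference="input.value", output="output.value")
)
spans_df = client.query_spans(query, start_time=last_run)

# Run an LLM-as-a-judge eval over just the newly fetched spans.
eval_df = llm_classify(
    dataframe=spans_df,
    model=OpenAIModel(model="gpt-4o-mini"),
    template=HALLUCINATION_PROMPT_TEMPLATE,
    rails=list(HALLUCINATION_PROMPT_RAILS_MAP.values()),
    provide_explanation=True,
)

# Push the eval labels back so they show up next to the traces in the UI.
client.log_evaluations(SpanEvaluations(eval_name="Hallucination", dataframe=eval_df))
```

Running something like this on a schedule (cron, a worker loop, etc.) keeps the eval pass incremental, so each run only touches spans created since the previous one instead of the whole history.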
