Arize-Phoenix: Visualize Embeddings & Run RAG with Tracing
Hey adarsh k., Arize-Phoenix actually lets you project three datasets of embeddings (primary, reference, and corpus), which could map to fine-tuned, pre-fine-tuned, and knowledge base. While the projection of points is not guaranteed to preserve higher-dimensional distances, you can use it as a visual check on how your embeddings are shifting post fine-tuning. https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/llama_index_search_and_retrieval_tutorial.ipynb

You can also use Arize-Phoenix's new tracing capabilities to run RAG on the same set of queries pre/post fine-tuning. Phoenix tracing captures the score of each document retrieval and lets you explore the data as a dataframe; each document's retrieval score lives under document.score, so you could run statistical analysis on it if you wanted. Note, you could probably do this just by running a retriever on its own, but this approach lets you take advantage of LLM Evals for RAG relevancy pre/post fine-tuning. https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/tracing/llama_index_tracing_tutorial.ipynb
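As a rough sketch of the kind of statistical analysis you could run once you've pulled the document.score values out of the traced spans for your pre- and post-fine-tuning runs (the score lists below are made-up stand-ins, not real Phoenix output):

```python
# Hypothetical sketch: compare retrieval score distributions pre/post fine-tuning.
# Assumes you've already extracted the document.score values from Phoenix's
# spans dataframe for each run; the lists here are placeholder numbers.
from statistics import mean, stdev

pre_scores = [0.62, 0.55, 0.71, 0.48, 0.66]   # stand-in pre-fine-tuning scores
post_scores = [0.74, 0.69, 0.81, 0.63, 0.77]  # stand-in post-fine-tuning scores

def summarize(scores):
    """Return (mean, stdev) for a list of retrieval scores."""
    return mean(scores), stdev(scores)

pre_mean, pre_sd = summarize(pre_scores)
post_mean, post_sd = summarize(post_scores)

print(f"pre:  mean={pre_mean:.3f} sd={pre_sd:.3f}")
print(f"post: mean={post_mean:.3f} sd={post_sd:.3f}")
print(f"mean shift: {post_mean - pre_mean:+.3f}")
```

From there you could go further (paired tests across the same queries, per-query deltas, etc.) depending on how rigorous you want the pre/post comparison to be.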
