How to Submit RAG Relevancy Evaluations Back to Phoenix
Sorry if I missed this in the docs. Once I run RAG relevancy evals with:
```python
from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    OpenAIModel,
    download_benchmark_dataset,
    llm_classify,
)
from phoenix.session.evaluation import get_retrieved_documents

retrieved_documents = get_retrieved_documents(px_client, project_name=PROJECT_NAME)

model = OpenAIModel(
    model="gpt-4-turbo-preview",
    temperature=0.0,
)

# The rails hold the LLM output to the specific values defined by the template.
# They strip stray text such as ",,," or "..." and ensure the binary label
# expected from the template is returned.
rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())

relevance_classifications = llm_classify(
    dataframe=retrieved_documents,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    model=model,
    rails=rails,
    provide_explanation=True,  # optional: generate explanations for each label
)
```

how do I write those evaluations back up to Phoenix?
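For context, the closest thing I've spotted is `log_evaluations` on the client together with `DocumentEvaluations` from `phoenix.trace`, so this is what I was about to try; am I on the right track? (The `eval_name` value is just a placeholder I picked, and I'm guessing `DocumentEvaluations` is the right container since `get_retrieved_documents` returns one row per retrieved document.)

```python
# My best guess at writing the eval results back, pieced together from the
# phoenix.trace module; please correct me if this is not the intended way.
from phoenix.trace import DocumentEvaluations

# relevance_classifications is the dataframe returned by llm_classify above;
# its index should line up with the rows from get_retrieved_documents.
px_client.log_evaluations(
    DocumentEvaluations(
        eval_name="Relevance",  # placeholder name I chose, not from the docs
        dataframe=relevance_classifications,
    )
)
```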
