I'm not sure where I first learned to upload evals to Arize; this may be outdated. It came from https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations/log-evaluations-to-arize:
import os

from arize.pandas.logger import Client
evals_df = llm_classify(...)
evals_df["eval.formatting_consistency_out_of_5.label"] = evals_df["label"]
evals_df["eval.formatting_consistency_out_of_5.explanation"] = evals_df["explanation"]
ARIZE_API_KEY = os.getenv("ARIZE_API_KEY")
ARIZE_SPACE_ID = os.getenv("ARIZE_SPACE_ID")
ARIZE_DEVELOPER_KEY = os.getenv("ARIZE_DEVELOPER_KEY")
ARIZE_MODEL_ID = os.getenv("ARIZE_MODEL_ID")
client = Client(
space_id=ARIZE_SPACE_ID,
developer_key=ARIZE_DEVELOPER_KEY,
api_key=ARIZE_API_KEY,
)
evals_df["context.span_id"] = primary_df["context.span_id"]
# save evals_df to JSON for debugging / inspection
evals_df.to_json("evals_df.json", orient="records")
client.log_evaluations_sync(evals_df, ARIZE_MODEL_ID)

It would be great to abstract this away, to keep devs from shooting themselves in the foot (personal experience). In my case, since I explicitly added formatting_consistency_out_of_5 to the eval.NAME.ATTRIBUTE openinference log, it would have been nice to have a formally configured eval name to keep myself from forgetting it.
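The abstraction I have in mind could look something like the sketch below: configure the eval name once and have a helper build the `eval.<name>.label` / `eval.<name>.explanation` columns and attach the span IDs, so the name can't drift between the column keys and the openinference log. This is a hypothetical helper, not an Arize API; the function name `eval_columns` and its signature are my own invention.

```python
# Hypothetical helper: centralizes the eval name so it is spelled exactly once.
# Assumes evals_df has "label"/"explanation" columns (as llm_classify produces)
# and primary_df carries "context.span_id", matching the snippet above.
import pandas as pd


def eval_columns(
    evals_df: pd.DataFrame, primary_df: pd.DataFrame, eval_name: str
) -> pd.DataFrame:
    """Build eval.<name>.label / eval.<name>.explanation columns plus span IDs."""
    out = pd.DataFrame()
    out[f"eval.{eval_name}.label"] = evals_df["label"]
    out[f"eval.{eval_name}.explanation"] = evals_df["explanation"]
    # Rows are assumed to be aligned by index, as in the original snippet.
    out["context.span_id"] = primary_df["context.span_id"]
    return out
```

The resulting frame would then be passed to `client.log_evaluations_sync(...)` as before, with the eval name living in exactly one place.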