Check out our release notes from this sprint!
Sessions for Debugging LLM Chatbots
Group traces by a session ID attribute
Identify conversation breakpoints or unhelpful chatbot interactions
Find poorly performing sessions and users
Create custom metrics based on evals using session or user ID
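The session grouping above can be sketched in plain Python. The span shapes and the `session.id` attribute name here are illustrative assumptions, not the product's actual export schema:

```python
from collections import defaultdict

# Hypothetical exported spans; "session.id" is an attribute the app sets on each span.
spans = [
    {"span_id": "a1", "attributes": {"session.id": "sess-1"}, "name": "user_msg"},
    {"span_id": "a2", "attributes": {"session.id": "sess-1"}, "name": "bot_reply"},
    {"span_id": "b1", "attributes": {"session.id": "sess-2"}, "name": "user_msg"},
]

# Group traces by the session ID attribute so a whole conversation can be inspected together.
sessions = defaultdict(list)
for span in spans:
    sessions[span["attributes"]["session.id"]].append(span["span_id"])

print(dict(sessions))  # {'sess-1': ['a1', 'a2'], 'sess-2': ['b1']}
```

Once spans are keyed by session, per-session metrics (e.g. eval scores averaged over a conversation) follow naturally from the same grouping.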
Join Evals on Existing Traces
Send latent evals through the Python SDK; they are joined daily onto spans with matching span_ids
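Conceptually, a latent-eval join attaches late-arriving eval records to previously recorded spans by span ID. This is a minimal sketch of that idea; the field names and data shapes are assumptions, not the SDK's actual interface:

```python
# Spans recorded earlier; evals arrive later ("latent") and are joined by span_id.
spans = {
    "a1": {"name": "bot_reply", "evals": {}},
    "b1": {"name": "bot_reply", "evals": {}},
}

latent_evals = [
    {"span_id": "a1", "eval_name": "helpfulness", "score": 0.9},
    {"span_id": "b1", "eval_name": "helpfulness", "score": 0.2},
    {"span_id": "zz", "eval_name": "helpfulness", "score": 0.5},  # no matching span: skipped
]

for ev in latent_evals:
    span = spans.get(ev["span_id"])
    if span is not None:  # only evals whose span_id matches an existing span are joined
        span["evals"][ev["eval_name"]] = ev["score"]

print(spans["a1"]["evals"])  # {'helpfulness': 0.9}
```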
Programmatically Create Dashboards
Replicate widgets across dashboards
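Replicating a widget across dashboards programmatically amounts to copying one widget definition into several dashboard configs. The config shapes below are purely illustrative, not the product's real API payloads:

```python
import copy

# Hypothetical widget and dashboard configs; real API payloads will differ.
widget = {"type": "timeseries", "metric": "latency_p95", "title": "Latency"}

dashboards = [
    {"name": "team-a", "widgets": []},
    {"name": "team-b", "widgets": []},
]

# Replicate the same widget across every dashboard.
for dash in dashboards:
    dash["widgets"].append(copy.deepcopy(widget))  # deep copy so edits stay independent

print([d["widgets"][0]["metric"] for d in dashboards])  # ['latency_p95', 'latency_p95']
```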
GPT-4o Support
Available in the prompt playground with vision (image) support, prompt template logging, and tracing