Thanks for the prompt reply!
Following your suggestion, I called `client.get_trace_dataset(project_name='transcript-agent-mvp')` again and inspected the returned `trace_dataset` object.
Here's what I found:
1. `trace_dataset.dataframe` was populated (125 spans, 32 columns), but none of the columns obviously corresponded to the manual annotations we added in the UI (e.g. 'Final Answer Quality', 'SQL Correctness').
2. The `trace_dataset.evaluations` list was still empty.
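For reference, this is roughly how I scanned the dataframe for annotation columns. The column names and annotation labels below are stand-ins for our actual project data, and the dummy dataframe substitutes for the real `trace_dataset.dataframe`:

```python
import pandas as pd

# Stand-in for trace_dataset.dataframe; our real one has 125 rows x 32 columns.
# These column names are illustrative, not an exhaustive Phoenix schema.
df = pd.DataFrame(columns=[
    "context.span_id", "name", "span_kind",
    "attributes.input.value", "attributes.output.value",
])

# Annotation names we added in the UI (labels specific to our project).
ui_annotations = ["Final Answer Quality", "SQL Correctness"]

# Look for any column whose name mentions one of the annotation labels.
matches = [
    col for col in df.columns
    if any(label.lower() in col.lower() for label in ui_annotations)
]
print(matches)  # -> [] : no annotation columns found
```

Running the equivalent check against the real dataframe also came back empty, which is what led me to conclude the UI annotations aren't surfaced in the columns.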
This seems to confirm that the UI annotations aren't currently exposed through these standard retrieval methods.
I understand the annotation overhaul (#5917) is coming and will improve this. In the meantime, is there any method or workaround available today (even if less ergonomic) for programmatically retrieving these UI-added annotations? We'd like to use them for analysis now if possible.
Thanks for your help!