Auto-Labeling Spans with batchUpdateAnnotations When the LLM Judge Node Fails (LangGraph Workflow)
Hey! I have a LangGraph-based AI workflow and I'm using the batchUpdateAnnotations mutation to automatically label spans when our LLM judge node fails (e.g. timeout, parse error). We create a custom OTel span via startActiveSpan, extract the span ID, and fire the annotation after span.end(); spans are exported through a BatchSpanProcessor.

The mutation consistently returns BatchUpdateAnnotationSuccess: true with the correct spanId and label, and the span itself is visible in the trace UI, but the Annotations tab for that span is always empty.

Here is the exact payload being sent:
```json
{
  "input": {
    "modelId": "<base64-encoded project ID>",
    "recordAnnotationUpdates": [
      {
        "recordId": "207c5e526c68c0de",
        "startTime": "2026-03-27T00:00:00.000Z",
        "annotationUpdates": [
          {
            "annotationConfigId": "<our config ID>",
            "annotation": {
              "name": "Judge Failure Annotation",
              "label": "llm_error",
              "annotationType": "Label"
            }
          }
        ],
        "note": { "text": "..." }
      }
    ]
  }
}
```

Would appreciate any help I can get. Thanks! 😄
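Edit: in case it helps rule out a formatting mismatch on our side, here is a minimal sketch of how we derive the recordId and startTime fields from the span before firing the mutation. The helper names are illustrative, not our exact code; the `HrTime` tuple shape matches `@opentelemetry/api`, and in OTel JS `spanContext().spanId` is already the 16-char lowercase hex string, so we pass it through unchanged.

```typescript
// Sketch only: how we turn span data into the recordId/startTime
// fields of the batchUpdateAnnotations payload shown above.

// [seconds, nanoseconds] pair, same shape as HrTime in @opentelemetry/api.
type HrTime = [number, number];

// The mutation expects an ISO-8601 UTC timestamp, so we convert the
// hrtime pair to epoch milliseconds and format with Date.toISOString().
function hrTimeToIso(t: HrTime): string {
  const millis = t[0] * 1000 + Math.floor(t[1] / 1e6);
  return new Date(millis).toISOString();
}

// spanId comes straight from span.spanContext().spanId (already hex).
function toRecordFields(spanId: string, startTime: HrTime) {
  return { recordId: spanId, startTime: hrTimeToIso(startTime) };
}

// Example with the values from the payload above:
const fields = toRecordFields("207c5e526c68c0de", [1774569600, 0]);
console.log(fields.recordId);  // -> 207c5e526c68c0de
console.log(fields.startTime); // -> 2026-03-27T00:00:00.000Z
```

Happy to share more of the workflow code if that's useful.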
