I am looking to contextualize when a prompt is cancelled in tracing/sessions. At first, just knowing it was cancelled should be fine, but I'd like to expand it to "the user clicked stop", etc. at some point. Does anything like that exist now? It's a bit interesting: the session thread just shows undefined, but I can see the backend is still working since the LLM stream gets a response. Thanks!
It seems likely that Phoenix's Prompt Playground now supports client-side run cancellation, so you can detect when a user clicks "stop" and mark a prompt as cancelled on the client. However, server-side cancellation (actually halting the backend LLM run) is not yet implemented, so the backend may continue processing even after the UI shows cancellation. There is no explicit mention in the data sources of a built-in way to annotate or propagate cancellation events (like "user clicked stop") into traces or sessions metadata, nor is there documentation on surfacing this in the session thread UI; currently, cancelled runs may just show as "undefined" in the UI while the backend continues working. If you want to expand this to include more detailed cancellation context (e.g., "user clicked stop"), you would likely need to implement custom metadata or annotations yourself, as there is no out-of-the-box support for this in Phoenix at this time.

Would you like more detail on how to add custom annotations or metadata to traces or sessions to track cancellation events?

Sources:
- feat: Add cancel button to playground runs
I am using Mastra with AG-UI/CopilotKit. I can poke around to see if there is anything more to build on top of.
Yeah, the easiest thing to do here would be to set an attribute on a Mastra span using OTEL. This way you're fully in control.
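Something like this, just as a minimal sketch with @opentelemetry/api (the attribute names here are made up for illustration, not any official convention):

```typescript
import { trace } from "@opentelemetry/api";

// Call this from the stop-button handler: tag whatever span is
// currently active so the cancellation shows up on the trace.
// "app.cancelled" and "app.cancellation_reason" are hypothetical names.
export function markRunCancelled(reason: string) {
  const span = trace.getActiveSpan();
  span?.setAttribute("app.cancelled", true);
  span?.setAttribute("app.cancellation_reason", reason); // e.g. "user_clicked_stop"
}
```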
Does Phoenix/OpenInference have any attributes for this case?
Not specifically, no. You could always add it to the metadata, which would make it easy to see in the UI and filter on.
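For example, a sketch assuming the OpenInference `metadata` convention (a JSON string set as a span attribute; the payload shape is my own assumption):

```typescript
import { trace } from "@opentelemetry/api";

// Attach cancellation context as metadata so Phoenix surfaces it
// alongside the span. The keys in the payload are illustrative.
trace.getActiveSpan()?.setAttribute(
  "metadata",
  JSON.stringify({ cancelled: true, reason: "user_clicked_stop" })
);
```

Then you should be able to filter on it in Phoenix with something like `metadata['reason'] == 'user_clicked_stop'`.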
Fair enough. I think part of this would also be some hoisting into the session view, so rather than showing an undefined response it would show a clear reason why one was not present.
I see, that makes a lot of sense
I could overwrite the output attribute, but that feels a little hacky when there could just be a fallback that checks for some kind of reason when there is no output or it is undefined.
Yeah, makes sense. With Mastra, as you know, we're taking the telemetry from Mastra itself, so we'd have to tap into its mechanics. If they have this info within the framework, maybe we could figure out how to slot it into our conventions to drive a better UI flow. I put in a ticket to start brainstorming ideas: https://github.com/Arize-ai/openinference/issues/2003
Awesome. Yeah, this is definitely multifaceted. I figured I'd ask since it feels like a solid concept to bake into the Phoenix platform in general.
I'm getting pretty good at dealing with Mastra these days 😅
Yeah, I saw! That's so great. Let me know if you need any help, but I think you have more pull than me these days :) I chatted with Sam Bhagwat about 2 months ago and I know he cares a lot about this stuff, so let me know. Can set something up.
For sure. I'm happy to provide more context too for this "there is no output" initiative. While I agree with you about using metadata, we have some nontechnical folks who are using Phoenix for product discovery and hand labeling, and it would be great to give them that context a little more easily.
Gotcha. So like some indicator on this view or on the sessions list?
