Hello! In the sessions UX/UI, where you see the messages grouped under a unique session ID like a chat, are only AI and human/user messages supported for display? If an API can also return tool_call or tool_response messages, are those message types not directly supported, meaning I need to implement something custom?
The Phoenix sessions UI supports displaying not only AI and user messages, but also tool call and tool response messages. Recent updates added server and UI support for "tool" role messages and for assistant messages with tool_calls attached, so tool calls and tool results are now visible in the chat/session view without custom implementation, as long as your traces follow the OpenInference specification and your instrumentation is up to date. For Anthropic and OpenAI, tool calls and tool results are normalized and shown in the UI, and the playground/editor also supports these message types natively (ad1a1252, 3e1d179c, 6fd930a8).

If your traces are not showing tool calls/results, check that your instrumentation emits messages in the OpenInference format, including the correct roles and attributes for tool calls and tool results. No custom UI work is needed unless you use a nonstandard message format or an unsupported provider (4b6bf9f4, 5952).

Would you like more detail on how to structure your traces or examples for a specific provider?

Sources:
- ad1a1252
- 3e1d179c
- 6fd930a8
- 4b6bf9f4
- 5952
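For example, a hand-instrumented LLM span carrying a tool call and a tool result might set attributes like the following. This is a minimal sketch based on the OpenInference message conventions; `search_product` and the argument payload are made-up examples, so double-check the attribute names against the current semantic-conventions reference for your SDK version:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("llm_turn") as span:
    span.set_attribute("openinference.span.kind", "LLM")

    # Assistant message that carries a tool call:
    span.set_attribute("llm.output_messages.0.message.role", "assistant")
    span.set_attribute(
        "llm.output_messages.0.message.tool_calls.0.tool_call.function.name",
        "search_product",
    )
    span.set_attribute(
        "llm.output_messages.0.message.tool_calls.0.tool_call.function.arguments",
        '{"query": "laptop"}',
    )

    # Tool result relayed back on the next call, as a "tool" role message:
    span.set_attribute("llm.input_messages.0.message.role", "tool")
    span.set_attribute("llm.input_messages.0.message.content", "search_product: success")
```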
I'm talking about the "sessions" view. We have a multi-turn API that lets the client execute the tools on their side, so our agent just ends its turn with the tool call messages and waits until the client comes back to it with the tool result:
Hi Brais. My understanding is that you are looking for a way to segment the API call traces within the session. I'm a bit confused, because you mentioned that the client/user is executing the tools on their side and then simply relaying the tool result back to the agent. So the tool/API calls themselves are not being traced, since they are not part of your application; the client just runs them on their side. If you would like to segment the tool response messages coming from the client, you can wrap those messages like this:

```python
with using_user("API responses"):
    send_to_agent("search_product: success")
```
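If you also want those relayed messages to land in the right conversation in the sessions view, you can combine this with the session context manager. A sketch assuming the `openinference-instrumentation` context managers; `send_to_agent` and the session ID are placeholders for your own code:

```python
from openinference.instrumentation import using_session, using_user

# Spans created inside this block are tagged with the session ID and a
# user label, so the client-relayed tool results show up in that session
# and can be filtered separately in Phoenix.
with using_session(session_id="session-abc-123"), using_user("API responses"):
    send_to_agent("search_product: success")
```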
We have an agent created through the LangChain prebuilt component "create_agent" (from langchain 1.0.0), with a middleware that stops execution when the agent decides to call a tool with N args. Then the client is in charge of executing that tool locally and coming back with the tool response to continue the execution.
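Roughly like this, with placeholder names. A minimal sketch of the round trip, assuming `agent` is the graph returned by `create_agent` (already configured with our interrupting middleware) and `execute_locally` stands in for the client-side tool runner; the exact resume mechanics depend on the middleware and checkpointer setup:

```python
from langchain_core.messages import HumanMessage, ToolMessage

state = agent.invoke({"messages": [HumanMessage("find me a laptop")]})
last = state["messages"][-1]

if getattr(last, "tool_calls", None):
    # The agent stopped on a tool call; hand it to the client for execution.
    results = [
        ToolMessage(
            content=execute_locally(tc["name"], tc["args"]),
            tool_call_id=tc["id"],
        )
        for tc in last.tool_calls
    ]
    # Resume the turn by sending the tool results back to the agent.
    state = agent.invoke({"messages": state["messages"] + results})
```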
