Q1. Hey, are there any examples of combining tool calls with using_session to visualize tool calls/results as part of a multi-turn conversation?
Yes, there are examples of using using_session to track tool calls and results in a multi-turn conversation. The openinference-instrumentation package provides utilities like using_session to specify a session ID, which helps group and visualize related traces. This is particularly useful for tracking multi-turn conversations in applications like chatbots or virtual assistants. You can find examples in the Phoenix documentation and GitHub repository. Would you like more detail on how to implement this in your application? Sources:
- Setup Sessions
- GitHub repository
If I understand correctly, you should be able to follow any sessions example like this one: https://github.com/Arize-ai/phoenix/blob/main/tutorials%2Ftracing%2Fopenai_sessions_tutorial.ipynb and add tools into the request parameters. Is there anything else in particular you are looking for or would like to see?
I'd like to see how to pass tool_call and tool_result so they're properly visible in the session UI. e.g.
it doesn't include the tool definition from tool_call (1st screenshot), even though the traces UI includes it (2nd screenshot).
tool_result is shown as raw JSON.
I can't use the @tracer.tool decorator as my tools are not in-memory Python objects.
Ah I see. I believe our sessions view UI is not currently equipped to display tool calls. Let me double check this with the team and get back to you
ok, thank you!
Q2. May I also ask, when should span.set_tool(...) be used? Is it to denote all the tools available to the LLM, when the LLM selects a tool, or when a tool result is available?
Q3. What does SpanKind.TOOL mean? The docs say:
A Tool is a span that represents a call to an external tool such as a calculator or a weather API.
Does this mean a span should be started when the LLM decides on a tool name and ended after tool execution is done? How does span.set_tool fit in here?
(I have a few more questions, happy to start new threads if that is recommended)
No worries at all, feel free to drop them all here 🙂 myself and other members of the team can answer them as we can
Q4. How to add thinking/reasoning blocks to the span? I don't see special attributes in the semconv.
hey Deepankar M., you are right that a tool span is intended to capture the start / end of the tool call the assistant requested. So say you have the weather API: span.set_tool would set the weather API definition. https://github.com/Arize-ai/openinference/blob/b6f207c7a3b4639c626a757b391a0d9ab9b[…]nce-instrumentation/src/openinference/instrumentation/config.py The input would then be the dict of parameters and the output the response (e.g. 47 degrees)
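To spell that out, here is a minimal, dependency-free sketch of the flat attributes a TOOL span would end up carrying under the OpenInference semantic conventions — the definition from set_tool goes on the tool.* attributes, and the call's arguments and response go on input.value / output.value. The get_weather tool and its values are hypothetical:

```python
import json

# Hypothetical tool call: the assistant asked for get_weather(city="London").
tool_name = "get_weather"
tool_args = {"city": "London"}
tool_output = "47 degrees"

# Flat span attributes following the OpenInference semantic conventions:
# "openinference.span.kind" marks this as a TOOL span, set_tool maps to
# the tool.* attributes (the tool's definition), and the concrete call's
# input/output land on input.value / output.value.
span_attributes = {
    "openinference.span.kind": "TOOL",
    "tool.name": tool_name,
    "tool.description": "Returns the current weather for a city.",
    "tool.parameters": json.dumps(tool_args),
    "input.value": json.dumps(tool_args),  # what the tool was called with
    "output.value": tool_output,           # what the tool returned
}

print(span_attributes["tool.name"])  # get_weather
```

Under this reading, set_tool answers Q2's "which tool" (the definition), while starting the span when the LLM names the tool and ending it after execution answers Q3's lifecycle question.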
Very good point. We are actively starting the work to track reasoning tokens and will definitely be adding reasoning blocks (https://github.com/Arize-ai/openinference/issues/1370). Are you using Anthropic, Deepankar M.? Or a DeepSeek model?
I'm using Anthropic
Perfect. Then yeah that one should not be so bad.
This is what I'm doing right now, lol 😁
The input would then be the dict of parameters and the output the response (e.g. 47 degrees)
span = tracer.start_span(openinference_span_kind="tool", ...)
tool_name = "get_weather"
tool_args = {"city": "London"}
tool_output, is_error = execute_external_tool(tool_name, tool_args)
span.set_tool(
    name=tool_name,
    description="...",
    parameters=tool_args,
)
span.set_status(Status(StatusCode.ERROR if is_error else StatusCode.OK))
span.end()
Do you mean something like this? Is parameters the input args to the tool? How do I pass tool_output? Should I do a span.set_output(...) and pass the complete tool_result?
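Following the logic of the earlier answer (parameters as the input, the tool's response as the output), the remaining piece would be serializing the tool result onto the span's output attributes. Here is a dependency-free sketch of that step — set_output here is a stand-in illustrating the idea, not the real openinference span method, and the result payload is hypothetical:

```python
import json

# Hypothetical structured result returned by an external (non-Python) tool.
tool_result = {"temperature": 47, "unit": "F", "condition": "cloudy"}

def set_output(attributes, value):
    # Mirrors the idea behind span.set_output(...): strings go onto
    # output.value as-is with a plain-text mime type; structured values
    # are JSON-serialized and tagged as JSON so a UI can pretty-print
    # them instead of showing one raw string.
    if isinstance(value, str):
        attributes["output.value"] = value
        attributes["output.mime_type"] = "text/plain"
    else:
        attributes["output.value"] = json.dumps(value)
        attributes["output.mime_type"] = "application/json"
    return attributes

attrs = set_output({}, tool_result)
print(attrs["output.mime_type"])  # application/json
```

The mime-type tag is the detail that decides whether a viewer renders the result as formatted JSON or as an opaque string.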
