Is there a way to display Anthropic tool calls in chat messages similar to OpenAI tool calls?
Yes, Phoenix supports displaying Anthropic tool calls in chat messages. According to the GitHub pull request, Phoenix has implemented support for Anthropic tool messages by allowing users to add tool results to chats as "role: tool" messages. This implementation ensures that tool calls are properly instrumented and displayed in the chat interface, similar to OpenAI tool calls.
Would you like more detail on how this is implemented?
Sources:
- GitHub pull request
It seems like that PR was focused on the playground?
We're currently running 7.0.1, and this is what I'm seeing:
Those two FunctionTool.acall calls were triggered by that Anthropic.achat
Hey 🔒[private user], are you using LangChain or CrewAI? We might not have covered that ground quite yet. Can you let 🔒[private user], 🔒[private user], and me know your setup? Happy to get it prioritized!
This is with Llama Index 👍🏼
With OpenAI, we see something like this:
Even with OpenAI, the actual output is blank; I think that's because the LLM is only making a tool call.
Looking at "input" of a later call in the workflow, I can see the tool call information (in both the OpenAI case and Anthropic case)
OpenAI:
"ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={'tool_calls': [ChatCompletionMessageToolCall(id='call_RUIL4vYzqq8GPfjrAU828A9x', function=Function(arguments=...
Anthropic:
"ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={'tool_calls': [ChatCompletionMessageToolCall(id='call_upXdw9LIwxbWrLHG95IOau9y', function=Function(arguments=...
Does that help? LMK if you want more details.
Actually I copied an OpenAI example twice. 🤦🏼 This is an Anthropic example for above:
"ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={'tool_calls': [{'id': 'toolu_01MeY9woeX12dn5PQ62skjnZ', 'input': {...