Hi, after switching our LlamaIndex workflows from the deprecated ChatMemoryBuffer to the new Memory class, function calls in chat inputs are no longer rendering properly in Phoenix.
I tried upgrading openinference-instrumentation-llama-index to 4.3.0, but that didn't seem to help.
Phoenix relies on proper instrumentation and message formatting to display function calls in chat inputs. If function calls stopped rendering after the switch to Memory, first make sure you are on the latest LlamaIndex and Phoenix instrumentation versions, and that your tool/function call messages follow the new OpenAI `tool_calls` array format. Recent Phoenix updates added support for this format, but rendering can break if the instrumentation or the message structure is outdated or incompatible (see "feat: semantic conventions for tool_calls array in OpenAI ChatCompletion messages" and "[ENHANCEMENT] capture multiple function calls from the new tool_calls array in the OpenAI chat completion object", https://github.com/Arize-ai/phoenix/issues/1767).

If you are on the correct versions and still see the issue, it may come down to how LlamaIndex's new memory system structures messages, or how the Phoenix UI expects tool call data to be shaped. Double-check that your instrumentation is up to date and that your LlamaIndex agent emits tool call information in the expected format. If the problem persists, it may require an update to the Phoenix instrumentation or a bug report (see "Feature Request: Display Tool Choices for Function Calls in LLM Tracing Dashboard"). Would you like more detail on how to verify or update your instrumentation and message formatting?

Sources:
- feat: semantic conventions for tool_calls array in OpenAI ChatCompletion messages
- https://github.com/Arize-ai/phoenix/issues/1767
- Feature Request: Display Tool Choices for Function Calls in LLM Tracing Dashboard
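For reference, a minimal sketch of what an assistant message with an OpenAI-style `tool_calls` array looks like as plain JSON (the `get_weather` tool name, call id, and arguments are hypothetical, not from this thread):

```python
import json

# Hypothetical assistant message carrying a tool call in the OpenAI
# ChatCompletion "tool_calls" array format that Phoenix renders.
assistant_message = {
    "role": "assistant",
    "content": None,  # no text content when the model only calls tools
    "tool_calls": [
        {
            "id": "call_abc123",        # hypothetical call id
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                # note: arguments are a JSON-encoded string, not a dict
                "arguments": json.dumps({"city": "Paris"}),
            },
        },
    ],
}

# Each entry's arguments should round-trip through json.loads.
for call in assistant_message["tool_calls"]:
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```

If the messages your agent emits deviate from this shape (for example, arguments as a dict instead of a JSON string), the UI may not be able to render the calls.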
💡 Hint: Mention RunLLM in the thread for followups.
This is a redacted example from the inputs tab if helpful:
"ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={'tool_calls': [{'id': 'call_TFp7xX0UwDUFkzBBYBJfwuwh', 'function': {'arguments': '{\"<redacted>\": \"<redacted>\"}', 'name': '<redacted>'}, 'type': 'function'}, {'id': 'call_yEtxtP7grtw2eqXueUfHNYX5', 'function': {'arguments': '{\"<redacted>\": \"<redacted>\"}', 'name': '<redacted>'}, 'type': 'function'}]}, blocks=[])"

Hmm, that is definitely peculiar. Let's get an issue filed for this.
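One thing that stands out in that repr: the tool calls live only in `additional_kwargs` while `blocks` is empty, so there is no message content for the UI to fall back on. A stdlib-only sketch of pulling the call names and arguments out of such a payload (the field layout is copied from the redacted example above; the `lookup` tool name and argument values are hypothetical placeholders for the redacted parts):

```python
import json

# Shape copied from the redacted ChatMessage: tool calls sit under
# additional_kwargs["tool_calls"]; blocks (the message content) is empty.
additional_kwargs = {
    "tool_calls": [
        {
            "id": "call_TFp7xX0UwDUFkzBBYBJfwuwh",
            "type": "function",
            # hypothetical name/arguments standing in for the redacted values
            "function": {"name": "lookup", "arguments": json.dumps({"q": "x"})},
        },
        {
            "id": "call_yEtxtP7grtw2eqXueUfHNYX5",
            "type": "function",
            "function": {"name": "lookup", "arguments": json.dumps({"q": "y"})},
        },
    ]
}

def extract_tool_calls(kwargs: dict) -> list[tuple[str, dict]]:
    """Return (name, parsed-arguments) pairs from an additional_kwargs dict."""
    out = []
    for call in kwargs.get("tool_calls", []):
        fn = call.get("function", {})
        out.append((fn.get("name"), json.loads(fn.get("arguments", "{}"))))
    return out

print(extract_tool_calls(additional_kwargs))
```

A test script along these lines (building a message with this exact shape and checking what the instrumentation emits for it) would make a good minimal reproduction for the issue.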
Would you mind filling in this issue with more details? https://github.com/Arize-ai/openinference/issues/1727
Updated with a test script to demonstrate the issue 👍🏻
