Anybody know why, when using the LlamaIndex ReAct Agent, we see system prompt + user + assistant panels in the AGENT span in Phoenix, but when using the LlamaIndex OpenAI Agent we only see user + assistant (and tool), and no system prompt? 🤔 First SC: ReAct Agent. Second SC: OpenAI Agent
from llama_index.agent.openai import OpenAIAgent
vs.
from llama_index.core.agent import ReActAgent
Their usage is pretty much the same: we initialize each one using the from_tools function, then make the call with the arun_step method
hmm yeah, we are not passing any custom system prompt that overrides the default function-calling one. I guess we shouldn't expect to see this, because the OpenAI agent is likely using the function-calling API? I was assuming Phoenix would show an LLM span where the input includes all the tool schema descriptions (like in ReAct). But I suppose I had the wrong understanding; I guess that prompt would be on OpenAI's side and likely not visible to Phoenix
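For context: with the function-calling API, the tool schemas travel in a structured `tools` field of the request body rather than being rendered into prompt text, so there is no system-prompt text for a tracer to display. A simplified sketch of such a request body (field names follow the OpenAI chat completions API; the `add` tool here is made up for illustration):

```json
{
  "model": "gpt-4",
  "messages": [
    {"role": "user", "content": "What is 2 + 3?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "add",
        "description": "Add two numbers",
        "parameters": {
          "type": "object",
          "properties": {
            "a": {"type": "number"},
            "b": {"type": "number"}
          },
          "required": ["a", "b"]
        }
      }
    }
  ]
}
```

A ReAct agent, by contrast, serializes those same tool descriptions into the prompt itself, which is why they show up as visible prompt text in the LLM span.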
i think the function schema in the actual OpenAI request might not be captured by the instrumentor. Alternatively, you can also instrument OpenAI itself with the following. You may have to put this at the very top of your program so it runs before importing llama-index:
from phoenix.trace.openai import OpenAIInstrumentor
OpenAIInstrumentor().instrument()
^ this will capture the actual JSON object sent to OpenAI, and it should contain the function schema
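Side note on why the ordering matters: if a consumer library grabs a reference to the raw client function before the instrumentor wraps it, the wrapper never runs. A toy, self-contained sketch of that effect — no real Phoenix or OpenAI code here, all names are made up:

```python
import types

calls = []  # records whether our wrapper was hit

def make_client():
    """Stand-in for the openai module, exposing one raw API function."""
    mod = types.ModuleType("fake_openai")
    mod.create = lambda: "raw response"
    return mod

def instrument(mod):
    """Stand-in for an instrumentor: wraps mod.create to record each call."""
    original = mod.create
    def traced():
        calls.append("traced")
        return original()
    mod.create = traced

# Instrument BEFORE the consumer binds the function: the wrapper is hit.
client = make_client()
instrument(client)
bound = client.create      # simulates a library grabbing the function at import time
bound()
assert calls == ["traced"]

# Bind first, instrument AFTER: the consumer keeps a reference to the raw
# function, so the wrapper never runs -- instrumenting is now too late.
calls.clear()
client = make_client()
bound = client.create
instrument(client)
bound()
assert calls == []
```

This is why the `OpenAIInstrumentor().instrument()` call belongs at the very top of the program, before llama-index is imported.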
oh nice okay let me try this !
thank you!
np
