I'm trying CrewAI and Phoenix to understand better how CrewAI works. Is it expected that the traces from openinference-instrumentation-crewai look like the attached? They're tricky to read, and the output (also attached) is not even valid JSON... It looks like CrewAI uses quite a different set of APIs from OpenAI's chat.completions.create.
Also attached is the output. When I run jq on it, it says:
jq keys /tmp/output.json
jq: parse error: Invalid numeric literal at line 1, column 20

Yes, it is expected that traces from openinference-instrumentation-crewai look different from those produced by direct OpenAI API calls. CrewAI orchestrates multi-agent workflows, so its traces capture higher-level agent/task orchestration, not just raw LLM completions. The output fields are not always valid JSON either: depending on how agents and tasks are defined, CrewAI outputs are often plain text or custom-structured objects. This is a known limitation and a design choice. Because CrewAI abstracts over multiple LLM calls and tool invocations, both the trace structure and the output format can differ significantly from OpenAI's chat.completions.create responses, which are strictly JSON-serializable and follow OpenAI's schema.

If you need more structured output or strict JSON, you may need to post-process the outputs or adjust your agent/task definitions. The traces are designed to capture the full workflow, not just the LLM calls, so some fields may be plain text or custom objects rather than strict JSON. Would you like a step-by-step breakdown of how to interpret or improve the trace outputs?

Sources:
- CrewAI tracing documentation
- OpenInference CrewAI Instrumentation README
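As a rough illustration of the post-processing mentioned above, the sketch below tries to parse a crew's final output as JSON and, failing that, extracts the first embedded JSON object from the surrounding prose. The `coerce_to_json` helper and the sample string are hypothetical (the string stands in for the contents of /tmp/output.json); this is a best-effort heuristic, not CrewAI's own API.

```python
import json
import re


def coerce_to_json(text: str):
    """Return parsed JSON if possible, else try to extract an embedded
    JSON object from surrounding prose, else return None."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Best-effort fallback: grab the first {...} span and try to parse it.
    # LLM outputs are often a JSON object wrapped in explanatory text.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None


# Hypothetical stand-in for the contents of /tmp/output.json:
# a JSON object wrapped in agent prose, which is why plain jq fails on it.
raw = 'Final Answer: {"topic": "tracing", "valid": true} -- done'
print(coerce_to_json(raw))  # {'topic': 'tracing', 'valid': True}
```

Alternatively, CrewAI tasks can be configured to emit structured output directly (see the task definitions in your crew), which avoids the need for this kind of salvage step.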
The code is adapted from https://arize.com/docs/phoenix/integrations/frameworks/crewai/crewai-tracing
