Yes, this is expected: traces from `openinference-instrumentation-crewai` look different from those produced by direct OpenAI API calls. CrewAI orchestrates multi-agent workflows, so its traces capture higher-level agent/task orchestration rather than raw LLM completions. The output fields are not always valid JSON, because CrewAI outputs are often plain text or custom structures, depending on how agents and tasks are defined. This is a known limitation and a deliberate design choice: CrewAI abstracts over multiple LLM calls and tool invocations, so the trace structure and output format can differ significantly from OpenAI's `chat.completions.create` responses, which are strictly JSON-serializable and follow OpenAI's schema. For more details, see the <https://arize.com/docs/phoenix/integrations/frameworks/crewai/crewai-tracing|CrewAI tracing documentation> and the <https://github.com/Arize-ai/openinference/blob/main/python/instrumentation/openinference-instrumentation-crewai/README.md|OpenInference CrewAI Instrumentation README>.
If you need structured or strictly JSON outputs, you may need to post-process the trace fields or adjust your agent/task definitions. The traces are designed to capture the full workflow, not just LLM calls, so some fields may be plain text or custom objects rather than strict JSON. Would you like a step-by-step breakdown of how to interpret or improve the trace outputs?
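As a rough illustration of the post-processing idea, here is a minimal sketch for recovering JSON from a trace output field. The function name `parse_output` and the embedded-object heuristic are illustrative assumptions, not part of the CrewAI or OpenInference APIs: it tries strict JSON first, then looks for a JSON object embedded in surrounding prose, and otherwise returns the value unchanged.

```python
import json
import re


def parse_output(value):
    """Best-effort parse of a trace output field.

    Returns a parsed object when the field is (or contains) valid JSON,
    otherwise returns the original value as plain text.
    """
    if not isinstance(value, str):
        return value  # already a structured object, or None
    # 1. The whole field is valid JSON.
    try:
        return json.loads(value)
    except json.JSONDecodeError:
        pass
    # 2. A JSON object is embedded in surrounding prose, e.g. agent commentary.
    match = re.search(r"\{.*\}", value, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    # 3. Leave plain text as-is.
    return value
```

This keeps valid JSON spans intact while degrading gracefully to raw text, which matches how mixed CrewAI outputs tend to look in practice.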
Sources:
- <https://arize.com/docs/phoenix/integrations/frameworks/crewai/crewai-tracing|CrewAI tracing documentation>
- <https://github.com/Arize-ai/openinference/blob/main/python/instrumentation/openinference-instrumentation-crewai/README.md|OpenInference CrewAI Instrumentation README>