Troubleshooting Token Usage in Phoenix Tracing Setup
Hello Phoenix Community and RunLLM, I'm currently working on an application that uses:
Phoenix for tracing
AutoGen for multi-agent conversations
OpenAI for Large Language Model (LLM) calls
I have set up tracing in my `Chatbot` class using `src.chatbot.pieces.tracing_piece.py` as follows:
# tracing_piece.py
# ...
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor
# ...
class PhoenixTracing:
    # ...
    def initialize(self):
        # ...
        OpenAIInstrumentor().instrument(tracer_provider=self.tracer_provider)
        AutogenAgentChatInstrumentor().instrument(tracer_provider=self.tracer_provider)
        # ...

And `tracing_piece.initialize()` is called within the `Chatbot` class constructor.

According to the Phoenix cost tracking documentation, auto-instrumentation with OpenAI and other supported providers should automatically capture token counts and model information using OpenInference semantic conventions. Specifically, it states that the `llm.token_count.prompt`, `llm.token_count.completion`, and `llm.token_count.total` attributes are required on each LLM span for cost aggregation.

Despite having both `OpenAIInstrumentor` and `AutogenAgentChatInstrumentor` enabled, I am not seeing cumulative token usage or cost data in my Phoenix traces, and individual LLM spans do not appear to have the expected `llm.token_count.*` attributes attached.

Could anyone provide guidance on why token usage might not be captured and aggregated with this setup? Any help would be greatly appreciated! Thank you.
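
Update: to narrow this down, I wrote a small standalone script (outside the Chatbot app) to check whether the OpenAI instrumentor attaches token counts at all. This is just a sketch: it assumes `gpt-4o-mini` and an `OPENAI_API_KEY` in the environment, and it swaps in an in-memory exporter purely so I can print span attributes directly, so Phoenix itself is not involved:

# check_token_counts.py - standalone sanity check (no Phoenix involved)
import openai
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter
from openinference.instrumentation.openai import OpenAIInstrumentor

# Collect finished spans in memory so their attributes can be inspected.
exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
OpenAIInstrumentor().instrument(tracer_provider=provider)

# One real completion call; gpt-4o-mini is an assumption, any chat model works.
client = openai.OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hi."}],
)

# If instrumentation works, each LLM span should carry these attributes.
for span in exporter.get_finished_spans():
    for key in (
        "llm.token_count.prompt",
        "llm.token_count.completion",
        "llm.token_count.total",
    ):
        print(span.name, key, "=", span.attributes.get(key))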
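
If the instrumentor turns out not to attach them, my fallback idea is to set the attributes manually from the response's `usage` object using the OpenInference semantic-convention constants. Again just a sketch, assuming the `SpanAttributes` constants in `openinference.semconv.trace` map to the `llm.token_count.*` names and that the Phoenix tracer provider is registered globally; happy to be told there is a better way:

# Fallback sketch: attach token counts manually from the OpenAI usage object.
import openai
from opentelemetry import trace
from openinference.semconv.trace import SpanAttributes

# Assumes the Phoenix tracer provider has already been set globally;
# otherwise get_tracer() returns a no-op tracer.
tracer = trace.get_tracer(__name__)
client = openai.OpenAI()

with tracer.start_as_current_span("llm-call") as span:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works here
        messages=[{"role": "user", "content": "Say hi."}],
    )
    usage = response.usage
    span.set_attribute(SpanAttributes.LLM_TOKEN_COUNT_PROMPT, usage.prompt_tokens)
    span.set_attribute(SpanAttributes.LLM_TOKEN_COUNT_COMPLETION, usage.completion_tokens)
    span.set_attribute(SpanAttributes.LLM_TOKEN_COUNT_TOTAL, usage.total_tokens)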
