Yes, you can access token counts for specific traces in code by reading the `llm.token_count.prompt` (input tokens) and `llm.token_count.completion` (output tokens) attributes from the relevant spans. These attributes are available whenever your instrumentation or provider supplies them, and you can retrieve them programmatically from the span data in Phoenix or through the OpenInference semantic conventions in your tracing code. In Python, for example, you can read them from the span object or from exported trace data.

If you use OpenInference auto-instrumentation with a supported provider (such as OpenAI or Anthropic), token counts are included in the trace spans automatically and can be queried directly in code or exported for analysis. For manual instrumentation, record these attributes on your spans yourself so they are available for later retrieval. Both paths are sketched below; for more details, see the Phoenix cost tracking documentation and the OpenInference semantic conventions.
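Here is a minimal sketch of the export path, assuming a running Phoenix instance reachable at the default endpoint. The flattened column names (`attributes.llm.token_count.prompt`, `attributes.llm.token_count.completion`, `context.trace_id`) follow Phoenix's spans-DataFrame convention and may differ slightly across versions:

```python
# Minimal sketch: pull per-span token counts from a running Phoenix instance.
# Assumes Phoenix is reachable at the default endpoint and that your
# instrumentation populated the OpenInference token-count attributes.
import phoenix as px

client = px.Client()  # connects to the local/default Phoenix server

# Export spans as a pandas DataFrame; OpenInference attributes are
# flattened into columns prefixed with "attributes."
spans = client.get_spans_dataframe()

token_cols = [
    "attributes.llm.token_count.prompt",      # input tokens
    "attributes.llm.token_count.completion",  # output tokens
]

# Keep only spans that actually report token counts
llm_spans = spans.dropna(subset=token_cols)

# Sum token usage per trace (column name assumed; check your Phoenix version)
per_trace = llm_spans.groupby("context.trace_id")[token_cols].sum()
print(per_trace)
```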
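And a hedged sketch of the manual-instrumentation path with OpenTelemetry, where `my_provider_call` is a hypothetical stand-in for your actual LLM client and the tracer provider is assumed to already export to Phoenix:

```python
# Hedged sketch: record the OpenInference token-count attributes on your own
# spans so Phoenix can pick them up. Assumes an OpenTelemetry tracer provider
# is already configured to export to Phoenix.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def call_llm_and_record(prompt: str) -> str:
    with tracer.start_as_current_span("llm_call") as span:
        # `my_provider_call` and the `usage` field shapes are hypothetical;
        # substitute your provider's actual client and response structure.
        response = my_provider_call(prompt)
        span.set_attribute("llm.token_count.prompt", response.usage.input_tokens)
        span.set_attribute("llm.token_count.completion", response.usage.output_tokens)
        return response.text
```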
Sources:
- Phoenix cost tracking documentation
- OpenInference semantic conventions
- Slack discussion on token counts