What could be the reason why Phoenix is not getting the number of tokens?
Phoenix may not be displaying the number of tokens due to several potential issues. One common reason is a bug in the integration with specific models or instrumentors, such as the issue with Anthropic models where token usage was not displayed correctly in the Phoenix dashboard despite being present in the response message. This was resolved by updating the openinference-instrumentation-langchain package to version 0.1.23, which fixed the token count display issue (source).

Another reason could be related to the configuration of the TokenCountingHandler or the UI component responsible for displaying token counts. For instance, in the case of VertexAI, the token count was visible in logs but not in the UI, suggesting a possible disconnect in how the data was passed from the backend to the frontend (source). Additionally, issues with the GraphQL query fetching token counts could also lead to missing token data in the UI (source).

Would you like more detail on any specific aspect?

Sources:
- https://github.com/Arize-ai/phoenix/issues/3814
- https://github.com/Arize-ai/openinference/issues/1287
- https://github.com/Arize-ai/phoenix/issues/6148
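To make the failure mode above concrete: Phoenix's UI reads token counts from OpenInference span attributes, so if an instrumentor never copies usage from the model response onto the span, the dashboard has nothing to show. A minimal sketch, assuming the standard OpenInference `llm.token_count.*` attribute names; the example spans are illustrative, not captured from a real trace:

```python
# Token-count attributes Phoenix reads from each LLM span,
# per the OpenInference semantic conventions.
TOKEN_COUNT_KEYS = (
    "llm.token_count.prompt",
    "llm.token_count.completion",
    "llm.token_count.total",
)

def missing_token_counts(span_attributes: dict) -> list[str]:
    """Return the token-count attributes absent from a span.

    If any are missing, the Phoenix UI has nothing to display,
    even when the raw model response included usage data.
    """
    return [k for k in TOKEN_COUNT_KEYS if k not in span_attributes]

# Illustrative spans: one fully instrumented, one where the
# instrumentor failed to record usage (as in the Anthropic bug).
good_span = {
    "llm.token_count.prompt": 12,
    "llm.token_count.completion": 34,
    "llm.token_count.total": 46,
}
buggy_span = {}  # usage was in the response but never copied to the span

print(missing_token_counts(good_span))   # []
print(missing_token_counts(buggy_span))  # all three keys reported missing
```

Checking a span export like this can quickly tell you whether the gap is in instrumentation (attributes never set) or in the UI/GraphQL layer (attributes set but not rendered).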
💡 Hint: Mention RunLLM in the thread for followups.
RunLLM So I am using the Gemini model with LangChain. But when I switched to OpenAI GPT, the tokens are now displayed.
The issue you're experiencing with token counts not displaying for the Gemini model but appearing for OpenAI GPT could be due to differences in how these models are integrated and instrumented within Phoenix. The LangChain integration with OpenAI has specific configurations to ensure token counts are captured, such as using the stream_options={"include_usage": True} parameter when streaming (source).

For the Gemini model, there might be a lack of similar configuration or support in the current version of the LangChain or Phoenix integration. It's possible that the Gemini model's integration does not yet fully support token counting or requires additional setup to enable this feature. There have been requests for a Gemini auto-instrumentor to address such issues (source).

Would you like more detail on how to configure these models or troubleshoot further?

Sources:
- https://docs.arize.com/phoenix/tracing/concepts-tracing/faqs-tracing#local-phoenix
- https://github.com/Arize-ai/openinference/issues/1281
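A quick way to tell whether the gap is on the provider side or the Phoenix side is to inspect the response object itself: LangChain chat models attach usage to the message's `usage_metadata` (keys `input_tokens`, `output_tokens`, `total_tokens`) when the provider reports it. A minimal sketch using plain dicts to stand in for responses; the values are illustrative:

```python
def usage_from_response(usage_metadata):
    """Normalize LangChain-style usage metadata, or report it missing.

    When streaming with OpenAI, usage is only populated if the request
    asks for it (e.g. stream_options={"include_usage": True}); if the
    provider integration never fills usage_metadata in, Phoenix has no
    token counts to record either.
    """
    if not usage_metadata:
        return None  # nothing for the instrumentor to forward to Phoenix
    return {
        "prompt": usage_metadata["input_tokens"],
        "completion": usage_metadata["output_tokens"],
        "total": usage_metadata["total_tokens"],
    }

# Illustrative: an OpenAI response with usage vs. a response where the
# provider integration left usage_metadata unset.
openai_usage = {"input_tokens": 12, "output_tokens": 34, "total_tokens": 46}
print(usage_from_response(openai_usage))  # normalized token counts
print(usage_from_response(None))          # None: explains a blank UI column
```

If `usage_metadata` is empty for Gemini but populated for OpenAI GPT on the same prompt, the missing counts are a provider-integration gap rather than a Phoenix display bug.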
Hey Muhammad K., it might be the Gemini + LangChain combo. If you have a minimal reproduction of the problem, file us an issue and we will definitely take a look. Sorry you hit this!
I filed a ticket to track! https://github.com/Arize-ai/openinference/issues/1391
