If the prompt is too large, it may exceed the model's maximum context length, which can cause errors and also affect the token count display. According to GitHub Issue #4730, token counts may not be displayed if the data is not correctly formatted or passed to the Phoenix backend. Ensure that the token count is included in the span's attributes and passed as a number to the UI components.

Would you like more detail on how to address this issue?

Sources:
- GitHub Issue #4730
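As a minimal sketch of the fix, the attributes below use the OpenInference-style token-count keys that Phoenix reads (the exact names assume your instrumentation follows those conventions), with counts passed as numbers rather than strings:

```python
# Sketch: build span attributes with numeric token counts so the
# Phoenix UI can display them. Attribute names assume OpenInference
# semantic conventions; adjust to match your instrumentation.
def token_count_attributes(prompt_tokens: int, completion_tokens: int) -> dict:
    """Return span attributes with token counts as numbers, not strings."""
    return {
        "llm.token_count.prompt": int(prompt_tokens),
        "llm.token_count.completion": int(completion_tokens),
        "llm.token_count.total": int(prompt_tokens) + int(completion_tokens),
    }

attrs = token_count_attributes(1200, 350)
# Passing "1200" (a string) here is the kind of formatting issue that
# can leave the token count blank in the UI.
assert all(isinstance(v, int) for v in attrs.values())
```

These attributes would then be set on the LLM span (e.g. via your tracer's `set_attribute` call) before the span is exported to the Phoenix backend.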
💡 Hint: Mention RunLLM in the thread for followups.