Hey Jayakrishna Y., the issue seems to be that you are setting model_client_stream=True in the AssistantAgent. If you remove this (it defaults to False), you will be able to see tokens and cost for each LLM span in Phoenix.
One more thing: I noticed your dependencies list doesn't include openinference-instrumentation-autogen-agentchat. Make sure it's installed so you can see the full trace.
You can also remove the two lines below, since setting auto_instrument=True already takes care of OpenAI instrumentation for you:
from openinference.instrumentation.openai import OpenAIInstrumentor
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
Let me know if you need any more help with this!