Troubleshooting Tracing Issues in a Kubernetes Pod for LLM Providers
Hello Phoenix Team,

On my local machine I was able to successfully trace my calls to external LLM providers. Now I am trying to reproduce the same setup on a remote machine. In case it is relevant: the remote machine is a Kubernetes pod, meant for experimentation, that has been alive for some time now. Inside that pod I launched the tracing app with `python3 -m phoenix.server.main serve`. In the same pod I run several experiments, implemented in LangChain. The code initializes the instrumentation as explained in the docs, and the relevant environment variables are aligned between the experiments and the served tracing app.

However, unlike on my local machine, I see no traces collected in the UI or in the logs. What could be the cause? Many thanks in advance!
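For reference, this is roughly what I run inside the pod. The endpoint value and the connectivity check are my own assumptions (Phoenix's default port is 6006), not copied from a working config, so please correct me if this is not how it is meant to be wired up:

```shell
# Start the Phoenix server inside the pod (as in my setup).
python3 -m phoenix.server.main serve &

# Environment variable shared by the experiments and the server.
# Assumes the default Phoenix port 6006 on localhost; adjust if your
# deployment exposes the collector differently.
export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"

# Quick sanity check that the collector is reachable from within the pod.
curl -sf "${PHOENIX_COLLECTOR_ENDPOINT}" > /dev/null && echo "collector reachable"
```

If the `curl` check fails, the experiments would silently drop spans, which might explain the empty UI.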
