Hi, I am occasionally seeing the error message below while navigating between projects in the UI; any ideas on what could be causing it? There seem to be no related error logs in the Phoenix pod logs. Unexpected token 'u', "upstream r"... is not valid JSON
This error typically occurs when the Phoenix frontend receives a response that is not valid JSON (often HTML or a server error page) when it expects JSON. Common causes include backend timeouts, proxy/load balancer misconfiguration, or a mismatch between frontend and backend versions. Upgrading Phoenix to a newer version (8.26.2+), clearing the browser cache, or restarting the server often resolves the issue. If the error persists, check for large data loads or concurrent requests that may be overwhelming the backend, and review proxy settings if you are using a reverse proxy or load balancer. See GitHub issue 7193 and GitHub issue 6661 for similar reports and solutions.

If you are self-hosting, also ensure that your deployment is not returning HTML error pages (e.g., from a proxy or load balancer) to API requests, as this will trigger the "Unexpected token" error in the UI. If you recently upgraded, mismatched cached frontend files can also cause this; try clearing your browser cache or reinstalling Phoenix. For more, see GitHub issue 5243.

Would you like more detail or troubleshooting steps?

Sources:
- GitHub issue 7193
- GitHub issue 6661
- GitHub issue 5243
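To make the mechanism concrete, here is a minimal TypeScript sketch (illustrative only; the sample body text and the `fetchJson` helper are assumptions, not part of Phoenix) showing how parsing a plaintext proxy reply such as "upstream request timed out" produces exactly this error, and how checking the content type surfaces the raw message instead:

```typescript
// A plaintext proxy/ingress error body fails JSON.parse with the error seen in the UI.
const body = "upstream request timed out"; // typical upstream/proxy error text (assumed)
try {
  JSON.parse(body);
} catch (err) {
  console.error(err); // SyntaxError: Unexpected token 'u', "upstream r"... is not valid JSON
}

// Hypothetical defensive wrapper: check the content type before parsing so
// a proxy error page shows up as readable text rather than a JSON syntax error.
async function fetchJson(url: string): Promise<unknown> {
  const res = await fetch(url);
  const contentType = res.headers.get("content-type") ?? "";
  if (!contentType.includes("application/json")) {
    const text = await res.text();
    throw new Error(`Non-JSON response (${res.status}): ${text.slice(0, 200)}`);
  }
  return res.json();
}
```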
How are you hosting Phoenix? This reads to me like the application attempted to perform a GraphQL query and got plaintext back as a response, possibly from a proxy or similar.
Anthony P. RunLLM I am self-hosting on Kubernetes (AKS) using Helm, with a managed Azure Postgres backend. Currently on: version-12.6.0-nonroot. Is there a way to enable more detailed logs for troubleshooting?
Yes, you can enable more detailed logs in Phoenix by setting the OTEL_LOG_LEVEL environment variable to debug in your deployment. This increases OpenTelemetry diagnostics and can help troubleshoot issues with trace collection and backend responses.

Example for Kubernetes/Helm:

```yaml
# values.yaml
server:
  env:
    - name: OTEL_LOG_LEVEL
      value: debug
```

This will provide verbose logging for Phoenix and its tracing components. For more, see the Slack discussion.

Would you like more detail on log collection or other debugging options?

Sources:
- Slack discussion: OTEL_LOG_LEVEL=debug
I don't think the env var that RunLLM suggested will help here; let me see what we have available. In the meantime, if you check the browser dev tools Network tab when the error occurs, you can see the full error response.
I still think it might be an issue with the frontend (in your browser) contacting the backend (in Kubernetes), however, and errors there may not propagate to the backend logs.
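As a rough way to check that from the browser side, here is a TypeScript sketch you could adapt and run from the devtools console; it assumes the UI talks to a /graphql endpoint on the same origin (an assumption about this deployment) and simply logs the status, content type, and raw body so a proxy-generated plaintext or HTML reply is obvious:

```typescript
// Hypothetical probe: POST a trivial GraphQL query and dump the raw reply.
// A JSON body means the request reached Phoenix; plaintext such as
// "upstream request timed out" or an HTML error page points at the
// ingress/proxy layer in front of it.
async function probeGraphql(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/graphql`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query: "{ __typename }" }), // valid on any GraphQL server
  });
  console.log(res.status, res.headers.get("content-type"));
  console.log((await res.text()).slice(0, 300));
}

probeGraphql(window.location.origin).catch((e) => console.error(e));
```

If this returns JSON but the UI still fails intermittently, the proxy may only be misbehaving under load or on slower queries (e.g., timing out on large project pages), which matches the intermittent behavior described above.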
