I've provisioned a Phoenix server and am trying to use the playground for prompt engineering. I keep seeing "Connection error." but the server-side logs show no errors, just entries like
INFO: 127.0.0.6:48983 - "POST /graphql HTTP/1.1" 200 OK
How can I figure out what's causing the connection error?
The "Connection error" in the Phoenix playground can have several causes, even when the server logs show successful requests. One common issue is network configuration or SSL certificate problems, especially if you're accessing the server over HTTPS. Ensure that the server's SSL certificate is valid and trusted by your client, and check whether any network restrictions or firewall settings might be blocking the connection.

If the issue persists, check the browser's developer console for additional error messages or network logs that might provide more insight into the connection error. This can help identify client-side issues such as CORS errors or incorrect endpoint configurations. Would you like more detailed troubleshooting steps?

Sources:
- https://github.com/Arize-ai/phoenix/issues/6296
- https://arize-ai.slack.com/archives/C04R3GXC8HK/p1737656246.759019
💡 Hint: Mention RunLLM in the thread for followups.
Hey Zhuyi - sorry you are facing an issue. I'm not 100% sure what's wrong but it does look like you have no model selected. Can you try selecting one?
I looked into the source; the connection error is because my server has no access to the Azure endpoint under its default network settings. The error likely happened when llm_client.chat_completion_create is invoked, but I'm not sure why there's no error or stacktrace at all: https://github.com/Arize-ai/phoenix/blob/c02f0e7d807129952afa5da430299aec32fafcc9/src/phoenix/server/api/subscriptions.py#L141-L148
Now I can access the endpoint, but I'm seeing another error:
Error code: 404 - {'timestamp': 1745619001268, 'status': 404, 'error': 'Not Found', 'message': 'No message available', 'path': '/openai/deployments//chat/completions'}
Any idea why that could be?
Apologies for the lack of error propagation 😅. We just switched streaming to not be WebSocket-based, and we have an issue open to fix that ASAP.
It looks, however, like you don't have a deployment set: the path /openai/deployments//chat/completions should contain the deployment name between the double slashes.
Need to fill this out
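To illustrate the 404 above, here's a minimal sketch, assuming the path template `/openai/deployments/{deployment}/chat/completions` that appears in the error. An empty deployment slug collapses the path to the double slash in the 404:

```python
# Sketch of the Azure OpenAI chat-completions path template
# (assumption: /openai/deployments/{deployment}/chat/completions,
# matching the path reported in the 404 above).
def azure_chat_path(deployment: str) -> str:
    return f"/openai/deployments/{deployment}/chat/completions"

# An empty deployment slug reproduces the broken path from the 404:
print(azure_chat_path(""))        # /openai/deployments//chat/completions
print(azure_chat_path("gpt-4o"))  # /openai/deployments/gpt-4o/chat/completions
```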
I see, there may be some difference between what Phoenix expects and how my company hosts the LLM service. In Python, I usually just query with:

```python
client = openai.AzureOpenAI(
    api_key=get_token(),
    api_version="2024-02-01",
    azure_endpoint=company_endpoint,
)
client.chat.completions.create(
    model="gpt-4o", messages=[{"role": "user", "content": "hello"}]
)
```

Is the deployment required? Looking at https://github.com/Arize-ai/phoenix/blob/70126e416553a543e122e89a4fd7c4e89488fbbb/src/phoenix/server/api/helpers/playground_clients.py#L339, it's not that different from how I use it, except that it's streaming and async. Could you please point to where the deployment is used in the Phoenix code?
I think the deployment should be in your Azure endpoint. Do you have the slug of your Azure endpoint?
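For what it's worth, here is a hedged sketch of how the deployment slug typically gets resolved. This is an assumption about openai-python's AzureOpenAI client: it uses an `azure_deployment` value when one was passed, and otherwise falls back to the `model` argument of the request, so an empty result would produce the double-slash path from the earlier 404.

```python
from typing import Optional

# Sketch (assumption about openai-python's AzureOpenAI behavior): the
# deployment segment of the request URL comes from azure_deployment if it
# was passed to the client, otherwise from the `model` argument of the
# chat.completions.create call. If both are missing/empty, the path ends
# up as /openai/deployments//chat/completions.
def resolve_deployment(azure_deployment: Optional[str], model: Optional[str]) -> str:
    return azure_deployment or model or ""

print(resolve_deployment(None, "gpt-4o"))      # falls back to the model name
print(resolve_deployment("my-dep", "gpt-4o"))  # explicit deployment wins
print(resolve_deployment(None, None))          # empty -> broken double-slash path
```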
Let me bring up my Azure account. Might take a sec
My endpoint is like http://llmservice.llmservice.svc.cluster.local:8080
Oh, I see you are inside a VPN. That looks like an endpoint inside of a cluster.
And Phoenix has access inside the cluster?
yes
Not having access is why I saw the connection error earlier.
