Hey everyone! I am running an arize-phoenix container on my laptop:
docker run -p 6006:6006 -p 4317:4317 -i -t arizephoenix/phoenix:latest
I want to connect to it with Python:
from phoenix.otel import register

project_name = "phoenix-arize-demo"
tracer_provider = register(
    project_name=project_name,
    endpoint="http://localhost:6006/v1/traces",
)
However, I don't see a project called "phoenix-arize-demo" when I go to localhost:6006. I also tried:
import os
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"
Doesn't help. I have a small dataset from which an LLM is supposed to classify pieces of text into three categories, and I wanted to use Phoenix to inspect the model outputs.
Running Phoenix with the command you provided will start the Phoenix server, exposing the web UI on port 6006 and the OTLP gRPC collector on port 4317. You can access the UI at http://localhost:6006 and send OpenTelemetry traces to port 4317 for observability and evaluation of your LLM applications. For more details, or to configure persistent storage, see the official Docker deployment guide.
Would you like more detail on sending traces, configuring storage, or connecting your applications?
Sources:
- Phoenix Docker Deployment Guide
- Phoenix Quick Start Example
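As a side note, by default a container started with `-i -t` keeps its data inside the container, so traces are lost when it is removed. A sketch of running Phoenix with persistent storage instead (the volume name, mount path, and `PHOENIX_WORKING_DIR` value are assumptions; check the Docker deployment guide for the exact settings):

```shell
# Sketch: run Phoenix detached with a named volume so traces survive restarts.
# The mount path and PHOENIX_WORKING_DIR are assumptions based on the deployment guide.
docker run -d \
  -p 6006:6006 \
  -p 4317:4317 \
  -v phoenix_data:/mnt/data \
  -e PHOENIX_WORKING_DIR=/mnt/data \
  arizephoenix/phoenix:latest
```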
Mateusz K. Have you instrumented your app and run any traces? The project gets created as traces/spans are generated, not at the time you call register.
The project will appear when you send your first trace via an integration. Alternatively, you can create the project manually in the UI by clicking "New Project" in the top-right corner of the projects page and selecting the "Manual" tab.
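To illustrate the first option, a minimal way to make the project appear is to emit one span yourself. This is a sketch, assuming a Phoenix server is running on localhost:6006 and the arize-phoenix-otel package is installed; the span name and attribute are arbitrary:

```python
# Sketch: emit a single manual span so the project shows up in the Phoenix UI.
# Assumes a Phoenix server at localhost:6006 and the arize-phoenix-otel package.
from phoenix.otel import register

tracer_provider = register(
    project_name="phoenix-arize-demo",
    endpoint="http://localhost:6006/v1/traces",
)
tracer = tracer_provider.get_tracer(__name__)

# Opening and closing a span sends one trace to the collector;
# the project is created as soon as the first span arrives.
with tracer.start_as_current_span("hello-phoenix") as span:
    span.set_attribute("input.value", "test input")
```

After this runs, the project should be visible under Projects at localhost:6006.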
How do I send a trace? As far as I understand, I should just make a call to the LLM, or am I wrong? I call this function:
from google import genai

def classify_tender(prompt: str, api_key: str):
    client = genai.Client(api_key=api_key)
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        config=genai.types.GenerateContentConfig(
            system_instruction="You classify reviews according to company profile.",
        ),
        contents=prompt,
    )
    return response
It depends on the LLM client that you would like traces from. You will install the appropriate auto-instrumentation (if one is available) or trace the client yourself.
For Google GenAI, you would install this one: https://github.com/Arize-ai/openinference/tree/main/python/instrumentation/openinference-instrumentation-google-genai
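Putting it together, a sketch of wiring the instrumentor into your setup (assumes `pip install openinference-instrumentation-google-genai` and a running Phoenix server; the project name and endpoint are taken from your snippet above):

```python
# Sketch: register Phoenix, then instrument the google-genai client so every
# generate_content call is traced automatically.
# Assumes arize-phoenix-otel and openinference-instrumentation-google-genai are installed.
from phoenix.otel import register
from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor

tracer_provider = register(
    project_name="phoenix-arize-demo",
    endpoint="http://localhost:6006/v1/traces",
)

# After this, calls like client.models.generate_content(...) emit spans
# to Phoenix without any changes to classify_tender itself.
GoogleGenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```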
Awesome, hope you enjoy Phoenix, and please let us know here or on GitHub if you have further issues 🙏
