Hi, I'm encountering some slowness on my local Phoenix.
Phoenix UI slowness is a known issue, especially with large numbers of traces or long-context spans. Recent improvements include lazy loading of spans and reducing the default page size, which can significantly reduce load times. Upgrading to the latest Phoenix version is recommended, as performance enhancements have been made in recent releases. For persistent issues, consider limiting the amount of data loaded initially (e.g., only metadata or previews) and loading full details on demand. If you are running a self-hosted instance, ensure you are not loading all attributes or large text fields by default, and consider using Postgres over SQLite for better scalability. For very large deployments, scaling up resources or running multiple Phoenix containers behind a single Postgres database is supported. Would you like more detail on how to implement these optimizations or upgrade your deployment?

Sources:
- GitHub Issue #4006
- GitHub Issue #6201
- Slack Thread: UI performance
- Phoenix Container Sizing & Scaling
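The multi-container setup mentioned above could look roughly like the sketch below. This is an illustration only: the `arizephoenix/phoenix` image name, the default port 6006, and the `PHOENIX_SQL_DATABASE_URL` environment variable should be verified against the Phoenix self-hosting docs for your version.

```yaml
# Sketch: two Phoenix containers sharing a single Postgres database.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: phoenix
      POSTGRES_PASSWORD: phoenix
      POSTGRES_DB: phoenix

  phoenix-1:
    image: arizephoenix/phoenix:latest
    environment:
      PHOENIX_SQL_DATABASE_URL: postgresql://phoenix:phoenix@db:5432/phoenix
    ports:
      - "6006:6006"

  phoenix-2:
    image: arizephoenix/phoenix:latest
    environment:
      PHOENIX_SQL_DATABASE_URL: postgresql://phoenix:phoenix@db:5432/phoenix
    ports:
      - "6007:6006"
```

You would typically put a load balancer in front of the two containers; both instances read and write the same Postgres database.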
This is the query that's taking a while (~3 minutes in this case):
phoenix-1 | 2025-06-07 16:01:36,465 INFO sqlalchemy.engine.Engine BEGIN (implicit)
phoenix-1 | 2025-06-07 16:01:36,466 INFO sqlalchemy.engine.Engine SELECT traces.project_rowid, coalesce(sum(spans.llm_token_count_prompt), ?) AS prompt, coalesce(sum(spans.llm_token_count_completion), ?) AS completion, coalesce(sum(spans.llm_token_count_prompt), ?) + coalesce(sum(spans.llm_token_count_completion), ?) AS total
phoenix-1 | FROM traces JOIN spans ON traces.id = spans.trace_rowid
phoenix-1 | WHERE spans.start_time >= ? AND traces.project_rowid IN (?, ?, ?) GROUP BY traces.project_rowid
phoenix-1 | 2025-06-07 16:01:36,466 INFO sqlalchemy.engine.Engine [generated in 0.00012s] (0, 0, 0, 0, '2025-06-07 15:46:00.000000', 3, 3, 3)
phoenix-1 | 2025-06-07 16:04:25,489 DEBUG sqlalchemy.engine.Engine Col ('project_rowid', 'prompt', 'completion', 'total')
phoenix-1 | 2025-06-07 16:04:25,492 DEBUG sqlalchemy.engine.Engine Row (3, 918256, 18650, 936906)
phoenix-1 | 2025-06-07 16:04:25,493 INFO sqlalchemy.engine.Engine COMMIT

This is the query itself:
SELECT traces.project_rowid, coalesce(sum(spans.llm_token_count_prompt), ?) AS prompt, coalesce(sum(spans.llm_token_count_completion), ?) AS completion, coalesce(sum(spans.llm_token_count_prompt), ?) + coalesce(sum(spans.llm_token_count_completion), ?) AS total
FROM traces JOIN spans ON traces.id = spans.trace_rowid
WHERE spans.start_time >= ? AND traces.project_rowid IN (?, ?, ?) GROUP BY traces.project_rowid

And the query plan:
sqlite> .eqp on
sqlite> SELECT traces.project_rowid, coalesce(sum(spans.llm_token_count_prompt), 0) AS prompt, coalesce(sum(spans.llm_token_count_completion), 0) AS completion, coalesce(sum(spans.llm_token_count_prompt), 0) + coalesce(sum(spans.llm_token_count_completion), 0) AS total FROM traces JOIN spans ON traces.id = spans.trace_rowid WHERE spans.start_time >= '2025-06-07 15:46:00.000000' AND traces.project_rowid IN (3) GROUP BY traces.project_rowid;
QUERY PLAN
|--SEARCH traces USING COVERING INDEX ix_traces_project_rowid (project_rowid=?)
`--SEARCH spans USING INDEX ix_spans_trace_rowid (trace_rowid=?)
3|918256|18650|936906

I made this change locally, and Phoenix is super-snappy again:
diff --git a/src/phoenix/server/api/types/Project.py b/src/phoenix/server/api/types/Project.py
index fe3876a95..bed722283 100644
--- a/src/phoenix/server/api/types/Project.py
+++ b/src/phoenix/server/api/types/Project.py
@@ -149,6 +149,7 @@ class Project(Node):
time_range: Optional[TimeRange] = UNSET,
filter_condition: Optional[str] = UNSET,
) -> float:
+ return 0
return await info.context.data_loaders.token_counts.load(
("total", self.project_rowid, time_range, filter_condition),
)
@@ -160,6 +161,7 @@ class Project(Node):
time_range: Optional[TimeRange] = UNSET,
filter_condition: Optional[str] = UNSET,
) -> float:
+ return 0
return await info.context.data_loaders.token_counts.load(
("prompt", self.project_rowid, time_range, filter_condition),
)
@@ -171,6 +173,7 @@ class Project(Node):
time_range: Optional[TimeRange] = UNSET,
filter_condition: Optional[str] = UNSET,
) -> float:
+ return 0
return await info.context.data_loaders.token_counts.load(
("completion", self.project_rowid, time_range, filter_condition),
)

(obviously not proposing this patch / only useful as a diagnostic)
Definitely let me know if there are other diagnostics that would be helpful.
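One such diagnostic, suggested by the query plan above: `ix_spans_trace_rowid` covers neither `start_time` nor the token-count columns, so SQLite has to read the `spans` base table for every span of every matched trace. The snippet below checks, on a toy schema that mirrors the column names from the log, whether a composite covering index changes the plan. The index name and composition are a hypothetical suggestion, not Phoenix's actual schema or migrations.

```python
import sqlite3

# Toy schema mirroring the tables and indexes in the query plan above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE traces (id INTEGER PRIMARY KEY, project_rowid INTEGER);
CREATE INDEX ix_traces_project_rowid ON traces (project_rowid);

CREATE TABLE spans (
    id INTEGER PRIMARY KEY,
    trace_rowid INTEGER,
    start_time TEXT,
    llm_token_count_prompt INTEGER,
    llm_token_count_completion INTEGER
);
CREATE INDEX ix_spans_trace_rowid ON spans (trace_rowid);

-- Hypothetical covering index: trace_rowid for the join, start_time for the
-- filter, plus both token counts so the sums never touch the base table.
CREATE INDEX ix_spans_token_counts ON spans (
    trace_rowid, start_time, llm_token_count_prompt, llm_token_count_completion
);
""")

# The plan should now show spans being read via the covering index.
plan = conn.execute("""
EXPLAIN QUERY PLAN
SELECT traces.project_rowid,
       coalesce(sum(spans.llm_token_count_prompt), 0) AS prompt,
       coalesce(sum(spans.llm_token_count_completion), 0) AS completion
FROM traces JOIN spans ON traces.id = spans.trace_rowid
WHERE spans.start_time >= '2025-06-07 15:46:00.000000'
  AND traces.project_rowid IN (3)
GROUP BY traces.project_rowid
""").fetchall()

for row in plan:
    print(row[-1])  # last column is the human-readable plan detail
```

Whether this pays off on the real database depends on write amplification from the extra index, so it's worth measuring both directions before proposing it.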
10.9.1
Got it, thank you
This is the relevant query on the dashboard:
{"query":"query ProjectsPageProjectMetricsQuery(\n $id: ID!\n $timeRange: TimeRange!\n) {\n project: node(id: $id) {\n __typename\n ... on Project {\n traceCount(timeRange: $timeRange)\n latencyMsP50: latencyMsQuantile(probability: 0.5, timeRange: $timeRange)\n tokenCountTotal(timeRange: $timeRange)\n }\n id\n }\n}\n","variables":{"id":"UHJvamVjdDoz","timeRange":{"start":"2025-06-08T18:38:00.000Z"}}}

And this is the one on the project page itself:
{"query":"query ProjectPageQuery(\n $id: ID!\n $timeRange: TimeRange!\n) {\n project: node(id: $id) {\n __typename\n ...ProjectPageHeader_stats\n ...StreamToggle_data\n id\n }\n}\n\nfragment ProjectPageHeader_stats on Project {\n traceCount(timeRange: $timeRange)\n tokenCountTotal(timeRange: $timeRange)\n tokenCountPrompt(timeRange: $timeRange)\n tokenCountCompletion(timeRange: $timeRange)\n latencyMsP50: latencyMsQuantile(probability: 0.5, timeRange: $timeRange)\n latencyMsP99: latencyMsQuantile(probability: 0.99, timeRange: $timeRange)\n spanAnnotationNames\n documentEvaluationNames\n id\n}\n\nfragment StreamToggle_data on Project {\n streamingLastUpdatedAt\n id\n}\n","variables":{"id":"UHJvamVjdDoz","timeRange":{"start":"2025-06-08T18:50:00.000Z"}}}

This is where the database query is constructed in the server: https://github.com/Arize-ai/phoenix/blob/ccef63a6fd2f6f16b5e054f7063c4f4a5ff6a6f4/src/phoenix/server/api/dataloaders/token_counts.py#L97-L124
