Troubleshooting Concurrent Trace Drops in Phoenix with PostgreSQL
Hi Phoenix community! I'm facing an issue where traces are getting dropped when sending concurrent requests to Phoenix.

My setup:
- PostgreSQL for storage
- OTEL configured for tracing
- Running multiple replicas (5), but each pod is using very little memory (around 200-300 MB)
I tried tweaking these settings to handle the concurrent load better:

```yaml
env:
  - name: SQLALCHEMY_POOL_SIZE
    value: "20"
  - name: SQLALCHEMY_MAX_OVERFLOW
    value: "30"
  - name: SQLALCHEMY_POOL_TIMEOUT
    value: "30"
  - name: OTEL_BSP_MAX_QUEUE_SIZE
    value: "2048"
  - name: OTEL_BSP_SCHEDULE_DELAY
    value: "5000"
  - name: PHOENIX_LOG_LEVEL
    value: "DEBUG"
  - name: OTEL_LOG_LEVEL
    value: "DEBUG"
```
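As a rough sanity check on the OTEL_BSP_* values: the OpenTelemetry BatchSpanProcessor buffers spans in a bounded queue (`OTEL_BSP_MAX_QUEUE_SIZE`) and exports up to `OTEL_BSP_MAX_EXPORT_BATCH_SIZE` spans (default 512) every `OTEL_BSP_SCHEDULE_DELAY` milliseconds; once the queue is full, new spans are dropped. The back-of-envelope sketch below (pure Python; the span rates and burst duration are illustrative assumptions, not measurements from my setup) estimates whether a given load will overflow the queue:

```python
# Back-of-envelope check for BatchSpanProcessor queue overflow.
# Assumes one export of max_export_batch_size spans per schedule
# interval -- a conservative floor, since the processor also
# exports early when the queue exceeds the batch size.

def bsp_drain_rate(max_export_batch_size=512, schedule_delay_ms=5000):
    """Approximate spans/second drained from the BSP queue."""
    return max_export_batch_size / (schedule_delay_ms / 1000.0)

def queue_overflows(incoming_spans_per_sec,
                    max_queue_size=2048,
                    max_export_batch_size=512,
                    schedule_delay_ms=5000,
                    burst_seconds=10):
    """True if a sustained burst would exceed the queue and drop spans."""
    drain = bsp_drain_rate(max_export_batch_size, schedule_delay_ms)
    backlog = max(0.0, incoming_spans_per_sec - drain) * burst_seconds
    return backlog > max_queue_size

# With the values from the env block above (queue 2048, delay 5000 ms,
# default batch size 512), the drain floor is ~102 spans/s:
print(bsp_drain_rate())          # 102.4
print(queue_overflows(500))      # True: (500 - 102.4) * 10 > 2048
print(queue_overflows(50))       # False: queue never backs up
```

If this arithmetic matches what you're seeing, lowering `OTEL_BSP_SCHEDULE_DELAY` or raising `OTEL_BSP_MAX_EXPORT_BATCH_SIZE` may help more than a bigger queue, since the queue only buys time during bursts.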
But I’m still seeing traces getting dropped. Any suggestions on:
1. What settings work best for handling lots of concurrent traces?
2. Is this a known issue?
3. How can I better monitor or debug this?
4. Are there other configuration tweaks I should try?
Would really appreciate any help or insights! Thanks!
