I'm instrumenting my codebase using the `OpenAIInstrumentor`:

```python
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

tracer_provider = register(project_name="atlas")
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```

I've noticed that when Phoenix is down (either momentarily, or because the app is running in an environment where it isn't available), all calls to LLMs fail with:
```json
{
  "message": "Transient error StatusCode.UNAVAILABLE encountered while exporting traces to localhost:4317, retrying in 32s.",
  "logger.name": "opentelemetry.exporter.otlp.proto.grpc.exporter",
  "logger.thread_name": "MainThread",
  "logger.method_name": "_export",
  "date": "2025-01-16T00:13:58.400601+00:00",
  "status": "WARNING"
}
```

Is there a way to configure tracing so that sending traces is best-effort and doesn't affect the critical path of my application?
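To illustrate the behavior I'm after, here is a rough sketch of what I have in mind (the class and names are my own, not a documented Phoenix or OpenTelemetry API): a wrapper that delegates to the real exporter and swallows any export failure so the application never sees it.

```python
class BestEffortExporter:
    """Sketch: delegate to a wrapped span exporter, but treat export
    failures as non-fatal so tracing stays off the critical path."""

    def __init__(self, wrapped):
        self._wrapped = wrapped

    def export(self, spans):
        try:
            return self._wrapped.export(spans)
        except Exception:
            # Best-effort: silently drop the batch when the backend is down.
            return None

    def shutdown(self):
        self._wrapped.shutdown()
```

If there is a supported configuration option that achieves this (rather than hand-wrapping the exporter), that would be much preferred.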
