What are some things I can do to reduce the latency between when I call an LLM in my application and when the trace appears in the project UI?
🔒[private user] Here is how I currently set up my instrumentation:
import os

from arize.otel import register
from openinference.instrumentation.anthropic import AnthropicInstrumentor

tracer_provider = register(
    space_id=os.getenv("ARIZE_SPACE_ID"),
    api_key=os.getenv("ARIZE_API_KEY"),
    project_name=project_name,
)
AnthropicInstrumentor().instrument(tracer_provider=tracer_provider)

Is this in line with the documentation's recommendations?
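Not an official Arize recommendation, but one lever worth checking: the OpenTelemetry SDK's BatchSpanProcessor buffers spans and exports them on a timer (5000 ms by default), which adds visible delay before traces appear. Assuming register() wires up a batch processor under the hood (an assumption, not confirmed here), the standard OTEL_BSP_SCHEDULE_DELAY environment variable shortens that timer:

```python
import os

# Standard OpenTelemetry SDK setting: the BatchSpanProcessor's export
# timer, in milliseconds (default 5000). Lowering it makes buffered
# spans leave the process sooner, at the cost of smaller, more frequent
# export batches. Set it BEFORE register() creates the tracer provider.
os.environ["OTEL_BSP_SCHEDULE_DELAY"] = "500"  # export roughly every 500 ms

# Then proceed with the setup shown above (assumption: the provider
# returned by register() honors the standard BSP environment variables):
# tracer_provider = register(...)
```

This only shortens the client-side export delay; any server-side processing time on the platform is unaffected.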
Yeah... I just don't understand why it's so slow
Maybe I'm abusing the feature?
It looks like you are using Phoenix Cloud?
No, I'm using the Arize Cloud Platform. Do y'all refer to that as Phoenix Cloud?
These are separate platforms.
I am not sure what the standard ingestion times are on the Arize Cloud platform (I am a maintainer on Phoenix).
Can we get an answer on the normal ingestion times in the Arize platform and whether there are other options for using register that may expedite the export, processing and ingestion of the traces?
Thanks Harrison. I just DMed you the project name
Hello! I wanted to check back in on the expected behavior here. 🔒[private user]
