Step-by-step reasoning:
1. *Synchronous vs. Asynchronous Tracing*: In LlamaIndex, when you use synchronous program calls (e.g., `call`), the OpenInference/Phoenix instrumentation hooks into the call stack and captures both the formatted prompt and the original input parameters, which are then attached as attributes to the trace span. This allows Phoenix to display both the prompt and the named parameters in the UI <https://github.com/Arize-ai/phoenix/issues/1700|672765d3>.
2. *Async Instrumentation Gaps*: For asynchronous calls (e.g., `acall`), the tracing relies on async-compatible hooks. However, as of recent versions, the OpenInference and LlamaIndex integration does not always propagate or capture the input parameters in the same way for async calls. This is a known gap and has been discussed in the context of both missing spans and missing attributes for async workflows <https://github.com/Arize-ai/phoenix/issues/1700|672765d3>; <https://github.com/Arize-ai/phoenix/issues/3274|6b83b9f5>.
3. *Root Cause*: The root cause is that the callback or instrumentor responsible for capturing the parameters is not always invoked, or does not have access to the original arguments, in the async path. This can be due to context propagation issues, missing callback manager wiring, or incomplete async support in the tracing handler <https://github.com/Arize-ai/phoenix/issues/1700|672765d3>.
4. *Workarounds*:
- *Manual Tracing*: As a workaround, you can manually create a span around your async call and attach the parameters as attributes using OpenTelemetry or Phoenix's manual tracing helpers. This ensures the parameters are included in the trace even if the automatic instrumentation misses them <https://github.com/Arize-ai/phoenix/issues/1700|672765d3>.
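To make the manual-tracing pattern concrete, here is a minimal, dependency-free sketch of wrapping an async call in a span and attaching the input parameters as attributes. In real code you would use OpenTelemetry's `trace.get_tracer(__name__).start_as_current_span(...)` and `span.set_attribute(...)`; the tiny `Tracer`/`Span` classes and the `EchoProgram` below are stand-ins so the pattern is runnable as-is.

```python
# Sketch: manually trace an async program call and record its input parameters.
# Tracer/Span mimic the OpenTelemetry API surface; EchoProgram is a
# hypothetical stand-in for a LlamaIndex program with an async entry point.
import asyncio
from contextlib import contextmanager

class Span:
    def __init__(self, name):
        self.name = name
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

class Tracer:
    def __init__(self):
        self.finished_spans = []

    @contextmanager
    def start_as_current_span(self, name):
        # Real code: opentelemetry.trace.get_tracer(__name__).start_as_current_span(name)
        span = Span(name)
        try:
            yield span
        finally:
            self.finished_spans.append(span)

tracer = Tracer()

async def traced_acall(program, **params):
    """Wrap an async program call in a span and attach its inputs by hand."""
    with tracer.start_as_current_span("program.acall") as span:
        for key, value in params.items():
            # Record each named parameter, since async auto-instrumentation may miss them.
            span.set_attribute(f"input.{key}", str(value))
        return await program.acall(**params)

class EchoProgram:
    async def acall(self, **kwargs):
        return kwargs

result = asyncio.run(traced_acall(EchoProgram(), city="Paris", units="metric"))
print(result)                               # {'city': 'Paris', 'units': 'metric'}
print(tracer.finished_spans[0].attributes)  # {'input.city': 'Paris', 'input.units': 'metric'}
```

Because the wrapper owns the span, the parameters land in the trace regardless of what the automatic async instrumentation captures.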
- *Callback Manager*: Ensure that the callback manager (with the OpenInferenceTraceCallbackHandler) is explicitly passed to all components, including the async program and any subcomponents, as sometimes the global handler is not propagated in async contexts <https://github.com/Arize-ai/phoenix/issues/1700|672765d3>.
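The explicit-wiring pattern looks roughly like this. The `CallbackManager`, `OpenInferenceTraceCallbackHandler`, and `Program` classes below are simplified stand-ins for the LlamaIndex/OpenInference shapes (not the real implementations); the point is that the program receives its callback manager directly in its constructor instead of relying on a global that the async path may never consult.

```python
# Sketch: pass the callback manager explicitly so async calls still emit events.
# All classes here are simplified stand-ins, not the real LlamaIndex/OpenInference code.
import asyncio

class OpenInferenceTraceCallbackHandler:
    def __init__(self):
        self.events = []

    def on_event(self, name, payload):
        self.events.append((name, payload))

class CallbackManager:
    def __init__(self, handlers):
        self.handlers = handlers

    def emit(self, name, payload):
        for handler in self.handlers:
            handler.on_event(name, payload)

class Program:
    def __init__(self, callback_manager=None):
        # Explicit wiring: don't depend on a global that async contexts may miss.
        self.callback_manager = callback_manager or CallbackManager([])

    async def acall(self, **params):
        self.callback_manager.emit("program.acall.input", params)
        return params

handler = OpenInferenceTraceCallbackHandler()
manager = CallbackManager([handler])
program = Program(callback_manager=manager)  # handler provably reaches the async path
asyncio.run(program.acall(question="What is tracing?"))
print(handler.events)  # [('program.acall.input', {'question': 'What is tracing?'})]
```

The same idea applies to subcomponents: hand each one the manager at construction time rather than assuming a globally registered handler will be visible inside every async context.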
- *Version Updates*: Monitor updates to the OpenInference and LlamaIndex packages, as recent releases have improved async tracing support. Upgrading may resolve some issues, but full parity is not guaranteed yet <https://github.com/Arize-ai/phoenix/issues/3274|6b83b9f5>.
5. *Reporting and Tracking*: If you need this feature, consider filing or upvoting issues in the <https://github.com/Arize-ai/openinference/issues/new/choose|OpenInference> and <https://github.com/run-llama/llama_index/issues|LlamaIndex> repositories to help prioritize async tracing improvements.
Let me know if you want a code example for manual async tracing.
Sources:
- <https://github.com/Arize-ai/phoenix/issues/1700>
- <https://github.com/Arize-ai/phoenix/issues/3274>