Challenges with Bedrock LLM Tracing in LlamaIndex QueryPipeline
I'm back. When I invoke a Bedrock LLM object as part of a LlamaIndex QueryPipeline, it generates a trace for both `prompt_to_messages` and the original prompt itself, so every call shows up twice. To get around this, I defined a CustomQueryComponent and called the LLM's `.complete()` explicitly in its `_run_component` method. However, that caused the trace to be dropped entirely.

I reached out to the LlamaIndex team, and they weren't sure whether Phoenix supports tagging custom components. This leaves me stuck: I don't want the native LlamaIndex object generating two traces, but if I can't attach a trace label to a custom component, I don't see another option. Anyone have any ideas?
