RunLLM: I'm currently using LlamaIndex and the LlamaIndex instrumentor, calling LlamaIndexInstrumentor().instrument() at the end of my setup function. If I want to switch to using the OpenAI client directly, will I have to .instrument() another instrumentor, like one for OpenAI? Can both be instrumented at the same time?
Yes, you can instrument both the LlamaIndex and OpenAI clients simultaneously. You will need to call the instrument method for each instrumentor separately. For OpenAI, you can use the OpenAIInstrumentor to instrument the OpenAI client. According to the Phoenix documentation, you can initialize the OpenAIInstrumentor before your application code to ensure that all requests are traced and reported to Phoenix. Would you like more detail on how to set this up? Sources:
- Phoenix documentation
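The two calls described above are independent: each OpenInference instrumentor patches a different library. A minimal sketch of such a setup function, assuming the `openinference-instrumentation-llama_index` and `openinference-instrumentation-openai` packages are installed (package names are assumptions based on the thread, not verified here):

```python
def setup_tracing():
    # Hypothetical setup, assuming both OpenInference packages are installed.
    # Imports are local so this sketch can be defined without the packages.
    from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
    from openinference.instrumentation.openai import OpenAIInstrumentor

    # Each .instrument() call patches its own library; calling both is fine.
    LlamaIndexInstrumentor().instrument()
    OpenAIInstrumentor().instrument()
```

Call `setup_tracing()` once, before any application code runs, so that all requests are traced.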
Ah, interesting. So if I have some parts using OpenAI directly, should I instrument it and uninstrument after calling?
Thanks Mikyo! To clarify, are you suggesting we add the OpenInference OpenAIInstrumentor().instrument() to our current tracing setup, based on this?
If you use our OpenAIInstrumentor you will get spans for your calls to OpenAI. I would start there. Would be happy to do follow-up enhancements to make the two work together a bit nicer.
Would it still create duplicate spans for now then?
It would right now, Swaraj R., and I apologize for that. You'd get one span produced by llama-index and one from the invocation of the OpenAI client. It's a problem we need to solve. More people are stitching together frameworks and providers, but llama-index workflows have made it a bit difficult since we have to rely on llama-index to track things.
Keep us in the loop with your pain points. Would love to help!
Ah, so the dupes are only for the OpenAI calls, not all spans, right? (Especially the custom ones we're creating.)
Yeah exactly
If so, that's not so bad while we migrate.
