I'm using the OpenAI instrumentor. Two questions:
is there a way to add "events" or "attributes" to indicate when calls to OpenAI are retried?
I'm using structured outputs, and the output message in the assistant output isn't formatted JSON, making it a bit hard to read. Any way I can have this JSON formatted?
For adding "events" or "attributes" to indicate when calls to OpenAI are retried, you can use the OpenAIInstrumentor to instrument your OpenAI calls and manually add attributes or events to the spans, e.g. via the span's set_attribute method to add custom attributes indicating retries. However, specific support for retry events is not explicitly mentioned in the data sources.

Regarding the formatting of structured outputs, there is a request for pretty-printing or formatting structured output data in the UI to make it more readable. This feature is not currently available, but there is an open enhancement request to address it (GitHub Issue).

Would you like more details on any specific part? Sources:
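Since the instrumentor doesn't capture SDK-level retries, one workaround is to wrap the OpenAI call in your own retry loop and record each attempt on the active span. A minimal sketch, assuming you manage retries yourself; the event name `"openai.retry"` and attribute keys below are made-up illustrations, not OpenInference semantic conventions:

```python
import time


def call_with_retries(fn, max_retries=3, on_retry=None, backoff=0.0):
    """Call fn(); on exception, retry up to max_retries times.

    on_retry(attempt, exc) fires before each retry, so you can attach a
    span event there, e.g. (with the OpenTelemetry API):

        span = trace.get_current_span()
        span.add_event("openai.retry", {"attempt": attempt, "error": str(exc)})

    Returns (result, retries_used) so the caller can also set an
    attribute like span.set_attribute("openai.retry_count", retries_used).
    """
    attempt = 0
    while True:
        try:
            return fn(), attempt  # attempt == number of retries that were needed
        except Exception as exc:
            if attempt >= max_retries:
                raise
            attempt += 1
            if on_retry is not None:
                on_retry(attempt, exc)
            if backoff:
                time.sleep(backoff * attempt)  # simple linear backoff
```

This keeps retry visibility on the same LLM span the instrumentor creates, so a long-duration alert can be traced back to "this call was retried N times" without any changes to the instrumentor itself.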
- GitHub Issue
💡 Hint: Mention RunLLM in the thread for followups.
Hey Swaraj R., good to see ya here.
is there a way to add "events" or "attributes" to indicate when calls to OpenAI are retried?
These are retries inside the OpenAI SDK, correct? You're right that we don't have those captured. Are you wanting to capture the instability of OpenAI, or the latency it incurs?
I'm using structured outputs, and the output message in the assistant output isn't formatted JSON, making it a bit hard to read. Any way I can have this JSON formatted?
This is a good bit of feedback. Let me dig into this a bit more. Is the feedback more about the visual side, or are you trying to use the spans to build datasets of structured output?
More for visibility into why a call took long. We have some alerts on overall run duration, and at times one call takes up most of the time; having insight into whether it was retried or not will help with investigating that.
Is the feedback more about the visual side, or are you trying to use the spans to build datasets of structured output?
I like annotating / visually inspecting runs using the output messages tab on an LLM span; right now I've gotta copy-paste into VS Code and format it so I can inspect it. This ties into being able to quickly build more datasets as well, so kinda both.
basically this issue https://github.com/Arize-ai/phoenix/issues/5140
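Until that issue lands in the UI, a quick local workaround is to pretty-print the output message string instead of pasting it into an editor. A minimal sketch; the helper name `pretty` is just an illustration:

```python
import json


def pretty(output_message: str) -> str:
    """Re-serialize a JSON string with indentation.

    Returns the input unchanged if it isn't valid JSON, so it's safe
    to run over any span output message.
    """
    try:
        return json.dumps(json.loads(output_message), indent=2, ensure_ascii=False)
    except (json.JSONDecodeError, TypeError):
        return output_message


print(pretty('{"name":"Swaraj","scores":[1,2,3]}'))
```

The same one-liner works on anything you export from Phoenix (e.g. span dataframes), which also helps when turning those outputs into eval datasets.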
Gotcha. Yeah that latter one makes a ton of sense
And the former one makes a ton of sense too. Captured here. https://github.com/Arize-ai/openinference/issues/1175
I think we might tackle the structured output one first; making structured outputs replayable and easy to manage as datasets for evals is a top priority. Will try to post back to the issues, but feel free to check back in on these. We'll try to crank 'em out in the coming weeks. Good to see you, Swaraj. Happy to see you using the OpenAI instrumentor.
Thank you so much Mikyo for the quick response! Great to chat with you as well. Looking forward to seeing that land 🙏
