Also, just curious: what would happen if I use the CrewAI, LangChain, and OpenAI instrumentors with the same tracer provider and project? I believe CrewAI and LangChain will add their own spans, but I'm not sure what OpenAI adds here.
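For context, this is roughly how I'm wiring them up (following the Phoenix docs; the project name is just an example from my setup, and I haven't verified this exact snippet end to end):

```python
# Setup sketch: all three instrumentors share one tracer provider, so their
# spans should land in the same Phoenix project and be stitched into the
# same traces via the shared trace context.
from phoenix.otel import register
from openinference.instrumentation.crewai import CrewAIInstrumentor
from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# "my-crewai-app" is a placeholder project name
tracer_provider = register(project_name="my-crewai-app")

CrewAIInstrumentor().instrument(tracer_provider=tracer_provider)
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```

My understanding is that CrewAI and LangChain spans describe the agent/chain/tool steps, while the OpenAI instrumentor adds a span per raw API call nested under them, but I'd love confirmation.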
Hi Xander S., here is the output of pip show:

Name: openinference-instrumentation-crewai
Version: 0.1.2
Summary: OpenInference Crewai Instrumentation
Home-page:
Author:
Author-email: OpenInference Authors <oss@arize.com>
And thanks, I will look at the datasets feature. Basically, I have a test dataset, and I want to check/observe the flow/trace for each sample in it to make sure the application is working as expected. And maybe a naive question: is the Phoenix observability tool meant for humans to qualitatively look for issues, or is it meant for bulk-downloading the traces (for all the logged user inputs) and running an evaluation over them? Thanks.
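In case it helps clarify my question, this is the kind of bulk workflow I had in mind (the client call is from the Phoenix docs; the column names are my reading of the spans dataframe, so treat this as an unverified sketch):

```python
# Sketch: pull the logged spans out of Phoenix as a dataframe and inspect
# them programmatically, instead of eyeballing each trace in the UI.
import phoenix as px

client = px.Client()  # assumes a running Phoenix instance
spans_df = client.get_spans_dataframe()  # one row per span, with attributes

# e.g. keep only the LLM spans and look at their inputs/outputs in bulk
# (column names here are my assumption about the dataframe schema)
llm_spans = spans_df[spans_df["span_kind"] == "LLM"]
print(llm_spans[["attributes.input.value", "attributes.output.value"]].head())
```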
Hi Xander S., thanks a bunch for the response.
I am actually running a CrewAI example, and since the project uses crewai==0.11.0, I am using the same version. With the LangChain instrumentor, I couldn't find any span containing only the tool inputs and outputs. The only way I can find the tool response currently is by string-matching the "Thought, Action, Observation" strings (emitted after the tool is called). I wonder if there is an easier way to get this info. Also, if I use both the CrewAI instrumentor and the LangChain instrumentor, should both use the same tracer provider (and the same project)? I don't know whether the outputs of the two instrumentors would be combined in the UI.
3. I have a dataset to input to the agent, and I want to get the trace for each sample in the dataset to debug the agent. 4. Thanks, I used the LangChain instrumentor and also the OpenAI instrumentor; in the end, the former reports 18k total tokens and the latter 29k. I'm not sure which one to trust (and not sure when to use which). Thanks.
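To compare the two counts, I've been summing the llm.token_count.total attribute per instrumentation scope. A minimal stand-in with hypothetical span dicts (the scope names follow the OpenInference packages, but the numbers are made up to mirror my 18k-vs-29k situation):

```python
from collections import defaultdict

# Hypothetical spans, shaped loosely like OpenInference LLM span attributes.
# In a real setup these would come from the Phoenix spans dataframe.
spans = [
    {"scope": "openinference.instrumentation.langchain", "llm.token_count.total": 9000},
    {"scope": "openinference.instrumentation.langchain", "llm.token_count.total": 9000},
    {"scope": "openinference.instrumentation.openai", "llm.token_count.total": 14500},
    {"scope": "openinference.instrumentation.openai", "llm.token_count.total": 14500},
]

# Sum token counts separately for each instrumentor's spans, so the two
# totals can be compared side by side.
totals = defaultdict(int)
for span in spans:
    totals[span["scope"]] += span.get("llm.token_count.total", 0)

for scope, total in sorted(totals.items()):
    print(scope, total)
```

Doing this per scope at least shows where the discrepancy comes from, e.g. whether one instrumentor is seeing API calls the other never records.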
