Hey Jonathan H.! Thanks for trying out Phoenix! Phoenix will soon support gRPC-based tracing, but right now you need to switch from gRPC to HTTP in your example. So you have

`from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter`

which needs to be

`from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter`

You can find an example here! https://github.com/Arize-ai/openinference/blob/main/python/examples/langserve/per_request_metadata.py
I really appreciate your quick reply! This fixed my problem. One more quick follow-up, as I am still a newb with Phoenix. I am guessing that in this example the LangChainInstrumentor takes advantage of LangChain callbacks, so that the instrumentation overhead in terms of latency is small. Do I have that correct? If so, is that how most of the instrumentation works, or are there cases where you might need to proxy calls, which would add more overhead or latency? Really appreciate all your help!
LangChain callbacks so that the instrumentation overhead in terms of latency is small.
Yes, it uses LangChain's built-in tracer, and the span processing happens in a thread, so it should not have any significant impact.
If so, is that how most of the instrumentation works, or are there cases where you might need to proxy calls, which would add more overhead or latency?
Instrumentation simply wraps your calls but doesn't actually proxy anything: it takes the IO at different stages of your application and captures it in spans. We follow OTEL best practices to accomplish this, so I wouldn't imagine it adds any significant load. We have some timing logging built into the instrumentors if you want to check!
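To make the wrap-versus-proxy distinction concrete, here is a minimal sketch of the general pattern (this is only an illustration, not the actual OpenInference implementation; the `withSpan` and `SpanRecord` names are made up for the example):

```typescript
// Sketch of the "wrap, don't proxy" pattern: the original function is
// invoked unchanged, and its inputs/outputs are recorded as span-like data.
type SpanRecord = { name: string; input: unknown; output: unknown };
const spans: SpanRecord[] = [];

function withSpan<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => R,
): (...args: A) => R {
  return (...args: A): R => {
    const output = fn(...args); // the original call is untouched
    spans.push({ name, input: args, output }); // IO captured after the fact
    return output;
  };
}

const add = withSpan("add", (a: number, b: number) => a + b);
console.log(add(2, 3)); // 5
console.log(spans.length); // 1
```

Because the wrapped function runs exactly as before and the capture is just a record of its arguments and return value, the per-call overhead is essentially a function-call indirection plus the cost of recording.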
Awesome, thanks again Mikyo! Now I will start working with the JavaScript/TypeScript instrumentor.
Okay, now that I was able to get the Python version of LangChain instrumentation working, I wanted to get the same working for TypeScript following this documentation. I am testing this in a Deno Jupyter notebook:

```typescript
/* instrumentation.ts */
import { LangChainInstrumentation } from "npm:@arizeai/openinference-instrumentation-langchain";
import { ConsoleSpanExporter } from "npm:@opentelemetry/sdk-trace-base";
import {
  NodeTracerProvider,
  SimpleSpanProcessor,
} from "npm:@opentelemetry/sdk-trace-node";
import { Resource } from "npm:@opentelemetry/resources";
import { OTLPTraceExporter as ProtoOTLPTraceExporter } from "npm:@opentelemetry/exporter-trace-otlp-proto";
import { diag, DiagConsoleLogger, DiagLogLevel } from "npm:@opentelemetry/api";
import * as CallbackManagerModule from "npm:@langchain/core/callbacks/manager";

// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

// Your Arize Space and API Keys, which can be found in the UI
// metadata.set('space_key', 'your-space-key');
// metadata.set('api_key', 'your-api-key');

const provider = new NodeTracerProvider({
  resource: new Resource({
    // Arize specific - The name of a new or preexisting model you
    // want to export spans to
    "model_id": "Aporia_Testing",
    "model_version": "1.0",
  }),
});

// add as another SpanProcessor below the previous SpanProcessor
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new ProtoOTLPTraceExporter({
      // This is the url where your phoenix server is running
      url: "http://192.168.1.69:6006/v1/traces",
    }),
  ),
);

const lcInstrumentation = new LangChainInstrumentation();
// LangChain must be manually instrumented as it doesn't have
// a traditional module structure
lcInstrumentation.manuallyInstrument(CallbackManagerModule);

provider.register();
```
```
@opentelemetry/api: Registered a global for diag v1.8.0.
Manually instrumenting @langchain/core/callbacks
Applying patch for @langchain/core/callbacks
Stack trace:
TypeError: Cannot add property openInferencePatched, object is not extensible
    at LangChainInstrumentation.patch (file:///Users/hodgesz/Library/Caches/deno/npm/registry.npmjs.org/@arizeai/openinference-instrumentation-langchain/0.0.5/dist/src/instrumentation.js:35:37)
    at LangChainInstrumentation.manuallyInstrument (file:///Users/hodgesz/Library/Caches/deno/npm/registry.npmjs.org/@arizeai/openinference-instrumentation-langchain/0.0.5/dist/src/instrumentation.js:15:14)
    at <anonymous>:41:19
```

Any thoughts on this "object is not extensible" error?
Hi Jonathan, I have to be honest: I have never used JS in a notebook, nor have I been using Deno, so I'm not as familiar with its runtime. If neither of these things are hard requirements, we have a Node example here: https://github.com/Arize-ai/openinference/tree/main/js/examples/langchain-express

From just looking at the error stack above, it looks like the Deno runtime is stricter about modules and has marked them as not extensible, an understandable security feature. In general, OTEL tends to follow a pattern of marking a module as instrumented by extending it with a private flag, to avoid instrumenting it multiple times. We could relax this mechanism if desired.

It might take me a bit of time to ramp up on getting a Deno environment set up. LMK if you would like me to take a look or if the given Node example is sufficient.
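For what it's worth, the error can be reproduced in a few lines: per the ECMAScript spec, module namespace objects are not extensible, which is exactly what the patch flag assignment runs into (the `fakeModule` object below is a stand-in for a module namespace, not the real LangChain import):

```typescript
"use strict";
// Minimal reproduction of "Cannot add property ..., object is not extensible".
// ES module namespace objects (including Deno's npm: imports) are
// non-extensible by spec; Object.preventExtensions simulates that here.
const fakeModule = Object.preventExtensions({ someExport: () => "ok" });

let caught: unknown = null;
try {
  // This is effectively what writing a "patched" flag onto the module does:
  (fakeModule as Record<string, unknown>).openInferencePatched = true;
} catch (e) {
  caught = e; // TypeError, since module code always runs in strict mode
}
console.log(caught instanceof TypeError); // true
```

In Node, the instrumentation patches a CommonJS exports object, which is an ordinary extensible object, so the same flag write succeeds there.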
Appreciate it, Mikyo! Deno isn't a hard requirement and was just a way to test in the same Jupyter notebook environment as the Python example you helped me with yesterday. Let me try to get the Node example you shared running and if I have any questions, I will ping in here. Thanks again!
