That worked like a charm, thank you sir!
Mikyo That's where I am a little confused. How do I attach or let my code in a notebook know about the remote Phoenix instance? If the answer is with the client, I am still having some trouble connecting to it. Currently, all the traces are in the 'default' project, but I am still having problems retrieving them as a dataframe from the remote Phoenix instance.

```python
endpoint = "http://192.168.1.69:6006"
spans_df = px.Client(endpoint=endpoint).get_spans_dataframe()
spans_df[["name", "span_kind", "attributes.input.value", "attributes.retrieval.documents"]].head()
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[15], line 4
      1 endpoint = "http://192.168.1.69:6006"
      3 spans_df = px.Client(endpoint=endpoint).get_spans_dataframe()
----> 4 spans_df[["name", "span_kind", "attributes.input.value", "attributes.retrieval.documents"]].head()

TypeError: 'NoneType' object is not subscriptable
```
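(For context on this error: the `'NoneType' object is not subscriptable` message means `get_spans_dataframe()` returned `None` rather than a DataFrame, which can happen when the queried project contains no spans. A minimal sketch of a guard; `safe_head` is a hypothetical helper name, not a Phoenix API:)

```python
# Hedged sketch: get_spans_dataframe() can return None when the queried
# project has no spans, which produces exactly this TypeError on the
# following line. 'safe_head' is a hypothetical helper, not a Phoenix API.
def safe_head(spans_df, columns, n=5):
    if spans_df is None:
        # Nothing was returned - likely the wrong project was queried,
        # or no traces have been exported yet.
        return None
    return spans_df[columns].head(n)
```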
You mentioned earlier that I might need to specify the project in the parameters. How do I do this?

```python
endpoint = "http://192.168.1.69:6006"
project = "default"
spans_df = px.Client(project=project, endpoint=endpoint).get_spans_dataframe()
spans_df[["name", "span_kind", "attributes.input.value", "attributes.retrieval.documents"]].head()
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[16], line 4
      1 endpoint = "http://192.168.1.69:6006"
      2 project = "default"
----> 4 spans_df = px.Client(project=project, endpoint=endpoint).get_spans_dataframe()
      5 spans_df[["name", "span_kind", "attributes.input.value", "attributes.retrieval.documents"]].head()

TypeError: Client.__init__() got an unexpected keyword argument 'project'
```
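(A hedged note on this one: `px.Client.__init__` indeed takes no `project` keyword, so the error is expected. Assuming a Phoenix version that reads the `PHOENIX_PROJECT_NAME` environment variable, and in newer releases a `project_name` argument on the query methods, selecting the project would look like this sketch; both names are assumptions based on the Phoenix docs, not verified against this install:)

```python
import os

# Hedged sketch: px.Client() has no 'project' kwarg. Two assumed
# alternatives for targeting a project, per the Phoenix docs:
# 1) the PHOENIX_PROJECT_NAME environment variable, read by Phoenix:
os.environ["PHOENIX_PROJECT_NAME"] = "default"
# 2) a project_name argument on the query method in newer releases:
# spans_df = px.Client(endpoint=endpoint).get_spans_dataframe(project_name="default")
```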
Thanks again for your help 🙏
Really enjoying getting to know Phoenix so far. Great platform! Now I am moving on to evals and following the quick start - https://docs.arize.com/phoenix/evaluation/evals During the first part of it there is the following code to launch Phoenix.

```python
import phoenix as px

session = px.launch_app(trace=trace_ds)
session.view()
```

Since I am already running Phoenix on another server in a Docker container, how can I get access to a session from that instance? Is there a connect method or something? I tried the following with Client, but was unsuccessful.

```python
spans_df = px.Client(endpoint="http://192.168.1.69:6006").get_spans_dataframe()
spans_df[["name", "span_kind", "attributes.input.value", "attributes.retrieval.documents"]].head()
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[10], line 2
      1 spans_df = px.Client(endpoint="http://192.168.1.69:6006").get_spans_dataframe()
----> 2 spans_df[["name", "span_kind", "attributes.input.value", "attributes.retrieval.documents"]].head()

TypeError: 'NoneType' object is not subscriptable
```
Same goes with the example notebook - https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/llm_ops_overview.ipynb How can I connect to an existing running Phoenix server in Docker versus launching a new one in code?
Mikyo Appreciate all your help. I was able to get the Node.js example you shared up and running. I have some more basic questions now that I am able to see a few traces show up in Phoenix. It seems like all the examples I have worked with are ending up in the default project even though I set the 'model_id' to different values.

```python
# Set resource attributes for the name and version for your application
resource = Resource(
    attributes={
        "model_id": "langchain-llm-tracing",  # Set this to any name you'd like for your app
        "model_version": "1.0",  # Set this to a version number string
    }
)
```

```typescript
const provider = new NodeTracerProvider({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: "chat-service",
  }),
});
```

```typescript
const provider = new NodeTracerProvider({
  resource: new Resource({
    // Arize specific - The name of a new or preexisting model you
    // want to export spans to
    "model_id": "Aporia_Testing",
    "model_version": "1.0",
  }),
});
```

I am also curious where the value for the 'name' field comes from - maybe pulling this in from the auto-instrumentation? I see it picking up RetrievalQAChain from the Node.js example you shared as well as the LangGraph notebook I was testing with.
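(One hedged observation on the project question: `model_id` and `model_version` are Arize-cloud resource attributes. For a self-hosted Phoenix instance, the project a trace lands in is, per the OpenInference conventions, taken from the `openinference.project.name` resource attribute - an assumption here, not verified against this setup - which would explain why everything lands in 'default'. A sketch of the attribute dict that would be passed to OpenTelemetry's `Resource(attributes=...)`:)

```python
# Hedged sketch: for self-hosted Phoenix, the project is assumed to come
# from the "openinference.project.name" resource attribute rather than
# the Arize-specific "model_id". This dict would be passed to
# opentelemetry's Resource(attributes=...).
PROJECT_NAME_KEY = "openinference.project.name"  # assumed OpenInference key

resource_attributes = {
    PROJECT_NAME_KEY: "langchain-llm-tracing",  # project shown in the Phoenix UI
}
```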
Appreciate it, Mikyo! Deno isn't a hard requirement and was just a way to test in the same Jupyter notebook environment as the Python example you helped me with yesterday. Let me try to get the Node example you shared running and if I have any questions, I will ping in here. Thanks again!
Okay, now that I was able to get the Python version of LangChain instrumentation working, I wanted to get the same working for TypeScript following this documentation. I am testing this in a Deno Jupyter notebook:

```typescript
/* instrumentation.ts */
import { LangChainInstrumentation } from "npm:@arizeai/openinference-instrumentation-langchain";
import { ConsoleSpanExporter } from "npm:@opentelemetry/sdk-trace-base";
import {
  NodeTracerProvider,
  SimpleSpanProcessor,
} from "npm:@opentelemetry/sdk-trace-node";
import { Resource } from "npm:@opentelemetry/resources";
import { OTLPTraceExporter as ProtoOTLPTraceExporter } from "npm:@opentelemetry/exporter-trace-otlp-proto";
import { diag, DiagConsoleLogger, DiagLogLevel } from "npm:@opentelemetry/api";
import * as CallbackManagerModule from "npm:@langchain/core/callbacks/manager";

// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

// Your Arize Space and API Keys, which can be found in the UI
// metadata.set('space_key', 'your-space-key');
// metadata.set('api_key', 'your-api-key');

const provider = new NodeTracerProvider({
  resource: new Resource({
    // Arize specific - The name of a new or preexisting model you
    // want to export spans to
    "model_id": "Aporia_Testing",
    "model_version": "1.0",
  }),
});

// add as another SpanProcessor below the previous SpanProcessor
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new ProtoOTLPTraceExporter({
      // This is the url where your phoenix server is running
      url: "http://192.168.1.69:6006/v1/traces",
    }),
  ),
);

const lcInstrumentation = new LangChainInstrumentation();
// LangChain must be manually instrumented as it doesn't have
// a traditional module structure
lcInstrumentation.manuallyInstrument(CallbackManagerModule);

provider.register();
```
```
@opentelemetry/api: Registered a global for diag v1.8.0.
Manually instrumenting @langchain/core/callbacks
Applying patch for @langchain/core/callbacks
Stack trace:
TypeError: Cannot add property openInferencePatched, object is not extensible
    at LangChainInstrumentation.patch (file:///Users/hodgesz/Library/Caches/deno/npm/registry.npmjs.org/@arizeai/openinference-instrumentation-langchain/0.0.5/dist/src/instrumentation.js:35:37)
    at LangChainInstrumentation.manuallyInstrument (file:///Users/hodgesz/Library/Caches/deno/npm/registry.npmjs.org/@arizeai/openinference-instrumentation-langchain/0.0.5/dist/src/instrumentation.js:15:14)
    at <anonymous>:41:19
```

Any thoughts on this 'object is not extensible' error?
Awesome, thanks again Mikyo! Now I will start working with the Javascript/Typescript instrumentor.
I really appreciate your quick reply! This fixed my problem. One more quick follow-up, as I am still a newb with Phoenix. I am guessing that in this example the LangChainInstrumentor takes advantage of LangChain callbacks, so the instrumentation overhead in terms of latency is small. Do I have that correct? If so, is that how most of the instrumentation works, or are there cases where you might need to proxy calls, which would add more overhead or latency? Really appreciate all your help!
Hello, I just installed Phoenix as a stand-alone instance via Docker following these instructions. I verified it is running by viewing the web portal in a browser on port 6006. I then have a LangChain notebook on another machine following this example, but commenting out the headers since I am not using the cloud version of Phoenix or authentication.

```python
import os

# Import open-telemetry dependencies
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.langchain import LangChainInstrumentor

# Set the Space and API keys as headers for authentication
# headers = f"space_key={ARIZE_SPACE_KEY},api_key={ARIZE_API_KEY}"
# os.environ['OTEL_EXPORTER_OTLP_TRACES_HEADERS'] = headers

# Set resource attributes for the name and version for your application
resource = Resource(
    attributes={
        "model_id": "langchain-llm-tracing",  # Set this to any name you'd like for your app
        "model_version": "1.0",  # Set this to a version number string
    }
)

# Define the span processor as an exporter to the desired endpoint
endpoint = "http://192.168.1.69:6006/v1/traces"
span_exporter = OTLPSpanExporter(endpoint=endpoint)
span_processor = SimpleSpanProcessor(span_exporter=span_exporter)

# Set the tracer provider
tracer_provider = trace_sdk.TracerProvider(resource=resource)
tracer_provider.add_span_processor(span_processor=span_processor)
trace_api.set_tracer_provider(tracer_provider=tracer_provider)

# Finish automatic instrumentation
LangChainInstrumentor().instrument()
```

When I execute the LangChain code I get the following errors on the client side:
```
Transient error StatusCode.UNAVAILABLE encountered while exporting traces to 192.168.1.69:6006, retrying in 1s.
Transient error StatusCode.UNAVAILABLE encountered while exporting traces to 192.168.1.69:6006, retrying in 2s.
Transient error StatusCode.UNAVAILABLE encountered while exporting traces to 192.168.1.69:6006, retrying in 4s.
Transient error StatusCode.UNAVAILABLE encountered while exporting traces to 192.168.1.69:6006, retrying in 8s.
Transient error StatusCode.UNAVAILABLE encountered while exporting traces to 192.168.1.69:6006, retrying in 16s.
```

Looking at the Phoenix server Docker logs, I just see the following messages:

```
INFO: 192.168.65.1:33533 - "PRI %2A HTTP/2.0" 404 Not Found
WARNING: Invalid HTTP request received.
INFO: 192.168.65.1:33654 - "PRI %2A HTTP/2.0" 404 Not Found
WARNING: Invalid HTTP request received.
INFO: 192.168.65.1:33755 - "PRI %2A HTTP/2.0" 404 Not Found
WARNING: Invalid HTTP request received.
```

Any ideas what this could be or what I can do to dig into the details further?
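(A hedged reading of those server logs: `"PRI %2A HTTP/2.0" 404` is the HTTP/2 connection preface of a gRPC client arriving at an HTTP/1.1 server, which suggests the grpc OTLP exporter is being pointed at Phoenix's HTTP port. Assuming Phoenix accepts OTLP over http/protobuf at `/v1/traces` on port 6006, swapping the exporter's import path - same class name, different module - would look like this sketch:)

```python
# Hedged sketch: use the http/protobuf OTLP exporter instead of the grpc
# one when exporting to Phoenix's HTTP port. Only the import path changes;
# the class name stays OTLPSpanExporter.
#
# Before (grpc exporter, speaks HTTP/2 - mismatch with port 6006):
#   from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
# After (http exporter, assumed to match Phoenix's /v1/traces route):
#   from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

endpoint = "http://192.168.1.69:6006/v1/traces"
# span_exporter = OTLPSpanExporter(endpoint=endpoint)
```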
