Hey, I've got Arize Phoenix running locally in Docker. I have a Streamlit-based app that uses OpenAI for LLM-based chats, and I'm trying to get observability working in it. I followed the OpenAI getting-started guide, and that worked for me in a notebook, but the same setup doesn't work in the Streamlit app. Inside the app, do I need to do something different? I also tried the DSPy-based example from your GitHub, but I've had no luck getting it up and running.
To get observability working in your Streamlit app using OpenAI, you need to ensure that the tracing setup is correctly integrated within the Streamlit app's main thread. The issue might be due to asynchronous execution not being supported in non-main threads, which is common in Streamlit apps.

1. Ensure Main Thread Execution: Make sure the tracing setup is executed in the main thread of your Streamlit app.
2. Use nest_asyncio: Apply nest_asyncio to allow nested event loops, which can help with asynchronous execution in Streamlit.

Here's a basic example to integrate OpenAI tracing in a Streamlit app:

```python
import streamlit as st
import nest_asyncio
import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Apply nest_asyncio
nest_asyncio.apply()

# Set up tracing
endpoint = "http://localhost:6006/v1/traces"
resource = Resource(attributes={})
tracer_provider = trace_sdk.TracerProvider(resource=resource)
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
trace_api.set_tracer_provider(tracer_provider=tracer_provider)

# Instrument OpenAI
OpenAIInstrumentor().instrument()

# Streamlit app code
st.title("OpenAI LLM Chat")
query = st.text_input("Enter your query:")
if query:
    # Your OpenAI call here
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=query,
        max_tokens=50,
    )
    st.write(response.choices[0].text)
```

Would you like more detail on any specific part of this setup?

Sources:
- Slack Discussion on Async Execution
- OpenAI Integration Guide
From Mikyo in the other thread: We've got a bunch of examples with Streamlit here! https://github.com/Arize-ai/openinference/tree/main/python/examples
I followed the examples in this repo. I also tried creating manual spans, and those manual test spans are going through. But OpenAI.chat.completions.create traces are not getting logged.
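For context, the call that isn't producing traces looks roughly like this (a sketch, not my exact code; the model name is illustrative):

```python
# Roughly the shape of the untraced call -- assumes instrumentation has
# already been applied and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```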
Can you please show how you're instrumenting OpenAI?
I'm using the following code.

```python
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

collector_endpoint = "http://localhost:6006/v1/traces"

def instrument():
    print("Instrumenting OpenAI")
    resource = Resource(attributes={"project_name": "ragpile"})
    tracer_provider = trace_sdk.TracerProvider(resource=resource)
    span_exporter = OTLPSpanExporter(endpoint=collector_endpoint)
    span_processor = SimpleSpanProcessor(span_exporter=span_exporter)
    tracer_provider.add_span_processor(span_processor=span_processor)
    trace_api.set_tracer_provider(tracer_provider=tracer_provider)
    OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```
Hey Aditya A., when you docker compose, Phoenix will not be on localhost. It will be using Docker's internal DNS: https://github.com/Arize-ai/phoenix/blob/main/examples/manually-instrumented-chatbot/compose.yml#L12
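For example, something along these lines (a sketch; `phoenix` is the service name from that compose file, and `IN_DOCKER` is a hypothetical env var you'd set yourself in compose):

```python
import os

# Inside docker compose, Phoenix is reachable via its service name, not localhost.
# "phoenix" here matches the service name in the linked compose.yml -- use
# whatever your compose file names the Phoenix service.
if os.environ.get("IN_DOCKER"):
    collector_endpoint = "http://phoenix:6006/v1/traces"
else:
    collector_endpoint = "http://localhost:6006/v1/traces"
```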
Hmm. But with this setup I'm able to get the following trace logged. Also, I can get traces working from a Jupyter notebook using the localhost endpoint:

```python
with tracer.start_as_current_span("manual-span") as span:
    print("This is a test span.")
```
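(`tracer` here is obtained roughly like this, from the global provider that `instrument()` registered:)

```python
# Pulls the tracer from the global provider set via set_tracer_provider().
from opentelemetry import trace as trace_api

tracer = trace_api.get_tracer(__name__)
```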
And my Streamlit app is not running in a Docker container.
Where is Streamlit running?
The Streamlit app is running locally using `streamlit run`.
To troubleshoot OTEL exports, you can add a ConsoleSpanExporter to your tracer provider. This will print the spans to the console, allowing you to verify whether the spans are being created and exported correctly. Here's how you can modify your instrument function to include a ConsoleSpanExporter:

```python
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

collector_endpoint = "http://localhost:6006/v1/traces"

def instrument():
    print("Instrumenting OpenAI")
    resource = Resource(attributes={"project_name": "ragpile"})
    tracer_provider = trace_sdk.TracerProvider(resource=resource)
    # OTLP Span Exporter
    span_exporter = OTLPSpanExporter(endpoint=collector_endpoint)
    span_processor = SimpleSpanProcessor(span_exporter=span_exporter)
    tracer_provider.add_span_processor(span_processor)
    # Console Span Exporter for debugging
    console_exporter = ConsoleSpanExporter()
    console_processor = SimpleSpanProcessor(console_exporter)
    tracer_provider.add_span_processor(console_processor)
    trace_api.set_tracer_provider(tracer_provider=tracer_provider)
    OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# Call the instrument function
instrument()
```

This setup will print the spans to the console, which can help you verify whether the spans are being created and exported correctly. If the manual spans are being logged but the OpenAI spans are not, it might indicate an issue with the OpenAI instrumentation. Would you like more detail on any specific part of this setup?

Sources:
- https://github.com/arize-ai/openinference/blob/main/python/instrumentation/openinference-instrumentation-langchain/examples/openai_chat_stream.py
- https://github.com/arize-ai/openinference/blob/main/python/instrumentation/openinference-instrumentation-openai/examples/chat_completions_stream.py
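One more thing worth checking, since `streamlit run` re-executes the script on every interaction: make sure `instrument()` only runs once per process, so providers and instrumentation aren't re-registered on each rerun. A rough sketch of one way to guard it (assumes Streamlit >= 1.18 for `st.cache_resource`):

```python
import streamlit as st

@st.cache_resource  # cached per process, so the body runs once, not on every rerun
def setup_instrumentation():
    instrument()  # the instrument() function defined above
    return True

setup_instrumentation()
```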
I checked the Docker logs for Phoenix, and all the REST calls are logged there. I'm seeing the traces endpoint invoked in the following cases:
- From the Jupyter notebook, for OpenAI traces
- From the Streamlit app, for manual traces
Hmm, it's a bit hard to tell what the issue might be. If there's a minimal working example, I'd be happy to take a look. Sorry about that.
