Any recommendation on how to instrument OpenAI real-time API?
To instrument OpenAI's real-time API with Phoenix, use the OpenAIInstrumentor class from the openinference.instrumentation.openai module. This allows you to trace OpenAI API calls and view them in Phoenix. First, ensure you have the necessary packages installed:
```
pip install openinference-instrumentation-openai openai
```

Then, set up the instrumentation in your application:

```
import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
endpoint = "http://localhost:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
client = openai.OpenAI()
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Write a haiku."}],
max_tokens=20,
)
print(response.choices[0].message.content)
```

This setup will trace all OpenAI API calls and send the data to your Phoenix instance for observability and evaluation.
Would you like more detail on any specific part of this process?
Sources:
- OpenAI Instrumentation Guide
- Phoenix Tracing Setup
We have not looked into this yet (the realtime API only came out recently), and the websocket part of this API may require additional handling. Would you mind opening an enhancement request so we can track this?
