Looking for some help with instrumentation for OpenAI client + MCP calls. I found `from openinference.instrumentation.openai import OpenAIInstrumentor` and I'm seeing traces in one client. However, when I use this in my program, which runs through several test cases to evaluate OpenAI, I'm not seeing any traces come through. Are there any "gotchas" for using this? Code in the thread 🧵
import logging
import time
import os
from phoenix.otel import register
import uuid
from dotenv import load_dotenv
# Load environment variables first
load_dotenv('.env.local')
from ..config import Config
logger = logging.getLogger(__name__)
PROJECT_NAME = "evaluators"
PHOENIX_API_KEY = os.getenv("PHOENIX_API_KEY")
PHOENIX_ENDPOINT = os.getenv("PHOENIX_COLLECTOR_ENDPOINT")
logger.warning(f"Registering Phoenix tracer provider for project {PROJECT_NAME} with endpoint {PHOENIX_ENDPOINT}/v1/traces")
tracer_provider = register(
    project_name=PROJECT_NAME,
    endpoint=f"{PHOENIX_ENDPOINT}/v1/traces",
    auto_instrument=True,
    set_global_tracer_provider=True,
    headers={"Authorization": f"Bearer {PHOENIX_API_KEY}"},
)
from openinference.instrumentation.openai import OpenAIInstrumentor
from datetime import datetime, timedelta
from openai import OpenAI
from .base_agent_runner import BaseAgentRunner, AgentCall
from ..config import Config
from ..phoenix_client import PhoenixClient, PhoenixTraceProcessor
class OpenAIRunner(BaseAgentRunner):
    """OpenAI-specific agent runner that executes commands via OpenAI API"""

    def __init__(self, model_name: str = "gpt-4o"):
        """Initialize with OpenAI configuration"""
        self.config = Config()
        self.client = OpenAI(api_key=self.config.OPENAI_API_KEY)
        self.model_name = model_name

    def run_agent(self,
                  eval_call: AgentCall,
                  mcp_server_label: str,
                  mcp_server_url: str,
                  temperature: float = 0.1,
                  phoenix_client: PhoenixClient = None,
                  phoenix_trace_processor: PhoenixTraceProcessor = None,
                  ) -> str:
        """
        Run a complete evaluation using OpenAI API

        Args:
            eval_call: The AgentCall containing commands to execute

        Returns:
            str: Session ID for tracking the execution
        """
        OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
        session_id = str(uuid.uuid4())
        for command in eval_call.commands:
            logger.info(f"Running command: {command}")
            response = self.client.responses.create(
                model=self.model_name,
                input=command,
                temperature=temperature,
                tools=[
                    {
                        "type": "mcp",
                        "server_label": mcp_server_label,
                        "server_url": mcp_server_url,
                        "require_approval": "never",
                    },
                ],
            )
            logger.info(f"OpenAI Response: {response.output_text}")

A common "gotcha" is that the MCP instrumentor (openinference-instrumentation-mcp) does not generate its own telemetry; it only propagates context. You must instrument both the OpenAI client and the MCP client/server with OpenTelemetry (e.g., using OpenAIInstrumentor().instrument(...) and registering a tracer provider) to see traces. If you only instrument one side, traces may not appear or may be incomplete. Also, ensure that instrumentation is set up before importing or using the OpenAI client, and that your tracer provider is correctly configured to export to Phoenix or your collector endpoint. For a working example and more details, see the Phoenix MCP Tracing documentation and the OpenInference MCP tracing guide. Would you like a step-by-step breakdown or a code example?

Sources:
- Phoenix MCP Tracing documentation
- OpenInference MCP tracing guide
💡 Hint: Mention RunLLM in the thread for followups.
Thanks [private user]. Do I need to do that before getting the env vars as well?
This is the edited version of the OpenAIRunner .py file.
import logging
import time
import os
from phoenix.otel import register
from phoenix.trace.dsl import SpanQuery
import uuid
from dotenv import load_dotenv
from openinference.instrumentation.openai import OpenAIInstrumentor
# Load environment variables first
load_dotenv('.env.local')
from ..config import Config
logger = logging.getLogger(__name__)
PROJECT_NAME = "evaluators"
PHOENIX_API_KEY = os.getenv("PHOENIX_API_KEY")
PHOENIX_ENDPOINT = os.getenv("PHOENIX_COLLECTOR_ENDPOINT")
logger.info(f"Registering Phoenix tracer provider for project {PROJECT_NAME} with endpoint {PHOENIX_ENDPOINT}/v1/traces")
tracer_provider = register(
    project_name=PROJECT_NAME,
    endpoint=f"{PHOENIX_ENDPOINT}/v1/traces",
    auto_instrument=True,
    set_global_tracer_provider=True,
    headers={"Authorization": f"Bearer {PHOENIX_API_KEY}"},
)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
...

Ok, good to know. My MCP server is in another file. I just added the other instrumentation.
import logging
import sys
import argparse
from mcp.server.fastmcp import FastMCP
from pydantic import BaseModel, field_validator
from typing import Optional
from dotenv import load_dotenv
from datetime import datetime, timedelta
from phoenix.otel import register
load_dotenv('.env.local')
tracer_provider = register(auto_instrument=True)
tracer = tracer_provider.get_tracer("mcp")

Unfortunately, I'm still not seeing any traces come through.
I saw traces when I just had a single python script with an OpenAI client that I would call once. Now that I merged that into a process that runs through a set of test cases, I don't see anything coming through. Still running locally.
I'm using localhost
Thank you! 😊
Ok! I'll take a closer look.
I set it as the latter. Your script above also sets it as http://localhost:6006/ and then appends /v1/traces via f"{PHOENIX_ENDPOINT}/v1/traces" -- the double slash still worked?
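On the double slash: most HTTP servers normalize // in a path, which is likely why it still worked, but you can avoid it entirely by stripping any trailing slash from the base before appending the path. A small sketch (the helper name is made up for illustration):

```python
# Build the traces endpoint without depending on whether the configured
# base URL ends in a trailing "/"; rstrip removes it first.
def traces_endpoint(base: str) -> str:
    return f"{base.rstrip('/')}/v1/traces"

print(traces_endpoint("http://localhost:6006/"))  # http://localhost:6006/v1/traces
print(traces_endpoint("http://localhost:6006"))   # http://localhost:6006/v1/traces
```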
To confirm, you're using client.chat.completions.create while I am trying to use responses.create to add the tools config
resp = client.responses.create(
    model="gpt-4.1",
    tools=[
        {
            "type": "mcp",
            "server_label": "mcp-server-test",
            "server_url": f"{MCP_SERVER_URL}",
            "require_approval": "never",
        },
    ],
    input=query,
)

It's called out as the OpenAI API with MCP server compatibility: https://platform.openai.com/docs/guides/tools-remote-mcp
Thanks for digging further into this. I'm able to see tool traces in Phoenix now, but none of the LLM traces (e.g. responses).
