Hello, I'm trying to log my traces, and I see that we always need to use the .instrument() function of the instrumentor for each supported client (OpenAIInstrumentor, LlamaIndexInstrumentor, etc.). Can I log traces and all the other metrics by sending raw JSON directly? Or, if, for example, I have my own RAG in C# / .NET 7.0, can I use Phoenix?
Hi Clash, thanks for your interest in Phoenix! We currently only maintain auto-instrumentors in Python and JavaScript, but because we're fully OpenTelemetry compatible, you can send your own traces manually. We don't have manual instrumentation examples in other languages, but let me see if I can find some links for you that might help.
You may be able to follow these examples with the notes below.
Thanks Roger Y. for the reply, but can I only track traces this way? Can I also track evaluation metrics? Is there a .NET example where a Phoenix server is started with the equivalent of the Python command "python -m phoenix.server.main serve" and traces and evals are sent? Many thanks!
Is there a .NET example where a Phoenix server is started with the equivalent of the Python command "python -m phoenix.server.main serve" and traces and evals are sent?
I think you’re the first person asking about .NET, so we have not prepared any specific examples. As for the server, we have a Docker image that you can use without invoking Python. It’s just a server that runs in the background and collects things, so Docker will work just fine.
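Something like the following should work (the image name is the one published on Docker Hub at the time of writing; double-check the current tag in the docs):

```shell
# Run the Phoenix server in Docker instead of `python -m phoenix.server.main serve`.
docker run -d --name phoenix -p 6006:6006 arizephoenix/phoenix:latest
# The UI and the OTLP HTTP collector are then both reachable at http://localhost:6006
```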
can I track only the traces this way? Can I also track evaluation metrics?
Eval metrics are sent separately; you can read about it here to start. We only have a Python client for sending them for now. As for generating the evals, you can read more about it here.
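To make that concrete, here is a rough sketch of the shape the Python client expects: one row per span, indexed by span ID. The span IDs, labels, and scores below are invented, and the phoenix calls are shown as comments because they require the `arize-phoenix` package and a running server; see the links above for the authoritative API:

```python
# Sketch of an evals payload, assuming the "one row per span,
# indexed by context.span_id" convention used by the Python client.
import pandas as pd

evals_df = pd.DataFrame(
    {
        "context.span_id": ["0f5bb2e69a0e4849", "b7d7429fb1b64a42"],  # invented IDs
        "label": ["correct", "incorrect"],
        "score": [1, 0],
        "explanation": ["Answer matches the context.", "Answer hallucinates a date."],
    }
).set_index("context.span_id")

# Uploading needs a running Phoenix server, so the calls below are
# illustrative comments rather than executed code:
#   import phoenix as px
#   from phoenix.trace import SpanEvaluations
#   px.Client().log_evaluations(
#       SpanEvaluations(eval_name="QA Correctness", dataframe=evals_df)
#   )
```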
thx Roger Y. I'm trying to execute the following code, but my Phoenix instance at http://localhost:6006/ doesn't track anything. If I use the Python code, it works. Can you help me please?

ASP.NET Core:

using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
using OpenTelemetry.Exporter;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using System.Diagnostics;

var builder = WebApplication.CreateBuilder(args);
builder.Logging.ClearProviders();
builder.Services.AddOpenTelemetryTracing(tracerProviderBuilder =>
{
    tracerProviderBuilder
        .SetResourceBuilder(
            ResourceBuilder.CreateDefault()
                .AddService(builder.Environment.ApplicationName))
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddOtlpExporter(options =>
        {
            options.Endpoint = new Uri("http://127.0.0.1:6006/v1/traces");
            options.Protocol = OtlpExportProtocol.HttpProtobuf;
        });
});

var app = builder.Build();

app.MapGet("/", () => "Hello from OpenTelemetry Tracing!");

app.MapGet("/simulate-llm-call", () =>
{
    using var activitySource = new ActivitySource("SimulateLLMCall");
    using var activity = activitySource.StartActivity("LLM Call Simulation", ActivityKind.Client);
    activity?.SetTag("llm.model_name", "gpt-3.5-turbo");
    activity?.SetTag("llm.function_call", "{function_name: 'generate_text', args: ['Hello, world!']}");
    activity?.SetTag("llm.response", "Hello, this is a response from a simulated LLM call.");
    return "LLM call simulated and traced!";
});

app.Run();

Python code:

import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace_api.set_tracer_provider(tracer_provider)
OpenAIInstrumentor().instrument()

if __name__ == "__main__":
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a haiku."}],
        max_tokens=20,
    )
    print(response.choices[0].message.content)
Maybe you need .AddSource("SimulateLLMCall")? I don’t actually know anything about dotnet…but through some trial and error I got the following to work on my mac.
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
using OpenTelemetry.Exporter;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using System.Diagnostics;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService(builder.Environment.ApplicationName))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddSource("SimulateLLMCall")
        .AddOtlpExporter(options =>
        {
            options.Endpoint = new Uri("http://127.0.0.1:6006/v1/traces");
            options.Protocol = OtlpExportProtocol.HttpProtobuf;
        }));
var app = builder.Build();
app.MapGet("/", () => "Hello from OpenTelemetry Tracing!");
app.MapGet("/simulate-llm-call", () =>
{
    using var activitySource = new ActivitySource("SimulateLLMCall");
    using var activity = activitySource.StartActivity("LLM Call Simulation", ActivityKind.Client);
    activity?.SetTag("llm.model_name", "gpt-3.5-turbo");
    activity?.SetTag("llm.function_call", "{function_name: 'generate_text', args: ['Hello, world!']}");
    activity?.SetTag("llm.response", "Hello, this is a response from a simulated LLM call.");
    return "LLM call simulated and traced!";
});
app.Run();
I’m using the SDK I just downloaded:
% dotnet --list-sdks
8.0.204 [/usr/local/share/dotnet/sdk]

The embedding visualization currently only works separately from tracing, and only as dataset parameters to launch_app(). See some of the Colabs for Embedding Analysis here: https://docs.arize.com/phoenix/notebooks#embedding-analysis
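For a sense of what "dataset parameters to launch_app()" means, here is a rough sketch. The column names and toy vectors are invented, and the px.* calls are shown as comments because they require the `arize-phoenix` package and spin up a server; the linked Colabs are the authoritative reference:

```python
# Sketch of preparing an embeddings dataframe for Phoenix's embedding analysis,
# with invented column names and random toy vectors.
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "prompt": ["What is Phoenix?", "How do I send traces?"],
        "prompt_vector": [np.random.rand(8), np.random.rand(8)],  # toy embeddings
    }
)

# Launching the app with this dataset (illustrative comments, not executed):
#   import phoenix as px
#   schema = px.Schema(
#       embedding_feature_column_names={
#           "prompt_embedding": px.EmbeddingColumnNames(
#               vector_column_name="prompt_vector",
#               raw_data_column_name="prompt",
#           )
#       }
#   )
#   px.launch_app(primary=px.Dataset(dataframe=df, schema=schema, name="my-dataset"))
```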
I’m running Phoenix as a container. Let’s say I have the dataset prepared. How can I load it into Phoenix without restarting it?
Got it. Thank you for your prompt response.
