Hi, is there a way to get the trace_id back from the client side so feedback can be attached to that trace? I think LangSmith has this; per the docs https://docs.smith.langchain.com/observability/how_to_guides/trace_with_langchain, with the run_id I can pass that to LangSmith to capture user feedback from the client:
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.tracers.context import collect_runs

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Please respond to the user's request only based on the given context."),
    ("user", "Question: {question}\n\nContext: {context}")
])
model = ChatOpenAI(model="gpt-4o-mini")
output_parser = StrOutputParser()
chain = prompt | model | output_parser

question = "Can you summarize this morning's meetings?"
context = "During this morning's meeting, we solved all world conflict."

with collect_runs() as cb:
    result = chain.invoke({"question": question, "context": context})
    # Get the root run id
    run_id = cb.traced_runs[0].id
print(run_id)
```
Phoenix does not provide a direct equivalent to LangSmith's collect_runs for retrieving a trace_id after a chain invocation. However, you can access the current span (which contains the trace_id) using get_current_span from openinference.instrumentation.langchain, but you must call it within the active span context, typically inside a with tracer.start_as_current_span(...) block or during the chain execution. If called outside an active span, it will return None or an invalid span. See the example below:
```python
from openinference.instrumentation.langchain import get_current_span

with tracer.start_as_current_span("my_span") as span:
    # Run your chain here
    chain.invoke({"input": question})
    phoenix_span = get_current_span()
    trace_id = phoenix_span.get_span_context().trace_id
    print(f"Trace ID: {format(trace_id, '032x')}")
```
This approach is necessary because Phoenix/OpenInference relies on OpenTelemetry's context propagation, which differs from LangSmith's callback-based tracing. For LangChain, make sure you use get_current_span from openinference.instrumentation.langchain, not from opentelemetry.trace, to get the correct span during execution. See GitHub Issue #4158 and the Slack discussion for details.

Would you like more detail or a full example?

Sources:
- GitHub Issue #4158
- Slack discussion: span_id retrieval
💡 Hint: Mention RunLLM in the thread for followups.
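For the LangSmith side of the original question: the run_id collected via collect_runs at the top of the thread can be passed to LangSmith's Client.create_feedback. A minimal sketch, assuming the langsmith package is installed and a LANGSMITH_API_KEY is configured; the run_id here is a placeholder for the one captured client-side:

```python
import os
import uuid

# Hypothetical run_id: in practice this is cb.traced_runs[0].id from collect_runs().
run_id = uuid.uuid4()

if os.environ.get("LANGSMITH_API_KEY"):
    from langsmith import Client

    client = Client()
    # Attach user feedback to the run; the key and score are up to you.
    client.create_feedback(
        run_id,
        key="user-score",
        score=1.0,
        comment="Helpful summary",
    )
    print("feedback sent for run", run_id)
else:
    print("LANGSMITH_API_KEY not set; skipping feedback upload")
```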
A full example would have been better, 🔒[private user]; will try it out, 🔒[private user], and if anything comes up will let you know. Thanks
Certainly! Here's a full example showing how to get the trace_id for feedback in a Phoenix-instrumented LangChain app. The key is to call get_current_span from openinference.instrumentation.langchain within the active span context, then extract the trace_id:
```python
from openinference.instrumentation.langchain import get_current_span
from opentelemetry.trace import format_trace_id
from phoenix.otel import register
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Register Phoenix tracer
tracer_provider = register(project_name="my-llm-app")
tracer = tracer_provider.get_tracer(__name__)

prompt = ChatPromptTemplate.from_template("{x} {y} {z}?").partial(x="why is", z="blue")
chain = prompt | ChatOpenAI(model_name="gpt-3.5-turbo")

with tracer.start_as_current_span("my_chain_span") as span:
    result = chain.invoke(dict(y="sky"))
    phoenix_span = get_current_span()
    if phoenix_span is not None:
        trace_id = phoenix_span.get_span_context().trace_id
        print("Phoenix Trace ID:", format_trace_id(trace_id))
    else:
        print("No active span found")
```
This will print the trace_id in the correct format for Phoenix. You can then use this trace_id to associate feedback or annotations.

Sources:
- GitHub Issue #4158
- Slack discussion: span_id retrieval
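A quick note on the two formatting styles that appear in this thread: format(trace_id, '032x') and OpenTelemetry's format_trace_id produce the same 32-character lowercase hex string, since trace ids are 128-bit integers (span ids are 64-bit and format to 16 hex chars). A stdlib-only illustration:

```python
# OpenTelemetry trace ids are 128-bit ints; Phoenix displays them as
# 32-char lowercase hex. format_trace_id(x) is equivalent to format(x, "032x"),
# and format_span_id(x) to format(x, "016x") for 64-bit span ids.
trace_id = 0x0123456789ABCDEF0123456789ABCDEF
span_id = 0x00F067AA0BA902B7

hex_trace = format(trace_id, "032x")
hex_span = format(span_id, "016x")

print(hex_trace)  # 0123456789abcdef0123456789abcdef
print(hex_span)   # 00f067aa0ba902b7
```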
The code doesn't provide the span ID, whereas I am getting a trace in Phoenix 🔒[private user] and 🔒[private user]
t prior to execution of 'app.agent.main'; this may result in unpredictable behaviour
🔭 OpenTelemetry Tracing Details 🔭
| Phoenix Project: uni-ai-summarizer
| Span Processor: BatchSpanProcessor
| Collector Endpoint: localhost:4317
| Transport: gRPC
| Transport Headers: {'user-agent': '****'}
|
| Using a default SpanProcessor. `add_span_processor` will overwrite this default.
🔭 OpenTelemetry Tracing Details 🔭
| Phoenix Project: my-llm-app
| Span Processor: SimpleSpanProcessor
| Collector Endpoint: localhost:4317
| Transport: gRPC
| Transport Headers: {'user-agent': '****'}
|
| Using a default SpanProcessor. `add_span_processor` will overwrite this default.
|
| ⚠️ WARNING: It is strongly advised to use a BatchSpanProcessor in production environments.
|
| `register` has set this TracerProvider as the global OpenTelemetry default.
| To disable this behavior, call `register` with `set_global_tracer_provider=False`.
No active span found

If you see "No active span found" when calling get_current_span, it means you're calling it outside the context of an active span. In Phoenix/OpenInference, get_current_span only returns a valid span if called during the execution of a traced block (e.g., inside with tracer.start_as_current_span(...) or within a LangChain-instrumented chain); if called before or after, it returns None. This is different from simply seeing traces in the UI, which can happen whenever auto-instrumentation is working; you still need to call get_current_span at the right time to get the span or span_id for feedback or annotation purposes. See GitHub Issue #6399 and the Phoenix feedback docs.

Would you like a code snippet showing exactly where to call get_current_span to get the span_id?

Sources:
- GitHub Issue #6399
- Phoenix feedback docs
RunLLM, provide the code for the answer
Certainly! Here is a code example that shows how to get the Phoenix span_id (and trace_id) at the correct point, inside the traced context, so you can use it for feedback or annotations:
```python
from phoenix.otel import register
from openinference.instrumentation.langchain import get_current_span
from opentelemetry.trace import format_span_id, format_trace_id
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Register Phoenix tracer
tracer_provider = register(project_name="my-llm-app")
tracer = tracer_provider.get_tracer(__name__)

prompt = ChatPromptTemplate.from_template("{x} {y} {z}?").partial(x="why is", z="blue")
chain = prompt | ChatOpenAI(model_name="gpt-3.5-turbo")

with tracer.start_as_current_span("my_chain_span") as span:
    result = chain.invoke(dict(y="sky"))
    phoenix_span = get_current_span()
    if phoenix_span is not None:
        span_id = format_span_id(phoenix_span.get_span_context().span_id)
        trace_id = format_trace_id(phoenix_span.get_span_context().trace_id)
        print("Phoenix Span ID:", span_id)
        print("Phoenix Trace ID:", trace_id)
    else:
        print("No active span found")
```
Call get_current_span after the chain.invoke, but still inside the with block. This ensures you get the correct span and IDs for feedback.

Sources:
- Phoenix feedback docs
- GitHub Issue #6399
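On actually tying the span back to user feedback (the goal of the thread): the Phoenix feedback docs cited above describe a span-annotations REST endpoint for this. A rough sketch only; the endpoint path, port, and payload shape here are assumptions based on those docs, so verify them against your Phoenix version before relying on this:

```python
import json
import urllib.request

# Hypothetical 16-char hex span id, as printed by format_span_id above.
span_id = "00f067aa0ba902b7"

# Assumed payload shape for Phoenix's span-annotations endpoint.
payload = {
    "data": [
        {
            "span_id": span_id,
            "name": "user feedback",
            "annotator_kind": "HUMAN",
            "result": {"label": "thumbs-up", "score": 1},
        }
    ]
}

req = urllib.request.Request(
    "http://localhost:6006/v1/span_annotations",  # assumed local Phoenix server
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Uncomment with a running Phoenix server:
# urllib.request.urlopen(req)
print(payload["data"][0]["span_id"])
```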
Thanks 🔒[private user], I wish this was in the docs, it would be super valuable, especially if you are trying to tie the trace with user feedback. Appreciate it
