How can I add custom spans inside auto-instrumented LangChain traces created by LangChainInstrumentor().instrument()? They should appear as nested spans under the current span generated by auto-instrumenting the chain.
You cannot directly nest a manual span under an auto-instrumented LangChain span using standard OpenTelemetry context propagation, because LangChain's callback-based tracing is isolated from the usual OTEL context. Manual spans created with tracer.start_as_current_span will not automatically appear as children of LangChain's auto-instrumented spans, since the context is not shared between them. To achieve true nesting, you would need to implement a custom callback, or modify the callback manager to insert your custom span at the right point in the LangChain execution, similar to how LangChain attaches its own callbacks (slack reference).

As a workaround, you can create a custom retriever or chain component and wire in your own callback handler, following the LangChain pattern. This allows your custom logic to participate in the same callback system and thus be properly nested in the trace tree (slack reference).

Would you like a code example of how to do this?

Sources:
- Slack: Manual span context with LangChain auto-instrumentation
💡 Hint: Mention RunLLM in the thread for followups.
Certainly! To ensure your custom retriever is traced and its spans are nested under the auto-instrumented chain, inherit from BaseRetriever and let LangChain's callback system handle the tracing. Here's a minimal example using LangChainInstrumentor().instrument() and a custom retriever (example, docs):

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from openinference.instrumentation.langchain import LangChainInstrumentor
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)

class CustomRetriever(BaseRetriever):
    k: int

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [Document(page_content=f"dummy content at {i}") for i in range(self.k)]

retriever = CustomRetriever(k=3)
documents = retriever.invoke("what is the meaning of life?")
print(documents)
```

This ensures your custom retriever's spans are auto-nested under the parent chain span.

Would you like more detail on customizing the callback handler?

Sources:
- Custom retriever example
- Phoenix tracing FAQ: custom retriever
Yes, you can instrument a custom reranker in the same way: inherit from the appropriate LangChain base class (for example, implement it as a retriever or post-processor) and ensure it is invoked within the LangChain callback system. This allows Phoenix's auto-instrumentation to capture and nest reranker spans (source, source). Note, however, that some rerankers (like LlamaIndex's node post-processors) may not expose pre- and post-reranking data to the callback, so span detail may be limited.

Would you like a code example for a custom reranker with tracing?

Sources:
- Phoenix issue: reranker tracing
- Phoenix PR: reranker span kind
To nest manual spans, create the child spans within the context of a parent span: open a with tracer.start_as_current_span("parent") as parent: block and start the child spans inside it. This ensures the child spans are properly nested under the parent in the trace hierarchy (manual instrumentation guide, docs).

Example:

```python
with tracer.start_as_current_span("parent") as parent:
    # Parent span logic
    with tracer.start_as_current_span("child") as child:
        # Child span logic
        ...
```

If your functions are in different code paths, pass the parent span context explicitly using set_span_in_context (slack example).

Would you like more detail on context propagation?

Sources:
- Manual instrumentation guide
- Phoenix docs: custom spans
- Slack: parent/child span context
Certainly! Here's a minimal example showing how to nest a child span under a parent span using set_span_in_context to propagate the parent context (slack example):

```python
from opentelemetry import trace
from opentelemetry.trace import SpanKind, set_span_in_context

tracer = trace.get_tracer(__name__)

def create_child_span(tracer, parent_span):
    parent_context = set_span_in_context(parent_span)
    child_span = tracer.start_span(
        "child_span",
        kind=SpanKind.INTERNAL,
        context=parent_context,
    )
    # Do work in the child span, then end it explicitly
    child_span.end()

with tracer.start_as_current_span("parent_span") as parent_span:
    create_child_span(tracer, parent_span)
```

This ensures the child span is nested under the parent span in the trace.

Would you like more detail or a multi-function example?

Sources:
- Slack: set_span_in_context example
When using manual instrumentation with the @tracer.llm decorator, you must explicitly pass the input, output, and invocation parameters to the span; they are not automatically extracted as in auto-instrumentation. You can do this by setting attributes inside your decorated function via the current span object (docs, manual instrumentation guide).

Example:

```python
import json

from opentelemetry import trace

@tracer.llm
def my_llm_call(messages, model, temperature):
    # ... call your LLM here ...
    response = call_llm(messages, model, temperature)
    span = trace.get_current_span()
    # OTEL attribute values must be primitives, so serialize structured data
    span.set_attribute("llm.input_messages", json.dumps(messages))
    span.set_attribute(
        "llm.invocation_parameters",
        json.dumps({"model": model, "temperature": temperature}),
    )
    span.set_attribute("output.value", response)
    return response
```

Would you like more detail or a more advanced example?

Sources:
- Manual instrumentation guide
Certainly! For manual instrumentation of a reranker span, use the OpenInference semantic conventions and set the input and output documents as span attributes. Here's a minimal example (manual instrumentation guide, reranker attribute reference):

```python
from opentelemetry import trace
from openinference.semconv.trace import OpenInferenceSpanKindValues, RerankerAttributes

tracer = trace.get_tracer(__name__)

input_docs = [{"content": "doc1"}, {"content": "doc2"}]
output_docs = [{"content": "doc2"}, {"content": "doc1"}]

with tracer.start_as_current_span(
    "reranker",
    attributes={
        "openinference.span.kind": OpenInferenceSpanKindValues.RERANKER.value,
        RerankerAttributes.RERANKER_INPUT_DOCUMENTS: input_docs,
        RerankerAttributes.RERANKER_OUTPUT_DOCUMENTS: output_docs,
    },
) as span:
    # Reranking logic here
    pass
```

This logs a reranker span with input and output documents as attributes.

Would you like more detail or a multi-step example?

Sources:
- Manual instrumentation guide
- Reranker attribute reference
You should flatten the list of document objects using the OpenInference conventions before setting them as span attributes. Use a utility like _flatten to convert nested lists and dicts into the required flat key-value pairs (spec example, flatten utility reference).

Example:

```python
from openinference.instrumentation.groq._wrappers import _flatten
from openinference.semconv.trace import RerankerAttributes

output_docs = [{"document.id": "1", "document.score": 0.9, "document.content": "..."}]
span.set_attributes(dict(_flatten({RerankerAttributes.RERANKER_OUTPUT_DOCUMENTS: output_docs})))
```

This flattens your list so Phoenix can ingest it.

Would you like more detail or a full reranker span example?

Sources:
- Semantic conventions: flattening
- Flatten utility example
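For intuition, the kind of flattening such a utility performs can be sketched in plain Python (a hypothetical reimplementation for illustration, not the actual _flatten from openinference):

```python
from typing import Any, Iterator, Tuple

def flatten(prefix: str, value: Any) -> Iterator[Tuple[str, Any]]:
    """Recursively turn nested dicts/lists into dotted key-value pairs."""
    if isinstance(value, dict):
        for key, item in value.items():
            yield from flatten(f"{prefix}.{key}" if prefix else key, item)
    elif isinstance(value, list):
        for index, item in enumerate(value):
            yield from flatten(f"{prefix}.{index}", item)
    else:
        yield prefix, value

docs = [{"document.id": "1", "document.score": 0.9}]
attrs = dict(flatten("reranker.output_documents", docs))
print(attrs)
# → {'reranker.output_documents.0.document.id': '1',
#    'reranker.output_documents.0.document.score': 0.9}
```

Each leaf value ends up under a dotted key that encodes its position in the original structure, which is exactly the shape OTEL span attributes (primitive values only) can hold.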
