Hey, where should I go with onboarding issues? Following the LangChain onboarding docs, the export fails with `Failed to export traces to otlp.arize.com, error code: StatusCode.INTERNAL`, and I can't find any docs for troubleshooting this status code.
Isn't this the Slack support chat for Arize AX?
# %%
import os
from langchain_openai import ChatOpenAI
from arize.otel import register, Transport
from openinference.instrumentation.langchain import LangChainInstrumentor
# Sanity-check creds exist
assert os.getenv("ARIZE_SPACE_ID") and os.getenv("ARIZE_API_KEY")
tracer_provider = register(
space_id=os.getenv("ARIZE_SPACE_ID"),
api_key=os.getenv("ARIZE_API_KEY"),
project_name="litmus",
# endpoint="https://otlp.arize.com/v1/traces",
# transport=Transport.HTTP,
log_to_console=True,
)
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
llm = ChatOpenAI(model="gpt-4.1")
response = llm.invoke("This is a test")
This test is based on the documentation. The LangSmith env variables LANGCHAIN_TRACING_V2 and LANGSMITH_TRACING were each tested set to both true and false to see if it makes a difference. Running it returns two things. First:
🔭 OpenTelemetry Tracing Details 🔭
| Arize Project: litmus
| Span Processor: Multiple Span Processors
| Collector Endpoint: Multiple Span Exporters
| Transport: Multiple Span Exporters
| Transport Headers: Multiple Span Exporters
|
| Using a default SpanProcessor. `add_span_processor` will overwrite this default.
|
| `register` has set this TracerProvider as the global OpenTelemetry default.
| To disable this behavior, call `register` with `set_global_tracer_provider=False`.
{
"name": "ChatOpenAI",
"context": {
"trace_id": "0xdf491b034f6791289b4e656e1a71729b",
"span_id": "0x499bfb85ee776a5f",
"trace_state": "[]"
},
"kind": "SpanKind.INTERNAL",
"parent_id": null,
"start_time": "2025-11-07T10:46:48.501630Z",
"end_time": "2025-11-07T10:46:49.710864Z",
"status": {
"status_code": "OK"
},
"attributes": {
"input.value": "{\"messages\": [[{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"schema\", \"messages\", \"HumanMessage\"], \"kwargs\": {\"content\": \"This is a test\", \"type\": \"human\"}}]]}",
"input.mime_type": "application/json",
"output.value": "{\"generations\": [[{\"text\": \"I see your message: \\\"This is a test.\\\" How can I assist you today?\", \"generation_info\": {\"finish_reason\": \"stop\", \"logprobs\": null}, \"type\": \"ChatGeneration\", \"message\": {\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"schema\", \"messages\", \"AIMessage\"], \"kwargs\": {\"content\": \"I see your message: \\\"This is a test.\\\" How can I assist you today?\", \"additional_kwargs\": {\"refusal\": null}, \"response_metadata\": {\"token_usage\": {\"completion_tokens\": 18, \"prompt_tokens\": 11, \"total_tokens\": 29, \"completion_tokens_details\": {\"accepted_prediction_tokens\": 0, \"audio_tokens\": 0, \"reasoning_tokens\": 0, \"rejected_prediction_tokens\": 0}, \"prompt_tokens_details\": {\"audio_tokens\": 0, \"cached_tokens\": 0}}, \"model_name\": \"gpt-4.1-2025-04-14\", \"system_fingerprint\": \"fp_d38c7f4fa7\", \"id\": \"chatcmpl-CZEHQzqVfZuhYhDnb3N8V0Uq6fpJc\", \"service_tier\": \"default\", \"finish_reason\": \"stop\", \"logprobs\": null}, \"type\": \"ai\", \"id\": \"run--83527e39-05b5-41d9-8b88-c0c3f7c75f6b-0\", \"usage_metadata\": {\"input_tokens\": 11, \"output_tokens\": 18, \"total_tokens\": 29, \"input_token_details\": {\"audio\": 0, \"cache_read\": 0}, \"output_token_details\": {\"audio\": 0, \"reasoning\": 0}}, \"tool_calls\": [], \"invalid_tool_calls\": []}}}]], \"llm_output\": {\"token_usage\": {\"completion_tokens\": 18, \"prompt_tokens\": 11, \"total_tokens\": 29, \"completion_tokens_details\": {\"accepted_prediction_tokens\": 0, \"audio_tokens\": 0, \"reasoning_tokens\": 0, \"rejected_prediction_tokens\": 0}, \"prompt_tokens_details\": {\"audio_tokens\": 0, \"cached_tokens\": 0}}, \"model_name\": \"gpt-4.1-2025-04-14\", \"system_fingerprint\": \"fp_d38c7f4fa7\", \"id\": \"chatcmpl-CZEHQzqVfZuhYhDnb3N8V0Uq6fpJc\", \"service_tier\": \"default\"}, \"run\": null, \"type\": \"LLMResult\"}",
"output.mime_type": "application/json",
"llm.input_messages.0.message.role": "user",
"llm.input_messages.0.message.content": "This is a test",
"llm.output_messages.0.message.role": "assistant",
"llm.output_messages.0.message.content": "I see your message: \"This is a test.\" How can I assist you today?",
"llm.invocation_parameters": "{\"model\": \"gpt-4.1\", \"model_name\": \"gpt-4.1\", \"stream\": false, \"_type\": \"openai-chat\", \"stop\": null}",
"llm.provider": "openai",
"llm.system": "openai",
"llm.model_name": "gpt-4.1-2025-04-14",
"llm.token_count.prompt": 11,
"llm.token_count.completion": 18,
"llm.token_count.total": 29,
"llm.token_count.completion_details.audio": 0,
"llm.token_count.completion_details.reasoning": 0,
"llm.token_count.prompt_details.audio": 0,
"llm.token_count.prompt_details.cache_read": 0,
"metadata": "{\"ls_provider\": \"openai\", \"ls_model_name\": \"gpt-4.1\", \"ls_model_type\": \"chat\", \"ls_temperature\": null}",
"openinference.span.kind": "LLM"
},
"events": [],
"links": [],
"resource": {
"attributes": {
"telemetry.sdk.language": "python",
"telemetry.sdk.name": "opentelemetry",
"telemetry.sdk.version": "1.38.0",
"openinference.project.name": "litmus",
"service.name": "unknown_service"
},
"schema_url": ""
}
}
And second, it returns: `Failed to export traces to otlp.arize.com, error code: StatusCode.INTERNAL` with no additional error context or debugging steps.
The "litmus" project is not created in Arize AX, so it doesn't seem to connect. Let me know if anyone has ideas for a fix; I'll continue with LangSmith in the meantime.
So I ran the code you pasted:
# %%
import os
from langchain_openai import ChatOpenAI
from arize.otel import register, Transport
from openinference.instrumentation.langchain import LangChainInstrumentor
# Sanity-check creds exist
assert os.getenv("ARIZE_SPACE_ID") and os.getenv("ARIZE_API_KEY")
tracer_provider = register(
space_id=os.getenv("ARIZE_SPACE_ID"),
api_key=os.getenv("ARIZE_API_KEY"),
project_name="litmus",
# endpoint="https://otlp.arize.com/v1/traces",
# transport=Transport.HTTP,
log_to_console=True,
)
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
llm = ChatOpenAI(model="gpt-4.1")
response = llm.invoke("This is a test")
and it worked for me. The litmus project showed up in my projects, with the trace. Your error is likely due to one of two things:
1. ARIZE_API_KEY or ARIZE_SPACE_ID are incorrectly populated. You can grab these from Settings -> User API Keys. I recommend creating a new API key as a sanity check. You can copy-paste your ARIZE_SPACE_ID from "Current SpaceID" on that page.
2. LangSmith is blocking exports to Arize.
If you could first confirm that ARIZE_API_KEY and ARIZE_SPACE_ID are correct, I can try to help you fix the issue from there. Olle G.
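For anyone following along, a minimal sketch for confirming both variables are set without printing the secrets themselves. `check_creds` is a hypothetical helper, not part of the Arize SDK:

```python
import os

def check_creds(env=os.environ):
    """Return a dict mapping each required variable to True if set and non-empty."""
    required = ("ARIZE_SPACE_ID", "ARIZE_API_KEY")
    return {name: bool(env.get(name, "").strip()) for name in required}

# Example with a fake environment (real values come from your shell):
print(check_creds({"ARIZE_SPACE_ID": "abc123", "ARIZE_API_KEY": ""}))
# → {'ARIZE_SPACE_ID': True, 'ARIZE_API_KEY': False}
```

This only verifies presence, not validity; a wrong-but-non-empty key will still pass and fail at export time.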
I've:
- created a new API key
- copy-pasted the Arize space ID directly from your portal
Still nothing happens when running this:
import os
from langchain_openai import ChatOpenAI
from arize.otel import register, Transport
from openinference.instrumentation.langchain import LangChainInstrumentor
os.environ["LANGCHAIN_TRACING_V2"] = "false"
os.environ["LANGSMITH_TRACING"] = "false"
os.environ["LANGCHAIN_API_KEY"] = "false"
# Sanity-check creds exist
assert os.getenv("ARIZE_SPACE_ID") and os.getenv("ARIZE_API_KEY")
tracer_provider = register(
space_id=os.getenv("ARIZE_SPACE_ID"),
api_key=os.getenv("ARIZE_API_KEY"),
project_name="litmus",
# endpoint="https://otlp.arize.com/v1/traces",
# transport=Transport.HTTP,
log_to_console=True,
)
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
llm = ChatOpenAI(model="gpt-4.1")
response = llm.invoke("This is a test")
Instead it only logs: `Failed to export traces to otlp.arize.com, error code: StatusCode.INTERNAL`. Let me know if anyone can jump on a quick call to fix this onboarding issue.
It doesn't create the project called litmus, and the Arize site stays on this page:
Also tried gRPC:
import os
from langchain_openai import ChatOpenAI
from arize.otel import register, Transport
from openinference.instrumentation.langchain import LangChainInstrumentor
os.environ["LANGCHAIN_TRACING_V2"] = "false"
os.environ["LANGSMITH_TRACING"] = "false"
os.environ["LANGCHAIN_API_KEY"] = "false"
# Sanity-check creds exist
assert os.getenv("ARIZE_SPACE_ID") and os.getenv("ARIZE_API_KEY")
tracer_provider = register(
space_id=os.getenv("ARIZE_SPACE_ID"),
api_key=os.getenv("ARIZE_API_KEY"),
project_name="litmus",
# endpoint="https://otlp.arize.com/v1/traces",
endpoint="https://otlp.arize.com/v1",
# transport=Transport.HTTP,
transport=Transport.GRPC,
log_to_console=False,
)
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
llm = ChatOpenAI(model="gpt-4.1")
response = llm.invoke("This is a test")
but nothing is created or traced in Arize AX.
We're trying out Arize and hitting the exact same issue. Just curious whether this was resolved for you, Olle G.? Also tried JS; it gives the same error, but a bit more JS-y:
{"stack":"OTLPExporterError: Internal Server Error\n at IncomingMessage.<anonymous> (<redacted>/node_modules/@opentelemetry/otlp-exporter-base/src/transport/http-transport-utils.ts:83:23)\n at IncomingMessage.emit (node:events:530:35)\n at endReadableNT (node:internal/streams/readable:1698:12)\n at process.processTicksAndRejections (node:internal/process/task_queues:90:21)","message":"Internal Server Error","code":"500","name":"OTLPExporterError","data":"\b\r\u0012*unable to validate authorization from span"}

Update: it worked for us by setting the right endpoint: https://arize.com/docs/ax/integrations/opentelemetry/opentelemetry-arize-otel#arize-eu-endpoint
🔒[private user] you are brilliant, yes this seems to have fixed the issue. 🔒[private user] + 🔒[private user]: the solve was above, adding the endpoint pointing to the EU: https://arize.com/docs/ax/integrations/opentelemetry/opentelemetry-arize-otel#arize-eu-endpoint
It might be worth adding this to the onboarding screen when setting up the first project, with endpoint=Endpoint.ARIZE_EUROPE, or perhaps setting it dynamically for users like me who select EU during login. Cheers again 🔒[private user] and the Arize team for the actions.
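For reference, a sketch of the working EU setup based on the thread above. This assumes `Endpoint` is importable from `arize.otel` as described in the linked docs (a config fragment under that assumption, not a verified implementation):

```python
import os
from arize.otel import register, Endpoint
from openinference.instrumentation.langchain import LangChainInstrumentor

# Same setup as earlier in the thread, but pointing the exporter at
# the EU region; Endpoint.ARIZE_EUROPE is the value named above.
# The default endpoint targets the US region, which rejects EU-space
# credentials with StatusCode.INTERNAL.
tracer_provider = register(
    space_id=os.getenv("ARIZE_SPACE_ID"),
    api_key=os.getenv("ARIZE_API_KEY"),
    project_name="litmus",
    endpoint=Endpoint.ARIZE_EUROPE,
    log_to_console=True,
)
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
```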
