RunLLM
I do have a quick question. I'm trying to hook up my older LlamaIndex project: I've updated llama-index to 0.10.43 and am using the LlamaIndexInstrumentor, but it isn't sending anything to the running Docker container. Do LlamaIndex Pipelines not support the new instrumentation? Do I need to move to Workflows?
openinference-instrumentation-llama-index = "^3.0.3"
arize-phoenix-otel = "^0.6.1"
arize-phoenix = "^4.32.0"
opentelemetry-sdk = "^1.27.0"
opentelemetry-exporter-otlp = "^1.27.0"
opentelemetry-api = "^1.27.0"
opentelemetry-instrumentation-fastapi = "^0.48b0"
Code
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from phoenix.otel import register
tracer_provider = register(
    endpoint="http://localhost:6006/v1/traces",
)
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
The code lives in a FastAPI file, so it should run on app startup. I'm launching it via uvicorn, which maybe needs some extra port-wiring steps?
poetry run uvicorn api.app.main:app --reload
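Since the Docker port mapping is one suspect, here's the stdlib-only sanity check I can run first (a hypothetical helper, not part of Phoenix) to confirm the collector endpoint is even reachable from the host:

```python
from urllib import error, request


def endpoint_reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url` (any status counts)."""
    try:
        request.urlopen(url, timeout=timeout)
        return True
    except error.HTTPError:
        # The server answered (e.g. 405 for a GET on a POST-only trace
        # route), which still proves the port mapping works.
        return True
    except (error.URLError, OSError):
        # Connection refused / timed out: nothing is listening there.
        return False


# e.g. endpoint_reachable("http://localhost:6006/v1/traces")
```

If this returns False for the Phoenix endpoint, the problem is the container's port mapping rather than the instrumentor.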
The API layer routes from the endpoint to a Semantic Router, which invokes a LlamaIndex Pipeline. I know they recently moved to Workflows, but I was trying to put off rewriting it all for as long as possible.