All the workflows we write are always placed behind an API gateway in production. Request execution cycle: request -> api_gateway -> workflow execution -> response (sent through the API). Now, where should we set up Phoenix in the above request-response lifecycle? I know we need to configure Phoenix for every workflow with a project name and session IDs, but where should this go? Between the api_gateway and workflow execution?
Phoenix should not be set up as a middleware or proxy between your API gateway and workflow execution. Instead, Phoenix (and its OpenInference instrumentation) must be integrated directly into your workflow application code. You configure Phoenix tracing (project name, session IDs, etc.) inside the workflow execution logic, so that traces are emitted as your workflow runs, not at the API gateway or through a network-level proxy. Phoenix cannot capture traces without code-level instrumentation in the workflow itself; it does not function like an HTTP proxy such as NGINX or Apache for telemetry capture.

In summary: instrument your workflow code to emit traces to Phoenix, typically by initializing the tracer (with project/session info) at the start of each workflow execution. The API gateway simply routes requests; Phoenix must be configured in the workflow backend, not as a standalone layer between the gateway and the workflow.

Would you like more detail on how to set up tracing in your workflow code?

Sources:
- https://github.com/Arize-ai/phoenix/issues/8380
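As a rough illustration of "instrument the workflow, not the gateway", the setup might look like the sketch below. It assumes the `arize-phoenix-otel` and `openinference-instrumentation` packages; the project name, endpoint, and span name are placeholders, not values from this thread.

```python
# Sketch: initialize Phoenix tracing inside the workflow service itself.
# Assumes arize-phoenix-otel (phoenix.otel.register) and
# openinference-instrumentation (using_session) are installed;
# all names and the endpoint below are placeholder assumptions.
from phoenix.otel import register
from openinference.instrumentation import using_session

tracer_provider = register(
    project_name="my-workflow-project",          # assumed project name
    endpoint="http://localhost:6006/v1/traces",  # assumed Phoenix collector
)
tracer = tracer_provider.get_tracer(__name__)

def execute_workflow(request, session_id):
    # Group this run's spans under the caller's session, then run the
    # workflow inside a root span so child spans nest under it.
    with using_session(session_id=session_id):
        with tracer.start_as_current_span("workflow-execution"):
            ...  # actual workflow logic emits child spans here
```

The gateway only needs to forward whatever identifies the session (for example, a header); the tracer itself lives entirely in the workflow backend.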
Hi Chandrahaas J., yes, that is a solid plan. If you set the session ID and project name in a middleware layer, the workflow execution code can then explicitly set the session on any spans it creates.
