Maintaining Tracing Context in FastAPI Streaming Responses
One more question (for now lol)! In my FastAPI app that serves all this, I have a:
```python
with tracer.start_as_current_span("Chat interaction") as span:
```

This groups all of my various sub-chain, retriever, and final generation spans together in Phoenix as expected. But for streaming, I do a:
```python
# still under the `with` tracing block
return StreamingResponse(content=stream_generator(...))
```

where `stream_generator` is a separate function that deals with asynchronously consuming the async generator LangChain returns and saving the results to my DB. But now the grouping is "broken" in Phoenix for the streaming case: actually executing the other function that consumes the generator happens outside the tracing span. I'm sure it's something simple, but I somehow need to let the other function know which top-level span it belongs to.
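For context, here is a minimal stdlib-only sketch of why the grouping breaks. OpenTelemetry tracks the current span in a `contextvars` context variable; the `with` block resets it when the endpoint returns, but `StreamingResponse` only iterates the generator *after* that, so the generator body runs with no active span. (The `current_span` variable and function names below are hypothetical stand-ins, not real OpenTelemetry APIs.)

```python
import asyncio
import contextvars

# Stand-in for OpenTelemetry's "current span" context variable.
current_span = contextvars.ContextVar("current_span", default=None)

async def stream_generator():
    # Creating the generator object does not run this body; it runs
    # later, when the response framework iterates it.
    yield current_span.get()

def endpoint():
    token = current_span.set("Chat interaction")
    try:
        # Generator object is created here, but no code inside it has run yet.
        return stream_generator()
    finally:
        # Mimics exiting `with tracer.start_as_current_span(...)`.
        current_span.reset(token)

async def main():
    gen = endpoint()
    return [item async for item in gen]

seen = asyncio.run(main())
print(seen)  # [None] — the span was already cleared when the body ran
```

If this is indeed the mechanism, one likely fix is to capture the context while still inside the `with` block (e.g. `opentelemetry.context.get_current()`) and pass it into `stream_generator`, which then calls `opentelemetry.context.attach(...)` before consuming the LangChain generator and `detach(...)` when done, so its spans parent under "Chat interaction".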
