Hello, we have a big team with different devs using Phoenix, and we want to create a separate session for each user. I know that you can run a Phoenix session in a thread, but is there any guide or documentation on how to do that?
Can't really wait on that, I will add the feature myself and possibly open a PR.
thanks
Nabeegh A. A bit confused about your use-case here. Is there any reason you need to run multiple Phoenixes on a single computer? Have you tried deploying one Phoenix for you all to share?
P.S. you can run multiple containers if you really want multiple phoenixes on your own computer.
Mikyo - We maintain our own database and use OpenInference to collect traces and export them to our server, where we parse and store them. When we receive a request to view some traces or run evals, we fetch those traces from the db and start a new session, passing in the constructed dataset. We have also added a number of new features that fetch fresh traces from the db and update the Phoenix session.

This causes a problem. For example, we have built a flow that allows adding traces to a dataset and viewing a dataset's traces in Phoenix. We handle it in the same manner: fetch the traces from our db, construct a dataset, and start a session using Phoenix. So if one user is viewing dataset A and another user comes in wanting to see dataset B, we start a new session for the second user, and the first user sees the new view as well.

We are looking for a solution where either: a - there's a better way to do this; or b - we can create multiple sessions for different users. The end goal: multiple devs using our Phoenix system can work independently of each other.
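For context, here is a minimal sketch of the flow described above: map raw Mongo documents onto the OpenInference span columns, wrap them in a TraceDataset, and launch a session. The Mongo field names on the left of the mapping, the `launch_session_for` helper, and the exact column set are all assumptions for illustration, not our real schema.

```python
import pandas as pd

# OpenInference-style span columns that Phoenix's TraceDataset works with;
# the keys are hypothetical field names from our own Mongo schema.
FIELD_MAP = {
    "span_id": "context.span_id",
    "trace_id": "context.trace_id",
    "name": "name",
    "kind": "span_kind",
    "parent_id": "parent_id",
    "start": "start_time",
    "end": "end_time",
}

def spans_to_dataframe(docs):
    """Map raw Mongo documents to a span DataFrame in OpenInference layout."""
    rows = [{dst: doc.get(src) for src, dst in FIELD_MAP.items()} for doc in docs]
    df = pd.DataFrame(rows)
    # Phoenix expects real timestamps, not strings.
    df["start_time"] = pd.to_datetime(df["start_time"])
    df["end_time"] = pd.to_datetime(df["end_time"])
    return df

def launch_session_for(docs):
    # Deferred imports: only needed when actually serving the UI.
    import phoenix as px
    from phoenix.trace.trace_dataset import TraceDataset

    # This is the single-session pattern being discussed: each call
    # replaces the one active view, which is exactly the problem.
    return px.launch_app(trace=TraceDataset(spans_to_dataframe(docs)))
```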
Nabeegh A. I see - seems a bit complicated. So you manually construct TraceDatasets from your central phoenix and want to display those. I think there's probably a better way to do this but let me get back to you after I read through the above a bit more to better understand it. When you say DB, are you talking about postgres?
Are you planning on running these on separate ports?
Sure, please take your time. I don't mean Postgres. We have our own MongoDB that has all the span fields plus some additional fields specific to our system. We query it and construct a TraceDataset. We are already using two ports in our deployment: one for our API, which handles receiving exported traces among other things, and one for the Phoenix session. I have looked through the session management code many times; running multiple sessions on one port for different users would be complicated, to say the least.
Have you tried log_traces? https://github.com/Arize-ai/phoenix/blob/aa6fa176aea29a84dd6e7b496ecf599a752bcc09/src/phoenix/session/client.py#L206 You can then load multiple trace datasets into a single session under different projects. Does this not work for you?
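A minimal sketch of that suggestion: one shared Phoenix instance, with each constructed TraceDataset logged under its own project via the client's log_traces, so devs stop overwriting each other's view. The endpoint, the `publish` helper, and the per-user project naming convention are assumptions, not Phoenix requirements.

```python
import re

def project_name_for(user: str, dataset: str) -> str:
    """Derive a stable per-user project name so devs don't clobber each
    other's views (the naming scheme is our own convention, not Phoenix's)."""
    return re.sub(r"[^a-z0-9]+", "-", f"{user}-{dataset}".lower()).strip("-")

def publish(trace_dataset, user, dataset, endpoint="http://localhost:6006"):
    # Deferred import so this module can be used without phoenix installed.
    import phoenix as px

    client = px.Client(endpoint=endpoint)
    # log_traces accepts a project_name, so each dataset lands in its own
    # project instead of replacing the single active session view.
    client.log_traces(trace_dataset, project_name=project_name_for(user, dataset))
```

With this, "user A views dataset A, user B views dataset B" becomes two projects in the same UI rather than two competing sessions.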
Seems to be one way to go about this. We have encountered many bugs using px.Client(). Tried to log traces with it and it bugged out. I will explore further in this direction and update this thread.
Mikyo logging traces through this works, but I just can't log evaluations via px.Client(). I have tried using log_evaluations, and I have tried appending the evaluations to the trace dataset as well.
Is there an error message in the server log when it fails?
Nothing. https://github.com/Arize-ai/phoenix/blob/aa6fa176aea29a84dd6e7b496ecf599a752bcc09/src/phoenix/session/client.py#L191 Could this be the reason? Even though I am passing project_name to log_evaluations, it is still logging to the default project.
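For reference, a hedged sketch of the evaluation-logging path under discussion: SpanEvaluations is built from a DataFrame indexed by context.span_id, and the span_ids must match spans already on the server. The `results` shape, the `log_evals` helper, and the endpoint are assumptions; whether the linked client version routes evaluations by project_name is exactly the open question in this thread.

```python
import pandas as pd

def build_eval_frame(results):
    """Shape eval results the way SpanEvaluations expects: a DataFrame
    indexed by context.span_id with label/score columns. `results` is a
    hypothetical list of (span_id, label, score) tuples from our own runner."""
    df = pd.DataFrame(results, columns=["context.span_id", "label", "score"])
    return df.set_index("context.span_id")

def log_evals(results, eval_name="correctness", endpoint="http://localhost:6006"):
    # Deferred imports: server-side call only.
    import phoenix as px
    from phoenix.trace import SpanEvaluations

    evals = SpanEvaluations(eval_name=eval_name, dataframe=build_eval_frame(results))
    # The span_ids must match spans already logged to the server; if the
    # client version ignores project routing for evaluations, they will
    # attach wherever those span_ids already live (e.g. the default project).
    px.Client(endpoint=endpoint).log_evaluations(evals)
```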
