Is there any way to use Phoenix on Lambda? Perhaps there is a way to store the DB on an EFS mount?
We are talking through a lot of options, and some are likely imminent. The direction we are headed: essentially, the LLM tracer runs on Lambda and can persist to one of three targets (EFS, a cloud bucket, or a table). Phoenix or Arize cloud can then be brought up on those OpenInference trace files. Adam D., would a cloud bucket be better or worse than EFS?
I want to enable traces on a Lambda function running LlamaIndex, but obviously Lambda functions don't keep their drives around, so I'm trying to see how to persist the data for a Lambda.
I imagine an S3 bucket would be easier for users overall (but we do happen to use EFS).
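For the persist-then-load pattern being discussed, here is a minimal stdlib-only sketch of what the Lambda side could look like, assuming a writable mount path. The span dicts and file layout are purely illustrative; the real OpenInference trace-file format is defined by Phoenix, not by this example.

```python
import json
import os
import time
import uuid

def persist_spans(spans, out_dir):
    """Append spans as JSON lines to a uniquely named file under out_dir.

    In Lambda, out_dir would be an EFS mount (e.g. /mnt/traces); a cloud
    bucket variant would upload the same payload instead of writing locally.
    """
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"trace-{uuid.uuid4().hex}.jsonl")
    with open(path, "w") as f:
        for span in spans:
            f.write(json.dumps(span) + "\n")
    return path

# Example invocation with fake spans (illustrative schema only):
spans = [
    {"span_id": "1", "name": "llm.call", "start_time": time.time()},
    {"span_id": "2", "name": "retriever.query", "start_time": time.time()},
]
path = persist_spans(spans, "/tmp/traces")  # /tmp stands in for the mount
```

Because each invocation writes to its own file, concurrent Lambda executions don't contend for a shared handle, which matters on a shared EFS mount.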
Thanks for the info, Adam D.! From what you're describing, you can consider running the Phoenix application separately from the tracing and instrumentation inside Lambda. The Phoenix server acts as our collector for traces and doesn't need to be ephemeral. Once that's set up, you can configure the trace exporters to point to the collector. More info can be found here: https://docs.arize.com/phoenix/reference/environments#container
Here's more info on deploying Phoenix: https://docs.arize.com/phoenix/concepts/deploying-phoenix
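One way to wire the Lambda side to a separately deployed collector is through environment variables. The variable names below are drawn from the Phoenix environments docs linked above, but treat them and the host as assumptions to check against your Phoenix version; the hostname is a placeholder.

```shell
# On the Lambda side: point trace export at the long-running collector.
# (Hypothetical host; substitute your deployment's address.)
export PHOENIX_COLLECTOR_ENDPOINT="http://phoenix.internal.example.com:6006"

# On the Phoenix server side: bind the app/collector for external access.
export PHOENIX_HOST="0.0.0.0"
export PHOENIX_PORT="6006"
```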
Oh, that's super helpful!
So to run the app server, all I need to do is this?
import phoenix as px
session = px.launch_app()
When I do that, the app just dies after running that command. Should it loop, or do I need to throw it into a `while True:` loop?
What error are you getting?
It just exits; I thought it would auto-run in the background.
# cat main.py
import phoenix as px
session = px.launch_app()
# python main.py
🌍 To view the Phoenix app in your browser, visit http://localhost:6006/
📺 To view the Phoenix app in a notebook, run `px.active_session().view()`
📖 For more information on how to use Phoenix, check out https://docs.arize.com/phoenix
[root@app]#
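For what it's worth, that exit is expected for a plain script: `px.launch_app()` starts the server on a background thread, so when the main script returns, the process ends with it. A stdlib-only sketch of the same behavior follows; the `serve_forever` stand-in is hypothetical and is not Phoenix code.

```python
import threading
import time

def serve_forever(stop: threading.Event) -> None:
    # Stand-in for a server loop running on a background thread,
    # the way px.launch_app() runs the Phoenix app.
    while not stop.is_set():
        time.sleep(0.01)

stop = threading.Event()
server = threading.Thread(target=serve_forever, args=(stop,), daemon=True)
server.start()

# A daemon thread does not keep the interpreter alive: if the main
# thread returned here, the process would exit immediately, which is
# why the script above "dies" right after launch_app(). Blocking the
# main thread (a sleep loop, input(), etc.) keeps the server up:
time.sleep(0.05)
alive_while_blocked = server.is_alive()

stop.set()
server.join()
```

In a notebook or IPython session the process stays alive between cells, which is why those environments are suggested; for a standalone deployment, the deployment docs linked earlier cover running the Phoenix server as its own long-lived process instead.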
I'll look into this; for now, try running it inside an IPython shell.
I haven't used IPython before; will take a look.
