any thoughts on storing actual vectors in phoenix datasets or traces? is that explicitly discouraged? are they captured in traces by default RunLLM
Hey Andrew S. - interesting question. I lean towards the vectors being less important than the text itself. Storing dense embedding vectors does unlock some really interesting EDA flows via dimensionality reduction and clustering, but that requires a pretty savvy DS persona. It can be interesting at volume, however.
I was mostly curious for the purpose of:
- faster experiments
- improving the performance of Phoenix's web UI
Searching for traces in the web UI is super slow, and I was wondering whether dropping embeddings from traces would speed it up.
Anything that reduces the payloads could help, so setting OPENINFERENCE_HIDE_EMBEDDING_VECTORS might help a bit: https://arize-ai.github.io/openinference/spec/configuration.html
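For reference, a minimal sketch of setting that environment variable from Python. The variable name comes from the OpenInference configuration spec linked above; using `"true"` as the value follows the usual boolean-env convention and is an assumption here, not something confirmed in this thread:

```python
import os

# Hide raw embedding vectors from exported spans to shrink trace payloads.
# OPENINFERENCE_HIDE_EMBEDDING_VECTORS is defined in the OpenInference
# configuration spec; set it before the instrumentor starts so it is
# picked up when spans are created. The "true" value is an assumption.
os.environ["OPENINFERENCE_HIDE_EMBEDDING_VECTORS"] = "true"
```

This only affects what gets recorded on new spans going forward; embeddings already stored in existing traces would still be in the database.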
