Understanding LLM Operations in Phoenix Sessions: A Query
Hi - This is probably explained somewhere that I haven't found yet, but I'm wondering which types of LLM operations will appear in my active Phoenix session, and which will not be automatically sent.

For instance, I'm using LlamaIndex, and when I generate questions from a corpus as follows, my session remains unchanged, even though I'm calling the LLM for each question:

    questions_df = llm_generate(
        dataframe=document_chunks_df,
        template=generate_questions_template,
        model=OpenAIModel(
            model_name="gpt-3.5-turbo-instruct",
        ),
        output_parser=output_parser,
    )

------------------------------------------------

However, when I answer each question with the same LLM, the Phoenix session actively records this info:

    # loop over the questions and generate the answers
    for _, row in questions_with_document_chunk_df.iterrows():
        question = row["question"]
        response_vector = query_engine.query(question)
        print(f"Question: {question}\nAnswer: {response_vector.response}\n")

--------------------------------------------------

Is this because in the second case I assigned the LLM earlier in my code, as part of the query_engine definition, rather than passing the model explicitly as in the first case? Or am I just confusing the types of LLM operations that Phoenix tracks with the ones it ignores?

Thanks,
David
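For context, my tracing setup follows what I understand to be the standard Phoenix + LlamaIndex pattern, roughly like this (a minimal sketch, assuming the `arize-phoenix` and `llama-index` packages and the `arize_phoenix` global handler; the exact names may differ from my actual code):

```python
# Minimal sketch of the Phoenix + LlamaIndex setup I'm assuming
# (requires the arize-phoenix and llama-index packages; not from my real code).
import phoenix as px
import llama_index

# Launch the local Phoenix app; traces sent to it appear in the session UI.
session = px.launch_app()

# Route LlamaIndex callbacks (query-engine calls, retrievals, and LLM calls
# made *through* LlamaIndex) to Phoenix. My understanding is that LLM calls
# made outside LlamaIndex's callback system would not be traced this way.
llama_index.set_global_handler("arize_phoenix")
```

So my guess is that `query_engine.query(...)` goes through the instrumented callback path, while `llm_generate(...)` calls the model directly, but I'd appreciate confirmation.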
