Hi team, I need your help with a formatting issue. I'm using the LangChain framework with the auto-instrument option to send traces to a Phoenix server. Recently I came across the "sessions" feature and used it to view the traces as a user/assistant chatbot conversation. But the conversations are not displayed cleanly; see the image below. All the parameters I pass to the chain are also displayed, such as the context. How do I fix this?
To improve the formatting of conversations in Phoenix sessions when using LangChain, ensure that only the relevant user/assistant messages are passed as the input to the LLM, and avoid including extra context or parameters in the messages list. The Phoenix UI displays whatever is sent as input, so if you include additional parameters (like context or other metadata) in the messages, they will appear in the conversation view. Structure your input so that only the intended chat messages are included, as shown in the LangChain and Phoenix session docs.

If you are using custom payloads or passing extra fields, refactor your code to separate system/user/assistant messages from other parameters. For example, use a list of message objects with only "role" and "content" fields for the conversation, and pass other data (like context) separately if needed. See the example in the Phoenix docs for the recommended structure.

Would you like a code example for this?

Sources:
- How to track sessions with LangChain
- LangChain session formatting issue discussion
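As a minimal sketch of the "separate messages from other data" advice, independent of LangChain: the helper below (a hypothetical name, not from the docs) folds retrieved context into a single system message, so the payload that reaches the LLM, and hence the Phoenix session view, contains only plain `{"role", "content"}` chat messages.

```python
def build_chat_input(context: str, history: list, user_message: str) -> list:
    """Fold extra context into one system message so the traced input
    contains only {"role": ..., "content": ...} chat messages."""
    system = {"role": "system", "content": f"Use this context to answer:\n{context}"}
    return [system, *history, {"role": "user", "content": user_message}]

messages = build_chat_input(
    context="Bob is a premium customer.",
    history=[
        {"role": "user", "content": "Hi! I'm Bob."},
        {"role": "assistant", "content": "Hello Bob, how can I help you?"},
    ],
    user_message="What's my name?",
)
for m in messages:
    print(m["role"], "-", m["content"])
```

The key design point is that the context never appears as a separate input parameter; it travels inside a system message, which session UIs render as part of the conversation rather than as raw payload fields.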
💡 Hint: Mention RunLLM in the thread for followups.
Here's a code example showing how to structure your LangChain input so only user/assistant messages appear in Phoenix sessions, avoiding extra parameters like context:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from openinference.instrumentation.langchain import LangChainInstrumentor
from phoenix.otel import register

# Register Phoenix tracing and instrument LangChain
tracer_provider = register()
LangChainInstrumentor(tracer_provider=tracer_provider).instrument()

# Define only the chat messages (no extra context or parameters);
# the placeholder receives the new messages at invoke time
prompt = ChatPromptTemplate.from_messages([
    ("user", "Hi! I'm Bob."),
    ("assistant", "Hello Bob, how can I help you?"),
    MessagesPlaceholder("messages"),
])
llm = ChatOpenAI(model="gpt-4")
chain = prompt | llm

# Invoke the chain with the new user message
response = chain.invoke({"messages": [{"role": "user", "content": "What's my name?"}]})
print(response.content)
```

This ensures only the intended chat messages are shown in the Phoenix sessions UI, as recommended in the official docs.

Sources:
- LangChain session formatting issue discussion
