Hi, I am currently using the Phoenix LangChain auto-instrumentor for tracing, following this example: https://python.langchain.com/docs/integrations/llms/azure_openai/ In the traces captured in the Phoenix UI, the input and output do not appear well formatted, as you can see in the image here... all I want is for the input and output to appear as the user query and what the LLM returned, instead of this entire JSON object with so much unnecessary information. How can we fix this?
To improve input/output formatting in the Phoenix UI for LangChain traces, first make sure you are using the latest version of the OpenInference LangChain instrumentor, which automatically sets the input and output columns to the user query and LLM response. If your input/output appears as a large JSON object, it is likely because the chain is passing a dictionary or other complex object as input/output rather than a simple string.

You can customize this by extracting the relevant fields (e.g., the user query and the LLM response) and setting them as the span's input/output values using the tracer's set_input and set_output methods within your chain or callback logic, or by post-processing the spans DataFrame before logging or displaying it in the Phoenix UI. If you want the UI to render only the user query and LLM response, ensure your chain receives and returns plain text, or manually set the input/output attributes to those values. For more advanced customization, you can post-process the spans DataFrame (via get_spans_dataframe) to overwrite the input/output columns with the desired fields before further analysis or display (Slack reference).

Would you like a code example for customizing the input/output fields?

Sources:
- instrument-python.md
- Slack: customizing spans DataFrame
- langchain integration docs
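As a starting point, here is a minimal sketch of the post-processing approach. The exact JSON shapes stored in span attributes vary by instrumentor version, so the sample payloads and key names below (`messages`, `generations`) are illustrative stand-ins rather than the exact OpenInference schema; adapt the extractors to what you actually see in your spans.

```python
import json

# Illustrative payloads mimicking (not exactly reproducing) what a LangChain
# span might store as its raw input/output values.
raw_input = json.dumps(
    {"messages": [{"role": "user", "content": "What is Azure OpenAI?"}]}
)
raw_output = json.dumps(
    {"generations": [[{"text": "Azure OpenAI is a managed service..."}]]}
)

def extract_user_query(value: str) -> str:
    """Pull the last user message out of a JSON input blob; fall back to raw text."""
    try:
        data = json.loads(value)
        user_msgs = [
            m["content"]
            for m in data.get("messages", [])
            if m.get("role") == "user"
        ]
        return user_msgs[-1] if user_msgs else value
    except (json.JSONDecodeError, TypeError, AttributeError):
        return value

def extract_llm_response(value: str) -> str:
    """Pull the first generation's text out of a JSON output blob; fall back to raw text."""
    try:
        data = json.loads(value)
        gens = data.get("generations", [])
        if gens and gens[0]:
            return gens[0][0].get("text", value)
    except (json.JSONDecodeError, TypeError, AttributeError):
        pass
    return value

print(extract_user_query(raw_input))    # the plain user query
print(extract_llm_response(raw_output)) # the plain LLM response
```

You could then map these extractors over the DataFrame returned by `px.Client().get_spans_dataframe()` (for example, over the `attributes.input.value` and `attributes.output.value` columns, assuming those are the column names in your Phoenix version) before further analysis or display.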
💡 Hint: Mention RunLLM in the thread for followups.
