Yes, this is expected. The OpenInference semantic conventions specify both `input.value` (the raw string or JSON input payload) and `llm.input_messages` (a flattened, indexed list of chat messages) on LLM spans, and likewise `output.value` and `llm.output_messages` for outputs. The duplication is deliberate: `input.value` preserves the full payload in a single machine-readable attribute, while the flattened message attributes give tools and UIs a structured breakdown they can render without parsing JSON. Both the spec and real-world traces reflect this (<https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md|semantic conventions>, <https://github.com/Arize-ai/phoenix/blob/main/docs/section-learn/tracing/what-are-traces.md|example spans>, <https://github.com/Arize-ai/openinference/blob/main/spec/llm_spans.md|llm_spans.md>).
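To make the duplication concrete, here is a minimal Python sketch of how an instrumentor might build both representations for one span. The attribute names (`input.value`, `input.mime_type`, `llm.input_messages.{i}.message.role`/`.content`) follow the OpenInference conventions; the `flatten_messages` helper and the example payload are illustrative, not code from any OpenInference instrumentation library.

```python
import json

def flatten_messages(messages, prefix):
    # Flatten chat messages into OpenInference-style indexed attributes,
    # e.g. "llm.input_messages.0.message.role" and ".0.message.content".
    attrs = {}
    for i, msg in enumerate(messages):
        attrs[f"{prefix}.{i}.message.role"] = msg["role"]
        attrs[f"{prefix}.{i}.message.content"] = msg["content"]
    return attrs

messages = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi!"},
]

# Both representations are set on the same LLM span: the raw JSON payload
# under input.value, and the structured breakdown under llm.input_messages.*.
span_attributes = {
    "input.value": json.dumps({"messages": messages}),
    "input.mime_type": "application/json",
    **flatten_messages(messages, "llm.input_messages"),
}
```

An output payload would be handled the same way, with `output.value` alongside `llm.output_messages.{i}.message.*` attributes.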
Would you like more detail on how these fields are populated or used in Phoenix and OpenInference?
Sources:
- <https://github.com/Arize-ai/openinference/blob/main/spec/llm_spans.md|llm_spans.md>
- <https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md|semantic conventions>
- <https://github.com/Arize-ai/phoenix/blob/main/docs/section-learn/tracing/what-are-traces.md|example spans>