Hey I'm trying ollama and it doesn't seem to be working with structured outputs, any idea why?
Ollama models do not consistently support structured outputs (i.e., response schemas or function/tool calling) the way OpenAI or Anthropic models do. Many Ollama models either lack structured-output support entirely or fail to follow output schemas reliably, which can result in parsing errors or unstructured responses. This is a known limitation, reflected in both user reports and official documentation: see, for example, the "does not support tools" error when using certain Ollama models with Phoenix's Playground, and the need to check model compatibility before relying on tool/function calling or structured outputs. The Ollama model registry lists each model's capabilities, including function-calling support, but even models marked as supporting function calling may not strictly adhere to output schemas in practice.

For best results, use models and providers known to support structured outputs, such as OpenAI or Anthropic, or experiment with prompt engineering to improve output consistency, though that is not always reliable with Ollama models. According to Phoenix release notes and GitHub issues, this is a common limitation.

Would you like more detail on workarounds or model-specific capabilities?

Sources:
- https://github.com/Arize-ai/phoenix/issues/6346
- https://github.com/Arize-ai/openinference/blob/main/python/instrumentation/openinference-instrumentation-smolagents/tests/openinference/instrumentation/smolagents/cassettes/test_instrumentor/TestModels.test_litellm_reasoning_model_has_expected_attributes.yaml
- Phoenix release notes
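One common workaround for the parsing errors mentioned above is to wrap the model's raw reply in a defensive JSON parser, since Ollama models often wrap JSON in markdown fences or surround it with prose even when asked for a strict schema. A minimal sketch (the helper name and fallback strategy are illustrative, not a Phoenix or Ollama API):

```python
import json
import re


def parse_structured_response(raw: str):
    """Try to extract a JSON object from a model's text response.

    Ollama models may wrap JSON in markdown code fences or add
    surrounding prose, so strip fences first and then fall back to
    the first brace-delimited span. Returns a dict on success, or
    None if no valid JSON can be recovered.
    """
    text = raw.strip()
    # Remove markdown code fences like ```json ... ```
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back: grab the first {...} span, if any
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(0))
            except json.JSONDecodeError:
                return None
        return None
```

This does not make a non-compliant model schema-faithful, but it salvages the frequent case where valid JSON is present yet decorated with fences or chatter.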
