docs: warn more clearly that openai-responses/openai-completions breaks tool calling for remote Ollama #21243

@Chloe-VP

Description

Problem

Users running Ollama on a remote host (NAS, GPU box, etc.) naturally configure the provider like this:

{
  "models": {
    "providers": {
      "spark": {
        "baseUrl": "http://remote-host:11434/v1",
        "api": "openai-responses",
        "apiKey": "ollama-local"
      }
    }
  }
}

This silently breaks tool calling — the model outputs tool call JSON as plain text instead of structured tool_calls, which OpenClaw passes through as a chat message to the user.

The fix (api: 'ollama' + drop /v1) is buried in the Advanced section under 'Legacy OpenAI-Compatible Mode'. The warning about tool calling incompatibility doesn't appear in the basic setup example at all.
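For contrast, the working configuration described in that Advanced section would look roughly like this (a sketch: it reuses the `spark` provider name and host from the example above, with `api: "ollama"` and `/v1` dropped from `baseUrl`):

```json
{
  "models": {
    "providers": {
      "spark": {
        "baseUrl": "http://remote-host:11434",
        "api": "ollama",
        "apiKey": "ollama-local"
      }
    }
  }
}
```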

Impact

  • Any user pointing OpenClaw at a remote Ollama instance via the OpenAI-compat URL gets silently broken tool calling
  • No error is surfaced; raw tool-call JSON leaks into chat/Telegram/etc.
  • Affects every model (qwen, llama, mistral) configured this way

Suggested Fix

  1. Add callout at top of Ollama provider doc warning that /v1 + openai-responses is the legacy path and breaks tools
  2. Make the Quick Start example use native API (api: 'ollama', no /v1)
  3. Consider gateway startup warning when api: openai-responses is used with port 11434

Related: #5769, #4892, #9900
