docs: warn more clearly that openai-responses/openai-completions breaks tool calling for remote Ollama #21243
Closed
Labels: stale (marked as stale due to inactivity)
Description
Problem
Users running Ollama on a remote host (NAS, GPU box, etc.) naturally configure the provider like this:
{
  "models": {
    "providers": {
      "spark": {
        "baseUrl": "http://remote-host:11434/v1",
        "api": "openai-responses",
        "apiKey": "ollama-local"
      }
    }
  }
}

This silently breaks tool calling: the model emits tool-call JSON as plain text instead of structured tool_calls, which OpenClaw then passes through to the user as an ordinary chat message.
The fix (set api: "ollama" and drop the /v1 suffix) is buried in the Advanced section under "Legacy OpenAI-Compatible Mode". The warning that this mode is incompatible with tool calling does not appear in the basic setup example at all.
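For reference, a working remote configuration per the fix described above would look like this (field names assume the same OpenClaw provider schema as the broken example; adjust to the actual docs):

```json
{
  "models": {
    "providers": {
      "spark": {
        "baseUrl": "http://remote-host:11434",
        "api": "ollama",
        "apiKey": "ollama-local"
      }
    }
  }
}
```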
Impact
- Any user pointing OpenClaw at a remote Ollama instance via the OpenAI-compatible URL gets silently broken tool calling
- No error is surfaced; raw tool-call JSON leaks into chat surfaces (Telegram, etc.)
- Affects every model (qwen, llama, mistral) configured this way
Suggested Fix
- Add a callout at the top of the Ollama provider doc warning that the /v1 + openai-responses path is legacy and breaks tool calling
- Make the Quick Start example use the native API (api: "ollama", no /v1 suffix)
- Consider a gateway startup warning when api: openai-responses is used with port 11434
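The third suggestion could be sketched roughly as follows. This is only an illustration: the function, the ProviderConfig shape, and the warning text are hypothetical, not OpenClaw's actual API.

```typescript
// Hypothetical sketch of the suggested startup check.
// The config shape mirrors the JSON example above.
interface ProviderConfig {
  baseUrl: string;
  api: string;
  apiKey?: string;
}

// Returns a warning string when an OpenAI-compat API is pointed at
// Ollama's default port (11434), where tool calling silently breaks.
function checkOllamaMisconfig(name: string, p: ProviderConfig): string | null {
  const legacyApis = ["openai-responses", "openai-completions"];
  if (!legacyApis.includes(p.api)) return null;
  let port: string;
  try {
    port = new URL(p.baseUrl).port;
  } catch {
    return null; // unparsable baseUrl; leave it to normal validation
  }
  if (port === "11434") {
    return (
      `Provider "${name}" uses api "${p.api}" against port 11434 ` +
      `(Ollama's default). This breaks tool calling; use api "ollama" ` +
      `and drop the /v1 suffix from baseUrl.`
    );
  }
  return null;
}
```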
Related: #5769, #4892, #9900