# Bug Report: Ollama Models Return Broken Responses in OpenClaw

## Summary
Ollama integration appears to connect successfully but returns corrupted/nonsensical responses instead of proper model output.
## Environment
- OpenClaw Version: 2026.2.1 (ed4529e) → 2026.2.2-3 (updated during troubleshooting)
- OS: macOS (Darwin 25.2.0 arm64)
- Node: v25.5.0
- Ollama Version: Latest (service running on PID 1024)
- Models Tested: llama3.1:8b, mistral:7b
## Expected Behavior
When using Ollama models, a simple prompt should return a proper model response, e.g. "2+2 = 4" for "What is 2+2?".
## Actual Behavior
- Model switching appears successful (`session_status` shows the correct Ollama model)
- Token counting works (shows proper token usage)
- But responses are completely broken/nonsensical
Example:
- Input: "What is 2+2?"
- Expected: "4" or "2+2 = 4"
- Actual: random text about `sessions_send` functions and `memory_get` operations
## Troubleshooting Attempted
### ✅ Working Components
- Ollama service running correctly (PID 1024)
- Direct API calls work: `curl http://127.0.0.1:11434/api/generate` returns proper responses
- Models available: llama3.1:8b, mistral:7b, qwen2.5:14b, llava:7b, codellama:7b
- Environment variable set: `OLLAMA_API_KEY='ollama-local'` in `~/.zshrc`
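
For completeness, the checks above can be reproduced from a shell (a minimal sketch assuming the default Ollama port; `/api/tags` is Ollama's model-listing endpoint and `jq` is used only for readability):

```bash
# Confirm the service is up and list the installed models.
curl -s http://127.0.0.1:11434/api/tags | jq -r '.models[].name'

# Spot-check one model end to end, bypassing OpenClaw (non-streaming).
curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "What is 2+2?", "stream": false}' \
  | jq -r '.response'
```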
### ✅ Configuration Fixes Applied
- Added missing `ollama:default` auth profile to config:

```json
"auth": {
  "profiles": {
    "ollama:default": {
      "provider": "ollama",
      "mode": "api_key"
    }
  }
}
```

### ✅ Status Shows Connection Works

```
🧠 Model: ollama/llama3.1:8b · 🔑 api-key ollama…-local (ollama:default)
🧮 Tokens: 12 in / 433 out
```
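
To double-check that the profile actually landed in the active config and that the key is exported, something like this works (the config path `~/.openclaw/config.json` is a guess on my part; adjust to wherever OpenClaw keeps its config):

```bash
# Show the auth profile as OpenClaw would read it (path is an assumption).
jq '.auth.profiles["ollama:default"]' ~/.openclaw/config.json

# Confirm the environment variable is exported in the current shell.
echo "$OLLAMA_API_KEY"   # expected output: ollama-local
```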
### ❌ But Responses Are Garbage
- Model switching appears to succeed in the UI
- Token counting works
- Authentication appears successful
- Response content is completely broken
## Reproduction Steps
1. Configure Ollama models in the OpenClaw config
2. Set `OLLAMA_API_KEY='ollama-local'`
3. Add the `ollama:default` auth profile
4. Switch to any Ollama model: `/model ollama/llama3.1:8b`
5. Ask a simple question: "What is 2+2?"
6. Observe a broken response instead of "4" (see the model-loop check below)
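
Since both llama3.1:8b and mistral:7b show the same symptom, looping the same question over the tested models directly against Ollama helps rule out a model-specific problem (a sketch assuming the default port):

```bash
# Ask each tested model the same question, bypassing OpenClaw entirely.
for model in llama3.1:8b mistral:7b; do
  echo "--- $model ---"
  curl -s http://127.0.0.1:11434/api/generate \
    -d "{\"model\": \"$model\", \"prompt\": \"What is 2+2?\", \"stream\": false}" \
    | jq -r '.response'
done
```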
## Diagnostic Info

```
$ openclaw status --all
🦞 OpenClaw 2026.2.1 (ed4529e)
🕒 Time: Wednesday, February 4th, 2026 — 6:08 AM (America/Phoenix)
🧠 Model: ollama/llama3.1:8b · 🔑 api-key ollama…-local (ollama:default)
🧮 Tokens: 11 in / 171 out
📚 Context: 25k/200k (13%) · 🧹 Compactions: 1
🧵 Session: agent:main:main • updated just now
⚙️ Runtime: direct · Think: off · elevated
🪢 Queue: collect (depth 0)
```

## Hypothesis
This appears to be a bug in OpenClaw's Ollama provider response parsing/handling, not a connection or authentication issue. The provider connects to Ollama successfully and counts tokens correctly, but something in the response-processing pipeline corrupts the output.
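
One way to narrow this down: Ollama also exposes an OpenAI-compatible endpoint at `/v1/chat/completions`. If OpenClaw's provider talks to that path (an assumption on my part; I have not checked which API the provider targets), exercising it directly separates Ollama's chat formatting from OpenClaw's response handling. Note that Ollama itself ignores the bearer token; it is included here only to mirror the `OLLAMA_API_KEY` setup:

```bash
# Exercise Ollama's OpenAI-compatible chat endpoint directly.
curl -s http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ollama-local" \
  -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "What is 2+2?"}], "stream": false}' \
  | jq -r '.choices[0].message.content'
```

If this returns a sane "4" while OpenClaw still produces garbage, the fault is almost certainly in OpenClaw's handling of the response.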
## Impact
- Ollama models completely unusable in OpenClaw
- Users forced to use expensive API models instead of free local models
- Significant cost impact for high-usage scenarios
## Workaround
Changed the default model from `ollama/llama3.1:8b` to `anthropic/claude-sonnet-4-20250514` to restore functionality.
## Additional Context
Direct Ollama API testing confirms the service works perfectly:
```bash
curl -X POST http://127.0.0.1:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "What is 2+2?", "stream": false}'
# Returns proper JSON with "4" in the response
```

The issue appears to be isolated to OpenClaw's Ollama integration layer.
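
Since token counting works but the content is garbled, streaming handling is one plausible suspect. The same request with `"stream": true` returns newline-delimited JSON chunks, which is worth comparing against whatever OpenClaw's parser expects (a diagnostic sketch, not a confirmed root cause):

```bash
# Stream the same prompt; each output line is a standalone JSON object
# carrying a "response" fragment, and joining the fragments rebuilds the answer.
curl -sN http://127.0.0.1:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "What is 2+2?", "stream": true}' \
  | jq -j '.response'
echo
```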