## Bug Description
All Ollama models (`glm-4.7-flash`, `gpt-oss`, `deepseek-r1:70b`, etc.) return `(no output)` when used through the ClawdBot TUI or WhatsApp. Anthropic models work fine in the same session.
## Root Cause
`isReasoningTagProvider()` in `dist/utils/provider-utils.js` returns `true` for all Ollama models (line 14: `normalized === "ollama"`). This causes two problems:
### 1. `enforceFinalTag` is set to `true` for all Ollama runs
In `auto-reply/reply/get-reply-run.js:253`:

```js
...(isReasoningTagProvider(provider) ? { enforceFinalTag: true } : {}),
```

This means `stripBlockTags()` in `pi-embedded-subscribe.js` enforces strict `<final>` tag extraction (lines 296-298):

```js
if (!everInFinal) {
  return "";
}
```

Since Ollama models don't output `<final>` tags, all text content is discarded.
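A minimal model of that failure mode, for illustration (the real `stripBlockTags()` is more involved; the function name and regex here are illustrative, not from the source):

```js
// Hypothetical simplification: keep only the text inside <final>...</final>.
// If the tag never appears, the whole reply collapses to the empty string.
function keepFinalOnly(text) {
  const match = text.match(/<final>([\s\S]*?)<\/final>/);
  return match ? match[1].trim() : "";
}

console.log(keepFinalOnly("<think>plan</think><final>Hello!</final>")); // "Hello!"
console.log(keepFinalOnly("Hello! How can I help?")); // "" -> rendered as (no output)
```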
### 2. System prompt injects `<think>`/`<final>` format instructions

In `agents/system-prompt.js:202-207`, when `reasoningTagHint` is true, the prompt gains:

```
"ALL internal reasoning MUST be inside <think>...</think>."
"Format every reply as <think>...</think> then <final>...</final>, with no other text."
```
Most Ollama models don't reliably follow these instructions, resulting in malformed output that gets stripped.
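A sketch of that injection path (only the two quoted strings come from the issue; the function shape and variable names are assumptions):

```js
// Assumed shape of the hint logic in agents/system-prompt.js.
// reasoningTagHint ends up true for every Ollama run because
// isReasoningTagProvider("ollama") returns true.
function buildSystemPrompt(base, reasoningTagHint) {
  const lines = [base];
  if (reasoningTagHint) {
    lines.push(
      "ALL internal reasoning MUST be inside <think>...</think>.",
      "Format every reply as <think>...</think> then <final>...</final>, with no other text."
    );
  }
  return lines.join("\n");
}
```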
## Why this is wrong for Ollama
Ollama's OpenAI-compatible endpoint (`/v1/chat/completions`) already handles reasoning natively via the `reasoning` field in streaming chunks:

```
{"choices":[{"delta":{"content":"","reasoning":"The user said hello"}}]}
...
{"choices":[{"delta":{"content":"Hello! How can I help?"}}]}
```

The pi-ai library correctly maps `reasoning` → thinking blocks and `content` → text blocks. There is no need for `<think>`/`<final>` tag enforcement because reasoning separation happens at the API level.
## Reproduction
```sh
# Ollama returns valid content through the OpenAI SDK
# (--input-type=module lets node -e accept ESM import and top-level await)
node --input-type=module -e "
import OpenAI from 'openai';
const c = new OpenAI({baseURL:'http://localhost:11434/v1', apiKey:'ollama'});
const s = await c.chat.completions.create({model:'glm-4.7-flash:latest', messages:[{role:'user',content:'say hello'}], stream:true});
let t='';
for await (const ch of s) { if(ch.choices[0]?.delta?.content) t+=ch.choices[0].delta.content; }
console.log(t); // 'Hello! How can I help you today?'
"
# But ClawdBot shows (no output) because enforceFinalTag strips everything
```

## Suggested Fix
Remove "ollama" from isReasoningTagProvider() in dist/utils/provider-utils.js:
- if (normalized === "ollama" ||
- normalized === "google-gemini-cli" ||
+ if (normalized === "google-gemini-cli" ||
normalized === "google-generative-ai") {Ollama's native API-level reasoning separation (via the reasoning field) makes tag-based enforcement unnecessary and actively harmful.
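After the change, the check would look roughly like this (reconstructed from the diff and line references above, not copied from the source):

```js
// Sketch of dist/utils/provider-utils.js after the fix: only the Gemini
// providers still rely on <think>/<final> tag enforcement.
function isReasoningTagProvider(provider) {
  const normalized = provider.trim().toLowerCase();
  return (
    normalized === "google-gemini-cli" ||
    normalized === "google-generative-ai"
  );
}
```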
## Environment
- ClawdBot: v2026.1.24-3
- Ollama: running on `localhost:11434`
- Models tested: `glm-4.7-flash`, `gpt-oss`, `deepseek-r1:70b`
- Platform: Ubuntu, AMD Strix Halo APU
- All models produce valid responses via direct API calls but show `(no output)` in ClawdBot
## Workaround
Manually edit `~/.npm-global/lib/node_modules/clawdbot/dist/utils/provider-utils.js`, remove `"ollama"` from the `isReasoningTagProvider()` function, then restart the gateway service.