fix: send think: false in Ollama API requests to prevent empty responses from thinking models (closes #46680) #46994
Br1an67 wants to merge 1 commit into openclaw:main
Conversation
…ses from thinking models (closes openclaw#46680)
Co-authored-by: Copilot <[email protected]>
Greptile Summary
This PR fixes a silent empty-response bug with Ollama 0.18+ thinking-capable models.
Confidence Score: 4/5
Prompt To Fix All With AI
This is a comment left during a code review.
Path: src/agents/ollama-stream.ts
Line: 465
Comment:
**Truthy check passes `think: true` when `reasoning` is `"off"`**
`!!options?.reasoning` is `true` for any non-empty string, including `"off"`. If a caller ever passes `options.reasoning = "off"` directly (bypassing `resolveSimpleThinkingLevel`), the request will incorrectly enable thinking mode.
In the current codebase this doesn't happen because `resolveSimpleThinkingLevel` converts `"off"` → `undefined`, but it's a fragile invariant. A more defensive guard would be explicit:
```suggestion
const ollamaThink = typeof options?.reasoning === "string" && options.reasoning !== "off";
```
How can I resolve this? If you propose a fix, please make it concise.

Last reviewed commit: b2c9474
```ts
// Ollama 0.18+ thinking-capable models (qwen3.5, kimi-k2.5, etc.)
// default to producing thinking tokens which consumes the output budget.
// Explicitly send think: false unless the user requested reasoning.
const ollamaThink = !!options?.reasoning;
```
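The invariant the reviewer calls fragile can be sketched in isolation. The real `resolveSimpleThinkingLevel` implementation is not shown in this thread; the version below is a hypothetical reconstruction matching only the behavior described in the review (mapping `"off"` to `undefined`), alongside the two competing guards:

```typescript
// Hypothetical reconstruction of the helper described in the review:
// normalizes a user-supplied reasoning level, mapping "off" to undefined
// so that a plain truthy check downstream reads it as "no reasoning".
function resolveSimpleThinkingLevel(level?: string): string | undefined {
  if (level === undefined || level === "off") return undefined;
  return level;
}

// Truthy guard (current code) vs. the defensive guard from the suggestion.
const truthyGuard = (reasoning?: string): boolean => !!reasoning;
const defensiveGuard = (reasoning?: string): boolean =>
  typeof reasoning === "string" && reasoning !== "off";
```

With the helper applied first, both guards agree; the difference only surfaces if `"off"` bypasses the helper, where `truthyGuard("off")` yields `true` but `defensiveGuard("off")` yields `false`.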
Closing to manage active PR count. Will reopen when slot is available.
Summary
Thinking tokens are not returned in `message.content`, so when all output goes to the `thinking` field, the response appears empty. This PR adds a `think` field to the `OllamaChatRequest` interface. The request body now explicitly sends `think: false` by default, and `think: true` when the user has requested reasoning via `options.reasoning`.
Change Type
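The change described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the `OllamaChatRequest` and `options.reasoning` names come from the PR, while the simplified field shapes and the `buildRequest` helper are assumptions for the example:

```typescript
// Simplified sketch of the request shape, assuming minimal fields.
interface OllamaChatRequest {
  model: string;
  messages: { role: string; content: string }[];
  stream: boolean;
  // New field from this PR: controls whether the model emits thinking tokens.
  think?: boolean;
}

interface StreamOptions {
  reasoning?: string; // undefined means the user did not request reasoning
}

// Hypothetical request builder showing where the fix lands: think is
// always sent, and is true only when a reasoning level was requested.
function buildRequest(
  model: string,
  prompt: string,
  options?: StreamOptions,
): OllamaChatRequest {
  const ollamaThink =
    typeof options?.reasoning === "string" && options.reasoning !== "off";
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    stream: true,
    think: ollamaThink, // explicitly false unless reasoning was requested
  };
}
```

This uses the reviewer's defensive guard rather than the bare truthy check, so a stray `reasoning: "off"` still disables thinking.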
Scope
Linked Issue/PR
User-visible / Behavior Changes
Ollama thinking-capable models now produce actual text/tool_call responses instead of empty content. When reasoning is explicitly requested, thinking is enabled.
Security Impact
Repro + Verification
Steps
ollama/qwen3.5:35b

Expected
Actual (before fix)
Evidence
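As a manual check, the flag's effect can be observed directly against a local Ollama server (a sketch, not part of the PR's evidence; assumes Ollama's `/api/chat` endpoint on the default port and that the local Ollama version accepts the `think` parameter):

```shell
# Send a chat request with thinking explicitly disabled, mirroring what
# the patched client now sends by default. Before the fix, omitting
# "think" let thinking-capable models spend the output budget on thinking
# tokens, leaving message.content empty.
curl -s http://localhost:11434/api/chat -d '{
  "model": "qwen3.5:35b",
  "messages": [{"role": "user", "content": "Say hello in one word."}],
  "stream": false,
  "think": false
}'
```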
Human Verification
Review Conversations
Compatibility / Migration
Failure Recovery
Risks and Mitigations
None