fix(model): propagate provider model properties in fallback resolution #13626

Closed
mcaxtr wants to merge 5 commits into openclaw:main from mcaxtr:fix/13575-reasoning-effort

Conversation

@mcaxtr
Contributor

@mcaxtr mcaxtr commented Feb 10, 2026

Summary

Fixes #13575

The generic fallback in resolveModel() had two bugs when constructing a model from provider config:

  1. reasoning: false hardcoded — ignored the matched model's reasoning setting, preventing reasoning.effort from being forwarded to the API (e.g. Ollama models with reasoning: true)
  2. models[0] for contextWindow/maxTokens — read from the first model in the provider's array instead of the matched model, causing incorrect context limits when multiple models are configured

The fix finds the matched model by ID from the provider config and propagates all its properties (reasoning, input, cost, contextWindow, maxTokens, name, api) instead of using hardcoded defaults or the first model in the array.
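The before/after behavior can be sketched roughly as follows. This is an illustrative reconstruction, not the actual openclaw source: the type and function names (`ProviderModel`, `ProviderConfig`, `resolveFallbackModel`) and the default limits are assumptions for the sketch.

```typescript
// Hypothetical shapes for a provider's model config (illustrative only).
interface ProviderModel {
  id: string;
  reasoning?: boolean;
  contextWindow?: number;
  maxTokens?: number;
  name?: string;
}

interface ProviderConfig {
  baseUrl: string;
  models: ProviderModel[];
}

// Sketch of the fixed generic fallback. Before the fix, `reasoning` was
// hardcoded to false and contextWindow/maxTokens were read from
// provider.models[0] regardless of which model was requested. After the
// fix, the matched model is found by trimmed ID and its properties are
// propagated, falling back to defaults only when no match exists.
function resolveFallbackModel(
  provider: ProviderConfig,
  modelId: string,
): ProviderModel {
  const matched = provider.models.find(
    (m) => m.id.trim() === modelId.trim(),
  );
  return {
    id: modelId,
    reasoning: matched?.reasoning ?? false,
    contextWindow: matched?.contextWindow ?? 8192,
    maxTokens: matched?.maxTokens ?? 4096,
    name: matched?.name ?? modelId,
  };
}
```

With a provider that lists several models, requesting the second one now yields that model's `reasoning` and context limits rather than `models[0]`'s, which is what allows `reasoning.effort` to reach the API for Ollama models configured with `reasoning: true`.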

Test plan

  • Add test: fallback does not bleed contextWindow from unrelated models[0]
  • Add test: fallback propagates reasoning from matched provider model
  • Add test: fallback uses correct model when provider has multiple models
  • All 3 new tests fail before the fix, pass after
  • All 13 tests pass (npx vitest run src/agents/pi-embedded-runner/model.test.ts)
  • pnpm build && pnpm check clean

Greptile Summary

Fixed two bugs in the generic fallback path of resolveModel() when constructing models from provider config:

  • reasoning propagation: Previously hardcoded to false, now reads from matched model config (e.g., Ollama models with reasoning: true)
  • contextWindow/maxTokens selection: Previously read from models[0] regardless of which model was requested, now reads from the matched model

Added comprehensive test coverage for all three scenarios (unmatched model using defaults, matched model with reasoning, multiple models selecting the correct one). The fix properly finds the matched model by trimmed ID and propagates all its properties (reasoning, input, cost, contextWindow, maxTokens, name, api). Previous review comments about trimming inconsistencies have been addressed in follow-up commits.

Confidence Score: 5/5

  • Safe to merge with no blocking issues
  • The implementation correctly fixes both documented bugs with proper test coverage. All three new tests target the exact scenarios described in the PR (bleeding contextWindow, reasoning propagation, multiple model selection). Previous review concerns about trimming have been addressed. The logic is straightforward and the changes are minimal and focused.
  • No files require special attention

@openclaw-barnacle openclaw-barnacle bot added the agents Agent runtime and tooling label Feb 10, 2026
Contributor

@greptile-apps greptile-apps bot left a comment


1 file reviewed, 1 comment

Edit Code Review Agent Settings | Greptile

@mcaxtr
Contributor Author

mcaxtr commented Feb 10, 2026

@greptile please re-review this PR.

Contributor

@greptile-apps greptile-apps bot left a comment


1 file reviewed, 1 comment

Edit Code Review Agent Settings | Greptile

@mcaxtr
Contributor Author

mcaxtr commented Feb 10, 2026

@greptile please re-review this PR.

@mcaxtr mcaxtr force-pushed the fix/13575-reasoning-effort branch 2 times, most recently from d99df76 to 293cef9 on February 12, 2026
@mcaxtr mcaxtr force-pushed the fix/13575-reasoning-effort branch from 293cef9 to 8d85d20 on February 13, 2026
@mcaxtr mcaxtr force-pushed the fix/13575-reasoning-effort branch from 8d85d20 to 25fc1c4 on February 13, 2026
@mcaxtr mcaxtr force-pushed the fix/13575-reasoning-effort branch 8 times, most recently from 5549930 to 61a3af4 on February 15, 2026
@mcaxtr mcaxtr force-pushed the fix/13575-reasoning-effort branch 2 times, most recently from ab3c6ca to 4498c02 on February 17, 2026
The generic fallback in resolveModel() hardcoded reasoning: false and read
contextWindow/maxTokens from models[0] instead of the matched model. This
caused reasoning.effort to not be forwarded for Ollama models with
reasoning: true in the provider config.

Find the matched model by ID from the provider config and propagate all
its properties (reasoning, input, cost, contextWindow, maxTokens, name,
api) instead of using hardcoded defaults or the first model in the array.

Fixes openclaw#13575
@mcaxtr mcaxtr force-pushed the fix/13575-reasoning-effort branch from 4498c02 to e1eaf83 on February 19, 2026
@steipete
Contributor

Closing as AI-assisted stale-fix triage.

Linked issue #13575 ("[Bug]: reasoning.effort not forwarded to Ollama — only minimal thinking despite thinking=high") was closed on 2026-02-23 with state reason NOT_PLANNED.
Given that issue state, this fix PR is no longer needed in the active queue and is being closed as stale.

If the underlying bug is still reproducible on current main, please reopen this PR (or open a new focused fix PR) and reference both #13575 and #13626 for fast re-triage.

@steipete
Contributor

Closed after AI-assisted stale-fix triage (closed issue duplicate/stale fix).


Labels

agents (Agent runtime and tooling), size: S

Development

Successfully merging this pull request may close these issues.

[Bug]: reasoning.effort not forwarded to Ollama — only minimal thinking despite thinking=high

2 participants