fix(model): propagate provider model properties in fallback resolution #13626
Closed
mcaxtr wants to merge 5 commits into openclaw:main from
Conversation
Contributor
Author
@greptile please re-review this PR.
Force-pushed: d99df76 → 293cef9, 293cef9 → 8d85d20, 8d85d20 → 25fc1c4, 5549930 → 61a3af4, bfc1ccb → f92900f, ab3c6ca → 4498c02
The generic fallback in resolveModel() hardcoded reasoning: false and read contextWindow/maxTokens from models[0] instead of the matched model. This caused reasoning.effort not to be forwarded for Ollama models with reasoning: true in the provider config.

Find the matched model by ID from the provider config and propagate all its properties (reasoning, input, cost, contextWindow, maxTokens, name, api) instead of using hardcoded defaults or the first model in the array.

Fixes openclaw#13575
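The corrected lookup can be sketched as follows (a minimal illustration with assumed config shapes and default values; not the actual openclaw types or resolveModel() source):

```typescript
// Assumed provider-config shape for illustration; not the real openclaw type.
interface ProviderModel {
  id: string;
  name?: string;
  api?: string;
  reasoning?: boolean;
  input?: string[];
  cost?: { input: number; output: number };
  contextWindow?: number;
  maxTokens?: number;
}

// Before the fix: reasoning was hardcoded to false, and contextWindow/maxTokens
// always came from models[0]. After: find the requested model by trimmed ID and
// propagate its properties, falling back to defaults only when nothing matches.
function fallbackModel(requestedId: string, models: ProviderModel[]): ProviderModel {
  const wanted = requestedId.trim();
  const matched = models.find((m) => m.id.trim() === wanted);
  return {
    id: wanted,
    name: matched?.name ?? wanted,
    api: matched?.api,
    reasoning: matched?.reasoning ?? false,
    input: matched?.input ?? ["text"],
    cost: matched?.cost,
    contextWindow: matched?.contextWindow ?? 8192, // assumed default, not the real one
    maxTokens: matched?.maxTokens ?? 4096, // assumed default, not the real one
  };
}
```

With a provider entry like { id: "qwen3", reasoning: true }, the resolved model now carries reasoning: true through, so reasoning.effort can be forwarded to the API.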
Force-pushed: 4498c02 → e1eaf83
Contributor
Closing as AI-assisted stale-fix triage. Linked issue #13575 ("[Bug]: reasoning.effort not forwarded to Ollama — only minimal thinking despite thinking=high") is currently CLOSED and was closed on 2026-02-23T04:27:21Z with state reason NOT_PLANNED. If the underlying bug is still reproducible on current main, please reopen this PR (or open a new focused fix PR) and reference both #13575 and #13626 for fast re-triage.
Contributor
Closed after AI-assisted stale-fix triage (closed issue duplicate/stale fix).
This was referenced Feb 24, 2026
Summary
Fixes #13575
The generic fallback in resolveModel() had two bugs when constructing a model from provider config:

- reasoning: false hardcoded — ignored the matched model's reasoning setting, preventing reasoning.effort from being forwarded to the API (e.g. Ollama models with reasoning: true)
- models[0] for contextWindow/maxTokens — read from the first model in the provider's array instead of the matched model, causing incorrect context limits when multiple models are configured

The fix finds the matched model by ID from the provider config and propagates all its properties (reasoning, input, cost, contextWindow, maxTokens, name, api) instead of using hardcoded defaults or the first model in the array.

Test plan

- New tests cover the fallback path, including the models[0] case (npx vitest run src/agents/pi-embedded-runner/model.test.ts)
- pnpm build && pnpm check clean

Greptile Overview
Greptile Summary
Fixed two bugs in the generic fallback path of resolveModel() when constructing models from provider config:

- reasoning propagation: previously hardcoded to false, now reads from the matched model config (e.g., Ollama models with reasoning: true)
- contextWindow/maxTokens selection: previously read from models[0] regardless of which model was requested, now reads from the matched model

Added comprehensive test coverage for all three scenarios (unmatched model using defaults, matched model with reasoning, multiple models selecting the correct one). The fix properly finds the matched model by trimmed ID and propagates all its properties (reasoning, input, cost, contextWindow, maxTokens, name, api). Previous review comments about trimming inconsistencies have been addressed in follow-up commits.

Confidence Score: 5/5