fix(model): propagate provider model properties in fallback resolution #25641
Closed
mcaxtr wants to merge 1 commit into openclaw:main
Conversation
Force-pushed from 0260d60 to b559298
Member

Thanks for this PR and the detailed tests. Context after merge of #29205:

If you want to keep this alive, the cleanest path is to re-scope this PR to only the remaining reasoning-propagation delta; then it should be much easier to review/merge.
This was referenced Feb 28, 2026
Member

Thanks again for the report and patch here. This is now fully covered by merged PRs:

Closing this as superseded to keep one canonical implementation path, with credit preserved to this PR and issue #25636.
Summary
resolveModel() hardcodes reasoning: false and reads contextWindow/maxTokens from models[0] instead of the matched model. As a result, models configured with reasoning: true produce minimal thinking (~30 tokens) instead of deep reasoning (200+ tokens), and multiple models under the same provider get wrong context limits. The fix looks up the matched model by ID and propagates its properties (reasoning, input, cost, contextWindow, maxTokens, name, api).

Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
User-visible / Behavior Changes
Users running Ollama models (or other custom provider models) with reasoning: true in their config will now correctly receive deep reasoning output, with reasoning.effort forwarded to the API. Users with multiple models configured under a custom provider will now get the correct context window and max tokens for the requested model instead of inheriting them from the first model in the array.

Security Impact (required)
No / No / No / No / No

Repro + Verification
Environment
```json
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434/v1",
        "api": "openai-responses",
        "models": [
          { "id": "thinking-model", "reasoning": true, "contextWindow": 128000 }
        ]
      }
    }
  }
}
```

Steps
1. Configure a reasoning: true model
2. Set agents.defaults.thinkingDefault to high

Expected
reasoning.effort: "high" included in the API request

Actual
Before fix: reasoning.effort not forwarded (reasoning field was hardcoded false).
After fix: reasoning.effort correctly forwarded.

Evidence
Added 3 new tests in src/agents/pi-embedded-runner/model.test.ts, including coverage of the models[0] fallback. All tests pass (20 total: 17 existing + 3 new).
Human Verification (required)
What you personally verified (not just CI), and how:
Verified scenarios:
- Built locally (pnpm build)
- Ran checks (pnpm check)
- Reproduced the reasoning and contextWindow/maxTokens bugs

Edge cases checked:
What you did not verify:
Compatibility / Migration
Yes / No / No

Users with affected configs will immediately get correct behavior after upgrade.
Failure Recovery (if this breaks)
Revert src/agents/pi-embedded-runner/model.ts (remove the matchedModel lookup, restore the hardcoded reasoning: false and the models[0] reads).

Risks and Mitigations
- .trim() applied on both sides of the ID comparison
- Falls back to the previous models[0] behavior when no match is found

Greptile Summary
Fixes model property propagation in fallback resolution by matching models by ID. Before this fix, the generic fallback hardcoded reasoning: false and read contextWindow/maxTokens from the first model in the provider's array (models[0]), causing incorrect behavior for Ollama and other custom providers with reasoning-capable models or multiple models with different limits.

Confidence Score: 5/5
Last reviewed commit: 0260d60