feat: per-model thinkingDefault config and /think default directive #20458
kmixter wants to merge 1 commit into openclaw:main
Conversation
Force-pushed 04057a8 to 82292ae
```ts
// Re-resolve thinking default if the model changed (per-model thinkingDefault support).
const newProvider = sessionEntry.providerOverride ?? provider;
const newModel = sessionEntry.modelOverride ?? model;
if (newProvider !== provider || newModel !== model) {
  const catalog = await loadModelCatalog({ config: cfg });
  resolvedDefaultThinkLevel =
    (sessionEntry?.thinkingLevel as ThinkLevel | undefined) ??
    resolveThinkingDefault({ cfg, provider: newProvider, model: newModel, catalog });
}
```
per-model thinking default only re-resolved in directive-only path, not when directives are combined with message content
the re-resolution logic here only applies when the user sends a directive-only message (e.g., just `/model openai/gpt-4`). when directives are combined with content (e.g., `/model openai/gpt-4 what is 2+2?`), the code takes the `hasAnyDirective` path (line 217), which uses `applyInlineDirectivesFastLane`. that path calls `resolveCurrentDirectiveLevels`, which invokes `modelState.resolveDefaultThinkingLevel()`, but this closure was created with the *original* provider/model values and won't reflect the per-model default for the newly switched model.
this means users who switch models inline with their message won't get the correct per-model thinking default until their next message.
consider moving the re-resolution logic to a shared location that both paths use, or updating the `resolveDefaultThinkingLevel` closure to check for provider/model overrides in the session entry.
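One rough sketch of the second option (illustrative only, not a tested patch): `sessionEntry`, `provider`, `model`, `cfg`, `loadModelCatalog`, `resolveThinkingDefault`, and `ThinkLevel` come from the hunk above, while the assumption that the closure can be async (or that the catalog is already loaded) is added here.

```ts
// Sketch: have the closure read session overrides at call time instead of
// capturing the original provider/model when modelState is built.
// Assumes the closure may be async (or that the catalog is preloaded).
const resolveDefaultThinkingLevel = async (): Promise<ThinkLevel> => {
  const effectiveProvider = sessionEntry?.providerOverride ?? provider;
  const effectiveModel = sessionEntry?.modelOverride ?? model;
  const catalog = await loadModelCatalog({ config: cfg });
  return resolveThinkingDefault({
    cfg,
    provider: effectiveProvider,
    model: effectiveModel,
    catalog,
  });
};
```

With the override check inside the closure, both the directive-only path and `applyInlineDirectivesFastLane` would see the same per-model default after an inline `/model` switch.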
Force-pushed 82292ae to 493652c
Add per-model thinkingDefault to resolve thinking level based on which model is active, so users running different models (e.g. gemini-3-pro for chat, gemini-3-flash for heartbeats) get appropriate thinking levels automatically without manually toggling /think per session.

Resolution priority: per-model → global → catalog auto-detect (existing fallback behavior unchanged). Config keys are normalized via parseModelRef so aliases like "anthropic/opus-4.6" resolve correctly.

Also adds:
- /think default directive to undo a session-level thinking override and cascade back to the per-model default
- Re-resolve thinking default after inline /model switch for both directive-only and inline-with-content paths
- Memory flush passes provider/model through for correct resolution

Related: openclaw#18152 (built independently, shares the per-model config concept but adds /think default and model-switch re-resolution)

AI-assisted (Claude Code), fully tested. Tested: pnpm build, pnpm check, pnpm test, and e2e verified on live gateway.
Force-pushed 493652c to 240f2b6
The concern raised by the Greptile bot (per-model thinking default only re-resolved in directive-only path) has been addressed in the current revision.
Summary
Add per-model thinkingDefault to resolve thinking level based on which
model is active, so users running different models (e.g. gemini-3-pro
for chat, gemini-3-flash for heartbeats) get appropriate thinking levels
automatically without manually toggling /think per session.
Resolution priority: per-model → global → catalog auto-detect (existing
fallback behavior unchanged). Config keys are normalized via parseModelRef
so aliases like "anthropic/opus-4.6" resolve correctly.
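For illustration, the priority chain could look roughly like this (a sketch, not the PR's implementation: the config shape follows the example below, `parseModelRef`'s signature is assumed, and `catalogAutoDetect`, `Config`, and `ModelCatalog` are hypothetical stand-ins for the existing code):

```ts
// Illustrative sketch of the per-model → global → catalog auto-detect priority.
function resolveThinkingDefault(opts: {
  cfg: Config;
  provider: string;
  model: string;
  catalog: ModelCatalog;
}): ThinkLevel {
  const { cfg, provider, model, catalog } = opts;
  // Normalize aliases like "anthropic/opus-4.6" to a canonical provider/model key.
  const ref = parseModelRef(`${provider}/${model}`);
  const key = `${ref.provider}/${ref.model}`;

  const perModel = cfg.agents?.defaults?.models?.[key]?.thinkingDefault;
  if (perModel !== undefined) return perModel; // 1. per-model

  const global = cfg.agents?.defaults?.thinkingDefault;
  if (global !== undefined) return global; // 2. global

  return catalogAutoDetect(catalog, key); // 3. existing catalog auto-detect fallback
}
```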
Also adds:
- /think default directive to undo a session-level thinking override and cascade back to the per-model default (sketched below)
- Re-resolve thinking default after inline /model switch for both directive-only and inline-with-content paths
- Memory flush passes provider/model through for correct resolution
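A minimal sketch of the /think default cascade (hypothetical handler and type names; `thinkingLevel` mirrors the session override field in the hunk earlier, and the level set is assumed from the config example below):

```ts
// Sketch: "/think default" removes the session-level override so the next
// resolution cascades back to the per-model → global → catalog default.
type ThinkLevel = "off" | "low" | "high"; // level set assumed

interface SessionEntry {
  thinkingLevel?: ThinkLevel; // session-level override; absent = use defaults
}

function applyThinkDirective(sessionEntry: SessionEntry, arg: string): void {
  if (arg === "default") {
    delete sessionEntry.thinkingLevel; // undo the session override
    return;
  }
  sessionEntry.thinkingLevel = arg as ThinkLevel; // e.g. "/think high"
}
```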
Discussion: #20612
Related: #18152 (built independently, shares the per-model config concept
but adds /think default and model-switch re-resolution)
Config example
{ "agents": { "defaults": { "thinkingDefault": "low", "models": { "google/gemini-2.5-pro": {}, "google/gemini-3-flash": { "thinkingDefault": "high" }, "google/gemini-flash-lite-latest": { "thinkingDefault": "off" } } } } }Test plan
- `pnpm build` passes
- `pnpm check` passes (clean on changed files)
- `pnpm test` passes (456 tests, 47 files)
- `/think default` resets level and per-model config resolves correctly via `openclaw doctor`

AI-assisted (Claude Code), fully tested.