Reply: honor per-model context token overrides #32559
zwright8 wants to merge 3 commits into openclaw:main from
Conversation
Greptile Summary
This PR honors per-model context-token overrides (
Key changes:
Observation — untested edge case: there is no test for when per-model
Confidence Score: 5/5
Last reviewed commit: e748da9
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e748da9614
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
: modelParams?.context1m === true
  ? CONTEXT_1M_TOKENS
Gate context1m override to supported Anthropic models
This branch treats any `params.context1m: true` as a universal 1_048_576-token window, but `context1m` is only valid for the supported Anthropic Opus/Sonnet models. For other models and providers, the request path does not enable the matching capability, so this inflated limit can let oversized prompts pass local truncation and then fail downstream at the model's real limit. The regression appears whenever a model entry outside the supported Anthropic set has `context1m: true`.
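A minimal sketch of the gating this comment suggests. The helper `supportsContext1m`, the model-id set, and the `resolveContextTokens` signature shown here are hypothetical illustrations, not the PR's actual code; only `CONTEXT_1M_TOKENS` and `modelParams.context1m` appear in the diff:

```typescript
// Hypothetical sketch: honor context1m only for models where the request
// path actually enables the 1M-token capability.
const CONTEXT_1M_TOKENS = 1_048_576;

// Assumed allowlist; the real set of supported Anthropic Opus/Sonnet
// models would come from the provider capability metadata.
const CONTEXT_1M_MODELS = new Set([
  "anthropic/claude-sonnet-4",
  "anthropic/claude-opus-4",
]);

function supportsContext1m(provider: string, modelId: string): boolean {
  return CONTEXT_1M_MODELS.has(`${provider}/${modelId}`);
}

function resolveContextTokens(
  provider: string,
  modelId: string,
  modelParams: { context1m?: boolean; contextTokens?: number } | undefined,
  defaultTokens: number,
): number {
  // Gate the override: an unsupported model keeps its normal limit, so
  // local truncation still protects against oversized prompts.
  if (modelParams?.context1m === true && supportsContext1m(provider, modelId)) {
    return CONTEXT_1M_TOKENS;
  }
  return modelParams?.contextTokens ?? defaultTokens;
}
```

Under these assumptions, a non-Anthropic entry with `context1m: true` falls back to its explicit or default token limit instead of the inflated window.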
Force-pushed e748da9 to 1b90111
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1b901112f1
const parsed = resolveModelRefFromString({
  raw: rawKey,
  defaultProvider: params.provider,
});
Use agent default provider when parsing model keys
In `resolveContextTokens`, unqualified `agents.defaults.models` keys are reparsed with `defaultProvider: params.provider`, so a key like `gpt-4o` follows the currently selected provider instead of the config's default-provider semantics used elsewhere. When a config contains both an unqualified default-provider entry and an explicit entry for another provider with the same model id, the wrong model params can be selected and the wrong context-token limit applied to replies.
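A sketch of the lookup semantics at issue, assuming a hypothetical shape for `resolveModelRefFromString` (only its name and the `raw`/`defaultProvider` parameters appear in the diff; everything else here is illustrative):

```typescript
// Hypothetical sketch of parsing a model key against a default provider.
interface ModelRef {
  provider: string;
  modelId: string;
}

function resolveModelRefFromString(params: {
  raw: string;
  defaultProvider: string;
}): ModelRef {
  const { raw, defaultProvider } = params;
  // Qualified keys look like "provider/model"; unqualified keys fall
  // back to the supplied default provider.
  const slash = raw.indexOf("/");
  if (slash >= 0) {
    return { provider: raw.slice(0, slash), modelId: raw.slice(slash + 1) };
  }
  return { provider: defaultProvider, modelId: raw };
}

// The comment's suggested call site: resolve unqualified keys against the
// config-level default provider (assumed name) rather than the currently
// selected params.provider, so "gpt-4o" keeps its intended meaning even
// while another provider is active.
const configDefaultProvider = "openai"; // assumed config value
const parsed = resolveModelRefFromString({
  raw: "gpt-4o",
  defaultProvider: configDefaultProvider,
});
```

With this convention, an explicit `anthropic/...` entry and an unqualified default-provider entry sharing a model id resolve to distinct refs, so each picks up its own context-token params.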
Summary
Validation
pnpm exec vitest run src/auto-reply/reply/model-selection.test.ts src/auto-reply/reply/directive-handling.model.test.ts
Context
This PR is one focused slice extracted from: