
fix: honor model-specific context limits for non-Anthropic models#2366

Merged
code-yeongyu merged 4 commits into dev from fix/issue-2338
Mar 11, 2026

Conversation

@code-yeongyu
Owner

@code-yeongyu code-yeongyu commented Mar 7, 2026

Summary

  • use cached provider/model context limits in context-window-monitor instead of skipping non-Anthropic models
  • make dynamic-truncator resolve provider-aware context limits before falling back to Anthropic-specific limits
  • allow tool-output-truncator to pass modelContextLimitsCache through and cover the fix with regression tests

Testing

  • bun test src/hooks/preemptive-compaction.test.ts src/hooks/context-window-monitor.test.ts src/hooks/context-window-monitor.model-context-limits.test.ts src/shared/dynamic-truncator.test.ts src/hooks/tool-output-truncator.test.ts
  • bun test
  • bun run typecheck
  • bun run build

Summary by cubic

Honors model-specific context window limits for non-Anthropic providers in warnings, reminder text, and truncation. Previously, warnings were skipped and truncation used incorrect limits for these models; the fix uses cached per-model limits and adds regression tests. Fixes issue 2338.

  • Bug Fixes
    • Context window monitor uses modelContextLimitsCache with providerID/modelID; warns only when a known limit is exceeded.
    • Reminder text shows the actual per-model limit and usage percentages/tokens (no hardcoded 1M).
    • Dynamic truncator applies provider-aware limits; falls back to Anthropic only for Anthropic providers; tool output truncator forwards modelContextLimitsCache.
    • Added tests for non-Anthropic cached limits, reminder content, and pass-through behavior.
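The resolution order described in the bullets above can be sketched as follows. This is a minimal illustration only; the cache key format, the `resolveContextLimit` helper, and the fallback value are assumptions, not the actual implementation:

```typescript
// Hypothetical sketch of provider-aware context-limit resolution.
// Cache is assumed to be keyed by "providerID/modelID"; names and the
// Anthropic fallback value are illustrative.
type ModelContextLimitsCache = Map<string, number>;

const ANTHROPIC_FALLBACK_LIMIT = 200_000; // assumed Anthropic default

function resolveContextLimit(
  cache: ModelContextLimitsCache,
  providerID: string,
  modelID: string,
): number | undefined {
  const cached = cache.get(`${providerID}/${modelID}`);
  if (cached !== undefined) return cached;
  // Fall back to the Anthropic default only for Anthropic providers; for
  // unknown non-Anthropic models return undefined so callers can skip the
  // warning instead of reporting limits that don't apply.
  return providerID === "anthropic" ? ANTHROPIC_FALLBACK_LIMIT : undefined;
}
```

Returning `undefined` for unknown non-Anthropic models is what lets the monitor warn only when a known limit is exceeded.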

Written for commit 7de80e6. Summary will update on new commits.


@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 6 files

Confidence score: 3/5

  • There is concrete regression risk: src/hooks/context-window-monitor.ts still hardcodes Anthropic-specific limits/messages, which can produce incorrect warning stats (including negative percentages) and misleading guidance for non-Anthropic models.
  • src/hooks/context-window-monitor.model-context-limits.test.ts uses a boolean for AssistantMessage.finish where Opencode compatibility expects a string (for example, "stop"), indicating a likely contract mismatch that could surface beyond tests.
  • Given two medium-severity, high-confidence findings with user-visible impact, this looks like some risk rather than a safe-to-merge change.
  • Pay close attention to src/hooks/context-window-monitor.ts and src/hooks/context-window-monitor.model-context-limits.test.ts - model-specific messaging math and message-shape compatibility need correction.
Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/hooks/context-window-monitor.ts">

<violation number="1" location="src/hooks/context-window-monitor.ts:74">
P2: The warning message output still hardcodes `ANTHROPIC_DISPLAY_LIMIT` (1M) and `CONTEXT_REMINDER`, which will send incorrect stats (e.g., negative percentages) and Anthropic-specific instructions to non-Anthropic models.</violation>
</file>
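To illustrate the reviewer's point, a warning built from the resolved per-model limit rather than a hardcoded 1M constant might look like this. This is a sketch; `formatContextWarning` and the message wording are assumptions, not code from the PR:

```typescript
// Illustrative only: derive warning stats from the model's actual limit so
// percentages stay in range for models with windows smaller than 1M tokens.
function formatContextWarning(usedTokens: number, contextLimit: number): string {
  // Clamp so a model over its limit reports 100%, never a value above it,
  // and "remaining" never goes negative.
  const pct = Math.min(100, Math.round((usedTokens / contextLimit) * 100));
  const remaining = Math.max(0, contextLimit - usedTokens);
  return `Context usage: ${usedTokens} / ${contextLimit} tokens (${pct}%), ~${remaining} tokens remaining.`;
}
```

Clamping addresses the negative-percentage symptom the review calls out: stats are computed against the model's own limit, not a display constant that may be larger or smaller than the real window.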

<file name="src/hooks/context-window-monitor.model-context-limits.test.ts">

<violation number="1" location="src/hooks/context-window-monitor.model-context-limits.test.ts:31">
P1: Custom agent: **Opencode Compatibility**

The `finish` property on `AssistantMessage` should be a string (e.g. `"stop"`), not a boolean.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

```ts
sessionID,
providerID: "opencode",
modelID: "kimi-k2.5-free",
finish: true,
```

@cubic-dev-ai cubic-dev-ai bot Mar 7, 2026


P1: Custom agent: Opencode Compatibility

The finish property on AssistantMessage should be a string (e.g. "stop"), not a boolean.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/hooks/context-window-monitor.model-context-limits.test.ts, line 31:

<comment>The `finish` property on `AssistantMessage` should be a string (e.g. `"stop"`), not a boolean.</comment>

<file context>
@@ -0,0 +1,90 @@
+            sessionID,
+            providerID: "opencode",
+            modelID: "kimi-k2.5-free",
+            finish: true,
+            tokens: {
+              input: 150000,
</file context>
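Assuming the reviewer's Opencode compatibility note is correct, the fixture would carry a string finish reason instead of a boolean. A sketch of the corrected fragment (the `sessionID` value is a placeholder; other fields mirror the quoted context):

```typescript
// Hypothetical corrected fixture fragment: `finish` as a string ("stop"),
// not a boolean, per the Opencode AssistantMessage contract cited in the
// review comment.
const assistantMessage = {
  sessionID: "test-session", // placeholder; the real test builds this elsewhere
  providerID: "opencode",
  modelID: "kimi-k2.5-free",
  finish: "stop", // was `finish: true`
  tokens: {
    input: 150000,
  },
};
```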
