
fix(mistral): lower default maxTokens to prevent 422 rejection (#52599) #53006

Closed

MoerAI wants to merge 1 commit into openclaw:main from MoerAI:fix/mistral-max-tokens

Conversation

@MoerAI (Contributor) commented Mar 23, 2026

Summary

Mistral AI returns HTTP 422 for all chat requests because several model definitions set maxTokens equal to contextWindow (up to 262144). This value is forwarded as max_tokens in API requests, exceeding Mistral's actual output token limit.

Root Cause

In extensions/mistral/model-definitions.ts, five models had maxTokens set to their full context window size. The max_tokens parameter controls output generation length, not input capacity — setting it to the context window size causes the API to reject the request.

Two models already had correct values (codestral-latest: 4096, mistral-small-latest: 16384), confirming this was an oversight.
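To make the failure mode concrete, here is a minimal sketch of the distinction. The interface and function names are illustrative stand-ins, not the actual openclaw types; only the `max_tokens` wire field and the 422 behavior come from the report above.

```typescript
// Sketch only: why maxTokens must stay within the provider's output limit
// rather than mirroring the context window.
interface ModelDefinition {
  id: string;
  contextWindow: number; // total tokens (input + output) the model can attend to
  maxTokens: number;     // output budget, forwarded verbatim as `max_tokens`
}

// Before the fix: output budget mistakenly set to the full context window.
const broken: ModelDefinition = {
  id: "mistral-large-latest",
  contextWindow: 262144,
  maxTokens: 262144, // exceeds Mistral's output limit -> HTTP 422
};

// After the fix: a realistic output limit, consistent with the
// already-correct mistral-small-latest value (16384).
const fixed: ModelDefinition = { ...broken, maxTokens: 16384 };

// The definition's value is copied straight into the request body,
// which is why a catalog mistake becomes an API rejection.
function buildRequestBody(model: ModelDefinition, prompt: string) {
  return {
    model: model.id,
    messages: [{ role: "user", content: prompt }],
    max_tokens: model.maxTokens,
  };
}

console.log(buildRequestBody(broken, "hi").max_tokens); // the rejected budget
console.log(buildRequestBody(fixed, "hi").max_tokens);  // the corrected budget
```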

Changes

  • extensions/mistral/model-definitions.ts: Lower MISTRAL_DEFAULT_MAX_TOKENS and per-model output limits:
| Model | Before | After |
| --- | --- | --- |
| MISTRAL_DEFAULT_MAX_TOKENS | 262144 | 16384 |
| devstral-medium-latest | 262144 | 32768 |
| magistral-small | 128000 | 40960 |
| mistral-large-latest | 262144 | 16384 |
| mistral-medium-2508 | 262144 | 32768 |
| pixtral-large-latest | 128000 | 32768 |
  • src/commands/onboard-auth.test.ts: Update expected maxTokens assertion

Closes #52599

…law#52599)

Several Mistral model definitions had maxTokens set equal to contextWindow (up to 262144), which is sent as max_tokens in API requests. The Mistral API rejects oversized output budgets with 422. Lowered to realistic output limits consistent with the already-correct mistral-small-latest (16384) and codestral-latest (4096) values.
greptile-apps bot (Contributor) commented Mar 23, 2026

Greptile Summary

This PR fixes HTTP 422 rejections from the Mistral API by correcting maxTokens values across the model definitions. The root cause: several models had maxTokens set equal to their full contextWindow size, but max_tokens in the API controls output length, not input capacity, and Mistral enforces a much lower output limit.

  • extensions/mistral/model-definitions.ts: MISTRAL_DEFAULT_MAX_TOKENS and five per-model maxTokens values are lowered to reasonable output limits (e.g. 16384 for mistral-large-latest, 32768 for devstral-medium-latest). The fix is correct for this module.
  • src/commands/onboard-auth.test.ts: Test updated correctly to reflect the new mistral-large-latest value.
  • Missing: extensions/mistral/model-definitions.test.ts was not updated — the assertions for magistral-small and pixtral-large-latest still expect the old value of 128000, so these tests will fail.
  • Missing: src/plugins/provider-model-definitions.ts contains its own duplicate MISTRAL_DEFAULT_MAX_TOKENS = 262144 constant (line 30) used by a separate buildMistralModelDefinition() function, which was not updated. This leaves the original 422 bug alive for that code path.

Confidence Score: 2/5

  • Not safe to merge — the fix is incomplete and existing tests will fail.
  • Two concrete blockers prevent merging: (1) extensions/mistral/model-definitions.test.ts has stale assertions that will cause CI failures, and (2) a duplicate MISTRAL_DEFAULT_MAX_TOKENS = 262144 in src/plugins/provider-model-definitions.ts means the 422 bug is not fully resolved. Both are straightforward one-line fixes, but they need to be addressed before this PR is merged.
  • Affected files: extensions/mistral/model-definitions.test.ts (stale maxTokens assertions) and src/plugins/provider-model-definitions.ts (un-updated duplicate constant).
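Both blockers are instances of the same invariant being violated in more than one place. A sketch of a mechanical check that would have flagged them — all names here are hypothetical stand-ins, since each real module exports its own definitions:

```typescript
// Sketch of a guard over the two Mistral catalogs the review says must agree
// (extensions/mistral/model-definitions.ts and
// src/plugins/provider-model-definitions.ts). Values below mirror the
// post-fix extension catalog and the stale plugin duplicate.
interface ModelDefinition {
  id: string;
  contextWindow: number;
  maxTokens: number;
}

const extensionDefaults = { MISTRAL_DEFAULT_MAX_TOKENS: 16384 };
const pluginDefaults = { MISTRAL_DEFAULT_MAX_TOKENS: 262144 }; // un-updated duplicate

const models: ModelDefinition[] = [
  { id: "magistral-small", contextWindow: 128000, maxTokens: 40960 },
  { id: "pixtral-large-latest", contextWindow: 128000, maxTokens: 32768 },
];

function findViolations(): string[] {
  const problems: string[] = [];
  // Duplicated constants must stay in sync, or one code path keeps the bug.
  if (
    extensionDefaults.MISTRAL_DEFAULT_MAX_TOKENS !==
    pluginDefaults.MISTRAL_DEFAULT_MAX_TOKENS
  ) {
    problems.push("duplicate MISTRAL_DEFAULT_MAX_TOKENS constants disagree");
  }
  // Output budget must be strictly below the context window.
  for (const m of models) {
    if (m.maxTokens >= m.contextWindow) {
      problems.push(`${m.id}: maxTokens must be below contextWindow`);
    }
  }
  return problems;
}

console.log(findViolations());
```

Run as a unit test in CI, a check like this turns the "duplicate constant drifted" failure from a runtime 422 into a red build.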

Comments Outside Diff (2)

  1. extensions/mistral/model-definitions.test.ts, line 37-50 (link)

    P0 Stale test assertions will cause test failures

    The maxTokens values for magistral-small and pixtral-large-latest in this test still reflect the old values (128000), but the model definitions were updated to 40960 and 32768 respectively. These assertions will now fail.

  2. src/plugins/provider-model-definitions.ts, line 30 (link)

    P1 Duplicate Mistral constant not updated

    This file contains its own copy of MISTRAL_DEFAULT_MAX_TOKENS that is still set to 262144. The buildMistralModelDefinition() function at line 167 returns a model using this stale value, meaning the same 422 rejection bug persists for any code path that goes through this plugin-level builder rather than the extensions/mistral module.

Suggested Fixes

extensions/mistral/model-definitions.test.ts, lines 37-50:

```suggestion
        expect.objectContaining({
          id: "magistral-small",
          reasoning: true,
          input: ["text"],
          contextWindow: 128000,
          maxTokens: 40960,
        }),
        expect.objectContaining({
          id: "pixtral-large-latest",
          input: ["text", "image"],
          contextWindow: 128000,
          maxTokens: 32768,
        }),
```

src/plugins/provider-model-definitions.ts, line 30:

```suggestion
const MISTRAL_DEFAULT_MAX_TOKENS = 16384;
```

Reviews (1): Last reviewed commit: "fix(mistral): lower default maxTokens to..."

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d11d6007ca


```diff
 export const MISTRAL_DEFAULT_MODEL_REF = `mistral/${MISTRAL_DEFAULT_MODEL_ID}`;
 export const MISTRAL_DEFAULT_CONTEXT_WINDOW = 262144;
-export const MISTRAL_DEFAULT_MAX_TOKENS = 262144;
+export const MISTRAL_DEFAULT_MAX_TOKENS = 16384;
```
P1: Rewrite saved Mistral configs to the new token limits

Lowering the catalog defaults here only fixes brand-new configs. Any user who already onboarded mistral-large-latest keeps the old maxTokens value, because applyProviderConfigWithDefaultModels reuses existingModels unchanged when the default model ID is already present (src/plugins/provider-onboarding-config.ts:111-119), and the runtime merge path prefers an explicit maxTokens over the implicit plugin catalog (src/agents/models-config.merge.ts:83-92). In that upgrade scenario, requests still go out with 262144 and continue to hit the same 422 this commit is trying to resolve.

Useful? React with 👍 / 👎.
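The upgrade gap described above could be closed by clamping stored values against the catalog during the merge. A minimal sketch under stated assumptions — `clampToCatalog`, `StoredModel`, and the catalog map are hypothetical names, not the actual models-config.merge.ts API:

```typescript
// Sketch only: repair already-onboarded configs whose explicit maxTokens
// predates this fix, so upgraded users stop sending oversized max_tokens.
interface StoredModel {
  id: string;
  maxTokens?: number; // explicit stored value, preferred by the merge path
}

// Catalog output limits after this fix (stand-in subset).
const catalogMaxTokens: Record<string, number> = {
  "mistral-large-latest": 16384,
  "devstral-medium-latest": 32768,
};

// Clamp a stored explicit maxTokens down to the catalog limit; leave
// models the catalog doesn't know about untouched.
function clampToCatalog(model: StoredModel): StoredModel {
  const limit = catalogMaxTokens[model.id];
  if (limit !== undefined && model.maxTokens !== undefined && model.maxTokens > limit) {
    return { ...model, maxTokens: limit };
  }
  return model;
}

const upgraded = clampToCatalog({ id: "mistral-large-latest", maxTokens: 262144 });
console.log(upgraded.maxTokens); // clamped to the catalog limit
```

Whether clamping belongs in the merge step or in a one-time migration is a design choice; the merge step is simpler but silently overrides a value the user may have set deliberately.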

```diff
 cost: { input: 0.5, output: 1.5, cacheRead: 0, cacheWrite: 0 },
 contextWindow: 128000,
-maxTokens: 128000,
+maxTokens: 40960,
```

P2: Update the Mistral catalog test to these new limits

The extension test still asserts the old catalog values for magistral-small and pixtral-large-latest in extensions/mistral/model-definitions.test.ts:37-48, so this change leaves the Mistral plugin test red. Without updating that assertion alongside the catalog change, CI will fail even though the production behavior was intentionally modified.

Useful? React with 👍 / 👎.

@vincentkoc (Member)

closed in favour of #53006

@vincentkoc vincentkoc closed this Mar 23, 2026

Labels

commands (Command implementations), size: XS

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: Mistral AI Error 422

2 participants