
fix(aiCore): bypass AI SDK model ID allowlist for reasoning detection#13463

Merged
DeJeune merged 1 commit into main from fix/13454, Mar 14, 2026

Conversation

@EurFelux
Collaborator

What this PR does

Before this PR:

When using reasoning models (e.g., GPT-5.2) through third-party OpenAI-compatible providers that use the Responses API endpoint, the reasoningEffort and reasoningSummary settings from the UI are silently ignored. The AI SDK logs a warning: "reasoningEffort is not supported for non-reasoning models."

After this PR:

Reasoning parameters are correctly included in the Responses API request body for all providers, regardless of model ID format.

Fixes #13454

Why we need it and why it was done in this way

The following tradeoffs were made:

The @ai-sdk/openai Responses model uses a hardcoded model ID allowlist (modelId.startsWith('gpt-5'), etc.) to determine if a model supports reasoning. Third-party providers often use non-canonical model IDs (e.g., openai/gpt-5.2) that fail these checks, causing reasoning params to be silently dropped.
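The effect of such a prefix allowlist on provider-prefixed IDs can be sketched as follows (the function name and prefix list here are illustrative stand-ins, not the actual @ai-sdk/openai internals):

```typescript
// Illustrative stand-in for a hardcoded model ID allowlist; the real
// check lives inside @ai-sdk/openai and may differ in detail.
function passesReasoningAllowlist(modelId: string): boolean {
  const reasoningPrefixes = ["gpt-5", "o1", "o3", "o4"];
  return reasoningPrefixes.some((prefix) => modelId.startsWith(prefix));
}

// A canonical ID passes, but a provider-prefixed ID fails the same check:
console.log(passesReasoningAllowlist("gpt-5.2"));        // true
console.log(passesReasoningAllowlist("openai/gpt-5.2")); // false
```

Because startsWith() anchors at position 0, any routing prefix like "openai/" defeats the check even though the underlying model is identical.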

The fix uses forceReasoning: true — a bypass mechanism provided by the AI SDK — when Cherry Studio's own isReasoningModel() detects the model as a reasoning model. This is semantically correct: Cherry Studio's model detection is more comprehensive than the AI SDK's allowlist.
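A minimal sketch of the approach, assuming a Cherry-Studio-style isReasoningModel() helper (the helper body, the Model shape, and the option-building function here are simplified for illustration; forceReasoning itself is the documented AI SDK option named above):

```typescript
interface Model {
  id: string;
  provider: string;
}

// Simplified stand-in for Cherry Studio's own, more comprehensive detection.
function isReasoningModel(model: Model): boolean {
  return /gpt-5|\bo[134]\b/.test(model.id);
}

function buildOpenAIProviderOptions(
  model: Model,
  reasoningEffort?: "low" | "medium" | "high"
): { openai: Record<string, unknown> } {
  const openai: Record<string, unknown> = {};
  if (reasoningEffort !== undefined) {
    openai.reasoningEffort = reasoningEffort;
  }
  // Bypass the SDK's model ID allowlist: trust our own capability
  // detection instead of the hardcoded startsWith() checks.
  if (isReasoningModel(model)) {
    openai.forceReasoning = true;
  }
  return { openai };
}

const opts = buildOpenAIProviderOptions(
  { id: "openai/gpt-5.2", provider: "my-proxy" },
  "high"
);
console.log(opts.openai.forceReasoning); // true
```

Note that forceReasoning is keyed to model capability (isReasoningModel), not to the user's enableReasoning preference, so a reasoning model with reasoning switched off is still classified correctly.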

The following alternatives were considered:

Breaking changes

None.

Testing

  • Manual test passed: confirmed reasoning.effort and reasoning.summary are now correctly included in the Responses API request body when using a third-party provider with non-canonical model IDs.

Special notes for your reviewer

  • The forceReasoning option is an official AI SDK feature, not a hack. It's documented for exactly this use case: "stealth" reasoning models where the model ID is not recognized by the SDK's allowlist.
  • isReasoningModel(model) is used instead of enableReasoning because forceReasoning should reflect model capability, not user preference.
  • A TODO comment links to [Feature]: Migrate third-party OpenAI Responses providers to @ai-sdk/open-responses #13462 for the longer-term migration to @ai-sdk/open-responses.

Checklist

  • PR: The PR description is expressive enough and will help future contributors
  • Code: Write code that humans can understand and Keep it simple
  • Refactor: You have left the code cleaner than you found it (Boy Scout Rule)
  • Upgrade: Impact of this change on upgrade flows was considered and addressed if required
  • Documentation: A user-guide update was considered and is present (link) or not required.
  • Self-review: I have reviewed my own code before requesting review from others

Release note

fix: reasoning effort and summary mode settings now correctly apply for third-party OpenAI-compatible providers using the Responses API endpoint

Third-party providers often use non-canonical model IDs (e.g.,
"openai/gpt-5.2") that fail @ai-sdk/openai's internal startsWith()
checks in getOpenAILanguageModelCapabilities(), causing reasoningEffort
and reasoningSummary to be silently dropped from the request body.

Set forceReasoning: true in providerOptions when the model is detected
as a reasoning model by Cherry Studio's own isReasoningModel(), bypassing
the AI SDK's model ID allowlist.

Fixes #13454

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Signed-off-by: icarus <[email protected]>
@EurFelux requested a review from DeJeune on March 14, 2026 10:45
@cherry-ai-bot (Contributor) left a comment


Reviewed the workaround and the detection path it relies on. The fix is narrowly scoped: it only adds forceReasoning when Cherry Studio already classifies the model as a reasoning model, which is the right temporary bypass for third-party Responses providers using non-canonical model IDs. I don't see a new blocking issue in the current revision. LGTM.

@DeJeune merged commit f3244c3 into main on Mar 14, 2026
12 checks passed
@DeJeune deleted the fix/13454 branch March 14, 2026 11:25
@kangfenmao mentioned this pull request Mar 14, 2026
10 tasks


Development

Successfully merging this pull request may close these issues.

[Bug]: The reasoning strength settings and summary mode settings on the UI interface are invalid for some LLM models.

2 participants