Conversation
Third-party providers often use non-canonical model IDs (e.g., "openai/gpt-5.2") that fail @ai-sdk/openai's internal startsWith() checks in getOpenAILanguageModelCapabilities(), causing reasoningEffort and reasoningSummary to be silently dropped from the request body. Set forceReasoning: true in providerOptions when the model is detected as a reasoning model by Cherry Studio's own isReasoningModel(), bypassing the AI SDK's model ID allowlist. Fixes #13454 Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]> Signed-off-by: icarus <[email protected]>
DeJeune
approved these changes
Mar 14, 2026
Contributor
Reviewed the workaround and the detection path it relies on. The fix is narrowly scoped: it only adds forceReasoning when Cherry Studio already classifies the model as a reasoning model, which is the right temporary bypass for third-party Responses providers using non-canonical model IDs. I don't see a new blocking issue in the current revision. LGTM.
What this PR does
Before this PR:
When using reasoning models (e.g., GPT-5.2) through third-party OpenAI-compatible providers that use the Responses API endpoint, `reasoningEffort` and `reasoningSummary` settings from the UI are silently ignored. The AI SDK logs a warning: `reasoningEffort is not supported for non-reasoning models`.

After this PR:
Reasoning parameters are correctly included in the Responses API request body for all providers, regardless of model ID format.
Fixes #13454
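The silent drop comes down to a prefix check on the model ID. A minimal sketch of that failure mode (the prefix list below is an illustrative assumption, not `@ai-sdk/openai`'s actual capability table):

```typescript
// Illustrative sketch of the allowlist-style check described above.
// The real logic lives in @ai-sdk/openai's
// getOpenAILanguageModelCapabilities(); these prefixes are assumptions.
const reasoningPrefixes = ["o1", "o3", "gpt-5"];

function sdkThinksReasoning(modelId: string): boolean {
  return reasoningPrefixes.some((prefix) => modelId.startsWith(prefix));
}

// A canonical ID passes the check; a provider-prefixed ID does not:
console.log(sdkThinksReasoning("gpt-5.2"));        // true
console.log(sdkThinksReasoning("openai/gpt-5.2")); // false
```

Because `"openai/gpt-5.2".startsWith("gpt-5")` is false, the SDK classifies the model as non-reasoning and drops the reasoning params.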
Why we need it and why it was done in this way
The following tradeoffs were made:
The `@ai-sdk/openai` Responses model uses a hardcoded model ID allowlist (`modelId.startsWith('gpt-5')`, etc.) to determine if a model supports reasoning. Third-party providers often use non-canonical model IDs (e.g., `openai/gpt-5.2`) that fail these checks, causing reasoning params to be silently dropped.

The fix uses `forceReasoning: true` — a bypass mechanism provided by the AI SDK — when Cherry Studio's own `isReasoningModel()` detects the model as a reasoning model. This is semantically correct: Cherry Studio's model detection is more comprehensive than the AI SDK's allowlist.

The following alternatives were considered:

- Migrating to `@ai-sdk/open-responses` (tracked in [Feature]: Migrate third-party OpenAI Responses providers to @ai-sdk/open-responses #13462) — the better long-term solution, but larger in scope
- Falling back to the `openai-compatible` provider with manual reasoning parameter handling — would lose Responses API features

Breaking changes
None.
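The gating described under "Why we need it" can be sketched as follows. This is a hedged sketch: `isReasoningModel` here is a simplified stand-in for Cherry Studio's detector, and the options shape follows this PR's description of the AI SDK's `forceReasoning` bypass, so exact keys may differ in the real codebase.

```typescript
// Assumed shape of the provider options relevant to this PR.
type OpenAIProviderOptions = {
  reasoningEffort?: string;
  reasoningSummary?: string;
  forceReasoning?: boolean;
};

// Simplified stand-in for Cherry Studio's isReasoningModel(): strip a
// provider prefix like "openai/" before matching known model families.
function isReasoningModel(modelId: string): boolean {
  const bare = modelId.split("/").pop() ?? modelId;
  return /^(o[134]|gpt-5)/.test(bare);
}

function buildProviderOptions(
  modelId: string,
  effort?: string,
  summary?: string
): OpenAIProviderOptions {
  const opts: OpenAIProviderOptions = {
    reasoningEffort: effort,
    reasoningSummary: summary,
  };
  // Bypass the SDK's model-ID allowlist when our own detection already
  // classifies the model as a reasoning model.
  if (isReasoningModel(modelId)) {
    opts.forceReasoning = true;
  }
  return opts;
}
```

The key design point is that the bypass is keyed on model capability (`isReasoningModel`), not on a user toggle, so non-reasoning models never get `forceReasoning` set.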
Testing
`reasoning.effort` and `reasoning.summary` are now correctly included in the Responses API request body when using a third-party provider with non-canonical model IDs.

Special notes for your reviewer
- The `forceReasoning` option is an official AI SDK feature, not a hack. It's documented for exactly this use case: "stealth" reasoning models where the model ID is not recognized by the SDK's allowlist.
- `isReasoningModel(model)` is used instead of `enableReasoning` because `forceReasoning` should reflect model capability, not user preference.
- This is a temporary workaround until the migration to `@ai-sdk/open-responses`.

Checklist
Release note