fix: support reasoning_details format for Gemini 3 models #9506
Merged
Conversation
- Add `OPEN_ROUTER_REASONING_DETAILS_MODELS` set to track models using the `reasoning_details` array format
- Accumulate and store the full `reasoning_details` array during streaming
- Add `getReasoningDetails()` method to `OpenRouterHandler`
- Store `reasoning_details` on the `ApiMessage` type and persist to conversation history
- Set `preserveReasoning: true` for models in the set
- Preserve `reasoning_details` when converting messages to OpenAI format
- Send `reasoning_details` back to the API on subsequent requests for tool-calling workflows

Fixes upstream issue from cline/cline#7551

Follows OpenRouter docs: https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks
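The streaming accumulation step described above can be sketched roughly as follows. The `ReasoningDetail` shape and the index-based merging are assumptions based on OpenRouter's documented format, not the PR's exact code.

```typescript
// Assumed shape of one entry in the reasoning_details array; field names
// follow OpenRouter's docs, but this is an illustrative sketch only.
interface ReasoningDetail {
  type: string   // e.g. "reasoning.text" or "reasoning.encrypted"
  text?: string  // streamed text fragment for text blocks
  index?: number // position used to merge fragments of the same block
}

// Merge streamed fragments into complete objects, keyed by index, so the
// stored array contains whole reasoning blocks rather than partial chunks.
function accumulateReasoningDetails(
  acc: ReasoningDetail[],
  incoming: ReasoningDetail[],
): ReasoningDetail[] {
  for (const chunk of incoming) {
    const i = chunk.index ?? acc.length
    const existing = acc[i]
    if (existing && existing.type === chunk.type) {
      // Same logical block: concatenate streamed text fragments.
      existing.text = (existing.text ?? "") + (chunk.text ?? "")
    } else {
      acc[i] = { ...chunk }
    }
  }
  return acc
}
```

The accumulated array can then be stored on the assistant message and replayed verbatim on the next request.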
Contributor
Re-review completed successfully. No new issues found. ✅ All checks passed.
mrubens
reviewed
Nov 22, 2025
…roperly

- Accumulate `reasoning_details` chunks into complete objects (not fragments)
- Prevent double storage: only store `reasoning_details` OR a reasoning block, never both
- Remove debug logging from openai-format.ts
- Inject fake `reasoning.encrypted` blocks for Gemini tool calls when switching models
- Fix priority-based format handling: check `reasoning_details` first, fall back to legacy `reasoning`
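The priority-based format handling and the "never store both" rule from this commit could look roughly like the sketch below; all names and shapes here are illustrative, not the repository's exact API.

```typescript
// Illustrative streamed-chunk shape: a delta may carry the legacy string
// format, the new array format, or neither.
interface Delta {
  reasoning?: string
  reasoning_details?: object[]
}

// Priority-based handling: check reasoning_details first, then fall back
// to the legacy reasoning string.
function classifyDelta(delta: Delta): "details" | "legacy" | "none" {
  if (delta.reasoning_details && delta.reasoning_details.length > 0) return "details"
  if (delta.reasoning) return "legacy"
  return "none"
}

// Double-storage prevention: persist reasoning_details OR the legacy
// reasoning string on the message, never both.
function pickReasoningToStore(
  details: object[] | undefined,
  legacy: string | undefined,
): { reasoning_details?: object[]; reasoning?: string } {
  if (details && details.length > 0) return { reasoning_details: details }
  if (legacy) return { reasoning: legacy }
  return {}
}
```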
mrubens
approved these changes
Nov 24, 2025
mini2s added a commit to zgsm-ai/costrict that referenced this pull request · Nov 24, 2025
Summary
Fixes support for Gemini 3 models on OpenRouter by implementing proper handling of the `reasoning_details` array format.

Problem
Gemini 3 models were causing 400 errors because they use the new `reasoning_details` array format instead of the legacy `reasoning` string format. The reasoning details need to be preserved and sent back to the API in multi-turn conversations (especially for tool-calling workflows).

Solution
Implemented full support for the `reasoning_details` format following OpenRouter's documentation:

- Added `OPEN_ROUTER_REASONING_DETAILS_MODELS` set to track affected models
- Accumulate and store the full `reasoning_details` array during streaming (not just text extraction)
- Store `reasoning_details` on assistant messages via the `ApiMessage` type
- Set `preserveReasoning: true` for models in the set
- Preserve `reasoning_details` when converting messages and sending back to the API

Changes
- `packages/types/src/providers/openrouter.ts`: Added model constant
- `src/api/providers/openrouter.ts`: Added accumulator and getter method
- `src/api/providers/fetchers/openrouter.ts`: Set `preserveReasoning` flag
- `src/api/transform/openai-format.ts`: Preserve `reasoning_details` in conversion
- `src/core/task-persistence/apiMessages.ts`: Added type support
- `src/core/task/Task.ts`: Store and send back `reasoning_details`

Testing
✅ All OpenRouter provider tests pass (12/12)
✅ All reasoning preservation tests pass (6/6)
✅ All Task core tests pass (34 passed, 4 skipped)
✅ All transform tests pass (4/4)
✅ All provider tests pass (102/102)
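As a rough end-to-end illustration of the solution described above (class, method, and field names are assumptions, not the repository's exact code): the handler accumulates details per request and exposes a getter, and message conversion replays the stored array back to the API.

```typescript
// Hypothetical message type carrying a stored reasoning_details array.
interface ApiMessageSketch {
  role: "user" | "assistant"
  content: string
  reasoning_details?: object[]
}

// Minimal handler sketch: reset per request, accumulate, expose a getter.
class OpenRouterHandlerSketch {
  private reasoningDetails: object[] = []

  startRequest(): void {
    this.reasoningDetails = [] // never leak details across requests
  }

  recordDetail(detail: object): void {
    this.reasoningDetails.push(detail)
  }

  getReasoningDetails(): object[] {
    return this.reasoningDetails
  }
}

// When converting back to the wire format, send reasoning_details verbatim
// so multi-turn tool-calling requests validate against the API.
function toWireMessage(msg: ApiMessageSketch): Record<string, unknown> {
  const out: Record<string, unknown> = { role: msg.role, content: msg.content }
  if (msg.role === "assistant" && msg.reasoning_details?.length) {
    out.reasoning_details = msg.reasoning_details
  }
  return out
}
```

The per-request reset mirrors the accumulator behavior the PR summary describes; without it, details from one turn could be attached to the wrong message.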
Related Issues
Notes
- `preserveReasoning` flag mechanism (no needless conditionals)

Important
Adds support for `reasoning_details` format in Gemini 3 models, ensuring compatibility with OpenRouter and preserving details in multi-turn conversations.

- Adds support for `reasoning_details` format in Gemini 3 models, preserving details in multi-turn conversations.
- Adds `OPEN_ROUTER_REASONING_DETAILS_MODELS` set to track models using `reasoning_details`.
- Accumulates and stores `reasoning_details` during streaming in `openrouter.ts`.
- Preserves `reasoning_details` in `openai-format.ts` and `Task.ts`.
- `openrouter.ts`: Adds accumulator for `reasoning_details`, resets accumulator per request, and processes reasoning details for Gemini models.
- `openai-format.ts`: Preserves `reasoning_details` when converting messages.
- `Task.ts`: Stores and sends back `reasoning_details` in `addToApiConversationHistory()`.

This description was created by
for 6b741a2.