Conversation

@daniel-lxs daniel-lxs commented Nov 22, 2025

Summary

Fixes Gemini 3 model support on OpenRouter by properly handling the reasoning_details array format.

Problem

Gemini 3 models were failing with 400 errors because they use the new reasoning_details array format instead of the legacy reasoning string format. The reasoning details must be preserved and sent back to the API in multi-turn conversations, especially in tool-calling workflows.

Solution

Implemented full support for the reasoning_details format following OpenRouter's documentation:

  1. Created OPEN_ROUTER_REASONING_DETAILS_MODELS set to track affected models
  2. Accumulate the full reasoning_details array during streaming, not just the extracted text (see the sketch after this list)
  3. Store reasoning_details on assistant messages via ApiMessage type
  4. Set preserveReasoning: true for models in the set
  5. Preserve reasoning_details when converting messages and sending back to API
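
As a concrete illustration of steps 2-5, here is a minimal TypeScript sketch of the streaming accumulator. The ReasoningDetail shape and the names other than getReasoningDetails() are assumptions for illustration, not the actual code in src/api/providers/openrouter.ts.

```typescript
// Illustrative only: the detail shape is assumed from OpenRouter's reasoning docs.
interface ReasoningDetail {
	type: string // e.g. "reasoning.text" or "reasoning.encrypted"
	text?: string
	data?: string
	index?: number
}

class OpenRouterHandlerSketch {
	private reasoningDetails: ReasoningDetail[] = []

	// Reset once per request so details from a previous turn are not reused.
	resetReasoningDetails(): void {
		this.reasoningDetails = []
	}

	// Called for every streamed delta that carries reasoning_details.
	accumulateReasoningDetails(delta: { reasoning_details?: ReasoningDetail[] }): void {
		if (delta.reasoning_details?.length) {
			this.reasoningDetails.push(...delta.reasoning_details)
		}
	}

	// Exposed so the task layer can persist the full array on the assistant message
	// and echo it back to the API on the next turn.
	getReasoningDetails(): ReasoningDetail[] | undefined {
		return this.reasoningDetails.length > 0 ? this.reasoningDetails : undefined
	}
}
```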

Changes

  • packages/types/src/providers/openrouter.ts: Added the OPEN_ROUTER_REASONING_DETAILS_MODELS constant
  • src/api/providers/openrouter.ts: Added accumulator and getter method
  • src/api/providers/fetchers/openrouter.ts: Set preserveReasoning flag
  • src/api/transform/openai-format.ts: Preserve reasoning_details in conversion
  • src/core/task-persistence/apiMessages.ts: Added type support
  • src/core/task/Task.ts: Store and send back reasoning_details
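
For illustration, the apiMessages.ts change amounts to an optional field on the persisted message type. The sketch below is simplified; the real ApiMessage type has more fields and stricter content typing.

```typescript
// Sketch only: the real ApiMessage is richer than this.
type ReasoningDetail = Record<string, unknown>

type ApiMessageSketch = {
	role: "user" | "assistant"
	content: unknown
	ts?: number
	// Captured during streaming and persisted so it can be sent back to
	// OpenRouter on later turns of the conversation.
	reasoning_details?: ReasoningDetail[]
}
```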

Testing

✅ All OpenRouter provider tests pass (12/12)
✅ All reasoning preservation tests pass (6/6)
✅ All Task core tests pass (34 passed, 4 skipped)
✅ All transform tests pass (4/4)
✅ All provider tests pass (102/102)

Related Issues

Notes

  • Uses the existing preserveReasoning flag mechanism rather than adding model-specific conditionals
  • Only affects models explicitly listed in the set
  • Maintains backward compatibility with legacy format
  • Easy to add more models as needed
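
A sketch of the flag gating described in these notes follows. Only OPEN_ROUTER_REASONING_DETAILS_MODELS and preserveReasoning come from this PR; the function name and model-info shape are placeholders.

```typescript
// The actual model IDs are intentionally omitted; this set is a placeholder.
const OPEN_ROUTER_REASONING_DETAILS_MODELS = new Set<string>([
	// Gemini 3 model slugs would be listed here in the real constant.
])

interface ModelInfoSketch {
	maxTokens?: number
	preserveReasoning?: boolean
}

// Only models explicitly listed in the set get preserveReasoning: true;
// everything else keeps the legacy reasoning-string behavior.
function applyReasoningDetailsFlag(modelId: string, info: ModelInfoSketch): ModelInfoSketch {
	return OPEN_ROUTER_REASONING_DETAILS_MODELS.has(modelId) ? { ...info, preserveReasoning: true } : info
}
```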

Important

Adds support for reasoning_details format in Gemini 3 models, ensuring compatibility with OpenRouter and preserving details in multi-turn conversations.

  • Behavior:
    • Adds support for reasoning_details format in Gemini 3 models, preserving details in multi-turn conversations.
    • Introduces OPEN_ROUTER_REASONING_DETAILS_MODELS set to track models using reasoning_details.
    • Accumulates reasoning_details during streaming in openrouter.ts.
    • Preserves reasoning_details in openai-format.ts and Task.ts.
  • Files:
    • openrouter.ts: Adds accumulator for reasoning_details, resets accumulator per request, and processes reasoning details for Gemini models.
    • openai-format.ts: Preserves reasoning_details when converting messages.
    • Task.ts: Stores and sends back reasoning_details in addToApiConversationHistory().
  • Testing:
    • All relevant tests pass, including OpenRouter provider tests, reasoning preservation tests, Task core tests, and transform tests.

This description was created by Ellipsis for 6b741a2.

- Add OPEN_ROUTER_REASONING_DETAILS_MODELS set to track models using reasoning_details array format
- Accumulate and store full reasoning_details array during streaming
- Add getReasoningDetails() method to OpenRouterHandler
- Store reasoning_details on ApiMessage type and persist to conversation history
- Set preserveReasoning: true for models in the set
- Preserve reasoning_details when converting messages to OpenAI format
- Send reasoning_details back to API on subsequent requests for tool calling workflows

Fixes upstream issue from cline/cline#7551
Follows OpenRouter docs: https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks
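
To illustrate the last bullet (sending reasoning_details back on subsequent requests), here is a hedged sketch of the conversion step. The real code in src/api/transform/openai-format.ts works with the OpenAI SDK's message param types; the shapes below are simplified assumptions.

```typescript
type ReasoningDetail = Record<string, unknown>

type StoredAssistantMessage = {
	role: "assistant"
	content: string
	reasoning_details?: ReasoningDetail[]
}

type OpenRouterAssistantParam = {
	role: "assistant"
	content: string
	// OpenRouter accepts reasoning_details on prior assistant messages so the
	// provider can validate earlier reasoning blocks (needed for Gemini 3
	// tool-calling turns).
	reasoning_details?: ReasoningDetail[]
}

function toAssistantParam(msg: StoredAssistantMessage): OpenRouterAssistantParam {
	const param: OpenRouterAssistantParam = { role: "assistant", content: msg.content }
	if (msg.reasoning_details?.length) {
		param.reasoning_details = msg.reasoning_details
	}
	return param
}
```
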
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. bug Something isn't working labels Nov 22, 2025
@roomote
Copy link
Contributor

roomote bot commented Nov 22, 2025

Rooviewer: See task on Roo Cloud

Re-review completed successfully. No new issues found.

All checks passed - The changes correctly implement universal reasoning_details handling by removing the model allowlist and processing the chunks dynamically when present in the stream.

Previous reviews

Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.

@daniel-lxs daniel-lxs moved this from Triage to PR [Needs Review] in Roo Code Roadmap Nov 22, 2025
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Nov 22, 2025
…roperly

- Accumulate reasoning_details chunks into complete objects (not fragments)
- Prevent double storage: only store reasoning_details OR reasoning block, never both
- Remove debug logging from openai-format.ts
- Inject fake reasoning.encrypted blocks for Gemini tool calls when switching models
- Fix priority-based format handling: check reasoning_details first, fall back to legacy reasoning
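
A rough sketch of the chunk accumulation and priority handling this commit describes. The chunk fields and the merge-by-index strategy are assumptions based on the bullets above, not the exact implementation.

```typescript
type ReasoningDetailChunk = {
	type: string
	index?: number
	text?: string
	data?: string
	signature?: string
}

// Merge streamed fragments that share an index into one complete detail object,
// concatenating text/data as fragments arrive. Chunks without an index each get
// their own slot (a simplification).
function mergeReasoningDetailChunks(chunks: ReasoningDetailChunk[]): ReasoningDetailChunk[] {
	const merged = new Map<number, ReasoningDetailChunk>()
	for (const chunk of chunks) {
		const key = chunk.index ?? merged.size
		const existing = merged.get(key)
		if (!existing) {
			merged.set(key, { ...chunk })
			continue
		}
		if (chunk.text) existing.text = (existing.text ?? "") + chunk.text
		if (chunk.data) existing.data = (existing.data ?? "") + chunk.data
		if (chunk.signature) existing.signature = chunk.signature
	}
	return [...merged.values()]
}

// Priority-based handling: store reasoning_details when present, otherwise fall
// back to the legacy reasoning string, never both.
function pickReasoning(details: ReasoningDetailChunk[], legacyReasoning?: string) {
	if (details.length > 0) return { reasoning_details: details }
	if (legacyReasoning) return { reasoning: legacyReasoning }
	return {}
}
```
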
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels Nov 23, 2025
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Nov 24, 2025
@mrubens mrubens merged commit b531075 into main Nov 24, 2025
27 of 30 checks passed
@github-project-automation github-project-automation bot moved this from PR [Needs Review] to Done in Roo Code Roadmap Nov 24, 2025
@mrubens mrubens deleted the fix/gemini-3-reasoning-details branch November 24, 2025 05:47
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Nov 24, 2025
mini2s added a commit to zgsm-ai/costrict that referenced this pull request Nov 24, 2025
* Update cerebras.ts (RooCodeInc#9024)

* fix: update Opus 4.1 max tokens from 8K to 32K (RooCodeInc#9046)

Aligns claude-opus-4-1-20250805 max token limit with claude-opus-4-20250514,
both models now supporting 32K output tokens (overridable to 8K when
enableReasoningEffort is false).

Fixes RooCodeInc#9045

Co-authored-by: Roo Code <[email protected]>

* Merge remote-tracking branch 'upstream/main' into roo-to-main

* feat(api): add mode parameter to ZgsmAiHandler and add tooltips to ChatRow buttons

* chore: simplify Google Analytics to standard implementation (RooCodeInc#9044)

Co-authored-by: Roo Code <[email protected]>

* feat: add conditional test running to pre-push hook (RooCodeInc#9055)

* Fix dynamic provider model validation to prevent cross-contamination (RooCodeInc#9054)

* Fix Bedrock user agent to report full SDK details (RooCodeInc#9043)

* feat: add Qwen3 embedding models (0.6B and 4B) to OpenRouter support (RooCodeInc#9060)

Co-authored-by: Roo Code <[email protected]>

* web: Agent Landing Page A/B testing toolkit (RooCodeInc#9018)

Co-authored-by: Roo Code <[email protected]>

* feat: Global Inference for Bedrock models (RooCodeInc#8750) (RooCodeInc#8940)

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <[email protected]>

* Release v3.30.2 (RooCodeInc#9065)

chore: add changeset for v3.30.2

* Changeset version bump (RooCodeInc#9066)

* changeset version bump

* Revise CHANGELOG for version 3.30.2

Updated changelog for version 3.30.2 with new features and fixes.

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* Merge branch 'main' of github.com:zgsm-ai/costrict into roo-to-main

* feat(error-handling): add HTTP 413 payload too large error handling

* fix(webview): correct default value for useZgsmCustomConfig and fix settings message order

* feat: add kimi-k2-thinking model to moonshot provider (RooCodeInc#9079)

* ux: Home screen visuals (RooCodeInc#9057)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <[email protected]>

* feat: add MiniMax-M2-Stable model and enable prompt caching (RooCodeInc#9072)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Daniel <[email protected]>

* fix(task): auto-retry on empty assistant response (RooCodeInc#9076) (RooCodeInc#9083)

* feat(chat): Improve diff appearance in main chat view (RooCodeInc#8932)

Co-authored-by: daniel-lxs <[email protected]>

* Clarify: setting 0 disables Error & Repetition Limit (RooCodeInc#8965)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: daniel-lxs <[email protected]>

* fix: use system role for OpenAI Compatible provider when streaming is disabled (RooCodeInc#8216)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: daniel-lxs <[email protected]>

* fix: prevent shell injection in pre-push hook environment loading (RooCodeInc#9059)

* feat: auto-switch to imported mode with architect fallback (RooCodeInc#9003)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Seth Miller <[email protected]>
Co-authored-by: heyseth <[email protected]>
Co-authored-by: Roo Code <[email protected]>

* fix: prevent notification sound on attempt_completion with queued messages (RooCodeInc#8540)

Co-authored-by: Roo Code <[email protected]>

* chore(deps): update dependency @changesets/cli to v2.29.7 (RooCodeInc#8490)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* chore: add changeset for v3.30.3 (RooCodeInc#9092)

* Changeset version bump (RooCodeInc#9094)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* fix: respect custom OpenRouter URL for all API operations (RooCodeInc#8951)

Co-authored-by: Roo Code <[email protected]>

* feat: Add comprehensive error logging to Roo Cloud provider (RooCodeInc#9098)

feat: add comprehensive error logging to Roo Cloud provider

- Add detailed error logging in handleOpenAIError() to capture error details before transformation
- Enhanced getRooModels() to log HTTP response details on failed requests
- Added error context logging to RooHandler streaming and model loading
- All existing tests passing (48 total)

* ux: Less Caffeine (RooCodeInc#9104)

Prevents stress on Roo's hip bones

* fix: prevent crash when streaming chunks have null choices array (RooCodeInc#9105)

* ux: Improvements to to-do lists and task headers (RooCodeInc#9096)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* fix: prevent context condensing on settings save when provider/model unchanged (RooCodeInc#9108)

Co-authored-by: Matt Rubens <[email protected]>

* Release v3.31.0 (