
fix: transform tool blocks to text before condensing (EXT-624) #10975

Merged
mrubens merged 1 commit into main from
feature/ext-624-fix-litellm-bedrock-error-during-condensing-tool-blocks
Feb 3, 2026
Conversation

@daniel-lxs
Member

Problem

When using LiteLLM with Bedrock as the provider, conversation condensing fails with the following error:

400 litellm.UnsupportedParamsError: Bedrock doesn't support tool calling without tools= param specified

This occurs because the conversation history contains tool_use and tool_result blocks, but no tools parameter is passed to the API call during condensing.

Solution

Transform tool blocks to text representations before sending for summarization. This removes the structural dependency on the tools parameter while preserving semantic meaning for the LLM to summarize.

Changes

  • Added toolUseToText() - converts tool_use blocks to readable text format
  • Added toolResultToText() - converts tool_result blocks to readable text format
  • Added convertToolBlocksToText() - processes content arrays
  • Added transformMessagesForCondensing() - applies transformation to message arrays
  • Applied transformation in summarizeConversation() before API call
  • Added 18 unit tests for the new functions

Example Transformation

Before (tool_use block):

{
  "type": "tool_use",
  "id": "toolu_123",
  "name": "read_file",
  "input": { "path": "src/index.ts" }
}

After (text block):

{
  "type": "text",
  "text": "[Tool Use: read_file]\npath: src/index.ts"
}
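A minimal sketch of the conversion helpers listed under Changes (the names come from the change list above; the exact block shapes and implementation in the PR may differ — this follows the Anthropic Messages API content types):

```typescript
// Hypothetical sketch of the tool-block-to-text conversion described above.
// Block shapes follow the Anthropic Messages API content types; the real
// implementation in the PR may format things differently.

type ToolUseBlock = { type: "tool_use"; id: string; name: string; input: Record<string, unknown> }
type ToolResultBlock = { type: "tool_result"; tool_use_id: string; content?: string }
type TextBlock = { type: "text"; text: string }
type ContentBlock = ToolUseBlock | ToolResultBlock | TextBlock

function toolUseToText(block: ToolUseBlock): TextBlock {
	// Render each input field as "key: value" on its own line.
	const args = Object.entries(block.input)
		.map(([key, value]) => `${key}: ${typeof value === "string" ? value : JSON.stringify(value)}`)
		.join("\n")
	return { type: "text", text: `[Tool Use: ${block.name}]\n${args}` }
}

function toolResultToText(block: ToolResultBlock): TextBlock {
	return { type: "text", text: `[Tool Result]\n${block.content ?? ""}` }
}

function convertToolBlocksToText(blocks: ContentBlock[]): ContentBlock[] {
	return blocks.map((block) => {
		if (block.type === "tool_use") return toolUseToText(block)
		if (block.type === "tool_result") return toolResultToText(block)
		return block // plain text blocks pass through unchanged
	})
}
```

Running `toolUseToText` on the "Before" block above yields exactly the "After" text block shown.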

Linear Issue

https://linear.app/roocode/issue/EXT-624

…atibility

- Add toolUseToText() to convert tool_use blocks to text format
- Add toolResultToText() to convert tool_result blocks to text format
- Add convertToolBlocksToText() to transform all tool blocks in message content
- Add transformMessagesForCondensing() to apply transformation to messages
- Apply transformation in summarizeConversation() before API call
- Add 18 unit tests for full coverage

Fixes LiteLLM/Bedrock error: 'Bedrock doesn't support tool calling without tools= param specified'

When condensing conversations containing tool_use/tool_result blocks, Bedrock requires the tools parameter. By transforming these blocks to text representations, we remove this dependency while preserving semantic meaning for summarization.
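Applied at the message level, the transformation is just a map over each message's content array. This is an illustrative, self-contained sketch (the helper is inlined here as a stand-in for the one the PR adds, and the types are simplified):

```typescript
// Illustrative sketch of transformMessagesForCondensing: strip tool blocks
// from every message before the condensing API call. Types are simplified
// stand-ins for the PR's actual definitions.

type Block =
	| { type: "text"; text: string }
	| { type: "tool_use"; id: string; name: string; input: Record<string, unknown> }
	| { type: "tool_result"; tool_use_id: string; content?: string }

interface Message {
	role: "user" | "assistant"
	content: string | Block[]
}

function convertToolBlocksToText(blocks: Block[]): Block[] {
	return blocks.map((b) => {
		if (b.type === "tool_use") {
			const args = Object.entries(b.input)
				.map(([k, v]) => `${k}: ${String(v)}`)
				.join("\n")
			return { type: "text", text: `[Tool Use: ${b.name}]\n${args}` }
		}
		if (b.type === "tool_result") return { type: "text", text: `[Tool Result]\n${b.content ?? ""}` }
		return b
	})
}

function transformMessagesForCondensing(messages: Message[]): Message[] {
	// Plain string content has no tool blocks, so it is left untouched.
	return messages.map((m) => (Array.isArray(m.content) ? { ...m, content: convertToolBlocksToText(m.content) } : m))
}
```

Because the output contains only text blocks, the summarization request no longer carries tool_use/tool_result content, so Bedrock never demands a tools parameter.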
@daniel-lxs daniel-lxs requested review from cte, jr and mrubens as code owners January 26, 2026 16:24
@dosubot dosubot bot added the size:L (This PR changes 100-499 lines, ignoring generated files) and bug (Something isn't working) labels Jan 26, 2026
@roomote-v0
Contributor

roomote-v0 bot commented Jan 26, 2026


Review complete. No issues identified.

The implementation correctly transforms tool_use and tool_result blocks to text representations before sending for summarization, resolving the Bedrock/LiteLLM compatibility issue. The code is well-structured with comprehensive test coverage (18 new tests).


@dosubot dosubot bot added the lgtm (This PR has been approved by a maintainer) label Feb 3, 2026
@mrubens mrubens merged commit b4b8cef into main Feb 3, 2026
19 checks passed
@mrubens mrubens deleted the feature/ext-624-fix-litellm-bedrock-error-during-condensing-tool-blocks branch February 3, 2026 03:31
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Feb 3, 2026
mini2s referenced this pull request in zgsm-ai/costrict Feb 3, 2026
* fix: handle missing tool identity in OpenAI Native streams (#10719)

* Feat/issue 5376 aggregate subtask costs (#10757)

* feat(chat): add streaming state to task header interaction

* feat: add settings tab titles to search index (#10761)

Co-authored-by: Roo Code <[email protected]>

* fix: filter Ollama models without native tool support (#10735)

* fix: filter out empty text blocks from user messages for Gemini compatibility (#10728)

* fix: flatten top-level anyOf/oneOf/allOf in MCP tool schemas (#10726)

* fix: prevent duplicate tool_use IDs causing API 400 errors (#10760)

* fix: truncate call_id to 64 chars for OpenAI Responses API (#10763)

* fix: Gemini thought signature validation errors (#10694)

Co-authored-by: Roo Code <[email protected]>

* Release v3.41.1 (#10767)

* Changeset version bump (#10768)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* feat: add button to open markdown in VSCode preview (#10773)

Co-authored-by: Roo Code <[email protected]>

* fix(openai-codex): reset invalid model selection (#10777)

* fix: add openai-codex to providers that don't require API key (#10786)

Co-authored-by: Roo Code <[email protected]>

* fix(litellm): detect Gemini models with space-separated names for thought signature injection (#10787)

* Release v3.41.2 (#10788)

* Changeset version bump (#10790)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* Roo Code Router fixes for the cli (#10789)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>

* Revert "feat(e2e): Enable E2E tests - 39 passing tests" (#10794)

Co-authored-by: Hannes Rudolph <[email protected]>

* Claude-like cli flags, auth fixes (#10797)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <[email protected]>

* Release cli v0.0.47 (#10798)

* Use a redirect instead of a fetch for cli auth (#10799)

* chore(cli): prepare release v0.0.48 (#10800)

* Fix thinking block word-breaking to prevent horizontal scroll (#10806)

Co-authored-by: Roo Code <[email protected]>

* chore: add changeset for v3.41.3 (#10822)

* Removal of glm4 6 (#10815)

Co-authored-by: Matt Rubens <[email protected]>

* Changeset version bump (#10823)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* feat: warn users when too many MCP tools are enabled (#10772)

* feat: warn users when too many MCP tools are enabled

- Add WarningRow component for displaying generic warnings with icon, title, message, and optional docs link
- Add TooManyToolsWarning component that shows when users have more than 40 MCP tools enabled
- Add MAX_MCP_TOOLS_THRESHOLD constant (40)
- Add i18n translations for the warning message
- Integrate warning into ChatView to display after task header
- Add comprehensive tests for both components

Closes ROO-542

* Moves constant to the right place

* Move it to the backend

* i18n

* Add actionlink that takes you to MCP settings in this case

* Add to MCP settings too

* Bump max tools up to 60 since github itself has 50+

* DRY

* Fix test

---------

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Bruno Bergher <[email protected]>
Co-authored-by: Matt Rubens <[email protected]>

* Support different cli output formats: text, json, streaming json (#10812)

Co-authored-by: Roo Code <[email protected]>

* chore(cli): prepare release v0.0.49 (#10825)

* fix(cli): set integrationTest to true in ExtensionHost constructor (#10826)

* fix(cli): fix quiet mode tests by capturing console before host creation (#10827)

* refactor: unify user content tags to <user_message> (#10723)

Co-authored-by: Roo Code <[email protected]>

* feat(openai-codex): add ChatGPT subscription usage limits dashboard (#10813)

* perf(webview): avoid resending taskHistory in state updates (#10842)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>

* Fix broken link on pricing page (#10847)

* fix: update broken pricing link to /models page

* Update apps/web-roo-code/src/app/pricing/page.tsx

---------

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Bruno Bergher <[email protected]>

* Git worktree management (#10458)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Hannes Rudolph <[email protected]>
Co-authored-by: daniel-lxs <[email protected]>

* feat: enable prompt caching for Cerebras zai-glm-4.7 model (#10670)

Co-authored-by: Roo Code <[email protected]>

* feat: add Kimi K2 thinking model to VertexAI provider (#9269)

Co-authored-by: Roo Code <[email protected]>

* feat: standardize model selectors across all providers (#10294)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <[email protected]>

* chore: remove XML tool calling support (#10841)

Co-authored-by: daniel-lxs <[email protected]>
Co-authored-by: Matt Rubens <[email protected]>

* Fix broken link on pricing page (#10847)

* fix: update broken pricing link to /models page

* Update apps/web-roo-code/src/app/pricing/page.tsx

---------

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Bruno Bergher <[email protected]>

* Pr 10853 (#10854)

Co-authored-by: Roo Code <[email protected]>

* Git worktree management (#10458)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Hannes Rudolph <[email protected]>
Co-authored-by: daniel-lxs <[email protected]>

* feat: enable prompt caching for Cerebras zai-glm-4.7 model (#10670)

Co-authored-by: Roo Code <[email protected]>
(cherry picked from commit c7ce8aa)

* feat: add Kimi K2 thinking model to VertexAI provider (#9269)

Co-authored-by: Roo Code <[email protected]>
(cherry picked from commit a060915)

* feat: standardize model selectors across all providers (#10294)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <[email protected]>
(cherry picked from commit e356d05)

* fix: resolve race condition in context condensing prompt input (#10876)

* Copy: update /slack page messaging (#10869)

copy: update /slack page messaging

- Update trial CTA to 'Start a free 14 day Team trial'
- Replace 'humans' with 'your team' in value props subtitle
- Shorten value prop titles for consistent one-line display
- Improve Thread-aware and Open to all descriptions

* fix: Handle mode selector empty state on workspace switch (#9674)

* fix: handle mode selector empty state on workspace switch

When switching between VS Code workspaces, if the current mode from
workspace A is not available in workspace B, the mode selector would
show an empty string. This fix adds fallback logic to automatically
switch to the default "code" mode when the current mode is not found
in the available modes list.

Changes:
- Import defaultModeSlug from @roo/modes
- Add fallback logic in selectedMode useMemo to detect when current
  mode is not available and automatically switch to default mode
- Add tests to verify the fallback behavior works correctly
- Export defaultModeSlug in test mock for consistent behavior

* fix: prevent infinite loop by moving fallback notification to useEffect

* fix: prevent infinite loop by using ref to track notified invalid mode

* refactor: clean up comments in ModeSelector fallback logic

---------

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: daniel-lxs <[email protected]>

* Roo to main remove xml (#936)

* Fix broken link on pricing page (#10847)

* fix: update broken pricing link to /models page

* Update apps/web-roo-code/src/app/pricing/page.tsx

---------

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Bruno Bergher <[email protected]>

* Git worktree management (#10458)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Hannes Rudolph <[email protected]>
Co-authored-by: daniel-lxs <[email protected]>

* feat: enable prompt caching for Cerebras zai-glm-4.7 model (#10670)

Co-authored-by: Roo Code <[email protected]>

* feat: add Kimi K2 thinking model to VertexAI provider (#9269)

Co-authored-by: Roo Code <[email protected]>

* feat: standardize model selectors across all providers (#10294)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <[email protected]>

* chore: remove XML tool calling support (#10841)

Co-authored-by: daniel-lxs <[email protected]>
Co-authored-by: Matt Rubens <[email protected]>

* Pr 10853 (#10854)

Co-authored-by: Roo Code <[email protected]>

* feat(commit): enhance git diff handling for new repositories

* feat(task): support fake_tool_call for Qwen model with XML tool call format

* feat(prompts): add snapshots for custom instructions and system prompt variations

---------

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Bruno Bergher <[email protected]>
Co-authored-by: Chris Estreich <[email protected]>
Co-authored-by: Hannes Rudolph <[email protected]>
Co-authored-by: daniel-lxs <[email protected]>
Co-authored-by: Matt Rubens <[email protected]>
Co-authored-by: MP <[email protected]>

* feat: remove Claude Code provider (#10883)

* refactor: migrate context condensing prompt to customSupportPrompts and cleanup legacy code (#10881)

* refactor: unify export path logic and default to Downloads (#10882)

* Fix marketing site preview logic (#10886)

* feat(web): redesign Slack page Featured Workflow section with YouTube… (#10880)

Co-authored-by: Matt Rubens <[email protected]>
Co-authored-by: Roo Code <[email protected]>

* feat: add size-based progress tracking for worktree file copying (#10871)

* feat: add HubSpot tracking with consent-based loading (#10885)

Co-authored-by: Roo Code <[email protected]>

* fix(openai): prevent double emission of text/reasoning in native and codex handlers (#10888)

* Fix padding on Roo Code Cloud upsell (#10889)

Co-authored-by: Roo Code <[email protected]>

* Open the worktreeinclude file after creating it (#10891)

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>

* fix: prevent task abortion when resuming via IPC/bridge (#10892)

* fix: rename bot to 'Roomote' and fix spacing in Slack demo (#10898)

* Fix: Enforce file restrictions for all editing tools (#10896)

Co-authored-by: Roo Code <[email protected]>

* chore: clean up XML legacy code and native-only comments (#10900)

* feat: hide worktree feature from menus (#10899)

* fix(condense): remove custom condensing model option (#10901)

* fix(condense): remove custom condensing model option

Remove the ability to specify a different model/API configuration for
condensing conversations. Modern conversations include provider-specific
data (tool calls, reasoning blocks, thought signatures) that only the
originating model can properly understand and summarize.

Changes:
- Remove condensingApiHandler parameter from summarizeConversation()
- Remove condensingApiConfigId from context management and Task
- Remove API config dropdown for CONDENSE in settings UI
- Update telemetry to remove usedCustomApiHandler parameter
- Update related tests

Users can still customize the CONDENSE prompt text; only model selection
is removed.

* fix: remove condensingApiConfigId from types and test fixtures

---------

Co-authored-by: Roo Code <[email protected]>

* test(prompts): update snapshots to fix indentation

* Fix EXT-553: Remove percentage-based progress tracking for worktree file copying (#10905)

* Fix EXT-553: Remove percentage-based progress tracking for worktree file copying

- Removed totalBytes from CopyProgress interface
- Removed Math.min() clamping that caused stuck-at-100% issue
- Changed UI from progress bar to spinner with activity indicator
- Shows 'item — X MB copied' instead of percentage
- Updated all 18 locale files
- Uses native cp with polling (no new dependencies)

* fix: translate copyingProgress text in all 17 non-English locale files

---------

Co-authored-by: Roo Code <[email protected]>

* chore(prompts): clarify linked SKILL.md file handling (#10907)

* Release v3.42.0 (#10910)

* Changeset version bump (#10911)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* feat: Move condense prompt editor to Context Management tab (#10909)

* feat: Update Z.AI models with new variants and pricing (#10860)

Co-authored-by: erdemgoksel <erdemgoksel@MAU-BILISIM42>

* fix: correct Gemini 3 pricing for Flash and Pro models (#10487)

Co-authored-by: Roo Code <[email protected]>

* feat: add pnpm install:vsix:nightly command (#10912)

* Intelligent Context Condensation v2 (#10873)

* feat(gemini): add tool call support for Gemini CLI

* feat(condense): improve condensation with environment details, accurate token counts, and lazy evaluation (#10920)

* docs: fix CLI README to use correct command syntax (#10923)

Co-authored-by: Roo Code <[email protected]>

* chore: remove diffEnabled and fuzzyMatchThreshold settings (#10298)

* Remove Merge button from worktrees (#10924)

Co-authored-by: Roo Code <[email protected]>

* chore: remove POWER_STEERING experimental feature (#10926)

- Remove powerSteering from experimentIds array and schema in packages/types
- Remove POWER_STEERING from EXPERIMENT_IDS and experimentConfigsMap
- Remove power steering conditional block from getEnvironmentDetails
- Remove POWER_STEERING entry from all 18 locale settings.json files
- Update related test files to remove power steering references

* chore: remove MULTI_FILE_APPLY_DIFF experiment (#10925)

* chore: remove MULTI_FILE_APPLY_DIFF experiment

Remove the 'Enable concurrent file edits' experimental feature that
allowed editing multiple files in a single apply_diff call.

- Remove multiFileApplyDiff from experiment types and config
- Delete MultiFileSearchReplaceDiffStrategy class and tests
- Delete MultiApplyDiffTool wrapper and tests
- Remove experiment-specific code paths in Task.ts, generateSystemPrompt.ts, and presentAssistantMessage.ts
- Remove special handling in ExperimentalSettings.tsx
- Remove translations from all 18 locale files

The existing MultiSearchReplaceDiffStrategy continues to handle
multiple SEARCH/REPLACE blocks within a single file.

* fix: remove unused EXPERIMENT_IDS/experiments import from Task.ts

Addresses review feedback: removes the unused imports from
src/core/task/Task.ts that were left over after removing the
MULTI_FILE_APPLY_DIFF experiment routing code.

* fix: convert orphaned tool_results to text blocks after condensing (#10927)

* fix: convert orphaned tool_results to text blocks after condensing

When condensing occurs after assistant sends tool_uses but before user responds,
the tool_use blocks get condensed away. User messages containing tool_results that
reference condensed tool_use_ids become orphaned and get filtered out by
getEffectiveApiHistory, causing user feedback to be lost.

This fix enhances the existing check in addToApiConversationHistory to detect when
the previous effective message is not an assistant and converts any tool_result
blocks to text blocks, preventing them from being filtered as orphans.

The conversion happens at the latest possible moment (message insertion) because:
- Tool results are created before we know if condensing will occur
- We need actual effective history state to make the decision
- This is the last checkpoint before orphan filtering happens

* Only include environment details in summary for automatic condensing

For automatic condensing (during attemptApiRequest), environment details
are included in the summary because the API request is already in progress
and the next user message won't have fresh environment details injected.

For manual condensing (via condenseContext button), environment details
are NOT included because fresh details will be injected on the very next
turn via getEnvironmentDetails() in recursivelyMakeClineRequests().

This uses the existing isAutomaticTrigger flag to differentiate behavior.

---------

Co-authored-by: Hannes Rudolph <[email protected]>

* refactor: remove legacy XML tool calling code (getToolDescription) (#10929)

- Remove getToolDescription() method from MultiSearchReplaceDiffStrategy
- Remove getToolDescription() from DiffStrategy interface
- Remove unused ToolDescription type from shared/tools.ts
- Remove unused eslint-disable directive
- Update test mocks to remove getToolDescription references
- Remove getToolDescription tests from multi-search-replace.spec.ts

Native tools are now defined in src/core/prompts/tools/native-tools/ using
the OpenAI function format. The removed code was dead since XML-style tool
calling was replaced with native tool calling.

* Fix duplicate model display for OpenAI Codex provider (#10930)

Co-authored-by: Roo Code <[email protected]>

* Skip thoughtSignature blocks during markdown export #10199 (#10932)

* fix: use json-stream-stringify for pretty-printing MCP config files (#9864)

Co-authored-by: Roo Code <[email protected]>

* fix: auto-migrate v1 condensing prompt and handle invalid providers on import (#10931)

* chore: add changeset for v3.43.0 (#10933)

* Changeset version bump (#10934)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* Replace hyphen encoding with fuzzy matching for MCP tool names (#10775)

* fix: truncate AWS Bedrock toolUseId to 64 characters (#10902)

* feat: remove MCP SERVERS section from system prompt (#10895)

Co-authored-by: Roo Code <[email protected]>

* feat(tools): enhance Gemini CLI with new tool descriptions and formatting

* feat(task): improve smart mistake detector with error source tracking and refined auto-switching

* feat: update Fireworks provider with new models (#10679)

Fixes #10674

Added new models:
- MiniMax M2.1 (minimax-m2p1)
- DeepSeek V3.2 (deepseek-v3p2)
- GLM-4.7 (glm-4p7)
- Llama 3.3 70B Instruct (llama-v3p3-70b-instruct)
- Llama 4 Maverick Instruct (llama4-maverick-instruct-basic)
- Llama 4 Scout Instruct (llama4-scout-instruct-basic)

All models include correct pricing, context windows, and capabilities.

* ux: Improve subtask visibility and navigation in history and chat views (#10864)

* Taskheader

* Subtask messages

* View subtask

* subtasks in history items

* i18n

* Table

* Lighter visuals

* bug

* fix: Align tests with implementation behavior

* refactor: extract CircularProgress component from TaskHeader

- Created reusable CircularProgress component for displaying percentage as a ring
- Moved inline SVG calculation from TaskHeader.tsx to dedicated component
- Added comprehensive tests for CircularProgress component (14 tests)
- Component supports customizable size, strokeWidth, and className
- Includes proper accessibility attributes (progressbar role, aria-valuenow)

* chore: update StandardTooltip default delay to 600ms

As mentioned in the PR description, increased the tooltip delay to 600ms
for less intrusive tooltips. The delay is still configurable via the
delay prop for components that need a different value.

---------

Co-authored-by: Roo Code <[email protected]>

* fix(types): remove unsupported Fireworks model tool fields (#10937)

fix(types): remove unsupported tool capability fields from Fireworks model metadata

Co-authored-by: Roo Code <[email protected]>

* ux: improve worktree selector and creation UX (#10940)

* Delete modal

* Restructured

* Much more prominent

* UI

* i18n

* Fixes i18n

* Remove mergeresultmodel

* i18n

* tests

* knip

* code review

* feat: add wildcard support for MCP alwaysAllow configuration (#10948)

Co-authored-by: Roo Code <[email protected]>

* fix: restore opaque background to settings section headers (#10951)

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Matt Rubens <[email protected]>

* Update and improve zh-TW Traditional Chinese locale and docs (#10953)

* chore: remove POWER_STEERING experiment remnants (#10980)

* fix: record truncation event when condensation fails but truncation succeeds (#10984)

* feat: new_task tool creates checkpoint the same way write_to_file does (#10982)

* fix: VS Code LM token counting returns 0 outside requests, breaking context condensing (EXT-620) (#10983)

- Modified VsCodeLmHandler.internalCountTokens() to create temporary cancellation tokens when needed
- Token counting now works both during and outside of active requests
- Added 4 new tests to verify the fix and prevent regression
- Resolves issue where VS Code LM API users experienced context overflow errors

* fix: prevent nested condensing from including previously-condensed content (#10985)

* fix: use --force by default when deleting worktrees (#10986)

Co-authored-by: Roo Code <[email protected]>

* chore: add changeset for v3.44.0 (#10987)

* Changeset version bump (#10989)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* Fix LiteLLM tool ID validation errors for Bedrock proxy (#10990)

* Enable parallel tool calling with new_task isolation safeguards (#10979)

Co-authored-by: Matt Rubens <[email protected]>
Co-authored-by: Hannes Rudolph <[email protected]>

* Add quality checks to marketing site deployment workflows (#10959)

Co-authored-by: cte <[email protected]>

* Add temperature=0.9 and top_p=0.95 to zai-glm-4.7 model (#10945)

Co-authored-by: Matt Rubens <[email protected]>

* Revert "Enable parallel tool calling with new_task isolation safeguards" (#11004)

* Release v3.44.1 (#11003)

* Changeset version bump (#11005)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* Revert "Revert "Enable parallel tool calling with new_task isolation safeguards"" (#11006)

* Fix local model validation error for Ollama models (#10893)

fix: prevent false validation error for local Ollama models

The validation logic was checking against an empty router models object
that was initialized but never populated for Ollama. This caused false
validation errors even when models existed locally.

Now only validates against router models if they actually contain data,
preventing the false error when using local Ollama models.

Fixes ROO-581

Co-authored-by: Roo Code <[email protected]>

* feat: enhance tool call parsing and enable smart mistake detection

* fix: use relative paths in isPathInIgnoredDirectory to fix worktree indexing (#11009)

* fix: remove duplicate tool_call emission from Responses API providers (#11008)

* Release v3.44.2 (#11025)

* Changeset version bump (#11027)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <[email protected]>

* feat(condense v2.1): add smart code folding (#10942)

* feat(condense): add smart code folding with tree-sitter signatures

At context condensation time, use tree-sitter to generate folded code
signatures (function definitions, class declarations) for files read
during the conversation. Each file is included as its own <system-reminder>
block in the condensed summary, preserving structural awareness without
consuming excessive tokens.

- Add getFilesReadByRoo() method to FileContextTracker
- Create generateFoldedFileContext() using tree-sitter parsing
- Update summarizeConversation() to accept array of file sections
- Each file gets its own content block in the summary message
- Add comprehensive test coverage (12 tests)

* fix: skip tree-sitter error strings in folded file context

- Add isTreeSitterErrorString helper to detect error messages
- Skip files that return error strings instead of em