[AI-Assisted] Fix: Repair tool_use/tool_result pairing for Claude on any provider #2806
Open
Arthur742Ramos wants to merge 1 commit into openclaw:main from
Conversation
When using Claude models via non-Anthropic providers (github-copilot, openrouter, amazon-bedrock, etc.), the `repairToolUseResultPairing` sanitizer was not running, causing 400 errors when sessions had orphaned `tool_use` blocks without a matching `tool_result`. Added an `isClaudeModel()` helper that detects Claude by `modelId`, and wired it into:

- `repairToolUseResultPairing` (fixes the 400 error)
- `validateAnthropicTurns` (Claude needs Anthropic-style turn validation)
- `allowSyntheticToolResults` (allows inserting synthetic results for missing tool results)

Added a comprehensive test suite covering:

- Direct Anthropic provider
- Claude via github-copilot, openrouter, opencode, amazon-bedrock
- Non-Claude models (GPT, Llama, Gemini)
- Edge cases (null/empty modelId)
- Case-insensitive detection

Fixes: "400 messages.220: tool_use ids were found without tool_result blocks"
Comment on lines +65 to +72
```ts
/**
 * Detects Claude models by checking if the modelId contains 'claude'.
 * This catches Claude models accessed via non-Anthropic providers like
 * github-copilot, openrouter, etc.
 */
function isClaudeModel(modelId?: string | null): boolean {
  if (!modelId) return false;
  return modelId.toLowerCase().includes("claude");
}
```
Contributor
[P1] Claude detection is over-broad and may incorrectly enable Anthropic-specific policies
`isClaudeModel()` is a plain substring check (`modelId.toLowerCase().includes("claude")`), and `isClaude` is then used to enable `validateAnthropicTurns`/`allowSyntheticToolResults` (and tool_use/tool_result repair) for any non-OpenAI provider. If a provider ever has a non-Anthropic-format model whose ID contains "claude" (or `modelId` is user-configurable and contains that substring), this would turn on Anthropic-style validation/repair and could cause incorrect sanitization or unexpected transcript mutations.
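A concise way to tighten this would be to gate on an Anthropic-compatible API surface or a known Claude-proxy provider, not on the model ID alone. The sketch below is illustrative only: the `modelApi` values, provider list, and argument shape are assumptions, not the repository's actual types.

```ts
// Sketch only: narrows Claude detection so Anthropic-specific policies are
// enabled just when the transcript actually uses Anthropic's message format.
type PolicyInput = {
  provider?: string | null;
  modelApi?: string | null;
  modelId?: string | null;
};

// Providers known to proxy Claude in Anthropic's message format (assumed list).
const CLAUDE_PROXY_PROVIDERS = new Set([
  "github-copilot",
  "openrouter",
  "opencode",
  "amazon-bedrock",
]);

function usesAnthropicFormat({ provider, modelApi, modelId }: PolicyInput): boolean {
  if (provider === "anthropic" || modelApi === "anthropic-messages") return true;
  const isClaudeId = !!modelId && modelId.toLowerCase().includes("claude");
  // Fall back to the model family only for known Claude proxies, instead of
  // any provider whose modelId happens to contain "claude".
  return isClaudeId && !!provider && CLAUDE_PROXY_PROVIDERS.has(provider);
}
```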
Motivation
Discovered this bug while running Clawdbot on an Azure VM using GitHub Copilot with Claude Opus 4.5. Every message to a WhatsApp group was failing with:

400 messages.220: tool_use ids were found without tool_result blocks

The session had orphaned `tool_use` blocks, and the repair logic wasn't running because GitHub Copilot wasn't recognized as needing Anthropic-style sanitization.

Root Cause
In `transcript-policy.ts`, `repairToolUseResultPairing` was only enabled for Google or Anthropic providers. But `github-copilot` with Claude models uses Anthropic's API format and needs this repair. The provider detection only checked for `provider === "anthropic"` or `modelApi === "anthropic-messages"`, missing Claude models accessed via other providers.
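For illustration only (the actual code is not shown in this excerpt, so the names and shape here are assumptions), the kind of provider-only gate described above amounts to:

```ts
// Hypothetical illustration of a provider-only gate (not the repository's
// actual code): only direct Anthropic usage (or Google, for its own repair)
// enables the sanitizers, so Claude routed through github-copilot,
// openrouter, amazon-bedrock, etc. falls through with no repair at all.
function resolvePolicyBefore(provider?: string, modelApi?: string) {
  const isAnthropic =
    provider === "anthropic" || modelApi === "anthropic-messages";
  return {
    repairToolUseResultPairing: isAnthropic || provider === "google",
    validateAnthropicTurns: isAnthropic,
    allowSyntheticToolResults: isAnthropic,
  };
}
```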
Solution

Added an `isClaudeModel()` helper that detects Claude by checking whether `modelId` contains "claude", and wired it into:

- `repairToolUseResultPairing` (fixes the 400 error)
- `validateAnthropicTurns` (Claude needs Anthropic-style turn validation)
- `allowSyntheticToolResults` (allows inserting synthetic results for missing tool results)

This covers Claude on any provider: github-copilot, openrouter, opencode, amazon-bedrock, etc.
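Under the same illustrative assumptions as the sketch in Root Cause (names and shapes are assumptions, not the PR's exact code), the fix effectively ORs the model-family check into those three flags:

```ts
// Hypothetical wiring of isClaudeModel() into the policy flags (sketch only).
function resolvePolicyAfter(provider?: string, modelApi?: string, modelId?: string) {
  const isAnthropic =
    provider === "anthropic" || modelApi === "anthropic-messages";
  const isClaude = isClaudeModel(modelId); // e.g. "claude-opus-4.5" via github-copilot
  return {
    repairToolUseResultPairing: isAnthropic || isClaude || provider === "google",
    validateAnthropicTurns: isAnthropic || isClaude,
    allowSyntheticToolResults: isAnthropic || isClaude,
  };
}
```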
Testing
Added a comprehensive test suite (238 lines) covering:

- Direct Anthropic provider
- Claude via github-copilot, openrouter, opencode, amazon-bedrock
- Non-Claude models (GPT, Llama, Gemini)
- Edge cases (null/empty modelId)
- Case-insensitive detection

Logic verified with standalone tests (9/9 passing). The full vitest suite requires pnpm workspace setup.
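A minimal sketch of one of the described cases, assuming a `resolveTranscriptPolicy({ provider, modelApi, modelId })` entry point that returns the flags above (the exact signature and field values are assumptions):

```ts
import { describe, expect, it } from "vitest";
import { resolveTranscriptPolicy } from "./transcript-policy";

describe("resolveTranscriptPolicy", () => {
  it("enables Anthropic-style repair for Claude via github-copilot", () => {
    const policy = resolveTranscriptPolicy({
      provider: "github-copilot",
      modelApi: "openai-chat", // assumed value; anything non-Anthropic
      modelId: "claude-opus-4.5",
    });
    expect(policy.repairToolUseResultPairing).toBe(true);
    expect(policy.validateAnthropicTurns).toBe(true);
    expect(policy.allowSyntheticToolResults).toBe(true);
  });

  it("does not enable Anthropic-style turn validation for non-Claude models", () => {
    const policy = resolveTranscriptPolicy({
      provider: "openrouter",
      modelApi: "openai-chat",
      modelId: "gpt-4o",
    });
    expect(policy.validateAnthropicTurns).toBe(false);
  });
});
```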
AI Disclosure
🤖 AI-Assisted PR
Files Changed
- `src/agents/transcript-policy.ts` - Added `isClaudeModel()` helper and updated policy flags
- `src/agents/transcript-policy.test.ts` - New comprehensive test suite

Greptile Overview
Greptile Summary
This PR extends transcript sanitization policy selection to recognize Claude models even when they're routed through non-Anthropic providers (e.g., GitHub Copilot/OpenRouter/Bedrock) by introducing an `isClaudeModel()` helper and using it to enable Anthropic-style repair/validation flags. It also adds a focused `resolveTranscriptPolicy` test suite covering Claude-via-third-party providers, non-Claude models, and a few edge cases.

Confidence Score: 4/5