
fix(transcript-policy): don't preserve thinking signatures for kimi-coding (#39798)#39841

Merged
steipete merged 2 commits into openclaw:main from VarunChopra11:fix/kimi-coding-preserve-signatures-regression
Mar 8, 2026

Conversation

@VarunChopra11
Contributor

Summary

  • Problem: Since v2026.3.7, kimi-coding (which uses modelApi: "anthropic-messages") received its own thinkingSignature blobs back as context on turn 2+, causing JSON parse crashes and session termination.
  • Why it matters: Every multi-turn kimi-coding / k2p5 session crashes on turn 2 with Unexpected non-whitespace character after JSON at position N.
  • What changed: preserveSignatures is now isAnthropic && provider !== "kimi-coding" — kimi-coding is excluded from signature preservation despite using the Anthropic messages API.
  • What did NOT change: All other Anthropic, Bedrock, and compatible providers are unaffected. No behaviour change for any provider except kimi-coding.
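The changed condition can be sketched as follows. This is a minimal reconstruction from the PR description and review notes, not the actual contents of src/agents/transcript-policy.ts; the surrounding types and the body of normalizeProviderId are assumptions.

```typescript
type TranscriptPolicyInput = {
  provider: string;
  modelApi: string;
};

type TranscriptPolicy = {
  preserveSignatures: boolean;
};

// Assumption: per the review notes, normalizeProviderId maps the
// "kimi-code" alias onto "kimi-coding" before the comparison runs.
function normalizeProviderId(provider: string): string {
  return provider === "kimi-code" ? "kimi-coding" : provider;
}

function resolveTranscriptPolicy(input: TranscriptPolicyInput): TranscriptPolicy {
  const provider = normalizeProviderId(input.provider);
  const isAnthropic = input.modelApi === "anthropic-messages";
  // The fix: kimi-coding speaks the Anthropic messages API but cannot
  // accept its own thinkingSignature blobs back as context, so it is
  // excluded from signature preservation.
  return { preserveSignatures: isAnthropic && provider !== "kimi-coding" };
}
```

Because the exclusion keys on the normalized provider id, every other anthropic-messages provider still resolves to preserveSignatures: true.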

Change Type (select all)

  • Bug fix

Scope (select all touched areas)

  • Gateway / orchestration

Linked Issue/PR

Fixes #39798, "[Bug]: kimi-coding/k2p5 crashes on turn 2+ since v2026.3.7 (preserveSignatures regression)"

User-visible / Behavior Changes

kimi-coding / k2p5 multi-turn sessions no longer crash on turn 2. Thinking blocks from prior turns are stripped before being re-sent, matching the pre-v2026.3.7 behaviour.

Security Impact (required)

  • New permissions/capabilities? No
  • Secrets/tokens handling changed? No
  • New/changed network calls? No
  • Command/tool execution surface changed? No
  • Data access scope changed? No

Repro + Verification

Environment

  • OS: Linux (Docker, Ubuntu-based)
  • Model/provider: kimi-coding / k2p5, modelApi: anthropic-messages

Steps

  1. Configure a model with provider: kimi-coding, model: k2p5
  2. Run any agent session requiring more than one LLM turn (e.g. a task that calls a tool and continues)
  3. Before fix: crash on turn 2 — Unexpected non-whitespace character after JSON at position N
  4. After fix: session continues normally across turns
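The model configuration in step 1 might look roughly like this (the field names mirror those quoted in the PR; the exact config file shape is an assumption, not taken from the repo):

```json
{
  "provider": "kimi-coding",
  "model": "k2p5",
  "modelApi": "anthropic-messages"
}
```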

Expected

Session continues across multiple turns.

Actual (before fix)

Session crashes on turn 2 with a JSON parse error originating from re-sent thinkingSignature blobs.

Evidence

  • Failing test/log before + passing after

15 tests pass in src/agents/transcript-policy.test.ts, including the new regression test for kimi-coding (does not preserve signatures for kimi-coding provider (#39798)).

Human Verification (required)

  • Verified scenarios: resolveTranscriptPolicy({ provider: "kimi-coding", modelApi: "anthropic-messages" }) returns preserveSignatures: false
  • Edge cases checked: All other Anthropic/Bedrock providers still return preserveSignatures: true; Google/OpenAI/Mistral unaffected
  • What you did not verify: live end-to-end session with real kimi API credentials

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? No
  • Migration needed? No

Failure Recovery (if this breaks)

  • How to disable/revert this change quickly: revert the single-line change in src/agents/transcript-policy.ts (remove && provider !== "kimi-coding")
  • Files/config to restore: src/agents/transcript-policy.ts only

@openclaw-barnacle bot added the agents (Agent runtime and tooling) and size: XS labels on Mar 8, 2026
@VarunChopra11 VarunChopra11 marked this pull request as ready for review March 8, 2026 12:44
@greptile-apps
Contributor

greptile-apps bot commented Mar 8, 2026

Greptile Summary

This PR fixes a multi-turn session crash for the kimi-coding provider by excluding it from preserveSignatures, even though it uses the anthropic-messages API. The change is a single targeted line in src/agents/transcript-policy.ts with a corresponding regression test.

  • The root cause is correctly identified: kimi-coding accepts the Anthropic messages API format but cannot handle re-sent thinkingSignature blobs as context, causing a JSON parse crash on turn 2+.
  • The fix is minimal and surgical — only kimi-coding is affected; all other Anthropic, Bedrock, and compatible providers remain unchanged.
  • Because normalizeProviderId is called before the comparison, the "kimi-code" alias is also handled correctly without an additional check.
  • The regression test is clear and covers the core scenario. Coverage for the "kimi-code" alias variant would be a nice-to-have, but the normalization logic is already tested separately in model-selection.test.ts.
  • Minor style note: the codebase uses named Set constants for other provider exclusions (e.g. OPENAI_COMPAT_TURN_MERGE_EXCLUDED_PROVIDERS); a similar pattern here would make future additions easier, but the inline string check is perfectly correct as written.
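The Set-based pattern the reviewer mentions could look like this. The constant name is hypothetical, modeled on the cited OPENAI_COMPAT_TURN_MERGE_EXCLUDED_PROVIDERS; as the review notes, the inline string check that actually shipped is equally correct.

```typescript
// Hypothetical refactor: a named exclusion set makes future
// provider additions a one-line change.
const SIGNATURE_PRESERVATION_EXCLUDED_PROVIDERS = new Set(["kimi-coding"]);

function shouldPreserveSignatures(isAnthropic: boolean, provider: string): boolean {
  return isAnthropic && !SIGNATURE_PRESERVATION_EXCLUDED_PROVIDERS.has(provider);
}
```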

Confidence Score: 5/5

  • Safe to merge — the change is a minimal, well-tested, backward-compatible bug fix with no impact on any provider other than kimi-coding.
  • The fix is a single-line change in a well-understood policy function, accompanied by a targeted regression test. All existing tests for other providers continue to pass. The normalization path ensures the "kimi-code" alias is also covered. No new permissions, network calls, or data-access changes are introduced.
  • No files require special attention.

Last reviewed commit: 3cb39a6

@steipete steipete merged commit 097c588 into openclaw:main Mar 8, 2026
28 checks passed
@steipete
Contributor

steipete commented Mar 8, 2026

Landed via temp rebase onto main.

  • Gate: pnpm check && pnpm build && pnpm test
  • Land commit: e375489
  • Merge commit: 097c588

Thanks @VarunChopra11!



Closes

[Bug]: kimi-coding/k2p5 crashes on turn 2+ since v2026.3.7 (preserveSignatures regression)

2 participants