Bug Report
Description
Auto-compaction overflow recovery is not working. When a model request exceeds the context window, the documented flow should be:
- Model returns token limit error
- Auto-compaction triggers
- Original request is retried with compacted context
What Actually Happens:
The request fails with HTTP 400, and no auto-compaction or retry occurs. `compactionCount` remains 0.
Reproduction Steps
- Session accumulates context (93 lines in transcript, `contextTokens: 204,800`)
- A request is sent that exceeds the model's context limit
- Error returned: `HTTP 400: Invalid request: Your request exceeded model token limit: 262144 (requested: 276742)`
- Session shows `compactionCount: 0` (no compaction ran)
- No retry attempted
Actual Behavior
- Error: `exceeded model token limit: 262144 (requested: 276742)`
- `compactionCount: 0` (checked in `sessions.json`)
- No retry with compacted context
- Session transcript shows no `"type":"compaction"` entries
Expected Behavior
Per documentation (`/concepts/compaction.md`):
"When a session nears or exceeds the model's context window, OpenClaw triggers auto-compaction and may retry the original request using the compacted context."
Expected flow:
- Error detected
- Auto-compaction runs
- Request retried with compacted context
- `compactionCount` increments
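The expected flow above can be sketched as a single compact-and-retry wrapper. This is an illustration of the documented behavior, not OpenClaw's actual implementation; `TokenLimitError` and the `send`/`compact` stubs are hypothetical, with the token numbers taken from this report.

```python
class TokenLimitError(Exception):
    """Stand-in for the provider's HTTP 400 token-limit error."""

def send_with_overflow_recovery(send, compact, request_tokens, context, session):
    """One compaction-and-retry pass, per the documented flow (a sketch only)."""
    try:
        return send(request_tokens, context)
    except TokenLimitError:
        context = compact(context)            # auto-compaction runs
        session["compactionCount"] += 1       # counter should increment
        return send(request_tokens, context)  # original request retried

# Minimal demo with stubbed token accounting (numbers from this report):
LIMIT = 262_144  # kimi-for-coding context window

def send(request_tokens, context):
    if len(context) + request_tokens > LIMIT:
        raise TokenLimitError
    return "ok"

def compact(context):
    return context[: LIMIT // 4]  # keep a fraction of the accumulated context

session = {"compactionCount": 0}
result = send_with_overflow_recovery(send, compact, 71_942, [0] * 204_800, session)
print(result, session["compactionCount"])  # ok 1
```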
Configuration
```yaml
agents:
  defaults:
    compaction:
      mode: "safeguard"
    contextPruning:
      mode: "off"
```
Session Details
- Session ID: `446dd781-76a1-4bb8-bed3-d53e7038b75d`
- Session key: `agent:default:telegram:dm:5437910345`
- Model at error: `kimi-code/kimi-for-coding` (262,144 context window)
- Transcript lines: 93
- `contextTokens: 204,800`
- `compactionCount: 0`
Notes
- The error reports a limit of 262,144, which is `kimi-for-coding`'s context window, even though the session model shows `glm-4.7` (200,000 limit). This suggests a fallback model switch occurred.
- Documentation references Pi runtime compaction settings such as `reserveTokens` and `keepRecentTokens`, but these are not exposed in OpenClaw config. Overflow recovery may depend on these settings.
- The overflow recovery flow is documented but not executing; this appears to be a bug in the retry logic.
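If a `reserveTokens`-style safeguard were in effect, the numbers in this session suggest why nothing triggered before the request. The sketch below assumes a simple threshold check; the reserve value is an arbitrary assumption for illustration, not an OpenClaw setting.

```python
CONTEXT_WINDOW = 262_144   # kimi-for-coding, per the error message
RESERVE_TOKENS = 32_768    # hypothetical reserve; not exposed in OpenClaw config

def should_compact(context_tokens: int) -> bool:
    """Proactive-compaction trigger implied by reserveTokens-style settings."""
    return context_tokens > CONTEXT_WINDOW - RESERVE_TOKENS

print(should_compact(204_800))  # False: below threshold, so no proactive compaction
print(should_compact(276_742))  # True: the overflowing request should have triggered it
```

With these assumed numbers, the session was under the proactive threshold when the request was sent, so only the error-driven overflow recovery path could have compacted, and that path did not run.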
Environment
- OS: Linux 5.15.0-305.176.4.el9uek.aarch64 (arm64)
- Node: v22.20.0
- OpenClaw: dev channel, version 2026.1.29
- Config: `agents.defaults.compaction.mode: "safeguard"`
Documentation Reference
- `/concepts/compaction.md`