Bug Report
Summary
After a successful safeguard compaction, a second compaction fires immediately (~18-34 seconds later), destroying all messages that the first compaction preserved. This results in complete conversation amnesia — zero message history survives.
Root Cause
In `pi-coding-agent/src/core/agent-session.ts`, the `prompt()` method has a pre-compaction check:
```ts
// agent-session.ts, prompt() method
const lastAssistant = this._findLastAssistantMessage();
if (lastAssistant) {
  await this._checkCompaction(lastAssistant, false);
}
```
`_findLastAssistantMessage()` scans `agent.state.messages` in reverse and returns the first assistant message found. After compaction, the kept messages include assistant messages from before compaction, which carry their original `usage.totalTokens` (e.g., 184,297 tokens).

When the next `prompt()` is called (e.g., by a memoryFlush run on the same session file):

1. `_findLastAssistantMessage()` finds the kept assistant with stale `totalTokens = 184,297`
2. `_checkCompaction(staleAssistant)` → `calculateContextTokens(usage)` returns 184,297
3. `shouldCompact(184297, 200000, 20000)` → true (184,297 > 180,000)
4. A second compaction fires on a session that has already been compacted to ~1-5K actual tokens
5. `findCutPoint()` finds 0 conversation messages to keep → all preserved messages destroyed

The core issue: the pre-compaction check in `prompt()` doesn't account for compaction boundaries — it uses stale `usage.totalTokens` from a preserved assistant message that no longer reflects the actual context size.
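The arithmetic of the stale check can be sketched in isolation. This is a minimal reconstruction of the comparison the report describes, not the real `shouldCompact` from pi-coding-agent; the numbers below come from the report itself:

```typescript
// Minimal sketch of the threshold check as described above; the real
// shouldCompact in pi-coding-agent may differ, but the report's numbers
// imply this comparison: compact when tokens exceed contextWindow - reserveTokens.
function shouldCompact(
  totalTokens: number,
  contextWindow: number,
  reserveTokens: number,
): boolean {
  return totalTokens > contextWindow - reserveTokens;
}

// The stale usage carried over from the preserved assistant message still
// trips the check, even though the real post-compaction context is tiny.
const staleTokens = 184_297; // usage.totalTokens from before Comp1
const actualTokens = 5_000;  // rough post-compaction context size (~1-5K)

console.log(shouldCompact(staleTokens, 200_000, 20_000));  // true  → spurious Comp2
console.log(shouldCompact(actualTokens, 200_000, 20_000)); // false → correct behavior
```

Any estimate based on the actual post-compaction messages would fall well under the 180,000 line; only the cached value from the pre-compaction assistant crosses it.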
Impact
- 100% reproduction rate across all sessions with compaction enabled
- 22 double compaction events observed across 8 different sessions on a single installation
- Every double compaction preserves 0 conversation messages (verified by examining `firstKeptEntryId` targets)
- First compaction successfully preserves 1-193 messages; second compaction immediately destroys them all
Timeline (from actual session data)
```
11:36:04  Assistant response: usage.totalTokens = 184,297
          → shouldCompact(184297, 200000, 20000) = true
          → Compaction #1 fires (threshold, willRetry=false)

11:36:38  Compaction #1 completes
          → Summary generated successfully ✅
          → 45 messages preserved in keep window ✅
          → Preserved messages include the 184K assistant ⚠️

11:36:56  openclaw.cache-ttl custom entry appended to session
          (this bypasses prepareCompaction()'s guard that checks
          if the last entry is a compaction)

11:37:12  memoryFlush run starts on the same session file
          → prompt("Pre-compaction memory flush...")
          → prompt() calls _findLastAssistantMessage()
          → Finds the kept assistant with stale totalTokens=184,297
          → _checkCompaction → shouldCompact = true
          → Compaction #2 fires ⚠️

11:37:12  Compaction #2 completes
          → firstKeptEntryId points to cache-ttl custom entry
          → 0 conversation messages preserved
          → Agent has complete amnesia
```
Data: All 22 double compaction events
| Session | Comp1 tokens | Comp1 kept msgs | Stale assistant totalTokens | Comp2 kept msgs |
|---------|--------------|-----------------|-----------------------------|-----------------|
| 0c827498 | 184,297 | 45 | 184,297 | 0 |
| 0c827498 | 180,168 | 81 | 180,168 | 0 |
| 0c827498 | 183,096 | 72 | 183,096 | 0 |
| 0c827498 | 187,872 | 5 | 187,872 | 0 |
| 30b28b73 | 182,038 | 193 | 182,038 | 0 |
| 3f3605b3 | 180,102 | 66 | 180,102 | 0 |
| 4adb9a21 | 184,537 | 36 | 184,537 | 0 |
| 51903c45 | 182,048 | 79 | 182,048 | 0 |
| 51903c45 | 181,232 | 75 | 181,232 | 0 |
| 51903c45 | 186,758 | 45 | 186,758 | 0 |
| 51903c45 | 180,226 | 39 | 180,226 | 0 |
| 52c0e430 | 183,475 | 61 | 183,475 | 0 |
| 52c0e430 | 180,095 | 49 | 180,095 | 0 |
| 52c0e430 | 192,079 | 11 | 192,079 | 0 |
| 52c0e430 | 185,763 | 43 | 185,763 | 0 |
| 52c0e430 | 181,876 | 3 | 181,876 | 0 |
| 52c0e430 | 161,586 | 145 | 161,586 | 0 |
| 8f49d68b | 185,516 | 44 | 185,516 | 0 |
| 8f49d68b | 182,867 | 72 | 182,867 | 0 |
| b4bfa524 | 180,837 | 29 | 180,837 | 0 |
| b4bfa524 | 183,377 | 1 | 183,377 | 0 |
| b4bfa524 | 181,156 | 153 | 181,156 | 0 |
In 21 of the 22 cases, the stale `totalTokens` exceeds the `shouldCompact` threshold (`contextWindow - reserveTokens` = 180,000), triggering the spurious second compaction. The one exception (161,586) falls below 180,000, suggesting its first compaction fired on a different trigger; the mechanism of the second compaction is the same either way: a stale cached value, not the actual context size.
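The threshold pattern in the table can be checked mechanically. A small sketch over the stale `totalTokens` column, using the report's threshold of 200,000 - 20,000:

```typescript
// Stale totalTokens values from the 22 double-compaction events above.
const staleTokens = [
  184_297, 180_168, 183_096, 187_872, 182_038, 180_102, 184_537,
  182_048, 181_232, 186_758, 180_226, 183_475, 180_095, 192_079,
  185_763, 181_876, 161_586, 185_516, 182_867, 180_837, 183_377, 181_156,
];

const threshold = 200_000 - 20_000; // contextWindow - reserveTokens

const overThreshold = staleTokens.filter((t) => t > threshold).length;
console.log(`${overThreshold}/${staleTokens.length} events exceed ${threshold}`);
// The 161,586 event is the lone value under this threshold; every other
// stale value is enough on its own to re-trigger compaction.
```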
Contributing factor: prepareCompaction() guard bypass
`prepareCompaction()` has a guard against double compaction:

```ts
if (pathEntries.length > 0 && pathEntries[pathEntries.length - 1].type === "compaction") {
  return undefined;
}
```
However, in all 22 cases, an `openclaw.cache-ttl` custom entry is inserted between Comp1 and Comp2, making the last entry a `custom` type instead of `compaction`, bypassing this guard. Even without cache-ttl, any non-compaction entry appended after compaction would bypass it.
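The bypass can be illustrated with a minimal sketch. The entry shape below is an assumption based on the report, not the real openclaw session-entry type:

```typescript
// Hypothetical minimal entry shape; the real session-entry types differ.
type Entry = { type: "message" | "compaction" | "custom"; id: string };

// The guard as quoted above: it only inspects the final entry's type.
function guardAllowsCompaction(pathEntries: Entry[]): boolean {
  if (
    pathEntries.length > 0 &&
    pathEntries[pathEntries.length - 1].type === "compaction"
  ) {
    return false; // prepareCompaction() would return undefined here
  }
  return true;
}

const afterComp1: Entry[] = [
  { type: "message", id: "m1" },
  { type: "compaction", id: "comp1" },
];
console.log(guardAllowsCompaction(afterComp1)); // false — guard holds

// One cache-ttl custom entry later, the same session slips past the guard.
const afterCacheTtl: Entry[] = [
  ...afterComp1,
  { type: "custom", id: "openclaw.cache-ttl" },
];
console.log(guardAllowsCompaction(afterCacheTtl)); // true — bypassed
```

Because the guard is positional rather than semantic, any appended entry re-arms compaction regardless of whether new conversation content exists.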
Suggested fixes
Primary fix (in pi-coding-agent):
The pre-compaction check in `prompt()` should not use stale usage from pre-compaction assistant messages. Options:
- Skip the check if a compaction has occurred since the last assistant message — compare the assistant's timestamp against the latest compaction entry's timestamp
- Use `estimateContextTokens()` on the current messages instead of a single assistant's cached `usage.totalTokens`
- Clear/invalidate `usage` on kept messages after compaction so stale values can't trigger re-compaction
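A minimal sketch of the first option, assuming entries and messages carry timestamps; the names here (`usageIsStale`, the entry shapes) are illustrative, not the real pi-coding-agent API:

```typescript
// Illustrative shapes; the real message/entry types in pi-coding-agent differ.
interface TimedEntry { type: string; timestamp: number }
interface AssistantMessage { timestamp: number; usage: { totalTokens: number } }

// The stale-usage check should be skipped when a compaction happened after
// the assistant message whose cached usage we would otherwise trust.
function usageIsStale(
  assistant: AssistantMessage,
  entries: TimedEntry[],
): boolean {
  const lastCompaction = [...entries]
    .reverse()
    .find((e) => e.type === "compaction");
  return (
    lastCompaction !== undefined &&
    lastCompaction.timestamp > assistant.timestamp
  );
}

const entries: TimedEntry[] = [
  { type: "message", timestamp: 100 },
  { type: "compaction", timestamp: 200 }, // Comp1
  { type: "custom", timestamp: 250 },     // cache-ttl entry
];
const keptAssistant = { timestamp: 100, usage: { totalTokens: 184_297 } };
console.log(usageIsStale(keptAssistant, entries)); // true → skip _checkCompaction
```

In `prompt()`, a `usageIsStale(...)` result of true would mean falling back to an estimate over the current messages (or skipping the pre-compaction check entirely) rather than trusting the cached 184K value.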
Secondary fix (defense in depth):
Add a stronger double-compaction guard in `prepareCompaction()`:

```ts
// Instead of only checking the last entry's type,
// check whether any compaction occurred within the current boundary
const lastCompactionIdx = findLastCompactionIndex(pathEntries);
const messagesSinceCompaction = countMessagesSince(pathEntries, lastCompactionIdx);
if (messagesSinceCompaction === 0) return undefined;
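A self-contained version of that guard, with the two helpers filled in. The entry shape and helper implementations are assumptions for illustration:

```typescript
// Hypothetical minimal entry shape; the real session-entry types differ.
type Entry = { type: "message" | "compaction" | "custom" };

// Index of the most recent compaction entry, or -1 if none exists.
function findLastCompactionIndex(pathEntries: Entry[]): number {
  for (let i = pathEntries.length - 1; i >= 0; i--) {
    if (pathEntries[i].type === "compaction") return i;
  }
  return -1;
}

// Conversation messages appended after that compaction; custom entries
// (like openclaw.cache-ttl) deliberately do not count as new content.
function countMessagesSince(pathEntries: Entry[], fromIdx: number): number {
  return pathEntries
    .slice(fromIdx + 1)
    .filter((e) => e.type === "message").length;
}

// False when compacting again would have nothing new to fold in.
function shouldPrepareCompaction(pathEntries: Entry[]): boolean {
  const lastCompactionIdx = findLastCompactionIndex(pathEntries);
  return countMessagesSince(pathEntries, lastCompactionIdx) > 0;
}

// Comp1 followed only by a cache-ttl custom entry: the old last-entry guard
// is bypassed, but this guard still blocks Comp2.
const entries: Entry[] = [
  { type: "message" },
  { type: "compaction" },
  { type: "custom" },
];
console.log(shouldPrepareCompaction(entries)); // false → no second compaction
```

Counting only `message` entries (rather than checking the last entry's type) makes the guard robust against any bookkeeping entry appended after compaction, not just cache-ttl.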
Environment
- OpenClaw version: 2026.2.23 (also reproduced on earlier versions back to ~2026.2.4)
- OS: macOS Tahoe (arm64)
- Model: anthropic/claude-opus-4-6 (200K context window)
- Compaction mode: safeguard
- contextPruning: `cache-ttl` with a 6h TTL (contributes to the guard bypass, but is not the root cause)
Related issues