[Bug]: Compaction does not auto-trigger reliably and fails under 5min JS timeout with slow local models #27595
Description
Summary
Compaction does not reliably auto-trigger in safeguard mode, and when triggered manually it fails due to a hardcoded ~5 minute timeout in the JS layer.
With slow local models (e.g. Ollama ~30 tok/sec), compaction never successfully completes.
As a result, compaction is effectively unusable in long-running local deployments.
Steps to reproduce
Run OpenClaw with a local Ollama model (~30 tokens/sec).
Use a large context window (e.g. 131k).
Enable compaction:
```json
"compaction": {
  "mode": "safeguard",
  "maxHistoryShare": 0.6,
  "reserveTokensFloor": 24000
}
```
My openclaw.json
```json
"agents": {
  "defaults": {
    "timeoutSeconds": 7200,
    "contextTokens": 131072,
    "model": {
      "primary": "ollama/glm-4.7-flash"
    },
    "workspace": "/home/tsingwin/.openclaw/workspace",
    "compaction": {
      "mode": "safeguard",
      "maxHistoryShare": 0.6,
      "reserveTokensFloor": 24000
    },
    "maxConcurrent": 4,
    "subagents": {
      "maxConcurrent": 8
    }
  }
},
```
Expected behavior
Compaction should auto-trigger before silent trimming removes early history.
Compaction should complete successfully even for slower local models.
If timeout occurs, backend generation should be cancelled.
Compaction timeout should be configurable.
Session state should remain consistent.
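As a sketch of the cancellation behavior I would expect (the names `compactWithTimeout` and `runCompaction` are hypothetical; OpenClaw's internals may differ), the JS-side timeout could propagate an abort to the backend request instead of just rejecting and leaving the model generating:

```typescript
// Hypothetical sketch: a configurable compaction timeout that propagates
// cancellation to the backend (e.g. the Ollama HTTP request) via AbortController.
async function compactWithTimeout(
  runCompaction: (signal: AbortSignal) => Promise<string>,
  timeoutMs: number, // should come from config, not a hardcoded ~5 min constant
): Promise<string> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // Threading the signal through to fetch() lets the in-flight generation
    // be cancelled, freeing the GPU instead of generating into the void.
    return await runCompaction(controller.signal);
  } finally {
    clearTimeout(timer);
  }
}
```

This is the standard Node pattern for propagating timeouts to `fetch`; the key point is that the signal must reach the actual backend request.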
Actual behavior
Compaction does not auto-trigger even when session shows:
Context: 852k / 131k (650%)
Compactions: 0
It appears that OpenClaw silently trims history instead of triggering compaction.
When compaction is manually triggered:
After ~5 minutes, JS layer times out.
Frontend/runtime returns.
Compaction is not marked successful.
However, Ollama backend continues generating.
GPU remains busy.
No cancellation is propagated to the backend model.
I have never successfully completed a compaction cycle in a real-world long session using a slow local model.
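A rough back-of-the-envelope check of why a fixed ~5 minute budget cannot work here (my own arithmetic; the summary size is an assumed figure, not measured):

```typescript
// Rough arithmetic with assumed numbers: at ~30 tok/s decode speed, even a
// moderately sized compaction summary exceeds a 5-minute budget on decode
// alone, before adding prompt processing of a near-full 131k context.
const decodeTokPerSec = 30;    // observed Ollama throughput on this hardware
const summaryTokens = 10_000;  // assumed summary size for a long session
const decodeSeconds = summaryTokens / decodeTokPerSec;
console.log(decodeSeconds / 60); // ≈ 5.6 minutes of decode alone
```

So on this class of hardware the timeout is structurally unreachable, which matches never having seen a compaction cycle complete.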
OpenClaw version
2026.2.23
Operating system
Ubuntu 22.04 arm64 (Jetson Orin AGX + Ollama glm-4.7-flash)
Install method
npm global
Logs, screenshots, and evidence
/status
🦞 OpenClaw 2026.2.23 (b817600)
🧠 Model: ollama/glm-4.7-flash · 🔑 api-key ollama…-local (models.json)
🧮 Tokens: 850k in / 1.8k out · 💵 Cost: $0.0000
📚 Context: 852k/131k (650%) · 🧹 Compactions: 0
🧵 Session: agent:main:feishu:direct:ou_8554c0da0b36ebfdfd8d236726224f71 • updated just now
⚙️ Runtime: direct · Think: off
🪢 Queue: collect (depth 0)
Compaction failed: Compaction timed out • Context ?/131k
Impact and severity
No response
Additional information
No response