Bug Description
When a context overflow occurs (prompt too large for model), OpenClaw posts the raw error message to the Discord channel before retrying with compaction. The retry often succeeds, but the user sees a confusing error message followed by a normal response.
Expected Behavior
OpenClaw should silently retry (compact/prune) and only surface the error to the chat channel if all retries fail.
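Roughly, the flow I would expect is sketched below. This is only an illustration; `runCompletion`, `compactContext`, and `postToChannel` are placeholder names, not real OpenClaw APIs.

```typescript
// Minimal sketch of the expected ordering: compact and retry silently,
// and post the error to the channel only after every retry has failed.
// All helper names here are hypothetical placeholders.

class ContextOverflowError extends Error {}

interface Deps {
  runCompletion: (prompt: string) => Promise<string>;
  compactContext: () => Promise<void>;
  postToChannel: (message: string) => Promise<void>;
}

async function runWithSilentRetry(
  prompt: string,
  deps: Deps,
  maxRetries = 2,
): Promise<string> {
  let lastError: unknown;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      // First attempt uses the context as-is; later attempts use the compacted context.
      return await deps.runCompletion(prompt);
    } catch (err) {
      if (!(err instanceof ContextOverflowError)) throw err;
      lastError = err;
      // Do NOT post anything to the channel here; just compact and retry.
      await deps.compactContext();
    }
  }

  // Only once all retries are exhausted should the error surface in chat.
  await deps.postToChannel(
    "Context overflow: prompt too large for the model. " +
      "Try again with less input or a larger-context model.",
  );
  throw lastError;
}
```

The key point is the ordering: nothing reaches the channel on the first failure; the error only surfaces once compaction and retries have been exhausted.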
Actual Behavior
The error message "Context overflow: prompt too large for the model. Try again with less input or a larger-context model." is posted to Discord as a bot message on the first failure, even when the subsequent retry succeeds. This results in:
- Error message appears in chat
- Successful response appears right after
- User sees both, causing confusion
Reproduction
- Use Opus 4.6 with `contextTokens: 1000000`
- Have a conversation with several tool calls that produce large results (e.g., `gateway config.schema`, `gateway config.get`)
- Context builds up from tool results in the conversation history
- Next user message triggers the overflow on first API attempt
- Error leaks to Discord channel before compaction/retry
Environment
- OpenClaw v2026.2.6 (f831c48)
- Model: anthropic/claude-opus-4-6
- Channel: Discord
- Compaction mode: tested with both `safeguard` and `default`
- Context pruning: tested with both `off` and `cache-ttl`
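For reference, the configuration during these tests looked roughly like the following. The key names are my approximation of the settings mentioned above, not a verified OpenClaw config schema.

```typescript
// Approximate configuration used while reproducing the issue.
// Key names are guesses based on the settings referenced in this report,
// not a verified OpenClaw config schema.
const reproConfig = {
  model: "anthropic/claude-opus-4-6",
  contextTokens: 1_000_000,
  compaction: { mode: "safeguard" }, // also reproduced with "default"
  contextPruning: { mode: "off" },   // also reproduced with "cache-ttl"
};
```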
Frequency
Observed 12 times in a single session today (Feb 7, 2026). Particularly common after large tool results (config schema dumps, web fetches) accumulate in context.
Workaround
Manually deleting the error messages from the channel after they appear. Enabling `contextPruning.mode: "cache-ttl"` reduces the frequency but does not prevent the error from leaking.
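For completeness, the pruning workaround amounts to roughly this setting (same caveat as above: the key path is assumed from the option name, not a verified schema):

```typescript
// Workaround: enable cache-ttl pruning. This reduces how often the overflow
// is hit, but does not stop the first-attempt error from leaking to Discord.
const workaroundConfig = {
  contextPruning: { mode: "cache-ttl" },
};
```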