fix(slack): allow app_mention events to bypass dedup cache #20152
nova-openclaw-cgk wants to merge 2 commits into openclaw:main
Conversation
Slack delivers both `message` and `app_mention` events for the same @mention with no guaranteed delivery order. The existing dedup cache uses the same `channelId:ts` key for both event types, so whichever arrives first marks the message as seen and the second is silently dropped. When `message` arrives first in a channel with `requireMention: true`, it is dropped for lacking a mention — but the dedup key is still set. The subsequent `app_mention` (which carries `wasMentioned: true`) is then deduped, causing:

1. @mentions silently ignored (~50% of the time)
2. Thread context never established (`threadTsResolver.resolve` never runs)
3. `invalid_thread_ts` errors on reply → fallback to main channel

The fix exempts `app_mention` events from the dedup check. This is safe because the downstream debouncer already merges duplicate entries via `buildKey`, and `prepareSlackMessage` correctly combines `wasMentioned` across debounced entries.

Fixes openclaw#20151
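The race can be modeled in a few lines. This is a hypothetical standalone sketch, not the real handler in `src/slack/monitor/message-handler.ts`; only the `markMessageSeen` name and the `channelId:ts` key shape come from the PR.

```typescript
type Source = "message" | "app_mention";

// Shared dedup cache keyed by `channelId:ts`, as described above.
const seen = new Set<string>();

// Returns true if the key was already seen, and marks it either way.
function markMessageSeen(channelId: string, ts: string): boolean {
  const key = `${channelId}:${ts}`;
  const already = seen.has(key);
  seen.add(key);
  return already;
}

// Pre-fix behavior: both event types consult the same key, so whichever
// event arrives second is dropped, even the `app_mention` that carries
// the authoritative `wasMentioned: true` flag.
function handlePreFix(_source: Source, channelId: string, ts: string): "processed" | "dropped" {
  if (markMessageSeen(channelId, ts)) return "dropped";
  return "processed";
}
```

With this model, if `message` wins the race for a given `ts`, the later `app_mention` for the same `ts` comes back `"dropped"`, which is exactly the missed-mention case the PR describes.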
```ts
if (opts.source !== "app_mention" && ctx.markMessageSeen(message.channel, message.ts)) {
  return;
}
```
**Double-processing risk when debounce is 0**

When `debounceMs` is 0 (the default per `resolveInboundDebounceMs`), the debouncer flushes each event immediately without buffering. Since `app_mention` now bypasses **both** the check and the set of `markMessageSeen`, both `message` and `app_mention` events will independently reach `prepareSlackMessage` and `dispatchPreparedSlackMessage`.

In `requireMention: true` channels the `message` event is dropped by the mention gate, so there's no issue. But in channels where `requireMention` is `false` (or DMs), both events pass through, potentially causing the bot to respond twice to the same @mention.

Consider still calling `markMessageSeen` for `app_mention` (to set the key for the subsequent `message` event) while skipping the early-return when the key was already set:
```suggestion
// Allow app_mention events to bypass dedup. Slack sends both `message` and
// `app_mention` for the same @mention with no delivery-order guarantee. If
// the `message` event arrives first it is marked seen; without this guard the
// subsequent `app_mention` (which carries the authoritative `wasMentioned`
// flag) would be silently dropped, causing missed mentions and lost thread
// context. See: https://github.com/openclaw/openclaw/issues/20151
const alreadySeen = ctx.markMessageSeen(message.channel, message.ts);
if (alreadySeen && opts.source !== "app_mention") {
  return;
}
```
This way, whichever event arrives first marks the key as seen, and the second is deduped — unless it's `app_mention`, which always proceeds. If `app_mention` arrives first it marks the key, so the subsequent `message` event is deduped. If `message` arrives first it marks the key and proceeds, but `app_mention` also proceeds — however in non-`requireMention` channels the `message` event would have already handled the message, and the debouncer would merge them if debounce > 0. For debounce=0 with `message` winning the race, both still process, but at least the `app_mention`-first ordering is safe.
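The two orderings can be checked against a small model of the suggested mark-but-don't-return variant. This is a hypothetical sketch; the real check lives in `message-handler.ts` and takes the full `ctx`/`opts` shapes.

```typescript
const seenKeys = new Set<string>();

// Marks the key and reports whether it was already set.
function markSeen(key: string): boolean {
  const already = seenKeys.has(key);
  seenKeys.add(key);
  return already;
}

// Suggested variant: `app_mention` always marks the key (so a later
// `message` event for the same ts is deduped) but never early-returns
// on an already-set key.
function shouldProcess(source: "message" | "app_mention", key: string): boolean {
  const alreadySeen = markSeen(key);
  if (alreadySeen && source !== "app_mention") return false; // deduped
  return true; // proceeds to prepare/dispatch
}
```

Running both orderings through `shouldProcess` shows the asymmetry described above: `app_mention`-first dedupes the trailing `message`, while `message`-first still lets both through when debounce is 0.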
Hey 👋 — we're running into this exact issue in production (multi-agent Slack setup where agents @mention each other). The race between `message` and `app_mention` events hits us regularly. We've cherry-picked this fix into our fork and can confirm it resolves the issue cleanly. The downstream debouncer handles any duplicate concern. Is there anything blocking this from being merged? Happy to address review feedback if needed.
This pull request has been automatically marked as stale due to inactivity.
Closing due to inactivity.
Summary
- Exempts `app_mention` events from the message dedup cache in `createSlackMessageHandler`. Slack delivers both `message` and `app_mention` for the same @mention with no ordering guarantee. The shared `channelId:ts` dedup key causes the second event to be silently dropped ~50% of the time.
- When `message` wins the race in a `requireMention: true` channel, it's dropped for lacking a mention but the dedup key is still set. The `app_mention` (with `wasMentioned: true`) is then deduped, causing missed mentions and lost thread context (`invalid_thread_ts` → fallback to main channel).
- `createInboundDebouncer` already handles dedup at the flush level via `buildKey`, and `prepareSlackMessage` correctly merges `wasMentioned` across debounced entries.

Test plan
- `vitest` tests pass (`src/slack/monitor/message-handler/prepare.test.ts`, `src/slack/monitor/monitor.test.ts`)
- Manual: in a channel with `requireMention: true`, @mention the bot — confirm it responds every time (not ~50%)
- Manual: confirm thread replies land in-thread (no `invalid_thread_ts` fallback)
- Manual: send a plain message without a mention in a `requireMention: true` channel — confirm the bot still ignores it

Fixes #20151
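The flush-level merge the summary relies on can be sketched as follows. This is a hypothetical model; the real behavior lives in `createInboundDebouncer` and `prepareSlackMessage`, and the entry shape here is illustrative.

```typescript
interface InboundEntry {
  key: string; // e.g. a channel/ts pair, as produced by buildKey
  text: string;
  wasMentioned: boolean;
}

// Entries sharing a debounce key collapse into one, and the merged entry
// keeps `wasMentioned` if any constituent had it. This is why letting a
// duplicate `app_mention` through is safe when debounce > 0: the merged
// entry still carries the mention flag exactly once.
function mergeOnFlush(entries: InboundEntry[]): InboundEntry[] {
  const byKey = new Map<string, InboundEntry>();
  for (const entry of entries) {
    const prev = byKey.get(entry.key);
    byKey.set(
      entry.key,
      prev ? { ...prev, wasMentioned: prev.wasMentioned || entry.wasMentioned } : entry,
    );
  }
  return [...byKey.values()];
}
```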
🤖 Generated with Claude Code
Greptile Summary
This PR fixes a race condition where Slack's `app_mention` events were being silently dropped ~50% of the time due to a shared dedup cache key (`channelId:ts`) with `message` events. In `requireMention: true` channels, if the `message` event won the race it would set the dedup key (but be dropped for lacking a mention), causing the subsequent `app_mention` to be deduped — resulting in missed mentions and `invalid_thread_ts` errors.

The fix exempts `app_mention` events from the `markMessageSeen` dedup check entirely, relying on the downstream `createInboundDebouncer` to handle dedup at the flush level.

- The original bug manifests in `requireMention: true` channels
- When `debounceMs` is 0 (the default), both `message` and `app_mention` events will independently reach dispatch in channels where `requireMention` is `false`, since the debouncer does not buffer at zero debounce
- The review suggests still calling `markMessageSeen` for `app_mention` events (to poison the key for the competing `message` event) while skipping the early-return, reducing the double-processing window

Confidence Score: 3/5
Last reviewed commit: 78d7065