fix(feishu): add early event-level dedup to prevent duplicate message processing#37528
zhuowater wants to merge 1 commit into openclaw:main
Conversation
Feishu webhook retries and WebSocket reconnect replays can deliver the same event multiple times. The existing `messageId` dedup in `handleFeishuMessage` runs downstream after the debouncer, so two concurrent dispatches of the same event can both enter the pipeline before either records the `messageId`.

Add a synchronous in-memory dedup check at the `EventDispatcher` handler level, using `message_id` as the key with a 5-minute TTL. This catches duplicates immediately when they arrive, before they enter the inbound debouncer or processing queue. The downstream persistent dedup in `bot.ts` is preserved for cross-restart protection.

Closes openclaw#37477
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 42e6b3c553
```typescript
const messageId = event.message?.message_id?.trim();
if (messageId) {
  const eventKey = `${accountId}:evt:${messageId}`;
  if (!eventDedup.check(eventKey)) {
```
Invert event dedupe guard to process first delivery
`createDedupeCache.check` returns `false` for a new key and `true` only after the key has already been seen (`src/infra/dedupe.ts:57-67`), but this handler currently returns early when `check` is `false`. In normal Feishu `im.message.receive_v1` traffic (where `message_id` is present), that means the first delivery of each message is dropped and only later retries/replays can proceed, effectively suppressing legitimate inbound messages.
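The semantics the review describes can be sketched with a minimal stand-in. This toy version is an assumption for illustration (the real `createDedupeCache` lives in `src/infra/dedupe.ts` and has TTL and size eviction, both omitted here); it keeps only the documented `check` behaviour and shows the correct guard direction:

```typescript
// Stand-in with the semantics the review describes: check(key) returns
// false the first time a key is seen (recording it) and true on every
// subsequent call. TTL and eviction are intentionally omitted.
function createDedupeCache() {
  const seen = new Set<string>();
  return {
    check(key: string): boolean {
      if (seen.has(key)) return true; // already seen: duplicate
      seen.add(key); // first sighting: record it, report "not a duplicate"
      return false;
    },
  };
}

const eventDedup = createDedupeCache();

// Correct guard: drop only when check() reports the key was already seen.
function shouldProcess(eventKey: string): boolean {
  if (eventDedup.check(eventKey)) {
    return false; // duplicate delivery: drop
  }
  return true; // first delivery: process
}
```

With this shape, the first delivery of a key is processed and only replays are dropped, which is the inverse of what the PR's negated guard does.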
Greptile Summary

This PR adds an early in-memory dedup cache to the Feishu `im.message.receive_v1` handler, but the guard's predicate is inverted, so first deliveries are dropped and only retries pass through. The change is otherwise well-structured (correct helper, correct TTL/size, correct placement), so the fix is straightforward: remove the `!` from the `eventDedup.check` guard.

Confidence Score: 1/5
Last reviewed commit: 42e6b3c
```typescript
if (!eventDedup.check(eventKey)) {
  log(`feishu[${accountId}]: dropping duplicate event for message ${messageId}`);
  return;
}
```
`createDedupeCache.check(key)` returns `false` on the **first** encounter (recording the key) and `true` on **subsequent** (duplicate) encounters. The negation `!eventDedup.check(eventKey)` inverts this:

- First message arrives → `check` returns `false` → `!false === true` → the message is **dropped** ❌
- Duplicate arrives → `check` returns `true` → `!true === false` → the duplicate **passes through** ❌

This is precisely backwards from the intended behaviour: every original message will be silently discarded and every retry/replay will be processed. The `!` must be removed:
```diff
- if (!eventDedup.check(eventKey)) {
+ if (eventDedup.check(eventKey)) {
    log(`feishu[${accountId}]: dropping duplicate event for message ${messageId}`);
    return;
  }
```
The existing `tryRecordMessage` in `dedup.ts` wraps `check` the same way (`return !memoryDedupe.check(messageId)`) and returns `true` for new messages. The direct `check` call here needs the negation removed.
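The two call styles being contrasted can be sketched side by side. `memoryDedupe`, `tryRecordMessage`, and `isDuplicate` below are illustrative stand-ins under the assumed `check` semantics, not the project's actual code:

```typescript
// Assumed semantics (per the review): check() returns false for a new key,
// true for an already-seen key.
const seenIds = new Set<string>();
const memoryDedupe = {
  check(key: string): boolean {
    const dup = seenIds.has(key);
    if (!dup) seenIds.add(key);
    return dup;
  },
};

// Wrapper style (as in dedup.ts): the negation belongs here, because the
// function answers "is this message new?" -- true means "process it".
function tryRecordMessage(messageId: string): boolean {
  return !memoryDedupe.check(messageId);
}

// Direct style (as in the handler): no negation, because the question is
// "is this a duplicate?" -- true means "drop it".
function isDuplicate(eventKey: string): boolean {
  return memoryDedupe.check(eventKey);
}
```

The bug in the PR amounts to using the wrapper's negation with the direct style's branch logic, flipping both outcomes.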
(Review comment on `extensions/feishu/src/monitor.account.ts`, lines 392-395.)
Closing this as superseded by #43762. Both PRs patch the same Feishu `im.message.receive_v1` handler; #43762 carries the same fix path with the corrected predicate and the same dedupe cache approach.
This earlier PR targeted the same Feishu early-event dedupe race; the fix ultimately landed in #43762. Keeping this closed as superseded by the merged fix.
Problem
Fixes #37477
Feishu webhook retries and WebSocket reconnect replays can deliver the same `im.message.receive_v1` event multiple times. The existing `messageId` dedup in `handleFeishuMessage` (`bot.ts`) runs downstream after the inbound debouncer, so two concurrent dispatches of the same event can both enter the pipeline before either records the `messageId`, causing duplicate replies.

Fix
Add a synchronous in-memory dedup check at the `EventDispatcher` handler level (`monitor.account.ts`) using `message_id` as the key, with a 5-minute TTL and 2000-entry cap. This catches duplicates immediately when they arrive from the Lark SDK, before they enter the inbound debouncer or processing queue.

The dedup cache uses the same `createDedupeCache` utility already used by the persistent dedup module, keeping the implementation consistent.

The downstream persistent dedup in `bot.ts` is intentionally preserved; it guards against cross-restart duplicate processing, which the in-memory cache cannot cover.

Changes
`monitor.account.ts`: Import `createDedupeCache` from `openclaw/plugin-sdk/feishu`. Create a per-account event dedup cache before registering event handlers. In the `im.message.receive_v1` handler, check `message_id` against the cache before passing to the debouncer.

Test plan
`pnpm test -- extensions/feishu`: existing tests should pass