fix(delivery-queue): increment retryCount on entries deferred when time budget exceeded #31192
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: dba6aa848b
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
src/infra/outbound/delivery-queue.ts (outdated)

```ts
for (const r of remaining) {
  try {
    await failDelivery(r.id, "Recovery time budget exceeded — deferred", opts.stateDir);
```
Stop processing deferred entries after recovery deadline
Once now >= deadline, this loop performs a failDelivery read/write/rename for every remaining queue item before returning, which can take arbitrarily long on large queues or slow disks and effectively nullifies the documented maxRecoveryMs wall-clock cap for startup recovery. The same unbounded work pattern is also present in the now + backoff >= deadline branch, so recovery can still block startup well past the configured budget.
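The immediate-stop shape this review asks for can be sketched with a minimal in-memory model. Everything here is illustrative, not the repo's actual API: `recoverWithBudget`, the `deliver` callback, and the entry shape are all stand-ins.

```typescript
type QueueEntry = { id: string; retryCount: number };

// Sketch: once the wall-clock deadline is hit, stop immediately instead of
// doing per-entry disk work, so maxRecoveryMs actually bounds startup time.
function recoverWithBudget(
  pending: QueueEntry[],
  deadlineMs: number,
  now: () => number,
  deliver: (e: QueueEntry) => void,
): { recovered: number; deferred: number } {
  let recovered = 0;
  for (let i = 0; i < pending.length; i++) {
    if (now() >= deadlineMs) {
      // Remaining entries are left untouched for the next startup; no
      // read/write/rename per entry in the timeout path.
      return { recovered, deferred: pending.length - i };
    }
    deliver(pending[i]);
    recovered++;
  }
  return { recovered, deferred: 0 };
}
```

The key property is that the timeout branch is O(1), so a large queue or slow disk cannot stretch recovery past the configured budget.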
Greptile Summary

Fixed a critical bug where delivery queue entries deferred due to time budget exhaustion would loop forever without incrementing `retryCount`.

The implementation correctly reuses existing … Confidence Score: 5/5

Last reviewed commit: dba6aa8
dba6aa8 to 098d276
💡 Codex Review
Reviewed commit: d8876ec76b
```diff
 if (now >= deadline) {
   const deferred = pending.length - recovered - failed - skippedMaxRetries - deferredBackoff;
-  opts.log.warn(`Recovery time budget exceeded — ${deferred} entries deferred to next restart`);
+  opts.log.warn(`Recovery time budget exceeded — remaining entries deferred to next startup`);
```
Keep recovery warning text consistent with existing assertions
This changes the timeout warning from deferred to next restart to deferred to next startup, but recoverPendingDeliveries is still asserted against the old wording in src/infra/outbound/outbound.test.ts:448 (expect.stringContaining("deferred to next restart")). When maxRecoveryMs is exhausted, that test path now fails deterministically, so this commit will break the current recovery test suite unless the assertion or warning text is updated together.
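One low-friction way to keep the warning text and the test assertion from drifting is to share the message fragment as a constant. This is a sketch, not the repo's code; the constant name, function name, and exact wording are assumptions.

```typescript
// Define the user-facing fragment once; both the production warn() call and
// the test's expect.stringContaining(...) can reference it, so renames stay
// in sync mechanically.
const DEFERRED_MSG = "deferred to next restart";

function warnRecoveryDeferred(
  log: { warn: (msg: string) => void },
  deferred: number,
): void {
  log.warn(`Recovery time budget exceeded: ${deferred} entries ${DEFERRED_MSG}`);
}
```

A test would then assert against `DEFERRED_MSG` rather than a second hand-typed copy of the string.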
This PR only touches …
e2d0754 to b7a97f1
💡 Codex Review
Reviewed commit: b7a97f1378
src/browser/chrome.ts (outdated)

```diff
 return spawn(exe.path, args, {
-  stdio: "pipe",
+  stdio: ["ignore", "ignore", "ignore"],
```
Keep Chrome stderr piped during launch attempts
When Chrome fails to come up (for example because of a bad flag, missing display, or sandbox issue), launchOpenClawChrome still tries to read proc.stderr and append it to the thrown startup error, but this stdio setting makes proc.stderr null. That regresses a previously actionable failure path into an opaque Failed to start Chrome CDP... message with no browser diagnostics, which will make real startup issues much harder to debug.
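The reviewer's point can be sketched with Node's `child_process`: the stdio tuple can discard stdin/stdout while keeping stderr piped, so a failed launch still surfaces the browser's own diagnostics. `launchWithStderr` is an illustrative helper, not the real `launchOpenClawChrome`.

```typescript
import { spawn } from "node:child_process";

// Discard stdin and stdout, but keep stderr piped so the startup error path
// can append the child's diagnostics instead of an opaque failure message.
function launchWithStderr(cmd: string, args: string[]) {
  const proc = spawn(cmd, args, {
    stdio: ["ignore", "ignore", "pipe"], // stderr stays readable
  });
  let stderrBuf = "";
  proc.stderr?.on("data", (chunk: Buffer) => {
    stderrBuf += chunk.toString();
  });
  return { proc, getStderr: () => stderrBuf };
}
```

With `stdio: ["ignore", "ignore", "ignore"]`, `proc.stderr` is `null` and the captured buffer would always be empty.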
b7a97f1 to 384bbba
💡 Codex Review
Reviewed commit: 384bbba21d
```diff
 if (now >= deadline) {
   const deferred = pending.length - recovered - failed - skippedMaxRetries - deferredBackoff;
-  opts.log.warn(`Recovery time budget exceeded — ${deferred} entries deferred to next restart`);
+  opts.log.warn(`Recovery time budget exceeded — remaining entries deferred to next startup`);
   break;
```
Update deferred retries before breaking on recovery timeout
When maxRecoveryMs is exhausted, recoverPendingDeliveries() still only logs and breaks, so every queued item after the current cursor keeps its old retryCount. In the exact large-queue/slow-startup case this commit claims to fix, those same tail entries can be deferred on every restart and never reach MAX_RETRIES, because failDelivery() is the only path here that increments the counter and it is never called from the timeout branch.
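The fix this review describes can be modeled in a few lines. `failDeliveryStub` stands in for the real `failDelivery()`, which also persists the updated entry under `stateDir`; the shapes and names here are illustrative only.

```typescript
type Entry = { id: string; retryCount: number };

// Stand-in for failDelivery(): the real helper also rewrites the queue file
// on disk; here we only model the retryCount increment.
function failDeliveryStub(e: Entry): void {
  e.retryCount += 1;
}

// On timeout, bump every entry at or after the loop cursor so repeatedly
// deferred entries eventually exhaust MAX_RETRIES instead of looping with
// retryCount stuck at its old value.
function deferRemaining(pending: Entry[], cursor: number): void {
  for (const e of pending.slice(cursor)) {
    failDeliveryStub(e);
  }
}
```

Entries before the cursor were already handled by the normal recovery path, so only the untouched tail consumes retry budget.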
93f868b to fe3f992
💡 Codex Review
Reviewed commit: fe3f992390
```ts
await Promise.allSettled(
  pending
    .slice(i)
    .map((e) => failDelivery(e.id, "recovery time budget exceeded", opts.stateDir)),
```
Stop consuming retry budget for entries never retried
When recovery hits the startup time budget, this now runs failDelivery(...) on every untouched queue entry. failDelivery increments both retryCount and lastAttemptAt, so repeated restarts with a small maxRecoveryMs can exhaust MAX_RETRIES and move messages to failed/ without a single delivery attempt. I verified this by replaying recoverPendingDeliveries(..., maxRecoveryMs: 0) five times: the next normal startup archives the entry instead of calling deliver.
Prevents a crash when `totals` is undefined in the `byModel`/`byProvider`/`byAgent` sort comparators. Fixes the "Cannot read properties of undefined (reading totalTokens)" crash that causes context overflow in active sessions.
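A minimal sketch of the defensive comparator this commit describes; the `Row` shape and field names are assumptions for illustration, not the repo's actual types.

```typescript
type Row = { name: string; totals?: { totalTokens: number } };

// Treat a missing totals aggregate as zero tokens so rows without data sort
// to the end instead of throwing on `undefined.totalTokens`.
function byTokensDesc(a: Row, b: Row): number {
  return (b.totals?.totalTokens ?? 0) - (a.totals?.totalTokens ?? 0);
}
```

Optional chaining plus `?? 0` keeps the comparator total and stable for any mix of populated and unpopulated rows.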
…me budget exceeded

When delivery recovery ran out of the 60s time budget, remaining pending entries were silently deferred to the next restart with no retryCount increment. This caused them to loop forever across restarts, never hitting MAX_RETRIES and never moving to failed/.

Fix: call failDelivery() on each remaining entry before breaking out of the recovery loop (both the deadline check and the backoff-exceeds-deadline check). This increments retryCount so that entries eventually exhaust MAX_RETRIES and are permanently skipped.

Fixes openclaw#24353
… all remaining entries

P1-C: After now >= deadline, the old code would iterate all remaining queue entries and call failDelivery() on each: O(n) work that nullified the maxRecoveryMs wall-clock cap on large queues.

Fix: break out of the recovery loop immediately when the deadline is exceeded. Remaining entries are picked up on next startup unchanged (retryCount not incremented). The deadline means "stop here", not "fail everything remaining".
… compat

The stdio tuple overload resolves differently across @types/node versions (v20 vs v24/v25). Cast the spawn() result to ChildProcessWithoutNullStreams so that proc.stderr?.on/off type-checks regardless of the installed @types/node.
Codex P1: entries deferred by the recovery time budget kept retryCount=0 forever, so they could loop across restarts without ever reaching MAX_RETRIES. After breaking on deadline, call failDelivery() for all remaining entries so retryCount is incremented. Entries stay in queue until MAX_RETRIES is reached and they are pruned normally. Also updates the maxRecoveryMs test to assert retryCount=1 on deferred entries.
83e19dd to 30113a8
💡 Codex Review
Reviewed commit: 83e19dd045
```ts
await Promise.allSettled(
  pending
    .slice(i)
    .map((e) => failDelivery(e.id, "recovery time budget exceeded", opts.stateDir)),
```
Move timed-out entries to failed once they hit MAX_RETRIES
When maxRecoveryMs is already exhausted (for example 0 or a consistently overloaded startup), this branch only calls failDelivery(...) on the remaining entries and then breaks before the later entry.retryCount >= MAX_RETRIES path can run. That means the exact scenario this fix is trying to stop can still loop forever: after 7 consecutive recoverPendingDeliveries(..., maxRecoveryMs: 0) runs, I still had the entry in pending/ with retryCount: 7 and nothing in failed/. The timeout path needs to archive maxed-out entries instead of blindly incrementing them again.
Landed via temp rebase onto main. Thanks @stephenschoettler!
💡 Codex Review
Reviewed commit: 30113a8156
```diff
 if (now >= deadline) {
   const deferred = pending.length - recovered - failed - skippedMaxRetries - deferredBackoff;
-  opts.log.warn(`Recovery time budget exceeded — ${deferred} entries deferred to next restart`);
+  opts.log.warn(`Recovery time budget exceeded — remaining entries deferred to next startup`);
+  // Increment retryCount for all remaining entries so that entries which
+  // are consistently deferred by the time budget eventually reach
+  // MAX_RETRIES and are pruned rather than looping forever.
```
Check MAX_RETRIES before deferring timed-out entries
If recovery starts already past the deadline (for example maxRecoveryMs: 0 or a consistently too-small budget), this timeout branch runs before the entry.retryCount >= MAX_RETRIES check below, so queued files are only rewritten with ever-higher retryCount values and are never moved to failed/. I reproduced this by calling recoverPendingDeliveries(..., maxRecoveryMs: 0) seven times: the entry reached retryCount=7 and still remained queued, so the advertised “eventually pruned” behavior never happens under persistent budget exhaustion.
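The ordering this review asks for can be sketched as follows. `MAX_RETRIES = 7` is inferred from the repro in the comment and may not match the real constant; `archive` and `bump` are stand-ins for the move-to-`failed/` step and `failDelivery()` respectively.

```typescript
const MAX_RETRIES = 7; // assumed value, inferred from the repro above

type Pending = { id: string; retryCount: number };

// On timeout, archive entries that have already exhausted their retries
// instead of rewriting them with an ever-higher retryCount forever.
function deferOrArchive(
  remaining: Pending[],
  archive: (e: Pending) => void, // stand-in for moving the file to failed/
  bump: (e: Pending) => void, // stand-in for failDelivery()
): void {
  for (const e of remaining) {
    if (e.retryCount >= MAX_RETRIES) {
      archive(e);
    } else {
      bump(e);
    }
  }
}
```

Checking the cap inside the timeout branch is what makes the "eventually pruned" guarantee hold even when every startup exhausts the budget.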
Problem

When delivery recovery exceeded the 60s time budget, remaining pending entries were silently skipped with a log warning but no `retryCount` increment. This caused them to loop forever across restarts: they'd always defer, never hit `MAX_RETRIES`, and never move to `failed/`.

Two code paths were affected:

- the deadline check (`now >= deadline`)
- the backoff check (`now + backoff >= deadline`)

Fix

Call `failDelivery()` on each remaining entry before breaking out of the loop. This increments `retryCount` so that entries eventually exhaust `MAX_RETRIES` and are permanently skipped instead of spinning forever.

The existing `failDelivery` helper already handles the `retryCount` increment and stateDir write; no new logic needed.

Fixes #24353
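Both affected branches can funnel into one shared timeout path. This is a compressed sketch of that shape under illustrative names; `deliver`, `computeBackoff`, and the signatures are assumptions, not the actual recoverPendingDeliveries internals.

```typescript
type Item = { id: string; retryCount: number };

// Sketch: both the deadline check and the backoff-exceeds-deadline check
// fail the remaining tail before breaking, so deferred entries always
// consume retry budget.
async function recoverSketch(
  pending: Item[],
  deadline: number,
  now: () => number,
  deliver: (e: Item) => Promise<void>,
  failDelivery: (id: string) => Promise<void>,
  computeBackoff: (e: Item) => number,
): Promise<void> {
  for (let i = 0; i < pending.length; i++) {
    const e = pending[i];
    if (now() >= deadline || now() + computeBackoff(e) >= deadline) {
      await Promise.allSettled(pending.slice(i).map((x) => failDelivery(x.id)));
      return;
    }
    await deliver(e);
  }
}
```

`Promise.allSettled` keeps one failing write from aborting the bookkeeping for the rest of the tail.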