fix(msteams): block provider promise until abort signal fires #27968
Anandesh-Sharma wants to merge 1 commit into openclaw:main from
Conversation
The msteams provider returned immediately after starting the Express
server. The channel manager in server-channels.ts treats promise
resolution as "provider stopped" and triggers auto-restart with
exponential backoff, causing an infinite restart loop.
Add the same blocking pattern used by the Slack provider: await a
promise that only resolves when the abort signal fires. Also add
{ once: true } to the existing abort listener to prevent leaks.
Closes openclaw#27885
Co-Authored-By: Claude Opus 4.6 <[email protected]>
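The commit message above describes the blocking pattern in prose; a minimal sketch of it might look like the following (`waitForAbort` is an illustrative name, not the actual provider code):

```typescript
// Sketch of the abort-blocking pattern: a promise that settles only when
// the abort signal fires, so the provider's monitor promise does not
// resolve right after the server starts.
function waitForAbort(signal?: AbortSignal): Promise<void> {
  return new Promise((resolve) => {
    // If the signal has already fired, resolve immediately so callers
    // that start after shutdown never hang.
    if (signal?.aborted) {
      resolve();
      return;
    }
    // { once: true } auto-removes the listener after it runs, preventing
    // a listener leak across restarts.
    signal?.addEventListener("abort", () => resolve(), { once: true });
  });
}
```

A provider would `await waitForAbort(opts.abortSignal)` after starting its server, so the monitor promise settles only on shutdown rather than immediately after `listen()`.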
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 83caf25d28
```typescript
if (opts.abortSignal?.aborted) {
  resolve();
  return;
}
opts.abortSignal?.addEventListener("abort", () => resolve(), { once: true });
```
Propagate server startup failures out of abort wait
This new wait only resolves when opts.abortSignal aborts, so if expressApp.listen fails (for example EADDRINUSE), the error handler logs but this promise never settles and the channel manager keeps the account marked as running instead of entering its restart/error path. In practice, a failed MSTeams bind can now hang indefinitely until a manual stop triggers abort, so startup failures should also resolve/reject this wait.
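The suggestion above could be sketched as follows (a hypothetical helper, not the actual msteams monitor wiring): settle the same wait on either abort or a server `error` event, so a failed bind rejects instead of hanging.

```typescript
import { EventEmitter } from "node:events";

// Hypothetical sketch: resolve on abort, but reject if the server emits
// "error" (e.g. EADDRINUSE on a failed bind), so the channel manager can
// enter its restart/error path instead of keeping the account "running".
function waitForAbortOrError(
  server: EventEmitter,
  signal?: AbortSignal,
): Promise<void> {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) {
      resolve();
      return;
    }
    signal?.addEventListener("abort", () => resolve(), { once: true });
    // A startup failure now settles the wait instead of only logging.
    server.once("error", (err) => reject(err));
  });
}
```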
Greptile Summary
Prevents infinite restart loop by blocking the provider promise until the abort signal fires. Previously the provider returned immediately after starting the Express server.
Confidence Score: 5/5
Last reviewed commit: 83caf25
Field report from a live production recovery (sanitized, no secrets / no env values). Posting this in case it helps maintainers and others triaging the same restart-loop class.

Executive summary
We hit a Microsoft Teams provider auto-restart loop with the same user-visible signature described here:
In our incident, there were three independent contributors. Fixing only one did not fully resolve the loop:
What we observed (timeline-style)
Phase A — restart loop with repeated startup success signals
Phase B — eliminated local package-collision factor
Phase C — fixed missing credential config
Phase D — remaining behavior matches known monitor/lifecycle bug class
Distinguishing signals that helped triage
These indicators were the most useful to separate local misconfiguration from upstream bug behavior:
Secure remediation checklist that worked best for us
(Generalized to avoid host-specific details)
1) Eliminate package duplication first
2) Validate Teams auth fields completely
3) Restart and verify by behavior, not process state
4) If still looping, classify as likely upstream lifecycle bug
What to include in repro data (high value for maintainers)
Recommend sharing these artifacts (all sanitized):
Suggested maintainer-facing acceptance test
A robust guardrail test for this bug class would assert:
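A sketch of such a guardrail test, assuming a generic `monitor(signal)` entry point (the real msteams monitor signature may differ; `fakeMonitor` is a stand-in):

```typescript
// Stand-in for the real monitor: pretend the server started, then block
// until the abort signal fires.
async function fakeMonitor(signal: AbortSignal): Promise<void> {
  await new Promise<void>((resolve) => {
    if (signal.aborted) {
      resolve();
      return;
    }
    signal.addEventListener("abort", () => resolve(), { once: true });
  });
}

// Guardrail: the monitor promise must stay pending while the provider
// runs and settle only after abort.
async function assertBlocksUntilAbort(
  monitor: (signal: AbortSignal) => Promise<void>,
): Promise<void> {
  const controller = new AbortController();
  let settled = false;
  const run = monitor(controller.signal).then(() => {
    settled = true;
  });
  await new Promise((r) => setTimeout(r, 20));
  if (settled) throw new Error("monitor resolved before abort (restart-loop bug)");
  controller.abort();
  await run;
}
```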
Current status from this field report
If helpful, I can provide a compact sanitized log excerpt in follow-up showing exact line ordering (startup → resolution → restart) without any identifiers.
Superseded by already-merged MSTeams monitor lifecycle fixes:
Closing this as a duplicate to keep the queue focused. Thank you for the patch and write-up.
Summary
- Apply the `await` pattern used by the Slack provider: the monitor promise now only resolves when the abort signal fires
- Add `{ once: true }` to the existing abort listener to prevent event listener leaks

Root Cause
`monitorMSTeamsProvider()` returned `{ app, shutdown }` immediately after `expressApp.listen()`. The channel manager in `server-channels.ts` treats promise resolution as "provider stopped" and triggers auto-restart with exponential backoff — resulting in 10 restart attempts cycling indefinitely.

The Slack provider (`src/slack/monitor/provider.ts:355-359`) correctly blocks with the await-on-abort pattern shown in the diff above. This same pattern is now applied to the msteams provider.
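To illustrate the root cause, a simplified supervisor loop (illustrative only, not the actual `server-channels.ts` code) shows why an immediately-resolving monitor promise restarts indefinitely:

```typescript
// Simplified sketch: the manager treats monitor-promise settlement as
// "provider stopped" and restarts with exponential backoff, so a monitor
// that returns right after listen() loops until the restart cap is hit.
async function supervise(
  startMonitor: (signal: AbortSignal) => Promise<void>,
  maxRestarts = 10,
): Promise<number> {
  const controller = new AbortController();
  let restarts = 0;
  while (!controller.signal.aborted) {
    await startMonitor(controller.signal); // settles => "provider stopped"
    if (restarts >= maxRestarts) break;
    restarts += 1;
    await new Promise((r) => setTimeout(r, 2 ** restarts)); // backoff (ms)
  }
  return restarts;
}
```

With the fix, `startMonitor` only settles on abort, so the loop body never cycles during normal operation.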
Test plan
- Verified the abort listener is registered with `{ once: true }`

Closes #27885
🤖 Generated with Claude Code