fix(msteams): keep monitor alive until shutdown #24580
steipete merged 2 commits into openclaw:main
Conversation
```ts
      resolve();
    });
  });
  opts.abortSignal?.removeEventListener("abort", onAbort);
```
`removeEventListener` is unnecessary since the listener was added with `{ once: true }` on line 317.
```diff
- opts.abortSignal?.removeEventListener("abort", onAbort);
+ // Listener auto-removed by { once: true } option
```
Path: `extensions/msteams/src/monitor.ts`, line 327.
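A minimal standalone demo of the reviewer's point, using a plain `EventTarget` rather than the PR's actual code: a listener registered with `{ once: true }` is removed automatically by the runtime after its first invocation, so a matching `removeEventListener` call afterwards is dead code.

```typescript
// Sketch only (not the PR's code): `{ once: true }` auto-removes the
// listener after its first call, making removeEventListener redundant.
const target = new EventTarget();
let calls = 0;
const onAbort = () => { calls += 1; };

target.addEventListener("abort", onAbort, { once: true });
target.dispatchEvent(new Event("abort")); // handler runs, then is auto-removed
target.dispatchEvent(new Event("abort")); // handler does not run again

console.log(calls); // 1
```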
This pull request has been automatically marked as stale due to inactivity.
Field report from a live production recovery (sanitized: no secrets, no env values). Posting this in case it helps maintainers and others triaging the same restart-loop class.

**Executive summary**

We hit a Microsoft Teams provider auto-restart loop with the same user-visible signature described here:
In our incident, there were three independent contributors. Fixing only one did not fully resolve the loop:
**What we observed (timeline-style)**

Phase A — restart loop with repeated startup success signals
Phase B — eliminated local package-collision factor
Phase C — fixed missing credential config
Phase D — remaining behavior matches known monitor/lifecycle bug class
**Distinguishing signals that helped triage**

These indicators were the most useful for separating local misconfiguration from upstream bug behavior:
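One such signal can be checked mechanically. An illustrative triage helper (the log format and threshold here are assumptions, not the actual openclaw log schema): a healthy provider logs its startup banner once per boot, so the same startup-success line repeating within one capture window is the signature of a restart loop rather than a clean start.

```typescript
// Hypothetical loop detector: flags a log window in which the startup
// banner repeats, which a single healthy boot never produces.
function looksLikeRestartLoop(
  logLines: string[],
  startupMarker: string,
  threshold = 3,
): boolean {
  const startups = logLines.filter((line) => line.includes(startupMarker));
  return startups.length >= threshold;
}

const looping = [
  "[msteams] provider started",
  "[msteams] provider exited",
  "[msteams] provider started",
  "[msteams] provider exited",
  "[msteams] provider started",
];
console.log(looksLikeRestartLoop(looping, "provider started")); // true
```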
**Secure remediation checklist that worked best for us**

(Generalized to avoid host-specific details)

1) Eliminate package duplication first
2) Validate Teams auth fields completely
3) Restart and verify by behavior, not process state
4) If still looping, classify as likely upstream lifecycle bug
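Step 2 of the checklist can be automated. A hypothetical config check (the field names below are illustrative, not the real msteams extension schema): reporting every missing or blank credential field at once avoids the fix-one-field, restart, crash-again cycle.

```typescript
// Assumed field names for illustration only -- substitute the actual
// credential keys your Teams provider configuration requires.
const REQUIRED_AUTH_FIELDS = ["appId", "appPassword", "tenantId"] as const;

function missingAuthFields(
  config: Record<string, string | undefined>,
): string[] {
  // Treat undefined, empty, and whitespace-only values as missing.
  return REQUIRED_AUTH_FIELDS.filter((key) => !config[key]?.trim());
}

const gaps = missingAuthFields({
  appId: "abc",
  appPassword: " ",
  tenantId: undefined,
});
console.log(gaps); // ["appPassword", "tenantId"]
```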
**What to include in repro data (high value for maintainers)**

Recommend sharing these artifacts (all sanitized):
**Suggested maintainer-facing acceptance test**

A robust guardrail test for this bug class would assert:
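A sketch of such a guardrail test, written against a stand-in monitor so it is self-contained (`startFakeMonitor` is a hypothetical helper standing in for the real `monitorMSTeamsProvider`): a successful startup must not settle the monitor promise; only a shutdown request may.

```typescript
// Stand-in for the real monitor: "startup" succeeds immediately, but the
// promise stays pending until shutdown is requested via the abort signal.
function startFakeMonitor(opts: { abortSignal: AbortSignal }): Promise<void> {
  return new Promise<void>((resolve) => {
    opts.abortSignal.addEventListener("abort", () => resolve(), { once: true });
  });
}

async function guardrailTest(): Promise<void> {
  const ac = new AbortController();
  let settled = false;
  const monitor = startFakeMonitor({ abortSignal: ac.signal }).finally(() => {
    settled = true;
  });

  // Give startup time to complete; the promise must still be pending.
  await new Promise((r) => setTimeout(r, 50));
  if (settled) throw new Error("monitor settled after startup: restart-loop bug");

  ac.abort(); // request shutdown
  await monitor;
  console.log("monitor settled only after shutdown");
}

guardrailTest();
```

The same assertions translate directly into a vitest case like the `monitor.lifecycle.test.ts` run listed under Tests.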
**Current status from this field report**
If helpful, I can provide a compact sanitized log excerpt in a follow-up showing the exact line ordering (startup → resolution → restart) without any identifiers.
I've validated this PR against latest main and it resolves the restart loop for our MSTeams deployment. This is currently blocking our testing of the MSTeams provider. Is there anything preventing a merge?
I posted an update to fix the Teams issue three weeks ago - #13089. I even got Peter to reply to a request to review Teams issues and PRs, but that hasn't happened. Sadly, Teams isn't a priority - it has been broken for over a month and there have been dozens of issues and PRs submitted.
Thank you all! I'm back up and running with msTeams.
Summary
- Keep `monitorMSTeamsProvider` alive until server close so the channel manager does not treat startup as an immediate exit
- Fail fast on startup errors (e.g. `EADDRINUSE`) so restart backoff can classify a real startup failure

Fixes #24522.
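A hedged sketch of the lifecycle shape the summary describes, with illustrative names (`monitorProvider` and its options are not the actual openclaw source): the returned promise stays pending while the server is healthy, resolves only when the server closes, and rejects fast on startup errors such as `EADDRINUSE`.

```typescript
import * as http from "node:http";

// Illustrative monitor: promise lifetime tracks the server's lifetime.
function monitorProvider(opts: {
  port: number;
  abortSignal?: AbortSignal;
}): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const server = http.createServer((_req, res) => res.end("ok"));

    // Startup failure (e.g. the port is already bound) rejects immediately,
    // so restart backoff can classify it as a real startup failure.
    server.once("error", reject);

    server.listen(opts.port, () => {
      // Startup succeeded: do NOT resolve here. Resolving now is exactly
      // the bug -- the channel manager would treat a healthy startup as an
      // immediate exit and enter a restart loop.
      server.on("close", () => resolve());
      opts.abortSignal?.addEventListener("abort", () => server.close(), {
        once: true,
      });
    });
  });
}
```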
Tests
`corepack pnpm --dir /tmp/openclaw-next4-CwcfMD exec vitest run extensions/msteams/src/monitor.lifecycle.test.ts src/gateway/server-channels.test.ts`

Greptile Summary
Fixed a lifecycle issue where `monitorMSTeamsProvider` returned immediately after startup, causing the channel manager to treat it as an exit and trigger restart loops. The fix keeps the monitor promise pending until the HTTP server closes (via abort signal or shutdown), and fails fast on startup errors like `EADDRINUSE`.

Confidence Score: 5/5
Last reviewed commit: d839de4