Telegram health-monitor stale-socket restart every ~35min on 2026.3.2 (post-#10795 fix) #38395
Closed
Description
After upgrading to OpenClaw 2026.3.2 (which included the undici dispatcher fix for #10795), the Telegram health-monitor is restarting the long-polling connection every ~35 minutes with reason: stale-socket. The restart cycle is persistent and has been running continuously since the upgrade.
Telegram recovers each time (~1 sec), and no messages are dropped — but the cycle never stops.
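For context, a fixed idle threshold in a watchdog would reproduce exactly this pattern. A minimal sketch (hypothetical, not OpenClaw's actual implementation; all names here are made up for illustration):

```javascript
// Hypothetical watchdog sketch, NOT OpenClaw's real code: the monitor flags a
// socket as stale once no traffic has been seen for `staleAfterMs`. A fixed
// threshold like this produces restarts on a near-constant cadence when idle.
function isStale(lastActivityMs, nowMs, staleAfterMs) {
  return nowMs - lastActivityMs >= staleAfterMs;
}

function makeWatchdog({ staleAfterMs, checkEveryMs, restart }) {
  let lastActivity = Date.now();
  const timer = setInterval(() => {
    if (isStale(lastActivity, Date.now(), staleAfterMs)) {
      restart("stale-socket");   // same reason string as in the logs
      lastActivity = Date.now(); // the cycle then repeats ~staleAfterMs later
    }
  }, checkEveryMs);
  return {
    touch: () => { lastActivity = Date.now(); }, // call on any socket traffic
    stop: () => clearInterval(timer),
  };
}
```

If the dispatcher change in 2026.3.2 means long-poll traffic no longer resets whatever plays the role of `touch()` here, a restart every ~35 min would fall out naturally.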
Environment
- OpenClaw: 2026.3.2
- Node: v22.22.0
- OS: Debian Bookworm (Raspberry Pi 4, aarch64, kernel 6.12.62+rpt-rpi-v8)
- Telegram: Long-polling mode (bot API)
Reproduction
Persistent — recurs at a ~35 min interval without any user interaction. Started immediately after upgrading to 2026.3.2 on the evening of March 5, 2026. No config changes were made alongside the upgrade.
Logs
30+ consecutive restarts with consistent ~30-35 min intervals:
Mar 06 01:40:18 [health-monitor] [telegram:default] restarting (reason: stale-socket)
Mar 06 02:15:18 [health-monitor] [telegram:default] restarting (reason: stale-socket)
Mar 06 02:50:18 [health-monitor] [telegram:default] restarting (reason: stale-socket)
Mar 06 03:25:18 [health-monitor] [telegram:default] restarting (reason: stale-socket)
Mar 06 03:55:18 [health-monitor] [telegram:default] restarting (reason: stale-socket)
... (30+ total in 20 hours)
Mar 06 17:30:18 [health-monitor] [telegram:default] restarting (reason: stale-socket)
After each restart, the provider recovers immediately:
Mar 06 16:55:19 [telegram] [default] starting provider (@sj_clawdbot_bot)
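The spacing of the quoted restarts can be checked mechanically; the timestamps below are copied from the excerpt above (year assumed to be 2026):

```javascript
// Intervals (in minutes) between the restart timestamps quoted above.
const restarts = [
  "2026-03-06T01:40:18",
  "2026-03-06T02:15:18",
  "2026-03-06T02:50:18",
  "2026-03-06T03:25:18",
  "2026-03-06T03:55:18",
];

const intervalsMin = restarts
  .slice(1)
  .map((t, i) => (Date.parse(t) - Date.parse(restarts[i])) / 60000);

console.log(intervalsMin); // [ 35, 35, 35, 30 ]
```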
Notes
- The ~35 min interval is remarkably consistent, suggesting a timer or timeout threshold in the health-monitor stale-socket detection
- The undici dispatcher fix in 2026.3.2 (#10795, "Telegram fails on servers without IPv6 (Node 22+ undici ignores net.setDefaultAutoSelectFamily)") may have changed socket lifecycle behavior in a way that triggers the stale-socket heuristic on a regular cadence
- No messages are lost — the health-monitor catches and recovers each time — but this is clearly not intended behavior
- Did not occur on the version in use prior to the 2026.3.2 upgrade
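To make the fixed-timer hypothesis falsifiable, here is a small helper (hypothetical, not an existing OpenClaw API) that flags a suspiciously regular restart cadence, which would point at a threshold firing on schedule rather than genuine network failures:

```javascript
// Hypothetical diagnostic: record restart times and report whether the gaps
// between them all fall within `toleranceMs` of each other.
function cadenceDetector(toleranceMs = 5 * 60000) {
  const times = [];
  return {
    record(tMs) { times.push(tMs); },
    isRegular() {
      if (times.length < 3) return false; // need at least two gaps
      const gaps = times.slice(1).map((t, i) => t - times[i]);
      return Math.max(...gaps) - Math.min(...gaps) <= toleranceMs;
    },
  };
}
```

Fed the restart times from the logs above (gaps of 35, 35, 35, 30 min), this would report a regular cadence; a run of genuinely network-driven failures almost certainly would not.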