Title
Cron repeatedly replays old systemEvent reminders ("empty-heartbeat-file", status=skipped) even after jobs.json reset
Description
I’m running Clawdbot locally on a Mac and hit a strange cron behavior.
I created a couple of one-shot cron jobs:
- A “wake up” reminder at 8:00 AM
  • payload.kind = "systemEvent"
  • text: ⏰ 提醒:早上8点了,该起床啦!这是你昨晚设的闹钟。新的一天,加油!🌸 (“Reminder: it’s 8:00 AM, time to get up! This is the alarm you set last night. Here’s to a new day! 🌸”)
- A “5 minutes later” reminder
  • payload.kind = "systemEvent"
  • text: ⏰ 提醒:5 分钟到了,这是你 5 分钟前让我喊你的提醒。 (“Reminder: 5 minutes are up; this is the reminder you asked me to give you 5 minutes ago.”)
After that, even when I:
• Edited ~/.clawdbot/cron/jobs.json to an empty jobs: []
• Flipped cron.enabled off and back on
• Completely removed the cron store with rm -rf ~/.clawdbot/cron and recreated a fresh jobs.json with {"version":1,"jobs":[]}
…the Gateway kept injecting the same two systemEvent texts into the main session, repeatedly:
• 早上8点了,该起床啦!这是你昨晚设的闹钟。新的一天,加油!🌸 (“It’s 8:00 AM, time to get up! This is the alarm you set last night. Here’s to a new day! 🌸”)
• 5 分钟到了,这是你 5 分钟前让我喊你的提醒。 (“5 minutes are up; this is the reminder you asked me to give you 5 minutes ago.”)
It looks like old systemEvent / run state is being replayed over and over, even after the job store is nuked.
Environment
• Host: local Mac
• OS: macOS 15 (arm64)
• Clawdbot version: 2026.1.24-3
• Gateway:
  • mode: "local"
  • bind: "loopback"
  • port: 18789
• Workspace: /Users/shuo/clawd
• Current config snippet (relevant parts):
{
"cron": {
"enabled": false // currently disabled to stop the spam
},
"channels": {
"whatsapp": { ... },
"telegram": { ... }
},
"gateway": {
"port": 18789,
"mode": "local",
"bind": "loopback"
}
}
jobs.json content (keeps coming back)
Multiple times I found ~/.clawdbot/cron/jobs.json rewritten back to these two jobs, even after I had manually set jobs: []:
{
"version": 1,
"jobs": [
{
"id": "639fea0f-2689-467c-9aa9-053c0197fc17",
"agentId": "main",
"name": "wake-shuo-morning",
"enabled": true,
"deleteAfterRun": true,
"createdAtMs": 1769492074650,
"updatedAtMs": 1769660949582,
"schedule": {
"kind": "at",
"atMs": 1769518800000
},
"sessionTarget": "main",
"wakeMode": "now",
"payload": {
"kind": "systemEvent",
"text": "⏰ 提醒:早上8点了,该起床啦!这是你昨晚设的闹钟。新的一天,加油!🌸"
},
"state": {
"nextRunAtMs": 1769518800000,
"lastError": "empty-heartbeat-file",
"lastRunAtMs": 1769660949582,
"lastStatus": "skipped",
"lastDurationMs": 0
}
},
{
"id": "52c7d93e-041c-4175-b24d-f16d0308e05a",
"agentId": "main",
"name": "remind-in-5-min",
"enabled": true,
"deleteAfterRun": true,
"createdAtMs": 1769573546325,
"updatedAtMs": 1769660949582,
"schedule": {
"kind": "at",
"atMs": 1769573826000
},
"sessionTarget": "main",
"wakeMode": "now",
"payload": {
"kind": "systemEvent",
"text": "⏰ 提醒:5 分钟到了,这是你 5 分钟前让我喊你的提醒。"
},
"state": {
"nextRunAtMs": 1769573826000,
"lastError": "empty-heartbeat-file",
"lastRunAtMs": 1769660949582,
"lastStatus": "skipped",
"lastDurationMs": 1
}
}
]
}
Notable:
• Both jobs have deleteAfterRun: true
• lastError is always "empty-heartbeat-file"
• lastStatus is always "skipped"
• Despite that, the jobs don’t disappear, and their systemEvent texts keep appearing in the main session.
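For quick inspection while reproducing, the relevant fields can be dumped with plain jq (assuming jq is installed; generic tooling, not part of Clawdbot):
jq '.jobs[] | {id, name, enabled, deleteAfterRun, lastStatus: .state.lastStatus, lastError: .state.lastError}' ~/.clawdbot/cron/jobs.json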
cron status
At one point (before nuking the whole cron dir), clawdbot cron status returned something like:
{
"enabled": true,
"storePath": "/Users/shuo/.clawdbot/cron/jobs.json",
"jobs": 2,
"nextWakeAtMs": ...
}
Right now, after I nuked the store and disabled cron:
clawdbot cron status
{
"enabled": false,
"storePath": "/Users/shuo/.clawdbot/cron/jobs.json",
"jobs": 0,
"nextWakeAtMs": null
}
…and jobs.json is just:
{"version":1,"jobs":[]}
Gateway timeouts when trying to remove jobs
When I tried to remove the jobs via CLI:
clawdbot cron remove 639fea0f-...
clawdbot cron remove 52c7d93e-...
I sometimes get:
Gateway timeout after 10000ms
Gateway target: ws://127.0.0.1:18789
Source: local loopback
Config: /Users/shuo/.clawdbot/clawdbot.json
Bind: loopback
And the jobs still end up present in jobs.json again later.
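For what it’s worth, standard macOS checks can tell a dead Gateway from a hung one when a timeout hits (nothing Clawdbot-specific):
nc -z 127.0.0.1 18789 && echo "port open" || echo "port closed"
lsof -nP -iTCP:18789 -sTCP:LISTEN   # shows which process, if any, owns the port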
What I did to “stop the bleeding”
To stop the repeated systemEvent spam, I ended up:
- rm -rf ~/.clawdbot/cron
- Recreated a fresh jobs.json with {"version":1,"jobs":[]}
- Set "cron.enabled": false in config
After that, the spam stopped.
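As shell commands, the workaround was roughly the following (the cron.enabled edit was done by hand in /Users/shuo/.clawdbot/clawdbot.json):
rm -rf ~/.clawdbot/cron
mkdir -p ~/.clawdbot/cron
printf '%s' '{"version":1,"jobs":[]}' > ~/.clawdbot/cron/jobs.json
# then set "cron": { "enabled": false } in ~/.clawdbot/clawdbot.json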
Expectations vs actual behavior
Expected:
• One-shot at-schedule jobs (schedule.kind: "at") with deleteAfterRun: true should be removed (or at least not retried indefinitely) once they’ve run or failed.
• With lastError = "empty-heartbeat-file" and no heartbeat configured, I’d expect the systemEvent to maybe fail once, but not be endlessly replayed into the main session.
• Clearing jobs.json and even deleting ~/.clawdbot/cron should really reset cron state.
Actual:
• The two one-shot jobs persisted in jobs.json with lastStatus: "skipped" and kept generating systemEvent messages.
• Editing jobs.json to jobs: [] only worked temporarily; something rewrote the jobs back.
• clawdbot cron remove sometimes timed out to the Gateway.
• Only deleting the entire cron directory and disabling cron stopped the behavior.
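Since “something rewrote the jobs back” is the crux, a file watcher might catch the writer in the act. A minimal sketch, assuming fswatch and jq are installed (e.g. via Homebrew); again generic macOS tooling, not part of Clawdbot:
fswatch -x ~/.clawdbot/cron/jobs.json | while read -r path events; do
  echo "$(date '+%H:%M:%S') rewrite detected: $events"
  jq -c '[.jobs[].id]' "$path" 2>/dev/null   # which job ids came back
done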
Questions
- What is the intended behavior for jobs with lastError: "empty-heartbeat-file" and lastStatus: "skipped"? Should they auto-clean when deleteAfterRun: true is set?
- Is there another place (e.g., runs log / pending queue) that can rehydrate jobs or replay systemEvents even after the cron store is removed?
- In a setup without a configured heartbeat, is using sessionTarget: "main" + payload.kind: "systemEvent" a misuse? Should I instead be using sessionTarget: "isolated" + payload.kind: "agentTurn" + deliver for reminders? (A guessed sketch of what I mean follows these questions.)
- Given the state above, what’s the recommended “clean reset” procedure for cron so I can safely start using it again?
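For question 3 above, this is the job shape I’m guessing at, purely extrapolated from the job JSON earlier in this report; the agentTurn payload fields and the deliver key are assumptions I have not verified against the real schema:
{
  "schedule": { "kind": "at", "atMs": 1769518800000 },
  "sessionTarget": "isolated",
  "wakeMode": "now",
  "deleteAfterRun": true,
  "payload": {
    "kind": "agentTurn",
    "text": "wake-up reminder text here"
  },
  "deliver": true
}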