Log cleanup deletes newest log files instead of oldest (path-scurry reverse ordering) #14731
Description
Summary
The log cleanup function in packages/opencode/src/util/log.ts deletes the newest log files and keeps the oldest ones — the exact opposite of the intended behavior. This means actively-written log files from running OpenCode instances get deleted while stale logs from days ago accumulate on disk.
Root Cause
Two bugs in cleanup():
Bug 1: Missing sort — glob() / path-scurry returns newest-first
```ts
async function cleanup(dir: string) {
  const files = await Glob.scan("????-??-??T??????.log", {
    cwd: dir,
    absolute: true,
    include: "file",
  })
  if (files.length <= 5) return
  const filesToDelete = files.slice(0, -10) // ← assumes oldest-first order
  await Promise.all(filesToDelete.map((file) => fs.unlink(file).catch(() => {})))
}
```

Glob.scan() wraps the npm glob package, which uses path-scurry v2 for directory walking. path-scurry's readdirSync() returns entries in reverse order compared to native fs.readdirSync():
```
// path-scurry readdir order (what glob returns):
 0: 2026-02-22T233952.log  ← NEWEST
 1: 2026-02-20T002649.log
 ...
10: 2026-02-19T214906.log  ← OLDEST

// Native fs.readdirSync order:
 0: 2026-02-19T214906.log  ← OLDEST
 ...
10: 2026-02-22T233952.log  ← NEWEST
```
Since files.slice(0, -10) takes from the front, it selects the newest files for deletion.
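A small self-contained sketch (with hypothetical filenames) of why the front-of-array slice goes wrong under newest-first ordering:

```typescript
// Twelve hypothetical ISO-8601 log filenames in newest-first order,
// mimicking the order path-scurry hands back on the affected system.
const newestFirst = Array.from(
  { length: 12 },
  (_, i) => `2026-02-${String(22 - i).padStart(2, "0")}T120000.log`,
)

// slice(0, -10) takes from the FRONT of the array, so under this
// ordering it selects the two NEWEST files for deletion.
const toDelete = newestFirst.slice(0, -10)
// toDelete = ["2026-02-22T120000.log", "2026-02-21T120000.log"]
```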
Bug 2: Guard threshold mismatch
The guard if (files.length <= 5) return doesn't match slice(0, -10), which keeps 10 files. When there are 6–10 files, execution proceeds past the guard, but slice(0, -10) returns an empty array, so nothing is deleted. Harmless, but inconsistent.
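The mismatch is easy to see with a hypothetical seven-file listing: the guard doesn't fire, yet the slice is empty:

```typescript
// Seven hypothetical filenames — more than the guard's 5,
// fewer than the slice's 10.
const files = ["a.log", "b.log", "c.log", "d.log", "e.log", "f.log", "g.log"]

// 7 > 5, so `if (files.length <= 5) return` does not return early...
const guardReturnsEarly = files.length <= 5

// ...but slicing off everything except the last 10 of 7 elements
// yields an empty array — nothing to delete.
const filesToDelete = files.slice(0, -10)
```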
Observable Impact
On a system with multiple concurrent OpenCode instances:
- On disk: 10 stale log files from days ago survive indefinitely
- Deleted: Every new log file created by a fresh OpenCode instance is immediately deleted by cleanup
- Result: lsof shows all running instances writing to (deleted) inodes, and grep against the log directory finds nothing from current sessions
Example from a real system with 4 concurrent OpenCode instances:
```
$ ls ~/.local/share/opencode/log/
2026-02-19T214906.log   ← 3 days old, still here
2026-02-19T225349.log   ← 3 days old, still here
... (10 old files)

$ lsof -p <opencode_pid> | grep log
...opencode/log/2026-02-22T223648.log (deleted)   ← today, deleted
...opencode/log/2026-02-22T224549.log (deleted)   ← today, deleted
```
Reproduction
```ts
import { glob } from "glob"

const files = await glob("????-??-??T??????.log", {
  cwd: "/path/to/opencode/log",
  absolute: true,
  nodir: true,
})
// files[0] is the NEWEST file, not the oldest
// files.slice(0, -10) deletes the newest files
```

Tested on btrfs (Linux), glob v13.0.6, path-scurry v2.0.2. The ordering depends on the filesystem and path-scurry's internal directory walking, which does not guarantee any particular sort order.
Fix
Sort the file list before slicing, and align the guard threshold:
```ts
async function cleanup(dir: string) {
  const files = await Glob.scan("????-??-??T??????.log", {
    cwd: dir,
    absolute: true,
    include: "file",
  })
  if (files.length <= 10) return // fix: was <= 5
  files.sort() // fix: ISO-8601 filenames sort chronologically
  const filesToDelete = files.slice(0, -10)
  await Promise.all(filesToDelete.map((file) => fs.unlink(file).catch(() => {})))
}
```

Secondary consideration
cleanup() is called without await inside Log.init(), which means it runs concurrently with log file creation. When multiple OpenCode instances start simultaneously, multiple cleanups race. The files.sort() fix resolves the correctness issue regardless of race ordering, since even concurrent cleanups will always target old files rather than new ones.
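To illustrate the race tolerance, here is a standalone sketch of the corrected cleanup using node:fs/promises directly (not the project's Glob wrapper — the filter regex and `keep` parameter are illustrative): every concurrent instance computes the same oldest-first deletion list, and .catch(() => {}) absorbs the ENOENT thrown to whichever instance loses an unlink race.

```typescript
import * as fs from "node:fs/promises"
import * as path from "node:path"

// Keep the newest `keep` log files; delete the rest.
// Safe to run concurrently from multiple instances.
async function cleanup(dir: string, keep = 10): Promise<void> {
  const entries = await fs.readdir(dir)
  // ISO-8601-named logs: lexicographic sort == chronological sort
  const files = entries
    .filter((f) => /^\d{4}-\d{2}-\d{2}T\d{6}\.log$/.test(f))
    .sort()
  if (files.length <= keep) return
  await Promise.all(
    files
      .slice(0, -keep) // now genuinely the oldest files
      .map((f) => fs.unlink(path.join(dir, f)).catch(() => {})),
  )
}
```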