Problem
uv tool install --reinstall memtomem (the canonical upgrade path) replaces only the on-disk bytes — ~/.local/share/uv/tools/memtomem/lib/python*/site-packages/memtomem/. It does not touch any currently-running memtomem-server / mm process. Python caches module bytecode in memory at import time, so those processes keep executing the old code until they exit.
In today's ecosystem a typical user has memtomem-server running continuously under their MCP client (Claude Code keeps the server alive across prompts). So a plain uv tool install --reinstall produces a subtle split-brain:
| Process | Memory code | Disk code |
| --- | --- | --- |
| Running memtomem-server from before upgrade | N (old) | N+1 (new) |
| New memtomem-server spawned after upgrade | N+1 | N+1 |
If the pre-upgrade version had a teardown bug (e.g. 0.1.25's "legacy .server.pid is not unlinked on exit" — fixed in #439 / v0.1.26), the old in-memory process continues to exhibit that bug until it dies, potentially leaking artefacts that confuse the new version's startup path. Today's live repro (orphan ~/.memtomem/.server.pid after the 0.1.26 upgrade) traced back exactly to this: a 0.1.25 server still running in memory even after the disk bytes had been updated to 0.1.26.
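The in-memory vs on-disk split can be reproduced without memtomem at all; a minimal, self-contained sketch of the underlying Python behaviour (the module name `demo_pkg` is made up for the demo):

```python
import pathlib, subprocess, sys, tempfile

# Demonstrate that a running interpreter keeps executing the code it
# imported, even after the file on disk is replaced -- the same
# mechanism that makes `uv tool install --reinstall` invisible to a
# live memtomem-server process.
with tempfile.TemporaryDirectory() as d:
    mod = pathlib.Path(d) / "demo_pkg.py"
    mod.write_text("VERSION = 'N'\n")
    sys.path.insert(0, d)
    import demo_pkg                      # bytecode now cached in sys.modules
    mod.write_text("VERSION = 'N+1'\n")  # the "upgrade": only on-disk bytes change
    in_memory = demo_pkg.VERSION         # still 'N' -- old code keeps running
    # A *new* process (analogous to a freshly spawned server) sees N+1:
    on_disk = subprocess.run(
        [sys.executable, "-c", "import demo_pkg; print(demo_pkg.VERSION)"],
        cwd=d, capture_output=True, text=True,
    ).stdout.strip()
    print(in_memory, on_disk)  # N N+1
```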
Proposal
Add an mm upgrade command (or extend an mm doctor --fix if we add that umbrella later) that wraps the upgrade with process-level hygiene:
1. Detect live instances. Reuse the liveness probe from mm uninstall (~/.memtomem/.server.pid + optionally scan for other writers). mm uninstall already treats this as "refuse to delete state while anything is alive"; mm upgrade instead treats it as "stop them first, then proceed".
2. Stop them gracefully. SIGTERM, wait N seconds, escalate to SIGKILL if still alive. _install_sigterm_handler (#439) already makes SIGTERM a clean-exit path, so the graceful case leaves no stale pid files.
3. Clear stale lock, if any. rm -f ~/.memtomem/.server.pid after processes are gone.
4. Run the reinstall. uv tool install --refresh memtomem==<version> (or @latest). Respect --refresh since uv's index cache can serve stale versions (see feedback_uv_index_cache_lag.md).
5. Exit code semantics: 0 on success, non-zero on any step failing. Print the PIDs killed and the file removed so users see what was cleaned up.
Opt-out flags:
- --skip-pkill — only run the reinstall, don't touch running processes (for advanced users who manage lifecycle elsewhere).
- --version X.Y.Z — pin a specific version (default: latest on configured index).
Alternatives considered
Document the hygiene step in README / CHANGELOG. Rejected: documentation alone can't be load-bearing here. A docs-only "remember to pkill after upgrade" note is easy to miss, and the failure mode is silent (the user later sees MCP "failed to connect", not "you forgot to kill the old process").
Make the new server take over the stale file on startup. Dangerous if the other holder is actually alive (two writers → WAL corruption risk). The current _try_hold_legacy_flock correctly refuses; the fix should be at the lifecycle layer, not relaxing the guard.
Rely on #442's parent-death watchdog (feat(server): parent-death watchdog self-SIGTERMs on reparent, closes #440). That would catch orphans over time (10 s polling), but doesn't help when the old process's parent is still alive (Claude Code holding onto a pre-upgrade server). mm upgrade is a proactive cleanup, complementary to any runtime watchdog.
Scope / sizing
Medium. New CLI command file (~100 LOC modelled on mm uninstall), shared helper for liveness detection, 5-ish tests. The closest existing pattern is uninstall_cmd.py — it already handles the "is something alive that shouldn't be" probe and the "refuse vs proceed" branches. Most of the work is threading the probe into an "okay to kill, then reinstall" flow.
Out of scope
Auto-upgrade on MCP server startup. Too magical for a long-running daemon.
Replacing uv tool install as the base mechanism. uv tool install is the right primitive; mm upgrade just adds the hygiene wrapper.
Context
PR #439 (fix(server): unlink legacy .server.pid on atexit and SIGTERM, closes #437) / v0.1.26 fixed the stale-file class of bugs at the server level. mm upgrade addresses the same class of issues at the upgrade level: even if every future version has perfect teardown, the N → N+1 transition for a currently-running server still needs an external cleanup step that the CLI should provide.