Description
When the Stop hook fires during an active `mempalace mine` session, it launches additional mining processes against the same palace without checking whether a mine is already running. The result is multiple concurrent writers against ChromaDB, which corrupts the HNSW index (segfault, exit code 139) and puts extreme CPU/thermal load on the host machine.
Setup
Machine
- MacBook Pro 14" — Apple M5 Pro (15 cores: 5 efficiency + 10 performance)
- 24 GB RAM
- macOS Tahoe 26.4.1
MemPalace
- mempalace 3.3.3 (installed via `uv tool install mempalace`)
- chromadb 1.5.8
- Python 3.11 (uv-managed venv at `~/.local/share/uv/tools/mempalace/`)
- MCP server: `mempalace-mcp` binary at `~/.local/bin/mempalace-mcp`
Claude Code hooks configuration (`~/.claude/settings.json`)
The Stop and PreCompact hooks are configured as recommended in the MemPalace docs, using the built-in `mempalace hook run` CLI subcommand:
```json
{
  "hooks": {
    "Stop": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "/Users/me/.local/bin/mempalace hook run --hook stop --harness claude-code",
            "timeout": 30
          }
        ]
      }
    ],
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "/Users/me/.local/bin/mempalace hook run --hook precompact --harness claude-code",
            "timeout": 30
          }
        ]
      }
    ]
  }
}
```
Note: there are also other hooks on these events (notification sounds, session auto-rename, etc.) — the mempalace hooks coexist with those.
Steps to reproduce
- Install mempalace via `uv tool install mempalace`
- Configure Stop + PreCompact hooks as above
- Start a manual mine on a project: `mempalace mine ~/code/my-project`
- In the same Claude Code session, continue working — the Stop hook fires after 15 interactions
- The hook launches additional mining processes against `~/.mempalace/palace`
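For reference, the unguarded path presumably looks something like the sketch below. This is a reconstruction for illustration only, not the actual mempalace source; only the `hooks_cli.py` and `_maybe_auto_ingest` names come from the codebase.

```python
import subprocess

def _maybe_auto_ingest(project_dir: str) -> None:
    """Hypothetical reconstruction of the hook's spawn path.

    There is no check here for an already-running mine, so every Stop
    hook invocation adds another writer against the same palace.
    """
    subprocess.Popen(["mempalace", "mine", project_dir])
```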
Observed behavior
Three `mempalace mine` processes running concurrently against the same palace:
```
PID 61734   98.6% CPU   mempalace mine ~/code/my-project
PID 63386  100.0% CPU   mempalace mine ~/.claude/projects/...
PID 63385   97.6% CPU   mempalace mine ~/.claude/projects/... --mode convos --wing sessions
```
Thermal impact
- CPU load jumped from ~25% to 53% sustained
- CPU temperature went from 77°C to 92°C (macOS "intervene before damage" warning)
- Total CPU usage by parent process (cmux): 623% (across multiple cores)
- Machine: M5 Pro with 15 cores — the mining processes saturated 3 full cores
Corruption
After the concurrent mining:
- `mempalace status` → exit code 139 (SIGSEGV)
- `mempalace repair` → exit code 139 (SIGSEGV)
- `mempalace search` → `"error": "Search error: Error executing plan: Internal error: Error finding id"`
- MCP server disconnected (`Connection closed`)
- Only recovery: delete `~/.mempalace/palace` entirely and re-mine from scratch
Expected behavior
The hook should acquire a global lock (PID file, `flock`, or similar) before launching any mining subprocess. If a mine is already active, the hook should do one of the following:
- Skip the mine and only perform the diary/drawer save via MCP tools
- Queue the mine for after the current one completes
- Exit cleanly with a log message
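As a minimal sketch of the skip-and-exit variant using the PID-file approach (the path and log wording below are illustrative, not mempalace's actual internals): the running miner writes its PID to a file, and the hook tests whether that process is still alive before spawning anything.

```python
import os
import sys

# Hypothetical location; mempalace would derive this from its palace dir.
PID_FILE = os.path.expanduser("~/.mempalace/palace/.mine.pid")

def mine_is_running() -> bool:
    """True if the PID recorded in the pid file points at a live process."""
    try:
        with open(PID_FILE) as f:
            pid = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return False  # no pid file, or garbage in it: treat as not running
    try:
        os.kill(pid, 0)  # signal 0 checks existence without sending anything
    except ProcessLookupError:
        return False  # stale pid file left behind by a dead miner
    except PermissionError:
        return True   # process exists but belongs to another user
    return True

if mine_is_running():
    print("mine already running; skipping hook-triggered mine", file=sys.stderr)
    sys.exit(0)  # exit cleanly rather than spawn a second writer
```

A PID file can go stale if the miner dies without cleaning up (handled above by probing the PID); the `flock`-based variant sketched under the suggested fix below avoids that entirely, since the kernel drops the lock the moment the holding process exits.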
Suggested fix
A palace-level lock file (e.g. `~/.mempalace/palace/.mine.lock`) checked by both:
- `mempalace mine` CLI — acquire before starting, release on exit
- `hooks_cli.py` — check before spawning `_maybe_auto_ingest`
This would prevent the concurrent write scenario regardless of whether the mine is manual or hook-triggered.
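A sketch of what that shared helper could look like, assuming a POSIX host (`fcntl` is unavailable on Windows); `palace_mine_lock` and its call sites are invented names for illustration, not mempalace's actual API:

```python
import contextlib
import fcntl
import os

@contextlib.contextmanager
def palace_mine_lock(palace_dir: str, blocking: bool = True):
    """Hold an exclusive flock on <palace_dir>/.mine.lock for the duration.

    The kernel releases an flock automatically when the holding process
    exits, so a crashed mine can never leave the palace permanently locked.
    """
    fd = os.open(os.path.join(palace_dir, ".mine.lock"),
                 os.O_CREAT | os.O_RDWR, 0o644)
    flags = fcntl.LOCK_EX if blocking else fcntl.LOCK_EX | fcntl.LOCK_NB
    try:
        fcntl.flock(fd, flags)  # raises BlockingIOError in non-blocking mode
        yield
    finally:
        os.close(fd)  # closing the fd releases the lock

# mempalace mine CLI: wait for any running mine, then proceed.
#     with palace_mine_lock(palace_dir):
#         run_mine(...)
#
# hooks_cli.py: never wait; skip auto-ingest if a mine is already active.
#     try:
#         with palace_mine_lock(palace_dir, blocking=False):
#             _maybe_auto_ingest(...)
#     except BlockingIOError:
#         log.info("mine already running, skipping auto-ingest")
```

Using blocking mode in the CLI and non-blocking mode in the hook gives the queue and skip behaviors from the expected-behavior list with a single primitive.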
Related issues
- `bulk_check_mined()` pre-fetch
- #1088 — Concurrent mining proposal with ThreadPoolExecutor (single-process parallelism, different approach)
Environment
- mempalace 3.3.3
- chromadb 1.5.8
- Python 3.11.12 (uv)
- macOS Tahoe 26.4.1
- Apple M5 Pro (15 cores), 24 GB RAM
- Claude Code with cmux terminal multiplexer