Feature Request / Bug: No Storage Limit Handling or Disk-Full Graceful Degradation
Summary
MemPalace has no documented behavior or safeguards for when local disk storage runs out. Given that the system is designed for long-term, continuous memory accumulation (verbatim raw storage by design), this is a realistic production concern — especially for heavy users mining months of conversations.
Current Behavior
- No storage quota setting exists in `config.json` or `wing_config.json`
- No documented eviction policy, pruning strategy, or archival mechanism
- No `mempalace status` output for current palace disk usage
- ChromaDB (vector store) will throw hard write errors when disk is full — no graceful degradation
- SQLite (knowledge graph) behaves identically — hard failure on write
- This likely crashes the MCP server mid-session, with no user-friendly error surfaced to the AI or the user
- The only removal mechanism is `mempalace_delete_drawer` (single entry) — no bulk pruning command exists
Why This Matters
The README explicitly states:
> "Six months of daily AI use = 19.5 million tokens"
Raw verbatim storage scales with usage. A user mining multiple projects, Slack exports, and daily Claude Code sessions will accumulate significant data over time. There is currently no way to:
- Know how large the palace has grown (without running `du -sh ~/.mempalace/` manually)
- Set a size cap
- Automatically prune old or low-value entries
- Archive wings to cold storage
- Recover gracefully if a write fails mid-session
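The missing size report could be approximated by walking the palace directory. A minimal Python sketch, assuming `~/.mempalace/` holds one subdirectory per wing (that layout is an assumption, not documented MemPalace structure):

```python
import os

def palace_disk_usage(root: str) -> dict[str, int]:
    """Return per-wing and total disk usage (bytes) for a palace directory.

    Assumes each top-level subdirectory of `root` is a wing; this is an
    assumption about the on-disk layout, not confirmed MemPalace behavior.
    """
    usage: dict[str, int] = {}
    for wing in sorted(os.listdir(root)):
        wing_path = os.path.join(root, wing)
        if not os.path.isdir(wing_path):
            continue
        total = 0
        for dirpath, _dirnames, filenames in os.walk(wing_path):
            for name in filenames:
                total += os.path.getsize(os.path.join(dirpath, name))
        usage[wing] = total
    usage["_total"] = sum(v for k, v in usage.items() if k != "_total")
    return usage
```

Something like this behind `mempalace status` would remove the need to run `du` by hand.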
Expected / Requested Behavior
- `mempalace status` should report disk usage — total palace size, per-wing breakdown
- Graceful write failure handling — if ChromaDB or SQLite write fails due to disk full, surface a clear warning to the user/AI rather than crashing the MCP server
- Storage cap option in `config.json` — e.g. `"max_palace_size_gb": 10` with configurable behavior (warn / stop mining / auto-prune oldest)
- Bulk pruning command — e.g. `mempalace prune --wing myapp --older-than 180d`
- Archive command — export a wing to cold storage and remove from active palace
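On the graceful-failure point: SQLite signals a full disk with an `sqlite3.OperationalError` whose message is "database or disk is full", which a write wrapper could catch and surface as a warning instead of crashing the MCP server. A sketch of that pattern (`safe_write` is a hypothetical helper, not an existing MemPalace function):

```python
import sqlite3

def safe_write(conn: sqlite3.Connection, sql: str, params: tuple) -> bool:
    """Attempt a knowledge-graph write; degrade gracefully on disk-full.

    Hypothetical helper for illustration. Returns True on success, False
    if the write was dropped because the disk is full.
    """
    try:
        with conn:  # implicit transaction; rolls back on error
            conn.execute(sql, params)
        return True
    except sqlite3.OperationalError as exc:
        if "disk is full" in str(exc).lower():
            # Surface a clear warning to the user/AI instead of crashing
            print("WARNING: palace write failed (disk full). "
                  "Entry not stored; free space or prune old wings.")
            return False
        raise  # other operational errors should still propagate
```

A similar wrapper around ChromaDB writes would let the server report one degraded-mode warning per session rather than dying mid-conversation.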
Environment
- MemPalace v3.0.0
- ChromaDB (vector store)
- SQLite (knowledge graph)
- Use case: Claude Code MCP integration, continuous daily mining
Workaround (current)
Users must manually monitor disk usage with `du -sh ~/.mempalace/` and manually delete individual drawers via the MCP tool. This is not sustainable at scale.
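Until such features land, the manual workaround could at least be scripted. A sketch that reads the proposed `"max_palace_size_gb"` key from a config file and warns when the palace exceeds it (`check_palace_cap` and its arguments are hypothetical, not part of MemPalace):

```python
import json
import pathlib

def check_palace_cap(palace: str, config_path: str) -> bool:
    """Warn if the palace exceeds the (proposed) max_palace_size_gb cap.

    Hypothetical user-side monitor; defaults to a 10 GiB cap if the key
    is absent, mirroring the example value suggested above.
    """
    cfg = json.loads(pathlib.Path(config_path).read_text())
    cap_gb = cfg.get("max_palace_size_gb", 10)
    total = sum(f.stat().st_size
                for f in pathlib.Path(palace).rglob("*") if f.is_file())
    if total > cap_gb * 1024**3:
        print(f"WARNING: palace is {total / 1024**3:.1f} GiB, "
              f"over the {cap_gb} GiB cap")
        return False
    return True
```

Run from cron or a shell profile, this gives an early warning, but it cannot prevent the hard write failures described above.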