feat: Hermes memory provider integration #3
Closed
ZK-Snarky wants to merge 4 commits into MemPalace:main from
Conversation
Collaborator
Thanks for this — the Hermes integration looks solid. CI is failing on lint though, 3 issues in
A quick
added 2 commits on April 6, 2026 23:43
Author
Corrected and updated 🫡
igorls added a commit to igorls/mempalace that referenced this pull request on Apr 7, 2026
Prevents OOM when the palace grows large. The following unbounded metadata fetches now have a safety cap:
- tool_status: col.get(include=['metadatas'], limit=10000)
- tool_list_wings: same
- tool_list_rooms: same (including wing-filtered variant)
- tool_get_taxonomy: same
- Layer1.generate: col.get(include=['documents','metadatas'], limit=10000)

Layer2 already had a limit parameter — no change needed.

Finding: MemPalace#3 (CRITICAL — unbounded data fetching causes OOM)

Includes test infrastructure from PR MemPalace#131. 92 tests pass.
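For context, a minimal sketch of what the cap looks like on one of the listed call sites. Only the `col.get(include=['metadatas'], limit=10000)` shape is quoted from the commit; the `tool_status` body, the `MAX_FETCH` name, and the `wing` metadata key are assumptions for illustration.

```python
MAX_FETCH = 10_000  # hard ceiling so a huge palace can't be pulled into RAM at once

def tool_status(col):
    # Fetch metadata only (no documents/embeddings) and never more than MAX_FETCH rows.
    res = col.get(include=["metadatas"], limit=MAX_FETCH)
    metadatas = res.get("metadatas") or []
    wings = {m.get("wing") for m in metadatas if m}
    return {"drawers_sampled": len(metadatas), "wings": sorted(w for w in wings if w)}
```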
Collaborator
This has been open a while and conflicts with main. The Hermes integration space has also evolved. If you'd like to continue, a rebase + update to match the current codebase would be needed.
igorls added a commit to igorls/mempalace that referenced this pull request on Apr 7, 2026
This was referenced Apr 8, 2026
This was referenced Apr 9, 2026
Perseusxrltd added a commit to Perseusxrltd/mnemion that referenced this pull request on Apr 9, 2026
README:
- Version badge: 3.2.0 → 3.2.7
- New "What PerseusXR Added" sections: ManagedBackend (MemPalace#3) and Behavioral Protocol Bootstrap (MemPalace#4) with comparison table
- Quick Start: vLLM section updated to show auto-start workflow and mention mempalace llm start/stop; added SYSTEM_PROMPT.md reference after MCP server install
- Changelog: added v3.2.7 (protocol bootstrap + MCP prompts) and v3.2.5 (ManagedBackend) entries

RESEARCH.md:
- Contribution 5: ManagedBackend design — WSL process independence, DETACHED_PROCESS rationale, auto-start/stop/restart mechanics
- Contribution 6: Behavioral protocol bootstrap — the circular dependency problem, three-layer solution (tool descriptions, MCP prompts, SYSTEM_PROMPT.md), why CLAUDE.md is most reliable

.molthub/project.md:
- Version: 3.0.0-fused → 3.2.7
- Summary and description updated to reflect full v3.2 capabilities
- Tags: added MCP, Knowledge-Graph, Trust-Layer
- Latest milestone updated

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
brandonhon added a commit to brandonhon/mempalace that referenced this pull request on Apr 10, 2026
…tion

Addresses PR MemPalace#548 review feedback about scan amplification on large palaces. The previous implementation made up to six ChromaDB scans per clean operation:

1. count_drawers(drawers_col) — scan MemPalace#1
2. count_drawers(compressed_col) — scan MemPalace#2
3. delete_drawers(drawers_col) — internal get(where=...) scan MemPalace#3
4. delete_drawers(drawers_col) — delete(where=...) scan MemPalace#4
5. delete_drawers(compressed_col) — scan MemPalace#5
6. delete_drawers(compressed_col) — scan MemPalace#6

Each call was doing its own metadata filter over the collection, meaning a 100K-drawer palace paid the filter cost six times for a single cleanup. This refactor drops it to exactly two scans — one per collection — regardless of palace size.

Changes:
* palace.py: introduce `find_drawer_ids(col, wing, room)`, which returns the matching ID list in a single `get(where=..., include=[])` call. ChromaDB fetches only IDs — no documents, embeddings, or metadatas — so the scan is as cheap as ChromaDB can make it.
* palace.py: `count_drawers` is now a thin wrapper around `find_drawer_ids`. The old standalone `delete_drawers` helper is removed because its count-then-delete pattern is exactly what we are trying to avoid.
* cli.py::cmd_clean: call `find_drawer_ids` once per collection and reuse the returned ID lists for both the preview counts and the subsequent delete. Deletes go through `col.delete(ids=[...])`, which is an O(n) primary-key delete, not another metadata scan. Empty lists are guarded to stay compatible with ChromaDB versions that reject `delete()` with no ids or where filter.

Why two scans and not one: ChromaDB collections are independent — there is no cross-collection query. `mempalace_drawers` and `mempalace_compressed` must each be filtered separately. A single-scan variant would have to assume that compressed IDs are a strict subset of drawer IDs and delete compressed by the drawer IDs we already found, but `tool_delete_drawer` in mcp_server.py does not cascade to compressed, so real palaces can contain orphaned compressed rows whose drawer is already gone. Going to one scan would silently leak those orphans. Two scans is the minimum that preserves correctness.

Tests:
* New `test_find_drawer_ids_*` unit tests cover wing-only, wing+room, and no-match cases.
* New `test_find_drawer_ids_single_scan` monkey-patches `collection.get` to assert exactly one call.
* New `test_clean_scans_each_collection_exactly_once` is a regression test that wraps `chromadb.PersistentClient` and counts `where`-filtered `get()` calls per collection during a full `cmd_clean` invocation, failing if either collection is scanned more than once.
* The existing 15 CLI black-box tests stay identical — the behavior is unchanged, only the scan count dropped.

Full suite: 551 passed (was 549, +2 new perf regression tests).

Also in this commit: `-V` / `--version` flags, because every good app needs a version flag and we somehow shipped three minor releases without one. The installed version is now embedded in the `-h` description line too, so `mempalace -h` answers "what am I running?" without a separate invocation.
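A minimal sketch of the single-scan helper that commit describes. The `find_drawer_ids(col, wing, room)` signature and the `include=[]` ID-only fetch are quoted from the message; the where-clause shape and the `wing`/`room` metadata keys are assumptions.

```python
def find_drawer_ids(col, wing=None, room=None):
    """Return IDs of drawers matching the wing/room filter in one ChromaDB scan."""
    clauses = []
    if wing is not None:
        clauses.append({"wing": wing})
    if room is not None:
        clauses.append({"room": room})
    where = clauses[0] if len(clauses) == 1 else ({"$and": clauses} if clauses else None)
    # include=[] asks ChromaDB for IDs only: no documents, embeddings, or metadatas.
    res = col.get(where=where, include=[])
    return res["ids"]

def count_drawers(col, wing=None, room=None):
    # Thin wrapper: counting is just the length of the single-scan ID list.
    return len(find_drawer_ids(col, wing, room))
```

In cmd_clean the same ID list would then feed both the preview count and a `col.delete(ids=ids)` call, guarded against empty lists, so neither collection is scanned twice.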
jphein referenced this pull request in jphein/mempalace on Apr 11, 2026
Graph cache (TODO #2): build_graph() results are cached module-level with a 60s TTL, invalidated on add/delete/update via invalidate_graph_cache(). Eliminates redundant O(n) metadata scans during multi-tool MCP conversations (traverse + tunnels + stats = 3 scans → 1).

L1 optimization (TODO #3): _fetch_drawers() tries an importance >= 3 pre-filter first and only falls back to a full scan if fewer than 15 high-importance drawers exist. Avoids loading 2000 documents just to sort and take the top 15.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
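A rough sketch of the module-level TTL cache described above, assuming only what the commit states (60s TTL, invalidation from write paths). `build_graph` is shown as a placeholder; the real builder and the `get_cached_graph` name are assumptions.

```python
import time

_GRAPH_CACHE = {"graph": None, "built_at": 0.0}
_GRAPH_TTL_SECONDS = 60.0

def build_graph(col):
    # Placeholder for the real builder: one O(n) pass over collection metadata.
    res = col.get(include=["metadatas"])
    return {"nodes": len(res.get("ids", []))}

def get_cached_graph(col):
    now = time.monotonic()
    if _GRAPH_CACHE["graph"] is not None and now - _GRAPH_CACHE["built_at"] < _GRAPH_TTL_SECONDS:
        return _GRAPH_CACHE["graph"]  # traverse + tunnels + stats reuse this within the TTL
    graph = build_graph(col)          # the single O(n) metadata scan
    _GRAPH_CACHE.update(graph=graph, built_at=now)
    return graph

def invalidate_graph_cache():
    # Called from the add/delete/update paths so writers never serve a stale graph.
    _GRAPH_CACHE.update(graph=None, built_at=0.0)
```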
igorls added a commit that referenced this pull request on Apr 13, 2026
Addresses the actual spec defects flagged in #743 review, ignoring operator-UX asks that are not plugin-contract concerns.

- Goal #3: 'without data loss' → mirrors §8.2's capability-conditional lossless-vs-reembed framing. No more overpromise.
- §1.5: `server_embedder` is no longer an implicit escape hatch from identity/dimension rules. Such backends MUST expose an effective identity via `effective_embedder_identity()` and are bound by the same three-state check.
- §7.3: adds a `maintenance_kinds: ClassVar[frozenset[str]]` advertisement mechanism. `run_maintenance(kind)` must raise UnsupportedMaintenanceKindError for unadvertised kinds. The benchmark harness reads this set rather than guessing kind names. Reserves `analyze`/`compact`/`reindex` as well-defined names.
- §1.2: adds `update()` as an optional method with a default get+merge+upsert implementation. §2.1: `supports_update` redefined to gate atomic single-round-trip semantics (not mere capability), since the default impl already supports partial updates.

Operator asks explicitly NOT adopted (diplomatic shims, not contract defects): `.to_dict()` compat on typed results, migration progress reporting, `BaseBackend.repair()` separate from `run_maintenance`, per-palace capability variance, identity recording on read-only ops.
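A sketch of how the §7.3 advertisement and the §1.2 default update could look on `BaseBackend`. Only `maintenance_kinds`, `run_maintenance`, `update()`, and the reserved kind names come from the spec change above; the `get`/`upsert` method names and everything else are assumptions.

```python
from typing import Any, ClassVar, Dict

class UnsupportedMaintenanceKindError(Exception):
    """Raised when run_maintenance() is asked for a kind the backend did not advertise."""

class BaseBackend:
    # §7.3: backends advertise exactly which maintenance kinds they implement.
    maintenance_kinds: ClassVar[frozenset] = frozenset()

    def run_maintenance(self, kind: str) -> None:
        if kind not in self.maintenance_kinds:
            raise UnsupportedMaintenanceKindError(kind)
        self._do_maintenance(kind)

    def _do_maintenance(self, kind: str) -> None:
        raise NotImplementedError  # concrete backends implement only advertised kinds

    # §1.2: default non-atomic update -- read, merge the partial fields, write back.
    def update(self, drawer_id: str, fields: Dict[str, Any]) -> None:
        record = self.get(drawer_id)
        record.update(fields)
        self.upsert(drawer_id, record)

    def get(self, drawer_id: str) -> Dict[str, Any]:
        raise NotImplementedError

    def upsert(self, drawer_id: str, record: Dict[str, Any]) -> None:
        raise NotImplementedError

class ExampleBackend(BaseBackend):
    # Reserved names quoted from the spec change: analyze / compact / reindex.
    maintenance_kinds = frozenset({"analyze", "compact"})

    def _do_maintenance(self, kind: str) -> None:
        print(f"running {kind}")
```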
felipetruman added a commit to felipetruman/mempalace that referenced this pull request on Apr 17, 2026
…ut, tests

Addresses the six Copilot review comments on the initial commit.

1) MemPalace#6 (critical) — mcp_server.py `_get_collection` bypassed ChromaBackend
The MCP server creates its palace collection directly via `chromadb.PersistentClient.get_or_create_collection` in `_get_collection`, not through `ChromaBackend.get_collection`. That path was missing the `hnsw:num_threads=1` metadata, so the primary crash surface for MemPalace#974 and MemPalace#965 was untouched by the original patch. Fixed by passing `hnsw:num_threads=1` at the mcp_server create site too. Documented in a code comment that the setting is only honored at creation time — existing palaces created before this fix still need a `mempalace nuke` + re-mine to gain the protection.

2) MemPalace#3 — mine_global_lock over-serialized mines across unrelated palaces
Replaced the single global lock file `mine_global.lock` with a per-palace lock keyed by `sha256(os.path.abspath(palace_path))` (`mine_palace_<hash>.lock`). Mines against the same palace still collapse to a single runner (the correctness boundary), but mines against *different* palaces are now free to run in parallel. `mine_global_lock` is kept as a backward-compatible alias for `mine_palace_lock` so any external callers that imported the previous name keep working.

3) MemPalace#1 — hook_precompact swallowed OSError but not subprocess.TimeoutExpired
`subprocess.run(..., timeout=60)` raises `TimeoutExpired` on slow palaces. The previous `except OSError` clause didn't catch it, so the hook could raise and fail to emit any JSON decision — leaving the harness without a block/passthrough signal. Fixed by catching `(OSError, subprocess.TimeoutExpired)` together and always falling through to the block decision so the hook reliably emits a response.

4) MemPalace#2 + MemPalace#4 — tests
- tests/test_hooks_cli.py: added `test_precompact_first_two_attempts_block`, `test_precompact_passes_through_after_cap`, and `test_precompact_counter_is_per_session` to lock in the MemPalace#955 deadlock fix.
- tests/test_palace_locks.py (new): covers `mine_palace_lock` single-acquire, reuse-after-release, cross-process serialization on the same palace, non-interference across different palaces, path normalization, and the `mine_global_lock` back-compat alias.

5) MemPalace#5 — known limitation, documented but not auto-fixed
Copilot suggested detecting collections missing `hnsw:num_threads=1` and calling `collection.modify(metadata=...)` to retrofit existing palaces. Verified against chromadb 1.5.7: `modify(metadata=...)` replaces metadata rather than merging, and re-passing `hnsw:space="cosine"` then raises `ValueError: Changing the distance function of a collection once it is created is not supported currently.` The HNSW runtime configuration (`configuration_json`) also does not expose `num_threads` in chromadb 1.5.x, so the flag appears to be read only at creation time. Rather than paper over the limitation with a best-effort `modify` that silently drops `hnsw:space`, documented in the mcp_server comment that pre-existing palaces need a `mempalace nuke` + re-mine to gain the protection. Fresh palaces are always protected.

Testing
- pytest tests/test_palace_locks.py tests/test_hooks_cli.py tests/test_backends.py tests/test_cli.py → **98 passed, 0 failed**.
- Runtime validation with two concurrent `mempalace mine` calls:
  - Different palaces → both complete in parallel ✓
  - Same palace → one completes, the other exits with "another `mine` is already running against <palace> — exiting cleanly." ✓
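A sketch of the per-palace lock keying from item 2 above. Only the `sha256(os.path.abspath(palace_path))` digest and the `mine_palace_<hash>.lock` file-name pattern come from the commit message; the lock directory and the helper name are assumptions.

```python
import hashlib
import os
import tempfile
from typing import Optional

def mine_palace_lock_path(palace_path: str, lock_dir: Optional[str] = None) -> str:
    """One lock file per palace: same palace -> same file, different palaces -> different files."""
    digest = hashlib.sha256(os.path.abspath(palace_path).encode("utf-8")).hexdigest()
    base = lock_dir or tempfile.gettempdir()
    return os.path.join(base, f"mine_palace_{digest}.lock")

# Mines against different palaces no longer contend for a single global lock.
assert mine_palace_lock_path("/palaces/a") != mine_palace_lock_path("/palaces/b")
```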
felipetruman added a commit to felipetruman/mempalace that referenced this pull request on Apr 17, 2026
felipetruman added a commit to felipetruman/mempalace that referenced this pull request on Apr 17, 2026
…stant

Addresses remaining PR MemPalace#976 review items after rebase on develop.

## Copilot Review MemPalace#3 (chroma.py:134) — retrofit legacy collections
`get_collection(create=False)` previously returned existing collections without re-applying `hnsw:num_threads=1`, so palaces created before the fix kept the unsafe parallel-insert path. Add a `_pin_hnsw_threads()` helper that calls `collection.modify(configuration=UpdateCollectionConfiguration(hnsw=UpdateHNSWConfiguration(num_threads=1)))` best-effort on every `get_collection` call (including the MCP server's `_get_collection`). In chromadb 1.5.x the runtime config does not persist to disk across `PersistentClient` reopens, so the retrofit is re-applied each process start rather than being a one-shot migration. Fresh palaces keep the metadata-based pin as the primary defense; legacy palaces now also get per-session protection without requiring `mempalace nuke` + re-mine.

## mvalentsev feedback — drop dead `MAX_PRECOMPACT_BLOCK_ATTEMPTS`
After the rebase on develop, `hook_precompact` delegates to `_mine_sync` and no longer emits `decision: block`, so the attempt-cap constant was orphaned. Grep confirms 0 usages in the repo — remove it.

## Tests
- `_pin_hnsw_threads` retrofits a legacy collection (num_threads None -> 1)
- `_pin_hnsw_threads` swallows all errors (never raises)
- `ChromaBackend.get_collection(create=False)` applies the retrofit on a legacy palace
- 62 tests pass (10 backends + 6 palace locks + 46 hooks_cli)
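A sketch of that best-effort retrofit helper. The `modify(configuration=...)` call is quoted from the commit message; the import path shown is an assumption and may differ between chromadb releases.

```python
# Assumed import path for chromadb's configuration types.
from chromadb.api.collection_configuration import (
    UpdateCollectionConfiguration,
    UpdateHNSWConfiguration,
)

def _pin_hnsw_threads(collection) -> None:
    """Re-apply hnsw num_threads=1 on an already-created collection, best effort."""
    try:
        collection.modify(
            configuration=UpdateCollectionConfiguration(
                hnsw=UpdateHNSWConfiguration(num_threads=1)
            )
        )
    except Exception:
        # Swallow everything: a legacy palace must still open even if this
        # chromadb version can't update the runtime HNSW config.
        pass
```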
felipetruman added a commit to felipetruman/mempalace that referenced this pull request on Apr 17, 2026
## MemPalace#1 docstring / implementation mismatch in _get_save_interval
The docstring said "invalid / non-positive values fall back to the default" but the code clamps to 1 via max(1, value). Rewrote the docstring to document the actual behavior: missing / non-integer values fall back to the default; parsed values < 1 clamp to 1 (effectively blocks every turn). Added a pointer to MEMPAL_STOP_HOOK_DISABLE for users who wanted "no block" rather than "block every turn".

## MemPalace#2 MEMPAL_STOP_HOOK_DISABLE must not drift cadence state
The disabled path used to return before updating *_last_save, so toggling the flag off mid-session would make since_last cover every message accumulated while disabled and trigger an immediate retroactive block. Fixed by advancing the last-save watermark to the current exchange count while disabled — state stays synced with activity and no block is ever emitted. Re-enabling resumes the normal cadence from the current position, not from whenever the hook was last active.

## MemPalace#3 integration tests for hook_stop env-var behavior
Unit tests only exercised the _get_save_interval / _stop_hook_disabled helpers. Added five integration tests in test_hooks_cli_tuning.py that run hook_stop end-to-end:
- custom interval triggers block at the override
- custom interval passes through below the override
- MEMPAL_STOP_HOOK_DISABLE=1 passes through even far above the interval
- disabled path still advances the last-save watermark (state tracking)
- stop_hook_active short-circuit must not touch state

Also added an autouse fixture in test_hooks_cli.py that monkeypatches MEMPAL_SAVE_INTERVAL and MEMPAL_STOP_HOOK_DISABLE off, so a developer with those vars set in their shell can't break the interval assertions.

79 tests pass (46 in test_hooks_cli.py + 33 in test_hooks_cli_tuning.py).
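A sketch of the documented behavior from the first item: missing or non-integer values fall back to the default, parsed values below 1 clamp to 1. The default value and the env-var parsing shown are assumptions; only the clamp and fallback semantics come from the commit.

```python
import os

DEFAULT_SAVE_INTERVAL = 10  # assumed default; the real value lives in the hooks module

def _get_save_interval() -> int:
    """Missing / non-integer values fall back to the default; values < 1 clamp to 1."""
    raw = os.environ.get("MEMPAL_SAVE_INTERVAL")
    if raw is None:
        return DEFAULT_SAVE_INTERVAL
    try:
        value = int(raw)
    except ValueError:
        return DEFAULT_SAVE_INTERVAL
    # Clamping to 1 effectively blocks every turn; users who want "never block"
    # should set MEMPAL_STOP_HOOK_DISABLE instead.
    return max(1, value)
```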
rergards pushed a commit to rergards/mempalace that referenced this pull request on Apr 20, 2026
jphein referenced this pull request in jphein/mempalace on Apr 24, 2026
Cherry-picks the critical chroma.py fix from @felipetruman's MemPalace#976 (without the bundled mine_global_lock and precompact-attempt-cap changes, which we don't need — mine_global_lock is covered by our MemPalace#1171 backend-seam flock, and our silent_save path bypasses the block-mode precompact deadlock MemPalace#976's fix #3 addresses).

Root cause: ChromaDB's multi-threaded `ParallelFor` HNSW insert path races in `repairConnectionsForUpdate` / `addPoint`, corrupting the graph under any concurrent add (including a single mine's concurrent batches). Manifests as SIGSEGV (MemPalace#974) or runaway link_lists.bin writes (MemPalace#965 — 437 GB observed on JP's palace, 1.5 TB on the Nobara install in MemPalace#976's report). The num_threads=1 pin disables ParallelFor, serializing inserts within a single process. MemPalace ingests drawers one at a time anyway, so this was never a throughput win — only a latent correctness hazard.

Four call sites patched:
- `ChromaBackend.get_collection` (create path): add metadata pin + call `_pin_hnsw_threads()` on every return
- `ChromaBackend.create_collection`: add metadata pin
- `mcp_server._get_collection`: add metadata pin + call `_pin_hnsw_threads()` on every return (direct chromadb call, bypasses the backend adapter)
- `cli.cmd_compact` new_col creation: add metadata pin

`_pin_hnsw_threads()` exists because ChromaDB 1.5.x doesn't persist the modified HNSW config to disk, so it must be re-applied on every open.

Credits @felipetruman for the root-cause diagnosis and fix design in MemPalace#976. Posted corroborating reproduction data (437 GB bloat on 135K drawers, identical failure mode) on that PR earlier today at MemPalace#976 (comment). This fork-local cherry-pick gets JP protected now; expect this to become a no-op when MemPalace#976 lands upstream and we merge develop→main.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
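For reference, a sketch of the creation-time metadata pin those call sites apply. The `hnsw:space`/`hnsw:num_threads` keys and the `mempalace_drawers` collection name appear elsewhere in this thread; the function body and client path handling here are illustrative assumptions.

```python
import chromadb

def _get_collection(palace_path: str):
    client = chromadb.PersistentClient(path=palace_path)
    return client.get_or_create_collection(
        name="mempalace_drawers",
        metadata={
            "hnsw:space": "cosine",   # distance function, fixed at creation time
            "hnsw:num_threads": 1,    # disable ParallelFor inserts -> no graph races
        },
    )
```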
igorls pushed a commit to felipetruman/mempalace that referenced this pull request on Apr 25, 2026
gnusam pushed a commit to gnusam/mempalace-pgsql that referenced this pull request on Apr 25, 2026
… 0-chunk files

Three upstream fixes ported together because they're conceptually one "convo_miner polish" pass on the same exchange-chunking path.

1. Remove ai_lines[:8] truncation (upstream d52d6c9, PR MemPalace#695). The _chunk_by_exchange path was silently dropping every line past line 8 of the AI response, violating the verbatim-storage principle.
2. Split oversize exchanges across drawers (upstream 9b60c6e, PR MemPalace#708). Now that the full response is preserved, an exchange that exceeds CHUNK_SIZE (800 chars, aligned with miner.py) is split into consecutive drawers instead of a single oversized one. Adds a CHUNK_SIZE module constant.
3. Register a no-embedding sentinel for files that produce zero chunks (upstream 87e8baf, PR MemPalace#732). mine_convos has three early-exit paths (OSError, content too short, zero chunks) that previously wrote nothing — file_already_mined() then returned False on the next run and the file was re-read every time.

Adapted MemPalace#3 for the PG backend: the upstream sentinel uses collection.upsert() (ChromaDB API). This fork instead adds a PalaceDB.register_empty_file() method that inserts a row directly with embedding=NULL and metadata.ingest_mode='registry', so the sentinel is free of embedding cost and invisible to vector search. file_already_mined() already keys on source_file + source_mtime, so the existing path picks up the sentinel without further changes.

Three behavioural tests added: full AI response preserved, oversize exchange split across drawers, and the sentinel + file_already_mined round trip.

Upstream: MemPalace@d52d6c9 MemPalace@9b60c6e MemPalace@87e8baf

Co-authored-by: shafdev <[email protected]>
Co-authored-by: Sanjay Ramadugu <[email protected]>
Co-authored-by: Mikhail Valentsev <[email protected]>
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
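A sketch of the oversize-exchange split from item 2 above. The CHUNK_SIZE value is quoted from the commit; the plain character-slice strategy and the `split_exchange` name are assumptions about the real implementation.

```python
CHUNK_SIZE = 800  # chars, aligned with miner.py

def split_exchange(text: str) -> list:
    """Split one exchange into consecutive drawer-sized chunks instead of a single oversized drawer."""
    if len(text) <= CHUNK_SIZE:
        return [text]
    return [text[i:i + CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)]
```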
lealvona pushed a commit to lealvona/mempalace that referenced this pull request on Apr 29, 2026
Adds a native MemPalace memory provider for Hermes,
the open-source AI agent framework.
What this adds:
- integrations/hermes/ — drop-in memory provider plugin implementing Hermes' MemoryProvider ABC
- mempalace hermes install — one-command installer that copies the plugin, updates Hermes config, and optionally backfills existing session history
- integrations/hermes/README.md — installation and usage guide

How it works:
Every conversation turn is filed into the palace automatically (wing-classified,
semantic-search ready). The AAAK wake-up injects your identity + critical facts
at session start. 8 palace tools are exposed to the agent.
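A hypothetical sketch of what the drop-in provider could look like. Hermes' MemoryProvider ABC is not shown on this page, so the method names (`on_turn`, `recall`) and the MemPalace client calls used here are assumptions, not the plugin's actual API.

```python
from abc import ABC, abstractmethod

class MemoryProvider(ABC):  # stand-in for Hermes' actual ABC
    @abstractmethod
    def on_turn(self, role: str, content: str) -> None: ...
    @abstractmethod
    def recall(self, query: str, k: int = 5) -> list: ...

class MemPalaceProvider(MemoryProvider):
    def __init__(self, palace):
        self.palace = palace  # assumed MemPalace client object

    def on_turn(self, role: str, content: str) -> None:
        # File each conversation turn automatically; wing classification and
        # embedding happen inside the palace, not in the Hermes plugin.
        self.palace.file_drawer(text=content, metadata={"role": role})

    def recall(self, query: str, k: int = 5) -> list:
        # Semantic search over previously filed turns.
        return self.palace.search(query, limit=k)
```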
Zero PII in defaults. Wing config and identity come from the user's
~/.mempalace/files (set up via mempalace init). The plugin ships with no hardcoded personal information.
Tested on: Hermes v0.7.0+, MemPalace 3.0.0, Python 3.9+