This is a Rust re-implementation (not a direct fork) of the milla-jovovich/mempalace project. The port was largely done using Pi Coding Agent and OpenAI GPT-5.4; it is a work in progress and may not be as good as the original.
Another interesting project in the same space is SaraBrain, backed by 30+ years of research, papers, and experience.
MemPalace is a local memory system for projects, conversations, and agent workflows. This Rust implementation stores everything in SQLite, supports hybrid retrieval, and exposes both a CLI and an MCP server.
MemPalace ingests source material into a global, structured "palace" of memories:
- wings separate projects, agents, or source domains
- rooms organize memories by topic
- drawers store raw chunks and derived artifacts
- vectors support semantic retrieval
- FTS supports lexical retrieval
The result is a searchable local memory store that can be mined from codebases and conversations, queried from the CLI, and connected to MCP clients.
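As a conceptual sketch only (these type names are illustrative, not the actual types in the codebase), the hierarchy nests roughly like this:

```rust
// Illustrative sketch of the palace hierarchy; not the real data model.
struct Palace {
    wings: Vec<Wing>,
}

struct Wing {
    name: String,        // a project, agent, or source domain
    rooms: Vec<Room>,
}

struct Room {
    name: String,        // a topic within the wing
    drawers: Vec<Drawer>,
}

struct Drawer {
    content: String,                 // raw chunk or derived artifact
    embedding: Option<Vec<f32>>,     // stored vector for semantic retrieval
}
```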
The palace lives in SQLite at:
```
<palace-path>/mempalace.sqlite3
```
Key persisted data:
- raw drawers and derived artifacts
- source revision tracking for incremental refresh
- stored vectors for semantic search
- SQLite FTS5 index for lexical search
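For orientation only, here is a rough sketch of how that layout could be expressed with rusqlite; the table and column names are hypothetical and the real schema in src/storage.rs may differ.

```rust
// Hypothetical schema sketch, not the actual tables created by src/storage.rs.
use rusqlite::{Connection, Result};

fn open_palace(path: &str) -> Result<Connection> {
    let conn = Connection::open(path)?;
    conn.execute_batch(
        "CREATE TABLE IF NOT EXISTS drawers (
             id              INTEGER PRIMARY KEY,
             wing            TEXT NOT NULL,
             room            TEXT NOT NULL,
             content         TEXT NOT NULL,   -- raw chunk or derived artifact
             source_revision TEXT             -- tracked for incremental refresh
         );
         CREATE TABLE IF NOT EXISTS vectors (
             drawer_id INTEGER PRIMARY KEY REFERENCES drawers(id),
             embedding BLOB NOT NULL          -- serialized vector for semantic search
         );
         -- FTS5 virtual table over drawer content for lexical search.
         CREATE VIRTUAL TABLE IF NOT EXISTS drawers_fts USING fts5(content);",
    )?;
    Ok(conn)
}
```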
Search is hybrid by default:
- lexical retrieval via SQLite FTS5
- semantic retrieval via stored vectors
- heuristic reranking and fused scoring
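The actual reranking heuristics live in src/storage.rs; as a generic sketch of the idea (the weights and normalization here are made up), fused scoring can be as simple as a weighted blend of the two retrieval signals:

```rust
// Generic sketch of hybrid score fusion; not MemPalace's actual formula.
struct Hit {
    drawer_id: i64,
    lexical: f32,  // normalized FTS5 relevance, higher is better
    semantic: f32, // cosine similarity against the stored vector
}

fn fused_score(hit: &Hit, lexical_weight: f32) -> f32 {
    lexical_weight * hit.lexical + (1.0 - lexical_weight) * hit.semantic
}

fn rank(mut hits: Vec<Hit>) -> Vec<Hit> {
    // Sort descending by fused score; a real implementation would also apply
    // heuristic boosts (recency, room match, ...) before or after this step.
    hits.sort_by(|a, b| {
        fused_score(b, 0.5)
            .partial_cmp(&fused_score(a, 0.5))
            .unwrap_or(std::cmp::Ordering::Equal)
    });
    hits
}
```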
Wake-up uses a layered memory model:
- L0 — identity text from `~/.mempalace/identity.txt`
- L1 — essential story built from recent important drawers
- L2 — scoped recall
- L3 — deep search
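Conceptually (this is a sketch, not the code in src/layers.rs), wake-up renders only the cheap layers eagerly; the CLI's `wake-up` output described below is the L0 + L1 summary:

```rust
// Conceptual sketch of the layered wake-up output; names are illustrative.
fn wake_up_summary(identity: Option<&str>, essential_story: &str) -> String {
    let mut out = String::new();
    // L0: identity text from ~/.mempalace/identity.txt, if present.
    if let Some(id) = identity {
        out.push_str(id);
        out.push_str("\n\n");
    }
    // L1: essential story built from recent important drawers.
    out.push_str(essential_story);
    // L2 (scoped recall) and L3 (deep search) are not part of this rendered summary.
    out
}
```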
The project also includes:
- AAAK compression and compact artifacts
- general extraction from conversations
- a temporal knowledge graph
- a room graph for traversal and tunnel finding
- an MCP server for external clients
Requirements:
- Rust toolchain
- SQLite is bundled through rusqlite's `bundled` feature
Build the project:

```sh
cargo build
```

Run the CLI:

```sh
cargo run -- --help
```

To enable the ONNX local embedding backend:

```sh
cargo build --features onnx-embeddings
```

Initialize MemPalace for a project:

```sh
cargo run -- init .
```

Mine the current project:

```sh
cargo run -- mine . --mode projects
```

Search the palace:

```sh
cargo run -- search "how openai embedding backend works"
```

Show palace status:

```sh
cargo run -- status
```

Render the wake-up summary:

```sh
cargo run -- wake-up
```

Start the MCP server:

```sh
cargo run -- mcp --transport stdio
```

MemPalace uses both global and per-project config.
All memories are stored in a single global store. When you run MemPalace from a project that has local config, it knows to narrow searches to that project's scope.
Global state lives under:
```
~/.mempalace
```
Important files:
- `~/.mempalace/config.json`
- `~/.mempalace/identity.txt`
- `~/.mempalace/palace/`
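The exact keys of `config.json` are defined by the implementation; purely as a hypothetical illustration of the kind of settings it can hold (for example, embedding backend selection), it might look like:

```json
{
  "embedding_backend": "auto",
  "local_embedding_provider": "builtin"
}
```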
Projects can define `mempalace.yaml`:

```yaml
wing: my-project
rooms:
  - name: general
    keywords: []
  - name: src
    keywords: []
```

The `init` command creates this file automatically if it does not already exist.
Create the global config if needed and create a project `mempalace.yaml`:

```sh
cargo run -- init /path/to/project
```

Mine/import a project or a conversation directory into the palace:

```sh
cargo run -- mine /path/to/project --mode projects
cargo run -- mine /path/to/chats --mode convos --extract exchange
cargo run -- mine /path/to/chats --mode convos --extract general
```

Useful flags (combined in the example after this list):

- `--wing <name>`
- `--limit <n>`
- `--dry-run`
- `--agent <name>`
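For example, combining these flags on a conversation mine (the paths and wing name are placeholders):

```sh
cargo run -- mine /path/to/chats --mode convos --extract general \
  --wing research-notes --limit 100 --dry-run
```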
Hybrid search with optional wing and room filtering:

```sh
cargo run -- search "typed GraphQL queries"
cargo run -- search "typed GraphQL queries" --wing my-app --room architecture --results 10
```

Show total drawer counts grouped by wing and room:
```sh
cargo run -- status
```

Render the L0 + L1 summary, optionally scoped to a wing:

```sh
cargo run -- wake-up
cargo run -- wake-up --wing my-app
```

Generate and store AAAK compressed artifacts:

```sh
cargo run -- compress
cargo run -- compress --wing my-app
```

Run the stdio MCP server:

```sh
cargo run -- mcp --transport stdio
```

Evaluate retrieval quality on a benchmark dataset:

```sh
cargo run -- benchmark ./bench.json --backend hybrid --k 5
```
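The benchmark file schema is whatever src/bench.rs expects; purely as a hypothetical sketch, such a dataset typically pairs queries with the results they are expected to retrieve:

```json
[
  {
    "query": "how openai embedding backend works",
    "expected": ["embedding backend selection"]
  }
]
```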
MemPalace supports three backend modes:

- `auto`
- `local`
- `openai`

Local provider choices:

- `auto`
- `builtin`
- `onnx`
Examples:
Use the default automatic selection:
```sh
cargo run -- search "deployment incident"
```

Force the built-in local provider:

```sh
cargo run -- search "deployment incident" \
  --embedding-backend local \
  --local-embedding-provider builtin
```

Use the ONNX local provider (requires the `onnx-embeddings` feature):

```sh
cargo run --features onnx-embeddings -- search "deployment incident" \
  --embedding-backend local \
  --local-embedding-provider onnx
```

Use the OpenAI backend:

```sh
export OPENAI_API_KEY=sk-...
cargo run -- search "deployment incident" --embedding-backend openai
```

Embedding configuration precedence is:
- CLI flags
- environment variables
- `~/.mempalace/config.json`
- defaults
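In other words, a more specific source always overrides a less specific one. A generic sketch of that resolution order (the option names here are placeholders, not MemPalace's real flags or variables):

```rust
// Generic sketch of "CLI flag > env var > config file > default" precedence.
fn resolve_setting(
    cli_flag: Option<String>,
    env_var: Option<String>,
    config_file: Option<String>,
    default: &str,
) -> String {
    cli_flag
        .or(env_var)
        .or(config_file)
        .unwrap_or_else(|| default.to_string())
}
```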
The MCP server:
- supports `stdio` transport
- accepts `Content-Length` framed input and newline-delimited JSON input
- writes startup logging to stderr
- gates mutating tools behind `MEMPALACE_ENABLE_MUTATIONS=1`
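For reference, `Content-Length` framing uses the familiar LSP-style layout: a header line, a blank line, then the JSON-RPC body. The request below uses the standard MCP `tools/list` method; the exact tools exposed are defined in src/mcp.rs.

```
Content-Length: 46

{"jsonrpc":"2.0","id":1,"method":"tools/list"}
```

The same body can also be sent as a single newline-delimited JSON line. To allow mutating tools, start the server with the gate enabled:

```sh
MEMPALACE_ENABLE_MUTATIONS=1 cargo run -- mcp --transport stdio
```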
Important source files:
- `src/storage.rs` — SQLite storage, vectors, refresh logic, hybrid search
- `src/embedding.rs` — backend selection and embedding providers
- `src/project.rs` — project init and mining
- `src/convo.rs` — conversation mining
- `src/layers.rs` — layered memory stack
- `src/dialect.rs` — AAAK compression dialect
- `src/kg.rs` — temporal knowledge graph
- `src/graph.rs` — room graph traversal and tunnel logic
- `src/mcp.rs` — MCP server
- `src/bench.rs` — benchmark runner
See the focused docs in `docs/`.
GPL-3.0-or-later, 2026.
It's not a direct fork of milla-jovovich/mempalace, but a spiritual re-implementation in Rust.