Open Source · Apache 2.0

AI That Never Forgets —
Memory Infrastructure for Agents

Stop re-explaining your project to your AI every session. memtomem turns your notes, docs, and code into a searchable memory that any MCP-compatible agent can use — across sessions, across agents, all on your machine.

$ uv tool install 'memtomem[all]'
74 MCP Tools
10 Compression Strategies
6+ MCP Editors · 3+ Frameworks
Why Do Agents Forget?
Tool integration (MCP), safety (Guardrails), and observability (Langfuse) are mature — but the memory layer still has no standard.
01

No Memory Between Sessions

All context is lost when a session ends. Architecture decisions, coding patterns, and debugging history must be re-explained every time.

02

Memory Silos Between Agents

Knowledge from Claude Code can't be carried over to Cursor. Each agent is trapped in its own isolated memory silo.

03

Limitations of Existing Solutions

Current memory systems retrieve only when an agent explicitly searches, are locked to a specific runtime, and offer only a single long-term memory layer.

memtomem Solves This
memtomem applies the working-memory / long-term-memory model from cognitive science to agents: short-term compression and long-term search run as independent MCP servers.

Proactive Surfacing

Your agent doesn't have to ask. STM watches tool calls and slips in relevant past memories automatically — tuned by your feedback.

MCP Proxy Gateway

STM sits invisibly between your agent and its tools. Existing MCP servers keep working unchanged — no code, no new integrations.
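Conceptually, a proxy of this kind intercepts each tool call, forwards it unchanged, then post-processes the result before the agent sees it. The sketch below illustrates that flow only; every function name here is a hypothetical stand-in, not memtomem's actual code.

```python
# Hypothetical sketch of an intercept-and-enrich proxy step.
# None of these names come from memtomem's real API.

def clean(text: str) -> str:
    # Strip noise such as blank lines from the raw tool output.
    return "\n".join(l for l in text.splitlines() if l.strip())

def compress(text: str, budget: int = 200) -> str:
    # Truncate to a context budget (real strategies are format-aware).
    return text if len(text) <= budget else text[:budget] + "…"

def surface(memories: list[str], query: str) -> list[str]:
    # Naive relevance stand-in: memories sharing a word with the call.
    q = set(query.lower().split())
    return [m for m in memories if q & set(m.lower().split())]

def proxy_call(tool, args: dict, memories: list[str]) -> dict:
    """Forward the call unchanged, then clean, compress, and attach
    relevant past memories to the response."""
    raw = tool(**args)
    result = compress(clean(raw))
    related = surface(memories, " ".join(map(str, args.values())))
    return {"result": result, "surfaced": related}
```

The point of the pattern is that the upstream tool and the agent both stay unmodified; only the response payload is transformed in transit.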

Cross-Runtime Sync

Define an agent or skill once. Context Gateway syncs it to Claude Code, Cursor, Codex CLI, and more — in each tool's native format.

Smart Compression

Big tool responses get trimmed to fit your context window. Ten strategies for JSON, markdown, or free-form text, plus a zero-loss progressive mode for huge payloads.
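A format-aware dispatch like the one described could be sketched as follows. The strategy names, threshold, and heuristics are all illustrative assumptions for the pattern, not memtomem's ten real strategies:

```python
# Illustrative format-aware compression dispatch -- thresholds and
# strategies are made up for this sketch, not memtomem's.
import json

MAX_CHARS = 2000  # pretend context budget

def compress_json(text: str) -> str:
    # Keep top-level keys; elide large nested values.
    data = json.loads(text)
    if isinstance(data, dict):
        return json.dumps({k: "…" if len(json.dumps(v)) > 200 else v
                           for k, v in data.items()})
    return text[:MAX_CHARS]

def compress_markdown(text: str) -> str:
    # Keep the heading outline; drop body text.
    heads = [l for l in text.splitlines() if l.lstrip().startswith("#")]
    return "\n".join(heads)[:MAX_CHARS] if heads else text[:MAX_CHARS]

def compress_plain(text: str) -> str:
    # Head + tail truncation for free-form text.
    if len(text) <= MAX_CHARS:
        return text
    half = MAX_CHARS // 2
    return text[:half] + "\n…[truncated]…\n" + text[-half:]

def compress_response(text: str) -> str:
    # Pick a strategy from the payload's apparent format.
    if text.lstrip().startswith(("{", "[")):
        try:
            json.loads(text)
            return compress_json(text)
        except ValueError:
            pass
    if any(l.lstrip().startswith("#") for l in text.splitlines()):
        return compress_markdown(text)
    return compress_plain(text)
```

Small payloads pass through untouched; only responses over the budget lose detail, which is what makes a separate zero-loss progressive mode useful for huge payloads.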

Multi-Agent Knowledge Sharing

Each agent gets its own private memory, plus a shared one. Knowledge flows between agents — or from you to every agent at once.

Fully Local

SQLite + ONNX under the hood. No GPU, no external API, no cloud dependency — your memory stays on your machine.
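As a rough picture of what a fully local setup means, here is a toy memory store built only from the Python standard library: SQLite for storage, with a bag-of-words similarity standing in for the real ONNX embeddings. The schema and function names are illustrative, not memtomem's:

```python
# Toy local memory store: SQLite + a stand-in for ONNX embeddings.
# Everything here is illustrative, not memtomem's actual schema.
import math
import sqlite3
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a local embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, text TEXT)")

def remember(text: str) -> None:
    db.execute("INSERT INTO memories (text) VALUES (?)", (text,))

def recall(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    rows = db.execute("SELECT text FROM memories").fetchall()
    return sorted((r[0] for r in rows),
                  key=lambda t: cosine(q, embed(t)), reverse=True)[:k]

remember("We chose PostgreSQL for the billing service")
remember("The deploy script lives in scripts/deploy.sh")
print(recall("which database does billing use?")[0])
# prints "We chose PostgreSQL for the billing service"
```

Everything runs in-process, which is the property the feature describes: nothing about storing or retrieving a memory requires a network call.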

Two-Layer Architecture
The STM proxy and LTM server connect via MCP and transparently provide memory to every agent.
AI Agents / Frameworks
Claude Code
Cursor
Codex CLI
LangGraph
MCP
memtomem-stm
STM Proxy
CLEAN → COMPRESS → SURFACE → INDEX
Surfacing
MCP
memtomem
LTM Server
74 MCP Tools
Upstream MCP Servers
filesystem, GitHub, …
Supported Runtimes & Frameworks
MCP-native — works with any MCP-compatible client.
Claude Code
Cursor
Windsurf
Claude Desktop
Codex CLI
Gemini CLI
Antigravity
LangGraph
CrewAI
LangChain
Docs & Tutorials
From getting started to advanced usage.

Get Started Now

No GPU. No external services. One uv install is all you need.