memtomem-stm

Official website & docs: https://memtomem.com

PyPI · Python 3.12+ · License: Apache 2.0 · CLA

🚧 Alpha — APIs and defaults may change between 0.1.x releases. Feedback and issue reports are especially welcome: Issues · Discussions.

Spend fewer tokens. Remember more. Ship faster.

memtomem-stm is an MCP proxy that typically cuts token usage by 20–80% and gives your agent memory across sessions — with no changes to your upstream MCP servers.

It sits between your AI agent and its upstream MCP servers, compressing tool responses, caching repeated calls, and automatically surfacing relevant context from prior sessions via a memtomem LTM server.

What memtomem-stm does:

  • Cuts token spend on repeated reads — compresses and caches tool responses, so the agent doesn't re-pay for the same file or search result. Works with Claude Code, Cursor, Claude Desktop, or any MCP client.
  • Carries context across sessions — surfaces prior decisions from memtomem LTM automatically, so the agent picks up where it left off rather than re-discovering what it already knew.
  • Drops in front of any MCP server — adds compression, caching, and observability as a proxy layer, without changes to upstream code.
flowchart TB
    Agent["Agent<br/>(Claude Code, Cursor, …)"]
    subgraph STM["memtomem-stm (STM)"]
        Pipe["CLEAN → COMPRESS → SURFACE → INDEX"]
    end
    LTM[("memtomem LTM<br/>(MCP server)")]
    FS["filesystem<br/>MCP server"]
    GH["github<br/>MCP server"]
    Other["…any MCP server"]

    Agent -->|MCP| STM
    STM <-->|MCP: stdio / SSE / HTTP| FS
    STM <-->|MCP| GH
    STM <-->|MCP| Other
    STM <-.->|surfacing<br/>via MCP| LTM

Installation

pip install memtomem-stm

Or with uv:

uv tool install memtomem-stm     # install mms / memtomem-stm as global CLI tools
uvx memtomem-stm --help          # or run without installing
uv pip install memtomem-stm      # or install into the active environment

memtomem-stm is independent: it has no Python-level dependency on memtomem core. To enable proactive memory surfacing, point STM at a running memtomem MCP server (or any compatible MCP server) — communication happens entirely through the MCP protocol.

Quick Start

mms is the short alias for memtomem-stm-proxy — both commands are identical; use whichever you prefer.

1. Add an upstream MCP server

For first-time setup, run the guided wizard — it prompts for name/prefix/command, optionally probes the server, and then offers to register STM with Claude Code (or generate .mcp.json) in the same flow:

mms init

Or add servers non-interactively:

mms add filesystem \
  --command npx \
  --args "-y @modelcontextprotocol/server-filesystem /home/user/projects" \
  --prefix fs

--prefix is required: it's the namespace under which the upstream server's tools will appear (e.g. fs__read_file). Repeat for each MCP server you want to proxy.
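As an illustration of that namespacing, the helper below (hypothetical; not part of the mms CLI) shows how a prefix maps upstream tool names into the proxied namespace:

```python
def proxied_name(prefix: str, tool: str) -> str:
    """Illustrative helper (not part of the mms CLI): compose the name an
    upstream tool appears under once its server is registered with --prefix."""
    return f"{prefix}__{tool}"

# With --prefix fs, the filesystem server's tools surface as:
print(proxied_name("fs", "read_file"))       # fs__read_file
print(proxied_name("fs", "list_directory"))  # fs__list_directory
```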

If you've already configured MCP servers in Claude Desktop, Claude Code, or a project .mcp.json, mms add --import (alias --from-clients) reuses the init wizard to bulk-select them — skipping anything already registered.

mms list      # show what you've added
mms status    # show full config + connectivity

2. Connect your AI client to STM

mms init ends with a 3-way prompt — pick option 1 and it shells out to claude mcp add for you. If you skipped that step or want to register with a different client later, run:

mms register

To register manually, use claude directly:

claude mcp add mms -s user -- mms

Or add it to a JSON MCP config for Cursor / Windsurf / Claude Desktop / Gemini:

{
  "mcpServers": {
    "mms": {
      "command": "mms"
    }
  }
}

Why mms and not memtomem-stm? Either name works (the three entry points are interchangeable), but the MCP client composes proxied tool names as mcp__<server>__<prefix>__<tool>. The short alias mms (3 chars) saves 9 bytes vs memtomem-stm (12 chars), which is exactly enough headroom to keep upstreams with long tool names under the 64-char MCP limit. If you registered under a different name and want the mms add overflow check (#261) to match exactly, export MMS_CLIENT_SERVER_NAME=<name> in your shell — otherwise the default assumption is conservative and at worst causes a few false-positive warnings on borderline prefixes.
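That length budget is easy to check mechanically. A sketch, assuming only the composition pattern and the 64-char limit stated above; the upstream tool name here is hypothetical:

```python
MCP_TOOL_NAME_LIMIT = 64  # the MCP naming limit mentioned above

def client_tool_name(server: str, prefix: str, tool: str) -> str:
    # The MCP client composes proxied names as mcp__<server>__<prefix>__<tool>.
    return f"mcp__{server}__{prefix}__{tool}"

# A hypothetical 47-character upstream tool name:
tool = "search_repositories_with_advanced_query_filters"

short = client_tool_name("mms", "gh", tool)           # 61 chars: fits
long_ = client_tool_name("memtomem-stm", "gh", tool)  # 70 chars: over the limit

print(len(short) <= MCP_TOOL_NAME_LIMIT)  # True
print(len(long_) <= MCP_TOOL_NAME_LIMIT)  # False
```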

3. Use the proxied tools

Your agent now sees proxied tools (fs__read_file, gh__search_repositories, etc.). Every call goes through the 4-stage pipeline automatically — responses are cleaned, compressed, cached, and (when an LTM server is configured) enriched with relevant memories.
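As a mental model, the pipeline is four functions applied in order. The sketch below is illustrative only; the stage bodies are stand-ins, not STM's actual logic:

```python
def clean(resp: str) -> str:
    # Stage 1 (stand-in): strip surrounding noise from the raw tool response.
    return resp.strip()

def compress(resp: str) -> str:
    # Stage 2 (stand-in): shrink the response; here we just collapse whitespace.
    return " ".join(resp.split())

def surface(resp: str) -> str:
    # Stage 3 (stand-in): would append relevant LTM memories when configured.
    return resp

def index(resp: str) -> str:
    # Stage 4 (stand-in): would record the response for later recall.
    return resp

def pipeline(resp: str) -> str:
    # CLEAN -> COMPRESS -> SURFACE -> INDEX, run on every proxied tool response.
    for stage in (clean, compress, surface, index):
        resp = stage(resp)
    return resp

print(pipeline("  a   raw\ttool   response  "))  # a raw tool response
```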

To check what's happening, ask the agent to call stm_proxy_stats.

What STM proxies — and what it doesn't

STM is an MCP proxy: it sees a tool call only if the client routes that call through the MCP protocol. Coverage depends on how your client invokes the tool, not on what the tool does.

STM sees: any MCP server you register with mms add — every tool under the mcp__<server>__<prefix>__<tool> namespace — plus LTM surfacing calls to a configured memtomem server.

STM does NOT see:

  • Claude Code's built-in tools — Read, Write, Edit, Bash, Grep, Glob, WebFetch. They run inside the client and never reach an MCP server, so their token spend is invisible to STM and unaffected by compression or caching.
  • Cursor / Windsurf / Claude Desktop built-ins — same principle: anything the client provides natively bypasses the MCP layer.
  • Sub-agent built-in calls — the parent's MCP wiring is inherited, but built-in tool calls inside an Agent / Task invocation stay client-internal.

To bring file or shell operations under STM, register an MCP server that exposes them (the filesystem example above is the most common case) and steer the agent toward the proxied alias instead of the built-in. This is the same boundary every MCP proxy lives within — it's not specific to STM.

Project-scoped MCPs (mms project + mms import)

A second tier of management lets you decide which MCP servers a given project sees, separately from the STM proxy gateway config. State lives in a new dotdir, ~/.mms/:

  • mms import --from claude-code — pull existing MCP definitions out of ~/.claude.json, ~/.cursor/mcp.json, ~/.codex/config.toml, or Claude Desktop's config into ~/.mms/registry.toml (secrets redacted in --plan, written verbatim under --apply).
  • mms project init — create a <project>/.mms/project.toml marker (commit-recommended).
  • mms project enable filesystem github — declare which MCPs that project wants visible.
  • mms project list / mms project show — inspect the index and the current project.

~/.mms/ is intentionally separate from ~/.memtomem/ — STM proxy bootstrap (stm_proxy.json) and mms project state (registry.toml) are fully disjoint in W1: mms add writes only stm_proxy.json, mms import --apply writes only registry.toml. See docs/cli.md for the full reference.

Tutorial notebooks

Try it without wiring into your AI client first. A quickstart Jupyter notebook registers an upstream MCP server, calls a proxied tool, and reads stm_proxy_stats end-to-end. Clone the repo, uv sync, and uv run jupyter lab notebooks/ — no external services needed.

Key Features

  • 🗜️ Typically 20–80% fewer tokens per tool call — 10 compression strategies with auto-selection by content type, query-aware budget, and zero-loss progressive delivery → docs/compression.md
  • 🧠 Your agent remembers — proactive memory surfacing from prior sessions, gated by relevance threshold, rate limit, dedup, and circuit breaker → docs/surfacing.md
  • 💾 Repeated calls are free — response cache with TTL and eviction; surfacing re-applied on cache hit so injected memories stay fresh → docs/caching.md
  • 🛡️ Production-safe — circuit breaker, retry with backoff, write-tool skip, query cooldown, dedup, sensitive content auto-detection, Langfuse tracing, horizontal scaling via PendingStore
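To make the caching bullet concrete, here is a minimal TTL cache in the same spirit (a generic sketch of the technique, not STM's implementation; see docs/caching.md for the real behavior):

```python
import time


class TTLCache:
    """Minimal response cache keyed by (tool, args), with per-entry expiry."""

    def __init__(self, ttl_seconds: float, max_entries: int = 128):
        self.ttl = ttl_seconds
        self.max_entries = max_entries
        self._store: dict = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # never cached
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict lazily on read
            return None
        return value

    def put(self, key, value):
        if len(self._store) >= self.max_entries:
            # Simple eviction policy: drop the entry closest to expiry.
            oldest = min(self._store, key=lambda k: self._store[k][0])
            del self._store[oldest]
        self._store[key] = (time.monotonic() + self.ttl, value)


cache = TTLCache(ttl_seconds=60.0)
cache.put(("fs__read_file", "/tmp/a.txt"), "...file contents...")
print(cache.get(("fs__read_file", "/tmp/a.txt")) is not None)  # True: cache hit
```

STM layers surfacing on top of this idea, so a cache hit still gets fresh injected memories rather than a stale snapshot.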

Documentation

| Guide         | Topic                                                    |
| ------------- | -------------------------------------------------------- |
| Surfacing     | How agents recall prior context automatically            |
| Compression   | All 10 strategies — pick the right one for your content  |
| Caching       | Skip repeated work with response caching                 |
| Configuration | Tune settings without touching code                      |
| CLI           | CLI commands and the 11 MCP tools                        |

Development

uv sync                                                    # install dev deps
uv run pytest -m "not ollama and not bench_qa_meta and not bench_qa_llm_judge"   # tests (CI filter)
uv run ruff check src && uv run ruff format --check src    # lint (required)
uv run mypy src                                            # typecheck (advisory)

CI runs the same commands on every PR via .github/workflows/ci.yml. Lint (ruff check + ruff format --check) and tests must pass; mypy is advisory.

License

Apache License 2.0. Contributions are accepted under the terms of the Contributor License Agreement.