Open source ideas for your personal AI assistant. You generate the code. No trust required.
Prerequisites:

- Claude Code installed and working
- Python 3.11+
- A Telegram account (to create a bot via @BotFather)
- An Anthropic API key
Install via plugin:

```
/plugin marketplace add swkpku/BuildClaw
/plugin install buildclaw@buildclaw
```

Then generate your assistant:
```bash
mkdir my-assistant && cd my-assistant
claude
```

Inside Claude Code:

```
/buildclaw:build
```

Follow the prompts, then:

```bash
cp .env.example .env   # fill in 3 values
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python bot.py
```

You watched Claude write every line. You own it.
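The three values `.env` needs are defined by your generated `.env.example`; a typical layout, assuming the bot reads a Telegram token, an Anthropic key, and an allowlisted user ID (all variable names below are hypothetical — check your own `.env.example` for the real ones):

```env
# Hypothetical names -- your generated .env.example is authoritative
TELEGRAM_BOT_TOKEN=123456:ABC-token-from-BotFather
ANTHROPIC_API_KEY=sk-ant-your-key
ALLOWED_USER_ID=123456789
```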
What a build session looks like:

```
$ /buildclaw:build

┌──────────────────────────────────────────────────────┐
│  BuildClaw — Your Personal AI Assistant              │
├──────────────────────────────────────────────────────┤
│  BLOCK      CAPABILITY                STATUS         │
│  files      read/write workspace      [available]    │
│  shell      run commands              [available]    │
│  memory     remember across chats     [available]    │
│  web        search the web            [available]    │
│  scheduler  run tasks on a schedule   [available]    │
└──────────────────────────────────────────────────────┘

> Choose: 1, 3, 4

Writing bot.py...

BLOCKED_PATTERNS = {
    ".ssh", ".gnupg", ".aws", ".azure", ".gcloud", ".kube", ".docker",
    ".env", "credentials", ".netrc", ".npmrc",
    "id_rsa", "id_ed25519", "id_ecdsa", "private_key", ".secret",
}

def is_safe_path(path: str) -> bool:
    """True only if path resolves inside WORKSPACE and no blocked pattern."""
    ...

def tool_read_file(path: str) -> str: ...
def tool_write_file(path: str, content: str) -> str: ...
def tool_remember(key: str, value: str) -> str: ...
def tool_recall(key: str = "") -> str: ...
def tool_web_search(query: str) -> str: ...

[~280 lines total, written in your terminal]

── Security Audit ────────────────────────────────────
[PASS] No hardcoded secrets
[PASS] BLOCKED_PATTERNS: .ssh .gnupg .aws .azure .gcloud ...
[PASS] is_safe_path() guards all filesystem tools
[PASS] Unauthorized users: silently ignored
[PASS] History trimmed to 20 messages
[INFO] Total lines: 284 | Dependencies: 4
──────────────────────────────────────────────────────

Done. You read every line above. Ask me to explain anything.
```
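Two of the audit items above (unauthorized users silently ignored, history trimmed to 20 messages) reduce to a few lines each. A minimal sketch, assuming a single allowlisted Telegram user ID; the names here are illustrative, not the generated bot's actual code:

```python
MAX_HISTORY = 20
ALLOWED_USER_IDS = {123456789}  # hypothetical: your own Telegram user ID

def accept_message(user_id: int, text: str, history: list) -> bool:
    """Append the message to history only if the sender is allowed.

    Unknown senders get no reply at all -- silently ignored, so the
    bot never reveals its existence to strangers.
    """
    if user_id not in ALLOWED_USER_IDS:
        return False
    history.append({"role": "user", "content": text})
    del history[:-MAX_HISTORY]  # keep only the most recent 20 entries
    return True
```

Silent ignoring (rather than an "access denied" reply) means a scanner probing bot usernames learns nothing.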
When AI can generate code from ideas, open source doesn't need to mean open source code. It can mean open source ideas.
BuildClaw is a set of Claude Code skills — plain English instructions that tell Claude how to build a personal AI assistant. When you run one, Claude writes the implementation in your terminal, line by line, in front of you.
You never download code written by strangers. You generate code from ideas written by strangers, using a tool you already trust.
There is no malicious code risk. The code on your machine was generated by Claude Code, in your terminal, the moment you ran the skill. You watched every line appear. Code written by strangers doesn't exist here — only ideas do.
This is what open source looks like in the age of coding agents.
Open source AI assistants ship tens or hundreds of thousands of lines of code. You install them, hand over your API keys, grant shell access, and trust that nothing malicious is hidden in the codebase. Some have had real security incidents — infostealers targeting config files, supply chain attacks on plugins and dependencies.
Smaller alternatives are genuinely better — less code, real sandboxing, more auditable. But they are still code you download and run. Someone else wrote it.
BuildClaw is not an alternative implementation. It is a different answer to the question of what you should have to trust. Instead of "here is safer code to run," it says: here are the ideas; generate your own code.
| | Traditional AI assistant | BuildClaw |
|---|---|---|
| What you install | 100K+ lines of code | 3 markdown files |
| Who wrote the code on your machine | Strangers on the internet | Claude, in your terminal, while you watched |
| API keys | Handed to their runtime | Handed to code you generated |
| Supply chain risk | Every dependency they chose | Only dependencies you see generated |
| Audit effort | Read 100K+ lines | Read ~300 generated lines |
A personal AI agent has four layers. BuildClaw makes each one explicit — you choose which blocks to include before any code is written.
```
┌─────────────────────────────────────────────────────────┐
│  Channel     how messages reach your agent              │
│              → Telegram (included)                      │
├─────────────────────────────────────────────────────────┤
│  Agent loop  the Claude reasoning engine                │
│              → included, non-negotiable                 │
├─────────────────────────────────────────────────────────┤
│  Tools       what your agent can do                     │
│              → you choose: files, shell, memory,        │
│                web search, scheduled tasks              │
├─────────────────────────────────────────────────────────┤
│  Security    what it can never touch                    │
│              → hardcoded, non-negotiable                │
│                .ssh .aws .gnupg credentials id_rsa ...  │
└─────────────────────────────────────────────────────────┘
```
You pick the tools layer. Everything else is fixed. That is the whole design.
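The agent-loop layer is a plain tool-dispatch cycle: send the conversation to the model, execute whatever tool call it returns, feed the result back, and repeat until the model answers in prose. A schematic sketch with the model stubbed out (the real bot makes Anthropic API calls here; everything below is illustrative):

```python
def agent_loop(user_message: str, tools: dict, model) -> str:
    """Run the model until it produces a final answer instead of a tool call."""
    history = [{"role": "user", "content": user_message}]
    while True:
        reply = model(history)  # in the real bot: an Anthropic API call
        if reply["type"] == "final":
            return reply["text"]
        # Dispatch the requested tool and feed its result back to the model.
        result = tools[reply["tool"]](**reply["args"])
        history.append({"role": "tool", "tool": reply["tool"], "content": result})

# A stub model: asks for one tool call, then answers using its result.
def stub_model(history):
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool", "tool": "echo", "args": {"text": "hello"}}
    return {"type": "final", "text": f"tool said: {history[-1]['content']}"}
```

The whole "agent" is this loop plus the tool table; every block you choose during `/buildclaw:build` is just another entry in `tools`.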
| Command | What it does |
|---|---|
| `/buildclaw:build` | The Lego manual: detects current state, shows the full architecture, lets you choose what to build |
| `/buildclaw:audit` | Audits any `bot.py` against 15 security checks |
| `/buildclaw:test` | Generates and runs a pytest suite that proves your security invariants hold |
The workflow:
1. `/buildclaw:build` (first run): choose your blocks, get a working bot
2. `/buildclaw:build` (any time after): see what's built, add a new block
3. `/buildclaw:audit`: verify the security layer any time
4. `/buildclaw:test`: run 41 automated security tests
| Block | Risk | What it adds |
|---|---|---|
| `chat` | none | Claude conversation; always included |
| `files` | low | Read and write files inside your workspace |
| `shell` | medium | Run commands inside your workspace (30s timeout) |
| `memory` | low | Persist facts across restarts in `workspace/memory.json` |
| `web` | low-medium | Search the web via DuckDuckGo (no API key needed) |
| `scheduler` | medium | Run tasks autonomously on a schedule |
| `mcp` | variable | Connect to any MCP server (via `/buildclaw:build` only) |
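The `memory` block is the simplest to picture: a JSON file in the workspace, read and rewritten on each call. A sketch of what the generated `tool_remember`/`tool_recall` pair might look like (your generated code may differ in detail):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("workspace") / "memory.json"

def _load() -> dict:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def tool_remember(key: str, value: str) -> str:
    """Persist one fact; it survives bot restarts."""
    data = _load()
    data[key] = value
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    MEMORY_FILE.write_text(json.dumps(data, indent=2))
    return f"Remembered: {key}"

def tool_recall(key: str = "") -> str:
    """Return one fact, or dump everything when no key is given."""
    data = _load()
    return data.get(key, "(nothing stored)") if key else json.dumps(data)
```

Keeping memory inside the workspace matters: the same `is_safe_path()` guard that protects every other file operation covers it automatically.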
Hard-blocked in every generated bot — no path containing these strings can ever be read, written, or listed:
```
.ssh  .gnupg  .aws  .azure  .gcloud  .kube  .docker
.env  credentials  .netrc  .npmrc
id_rsa  id_ed25519  id_ecdsa  private_key  .secret
```
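Enforcement is a single guard that every filesystem tool calls first. A sketch, assuming substring matching against the workspace-relative path (your generated `is_safe_path` may differ in detail):

```python
from pathlib import Path

WORKSPACE = Path("workspace").resolve()
BLOCKED_PATTERNS = {
    ".ssh", ".gnupg", ".aws", ".azure", ".gcloud", ".kube", ".docker",
    ".env", "credentials", ".netrc", ".npmrc",
    "id_rsa", "id_ed25519", "id_ecdsa", "private_key", ".secret",
}

def is_safe_path(path: str) -> bool:
    """True only if path resolves inside WORKSPACE and hits no blocked pattern."""
    resolved = (WORKSPACE / path).resolve()
    if not resolved.is_relative_to(WORKSPACE):  # blocks ../ escapes and absolute paths
        return False
    relative = resolved.relative_to(WORKSPACE).as_posix()
    return not any(pattern in relative for pattern in BLOCKED_PATTERNS)
```

Resolving before checking is the important part: a path like `notes/../../.ssh/id_rsa` fails the containment test no matter how it is spelled.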
- `examples/telegram/bot.py`: 274 lines. This is what your generated bot looks like.
- `examples/telegram/test_security.py`: 41 security tests, all passing.

Read both before running anything.
The skills are plain English markdown. Improving a skill prompt is improving the open source idea — that is the primary contribution path. See CONTRIBUTING.md.
See SECURITY.md for the full threat model — what is protected, what is not, and how to report vulnerabilities.
MIT — see LICENSE.