A persistent, learning, multi-agent development environment built on Claude Code
Continuous Claude transforms Claude Code into a continuously learning system that maintains context across sessions, orchestrates specialized agents, and eliminates token waste through intelligent code analysis.
- Why Continuous Claude?
- Design Principles
- How to Talk to Claude
- Quick Start
- Architecture
- Core Systems
- Workflows
- Installation
- Updating
- Configuration
- Contributing
- License
Claude Code has a compaction problem: when context fills up, the system compacts your conversation, losing nuanced understanding and decisions made during the session.
Continuous Claude solves this with:
| Problem | Solution |
|---|---|
| Context loss on compaction | YAML handoffs - more token-efficient transfer |
| Starting fresh each session | Memory system recalls + daemon auto-extracts learnings |
| Reading entire files burns tokens | 5-layer code analysis + semantic index |
| Complex tasks need coordination | Meta-skills orchestrate agent workflows |
| Repeating workflows manually | 109 skills with natural language triggers |
The mantra: Compound, don't compact. Extract learnings automatically, then start fresh with full context.
The name is a pun. Continuous because Claude maintains state across sessions. Compounding because each session makes the system smarter—learnings accumulate like compound interest.
An agent is five things: Prompt + Tools + Context + Memory + Model.
| Component | What We Optimize |
|---|---|
| Prompt | Skills inject relevant context; hooks add system reminders |
| Tools | TLDR reduces tokens; agents parallelize work |
| Context | Not just what Claude knows, but how it's provided |
| Memory | Daemon extracts learnings; recall surfaces them |
| Model | Becomes swappable when the other four are solid |
We resist plugin sprawl. Every MCP, subscription, and tool you add promises improvement but risks breaking context, tools, or prompts through clashes.
Our approach:
- Time, not money — No required paid services. Perplexity and NIA are optional, high-value-per-token.
- Learn, don't accumulate — A system that learns handles edge cases better than one that collects plugins.
- Shift-left validation — Hooks run pyright/ruff after edits, catching errors before tests.
The failure modes of complex systems are structurally invisible until they happen. A learning, context-efficient system doesn't prevent all failures—but it recovers and improves.
You don't need to memorize slash commands. Just describe what you want naturally.
When you send a message, a hook injects context that tells Claude which skills and agents are relevant. The hints come from a rule-based matcher; Claude then decides which tools to actually use.
> "Fix the login bug in auth.py"
🎯 SKILL ACTIVATION CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ CRITICAL SKILLS (REQUIRED):
→ create_handoff
📚 RECOMMENDED SKILLS:
→ fix
→ debug
🤖 RECOMMENDED AGENTS (token-efficient):
→ debug-agent
→ scout
ACTION: Use Skill tool BEFORE responding
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
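Under the hood, these hints come from keyword and intent-pattern rules (the skill-rules.json format shown under Configuration). A rough Python sketch of the matching idea, using hypothetical rule data:

```python
import re

# Hypothetical rules in the spirit of skill-rules.json: keywords plus intent regexes
RULES = [
    {"skill": "fix", "keywords": ["fix this", "broken", "not working"],
     "intentPatterns": [r"fix.*(bug|issue|error)"]},
    {"skill": "create_handoff", "keywords": ["done for today", "wrap up"],
     "intentPatterns": [r"(end|finish).*(session|work)"]},
]

def match_skills(prompt: str) -> list[str]:
    """Return skills whose keywords or intent patterns match the prompt."""
    text = prompt.lower()
    return [
        rule["skill"]
        for rule in RULES
        if any(kw in text for kw in rule["keywords"])
        or any(re.search(p, text) for p in rule["intentPatterns"])
    ]

print(match_skills("Fix the login bug in auth.py"))  # ['fix']
```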
| Level | Meaning |
|---|---|
| ⚠️ CRITICAL | Must use (e.g., handoffs before ending session) |
| 📚 RECOMMENDED | Should use (e.g., workflow skills) |
| 💡 SUGGESTED | Consider using (e.g., optimization tools) |
| 📌 OPTIONAL | Nice to have (e.g., documentation helpers) |
| What You Say | What Activates |
|---|---|
| "Fix the broken login" | /fix workflow → debug-agent, scout |
| "Build a user dashboard" | /build workflow → plan-agent, kraken |
| "I want to understand this codebase" | /explore + scout agent |
| "What could go wrong with this plan?" | /premortem |
| "Help me figure out what I need" | /discovery-interview |
| "Done for today" | create_handoff (critical) |
| "Resume where we left off" | resume_handoff |
| "Research auth patterns" | oracle agent + perplexity |
| "Find all usages of this API" | scout agent + ast-grep |
| Benefit | How |
|---|---|
| More Discoverable | Don't need to know commands exist |
| Context-Aware | System knows when you're 90% through context |
| Reduces Cognitive Load | Describe intent naturally, get curated suggestions |
| Power User Friendly | Still supports /fix, /build, etc. directly |
| Type | Purpose | Example |
|---|---|---|
| Skill | Single-purpose tool | commit, tldr-code, qlty-check |
| Workflow | Multi-step process | /fix (sleuth → premortem → kraken → commit) |
| Agent | Specialized sub-session | scout (exploration), oracle (research) |
See detailed skill activation docs →
- Python 3.11+
- uv package manager
- Docker (for PostgreSQL)
- Claude Code CLI
# Clone
git clone https://github.com/parcadei/Continuous-Claude-v3.git
cd Continuous-Claude-v3/opc
# Run setup wizard (12 steps)
uv run python -m scripts.setup.wizard

Note: The `pyproject.toml` is in `opc/`. Always run `uv` commands from the `opc/` directory.
| Step | What It Does |
|---|---|
| 1 | Backup existing .claude/ config (if present) |
| 2 | Check prerequisites (Docker, Python, uv) |
| 3-5 | Database + API key configuration |
| 6-7 | Start Docker stack, run migrations |
| 8 | Install Claude Code integration (32 agents, 109 skills, 30 hooks) |
| 9 | Math features (SymPy, Z3, Pint - optional) |
| 10 | TLDR code analysis tool |
| 11-12 | Diagnostics tools + Loogle (optional) |
By default, CC-v3 runs PostgreSQL locally via Docker. For remote database setups:
# Connect to your remote PostgreSQL instance
psql -h hostname -U user -d continuous_claude
# Enable pgvector extension (requires superuser or rds_superuser)
CREATE EXTENSION IF NOT EXISTS vector;
# Apply the schema (from your local clone)
psql -h hostname -U user -d continuous_claude -f docker/init-schema.sql

Managed PostgreSQL tips:

- AWS RDS: Add `vector` to `shared_preload_libraries` in DB Parameter Group
- Supabase: Enable via Database Extensions page
- Azure Database: Use Extensions pane to enable pgvector
Set CONTINUOUS_CLAUDE_DB_URL in ~/.claude/settings.json:
{
"env": {
"CONTINUOUS_CLAUDE_DB_URL": "postgresql://user:password@hostname:5432/continuous_claude"
}
}

Or export before running Claude:
export CONTINUOUS_CLAUDE_DB_URL="postgresql://user:password@hostname:5432/continuous_claude"
claude

See `.env.example` for all available environment variables.
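To sanity-check the connection before starting Claude, a short script like this confirms the database is reachable and pgvector is installed (an illustrative sketch; it assumes the psycopg package, which is not part of the setup):

```python
import os
import psycopg  # pip install "psycopg[binary]"

url = os.environ["CONTINUOUS_CLAUDE_DB_URL"]
with psycopg.connect(url) as conn:
    row = conn.execute(
        "SELECT extversion FROM pg_extension WHERE extname = 'vector'"
    ).fetchone()
    print("pgvector:", row[0] if row else "NOT INSTALLED")
```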
# Start Claude Code
claude
# Try a workflow
> /workflow

| Command | What it does |
|---|---|
| `/workflow` | Goal-based routing (Research/Plan/Build/Fix) |
| `/fix bug <description>` | Investigate and fix a bug |
| `/build greenfield <feature>` | Build a new feature from scratch |
| `/explore` | Understand the codebase |
| `/premortem` | Risk analysis before implementation |
┌─────────────────────────────────────────────────────────────────────┐
│ CONTINUOUS CLAUDE │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Skills │ │ Agents │ │ Hooks │ │
│ │ (109) │───▶│ (32) │◀───│ (30) │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ TLDR Code Analysis │ │
│ │ L1:AST → L2:CallGraph → L3:CFG → L4:DFG → L5:Slicing │ │
│ │ (95% token savings) │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Memory │ │ Continuity │ │ Coordination│ │
│ │ System │ │ Ledgers │ │ Layer │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
SessionStart Working SessionEnd
│ │ │
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Load │ │ Track │ │ Save │
│ context │─────────────────▶│ changes │──────────────────▶│ state │
└─────────┘ └─────────┘ └─────────┘
│ │ │
├── Continuity ledger ├── File claims ├── Handoff
├── Memory recall ├── TLDR indexing ├── Learnings
└── Symbol index └── Blackboard └── Outcome
│
▼
┌─────────┐
│ /clear │
│ Fresh │
│ context │
└─────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ THE CONTINUITY LOOP │
└─────────────────────────────────────────────────────────────────────────────┘
1. SESSION START 2. WORKING
┌────────────────────┐ ┌────────────────────┐
│ │ │ │
│ Ledger loaded ────┼──▶ Context │ PostToolUse ──────┼──▶ Index handoffs
│ Handoff loaded │ │ UserPrompt ───────┼──▶ Skill hints
│ Memory recalled │ │ Edit tracking ────┼──▶ Dirty flag++
│ TLDR cache warmed │ │ SubagentStop ─────┼──▶ Agent reports
│ │ │ │
└────────────────────┘ └────────────────────┘
│ │
│ ▼
│ ┌────────────────────┐
│ │ 3. PRE-COMPACT │
│ │ │
│ │ Auto-handoff ─────┼──▶ thoughts/shared/
│ │ (YAML format) │ handoffs/*.yaml
│ │ Dirty > 20? ──────┼──▶ TLDR re-index
│ │ │
│ └────────────────────┘
│ │
│ ▼
│ ┌────────────────────┐
│ │ 4. SESSION END │
│ │ │
│ │ Stale heartbeat ──┼──▶ Daemon wakes
│ │ Daemon spawns ────┼──▶ Headless Claude
│ │ Thinking blocks ──┼──▶ archival_memory
│ │ │
│ └────────────────────┘
│ │
│ │
└──────────────◀────── /clear ◀──────┘
Fresh context + state preserved
┌─────────────────────────────────────────────────────────────────────────────┐
│ META-SKILL WORKFLOWS │
└─────────────────────────────────────────────────────────────────────────────┘
/fix bug /build greenfield
───────── ─────────────────
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ sleuth │─▶│ premortem│ │discovery │─▶│plan-agent│
│(diagnose)│ │ (risk) │ │(clarify) │ │ (design) │
└──────────┘ └────┬─────┘ └──────────┘ └────┬─────┘
│ │
▼ ▼
┌──────────┐ ┌──────────┐
│ kraken │ │ validate │
│ (fix) │ │ (check) │
└────┬─────┘ └────┬─────┘
│ │
▼ ▼
┌──────────┐ ┌──────────┐
│ arbiter │ │ kraken │
│ (test) │ │(implement│
└────┬─────┘ └────┬─────┘
│ │
▼ ▼
┌──────────┐ ┌──────────┐
│ commit │ │ commit │
└──────────┘ └──────────┘
/tdd /refactor
──── ─────────
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│plan-agent│─▶│ arbiter │ │ phoenix │─▶│ warden │
│ (design) │ │(tests 🔴)│ │(analyze) │ │ (review) │
└──────────┘ └────┬─────┘ └──────────┘ └────┬─────┘
│ │
▼ ▼
┌──────────┐ ┌──────────┐
│ kraken │ │ kraken │
│(code 🟢) │ │(transform│
└────┬─────┘ └────┬─────┘
│ │
▼ ▼
┌──────────┐ ┌──────────┐
│ arbiter │ │ judge │
│(verify ✓)│ │ (review) │
└──────────┘ └──────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ DATA LAYER ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ TLDR 5-LAYER CODE ANALYSIS SEMANTIC INDEX │
│ ┌────────────────────────┐ ┌────────────────────────┐ │
│ │ L1: AST (~500 tok) │ │ BGE-large-en-v1.5 │ │
│ │ └── Functions, │ │ ├── All 5 layers │ │
│ │ classes, sigs │ │ ├── 10 lines context │ │
│ │ │ │ └── FAISS index │ │
│ │ L2: Call Graph (+440) │ │ │ │
│ │ └── Cross-file │──────────────│ Query: "auth logic" │ │
│ │ dependencies │ │ Returns: ranked funcs │ │
│ │ │ └────────────────────────┘ │
│ │ L3: CFG (+110 tok) │ │
│ │ └── Control flow │ │
│ │ │ MEMORY (PostgreSQL+pgvector) │
│ │ L4: DFG (+130 tok) │ ┌────────────────────────┐ │
│ │ └── Data flow │ │ sessions (heartbeat) │ │
│ │ │ │ file_claims (locks) │ │
│ │ L5: PDG (+150 tok) │ │ archival_memory (BGE) │ │
│ │ └── Slicing │ │ handoffs (embeddings) │ │
│ └────────────────────────┘ └────────────────────────┘ │
│ ~1,200 tokens │
│ vs 23,000 raw │
│ = 95% savings FILE SYSTEM │
│ ┌────────────────────────┐ │
│ │ thoughts/ │ │
│ │ ├── ledgers/ │ │
│ │ │ └── CONTINUITY_*.md│ │
│ │ └── shared/ │ │
│ │ ├── handoffs/*.yaml│ │
│ │ └── plans/*.md │ │
│ │ │ │
│ │ .tldr/ │ │
│ │ └── (daemon cache) │ │
│ └────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Skills are modular capabilities triggered by natural language. Located in .claude/skills/.
| Meta-Skill | Chain | Use When |
|---|---|---|
| `/workflow` | Router → appropriate workflow | Don't know where to start |
| `/build` | discovery → plan → validate → implement → commit | Building features |
| `/fix` | sleuth → premortem → kraken → test → commit | Fixing bugs |
| `/tdd` | plan → arbiter (tests) → kraken (implement) → arbiter | Test-first development |
| `/refactor` | phoenix → plan → kraken → reviewer → arbiter | Safe code transformation |
| `/review` | parallel specialized reviews → synthesis | Code review |
| `/explore` | scout (quick/deep/architecture) | Understand codebase |
| `/security` | vulnerability scan → verification | Security audits |
| `/release` | audit → E2E → review → changelog | Ship releases |
Each meta-skill supports modes, scopes, and flags. Type the skill alone (e.g., /build) to get an interactive question flow.
/build <mode> [options] [description]
| Mode | Chain | Use For |
|---|---|---|
| `greenfield` | discovery → plan → validate → implement → commit → PR | New feature from scratch |
| `brownfield` | onboard → research → plan → validate → implement | Feature in existing codebase |
| `tdd` | plan → test-first → implement | Test-driven development |
| `refactor` | impact analysis → plan → TDD → implement | Safe refactoring |
| Option | Effect |
|---|---|
| `--skip-discovery` | Skip interview phase (have clear spec) |
| `--skip-validate` | Skip plan validation |
| `--skip-commit` | Don't auto-commit |
| `--skip-pr` | Don't create PR description |
| `--parallel` | Run research agents in parallel |
/fix <scope> [options] [description]
| Scope | Chain | Use For |
|---|---|---|
| `bug` | debug → implement → test → commit | General bug fix |
| `hook` | debug-hooks → hook-developer → implement → test | Hook issues |
| `deps` | preflight → oracle → plan → implement → qlty | Dependency errors |
| `pr-comments` | github-search → research → plan → implement → commit | PR feedback |
| Option | Effect |
|---|---|
| `--no-test` | Skip regression test |
| `--dry-run` | Diagnose only, don't fix |
| `--no-commit` | Don't auto-commit |
/explore <depth> [options]
| Depth | Time | What It Does |
|---|---|---|
| `quick` | ~1 min | tldr tree + structure overview |
| `deep` | ~5 min | onboard + tldr + research + documentation |
| `architecture` | ~3 min | tldr arch + call graph + layers |
| Option | Effect |
|---|---|
--focus "area" |
Focus on specific area (e.g., --focus "auth") |
--output handoff |
Create handoff for implementation |
--output doc |
Create documentation file |
--entry "func" |
Start from specific entry point |
/tdd, /refactor, /review, /security, /release
These follow their defined chains without mode flags. Just run:
/tdd "implement retry logic"
/refactor "extract auth module"
/review # reviews current changes
/security "authentication code"
/release v1.2.0
Planning & Risk
- premortem: TIGERS & ELEPHANTS risk analysis - use before any significant implementation
- discovery-interview: Transform vague ideas into detailed specs
Context Management
- create_handoff: Capture session state for transfer
- resume_handoff: Resume from handoff with context
- continuity_ledger: Track state within session
Code Analysis (95% Token Savings)
- tldr-code: Call graph, CFG, DFG, slicing
- ast-grep-find: Structural code search
- morph-search: Fast text search (20x faster than grep)
Research
- perplexity-search: AI-powered web search
- nia-docs: Library documentation search
- github-search: Search GitHub code/issues/PRs
Quality
- qlty-check: 70+ linters, auto-fix
- braintrust-analyze: Session analysis, replay, and debugging failed sessions
Math & Formal Proofs
- math: Unified computation (SymPy, Z3, Pint) — one entry point for all math
- prove: Lean4 theorem proving with 5-phase workflow (Research → Design → Test → Implement → Verify)
- pint-compute: Unit-aware arithmetic and conversions
- shapely-compute: Computational geometry
The /prove skill enables machine-verified proofs without learning Lean syntax. Used to create the first Lean formalization of the Sylvester-Gallai theorem.
What do I want to do?
├── Don't know → /workflow (guided router)
├── Building → /build greenfield or brownfield
├── Fixing → /fix bug
├── Understanding → /explore
├── Planning → premortem first, then plan-agent
├── Researching → oracle or perplexity-search
├── Reviewing → /review
├── Proving → /prove (Lean4 formal verification)
├── Computing → /math (SymPy, Z3, Pint)
└── Shipping → /release
See detailed skills breakdown →
Agents are specialized AI workers spawned via the Task tool. Located in .claude/agents/.
Note: There are likely too many agents—consolidation is a v4 goal. Use what fits your workflow.
Orchestrators (2)
- maestro: Multi-agent coordination with patterns (Pipeline, Swarm, Jury)
- kraken: TDD implementation agent with checkpoint/resume support
Planners (4)
- architect: Feature planning + API integration
- phoenix: Refactoring + framework migration planning
- plan-agent: Lightweight planning with research/MCP tools
- validate-agent: Validate plans against best practices
Explorers (4)
- scout: Codebase exploration (use instead of Explore)
- oracle: External research (web, docs, APIs)
- pathfinder: External repository analysis
- research-codebase: Document codebase as-is
Implementers (3)
- kraken: TDD implementation with strict test-first workflow
- spark: Lightweight fixes and quick tweaks
- agentica-agent: Build Python agents using Agentica SDK
Debuggers (3)
- sleuth: General bug investigation and root cause
- debug-agent: Issue investigation via logs/code search
- profiler: Performance profiling and race conditions
Validators (2) - arbiter, atlas
Reviewers (6) - critic, judge, surveyor, liaison, plan-reviewer, review-agent
Specialized (8) - aegis, herald, scribe, chronicler, session-analyst, braintrust-analyst, memory-extractor, onboard
| Workflow | Agent Chain |
|---|---|
| Feature | architect → plan-reviewer → kraken → review-agent → arbiter |
| Refactoring | phoenix → plan-reviewer → kraken → judge → arbiter |
| Bug Fix | sleuth → spark/kraken → arbiter → scribe |
Hooks intercept Claude Code at lifecycle points. Located in .claude/hooks/.
| Event | Key Hooks | Purpose |
|---|---|---|
| SessionStart | session-start-continuity, session-register, braintrust-tracing | Load context, register session |
| PreToolUse | tldr-read-enforcer, smart-search-router, tldr-context-inject, file-claims | Token savings, search routing |
| PostToolUse | post-edit-diagnostics, handoff-index, post-edit-notify | Validation, indexing |
| PreCompact | pre-compact-continuity | Auto-save before compaction |
| UserPromptSubmit | skill-activation-prompt, memory-awareness | Skill hints, memory recall |
| SubagentStop | subagent-stop-continuity | Save agent state |
| SessionEnd | session-end-cleanup, session-outcome | Cleanup, extract learnings |
| Hook | Purpose |
|---|---|
| tldr-context-inject | Adds code analysis to agent prompts |
| smart-search-router | Routes grep to AST-grep when appropriate |
| post-edit-diagnostics | Runs pyright/ruff after edits |
| memory-awareness | Surfaces relevant learnings |
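The repo's hooks are written in TypeScript, but conceptually a hook is just a command that reads the tool event as JSON on stdin and reports findings. A hypothetical Python sketch of a post-edit diagnostics hook (field names and the exact I/O contract are assumptions; see the Claude Code hooks documentation):

```python
#!/usr/bin/env python3
"""Hypothetical post-edit diagnostics hook: lint the file that was just edited."""
import json
import subprocess
import sys

event = json.load(sys.stdin)                              # hook event payload (assumed shape)
path = event.get("tool_input", {}).get("file_path", "")

if path.endswith(".py"):
    result = subprocess.run(["ruff", "check", path], capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)                              # surface findings back to the session
sys.exit(0)
```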
TLDR provides token-efficient code summaries through 5 analysis layers.
| Layer | Name | What it provides | Tokens |
|---|---|---|---|
| L1 | AST | Functions, classes, signatures | ~500 tokens |
| L2 | Call Graph | Who calls what (cross-file) | +440 tokens |
| L3 | CFG | Control flow, complexity | +110 tokens |
| L4 | DFG | Data flow, variable tracking | +130 tokens |
| L5 | PDG | Program slicing, impact analysis | +150 tokens |
Total: ~1,200 tokens vs 23,000 raw = 95% savings
# Structure analysis
tldr tree src/ # File tree
tldr structure src/ --lang python # Code structure (codemaps)
# Search and extraction
tldr search "process_data" src/ # Find code
tldr context process_data --project src/ --depth 2 # LLM-ready context
# Flow analysis
tldr cfg src/main.py main # Control flow graph
tldr dfg src/main.py main # Data flow graph
tldr slice src/main.py main 42 # What affects line 42?
# Codebase analysis
tldr impact process_data src/ # Who calls this function?
tldr dead src/ # Find unreachable code
tldr arch src/ # Detect architectural layers
# Semantic search (natural language)
tldr daemon semantic "find authentication logic"

Beyond structural analysis, TLDR builds a semantic index of your codebase:

- Natural language queries — Ask "where is error handling?" instead of grepping
- Auto-rebuild — Dirty flag hook tracks file changes; index rebuilds after N edits
- Selective indexing — Use `.tldrignore` to control what gets indexed
# .tldrignore example
__pycache__/
*.test.py
node_modules/
.venv/

The semantic index uses all 5 layers plus 10 lines of surrounding code context—not just docstrings.
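Conceptually, the semantic index pairs BGE embeddings with a FAISS index, something like this simplified sketch (illustrative only, not the actual TLDR implementation):

```python
import faiss
from sentence_transformers import SentenceTransformer

# In TLDR, snippets come from the 5 analysis layers plus surrounding code context
snippets = [
    "def login(user, password): ...  # validate credentials against the user table",
    "def retry(fn, attempts=3): ...  # exponential backoff wrapper",
]

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
vectors = model.encode(snippets, normalize_embeddings=True)  # unit vectors, so inner product = cosine

index = faiss.IndexFlatIP(vectors.shape[1])
index.add(vectors)

query = model.encode(["find authentication logic"], normalize_embeddings=True)
scores, ids = index.search(query, 1)
print(snippets[ids[0][0]], scores[0][0])
```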
TLDR is automatically integrated via hooks:
- tldr-read-enforcer: Returns L1+L2+L3 instead of full file reads
- smart-search-router: Routes Grep to `tldr search`
- post-tool-use-tracker: Updates indexes when files change
Cross-session learning powered by PostgreSQL + pgvector.
Session ends → Database detects stale heartbeat (>5 min)
→ Daemon spawns headless Claude (Sonnet)
→ Analyzes thinking blocks from session
→ Extracts learnings to archival_memory
→ Next session recalls relevant learnings
The key insight: thinking blocks contain the real reasoning—not just what Claude did, but why. The daemon extracts this automatically.
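A rough sketch of that loop (illustrative only; the column names and the headless invocation are assumptions):

```python
import subprocess
import psycopg

with psycopg.connect("postgresql://localhost/continuous_claude") as conn:
    # Sessions whose heartbeat has gone stale for more than 5 minutes
    stale = conn.execute(
        "SELECT session_id FROM sessions "
        "WHERE last_heartbeat < now() - %s::interval",
        ("5 minutes",),
    ).fetchall()

for (session_id,) in stale:
    # Spawn a headless Claude (print mode) to distill the session's thinking blocks
    subprocess.run([
        "claude", "-p",
        f"Extract durable learnings from session {session_id} and store them in archival_memory",
    ])
```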
| What You Say | What Happens |
|---|---|
| "Remember that auth uses JWT" | Stores learning with context |
| "Recall authentication patterns" | Searches memory, surfaces matches |
| "What did we decide about X?" | Implicit recall via memory-awareness hook |
| Table | Purpose |
|---|---|
| sessions | Cross-terminal awareness |
| file_claims | Cross-terminal file locking |
| archival_memory | Long-term learnings with BGE embeddings |
| handoffs | Session handoffs with embeddings |
# Recall learnings (hybrid text + vector search)
cd opc && uv run python scripts/core/recall_learnings.py \
--query "authentication patterns"
# Store a learning explicitly
cd opc && uv run python scripts/core/store_learning.py \
--session-id "my-session" \
--type WORKING_SOLUTION \
--content "What I learned" \
--confidence high

The memory-awareness hook surfaces relevant learnings when you send a message. You'll see MEMORY MATCH indicators—Claude can use these without you asking.
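Under the hood, recall boils down to a pgvector similarity query over archival_memory, blended with text search. A hedged sketch of the vector half, with assumed column names:

```python
import psycopg
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
emb = model.encode("authentication patterns", normalize_embeddings=True)
vec = "[" + ",".join(str(x) for x in emb) + "]"   # pgvector literal format

with psycopg.connect("postgresql://localhost/continuous_claude") as conn:
    rows = conn.execute(
        "SELECT content, 1 - (embedding <=> %s::vector) AS similarity "
        "FROM archival_memory ORDER BY embedding <=> %s::vector LIMIT 5",
        (vec, vec),
    ).fetchall()

for content, score in rows:
    print(f"{score:.2f}  {content[:80]}")
```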
Preserve state across context clears and sessions.
Within-session state tracking. Location: thoughts/ledgers/CONTINUITY_<topic>.md
# Session: feature-x
Updated: 2026-01-08
## Goal
Implement feature X with proper error handling
## Completed
- [x] Designed API schema
- [x] Implemented core logic
## In Progress
- [ ] Add error handling
## Blockers
- Need clarification on retry policy

Between-session knowledge transfer. Location: thoughts/shared/handoffs/<session>/
---
date: 2026-01-08T15:26:01+0000
session_name: feature-x
status: complete
---
# Handoff: Feature X Implementation
## Task(s)
| Task | Status |
|------|--------|
| Design API | Completed |
| Implement core | Completed |
| Error handling | Pending |
## Next Steps
1. Add retry logic to API calls
2. Write integration tests

| Command | Effect |
|---|---|
| "save state" | Updates continuity ledger |
| "done for today" / `/handoff` | Creates handoff document |
| "resume work" | Loads latest handoff |
Two capabilities: computation (SymPy, Z3, Pint) and formal verification (Lean4 + Mathlib).
| Tool | Purpose | Example |
|---|---|---|
| SymPy | Symbolic math | Solve equations, integrals, matrix operations |
| Z3 | Constraint solving | Prove inequalities, SAT problems |
| Pint | Unit conversion | Convert miles to km, dimensional analysis |
| Lean4 | Formal proofs | Machine-verified theorems |
| Mathlib | 100K+ theorems | Pre-formalized lemmas to build on |
| Loogle | Type-aware search | Find Mathlib lemmas by signature |
| Skill | Use When |
|---|---|
| `/math` | Computing, solving, calculating |
| `/prove` | Formal verification, machine-checked proofs |
# Solve equation
"Solve x² - 4 = 0" → x = ±2
# Compute eigenvalues
"Eigenvalues of [[2,1],[1,2]]" → {1: 1, 3: 1}
# Prove inequality
"Is x² + y² ≥ 2xy always true?" → PROVED (equals (x-y)²)
# Convert units
"26.2 miles to km" → 42.16 km5-phase workflow for machine-verified proofs:
5-phase workflow for machine-verified proofs:

📚 RESEARCH → 🏗️ DESIGN → 🧪 TEST → ⚙️ IMPLEMENT → ✅ VERIFY
- Research: Search Mathlib with Loogle, find proof strategy
- Design: Create skeleton with `sorry` placeholders (see the sketch after this list)
- Test: Search for counterexamples before proving
- Implement: Fill sorries with compiler-in-the-loop feedback
- Verify: Audit axioms, confirm zero sorries
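For example, a Design-phase skeleton for the first /prove request below might look like this (an illustrative Lean 4 + Mathlib sketch, not the skill's verbatim output):

```lean
import Mathlib

-- Design phase: state the theorem, leave the proof as a `sorry` placeholder
theorem hom_preserves_identity {G H : Type*} [Group G] [Group H] (f : G →* H) :
    f 1 = 1 := by
  sorry -- Implement phase fills this in; Mathlib's `map_one f` closes the goal
```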
/prove every group homomorphism preserves identity
/prove continuous functions on compact sets are uniformly continuous
Achievement: Used to create the first Lean formalization of the Sylvester-Gallai theorem.
Math features require installation via wizard step 9:
# Installed automatically by wizard
uv pip install sympy z3-solver pint shapely
# Lean4 (for /prove)
curl https://raw.githubusercontent.com/leanprover/elan/master/elan-init.sh -sSf | sh

> /workflow
? What's your goal?
○ Research - Understand codebase/docs
○ Plan - Design implementation approach
○ Build - Implement features
○ Fix - Investigate and resolve issues
/fix bug "login fails silently"Chain: sleuth → [checkpoint] → [premortem] → kraken → test → commit
| Scope | What it does |
|---|---|
| `bug` | General bug investigation |
| `hook` | Hook-specific debugging |
| `deps` | Dependency issues |
| `pr-comments` | Address PR feedback |
/build greenfield "user dashboard"Chain: discovery → plan → validate → implement → commit → PR
| Mode | What it does |
|---|---|
| `greenfield` | New feature from scratch |
| `brownfield` | Modify existing codebase |
| `tdd` | Test-first development |
| `refactor` | Safe code transformation |
/premortem deep thoughts/shared/plans/feature-x.md

Output:
- TIGERS: Clear threats (HIGH/MEDIUM/LOW severity)
- ELEPHANTS: Unspoken concerns
Blocks on HIGH severity until user accepts/mitigates risks.
# Clone
git clone https://github.com/parcadei/continuous-claude.git
cd continuous-claude/opc
# Run the setup wizard
uv run python -m scripts.setup.wizard

The wizard walks you through all configuration options interactively.
Pull latest changes and sync your installation:
cd continuous-claude/opc
uv run python -m scripts.setup.update

This will:
- Pull latest from GitHub
- Update hooks, skills, rules, agents
- Upgrade TLDR if installed
- Rebuild TypeScript hooks if changed
| Component | Location |
|---|---|
| Agents (32) | ~/.claude/agents/ |
| Skills (109) | ~/.claude/skills/ |
| Hooks (30) | ~/.claude/hooks/ |
| Rules | ~/.claude/rules/ |
| Scripts | ~/.claude/scripts/ |
| PostgreSQL | Docker container |
The wizard offers two installation modes:
| Mode | How It Works | Best For |
|---|---|---|
| Copy (default) | Copies files from repo to `~/.claude/` | End users, stable setup |
| Symlink | Creates symlinks to repo files | Contributors, development |
Files are copied from continuous-claude/.claude/ to ~/.claude/. Changes you make in ~/.claude/ are local only and will be overwritten on next update.
continuous-claude/.claude/ ──COPY──> ~/.claude/
(source) (user config)
Pros: Stable, isolated from repo changes
Cons: Local changes lost on update, manual sync needed
Creates symlinks so ~/.claude/ points directly to repo files. Changes in either location affect the same files.
~/.claude/rules ──SYMLINK──> continuous-claude/.claude/rules
~/.claude/skills ──SYMLINK──> continuous-claude/.claude/skills
~/.claude/hooks ──SYMLINK──> continuous-claude/.claude/hooks
~/.claude/agents ──SYMLINK──> continuous-claude/.claude/agents
Pros:
- Changes auto-sync to repo (can `git commit` improvements)
- No re-installation needed after `git pull`
- Contribute back easily
Cons:
- Breaking changes in repo affect your setup immediately
- Need to manage git workflow
If you installed with copy mode and want to switch:
# Backup current config
mkdir -p ~/.claude/backups/$(date +%Y%m%d)
cp -r ~/.claude/{rules,skills,hooks,agents} ~/.claude/backups/$(date +%Y%m%d)/
# Verify backup succeeded before proceeding
ls -la ~/.claude/backups/$(date +%Y%m%d)/
# Remove copies (only after verifying backup above)
rm -rf ~/.claude/{rules,skills,hooks,agents}
# Create symlinks (adjust path to your repo location)
REPO="$HOME/continuous-claude" # or wherever you cloned
ln -s "$REPO/.claude/rules" ~/.claude/rules
ln -s "$REPO/.claude/skills" ~/.claude/skills
ln -s "$REPO/.claude/hooks" ~/.claude/hooks
ln -s "$REPO/.claude/agents" ~/.claude/agents
# Verify
ls -la ~/.claude | grep -E "rules|skills|hooks|agents"

Windows users: Use PowerShell (as Administrator or with Developer Mode enabled):
# Enable Developer Mode first (Settings → Privacy & security → For developers)
# Or run PowerShell as Administrator
# Backup current config
$BackupDir = "$HOME\.claude\backups\$(Get-Date -Format 'yyyyMMdd')"
New-Item -ItemType Directory -Path $BackupDir -Force
Copy-Item -Recurse "$HOME\.claude\rules","$HOME\.claude\skills","$HOME\.claude\hooks","$HOME\.claude\agents" $BackupDir
# Verify backup succeeded before proceeding
Get-ChildItem $BackupDir
# Remove copies (only after verifying backup above)
Remove-Item -Recurse "$HOME\.claude\rules","$HOME\.claude\skills","$HOME\.claude\hooks","$HOME\.claude\agents"
# Create symlinks (adjust path to your repo location)
$REPO = "$HOME\continuous-claude" # or wherever you cloned
New-Item -ItemType SymbolicLink -Path "$HOME\.claude\rules" -Target "$REPO\.claude\rules"
New-Item -ItemType SymbolicLink -Path "$HOME\.claude\skills" -Target "$REPO\.claude\skills"
New-Item -ItemType SymbolicLink -Path "$HOME\.claude\hooks" -Target "$REPO\.claude\hooks"
New-Item -ItemType SymbolicLink -Path "$HOME\.claude\agents" -Target "$REPO\.claude\agents"
# Verify
Get-ChildItem "$HOME\.claude" | Where-Object { $_.LinkType -eq "SymbolicLink" }After installation, start Claude and run:
> /onboard
This analyzes the codebase and creates an initial continuity ledger.
Central configuration for hooks, tools, and workflows.
{
"hooks": {
"SessionStart": [...],
"PreToolUse": [...],
"PostToolUse": [...],
"UserPromptSubmit": [...]
}
}

Skill activation triggers.
{
"rules": [
{
"skill": "fix",
"keywords": ["fix this", "broken", "not working"],
"intentPatterns": ["fix.*(bug|issue|error)"]
}
]
}

| Variable | Purpose | Required |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | Yes |
| `BRAINTRUST_API_KEY` | Session tracing | No |
| `PERPLEXITY_API_KEY` | Web search | No |
| `NIA_API_KEY` | Documentation search | No |
| `CLAUDE_OPC_DIR` | Path to CC's opc/ directory (set by wizard) | Auto |
| `CLAUDE_PROJECT_DIR` | Current project directory (set by SessionStart hook) | Auto |
These features work without any API keys:
- Continuity system (ledgers, handoffs)
- TLDR code analysis
- Local git operations
- TDD workflow
continuous-claude/
├── .claude/
│ ├── agents/ # 32 specialized AI agents
│ ├── hooks/ # 30 lifecycle hooks
│ │ ├── src/ # TypeScript source
│ │ └── dist/ # Compiled JavaScript
│ ├── skills/ # 109 modular capabilities
│ ├── rules/ # System policies
│ ├── scripts/ # Python utilities
│ └── settings.json # Hook configuration
├── opc/
│ ├── packages/
│ │ └── tldr-code/ # 5-layer code analysis
│ ├── scripts/
│ │ ├── setup/ # Wizard, Docker, integration
│ │ └── core/ # recall_learnings, store_learning
│ └── docker/
│ └── init-schema.sql # 4-table PostgreSQL schema
├── thoughts/
│ ├── ledgers/ # Continuity ledgers (CONTINUITY_*.md)
│ └── shared/
│ ├── handoffs/ # Session handoffs (*.yaml)
│ └── plans/ # Implementation plans
└── docs/ # Documentation
See CONTRIBUTING.md for guidelines on:
- Adding new skills
- Creating agents
- Developing hooks
- Extending TLDR
- @numman-ali - Continuity ledger pattern
- Anthropic - Claude Code and "Code Execution with MCP"
- obra/superpowers - Agent orchestration patterns
- EveryInc/compound-engineering-plugin - Compound engineering workflow
- yoloshii/mcp-code-execution-enhanced - Enhanced MCP execution
- HumanLayer - Agent patterns
- uv - Python packaging
- tree-sitter - Code parsing
- Braintrust - LLM evaluation, logging, and session tracing
- qlty - Universal code quality CLI (70+ linters)
- ast-grep - AST-based code search and refactoring
- Nia - Library documentation search
- Morph - WarpGrep fast code search
- Firecrawl - Web scraping API
- RepoPrompt - Token-efficient codebase maps
MIT - Use freely, contribute back.
Continuous Claude: Not just a coding assistant—a persistent, learning, multi-agent development environment that gets smarter with every session.