Track what works. Prove it. Drop what doesn't.
Every AI-assisted work session produces decisions, corrections, and outcomes. Almost all of it gets discarded. The next session starts from scratch with the same blind spots.
buildlog captures structured trajectories from real work, extracts decision patterns, and uses Thompson Sampling to select which patterns to surface. Then it measures whether that selection actually reduced mistakes.
Each session is a dated entry documenting what you did, what went wrong, and what you learned -- a structured record of decisions and outcomes, not a chat transcript.
```bash
buildlog init                        # scaffold a project
buildlog new my-feature              # start a session
# ... work ...
buildlog commit -m "feat: add auth"
```

The seed engine watches your development patterns and extracts seeds: atomic observations about what works. A seed might be "always define interfaces before implementations" or "mock at the boundary, not the implementation." Each seed carries a category, a confidence score, and source provenance.
Extraction runs through a pipeline: sources -> extractors -> categorizers -> generators. Extractors range from regex-based (fast, cheap, brittle) to LLM-backed (accurate, expensive). The pipeline deduplicates semantically using embeddings.
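The pipeline stages can be sketched in a few lines. This is an illustrative toy, not buildlog's actual internals: the function names, the `lesson:` log convention, and the exact-match dedup (standing in for embedding-based semantic dedup) are all assumptions made for the example.

```python
import re

def regex_extractor(log_text):
    """Fast, cheap, brittle: pull 'lesson:' lines out of a session log."""
    return [m.group(1).strip() for m in re.finditer(r"lesson:\s*(.+)", log_text)]

def categorize(observation):
    """Toy categorizer keyed on a single keyword."""
    return "testing" if "mock" in observation.lower() else "design"

def generate_seeds(log_text):
    seen = set()   # stand-in: the real pipeline deduplicates semantically via embeddings
    seeds = []
    for obs in regex_extractor(log_text):
        if obs.lower() in seen:
            continue
        seen.add(obs.lower())
        seeds.append({"text": obs, "category": categorize(obs),
                      "confidence": 0.5, "source": "regex"})
    return seeds

log = ("lesson: mock at the boundary, not the implementation\n"
       "lesson: define interfaces first")
for seed in generate_seeds(log):
    print(seed["category"], "->", seed["text"])
```

Swapping `regex_extractor` for an LLM-backed one changes only the first stage; the categorize/dedupe/generate stages stay the same.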
The gauntlet is an automated quality gate with curated reviewer personas. It runs on your code and files findings categorized by severity. When a reviewer cites a rule in their review, that rule gets credited -- this is the sole feedback signal that drives learning.
Seeds compete for inclusion in your agent's instruction set. The system treats each seed as an arm in a multi-armed bandit and uses Thompson Sampling (via qortex-learning) to balance exploration (trying under-tested rules) against exploitation (surfacing rules with strong track records).
Each seed maintains a Beta posterior updated by gauntlet review outcomes. Over time, the system converges on the rules that actually reduce mistakes in your specific codebase and workflow, not rules that sound good in the abstract.
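The selection mechanics look roughly like this. A hedged sketch, not qortex-learning's API: each seed is a bandit arm with a Beta(alpha, beta) posterior, we draw one sample per arm, and the top-k draws get surfaced. The rule names and counts below are invented.

```python
import random

def thompson_select(posteriors, k=2, rng=random.Random(0)):
    """Sample each arm's Beta posterior once; surface the k highest draws."""
    draws = {rule: rng.betavariate(a, b) for rule, (a, b) in posteriors.items()}
    return sorted(draws, key=draws.get, reverse=True)[:k]

posteriors = {
    "interfaces-first": (9, 2),   # strong track record -> usually exploited
    "mock-boundaries":  (4, 3),
    "new-untested":     (1, 1),   # uniform prior -> occasionally explored
}
print(thompson_select(posteriors))
```

Because the draw is random, an under-tested arm like `new-untested` sometimes beats a proven one; that randomness is what implements exploration without any explicit epsilon schedule.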
Selected rules are written into the instruction files your agents actually read:
- `CLAUDE.md` (Claude Code)
- `.cursorrules` (Cursor)
- `.github/copilot-instructions.md` (GitHub Copilot)
- Windsurf, Continue.dev, generic `settings.json`

```bash
buildlog skills   # render current policy to agent files
```

The gauntlet closes the loop automatically. Every gauntlet run credits the rules its reviewers cite, and `log_reward(outcome="accepted")` after PR approval updates the Thompson Sampling posteriors. No extra ceremony required.
For teams that want longitudinal tracking across many sessions, buildlog also ships optional experiment/session commands that measure Repeated Mistake Rate (RMR) over time:
```bash
# Optional — for longitudinal RMR tracking
buildlog experiment start
# ... work across sessions ...
buildlog experiment end
buildlog experiment report
```

The feedback loop is fully closed and mechanically proven:
```
Gauntlet Review
      |
      v
gauntlet_process_issues()
      |-- credits rules cited by reviewers
      |-- persists credited rule IDs to SQLite (gauntlet_credits table)
      v
log_reward(outcome="accepted")
      |-- reads latest gauntlet_credits from SQLite
      |-- calls bandit.batch_update(rules, reward)
      v
qortex Learner (Thompson Sampling)
      |-- Beta(alpha, beta) posteriors shift
      |-- next select() favors rules with higher posteriors
```
The gauntlet is the sole feedback source. Rules get credited when cited in reviews, not from session selection. This eliminates the credit assignment problem: only rules that demonstrably contributed to review quality get reinforced.
Each gauntlet citation followed by a reward acceptance increments alpha in the Beta posterior. A rule that starts at Beta(1,1) with mean 0.5 (uniform prior / no evidence) converges toward 1.0 as it accumulates positive evidence, making it increasingly likely to be selected for future sessions.
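The arithmetic of that convergence is easy to verify directly. A minimal sketch of the update rule described above (reward = 1 increments alpha), assuming nothing beyond the Beta posterior math:

```python
# Start at Beta(1,1): uniform prior over the rule's usefulness.
alpha, beta = 1.0, 1.0
assert alpha / (alpha + beta) == 0.5     # prior mean, no evidence

# Twenty gauntlet citations, each followed by an accepted reward.
for _ in range(20):
    alpha += 1                           # reward = 1 -> increment alpha

mean = alpha / (alpha + beta)
print(round(mean, 3))                    # 21 / 22 ≈ 0.955
```

A rejected outcome would increment beta instead, dragging the mean back down; a rule that keeps getting cited but never accepted loses ground rather than coasting on citation count alone.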
- LLM-backed extraction: when regex isn't enough, the seed engine can use OpenAI, Anthropic, or Ollama to extract patterns from code and logs. A metered backend tracks token usage and cost.
- Global SQLite storage: all buildlog data lives in a single global database at `~/.buildlog/buildlog.db` (SQLite with WAL mode, schema v7). Projects are isolated via hashed project IDs derived from git remote URLs. Legacy per-project JSON/JSONL files are still supported as a fallback.
- Migration and export: `buildlog migrate` converts legacy JSON/JSONL files to the global database (idempotent, non-destructive). `buildlog export` dumps data back to JSONL for portability or backup.
- Ambient emission protocol: mistakes and learned rules are automatically emitted as JSON artifacts to `~/.buildlog/emissions/pending/` for offline ingestion by downstream systems (knowledge graphs, analytics). Fire-and-forget -- emission failure never breaks the primary operation.
- Workflow enforcement: `buildlog verify` checks your setup (CLAUDE.md workflow section, MCP registration, branch protection hooks) and `--fix` repairs it. `buildlog init` installs pre-commit hooks that prevent direct commits to main.
- Interactive dashboard: `buildlog viz` launches a marimo notebook in your browser with live visualizations of reward trends, bandit posteriors, session history, mistake analysis, and insight breakdowns.
- Posterior history: every gauntlet credit and reward event snapshots the bandit's alpha/beta/mean for the credited rules. Query their evolution over time with `buildlog_posterior_history()` to verify convergence or detect stale rules.
- MCP server: buildlog exposes 36 tools as an MCP server so agents can query seeds, skills, and build history programmatically during sessions.
- npm wrapper: `npx @peleke.s/buildlog` for JS/TS projects -- a thin shim that finds and invokes the Python CLI.
This is v0.22, not the end state.
- Extraction quality is uneven. Regex extractors miss nuance; LLM extractors are accurate but expensive. The middle ground is still being found.
- Single-agent only. Multi-agent coordination (shared learning across agents) is designed but not implemented.
- Long-horizon learning is not modeled. The bandit operates per-gauntlet-citation. Sessions are optional grouping containers. Longer arcs of competence building need richer policy models.
Two layers building on the global SQLite backend and qortex integration:
- Cross-project convergence -- detect rules independently rediscovered across projects, track salience
- Emergent rule graphs -- cluster embeddings into concept nodes, derive edges from co-occurrence and bandit correlation, contextual bandits with embedding-space context vectors (LinUCB)
Embedding persistence via sqlite-vec is already available through the qortex learning backend.
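For the contextual-bandit direction, the LinUCB update the roadmap mentions can be sketched concisely. This is an illustrative sketch of textbook LinUCB with embedding-space context vectors, assuming nothing about buildlog's eventual implementation; the arm names and the 3-dimensional context are invented.

```python
import numpy as np

class LinUCBArm:
    """One rule as a LinUCB arm: ridge regression from context to reward."""
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)           # Gram matrix (ridge-regularized)
        self.b = np.zeros(dim)         # reward-weighted context sum
        self.alpha = alpha             # exploration width

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b         # per-arm coefficient estimate
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

arms = {"interfaces-first": LinUCBArm(3), "mock-boundaries": LinUCBArm(3)}
context = np.array([0.2, 0.9, 0.1])    # e.g. a session embedding
arms["interfaces-first"].update(context, reward=1.0)

best = max(arms, key=lambda name: arms[name].ucb(context))
print(best)                            # interfaces-first
```

Unlike plain Thompson Sampling, the same rule can score differently in different sessions because the context vector enters the estimate, which is what makes cross-project and per-situation rule selection possible.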
See the full roadmap for details.
Requires Python >= 3.11
We run buildlog as an ambient data capture layer across all projects. One command, works everywhere:
```bash
pipx install buildlog          # or: uv tool install buildlog
buildlog init-mcp --global -y  # registers MCP + writes instructions to ~/.claude/CLAUDE.md
```

That's it. Claude Code now has all 36 buildlog tools and knows how to use them in every project you open. No per-project setup needed.
The `--global` flag:

- Registers the MCP server in `~/.claude.json` (Claude Code's global config)
- Creates `~/.claude/CLAUDE.md` with usage instructions so Claude proactively uses buildlog
- Works immediately in any repo, even without a local `buildlog/` directory
The `-y` flag skips confirmation prompts (useful for scripts and CI).
This is how we use buildlog ourselves: always on, capturing structured trajectories from every session, feeding downstream systems that generate engineering logs, courses, and content.
If you prefer explicit per-project control:
```bash
pip install buildlog       # MCP server included by default
buildlog init --defaults   # scaffold buildlog/, register MCP, update CLAUDE.md
```

This creates a `buildlog/` directory with templates and configures Claude Code for that specific project.
```bash
npx @peleke.s/buildlog init
```

Core dependencies installed automatically:
| Package | Purpose |
|---|---|
| `qortex-learning` | Thompson Sampling backend (default learning engine) |
| `mcp` | MCP server for Claude Code integration |
| `sqlite-vec` | Vector similarity for semantic deduplication |
| `numpy` | Numerical operations for bandit computations |
Optional extras:
```bash
pip install buildlog[viz]          # marimo dashboard + plotly
pip install buildlog[embeddings]   # local sentence-transformers
pip install buildlog[llm]          # Ollama + Anthropic extractors
pip install buildlog[openai]       # OpenAI embeddings
pip install buildlog[qortex-full]  # full qortex KG + REST + MCP
pip install buildlog[all]          # everything
```

Verify the install:

```bash
buildlog mcp-test   # verify all 36 tools are registered
buildlog overview   # check project state (works without init in global mode)
```

Then run the full pipeline:

```bash
buildlog init --defaults                # scaffold + MCP + CLAUDE.md
buildlog new my-feature                 # start a session
# ... work ...
buildlog commit -m "feat: add auth"
buildlog gauntlet-loop --target src/    # review with curated personas
buildlog log-reward --outcome accepted  # close the feedback loop
```

Sessions and experiments are optional. If you want longitudinal RMR tracking:
```bash
# Optional — for tracking RMR across many sessions
buildlog experiment start
# ... work across sessions ...
buildlog experiment end
buildlog experiment report
```

Want the full picture? The Learning Loop E2E Trace walks through all 13 steps with explicit code citations: installation, Thompson Sampling, gauntlet review, bandit updates, emission pipeline, cross-domain discovery via qortex, and rule re-export. Every claim above has a mechanical proof.
buildlog defaults to qortex-learning for Thompson Sampling. To force the builtin bandit fallback:
```bash
export BUILDLOG_LEARNING_BACKEND=builtin
```

If qortex-learning is not installed, buildlog falls back to the builtin bandit automatically with a warning.
Sessions and experiments are optional. `log_mistake()` works without an active session, and the gauntlet can credit rules and update posteriors without any session ceremony.
| Section | Description |
|---|---|
| Installation | Setup, extras, and initialization |
| Quick Start | Full pipeline walkthrough |
| Learning Loop E2E | Complete 13-step trace with code citations -- the proof |
| Core Concepts | The problem, the claim, and the metric |
| Theory | From restaurant intuition to contextual bandits -- the full tutorial |
| CLI Reference | Every command documented |
| MCP Integration | Claude Code setup and available tools |
| Storage Architecture | Global SQLite backend, migration, and export |
| Experiments | Optional longitudinal RMR tracking across sessions |
| Dashboard | Interactive marimo dashboard (buildlog viz) |
| Review Gauntlet | Reviewer personas and the gauntlet loop |
| Multi-Agent Setup | Render rules to any AI coding agent |
| Roadmap | Embeddings, cross-project convergence, rule graphs |
| Philosophy | Principles and honest limitations |
```bash
git clone https://github.com/Peleke/buildlog-template
cd buildlog-template
uv venv && source .venv/bin/activate
uv pip install -e ".[dev]"
pytest
```

We're especially interested in better context representations, credit assignment approaches, statistical methodology improvements, and real-world experiment results (positive or negative).
MIT License. See LICENSE