Thank you for considering contributing to APM! This document outlines the process for contributing to the project.
By participating in this project, you agree to abide by our Code of Conduct. Please read it before contributing.
Before submitting a bug report:
- Check the GitHub Issues to see if the bug has already been reported.
- Update your copy of the code to the latest version to ensure the issue hasn't been fixed.
When submitting a bug report:
- Use our bug report template.
- Include detailed steps to reproduce the bug.
- Describe the expected behavior and what actually happened.
- Include any relevant logs or error messages.
Enhancement suggestions are welcome! Please:
- Use our feature request template.
- Clearly describe the enhancement and its benefits.
- Provide examples of how the enhancement would work.
This repo uses APM to ship its own author and review skills. The canonical sources live under `.apm/skills/` and `.apm/agents/` -- the same primitive layout any APM package uses. They are not magically loaded by your editor; you have to install them like any other APM dependency.
After cloning, run APM against this repo the way you would against any other APM project:
```bash
# 1. Install APM itself if you haven't already.
# See https://github.com/microsoft/apm#install for all install options.
curl -sSL https://aka.ms/apm-unix | sh      # macOS / Linux
# irm https://aka.ms/apm-windows | iex   # Windows PowerShell

# 2. From the root of this repo:
apm install
```

`apm install` reads this repo's `apm.yml` (`includes: auto`), picks up everything under `.apm/`, and deploys it into the harness directories your coding agent already watches -- `.github/skills/`, `.github/agents/`, `.claude/skills/`, `.cursor/`, etc. -- depending on which targets are detected on your machine. Once that is done, your harness (Claude Code, GitHub Copilot CLI, Cursor, OpenCode, Codex, Gemini, ...) can discover and invoke the skills by name.
For most PRs, two of those skills carry most of the weight:
| Skill | When to use it |
|---|---|
| `pr-description-skill` | Every PR. Drafts a self-sufficient PR body (TL;DR, Problem / Approach / Implementation, mermaid diagrams, validation evidence, How-to-test) that anchors every WHY-claim to PROSE / Agent Skills. Avoids the "what does this PR even do?" round-trip with reviewers. |
| `apm-review-panel` | Non-trivial PRs (new behaviour, security-relevant code, CLI UX changes, manifest/schema changes). Runs the same multi-persona panel CI runs in `pr-review-panel.yml` -- locally, on your working tree, before you push. Surfaces the `required` findings while the cost of fixing is still cheap. |
Typical local flow (after `apm install`):
- Implement your change against `main`.
- Ask your agent: "Run the `apm-review-panel` skill on my working tree." The panel fans out to the architectural, CLI-logging, DevX, supply-chain, growth, and (if relevant) auth personas, and returns a single verdict with `required` findings split from `nits`. Address the `required` items in-place.
- Ask your agent: "Use the `pr-description-skill` to draft the PR body for this branch." Review the draft, paste it into `gh pr create --body-file`.
- Push and open the PR. The same panel runs in CI on label, but most `required` findings will already be addressed -- the comment thread stays focused on substance instead of correctness debt.
You don't have to use these skills, but the panel verdict in CI applies the same rubric either way, and PRs that have already been through it locally tend to merge faster.
The full persona roster lives in `.apm/agents/` -- you can also summon any single persona (e.g. `python-architect`, `supply-chain-security-expert`) for a focused review of a specific file or design question without running the full panel.
Don't wait for the panel verdict to discover you should have talked to a specialist. The same personas the panel runs are the ones to consult while you are designing and building. Recommended pairings:
| Situation | Persona to summon | Why |
|---|---|---|
| Any new feature or feature change | `devx-ux-expert` first | Validate the user-facing approach (flags, defaults, error messages, manifest shape) before you write code. Cheaper than re-doing the implementation after the panel rejects it. |
| Anything that prints to the terminal | `cli-logging-expert` | Always include this. Keeps log levels, colours, prefixes, and progress indicators consistent across the CLI. |
| Refactor, new module, or non-trivial architecture decision | `python-architect` | Get the boundaries / interfaces / dependency direction right up front. |
| Anything that fetches packages, evaluates manifests, scans content, signs / verifies / locks, or touches `apm install` | `supply-chain-security-expert` (mandatory) | A core promise of APM is that `apm install` blocks compromised packages before agents read them. This persona is non-optional for any PR that touches the supply chain -- the panel will reject it otherwise. |
| Any change touching authentication, tokens, credential resolution, or remote host auth (GitHub, GHE, ADO, EMU, GitHub Apps) | `auth-expert` | Auth bugs are silent and expensive. Run this persona on the design and again on the diff. |
| New primitive type, manifest schema change, or cross-target deployment behaviour | `apm-primitives-architect` | Keeps the primitive model coherent across Copilot, Claude, Cursor, OpenCode, Codex, Gemini. |
| Public-facing copy, README, docs site, or release notes | `doc-writer` and/or `oss-growth-hacker` | Voice consistency and positioning for new-user moments. |
Rule of thumb: ask the matching persona to critique your plan before
you implement, then ask it again to review the diff before you
push. Two cheap, focused passes per persona beat one expensive panel
rejection. The apm-review-panel skill at the end is then a sanity
check, not a redesign.
- Fork the repository.
- Create a new branch for your feature/fix: `git checkout -b feature/your-feature-name` or `git checkout -b fix/issue-description`.
- Make your changes.
- Run tests: `uv run pytest tests/unit tests/test_console.py -x`
- Mirror the CI `Lint` job locally before pushing -- both commands must be silent:

  ```bash
  uv run --extra dev ruff check src/ tests/
  uv run --extra dev ruff format --check src/ tests/
  ```

  Auto-fix with `ruff check --fix` and `ruff format` (without `--check`). The full contract -- including common surprises like `RUF043`, `UP006`, `I001` -- lives in `.apm/instructions/linting.instructions.md`, the canonical source of truth that CI, the `pr-description-skill`, and the dogfood `apm compile -t copilot` all mirror.
- Commit your changes with a descriptive message.
- Push to your fork.
- Submit a pull request.
- Fill out the PR template - describe what changed, why, and link the issue.
- Ensure your PR addresses only one concern (one feature, one bug fix).
- Include tests for new functionality.
- Update documentation if needed.
- PRs must pass all CI checks before they can be merged.
This repo uses GitHub's native merge queue. Once your PR is approved, a maintainer adds it to the queue. The queue then:
- Builds a tentative merge of your PR against the latest `main` -- no manual "Update branch" needed.
- Runs the integration suite against that tentative merge.
- Auto-merges if checks pass; ejects from the queue if they fail.
What this means for contributors:
- You don't need to keep your branch up to date with `main` manually.
- The fast unit + build checks (Tier 1) run on every push to your PR.
- The full integration suite (Tier 2) only runs once your PR is in the queue, not on every WIP push.
If your PR is ejected from the queue because of a real failure, push a fix and ask a maintainer to re-queue.
Every new issue is automatically labeled `needs-triage`. Maintainers review incoming issues and:
- Accept -- remove `needs-triage`, add `accepted`, and assign a milestone.
- Prioritize -- optionally add `priority/high` or `priority/low`.
- Close -- if it's a duplicate (`duplicate`) or out of scope, close with a comment explaining why.
Labels used for triage: `needs-triage`, `accepted`, `needs-design`, `priority/high`, `priority/low`.
This project uses uv to manage Python environments and dependencies:
```bash
# Clone the repository
git clone <this-repo-url>
cd apm

# Install all dependencies (creates .venv automatically)
uv sync --extra dev
```

We use pytest for testing with pytest-xdist for parallel execution. After completing the setup above:
```bash
# Run the unit test suite (recommended - matches CI, fast)
uv run pytest tests/unit tests/test_console.py -x

# Run a specific test file (fastest, use during development)
uv run pytest tests/unit/path/to/relevant_test.py -x

# Run the full test suite (includes integration & acceptance tests)
uv run pytest

# Run with verbose output
uv run pytest tests/unit -x -v
```

Tests run in parallel automatically (`-n auto` is configured in `pyproject.toml`). To force serial execution, add `-n0`.
If you don't have uv available, you can use a standard Python venv and pip:
```bash
# create and activate a venv (POSIX / WSL)
python -m venv .venv
source .venv/bin/activate

# install this package in editable mode and test deps
pip install -U pip
pip install -e .[dev]

# run unit tests
pytest tests/unit tests/test_console.py -x
```

This project follows:
CI enforces all lint and formatting rules automatically. The CI Lint job runs the following two commands -- both must be silent before you open a PR:
```bash
uv run --extra dev ruff check src/ tests/          # lint (CI-mirror)
uv run --extra dev ruff format --check src/ tests/ # format check (CI-mirror)
```

Auto-fix locally with:

```bash
uv run --extra dev ruff check src/ tests/ --fix    # lint with auto-fix
uv run --extra dev ruff format src/ tests/         # apply formatter
```

The canonical lint contract (with common diagnostics and lifecycle binding for skills that claim green CI) lives in `.apm/instructions/linting.instructions.md`. Do not redefine these commands elsewhere -- honor that instruction.
For instant feedback before pushing, install the pre-commit hooks:
```bash
uv run pre-commit install
```

This is optional -- CI is the authoritative gate. The pre-commit hook rev may lag behind the CI version; check `.pre-commit-config.yaml` against `uv.lock` if you see discrepancies.
If your changes affect how users interact with the project, update the documentation accordingly.
Use an experimental flag to de-risk rollout of a user-visible behavioural change that may need early adopter feedback. Do not add a flag for a bug fix, internal refactor, or any change that should simply ship as the default behaviour.
Experimental flags MUST NOT gate security-critical behaviour (content scanning, path validation, lockfile integrity, token handling, MCP trust, collision detection). Flags are ergonomic/UX toggles only.
When adding a new experimental flag:
- Register it in `src/apm_cli/core/experimental.py` in the `FLAGS` dict with a frozen `ExperimentalFlag(name=..., description=..., default=False, hint=...)`.
- Gate the code path with a function-scope import (avoids import cycles):

  ```python
  def my_function():
      from apm_cli.core.experimental import is_enabled

      if is_enabled("my_flag"):
          ...
  ```

- Add tests that cover both the enabled and disabled code paths.
- Update the experimental command reference page at `docs/src/content/docs/reference/experimental.md`.
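Put together, the registration pattern looks roughly like this. This is a simplified sketch, not the real module -- `src/apm_cli/core/experimental.py` is the source of truth, `my_flag` and its `hint` text are illustrative, and the `config` parameter here merely stands in for however the real code loads `~/.apm/config.json`:

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class ExperimentalFlag:
    name: str
    description: str
    default: bool = False
    hint: str = ""


# Registry of all experimental flags, keyed by snake_case name.
FLAGS: dict[str, ExperimentalFlag] = {
    "my_flag": ExperimentalFlag(
        name="my_flag",
        description="Illustrative flag gating a hypothetical code path.",
        hint="Flag state is persisted in ~/.apm/config.json.",
    ),
}


def is_enabled(name: str, config: dict[str, bool] | None = None) -> bool:
    """Resolve a flag: explicit user config wins, else the registered default."""
    key = name.replace("-", "_")  # the CLI accepts both snake_case and kebab-case
    config = config or {}
    if key in config:
        return config[key]
    flag = FLAGS.get(key)
    return flag.default if flag else False
```

Because `ExperimentalFlag` is frozen, registry entries cannot be mutated after import, and unknown flag names resolve to `False` rather than raising.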
Naming rules:
- Use `snake_case` in the registry and config.
- Use `kebab-case` for display and other user-facing strings.
- The CLI accepts both forms on input.
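The naming rules plus dual-form input reduce to a pair of trivial conversions. The helper names below are hypothetical, shown only to make the convention concrete:

```python
def to_registry_key(user_input: str) -> str:
    """Normalize user input (snake_case or kebab-case) to the snake_case registry key."""
    return user_input.strip().lower().replace("-", "_")


def to_display_name(registry_key: str) -> str:
    """Render a registry key as the kebab-case form used in user-facing strings."""
    return registry_key.replace("_", "-")
```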
Graduation and retirement:
- When a flag becomes the default, remove the gate and remove the matching
FLAGSentry in the same PR. - Add a
CHANGELOG.mdentry underChangedwith a migration note if the previous default differed.
Avoid these anti-patterns:
- Do not gate security-critical behaviour behind an experimental flag.
- Do not read `is_enabled()` at module import time.
- Do not persist flag state anywhere other than `~/.apm/config.json` via `update_config`.
By contributing to this project, you agree that your contributions will be licensed under the project's MIT License.
If you have any questions, feel free to open an issue or reach out to the maintainers.
Thank you for your contributions!