Problem
There is no structured way for AI coding agents (Claude Code, GitHub Copilot, Codex) to know which expertise or workflow to apply for different Proton development tasks (build, SQL, C++ review, CI, etc.).
Solution
Add a single canonical skill library under .github/skills/ and wire each agent platform to it via lightweight entry-points — no duplication.
```
.github/skills/  ←  canonical skill library (10 skills)
      ↑                        ↑                        ↑
.claude/skills (symlink)   .github/copilot-        .codex/skills (symlink)
.claude/CLAUDE.md          instructions.md         .codex/config.toml
(Claude Code)              (Copilot)               (Codex)
```
Skill library — .github/skills/

| Skill | Purpose |
| --- | --- |
| `alloc-profile` | Analyze jemalloc / async-profiler allocation profiles |
| `architecture` | Codebase layout, component interactions, data flow |
| `build-and-verify` | Build, run server/client/cluster, execute tests |
| `ci-diagnostics` | Diagnose CI failures and performance reports |
| `cpp-coding` | Write/review C++20 code, naming, clang-format |
| `create-worktree` | Create isolated git worktree with local submodule reuse |
| `issue-workflow` | End-to-end GitHub issue → branch → commit → PR |
| `review` | Review PRs for correctness, streaming semantics, performance |
| `sql-usage` | Streaming SQL, EMIT, windows, JOINs, MVs, UDFs |
Agent entry-points
- Claude Code: `.claude/CLAUDE.md` (project instructions) + `.claude/skills → ../.github/skills`
- GitHub Copilot: `.github/copilot-instructions.md`
- Codex: `.codex/config.toml` + `.codex/skills → ../.github/skills`
- `AGENTS.md` at repo root for other agent frameworks
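The symlink wiring above can be sketched in a few lines; `link_skills` is a hypothetical helper illustrating the layout, not code from this PR:

```python
import os


def link_skills(root: str) -> None:
    """Point .claude/skills and .codex/skills at the canonical
    ../.github/skills directory via relative symlinks (hypothetical helper)."""
    os.makedirs(os.path.join(root, ".github", "skills"), exist_ok=True)
    for agent_dir in (".claude", ".codex"):
        d = os.path.join(root, agent_dir)
        os.makedirs(d, exist_ok=True)
        link = os.path.join(d, "skills")
        if not os.path.islink(link):
            # Relative target so the link survives moving/cloning the repo.
            os.symlink(os.path.join("..", ".github", "skills"), link)
```

Because the links are relative, both agents resolve to the same directory and the skill files exist exactly once.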
Additional fixes (found during local macOS development)
- `tests/ported-clickhouse-test.py`: use `ThreadPoolExecutor` on macOS to avoid `fork()` EAGAIN with coverage builds; fix HTTP connection leak; guard `setpgid`/`killpg`
- `src/CPython/tests/CPythonTest.h`: auto-discover Python 3.10, skip suite gracefully
- `src/Bootstrap/ServerDescriptor.cpp`: unify port config to `node.*.port` pattern
- `base/base/coverage.cpp`: suppress `-Wreserved-identifier`
- `local_coverage.sh`, `start-local-proton.sh`: local developer helper scripts
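The macOS fix amounts to choosing a thread-based pool where `fork()` is unreliable; `make_executor` is a hypothetical sketch of the approach, not the actual patch:

```python
import sys
from concurrent.futures import Executor, ProcessPoolExecutor, ThreadPoolExecutor


def make_executor(max_workers: int) -> Executor:
    """Pick an executor for parallel tests (hypothetical sketch).

    On macOS, fork() under coverage builds can fail with EAGAIN, so
    fall back to threads there; keep process-based parallelism elsewhere.
    """
    if sys.platform == "darwin":
        return ThreadPoolExecutor(max_workers=max_workers)
    return ProcessPoolExecutor(max_workers=max_workers)
```

Off macOS this still returns a `ProcessPoolExecutor`, so behaviour on Linux CI is unchanged.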
Acceptance Criteria
- `tests/ported-clickhouse-test.py` runs parallel tests on macOS without fork errors