This guide is for anyone who wants to contribute to the Rerun repository.
Other documents to check out:
- `ARCHITECTURE.md`
- `BUILD.md`
- `rerun_py/README.md` - build instructions for the Python SDK
- `CODE_OF_CONDUCT.md`
- `CODE_STYLE.md`
- `RELEASES.md`
- Examples: We welcome any examples you would like to add. Follow the pattern of existing examples in the `examples/` folder.
- Report bugs and feature requests at https://github.com/rerun-io/rerun/issues.
- Look at our `good first issue` tag.
- We track things we would like implemented in 3rd party crates here.
Note that maintainers do not have infinite time, and reviews take a lot of it. When choosing what to work on, please ensure that it is either:
- A small change (at most +100/-100 lines), or
- A larger change that has been discussed with one or more maintainers.
You can discuss these changes by:
- Commenting on an existing issue,
- Creating a new issue, or
- Pinging one of the Rerun maintainers on our Discord
Note: PRs containing large undiscussed changes may be closed without comment.
We use Trunk Based Development, which means we encourage small, short-lived branches.
- Open draft PRs early to get feedback before a full review.
- Don't PR from your own `main` branch — it makes it hard for reviewers to add fixes.
- Add improvements as new commits rather than rebasing, so reviewers can follow progress (add images if possible!).
- All PRs are merged with Squash and Merge, so you don't need a clean commit history on feature branches. Prefer new commits over rebasing — force-pushing discourages collaboration.
Our CI will record binary sizes and run benchmarks on each merged PR.
Pull requests from external contributors require approval for CI runs. Click the Approve and run button on the PR.
Members of the rerun-io organization can enable auto-approval for a single PR by commenting with @rerun-bot approve.
Org members must label their PRs — labels are how we generate changelogs.
- `include in changelog`: The PR title will be used as a changelog entry. Keep it informative and concise.
- `exclude from changelog`: Required if the PR shouldn't appear in the changelog.
- At least one category label is required. See the CI job for the current list.
- When in doubt, add more labels rather than fewer — they help with search.
Err on the side of including entries — if it adds value for a user browsing the changelog, add it. Be generous with external contributions — credit where credit is due!
We typically don't include: pure refactors, testing, CI fixes, fixes for bugs introduced since last release, minor doc changes (typos, etc.).
- `deploy docs`: Cherry-picked to `docs-latest`, triggering a rebuild of the doc page. Use this for doc fixes relevant to the latest release.
- `do-not-merge`: Fails CI unconditionally. Useful for PRs targeting non-`main` branches or awaiting test results. Alternatively, unticked checkboxes in the PR description will also fail CI ✨
Every CI job should ideally be a single pixi (or similar) script invocation that works locally as-is.
Benefits:
- Scripts in a real programming language instead of Bash embedded in YAML
- Much lower iteration times when working on CI
- Ability to manually re-run a job when CI fails
Always output artifacts to GCS instead of GHA artifact storage. This lets anyone download the output of a script and continue from where it failed.
Scripts should be local-first and easy for contributors to run.
Each script should document:
- Dependencies
- Files and directories
- Environment variables
- Usage examples
Pass inputs explicitly via arguments with sane defaults. Validate inputs as early as possible: auth credentials, numeric ranges, string formats, file path existence, etc.
Support GCS paths (gs://bucket/blob/path) and stdin/stdout (-) for file I/O where it makes sense.
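A sketch of the `-` convention for file I/O (the `gs://` branch is deliberately stubbed out; a real script would use a GCS client there):

```python
import sys
from typing import IO


def open_input(path: str) -> IO[str]:
    """Open `path` for reading, treating `-` as stdin."""
    if path == "-":
        return sys.stdin
    if path.startswith("gs://"):
        # Sketch only: a real script would stream the blob here,
        # e.g. via the google-cloud-storage client.
        raise NotImplementedError("GCS input not wired up in this example")
    return open(path, encoding="utf-8")


def open_output(path: str) -> IO[str]:
    """Open `path` for writing, treating `-` as stdout."""
    if path == "-":
        return sys.stdout
    return open(path, "w", encoding="utf-8")


# `-` maps to the standard streams:
assert open_input("-") is sys.stdin
assert open_output("-") is sys.stdout
```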
Write descriptive error messages — they may be the only info someone has when debugging a CI failure. Print frequently to show progress.
Use environment variables only for auth and output config (e.g. disabling color). Prefer SDK default auth where possible (e.g. GCP Application Default Credentials).
Support --dry-run for destructive or irreversible actions.
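A minimal shape for this, with hypothetical artifact names (`--dry-run` is passed explicitly to `parse_args` so the example is self-contained):

```python
import argparse


def delete_old_artifacts(paths: list[str], *, dry_run: bool) -> list[str]:
    """Delete the given artifacts; with dry_run, only report what would happen."""
    deleted = []
    for path in paths:
        if dry_run:
            print(f"[dry-run] would delete {path}")
        else:
            print(f"deleting {path}")
            # The real deletion would go here, e.g. os.remove(path).
            deleted.append(path)
    return deleted


parser = argparse.ArgumentParser()
parser.add_argument("--dry-run", action="store_true", help="Print actions without executing them")
args = parser.parse_args(["--dry-run"])  # simulate the user passing --dry-run

result = delete_old_artifacts(["a.bin", "b.bin"], dry_run=args.dry_run)
```

With `--dry-run` set, nothing is deleted and every action is printed instead.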
Be thoughtful when adding dependencies. Each one adds compile time, binary size, potential breakage, and attack surface. Sometimes 100 lines of code is better than a new dependency.
When adding a dependency in a PR, motivate it:
- Why use this dependency instead of rolling our own?
- Why this one over alternatives?
For Rust, use default-features = false where it makes sense to minimize new code pulled in.
When reviewing a PR, always check the Cargo.lock diff (collapsed by default in GitHub 😤).
Guide for picking good dependencies: https://gist.github.com/repi/d98bf9c202ec567fd67ef9e31152f43f.
A full cargo update should be its own stand-alone PR. Include the output in the commit message.
Main crates are in crates/, examples in examples/.
To get an overview of the crates, read their documentation with:
cargo doc --no-deps --open
To learn about the viewer, run:
cargo run -p rerun -- --help
There are various kinds of automated tests throughout the repository.
Unless noted otherwise, all tests run on CI, though their frequency (per PR, on main, nightly) and platform coverage may vary.
cargo test --all-targets --all-features

or, with cargo nextest:

cargo nextest run --all-targets --all-features

Doc tests are run separately:

cargo test --all-features --doc

These run unit & integration tests for all Rust crates, including the viewer.
Tests use the standard #[test] attribute.
Some tests use insta snapshot tests, which compare textual output against checked-in references. They run as part of the regular test suite.
If output changes, they will fail. Review results with cargo insta review (install: cargo install cargo-insta).
Some tests render an image and compare it against a checked-in reference image. They run as part of the regular test suite.
These are driven by egui_kittest's Harness::snapshot method.
We typically use TestContext to mock relevant parts of the viewer.
Each test run produces new images (typically at <your-test.rs>/snapshots).
On failure, a diff.png is added highlighting all differences.
To update references, run with UPDATE_SNAPSHOTS=1.
Use pixi run snapshots to compare results of all failed tests visually in Rerun.
You can also update from a failed CI run using ./scripts/update_snapshots_from_ci.sh.
Inspect PR diffs (including failed comparisons) via https://rerun-io.github.io/kitdiff/?url=.
For best practices and unexpected sources of image differences, see the egui_kittest README.
Image comparison tests require a wgpu-compatible driver. Currently they run on Vulkan & Metal.
For CI / headless environments, we use lavapipe (llvmpipe) for software rendering on all platforms.
On macOS, we use a custom static build from rerun-io/lavapipe-build.
For setup details, see the CI workflow.
`pixi run py-test` uses pytest. Tests are in `./rerun_py/tests/`.
`pixi run cpp-test` uses catch2. Tests are in `./rerun_cpp/tests/`.
`pixi run uv run docs/snippets/compare_snippet_output.py` verifies that all snippets produce the same output across languages, unless configured otherwise in `snippets.toml`. More details in README.md.
`pixi run uv run tests/python/release_checklist/main.py` is a set of manual checklist-style tests run prior to each release. Avoid adding new ones — they add friction and failures are easy to miss. More details in README.md.
Additional test scenes in ./tests/cpp/, ./tests/python/, and ./tests/rust/. These are built on CI but run only irregularly. See respective READMEs for details.
We use pixi for dev-tool versioning, downloads, and task running. See available tasks with pixi task list.
We use cargo deny to check our dependency tree for copyleft licenses, duplicate dependencies, and rustsec advisories. Configure in deny.toml, run with cargo deny check.
Configure your editor to run cargo fmt on save, strip trailing whitespace, and end each file with a newline. VSCode settings in .vscode/ should apply automatically. If you use a different editor, consider adding good settings to this repository!
Run relevant tests locally depending on your changes: cargo test --all-targets --all-features, pixi run py-test, pixi run -e cpp cpp-test. See Tests for details.
We recommend cargo nextest for running Rust tests — it's faster than cargo test with better output. Note that it doesn't support doc tests yet; run those with cargo test.
Before pushing, always run pixi run fast-lint. It takes seconds on repeated runs and catches trivial issues before wasting CI time.
We recommend installing the Rerun pre-push hook, which runs pixi run fast-lint for you.
Copy it into your local .git/hooks:
cp hooks/pre-push .git/hooks/pre-push
chmod +x .git/hooks/pre-push
or configure git to use the hooks directory directly:
git config core.hooksPath hooks
- bacon — automatically re-runs cargo clippy on save. See bacon.toml.
- sccache — speeds up recompilation (e.g. when switching branches). Set the cache size with export SCCACHE_CACHE_SIZE="256G".
View higher log levels with export RUST_LOG=trace.
Debug logging is automatically enabled for the viewer when running inside the rerun checkout.

