
Contributing to Rerun

This guide is for anyone who wants to contribute to the Rerun repository.

What to contribute

Note that maintainers do not have infinite time, and reviews take a lot of it. When choosing what to work on, please ensure that it is either:

  • A small change (at most around +100/−100 lines of diff), or
  • A larger change that has been discussed with one or more maintainers.

You can discuss these changes by:

  • Commenting on an existing issue,
  • Creating a new issue, or
  • Pinging one of the Rerun maintainers on our Discord

Note

PRs containing large undiscussed changes may be closed without comment.

Pull requests

We use Trunk Based Development, which means we encourage small, short-lived branches.

  • Open draft PRs early to get feedback before a full review.
  • Don't PR from your own main branch — it makes it hard for reviewers to add fixes.
  • Add improvements as new commits rather than rebasing or force-pushing, so reviewers can follow your progress (add images if possible!).
  • All PRs are merged with Squash and Merge, so you don't need a clean commit history on your feature branch.

Our CI will record binary sizes and run benchmarks on each merged PR.

Pull requests from external contributors require approval for CI runs. Click the Approve and run button:

Image showing the approve and run button

Members of the rerun-io organization can enable auto-approval for a single PR by commenting with @rerun-bot approve:

PR comment with the text @rerun-bot approve

Labeling of PRs & changelog generation

Org members must label their PRs — labels are how we generate changelogs.

  • include in changelog: The PR title will be used as a changelog entry. Keep it informative and concise.
  • exclude from changelog: Required if the PR shouldn't appear in the changelog.
  • At least one category label is required. See the CI job for the current list.
  • When in doubt, add more labels rather than fewer — they help with search.

What should go to the changelog?

Err on the side of including entries — if it adds value for a user browsing the changelog, add it. Be generous with external contributions — credit where credit is due!

We typically don't include: pure refactors, testing, CI fixes, fixes for bugs introduced since last release, minor doc changes (typos, etc.).

Other special labels

  • deploy docs: Cherry-picked to docs-latest, triggering a rebuild of the doc page. Use this for doc fixes relevant to the latest release.
  • do-not-merge: Fails CI unconditionally. Useful for PRs targeting non-main branches or awaiting test results. Alternatively, unticked checkboxes in the PR description will also fail CI ✨

Contributing to CI

Every CI job should ideally be a single pixi (or similar) script invocation that works locally as-is.

Benefits:

  • Scripts in a real programming language instead of Bash embedded in YAML
  • Much lower iteration times when working on CI
  • Ability to manually re-run a job when CI fails

Always output artifacts to GCS instead of GHA artifact storage. This lets anyone download the output of a script and continue from where it failed.

CI script guidelines

Scripts should be local-first and easy for contributors to run.

Each script should document:

  • Dependencies
  • Files and directories
  • Environment variables
  • Usage examples

Pass inputs explicitly via arguments with sane defaults. Validate inputs as early as possible: auth credentials, numeric ranges, string formats, file path existence, etc.

Support GCS paths (gs://bucket/blob/path) and stdin/stdout (-) for file I/O where it makes sense.

Write descriptive error messages — they may be the only info someone has when debugging a CI failure. Print frequently to show progress.

Use environment variables only for auth and output config (e.g. disabling color). Prefer SDK default auth where possible (e.g. GCP Application Default Credentials).

Support --dry-run for destructive or irreversible actions.
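As a sketch, the argument-handling guidelines above could look like the following stdlib-only Rust helper. All flag names and defaults are illustrative, not taken from an actual Rerun script:

```rust
// Hypothetical CI-script argument parsing: explicit inputs with sane
// defaults, early validation, and a --dry-run flag.
#[derive(Debug)]
struct Args {
    concurrency: usize, // numeric input, validated as early as possible
    output: String,     // "-" means stdout; a gs://bucket/path is also accepted
    dry_run: bool,      // skip destructive or irreversible actions
}

fn parse_args(argv: &[String]) -> Result<Args, String> {
    let mut args = Args {
        concurrency: 4,
        output: "-".to_owned(),
        dry_run: false,
    };
    let mut it = argv.iter();
    while let Some(flag) = it.next() {
        match flag.as_str() {
            "--dry-run" => args.dry_run = true,
            "--concurrency" => {
                let val = it.next().ok_or("--concurrency needs a value")?;
                args.concurrency = val
                    .parse()
                    .map_err(|_| format!("invalid --concurrency: {val:?}"))?;
                // Validate numeric ranges up front, before doing any work.
                if args.concurrency == 0 {
                    return Err("--concurrency must be at least 1".to_owned());
                }
            }
            "--output" => {
                args.output = it.next().ok_or("--output needs a value")?.clone();
            }
            // Descriptive error messages: often the only clue in a CI log.
            other => return Err(format!("unknown flag: {other:?}")),
        }
    }
    Ok(args)
}
```

A real script would then branch on `dry_run` before anything destructive, and print progress as it goes.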

Adding dependencies

Be thoughtful when adding dependencies. Each one adds compile time, binary size, potential breakage, and attack surface. Sometimes 100 lines of code is better than a new dependency.

When adding a dependency in a PR, motivate it:

  • Why use this dependency instead of rolling our own?
  • Why this one over alternatives?

For Rust, use default-features = false where it makes sense to minimize new code pulled in.
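For example, in a crate's Cargo.toml (the crate and feature names here are illustrative):

```toml
[dependencies]
# Opt out of default features, then enable only what we actually use:
some-crate = { version = "1", default-features = false, features = ["std"] }
```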

When reviewing a PR, always check the Cargo.lock diff (collapsed by default in GitHub 😤).

Guide for picking good dependencies: https://gist.github.com/repi/d98bf9c202ec567fd67ef9e31152f43f.

A full cargo update should be its own stand-alone PR. Include the output in the commit message.

Structure

Main crates are in crates/, examples in examples/.

To get an overview of the crates, read their documentation with:

cargo doc --no-deps --open

To learn about the viewer, run:

cargo run -p rerun -- --help

Tests

There are various kinds of automated tests throughout the repository. Unless noted otherwise, all tests run on CI, though their frequency (per PR, on main, nightly) and platform coverage may vary.

Rust tests

cargo test --all-targets --all-features

or with cargo nextest (which doesn't run doc tests, so run those separately):

cargo nextest run --all-targets --all-features
cargo test --all-features --doc

Runs unit & integration tests for all Rust crates, including the viewer. Tests use the standard #[test] attribute.
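A minimal example of such a test (the function under test is illustrative):

```rust
fn half(x: i32) -> i32 {
    // Integer division in Rust truncates toward zero.
    x / 2
}

#[test]
fn half_truncates_toward_zero() {
    assert_eq!(half(5), 2);
    assert_eq!(half(-5), -2);
}
```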

insta snapshot tests

Some tests use insta snapshot tests, which compare textual output against checked-in references. They run as part of the regular test suite.

If output changes, they will fail. Review results with cargo insta review (install: cargo install cargo-insta).
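Conceptually, a snapshot test compares output against a checked-in reference file and writes a sibling file for review on mismatch. A stdlib-only sketch of that idea — this is not insta's API, which adds inline snapshots, redactions, and the review workflow on top:

```rust
use std::fs;
use std::path::Path;

// Illustrative sketch: compare `actual` against `<dir>/<name>.snap`.
// On mismatch (or first run), write `<name>.snap.new` for review.
fn check_snapshot(dir: &Path, name: &str, actual: &str) -> std::io::Result<bool> {
    let reference = dir.join(format!("{name}.snap"));
    if fs::read_to_string(&reference).ok().as_deref() == Some(actual) {
        Ok(true)
    } else {
        fs::write(dir.join(format!("{name}.snap.new")), actual)?;
        Ok(false)
    }
}
```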

Image comparison tests

Some tests render an image and compare it against a checked-in reference image. They run as part of the regular test suite.

These are driven by egui_kittest's Harness::snapshot method. We typically use TestContext to mock relevant parts of the viewer.

Comparing results & updating images

Each test run produces new images (typically at <your-test.rs>/snapshots). On failure, a diff.png is added highlighting all differences. To update references, run with UPDATE_SNAPSHOTS=1.

Use pixi run snapshots to compare results of all failed tests visually in Rerun. You can also update from a failed CI run using ./scripts/update_snapshots_from_ci.sh. Inspect PR diffs (including failed comparisons) via https://rerun-io.github.io/kitdiff/?url=.

For best practices and unexpected sources of image differences, see the egui_kittest README.
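At its core, such a test is a buffer comparison gated by an opt-in environment variable. A simplified sketch of the idea, not egui_kittest's actual implementation:

```rust
// Count differing bytes between two equally-sized image buffers;
// None signals a size mismatch (e.g. different resolutions).
fn count_differing_bytes(a: &[u8], b: &[u8]) -> Option<usize> {
    (a.len() == b.len()).then(|| a.iter().zip(b).filter(|(x, y)| x != y).count())
}

// Reference updates are opt-in via an environment variable
// (here, any set value counts).
fn should_update_snapshots() -> bool {
    std::env::var("UPDATE_SNAPSHOTS").is_ok()
}
```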

Rendering backend

Image comparison tests require a wgpu-compatible driver. Currently they run on Vulkan & Metal. For CI / headless environments, we use lavapipe (llvmpipe) for software rendering on all platforms. On macOS, we use a custom static build from rerun-io/lavapipe-build.

For setup details, see the CI workflow.

Python tests

pixi run py-test

Uses pytest. Tests are in ./rerun_py/tests/.

C++ tests

pixi run cpp-test

Uses catch2. Tests are in ./rerun_cpp/tests/.

Snippet comparison tests

pixi run uvpy docs/snippets/compare_snippet_output.py

Verifies that all snippets produce the same output across languages, unless configured otherwise in snippets.toml. More details in README.md.

Release checklists

pixi run uv run tests/python/release_checklist/main.py

A set of manual checklist-style tests run prior to each release. Avoid adding new ones — they add friction and failures are easy to miss. More details in README.md.

Other ad-hoc manual tests

Additional test scenes in ./tests/cpp/, ./tests/python/, and ./tests/rust/. These are built on CI but run only irregularly. See respective READMEs for details.

Tools

We use pixi for dev-tool versioning, downloads, and task running. See available tasks with pixi task list.

We use cargo deny to check our dependency tree for copyleft licenses, duplicate dependencies, and rustsec advisories. Configure in deny.toml, run with cargo deny check.

Configure your editor to run cargo fmt on save, strip trailing whitespace, and end each file with a newline. VSCode settings in .vscode/ should apply automatically. If you use a different editor, consider adding good settings to this repository!

Run relevant tests locally depending on your changes: cargo test --all-targets --all-features, pixi run py-test, pixi run -e cpp cpp-test. See Tests for details.

We recommend cargo nextest for running Rust tests — it's faster than cargo test with better output. Note that it doesn't support doc tests yet; run those with cargo test.

Linting

Before pushing, always run pixi run fast-lint. It takes seconds on repeated runs and catches trivial issues before wasting CI time.

Hooks

We recommend installing the Rerun pre-push hook, which runs pixi run fast-lint for you.

Copy it into your local .git/hooks:

cp hooks/pre-push .git/hooks/pre-push
chmod +x .git/hooks/pre-push

or configure git to use the hooks directory directly:

git config core.hooksPath hooks

Optional

  • bacon — automatically re-runs cargo clippy on save. See bacon.toml.
  • sccache — speeds up recompilation (e.g. when switching branches). Set cache size: export SCCACHE_CACHE_SIZE="256G".

Other

Enable more verbose logging with export RUST_LOG=trace. Debug logging is automatically enabled for the viewer when running inside the rerun checkout.