A fresh season, updated rules, same family rivalry.
This is the 2026 edition of the game, continuing from 2025. It keeps the spirit of last year while updating the rules for this season.
- Pick one driver each race weekend.
- Your driver scores points from the race and the sprint (if there is one).
- For the first 22 races, once you've used a driver you cannot pick them again.
- This means you'll end up using all 22 drivers across those races.
- The last 2 races (Qatar and Abu Dhabi) are wild cards: you can pick anyone, even if you used them before.
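The one-time-use rule above can be sketched as a small validation function. This is illustrative only (hypothetical names, not the app's actual API):

```typescript
// Sketch of the one-time driver rule: rounds 1-22 forbid reuse,
// rounds 23-24 (Qatar, Abu Dhabi) are wild cards. Names are hypothetical.

const WILDCARD_FROM = 23;

function canPickDriver(
  round: number,           // 1-based round number in the season
  driverId: string,
  previousPicks: string[], // drivers already used this season
): boolean {
  if (round >= WILDCARD_FROM) return true; // wild-card rounds: anyone goes
  return !previousPicks.includes(driverId);
}
```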
- 6 sprint weekends in the season: China, Miami, Canada, Britain, Netherlands, Singapore.
- Your driver scores points from both sprint and race.
- Strong sprint performers are valuable picks on these weekends.
- Normal weekends: picks lock 10 minutes before Qualifying (Saturday).
- Sprint weekends: picks lock 10 minutes before Sprint Qualifying (Saturday morning).
- Pick window opens Monday of race week, so you can watch practice before locking in.
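The lock time is the same simple rule on both weekend types: 10 minutes before the relevant qualifying session. A minimal sketch (hypothetical helper name):

```typescript
// Picks lock 10 minutes before Qualifying (or Sprint Qualifying on
// sprint weekends). The session start time is assumed to be known.

const LOCK_OFFSET_MS = 10 * 60 * 1000;

function pickDeadline(sessionStart: Date): Date {
  return new Date(sessionStart.getTime() - LOCK_OFFSET_MS);
}
```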
Race points: 25-18-15-12-10-8-6-4-2-1 (P1-10), 0 otherwise. Sprint points: 8-7-6-5-4-3-2-1 (P1-8), 0 otherwise.
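The scoring tables translate directly into lookup functions. A sketch (function names are illustrative, not the worker's actual code):

```typescript
// Points tables from the rules above; positions outside the table score 0.
const RACE_POINTS = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1];
const SPRINT_POINTS = [8, 7, 6, 5, 4, 3, 2, 1];

function racePoints(position: number): number {
  return RACE_POINTS[position - 1] ?? 0;
}

function sprintPoints(position: number): number {
  return SPRINT_POINTS[position - 1] ?? 0;
}

// A weekend's total: race points, plus sprint points when a sprint ran.
function weekendPoints(racePos: number, sprintPos?: number): number {
  return racePoints(racePos) + (sprintPos !== undefined ? sprintPoints(sprintPos) : 0);
}
```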
```sh
make start
```

Common commands:

```sh
make lint
make test
make build
```

Run `make` to see all available targets.
To enable push notifications (pick reminders and result alerts), set up VAPID keys:
```sh
# Generate a key pair
bunx web-push generate-vapid-keys

# Store as Cloudflare secrets
bunx wrangler secret put VAPID_PUBLIC_KEY
bunx wrangler secret put VAPID_PRIVATE_KEY
bunx wrangler secret put VAPID_SUBJECT # e.g. mailto:[email protected]
```

Users can subscribe from their profile page. Notifications are sent via scheduled crons before qualifying deadlines and after results are synced.
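The cron's decision of who gets a pick reminder is ordinary time arithmetic. A sketch under assumed names (the real cron logic lives in the worker and may differ):

```typescript
// Sketch: decide whether a pick-reminder push should go out for a user.
// All names here are hypothetical, not the app's actual types.

interface ReminderInput {
  now: Date;
  deadline: Date;           // pick lock time for the upcoming weekend
  hasPicked: boolean;       // user has already locked in a driver
  reminderWindowMs: number; // e.g. send within 24h of the deadline
}

function shouldSendReminder(input: ReminderInput): boolean {
  const untilDeadline = input.deadline.getTime() - input.now.getTime();
  return !input.hasPicked && untilDeadline > 0 && untilDeadline <= input.reminderWindowMs;
}
```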
Run the UI with in-browser demo data (no backend required):

```sh
VITE_DEMO_MODE=true make start
```

Or use the shorthand:

```sh
make demo
```

Use the Demo Mode buttons on the login screen to pick a scenario (Showcase, New Player, Locked Picks, Admin).
Screenshots
| Login | Dashboard |
|---|---|
| ![]() | ![]() |
| ![]() | ![]() |
React + Redux Toolkit + React Router SPA.
- UI and routes live in `src/`.
- API client lives in `src/lib/api.ts` with typed request/response handling.
Cloudflare Workers + Hono + D1 SQLite, structured as a layered architecture with manual dependency injection - three layers with a strict one-way dependency flow:
- Routes: `worker/routes/` (HTTP handlers, validation)
- Use cases: `worker/usecases/` (business logic)
- Repositories: `worker/repositories/` (data access via interfaces)
Use cases declare their dependencies as interfaces (`CreatePickDeps`), and routes wire concrete implementations at call time via factory functions (`createD1UserRepository(env)`).

This differs from hexagonal (ports-and-adapters) architecture, which treats driving adapters (HTTP) and driven adapters (database) symmetrically, with ports on both sides; here the HTTP layer calls use cases directly, with no abstraction in between. It is simpler than clean architecture, which prescribes more layers (entities, use cases, interface adapters, frameworks) and stricter isolation rules. And unlike onion architecture, which emphasises concentric rings with the domain at the centre, there is no distinct domain model layer: shared types live in `shared/` and are used directly.

The pattern extends to external services (F1 API, clock), which are also injected as interfaces, making everything testable without mocks or network calls. It's a pragmatic middle ground: the key insight from these patterns (depend on abstractions, not implementations) without the ceremony.
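The wiring described above can be sketched in a few lines. The shapes here are illustrative, not the app's actual signatures; only the names `CreatePickDeps` and `createD1UserRepository` come from the codebase:

```typescript
// Minimal sketch of interface-based dependency injection.
interface User { id: string; name: string }

// Driven side: the use case depends on this interface, not on D1.
interface UserRepository {
  findById(id: string): Promise<User | null>;
}

// Deps bundle, in the spirit of CreatePickDeps.
interface CreatePickDeps {
  users: UserRepository;
  now(): Date;
}

// Use case: pure business logic against the interfaces.
async function createPick(deps: CreatePickDeps, userId: string): Promise<string> {
  const user = await deps.users.findById(userId);
  if (!user) throw new Error("unknown user");
  return `${user.name} picked at ${deps.now().toISOString()}`;
}

// In tests, an in-memory implementation stands in for the D1-backed
// factory (createD1UserRepository(env)), with no mocking framework.
function createInMemoryUserRepository(seed: User[]): UserRepository {
  const byId = new Map(seed.map((u): [string, User] => [u.id, u]));
  return { findById: async (id) => byId.get(id) ?? null };
}
```

A route handler would build the deps object from `env` and call the use case; a test builds it from in-memory fakes and a fixed clock.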
Stack: Bun test runner + Testing Library + MSW.
- UI tests: `src/**/*.test.tsx` (app flows and rendering).
- MSW mocks API responses at the network level.
- Fixtures live in `src/test/fixtures.ts`.
Run:

```sh
make test/client
```

- Use-case tests: `worker/test/usecases/*.test.ts` (business behavior with in-memory repos and stub services).
- Repository tests: `worker/test/repositories/*.d1.test.ts` (D1 integration tests for SQL implementations).
- HTTP API tests: `worker/test/http/*.http.test.ts` (core request/response paths and auth checks using a real worker + D1).
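The use-case test style described above (in-memory repos plus stub services such as the clock) can be sketched like this. Names are illustrative, not the real test suite's:

```typescript
// Business behavior exercised with an in-memory repo and a stub clock:
// no network, no D1, and no mocking framework needed.

interface Clock { now(): Date }

interface PickRepository {
  save(driverId: string): Promise<void>;
  all(): Promise<string[]>;
}

function inMemoryPickRepository(): PickRepository {
  const picks: string[] = [];
  return {
    save: async (d) => { picks.push(d); },
    all: async () => [...picks],
  };
}

// Hypothetical use case: a pick only succeeds before the lock deadline.
async function lockPick(
  repo: PickRepository,
  clock: Clock,
  deadline: Date,
  driverId: string,
): Promise<boolean> {
  if (clock.now() >= deadline) return false; // picks are locked
  await repo.save(driverId);
  return true;
}
```

A test then fixes the clock on either side of the deadline and asserts on the outcome, which keeps the behavior deterministic.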
Run:

```sh
make test/worker
```

The notebook at `analysis/picks_analysis.ipynb` explores whether historical data can improve pick strategy through the season. The goal is to test whether prior race results, sprint performance, and driver form can help optimize picks while staying within the one-time driver constraint.
For more detail, see `analysis/`.




