
docs(public): refresh demo-branches-replay.gif with replay + cost steps#205

Merged
hugocorreia90 merged 1 commit into main from docs/demo-branches-replay-gif
Apr 21, 2026

Conversation

@hugocorreia90
Contributor

Summary

The demo-branches-replay tape's name has always promised replay; the content never actually ran rocky replay. Now that PR #203 makes rocky replay / rocky cost return real data end-to-end (Arc 1 wave 2's record_run wiring), the tape can finally deliver on its name.

New step sequence

After rocky run --branch fix_revenue the tape now runs:

  • rocky replay latest --output table — per-model SQL hashes, status, timings captured for the branch's run.
  • rocky cost latest --output table — per-model + total cost rollup ($0.00 for DuckDB; showcases the surface).

rocky lineage --downstream stays at the end as the cross-cutting flourish.

Tape hygiene

Added an explicit export PATH=/Users/hugocorreia/Developer/rocky-data/engine/target/release:$PATH to the preamble so the render uses the local build rather than whatever rocky happens to be first on the global PATH (the installed ~/.local/bin/rocky is still v1.11.0 and won't have rocky cost until the next release cut).

Size

  • Old: 370 KB (6 steps, ~15 s)
  • New: 953 KB (8 steps, ~22 s)
  • Budget: ≤ 5 MB per the rocky-live handoff

Test plan

Tape source lives in ~/Developer/rocky-live/assets/demo-branches-replay.tape (private rocky-live library; not committed here). Only the rendered GIF ships in this repo.

The tape's name has always promised replay; the content never showed
it. Now that PR #203 makes `rocky replay` / `rocky cost` return real
data end-to-end (Arc 1 wave 2's record_run wiring), the demo can
finally deliver on its name.

New step sequence after `rocky run --branch fix_revenue`:

    rocky replay latest --output table   # per-model SQL hashes,
                                         # status, timings for the
                                         # branch's run
    rocky cost latest --output table     # per-model + total cost
                                         # rollup ($0.00 for DuckDB)

Lineage stays as the final flourish. Tape preamble also adds an
explicit `export PATH=...engine/target/release:$PATH` so the demo
uses the local build rather than whatever `rocky` happens to be on
the global PATH — relevant until the next release cut.

Rendered at 953 KB / 1200×700 via `vhs` — within the 5 MB asset
budget. Tape source lives in `~/Developer/rocky-live/assets/` (private
library); this commit updates only the public-facing GIF embedded in
the README and the Astro docs.
@hugocorreia90 hugocorreia90 merged commit 14a4461 into main Apr 21, 2026
7 checks passed
@hugocorreia90 hugocorreia90 deleted the docs/demo-branches-replay-gif branch April 21, 2026 16:21
hugocorreia90 added a commit that referenced this pull request Apr 21, 2026
* chore(engine): release 1.12.0

Arc 1 wave 2 + cleanup cascade. Eight PRs since v1.11.0:

- #199 SIGPIPE handler
- #200 rocky branch compare
- #201 POC target_dialect cleanup
- #202 rocky cost <run_id|latest> (Arc 2 wave 2 first PR)
- #203 rocky run persists RunRecord (Arc 1 wave 2 load-bearing fix)
- #204 docs + CHANGELOG [Unreleased] cascade
- #205 demo-branches-replay.gif refresh
- #206 real per-model started_at on MaterializationOutput

rocky history / replay / trace / cost now return real data end-to-end
for the first time. Full notes in CHANGELOG.

* feat(state): configurable transfer timeout + tracing span + Valkey wrap

- `StateConfig.transfer_timeout_seconds` (default 300s) replaces the hard-
  coded `STATE_TRANSFER_TIMEOUT`. Operators can now tune the wall-clock
  budget in `rocky.toml` for very large state or slow networks without
  recompiling. `StateConfig` gains a manual `Default` impl so
  `StateConfig::default()` yields 300s (not u64's zero).
- `state.upload` / `state.download` tracing spans wrap every transfer
  carrying `backend`, `bucket`, and `size_bytes`. The timeout-elapse warn event
  inherits those fields automatically, so hung transfers are diagnosable
  from stderr logs alone (which dagster-rocky streams into the Dagster run
  viewer).
- Structured `warn!` on timeout elapse ("state transfer exceeded timeout
  budget") with a `duration_ms` field — replaces silent `Timeout(_)`.
- Valkey read/write paths audited and closed: `redis::Client::get_connection`
  + `redis::cmd(...).query()` are sync and blocked the tokio runtime thread;
  no outer `tokio::time::timeout` could rescue them. Both `upload_to_valkey`
  and `download_from_valkey` now run under `tokio::task::spawn_blocking`
  inside `with_transfer_timeout`, closing the same class of hang the
  object-store paths were already protected against.
- `default_client_options()` in `object_store.rs` honours the standard
  `object_store`-crate env vars `AWS_ALLOW_HTTP` / `AZURE_ALLOW_HTTP` /
  `GOOGLE_STORAGE_ALLOW_HTTP`. Always off in production; the new
  integration test uses it to front the S3 SDK with a plain-HTTP
  wiremock server without bypassing the credential chain.
- New `tests/state_sync_timeout_test.rs` integration test: a wiremock S3
  endpoint that holds PutObject for 1h proves `upload_state` returns
  `StateSyncError::Timeout` within the configured 2s budget (+grace).
  A fast-responding endpoint serves as a negative control against regressions.
- CHANGELOG entries added under [1.12.0]. Example config in
  `engine/examples/dagster-integration/rocky.toml` surfaces the new key.
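
The two mechanisms above — the manual `Default` so the timeout defaults to 300 s rather than zero, and running a sync transfer off the waiting thread so a wall-clock budget can actually elapse — can be sketched roughly as follows. This is a minimal stand-in, not the rocky-core code: it uses a std thread plus `mpsc::Receiver::recv_timeout` in place of `tokio::task::spawn_blocking` wrapped in `tokio::time::timeout`, and the names (`StateConfig`, `with_transfer_timeout`) are mirrored from the changelog entries above.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Mirror of the config key described in the changelog (illustrative only).
pub struct StateConfig {
    pub transfer_timeout_seconds: u64,
}

// Manual Default so StateConfig::default() yields the 300s budget,
// not u64's derived zero.
impl Default for StateConfig {
    fn default() -> Self {
        Self { transfer_timeout_seconds: 300 }
    }
}

// Std-thread stand-in for the spawn_blocking + timeout pattern: the
// blocking client call runs on its own thread, so the caller's wait
// can time out even though the call itself never yields.
fn with_transfer_timeout<T: Send + 'static>(
    cfg: &StateConfig,
    blocking_op: impl FnOnce() -> T + Send + 'static,
) -> Result<T, String> {
    let budget = Duration::from_secs(cfg.transfer_timeout_seconds);
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Receiver may have given up; a failed send is fine to ignore.
        let _ = tx.send(blocking_op());
    });
    rx.recv_timeout(budget)
        .map_err(|_| "state transfer exceeded timeout budget".to_string())
}
```

The key point is the same as in the Valkey fix: a sync `redis` call parked on the runtime thread can never be rescued by an outer timeout, so the call must be moved off that thread before the budget is enforced.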

cargo fmt clean; `clippy --workspace --all-targets -- -D warnings` clean;
all 977 rocky-core unit tests + 30 e2e + 20 integration + the 2 new
timeout tests pass.

* chore(codegen): regenerate schemas + pydantic types for StateConfig.transfer_timeout_seconds

* chore(fixtures): regenerate dagster test fixtures for 1.12.0

`just regen-fixtures` — version string bump only (1.11.0 → 1.12.0) across
35 captured fixtures under integrations/dagster/tests/fixtures_generated/.