Comparing changes
base repository: daydreamlive/scope
base: v0.1.6
head repository: daydreamlive/scope
compare: v0.1.7
- 13 commits
- 43 files changed
- 11 contributors
Commits on Mar 9, 2026
-
Fix download UI blocking when model artifacts contain empty files (#610)
HuggingFace repos can legitimately contain 0-byte files (config stubs, etc.). The models_are_downloaded check treated these as missing, causing the download status to never report completion and trapping the user in the download dialog with no way to exit. Remove the file size check so that file existence alone is sufficient. The empty-directory check is preserved since that genuinely indicates a failed or incomplete download. Signed-off-by: RyanOnTheInside <[email protected]>
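A minimal sketch of the corrected check described above; the function and parameter names are illustrative, not the real scope API:

```python
from pathlib import Path

def models_are_downloaded(model_dir: Path, expected_files: list[str]) -> bool:
    """Illustrative sketch: existence alone is sufficient, since HuggingFace
    repos can legitimately contain 0-byte files (config stubs, etc.)."""
    # An empty or missing directory still indicates a failed/incomplete download.
    if not model_dir.is_dir() or not any(model_dir.iterdir()):
        return False
    # Previously this also required st_size > 0, which broke on 0-byte stubs.
    return all((model_dir / f).exists() for f in expected_files)
```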
Commit: abed05e
-
Output app logs from e2e as a github artifact (#608)
* Output app logs from e2e as a github artifact
* switch to prod daydream api

Signed-off-by: Max Holland <[email protected]>
Commit: d0342e8
-
Optional logging of OSC messages (#629)
* Optional logging of OSC messages
* fix: use server-confirmed state for OSC logging toggle. Instead of optimistically updating local state (which was not reverted on error), use the status response returned by PUT /api/v1/osc/settings to set the UI state. Also deduplicate the OscStatus interface into a shared OscStatusResponse type in api.ts.
* fix: rename OSC log toggle label from "Unknown only" to "Errors only"

Signed-off-by: Thom Shutt <[email protected]>, Rafał Leszko <[email protected]>
Co-authored-by: Claude Opus 4.6 <[email protected]>
Commit: d347ccb
-
fix: batch-level FPS tracking for smooth frame delivery (#617)
* fix: batch-level FPS tracking for smooth frame delivery

  Replace per-frame delta FPS tracking with batch-level throughput measurement (sum(frames) / sum(intervals)) in PipelineProcessor. The old approach measured inter-frame deltas, but in batched pipelines (e.g., 12 frames emitted at once), near-zero intra-batch deltas mixed with large inter-batch gaps caused the FPS estimate to oscillate permanently. This produced unstable frame pacing downstream.

  The new approach records one sample per pipeline call as a (num_frames, interval) tuple, then computes FPS over a sliding window of 10 batches. This gives a stable, accurate estimate regardless of batch size.

  Benchmark results (10s runs, real PipelineProcessor with stub pipelines):

  Scenario            | Metric         | Old    | New
  --------------------|----------------|--------|------
  LongLive (batch=12) | Jitter         | 84.1%  | 0.3%
                      | Stalls (>1.5x) | 7.9%   | 0.0%
                      | FPS error      | 34.3%  | 0.2%
                      | FPS stdev      | 7.23   | 0.01
  SDv2 (batch=4)      | Jitter         | 29.1%  | 5.8%
                      | Stalls (>1.5x) | 4.6%   | 0.0%
                      | FPS error      | 13.8%  | 1.0%
                      | FPS stdev      | 5.24   | 0.69
  Single-frame        | Jitter         | 2.3%   | 4.5%
                      | FPS error      | 2.7%   | 2.7%
                      | FPS stdev      | 1.89   | 1.35
  Pause/resume        | Jitter         | 112.9% | 0.3%
                      | Stalls (>1.5x) | 7.3%   | 0.0%
                      | FPS error      | 53.8%  | 0.2%
                      | FPS stdev      | 12.10  | 0.01

  Single-frame results are equivalent, confirming no regression for non-batching pipelines.

* Reduce OUTPUT_QUEUE_MAX_SIZE_FACTOR to 2

Signed-off-by: RyanOnTheInside <[email protected]>, Rafal Leszko <[email protected]>
Co-authored-by: Rafal Leszko <[email protected]>
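The batch-level measurement described above can be sketched as follows; the class and method names are illustrative, not PipelineProcessor's real interface:

```python
import time
from collections import deque
from typing import Optional

class BatchFPSTracker:
    """Sketch: one (num_frames, interval) sample per pipeline call,
    FPS = sum(frames) / sum(intervals) over a sliding window of 10 batches."""

    def __init__(self, window: int = 10):
        self.samples: deque = deque(maxlen=window)  # (num_frames, interval) pairs
        self.last_call: Optional[float] = None

    def record(self, num_frames: int, now: Optional[float] = None) -> None:
        # Record one sample per pipeline call, not one per frame.
        now = time.monotonic() if now is None else now
        if self.last_call is not None:
            self.samples.append((num_frames, now - self.last_call))
        self.last_call = now

    def fps(self) -> Optional[float]:
        total_frames = sum(n for n, _ in self.samples)
        total_time = sum(dt for _, dt in self.samples)
        return total_frames / total_time if total_time > 0 else None
```

With batches of 12 frames every 0.5s this yields a steady 24 FPS, whereas per-frame deltas would mix near-zero intra-batch gaps with 0.5s inter-batch gaps.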
Commit: 66e8d31
-
Account for not found error in fal app delete (#634)
When this PR cleanup action ran, it silently failed if the app was not found. Signed-off-by: Max Holland <[email protected]>
Commit: a9160aa
-
feat: MIDI support & MIDI mappable parameters (#537)
* feat: MIDI support
* midi mappable plugins
* midi ui fixes
* es lint midi fixes
* midi ui formatting
* feat: ui fixes for the midi input
* feat: add more midi inputs and remove superfluous cache management button
* feat: add play button to the midi controller
* formatting
* fix bug with rerenders & remove unused function
* formatting
* feat: add MIDI prompt blend controls. Replace the old prompt slot workflow with direct prompt solo and blend controls so MIDI mappings can switch and rebalance prompts live while streaming.
* feat: replace inline MIDI learning badge with a toast
* revert: restore cache management UI to match main. Reverts the cache management changes introduced alongside MIDI support, restoring the Manage Cache toggle and Reset Cache button to their original behavior.
* fix build
* build fix, manage cache & forbid bypass of disabled params
* address relevant CR comments
* formatting
* address minor CR comments

Signed-off-by: gioelecerati <[email protected]>
Co-authored-by: James Dawson <[email protected]>
Commit: 5b122ea
-
Fix vace_context_scale not sent in initial params without ref images (#630)
* Fix vace_context_scale not sent in initial params without ref images

  vace_context_scale was only included in initial stream parameters and load params when reference images were present, because it was gated inside getVaceParams() which returns {} without ref images. When using VACE with input video (no ref images), the scale was never sent at stream start, so the backend always defaulted to 1.0 regardless of the UI slider value. Decouple vace_context_scale from the ref-images check so it is always sent when the pipeline supports VACE.

* Propagate VACE params (vace_context_scale, ref_images) for Krea pipeline

  The Krea branch in _load_pipeline_implementation inlined its VACE setup to use the 14B checkpoint instead of the default 1.3B, but this skipped the vace_context_scale and ref_images extraction that _configure_vace provides. Add the missing parameter extraction and tests.

Signed-off-by: RyanOnTheInside <[email protected]>
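A hypothetical sketch of the decoupling (the real logic lives in the frontend's getVaceParams(); the function and parameter names here are illustrative):

```python
def build_initial_params(pipeline_supports_vace: bool,
                         ref_images: list,
                         vace_context_scale: float) -> dict:
    """Illustrative sketch: vace_context_scale is sent whenever the pipeline
    supports VACE, no longer gated on reference images being present."""
    params: dict = {}
    if pipeline_supports_vace:
        # Previously this key was only added inside the ref-images branch,
        # so input-video-only streams never sent the slider value.
        params["vace_context_scale"] = vace_context_scale
        if ref_images:
            params["ref_images"] = ref_images
    return params
```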
Commit: 7f6a8df
-
Fix WebRTC VP8 decode errors and PLI keyframe requests
- Add retry logic for transient VP8 decode errors in cloud WebRTC input loops
- Fix PLI keyframe request for aiortc >= 1.14.0 (pass media_ssrc)
- Poll for remote streams before sending PLI instead of blind retries
- Log first frame receipt to verify PLI/keyframe roundtrip

Signed-off-by: emranemran <[email protected]>
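The retry idea for transient decode errors can be sketched roughly as below; the `track.recv()` shape and the backoff values are assumptions, not the real aiortc integration:

```python
import asyncio

async def recv_with_retry(track, max_retries: int = 3):
    """Sketch: retry an async receive a few times before giving up,
    since VP8 decode errors right after connection can be transient."""
    for attempt in range(max_retries + 1):
        try:
            return await track.recv()
        except Exception:  # real code would narrow this to decode errors
            if attempt == max_retries:
                raise
            # Brief, growing pause before retrying the receive.
            await asyncio.sleep(0.05 * (attempt + 1))
```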
Commit: bb69e0a
-
Merge pull request #614 from daydreamlive/emran/webrtc-retry-fix-vp8
Fix WebRTC VP8 decode errors and PLI keyframe requests
Commit: 3a76284
Commits on Mar 10, 2026
-
Add native Workflow export to daydream.live (#636)
* Add native Workflow export to daydream.live
* Handle URL opening better when running in browser

Signed-off-by: Thom Shutt <[email protected]>
Commit: 1ac85de
-
Close WebRTC session when cloud WebSocket disconnects (#638)
When the fal WebSocket closes (e.g. MAX_DURATION_EXCEEDED timer), the WebRTC peer connection (UDP) between the client and the scope backend continues running independently: the pipeline keeps processing frames and the video keeps streaming. Add a DELETE /api/v1/webrtc/offer/{session_id} endpoint to the backend, and call it from fal_app.py's finally block to explicitly tear down the WebRTC session when the signaling WebSocket closes.

Signed-off-by: emranemran <[email protected]>
Commit: 3e19e6d
-
Cache pipeline schemas and plugin list responses (#645)
* Cache pipeline schemas and plugin list responses

  The /api/v1/pipelines/schemas and /api/v1/plugins endpoints were recomputing everything on every request (schema generation, distribution scanning, update check subprocesses). Since these don't change until a plugin install/uninstall (which restarts the server), cache the responses after first computation.

* Fix test: clear plugin/schema caches between tests

  The cached plugin list response from a previous test was being returned instead of the mock, causing test_returns_plugin_list to fail.

Signed-off-by: Rafal Leszko <[email protected]>
Co-authored-by: Claude Opus 4.6 <[email protected]>
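The compute-once caching pattern, including the test-isolation fix, can be sketched with functools.lru_cache; the function names are illustrative:

```python
import functools

def _compute_schemas() -> dict:
    # Stand-in for the expensive work: schema generation, distribution
    # scanning, update-check subprocesses.
    return {"pipelines": ["example"]}

@functools.lru_cache(maxsize=1)
def get_pipeline_schemas() -> dict:
    """Sketch: compute once and serve the cached response until a plugin
    install/uninstall restarts the server."""
    return _compute_schemas()

def clear_caches() -> None:
    # Tests clear the cache between runs so a previous test's cached
    # response does not mask a mock.
    get_pipeline_schemas.cache_clear()
```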
Commit: 9e7a370
-
chore: bump version to 0.1.7 (#647)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: leszko <[email protected]>
Commit: dbaa335
You can try running this command locally to see the comparison on your machine:
git diff v0.1.6...v0.1.7