
Tempo Sync Modulation #672

Closed

BuffMcBigHuge wants to merge 31 commits into main from marco/feat/beatsync-modulation-2

Conversation

Collaborator

BuffMcBigHuge commented Mar 11, 2026

Tempo Sync & Modulation

Summary

Adds tempo synchronization and beat-synced parameter modulation to Scope. Pipelines can lock to Ableton Link or MIDI clock, quantize discrete parameter changes to beat boundaries, and continuously modulate parameters (e.g. noise_scale, denoising_steps) in sync with the beat. Target users are VDJs running Scope alongside Ableton Live, Resolume, or similar applications.

What's in this PR

Core

  • Tempo sync — Ableton Link and MIDI clock support with optional dependencies (aalink, mido/python-rtmidi). When neither is installed, the UI shows install hints and the feature degrades gracefully.
  • Parameter scheduling — Discrete parameter changes (prompt switches, denoising adjustments) can be quantized to beat/bar boundaries with configurable lookahead to compensate for pipeline latency.
  • Parameter modulation — Continuous per-frame modulation of numeric parameters driven by beat phase. Pipeline-agnostic: operates on kwargs before they reach any pipeline.
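The actual engine lives in `src/scope/server/modulation.py`; as a rough, hypothetical sketch of how a beat-phase-driven waveform might offset a numeric kwarg before it reaches a pipeline (the function name, clamping behavior, and waveform conventions here are assumptions, not the PR's code):

```python
import math

def modulate(base: float, beat_phase: float, shape: str,
             depth: float, lo: float, hi: float) -> float:
    """Offset `base` by a beat-locked waveform, clamped to [lo, hi].

    `beat_phase` is the position within the current beat, in [0, 1).
    """
    if shape == "sine":
        wave = math.sin(2 * math.pi * beat_phase)   # -1..1 over one beat
    elif shape == "triangle":
        wave = 1.0 - 4.0 * abs(beat_phase - 0.5)    # -1 at edges, +1 mid-beat
    elif shape == "square":
        wave = 1.0 if beat_phase < 0.5 else -1.0
    else:
        raise ValueError(f"unknown shape: {shape}")
    half_span = (hi - lo) / 2.0
    return max(lo, min(hi, base + depth * half_span * wave))
```

For example, a `noise_scale` of 0.5 with a sine shape and full depth would swing between 0.0 and 1.0 once per beat.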

Frontend

  • Tempo Sync panel — Source toggle (Link/MIDI), BPM input, beats per bar, live beat indicator, quantize mode, lookahead, beat cache reset rate, prompt cycle rate.
  • Modulation section — Target selector (from schema ui.modulatable fields), wave shape (sine, cosine, triangle, saw, square, pulse decay), rate (½ beat → 4 bars), depth.
  • Prompt cycling — Automatically advance through timeline prompts on beat/bar boundaries when enabled.

Pipelines

  • metronome — Visual metronome for testing ParameterScheduler lookahead. Three color layers, latency_ms to simulate pipeline delay.
  • mod-scope — Oscilloscope for modulation visualization.

Modulatable pipelines

LongLive, Krea, MemFlow, RewardForcing, StreamDiffusionV2 expose denoising_steps and (LongLive) noise_scale as modulatable via ui_field_config(modulatable=True, ...).


Architecture

Tempo Sources (Link / MIDI / Client-forwarded)
    → TempoSync Manager (BeatState, 15Hz notification loop)
    → ParameterScheduler (quantize discrete changes)
    → ModulationEngine (per-frame param modulation)
    → PipelineProcessor (inject beat state + modulated params)
    → Pipeline
  • Beat state: bpm, beat_phase, bar_position, beat_count, is_playing injected into every pipeline call.
  • Cloud mode: Browser forwards beat state over WebRTC data channel; server prefers client state when fresh (< 2s).
  • Modulation: Config sent as modulations in parameter updates; Pydantic-validated on receipt.

REST API

| Method | Path | Purpose |
|---|---|---|
| GET | `/api/v1/tempo/status` | Current tempo status and beat state |
| POST | `/api/v1/tempo/enable` | Enable tempo sync (Link or MIDI) |
| POST | `/api/v1/tempo/disable` | Disable tempo sync |
| POST | `/api/v1/tempo/set_tempo` | Set BPM (Link only) |
| GET | `/api/v1/tempo/sources` | Available tempo sources and capabilities |

Modulation: No dedicated REST API. Config is sent as modulations in WebRTC parameter updates.


How to try it

  1. Install optional deps (for Link or MIDI):

     ```sh
     uv sync --group link   # Ableton Link
     uv sync --group midi   # MIDI clock
     ```
  2. Run Scope and start a stream.

  3. Open the Tempo Sync panel in the sidebar. Enable tempo sync, choose Link or MIDI, set BPM.

  4. Quantize mode: Set to "Beat" or "Bar" so changes like prompt switches land on the beat. Adjust lookahead if latency is visible.

  5. Modulation: In the modulation section, pick a target (e.g. Noise Scale), choose shape/rate/depth, and toggle ON.

  6. Prompt cycling: Add multiple prompts to the timeline, set promptCycleRate to "Bar" or "Beat", and prompts will advance automatically.

  7. Demo pipelines: Use metronome to test lookahead or mod-scope to visualize modulation.
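Step 4's lookahead interacts with quantization roughly as follows: the scheduler computes the time remaining to the next beat or bar boundary and fires that much earlier. The real logic lives in `src/scope/server/parameter_scheduler.py`; the function below is a hedged sketch with assumed names and signatures:

```python
def apply_delay_s(beat_phase: float, bar_phase: float, bpm: float,
                  quantize: str, beats_per_bar: int,
                  lookahead_ms: float) -> float:
    """Seconds to wait so a change lands on the next boundary (sketch).

    Lookahead fires the change early to absorb pipeline latency; the
    result is clamped at zero so a late request applies immediately.
    """
    beat_s = 60.0 / bpm
    if quantize == "beat":
        remaining = (1.0 - beat_phase) * beat_s
    elif quantize == "bar":
        remaining = (1.0 - bar_phase) * beats_per_bar * beat_s
    else:
        raise ValueError(f"unknown quantize mode: {quantize}")
    return max(0.0, remaining - lookahead_ms / 1000.0)
```

At 120 BPM, halfway through a beat, a beat-quantized change waits 0.25 s; with 100 ms of lookahead it waits 0.15 s, so a pipeline with ~100 ms latency shows the change on the boundary.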


Testing

  • ParameterScheduler: tests/test_parameter_scheduler.py — adversarial tests for boundary math, concurrency, lookahead, and edge cases.
  • E2E: Tests passed against fal.ai preview deployment.

Optional dependencies

```toml
[project.optional-dependencies]
link = ["aalink>=0.1.1"]
midi = ["mido>=1.3.0", "python-rtmidi>=1.5.0"]
```
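The graceful degradation described under Core typically follows the standard optional-import pattern; a sketch (the real detection lives in the tempo source modules, and the `available_sources` helper here is illustrative):

```python
# Detect optional tempo dependencies without failing when they are absent.
try:
    import aalink  # Ableton Link bindings
    LINK_AVAILABLE = True
except ImportError:
    aalink = None
    LINK_AVAILABLE = False

try:
    import mido  # MIDI clock
    MIDI_AVAILABLE = True
except ImportError:
    mido = None
    MIDI_AVAILABLE = False

def available_sources() -> dict[str, bool]:
    """Report which tempo sources can actually be enabled on this install."""
    return {"link": LINK_AVAILABLE, "midi": MIDI_AVAILABLE}
```

The UI can then show install hints for any source whose flag is `False` instead of raising at import time.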

Breaking changes

None. Tempo sync and modulation are additive. Pipelines that don't use beat kwargs are unaffected.


Documentation

  • docs/tempo-sync-review.md — Implementation review, architecture, API, edge cases, and future work.

Summary by CodeRabbit

  • New Features

    • Tempo synchronization via Ableton Link and MIDI Clock support
    • Beat-synced parameter modulation with waveform shapes, depth, and rate controls
    • Parameter scheduling aligned to beat boundaries with quantization modes
    • New Metronome and Modulation Scope visualization pipelines for testing
    • Beat-driven prompt cycling
  • Bug Fixes

    • Enhanced noise controller state initialization handling


coderabbitai bot commented Mar 11, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3d408969-3410-4439-ab98-f31cdd1e2f44

📝 Walkthrough


This pull request introduces a comprehensive beat-synchronization and parameter modulation system. It adds tempo sync management (Ableton Link and MIDI Clock support), a parameter scheduler for quantized beat-boundary updates, a modulation engine for beat-driven parameter animation, and frontend UI components to configure these features. New Metronome and ModScope visualization pipelines are included for testing.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Documentation**<br>`docs/tempo-sync-review.md` | Comprehensive design review documenting tempo sync architecture, REST APIs, WebRTC messages, beat flow, modulation system, and feature index across layers. |
| **Frontend Tempo Sync UI**<br>`frontend/src/components/TempoSyncPanel.tsx`, `frontend/src/components/settings/TempoSyncSection.tsx` | New React components for tempo sync configuration with source selection, MIDI device control, BPM display, beat quantization, lookahead, and beat reset controls. |
| **Frontend Modulation UI**<br>`frontend/src/components/settings/ModulationSection.tsx` | New component for per-target modulation configuration with shape selection (sine, cosine, triangle, saw, square, pulse decay), depth control, and rate selection. |
| **Frontend Tempo Sync Logic**<br>`frontend/src/hooks/useTempoSync.ts` | New hook managing tempo state (enabled, bpm, beat phase, bar position) with polling, enable/disable actions, and error handling. |
| **Frontend WebRTC & Settings**<br>`frontend/src/hooks/useUnifiedWebRTC.ts`, `frontend/src/types/index.ts` | Extended WebRTC callback options for tempo updates and parameter changes; added quantizeMode, lookaheadMs, beatCacheResetRate, promptCycleRate, and modulations to SettingsState. |
| **Frontend API & Components**<br>`frontend/src/lib/api.ts`, `frontend/src/components/DownloadDialog.tsx`, `frontend/src/components/WorkflowImportDialog.tsx`, `frontend/src/components/settings/OscTab.tsx` | New Tempo Sync REST API surface (getTempoStatus, enableTempo, disableTempo, setTempo, getTempoSources); extended SchemaFieldUI metadata for modulatable parameters; minor type annotation improvements. |
| **Frontend Stream Integration**<br>`frontend/src/pages/StreamPage.tsx` | Integrated TempoSyncPanel and useTempoSync hook; added beat-driven prompt cycling, quantized parameter updates, modulation state propagation, and WebRTC parameter forwarding for tempo-aware settings. |
| **Backend Tempo Sync Core**<br>`src/scope/server/tempo_sync.py` | New TempoSync orchestrator managing multiple sources with thread-safe beat state caching, client-state prioritization, source discovery, and notification broadcasting to WebRTC sessions. |
| **Backend Tempo Sources**<br>`src/scope/server/tempo_sources/link.py`, `src/scope/server/tempo_sources/midi_clock.py`, `src/scope/server/tempo_sources/__init__.py` | New Ableton Link and MIDI Clock tempo source adapters with polling/listening loops, async lifecycle management, and thread-safe beat state updates. |
| **Backend Parameter Scheduling**<br>`src/scope/server/parameter_scheduler.py` | New ParameterScheduler for quantized beat-boundary updates with lookahead support, timer-based delayed application, and configurable quantize modes (beat, bar, 2_bar, 4_bar). |
| **Backend Modulation Engine**<br>`src/scope/server/modulation.py` | New ModulationEngine applying beat-synced waveform-based modulation (sine, cosine, triangle, saw, square, exp_decay) to parameters with bounds enforcement and special step-list handling. |
| **Backend Frame/Pipeline Processing**<br>`src/scope/server/frame_processor.py`, `src/scope/server/pipeline_processor.py`, `src/scope/server/tracks.py` | Extended frame processor and pipeline processor with tempo_sync and modulation_engine integration; added beat-state injection into parameters, modulation application, and beat-boundary cache resets. |
| **Backend WebRTC & Server**<br>`src/scope/server/webrtc.py`, `src/scope/server/app.py` | Added NotificationSender for data-channel notifications; extended WebRTCManager.handle_offer with tempo_sync parameter; added tempo API endpoints (GET/POST /api/v1/tempo/*); integrated TempoSync initialization and lifecycle. |
| **Backend Schemas & Config**<br>`src/scope/server/schema.py`, `src/scope/core/pipelines/base_schema.py` | New TempoEnableRequest, TempoStatusResponse, TempoSetTempoRequest, TempoSourcesResponse; extended ui_field_config with modulatable, modulatable_min, modulatable_max flags. |
| **New Visualization Pipelines**<br>`src/scope/core/pipelines/metronome/`, `src/scope/core/pipelines/mod_scope/` | New Metronome pipeline for beat-sync latency testing with visual pulse/beat display; new ModScope pipeline for real-time oscilloscope visualization of modulatable parameters with beat-phase alignment. |
| **Pipeline Modulatable Config**<br>`src/scope/core/pipelines/krea_realtime_video/schema.py`, `src/scope/core/pipelines/longlive/schema.py`, `src/scope/core/pipelines/memflow/schema.py`, `src/scope/core/pipelines/reward_forcing/schema.py`, `src/scope/core/pipelines/streamdiffusionv2/schema.py` | Extended denoising_steps fields across pipelines with modulatable metadata and bounds (modulatable_min=100, modulatable_max=1000 for denoising; 0.0–1.0 for noise_scale). |
| **Backend Improvements**<br>`src/scope/core/pipelines/defaults.py`, `src/scope/core/pipelines/wan2_1/blocks/noise_scale_controller.py`, `src/scope/core/pipelines/wan2_1/blocks/set_timesteps.py`, `src/scope/core/pipelines/registry.py` | Conditional noise control nulling to preserve explicit values; refined cache reset logic in noise scale controller and set timesteps block; registered metronome and mod-scope pipelines; added modulation state flag (_modulated_step_list). |
| **Dependencies**<br>`pyproject.toml` | Added optional dependency groups: link (aalink>=0.1.1) for Ableton Link support, midi (mido>=1.3.0, python-rtmidi>=1.5.0) for MIDI Clock support. |
| **Tests**<br>`tests/test_parameter_scheduler.py` | Comprehensive test suite for ParameterScheduler covering boundary math, edge cases, scheduling/merging behavior, concurrency stress tests, and property validation. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant Frontend as Frontend UI
    participant WebRTC as WebRTC Manager
    participant TempoSync as TempoSync
    participant FrameProc as Frame Processor
    participant Scheduler as Parameter Scheduler
    participant ModEngine as Modulation Engine
    participant Pipeline as Pipeline

    User->>Frontend: Enable Tempo Sync (Link source)
    Frontend->>WebRTC: POST /api/v1/tempo/enable
    WebRTC->>TempoSync: enable("link", bpm=120)
    TempoSync->>TempoSync: Start Link polling (100 Hz)
    TempoSync->>WebRTC: Register notification session

    loop Every beat (~4 times/sec at 120 BPM)
        TempoSync->>WebRTC: Send tempo_update message
        WebRTC->>Frontend: Update beat display
    end

    User->>Frontend: Set quantize_mode="beat", enable modulation
    Frontend->>WebRTC: Send parameter update (quantize_mode, modulations)
    WebRTC->>FrameProc: schedule_quantized_update(params)
    FrameProc->>Scheduler: schedule({modulations, denoising_steps})
    Scheduler->>Scheduler: Compute delay to next beat boundary
    Scheduler->>Scheduler: Start timer (based on lookahead_ms)

    TempoSync->>FrameProc: Beat boundary reached
    Scheduler->>FrameProc: Apply pending parameters
    FrameProc->>ModEngine: apply(beat_phase, bar_position, params)
    ModEngine->>ModEngine: Compute waveform phase (sine, cosine, etc.)
    ModEngine->>ModEngine: Modulate denoising_steps and noise_scale
    ModEngine->>FrameProc: Return modulated parameters
    FrameProc->>Pipeline: Call with modulated params
    Pipeline->>Pipeline: Process frame with beat-synced settings
    FrameProc->>WebRTC: Send change_applied notification
    WebRTC->>Frontend: Update UI (confirm modulation applied)
```

Estimated Code Review Effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐰 A rabbit hops to beat-synced delight,
Tempo and modulations, oh what a sight!
From Link to MIDI, the beat flows so free,
Parameters dance in harmony! ✨🎵

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 52.83%, which is insufficient. The required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'Tempo Sync Modulation' is directly related to the PR's main objective of adding tempo synchronization and beat-synced parameter modulation to Scope. |



Contributor

github-actions bot commented Mar 12, 2026

🚀 fal.ai Preview Deployment

| | |
|---|---|
| App ID | `daydream/scope-pr-672--preview` |
| WebSocket | `wss://fal.run/daydream/scope-pr-672--preview/ws` |
| Commit | 9866f59 |

Testing

Connect to this preview deployment by running this on your branch:

```sh
uv run build && SCOPE_CLOUD_APP_ID="daydream/scope-pr-672--preview/ws" uv run daydream-scope
```

🧪 E2E tests will run automatically against this deployment.

Contributor

github-actions bot commented Mar 12, 2026

✅ E2E Tests passed

| | |
|---|---|
| Status | passed |
| fal App | `daydream/scope-pr-672--preview` |
| Run | View logs |

Test Artifacts

Check the workflow run for screenshots.

BuffMcBigHuge and others added 9 commits March 11, 2026 20:22
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: RyanOnTheInside <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
BuffMcBigHuge force-pushed the marco/feat/beatsync-modulation-2 branch from 5e4f357 to 4163333 on March 12, 2026 at 00:23
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
BuffMcBigHuge marked this pull request as ready for review March 12, 2026 02:13

coderabbitai bot left a comment


Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/scope/server/frame_processor.py (1)

628-660: ⚠️ Potential issue | 🟠 Major

Don't forward tempo-control keys into pipelines when tempo sync is absent.

Lines 629-660 only strip quantization, modulation, and client beat-state fields when the corresponding tempo helper exists. Any FrameProcessor still constructed without tempo_sync will now forward those control-plane keys into processor.update_parameters(...) and self.parameters, which changes the pre-feature contract and can surface as unknown pipeline args.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/server/frame_processor.py` around lines 628 - 660, The
FrameProcessor currently only removes tempo-control keys when helpers exist,
causing stale control keys to be forwarded when helpers are absent; always strip
these keys from the incoming parameters before any pipeline forwarding.
Specifically: ensure "quantize_mode" and "lookahead_ms" are popped from
parameters even if self.parameter_scheduler is None; pop "modulations" from
parameters even if self.modulation_engine is None (and call
self.modulation_engine.update(...) only when present); pop
"beat_cache_reset_rate" regardless (and only apply it to
self.pipeline_processors when present); and when self.tempo_sync is None remove
any client beat-state fields (the same keys handled by
tempo_sync.update_client_beat_state) so they are not forwarded to
processor.update_parameters or retained on self.parameters. Use the existing
symbols (parameter_scheduler, modulation_engine, tempo_sync,
pipeline_processors, _update_output_sinks_from_config, _update_input_source) to
locate the logic and implement the unconditional popping before any call that
forwards parameters.
frontend/src/pages/StreamPage.tsx (1)

519-545: ⚠️ Potential issue | 🟠 Major

Don't persist every unknown data-channel param into schemaFieldOverrides.

The fallback here captures any non-allowlisted key, but sendParameterUpdate() is also used for transient protocol fields like transition on Lines 631-633, prompt_interpolation_method on Line 1291, and output_sinks on Lines 1174-1176. Those values are later merged back into loadParams and initialParameters on Lines 1860-1866 and Lines 2036-2042, so a runtime-only message can leak into the next pipeline load/start as an unexpected schema override.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/StreamPage.tsx` around lines 519 - 545, The current
fallback in the params sync block is persisting every non-allowlisted param into
settingsUpdate.schemaFieldOverrides, which causes transient protocol-only fields
(e.g., transition, prompt_interpolation_method, output_sinks sent via
sendParameterUpdate) to leak into future loads; update the logic around
overrideUpdates creation to exclude a small denylist of known transient keys
(e.g., "transition", "prompt_interpolation_method", "output_sinks") or detect
runtime-only flags before copying into settingsUpdate.schemaFieldOverrides, so
only true schema overrides are merged; keep the rest of the merge behavior
intact (referencing settingsRef.current and settingsUpdate.schemaFieldOverrides)
and ensure transient keys are ignored when building overrideUpdates.
🧹 Nitpick comments (14)
docs/tempo-sync-review.md (3)

258-264: Optional: Add language tag to flow diagram.

Consistent with earlier flow diagrams, consider adding ```text to silence markdown linters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/tempo-sync-review.md` around lines 258 - 264, The flow diagram block
containing ModulationSection → modulations state → sendParameterUpdate(...) →
FrameProcessor.update_parameters() → ModulationEngine.update(raw_configs) →
ModulationEngine.apply(beat_state, call_params) → PipelineProcessor should be
fenced as a text code block to satisfy markdown linters; update the diagram to
use a triple-backtick fence with the language tag "text" (i.e., start with
```text and end with ```) so the existing symbols (ModulationSection,
sendParameterUpdate, FrameProcessor.update_parameters, ModulationEngine.update,
ModulationEngine.apply, PipelineProcessor) remain unchanged but the block is
properly annotated.

42-78: Optional: Add language tag to fenced code block.

The ASCII architecture diagram lacks a language specifier. Consider adding ```text to silence markdown linters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/tempo-sync-review.md` around lines 42 - 78, Add a language tag to the
fenced ASCII diagram block (the triple backticks that begin the diagram starting
with "┌─────────────────────────────────────────────────────────────┐" and
containing headers like "TEMPO SOURCES" and "TEMPOSYNC MANAGER"); change the
opening fence from ``` to ```text so markdown linters recognize it as plain text
and stop flagging the diagram.

239-243: Optional: Add language tags to flow diagram code blocks.

The flow diagrams lack language specifiers. Consider adding ```text to silence markdown linters and improve rendering consistency.

Also applies to: 247-251

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/tempo-sync-review.md` around lines 239 - 243, Update the flow diagram
code blocks that show "Ableton Link / MIDI device → TempoSource →
TempoSync.get_beat_state() → PipelineProcessor (inject + modulate) → Pipeline →
_notification_loop() → WebRTC data channel → useTempoSync" (and the similar
block at the later section) to include a language specifier by changing the
fenced code blocks from ``` to ```text so markdown linters render them
consistently; locate the fenced blocks containing those exact flow lines and
prepend "text" after the opening backticks for both occurrences.
frontend/src/components/DownloadDialog.tsx (1)

31-32: Unused props declared in interface.

The props isCloudMode and onOpenLoRAsSettings are added to the interface and destructured (via the comment on line 46), but they are never used within the component. If these are solely for type compatibility when spreading props from a parent, consider either:

  1. Actually using them in the component logic (e.g., conditionally rendering UI for cloud mode)
  2. Removing them from the interface if they're not needed here
  3. If intentionally ignored for forward compatibility, explicitly destructure and discard them: isCloudMode: _isCloudMode, onOpenLoRAsSettings: _onOpenLoRAsSettings

Also applies to: 46-47

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/DownloadDialog.tsx` around lines 31 - 32, The props
isCloudMode and onOpenLoRAsSettings declared for the DownloadDialog component
are never used; either remove them from the DownloadDialogProps interface and
from the component's destructuring, or explicitly destructure-and-discard them
to document intentional ignoring (e.g., isCloudMode: _isCloudMode,
onOpenLoRAsSettings: _onOpenLoRAsSettings) inside the DownloadDialog function to
satisfy type compatibility while avoiding unused variable warnings.
frontend/src/components/TempoSyncPanel.tsx (1)

11-32: Consider exporting the props interface.

TempoSyncPanelProps is not exported, which may limit reusability for consumers who want to type props passed to this component.

♻️ Suggested fix
```diff
-interface TempoSyncPanelProps {
+export interface TempoSyncPanelProps {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/TempoSyncPanel.tsx` around lines 11 - 32,
TempoSyncPanelProps is declared but not exported, preventing external modules
from importing the prop type; fix by adding an export to the interface
declaration (export interface TempoSyncPanelProps) in the TempoSyncPanel
component file and update any places that should import this type (e.g., other
components or tests) to import { TempoSyncPanelProps } from the component
module; ensure any existing default/export patterns in the file remain correct
after adding the export.
src/scope/server/schema.py (1)

857-869: Consider stronger typing for response fields.

source and beat_state are typed as dict | None, which provides flexibility but reduces type safety. If the structure is stable, consider defining nested Pydantic models.

This is acceptable as-is if the dict structure varies by source type or is expected to evolve.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/server/schema.py` around lines 857 - 869, TempoStatusResponse
currently types source and beat_state as dict | None which weakens type safety;
define two nested Pydantic models (e.g., SourceInfo and BeatState) that capture
the stable structure (fields like type, num_peers for source; bpm, beat_phase,
bar_position, beat_count, is_playing, source for beat_state) and replace the
dict annotations on TempoStatusResponse.source and
TempoStatusResponse.beat_state with Optional[SourceInfo] and Optional[BeatState]
respectively, updating any code that constructs or reads these fields to use the
new models.
frontend/src/hooks/useUnifiedWebRTC.ts (1)

167-203: Gate the per-message log before tempo_update goes live at 15Hz.

Line 168 now runs for every tempo packet, so an active stream will spam DevTools continuously. Please skip or sample logging for high-frequency message types.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/hooks/useUnifiedWebRTC.ts` around lines 167 - 203, The
per-message console.log in dataChannel.onmessage is currently printing every
"tempo_update" packet (handled where data.type === "tempo_update"), flooding
DevTools; modify the handler around the console.log("[UnifiedWebRTC] Data
channel message:", event.data) so it skips or samples logs for high-frequency
messages — e.g., only log when data.type !== "tempo_update" or apply a simple
sampling/throttle for "tempo_update" (log every Nth packet or once per second) —
leaving other branches (stream_stopped, change_scheduled, change_applied)
unchanged; update the logic in the function handling dataChannel.onmessage to
check data.type before logging.
src/scope/core/pipelines/metronome/pipeline.py (2)

175-181: Drawing beat number twice - first draw at (0,0) is unnecessary.

Line 175 draws the number at (0,0) just to calculate width, then line 181 draws it again at the centered position. This overwrites pixels unnecessarily.

♻️ Calculate width without drawing
```diff
-        num_w = _draw_number(canvas, beat_display, 0, 0, scale, text_color)
-        # Calculate position to center it
-        center_x = (W - num_w) // 2
+        # Calculate width: each digit is 3*scale wide with scale spacing
+        num_digits = len(str(beat_display))
+        char_w = 3 * scale + scale
+        num_w = num_digits * char_w
+        center_x = (W - num_w) // 2
         center_y = H // 3 + (H // 3 - 5 * scale) // 2

-        # Clear area first and redraw centered
         _draw_number(canvas, beat_display, center_x, center_y, scale, text_color)
```

Apply the same fix for bar_num_w at line 186.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/core/pipelines/metronome/pipeline.py` around lines 175 - 181,
Remove the unnecessary first draw used to measure width: replace the call that
invokes _draw_number(canvas, beat_display, 0, 0, scale, text_color) with a
non-drawing width calculation (e.g., call the same underlying measurement
routine or add a helper to measure text width) so you only compute num_w without
rendering, then use _draw_number only once at (center_x, center_y); do the same
for bar_num_w (avoid drawing at 0,0 to measure width and only draw once at its
centered position). Reference: _draw_number, beat_display, num_w, bar_num_w in
pipeline.py.

100-103: Blocking time.sleep in pipeline call may cause frame drops.

Using time.sleep blocks the entire pipeline thread. While intentional for testing latency compensation, this could cause frame drops in the input queue if latency is high. Consider documenting this clearly or using a non-blocking approach.

📝 Add comment clarifying the intentional blocking
```diff
         # Simulate pipeline latency
+        # NOTE: This is intentionally blocking to accurately simulate real
+        # pipeline latency for testing beat-sync lookahead compensation.
+        # High values will cause input queue backup, which is expected behavior.
         latency_ms = kwargs.get("latency_ms", 0.0)
         if latency_ms > 0:
             time.sleep(latency_ms / 1000.0)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/core/pipelines/metronome/pipeline.py` around lines 100 - 103, The
current pipeline code that simulates latency uses a blocking call via time.sleep
when reading latency_ms from kwargs, which can block the pipeline thread and
cause input-frame drops; update the latency simulation in the pipeline (the
latency_ms handling inside the pipeline function) to either (a) replace blocking
time.sleep with a non-blocking approach such as scheduling an asyncio.sleep or
deferring the delay to a worker/threadpool so the pipeline thread isn’t blocked,
or (b) if blocking is intentional for testing, add a clear inline comment and
docstring on the pipeline function explaining that latency_ms intentionally
blocks the pipeline thread and warning about potential frame drops so callers
know to avoid using high latency_ms in production. Ensure references to
latency_ms and the pipeline function/method name are preserved when changing the
implementation.
tests/test_parameter_scheduler.py (1)

359-376: Test relies on real timer and sleep, which can be flaky in CI.

The test waits 0.35s for a 0.25s timer, which should be sufficient, but timing-based tests can be unreliable under load. Consider using a larger tolerance or mocking threading.Timer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_parameter_scheduler.py` around lines 359 - 376, The test
test_scheduled_change_fires_after_delay is flaky because it relies on real
timers/sleep; modify the test to avoid real-time dependence by either increasing
the tolerance (e.g., sleep to 0.5s) or, preferably, mock the timer used by the
scheduler (mock threading.Timer or the scheduler's internal timer creation) so
schedule() triggers immediately in tests; locate references to make_scheduler,
FakeBeatState, and the scheduler.schedule(...) call and patch the Timer creation
used by that scheduler instance to a synchronous stub that calls the callback
immediately and assert applied/notifications as before.
src/scope/server/tempo_sources/link.py (1)

109-112: set_tempo silently does nothing when Link is not initialized.

Consider logging a warning or raising an error if set_tempo is called when _link is None to help debug misconfigured flows.

🔧 Optional: Add warning when Link is not available
```diff
     def set_tempo(self, bpm: float) -> None:
         """Set the Link session tempo (makes Scope the tempo leader)."""
         if self._link is not None:
             self._link.tempo = bpm
+        else:
+            logger.warning("Cannot set tempo: Link session not started")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/server/tempo_sources/link.py` around lines 109 - 112, set_tempo
currently returns silently when self._link is None which hides misconfiguration;
update set_tempo to detect when self._link is None and either log a warning or
raise an error. Inside the set_tempo method (referencing set_tempo and attribute
_link) add a branch that emits a clear warning (using the existing logger on the
class if available, otherwise the module logging) stating that tempo change was
requested but Link is not initialized, or raise a RuntimeError with that
message—choose logging for non-fatal flows and raise for fatal flows.
frontend/src/components/settings/TempoSyncSection.tsx (1)

270-298: BPM input lacks debouncing and allows invalid intermediate states.

While the validation on submit is good, the input allows typing values outside the 20-300 range. Consider adding onBlur validation to clamp the value when focus leaves the input.

🔧 Optional: clamp value on blur
-                onBlur={() => (bpmInputFocusedRef.current = false)}
+                onBlur={() => {
+                  bpmInputFocusedRef.current = false;
+                  const val = parseFloat(bpmInput);
+                  if (!isNaN(val)) {
+                    setBpmInput(String(Math.max(20, Math.min(300, Math.round(val)))));
+                  }
+                }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/settings/TempoSyncSection.tsx` around lines 270 -
298, The BPM input currently only validates on submit and allows invalid
intermediate values; update the input's onBlur handler (currently setting
bpmInputFocusedRef) to parse and clamp bpmInput into the 20–300 range, call
setBpmInput with the clamped string, and invoke onSetBpm(clampedValue) so the
final value is normalized; keep onChange as-is to allow free typing, and
optionally add a short debounce around any future calls to onSetBpm if you want
live updates instead of only on blur/submit (referencing bpmInput, setBpmInput,
bpmInputFocusedRef, and onSetBpm).
frontend/src/components/settings/ModulationSection.tsx (2)

114-119: Potential issue: selectedTarget state sync may flicker on pipeline switch.

When targets change and the current selectedTarget is invalid, the effect updates to targets[0].value. However, the component renders before this effect runs, potentially showing stale UI briefly. This is a minor UX issue.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/settings/ModulationSection.tsx` around lines 114 -
119, The current useEffect in ModulationSection (useEffect, targets,
selectedTarget, setSelectedTarget) causes a brief render with a stale
selectedTarget when targets change; to avoid flicker, derive an
effectiveSelectedTarget synchronously during render (e.g., compute a fallback
like targets[0].value when selectedTarget is missing) and use that derived value
in the JSX, or setSelectedTarget synchronously at the same place you update
targets/pipeline (the handler that changes pipeline/targets) so the state is
correct before the next render; update references in the component to use the
derived/synchronously-updated selected value instead of relying solely on the
effect.

137-149: handleToggle deletes config on disable, losing user settings.

When modulation is toggled off, the config for that target is deleted. If the user toggles it back on, they get defaults instead of their previous settings. Consider keeping disabled configs with enabled: false instead.

♻️ Preserve config on disable
   const handleToggle = useCallback(
     (pressed: boolean) => {
       if (pressed && currentTarget) {
-        const cfg = defaultConfigFor(currentTarget);
+        const existing = modulations[selectedTarget];
+        const cfg = existing 
+          ? { ...existing, enabled: true }
+          : defaultConfigFor(currentTarget);
         onModulationsChange({ ...modulations, [selectedTarget]: cfg });
       } else {
-        const next = { ...modulations };
-        delete next[selectedTarget];
-        onModulationsChange(next);
+        const existing = modulations[selectedTarget];
+        if (existing) {
+          onModulationsChange({ 
+            ...modulations, 
+            [selectedTarget]: { ...existing, enabled: false } 
+          });
+        }
       }
     },
     [selectedTarget, currentTarget, modulations, onModulationsChange]
   );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/settings/ModulationSection.tsx` around lines 137 -
149, handleToggle currently deletes the target config when disabling, losing
user settings; change it to preserve the config object and only toggle an
enabled flag. Specifically, in handleToggle use the existing config from
modulations[selectedTarget] (falling back to defaultConfigFor(currentTarget))
when pressed is true and set enabled: true, and when pressed is false set
modulations[selectedTarget] = { ...(existingOrDefault), enabled: false } instead
of deleting the entry; continue to call onModulationsChange with the updated
modulations object. Ensure you reference handleToggle, defaultConfigFor,
onModulationsChange, modulations, selectedTarget, and currentTarget when making
the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@frontend/src/lib/api.ts`:
- Around line 1050-1055: The TempoEnableRequest type currently makes bpm and
beats_per_bar optional which allows invalid calls to compile; change it to a
discriminated union that matches the server contract: define TempoEnableRequest
as a union of { source: "link"; bpm: number; beats_per_bar: number } and {
source: "midi_clock"; midi_device: string } (or the exact server-required
shapes) so callers must provide the correct fields depending on source; update
any usages of TempoEnableRequest accordingly.
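
🔧 A minimal TypeScript sketch of the suggested discriminated union (field names are taken from the review comment and should be confirmed against the actual server contract):

```typescript
// Hypothetical shapes; verify the exact required fields against the server.
type TempoEnableRequest =
  | { source: "link"; bpm: number; beats_per_bar: number }
  | { source: "midi_clock"; midi_device: string };

// Narrowing on the `source` tag forces callers to supply the fields the
// chosen source requires, at compile time rather than at runtime.
function describeRequest(req: TempoEnableRequest): string {
  if (req.source === "link") {
    return `link @ ${req.bpm} BPM, ${req.beats_per_bar} beats/bar`;
  }
  return `midi_clock on ${req.midi_device}`;
}
```

With this union, a call like `{ source: "link", midi_device: "..." }` no longer compiles, which is the point of the change.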

In `@frontend/src/pages/StreamPage.tsx`:
- Around line 1594-1637: The prompt-cycling effect is being double-quantized
because it sends prompt updates via sendParameterUpdate which the quantize logic
will push to the next boundary; change this effect to send the new prompt
immediately by either calling a new helper (e.g., sendParameterUpdateImmediate)
or extending sendParameterUpdate to accept a flag (e.g., { skipQuantize: true })
and set that flag when updating prompts from promptCycleIndexRef/current items;
update the quantize branch in the sendParameterUpdate implementation to honor
the flag so promptCycleBoundaryRef/Index logic produces the prompt on the
current boundary rather than being deferred.
- Around line 473-494: The quantize-detection logic in the isQuantizeActive
block (variables NEVER_QUANTIZE, hasDiscrete, params, _quantized) wrongly infers
discreteness by typeof and misses non-primitive discrete payloads like
transition and denoising_step_list; change hasDiscrete to use a key-based
allowlist instead: create a DISCRETE_PARAM_KEYS set (include prompts,
reset_cache, transition, denoising_step_list, and any other known composite
discrete keys), skip keys in NEVER_QUANTIZE, and mark params._quantized = true
when any key is in DISCRETE_PARAM_KEYS or matches existing boolean/string checks
for backward compatibility. Ensure you reference the same symbols
(isQuantizeActive, NEVER_QUANTIZE, hasDiscrete, params, _quantized) when making
the change.

In `@src/scope/core/pipelines/mod_scope/pipeline.py`:
- Around line 77-79: BEATS_PER_BAR is hardcoded to 4 causing incorrect beat
highlighting; make beats per bar dynamic by replacing the constant with a
configurable variable (e.g., beats_per_bar) sourced from pipeline settings or
the time signature, update any code that computes or wraps bar_position (the
logic currently at the block referenced by lines handling bar_position wrapping
around BEATS_PER_BAR) to use beats_per_bar instead of BEATS_PER_BAR, and ensure
the rendering logic that draws the beat dots uses beats_per_bar to determine
count and placement so 3/4, 5/4, etc. render correctly.
- Around line 105-109: The glow rendering loop is performing per-pixel
host/device syncs by calling Tensor.item() (search for the glow loop that reads
tensor values with .item() in this class), which kills GPU performance; update
the glow accumulation to keep everything on-tensor: remove all .item() usages in
the glow loop, replace Python scalar operations with torch tensor ops (e.g., use
torch.max, torch.clamp, tensor indexing/assignment and tensor-wise arithmetic)
and ensure results are written back as tensors on self.device (the pipeline's
device is set via self.device). Locate the glow loop in this module (the section
that reads tensor values per-sample/pixel and writes glow outputs) and refactor
it to compute glow intensities and color blending entirely with PyTorch tensor
operations without converting to Python scalars.

In `@src/scope/core/pipelines/mod_scope/schema.py`:
- Around line 25-27: The three demo parameters noise_scale, vace_context_scale,
and kv_cache_attention_bias are plain floats so the generated schema lacks the
ui.modulatable metadata; update each field declaration to include the UI
metadata (e.g., use Annotated/Field or the project's Field helper) so they carry
ui.modulatable=True in the schema generation step — change the noise_scale,
vace_context_scale, and kv_cache_attention_bias declarations to use the
project's schema field pattern that attaches ui.modulatable metadata rather than
raw float types.

In `@src/scope/server/frame_processor.py`:
- Around line 58-76: The ParameterScheduler creates live timers but
FrameProcessor.stop() does not cancel them, allowing scheduled callbacks (e.g.
update_parameters / change_applied) to run after shutdown; modify
FrameProcessor.stop() to explicitly cancel/stop any active schedulers and
engines (call the cancellation/cleanup API on ParameterScheduler instance
referenced by self.parameter_scheduler and also on self.modulation_engine if
applicable) before tearing down pipeline or clearing processors so no pending
timers can fire; ensure you use the scheduler/engine methods (e.g.
ParameterScheduler.cancel() or .stop(), and ModulationEngine.stop()/shutdown())
that exist on those classes to cleanly terminate pending work and prevent late
callbacks.

In `@src/scope/server/modulation.py`:
- Around line 130-146: The descending-repair loop can push values outside the
configured bounds (lo/hi); after the existing backward pass that mutates result
(the for loop that checks result[i] <= result[i+1] and increments earlier
entries), re-clamp all entries of result into [lo, hi] (i.e., for each value set
result[i] = max(lo, min(hi, result[i]))), or alternatively perform the clamp
inside the loop after adjusting each element; reference the variables/result
list and the existing backward-loop in modulation.py to locate the change.
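
🔧 A standalone sketch of the fix (helper name hypothetical — the real change edits the backward loop in modulation.py in place):

```python
def repair_descending(values, lo, hi):
    """Force a strictly descending sequence, then clamp into [lo, hi].

    The backward repair pass can push early entries above `hi`, so the
    result is re-clamped afterwards, as the review suggests.
    """
    result = list(values)
    # Backward pass: make each entry strictly greater than its successor.
    for i in range(len(result) - 2, -1, -1):
        if result[i] <= result[i + 1]:
            result[i] = result[i + 1] + 1
    # Re-clamp: the repair must not escape the configured bounds.
    return [max(lo, min(hi, v)) for v in result]
```

Note the tradeoff the review alludes to: clamping after the pass can reintroduce ties at the `hi` boundary, so clamping inside the loop is the alternative if strict descent must be preserved.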

In `@src/scope/server/parameter_scheduler.py`:
- Around line 52-68: When changing quantize settings, cancel or recompute any
pending scheduled timer so stale pending updates don't fire: inside the
quantize_mode setter (quantize_mode) and the lookahead_ms setter (lookahead_ms)
ensure you clear any existing pending timer (e.g., cancel_timer(),
self._pending_timer, or similar scheduler handle used elsewhere) before updating
self._quantize_mode or self._lookahead_ms, and if needed reschedule according to
the new mode/lookahead; also log that the pending timer was canceled or
rescheduled to aid debugging.

In `@src/scope/server/tempo_sources/link.py`:
- Around line 14-16: The unconditional import of Link will raise if aalink is
missing and prevents graceful degradation; change the top-level import to a
guarded try/except (e.g., try: from aalink import Link except Exception: Link =
None) so importing this module never raises, and ensure any code that expects
Link (e.g., the LinkTempoSource class or instantiation in tempo_sync.py) checks
for Link is not None before use.
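
🔧 The guarded-import pattern, sketched as a reusable helper (the actual fix can simply inline the try/except around `from aalink import Link`):

```python
def import_optional(module_name, attr):
    """Return the named attribute from a module, or None when the module
    (or anything it imports at load time) is unavailable."""
    try:
        module = __import__(module_name)
        return getattr(module, attr, None)
    except Exception:
        return None

# Replaces the unconditional `from aalink import Link` at module top level;
# callers must then check `Link is not None` before instantiating.
Link = import_optional("aalink", "Link")
```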

In `@src/scope/server/tempo_sources/midi_clock.py`:
- Around line 137-151: The transport handlers (_on_start, _on_stop,
_on_continue) currently only flip self._is_playing and leave the cached beat
snapshot stale so get_beat_state() returns old data; update these handlers to
refresh/invalidate the cached beat snapshot when the transport state changes
(e.g., set self._cached_beat_snapshot = None or call an existing
self._update_beat_snapshot()/self._invalidate_beat_snapshot() helper) so that
subsequent calls to get_beat_state() reflect the new playing/stopped/continued
state immediately.

In `@src/scope/server/tempo_sync.py`:
- Around line 161-169: The enable() method currently returns silently if source
creation fails; instead detect when _create_link_source(...) or
_create_midi_clock_source(midi_device, beats_per_bar) returns None and raise a
RuntimeError with a clear message (including the requested source_type and any
install/device hint) so callers can surface a 4xx; update the code in enable()
to replace the `if source is None: return` branch with raising RuntimeError and
ensure the exception includes sufficient context for the REST layer to show the
install hint.
- Around line 94-112: get_beat_state currently ignores the _enabled flag and
disable() doesn't clear the cached _client_beat_state, so fresh client-forwarded
beats remain available after tempo sync is disabled; fix by early-returning None
from get_beat_state when self._enabled is False (before reading
_client_beat_state) and, in disable(), acquire self._client_state_lock and set
self._client_beat_state = None (and keep existing logic clearing/locking
_source/_source_lock); apply the same gating and cache-clear change to the
analogous method around lines 191-203 as well.
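
🔧 A minimal sketch of the gating and cache-clear behavior (class and attribute names mirror the review comment but are hypothetical stand-ins for the real TempoSync):

```python
import threading


class TempoSyncSketch:
    """Illustrates gating get_beat_state() on the enabled flag and
    clearing cached client beat state on disable()."""

    def __init__(self):
        self._enabled = False
        self._client_state_lock = threading.Lock()
        self._client_beat_state = None

    def update_client_beat_state(self, state):
        with self._client_state_lock:
            self._client_beat_state = state

    def get_beat_state(self):
        if not self._enabled:
            return None  # gate before reading the cache
        with self._client_state_lock:
            return self._client_beat_state

    def disable(self):
        self._enabled = False
        with self._client_state_lock:
            self._client_beat_state = None  # drop stale client-forwarded beats
```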

---

Outside diff comments:
In `@frontend/src/pages/StreamPage.tsx`:
- Around line 519-545: The current fallback in the params sync block is
persisting every non-allowlisted param into settingsUpdate.schemaFieldOverrides,
which causes transient protocol-only fields (e.g., transition,
prompt_interpolation_method, output_sinks sent via sendParameterUpdate) to leak
into future loads; update the logic around overrideUpdates creation to exclude a
small denylist of known transient keys (e.g., "transition",
"prompt_interpolation_method", "output_sinks") or detect runtime-only flags
before copying into settingsUpdate.schemaFieldOverrides, so only true schema
overrides are merged; keep the rest of the merge behavior intact (referencing
settingsRef.current and settingsUpdate.schemaFieldOverrides) and ensure
transient keys are ignored when building overrideUpdates.

In `@src/scope/server/frame_processor.py`:
- Around line 628-660: The FrameProcessor currently only removes tempo-control
keys when helpers exist, causing stale control keys to be forwarded when helpers
are absent; always strip these keys from the incoming parameters before any
pipeline forwarding. Specifically: ensure "quantize_mode" and "lookahead_ms" are
popped from parameters even if self.parameter_scheduler is None; pop
"modulations" from parameters even if self.modulation_engine is None (and call
self.modulation_engine.update(...) only when present); pop
"beat_cache_reset_rate" regardless (and only apply it to
self.pipeline_processors when present); and when self.tempo_sync is None remove
any client beat-state fields (the same keys handled by
tempo_sync.update_client_beat_state) so they are not forwarded to
processor.update_parameters or retained on self.parameters. Use the existing
symbols (parameter_scheduler, modulation_engine, tempo_sync,
pipeline_processors, _update_output_sinks_from_config, _update_input_source) to
locate the logic and implement the unconditional popping before any call that
forwards parameters.

---

Nitpick comments:
In `@docs/tempo-sync-review.md`:
- Around line 258-264: The flow diagram block containing ModulationSection →
modulations state → sendParameterUpdate(...) →
FrameProcessor.update_parameters() → ModulationEngine.update(raw_configs) →
ModulationEngine.apply(beat_state, call_params) → PipelineProcessor should be
fenced as a text code block to satisfy markdown linters; update the diagram to
use a triple-backtick fence with the language tag "text" (i.e., start with
```text and end with ```) so the existing symbols (ModulationSection,
sendParameterUpdate, FrameProcessor.update_parameters, ModulationEngine.update,
ModulationEngine.apply, PipelineProcessor) remain unchanged but the block is
properly annotated.
- Around line 42-78: Add a language tag to the fenced ASCII diagram block (the
triple backticks that begin the diagram starting with
"┌─────────────────────────────────────────────────────────────┐" and containing
headers like "TEMPO SOURCES" and "TEMPOSYNC MANAGER"); change the opening fence
from ``` to ```text so markdown linters recognize it as plain text and stop
flagging the diagram.
- Around line 239-243: Update the flow diagram code blocks that show "Ableton
Link / MIDI device → TempoSource → TempoSync.get_beat_state() →
PipelineProcessor (inject + modulate) → Pipeline → _notification_loop() → WebRTC
data channel → useTempoSync" (and the similar block at the later section) to
include a language specifier by changing the fenced code blocks from ``` to
```text so markdown linters render them consistently; locate the fenced blocks
containing those exact flow lines and prepend "text" after the opening backticks
for both occurrences.

In `@frontend/src/components/DownloadDialog.tsx`:
- Around line 31-32: The props isCloudMode and onOpenLoRAsSettings declared for
the DownloadDialog component are never used; either remove them from the
DownloadDialogProps interface and from the component's destructuring, or
explicitly destructure-and-discard them to document intentional ignoring (e.g.,
isCloudMode: _isCloudMode, onOpenLoRAsSettings: _onOpenLoRAsSettings) inside the
DownloadDialog function to satisfy type compatibility while avoiding unused
variable warnings.

In `@frontend/src/components/settings/ModulationSection.tsx`:
- Around line 114-119: The current useEffect in ModulationSection (useEffect,
targets, selectedTarget, setSelectedTarget) causes a brief render with a stale
selectedTarget when targets change; to avoid flicker, derive an
effectiveSelectedTarget synchronously during render (e.g., compute a fallback
like targets[0].value when selectedTarget is missing) and use that derived value
in the JSX, or setSelectedTarget synchronously at the same place you update
targets/pipeline (the handler that changes pipeline/targets) so the state is
correct before the next render; update references in the component to use the
derived/synchronously-updated selected value instead of relying solely on the
effect.
- Around line 137-149: handleToggle currently deletes the target config when
disabling, losing user settings; change it to preserve the config object and
only toggle an enabled flag. Specifically, in handleToggle use the existing
config from modulations[selectedTarget] (falling back to
defaultConfigFor(currentTarget)) when pressed is true and set enabled: true, and
when pressed is false set modulations[selectedTarget] = {
...(existingOrDefault), enabled: false } instead of deleting the entry; continue
to call onModulationsChange with the updated modulations object. Ensure you
reference handleToggle, defaultConfigFor, onModulationsChange, modulations,
selectedTarget, and currentTarget when making the change.

In `@frontend/src/components/settings/TempoSyncSection.tsx`:
- Around line 270-298: The BPM input currently only validates on submit and
allows invalid intermediate values; update the input's onBlur handler (currently
setting bpmInputFocusedRef) to parse and clamp bpmInput into the 20–300 range,
call setBpmInput with the clamped string, and invoke onSetBpm(clampedValue) so
the final value is normalized; keep onChange as-is to allow free typing, and
optionally add a short debounce around any future calls to onSetBpm if you want
live updates instead of only on blur/submit (referencing bpmInput, setBpmInput,
bpmInputFocusedRef, and onSetBpm).

In `@frontend/src/components/TempoSyncPanel.tsx`:
- Around line 11-32: TempoSyncPanelProps is declared but not exported,
preventing external modules from importing the prop type; fix by adding an
export to the interface declaration (export interface TempoSyncPanelProps) in
the TempoSyncPanel component file and update any places that should import this
type (e.g., other components or tests) to import { TempoSyncPanelProps } from
the component module; ensure any existing default/export patterns in the file
remain correct after adding the export.

In `@frontend/src/hooks/useUnifiedWebRTC.ts`:
- Around line 167-203: The per-message console.log in dataChannel.onmessage is
currently printing every "tempo_update" packet (handled where data.type ===
"tempo_update"), flooding DevTools; modify the handler around the
console.log("[UnifiedWebRTC] Data channel message:", event.data) so it skips or
samples logs for high-frequency messages — e.g., only log when data.type !==
"tempo_update" or apply a simple sampling/throttle for "tempo_update" (log every
Nth packet or once per second) — leaving other branches (stream_stopped,
change_scheduled, change_applied) unchanged; update the logic in the function
handling dataChannel.onmessage to check data.type before logging.
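
🔧 One way to sample the logging, sketched as a standalone helper (names hypothetical; the real change lives inside the `dataChannel.onmessage` handler):

```typescript
// Log high-frequency message types at most once per interval;
// everything else logs every time.
const lastLog: Record<string, number> = {};

function shouldLog(type: string, now: number, intervalMs = 1000): boolean {
  if (type !== "tempo_update") return true;
  const last = lastLog[type] ?? -Infinity;
  if (now - last >= intervalMs) {
    lastLog[type] = now;
    return true;
  }
  return false;
}
```

In the handler, `if (shouldLog(data.type, Date.now())) console.log(...)` keeps stream_stopped / change_scheduled / change_applied logging unchanged while throttling tempo_update.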

In `@src/scope/core/pipelines/metronome/pipeline.py`:
- Around line 175-181: Remove the unnecessary first draw used to measure width:
replace the call that invokes _draw_number(canvas, beat_display, 0, 0, scale,
text_color) with a non-drawing width calculation (e.g., call the same underlying
measurement routine or add a helper to measure text width) so you only compute
num_w without rendering, then use _draw_number only once at (center_x,
center_y); do the same for bar_num_w (avoid drawing at 0,0 to measure width and
only draw once at its centered position). Reference: _draw_number, beat_display,
num_w, bar_num_w in pipeline.py.
- Around line 100-103: The current pipeline code that simulates latency uses a
blocking call via time.sleep when reading latency_ms from kwargs, which can
block the pipeline thread and cause input-frame drops; update the latency
simulation in the pipeline (the latency_ms handling inside the pipeline
function) to either (a) replace blocking time.sleep with a non-blocking approach
such as scheduling an asyncio.sleep or deferring the delay to a
worker/threadpool so the pipeline thread isn’t blocked, or (b) if blocking is
intentional for testing, add a clear inline comment and docstring on the
pipeline function explaining that latency_ms intentionally blocks the pipeline
thread and warning about potential frame drops so callers know to avoid using
high latency_ms in production. Ensure references to latency_ms and the pipeline
function/method name are preserved when changing the implementation.

In `@src/scope/server/schema.py`:
- Around line 857-869: TempoStatusResponse currently types source and beat_state
as dict | None which weakens type safety; define two nested Pydantic models
(e.g., SourceInfo and BeatState) that capture the stable structure (fields like
type, num_peers for source; bpm, beat_phase, bar_position, beat_count,
is_playing, source for beat_state) and replace the dict annotations on
TempoStatusResponse.source and TempoStatusResponse.beat_state with
Optional[SourceInfo] and Optional[BeatState] respectively, updating any code
that constructs or reads these fields to use the new models.
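
🔧 The suggested shape, sketched with stdlib dataclasses for brevity (the real models would subclass Pydantic `BaseModel`; field names are inferred from the review comment, not verified):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SourceInfo:
    type: str
    num_peers: int


@dataclass
class BeatState:
    bpm: float
    beat_phase: float
    bar_position: float
    beat_count: int
    is_playing: bool
    source: str


@dataclass
class TempoStatusResponse:
    enabled: bool
    source: Optional[SourceInfo] = None
    beat_state: Optional[BeatState] = None
```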

In `@src/scope/server/tempo_sources/link.py`:
- Around line 109-112: set_tempo currently returns silently when self._link is
None which hides misconfiguration; update set_tempo to detect when self._link is
None and either log a warning or raise an error. Inside the set_tempo method
(referencing set_tempo and attribute _link) add a branch that emits a clear
warning (using the existing logger on the class if available, otherwise the
module logging) stating that tempo change was requested but Link is not
initialized, or raise a RuntimeError with that message—choose logging for
non-fatal flows and raise for fatal flows.

In `@tests/test_parameter_scheduler.py`:
- Around line 359-376: The test test_scheduled_change_fires_after_delay is flaky
because it relies on real timers/sleep; modify the test to avoid real-time
dependence by either increasing the tolerance (e.g., sleep to 0.5s) or,
preferably, mock the timer used by the scheduler (mock threading.Timer or the
scheduler's internal timer creation) so schedule() triggers immediately in
tests; locate references to make_scheduler, FakeBeatState, and the
scheduler.schedule(...) call and patch the Timer creation used by that scheduler
instance to a synchronous stub that calls the callback immediately and assert
applied/notifications as before.
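
🔧 A synchronous Timer stub of the kind the prompt describes, usable with `unittest.mock.patch` (stub name hypothetical):

```python
class ImmediateTimer:
    """Drop-in stand-in for threading.Timer that fires the callback
    synchronously on start(), removing real-time dependence from tests."""

    def __init__(self, interval, function, args=None, kwargs=None):
        self.function = function
        self.args = args or ()
        self.kwargs = kwargs or {}

    def start(self):
        self.function(*self.args, **self.kwargs)

    def cancel(self):
        pass  # nothing pending; callback already ran


# In the test:
#   with unittest.mock.patch("threading.Timer", ImmediateTimer):
#       scheduler.schedule(...)  # callback fires before schedule() returns
```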

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c28c4a4a-a47c-44b6-853c-8f61afebbac5

📥 Commits

Reviewing files that changed from the base of the PR and between 5f6ee61 and 83ca5e6.

⛔ Files ignored due to path filters (2)
  • frontend/package-lock.json is excluded by !**/package-lock.json
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (42)
  • docs/tempo-sync-review.md
  • frontend/src/components/DownloadDialog.tsx
  • frontend/src/components/TempoSyncPanel.tsx
  • frontend/src/components/WorkflowImportDialog.tsx
  • frontend/src/components/settings/ModulationSection.tsx
  • frontend/src/components/settings/OscTab.tsx
  • frontend/src/components/settings/TempoSyncSection.tsx
  • frontend/src/hooks/useTempoSync.ts
  • frontend/src/hooks/useUnifiedWebRTC.ts
  • frontend/src/lib/api.ts
  • frontend/src/pages/StreamPage.tsx
  • frontend/src/types/index.ts
  • pyproject.toml
  • src/scope/core/pipelines/base_schema.py
  • src/scope/core/pipelines/defaults.py
  • src/scope/core/pipelines/krea_realtime_video/schema.py
  • src/scope/core/pipelines/longlive/schema.py
  • src/scope/core/pipelines/memflow/schema.py
  • src/scope/core/pipelines/metronome/__init__.py
  • src/scope/core/pipelines/metronome/pipeline.py
  • src/scope/core/pipelines/metronome/schema.py
  • src/scope/core/pipelines/mod_scope/__init__.py
  • src/scope/core/pipelines/mod_scope/pipeline.py
  • src/scope/core/pipelines/mod_scope/schema.py
  • src/scope/core/pipelines/registry.py
  • src/scope/core/pipelines/reward_forcing/schema.py
  • src/scope/core/pipelines/streamdiffusionv2/schema.py
  • src/scope/core/pipelines/wan2_1/blocks/noise_scale_controller.py
  • src/scope/core/pipelines/wan2_1/blocks/set_timesteps.py
  • src/scope/server/app.py
  • src/scope/server/frame_processor.py
  • src/scope/server/modulation.py
  • src/scope/server/parameter_scheduler.py
  • src/scope/server/pipeline_processor.py
  • src/scope/server/schema.py
  • src/scope/server/tempo_sources/__init__.py
  • src/scope/server/tempo_sources/link.py
  • src/scope/server/tempo_sources/midi_clock.py
  • src/scope/server/tempo_sync.py
  • src/scope/server/tracks.py
  • src/scope/server/webrtc.py
  • tests/test_parameter_scheduler.py

Comment on lines +473 to +494
// Auto-flag discrete params for beat-quantized gating
if (isQuantizeActive) {
  const NEVER_QUANTIZE = new Set([
    "paused",
    "quantize_mode",
    "lookahead_ms",
    "_quantized",
    "prompt_interpolation_method",
  ]);
  const hasDiscrete = Object.entries(params).some(([key, value]) => {
    if (NEVER_QUANTIZE.has(key)) return false;
    return (
      typeof value === "boolean" ||
      typeof value === "string" ||
      key === "prompts" ||
      key === "reset_cache"
    );
  });
  if (hasDiscrete) {
    params = { ...params, _quantized: true };
  }
}

⚠️ Potential issue | 🟠 Major

Quantize detection misses non-primitive discrete updates.

hasDiscrete only treats booleans, strings, prompts, and reset_cache as quantizable. That skips object/array payloads like transition on Lines 631-633 and denoising_step_list on Lines 921-923, so those changes still apply immediately even when beat/bar quantization is enabled. Use a key-based allowlist for discrete runtime params instead of typeof heuristics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/StreamPage.tsx` around lines 473 - 494, The
quantize-detection logic in the isQuantizeActive block (variables
NEVER_QUANTIZE, hasDiscrete, params, _quantized) wrongly infers discreteness by
typeof and misses non-primitive discrete payloads like transition and
denoising_step_list; change hasDiscrete to use a key-based allowlist instead:
create a DISCRETE_PARAM_KEYS set (include prompts, reset_cache, transition,
denoising_step_list, and any other known composite discrete keys), skip keys in
NEVER_QUANTIZE, and mark params._quantized = true when any key is in
DISCRETE_PARAM_KEYS or matches existing boolean/string checks for backward
compatibility. Ensure you reference the same symbols (isQuantizeActive,
NEVER_QUANTIZE, hasDiscrete, params, _quantized) when making the change.
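
🔧 A standalone sketch of the key-based allowlist (the exact set of composite discrete keys is an assumption from this review and should be checked against the runtime params actually sent):

```typescript
const NEVER_QUANTIZE = new Set([
  "paused",
  "quantize_mode",
  "lookahead_ms",
  "_quantized",
  "prompt_interpolation_method",
]);

// Known discrete params, including non-primitive payloads the typeof
// heuristic misses (transition, denoising_step_list).
const DISCRETE_PARAM_KEYS = new Set([
  "prompts",
  "reset_cache",
  "transition",
  "denoising_step_list",
]);

function hasDiscrete(params: Record<string, unknown>): boolean {
  return Object.entries(params).some(([key, value]) => {
    if (NEVER_QUANTIZE.has(key)) return false;
    return (
      DISCRETE_PARAM_KEYS.has(key) ||
      // keep boolean/string detection for backward compatibility
      typeof value === "boolean" ||
      typeof value === "string"
    );
  });
}
```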

Comment on lines +1594 to +1637
useEffect(() => {
  const rate = settings.promptCycleRate || "none";
  const items = promptCycleItemsRef.current;
  if (
    rate === "none" ||
    !isStreaming ||
    !tempoState.enabled ||
    items.length < 2
  ) {
    promptCycleBoundaryRef.current = -1;
    return;
  }

  const { beatCount, beatsPerBar } = tempoState;
  let boundary: number;
  if (rate === "beat") boundary = beatCount;
  else if (rate === "bar") boundary = Math.floor(beatCount / beatsPerBar);
  else if (rate === "2_bar")
    boundary = Math.floor(beatCount / (beatsPerBar * 2));
  else if (rate === "4_bar")
    boundary = Math.floor(beatCount / (beatsPerBar * 4));
  else return;

  if (
    boundary !== promptCycleBoundaryRef.current &&
    promptCycleBoundaryRef.current >= 0
  ) {
    promptCycleIndexRef.current =
      (promptCycleIndexRef.current + 1) % items.length;
    const active = items[promptCycleIndexRef.current];

    sendParameterUpdate({
      prompts: [{ text: active.text, weight: 100 }],
    });
  }
  promptCycleBoundaryRef.current = boundary;
}, [
  settings.promptCycleRate,
  tempoState.beatCount,
  tempoState.beatsPerBar,
  tempoState.enabled,
  isStreaming,
  sendParameterUpdate,
]);

⚠️ Potential issue | 🟠 Major

Prompt cycling is double-quantized when quantize mode is active.

This effect already waits for a beat/bar boundary before sending the next prompt. Routing that payload through sendParameterUpdate() causes the prompts update to be marked _quantized by the logic on Lines 473-494, so beat/bar scheduling pushes it to the next boundary instead of the one that just fired. With promptCycleRate="beat" and quantizeMode="beat", the active prompt will lag by one beat.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/StreamPage.tsx` around lines 1594 - 1637, The
prompt-cycling effect is being double-quantized because it sends prompt updates
via sendParameterUpdate which the quantize logic will push to the next boundary;
change this effect to send the new prompt immediately by either calling a new
helper (e.g., sendParameterUpdateImmediate) or extending sendParameterUpdate to
accept a flag (e.g., { skipQuantize: true }) and set that flag when updating
prompts from promptCycleIndexRef/current items; update the quantize branch in
the sendParameterUpdate implementation to honor the flag so
promptCycleBoundaryRef/Index logic produces the prompt on the current boundary
rather than being deferred.
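
🔧 The escape-hatch flag, sketched as a standalone helper (names hypothetical; the real change would thread the flag through sendParameterUpdate's quantize branch):

```typescript
type UpdateOpts = { skipQuantize?: boolean };

// Pre-quantized callers (like the prompt-cycling effect, which already
// fires on a beat boundary) pass skipQuantize to avoid a second deferral.
function markQuantized(
  params: Record<string, unknown>,
  quantizeActive: boolean,
  opts: UpdateOpts = {}
): Record<string, unknown> {
  if (!quantizeActive || opts.skipQuantize) return params;
  return { ...params, _quantized: true };
}
```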

Comment on lines +77 to +79
TRACE_LENGTH = 192
BEATS_PER_BAR = 4


⚠️ Potential issue | 🟡 Minor

The beat indicator is locked to 4/4.

BEATS_PER_BAR is hardcoded to 4, and Lines 326-331 wrap bar_position against that constant. Once the user selects 3/4, 5/4, etc., this oscilloscope will highlight the wrong beat and draw the wrong number of dots.

Also applies to: 326-331

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/core/pipelines/mod_scope/pipeline.py` around lines 77 - 79,
BEATS_PER_BAR is hardcoded to 4 causing incorrect beat highlighting; make beats
per bar dynamic by replacing the constant with a configurable variable (e.g.,
beats_per_bar) sourced from pipeline settings or the time signature, update any
code that computes or wraps bar_position (the logic currently at the block
referenced by lines handling bar_position wrapping around BEATS_PER_BAR) to use
beats_per_bar instead of BEATS_PER_BAR, and ensure the rendering logic that
draws the beat dots uses beats_per_bar to determine count and placement so 3/4,
5/4, etc. render correctly.
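The suggested fix can be sketched as a configurable wrap; `bar_position` and the `beats_per_bar` parameter are illustrative names, assuming the value is sourced from pipeline settings rather than a hardcoded constant:

```python
# Sketch: wrap the running beat count against a configurable time signature
# instead of a hardcoded 4/4 constant.
def bar_position(beat_count: int, beats_per_bar: int = 4) -> int:
    """Zero-based position of the current beat within its bar."""
    if beats_per_bar < 1:
        raise ValueError("beats_per_bar must be >= 1")
    return beat_count % beats_per_bar
```

The rendering code would then draw `beats_per_bar` dots and highlight index `bar_position(...)`, so 3/4 or 5/4 signatures render correctly.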

Comment on lines +105 to +109
self.device = (
device
if device is not None
else torch.device("cuda" if torch.cuda.is_available() else "cpu")
)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed against daydreamlive/scope:

find . -name "pipeline.py" -path "*/mod_scope/*" | head -5
wc -l src/scope/core/pipelines/mod_scope/pipeline.py
sed -n '100,115p' src/scope/core/pipelines/mod_scope/pipeline.py
sed -n '245,265p' src/scope/core/pipelines/mod_scope/pipeline.py
sed -n '70,85p' src/scope/core/pipelines/mod_scope/pipeline.py
sed -n '230,275p' src/scope/core/pipelines/mod_scope/pipeline.py
sed -n '215,235p' src/scope/core/pipelines/mod_scope/pipeline.py


Avoid Tensor.item() inside the glow rendering loop—it forces device synchronization on every pixel.

The pipeline defaults to CUDA (lines 105-109). With three traced parameters and up to 192 samples per trace, the .item() calls in the glow loop (lines 253-260) trigger a host/device sync for each of the roughly 3,456 glow pixels per frame (3 traces × 192 samples × 2 glow offsets × 3 color channels). This undermines real-time performance. Refactor to use tensor-only operations (e.g., torch.max() instead of Python's max() with scalar readbacks), or write the glow values directly without the intermediate .item() conversion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/core/pipelines/mod_scope/pipeline.py` around lines 105 - 109, The
glow rendering loop is performing per-pixel host/device syncs by calling
Tensor.item() (search for the glow loop that reads tensor values with .item() in
this class), which kills GPU performance; update the glow accumulation to keep
everything on-tensor: remove all .item() usages in the glow loop, replace Python
scalar operations with torch tensor ops (e.g., use torch.max, torch.clamp,
tensor indexing/assignment and tensor-wise arithmetic) and ensure results are
written back as tensors on self.device (the pipeline's device is set via
self.device). Locate the glow loop in this module (the section that reads tensor
values per-sample/pixel and writes glow outputs) and refactor it to compute glow
intensities and color blending entirely with PyTorch tensor operations without
converting to Python scalars.
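A minimal sketch of the suggested refactor, assuming `frame`, `ys`, and `xs` as stand-ins for the pipeline's actual buffers and sample coordinates (the real glow loop's variable names may differ):

```python
import torch

# Sketch: accumulate glow with tensor ops instead of per-pixel .item()
# readbacks, keeping every operation on the frame's device.
def add_glow(frame: torch.Tensor, ys: torch.Tensor, xs: torch.Tensor,
             intensity: float) -> None:
    """Brighten the (ys, xs) pixels in place, keeping the running maximum."""
    glow = torch.full((ys.numel(),), intensity,
                      device=frame.device, dtype=frame.dtype)
    # torch.maximum stays on-device, so no host/device sync happens per pixel.
    frame[ys, xs] = torch.maximum(frame[ys, xs], glow)

frame = torch.zeros(8, 8)
add_glow(frame, torch.tensor([1, 2]), torch.tensor([3, 4]), 0.5)
```

On CUDA this replaces thousands of synchronizing scalar readbacks per frame with a handful of batched kernel launches.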

Comment on lines +52 to +68
@quantize_mode.setter
def quantize_mode(self, mode: str) -> None:
valid = ("none", "beat", "bar", "2_bar", "4_bar")
if mode not in valid:
logger.warning(f"[SCHEDULER] Invalid quantize mode '{mode}', ignoring")
return
self._quantize_mode = mode
logger.info(f"[SCHEDULER] Quantize mode set to '{mode}'")

@property
def lookahead_ms(self) -> float:
return self._lookahead_ms

@lookahead_ms.setter
def lookahead_ms(self, ms: float) -> None:
self._lookahead_ms = max(0.0, float(ms))
logger.info(f"[SCHEDULER] Lookahead set to {self._lookahead_ms:.0f}ms")

⚠️ Potential issue | 🟠 Major

Cancel pending timers when quantize settings change.

A scheduled update survives changes to quantize_mode and lookahead_ms. If a beat-quantized change is pending and the user switches back to "none" or retunes the lookahead, the old timer still fires later and can reapply stale params on the old boundary. Cancel or recompute any pending timer inside both setters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/server/parameter_scheduler.py` around lines 52 - 68, When changing
quantize settings, cancel or recompute any pending scheduled timer so stale
pending updates don't fire: inside the quantize_mode setter (quantize_mode) and
the lookahead_ms setter (lookahead_ms) ensure you clear any existing pending
timer (e.g., cancel_timer(), self._pending_timer, or similar scheduler handle
used elsewhere) before updating self._quantize_mode or self._lookahead_ms, and
if needed reschedule according to the new mode/lookahead; also log that the
pending timer was canceled or rescheduled to aid debugging.
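A sketch of the cancellation pattern, assuming a `threading.Timer`-style handle; the `_pending_timer` attribute and `SchedulerSketch` class are illustrative, not the scheduler's actual internals:

```python
import threading

# Sketch: cancel any pending quantized update before the mode changes, so a
# stale timer cannot fire on the old boundary.
class SchedulerSketch:
    def __init__(self) -> None:
        self._quantize_mode = "none"
        self._pending_timer: threading.Timer | None = None

    def _cancel_pending(self) -> None:
        if self._pending_timer is not None:
            self._pending_timer.cancel()
            self._pending_timer = None

    @property
    def quantize_mode(self) -> str:
        return self._quantize_mode

    @quantize_mode.setter
    def quantize_mode(self, mode: str) -> None:
        self._cancel_pending()  # drop the stale boundary before switching
        self._quantize_mode = mode
```

The `lookahead_ms` setter would call the same `_cancel_pending()` helper (or recompute the timer's deadline) before storing the new value.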

Comment on lines +14 to +16
from aalink import Link

from ..tempo_sync import BeatState, TempoSource

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed against daydreamlive/scope to check whether the aalink import is guarded at the module registration level:

rg -n "aalink" --type=py -C3


Unconditional aalink import breaks graceful degradation.

The import at line 14 (from aalink import Link) is unconditional and will fail immediately if aalink is not installed, preventing the optional dependency handling in tempo_sync.py (which wraps LinkTempoSource instantiation in a try/except block) from working as designed. The import should be guarded with a try/except block.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/server/tempo_sources/link.py` around lines 14 - 16, The
unconditional import of Link will raise if aalink is missing and prevents
graceful degradation; change the top-level import to a guarded try/except (e.g.,
try: from aalink import Link except Exception: Link = None) so importing this
module never raises, and ensure any code that expects Link (e.g., the
LinkTempoSource class or instantiation in tempo_sync.py) checks for Link is not
None before use.
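The guarded import the review asks for would look like this; binding `Link` to `None` lets `tempo_sync.py` detect the missing optional dependency at instantiation time instead of crashing when the module is imported:

```python
# Sketch: import aalink defensively so module import never raises when the
# optional dependency is absent.
try:
    from aalink import Link
except ImportError:
    Link = None

def link_available() -> bool:
    """True when the optional aalink dependency is installed."""
    return Link is not None
```

Callers (e.g. `LinkTempoSource`) would then check `link_available()` before constructing a `Link` instance and surface the install hint otherwise.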

Comment on lines +137 to +151
def _on_start(self) -> None:
self._tick_count = 0
self._last_tick_time = None
self._ema_interval = None
self._is_playing = True
logger.info("MIDI Start received")

def _on_stop(self) -> None:
self._is_playing = False
logger.info("MIDI Stop received")

def _on_continue(self) -> None:
self._is_playing = True
self._last_tick_time = None
logger.info("MIDI Continue received")

⚠️ Potential issue | 🟠 Major

Refresh the cached beat snapshot on transport events.

Line 145 only flips _is_playing; get_beat_state() keeps returning the previous cached snapshot until another clock tick arrives. After a Stop, tempo sync can keep reporting is_playing=True and a stale position to schedulers and the UI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/server/tempo_sources/midi_clock.py` around lines 137 - 151, The
transport handlers (_on_start, _on_stop, _on_continue) currently only flip
self._is_playing and leave the cached beat snapshot stale so get_beat_state()
returns old data; update these handlers to refresh/invalidate the cached beat
snapshot when the transport state changes (e.g., set self._cached_beat_snapshot
= None or call an existing
self._update_beat_snapshot()/self._invalidate_beat_snapshot() helper) so that
subsequent calls to get_beat_state() reflect the new playing/stopped/continued
state immediately.
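A minimal sketch of the fix, assuming a `_cached_snapshot` attribute and a dict-shaped state (the real class's snapshot type and attribute names may differ):

```python
# Sketch: transport handlers invalidate the cached beat snapshot so
# get_beat_state() reflects a Stop immediately, not on the next clock tick.
class MidiClockSketch:
    def __init__(self) -> None:
        self._is_playing = False
        self._cached_snapshot: dict | None = None

    def _on_stop(self) -> None:
        self._is_playing = False
        self._cached_snapshot = None  # force a rebuild on the next read

    def get_beat_state(self) -> dict:
        if self._cached_snapshot is None:
            self._cached_snapshot = {"is_playing": self._is_playing}
        return self._cached_snapshot

clock = MidiClockSketch()
clock._is_playing = True
clock._cached_snapshot = {"is_playing": True}  # stale snapshot from last tick
clock._on_stop()
state = clock.get_beat_state()
```

`_on_start` and `_on_continue` would invalidate the snapshot the same way.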

Comment on lines +94 to +112
def get_beat_state(self) -> BeatState | None:
"""Get the current beat state. Thread-safe.

Returns client-forwarded state if fresh, otherwise falls back
to the server-side tempo source.
"""
with self._client_state_lock:
client_state = self._client_beat_state

if client_state is not None:
age = time.time() - client_state.timestamp
if age < CLIENT_BEAT_STATE_STALE_SECONDS:
return client_state

with self._source_lock:
if self._source is not None:
return self._source.get_beat_state()

return None

⚠️ Potential issue | 🟠 Major

Disabled tempo sync still exposes fresh client beat state.

get_beat_state() never checks _enabled, and disable() does not clear _client_beat_state. If the browser keeps forwarding beat updates, schedulers and modulators can keep receiving beat data after the user disables tempo sync. Gate get_beat_state() on _enabled and clear the cached client state during disable.

Also applies to: 191-203

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/server/tempo_sync.py` around lines 94 - 112, get_beat_state
currently ignores the _enabled flag and disable() doesn't clear the cached
_client_beat_state, so fresh client-forwarded beats remain available after tempo
sync is disabled; fix by early-returning None from get_beat_state when
self._enabled is False (before reading _client_beat_state) and, in disable(),
acquire self._client_state_lock and set self._client_beat_state = None (and keep
existing logic clearing/locking _source/_source_lock); apply the same gating and
cache-clear change to the analogous method around lines 191-203 as well.

Comment on lines +161 to +169
if source_type == "link":
source = self._create_link_source(bpm)
elif source_type == "midi_clock":
source = self._create_midi_clock_source(midi_device, beats_per_bar)
else:
raise ValueError(f"Unknown tempo source type: {source_type}")

if source is None:
return

⚠️ Potential issue | 🟠 Major

Fail fast when a requested source cannot be created.

If _create_link_source() or _create_midi_clock_source() returns None, enable() just returns here. The REST endpoint then answers 200 with enabled=false, which makes a missing dependency or device indistinguishable from a successful enable. Raise a RuntimeError instead so the caller can surface a proper 4xx with the install hint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/server/tempo_sync.py` around lines 161 - 169, The enable() method
currently returns silently if source creation fails; instead detect when
_create_link_source(...) or _create_midi_clock_source(midi_device,
beats_per_bar) returns None and raise a RuntimeError with a clear message
(including the requested source_type and any install/device hint) so callers can
surface a 4xx; update the code in enable() to replace the `if source is None:
return` branch with raising RuntimeError and ensure the exception includes
sufficient context for the REST layer to show the install hint.

BuffMcBigHuge and others added 10 commits March 12, 2026 14:46
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Resolve conflicts in pyproject.toml, frame_processor.py,
pipeline_processor.py, and uv.lock. Thread tempo_sync and
modulation_engine through the new graph executor.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
…ockerfiles

MIDI requires local hardware access so libasound2 is not useful in containers.
Also corrects uv sync --group to --extra across docs.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafal Leszko <[email protected]>
Remove isCloudMode and onOpenLoRAsSettings props that were added to
DownloadDialog solely to suppress TypeScript errors in StreamPage,
but were never actually used by the component.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
transition (object) and denoising_step_list (array) were not caught by
the typeof === "boolean" / "string" checks, so they bypassed beat
quantization. Use a DISCRETE_PARAM_KEYS allowlist for key-based
detection of known discrete parameters.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
leszko and others added 3 commits March 13, 2026 16:41
Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
Moves the 5 tempo endpoints from app.py into tempo_router.py,
following the same APIRouter pattern as mcp_router.py.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
mcp>=1.0.0 is already in core dependencies, so the optional-dependencies
entry was unnecessary.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
json_schema_extra=ui_field_config(
order=6, component="denoising_steps", is_load_param=True
order=6,
component="denoising_steps",
Collaborator

Stupid question: so the tempo sync always changes/modulates only the denoising_steps param, right? It's not something a user can map/select?

Collaborator Author

The system is schema-driven and user-selectable.

Backend pipeline schemas opt in per-field using modulatable=True in ui_field_config(), along with optional modulatable_min/modulatable_max bounds.

This way plugins can opt in!

ALL_PATTERNS = {**_DIGIT_PATTERNS, **_ALPHA_PATTERNS}

TRACE_LENGTH = 192
BEATS_PER_BAR = 4
Collaborator

Shouldn't it be a param that we set in the UI?

Collaborator Author

Considering that mod_scope and metronome are simple tests for tempo sync, should we just remove them entirely @leszko ?

Collaborator

We can remove them and move them to plugins. Or, if you prefer, we can keep one. But two "hello world" test pipelines are probably too much for the scope core.

Collaborator

Do I understand correctly that both metronome and mod_scope are sample pipelines to present the usage of tempo sync? If yes, then maybe we need just 1 "hello-world" pipeline like that, wdyt?

Collaborator Author

I think it's best that we remove both.

Comment on lines +122 to +130
# Cache init on discrete jumps from the motion controller
if (
block_state.manage_cache
and block_state.current_noise_scale is not None
and block_state.current_noise_scale != block_state.noise_scale
):
block_state.init_cache = True
# When noise_controller is off (e.g. during modulation), let noise_scale
# changes flow through without triggering expensive cache resets.
Collaborator

Why do we need this change? The modulation takes care of the denoising step list, so why do we change anything about the noise controller?

Collaborator Author

This change is needed for noise_scale modulation (not just denoising steps). LongLive exposes noise_scale and denoising_steps as modulatable. I'm not entirely happy with noise modulation; it works, but it's not very impressive. I'm sure it can be tuned better.

leszko and others added 5 commits March 16, 2026 10:18
Make TempoSource an ABC with abstract methods, add _enabled_lock for
thread-safe access, prune dead notification senders, unify install hints
to use `uv sync --extra`, and clean up redundant .get() fallbacks.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
Extract beat boundary calculation to a standalone get_beat_boundary()
in tempo_sync.py and consolidate inline beat state handling into a
dedicated _apply_tempo_sync() method.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
… warnings

- Use `self.enabled` property (with lock) in `get_status()` instead of bare `self._enabled`
- Add `PipelineProcessor.set_beat_cache_reset_rate()` to encapsulate private attribute access
- Downgrade noisy per-frame "Extra params" log from info to debug
- Convert f-string logger calls to %-style lazy formatting across tempo sync modules
- Fix ESLint exhaustive-deps warnings in StreamPage prompt cycle effect

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
Comment on lines +375 to +383
# Check for quantized update flag
if data.pop("_quantized", False):
if session.video_track and hasattr(
session.video_track, "frame_processor"
):
session.video_track.frame_processor.schedule_quantized_update(
data
)
return
Collaborator

I guess this will not work with the cloud. I think we could make it work with the cloud, but I wonder if the tempo sync feature even makes sense there. Won't the latency make it unusable?

return tempo_sync


@router.get("/status", response_model=TempoStatusResponse)
Collaborator

This will not be proxied to the cloud. I guess that's correct, but I want to confirm.

@BuffMcBigHuge
Collaborator Author

Closing this PR in favor of #703.
