Conversation
Important: Review skipped. Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: Organization UI. Review profile: CHILL. Plan: Pro.
📝 Walkthrough

This pull request introduces a comprehensive beat-synchronization and parameter modulation system. It adds tempo sync management (Ableton Link and MIDI Clock support), a parameter scheduler for quantized beat-boundary updates, a modulation engine for beat-driven parameter animation, and frontend UI components to configure these features. New Metronome and ModScope visualization pipelines are included for testing.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant Frontend as Frontend UI
    participant WebRTC as WebRTC Manager
    participant TempoSync as TempoSync
    participant FrameProc as Frame Processor
    participant Scheduler as Parameter Scheduler
    participant ModEngine as Modulation Engine
    participant Pipeline as Pipeline
    User->>Frontend: Enable Tempo Sync (Link source)
    Frontend->>WebRTC: POST /api/v1/tempo/enable
    WebRTC->>TempoSync: enable("link", bpm=120)
    TempoSync->>TempoSync: Start Link polling (100 Hz)
    TempoSync->>WebRTC: Register notification session
    loop Every beat (~2 times/sec at 120 BPM)
        TempoSync->>WebRTC: Send tempo_update message
        WebRTC->>Frontend: Update beat display
    end
    User->>Frontend: Set quantize_mode="beat", enable modulation
    Frontend->>WebRTC: Send parameter update (quantize_mode, modulations)
    WebRTC->>FrameProc: schedule_quantized_update(params)
    FrameProc->>Scheduler: schedule({modulations, denoising_steps})
    Scheduler->>Scheduler: Compute delay to next beat boundary
    Scheduler->>Scheduler: Start timer (based on lookahead_ms)
    TempoSync->>FrameProc: Beat boundary reached
    Scheduler->>FrameProc: Apply pending parameters
    FrameProc->>ModEngine: apply(beat_phase, bar_position, params)
    ModEngine->>ModEngine: Compute waveform phase (sine, cosine, etc.)
    ModEngine->>ModEngine: Modulate denoising_steps and noise_scale
    ModEngine->>FrameProc: Return modulated parameters
    FrameProc->>Pipeline: Call with modulated params
    Pipeline->>Pipeline: Process frame with beat-synced settings
    FrameProc->>WebRTC: Send change_applied notification
    WebRTC->>Frontend: Update UI (confirm modulation applied)
```
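The "Compute delay to next beat boundary" step above can be sketched as follows. This is an illustrative standalone function, not the actual `ParameterScheduler` API; the name and signature are assumptions:

```python
def delay_to_next_beat(beat_phase: float, bpm: float, lookahead_ms: float = 0.0) -> float:
    """Seconds until the next beat boundary, shortened by a lookahead window.

    beat_phase is the position within the current beat, in [0, 1).
    """
    beat_period = 60.0 / bpm                      # seconds per beat
    remaining = (1.0 - beat_phase) * beat_period  # time left in this beat
    # Fire the timer early by lookahead_ms so the pipeline already has the
    # new parameters in hand when the boundary actually arrives.
    return max(0.0, remaining - lookahead_ms / 1000.0)

# At 120 BPM a beat lasts 0.5 s; halfway through a beat with a 50 ms
# lookahead the timer should fire in roughly 0.2 s.
delay = delay_to_next_beat(beat_phase=0.5, bpm=120.0, lookahead_ms=50.0)
```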
Estimated Code Review Effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
📝 Coding Plan
🚀 fal.ai Preview Deployment
Testing: Connect to this preview deployment by running this on your branch. 🧪 E2E tests will run automatically against this deployment.
✅ E2E Tests passed
Test Artifacts: Check the workflow run for screenshots.
…_playing. Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: RyanOnTheInside <[email protected]> Signed-off-by: BuffMcBigHuge <[email protected]>
…steps and others. Signed-off-by: BuffMcBigHuge <[email protected]>
5e4f357 to 4163333
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/scope/server/frame_processor.py (1)
628-660: ⚠️ Potential issue | 🟠 Major: Don't forward tempo-control keys into pipelines when tempo sync is absent.
Lines 629-660 only strip quantization, modulation, and client beat-state fields when the corresponding tempo helper exists. Any `FrameProcessor` still constructed without `tempo_sync` will now forward those control-plane keys into `processor.update_parameters(...)` and `self.parameters`, which changes the pre-feature contract and can surface as unknown pipeline args.

🤖 Prompt for AI Agents
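The unconditional stripping this comment calls for can be sketched as below. The key names come from the finding; the helper function is hypothetical, not the repo's actual code:

```python
# Control-plane keys that must never reach processor.update_parameters(),
# regardless of whether the tempo helpers were constructed.
TEMPO_CONTROL_KEYS = ("quantize_mode", "lookahead_ms", "modulations", "beat_cache_reset_rate")

def strip_tempo_controls(parameters: dict) -> dict:
    """Pop tempo-control keys unconditionally and return the removed ones,
    so callers can hand them to whichever helpers actually exist."""
    return {k: parameters.pop(k) for k in TEMPO_CONTROL_KEYS if k in parameters}

params = {"prompt": "a cat", "quantize_mode": "beat", "modulations": {}}
controls = strip_tempo_controls(params)
# params is now safe to forward even when parameter_scheduler,
# modulation_engine, or tempo_sync is None.
```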
Verify each finding against the current code and only fix it if needed. In `@src/scope/server/frame_processor.py` around lines 628 - 660, The FrameProcessor currently only removes tempo-control keys when helpers exist, causing stale control keys to be forwarded when helpers are absent; always strip these keys from the incoming parameters before any pipeline forwarding. Specifically: ensure "quantize_mode" and "lookahead_ms" are popped from parameters even if self.parameter_scheduler is None; pop "modulations" from parameters even if self.modulation_engine is None (and call self.modulation_engine.update(...) only when present); pop "beat_cache_reset_rate" regardless (and only apply it to self.pipeline_processors when present); and when self.tempo_sync is None remove any client beat-state fields (the same keys handled by tempo_sync.update_client_beat_state) so they are not forwarded to processor.update_parameters or retained on self.parameters. Use the existing symbols (parameter_scheduler, modulation_engine, tempo_sync, pipeline_processors, _update_output_sinks_from_config, _update_input_source) to locate the logic and implement the unconditional popping before any call that forwards parameters.

frontend/src/pages/StreamPage.tsx (1)
519-545: ⚠️ Potential issue | 🟠 Major: Don't persist every unknown data-channel param into `schemaFieldOverrides`.

The fallback here captures any non-allowlisted key, but `sendParameterUpdate()` is also used for transient protocol fields like `transition` on Lines 631-633, `prompt_interpolation_method` on Line 1291, and `output_sinks` on Lines 1174-1176. Those values are later merged back into `loadParams` and `initialParameters` on Lines 1860-1866 and Lines 2036-2042, so a runtime-only message can leak into the next pipeline load/start as an unexpected schema override.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/StreamPage.tsx` around lines 519 - 545, The current fallback in the params sync block is persisting every non-allowlisted param into settingsUpdate.schemaFieldOverrides, which causes transient protocol-only fields (e.g., transition, prompt_interpolation_method, output_sinks sent via sendParameterUpdate) to leak into future loads; update the logic around overrideUpdates creation to exclude a small denylist of known transient keys (e.g., "transition", "prompt_interpolation_method", "output_sinks") or detect runtime-only flags before copying into settingsUpdate.schemaFieldOverrides, so only true schema overrides are merged; keep the rest of the merge behavior intact (referencing settingsRef.current and settingsUpdate.schemaFieldOverrides) and ensure transient keys are ignored when building overrideUpdates.
🧹 Nitpick comments (14)
docs/tempo-sync-review.md (3)
258-264: Optional: Add language tag to flow diagram. Consistent with earlier flow diagrams, consider adding `` ```text `` to silence markdown linters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/tempo-sync-review.md` around lines 258 - 264, The flow diagram block containing ModulationSection → modulations state → sendParameterUpdate(...) → FrameProcessor.update_parameters() → ModulationEngine.update(raw_configs) → ModulationEngine.apply(beat_state, call_params) → PipelineProcessor should be fenced as a text code block to satisfy markdown linters; update the diagram to use a triple-backtick fence with the language tag "text" (i.e., start with ```text and end with ```) so the existing symbols (ModulationSection, sendParameterUpdate, FrameProcessor.update_parameters, ModulationEngine.update, ModulationEngine.apply, PipelineProcessor) remain unchanged but the block is properly annotated.
42-78: Optional: Add language tag to fenced code block. The ASCII architecture diagram lacks a language specifier. Consider adding `` ```text `` to silence markdown linters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/tempo-sync-review.md` around lines 42 - 78, Add a language tag to the fenced ASCII diagram block (the triple backticks that begin the diagram starting with "┌─────────────────────────────────────────────────────────────┐" and containing headers like "TEMPO SOURCES" and "TEMPOSYNC MANAGER"); change the opening fence from ``` to ```text so markdown linters recognize it as plain text and stop flagging the diagram.
239-243: Optional: Add language tags to flow diagram code blocks. The flow diagrams lack language specifiers. Consider adding `` ```text `` to silence markdown linters and improve rendering consistency.

Also applies to: 247-251
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/tempo-sync-review.md` around lines 239 - 243, Update the flow diagram code blocks that show "Ableton Link / MIDI device → TempoSource → TempoSync.get_beat_state() → PipelineProcessor (inject + modulate) → Pipeline → _notification_loop() → WebRTC data channel → useTempoSync" (and the similar block at the later section) to include a language specifier by changing the fenced code blocks from ``` to ```text so markdown linters render them consistently; locate the fenced blocks containing those exact flow lines and prepend "text" after the opening backticks for both occurrences.

frontend/src/components/DownloadDialog.tsx (1)
31-32: Unused props declared in interface. The props `isCloudMode` and `onOpenLoRAsSettings` are added to the interface and destructured (via the comment on line 46), but they are never used within the component. If these are solely for type compatibility when spreading props from a parent, consider either:
- Actually using them in the component logic (e.g., conditionally rendering UI for cloud mode)
- Removing them from the interface if they're not needed here
- If intentionally ignored for forward compatibility, explicitly destructure and discard them: `isCloudMode: _isCloudMode, onOpenLoRAsSettings: _onOpenLoRAsSettings`

Also applies to: 46-47
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/DownloadDialog.tsx` around lines 31 - 32, The props isCloudMode and onOpenLoRAsSettings declared for the DownloadDialog component are never used; either remove them from the DownloadDialogProps interface and from the component's destructuring, or explicitly destructure-and-discard them to document intentional ignoring (e.g., isCloudMode: _isCloudMode, onOpenLoRAsSettings: _onOpenLoRAsSettings) inside the DownloadDialog function to satisfy type compatibility while avoiding unused variable warnings.

frontend/src/components/TempoSyncPanel.tsx (1)
11-32: Consider exporting the props interface.
`TempoSyncPanelProps` is not exported, which may limit reusability for consumers who want to type props passed to this component.

♻️ Suggested fix

```diff
-interface TempoSyncPanelProps {
+export interface TempoSyncPanelProps {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/TempoSyncPanel.tsx` around lines 11 - 32, TempoSyncPanelProps is declared but not exported, preventing external modules from importing the prop type; fix by adding an export to the interface declaration (export interface TempoSyncPanelProps) in the TempoSyncPanel component file and update any places that should import this type (e.g., other components or tests) to import { TempoSyncPanelProps } from the component module; ensure any existing default/export patterns in the file remain correct after adding the export.

src/scope/server/schema.py (1)
857-869: Consider stronger typing for response fields. `source` and `beat_state` are typed as `dict | None`, which provides flexibility but reduces type safety. If the structure is stable, consider defining nested Pydantic models.

This is acceptable as-is if the dict structure varies by source type or is expected to evolve.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/scope/server/schema.py` around lines 857 - 869, TempoStatusResponse currently types source and beat_state as dict | None which weakens type safety; define two nested Pydantic models (e.g., SourceInfo and BeatState) that capture the stable structure (fields like type, num_peers for source; bpm, beat_phase, bar_position, beat_count, is_playing, source for beat_state) and replace the dict annotations on TempoStatusResponse.source and TempoStatusResponse.beat_state with Optional[SourceInfo] and Optional[BeatState] respectively, updating any code that constructs or reads these fields to use the new models.frontend/src/hooks/useUnifiedWebRTC.ts (1)
167-203: Gate the per-message log before `tempo_update` goes live at 15 Hz. Line 168 now runs for every tempo packet, so an active stream will spam DevTools continuously. Please skip or sample logging for high-frequency message types.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/hooks/useUnifiedWebRTC.ts` around lines 167 - 203, The per-message console.log in dataChannel.onmessage is currently printing every "tempo_update" packet (handled where data.type === "tempo_update"), flooding DevTools; modify the handler around the console.log("[UnifiedWebRTC] Data channel message:", event.data) so it skips or samples logs for high-frequency messages — e.g., only log when data.type !== "tempo_update" or apply a simple sampling/throttle for "tempo_update" (log every Nth packet or once per second) — leaving other branches (stream_stopped, change_scheduled, change_applied) unchanged; update the logic in the function handling dataChannel.onmessage to check data.type before logging.

src/scope/core/pipelines/metronome/pipeline.py (2)
175-181: Drawing beat number twice: first draw at (0,0) is unnecessary. Line 175 draws the number at (0,0) just to calculate width, then line 181 draws it again at the centered position. This overwrites pixels unnecessarily.
♻️ Calculate width without drawing
```diff
-    num_w = _draw_number(canvas, beat_display, 0, 0, scale, text_color)
-    # Calculate position to center it
-    center_x = (W - num_w) // 2
+    # Calculate width: each digit is 3*scale wide with scale spacing
+    num_digits = len(str(beat_display))
+    char_w = 3 * scale + scale
+    num_w = num_digits * char_w
+    center_x = (W - num_w) // 2
     center_y = H // 3 + (H // 3 - 5 * scale) // 2
-    # Clear area first and redraw centered
     _draw_number(canvas, beat_display, center_x, center_y, scale, text_color)
```

Apply the same fix for `bar_num_w` at line 186.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/scope/core/pipelines/metronome/pipeline.py` around lines 175 - 181, Remove the unnecessary first draw used to measure width: replace the call that invokes _draw_number(canvas, beat_display, 0, 0, scale, text_color) with a non-drawing width calculation (e.g., call the same underlying measurement routine or add a helper to measure text width) so you only compute num_w without rendering, then use _draw_number only once at (center_x, center_y); do the same for bar_num_w (avoid drawing at 0,0 to measure width and only draw once at its centered position). Reference: _draw_number, beat_display, num_w, bar_num_w in pipeline.py.
100-103: Blocking `time.sleep` in pipeline call may cause frame drops. Using `time.sleep` blocks the entire pipeline thread. While intentional for testing latency compensation, this could cause frame drops in the input queue if latency is high. Consider documenting this clearly or using a non-blocking approach.

📝 Add comment clarifying the intentional blocking

```diff
     # Simulate pipeline latency
+    # NOTE: This is intentionally blocking to accurately simulate real
+    # pipeline latency for testing beat-sync lookahead compensation.
+    # High values will cause input queue backup, which is expected behavior.
     latency_ms = kwargs.get("latency_ms", 0.0)
     if latency_ms > 0:
         time.sleep(latency_ms / 1000.0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/scope/core/pipelines/metronome/pipeline.py` around lines 100 - 103, The current pipeline code that simulates latency uses a blocking call via time.sleep when reading latency_ms from kwargs, which can block the pipeline thread and cause input-frame drops; update the latency simulation in the pipeline (the latency_ms handling inside the pipeline function) to either (a) replace blocking time.sleep with a non-blocking approach such as scheduling an asyncio.sleep or deferring the delay to a worker/threadpool so the pipeline thread isn't blocked, or (b) if blocking is intentional for testing, add a clear inline comment and docstring on the pipeline function explaining that latency_ms intentionally blocks the pipeline thread and warning about potential frame drops so callers know to avoid using high latency_ms in production. Ensure references to latency_ms and the pipeline function/method name are preserved when changing the implementation.

tests/test_parameter_scheduler.py (1)
359-376: Test relies on real timer and sleep, which can be flaky in CI. The test waits 0.35s for a 0.25s timer, which should be sufficient, but timing-based tests can be unreliable under load. Consider using a larger tolerance or mocking `threading.Timer`.

🤖 Prompt for AI Agents
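One way to make the test deterministic, assuming the scheduler creates its timers via `threading.Timer` (a standalone sketch, not the actual test harness; a real test would patch the scheduler module's own reference):

```python
import threading
from unittest import mock

class ImmediateTimer:
    """Stand-in for threading.Timer that fires synchronously on start()."""
    def __init__(self, interval, function, args=None, kwargs=None):
        self._fire = lambda: function(*(args or ()), **(kwargs or {}))
    def start(self):
        self._fire()  # no real delay, no sleep in the test
    def cancel(self):
        pass

fired = []
with mock.patch("threading.Timer", ImmediateTimer):
    # Any code that looks up threading.Timer inside this block gets the stub.
    timer = threading.Timer(0.25, fired.append, args=("applied",))
    timer.start()
```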
Verify each finding against the current code and only fix it if needed. In `@tests/test_parameter_scheduler.py` around lines 359 - 376, The test test_scheduled_change_fires_after_delay is flaky because it relies on real timers/sleep; modify the test to avoid real-time dependence by either increasing the tolerance (e.g., sleep to 0.5s) or, preferably, mock the timer used by the scheduler (mock threading.Timer or the scheduler's internal timer creation) so schedule() triggers immediately in tests; locate references to make_scheduler, FakeBeatState, and the scheduler.schedule(...) call and patch the Timer creation used by that scheduler instance to a synchronous stub that calls the callback immediately and assert applied/notifications as before.

src/scope/server/tempo_sources/link.py (1)
109-112: `set_tempo` silently does nothing when Link is not initialized. Consider logging a warning or raising an error if `set_tempo` is called when `_link` is None, to help debug misconfigured flows.

🔧 Optional: Add warning when Link is not available

```diff
     def set_tempo(self, bpm: float) -> None:
         """Set the Link session tempo (makes Scope the tempo leader)."""
         if self._link is not None:
             self._link.tempo = bpm
+        else:
+            logger.warning("Cannot set tempo: Link session not started")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/scope/server/tempo_sources/link.py` around lines 109 - 112, set_tempo currently returns silently when self._link is None which hides misconfiguration; update set_tempo to detect when self._link is None and either log a warning or raise an error. Inside the set_tempo method (referencing set_tempo and attribute _link) add a branch that emits a clear warning (using the existing logger on the class if available, otherwise the module logging) stating that tempo change was requested but Link is not initialized, or raise a RuntimeError with that message—choose logging for non-fatal flows and raise for fatal flows.

frontend/src/components/settings/TempoSyncSection.tsx (1)
270-298: BPM input lacks debouncing and allows invalid intermediate states. While the validation on submit is good, the input allows typing values outside the 20-300 range. Consider adding `onBlur` validation to clamp the value when focus leaves the input.

🔧 Optional: clamp value on blur

```diff
-onBlur={() => (bpmInputFocusedRef.current = false)}
+onBlur={() => {
+  bpmInputFocusedRef.current = false;
+  const val = parseFloat(bpmInput);
+  if (!isNaN(val)) {
+    setBpmInput(String(Math.max(20, Math.min(300, Math.round(val)))));
+  }
+}}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/settings/TempoSyncSection.tsx` around lines 270 - 298, The BPM input currently only validates on submit and allows invalid intermediate values; update the input's onBlur handler (currently setting bpmInputFocusedRef) to parse and clamp bpmInput into the 20–300 range, call setBpmInput with the clamped string, and invoke onSetBpm(clampedValue) so the final value is normalized; keep onChange as-is to allow free typing, and optionally add a short debounce around any future calls to onSetBpm if you want live updates instead of only on blur/submit (referencing bpmInput, setBpmInput, bpmInputFocusedRef, and onSetBpm).

frontend/src/components/settings/ModulationSection.tsx (2)
114-119: Potential issue: selectedTarget state sync may flicker on pipeline switch. When targets change and the current selectedTarget is invalid, the effect updates to `targets[0].value`. However, the component renders before this effect runs, potentially showing stale UI briefly. This is a minor UX issue.
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/settings/ModulationSection.tsx` around lines 114 - 119, The current useEffect in ModulationSection (useEffect, targets, selectedTarget, setSelectedTarget) causes a brief render with a stale selectedTarget when targets change; to avoid flicker, derive an effectiveSelectedTarget synchronously during render (e.g., compute a fallback like targets[0].value when selectedTarget is missing) and use that derived value in the JSX, or setSelectedTarget synchronously at the same place you update targets/pipeline (the handler that changes pipeline/targets) so the state is correct before the next render; update references in the component to use the derived/synchronously-updated selected value instead of relying solely on the effect.
137-149: `handleToggle` deletes config on disable, losing user settings. When modulation is toggled off, the config for that target is deleted. If the user toggles it back on, they get defaults instead of their previous settings. Consider keeping disabled configs with `enabled: false` instead.

♻️ Preserve config on disable

```diff
   const handleToggle = useCallback(
     (pressed: boolean) => {
       if (pressed && currentTarget) {
-        const cfg = defaultConfigFor(currentTarget);
+        const existing = modulations[selectedTarget];
+        const cfg = existing
+          ? { ...existing, enabled: true }
+          : defaultConfigFor(currentTarget);
         onModulationsChange({ ...modulations, [selectedTarget]: cfg });
       } else {
-        const next = { ...modulations };
-        delete next[selectedTarget];
-        onModulationsChange(next);
+        const existing = modulations[selectedTarget];
+        if (existing) {
+          onModulationsChange({
+            ...modulations,
+            [selectedTarget]: { ...existing, enabled: false }
+          });
+        }
       }
     },
     [selectedTarget, currentTarget, modulations, onModulationsChange]
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/settings/ModulationSection.tsx` around lines 137 - 149, handleToggle currently deletes the target config when disabling, losing user settings; change it to preserve the config object and only toggle an enabled flag. Specifically, in handleToggle use the existing config from modulations[selectedTarget] (falling back to defaultConfigFor(currentTarget)) when pressed is true and set enabled: true, and when pressed is false set modulations[selectedTarget] = { ...(existingOrDefault), enabled: false } instead of deleting the entry; continue to call onModulationsChange with the updated modulations object. Ensure you reference handleToggle, defaultConfigFor, onModulationsChange, modulations, selectedTarget, and currentTarget when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/src/lib/api.ts`:
- Around line 1050-1055: The TempoEnableRequest type currently makes bpm and
beats_per_bar optional which allows invalid calls to compile; change it to a
discriminated union that matches the server contract: define TempoEnableRequest
as a union of { source: "link"; bpm: number; beats_per_bar: number } and {
source: "midi_clock"; midi_device: string } (or the exact server-required
shapes) so callers must provide the correct fields depending on source; update
any usages of TempoEnableRequest accordingly.
In `@frontend/src/pages/StreamPage.tsx`:
- Around line 1594-1637: The prompt-cycling effect is being double-quantized
because it sends prompt updates via sendParameterUpdate which the quantize logic
will push to the next boundary; change this effect to send the new prompt
immediately by either calling a new helper (e.g., sendParameterUpdateImmediate)
or extending sendParameterUpdate to accept a flag (e.g., { skipQuantize: true })
and set that flag when updating prompts from promptCycleIndexRef/current items;
update the quantize branch in the sendParameterUpdate implementation to honor
the flag so promptCycleBoundaryRef/Index logic produces the prompt on the
current boundary rather than being deferred.
- Around line 473-494: The quantize-detection logic in the isQuantizeActive
block (variables NEVER_QUANTIZE, hasDiscrete, params, _quantized) wrongly infers
discreteness by typeof and misses non-primitive discrete payloads like
transition and denoising_step_list; change hasDiscrete to use a key-based
allowlist instead: create a DISCRETE_PARAM_KEYS set (include prompts,
reset_cache, transition, denoising_step_list, and any other known composite
discrete keys), skip keys in NEVER_QUANTIZE, and mark params._quantized = true
when any key is in DISCRETE_PARAM_KEYS or matches existing boolean/string checks
for backward compatibility. Ensure you reference the same symbols
(isQuantizeActive, NEVER_QUANTIZE, hasDiscrete, params, _quantized) when making
the change.
In `@src/scope/core/pipelines/mod_scope/pipeline.py`:
- Around line 77-79: BEATS_PER_BAR is hardcoded to 4 causing incorrect beat
highlighting; make beats per bar dynamic by replacing the constant with a
configurable variable (e.g., beats_per_bar) sourced from pipeline settings or
the time signature, update any code that computes or wraps bar_position (the
logic currently at the block referenced by lines handling bar_position wrapping
around BEATS_PER_BAR) to use beats_per_bar instead of BEATS_PER_BAR, and ensure
the rendering logic that draws the beat dots uses beats_per_bar to determine
count and placement so 3/4, 5/4, etc. render correctly.
- Around line 105-109: The glow rendering loop is performing per-pixel
host/device syncs by calling Tensor.item() (search for the glow loop that reads
tensor values with .item() in this class), which kills GPU performance; update
the glow accumulation to keep everything on-tensor: remove all .item() usages in
the glow loop, replace Python scalar operations with torch tensor ops (e.g., use
torch.max, torch.clamp, tensor indexing/assignment and tensor-wise arithmetic)
and ensure results are written back as tensors on self.device (the pipeline's
device is set via self.device). Locate the glow loop in this module (the section
that reads tensor values per-sample/pixel and writes glow outputs) and refactor
it to compute glow intensities and color blending entirely with PyTorch tensor
operations without converting to Python scalars.
In `@src/scope/core/pipelines/mod_scope/schema.py`:
- Around line 25-27: The three demo parameters noise_scale, vace_context_scale,
and kv_cache_attention_bias are plain floats so the generated schema lacks the
ui.modulatable metadata; update each field declaration to include the UI
metadata (e.g., use Annotated/Field or the project's Field helper) so they carry
ui.modulatable=True in the schema generation step — change the noise_scale,
vace_context_scale, and kv_cache_attention_bias declarations to use the
project's schema field pattern that attaches ui.modulatable metadata rather than
raw float types.
In `@src/scope/server/frame_processor.py`:
- Around line 58-76: The ParameterScheduler creates live timers but
FrameProcessor.stop() does not cancel them, allowing scheduled callbacks (e.g.
update_parameters / change_applied) to run after shutdown; modify
FrameProcessor.stop() to explicitly cancel/stop any active schedulers and
engines (call the cancellation/cleanup API on ParameterScheduler instance
referenced by self.parameter_scheduler and also on self.modulation_engine if
applicable) before tearing down pipeline or clearing processors so no pending
timers can fire; ensure you use the scheduler/engine methods (e.g.
ParameterScheduler.cancel() or .stop(), and ModulationEngine.stop()/shutdown())
that exist on those classes to cleanly terminate pending work and prevent late
callbacks.
In `@src/scope/server/modulation.py`:
- Around line 130-146: The descending-repair loop can push values outside the
configured bounds (lo/hi); after the existing backward pass that mutates result
(the for loop that checks result[i] <= result[i+1] and increments earlier
entries), re-clamp all entries of result into [lo, hi] (i.e., for each value set
result[i] = max(lo, min(hi, result[i]))), or alternatively perform the clamp
inside the loop after adjusting each element; reference the variables/result
list and the existing backward-loop in modulation.py to locate the change.
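The re-clamp pass described above might look like this as a standalone sketch (names mirror the prompt, not the actual `modulation.py` code). Note that clamping can reintroduce ties at the `hi` bound; that is the trade-off the bounds impose:

```python
def repair_descending(values, lo, hi):
    """Make the list descending via a backward pass, then clamp to [lo, hi]."""
    result = list(values)
    # Backward pass: bump each earlier entry above its successor.
    for i in range(len(result) - 2, -1, -1):
        if result[i] <= result[i + 1]:
            result[i] = result[i + 1] + 1
    # The bumps can overshoot hi, so re-clamp every entry.
    return [max(lo, min(hi, v)) for v in result]

# The backward pass alone would produce [21, 20, 3], pushing the first
# entry above hi; the clamp pulls 21 back down to 20.
steps = repair_descending([5, 20, 3], lo=0, hi=20)
```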
In `@src/scope/server/parameter_scheduler.py`:
- Around line 52-68: When changing quantize settings, cancel or recompute any
pending scheduled timer so stale pending updates don't fire: inside the
quantize_mode setter (quantize_mode) and the lookahead_ms setter (lookahead_ms)
ensure you clear any existing pending timer (e.g., cancel_timer(),
self._pending_timer, or similar scheduler handle used elsewhere) before updating
self._quantize_mode or self._lookahead_ms, and if needed reschedule according to
the new mode/lookahead; also log that the pending timer was canceled or
rescheduled to aid debugging.
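The cancel-before-update pattern the prompt describes, sketched standalone (class and attribute names are illustrative, not the real `ParameterScheduler`):

```python
import threading

class SchedulerSettingsSketch:
    """Sketch: setters cancel any pending timer before changing quantize settings."""

    def __init__(self):
        self._quantize_mode = "off"
        self._pending_timer = None

    def _cancel_pending(self):
        if self._pending_timer is not None:
            self._pending_timer.cancel()
            self._pending_timer = None  # real code would also log the cancel

    @property
    def quantize_mode(self):
        return self._quantize_mode

    @quantize_mode.setter
    def quantize_mode(self, mode):
        self._cancel_pending()  # a stale timer must not fire under the old mode
        self._quantize_mode = mode

sched = SchedulerSettingsSketch()
sched._pending_timer = threading.Timer(10.0, lambda: None)
sched._pending_timer.start()
sched.quantize_mode = "bar"  # cancels the pending timer before switching
```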
In `@src/scope/server/tempo_sources/link.py`:
- Around line 14-16: The unconditional import of Link will raise if aalink is
missing and prevents graceful degradation; change the top-level import to a
guarded try/except (e.g., try: from aalink import Link except Exception: Link =
None) so importing this module never raises, and ensure any code that expects
Link (e.g., the LinkTempoSource class or instantiation in tempo_sync.py) checks
for Link is not None before use.
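The guarded-import pattern, sketched standalone (the error message and the `require_link` helper are illustrative):

```python
# A missing optional dependency must not break module import.
try:
    from aalink import Link  # noqa: F401
except Exception:
    Link = None

def require_link():
    """Return the Link class, or fail with an actionable message."""
    if Link is None:
        raise RuntimeError(
            "Ableton Link support requires the 'aalink' package "
            "(pip install aalink)"
        )
    return Link
```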
In `@src/scope/server/tempo_sources/midi_clock.py`:
- Around line 137-151: The transport handlers (_on_start, _on_stop,
_on_continue) currently only flip self._is_playing and leave the cached beat
snapshot stale so get_beat_state() returns old data; update these handlers to
refresh/invalidate the cached beat snapshot when the transport state changes
(e.g., set self._cached_beat_snapshot = None or call an existing
self._update_beat_snapshot()/self._invalidate_beat_snapshot() helper) so that
subsequent calls to get_beat_state() reflect the new playing/stopped/continued
state immediately.
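Sketched standalone (class and field names follow the prompt, not the real MIDI clock source):

```python
class TransportSketch:
    """Transport handlers must invalidate the cached beat snapshot."""

    def __init__(self):
        self._is_playing = True
        self._cached_snapshot = {"beat_count": 7, "is_playing": True}

    def _invalidate_beat_snapshot(self):
        self._cached_snapshot = None

    def _on_stop(self):
        self._is_playing = False
        self._invalidate_beat_snapshot()  # force get_beat_state() to rebuild

    def get_beat_state(self):
        if self._cached_snapshot is None:
            # Rebuild from live transport state instead of serving stale data.
            self._cached_snapshot = {"beat_count": 0, "is_playing": self._is_playing}
        return self._cached_snapshot

src = TransportSketch()
src._on_stop()
state = src.get_beat_state()  # reflects the stop immediately
```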
In `@src/scope/server/tempo_sync.py`:
- Around line 161-169: The enable() method currently returns silently if source
creation fails; instead detect when _create_link_source(...) or
_create_midi_clock_source(midi_device, beats_per_bar) returns None and raise a
RuntimeError with a clear message (including the requested source_type and any
install/device hint) so callers can surface a 4xx; update the code in enable()
to replace the `if source is None: return` branch with raising RuntimeError and
ensure the exception includes sufficient context for the REST layer to show the
install hint.
- Around line 94-112: get_beat_state currently ignores the _enabled flag and
disable() doesn't clear the cached _client_beat_state, so fresh client-forwarded
beats remain available after tempo sync is disabled; fix by early-returning None
from get_beat_state when self._enabled is False (before reading
_client_beat_state) and, in disable(), acquire self._client_state_lock and set
self._client_beat_state = None (and keep existing logic clearing/locking
_source/_source_lock); apply the same gating and cache-clear change to the
analogous method around lines 191-203 as well.
---
Outside diff comments:
In `@frontend/src/pages/StreamPage.tsx`:
- Around line 519-545: The current fallback in the params sync block is
persisting every non-allowlisted param into settingsUpdate.schemaFieldOverrides,
which causes transient protocol-only fields (e.g., transition,
prompt_interpolation_method, output_sinks sent via sendParameterUpdate) to leak
into future loads; update the logic around overrideUpdates creation to exclude a
small denylist of known transient keys (e.g., "transition",
"prompt_interpolation_method", "output_sinks") or detect runtime-only flags
before copying into settingsUpdate.schemaFieldOverrides, so only true schema
overrides are merged; keep the rest of the merge behavior intact (referencing
settingsRef.current and settingsUpdate.schemaFieldOverrides) and ensure
transient keys are ignored when building overrideUpdates.
In `@src/scope/server/frame_processor.py`:
- Around line 628-660: The FrameProcessor currently only removes tempo-control
keys when helpers exist, causing stale control keys to be forwarded when helpers
are absent; always strip these keys from the incoming parameters before any
pipeline forwarding. Specifically: ensure "quantize_mode" and "lookahead_ms" are
popped from parameters even if self.parameter_scheduler is None; pop
"modulations" from parameters even if self.modulation_engine is None (and call
self.modulation_engine.update(...) only when present); pop
"beat_cache_reset_rate" regardless (and only apply it to
self.pipeline_processors when present); and when self.tempo_sync is None remove
any client beat-state fields (the same keys handled by
tempo_sync.update_client_beat_state) so they are not forwarded to
processor.update_parameters or retained on self.parameters. Use the existing
symbols (parameter_scheduler, modulation_engine, tempo_sync,
pipeline_processors, _update_output_sinks_from_config, _update_input_source) to
locate the logic and implement the unconditional popping before any call that
forwards parameters.
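The unconditional stripping described above can be sketched as follows. This is a minimal illustration, not the real FrameProcessor wiring: `strip_tempo_controls` is a hypothetical helper name, and only the key names come from the review text.

```python
# Hypothetical sketch: pop tempo-control keys unconditionally so they never
# reach processor.update_parameters(), even when the helpers are absent.
# Key names follow the review text; the real FrameProcessor wiring differs.
def strip_tempo_controls(parameters: dict, modulation_engine=None) -> dict:
    parameters.pop("quantize_mode", None)
    parameters.pop("lookahead_ms", None)
    modulations = parameters.pop("modulations", None)
    parameters.pop("beat_cache_reset_rate", None)
    # Only act on the popped value when the engine exists; either way the
    # control key is gone before anything is forwarded to the pipeline.
    if modulation_engine is not None and modulations is not None:
        modulation_engine.update(modulations)
    return parameters
```

With this shape, a missing helper changes only whether the popped value is acted on, never whether the key leaks downstream.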
---
Nitpick comments:
In `@docs/tempo-sync-review.md`:
- Around line 258-264: The flow diagram block containing ModulationSection →
modulations state → sendParameterUpdate(...) →
FrameProcessor.update_parameters() → ModulationEngine.update(raw_configs) →
ModulationEngine.apply(beat_state, call_params) → PipelineProcessor should be
fenced as a text code block to satisfy markdown linters; update the diagram to
use a triple-backtick fence with the language tag "text" (i.e., start with
```text and end with ```) so the existing symbols (ModulationSection,
sendParameterUpdate, FrameProcessor.update_parameters, ModulationEngine.update,
ModulationEngine.apply, PipelineProcessor) remain unchanged but the block is
properly annotated.
- Around line 42-78: Add a language tag to the fenced ASCII diagram block (the
triple backticks that begin the diagram starting with
"┌─────────────────────────────────────────────────────────────┐" and containing
headers like "TEMPO SOURCES" and "TEMPOSYNC MANAGER"); change the opening fence
from ``` to ```text so markdown linters recognize it as plain text and stop
flagging the diagram.
- Around line 239-243: Update the flow diagram code blocks that show "Ableton
Link / MIDI device → TempoSource → TempoSync.get_beat_state() →
PipelineProcessor (inject + modulate) → Pipeline → _notification_loop() → WebRTC
data channel → useTempoSync" (and the similar block at the later section) to
include a language specifier by changing the fenced code blocks from ``` to
```text so markdown linters render them consistently; locate the fenced blocks
containing those exact flow lines and prepend "text" after the opening backticks
for both occurrences.
In `@frontend/src/components/DownloadDialog.tsx`:
- Around line 31-32: The props isCloudMode and onOpenLoRAsSettings declared for
the DownloadDialog component are never used; either remove them from the
DownloadDialogProps interface and from the component's destructuring, or
explicitly destructure-and-discard them to document intentional ignoring (e.g.,
isCloudMode: _isCloudMode, onOpenLoRAsSettings: _onOpenLoRAsSettings) inside the
DownloadDialog function to satisfy type compatibility while avoiding unused
variable warnings.
In `@frontend/src/components/settings/ModulationSection.tsx`:
- Around line 114-119: The current useEffect in ModulationSection (useEffect,
targets, selectedTarget, setSelectedTarget) causes a brief render with a stale
selectedTarget when targets change; to avoid flicker, derive an
effectiveSelectedTarget synchronously during render (e.g., compute a fallback
like targets[0].value when selectedTarget is missing) and use that derived value
in the JSX, or setSelectedTarget synchronously at the same place you update
targets/pipeline (the handler that changes pipeline/targets) so the state is
correct before the next render; update references in the component to use the
derived/synchronously-updated selected value instead of relying solely on the
effect.
- Around line 137-149: handleToggle currently deletes the target config when
disabling, losing user settings; change it to preserve the config object and
only toggle an enabled flag. Specifically, in handleToggle use the existing
config from modulations[selectedTarget] (falling back to
defaultConfigFor(currentTarget)) when pressed is true and set enabled: true, and
when pressed is false set modulations[selectedTarget] = {
...(existingOrDefault), enabled: false } instead of deleting the entry; continue
to call onModulationsChange with the updated modulations object. Ensure you
reference handleToggle, defaultConfigFor, onModulationsChange, modulations,
selectedTarget, and currentTarget when making the change.
In `@frontend/src/components/settings/TempoSyncSection.tsx`:
- Around line 270-298: The BPM input currently only validates on submit and
allows invalid intermediate values; update the input's onBlur handler (currently
setting bpmInputFocusedRef) to parse and clamp bpmInput into the 20–300 range,
call setBpmInput with the clamped string, and invoke onSetBpm(clampedValue) so
the final value is normalized; keep onChange as-is to allow free typing, and
optionally add a short debounce around any future calls to onSetBpm if you want
live updates instead of only on blur/submit (referencing bpmInput, setBpmInput,
bpmInputFocusedRef, and onSetBpm).
In `@frontend/src/components/TempoSyncPanel.tsx`:
- Around line 11-32: TempoSyncPanelProps is declared but not exported,
preventing external modules from importing the prop type; fix by adding an
export to the interface declaration (export interface TempoSyncPanelProps) in
the TempoSyncPanel component file and update any places that should import this
type (e.g., other components or tests) to import { TempoSyncPanelProps } from
the component module; ensure any existing default/export patterns in the file
remain correct after adding the export.
In `@frontend/src/hooks/useUnifiedWebRTC.ts`:
- Around line 167-203: The per-message console.log in dataChannel.onmessage is
currently printing every "tempo_update" packet (handled where data.type ===
"tempo_update"), flooding DevTools; modify the handler around the
console.log("[UnifiedWebRTC] Data channel message:", event.data) so it skips or
samples logs for high-frequency messages — e.g., only log when data.type !==
"tempo_update" or apply a simple sampling/throttle for "tempo_update" (log every
Nth packet or once per second) — leaving other branches (stream_stopped,
change_scheduled, change_applied) unchanged; update the logic in the function
handling dataChannel.onmessage to check data.type before logging.
In `@src/scope/core/pipelines/metronome/pipeline.py`:
- Around line 175-181: Remove the unnecessary first draw used to measure width:
replace the call that invokes _draw_number(canvas, beat_display, 0, 0, scale,
text_color) with a non-drawing width calculation (e.g., call the same underlying
measurement routine or add a helper to measure text width) so you only compute
num_w without rendering, then use _draw_number only once at (center_x,
center_y); do the same for bar_num_w (avoid drawing at 0,0 to measure width and
only draw once at its centered position). Reference: _draw_number, beat_display,
num_w, bar_num_w in pipeline.py.
- Around line 100-103: The current pipeline code that simulates latency uses a
blocking call via time.sleep when reading latency_ms from kwargs, which can
block the pipeline thread and cause input-frame drops; update the latency
simulation in the pipeline (the latency_ms handling inside the pipeline
function) to either (a) replace blocking time.sleep with a non-blocking approach
such as scheduling an asyncio.sleep or deferring the delay to a
worker/threadpool so the pipeline thread isn’t blocked, or (b) if blocking is
intentional for testing, add a clear inline comment and docstring on the
pipeline function explaining that latency_ms intentionally blocks the pipeline
thread and warning about potential frame drops so callers know to avoid using
high latency_ms in production. Ensure references to latency_ms and the pipeline
function/method name are preserved when changing the implementation.
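Option (a) above can be sketched with a timer thread. All names here are illustrative (`deliver_with_latency` is not in the codebase); the point is only that the caller's thread returns immediately instead of sleeping:

```python
# Hypothetical sketch: defer simulated latency to a timer thread instead of
# blocking the pipeline thread with time.sleep(). Names are illustrative.
import threading

def deliver_with_latency(frame, latency_ms: float, deliver) -> None:
    """Call deliver(frame) after latency_ms without blocking the caller."""
    if latency_ms <= 0:
        deliver(frame)
        return
    timer = threading.Timer(latency_ms / 1000.0, deliver, args=(frame,))
    timer.daemon = True  # don't keep the process alive for a test delay
    timer.start()
```

Note that deferred delivery reorders nothing here only because each frame gets the same delay; a real implementation would need to preserve frame order explicitly.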
In `@src/scope/server/schema.py`:
- Around line 857-869: TempoStatusResponse currently types source and beat_state
as dict | None which weakens type safety; define two nested Pydantic models
(e.g., SourceInfo and BeatState) that capture the stable structure (fields like
type, num_peers for source; bpm, beat_phase, bar_position, beat_count,
is_playing, source for beat_state) and replace the dict annotations on
TempoStatusResponse.source and TempoStatusResponse.beat_state with
Optional[SourceInfo] and Optional[BeatState] respectively, updating any code
that constructs or reads these fields to use the new models.
In `@src/scope/server/tempo_sources/link.py`:
- Around line 109-112: set_tempo currently returns silently when self._link is
None which hides misconfiguration; update set_tempo to detect when self._link is
None and either log a warning or raise an error. Inside the set_tempo method
(referencing set_tempo and attribute _link) add a branch that emits a clear
warning (using the existing logger on the class if available, otherwise the
module logging) stating that tempo change was requested but Link is not
initialized, or raise a RuntimeError with that message—choose logging for
non-fatal flows and raise for fatal flows.
In `@tests/test_parameter_scheduler.py`:
- Around line 359-376: The test test_scheduled_change_fires_after_delay is flaky
because it relies on real timers/sleep; modify the test to avoid real-time
dependence by either increasing the tolerance (e.g., sleep to 0.5s) or,
preferably, mock the timer used by the scheduler (mock threading.Timer or the
scheduler's internal timer creation) so schedule() triggers immediately in
tests; locate references to make_scheduler, FakeBeatState, and the
scheduler.schedule(...) call and patch the Timer creation used by that scheduler
instance to a synchronous stub that calls the callback immediately and assert
applied/notifications as before.
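The Timer-patching approach can be sketched like this; `ImmediateTimer` and `fire_later` are illustrative stand-ins for the scheduler's real timer usage:

```python
# Hypothetical sketch: patch threading.Timer with a synchronous stub so a
# delayed callback fires immediately in tests. Names are illustrative.
import threading
from unittest import mock

class ImmediateTimer:
    """Stands in for threading.Timer; runs the callback on start()."""
    def __init__(self, interval, function, args=(), kwargs=None):
        self._function = function
        self._args = args
        self._kwargs = kwargs or {}

    def start(self):
        self._function(*self._args, **self._kwargs)

    def cancel(self):
        pass

def fire_later(delay_s, callback):
    """Toy scheduler: defers callback by delay_s via threading.Timer."""
    timer = threading.Timer(delay_s, callback)
    timer.start()
    return timer
```

Because `fire_later` looks up `threading.Timer` at call time, `mock.patch("threading.Timer", ImmediateTimer)` makes the callback run synchronously inside the test, removing the real-time dependence entirely.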
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c28c4a4a-a47c-44b6-853c-8f61afebbac5
⛔ Files ignored due to path filters (2)
frontend/package-lock.json is excluded by !**/package-lock.json
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (42)
docs/tempo-sync-review.md
frontend/src/components/DownloadDialog.tsx
frontend/src/components/TempoSyncPanel.tsx
frontend/src/components/WorkflowImportDialog.tsx
frontend/src/components/settings/ModulationSection.tsx
frontend/src/components/settings/OscTab.tsx
frontend/src/components/settings/TempoSyncSection.tsx
frontend/src/hooks/useTempoSync.ts
frontend/src/hooks/useUnifiedWebRTC.ts
frontend/src/lib/api.ts
frontend/src/pages/StreamPage.tsx
frontend/src/types/index.ts
pyproject.toml
src/scope/core/pipelines/base_schema.py
src/scope/core/pipelines/defaults.py
src/scope/core/pipelines/krea_realtime_video/schema.py
src/scope/core/pipelines/longlive/schema.py
src/scope/core/pipelines/memflow/schema.py
src/scope/core/pipelines/metronome/__init__.py
src/scope/core/pipelines/metronome/pipeline.py
src/scope/core/pipelines/metronome/schema.py
src/scope/core/pipelines/mod_scope/__init__.py
src/scope/core/pipelines/mod_scope/pipeline.py
src/scope/core/pipelines/mod_scope/schema.py
src/scope/core/pipelines/registry.py
src/scope/core/pipelines/reward_forcing/schema.py
src/scope/core/pipelines/streamdiffusionv2/schema.py
src/scope/core/pipelines/wan2_1/blocks/noise_scale_controller.py
src/scope/core/pipelines/wan2_1/blocks/set_timesteps.py
src/scope/server/app.py
src/scope/server/frame_processor.py
src/scope/server/modulation.py
src/scope/server/parameter_scheduler.py
src/scope/server/pipeline_processor.py
src/scope/server/schema.py
src/scope/server/tempo_sources/__init__.py
src/scope/server/tempo_sources/link.py
src/scope/server/tempo_sources/midi_clock.py
src/scope/server/tempo_sync.py
src/scope/server/tracks.py
src/scope/server/webrtc.py
tests/test_parameter_scheduler.py
// Auto-flag discrete params for beat-quantized gating
if (isQuantizeActive) {
  const NEVER_QUANTIZE = new Set([
    "paused",
    "quantize_mode",
    "lookahead_ms",
    "_quantized",
    "prompt_interpolation_method",
  ]);
  const hasDiscrete = Object.entries(params).some(([key, value]) => {
    if (NEVER_QUANTIZE.has(key)) return false;
    return (
      typeof value === "boolean" ||
      typeof value === "string" ||
      key === "prompts" ||
      key === "reset_cache"
    );
  });
  if (hasDiscrete) {
    params = { ...params, _quantized: true };
  }
}
Quantize detection misses non-primitive discrete updates.
hasDiscrete only treats booleans, strings, prompts, and reset_cache as quantizable. That skips object/array payloads like transition on Lines 631-633 and denoising_step_list on Lines 921-923, so those changes still apply immediately even when beat/bar quantization is enabled. Use a key-based allowlist for discrete runtime params instead of typeof heuristics.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/pages/StreamPage.tsx` around lines 473 - 494, The
quantize-detection logic in the isQuantizeActive block (variables
NEVER_QUANTIZE, hasDiscrete, params, _quantized) wrongly infers discreteness by
typeof and misses non-primitive discrete payloads like transition and
denoising_step_list; change hasDiscrete to use a key-based allowlist instead:
create a DISCRETE_PARAM_KEYS set (include prompts, reset_cache, transition,
denoising_step_list, and any other known composite discrete keys), skip keys in
NEVER_QUANTIZE, and mark params._quantized = true when any key is in
DISCRETE_PARAM_KEYS or matches existing boolean/string checks for backward
compatibility. Ensure you reference the same symbols (isQuantizeActive,
NEVER_QUANTIZE, hasDiscrete, params, _quantized) when making the change.
useEffect(() => {
  const rate = settings.promptCycleRate || "none";
  const items = promptCycleItemsRef.current;
  if (
    rate === "none" ||
    !isStreaming ||
    !tempoState.enabled ||
    items.length < 2
  ) {
    promptCycleBoundaryRef.current = -1;
    return;
  }

  const { beatCount, beatsPerBar } = tempoState;
  let boundary: number;
  if (rate === "beat") boundary = beatCount;
  else if (rate === "bar") boundary = Math.floor(beatCount / beatsPerBar);
  else if (rate === "2_bar")
    boundary = Math.floor(beatCount / (beatsPerBar * 2));
  else if (rate === "4_bar")
    boundary = Math.floor(beatCount / (beatsPerBar * 4));
  else return;

  if (
    boundary !== promptCycleBoundaryRef.current &&
    promptCycleBoundaryRef.current >= 0
  ) {
    promptCycleIndexRef.current =
      (promptCycleIndexRef.current + 1) % items.length;
    const active = items[promptCycleIndexRef.current];

    sendParameterUpdate({
      prompts: [{ text: active.text, weight: 100 }],
    });
  }
  promptCycleBoundaryRef.current = boundary;
}, [
  settings.promptCycleRate,
  tempoState.beatCount,
  tempoState.beatsPerBar,
  tempoState.enabled,
  isStreaming,
  sendParameterUpdate,
]);
Prompt cycling is double-quantized when quantize mode is active.
This effect already waits for a beat/bar boundary before sending the next prompt. Routing that payload through sendParameterUpdate() causes the prompts update to be marked _quantized by the logic on Lines 473-494, so beat/bar scheduling pushes it to the next boundary instead of the one that just fired. With promptCycleRate="beat" and quantizeMode="beat", the active prompt will lag by one beat.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/pages/StreamPage.tsx` around lines 1594 - 1637, The
prompt-cycling effect is being double-quantized because it sends prompt updates
via sendParameterUpdate which the quantize logic will push to the next boundary;
change this effect to send the new prompt immediately by either calling a new
helper (e.g., sendParameterUpdateImmediate) or extending sendParameterUpdate to
accept a flag (e.g., { skipQuantize: true }) and set that flag when updating
prompts from promptCycleIndexRef/current items; update the quantize branch in
the sendParameterUpdate implementation to honor the flag so
promptCycleBoundaryRef/Index logic produces the prompt on the current boundary
rather than being deferred.
TRACE_LENGTH = 192
BEATS_PER_BAR = 4
The beat indicator is locked to 4/4.
BEATS_PER_BAR is hardcoded to 4, and Lines 326-331 wrap bar_position against that constant. Once the user selects 3/4, 5/4, etc., this oscilloscope will highlight the wrong beat and draw the wrong number of dots.
Also applies to: 326-331
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/core/pipelines/mod_scope/pipeline.py` around lines 77 - 79,
BEATS_PER_BAR is hardcoded to 4 causing incorrect beat highlighting; make beats
per bar dynamic by replacing the constant with a configurable variable (e.g.,
beats_per_bar) sourced from pipeline settings or the time signature, update any
code that computes or wraps bar_position (the logic currently at the block
referenced by lines handling bar_position wrapping around BEATS_PER_BAR) to use
beats_per_bar instead of BEATS_PER_BAR, and ensure the rendering logic that
draws the beat dots uses beats_per_bar to determine count and placement so 3/4,
5/4, etc. render correctly.
self.device = (
    device
    if device is not None
    else torch.device("cuda" if torch.cuda.is_available() else "cpu")
)
🧩 Analysis chain
🏁 Script executed:
find . -name "pipeline.py" -path "*/mod_scope/*" | head -5
Repository: daydreamlive/scope
Length of output: 110
🏁 Script executed:
wc -l src/scope/core/pipelines/mod_scope/pipeline.py
Repository: daydreamlive/scope
Length of output: 112
🏁 Script executed:
sed -n '100,115p' src/scope/core/pipelines/mod_scope/pipeline.py
Repository: daydreamlive/scope
Length of output: 618
🏁 Script executed:
sed -n '245,265p' src/scope/core/pipelines/mod_scope/pipeline.py
Repository: daydreamlive/scope
Length of output: 1128
🏁 Script executed:
sed -n '70,85p' src/scope/core/pipelines/mod_scope/pipeline.py
Repository: daydreamlive/scope
Length of output: 482
🏁 Script executed:
sed -n '230,275p' src/scope/core/pipelines/mod_scope/pipeline.py
Repository: daydreamlive/scope
Length of output: 2390
🏁 Script executed:
sed -n '215,235p' src/scope/core/pipelines/mod_scope/pipeline.py
Repository: daydreamlive/scope
Length of output: 908
Avoid Tensor.item() inside the glow rendering loop—it forces device synchronization on every pixel.
The pipeline defaults to CUDA (lines 105-109). With three traced parameters and up to 192 samples per trace, the .item() calls in the glow loop (lines 253-260) trigger a host/device sync for each of the roughly 3,456 glow pixels per frame (3 traces × 192 samples × 2 glow offsets × 3 color channels). This undermines real-time performance. Refactor to use tensor-only operations (e.g., torch.max() instead of Python's max() with scalar readbacks), or write the glow values directly without the intermediate .item() conversion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/core/pipelines/mod_scope/pipeline.py` around lines 105 - 109, The
glow rendering loop is performing per-pixel host/device syncs by calling
Tensor.item() (search for the glow loop that reads tensor values with .item() in
this class), which kills GPU performance; update the glow accumulation to keep
everything on-tensor: remove all .item() usages in the glow loop, replace Python
scalar operations with torch tensor ops (e.g., use torch.max, torch.clamp,
tensor indexing/assignment and tensor-wise arithmetic) and ensure results are
written back as tensors on self.device (the pipeline's device is set via
self.device). Locate the glow loop in this module (the section that reads tensor
values per-sample/pixel and writes glow outputs) and refactor it to compute glow
intensities and color blending entirely with PyTorch tensor operations without
converting to Python scalars.
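The refactor the prompt describes can be illustrated with NumPy standing in for torch (the fix is the same shape in both libraries: `np.maximum` here, `torch.maximum` there). The `add_glow_*` functions are hypothetical, not code from the pipeline:

```python
# Hypothetical sketch: numpy stands in for torch. The idea is to replace a
# per-pixel loop of scalar readbacks (.item()-style) with one array-wide op.
import numpy as np

def add_glow_slow(canvas, ys, xs, glow):
    # Anti-pattern: per-pixel scalar readback; in torch each max() forces
    # a host/device synchronization.
    for y, x in zip(ys, xs):
        canvas[y, x] = max(float(canvas[y, x]), glow)
    return canvas

def add_glow_fast(canvas, ys, xs, glow):
    # Array-only: same result, no per-element readbacks, one kernel launch
    canvas[ys, xs] = np.maximum(canvas[ys, xs], glow)
    return canvas
```

For distinct pixel coordinates the two produce identical canvases; only the vectorized version keeps the work on the device.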
@quantize_mode.setter
def quantize_mode(self, mode: str) -> None:
    valid = ("none", "beat", "bar", "2_bar", "4_bar")
    if mode not in valid:
        logger.warning(f"[SCHEDULER] Invalid quantize mode '{mode}', ignoring")
        return
    self._quantize_mode = mode
    logger.info(f"[SCHEDULER] Quantize mode set to '{mode}'")

@property
def lookahead_ms(self) -> float:
    return self._lookahead_ms

@lookahead_ms.setter
def lookahead_ms(self, ms: float) -> None:
    self._lookahead_ms = max(0.0, float(ms))
    logger.info(f"[SCHEDULER] Lookahead set to {self._lookahead_ms:.0f}ms")
Cancel pending timers when quantize settings change.
A scheduled update survives changes to quantize_mode and lookahead_ms. If a beat-quantized change is pending and the user switches back to "none" or retunes the lookahead, the old timer still fires later and can reapply stale params on the old boundary. Cancel or recompute any pending timer inside both setters.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/server/parameter_scheduler.py` around lines 52 - 68, When changing
quantize settings, cancel or recompute any pending scheduled timer so stale
pending updates don't fire: inside the quantize_mode setter (quantize_mode) and
the lookahead_ms setter (lookahead_ms) ensure you clear any existing pending
timer (e.g., cancel_timer(), self._pending_timer, or similar scheduler handle
used elsewhere) before updating self._quantize_mode or self._lookahead_ms, and
if needed reschedule according to the new mode/lookahead; also log that the
pending timer was canceled or rescheduled to aid debugging.
from aalink import Link

from ..tempo_sync import BeatState, TempoSource
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if aalink import is guarded at the module registration level
rg -n "aalink" --type=py -C3
Repository: daydreamlive/scope
Length of output: 2362
Unconditional aalink import breaks graceful degradation.
The import at line 14 (from aalink import Link) is unconditional and will fail immediately if aalink is not installed, preventing the optional dependency handling in tempo_sync.py (which wraps LinkTempoSource instantiation in a try/except block) from working as designed. The import should be guarded with a try/except block.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/server/tempo_sources/link.py` around lines 14 - 16, The
unconditional import of Link will raise if aalink is missing and prevents
graceful degradation; change the top-level import to a guarded try/except (e.g.,
try: from aalink import Link except Exception: Link = None) so importing this
module never raises, and ensure any code that expects Link (e.g., the
LinkTempoSource class or instantiation in tempo_sync.py) checks for Link is not
None before use.
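The guarded-import pattern looks like this; `require_link` is a hypothetical helper added for illustration, and the error wording is an assumption:

```python
# Hypothetical sketch of the guarded optional import. If aalink is missing,
# Link stays None and importing this module never raises.
try:
    from aalink import Link  # optional dependency for Ableton Link sync
except Exception:
    Link = None

def require_link():
    """Return the Link class, or raise a helpful error when it is missing."""
    if Link is None:
        raise RuntimeError(
            "Ableton Link tempo source requested but 'aalink' is not "
            "installed; install the aalink package to enable it."
        )
    return Link
```

Callers such as the source factory in tempo_sync.py would then check `Link is not None` (or call a helper like this) instead of relying on the import to fail loudly at module load.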
def _on_start(self) -> None:
    self._tick_count = 0
    self._last_tick_time = None
    self._ema_interval = None
    self._is_playing = True
    logger.info("MIDI Start received")

def _on_stop(self) -> None:
    self._is_playing = False
    logger.info("MIDI Stop received")

def _on_continue(self) -> None:
    self._is_playing = True
    self._last_tick_time = None
    logger.info("MIDI Continue received")
Refresh the cached beat snapshot on transport events.
Line 145 only flips _is_playing; get_beat_state() keeps returning the previous cached snapshot until another clock tick arrives. After a Stop, tempo sync can keep reporting is_playing=True and a stale position to schedulers and the UI.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/server/tempo_sources/midi_clock.py` around lines 137 - 151, The
transport handlers (_on_start, _on_stop, _on_continue) currently only flip
self._is_playing and leave the cached beat snapshot stale so get_beat_state()
returns old data; update these handlers to refresh/invalidate the cached beat
snapshot when the transport state changes (e.g., set self._cached_beat_snapshot
= None or call an existing
self._update_beat_snapshot()/self._invalidate_beat_snapshot() helper) so that
subsequent calls to get_beat_state() reflect the new playing/stopped/continued
state immediately.
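The invalidate-on-transport-event idea can be sketched as below. `MidiTransportSketch` and `_cached_beat_snapshot` are illustrative names following the prompt; the real class caches a richer snapshot:

```python
# Hypothetical sketch: invalidate the cached snapshot on transport events so
# get_beat_state() reflects the new state immediately instead of serving
# stale data until the next clock tick.
class MidiTransportSketch:
    def __init__(self):
        self._is_playing = False
        self._cached_beat_snapshot = {"is_playing": False, "beat": 0.0}

    def _on_stop(self) -> None:
        self._is_playing = False
        self._cached_beat_snapshot = None  # force rebuild on next read

    def get_beat_state(self):
        if self._cached_beat_snapshot is None:
            # Rebuild from the current transport flags
            self._cached_beat_snapshot = {
                "is_playing": self._is_playing,
                "beat": 0.0,
            }
        return self._cached_beat_snapshot
```

`_on_start` and `_on_continue` would clear the cache the same way, so a Stop can never keep reporting `is_playing=True` to schedulers or the UI.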
def get_beat_state(self) -> BeatState | None:
    """Get the current beat state. Thread-safe.

    Returns client-forwarded state if fresh, otherwise falls back
    to the server-side tempo source.
    """
    with self._client_state_lock:
        client_state = self._client_beat_state

    if client_state is not None:
        age = time.time() - client_state.timestamp
        if age < CLIENT_BEAT_STATE_STALE_SECONDS:
            return client_state

    with self._source_lock:
        if self._source is not None:
            return self._source.get_beat_state()

    return None
Disabled tempo sync still exposes fresh client beat state.
get_beat_state() never checks _enabled, and disable() does not clear _client_beat_state. If the browser keeps forwarding beat updates, schedulers and modulators can keep receiving beat data after the user disables tempo sync. Gate get_beat_state() on _enabled and clear the cached client state during disable.
Also applies to: 191-203
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/server/tempo_sync.py` around lines 94 - 112, get_beat_state
currently ignores the _enabled flag and disable() doesn't clear the cached
_client_beat_state, so fresh client-forwarded beats remain available after tempo
sync is disabled; fix by early-returning None from get_beat_state when
self._enabled is False (before reading _client_beat_state) and, in disable(),
acquire self._client_state_lock and set self._client_beat_state = None (and keep
existing logic clearing/locking _source/_source_lock); apply the same gating and
cache-clear change to the analogous method around lines 191-203 as well.
src/scope/server/tempo_sync.py (Outdated)
if source_type == "link":
    source = self._create_link_source(bpm)
elif source_type == "midi_clock":
    source = self._create_midi_clock_source(midi_device, beats_per_bar)
else:
    raise ValueError(f"Unknown tempo source type: {source_type}")

if source is None:
    return
Fail fast when a requested source cannot be created.
If _create_link_source() or _create_midi_clock_source() returns None, enable() just returns here. The REST endpoint then answers 200 with enabled=false, which makes a missing dependency or device indistinguishable from a successful enable. Raise a RuntimeError instead so the caller can surface a proper 4xx with the install hint.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/server/tempo_sync.py` around lines 161 - 169, The enable() method
currently returns silently if source creation fails; instead detect when
_create_link_source(...) or _create_midi_clock_source(midi_device,
beats_per_bar) returns None and raise a RuntimeError with a clear message
(including the requested source_type and any install/device hint) so callers can
surface a 4xx; update the code in enable() to replace the `if source is None:
return` branch with raising RuntimeError and ensure the exception includes
sufficient context for the REST layer to show the install hint.
…odulation-2 Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Signed-off-by: BuffMcBigHuge <[email protected]>
Resolve conflicts in pyproject.toml, frame_processor.py, pipeline_processor.py, and uv.lock. Thread tempo_sync and modulation_engine through the new graph executor. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
…ockerfiles MIDI requires local hardware access so libasound2 is not useful in containers. Also corrects uv sync --group to --extra across docs. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafal Leszko <[email protected]>
Remove isCloudMode and onOpenLoRAsSettings props that were added to DownloadDialog solely to suppress TypeScript errors in StreamPage, but were never actually used by the component. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
transition (object) and denoising_step_list (array) were not caught by the typeof === "boolean" / "string" checks, so they bypassed beat quantization. Use a DISCRETE_PARAM_KEYS allowlist for key-based detection of known discrete parameters. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
Moves the 5 tempo endpoints from app.py into tempo_router.py, following the same APIRouter pattern as mcp_router.py. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
mcp>=1.0.0 is already in core dependencies, so the optional-dependencies entry was unnecessary. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
  json_schema_extra=ui_field_config(
-     order=6, component="denoising_steps", is_load_param=True
+     order=6,
+     component="denoising_steps",
Stupid question: so the tempo sync always changes/modulates only the denoising_steps param, right? It's not something a user can map/select?
The system is schema-driven and user-selectable.
Backend pipeline schemas opt in per-field using modulatable=True in ui_field_config(), along with optional modulatable_min/modulatable_max bounds.
This way plugins can opt in!
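A minimal sketch of that per-field opt-in shape, using a stand-in for Scope's `ui_field_config` helper (the real helper and its full signature live in the Scope codebase; only the `modulatable` / `modulatable_min` / `modulatable_max` names come from the comment above):

```python
def ui_field_config(order=None, component=None, modulatable=False,
                    modulatable_min=None, modulatable_max=None):
    """Stand-in: pack UI hints into a dict destined for json_schema_extra."""
    cfg = {"modulatable": modulatable}
    if order is not None:
        cfg["order"] = order
    if component is not None:
        cfg["component"] = component
    if modulatable:
        # Bounds the modulation engine clamps the animated value to.
        cfg["modulatable_min"] = modulatable_min
        cfg["modulatable_max"] = modulatable_max
    return cfg

# A pipeline field opting in to beat-synced modulation:
noise_scale_ui = ui_field_config(order=7, component="slider",
                                 modulatable=True,
                                 modulatable_min=0.0, modulatable_max=1.0)
```

Any field without `modulatable=True` simply never shows up as a modulation target in the UI.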
```python
ALL_PATTERNS = {**_DIGIT_PATTERNS, **_ALPHA_PATTERNS}


TRACE_LENGTH = 192
BEATS_PER_BAR = 4
```
Shouldn't it be a param that we set in UI?
Considering that mod_scope and metronome are simple tests for tempo sync, should we just remove them entirely @leszko ?
We can remove them and move them to plugins. Or if you prefer, we can keep one. But two "hello world" tests are probably too much for the Scope core.
Do I understand correctly that both metronome and mod_scope are sample pipelines to present the usage of tempo sync? If yes, then maybe we need just 1 "hello-world" pipeline like that, wdyt?
I think it's best that we remove both.
```python
# Cache init on discrete jumps from the motion controller
if (
    block_state.manage_cache
    and block_state.current_noise_scale is not None
    and block_state.current_noise_scale != block_state.noise_scale
):
    block_state.init_cache = True
# When noise_controller is off (e.g. during modulation), let noise_scale
# changes flow through without triggering expensive cache resets.
```
Why do we need this change? The modulation takes care of the denoising step list, so why do we change anything about the noise controller?
This change is needed for `noise_scale` modulation (not just denoising steps). LongLive exposes `noise_scale` and `denoising_steps` as modulatable. I'm not entirely happy with noise modulation; it works, but it's not very impressive. I'm sure it can be tuned better.
Make TempoSource an ABC with abstract methods, add _enabled_lock for thread-safe access, prune dead notification senders, unify install hints to use `uv sync --extra`, and clean up redundant .get() fallbacks. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
Extract beat boundary calculation to a standalone get_beat_boundary() in tempo_sync.py and consolidate inline beat state handling into a dedicated _apply_tempo_sync() method. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
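A rough sketch of the math such a boundary helper performs, together with the lookahead compensation the scheduler applies (the signature, `BEATS_PER_BAR`, and lookahead handling here are assumptions, not Scope's actual implementation):

```python
import math

BEATS_PER_BAR = 4  # assumed fixed 4/4 for this sketch

def get_beat_boundary(beat: float, quantize: str = "beat") -> float:
    """Return the next beat (or bar) boundary strictly after `beat`."""
    step = BEATS_PER_BAR if quantize == "bar" else 1
    return math.floor(beat / step) * step + step

def delay_to_boundary(beat: float, bpm: float, quantize: str = "beat",
                      lookahead_ms: float = 0.0) -> float:
    """Seconds to wait so a pending update lands on the boundary,
    firing early by `lookahead_ms` to absorb pipeline latency."""
    beats_away = get_beat_boundary(beat, quantize) - beat
    return max(0.0, beats_away * 60.0 / bpm - lookahead_ms / 1000.0)
```

For example, at 120 BPM and beat 3.5, the next beat boundary is half a beat (0.25 s) away; with a 100 ms lookahead the timer fires after 0.15 s.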
… warnings

- Use `self.enabled` property (with lock) in `get_status()` instead of bare `self._enabled`
- Add `PipelineProcessor.set_beat_cache_reset_rate()` to encapsulate private attribute access
- Downgrade noisy per-frame "Extra params" log from info to debug
- Convert f-string logger calls to %-style lazy formatting across tempo sync modules
- Fix ESLint exhaustive-deps warnings in StreamPage prompt cycle effect

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
```python
# Check for quantized update flag
if data.pop("_quantized", False):
    if session.video_track and hasattr(
        session.video_track, "frame_processor"
    ):
        session.video_track.frame_processor.schedule_quantized_update(
            data
        )
    return
```
I guess this will not work with the cloud. I think we could make it work with the cloud, but I wonder if the tempo sync feature even makes sense there. Won't the latency make it unusable?
```python
return tempo_sync


@router.get("/status", response_model=TempoStatusResponse)
```
This will not be proxied to the cloud. I guess that's correct, but I want to confirm.
Closing this PR in lieu of #703.
Tempo Sync & Modulation

Summary

Adds tempo synchronization and beat-synced parameter modulation to Scope. Pipelines can lock to Ableton Link or MIDI clock, quantize discrete parameter changes to beat boundaries, and continuously modulate parameters (e.g. `noise_scale`, `denoising_steps`) in sync with the beat. Target users are VDJs running Scope alongside Ableton Live, Resolume, or similar applications.

What's in this PR

Core

Tempo sources ship as optional dependencies (`aalink` for Ableton Link, `mido`/`python-rtmidi` for MIDI clock). When neither is installed, the UI shows install hints and the feature degrades gracefully.

Frontend

Modulation UI: target selection (from `ui.modulatable` fields), wave shape (sine, cosine, triangle, saw, square, pulse decay), rate (½ beat → 4 bars), depth.

Pipelines

Demo pipelines for testing, including a metronome with `latency_ms` to simulate pipeline delay.

Modulatable pipelines

LongLive, Krea, MemFlow, RewardForcing, StreamDiffusionV2 expose `denoising_steps` and (LongLive) `noise_scale` as modulatable via `ui_field_config(modulatable=True, ...)`.

Architecture

Beat state (`bpm`, `beat_phase`, `bar_position`, `beat_count`, `is_playing`) is injected into every pipeline call. Modulation config travels as `modulations` in parameter updates and is Pydantic-validated on receipt.

REST API

- `/api/v1/tempo/status`
- `/api/v1/tempo/enable`
- `/api/v1/tempo/disable`
- `/api/v1/tempo/set_tempo`
- `/api/v1/tempo/sources`

Modulation: no dedicated REST API. Config is sent as `modulations` in WebRTC parameter updates.

How to try it
Install optional deps (for Link or MIDI):
Run Scope and start a stream.
Open the Tempo Sync panel in the sidebar. Enable tempo sync, choose Link or MIDI, set BPM.
Quantize mode: Set to "Beat" or "Bar" so changes like prompt switches land on the beat. Adjust lookahead if latency is visible.
Modulation: In the modulation section, pick a target (e.g. Noise Scale), choose shape/rate/depth, and toggle ON.
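The shape/rate/depth controls above can be sketched as a small waveform evaluator (the formulas, clamping, and names here are assumptions for illustration; Scope's actual shapes may differ):

```python
import math

def wave(shape: str, phase: float) -> float:
    """Waveform value in [-1, 1] for a phase in beats (fractional part used)."""
    p = phase % 1.0
    if shape == "sine":
        return math.sin(2 * math.pi * p)
    if shape == "triangle":
        return 1.0 - 4.0 * abs(p - 0.5)
    if shape == "saw":
        return 2.0 * p - 1.0
    if shape == "square":
        return 1.0 if p < 0.5 else -1.0
    if shape == "pulse_decay":
        return math.exp(-5.0 * p)  # sharp hit on the beat, then decay
    raise ValueError(f"unknown shape: {shape}")

def modulate(base: float, depth: float, shape: str, beats: float,
             rate_beats: float = 1.0, lo: float = 0.0, hi: float = 1.0) -> float:
    """Beat-driven parameter value, clamped to the field's modulatable bounds."""
    return min(hi, max(lo, base + depth * wave(shape, beats / rate_beats)))
```

For instance, a noise-scale base of 0.5 with depth 0.25 on a sine at one beat per cycle swings up to 0.75 at the quarter-beat peak, and the `lo`/`hi` clamp keeps the value inside the field's declared bounds.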
Prompt cycling: Add multiple prompts to the timeline, set `promptCycleRate` to "Bar" or "Beat", and prompts will advance automatically.

Demo pipelines: Use `metronome` to test lookahead or `mod-scope` to visualize modulation.

Testing
`tests/test_parameter_scheduler.py`: adversarial tests for boundary math, concurrency, lookahead, and edge cases.

Optional dependencies
Breaking changes
None. Tempo sync and modulation are additive. Pipelines that don't use beat kwargs are unaffected.
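The "additive" claim can be sketched like this: beat state arrives as extra keyword arguments, so a pipeline that never declares them (but accepts `**kwargs`) behaves exactly as before, while a beat-aware one opts in. The kwarg names come from this PR's architecture notes; the calling convention shown is an assumption:

```python
# Beat state as injected into a pipeline call (names from the PR description).
BEAT_KWARGS = {"bpm": 120.0, "beat_phase": 0.25, "bar_position": 1.25,
               "beat_count": 5, "is_playing": True}

class LegacyPipeline:
    def __call__(self, frame, **kwargs):
        # Never looks at beat state -> behavior unchanged by the feature.
        return frame

class BeatAwarePipeline:
    def __call__(self, frame, *, beat_phase=0.0, **kwargs):
        # Example use: mark frames that land near the downbeat.
        return "downbeat" if beat_phase < 0.1 else frame

legacy_out = LegacyPipeline()("frame-0", **BEAT_KWARGS)
aware_out = BeatAwarePipeline()("frame-0", **BEAT_KWARGS)
on_beat = BeatAwarePipeline()("frame-1", **{**BEAT_KWARGS, "beat_phase": 0.02})
```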
Documentation

`docs/tempo-sync-review.md`: implementation review, architecture, API, edge cases, and future work.