Add MCP server for AI-assisted pipeline control #660
Conversation
Expose Scope's API as MCP (Model Context Protocol) tools so AI assistants can programmatically manage pipelines, control parameters, capture frames, and interact with a running Scope instance over stdio.

Backend:
- Add mcp_server.py: a thin MCP-over-stdio client that proxies to the Scope HTTP API, with tools for pipeline lifecycle, parameter control, frame capture, session metrics, recording, LoRA/plugin management, and more
- Add REST endpoints for headless sessions (start/stop stream, get/set parameters, capture frame, session metrics, unload pipeline, recording)
- Add HeadlessSession to WebRTCManager for sessions without a browser
- Add broadcast_notification() for pushing parameter changes to frontends
- Add VideoProcessingTrack.get_last_frame() public accessor
- Defer logging setup to run_server() so the MCP subprocess does not create a competing log file that shadows the server's logs
- Add --mcp CLI flag and optional `mcp` dependency group

Frontend:
- Add onParametersUpdated callback to useUnifiedWebRTC for external parameter updates (REST API, MCP, OSC)
- Extract applyBackendParamsToSettings in StreamPage so both local sends and external pushes share the same settings-sync path

Signed-off-by: RyanOnTheInside <[email protected]>
📝 Walkthrough

The pull request introduces MCP (Model Context Protocol) server support with corresponding parameter synchronization between backend and frontend. It adds configuration files, a complete MCP server implementation exposing pipeline and session APIs as tools, new REST endpoints for session management and parameter control, WebRTC infrastructure for headless sessions, and frontend callback integration for real-time parameter updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Frontend as Frontend<br/>(StreamPage)
    participant WebRTC as WebRTC<br/>(useUnifiedWebRTC)
    participant Backend as Backend<br/>(app.py)
    participant Manager as WebRTCManager
    participant Session as Frame<br/>Processor
    User->>Frontend: Update parameters<br/>(e.g., prompts, settings)
    Frontend->>Backend: POST /api/v1/session/parameters<br/>(Parameters)
    Backend->>Manager: broadcast_parameter_update(params)
    Manager->>Session: Forward parameter update
    Session->>Session: Process updated parameters
    Manager->>Backend: Acknowledge broadcast
    Backend->>Frontend: Response OK
    Note over Session: Parameter applied<br/>in processing pipeline
    Session-->>WebRTC: Data channel message<br/>(type: "parameters_updated")
    WebRTC->>WebRTC: Invoke onParametersUpdated callback
    WebRTC->>Frontend: applyBackendParamsToSettings(params)
    Frontend->>Frontend: Update state<br/>(prompts, vace, overrides)
    Frontend->>User: Reflect updated parameters
```
```mermaid
sequenceDiagram
    actor MCPClient as MCP Client<br/>(Claude, etc.)
    participant MCPServer as MCP Server<br/>(mcp_server.py)
    participant HTTPClient as HTTP Client<br/>(httpx)
    participant Backend as Scope Backend<br/>(app.py)
    participant Manager as WebRTCManager
    MCPClient->>MCPServer: Call MCP tool<br/>(e.g., update_parameters)
    MCPServer->>HTTPClient: POST /api/v1/session/parameters
    HTTPClient->>Backend: HTTP Request
    Backend->>Manager: broadcast_parameter_update(params)
    Manager->>Manager: Update all sessions
    Backend->>HTTPClient: Response
    HTTPClient->>MCPServer: HTTP Response
    MCPServer->>MCPClient: Tool result (JSON)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~95 minutes
🚥 Pre-merge checks: ✅ Passed (3 passed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/src/pages/StreamPage.tsx (1)
463-468: ⚠️ Potential issue | 🟡 Minor

Potential race condition when merging `schemaFieldOverrides`.

When multiple backend parameter updates arrive in rapid succession (e.g., from MCP commands), each invocation reads `settings.schemaFieldOverrides` from the closure. If two updates arrive before React re-renders, both will merge with the same stale base, and the second update may overwrite keys set by the first.

Consider using a ref to hold the latest `schemaFieldOverrides` value, or refactor `updateSettings` to accept a functional updater pattern:

🔧 Suggested approach using a ref

```diff
+  const schemaFieldOverridesRef = useRef(settings.schemaFieldOverrides);
+  useEffect(() => {
+    schemaFieldOverridesRef.current = settings.schemaFieldOverrides;
+  }, [settings.schemaFieldOverrides]);

   const applyBackendParamsToSettings = useCallback(
     (params: Record<string, unknown>) => {
       // ... existing code ...
       if (Object.keys(overrideUpdates).length > 0) {
         settingsUpdate.schemaFieldOverrides = {
-          ...(settings.schemaFieldOverrides ?? {}),
+          ...(schemaFieldOverridesRef.current ?? {}),
           ...overrideUpdates,
         };
       }
       // ...
     },
-    [updateSettings, settings]
+    [updateSettings]
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/StreamPage.tsx` around lines 463 - 468, The merge of schemaFieldOverrides using the closed-over settings object can cause lost updates; change the update flow so merges use the latest value (e.g., read from a ref or use a functional updater) instead of relying on settings in the closure. Specifically, ensure the code that applies overrideUpdates into settingsUpdate.schemaFieldOverrides reads the current overrides via a stable ref (e.g., schemaFieldOverridesRef.current) or by calling updateSettings with a functional form that receives prev => ({ ...prev, schemaFieldOverrides: { ...(prev.schemaFieldOverrides ?? {}), ...overrideUpdates } })). Update references to settings.schemaFieldOverrides, overrideUpdates, and the call site of updateSettings/ settingsUpdate so concurrent rapid updates merge correctly.
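The stale-closure hazard described above can be modeled in a few lines of plain Python. This is an illustrative sketch, not the React code under review: two "updates" that both merge into the same captured snapshot lose keys, while functional updaters, each receiving the previous result (analogous to React's `setState(prev => ...)`), compose correctly.

```python
def merge_overrides(base: dict, patch: dict) -> dict:
    """Merge a patch into a captured (possibly stale) snapshot."""
    return {**base, **patch}


stale = {}  # both rapid updates captured this same snapshot

a = merge_overrides(stale, {"steps": 4})
b = merge_overrides(stale, {"cfg": 7})
# Whichever result lands last wins wholesale: b is missing the "steps" key.
assert "steps" not in b


def apply_functional(state: dict, updaters) -> dict:
    """Thread the latest state through each updater, as functional setState does."""
    for update in updaters:
        state = update(state)
    return state


merged = apply_functional({}, [
    lambda prev: {**prev, "steps": 4},
    lambda prev: {**prev, "cfg": 7},
])
assert merged == {"steps": 4, "cfg": 7}
```

The ref-based fix in the comment above achieves the same effect: each merge reads the latest value rather than a snapshot closed over at callback-creation time.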
🧹 Nitpick comments (3)
src/scope/server/webrtc.py (1)
104-107: Redundant runtime import of `FrameProcessor`.

`FrameProcessor` is already imported at line 32 under `TYPE_CHECKING`. The runtime import at line 104 is unnecessary since it's only used for the type annotation at line 107, not for instantiation.

♻️ Remove redundant import

```diff
 def __init__(
     self,
     frame_processor: "FrameProcessor",
     session_id: str | None = None,
 ):
-    from .frame_processor import FrameProcessor
-
     self.id = session_id or str(uuid.uuid4())
     self.frame_processor: FrameProcessor = frame_processor
```

The type annotation `"FrameProcessor"` (string form) works with the `TYPE_CHECKING` import at line 32.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/scope/server/webrtc.py` around lines 104 - 107, Remove the runtime import of FrameProcessor and rely on the TYPE_CHECKING-only import and a forward/quoted type annotation: delete the line "from .frame_processor import FrameProcessor" near the top of the constructor and keep the TYPE_CHECKING import at line 32, leaving the attribute annotation as self.frame_processor: "FrameProcessor" = frame_processor (or enable from __future__ import annotations) so the import is only used for typing and not executed at runtime.

.mcp.json (1)
1-9: Configuration assumes Scope server runs on port 8002.

The `--port 8002` argument tells the MCP server to connect to a Scope HTTP server at `localhost:8002`. Users must start the Scope server on port 8002 before the MCP server can function. Consider documenting this in the README or adding a comment.

If port 8002 is not the standard port, you may want to use the default port 8000 for easier setup:

```diff
-  "args": ["run", "daydream-scope", "--mcp", "--port", "8002"]
+  "args": ["run", "daydream-scope", "--mcp", "--port", "8000"]
```

Alternatively, document the expected workflow in the PR description or README.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.mcp.json around lines 1 - 9, The config hardcodes the Scope server port to 8002 in mcpServers.scope.args (the "--port", "8002" argument passed to the "uv run daydream-scope" command), which requires users to start Scope on that port; either change the port to the more common default ("8000") or add README/PR documentation and an inline comment explaining that the MCP expects a Scope HTTP server at localhost:8002 and instructing users to start Scope with the same --port value before running the MCP server—update the .mcp.json args and/or project docs accordingly.

frontend/src/pages/StreamPage.tsx (1)
498-505: Consider the redundancy with existing handlers.

The wrapper applies parameters to local settings, which is useful for code paths that call `sendParameterUpdate` without first updating local state. However, most handlers (e.g., `handleNoiseScaleChange` at line 877) already call `updateSettings` directly before calling `sendParameterUpdate`, resulting in duplicate state updates for the same values.

This is functionally correct and the trade-off for simplicity is reasonable, but you could avoid extra re-renders by having the wrapper skip applying params that match known handler patterns, or by removing the direct `updateSettings` calls from handlers and relying solely on the wrapper.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/StreamPage.tsx` around lines 498 - 505, The wrapper sendParameterUpdate currently calls sendParameterUpdateWebRTC and always applyBackendParamsToSettings, causing duplicate updates when handlers like handleNoiseScaleChange already call updateSettings; fix by making one consistent approach: either (A) remove direct updateSettings calls from handlers such as handleNoiseScaleChange and rely on sendParameterUpdate to call applyBackendParamsToSettings, or (B) change sendParameterUpdate to first compare incoming params with current settings and only call applyBackendParamsToSettings for keys whose values differ; update references to sendParameterUpdateWebRTC, applyBackendParamsToSettings, handleNoiseScaleChange, and updateSettings accordingly so only one state update occurs per change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 589d55fd-bfb8-4106-85b7-f1d4f8423571
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- .mcp.json
- frontend/src/hooks/useUnifiedWebRTC.ts
- frontend/src/pages/StreamPage.tsx
- pyproject.toml
- src/scope/server/app.py
- src/scope/server/mcp_server.py
- src/scope/server/tracks.py
- src/scope/server/webrtc.py
src/scope/server/mcp_server.py
Outdated
```python
    ),
)

client = httpx.AsyncClient(base_url=base_url, timeout=300.0)
```
The httpx.AsyncClient is never closed, causing a resource leak.
The client is created at module initialization within create_mcp_server() but there's no cleanup when the MCP server shuts down. This leaves HTTP connections open.
🔧 Proposed fix using async context manager pattern

Consider using a lifespan context or otherwise ensuring cleanup. One approach:

```diff
-client = httpx.AsyncClient(base_url=base_url, timeout=300.0)
+# Note: FastMCP may provide lifecycle hooks; otherwise document that
+# the client persists for the process lifetime (acceptable for stdio)
+client = httpx.AsyncClient(base_url=base_url, timeout=300.0)
```

Since the MCP server runs as a subprocess and exits when stdio closes, this is acceptable in practice. However, adding a comment clarifying this design decision would help maintainability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/server/mcp_server.py` at line 54, The httpx.AsyncClient assigned to
the module-level variable client inside create_mcp_server() is never closed,
causing a resource leak; update create_mcp_server() to ensure the AsyncClient is
closed on server shutdown (for example register an async shutdown handler or use
an async context/lifespan so client.aclose() is called), referencing the client
variable and create_mcp_server function to locate the code; alternatively, if
you intentionally rely on process exit to release resources, add a clear comment
in create_mcp_server() documenting that design choice and why explicit cleanup
is omitted.
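A lifespan-style cleanup along the lines suggested above can be sketched as follows. This is a hedged illustration: `StubClient` stands in for `httpx.AsyncClient` so the snippet runs standalone, and `client_lifespan` is a hypothetical helper, not a FastMCP API. The point is that placing `aclose()` in a `finally` block of an async context manager guarantees the client is closed on shutdown, even if the server loop raises.

```python
import asyncio
from contextlib import asynccontextmanager


class StubClient:
    """Stand-in for httpx.AsyncClient with the same aclose() contract."""

    def __init__(self):
        self.closed = False

    async def aclose(self):
        self.closed = True


@asynccontextmanager
async def client_lifespan():
    """Create the client on entry and guarantee cleanup on exit."""
    client = StubClient()
    try:
        yield client
    finally:
        await client.aclose()  # runs on normal exit and on errors


async def main():
    async with client_lifespan() as client:
        assert not client.closed  # open while the "server" runs
    return client


client = asyncio.run(main())
```

With the real `httpx.AsyncClient`, the same shape applies: create it inside the lifespan, hand it to the tools, and let the context manager call `aclose()` when stdio closes.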
```python
@mcp.tool()
async def resolve_workflow(workflow_json: str) -> str:
    """Resolve dependencies for a workflow import (checks pipelines, LoRAs, plugins).

    Args:
        workflow_json: The workflow JSON string to resolve dependencies for
    """
    workflow = json.loads(workflow_json)
    resp = await client.post("/api/v1/workflow/resolve", json=workflow)
    return await _json(resp)
```
Add error handling for malformed JSON in resolve_workflow.
json.loads(workflow_json) will raise JSONDecodeError if the input is not valid JSON. This exception will propagate as an unhandled error rather than a user-friendly message.
🛡️ Proposed fix to handle JSON parsing errors

```diff
 @mcp.tool()
 async def resolve_workflow(workflow_json: str) -> str:
     """Resolve dependencies for a workflow import (checks pipelines, LoRAs, plugins).

     Args:
         workflow_json: The workflow JSON string to resolve dependencies for
     """
-    workflow = json.loads(workflow_json)
+    try:
+        workflow = json.loads(workflow_json)
+    except json.JSONDecodeError as e:
+        return json.dumps({"error": f"Invalid JSON: {e}"}, indent=2)
     resp = await client.post("/api/v1/workflow/resolve", json=workflow)
     return await _json(resp)
```
return await _json(resp)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| @mcp.tool() | |
| async def resolve_workflow(workflow_json: str) -> str: | |
| """Resolve dependencies for a workflow import (checks pipelines, LoRAs, plugins). | |
| Args: | |
| workflow_json: The workflow JSON string to resolve dependencies for | |
| """ | |
| workflow = json.loads(workflow_json) | |
| resp = await client.post("/api/v1/workflow/resolve", json=workflow) | |
| return await _json(resp) | |
| `@mcp.tool`() | |
| async def resolve_workflow(workflow_json: str) -> str: | |
| """Resolve dependencies for a workflow import (checks pipelines, LoRAs, plugins). | |
| Args: | |
| workflow_json: The workflow JSON string to resolve dependencies for | |
| """ | |
| try: | |
| workflow = json.loads(workflow_json) | |
| except json.JSONDecodeError as e: | |
| return json.dumps({"error": f"Invalid JSON: {e}"}, indent=2) | |
| resp = await client.post("/api/v1/workflow/resolve", json=workflow) | |
| return await _json(resp) |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/server/mcp_server.py` around lines 529 - 538, resolve_workflow
currently calls json.loads(workflow_json) without handling malformed input; wrap
that call in a try/except catching json.JSONDecodeError (imported from json) and
return or raise a clear, user-facing error message that includes the
JSONDecodeError text (e.g., "Invalid workflow JSON: {e}") so the mcp.tool
handler (resolve_workflow) returns a friendly error instead of an unhandled
exception; keep the rest of the function (client.post and _json(resp))
unchanged.
leszko
left a comment
Added some comments. In general:

- I really love the idea of having the MCP Server in Scope ❤️
- I think we should try to limit the changes to the API (`app.py`); if possible, make no changes at all
- I think the whole MCP logic should be executed only when Scope is started with the `--mcp` flag
src/scope/server/app.py
Outdated
```python
    raise HTTPException(status_code=500, detail=str(e)) from e


@app.post("/api/v1/session/parameters")
```
I wonder if we should move all of these functions into a separate router, and have them enabled only if daydream-scope is started with the --mcp flag. The reason I mention this is that we now introduce new functions to modify the params (and other data), which brings complexity. E.g., soon we'll no longer have a concept of broadcasting parameters, because params will be per node.
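The gating suggested above can be sketched as follows. This is an illustrative stand-in, not the actual app.py: `Router` and `App` are simplified substitutes for FastAPI's `APIRouter` and `FastAPI` objects so the snippet runs without the dependency, and the endpoint shown is one of the MCP-only routes from this PR. The real code would presumably use `fastapi.APIRouter` and `app.include_router`, guarded by the `--mcp` flag.

```python
class Router:
    """Minimal stand-in for an APIRouter: records (method, path) -> handler."""

    def __init__(self, prefix: str = ""):
        self.prefix = prefix
        self.routes = {}

    def post(self, path: str):
        def register(fn):
            self.routes[("POST", self.prefix + path)] = fn
            return fn
        return register


class App(Router):
    """Minimal stand-in for the application object."""

    def include_router(self, router: Router):
        self.routes.update(router.routes)


mcp_router = Router(prefix="/api/v1")


@mcp_router.post("/session/parameters")
def set_parameters(params):
    return {"ok": True, "params": params}


def create_app(mcp_enabled: bool) -> App:
    app = App()
    if mcp_enabled:  # MCP-only endpoints exist only when started with --mcp
        app.include_router(mcp_router)
    return app


assert ("POST", "/api/v1/session/parameters") in create_app(True).routes
assert ("POST", "/api/v1/session/parameters") not in create_app(False).routes
```

Grouping the MCP endpoints in one router keeps the default API surface unchanged for users who never pass the flag.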
src/scope/server/app.py
Outdated
```python
    count = 0
    for _sid, fp, _headless in webrtc_manager.iter_frame_processors():
        all_params.update(fp.parameters)
        count += 1
```
I think the count does not make much sense. We don't support multiple sessions, so this one will always be 1.
src/scope/server/app.py
Outdated
```python
    raise HTTPException(status_code=500, detail=str(e)) from e


@app.post("/api/v1/pipeline/unload")
```
Why do we need an endpoint for the pipeline unload?
src/scope/server/app.py
Outdated
```python
    raise HTTPException(status_code=500, detail=str(e)) from e


@app.post("/api/v1/session/{session_id}/recording/start")
```
- What's the use for recording start/stop?
- Why do we use the `session_id` param here while other endpoints don't have `session_id`? In general I don't think it makes much sense to have session_id since we don't support a multi-session setup.
```python
def create_mcp_server(base_url: str = "http://localhost:8000") -> FastMCP:
    """Create and configure the MCP server with all Scope tools."""

    mcp = FastMCP(
```
Is this FastMCP a separate process? If not, then maybe we don't need to add any new endpoints into app.py, but just make code calls. That would keep our API simpler.
src/scope/server/webrtc.py
Outdated
```python
            logger.error(f"Failed to add ICE candidate to session {session_id}: {e}")
            raise ValueError(f"Invalid ICE candidate: {e}") from e

    def iter_frame_processors(self):
```
We don't need this. We always have just 1 frame processor.
src/scope/server/tracks.py
Outdated
```python
    def get_last_frame(self):
        """Return the most recently rendered frame, or None."""
        return self._last_frame
```
I guess we would need to have some lock for that.
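The locking suggested above can be sketched as follows. This is a hedged, simplified stand-in, not the real `VideoProcessingTrack`: the render loop writes `_last_frame` from one thread while the REST/MCP frame-capture path reads it from another, so both sides take the same lock. The `FrameHolder` class and `set_last_frame` method are illustrative names.

```python
import threading


class FrameHolder:
    """Simplified stand-in showing lock-protected access to the last frame."""

    def __init__(self):
        self._lock = threading.Lock()
        self._last_frame = None

    def set_last_frame(self, frame):
        # Called from the render loop; holds the lock only for the assignment.
        with self._lock:
            self._last_frame = frame

    def get_last_frame(self):
        """Return the most recently rendered frame, or None."""
        with self._lock:
            return self._last_frame


holder = FrameHolder()
assert holder.get_last_frame() is None
holder.set_last_frame("frame-1")
assert holder.get_last_frame() == "frame-1"
```

For a single object reference, CPython's GIL makes the bare read effectively atomic, so the lock mainly matters if the accessor is ever extended to copy or mutate the frame while the renderer replaces it.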
Signed-off-by: RyanOnTheInside <[email protected]>
Signed-off-by: RyanOnTheInside <[email protected]>
…onboard-mcp/mcp Signed-off-by: Rafał Leszko <[email protected]> # Conflicts: # src/scope/server/app.py
🚀 fal.ai Preview Deployment

Testing: Connect to this preview deployment by running this on your branch. 🧪 E2E tests will run automatically against this deployment.

✅ E2E Tests passed

Test Artifacts: Check the workflow run for screenshots.
These endpoints are not needed for MCP workflows - removes start_recording, stop_recording, and unload_pipeline tools along with their references in instructions and docstrings. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
Replaces session_id path parameters with implicit active-session lookup for stop, recording start/stop endpoints, aligning the REST API with MCP server usage patterns. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafał Leszko <[email protected]>
…ndpoints Remove recording and pipeline unload endpoints from MCP router, and refactor WebRTCManager from a multi-session dict to a single headless session slot since only one headless session is supported at a time. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafal Leszko <[email protected]>
HeadlessSession has no WebRTC dependencies — it runs FrameProcessor directly. Moving it to headless_session.py clarifies the separation between WebRTC and MCP-only session management. Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafal Leszko <[email protected]>
Consistent with sibling module naming (webrtc.py, recording.py). Co-Authored-By: Claude Opus 4.6 <[email protected]> Signed-off-by: Rafal Leszko <[email protected]>
Signed-off-by: RyanOnTheInside <[email protected]>
Summary

- Add an MCP server that exposes Scope's API as tools so AI assistants can manage pipelines, control parameters, capture frames, and interact with a running Scope instance over stdio
- Add an `onParametersUpdated` callback to the frontend WebRTC hook so external parameter changes (via MCP, REST, OSC) sync the UI
- Fix `get_logs` returning empty results instead of the server's actual logs

Details

- MCP server (`src/scope/server/mcp_server.py`): Thin stdio client that proxies to the Scope HTTP API. Covers pipeline lifecycle, runtime parameters, frame capture, session metrics, recording, LoRA/plugin/asset management, input sources, OSC, workflows, and API keys. Run with `daydream-scope --mcp --port <port>`.
- New REST endpoints: `POST/GET /api/v1/session/parameters`, `GET /api/v1/session/frame`, `GET /api/v1/session/metrics`, `POST /api/v1/session/start`, `POST /api/v1/session/{id}/stop`, `POST /api/v1/pipeline/unload`, `POST /api/v1/session/{id}/recording/start|stop`.
- Backend changes: `HeadlessSession` class and headless session management on `WebRTCManager`, `broadcast_notification()` for pushing parameter changes to frontends, `VideoProcessingTrack.get_last_frame()` public accessor, deferred logging setup to avoid MCP subprocess interference.
- Frontend changes: `useUnifiedWebRTC` gains an `onParametersUpdated` callback; `StreamPage` extracts `applyBackendParamsToSettings` so both local sends and external pushes share the same settings-sync path.

Test plan

- Run `daydream-scope` and verify the server starts without logging errors
- Run `daydream-scope --mcp --port 8000` and verify the MCP server connects
- Call `get_logs` via MCP and verify it returns actual server logs (not empty)