Add MCP server for AI-assisted pipeline control #660

Merged
ryanontheinside merged 10 commits into main from ryanontheinside/feat/onboard-mcp/mcp
Mar 12, 2026
Conversation

Collaborator

@ryanontheinside ryanontheinside commented Mar 11, 2026

Summary

  • Adds an MCP (Model Context Protocol) server that exposes Scope's API as tools, enabling AI assistants to programmatically manage pipelines, control parameters, capture frames, and interact with a running instance over stdio
  • Adds REST endpoints for headless sessions (start/stop, parameter get/set, frame capture, metrics, recording) so the MCP server (and other clients) can drive pipelines without a browser
  • Adds onParametersUpdated callback to the frontend WebRTC hook so external parameter changes (via MCP, REST, OSC) sync the UI
  • Fixes a bug where the MCP subprocess created a competing log file at import time, causing get_logs to return empty results instead of the server's actual logs

Details

MCP server (src/scope/server/mcp_server.py): Thin stdio client that proxies to the Scope HTTP API. Covers pipeline lifecycle, runtime parameters, frame capture, session metrics, recording, LoRA/plugin/asset management, input sources, OSC, workflows, and API keys. Run with daydream-scope --mcp --port <port>.

New REST endpoints: POST/GET /api/v1/session/parameters, GET /api/v1/session/frame, GET /api/v1/session/metrics, POST /api/v1/session/start, POST /api/v1/session/{id}/stop, POST /api/v1/pipeline/unload, POST /api/v1/session/{id}/recording/start|stop.

Backend changes: HeadlessSession class and headless session management on WebRTCManager, broadcast_notification() for pushing parameter changes to frontends, VideoProcessingTrack.get_last_frame() public accessor, deferred logging setup to avoid MCP subprocess interference.

Frontend changes: useUnifiedWebRTC gains onParametersUpdated callback; StreamPage extracts applyBackendParamsToSettings so both local sends and external pushes share the same settings-sync path.

Test plan

  • Run daydream-scope and verify server starts without logging errors
  • Run daydream-scope --mcp --port 8000 and verify MCP server connects
  • Load a pipeline via MCP, start a headless stream, capture a frame
  • Update parameters via MCP and verify the frontend UI reflects changes
  • Call get_logs via MCP and verify it returns actual server logs (not empty)
  • Verify WebRTC streaming still works normally through the browser UI

Summary by CodeRabbit

  • New Features
    • Added MCP (Model Context Protocol) server support for remote integration and automation
    • Enabled real-time parameter synchronization from backend to frontend UI
    • Added session management endpoints (start, stop sessions)
    • Added recording management (start, stop, download recordings)
    • Added frame capture and performance metrics endpoints
    • Enabled headless session execution without WebRTC connections

Expose Scope's API as MCP (Model Context Protocol) tools so AI assistants
can programmatically manage pipelines, control parameters, capture frames,
and interact with a running Scope instance over stdio.

Backend:
- Add mcp_server.py: thin MCP-over-stdio client that proxies to the Scope
  HTTP API, with tools for pipeline lifecycle, parameter control, frame
  capture, session metrics, recording, LoRA/plugin management, and more
- Add REST endpoints for headless sessions (start/stop stream, get/set
  parameters, capture frame, session metrics, unload pipeline, recording)
- Add HeadlessSession to WebRTCManager for sessions without a browser
- Add broadcast_notification() for pushing parameter changes to frontends
- Add VideoProcessingTrack.get_last_frame() public accessor
- Defer logging setup to run_server() so the MCP subprocess does not
  create a competing log file that shadows the server's logs
- Add --mcp CLI flag and optional `mcp` dependency group

Frontend:
- Add onParametersUpdated callback to useUnifiedWebRTC for external
  parameter updates (REST API, MCP, OSC)
- Extract applyBackendParamsToSettings in StreamPage so both local sends
  and external pushes share the same settings-sync path

Signed-off-by: RyanOnTheInside <[email protected]>
coderabbitai bot commented Mar 11, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

📝 Walkthrough

The pull request introduces MCP (Model Context Protocol) server support with corresponding parameter synchronization between backend and frontend. It adds configuration files, a complete MCP server implementation exposing pipeline and session APIs as tools, new REST endpoints for session management and parameter control, WebRTC infrastructure for headless sessions, and frontend callback integration for real-time parameter updates.

Changes

  • Configuration & Dependencies (.mcp.json, pyproject.toml): New MCP server configuration and optional dependency group mcp>=1.0.0 for FastMCP framework support.
  • Frontend Parameter Syncing (frontend/src/hooks/useUnifiedWebRTC.ts, frontend/src/pages/StreamPage.tsx): Added optional onParametersUpdated callback to the WebRTC hook; integrated the parameter callback in StreamPage to translate backend parameter updates into frontend settings state, including prompt items and schema field overrides.
  • Backend Session Management APIs (src/scope/server/app.py): Reorganized logging initialization; added nine new REST endpoints for parameter syncing, frame capture, metrics, stream lifecycle (start/stop), pipeline management, and recording control. Introduced the --mcp CLI flag and a StartStreamRequest model for headless session creation.
  • MCP Server Implementation (src/scope/server/mcp_server.py): New 600-line MCP server module exposing a comprehensive tool set including pipeline management, runtime parameters, session observation, recording, asset/plugin management, and system monitoring. Includes async HTTP client integration and stdio transport setup.
  • WebRTC Session Infrastructure (src/scope/server/webrtc.py, src/scope/server/tracks.py): Introduced HeadlessSession class for non-WebRTC pipeline execution. Extended WebRTCManager with headless session management, frame processor iteration, parameter broadcasting, and notification delivery to both real and headless sessions. Added frame accessor in VideoProcessingTrack.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Frontend as Frontend<br/>(StreamPage)
    participant WebRTC as WebRTC<br/>(useUnifiedWebRTC)
    participant Backend as Backend<br/>(app.py)
    participant Manager as WebRTCManager
    participant Session as Frame<br/>Processor

    User->>Frontend: Update parameters<br/>(e.g., prompts, settings)
    Frontend->>Backend: POST /api/v1/session/parameters<br/>(Parameters)
    Backend->>Manager: broadcast_parameter_update(params)
    Manager->>Session: Forward parameter update
    Session->>Session: Process updated parameters
    Manager->>Backend: Acknowledge broadcast
    Backend->>Frontend: Response OK
    
    Note over Session: Parameter applied<br/>in processing pipeline
    
    Session-->>WebRTC: Data channel message<br/>(type: "parameters_updated")
    WebRTC->>WebRTC: Invoke onParametersUpdated callback
    WebRTC->>Frontend: applyBackendParamsToSettings(params)
    Frontend->>Frontend: Update state<br/>(prompts, vace, overrides)
    Frontend->>User: Reflect updated parameters
sequenceDiagram
    actor MCPClient as MCP Client<br/>(Claude, etc.)
    participant MCPServer as MCP Server<br/>(mcp_server.py)
    participant HTTPClient as HTTP Client<br/>(httpx)
    participant Backend as Scope Backend<br/>(app.py)
    participant Manager as WebRTCManager

    MCPClient->>MCPServer: Call MCP tool<br/>(e.g., update_parameters)
    MCPServer->>HTTPClient: POST /api/v1/session/parameters
    HTTPClient->>Backend: HTTP Request
    Backend->>Manager: broadcast_parameter_update(params)
    Manager->>Manager: Update all sessions
    Backend->>HTTPClient: Response
    HTTPClient->>MCPServer: HTTP Response
    MCPServer->>MCPClient: Tool result (JSON)

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~95 minutes

Poem

🐰 Parameter whispers flow like streams,
From backend dreams to frontend schemes,
MCP tools unlock the API door,
Headless sessions do much more—
Daydream Scope now speaks in tools! 🌙✨

🚥 Pre-merge checks (3 passed)

  • Title check ✅ Passed: The title 'Add MCP server for AI-assisted pipeline control' accurately summarizes the main change—introducing an MCP server with related infrastructure, new API endpoints, and UI callback wiring for external parameter control.
  • Docstring Coverage ✅ Passed: Docstring coverage is 91.03%, above the required 80.00% threshold.
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.


@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
frontend/src/pages/StreamPage.tsx (1)

463-468: ⚠️ Potential issue | 🟡 Minor

Potential race condition when merging schemaFieldOverrides.

When multiple backend parameter updates arrive in rapid succession (e.g., from MCP commands), each invocation reads settings.schemaFieldOverrides from the closure. If two updates arrive before React re-renders, both will merge with the same stale base, and the second update may overwrite keys set by the first.

Consider using a ref to hold the latest schemaFieldOverrides value, or refactor updateSettings to accept a functional updater pattern:

🔧 Suggested approach using a ref
+ const schemaFieldOverridesRef = useRef(settings.schemaFieldOverrides);
+ useEffect(() => {
+   schemaFieldOverridesRef.current = settings.schemaFieldOverrides;
+ }, [settings.schemaFieldOverrides]);

  const applyBackendParamsToSettings = useCallback(
    (params: Record<string, unknown>) => {
      // ... existing code ...
      if (Object.keys(overrideUpdates).length > 0) {
        settingsUpdate.schemaFieldOverrides = {
-         ...(settings.schemaFieldOverrides ?? {}),
+         ...(schemaFieldOverridesRef.current ?? {}),
          ...overrideUpdates,
        };
      }
      // ...
    },
-   [updateSettings, settings]
+   [updateSettings]
  );
🧹 Nitpick comments (3)
src/scope/server/webrtc.py (1)

104-107: Redundant runtime import of FrameProcessor.

FrameProcessor is already imported at line 32 under TYPE_CHECKING. The runtime import at line 104 is unnecessary since it's only used for the type annotation at line 107, not for instantiation.

♻️ Remove redundant import
     def __init__(
         self,
         frame_processor: "FrameProcessor",
         session_id: str | None = None,
     ):
-        from .frame_processor import FrameProcessor
-
         self.id = session_id or str(uuid.uuid4())
         self.frame_processor: FrameProcessor = frame_processor

The type annotation "FrameProcessor" (string form) works with the TYPE_CHECKING import at line 32.

.mcp.json (1)

1-9: Configuration assumes Scope server runs on port 8002.

The --port 8002 argument tells the MCP server to connect to a Scope HTTP server at localhost:8002. Users must start the Scope server on port 8002 before the MCP server can function. Consider documenting this in the README or adding a comment.

If port 8002 is not the standard port, you may want to use the default port 8000 for easier setup:

-      "args": ["run", "daydream-scope", "--mcp", "--port", "8002"]
+      "args": ["run", "daydream-scope", "--mcp", "--port", "8000"]

Alternatively, document the expected workflow in the PR description or README.

frontend/src/pages/StreamPage.tsx (1)

498-505: Consider the redundancy with existing handlers.

The wrapper applies parameters to local settings, which is useful for code paths that call sendParameterUpdate without first updating local state. However, most handlers (e.g., handleNoiseScaleChange at line 877) already call updateSettings directly before calling sendParameterUpdate, resulting in duplicate state updates for the same values.

This is functionally correct and the trade-off for simplicity is reasonable, but you could avoid extra re-renders by having the wrapper skip applying params that match known handler patterns, or by removing the direct updateSettings calls from handlers and relying solely on the wrapper.


📥 Commits

Reviewing files that changed from the base of the PR and between 4cba0a9 and 98141e3.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
  • .mcp.json
  • frontend/src/hooks/useUnifiedWebRTC.ts
  • frontend/src/pages/StreamPage.tsx
  • pyproject.toml
  • src/scope/server/app.py
  • src/scope/server/mcp_server.py
  • src/scope/server/tracks.py
  • src/scope/server/webrtc.py

client = httpx.AsyncClient(base_url=base_url, timeout=300.0)

⚠️ Potential issue | 🟡 Minor

The httpx.AsyncClient is never closed, causing a resource leak.

The client is created at module initialization within create_mcp_server() but there's no cleanup when the MCP server shuts down. This leaves HTTP connections open.

🔧 Proposed fix using async context manager pattern

Consider using lifespan context or ensuring cleanup. One approach:

-    client = httpx.AsyncClient(base_url=base_url, timeout=300.0)
+    # Note: FastMCP may provide lifecycle hooks; otherwise document that
+    # the client persists for the process lifetime (acceptable for stdio)
+    client = httpx.AsyncClient(base_url=base_url, timeout=300.0)

Since the MCP server runs as a subprocess and exits when stdio closes, this is acceptable in practice. However, adding a comment clarifying this design decision would help maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scope/server/mcp_server.py` at line 54, The httpx.AsyncClient assigned to
the module-level variable client inside create_mcp_server() is never closed,
causing a resource leak; update create_mcp_server() to ensure the AsyncClient is
closed on server shutdown (for example register an async shutdown handler or use
an async context/lifespan so client.aclose() is called), referencing the client
variable and create_mcp_server function to locate the code; alternatively, if
you intentionally rely on process exit to release resources, add a clear comment
in create_mcp_server() documenting that design choice and why explicit cleanup
is omitted.

Comment on lines +529 to +538
@mcp.tool()
async def resolve_workflow(workflow_json: str) -> str:
"""Resolve dependencies for a workflow import (checks pipelines, LoRAs, plugins).

Args:
workflow_json: The workflow JSON string to resolve dependencies for
"""
workflow = json.loads(workflow_json)
resp = await client.post("/api/v1/workflow/resolve", json=workflow)
return await _json(resp)

⚠️ Potential issue | 🟡 Minor

Add error handling for malformed JSON in resolve_workflow.

json.loads(workflow_json) will raise JSONDecodeError if the input is not valid JSON. This exception will propagate as an unhandled error rather than a user-friendly message.

🛡️ Proposed fix to handle JSON parsing errors
     @mcp.tool()
     async def resolve_workflow(workflow_json: str) -> str:
         """Resolve dependencies for a workflow import (checks pipelines, LoRAs, plugins).

         Args:
             workflow_json: The workflow JSON string to resolve dependencies for
         """
+        try:
+            workflow = json.loads(workflow_json)
+        except json.JSONDecodeError as e:
+            return json.dumps({"error": f"Invalid JSON: {e}"}, indent=2)
-        workflow = json.loads(workflow_json)
         resp = await client.post("/api/v1/workflow/resolve", json=workflow)
         return await _json(resp)

@leszko leszko left a comment (Collaborator)

Added some comments. In general:

  1. I really love the idea of having the MCP Server in Scope ❤️
  2. I think we should try to limit the changes to the API (app.py); if possible then make no changes at all
  3. I think the whole MCP logic should be only executed when scope is started with the --mcp flag

raise HTTPException(status_code=500, detail=str(e)) from e


@app.post("/api/v1/session/parameters")
I wonder if we shouldn't move all of these functions into a separate router, plus have them enabled only if daydream-scope is started with the --mcp flag. The reason I mention this is that we now introduce new functions to modify the params (and other data), which brings complexity. E.g. soon we won't have a concept of broadcasting parameters, because params will be per node.

count = 0
for _sid, fp, _headless in webrtc_manager.iter_frame_processors():
all_params.update(fp.parameters)
count += 1
I think the count does not make much sense. We don't support multiple sessions, so this one will always be 1.

raise HTTPException(status_code=500, detail=str(e)) from e


@app.post("/api/v1/pipeline/unload")
Why do we need an endpoint for the pipeline unload?

raise HTTPException(status_code=500, detail=str(e)) from e


@app.post("/api/v1/session/{session_id}/recording/start")
  1. What's the use for recording start/stop?
  2. Why do we use session_id param here while for other endpoints we don't have session_id? In general I don't think it makes much sense to have session_id since we don't support multi-session setup.

def create_mcp_server(base_url: str = "http://localhost:8000") -> FastMCP:
"""Create and configure the MCP server with all Scope tools."""

mcp = FastMCP(
Is this FastMCP a separate process? If not, then maybe we don't need to add any new endpoints into app.py, but just make code calls. That would keep our API simpler.

logger.error(f"Failed to add ICE candidate to session {session_id}: {e}")
raise ValueError(f"Invalid ICE candidate: {e}") from e

def iter_frame_processors(self):
We don't need this. We always have just 1 frame processor.


def get_last_frame(self):
"""Return the most recently rendered frame, or None."""
return self._last_frame
I guess we would need to have some lock for that.
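One way to address this, as an illustrative sketch (not the PR's actual VideoProcessingTrack), is to guard the slot with a lock so the render loop and REST readers never race:

```python
import threading


class LastFrameHolder:
    """Minimal stand-in for the track's last-frame slot, guarded by a lock."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._last_frame = None

    def store_frame(self, frame) -> None:
        """Called from the render loop after each frame is produced."""
        with self._lock:
            self._last_frame = frame

    def get_last_frame(self):
        """Return the most recently rendered frame, or None."""
        with self._lock:
            return self._last_frame
```

In CPython a single attribute read is atomic under the GIL, so the lock mainly matters if the frame object is mutated in place or read field-by-field; either way, taking the lock makes the concurrency contract explicit.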

ryanontheinside and others added 3 commits March 12, 2026 08:01
Signed-off-by: RyanOnTheInside <[email protected]>
Signed-off-by: RyanOnTheInside <[email protected]>
…onboard-mcp/mcp

Signed-off-by: Rafał Leszko <[email protected]>

# Conflicts:
#	src/scope/server/app.py

github-actions bot commented Mar 12, 2026

🚀 fal.ai Preview Deployment

  • App ID: daydream/scope-pr-660--preview
  • WebSocket: wss://fal.run/daydream/scope-pr-660--preview/ws
  • Commit: dfa4885

Testing

Connect to this preview deployment by running this on your branch:

uv run build && SCOPE_CLOUD_APP_ID="daydream/scope-pr-660--preview/ws" uv run daydream-scope

🧪 E2E tests will run automatically against this deployment.


github-actions bot commented Mar 12, 2026

✅ E2E Tests passed

  • Status: passed
  • fal App: daydream/scope-pr-660--preview
  • Run: View logs

Test Artifacts

Check the workflow run for screenshots.

leszko and others added 5 commits March 12, 2026 15:36
These endpoints are not needed for MCP workflows - removes start_recording,
stop_recording, and unload_pipeline tools along with their references in
instructions and docstrings.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
Replaces session_id path parameters with implicit active-session lookup
for stop, recording start/stop endpoints, aligning the REST API with
MCP server usage patterns.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafał Leszko <[email protected]>
…ndpoints

Remove recording and pipeline unload endpoints from MCP router, and
refactor WebRTCManager from a multi-session dict to a single headless
session slot since only one headless session is supported at a time.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafal Leszko <[email protected]>
HeadlessSession has no WebRTC dependencies — it runs FrameProcessor
directly. Moving it to headless_session.py clarifies the separation
between WebRTC and MCP-only session management.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafal Leszko <[email protected]>
Consistent with sibling module naming (webrtc.py, recording.py).

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: Rafal Leszko <[email protected]>
Signed-off-by: RyanOnTheInside <[email protected]>
@ryanontheinside ryanontheinside merged commit ec4fedd into main Mar 12, 2026
8 checks passed