Conversation
Signed-off-by: Max Holland <[email protected]>
This reverts commit 2eae6ee.
disable plugins install while stream running or cloud connecting
@emranemran hopefully you don't mind me leaving all the commits untidy; I wanted to preserve the history of doing the built-in plugins to begin with.
leszko left a comment
Added some comments. PTAL. Please also send me an environment where I can test it. Thanks.
normalized = re.sub(r"^git\+https?://", "", normalized)
normalized = re.sub(r"^https?://", "", normalized)
So we support only URLs for plugins, right? Not local paths and not PyPI packages. If that's the intention, I'd add a comment about it. Maybe even in the UI it should be explicit that in cloud mode you can only install from URLs.
I'll add a note to the UI 👍 In cloud mode we're only allowing installation from the discover tab for now; URL entry is hidden.
for plugin in plugins:
    name = plugin.get("name")
    if not name:
        continue
    try:
        resp = await client.delete(
            f"{scope_url}/api/v1/plugins/{name}", timeout=60.0
        )
        if resp.status_code == 200:
            print(f"Cleanup: uninstalled plugin '{name}'")
        else:
            print(
                f"Warning: Failed to uninstall plugin '{name}': {resp.status_code}"
            )
    except Exception as e:
        print(f"Warning: Failed to uninstall plugin '{name}': {e}")
I wonder if it would make sense to delete plugins in parallel?
Are you sure that will be OK? I can test it out anyway.
Hmm, maybe I'll avoid it for now:
There are a few concurrency hazards:
- pipeline_manager.unload_pipeline_by_id() mutates shared state (_pipelines, _pipeline_statuses, _pipeline_load_params dicts, and main pipeline state). The internal helper is even named _unload_pipeline_by_id_unsafe -- it must be called with the lock held. Running multiple unloads concurrently risks corrupting that shared state.
- CUDA cleanup -- each uninstall calls torch.cuda.empty_cache() and torch.cuda.synchronize(). Concurrent calls to these while pipelines are being torn down could interfere with each other.
- pip uninstall -- the sync implementation ultimately shells out to pip. Running multiple pip processes concurrently is not safe; pip uses a single shared site-packages directory and doesn't handle concurrent modifications.
- The plugin manager lock (self._lock) only guards the registry unregister section, not the entire uninstall operation, so it doesn't provide end-to-end protection.
I was thinking it sounded risky to have concurrent pip commands.
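To make the first hazard concrete, here is a minimal sketch of the lock discipline described above (class and method bodies are hypothetical, mirroring the `_unsafe` naming convention; the real PipelineManager does more):

```python
import asyncio

class PipelineManager:
    """Illustrative sketch only: shows why *_unsafe helpers must run under the lock."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self._pipelines = {}          # shared mutable state
        self._pipeline_statuses = {}  # shared mutable state

    def _unload_pipeline_by_id_unsafe(self, pipeline_id):
        # Mutates shared dicts; only safe while self._lock is held.
        self._pipelines.pop(pipeline_id, None)
        self._pipeline_statuses.pop(pipeline_id, None)

    async def unload_pipeline_by_id(self, pipeline_id):
        # Public entry point acquires the lock, so two concurrent unloads
        # are serialized rather than interleaving their dict mutations.
        async with self._lock:
            self._unload_pipeline_by_id_unsafe(pipeline_id)
```

Running several DELETEs in parallel would funnel into this one lock anyway, so the parallelism would mostly buy contention, not speed.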
"KAFKA_BOOTSTRAP_SERVERS",
"KAFKA_TOPIC",
"KAFKA_SASL_USERNAME",
"KAFKA_SASL_PASSWORD",
OK, since we pass credentials into Scope, do we already have some safeguard for the plugins that are added to the Daydream Plugin registry?
My plan is to create a write-only user for this, so that if the credentials ever leaked it would be much less of a concern; they wouldn't be able to read our data. Then we can also run a scan over the current plugins to check nothing looks dangerous.
    return normalized


def is_plugin_whitelisted(package: str) -> bool:
    raw = os.getenv("CLOUD_PLUGINS_WHITELIST", "")
How are you going to update this plugin list env variable?
This is a variable that lives in fal; it can be updated through their CLI or web dashboard.
OK, but who will update it? Will we have some automation that takes our plugins and updates this variable?
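For context, a minimal sketch of how such a whitelist check could work, assuming a comma-separated `CLOUD_PLUGINS_WHITELIST` and the scheme-stripping normalization shown in the earlier snippet (the real implementation may differ in details):

```python
import os
import re

def _normalize(url: str) -> str:
    # Same normalization idea as in the PR's snippet: strip scheme prefixes
    # so "git+https://github.com/x/y" and "https://github.com/x/y" match.
    normalized = url.strip().lower().rstrip("/")
    normalized = re.sub(r"^git\+https?://", "", normalized)
    normalized = re.sub(r"^https?://", "", normalized)
    return normalized

def is_plugin_whitelisted(package: str) -> bool:
    # Assumes the whitelist env var is a comma-separated list of plugin URLs.
    raw = os.getenv("CLOUD_PLUGINS_WHITELIST", "")
    allowed = {_normalize(p) for p in raw.split(",") if p.strip()}
    return _normalize(package) in allowed
```

Normalizing both sides means operators can list either the plain or the `git+` form of a URL and both install spellings will match.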
  uses: actions/setup-python@v5
  with:
-   python-version: '3.10'
+   python-version: '3.12'
Are we sure that using a different Python version won't have any effect?
It looks good in testing, though I can do more exhaustive testing. Our pyproject file specifies Python 3.12, so I think this was just an accidental case where the Docker images were on 3.10 because that was the default installed with the older Ubuntu version. I updated that, which means it also needs to be updated here for the fal deploy.
src/scope/cloud/fal_app.py
request_id = payload.get("request_id")

# Block plugin installation in cloud mode (security: prevent arbitrary code execution)
if method == "POST" and path == "/api/v1/plugins":
I wonder if this check is good enough. The path comes from the WebSocket message payload, so someone could send something like /api/v1/plugins?hack=true and then this check would not apply. Isn't that an issue?
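One way to harden this kind of check (a sketch of the idea, not necessarily the fix applied in the PR; the function name is hypothetical) is to parse the incoming string so query strings, fragments, and trailing slashes can't bypass the comparison:

```python
from urllib.parse import urlparse

def is_blocked_plugin_install(method: str, path: str) -> bool:
    # urlparse splits off "?hack=true" and "#frag", so only the real path
    # component is compared; rstrip handles an optional trailing slash.
    clean = urlparse(path).path.rstrip("/")
    return method == "POST" and clean == "/api/v1/plugins"
```

With this, `/api/v1/plugins?hack=true` normalizes to `/api/v1/plugins` and is still blocked, while unrelated paths remain unaffected.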
🚀 fal.ai Preview Deployment
Testing: Connect to this preview deployment by running this on your branch. 🧪 E2E tests will run automatically against this deployment.
✅ E2E Tests passed
Test Artifacts: Check the workflow run for screenshots.
A few more comments after trying it out:
leszko left a comment
Looks good. The one question that is unanswered is this one:
ok, but who will update it? Will we have some automation which will take our plugins and update this variable?
I just want to make sure we have a plan about updating this. Other than that LGTM. Feel free to Squash and Merge.
Yep, I'll make a GitHub Action to update it 👍

Cloud Plugin Support & Separate Docker Image
Summary
- Separate cloud Dockerfile (`Dockerfile.cloud`); CI updated to build both standard and cloud variants using a matrix strategy, with `-cloud`-suffixed image tags
- Cloud plugin installs restricted via a `CLOUD_PLUGINS_WHITELIST` environment variable (with URL normalization for flexible matching)
- `/health`, `/api/v1/restart`, and all `/api/v1/plugins` endpoints now use the `@cloud_proxy()` decorator so the frontend can manage plugins on the remote cloud instance
- Cleanup event (`_cleanup_event`) that blocks the next connection's "ready" message until cleanup finishes
- `Dockerfile.cloud`: based on `nvidia/cuda:12.8.0-cudnn-runtime-ubuntu24.04`
- `.github/workflows/docker-build.yml`: standard/cloud builds with per-variant tags, dockerfiles, and GHA cache scopes
- `src/scope/cloud/fal_app.py`: `uv run --extra kafka` instead of pre-install; assets dir moved to `/tmp/`
- `src/scope/server/app.py`: `@cloud_proxy()` added to health, restart, and all plugin CRUD endpoints; renamed `request` -> `http_request` to avoid shadowing

You can now connect to cloud, go to the plugin settings discover tab, and click install, same as the local experience.