Conversation
📝 Walkthrough

File size validation was changed: the frontend and backend now apply the 50MB upload limit only when a cloud connection is active, and the video hook removed its prior 10MB pre-upload rejection, allowing larger local uploads.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (1)
frontend/src/components/MediaPicker.tsx (1)
111-114: Consider adding user-visible feedback for validation failures.

The validation failure only logs to console.error, which users won't see. While this is pre-existing behavior, it becomes more relevant now that the limit is conditional on cloud connection status.

💡 Suggestion: Add toast notification for user feedback
If the codebase has a toast/notification system, consider surfacing this error to users:
```diff
+ // Assuming a toast hook or similar is available
  if (isCloudConnected && file.size > maxCloudUploadSize) {
    console.error(
      `handleFileUpload: File size exceeds maximum of ${maxCloudUploadSize / (1024 * 1024)}MB while connected to cloud`
    );
+   // toast.error(`File size exceeds ${maxCloudUploadSize / (1024 * 1024)}MB limit for cloud uploads`);
    return;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/MediaPicker.tsx` around lines 111 - 114, The console.error in handleFileUpload that fires when file size exceeds maxCloudUploadSize only logs to the console; update handleFileUpload to also surface a user-visible notification (e.g., using the app's toast/notification API) when this validation fails, including the same descriptive message and the size limit (referencing handleFileUpload and maxCloudUploadSize) so users see immediate feedback; ensure the notification path is used only in the cloud-connection branch and keep the console.error for developer logs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c80c550d-dcf5-46aa-ba1b-53f9c6a16591
📒 Files selected for processing (3)
- frontend/src/components/MediaPicker.tsx
- frontend/src/hooks/useVideoSource.ts
- src/scope/server/app.py
💤 Files with no reviewable changes (1)
- frontend/src/hooks/useVideoSource.ts
🚀 fal.ai Preview Deployment

Testing: Connect to this preview deployment by running this on your branch. 🧪 E2E tests will run automatically against this deployment.

✅ E2E Tests passed

Test Artifacts: Check the workflow run for screenshots.
Signed-off-by: Thom Shutt <[email protected]>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/scope/server/app.py`:
- Around line 1573-1581: The handler currently calls await request.body()
(materializing content into the content variable) before checking
cloud_manager.is_connected, which allows unbounded RAM use for local uploads; to
fix, stop using request.body() for local uploads and instead stream the request
body (use the request.stream()/iter_bytes async iterator) and spool incoming
chunks to a temporary file (e.g., tempfile.NamedTemporaryFile) while optionally
tracking bytes written, then operate on the temp file path; keep the existing
in-memory read only for the cloud path (when cloud_manager.is_connected is true)
or enforce a size cap while streaming by aborting and returning HTTP 400 if the
running total exceeds max_size, and update references to content to read from
the temp file or streamed buffer accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e98ae269-53d2-451f-a8a7-35bd9c1bd3c1
📒 Files selected for processing (3)
- frontend/src/components/MediaPicker.tsx
- frontend/src/hooks/useVideoSource.ts
- src/scope/server/app.py
💤 Files with no reviewable changes (1)
- frontend/src/hooks/useVideoSource.ts
```python
# Apply upload size validation only for cloud uploads.
# Local mode keeps files on the same machine, so no explicit cap is enforced.
if cloud_manager.is_connected:
    max_size = 50 * 1024 * 1024  # 50MB
    if len(content) > max_size:
        raise HTTPException(
            status_code=400,
            detail=f"File size exceeds maximum of {max_size / (1024 * 1024):.0f}MB",
        )
```
Avoid unbounded buffering for local uploads.
This removes the only size ceiling, but the handler still materializes the entire payload with await request.body() on Line 1571 before reaching this branch. Large local videos can now be fully buffered in RAM before they ever hit disk, which is enough to stall or OOM the server. If local uploads are intended to be unlimited, this path should stream/spool the request body instead of reading it all at once.
There's no reason to constrain file sizes for input videos, given they never leave the device. For images, only check the size for remote inference.