Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ae23027aa2
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
for f in files:
    session_files[f["name"]] = {
        "name": f["name"],
        "size": f["size"],
```
Enforce server-side max_file_size checks
on_store persists every uploaded file without validating against self._max_file_size, so the configured limit is only a UI hint. Because app tools can still be invoked directly by name (for example Files___store_files), a client can bypass the DropZone constraint and submit arbitrarily large base64 payloads, leading to unbounded in-memory growth in _store despite max_file_size being set.
Good catch — fixed in 614bc2b. The validation now happens in the tool function itself (before on_store), so subclasses don't need to re-implement the size check.
```python
@self.tool(model=True)
def list_files() -> list[dict]:
    """List all uploaded files with metadata."""
```
Namespace model-visible file tools to avoid collisions
Registering model-visible tools with generic names like list_files/read_file makes this provider conflict with host servers that already define those names. FastMCP deduplicates tool listings by name and resolves call_tool by a single winner for that name, so in collisions the FileUpload tools can be hidden or unreachable to the model, undermining the “add to any server” behavior.
This is what namespaces are for — server.add_provider(FileUpload(), namespace="files") gives you files_list_files, files_read_file, etc. Generic names are intentional for the simple case where there's no collision.
You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
Force-pushed 614bc2b to c054a78
Test Failure Analysis

(Edited: updated for workflow run `23694104842`, commit `9cd5a1c8` — "Add FileUpload provider". The failure is identical to the previous analysis.)

Summary: One pre-existing test is failing across all Python versions — `test_background_task_can_read_snapshotted_request_headers` — due to HTTP request context not being available when a background task calls `get_http_request()` directly (without using FastMCP dependency injection).

Note: This failure is not introduced by this PR. The failing test was added to `main` in commit `1d8a8bc` ("chore: Update SDK documentation #3668") and is present on `main` independently of this PR's changes.

Root Cause: When a Docket background worker executes a tool, HTTP request headers are snapshotted to Redis at submission time (in `submit_to_docket`). However, those headers are only restored from Redis when the tool uses `CurrentRequest()` or `CurrentHeaders()` as dependency-injected parameters — not when the tool calls `get_http_request()` directly. The failing test calls `get_http_request()` directly inside a `task=True` tool: `get_http_request()` reads `_task_http_headers.get()`, which is `None` in the worker because `_restore_task_http_headers()` was never called before the function ran. That restoration only happens inside `_CurrentRequest.aenter` and `_CurrentHeaders.aenter`.

Suggested Solution: Restore `_task_http_headers` early in the background task execution path, before the user's function runs. In `FunctionTool.run()` (or a new wrapper registered with Docket), check for task context and call `_restore_task_http_headers()` if not already set:

```python
# At the start of FunctionTool.run() (or a Docket-registered wrapper):
if _task_http_headers.get() is None:
    _restore_task_http_headers()
```

This mirrors what `_CurrentRequest.aenter` already does (see `dependencies.py:1186-1197`), but applied at the tool execution level so it covers all tools regardless of whether they use dependency injection.
Detailed Analysis

Failure log excerpt (all 4 failing jobs show the same failure):

Jobs failing: Python 3.10 (windows-latest), Python 3.10 (ubuntu-latest), Python 3.13 (ubuntu-latest), Tests with lowest-direct dependencies — all with the same single test failure. MCP conformance tests and Integration tests both passed.

The contrast with the passing test (`test_background_task_current_http_dependencies_restore_headers`):

```python
# This PASSES — uses dependency injection
@server.tool(task=True)
```

The failing test (added in commit `1d8a8bc`):

```python
# This FAILS — calls get_http_request() directly
@server.tool(task=True)
```

Code flow:
Related Files
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b66e215f8c
```python
def store_files(files: list[dict]) -> list[dict]:
    """Store uploaded files. Receives file objects with name, size, type, data (base64)."""
    for f in files:
        if f.get("size", 0) > provider._max_file_size:
```
Enforce max_file_size against decoded payload bytes
The size guard in store_files trusts the client-provided size field, so callers can bypass max_file_size by sending a very large base64 data value with a small declared size (for example size=1). In that case the file is still persisted in memory by on_store, which defeats the server-side limit and can still cause unbounded memory growth under direct tool calls.
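One way to enforce the limit against the real payload is to compute the decoded length from the base64 string itself, which is cheap and doesn't require materializing the bytes. A minimal sketch — `MAX_FILE_SIZE`, `decoded_size`, and `check_size` are hypothetical names, not the provider's actual API:

```python
import base64

MAX_FILE_SIZE = 10 * 1024 * 1024  # hypothetical limit, for illustration

def decoded_size(b64: str) -> int:
    # Every 4 base64 characters encode 3 bytes; stripping '=' padding first
    # yields the exact decoded length without decoding the payload.
    return (len(b64.rstrip("=")) * 3) // 4

def check_size(f: dict) -> None:
    # Validate against the actual payload, not the client-declared "size".
    actual = decoded_size(f.get("data", ""))
    if actual > MAX_FILE_SIZE:
        raise ValueError(f"{f.get('name')}: {actual} bytes exceeds limit")

# A tiny declared size cannot mask a large payload:
check_size({"name": "ok.txt", "size": 1,
            "data": base64.b64encode(b"hello").decode()})
```

A declared `size=1` with a 20 MiB `data` field would now fail the check, closing the bypass the review describes.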
```python
if is_text:
    try:
        result["content"] = base64.b64decode(entry["data"]).decode("utf-8")
    except UnicodeDecodeError:
```
Catch base64 decoding errors for text reads
read_file only catches UnicodeDecodeError for text files, but malformed base64 (which can be stored via direct store_files calls) raises binascii.Error before decoding to UTF-8. That turns read_file into a hard tool failure instead of returning the intended content_base64 fallback preview for unreadable content.
Session-scoped in-memory storage by default. Override on_store, on_list, on_read for custom persistence.
Force-pushed 9cd5a1c to dff412c
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: dff412c822
```python
For custom persistence, override the storage methods::

    class S3Upload(FileUpload):
        def on_store(self, files: list[dict]) -> list[dict]:
```
Include Context in storage hook override signatures
The custom-storage example omits ctx in the on_store/on_list/on_read signatures, but the provider calls these hooks with context (on_store(files, ctx), etc.). Implementing the documented signatures will fail at runtime with TypeError when a file is stored or read, so the example should include the ctx: Context parameter on each override.
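A minimal stub illustrating the failure mode (toy classes, not FastMCP's actual code): because the provider always passes the current context to the hook, an override that omits `ctx` raises `TypeError` at call time.

```python
class Context:
    """Stand-in for the real Context object."""

class FileUpload:
    def on_store(self, files: list[dict], ctx: Context) -> list[dict]:
        return files

    def _store(self, files: list[dict], ctx: Context) -> list[dict]:
        # The provider invokes the hook with ctx every time.
        return self.on_store(files, ctx)

class BadUpload(FileUpload):
    def on_store(self, files):  # missing ctx, as in the old docstring example
        return files

class GoodUpload(FileUpload):
    def on_store(self, files: list[dict], ctx: Context) -> list[dict]:
        return files

assert GoodUpload()._store([{"name": "a.txt"}], Context()) == [{"name": "a.txt"}]
try:
    BadUpload()._store([{"name": "a.txt"}], Context())
    raise AssertionError("expected TypeError")
except TypeError:
    pass  # on_store() takes 2 positional arguments but 3 were given
```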
Fixed — the module docstring now includes ctx in the override signatures.
Extracts the file upload example into a reusable `FileUpload` provider. Adding file upload to any server is now one line: `server.add_provider(FileUpload())`.

This registers a drag-and-drop UI tool, a backend `store_files` tool (app-only), and model-visible `list_files`/`read_file` tools. Files are scoped by MCP session and stored in memory by default. Storage methods receive the current `Context`, so subclasses can partition by user, tenant, or any other dimension — and have full access to auth tokens and request metadata for persistent backends.

Also adds a new "Providers" group to the Apps docs nav, reorganizes the Apps sidebar into Building Apps / Providers / Advanced, and fixes an MDX parse error in the changelog.