Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ae23027aa2
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review"
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
for f in files:
    session_files[f["name"]] = {
        "name": f["name"],
        "size": f["size"],
```
Enforce server-side max_file_size checks
`on_store` persists every uploaded file without validating against `self._max_file_size`, so the configured limit is only a UI hint. Because app tools can still be invoked directly by name (for example `Files___store_files`), a client can bypass the DropZone constraint and submit arbitrarily large base64 payloads, leading to unbounded in-memory growth in `_store` despite `max_file_size` being set.
Good catch — fixed in 614bc2b. The validation now happens in the tool function itself (before on_store), so subclasses don't need to re-implement the size check.
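The check described above can be sketched in isolation. This is a standalone sketch, not the actual FastMCP code: the helper name `validate_file_size` and the choice to raise `ValueError` are assumptions.

```python
import base64


def validate_file_size(content_b64: str, max_file_size: int) -> None:
    """Reject a base64-encoded upload whose decoded size exceeds the limit."""
    # Decode before comparing: base64 text is ~33% larger than the payload,
    # so checking len(content_b64) directly would reject valid files.
    decoded = base64.b64decode(content_b64)
    if len(decoded) > max_file_size:
        raise ValueError(
            f"file is {len(decoded)} bytes, exceeds max_file_size={max_file_size}"
        )
```

Running this inside the tool function itself means a direct `Files___store_files` call is subject to the same limit as the DropZone UI.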
```python
@self.tool(model=True)
def list_files() -> list[dict]:
    """List all uploaded files with metadata."""
```
Namespace model-visible file tools to avoid collisions
Registering model-visible tools with generic names like `list_files`/`read_file` makes this provider conflict with host servers that already define those names. FastMCP deduplicates tool listings by name and resolves `call_tool` by a single winner for that name, so in collisions the FileUpload tools can be hidden or unreachable to the model, undermining the "add to any server" behavior.
This is what namespaces are for — `server.add_provider(FileUpload(), namespace="files")` gives you `files_list_files`, `files_read_file`, etc. Generic names are intentional for the simple case where there's no collision.
You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
Session-scoped in-memory storage by default. Override `on_store`, `on_list`, `on_read` for custom persistence.
Force-pushed from 614bc2b to c054a78.
**Test Failure Analysis**

**Summary:** One pre-existing test is failing across all Python versions.

**Note:** This failure is not introduced by this PR; the failing test predates it on main.

**Root Cause:** When a Docket background worker executes a tool, HTTP request headers are snapshotted to Redis at submission time, but they are only restored in the worker when a dependency-injection parameter triggers `_restore_task_http_headers`. The failing test calls `get_http_request()` directly, so the `_task_http_headers` ContextVar is never populated:

```python
@server.tool(task=True)
async def check_request_header() -> str:
    request = get_http_request()  # fails — _task_http_headers ContextVar is not set
    return request.headers.get("x-tenant-id", "missing")
```
**Suggested Solution:** The fix should restore the headers when the tool body actually runs:

```python
# At the start of FunctionTool.run() (or a Docket-registered wrapper):
if _task_http_headers.get() is None:
    task_info = get_task_context()
    if task_info is not None:
        await _restore_task_http_headers(task_info.session_id, task_info.task_id)
```

This mirrors what the dependency-injection path already does.

**Detailed Analysis**

The contrast with the passing test is instructive:

```python
# This PASSES — uses dependency injection
@server.tool(task=True)
async def check_headers(
    headers: dict[str, str] = CurrentHeaders(),  # triggers _restore_task_http_headers
    request: Request = CurrentRequest(),  # triggers _restore_task_http_headers
) -> dict[str, str]: ...
```

The failing test calls `get_http_request()` directly instead:

```python
# This FAILS — calls get_http_request() directly
@server.tool(task=True)
async def check_request_header() -> str:
    request = get_http_request()  # _task_http_headers not set in worker
    return request.headers.get("x-tenant-id", "missing")
```
**Pre-existing on main:** confirmed by run

**Related Files**
Extracts the file upload example into a reusable `FileUpload` provider. Adding file upload to any server is now one line.

This registers a drag-and-drop UI tool, a backend `store_files` tool (app-only), and model-visible `list_files`/`read_file` tools. Files are scoped by MCP session and stored in memory by default — each session gets its own file store, and files don't leak between conversations.

For custom persistence (S3, database, filesystem), subclass and override three methods:
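A subclass along these lines might persist to the local filesystem. This is a sketch under assumptions: the hook signatures below are inferred from the names `on_store`/`on_list`/`on_read` and are not taken from the actual provider, and the class stands alone here rather than inheriting from `FileUpload`.

```python
from pathlib import Path


class DiskFileUpload:
    """Sketch of a FileUpload subclass that persists files to disk.

    In the real provider these methods would override
    FileUpload.on_store / on_list / on_read.
    """

    def __init__(self, root: str) -> None:
        self.root = Path(root)

    def on_store(self, session_id: str, name: str, data: bytes) -> None:
        # One directory per MCP session keeps files scoped, mirroring
        # the default in-memory behavior.
        target = self.root / session_id
        target.mkdir(parents=True, exist_ok=True)
        (target / name).write_bytes(data)

    def on_list(self, session_id: str) -> list[str]:
        target = self.root / session_id
        return sorted(p.name for p in target.iterdir()) if target.exists() else []

    def on_read(self, session_id: str, name: str) -> bytes:
        return (self.root / session_id / name).read_bytes()
```

An S3 or database backend would follow the same shape, swapping the `Path` operations for client calls while keeping the per-session keying.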
TODO: Verify session scoping behavior in stateless HTTP transport — need to confirm virtual sessions work correctly so files don't end up in a shared `__default__` bucket.