
feat(storage): replace proxy upload with presigned URL architecture#129

Merged
yash-pouranik merged 7 commits into geturbackend:main from Nitin-kumar-yadav1307:feat/presigned-url-upload
Apr 23, 2026

Conversation

@Nitin-kumar-yadav1307
Contributor

@Nitin-kumar-yadav1307 commented Apr 23, 2026

Fixes #122

Summary of the problem

The current POST /api/storage/upload endpoint uses multer to buffer the entire file in Node.js memory (req.file.buffer) and then forwards it to Supabase/S3. Every upload is a heavy task for the server: it eats RAM and bandwidth for no good reason, and under heavy traffic this would cause crashes.

What did I change?

The server no longer touches file bytes at all. Instead:

  1. Client asks the server for a signed URL
  2. Server checks size + quota, generates the URL, hands it back
  3. Client uploads directly to Supabase/S3 — server is completely idle
  4. Client tells the server "I'm done"
  5. Server verifies the file actually landed, then updates the quota
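
The five steps above can be sketched from the client's side. This is a hypothetical helper (the name `uploadViaPresignedUrl` and the `api` parameter are illustrative; the real SDK keeps this flow inside `upload()`), using the endpoint names and payloads from this PR:

```javascript
// Hypothetical client-side helper illustrating the five steps above; the real
// SDK keeps this flow inside StorageModule.upload(). `api` is any
// fetch-compatible function so the flow can be exercised without a network.
async function uploadViaPresignedUrl(api, file, filename, contentType) {
  // Steps 1-2: ask the server for a signed URL; it checks size + quota first.
  const requestRes = await api("/api/storage/upload-request", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ filename, contentType, size: file.byteLength }),
  });
  const { signedUrl, token, filePath } = await requestRes.json();

  // Step 3: PUT the raw bytes straight to Supabase/S3; the API server is idle.
  const putRes = await api(signedUrl, {
    method: "PUT",
    headers: { "Content-Type": contentType },
    body: file,
  });
  if (!putRes.ok) throw new Error(`direct upload failed: ${putRes.status}`);

  // Steps 4-5: confirm, so the server can verify the object and charge quota.
  const confirmRes = await api("/api/storage/upload-confirm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ filePath, size: file.byteLength, token }),
  });
  return confirmRes.json();
}
```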

Files changed

  • storage.manager.js — two new helpers: one to generate signed URLs
    (works for internal Supabase, external Supabase, and S3/R2), one to
    verify the file exists after upload
  • storage.controller.js — two new endpoints: requestUpload and confirmUpload
  • storage.routes.js — wired the new routes, no multer on them
  • sdk/storage.ts — updated upload() to do the 3-step flow internally,
    public API is unchanged so existing devs won't notice anything different
  • common/package.json — added @aws-sdk/s3-request-presigner

A few things to note

  • Kept the old /upload route so nothing breaks for existing users
  • Quota is only charged after confirm step, so abandoned uploads
    don't eat into anyone's storage limit
  • For S3/R2 users, their bucket will need CORS configured to allow
    PUT from the browser — this is a bucket setting, not a code change
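
For S3, the bucket CORS rules might look like the following sketch (the origin is a placeholder, and the exact wrapper differs by tool; for example, the AWS CLI's `put-bucket-cors` expects these rules under a `CORSRules` key, while R2 accepts the same rule shape in its dashboard):

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["Content-Type"],
    "MaxAgeSeconds": 3600
  }
]
```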

I might have forgotten something or gotten something wrong, so just let me know!
Long live urBackend!!

Summary by CodeRabbit

  • New Features

    • Presigned URL upload workflow for direct-to-storage uploads with request and confirm endpoints
    • SDK now performs three-step upload: request, PUT to signed URL, confirm
  • Bug Fixes

    • Strict filename/path validation and safer delete behavior
    • Confirms uploaded size matches declared size; returns warning when public URL is unavailable
    • Enforces 10MB per-file limit and quota checks
  • Refactor

    • Centralized storage helpers and unified bucket/presign handling
  • Tests

    • Expanded tests covering presigned flows, validations, and quota charging logic

@coderabbitai
Contributor

coderabbitai Bot commented Apr 23, 2026

Warning

Rate limit exceeded

@Nitin-kumar-yadav1307 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 31 minutes and 17 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 31 minutes and 17 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4b10e5b6-daf2-441f-8684-8d919248471e

📥 Commits

Reviewing files that changed from the base of the PR and between bdfa51e and 145bc64.

📒 Files selected for processing (2)
  • apps/public-api/src/__tests__/storage.controller.test.js
  • apps/public-api/src/controllers/storage.controller.js
📝 Walkthrough

Walkthrough

Adds a two-step presigned-upload workflow: new /upload-request and /upload-confirm endpoints, storage manager presign/verify utilities, SDK changes to PUT to signed URLs, controller quota/verification logic, and corresponding tests and route wiring.

Changes

Cohort / File(s) Summary
Controller & Routes
apps/public-api/src/controllers/storage.controller.js, apps/public-api/src/routes/storage.js
Adds requestUpload and confirmUpload exports and routes. Validates inputs, enforces 10MB limit, constructs normalized project-scoped filePath, returns presigned { signedUrl, token }, verifies uploads, and atomically charges internal storage with an $expr guard.
Common Package (storage utilities)
packages/common/src/utils/storage.manager.js, packages/common/src/index.js, packages/common/package.json
Adds getPresignedUploadUrl and verifyUploadedFile exports; implements presign and verification for Supabase and S3/R2; adds @aws-sdk/s3-request-presigner dependency.
SDK & Tests
sdks/urbackend-sdk/src/modules/storage.ts, sdks/urbackend-sdk/tests/storage.test.ts
Refactors StorageModule.upload to a three-step flow: POST /upload-request, PUT to returned signedUrl, POST /upload-confirm. Tests updated to mock and assert the multi-request sequence (request, PUT, confirm).
API Tests
apps/public-api/src/__tests__/storage.controller.test.js
Expands @urbackend/common mock to include getBucket, getPresignedUploadUrl, verifyUploadedFile; adds tests for request/confirm flows, size validation, verified-size mismatch handling, and external-provider URL-warnings.
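
The atomic quota charge mentioned for the controller can be modeled in memory. `tryChargeQuota` below is a hypothetical stand-in: the real code expresses the same guard as a single conditional `Project.updateOne` with `$expr`, so the check and the increment cannot interleave across concurrent confirms.

```javascript
// In-memory model of the $expr quota guard described above (names are
// hypothetical). The real code runs this as one conditional Project.updateOne,
// so the check and the increment cannot be interleaved by two concurrent
// upload-confirm requests.
function tryChargeQuota(project, size) {
  // Only charge when current usage plus this upload still fits the limit,
  // mirroring: $expr: { $lte: [{ $add: ["$storageUsed", size] }, "$storageLimit"] }
  if (project.storageUsed + size > project.storageLimit) {
    return false; // matched 0 documents: the confirm is rejected
  }
  project.storageUsed += size; // mirrors $inc: { storageUsed: size }
  return true;
}
```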

Sequence Diagram

sequenceDiagram
    participant Client as SDK Client
    participant API as API Controller
    participant SM as Storage Manager
    participant Cloud as Cloud Storage
    participant DB as Database

    Client->>API: POST /api/storage/upload-request (filename, contentType, size)
    API->>DB: check project quota headroom
    API->>SM: getPresignedUploadUrl(project, filePath, contentType, size)
    SM->>Cloud: request presigned PUT URL
    Cloud-->>SM: signedUrl (and token if needed)
    SM-->>API: signedUrl, token, filePath
    API-->>Client: { signedUrl, token, filePath }

    Client->>Cloud: PUT signedUrl (raw bytes, Content-Type)
    Cloud-->>Client: 200 OK

    Client->>API: POST /api/storage/upload-confirm (filePath, size, token)
    API->>SM: verifyUploadedFile(project, filePath)
    SM->>Cloud: Head/List to confirm object and size
    Cloud-->>SM: actualSize
    SM-->>API: actualSize
    API->>DB: atomic update (increment storageUsed if actualSize matches and under limit via $expr)
    DB-->>API: updated project record
    API-->>Client: { publicUrl?, provider, (warning if public URL unavailable) }

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

  • urBackend#122: Implements the same presigned-upload workflow with upload-request/upload-confirm endpoints and presign/verify utilities.

Possibly related PRs

  • urBackend#65: Modifies packages/common/src/utils/storage.manager.js with S3/R2 handling and related storage utility changes.
  • urBackend#128: Touches apps/public-api/src/controllers/storage.controller.js and quota/reservation logic used by the new upload flow.
  • urBackend#72: Adds controller and test updates around storage path normalization and tests that overlap with these changes.

Suggested reviewers

  • yash-pouranik
  • KushagraJaiswar02

Poem

🐰
I hopped to fetch a signed URL bright,
Carried bytes through day and night.
Request, then PUT, then one last cheer—
Confirmed the file, the quota clear! ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title clearly and concisely describes the main architectural change: replacing the previous server-proxied upload approach with a presigned URL architecture. It accurately captures the primary objective of the pull request.
  • Linked Issues check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: ✅ Passed. Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai Bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
apps/public-api/src/__tests__/storage.controller.test.js (1)

11-34: 🛠️ Refactor suggestion | 🟠 Major

Missing test coverage for the new requestUpload / confirmUpload endpoints.

This test file adds the getBucket mock but doesn't add any describe('requestUpload', ...) / describe('confirmUpload', ...) suites. Given the delicate quota logic (peek-then-charge, ownership check, verifyUploadedFile, rollback semantics), shipping without tests for the primary new flow leaves regressions easy to miss.

At minimum, consider tests for:

  • requestUpload: 400 on missing fields, 413 on oversize, 403 on quota headroom exhaustion, 200 returns { signedUrl, filePath }, S3/R2 path where token is undefined.
  • confirmUpload: 400 on missing fields, 403 on foreign-project/.. paths, 500 when verifyUploadedFile throws (ensures quota is not charged), 403 when the $expr guard matches 0, 200 happy path returns public URL.

Want me to draft these test cases?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/__tests__/storage.controller.test.js` around lines 11 -
34, The test file mocks getBucket but lacks suites for the new requestUpload and
confirmUpload flows; add describe blocks for requestUpload and confirmUpload
that exercise the quota/ownership/verification and rollback semantics. For
requestUpload, add tests asserting 400 on missing fields, 413 on oversize, 403
when quota headroom is insufficient, and 200 returns { signedUrl, filePath }
(also assert S3/R2 branch where token is undefined) by calling the requestUpload
handler and using the __mockStorageFrom.upload/getPublicUrl mocks; for
confirmUpload, add tests asserting 400 on missing fields, 403 on foreign-project
or path traversal, 500 when verifyUploadedFile throws (and assert quota was not
charged/rolled back), 403 when the MongoDB $expr guard returns 0, and 200 happy
path returning public URL while verifying getBucket, verifyUploadedFile,
Project.updateOne and storage.from().remove/list behaviors are stubbed
appropriately. Ensure tests mock/inspect verifyUploadedFile, Project.updateOne,
getStorage().__mockStorageFrom, and getBucket to simulate each condition and
assert side-effects (quota changes, calls to remove on rollback, and returned
responses).
apps/public-api/src/controllers/storage.controller.js (1)

1-307: ⚠️ Potential issue | 🟠 Major

Coding-guideline violation: response shape and error handling.

The new requestUpload / confirmUpload (and the existing handlers alongside them) return { error: ... } / ad-hoc success shapes and raw res.status(...).json(...) on failure instead of the standard { success, data, message } envelope and AppError-based error flow. Per the repo's controller guideline, endpoints in src/controllers/** must return { success, data, message } and must use AppError — never raw throws, never ad-hoc error bodies.

If aligning the older handlers is out of scope for this PR, at least wrap the two new endpoints so the public surface starts converging on the standard shape.

As per coding guidelines: "All API endpoints return: { success: bool, data: {}, message: "" }. Use AppError class for errors — never raw throw, never expose MongoDB errors to client."
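
A minimal sketch of the envelope plus AppError flow the guideline describes. `AppError` here is a stand-in (the repo's real class may differ), and the handler is reduced to its size-check branch for illustration:

```javascript
// Sketch of the { success, data, message } envelope and AppError flow the
// guideline describes. AppError is a stand-in; the repo's real class may
// differ, and the handler is reduced to one branch for illustration.
class AppError extends Error {
  constructor(message, statusCode) {
    super(message);
    this.statusCode = statusCode;
  }
}

function confirmUploadHandler(verifiedSize, declaredSize, res, next) {
  if (verifiedSize !== declaredSize) {
    // Never a raw throw, never an ad-hoc { error: ... } body.
    return next(new AppError("Uploaded size does not match declared size.", 400));
  }
  return res.status(200).json({
    success: true,
    data: { size: verifiedSize },
    message: "Upload confirmed",
  });
}
```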

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/controllers/storage.controller.js` around lines 1 - 307,
The requestUpload and confirmUpload handlers currently send ad-hoc responses and
throw raw errors; update both handlers (requestUpload, confirmUpload) to follow
the standard envelope { success, data, message } for all success responses
(e.g., return res.status(200).json({ success: true, data: { signedUrl, token,
filePath }, message: "..." }) and for confirmUpload similarly include
url/path/provider in data) and stop throwing raw errors from
verifyUploadedFile/getPresignedUploadUrl; instead convert failure cases into
AppError instances and pass to next (e.g., next(new AppError("Internal storage
limit exceeded.", 403))) rather than res.status(...).json or throw; ensure any
quota check failures, validation errors, and caught exceptions use AppError with
appropriate status codes and messages and do not leak internal error details.
🧹 Nitpick comments (3)
packages/common/src/index.js (1)

164-166: Fix indentation on the new export.

verifyUploadedFile is missing indentation to match the surrounding exports.

nit
-  getPresignedUploadUrl,
-verifyUploadedFile
+  getPresignedUploadUrl,
+  verifyUploadedFile,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/common/src/index.js` around lines 164 - 166, The export list has
inconsistent indentation: align the new export verifyUploadedFile with the
surrounding exports (e.g., match indentation of getPresignedUploadUrl) in the
export block so the exported symbols (getPresignedUploadUrl, verifyUploadedFile)
are vertically aligned and formatted consistently.
sdks/urbackend-sdk/src/modules/storage.ts (1)

115-119: Direct PUT to cloud has no timeout / retry / abort signal.

fetch has no default timeout. On flaky networks or stalled connections this will hang indefinitely, blocking the caller and never reaching upload-confirm. Consider wiring an AbortController (configurable via an options param) and documenting retry semantics — presigned PUT is idempotent for the same URL window, so bounded retries are safe.
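
One possible shape for the bounded PUT, assuming a fetch-compatible `fetchImpl` is threaded in for testability; the helper name and the default timeout are hypothetical:

```javascript
// Hypothetical bounded PUT along the lines the comment suggests:
// AbortController + setTimeout gives fetch the deadline it lacks by default.
// fetchImpl is any fetch-compatible function (the global fetch in practice).
async function putWithTimeout(fetchImpl, url, body, contentType, timeoutMs = 30000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetchImpl(url, {
      method: "PUT",
      headers: { "Content-Type": contentType },
      body,
      signal: controller.signal,
    });
  } finally {
    clearTimeout(timer); // always clear so the timer never fires after return
  }
}
```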

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdks/urbackend-sdk/src/modules/storage.ts` around lines 115 - 119, The direct
fetch PUT to the presigned URL (using signedUrl, fileData and producing
putResponse) lacks an AbortController, timeout and retry behavior; modify the
upload function to accept an optional options param (e.g., { signal?,
timeoutMs?, maxRetries? }), create/derive an AbortController when no external
signal is provided, pass its signal into fetch, implement a bounded retry loop
for idempotent PUTs honoring backoff and maxRetries, and ensure the controller
is aborted after timeoutMs; also update function docstring to describe the new
timeout/retry/options semantics so callers can provide their own AbortSignal or
rely on the configured timeout.
packages/common/src/utils/storage.manager.js (1)

190-198: Recreating S3Client on every presign/verify call.

Both helpers build a fresh S3Client per request rather than leveraging the per-project cache in storageRegistry / createS3Adapter. At high RPS this adds TLS + SDK middleware setup overhead. Consider extending createS3Adapter (or getStorage) to also expose the raw S3Client (or the presign/head helpers) so the pooling already implemented in getStorage applies here too.

Also applies to: 240-248
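
A minimal sketch of the per-project reuse the comment suggests; `getCachedS3Client` and its keying are hypothetical, and a production version would also need eviction when a project's credentials rotate:

```javascript
// Hypothetical per-project client cache: key the heavy client by project id
// so presign/verify reuse one instance instead of paying client setup cost
// on every call. Eviction on credential rotation is omitted for brevity.
const s3ClientCache = new Map();

function getCachedS3Client(projectId, makeClient) {
  let client = s3ClientCache.get(projectId);
  if (!client) {
    client = makeClient(); // constructed once per project, then reused
    s3ClientCache.set(projectId, client);
  }
  return client;
}
```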

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/common/src/utils/storage.manager.js` around lines 190 - 198, The
presign/verify helpers recreate a new S3Client for each call (symbols: S3Client,
presign, verify) which bypasses the existing per-project cache in
storageRegistry/createS3Adapter/getStorage and causes TLS/SDK setup overhead;
update createS3Adapter (or getStorage) to also return the underlying S3Client or
presign/head helper functions, then change the presign and verify helper
implementations to obtain and reuse that cached client/helper from
storageRegistry/getStorage instead of instantiating a new S3Client locally
(apply the same change to the other occurrence doing the same work).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/public-api/src/__tests__/storage.controller.test.js`:
- Line 40: The jest mock factory for '@urbackend/common' is missing stubs for
getPresignedUploadUrl and verifyUploadedFile (and ensure getBucket is exported),
so update the mock definition that currently exports getStorage, Project,
isProjectStorageExternal, getBucket, __mockStorageFrom (alias mockStorageFrom)
to also include no-op/mock implementations for getPresignedUploadUrl and
verifyUploadedFile; implement simple stub functions that return predictable
values or resolved promises consistent with existing test patterns so tests
calling getPresignedUploadUrl(...) and verifyUploadedFile(...) do not receive
undefined and fail.

In `@apps/public-api/src/controllers/storage.controller.js`:
- Around line 291-300: The current response returns url: publicUrlData.publicUrl
which can be null for S3/R2 adapters; update the handler that calls
getStorage/getBucket to detect when publicUrlData.publicUrl is null and either
(A) include the adapter error from createS3Adapter.getPublicUrl (e.g.,
publicUrlData.error) in the JSON response as an explicit warning/error field
alongside message and keep 200, or (B, preferred) enforce at
project-config/validation time that a publicUrlHost is provided for S3/R2 so
createS3Adapter.getPublicUrl never returns null; implement one of these fixes by
updating the response logic around publicUrlData.publicUrl and by adding
validation where project config is created/loaded (reference getStorage,
getBucket, publicUrlData.publicUrl, and createS3Adapter.getPublicUrl).
- Around line 275-289: The confirmUpload path verifies the uploaded file but
never updates the monthly usage metric, so add a call to
updateMonthlyUsageCounter(project._id, "storage:uploadedBytes", size) after the
successful Project.updateOne quota increment (use the verified size from
verifyUploadedFile) so billing/usage remains consistent with uploadFile; place
the call alongside the quota update block and only for non-external uploads.
- Around line 229-244: The incoming size must be validated and normalized to a
positive finite numeric value before any checks or DB writes: in the upload
initiation handler (the block using filename, contentType, size and comparing
against MAX_FILE_SIZE and project.storageUsed) coerce/parse the incoming size to
a Number, ensure it is a finite integer > 0 and <= MAX_FILE_SIZE, and return 400
on invalid input; do the same normalization and validation in confirmUpload
before using size in the Mongo $inc (e.g., the code path that performs $inc: {
storageUsed: size }), to prevent string concatenation, negative values, NaN, or
non-number writes to storageUsed.
- Around line 271-276: The ownership check should normalize and validate path
segments rather than banning any occurrence of '..'; update the check around
filePath (before calling verifyUploadedFile) to first normalize the path using
POSIX semantics (e.g., path.posix.normalize), ensure the normalized path begins
with `${project._id}/`, then split on '/' and reject if any segment equals '..'
or '.' or if decoded segments (decodeURIComponent) equal those values; mirror
the per-segment validation used in deleteFile so file names like "my..file.txt"
are allowed while true traversal is blocked, and keep calling
verifyUploadedFile(project, filePath, size) after the validated normalized path
is used.

In `@packages/common/src/utils/storage.manager.js`:
- Around line 213-217: The Supabase storage call using
defaultSupabase.storage.from("dev-files").list(folder, { search: fileName })
uses substring matching and can return other objects; after the list call in the
function that derives folder and fileName from filePath, filter the returned
data array for the exact item.name === fileName (or use Supabase's object
metadata/head endpoint if available) and use that exact match for the
size/metadata checks (also update the same pattern around the other list usage
at the later block referenced). Ensure you still handle the case where no exact
match is found (treat as missing and surface the error).
- Around line 208-252: verifyUploadedFile currently falls back to the
client-supplied expectedSize and returns it, which lets clients cheat; change
verifyUploadedFile to always resolve to the actual object size or throw if the
size cannot be determined (remove any "|| expectedSize" fallbacks and throw when
metadata.size or S3 HeadObject ContentLength is missing), and update the
controller confirmUpload to use the returned actualSize for the $inc and $expr
quota check (instead of req.body.size), rejecting with 400 if the declared size
differs from actualSize beyond a small tolerance; reference verifyUploadedFile,
confirmUpload, and the Supabase list/S3 HeadObject code paths when making the
changes.
- Around line 162-206: Replace the hardcoded bucket names in
getPresignedUploadUrl (internal "dev-files" and external supabase "files") with
the canonical bucket resolved by getBucket(project) (the same helper used by
uploadFile/confirmUpload/deleteFile), and ensure verifyUploadedFile also uses
getBucket(project) where it currently references buckets on lines around 216 and
232; import or use the existing getBucket symbol from `@urbackend/common` and pass
that bucket value into the supabase.storage.from(...) and S3
PutObjectCommand/Bucket resolution so all operations target the same bucket.
- Around line 199-205: The presigned PUT URL currently doesn't bind the allowed
upload size; update the PutObjectCommand creation in the function that calls
PutObjectCommand so it sets ContentLength to the incoming size parameter (the
controller's size/MAX_FILE_SIZE) and call getSignedUrl(s3Client, command, ...)
with options including signableHeaders: new Set(["content-length"]) and a
shorter expiresIn (e.g., 300–900 seconds) so the signed URL enforces the size
limit and has reduced lifetime; look for references to PutObjectCommand,
getSignedUrl, s3Client and the returned signedUrl to apply this change.

In `@sdks/urbackend-sdk/src/modules/storage.ts`:
- Around line 89-105: The runtime check guarding the File branch causes File
instances in non-window environments (Node 20+, Deno, Bun, Workers) to miss
setting resolvedName; update the type check to use typeof File !== "undefined"
&& file instanceof File instead of typeof window !== "undefined" && file
instanceof File, and in the Blob branch set resolvedName = filename || (file as
any).name || resolvedName so anonymous Blobs fall back to the provided filename
or existing default; keep the Buffer branch as-is but ensure resolvedName still
uses filename when passed.

🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 18a01d8c-eb79-4d68-a67a-70eb84517c7b

📥 Commits

Reviewing files that changed from the base of the PR and between 63bee25 and 8f4b03d.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (7)
  • apps/public-api/src/__tests__/storage.controller.test.js
  • apps/public-api/src/controllers/storage.controller.js
  • apps/public-api/src/routes/storage.js
  • packages/common/package.json
  • packages/common/src/index.js
  • packages/common/src/utils/storage.manager.js
  • sdks/urbackend-sdk/src/modules/storage.ts

Comment on lines +208 to +252
async function verifyUploadedFile(project, filePath, expectedSize) {
  const isExternal = !!project.resources?.storage?.isExternal;

  if (!isExternal) {
    // internal supabase
    const folder = filePath.split("/")[0];
    const fileName = filePath.split("/").slice(1).join("/");
    const { data, error } = await defaultSupabase.storage
      .from("dev-files")
      .list(folder, { search: fileName });
    if (error) throw error;
    if (!data || data.length === 0) throw new Error("File not found after upload");
    return data[0].metadata?.size || expectedSize;
  }

  const decrypted = decrypt(project.resources.storage.config);
  const config = JSON.parse(decrypted);
  const provider = config.storageProvider || "supabase";

  if (provider === "supabase") {
    const supabase = await getStorage(project);
    const folder = filePath.split("/")[0];
    const fileName = filePath.split("/").slice(1).join("/");
    const { data, error } = await supabase.storage
      .from("files")
      .list(folder, { search: fileName });
    if (error) throw error;
    if (!data || data.length === 0) throw new Error("File not found after upload");
    return data[0].metadata?.size || expectedSize;
  }

  // S3 / R2 — just ask "does this object exist and what's its size?"
  const s3Client = new S3Client({
    region: config.region || "auto",
    endpoint: config.endpoint,
    forcePathStyle: true,
    credentials: {
      accessKeyId: config.accessKeyId,
      secretAccessKey: config.secretAccessKey,
    },
  });
  const command = new HeadObjectCommand({ Bucket: config.bucket, Key: filePath });
  const head = await s3Client.send(command);
  return head.ContentLength || expectedSize;
}

⚠️ Potential issue | 🔴 Critical

verifyUploadedFile's return value is ignored — client-supplied size is used to charge quota.

The controller's confirmUpload calls await verifyUploadedFile(project, filePath, size) but discards the return value, then does $inc: { storageUsed: size } using the client-provided size (req.body.size). A client can therefore:

  • Upload a large file and claim size: 1 to bypass the storageLimit check.
  • Upload a small file but claim a larger size to burn another project member's quota (if any).

Also, the fallback return data[0].metadata?.size || expectedSize; silently trusts the client whenever Supabase doesn't return a size — the "verification" has no teeth on that path.

Recommend: have verifyUploadedFile return the actual object size (throw if unknown, never fall back to expectedSize) and have confirmUpload use that value for the $inc and $expr guard. Additionally, consider rejecting with a 400 if the returned size diverges from the declared size beyond a small tolerance.

🛠 Sketch of fix
-        if (!data || data.length === 0) throw new Error("File not found after upload");
-        return data[0].metadata?.size || expectedSize;
+        if (!data || data.length === 0) throw new Error("File not found after upload");
+        const actual = Number(data[0].metadata?.size);
+        if (!Number.isFinite(actual)) throw new Error("Could not determine uploaded size");
+        return actual;

And in the controller:

-        await verifyUploadedFile(project, filePath, size);
+        const actualSize = await verifyUploadedFile(project, filePath, size);
...
-                { $inc: { storageUsed: size } }
+                { $inc: { storageUsed: actualSize } }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/common/src/utils/storage.manager.js` around lines 208 - 252,
verifyUploadedFile currently falls back to the client-supplied expectedSize and
returns it, which lets clients cheat; change verifyUploadedFile to always
resolve to the actual object size or throw if the size cannot be determined
(remove any "|| expectedSize" fallbacks and throw when metadata.size or S3
HeadObject ContentLength is missing), and update the controller confirmUpload to
use the returned actualSize for the $inc and $expr quota check (instead of
req.body.size), rejecting with 400 if the declared size differs from actualSize
beyond a small tolerance; reference verifyUploadedFile, confirmUpload, and the
Supabase list/S3 HeadObject code paths when making the changes.

Comment thread packages/common/src/utils/storage.manager.js
Comment thread sdks/urbackend-sdk/src/modules/storage.ts Outdated
@Nitin-kumar-yadav1307

@yash-pouranik let me know if it requires any changes.

@yash-pouranik

@Nitin-kumar-yadav1307 Have you tested this locally?

@Nitin-kumar-yadav1307

Yes, but that was before resolving the conflicts and review issues; sorry for that, @yash-pouranik.

@yash-pouranik yash-pouranik left a comment

Two CodeRabbit comments in storage.controller.js and one in storage.manager.js are still unresolved? @Nitin-kumar-yadav1307

@yash-pouranik yash-pouranik linked an issue Apr 23, 2026 that may be closed by this pull request
4 tasks
@Nitin-kumar-yadav1307

@yash-pouranik OK, I'm resolving them.

@Nitin-kumar-yadav1307

[screenshot: WhatsApp Image 2026-04-23 at 10 17 44 PM] All the tests passed; let's see what happens! @yash-pouranik, waiting for dear CodeRabbit.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
sdks/urbackend-sdk/src/modules/storage.ts (1)

89-105: ⚠️ Potential issue | 🟡 Minor

Add typeof Blob !== "undefined" guard and preserve file.name in the Blob branch.

The unguarded instanceof Blob check on line 95 will throw ReferenceError in Node.js 16 and earlier (where Blob is not a global). Additionally, the Blob branch should preserve resolvedName from file.name as a fallback, since in constrained runtimes where the File global is undefined, a File instance may fall through to this branch and lose its name.

Proposed fix
-    } else if (file instanceof Blob) {
-        contentType = file.type || contentType;
-        fileSize = file.size;
-        fileData = file;
+    } else if (typeof Blob !== "undefined" && file instanceof Blob) {
+        resolvedName = filename || (file as { name?: string }).name || resolvedName;
+        contentType = file.type || contentType;
+        fileSize = file.size;
+        fileData = file;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdks/urbackend-sdk/src/modules/storage.ts` around lines 89 - 105, The Blob
instanceof check is unguarded and can throw in runtimes without a global Blob,
and the Blob branch currently doesn't preserve a filename; update the condition
to check typeof Blob !== "undefined" && file instanceof Blob, and in that
Blob-handling branch set resolvedName = resolvedName || (file as any).name (or
use filename || file.name fallback) so the original file name is retained when a
File falls through to the Blob branch; adjust the Blob branch to also set
contentType, fileSize, and fileData (as currently done) while preserving
resolvedName.
🧹 Nitpick comments (7)
sdks/urbackend-sdk/tests/storage.test.ts (2)

8-8: Stale test name.

The test no longer sends FormData; it now posts JSON for the upload-request step and uses PUT with a raw body for the signed URL upload. Consider renaming to reflect the new 3-step presigned flow (e.g., 'upload performs 3-step presigned flow and returns { url, path }') to avoid confusing future readers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdks/urbackend-sdk/tests/storage.test.ts` at line 8, Rename the test
description to accurately reflect the new 3-step presigned flow: change the test
title string in the test block that currently reads 'upload sends FormData and
returns { url, path }' to something like 'upload performs 3-step presigned flow
and returns { url, path }' so it matches the updated behavior in the upload test
in storage.test.ts (the test function with that title).

39-46: PUT assertion doesn't verify the body was forwarded.

The second-call assertion checks method and headers but not that fileData (the Buffer) is actually passed as the body to the signed URL. A regression that dropped the body or sent undefined would still pass this test. Consider adding body: expect.anything() (or a more specific matcher) to the objectContaining to lock down that contract.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdks/urbackend-sdk/tests/storage.test.ts` around lines 39 - 46, The test's
second fetchMock assertion (in storage.test.ts) only checks method and headers
and should also assert the request body was forwarded; update the
expect.objectContaining for fetchMock's 2nd call (the PUT to
'https://signed.example/upload') to include a body matcher such as body:
expect.anything() or body: fileData (or expect.equal(fileData)) so the Buffer
passed as the upload body is verified.
apps/public-api/src/__tests__/storage.controller.test.js (1)

356-436: Good coverage of the new presigned flow.

Tests cover: non-numeric size validation, happy-path signed URL generation (including that .. in filenames isn't incorrectly rejected as traversal), $expr-guarded quota charging on confirm, declared-vs-verified size mismatch, and the external/missing-public-URL warning path. This matches the controller logic well.

One optional gap to consider adding later: a test for confirmUpload rejecting a filePath that normalizes into a different project (e.g., other_project_id/file.txt or project_id_1/../other/file.txt) to lock down normalizeProjectPath's ownership check. Not a blocker.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/__tests__/storage.controller.test.js` around lines 356 -
436, Add a test to ensure confirmUpload rejects filePath values that normalize
to a different project to protect ownership checks: create a case in the
requestUpload/confirmUpload suite that calls storageController.confirmUpload
with req.body.filePath set to values like "other_project_id/file.txt" and
"project_id_1/../other/file.txt", mock normalizeProjectPath (or rely on real
normalize function) and assert a 400/forbidden response and error message;
reference the confirmUpload handler and normalizeProjectPath behavior so the
test verifies the ownership guard preventing cross-project normalization
escapes.
sdks/urbackend-sdk/src/modules/storage.ts (2)

125-131: Failed/abandoned confirm step leaves orphan objects in storage.

If the PUT succeeds but upload-confirm fails (network drop, client process killed, server 5xx), the file exists on Supabase/S3/R2 with no server record and no quota charge — a deliberate design choice per the PR description, but it does produce orphans over time.

Worth tracking as a follow-up: either (a) a server-side sweeper that lists storage under ${projectId}/ and deletes objects not referenced by any DB record older than N minutes, or (b) a client-side retry with backoff around the confirm call before surfacing the error. Not required for this PR.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdks/urbackend-sdk/src/modules/storage.ts` around lines 125 - 131, The
confirm step ("POST" "/api/storage/upload-confirm") can fail after a successful
PUT leaving orphaned objects; add a client-side retry-with-backoff around the
this.client.request<UploadResponse>(...) call that posts to
"/api/storage/upload-confirm" to reduce orphan creation (e.g., exponential
backoff with a small max attempts and jitter), and surface the error only after
retries exhaust; alternatively note adding a server-side sweeper to
garbage-collect unreferenced objects as a follow-up. Ensure you modify the
method that performs the confirm call (the code calling this.client.request for
"upload-confirm") so retries are scoped to that request.

115-123: Direct PUT has no timeout/abort and throws a plain Error.

Two small concerns on the cloud PUT leg:

  1. fetch(signedUrl, ...) has no AbortSignal/timeout. For large files on flaky networks this can hang indefinitely on the client with no way to cancel. Consider threading an optional AbortSignal through upload() (or at least wrapping with a configurable timeout) so callers can bound the wait.
  2. The other SDK failure paths surface as StorageError (see the 'StorageError thrown on failure' test), but this path throws a generic Error. Throwing a StorageError-typed error here would keep catch blocks in consumer code uniform.

Neither is a release blocker, but both will be noticeable once real users hit large-file edge cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdks/urbackend-sdk/src/modules/storage.ts` around lines 115 - 123, The direct
PUT call using fetch(signedUrl, ...) in upload() lacks cancellation/timeout and
throws a plain Error; modify upload() to accept an optional AbortSignal (or
implement a configurable timeout using AbortController) and pass that signal to
fetch so callers can cancel long uploads, and replace the plain throw new
Error(...) for the failed putResponse path with a StorageError (using the
existing StorageError class) that includes HTTP status and statusText to match
other SDK error paths and keep catch handling uniform.
apps/public-api/src/controllers/storage.controller.js (2)

297-400: Response envelope doesn't match the project-wide API contract.

The new requestUpload and confirmUpload handlers return flat shapes like { signedUrl, token, filePath } and { error: "..." }, but per the repo's controller guideline all API endpoints should return { success: bool, data: {}, message: "" } and use AppError for errors rather than raw res.status(...).json({ error }).

The rest of this file already drifts from the guideline (so this PR isn't introducing the drift), but since these are brand-new endpoints consumed by the SDK, it's cheap to align them now before any external clients depend on the shape — otherwise the contract becomes load-bearing and harder to migrate later. At minimum, consider wrapping the success payloads in { success: true, data: { ... }, message: "..." } and routing errors through AppError.

As per coding guidelines: All API endpoints return: { success: bool, data: {}, message: "" }. Use AppError class for errors — never raw throw, never expose MongoDB errors to client.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/controllers/storage.controller.js` around lines 297 -
400, Both requestUpload and confirmUpload return flat JSON and raw error shapes;
update them to follow the project API contract by returning the envelope {
success: bool, data: {...}, message: "" } on success and throw/use AppError for
client errors (instead of res.status(...).json({ error })) so error middleware
handles status and payload. Specifically, in requestUpload replace the success
return to res.status(200).json({ success: true, data: { signedUrl, token,
filePath }, message: "Upload URL generated" }) and replace each early client
error branch (400/403/413) to throw new AppError("...", statusCode) (or
construct AppError with the message and code) instead of returning raw JSON; do
the same in confirmUpload: wrap the success response (message/path/provider/url)
into data and message, and convert all res.status(...).json({ error, details })
error responses to AppError throws so the centralized error handler emits the
proper envelope. Ensure catch blocks do not directly send raw JSON but rethrow
or convert to an AppError with a sanitized message.

313-317: TOCTOU between requestUpload quota peek and confirmUpload quota charge.

The peek here is purely advisory — between issuing the signed URL and the later confirmUpload, nothing prevents N parallel requestUpload calls from all seeing storageUsed + size <= limit and all receiving signed URLs. The first few confirms will succeed, but once the atomic $expr guard in confirmUpload trips, the remaining clients will get 403s with files already uploaded to the bucket (→ same orphan problem flagged below).

Two options:

  • Reserve on peek: do an atomic $inc under the same $expr guard here, keyed by a pending-upload token, and release/commit it in confirmUpload. Most correct, most code.
  • Accept it: document that under high concurrency the client may see a 403 on confirm and should delete locally; pair with the compensating-delete mitigation suggested on lines 370-371.

Not a blocker for v1, but worth a design note in the PR so it isn't silently inherited.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/controllers/storage.controller.js` around lines 313 -
317, The current TOCTOU occurs in requestUpload where the advisory quota peek
(calculating quotaLimit from effectiveLimit/SAFETY_MAX_BYTES and checking
project.storageUsed + numericSize) can race with confirmUpload's atomic $expr
guard, allowing many signed URLs to be issued but later rejected, leaving
orphaned bucket objects; fix by either implementing a reserve-on-peek mechanism:
atomically $inc a pending_uploads counter or store a pending-upload token on the
project record under the same $expr guard when issuing signed URLs and then
atomically commit (move pending to used) or release that token in confirmUpload,
or explicitly document and handle the race by returning a clear error to clients
and ensuring confirmUpload triggers a compensating-delete path; refer to
requestUpload, confirmUpload, project.storageUsed, numericSize,
SAFETY_MAX_BYTES, effectiveLimit, and the $expr guard when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/public-api/src/controllers/storage.controller.js`:
- Around line 370-371: In confirmUpload, ensure any return path after the cloud
PUT (e.g., the 500 verify failure in the verify step, the 400 size-mismatch, and
the 403 when result.matchedCount === 0) performs a best-effort compensating
delete of the uploaded object by calling
supabase.storage.from(bucket).remove([normalizedPath]) and swallowing any errors
before returning; add this delete call just prior to each non-2xx return (or
factor into a small helper invoked on error) so orphaned objects are removed
even if confirmUpload rejects.
- Around line 350-355: The verifyUploadedFile call inside the confirmUpload
handler currently lets a "File not found after upload" exception bubble up and
produce a 500; change confirmUpload to explicitly catch errors from
verifyUploadedFile (or inspect the thrown error/type/message) and, when the
error indicates the file is not yet present, return a 4xx (e.g. 409 Conflict or
404 Not Found) with a clear machine-readable error code (e.g. "UPLOAD_NOT_FOUND"
or "UPLOAD_NOT_READY") and a human message, otherwise rethrow or let other
errors produce the existing 500 path; update the response where actualSize is
used so that only real verification failures continue to produce 500 and
retryable "not found" cases produce the new 4xx response.
- Around line 362-372: confirmUpload currently uses Project.updateOne with an
$expr that always compares $storageUsed + actualSize <= $storageLimit, which
fails for storageLimit === -1 (unlimited). Change the update filter used in
confirmUpload so it allows unlimited plans by adding an OR condition: either
$storageLimit equals -1 OR the existing $expr comparison holds; keep the same
$inc update of storageUsed. Update the Project.updateOne call (the filter around
storageLimit/$expr and the $inc of storageUsed) to mirror the branching used in
uploadFile (where effectiveLimit === -1 is handled), and add a unit/integration
test that confirms confirmUpload succeeds for projects with storageLimit: -1.
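
A hedged sketch of what that relaxed filter might look like: match when the plan is unlimited (`storageLimit === -1`) OR the charged size still fits under the limit. Field names mirror the comment above; the surrounding `Project.updateOne` call and its `$inc` update are assumed, and `buildQuotaFilter` is a hypothetical helper.

```typescript
// Build the MongoDB filter for the quota-guarded $inc in confirmUpload,
// allowing unlimited plans to pass via the $or branch.
function buildQuotaFilter(projectId: string, actualSize: number) {
  return {
    _id: projectId,
    $or: [
      { storageLimit: -1 },
      { $expr: { $lte: [{ $add: ["$storageUsed", actualSize] }, "$storageLimit"] } },
    ],
  };
}
```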


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7c91e0ae-e613-4122-92a6-4f6bdd4d5464

📥 Commits

Reviewing files that changed from the base of the PR and between 8f4b03d and bdfa51e.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (8)
  • apps/public-api/src/__tests__/storage.controller.test.js
  • apps/public-api/src/controllers/storage.controller.js
  • apps/public-api/src/routes/storage.js
  • packages/common/package.json
  • packages/common/src/index.js
  • packages/common/src/utils/storage.manager.js
  • sdks/urbackend-sdk/src/modules/storage.ts
  • sdks/urbackend-sdk/tests/storage.test.ts
✅ Files skipped from review due to trivial changes (1)
  • packages/common/package.json
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/common/src/index.js
  • apps/public-api/src/routes/storage.js
  • packages/common/src/utils/storage.manager.js

Comment thread apps/public-api/src/controllers/storage.controller.js
Comment thread apps/public-api/src/controllers/storage.controller.js
Comment thread apps/public-api/src/controllers/storage.controller.js Outdated
@yash-pouranik

Lastly, @coderabbitai, can we merge this?
Also, is this file upload flow implemented for dashboard file uploads, or for the public APIs as well?

@coderabbitai

coderabbitai Bot commented Apr 23, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
echo "=== Public API storage routes ==="
fd -e js "storage" apps/public-api/src/routes/ --exec cat {}

echo "=== Dashboard app structure (if any) ==="
fd -t d "dashboard" --max-depth=3

echo "=== Apps directory structure ==="
tree -L 2 apps/ 2>/dev/null || ls -la apps/

echo "=== Controller exports ==="
rg -n "module.exports\." apps/public-api/src/controllers/storage.controller.js

echo "=== Middleware used on new routes ==="
rg -n "upload-request\|upload-confirm\|requestUpload\|confirmUpload" apps/public-api/src/routes/storage.js

Length of output: 2146


🏁 Script executed:

#!/bin/bash
echo "=== Dashboard API storage routes (if any) ==="
fd -e js "storage" apps/dashboard-api/src/ --exec cat {} 2>/dev/null || echo "No storage files in dashboard-api"

echo "=== Open review comments from coderabbit on PR ==="
gh pr view 129 --repo geturbackend/urBackend --json reviews,comments 2>/dev/null | jq '.comments | length' 2>/dev/null

echo "=== PR status ==="
gh pr view 129 --repo geturbackend/urBackend --json mergeable,mergeStateStatus,statusCheckRollup 2>/dev/null

echo "=== Any TODO/FIXME in new code ==="
rg -n "TODO\|FIXME\|HACK\|XXX" packages/common/src/utils/storage.manager.js apps/public-api/src/controllers/storage.controller.js

echo "=== confirmUpload handler ==="
sed -n '345,420p' apps/public-api/src/controllers/storage.controller.js

echo "=== requestUpload handler ==="
sed -n '308,345p' apps/public-api/src/controllers/storage.controller.js

Length of output: 7117


@yash-pouranik, here's a summary based on the current state of the PR:


🔀 Merge Readiness

The PR is technically mergeable — GitHub shows mergeStateStatus: CLEAN and all CI checks have passed:

| Check | Status |
| --- | --- |
| public-api-tests | ✅ SUCCESS |
| dashboard-api-tests | ✅ SUCCESS |
| sdk-tests | ✅ SUCCESS |
| frontend-lint | ✅ SUCCESS |
| CodeQL | ✅ SUCCESS |

However, there are 7 open review comments from this review that should be addressed (or explicitly acknowledged as won't-fix) before merging. These cover topics like path traversal edge cases, size-tolerance constants, error handling, and CORS configuration requirements. You can view them all in the Files changed tab.


📂 Dashboard or Public API?

This is public API only. Specifically:

  • The new /upload-request and /upload-confirm routes are wired exclusively inside apps/public-api/src/routes/storage.js, protected by verifyApiKey + requireSecretKey.
  • apps/dashboard-api/ has no storage routes at all — it was not touched.
  • The urbackend-sdk (sdks/urbackend-sdk) was updated so that developers using the SDK get the new presigned URL flow transparently — the public upload() method signature is unchanged.

In short: app developers calling the SDK or public API directly benefit from this change (no more server-proxied uploads). The web dashboard itself is unaffected.

@yash-pouranik

@coderabbitai create a follow-up issue to add the same flow on the dashboard side as well.
Merging this now after a little more review.

@coderabbitai

coderabbitai Bot commented Apr 23, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
gh issue create \
  --repo geturbackend/urBackend \
  --title "feat(dashboard): add presigned URL upload flow to dashboard API" \
  --assignee yash-pouranik \
  --body "## Summary

PR `#129` (https://github.com/geturbackend/urBackend/pull/129) introduced a presigned URL three-step upload architecture for the **public API** to avoid proxying large file bytes through the Node.js server. The dashboard side (\`apps/dashboard-api/\`) currently has no storage routes and was not included in that PR.

This issue tracks implementing the same flow for the dashboard.

## Work Required

- Add storage routes to \`apps/dashboard-api/\` (e.g., \`/upload-request\` and \`/upload-confirm\`).
- Wire in appropriate authentication/authorization middleware suitable for dashboard users (session-based or similar, rather than \`verifyApiKey\`/\`requireSecretKey\`).
- Reuse the shared \`getPresignedUploadUrl\` and \`verifyUploadedFile\` helpers already exported from \`@urbackend/common\` (added in PR `#129`).
- Apply the same quota-check-before and quota-charge-after-confirm pattern used in the public API controller.
- Add/update dashboard-specific tests.
- Document any required CORS PUT configuration for S3/R2 buckets if not already covered.

## References

- Implementing PR: https://github.com/geturbackend/urBackend/pull/129
- Requested by: `@yash-pouranik`
- Related comment: https://github.com/geturbackend/urBackend/pull/129#issuecomment-$(gh pr view 129 --repo geturbackend/urBackend --json comments --jq '.comments[-1].id' 2>/dev/null || echo 'see PR comments')"

Length of output: 52


@yash-pouranik, the follow-up issue has been created and assigned to you: #131

It covers:

  • Adding /upload-request and /upload-confirm routes to apps/dashboard-api/
  • Using dashboard-appropriate auth middleware (session-based, not API key)
  • Reusing the getPresignedUploadUrl and verifyUploadedFile helpers from @urbackend/common introduced in this PR
  • Applying the same quota-check-before / charge-after-confirm pattern
  • Dashboard-specific tests and CORS documentation

@yash-pouranik yash-pouranik self-requested a review April 23, 2026 19:08
@yash-pouranik yash-pouranik left a comment

Thank you @Nitin-kumar-yadav1307, merging this.

@yash-pouranik yash-pouranik merged commit fd05e9a into geturbackend:main Apr 23, 2026
7 checks passed
@yash-pouranik

@coderabbitai Does this PR also update the docs?

@coderabbitai

coderabbitai Bot commented Apr 23, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
echo "=== Files changed in PR `#129` ==="
gh pr view 129 --repo geturbackend/urBackend --json files --jq '.files[].path'

echo ""
echo "=== Any markdown/doc files in the repo ==="
fd -e md -e mdx -e rst -e txt --max-depth=4 | grep -iE "readme|doc|changelog|guide|api" | head -30

echo ""
echo "=== Check if any AGENTS.md or docs dir exists ==="
fd -e md "AGENTS" 2>/dev/null | head -5
fd -t d "docs" --max-depth=3 2>/dev/null | head -10

Length of output: 1502


🏁 Script executed:

#!/bin/bash
echo "=== mintlify storage guide ==="
cat mintlify/docs/guides/storage.mdx

echo ""
echo "=== docs-legacy storage ==="
cat docs-legacy/storage.md

Length of output: 4888


@yash-pouranik, No — this PR does not update any documentation files.

The following docs exist and are relevant, but were not touched:

| File | What it covers | What's missing |
| --- | --- | --- |
| mintlify/docs/guides/storage.mdx | Only the old POST /api/storage/upload (multipart) flow | New /upload-request + /upload-confirm endpoints |
| docs-legacy/storage.md | Same — only the old multipart flow | Same gap |

The two new public endpoints added by this PR are completely undocumented:

  • POST /api/storage/upload-request — returns a presigned URL + token
  • POST /api/storage/upload-confirm — verifies upload and charges quota, returns public URL

The SDK change is transparent to SDK users (same upload() signature), but developers calling the REST API directly have no guidance on the new three-step flow, required request bodies, response shapes, or the S3/R2 CORS-PUT bucket configuration requirement.
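For developers calling the REST API directly, the three-step flow could look like the sketch below. The endpoint paths and request/response field names follow those quoted in this thread; the base URL, the `x-api-key` auth header, and the exact response shapes are assumptions.

```javascript
// Sketch: direct REST usage of the presigned upload flow, mirroring what
// the SDK's upload() does internally. fetchImpl is injectable for testing.
async function directUpload(baseUrl, apiKey, file, fetchImpl = fetch) {
  const headers = { "x-api-key": apiKey, "content-type": "application/json" };

  // Step 1: ask the server for a signed URL (quota checked, not charged).
  const reqRes = await fetchImpl(`${baseUrl}/api/storage/upload-request`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      filename: file.name,
      contentType: file.type,
      size: file.size,
    }),
  });
  const { signedUrl, filePath } = await reqRes.json();

  // Step 2: PUT the bytes straight to storage; the API server stays idle.
  await fetchImpl(signedUrl, {
    method: "PUT",
    headers: { "content-type": file.type },
    body: file.bytes,
  });

  // Step 3: confirm, so the server can verify the object and charge quota.
  const confRes = await fetchImpl(`${baseUrl}/api/storage/upload-confirm`, {
    method: "POST",
    headers,
    body: JSON.stringify({ filePath, size: file.size }),
  });
  return confRes.json(); // e.g. { url, provider }
}
```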

Before merging, both mintlify/docs/guides/storage.mdx and docs-legacy/storage.md should be updated to document the new endpoints — or at minimum, a follow-up issue should be opened to track it. Would you like me to open a follow-up issue for the docs update?


✏️ Learnings added
Learnt from: yash-pouranik
URL: https://github.com/geturbackend/urBackend/pull/129

Timestamp: 2026-04-23T19:32:31.401Z
Learning: In the geturbackend/urBackend repo, PR `#129` introduced two new public storage REST endpoints (POST /api/storage/upload-request and POST /api/storage/upload-confirm) as part of a presigned URL upload flow, but did not update docs in mintlify/docs/guides/storage.mdx or docs-legacy/storage.md. Future PRs touching storage should update both doc files.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: CR
Repo: geturbackend/urBackend PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-21T21:00:02.427Z
Learning: When touching docs, update both repo docs and in-dashboard docs if the user-facing flow changes.

@yash-pouranik
Collaborator

yash-pouranik commented Apr 23, 2026

I have already merged this.
Please create an issue to update the docs for PR #129

@coderabbitai

@coderabbitai
Contributor

coderabbitai Bot commented Apr 23, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
gh issue create \
  --repo geturbackend/urBackend \
  --title "docs(storage): document presigned URL upload endpoints introduced in PR #129" \
  --assignee yash-pouranik \
  --body "## Summary

PR #129 (https://github.com/geturbackend/urBackend/pull/129) replaced the server-proxied upload flow with a presigned URL three-step architecture and added two new public REST endpoints. Neither of the existing documentation files was updated as part of that PR.

## Files to Update

- \`mintlify/docs/guides/storage.mdx\`
- \`docs-legacy/storage.md\`

## What Needs to Be Documented

### New endpoints

**POST \`/api/storage/upload-request\`**
- Request body: \`{ filename, contentType, size }\`
- Response: \`{ signedUrl, token, filePath }\`
- Notes: Enforces 10 MB max, performs quota headroom check without charging.

**POST \`/api/storage/upload-confirm\`**
- Request body: \`{ filePath, size }\`
- Response: \`{ url, provider }\` (or a warning when public URL is unavailable for external providers)
- Notes: Verifies the object exists in storage, size matches declared size, then charges quota atomically.

### Three-step upload flow

1. Client calls \`POST /api/storage/upload-request\` → receives \`signedUrl\`, \`token\`, \`filePath\`.
2. Client uploads binary directly to storage via \`PUT signedUrl\` with correct \`Content-Type\` header.
3. Client calls \`POST /api/storage/upload-confirm\` with \`filePath\` and \`size\` → receives public \`url\`.

### S3/R2 CORS requirement

S3/R2 buckets must have a CORS policy that allows \`PUT\` requests from the client's origin. This is a bucket-level configuration and is required for the presigned URL flow to work.

### SDK note

SDK users (urbackend-sdk) get this flow transparently — the public \`upload()\` method signature is unchanged.

## References

- Merged PR: https://github.com/geturbackend/urBackend/pull/129
- Requested by: @yash-pouranik
- Related comment: https://github.com/geturbackend/urBackend/pull/129"

Length of output: 52


@yash-pouranik, the follow-up issue has been created and assigned to you: #132

It covers updating both doc files with:

  • mintlify/docs/guides/storage.mdx and docs-legacy/storage.md
  • The new POST /api/storage/upload-request and POST /api/storage/upload-confirm endpoints (request/response shapes, notes)
  • The full three-step presigned URL flow walkthrough
  • S3/R2 CORS PUT configuration requirement
  • A note that SDK users get the new flow transparently
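The S3/R2 CORS PUT requirement called out above typically means a bucket rule along these lines (a sketch in the JSON shape accepted by the S3 and R2 consoles; the origin is a placeholder and must be replaced with the client application's actual origin):

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["content-type"],
    "MaxAgeSeconds": 3600
  }
]
```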

🧠 Learnings used
Learnt from: yash-pouranik
Repo: geturbackend/urBackend PR: 0
File: :0-0
Timestamp: 2026-04-23T19:32:31.401Z
Learning: In the geturbackend/urBackend repo, PR `#129` introduced two new public storage REST endpoints (POST /api/storage/upload-request and POST /api/storage/upload-confirm) as part of a presigned URL upload flow, but did not update docs in mintlify/docs/guides/storage.mdx or docs-legacy/storage.md. Future PRs touching storage should update both doc files.

Learnt from: CR
Repo: geturbackend/urBackend PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-21T21:00:02.427Z
Learning: When touching docs, update both repo docs and in-dashboard docs if the user-facing flow changes.



Development

Successfully merging this pull request may close these issues.

Refactor Storage Uploads to use Presigned URLs
