The Problem
Currently, the `uploadFile` endpoint in `apps/public-api/src/controllers/storage.controller.js` behaves as a proxy:
- The client sends the entire file to the Node.js server.
- The Node.js server holds the file in memory (`req.file`).
- The Node.js server uploads the file to Supabase/S3.
- The server returns the final URL.
Why this is bad:
This places massive strain on the server's CPU, memory, and bandwidth. As traffic scales, concurrent large file uploads will tie up the Node.js event loop and cause severe latency or out-of-memory (OOM) crashes.
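The cost comes from buffering the whole request body before it can be forwarded to storage. A minimal sketch (a hypothetical illustration of the pattern, not the actual controller code; this is roughly what `multer`'s memory storage does before `req.file` is handed to the upload call):

```javascript
// Hypothetical sketch of why proxied uploads are expensive: the entire
// body is accumulated in server memory before it can be re-uploaded.
function collectBody(stream) {
  // Every chunk stays referenced until the stream ends, so a 100 MB
  // file costs at least 100 MB of heap -- per concurrent request.
  return new Promise((resolve, reject) => {
    const chunks = [];
    stream.on('data', (chunk) => chunks.push(chunk));
    stream.on('end', () => resolve(Buffer.concat(chunks)));
    stream.on('error', reject);
  });
}
```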
The Solution (Direct Upload via Signed URLs)
We need to shift to a Signed URL (Presigned URL) architecture. The server should never touch the actual file bytes. Instead, the server acts as an authoritative gatekeeper—generating a secure, temporary URL that allows the client to upload the file directly to the cloud provider.
The New Flow
- Client requests upload URL (`POST /api/storage/upload-request`)
  - Client sends `{ filename: "image.png", contentType: "image/png", size: 1048576 }`.
  - Server checks whether `size` exceeds `MAX_FILE_SIZE`.
  - Server checks quota (`project.storageUsed + size <= project.storageLimit`).
  - Server determines the storage provider:
    - For Supabase: generates a presigned upload URL via `supabase.storage.from(bucket).createSignedUploadUrl(filePath)`.
    - For AWS S3 or Cloudflare R2: uses AWS SDK v3 (`@aws-sdk/client-s3` and `@aws-sdk/s3-request-presigner`) to generate a `PutObjectCommand` presigned URL.
  - Server returns the signed URL and the `filePath` to the client.
- Client uploads directly (Browser -> Cloud)
  - Client performs a standard `PUT` request, carrying the raw file bytes, directly to the signed URL. The urBackend server does no work during this step: zero load.
- Client confirms upload (`POST /api/storage/upload-confirm`)
  - Once the cloud upload completes, the client pings the server with `{ filePath: "project_id/image.png", size: 1048576 }`.
  - Server verifies the file actually exists in the external bucket:
    - For Supabase: uses `supabase.storage.from(bucket).info(filePath)` or `.list()`.
    - For S3 / R2: uses the AWS SDK `HeadObjectCommand` and checks for a 200 OK response with a `ContentLength` matching the uploaded size.
  - Server permanently increments the quota: `Project.updateOne({ $inc: { storageUsed: size } })`.
  - Server responds with the final `publicUrl`.
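The size and quota checks in step 1 can be sketched as a pure helper (a sketch, assuming a Mongoose-style project document with the `storageUsed`/`storageLimit` fields named above; the `MAX_FILE_SIZE` value and the function name are illustrative placeholders):

```javascript
// Assumed placeholder limit -- the real value lives in server config.
const MAX_FILE_SIZE = 50 * 1024 * 1024; // 50 MB

// Validates an upload-request body against the project's limits before
// any signed URL is generated. Returns either an error or the filePath
// the signed URL should be issued for.
function validateUploadRequest(project, { filename, size }) {
  if (!filename || !Number.isInteger(size) || size <= 0) {
    return { ok: false, error: 'invalid request' };
  }
  if (size > MAX_FILE_SIZE) {
    return { ok: false, error: 'file exceeds MAX_FILE_SIZE' };
  }
  if (project.storageUsed + size > project.storageLimit) {
    return { ok: false, error: 'storage quota exceeded' };
  }
  return { ok: true, filePath: `${project.id}/${filename}` };
}
```

The handler would reject with an appropriate 4xx status on the error cases, so no signed URL is ever issued for an upload that could not be confirmed later.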
Acceptance Criteria
- Implement `POST /api/storage/upload-request` to replace the multipart form ingest.
- Update the client SDK (`@urbackend/sdk`) to handle the 3-step presigned flow seamlessly, so the developer experience remains `urbackend.storage.upload(file)`.
- Remove `multer` and other heavy file payload parsers from the standard storage routes.
Technical Gotchas (For the Dev)
- Supabase Presigned: Supabase natively supports `createSignedUploadUrl(path)`.
- AWS S3 / Cloudflare R2 Presigned: You will need to install `@aws-sdk/client-s3` and `@aws-sdk/s3-request-presigner`. Create an `S3Client` with the user's R2/S3 endpoint, access key, and secret key. Then construct a `PutObjectCommand` and use `getSignedUrl(s3Client, command, { expiresIn: 3600 })`.
- CORS Configuration: Whether it is Supabase, S3, or R2, the user's cloud bucket MUST have CORS configured to allow `PUT` requests directly from the browser's domain, otherwise the direct browser upload will be blocked.
- Abandoned Uploads: If a client requests a signed URL but aborts the upload, the server shouldn't permanently consume their quota. To fix this, only increment `storageUsed` in the DB after the upload-confirm step, OR run a daily cron job that syncs the DB quota with the actual external bucket sizes.
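For the CORS gotcha on S3/R2, a minimal bucket policy allowing the direct browser `PUT` might look like the following (the shape accepted by `aws s3api put-bucket-cors`; the origin is a placeholder for the app's real domain):

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://app.example.com"],
      "AllowedMethods": ["PUT"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3600
    }
  ]
}
```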