
Refactor Storage Uploads to use Presigned URLs #122

@yash-pouranik

Description


The Problem

Currently, the uploadFile endpoint in apps/public-api/src/controllers/storage.controller.js behaves as a proxy.

  1. The client sends the entire file to the Node.js server.
  2. The Node.js server holds the file in memory (req.file).
  3. The Node.js server uploads the file to Supabase/S3.
  4. The server returns the final URL.

Why this is bad:
This places massive strain on the server's CPU, memory, and bandwidth. As traffic scales, concurrent large file uploads will block the Node.js event loop and cause extreme server latency or memory crashes (OOM errors).

The Solution (Direct Upload via Signed URLs)

We need to shift to a Signed URL (Presigned URL) architecture. The server should never touch the actual file bytes. Instead, the server acts as an authoritative gatekeeper—generating a secure, temporary URL that allows the client to upload the file directly to the cloud provider.

The New Flow

  1. Client requests upload URL (POST /api/storage/upload-request)

    • Client sends { filename: "image.png", contentType: "image/png", size: 1048576 }.
    • Server checks if file.size exceeds MAX_FILE_SIZE.
    • Server checks quota (project.storageUsed + size <= project.storageLimit).
    • Server determines the storage provider:
      • For Supabase: Server generates a Presigned Upload URL using supabase.storage.from(bucket).createSignedUploadUrl(filePath).
      • For AWS S3 or Cloudflare R2: Server uses AWS SDK v3 (@aws-sdk/client-s3 and @aws-sdk/s3-request-presigner) to generate a PutObjectCommand presigned URL.
    • Server returns the signed URL and the filePath to the client.
  2. Client uploads directly (Browser -> Cloud)

    • Client performs a standard PUT request, sending the raw file bytes directly to the signed URL provided. The urbackend server does no work during this step: zero load.
  3. Client confirms upload (POST /api/storage/upload-confirm)

    • Once the cloud upload is complete, the client pings the server with { filePath: "project_id/image.png", size: 1048576 }.
    • Server verifies the file actually exists on the external cloud:
      • For Supabase: Uses supabase.storage.from(bucket).info(filePath) or .list().
      • For S3 / R2: Uses AWS SDK HeadObjectCommand to check for a 200 OK response with a ContentLength matching the uploaded size.
    • Server permanently increments the quota: Project.updateOne({ $inc: { storageUsed: size } }).
    • Server responds with the final publicUrl.
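The size and quota checks in step 1, and the quota increment in step 3, can be sketched as plain functions. This is an illustrative sketch only: MAX_FILE_SIZE, validateUploadRequest, and applyConfirmedUpload are hypothetical names, not the actual code in apps/public-api/src/controllers/storage.controller.js.

```javascript
// Sketch of the server-side bookkeeping around the presigned flow.
// All names here are illustrative, not the real controller's API.

const MAX_FILE_SIZE = 50 * 1024 * 1024; // example cap: 50 MB (assumed value)

// Step 1: validate an upload request against the size cap and project quota.
function validateUploadRequest({ size }, project) {
  if (size > MAX_FILE_SIZE) {
    return { ok: false, reason: 'FILE_TOO_LARGE' };
  }
  if (project.storageUsed + size > project.storageLimit) {
    return { ok: false, reason: 'QUOTA_EXCEEDED' };
  }
  return { ok: true };
}

// Step 3: once the object is verified on the cloud provider, the quota is
// incremented permanently (persisted in Mongo via
// Project.updateOne({ $inc: { storageUsed: size } })).
function applyConfirmedUpload(project, size) {
  return { ...project, storageUsed: project.storageUsed + size };
}
```

Keeping the increment in the confirm step (not the request step) is what makes abandoned uploads harmless to the quota, as noted in the gotchas below.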

Acceptance Criteria

  • Create POST /api/storage/upload-request to replace the multipart form ingest.
  • Update SDK (@urbackend/sdk) to handle the 3-step presigned flow seamlessly so the developer experience remains urbackend.storage.upload(file).
  • Remove multer or heavy file payload parsers from the standard storage routes.
  • Ensure quota is cleanly managed (accounting for abandoned uploads if necessary).
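To keep the developer experience at urbackend.storage.upload(file), the SDK would chain the three steps internally. The sketch below is an assumption about how that wrapper could look, not the actual @urbackend/sdk internals: the http parameter (a fetch-like function), the endpoint paths, and the response shapes are all placeholders.

```javascript
// Illustrative sketch of the 3-step presigned flow an SDK wrapper could hide.
// `http(url, opts)` is an injected fetch-like function that resolves to the
// parsed response body; endpoint and field names are assumptions.
async function uploadViaPresignedFlow(file, http) {
  // Step 1: ask the server for a signed upload URL.
  const { signedUrl, filePath } = await http('/api/storage/upload-request', {
    method: 'POST',
    body: { filename: file.name, contentType: file.type, size: file.size },
  });

  // Step 2: PUT the raw bytes straight to the cloud provider.
  await http(signedUrl, { method: 'PUT', body: file.bytes });

  // Step 3: confirm, letting the server verify the object and update quota.
  const { publicUrl } = await http('/api/storage/upload-confirm', {
    method: 'POST',
    body: { filePath, size: file.size },
  });

  return publicUrl;
}
```

Injecting the HTTP client this way also makes the 3-step sequencing easy to unit-test without a live bucket.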

Technical Gotchas (For the Dev)

  • Supabase Presigned: Supabase natively supports createSignedUploadUrl(path).
  • AWS S3 / Cloudflare R2 Presigned: You will need to install @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner. Create an S3Client with the user's R2/S3 endpoint, access key, and secret key. Then construct a PutObjectCommand and use getSignedUrl(s3Client, command, { expiresIn: 3600 }).
  • CORS Configuration: Whether it is Supabase, S3, or R2, the user's cloud bucket MUST have CORS configured to allow PUT requests directly from the browser's domain, otherwise the direct browser upload will be blocked.
  • Abandoned Uploads: If a client requests a signed URL but aborts the upload, the server shouldn't permanently consume their quota. To fix this, only increment the storageUsed in the DB after the upload-confirm step, OR run a daily cron-job that syncs DB quota with actual external bucket sizes.
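For the CORS gotcha, an S3/R2 bucket CORS policy allowing direct browser PUTs might look roughly like this (the origin and headers are placeholders for the user's own values; Supabase-hosted buckets configure CORS through their own dashboard instead):

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["Content-Type"],
    "MaxAgeSeconds": 3600
  }
]
```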
