feat: implement multi-tier plan system and usage enforcement (Free/Pro) #128
yash-pouranik merged 14 commits into main
Conversation
📝 Walkthrough

Adds plan-aware billing fields, utilities, Redis caching, middleware, and route wiring to enforce project/collection/storage/mail/request limits across dashboard and public APIs; introduces transactional project/collection creation, plan-aware quota checks, an AppError type, and a backfill script for existing developers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant DashboardAPI as Dashboard API
    participant PlanMW as Plan Enforcement MW
    participant Redis as Redis Cache
    participant MongoDB as MongoDB
    Client->>DashboardAPI: POST /projects
    DashboardAPI->>PlanMW: attachDeveloper
    PlanMW->>MongoDB: Find Developer by user ID
    MongoDB-->>PlanMW: Developer record
    PlanMW->>Redis: getDeveloperPlanCache(id)
    alt cache hit
        Redis-->>PlanMW: cached plan context
    else
        PlanMW->>PlanMW: resolveEffectivePlan & getPlanLimits
        PlanMW->>Redis: setDeveloperPlanCache(id, context)
    end
    PlanMW->>MongoDB: count Projects (session)
    MongoDB-->>PlanMW: project count
    alt limit exceeded
        PlanMW-->>Client: 403 Upgrade required
    else
        PlanMW->>DashboardAPI: proceed to createProject (with DB session)
        DashboardAPI->>MongoDB: create Project (session)
        MongoDB-->>DashboardAPI: Created
        DashboardAPI-->>Client: 201 Created
    end
```

```mermaid
sequenceDiagram
    participant Client
    participant PublicAPI as Public API
    participant UsageMW as checkUsageLimits
    participant Redis as Redis
    participant MongoDB as MongoDB
    Client->>PublicAPI: POST /data
    PublicAPI->>UsageMW: checkUsageLimits
    UsageMW->>Redis: getDeveloperPlanCache(owner)
    alt cache hit
        Redis-->>UsageMW: planLimits
    else
        UsageMW->>MongoDB: Load Developer by owner
        MongoDB-->>UsageMW: Developer record
        UsageMW->>UsageMW: resolveEffectivePlan & merge limits
        UsageMW->>Redis: setDeveloperPlanCache(owner, limits)
    end
    UsageMW->>Redis: INCR per-minute counter (short TTL)
    alt per-minute exceeded
        UsageMW-->>Client: 429 Too Many Requests
    else
        UsageMW->>Redis: INCR per-day counter (day TTL)
        alt daily exceeded
            UsageMW-->>Client: 429 Too Many Requests
        else
            UsageMW->>PublicAPI: attach req.planLimits
            PublicAPI-->>Client: 200 Success
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Pull request overview
Implements a Free/Pro multi-tier plan system across the common package and both public/dashboard APIs, adding quota enforcement, feature gating, and supporting DB/backfill tooling.
Changes:
- Added centralized plan limits utilities (`planLimits`) and an operational `AppError` type in `@urbackend/common`.
- Implemented plan-aware enforcement: public API usage throttling/quotas via Redis (`usageGate`) and dashboard-side plan guardrails (`planEnforcement`).
- Extended data models to support plan fields on developers and per-project enterprise overrides (`customLimits`), plus added backfill and global mail template seed data.
Reviewed changes
Copilot reviewed 19 out of 20 changed files in this pull request and generated 8 comments.
Show a summary per file
| File | Description |
|---|---|
| tools/db-queries/global-mail-templates.json | Adds seed data for global/system mail templates. |
| tools/db-queries/backfill-developer-plans.js | Adds a script to backfill existing Developer docs with plan: free. |
| packages/common/src/utils/planLimits.js | Introduces central plan limit definitions and limit-merging helpers. |
| packages/common/src/utils/AppError.js | Adds a shared operational error type for consistent HTTP status propagation. |
| packages/common/src/redis/redisCaching.js | Adds Redis helpers for caching developer plan context. |
| packages/common/src/models/Project.js | Adds customLimits for enterprise/project-level overrides. |
| packages/common/src/models/Developer.js | Adds plan lifecycle fields (plan, activation/expiry, trialUsed). |
| packages/common/src/index.js | Re-exports plan utilities and AppError from @urbackend/common. |
| apps/public-api/src/utils/mailLimit.js | Makes monthly mail limits plan-aware with a safe fallback. |
| apps/public-api/src/routes/storage.js | Replaces old rate limiter with plan-aware checkUsageLimits. |
| apps/public-api/src/routes/mail.js | Replaces old rate limiter with plan-aware checkUsageLimits. |
| apps/public-api/src/routes/data.js | Replaces old rate limiter with plan-aware checkUsageLimits. |
| apps/public-api/src/middlewares/usageGate.js | Adds Redis-backed per-minute and daily usage enforcement + plan context caching. |
| apps/public-api/src/controllers/storage.controller.js | Makes internal storage quota enforcement plan-aware. |
| apps/public-api/src/controllers/mail.controller.js | Adds plan gating for mail template usage; uses plan-aware monthly limits. |
| apps/public-api/src/app.js | Preserves AppError HTTP status codes in the global error handler. |
| apps/dashboard-api/src/routes/projects.js | Adds plan enforcement middleware hooks for project/collection/BYOK routes. |
| apps/dashboard-api/src/middlewares/planEnforcement.js | Adds dashboard-side plan enforcement and BYOK gating. |
| apps/dashboard-api/src/controllers/project.controller.js | Removes inline project-limit checks in favor of middleware. |
| apps/dashboard-api/src/controllers/analytics.controller.js | Returns plan-derived limits in global stats response. |
```js
// We use INCRBY 0 to just read the current count if we don't want to double count,
// BUT actually, we should increment it here and skip it in the logger to be precise.
// For now, let's just check the current value.
const currentDailyCount = parseInt(await redis.get(reqCountKey) || '0');

if (limits.reqPerDay !== -1 && currentDailyCount >= limits.reqPerDay) {
```
Daily quota enforcement reads the current Redis counter with GET, which is not atomic and can allow concurrent requests to exceed reqPerDay (multiple requests can observe the same count before the logger increments it). Use an atomic INCR (e.g., incrWithTtlAtomic) and compare the returned value against reqPerDay (optionally rolling back with DECR when over limit), and use DEFAULT_DAILY_TTL_SECONDS for TTL consistency.
```diff
- // We use INCRBY 0 to just read the current count if we don't want to double count,
- // BUT actually, we should increment it here and skip it in the logger to be precise.
- // For now, let's just check the current value.
- const currentDailyCount = parseInt(await redis.get(reqCountKey) || '0');
- if (limits.reqPerDay !== -1 && currentDailyCount >= limits.reqPerDay) {
+ // Atomically reserve one daily request slot so concurrent requests
+ // cannot all pass based on the same stale counter value.
+ const currentDailyCount = await incrWithTtlAtomic(
+   redis,
+   reqCountKey,
+   DEFAULT_DAILY_TTL_SECONDS
+ );
+ if (limits.reqPerDay !== -1 && currentDailyCount > limits.reqPerDay) {
+   // Roll back the reservation so rejected requests do not consume quota.
+   await redis.decr(reqCountKey);
```
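The reserve-then-rollback pattern the comment suggests can be sketched against an in-memory stand-in for Redis, so the logic is self-contained and testable. `reserveDailySlot`, `makeFakeRedis`, and `DEFAULT_DAILY_TTL_SECONDS` are illustrative names assumed here, not the repo's actual helpers:

```javascript
// Minimal sketch of the suggested INCR-first daily quota check. The fake
// client mimics only the Redis commands the sketch needs (INCR/DECR/EXPIRE).
const DEFAULT_DAILY_TTL_SECONDS = 24 * 60 * 60;

function makeFakeRedis() {
  const counts = new Map();
  const ttls = new Map();
  return {
    async incr(key) {
      const next = (counts.get(key) ?? 0) + 1;
      counts.set(key, next);
      return next;
    },
    async decr(key) {
      const next = (counts.get(key) ?? 0) - 1;
      counts.set(key, next);
      return next;
    },
    async expire(key, seconds) {
      ttls.set(key, seconds);
    },
  };
}

// INCR first, compare after: every concurrent request reserves its own slot,
// so no two requests can pass based on the same stale pre-increment count.
async function reserveDailySlot(redis, key, reqPerDay) {
  const count = await redis.incr(key);
  if (count === 1) await redis.expire(key, DEFAULT_DAILY_TTL_SECONDS);
  if (reqPerDay !== -1 && count > reqPerDay) {
    await redis.decr(key); // roll back so rejected requests don't burn quota
    return false;
  }
  return true;
}
```

With a limit of 2, the first two calls return true and every later call returns false; as the review notes, the real middleware would additionally have to skip the logger's duplicate increment.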
```js
if (usingTemplate) {
  const limits = req.planLimits || {};
  if (limits.mailTemplatesEnabled === false) {
    return res.status(403).json({
      success: false,
      data: {},
      message: "Pre-designed Email Templates are a Pro feature. Please upgrade to use this functionality."
    });
  }
```
Template gating currently allows templates when req.planLimits is missing (limits.mailTemplatesEnabled is undefined), which can happen if usageGate fails open on Redis/DB issues. Default to a computed Free-tier limits object when req.planLimits is absent so Pro-only features don’t become available during cache/Redis outages.
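A fail-closed variant of that gate can be sketched as follows; `FREE_PLAN_LIMITS` is a hypothetical stand-in for whatever `getPlanLimits({ plan: 'free' })` returns in the repo:

```javascript
// Fail-closed gating sketch: a missing req.planLimits (e.g. usageGate failed
// open during a Redis outage) defaults to Free-tier limits instead of
// silently enabling Pro-only features. FREE_PLAN_LIMITS is an assumed shape.
const FREE_PLAN_LIMITS = Object.freeze({ mailTemplatesEnabled: false });

function canUseMailTemplates(req) {
  const limits = req.planLimits ?? FREE_PLAN_LIMITS;
  // Require the flag to be explicitly true, so undefined no longer
  // means "allow".
  return limits.mailTemplatesEnabled === true;
}
```

Note the inversion: the original code blocks only on `=== false`, which treats `undefined` as allowed; requiring `=== true` makes the outage path deny by default.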
```js
  { $inc: { storageUsed: file.size } }
);
const limits = req.planLimits || {};
const effectiveLimit = limits.storageBytes || project.storageLimit || 20 * 1024 * 1024;
```
effectiveLimit uses limits.storageBytes || project.storageLimit || ..., which treats 0 as “unset” and will ignore an explicit override. Prefer nullish coalescing (??) when choosing between numeric limits so that 0 (or other falsy numeric values) is respected.
```diff
- const effectiveLimit = limits.storageBytes || project.storageLimit || 20 * 1024 * 1024;
+ const effectiveLimit = limits.storageBytes ?? project.storageLimit ?? 20 * 1024 * 1024;
```
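The difference is easy to see with an explicit 0 override; this sketch mirrors the variable shapes from the snippet above:

```javascript
// Why ?? matters: a numeric override of 0 is a real value, but || treats any
// falsy number as "unset" and falls through to the next candidate.
const DEFAULT_STORAGE_BYTES = 20 * 1024 * 1024;

function effectiveLimitWithOr(limits, project) {
  return limits.storageBytes || project.storageLimit || DEFAULT_STORAGE_BYTES;
}

function effectiveLimitWithNullish(limits, project) {
  return limits.storageBytes ?? project.storageLimit ?? DEFAULT_STORAGE_BYTES;
}

const limits = { storageBytes: 0 };     // explicit "no internal storage"
const project = { storageLimit: 1024 };

effectiveLimitWithOr(limits, project);      // 1024: the 0 override is ignored
effectiveLimitWithNullish(limits, project); // 0: the override is respected
```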
```js
const effectivePlan = resolveEffectivePlan(dev);
const limits = getPlanLimits({
  plan: effectivePlan,
  legacyLimits: {
    maxProjects: dev?.maxProjects,
    maxCollections: dev?.maxCollections
  }
});
```
getGlobalStats calculates plan-based limits, but the Developer query only selects maxProjects/maxCollections. Because plan/planExpiresAt aren’t selected, resolveEffectivePlan(dev) will always fall back to 'free' and return incorrect limits for Pro users. Include plan (and planExpiresAt if applicable) in the select, or remove the select restriction.
```diff
@@ -0,0 +1,28 @@
+ const mongoose = require('mongoose');
```
This backfill script imports mongoose but never uses it. Remove the unused require to avoid confusion and keep the script minimal.
```diff
- const mongoose = require('mongoose');
```
```js
const effectivePlan = resolveEffectivePlan(req.developer);
const limits = getPlanLimits({
  plan: effectivePlan,
  legacyLimits: {
    maxProjects: req.developer.maxProjects ?? null,
    maxCollections: req.developer.maxCollections ?? null
  }
});

if (limits.maxCollections === -1) return next();

// For collection creation, the projectId is usually in req.body
const projectId = req.body.projectId;
if (!projectId) return next();

const project = await Project.findById(projectId);
if (!project) return next(new AppError(404, 'Project not found'));
```
checkCollectionLimit calculates limits without considering project-level customLimits.maxCollections (introduced on Project.customLimits). This means enterprise/project overrides won’t be honored when creating collections via the dashboard. Load the project first (ideally with an owner filter), then pass project.customLimits into getPlanLimits so overrides apply consistently with public-api enforcement.
```diff
- const effectivePlan = resolveEffectivePlan(req.developer);
- const limits = getPlanLimits({
-   plan: effectivePlan,
-   legacyLimits: {
-     maxProjects: req.developer.maxProjects ?? null,
-     maxCollections: req.developer.maxCollections ?? null
-   }
- });
- if (limits.maxCollections === -1) return next();
- // For collection creation, the projectId is usually in req.body
- const projectId = req.body.projectId;
- if (!projectId) return next();
- const project = await Project.findById(projectId);
- if (!project) return next(new AppError(404, 'Project not found'));
+ // For collection creation, the projectId is usually in req.body
+ const projectId = req.body.projectId;
+ if (!projectId) return next();
+ const project = await Project.findOne({ _id: projectId, owner: req.developer._id });
+ if (!project) return next(new AppError(404, 'Project not found'));
+ const effectivePlan = resolveEffectivePlan(req.developer);
+ const limits = getPlanLimits({
+   plan: effectivePlan,
+   legacyLimits: {
+     maxProjects: req.developer.maxProjects ?? null,
+     maxCollections: req.developer.maxCollections ?? null
+   },
+   customLimits: project.customLimits ?? null
+ });
+ if (limits.maxCollections === -1) return next();
```
```js
const {
  redis,
  Developer,
  resolveEffectivePlan,
  getPlanLimits,
  getDeveloperPlanCache,
  setDeveloperPlanCache,
  AppError
} = require('@urbackend/common');
```
usageGate destructures getDeveloperPlanCache/setDeveloperPlanCache from @urbackend/common, but packages/common/src/index.js currently doesn’t export those redisCaching helpers (it only re-exports the project cache functions). This will throw at runtime when resolveDeveloperPlanContext calls them. Re-export the developer plan cache helpers from common index (and include deleteDeveloperPlanCache if needed) or import directly from the redisCaching module.
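The failure mode is visible in a small simulation; `redisCaching` below is an inline stand-in for `require('./redis/redisCaching')`, and the export shapes are assumptions about the repo's layout:

```javascript
// Simulating packages/common/src/index.js with inline objects.
const redisCaching = {
  setProjectById: async () => {},
  getProjectById: async () => null,
  // The developer-plan helpers usageGate needs:
  getDeveloperPlanCache: async () => null,
  setDeveloperPlanCache: async () => {},
  deleteDeveloperPlanCache: async () => {},
};

// Before: only the project cache functions are re-exported.
const commonBefore = {
  setProjectById: redisCaching.setProjectById,
  getProjectById: redisCaching.getProjectById,
};

// After: spreading the whole module re-exports the new helpers too.
const commonAfter = { ...redisCaching };

typeof commonBefore.getDeveloperPlanCache; // 'undefined'; calling it later throws
typeof commonAfter.getDeveloperPlanCache;  // 'function'
```

Destructuring an undefined property does not fail at require time, which is why the bug only surfaces when `resolveDeveloperPlanContext` actually calls the helper.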
```js
 * Applies legacy developer-level overrides ONLY when they are more generous
 * than the plan default. Legacy limits are admin-granted exceptions (e.g.,
 * "allow this user 5 projects even on free"). They must never reduce a paid
 * plan's higher entitlement (e.g., Pro allows 10 projects — a legacy value
 * of 1 or 20 must NOT override that).
 */
```
The inline comment says a legacy value of "20" must not override Pro’s default (10), but mergeLegacyOverrides currently applies any legacy numeric value greater than the base (so 20 will override 10). Please either update the comment to reflect the intended behavior (allow more generous admin overrides), or change the merge condition so paid-plan defaults cannot be overridden upward if that’s the requirement.
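The two readings can be made concrete side by side. The names and limit values here are illustrative, not the actual `planLimits.js` implementation:

```javascript
// Two readings of the legacy-override rule, with assumed plan defaults.
const PLAN_DEFAULTS = { free: { maxProjects: 1 }, pro: { maxProjects: 10 } };

// Current behaviour per the review: any legacy value above the base wins.
function mergeMoreGenerous(plan, legacyMaxProjects) {
  const base = PLAN_DEFAULTS[plan].maxProjects;
  return legacyMaxProjects != null && legacyMaxProjects > base
    ? legacyMaxProjects
    : base;
}

// Stricter reading of the comment: legacy exceptions only lift the Free
// tier; paid-plan defaults stay authoritative.
function mergeFreeTierOnly(plan, legacyMaxProjects) {
  const base = PLAN_DEFAULTS[plan].maxProjects;
  if (plan !== 'free') return base;
  return legacyMaxProjects != null && legacyMaxProjects > base
    ? legacyMaxProjects
    : base;
}

mergeMoreGenerous('pro', 20); // 20: legacy raises Pro's default
mergeFreeTierOnly('pro', 20); // 10: Pro's entitlement wins
mergeFreeTierOnly('free', 5); // 5: admin exception still honoured on Free
```

Whichever reading the maintainers pick, the comment and the merge condition should then agree.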
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/dashboard-api/src/controllers/analytics.controller.js (1)
33-54: ⚠️ Potential issue | 🟠 Major: Fix the `.select()` projection to include `plan` and `planExpiresAt` fields; remove unnecessary `planActivatedAt`.

Line 33 only selects `maxProjects maxCollections`, but `resolveEffectivePlan(dev)` on line 47 requires `dev.plan` and `dev.planExpiresAt` to compute the effective tier. Without these fields, `dev.plan` and `dev.planExpiresAt` are undefined, causing every caller to fall through to the default (free) plan regardless of their actual subscription, silently capping Pro users' limits to free-tier values. Note: `planActivatedAt` is not used by `resolveEffectivePlan` and should not be added.

Proposed fix

```diff
- Developer.findById(user_id).select("maxProjects maxCollections")
+ Developer.findById(user_id).select("maxProjects maxCollections plan planExpiresAt")
```

Additionally, the error handler on line 65 exposes raw error messages and does not follow the standard response format. It should use the `AppError` class and return `{ success: false, data: {}, message: "..." }`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/analytics.controller.js` around lines 33 - 54, Update the Developer.findById projection in the analytics controller to include the subscription fields used by resolveEffectivePlan by selecting "maxProjects maxCollections plan planExpiresAt" (do not add planActivatedAt), so dev.plan and dev.planExpiresAt are populated before calling resolveEffectivePlan and getPlanLimits; also replace the current raw-error response in the controller's catch/error handler to use the AppError class and return the standard JSON shape { success: false, data: {}, message: "..." } so errors conform to the app's response format.
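The projection issue above hinges on what `resolveEffectivePlan` reads from the document. The small sketch below is inferred from the review, not the repo's actual code, and shows why a narrow `.select()` can only ever yield the free tier:

```javascript
// Assumed shape of resolveEffectivePlan: fall back to 'free' when the plan
// field is absent (not selected) or the plan has expired.
function resolveEffectivePlanSketch(dev, now = new Date()) {
  if (!dev || !dev.plan) return 'free';
  if (dev.planExpiresAt && new Date(dev.planExpiresAt) <= now) return 'free';
  return dev.plan;
}

// With the narrow .select("maxProjects maxCollections") projection, the
// plan field was never loaded:
resolveEffectivePlanSketch({ maxProjects: 3 }); // 'free'

// With plan fields included in the projection:
resolveEffectivePlanSketch({ plan: 'pro', planExpiresAt: '2099-01-01' }); // 'pro'
```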
🧹 Nitpick comments (6)
apps/public-api/src/controllers/mail.controller.js (2)
297-297: Verify `req.planLimits` is always populated on this path.

`getMonthlyMailLimit` treats a missing `planLimitsContext` as a fallback to free-tier limits (per `utils/mailLimit.js`). Since `checkUsageLimits` is the route’s middleware responsible for attaching `req.planLimits`, make sure no future refactor bypasses it (e.g., adding a route that skips the usage gate) — otherwise Pro users would silently be metered at free-tier `mailPerMonth`.

No change required for this PR; consider adding a brief unit test asserting `req.planLimits` is populated when `checkUsageLimits` is in the chain.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/controllers/mail.controller.js` at line 297, Ensure the route handler always has req.planLimits populated by verifying the usage-check middleware is present: confirm checkUsageLimits is applied to this route (or add an explicit guard at the top of the handler) so getMonthlyMailLimit(req.project, req.planLimits) never receives undefined; also add a unit test that mounts the route with checkUsageLimits in the middleware chain and asserts req.planLimits is set before the controller runs (referencing getMonthlyMailLimit, req.planLimits, checkUsageLimits and utils/mailLimit.js).
177-185: LGTM — plan-gate for templates is correctly scoped.

Gate runs only when `usingTemplate` is true and only short-circuits when `mailTemplatesEnabled === false` (strict equality preserves the "undefined means allow" fallback for contexts where `req.planLimits` didn’t populate the flag). Response shape matches the `{ success, data, message }` contract.

As per coding guidelines ("All API endpoints return: `{ success: bool, data: {}, message: "" }`").

Minor consistency nit: elsewhere in this PR 403s are thrown as `new AppError(403, ...)` and routed via the global handler; here the controller returns inline JSON. Both are valid and the file already uses this inline pattern throughout, so no change required — just flagging for future unification.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/controllers/mail.controller.js` around lines 177 - 185, Minor consistency: instead of returning inline res.status(403).json(...) in the mail template gate, throw the same AppError used elsewhere so the global error handler formats responses consistently; locate the block guarded by usingTemplate that checks req.planLimits.mailTemplatesEnabled === false and replace the inline response with throwing new AppError(403, "Pre-designed Email Templates are a Pro feature. Please upgrade to use this functionality.") so the global handler produces the { success, data, message } shape.

apps/dashboard-api/src/controllers/analytics.controller.js (1)
64-66: Pre-existing: error response shape does not match project contract.

Not introduced here, but while touching this controller: `res.status(500).json({ error: err.message })` exposes raw error text and deviates from the `{ success, data, message }` contract. Consider switching to `AppError` + a shared error handler, or at minimum returning the standardized payload.

As per coding guidelines ("All API endpoints return: `{ success: bool, data: {}, message: "" }`. Use AppError class for errors — never raw throw, never expose MongoDB errors to client.").

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/analytics.controller.js` around lines 64 - 66, The catch block in analytics.controller.js currently returns raw error text with res.status(500).json({ error: err.message }) which leaks internal details and breaks the { success, data, message } contract; update the handler to throw or pass an AppError (e.g., new AppError('Internal server error', 500)) to the centralized error middleware or, if you must respond here, return the standardized payload like res.status(500).json({ success: false, data: {}, message: 'Internal server error' }) and ensure you do not include err.message or raw MongoDB errors; reference the catch block around the function handling analytics requests and the use of AppError/shared error middleware to fix this consistently.

apps/public-api/src/app.js (1)
120-123: Pre-existing: 404 handler is unreachable (placed after error handler).

Not introduced by this PR, but worth noting while we’re here — the catch-all at lines 120–123 sits after the error-handling middleware. Express only invokes it if a prior middleware calls `next()` without responding, which none of the route handlers above do for unmatched routes. Consider moving it above the error handler in a follow-up so real 404s return the `{ error: "Not Found", replayId }` payload instead of falling through.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/app.js` around lines 120 - 123, The catch-all 404 handler using app.use((req, res) => { ... }) is placed after the error-handling middleware so it is never reached for true unmatched routes; move this 404 middleware (the function that reads the "X-Kiroo-Replay-ID" header and responds with { error: "Not Found", replayId }) to execute before the global error handler so real 404s return the intended JSON payload instead of falling through to the error handler.

packages/common/src/redis/redisCaching.js (1)
109-141: LGTM — developer plan cache helpers follow existing patterns.

New key namespace `developer:plan:${id}` does not collide with the protected patterns (`project:auth:refresh:session:*`, `project:auth:oauth:state:*`, `project:mail:count:*`). 5-minute TTL and `redis.status !== "ready"` guards are consistent with the existing `setProjectById`/`getProjectById` helpers.

As per coding guidelines (protected Redis key patterns).

Optional nit: consider exposing the TTL as a named constant (e.g., `DEVELOPER_PLAN_CACHE_TTL_SECONDS = 300`) so callers/tests can reference it, and refactoring the three project/developer pairs to share a common `set`/`get`/`del` helper to remove duplication — not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/common/src/redis/redisCaching.js` around lines 109 - 141, Create a named TTL constant and reduce duplication: define a constant (e.g., DEVELOPER_PLAN_CACHE_TTL_SECONDS = 300) and use it in setDeveloperPlanCache instead of the hardcoded 60 * 5, and optionally extract a shared helper (e.g., cacheSet(key, value, ttl), cacheGet(key), cacheDel(key)) that setDeveloperPlanCache, getDeveloperPlanCache, and deleteDeveloperPlanCache delegate to, keeping the current redis.status checks and JSON stringify/parse behavior.

tools/db-queries/backfill-developer-plans.js (1)
10-26: Optional: graceful disconnect + dry-run flag.

Script works as-is (idempotent filter is correct). For operational hygiene on one-shot migrations, consider a `--dry-run` mode that prints `matchedCount` without writing, and an explicit `await mongoose.disconnect()` before `process.exit` so socket close is clean on slow networks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/db-queries/backfill-developer-plans.js` around lines 10 - 26, Update backfill() to support a --dry-run flag and to always close DB sockets before exiting: parse process.argv for a dry-run token, call Developer.updateMany only when not dry-run (otherwise run the same query with no write and log result.matchedCount), and keep the existing success message to report modifiedCount when writes run; also ensure you await mongoose.disconnect() (or a disconnect helper) in both the try success path and the catch path before calling process.exit so sockets are cleanly closed. Reference: backfill(), Developer.updateMany, connectDB(), result.matchedCount/result.modifiedCount, and mongoose.disconnect().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/dashboard-api/src/middlewares/planEnforcement.js`:
- Around line 104-109: The middleware currently blocks all external
configuration by checking limits.byokEnabled; change it to gate by the specific
feature requested: update the caller or middleware to accept a feature type
(e.g., "BYOK" vs "BYOM"), call resolveEffectivePlan(req.developer) and
getPlanLimits({ plan: effectivePlan }) as before, then if the request is for
BYOM check limits.byomEnabled and if for BYOK check limits.byokEnabled,
returning the AppError only when the relevant flag is false (use the same
AppError message/context). Reference the existing resolveEffectivePlan,
getPlanLimits, and limits.byokEnabled/limits.byomEnabled symbols when applying
this split check.
- Around line 68-87: Load the Project scoped to the requesting developer before
computing collection limits and enforce ownership: replace
Project.findById(projectId) with a lookup that includes owner ===
req.developer._id (e.g., Project.findOne({_id: projectId, owner:
req.developer._id})) and return the same 404 if not found; then call
getPlanLimits after you have the project and pass project.customLimits (or a
customLimits field) into the limits calculation along with
resolveEffectivePlan(req.developer) so project-level custom limits are applied
before checking project.collections.length against limits.maxCollections.
In `@apps/public-api/src/controllers/storage.controller.js`:
- Around line 40-63: The current logic treats an effectiveLimit of -1 as
unlimited and unconditionally increments internal storage (via
Project.updateOne), which allows Pro users without BYOS to store unbounded data;
change the limit selection to use explicit null/undefined checks for
limits.storageBytes and project.storageLimit (e.g., check typeof
limits.storageBytes === 'number' and typeof project.storageLimit === 'number'
instead of ||), and then only honor the -1 “unlimited” sentinel when the project
is external by gating the branch with isProjectStorageExternal(project) (i.e.,
if (effectiveLimit === -1 && isProjectStorageExternal(project)) do the unlimited
increment); otherwise, when effectiveLimit === -1 but storage is internal, clamp
to a safety ceiling constant (e.g., SAFETY_MAX_BYTES) and use the same
Project.updateOne/$expr check to enforce it and return 403 on matchedCount === 0
as before.
In `@apps/public-api/src/middlewares/usageGate.js`:
- Around line 72-83: The daily quota check must be atomic: replace the read-only
redis.get(reqCountKey) with an atomic increment (use redis.incr or incrby) so
you obtain the post-increment count, then if that count exceeds limits.reqPerDay
call redis.decr(reqCountKey) to roll back and return the AppError; if it is
within limit set a flag (e.g. req._dailyCountIncremented = true) so the
downstream logger (the code that also increments reqCountKey) skips its
duplicate increment. Locate getDayKey(), reqCountKey, limits.reqPerDay and where
the logger increments the same key to add the flag check and the atomic
incr/decr flow.
- Around line 86-92: The assignment to req.planLimits must happen immediately
after computing limits so downstream gates see them even if Redis fails: move
the line setting req.planLimits = limits to directly after limits is resolved
(inside the try block before any Redis-dependent checks or awaiting further
calls) and ensure the catch still logs the error and calls next(); update the
usageGate middleware (the try/catch around limits and next()) so req.planLimits
is always set when limits are available, even if later Redis operations throw.
In `@apps/public-api/src/routes/mail.js`:
- Line 8: The middleware order on the /send route currently calls
checkUsageLimits before requireSecretKey so invalid/publishable secrets
increment Redis counters; update the router.post call to invoke requireSecretKey
before checkUsageLimits (i.e., router.post("/send", verifyApiKey,
requireSecretKey, checkUsageLimits, sendMail)) and apply the same reorder in the
corresponding route in storage.js so that requireSecretKey runs prior to
checkUsageLimits, preventing failed auth attempts from consuming quota.
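The middleware-ordering point in the last inline comment can be demonstrated with a tiny chain runner; `runChain`, the stub middlewares, and the counter are illustrative stand-ins for Express and the real Redis counters:

```javascript
// Minimal middleware-chain runner: each middleware either returns a response
// object (short-circuit) or null (continue to the next one).
function runChain(middlewares, req) {
  for (const mw of middlewares) {
    const result = mw(req);
    if (result) return result;
  }
  return { status: 200 };
}

const counters = { daily: 0 };
// Stub of checkUsageLimits: consumes one unit of quota, never blocks here.
const checkUsageLimits = (req) => { counters.daily += 1; return null; };
// Stub of requireSecretKey: rejects requests without a valid secret key.
const requireSecretKey = (req) => (req.hasSecretKey ? null : { status: 401 });

// Wrong order: the failed auth attempt still consumed quota.
runChain([checkUsageLimits, requireSecretKey], { hasSecretKey: false });
// counters.daily is now 1 even though the request was rejected with 401.

counters.daily = 0;
// Correct order: auth runs first, so the rejection leaves quota untouched.
runChain([requireSecretKey, checkUsageLimits], { hasSecretKey: false });
// counters.daily stays 0.
```

This is why the review asks for `requireSecretKey` before `checkUsageLimits` on the `/send` and storage routes.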
---
Outside diff comments:
In `@apps/dashboard-api/src/controllers/analytics.controller.js`:
- Around line 33-54: Update the Developer.findById projection in the analytics
controller to include the subscription fields used by resolveEffectivePlan by
selecting "maxProjects maxCollections plan planExpiresAt" (do not add
planActivatedAt), so dev.plan and dev.planExpiresAt are populated before calling
resolveEffectivePlan and getPlanLimits; also replace the current raw-error
response in the controller's catch/error handler to use the AppError class and
return the standard JSON shape { success: false, data: {}, message: "..." } so
errors conform to the app's response format.
---
Nitpick comments:
In `@apps/dashboard-api/src/controllers/analytics.controller.js`:
- Around line 64-66: The catch block in analytics.controller.js currently
returns raw error text with res.status(500).json({ error: err.message }) which
leaks internal details and breaks the { success, data, message } contract;
update the handler to throw or pass an AppError (e.g., new AppError('Internal
server error', 500)) to the centralized error middleware or, if you must respond
here, return the standardized payload like res.status(500).json({ success:
false, data: {}, message: 'Internal server error' }) and ensure you do not
include err.message or raw MongoDB errors; reference the catch block around the
function handling analytics requests and the use of AppError/shared error
middleware to fix this consistently.
In `@apps/public-api/src/app.js`:
- Around line 120-123: The catch-all 404 handler using app.use((req, res) => {
... }) is placed after the error-handling middleware so it is never reached for
true unmatched routes; move this 404 middleware (the function that reads the
"X-Kiroo-Replay-ID" header and responds with { error: "Not Found", replayId })
to execute before the global error handler so real 404s return the intended JSON
payload instead of falling through to the error handler.
In `@apps/public-api/src/controllers/mail.controller.js`:
- Line 297: Ensure the route handler always has req.planLimits populated by
verifying the usage-check middleware is present: confirm checkUsageLimits is
applied to this route (or add an explicit guard at the top of the handler) so
getMonthlyMailLimit(req.project, req.planLimits) never receives undefined; also
add a unit test that mounts the route with checkUsageLimits in the middleware
chain and asserts req.planLimits is set before the controller runs (referencing
getMonthlyMailLimit, req.planLimits, checkUsageLimits and utils/mailLimit.js).
- Around line 177-185: Minor consistency: instead of returning inline
res.status(403).json(...) in the mail template gate, throw the same AppError
used elsewhere so the global error handler formats responses consistently;
locate the block guarded by usingTemplate that checks
req.planLimits.mailTemplatesEnabled === false and replace the inline response
with throwing new AppError(403, "Pre-designed Email Templates are a Pro feature.
Please upgrade to use this functionality.") so the global handler produces the {
success, data, message } shape.
In `@packages/common/src/redis/redisCaching.js`:
- Around line 109-141: Create a named TTL constant and reduce duplication:
define a constant (e.g., DEVELOPER_PLAN_CACHE_TTL_SECONDS = 300) and use it in
setDeveloperPlanCache instead of the hardcoded 60 * 5, and optionally extract a
shared helper (e.g., cacheSet(key, value, ttl), cacheGet(key), cacheDel(key))
that setDeveloperPlanCache, getDeveloperPlanCache, and deleteDeveloperPlanCache
delegate to, keeping the current redis.status checks and JSON stringify/parse
behavior.
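The suggested refactor could look roughly like this — a sketch in which an in-memory `Map` stands in for the Redis client and the `dev_plan:` key prefix is illustrative; the real helpers would keep the existing `redis.status` checks:

```javascript
// Named TTL constant replaces the hardcoded 60 * 5.
const DEVELOPER_PLAN_CACHE_TTL_SECONDS = 300;
const store = new Map(); // key -> { value, expiresAt }

function cacheSet(key, value, ttlSeconds) {
  store.set(key, {
    value: JSON.stringify(value), // same stringify behavior as the original
    expiresAt: Date.now() + ttlSeconds * 1000,
  });
}

function cacheGet(key) {
  const entry = store.get(key);
  if (!entry || entry.expiresAt < Date.now()) return null;
  return JSON.parse(entry.value);
}

function cacheDel(key) {
  store.delete(key);
}

// The three developer-plan functions become thin wrappers.
const setDeveloperPlanCache = (id, ctx) =>
  cacheSet(`dev_plan:${id}`, ctx, DEVELOPER_PLAN_CACHE_TTL_SECONDS);
const getDeveloperPlanCache = (id) => cacheGet(`dev_plan:${id}`);
const deleteDeveloperPlanCache = (id) => cacheDel(`dev_plan:${id}`);

setDeveloperPlanCache("dev1", { plan: "pro" });
```

With ioredis this would map to `redis.set(key, json, "EX", ttl)`, `redis.get`, and `redis.del`, keeping the TTL in one named place.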
In `@tools/db-queries/backfill-developer-plans.js`:
- Around line 10-26: Update backfill() to support a --dry-run flag and to always
close DB sockets before exiting: parse process.argv for a dry-run token, call
Developer.updateMany only when not dry-run (otherwise run the same query with no
write and log result.matchedCount), and keep the existing success message to
report modifiedCount when writes run; also ensure you await
mongoose.disconnect() (or a disconnect helper) in both the try success path and
the catch path before calling process.exit so sockets are cleanly closed.
Reference: backfill(), Developer.updateMany, connectDB(),
result.matchedCount/result.modifiedCount, and mongoose.disconnect().
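A synchronous sketch of the suggested `backfill()` shape — the stub `db` is an assumption standing in for Mongoose, and the real script would `await` `updateMany` and `mongoose.disconnect()`:

```javascript
function backfill(argv, db) {
  const dryRun = argv.includes("--dry-run");
  const filter = { plan: { $exists: false } };
  let message;
  try {
    if (dryRun) {
      // Same query, no write: report how many documents would change.
      message = `[dry-run] would update ${db.count(filter)} developer(s)`;
    } else {
      const result = db.updateMany(filter, { $set: { plan: "free" } });
      message = `Backfilled ${result.modifiedCount} developer(s)`;
    }
  } finally {
    db.disconnect(); // close sockets on success and error paths alike
  }
  return message;
}

// Stub DB so the flow is observable without a live MongoDB.
const calls = { updates: 0, disconnects: 0 };
const stubDb = {
  count: () => 3,
  updateMany: () => { calls.updates += 1; return { modifiedCount: 3 }; },
  disconnect: () => { calls.disconnects += 1; },
};

const dryMsg = backfill(["node", "backfill.js", "--dry-run"], stubDb);
const liveMsg = backfill(["node", "backfill.js"], stubDb);
```

The `finally` block is the key point: disconnect runs on both paths, so the process never exits with open sockets.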
🪄 Autofix (Beta)
✅ Autofix completed
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c2f72f45-b8a9-4736-99fa-87e1291dd6b7
📒 Files selected for processing (20)
- apps/dashboard-api/src/controllers/analytics.controller.js
- apps/dashboard-api/src/controllers/project.controller.js
- apps/dashboard-api/src/middlewares/planEnforcement.js
- apps/dashboard-api/src/routes/projects.js
- apps/public-api/src/app.js
- apps/public-api/src/controllers/mail.controller.js
- apps/public-api/src/controllers/storage.controller.js
- apps/public-api/src/middlewares/usageGate.js
- apps/public-api/src/routes/data.js
- apps/public-api/src/routes/mail.js
- apps/public-api/src/routes/storage.js
- apps/public-api/src/utils/mailLimit.js
- packages/common/src/index.js
- packages/common/src/models/Developer.js
- packages/common/src/models/Project.js
- packages/common/src/redis/redisCaching.js
- packages/common/src/utils/AppError.js
- packages/common/src/utils/planLimits.js
- tools/db-queries/backfill-developer-plans.js
- tools/db-queries/global-mail-templates.json
```diff
 const { sendMail } = require("../controllers/mail.controller");

-router.post("/send", verifyApiKey, projectRateLimiter, requireSecretKey, sendMail);
+router.post("/send", verifyApiKey, checkUsageLimits, requireSecretKey, sendMail);
```
Middleware order: usage is counted before secret-key auth succeeds.
checkUsageLimits runs before requireSecretKey, so requests using a publishable/invalid secret key still increment the per-minute and per-day Redis counters before being rejected with 403. A caller hitting /send with wrong credentials can burn through daily quota and trigger 429s on legitimate traffic.
Consider moving requireSecretKey ahead of checkUsageLimits (same applies to storage.js):
Proposed fix:

```diff
-router.post("/send", verifyApiKey, checkUsageLimits, requireSecretKey, sendMail);
+router.post("/send", verifyApiKey, requireSecretKey, checkUsageLimits, sendMail);
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/public-api/src/routes/mail.js` at line 8, The middleware order on the
/send route currently calls checkUsageLimits before requireSecretKey so
invalid/publishable secrets increment Redis counters; update the router.post
call to invoke requireSecretKey before checkUsageLimits (i.e.,
router.post("/send", verifyApiKey, requireSecretKey, checkUsageLimits,
sendMail)) and apply the same reorder in the corresponding route in storage.js
so that requireSecretKey runs prior to checkUsageLimits, preventing failed auth
attempts from consuming quota.
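The ordering fix can be illustrated with a tiny self-contained sketch — plain functions standing in for the real Express middlewares (the stand-ins, header names, and key values here are assumptions, not the project's actual code). With authentication first, a request carrying a bad secret key is rejected before the quota counter is ever touched.

```javascript
// Stand-ins for the real middlewares (assumption: simplified shapes).
let dailyCount = 0;

const requireSecretKey = (req, res, next) => {
  if (req.headers["x-secret-key"] !== "sk_valid") {
    res.status = 403; // rejected before any quota is touched
    return;
  }
  next();
};

const checkUsageLimits = (req, res, next) => {
  dailyCount += 1; // a Redis INCR in the real middleware
  next();
};

// Tiny stand-in for Express's middleware chain runner.
function runChain(middlewares, req) {
  const res = { status: 200 };
  let i = 0;
  const next = () => {
    const mw = middlewares[i++];
    if (mw) mw(req, res, next);
  };
  next();
  return res;
}

// Auth first, quota second — the ordering from the suggestion.
const chain = [requireSecretKey, checkUsageLimits];
const rejected = runChain(chain, { headers: { "x-secret-key": "sk_wrong" } });
const accepted = runChain(chain, { headers: { "x-secret-key": "sk_valid" } });
```

After both calls, only the authenticated request has incremented `dailyCount` — the failed-auth request burned no quota.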
@copilot apply changes based on the comments in this thread
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Fixes Applied Successfully — fixed 7 file(s) based on 6 unresolved review comments.

Files modified:
Commit: The changes have been pushed to the
Time taken:
Fixed 7 file(s) based on 6 unresolved review comments. Co-authored-by: CodeRabbit <[email protected]>
Actionable comments posted: 2
♻️ Duplicate comments (1)
apps/public-api/src/middlewares/usageGate.js (1)
16-21: ⚠️ Potential issue | 🟡 Minor

Guard against missing `req.project.owner`.

If `rawOwner` is `null`/`undefined` (e.g., project doc loaded without populating `owner`, or a legacy project with a stale ref), `rawOwner.toString()` throws a `TypeError` that falls into the outer catch and fails the request open — bypassing plan enforcement entirely.

```diff
 const rawOwner = req.project.owner;
+if (!rawOwner) return { plan: 'free', planExpiresAt: null, legacyLimits: { maxProjects: null, maxCollections: null } };
 const developerId = (rawOwner && typeof rawOwner === 'object' && rawOwner._id) ? rawOwner._id.toString() : rawOwner.toString();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/middlewares/usageGate.js` around lines 16 - 21, resolveDeveloperPlanContext currently assumes req.project.owner exists when computing developerId and calls toString(), which throws if rawOwner is null/undefined; add a guard for rawOwner (check if falsy) before calling toString() — e.g., if rawOwner is missing return null (or set developerId to null) so you don't call rawOwner.toString(); then continue the existing logic that handles a populated owner (checking rawOwner._id) and the downstream callers of resolveDeveloperPlanContext should handle a null/unknown developerId. Ensure you update resolveDeveloperPlanContext and references to developerId to avoid calling toString() on undefined.
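The null-safe resolution can be reduced to a small pure helper — a sketch only (the helper name is hypothetical, and the real middleware also applies the free-plan fallback when no owner resolves):

```javascript
// Null-safe owner resolution. Never calls toString() on a missing owner.
function resolveDeveloperId(rawOwner) {
  if (!rawOwner) return null; // legacy/unpopulated project — never throw
  if (typeof rawOwner === "object" && rawOwner._id) {
    return rawOwner._id.toString(); // populated Mongoose document
  }
  return rawOwner.toString(); // plain ObjectId / string ref
}

const fromString = resolveDeveloperId("65a1b2c3d4e5f6a7b8c9d0e1");
const fromDoc = resolveDeveloperId({ _id: { toString: () => "dev42" } });
const fromMissing = resolveDeveloperId(null);
```

Callers then treat a `null` id as "unknown developer" and fall back to free-plan defaults instead of failing open.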
🧹 Nitpick comments (3)
apps/dashboard-api/src/middlewares/planEnforcement.js (1)
106-122: BYOK/BYOM gates ignore project-level `customLimits`.

`checkByokGate` (and `checkByomGate` at lines 127-143) compute `limits` from only `plan`, so enterprise-scoped `customLimits.byokEnabled`/`byomEnabled` on a project cannot unlock these features. This is inconsistent with `checkCollectionLimit` (line 84), which correctly merges `project.customLimits`. Since this middleware runs on `PATCH /:projectId/byod-config`, the target project id is available in `req.params.projectId` — load it scoped to the developer and pass `project.customLimits` through the same way the collection gate does.

🛡️ Proposed direction for `checkByokGate` (apply analogously to `checkByomGate`)

```diff
 exports.checkByokGate = async (req, res, next) => {
   try {
     if (req.user?.isAdmin) return next();
+    const projectId = req.params.projectId;
+    let customLimits = null;
+    if (projectId) {
+      if (typeof projectId !== 'string' || !/^[a-fA-F0-9]{24}$/.test(projectId)) {
+        return next(new AppError(400, 'Invalid projectId format'));
+      }
+      const project = await Project.findOne({ _id: projectId, owner: req.developer._id });
+      if (!project) return next(new AppError(404, 'Project not found'));
+      customLimits = project.customLimits;
+    }
     const effectivePlan = resolveEffectivePlan(req.developer);
-    const limits = getPlanLimits({ plan: effectivePlan });
+    const limits = getPlanLimits({ plan: effectivePlan, customLimits });
     if (!limits.byokEnabled) {
       return next(new AppError(403, 'External configuration (BYOK) is a Pro feature. Please upgrade to connect your own resources.'));
     }
     next();
   } catch (err) {
     next(err);
   }
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/middlewares/planEnforcement.js` around lines 106 - 122, checkByokGate (and likewise checkByomGate) currently calls resolveEffectivePlan(req.developer) and getPlanLimits({ plan }) without considering project-level overrides, so project.customLimits.byokEnabled / byomEnabled never unlocks features; update the middleware to load the target project scoped to the developer using req.params.projectId (same mechanism used by checkCollectionLimit), merge project.customLimits into the getPlanLimits call (e.g., pass customLimits or merge into the options you pass to getPlanLimits), and then check the resulting limits.byokEnabled / limits.byomEnabled before returning the 403 or calling next(); keep the existing admin bypass (req.user?.isAdmin) and error handling.

apps/public-api/src/middlewares/usageGate.js (2)
93-97: Fail-open on Redis errors is a quota-bypass vector — consider logging and metrics at minimum.

`catch → next()` is intentional per the comment, but during a Redis outage every request skips per-minute and per-day enforcement for all tiers. At a minimum, emit a structured log/metric so this state is observable, and consider a circuit-breaker or fail-closed policy on the `/mail/send` and `/storage/upload` routes where abuse has direct cost impact. The dashboard-level plan ceilings (`maxProjects`, storage `$expr` caps) still hold, but per-request throttling is gone.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/middlewares/usageGate.js` around lines 93 - 97, The catch block in the usageGate middleware currently swallows Redis errors and calls next(), which fails open; change that to emit structured logs and metrics and optionally fail-closed for high-cost endpoints: replace console.error("Usage limit check failed:", err) with a structured logger call (e.g., logger.error or processLogger.error) that includes error, stack, redis host/conn id and request context (route, user id, plan), and increment a monitoring counter (e.g., "usage_gate.redis_error" with tags route=t.route, env, plan) so outages are observable; additionally, for the critical routes "/mail/send" and "/storage/upload" implement a short circuit (in the same middleware or a wrapper) that returns a 503 or enforces conservative default quotas when Redis is unavailable (or use a circuit-breaker flag toggled by the redis error metric) instead of always calling next().
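One possible shape for that policy — a sketch only (the route strings and metric name come from the comment above; the logger and the return contract are simplified assumptions, not the project's API):

```javascript
// High-cost routes fail closed on backend errors; everything else fails open.
const FAIL_CLOSED_ROUTES = new Set(["/mail/send", "/storage/upload"]);

function onUsageBackendError(route, err) {
  // Structured event instead of a bare console.error string.
  console.error(JSON.stringify({
    level: "error",
    metric: "usage_gate.redis_error",
    route,
    message: err.message,
  }));
  return FAIL_CLOSED_ROUTES.has(route)
    ? { action: "reject", status: 503 } // conservative: no quota check, no send
    : { action: "allow" };              // cheap routes keep current behavior
}

const costly = onUsageBackendError("/mail/send", new Error("ECONNREFUSED"));
const cheap = onUsageBackendError("/data/query", new Error("ECONNREFUSED"));
```

The middleware's catch block would call this helper and either return the 503 or fall through to `next()`, so the fail-open default becomes an explicit per-route decision.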
23-43: Plan cache invalidation strategy needed for future plan-change endpoints.

Plan-change functionality is not currently implemented, but when it is added, ensure `Developer.plan` mutations (dashboard endpoints and billing webhooks) call `deleteDeveloperPlanCache` to invalidate the 5-minute cache. Otherwise, upgrades/downgrades take up to 5 minutes to take effect.

Additionally, the cached plan context includes `planExpiresAt`, but this is checked at request time in `resolveEffectivePlan()` to reject expired Pro plans. Document this behavior to ensure it's preserved if the caching logic changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/middlewares/usageGate.js` around lines 23 - 43, The plan cache logic caches developer plan context via getDeveloperPlanCache and setDeveloperPlanCache for 5 minutes but lacks cache invalidation; ensure any endpoints or handlers that mutate Developer.plan or planExpiresAt (dashboard update functions and billing webhook handlers that call Developer.findById / update) call deleteDeveloperPlanCache(developerId) after successful writes so upgrades/downgrades take effect immediately, and keep resolveEffectivePlan(...) behavior intact by preserving planExpiresAt in the cached context (document this in the mutation handlers' comments) so expiry checks continue to run at request time.
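A minimal sketch of the invalidation rule — note that `updateDeveloperPlan` and the stub `db` are hypothetical (no plan-change endpoint exists yet), and a `Map` stands in for the Redis-backed cache:

```javascript
// In-memory stand-in for the Redis-backed developer plan cache.
const planCache = new Map();
const setDeveloperPlanCache = (id, ctx) => planCache.set(id, ctx);
const getDeveloperPlanCache = (id) => planCache.get(id) ?? null;
const deleteDeveloperPlanCache = (id) => planCache.delete(id);

// Hypothetical mutation: every write to Developer.plan invalidates the cache.
function updateDeveloperPlan(db, developerId, plan, planExpiresAt) {
  db.update(developerId, { plan, planExpiresAt });
  // Without this line, the change waits out the 5-minute TTL.
  deleteDeveloperPlanCache(developerId);
}

const fakeDb = {
  writes: [],
  update(id, fields) { this.writes.push([id, fields]); },
};
setDeveloperPlanCache("dev1", { plan: "free", planExpiresAt: null });
updateDeveloperPlan(fakeDb, "dev1", "pro", null);
```

After the update, the cache entry is gone, so the next public-API request re-resolves the plan from MongoDB and re-caches it with the fresh `planExpiresAt`.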
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/dashboard-api/src/middlewares/planEnforcement.js`:
- Around line 47-51: The current TOCTOU comes from checking
Project.countDocuments and project.collections.length before creation (see
Project.countDocuments, the limits check that returns new AppError, and the
project.collections usage); fix by enforcing the limit atomically: wrap the
count+create in a MongoDB transaction (startSession()/withTransaction) or
implement a conditional update/atomic counter on the Developer document that
increments only if current value < max (using findOneAndUpdate with $inc and a
filter on the limit) so concurrent requests cannot exceed
maxProjects/maxCollections, or add a unique per-developer project slot via a
conditional create; additionally, add a server-side re-validation in the
controller after create and rollback/delete the resource if limit was exceeded
to cover non-transactional cases.
In `@apps/public-api/src/controllers/storage.controller.js`:
- Around line 41-84: The code contains an unreachable branch: inside the block
guarded by if (!external) (where external was set from
isProjectStorageExternal(project)), the branch if (effectiveLimit === -1 &&
isProjectStorageExternal(project)) can never be true; remove this dead branch
(Lines performing await Project.updateOne(...) for the external -1 case) and
keep the internal-storage clamp logic that uses SAFETY_MAX_BYTES and the later
Project.updateOne with $expr, or if the intent was to re-check storage location
use the already-evaluated external variable instead of calling
isProjectStorageExternal(project) again; adjust code to only handle the internal
-1 case and the normal effectiveLimit path (referencing variables external,
effectiveLimit, isProjectStorageExternal(project), Project.updateOne, and
SAFETY_MAX_BYTES).
---
Duplicate comments:
In `@apps/public-api/src/middlewares/usageGate.js`:
- Around line 16-21: resolveDeveloperPlanContext currently assumes
req.project.owner exists when computing developerId and calls toString(), which
throws if rawOwner is null/undefined; add a guard for rawOwner (check if falsy)
before calling toString() — e.g., if rawOwner is missing return null (or set
developerId to null) so you don't call rawOwner.toString(); then continue the
existing logic that handles a populated owner (checking rawOwner._id) and the
downstream callers of resolveDeveloperPlanContext should handle a null/unknown
developerId. Ensure you update resolveDeveloperPlanContext and references to
developerId to avoid calling toString() on undefined.
---
Nitpick comments:
In `@apps/dashboard-api/src/middlewares/planEnforcement.js`:
- Around line 106-122: checkByokGate (and likewise checkByomGate) currently
calls resolveEffectivePlan(req.developer) and getPlanLimits({ plan }) without
considering project-level overrides, so project.customLimits.byokEnabled /
byomEnabled never unlocks features; update the middleware to load the target
project scoped to the developer using req.params.projectId (same mechanism used
by checkCollectionLimit), merge project.customLimits into the getPlanLimits call
(e.g., pass customLimits or merge into the options you pass to getPlanLimits),
and then check the resulting limits.byokEnabled / limits.byomEnabled before
returning the 403 or calling next(); keep the existing admin bypass
(req.user?.isAdmin) and error handling.
In `@apps/public-api/src/middlewares/usageGate.js`:
- Around line 93-97: The catch block in the usageGate middleware currently
swallows Redis errors and calls next(), which fails open; change that to emit
structured logs and metrics and optionally fail-closed for high-cost endpoints:
replace console.error("Usage limit check failed:", err) with a structured logger
call (e.g., logger.error or processLogger.error) that includes error, stack,
redis host/conn id and request context (route, user id, plan), and increment a
monitoring counter (e.g., "usage_gate.redis_error" with tags route=t.route, env,
plan) so outages are observable; additionally, for the critical routes
"/mail/send" and "/storage/upload" implement a short circuit (in the same
middleware or a wrapper) that returns a 503 or enforces conservative default
quotas when Redis is unavailable (or use a circuit-breaker flag toggled by the
redis error metric) instead of always calling next().
- Around line 23-43: The plan cache logic caches developer plan context via
getDeveloperPlanCache and setDeveloperPlanCache for 5 minutes but lacks cache
invalidation; ensure any endpoints or handlers that mutate Developer.plan or
planExpiresAt (dashboard update functions and billing webhook handlers that call
Developer.findById / update) call deleteDeveloperPlanCache(developerId) after
successful writes so upgrades/downgrades take effect immediately, and keep
resolveEffectivePlan(...) behavior intact by preserving planExpiresAt in the
cached context (document this in the mutation handlers' comments) so expiry
checks continue to run at request time.
🪄 Autofix (Beta)
✅ Autofix completed
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7a9d780d-63a8-4bb9-a69c-ff8ef9a4ae70
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (7)
- apps/dashboard-api/src/middlewares/planEnforcement.js
- apps/public-api/src/__tests__/storage.controller.test.js
- apps/public-api/src/controllers/storage.controller.js
- apps/public-api/src/middlewares/api_usage.js
- apps/public-api/src/middlewares/usageGate.js
- apps/public-api/src/routes/mail.js
- apps/public-api/src/routes/storage.js
✅ Files skipped from review due to trivial changes (1)
- apps/public-api/src/__tests__/storage.controller.test.js
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/public-api/src/routes/storage.js
```js
const currentProjectsCount = await Project.countDocuments({ owner: req.developer._id });

if (currentProjectsCount >= limits.maxProjects) {
  return next(new AppError(403, `Project limit reached (${limits.maxProjects}). Please upgrade your plan to create more projects.`));
}
```
TOCTOU race allows exceeding project/collection limits.
countDocuments (and project.collections.length at line 93) is checked before the downstream controller creates the resource. Two concurrent requests from the same developer can both pass the check and push the total to maxProjects + 1 (or maxCollections + 1). Under the Free tier's single-project cap this is easily reachable with a double-click or two API clients.
Consider enforcing the limit atomically — e.g., wrap the count+create in a transaction, use a unique index / conditional update that fails when the cap is reached, or use an atomic counter on the Developer document. At minimum, re-validate the count inside the controller after creation and roll back on overflow.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard-api/src/middlewares/planEnforcement.js` around lines 47 - 51,
The current TOCTOU comes from checking Project.countDocuments and
project.collections.length before creation (see Project.countDocuments, the
limits check that returns new AppError, and the project.collections usage); fix
by enforcing the limit atomically: wrap the count+create in a MongoDB
transaction (startSession()/withTransaction) or implement a conditional
update/atomic counter on the Developer document that increments only if current
value < max (using findOneAndUpdate with $inc and a filter on the limit) so
concurrent requests cannot exceed maxProjects/maxCollections, or add a unique
per-developer project slot via a conditional create; additionally, add a
server-side re-validation in the controller after create and rollback/delete the
resource if limit was exceeded to cover non-transactional cases.
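The conditional-update idea can be sketched with an in-memory stand-in for `Developer.findOneAndUpdate` — the `projectCount` field is an assumption (the real schema may track this differently); the point is that the limit filter and the increment happen as one atomic step on the server, so two concurrent requests cannot both pass a "count < max" check:

```javascript
// In-memory stand-in for an atomic Mongoose conditional update.
const developer = { _id: "dev1", projectCount: 0 };
const MAX_PROJECTS = 1; // Free tier cap

// Mirrors: Developer.findOneAndUpdate(
//   { _id, projectCount: { $lt: max } },
//   { $inc: { projectCount: 1 } }
// ) — which matches nothing (returns null) once the cap is reached.
function tryReserveProjectSlot(dev, max) {
  if (dev.projectCount < max) {
    dev.projectCount += 1; // filter + increment are one server-side step
    return true;
  }
  return false; // cap reached — caller responds with the 403 AppError
}

const first = tryReserveProjectSlot(developer, MAX_PROJECTS);
const second = tryReserveProjectSlot(developer, MAX_PROJECTS);
```

Only one of the two "concurrent" requests wins the slot; the loser gets the upgrade prompt instead of silently exceeding the cap.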
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Fixes Applied Successfully — fixed 4 file(s) based on 2 unresolved review comments.

Files modified:
Commit: The changes have been pushed to the
Time taken:
Fixed 4 file(s) based on 2 unresolved review comments. Co-authored-by: CodeRabbit <[email protected]>
Okay to merge?? @coderabbitai
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

Let me check the current state of the PR — open review comments, CI checks, and overall status.

🏁 Script executed: Length of output: 153404
✅ Green lights:
There are two Copilot-flagged issues that are not outdated and could cause runtime/functional problems:
My recommendation: The two items above (especially
@coderabbitai
Tip: For best results, initiate chat on the files or code changes.
✅ What's done well (SaaS-quality foundations)
🔴 What's missing (the "headache" you're describing)

1. Plan Expiry Lifecycle — not implemented
2. Payment Processor Integration — not implemented

There's no webhook handler (Stripe / LemonSqueezy / Paddle etc.) to:
3. Trial Flow — incomplete
4. Cache Invalidation on Plan Change

Wherever a plan is changed (webhook, admin panel, etc.), `deleteDeveloperPlanCache` must be called.

5. Downgrade Impact — undefined

What happens when a Free user has 3 projects (was Pro, now expired)?
📋 Recommended Phased Roadmap

Quick Schema Addition to Consider

For tracking grace periods (minimal change):

```js
// packages/common/src/models/Developer.js
gracePeriodEndsAt: { type: Date, default: null },
planHistory: [
  {
    plan: String,
    activatedAt: Date,
    reason: String, // 'payment', 'trial', 'admin', 'expiry'
  }
]
```

Bottom line: The enforcement layer is production-quality, but the lifecycle layer (payment → activate → renew → expire → downgrade) is entirely missing. The PR is safe to merge as an enforcement foundation, but monetization cannot actually go live without Phase 1 above.

Would you like me to open a follow-up issue tracking the payment integration and expiry lifecycle work?
…g exports, missing select properties)
Actionable comments posted: 1
🧹 Nitpick comments (2)
apps/public-api/src/controllers/storage.controller.js (1)
42-79: New 403 responses don't follow controller response/error contract; branches can be deduplicated.

Two issues in the newly added reservation block:

The two `403` responses at Lines 65 and 77 use a raw `{ error: "..." }` shape and bypass `AppError`. Per the controller guideline, responses must use the `{ success, data, message }` envelope and errors must flow through `AppError` (so the framework can set proper status/metadata, e.g. 403 for blocked routes as used elsewhere in this PR). Pre-existing handlers in this file have the same drift, but these are new lines and a good place to start aligning.

The `if (effectiveLimit === -1) { ... } else { ... }` branches are structurally identical — only the cap differs. Collapsing them removes the duplicated `Project.updateOne(...)` call and the duplicated 403 response, eliminating a future drift hazard (e.g. someone fixing the message in one branch but not the other).

♻️ Suggested consolidation (also routes the 403 through AppError)

```diff
-      // For internal storage: honor -1 as unlimited but clamp to safety ceiling
-      if (effectiveLimit === -1) {
-        const result = await Project.updateOne(
-          {
-            _id: project._id,
-            $expr: { $lte: [{ $add: ["$storageUsed", file.size] }, SAFETY_MAX_BYTES] }
-          },
-          { $inc: { storageUsed: file.size } }
-        );
-
-        if (result.matchedCount === 0) {
-          return res.status(403).json({ error: "Storage limit exceeded. Please upgrade your plan or delete some files." });
-        }
-      } else {
-        const result = await Project.updateOne(
-          {
-            _id: project._id,
-            $expr: { $lte: [{ $add: ["$storageUsed", file.size] }, effectiveLimit] }
-          },
-          { $inc: { storageUsed: file.size } }
-        );
-
-        if (result.matchedCount === 0) {
-          return res.status(403).json({ error: "Storage limit exceeded. Please upgrade your plan or delete some files." });
-        }
-      }
+      // For internal storage, -1 (unlimited) is clamped to SAFETY_MAX_BYTES.
+      const cap = effectiveLimit === -1 ? SAFETY_MAX_BYTES : effectiveLimit;
+      const result = await Project.updateOne(
+        {
+          _id: project._id,
+          $expr: { $lte: [{ $add: ["$storageUsed", file.size] }, cap] }
+        },
+        { $inc: { storageUsed: file.size } }
+      );
+
+      if (result.matchedCount === 0) {
+        throw new AppError(
+          "Storage limit exceeded. Please upgrade your plan or delete some files.",
+          403
+        );
+      }
```

Requires importing `AppError` at the top of the file and ensuring the surrounding `try/catch` (or an error middleware) translates it to the standard `{ success: false, data: {}, message }` envelope.

As per coding guidelines: "All API endpoints return: `{ success: bool, data: {}, message: "" }`. Use AppError class for errors — never raw throw, never expose MongoDB errors to client."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/controllers/storage.controller.js` around lines 42 - 79, The new reservation block duplicates Project.updateOne and returns raw 403 JSON; collapse the two branches by computing a single cap (use SAFETY_MAX_BYTES when effectiveLimit === -1, otherwise use effectiveLimit), call Project.updateOne once with that cap, and when result.matchedCount === 0 surface the error via AppError (e.g., throw or pass new AppError("Storage limit exceeded. Please upgrade your plan or delete some files.", 403)) instead of returning raw JSON; also add/import AppError at the top and ensure the existing try/catch/error middleware will translate it to the { success, data, message } envelope.

apps/dashboard-api/src/middlewares/planEnforcement.js (1)
33-34: Use strict boolean checks for admin bypasses as a security best practice.

While the JWT payload correctly assigns `isAdmin` as a boolean result (`user.email === process.env.ADMIN_EMAIL` at auth.controller.js:222 and :614), using strict equality (`req.user?.isAdmin === true`) hardens authorization checks against future changes to the JWT producer or accidental type coercion bugs.

🔒 Proposed hardening

```diff
-    if (req.user?.isAdmin) return next();
+    if (req.user?.isAdmin === true) return next();
```

Apply the same replacement at Lines 64, 106, and 127.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/middlewares/planEnforcement.js` around lines 33 - 34, Replace permissive truthy checks of the admin flag with strict boolean comparison: where the middleware currently does "if (req.user?.isAdmin) return next();" change to "if (req.user?.isAdmin === true) return next();" and apply the same change for every other admin bypass check referencing req.user?.isAdmin (e.g., similar occurrences later in the file), ensuring you only allow the bypass when the JWT explicitly sets isAdmin to true.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/dashboard-api/src/middlewares/planEnforcement.js`:
- Around line 108-133: checkByokGate and checkByomGate call resolveEffectivePlan
and getPlanLimits but don’t pass per-project overrides; update both middlewares
(checkByokGate and checkByomGate) to pass project.customLimits into
getPlanLimits (e.g., getPlanLimits({ plan: effectivePlan, customLimits:
req.project?.customLimits })) so feature gates respect any project-level
overrides similarly to checkCollectionLimit while keeping the existing admin
bypass and error handling.
---
Nitpick comments:
In `@apps/dashboard-api/src/middlewares/planEnforcement.js`:
- Around line 33-34: Replace permissive truthy checks of the admin flag with
strict boolean comparison: where the middleware currently does "if
(req.user?.isAdmin) return next();" change to "if (req.user?.isAdmin === true)
return next();" and apply the same change for every other admin bypass check
referencing req.user?.isAdmin (e.g., similar occurrences later in the file),
ensuring you only allow the bypass when the JWT explicitly sets isAdmin to true.
In `@apps/public-api/src/controllers/storage.controller.js`:
- Around line 42-79: The new reservation block duplicates Project.updateOne and
returns raw 403 JSON; collapse the two branches by computing a single cap (use
SAFETY_MAX_BYTES when effectiveLimit === -1, otherwise use effectiveLimit), call
Project.updateOne once with that cap, and when result.matchedCount === 0 surface
the error via AppError (e.g., throw or pass new AppError("Storage limit
exceeded. Please upgrade your plan or delete some files.", 403)) instead of
returning raw JSON; also add/import AppError at the top and ensure the existing
try/catch/error middleware will translate it to the { success, data, message }
envelope.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 888996cb-0b29-449a-8e63-4a79b17aa780
📒 Files selected for processing (7)
- apps/dashboard-api/src/controllers/analytics.controller.js
- apps/dashboard-api/src/controllers/project.controller.js
- apps/dashboard-api/src/middlewares/planEnforcement.js
- apps/public-api/src/__tests__/mail.controller.test.js
- apps/public-api/src/controllers/mail.controller.js
- apps/public-api/src/controllers/storage.controller.js
- packages/common/src/index.js
🚧 Files skipped from review as they are similar to previous changes (3)
- apps/dashboard-api/src/controllers/analytics.controller.js
- apps/dashboard-api/src/controllers/project.controller.js
- packages/common/src/index.js
… from user-controlled sources' Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Let's merge?? or anything else??
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed: Length of output: 2985

🏁 Script executed: Length of output: 9044
There are two minor remaining items from the unresolved Copilot threads (neither is a merge blocker):
✅ Safe to merge. CI is fully green, branch is clean, and the enforcement layer is production-quality. The two items above can be cleaned up in a follow-up PR.

Would you also like me to open a GitHub issue to track the payment/lifecycle work (Stripe webhooks, expiry cron, cache invalidation) we discussed?
🚀 Objective
Transition urBackend to a monetized SaaS model by implementing robust Free/Pro tier guardrails, server traffic protection (throttling), and feature gating (BYOK/BYOM, Mail Templates).
✨ Key Changes
- `planLimits.js` inside `packages/common` that defines all specific quotas (e.g. 5K req/day for Free vs 50K for Pro) for all projects to refer to.
- `plan`, `planActivatedAt`, and `planExpiresAt` inside `Developer.js`. Enabled Enterprise-level `customLimits` directly inside `Project.js`.
- `isAdmin` checks across dashboard routes.
- `checkProjectLimit` and `checkCollectionLimit` custom middlewares.
- Feature gating (`checkByokGate`).
- `usageGate.js` middleware with atomic Redis tracking.
- `Developer` retrieval (`getDeveloperPlanCache`) with a 5-minute TTL to drastically reduce MongoDB overhead on public routes.

🛠 Fixes Included
- `req.user._id` compatibility with the JWT Dashboard auth layer to ensure no authenticated requests are wrongfully blocked.
- `req.project.owner` Mongoose Document mapping successfully normalized inside Redis caches.
- `resolvePublicAuthContext` on the PUT/DELETE public routes ensuring valid App keys have uninterrupted write operations.

🧪 Testing Notes
- `POST API` responds `429 Too Many Requests` accurately upon artificial limit decrease.
- The backfill script (`backfill-developer-plans.js`) correctly defaults pre-existing Dev tables to `free`.

Build with ❤️ for urBackend
Summary by CodeRabbit