
feat: implement multi-tier plan system and usage enforcement (Free/Pro)#128

Merged
yash-pouranik merged 14 commits into main from feature/plan-enforcement
Apr 22, 2026

Conversation

yash-pouranik (Collaborator) commented Apr 22, 2026

🚀 Objective

Transition urBackend to a monetized SaaS model by implementing robust Free/Pro tier guardrails, server traffic protection (throttling), and feature gating (BYOK/BYOM, Mail Templates).

✨ Key Changes

  • Central Plan Source of Truth: Integrated planLimits.js into packages/common, defining every plan quota (e.g., 5K req/day for Free vs. 50K for Pro) in one place for all apps to reference.
  • Database Models Update: Added plan, planActivatedAt, and planExpiresAt to Developer.js, and enabled Enterprise-level customLimits directly on Project.js.
  • Dashboard Auth & Guardrails:
    • Replaced generic string comparison with JWT-driven isAdmin checks across dashboard routes.
    • Blocked Free users from creating multiple projects or more than 10 collections via the checkProjectLimit and checkCollectionLimit custom middlewares.
    • Pro-gated specific features such as custom Cloudinary/S3 connections (checkByokGate).
  • Zero-Downtime Public API Throttling:
    • Implemented usageGate.js middleware with atomic Redis tracking.
    • Added dual-layer server protection that blocks traffic spikes (req/minute) while tracking the total daily quota (req/day).
    • Optimized validation speed with a Redis cache on Developer retrieval (getDeveloperPlanCache, 5-minute TTL), drastically reducing MongoDB overhead on public routes.
  • AppError Engine: Integrated a repo-compliant generic error handler so the client-side UI receives predictable metadata with precise HTTP status codes (e.g., 429 when a limit is reached, 403 for gated paths).
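The central source of truth described above can be sketched as follows. This is a minimal illustration only: the field names (maxProjects, maxCollections, mailTemplatesEnabled) and the Free-tier project/collection caps are assumptions; only the 5K/50K req/day quotas come from this PR description, and the real planLimits.js may differ.

```javascript
// Sketch of a central plan-limits module (assumed shape and values).
const PLAN_LIMITS = {
  free: { maxProjects: 1,  maxCollections: 10, reqPerDay: 5000,  mailTemplatesEnabled: false },
  pro:  { maxProjects: 10, maxCollections: -1, reqPerDay: 50000, mailTemplatesEnabled: true }
};

// Resolve the effective tier: a paid plan that has expired falls back to free.
function resolveEffectivePlan(dev) {
  if (!dev || !dev.plan || dev.plan === 'free') return 'free';
  if (dev.planExpiresAt && new Date(dev.planExpiresAt) < new Date()) return 'free';
  return dev.plan;
}

// Merge plan defaults with admin-granted legacy overrides,
// applying a legacy value only when it is more generous.
function getPlanLimits({ plan = 'free', legacyLimits = {} } = {}) {
  const base = { ...(PLAN_LIMITS[plan] || PLAN_LIMITS.free) };
  for (const key of ['maxProjects', 'maxCollections']) {
    const legacy = legacyLimits[key];
    if (base[key] === -1) continue; // -1 means unlimited; never narrow it
    if (typeof legacy === 'number' && legacy > base[key]) base[key] = legacy;
  }
  return base;
}

module.exports = { PLAN_LIMITS, resolveEffectivePlan, getPlanLimits };
```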

🛠 Fixes Included

  • [Regressions Fixed]: Handled req.user._id compatibility with the JWT dashboard auth layer so authenticated requests are never wrongly blocked.
  • [Security Patch]: Normalized the req.project.owner Mongoose Document mapping before storing it in Redis caches.
  • [Authentication]: Reinstated resolvePublicAuthContext on the public PUT/DELETE routes so valid App keys retain uninterrupted write access.
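The AppError pattern referenced throughout this PR can be sketched as below; this assumes a (statusCode, message) constructor and an isOperational flag, and the real packages/common/src/utils/AppError.js and global handler may differ.

```javascript
// Sketch of a shared operational error type and a global Express-style
// handler that preserves its status code (assumed shape).
class AppError extends Error {
  constructor(statusCode, message) {
    super(message);
    this.statusCode = statusCode;
    this.isOperational = true; // distinguishes expected errors from bugs
  }
}

// Operational errors keep their precise codes (429, 403, ...);
// anything else becomes an opaque 500 so MongoDB internals never leak.
function errorHandler(err, req, res, next) {
  if (err instanceof AppError) {
    return res.status(err.statusCode).json({ success: false, data: {}, message: err.message });
  }
  return res.status(500).json({ success: false, data: {}, message: 'Internal server error' });
}

module.exports = { AppError, errorHandler };
```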

🧪 Testing Notes

  • The limit check on POST APIs accurately returns 429 Too Many Requests when the limit is artificially lowered.
  • Tested admin fallback limits with DB override fields.
  • Validated that the backfill script (backfill-developer-plans.js) correctly defaults pre-existing Developer documents to the free plan.

Built with ❤️ for urBackend

Summary by CodeRabbit

  • New Features
    • Plan-based enforcement across project/collection creation, data, storage and mail endpoints; per-minute and per-day usage gating applied.
    • Projects can have optional custom limits; developer billing/plan fields and caching added.
  • Behavior Changes
    • Internal storage “unlimited” is clamped for safe enforcement; quota-failure messaging updated to suggest upgrade or deletion.
    • Mail template sending now requires plan-enabled templates.
    • Project/collection creation is more reliable under concurrent usage (transactional).

coderabbitai Bot (Contributor) commented Apr 22, 2026

Warning

Rate limit exceeded

@yash-pouranik has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 32 minutes and 6 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 32 minutes and 6 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4a30804c-05c9-4737-a78b-17afc32a901a

📥 Commits

Reviewing files that changed from the base of the PR and between 1a0ba2d and 26a4567.

📒 Files selected for processing (4)
  • apps/dashboard-api/src/middlewares/planEnforcement.js
  • apps/public-api/src/controllers/mail.controller.js
  • packages/common/src/utils/planLimits.js
  • tools/db-queries/backfill-developer-plans.js
📝 Walkthrough

Walkthrough

Adds plan-aware billing fields, utilities, Redis caching, middleware and route wiring to enforce project/collection/storage/mail/request limits across dashboard and public APIs; introduces transactional project/collection creation, plan-aware quota checks, an AppError type, and a backfill script for existing developers.

Changes

Cohort / File(s) Summary
Plan limits core & exports
packages/common/src/utils/planLimits.js, packages/common/src/index.js
New PLAN_LIMITS, resolveEffectivePlan, getPlanLimits; expose plan cache helpers and AppError on common exports.
Developer & Project models
packages/common/src/models/Developer.js, packages/common/src/models/Project.js
Add plan, planActivatedAt, planExpiresAt, trialUsed to Developer; add customLimits (reqPerDay, storageBytes, mailPerMonth, maxCollections) to Project.
Redis plan cache helpers
packages/common/src/redis/redisCaching.js
Add setDeveloperPlanCache, getDeveloperPlanCache, deleteDeveloperPlanCache with 5-minute TTL.
App-wide error type
packages/common/src/utils/AppError.js
Introduce centralized AppError class for operational errors.
Dashboard middleware & routes
apps/dashboard-api/src/middlewares/planEnforcement.js, apps/dashboard-api/src/routes/projects.js
Add attachDeveloper, checkProjectLimit, checkCollectionLimit, checkByokGate, checkByomGate; wire into project/collection/BYOD routes.
Dashboard controllers (transactions)
apps/dashboard-api/src/controllers/project.controller.js, apps/dashboard-api/src/controllers/analytics.controller.js
Wrap project/collection creation in DB transactions, move limit checks to middleware; analytics uses resolveEffectivePlan/getPlanLimits for limits.
Public API usage gating
apps/public-api/src/middlewares/usageGate.js, apps/public-api/src/routes/data.js, apps/public-api/src/routes/mail.js, apps/public-api/src/routes/storage.js
Replace projectRateLimiter with Redis-backed checkUsageLimits that loads/caches plan context, attaches req.planLimits, and enforces per-minute and per-day quotas; routes updated accordingly.
Public controllers & utils
apps/public-api/src/controllers/mail.controller.js, apps/public-api/src/controllers/storage.controller.js, apps/public-api/src/utils/mailLimit.js, apps/public-api/src/app.js
Mail endpoint gated by req.planLimits.mailTemplatesEnabled; monthly mail limit becomes plan-aware; storage internal upload quota uses computed effectiveLimit and clamps unlimited to safety max; global error handler returns operational errors with original status codes.
Tests & infra changes
apps/public-api/src/__tests__/storage.controller.test.js, apps/public-api/src/__tests__/mail.controller.test.js, apps/public-api/src/middlewares/api_usage.js
Adjust tests to provide plan context and expect new storage error message; prevent duplicate daily Redis increment via request flag.
Backfill script
tools/db-queries/backfill-developer-plans.js
Script to set plan: 'free' for Developer documents missing the field.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant DashboardAPI as Dashboard API
    participant PlanMW as Plan Enforcement MW
    participant Redis as Redis Cache
    participant MongoDB as MongoDB

    Client->>DashboardAPI: POST /projects
    DashboardAPI->>PlanMW: attachDeveloper
    PlanMW->>MongoDB: Find Developer by user ID
    MongoDB-->>PlanMW: Developer record
    PlanMW->>Redis: getDeveloperPlanCache(id)
    alt cache hit
        Redis-->>PlanMW: cached plan context
    else
        PlanMW->>PlanMW: resolveEffectivePlan & getPlanLimits
        PlanMW->>Redis: setDeveloperPlanCache(id, context)
    end
    PlanMW->>MongoDB: count Projects (session)
    MongoDB-->>PlanMW: project count
    alt limit exceeded
        PlanMW-->>Client: 403 Upgrade required
    else
        PlanMW->>DashboardAPI: proceed to createProject (with DB session)
        DashboardAPI->>MongoDB: create Project (session)
        MongoDB-->>DashboardAPI: Created
        DashboardAPI-->>Client: 201 Created
    end
sequenceDiagram
    participant Client
    participant PublicAPI as Public API
    participant UsageMW as checkUsageLimits
    participant Redis as Redis
    participant MongoDB as MongoDB

    Client->>PublicAPI: POST /data
    PublicAPI->>UsageMW: checkUsageLimits
    UsageMW->>Redis: getDeveloperPlanCache(owner)
    alt cache hit
        Redis-->>UsageMW: planLimits
    else
        UsageMW->>MongoDB: Load Developer by owner
        MongoDB-->>UsageMW: Developer record
        UsageMW->>UsageMW: resolveEffectivePlan & merge limits
        UsageMW->>Redis: setDeveloperPlanCache(owner, limits)
    end
    UsageMW->>Redis: INCR per-minute counter (short TTL)
    alt per-minute exceeded
        UsageMW-->>Client: 429 Too Many Requests
    else
        UsageMW->>Redis: INCR per-day counter (day TTL)
        alt daily exceeded
            UsageMW-->>Client: 429 Too Many Requests
        else
            UsageMW->>PublicAPI: attach req.planLimits
            PublicAPI-->>Client: 200 Success
        end
    end
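The per-minute/per-day gating flow in the diagram above can be sketched as middleware. Assumptions here: the reqPerMinute/reqPerDay field names, req.project.id, and that the plan context is already attached as req.planLimits; TTL handling is elided (the real usageGate.js sets key expiries together with the INCR).

```javascript
// Sketch of the dual-layer usage gate (assumed implementation; the redis
// client is injected for testability).
function makeUsageGate(redis) {
  return async function checkUsageLimits(req, res, next) {
    const limits = req.planLimits;
    const id = req.project.id;
    const minute = Math.floor(Date.now() / 60000);
    const day = new Date().toISOString().slice(0, 10);

    // Layer 1: per-minute spike protection (-1 means unlimited).
    const perMinute = await redis.incr(`usage:min:${id}:${minute}`);
    if (limits.reqPerMinute !== -1 && perMinute > limits.reqPerMinute) {
      return res.status(429).json({ success: false, data: {}, message: 'Too many requests, slow down.' });
    }

    // Layer 2: total daily quota.
    const perDay = await redis.incr(`usage:day:${id}:${day}`);
    if (limits.reqPerDay !== -1 && perDay > limits.reqPerDay) {
      return res.status(429).json({ success: false, data: {}, message: 'Daily request limit reached.' });
    }
    return next();
  };
}

module.exports = { makeUsageGate };
```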

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

feature, backend

Poem

I nibble keys and cache the plan,
I count the projects, hop when I can,
Limits stitched like carrot seams,
Transactions keep the sweetest dreams,
🐇 Hooray — the rules hop in!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): The title clearly and directly describes the main feature: implementing a multi-tier plan system (Free/Pro) with usage enforcement, the core objective of this comprehensive PR across the codebase.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@yash-pouranik yash-pouranik marked this pull request as ready for review April 22, 2026 20:00
Copilot AI review requested due to automatic review settings April 22, 2026 20:00
Comment thread: apps/dashboard-api/src/middlewares/planEnforcement.js (Fixed)

Copilot AI (Contributor) left a comment
Pull request overview

Implements a Free/Pro multi-tier plan system across the common package and both public/dashboard APIs, adding quota enforcement, feature gating, and supporting DB/backfill tooling.

Changes:

  • Added centralized plan limits utilities (planLimits) and an operational AppError type in @urbackend/common.
  • Implemented plan-aware enforcement: public API usage throttling/quotas via Redis (usageGate) and dashboard-side plan guardrails (planEnforcement).
  • Extended data models to support plan fields on developers and per-project enterprise overrides (customLimits), plus added backfill and global mail template seed data.

Reviewed changes

Copilot reviewed 19 out of 20 changed files in this pull request and generated 8 comments.

Show a summary per file
File Description
tools/db-queries/global-mail-templates.json Adds seed data for global/system mail templates.
tools/db-queries/backfill-developer-plans.js Adds a script to backfill existing Developer docs with plan: free.
packages/common/src/utils/planLimits.js Introduces central plan limit definitions and limit-merging helpers.
packages/common/src/utils/AppError.js Adds a shared operational error type for consistent HTTP status propagation.
packages/common/src/redis/redisCaching.js Adds Redis helpers for caching developer plan context.
packages/common/src/models/Project.js Adds customLimits for enterprise/project-level overrides.
packages/common/src/models/Developer.js Adds plan lifecycle fields (plan, activation/expiry, trialUsed).
packages/common/src/index.js Re-exports plan utilities and AppError from @urbackend/common.
apps/public-api/src/utils/mailLimit.js Makes monthly mail limits plan-aware with a safe fallback.
apps/public-api/src/routes/storage.js Replaces old rate limiter with plan-aware checkUsageLimits.
apps/public-api/src/routes/mail.js Replaces old rate limiter with plan-aware checkUsageLimits.
apps/public-api/src/routes/data.js Replaces old rate limiter with plan-aware checkUsageLimits.
apps/public-api/src/middlewares/usageGate.js Adds Redis-backed per-minute and daily usage enforcement + plan context caching.
apps/public-api/src/controllers/storage.controller.js Makes internal storage quota enforcement plan-aware.
apps/public-api/src/controllers/mail.controller.js Adds plan gating for mail template usage; uses plan-aware monthly limits.
apps/public-api/src/app.js Preserves AppError HTTP status codes in the global error handler.
apps/dashboard-api/src/routes/projects.js Adds plan enforcement middleware hooks for project/collection/BYOK routes.
apps/dashboard-api/src/middlewares/planEnforcement.js Adds dashboard-side plan enforcement and BYOK gating.
apps/dashboard-api/src/controllers/project.controller.js Removes inline project-limit checks in favor of middleware.
apps/dashboard-api/src/controllers/analytics.controller.js Returns plan-derived limits in global stats response.


Comment on lines +77 to +82
// We use INCRBY 0 to just read the current count if we don't want to double count,
// BUT actually, we should increment it here and skip it in the logger to be precise.
// For now, let's just check the current value.
const currentDailyCount = parseInt(await redis.get(reqCountKey) || '0');

if (limits.reqPerDay !== -1 && currentDailyCount >= limits.reqPerDay) {
Copilot AI commented Apr 22, 2026

Daily quota enforcement reads the current Redis counter with GET, which is not atomic and can allow concurrent requests to exceed reqPerDay (multiple requests can observe the same count before the logger increments it). Use an atomic INCR (e.g., incrWithTtlAtomic) and compare the returned value against reqPerDay (optionally rolling back with DECR when over limit), and use DEFAULT_DAILY_TTL_SECONDS for TTL consistency.

Suggested change:

-    // We use INCRBY 0 to just read the current count if we don't want to double count,
-    // BUT actually, we should increment it here and skip it in the logger to be precise.
-    // For now, let's just check the current value.
-    const currentDailyCount = parseInt(await redis.get(reqCountKey) || '0');
-    if (limits.reqPerDay !== -1 && currentDailyCount >= limits.reqPerDay) {
+    // Atomically reserve one daily request slot so concurrent requests
+    // cannot all pass based on the same stale counter value.
+    const currentDailyCount = await incrWithTtlAtomic(
+      redis,
+      reqCountKey,
+      DEFAULT_DAILY_TTL_SECONDS
+    );
+    if (limits.reqPerDay !== -1 && currentDailyCount > limits.reqPerDay) {
+      // Roll back the reservation so rejected requests do not consume quota.
+      await redis.decr(reqCountKey);
Copilot uses AI. Check for mistakes.
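The race Copilot flags here can be illustrated without Redis: reading the counter with GET and incrementing later lets concurrent requests pass on the same stale value, while "increment first, compare the returned value, roll back on reject" (the INCR/DECR pattern) cannot over-admit. The plain objects below stand in for the shared counter; these functions are illustrative, not the middleware's actual code.

```javascript
// Racy pattern: every request reads the same stale count before anyone writes.
function admitWithGetThenIncr(counter, limit, pendingRead) {
  const seen = pendingRead.value; // stale GET observed by all concurrent requests
  if (seen >= limit) return false;
  counter.value += 1;
  return true;
}

// Atomic pattern: reserve a slot first, then compare the returned value.
function admitWithAtomicIncr(counter, limit) {
  counter.value += 1;       // atomic INCR returns the new value
  if (counter.value > limit) {
    counter.value -= 1;     // DECR: rejected requests give the slot back
    return false;
  }
  return true;
}

module.exports = { admitWithGetThenIncr, admitWithAtomicIncr };
```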
Comment on lines +177 to +185
if (usingTemplate) {
const limits = req.planLimits || {};
if (limits.mailTemplatesEnabled === false) {
return res.status(403).json({
success: false,
data: {},
message: "Pre-designed Email Templates are a Pro feature. Please upgrade to use this functionality."
});
}
Copilot AI commented Apr 22, 2026

Template gating currently allows templates when req.planLimits is missing (limits.mailTemplatesEnabled is undefined), which can happen if usageGate fails open on Redis/DB issues. Default to a computed Free-tier limits object when req.planLimits is absent so Pro-only features don’t become available during cache/Redis outages.

{ $inc: { storageUsed: file.size } }
);
const limits = req.planLimits || {};
const effectiveLimit = limits.storageBytes || project.storageLimit || 20 * 1024 * 1024;
Copilot AI commented Apr 22, 2026
effectiveLimit uses limits.storageBytes || project.storageLimit || ..., which treats 0 as “unset” and will ignore an explicit override. Prefer nullish coalescing (??) when choosing between numeric limits so that 0 (or other falsy numeric values) is respected.

Suggested change:

- const effectiveLimit = limits.storageBytes || project.storageLimit || 20 * 1024 * 1024;
+ const effectiveLimit = limits.storageBytes ?? project.storageLimit ?? 20 * 1024 * 1024;

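A two-line illustration of the point: `||` treats every falsy number (including an explicit 0) as "unset", while `??` only falls through on null/undefined. The function names here are hypothetical.

```javascript
// Hypothetical fallback value, mirroring the 20 MB default in the diff above.
const FALLBACK_LIMIT = 20 * 1024 * 1024;

function effectiveLimitOr(limits) {
  return limits.storageBytes || FALLBACK_LIMIT;      // 0 is silently ignored
}

function effectiveLimitNullish(limits) {
  return limits.storageBytes ?? FALLBACK_LIMIT;      // 0 is respected
}

module.exports = { effectiveLimitOr, effectiveLimitNullish };
```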
Comment on lines +47 to +54
const effectivePlan = resolveEffectivePlan(dev);
const limits = getPlanLimits({
plan: effectivePlan,
legacyLimits: {
maxProjects: dev?.maxProjects,
maxCollections: dev?.maxCollections
}
});
Copilot AI commented Apr 22, 2026

getGlobalStats calculates plan-based limits, but the Developer query only selects maxProjects/maxCollections. Because plan/planExpiresAt aren’t selected, resolveEffectivePlan(dev) will always fall back to 'free' and return incorrect limits for Pro users. Include plan (and planExpiresAt if applicable) in the select, or remove the select restriction.

@@ -0,0 +1,28 @@
const mongoose = require('mongoose');
Copilot AI commented Apr 22, 2026

This backfill script imports mongoose but never uses it. Remove the unused require to avoid confusion and keep the script minimal.

Suggested change:

- const mongoose = require('mongoose');
Comment on lines +68 to +85
const effectivePlan = resolveEffectivePlan(req.developer);
const limits = getPlanLimits({
plan: effectivePlan,
legacyLimits: {
maxProjects: req.developer.maxProjects ?? null,
maxCollections: req.developer.maxCollections ?? null
}
});

if (limits.maxCollections === -1) return next();

// For collection creation, the projectId is usually in req.body
const projectId = req.body.projectId;
if (!projectId) return next();

const project = await Project.findById(projectId);
if (!project) return next(new AppError(404, 'Project not found'));

Copilot AI commented Apr 22, 2026

checkCollectionLimit calculates limits without considering project-level customLimits.maxCollections (introduced on Project.customLimits). This means enterprise/project overrides won’t be honored when creating collections via the dashboard. Load the project first (ideally with an owner filter), then pass project.customLimits into getPlanLimits so overrides apply consistently with public-api enforcement.

Suggested change:

-  const effectivePlan = resolveEffectivePlan(req.developer);
-  const limits = getPlanLimits({
-    plan: effectivePlan,
-    legacyLimits: {
-      maxProjects: req.developer.maxProjects ?? null,
-      maxCollections: req.developer.maxCollections ?? null
-    }
-  });
-
-  if (limits.maxCollections === -1) return next();
-
-  // For collection creation, the projectId is usually in req.body
-  const projectId = req.body.projectId;
-  if (!projectId) return next();
-
-  const project = await Project.findById(projectId);
-  if (!project) return next(new AppError(404, 'Project not found'));
+  // For collection creation, the projectId is usually in req.body
+  const projectId = req.body.projectId;
+  if (!projectId) return next();
+
+  const project = await Project.findOne({ _id: projectId, owner: req.developer._id });
+  if (!project) return next(new AppError(404, 'Project not found'));
+
+  const effectivePlan = resolveEffectivePlan(req.developer);
+  const limits = getPlanLimits({
+    plan: effectivePlan,
+    legacyLimits: {
+      maxProjects: req.developer.maxProjects ?? null,
+      maxCollections: req.developer.maxCollections ?? null
+    },
+    customLimits: project.customLimits ?? null
+  });
+
+  if (limits.maxCollections === -1) return next();

Comment on lines +1 to +9
const {
redis,
Developer,
resolveEffectivePlan,
getPlanLimits,
getDeveloperPlanCache,
setDeveloperPlanCache,
AppError
} = require('@urbackend/common');
Copilot AI commented Apr 22, 2026

usageGate destructures getDeveloperPlanCache/setDeveloperPlanCache from @urbackend/common, but packages/common/src/index.js currently doesn’t export those redisCaching helpers (it only re-exports the project cache functions). This will throw at runtime when resolveDeveloperPlanContext calls them. Re-export the developer plan cache helpers from common index (and include deleteDeveloperPlanCache if needed) or import directly from the redisCaching module.

Comment on lines +71 to +76
* Applies legacy developer-level overrides ONLY when they are more generous
* than the plan default. Legacy limits are admin-granted exceptions (e.g.,
* "allow this user 5 projects even on free"). They must never reduce a paid
* plan's higher entitlement (e.g., Pro allows 10 projects — a legacy value
* of 1 or 20 must NOT override that).
*/
Copilot AI commented Apr 22, 2026

The inline comment says a legacy value of "20" must not override Pro’s default (10), but mergeLegacyOverrides currently applies any legacy numeric value greater than the base (so 20 will override 10). Please either update the comment to reflect the intended behavior (allow more generous admin overrides), or change the merge condition so paid-plan defaults cannot be overridden upward if that’s the requirement.

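One possible resolution of the mismatch Copilot notes, matching the docstring's first sentence (apply a legacy value only when it is more generous than the plan default, and never narrow an unlimited -1 entitlement). A sketch only, not the repository's actual mergeLegacyOverrides.

```javascript
// Sketch: admin-granted legacy exceptions may only raise a cap, never lower it.
function mergeLegacyOverrides(baseLimits, legacyLimits = {}) {
  const merged = { ...baseLimits };
  for (const [key, legacy] of Object.entries(legacyLimits)) {
    const base = merged[key];
    if (typeof legacy !== 'number' || typeof base !== 'number') continue;
    if (base === -1) continue;               // -1 means unlimited; never narrow it
    if (legacy > base) merged[key] = legacy; // admin exception raises the cap
  }
  return merged;
}

module.exports = { mergeLegacyOverrides };
```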
coderabbitai Bot (Contributor) left a comment

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/dashboard-api/src/controllers/analytics.controller.js (1)

33-54: ⚠️ Potential issue | 🟠 Major

Fix .select() projection to include plan and planExpiresAt fields; remove unnecessary planActivatedAt.

Line 33 only selects maxProjects maxCollections, but resolveEffectivePlan(dev) on line 47 requires dev.plan and dev.planExpiresAt to compute the effective tier. Without these fields, dev.plan and dev.planExpiresAt are undefined, causing every caller to fall through to the default (free) plan regardless of their actual subscription, silently capping Pro users' limits to free-tier values. Note: planActivatedAt is not used by resolveEffectivePlan and should not be added.

Proposed fix
-      Developer.findById(user_id).select("maxProjects maxCollections")
+      Developer.findById(user_id).select("maxProjects maxCollections plan planExpiresAt")

Additionally, the error handler on line 65 exposes raw error messages and does not follow the standard response format. It should use the AppError class and return { success: false, data: {}, message: "..." }.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/analytics.controller.js` around lines 33 -
54, Update the Developer.findById projection in the analytics controller to
include the subscription fields used by resolveEffectivePlan by selecting
"maxProjects maxCollections plan planExpiresAt" (do not add planActivatedAt), so
dev.plan and dev.planExpiresAt are populated before calling resolveEffectivePlan
and getPlanLimits; also replace the current raw-error response in the
controller's catch/error handler to use the AppError class and return the
standard JSON shape { success: false, data: {}, message: "..." } so errors
conform to the app's response format.
🧹 Nitpick comments (6)
apps/public-api/src/controllers/mail.controller.js (2)

297-297: Verify req.planLimits is always populated on this path.

getMonthlyMailLimit treats a missing planLimitsContext as a fallback to free-tier limits (per utils/mailLimit.js). Since checkUsageLimits is the route’s middleware responsible for attaching req.planLimits, make sure no future refactor bypasses it (e.g., adding a route that skips the usage gate) — otherwise Pro users would silently be metered at free-tier mailPerMonth.

No change required for this PR; consider adding a brief unit test asserting req.planLimits is populated when checkUsageLimits is in the chain.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/controllers/mail.controller.js` at line 297, Ensure the
route handler always has req.planLimits populated by verifying the usage-check
middleware is present: confirm checkUsageLimits is applied to this route (or add
an explicit guard at the top of the handler) so getMonthlyMailLimit(req.project,
req.planLimits) never receives undefined; also add a unit test that mounts the
route with checkUsageLimits in the middleware chain and asserts req.planLimits
is set before the controller runs (referencing getMonthlyMailLimit,
req.planLimits, checkUsageLimits and utils/mailLimit.js).

177-185: LGTM — plan-gate for templates is correctly scoped.

Gate runs only when usingTemplate is true and only short-circuits when mailTemplatesEnabled === false (strict equality preserves the "undefined means allow" fallback for contexts where req.planLimits didn’t populate the flag). Response shape matches the { success, data, message } contract.

As per coding guidelines ("All API endpoints return: { success: bool, data: {}, message: "" }").

Minor consistency nit: elsewhere in this PR 403s are thrown as new AppError(403, ...) and routed via the global handler; here the controller returns inline JSON. Both are valid and the file already uses this inline pattern throughout, so no change required — just flagging for future unification.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/controllers/mail.controller.js` around lines 177 - 185,
Minor consistency: instead of returning inline res.status(403).json(...) in the
mail template gate, throw the same AppError used elsewhere so the global error
handler formats responses consistently; locate the block guarded by
usingTemplate that checks req.planLimits.mailTemplatesEnabled === false and
replace the inline response with throwing new AppError(403, "Pre-designed Email
Templates are a Pro feature. Please upgrade to use this functionality.") so the
global handler produces the { success, data, message } shape.
apps/dashboard-api/src/controllers/analytics.controller.js (1)

64-66: Pre-existing: error response shape does not match project contract.

Not introduced here, but while touching this controller: res.status(500).json({ error: err.message }) exposes raw error text and deviates from the { success, data, message } contract. Consider switching to AppError + a shared error handler, or at minimum returning the standardized payload.

As per coding guidelines ("All API endpoints return: { success: bool, data: {}, message: "" }. Use AppError class for errors — never raw throw, never expose MongoDB errors to client.").

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/analytics.controller.js` around lines 64 -
66, The catch block in analytics.controller.js currently returns raw error text
with res.status(500).json({ error: err.message }) which leaks internal details
and breaks the { success, data, message } contract; update the handler to throw
or pass an AppError (e.g., new AppError('Internal server error', 500)) to the
centralized error middleware or, if you must respond here, return the
standardized payload like res.status(500).json({ success: false, data: {},
message: 'Internal server error' }) and ensure you do not include err.message or
raw MongoDB errors; reference the catch block around the function handling
analytics requests and the use of AppError/shared error middleware to fix this
consistently.
apps/public-api/src/app.js (1)

120-123: Pre-existing: 404 handler is unreachable (placed after error handler).

Not introduced by this PR, but worth noting while we’re here — the catch-all at lines 120–123 sits after the error-handling middleware. Express only invokes it if a prior middleware calls next() without responding, which none of the route handlers above do for unmatched routes. Consider moving it above the error handler in a follow-up so real 404s return the { error: "Not Found", replayId } payload instead of falling through.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/app.js` around lines 120 - 123, The catch-all 404 handler
using app.use((req, res) => { ... }) is placed after the error-handling
middleware so it is never reached for true unmatched routes; move this 404
middleware (the function that reads the "X-Kiroo-Replay-ID" header and responds
with { error: "Not Found", replayId }) to execute before the global error
handler so real 404s return the intended JSON payload instead of falling through
to the error handler.
packages/common/src/redis/redisCaching.js (1)

109-141: LGTM — developer plan cache helpers follow existing patterns.

New key namespace developer:plan:${id} does not collide with the protected patterns (project:auth:refresh:session:*, project:auth:oauth:state:*, project:mail:count:*). 5-minute TTL and redis.status !== "ready" guards are consistent with the existing setProjectById/getProjectById helpers.

As per coding guidelines (protected Redis key patterns).

Optional nit: consider exposing the TTL as a named constant (e.g., DEVELOPER_PLAN_CACHE_TTL_SECONDS = 300) so callers/tests can reference it, and refactoring the three project/developer pairs to share a common set/get/del helper to remove duplication — not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/common/src/redis/redisCaching.js` around lines 109 - 141, Create a
named TTL constant and reduce duplication: define a constant (e.g.,
DEVELOPER_PLAN_CACHE_TTL_SECONDS = 300) and use it in setDeveloperPlanCache
instead of the hardcoded 60 * 5, and optionally extract a shared helper (e.g.,
cacheSet(key, value, ttl), cacheGet(key), cacheDel(key)) that
setDeveloperPlanCache, getDeveloperPlanCache, and deleteDeveloperPlanCache
delegate to, keeping the current redis.status checks and JSON stringify/parse
behavior.
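The refactor suggested above can be sketched as follows. This is a hypothetical shape, not the repo's actual code: the helper names (`cacheSet`/`cacheGet`/`cacheDel`) come from the review comment, and the `redis` object here is an in-memory stand-in so the sketch is self-contained.

```javascript
// Named TTL constant plus generic helpers that the developer-plan (and
// project) cache functions can delegate to. The "redis" object below is a
// fake in-memory client for illustration only.
const DEVELOPER_PLAN_CACHE_TTL_SECONDS = 300;

const store = new Map();
const redis = {
    status: "ready",
    async set(key, value, _mode, _ttl) { store.set(key, value); },
    async get(key) { return store.has(key) ? store.get(key) : null; },
    async del(key) { store.delete(key); },
};

async function cacheSet(key, value, ttlSeconds) {
    if (redis.status !== "ready") return;              // same guard as the existing helpers
    await redis.set(key, JSON.stringify(value), "EX", ttlSeconds);
}
async function cacheGet(key) {
    if (redis.status !== "ready") return null;
    const raw = await redis.get(key);
    return raw ? JSON.parse(raw) : null;
}
async function cacheDel(key) {
    if (redis.status !== "ready") return;
    await redis.del(key);
}

// The three developer-plan helpers become thin wrappers:
const planKey = (id) => `developer:plan:${id}`;
const setDeveloperPlanCache = (id, ctx) => cacheSet(planKey(id), ctx, DEVELOPER_PLAN_CACHE_TTL_SECONDS);
const getDeveloperPlanCache = (id) => cacheGet(planKey(id));
const deleteDeveloperPlanCache = (id) => cacheDel(planKey(id));

(async () => {
    await setDeveloperPlanCache("d1", { plan: "pro" });
    console.log(await getDeveloperPlanCache("d1")); // { plan: 'pro' }
    await deleteDeveloperPlanCache("d1");
    console.log(await getDeveloperPlanCache("d1")); // null
})();
```

The project-cache pair (`setProjectById`/`getProjectById`) could delegate to the same three helpers with its own key builder and TTL, which removes the duplication the review flagged.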
tools/db-queries/backfill-developer-plans.js (1)

10-26: Optional: graceful disconnect + dry-run flag.

Script works as-is (idempotent filter is correct). For operational hygiene on one-shot migrations, consider a --dry-run mode that prints matchedCount without writing, and an explicit await mongoose.disconnect() before process.exit so socket close is clean on slow networks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/db-queries/backfill-developer-plans.js` around lines 10 - 26, Update
backfill() to support a --dry-run flag and to always close DB sockets before
exiting: parse process.argv for a dry-run token, call Developer.updateMany only
when not dry-run (otherwise run the same query with no write and log
result.matchedCount), and keep the existing success message to report
modifiedCount when writes run; also ensure you await mongoose.disconnect() (or a
disconnect helper) in both the try success path and the catch path before
calling process.exit so sockets are cleanly closed. Reference: backfill(),
Developer.updateMany, connectDB(), result.matchedCount/result.modifiedCount, and
mongoose.disconnect().
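The dry-run flag and clean-disconnect behavior described above can be sketched like this. The `Developer` model, `connectDB`, and `disconnect` below are in-memory stand-ins (the real script uses mongoose and the shared model), so treat this as a shape, not the actual migration.

```javascript
// Sketch of the suggested --dry-run flag plus guaranteed disconnect.
const dryRun = process.argv.includes("--dry-run");

// In-memory stand-ins for the real mongoose model and connection helpers.
const fakeDevelopers = [{ plan: undefined }, { plan: "pro" }, { plan: undefined }];
const Developer = {
    async countDocuments(_filter) {
        return fakeDevelopers.filter((d) => d.plan === undefined).length;
    },
    async updateMany(_filter, update) {
        const matched = fakeDevelopers.filter((d) => d.plan === undefined);
        matched.forEach((d) => { d.plan = update.$set.plan; });
        return { matchedCount: matched.length, modifiedCount: matched.length };
    },
};
const connectDB = async () => {};
const disconnect = async () => {}; // stands in for mongoose.disconnect()

async function backfill() {
    await connectDB();
    try {
        const filter = { plan: { $exists: false } }; // idempotent: only docs missing a plan
        if (dryRun) {
            const matched = await Developer.countDocuments(filter);
            console.log(`[dry-run] would update ${matched} developer(s)`);
        } else {
            const result = await Developer.updateMany(filter, { $set: { plan: "free" } });
            console.log(`Backfilled ${result.modifiedCount} developer(s)`);
        }
    } finally {
        await disconnect(); // always close DB sockets before the process exits
    }
}

backfill();
```

The `finally` block covers both the success and the error path, which is the "both try and catch" requirement from the prompt expressed more compactly.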
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/dashboard-api/src/middlewares/planEnforcement.js`:
- Around line 104-109: The middleware currently blocks all external
configuration by checking limits.byokEnabled; change it to gate by the specific
feature requested: update the caller or middleware to accept a feature type
(e.g., "BYOK" vs "BYOM"), call resolveEffectivePlan(req.developer) and
getPlanLimits({ plan: effectivePlan }) as before, then if the request is for
BYOM check limits.byomEnabled and if for BYOK check limits.byokEnabled,
returning the AppError only when the relevant flag is false (use the same
AppError message/context). Reference the existing resolveEffectivePlan,
getPlanLimits, and limits.byokEnabled/limits.byomEnabled symbols when applying
this split check.
- Around line 68-87: Load the Project scoped to the requesting developer before
computing collection limits and enforce ownership: replace
Project.findById(projectId) with a lookup that includes owner ===
req.developer._id (e.g., Project.findOne({_id: projectId, owner:
req.developer._id})) and return the same 404 if not found; then call
getPlanLimits after you have the project and pass project.customLimits (or a
customLimits field) into the limits calculation along with
resolveEffectivePlan(req.developer) so project-level custom limits are applied
before checking project.collections.length against limits.maxCollections.

In `@apps/public-api/src/controllers/storage.controller.js`:
- Around line 40-63: The current logic treats an effectiveLimit of -1 as
unlimited and unconditionally increments internal storage (via
Project.updateOne), which allows Pro users without BYOS to store unbounded data;
change the limit selection to use explicit null/undefined checks for
limits.storageBytes and project.storageLimit (e.g., check typeof
limits.storageBytes === 'number' and typeof project.storageLimit === 'number'
instead of ||), and then only honor the -1 “unlimited” sentinel when the project
is external by gating the branch with isProjectStorageExternal(project) (i.e.,
if (effectiveLimit === -1 && isProjectStorageExternal(project)) do the unlimited
increment); otherwise, when effectiveLimit === -1 but storage is internal, clamp
to a safety ceiling constant (e.g., SAFETY_MAX_BYTES) and use the same
Project.updateOne/$expr check to enforce it and return 403 on matchedCount === 0
as before.

In `@apps/public-api/src/middlewares/usageGate.js`:
- Around line 72-83: The daily quota check must be atomic: replace the read-only
redis.get(reqCountKey) with an atomic increment (use redis.incr or incrby) so
you obtain the post-increment count, then if that count exceeds limits.reqPerDay
call redis.decr(reqCountKey) to roll back and return the AppError; if it is
within limit set a flag (e.g. req._dailyCountIncremented = true) so the
downstream logger (the code that also increments reqCountKey) skips its
duplicate increment. Locate getDayKey(), reqCountKey, limits.reqPerDay and where
the logger increments the same key to add the flag check and the atomic
incr/decr flow.
- Around line 86-92: The assignment to req.planLimits must happen immediately
after computing limits so downstream gates see them even if Redis fails: move
the line setting req.planLimits = limits to directly after limits is resolved
(inside the try block before any Redis-dependent checks or awaiting further
calls) and ensure the catch still logs the error and calls next(); update the
usageGate middleware (the try/catch around limits and next()) so req.planLimits
is always set when limits are available, even if later Redis operations throw.

In `@apps/public-api/src/routes/mail.js`:
- Line 8: The middleware order on the /send route currently calls
checkUsageLimits before requireSecretKey so invalid/publishable secrets
increment Redis counters; update the router.post call to invoke requireSecretKey
before checkUsageLimits (i.e., router.post("/send", verifyApiKey,
requireSecretKey, checkUsageLimits, sendMail)) and apply the same reorder in the
corresponding route in storage.js so that requireSecretKey runs prior to
checkUsageLimits, preventing failed auth attempts from consuming quota.
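The atomic daily-count flow requested for `usageGate.js` in the list above can be sketched as follows. Names like `reqCountKey` and `limits.reqPerDay` follow the review; the `redis` object is an in-memory fake so the sketch runs standalone.

```javascript
// INCR returns the post-increment value, so the check-then-act race of a
// plain GET disappears; on overflow we DECR to roll the counter back.
const counters = new Map();
const redis = {
    async incr(key) { const v = (counters.get(key) || 0) + 1; counters.set(key, v); return v; },
    async decr(key) { const v = (counters.get(key) || 0) - 1; counters.set(key, v); return v; },
};

async function checkDailyQuota(req, reqCountKey, reqPerDay) {
    const count = await redis.incr(reqCountKey);
    if (count > reqPerDay) {
        await redis.decr(reqCountKey); // roll back so the counter stays accurate
        return false;                  // caller responds with the 429 AppError
    }
    req._dailyCountIncremented = true; // downstream logger skips its duplicate incr
    return true;
}

(async () => {
    const req = {};
    for (let i = 0; i < 5; i++) await checkDailyQuota(req, "usage:day", 3);
    console.log(counters.get("usage:day")); // 3 -- never exceeds the quota
})();
```

With a real client the same pattern holds: `redis.incr` is atomic server-side, so two concurrent requests cannot both read the same pre-increment value.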

---

Outside diff comments:
In `@apps/dashboard-api/src/controllers/analytics.controller.js`:
- Around line 33-54: Update the Developer.findById projection in the analytics
controller to include the subscription fields used by resolveEffectivePlan by
selecting "maxProjects maxCollections plan planExpiresAt" (do not add
planActivatedAt), so dev.plan and dev.planExpiresAt are populated before calling
resolveEffectivePlan and getPlanLimits; also replace the current raw-error
response in the controller's catch/error handler to use the AppError class and
return the standard JSON shape { success: false, data: {}, message: "..." } so
errors conform to the app's response format.

---

Nitpick comments:
In `@apps/dashboard-api/src/controllers/analytics.controller.js`:
- Around line 64-66: The catch block in analytics.controller.js currently
returns raw error text with res.status(500).json({ error: err.message }) which
leaks internal details and breaks the { success, data, message } contract;
update the handler to throw or pass an AppError (e.g., new AppError('Internal
server error', 500)) to the centralized error middleware or, if you must respond
here, return the standardized payload like res.status(500).json({ success:
false, data: {}, message: 'Internal server error' }) and ensure you do not
include err.message or raw MongoDB errors; reference the catch block around the
function handling analytics requests and the use of AppError/shared error
middleware to fix this consistently.


In `@apps/public-api/src/controllers/mail.controller.js`:
- Line 297: Ensure the route handler always has req.planLimits populated by
verifying the usage-check middleware is present: confirm checkUsageLimits is
applied to this route (or add an explicit guard at the top of the handler) so
getMonthlyMailLimit(req.project, req.planLimits) never receives undefined; also
add a unit test that mounts the route with checkUsageLimits in the middleware
chain and asserts req.planLimits is set before the controller runs (referencing
getMonthlyMailLimit, req.planLimits, checkUsageLimits and utils/mailLimit.js).
- Around line 177-185: Minor consistency: instead of returning inline
res.status(403).json(...) in the mail template gate, throw the same AppError
used elsewhere so the global error handler formats responses consistently;
locate the block guarded by usingTemplate that checks
req.planLimits.mailTemplatesEnabled === false and replace the inline response
with throwing new AppError(403, "Pre-designed Email Templates are a Pro feature.
Please upgrade to use this functionality.") so the global handler produces the {
success, data, message } shape.

🪄 Autofix (Beta)

✅ Autofix completed


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c2f72f45-b8a9-4736-99fa-87e1291dd6b7

📥 Commits

Reviewing files that changed from the base of the PR and between b96bc40 and ff8d46f.

📒 Files selected for processing (20)
  • apps/dashboard-api/src/controllers/analytics.controller.js
  • apps/dashboard-api/src/controllers/project.controller.js
  • apps/dashboard-api/src/middlewares/planEnforcement.js
  • apps/dashboard-api/src/routes/projects.js
  • apps/public-api/src/app.js
  • apps/public-api/src/controllers/mail.controller.js
  • apps/public-api/src/controllers/storage.controller.js
  • apps/public-api/src/middlewares/usageGate.js
  • apps/public-api/src/routes/data.js
  • apps/public-api/src/routes/mail.js
  • apps/public-api/src/routes/storage.js
  • apps/public-api/src/utils/mailLimit.js
  • packages/common/src/index.js
  • packages/common/src/models/Developer.js
  • packages/common/src/models/Project.js
  • packages/common/src/redis/redisCaching.js
  • packages/common/src/utils/AppError.js
  • packages/common/src/utils/planLimits.js
  • tools/db-queries/backfill-developer-plans.js
  • tools/db-queries/global-mail-templates.json

Comment thread apps/dashboard-api/src/middlewares/planEnforcement.js Outdated
Comment thread apps/dashboard-api/src/middlewares/planEnforcement.js
Comment thread apps/public-api/src/controllers/storage.controller.js
Comment thread apps/public-api/src/middlewares/usageGate.js Outdated
Comment thread apps/public-api/src/middlewares/usageGate.js Outdated
Comment thread apps/public-api/src/routes/mail.js Outdated
const { sendMail } = require("../controllers/mail.controller");

router.post("/send", verifyApiKey, projectRateLimiter, requireSecretKey, sendMail);
router.post("/send", verifyApiKey, checkUsageLimits, requireSecretKey, sendMail);

⚠️ Potential issue | 🟡 Minor

Middleware order: usage is counted before secret-key auth succeeds.

checkUsageLimits runs before requireSecretKey, so requests using a publishable/invalid secret key still increment the per-minute and per-day Redis counters before being rejected with 403. A caller hitting /send with wrong credentials can burn through daily quota and trigger 429s on legitimate traffic.

Consider moving requireSecretKey ahead of checkUsageLimits (same applies to storage.js):

Proposed fix
-router.post("/send", verifyApiKey, checkUsageLimits, requireSecretKey, sendMail);
+router.post("/send", verifyApiKey, requireSecretKey, checkUsageLimits, sendMail);

@yash-pouranik
Collaborator Author

@copilot apply changes based on the comments in this thread

@coderabbitai

coderabbitai Bot commented Apr 22, 2026

Note

Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Fixes Applied Successfully

Fixed 7 file(s) based on 6 unresolved review comments.

Files modified:

  • apps/dashboard-api/src/middlewares/planEnforcement.js
  • apps/public-api/src/controllers/storage.controller.js
  • apps/public-api/src/middlewares/api_usage.js
  • apps/public-api/src/middlewares/usageGate.js
  • apps/public-api/src/routes/mail.js
  • apps/public-api/src/routes/storage.js
  • package-lock.json

Commit: e272023e8fd283764f655089f3b491fa288ec50c

The changes have been pushed to the feature/plan-enforcement branch.

Time taken: 5m 4s


Co-authored-by: CodeRabbit <[email protected]>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
apps/public-api/src/middlewares/usageGate.js (1)

16-21: ⚠️ Potential issue | 🟡 Minor

Guard against missing req.project.owner.

If rawOwner is null/undefined (e.g., project doc loaded without populating owner, or a legacy project with a stale ref), rawOwner.toString() throws a TypeError that falls into the outer catch and fails the request open — bypassing plan enforcement entirely.

     const rawOwner = req.project.owner;
+    if (!rawOwner) return { plan: 'free', planExpiresAt: null, legacyLimits: { maxProjects: null, maxCollections: null } };
     const developerId = (rawOwner && typeof rawOwner === 'object' && rawOwner._id)
         ? rawOwner._id.toString()
         : rawOwner.toString();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/middlewares/usageGate.js` around lines 16 - 21,
resolveDeveloperPlanContext currently assumes req.project.owner exists when
computing developerId and calls toString(), which throws if rawOwner is
null/undefined; add a guard for rawOwner (check if falsy) before calling
toString() — e.g., if rawOwner is missing return null (or set developerId to
null) so you don't call rawOwner.toString(); then continue the existing logic
that handles a populated owner (checking rawOwner._id) and the downstream
callers of resolveDeveloperPlanContext should handle a null/unknown developerId.
Ensure you update resolveDeveloperPlanContext and references to developerId to
avoid calling toString() on undefined.
🧹 Nitpick comments (3)
apps/dashboard-api/src/middlewares/planEnforcement.js (1)

106-122: BYOK/BYOM gates ignore project-level customLimits.

checkByokGate (and checkByomGate at lines 127-143) compute limits from only plan, so enterprise-scoped customLimits.byokEnabled / byomEnabled on a project cannot unlock these features. This is inconsistent with checkCollectionLimit (line 84), which correctly merges project.customLimits. Since this middleware runs on PATCH /:projectId/byod-config, the target project id is available in req.params.projectId — load it scoped to the developer and pass project.customLimits through the same way the collection gate does.

🛡️ Proposed direction for checkByokGate (apply analogously to checkByomGate)
 exports.checkByokGate = async (req, res, next) => {
     try {
         if (req.user?.isAdmin) return next();
 
+        const projectId = req.params.projectId;
+        let customLimits = null;
+        if (projectId) {
+            if (typeof projectId !== 'string' || !/^[a-fA-F0-9]{24}$/.test(projectId)) {
+                return next(new AppError(400, 'Invalid projectId format'));
+            }
+            const project = await Project.findOne({ _id: projectId, owner: req.developer._id });
+            if (!project) return next(new AppError(404, 'Project not found'));
+            customLimits = project.customLimits;
+        }
+
         const effectivePlan = resolveEffectivePlan(req.developer);
-        const limits = getPlanLimits({ plan: effectivePlan });
+        const limits = getPlanLimits({ plan: effectivePlan, customLimits });
 
         if (!limits.byokEnabled) {
             return next(new AppError(403, 'External configuration (BYOK) is a Pro feature. Please upgrade to connect your own resources.'));
         }
 
         next();
     } catch (err) {
         next(err);
     }
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/middlewares/planEnforcement.js` around lines 106 -
122, checkByokGate (and likewise checkByomGate) currently calls
resolveEffectivePlan(req.developer) and getPlanLimits({ plan }) without
considering project-level overrides, so project.customLimits.byokEnabled /
byomEnabled never unlocks features; update the middleware to load the target
project scoped to the developer using req.params.projectId (same mechanism used
by checkCollectionLimit), merge project.customLimits into the getPlanLimits call
(e.g., pass customLimits or merge into the options you pass to getPlanLimits),
and then check the resulting limits.byokEnabled / limits.byomEnabled before
returning the 403 or calling next(); keep the existing admin bypass
(req.user?.isAdmin) and error handling.
apps/public-api/src/middlewares/usageGate.js (2)

93-97: Fail-open on Redis errors is a quota-bypass vector — consider logging-and-metrics at minimum.

catch → next() is intentional per the comment, but during a Redis outage every request skips per-minute and per-day enforcement for all tiers. At a minimum, emit a structured log/metric so this state is observable, and consider a circuit-breaker or fail-closed policy on the /mail/send and /storage/upload routes where abuse has direct cost impact. The dashboard-level plan ceilings (maxProjects, storage $expr caps) still hold, but per-request throttling is gone.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/middlewares/usageGate.js` around lines 93 - 97, The catch
block in the usageGate middleware currently swallows Redis errors and calls
next(), which fails open; change that to emit structured logs and metrics and
optionally fail-closed for high-cost endpoints: replace console.error("Usage
limit check failed:", err) with a structured logger call (e.g., logger.error or
processLogger.error) that includes error, stack, redis host/conn id and request
context (route, user id, plan), and increment a monitoring counter (e.g.,
"usage_gate.redis_error" with tags route=t.route, env, plan) so outages are
observable; additionally, for the critical routes "/mail/send" and
"/storage/upload" implement a short circuit (in the same middleware or a
wrapper) that returns a 503 or enforces conservative default quotas when Redis
is unavailable (or use a circuit-breaker flag toggled by the redis error metric)
instead of always calling next().
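A minimal sketch of the "log, measure, and fail closed only where it hurts" policy described above (route names follow the review; the structured-logging shape and `HIGH_COST_ROUTES` set are assumptions, not existing code):

```javascript
// Decide what the usage gate should do when Redis is unreachable. High-cost
// routes fail closed with a 503; everything else keeps the current fail-open
// behavior, but the outage is now observable via a structured log line.
const HIGH_COST_ROUTES = new Set(["/mail/send", "/storage/upload"]);

function onUsageGateError(err, req) {
    // Structured log instead of a bare console.error, so outages are visible
    // to log-based alerting; a real setup would also bump a metric counter
    // such as "usage_gate.redis_error".
    console.error(JSON.stringify({
        event: "usage_gate.redis_error",
        route: req.path,
        message: err.message,
    }));
    return HIGH_COST_ROUTES.has(req.path)
        ? { action: "reject", status: 503 } // fail closed where abuse has direct cost
        : { action: "allow" };             // fail open elsewhere, as today
}

console.log(onUsageGateError(new Error("ECONNREFUSED"), { path: "/mail/send" }));
console.log(onUsageGateError(new Error("ECONNREFUSED"), { path: "/data/list" }));
```

The middleware's catch block would call this helper and either `next()` or respond with the returned status, keeping the decision logic in one testable place.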

23-43: Plan cache invalidation strategy needed for future plan-change endpoints

Plan-change functionality is not currently implemented, but when it is added, ensure Developer.plan mutations (dashboard endpoints and billing webhooks) call deleteDeveloperPlanCache to invalidate the 5-minute cache. Otherwise, upgrades/downgrades take up to 5 minutes to take effect.

Additionally, the cached plan context includes planExpiresAt, but this is checked at request time in resolveEffectivePlan() to reject expired Pro plans. Document this behavior to ensure it's preserved if the caching logic changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/middlewares/usageGate.js` around lines 23 - 43, The plan
cache logic caches developer plan context via getDeveloperPlanCache and
setDeveloperPlanCache for 5 minutes but lacks cache invalidation; ensure any
endpoints or handlers that mutate Developer.plan or planExpiresAt (dashboard
update functions and billing webhook handlers that call Developer.findById /
update) call deleteDeveloperPlanCache(developerId) after successful writes so
upgrades/downgrades take effect immediately, and keep resolveEffectivePlan(...)
behavior intact by preserving planExpiresAt in the cached context (document this
in the mutation handlers' comments) so expiry checks continue to run at request
time.
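The invalidation rule above can be sketched like this. `deleteDeveloperPlanCache` and the key namespace follow the review; `Developer` and the cache are in-memory stand-ins, and `changePlan` is a hypothetical mutation handler, not an existing endpoint.

```javascript
// Any write to Developer.plan / planExpiresAt must drop the 5-minute cache
// entry afterwards, so upgrades/downgrades take effect on the next request.
const cache = new Map();
const deleteDeveloperPlanCache = async (id) => { cache.delete(`developer:plan:${id}`); };

const db = new Map([["dev1", { plan: "free", planExpiresAt: null }]]);
const Developer = {
    async findByIdAndUpdate(id, update) { Object.assign(db.get(id), update); },
};

async function changePlan(developerId, plan, planExpiresAt) {
    await Developer.findByIdAndUpdate(developerId, { plan, planExpiresAt });
    // Invalidate AFTER the successful write, so the next public-api request
    // re-reads the fresh plan instead of serving a stale cached context.
    // planExpiresAt stays in the cached context; resolveEffectivePlan still
    // checks expiry at request time.
    await deleteDeveloperPlanCache(developerId);
}

(async () => {
    cache.set("developer:plan:dev1", JSON.stringify({ plan: "free" })); // warm cache
    await changePlan("dev1", "pro", new Date("2027-01-01"));
    console.log(cache.has("developer:plan:dev1")); // false -- next read repopulates
})();
```

Billing webhooks would call the same helper after their writes, so both mutation paths share one invalidation point.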
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/dashboard-api/src/middlewares/planEnforcement.js`:
- Around line 47-51: The current TOCTOU comes from checking
Project.countDocuments and project.collections.length before creation (see
Project.countDocuments, the limits check that returns new AppError, and the
project.collections usage); fix by enforcing the limit atomically: wrap the
count+create in a MongoDB transaction (startSession()/withTransaction) or
implement a conditional update/atomic counter on the Developer document that
increments only if current value < max (using findOneAndUpdate with $inc and a
filter on the limit) so concurrent requests cannot exceed
maxProjects/maxCollections, or add a unique per-developer project slot via a
conditional create; additionally, add a server-side re-validation in the
controller after create and rollback/delete the resource if limit was exceeded
to cover non-transactional cases.

In `@apps/public-api/src/controllers/storage.controller.js`:
- Around line 41-84: The code contains an unreachable branch: inside the block
guarded by if (!external) (where external was set from
isProjectStorageExternal(project)), the branch if (effectiveLimit === -1 &&
isProjectStorageExternal(project)) can never be true; remove this dead branch
(Lines performing await Project.updateOne(...) for the external -1 case) and
keep the internal-storage clamp logic that uses SAFETY_MAX_BYTES and the later
Project.updateOne with $expr, or if the intent was to re-check storage location
use the already-evaluated external variable instead of calling
isProjectStorageExternal(project) again; adjust code to only handle the internal
-1 case and the normal effectiveLimit path (referencing variables external,
effectiveLimit, isProjectStorageExternal(project), Project.updateOne, and
SAFETY_MAX_BYTES).


🪄 Autofix (Beta)

✅ Autofix completed


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7a9d780d-63a8-4bb9-a69c-ff8ef9a4ae70

📥 Commits

Reviewing files that changed from the base of the PR and between ff8d46f and e272023.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (7)
  • apps/dashboard-api/src/middlewares/planEnforcement.js
  • apps/public-api/src/__tests__/storage.controller.test.js
  • apps/public-api/src/controllers/storage.controller.js
  • apps/public-api/src/middlewares/api_usage.js
  • apps/public-api/src/middlewares/usageGate.js
  • apps/public-api/src/routes/mail.js
  • apps/public-api/src/routes/storage.js
✅ Files skipped from review due to trivial changes (1)
  • apps/public-api/src/__tests__/storage.controller.test.js
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/public-api/src/routes/storage.js

Comment on lines +47 to +51
const currentProjectsCount = await Project.countDocuments({ owner: req.developer._id });

if (currentProjectsCount >= limits.maxProjects) {
return next(new AppError(403, `Project limit reached (${limits.maxProjects}). Please upgrade your plan to create more projects.`));
}

⚠️ Potential issue | 🟡 Minor

TOCTOU race allows exceeding project/collection limits.

countDocuments (and project.collections.length at line 93) is checked before the downstream controller creates the resource. Two concurrent requests from the same developer can both pass the check and push the total to maxProjects + 1 (or maxCollections + 1). Under the Free tier's single-project cap this is easily reachable with a double-click or two API clients.

Consider enforcing the limit atomically — e.g., wrap the count+create in a transaction, use a unique index / conditional update that fails when the cap is reached, or use an atomic counter on the Developer document. At minimum, re-validate the count inside the controller after creation and roll back on overflow.

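The conditional-update option from the comment above can be sketched as follows. The `projectCount` field is an assumption (the repo would need to keep such a counter in sync with project creation/deletion); `Developer.findOneAndUpdate` is mocked in-memory so the race-free behavior is demonstrable standalone.

```javascript
// The filter and $inc run atomically server-side in MongoDB, so two
// concurrent requests cannot both pass the cap.
const devDoc = { _id: "dev1", projectCount: 0 };
const Developer = {
    // In-memory stand-in for Mongo's atomic findOneAndUpdate: the document
    // only matches (and is incremented) while projectCount < the cap.
    async findOneAndUpdate(filter, update) {
        if (devDoc.projectCount < filter.projectCount.$lt) {
            devDoc.projectCount += update.$inc.projectCount;
            return devDoc;
        }
        return null;
    },
};

async function claimProjectSlot(developerId, maxProjects) {
    const claimed = await Developer.findOneAndUpdate(
        { _id: developerId, projectCount: { $lt: maxProjects } },
        { $inc: { projectCount: 1 } },
    );
    return claimed !== null; // false -> respond 403 "Project limit reached"
}

(async () => {
    // Three "concurrent" creations against a Free-tier cap of 1: only one wins.
    const results = await Promise.all([1, 2, 3].map(() => claimProjectSlot("dev1", 1)));
    console.log(results.filter(Boolean).length); // 1
})();
```

If project creation then fails, the controller would decrement the counter again (the rollback half of the review's suggestion); a transaction via `startSession()`/`withTransaction` is the heavier alternative when the deployment runs a replica set.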

Comment thread apps/public-api/src/controllers/storage.controller.js
@coderabbitai

coderabbitai Bot commented Apr 22, 2026


Fixes Applied Successfully

Fixed 4 file(s) based on 2 unresolved review comments.

Files modified:

  • apps/dashboard-api/src/controllers/project.controller.js
  • apps/dashboard-api/src/middlewares/planEnforcement.js
  • apps/public-api/src/controllers/storage.controller.js
  • package-lock.json

Commit: 02e2e50dac35caaa38e10d4d919d94b19c4b61d3

The changes have been pushed to the feature/plan-enforcement branch.

Time taken: 4m 57s


Co-authored-by: CodeRabbit <[email protected]>
@yash-pouranik
Collaborator Author

Okay to merge? @coderabbitai

@coderabbitai
Contributor

coderabbitai Bot commented Apr 22, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

Let me check the current state of the PR — open review comments, CI checks, and overall status.


🏁 Script executed:

#!/bin/bash
echo "=== PR STATUS ==="
gh pr view 128 --repo geturbackend/urBackend --json state,mergeable,mergeStateStatus,reviewDecision,statusCheckRollup,isDraft

echo ""
echo "=== OPEN/UNRESOLVED REVIEW COMMENTS ==="
gh pr view 128 --repo geturbackend/urBackend --json reviews,comments

echo ""
echo "=== PR CHECKS ==="
gh pr checks 128 --repo geturbackend/urBackend 2>&1 || echo "Could not fetch checks"

echo ""
echo "=== REVIEW THREADS ==="
gh api repos/geturbackend/urBackend/pulls/128/reviews --jq '[.[] | {id: .id, state: .state, user: .user.login, body: .body}]' 2>&1

echo ""
echo "=== UNRESOLVED REVIEW THREADS ==="
gh api graphql -f query='
{
  repository(owner: "geturbackend", name: "urBackend") {
    pullRequest(number: 128) {
      reviewThreads(first: 50) {
        nodes {
          isResolved
          isOutdated
          comments(first: 1) {
            nodes {
              body
              author { login }
            }
          }
        }
      }
    }
  }
}' 2>&1

Length of output: 153404


@yash-pouranik — Here's a merge-readiness summary:

✅ Green lights:

  • All CI checks pass (CodeQL, frontend-lint, dashboard-api-tests, public-api-tests, sdk-tests)
  • Branch is mergeable with a clean merge state
  • All 6 of my previously flagged major issues have been addressed (BYOK/BYOM gate, storage unlimited internal, atomic daily counting, req.planLimits attachment, dead branch, collection limit ownership)

⚠️ Items to verify before merging:

There are two Copilot-flagged issues that are not outdated and could cause runtime or functional problems, plus one minor item:

  1. 🔴 Potential Runtime Crash: usageGate.js imports getDeveloperPlanCache/setDeveloperPlanCache from @urbackend/common, but packages/common/src/index.js may not export those helpers. If it does not, the import would throw at runtime and break all public API routes. Please confirm packages/common/src/index.js re-exports the developer plan cache functions.

  2. 🟠 Functional Bug for Pro Users: analytics.controller.js calls Developer.findById(user_id).select("maxProjects maxCollections"), but resolveEffectivePlan() needs plan and planExpiresAt to determine the correct tier. Without those fields in the projection, Pro users will silently be returned Free-tier limits in the dashboard stats.

  3. 🟡 Minor — Template gating in mail.controller.js allows Pro-only templates when req.planLimits is missing/undefined (fail-open), and effectiveLimit uses || instead of ?? (swallows 0 overrides).
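The || vs ?? distinction only bites when an override is explicitly 0; a minimal illustration (the maxMailTemplates name is reused from the PR, the values are made up):

```javascript
const limits = { maxMailTemplates: 0 }; // explicit 0 means "feature disabled"
const fallbackLimit = 5;

// `||` treats 0 as falsy and silently swallows the override:
const withOr = limits.maxMailTemplates || fallbackLimit;      // 5

// `??` falls back only on null/undefined, so the 0 override survives:
const withNullish = limits.maxMailTemplates ?? fallbackLimit; // 0
```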

My recommendation: items #1 and #2 above are worth a quick fix before merging. Would you like me to trigger a fresh review of the latest commit, or would you like to address those items first?

@yash-pouranik
Collaborator Author

@coderabbitai
Does this follow good SaaS quality practices, or should we modify something?
Also, handling plans and limits, and then resetting everything once a plan expires, is a headache and is not done here, right?

@coderabbitai
Contributor

coderabbitai Bot commented Apr 22, 2026


@yash-pouranik — Great questions. Here's an honest assessment:


✅ What's done well (SaaS-quality foundations)

| Concern | Implementation |
| --- | --- |
| Tier abstraction | PLAN_LIMITS with free/pro, getPlanLimits with clean override layers |
| Expiry-aware resolution | resolveEffectivePlan falls back to 'free' when planExpiresAt is past |
| Redis caching | 5-min TTL avoids DB hits on every request |
| Atomic counters | Per-minute + per-day Redis increments with rollback on exceed |
| Legacy/enterprise overrides | customLimits on Project, legacy developer limits respected |
| Backfill | Existing developers default to 'free' cleanly |
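The "atomic counters with rollback" pattern noted above could look roughly like this; redis is assumed to be an ioredis-style client and the key naming is purely illustrative:

```javascript
// Hedged sketch of a per-day quota counter: atomically increment, then roll
// back if the quota was exceeded so retries are not double-counted.
async function consumeDailyQuota(redis, projectId, maxPerDay) {
  const day = new Date().toISOString().slice(0, 10);
  const key = `usage:day:${projectId}:${day}`; // illustrative key scheme

  const count = await redis.incr(key);
  if (count === 1) await redis.expire(key, 24 * 60 * 60); // first hit sets TTL

  if (maxPerDay !== -1 && count > maxPerDay) {
    await redis.decr(key); // roll back the over-limit increment
    return false;          // caller responds with 429
  }
  return true;
}
```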

🔴 What's missing (the "headache" you're describing)

1. Plan Expiry Lifecycle — not implemented

resolveEffectivePlan correctly reads as free when expired, but the Developer document still holds plan: 'pro' with a stale planExpiresAt. There's no:

  • Cron/scheduler to proactively downgrade expired plans and invalidate cache
  • Grace period (e.g., 3–7 days before hard cutoff)
  • Email notifications at T-7, T-3, T-1 days before expiry
  • Post-expiry cleanup (e.g., what happens to projects > free tier limit?)
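A minimal sketch of the missing downgrade pass, with the model and cache helper injected so the core logic stays testable; the helper names mirror this PR, but the job itself and its scheduling are not in the code:

```javascript
// Hedged sketch of a daily downgrade job body. Scheduling (cron, queue, etc.)
// is deliberately left out; only the pass itself is shown.
async function downgradeExpiredPlans({ Developer, deleteDeveloperPlanCache, now = new Date() }) {
  const filter = { plan: 'pro', planExpiresAt: { $lte: now } };
  const expired = await Developer.find(filter);
  if (expired.length === 0) return 0;

  await Developer.updateMany(filter, { $set: { plan: 'free' } });

  // Invalidate the 5-minute Redis cache so the downgrade takes effect
  // immediately instead of after the TTL window.
  await Promise.all(expired.map((d) => deleteDeveloperPlanCache(d._id)));
  return expired.length;
}
```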

2. Payment Processor Integration — not implemented

There's no webhook handler (Stripe / LemonSqueezy / Paddle etc.) to:

  • Set plan: 'pro', planActivatedAt, planExpiresAt on successful payment
  • Reset planExpiresAt on renewal
  • Handle failed renewals (dunning → grace period → downgrade)
  • Call deleteDeveloperPlanCache(id) after plan changes

3. Trial Flow — incomplete

trialUsed field exists in the schema, but there's no logic to:

  • Activate a trial (set plan: 'pro', planExpiresAt: now + 14 days, trialUsed: true)
  • Prevent re-trials
  • Differentiate trial vs paid Pro in analytics
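A hedged sketch of the missing activation path, using a conditional update so a second call cannot re-grant the trial (the field names follow the schema; the function itself is an assumption):

```javascript
async function activateTrial(Developer, developerId, days = 14) {
  const planExpiresAt = new Date(Date.now() + days * 24 * 60 * 60 * 1000);
  // Filtering on trialUsed: false makes the guard atomic: once the flag
  // flips, the filter no longer matches and the update returns null.
  return Developer.findOneAndUpdate(
    { _id: developerId, trialUsed: false },
    { $set: { plan: 'pro', planExpiresAt, trialUsed: true } },
    { new: true }
  );
}
```

A null return tells the caller the trial was already consumed, which maps naturally to a 403.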

4. Cache Invalidation on Plan Change

Wherever a plan is changed (webhook, admin panel, etc.), deleteDeveloperPlanCache(developerId) must be called. Currently there's no integration point for this — the 5-min stale window could give Pro access for up to 5 minutes after downgrade.

5. Downgrade Impact — undefined

What happens when a Free user has 3 projects (was Pro, now expired)?

  • Are they blocked from creating more but existing data is preserved? ✅ (reasonable)
  • Is there a UI warning? ❌ Not implemented
  • Are excess projects auto-archived? ❌ Not specified

📋 Recommended Phased Roadmap

Phase 1 — Critical (before monetization launch)
├── Payment webhook endpoint (Stripe recommended)
│   ├── charge.succeeded → set plan fields + deleteDeveloperPlanCache
│   ├── invoice.payment_failed → start grace period
│   └── customer.subscription.deleted → downgrade to free
├── Cron job (daily) — find expired plans, set plan='free', invalidate cache
└── Email alerts — T-7/T-1 expiry warnings

Phase 2 — Important
├── Trial activation endpoint + trialUsed guard
├── Dashboard API: GET /me/plan — return current plan, limits, expiry
├── Grace period field on Developer schema (gracePeriodEndsAt)
└── Downgrade impact policy (document + UI messaging)

Phase 3 — Polish
├── Audit log for plan changes (planHistory array or separate collection)
├── Admin override endpoint (set custom plan/limits per developer)
└── Usage dashboard (current day/month consumption vs limits)
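The Phase 1 webhook branch could be factored as a small dispatcher. This is a sketch only: the event names follow the roadmap above, while reading developerId from event metadata and the 30-day placeholder period are assumptions about how checkout would be wired:

```javascript
// Hedged sketch: one dispatcher for the Phase 1 billing events.
async function handleBillingEvent(event, { Developer, deleteDeveloperPlanCache }) {
  const developerId = event.data.object.metadata?.developerId; // assumed tagging
  if (!developerId) return;

  switch (event.type) {
    case 'charge.succeeded':
      // Placeholder 30-day period; a real handler would read the billing
      // period from the subscription object.
      await Developer.updateOne(
        { _id: developerId },
        {
          $set: {
            plan: 'pro',
            planActivatedAt: new Date(),
            planExpiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000)
          }
        }
      );
      break;
    case 'invoice.payment_failed':
      // Dunning: start a grace period instead of downgrading immediately.
      await Developer.updateOne(
        { _id: developerId },
        { $set: { gracePeriodEndsAt: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000) } }
      );
      break;
    case 'customer.subscription.deleted':
      await Developer.updateOne({ _id: developerId }, { $set: { plan: 'free' } });
      break;
    default:
      return; // unrelated event types are ignored
  }
  // Every plan change must also bust the cached plan (item 4 above).
  await deleteDeveloperPlanCache(developerId);
}
```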

Quick Schema Addition to Consider

For tracking grace periods (minimal change):

// packages/common/src/models/Developer.js
gracePeriodEndsAt: { type: Date, default: null },
planHistory: [
  {
    plan: String,
    activatedAt: Date,
    reason: String, // 'payment', 'trial', 'admin', 'expiry'
  }
]
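Wherever a plan changes, the history entry would be appended alongside the update; a sketch of such a helper (hypothetical, not in the PR), using the fields above:

```javascript
// Sketch: apply a plan change and append an audit entry in one update.
// `reason` mirrors the values suggested above: 'payment', 'trial', 'admin', 'expiry'.
async function recordPlanChange(Developer, developerId, plan, reason, now = new Date()) {
  return Developer.updateOne(
    { _id: developerId },
    {
      $set: { plan, planActivatedAt: now },
      $push: { planHistory: { plan, activatedAt: now, reason } }
    }
  );
}
```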

Bottom line: The enforcement layer is production-quality, but the lifecycle layer (payment → activate → renew → expire → downgrade) is entirely missing. The PR is safe to merge as an enforcement foundation, but monetization cannot actually go live without Phase 1 above. Would you like me to open a follow-up issue tracking the payment integration and expiry lifecycle work?

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
apps/public-api/src/controllers/storage.controller.js (1)

42-79: New 403 responses don't follow controller response/error contract; branches can be deduplicated.

Two issues in the newly added reservation block:

  1. The two 403 responses at Lines 65 and 77 use a raw { error: "..." } shape and bypass AppError. Per the controller guideline, responses must use the { success, data, message } envelope and errors must flow through AppError (so the framework can set proper status/metadata, e.g. 403 for blocked routes as used elsewhere in this PR). Pre-existing handlers in this file have the same drift, but these are new lines and a good place to start aligning.

  2. The if (effectiveLimit === -1) { ... } else { ... } branches are structurally identical — only the cap differs. Collapsing them removes the duplicated Project.updateOne(...) call and the duplicated 403 response, eliminating a future drift hazard (e.g. someone fixing the message in one branch but not the other).

♻️ Suggested consolidation (also routes the 403 through AppError)
-            // For internal storage: honor -1 as unlimited but clamp to safety ceiling
-            if (effectiveLimit === -1) {
-                // Internal storage with -1: clamp to safety ceiling
-                const result = await Project.updateOne(
-                    {
-                        _id: project._id,
-                        $expr: { $lte: [{ $add: ["$storageUsed", file.size] }, SAFETY_MAX_BYTES] }
-                    },
-                    { $inc: { storageUsed: file.size } }
-                );
-
-                if (result.matchedCount === 0) {
-                    return res.status(403).json({ error: "Storage limit exceeded. Please upgrade your plan or delete some files." });
-                }
-            } else {
-                const result = await Project.updateOne(
-                    {
-                        _id: project._id,
-                        $expr: { $lte: [{ $add: ["$storageUsed", file.size] }, effectiveLimit] }
-                    },
-                    { $inc: { storageUsed: file.size } }
-                );
-
-                if (result.matchedCount === 0) {
-                    return res.status(403).json({ error: "Storage limit exceeded. Please upgrade your plan or delete some files." });
-                }
-            }
+            // For internal storage, -1 (unlimited) is clamped to SAFETY_MAX_BYTES.
+            const cap = effectiveLimit === -1 ? SAFETY_MAX_BYTES : effectiveLimit;
+            const result = await Project.updateOne(
+                {
+                    _id: project._id,
+                    $expr: { $lte: [{ $add: ["$storageUsed", file.size] }, cap] }
+                },
+                { $inc: { storageUsed: file.size } }
+            );
+
+            if (result.matchedCount === 0) {
+                throw new AppError(
+                    "Storage limit exceeded. Please upgrade your plan or delete some files.",
+                    403
+                );
+            }

Requires importing AppError at the top of the file and ensuring the surrounding try/catch (or an error middleware) translates it to the standard { success: false, data: {}, message } envelope.

As per coding guidelines: "All API endpoints return: { success: bool, data: {}, message: "" }. Use AppError class for errors — never raw throw, never expose MongoDB errors to client."
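For completeness, the translating error middleware could look roughly like this; the statusCode property on AppError is assumed from how it is used elsewhere in this PR:

```javascript
// Hedged sketch of an Express error middleware that turns an AppError into
// the repo's { success, data, message } envelope. The four-argument signature
// is what marks it as error-handling middleware for Express.
function errorHandler(err, req, res, next) {
  const statusCode = err.statusCode || 500;
  // Never leak internal (e.g. MongoDB) error details to the client.
  const message = statusCode === 500 ? 'Internal server error' : err.message;
  res.status(statusCode).json({ success: false, data: {}, message });
}
```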

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/controllers/storage.controller.js` around lines 42 - 79,
The new reservation block duplicates Project.updateOne and returns raw 403 JSON;
collapse the two branches by computing a single cap (use SAFETY_MAX_BYTES when
effectiveLimit === -1, otherwise use effectiveLimit), call Project.updateOne
once with that cap, and when result.matchedCount === 0 surface the error via
AppError (e.g., throw or pass new AppError("Storage limit exceeded. Please
upgrade your plan or delete some files.", 403)) instead of returning raw JSON;
also add/import AppError at the top and ensure the existing try/catch/error
middleware will translate it to the { success, data, message } envelope.
apps/dashboard-api/src/middlewares/planEnforcement.js (1)

33-34: Use strict boolean checks for admin bypasses as a security best practice.

While the JWT payload correctly assigns isAdmin as a boolean result (user.email === process.env.ADMIN_EMAIL at auth.controller.js:222 and :614), using strict equality (req.user?.isAdmin === true) hardens authorization checks against future changes to the JWT producer or accidental type coercion bugs.

🔒 Proposed hardening
-        if (req.user?.isAdmin) return next();
+        if (req.user?.isAdmin === true) return next();

Apply the same replacement at Lines 64, 106, and 127.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/middlewares/planEnforcement.js` around lines 33 - 34,
Replace permissive truthy checks of the admin flag with strict boolean
comparison: where the middleware currently does "if (req.user?.isAdmin) return
next();" change to "if (req.user?.isAdmin === true) return next();" and apply
the same change for every other admin bypass check referencing req.user?.isAdmin
(e.g., similar occurrences later in the file), ensuring you only allow the
bypass when the JWT explicitly sets isAdmin to true.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/dashboard-api/src/middlewares/planEnforcement.js`:
- Around line 108-133: checkByokGate and checkByomGate call resolveEffectivePlan
and getPlanLimits but don’t pass per-project overrides; update both middlewares
(checkByokGate and checkByomGate) to pass project.customLimits into
getPlanLimits (e.g., getPlanLimits({ plan: effectivePlan, customLimits:
req.project?.customLimits })) so feature gates respect any project-level
overrides similarly to checkCollectionLimit while keeping the existing admin
bypass and error handling.

---

Nitpick comments:
In `@apps/dashboard-api/src/middlewares/planEnforcement.js`:
- Around line 33-34: Replace permissive truthy checks of the admin flag with
strict boolean comparison: where the middleware currently does "if
(req.user?.isAdmin) return next();" change to "if (req.user?.isAdmin === true)
return next();" and apply the same change for every other admin bypass check
referencing req.user?.isAdmin (e.g., similar occurrences later in the file),
ensuring you only allow the bypass when the JWT explicitly sets isAdmin to true.

In `@apps/public-api/src/controllers/storage.controller.js`:
- Around line 42-79: The new reservation block duplicates Project.updateOne and
returns raw 403 JSON; collapse the two branches by computing a single cap (use
SAFETY_MAX_BYTES when effectiveLimit === -1, otherwise use effectiveLimit), call
Project.updateOne once with that cap, and when result.matchedCount === 0 surface
the error via AppError (e.g., throw or pass new AppError("Storage limit
exceeded. Please upgrade your plan or delete some files.", 403)) instead of
returning raw JSON; also add/import AppError at the top and ensure the existing
try/catch/error middleware will translate it to the { success, data, message }
envelope.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 888996cb-0b29-449a-8e63-4a79b17aa780

📥 Commits

Reviewing files that changed from the base of the PR and between e272023 and 1a0ba2d.

📒 Files selected for processing (7)
  • apps/dashboard-api/src/controllers/analytics.controller.js
  • apps/dashboard-api/src/controllers/project.controller.js
  • apps/dashboard-api/src/middlewares/planEnforcement.js
  • apps/public-api/src/__tests__/mail.controller.test.js
  • apps/public-api/src/controllers/mail.controller.js
  • apps/public-api/src/controllers/storage.controller.js
  • packages/common/src/index.js
🚧 Files skipped from review as they are similar to previous changes (3)
  • apps/dashboard-api/src/controllers/analytics.controller.js
  • apps/dashboard-api/src/controllers/project.controller.js
  • packages/common/src/index.js

Comment thread apps/dashboard-api/src/middlewares/planEnforcement.js
Comment thread apps/dashboard-api/src/middlewares/planEnforcement.js Fixed

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
@yash-pouranik
Collaborator Author

Let's merge? Or anything else?

@coderabbitai

@coderabbitai
Contributor

coderabbitai Bot commented Apr 22, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
echo "=== PR STATUS ==="
gh pr view 128 --repo geturbackend/urBackend --json state,mergeable,mergeStateStatus,isDraft,statusCheckRollup

echo ""
echo "=== UNRESOLVED REVIEW THREADS ==="
gh api graphql -f query='
{
  repository(owner: "geturbackend", name: "urBackend") {
    pullRequest(number: 128) {
      reviewThreads(first: 50) {
        nodes {
          isResolved
          isOutdated
          comments(first: 1) {
            nodes {
              body
              author { login }
            }
          }
        }
      }
    }
  }
}' | python3 -c "
import json, sys
data = json.load(sys.stdin)
threads = data['data']['repository']['pullRequest']['reviewThreads']['nodes']
unresolved = [t for t in threads if not t['isResolved'] and not t['isOutdated']]
print(f'Total threads: {len(threads)}')
print(f'Unresolved + not outdated: {len(unresolved)}')
for t in unresolved:
    c = t['comments']['nodes'][0]
    print(f'  - [{c[\"author\"][\"login\"]}]: {c[\"body\"][:120]}')
"

Length of output: 2985


🏁 Script executed:

#!/bin/bash
echo "=== analytics.controller.js - Developer select fields ==="
rg -n "select\(" apps/dashboard-api/src/controllers/analytics.controller.js -A2 -B2

echo ""
echo "=== packages/common/src/index.js - getDeveloperPlanCache exports ==="
cat packages/common/src/index.js

echo ""
echo "=== backfill-developer-plans.js - mongoose import ==="
cat tools/db-queries/backfill-developer-plans.js

echo ""
echo "=== planLimits.js - mergeLegacyOverrides logic ==="
cat packages/common/src/utils/planLimits.js

Length of output: 9044


@yash-pouranik — Good news: the two critical blockers are both confirmed fixed in the current code:

| Issue | Status |
| --- | --- |
| getDeveloperPlanCache/setDeveloperPlanCache missing from packages/common/src/index.js | ✅ Fixed — both are imported and exported |
| analytics.controller.js missing plan/planExpiresAt in .select() | ✅ Fixed — Line 33 now selects all 4 fields |

There are two minor remaining items from the unresolved Copilot threads (neither is a merge blocker):

  1. Unused mongoose import in tools/db-queries/backfill-developer-plans.js: mongoose is required but never used (the DB connects via connectDB from @urbackend/common). Cosmetic only.

  2. mergeLegacyOverrides intent vs logic mismatch in planLimits.js — the inline comment says "a legacy value of 20 must NOT override Pro's default (10)" but the condition v > baseVal would actually allow it (20 > 10). This means a developer with a legacy maxProjects: 20 could exceed their Pro tier limit of 10. Functionally this is permissive rather than restrictive — unlikely to cause harm but worth noting.
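To make the code match the comment's intent, the merge would need to clamp downward instead of comparing upward; a sketch (hypothetical function name, and ignoring the -1 "unlimited" sentinel, which would need a special case):

```javascript
// Sketch of the restrictive merge the inline comment describes: a legacy
// override may only tighten a limit, never raise it above the tier default,
// so a legacy maxProjects: 20 cannot beat Pro's 10.
function mergeLegacyOverridesRestrictive(baseLimits, legacy = {}) {
  const merged = { ...baseLimits };
  for (const [key, value] of Object.entries(legacy)) {
    if (typeof value === 'number' && typeof merged[key] === 'number') {
      merged[key] = Math.min(merged[key], value);
    }
  }
  return merged;
}
```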


✅ Safe to merge. CI is fully green, branch is clean, and the enforcement layer is production-quality. The two items above can be cleaned up in a follow-up PR. Would you also like me to open a GitHub issue to track the payment/lifecycle work (Stripe webhooks, expiry cron, cache invalidation) we discussed?

@yash-pouranik yash-pouranik temporarily deployed to feature/plan-enforcement - urBackend-frankfrut PR #128 April 22, 2026 21:39 — with Render Destroyed
@yash-pouranik yash-pouranik merged commit 63bee25 into main Apr 22, 2026
8 checks passed
