
feat: add API performance analytics (latency + error rate)#140

Open
avanithumballi wants to merge 7 commits into geturbackend:main from avanithumballi:feat/api-analytics

Conversation

@avanithumballi
Contributor

@avanithumballi avanithumballi commented May 2, 2026

Closes #138

Summary

  • Added ApiAnalytics model with TTL index (configurable via ANALYTICS_TTL_DAYS).
  • Extended api_usage.js middleware to log response time (process.hrtime) and status code asynchronously.
  • Extended existing GET /api/projects/:projectId/analytics endpoint with avgResponseTimeMs and errorRate.
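As a rough illustration of the timing arithmetic the middleware change relies on, the `process.hrtime` measurement can be sketched as below. This is a hedged sketch, not the PR's actual code; `elapsedMs` is an invented helper name.

```javascript
// Illustrative sketch of process.hrtime()-based response-time measurement.
// elapsedMs is a hypothetical helper, not taken from the PR's middleware.
function elapsedMs(startHr) {
  // process.hrtime(startHr) returns [seconds, nanoseconds] elapsed since startHr
  const [sec, nanos] = process.hrtime(startHr);
  return sec * 1000 + nanos / 1e6;
}

const startHr = process.hrtime();
// ... request handling would run here ...
const responseTimeMs = elapsedMs(startHr);
```

In the real middleware, the measurement would end on the response's 'finish' event and the resulting value would be persisted asynchronously.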

Testing

  • Verified that analytics documents are created on API calls (using manual requests).
  • Verified that the analytics endpoint returns the new metrics for different time ranges.
  • Existing functionality (logs, rate limiting) remains intact.

Note

Local MongoDB is a standalone instance (transactions not supported), causing unrelated 500 errors on project creation. This PR does not modify any transaction code and is safe to merge. The maintainer’s CI/replica set environment will confirm the changes work as expected.

Related

Summary by CodeRabbit

  • New Features
    • Analytics endpoint accepts a time-range parameter (defaults to last 24 hours).
    • New performance metrics: average response time and error rate across the selected period.
    • API now records performance analytics for requests in the background to populate charts and summaries.
  • Other
    • Analytics responses use a standardized success/data/message wrapper and echo the selected range.
  • Chores
    • Analytics storage includes a configurable retention policy.

- Add ApiAnalytics model with TTL index (configurable via ANALYTICS_TTL_DAYS)
- Extend api_usage middleware to log responseTime and statusCode asynchronously
- Add avgResponseTimeMs and errorRate to GET /api/projects/:projectId/analytics
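The range handling described above can be sketched as follows, assuming the range values mentioned later in this review (last1h, last24h, last7d, last30d) and using an invented function name; the actual controller code may differ.

```javascript
// Hypothetical sketch of range-to-startDate parsing; 'last24h' is the
// documented default. Names here are illustrative, not from the PR.
const RANGE_MS = {
  last1h: 1 * 60 * 60 * 1000,
  last24h: 24 * 60 * 60 * 1000,
  last7d: 7 * 24 * 60 * 60 * 1000,
  last30d: 30 * 24 * 60 * 60 * 1000,
};

function rangeToStartDate(range = 'last24h', now = Date.now()) {
  const ms = RANGE_MS[range];
  if (ms === undefined) return null; // unknown range: caller can reject with 400
  return new Date(now - ms);
}
```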
@coderabbitai
Contributor

coderabbitai Bot commented May 2, 2026

Warning

Rate limit exceeded

@avanithumballi has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 32 minutes and 15 seconds before requesting another review.

To keep reviews running without waiting, you can enable usage-based add-on for your organization. This allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d5660315-7a94-4289-aa2a-f776fc57a289

📥 Commits

Reviewing files that changed from the base of the PR and between 75ef752 and 0c71020.

📒 Files selected for processing (1)
  • apps/dashboard-api/src/controllers/project.controller.js
📝 Walkthrough

Adds ApiAnalytics model, records per-request response times and status codes in public API middleware, and extends dashboard analytics endpoint to accept a range parameter and return aggregated avgResponseTimeMs and errorRate alongside existing data. Error handling and controller signature updated to use next.

Changes

API Performance & Error Analytics

  • Data Model (packages/common/src/models/ApiAnalytics.js): New Mongoose schema ApiAnalytics with projectId, endpoint, method, statusCode, responseTimeMs, timestamp; TTL index based on ANALYTICS_TTL_DAYS (fallback 365 days).
  • Model Export (packages/common/src/models/index.js): Exports ApiAnalytics via module.exports.ApiAnalytics = require('./ApiAnalytics').
  • Middleware Timing & Persistence (apps/public-api/src/middlewares/api_usage.js): Imports ApiAnalytics; starts a high-resolution timer (process.hrtime()); on res 'finish' computes responseTimeMs and asynchronously creates an ApiAnalytics record (setImmediate) with projectId and endpoint (`req.route?.path` …).
  • Dashboard Analytics Endpoint (apps/dashboard-api/src/controllers/project.controller.js): Imports ApiAnalytics; analytics handler signature is now (req, res, next); accepts a range query (default last24h), computes startDate, aggregates ApiAnalytics to compute avgResponseTimeMs and errorRate (statusCode ≥ 400), and responds with { success, data: { storage, database, totalRequests, logs, chartData, avgResponseTimeMs, errorRate, range }, message }. Error handling changed to next(new AppError(...)).

Sequence Diagram

sequenceDiagram
    actor Client
    participant PublicAPI as Public API
    participant Middleware as API Usage<br/>Middleware
    participant MongoDB as MongoDB<br/>(ApiAnalytics)
    participant DashboardAPI as Dashboard API<br/>Controller

    Client->>PublicAPI: HTTP Request
    PublicAPI->>Middleware: run logger middleware
    Middleware->>Middleware: start hrtime()
    Middleware->>PublicAPI: next()
    PublicAPI->>PublicAPI: process request
    PublicAPI->>Middleware: 'finish' event
    Middleware->>Middleware: compute responseTimeMs
    Middleware->>MongoDB: setImmediate ApiAnalytics.create(...)
    MongoDB->>MongoDB: store record (async)
    Middleware->>Client: response sent

    Client->>DashboardAPI: GET /analytics?range=last24h
    DashboardAPI->>DashboardAPI: parse range → startDate
    DashboardAPI->>MongoDB: aggregate avgResponseTimeMs & errorRate
    MongoDB->>DashboardAPI: return aggregations
    DashboardAPI->>Client: respond with metrics + existing data

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested labels

enhancement, backend, level-2

Suggested reviewers

  • yash-pouranik

Poem

🐰 I timed each hop with a stopwatch bright,
Logged every bounce, every status and flight,
From public API to dashboard's bright beam,
Metrics now sparkle — a rabbit's small dream! 🥕

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: add API performance analytics (latency + error rate)' directly and accurately describes the main change: adding API performance analytics with latency and error rate metrics.
  • Linked Issues Check: ✅ Passed. The PR implements all core requirements from issue #138: global middleware records response time and status code, the ApiAnalytics model stores data efficiently, the analytics endpoint returns avgResponseTimeMs and errorRate for time ranges, and logging is asynchronous.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly scoped to issue #138 requirements: ApiAnalytics model creation, middleware extension for metrics recording, and analytics endpoint enhancement for latency and error rate reporting.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Review rate limit: 0/1 reviews remaining, refill in 32 minutes and 15 seconds.

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
apps/dashboard-api/src/controllers/project.controller.js (1)

2028-2045: ⚡ Quick win

Combine the two ApiAnalytics aggregations into one round-trip

latencyAgg and errorAgg both hit the same collection with the identical $match filter and differ only in their $group projection. Running them as separate pipelines costs two MongoDB round-trips per analytics request. Use $facet (or a single $group with both accumulators) to compute both metrics in one pass.

♻️ Proposed refactor using a single pipeline
-    // Average response time
-    const latencyAgg = await ApiAnalytics.aggregate([
-      { $match: match },
-      { $group: { _id: null, avg: { $avg: '$responseTimeMs' } } },
-    ]);
-    const avgResponseTimeMs = latencyAgg[0]?.avg ?? null;
-
-    // Error rate (status >= 400)
-    const errorAgg = await ApiAnalytics.aggregate([
-      { $match: match },
-      {
-        $group: {
-          _id: null,
-          total: { $sum: 1 },
-          errors: { $sum: { $cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } },
-        },
-      },
-    ]);
-    const errorRate = errorAgg[0] ? (errorAgg[0].errors / errorAgg[0].total) * 100 : 0;
+    const [perfAgg] = await ApiAnalytics.aggregate([
+      { $match: match },
+      {
+        $group: {
+          _id: null,
+          avg: { $avg: '$responseTimeMs' },
+          total: { $sum: 1 },
+          errors: { $sum: { $cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } },
+        },
+      },
+    ]);
+    const avgResponseTimeMs = perfAgg?.avg ?? null;
+    const errorRate = perfAgg ? (perfAgg.errors / perfAgg.total) * 100 : 0;
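For intuition, the arithmetic performed by the single $group stage above can be mirrored in plain JavaScript. This is only an illustration; the sample records and the helper name are made up, though the field names follow the schema in this PR.

```javascript
// Plain-JS mirror of the combined $group: average latency plus error rate
// (statusCode >= 400) over a set of ApiAnalytics-shaped records.
function summarize(records) {
  if (records.length === 0) return { avgResponseTimeMs: null, errorRate: 0 };
  const total = records.length;
  const errors = records.filter((r) => r.statusCode >= 400).length;
  const sum = records.reduce((acc, r) => acc + r.responseTimeMs, 0);
  return { avgResponseTimeMs: sum / total, errorRate: (errors / total) * 100 };
}

summarize([
  { statusCode: 200, responseTimeMs: 10 },
  { statusCode: 500, responseTimeMs: 30 },
]);
// → { avgResponseTimeMs: 20, errorRate: 50 }
```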
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 2028 -
2045, The two separate ApiAnalytics.aggregate calls (latencyAgg and errorAgg)
both use the same match filter and should be combined into a single aggregation
to avoid two MongoDB round-trips: replace the separate calls to
ApiAnalytics.aggregate that produce latencyAgg and errorAgg with one pipeline
that either uses $group to compute both avg: { $avg: '$responseTimeMs' } and
counts total/errors in the same group, or use $facet to run both projections in
one aggregate; then set avgResponseTimeMs and errorRate from the single
aggregation result (currently assigned to avgResponseTimeMs and errorRate
variables) and remove the redundant queries.
apps/public-api/src/middlewares/api_usage.js (1)

19-19: ⚡ Quick win

Remove console.time debug instrumentation from production middleware

Line 19 adds console.time("logger middleware"), which fires on every incoming request (it runs before the URL-prefix check on lines 25–29), pairing with the pre-existing console.timeEnd on line 77. Together they print a timing measurement of the synchronous middleware setup — a sub-millisecond operation — to stdout for every API call in production, adding log noise without any observability value.

♻️ Proposed fix
-    console.time("logger middleware");
-    
     // Capture start time for response time measurement
     const startHr = process.hrtime();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public-api/src/middlewares/api_usage.js` at line 19, Remove the
per-request console.time instrumentation: delete the console.time("logger
middleware") call (which runs before the URL-prefix check) and either remove or
guard the matching console.timeEnd("logger middleware") so timing is only
emitted in non-production/debug mode (e.g., wrap both calls in a NODE_ENV !==
'production' or debug flag check) to stop noisy sub-millisecond logs from the
logger middleware.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 2048-2060: The analytics controller currently returns a flat
payload and sends raw error messages; change it to return the standard response
shape by wrapping the existing payload under data and returning { success: true,
data: { storage, database, totalRequests, logs, chartData, avgResponseTimeMs,
errorRate, range }, message: "" } (keep the same storage/database keys and new
fields inside data). In the catch block, do not send err.message directly —
instead construct or forward an AppError (use new AppError(statusCode, "An error
occurred fetching analytics") or call next(new AppError(...))) and log the
original err server-side; ensure error responses follow { success: false, data:
{}, message: "<safe message>" } and map AppError.statusCode to the response
status. Target the analytics handler where res.json(...) and the catch block are
defined to apply these changes.

In `@packages/common/src/models/ApiAnalytics.js`:
- Around line 10-17: Remove the conflicting field-level index: in the
ApiAnalytics schema drop the "index: true" option on the timestamp field so it
doesn't create a regular index that blocks the TTL index; keep the schema-level
call apiAnalyticsSchema.index({ timestamp: 1 }, { expireAfterSeconds: ttlDays *
24 * 60 * 60 }) (which uses ttlDays derived from process.env.ANALYTICS_TTL_DAYS)
as the sole index for timestamp to ensure the TTL index can be created.
- Around line 16-17: The ttlDays parsing can yield NaN or 0 which breaks MongoDB
index creation; in ApiAnalytics.js replace the direct parseInt usage with
validated logic: read process.env.ANALYTICS_TTL_DAYS, parse to a number, and if
Number.isInteger(parsed) is false or parsed <= 0 then set ttlDays to a safe
default (e.g., 365); compute expireSeconds = ttlDays * 24 * 60 * 60 and pass
that to apiAnalyticsSchema.index(..., { expireAfterSeconds: expireSeconds });
ensure you reference ttlDays, ANALYTICS_TTL_DAYS, apiAnalyticsSchema.index, and
expireAfterSeconds when making the change.
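A sketch of the validated parsing this comment describes; the helper name is illustrative, and the 365-day fallback matches the documented default.

```javascript
// Hypothetical helper: parse ANALYTICS_TTL_DAYS with a NaN/zero guard,
// fall back to 365 days, then convert to seconds for expireAfterSeconds.
function ttlSeconds(envValue, fallbackDays = 365) {
  const parsed = Number.parseInt(envValue, 10);
  const ttlDays = Number.isInteger(parsed) && parsed > 0 ? parsed : fallbackDays;
  return ttlDays * 24 * 60 * 60;
}
// Usage (sketch): apiAnalyticsSchema.index({ timestamp: 1 },
//   { expireAfterSeconds: ttlSeconds(process.env.ANALYTICS_TTL_DAYS) });
```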

---

Nitpick comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 2028-2045: The two separate ApiAnalytics.aggregate calls
(latencyAgg and errorAgg) both use the same match filter and should be combined
into a single aggregation to avoid two MongoDB round-trips: replace the separate
calls to ApiAnalytics.aggregate that produce latencyAgg and errorAgg with one
pipeline that either uses $group to compute both avg: { $avg: '$responseTimeMs'
} and counts total/errors in the same group, or use $facet to run both
projections in one aggregate; then set avgResponseTimeMs and errorRate from the
single aggregation result (currently assigned to avgResponseTimeMs and errorRate
variables) and remove the redundant queries.

In `@apps/public-api/src/middlewares/api_usage.js`:
- Line 19: Remove the per-request console.time instrumentation: delete the
console.time("logger middleware") call (which runs before the URL-prefix check)
and either remove or guard the matching console.timeEnd("logger middleware") so
timing is only emitted in non-production/debug mode (e.g., wrap both calls in a
NODE_ENV !== 'production' or debug flag check) to stop noisy sub-millisecond
logs from the logger middleware.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e8f427b8-b7bd-4b5e-a5b9-d1b030aaeced

📥 Commits

Reviewing files that changed from the base of the PR and between 78878be and f01bef0.

📒 Files selected for processing (4)
  • apps/dashboard-api/src/controllers/project.controller.js
  • apps/public-api/src/middlewares/api_usage.js
  • packages/common/src/models/ApiAnalytics.js
  • packages/common/src/models/index.js

@avanithumballi
Contributor Author

avanithumballi commented May 2, 2026

@yash-pouranik I have created the PR, please review it. I made the changes that CodeRabbit mentioned.

@yash-pouranik
Collaborator

One more CodeRabbit comment is still not resolved. @avanithumballi

Collaborator

@yash-pouranik yash-pouranik left a comment


@avanithumballi
Are these changes compatible with the frontend, or do we need to change the UI as well?

@avanithumballi
Contributor Author

@yash-pouranik The changes are fully compatible with the existing frontend. The analytics endpoint now returns additional fields alongside the original data. The frontend will simply ignore these new fields until the UI is updated to display them. No UI changes are required for this PR to be merged. If the UI handling person later wants to show these metrics, they can do so without any backend changes.

@yash-pouranik
Collaborator

yash-pouranik commented May 2, 2026

@avanithumballi can u pelase share SS of UI
I think there's some changes needed in UI?

@avanithumballi
Contributor Author

@yash-pouranik Regarding the UI: this PR does not include any UI changes, per your earlier guidance to minimize frontend changes. The new fields (avgResponseTimeMs, errorRate, range) are additive, and the existing frontend will ignore them. No UI modifications are required for this PR to be merged, and the changes are compatible with the frontend.

@yash-pouranik
Collaborator

Please create a follow-up issue for this, @avanithumballi, listing all the changes needed and the files affected.

@yash-pouranik
Collaborator

@coderabbitai Ok to merge?

@coderabbitai
Contributor

coderabbitai Bot commented May 2, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get open review comments on PR `#140`
gh pr view 140 --repo geturbackend/urBackend --json reviews,reviewDecision,mergeable,state

echo "---REVIEW COMMENTS---"
gh api repos/geturbackend/urBackend/pulls/140/comments --jq '.[] | {id: .id, path: .path, body: .body, line: .line, resolved: .resolved}'

echo "---PR CHECKS---"
gh pr checks 140 --repo geturbackend/urBackend

echo "---ISSUES ON PR---"
gh api repos/geturbackend/urBackend/issues/140/comments --jq '.[] | {user: .user.login, body: .body[:200]}'

Length of output: 21753


@yash-pouranik Not quite yet. Here's the current status:


✅ All CI checks are passing

All 7 checks (tests, lint, CodeQL, static analysis) are green.


❌ 1 Major issue still unresolved

The review comment on apps/dashboard-api/src/controllers/project.controller.js about the response contract violation has not been addressed. The analytics endpoint still:

  • Returns a flat payload instead of { success: true, data: { ... }, message: "" }
  • Exposes raw err.message in the catch block instead of using AppError

This violates the project-wide controller guidelines. The fix is straightforward:

-    res.json({
+    return res.json({
+      success: true,
+      data: {
         storage: { used: project.storageUsed, limit: project.storageLimit },
         database: { used: project.databaseUsed, limit: project.databaseLimit },
         totalRequests,
         logs,
         chartData,
         avgResponseTimeMs,
         errorRate,
         range,
+      },
+      message: 'Analytics fetched successfully.',
     });
   } catch (err) {
     console.error('Analytics error:', err);
-    res.status(500).json({ error: err.message });
+    return next(new AppError(500, 'Failed to fetch analytics.'));
   }

✅ Fixed since last review

  • index: true conflict on TTL index in ApiAnalytics.js — fixed in commit 2b969a5
  • parseInt NaN/0 guard for ANALYTICS_TTL_DAYS — fixed in commit 2b969a5
  • console.time debug noise — @avanithumballi confirmed fixed

⏳ Follow-up issue for UI changes

@avanithumballi has not yet created the follow-up issue for UI changes that you requested. That should be done before or shortly after merge.

Summary: Ask @avanithumballi to apply the response shape fix above, then it's good to go.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
apps/dashboard-api/src/controllers/project.controller.js (2)

2008-2011: Add a compound index { projectId, timestamp } to ApiAnalytics.

Every aggregation pipeline issues $match: { projectId, timestamp: { $gte: ... } }. The schema in packages/common/src/models/ApiAnalytics.js only carries a single-field TTL index on timestamp. Without a compound index, MongoDB will scan all analytics records before filtering by projectId, which degrades linearly as the collection grows.

Add to ApiAnalytics.js:

apiAnalyticsSchema.index({ projectId: 1, timestamp: 1 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 2008 -
2011, The ApiAnalytics collection is missing a compound index for queries that
match { projectId, timestamp }, causing full collection scans; update the
ApiAnalytics schema by adding a compound index via apiAnalyticsSchema.index({
projectId: 1, timestamp: 1 }) in the ApiAnalytics.js file (where
apiAnalyticsSchema is defined) so queries using projectId + timestamp $gte are
served by the index and avoid scanning all analytics records.

2013-2029: ⚡ Quick win

Merge the two ApiAnalytics.aggregate() calls into one pipeline.

Both pipelines share the same $match stage and can be collapsed into a single $group, halving the round-trips on every analytics request.

♻️ Proposed refactor
-    const latencyAgg = await ApiAnalytics.aggregate([
-      { $match: match },
-      { $group: { _id: null, avg: { $avg: '$responseTimeMs' } } },
-    ]);
-    const avgResponseTimeMs = latencyAgg[0]?.avg ?? null;
-
-    const errorAgg = await ApiAnalytics.aggregate([
-      { $match: match },
-      {
-        $group: {
-          _id: null,
-          total: { $sum: 1 },
-          errors: { $sum: { $cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } },
-        },
-      },
-    ]);
-    const errorRate = errorAgg[0] ? (errorAgg[0].errors / errorAgg[0].total) * 100 : 0;
+    const analyticsAgg = await ApiAnalytics.aggregate([
+      { $match: match },
+      {
+        $group: {
+          _id: null,
+          avg: { $avg: '$responseTimeMs' },
+          total: { $sum: 1 },
+          errors: { $sum: { $cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } },
+        },
+      },
+    ]);
+    const avgResponseTimeMs = analyticsAgg[0]?.avg ?? null;
+    const errorRate = analyticsAgg[0] ? (analyticsAgg[0].errors / analyticsAgg[0].total) * 100 : 0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 2013 -
2029, The two separate ApiAnalytics.aggregate calls (producing latencyAgg and
errorAgg) should be merged into one aggregate pipeline that starts with the same
match object and a single $group stage that computes avg: { $avg:
'$responseTimeMs' }, total: { $sum: 1 }, and errors: { $sum: { $cond: [{ $gte:
['$statusCode', 400] }, 1, 0] } }; replace usage of latencyAgg and errorAgg by
reading the single result (e.g. agg[0]) to set avgResponseTimeMs and errorRate,
and remove the duplicate aggregate call to eliminate the extra round-trip in
ApiAnalytics.aggregate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 1999-2006: The switch on the incoming range parameter currently
falls through to default startDate = new Date(0) for any unrecognized value;
instead validate range against the allowed set
['last1h','last24h','last7d','last30d'] and return a 400 Bad Request when the
value is unknown (do not compute or return all-time data or echo the raw client
string). Update the logic around the switch(range) and startDate variable in the
controller (the block using startDate and switch(range)) to perform whitelist
validation before adjusting startDate, and ensure the handler sends a clear 400
response for invalid range values rather than using new Date(0).

---

Nitpick comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 2008-2011: The ApiAnalytics collection is missing a compound index
for queries that match { projectId, timestamp }, causing full collection scans;
update the ApiAnalytics schema by adding a compound index via
apiAnalyticsSchema.index({ projectId: 1, timestamp: 1 }) in the ApiAnalytics.js
file (where apiAnalyticsSchema is defined) so queries using projectId +
timestamp $gte are served by the index and avoid scanning all analytics records.
- Around line 2013-2029: The two separate ApiAnalytics.aggregate calls
(producing latencyAgg and errorAgg) should be merged into one aggregate pipeline
that starts with the same match object and a single $group stage that computes
avg: { $avg: '$responseTimeMs' }, total: { $sum: 1 }, and errors: { $sum: {
$cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } }; replace usage of latencyAgg
and errorAgg by reading the single result (e.g. agg[0]) to set avgResponseTimeMs
and errorRate, and remove the duplicate aggregate call to eliminate the extra
round-trip in ApiAnalytics.aggregate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 877c3e62-c3c7-443b-9795-ddea19265701

📥 Commits

Reviewing files that changed from the base of the PR and between f01bef0 and 75ef752.

📒 Files selected for processing (3)
  • apps/dashboard-api/src/controllers/project.controller.js
  • apps/public-api/src/middlewares/api_usage.js
  • packages/common/src/models/ApiAnalytics.js
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/common/src/models/ApiAnalytics.js
  • apps/public-api/src/middlewares/api_usage.js

@avanithumballi
Contributor Author


@yash-pouranik I have solved this issue.

@yash-pouranik
Collaborator

Now, can you work on that UI issue? @avanithumballi

@yash-pouranik
Collaborator

@avanithumballi ???

@avanithumballi
Contributor Author

avanithumballi commented May 3, 2026 via email

@yash-pouranik
Collaborator

Yeah, that's totally okay. But before working on that, create the issue, get assigned, and only then start.

@avanithumballi
Contributor Author

avanithumballi commented May 3, 2026 via email


Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

API Performance & Error Analytics

2 participants