feat: add API performance analytics (latency + error rate) #140

avanithumballi wants to merge 7 commits into geturbackend:main
Conversation
- Add `ApiAnalytics` model with TTL index (configurable via `ANALYTICS_TTL_DAYS`)
- Extend `api_usage` middleware to log `responseTime` and `statusCode` asynchronously
- Add `avgResponseTimeMs` and `errorRate` to `GET /api/projects/:projectId/analytics`
📝 Walkthrough

Adds the `ApiAnalytics` model, records per-request response times and status codes in the public API middleware, and extends the dashboard analytics endpoint to accept a `range` parameter.

Changes: API Performance & Error Analytics
Sequence Diagram

sequenceDiagram
actor Client
participant PublicAPI as Public API
participant Middleware as API Usage<br/>Middleware
participant MongoDB as MongoDB<br/>(ApiAnalytics)
participant DashboardAPI as Dashboard API<br/>Controller
Client->>PublicAPI: HTTP Request
PublicAPI->>Middleware: run logger middleware
Middleware->>Middleware: start hrtime()
Middleware->>PublicAPI: next()
PublicAPI->>PublicAPI: process request
PublicAPI->>Middleware: 'finish' event
Middleware->>Middleware: compute responseTimeMs
Middleware->>MongoDB: setImmediate ApiAnalytics.create(...)
MongoDB->>MongoDB: store record (async)
Middleware->>Client: response sent
Client->>DashboardAPI: GET /analytics?range=last24h
DashboardAPI->>DashboardAPI: parse range → startDate
DashboardAPI->>MongoDB: aggregate avgResponseTimeMs & errorRate
MongoDB->>DashboardAPI: return aggregations
DashboardAPI->>Client: respond with metrics + existing data
Estimated Code Review Effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 5 checks passed
Actionable comments posted: 3
🧹 Nitpick comments (2)
apps/dashboard-api/src/controllers/project.controller.js (1)
2028-2045: ⚡ Quick win: Combine the two `ApiAnalytics` aggregations into one round-trip

`latencyAgg` and `errorAgg` both hit the same collection with an identical `$match` filter and differ only in their `$group` projection. Running them as separate pipelines costs two MongoDB round-trips per analytics request. Use `$facet` (or a single `$group` with both accumulators) to compute both metrics in one pass.

♻️ Proposed refactor using a single pipeline

```diff
-      // Average response time
-      const latencyAgg = await ApiAnalytics.aggregate([
-        { $match: match },
-        { $group: { _id: null, avg: { $avg: '$responseTimeMs' } } },
-      ]);
-      const avgResponseTimeMs = latencyAgg[0]?.avg ?? null;
-
-      // Error rate (status >= 400)
-      const errorAgg = await ApiAnalytics.aggregate([
-        { $match: match },
-        {
-          $group: {
-            _id: null,
-            total: { $sum: 1 },
-            errors: { $sum: { $cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } },
-          },
-        },
-      ]);
-      const errorRate = errorAgg[0] ? (errorAgg[0].errors / errorAgg[0].total) * 100 : 0;
+      const [perfAgg] = await ApiAnalytics.aggregate([
+        { $match: match },
+        {
+          $group: {
+            _id: null,
+            avg: { $avg: '$responseTimeMs' },
+            total: { $sum: 1 },
+            errors: { $sum: { $cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } },
+          },
+        },
+      ]);
+      const avgResponseTimeMs = perfAgg?.avg ?? null;
+      const errorRate = perfAgg ? (perfAgg.errors / perfAgg.total) * 100 : 0;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 2028-2045: the two separate ApiAnalytics.aggregate calls (latencyAgg and errorAgg) both use the same match filter and should be combined into a single aggregation to avoid two MongoDB round-trips; replace them with one pipeline that either uses $group to compute avg: { $avg: '$responseTimeMs' } and the total/error counts in the same group, or uses $facet to run both projections in one aggregate, then set avgResponseTimeMs and errorRate from the single aggregation result and remove the redundant queries.

apps/public-api/src/middlewares/api_usage.js (1)
19-19: ⚡ Quick win: Remove `console.time` debug instrumentation from production middleware

Line 19 adds `console.time("logger middleware")`, which fires on every incoming request (it runs before the URL-prefix check on lines 25–29), pairing with the pre-existing `console.timeEnd` on line 77. Together they print a timing measurement of the synchronous middleware setup — a sub-millisecond operation — to stdout for every API call in production, adding log noise without any observability value.

♻️ Proposed fix

```diff
-  console.time("logger middleware");
-
   // Capture start time for response time measurement
   const startHr = process.hrtime();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public-api/src/middlewares/api_usage.js` at line 19, Remove the per-request console.time instrumentation: delete the console.time("logger middleware") call (which runs before the URL-prefix check) and either remove or guard the matching console.timeEnd("logger middleware") so timing is only emitted in non-production/debug mode (e.g., wrap both calls in a NODE_ENV !== 'production' or debug flag check) to stop noisy sub-millisecond logs from the logger middleware.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 2048-2060: The analytics controller currently returns a flat
payload and sends raw error messages; change it to return the standard response
shape by wrapping the existing payload under data and returning { success: true,
data: { storage, database, totalRequests, logs, chartData, avgResponseTimeMs,
errorRate, range }, message: "" } (keep the same storage/database keys and new
fields inside data). In the catch block, do not send err.message directly —
instead construct or forward an AppError (use new AppError(statusCode, "An error
occurred fetching analytics") or call next(new AppError(...))) and log the
original err server-side; ensure error responses follow { success: false, data:
{}, message: "<safe message>" } and map AppError.statusCode to the response
status. Target the analytics handler where res.json(...) and the catch block are
defined to apply these changes.
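The response convention described above can be sketched as follows. The `AppError` shape and the helper names are assumptions for illustration, not the project's actual code; the real class may carry extra fields.

```javascript
// Hypothetical AppError; the project's actual class may differ.
class AppError extends Error {
  constructor(statusCode, message) {
    super(message);
    this.statusCode = statusCode;
  }
}

// Standard success envelope: { success, data, message }.
function buildSuccess(data, message = "") {
  return { success: true, data, message };
}

// Standard failure envelope; a raw err.message never reaches the client
// unless it was deliberately wrapped in an AppError.
function buildFailure(err) {
  const safe =
    err instanceof AppError ? err.message : "An error occurred fetching analytics";
  return { success: false, data: {}, message: safe };
}
```

A controller would log the original error server-side and pass only the safe envelope (with `AppError.statusCode` mapped to the HTTP status) to the client.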
In `@packages/common/src/models/ApiAnalytics.js`:
- Around line 10-17: Remove the conflicting field-level index: in the
ApiAnalytics schema drop the "index: true" option on the timestamp field so it
doesn't create a regular index that blocks the TTL index; keep the schema-level
call apiAnalyticsSchema.index({ timestamp: 1 }, { expireAfterSeconds: ttlDays *
24 * 60 * 60 }) (which uses ttlDays derived from process.env.ANALYTICS_TTL_DAYS)
as the sole index for timestamp to ensure the TTL index can be created.
- Around line 16-17: The ttlDays parsing can yield NaN or 0 which breaks MongoDB
index creation; in ApiAnalytics.js replace the direct parseInt usage with
validated logic: read process.env.ANALYTICS_TTL_DAYS, parse to a number, and if
Number.isInteger(parsed) is false or parsed <= 0 then set ttlDays to a safe
default (e.g., 365); compute expireSeconds = ttlDays * 24 * 60 * 60 and pass
that to apiAnalyticsSchema.index(..., { expireAfterSeconds: expireSeconds });
ensure you reference ttlDays, ANALYTICS_TTL_DAYS, apiAnalyticsSchema.index, and
expireAfterSeconds when making the change.
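The TTL validation described can be sketched as a small helper. The 365-day fallback is an assumed default for illustration, not necessarily the project's chosen value.

```javascript
// Parse ANALYTICS_TTL_DAYS, falling back when the value is missing,
// non-integer, or non-positive (values that would break TTL index creation).
function resolveTtlDays(raw, fallback = 365) {
  const parsed = Number.parseInt(raw, 10);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : fallback;
}

const ttlDays = resolveTtlDays(process.env.ANALYTICS_TTL_DAYS);
const expireSeconds = ttlDays * 24 * 60 * 60;

// In ApiAnalytics.js this value would feed the sole index on `timestamp`
// (with no field-level `index: true` on the field), e.g.:
// apiAnalyticsSchema.index({ timestamp: 1 }, { expireAfterSeconds: expireSeconds });
console.log(`TTL: ${ttlDays} days (${expireSeconds} seconds)`);
```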
---
Nitpick comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 2028-2045: The two separate ApiAnalytics.aggregate calls
(latencyAgg and errorAgg) both use the same match filter and should be combined
into a single aggregation to avoid two MongoDB round-trips: replace the separate
calls to ApiAnalytics.aggregate that produce latencyAgg and errorAgg with one
pipeline that either uses $group to compute both avg: { $avg: '$responseTimeMs'
} and counts total/errors in the same group, or use $facet to run both
projections in one aggregate; then set avgResponseTimeMs and errorRate from the
single aggregation result (currently assigned to avgResponseTimeMs and errorRate
variables) and remove the redundant queries.
In `@apps/public-api/src/middlewares/api_usage.js`:
- Line 19: Remove the per-request console.time instrumentation: delete the
console.time("logger middleware") call (which runs before the URL-prefix check)
and either remove or guard the matching console.timeEnd("logger middleware") so
timing is only emitted in non-production/debug mode (e.g., wrap both calls in a
NODE_ENV !== 'production' or debug flag check) to stop noisy sub-millisecond
logs from the logger middleware.
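For reference, the `$facet` alternative mentioned in the aggregation comment would look roughly like this. The pipeline is only a sketch of the shape; `projectId` and `startDate` are placeholder inputs, not values from the PR.

```javascript
// Placeholder inputs standing in for the controller's real values.
const projectId = "p1";
const startDate = new Date(Date.now() - 24 * 60 * 60 * 1000);

// One round-trip computing both metrics; ApiAnalytics.aggregate(perfPipeline)
// would return a single document shaped { latency: [...], errors: [...] }.
const perfPipeline = [
  { $match: { projectId, timestamp: { $gte: startDate } } },
  {
    $facet: {
      latency: [{ $group: { _id: null, avg: { $avg: "$responseTimeMs" } } }],
      errors: [
        {
          $group: {
            _id: null,
            total: { $sum: 1 },
            errors: { $sum: { $cond: [{ $gte: ["$statusCode", 400] }, 1, 0] } },
          },
        },
      ],
    },
  },
];
```

The single-`$group` form suggested in the diff is simpler here; `$facet` mainly earns its keep when the sub-pipelines need different stages after the shared `$match`.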
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e8f427b8-b7bd-4b5e-a5b9-d1b030aaeced
📒 Files selected for processing (4)
- apps/dashboard-api/src/controllers/project.controller.js
- apps/public-api/src/middlewares/api_usage.js
- packages/common/src/models/ApiAnalytics.js
- packages/common/src/models/index.js
@yash-pouranik I have created the PR, please review it. I made the changes CodeRabbit mentioned.
One more CodeRabbit comment is still not resolved. @avanithumballi
yash-pouranik
left a comment
@avanithumballi
Are these changes compatible with the frontend, or do we need to change the UI as well?
@yash-pouranik The changes are fully compatible with the existing frontend. The analytics endpoint now returns additional fields alongside the original data; the frontend will simply ignore these new fields until the UI is updated to display them. No UI changes are required for this PR to be merged. If whoever handles the UI later wants to show these metrics, they can do so without any backend changes.
@avanithumballi Can you please share a screenshot of the UI?
@yash-pouranik Regarding the UI: this PR does not include any UI changes, per your earlier guidance to minimize frontend changes. The new fields (avgResponseTimeMs, errorRate, range) are additive, and the existing frontend will ignore them. No UI modifications are required for this PR to be merged, and the changes are compatible with the frontend.
Please create a follow-up issue for this @avanithumballi
@coderabbitai Ok to merge?
✅ All CI checks are passing

All 7 checks (tests, lint, CodeQL, static analysis) are green.

❌ 1 Major issue still unresolved

The review comment on the response shape (success/data/message, `AppError`) is still open: the analytics handler returns a flat payload and raw error messages. This violates the project-wide controller guidelines. The fix is straightforward:

```diff
-      res.json({
+      return res.json({
+        success: true,
+        data: {
           storage: { used: project.storageUsed, limit: project.storageLimit },
           database: { used: project.databaseUsed, limit: project.databaseLimit },
           totalRequests,
           logs,
           chartData,
           avgResponseTimeMs,
           errorRate,
           range,
+        },
+        message: 'Analytics fetched successfully.',
       });
     } catch (err) {
       console.error('Analytics error:', err);
-      res.status(500).json({ error: err.message });
+      return next(new AppError(500, 'Failed to fetch analytics.'));
     }
```

✅ Fixed since last review
⏳ Follow-up issue for UI changes

Summary: Ask @avanithumballi to create a follow-up issue for the UI changes.
…ta/message, AppError)
Actionable comments posted: 1
🧹 Nitpick comments (2)
apps/dashboard-api/src/controllers/project.controller.js (2)
2008-2011: Add a compound index `{ projectId, timestamp }` to `ApiAnalytics`.

Every aggregation pipeline issues `$match: { projectId, timestamp: { $gte: ... } }`. The schema in `packages/common/src/models/ApiAnalytics.js` only carries a single-field TTL index on `timestamp`. Without a compound index, MongoDB will scan all analytics records before filtering by `projectId`, which degrades linearly as the collection grows.

Add to `ApiAnalytics.js`: `apiAnalyticsSchema.index({ projectId: 1, timestamp: 1 });`

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 2008 - 2011, The ApiAnalytics collection is missing a compound index for queries that match { projectId, timestamp }, causing full collection scans; update the ApiAnalytics schema by adding a compound index via apiAnalyticsSchema.index({ projectId: 1, timestamp: 1 }) in the ApiAnalytics.js file (where apiAnalyticsSchema is defined) so queries using projectId + timestamp $gte are served by the index and avoid scanning all analytics records.
2013-2029: ⚡ Quick win: Merge the two `ApiAnalytics.aggregate()` calls into one pipeline.

Both pipelines share the same `$match` stage and can be collapsed into a single `$group`, halving the round-trips on every analytics request.

♻️ Proposed refactor

```diff
-    const latencyAgg = await ApiAnalytics.aggregate([
-      { $match: match },
-      { $group: { _id: null, avg: { $avg: '$responseTimeMs' } } },
-    ]);
-    const avgResponseTimeMs = latencyAgg[0]?.avg ?? null;
-
-    const errorAgg = await ApiAnalytics.aggregate([
-      { $match: match },
-      {
-        $group: {
-          _id: null,
-          total: { $sum: 1 },
-          errors: { $sum: { $cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } },
-        },
-      },
-    ]);
-    const errorRate = errorAgg[0] ? (errorAgg[0].errors / errorAgg[0].total) * 100 : 0;
+    const analyticsAgg = await ApiAnalytics.aggregate([
+      { $match: match },
+      {
+        $group: {
+          _id: null,
+          avg: { $avg: '$responseTimeMs' },
+          total: { $sum: 1 },
+          errors: { $sum: { $cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } },
+        },
+      },
+    ]);
+    const avgResponseTimeMs = analyticsAgg[0]?.avg ?? null;
+    const errorRate = analyticsAgg[0] ? (analyticsAgg[0].errors / analyticsAgg[0].total) * 100 : 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 2013 - 2029, The two separate ApiAnalytics.aggregate calls (producing latencyAgg and errorAgg) should be merged into one aggregate pipeline that starts with the same match object and a single $group stage that computes avg: { $avg: '$responseTimeMs' }, total: { $sum: 1 }, and errors: { $sum: { $cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } }; replace usage of latencyAgg and errorAgg by reading the single result (e.g. agg[0]) to set avgResponseTimeMs and errorRate, and remove the duplicate aggregate call to eliminate the extra round-trip in ApiAnalytics.aggregate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 1999-2006: The switch on the incoming range parameter currently
falls through to default startDate = new Date(0) for any unrecognized value;
instead validate range against the allowed set
['last1h','last24h','last7d','last30d'] and return a 400 Bad Request when the
value is unknown (do not compute or return all-time data or echo the raw client
string). Update the logic around the switch(range) and startDate variable in the
controller (the block using startDate and switch(range)) to perform whitelist
validation before adjusting startDate, and ensure the handler sends a clear 400
response for invalid range values rather than using new Date(0).
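The whitelist validation could look like the helper below. The helper name and the null-means-400 contract are illustrative, not the controller's actual code.

```javascript
// Allowed ranges mapped to their length in milliseconds.
const RANGE_MS = {
  last1h: 60 * 60 * 1000,
  last24h: 24 * 60 * 60 * 1000,
  last7d: 7 * 24 * 60 * 60 * 1000,
  last30d: 30 * 24 * 60 * 60 * 1000,
};

// Returns the start date for a known range, or null so the caller can
// respond 400 Bad Request instead of silently returning all-time data.
function resolveStartDate(range, now = Date.now()) {
  if (!Object.prototype.hasOwnProperty.call(RANGE_MS, range)) return null;
  return new Date(now - RANGE_MS[range]);
}
```

Using `hasOwnProperty` (rather than `range in RANGE_MS`) also rejects prototype keys like `"constructor"` that an attacker could pass as the query parameter.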
---
Nitpick comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 2008-2011: The ApiAnalytics collection is missing a compound index
for queries that match { projectId, timestamp }, causing full collection scans;
update the ApiAnalytics schema by adding a compound index via
apiAnalyticsSchema.index({ projectId: 1, timestamp: 1 }) in the ApiAnalytics.js
file (where apiAnalyticsSchema is defined) so queries using projectId +
timestamp $gte are served by the index and avoid scanning all analytics records.
- Around line 2013-2029: The two separate ApiAnalytics.aggregate calls
(producing latencyAgg and errorAgg) should be merged into one aggregate pipeline
that starts with the same match object and a single $group stage that computes
avg: { $avg: '$responseTimeMs' }, total: { $sum: 1 }, and errors: { $sum: {
$cond: [{ $gte: ['$statusCode', 400] }, 1, 0] } }; replace usage of latencyAgg
and errorAgg by reading the single result (e.g. agg[0]) to set avgResponseTimeMs
and errorRate, and remove the duplicate aggregate call to eliminate the extra
round-trip in ApiAnalytics.aggregate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 877c3e62-c3c7-443b-9795-ddea19265701
📒 Files selected for processing (3)
- apps/dashboard-api/src/controllers/project.controller.js
- apps/public-api/src/middlewares/api_usage.js
- packages/common/src/models/ApiAnalytics.js
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/common/src/models/ApiAnalytics.js
- apps/public-api/src/middlewares/api_usage.js
@yash-pouranik I have resolved this issue.
Now can you work on that UI issue? @avanithumballi
@avanithumballi ???
I'll work on it. Actually, I have an exam tomorrow morning, so I'll do it by tomorrow EOD.
Yeah, that's totally okay. But before working on that, create the issue and get assigned, and only then start.
Ya sure.
Closes #138
Summary
- Added `ApiAnalytics` model with TTL index (configurable via `ANALYTICS_TTL_DAYS`).
- Extended `api_usage.js` middleware to log response time (`process.hrtime`) and status code asynchronously.
- Extended the `GET /api/projects/:projectId/analytics` endpoint with `avgResponseTimeMs` and `errorRate`.

Testing
Note
Local MongoDB is a standalone instance (transactions not supported), causing unrelated 500 errors on project creation. This PR does not modify any transaction code and is safe to merge. The maintainer’s CI/replica set environment will confirm the changes work as expected.