fix(redis): use POST body for setCachedJson to avoid ECONNRESET with large payloads #2524
bighaoZQH wants to merge 3 commits into koala73:main
Conversation
@bighaoZQH is attempting to deploy a commit to the Elie Team on Vercel. A member of the Team first needs to authorize it.
Greptile Summary

This PR fixes a real bug. Key findings:

Confidence Score: 4/5

Safe to merge after addressing the pipeline timeout: the core fix is correct and well-motivated, but the remaining 1,500 ms timeout may still cause failures for large payloads on slower networks.

- P1: REDIS_OP_TIMEOUT_MS (1,500 ms) is inconsistent with every other pipeline call in the file (all use 5,000 ms) and could partially re-introduce timeout failures for the large payloads this PR targets. A one-line fix addresses it.
- P2: the missing resp.ok guard is pre-existing behaviour and non-blocking.
- server/_shared/redis.ts: the timeout constant on line 94 should be REDIS_PIPELINE_TIMEOUT_MS.

Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant setCachedJson
    participant RedisRestProxy as Redis REST Proxy (/pipeline)
    Note over Caller,RedisRestProxy: Before (ECONNRESET on large payloads)
    Caller->>setCachedJson: key, value (~126 KB), ttlSeconds
    setCachedJson->>RedisRestProxy: POST /set/{encodeURIComponent(key)}/{encodeURIComponent(value)}/EX/{ttl}
    RedisRestProxy-->>setCachedJson: ECONNRESET / EPIPE (URL > 16 KB)
    Note over Caller,RedisRestProxy: After (body-based pipeline)
    Caller->>setCachedJson: key, value (~126 KB), ttlSeconds
    setCachedJson->>RedisRestProxy: POST /pipeline, body: [[SET, prefixedKey, value, EX, ttl]]
    RedisRestProxy-->>setCachedJson: [{result: OK}]
    setCachedJson-->>Caller: void
```
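The "after" path in the diagram can be sketched in TypeScript as follows. This is a simplified illustration, not the repo's exact code: the parameter names, the `buildSetPipeline` helper, and the 5,000 ms constant are assumptions made for the sketch.

```typescript
// Assumed pipeline timeout, matching the 5,000 ms budget the review describes.
const REDIS_PIPELINE_TIMEOUT_MS = 5_000;

// Pure helper (hypothetical name): wrap a single SET in the pipeline's
// array-of-arrays shape, so the value travels in the POST body rather than
// the URL path.
function buildSetPipeline(
  key: string,
  value: string,
  ttlSeconds: number,
): string[][] {
  return [["SET", key, value, "EX", String(ttlSeconds)]];
}

// Sketch of the body-based write; `url` and `token` stand in for the proxy
// endpoint and auth token the real helper closes over.
async function setCachedJson(
  url: string,
  token: string,
  key: string,
  value: unknown,
  ttlSeconds: number,
): Promise<void> {
  const pipeline = buildSetPipeline(key, JSON.stringify(value), ttlSeconds);
  const resp = await fetch(`${url}/pipeline`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(pipeline),
    signal: AbortSignal.timeout(REDIS_PIPELINE_TIMEOUT_MS),
  });
  if (!resp.ok) {
    console.warn(`[redis] setCachedJson HTTP ${resp.status}`);
  }
}
```

Keeping the command as a one-element pipeline preserves single-command semantics while reusing the body-based transport the other helpers already use.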
Reviews (1): Last reviewed commit: "fix(redis): use pipeline body for setCac..."
server/_shared/redis.ts (outdated)

```ts
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(pipeline),
  signal: AbortSignal.timeout(REDIS_OP_TIMEOUT_MS),
```
Wrong timeout for pipeline endpoint
The fix correctly moves large payloads to the request body, but the timeout is still REDIS_OP_TIMEOUT_MS (1,500 ms) — the same budget as the old single-command URL call. Every other pipeline call in this file (getCachedJsonBatch, geoSearchByBox, getHashFieldsBatch, runRedisPipeline) uses REDIS_PIPELINE_TIMEOUT_MS (5,000 ms), which exists precisely because pipelines can be slower, and because large payload transfers take more time over the wire.
For the news:digest:v1 key (~126 KB), transmitting the body to a self-hosted Redis REST proxy within 1,500 ms may still fail on slower or containerised networks, partially re-introducing the timeout failures this PR is meant to fix.
```diff
- signal: AbortSignal.timeout(REDIS_OP_TIMEOUT_MS),
+ signal: AbortSignal.timeout(REDIS_PIPELINE_TIMEOUT_MS),
```
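A back-of-envelope check of why 1,500 ms is tight for the payloads this PR targets. The bandwidth figure below is an illustrative assumption, not a measurement:

```typescript
// How long does a ~126 KB body take to transmit over a slow link?
const payloadBytes = 126 * 1024;       // ~126 KB value, per the PR description
const linkBitsPerSecond = 1_000_000;   // assume a slow 1 Mbps effective link
const transferMs = (payloadBytes * 8 * 1000) / linkBitsPerSecond;

// Transmission alone consumes most of a 1,500 ms budget, before connection
// setup, proxy processing, and the Redis write itself are counted.
console.log(Math.round(transferMs)); // ≈ 1032 ms
```

Under these assumptions the old budget leaves almost no headroom, while 5,000 ms comfortably covers the transfer.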
server/_shared/redis.ts (outdated)
```diff
  await fetch(`${url}/pipeline`, {
    method: 'POST',
-   headers: { Authorization: `Bearer ${token}` },
+   headers: {
+     Authorization: `Bearer ${token}`,
+     'Content-Type': 'application/json',
+   },
    body: JSON.stringify(pipeline),
    signal: AbortSignal.timeout(REDIS_OP_TIMEOUT_MS),
  });
```
HTTP error response not checked
Every other pipeline function in this file (getCachedJsonBatch, geoSearchByBox, getHashFieldsBatch, runRedisPipeline) checks resp.ok and logs or returns early on non-2xx responses. The new implementation silently swallows a non-2xx status (e.g., a 429 rate-limit or 500 from the proxy), so a persistent write failure won't surface in logs.
Consider adding the same guard the other pipeline helpers use:
```ts
const resp = await fetch(`${url}/pipeline`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(pipeline),
  signal: AbortSignal.timeout(REDIS_PIPELINE_TIMEOUT_MS),
});
if (!resp.ok) {
  console.warn(`[redis] setCachedJson HTTP ${resp.status}`);
}
```

Replace URL-path encoding with a POST body to avoid Node.js's 16 KB URL length limit causing ECONNRESET on large payloads. Keeps single-command semantics and REDIS_OP_TIMEOUT_MS; a pipeline is not needed here.
@bighaoZQH: good catch and a well-documented fix. The root cause analysis is correct, and the POST body approach is consistent with the other pipeline helpers. One issue to fix: AbortSignal.timeout uses REDIS_OP_TIMEOUT_MS (1,500 ms), but all other pipeline calls use REDIS_PIPELINE_TIMEOUT_MS (5,000 ms). For the ~126 KB payloads motivating this PR, 1.5 s may cause intermittent timeouts. Please change to REDIS_PIPELINE_TIMEOUT_MS. Once updated, this is ready to merge.
Summary
Error writing data to Redis.
Type of change

Affected areas

- /api/*

Checklist

- api/rss-proxy.js allowlist updated (if adding feeds)
- Type check passes (npm run typecheck)

Screenshots
Problem
`setCachedJson` currently encodes the value into the URL path. When the payload is large (e.g. `news:digest:v1` is ~126 KB), `encodeURIComponent` inflates the data further, exceeding Node.js's default HTTP URL length limit (~16 KB). This causes ECONNRESET/EPIPE errors on every write, so the key is never persisted to Redis. The data falls back to the in-process `fallbackDigestCache` Map instead, which means the cache is lost on every container restart.
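The inflation is easy to see with a standalone sketch. The payload below is a made-up stand-in for the real cached digest, not the repo's actual data:

```typescript
// Demonstrate how encodeURIComponent inflates a JSON payload placed in a URL
// path. JSON's structural characters ('"', '{', '}', ':', ',') and URL
// characters ('/', '?', '&', '#') each become a 3-byte %XX escape.
const digest = JSON.stringify({
  items: Array.from({ length: 500 }, (_, i) => ({
    title: `story #${i}`,
    url: `https://example.com/news?id=${i}&lang=en`,
  })),
});

const encoded = encodeURIComponent(digest);
const inflation = encoded.length / digest.length;

// Even this modest synthetic digest overflows a 16 KB URL budget by a wide
// margin once escaped.
console.log(digest.length, encoded.length, inflation.toFixed(2));
```

For a real ~126 KB value the encoded form is larger still, so every write through the URL path is doomed regardless of timing.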
As shown in the screenshots, I curled a request to the news module, and the logs showed errors related to the Redis link and reported that the relevant key was missing.
Root Cause
Node.js's `http.createServer` rejects requests with URLs exceeding ~16 KB, and the redis-rest proxy (`redis-rest-proxy.mjs`) uses Node.js's built-in HTTP server. Large payloads in the URL path are silently dropped.
Fix
Switch `setCachedJson` to use a POST request body instead of URL parameters, consistent with the pattern used by the other Redis helpers in this file.

As shown in the screenshots, I modified the `setCachedJson` method in the `redis.ts` file, then rebuilt the project and repeated the previous steps. The problem is now resolved, and there are no more Redis-related errors in the logs.

Impact
- Fixes `news:digest:v1` never being persisted to Redis in self-hosted deployments
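For reference, the in-process fallback that was silently absorbing these failed writes can be sketched as below. This is a simplified illustration; the names and shape are assumptions, not the repo's actual `fallbackDigestCache`:

```typescript
// Minimal in-process fallback cache with TTL. It illustrates why losing the
// Redis write hurts: this Map lives inside one process, so the cache is
// emptied on every container restart and is never shared between instances.
type Entry = { value: unknown; expiresAt: number };

const fallbackDigestCache = new Map<string, Entry>();

function fallbackSet(key: string, value: unknown, ttlSeconds: number): void {
  fallbackDigestCache.set(key, {
    value,
    expiresAt: Date.now() + ttlSeconds * 1000,
  });
}

function fallbackGet(key: string): unknown | undefined {
  const entry = fallbackDigestCache.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expiresAt) {
    fallbackDigestCache.delete(key); // lazily evict expired entries
    return undefined;
  }
  return entry.value;
}
```

With the fix, Redis becomes the durable store again and this Map reverts to being a short-lived safety net rather than the only copy of the digest.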