Why is the local cache not used until the fourth request? #162
-
Hi, I'm currently testing and evaluating Relay as an additional local caching layer in our application. During these tests I noticed a strange behavior that I couldn't find any documentation about.

The use case is very simple: the app looks up a key in the cache. If the key is available, it is returned; otherwise it is stored. My initial expectation was that the app wouldn't find the key on the first attempt, would store it, and would then receive it from the local cache on the second attempt. Then I noticed the open issue about write-through support (#12), so my next expectation was that the local cache would be used on the third attempt at the latest. But my observation is that only the fourth request is answered from the local cache. And even though I have enabled persistent connections, a second connection is established on the second request (that might be intended as a separate invalidation message channel).

To make this behavior reproducible for you, I spent the day creating a bare-minimum example project; the procedure is simply to run the lookup described above several times, a few seconds apart.
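The lookup-or-store pattern described above can be sketched roughly like this (a minimal sketch assuming the Relay extension is installed; the host name, key, TTL, and stored value are illustrative guesses based on the MONITOR log below, not the project's actual code):

```php
<?php
// Cache-aside sketch: return the key if present, otherwise store it.
// Assumes the Relay extension; host/key/TTL values are illustrative.
$relay = new \Relay\Relay;
$relay->connect(host: 'relay-demo-cache', timeout: 0.250, read_timeout: 0.250);

$value = $relay->get('test-key');   // served from Relay's in-process cache on a local hit

if ($value === false) {
    // Miss: store the current time with a 25s TTL, matching the SETEX in the log.
    $value = date('H:i:s');
    $relay->setex('test-key', 25, $value);
}

echo $value, PHP_EOL;
```

Running this repeatedly, the expectation would be a server round-trip only until the key is held in Relay's local cache.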
The result is three calls to the cache server; only the fourth call is answered by the local cache. Here is the output of `docker exec -it relay-demo-cache valkey-cli MONITOR`:

```
OK
1739544723.995374 [0 172.19.0.3:47562] "HELLO" "3" "AUTH" "(redacted)" "(redacted)" "SETNAME" "relay@default:13"
1739544723.995399 [0 172.19.0.3:47562] "CLIENT" "SETINFO" "LIB-NAME" "relay"
1739544723.995413 [0 172.19.0.3:47562] "CLIENT" "SETINFO" "LIB-VER" "0.10.0"
1739544723.995416 [0 172.19.0.3:47562] "CLIENT" "TRACKING" "ON"
1739544723.995586 [0 172.19.0.3:47562] "INFO"
1739544723.995841 [0 172.19.0.3:47562] "GET" "test-key"
1739544723.995993 [0 172.19.0.3:47562] "SETEX" "test-key" "25" "14:52:03"
1739544726.747832 [0 172.19.0.3:47566] "HELLO" "3" "AUTH" "(redacted)" "(redacted)" "SETNAME" "relay@default:14"
1739544726.747840 [0 172.19.0.3:47566] "CLIENT" "SETINFO" "LIB-NAME" "relay"
1739544726.747842 [0 172.19.0.3:47566] "CLIENT" "SETINFO" "LIB-VER" "0.10.0"
1739544726.747954 [0 172.19.0.3:47566] "GET" "test-key"
1739544729.269536 [0 172.19.0.3:47562] "GET" "test-key"
```

Can you please tell me whether this is the expected behavior, and why? Is there any error in my configuration? I would also like to ask whether it would be possible/beneficial to use a single connection if the target supports RESP3 (as mentioned here: https://redis.io/docs/latest/develop/reference/client-side-caching/#the-redis-implementation-of-client-side-caching). Thanks in advance.
-
We'll look into it. I'm guessing it's a concurrent worker issue.
-
Hi 👋, I can replicate this in the `docker-compose` environment as well, although it's not immediately clear why we're not caching until the 4th request. I am periodically seeing some odd behavior in the invalidation messages (e.g. being delivered late in some cases), although that doesn't explain precisely what's going on.

Relay uses both active and passive invalidations in order to invalidate keys more quickly than if we just waited for a RESP3 push message. For example, when a Relay object writes a key, we blanket-invalidate all known copies of the key immediately, even before Redis delivers the invalidation. This helps a great deal with data consistency in most workloads.

If I modify your `index.php` as follows:

```diff
diff --git a/app/index.php b/app/index.php
index 1e27fcd..b0b5050 100644
--- a/app/index.php
+++ b/app/index.php
@@ -15,6 +15,7 @@ $relay->setOption(Relay::OPT_BACKOFF_CAP, 25); // ms
 $relay->setOption(Relay::OPT_BACKOFF_ALGORITHM, Relay::BACKOFF_ALGORITHM_DECORRELATED_JITTER);
 $relay->connect(host: 'relay-demo-cache', timeout: 0.250, read_timeout: 0.250);
+$relay->setOption(Relay::OPT_CLIENT_INVALIDATIONS, false);
 $relay->dispatchEvents();
 $cacheEntry = $relay->get(CACHE_KEY);
```

I'm not 100% sure why client-side invalidations cause a one-request delay in this setup, but I will try to figure that out; timing is my speculative guess. As an aside, you don't need to invoke …
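To observe the late invalidation deliveries mentioned above, one option is to log invalidation events as they arrive. This is a sketch using the Relay extension's event API as I recall it (`onInvalidated`, `dispatchEvents`, and the event's `key` property); treat the exact signatures as assumptions and check the extension's documentation:

```php
<?php
// Sketch: log invalidation events to see when Relay drops local copies.
// Assumes the Relay extension; host and key values are illustrative,
// and the Event API details are from memory, not verified here.
$relay = new \Relay\Relay;
$relay->connect(host: 'relay-demo-cache', timeout: 0.250, read_timeout: 0.250);

$relay->onInvalidated(static function (\Relay\Event $event) {
    // Assumed: the event exposes the invalidated key.
    error_log('invalidated: ' . $event->key);
});

$value = $relay->get('test-key');
$relay->dispatchEvents(); // deliver any queued invalidation events to the callback
```

Comparing the timestamps of these log lines with the server-side MONITOR output should show whether invalidations are in fact arriving late.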