During my investigation here I didn't find any convincing argument to use HyperClockCache.
According to this, Hyper Clock Cache should increase throughput for point queries.
In ScaleZ, the most obvious benefit would be for the FS Index store, since we can have lots of point queries there.
I've tried using HyperClockCache as a shared block cache for the whole ScaleZ instead of LRU, and I've also tried using it separately for the FS Index store only.
I've set the estimated_entry_charge parameter as recommended in the RocksDB code comments:
// A reasonable choice is the larger of block_size and metadata_block_size.
// When WriteBufferManager (and similar) charge memory usage to the block
// cache, this can lead to the same effect as estimate being too low, which
// is better than the opposite. Therefore, the general recommendation is to
// assume that other memory charged to block cache could be negligible, and
// ignore it in making the estimate.
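For reference, here's a minimal sketch of how the cache was created and attached, assuming a recent RocksDB with HyperClockCacheOptions::MakeSharedCache(); the helper names are illustrative and not the actual ScaleZ wiring:

```cpp
#include <algorithm>
#include <memory>

#include <rocksdb/cache.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Build a shared HyperClockCache, using the larger of block_size and
// metadata_block_size as estimated_entry_charge, per the comment quoted above.
std::shared_ptr<rocksdb::Cache> MakeHyperClockBlockCache(
    size_t capacity_bytes, size_t block_size, size_t metadata_block_size) {
  rocksdb::HyperClockCacheOptions opts(
      /*_capacity=*/capacity_bytes,
      /*_estimated_entry_charge=*/std::max(block_size, metadata_block_size));
  return opts.MakeSharedCache();
}

// Plug the cache into the block-based table factory used by the DB
// (instead of a cache created with NewLRUCache).
void AttachBlockCache(rocksdb::Options& db_options,
                      std::shared_ptr<rocksdb::Cache> cache) {
  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_cache = std::move(cache);
  db_options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
}
```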
In both cases, performance got worse (by 10-20% on average).
We can probably still find some specific use cases where it helps, for example for specific CFs, but we'd need to test that in a large environment.
I think we can create a separate issue to make HyperClockCache available as a configuration parameter.
Note: there's no CreateFromString for HyperClockCache options, so we'd probably need to implement our own format for that.
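If we go that route, the format could be as simple as a semicolon-separated key=value string. The sketch below is hypothetical: the function name, the keys, and the format are assumptions for this issue, not an existing RocksDB API:

```cpp
#include <memory>
#include <sstream>
#include <string>
#include <unordered_map>

#include <rocksdb/cache.h>

// Hypothetical parser for a spec like
// "capacity=8589934592;estimated_entry_charge=8192".
std::shared_ptr<rocksdb::Cache> HyperClockCacheFromString(
    const std::string& spec) {
  std::unordered_map<std::string, uint64_t> kv;
  std::istringstream in(spec);
  std::string item;
  while (std::getline(in, item, ';')) {
    auto pos = item.find('=');
    if (pos == std::string::npos) continue;
    kv[item.substr(0, pos)] = std::stoull(item.substr(pos + 1));
  }
  rocksdb::HyperClockCacheOptions opts(
      /*_capacity=*/kv.at("capacity"),
      /*_estimated_entry_charge=*/kv.at("estimated_entry_charge"));
  return opts.MakeSharedCache();
}
```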
That said, I only ran some benchmarks and didn't try it in a real environment.
I also encountered occasional weird benchmark failures, which could be a bug in the benchmark itself, but could also be a bug in HyperClockCache, since I didn't notice such failures in the same benchmark with LRUCache.
We can still consider adding a configuration option to use HyperClockCache (as a shared cache for the whole store, or as a cache for a specific store / specific CF).
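For the per-CF variant, attaching a dedicated cache to a single column family could look roughly like the sketch below; the fs_index CF and the helper name are hypothetical, and only that CF's table factory gets the HyperClockCache while the other CFs keep the shared LRU cache:

```cpp
#include <memory>

#include <rocksdb/cache.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Options for a hypothetical "fs_index" column family with its own
// HyperClockCache; other CFs can keep using the default shared cache.
rocksdb::ColumnFamilyOptions FsIndexCfOptions(
    std::shared_ptr<rocksdb::Cache> hyper_clock_cache) {
  rocksdb::ColumnFamilyOptions cf_opts;
  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.block_cache = std::move(hyper_clock_cache);
  cf_opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_opts));
  return cf_opts;
}
```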