The chunk size itself can't be tuned: a chunk contains however much data the stream writer has in its mailbox at the moment it writes. So the higher the publishing throughput, the larger the chunks.
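You can observe this effect from the publishing side. Here is a minimal sketch using the RabbitMQ Stream Java client (the URI and stream name are placeholders): a sustained burst like this keeps the writer's mailbox full and yields large chunks, while trickling one message every few milliseconds would yield many small ones.

```java
import com.rabbitmq.stream.Environment;
import com.rabbitmq.stream.Producer;

public class PublishRateDemo {
    public static void main(String[] args) throws Exception {
        try (Environment env = Environment.builder()
                .uri("rabbitmq-stream://localhost:5552") // placeholder URI
                .build()) {
            Producer producer = env.producerBuilder()
                    .stream("demo-stream") // placeholder stream name
                    .build();
            byte[] body = new byte[1024];
            // Publish without pausing: the stream writer batches whatever
            // is sitting in its mailbox into a single chunk, so a burst
            // like this produces larger chunks than a slow trickle would.
            for (int i = 0; i < 100_000; i++) {
                producer.send(
                        producer.messageBuilder().addData(body).build(),
                        confirmationStatus -> { /* no-op confirm handler */ });
            }
        }
    }
}
```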

Smaller chunks lead to lower consumer throughput because the stream is read in smaller sections. You should try upgrading to 4.2.0, as https://www.rabbitmq.com/blog/2025/09/26/stream-delivery-optimization specifically improves this scenario by reading ahead in the stream. I was looking into this for a somewhat similar use case in #14877. In 4.2.1 you should be able to tune this read-ahead size to trade memory for higher consumer throughput.
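If you want to check whether the upgrade actually helps, a rough consumer-throughput measurement is enough. A sketch with the same Stream Java client (names are placeholders, and the exact read-ahead setting for 4.2.1 should be taken from the release notes, not from here):

```java
import com.rabbitmq.stream.Environment;
import com.rabbitmq.stream.OffsetSpecification;
import java.util.concurrent.atomic.AtomicLong;

public class ConsumeRateDemo {
    public static void main(String[] args) throws Exception {
        AtomicLong received = new AtomicLong();
        long start = System.nanoTime();
        try (Environment env = Environment.builder()
                .uri("rabbitmq-stream://localhost:5552") // placeholder URI
                .build()) {
            env.consumerBuilder()
                    .stream("demo-stream") // placeholder stream name
                    .offset(OffsetSpecification.first()) // replay from the start
                    .messageHandler((context, message) -> {
                        // Print a running msg/s figure; compare before and
                        // after the 4.2.x upgrade (and read-ahead tuning).
                        long n = received.incrementAndGet();
                        if (n % 100_000 == 0) {
                            double secs = (System.nanoTime() - start) / 1e9;
                            System.out.printf("%d msgs, %.0f msg/s%n", n, n / secs);
                        }
                    })
                    .build();
            Thread.sleep(60_000); // let the consumer run for a minute
        }
    }
}
```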

