How to increase chunk size for stream messages ingressed via MQTT plugin #14908
RabbitMQ version used: 4.1.2
Erlang version used: 27.3.x
Operating system (distribution) used: Linux
How is RabbitMQ deployed? Bitnami Helm chart
rabbitmq-diagnostics status output: not applicable
Logs from node 1 (with sensitive values edited out): not applicable
Logs from node 2 (if applicable, with sensitive values edited out): No response
Logs from node 3 (if applicable, with sensitive values edited out): No response
rabbitmq.conf: for config, see the Kubernetes section
Steps to deploy RabbitMQ cluster: Helm deploy via pipeline
Steps to reproduce the behavior in question: Messages are sent to RabbitMQ queues via MQTT publishers and, using their routing keys, are eventually forwarded to streams. The workflow is basically as follows (see also the sketch below):
advanced.config: No response
Application code: No response
Kubernetes deployment file: Kubernetes values yaml

What problem are you trying to solve? Is there a way to increase the chunk size of messages stored in a stream when they are ingressed via the MQTT plugin? We suspect this is the reason for the slow performance we are seeing.

More detail: the cause appears to be that messages ingressed via MQTT are stored in individual chunks. When consuming from a stream that holds these messages, the PerfTest tool always reports a chunk size of 1. Throughput of our system when consuming from a stream that was filled directly by a stream producer (instead of via the MQTT plugin) is roughly at the level reported here:
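For reference, here is a minimal sketch of the ingress path described above; it is not from the original post. It assumes paho-mqtt 1.x, a placeholder broker host, default credentials, and a pre-declared stream bound to the amq.topic exchange (RabbitMQ's MQTT plugin publishes incoming messages to amq.topic, mapping "/" in the MQTT topic to "." in the routing key, so a stream bound with a routing key such as "telemetry.#" would receive them):

```python
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="chunk-size-repro")   # paho-mqtt 1.x constructor
client.username_pw_set("guest", "guest")             # assumption: default credentials
client.connect("rabbitmq.example.com", 1883)         # assumption: broker host/port
client.loop_start()                                  # network loop in a background thread

info = None
for i in range(10_000):
    # QoS 1 so the broker takes responsibility for each message; with a single,
    # slow publisher these reach the stream writer one at a time, which is
    # where the 1-message chunks come from.
    info = client.publish("telemetry/device-1", payload=f"msg-{i}".encode(), qos=1)

if info is not None:
    info.wait_for_publish()                          # block until the last publish completes
client.loop_stop()
client.disconnect()
```

Consuming from such a stream with Stream PerfTest should then show the per-chunk message count of 1 described above.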
Replies: 1 comment, 4 replies
The chunk size itself can't be tuned: the amount of data that ends up in a chunk is however much data the stream writer has in its mailbox at a point in time. So the higher the publishing throughput, the larger the chunk size. Lower chunk sizes end up causing lower consumer throughput because the stream is being read in smaller sections.

You should try upgrading to 4.2.0, as https://www.rabbitmq.com/blog/2025/09/26/stream-delivery-optimization specifically improves this scenario by reading ahead in the stream. I was looking into this for a somewhat similar use case in #14877. In 4.2.1 you should be able to tune this read-ahead size to trade memory for higher consumer throughput.

I will try to reproduce this scenario with MQTT. I expect that 4.2.0 will make a big improvement.
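To make the throughput point concrete, here is a hedged sketch (same assumptions and placeholder names as the sketch above, not from the reply itself): running several MQTT publishers concurrently increases the rate at which the stream writer's mailbox fills between flushes, which, per the explanation above, should yield larger chunks.

```python
import threading
import paho.mqtt.client as mqtt

BROKER = "rabbitmq.example.com"   # assumption: broker host
N_PUBLISHERS = 8                  # more concurrent publishers -> fuller writer mailbox
MSGS_EACH = 10_000

def publish_load(worker_id: int) -> None:
    client = mqtt.Client(client_id=f"load-{worker_id}")  # paho-mqtt 1.x
    client.username_pw_set("guest", "guest")
    client.connect(BROKER, 1883)
    client.loop_start()
    info = None
    for _ in range(MSGS_EACH):
        # All publishers write concurrently, so the stream writer batches
        # more messages per flush and produces larger chunks.
        info = client.publish("telemetry/load", payload=b"x" * 256, qos=1)
    if info is not None:
        info.wait_for_publish()   # wait for this publisher's last message
    client.loop_stop()
    client.disconnect()

threads = [threading.Thread(target=publish_load, args=(i,)) for i in range(N_PUBLISHERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Whether raising publisher concurrency is practical depends on the workload; the read-ahead improvements in 4.2.x address the consumer side without changing how chunks are written.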