This article discusses the process of conducting performance testing for RabbitMQ, including practical steps to perform it on your own, as well as some recent benchmarks for your reference.
The objective of this performance testing is to:
- Measure the volume of messages RabbitMQ can process in a given time (throughput) and how this varies across different message sizes (16 bytes and 1024 bytes).
- Measure the total time it takes a message to travel from a producer to a consumer in RabbitMQ (latency), again across both message sizes.
Before we proceed, keep in mind that the producer, consumer, and broker configuration utilized during our testing may not be suitable for your specific scenario. It's important to note that our goal was not to replicate a realistic configuration, but rather to provide you with a simple example.
RabbitMQ throughput and latency
This section presents the throughput and latency values from our benchmarking exercise.
We conducted a throughput test to assess RabbitMQ's performance with varying message sizes. Specifically, we tested message sizes of 16 bytes and 1024 bytes, with 3 producers and 5 consumers for each message size.
The outcome of our evaluation is presented in the chart below.
Figure 1 - RabbitMQ Throughput
In the chart above, we see RabbitMQ processing around 40k+ messages/sec when tested with lightweight messages of 16 bytes, and around 26k+ messages/sec when tested with 1024-byte messages.
The table below shows the latency distribution of RabbitMQ for different message sizes – because latency data generally does not follow a normal distribution, reporting it in percentiles gives a more complete picture than a single average.
Figure 2 - RabbitMQ Latency Table
Although the table displays latency data from the minimum to the 99th percentile, our analysis of RabbitMQ's latency performance will focus on the 50th percentile or median.
This metric is a fair representation of central tendency in a distribution: half of the observed values fall below it and half above, so it is not skewed by a few extreme outliers the way a mean is.
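To see why percentiles tell you more than an average for latency data, it helps to compute both over a skewed sample. The numbers below are synthetic (a log-normal distribution, not our benchmark data) and only illustrate the shape of a typical latency distribution:

```python
import random
import statistics

random.seed(42)

# Hypothetical latency samples in milliseconds: a log-normal distribution,
# i.e. mostly fast messages with a long slow tail -- typical for brokers.
samples = [random.lognormvariate(0, 0.5) for _ in range(10_000)]

mean = statistics.mean(samples)
# statistics.quantiles with n=100 returns the 99 cut points p1..p99.
cuts = statistics.quantiles(samples, n=100)
median, p95, p99 = cuts[49], cuts[94], cuts[98]

print(f"mean={mean:.2f} ms, median={median:.2f} ms, "
      f"p95={p95:.2f} ms, p99={p99:.2f} ms")
```

The slow tail pulls the mean above the median, while the 95th and 99th percentiles expose how bad the worst-case messages get – exactly the information a single average hides.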
The chart below shows the median latency of RabbitMQ across messages of different sizes.
Figure 3 - RabbitMQ Median Latency
RabbitMQ benchmark methodology and tooling
This section covers the server configuration we used to run the benchmarks and, more generally, how we arrived at the throughput and latency results above.
Throughput: test bed
For the RabbitMQ throughput test we used two AWS EC2 instances with EBS GP3 drives – precisely m6g.medium (a single vCPU, 4 GiB RAM). Both instances ran in a VPC.
To execute the benchmarks, we ran the load generator (lavinmqperf) on one of the EC2 instances and the RabbitMQ message broker on the other – simply to avoid having the load generator and the broker on the same machine and, by extension, to reduce the effect of resource sharing.
The primary function of the load generator, lavinmqperf, is to spin up makeshift producers and consumers. It then uses them to generate a large volume of messages, publish those messages to the broker, and eventually consume them.
Note that, even though we used lavinmqperf here, you can equally use the RabbitMQ performance testing tool, rabbitmq-perf-test, for the same purpose.
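As a sketch, a throughput run matching the setup above (3 producers, 5 consumers, 16-byte messages) could look like the following. The broker URI is a placeholder, and the exact flag names are an assumption that may differ between lavinmqperf versions, so verify them against `lavinmqperf --help`:

```shell
# Hypothetical invocation -- confirm flag names with `lavinmqperf --help`.
# 3 producers, 5 consumers, 16-byte message bodies; the URI points at the
# private IP of the broker instance (placeholder value).
lavinmqperf throughput \
  --producers 3 \
  --consumers 5 \
  --size 16 \
  --uri amqp://guest:guest@10.0.0.10
```

Repeating the run with a 1024-byte size gives the second data point shown in Figure 1.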
Latency: test bed
The major change we made here, relative to the throughput configuration, was swapping in rabbitmq-perf-test as our load generator. This is mostly because rabbitmq-perf-test reports latency out of the box and lavinmqperf doesn't.
Other than that, we used the same machines as the ones used for the throughput test – only that now, one of the instances runs rabbitmq-perf-test. It will spin up the requested number of producers and consumers, generate messages of the configured size, and connect to the broker we pass to it in a command that's similar to lavinmqperf's.
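For reference, a comparable rabbitmq-perf-test invocation might look like the sketch below. The broker address is a placeholder, and the 60-second duration is our own illustrative choice; perf-test prints latency statistics in its periodic output while running:

```shell
# 3 producers (-x), 5 consumers (-y), 1024-byte bodies (-s), run for 60 s (-z).
# The AMQP URI below is a placeholder for the broker instance's address.
bin/runjava com.rabbitmq.perf.PerfTest \
  --uri amqp://guest:guest@10.0.0.10 \
  -x 3 -y 5 -s 1024 -z 60
```

Running it once per message size (16 and 1024 bytes) yields the percentile data behind the latency table above.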
The limitations of our approach
We made it clear from the outset of this article that our producer, broker, and consumer configuration does not model a remotely realistic one – but how so, you might ask?
Well, both lavinmqperf and rabbitmq-perf-test spin up their makeshift consumers and producers on the same machine – in a real-world scenario, it is not very common to have the producers and the consumers running on the same machine.
Instead, you are more likely to have several producers and consumers distributed across multiple machines. However, keep in mind that, if for your own benchmarking you decide to run the consumers and the producers on separate machines, then you’d need to synchronize the clocks of the machines.
In this article we’ve seen how RabbitMQ performs across messages of different sizes. We’ve also put together a similar article that documents LavinMQ’s performance benchmarks.
Ready to spin up a RabbitMQ instance? Create a free RabbitMQ instance at CloudAMQP.
Leave your suggestions, questions, or feedback in the comment section below or email email@example.com