RabbitMQ MQTT vs EMQX

In this talk from RabbitMQ Summit 2019, we listen to Grigory Starinkin from Erlang Solutions.

RabbitMQ is a multi-protocol messaging broker which, on a vanilla installation, supports AMQP 0-9-1. Through its plugin architecture, RabbitMQ can also be configured for other protocols such as MQTT. So which is the better MQTT broker: RabbitMQ with its MQTT plugin, or EMQX? Join me in this talk as I explore that question and help you decide which MQTT broker to use.

Short biography
Grigory Starinkin (Twitter, GitHub) is an Erlang developer and RabbitMQ consultant with over 10 years of experience in software development. For the past 6 years, Grigory has been developing and supporting Erlang-based software.

RabbitMQ MQTT vs EMQX

I'm obviously from Erlang Solutions, as you can see. Today I'm going to talk about the MQTT protocol, but more importantly about two brokers, RabbitMQ and EMQX: how they implement the MQTT protocol, what architectural choices are made internally in RabbitMQ and EMQX, and how those choices can affect the performance of your cluster.

First of all, I do apologize for the clickbait title in the description of this talk. There is not going to be a definitive answer to the question of who the winner is here; it's just an overview. There are going to be a lot of technical details, and I'm going to try to fit everything into 20 minutes.

MQTT / Publish-Subscribe

Before that, let's try to understand what the MQTT protocol is; I'll review it very lightly. MQTT is a publish-subscribe protocol. It's very lightweight and designed for environments with very limited resource capacity. Devices - MQTT clients - connect to the broker and publish messages. Other clients that want to express interest in certain topics and events subscribe to them and then receive those events from the broker. It's as simple as that.

The MQTT protocol itself is very lightweight, as I said: essentially 2 bytes of protocol overhead on top of the payload that you have in your message.
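
To see where that figure comes from, here is a minimal sketch that hand-encodes an MQTT 3.1.1 PUBLISH packet (not part of the talk, and not a full MQTT implementation): the fixed header is one byte of packet type and flags plus a remaining-length field that fits in a single byte for small messages, so the mandatory per-packet overhead really is just 2 bytes. The topic name, and a packet identifier at QoS 1 and above, add a little more.

```python
# Minimal sketch: hand-encode an MQTT 3.1.1 PUBLISH packet (QoS 0) to show
# the size of the fixed header. Illustration only, not a full implementation.

def encode_remaining_length(length: int) -> bytes:
    """MQTT variable-byte integer: a single byte for lengths up to 127."""
    out = bytearray()
    while True:
        byte, length = length % 128, length // 128
        if length > 0:
            byte |= 0x80                      # continuation bit
        out.append(byte)
        if length == 0:
            return bytes(out)

def encode_publish(topic: str, payload: bytes) -> bytes:
    topic_bytes = topic.encode("utf-8")
    # Variable header: 2-byte topic length + topic (no packet id at QoS 0).
    variable_header = len(topic_bytes).to_bytes(2, "big") + topic_bytes
    remaining = variable_header + payload
    # Fixed header: 0x30 = PUBLISH, QoS 0, no DUP, no RETAIN + remaining length.
    fixed_header = bytes([0x30]) + encode_remaining_length(len(remaining))
    return fixed_header + remaining

packet = encode_publish("sensors/temp", b"21.5")
print(len(packet))  # 2-byte fixed header + 14 bytes of topic + 4 bytes payload = 20
```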

Here's how subscribe and publish look. Each client can express its interest in a topic using a topic filter, and there are wildcards available in those filters. Whenever a message is published by another client, the subscribing device will eventually receive the event.
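
For example, here is a minimal sketch using the paho-mqtt client library (1.x style API; the broker host and topic names are placeholders, not anything from the talk). A '+' wildcard matches exactly one topic level, while '#' matches any number of trailing levels.

```python
# Minimal publish/subscribe sketch using paho-mqtt (1.x style API).
# Host and topics are placeholders for illustration only.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Called for every message that matches one of our topic filters.
    print(f"{msg.topic}: {msg.payload!r}")

subscriber = mqtt.Client(client_id="controller-1")
subscriber.on_message = on_message
subscriber.connect("broker.example.com", 1883)
subscriber.subscribe("sensors/+/temperature", qos=1)  # '+' = one topic level
subscriber.loop_start()

publisher = mqtt.Client(client_id="device-42")
publisher.connect("broker.example.com", 1883)
publisher.loop_start()
publisher.publish("sensors/device-42/temperature", payload=b"21.5", qos=1)
```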

There are a lot of MQTT clients and a lot of brokers, but which one should you use for your next application - or which one should you move to from the solution you're currently using as your MQTT broker? Let's find out.

EMQX

In this talk, I'm only going to talk about EMQX and RabbitMQ. Why EMQX? For those who don't know, EMQX is an MQTT broker. It is available in different flavors: there is an open source version, there is an enterprise version, and it is also available as a service in the cloud, so you can subscribe and use MQTT as a service. There are additional plugins that you can enable; it is very extensible, as RabbitMQ is.

EMQX / Cluster

But what is more interesting in our case is that EMQX and RabbitMQ are both written in Erlang, and they essentially use the same underlying infrastructure, which is the Mnesia database. Mnesia is used for sharing the routing topology and exchanging information between the nodes within a cluster.

Performance

All right, let's go straight to the performance measurements, because some of our customers consider performance one of the main criteria when choosing which broker they would like to use. It's one variable in the equation, essentially.

Stress tools

For the stress tests, I used these tools. I used MZBench. MZBench is a distributed stress testing tool, also written in Erlang. There are three types of nodes. There is the MZBench server, which accepts your scenarios - say you'd like to create one publisher and many consumers; you send that scenario to the MZBench server. That data is then propagated to the MZBench controller. And the MZBench nodes that you see in the graph are used as the MQTT clients that connect to your cluster.

In terms of the cluster, I used a three-node cluster. Each node was provisioned with two cores and 8 GB of memory - m5.large instances. All of them were provisioned with the Prometheus node exporter, which exposed metrics to Prometheus, and I then visualized the data in Grafana. To automate all of that, I used Ansible and Terraform. Running the stress tests literally took a couple of bash scripts, essentially.

Test scenarios

All right, there are two test scenarios: many to one and one to many.

Many to one can be seen, for example, where you have a lot of devices in the field measuring temperature or pressure in pipes. They send that information to the broker, and there is one controller that receives the data for further processing. The other use case I have is one to many, where one controller can, for example, send control messages to the devices in the field.

Many to one

All right, so I started with 2,000 producers and consumers in each scenario, and then gradually increased by another 2,000 connections each time, up to 10,000 consumers and producers. In each scenario, each client sent only one message with a payload of 256 bytes every second. There is not much throughput and not much payload per message, but it's a test of the fanout and fan-in patterns and how the cluster reacts to these scenarios. Here are the results for RabbitMQ MQTT and EMQX. As you can see, there is little difference between them.
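
As a rough illustration of what the many-to-one workload looks like (the real tests were driven by MZBench across several load-generator nodes; this single-process sketch with a placeholder broker address only shows the shape of the traffic):

```python
# Rough sketch of the many-to-one workload: N publishers each send one
# 256-byte QoS 1 message per second, and a single consumer subscribes to all
# of them with a wildcard. paho-mqtt 1.x style; host and topics are placeholders.
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"
NUM_PUBLISHERS = 2000            # increased in steps of 2,000 up to 10,000
PAYLOAD = b"x" * 256             # 256-byte payload, as in the talk

consumer = mqtt.Client(client_id="consumer-0")
consumer.connect(BROKER, 1883)
consumer.subscribe("bench/#", qos=1)
consumer.loop_start()

publishers = []
for i in range(NUM_PUBLISHERS):
    p = mqtt.Client(client_id=f"publisher-{i}")
    p.connect(BROKER, 1883)
    p.loop_start()
    publishers.append(p)

while True:                      # one message per publisher per second
    for i, p in enumerate(publishers):
        p.publish(f"bench/{i}", PAYLOAD, qos=1)
    time.sleep(1)
```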

One to many

And if you have a look at one to many, it's quite a bit different. Let me try to explain why this actually happened, why there is such a huge gap between them.

Before we can start, let's talk about how communication in the cluster is organized. In RabbitMQ, you have Erlang distribution links: each node is connected to the others using one single connection. If one client is connected to one node and another client is connected to another node, then to proxy a message from one node to the other, RabbitMQ has to send that message over that one single connection.

RabbitMQ / delegate

In a fanout example, you need to send a message to a lot of queues which are hosted on different nodes. RabbitMQ tries its best to optimize that: it doesn't actually send lots of copies. If a message has to be delivered to many queues, the delegate framework is used for that case. It groups deliveries by destination node and sends one message per node, which ends up in a delegate process and is then dispatched to the queues on that node. That is primarily done for ordering, so that your messages don't end up in a different order in the same queue.

The problem here is that all your messages end up in the same delegate process, because the delegate process is chosen by the hash of the sender. If you have one sender, you're always going to push messages through one single delegate process.
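
Here is a conceptual sketch of that behaviour (the real delegate framework is Erlang code inside RabbitMQ; the pool size and function names below are made up for illustration):

```python
# Conceptual sketch, not RabbitMQ's actual delegate framework. Deliveries are
# grouped per destination node so only one copy crosses the distribution link,
# but the delegate process that does the fanout is picked by hashing the
# *sender*, so a single publisher funnels all its traffic through one delegate.
from collections import defaultdict

DELEGATE_POOL_SIZE = 16  # hypothetical pool size per node

def pick_delegate(sender_id: str) -> int:
    # One sender -> always the same delegate process.
    return hash(sender_id) % DELEGATE_POOL_SIZE

def route(sender_id: str, queues: list[tuple[str, str]]) -> None:
    """queues: (node, queue_name) pairs the message must reach."""
    per_node = defaultdict(list)
    for node, queue in queues:
        per_node[node].append(queue)         # group by destination node
    delegate = pick_delegate(sender_id)
    for node, node_queues in per_node.items():
        # One message per remote node; the delegate on that node re-dispatches
        # it locally to every queue, preserving per-sender ordering.
        print(f"send once to {node} via delegate {delegate} -> {node_queues}")

route("mqtt-publisher-1",
      [("rabbit@node2", "q1"), ("rabbit@node2", "q2"), ("rabbit@node3", "q3")])
```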

EMQX / gen_rpc

In EMQX, it's a bit different. There is the distribution link, and there is also gen_rpc. gen_rpc is used for message forwarding only. You still have the Erlang distribution links, over which Mnesia table data is exchanged, but whenever you need to forward a message from one node to another, you instead use a separate connection - up to 32 connections per node by default - and forward the message on the connection that is provisioned for that client. You fully utilize all the connections when you do a fanout publish. That is great, but I don't have an example and I'm not sure how that would behave in cases where the network starts partitioning - and partitioning is going to happen.
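
Conceptually, the forwarding side looks something like this sketch (not gen_rpc itself; the pool size of 32 is the default mentioned above, while keying the connection on the client id is an assumption made purely for illustration):

```python
# Conceptual sketch of the forwarding approach described above - not gen_rpc.
# Instead of pushing every inter-node message down the single Erlang
# distribution link, each node keeps a pool of TCP connections to its peers
# and spreads forwarded publishes across them.
import socket

POOL_SIZE = 32  # default number of connections per remote node, per the talk

class NodeForwarder:
    def __init__(self, remote_host: str, remote_port: int):
        # Lazily created pool of sockets to one remote cluster node.
        self._addr = (remote_host, remote_port)
        self._pool: dict[int, socket.socket] = {}

    def _connection(self, key: str) -> socket.socket:
        slot = hash(key) % POOL_SIZE           # assumption: key by client id
        if slot not in self._pool:
            self._pool[slot] = socket.create_connection(self._addr)
        return self._pool[slot]

    def forward(self, client_id: str, packet: bytes) -> None:
        # A fanout driven by many clients naturally uses many connections,
        # instead of serializing everything on one link.
        self._connection(client_id).sendall(packet)
```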

In RabbitMQ, there is one distribution link. If that distribution link gets broken, you can easily react to the change in the cluster: you either pause the cluster or continue to work, depending on your partition handling strategy, and protect your data. In this case, I'm not sure how that is going to perform and how consistent your data is going to end up.

Well, that could explain the huge difference in the fanout example, where we've seen that EMQX usually outperforms RabbitMQ.

MQTT plugin

The other thing that can contribute to this is how the RabbitMQ MQTT plugin works. When you enable the plugin, it creates an MQTT listener which accepts messages using the MQTT protocol. It then parses each message, converts it to an AMQP one, and publishes that to RabbitMQ.
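
Roughly, that translation looks like the sketch below (based on the plugin's documented default of routing MQTT publishes through the amq.topic exchange with '/' converted to '.'; the helper names here are made up):

```python
# Conceptual sketch of the MQTT-to-AMQP translation the plugin performs.
# Function names are made up; the exchange name and separator conversion
# reflect the plugin's documented defaults.
def mqtt_topic_to_routing_key(topic: str) -> str:
    # MQTT separates topic levels with '/', AMQP topic exchanges with '.'.
    return topic.replace("/", ".")

def on_mqtt_publish(topic: str, payload: bytes, qos: int) -> dict:
    # The parsed MQTT PUBLISH becomes an AMQP basic.publish to a topic
    # exchange; QoS 1 maps to a persistent message (delivery_mode=2).
    return {
        "exchange": "amq.topic",
        "routing_key": mqtt_topic_to_routing_key(topic),
        "body": payload,
        "properties": {"delivery_mode": 2 if qos >= 1 else 1},
    }

print(on_mqtt_publish("sensors/device-42/temperature", b"21.5", qos=1))
```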

The thing is, there is a little overhead in converting back and forth between those protocols. I'm really glad to hear that there is a movement towards a protocol-agnostic core in RabbitMQ - in 3.9, I think - that Michael talked about in the opening keynote. That is also going to eliminate the additional overhead of using the AMQP direct channel.

MQTT plugin / Message flow

Currently, in the MQTT plugin, if a message has to be published to RabbitMQ, it first goes from the publisher to the socket, then to the reader, and then through all the processes in the chain until it eventually ends up in the AMQP process. The thing is, if you need to receive the same message on the same connection, that message has to go back through that whole chain, including the mqtt_reader, which is responsible for both reading and writing messages.

AMQP

If you have a look at the AMQP example - how AMQP works internally - you'll see that when a message has to be read, it goes through the reader process, while the writer is only responsible for writing messages. So you see two different paths here, and they are not dependent on each other. The channel is the main Erlang process that is responsible for exchanging messages.

AMQP / many-to-one performance

I did run the same test scenarios for the AMQP protocol. Well, the difference is less obvious here: you see that the AMQP example and EMQX are almost the same. If you have a look at the fanout, there is still a bit of a difference, but it's not that huge a gap.

Also, you need to realize - by the way, in those test scenarios I used Quality of Service 1 - that whenever you publish a message using Quality of Service 1, the message is translated into a persistent message, which means that RabbitMQ is automatically going to try to persist the message on disk. That's where the queue implementation matters.

In RabbitMQ, there is a really sophisticated implementation of a queue. By default it's the variable queue; you can plug in a different implementation, but the default is the variable queue. It tries to balance between the durability of the data and the latency it can provide to the publishing client. In the worst-case scenario, a message goes through in-memory, internal, and on-disk stages, and back to in-memory again. In EMQX, you only have an in-memory queue, which is very simplistic. It's a priority queue where, if more messages keep coming in and EMQX cannot push messages to the receiver fast enough, messages are eventually dropped on the floor.

That means that in RabbitMQ, by default, out of the box, you have persistence: you have a queue that can be persisted to disk. If RabbitMQ crashes or restarts, or your underlying infrastructure misbehaves, the broker will restart and the clients will still be able to connect and receive the messages that were published to the broker. In EMQX, that is not yet possible unless you use a plugin that provides an additional persistence layer - and that is only available in the paid version.
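
To make the contrast concrete, here is a conceptual sketch of a bounded in-memory session queue of the kind described above (not EMQX's actual implementation; the length limit is made up): it is cheap and fast, but once the receiver falls behind, messages get dropped, and nothing survives a restart.

```python
# Conceptual sketch of the trade-off described above - not EMQX's actual
# session queue. A bounded in-memory queue is cheap and fast, but once the
# receiver falls behind, messages are simply dropped, and nothing survives a
# broker restart. The max_len value is made up for illustration.
from collections import deque

class InMemorySessionQueue:
    def __init__(self, max_len: int = 1000):
        self._q = deque()
        self._max_len = max_len
        self.dropped = 0

    def enqueue(self, message: bytes) -> None:
        if len(self._q) >= self._max_len:
            self._q.popleft()          # drop the oldest message on the floor
            self.dropped += 1
        self._q.append(message)

    def dequeue(self):
        return self._q.popleft() if self._q else None
```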

Throttling

All right, the other thing that can slow down your broker - though it's not necessarily a bad thing - is throttling, and how it is implemented.

RabbitMQ / flow control

In RabbitMQ, there is the famous flow control mechanism, where each participant in the chain of processes has a credit. So, for example, when a message is received by the broker, it is first read by the reader. Then, when the message is sent on to the channel, the reader consumes one credit, and the channel in turn decrements a credit from the originator of the request. These processes try to keep those credits in sync so that they don't send more messages than they're allowed to - they shouldn't send messages if they can't.

Why is this architecture good? Because if you have a lot of consumers on one side, you're going to have a lot of queues on the other side, and each message that you publish through a reader can really badly affect your RabbitMQ instance, because you're sending it to a lot of queues. Flow control protects you here: RabbitMQ will stop reading from the receive buffer, which means the client's send buffer will fill up. That's how RabbitMQ pushes back on MQTT clients that are trying to push messages through it.
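
Here is a conceptual sketch of credit-based flow control between two stages of such a pipeline (not RabbitMQ's actual credit_flow module; the credit values are arbitrary):

```python
# Conceptual sketch of credit-based flow control between two stages of a
# pipeline - not RabbitMQ's actual credit_flow module; credit values are
# arbitrary. The upstream stage spends one credit per message it sends and
# blocks when it runs out; the downstream stage grants credits back as it
# finishes work, so a slow stage pushes back on everything before it, and
# ultimately on the TCP socket.
import queue
import threading

INITIAL_CREDIT = 200
GRANT_AFTER = 50                     # grant credit back in batches

class Stage:
    def __init__(self, name: str):
        self.name = name
        self.inbox = queue.Queue()
        self.credit = threading.Semaphore(INITIAL_CREDIT)

    def send_to(self, downstream: "Stage", message: bytes) -> None:
        downstream.credit.acquire()  # blocks when downstream is overloaded
        downstream.inbox.put(message)

    def run(self, downstream: "Stage | None" = None) -> None:
        processed = 0
        while True:
            message = self.inbox.get()
            if downstream is not None:
                self.send_to(downstream, message)
            processed += 1
            if processed % GRANT_AFTER == 0:
                for _ in range(GRANT_AFTER):
                    self.credit.release()   # let our upstream send more

# Usage: wire reader -> channel and run each stage in its own thread.
# reader, channel = Stage("reader"), Stage("channel")
# threading.Thread(target=channel.run, daemon=True).start()
# threading.Thread(target=reader.run, args=(channel,), daemon=True).start()
```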

EMQX / rate limit

EMQX, on the other hand, implements this a bit differently. The only throttling mechanism it has is rate limiting, and rate limiting is implemented on the read side, in the EMQX channel process.

In the RabbitMQ world, that would be called the rabbit_reader. This Erlang process is responsible for reading messages from the socket. Instead of reading one message at a time, it reads multiple messages from the socket, and that is configurable - by default, I think, it asks for 200 messages. Once the messages are received, it processes them accordingly. Only after the socket reports that the number of messages the reader asked for has been reached does it examine the number of incoming packets and the number of bytes received, and based on that, it sleeps for a certain amount of time. The effect looks similar to what RabbitMQ does on the receiving side: the receive buffer eventually fills up, the sender's buffer fills up too, and the publisher will not be able to push any more messages, as long as it respects the TCP protocol.
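
Conceptually, that read-side throttling looks like this sketch (not EMQX's limiter code; the batch size of 200 comes from the talk, while the byte-rate target and the pause formula are assumptions):

```python
# Conceptual sketch of the read-side throttling described above - not EMQX's
# actual limiter. The reader pulls a batch from the socket (200 per the talk),
# processes it, then compares what it just consumed against a configured rate
# and sleeps to stay under it. The byte-rate target and the pause formula are
# assumptions for illustration; each recv() is treated as one packet.
import time

ACTIVE_N = 200                   # packets asked for per batch
MAX_BYTES_PER_SEC = 1_000_000    # hypothetical rate limit

def handle(packet: bytes) -> None:
    pass                         # placeholder: parse and route the MQTT packet

def reader_loop(sock) -> None:
    while True:
        window_start = time.monotonic()
        bytes_read = 0
        for _ in range(ACTIVE_N):
            packet = sock.recv(4096)
            if not packet:
                return           # peer closed the connection
            bytes_read += len(packet)
            handle(packet)
        # Only after the whole batch: check the rate and pause if we were too
        # fast. While we sleep we stop reading, the kernel receive buffer
        # fills up, and well-behaved publishers are pushed back via TCP.
        elapsed = time.monotonic() - window_start
        min_duration = bytes_read / MAX_BYTES_PER_SEC
        if min_duration > elapsed:
            time.sleep(min_duration - elapsed)
```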

All right, that's how throttling is implemented in EMQX at the moment. The nice thing about this is that it's very easy to implement throttling this way. But on the other hand, if you start with a lot of subscribers, they will create a lot of sessions - the equivalent of a queue in RabbitMQ terminology - and each message can badly affect all of those processes in your broker. That means your memory will grow uncontrolled, and eventually your broker will run out of memory, which is bad, essentially.

The winner

All right. So, who is the winner? There is no real winner in this case, because you shouldn't aim only for performance. On performance it's clear: EMQX won in some cases. But it doesn't provide the same level of guarantees that RabbitMQ has.

Do you get message storage out of the box, without having to pay an additional price for integration with other storage backends? The storage backends available in the EMQX enterprise version are MongoDB, MySQL, and Postgres, I think. I don't even have performance numbers for those, but if you try to store each message in MongoDB, for example, I would say - from my perspective - it will be slower, because you're communicating with a remote server. In RabbitMQ, there is a local, in-memory implementation of a queue, which should be faster than sending data over the wire to a remote node.

[Applause]

Questions from the audience

Q: I have a question. I guess RabbitMQ can be considered more of a general-purpose message broker, whereas EMQX is more specialized in its use cases. Is that a fair assumption to make from this talk?

A: I think so. If you're aiming for the maximum out-of-the-box performance available on the market at the moment, there are solutions for that. But if you already have an environment where you're using RabbitMQ and you don't have the privilege of replacing the whole infrastructure with a different broker, I don't see a reason to do that. Also, I'm very happy with what Michael was saying about replacing the AMQP-specific parts with a protocol-agnostic implementation internally. That is going to speed up the MQTT implementation as well.

Q: So it really depends on the use case - whether you have a very specialized use case where you really need those kinds of numbers?

A: Exactly. There are a lot of variables in this equation. It would be wrong to say, just go for performance only.

Q: Would scaling up the nodes help with this performance? For example, would adding more nodes to RabbitMQ or EMQX help? Did you measure that?

A: I did not measure it, but I would expect the performance to be better, because this is mostly CPU-bound. If you add more nodes, each node hosts fewer queues - your queues are distributed evenly across all nodes - and the CPU load per node is going to be smaller. Yeah, that's going to help.

Q: With MQTT use cases, I think the main performance measure is how many connections you can establish on a node. Usually you have IoT devices, and you want to connect a million devices to a broker. Just comparing the two, how many connections can a single RabbitMQ node handle in comparison to a single EMQX node?

A: EMQX's networking stack implementation is very similar to what RabbitMQ has. It uses esockd, an asynchronous Erlang socket library. That is very similar to what RabbitMQ does at the moment; it's just that the functionality has been split out into a separate library that everyone can use, while in RabbitMQ it's assembled more internally, not split out. So I would say their performance should be similar. I don't have numbers, obviously.

Q: Doesn't it come down to the number of Erlang processes per connection? RabbitMQ is going to have a minimum of, what, three Erlang processes to handle one MQTT connection, versus EMQX which is going to have one or two?

A: Yeah, exactly. That is going to end up in more CPU and memory usage - that's the correct answer. But, as I said, with the upcoming work on avoiding all of those AMQP direct connections, that overhead is going to be avoided entirely. There won't be that additional CPU load on your broker.

Q: Did you have a look at VerneMQ, as it's also built in Erlang?

A: I did. I did look into VerneMQ, but not in terms of performance measurements. I do know the test scenarios they use - my test scenarios were essentially inspired by the scenarios that VerneMQ uses and suggests for benchmarking.

Q: You mentioned clustering, is there any data mirroring in EMQX?

A: There is no data mirroring; EMQX does not support it in any way at the moment. I guess the way suggested by EMQX is to plug in the implementation you'd like into your broker, to persist the data the way you want it persisted. Essentially, EMQX does not try to solve this problem; it delegates it to other services. That's why they have integrations with RabbitMQ, for example, and with MongoDB. They say: persist the data, but the persistence layer is a third party's concern. It's not within the scope of the project.

CloudAMQP - industry leading RabbitMQ as a service

Start your managed cluster today. CloudAMQP is 100% free to try.
