Please read Part 1, RabbitMQ Best Practices, for general best practices and dos and don’ts tips for RabbitMQ.
Don’t open and close connections or channels repeatedly
The handshake process for an AMQP connection is quite involved and requires at least 7 TCP packets (more if TLS is used). Channels can be opened and closed more frequently if required, but even channels should be long-lived if possible, e.g. reuse the same channel per publishing thread. Don’t open a new channel each time you publish. Best practice is to reuse connections and multiplex a connection between threads with channels.
- AMQP connection: 7 TCP packets
- AMQP channel: 2 TCP packets
- AMQP publish: 1 TCP packet (more for larger messages)
- AMQP close channel: 2 TCP packets
- AMQP close connection: 2 TCP packets
- Total: 14-19 TCP packets (+ acks)
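The saving is easy to see in code. A minimal sketch, assuming the Python pika client and a broker on localhost; the queue name `task_queue` is illustrative:

```python
# Hedged sketch: reuse one long-lived connection and channel instead of
# paying the full open/publish/close cycle for every message.

def make_connection(host="localhost"):
    # pika is imported lazily so this module also loads without it installed
    import pika
    return pika.BlockingConnection(pika.ConnectionParameters(host=host))

if __name__ == "__main__":
    connection = make_connection()      # open ONCE, at application start-up
    channel = connection.channel()      # one long-lived channel per thread
    for i in range(1000):
        # ~1 TCP packet per publish on the reused channel, instead of the
        # full 14-19-packet handshake cycle every time
        channel.basic_publish(exchange="",
                              routing_key="task_queue",
                              body=f"message {i}".encode())
    connection.close()                  # close once, at shutdown
```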
Don’t use too many connections or channels
Try to keep connection/channel count low. Use separate connections for publish and consume. You should ideally only have one connection per process, and then use a channel per thread in your application.
- Reuse connections
- 1 connection for publishing
- 1 connection for consuming
Don’t share channels between threads
You should make sure that you don’t share channels between threads, as most clients don’t make channels thread-safe (doing so would have a serious negative impact on performance).
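A common pattern is to give each thread its own private channel via thread-local storage. A sketch; `get_channel` is an illustrative helper, and it assumes a client whose connection object may be shared between threads (not every client allows this, so check your library’s docs):

```python
import threading

_local = threading.local()  # per-thread storage, so channels are never shared

def get_channel(connection):
    """Return this thread's private channel, opening it on first use.
    `connection` is a shared, long-lived connection from your client
    library; each thread gets its own channel and never hands it away."""
    if not hasattr(_local, "channel"):
        _local.channel = connection.channel()
    return _local.channel
```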
Don't have too large queues
Short queues are the fastest. When a queue is empty, and it has consumers ready to receive messages, then as soon as a message is received by the queue, it goes straight out to the consumer.
Many messages in a queue can put a heavy load on RAM usage. When this happens, RabbitMQ will start flushing (page out) messages to disk in order to free up RAM, and when that happens queueing speeds will deteriorate.
Problems with long queues
- Small messages embedded in queue index
- Take a long time to sync between nodes
- Time-consuming to start a server with many messages
- RabbitMQ management interface collects and stores stats for all queues
Use lazy queues to get predictable performance (or if you have large queues)
With lazy queues, messages go straight to disk, which minimizes RAM usage at the cost of longer throughput times.
Lazy queues create a more stable cluster with more predictable performance: your messages will not suddenly get flushed to disk without warning.
Limit queue size, with TTL or max-length
For applications that often get hit by spikes of messages, and where throughput is more important than anything else, set a max-length on the queue. This keeps the queue short by discarding messages from the head of the queue so that it never grows larger than the max-length setting.
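Both limits are ordinary queue arguments (or policy keys). A sketch with illustrative numbers, assuming a pika-style channel; tune the values to your own workload:

```python
# Illustrative limits: cap the queue at 10,000 messages, and/or expire
# messages after 60 seconds.
BOUNDED_ARGS = {
    "x-max-length": 10_000,   # discard from the head once the cap is hit
    "x-message-ttl": 60_000,  # per-message TTL, in milliseconds
}

def declare_bounded_queue(channel, name):
    # `channel` is an open channel from your client library (e.g. pika)
    return channel.queue_declare(queue=name, durable=True, arguments=BOUNDED_ARGS)
```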
Use multiple queues and consumers
You will achieve better throughput on a multi-core system if you have multiple queues and consumers. You will achieve optimal throughput if you have as many queues as cores on the underlying node(s).
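A sketch of sizing queues and consumers to the core count; the helper names and the `work-N` queue naming are illustrative, and `make_channel` is assumed to return a fresh channel per thread:

```python
import os
import threading

def queue_names(prefix="work"):
    """One queue per core on the node, e.g. work-0 .. work-N."""
    cores = os.cpu_count() or 1
    return [f"{prefix}-{i}" for i in range(cores)]

def consume_one(make_channel, queue, handle):
    # Each thread gets its own channel; channels are never shared.
    channel = make_channel()
    channel.basic_consume(queue=queue, on_message_callback=handle)
    channel.start_consuming()

def start_consumers(make_channel, handle):
    """Start one consumer thread per queue."""
    threads = []
    for name in queue_names():
        t = threading.Thread(target=consume_one,
                             args=(make_channel, name, handle), daemon=True)
        t.start()
        threads.append(t)
    return threads
```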
Persistent messages and durable queues for a message to survive a server restart
If you cannot afford to lose any messages, make sure that your queue is declared as “durable” and your messages are sent with delivery mode "persistent" (delivery_mode=2).
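In code these are two independent flags: the queue’s `durable` flag and the message’s `delivery_mode` property, and both are needed for a message to survive a restart. A sketch, assuming the pika client:

```python
PERSISTENT = 2  # AMQP delivery_mode: 1 = transient, 2 = persistent

def publish_persistent(channel, queue, body):
    """Declare a durable queue and publish a persistent message to it.
    `channel` is an open channel from pika; both the durable flag and
    delivery_mode=2 are required for the message to survive a restart."""
    import pika  # assumed client library; imported lazily for the sketch
    channel.queue_declare(queue=queue, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=queue,
        body=body,
        properties=pika.BasicProperties(delivery_mode=PERSISTENT),
    )
```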
For high throughput, use temporary or non-durable queues
Split your queues over different cores
Queue performance is limited to one CPU core. You will, therefore, get better performance if you split your queues into different cores, and also into different nodes if you have a RabbitMQ cluster.
- Consume (push), don’t poll (pull) for messages
Don't use old RabbitMQ/Erlang versions or RabbitMQ clients/libraries
Stay up-to-date with the latest stable versions of RabbitMQ and Erlang. Make sure that you are using the latest recommended version of client libraries.
Don't have an unlimited prefetch value
A typical mistake is to have an unlimited prefetch, where one client receives all messages, runs out of memory and crashes, and then all messages are redelivered.
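With pika, the prefetch limit is set per channel with `basic_qos`. A sketch; the value 50 is only an illustrative starting point, tune it to how fast your consumer processes a message:

```python
PREFETCH = 50  # illustrative starting point; tune to your consumer speed

def start_consumer(channel, queue, callback):
    # Never leave prefetch at 0 (unlimited): a single consumer could buffer
    # the whole queue, run out of memory, and force a full redelivery.
    channel.basic_qos(prefetch_count=PREFETCH)
    channel.basic_consume(queue=queue, on_message_callback=callback)
    channel.start_consuming()
```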
Don't forget to add an HA policy when creating a new vhost on a cluster
When you create a new vhost on a cluster, don't forget to enable an HA policy for the new vhost (if you have an HA setup). Messages will not be synced between nodes without an HA policy.
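The policy can be set with `rabbitmqctl set_policy` or via the management HTTP API. A sketch of the latter using only Python's standard library; the host, credentials and vhost are placeholders, and `ha-mode: all` (mirror to every node) is one possible definition, not the only one:

```python
import base64
import json
from urllib import request

# Mirror all queues in the vhost to all nodes; adjust the pattern and
# definition to your own HA setup.
HA_POLICY = {"pattern": ".*", "definition": {"ha-mode": "all"}, "apply-to": "queues"}

def set_ha_policy(vhost, name="ha-all", base="http://localhost:15672",
                  user="guest", password="guest"):
    # Note: the default vhost "/" must be URL-encoded as %2f in the path.
    url = f"{base}/api/policies/{vhost}/{name}"
    req = request.Request(url, data=json.dumps(HA_POLICY).encode(),
                          method="PUT",
                          headers={"Content-Type": "application/json"})
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with request.urlopen(req) as resp:
        return resp.status  # 201/204 on success
```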