Take Control Of Your RabbitMQ Queues

The Erlang Solutions team explore how to take control of queue master placement in a RabbitMQ cluster.

18 min read

You’re a support engineer and your organisation uses a 3-node RabbitMQ cluster to manage its inter-application interactions across your network. On a regular basis, different departments in your organisation approach you with requests to integrate their new microservices into the network, for communication with other microservices via RabbitMQ.

With the organisation being so large, and offices spread across the globe, the onus is on each department’s application developers to handle the integration with the RabbitMQ cluster, but only after you’ve approved the request and given them the green light to do so. Along with approving integration requests, you also provide general conventions which you’ve adopted from prior experience. One convention you enforce is that connecting microservices must create their own dedicated queues when they integrate with the cluster, as the best approach to isolating services and managing them easily. Unless, of course, a microservice only needs to consume messages from already existing queues.

The average message rate across your cluster is fairly stable at around 1k/s, from both internal traffic and the external traffic generated by some mobile apps published by the organisation. Everything is smooth sailing until you realise that the total number of queues in your cluster is nearing the thousands, and that one of the three servers seems overburdened, using more system resources than the rest. Memory utilisation on that server starts reaching alarming thresholds. At this point you realise that things can only get worse; you still have pending requests to integrate more microservices into the cluster, but you can’t approve them without figuring out how to solve the growing imbalance in system resource usage across your deployment.

Fig 1. RabbitMQ cluster imbalance illustration

After digging through the RabbitMQ documentation, you discover that, since you’re using HA queues, which you adopted to ensure availability of service, all your message operations only reference your queue masters. Microservices have been creating queues on whichever node they happened to connect to, meaning that the provisioning of queues has been random and unstructured across the cluster. The concentration of HA queue masters on one node significantly surpasses that on the other nodes; as a result, with all message operations referencing queue masters only, the server with the most queue masters bears the operational burden in comparison to the rest.

Your load balancer hasn’t been of much help either: whenever you experience a network partition, or purposefully take one of your nodes down for maintenance work, queue provisioning proceeds uncontrolled on the remaining running nodes, which preserves the queue count imbalance once the cluster is restored. A possible immediate solution would be to purge some of the queues, to relieve the memory footprint on the burdened server(s), but you can’t afford to do this, as most of the queued-up messages are crucial to the business operations transacting through the cluster. Nor can new and existing microservices keep creating and adding queues to the cluster indefinitely until this problem has been addressed. So what do you do?

Well, as of version 3.6.0, RabbitMQ provides a mechanism that grants its users more control in determining a queue master’s location in a cluster on creation. It is based on predefined rules and strategies, configured prior to the queue declaration operations. If you can relate to the situation above, or would like to plan ahead and make the necessary amendments to your RabbitMQ installation before encountering similar problems, then read on, and give this feature a go.

So how does it work?

Prior to the introduction of the queue master location mechanism, queue declaration, by default, placed the queue master on the local node on which the declare operation was executed. This is quite limiting, and has been the main reason behind the imbalance in system resource usage on a RabbitMQ cluster once the number of queues becomes significantly large.

With this mechanism, the node on which the queue master will be located is first computed from a configurable strategy, prior to the queue being created.

Configurable strategy is key here, as it gives RabbitMQ users full control to dictate the distribution of queue masters across their cluster. There are three means by which a queue master location strategy may be configured;

  1. Queue declare arguments: at AMQP level, the queue master location strategy is defined as part of the queue’s declaration arguments.
  2. Policy: the strategy is defined as a RabbitMQ policy.
  3. Configuration file: the strategy is defined in the rabbitmq.config file.

Once set, the internal execution order of declaring a queue would be as follows;

Fig 2. Queue master location execution flow
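
To make the flow concrete, here is a minimal Python sketch of the resolution order (an illustrative model only, not RabbitMQ source code; it assumes the three configuration means are consulted in the order listed above, falling back to the old client-local default):

def resolve_strategy(declare_args, policy, config):
    # Illustrative model of Fig 2: declare arguments are consulted first,
    # then any matching policy, then rabbitmq.config, else the pre-3.6.0
    # default of client-local.
    return (declare_args.get("x-queue-master-locator")
            or (policy or {}).get("queue-master-locator")
            or config.get("queue_master_locator")
            or "client-local")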

These are the three ways in which a queue master location strategy may be configured, and how the execution flow is ordered upon queue declaration. Next, you may be asking yourself the following question;

What are these strategies anyway?

Queue master location strategies are basically the rules which govern the selection of the node on which a queue master will reside, on declaration. If you’re from an Erlang background, you’ll understand when I say these strategies are nothing but callback modules of a RabbitMQ behaviour known as rabbit_queue_master_locator. If you aren’t from an Erlang background, no worries; all you need to know is which strategies are available to you, and how to make use of them. Currently, there are three queue master location strategies available;

  1. Min-masters: selects as master node the node with the fewest running queue masters. Configured as min-masters.
  2. Client-local: like the previous default behaviour, this strategy selects as master node the local node on which the queue is being declared. Configured as client-local.
  3. Random: selects the queue master node at random. Configured as random.
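
If it helps to see the three rules side by side, here is a small Python sketch of the selection logic (a conceptual model only, not RabbitMQ’s implementation; the node names and master counts are made up for illustration):

import random

def choose_master_node(strategy, nodes, masters_per_node, local_node):
    # Conceptual model of the three strategies.
    if strategy == "min-masters":
        # Pick the node currently hosting the fewest queue masters.
        return min(nodes, key=lambda node: masters_per_node[node])
    if strategy == "client-local":
        # Pick the node the declaring client is connected to.
        return local_node
    if strategy == "random":
        return random.choice(nodes)
    raise ValueError("unknown strategy: %s" % strategy)

nodes = ["rabbit@host", "rabbit_1@host", "rabbit_2@host"]
masters = {"rabbit@host": 9, "rabbit_1@host": 5, "rabbit_2@host": 3}
print(choose_master_node("min-masters", nodes, masters, "rabbit@host"))
# -> rabbit_2@host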

So, in a nutshell, this is the general theory behind controlling and dictating the location of a queue master’s node. The syntax differs for each case, depending on whether the strategy is defined as part of the queue’s declare arguments, as a policy, or in the rabbitmq.config file.

NOTE: When both a queue master location strategy and an HA nodes policy have been configured, a conflict can arise over the resulting queue master node; for instance, when the node computed by the location strategy is one that the HA nodes policy designates as a slave. In such a scenario, the HA nodes policy always takes precedence over the queue master location strategy.
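
Continuing the conceptual sketch above, this precedence could be modelled like so (again purely illustrative; the tie-break among the policy’s nodes is a made-up simplification):

def resolve_master(strategy_node, ha_policy_nodes):
    # If an HA nodes policy is in force, it wins: the master must be one
    # of the policy's nodes, regardless of what the location strategy chose.
    if ha_policy_nodes and strategy_node not in ha_policy_nodes:
        return ha_policy_nodes[0]  # hypothetical tie-break
    return strategy_node

print(resolve_master("rabbit_2@host", ["rabbit@host", "rabbit_1@host"]))
# -> rabbit@host: the HA nodes policy overrides the strategy's pick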

With this knowledge at hand, the engineer in the situation described above would simply enforce the use of the min-masters queue location strategy as part of the queue declaration arguments for all microservices connecting to the RabbitMQ cluster. Or, even easier, he’d simply set a min-masters policy on the cluster, using the match-all wildcard as the queue name pattern. This would ensure that all newly created queues are automatically distributed across the cluster until there’s a balance in the number of queue masters per node, and ultimately, a balance in the utilisation of system resources across all three servers.

Going forward

At the moment, only three location strategies have been implemented, namely min-masters, client-local and random. More strategies are yet to be brewed up, and if you feel you’d like to contribute a rule by which the distribution of queues can be carried out, to better improve the performance of a RabbitMQ cluster, please feel free to drop a comment. Contributions will go through some rounds of review, and could be implemented and included in future releases of RabbitMQ.

Quick and easy experiment

I’ll illustrate how the queue master location strategy is put into effect with a simple experiment you can carry out on your local machine. We’re going to make things easy by making the most of the management UI, to avoid the AMQP setup procedures such as opening connections and channels, creating exchanges, and so forth.

  1. Download and install a RabbitMQ package specific to your platform. If you’re on a UNIX-based OS, you can quickly download and extract the generic UNIX package, and navigate to the sbin directory:

     tar xvf rabbitmq-server-generic-unix-3.6.1.tar.xz
     cd rabbitmq-server-generic-unix-3.6.1/sbin
  2. Create a 3-node cluster by executing the following commands:

     export RABBITMQ_NODE_PORT=5672 && export RABBITMQ_NODENAME=rabbit && ./rabbitmq-server -detached
     export RABBITMQ_NODE_PORT=5673 && export RABBITMQ_NODENAME=rabbit_1 && ./rabbitmq-server -detached
     export RABBITMQ_NODE_PORT=5674 && export RABBITMQ_NODENAME=rabbit_2 && ./rabbitmq-server -detached
     ./rabbitmqctl -n rabbit_1@hostname stop_app
     ./rabbitmqctl -n rabbit_1@hostname join_cluster rabbit@hostname
     ./rabbitmqctl -n rabbit_1@hostname start_app
     ./rabbitmqctl -n rabbit_2@hostname stop_app
     ./rabbitmqctl -n rabbit_2@hostname join_cluster rabbit@hostname
     ./rabbitmqctl -n rabbit_2@hostname start_app
  3. Enable the rabbitmq_management plugin:

     ./rabbitmq-plugins enable rabbitmq_management

     and access the management UI from http://localhost:15672.
  4. Verify that your 3-node cluster was successfully created by checking the cluster status:

     ./rabbitmqctl cluster_status

     or from the Overview page of the management UI.

     Fig 3. RabbitMQ cluster nodes
  5. Next, navigate to the Queues tab of the management UI, and create 3 queues on node rabbit_2@hostname. I prefixed my queues with the node name, i.e. rabbit_2.queue.1, and created them as follows:

     Fig 4. Add rabbit_2@hostname queues

     Repeat this procedure for queues rabbit_2.queue.2 and rabbit_2.queue.3.
  6. Repeat step 5 on nodes rabbit_1@hostname and rabbit@hostname, creating 5 and 9 queues on each, respectively.
  7. If you’ve carried out steps 5 & 6 correctly, your queue listing should be similar to the following:

     Fig 5. Created queues

     Or, from the command line, by executing the following:

     ./rabbitmqctl list_queues -q name pid state

     resulting in a queue listing like this:

     Ayandas-MacBook-Pro:sbin ayandadube$ ./rabbitmqctl list_queues -q name pid state
     rabbit_2.queue.3  <rabbit_2@Ayandas-MacBook-Pro.3.1527.1>  running
     rabbit.queue.3    <rabbit@Ayandas-MacBook-Pro.2.4772.0>    running
     rabbit.queue.2    <rabbit@Ayandas-MacBook-Pro.2.4761.0>    running
     rabbit.queue.4    <rabbit@Ayandas-MacBook-Pro.2.4783.0>    running
     rabbit.queue.6    <rabbit@Ayandas-MacBook-Pro.2.4815.0>    running
     rabbit.queue.7    <rabbit@Ayandas-MacBook-Pro.2.4827.0>    running
     rabbit_1.queue.2  <rabbit_1@Ayandas-MacBook-Pro.2.1433.0>  running
     rabbit_1.queue.4  <rabbit_1@Ayandas-MacBook-Pro.2.1455.0>  running
     rabbit.queue.1    <rabbit@Ayandas-MacBook-Pro.2.4751.0>    running
     rabbit_1.queue.1  <rabbit_1@Ayandas-MacBook-Pro.2.1416.0>  running
     rabbit_2.queue.1  <rabbit_2@Ayandas-MacBook-Pro.3.660.1>   running
     rabbit.queue.9    <rabbit@Ayandas-MacBook-Pro.2.4848.0>    running
     rabbit_1.queue.3  <rabbit_1@Ayandas-MacBook-Pro.2.1444.0>  running
     rabbit_2.queue.2  <rabbit_2@Ayandas-MacBook-Pro.3.896.1>   running
     rabbit.queue.8    <rabbit@Ayandas-MacBook-Pro.2.4838.0>    running
     rabbit_1.queue.5  <rabbit_1@Ayandas-MacBook-Pro.2.1476.0>  running
     rabbit.queue.5    <rabbit@Ayandas-MacBook-Pro.2.4794.0>    running

     NOTE: The pid’s prefix is a good indicator of each queue’s home node.
  8. Next, we’re going to create a queue with the min-masters strategy configured, and verify that it’s been created on the correct, expected node. We’ll call our queue MinMasterQueue.1.

     First, let’s configure the min-masters queue master location policy. We’ll name this policy qml-policy, and set it to apply to all queues whose names are prefixed with MinMasterQueue:

     ./rabbitmqctl set_policy qml-policy "^MinMasterQueue\." '{"queue-master-locator":"min-masters"}' --apply-to queues

     With our policy configured, we can now go on and create our MinMasterQueue.1 queue. To prove that the min-masters policy does work, we’ll create MinMasterQueue.1 on the node with the most queues, i.e. rabbit@hostname (9 queues). With the queue master location policy in effect, the rabbit@hostname node we selected should be overridden by the node computed from the policy, which has the fewest queues, i.e. rabbit_2@hostname. Let’s proceed!
  9. So, as mentioned in step 8, let’s create MinMasterQueue.1 on the node with the most queues, rabbit@hostname, as follows:

     Fig 6. MinMasterQueue creation
  10. Now the moment of truth; let’s verify that the queue was created on the correct node:

      Fig 7. Min-master queue

      Or, from the command line, by executing the following:

      ./rabbitmqctl list_queues -q name pid

      which should yield the following as part of the full list of displayed queues:

      MinMasterQueue.1  <rabbit_2@Ayandas-MacBook-Pro.3.1995.3>

      The results are indeed correct: the home node of MinMasterQueue.1 is rightly the one which had the fewest queue masters, i.e. rabbit_2@hostname.
  11. You can repeatedly execute step 9, creating more MinMasterQueue.N queues, to see the queue master location strategy in effect. The home node of each new queue will alternate from one node to another, depending on each node’s queue master count at the moment of execution.

      Fig 8. Min-master queues
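
If you’d rather script the queue seeding from steps 5 and 6 than click through the management UI, a short Pika script along these lines would do (a sketch assuming the ports from step 2, a broker on localhost, and the default guest credentials):

import pika

def declare_queues(port, prefix, count):
    # Connect to the node listening on the given AMQP port and declare
    # `count` queues named `<prefix>.queue.<n>`.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="localhost", port=port))
    channel = connection.channel()
    for n in range(1, count + 1):
        channel.queue_declare(queue="%s.queue.%d" % (prefix, n))
    connection.close()

declare_queues(5674, "rabbit_2", 3)  # step 5: rabbit_2@hostname
declare_queues(5673, "rabbit_1", 5)  # step 6: rabbit_1@hostname
declare_queues(5672, "rabbit", 9)    # step 6: rabbit@hostname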

This is a quick illustration of the mechanism at work. In addition to setting a policy from the command line, as in step 8, there are also other means of defining the queue master location strategy, which I illustrate in the next section.

How to configure

Following are some examples of how to configure queue master location strategies.

1. rabbitmq.config

Firstly, to set the location strategy from the rabbitmq.config file, simply add the following configuration entry;

{rabbit, [
          ...
          {queue_master_locator, <<"min-masters">>},
          ...
         ]},

NOTE: The strategy is configured as an Erlang binary data type, i.e. <<"min-masters">>.

2. Policy

As already seen in our experiment, setting the strategy as a policy in a UNIX environment can be carried out as follows;

rabbitmqctl set_policy qml-policy ".*" '{"queue-master-locator":"min-masters"}' --apply-to queues

This creates a min-masters queue location strategy policy, named qml-policy, which, due to the ".*" wildcard match pattern, will be applied to all queues created on the node/cluster.

You can find more information on defining policies from the official RabbitMQ documentation.

3. Declare arguments

I illustrate setting the queue location strategy from declare arguments using three examples: in Erlang, Java and Python.

In Erlang, you’d simply specify the location strategy as part of the ‘queue.declare’ record as follows;

  Args = [{<<"x-queue-master-locator">>, <<"min-masters">>}],
  QueueDeclare = #'queue.declare'{queue       = <<"microservice.queue.1">>,
                                  auto_delete = true,
                                  durable     = false,
                                  arguments   = Args},
  #'queue.declare_ok'{} = amqp_channel:call(Channel, QueueDeclare),

In Java, just create an arguments map, define the queue master location strategy and declare the queue as follows;

Map<String, Object> args = new HashMap<>();
args.put("x-queue-master-locator", "min-masters");
channel.queueDeclare("microservice.queue.1", false, false, false, args);

Similarly, in Python, using the Pika AMQP library, you’d do something like the following;

queue_name = 'microservice.queue.1'
args = {"x-queue-master-locator": "min-masters"}
channel.queue_declare(queue=queue_name, durable=True, arguments=args)
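
If you’d like a runnable, end-to-end version of that Pika snippet right away, something like the following would do (a sketch assuming a broker on localhost with the default credentials; the queue name is the same illustrative one used above):

import pika

# Open a connection and channel against a local broker.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare the queue with the min-masters location strategy.
queue_name = 'microservice.queue.1'
args = {"x-queue-master-locator": "min-masters"}
channel.queue_declare(queue=queue_name, durable=True, arguments=args)

connection.close()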

You can find complete versions of these snippets here. These are simplified primers which you can build upon. If you have a requirement to implement something more complex, and need some assistance, please don’t hesitate to get in touch!


Erlang Solutions is the world leader in RabbitMQ consultancy, development, and support.

We can help you design, set up, operate and optimise a system with RabbitMQ. Got a system with more than the typical requirements? We also offer RabbitMQ customisation and bespoke support.
