Wow, what the hell am I doing…
I’ve been watching Docker from a distance for some time and have been completely fascinated by its potential. That said, I’d struggled to find an excuse to play with it, partly because I’m a Windows user but mostly because I’m lazy.
I’ve come up against a challenge with a network configuration where I needed to communicate between two networks in separate locations. One of the environments uses RabbitMQ to manage some communication between applications. It’s in its infancy (a single node on one machine) but is already the backbone of a number of systems.
We have an event that can occur in both environments, so ideally I’d like to emit the event and be able to subscribe to it in one of the environments. It turns out that RabbitMQ offers functionality to allow this: a federation.
I’ll call the two environments “upstream” and “downstream” (all the event processing is done in the downstream environment). I needed a mechanism to get events from the upstream into the downstream ready for processing. It also needed to be resilient and provide high availability:
- What happens when a RabbitMQ node fails?
- What happens if the network connection between the environments fails?
I wanted to set this all up quickly and without commissioning a load of kit for it. I decided I could use Docker and run multiple RabbitMQ instances locally in order to prove the configuration. This would allow me to simulate two environments, each with its own cluster, and see what happens when all or part of an environment ceases to exist.
In order to do this I needed to set a few things up. Boot2docker provides a nice way to use Docker on Windows. I installed it with all the options selected.
I also had to make sure ssh.exe could be accessed by adding its location to the PATH variable (“c:\Program Files (x86)\Git\Bin” in my case).
With this done I was ready to use boot2docker.
With the installation complete, all interaction with boot2docker and Docker can be done via the command line. In order to initialise Docker you run the following commands:
- boot2docker init
- boot2docker start
The above steps prepare the Linux image and start the virtual machine. We then need to add several environment variables; with PowerShell this can be done as follows:
- boot2docker shellinit | Invoke-Expression
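For reference, shellinit exports Docker’s connection details in environment variables along these lines (the exact IP and paths will vary with your setup):
- $Env:DOCKER_HOST = "tcp://192.168.59.103:2376"
- $Env:DOCKER_CERT_PATH = "C:\Users\<you>\.boot2docker\certs\boot2docker-vm"
- $Env:DOCKER_TLS_VERIFY = "1"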
With boot2docker set up we can now start using Docker itself. In my example setup I want two clusters, each having two nodes:
- downstream1 + downstream2
- upstream1 + upstream2
I’ll need 4 containers with the pairs linked. I’ll also need an image to use for the containers; I used bijukunjummen/rabbitmq-server. This image has RabbitMQ on it plus the federation plugin. The following command will run a container using that image:
- docker run -d -p 5671:5672 -p 15671:15672 -h downstream1 --name downstream1 bijukunjummen/rabbitmq-server
I’m creating the container as detached (-d) and mapping two ports from the host to the container (5671 to 5672, 15671 to 15672); I’m then setting the host name as “downstream1” and the container name as “downstream1”. The last segment specifies the image to use. If you’re using an image for the first time you should see output as follows:
Docker downloads the image for you. Run the following command to list the running containers:
- docker ps
You can now access the running rabbit instance via the host IP address and the management interface port specified (15671). In my case this was 192.168.59.103:15671. If you want to be able to access this from another machine you can use port forwarding in the VirtualBox configuration settings.
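A hedged sketch of that forwarding rule with VBoxManage, assuming the default VM name “boot2docker-vm”:
- VBoxManage controlvm "boot2docker-vm" natpf1 "rabbit-mgmt,tcp,,15671,,15671"
This forwards port 15671 on the Windows host to the same port on the running VM.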
As we’re running a cluster we’ll need at least one more node; to create it I used the following command:
- docker run -d -p 5672:5672 -p 15672:15672 -h downstream2 --name downstream2 --link downstream1:downstream1 bijukunjummen/rabbitmq-server
There’s an extra parameter here, “--link downstream1:downstream1”. This creates an alias (name-or-id:alias) on the new container, allowing it to see the other container, “downstream1”; without this the nodes won’t be able to communicate.
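As a quick sanity check, you can confirm the link by looking at the hosts entry Docker writes into the new container:
- docker exec downstream2 cat /etc/hosts
You should see a line mapping “downstream1” to the other container’s IP address.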
Run the following command to see the running containers:
- docker ps
There are a few things to note from the clustering guide. Clustered nodes must share the same Erlang cookie; because I’m using the same image for all the containers, they already do.
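If you want to check the cookies really do match, you can compare them directly (assuming the cookie lives in the rabbitmq user’s home directory, as it does on most installs):
- docker exec downstream1 cat /var/lib/rabbitmq/.erlang.cookie
- docker exec downstream2 cat /var/lib/rabbitmq/.erlang.cookie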
We’re going to cluster downstream2 to downstream1. We have to stop the rabbit application in order to join it to a cluster:
- docker exec downstream2 rabbitmqctl stop_app
This executes a command on the container; stop_app stops the rabbit application.
- docker exec downstream2 rabbitmqctl join_cluster rabbit@downstream1
This joins downstream2 into a cluster with downstream1. We then need to start the rabbit application again on the downstream2 container:
- docker exec downstream2 rabbitmqctl start_app
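You can also confirm the cluster formed from the command line:
- docker exec downstream2 rabbitmqctl cluster_status
Both rabbit@downstream1 and rabbit@downstream2 should be listed under running_nodes.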
The nodes overview in the rabbit management interface should now show both nodes:
I’m looking to create a federation, as I intend to have two rabbit clusters with messages being transported from the upstream. I ran the following commands to create the upstream cluster:
- docker run -d -p 5673:5672 -p 15673:15672 -h upstream1 --name upstream1 bijukunjummen/rabbitmq-server
- docker run -d -p 5674:5672 -p 15674:15672 -h upstream2 --name upstream2 --link upstream1:upstream1 bijukunjummen/rabbitmq-server
- docker exec upstream2 rabbitmqctl stop_app
- docker exec upstream2 rabbitmqctl join_cluster rabbit@upstream1
- docker exec upstream2 rabbitmqctl start_app
The above steps set up a cluster in the same way as we did with downstream1 and downstream2.
We should now have four containers running, forming two pairs of clustered nodes 🙂
High Availability Queue
The queue we subscribe to will live on the downstream cluster, so the next step was to create it there. I did this using the RabbitMQ web interface (a scripted alternative is sketched after the binding step below):
I then created a fanout exchange (DownstreamExchange):
I then bound this exchange to my queue:
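If you’d rather script these steps than click through the UI, the same can be done via RabbitMQ’s management HTTP API. A sketch, assuming the default guest/guest credentials work remotely on this image and using the host IP and port from earlier (%2F is the default virtual host, URL-encoded):
- curl -u guest:guest -H "content-type:application/json" -X PUT -d '{"durable":true}' http://192.168.59.103:15671/api/queues/%2F/DownstreamQueue
- curl -u guest:guest -H "content-type:application/json" -X PUT -d '{"type":"fanout","durable":true}' http://192.168.59.103:15671/api/exchanges/%2F/DownstreamExchange
- curl -u guest:guest -H "content-type:application/json" -X POST -d '{"routing_key":"","arguments":{}}' http://192.168.59.103:15671/api/bindings/%2F/e/DownstreamExchange/q/DownstreamQueue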
In order to set up high availability in RabbitMQ we use a policy:
This creates a “DownstreamHA” policy, matches the queue named “DownstreamQueue” and applies to queues only. I’ve set “ha-mode” to “all”:
“Queue is mirrored across all nodes in the cluster. When a new node is added to the cluster, the queue will be mirrored to that node.”
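The same policy can also be created from the command line; a sketch (the exact name pattern “^DownstreamQueue$” is my assumption of how the queue was matched):
- docker exec downstream1 rabbitmqctl set_policy --apply-to queues DownstreamHA "^DownstreamQueue$" '{"ha-mode":"all"}'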
Once the policy is added, it applies to the DownstreamQueue queue:
The “+1” next to the node tells us that the queue is mirrored.
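The mirrors are also visible from the command line:
- docker exec downstream1 rabbitmqctl list_queues name policy slave_pids
A mirrored queue will list a slave process running on the other node.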
In order to create a federation there are a few things which need setting up on the upstream cluster:
- Create a user for the downstream to connect with
- Create Virtual Host (I couldn’t get access to the default virtual host when I first tried this)
In this case I’ve created a user “downstream” with a password “password”:
Then I created the virtual host “Downstream”:
In order for the downstream to gain access, permissions then need granting via the user admin page:
Clicking “Set Permission” grants that user permissions for that virtual host:
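If you prefer, the whole upstream preparation can be scripted with rabbitmqctl using the same names:
- docker exec upstream1 rabbitmqctl add_user downstream password
- docker exec upstream1 rabbitmqctl add_vhost Downstream
- docker exec upstream1 rabbitmqctl set_permissions -p Downstream downstream ".*" ".*" ".*"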
We created a federation by defining a Federation Upstream using the admin section on the downstream cluster:
As there are two nodes on the upstream I specified the URI for both: amqp://downstream:password@192.168.59.103:5673/downstream and amqp://downstream:password@192.168.59.103:5674/downstream.
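The upstream definition can also be created with rabbitmqctl on the downstream cluster; a sketch, where the upstream name “my-upstream” is just an illustrative label:
- docker exec downstream1 rabbitmqctl set_parameter federation-upstream my-upstream '{"uri":["amqp://downstream:password@192.168.59.103:5673/downstream","amqp://downstream:password@192.168.59.103:5674/downstream"]}'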
The Federation Upstream won’t run unless we create a policy which applies it to the exchange. I added a policy as follows:
I applied this only to exchanges and set the definition “federation-upstream-set” to “all”.
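As a command-line equivalent, something like the following should work (the policy name “DownstreamFederation” and the exchange pattern are my assumptions):
- docker exec downstream1 rabbitmqctl set_policy --apply-to exchanges DownstreamFederation "^DownstreamExchange$" '{"federation-upstream-set":"all"}'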
Once added the upstream status should be visible under the admin section:
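The link status can also be queried from the command line:
- docker exec downstream1 rabbitmqctl eval 'rabbit_federation_status:status().'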
The exchange was not visible on the upstream cluster until I granted the user I was logged in with the relevant permissions:
And in the queues:
I wanted the queue to have high availability on the upstream too, so I created a policy for that:
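A sketch of that policy via rabbitmqctl (the policy name “UpstreamHA” is illustrative; note it’s set against the upstream’s virtual host):
- docker exec upstream1 rabbitmqctl set_policy -p Downstream --apply-to queues UpstreamHA ".*" '{"ha-mode":"all"}'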
You can see the policy applied to the queue as follows:
I now have two clusters, high availability queues and a federation. I then wanted to see it in action; the main thing I wanted to test was that the upstream would still function when the downstream cluster couldn’t be reached. I tested this by pausing the downstream containers. I was able to push messages into the upstream exchange; they waited in the upstream queue until I unpaused one or all of the downstream containers. Pausing individual nodes in each cluster also gave me the resilience I wanted.
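For reference, the pause test itself is just Docker’s pause/unpause commands:
- docker pause downstream1 downstream2
- docker unpause downstream1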