Distributing your web application over a cluster of cloud compute resources can significantly improve performance and availability. Docker Swarm is Docker's native clustering solution, which can turn a group of distributed Docker hosts into a single large virtual server.

Docker Swarm

Docker Swarm provides the standard Docker API, so it can communicate with any tool that already works with the Docker daemon, which allows easy scaling to multiple hosts. With resources pooled in a Swarm cluster, your application can run as if it were installed on a single high-performance server, while the cluster can be scaled at any time by adding or removing resources. This guide goes through the steps for setting up a simple Docker Swarm cluster.

Deploy your cloud servers

To start with, you are going to need to deploy the servers that will run the cluster. In this guide, the Swarm will be installed on five servers: a primary manager, a backup manager, a consul server and two compute nodes. Alternatively, it is possible to run a simpler Docker Swarm on a group of three instances, with a single manager and the consul on the same host plus two separate worker nodes.

When deploying the cloud servers for the cluster, note that while Docker itself will work on most Linux distributions, CentOS and other Red Hat variants might require additional steps to allow the Swarm to communicate because of their stricter default firewall rules.

Deploy new servers for your Swarm cluster.

  • manager1
  • manager2
  • consul
  • node1
  • node2

Once the new hosts are up and running, perform the usual security preparations, e.g. updating the system and adding users and SSH keys. You can find help with these steps in the guides for Managing Linux User Account Security and Using SSH-keys for Authentication.
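For example, on Debian or Ubuntu the basic preparations could look something like the sketch below, replacing <username> with the user you wish to add. The exact steps depend on your distribution and preferences.

# Update the package lists and upgrade the installed software
sudo apt-get update && sudo apt-get upgrade -y
# Create a new user and grant it sudo privileges
sudo adduser <username>
sudo usermod -aG sudo <username>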

Install Docker Engine on each server

With the initial configurations done, install the Docker Engine on each of the servers in your cluster. You will need the curl command line utility to do this. If it is not already installed, you can get it with one of the commands below applicable to your system.

# Debian and Ubuntu
sudo apt-get install curl -y
# CentOS
sudo yum install curl -y

Use the command underneath to download and run the Docker installation script. The script requires root privileges, so you will be asked for your sudo password when running it as a non-root user.

curl -sSL https://get.docker.com/ | sh
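If you prefer to review the installation script before running it, you can also download it to a file first and then execute it separately.

# Download the script without executing it
curl -fsSL https://get.docker.com/ -o get-docker.sh
# Inspect the contents, then run the script
less get-docker.sh
sh get-docker.sh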

After the installation finishes, Docker usually starts up on its own, but for the next part to work you will need to stop it.

sudo service docker stop

Then run the daemon with the following command:

sudo nohup docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock &
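Note that binding to 0.0.0.0 exposes the Docker API on all network interfaces of the host. If you wish to limit access to the private network only, you can instead bind the daemon to the server's private address, replacing <private IP> accordingly.

sudo nohup docker daemon -H tcp://<private IP>:2375 -H unix:///var/run/docker.sock &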

Either way, the daemon is left running in the background, and with Docker ready you can test that it is accepting commands.

sudo docker info

To make working with Docker easier, you should add your username to the docker user group. Adding a user to the group can be done with the command underneath by replacing <username> with your username.

sudo usermod -aG docker <username>

Afterward, sign out of the server and then back in again to have the group membership changes take effect. By doing so, you can use Docker commands without needing to invoke sudo.

Configure a discovery back-end

For the Swarm managers to know which nodes in the cluster are accessible, they utilize a service called consul, which works as the discovery back-end. Usually, the consul is run on its own host, but optionally you can install it directly on your primary Swarm manager. Use the command below to download and run the consul container.

docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap

The consul service maintains a list of IP addresses in your Swarm cluster. Node IPs do not need to be publicly available, and it is recommended to use the private addresses assigned to your cloud servers to create a secure cluster. Continue below with configuring the managers and nodes to your Swarm.
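You can verify that the consul container is up and answering by querying its HTTP API on the consul server, for example with the status endpoint below, which returns the address of the current cluster leader.

curl http://localhost:8500/v1/status/leader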

Create a Swarm Cluster

The easiest way to set up a Docker Swarm installation is to use the official images. The Swarm image is built and regularly updated by Docker themselves through an automated process. Installing Swarm on your manager and node servers is done with similar single-line commands.

Notice that unlike when installing the consul, you will need to define the IP addresses in your cluster. With UpCloud, all of your servers connect to your private network that is only accessible to the servers on your account. Using these static private IP addresses allows you to securely configure the cluster without worrying about firewalls or unintentionally exposing your Swarm to the public internet.

Install Swarm on the primary manager server. Replace <manager1 IP> with the private IP address of your manager server and <consul IP> with the private IP of the consul server.

docker run -d -p <manager1 IP>:4000:4000 swarm manage -H :4000 --replication --advertise <manager1 IP>:4000 consul://<consul IP>:8500

Repeat the step on your secondary manager. Remember to replace <manager2 IP> with the private IP address of your secondary manager server.

docker run -d -p <manager2 IP>:4000:4000 swarm manage -H :4000 --replication --advertise <manager2 IP>:4000 consul://<consul IP>:8500

Then install Swarm on each of your compute nodes. Replace <node IP> with the private IP of the node server you are currently installing the container on, and <consul IP> with the private IP of the consul server as before.

docker run -d swarm join --advertise=<node IP>:2375 consul://<consul IP>:8500

After each command, you should see the usual container ID output when a new container starts successfully. Once you are done installing the images on each of your servers in the cluster, your Docker Swarm is ready to test out.
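You can also confirm that the Swarm containers stayed up by listing the running containers on each host; the swarm image should show the manage command on the managers and the join command on the nodes.

docker ps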

Note that if you are using CentOS or any other OS with a similarly restrictive firewall by default, you will need to add the port numbers listed in each command to your firewalls to allow the Swarm nodes to communicate.
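For example, on CentOS 7 with firewalld the ports could be opened with commands along the lines of the ones below. Apply only the rules each host actually needs: 2375 for the Docker API on the nodes, 4000 for the Swarm managers and 8500 for the consul server.

sudo firewall-cmd --permanent --add-port=2375/tcp
sudo firewall-cmd --permanent --add-port=4000/tcp
sudo firewall-cmd --permanent --add-port=8500/tcp
sudo firewall-cmd --reload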

Running the Swarm

Docker Swarm works very similarly to traditional Docker. However, unlike with the usual Docker commands, you will need to define the host you wish to run the command on to reach the Swarm. When configured as above, the Docker Swarm manager allows executing commands from any host with Docker installed that has access to the manager host. Check the configuration on the primary manager using the command below.

docker -H <manager1 IP>:4000 info

The output will list information about the state of the Swarm and its nodes. If your consul server can discover the nodes, they will be listed in the output similarly to the example underneath. The nodes report some useful information such as the number of containers running on the node, the number of CPU cores, the amount of RAM, and details about the software the node is running.

Nodes: 2
 node1.example.com: 10.1.9.23:2375
 └ Status: Healthy
 └ Containers: 1
 └ Reserved CPUs: 0 / 1
 └ Reserved Memory: 0 B / 1.019 GiB
 └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-79-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
 └ Error: (none)
 └ UpdatedAt: 2016-03-11T11:24:51Z
 node2.example.com: 10.1.9.25:2375
 └ Status: Healthy
 └ Containers: 1
 └ Reserved CPUs: 0 / 1
 └ Reserved Memory: 0 B / 1.019 GiB
 └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-79-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
 └ Error: (none)
 └ UpdatedAt: 2016-03-11T11:25:13Z

If the nodes are listed as (unknown), the consul is unable to communicate with them. In this case, check your firewall rules to make sure that the consul server has access to the nodes and that the managers can reach it.
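Since each node exposes the Docker daemon on port 2375, one quick way to test connectivity is to query a node's Docker API directly from the consul or manager host. If the command below hangs or times out, a firewall is most likely blocking the connection.

docker -H <node IP>:2375 info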

You can start containers on the Swarm with the usual run command. Test your configuration by running hello-world with the command below.

docker -H <manager1 IP>:4000 run hello-world

Check which Swarm node ran the application.

docker -H <manager1 IP>:4000 ps -a
CONTAINER ID  IMAGE        COMMAND                 CREATED        STATUS        PORTS     NAMES
15b5846dfb63  hello-world  "/hello"                1 minute ago   Exited (0)              node2.example.com/lonely_shaw
f9a7f0b7c553  swarm        "/swarm join --advert"  4 minutes ago  Up 4 minutes  2375/tcp  node1.example.com/amazing_ramanujan
86554feb7334  swarm        "/swarm join --advert"  4 minutes ago  Up 4 minutes  2375/tcp  node2.example.com/condescending_kalam

In the example output above, node2 ran the test application, which then exited.
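The Swarm scheduler also supports filters, listed in the docker info output, which let you influence container placement. For example, a constraint filter passed as an environment variable can request a specific node by the name shown in the node list.

docker -H <manager1 IP>:4000 run -d -e constraint:node==node2.example.com hello-world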

Test Swarm fail-over

Distributing applications over multiple hosts provides increased availability in case one of the nodes has an error or needs to be restarted. In the same way, you can have multiple manager servers to support fail-over. Check your cluster info from the current primary manager as you did once before.

docker -H <manager1 IP>:4000 info
Server Version: swarm/1.1.3
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint

In comparison, if you run the same command on your secondary manager, the role will be shown as replica along with a reference to the primary.

docker -H <manager2 IP>:4000 info
Server Version: swarm/1.1.3
Role: replica
Primary: 10.1.8.184:4000
Strategy: spread
Filters: health, port, dependency, affinity, constraint

You can test the fail-over by shutting down your primary manager container on manager1.

docker stop <swarm manager name>

When the nodes in the Swarm realize the primary manager has failed, the replica manager will take the lead and become the new primary.

docker -H <manager2 IP>:4000 info
Server Version: swarm/1.1.3
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint

You can then start the Swarm container on manager1, and it will become the new replica.

docker start <swarm manager name>

Stopping and starting the Swarm containers serves to demonstrate how the managers switch roles when one fails, but the fail-over works the same regardless of the reason the primary manager becomes unavailable.

Conclusions

Docker Swarm is an easy way to get started with compute clusters. It provides high availability no matter the size of your deployment. Docker boasts results of up to a thousand nodes and fifty thousand containers with no performance degradation. Scaling your cluster is also convenient with the fast deployment of new hosts through your UpCloud Control Panel or the UpCloud API. New nodes are added to the cluster through the discovery back-end, the consul in this guide, but other options such as etcd and ZooKeeper are also available, as well as static node list files.
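As an example of the static list alternative, you could write the node addresses into a file on the manager host, one <node IP>:2375 per line, and point Swarm to it instead of the consul URL. A rough sketch, with /etc/swarm/cluster as an assumed file location:

# /etc/swarm/cluster lists one node address per line, e.g. 10.1.9.23:2375
docker run -d -p 4000:4000 -v /etc/swarm:/etc/swarm swarm manage -H :4000 file:///etc/swarm/cluster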

Now that you have a basic Swarm cluster set up, head over to the Docker documentation pages to learn more about Swarm. You can also jump straight into the deep end and for example try out the Swarm at scale.