Docker Swarm
Well, now we have a very interesting tool that allows us to build container clusters natively and with extreme ease, as is customary with products created by the Docker team. ;)
With Docker Swarm you can build container clusters with important features such as load balancing and failover.
To create a cluster with Docker Swarm, you just need to indicate which hosts it will supervise and the rest is up to it.
For example, when you create a new container, it will create it on the host that has the lowest load, meaning it will take care of load balancing and will always ensure that the container is created on the best available host at the time.
The cluster structure of Docker Swarm is quite simple and consists of a manager and several workers. The manager is responsible for orchestrating the containers and distributing them among the worker hosts. The workers are the ones that do the heavy lifting, hosting the containers.
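Before we start, it can be handy to check whether a host is already part of a swarm. One way to do that, assuming a standard Docker installation, is to ask the engine for its swarm state:

```shell
# Prints "inactive" if this host is not part of a swarm yet,
# or "active" once it has joined one
docker info --format '{{ .Swarm.LocalNodeState }}'
```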
13.1. Creating our cluster
An important change arrived with version 1.12: Docker Swarm became part of Docker itself. This means that today, when you install Docker, you automatically install Docker Swarm, which is nothing more than a way to orchestrate your containers by creating a cluster with high availability, load balancing, and encrypted communication, all of this natively, without any effort or difficulty.
For our scenario, we will use three Ubuntu machines. The idea is to have two managers and one worker.
We always want more than one manager node, because if we lose our only manager, the cluster becomes unmanageable. (In production, an odd number of managers, such as three or five, is recommended: the managers need a majority, a quorum, to elect a leader.)
With this, we have the following scenario:
- LINUXtips-01 -- Active Manager.
- LINUXtips-02 -- Manager.
- LINUXtips-03 -- Worker.
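A small note before we begin: if your machine has more than one network interface, "docker swarm init" will ask you to choose which IP the other nodes should use to reach this manager. In that case you can pass it explicitly with "--advertise-addr"; the IP below is just the one from our scenario:

```shell
# Tell Swarm which IP the other nodes should use to reach this manager
docker swarm init --advertise-addr 172.31.58.90
```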
It goes without saying that we need to have Docker installed on all these machines, right, friend? :D
To start, let's run the following command on "LINUXtips-01":
root@linuxtips-01:~# docker swarm init
Swarm initialized: current node (2qacv429fvnret8v09fqmjm16) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-100qtga34hfnf14xdbbhtv8ut6ugcvuhsx427jtzwaw1td2otj-18wccykydxte59gch2pix 172.31.58.90:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
root@linuxtips-01:~#
With the previous command, we initialized our cluster!
Notice the following part of the last command's output:
# docker swarm join --token SWMTKN-1-100qtga34hfnf14xdbbhtv8ut6ugcvuhsx427jtzwaw1td2otj-18wccykydxte59gch2pix 172.31.58.90:2377
This line is nothing more than all the information you need to add workers to your cluster! How so?
Simple: what you need to do now is run exactly this command on the next machine you want to include in the cluster as a worker! Easy as pie, right?
According to our plan, the only machine that will be a worker is the "LINUXtips-03" machine, correct? So let's access it and run exactly the command line recommended in the output of "docker swarm init".
root@linuxtips-03:~# docker swarm join --token SWMTKN-1-100qtga34hfnf14xdbbhtv8ut6ugcvuhsx427jtzwaw1td2otj-18wccykydxte59gch2pix 172.31.58.90:2377
This node joined a swarm as a worker.
root@linuxtips-03:~#
Wonderful! Another node added to the cluster!
To see which nodes exist in the cluster, just type the following command on the active manager:
root@linuxtips-01:~# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
2qac LINUXtips-01 Ready Active Leader 18.03.1-ce
nmxl LINUXtips-03 Ready Active 18.03.1-ce
root@linuxtips-01:~#
As we can see in the command output, we have two nodes in our cluster, one as manager and another as worker. The "MANAGER STATUS" column shows who is the "Leader", that is, who is our active manager.
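As the cluster grows, you can also filter the node list by role; a quick sketch:

```shell
# Show only the worker nodes (use role=manager for the managers)
docker node ls --filter role=worker
```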
In our plan we would have two managers, right?
Now the question is: how do I know which token I need to use to add another node to my cluster, but this time as another manager?
Remember that when we ran the command to add the worker to the cluster, we had a token in the command? Well, this token is what defines whether the node will be a worker or a manager, and in that output it only gave us the token to add workers.
To be able to view the command and token referring to managers, we need to run the following command on the active manager:
root@linuxtips-01:~# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-100qtga34hfnf14xdbbhtv8ut6ugcvuhsx427jtzwaw1td2otj-3i4jsv4i70odu1mes0ebe1l1e 172.31.58.90:2377
root@linuxtips-01:~#
To view the command and token referring to workers:
root@linuxtips-01:~# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-100qtga34hfnf14xdbbhtv8ut6ugcvuhsx427jtzwaw1td2otj-18wccykydxte59gch2pixq9av 172.31.58.90:2377
root@linuxtips-01:~#
Easy, right?
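A security note: if one of these join tokens ever leaks, you don't need to rebuild the cluster. You can rotate the token, which invalidates the old one and prints a new one:

```shell
# Generate a new worker join token; nodes already in the cluster stay put
docker swarm join-token --rotate worker
```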
Now what we need is to run on "LINUXtips-02" the command to include another node as manager. Therefore, run:
root@linuxtips-02:~# docker swarm join --token SWMTKN-1-100qtga34hfnf14xdbbhtv8ut6ugcvuhsx427jtzwaw1td2otj-3i4jsv4i70odu1mes0ebe1l1e 172.31.58.90:2377
This node joined a swarm as a manager.
root@linuxtips-02:~#
Done! Now we have our complete cluster with two managers and one worker!
Let's view the nodes that are part of our cluster. Remember: any command for cluster administration or service creation must be run on the active manager, always!
root@linuxtips-01:~# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
2qac LINUXtips-01 Ready Active Leader 18.03.1-ce
j6lm LINUXtips-02 Ready Active Reachable 18.03.1-ce
nmxl LINUXtips-03 Ready Active 18.03.1-ce
root@linuxtips-01:~#
Notice that the "MANAGER STATUS" of "LINUXtips-02" is "Reachable". This indicates that it is a manager, but it is not the active manager, which always carries the "MANAGER STATUS" as "Leader".
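If you ever want to check this from a script instead of reading the table, "docker node inspect" accepts a Go template; for a manager node, it can tell you whether that node is the current leader:

```shell
# Prints "true" if the node is the leader, "false" otherwise
# (only manager nodes have a ManagerStatus)
docker node inspect -f '{{ .ManagerStatus.Leader }}' LINUXtips-02
```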
If we want to know details about a specific node, we can use the "inspect" subcommand:
root@linuxtips-01:~# docker node inspect LINUXtips-02
[
{
"ID": "x3fuo6tdaqjyjl549r3lu0vbj",
"Version": {
"Index": 27
},
"CreatedAt": "2017-06-09T18:09:48.925847118Z",
"UpdatedAt": "2017-06-09T18:09:49.053416781Z",
"Spec": {
"Labels": {},
"Role": "manager",
"Availability": "active"
},
"Description": {
"Hostname": "LINUXtips-02",
"Platform": {
"Architecture": "x86_64",
"OS": "linux"
},
"Resources": {
"NanoCPUs": 1000000000,
"MemoryBytes": 1038807040
},
"Engine": {
"EngineVersion": "17.05.0-ce",
"Plugins": [
{
"Type": "Network",
"Name": "bridge"
},
{
"Type": "Network",
"Name": "host"
},
{
"Type": "Network",
"Name": "null"
},
{
"Type": "Network",
"Name": "overlay"
},
{
"Type": "Volume",
"Name": "local"
}
]
}
},
"Status": {
"State": "ready",
"Addr": "172.31.53.23"
}
}
]
root@linuxtips-01:~#
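If all that JSON is more than you need, the "--pretty" flag prints a human-friendly summary of the same information:

```shell
# Same data as the full JSON output, formatted for humans
docker node inspect --pretty LINUXtips-02
```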
And if we want to promote a worker node to manager, how should we do it? Easy as pie, check it out:
root@linuxtips-01:~# docker node promote LINUXtips-03
Node LINUXtips-03 promoted to a manager in the swarm.
root@linuxtips-01:~# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
2qac LINUXtips-01 Ready Active Leader 18.03.1-ce
j6lm LINUXtips-02 Ready Active Reachable 18.03.1-ce
nmxl LINUXtips-03 Ready Active Reachable 18.03.1-ce
root@linuxtips-01:~#
If you want to turn a manager node into a worker, do:
root@linuxtips-01:~# docker node demote LINUXtips-03
Manager LINUXtips-03 demoted in the swarm.
Let's check:
root@linuxtips-01:~# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
2qac LINUXtips-01 Ready Active Leader 18.03.1-ce
j6lm LINUXtips-02 Ready Active Reachable 18.03.1-ce
nmxl LINUXtips-03 Ready Active 18.03.1-ce
root@linuxtips-01:~#
Now, if you want to remove a node from the cluster, just type the following command on the desired node:
root@linuxtips-03:~# docker swarm leave
Node left the swarm.
root@linuxtips-03:~#
And we also need to run the removal command for this node on our active manager as follows:
root@linuxtips-01:~# docker node rm LINUXtips-03
LINUXtips-03
root@linuxtips-01:~#
With this, we can run "docker node ls" and verify that the node was actually removed from the cluster. If you want to add it again, just repeat the process that was used to add it, remember? :D
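One tip before removing a node that is still hosting containers: you can first drain it, so the manager reschedules its containers onto the other nodes before the node leaves:

```shell
# Stop scheduling new containers on the node and move its current
# containers elsewhere in the cluster before removing it
docker node update --availability drain LINUXtips-03
```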
To remove a manager node from our cluster, we need to add the "--force" flag to the "docker swarm leave" command, as shown below:
root@linuxtips-02:~# docker swarm leave --force
Node left the swarm.
root@linuxtips-02:~#
Now, just remove it also on our manager node:
root@linuxtips-01:~# docker node rm LINUXtips-02
LINUXtips-02
root@linuxtips-01:~#
13.2. The sensational services
One of the best things that Docker Swarm offers us is precisely the ability to make use of services.
A service is nothing more than a VIP or DNS name that load balances requests among containers. We can define a given number of containers to respond for a service, and these containers will be spread across our cluster, among our nodes, ensuring high availability and load balancing, all natively!
Services, a concept also used in Kubernetes, are a way to manage your containers by focusing on the service those containers provide, while guaranteeing high availability and load balancing. It's a very simple and effective way to scale your environment, increasing or decreasing the number of containers that respond to a given service.
A bit confusing? Yes, I know, but it will become easy. :)
Imagine we need to make the Nginx service available to be the new web server. Before creating this service, we need some information:
- Name of the service I want to create: webserver.
- Number of containers I want under the service: 5.
- Ports we will "bind" between the service and the node: 8080:80.
- Image of the containers I will use: nginx.
Now that we have this information, let's create our first service. :)
root@linuxtips-01:~# docker service create --name webserver --replicas 5 -p 8080:80 nginx
0azz4psgfpkf0i5i3mbfdiptk
root@linuxtips-01:~#
Now we have our service created. To test it, just run:
root@linuxtips-01:~# curl ANY_CLUSTER_NODES_IP:8080
The result of the previous command will bring you the Nginx welcome page.
Since we are using services, each connection will go to a different container, thus performing load balancing "automagically"!
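You can watch this balancing in action by hitting the service a few times in a row; each request may be answered by a different container. Replace ANY_CLUSTER_NODES_IP with the IP of any node in your cluster:

```shell
# Fire five requests at the service and print the HTTP status of each;
# behind the scenes, Swarm spreads them across the replicas
for i in $(seq 1 5); do
  curl -s -o /dev/null -w "%{http_code}\n" http://ANY_CLUSTER_NODES_IP:8080
done
```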
To view the created service, run:
root@linuxtips-01:~# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
0azz4p webserver replicated 5/5 nginx:latest *:8080->80/tcp
As we can see, we have the service created with five replicas running, that is, five containers running.
If we want to know where our containers are running, on which nodes they are being executed, just type the following command:
root@linuxtips-01:~# docker service ps webserver
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
zbt1j webserver.1 nginx:latest LINUXtips-01 Running Running 8 minutes ago
iqm9p webserver.2 nginx:latest LINUXtips-02 Running Running 8 minutes ago
jliht webserver.3 nginx:latest LINUXtips-01 Running Running 8 minutes ago
qcfth webserver.4 nginx:latest LINUXtips-03 Running Running 8 minutes ago
e17um webserver.5 nginx:latest LINUXtips-02 Running Running 8 minutes ago
root@linuxtips-01:~#
This way we can know where each container is running and also its status.
If I need to know more details about my service, just use the "inspect" subcommand.
root@linuxtips-01:~# docker service inspect webserver
[
{
"ID": "0azz4psgfpkf0i5i3mbfdiptk",
"Version": {
"Index": 29
},
"CreatedAt": "2017-06-09T19:35:58.180235688Z",
"UpdatedAt": "2017-06-09T19:35:58.18899891Z",
"Spec": {
"Name": "webserver",
"Labels": {},
"TaskTemplate": {
"ContainerSpec": {
"Image": "nginx:latest@sha256:41ad9967ea448d7c2b203c699b429abe1ed5af331cd92533900c6d77490e0268",
"StopGracePeriod": 10000000000,
"DNSConfig": {}
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"Delay": 5000000000,
"MaxAttempts": 0
},
"Placement": {},
"ForceUpdate": 0
},
"Mode": {
"Replicated": {
"Replicas": 5
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"RollbackConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 8080,
"PublishMode": "ingress"
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 8080,
"PublishMode": "ingress"
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 8080,
"PublishMode": "ingress"
}
],
"VirtualIPs": [
{
"NetworkID": "89t2aobeik8j7jcre8lxhj04l",
"Addr": "10.255.0.5/16"
}
]
}
}
]
root@linuxtips-01:~#
In the "inspect" output we can get important information about our service, such as exposed ports, volumes, containers, limitations, among other things.
A very important piece of information is the VIP address of the service:
"VirtualIPs": [
{
"NetworkID": "89t2aobeik8j7jcre8lxhj04l",
"Addr": "10.255.0.5/16"
}
]
This is the IP address of the "balancer" of this service: whenever you access the service through this IP, it will distribute the connection among the containers. Simple, right?
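If you want to grab just this VIP without scrolling through the whole JSON, a Go template works here too:

```shell
# Print only the VIP of the first network the service is attached to
docker service inspect -f '{{ (index .Endpoint.VirtualIPs 0).Addr }}' webserver
```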
Now, if we want to increase the number of containers under this service, it's very simple. Just run the following command:
root@linuxtips-01:~# docker service scale webserver=10
webserver scaled to 10
root@linuxtips-01:~#
Done, simple as that!
Now we have ten containers responding to requests under our webserver service! Easy as pie!
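Just so you know, "docker service update" can do the same job; this is the same command you would use to change other attributes of a running service:

```shell
# Equivalent to "docker service scale webserver=10"
docker service update --replicas 10 webserver
```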
To view, just run:
root@linuxtips-01:~# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
0azz webserver replicated 10/10 nginx:latest *:8080->80/tcp
root@linuxtips-01:~#
To know which nodes they are running on, remember "docker service ps webserver".
To access the logs of this service, just type:
root@linuxtips-01:~# docker service logs -f webserver
webserver.5.e17umj6u6bix@LINUXtips-02 | 10.255.0.2 - - [09/Jun/2017:19:36:12 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-"
This way, you will have access to the logs of all containers of this service. Very practical!
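Since five (or ten) containers can produce a lot of output, it may help to limit the log window and add timestamps:

```shell
# Show only the 50 most recent lines from each container, with timestamps
docker service logs --tail 50 --timestamps webserver
```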
"I'm tired of playing around! I want to remove my service!" It's as simple as creating it. Type:
root@linuxtips-01:~# docker service rm webserver
webserver
root@linuxtips-01:~#
Done! Your service has been deleted and you can check it in the output of the following command:
root@linuxtips-01:~# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
root@linuxtips-01:~#
Creating a service with a connected volume is quite simple. Do:
root@linuxtips-01:~# docker service create --name webserver --replicas 5 -p 8080:80 --mount type=volume,src=teste,dst=/app nginx
yfheu3k7b8u4d92jemglnteqa
root@linuxtips-01:~#
When we create a service with a volume connected to it, that volume will be available in all the containers of the service: the volume named "teste" will be mounted at the "/app" directory in every container. Just keep in mind that with the default "local" volume driver, each node creates its own copy of "teste", so the data is not shared between nodes unless you use a volume plugin backed by shared storage.
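You can confirm this on any node that is running one of the service's containers, where the volume will exist locally:

```shell
# Run on a node hosting one of the containers;
# shows the driver and mountpoint of the "teste" volume on that node
docker volume inspect teste
```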