We hear about containers everywhere. There are many foundational technologies out there, with the likes of docker, kubernetes and rkt to name a few. Most of us (the ones who still very much like what we do) have some sort of an idea of what containerization is. Then there is a smaller subset that understands this wonderful technology well, and then there is a much, much smaller set of containerization gurus. I am like most of us out there: trying to understand what containerization is and how it can make our lives easier and much more exciting.
Like everything else, I would like to approach it with a very generic example. Let's say you are running an application on a virtual machine (or a physical machine … really?). Now, what if I told you that you could run this application along with other applications on the same operating system (on a single machine) without any port conflicts? What if each of these little applications running on this “shared” operating system could have its own IP address, and its own runtime? What if each of these applications could be “contained” within its own environment, and ported easily from one operating system to another without making any changes? What if you could control the amount of resources these applications consume on the machine, and protect them from the outside world and from each other? What if you could automatically scale these applications, so that instead of running 1 instance you could run, let's say, 5? And then what if you could run those 5 instances across multiple virtual / physical machines (or even across data centers / clouds)? And what if I told you that containerization gives you all this and a lot more? Yup!.. I know, right?
For the course of this post, we will start by exploring docker, in particular docker swarm. During this exercise we will try to experience some of the features listed above and, if we are lucky, maybe some that are not. A docker swarm is nothing but a set of machines (it does not matter if they are physical or virtual) that work together to host applications as services, inside docker containers. Too many buzzwords, so let's break it down. In a docker environment, (ideally) each application runs in its own “containerized” environment called a docker container. When we want to make an application scalable, we create a service for it in docker. The service allows us to scale the application across multiple machines. All these machines together are called the docker swarm. So let's start setting up a docker swarm and try to host a service on it.
In order to set up our docker swarm we will be using 3 Ubuntu machines (Virtual or Physical). You can use virtual machines in a data center, or on your desktop using something like VirtualBox or have 3 Ubuntu Cloud instances. It really does not matter.
- For my environment I am using the following configuration on 3 Ubuntu cloud instances:
| Operating System | Ubuntu Server 16.04.1 |
Please note that this is just a test setup. A production configuration would be very different, and would probably be running on something like Ubuntu Core or Atomic.
- 1 machine is going to serve as the swarm manager, which manages the docker swarm operations, and 2 machines are going to work as worker nodes. Worker nodes are used to host docker containers. Note that the manager can also host docker containers in addition to its managerial jobs :).
- Also make sure that you have root access to all three machines and/or a user that has sudo access.
- Make sure you can ssh into each machine.
- In addition, open the following ports in the firewall for incoming traffic on each machine:
- TCP port 2377 for cluster management communications
- TCP and UDP port 7946 for communication among machines
- TCP and UDP port 4789 for overlay network traffic
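If you happen to be using ufw (as many Ubuntu setups do), the port rules above might look something like the sketch below. This assumes ufw is your active firewall; adapt it for iptables or whatever you actually run:

```shell
# open the swarm ports with ufw (assumption: ufw is installed and enabled)
sudo ufw allow 2377/tcp   # cluster management communications
sudo ufw allow 7946/tcp   # communication among machines
sudo ufw allow 7946/udp
sudo ufw allow 4789/tcp   # overlay network traffic
sudo ufw allow 4789/udp
```

Run this on all three machines, then `sudo ufw status` should list the new rules.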
- Update the hosts file on each machine to have the hostname / IP address entry of each of the other machines. For reference, please see the hosts file in my environment:
$ vi /etc/hosts
10.103.83.8 docker-manager manager
10.103.83.3 docker-worker-1 worker1
10.103.83.14 docker-worker-2 worker2
Now that we have completed the prerequisites, let's get cracking on the docker swarm.
Setting up the docker engine
On all machines (@manager, @worker1,@worker2)
Run the following commands to set up the docker engine on each machine:
- Get the docker key for the docker repository. A few tools are needed before you can fetch the key:
$ sudo apt-get update
$ sudo apt-get -y --no-install-recommends install curl apt-transport-https ca-certificates software-properties-common
$ curl -fsSL https://apt.dockerproject.org/gpg | sudo apt-key add -
- Check the key
$ apt-key fingerprint 58118E89F3A912897C070ADBF76221572C52609D
- Add the repository
$ sudo add-apt-repository \
    "deb https://apt.dockerproject.org/repo/ \
    ubuntu-$(lsb_release -cs) \
    main"
$ sudo apt-get update
- List all the available docker versions and choose the one you need to install. I picked version 1.13:
$ apt-cache madison docker-engine
$ sudo apt-get -y install docker-engine=1.13.0-0~ubuntu-xenial
- Test the docker engine on each machine and make sure that you can spawn a test ‘hello world’ container:
$ sudo docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/
If your machine is set up correctly, you should see output similar to the one above. Read through it and see how docker just tested your deployment. If you have any questions, just post in the comments.
What we have achieved so far is three independent machines, on each of which you can independently spawn docker containers. However, note that they are not aware of each other in terms of who is running what and where. It's time to get them to talk to each other.
Creating the docker swarm
The first step in creating a docker swarm is to assign one of the nodes as the manager. Run the following command to set up the manager node for the docker swarm (replace the IP with your manager's address):
$ sudo docker swarm init --advertise-addr 10.103.83.8
Once you run this command, docker performs a number of steps. If all goes well, at the end you will get an output similar to the one below, which has two sections.
Swarm initialized: current node (onreognwrkisxounadycyl23d) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-0gfwpcjnssvlnzgnolgp8le1amy2bcmvadlxo2bam514fig015-bgyvpw2rrjeaxo2bsvhjq9k4e \
    10.103.83.8:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Take note of the join command in the first section; we will need it to add the worker nodes to the swarm. Also note the second section: you can of course have more than one manager in a swarm, but for simplicity we will stick to one manager for this post.
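If you later decide you do want a second manager (for high availability), the sketch below shows the idea. It assumes you run the first command on the existing manager, and then run the join command it prints on the machine you want to add:

```shell
# on the current manager: print a join command carrying a *manager* token
sudo docker swarm join-token manager

# then, on the new machine, run the command that was printed, along the lines of:
#   docker swarm join --token <MANAGER-TOKEN> 10.103.83.8:2377
```

Production swarms typically run an odd number of managers (3 or 5) so that the managers can keep a quorum, but that is beyond the scope of this post.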
In order to check the status of the swarm run the following command:
$ sudo docker node ls
ID                           HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
onreognwrkisxounadycyl23d *  docker-manager  Ready   Active        Leader
Note that we have successfully initialized a docker swarm. It consists of only one node, which has the manager role. Note the Leader value in the MANAGER STATUS column.
Now let's add the worker nodes to the swarm. Run the following command on each of the worker nodes to join them to the swarm. Please note that the token and IP below need to be replaced with the specifics of your environment; you can copy the exact command from the output of the previous step, where we created the manager.
$ sudo docker swarm join \
    --token SWMTKN-1-0gfwpcjnssvlnzgnolgp8le1amy2bcmvadlxo2bam514fig015-bgyvpw2rrjeaxo2bsvhjq9k4e \
    10.103.83.8:2377
$ sudo docker node ls
ID                           HOSTNAME         STATUS  AVAILABILITY  MANAGER STATUS
3989aeyyf0wswkjtwtmc9kifl    docker-worker-2  Ready   Active
onreognwrkisxounadycyl23d *  docker-manager   Ready   Active        Leader
v5j89t2ql0l4t8da7r3r88e8i    docker-worker-1  Ready   Active
As you can see above, you now have one manager and two worker nodes in your swarm.
And that’s it. Believe it or not, you have successfully set up a docker swarm. Don’t believe me? OK, fine, let's see if it actually works.
Deploying a service
What we will do now is set up a very simple service. The idea is to familiarize ourselves with the docker service commands and have some fun with the service. Run the following command on the manager node to create a simple service:
$ sudo docker service create --replicas 1 --name pinger alpine ping www.google.com
Let's look at the above command:
- --replicas 1 : This means that we want to run our service with only 1 container.
- --name pinger : This is just the name of the service. You can call it ‘abc’ or ‘itreallydoesnotmatter.’
- alpine ping www.google.com :
- alpine : is the name of the docker image that you want to use to launch the container for the service. This is a simple, lightweight image running alpine linux.
- ping www.google.com : This is the command that your container will be running.
If you are not familiar with docker, then I would suggest reading up on docker hub. Simply put, docker hub is a repository of container images. Every time you want to launch a container, you refer to the image by its name (and tag, as we will see later). If docker can find the image locally it will use it; otherwise it will download the image from docker hub over the internet.
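You can see this local cache in action yourself. The sketch below pulls the alpine image ahead of time and then lists what is cached on the node:

```shell
# pre-pull the image so a later service/container start skips the download
sudo docker pull alpine:latest

# list the images cached on this node; alpine should now appear in the list
sudo docker images
```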
So we have launched a service that runs 1 container, based on alpine linux, and this container is running a ping to www.google.com.
If you want to look at the details of your service, run the following command:
$ sudo docker service inspect --pretty pinger
ID:             e78ek56mh9emqg8az3i1f79f3
Name:           pinger
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Max failure ratio: 0
ContainerSpec:
 Image:         alpine:latest@sha256:dfbd4a3a8ebca874ebd2474f044a0b33600d4523d03b0df76e5c5986cb02d7e8
 Args:          ping www.google.com
Resources:
Endpoint Mode:  vip
This command lists the details of the service. Note the Name, Replicas, Image and Args fields.
In order to check where in the swarm the container for our service is running, run the following command:
$ sudo docker service ps pinger
ID            NAME      IMAGE          NODE             DESIRED STATE  CURRENT STATE          ERROR  PORTS
n5zzyjlxhlfa  pinger.1  alpine:latest  docker-worker-2  Running        Running 2 minutes ago
From the output above, take note of the node and the container name. Note that the container could be running on any of the three nodes. Log in to the node that the container is running on and run the following commands:
@worker2 (worker2 for me, could be different for you)
$ sudo docker ps
CONTAINER ID  IMAGE                                                                           COMMAND                CREATED         STATUS         PORTS  NAMES
9abe8c6c0fa1  alpine@sha256:dfbd4a3a8ebca874ebd2474f044a0b33600d4523d03b0df76e5c5986cb02d7e8  "ping www.google.com"  10 minutes ago  Up 10 minutes         pinger.1.n5zzyjlxhlfa1b9tj36k44sud

$ sudo docker logs pinger.1.n5zzyjlxhlfa1b9tj36k44sud
...
64 bytes from XXX.XXX.XXX.XXX: seq=367 ttl=50 time=115.783 ms
64 bytes from XXX.XXX.XXX.XXX: seq=368 ttl=50 time=115.843 ms
64 bytes from XXX.XXX.XXX.XXX: seq=369 ttl=50 time=115.874 ms
64 bytes from XXX.XXX.XXX.XXX: seq=370 ttl=50 time=116.285 ms
64 bytes from XXX.XXX.XXX.XXX: seq=371 ttl=50 time=115.687 ms
64 bytes from XXX.XXX.XXX.XXX: seq=372 ttl=50 time=115.783 ms
64 bytes from XXX.XXX.XXX.XXX: seq=373 ttl=50 time=115.805 ms
64 bytes from XXX.XXX.XXX.XXX: seq=374 ttl=50 time=115.881 ms
64 bytes from XXX.XXX.XXX.XXX: seq=375 ttl=50 time=115.888 ms
64 bytes from XXX.XXX.XXX.XXX: seq=376 ttl=50 time=115.844 ms
64 bytes from XXX.XXX.XXX.XXX: seq=377 ttl=50 time=115.813 ms
...
Note that the name of the container in the second command has a long hash attached to it. The first command gives you a list of all the containers running on the local node; use the name of the correct container from that list in the second command. The name will be pinger.1.<SOMEHASH>
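Typing that long hash by hand is no fun. Assuming the service is still named pinger, a small sketch like this can look the name up for you on the node where the task landed:

```shell
# find the full task container name (pinger.1.<SOMEHASH>) on this node
NAME=$(sudo docker ps --filter "name=pinger" --format '{{.Names}}' | head -n 1)

# tail the last few ping lines from its log
sudo docker logs --tail 5 "$NAME"
```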
As you can see in the logs the container is happily pinging away. What you have done is accessed the log for the container launched by the service to see what the container is actually doing.
Now let's have some fun: let's scale the service. By that I mean let's have the service run more than one container, for scalability and high availability. Run the following command to scale the service to 5 containers:
$ sudo docker service scale pinger=5
pinger scaled to 5
Now run the following command to see the corresponding containers for the service:
$ sudo docker service ps pinger
ID            NAME      IMAGE          NODE             DESIRED STATE  CURRENT STATE           ERROR  PORTS
n5zzyjlxhlfa  pinger.1  alpine:latest  docker-worker-2  Running        Running 20 minutes ago
vfaaheynd9fb  pinger.2  alpine:latest  docker-worker-1  Running        Running 19 seconds ago
zjgq59ksvscy  pinger.3  alpine:latest  docker-worker-2  Running        Running 19 seconds ago
aqkg2nhx0cih  pinger.4  alpine:latest  docker-manager   Running        Running 19 seconds ago
63cfx9rczxl7  pinger.5  alpine:latest  docker-manager   Running        Running 19 seconds ago
As you can see in the output above, our service is now running 5 containers, each based on alpine linux and pinging away. Note how the containers are distributed across the three nodes. You could go to each node and verify that the containers are actually there, and check the logs if you are feeling up to it.
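Rather than eyeballing the NODE column, you can count the tasks per node. The helper below is plain shell; the docker command in the usage comment assumes a docker version whose `service ps` supports `--format` (newer than the 1.13 used in this post):

```shell
# count_per_node: reads one node name per line on stdin and prints
# how many service tasks landed on each node, busiest node first
count_per_node() {
  sort | uniq -c | sort -rn
}

# intended usage on the manager (requires `docker service ps --format` support):
#   sudo docker service ps pinger --format '{{.Node}}' | count_per_node
```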
If you are done playing around, then let's remove the service.
$ sudo docker service rm pinger
Run the following command on each node to make sure that the corresponding containers are removed.
$ sudo docker ps
If you still see the containers, run the command again; it takes a little time to clean up the containers.
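Instead of re-running docker ps by hand, a tiny polling loop (my own convenience sketch, not a docker feature) can wait out the cleanup on a node:

```shell
# poll until no pinger containers remain on this node
while [ -n "$(sudo docker ps -q --filter 'name=pinger')" ]; do
  sleep 2
done
echo "all pinger containers are gone"
```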
I hope you enjoyed the post and that it helps you get started with docker swarm. In my personal opinion it is one of the simplest container platforms to set up, which is probably why it is so popular. I was going to add a few more things, but I have come to realize that if a post gets too long, readers lose interest towards the latter half. Hence, I have decided to do a small follow-up post very soon that will cover tags, rolling updates, draining nodes, publishing ports and maybe adding storage to the service. So do remember to check back.
As always, thank you very much for reading. If you have any questions or comments, please feel free to share them below in the comments. I will try my best to get back to you as soon as possible.
For my latest posts please visit WhatCloud.
* Note that the logos for docker, kubernetes and rkt are registered logos for their respective organizations.