Understanding basic operations on Docker Swarm services

So I have been writing some stuff, just not on the blog lately. This is the long overdue follow-up to my earlier post Setting up a Docker Swarm. When I wrote that post I had set up a Docker Swarm on an OpenStack cloud. Given the time that has passed since (some due to circumstances, mostly due to my laziness), this post is based on a swarm set up on Alibaba Cloud. The fun part is that apart from the infrastructure setup and the Docker version, nothing much has changed. What I am using for this post is the setup depicted in the diagram below:

[Diagram: Docker Swarm setup on Alibaba Cloud, with 1 manager and 3 workers behind an SLB]

If you are interested in the infrastructure, it is based on the following:

  1. 4 Alibaba Cloud ECS instances running Ubuntu 16.04
    • 1 Swarm Manager
    • 3 Swarm Workers
  2. 1 Load Balancer based on Alibaba Cloud SLB
    • My Load Balancer is listening on port 80
    • It redirects traffic to port 8080 on each of the servers
    • My containers are publishing their port 80 to port 8080 on their hosts.
  3. Docker version 17.05
  4. Auto Scaling service configured for Docker Swarm Worker Nodes

You can effectively set up a swarm anywhere, be it an on-premises server environment, a private cloud like OpenStack, or any of the public cloud providers. I did it on two of these in the course of just two blog posts, which tells you how portable Docker is. If you happen to be on Alibaba Cloud and want to set up something similar, you can give it a try using the above information and the following four posts:

Just look at the diagram above and see how to configure it using the posts. If it turns out to be too much, just give me a shout and I would be more than happy to help you set it up.

Okay, so once you have set up the environment, run the following command to launch an nginx service on your Docker Swarm:

$ sudo docker service create --replicas 3 --name nginx --constraint 'node.role != manager' --publish 8080:80 --update-delay 10s nginx:1.10.3  

Note the --constraint flag in the command above. This constraint makes sure that the service is only deployed to swarm nodes that do not have the manager role. The command launches:

  • A simple nginx container.
  • Three replicas of the container (--replicas 3).
  • The service is named nginx.
  • The service maps port 80 inside the container to port 8080 on each of the worker hosts.
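If you want to double-check that the constraint held, the NODE column of docker service ps shows where each task landed. Here is a minimal sketch that scans sample output for manager placements; in practice you would capture the live command instead, e.g. ps_output="$(sudo docker service ps nginx)":

```shell
# Sample `docker service ps nginx` output (illustrative, not captured live)
ps_output='ID            NAME     IMAGE         NODE      DESIRED STATE  CURRENT STATE
abc123def456  nginx.1  nginx:1.10.3  worker-1  Running        Running 1 minute ago
bcd234efg567  nginx.2  nginx:1.10.3  worker-2  Running        Running 1 minute ago
cde345fgh678  nginx.3  nginx:1.10.3  worker-3  Running        Running 1 minute ago'

# The 4th column is NODE; with our constraint, no task should sit on the manager
if echo "$ps_output" | awk 'NR>1 {print $4}' | grep -q 'manager'; then
  echo "constraint violated: a task is on the manager"
else
  echo "ok: no tasks on the manager node"
fi
```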

This completes the setup depicted in the diagram above. If you access the IP of the load balancer, you should now be able to see the nginx default web page. Now that we have this out of the way, let's start having some fun with the Docker Swarm.

Creating a service in the swarm

Since we have left the manager to management only :), we will launch all further services on the worker nodes only. Run the following command on the swarm manager to launch a simple Docker Swarm service with 1 container.

@SwarmManager

$ sudo docker service create --replicas 1 --constraint 'node.role != manager' --name helloworld alpine ping www.alibabacloud.com 

The command launches:

  • A simple Alpine Linux container.
  • The service is named helloworld.
  • The argument tells the container to continuously ping www.alibabacloud.com.
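A quick way to see whether the service has converged is docker service ls, whose REPLICAS column reads running/desired. A sketch that checks convergence by parsing sample output (pipe the live command in practice):

```shell
# Sample `docker service ls` output (illustrative)
ls_output='ID            NAME        MODE        REPLICAS  IMAGE
ez7eubp7w85l  helloworld  replicated  1/1       alpine:latest'

# REPLICAS reads "running/desired"; the service has converged when the two match
echo "$ls_output" | awk 'NR>1 {split($4, r, "/"); print $2, (r[1] == r[2] ? "converged" : "still converging")}'
```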

Inspecting a service

If you want to know details about a service you can always run the inspect command:

@SwarmManager

$ sudo docker service inspect --pretty helloworld
ID:  ez7eubp7w85l4hnvihe483lsg
Name:  helloworld
Service Mode: Replicated
 Replicas: 1
Placement:
 Constraints: [node.role != manager]
UpdateConfig:
 Parallelism: 1
 On failure: pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism: 1
 On failure: pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:  alpine:latest@sha256:1072e499f3f655a032e88542330cf75b02e7bdf673278f701d7ba61629ee3ebe
 Args:  ping www.alibabacloud.com
Resources:
 Endpoint Mode: vip

This command is very simple to understand:

  • The --pretty argument makes sure the output is formatted nicely.
  • A number of useful things are returned, including:
    • the number of containers (“Replicas”),
    • the placement constraint (“node.role != manager”), and
    • other details such as the Docker image used and the arguments passed.
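When scripting, you can pull a single field out of the --pretty output with awk; note that docker service inspect also takes a --format flag with a Go template (e.g. '{{.Spec.Mode.Replicated.Replicas}}'), which is the more robust route. A sketch against a few lines of the sample output above:

```shell
# A few lines of the `docker service inspect --pretty helloworld` output above
inspect_output='Name:  helloworld
Service Mode: Replicated
 Replicas: 1
Placement:
 Constraints: [node.role != manager]'

# Pull out the replica count; in a real script prefer `docker service inspect --format`
replicas=$(echo "$inspect_output" | awk '/Replicas:/ {print $2}')
echo "replicas=$replicas"
```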

Viewing the container log

Docker allows you to connect directly to a container's log. To see where the container for our helloworld service is running, run the following command:

@SwarmManager

$ sudo docker service ps helloworld

ID                  NAME                IMAGE               NODE                      DESIRED STATE       CURRENT STATE                ERROR               PORTS

s2psenelzogg        helloworld.1        alpine:latest       <WorkerNode1>   Running             Running about a minute ago

Look at the NODE value in the output above. This is where the container is running. So let us ssh into this machine:

@SwarmWorker1 (It is swarm worker 1 for me, it could be any other node for you. Check using the command above)

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES

1ecf98462db3        alpine:latest       "ping www.alibabac..."   5 minutes ago       Up 5 minutes                            helloworld.1.s2psenelzogglkijw0i2l45ce

ec901eadb3a1        nginx:1.10.3        "nginx -g 'daemon ..."   23 minutes ago      Up 23 minutes       80/tcp, 443/tcp     nginx.3.0gg8usy7fxkurmsq0hytgk8eb

Look at the container for the helloworld service. You should be able to identify it using the NAMES column. Run the following command to look at the log for this container:

$ sudo docker logs helloworld.1.s2psenelzogglkijw0i2l45ce

PING www.alibabacloud.com (47.88.128.164): 56 data bytes
64 bytes from 47.88.128.164: seq=0 ttl=39 time=263.048 ms
64 bytes from 47.88.128.164: seq=1 ttl=39 time=263.075 ms
64 bytes from 47.88.128.164: seq=2 ttl=39 time=247.832 ms
64 bytes from 47.88.128.164: seq=3 ttl=39 time=255.053 ms
64 bytes from 47.88.128.164: seq=4 ttl=39 time=247.829 ms
64 bytes from 47.88.128.164: seq=5 ttl=39 time=261.734 ms
64 bytes from 47.88.128.164: seq=6 ttl=39 time=271.754 ms
64 bytes from 47.88.128.164: seq=7 ttl=39 time=263.092 ms
64 bytes from 47.88.128.164: seq=8 ttl=39 …

Note that I used the name of the container from the previous command. As you can see, the container is doing what it is supposed to do: happily pinging away at www.alibabacloud.com.
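Two shortcuts worth knowing: swarm task containers are named service.slot.task-id, so you can pick them out of docker ps by prefix instead of eyeballing the NAMES column; and newer Docker versions can read a service's logs straight from the manager with docker service logs helloworld, skipping the ssh round-trip (check whether your version supports it). A sketch of the prefix trick on sample NAMES values:

```shell
# Sample NAMES values from `docker ps` on a worker (illustrative)
names='helloworld.1.s2psenelzogglkijw0i2l45ce
nginx.3.0gg8usy7fxkurmsq0hytgk8eb'

# Swarm task containers are named <service>.<slot>.<task-id>,
# so the helloworld container can be picked out by prefix
container=$(echo "$names" | grep '^helloworld\.')
echo "would run: docker logs $container"
```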

Scaling a docker service

Now let's try to scale our Docker service. By scaling we mean horizontal scaling: increasing the number of instances (containers) backing this service. Scaling lessons 101: horizontal scaling requires that the application be… stateless. Luckily our helloworld service qualifies.

Run the following command on the swarm manager to scale the helloworld service:

@SwarmManager

$ sudo docker service scale helloworld=5
helloworld scaled to 5

Now run the following command to look at all the containers running under our helloworld service:

$ sudo docker service ps helloworld

ID                  NAME                IMAGE               NODE                      DESIRED STATE       CURRENT STATE            ERROR               PORTS

s2psenelzogg        helloworld.1        alpine:latest       <WorkerNode1>   Running             Running 8 minutes ago                        
yt7udkzb9uuq        helloworld.2        alpine:latest       <WorkerNode2>   Running             Running 27 seconds ago                       
5dh13iqh1f14        helloworld.3        alpine:latest       <WorkerNode3>   Running             Running 27 seconds ago                       
oa23izqw1mxo        helloworld.4        alpine:latest       <WorkerNode3>   Running             Running 27 seconds ago                       
wjkohdpxkosn        helloworld.5        alpine:latest       <WorkerNode1>   Running             Running 27 seconds ago

As you can see above, our helloworld service is now scaled to 5 containers running across our 3 worker nodes. That is how simple it is to scale a service in a Docker Swarm.
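The scheduler spreads the five tasks across the three workers on its own. If you want a per-node tally, it can be counted straight out of the NODE column. A sketch over the node values from output like the above:

```shell
# NODE column values from a `docker service ps helloworld` run (illustrative)
nodes='WorkerNode1
WorkerNode2
WorkerNode3
WorkerNode3
WorkerNode1'

# Count how many helloworld tasks each node carries
echo "$nodes" | sort | uniq -c | awk '{print $2": "$1" task(s)"}'
```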

Removing a docker service

Removing a docker service is very simple. Just run the following command on the swarm manager to remove the docker service:

@SwarmManager

$ sudo docker service rm helloworld

If you run the docker ps command on the worker nodes, you might still see the containers. It takes a little time for the containers to be cleaned up. If you check after a few minutes, all containers associated with this service will have disappeared from all the nodes.

Updating a docker service

Docker supports what it calls rolling updates: updates are rolled out to containers gradually, a few at a time, until all of them have been updated. Run the following command to update the nginx service:

@SwarmManager

$ sudo docker service update --image nginx:1.11.9 nginx

The above command updates our nginx service to version 1.11.9. To show how the update progresses, I ran the docker service ps command a number of times. See the following outputs:

$ sudo docker service ps nginx

ID                  NAME                IMAGE               NODE           DESIRED STATE       CURRENT STATE             ERROR               PORTS

u58bomerqc8h        nginx.1             nginx:1.10.3       <WorkerNode3>   Running             Running 29 minutes ago                        
st2axbpwv1ti        nginx.2             nginx:1.10.3       <WorkerNode1>   Running             Running 29 minutes ago                        
a84uu6xsd303        nginx.3             nginx:1.11.9       <WorkerNode2>   Running             Running 4 seconds ago                         
0gg8usy7fxku         \_ nginx.3         nginx:1.10.3       <WorkerNode2>   Shutdown            Shutdown 13 seconds ago                       

$ sudo docker service ps nginx

ID                  NAME                IMAGE               NODE            DESIRED STATE       CURRENT STATE             ERROR               PORTS

u58bomerqc8h        nginx.1             nginx:1.10.3        <WorkerNode3>   Running             Running 29 minutes ago                        
p9ff5mas2x9p        nginx.2             nginx:1.11.9        <WorkerNode1>   Running             Preparing 4 seconds ago                       
st2axbpwv1ti         \_ nginx.2         nginx:1.10.3        <WorkerNode1>   Shutdown            Shutdown 3 seconds ago                        
a84uu6xsd303        nginx.3             nginx:1.11.9        <WorkerNode2>   Running             Running 14 seconds ago                        
0gg8usy7fxku         \_ nginx.3         nginx:1.10.3        <WorkerNode2>   Shutdown            Shutdown 23 seconds ago                       

$ sudo docker service ps nginx

ID                  NAME                IMAGE               NODE                      DESIRED STATE       CURRENT STATE                 ERROR               PORTS

zz6u1ujzabr5        nginx.1             nginx:1.11.9        <WorkerNode3>   Running             Running 18 seconds ago                           
u58bomerqc8h         \_ nginx.1         nginx:1.10.3        <WorkerNode3>   Shutdown            Shutdown 28 seconds ago                           
p9ff5mas2x9p        nginx.2             nginx:1.11.9        <WorkerNode1>   Running             Running 39 seconds ago                            
st2axbpwv1ti         \_ nginx.2         nginx:1.10.3        <WorkerNode1>   Shutdown            Shutdown 47 seconds ago                           
a84uu6xsd303        nginx.3             nginx:1.11.9        <WorkerNode2>   Running             Running 59 seconds ago                            
0gg8usy7fxku         \_ nginx.3         nginx:1.10.3        <WorkerNode2>   Shutdown            Shutdown about a minute ago

As you can see above, all three containers for the service were updated to the new version one at a time. During the update there are moments when the service has containers running different versions side by side. This is just a three-node cluster; production environments can run on hundreds, maybe thousands, of nodes with several thousand containers behind each service. Look how simple Docker has made it to manage thousands of containers distributed across such a large array of nodes.
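The pace of a rollout is governed by two flags on docker service create/update: --update-parallelism (how many tasks are updated per batch, default 1) and --update-delay (the pause between batches; we set 10s when creating the nginx service). A back-of-envelope sketch of the pacing under those settings:

```shell
# Rollout pacing: ceil(replicas / parallelism) batches, with a delay between batches
replicas=3
parallelism=1
delay_s=10

batches=$(( (replicas + parallelism - 1) / parallelism ))
echo "batches=$batches"
echo "time spent in delays alone: $(( (batches - 1) * delay_s ))s"
```

With hundreds of replicas you would raise --update-parallelism, otherwise the rollout spends most of its time waiting between batches.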

Mounting local node storage to containers

Docker allows several kinds of storage to be mounted directly into containers. Why would I want to mount storage into a container? Let's say you want to share some data across containers (e.g. configuration files), or you want some data to persist outside the container; in those cases it is best to mount external storage into the container.

Before proceeding, let's remove our existing nginx service so we can reuse the SLB port mapping to test the new service. To remove the nginx service, run the following command:

$ sudo docker service rm nginx

For our case, we will simply bind-mount a folder from the worker nodes into the container: the folder nginx serves its html content from. We will then create a small html file on the swarm worker hosts and check whether the containers start serving it.

To prepare for this, on each of the three worker nodes create a folder called /home/docker-nginx/html. Inside this folder, create a file called index.html with the following content:

<html>
<body>
I like to dance
</body>
</html>

You can run the following on each of the worker nodes to achieve this:

@SwarmWorker1,2,3

$ mkdir -p /home/docker-nginx/html
$ cd /home/docker-nginx/html
$ vi index.html

<html>
<body>
I like to dance
</body>
</html>

# Press Esc, then type :x and press Enter to save the file and exit vi.
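If logging in to each worker and driving vi feels tedious, the same file can be written with a small script. This is a sketch: the NGINX_ROOT variable is my own invention so you can try it against a scratch path first; on the actual workers, set it to /home/docker-nginx (or hardcode the path):

```shell
# Writes the test page under $NGINX_ROOT (hypothetical variable; defaults to a
# scratch path for safe testing, use /home/docker-nginx on the real workers)
root="${NGINX_ROOT:-/tmp/docker-nginx}"
mkdir -p "$root/html"
cat > "$root/html/index.html" <<'EOF'
<html>
<body>
I like to dance
</body>
</html>
EOF
echo "wrote $root/html/index.html"
```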

Once this is done run the following command on the swarm manager to launch a new service:

$ sudo docker service create \
  --name my-web \
  --publish 8080:80 \
  --replicas 3 \
  --mount "type=bind,source=/home/docker-nginx/html,target=/usr/share/nginx/html" \
  nginx

Look at the above command. We are creating a service:

  • With the name my-web.
  • Publishing port 80 inside the container to port 8080 on the host.
  • With 3 containers (replicas).
  • Mounting /home/docker-nginx/html on the host to /usr/share/nginx/html inside the container. Note that this is the default folder the nginx image serves html pages from. The idea is that the containers should now use the index.html file stored on the swarm worker hosts.

Remember that all our swarm workers are attached to an SLB, and port 80 on the SLB is mapped to port 8080 on each of these hosts. If all goes well, you should be able to access the page using the SLB's Internet IP in your web browser. Make sure port 80 is open in the Security Group for the SLB. You should see the following page if everything works:

[Screenshot: browser showing the page “I like to dance”]

Note that you do not see the default nginx page from inside the container, but the html file you placed on each of the swarm worker nodes. This means the mount from the hosts into the containers is working fine.

I hope you have enjoyed reading this. As a next step, you could use the knowledge you gained earlier to launch another node and add it as a second Swarm Manager for high availability. If you are feeling adventurous, you can try adding a TCP SLB (refer to Discovering Custom Images and Load Balancers) in front of the Swarm Managers. If you get stuck, just give me a shout.

Please note that the idea of this guide is to get you started on Docker; it is by no means a comprehensive Docker guide. For detailed documentation on Docker Swarm, visit the official docs at https://docs.docker.com/engine/swarm/.

Thank you for reading. I hope the articles are fun to read. If you have any questions or comments please feel free to share below in the comments. It is my intention to continue to add content. So don’t forget to check back.

For my latest posts please visit WhatCloud

*Please note that the content/views expressed in this post are solely my own and do not reflect on or represent the official standing/content and views of the Alibaba Cloud organization.
*The logos used in the diagrams above are registered logos of Alibaba Cloud and Docker respectively.