Discovering Custom Images and Load Balancers on Alibaba Cloud

And so it continues. As promised, I continue my journey deeper into the Alibaba Cloud portfolio, and as always I want to take you along.

This is the second chapter in a series of simple how-tos covering the basics of getting started with Alibaba Cloud. The idea of this series is for someone completely new to Alibaba Cloud to just pick up a service and start having fun with it. We will attempt to do exactly that as we go through this together.

In this second how-to we will cover the Server Load Balancer service, also known as SLB for short. To make it a little more interesting (and a tad longer), we will run through a practical example of load-balancing HTTP traffic across a couple of ECS (Elastic Compute Service) instances. Along the way we will also discover security groups and learn how to create custom images. These topics might seem disparate, but they all link together, that much I can promise. I am assuming at this point that you already have a working account on Alibaba Cloud and have played around with the ECS service. If not, you will want to check out the Getting Started with ECS on Alibaba Cloud guide.

Setting up Nginx

If you don’t know Nginx, it is an open source reverse proxy/load balancer/HTTP server. For the course of this discussion, we are just going to use the HTTP server features of Nginx. I chose Nginx for this initial setup because it is so simple to set up that it does not take attention away from the real topic of discussion.

  • To get started, deploy a small ECS instance with at least 1 Mbps of bandwidth using an Ubuntu image from the Public Images in ECS. If you need help doing so, use the Getting Started document mentioned in the previous section.
  • Once the instance is up, SSH into it using root credentials.
  • Then run the following commands to update the OS package lists and install nginx.
$ apt-get update
$ apt-get install nginx -y
$ /etc/init.d/nginx start

Open a browser and access the Internet IP of your ECS instance. What happened? You can’t reach your web server. Don’t be disappointed; there is a reason for that. By default, only the SSH, ICMP and RDP ports are open in the firewall rules for the internet-facing interface of ECS. In order to access your web server from the Internet, you need to open port 80. Go through the following steps to open port 80 on your ECS instance:


Click on the instance’s Instance ID as highlighted above. This should bring you to the following screen:


Ensure that you have clicked on Security Groups in the left panel as highlighted above. Welcome to a new concept called security groups. Simply put, these are firewall rules that you apply to your ECS instances to control inbound and outbound traffic.

Click on Configure Rules to add an HTTP access rule allowing traffic on port 80 from the Internet:


As you can see, there are three rules defined above that allow traffic for ICMP ping, SSH (22) and RDP (3389). We need to define a fourth rule to allow incoming traffic on port 80, the HTTP port nginx is listening on. Click on Add Security Group Rules as highlighted above.


The above pop-up will open to let you build the rule. The list below describes each field and the value we use:

  • NIC: The network interface you are creating this rule for; in our case the Internet-facing interface. Our value: Internet
  • Rule Direction: Whether the rule applies to incoming (inbound) or outgoing (outbound) traffic. Our value: Inbound
  • Authorization Policy: Whether the rule allows (Allow) or denies (Deny) traffic. Our value: Allow
  • Protocol: The protocol of the traffic (TCP, UDP, HTTP, etc.). Our value: HTTP
  • Port Range: The range of ports the traffic is allowed on. Our value: 80/80 (meaning port 80 only)
  • Authorization Type: Address Field Access means we allow access to/from a range of IP addresses; the other option, Security Group Access, allows access to/from all instances inside a particular security group. Our value: Address Field Access
  • Authorization Object: A CIDR block giving the range of IP addresses to/from which traffic is allowed; it can also be the name of a security group. Our value: (to allow anyone to access our webpage)
  • Priority: Rules with lower numbers have higher priority. Our value: 1
  • Description: A free-form description of the rule. Our value: Rule for default nginx website.

Once you are done entering all the values, press OK, then access the Internet IP of your ECS instance from your machine. You should see something similar to the screenshot below:
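If you prefer the command line, a rule like this can also be created through the ECS API’s AuthorizeSecurityGroup action using the aliyun CLI. This is only a sketch: the security group ID and region below are placeholders, and the command is echoed rather than executed so you can review it before running it against your own account.

```shell
# Sketch only: the security group ID and region are placeholders.
SG_ID="sg-xxxxxxxxxxxx"
REGION="ap-southeast-1"   # Singapore
cmd="aliyun ecs AuthorizeSecurityGroup --RegionId $REGION \
--SecurityGroupId $SG_ID --IpProtocol tcp --PortRange 80/80 \
--SourceCidrIp 0.0.0.0/0 --Policy accept --Priority 1"
echo "$cmd"
```

Running the echoed command requires the aliyun CLI installed and configured with your credentials.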


Congratulations! You have set up your very own web-facing nginx server on Alibaba Cloud.

Creating a custom image

Wouldn’t it be nice if we could save all the work we did to set up the nginx instance and do a one-click deploy the next time we wanted an nginx server? It turns out there is a way: it’s called custom images. What custom images allow you to do is save the state of an existing instance, along with the application deployed on it, and then launch new instances from that configuration. This is very useful if you want to deploy multiple instances of the same application, load balance, or perform auto scaling.
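For reference, custom images can also be created programmatically through the ECS API’s CreateImage action. The aliyun CLI sketch below uses a placeholder instance ID and region, and echoes the command rather than executing it so you can review it first:

```shell
# Sketch only: the instance ID and region are placeholders.
INSTANCE_ID="i-xxxxxxxxxxxx"
REGION="ap-southeast-1"
cmd="aliyun ecs CreateImage --RegionId $REGION \
--InstanceId $INSTANCE_ID --ImageName Ubuntu_Nginx \
--Description 'Ubuntu with nginx installed'"
echo "$cmd"
```

As before, the real call needs the aliyun CLI configured with your credentials; the console steps below do the same thing without any tooling.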

Run through the following steps to create a custom image using Ubuntu and nginx:


Make sure that you are on the Instances list under ECS as highlighted in the left panel above:

  • Go to your instance in the instance list and press More on the far right.
  • Select Create Custom Image from the dropdown as highlighted above.
  • Enter a name and description for the image and click Create.


Go to Images on the left hand panel under Snapshots and Images as highlighted above. Wait for the status of your image to turn to Available.


Go back to Instances as highlighted above in the left panel. Click on the Create Instance button. Go through the following screens to create an instance from your custom image:


  • Pricing Model: Pay-As-You-Go (or Subscription, whichever you like)
  • Datacenter Region and Zone: Singapore (Or something else)


  • Instance Type: Select a small instance. I chose ecs.s1.small
  • Network Type: VPC (with the default VPC and VSwitch selected. The VSwitch might not be selected by default, so make sure to select it)
  • Network Billing Type: Data Transfer


  • Network Bandwidth Peak: 1 Mbps should be enough. Make sure it is not 0 Mbps, otherwise the instance will not be reachable from the Internet and you will not be able to access the Internet from inside the instance.
  • Operating System: Go under Custom Images and select the custom image that we just created in the previous steps. For me it is Ubuntu_Nginx_N.


  • Storage: Ultra Cloud Disk 40 GB should be enough.
  • Security: Set the Password, just to see how it works.


  • User Data: Leave it for Later
  • Instance Name: Give it a name just for kicks. I gave it nginx2_N
  • Number of Instances: 1

Now press the orange Buy Now button. This will lead you to the purchase confirmation page below:


Press the Activate button to complete the purchase and then go back to the Instances page in the console.


Wait for your new instance to reach the Running state.


Note that since this instance belongs to the same default security group, there is no need to open port 80; it is already open, as can be verified from the Security Group page shown above.

Use the following command to SSH into the new instance:


Once logged in, run the following command to make sure that nginx is actually installed and running:

$ sudo service nginx status

If all goes well you should receive an active (running) status response.

Open your browser and point it to the Internet IP of your new instance. You should see the screen below:


So let’s do a quick recap. What have we done so far?

  • We took an Ubuntu Instance and installed nginx on it.
  • We adjusted firewall rules to make sure that we could access nginx from the Internet
  • We tested that it works
  • We then created a custom image from this instance so we don’t have to install nginx every time we need a new instance of nginx.
  • We used this custom image to launch a new instance. We noticed that the new instance gets its own IP, hostname and password attributes, but retains the nginx installation.
  • We tested that nginx indeed works on the new instance.


Adding a Load Balancer to the mix

The two instances that we have launched are independent. Let’s see if we can load balance HTTP traffic across them.

But before we do that, note that both our nginx servers display the same HTML page, so even if traffic is load balanced we will not know whether it is actually working, unless we start doing some back-end monitoring, which is frankly not as much fun.

So let’s change a few things. Use the following set of commands to make the two servers look a bit different from each other:

On Instance 1:


Edit the default html file for nginx

$ vi /var/www/html/index.nginx-debian.html

<title>Welcome to nginx - 1 !</title>
   body {
       width: 35em;
       margin: 0 auto;
       font-family: Tahoma, Verdana, Arial, sans-serif;
<h1>Welcome to nginx! - I am the ONE</h1>

Just make sure that the beginning section looks like the one above; the changed lines are the <title> and the <h1>. Leave the rest as is.

Once you are done save the file (I am assuming you know how to use vi 😉 )

On Instance 2:


Edit the default html file for nginx

$ vi /var/www/html/index.nginx-debian.html

<title>Welcome to nginx - 2 !</title>
   body {
       width: 35em;
       margin: 0 auto;
       font-family: Tahoma, Verdana, Arial, sans-serif;
<h1>Welcome to nginx! - I am TWO - not the one :(</h1>

Just make sure that the beginning section looks like the one above; the changed lines are the <title> and the <h1>. Leave the rest as is.
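If you would rather script these edits than open vi on each instance, a couple of sed substitutions do the same job. The sketch below runs against a scratch copy of the two changed lines so it is safe to try anywhere; on a real instance, point HTML at /var/www/html/index.nginx-debian.html, set N to 1 or 2, and adjust the wording to taste:

```shell
# Demo on a scratch copy; on the instance use
# HTML=/var/www/html/index.nginx-debian.html instead.
N=2
HTML=$(mktemp)
cat > "$HTML" <<'EOF'
<title>Welcome to nginx!</title>
<h1>Welcome to nginx!</h1>
EOF
# Rewrite the <title> and <h1> lines to identify the instance.
sed -i "s|<title>.*</title>|<title>Welcome to nginx - $N !</title>|" "$HTML"
sed -i "s|<h1>.*</h1>|<h1>Welcome to nginx! - instance $N</h1>|" "$HTML"
cat "$HTML"
```

The `<h1>` text here is generic; use whatever phrasing you like, as long as the two instances end up visibly different.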

As before, once you are done, save the file. There you have it! Now point your browser to the Internet IPs of instance 1 and instance 2, and they should look like the two screenshots below:

Instance 1:


Instance 2:


Now when we load balance we should be able to identify which Instance is responding to our request.

Now let’s start having some fun with a load balancer.


On your console, locate Server Load Balancer as highlighted above and click it. If you see a message to activate your service, just activate it. This should not incur any additional charges at this point.


On the Server Load Balancer (SLB) screen go to Instance Management and click Create Server Load Balancer


On the create SLB screen choose the following configuration:

  • Region: Singapore (make sure you select the same region as your two ECS instances; note that SLBs work across zones but not across regions)
  • Zone Type: Leave it at the default. If you are in a multi-zone region it will be set to Multi-Zone, otherwise Single-Zone.
  • Primary Zone: Select the zone that you deployed your ECS instances in. In a multi-zone environment, SLB creates a primary and a backup zone; when the primary zone is unavailable, the SLB service resumes from the backup zone. Instances in both zones can connect to the active SLB. (Too much? I know. Just read through it again.)
  • Instance Type: Select Internet, since we want to access the web servers from the Internet. Note that Intranet SLBs are free :D, while Internet SLBs are charged.
  • Bandwidth: Default
  • Quantity: 1

Click on Buy Now to go to the purchase confirmation page.


Press Activate to complete the purchase.


Go back to Instance Management and then locate your newly created SLB.

  • Refresh until the Status turns to Running.

Click on Manage on the right-hand side of the SLB instance.


  • Click on VServer Group in the left pane. The load balancer needs to know which servers it is load balancing.


There are two ways to define these servers:

  • Backend Servers
  • VServer Group

Each SLB has a set of listeners, which we will define in a minute. A listener does what the name says: it listens for traffic on a particular port and then forwards it to an instance from a group of servers.

When you put servers under Backend Servers, all listeners on the SLB use them as back-end servers. However, if you have more than one listener on the SLB, for example if you want to load balance two applications using the same SLB and two listeners, then you would ideally want to direct traffic from each listener to a different group of servers (since it is not necessary that all applications are running on the same set of servers). When you create a VServer Group, you can direct traffic from each listener to a different set of servers defined under a specific VServer Group. In other words, each listener can have its own VServer Group to load balance. Hope that makes some sense.

  • Click on the Create VServer Group button as shown above.


  • Group Name: Give the VServer Group a name. I chose Nginx_VSG.
  • Server Network Type: Select VPC (since that is the network type of your ECS instances).
  • Add both servers from the Available Server List to the Selected Server List.
  • Enter the Port for each server as 80, the port on which nginx is listening.
  • Set the Weight for each server to 100. The weight tells the system which server should get more traffic; however, it only matters when using Weighted Round Robin, which you will see later.
  • Once you are done, press OK to create the VServer Group.


Now let us create the listener.


  • Click on Listener on the left hand panel and press Add Listener as highlighted above.


Look at the following parameters:

  • Front-End Protocol/Port: Enter HTTP and port 80. This means our SLB will listen on HTTP port 80 on the Internet. The protocol supports HTTP/HTTPS for Layer 7 and TCP for Layer 4 load balancing.
  • Back-End Protocol/Port: Enter HTTP and port 80. This means our SLB will direct traffic to HTTP port 80 on the back-end ECS instances, which are running nginx in our case. If your web server were listening on, say, port 8080, you could set the back-end port to 8080 and still keep the SLB front end on port 80.
  • Forwarding Rules: There are three options:
    • Weighted Round Robin: Round robin according to the ECS weights defined earlier; the higher the weight, the more traffic the ECS instance sees.
    • Weighted Least Connections: Direct traffic to the ECS instance with the fewest connections, taking weight into account.
    • Round Robin: Pass traffic to each ECS instance in turn, without considering weight.

In our case we select Weighted Round Robin.

  • Use VServer Group: Select the VServer Group that we created earlier. In my case it is Nginx_VSG.

Leave the rest as default and press Next Steps.


  • Active health check: Disable this for now and click Confirm.


If all goes well your Listener should be created successfully. Just press Confirm.


Ensure that your listener is in the Running state.


Go back to Server Load Balancer → Instance Management and locate the Internet IP address of the SLB instance as highlighted above.

Now point your browser to the SLB’s Internet IP. You should see one of the nginx server pages, like the one below:


Refresh the page and it should change to something like this:


Keep refreshing and it should flip-flop between the two web servers.
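You can also watch the flip-flop from the command line instead of a browser. The small curl loop below prints the `<h1>` of whichever instance answered each request; the IP address is a placeholder, so substitute your SLB’s Internet IP:

```shell
# Placeholder address: replace with your SLB's Internet IP.
SLB_IP=${SLB_IP:-203.0.113.10}
for i in 1 2 3 4 5 6; do
  # Grab just the <h1> heading from each response.
  h1=$(curl -s --max-time 2 "http://$SLB_IP/" | grep -o '<h1>[^<]*</h1>')
  echo "request $i: ${h1:-no response}"
done
```

With Weighted Round Robin and equal weights, consecutive requests should alternate between the two headings.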



Let’s recap this part:

  • We made a few changes to the HTML on each web server to differentiate the two during load balancing
  • We activated and created a Server Load Balancer
  • We configured a VServer Group and added our ECS servers with the same weight
  • We configured a listener to listen on HTTP port 80 and then pass the traffic to HTTP port 80 on our VServer Group, using Weighted Round Robin
  • We tested that the load balancing actually works

Just for kicks, play around with the ECS weights under the VServer Group configuration and then refresh the SLB page. How does the behavior change over multiple refreshes? Also try out Weighted Least Connections and Round Robin in the listener.
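To build some intuition before you experiment, here is a toy bash sketch of how weighted round robin biases one cycle of requests. This is an illustration only, not Alibaba’s actual scheduler: it hands out one scheduling slot per 50 units of weight, so with weights 100 and 50, ecs1 gets two slots for every one slot ecs2 gets.

```shell
#!/usr/bin/env bash
# Toy model of weighted round robin: one slot per 50 units of weight.
servers=(ecs1 ecs2)
weights=(100 50)
schedule=()
for i in "${!servers[@]}"; do
  # Give each server weight/50 slots in the cycle.
  for ((n = 0; n < weights[i] / 50; n++)); do
    schedule+=("${servers[i]}")
  done
done
echo "one cycle: ${schedule[*]}"
```

Real SLB scheduling interleaves requests more smoothly than this block allocation, but the proportion is the point: roughly double the weight, double the share of traffic.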

Congratulations! You have successfully load balanced traffic across two web servers using Alibaba Cloud ECS and SLB.

Thank you for reading. I hope these articles are fun to read. If you have any questions or comments, please feel free to share them below. It is my intention to keep adding content, so don’t forget to check back.

For my latest posts please visit WhatCloud

*Please note that the content/views expressed in this post are solely my own and do not reflect on or represent the official standing/content and views of the Alibaba Cloud organization.
*The logos used in the diagrams above are registered logos of Nginx and Canonical (Ubuntu) respectively.



