You thought it was over. Not even close. We are just getting warmed up (Pun intended.) Where did I disappear to?

  • Option a: I was on a secret mission.
  • Option b: I was just plain lazy.
  • Option c: I was creating suspense for the series.

What do you think? :D. So without straying away from the topic, let's get back to the exciting stuff. If you have no clue as to what I am talking about then you have missed Episode 1: The Misconception. If you have some idea about the series but are feeling a little lost then you probably skipped Episode 2 – Getting to know her better. I suggest that you skim through the two episodes to get a better perspective on what we are about to do.

Reading Level: Intermediate

Environment: OpenStack (Newton) on Ubuntu 16.04

RECAP for Episodes 1 and 2:

  • You were introduced to OS (OpenStack) and its whereabouts.
  • You learnt the basic survival elements for OS and how to set them up. (So you could have an OS of your own.)
  • You got the keys, stole the glances, saw money making potential, and figured that she is simple and sweet to interact with, and possesses some serious networking skills.
  • You got Excited!! and hopefully still are…

Confused? Don’t be. The diagram below depicts the current state of OS.

[Figure: OpenStack by the end of Episode 2]

So as of right now we have configured the following:

Function                           Module
User Authentication                Keystone
Image Service                      Glance
Compute                            Nova
Networking                         Neutron
GUI (Graphical User Interface)     Horizon

Let's heat things up? … Not yet!! Before we can do that we have got to work on this block by block. So let's look at one more block in this puzzle. What we are going to set up next is block storage. What is block storage, you may ask? Well, you are going to be running some cloud instances for your Customers (Tenants), and these customers will need to store their data in some sort of persistent storage (fancy name for storage that can persist beyond reboots and beyond the life of the cloud instance itself.) Cinder (I have no idea why it is called that!!) is the module that allows you to provide additional persistent storage to your cloud instances or other cloud services. In simpler terms it's just a service that provides additional disks to your customer machines in the cloud. See, not so complicated now, is it?
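Just to make this concrete, here is a hedged sketch of what consuming block storage will eventually look like once everything in this episode is in place: a customer creates a volume and attaches it to a running instance (the instance name my-instance below is purely hypothetical). Don't run this yet; it is only here for orientation.

openstack volume create --size 5 customer-data
openstack server add volume my-instance customer-data

Inside the instance, the volume then shows up as a new block device (e.g. /dev/vdb) that survives reboots and can outlive the instance itself.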

F. Cinder

[Figure: Cinder]

In the lab environment we are going to set up the Cinder (Block Storage) service on the Controller node. In a production environment you would want to have independent Cinder Nodes (Machines, yes more than one.) Do note that these will host the disks for your customers so the workload on these nodes will be I/O intensive (Disk I/O.) There are multiple ways to handle the back-end storage for the Cinder nodes. For our lab environment we are using a local disk on the controller. In a production environment this could be a disk mounted on your storage node from a Storage Area Network, Network Attached Storage or a distributed storage like ‘Ceph.’ The specifics of the back-end storage are beyond the scope of this episode and we don’t want to complicate things now do we? I do intend to do an entire episode on Ceph but like all good things, you have to wait for it :D. So for now, in order to configure Cinder please perform the following configuration:

Note on the syntax: Although I have covered these in the previous episodes, for the benefit of new readers please note the following:

  • @<servername> means that you need to do the configuration that follows on that server.
  • Whenever you see sudo vi <filename> that means that you need to edit that file and the indented text that follows is what needs to be edited in that file.
  • OS means OpenStack

@controller (Note that even if your storage node is separate, this configuration still needs to be done on the controller and NOT on the storage node)

Create the database and assign the user appropriate rights:

sudo mysql -u root -p
  CREATE DATABASE cinder;
  GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'MINE_PASS';
  GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'MINE_PASS';
  exit
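If you want to double-check that the database and the grants were created correctly (an optional sanity check, not part of the official steps), you can ask MySQL directly:

sudo mysql -u root -p -e "SHOW DATABASES LIKE 'cinder'; SHOW GRANTS FOR 'cinder'@'localhost';"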

Source the keystone_admin file to get OS command-line access

source ~/keystone_admin

Create the cinder user

openstack user create --domain default --password-prompt cinder

Add the role for the cinder user:

openstack role add --project service --user cinder admin
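Optionally, you can confirm that the role assignment took effect (the output lists IDs rather than names):

openstack role assignment list --user cinder --project service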

Cinder requires two services to operate. Create the required services:

openstack service create --name cinder   --description "OpenStack Block Storage" volume
openstack service create --name cinderv2   --description "OpenStack Block Storage" volumev2

Create the respective endpoints for each service:

openstack endpoint create --region RegionOne   volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volume admin http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne   volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
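To confirm that the endpoints were registered (another optional sanity check), you can list them per service:

openstack endpoint list --service volume
openstack endpoint list --service volumev2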

Perform the cinder software installation

sudo apt install cinder-api cinder-scheduler

Perform the edit on the cinder configuration file:

sudo vi /etc/cinder/cinder.conf

 [DEFAULT]
 auth_strategy = keystone
 #Define the URL/credentials to connect to RabbitMQ
 transport_url = rabbit://openstack:MINE_PASS@controller
 #This is the IP of the storage node (in our case it is the controller node)
 my_ip = 10.30.100.215

 [database]
 # Tell cinder how to connect to the database. Comment out any existing connection lines.
 connection = mysql+pymysql://cinder:MINE_PASS@controller/cinder

 # Tell cinder how to connect to keystone
 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 memcached_servers = controller:11211
 auth_type = password
 project_domain_name = default
 user_domain_name = default
 project_name = service
 username = cinder
 password = MINE_PASS

 [oslo_concurrency]
 lock_path = /var/lib/cinder/tmp

Populate the cinder database

sudo su -s /bin/sh -c "cinder-manage db sync" cinder
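If the sync worked, the cinder database should now contain tables. A quick optional check:

sudo mysql -u root -p -e "USE cinder; SHOW TABLES;"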

Configure the compute service to use cinder for block storage.

sudo vi /etc/nova/nova.conf
 [cinder]
 os_region_name = RegionOne

Restart nova service for the config changes to take effect.

sudo service nova-api restart

Start the cinder services on the controller.

sudo service cinder-scheduler restart
sudo service cinder-api restart
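If you want to make sure both services came up cleanly (optional), check their status and peek at the logs. The log path below is the Ubuntu package default; adjust it if your setup differs.

sudo service cinder-scheduler status
sudo service cinder-api status
sudo tail -n 20 /var/log/cinder/cinder-scheduler.log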

Yes, I promised that we would not complicate things. However, there are certain items that warrant some explanation. First of all, we are going to use the LVM driver in order to manage logical volumes for our disks. For our lab environment, note that we have an empty partition on a disk at /dev/vda3. This is the partition that will host all the cinder volumes that we will provide to our customers. For your environment, please substitute this with the respective name/path of the empty disk/partition you want to use.

@controller (or your storage node if your storage node is a separate one)

First we install the supporting utility for lvm.

sudo apt install lvm2

Now we set up the disk. The first command initializes the partition on our disk (or the whole disk if you are using a separate disk). As stated above, please replace the disk name with the appropriate one in your case.

sudo pvcreate /dev/vda3

The command below creates a volume group on the disk/partition that we initialized above. We use the name ‘cinder-volumes’ for the volume group. This volume group will contain all the cinder volumes (disks for the customer cloud instances.)

sudo vgcreate cinder-volumes /dev/vda3
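You can verify that the physical volume and the volume group were created (optional check):

sudo pvs
sudo vgs cinder-volumes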

Below is a filter that needs to be defined in order to avoid performance issues and other complications on the storage node (according to the official documentation.) By default the LVM scanning tool scans the /dev directory for block storage devices that contain volumes. We only want it to scan the devices that contain the cinder-volumes group (since that contains the volumes for OS.)

Configure the lvm configuration file as follows:

sudo vi /etc/lvm/lvm.conf
 devices {
 ...
 filter = [ "a/vda2/", "a/vda3/", "r/.*/"]

‘a’ in the filter is for accept, ‘r’ is for reject, and the rest of each entry is a regular expression. The line ends with “r/.*/” which rejects all remaining devices. But wait a minute. So the filter is showing vda3, which is fine (since that contains the cinder-volumes), but what is vda2 doing there? According to the OS documentation, if the storage node uses LVM on the operating system disk then we must add the associated device to the filter as well. For the lab, /dev/vda2 is the partition that contains the operating system. I said it before and I will say it again, relationships are work!!!
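As a purely hypothetical example, if your storage node kept its operating system on /dev/sda and used a dedicated /dev/sdb for cinder, the equivalent filter would look like this:

 filter = [ "a/sda/", "a/sdb/", "r/.*/"]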

[Figure: Logical depiction of cinder-volumes]

Now install the volume service software

sudo apt install cinder-volume

Edit the configuration file for cinder

sudo vi /etc/cinder/cinder.conf

 [DEFAULT]
 auth_strategy = keystone
 #Tell cinder how to connect to RabbitMQ
 transport_url = rabbit://openstack:MINE_PASS@controller
 #This is the IP of the storage node (controller for the lab)
 my_ip = 10.30.100.215
 #We are using the lvm backend (This is the name of the section we will define later in the file)
 enabled_backends = lvm
 glance_api_servers = http://controller:9292
 lock_path = /var/lib/cinder/tmp

 #Cinder DB connection. Comment out any existing connection entries.
 [database]
 connection = mysql+pymysql://cinder:MINE_PASS@controller/cinder

 #Tell cinder how to connect to keystone
 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 memcached_servers = controller:11211
 auth_type = password
 project_domain_name = default
 user_domain_name = default
 project_name = service
 username = cinder
 password = MINE_PASS

 #This is the backend subsection
 [lvm]
 #Use the LVM driver
 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
 #This is the name of the volume group we created using the vgcreate command. If you changed the name use the changed name here.
 volume_group = cinder-volumes
 #The volumes are provided to the instances using the ISCSI protocol
 iscsi_protocol = iscsi
 iscsi_helper = tgtadm

Start the block storage service and the dependencies

sudo service tgt restart
sudo service cinder-volume restart

Verify Operation

@controller

Source the OS command line:

source ~/keystone_admin

List the cinder services and ensure that the state is up:

openstack volume service list
+------------------+---------------------------+------+---------+-------+----------------------------+
| Binary           | Host                      | Zone | Status  | State | Updated At                 |
+------------------+---------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller                | nova | enabled | up    | 2016-12-14T07:24:22.000000 |
| cinder-volume    | controller@lvm            | nova | enabled | up    | 2016-12-14T07:24:22.000000 |
+------------------+---------------------------+------+---------+-------+----------------------------+

Since we worked so hard on this, let's do further verification. Let's try and create a test volume of size 1 GB.

openstack volume create --size 1 test-vol
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2016-12-14T07:28:59.491675           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | f6ec46ca-9ccf-47fb-aaea-cdde4ad9644e |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | test-vol                             |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | 97b1b7d8cb0d473c83094c795282b5cb     |
+---------------------+--------------------------------------+

Now let us list the volumes in the environment and ensure that the volume you just created appears with Status = available.

openstack volume list
+--------------------------------------+--------------+-----------+------+------------------------------+
| ID                                   | Display Name | Status    | Size | Attached to                  |
+--------------------------------------+--------------+-----------+------+------------------------------+
| f6ec46ca-9ccf-47fb-aaea-cdde4ad9644e | test-vol     | available |    1 |                              |
+--------------------------------------+--------------+-----------+------+------------------------------+
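Behind the scenes, that test volume now lives as a logical volume inside the cinder-volumes group. If you are curious (optional), take a peek and then delete the test volume so it does not linger around:

sudo lvs cinder-volumes
openstack volume delete test-vol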

Congratulations!! At this point you have reached a significant milestone. You have a fully functioning OS all to yourself. If you are following along with these episodes and have successfully verified the operation of the respective modules, then give yourself a pat on the back. I usually do a short victory dance at this point. (No, I don’t have a video for it :P)

When we started this series, I explained to you that OpenStack is complex (not complicated.) It comprises a number of parts (services) that work together to bring us the whole OS experience. We have been introduced to the main characters of OS, which ones like to work with which, and what the basic functions of each are. OS has a lot more characters and we will encounter a number of them in upcoming episodes.

I know by now you are wondering…but where is the heat? And how do we have cinder before heat? Oh well, as I keep saying, this is no ordinary story. Finally though, it's time to HEAT things up….Yaaaay. Yes … No … Don’t get too excited. Last time I mentioned physics you translated that to physical. Let me introduce to you one of my favorite characters of OS, the orchestration service Heat.

G. Heat

[Figure: Heat]

Heat is a service that manages orchestration. What is orchestration? Let me take you through an example. In future episodes, we (you and OS, like two peas in a pod) are going to start entertaining guests (Customers) in your cloud environment. Let's say you get a new customer. This customer will require certain networks, cloud instances, routers, firewall rules and so on. One way to achieve this is to use the OS command line tool or the Horizon GUI. Now both of these are good methods, however they are time consuming, require manual intervention and are prone to human error. What if I were to tell you that there is a way to automate most of these things and standardize them using templates so you can reuse them across Customers? This is what Heat does. It automates the facilitation of services to your guests (Cloud customers.) You will get a small taste of what a template looks like in the verification step at the end of this section. Since OS is sooo hot, there is too much heat to be discussed in half an episode. I will (very soon) dedicate an entire episode to heat (with examples 😀 khekhekhe…) For now, please perform the following configuration to set up heat on the controller node:

@controller

Create the heat database and assign full user rights

sudo mysql -u root -p
 CREATE DATABASE heat;
 GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \
 IDENTIFIED BY 'MINE_PASS';
 GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' \
 IDENTIFIED BY 'MINE_PASS';
 exit

Source the OS command line

source ~/keystone_admin

Create the heat user

openstack user create --domain default --password-prompt heat

Assign the role to the heat user

openstack role add --project service --user heat admin

Create the heat services (heat requires two services):

openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration"  cloudformation

Create the respective service endpoints

openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1

Heat requires a special domain in OS to operate. Create the respective domain:

openstack domain create --description "Stack projects and users" heat

Create the domain admin for the special heat domain

openstack user create --domain heat --password-prompt heat_domain_admin

Add the role for the heat_domain_admin

openstack role add --domain heat --user-domain heat --user heat_domain_admin admin

Create this new role. You can add this role to any user in OS who needs to manage heat stacks (a stack represents the set of resources created by applying a heat template in a given scenario. We will discuss this later.)

openstack role create heat_stack_owner

(Optional) Let's say you have a user customer1_admin in a project Customer1. You can use the following command to allow this user to manage heat stacks.

openstack role add --project Customer1 --user customer1_admin heat_stack_owner

Create the heat_stack_user role

openstack role create heat_stack_user

NOTE: The Orchestration service automatically assigns the heat_stack_user role to users that it creates during the stack deployment. By default, this role restricts API operations. To avoid conflicts, do not add this role to users with the heat_stack_owner role. (From official documentation.)

Install the heat software

sudo apt-get install heat-api heat-api-cfn heat-engine

Configure the heat config file

sudo vi  /etc/heat/heat.conf
 [DEFAULT]
 rpc_backend = rabbit
 heat_metadata_server_url = http://controller:8000
 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
 #This is the domain admin we defined above
 stack_domain_admin = heat_domain_admin
 stack_domain_admin_password = MINE_PASS
 #This is the name of the special domain we defined for heat
 stack_user_domain_name = heat
 #Tell heat how to connect to RabbitMQ
 transport_url = rabbit://openstack:MINE_PASS@controller

 #Heat DB connection. Comment out any existing connection entries
 [database]
 connection = mysql+pymysql://heat:MINE_PASS@controller/heat

 #Tell heat how to connect to keystone
 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 memcached_servers = controller:11211
 auth_type = password
 project_domain_name = default
 user_domain_name = default
 project_name = service
 username = heat
 password = MINE_PASS

 #This section is required for identity service access
 [trustee]
 auth_type = password
 auth_url = http://controller:35357
 username = heat
 password = MINE_PASS
 user_domain_name = default

 #This section is required for identity service access
 [clients_keystone]
 auth_uri = http://controller:35357

 #This section is required for identity service access
 [ec2authtoken]
 auth_uri = http://controller:5000

Initialize the Heat DB

sudo su -s /bin/sh -c "heat-manage db_sync" heat

Start the heat services.

sudo service heat-api restart
sudo service heat-api-cfn restart
sudo service heat-engine restart

Verify Operation:

Source the OS command line:

source ~/keystone_admin

List the Heat services and ensure that the status is set to up as shown below:

openstack orchestration service list
+-----------------------+-------------+--------------------------------------+-----------------------+--------+----------------------------+--------+
| hostname              | binary      | engine_id                            | host                  | topic  | updated_at                 | status |
+-----------------------+-------------+--------------------------------------+-----------------------+--------+----------------------------+--------+
| controller            | heat-engine | de08860a-8d30-483a-acd5-6cfef8cb7d77 | controller            | engine | 2016-12-14T07:53:42.000000 | up     |
| controller            | heat-engine | 859475c8-9b2a-4793-b877-e89a4f0920f8 | controller            | engine | 2016-12-14T07:53:42.000000 | up     |
| controller            | heat-engine | 4ca0a3bb-7c2b-4fe1-8233-82b7e0548b9a | controller            | engine | 2016-12-14T07:53:42.000000 | up     |
| controller            | heat-engine | d22b36b1-1467-4987-aa30-0ac9787450e1 | controller            | engine | 2016-12-14T07:53:42.000000 | up     |
+-----------------------+-------------+--------------------------------------+-----------------------+--------+----------------------------+--------+
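If you want to go one step further (entirely optional, and a hedged sketch rather than an official step), feed heat a tiny template and watch it build a stack. The template below simply creates a 1 GB cinder volume; the file name test-stack.yaml and the stack name test-stack are just names I picked, and the delete command may ask you to confirm.

cat > test-stack.yaml <<'EOF'
heat_template_version: 2016-04-08
description: Tiny test template that creates a 1 GB cinder volume
resources:
  test_volume:
    type: OS::Cinder::Volume
    properties:
      size: 1
EOF

openstack stack create -t test-stack.yaml test-stack
openstack stack list
openstack stack delete test-stack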

Fantastic!!!

RECAP:

  • You worked on this block by block (I know it's cheesy), bear with me (no not bare.)
  • It's starting to get a little hot in here.
  • You can’t wait for the next Episode!!! (I hope :))

Look at how far you and OS have come in only 3 episodes. Analyze the diagram below:

[Figure: OpenStack base services (+Heat)]

NEXT:

  • It is now time to start entertaining guests. We will learn how to do this the hard way. The idea is to understand the process and appreciate what is to follow.
  • The chemistry is good and the heat is real. We will learn how to channel this heat to our advantage.
  • Hopefully the excitement will have translated into happiness :D!!!

As always I sincerely thank you for reading and your patience. If you have any questions/comments please feel free to share in the comments section below so everyone from different sources can benefit from the discussion. I do intend to have the next episode out very soon so don’t go anywhere.

For my latest posts please visit WhatCloud.
