Episode 2 – Getting to know her better (An OpenStack Series)

If you have skipped the first Episode, I am not responsible for the technical and emotional confusion created when you go through Episode 2 of this OpenStack series. So if you haven’t already, I highly recommend Episode 1 – The Misconception. If, however, you like to live dangerously, then please read on.

Reading Level: Intermediate
Environment: OpenStack (Newton) on Ubuntu 16.04

RECAP for Episode 1:

  • You got to know the central character’s name.
  • You got to know its whereabouts.
  • You learnt what it needs to survive and how to set it all up if you wanted one for yourself.
  • You got all EXCITED!!

This is what we achieved in the last episode:

[Image: Base setup for deploying OpenStack]

Note: this Episode is a tad (in reality very) long. It is likely that you will read one part of it and hopefully return to the rest later. If you do, the following navigation menu will help you skip to the sections you are interested in.

This is what we will achieve in this Episode:

Usually in a relationship there comes a point where you exchange keys. This is usually when you know each other better. However, this is no ordinary storytelling, and hence the first step to having your own (OpenStack) OS is to set up the identity service called Keystone.

A. Keystone

[Image: Keystone]

Simply put, Keystone is the service that manages all the identities. These identities can belong to the customers you offer services to, and also to all the little microservices that make up OS itself. These identities have usernames and passwords associated with them, along with information on who is allowed to do what. There is much more to it, but we will leave the details for later episodes. If you work for any tech-savvy company, I am sure you are familiar with the concept of identity/access cards. These cards not only identify who you are but also control which doors you can or cannot open on the company premises. I hope you get the idea. So in order to set up this modern-day OS watchman, perform the following steps:
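For the curious: under the hood, every "login" against Keystone is an HTTP POST carrying a JSON body. Below is a minimal Python sketch (stdlib only, not a real client) of what a v3 password-authentication payload looks like; the username, password and project are the illustrative values used throughout this Episode.

```python
import json

def build_v3_auth_request(username, password, project, domain="default"):
    """Build the JSON body of a Keystone v3 password-authentication
    request (a sketch of what the clients send for you)."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # Ask for a token scoped to a particular project
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

body = build_v3_auth_request("admin", "MINE_PASS", "admin")
print(json.dumps(body, indent=2))
```

Nothing here talks to a server; it only shows the shape of the request that the openstack CLI builds for you later in this Episode.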

@controller
Log in to MariaDB (which is MySQL – yea really!!)

sudo mysql -u root -p

Create a database for keystone and give the keystone user full privileges on the newly created DB.

CREATE DATABASE keystone; 
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ 
  IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ 
  IDENTIFIED BY 'MINE_PASS'; 
exit

Install the keystone software

sudo apt install keystone

Edit the keystone configuration file

sudo vi /etc/keystone/keystone.conf 
   #Tell keystone how to access the DB 
   [database] 
   connection = mysql+pymysql://keystone:MINE_PASS@controller/keystone (Comment out the existing connection entry) 
 
   #Token management – fernet is the token format keystone will issue. Put it in, it's important. 
   [token] 
   provider = fernet

This command will initialize your keystone DB using the configuration you just set up above.

sudo su -s /bin/sh -c "keystone-manage db_sync" keystone

Since we have no identity management yet (because you are setting it up right now, duh!), we need to bootstrap it ourselves. First, set up the fernet key repositories used for tokens and credentials:

sudo keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone 
sudo keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Since OS is composed of a lot of microservices, each service that we define needs endpoint URLs. This is how other services will access it. Notice that there are three URLs. We will hopefully get into the details of this in a later Episode. For now take my word for it and run the following, which bootstraps keystone itself (creating the admin user and registering the identity endpoints):

sudo keystone-manage bootstrap --bootstrap-password MINE_PASS \ 
  --bootstrap-admin-url http://controller:35357/v3/ \ 
  --bootstrap-internal-url http://controller:35357/v3/ \ 
  --bootstrap-public-url http://controller:5000/v3/ \ 
  --bootstrap-region-id RegionOne
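To make the three-URL idea a bit more concrete, here is a toy Python sketch of the catalog entry that the bootstrap above creates: three URLs keyed by interface, with clients picking one by name. The values are hard-coded for illustration; this is not the real catalog API.

```python
# Toy service-catalog entry mirroring the three bootstrap URLs above
# (hard-coded for illustration, not fetched from a live keystone).
identity_endpoints = {
    "public":   "http://controller:5000/v3/",
    "internal": "http://controller:35357/v3/",
    "admin":    "http://controller:35357/v3/",
}

def pick_endpoint(endpoints, interface="public"):
    """Mimic a client choosing a URL from the catalog by interface."""
    return endpoints[interface]

print(pick_endpoint(identity_endpoints, "admin"))
# prints: http://controller:35357/v3/
```

External customers would typically be given the public URL, while services inside your cloud talk over the internal or admin one.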

You need to configure Apache for keystone. Keystone uses Apache to entertain requests from its buddy services in OS. Let's just say Apache is like a good secretary: it is better at handling and managing requests than Keystone would be on its own.

sudo vi /etc/apache2/apache2.conf  
    ServerName controller 
sudo service apache2 restart 
sudo rm -f /var/lib/keystone/keystone.db

One of the most useful ways to interact with OS is via the command line. Yes, OS is old school. However, if you want to interact with OS (and since OS is such a celebrity), you need to be authenticated and authorized. An easy way to do this is to create the following file and then source it in your command line.

sudo vi ~/keystonerc_admin 
    export OS_USERNAME=admin 
    export OS_PASSWORD=MINE_PASS 
    export OS_PROJECT_NAME=admin 
    export OS_USER_DOMAIN_NAME=default 
    export OS_PROJECT_DOMAIN_NAME=default 
    export OS_AUTH_URL=http://controller:35357/v3 
    export OS_IDENTITY_API_VERSION=3 
    export PS1='[\u@\h \W(keystonerc_admin)]$ '

To source it, just use the following command on the controller:

source ~/keystonerc_admin

Before we proceed we need to talk about a few additional terms. OpenStack uses the concept of Domains, Projects and Users.

  • Users are, well, just users of OS.
  • Projects are similar to Customers in the OS environment. So if I am using my OS environment to host Virtual Machines for Customer ABC and Customer XYZ, then ABC and XYZ could possibly be 2 projects.
  • Domains are a recent addition (as if things weren’t complex enough) that allows you further granularity. Let’s say you wanted to have administrative divisions within OpenStack, so each division could manage its own environment: then you use domains. So you could put ABC and XYZ in different domains and have separate administration for both, or you could put them in the same domain and manage them with a single administration. It’s just an added level of granularity. And you thought your relationships were complex!
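If it helps, here is a tiny Python sketch (purely illustrative, not an OpenStack API) of the two layouts described above for customers ABC and XYZ:

```python
# Toy model of the hierarchy: domains contain projects. The domain
# names below are made up for illustration.

# Layout 1: separate domains, separate administration per customer
separate = {
    "domain-abc": {"projects": ["ABC"]},
    "domain-xyz": {"projects": ["XYZ"]},
}

# Layout 2: one domain, a single administration for both customers
shared = {
    "default": {"projects": ["ABC", "XYZ"]},
}

def projects_in(layout, domain):
    """List the projects living under a given domain."""
    return layout[domain]["projects"]

print(projects_in(shared, "default"))
# prints: ['ABC', 'XYZ']
```

Same customers either way; the domain layer only decides who administers them.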

Create a special project to hold all the internal users. (Most microservices in OS have their own service users, and those are associated with this special project.)

openstack project create --domain default \ 
  --description "Service Project" service

Verify Operation

Run the following command to request an authentication token using the admin user:

openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue

Password:
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-11-30 13:05:15+00:00                                       |
| id         | gAAAAABYPsB7yua2kfIZhoDlm20y1i5IAHfXxIcqiKzhM9ac_MV4PU5OPiYf_   |
|            | m1SsUPOMSs4Bnf5A4o8i9B36c-gpxaUhtmzWx8WUVLpAtQDBgZ607ReW7cEYJGy |
|            | yTp54dskNkMji-uofna35ytrd2_VLIdMWk7Y1532HErA7phiq7hwKTKex-Y     |
| project_id | b1146434829a4b359528e1ddada519c0                                |
| user_id    | 97b1b7d8cb0d473c83094c795282b5cb                                |
+------------+-----------------------------------------------------------------+

So congratulations. You got her to give you the keys. Let’s see what’s next.

The next part of OS’s character that you are going to get introduced to is called Glance. Do not be fooled by the name. This is not about those secretive glances. Glance is actually the image service. This is nothing but a store for all the different operating system images for the Virtual Machines that you want to offer to your customers. These images are like stamps. When a customer requests a particular type of Virtual Machine, it is the job of Glance to find the correct image in its repository and hand it over to another service (which we will reveal later) for creating this Virtual Machine. I am sure you know what an operating system image is. If you don’t, you are reading the wrong blog :).

B. Glance

[Image: Glance]

So in order to configure OS’s precious image store perform the following steps on the @controller:

Log in to the DB

sudo mysql -u root -p

Create the Database and give full privileges to glance user

CREATE DATABASE glance; 
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ 
  IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ 
  IDENTIFIED BY 'MINE_PASS'; 
exit

Source the keystonerc_admin file to get command-line access

source ~/keystonerc_admin

Create the glance user

openstack user create --domain default --password-prompt glance

Give the user rights

openstack role add --project service --user glance admin

Create the glance service

openstack service create --name glance \ 
  --description "OpenStack Image" image

Create the glance endpoints

openstack endpoint create --region RegionOne \ 
  image public http://controller:9292 
openstack endpoint create --region RegionOne \ 
  image internal http://controller:9292 
openstack endpoint create --region RegionOne \ 
  image admin http://controller:9292

Install the glance software

sudo apt install glance

Edit the glance configuration file

sudo vi /etc/glance/glance-api.conf
  #Configure the DB connection 
  [database] 
  connection = mysql+pymysql://glance:MINE_PASS@controller/glance 
 
  #Tell glance how to get authenticated via keystone. Every time a service needs to do something it needs to be authenticated via keystone. 
  [keystone_authtoken] 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = glance 
  password = MINE_PASS 
  #(Comment out or remove any other options in the [keystone_authtoken] section.) 
 
  [paste_deploy] 
  flavor = keystone 
 
  #Glance can store images in different locations. We are using file for now 
  [glance_store] 
  stores = file,http 
  default_store = file 
  filesystem_store_datadir = /var/lib/glance/images/

Edit another configuration file

sudo vi /etc/glance/glance-registry.conf
  #Configure the DB connection 
  [database] 
  connection = mysql+pymysql://glance:MINE_PASS@controller/glance 
  #Tell glance-registry how to get authenticated via keystone. 
  [keystone_authtoken] 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = glance 
  password = MINE_PASS 
  #(Comment out or remove any other options in the [keystone_authtoken] section.) 
 
  #This selects keystone-based authentication for glance's paste pipeline. Just use it. 
  [paste_deploy] 
  flavor = keystone

This command will initialize the glance DB using the configuration files edited above.

sudo su -s /bin/sh -c "glance-manage db_sync" glance

Start the glance services

sudo service glance-registry restart 
sudo service glance-api restart

Verify Operation

Download a cirros cloud image

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Log in to the command line

source ~/keystonerc_admin

Create an OpenStack image using the command below

openstack image create "cirros" \ 
  --file cirros-0.3.4-x86_64-disk.img \ 
  --disk-format qcow2 --container-format bare \ 
  --public 

List the images and ensure that your image was created successfully

openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| d5edb2b0-ad3c-453a-b66d-5bf292dc2ee8 | cirros | active |
+--------------------------------------+--------+--------+

Are you noticing a pattern here? If not, let me help you. If you remember Episode 1, I mentioned that the OS components are similar but subtly different. As you will continue to see, most OS services follow a standard pattern in configuration. This pattern is as follows:

[Image: General set of steps for configuring OS components]

Phew! Relationships are work. Jokes apart, most OS components will follow the above sequence with minor deviations. So if you are having trouble configuring some component, it would be a good idea to refer to this list and see what you are missing.

Congratulations, along with the keys now you are also getting the occasional glances.

What follows is probably one of the most important parts of OS. It is called Nova. It has nothing to do with stars and galaxies, but it is quite galactic. Do you remember those occasional glances from the last section? What you were getting in the end was an operating system image. For all future purposes, whenever there is a reference to an image, the intention is a glance image. We introduce another term now, called instance. An instance is what is created out of an image. This is the virtual machine that you use to provide services to your customers. In simpler terms, let’s say you had a Windows CD. You use this CD and install Windows on one laptop. Then you use the same CD to install Windows on another laptop. You input different license keys for both and create different users for each, and they are two individual, independent laptops running Windows from the same CD. Using this analogy, an image is the equivalent of the Windows CD, and the Windows running on each of the laptops are the different instances. Nova simply does the same thing by taking the CD from Glance and creating, configuring and managing instances in the cloud, which are then handed over to customers.
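The CD-versus-laptops analogy can be sketched in a few lines of Python (toy classes, nothing to do with Nova's actual code): one image, many independent instances created from it, each with its own identity.

```python
import uuid

class Image:
    """The 'Windows CD': a template you stamp machines from."""
    def __init__(self, name):
        self.name = name

class Instance:
    """One machine stamped from an image; independent of its siblings."""
    def __init__(self, image, owner):
        self.id = str(uuid.uuid4())       # every instance gets its own identity
        self.source_image = image.name    # remembers which "CD" it came from
        self.owner = owner

cirros = Image("cirros")
vm1 = Instance(cirros, "ABC")
vm2 = Instance(cirros, "XYZ")

# Same "CD", two independent machines
print(vm1.source_image == vm2.source_image, vm1.id != vm2.id)
# prints: True True
```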

C. Nova

[Image: Nova]

Nova is one of those more complex types that resides on more than one machine and does different things on each. One component of nova sits on the controller and is responsible for overall management and communication with other OS services and the external world. The second component sits on each compute node (yes, you can have multiple computes, but we will look at those later). This service is primarily responsible for talking to the (new term alert) hypervisor to launch and manage instances. What a hypervisor is, is something I am assuming you know already, especially if you are looking to deploy a cloud solution. If not, then Google is your best friend. Perform the following steps to configure the Nova component:

@controller

Create the DB(s) and grant relevant privileges

sudo mysql -u root -p 

CREATE DATABASE nova_api; 
CREATE DATABASE nova; 
 
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ 
  IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ 
  IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ 
  IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ 
  IDENTIFIED BY 'MINE_PASS'; 
 
exit

Log in to the command line

source ~/keystonerc_admin

Create the user and assign the roles

openstack user create --domain default \ 
  --password-prompt nova 
openstack role add --project service --user nova admin 

Create the service and the corresponding endpoints

openstack service create --name nova \ 
  --description "OpenStack Compute" compute 
openstack endpoint create --region RegionOne \ 
  compute public http://controller:8774/v2.1/%\(tenant_id\)s 
openstack endpoint create --region RegionOne \ 
  compute internal http://controller:8774/v2.1/%\(tenant_id\)s 
openstack endpoint create --region RegionOne \ 
  compute admin http://controller:8774/v2.1/%\(tenant_id\)s

Install the software

sudo apt install nova-api nova-conductor nova-consoleauth \ 
  nova-novncproxy nova-scheduler

Edit the nova configuration file

sudo vi /etc/nova/nova.conf 
  #Configure the DB-1 access 
  [api_database] 
  connection = mysql+pymysql://nova:MINE_PASS@controller/nova_api 
 
  #Configure the DB-2 access (nova has 2 DBs)
  [database] 
  connection = mysql+pymysql://nova:MINE_PASS@controller/nova 
 
  [DEFAULT] 
  #Configure how to access RabbitMQ 
  transport_url = rabbit://openstack:MINE_PASS@controller 
  #Use the below. Some details will follow later 
  auth_strategy = keystone 
  my_ip = 10.30.100.215 
  use_neutron = True 
  firewall_driver = nova.virt.firewall.NoopFirewallDriver
  #Optional parameter - if you want to thin provision VM disks
  disk_allocation_ratio = 3.0 
 
  #Tell Nova how to access keystone 
  [keystone_authtoken] 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = nova 
  password = MINE_PASS 
 
  #This provides remote access to instance consoles (French? Just take it on faith. We will explore this in a much later episode) 
  [vnc] 
  vncserver_listen = $my_ip 
  vncserver_proxyclient_address = $my_ip 
 
  #Nova needs to talk to glance to get the images 
  [glance] 
  api_servers = http://controller:9292 
 
  #Some locking mechanism for message queuing (Just use it.) 
  [oslo_concurrency] 
  lock_path = /var/lib/nova/tmp
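A quick note on the optional disk_allocation_ratio above: it is a simple overcommit multiplier. The scheduler will hand out up to physical disk times the ratio, which is why it only makes sense with thin-provisioned VM disks. A back-of-the-envelope check:

```python
def allocatable_gb(physical_gb, disk_allocation_ratio):
    """Disk the scheduler will allocate on one hypervisor:
    physical capacity multiplied by the overcommit ratio."""
    return physical_gb * disk_allocation_ratio

# e.g. a hypothetical 200 GB compute node with the ratio 3.0 used above
print(allocatable_gb(200, 3.0))
# prints: 600.0
```

If your instances actually fill their disks, the overcommit catches up with you, so keep an eye on real usage.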

Initialize both the DBs using the configuration done above.

sudo su -s /bin/sh -c "nova-manage api_db sync" nova 
sudo su -s /bin/sh -c "nova-manage db sync" nova

Start all the nova services

sudo service nova-api restart 
sudo service nova-consoleauth restart 
sudo service nova-scheduler restart 
sudo service nova-conductor restart 
sudo service nova-novncproxy restart

@compute1
Install the software

sudo apt install nova-compute

Edit the nova configuration file

sudo vi /etc/nova/nova.conf
  [DEFAULT] 
  #Tell nova-compute how to access RabbitMQ 
  transport_url = rabbit://openstack:MINE_PASS@controller 
  #Take it on faith for now 🙂 
  auth_strategy = keystone 
  my_ip = 10.30.100.213 
  use_neutron = True 
  firewall_driver = nova.virt.firewall.NoopFirewallDriver 
 
  #Tell the nova-compute how to access keystone 
  [keystone_authtoken] 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = nova 
  password = MINE_PASS 
 
  #This provides remote access to instance consoles (French? Just take it on faith. We will explore this in a much later episode) 
  [vnc] 
  enabled = True 
  vncserver_listen = 0.0.0.0 
  vncserver_proxyclient_address = $my_ip 
  novncproxy_base_url = http://controller:6080/vnc_auto.html 
 
  #Nova needs to talk to glance to get the images 
  [glance] 
  api_servers = http://controller:9292 
 
  #Some locking mechanism for message queuing (Just use it.) 
  [oslo_concurrency] 
  lock_path = /var/lib/nova/tmp

Okay, so the following requires some explanation. In a production environment your compute will be a physical machine, and hence the steps below will NOT be required. But since this is a lab, we need to set the virtualization type for the KVM hypervisor to qemu (as opposed to kvm). This setting runs the hypervisor without looking for the hardware acceleration that kvm provides on a physical machine. So you are going to run virtual machines inside a virtual machine in the lab, and it works 😉

For a virtual compute

sudo vi /etc/nova/nova-compute.conf 
  [libvirt] 
  virt_type = qemu
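If you are unsure whether a given compute node actually has hardware acceleration, a common check is to count the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. Here is a small Python sketch of that check (the path is Linux-specific; on anything else it just reports zero):

```python
import os

def accel_flag_count(path="/proc/cpuinfo"):
    """Count CPU-flag lines advertising vmx (Intel VT-x) or svm (AMD-V).
    Zero means no visible hardware acceleration: stick with virt_type = qemu."""
    if not os.path.exists(path):   # e.g. non-Linux hosts
        return 0
    with open(path) as f:
        return sum(1 for line in f
                   if line.startswith("flags")
                   and ("vmx" in line or "svm" in line))

print(accel_flag_count())
```

On a physical box with virtualization enabled in the BIOS you would expect a non-zero count; inside our lab VMs it will typically be zero, which is exactly why we set qemu above.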

Start the nova service

sudo service nova-compute restart

Verify Operation

@controller

Log in to the command line

source ~/keystonerc_admin

Run the following command to list the nova services. Ensure the State is up as shown below:

openstack compute service list
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                  | Zone     | Status  | State | Updated At                 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
|  3 | nova-consoleauth | controller            | internal | enabled | up    | 2016-11-30T12:54:39.000000 |
|  4 | nova-scheduler   | controller            | internal | enabled | up    | 2016-11-30T12:54:36.000000 |
|  5 | nova-conductor   | controller            | internal | enabled | up    | 2016-11-30T12:54:34.000000 |
|  6 | nova-compute     | compute1              | nova     | enabled | up    | 2016-11-30T12:54:33.000000 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+

I hope you noticed the similarity in the sequence of steps followed to configure nova. Also I hope you are having fun reading.

So she is talented and makes these amazing galactic objects that you can eventually sell to make money. Aren’t you the lucky one? First the keys, then the glances and now the money. Don’t get too excited. Let’s continue…

Time for some physics. No no no, I did not say physical, I just said physics. But really that is just a reference to the name Neutron. Remember 9th grade physics? Anyway, so what is this Neutron? Remember that our goal at the end of the day is to provide services to our customers. These services are in the form of Virtual Machines, or services running over these virtual machines. If we are looking to cater to a lot of customers, then each of them will have their own set of services that they are consuming. These services, like any other infrastructure, will require a network. You could need things like routers, firewalls, load balancers, VPNs and so on and so forth. Now imagine setting these up manually for each customer that you are providing services to, and then managing them. Not happening. This is exactly what Neutron does for you.

D. Neutron

[Image: Neutron]

Of course, it’s never going to be easy. In my (very humble) personal opinion, this is the part of OS’s character that has the worst temper and suffers from very frequent mood swings. In simpler terms: it’s complex. In our setup, the major neutron services will reside on two servers, namely the controller and the neutron node. You could put everything on one machine and it should work; however, splitting it seems to be the latest approach proposed by the official documentation. My hunch is that it has something to do with easier scaling of the neutron component later. Honestly, I have not explored this yet, so I could be wrong. Someday we might have an episode on this too. We never know. There will also be a neutron component on the compute node, which I will explain later.

Before we get in to detailing the configuration, I need to explain a few minor terms:

  • When we talk about networks under neutron, we will come across mainly two types of networks. The first is the external network. This is usually configured once and represents the network used by OS to access the external world. The second is tenant networks, the networks assigned to customers.
  • An OpenStack environment also requires a virtual switching (bridging) component in order to manage virtualized networking across the neutron and compute nodes. The two components mostly used are Linux Bridge and OpenVSwitch. (If you want to understand a bit more about OpenVSwitch you can refer to one of my other entries, Understanding OpenVSwitch.) We will be using OpenVSwitch for our environment. Please note that if you intend to use Linux Bridge the configuration will be different.

In order to deploy neutron (sounds like a war order “DEPLOY NEUTRON!!”) please perform the following configuration:

@controller
Create the Database and assign full rights to the neutron user (yawn!!!)

sudo mysql -u root -p 
CREATE DATABASE neutron; 
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ 
  IDENTIFIED BY 'MINE_PASS'; 
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ 
  IDENTIFIED BY 'MINE_PASS'; 
exit

Log in to the command line

source ~/keystonerc_admin

Create the neutron user and add the role

openstack user create --domain default --password-prompt neutron 
openstack role add --project service --user neutron admin

Create the neutron service and the respective endpoints

openstack service create --name neutron \ 
  --description "OpenStack Networking" network 
openstack endpoint create --region RegionOne \ 
  network public http://controller:9696 
openstack endpoint create --region RegionOne \ 
  network internal http://controller:9696 
openstack endpoint create --region RegionOne \ 
  network admin http://controller:9696

Install the software components

sudo apt install neutron-server neutron-plugin-ml2

Configure the neutron config file

sudo vi /etc/neutron/neutron.conf 
  [DEFAULT] 
  #ml2 is the Modular Layer 2 plugin 
  core_plugin = ml2 
  service_plugins = router 
  allow_overlapping_ips = True 
  notify_nova_on_port_status_changes = True 
  notify_nova_on_port_data_changes = True 
 
  [database] 
  #Configure the DB connection 
  connection = mysql+pymysql://neutron:MINE_PASS@controller/neutron 
 
  [keystone_authtoken] 
  #Tell neutron how to talk to keystone 
  auth_uri = http://controller:5000 
  auth_url = http://controller:35357 
  memcached_servers = controller:11211 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  project_name = service 
  username = neutron 
  password = MINE_PASS 
 
  [nova] 
  #Tell neutron how to talk to nova to inform nova about changes in the network 
  auth_url = http://controller:35357 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  region_name = RegionOne 
  project_name = service 
  username = nova 
  password = MINE_PASS

Configure the plugin file

sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini 
  [ml2] 
  #In our environment we will use vlan networks so the below setting is sufficient. You could also use vxlan and gre, but that is for a later episode 
  type_drivers = flat,vlan 
  #Here we are telling neutron that all our customer networks will be based on vlans 
  tenant_network_types = vlan 
  #Our SDN type is openVSwitch 
  mechanism_drivers = openvswitch,l2population 
  extension_drivers = port_security 
 
  [ml2_type_flat] 
  #External network is a flat network 
  flat_networks = external 
 
  [ml2_type_vlan] 
  #This is the range we want to use for vlans assigned to customer networks.  
  network_vlan_ranges = external,vlan:1381:1399 
 
  [securitygroup] 
  #Use an iptables-based firewall 
  firewall_driver = iptables_hybrid
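The network_vlan_ranges value above packs two different things into one line: a bare physical-network name (external, no vlan range) and a name with a min:max vlan range (vlan:1381:1399). A small parser sketch (illustrative, not neutron's actual code) makes the structure explicit:

```python
def parse_vlan_ranges(value):
    """Turn 'external,vlan:1381:1399' into
    {'external': None, 'vlan': (1381, 1399)}."""
    ranges = {}
    for entry in value.split(","):
        parts = entry.split(":")
        if len(parts) == 1:
            ranges[parts[0]] = None          # physnet with no vlan range
        else:
            net, lo, hi = parts
            ranges[net] = (int(lo), int(hi))  # physnet with a vlan range
    return ranges

print(parse_vlan_ranges("external,vlan:1381:1399"))
# prints: {'external': None, 'vlan': (1381, 1399)}
```

So neutron will hand vlans 1381 through 1399 to tenant networks on the vlan physical network, while external carries no vlan range of its own.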

Note that I tried to run the su command directly using sudo and for some reason it failed for me. An alternative is to sudo su (to get root access) and then run the DB initialization using the config files above. Run the following sequence to initialize the DB.

sudo su - 
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ 
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Start all the neutron services

sudo service neutron-* restart

Edit the nova configuration file

sudo vi /etc/nova/nova.conf 
  #Tell nova how to get in touch with neutron, to get network updates 
  [neutron] 
  url = http://controller:9696 
  auth_url = http://controller:35357 
  auth_type = password 
  project_domain_name = default 
  user_domain_name = default 
  region_name = RegionOne 
  project_name = service 
  username = neutron 
  password = MINE_PASS 
  service_metadata_proxy = True 
  metadata_proxy_shared_secret = MINE_PASS

Restart nova services

sudo service nova-* restart

@neutron
Install the required services

sudo apt install neutron-plugin-ml2 \ 
  neutron-l3-agent neutron-dhcp-agent \ 
  neutron-metadata-agent neutron-openvswitch-agent

Run the following OpenVSwitch commands to create the required bridges.

Create br-ex bridge that will connect OS to the external network

sudo ovs-vsctl add-br br-ex

Add a port on the br-ex bridge to the ens10 interface. In my environment ens10 is the interface connected on the External Network. You should change it as per your environment.

sudo ovs-vsctl add-port br-ex ens10

The following bridge that we will add is used by the vlan networks for the customer networks in OS. Run the following command to create the bridge.

sudo ovs-vsctl add-br br-vlan

Add a port on the br-vlan bridge to the ens9 interface. In my environment ens9 is the interface connected on the Tunnel Network. You should change it as per your environment.

sudo ovs-vsctl add-port br-vlan ens9 

In order for our OpenVSwitch configuration to persist beyond server reboots we need to configure the interface file accordingly.

sudo vi /etc/network/interfaces 
  # This file describes the network interfaces available on your system 
  # and how to activate them. For more information, see interfaces(5). 
 
  source /etc/network/interfaces.d/* 
 
  # The loopback network interface 
  # No Change 
  auto lo 
  iface lo inet loopback 
 
  #No Change on management network 
  auto ens3 
  iface ens3 inet static 
  address 10.30.100.216 
  netmask 255.255.255.0 
 
  # Add the br-vlan bridge 
  auto br-vlan 
  iface br-vlan inet manual 
  up ifconfig br-vlan up 
 
  # Configure ens9 to work with OVS 
  auto ens9 
  iface ens9 inet manual 
  up ip link set dev $IFACE up 
  down ip link set dev $IFACE down 
 
  # Add the br-ex bridge and move the IP for the external network to the bridge 
  auto br-ex 
  iface br-ex inet static 
  address 172.16.8.216 
  netmask 255.255.255.0 
  gateway 172.16.8.254 
  dns-nameservers 8.8.8.8 

  # Configure ens10 to work with OVS. Remove the IP from this interface  
  auto ens10 
  iface ens10 inet manual 
  up ip link set dev $IFACE up 
  down ip link set dev $IFACE down

Reboot to ensure the new configuration is fully applied

sudo reboot

Configure the neutron config file

sudo vi /etc/neutron/neutron.conf
  [DEFAULT]
  auth_strategy = keystone
  #Tell neutron how to access RabbitMQ
  transport_url = rabbit://openstack:MINE_PASS@controller

  #Tell neutron how to access keystone
  [keystone_authtoken]
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = MINE_PASS

Configure the openvswitch agent config file

sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
  #Configure the section for OpenVSwitch
  [ovs]
  #Note that we are mapping alias(es) to the bridges. Later we will use these aliases (vlan,external) to define networks inside OS.
  bridge_mappings = vlan:br-vlan,external:br-ex

  [agent]
  l2_population = True

  [securitygroup]
  #iptables-based firewall
  firewall_driver = iptables_hybrid
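The bridge_mappings value is just a comma-separated list of alias:bridge pairs. A tiny parser sketch (again illustrative, not the agent's real code) shows how the aliases we will later use in network definitions map onto the OVS bridges created earlier:

```python
def parse_bridge_mappings(value):
    """Turn 'vlan:br-vlan,external:br-ex' into {alias: bridge}."""
    mappings = {}
    for pair in value.split(","):
        alias, bridge = pair.split(":", 1)
        mappings[alias.strip()] = bridge.strip()
    return mappings

print(parse_bridge_mappings("vlan:br-vlan,external:br-ex"))
# prints: {'vlan': 'br-vlan', 'external': 'br-ex'}
```

When a network is created inside OS against the alias vlan or external, the agent uses this mapping to decide which physical bridge its traffic goes through.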

Configure the Layer 3 Agent configuration file.

sudo vi /etc/neutron/l3_agent.ini
  [DEFAULT]
  #Tell the agent to use the OVS driver
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
  #This is required to be set like this by the official documentation (If you don’t set it to empty as shown below, sometimes your router ports in OS will not become Active)
  external_network_bridge =

Configure the DHCP Agent config file

sudo vi /etc/neutron/dhcp_agent.ini
  [DEFAULT]
  #Tell the agent to use the OVS driver
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
  enable_isolated_metadata = True

Configure the Metadata Agent config file

sudo vi /etc/neutron/metadata_agent.ini

  [DEFAULT]
  nova_metadata_ip = controller
  metadata_proxy_shared_secret = MINE_PASS

Start all neutron services

sudo service neutron-* restart

@compute1

Install the ml2 plugin and the openvswitch agent

sudo apt install neutron-plugin-ml2 \
 neutron-openvswitch-agent

Create the OpenVSwitch bridge for tenant vlans (no external network here)

sudo ovs-vsctl add-br br-vlan

Add ens9 as a port on the br-vlan bridge. In my environment ens9 is the interface connected to the Tunnel Network; you should change it as per your environment.

sudo ovs-vsctl add-port br-vlan ens9

For our Open vSwitch configuration to persist across reboots, we need to configure the interfaces file accordingly.

sudo vi /etc/network/interfaces

  # This file describes the network interfaces available on your system
  # and how to activate them. For more information, see interfaces(5).

  source /etc/network/interfaces.d/*

  # The loopback network interface
  #No Change
  auto lo
  iface lo inet loopback

  #No Change to management network
  auto ens3
  iface ens3 inet static
  address 10.30.100.213
  netmask 255.255.255.0

  # Add the br-vlan bridge interface
  auto br-vlan
  iface br-vlan inet manual
  up ifconfig br-vlan up

  #Configure ens9 to work with OVS
  auto ens9
  iface ens9 inet manual
  up ip link set dev $IFACE up
  down ip link set dev $IFACE down

Reboot to ensure the new network configuration is applied successfully

sudo reboot

Configure the neutron config file

sudo vi /etc/neutron/neutron.conf
  [DEFAULT]
  auth_strategy = keystone
  #Tell neutron component how to access RabbitMQ
  transport_url = rabbit://openstack:MINE_PASS@controller

  #Configure access to keystone
  [keystone_authtoken]
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = MINE_PASS

Configure the nova config file

sudo vi /etc/nova/nova.conf
  #Tell nova how to access neutron for network topology updates
  [neutron]
  url = http://controller:9696
  auth_url = http://controller:35357
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = MINE_PASS

Configure the openvswitch agent config file

sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
  #Here we are mapping the alias vlan to the bridge br-vlan
  [ovs]
  bridge_mappings = vlan:br-vlan

  [agent]
  l2_population = True

  [securitygroup]
  firewall_driver = iptables_hybrid

I find it a good idea to reboot the compute node at this point; I was getting connectivity issues without a reboot. Let me know how it goes for you.

sudo reboot

Start all neutron services

sudo systemctl restart neutron-*

Verify Operation

@controller

Log in to the command line

source ~/keystonerc_admin

Run the following command to list the neutron agents. Ensure that Alive is True and State is UP for every agent, as shown below:

openstack network agent list
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host                  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| 84d81304-1922-47ef-8b8e-c49f83cff911 | Metadata agent     | neutron               | None              | True  | UP    | neutron-metadata-agent    |
| 93741a55-54af-457e-b182-92e15d77b7ae | L3 agent           | neutron               | None              | True  | UP    | neutron-l3-agent          |
| a3c9c1e5-46c3-4649-81c6-dc4bb9f35158 | Open vSwitch agent | neutron               | None              | True  | UP    | neutron-openvswitch-agent |
| ba9ce5bb-6141-4fcc-9379-c9c20173c382 | DHCP agent         | neutron               | nova              | True  | UP    | neutron-dhcp-agent        |
| e458ba8a-8272-43bb-bb83-ca0aae48c22a | Open vSwitch agent | compute1              | None              | True  | UP    | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
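If you prefer to script this check, the same list is easier to parse in machine-readable form. A small sketch (assuming the -f value -c Alive output options of python-openstackclient; the helper name is mine):

```shell
# count_dead_agents: counts agents whose Alive column is False.
# Feed it the output of:
#   openstack network agent list -f value -c Alive
count_dead_agents() {
  grep -c False || true
}

# Canned example of what the helper does; on a healthy
# deployment the count should be 0:
printf 'True\nTrue\nFalse\n' | count_dead_agents   # -> 1
```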

At this point I do realize that it’s becoming a bit of a drag, but we are almost there.

So far our interaction with OS has been, well, mostly black and white. I mean no offense to the OS command line interface, but it’s time to add some color to this relationship and see what we can achieve by pressing the right buttons. The component to be discussed is called Horizon.

E. Horizon

ose2-6
Horizon

Horizon is the component that provides the graphical user interface for OS. It is simple and sweet, and so is its configuration. Perform the following steps to install and configure Horizon:

@controller
Install the software

sudo apt install openstack-dashboard

Update the configuration file. Please search for each of these entries in the file and edit them in place, to avoid creating duplicates.

sudo vi /etc/openstack-dashboard/local_settings.py
  OPENSTACK_HOST = "controller"
  ALLOWED_HOSTS = ['*', ]

  SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

  CACHES = {
   'default': {
   'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
   'LOCATION': 'controller:11211',
   }
  }
 
  OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
  OPENSTACK_API_VERSIONS = {
   "identity": 3,
   "image": 2,
   "volume": 2,
  }
  OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
  OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
  TIME_ZONE = "TIME_ZONE"

Replace TIME_ZONE with an appropriate time zone identifier from the IANA time zone database (for example, Asia/Kolkata).

Start the dashboard service

sudo service apache2 reload
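Before switching to the browser, you can sanity-check Apache from the shell. A rough sketch (assuming curl is installed; the helper name is mine), where anything in the 2xx/3xx range means the dashboard is answering:

```shell
# http_status: extracts the status code from the first line of an
# HTTP response (the output of curl -sI).
http_status() {
  head -n 1 | awk '{print $2}'
}

# On the controller you would run something like:
#   curl -sI http://controller/horizon | http_status
# Canned example of what the helper does:
printf 'HTTP/1.1 302 Found\r\n' | http_status   # -> 302
```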

Verify Operation

Using a computer that has access to the controller, open the following URL in a browser:

http://controller/horizon

Please replace controller in the above URL with the controller’s IP address if you cannot resolve the controller by name. So in my case the URL becomes http://10.30.1.215/horizon

You should see a login screen similar to the one below

ose2-8
OpenStack Dashboard Login Screen

Enter admin as the username, along with the password you used when setting up the admin user (MINE_PASS in my case). If you log in successfully and see an interface similar (not necessarily identical) to the one below, then your Horizon is working just fine.

ose2-9
OpenStack Dashboard

RECAP:

For the sake of understanding, let’s do a recap of what we have achieved so far:

  • You got the keys.
  • You occasionally steal a glance or two.
  • You see that there is potential to make money.
  • She has some serious networking skills.
  • And she is simple and sweet to interact with.

Look at the diagram below. This is what we have achieved so far.

ose2-7
OpenStack Environment so far

NEXT:

  • We will learn the importance of building this block by block
  • Add some heat and stir things up
  • Be Excited Again!!!…

If you have survived my absolute ‘non-sense’ (sense) of humor and are still reading, please give yourself a standing ovation. Once again, thank you for reading and for your patience. If you have any questions or comments, please share them below in the comments section so that everyone can benefit from the discussion. If you are curious to know what happens next, then please read Episode 3 – Lets heat things up.

For my latest posts please visit WhatCloud.
