OpenStack Laboratory Guide
Version 5.0.1
(Pike Release)
Diarmuid Ó Briain
Last updated: 2 October 2017
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file
except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the
License is distributed on an “as is” basis, without warranties or conditions of any kind, either
express or implied. See the License for the specific language governing permissions and
limitations under the License.
Table of Contents
1. INTRODUCTION TO OPENSTACK..........................................................................................................9
1.1 ORIGINS OF OPENSTACK............................................................................................................................9
1.2 ROLE OF THE OPENSTACK FOUNDATION..................................................................................................9
1.3 OPENSTACK SERVICES.............................................................................................................................10
1.3.1 Nova ‘Compute’ Service.................................................................................................................................10
1.3.2 Neutron ‘Networking’ Service.........................................................................................................11
1.3.3 Swift ‘Object Storage’ service........................................................................................................12
1.3.4 Cinder ‘Block Storage’ service.......................................................................................................13
1.3.5 Keystone ‘Identity’ service..............................................................................................................13
1.3.6 Glance ‘Image store’ service..........................................................................................................14
1.3.7 Other Services.................................................................................................................................14
1.4 BEHIND THE CORE OPENSTACK PROJECTS............................................................................................14
1.4.1 The RESTful API.............................................................................................................................................15
1.5 OPENSTACK RELEASES............................................................................................................................15
2. OPENSTACK TRAINING LABORATORY..............................................................................................17
2.1 ARCHITECTURE.........................................................................................................................................17
2.2 CONTROLLER NODE..................................................................................................................................18
2.3 COMPUTE NODE........................................................................................................................................18
2.3.1 Networking.......................................................................................................................................................18
2.4 PASSWORDS..............................................................................................................................................19
3. OPENSTACK TRAINING LABS PRE-INSTALLATION......................................................................21
3.1 GET GIT.....................................................................................................................................................21
3.2 CLONE THE TRAINING LABS......................................................................................................................21
3.3 UPGRADE THE TRAINING LABS..................................................................................................................21
3.4 CLUSTER TRAINING DIRECTORY VARIABLES.............................................................................................21
3.5 PRE-INSTALLATION CHECK.......................................................................................................................22
3.5.1 Enable virtualisation support in BIOS............................................................................................................22
3.6 OPTIMISE THE NODES...............................................................................................................................23
3.7 ENABLE HEAT SERVICE.............................................................................................................................24
3.8 LOG FILES..................................................................................................................................................24
3.9 ADD CONTROLLER AND COMPUTE1 IP TO HYPERVISOR HOSTS FILE......................................................25
4. SETUP OPENSTACK TRAINING LABS ON KVM/QEMU.................................................................27
4.1 INSTALLATION............................................................................................................................................28
4.1.1 Install KVM packages.....................................................................................................................................28
4.2 GNU/LINUX BRIDGE UTILITIES.................................................................................................................28
4.3 VIRT-MANAGER..........................................................................................................................................28
4.4 BUILD INTRODUCTION...............................................................................................................................30
4.5 BUILD STEPS.............................................................................................................................................31
4.6 RUN THE STACKTRAIN SCRIPT..................................................................................................................32
4.6.1 Stacktrain.........................................................................................................................................................32
4.6.2 Confirm installed release.................................................................................................................................32
4.6.3 Memory and harddisks....................................................................................................................................32
4.7 USING THE CLUSTER.................................................................................................................................33
4.7.1 Review the running VMs.................................................................................................................................33
17.2.4 Parameters..................................................................................................................................................159
17.2.5 Resources....................................................................................................................................................159
17.2.6 Outputs........................................................................................................................................................ 159
17.2.7 Conditions....................................................................................................................................................159
17.3 CREATING SINGLE SERVERS.................................................................................................................160
17.4 CREATE COMPLETE NETWORK AND SERVERS......................................................................................166
17.4.1 Networks – child template..........................................................................................................................166
17.4.2 Delete stack.................................................................................................................................................170
17.4.3 Parent template...........................................................................................................................................171
17.4.4 Stack events................................................................................................................................................174
17.4.5 Add a route on the hypervisor to the private network...............................................................................176
17.4.6 Test the configuration.................................................................................................................................177
17.4.7 Review topology on the Horizon dashboard.............................................................................................178
18. APPENDICES..........................................................................................................................................179
18.1 APPENDIX 1 - NAT MASQUERADE SCRIPT FOR HYPERVISOR HOST..................................................179
18.2 APPENDIX 2 – CLUSTER START/STOP SCRIPT....................................................................................180
18.2.1 Running for a KVM/QEMU system............................................................................................................182
18.2.2 Running for a VirtualBox system................................................................................................................183
18.3 APPENDIX 3 - CLEAN NODES SCRIPT FOR HYPERVISOR HOST...........................................................184
18.3.1 Running for a KVM/QEMU system............................................................................................................186
18.3.2 Running for a VirtualBox system................................................................................................................187
18.4 APPENDIX 4 - SCRIPT TO LAUNCH A VM INSTANCE............................................................................188
18.5 APPENDIX 5 - SCRIPT TO LAUNCH A NETWORK WITH VMS.................................................................190
18.6 APPENDIX 6 - STACKTRAIN CLUSTER CREATION SCRIPT – KVM........................................................192
18.7 APPENDIX 7 - STACKTRAIN CLUSTER CREATION SCRIPT – VIRTUALBOX............................................196
19. ABBREVIATIONS...................................................................................................................................201
20. BIBLIOGRAPHY......................................................................................................................................203
Illustration Index
Illustration 1: OpenStack - Projects navigator....................................................................................10
Illustration 2: Neutron networking........................................................................................................11
Illustration 3: Swift 'Object Storage' service.......................................................................................12
Illustration 4: OpenStack Laboratory architecture.............................................................................17
Illustration 5: KVM virtualisation block diagram.................................................................................27
Illustration 6: virt-manager via SSH.....................................................................................................29
Illustration 7: Deploy an instance.........................................................................................................83
Illustration 8: Floating IP addresses.....................................................................................................84
Illustration 9: Virtual Console................................................................................................................94
Illustration 10: Ubuntu instance..........................................................................................................111
Illustration 11: Horizon login - KVM/QEMU testbed........................................................................129
Illustration 12: Horizon login - VirtualBox testbed............................................................................130
Illustration 13: Admin opening dashboard screen...........................................................................131
Illustration 14: Create Project.............................................................................................................132
Illustration 15: Create Flavour............................................................................................................133
Illustration 16: Create User.................................................................................................................134
Illustration 17: Project User opening dashboard screen.................................................................135
Illustration 18: Create Security Group...............................................................................................136
Illustration 19: Adding rules................................................................................................................136
Illustration 20: Launch Instance - Details..........................................................................................137
Illustration 21: Instance Launch - Source.........................................................................................137
Illustration 22: Launch Instance - Flavour.........................................................................................138
Illustration 23: Add the Security Group.............................................................................................138
Illustration 24: Instance launched......................................................................................................139
Illustration 25: Simple network...........................................................................................................141
Illustration 26: Network topology........................................................................................................145
Illustration 27: Network graph............................................................................................................145
Illustration 28: HEAT functional diagram..........................................................................................153
Illustration 29: Network topology........................................................................................................166
Illustration 30: Network graph............................................................................................................166
Illustration 31: Network topology........................................................................................................174
Illustration 32: Network graph............................................................................................................174
1. Introduction to OpenStack
Throughout the years, corporate computing has seen many developments, which
have eventually led to the rise of cloud computing as we know it today. In the 1990s,
corporate computing was centred around servers in a data centre. In the 2000s,
corporate computing was largely based on virtualisation. In the 2010s, we have
witnessed the rise of cloud computing. The concept of cloud computing is very broad,
to the point that it is fair to say cloud computing is a concept rather than a particular
technological development. If you ask an end-user to explain what cloud computing
is, and then ask a system administrator the same question, you will get two different
descriptions. In general, there are three important approaches when it comes to
cloud computing:
• Infrastructure as a Service (IaaS): an infrastructure that is used to provide
Virtual Machines (VM)
• Platform as a Service (PaaS): the provider supplies the network, servers,
storage, OS and middleware to host an application
• Software as a Service (SaaS): the provider gives access to an application.
OpenStack belongs to the IaaS cloud computing category. However, OpenStack is
continuously evolving, broadening its scope. On occasion, the focus of OpenStack
goes beyond IaaS.
Illustration 2: Neutron networking (instances 1 to 4 share an overlay broadcast domain built on top of an underlay network interconnected by Router #1 and Router #2)
OpenStack Neutron enables Software Defined Networking (SDN). SDN allows users
to define their own networking between the instances that are deployed. Illustration 2
demonstrates a typical OpenStack environment, in which a number of compute
nodes are connected by a physical underlay network involving routing functionality.
The OpenStack user is not aware of the detail of the underlay; instead the user sees
an abstraction at a higher level, called the Overlay Network. SDN permits the user to
create logical networks without having to consider the underlying physical network;
in fact the user will most likely be unaware of the topology of the underlay network.
The Neutron service manages this by interfacing with the physical network
architecture using a pluggable architecture that supports many networking vendors
and technologies.
Furthermore, the Neutron service also provides an API for users to define networks
and the attachments to them.
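For a flavour of that API, a network and subnet can be defined from the command line; a minimal sketch (the names mynet and mysubnet and the address range are illustrative only, not taken from the labs):
$ openstack network create mynet
$ openstack subnet create --network mynet --subnet-range 192.168.50.0/24 mysubnet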
Illustration 3: Swift 'Object Storage' service (an application writes data via Cinder and a RESTful API to the Swift proxy, which breaks the data into binary objects (a), (b) and (c) and distributes replicated copies across the storage nodes)
The Swift ‘Object Storage’ service provides scalability at the storage level. It works
with binary objects to store data in a distributed, replicated way. Hard drives are
physical devices; they are limited and not very scalable.
The Swift service provides scalability through an object-based storage model.
Normally an application writes data to a file. In an OpenStack environment the
application still writes to a file, but not directly to a hard drive: via the Cinder ‘Block
Storage’ service it interfaces with the Swift 'Object Storage' service over a RESTful
API, which in turn can communicate with many, many storage nodes.
Swift uses a proxy service which, when it receives data from Cinder, creates chunks
of data called binary objects.
As demonstrated in Illustration 3, the received data is broken into three binary objects
(a), (b), and (c). In Swift, binary object (a) may be stored in the first storage node,
and binary object (b) in the second storage node with binary object (c) stored in the
third storage node. To create fault tolerance Swift includes a replication algorithm
which stores the binary objects on multiple storage nodes. By default it does this
three times but it is possible to do it more times if necessary.
Efficiency is also achieved because, the moment the application needs to retrieve
the data, it addresses the Swift proxy via the Cinder service. The proxy uses an
advanced algorithm to determine exactly where the binary objects reside and then
sends calls to all the storage nodes involved, which are capable of working in
parallel. The data arrives at the Swift proxy, and onwards to the application via
Cinder, quickly and efficiently.
If the storage nodes are for example one terabyte (TB) each and storage is running
low, more Swift storage nodes can simply be added, and the binary objects
rebalanced as set in the Swift storage configuration.
The Swift proxy is communicated with using a RESTful API. REST is a standard way
of communicating in an OpenStack environment. The application is not writing a file
to a filesystem; it is making a RESTful API call, which is understood by the Swift
proxy. This API permits the Create, Read, Update and Delete (CRUD) functions.
RESTful API is the native language of OpenStack, and that makes Swift the native
choice for object storage in OpenStack.
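As an illustration of such a call, an object can be created and read back with plain HTTP requests; a sketch assuming a valid token in $TOKEN and a hypothetical endpoint swift.example.com:
$ curl -X PUT -H "X-Auth-Token: $TOKEN" http://swift.example.com:8080/v1/AUTH_myproject/mycontainer
$ curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @report.txt http://swift.example.com:8080/v1/AUTH_myproject/mycontainer/report.txt
$ curl -X GET -H "X-Auth-Token: $TOKEN" http://swift.example.com:8080/v1/AUTH_myproject/mycontainer/report.txt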
An alternative to using Swift ‘Object Storage’ service is Ceph. Ceph is a similar
distributed object store and file system designed to provide excellent performance,
reliability and scalability.
2.1 Architecture
Illustration 4: OpenStack Laboratory architecture (controller node 10.0.0.11/24 and compute node 10.0.0.31/24 on the Management network: KVM virbr1 / VBOX vboxnet0, 10.0.0.1/24; both nodes unnumbered on the Provider network: KVM virbr2 / VBOX vboxnet1, 203.0.113.1/24; public network 192.168.10.1/24 and 192.168.10.2/24)
2.3.1 Networking
The compute node also runs a Neutron networking service agent that connects
instances to virtual networks and provides firewalling services to instances via
security groups.
OpenStack Training Lab scripts automatically create two networks, a Management
network and a Provider, or External, network. These are named differently depending
on the hypervisor used.
2.4 Passwords
There are many passwords used in this testbed. Here is a simple list of them for
reference.
OS_LAB=/home/alovelace/OpenStack-lab
OS_ST=/home/alovelace/OpenStack-lab/labs
OS_BASH=/home/alovelace/OpenStack-lab/labs/osbash
EOM
Test the variable by running the ~/.bashrc script again or logging out and back in.
ada:~$ . ~/.bashrc
Check if a 64 bit kernel is running. 0 means that the CPU is not 64-bit. Long Mode
(LM) equates to a 64-bit CPU.
ada:~$ uname -m
x86_64
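The LM count referred to above can be checked directly against /proc/cpuinfo; a sketch (a non-zero count indicates a 64-bit capable CPU):
ada:~$ egrep -c ' lm ' /proc/cpuinfo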
In the case of the controller node, it runs many services and the demand for memory
is therefore high, so it is recommended to use as much as is available on the system.
Edit the config.controller file in the $OS_ST/config directory as outlined above. For
the compute node, the install guide recommends a minimum of 2048 MB, yet the
default is only 1,024 MB, enough to support one instance. The second drive, which is
distributed between VMs for root disks, also needs to be larger. Edit the
config.compute1 file in the $OS_ST/config directory.
For an 8 GB system (8,192 MB) the table below gives suggested values to adjust in
the configuration files, leaving 1,536 MB for the host system memory.
For a 16 GB system (16,384 MB) the table below gives suggested values to adjust in
the configuration files, leaving 2,048 MB for the host system memory.
ada:~$ cd $OS_ST/config
ada:~$ sed -i.bak '/heat_controller/s/#//' scripts.ubuntu_cluster
# ------------------
# Virtualised nodes
# ------------------
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
EOF
# ------------------
# Virtualised nodes
# ------------------
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
virsh virt-viewer
virsh virt-manager
libvirt libvirt libvirt
QEMU
GNU/Linux Kernel
GNU/Linux host OS
Illustration 5: KVM virtualisation block diagram
4.1 Installation
KVM requires a number of elements to operate.
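On an Ubuntu 16.04 host these can be installed with apt; a sketch using the package names as found on 16.04:
ada:~$ sudo apt update
ada:~$ sudo apt install qemu-kvm libvirt-bin virtinst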
4.3 virt-manager
It may appear strange but it is important to run the virt-manager. This triggers QEMU
to create a default pool for storage. As the server is headless this must be performed
using Secure SHell (SSH) X11 forwarding.
SSH to the host using the following switches.
-M Places the SSH client into master mode for connection sharing.
-Y Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected
to the X11 SECURITY extension controls.
Set the virsh default connect URI, which eliminates the need to use the long-winded
virsh connection command to the KVM/QEMU hypervisor.
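A minimal way to do this is to append the LIBVIRT_DEFAULT_URI environment variable, which virsh honours, to ~/.bashrc (a sketch):
ada:~$ cat >> ~/.bashrc << EOM
export LIBVIRT_DEFAULT_URI=qemu:///system
EOM
Then enable it by running the file.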
ada:~$ . .bashrc
ada:~$ virsh
virsh # uri
qemu:///system
osbash@controller:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 9.8G 0 disk
|-sda1 8:1 0 9.3G 0 part /
|-sda2 8:2 0 1K 0 part
`-sda5 8:5 0 510M 0 part [SWAP]
Compute1
osbash@compute1:~$ cat /proc/meminfo | grep MemTotal
MemTotal: 16432844 kB
osbash@compute1:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 9.8G 0 disk
|-sda1 8:1 0 9.3G 0 part /
|-sda2 8:2 0 1K 0 part
`-sda5 8:5 0 510M 0 part [SWAP]
sdb 8:16 0 200G 0 disk
ada:~$ virsh
virsh # list
Id Name State
----------------------------------------------------
3 compute1 running
25 controller running
4.7.4 VM IP addresses
The VM IP addresses on the public network are given at the end of the stacktrain
script.
Your cluster nodes:
INFO VM name: compute1
INFO SSH login: ssh osbash@192.168.122.71
INFO (password: osbash)
INFO VM name: controller
INFO SSH login: ssh osbash@192.168.122.205
INFO (password: osbash)
INFO Dashboard: Assuming horizon is on controller VM.
INFO http://192.168.122.205/horizon/
INFO User : demo (password: demo_user_pass)
INFO User : admin (password: admin_user_secret)
INFO Network: mgmt
INFO Network address: 10.0.0.0
INFO Network: provider
INFO Network address: 203.0.113.0
It is also possible from the hypervisor to access the VMs over the management
network.
VM name: controller
SSH login: ssh osbash@10.0.0.11 (password: osbash)
VM name: compute1
SSH login: ssh osbash@10.0.0.31 (password: osbash)
virsh # net-list
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
labs-mgmt active no yes
labs-provider active no yes
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
4.11 Add hypervisor SSH keys to the controller and compute1 nodes
Optionally add the hypervisor's SSH public key to the Controller and Compute1
nodes. This removes the need for passwords when logging in to the nodes from the
hypervisor.
ada:~$ ssh-keygen -t rsa -b 4096 -C "ada@lovelace.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/alovelace/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/alovelace/.ssh/id_rsa.
Your public key has been saved in /home/alovelace/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Y24YPdnqY3TK36Bi2KESL6DdKGrjd7oUqf10LOZr4pA ada@lovelace.com
The key's randomart image is:
+---[RSA 4096]----+
| |
| . . o |
| o . S . |
|. = . o=.+. |
|.E B B.*+o. |
|ooBoXo*o=. o |
|=o+**=.ooo. . |
+----[SHA256]-----+
ada:~$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-GW8hKy5WuK2Z/agent.7155; export SSH_AUTH_SOCK;
SSH_AGENT_PID=7156; export SSH_AGENT_PID;
echo Agent pid 7156;
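To complete the setup, the key would typically be loaded into the agent and the public key copied to each node; a sketch using the management addresses and standard OpenSSH tools:
ada:~$ eval $(ssh-agent)   # only if the agent variables above were not already exported
ada:~$ ssh-add ~/.ssh/id_rsa
ada:~$ ssh-copy-id osbash@10.0.0.11
ada:~$ ssh-copy-id osbash@10.0.0.31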
Controller
osbash@controller:~$ cat /proc/meminfo | grep MemTotal
MemTotal: 6110832 kB
osbash@controller:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 9.8G 0 disk
|-sda1 8:1 0 9.3G 0 part /
|-sda2 8:2 0 1K 0 part
`-sda5 8:5 0 510M 0 part [SWAP]
Compute1
osbash@compute1:~$ cat /proc/meminfo | grep MemTotal
MemTotal: 8175396 kB
osbash@compute1:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 9.8G 0 disk
|-sda1 8:1 0 9.3G 0 part /
|-sda2 8:2 0 1K 0 part
`-sda5 8:5 0 510M 0 part [SWAP]
sdb 8:16 0 50G 0 disk
Both the controller and compute1 VMs report Guest Facilities and Snapshots as <none> at this point.
5.4.4 VM IP addresses
The VM IP addresses on the public network are given at the end of the stacktrain
script.
Your cluster nodes:
INFO VM name: compute1
INFO SSH login: ssh -p 2232 osbash@127.0.0.1 (or localhost)
INFO (password: osbash)
INFO VM name: controller
INFO SSH login: ssh -p 2230 osbash@127.0.0.1 (or localhost)
INFO (password: osbash)
INFO Dashboard: Assuming horizon is on controller VM.
INFO http://127.0.0.1:8888/horizon/
INFO User : demo (password: demo_user_pass)
INFO User : admin (password: admin_user_secret)
INFO Network: mgmt
INFO Network address: 10.0.0.0
INFO Network: provider
INFO Network address: 203.0.113.0
Name: vboxnet1
GUID: 786f6276-656e-4174-8000-0a0027000001
DHCP: Disabled
IPAddress: 203.0.113.1
NetworkMask: 255.255.255.0
IPV6Address: fe80:0000:0000:0000:0800:27ff:fe00:0001
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:01
MediumType: Ethernet
Status: Up
VBoxNetworkName: HostInterfaceNetworking-vboxnet1
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
osbash@controller:~$
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
5.8 Add hypervisor SSH keys to the controller and compute1 nodes
Optionally add the hypervisor's SSH public key to the Controller and Compute1
nodes. This removes the need for passwords when logging in to the nodes from the
hypervisor.
ada:~$ ssh-keygen -t rsa -b 4096 -C "ada@lovelace.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/alovelace/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/alovelace/.ssh/id_rsa.
Your public key has been saved in /home/alovelace/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Y24YPdnqY3TK36Bi2KESL6DdKGrjd7oUqf10LOZr4pA ada@lovelace.com
The key's randomart image is:
+---[RSA 4096]----+
| |
| |
| . . o |
| o . S . |
|. = . o=.+. |
|.E B B.*+o. |
|ooBoXo*o=. o |
|=o+**=.ooo. . |
+----[SHA256]-----+
ada:~$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-GW8hKy5WuK2Z/agent.7155; export SSH_AUTH_SOCK;
SSH_AGENT_PID=7156; export SSH_AGENT_PID;
echo Agent pid 7156;
ada:~$ virsh
Welcome to virsh, the virtualization interactive terminal.
virsh # list
Id Name State
----------------------------------------------------
3 compute1 running
It is best to shut down the VM first because a snapshot taken of a running guest only
captures the state of the disk and not the state of the memory.
virsh # shutdown controller
Domain controller is being shutdown
Create a snapshot.
virsh # snapshot-create-as --domain controller --name "snap01-controller" --disk-only --atomic --diskspec hda,file=/var/lib/libvirt/images/snap01-controller
Domain snapshot snap01-controller created
virsh # list
Id Name State
----------------------------------------------------
3 controller running
However, as can be seen, deletion of external disk snapshots is not yet supported. In
this case delete the metadata associated with the snapshot and delete the snapshot
file manually.
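A sketch of that manual clean-up, using the snapshot name from above (the --metadata flag to snapshot-delete removes only libvirt's record of the snapshot):
virsh # snapshot-delete controller snap01-controller --metadata
ada:~$ sudo rm /var/lib/libvirt/images/snap01-controller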
First shut down the compute1 VM instance and confirm it has shut down.
virsh # shutdown compute1
Domain compute1 is being shutdown
Edit the VM instance XML file and change the maximum and current memory to 17
GB and CPUs to 2.
virsh # edit compute1
...
<memory unit='KiB'>17825792</memory>
<currentMemory unit='KiB'>17825792</currentMemory>
<vcpu placement='static'>2</vcpu>
...
Now resize the QEMU QCOW image by adding 50G to the existing compute1-sdb image.
ada:~$ sudo qemu-img resize /var/lib/libvirt/images/compute1-sdb +50G
Image resized.
Connect to the compute1 node and review the size of the physical volume /dev/sdb
as reported by LVM; it still shows the size from before the resize.
osbash@compute1:~$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name cinder-volumes
PV Size 200.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 51199
Free PE 51199
Allocated PE 0
PV UUID uRKbME-24kE-1iHL-paym-TPgg-lelk-8OpH1D
The Physical Volume Show (pvs) command reports information about physical volumes;
it also still shows the pre-resize size.
osbash@compute1:~$ sudo pvs /dev/sdb
PV VG Fmt Attr PSize PFree
/dev/sdb cinder-volumes lvm2 a-- 200.00g 200.00g
Resize the LVM physical volume by forcing LVM to re-evaluate the reported size in
the actual image file.
osbash@compute1:~$ sudo pvresize /dev/sdb
Physical volume "/dev/sdb" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
osbash@controller:~$ . admin-openrc.sh
osbash@controller:~$ openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-scheduler | controller | internal | enabled | up | 2017-09-24T12:11:19.000000 |
| 5 | nova-consoleauth | controller | internal | enabled | up | 2017-09-24T12:11:15.000000 |
| 6 | nova-conductor | controller | internal | enabled | up | 2017-09-24T12:11:22.000000 |
| 8 | nova-compute | compute1 | nova | enabled | up | 2017-09-24T12:11:17.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
Notice that the asterisk (*) has moved to the active snapshot.
ada:~$ vboxmanage snapshot "controller" list
Name: controller_-_cluster_installed
(UUID: b445f1e1-9d87-4eeb-8a12-63396456d190) *
Name: snap01-controller
(UUID: 798b7138-6802-49dc-b1c3-86ed7406417f)
Description: Initial Controller snapshot
Delete a snapshot
Delete a snapshot and notice it is removed from the snapshot list.
ada:~$ vboxmanage snapshot "controller" delete snap01-controller
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
osbash@controller:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 9.8G 0 disk
|-sda1 8:1 0 9.3G 0 part /
|-sda2 8:2 0 1K 0 part
`-sda5 8:5 0 510M 0 part [SWAP]
sdb 8:16 0 50G 0 disk
First shut down the compute1 VM instance and confirm it has shut down.
ada:~$ vboxmanage controlvm "compute1" poweroff
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
As the working image is made up of the base image and the snapshots it is
necessary to clone the VM instance to create a new base.
ada:~$ vboxmanage clonevm "compute1"
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Machine has been successfully cloned as "compute1 Clone"
Move the new clone directory to the place previously taken by compute1.
ada:~$ cd ~/'VirtualBox VMs'
ada:~/VirtualBox VMs $ mv 'compute1 Clone' labs/
ada:~/VirtualBox VMs $ cd labs
ada:~/VirtualBox VMs/labs $ mv 'compute1 Clone' compute1
ada:~/VirtualBox VMs/labs $ cd compute1
ada:~/VirtualBox VMs/labs/compute1 $ ls
compute1-disk1.vdi compute1-disk2.vdi compute1.vbox
Edit the vbox file to reflect the new compute1 name and update the vdi names.
ada:~/VirtualBox VMs/labs/compute1 $ sed -i.bak 's/compute1 Clone/compute1/' compute1.vbox
Confirm registration.
ada:~$ vboxmanage list vms
"controller" {85cc5cd8-3392-49bd-bac8-76c4a8bed317}
"compute1" {42d461ef-79cf-49a7-a6fd-5bcfcafcd87c}
Have a look at the compute1-disk2 image as it is; note the size is 1040 MB.
ada:~$ vboxmanage list hdds | awk -v RS='' '/base/'
UUID: 6a0cfecf-fd21-42b0-b91f-58bd7f44c871
Parent UUID: base
State: locked read
Type: multiattach
Location: /home/alovelace/Dropbox/OpenStack-lab/labs/img/base-ssh-ocata-ubuntu-16.04-amd64.vdi
Storage format: VDI
Capacity: 10000 MBytes
Encryption: disabled
UUID: 5259ea5f-d2ca-402b-99ba-48cb4199b451
Parent UUID: base
State: created
Type: normal (base)
Location: /home/alovelace/VirtualBox VMs/labs/compute1/compute1-disk1.vdi
Storage format: VDI
Capacity: 10000 MBytes
Encryption: disabled
UUID: 5ead5e53-593b-4eba-a228-7d513751beec
Parent UUID: base
State: created
Type: normal (base)
Location: /home/alovelace/VirtualBox VMs/labs/compute1/compute1-disk2.vdi
Storage format: VDI
Capacity: 51200 MBytes
Encryption: disabled
osbash@controller:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 9.8G 0 disk
|-sda1 8:1 0 9.3G 0 part /
|-sda2 8:2 0 1K 0 part
`-sda5 8:5 0 510M 0 part [SWAP]
sdb 8:16 0 60G 0 disk
However, the size of the physical volume /dev/sdb reported by LVM is still 50 GB.
osbash@compute1:~$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name cinder-volumes
PV Size 50.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 12799
Free PE 12799
Allocated PE 0
PV UUID 9XFbcy-WuIl-hBoa-4L4i-SvyL-ay30-M9zfLc
The Physical Volume Show (pvs) command reports information about physical volumes;
it also still reports 50 GB.
osbash@compute1:~$ sudo pvs /dev/sdb
PV VG Fmt Attr PSize PFree
/dev/sdb cinder-volumes lvm2 a-- 50.00g 50.00g
Resize the LVM physical volume by forcing LVM to re-evaluate the reported size in
the actual image file.
osbash@compute1:~$ sudo pvresize /dev/sdb
Physical volume "/dev/sdb" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.
MariaDB [(none)]>
MariaDB [(none)]>
Now, using the username and database password for one of the services, say
keystone, review the database tables. Note: sudo is not necessary.
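A sketch of that login (the keystone database password is whichever value was set in the labs' credentials):
osbash@controller:~$ mysql -u keystone -p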
Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.
MariaDB [(none)]>
Now listing the databases, only the keystone one is available to this user.
Database changed
osbash@controller:~$ . admin-openrc.sh
8.5.1 API
• nova-api service
• Accepts and responds to end user compute API calls. The service
supports the OpenStack Compute API, the Amazon Elastic Compute Cloud
(EC2) API, and a special Admin API for privileged Users to perform
administrative actions. It enforces some policies and initiates most
orchestration activities, such as running an instance.
• nova-api-metadata service
• Accepts metadata requests from instances. The nova-api-metadata
service is generally used when you run in multi-host mode with nova-
network installations.
• Messaging queue
• Used by most OpenStack Networking installations to route information
between the neutron-server and various agents. It also acts as a
database to store networking state for particular plug-ins.
Neutron mainly interacts with Nova Compute to provide networks and connectivity for
its instances.
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 086c1e9a-4698-4077-9a23-f41a8bffd3ab | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent |
| 368fb404-cf22-4729-a0e3-fa13eb4189ff | Linux bridge agent | compute1 | None | True | UP | neutron-linuxbridge-agent |
| 4d139470-26bb-48be-a926-e20383016656 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent |
| 52945590-887e-4be6-8dc2-4bdbc7c0d2ab | L3 agent | controller | nova | True | UP | neutron-l3-agent |
| f26821bd-83b1-43ff-832d-d9700e556071 | Metadata agent | controller | None | True | UP | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
8.6.2 Networking
The network configuration uses a provider (external) network that connects to the
physical network infrastructure via layer-2 (bridging/switching). This network includes
a DHCP server that provides IP addresses to instances.
The provider network uses 203.0.113.0/24 with a gateway at 203.0.113.1. The
DHCP server assigns each instance a floating IP address from the range
203.0.113.101 to 203.0.113.250, and all instances use 8.8.4.4 as a DNS resolver. It is
worth noting that the floating IP address is not known on the instance VM itself;
Neutron acts as a NAT router, mapping the internal private IP address to the
floating IP address.
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| Hostname | Binary | Engine ID | Host | Topic | Updated At | Status |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| controller | heat-engine | 10d6de5a-beba-414c-9d92-2990dce07ecf | controller | engine | 2017-09-24T14:29:52.000000 | down |
| controller | heat-engine | bf1fcb5f-4c70-4a30-a9ee-586fc1d3f12a | controller | engine | 2017-09-24T14:29:52.000000 | down |
| controller | heat-engine | 7b978702-3f97-4de9-b08c-56a2cbcd53db | controller | engine | 2017-09-24T14:45:44.000000 | up |
| controller | heat-engine | 6f48ea11-8d99-4d1d-a9e3-35ae410c51a6 | controller | engine | 2017-09-24T14:45:44.000000 | up |
| controller | heat-engine | 82b37328-64fe-4514-9b13-1ef36654833a | controller | engine | 2017-09-24T14:29:52.000000 | down |
| controller | heat-engine | d8bfd206-4162-4309-a550-0a16311e16c2 | controller | engine | 2017-09-24T14:29:52.000000 | down |
| controller | heat-engine | 4c019ae1-0122-4b61-bee6-5161b5ccc7fe | controller | engine | 2017-09-24T14:45:44.000000 | up |
| controller | heat-engine | 75acc47d-c0b9-432c-b2f8-bdafca1d15a7 | controller | engine | 2017-09-24T14:45:44.000000 | up |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
9. Deploying a VM instance
Illustration 7: Deploy an instance (1. image from Glance, 2. security settings from Nova, 3. networking from Neutron on a private network, 4. storage from Cinder, 5. the instance itself from Nova, running on the compute node hypervisor)
To deploy an instance, the compute node has a running KVM/QEMU hypervisor
which will spin up the VM. There are some other requirements: the image comes
from the Glance service, security is provided by the Nova service, networking is
provided by the Neutron service, storage comes from the Cinder service, and finally
the instance itself from the Nova service. The Neutron service will need a private
network, reserved for that specific Project, to run the instance on.
Illustration 8: Floating IP addresses (Instance 1, 192.168.10.21, and Instance 2, 192.168.10.22, sit on a private network behind an SDN router and are reachable on the external network via the floating IP addresses 203.0.113.109 and 203.0.113.110)
osbash@controller:~$ . demo-openrc.sh
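If the keypair has not already been created and saved, a sketch of that step (mykey matches the key name used later in this chapter):
osbash@controller:~$ openstack keypair create mykey > mykey.pem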
Set the permissions of the .pem file so that only you can read and write to it by
running the following command.
osbash@controller:~$ chmod 600 mykey.pem
sftp> cd OpenStack-lab
osbash@controller:~$ openstack security group list
There exists by default a security group called default.
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| c6d26784-b591-42b5-8624-6c29bebbc152 | default | Default security group | 9a10148d3c414d61800ee2946ac545ea |
+--------------------------------------+---------+------------------------+----------------------------------+
9.3.2 Security group
By default this default security group is quite restricted. For the purpose of this
exercise, permit both SSH and Internet Control Message Protocol (ICMP).
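The two rules can be added with the security group rule create command, for example:
osbash@controller:~$ openstack security group rule create --proto icmp default
osbash@controller:~$ openstack security group rule create --proto tcp --dst-port 22 default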
osbash@controller:~$ openstack security group rule list default
+--------------------------------------+-------------+-----------+------------+--------------------------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+
| 01b98dc9-5a07-4556-8eee-ee0ec35b4eed | icmp | 0.0.0.0/0 | | None |
| 0a96ebdd-1bfb-41a6-beea-816059574b59 | None | None | | None |
| 0f6c4fc9-e0c4-49af-b8d8-c34619829c07 | None | None | | c6d26784-b591-42b5-8624-6c29bebbc152 |
| 7cb4c9cd-ca01-44ea-97e2-7c47a70ce63b | None | None | | None |
| b5993e8b-520f-4d2e-8ece-3dbb1f12a202 | None | None | | c6d26784-b591-42b5-8624-6c29bebbc152 |
| dfe7f03b-4149-4c58-8fd9-a0983d3855f5 | tcp | 0.0.0.0/0 | 22:22 | None |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+
Using the LVM display commands it is possible to see the newly created volume on
the compute node. Firstly the pvdisplay command shows the Physical Volume, the
vgdisplay command the Volume Group and finally the lvdisplay command the Logical
Volume. On a production system it is possible to have multiple Logical Volumes in a
Volume Group.
osbash@compute1:~$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name cinder-volumes
PV Size 200.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 51199
Free PE 50943
Allocated PE 256
PV UUID 5ULpY2-FTXz-vxMs-0bvp-0wT8-0pu1-DcMU1O
osbash@controller:~$ . demo-openrc.sh
osbash@controller:~$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 79d5847b-bfd4-47e8-badf-cec219687d4e | cirros | active |
+--------------------------------------+--------+--------+
osbash@controller:~$ openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID                                   | Name    | Description            | Project                          |
+--------------------------------------+---------+------------------------+----------------------------------+
| c6d26784-b591-42b5-8624-6c29bebbc152 | default | Default security group | 9a10148d3c414d61800ee2946ac545ea |
+--------------------------------------+---------+------------------------+----------------------------------+
Attach the volume created earlier to the instance VM.
osbash@controller:~$ openstack server add volume cirrOS-test 1GB-vol
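The attachment can be confirmed by listing the volumes:
osbash@controller:~$ openstack volume list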
+--------------------------------------+--------------+--------+------+--------------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+--------+------+--------------------------------------+
| 10c8ca56-e831-4ada-9a6c-a2ee7b3a03ed | 1GB-vol | in-use | 1 | Attached to cirrOS-test on /dev/vdb |
+--------------------------------------+--------------+--------+------+--------------------------------------+
9.3.5
Taking the URL given, open a Virtual Console to the new instance.
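If the URL is not to hand, it can be retrieved from the controller, for example:
osbash@controller:~$ openstack console url show cirrOS-test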
$ cat /etc/os-release
NAME=Buildroot
VERSION=2012.05-dirty
ID=buildroot
VERSION_ID=2012.05
PRETTY_NAME="Buildroot 2012.05"
It is also possible to SSH using the mykey.pem key file instead of a password. This is
actually the more typical method for accessing VM instances in the cloud. Note that
no password is required in this case.
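A sketch of such a login, assuming the instance's floating IP address is 203.0.113.109 and the CirrOS default cirros user:
osbash@controller:~$ ssh -i mykey.pem cirros@203.0.113.109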
$ cat /etc/os-release
NAME=Buildroot
VERSION=2012.05
ID=buildroot
VERSION_ID=2012.05
PRETTY_NAME="Buildroot 2012.05"
On the compute node it is possible to use the virsh tool to monitor the QEMU VM
instances. virsh uses the libvirt C toolkit to interact with the virtualisation capabilities
of GNU/Linux and while it supports many hypervisors like Xen, KVM, LXC, OpenVZ,
VirtualBox and VMware ESX, it is its support for QEMU that is of interest here.
Get the domain ID of the instance from the QEMU hypervisor perspective.
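For example:
osbash@compute1:~$ virsh list
The instance appears as instance-00000001 with Id 1, as shown by dominfo below.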
With the domain ID use the dominfo and domstats commands to find out about the
instance.
osbash@compute1:~$ virsh dominfo instance-00000001
Id: 1
Name: instance-00000001
UUID: f1b1e3a6-076a-4cfe-8411-cc8f4987b8be
OS Type: hvm
State: running
CPU(s): 1
CPU time: 26.4s
Max memory: 65536 KiB
Used memory: 65536 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: apparmor
Security DOI: 0
Security label: libvirt-f1b1e3a6-076a-4cfe-8411-cc8f4987b8be (enforcing)
$ df -h
Filesystem Size Used Available Use% Mounted on
/dev 21.3M 0 21.3M 0% /dev
/dev/vda1 23.2M 18.0M 4.0M 82% /
tmpfs 24.8M 0 24.8M 0% /dev/shm
tmpfs 200.0K 72.0K 128.0K 36% /run
/dev/vdb1 1006.9M 17.3M 938.5M 2% /mnt/1GB-vol
ada:~$ $OS_LAB/clean_nodes.sh
. . . . . . . . . .
Returning Controller node to snapshot 'public_private_networks'
Restarting nodes
Waiting for VM "controller" to power on...
VM "controller" has been successfully started.
Waiting for VM "compute1" to power on...
VM "compute1" has been successfully started.
Flavour: m1.nano
Image: cirros
Network UUID=1960a5c6-77eb-47b7-855e-e3a7bf86f183
Security group: default
Key name: mykey
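Assembled from the parameters above, the command that produces the output below would look something like this sketch:
osbash@controller:~$ openstack server create --flavor m1.nano --image cirros --nic net-id=1960a5c6-77eb-47b7-855e-e3a7bf86f183 --security-group default --key-name mykey cirrOS-test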
+--------------------------------------+-----------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | aAg6Lr8V3p7q |
| config_drive | |
| created | 2016-12-29T12:23:37Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 8affc840-ca6d-4084-a776-9858bc12981d |
| image | cirros (e8d18f95-0eb7-48f1-a9be-d2b8e7b869f1) |
| key_name | mykey |
| name | cirrOS-test |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 8b62de81fdb7486486fe11e2bd961301 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2016-12-29T12:23:37Z |
| user_id | b8caef709ca648c9bf4cb506a5a89bc7 |
+--------------------------------------+-----------------------------------------------+
. .
Creating volume 1GB-vol
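The volume is created with a command along these lines, matching the 1 GB size and name shown:
osbash@controller:~$ openstack volume create --size 1 1GB-vol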
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-12-29T12:24:02.637033 |
| description | None |
| encrypted | False |
| id | 35332326-f972-4fad-acda-1e11c1a03031 |
| multiattach | False |
| name | 1GB-vol |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | b8caef709ca648c9bf4cb506a5a89bc7 |
+---------------------+--------------------------------------+
+--------------------------------------+--------------+--------+------+--------------------------------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+--------+------+--------------------------------------+
| 35332326-f972-4fad-acda-1e11c1a03031 | 1GB-vol | in-use | 1 | Attached to cirrOS-test on /dev/vdb |
+--------------------------------------+--------------+--------+------+--------------------------------------+
Confirm the current vCPU and memory available at the compute node.
ada:~$ ssh osbash@192.168.122.140
osbash@192.168.122.140's password: osbash
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-57-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Sat May 6 05:34:27 2017 from 192.168.122.1
Change maximum memory limit and then the memory allocation. The first sets the
limit for memory on this VM domain. It is then possible to dynamically modify the VM
domain memory up to the max limit.
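The typical virsh commands for this are setmaxmem and setmem; a sketch, with the size given here purely as an assumption (a --config change to the maximum memory only takes effect once the domain is restarted):
virsh # setmaxmem compute1 16G --config
virsh # setmem compute1 16G --config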
Confirm these changes. (Note: the command is run from the host shell as there is no
grep within the virsh # shell.)
Before changing the number of vCPUs, confirm the number of CPUs on the host
system. Obviously it is not possible to use this full number, as the host's own
requirements must be catered for.
virsh # maxvcpus
16
Edit the eXtensible Markup Language (XML) file for the VM domain to change the
vcpu placement to 4. This will make 4 vCPUs available to the VM domain.
...
<vcpu placement='static'>4</vcpu>
...
Apply the changes to the XML file. (Note: as sudo is required because the XML file is
owned by root, the full form of the command is necessary.) Confirm the vCPUs for the VM domain.
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Sat May 6 05:34:27 2017 from 192.168.122.1
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Mon Sept 25 14:02:36 2017 from 10.0.2.2
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Mon Sept 25 14:02:36 2017 from 10.0.2.2
osbash@controller:~$ cd img
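The image is downloaded from the standard Ubuntu cloud-images site; a sketch of the download step:
osbash@controller:~/img$ wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img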
xenial-server-cloudimg-amd64-disk1 100% [======================================>] 304.88M 668KB/s in 8m 42s
root@controller:~/img$ ls -la
total 325184
drwxrwxr-x 2 osbash osbash 4096 Dec 22 20:41 .
drwxr-xr-x 11 osbash osbash 4096 Dec 22 16:00 ..
-rw-rw-r-- 1 osbash osbash  13287936 May  7  2015 cirros-0.3.4-x86_64-disk.img
-rw-rw-r-- 1 osbash osbash        63 Dec 20 10:07 cirros-0.3.4-x86_64-disk.img.md5sum
-rw-rw-r-- 1 osbash osbash 319684608 Dec 21 12:12 xenial-server-cloudimg-amd64-disk1.img
Create the image as the demo User. Exit from root and set the demo variables via the
demo-openrc script.
osbash@controller:~/img$ . demo-openrc.sh
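A sketch of the image create command consistent with the output below:
osbash@controller:~/img$ openstack image create --disk-format qcow2 --container-format bare --property architecture=x86_64 --file xenial-server-cloudimg-amd64-disk1.img Ubuntu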
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | aae5d19b4e9744e3d4d633ddb5ae6aae |
| container_format | bare |
| created_at | 2016-12-22T20:54:05Z |
| disk_format | qcow2 |
| file | /v2/images/c4ef4b37-16f5-47a2-8815-146dfa103ac6/file |
| id | c4ef4b37-16f5-47a2-8815-146dfa103ac6 |
| min_disk | 0 |
| min_ram | 0 |
| name | Ubuntu |
| owner | 78f6d3e8398e418ea1d08fba14c91c48 |
| properties | architecture='x86_64' |
| protected | False |
| schema | /v2/schemas/image |
| size | 319684608 |
| status | active |
| tags | |
| updated_at | 2016-12-22T20:54:06Z |
| virtual_size | None |
| visibility | private |
+------------------+------------------------------------------------------+
osbash@controller:~/img$ cd ~
11.4 Flavour
Create a special flavour with enlarged memory and a disk size of 3 GB (Ubuntu
image is approximately 2.3 GB).
osbash@controller:~$ openstack flavor create --id 2 --vcpus 1 --ram 2048 --disk 3 m1.medium
+----------------------------+-----------+
| Field | Value |
+----------------------------+-----------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 3 |
| id | 2 |
| name | m1.medium |
| os-flavor-access:is_public | True |
| properties | |
| ram | 2048 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+-----------+
http://10.0.0.11:6080/vnc_auto.html?token=26ab4e72-7aa0-4487-81ee-a58143a3c5fa
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Mon Sept 25 20:12:23 2017 from 192.168.122.1
osbash@compute1:~$
# compute1
10.0.0.31 compute1
# compute2
10.0.0.32 compute2
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
/etc/hostname
Edit the /etc/hostname file.
osbash@compute1:~$ sudo vi /etc/hostname
compute2
/etc/network/interfaces
Edit the /etc/network/interfaces file to set the IP address of the compute2 node to
10.0.0.32.
osbash@compute1:~$ sudo vi /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
auto ens4
iface ens4 inet static
address 10.0.0.32
netmask 255.255.255.0
auto ens5
iface ens5 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Mon Sept 25 20:33:52 2017 from 192.168.122.1
osbash@compute2:~$
virsh # list
Id Name State
----------------------------------------------------
65 controller running
67 compute1 running
68 compute2 running
######################
# program: get_ip.sh #
######################
EOM
ada:~$ ~/get_ip.sh
Controller IP: 192.168.122.82
Compute1 IP: 192.168.122.140
Compute2 IP: 192.168.122.139
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Mon Sept 25 20:12:23 2017 from 192.168.122.1
osbash@compute1:~$
# compute1
10.0.0.31 compute1
# compute2
10.0.0.32 compute2
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Mon Sept 25 08:57:48 2017 from 192.168.122.1
osbash@controller:~$
# compute1
10.0.0.31 compute1
# compute2
10.0.0.32 compute2
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
osbash@compute1:~$
# compute1
10.0.0.31 compute1
# compute2
10.0.0.32 compute2
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
/etc/hostname
Edit the /etc/hostname file.
osbash@compute1:~$ sudo vi /etc/hostname
compute2
/etc/network/interfaces
Edit /etc/network/interfaces to reflect the IP address of the compute2 node,
10.0.0.32.
osbash@compute1:~$ sudo vi /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
auto enp0s8
iface enp0s8 inet static
address 10.0.0.32
netmask 255.255.255.0
auto enp0s9
iface enp0s9 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Mon Sept 25 13:53:33 2017 from 10.0.2.2
osbash@compute2:~$
127.0.0.1 localhost
127.0.1.1 compute1-lo
# compute1
10.0.0.31 compute1
# compute2
10.0.0.32 compute2
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
osbash@controller:~$ . admin-openrc.sh
virsh #
Note that networks are now active and set to auto start after future reboots.
virsh # list
Id Name State
----------------------------------------------------
1 controller running
2 compute1 running
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
15.3 Logging in
Two accounts are configured: admin with the password admin_user_secret and
demo with the password demo_user_pass. The default domain required for login is
default. These and other passwords are configured in config/credentials.
Click Next.
Click the + symbol beside cirros in the Available section to move it to Allocated.
Click Next.
Click the + symbol beside m1.small flavour in the Available section to move it to
Allocated.
Click the + symbol beside The Difference Engine SG in the Available section to
move it to Allocated and click the – symbol beside default to move it from Allocated
to Available.
$ hostname
engine1
[Illustration 25: network diagram, a provider network 203.0.113.0/24 (gateway .1) connected
to the Internet with addresses .102, .103 and .104 in use, and a private network
192.168.95.0/24 (gateway .1) with hosts at .12 and .17, joined to the provider network by a
router.]
Now that creating instances has been mastered, consider the creation of networks. The
diagram in Illustration 25 shows a simple network with four hosts: two on the default
provider network and two on a new private network that is connected to the provider network
via a router. Here is an explanation of the process for creating the additional private
network, the hosts and a router, and for connecting the networks; a sketch of the
corresponding commands follows the list of steps.
Steps to be followed are:
1. Enable the admin-openrc variables
2. Create a flavour
3. Enable the demo-openrc variables
4. Add port 22 (SSH) and ICMP to default security group
5. Create private network
6. Extract provider and private network UUIDs
7. Create hosts on the provider network
8. Create hosts on the private network
9. Create a router
10. Add subnet to the router
11. Add a default route to the router via 203.0.113.1
12. Add a route on the host to the private network.
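The individual commands are not reproduced in the transcript; the following is a hedged
sketch of how the steps above map onto the openstack client (PRIV-NET, PRIV-SUBNET, m1.nano,
cirros and the 192.168.95.0/24 addressing are taken from the output tables below; the router
name PRIV-ROUTER is illustrative).
# Hedged sketch of steps 1-12; the exact options used in the lab may differ
. admin-openrc.sh
# Step 2: create the flavour if it does not already exist (values illustrative)
openstack flavor create --id 0 --vcpus 1 --ram 1024 --disk 1 m1.nano
. demo-openrc.sh

# Step 4: allow SSH and ICMP into the default security group
openstack security group rule create --proto tcp --dst-port 22 default
openstack security group rule create --proto icmp default

# Steps 5 and 6: private network, subnet and the two network UUIDs
openstack network create PRIV-NET
openstack subnet create --network PRIV-NET --subnet-range 192.168.95.0/24 \
  --gateway 192.168.95.1 --dns-nameserver 8.8.8.8 \
  --allocation-pool start=192.168.95.10,end=192.168.95.20 PRIV-SUBNET
PROVIDER_ID=$(openstack network show provider -f value -c id)
PRIVNET_ID=$(openstack network show PRIV-NET -f value -c id)

# Steps 7 and 8: two hosts on the provider network, two on the private network
for HOST in host1 host2; do
  openstack server create --flavor m1.nano --image cirros \
    --nic net-id=$PROVIDER_ID --security-group default $HOST
done
for HOST in host3 host4; do
  openstack server create --flavor m1.nano --image cirros \
    --nic net-id=$PRIVNET_ID --security-group default $HOST
done

# Steps 9 to 11: router (name illustrative), attach the private subnet and set the
# provider network as external gateway, giving a default route via 203.0.113.1
openstack router create PRIV-ROUTER
openstack router add subnet PRIV-ROUTER PRIV-SUBNET
openstack router set --external-gateway provider PRIV-ROUTER

# Step 12: on the hypervisor, add a route to the private network via the router's
# address on the provider network (read it from 'openstack router show')
# sudo ip route add 192.168.95.0/24 via <router-provider-address>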
Connect to one of the hosts on the private network and confirm connectivity to the
Internet.
ada:~$ ssh cirros@192.168.95.12
cirros@192.168.95.12's password: cubswin:)
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-02-17T09:14:28Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 4dd564a0-3615-484f-b915-d22b4df8016d |
| port_range_max | None |
| port_range_min | None |
| project_id | f5a2b881391e4170b1649c7343e0b361 |
| project_id | f5a2b881391e4170b1649c7343e0b361 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 7f4c8b8c-1f55-4273-bc03-60d2ba39fb42 |
| updated_at | 2017-02-17T09:14:28Z |
+-------------------+--------------------------------------+
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.95.10-192.168.95.20 |
| cidr | 192.168.95.0/24 |
| created_at | 2017-02-17T09:14:31Z |
| description | |
| dns_nameservers | 8.8.8.8 |
| enable_dhcp | True |
| gateway_ip | 192.168.95.1 |
| headers | |
| host_routes | |
| id | de061a25-190c-4cfa-ac1c-444445e963b6 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | PRIV-SUBNET |
| network_id | 18a99a37-c0bd-4d8a-bdcd-9fad01c267a9 |
| project_id | f5a2b881391e4170b1649c7343e0b361 |
| project_id | f5a2b881391e4170b1649c7343e0b361 |
| revision_number | 2 |
| service_types | [] |
| subnetpool_id | None |
| updated_at | 2017-02-17T09:14:31Z |
+-------------------+--------------------------------------+
Provider: 1ad8799b-8d9a-4ddd-801f-942da3549ee4
PRIV-NET: 18a99a37-c0bd-4d8a-bdcd-9fad01c267a9
Flavour: m1.nano
Image: cirros
Network UUID=1ad8799b-8d9a-4ddd-801f-942da3549ee4
Security group: default
+--------------------------------------+-----------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | vHKpBFU3szS4 |
| config_drive | |
| created | 2017-02-17T09:14:38Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 0905d666-353a-4f92-bfc2-4f4630b69564 |
| image | cirros (6846e263-d0c9-46da-b643-4e95340ddef8) |
| key_name | None |
| name | host1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | f5a2b881391e4170b1649c7343e0b361 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2017-02-17T09:14:39Z |
| user_id | 4bc1f71e027348a6b81ab62f93bbc9d8 |
+--------------------------------------+-----------------------------------------------+
Flavour: m1.nano
Image: cirros
Network UUID=1ad8799b-8d9a-4ddd-801f-942da3549ee4
Security group: default
+--------------------------------------+-----------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | Z2vUYTLGNhWf |
| config_drive | |
| created | 2017-02-17T09:14:44Z |
| flavor | m1.nano (0) |
| hostId | |
| id | da5f5338-296e-4053-b980-bc6718f0d1ab |
| image | cirros (6846e263-d0c9-46da-b643-4e95340ddef8) |
| key_name | None |
| name | host2 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | f5a2b881391e4170b1649c7343e0b361 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2017-02-17T09:14:44Z |
| user_id | 4bc1f71e027348a6b81ab62f93bbc9d8 |
+--------------------------------------+-----------------------------------------------+
Flavour: m1.nano
Image: cirros
Network UUID=18a99a37-c0bd-4d8a-bdcd-9fad01c267a9
Security group: default
+--------------------------------------+-----------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | rvyh5M9VUt2R |
| config_drive | |
| created | 2017-02-17T09:14:48Z |
| flavor | m1.nano (0) |
| hostId | |
| id | f8f27599-2733-4283-9043-a6451ebd9dfd |
| image | cirros (6846e263-d0c9-46da-b643-4e95340ddef8) |
| key_name | None |
| name | host3 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | f5a2b881391e4170b1649c7343e0b361 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2017-02-17T09:14:49Z |
| user_id | 4bc1f71e027348a6b81ab62f93bbc9d8 |
+--------------------------------------+-----------------------------------------------+
Flavour: m1.nano
Image: cirros
Network UUID=18a99a37-c0bd-4d8a-bdcd-9fad01c267a9
Security group: default
+--------------------------------------+-----------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | m6aFFtP5FULR |
| config_drive | |
| created | 2017-02-17T09:14:53Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 2c3fbb81-d3b4-4288-9275-31cce7aa5216 |
| image | cirros (6846e263-d0c9-46da-b643-4e95340ddef8) |
| key_name | None |
| name | host4 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | f5a2b881391e4170b1649c7343e0b361 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2017-02-17T09:14:54Z |
| user_id | 4bc1f71e027348a6b81ab62f93bbc9d8 |
+--------------------------------------+-----------------------------------------------+
Server list
+--------------------------------------+-------+--------+------------------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+-------+--------+------------------------+------------+
| 2c3fbb81-d3b4-4288-9275-31cce7aa5216 | host4 | BUILD | | cirros |
| f8f27599-2733-4283-9043-a6451ebd9dfd | host3 | BUILD | | cirros |
| da5f5338-296e-4053-b980-bc6718f0d1ab | host2 | BUILD | | cirros |
| 0905d666-353a-4f92-bfc2-4f4630b69564 | host1 | ACTIVE | provider=203.0.113.109 | cirros |
+--------------------------------------+-------+--------+------------------------+------------+
[Illustration: Heat architecture, a HOT template is submitted to the heat-api service, which
passes it to the heat-engine; the heat-engine drives the core service APIs (including
OvS-backed networking) to realise the stack.]
17.1 Introduction
In section 8.8 the Heat Orchestration service was briefly described. Heat provides
template-based orchestration: it describes a cloud application and runs the OpenStack API
calls needed to generate that running cloud application. To do this it uses Heat
Orchestration Templates (HOT). These templates define composite cloud applications and, when
passed to the heat-api, they are interpreted and handed to the heat-engine. The heat-engine
creates jobs that are passed to the core services to create the cloud storage, network and
VM instances defined within the template. Heat also has a second API, the heat-api-cfn,
which allows it to interpret AWS CloudFormation templates.
heat_template_version: 2016-10-14
description:
# a description of the template
parameter_groups:
# a declaration of input parameter groups and order
parameters:
# declaration of input parameters
resources:
# declaration of template resources
outputs:
# declaration of output parameters
conditions:
# declaration of conditions
17.2.2 Description
This section provides an optional description of the template.
17.2.4 Parameters
This section specifies input parameters that have to be provided when instantiating
the template.
parameters:
<param name>:
type: <string | number | json | comma_delimited_list | boolean>
label: <human-readable name of the parameter>
description: <description of the parameter>
default: <default value for parameter>
hidden: <true | false>
constraints:
<parameter constraints>
immutable: <true | false>
17.2.5 Resources
The resources section defines the resources that make up a stack deployed from
the template. Each resource is defined as a separate block in the resources section
with the following syntax:
resources:
<resource ID>:
type: <resource type>
properties:
<property name>: <property value>
metadata:
<resource specific metadata>
depends_on: <resource ID or list of ID>
update_policy: <update policy>
deletion_policy: <deletion policy>
external_id: <external resource ID>
condition: <condition name or expression or boolean>
17.2.6 Outputs
This section defines output parameters that should be available to the user after a
stack has been created. Each output parameter is defined as a separate block.
outputs:
<parameter name>:
description: <description>
value: <parameter value>
condition: <condition name or expression or boolean>
17.2.7 Conditions
This section defines one or more conditions which are evaluated based on input
parameter values provided when a user creates or updates a stack. Based on the result
of a condition, a user can conditionally create resources, set different property
values, or emit different outputs of a stack.
conditions:
<condition name1>: {expression1}
<condition name2>: {expression2}
...
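As an illustration (not part of the lab templates), a condition based on a hypothetical
env_type parameter could gate an output as follows; the file name and parameter are
assumptions:
cat << EOM > conditions-example.yaml
heat_template_version: 2016-10-14
parameters:
  env_type:
    type: string
    default: test
conditions:
  is_prod: {equals: [{get_param: env_type}, prod]}
outputs:
  deployment_note:
    description: Only populated when env_type is prod
    value: production deployment
    condition: is_prod
EOM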
Consider the available flavours, images and security groups. Their names will be
required when creating the server template.
osbash@controller:~$ openstack flavor list
+--------------------------------------+----------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+----------+------+------+-----------+-------+-----------+
| f19dab3a-9909-406d-a3fb-9d48fc7a518f | m1.nano | 1024 | 1 | 0 | 1 | True |
+--------------------------------------+----------+------+------+-----------+-------+-----------+
If it does not already exist, add a new flavour. Note that the flavour must be created as
the admin user.
osbash@controller:~$ . admin-openrc.sh
osbash@controller:~$ openstack flavor create --vcpus 1 --ram 512 --disk
1 m1.nano
osbash@controller:~$ . demo-openrc.sh
osbash@controller:~$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 6846e263-d0c9-46da-b643-4e95340ddef8 | cirros | active |
+--------------------------------------+--------+--------+
Check the default security group and ensure that it allows SSH and ICMP.
If not, create the rules within the default security group, as shown below.
osbash@controller:~$ openstack security group rule create --proto tcp
--dst-port 22 default
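The matching ICMP rule can be added in the same way; a one-line sketch:
osbash@controller:~$ openstack security group rule create --proto icmp default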
Create a YAML template. This template specifies the flavour, image and public
network (pub_net) as input parameters. These parameters are pulled together in the
resources section, which declares what is to be instantiated based on them. The outputs
section specifies output parameters available to users once the template has been
instantiated; it is optional and can be omitted when no output values are required.
parameters:
flavor:
type: string
description: Flavour for the server to be created
default: m1.nano
constraints:
- custom_constraint: nova.flavor
image:
type: string
description: Image name
default: cirros
constraints:
- custom_constraint: glance.image
pub_net:
type: string
description: ID of public network
default: provider
constraints:
- custom_constraint: neutron.network
resources:
server:
type: OS::Nova::Server
properties:
image: { get_param: image }
flavor: { get_param: flavor }
networks:
- network: { get_param: pub_net }
outputs:
server_networks:
description: The networks of the deployed server
value: { get_attr: [server, networks] }
EOM
This stack is created using the defaults within the YAML file. It causes the defined
server to be instantiated.
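The create command itself is not reproduced above; assuming the template was saved as
Server.yaml (the file name used for secondstack below), a minimal sketch:
osbash@controller:~$ openstack stack create --template Server.yaml singlestack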
Review the actions the heat-engine pushes to the core services once the template has
been interpreted.
osbash@controller:~$ openstack stack event list singlestack
2017-02-17 13:23:44Z [singlestack]: CREATE_IN_PROGRESS Stack CREATE started
2017-02-17 13:23:44Z [server]: CREATE_IN_PROGRESS state changed
2017-02-17 13:24:02Z [server]: CREATE_COMPLETE state changed
2017-02-17 13:24:02Z [singlestack]: CREATE_COMPLETE Stack CREATE completed
successfully
osbash@controller:~$ openstack server list
+--------------------------------------+---------------------------------+--------+------------------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+---------------------------------+--------+------------------------+------------+
| 4eddd826-36c7-4a88-b2c6-1534020baa35 | singlestack-server-6nprugl63so3 | ACTIVE | provider=203.0.113.113 | cirros |
+--------------------------------------+---------------------------------+--------+------------------------+------------+
It is possible to change parameters from the default by specifying them as part of the
command.
osbash@controller:~$ openstack stack create --template Server.yaml
--parameter flavor=m1.small secondstack
+---------------------+----------------------------------------------------+
| Field | Value |
+---------------------+----------------------------------------------------+
| id | 77dd9180-1472-4487-b909-ce19f2af5c0b |
| stack_name | secondstack |
| description | Hello world HOT template defining a single server. |
| creation_time | 2017-02-17T13:26:30Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+----------------------------------------------------+
osbash@controller:~$ openstack server list
+--------------------------------------+---------------------------------+--------+------------------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+---------------------------------+--------+------------------------+------------+
| c7be5ac2-3137-45e1-bf47-57d10579a9f5 | secondstack-server-psd4jo55kvht | ACTIVE | provider=203.0.113.109 | cirros |
| 4eddd826-36c7-4a88-b2c6-1534020baa35 | singlestack-server-6nprugl63so3 | ACTIVE | provider=203.0.113.113 | cirros |
+--------------------------------------+---------------------------------+--------+------------------------+------------+
To simplify matters a parent YAML file will be used to create the servers. It will
also call on a child YAML file to build the networks.
This YAML file starts with a parameters section describing the public network
(pub_net) as the existing provider network. It then defines the various attributes
required to establish the private network (pri_net) and the associated private network
subnet (pri_subnet).
Resources
The resources section defines the pri_net as a network and generates the associated
subnet pri_subnet by calling on parameters from the section above.
A router is created whose external gateway information is also extracted from the
parameters section, i.e. pub_net pointing to the provider network. An additional
interface is added to the router and the pri_subnet is associated with it.
Outputs
The outputs section returns the names of the networks as key/value pairs:
Key Value
pub_net_name provider
pri_net_name Extract the name given at create time
router_gw Extract the External Gateway information
If this template is executed on its own then these values can be viewed with the
command:
openstack stack show <stackname>
However if this template is called from another then the values are passed back to
the parent template.
parameters:
pub_net:
type: string
label: Public network name or ID
description: Public network with floating IP addresses.
default: provider
pri_net_cidr:
type: string
default: '192.168.95.0/24'
description: Private network address (CIDR notation)
pri_net_gateway:
type: string
default: '192.168.95.1'
description: Private network gateway address
pri_net_nameserver:
type: comma_delimited_list
default: '8.8.8.8'
description: Private network DNS Server address
pri_net_enable_dhcp:
type: boolean
default: 'True'
description: enable DHCP Server
pri_net_pool_start:
type: string
default: '192.168.95.10'
description: Private network Start IP address allocation pool
pri_net_pool_end:
type: string
default: '192.168.95.20'
description: Private network End IP address allocation pool
pri_net_nexthop:
type: string
default: '203.0.113.1'
description: nexthop address for default route
resources:
pri_net:
type: OS::Neutron::Net
pri_subnet:
type: OS::Neutron::Subnet
properties:
network_id: { get_resource: pri_net }
cidr: { get_param: pri_net_cidr }
dns_nameservers: { get_param: pri_net_nameserver }
gateway_ip: { get_param: pri_net_gateway }
enable_dhcp: { get_param: pri_net_enable_dhcp }
allocation_pools:
- start: { get_param: pri_net_pool_start }
end: { get_param: pri_net_pool_end }
host_routes:
- destination: '0.0.0.0/0'
nexthop: { get_param: pri_net_nexthop }
router:
type: OS::Neutron::Router
properties:
external_gateway_info:
network: { get_param: pub_net }
router-interface:
type: OS::Neutron::RouterInterface
properties:
router_id: { get_resource: router }
subnet: { get_resource: pri_subnet }
outputs:
pub_net_name:
description: The public network.
value: provider
pri_net_name:
description: The private network.
value: { get_attr: [pri_net, name] }
router_gw:
description: Router gateway information
value: { get_attr: [router, external_gateway_info] }
EOM
To prove the network template, it is possible to run it on its own before working on the
parent. This demonstrates that everything up to this point is operational.
osbash@controller:~$ . demo-openrc.sh
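The corresponding create command is not reproduced either; a hedged sketch, assuming the
child template was saved as networks.yaml (the file name the parent template references
below) and using the stack name netstack seen in the outputs:
osbash@controller:~$ openstack stack create --template networks.yaml netstack
osbash@controller:~$ openstack stack show netstack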
Note the outputs. Shortly it will become clear that these values are passed to the
parent YAML template.
Key Value
pub_net_name provider
pri_net_name netstack-pri_net-lrx3l746npza
router_gw ip_address: 203.0.113.112
This YAML file's parameters section describes the flavour and image that will be used to
create the hosts.
Resources
The resources section first instantiates the child template (networks.yaml) as the
resource networks, then creates four hosts: host1 and host2 on the public network and
host3 and host4 on the private network, using the network names returned by the child
template.
Outputs
The outputs section returns the networks of each of the hosts as well as the
external_gateway_info that was gathered by the child template and passed back as the key
router_gw.
parameters:
image:
type: string
label: Image name or ID
description: Image to be used for server.
default: cirros
flavor:
type: string
label: Flavor
description: Type of instance (flavor) for the compute instance.
default: m1.nano
resources:
networks:
type: networks.yaml
host1:
type: OS::Nova::Server
properties:
image: { get_param: image }
flavor: { get_param: flavor }
networks:
- network: { get_attr: [networks, pub_net_name] }
host2:
type: OS::Nova::Server
properties:
image: { get_param: image }
flavor: { get_param: flavor }
networks:
- network: { get_attr: [networks, pub_net_name] }
host3:
type: OS::Nova::Server
properties:
image: { get_param: image }
flavor: { get_param: flavor }
networks:
- network: { get_attr: [networks, pri_net_name] }
host4:
type: OS::Nova::Server
properties:
image: { get_param: image }
flavor: { get_param: flavor }
networks:
- network: { get_attr: [networks, pri_net_name] }
outputs:
host1_networks:
description: The networks of the deployed server
value: { get_attr: [host1, networks] }
host2_networks:
description: The networks of the deployed server
value: { get_attr: [host2, networks] }
host3_networks:
description: The networks of the deployed server
value: { get_attr: [host3, networks] }
host4_networks:
description: The networks of the deployed server
value: { get_attr: [host4, networks] }
router_gateway:
description: The router gateway information
value: { get_attr: [networks, router_gw] }
EOM
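The command that launches the complete stack is not shown; a sketch, assuming the parent
template above was saved as fullstack.yaml (an assumed file name) and using the stack name
fullstack seen in the server list below:
osbash@controller:~$ openstack stack create --template fullstack.yaml fullstack
osbash@controller:~$ openstack stack show fullstack
The stack show command displays the outputs, including the router_gateway information
referred to next.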
Note the external IP address. A route to the 192.168.95.0/24 network via this IP address
will need to be added on the hypervisor.
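A minimal sketch of that route on the hypervisor, with the placeholder replaced by the
ip_address reported in the router_gateway output:
ada:~$ sudo ip route add 192.168.95.0/24 via <router_gateway_ip_address>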
osbash@controller:~$ openstack server list
+--------------------------------------+------------------------------+--------+--------------------------------------------------------------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+------------------------------+--------+--------------------------------------------------------------------+------------+
| 11689033-a802-4b4a-b977-65201db5ed5f | fullstack-host1-dnduvazhk67p | ACTIVE | provider=203.0.113.112 | cirros |
| 4971c1d4-532e-4cf3-b4c6-d2512ddb0c25 | fullstack-host2-5tboxr2zq5wt | ACTIVE | provider=203.0.113.103 | cirros |
| 8e51648c-44c2-424a-a62b-041ceb75a3eb | fullstack-host3-izkubc3b67pq | ACTIVE | fullstack-networks-zj23gzi6zv3n-pri_net-qt7hvxg47jxx=192.168.95.13 | cirros |
| f25518a1-85b5-4566-9def-a18762a47de5 | fullstack-host4-r3grjbxriho2 | ACTIVE | fullstack-networks-zj23gzi6zv3n-pri_net-qt7hvxg47jxx=192.168.95.20 | cirros |
+--------------------------------------+------------------------------+--------+--------------------------------------------------------------------+------------+
osbash@controller:~$ openstack router list
+--------------------------------------+-----------------------------------------------------+--------+-------+-------------+----+----------------------------------+
| ID | Name | Status | State | Distributed | HA | Project |
+--------------------------------------+-----------------------------------------------------+--------+-------+-------------+----+----------------------------------+
| f5d58527-3758-428c-afab-1e8cf48e0575 | fullstack-networks-zj23gzi6zv3n-router-c2brepuddtje | ACTIVE | UP | | | bdd928b9d2e94a67ad927bc98611917c |
+--------------------------------------+-----------------------------------------------------+--------+-------+-------------+----+----------------------------------+
osbash@controller:~$ openstack network list
+--------------------------------------+------------------------------------------------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+------------------------------------------------------+--------------------------------------+
| 638400f6-f050-455b-8059-e60f5877250f | fullstack-networks-zj23gzi6zv3n-pri_net-qt7hvxg47jxx | 69409373-3a6c-49e4-b2b2-bef50e5e64b5 |
| 785f5d02-6690-4e0b-99b3-530741eb1d76 | provider                                             | c52e0181-5431-4bee-8b0d-e76b15750d77 |
+--------------------------------------+------------------------------------------------------+--------------------------------------+
Review the servers and router created.
Connect to one of the hosts on the private network and confirm connectivity to the
Internet.
ada:~$ ssh cirros@192.168.95.13
cirros@192.168.95.13's password: cubswin:)
So, using Heat orchestration, a network with hosts can be built that is for all intents and
purposes identical to that created in Chapter 16, Creating networks.
18. Appendices
18.1 Appendix 1 - NAT Masquerade script for Hypervisor host
Enable IP forwarding and set up masquerading in iptables for Linux netfilter. enp0s3 is
the interface on the hypervisor host that connects to the Internet; it is considered the
outside network for the NAT masquerade. (Note: if this computer is connected wirelessly it
is likely that this interface will actually be wlp4s0.) On KVM/QEMU the provider network is
typically virbr2, while on VirtualBox the network is typically vboxnet1, with the IP
addresses for both taken from the 203.0.113.0/24 network. It is from this network that
instances are assigned IP addresses from a pool. This address pool is the inside network
for the purpose of the NAT masquerade.
###########################################
# program: nat_tables.sh #
# Author: Diarmuid O'Briain #
# Copyright ©2017 C²S Consulting #
# License: www.gnu.org/licenses/gpl.txt #
###########################################
# Select interface, typically 'wlp4s0' for WIFI and 'enp0s3' for wired Ethernet
# Flush iptables
iptables -F
iptables -F -t nat
# Enable IP forwarding
echo
echo "echo \"1\" > /proc/sys/net/ipv4/ip_forward"
echo "1" > /proc/sys/net/ipv4/ip_forward
echo
modprobe ip_tables
modprobe ip_conntrack
# Print iptables
# END
EOM
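The masquerade rules themselves are elided from the listing above; they would typically look
like the sketch below, assuming enp0s3 as the outside interface and the 203.0.113.0/24
provider pool as the inside network:
# Hedged sketch of the NAT rules the script applies (use wlp4s0 for a wireless host)
IF='enp0s3'
iptables -t nat -A POSTROUTING -s 203.0.113.0/24 -o $IF -j MASQUERADE
iptables -A FORWARD -s 203.0.113.0/24 -o $IF -j ACCEPT
iptables -A FORWARD -d 203.0.113.0/24 -i $IF -m state --state ESTABLISHED,RELATED -j ACCEPT
# Print iptables
iptables -t nat -L -n -v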
###########################################
# program: start-stop-cluster.sh #
# Author: Diarmuid O'Briain #
# Copyright ©2017 C²S Consulting #
# License: www.gnu.org/licenses/gpl.txt #
###########################################
PROVIDER=''
# Help function
function usage {
echo -e "usage: $command <PROVIDER> <START | STOP> help, -h, -help, --help\n"
echo -e " PROVIDER:: kvm | vbox\n"
echo -e " kvm = Kernel based Virtual Machine/Quick Emulator (KVM/QEMU)\n"
echo -e " vbox = Oracle VirtualBox\n"
echo -e " Start or Stop the Virtual Machines in the cluster\n"
exit
}
# Action nodes
# Show cluster
# END
EOM
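The node-action and cluster-display portions of the script are elided; for KVM/QEMU they
would reduce to something like the following sketch (the ACTION variable, taken from the
second argument, is an assumption; the VirtualBox branch would use vboxmanage startvm and
controlvm instead):
# Hedged sketch of the elided KVM/QEMU action section
ACTION=$(echo "$2" | tr 'A-Z' 'a-z')
for NODE in controller compute1; do
  if [ "$ACTION" = 'start' ]; then
    virsh --connect qemu:///system start $NODE
  else
    virsh --connect qemu:///system shutdown $NODE
  fi
done
# Show cluster state
virsh --connect qemu:///system list --all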
Cluster state
Id Name State
----------------------------------------------------
29 controller running
30 compute1 running
Cluster state
. .
Id Name State
----------------------------------------------------
Cluster state
Running VMs
"controller" {85cc5cd8-3392-49bd-bac8-76c4a8bed317}
"compute1" {42d461ef-79cf-49a7-a6fd-5bcfcafcd87c}
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Cluster state
###########################################
# program: clean_nodes.sh #
# Author: Diarmuid O'Briain #
# Copyright ©2017 C²S Consulting #
# License: www.gnu.org/licenses/gpl.txt #
###########################################
PROVIDER=''
# Help function
function usage {
echo -e "usage: $command <PROVIDER> help, -h, -help, --help\n"
echo -e " PROVIDER:: kvm | vbox\n"
echo -e " kvm = Kernel based Virtual Machine/Quick Emulator (KVM/QEMU)\n"
echo -e " vbox = Oracle VirtualBox\n"
echo -e " Note: For KVM/QEMU this command must be ran as sudo\n"
exit
}
break
fi
done
else
while [[ 1 ]]; do
CONTROLLER_STATE=`vboxmanage showvminfo 'controller' | \
grep '^State' | awk '{print $2}'`
COMPUTE_STATE=`vboxmanage showvminfo 'compute1' | \
grep '^State' | awk '{print $2}'`
printf "."
if [[ $CONTROLLER_STATE =~ 'powered' && $COMPUTE_STATE =~ 'powered' ]]; then
echo -e "\n\nController node and Compute1 node are in a shut down state"
break
fi
done
fi
# END
EOM
......
Id Name State
----------------------------------------------------
7 controller running
8 compute1 running
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
"controller" {e18abd53-5c5c-4938-84af-ca4e6409a734}
"compute1" {bd283312-4d11-4e8f-9ab2-a08c91de59e3}
###########################################
# program: instance_launch.sh #
# Author: Diarmuid O'Briain #
# Copyright ©2017 C²S Consulting #
# License: www.gnu.org/licenses/gpl.txt #
###########################################
# Variables
KEYNAME='mykey'
INSTANCE='cirrOS-test'
VOLNAME='1GB-vol'
FLAVOUR='m1.nano'
IMAGE='cirros'
SSH_HOSTS_FILE='/home/osbash/.ssh/known_hosts'
export OS_USERNAME=admin
export OS_PASSWORD=admin_user_secret
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://10.0.0.11:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_USERNAME=demo
export OS_PASSWORD=demo_user_pass
export OS_PROJECT_NAME=demo
export OS_AUTH_URL=http://10.0.0.11:5000/v3
if [ -e "$SSH_HOSTS_FILE" ]; then
rm $SSH_HOSTS_FILE
fi
touch $SSH_HOSTS_FILE
echo; echo "Adding port 22 (SSH) and ICMP to default security group"
while [ "$(openstack server list | grep $INSTANCE | awk '{print $6}')" != 'ACTIVE' ]; do
printf ". "
sleep 2
done
echo; echo
# END
EOM
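The launch steps between the credential exports and the wait loop are elided; a hedged
sketch of what such a script does with the variables defined above (the wait loop shown in
the listing then polls until the instance is ACTIVE):
# Hedged sketch of the elided launch steps, using the variables defined above
openstack security group rule create --proto tcp --dst-port 22 default
openstack security group rule create --proto icmp default

# Register the osbash public key and boot the instance
openstack keypair create --public-key /home/osbash/.ssh/id_rsa.pub $KEYNAME
openstack server create --flavor $FLAVOUR --image $IMAGE \
  --security-group default --key-name $KEYNAME $INSTANCE

# Once the instance is ACTIVE, create and attach the volume
openstack volume create --size 1 $VOLNAME
openstack server add volume $INSTANCE $VOLNAME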
#######################
# network_launch.sh #
# Diarmuid O'Briain #
#######################
# Variables
## Function ##
function host_create() {
local _INSTANCE=$1
local _FLAVOUR=$2
local _IMAGE=$3
local _NIC=$4
## END FUNCTION ##
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_user_secret
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo_user_pass
export OS_AUTH_URL=http://controller:5000/v3
echo; echo "Adding port 22 (SSH) and ICMP to default security group"
for i in ${INSTANCE_A[@]}; do
host_create $i $FLAVOUR $IMAGE $PROVIDER_NIC
done
for i in ${INSTANCE_B[@]}; do
host_create $i $FLAVOUR $IMAGE $PNET_NIC
done
echo; echo
# END
EOM
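The body of host_create() is elided; given the positional parameters it captures, a
plausible minimal implementation is sketched below (not the author's exact code):
function host_create() {
  local _INSTANCE=$1
  local _FLAVOUR=$2
  local _IMAGE=$3
  local _NIC=$4
  # Boot one instance on the network whose UUID was passed in
  openstack server create --flavor "$_FLAVOUR" --image "$_IMAGE" \
    --nic net-id="$_NIC" --security-group default "$_INSTANCE"
}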
Starting install...
Creating domain... | 0 B 00:00:00
Domain installation still in progress. Waiting for installation to complete.
INFO Waiting 5 seconds for VM base to come up.
INFO Booting into distribution installer.
INFO Initiating boot sequence for base.
INFO Waiting for VM base to be defined.
INFO Waiting for MAC address.
INFO Waiting for IP address.
....................................
INFO Waiting for ping returning from 192.168.122.47.
INFO Waiting for ssh server in VM base to respond at 192.168.122.47:22.
WARNING Adjusting permissions for key file (0400):
/home/alovelace/OpenStack-lab/labs/lib/osbash-ssh-keys/osbash_key
.........................................................................
.........................................................................
..........................................................
Domain has shutdown. Continuing.
Domain creation completed.
Restarting guest.
............
INFO Connected to ssh server.
INFO Start autostart/00_base_fixups.sh
INFO done
INFO Start autostart/01_apt_init.sh
..........................................................
INFO done
INFO Start autostart/02_apt_upgrade.sh
.............................................................
INFO done
INFO Start autostart/03_pre-download.sh
.....................................................
INFO done
INFO Start autostart/04_apt_pre-download.sh
.........................................................................
.........................................................................
..........................................................
INFO done
INFO Start autostart/05_enable_osbash_ssh_keys.sh
INFO done
INFO Start autostart/06_zero_empty.sh
.............................................
INFO done
INFO Start autostart/07_shutdown.sh
INFO done
INFO Processing of scripts successful.
INFO Waiting for shutdown of VM base.
[sudo] password for alovelace: babbage
INFO Compacting base-ssh-pike-ubuntu-16.04-amd64.
WARNING No virt-sparsify executable found.
WARNING Consider installing libguestfs-tools.
INFO Base disk created.
INFO stacktrain base disk build ends.
INFO Basedisk build took 8489 seconds
INFO Creating mgmt network: 10.0.0.0.
INFO Creating provider network: 203.0.113.0.
INFO Asked to delete VM controller.
INFO not found
INFO Creating copy-on-write VM disk.
WARNING Graphics requested but DISPLAY is not set. Not running virt-viewer.
WARNING No console to launch for the guest, defaulting to --wait -1
Starting install...
Creating domain... | 0 B 00:00:00
Domain creation completed.
You can restart your domain by running:
virsh --connect qemu:///system start controller
INFO Waiting for VM controller to be defined.
INFO Node controller created.
INFO init_xxx_node.sh -> 00_init_controller_node.sh
INFO etc_hosts.sh -> 01_etc_hosts.sh
INFO enable_osbash_ssh_keys.sh -> 02_enable_osbash_ssh_keys.sh
INFO copy_openrc.sh -> 03_copy_openrc.sh
INFO apt_install_mysql.sh -> 04_apt_install_mysql.sh
INFO install_rabbitmq.sh -> 05_install_rabbitmq.sh
INFO install_memcached.sh -> 06_install_memcached.sh
INFO setup_keystone.sh -> 07_setup_keystone.sh
INFO get_auth_token.sh -> 08_get_auth_token.sh
INFO setup_glance.sh -> 09_setup_glance.sh
INFO setup_nova_controller.sh -> 10_setup_nova_controller.sh
INFO setup_neutron_controller.sh -> 11_setup_neutron_controller.sh
INFO setup_self-service_controller.sh -> 12_setup_self-service_controller.sh
INFO setup_neutron_controller_part_2.sh -> 13_setup_neutron_controller_part_2.sh
INFO setup_horizon.sh -> 14_setup_horizon.sh
INFO setup_cinder_controller.sh -> 15_setup_cinder_controller.sh
INFO setup_heat_controller.sh -> 16_setup_heat_controller.sh
INFO Starting VM controller
INFO Waiting for VM controller to run.
INFO Waiting for MAC address.
INFO Waiting for IP address.
.............
INFO Waiting for ssh server in VM controller to respond at 192.168.122.47:22.
INFO Connected to ssh server.
INFO Start autostart/00_init_controller_node.sh
INFO done
INFO Start autostart/01_etc_hosts.sh
INFO done
INFO Start autostart/02_enable_osbash_ssh_keys.sh
INFO done
INFO Start autostart/03_copy_openrc.sh
INFO done
INFO Start autostart/04_apt_install_mysql.sh
.........................................................................
INFO done
INFO Start autostart/05_install_rabbitmq.sh
................................................
INFO done
INFO Start autostart/06_install_memcached.sh
............
INFO done
Starting install...
Creating domain... | 0 B 00:00:00
Domain creation completed.
You can restart your domain by running:
virsh --connect qemu:///system start compute1
INFO Waiting for VM compute1 to be defined.
INFO Node compute1 created.
INFO init_xxx_node.sh -> 00_init_compute1_node.sh
INFO etc_hosts.sh -> 01_etc_hosts.sh
INFO enable_osbash_ssh_keys.sh -> 02_enable_osbash_ssh_keys.sh
INFO copy_openrc.sh -> 03_copy_openrc.sh
INFO setup_nova_compute.sh -> 04_setup_nova_compute.sh
INFO setup_neutron_compute.sh -> 05_setup_neutron_compute.sh
INFO setup_self-service_compute.sh -> 06_setup_self-service_compute.sh
INFO setup_neutron_compute_part_2.sh -> 07_setup_neutron_compute_part_2.sh
INFO setup_cinder_volumes.sh -> 08_setup_cinder_volumes.sh
INFO Starting VM compute1
INFO Waiting for VM compute1 to run.
INFO Waiting for MAC address.
INFO Waiting for IP address.
.................
INFO Waiting for ssh server in VM compute1 to respond at 192.168.122.64:22.
INFO Connected to ssh server.
INFO Start autostart/00_init_compute1_node.sh
INFO done
INFO Start autostart/01_etc_hosts.sh
INFO done
INFO done
INFO Start autostart/08_get_auth_token.sh
INFO done
INFO Start autostart/09_setup_glance.sh
.................................
INFO done
INFO Start autostart/10_setup_nova_controller.sh
.................................................................................
...................
INFO done
INFO Start autostart/11_setup_neutron_controller.sh
................
INFO done
INFO Start autostart/12_setup_self-service_controller.sh
...............
INFO done
INFO Start autostart/13_setup_neutron_controller_part_2.sh
......................
INFO done
INFO Start autostart/14_setup_horizon.sh
........................................
INFO done
INFO Start autostart/15_setup_cinder_controller.sh
...............................................
INFO done
INFO Start autostart/16_setup_heat_controller.sh
............................................................
INFO done
INFO Processing of scripts successful.
INFO Asked to delete VM compute1
INFO not found
INFO Created VM compute1.
INFO Attaching to VM compute1 (multi):
/home/alovelace/OpenStack-lab/labs/img/base-ssh-pike-ubuntu-16.04-amd64.vdi
INFO Creating disk (size: 204800 MB):
/home/alovelace/OpenStack-lab/labs/img/compute1-sdb.vdi
INFO Attaching to VM compute1:
/home/alovelace/OpenStack-lab/labs/img/compute1-sdb.vdi
INFO Node compute1 created.
INFO init_xxx_node.sh -> 00_init_compute1_node.sh
INFO etc_hosts.sh -> 01_etc_hosts.sh
INFO enable_osbash_ssh_keys.sh -> 02_enable_osbash_ssh_keys.sh
INFO copy_openrc.sh -> 03_copy_openrc.sh
INFO setup_nova_compute.sh -> 04_setup_nova_compute.sh
INFO setup_neutron_compute.sh -> 05_setup_neutron_compute.sh
INFO setup_self-service_compute.sh -> 06_setup_self-service_compute.sh
INFO setup_neutron_compute_part_2.sh -> 07_setup_neutron_compute_part_2.sh
INFO setup_cinder_volumes.sh -> 08_setup_cinder_volumes.sh
INFO Starting VM compute1 with headless GUI
INFO Waiting for ssh server in VM compute1 to respond at 127.0.0.1:2232.
...........
INFO Connected to ssh server.
..
INFO Start autostart/00_init_compute1_node.sh
.....
INFO done
INFO Start autostart/01_etc_hosts.sh
..
INFO done
INFO Start autostart/02_enable_osbash_ssh_keys.sh
INFO done
INFO Start autostart/03_copy_openrc.sh
INFO done
INFO Start autostart/04_setup_nova_compute.sh
.................................................................................
....
INFO done
INFO Start autostart/05_setup_neutron_compute.sh
........
INFO done
19. Abbreviations
AES Advanced Encryption Standard
AMQP Advanced Message Queuing Protocol
BIOS Basic Input/Output System
Ceilometer Telemetry service
Cinder Block storage service
CPU Central Processing Unit
CRUD Create, read, update and delete
CT Container
DHCP Dynamic Host Configuration Protocol
EC2 Elastic Compute 2 (Amazon basic VM)
GB Gigabytes
Glance Image service
HA High Availability
Heat Orchestration service
Horizon Dashboard
HOT Heat Orchestration Template
HTTP Hypertext Transfer Protocol
HVM Hardware-assisted Virtual Machine
IaaS Infrastructure as a Service
ICMP Internet Control Message Protocol
I/O Input/Output
IP Internet Protocol
Keystone Identity service
KVM Kernel-based Virtual Machine
L2 Layer 2 - Bridging/switching
L3 Layer 3 - Routing
libvirt Toolkit to manage virtualisation hosts
LM Long Mode
LVM Logical Volume Manager
MB Megabytes
NASA National Aeronautics and Space Administration
NAT Network Address Translation (masquerading)
Neutron Networking service
Nova Compute service
NTP Network Time Protocol
ORM Object Relational Mapper
OvS Open vSwitch
XML eXtensible Markup Language
20. Bibliography
OpenStack. (2017). Training-Labs Webpage [online]. Available at: http://docs.openstack.org/training_labs [Accessed:
1 Oct 2017].
OpenStack. (2017). Installation Tutorial for Ubuntu [online]. Available at: https://docs.openstack.org/ocata/install-
guide-ubuntu [Accessed: 1 Oct 2017].
KVM. (2017). KVM virtualization solution for Linux Website [online]. Available at: http://www.linux-kvm.org [Accessed:
1 Oct 2017].
Libvirt. (2017). Libvirt Virtualisation API Website [online]. Available at: https://libvirt.org [Accessed: 1 Oct 2017].
QEMU. (2017). QEMU Emulator Website [online]. Available at: http://wiki.qemu.org [Accessed: 1 Oct 2017].
Oracle. (2016). VirtualBox manual [online]. Available at: https://www.virtualbox.org/manual [Accessed: 1 Oct 2017].
Dan Radez (2016). OpenStack Essentials 2nd edition. Packt Publishing, August 31, 2016. ISBN-10: 1786462664,
ISBN-13: 978-1786462664.
Andrey Markelov. (2016). Certified OpenStack Administrator Study Guide 1st ed. Edition. Apress, November 5, 2016.
ISBN-10: 1484221249, ISBN-13: 978-1484221242.