Cloud Computing - Midsem
CS-1
Faculty Name: Prof. Pradnya Kashikar
BITS Pilani pradnyak@wilp.bits-Pilani.ac.in
IMP Note to Students
➢ It is important to know that simply logging in to the session does not
guarantee attendance.
➢ Once you join the session, stay until the end to be marked
present in the class.
➢ IMPORTANTLY, you need to make the class more interactive by
responding to the Professor's queries during the session.
➢ Whenever the Professor calls your number / name, you must
respond; otherwise you will be marked ABSENT.
Introduction to Cloud Computing, services
and deployment models
• Agenda
1. Introduction to Cloud Computing – Origins and
Motivation
2. 3-4-5 rule of Cloud Computing
3. Types of Clouds and Services
4. Cloud Infrastructure and Deployment
(Figure) Hardware and software advances:
• Powerful multi-core processors
• General-purpose graphics processors
• Superior software methodologies
• Virtualization leveraging the powerful hardware
• Wider bandwidth for communication
• Proliferation of devices
• Explosion of domain applications
meeting application demands:
1. Web Scale Problems
2. Web 2.0 and Social Networking
3. Information Explosion
4. Mobile Web
BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956
Evolution of Web
Explosive growth in applications:
• biomedical informatics, space exploration, business analytics,
• web 2.0 social networking: YouTube, Facebook
Extreme scale content generation: e-science and e-business data deluge
Extraordinary rate of digital content consumption: digital gluttony:
• Apple iPhone, iPad, Amazon Kindle, Android, Windows Phone
Exponential growth in compute capabilities:
• multi-core, storage, bandwidth, virtual machines (virtualization)
Very short cycle of obsolescence in technologies:
• Windows 8, Ubuntu, Mac; Java versions; C → C#; Python
Newer architectures: web services, persistence models, distributed file
systems/repositories (Google, Hadoop), multi-core, wireless and mobile
• Diverse knowledge and skill levels of the workforce
Drivers for the new Platform
http://blogs.technet.com/b/yungchou/archive/2011/03/03/chou-s-theories-of-cloud-computing-the-5-3-2-principle.aspx
Characteristics of Cloud Computing
• Shared pool of configurable computing resources
• On-demand network access
• Provisioned by the Service Provider
Cloud Definitions
• Definition from Wikipedia
▪ Cloud computing is Internet-based computing, whereby shared
resources, software, and information are provided to computers
and other devices on demand, like the electricity grid.
▪ Cloud computing is a style of computing in which dynamically
scalable and often virtualized resources are provided as a
service over the Internet.
Cloud Definitions
• Definition from Whatis.com
▪ The name cloud computing was inspired by the cloud symbol that's
often used to represent the Internet in flowcharts and diagrams.
Cloud computing is a general term for anything that involves
delivering hosted services over the Internet.
Cloud Definitions
• Definition from Berkeley
▪ Cloud Computing refers to both the applications delivered as
services over the Internet and the hardware and systems software
in the datacenters that provide those services.
▪ The services themselves have long been referred to as Software as a
Service (SaaS), so we use that term. The datacenter hardware and
software is what we will call a
Cloud.
▪ When a Cloud is made available in a pay-as-you-go manner to the
public, the service being sold is Utility Computing.
Cloud Definitions
• Definition from Buyya
▪ A Cloud is a type of parallel and distributed system consisting of a
collection of interconnected and virtualized computers that are
dynamically provisioned and presented as one or more unified
computing resources based on service-level agreements
established through negotiation between the service provider and
consumers.
Properties and characteristics
• What is elasticity ?
▪ The ability of an infrastructure to adapt in real time, growing or
shrinking the resources allocated to a workload in a quantifiable way as
demand changes.
• What is manageability ?
▪ Enterprise-wide administration of cloud computing systems.
Systems manageability is strongly influenced by network
management initiatives in telecommunications.
• What is interoperability ?
▪ Interoperability is a property of a product or system, whose
interfaces are completely understood, to work with other products
or systems, present or future, without any restricted access or
implementation.
• But how to achieve these properties ?
▪ System control automation
▪ System state monitoring
Control Automation
• What is Autonomic Computing ?
▪ Its ultimate aim is to develop computer systems capable of self-
management, to overcome the rapidly growing complexity of
computing systems management, and to reduce the barrier that
complexity poses to further growth.
• Architectural framework :
▪ Composed by Autonomic Components (AC) which will interact
with each other.
▪ An AC can be modeled in terms of two main control loops (local
and global) with sensors (for self-monitoring), effectors (for self-
adjustment), and a knowledge base and planner/adapter for exploiting
policies based on self- and environment awareness.
Control Automation
• Anything more ?
▪ Billing system
Billing System
• How to approach ?
▪ Use pre-defined workflow
▪ Automatic system configuration
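A pre-defined billing workflow of this kind can be sketched in a few lines. The meter names and unit rates below are hypothetical, not any provider's actual pricing:

```python
# Sketch of a pay-per-use billing step; rates and meter names are hypothetical.
RATES = {"cpu_hours": 0.05, "gb_ram_hours": 0.01, "gb_storage_days": 0.002}

def compute_bill(usage: dict) -> float:
    """Multiply each metered quantity by its unit rate and total the charges."""
    return round(sum(RATES[meter] * qty for meter, qty in usage.items()), 2)

bill = compute_bill({"cpu_hours": 100, "gb_ram_hours": 400, "gb_storage_days": 50})
print(bill)  # 100*0.05 + 400*0.01 + 50*0.002, rounded -> 9.1
```

A real billing system would pull the usage dict from the monitoring subsystem described above rather than hard-coding it.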
Accessibility & Portability
• What is accessibility ?
▪ Accessibility is a general term used to describe the degree to which
a product, device, service, or environment is accessible by as many
people as possible.
Cloud Computing
CS-2
Virtualization Techniques and Types
Introduction to Virtualisation
• AGENDA
Virtualisation
Introduction to Virtualization
Use & demerits of Virtualization
Virtualize Compute
Compute Virtualization
Need for Compute Virtualization
x86 Architecture
Types of Hypervisor
(Figure: Type 1 (bare-metal) and Type 2 (hosted) hypervisors running applications in VMs)
Benefits of Compute Virtualization
• Server consolidation
• Isolation
• Encapsulation
• Hardware independence
• Reduced cost
Requirements: x86 Hardware Virtualization
Full Virtualization
Paravirtualization
Hardware Assisted Virtualization
Virtual Machine
Virtual Machine Files
• Virtual BIOS File: stores the state of the virtual machine's (VM's) BIOS.
• Virtual Swap File: the VM's paging file, which backs the VM RAM contents; exists only while the VM is running.
• Virtual Disk File: stores the contents of the VM's disk drive; appears like a physical disk drive to the VM; a VM can have multiple disk drives.
• Log File: keeps a log of VM activity; useful for troubleshooting.
• Virtual Configuration File: stores the configuration information chosen during VM creation, including the number of CPUs, memory, number and type of network adapters, and disk types.
File System to Manage VM Files
• The file systems supported by hypervisor are Virtual Machine File System (VMFS)
and Network File System (NFS)
• VMFS
• Is a cluster file system that allows multiple physical machines to perform
read/write on the same storage device concurrently
• Is deployed on FC and iSCSI storage apart from local storage
• NFS
• Enables storing VM files on a remote file server (NAS device)
• NFS client is built into hypervisor
Virtual Machine Hardware
(Figure: virtual hardware of a VM: parallel port, serial/COM ports, USB controller and USB devices, RAM, keyboard)
VM Hardware Components
• vCPU: a virtual machine (VM) can be configured with one or more virtual CPUs; the number of CPUs allocated to a VM can be changed.
• Virtual DVD/CD-ROM Drive: maps a VM's DVD/CD-ROM drive to either a physical drive or an .iso file.
• Virtual Floppy Drive: maps a VM's floppy drive to either a physical drive or a .flp file.
• Virtual SCSI Controller: the VM uses a virtual SCSI controller to access its virtual disk.
• Virtual USB Controller: maps the VM's USB controller to the physical USB controller.
Virtual Machine Console
Resource Management
Resource Pool
Resource Pool Example
Share, Limit, and Reservation
• Parameters that control the resources consumed by a child resource pool or a virtual
machine (VM) are as follows:
• Share
• Amount of CPU or memory resources a VM or a child resource pool can have
with respect to its parent’s total resources
• Limit
• Maximum amount of CPU and memory a VM or a child resource pool can
consume
• Reservation
• Amount of CPU and memory reserved for a VM or a child resource pool
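A minimal sketch of how these three parameters could interact when dividing a parent pool's CPU, assuming a simple proportional-share policy. The pool names, numbers, and the clamping order are illustrative, not a vendor's actual scheduler:

```python
# Sketch: distribute a parent pool's CPU (MHz) among children by shares,
# then clamp each child's allocation to its [reservation, limit] range.
# Names and numbers are illustrative, not a vendor algorithm.

def allocate(total_mhz, children):
    total_shares = sum(c["shares"] for c in children)
    alloc = {}
    for c in children:
        proportional = total_mhz * c["shares"] / total_shares
        # Reservation guarantees a floor; limit caps the maximum.
        alloc[c["name"]] = min(max(proportional, c["reservation"]), c["limit"])
    return alloc

pools = [
    {"name": "prod", "shares": 2000, "reservation": 1000, "limit": 6000},
    {"name": "test", "shares": 1000, "reservation": 0,    "limit": 2000},
]
print(allocate(6000, pools))  # {'prod': 4000.0, 'test': 2000.0}
```

Note that clamping after the proportional split can leave the pool over- or under-committed; a real scheduler would redistribute the remainder iteratively.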
Optimizing CPU Resources
Multi-core Processors
(Figure: VMs with one, two, and four virtual CPUs mapped onto physical cores and sockets)
Hyper-threading
(Figure: VMs with one, two, and one virtual CPUs scheduled across hyper-threads)
Optimizing Memory Resource
Memory Ballooning
(Figure: with no memory shortage, the balloon remains uninflated)
Memory Swapping
• Each powered-on virtual machine (VM) needs its own swap file
• Created when the VM is powered-on
• Deleted when the VM is powered-off
• Swap file size is equal to the difference between the memory limit and the VM memory
reservation
• Hypervisor swaps out the VM’s memory content if memory is scarce
• Swapping is the last option because it causes notable performance impact
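The sizing rule above is a simple subtraction and can be sketched directly:

```python
# Swap file size = memory limit - memory reservation (per the rule above).
def swap_file_size_mb(memory_limit_mb: int, memory_reservation_mb: int) -> int:
    return memory_limit_mb - memory_reservation_mb

# A VM limited to 4096 MB with 1024 MB reserved gets a 3072 MB swap file:
print(swap_file_size_mb(4096, 1024))  # 3072
```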
Physical to Virtual Machine (P2V) Conversion
(Figure: P2V conversion converts a physical machine into a virtual machine (VM))
Benefits of P2V Converter
Components of P2V Converter
• There are three key components:
• Converter server
• Is responsible for controlling conversion process
• Is used for hot conversion only (when source is running its OS)
• Pushes and installs agent on the source machine
• Converter agent
• Is responsible for performing the conversion
• Is used in hot mode only
• Is installed on physical machine to convert it to virtual machine (VM)
• Converter Boot CD
• Bootable CD contains its operating system (OS) and converter application
• Converter application is used to perform cold conversion
Conversion Options
• Hot conversion
• Occurs while physical machine is running
• Performs synchronization
• Copies blocks that were changed during the initial cloning period
• Performs power off at source and power on at target virtual machine (VM)
• Changes IP address and machine name of the selected machine, if both
machines must co-exist on the same network
• Cold conversion
• Occurs while physical machine is not running OS and application
• Boots the physical machine using converter boot CD
• Creates consistent copy of the physical machine
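The synchronization step of a hot conversion (re-copying only the blocks that changed on the running source during initial cloning) can be sketched as follows; block tracking is simplified to dict comparison here, whereas real converters use changed-block tracking:

```python
# Sketch of hot-conversion synchronization: after the initial clone, re-copy
# only the blocks that changed on the running source machine.

def initial_clone(source: dict) -> dict:
    return dict(source)  # full copy taken while the source keeps running

def synchronize(source: dict, target: dict) -> list:
    """Find blocks that differ from the clone and re-copy just those."""
    changed = [blk for blk, data in source.items() if target.get(blk) != data]
    for blk in changed:
        target[blk] = source[blk]
    return changed

source = {0: "boot", 1: "data-v1", 2: "logs-v1"}
target = initial_clone(source)
source[1] = "data-v2"          # source changes during the cloning window
source[2] = "logs-v2"
print(synchronize(source, target))  # [1, 2]: only changed blocks re-copied
```

After synchronization the converter powers off the source and powers on the target VM, as described above.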
Hot Conversion Process
(Figure: a converter server running converter software installs an agent on the powered-on source machine)
Hot Conversion Process (contd.)
(Figure: the agent snapshots the powered-on source machine's volume, clones it to the target, and the converter server reconfigures the new VM)
Cold Conversion Process
(Figure: the source physical machine with its source volume, and a destination physical machine running a hypervisor)
Cold Conversion Process (contd.)
(Figure: booted from the converter boot CD, the source disk is cloned to a VM disk on the destination physical machine running a hypervisor, followed by reconfiguration)
Storage Virtualization
Benefits of Storage Virtualization
Storage Virtualization at Different Layers
• Network layer: block-level virtualization, file-level virtualization
• Storage layer: virtual provisioning, automated storage tiering
Storage for Virtual Machines
• Size of the virtual disk file represents the storage space allocated to the virtual disk
• VMs remain unaware of the total space available to the hypervisor
(Figure: two compute systems storing VM files on VMFS over an FC SAN and on NFS over an IP network)
File System for Managing VM Files
Network Virtualization
It is a process of logically segmenting or grouping physical network(s) and
making them operate as single or multiple independent network(s) called
“Virtual Network(s)”.
Network Virtualization in VDC
• Involves virtualizing physical and VM networks
• Consists of the following physical components: network adapters, switches,
(Figure: physical servers running hypervisors, connected via physical NICs to the physical network)
Benefits of Network Virtualization
• Enhances security: restricts access to nodes in a virtual network from another virtual network; isolates sensitive data of one virtual network from another
• Enhances performance: restricts network broadcasts and improves virtual network performance
• Improves manageability: allows configuring virtual networks from a centralized management workstation using management software; eases grouping and regrouping of nodes
• Improves utilization and reduces CAPEX: enables multiple virtual networks to share the same physical network, which improves utilization of network resources; reduces the requirement to set up separate physical networks for different node groups
Components of VDC Network Infrastructure
• VDC network infrastructure includes both virtual and physical network components
• Components are connected to each other to enable network traffic flow
• Virtual NIC: connects VMs to the VM network; sends/receives VM traffic to/from the VM network
• Virtual HBA: enables a VM to access an FC RDM disk/LUN assigned to the VM
• Virtual switch: an Ethernet switch that forms the VM network; provides connections to virtual NICs and forwards VM traffic; provides a connection to the hypervisor kernel and directs hypervisor traffic (management, storage, VM migration)
• Physical adapter (NIC, HBA, CNA): connects physical servers to the physical network; forwards VM and hypervisor traffic to/from the physical network
• Physical switch, router: forms the physical network that supports Ethernet/FC/iSCSI/FCoE; provides connections among physical servers, between physical servers and storage systems, and between physical servers and clients
Virtual Network Component: Virtual NIC
Overview of Desktop and Application Virtualization
• Application Virtualization: isolates the application from the OS and hardware
• Desktop Virtualization: isolates the hardware from the OS, applications, and user state
(Figure: layered stack of application, operating system, and hardware)
Desktop Virtualization
Benefits of Desktop Virtualization
Desktop Virtualization Techniques
Remote Desktop Services
• RDS was traditionally known as Terminal Services
• A terminal service runs on top of a Windows installation
• Provides individual sessions to client systems
• Clients receive visuals of the desktop
• Resource consumption takes place on the server
Benefits of Remote Desktop Services
Virtual Desktop Infrastructure (VDI)
• VDI involves hosting desktops that run as VMs on servers in the VDC
• Each desktop has its own OS and applications installed
• The user has full access to the resources of the virtualized desktop
VDI: Components
• Endpoint devices (PCs, notebooks, thin clients)
• VM hosting/execution servers
• Connection broker
(Figure: endpoint devices connect through the connection broker to VM execution servers backed by shared storage)
Use case Scenario for virtualization
(Figure: the Admin's physical machine serving Cust 1, Cust 2, and Cust 3)
• Suppose the Admin has a machine with 4 CPUs and 8 GB of memory, and three customers:
• Cust 1 wants a machine with 1 CPU and 3 GB of memory
• Cust 2 wants 2 CPUs and 1 GB of memory
• Cust 3 wants 1 CPU and 4 GB of memory
• What should the Admin do?
Resource allocation in virtualization
(Figure: a virtual machine monitor on the Admin's physical machine presents separate virtual machines to Cust 1, Cust 2, and Cust 3)
• The Admin can sell each customer a virtual machine (VM) with the requested resources
• From each customer's perspective, it appears as if they had a physical machine all to
themselves (isolation)
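The Admin's decision reduces to a capacity check before selling VMs; a minimal sketch using the slide's numbers:

```python
# Sketch: can the Admin's machine (4 CPUs, 8 GB) host all three requests?
HOST = {"cpus": 4, "mem_gb": 8}
requests = {"Cust 1": (1, 3), "Cust 2": (2, 1), "Cust 3": (1, 4)}  # (cpus, gb)

def fits(host, reqs):
    """True if the summed CPU and memory requests fit within the host."""
    cpus = sum(c for c, _ in reqs.values())
    mem = sum(m for _, m in reqs.values())
    return cpus <= host["cpus"] and mem <= host["mem_gb"]

print(fits(HOST, requests))  # True: 4 CPUs and 8 GB exactly fill the host
```

Overcommitment, discussed later, deliberately relaxes this check on the assumption that customers rarely use their full allocation at once.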
How does it work?
(Figure: two VMs, each running apps on its own OS, above the VMM on the physical machine)
• The VMM maintains a translation table mapping each VM's virtual address ranges to physical ranges:
VM   Virtual     Physical
1    0-99        0-99
1    299-399     100-199
2    0-99        300-399
2    200-299     500-599
2    600-699     400-499
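The translation table can be exercised with a small lookup sketch; range granularity here is a simplification of real page tables:

```python
# Sketch of the VMM translation table: map (vm, virtual address) to a
# physical address. Ranges are copied from the slide's table.
TABLE = [  # (vm, virt_start, virt_end, phys_start)
    (1, 0, 99, 0),
    (1, 299, 399, 100),
    (2, 0, 99, 300),
    (2, 200, 299, 500),
    (2, 600, 699, 400),
]

def translate(vm: int, virt: int) -> int:
    for v, lo, hi, phys in TABLE:
        if v == vm and lo <= virt <= hi:
            return phys + (virt - lo)  # keep the same offset within the range
    raise ValueError("no mapping")  # would fault in a real VMM

print(translate(2, 250))  # 550: VM 2's address 250 lives at physical 550
```

Each VM sees only its own virtual addresses, which is how the VMM enforces the isolation discussed above.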
Benefit: Migration in case of disaster
(Figure: the virtual machine monitor migrates customers' VMs from one physical machine to another)
Benefit: Time sharing
(Figure: four customers' VMs share a single physical machine through the virtual machine monitor)
Benefit and challenge: Isolation
(Figure: the VMM isolates each customer's virtual machine on the shared physical machine)
Recap: Virtualization in the cloud
• Gives cloud provider a lot of flexibility
• Can produce VMs with different capabilities
• Can migrate VMs if necessary (e.g., for maintenance)
• Can increase load by overcommitting resources
• Provides security and isolation
• Programs in one VM cannot influence programs in another
• Convenient for users
• Complete control over the virtual 'hardware' (can install own operating system, own
applications, ...)
• But: Performance may be hard to predict
• Load changes in other VMs on the same physical machine may affect the performance seen
by the customer
Introduction to Virtualisation
• AGENDA
Types of Virtualization
x86 Hardware Virtualization
Manage the resources for the SaaS, PaaS and IaaS models
Introduction to NFV – VNF
• Examples of Virtualization
• Virtual drives
• Virtual memory
• Virtual machines
• Virtual servers
• Why is it popular?
• Types of Virtualization
Hardware Based Virtualization
• In operating system-based virtualization, by contrast, the virtualization layer is first installed
on a full host operating system, which is then used to generate virtual machines.
• An abstract execution environment, in terms of computer hardware, in which a guest OS can
be run is referred to as hardware-level virtualization.
• Here, an operating system represents the guest, the physical computer hardware represents
the host, its emulation represents a virtual machine, and the hypervisor represents the Virtual
Machine Manager.
• Allowing virtual machines to interact with hardware without any intermediary action from
the host operating system generally makes hardware-based virtualization more efficient.
• A fundamental component of hardware virtualization is the hypervisor, or virtual machine
manager (VMM).
Hardware Based Virtualization
• Type-I hypervisors:
• They run directly on top of the hardware: they stand in for operating systems and
communicate directly with the ISA interface offered by the underlying hardware, which they
replicate to allow guest operating systems to be managed.
• Because it runs natively on hardware, this sort of hypervisor is also known as a native virtual
machine.
Hardware Based Virtualization
• Type-II hypervisors:
• They require the support of an operating system: they are operating-system-managed
programs that communicate with it via the ABI and emulate the ISA of virtual hardware for
guest operating systems.
• Because it is hosted within an operating system, this form of hypervisor is also known as a
hosted virtual machine.
Hardware Based Virtualization
• A hypervisor has a simple user interface that needs some storage space.
• For the provisioning of virtual machines, device drivers and support software are optimized
while many standard operating system functions are not implemented.
Hardware Based Virtualization
• Hardware compatibility is another challenge for hardware-based virtualization.
• The virtualization layer interacts directly with the host hardware, which means that all the
associated drivers and support software must be compatible with the hypervisor.
• Hardware device drivers available to other operating systems may not be available for
hypervisor platforms.
• Moreover, host management and administration features may not contain the range of
advanced functions that are common in operating systems.
• Note: Hyper-V communicates with the underlying hardware mostly through vendor-supplied
drivers.
Features of hardware-based virtualization are:
• Isolation: Hardware-based virtualization provides strong isolation between virtual machines,
which means that any problems in one virtual machine will not affect other virtual machines
running on the same physical host.
• It also allows for live migration of virtual machines between physical hosts, which can be
used for load balancing and other purposes.
Advantages and disadvantages of HBV
• It reduces the maintenance overhead of paravirtualization as it reduces (ideally, eliminates)
the modification in the guest operating system.
• This performance hit can be mitigated by the use of para-virtualized drivers; the combination
has been called “hybrid virtualization”.
Hypervisor
Type I versus Type II Hypervisor
Virtualization Hardware
• CPU
• At least one CPU core per virtual machine
• Having free cores for high stress situations recommended
• RAM
• No set amount for RAM
• Estimate minimum amounts of RAM and upgrade based on performance
• Networking
• Network Virtualization
Virtualization Hardware
• Storage
• Local storage on servers is limited
Advantages of Server Virtualization
• Reduce number of servers
• Reduce TCO
Types of Server Virtualization
• Guest operating systems run on another operating system known as the host operating
system.
• Each guest running in this manner is unaware of any other guests running on the
same host.
Types of Server Virtualization
1. Hypervisor
• A Hypervisor or VMM(virtual machine monitor) is a layer that exists between the operating system
and hardware.
• It provides the necessary services and features for the smooth running of multiple operating
systems.
• It traps and responds to privileged CPU instructions, and handles queuing, dispatching, and
returning the hardware requests.
• A host operating system also runs on top of the hypervisor to administer and manage the virtual
machines.
Types of Server Virtualization
2. Para Virtualization
• It is based on Hypervisor.
• The guest operating system is modified and recompiled before installation into the virtual
machine.
• Due to the modification in the Guest operating system, performance is enhanced as the
modified guest operating system communicates directly with the hypervisor and emulation
overhead is removed.
• Example: Xen primarily uses Paravirtualization, where a customized Linux environment is used
to support the administrative environment known as domain 0.
Types of Server Virtualization
Advantages:
• Easier
• Enhanced Performance
Limitations:
Types of Server Virtualization
3. Full Virtualization
• It is very similar to Paravirtualization.
• It can emulate the underlying hardware when necessary.
• The hypervisor traps the machine operations used by the operating system to perform I/O
or modify the system status.
• After trapping, these operations are emulated in software and the status codes are returned
very much consistent with what the real hardware would deliver.
• This is why an unmodified operating system is able to run on top of the hypervisor.
• Example: VMWare ESX server uses this method.
• A customized Linux version known as Service Console is used as the administrative operating
system.
• It is not as fast as Paravirtualization.
Types of Server Virtualization
Advantages:
Limitations:
• Complex
Types of Server Virtualization
4. Hardware-Assisted Virtualization
• Much of the hypervisor overhead due to trapping and emulating I/O operations and status
instructions executed within a guest OS is dealt with by relying on the hardware extensions of
the x86 architecture.
• Unmodified OS can be run as the hardware support for virtualization would be used to handle
hardware access requests, privileged and protected operations, and to communicate with the
virtual machine.
Types of Server Virtualization
• Examples: AMD-V (Pacifica) and Intel VT (Vanderpool) provide hardware support for
virtualization.
• Advantages:
• Limitations:
Types of Server Virtualization
5. Kernel level Virtualization
• Instead of using a hypervisor, it runs a separate version of the Linux kernel and sees the associated virtual
machine as a user-space process on the physical host.
• A device driver is used for communication between the main Linux kernel and the virtual machine.
• A slightly modified QEMU process is used as the display and execution containers for the virtual machines.
• Examples: User-Mode Linux (UML) and Kernel Virtual Machine (KVM)
Types of Server Virtualization
Advantages:
Limitations:
Types of Server Virtualization
6. System Level or OS Virtualization
• Runs multiple but logically distinct environments on a single instance of the operating system kernel.
• Also called shared kernel approach as all virtual machines share a common kernel of host operating system.
• The kernel uses root filesystems to load drivers and perform other early-stage system initialization tasks.
• It then switches to another root filesystem using chroot command to mount an on-disk file system as its final root
filesystem and continue system initialization and configuration within that file system.
Types of Server Virtualization
• The chroot mechanism of system-level virtualization is an extension of this concept.
• It enables the system to start virtual servers with their own set of processes that execute
relative to their own filesystem root directories.
• The main difference between system-level and server virtualization is whether different
operating systems can be run on different virtual systems.
• If all virtual servers must share the same copy of the operating system it is system-level
virtualization and if different servers can have different operating systems ( including different
versions of a single operating system) it is server virtualization.
Types of Server Virtualization
Advantages:
• Significantly more lightweight than complete machines (including a kernel)
• Can host many more virtual servers
• Enhanced Security and isolation
• Virtualizing an operating system usually has little to no overhead.
• Live migration is possible with OS Virtualization.
• It can also leverage dynamic container load balancing between nodes and clusters.
• On OS virtualization, the file-level copy-on-write (CoW) method is possible, making it easier to
back up data, more space-efficient, and easier to cache than block-level copy-on-write
schemes.
Limitations:
• Kernel or driver problems can take down all virtual servers.
Brief History of the x86 Architecture
• The x86 architecture has roots that reach back to 8‐bit processors built by Intel in the late 1970s.
• As manufacturing capabilities improved and software demands increased, Intel extended the 8‐bit architecture
to 16 bits with the 8086 processor.
• Later still, with the arrival of the 80386 CPU in 1985, Intel extended the architecture to 32 bits.
• Intel calls this architecture IA‐32, but the vendor‐neutral term x86 is also common.
• Over the following two decades, the basic 32‐bit architecture remained the same, although successive
generations of CPUs added many new features, including an on‐chip floating point unit, support for large
physical memories through physical address extension (PAE), and vector instructions.
• In 2003, AMD introduced a 64‐bit extension to the x86 architecture, initially dubbed AMD64, and began
shipping 64‐bit Opteron CPUs in 2004.
• Later in 2004, Intel announced its own 64‐bit architectural extension of IA‐32, calling it IA‐32e and later also
EM64T.
• The AMD and Intel 64-bit extensions are extremely similar, although they differ in some minor ways, one of
which is crucial for virtualization.
x86 Hardware Virtualization
• Microsoft Virtual Server (2005)
• Came with Microsoft Server 2003
• Did not scale well with 64 bit systems
• Replaced by Hyper-V
Introduction to hyper-converged infrastructure (HCI)
• With VMM 2022, we can manage Azure Stack HCI, 21H2 clusters.
• Azure Stack HCI, version 21H2 is the newly introduced hyper-converged infrastructure (HCI) Operating system
that runs on on-premises clusters with virtualized workloads.
• Most of the operations to manage Azure Stack clusters in VMM are similar to managing Windows Server
clusters.
• Azure Stack HCI is Microsoft’s premier hypervisor offering for running virtual machines on-premises. For
testing and evaluation purposes Azure Stack HCI includes a 60-day free trial and can be downloaded here:
https://azure.microsoft.com/en-us/products/azure-stack/hci/hci-download/
• Microsoft Hyper-V Server 2019 will continue to be supported under its lifecycle policy until January 2029, see
this link for additional information: https://docs.microsoft.com/en-us/lifecycle/products/hyperv-server-2019
Virtualization Software
• VMware (Company)
• Releases most popular line of virtualization software
• First company to utilize virtualization on x86 machines
• Software runs on Linux, Windows, and Mac
• VMware Server
• Free
• Not as powerful as ESX or ESXi
Introduction to virtualization and resource management in IaaS
The Rise of Resource Overcommitment
• Most applications will never use all the resources allocated to them at all times, so a
provider can safely allocate more virtual resources than the hardware physically has.
Resource Management in IaaS
• Resource management is an indispensable way to make use of the underlying hardware of
the cloud effectively.
• A resource manager oversees physical resources allocation to the virtual machines deployed
on a cluster of nodes in the cloud.
• The resource management systems have differing purposes depending upon the
requirements.
• Using fewer physical machines reduces operational costs, which can be accomplished
through the overcommitment of resources.
• However, resource overcommitment comes with new challenges such as removal of the
hotspot and the dilemma of where to schedule new incoming VMs to reduce the chances of
the hotspot.
Mitigating the Challenge of Hotspot
• Ballooning can be used if a VM is low on memory to take away some memory from one guest
on the same host, which has some free memory, and provide it to the needy guest.
• But, if none of the guests have enough free memory, then most of the time, the host is
overloaded.
• In that case, a guest has to be migrated from the current host to a different host while
keeping an account of the complete load of the cluster.
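The mitigation order described above, ballooning first and migration as a last resort, can be sketched as follows; the host/VM structures and thresholds are illustrative:

```python
# Sketch of hotspot mitigation: try ballooning a guest with free memory first,
# migrate a VM to another host only if no guest can give up memory.
# Host/VM structures and the 256 MB threshold are illustrative.

def mitigate_hotspot(hosts, hot_host):
    donors = [vm for vm in hot_host["vms"] if vm["free_mb"] > 256]
    if donors:
        donor = max(donors, key=lambda vm: vm["free_mb"])
        return ("balloon", donor["name"])        # reclaim memory from the donor
    # No guest has spare memory: migrate the smallest VM to the least-loaded host.
    victim = min(hot_host["vms"], key=lambda vm: vm["used_mb"])
    target = min((h for h in hosts if h is not hot_host),
                 key=lambda h: sum(vm["used_mb"] for vm in h["vms"]))
    return ("migrate", victim["name"], target["name"])

hot = {"name": "hostA", "vms": [{"name": "vm1", "free_mb": 0, "used_mb": 4096},
                                {"name": "vm2", "free_mb": 0, "used_mb": 1024}]}
cool = {"name": "hostB", "vms": []}
print(mitigate_hotspot([hot, cool], hot))  # ('migrate', 'vm2', 'hostB')
```

A real resource manager would also account for migration cost and the cluster-wide load mentioned above before choosing a target host.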
How does PaaS compare to internally hosted development environments?
• PaaS provides, as a managed service:
• Development tools
• Middleware
• Operating systems
• Database management
• Infrastructure
• Different vendors may include other services as well, but these are the core PaaS services.
Development tools in PaaS?
• PaaS vendors offer a variety of tools that are necessary for software development, including a
source code editor, a debugger, a compiler, and other essential tools.
• These tools may be offered together as a framework.
• The specific tools offered will depend on the vendor, but PaaS offerings should include
everything a developer needs to build their application.
Middleware
• Platforms offered as a service usually include middleware, so that developers don't have to
build it themselves.
• Middleware is software that sits in between user-facing applications and the machine's
operating system; for example, middleware is what allows software to access input from the
keyboard and mouse.
• Middleware is necessary for running an application, but end users don't interact with it.
Development tools in PaaS?
Operating systems
A PaaS vendor will provide and maintain the operating system that developers work on and the
application runs on.
Databases
• PaaS providers administer and maintain databases.
• They will usually provide developers with a database management system as well.
Infrastructure
• PaaS is the next layer up from IaaS in the cloud computing service model, and everything
included in IaaS is also included in PaaS.
• A PaaS provider either manages servers, storage, and physical data centers, or purchases
them from an IaaS provider.
107
Why do developers use PaaS?
Faster time to market
• PaaS is used to build applications more quickly than would be possible if developers had to
worry about building, configuring, and provisioning their own platforms and backend
infrastructure.
• With PaaS, all they need to do is write the code and test the application, and the vendor
handles the rest.
• PaaS permits developers to build, test, debug, deploy, host, and update their applications all
in the same environment.
• This enables developers to be sure a web application will function properly as hosted before
they release, and it simplifies the application development lifecycle.
108
Price
• In many cases, PaaS is more cost-effective than IaaS.
• Overhead is reduced because PaaS customers don't need to manage and provision virtual
machines.
• In addition, some providers have a pay-as-you-go pricing structure, in which the vendor only
charges for the computing resources used by the application, usually saving customers
money.
• However, each vendor has a slightly different pricing structure, and some platform providers
charge a flat fee per month.
Ease of licensing
PaaS providers handle all licensing for operating systems, development tools, and everything
else included in their platform.
109
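The pay-as-you-go versus flat-fee trade-off described above is easy to quantify. The hourly rate and flat fee below are invented for illustration; real vendor pricing varies widely.

```python
def monthly_cost(compute_hours, rate_per_hour=0.05, flat_fee=100.0):
    """Compare hypothetical pay-as-you-go vs flat-fee PaaS pricing."""
    payg = compute_hours * rate_per_hour
    return {"pay_as_you_go": payg,
            "flat_fee": flat_fee,
            "cheaper": "pay_as_you_go" if payg < flat_fee else "flat_fee"}

# A light workload (500 compute-hours) favors pay-as-you-go; a heavy
# workload (3000 compute-hours) favors the flat monthly fee.
light = monthly_cost(500)
heavy = monthly_cost(3000)
```

The break-even point is simply `flat_fee / rate_per_hour` compute-hours, which is why usage-based pricing usually saves money for small or bursty applications.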
What are the potential drawbacks of using PaaS?
Vendor lock-in
• It may become hard to switch PaaS providers, since the application is built using the vendor's
tools and specifically for their platform.
• Different vendors may not support the same languages, libraries, APIs, architecture, or
operating system used to build and run the application.
• To switch vendors, developers may need to either rebuild or heavily alter their application.
110
What are the potential drawbacks of using PaaS?
Vendor dependency
• The effort and resources involved in changing PaaS vendors may make companies more
dependent on their current vendor.
• A small change in the vendor's internal processes or infrastructure could have a huge impact
on the performance of an application designed to run efficiently on the old configuration.
• Additionally, if the vendor changes their pricing model, an application may suddenly become
more expensive to operate.
112
What are the potential drawbacks of using PaaS?
Security and compliance challenges
In a PaaS architecture, the external vendor will store most or all of an application's data, along
with hosting its code.
In some cases the vendor may actually store the databases via a further third party, an IaaS
provider.
Though most PaaS vendors are large companies with strong security in place, this makes it
difficult to fully assess and test the security measures protecting the application and its data.
In addition, for companies that have to comply with strict data security regulations, verifying the
compliance of additional external vendors will add more hurdles to going to market.
113
SaaS
• Software-as-a-service (SaaS), also known as cloud application services, is the most
comprehensive form of cloud computing services, delivering an entire application that is
managed by a provider, via a web browser.
• Software updates, bug fixes, and general software maintenance are handled by the provider
and the user connects to the app via a dashboard or API.
• There’s no installation of the software on individual machines and group access to the
program is smoother and more reliable.
• You’re already familiar with a form of SaaS if you have an email account with a web-based
service like Outlook or Gmail, for example, as you can log into your account and get your
email from any computer, anywhere.
114
SaaS
• SaaS is a great option for small businesses that don’t have the staff or bandwidth to handle
software installation and updates, as well as for applications that don’t require much
customization or that will only be used periodically.
• What SaaS saves you in time and maintenance, however, it could cost you in control, security,
and performance, so it’s important to choose a provider you can trust.
• Dropbox, Salesforce, Google Apps, and Red Hat Insights are some examples of SaaS.
115
Cloud Computing
116
Cloud Computing (continued)
117
Hyperscale Infrastructure is the enabler
27 Regions Worldwide, 22 ONLINE…huge capacity around the world…growing every year
[World map of Azure regions:]
North Central US (Illinois); Central US (Iowa); South Central US (Texas); West US (California); East US (Virginia); East US 2 (Virginia); US Gov Iowa; US Gov Virginia; Canada Central (Toronto); Canada East (Quebec City); Brazil South (Sao Paulo State); North Europe (Ireland); West Europe (Netherlands); United Kingdom regions; Germany Central (Frankfurt); Germany North East (Magdeburg); India Central (Pune); India South (Chennai); India West (Mumbai); China North* (Beijing); China South* (Shanghai); Japan East (Tokyo, Saitama); Japan West (Osaka); East Asia (Hong Kong); SE Asia (Singapore); Australia East (New South Wales); Australia South East
• For VMs (IaaS only) that are stopped (deallocated) in Microsoft Azure, only storage charges apply
119
Microsoft Azure Compute
120
Microsoft Azure App Service
• App Service – fully managed platform in Azure for web, mobile and integration scenarios.
This includes
• Web Apps – Enterprise grade web applications
• API Apps – API apps in Azure App Service are used to develop, publish, manage, and
monetize APIs.
• Mobile Apps – Build native apps for iOS, Android, and Windows, or cross-platform apps with Xamarin or Cordova (PhoneGap)
• Logic Apps (preview) - Allows developers to design workflows that articulate intent via a
trigger and series of steps, each invoking an App Service API app
121
Microsoft Azure Cloud Services
• Role – a configuration passed to Azure to tell Azure how many machines of which size and
configuration to build for you
• Web Role – Virtual machine with IIS installed
• Worker Role – Virtual machine without IIS installed
• Ability to mix together multiple role configurations within a single Cloud Service
• Package – Source code binaries are packaged and sent with the configuration file to Azure
• Highly scalable – can exceed number of machines capability of App Service Web Apps
• Allows RDP into individual VMs
• Cloud Services are also used to contain IaaS virtual machines (Classic)
122
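The role mix and package described above were declared in a ServiceDefinition.csdef file shipped alongside the binaries. The sketch below is illustrative only: the service name, role names, VM size, and endpoint are invented, and the exact attributes should be checked against the Azure Cloud Services (classic) schema documentation.

```xml
<ServiceDefinition name="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <!-- Web role: a VM with IIS installed, serving HTTP on port 80 -->
  <WebRole name="WebRole1" vmsize="Small">
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
  <!-- Worker role: a VM without IIS, for background processing -->
  <WorkerRole name="WorkerRole1" vmsize="Small" />
</ServiceDefinition>
```

Azure reads this definition, plus a companion configuration file giving the instance count per role, to decide how many machines of which size to build.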
High Level view of Virtual Machine Services
• Compute resources
• Virtual Machines
• VM Extensions
• Storage Resources
• Blobs, tables, queues and Files functionality
• Storage accounts (blobs) – Standard & Premium Storage
• Networking Resources
• Virtual networks
• Network interface cards (NICs)
• Load balancers
• IP addresses
• Network Security Groups
123
Management model for PaaS/IaaS
124
Introduction to Network Function Virtualization (NFV/VNF)
Overview
1. What is NFV?
2. Why do we need NFV?
3. Concepts, Architecture, Requirements
125
Four Innovations of NFV
126
Network Function Virtualization (NFV)
1. Fast standard hardware
2. Software-based devices: routers, firewalls, Broadband Remote Access Servers (BRAS), virtual base stations, residential set-top boxes, DNS, DHCP, CDN, gateways, and NAT boxes — a.k.a. the white-box implementation, replacing dedicated LTE/3G/2G hardware appliances
Ref: ETSI, “NFV – Update White Paper V3,” Oct 2014, http://portal.etsi.org/NFV/NFV_White_Paper3.pdf (Must read)
127
NFV (Cont.)
3. Virtual Machine implementation: virtual appliances run as VMs on a hypervisor that partitions the hardware, with all the advantages of virtualization (quick provisioning, scalability, mobility, reduced CapEx, reduced OpEx, …)
4. Standard APIs: a new ISG (Industry Specification Group) in ETSI (European Telecommunications Standards Institute), set up in November 2012
128
Why do we need NFV?
1. Virtualization: Use network resources without worrying about where they are
physically located, how much there is, how they are organized, etc.
2. Orchestration: Manage thousands of devices
3. Programmability: Be able to change behavior on the fly
4. Dynamic scaling: Be able to change size and quantity
5. Automation
6. Visibility: Monitor resources and connectivity
7. Performance: Optimize network device utilization
8. Multi-tenancy
9. Service integration
10. Openness: Full choice of modular plug-ins
Note: These are exactly the same reasons why we need SDN.
129
VNF
• NFV Infrastructure (NFVI): the hardware and software required to deploy, manage, and execute VNFs
• Network Function (NF): a functional building block with well-defined interfaces and well-defined functional behavior
• Container: a VNF is independent of the NFVI, but needs container software on the NFVI to be able to run on different hardware
[Diagram: a VNF running in a container on top of the NFVI]
130
NFV Concepts
• Container types: related to computation, networking, and storage
• VNF Set: connectivity between VNFs is not specified, e.g., residential gateways
• VNF Forwarding Graph: a service chain, used when the order of network connectivity matters, e.g., firewall → NAT → load balancer
[Diagram: a chain VNFC 1 → VNFC 2 → VNFC 3, and a load balancer fanning out to multiple VNFC 1 instances]
131
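A VNF forwarding graph is essentially an ordered composition of functions over traffic. The toy chain below (function names, packet fields, and addresses are invented) shows why the order matters: the firewall must see the original source address before NAT rewrites it.

```python
def firewall(pkt, blocked=frozenset({"10.0.0.99"})):
    """Drop packets from blocked source addresses."""
    return None if pkt["src"] in blocked else pkt

def nat(pkt, public_ip="203.0.113.1"):
    """Rewrite private source addresses to the public IP."""
    return {**pkt, "src": public_ip}

def load_balancer(pkt, backends=("srv1", "srv2")):
    """Pick a backend by hashing the flow's source address."""
    return {**pkt, "dst": backends[hash(pkt["src"]) % len(backends)]}

def forward(pkt, chain):
    """Apply a VNF forwarding graph (an ordered service chain)."""
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:        # dropped somewhere in the chain
            return None
    return pkt

# The order firewall -> NAT -> load balancer is part of the service.
chain = [firewall, nat, load_balancer]
```

Reordering the chain (e.g., NAT before firewall) would hide the real source address from the firewall, which is exactly the "connectivity order is important" point above.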
NFV Architecture
[Diagram: NFV reference architecture, showing execution reference points, main NFV reference points, and other NFV reference points]
132
NFV Framework Requirements
1. General: Partial or full Virtualization, Predictable performance
5. Resiliency: Be able to recreate VNFs after failure, with specified packet loss rate, call
drops, time to recover, etc.
133
NFV Framework Requirements (Cont)
7. Service Continuity: Seamless or non-seamless continuity after failures or migration
8. Service Assurance: Time-stamp and forward copies of packets for fault detection
11. Service Models: Operators may use NFV infrastructure operated by other operators
134
Any Function Virtualization (FV)
• Network function virtualization is of interest to network service providers
• But the same concept can be used by any other industry, e.g., the financial industry, banks, stock
brokers, retailers, mobile games, …
• Example: a virtual IP Multimedia System (vIMS)
136
Summary
1. NFV aims to reduce OpEx through the automation and scalability gained by implementing
network functions as virtual appliances
2. NFV allows all benefits of virtualization and cloud computing including orchestration, scaling,
automation, hardware independence, pay-per-use, fault-tolerance, …
3. NFV and SDN are independent and complementary. You can do either or both.
4. NFV requires standardization of reference points and interfaces to be able to mix and match
VNFs from different sources
138
• Type-1 Hypervisor
• Windows Sandbox – lightweight, isolated, temporary virtual machine environment
• Hyper-V – hypervisor for bare-metal virtualisation, running Hyper-V virtual machines
• Type-2 Hypervisor
• Oracle VirtualBox
• Ubuntu 22.04 LTS (Jammy Jellyfish) – with KVM virtualization, running a Debian distro
inside it with VMM for live monitoring – nested paging, KVM paravirtualization
• Ubuntu 23.04 (Lunar Lobster) – to save the state of the virtual machines – nested paging,
KVM paravirtualization
• Storage virtualisation – Oracle Database 23c VM appliance for data persistence – nested
paging, PAE/NX, KVM paravirtualization
139
Text and References
T1: Mastering Cloud Computing: Foundations and Applications Programming, by Rajkumar Buyya, Christian Vecchiola, and S. Thamarai Selvi
R1: Moving to the Cloud: Developing Apps in the New World of Cloud Computing, 1st Edition, by Dinkar Sitaram and Geetha Manjunath
140
Cloud Computing
CS-3
Cloud Computing 3
x86 Hardware Virtualization
5
Full Virtualization
6
Recap
• What is Virtualization?
• Virtualization is the creation of a virtual resource or device where the framework
divides the resource into one or more execution environments
• Examples of Virtualization
• Virtual drives
• Virtual memory
• Virtual machines
• Virtual servers
• Why is it popular?
• Types of Virtualization
7
Brief History of the x86 Architecture
• The x86 architecture has roots that reach back to 8‐bit processors built by Intel in the late 1970s.
• As manufacturing capabilities improved and software demands increased, Intel extended the 8‐bit architecture
to 16 bits with the 8086 processor.
• Later still, with the arrival of the 80386 CPU in 1985, Intel extended the architecture to 32 bits.
• Intel calls this architecture IA‐32, but the vendor‐neutral term x86 is also common.
• Over the following two decades, the basic 32‐bit architecture remained the same, although successive
generations of CPUs added many new features, including an on‐chip floating point unit, support for large
physical memories through physical address extension (PAE), and vector instructions.
• In 2003, AMD introduced a 64‐bit extension to the x86 architecture, initially dubbed AMD64, and began
shipping 64‐bit Opteron CPUs in 2004.
• Later in 2004, Intel announced its own 64‐bit architectural extension of IA‐32, calling it IA‐32e and later also
EM64T.
• The AMD and Intel 64‐bit extensions are extremely similar, although they differ in some minor ways, one of
which is crucial for virtualization
8
x86 Hardware Virtualization
• Microsoft Virtual Server (2005)
• Ran on Windows Server 2003
• Did not scale well on 64-bit systems
• Replaced by Hyper-V
9
Introduction to hyper-converged infrastructure (HCI)
• With VMM 2022, we can manage Azure Stack HCI, version 21H2 clusters.
• Azure Stack HCI, version 21H2 is the newly introduced hyper-converged infrastructure (HCI) Operating system
that runs on on-premises clusters with virtualized workloads.
• Most of the operations to manage Azure Stack clusters in VMM are similar to managing Windows Server
clusters.
• Azure Stack HCI is Microsoft’s premier hypervisor offering for running virtual machines on-premises. For
testing and evaluation purposes Azure Stack HCI includes a 60-day free trial and can be downloaded here:
https://azure.microsoft.com/en-us/products/azure-stack/hci/hci-download/
• Microsoft Hyper-V Server 2019 will continue to be supported under its lifecycle policy until January 2029, see
this link for additional information: https://docs.microsoft.com/en-us/lifecycle/products/hyperv-server-2019
10
Virtualization Software
• VMware (Company)
• Releases most popular line of virtualization software
• First company to utilize virtualization on x86 machines
• Software runs on Linux, Windows, and macOS
• VMware Server
• Free
• Not as powerful as ESX or ESXi
11
Introduction to virtualization and resource management in IaaS
The Rise of Resource Overcommitment
• Most applications never use all of the resources allocated to them at all times, so providers can safely overcommit physical resources.
12
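Because guests rarely use their full allocation simultaneously, a host can promise more memory than it physically has. A quick illustration, with all numbers hypothetical:

```python
def overcommit_ratio(allocations_mb, physical_mb):
    """Ratio of memory promised to guests vs memory actually present."""
    return sum(allocations_mb) / physical_mb

# Four VMs are each promised 4 GB on an 8 GB host: 2x overcommitment.
ratio = overcommit_ratio([4096] * 4, 8192)
# This works as long as the combined *actual* usage stays under 8 GB;
# if it does not, the host becomes a hotspot.
```

The resource manager's job is to pick an overcommit ratio high enough to cut hardware costs but low enough that hotspots stay rare.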
Resource Management in IaaS
• Resource management is an indispensable way to make use of the underlying hardware of
the cloud effectively.
• A resource manager oversees the allocation of physical resources to the virtual machines deployed
on a cluster of nodes in the cloud.
• The resource management systems have differing purposes depending upon the
requirements.
• Using fewer physical machines reduces operational costs, and this can be accomplished through the
overcommitment of resources.
• However, resource overcommitment brings new challenges, such as removing hotspots and the dilemma of where to schedule new incoming VMs to reduce the chance of a hotspot forming.
13
Resource Management in IaaS
14
Mitigating the Challenge of Hotspot
• If a VM is low on memory, ballooning can reclaim some memory from another guest on the same host that has free memory and give it to the needy guest.
• If none of the guests has enough free memory, the host itself is usually overloaded.
• In that case, a guest has to be migrated from the current host to a different host, taking the overall load of the cluster into account.
15
Thank You !
53
Cloud Computing
CS-4
Cloud Computing 4
Introduction to IaaS
(Video: Cloud Computing Services Models – IaaS PaaS SaaS Explained, YouTube)
BITS Pilani
Storage as a Service (StaaS)
Storage as a Service is one of the two major services offered by IaaS. It includes:
• Simple storage services consisting of highly reliable and available storage. Example: Amazon S3
• Simple and relational database services. Example: Amazon SimpleDB and RDS (Relational Database Service), which provides a MySQL instance over the cloud
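To make Storage as a Service concrete, here is a toy in-memory model of an S3-style object store (buckets holding key → object data). The class, bucket, and key names are invented, and it mimics only the basic put/get/list semantics; the real Amazon S3 API adds durability, access control, versioning, and much more.

```python
class ToyObjectStore:
    """In-memory sketch of S3-style Storage-as-a-Service semantics."""

    def __init__(self):
        self._buckets = {}

    def create_bucket(self, bucket):
        self._buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, data: bytes):
        self._buckets[bucket][key] = data

    def get_object(self, bucket, key) -> bytes:
        return self._buckets[bucket][key]

    def list_objects(self, bucket, prefix=""):
        """List keys in a bucket, optionally filtered by prefix."""
        return sorted(k for k in self._buckets[bucket] if k.startswith(prefix))

# e.g., an online photo album stored as objects in a bucket
store = ToyObjectStore()
store.create_bucket("photos")
store.put_object("photos", "2024/cat.jpg", b"...")
```

The flat bucket/key model is what lets such services scale: there is no filesystem hierarchy to coordinate, only independent objects addressed by name.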
Data Storage Needs
Data storage requirements are ever increasing in the enterprise/industry. Both:
• Structured data, like relational databases, which are vital for e-commerce businesses
• Unstructured data in various documents (plans, strategy, etc.), which requires huge storage even in a small company
Enterprises may also have to store objects for their customers, e.g., an online photo album.
The data also needs to be protected: both security and availability must be provided on demand, in spite of various hardware, network, and software failures.
7
Compute as a Service
This is the second of the two major services provided by IaaS.
• It makes extensive use of virtualization techniques to provide the computing resources requested by the user
• Typically one or more virtual computers (networked together) are provided to the user
• These can be increased or decreased as needed from time to time
• Sudden increases in traffic can therefore be handled
8
IaaS model
[Diagram: the IaaS model, combining StaaS and Compute as a Service]
Infrastructure as a Service (IaaS)
Types of IaaS resources
Compute
Cloud compute resources include the central processing units
(CPUs), graphics processing units (GPUs), and memory (RAM)
that computers require to perform any task.
Networking
IaaS infrastructure also includes networking resources like
routers, switches, and load balancers.
Benefits of IaaS:
✓ Speed
✓ Performance
✓ Reliability
✓ Backup and Recovery
✓ Competitive Pricing
Security and compliance responsibilities are shared
under the IaaS model.
IaaS providers take full responsibility for securing the infrastructure they provide
for your cloud applications. They manage security at all levels, such as:
❑ Physical security of the data center premises using measures like security
cameras, guards, and surveillance.
❑ Data security with very strict controls, encryption, and third-party auditing to
meet all compliance requirements.
Key IaaS Services
High Performance Computing (HPC): The platform can execute calculations in
the quadrillions per second, versus a computer with a 3 GHz chip that processes
three billion calculations per second.
Depending on the business needs, HPC services are available to meet the
requirements of scientific research or an engineering firm’s complex
calculations. This is a force multiplier when it comes to calculations; the faster
calculations are completed, the less an organization pays for required services.
Key IaaS Services
Edge Computing: The platform is colocated near a business, a user, or a data
source for faster and more reliable services. Edge computing offers a business a
hybrid-like solution, with IaaS services colocated with the business or with the
data being processed.
By placing edge computing closer to the using entity, data is processed faster,
offering more flexibility with hardware and software configurations. This also
increases reliability.
Key IaaS Services
Bare metal services: A single-tenant server that is managed by the tenant to
meet their specific needs. The OS is installed directly onto the server for better
performance. Bare metal services are generally used by healthcare providers,
financial institutions, and retail businesses.
Bare metal services can be deployed in a business data center that uses the
service, or a colocation data center. Businesses that use this type of service must
meet stringent requirements for regulatory compliance, privacy, and security.
Key IaaS Services
Resource auto-scaling: An automated process that occurs in IaaS as client
requests or transactions increase or decrease. Resource auto-scaling is a must-
have feature that allows the IaaS-provided services to adjust automatically to a
business’s on-demand needs.
Businesses using a private cloud have full control over their hardware
and software choices, and they have the ability to customize their
hardware or software — unlike when using a public IaaS provider. IBM
uses Kubernetes to extend its cloud applications to public cloud service
providers and automatically manages the RAM, storage, and CPU usage
as necessary.
Features of the Best IaaS Providers
Ultimately, the IaaS provider you select will have differentiating
features that provide your business an optimal IaaS solution for your
specific needs. However, every IaaS provider you consider must offer
these basic services:
❑ Try to identify a key feature that will address all the input
received from each department.
Amazon Simple Storage Service (S3)
• This is highly reliable, scalable, available and
fast storage in the cloud for storing and
retrieving data using simple web services.
• There are three ways of accessing S3
• AWS(Amazon Web Service) console
• REST-ful APIs with HTTP operations like
GET, PUT, DELETE and HEAD
• Libraries and SDKs that abstract these
operations
• There are several S3 browsers available to
access the storage and use it as though it
were a local directory/folder.
Using Amazon S3
Let’s consider that a user wants to back up
(upload) some data for later need.
1. Sign up for S3 at http://aws.amazon.com/s3/
to get AWS access and secret keys, similar to
a user-id and password. (Note: these keys are
for the complete Amazon solution, not just S3.)
2. Use these credentials to sign in to the AWS
Management Console:
http://console.aws.amazon.com/s3/home
3. Create a bucket, giving it a name and a
geographical location. (Buckets store
objects/files.)
Using Amazon S3 Contd..
4. Press the upload button and follow the
instructions to upload the file/object.
5. Now the file is backed up and is available for
use/sharing.
Buckets, objects and keys
Files are objects in S3. Objects are referred to
by keys – an optional directory path name
followed by the object name. Objects are
replicated in multiple places across geographical
locations to protect against failures, but
consistency is not guaranteed unless versioning
is enabled.
Objects can be up to 5 terabytes in size, and a
bucket can hold an unlimited number of objects.
Objects have to be stored in buckets, which have
unique names and a location (region) associated
with them. By default there can be 100 buckets
per account.
Accessing objects in S3
Each object can be accessed by its key via the
corresponding URL:
http://<bucketname>.s3.amazonaws.com/<key>
or
http://s3.amazonaws.com/<bucketname>/<key>
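The two URL styles above are easy to construct programmatically. A minimal Python sketch follows; the bucket and key names are invented for illustration, and note that modern S3 endpoints are usually regional (e.g. s3.<region>.amazonaws.com), while the forms below mirror the ones shown on the slide:

```python
def s3_urls(bucket: str, key: str):
    """Return (virtual-hosted-style, path-style) URLs for an S3 object."""
    virtual = f"https://{bucket}.s3.amazonaws.com/{key}"
    path = f"https://s3.amazonaws.com/{bucket}/{key}"
    return virtual, path

# Hypothetical bucket/key, e.g. from the photo-album backup example:
v, p = s3_urls("my-backups", "2023/album/photo1.jpg")
# v -> "https://my-backups.s3.amazonaws.com/2023/album/photo1.jpg"
# p -> "https://s3.amazonaws.com/my-backups/2023/album/photo1.jpg"
```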
Accessing private objects in S3
Users can set permissions for others by right-
clicking the object in the AWS console and
granting anonymous read permission – for
example, static read access for a web site.
Alternatively, they can select the object, go to
the object menu and click the “Make Public”
option.
They can give specific users permission to
read/modify the object by clicking the
“Properties” option and entering the email ids of
those who are allowed to access/read/write.
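Beyond console permissions, shared access to a private object is typically granted with a signed, expiring URL. The sketch below is a simplified illustration of that idea using a plain HMAC; it is not Amazon's actual signing scheme (real S3 presigned URLs use Signature Version 4), and the key and bucket names are made up:

```python
import hashlib
import hmac
import time

def presign(secret_key, bucket, key, expires_in, now=None):
    """Return a URL carrying an expiry timestamp and an HMAC signature."""
    expiry = (now if now is not None else int(time.time())) + expires_in
    to_sign = f"GET\n/{bucket}/{key}\n{expiry}"
    sig = hmac.new(secret_key.encode(), to_sign.encode(),
                   hashlib.sha256).hexdigest()
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?Expires={expiry}&Signature={sig}")

def verify(secret_key, bucket, key, expiry, sig, now):
    """Accept the request only if the signature matches and has not expired."""
    to_sign = f"GET\n/{bucket}/{key}\n{expiry}"
    expected = hmac.new(secret_key.encode(), to_sign.encode(),
                        hashlib.sha256).hexdigest()
    return now < expiry and hmac.compare_digest(expected, sig)
```

Anyone holding such a URL can fetch the object until the expiry time; after that, or if the signature does not match, the service rejects the request.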
S3 access security contd..
• Users can allow others to add objects to, or
pick up objects from, their buckets. This is
especially useful when clients want some
document to be modified.
• Clients can put the document/object in a bucket
for modification and, after it is modified, collect
it back from the same or another bucket. If the
object is put back in the same bucket, its key is
changed to differentiate the modified version
from the earlier one.
S3 access security contd..
There is yet another way to ensure the security
of S3 objects: the user can turn logging on for a
bucket at the time of its creation, or do it later
from the AWS Management Console.
Data protection
• One way of ensuring against loss of data is to
create replicas across multiple storage devices;
the default mechanism can survive even two
concurrent replica failures.
• Users can request RRS – Reduced Redundancy
Storage – for non-critical data, under which
only two replicas are created.
• S3 does not guarantee consistency of data
across replicas. Versioning, when enabled, can
guard against inadvertent data loss and also
makes it possible to revert to a previous version.
Large objects
• S3 objects can be up to 5 terabytes, which is
more than the size of an uncompressed 1080p
HD movie.
• If the need for still larger storage arises, the
user has to split the data into smaller chunks,
store them separately and re-compose them at
the application level.
• Uploading large objects takes time in spite of
large bandwidth; moreover, if a failure occurs,
the whole upload has to be repeated.
Uploading large objects
• To get over this difficulty, multipart upload is
used. This elegant solution not only splits the
object into multiple parts (up to 10,000 parts
per object in S3) that are uploaded
independently, but also uses the network
bandwidth optimally by parallelizing the
uploads.
• Since the uploads of the parts are independent,
a failure in any one part can be rectified by
repeating only that part’s upload, thereby a
tremendous saving of time!
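The part accounting can be sketched as a small calculator. The 10,000-part limit comes from the slide; the 5 MiB minimum part size and the 100 MiB default part size are assumptions for illustration:

```python
import math

MAX_PARTS = 10_000        # S3 allows up to 10,000 parts per object
MIN_PART = 5 * 1024**2    # assumed minimum part size (5 MiB)

def plan_parts(object_size: int, part_size: int = 100 * 1024**2):
    """Return (part_size, part_count) for a multipart upload."""
    # Grow the part size if the object would otherwise need > 10,000 parts.
    part_size = max(part_size, MIN_PART, math.ceil(object_size / MAX_PARTS))
    return part_size, math.ceil(object_size / part_size)

# A 5 TiB object (the S3 maximum) fits exactly within the limits:
size = 5 * 1024**4
ps, n = plan_parts(size)   # n == 10_000 parts
```

Because each part uploads independently, a failed part is simply retried on its own rather than restarting the whole transfer.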
Amazon SimpleDB (SDB)
• This is a simple NoSQL data store with a
key-value interface, which allows storage and
retrieval of attributes based on a key – a simple
alternative to a relational database.
• SDB is organized into domains. Each item in a
domain must have a unique key, provided at the
time of creation, and can have up to 256
attributes in the form of name-value pairs,
similar to a row with a primary key in an
RDBMS. But in SDB an attribute can be
multi-valued, with all the values stored together
against the same attribute name.
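The multi-valued attribute model can be sketched as a tiny in-memory structure; the item name and attribute values below are invented for illustration:

```python
from collections import defaultdict

class Domain:
    """Toy model of an SDB domain: item key -> attributes, where each
    attribute name maps to a *set* of values (multi-valued)."""
    def __init__(self):
        self.items = defaultdict(lambda: defaultdict(set))

    def put(self, item_key, name, value):
        self.items[item_key][name].add(value)

    def get(self, item_key, name):
        return self.items[item_key][name]

d = Domain()
d.put("song42", "artist", "A. R. Rahman")
d.put("song42", "genre", "film")
d.put("song42", "genre", "classical")   # multi-valued attribute
```

Unlike a relational row, "genre" here holds two values under the same attribute name, and a second item could carry entirely different attributes.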
SDB admin
• SDB has many features which increase its
reliability and availability:
• Automatic resource addition proportional to
the request rate
• Automatic indexing of attributes for quick
retrieval
• Automatic replication across different
locations (availability)
• Fields can be added to the dataset at any time,
since SDB is schema-less; this makes it
scalable
Amazon Relational Database Service (RDS)
• RDS is a traditional relational database
abstraction in the cloud – a MySQL instance.
• An RDS instance can be created using the
corresponding tab in the AWS Management
Console, which also allows the user to manage it.
• How often backups should happen, how long
backup data should be retained, etc. can be
configured, and snapshots of the DB can be
taken from time to time.
• Using the Amazon APIs, users can build a
custom tool to manage the data if needed.
Compute as a service – EC2
Amazon Elastic Compute Cloud (EC2) is a
service which allows an enterprise/user to have
virtual servers, with virtual storage and virtual
networking, to satisfy diverse needs:
• The needs of an enterprise vary between high
storage and/or high-end computing at different
times for different applications
• Networking/clustering needs, as well as
environment needs, also vary depending on the
work context
Amazon EC2
Amazon EC2 contd ..
• Next, a user has to create a key pair to
securely connect to the instance once it is
operative.
• Create a key pair and save it to a file in a
safe place. The user can reuse the same pair
for multiple instances.
• Now security groups for the instance need to be
set so that certain ports can be kept open or
blocked depending on the context.
• When the instance is launched you get the DNS
name of the server, which can be used for
remote login as if it were on the same network.
Accessing EC2
• Use the key pair to log in to the AWS console;
get the Windows admin password from the AWS
instance screen to remotely connect to the
instance/compute resource.
• For a Linux machine, from the directory where
the key file is saved, give the following command:
ssh -i <keyfile> ec2-67-202-62-112.compute-1.amazonaws.com
Follow a few confirmation screens and one is
logged into the compute resource remotely.
Accessing EC2 contd ..
EC2 computing resources request
• EC2 computing resources are requested in
terms of EC2 Compute Units (CU) of computing
power, much as we use bytes for memory.
• One EC2 CU provides roughly the capacity of a
1.0–1.2 GHz Xeon processor.
• There are some standard instance families
with configurations suitable for common needs,
hence recommended by Amazon.
• Also available are high-memory instances,
high-CPU instances, and cluster compute
instances for high-performance computing or
graphics processing.
Configuration of EC2 instance
• After getting the resources of the required CU,
one needs to configure the OS by selecting
from the available images –
AMIs (Amazon Machine Images).
Region and availability
Some more Configuration of EC2 instance
EC2 storage resources
There are two types of block storage available for
EC2 that appear as disk storage:
• Elastic Block Store (EBS) exists independently
of any instance. Its size can be configured and
the volume can then be attached to an EC2
instance. Its data is persistent.
• Instance storage is configured with an EC2
instance and can be attached to one and only
one instance. It is not persistent; it ceases to
exist when the instance is terminated. So if you
need persistence, use EBS or save the data to
S3.
EC2 Networking resources
• Networking between EC2 instances, and also
with the outside world via gateways/firewalls,
has to happen.
• EC2 instances therefore need both public and
private addresses.
• Private addresses are used for communication
within EC2, like an intranet, for any
communication between EC2 instances, since
these addresses can be resolved quickly using
NAT (Network Address Translation).
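Conceptually, the public/private mapping is a lookup table maintained by the NAT layer. A toy sketch follows; all the addresses are invented:

```python
# Toy NAT table mapping each instance's private address to its
# public address, the way EC2 resolves internal vs. external addressing.
nat_table = {
    "10.0.1.5": "54.23.101.7",    # hypothetical instance A
    "10.0.1.9": "54.23.101.42",   # hypothetical instance B
}

def to_public(private_ip: str) -> str:
    """Outbound: translate a private address to its public address."""
    return nat_table[private_ip]

def to_private(public_ip: str) -> str:
    """Inbound: translate a public address back to the private one."""
    reverse = {pub: priv for priv, pub in nat_table.items()}
    return reverse[public_ip]
```

Traffic between instances uses the private addresses directly; only traffic crossing the gateway is translated.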
EC2 N/W resources contd ..
Elastic IP address
IaaS Services
• detailed billing;
• monitoring;
• log access;
• security;
• load balancing;
• clustering; and
• storage resiliency, such as backup, replication and recovery.
How does IaaS work?
- IaaS customers access resources and services through a WAN, such
as the internet.
- Customers can use the cloud provider's services to install the
remaining elements of an application stack.
- Customers can create virtual machines (VMs), install operating
systems in each VM, deploy middleware such as databases, create
storage buckets for workloads and backups, and install enterprise
workloads into VMs.
- Providers offer services to track costs, monitor performance, balance
network traffic, troubleshoot application issues, and manage disaster
recovery.
- Cloud computing models require the participation of a provider.
- Third-party organizations specialize in selling IaaS, such as AWS and
GCP.
- A business may choose to deploy a private cloud and become its own
provider of infrastructure services.
Advantages of IaaS
• Organizations choose IaaS because it is often easier, faster and more cost-efficient to operate
a workload without having to buy, manage and support the underlying infrastructure. With
IaaS, a business can simply rent or lease that infrastructure from another business.
• IaaS is an effective cloud service model for workloads that are temporary, experimental or that
change unexpectedly. For example, if a business is developing a new software product, it
might be more cost-effective to host and test the application using an IaaS provider.
• Once the new software is tested and refined, the business can remove it from the IaaS
environment for a more traditional, in-house deployment. Conversely, the business could
commit that piece of software to a long-term IaaS deployment if the costs of a long-term
commitment are less.
• In general, IaaS customers pay on a per-user basis, typically by the hour, week or month.
Some IaaS providers also charge customers based on the amount of virtual machine space
they use. This pay-as-you-go model eliminates the capital expense of deploying in-house
hardware and software.
• When a business cannot use third-party providers, a private cloud built on premises can still
offer the control and scalability of IaaS -- though the cost benefits no longer apply.
Disadvantages of IaaS
• Despite its flexible, pay-as-you-go model, IaaS billing can be a
problem for some businesses. Cloud billing is extremely granular,
and it is broken out to reflect the precise usage of services. It is
common for users to experience sticker shock -- or finding costs to be
higher than expected -- when reviewing the bills for every resource
and service involved in application deployment. Users should monitor
their IaaS environments and bills closely to understand how IaaS is
being used and to avoid being charged for unauthorized services.
IaaS Use cases
• Testing and development environments. IaaS offers organizations flexibility when it comes to different
test and development environments. They can easily be scaled up or down according to need.
• Hosting customer-facing websites. This can make it more affordable to host a website, compared to
traditional means of hosting websites.
• Data storage, backup and recovery. IaaS can be the easiest and most efficient way for organizations to
manage data when demand is unpredictable or might steadily increase. Furthermore, organizations can
avoid the need for extensive efforts focused on the management, legal and compliance requirements of
data storage.
• Web applications. The infrastructure needed to host web apps is provided by IaaS. Therefore, if an
organization is hosting a web application, IaaS can provide the necessary storage resources, servers
and networking. Deployments can be made quickly, and the cloud infrastructure can be easily scaled up
or down according to the application's demand.
• High-performance computing (HPC). Certain workloads may demand HPC-level computing, such as
scientific computations, financial modeling and product design work.
• Data warehousing and big data analytics. IaaS can provide the necessary compute and processing
power to comb through big data sets.
Resource Virtualization
Server Virtualization
Server virtualization can be defined as the conversion of one physical server into several
individual & isolated virtual spaces that can be taken up by multiple users as per their
respective requirements.
• This virtualization is attained through a software application, thereby screening the actual
numbers and identity of physical servers.
TYPES OF SERVER VIRTUALIZATION
• Complete (full) virtualization
• Para-virtualization
• Operating System (OS) virtualization
While all three modes have one physical server acting as host and the virtual servers as
guests, each method allocates server resources differently to the virtual space.
Storage Virtualization
• Storage virtualization is the pooling of physical storage
from multiple network storage devices into what appears
to be a single storage device that is managed from a
central console.
Or
Storage virtualization is the process of grouping the
physical storage from multiple network storage devices so
that it looks like a single storage device.
• Storage virtualization is sometimes loosely referred to as cloud storage.
• Storage virtualization helps the storage administrator
perform the tasks of backup, archiving and recovery more
easily and in less time by disguising the actual complexity
of a storage area network (SAN).
Storage virtualization can be implemented by using
software applications or appliances.
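The core idea of the definitions above — many physical devices presented as one logical device — can be sketched as a block-address mapper; the device names and capacities are invented:

```python
class StoragePool:
    """Present several devices (name, capacity_in_blocks) as one
    contiguous logical block space, as storage virtualization does."""
    def __init__(self, devices):
        self.devices = devices  # ordered list of (name, n_blocks)

    def locate(self, logical_block: int):
        """Map a logical block number to (device_name, local_block)."""
        offset = logical_block
        for name, n_blocks in self.devices:
            if offset < n_blocks:
                return name, offset
            offset -= n_blocks
        raise IndexError("block beyond pool capacity")

# Two hypothetical devices pooled into one 150-block logical disk:
pool = StoragePool([("diskA", 100), ("diskB", 50)])
```

Clients address only the logical space; the pool hides which physical device actually holds each block, which is exactly the complexity the central console disguises.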
Storage Virtualization
There are three important reasons to implement storage virtualization:
1. Improved storage management in a heterogeneous IT environment
2. Better availability and estimation of down time with automated
management
3. Better storage utilization
Storage virtualization can be applied at any level of a SAN. The
virtualization techniques can also be applied to different storage
functions such as physical storage, RAID groups, logical unit
numbers (LUNs), LUN subdivisions, storage zones and logical
volumes.
The storage virtualization model can be divided into four main layers:
1. Storage devices
2. Block aggregation layer
3. File/record layer
4. Application layer
Some of the benefits of storage virtualization include automated
management, expansion of storage capacity, reduced time in manual
supervision, easy updates and reduced downtime.
Network Virtualization
• Network virtualization refers to the management and monitoring of an entire
computer network as a single administrative entity from a single software-
based administrator's console.
• Network virtualization also may include storage virtualization, which involves
managing all storage as a single resource.
• Network virtualization is designed to allow network optimization of data
transfer rates, flexibility, scalability, reliability and security.
• It automates many network administrative tasks, which disguises the
network's true complexity.
• All network servers and services are considered one pool of resources, which
may be used without regard to the physical components.
• Network virtualization is especially useful for networks experiencing a rapid,
large and unpredictable increase in usage.
• The intended result of network virtualization is improved network productivity
and efficiency, as well as job satisfaction for the network administrator.
Network Virtualization
• Network virtualization is accomplished by using a variety of hardware and
software and combining network components.
Network virtualization gives you:
• an optimized network
• speed
• reliability
• flexibility
• scalability
• security
Virtual Machine
Resource Provisioning and Manageability
• Most business applications run in a mix of physical, virtual and cloud IT
environments.
• Virtual environments are very dynamic by their nature.
• Virtualization solutions dynamically allocate IT resources to applications,
perform load balancing based on resource utilization levels, and perform
dynamic power management to cut down power costs.
• IT administrators need to ensure that sufficient server power is available to
support these dynamic environments.
• However, this process can be time consuming and error prone if done
manually.
Storage as a Service
• Storage as a Service is a business model in which a large company rents
space in its storage infrastructure to a smaller company or individual.
• Storage as a Service is generally seen as a good alternative for a small or
mid-sized business that lacks the capital budget and/or technical personnel to
implement and maintain its own storage infrastructure.
Data Storage in Cloud Computing
• Digital data is stored in logical pools; the physical storage spans multiple
servers (and often locations), and the physical environment is typically
owned and managed by a hosting company.
• These cloud storage providers are responsible for keeping the data available
and accessible, and the physical environment protected and running.
• People and organizations buy or lease storage capacity from the providers to
store user, organization or application data.
References
CIO Insight (https://www.cioinsight.com/cloud-virtualization/iaas-providers/), last seen: 24th Apr, 2023
https://aws.amazon.com/what-is/iaas/
Text and References
T1 Mastering Cloud Computing: Foundations and Applications Programming
Rajkumar Buyya, Christian Vecchiola, S.Thamarai Selvi
R1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)
Thank You !
Cloud Computing
CS - 5
• Amazon Web Services (AWS) is a collection of remote
computing services, also called web services, that make up
a cloud computing platform offered by Amazon.com.
• These services are based out of 11 geographical regions
across the world.
AWS Cloud architecture for web hosting
• DNS services with Amazon Route 53 – Provides DNS services to simplify domain management.
• Edge caching with Amazon CloudFront – Edge caches high-volume content to decrease the latency to customers.
• Edge security for Amazon CloudFront with AWS WAF – Filters malicious traffic, including cross-site scripting (XSS) and SQL injection, via customer-defined rules.
• Load balancing with Elastic Load Balancing (ELB) – Enables you to spread load across multiple Availability Zones and AWS Auto Scaling groups for redundancy and decoupling of services.
• DDoS protection with AWS Shield – Safeguards your infrastructure against the most common network and transport layer DDoS attacks automatically.
• Firewalls with security groups – Moves security to the instance to provide a stateful, host-level firewall for both web and application servers.
• Caching with Amazon ElastiCache – Provides caching services with Redis or Memcached to remove load from the app and database, and lower latency for frequent requests.
• Managed database with Amazon Relational Database Service (Amazon RDS) – Creates a highly available, multi-AZ database architecture with six possible DB engines.
• Static storage and backups with Amazon Simple Storage Service (Amazon S3) – Enables simple HTTP-based object storage for backups and static assets like images and video.
Web Application Hosting
Web Log Analysis
1. Amazon Web Services (AWS) provides services and infrastructure for
building reliable, fault-tolerant, and highly available web applications in the
cloud.
2. Web applications in production environments generate significant amounts
of log information.
3. Analyzing logs can provide valuable insights into traffic patterns, user
behavior, and marketing profiles.
4. As web applications grow and visitor numbers increase, storing and
analyzing web logs becomes more challenging.
5. The diagram illustrates how AWS can be used to construct a scalable and
dependable large-scale log analytics platform.
6. The core component of this architecture is Amazon Elastic MapReduce,
a web service enabling analysts to process vast amounts of data
easily and cost-effectively using a hosted Hadoop framework.
Web Log Analysis Contd..
1. The web front-end servers run on Amazon Elastic Compute Cloud (Amazon EC2) instances.
2. Amazon CloudFront, a content delivery network, distributes static files to customers with low latency and
high data transfer speeds, generating valuable log information.
3. Log files are regularly uploaded to Amazon Simple Storage Service (Amazon S3), a highly available and
reliable data store. The data is sent in parallel from multiple web servers or edge locations.
4. The data set is processed by an Amazon Elastic MapReduce cluster. Amazon Elastic MapReduce utilizes a
hosted Hadoop framework to process the data in a parallel job flow.
5. Amazon EC2 offers unused capacity at a reduced cost known as the Spot Price. This price fluctuates based
on availability and demand. Utilizing Spot Instances can dynamically extend the cluster's capacity and
significantly reduce the cost of running job flows, especially for flexible workloads.
6. The results of data processing are pushed back to a relational database using tools like Apache Hive. This
database can be an Amazon Relational Database Service (Amazon RDS) instance, which simplifies the
setup, operation, and scalability of relational databases in the cloud.
7. Amazon RDS instances are priced on a pay-as-you-go model, just like many other services. After analysis,
the database can be backed up into an Amazon S3 database snapshot and then terminated. Whenever
needed, the database can be recreated from the snapshot.
FAULT TOLERANCE & HIGH
AVAILABILITY
Amazon S3
Use cases
1. Build a data lake – Run big data analytics, artificial intelligence (AI), machine learning (ML), and high performance computing (HPC) applications to unlock data insights.
2. Back up and restore critical data – Meet Recovery Time Objectives (RTO), Recovery Point Objectives (RPO), and compliance requirements with S3’s robust replication features.
3. Run cloud-native applications – Build fast, powerful mobile and web-based cloud-native apps that scale automatically in a highly available configuration.
4. Archive data at the lowest cost – Move data archives to the Amazon S3 Glacier storage classes to lower costs, eliminate operational complexities, and gain new insights.
Amazon S3 buckets
• Amazon S3 stores data as objects within resources called "buckets."
• You can store as many objects as you want within a bucket, and
write, read, and delete objects in your bucket.
Amazon EBS
• Amazon Elastic Block Store (Amazon EBS) provides
persistent block level storage volumes for use with
Amazon EC2 instances in the AWS Cloud.
• Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-
storage service designed for Amazon Elastic Compute Cloud (Amazon EC2).
Amazon EBS- features
• Once attached to an instance, you can create a file system on top of these
volumes, run a database, or use them in any other way you would use a block device.
• Amazon EBS volumes are placed in a specific Availability Zone, where they
are automatically replicated to protect you from the failure of a single
component.
• Amazon EBS provides three volume types:
– General Purpose (SSD),
– Provisioned IOPS (SSD), and
– Magnetic.
• The three volume types differ in performance characteristics and cost, so
you can choose the right storage performance and price for the needs of
your applications.
• All EBS volume types offer the same durable snapshot capabilities and are
designed for 99.999% availability.
Use cases
• Build your SAN in the cloud for I/O intensive applications-Migrate mid-
range, on-premises storage area network (SAN) workloads to the cloud.
Attach high-performance and high-availability block storage for mission-
critical applications.
• Right-size your big data analytics engines-Easily resize clusters for big data
analytics engines, such as Hadoop and Spark, and freely detach and
reattach volumes.
AWS Import/Export
• AWS Import/Export accelerates moving large
amounts of data into and out of the
AWS cloud using portable storage devices for
transport.
• AWS Import/Export transfers your data directly
onto and off of storage devices using Amazon’s
high-speed internal network and bypassing the
Internet.
• For significant data sets, AWS Import/Export is
often faster than Internet transfer and more cost
effective than upgrading your connectivity.
AWS Import/Export Contd…
• After the data load completes, the device will be returned to you.
When to Use AWS Import/Export
Available Internet Connection | Theoretical Min. Days to Transfer 1TB at 80% Network Utilization | When to Consider AWS Import/Export?
T1 (1.544Mbps)                | 82 days                                                          | 100GB or more
10Mbps                        | 13 days                                                          | 600GB or more
T3 (44.736Mbps)               | 3 days                                                           | 2TB or more
100Mbps                       | 1 to 2 days                                                      | 5TB or more
1000Mbps                      | Less than 1 day                                                  | 60TB or more
For example, with a 10Mbps connection and 80% network utilization, transferring 1TB of data over the Internet to AWS takes about 13 days. At that speed, the volume that takes at least a week is 600GB; so if you have 600GB or more of data to transfer, and you want it to take less than a week to get into AWS, we recommend using AWS Import/Export.
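The break-even arithmetic in the table above is easy to reproduce. A minimal sketch (the 80% utilization figure comes from the table; 1TB is taken as 2^40 bytes):

```python
def transfer_days(size_bytes, link_mbps, utilization=0.8):
    """Days needed to push size_bytes over a link_mbps line at the given utilization."""
    bytes_per_second = link_mbps * 1_000_000 / 8 * utilization
    return size_bytes / bytes_per_second / 86_400  # seconds per day

# 1TB (2**40 bytes) over a 10Mbps link at 80% utilization: about 13 days,
# matching the table above.
print(round(transfer_days(2**40, 10)))  # 13
```

Running the same function for 100Mbps and 1000Mbps reproduces the "1 to 2 days" and "less than 1 day" rows as well.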
Amazon CloudFront
• Amazon CloudFront is a content delivery web service.
• Gives developers and businesses an easy way to distribute content to end
users with
– low latency,
– high data transfer speeds, and
– no minimum usage commitments.
• Requests for your content are automatically routed to the nearest edge
location, so content is delivered with the best possible performance.
Amazon DynamoDB
• Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.
• DynamoDB supports storing, querying, and updating documents. Using the AWS SDK you can write applications that store JSON documents directly into Amazon DynamoDB tables.
• Schema-less
– Amazon DynamoDB has a flexible database schema.
– The data items in a table need not have the same attributes or even the same number of attributes.
– Multiple data types (strings, numbers, binary data, and sets) add richness to the data model.
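To make the schema-less model concrete, here is a small sketch. The table name "Music" and all attributes are hypothetical, and the boto3 call is commented out because it needs real AWS credentials and an existing table:

```python
# Two items destined for the same hypothetical "Music" table. DynamoDB only
# requires the key attributes to be present; otherwise items may differ in
# both the names and the number of their attributes.
song = {"Artist": "No One You Know", "SongTitle": "Call Me Today",
        "AlbumTitle": "Somewhat Famous", "Year": 2015}
album = {"Artist": "No One You Know", "SongTitle": "Greatest Hits",
         "Genres": {"Country", "Folk"}}          # a set type, and no Year

# With credentials and a real table, each dict could be stored as-is:
# import boto3
# table = boto3.resource("dynamodb").Table("Music")
# table.put_item(Item=song)
# table.put_item(Item=album)

# Attributes present in one item but not the other:
print(sorted(set(song) ^ set(album)))   # ['AlbumTitle', 'Genres', 'Year']
```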
Amazon Virtual Private Cloud (VPC)
• You can easily customize the network configuration for your Amazon Virtual Private Cloud.
– For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place your backend systems, such as databases or application servers, in a private-facing subnet with no Internet access.
• Additionally, you can create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.
Eucalyptus
• Organizations can use or reuse AWS-compatible tools, images, and scripts to manage their own on-premises infrastructure as a service (IaaS) environments.
• The AWS API is implemented on top of Eucalyptus, so tools in the cloud ecosystem that can communicate with AWS can use the same API with Eucalyptus.
AWS-compatible tools:
• Auto Scaling - With auto-scaling, developers can add instances and virtual machines as traffic demands increase.
• Elastic Load Balancing - A service that distributes incoming application traffic and service calls across multiple Eucalyptus workload instances, providing greater application fault tolerance.
Regions and Availability Zones
▪ AWS Regions are large and widely dispersed into separate geographic locations.
▪ Availability Zones are distinct locations within an AWS Region that are engineered to be isolated from failures in other Availability Zones.
▪ They provide inexpensive, low-latency network connectivity to other Availability Zones in the same AWS Region.
▪ To provide additional scalability and reliability, these data center facilities are located in different physical locations.
➢ You can then delete some or all of the original cache nodes. This approach is recommended.
*Note: What is Amazon ElastiCache for Memcached? Set up, manage, and scale a
distributed in-memory data store or cache environment in the cloud using the cost-
effective ElastiCache solutions.
Supported regions & endpoints
❖ Amazon ElastiCache is available in multiple AWS Regions. This means that you can launch ElastiCache clusters
in locations that meet your requirements. For example, you can launch in the AWS Region closest to your
customers, or launch in a particular AWS Region to meet certain legal requirements.
❖ By default, the AWS SDKs, AWS CLI, ElastiCache API, and ElastiCache console reference the US-West (Oregon)
region. As ElastiCache expands availability to new regions, new endpoints for these regions are also available
to use in your HTTP requests, the AWS SDKs, AWS CLI, and the console.
❖ Each Region is designed to be completely isolated from the other Regions. Within each Region are multiple
Availability Zones (AZ). By launching your nodes in different AZs you are able to achieve the greatest possible
fault tolerance. For more information on Regions and Availability Zones, see Choosing regions and availability
zones at the top of this topic.
AWS In 10 Minutes | AWS Tutorial For Beginners | AWS Training Video | AWS Tutorial | Simplilearn - YouTube
WHAT IT IS WHY USE IT THE COMMUNITY USING OPENSTACK FAQS
What is OpenStack
Programmable infrastructure that lays a common set of APIs on top of compute, networking and storage.
One platform for virtual machines, containers and bare metal.
3 CLOUD MODELS
• Public cloud: shared resource, "pay-as-you-go" models are common. OpenStack public cloud is available in 60+ datacenters globally.
• Private cloud: dedicated to a single user. Can be hosted in a vendor's data center or yours, or remotely managed private cloud.
• Hybrid cloud: a mix of private cloud and public cloud orchestrated together to meet company needs.
OPENSTACK PRINCIPLES
3. Open development
See more at openstack.org/user-stories
• 86% of telecoms say OpenStack is important to their business; many are using OpenStack to virtualize their networks and implement edge computing to achieve agility and significant cost savings.
• CERN runs one of the largest OpenStack clouds to process data from the Large Hadron Collider, giving physicists the resources they need to unleash the secrets of the universe.
• Comcast powers customer-facing and internal applications and services for both production and development environments with OpenStack.
• Banco Santander runs 1,000 compute nodes of OpenStack in data centers across the world, and uses Cloudera on OpenStack to power fraud detection.
• DigitalFilm Tree uses interoperable OpenStack private and public clouds to process thousands of hours of raw footage into a one-hour TV show.
• Walmart moved their global e-commerce platform to OpenStack, powering desktop, mobile, tablet and kiosk users.
• Adobe Digital Marketing uses OpenStack to convert their existing virtualization environment into self-service IT.
• Workday moved their on-demand software services from static, virtualized environments to a fully elastic and scalable platform based on OpenStack.
openstack.org/foundation
81,000+ members | 187 countries | 670+ organizations
Cross-community collaboration
OpenStack integrates with a number of other technologies, including many popular open source projects, enabling users to combine them with
OpenStack.
https://www.openstack.org/software/project-
navigator/openstack-components#openstack-services
Glance – Image Store
It provides discovery, registration and delivery services for disk
and server images.
List of processes and their functions:
• glance-api: accepts Image API calls for image discovery, image retrieval and image storage.
• glance-registry: stores, processes and retrieves metadata about images (size, type, etc.).
• glance database: a database to store the image metadata.
• A storage repository for the actual image files. Glance supports normal file-systems, Amazon S3, and Swift.
Nova – Compute
It provides virtual servers upon demand. Nova is
the most complicated and distributed
component of OpenStack. A large number of
processes cooperate to turn end user API
requests into running virtual machines.
References
• Salman A. Baset, Chunqiang Tang, Byung Chul Tak and Long Wang, "Dissecting Open Source Cloud Evolution: An OpenStack Case Study"
• https://www.openstack.org/marketplace/books
• https://docs.openstack.org/arch-design
• https://www.researchgate.net/publication/318463331_An_overview
_of_OpenStack...
• https://cloud.google.com/compute/docs/instances/instance-life-
cycle
• https://www.geeksforgeeks.org/hypervisor
• https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-
ug/RegionsAndAZs.html#SupportedRegions
• https://docs.aws.amazon.com/amazonglacier/latest/dev/introductio
n.html
Text and References
T2 Mastering Cloud Computing: Foundations and Applications Programming
Rajkumar Buyya, Christian Vecchiola, S.Thamarai Selvi
T1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)
Thank You !
Cloud Computing
CS - 6
Virtual Resource Management
and
Cloud Provisioning
Agenda
• Introduction
• Virtual Machine Provisioning and Manageability
• VM Provisioning Process
• Virtual Machine Migration Services
• Migrations Techniques
• VM Provisioning and Migration in action
Introduction
Two core services enable users to get the best out of the IaaS model in public and private cloud setups:
1) virtual machine provisioning, and
2) migration services.
• Provisioning a new virtual machine in minutes saves lots of time and effort.
• Achieving the SLA/SLO agreements and quality-of-service (QoS) specifications required.
Analogy for Virtual Machine Provisioning
• Historically, when there was a need to install a new server for a certain workload to provide a particular service for a client, lots of effort was exerted by the IT administrator, and much time was spent to install and provision a new server:
1) Check the inventory for a new machine,
2) Get one,
3) Format it and install the required OS, and
4) Install services; a server is set up along with lots of security patches and appliances.
Now, with the emergence of virtualization technology and the cloud computing IaaS model, it is just a matter of minutes to achieve the same task.
Analogy for Virtual Machine Provisioning: Continue..
• All you need is to provision a virtual server through a self-service interface, with small steps, to get what you want:
1) Provisioning this machine in a public cloud like Amazon Elastic Compute Cloud (EC2), or
2) Using a virtualization management software package, or a private cloud management solution, installed at your data centre in order to provision the virtual machine inside the organization and within the private cloud setup.
Analogy for Migration Services
Previously, whenever there was a need for performing a server’s upgrade or
performing maintenance tasks, you would exert a lot of time and effort, because it is
an expensive operation to maintain or upgrade a main server that has lots of
applications and users.
Now, with the advance of revolutionary virtualization technology and the migration services associated with hypervisors' capabilities, these tasks (maintenance, upgrades, patches, etc.) are very easy and take very little time to accomplish.
Revisiting Virtualization Technology
Virtualization can be defined as the abstraction of the four computing resources: storage, processing power, memory, and network or I/O.
Revisiting Virtualization Technology
The virtualization layer partitions the physical resource of the underlying physical server into multiple virtual machines with
different workloads.
The virtualization layer:
1) Schedules resources,
2) Allocates physical resources,
3) Makes each virtual machine think that it totally owns the whole underlying hardware's physical resources (processor, disks, etc.),
4) Makes it flexible and easy to manage resources,
5) Improves the utilization of resources by multiplexing many virtual machines on one physical host,
6) Lets machines be scaled up and down on demand with a high level of resource abstraction,
7) Enables a highly reliable and agile deployment mechanism,
8) Provides on-demand cloning and live migration, and
9) Provides an efficient management suite for managing virtual machines.
Public Cloud and Infrastructure Services
There are many examples for vendors who publicly provide infrastructure as a service.
Example: Amazon Elastic Compute Cloud (EC2). Amazon EC2 services can be leveraged via Web services (SOAP or REST), a Web-based AWS (Amazon Web Services) management console, or the EC2 command line tools.
The Amazon service provides hundreds of pre-made AMIs (Amazon Machine Images) with a variety of
operating systems (i.e., Linux, Open Solaris, or Windows) and pre-loaded software.
• It provides you with complete control of your computing resources and lets you run on Amazon’s
computing and infrastructure environment easily.
• It also reduces the time required for obtaining and booting a new server’s instances to minutes,
thereby allowing a quick scalable capacity and resources, up and down, as the computing
requirements change.
Public Cloud and Infrastructure Services: Continue
(b) The high CPU’s needs it provides (medium and extra large high CPU instances), and
(c) High-memory instances (extra large, double extra large, and quadruple extra large instance).
BITS Pilani
Private Cloud and Infrastructure Services
Private cloud exhibits a highly virtualized cloud data center located inside your organization’s firewall.
It may also be a private space dedicated for your company within a cloud vendor’s data center
designed to handle the organization’s workloads, and in this case it is called Virtual Private Cloud (VPC).
VM Provisioning Process
Steps to Provision VM -
• Select a server from a pool of available servers along with the appropriate OS template you need to provision the
virtual machine.
• Load the appropriate software.
• Customize and configure the machine (e.g., IP address, Gateway) and connect it to the associated network and storage resources.
• Finally, the virtual server is ready to start with its newly loaded S/W.
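The provisioning steps above can be sketched as a toy workflow. Every name and value here is illustrative, not a real provisioning API:

```python
# Toy sketch of the VM provisioning steps described above.
def provision_vm(pool, os_template, ip, gateway):
    server = pool.pop(0)                       # 1) select a server from the pool
    vm = {"host": server, "os": os_template}   # 2) load the appropriate software
    vm.update({"ip": ip, "gateway": gateway})  # 3) customize and configure
    vm["state"] = "running"                    # 4) start with the newly loaded S/W
    return vm

pool = ["host-01", "host-02"]
vm = provision_vm(pool, "ubuntu-22.04", "10.0.0.5", "10.0.0.1")
print(vm["host"], vm["state"])   # host-01 running
```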
VM Provisioning Process
• Server provisioning is defining the server's configuration based on the organization's requirements: the H/W and S/W components (OS, applications, etc.).
• Physical servers can also be virtualized and provisioned using P2V (Physical to Virtual) tools.
VM Provisioning using templates
• After creating a virtual machine by virtualizing a physical server, or by building a new virtual server in the virtual
environment, a template can be created out of it.
• Most virtualization management vendors (VMware, XenServer, etc.) provide the data center’s administration with the ability
to do such tasks
• Provisioning from a template reduces the time required to create a new virtual machine.
• Administrators can create different templates for different purposes.
For example –
• Vagrant provision tool using VagrantFile (template file) - Demo
• Heat – Orchestration Tool of openstack (Heat template in YAML format) - Demo
This enables the administrator to quickly provision a correctly configured virtual server on demand, which is what makes provisioning from a template such an invaluable, time-saving feature.
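As noted above, Vagrant drives provisioning from a Vagrantfile template. A minimal sketch might look like this (the box name, memory size, and provisioning command are illustrative, not from the slides):

```ruby
# Vagrantfile: declarative template for provisioning one VM.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"          # base image ("template") to clone
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024                        # VM sizing
  end
  # Post-boot customization step, baked into the template:
  config.vm.provision "shell", inline: "apt-get update -y"
end
```

Running `vagrant up` then provisions and boots a correctly configured VM from this template in one step.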
Virtual Machine Migration Services
The process of moving a virtual machine from one host server or storage location to another.
In this process, all key machine components, such as CPU, storage disks, networking, and memory, are completely virtualized, thereby allowing the entire state of a virtual machine to be captured by a set of easily moved data files.
Migration Types
• Migration can be categorized as cold or non-live migration and live migration.
• Based on granularity, the migration can be divided into single and multiple migrations.
• The design and continuous optimization and improvement of live migration mechanisms are striving
to minimize downtime and live migration time.
• The downtime is the time interval during which the service is unavailable, due to the need for synchronization.
• For a single migration, the migration time refers to the time interval between the start of the pre-migration phase and the finish of the post-migration phase, after which the instance is running at the destination host.
• On the other hand, the total migration time of multiple migrations is the time interval between the
start of the first migration and the completion of the last migration.
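The three time metrics just defined can be expressed directly. The timestamps below are made-up values for illustration:

```python
# Toy timeline for two overlapping live migrations. Each tuple is
# (pre_migration_start, service_pause_start, service_resume, post_migration_end),
# in seconds.
migrations = [
    (0.0, 40.0, 40.3, 45.0),   # migration 1
    (10.0, 55.0, 55.2, 60.0),  # migration 2
]

def downtime(m):
    """Interval during which the service is unavailable (synchronization pause)."""
    return m[2] - m[1]

def single_migration_time(m):
    """Start of the pre-migration phase to the end of the post-migration phase."""
    return m[3] - m[0]

def total_migration_time(ms):
    """For multiple migrations: start of the first to completion of the last."""
    return max(m[3] for m in ms) - min(m[0] for m in ms)

print(round(downtime(migrations[0]), 1))        # 0.3
print(single_migration_time(migrations[0]))     # 45.0
print(total_migration_time(migrations))         # 60.0
```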
Live Migration and High Availability
• Live migration (which is also called hot or real-time migration) can be defined as the movement of a
virtual machine from one physical host to another while being powered on.
• When it is properly carried out, this process takes place without any noticeable effect from the end
user’s point of view (a matter of milliseconds).
• One of the most significant advantages of live migration is the fact that it facilitates proactive
maintenance in case of failure, because the potential problem can be resolved before the disruption
of service occurs.
• Live migration can also be used for load balancing in which work is shared among computers in
order to optimize the utilization of available CPU resources.
Live Migration Anatomy, Xen Hypervisor Algorithm
The steps of the live migration mechanism show how memory and virtual machine states are transferred. The migration process can be viewed as a transactional interaction between the two hosts involved:
Live Migration Technique
Live migration process: [figure sequence showing the step-by-step transfer between Host A and Host B]
Phases in Migration
• Push phase, where the instance is still running on the source host while memory pages and disk blocks (or written data) are pushed through the network to the destination host.
• Stop-and-Copy phase, where the instance is stopped, and the memory pages or disk data are copied to the destination across the network. At the end of this phase, the instance resumes at the destination.
• Pull phase, where the new instance executes at the destination while pulling faulted memory pages from the source host across the network when they are not yet available locally.
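The interplay between the push and stop-and-copy phases in iterative pre-copy migration can be sketched in a few lines. This is a toy simulation with made-up page counts and dirty rate, not the Xen implementation; it also simplifies by only re-dirtying pages from the last round's set:

```python
import random

def precopy_migration(num_pages=1000, dirty_rate=0.05, threshold=20, max_rounds=30):
    """Toy pre-copy loop: repeatedly push pages dirtied since the last round,
    then stop-and-copy the small remainder. Returns (rounds, pages_in_stop_copy)."""
    random.seed(42)                          # deterministic for illustration
    to_send = set(range(num_pages))          # first round: all memory pages
    rounds = 0
    while len(to_send) > threshold and rounds < max_rounds:
        rounds += 1
        # Push phase: send the current dirty set while the VM keeps running;
        # meanwhile the guest re-dirties a fraction of the pages just sent.
        to_send = {p for p in to_send if random.random() < dirty_rate}
    # Stop-and-copy phase: pause the VM and send whatever is still dirty.
    return rounds, len(to_send)

rounds, final_pages = precopy_migration()
print(rounds, final_pages)
```

The dirty set shrinks geometrically each round, so the final stop-and-copy only transfers a small residue, which is what keeps the downtime to milliseconds.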
Live Migration
Image Ref: "A Taxonomy of Live Migration Management in Cloud Computing", Tianzhang He and Rajkumar Buyya
Pre - Migration and Post - Migration Phases
• Pre-migration and Post-migration phases are handling the computing and network
configuration.
• During the pre-migration phase, migration management software creates instance’s virtual
interfaces (VIFs) on the destination host, updates interface or ports binding, and networking
management software, such as OpenStack Neutron server, configures the logical router.
• During the post-migration phase, migration management software updates port or interface states and rebinds the port with the networking management software, and the VIF driver unplugs the instance's virtual ports on the source host.
Live Migration Technique
• VM active on host A
• Destination host selected
Pre-migration process (Block devices mirrored)
Live Migration Technique
Post-migration code runs to reattach the device’s drivers to the new machine and advertise
moved IP addresses.
This approach to failure management ensures that at least one host has a consistent VM
image at all times during migration:
1) Original host remains stable until migration commits and that the VM may be suspended
and resumed on that host with no risk of failure.
2) A migration request essentially attempts to move the VM to a new host and on any sort of
failure, execution is resumed locally, aborting the migration.
Challenges of live migration:
– VMs have lots of state in memory
– Some VMs have soft real-time requirements
Live Migration Effect on a Running Web Server
Clark et al. evaluated the above migration approach on an Apache 1.3 web server that served static content at a high rate. Throughput was measured while continuously serving a single 512-KB file to a set of 100 concurrent clients.
Live Migration Vendor Implementations Example
There are lots of VM management and provisioning tools that provide a live VM migration facility.
VMware VMotion:
a) Automatically optimize and allocate an entire pool of resources for maximum hardware utilization,
flexibility, and availability.
b) Perform hardware’s maintenance without scheduled downtime along with migrating virtual machines
away from failing or underperforming servers.
• Using the Proxmox deployment tool - for a demo, refer to the recorded lecture
Other links :
Live Storage Migration on Hyper-V Failover Cluster Manager STEP BY STEP TUTORIAL - YouTube
Cold/regular migration
With cold migration, you have the option of moving the associated disks from one datastore to another.
1) Live migration needs shared storage for the virtual machines in the server's pool, but cold migration does not.
2) In live migration of a virtual machine between two hosts, certain CPU compatibility checks apply, but in cold migration these checks do not apply.
• The configuration files, including the NVRAM file (BIOS settings), log files, and the disks of the virtual machine, are moved from the source host to the destination host's associated storage area.
• After the migration is completed, the old version of the virtual machine is deleted from the source host.
Live Storage Migration of Virtual Machine.
• This kind of migration constitutes moving the virtual disks or configuration file of a running
virtual machine to a new data store without any interruption in the availability of the virtual
machine’s service.
VM Migration, SLA and On-Demand Computing
Migration of Virtual Machines to Alternate Platforms
• Data centre technologies should have the ability to migrate virtual machines from one platform to another.
• There are a number of ways of achieving this, depending on the source and target virtualization platforms and the vendor's tools that manage this facility.
• For example, the VMware Converter handles migrations between ESX hosts, VMware Server, and VMware Workstation.
References
• Rajkumar Buyya, James Broberg & Andrzej M. Goscinski, Cloud Computing – Principles and Paradigms. John Wiley Pub, 2011
Text and References
T1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)
Recap of Virtualization
● Lightweight virtualization.
● OS-level virtualization
● Allow single host to operate multiple
isolated & resource-controlled Linux
Instances.
● Built on features included in the Linux kernel, used by LXC (Linux Containers)
Containers are not a new technology: the earliest iterations of containers have been
around in open source Linux code for decades.
•Cgroups :
Kernel Control Groups (commonly referred to as just “cgroups”) are a Kernel feature that
allows aggregating or partitioning tasks (processes) and all their children into hierarchical
organized groups to isolate resources.
•Container :
A “virtual machine” on the host server that can run any Linux system, for example
openSUSE, SUSE Linux Enterprise Desktop, or SUSE Linux Enterprise Server.
Terminology Continued...
•Container Name :
A name that refers to a container. The name is used by the lxc commands.
•Kernel Namespaces :
A Kernel feature to isolate some resources like network, users, and others for a group of
processes.
http://osv.io/
https://coreos.com/
https://developer.ubuntu.com/en/snappy/
http://boot2docker.io/
http://rancher.com/rancher-os/ https://vmware.github.io/photon/
http://www.projectatomic.io/
https://blog.inovex.de/docker-a-comparison-of-minimalistic-operating-systems/
OpenBSD sysjail
LXD lmctfy
https://github.com/google/lmctfy
FreeBSD jail
http://linux-vserver.org
https://en.wikipedia.org/wiki/Operating-system-level_virtualization#IMPLEMENTATIONS
LXC
https://www.cloudfoundry.org/
http://stratos.apache.org/
https://www.openshift.org/
http://deis.io/
http://getcloudify.org/
https://flynn.io/
https://github.com/dawn/dawn
https://github.com/Yelp/paasta
http://www.octohost.io/
https://tsuru.io/
Docker Daemon
• It is used to carry out all the heavy tasks such as creating and managing Docker objects including containers, volumes, images, and networks.
• For example, in the case of a swarm cluster, the host machine's daemon can communicate with daemons on other nodes to carry out tasks.
Docker CLI
• The Docker users can leverage simple HTTP clients like
Command line to interact with Docker.
• When a user executes a Docker command such as docker run, the CLI will send this request to the Docker daemon via the REST API.
• The Docker CLI can also communicate with more than one daemon.
Docker Registry
• The official Docker registry, called Dockerhub, contains several official image repositories.
• A repository contains a set of similar Docker images that are uniquely identified
by Docker tags.
• Dockerhub provides tons of useful official and vendor-specific images to its
users. Some of them include Nginx, Apache, Python, Java, Mongo, Node,
MySQL, Ubuntu, Fedora, Centos, etc.
• You can even create your private repository inside Dockerhub and store your
custom Docker images using the Docker push command.
• Docker allows you to create your own private Docker registry in your local
machine using an image called ‘registry’.
• Once you run a container associated with the registry image, you can use the
Docker push command to push images to this private registry.
• Docker images are read-only templates that are built using multiple layers of files.
• You can build Docker images using a simple text file called Dockerfile which contains
instructions to build Docker images.
• The first instruction is a FROM instruction which can pull a base image from any Docker
registry. Once this base image layer is created, several instructions are then used to create the
container environment. Each instruction adds a new layer on top of the previous one.
A Docker image is simply a blueprint of the container environment. Once you create a container, it creates a writable layer on top of the image, and then you can make changes. The image holds all the metadata that describes the container environment. You can either directly pull a Docker image from Docker hub or create your customized image over a base image using a Dockerfile.
• Once you have created a Docker image, you can push it on Docker hub or any other registry
and share it with the outside world.
• For example, if you create a container associated with the Ubuntu image, you will have access
to an isolated Ubuntu environment. You can also access the bash of this Ubuntu environment and
execute commands.
Containers have access to all the resources that you define in the Dockerfile while creating an image. Such configurations include build context, network connections, storage, CPU, memory, ports, etc.
• For example, if you want access to a container with libraries of Java installed, you can use the
Java image from the Dockerhub and run a container associated with this image using the Docker
run command.
You can also create containers associated with the custom images that you create for your application using Dockerfiles. Containers are very lightweight and can be spun up within a matter of seconds.
FROM - This is used to set the base image for the instructions. It is very important to mention this in the first line of the Dockerfile.
MAINTAINER - This instruction is used to indicate the author of the Dockerfile, and it is non-executable.
RUN - This instruction allows us to execute a command on top of the existing layer and create a new layer with the result of the command execution.
CMD - This instruction doesn't perform anything during the building of the Docker image. It just specifies the commands that are used in the image.
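Putting the four instructions just described together, a minimal illustrative Dockerfile might look like this (the base image, package, and command are made-up examples):

```dockerfile
# Base image layer, pulled from a registry (must be the first instruction).
FROM python:3.11-slim
# Author metadata; non-executable. (Modern Dockerfiles prefer a LABEL.)
MAINTAINER you@example.com
# Executes at build time and adds a new layer on top of the base image.
RUN pip install --no-cache-dir flask
# Not executed at build time; records the default command for containers.
CMD ["python", "-m", "flask", "--version"]
```

Building this with `docker build` creates one layer per instruction; running a container from the resulting image then executes the CMD.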
Docker network drivers:
– Bridge Driver - The bridge network driver is mostly used when you have a multi-container application running in the same host machine. This is the default network driver.
– Host Driver - If you don’t require any type of network isolation between the Docker host machine and the
containers on the network, you can use the Host driver.
– Overlay Driver - When you use Docker swarm mode to run containers on different hosts on the same
network, you can use the overlay network driver. It allows different swarm services hosting different
components of multi-container applications to communicate with each other.
– Macvlan - The macvlan driver assigns mac addresses to each container in the network. Due to this, each
container can act as a standalone physical host. The mac addresses are used to route the traffic to appropriate
containers. This can be used in cases such as migration of a VM setup, etc.
Storage: As soon as you exit a container, all your progress and data inside the container are lost. To avoid this, you need a solution for persistent storage. Docker provides several options for persistent storage with which you can share, store, and back up your valuable data. These are -
• Volumes - You can use directories inside your host machine and mount them as volumes inside Docker containers.
These are located in the host machine’s file system which is outside the copy-on-write mechanism of the container.
Docker has several commands that you can use to create, manage, list, and delete volumes.
• Volume Container - You can use a dedicated container as a volume and mount it to other containers. This container will
be independent of other containers and can be easily mounted to multiple containers.
• Directory mounts - You can mount a local directory present in the host machine to a container. In the case of volumes,
the directory must be within the volumes folder in the host machine to be mounted. However, in the case of directory
mounts, you can easily mount any directory on your host as a source.
• Storage Plugins - You can use storage plugins to connect to any external storage platform, such as an array, appliance, etc., by mapping it with the host storage.
T1 Moving To The Cloud: Developing Apps in the New World of Cloud Computing 1st
Edition
by Dinkar Sitaram (Author), Geetha Manjunath (Author)
Containers: cgroups, Linux kernel namespaces, ufs, Docker, and intro to Kubernetes pods - YouTube
Dockerfile >Docker Image > Docker Container | Beginners Hands-On | Step by Step - YouTube
Docker Architecture
Docker's architecture includes a Docker client,
used to issue Docker commands; a Docker
host, which runs the Docker daemon; and a
Docker registry, which stores Docker images.
The Docker daemon running within the
Docker host is responsible for managing the
images and containers.
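The flow between the three components can be traced with a few everyday commands. This is an illustrative sketch assuming a host with the Docker daemon running and access to the default registry (Docker Hub):

```shell
docker pull nginx    # client asks the daemon to fetch the image from the registry
docker run -d nginx  # daemon creates and starts a container from the local image
docker ps            # client queries the daemon for running containers
```

In each case the client only sends a request; the daemon on the Docker host does the actual work of pulling, creating, and tracking containers.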
Software teams use container orchestration to control and automate many tasks:
• Configuration: you describe the configuration of your application in a YAML or JSON file, depending on the
orchestration tool. These configuration files (for example, docker-compose.yml) tell the
orchestration tool where to gather container images (for example, from Docker Hub), how to establish
networking between containers, how to mount storage volumes, and where to store logs for each
container. Teams branch and version-control these configuration files so they can deploy the same
applications across different development and testing environments before deploying them to
production clusters.
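A configuration file of the kind described above can be sketched as a minimal docker-compose.yml. The service name, image, ports, and volume/network names are illustrative assumptions, not from the slides:

```yaml
# Minimal docker-compose.yml sketch
version: "3.8"
services:
  web:
    image: nginx:latest                    # image gathered from Docker Hub
    ports:
      - "8080:80"
    volumes:
      - site-data:/usr/share/nginx/html    # mounted storage volume
    networks:
      - app-net                            # networking between containers
volumes:
  site-data:
networks:
  app-net: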
• Containers are deployed onto hosts, usually in replicated groups. When it’s time to deploy a new
container into a cluster, the container orchestration tool schedules the deployment and looks for the
most appropriate host to place the container based on predefined constraints (for example, CPU or
memory availability).
• Once the container is running on the host, the orchestration tool manages its lifecycle according to
the specifications you laid out in the container’s definition file (for example, its Dockerfile).
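The definition file mentioned above can be sketched as a minimal Dockerfile. The base image, file layout, and start command are illustrative assumptions:

```dockerfile
# Minimal Dockerfile sketch for a Python web application
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Building it with `docker build -t myapp .` produces an image the orchestration tool can then schedule and manage.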
Kubernetes: the gold standard
• A key advantage of container orchestration tools is that you can use them in any environment in which you can run
containers. And containers are supported in just about any kind of environment these days, from traditional
on-premises servers to public cloud instances running in Amazon Web Services (AWS), Google Cloud
Platform (GCP), or Microsoft Azure. Additionally, most container orchestration tools are built with Docker
containers in mind.
• Kubernetes acts as a self-service Platform-as-a-Service (PaaS) that creates a hardware-layer abstraction for
development teams. Kubernetes is also extremely portable. It runs on Amazon Web Services
(AWS), Microsoft Azure, Google Cloud Platform (GCP), or in on-premises installations. You can move
workloads without having to redesign your applications or completely rethink your infrastructure, which
helps you standardize on a platform and avoid vendor lock-in.
– Cluster. A cluster is a set of nodes with at least one master node and several worker nodes (sometimes
referred to as minions) that can be virtual or physical machines.
– Kubernetes master. The master manages the scheduling and deployment of application instances across
nodes, and the full set of services the master node runs is known as the control plane. The master
communicates with nodes through the Kubernetes API server. The scheduler assigns nodes to pods (one or
more containers) depending on the resource and policy constraints you’ve defined.
– Kubelet. Each Kubernetes node runs an agent process called a kubelet that’s responsible for managing the
state of the node: starting, stopping, and maintaining application containers based on instructions from the
control plane. A kubelet receives all of its information from the Kubernetes API server.
• Pods. The basic scheduling unit, which consists of one or more containers guaranteed to be co-located on the
host machine and able to share resources. Each pod is assigned a unique IP address within the cluster,
allowing the application to use ports without conflict. You describe the desired state of the containers in a pod
through a YAML or JSON object called a PodSpec. These objects are passed to the kubelet through the API
server.
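The PodSpec described above can be sketched as a minimal YAML manifest. The pod name, container name, and image are illustrative assumptions:

```yaml
# Minimal PodSpec sketch
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

Applying it (for example with `kubectl apply -f pod.yaml`) passes this object through the API server to the kubelet on the chosen node.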
• Deployments, replicas, and ReplicaSets. A deployment is a YAML object that defines the pods and the
number of container instances, called replicas, for each pod. You define the number of replicas you want to
have running in the cluster via a ReplicaSet, which is part of the deployment object. So, for example, if a node
running a pod dies, the replica set will ensure that another pod is scheduled on another available node.
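A deployment object of the kind described above can be sketched as follows; the ReplicaSet that maintains the three replicas is created automatically from it. Names and image are illustrative assumptions:

```yaml
# Deployment sketch: 3 replicas of a single-container pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

If a node running one of the pods dies, the ReplicaSet schedules a replacement pod on another available node, keeping the replica count at 3.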
• Docker Swarm: a fully integrated and open-source container orchestration tool for
packaging and running applications as containers, deploying them, and even locating
container images from other hosts.
• Swarm. Like a cluster in Kubernetes, a swarm is a set of nodes with at least one master
node and several worker nodes that can be virtual or physical machines.
• Service. A service is the tasks a manager or agent nodes must perform on the swarm, as defined by a swarm
administrator. A service defines which container images the swarm should use and which commands the
swarm will run in each container. A service in this context is analogous to a microservice; for example, it’s
where you’d define configuration parameters for an nginx web server running in your swarm. You also
define parameters for replicas in the service definition.
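Defining such a service, including its replica count, can be sketched with the Docker CLI. This assumes a host that is already a swarm manager; the service name and port mapping are illustrative:

```shell
# Create an nginx service with 3 replicas on the swarm:
docker service create --name web --replicas 3 -p 8080:80 nginx

# Inspect the services and their task distribution:
docker service ls
docker service ps web
```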
• Manager node. When you deploy an application into a swarm, the manager node provides several
functions: it delivers work (in the form of tasks) to worker nodes, and it also manages the state of the swarm
to which it belongs. The manager node can run the same services worker nodes do, but you can also
configure them to only run manager node-related services.
• Worker nodes. These nodes run tasks distributed by the manager node in the swarm. Each worker node
runs an agent that reports back to the manager node about the state of the tasks assigned to it, so the
manager node can keep track of services and tasks running in the swarm.
• Task. Tasks are Docker containers that execute the commands you defined in the service. Manager
nodes assign tasks to worker nodes, and after this assignment, the task cannot be moved to another
worker. If the task fails in a replica set, the manager will assign a new version of that task to another
available node in the swarm.
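Setting up the manager/worker topology described above can be sketched as follows. This assumes Docker is installed on every node; the token and address placeholders are printed by the init command and are not filled in here:

```shell
# On the first node: initialize the swarm; this node becomes a manager.
docker swarm init

# On the manager: print the join command for worker nodes.
docker swarm join-token worker

# On each worker, run the printed command, of the form:
# docker swarm join --token <token> <manager-ip>:2377

# Back on the manager: list nodes and their roles.
docker node ls
```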
What Is Docker? | What Is Docker And How It Works? | Docker Tutorial For Beginners |
Simplilearn (youtube.com)
Dockerfile >Docker Image > Docker Container | Beginners Hands-On | Step by Step (youtube.com)