Software Defined Networking Notes
Unit I
Chapter 1 Elements of Modern Networking
Chapter 2 Requirements and Technology
Unit II
Chapter 3 SDN: Background and Motivation, and SDN Data Plane and OpenFlow
Chapter 4 SDN Control Plane and SDN Application Plane
Unit III
Chapter 5 Virtualization
Chapter 6 NFV Functionality
Unit IV
Chapter 7 Quality of Service (QoS) and User Quality of Experience (QoE)
Chapter 8 Network Design Implications of QoS and QoE
Unit V
Chapter 9 Modern Network Architecture: Clouds and Fog (Cloud Computing)
Chapter 10 Modern Network Architecture: Clouds and Fog (The Internet of Things)
*****
UNIT I
1
ELEMENTS OF MODERN NETWORKING
Unit Structure
1.1 Objectives
1.2 The Networking Ecosystem
1.3 Example Network Architectures
1.3.1 A Global Network Architecture
1.3.2 A Typical Network Hierarchy
1.4 Ethernet
1.4.1 Applications of Ethernet
1.4.2 Standards
1.4.3 Ethernet Data Rates
1.5 Wi-Fi
1.5.1 Applications of Wi-Fi
1.5.2 Standards
1.5.3 Wi-Fi Data Rates
1.6 4G/5G Cellular
1.6.1 First Generation
1.6.2 Second Generation
1.6.3 Third Generation
1.6.4 Fourth Generation
1.6.5 Fifth Generation
1.7 Cloud Computing
1.7.1 Cloud Computing Concepts
1.7.2 The Benefits of Cloud Computing
1.7.3 Cloud Networking
1.7.4 Cloud Storage
1.8 Internet of Things
1.8.1 Things on the Internet of Things
1.8.2 Evolution
1.8.3 Layers of the Internet of Things
1.9 Network Convergence
1.10 Unified Communications
1.11 Summary
1.12 Unit End Question
1.13 References
1.1 OBJECTIVES
Figure 1.0 The Modern Networking Ecosystem
of an app store has become available for fixed and portable platforms as
well.
Data center networking: Both large enterprise data centers and cloud
provider data centers consist of very large numbers of interconnected
servers. Typically, as much as 80 percent of the data traffic is within
the data center network, and only 20 percent relies on external
networks to reach users.
1.3.2 A Typical Network Hierarchy:
One or more access routers connect the local assets to the next
higher level of the hierarchy, the distribution network. This connection
may be via the Internet or some other public or private communications
facility. Thus, as described in the preceding subsection, these access
routers function as edge routers that forward traffic into and out of the
access network. For a large local facility, there might be additional access
routers that provide internal routing but do not function as edge routers
(not shown in Figure 1.1).
1.4 ETHERNET
Ethernet has long been used in the home to create a local network
of computers with access to the Internet via a broadband modem/router.
With the increasing availability of high-speed, low-cost Wi-Fi on
computers, tablets, smartphones, modem/routers, and other devices, home
reliance on Ethernet has declined. Nevertheless, almost all home
networking setups include some use of Ethernet.
Ethernet has also long been the dominant network technology for
wired local-area networks (LANs) in the office environment. Early on
there were some competitors, such as IBM’s Token Ring LAN and the
Fiber Distributed Data Interface (FDDI), but the simplicity, performance,
and wide availability of Ethernet hardware eventually made Ethernet the
winner. Today, as with home networks, the wired Ethernet technology
exists side by side with the wireless Wi-Fi technology. Much of the traffic
in a typical office environment now travels on Wi-Fi, particularly to
support mobile devices. Ethernet retains its popularity because it can
support many devices at high speeds, is not subject to interference, and
provides a security advantage because it is resistant to eavesdropping.
Therefore, a combination of Ethernet and Wi-Fi is the most common
architecture.
1.4.2 Standards:
Within the IEEE 802 LAN standards committee, the 802.3 group is
responsible for issuing standards for LANs that are referred to
commercially as Ethernet. Complementary to the efforts of the 802.3
committee, the industry consortium known as The Ethernet Alliance
supports and originates activities that span from incubation of new
Ethernet technologies to interoperability testing to demonstrations to
education.
10-Gbps Ethernet:
Initially, network managers used 10-Gbps Ethernet to provide
high-speed, local backbone interconnection between large-capacity
switches. As the demand for bandwidth increased, 10-Gbps Ethernet
began to be deployed throughout the entire network, to include server
farm, backbone, and campus-wide connectivity. This technology enables
ISPs and network service providers (NSPs) to create very high-speed links
at a very low cost between co-located carrier-class switches and routers.
100-Gbps Ethernet:
Figure 1.5 Configuration of massive blade server cloud site
25/50-Gbps Ethernet:
It is too early to say how these various options (25, 40, 50, 100
Gbps) will play out in the marketplace. In the intermediate term, the 100-
Gbps switch is likely to predominate at large sites, but the availability of
these slower and cheaper alternatives gives enterprises several paths for
scaling up to meet increasing demand.
400-Gbps Ethernet:
2.5/5-Gbps Ethernet:
1.5 Wi-Fi
Public Wi-Fi:
Access to the Internet via Wi-Fi has expanded dramatically in
recent years, as more and more facilities provide a Wi-Fi hotspot, which
enables any Wi-Fi device to attach. Wi-Fi hotspots are provided in coffee
shops, restaurants, train stations, airports, libraries, hotels, hospitals,
department stores, RV parks, and many other places. So many hotspots are
available that it is rare to be too far from one. There are now numerous
tablet and smartphone apps that increase their convenience.
Even very remote places will be able to support hotspots with the
development of the satellite Wi-Fi hotspot. The first company to develop
such a product is the satellite communications company Iridium. The
satellite modem will initially provide a relatively low-speed connection,
but the data rates will inevitably increase.
Enterprise Wi-Fi:
The economic benefit of Wi-Fi is most clearly seen in the
enterprise. Wi-Fi connections to the enterprise network have been offered
by many organizations of all sizes, including public and private sector. But
in recent years, the use of Wi-Fi has expanded dramatically, to the point
that now approximately half of all enterprise network traffic is via Wi-Fi
rather than the traditional Ethernet. Two trends have driven the transition
to a Wi-Fi-centered enterprise. First, the demand has increased, with more
and more employees preferring to use laptops, tablets, and smartphones to
connect to the enterprise network, rather than a desktop computer. Second,
the arrival of gigabit-speed Wi-Fi, especially the IEEE 802.11ac standard,
allows the enterprise network to support high-speed connections to many
mobile devices simultaneously.
1.5.2 Standards:
Essential to the success of Wi-Fi is interoperability. Wi-Fi-enabled
devices must be able to communicate with Wi-Fi access points, such as
the home router, the enterprise access point, and public hotspots,
regardless of the manufacturer of the device or access point. Such
interoperability is guaranteed by two organizations. First, the IEEE 802.11
wireless LAN committee develops the protocol and signaling standards for
Wi-Fi. Then, the Wi-Fi Alliance creates test suites to certify
interoperability for commercial products that conform to various IEEE
802.11 standards. The term Wi-Fi (wireless fidelity) is used for products
certified by the Alliance.
1.5.3 Wi-Fi Data Rates:
IEEE 802.11ac operates in the 5-GHz band, as do the older and
slower 802.11a and 802.11n standards. It is designed to provide a smooth
evolution from 802.11n. This new standard makes use of advanced
technologies in antenna design and signal processing to achieve much
greater data rates, at lower battery consumption, all within the same
frequency band as the older versions of Wi-Fi.
reliance on video and multimedia, and multiple broadband connections
offsite. At the same time, the use of wireless LANs has grown
dramatically in the office setting to meet needs for mobility and flexibility.
With the gigabit-range data rates available on the fixed portion of the
office LAN, gigabit Wi-Fi is needed to enable mobile users to effectively
use the office resources. IEEE 802.11ac is likely to be the preferred
gigabit Wi-Fi option for this environment.
Digital traffic channels: The most notable difference between the two
generations is that 1G systems are almost purely analog, whereas 2G
systems are digital. 1G systems are designed to support voice channels;
digital traffic is supported only using a modem that converts the digital
data into analog form. 2G systems provide digital traffic channels. These
systems readily support digital data; voice traffic is first encoded in digital
form before transmitting.
Encryption: Because all the user traffic, and the control traffic, is
digitized in 2G systems, it is a relatively simple matter to encrypt all
the traffic to prevent eavesdropping. All 2G systems provide this
capability, whereas 1G systems send user traffic in the clear, providing
no security.
4G. 4G systems provide ultra-broadband Internet access for a variety of
mobile devices including laptops, smartphones, and tablets. 4G networks
support Mobile web access and high-bandwidth applications such as high-
definition mobile TV, mobile video conferencing, and gaming services.
Figure 1.6 Cloud Computing Context
Figure 1.6 illustrates the typical cloud service context. An
enterprise maintains workstations within an enterprise LAN or set of
LANs, which are connected by a router through a network or the Internet
to the cloud service provider. The cloud service provider maintains a
massive collection of servers, which it manages with a variety of network
management, redundancy, and security tools. In the figure, the cloud
infrastructure is shown as a collection of blade servers, which is a
common architecture.
1.8.1 Things on the Internet of Things:
1.8.2 Evolution:
With reference to end systems supported, the Internet has gone through
roughly four generations of deployment culminating in IoT:
It is the fourth generation that is usually thought of as the IoT, and which
is marked by the use of billions of embedded devices.
All these layers are essential to an effective use of the IoT concept.
These compelling business benefits are motivating companies to
invest in converged network infrastructures. Businesses, however, are
keenly aware of the downside of convergence: having a single network
means a single point of failure. Given their reliance on ICT (information
and communications technology), today’s converged enterprise network
infrastructures typically include redundant components and backup
systems to increase network resiliency and lessen the severity of network
outages.
Architecture:
1.11 SUMMARY
1.12 UNIT END QUESTION
2
REQUIREMENTS AND TECHNOLOGY
Unit Structure
2.1 Objectives
2.2 Types of Network and Internet Traffic
2.2.1 Real-Time Traffic Characteristics
2.3 Demand: Big Data, Cloud Computing, and Mobile Traffic
2.3.1 Big Data
2.3.2 Cloud Computing
2.3.3 Mobile Traffic
2.4 Requirements: QoS and QoE
2.4.1 Quality of Service
2.4.2 Quality of Experience
2.5 Routing
2.5.1 Characteristics
2.5.2 Packet Forwarding
2.6 Congestion Control
2.6.1 Effects of Congestion
2.6.2 Congestion Control Techniques
2.7 SDN and NFV
2.7.1 Software-Defined Networking
2.7.2 Network Functions Virtualization
2.8 Modern Networking Elements
2.9 Summary
2.10 Review Question
2.11 References
2.1 OBJECTIVES
Discuss the traffic demands placed on contemporary networks by big
data, cloud computing, and mobile traffic.
Explain the concept of quality of service.
Explain the concept of quality of experience.
Understand the essential elements of routing.
Understand the effects of congestion and the types of techniques
used for congestion control.
Elastic Traffic:
Elastic traffic is that which can adjust, over wide ranges, to
changes in delay and throughput across an internet and still meet the needs
of its applications. This is the traditional type of traffic supported on
TCP/IP-based internets and is the type of traffic for which internets were
designed. Applications that generate such traffic typically use
Transmission Control Protocol (TCP) or User Datagram Protocol (UDP)
as a transport protocol. In the case of UDP, the application will use as
much capacity as is available up to the rate that the application generates
data. In the case of TCP, the application will use as much capacity as is
available up to the maximum rate that the end-to-end receiver can accept
data. Also, with TCP, traffic on individual connections adjusts to
congestion by reducing the rate at which data are presented to the network.
Applications that can be classified as elastic include the common
applications that operate over TCP or UDP, including file transfer (File
Transfer Protocol / Secure FTP [FTP/SFTP]), electronic mail (Simple
Mail Transport Protocol [SMTP]), remote login (Telnet, Secure Shell
[SSH]), network management (Simple Network Management Protocol
[SNMP]), and web access (Hypertext Transfer Protocol / HTTP Secure
[HTTP/HTTPS]). However, there are differences among the requirements
of these applications, including the following:
E-mail is generally insensitive to changes in delay.
When file transfer is done via user command rather than as an
automated background task, the user expects the delay to be
proportional to the file size and so is sensitive to changes in
throughput.
Thus, for large transfers, the transfer time is proportional to the size of the
file and the degree to which the source slows because of congestion.
Inelastic Traffic:
Table 2.1 Service Class Characteristics
Table 2.1 above shows the loss, delay, and jitter characteristics of
various classes of traffic, as specified in RFC 4594 (Configuration
Guidelines for DiffServ Service Classes, August 2006).
Table 2.2 QoS Requirements by Application Class
Data sets continue to grow with more and more being gathered by
remote sensors, mobile devices, cameras, microphones, radio frequency
identification (RFID) readers, and similar technologies. One study from a
few years ago estimated that 2.5 exabytes (2.5 × 10^18 bytes) of data are
created each day, and 90 percent of the data in the world was created in
the past two years. Those numbers are likely higher today.
Figure 2.3 Big Data Networking Ecosystem
2.3.2 Cloud Computing:
Compute Clouds:
Compute clouds allow access to highly scalable, inexpensive, on-demand
computing resources that run the code that they are given. Three examples
of compute clouds are
Amazon’s EC2
Google App Engine
Berkeley Open Infrastructure for Network Computing (BOINC)
Compute clouds are the most flexible in their offerings and can be
used for sundry purposes; it simply depends on the application the user
wants to access. You could close this book right now, sign up for a cloud
computing account, and get started right away. These applications are
good for any size organization, but large organizations might be at a
disadvantage because these applications do not offer the standard
management, monitoring, and governance capabilities that these
organizations are used to. Enterprises are not shut out, however. Amazon
offers enterprise-class support and there are emerging sets of cloud
offerings like Terremark’s Enterprise Cloud, which are meant for
enterprise use.
Cloud Storage:
One of the first cloud offerings was cloud storage and it remains a
popular solution. Cloud storage is a big world. There are already more
than 100 vendors offering cloud storage. This is an ideal solution if you
want to maintain files off-site. Security and cost are the top issues in this
field and vary greatly, depending on the vendor you choose. Currently,
Amazon’s S3 is the top player.
Cloud Applications:
Cloud applications differ from compute clouds in that they utilize
software applications that rely on cloud infrastructure. Cloud applications
are versions of Software as a Service (SaaS) and include such things as
web applications that are delivered to users via a browser or application
like Microsoft Online Services. These applications offload hosting and IT
management to the cloud.
Cloud applications often eliminate the need to install and run the
application on the customer’s own computer, thus alleviating the burden
of software maintenance, ongoing operation, and support. Some cloud
applications include
Peer-to-peer computing (like Skype)
Web applications (like MySpace or YouTube)
SaaS (like Google Apps)
Software plus services (like Microsoft Online Services)
Figure 2.5 shows total global monthly network data and voice traffic
from Q1 2014 to Q1 2020, along with the year-on-year percentage change
for mobile network data traffic. Mobile network data traffic depicted in
Figure 2.5 also includes traffic generated by fixed wireless access (FWA)
services and does not include DVB-H, Wi-Fi, or Mobile WiMAX. VoIP is
included.
Figure 2.6: Global mobile data traffic (exabytes per month)
Traffic growth can be very volatile between years, and can also vary
significantly between countries, depending on local market dynamics. In
the US, the traffic growth rate declined slightly during 2018 but recovered
to previously expected rates during 2019. In China, 2018 was a year of
record traffic growth. India’s traffic growth continued its upward
trajectory, and it remains the region with the highest usage per smartphone
and per month. Globally, the growth in mobile data traffic per smartphone
can be attributed to three main drivers: improved device capabilities, an
increase in data-intensive content and more affordable data plans.
smartphones and people’s changing video viewing habits have continued
to drive monthly usage growth in the region. According to GlobalData,
India Telecom Operators Country Intelligence Report (2019), only 4
percent of households have fixed broadband, making smartphones the only
way to access the internet in many cases.
For at least a decade, Quality of Service (QoS) has been one of the
dominating research topics in communication networks. Whereas the
Internet originally has been conceived as a best-effort network, the
introduction of QoS architectures like Integrated Services or Differentiated
Services was supposed to pave the way for high-quality real-time services
like Voice-over-IP or video streaming and thus to increase the
competitiveness of packet-based TCP/IP networks.
Technology-centered approach:
2.5 ROUTING
Routing and congestion control are the basic tools needed to support
network traffic and to provide QoS and QoE. Both mechanisms are
fundamental to the operation of a network and its capability to transmit
and deliver packet traffic.
2.5.1 Characteristics:
Routers forward packets from the original source to the destination.
A router is considered a Layer 3 device because its primary forwarding
decision is based on the information in the Layer 3 IP packet, specifically
the destination IP address. This is known as routing and the decision is
generally based on some performance criterion with the simplest one
being minimum-hop route through the network.
(d) Hypothetical Network Architecture
Figure 2.7: Network Architecture Example
A generalization of the minimum-hop criterion is least-cost routing.
In this case, a cost is associated with each link, and, for any pair of
attached stations, the route through the network that accumulates the least
cost is sought. Figure 2.7 illustrates a network in which the circled numbers
are nodes and the lines connecting them represent links between those nodes.
The shortest path from node 1 to node 6 is node 1 to node 3 to node 6 (1-3-6).
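To make the least-cost idea concrete, here is a small sketch of Dijkstra's least-cost path computation in Python. The link costs in the example graph are invented for illustration and are not taken from Figure 2.7.

import heapq

def least_cost_path(graph, src, dst):
    """Dijkstra's algorithm: return (path, total_cost) from src to dst."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, cost in graph[u]:
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

# Hypothetical six-node topology: node -> list of (neighbor, link cost)
graph = {
    1: [(2, 2), (3, 1)],
    2: [(1, 2), (4, 3)],
    3: [(1, 1), (4, 4), (6, 6)],
    4: [(2, 3), (3, 4), (5, 1), (6, 5)],
    5: [(4, 1), (6, 2)],
    6: [(3, 6), (4, 5), (5, 2)],
}
print(least_cost_path(graph, 1, 6))   # ([1, 3, 6], 7) with these invented costs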
Queuing Delay:
This term is most often used in reference to routers. When packets arrive at a
router, they must be processed and transmitted. A router can only process
one packet at a time. If packets arrive faster than the router can process
them (such as in a burst transmission) the router puts them into the queue
(also called the buffer) until it can get around to transmitting them. Delay
can also vary from packet to packet, so averages and statistics are usually
generated when measuring and evaluating queuing delay.
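As a rough illustration of how such averages are estimated, the following sketch computes the mean queuing delay under an M/M/1 queue model; the model choice and the arrival/service rates are assumptions made here for illustration, not something prescribed by the text.

def mm1_queuing_delay(arrival_rate_pps, service_rate_pps):
    """Average time a packet waits in the buffer (seconds), M/M/1 model."""
    if arrival_rate_pps >= service_rate_pps:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate_pps / service_rate_pps            # utilization
    return rho / (service_rate_pps - arrival_rate_pps)   # W_q = rho / (mu - lambda)

# Example: 8,000 packets/s arriving at a router that can forward 10,000 packets/s
print(mm1_queuing_delay(8000, 10000))   # 0.0004 s, i.e. 0.4 ms average queuing delay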
Packet Loss:
Packet loss is typically measured as the percentage of packets lost with
respect to packets sent. In real-time applications like streaming media or
online gaming, packet loss can affect a user's quality of experience (QoE). A
high packet loss rate means that users will experience very poor quality.
2.6.2 Congestion Control Techniques:
1. Retransmission Policy:
This policy governs how retransmission of packets is handled. If the sender
believes that a sent packet is lost or corrupted, the packet needs to be
retransmitted. Retransmission in general may increase congestion in the
network. Therefore, retransmission timers must be designed both to prevent
congestion and to optimize efficiency.
2. Window Policy:
The type of window used at the sender side may also affect congestion. In a
Go-back-N window, several packets are resent even though some of them may
have been received successfully at the receiver side. This duplication may
increase congestion in the network and make it worse. Therefore, a selective
repeat window should be adopted, as it resends only the specific packets that
may have been lost.
3. Discarding Policy:
A good discarding policy allows routers to prevent congestion by partially
discarding corrupted or less sensitive packets while still maintaining the
quality of the message. In the case of audio transmission, for example, routers
can discard less sensitive packets to prevent congestion while maintaining the
quality of the audio file.
4. Acknowledgment Policy:
Since acknowledgments are also part of the load on the network, the
acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to prevent acknowledgment-related
congestion: the receiver can acknowledge N packets at a time rather than
acknowledging each packet individually, or it can send an acknowledgment
only when it has a packet to send or when a timer expires.
5. Admission Policy:
In an admission policy, a mechanism is used to prevent congestion before it
occurs. Switches in a flow should first check the resource requirements of a
network flow before transmitting it further. If there is a chance of congestion,
or congestion already exists in the network, the router should deny
establishing a virtual network connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens
in the network.
1. Backpressure:
Backpressure is a technique in which a congested node stops receiving
packets from its upstream node. This may cause the upstream node or nodes to
become congested and, in turn, reject data from the nodes above them.
Backpressure is a node-to-node congestion control technique that propagates
in the direction opposite to the flow of data. The backpressure technique can
be applied only to virtual-circuit networks, in which each node has information
about its upstream node.
Figure 2.10 Backpressure
2. Choke Packet:
A choke packet is a packet sent by a congested node directly to the source to
inform it of the congestion and ask it to reduce the traffic. The intermediate
nodes through which the packet has travelled are not warned about congestion.
3. Implicit Signaling:
In implicit signaling, there is no communication between the
congested nodes and the source. The source guesses that there is
congestion in the network. For example, when the sender sends several
packets and there is no acknowledgment for a while, one assumption is that
the network is congested.
4. Explicit Signaling:
In explicit signaling, a node that experiences congestion can explicitly send a
signal to the source or destination to inform it about the congestion. The
difference between the choke packet and explicit signaling is that in explicit
signaling the signal is included in the packets that carry data, rather than being
sent in a separate packet as in the choke packet technique.
Forward Signaling:
A bit can be set in a packet moving in the direction of the
congestion. This bit can warn the destination that there is congestion. The
receiver in this case can use policies, such as slowing down the
acknowledgments, to alleviate the congestion.
Backward Signaling:
A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it
needs to slow down to avoid the discarding of packets.
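A toy sketch of forward and backward explicit signaling, using an in-memory packet model; the field names echo Frame Relay's FECN/BECN bits but are invented here purely for illustration and do not describe a real header layout.

def mark_congestion(packet, direction):
    """Set a congestion-notification bit on a packet passing a congested node."""
    if direction == "forward":
        packet["fecn"] = 1   # warns the destination (forward signaling)
    elif direction == "backward":
        packet["becn"] = 1   # warns the source so it can slow down (backward signaling)
    return packet

pkt = {"src": "A", "dst": "B", "fecn": 0, "becn": 0, "payload": b"data"}
mark_congestion(pkt, "forward")    # destination may slow its acknowledgments
mark_congestion(pkt, "backward")   # source should reduce its sending rate
print(pkt)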
A. Binary:
A bit is set in a data packet as it is forwarded by the congested
node. When a source receives a binary indication of congestion on a
logical connection, it may reduce its traffic flow.
B. Credit based:
These schemes are based on providing an explicit credit to a
source over a logical connection. The credit indicates how many octets or
how many packets the source may transmit. When the credit is exhausted,
the source must await additional credit before sending additional data.
Credit-based schemes are common for end-to-end flow control, in which a
destination system uses credit to prevent the source from overflowing the
destination buffers, but credit-based schemes have also been considered
for congestion control. Credit-based schemes are defined in Frame Relay
and ATM networks.
C. Rate based:
These schemes are based on providing an explicit data rate limit to
the source over a logical connection. The source may transmit data at a
rate up to the set limit. To control congestion, any node along the path of
the connection can reduce the data rate limit in a control message to the
source.
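A minimal sketch of the credit-based idea described above (the class and method names are invented for illustration and do not correspond to any specific protocol's credit mechanism):

class CreditedSender:
    """Source that may only transmit while it holds credit from the receiver/network."""
    def __init__(self):
        self.credits = 0

    def grant(self, n):
        """Credit message arriving over the logical connection."""
        self.credits += n

    def send(self, packets):
        sent = []
        for p in packets:
            if self.credits == 0:     # credit exhausted: must wait for more credit
                break
            self.credits -= 1
            sent.append(p)
        return sent

s = CreditedSender()
s.grant(3)
print(s.send(["p1", "p2", "p3", "p4", "p5"]))   # only the first 3 packets go out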
Agile
Abstracting control from forwarding lets administrators dynamically
adjust network-wide traffic flow to meet changing needs.
Centrally managed
Network intelligence is (logically) centralized in software based SDN
controllers that maintain a global view of the network, which appears
to applications and policy engines as a single, logical switch.
Programmatically configured
SDN lets network managers configure, manage, secure, and optimize
network resources very quickly via dynamic, automated SDN
programs, which they can write themselves because the programs do
not depend on proprietary software.
SDN Functionality:
NFV reduces the need for dedicated hardware to deploy and manage
networks by offloading network functions into software that can run on
industry-standard hardware and can be managed from anywhere within the
operator’s network.
If both SDN and NFV are implemented for a network, the following
relationships hold:
Network data plane functionality is implemented on VMs.
The control plane functionality may be implemented on a dedicated
SDN platform or on an SDN VM.
In either case, the SDN controller interacts with the data plane
functions running on VMs.
QoS measures are commonly used to specify the service required
by various network customers or users and to dictate the traffic
management policies used on the network. The common case, until
recently, was that QoS was implemented on networks that used neither NFV
nor SDN. In this case, routing and traffic control policies must be
configured directly on network devices using a variety of automated and
manual techniques. If NFV but not SDN is implemented, the QoS settings
are communicated to the VMs. With SDN, regardless of whether NFV is
used, it is the SDN controller that is responsible for enforcing QoS
parameters for the various network users. If QoE considerations come into
play, these are used to adjust QoS parameters to satisfy the users’ QoE
requirements.
Elastic traffic is that which can adjust, over wide ranges, to changes in
delay and throughput across an internet and still meet the needs of its
applications
Inelastic traffic does not easily adapt, if at all, to changes in delay and
throughput across an internet
big data refers to everything that enables an organization to create,
manipulate, and manage very large data sets (measured in terabytes,
petabytes, exabytes, and so on) and the facilities in which these are
stored.
Traditional business data storage and management technologies
include relational database management systems (RDBMS), network-
attached storage (NAS), storage-area networks (SANs), data
warehouses (DWs), and business intelligence (BI) analytics.
A cloud-based network is an enterprise network that can be extended
to the cloud
The technology-centered approach mainly emphasizes the concept of
QoS and has its strongest reference from the ITU (International
Telecommunications Union).
Routing and congestion control are the basic tools needed to support network
traffic and to provide QoS and QoE.
Packet loss occurs when one or more packets of data travelling across
a computer network fail to reach their destination
Congestion control techniques can be broadly classified into two
categories: open loop congestion control and closed loop congestion
control
Software-defined networks provide an enhanced level of flexibility
and customizability to meet the needs of newer networking and IT
trends such as cloud, mobility, social networking, and video.
Network Functions Virtualization (NFV) decouples network functions,
such as routing, firewalling, intrusion detection, and Network Address
Translation from proprietary hardware platforms and implements these
functions in software.
2.11 REFERENCES
https://www.etsi.org/deliver/etsi_tr/102600_102699/102643/01.00.01_60/tr_102643v010001p.pdf
9. Technical Article on Queuing delay, published by hill associates,
archive available online at
https://web.archive.org/web/20150904041151/http://www.hill2dot0.
com/wiki/index.php?title=Queuing_delay
10. Technical Article on Queuing delay, published by Wikipedia
available online at https://en.wikipedia.org/wiki/Queuing_delay
11. Technical Article on Packet Loss, published by Wikipedia available
online at https://en.wikipedia.org/wiki/Packet_loss
12. Chapter on Building Blocks of TCP, published by O’Reilly at
https://www.oreilly.com/library/view/high-performance-
browser/9781449344757/ch02.html
13. Computer Network: Lecture Notes, prepared by Mr. Daya Ram
Budhathoki, available online at
https://dayaramb.files.wordpress.com/2011/03/computer-network-
notes-pu.pdf
14. SDN Architecture https://opennetworking.org/sdn-definition/
15. Network Functions Virtualizations approach available online at:
https://www.blueplanet.com/resources/What-is-NFV-prx.html
*****
UNIT II
3
SDN: BACKGROUND AND MOTIVATION
AND SDN DATA PLANE AND OPENFLOW
Unit Structure
3.1 Evolving Network Requirements
3.1.1 Demand Is Increasing
3.1.2 Supply Is Increasing
3.1.3 Traffic Patterns Are More Complex
3.1.4 Traditional Network Architectures are Inadequate
3.2 The SDN Approach Requirements
3.2.1 SDN Architecture
3.2.2 Characteristics of Software-Defined Networking
3.3 SDN- and NFV-Related Standards
3.3.1 Standards-Developing Organizations
3.3.2 Industry Consortia
3.3.3 Open Development Initiatives
3.4 SDN Data Plane and OpenFlow
3.4.1 SDN Data Plane
3.4.2 Data Plane Functions
3.4.3 Data Plane Protocols
3.5 OpenFlow Logical Network Device
3.5.1 Flow Table Structure
3.5.2 Flow Table Pipeline
3.5.3 The Use of Multiple Tables
3.5.6 Group Table
3.6 OpenFlow Protocol
3.7 Unit End Question
3.8 References
3.0 OBJECTIVES
This chapter begins the review of SDN, giving some context for and
motivation behind the SDN approach.
The increase in the capacity of network transmission technologies has to be
matched by increased performance in network devices such as LAN switches,
routers, firewalls, intrusion detection/intrusion prevention systems (IDS/IPS),
and network monitoring and management systems. Year after year such
machines become larger, with faster and bigger memories that allow more
buffer space, faster buffer access, and faster processors.
The now traditional application model and the virtualization of database
servers have dramatically increased the number of hosts needing high-volume
network access, and have resulted in frequent changes in the physical location
of server resources.
Nevertheless, this distributed, autonomous methodology evolved when
networks consisted largely of fixed-location, static end systems. Based on
these characteristics, the Open Networking Foundation (ONF) cites the
following four constraints of traditional network design:
Inconsistent policies: To implement a network-wide policy, IT may have to
configure thousands of devices and mechanisms. For example, every time a
new virtual machine is brought up, it can take hours or even days for IT to
reconfigure ACLs across the entire network. The complexity of existing
networks makes it very difficult for IT to apply a consistent set of access,
security, QoS, and other policies to increasingly mobile users, which leaves
the enterprise vulnerable to security breaches, noncompliance with
regulations, and other negative consequences.
With this mismatch between market requirements and network capabilities,
the industry has reached a tipping point. In response, the industry has designed
and developed the Software-Defined Networking (SDN) architecture.
FIGURE 3.1 a) Traditional networks b) SDN approach is
implemented
Above Image from the reference book “Foundations of Modern
Networking: SDN, NFV, QoE, IoT, and Cloud by William Stallings”
Agility: Dynamically adjusting diverse network traffic to satisfy evolving
network requirements is simple.
Control Layer: This is the middle layer of the SDN architecture and consists
of the SDN controller, which serves as the network's brain and provides a
broad overview of the network; it constitutes the control plane. SDN
controllers are typically deployed on a server or a virtual server. OpenFlow or
some other open API is used to manage the switches in the data plane. The
controllers use information about capacity and demand obtained from the
networking equipment through which the traffic flows. SDN controllers also
expose northbound APIs, which allow developers and network administrators
to deploy a wide range of off-the-shelf and custom network applications,
many of which were not feasible before SDN's introduction. As yet there is
no standardized northbound API nor a consensus on an open northbound API.
Several vendors offer a REpresentational State Transfer (REST) API to
provide a programmable interface to their SDN controller.
The interfaces used to communicate with the control layer are southbound
APIs. The OpenFlow protocol is the most common protocol used to provide
the southbound API.
TABLE 3.1 SDN and NFV Open Standards Activities
International Telecommunication Union Telecommunication Standardization
Sector (ITU-T): standardization in telecommunications. SDN/NFV focus:
SDN architecture.
Internet Research Task Force (IRTF) Software Defined Networking Research
Group (SDNRG): research group within the IRTF; produces SDN-related
RFCs. SDN/NFV focus: SDN architecture.
Broadband Forum (BBF): a non-profit industry consortium dedicated to
developing broadband network specifications. SDN/NFV focus: SDN
requirements and framework for broadband telecom networks.
Metro Ethernet Forum (MEF): industry consortium supporting Ethernet use
for metropolitan and wide-area applications. SDN/NFV focus: defining APIs
for SDN and NFV service orchestration.
IEEE 802: an IEEE committee responsible for the development of LAN
standards. SDN/NFV focus: standardizing SDN capabilities in communication
networks.
Optical Internetworking Forum (OIF): industry consortium promoting the
development and deployment of interoperable networking products and
services. SDN/NFV focus: transport network requirements of SDN
architectures.
Open Data Center Alliance (ODCA): consortium of leading IT companies
building interoperable cloud solutions and services. SDN/NFV focus: SDN
usage models.
Alliance for Telecommunications Industry Solutions (ATIS): a standards
organization that establishes unified communications (UC) standards.
SDN/NFV focus: SDN/NFV programmable infrastructure, technical
opportunities and challenges.
Open Platform for NFV (OPNFV): created a reference platform through
system-level integration, deployment, and testing to accelerate the
transformation of enterprise and service provider networks. SDN/NFV focus:
NFV infrastructure.
3.3.1 Standards-Developing Organizations:
The Internet Community, ITU-T and ETSI make significant
contributions to SDN and NFV standardization.
Internet Society:
In the past few years, software-defined networking (SDN) and network
functions virtualization (NFV) have been the next major trends in networking.
As a consequence, networking standards development organizations (SDOs)
such as the ITU, IETF, and TMF have jumped on the bandwagon to address
SDN and NFV. The two groups within the Internet Society (ISOC) most
involved are the IETF and the IRTF. ISOC is an international non-profit
organization that manages standards, education, and policy development for
the Internet. Founded in 1992, the aim of ISOC is to promote open Internet
development for organizations and individuals all over the world through the
enhancement and promotion of Internet usage.
The Internet Engineering Task Force (IETF) has SDN working groups
in the following areas:
addressing technological, operational, and tariff questions and issuing
Recommendations with a view to standardizing telecommunications globally.
Recommendation ITU-T Y.3300 (Framework of Software-Defined
Networking, June 2014) defines the framework of software-defined
networking (SDN), describing SDN fundamentals. The Recommendation
discusses SDN concepts, objectives, high-level capabilities, requirements, and
the high-level SDN architecture.
OpenStack:
Above Image from the reference book “Foundations of Modern
Networking: SDN, NFV, QoE, IoT, and Cloud by William Stallings”
computed and established in accordance with the rules defined in the SDN
applications
In Figure 3.5, the network device is shown with three I/O ports: one providing
control communication with an SDN controller, and two for the input and
output of data packets. The device could have multiple ports for
communication with multiple SDN controllers, and more than two I/O ports
for packet flows into and out of the device.
This means that a common logical architecture is required in all switches,
routers, and other network devices that are to be managed by an SDN
controller. This logical architecture may be implemented in different ways by
different vendors, as long as the devices present a uniform logical switching
function to the SDN controller.
FIGURE 3.7 OpenFlow Switch
Above Image from the reference book “Foundations of Modern
Networking: SDN, NFV, QoE, IoT, and Cloud by William Stallings”
Physical ports:
OpenFlow physical ports correspond to a hardware interface of the switch.
For example, on an Ethernet switch, physical ports map one-to-one to the
Ethernet interfaces.
Logical port:
OpenFlow logical ports are switch ports that do not correspond directly to a
hardware interface of the switch. Logical ports are higher-level abstractions
that may be defined in the switch using non-OpenFlow methods (e.g., link
aggregation groups, tunnels, loopback interfaces).
Reserved Ports:
Reserved ports are defined by the OpenFlow specification. They specify
generic forwarding actions, such as sending to the controller, flooding, or
forwarding using non-OpenFlow methods, such as "normal" switch
processing.
A series of tables are used within each switch to control packet flows via
the switch.
The OpenFlow specification defines three types of tables in the logical switch
architecture. A flow table matches incoming packets to a particular flow and
specifies the functions that are to be performed on the packets. There may be
multiple flow tables that operate in a pipeline fashion. A flow table may direct
a flow to a group table, which may trigger a variety of actions that affect one
or more flows. A meter table can trigger a variety of performance-related
actions on a flow. Using the OpenFlow switch protocol, the controller can
add, update, and delete flow entries in tables, both reactively (in response to
packets) and proactively.
Match fields: Used to match packets. These consist of the ingress port, packet
headers, and, optionally, metadata specified by a previous table.
Priority: Matching precedence of the flow entry.
Counters: Updated when packets are matched.
Instructions: Used to modify the action set or pipeline processing.
Timeouts: Maximum amount of time or idle time before the flow is expired by
the switch.
Cookie: Opaque data value chosen by the controller. May be used by the
controller to filter flow statistics, flow modification, and flow deletion. Not
used when processing packets.
Flags: Flags alter the way flow entries are managed; for example, the flag
OFPFF_SEND_FLOW_REM triggers a flow-removed message when the flow
entry is removed.
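As an illustration of how these fields appear in practice, the following sketch installs a single flow entry from a controller using the Ryu framework with OpenFlow 1.3; the match values, priority, timeouts, and cookie are arbitrary example values, not prescribed by the text.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FlowEntryExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match fields: IPv4 packets destined to 10.0.0.0/8 (illustrative values)
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=('10.0.0.0', '255.0.0.0'))
        # Instruction: apply an output action sending matching packets to port 2
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        # Priority, timeouts, and cookie correspond to the flow entry fields above
        mod = parser.OFPFlowMod(datapath=dp, priority=100, match=match,
                                instructions=inst, idle_timeout=30,
                                hard_timeout=300, cookie=0x1)
        dp.send_msg(mod)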
Counter | Usage | Bit Length
Reference count (active entries) | Per flow table | 32
Duration (seconds) | Per flow entry | 32
Received packets | Per port | 64
Transmitted packets | Per port | 64
Duration (seconds) | Per port | 32
Transmit packets | Per queue | 64
Duration (seconds) | Per queue | 32
Duration (seconds) | Per group | 32
Duration (seconds) | Per meter | 32
TABLE 3.2 Required OpenFlow Counters
Ingress port: The identifier of the port on which the packet arrived at this
switch. It may be a physical port or a switch-defined virtual port. Required in
ingress tables.
Egress port: The identifier of the egress port from action set. Required in
egress tables.
IP: Version 4 or 6
IPv4 or IPv6 source address, and destination address: Each entry can
be an exact address, a bitmasked value, a subnet mask value, or a wildcard
value.
TCP source and destination ports: Exact match or wildcard value.
Every OpenFlow-compliant switch must support the preceding match fields.
Support for the following match fields is optional:
VLAN ID and VLAN user priority: Fields in the IEEE 802.1Q virtual
LAN header.
SCTP source and destination ports: Exact match or wildcard value for the
Stream Control Transmission Protocol.
MPLS label value, traffic class, and BoS: Fields in the top label of an
MPLS label stack.
TCP flags: Flag bits in the TCP header. May be used to detect start and
end of TCP connections.
Update action set: Merge specified actions into the current action set for this
packet, or clear all the actions in the action set.
Update metadata: A metadata value can be associated with a packet. It is used
to carry information from one table to the next.
Ingress processing:
Ingress processing always takes place, beginning with table 0, and uses the
identity of the input port. Table 0 may be the only table, in which case ingress
processing is simplified to the processing performed on that single table, and
no egress processing is carried out.
Egress processing:
Egress processing is the processing that happens after the output port has
been determined. It occurs in the context of the output port and is an optional
stage. If it occurs, one or more tables may be used. The separation of the two
stages is indicated by the numerical identifier of the first egress table: all
tables with a number lower than the first egress table must be used as ingress
tables, and no table with a number higher than or equal to the first egress table
can be used as an ingress table.
Pipeline processing always starts with ingress processing at the first flow
table; the packet must first be matched against the flow entries of flow table 0.
Other ingress flow tables may be used, depending on the outcome of the
match in the first table. If the result of ingress processing is to forward the
packet to an output port, the OpenFlow switch may perform egress processing
in the context of that output port.
FIGURE 3.9 Simplified Flowchart Detailing Packet Flow Through an
OpenFlow Switch
Above Image from the reference book “Foundations of Modern
Networking: SDN, NFV, QoE, IoT, and Cloud by William Stallings”
2. Find the highest-priority matching flow entry. If there is no match on any
entry and there is no table-miss entry, the packet is discarded. If there is a
match only on a table-miss entry, that entry specifies one of three actions:
a. Send the packet to the controller. This action allows the controller to
define a new flow for the packet or decide to drop it.
b. Direct the packet to another flow table farther down the pipeline.
c. Drop the packet.
3. If one or more entries other than the table-miss entry are matched, the
match is defined as the highest-priority matching entry. The following actions
are then taken:
a. Update any counters associated with this entry.
b. Execute any instructions associated with this entry. These instructions
may include updating the action set, updating the metadata value, and
performing actions.
c. The packet is then forwarded to a flow table farther down the pipeline,
to the group table, to the meter table, or to an output port.
For the final table in the pipeline, forwarding to another flow table is not an
option. If and when a packet is finally sent to an output port, the accumulated
action set is executed and then the packet is queued for output. The complete
ingress pipeline process is shown in Figure 3.10.
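A short Ryu sketch of the table-miss behavior described above: a priority-0 entry with an empty match that sends unmatched packets to the controller. This is a common convention, not something mandated by the specification, and the helper name is invented for illustration.

def install_table_miss(dp):
    """Install a table-miss entry (priority 0, matches everything) on a datapath."""
    ofp, parser = dp.ofproto, dp.ofproto_parser
    match = parser.OFPMatch()                                   # matches every packet
    actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                      ofp.OFPCML_NO_BUFFER)]    # send full packet to controller
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                  match=match, instructions=inst))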
Group Identifier | Group Type | Counters | Action Buckets
Table 2: Main components of a group entry in the group table.
Each group entry (see Table 2) consists of the following components:
Group Identifier: a 32 bit unsigned integer uniquely identifying the
group
Group Type: to determine group semantics
Counters: updated when packets are processed by a group
Action Buckets: an ordered list of action buckets, where each action
bucket contains a set of actions to execute and associated parameters
A group can be one of four types, as seen in Figure 3.13: all, select, fast
failover, and indirect.
An ALL group takes each packet it receives as input and replicates it so that it
can be processed independently by each bucket in the bucket list; in this way,
an ALL group replicates and then operates on individual copies of the packet,
as defined by the actions in each bucket. Each bucket can have different and
distinct actions, allowing different operations on different copies of the
packet. This group type is used for multicast or broadcast forwarding.
The FAST-FAILOVER group quickly selects the next bucket in the bucket list
whose watch port or watch group is up.
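A hedged sketch of installing an ALL group entry with Ryu and OpenFlow 1.3, replicating each packet to two output ports; the group ID, port numbers, and helper name are illustrative assumptions.

def install_all_group(dp, group_id=1):
    """Add an ALL group: every bucket gets its own copy of each packet."""
    ofp, parser = dp.ofproto, dp.ofproto_parser
    buckets = [
        parser.OFPBucket(actions=[parser.OFPActionOutput(2)]),   # copy 1 out port 2
        parser.OFPBucket(actions=[parser.OFPActionOutput(3)]),   # copy 2 out port 3
    ]
    req = parser.OFPGroupMod(datapath=dp, command=ofp.OFPGC_ADD,
                             type_=ofp.OFPGT_ALL, group_id=group_id,
                             buckets=buckets)
    dp.send_msg(req)

A flow entry can then direct matching traffic to this group by including parser.OFPActionGroup(group_id) in its action list.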
Symmetric: These messages are sent by either the switch or the controller
without having been solicited by the other side. They are simple but helpful.
Hello messages are typically exchanged between the controller and the switch
when the connection is first established. Echo request and reply messages can
be used by either the switch or the controller to measure the latency and
bandwidth of a controller-switch connection, or simply to verify that the other
device is up and running. The Experimenter message is used to stage features
intended for future OpenFlow versions.
Controller-to-Switch Messages
Features: Request the capabilities of a switch. The switch responds with a
features reply that specifies its capabilities.
Configuration: Set and query configuration parameters. The switch responds
with parameter settings.
Modify-State: Add, delete, and modify flow/group entries and set switch port
properties.
Read-State: Collect information from the switch, such as current
configuration, statistics, and capabilities.
Packet-out: Direct a packet to a specified port on the switch.
Barrier: Barrier request/reply messages are used by the controller to ensure
that message dependencies have been met or to receive notifications for
completed operations.
Role-Request: Set or query the role of the OpenFlow channel. Useful when
the switch connects to multiple controllers.
Asynchronous-Configuration: Set a filter on asynchronous messages, or query
that filter. Useful when the switch connects to multiple controllers.

Asynchronous Messages
Packet-in: Transfer a packet to the controller.
Flow-Removed: Inform the controller about the removal of a flow entry from
a flow table.
Port-Status: Inform the controller of a change on a port.
Role-Status: Inform the controller of a change of its role for this switch from
master controller to slave controller.
Controller-Status: Inform the controller when the status of an OpenFlow
channel changes. This can assist failover processing if controllers lose the
ability to communicate among themselves.
Flow-Monitor: Inform the controller of a change in a flow table. Allows a
controller to monitor in real time the changes made to any subset of the flow
table by other controllers.

Symmetric Messages
Hello: Exchanged between the switch and controller upon connection startup.
Echo: Echo request/reply messages can be sent from either the switch or the
controller, and must return an echo reply.
Error: Used by the switch or the controller to notify the other side of the
connection about problems.
Experimenter: For additional functionality.
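To illustrate the asynchronous Packet-in and the controller-to-switch Packet-out messages listed above, here is a hedged Ryu sketch that logs each packet-in and floods the packet back out; this naive hub behavior is purely for illustration.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PacketInLogger(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']
        self.logger.info("packet-in on switch %s, port %s", dp.id, in_port)
        # Packet-out: flood the packet on all ports except the ingress port
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=in_port,
                                  actions=[parser.OFPActionOutput(ofp.OFPP_FLOOD)],
                                  data=data)
        dp.send_msg(out)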
4
SDN CONTROL PLANE AND SDN
APPLICATION PLANE
Unit structure
4.0 Objectives
4.1 SDN Control Plane Architecture
4.1.1 Control Plane Functions
4.1.2 Southbound Interface
4.1.3 Northbound Interface
4.1.4 Routing
4.2 ITU-T Model
4.3 OpenDaylight
4.3.1 OpenDaylight Architecture
4.3.2 OpenDaylight Helium
4.4 REST
4.4.1 REST Constraints
4.4.2 Example REST API
4.5 Cooperation and Coordination Among Controllers
4.5.1 Centralized Versus Distributed Controllers
4.5.2 High-Availability Clusters
4.5.3 Federated SDN Networks
4.5.4 Border Gateway Protocol
4.5.5 Routing and QoS Between Domains
4.5.6 Using BGP for QoS Management
4.5.7 SDN Control Plane
4.5.8 IETF SDNi
4.5.9 OpenDaylight SDNi
4.6 SDN Application Plane Architecture
4.6.1 Northbound Interface
4.6.2 Network Services Abstraction Layer
4.6.3 Network Applications
4.6.4 User Interface
4.7 Network Services Abstraction Layer
4.7.1 Abstractions in SDN
4.7.2 Frenetic
4.8 Traffic Engineering
4.8.1 PolicyCop
4.9 Measurement and Monitoring
4.10 Security
4.10.1 OpenDaylight DDoS Application
4.11 Data Center Networking
4.11.1 Big Data over SDN
4.11.2 Cloud Networking over SDN
4.12 Mobility and Wireless
4.13 Information-Centric Networking
4.13.1 CCNx
4.13.2 Use of an Abstraction Layer
4.14 Unit End Question
4.15 Reference
4.0 OBJECTIVES
After you have studied this chapter, you should be able to:
List and explain the key functions of the SDN control plane.
Discuss the routing function in the SDN controller.
Understand the ITU-T Y.3300 layered SDN model.
Present an overview of OpenDaylight.
The SDN control layer maps application layer service requests into specific
commands and directives to data plane switches, and supplies applications
with information about data plane topology and activity. The control layer is
implemented as a server or as a set of cooperating servers known as SDN
controllers.
FIGURE 4.2 SDN Control Plane Functions and Interfaces
Above Image from the reference book “Foundations of Modern
Networking: SDN, NFV, QoE, IoT, and Cloud by William Stallings”
Notification Manager:
Receives, processes, and forwards events to applications (e.g., alarm
notifications, security alarms, state changes).
Security Mechanism:
Critical components for providing basic isolation and security enforcement
between services and applications.
Topology Manager:
Builds and maintains switch interconnection topology information.
A variety of projects, both open source and commercial, have
contributed to the development of SDN controllers. The list below
identifies some significant ones:
OpenDaylight:
The largest open source SDN controller, OpenDaylight (ODL), helps
organizations handle network transformation. ODL is a modular, open
platform for customizing and automating networks of any size and scale. The
ODL project grew out of the SDN movement, with a clear focus on network
programmability. It was designed from the outset as a foundation for
commercial solutions that address a variety of use cases in existing network
environments. The ODL project was initiated at the beginning of 2013,
originally led by IBM and Cisco, and is now hosted by the Linux Foundation.
OpenDaylight can be deployed as a single centralized controller, but it also
allows controllers to be distributed, where one or more instances run on one
or more clustered servers in the network.
POX:
POX is an OpenFlow/software-defined networking (SDN) controller written
in Python. POX provides a framework for communicating with SDN switches
using either the OpenFlow or the OVSDB protocol. Developers can use POX
to build an SDN controller using the Python programming language. It is a
popular tool for teaching and for research on software-defined networks and
network applications programming.
Beacon:
Beacon is a fast, cross-platform, modular OpenFlow controller written in Java
that supports both event-based and threaded operation. It runs on a variety of
platforms, from high-end multi-core Linux servers to Android phones. Its use
of Java and Eclipse simplifies the development and debugging of
applications.
Floodlight:
The leading open source OpenFlow Controller is Floodlight. It is
sponsored by a developer community with a number of Big Switch
Networks engineers. The Floodlight Open SDN controller is a Java-based,
Apache-licensed, OpenFlow Controller for enterprise-class applications
that is designed to work with traditional JDK- and ANT-tools. There is
both a web-based GUI and a Java-based GUI, and most of its functionality is
exposed through a REST API.
Ryu:
Ryu is an open, software-defined networking (SDN) controller designed to
increase network agility by making it easy to manage and adapt how traffic is
handled. The Ryu controller is supported and used in NTT labs. It is open
source and fully implemented in Python.
Onix:
Onix is a distributed control platform that collects information from switches
and distributes it appropriately across multiple servers, and it offers a broad
range of management applications. It was developed jointly by VMware,
Google, and NTT. Onix is a commercially available SDN controller.
FIGURE 4.3 SDN Controller Interfaces
Base controller function APIs: These APIs expose the basic functions of the
controller and are used by developers to build network services.
Network service APIs: These APIs expose network services to the north.
4.1.4 Routing:
An SDN network requires a routing function, just as any network or internet
does. The routing function comprises a protocol for collecting information
about network topology and traffic conditions, and an algorithm for
computing routes through the network. Two categories of routing protocols
exist: interior router protocols (IRPs), which operate within an autonomous
system (AS), and exterior router protocols (ERPs), which operate between
autonomous systems.
SDN needs for efficient network implementations include: isolation of SDN
control from network resources; preparation of network resources;
integration of network resources by means of standard information and data
models; and support for orchestration of network resources, SDN
applications, and operations.
Application support:
The application support function provides an application control interface
through which SDN applications can access network information and
program application-specific behavior.
Orchestration:
The orchestration function provides automated network resource
controls and management, as well as managing network resource access
requests on the basis of a multi-tier management policy or application
layer.
The orchestration function provides network infrastructure control and
management, such as management of physical and virtual network
topologies, network elements and traffic. It integrates with multi-layer
management features to manage SDN applications such as user
management, service advancement and distribution.
Abstraction:
The Abstraction Function interacts with network resources and gives
an overview of network resources, including network capacity and
features that support management and orchestration of physical and virtual
network resources. This abstraction relies on standard information and data
models and is independent of the underlying transport infrastructure.
Resource layer:
In the resource layer, network elements transport and process data packets
based on the decisions made by the SDN control layer, which are distributed
to the resource layer through a resource-control interface.
Control support:
The control support function interacts with the SDN control layer and
supports its programmability via resource-control interfaces.
4.3 OPENDAYLIGHT
APIs:
The operation of the SAL is shown in Figure 4.9. The OSGi framework
provides for dynamically linking plug-ins for the available southbound
protocols. The capabilities of these protocols are abstracted into a collection
of features that can be invoked by control plane services through a service
manager in the SAL. The service manager maintains a registry that maps
service requests to feature requests. Based on the service request, the SAL
maps to the necessary plug-in and thus uses the most suitable southbound
protocol to interact with a given network device.
Above Image from the reference book “Foundations of Modern
Networking: SDN, NFV, QoE, IoT, and Cloud by William Stallings”
OpenDaylight architecture:
The components of OpenDaylight Helium include a fully pluggable controller, interfaces, plug-ins, and protocol applications. There are three main blocks in the Helium controller:
The OpenDaylight controller platform
Northbound applications and services
Southbound plug-ins and protocols
The controller platform has a modular architecture (as seen in Figure 4.11) with "northbound" and "southbound" interfaces. The northbound interface includes controller services and a collection of standard REST APIs that applications can use to manage network infrastructure configuration. The northbound interface is accessed through the authentication and authorization models shown as the top layer of the OpenDaylight architecture in Figure 4.10.
Further, the Helium release of the OpenDaylight controller allows the statistical query interval to be configured.
Host Tracker: Stores end-host information (data layer address, the switch and port to which the host is attached, network address) and offers APIs to retrieve end-node information. Host Tracker can operate dynamically or statically. In dynamic mode, the Host Tracker uses ARP to keep its database current as hosts change state. In static mode, the Host Tracker database is populated manually through the northbound APIs.
L2 Switch: Offers Layer 2 switching functionality, including multiple reusable, standardized services such as address tracking, basic spanning tree protocol, modular packet handling, and optimal route calculation.
networks and ensure interoperability with other technologies and between vendors through these SB protocols. Below are some of the supported SB plug-ins (that is, SB protocols):
OpenDaylight User eXperience (DLUX) is the new web-based user interface (UI) introduced with the second ODL release, Helium. DLUX is an interactive, more dynamic UI built as a lightweight front end with AngularJS (a client-side JavaScript framework). It consumes only the northbound (REST) APIs exposed by ODL.
4.4 REST
Client-Server
Layered System
Code on Demand
Uniform Interface: This is the main constraint that distinguishes a REST API from a non-REST API. It requires that a uniform way of interacting with a given server be defined, regardless of the type of device or application (website, mobile app) making the request.
Stateless: The required state is contained within the request itself, and nothing relevant to the session is recorded by the server. In REST, the client must provide all the information the server needs to complete the request, whether as query parameters, headers, or parts of the URI. Statelessness improves availability, because the server has no session state to maintain, update, or communicate. The downside for the client is that it may have to send repetitive information with every request, which consumes additional bandwidth.
Code on Demand: This constraint is optional. It means that servers can extend client functionality by transferring executable code, for example compiled components such as Java applets or client-side scripts such as JavaScript.
The localhost portion of this command indicates that the application is running on the same server as the Ryu network operating system. When the application is remote, the URI is a URL that provides remote access via HTTP as a web application. The switch manager responds to this command with a message whose body contains the dpid followed by a set of value blocks, one per group table entry in that switch. The values are as follows:
type: all, select, fast failover, or indirect
group_id: Group table entry identifier.
buckets: A structured field composed of the following subfields:
Weight: The bucket's relative weight (only for the select type).
watch_port: Port whose state affects whether this bucket is live (only required for fast failover groups).
watch_group: Group whose state affects whether this bucket is live (only required for fast failover groups).
actions: A list of actions, possibly null.
The buckets field is repeated in the message body once for each group table entry.
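The following sketch shows how an application might issue this kind of request from Python, assuming a Ryu instance running the standard ofctl_rest application on its default port (8080) and a switch with datapath ID 1; the endpoint path and reply layout are the usual defaults but should be checked against the Ryu documentation for the release in use.

import json
import urllib.request

dpid = 1                                                     # illustrative datapath ID
url = "http://localhost:8080/stats/groupdesc/%d" % dpid      # assumed ofctl_rest endpoint

with urllib.request.urlopen(url) as response:
    reply = json.load(response)

# The body is keyed by dpid; each value block carries type, group_id and buckets
# (weight, watch_port, watch_group, actions), as described above.
for block in reply[str(dpid)]:
    print(block["group_id"], block["type"], block["buckets"])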
Table 1 describes the API functions and the parameters used by the
GET message type for the processing of switch statistics. There are many
functions using the POST message type, under which a set of matching
parameters is included in the request message body.
Advertised features, Supported features, Peer features advertised by peer, Current port bitrate, Max port bitrate
Get queues stats: Datapath ID, Port number, Queue ID, Number of transmitted bytes, Number of transmitted packets, Number of packets dropped due to overrun, Time queue has been alive in seconds, Time queue has been alive in nanoseconds beyond duration_sec.
Get groups stats: Datapath ID, Length of this entry, Group ID, Number of flows or groups that directly forward to this group, Number of packets processed by group, Number of bytes processed by group, Time group has been alive in seconds, Time group has been alive in nanoseconds beyond duration_sec, bucket_stats (Number of packets processed by bucket, Number of bytes processed by bucket).
Get group description: Datapath ID, type, group_id, buckets (weight, watch_port, watch_group, actions).
Get group features: Datapath ID, types, capabilities, Maximum number of groups for each type, actions values supported.
Get meters stats: Datapath ID, Meter ID, Length in bytes, Number of flows bound to meter, Number of packets in input, Number of bytes in input, Time meter has been alive in seconds, Time meter has been alive in nanoseconds beyond duration_sec, band_stats (Number of packets in band, Number of bytes in band).
Get meter config: Datapath ID, flags, Meter ID, bands (type, rate, burst_size).
Get meter features: Datapath ID, Maximum number of meters, band_types values supported, Maximum bands per meter, Maximum color value.
TABLE 1. Ryu REST APIs for Retrieving Switch Statistics Using GET
TABLE 2. Ryu REST APIs for Updating Switch Statistics Filtered by Fields Using POST
The use of a single controller to manage all the network devices in a large enterprise network would be unmanageable or undesirable. A more likely arrangement is for the operator of a large enterprise or carrier network to partition the network into a number of nonoverlapping SDN domains, also known as SDN islands (Figure 4.12), each managed by its own SDN controller. Reasons for using SDN domains include the following:
Scalability:
The number of devices that a single SDN controller can manage is limited. A relatively large network may therefore require several SDN controllers.
Privacy:
A carrier may choose to enforce different privacy policies in different SDN domains. For example, a domain may be dedicated to a set of customers with highly customized privacy requirements, mandating that certain network data within this domain (for example, the network topology) not be disclosed to any external entity.
Incremental deployment:
A carrier's network will consist of portions of both legacy and new infrastructure. Dividing the network into multiple, independently manageable SDN domains allows flexible incremental deployment.
Centralized controllers offer high performance and are well suited to data centers, while distributed controllers handle networks that span multiple locations.
ODL Helium has HA built in and Cisco XNC and the Open Network
controller have HA functionality (up to five in a cluster).
FIGURE 4.13 Federation of SDN Controllers
Neighbor reachability: After a neighbor has been acquired, a router
inspects to ensure that the neighbor is consistently available and
functioning. This is achieved by sending an EGP Hello message to each neighbor with which a connection has been established. The neighbor
responds with a message I Heard You (IHU). The BGP Keepalive
message is somewhat similar but is used in matched pairs.
OSPF is used for internal routing within the non-SDN AS. The SDN domains do not run OSPF; instead, routing information is conveyed from each data plane switch to the centralized controller using the southbound protocol (in this case, OpenFlow). BGP is used to exchange information between each SDN domain and the AS, for example:
cross-domain traffic cannot rely on a standardized set of classes, standardized markings (class encodings), or standardized forwarding behaviors. RFC 4594, however, offers a set of guidelines for these parameters. Network providers make separate and uncoordinated decisions on QoS policy. This does not rule out individual agreements that provide interconnection with strict QoS guarantees; such SLAs, however, are bilateral or multilateral and do not provide a basis for a general "better than best effort" interconnection.
4. When a TCP connection is established, the controller's BGP entity exchanges Open messages with the neighbor. Capability information is exchanged within the Open messages.
5. The exchange completes when the BGP connection is established.
6. Update messages are used to exchange network layer reachability information (NLRI), indicating which networks each entity can reach. This reachability information is used to select the most suitable data route between the SDN controllers. The NLRI is also used to update the controller's Routing Information Base (RIB), which in turn allows the controller to install corresponding flow entries on the data plane switches.
7. Update messages can also be used to exchange QoS information, such as available capacity.
8. If the BGP decision process finds more than one available path, route selection is performed. Once the route has been determined, packets can flow between the two SDN domains.
SDNi REST API: REST APIs through which the northbound plug-in (the SDNi aggregator) collects aggregated information.
Through the REST API, the SDNi aggregator gathers statistics and parameters from the basic network service functions. The focus here is on OpenDaylight's implementation of the Border Gateway Protocol (BGP).
4.6 SDN APPLICATION PLANE ARCHITECTURE
The application plane contains applications and services that monitor, assess, and control network resources and behavior. These applications communicate with the SDN control layer through application-control interfaces, so that the SDN control layer can automatically adjust the behavior and properties of network resources. SDN application programming exploits the abstracted view of network resources that the SDN control layer provides through the application-control interface, using data and information models. This chapter gives an overview of the functionality of the application plane, shown in Figure 4.18. The components in this figure are examined using a bottom-up approach, and the following sections describe particular areas of application.
network switches supporting it. The northbound interface typically offers
an abstract view of the software-controlled network resources of the SDN
control plane.
Network Applications:
communication, two interfaces are then possible. A user who is co-located with the SDN application server can use the server's keyboard and display directly.
Above Image from the reference book “Foundations of Modern
Networking: SDN, NFV, QoE, IoT, and Cloud by William Stallings”
Distribution Abstraction:
The distribution abstraction presents a global network view, so that even though there are several cooperating controllers, the network appears to be managed by a single central controller.
Specification Abstraction:
The specification abstraction offers an abstract view of the global network. This view gives the application enough detail to specify goals, such as routing or security policies, without supplying the information needed to implement those goals. The Shenker presentation summarizes the three abstractions as follows:
Forwarding interface: An abstract forwarding model that shields higher layers from the details of the forwarding hardware.
Distribution interface: A global network view that shields higher layers from the details of state dissemination and collection.
Specification interface: An abstract network view that shields the application from the details of the physical network.
be the physical network. Edge ports, which connect to hosts and to other domains, are mapped onto ports of the virtual switch. A module that learns the Media Access Control (MAC) addresses of hosts can be implemented at the application level. When a previously unknown host sends a packet, the application module associates that source address with the input port and directs future traffic destined for that host accordingly. Likewise, if a packet arrives at a virtual switch port with an unknown destination address, the module floods the packet to all the output ports. The abstraction layer translates these actions onto the whole physical network, which carries out the corresponding forwarding within the domain.
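A minimal sketch of such a MAC-learning module is shown below; the data structure and function names are illustrative and are not part of any particular controller framework.

# One learning table per virtual switch: learned MAC address -> input port.
mac_to_port = {}

def handle_packet(switch, in_port, src_mac, dst_mac):
    """Learn the sender's port, then forward to a known port or flood."""
    table = mac_to_port.setdefault(switch, {})
    table[src_mac] = in_port              # bind the source address to its input port
    if dst_mac in table:
        return [table[dst_mac]]           # destination known: single output port
    return ["flood"]                      # destination unknown: flood all output ports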
4.7.2 Frenetic:
The Frenetic programming language is an example of a network services abstraction layer. Instead of manually configuring individual network elements, Frenetic lets network operators program the network as a whole. Frenetic was designed to address the difficulty of working with OpenFlow directly, by operating at a network-wide level of abstraction rather than at the level of individual OpenFlow features.
def switch_join(s):
    pat1 = {inport: 1}
    pat2web = {inport: 2, srcport: 80}
    pat2 = {inport: 2}
    install(s, pat1, DEFAULT, [fwd(2)])
    install(s, pat2web, HIGH, [fwd(1)])
    install(s, pat2, DEFAULT, [fwd(1)])
    query_stats(s, pat2web)

def stats_in(s, xid, pat, pkts, bytes):
    print bytes
    sleep(30)
    query_stats(s, pat)
This program intertwines the logic for web-traffic monitoring with the logic for forwarding, reflecting the low level at which OpenFlow itself operates. Any enhancement or new piece of functionality would therefore have an intrusive impact on the program. With Frenetic, the two concerns can be expressed independently:
def repeater():
    rules = [Rule(inport:1, [fwd(2)]),
             Rule(inport:2, [fwd(1)])]
    register(rules)

def web_monitor():
    q = (Select(bytes) *
         Where(inport=2 & srcport=80) *
         Every(30))
    q >> Print()

def main():
    repeater()
    web_monitor()
Traffic engineering is a significant area in which SDN applications have been developed. Kreutz's SDN survey paper in the Proceedings of the IEEE (January 2015) lists the following traffic engineering functions that have been implemented as SDN applications:
On-demand virtual private networks
Load balancing
Energy-aware routing
Quality of service (QoS) for broadband access networks
Scheduling/optimization
Traffic engineering with minimal overhead
Dynamic QoS routing for multimedia apps
Fast recovery through fast-failover groups
QoS policy management framework
QoS enforcement
QoS over heterogeneous networks
Multiple packet schedulers
Queue management for QoS enforcement
Divide and spread forwarding tables
4.8.1 PolicyCop:
FIGURE 4.22 PolicyCop Architecture
A northbound RESTful interface connects these control plane modules to the application plane modules, which are organized into two components: a policy validator, which monitors the network to detect policy violations, and a policy enforcer, which adapts control plane rules according to network conditions and high-level policy requirements. All components are centered on a policy database containing the QoS policy rules defined by the network manager. The modules are as follows:
Traffic Monitor: Collects the active policies from the policy database and determines the monitoring interval, the network segments to observe, and the metrics to collect.
Policy Checker: Checks for policy violations, using the policy database and the information supplied by the Traffic Monitor.
Event Handler: Analyzes violation events and, depending on the type of event, either notifies the policy enforcer automatically or sends a request to the network manager.
Topology Manager: Maintains a view of the network topology based on device tracker data.
Resource Manager: Keeps track of the resources currently allocated, using admission control and statistics collection.
Policy Adaptation: Contains a set of actions for each type of violation.
Resource Provisioning: Allocates new resources, reallocates existing resources, or both, depending on the violation.
Figure 4.23 shows the process workflow in PolicyCop.
4.10 SECURITY
DDoS attack patterns are detected as traffic anomalies that deviate from normal baselines.
Suspicious traffic is diverted from its usual route to attack mitigation systems (AMSs) for traffic scrubbing, selective blocking of sources, and so forth. Clean traffic is re-injected from the scrubbing centers toward its original destination.
Defense4All then monitors traffic for all installed protected objects (POs) and maintains readings, rates, and averages for each of the respective network locations.
Specifically, Defense4All continuously computes an average for the monitored real-time traffic using OpenFlow statistics, and an attack is suspected if the real-time traffic deviates from that average by more than 80 percent.
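A minimal sketch of that kind of baseline-and-deviation check is given below. The smoothing method and parameter values are illustrative; they are not Defense4All's actual algorithm.

ALPHA = 0.1        # smoothing factor for the running average (illustrative)
THRESHOLD = 0.8    # flag an attack when real-time traffic deviates by more than 80%

def update_and_check(average, sample):
    """Return (new_average, attack_suspected) for one real-time traffic reading."""
    if average is None:
        return float(sample), False                    # first reading seeds the baseline
    deviation = abs(sample - average) / average if average else 0.0
    new_average = (1 - ALPHA) * average + ALPHA * sample
    return new_average, deviation > THRESHOLD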
Repository Services: An important aspect of the framework philosophy is the separation of computational state from computing logic. All persistent state is stored in a set of repositories, which can then be replicated, cached, and distributed without the computing logic (framework or application) being aware of it.
DF App Root: The application's root module.
Figure 4.26 shows a hybrid electrical/optical data center network in which OpenFlow-enabled top-of-rack (ToR) switches are connected to two aggregation switches: an Ethernet switch and an optical circuit switch (OCS). All switches are controlled by an SDN controller, which manages the physical connectivity between ToR switches by setting up optical circuits through the optical switch.
The scheme arranged in Figure 4.26 uses the SDN controller to manage the network continuously according to the traffic demands of the big data applications running over it.
a. A cloud customer uses a simple policy language to specify the network services required by its applications. This policy statement is passed to a cloud controller server operated by the cloud service provider.
b. The cloud controller maps the network policy onto a communication matrix that identifies the desired communication patterns and network services. The matrix is used to determine the optimal placement of virtual machines (VMs) on cloud servers so that the cloud can satisfy the largest number of global policies, based on the requirements and current conditions of the provider's other customers.
c. When the customer's VM instances are deployed, the required number of VMs is created and placed, and the logical communication matrix is translated into network-level directives for the data plane forwarding elements.
d. The network-level directives are installed on the network devices via OpenFlow.
Middlebox: Names a new virtual middlebox, specifying its name and configuration file. The cloud provider publishes the set of available middleboxes and their configuration syntax, for example an intrusion detection system or an audit compliance system.
information can be located anywhere in the network, because information is named, addressed, and matched independently of its location. In ICN, rather than specifying a source-destination host pair for an exchange, a piece of information itself is named. When a request is sent into the ICN, the network is responsible for locating the best source that can supply the requested information.
It is a challenge to implement ICN on conventional networks, because existing routing devices must be upgraded or replaced with ICN-enabled routing devices. In addition, ICN shifts the delivery model from a host-to-host conversation to an information-to-user model, which requires a clear separation between the task of demanding and supplying information and the task of forwarding it. SDN can provide the requisite infrastructure for implementing ICN because it allows the forwarding elements to be programmed and separates the control and data planes.
4.13.1 CCNx:
CCNx was developed as an open source project at the Palo Alto Research Center (PARC), and numerous implementations of it have been experimentally deployed. CCN communication uses two packet types: Interest packets and Content Object packets. A consumer requests content by issuing an Interest packet. Any CCN node that receives the Interest and holds named data matching it responds with a Content Object packet (also simply called a Content). A Content Object is considered to satisfy an Interest when the name in the Interest packet matches the name of the Content Object packet. When a CCN node receives an Interest and no copy of the requested content is already present, it can forward the Interest toward a content source. Forwarding tables in the CCN node determine the direction in which the Interest is sent. A provider holding content whose name matches the Interest responds with a Content Object packet.
Any intermediate node may choose to cache the Content Object; the next time an Interest packet with the same name is received, that node can respond with the cached copy of the Content Object.
Content Store: Caches Content Object packets that have already been seen.
The details of how content origins become known and how routes are established through the CCN network are beyond our scope. Briefly, through cooperation among all the CCN nodes, content providers advertise content names, and routes are propagated across the CCN network.
numbers). The large naming space provided by these fields makes it unlikely that two distinct content names will collide. The forwarding tables in the OF switch are based on the contents of the hashed fields. The switch need not "know" that the contents of those fields are no longer legitimate IP addresses, TCP port numbers, and so on; as always, it forwards based on the values in the relevant IP packet fields.
Mappings are installed in the switches' flow tables via the OF protocol, so that subsequent Interest packets can be forwarded to suitable caches. The packet flow is shown in Figure 4.30. Any packet received on the other ports is forwarded by the OpenFlow switch to the wrapper and then on to the CCNx module. The wrapper must be able to identify the switch port on which each packet arrived; this is achieved by setting the ToS value of every received packet to the corresponding port value before forwarding it to the wrapper port.
The wrapper's handling of packets received from the CCNx module is shown in part b of Figure 4.30. For Content Object packets, the ToS field defines the output port. Each packet is decoded to retrieve its content name; the name is hashed, and the packet's IP source address is set to the hashed value. The wrapper then passes the packet to the OF switch. Content Object packets are returned on their incoming face, while Interest packets have their ToS value set to 0 so that the OF switch passes them on to the next hop.
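The name-hashing step can be pictured with the short sketch below, which maps a content name onto a 32-bit value and renders it as an IPv4 address so that an unmodified OpenFlow switch can match on it; the hash function and field width are illustrative, not those of the actual wrapper.

import hashlib
import socket
import struct

def name_to_pseudo_ip(content_name):
    # Hash the CCN content name and keep the first 32 bits.
    digest = hashlib.sha256(content_name.encode("utf-8")).digest()
    value = struct.unpack("!I", digest[:4])[0]
    # Render the value as a dotted-quad "address" for use in an IP header field.
    return socket.inet_ntoa(struct.pack("!I", value))

print(name_to_pseudo_ip("/parc/videos/demo"))   # illustrative content name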
The abstraction wrapper layer thus provides basic ICN functionality, including the deflection functionality, without modifying the CCNx module or the OpenFlow switch.
1. List and explain the key functions of the SDN control plane.
2. Discuss the routing function in the SDN controller.
3. Explain the ITU-T Y.3300 layered SDN model.
4. Explain the OpenDaylight Architecture.
14. Define the network services abstraction layers.
15. List and explain three forms of abstraction in SDN.
16. List and describe six major application areas of interest for SDN.
17. Explain the Frenetic Architecture.
18. What is Traffic Engineering?
19. What is PolicyCop, and how does it apply SDN to traffic engineering? Explain the PolicyCop Architecture.
20. Explain the OpenDaylight DDoS Application.
21. What is Cloud Network as a Service (CloudNaaS)? Explain the various steps in the CloudNaaS Framework.
4.15 REFERENCE
*****
Model II
UNIT III
5
VIRTUALIZATION
Virtualization, Network Functions Virtualization: Concepts and
Architecture, Background and Motivation for NFV, Virtual Machines The
Virtual Machine Monitor, Architectural Approaches Container
Virtualization, NFV Concepts Simple Example of the Use of NFV, NFV
Principles High-Level NFV Framework, NFV Benefits and Requirements
NFV Benefits, NFV Requirements, NFV Reference Architecture NFV
Management and Orchestration, Reference Points Implementation, NFV
Functionality, NFV Infrastructure, Container Interface, Deployment of
NFVI Containers, Logical Structure of NFVI Domains, Compute Domain,
Hypervisor Domain, Infrastructure Network Domain, Virtualized Network
Functions, VNF Interfaces, VNFC to VNFC Communication, VNF
Scaling, NFV Management and Orchestration, Virtualized Infrastructure
Manager, Virtual Network Function Manager, NFV Orchestrator,
Repositories, Element Management, OSS/BSS, NFV Use Cases
Architectural Use Cases, Service-Oriented Use Cases, SDN and NFV
Network Virtualization, Virtual LANs ,The Use of Virtual LANs,
Defining VLANs, Communicating VLAN Membership, IEEE 802.1Q
VLAN Standard, Nested VLANs, OpenFlow VLAN Support, Virtual
Private Networks, IPsec VPNs, MPLS VPNs, Network Virtualization,
Simplified Example, Network Virtualization Architecture, Benefits of
Network Virtualization, OpenDaylight’s Virtual Tenant Network,
Software-Defined Infrastructure, Software-Defined Storage, SDI
Architecture
Unit Structure
5.0 Objectives
5.1 Network Functions Virtualization (Nfv) Concepts And Architecture
5.1.1 Virtual Machines
5.1.2 Nfv Concepts
5.1.3 Nfv Benefits And Requirements
5.1.4 Nfv Reference Architecture
5.2 Nfv Functionality
5.2.1 Nfv Infrastructure
5.2.2 Virtualized Network Functions
5.2.3 Nfv Management And Orchestration
5.2.4 Nfv Use Cases
5.2.5 Sdn And Nfv
5.3 Network Virtualization
5.3.1 Virtual Lans
5.3.2 Openflow Vlan Support
5.3.3 Virtual Private Networks (Vpn):
5.3.4 Network Virtualization
5.3.5 Opendaylight’s Virtual Tenant Network:
5.4 Summary
5.5 Unit End Question
5.6 References
resources. Server virtualization has become a central element in dealing
with big data applications and in implementing cloud computing
infrastructures.
Architectural Approaches:
the VMs on the failed host can be quickly and automatically restarted on
another host in the cluster. Compared with providing this type of
availability for a physical server, virtual environments can provide higher
availability at significantly lower cost and less complexity.
There are some important differences between the Type 1 and the
Type 2 hypervisors. A Type 1 hypervisor is deployed on a physical host
and can directly control the physical resources of that host, whereas a
Type 2 hypervisor has an operating system between itself and those
resources and relies on the operating system to handle all the hardware
interactions on the hypervisor’s behalf. Typically, Type 1 hypervisors
perform better than Type 2 because Type 1 hypervisors do not have that
extra layer. Because a Type 1 hypervisor doesn’t compete for resources
with an operating system, there are more resources available on the host,
and by extension, more VMs can be hosted on a virtualization server using
a Type 1 hypervisor.
Container Virtualization:
based VMs, containers do not aim to emulate physical servers. Instead, all
containerized applications on a host share a common OS kernel. This
eliminates the resources needed to run a separate OS for each application
and can greatly reduce overhead.
Because the containers execute on the same kernel, thus sharing most of
the base OS, containers are much smaller and lighter weight compared to a
hypervisor/guest OS VM arrangement.
In traditional networks, all devices are deployed on
proprietary/closed platforms. All network elements are enclosed boxes,
and hardware cannot be shared. Each device requires additional hardware
for increased capacity, but this hardware is idle when the system is
running below capacity. With NFV, however, network elements are
independent applications that are flexibly deployed on a unified platform
comprising standard servers, storage devices, and switches. In this way,
software and hardware are decoupled, and the capacity for each application is increased or decreased by adding or reducing virtual resources (Figure 5).
NFV Principles:
Three key NFV principles are involved in creating practical
network services:
Service chaining: VNFs are modular and each VNF provides limited
functionality on its own. For a given traffic flow within a given
application, the service provider steers the flow through multiple
VNFs to achieve the desired network functionality. This is referred to
as service chaining.
Management and orchestration (MANO): This involves deploying
and managing the lifecycle of VNF instances. Examples include VNF
instance creation, VNF service chaining, monitoring, relocation,
shutdown, and billing. MANO also manages the NFV infrastructure
elements.
Distributed architecture: A VNF may be made up of one or more
VNF components (VNFC), each of which implements a subset of the
VNF’s functionality. Each VNFC may be deployed in one or multiple
instances. These instances may be deployed on separate, distributed
hosts to provide scalability and redundancy.
The NFV framework consists of three domains of operation:
• Virtualized network functions: The collection of VNFs, implemented
in software that run over the NFVI.
• NFV infrastructure (NFVI): The NFVI performs a virtualization
function on the three main categories of devices in the network
service environment: computer devices, storage devices, and
network devices.
• NFV management and orchestration: Encompasses the orchestration
and lifecycle management of physical/software resources that
support the infrastructure virtualization, and the lifecycle
management of VNFs.
• VNF forwarding graph (VNF FG): Covers the case where network
connectivity between VNFs is specified, such as a chain of VNFs on
the path to a web server tier (for example, firewall, network address
translator, load balancer).
• VNF set: Covers the case where the connectivity between VNFs is not
specified, such as a web server pool.
NFV Benefits:
The ability to innovate and roll out services quickly, reducing the time
to deploy new networking services to support changing business
requirements, seize new market opportunities, and improve return on
investment of new services. Also lowers the risks associated with
rolling out new services, allowing providers to easily trial and evolve
services to determine what best meets the needs of customers.
NFV Requirements:
NFV must be designed and implemented to meet a number of
requirements and technical challenges, including the following:
management and orchestration northbound interfaces to well defined
standards and abstract specifications.
environment. This includes computing, networking, storage, and VM
resources.
OSS/BSS: Operational and business support systems implemented by
the VNF service provider.
Reference Points
Os-Ma: Used for interaction between the orchestrator and the
OSS/BSS systems.
Ve-Vnfm: Used for requests for VNF lifecycle management and
exchange of configuration and state information.
Se-Ma: Interface between the orchestrator and a data set that provides
information regarding the VNF deployment template, VNF forwarding
graph, service-related information, and NFV infrastructure information
models.
Implementation:
6
NFV FUNCTIONALITY
6.2.1 Nfv Infrastructure:
Container Interface:
The ETSI documents make a distinction between a functional block
interface and a container interface, as follows:
Functional block interface: An interface between two blocks of
software that perform separate (perhaps identical) functions. The
interface allows communication between the two blocks. The two
functional blocks may or may not be on the same physical host.
Container interface: An execution environment on a host system
within which a functional block executes. The functional block is on
the same physical host as the container that provides the container
interface.
Compute Domain
Eswitch: Server embedded switch. However, functionally it forms an
integral part of the infrastructure network domain.
Compute/storage execution environment: This is the execution
environment presented to the hypervisor software by the server or
storage device.
Control plane workloads: Concerned with signaling and control
plane protocols such as BGP. Typically, these workloads are more processor intensive than I/O intensive and do not place a significant burden on the I/O system.
Data plane workloads: Concerned with the routing, switching,
relaying or processing of network traffic payloads. Such workloads
can require high I/O throughput.
Hypervisor Domain:
The hypervisor domain is a software environment that abstracts
hardware and implements services, such as starting a VM, terminating a
VM, acting on policies, scaling, live migration, and high availability. The
principal elements in the hypervisor domain are the following:
Compute/storage resource sharing/management: Manages these
resources and provides virtualized resource access for VMs.
Network resource sharing/management: Manages these resources
and provides virtualized resource access for VMs.
Virtual machine management and API: This provides the execution
environment of a single VNFC instance.
Control and admin agent: Connects to the virtualized infrastructure
manager (VIM).
Vswitch: The vswitch function, described in the next paragraph, is
implemented in the hypervisor domain. However, functionally it forms
an integral part of the infrastructure network domain.
Virtual Networks:
In general terms, a virtual network is an abstraction of physical
network resources as seen by some upper software layer. Virtual network
technology enables a network provider to support multiple virtual
networks that are isolated from one another. Users of a single virtual
network are not aware of the details of the underlying physical network or
of the other virtual network traffic sharing the physical network resources.
Two common approaches for creating virtual networks are (1) protocol-
based methods that define virtual networks based on fields in protocol
headers, and (2) virtual-machine-based methods, in which networks are
created among a set of VMs by the hypervisor. The NFVI network
virtualization combines both these forms.
L2 Versus L3 Virtual Networks:
6.2.2 Virtualized Network Functions:
A VNF is a virtualized implementation of a traditional network
function. Below table contains examples of functions that could be
virtualized.
VNF Interfaces:
As discussed earlier, a VNF consists of one or more VNF components (VNFCs). The VNFCs of a single VNF are connected internally within the VNF, and this internal structure is not visible to other VNFs or to the VNF user.
controller drivers for fast packet processing on Intel architecture platforms. Scenario 3 assumes a Type 1 hypervisor.
VNF Scaling:
An important property of VNFs is referred to as elasticity, which
simply means the ability to scale up/down or scale out/in. Every VNF has
associated with it an elasticity parameter of no elasticity, scale up/down
only, scale out/in only, or both scale up/down and scale out/in.
Virtual Network Function Manager:
NFV Orchestrator:
The NFV orchestrator (NFVO) is responsible for resource
orchestration and network service orchestration. Resource orchestration
manages and coordinates the resources under the management of different
VIMs.
The NFVO coordinates, authorizes, releases, and engages NFVI resources among different PoPs or within one PoP. It does so by engaging with the VIMs through their northbound APIs rather than engaging with the NFVI resources directly.
It does the topology management of the network services instances
(also called VNF forwarding graphs).
Repositories:
Associated with NFVO are four repositories of information needed
for the management and orchestration functions:
Network services catalog: List of the usable network services. A
deployment template for a network service in terms of VNFs and
description of their connectivity through virtual links is stored in NS
catalog for future use.
VNF catalog: Database of all usable VNF descriptors. A VNF
descriptor (VNFD) describes a VNF in terms of its deployment and
operational behavior requirements. It is primarily used by VNFM in
the process of VNF instantiation and lifecycle management of a VNF
instance. The information provided in the VNFD is also used by the
NFVO to manage and orchestrate network services and virtualized
resources on NFVI.
NFV instances: List containing details about network services
instances and related VNF instances.
NFVI resources: List of NFVI resources utilized for the purpose of
establishing NFV services.
Element Management:
The element management is responsible for fault, configuration,
accounting, performance, and security (FCAPS) management functionality
for a VNF. These management functions are also the responsibility of the VNFM; however, unlike the VNFM, the EM can carry them out through a proprietary interface to the VNF. The EM must nevertheless ensure that it exchanges information with the VNFM through the open reference point (VeEm-Vnfm). The
EM may be aware of virtualization and collaborate with VNFM to
perform those functions that require exchange of information regarding
the NFVI resources associated with VNF. EM functions include the
following:
Configuration for the network functions provided by the VNF
Fault management for the network functions provided by the VNF
Accounting for the usage of VNF functions
Collecting performance measurement results for the functions provided by the VNF
Security management for the VNF functions
OSS/BSS:
The OSS/BSS are the combination of the operator’s other
operations and business support functions that are not otherwise explicitly
captured in the present architectural framework, but are expected to have
information exchanges with functional blocks in the NFV-MANO
architectural framework. OSS/BSS functions may provide management
and orchestration of legacy systems and may have full end-to-end
visibility of services provided by legacy network functions in an
operator’s network.
configure custom ETSI NFV-compliant VNFs to augment the catalog of
VNFs offered by the service provider.
iv. VNF Forwarding Graphs: VNF FG allows virtual appliances to be
chained together in a flexible manner. This technique is called service
chaining. For example, a flow may pass through a network monitoring
VNF, a load-balancing VNF, and finally a firewall VNF in passing from
one endpoint to another.
Figure 10, from the ETSI VNF Architecture document, indicates the potential relationship between SDN and NFV. The arrows can be described as follows:
SDN enabled switch/NEs include physical switches, hypervisor virtual
switches, and embedded switches on the NICs.
Virtual networks created using an infrastructure network SDN
controller provide connectivity services between VNFC instances.
SDN controller can be virtualized, running as a VNF with its EM and
VNF manager. Note that there may be SDN controllers for the
physical infrastructure, the virtual infrastructure, and the virtual and
physical network functions. As such, some of these SDN controllers
may reside in the NFVI or management and orchestration (MANO)
functional blocks (not shown in figure).
SDN enabled VNFs include any VNF that may be under the control of an SDN controller (for example, a virtual router or virtual firewall).
6.3 NETWORK VIRTUALIZATION
including multilayer awareness (Layers 3, 4, application), quality of
service (QoS) support, and trunking for wide-area networking.
Figure 2 VLAN:
Figure 2 shows five defined VLANs. A transmission from
workstation X to server Z is within the same VLAN, so it is efficiently
switched at the MAC level. A broadcast MAC frame from X is transmitted
to all devices in all portions of the same VLAN. But a transmission from
X to printer Y goes from one VLAN to another. Accordingly, router logic
at the IP level is required to move the IP packet from X to Y. Figure 2 shows that logic integrated into the switch, so that the switch determines whether the incoming MAC frame is destined for another device on the same VLAN. If not, the switch routes the enclosed IP packet at the IP level.
Defining VLANs:
A VLAN is a broadcast domain consisting of a group of end
stations, perhaps on multiple physical LAN segments, that are not
constrained by their physical location and can communicate as if they
were on a common LAN. A number of different approaches have been
used for defining membership, including the following:
Membership by port group: Each switch in the LAN configuration
contains two types of ports: a trunk port, which connects two switches;
and an end port, which connects the switch to an end system. A VLAN
can be defined by assigning each end port to a specific VLAN. This
approach has the advantage that it is relatively easy to configure. The principal disadvantage is that the network manager must reconfigure VLAN membership when an end system moves from one port to another.
Membership by MAC address: Because MAC layer addresses are
hardwired into the workstation’s network interface card (NIC),
VLANs based on MAC addresses enable network managers to move a
workstation to a different physical location on the network and have
that workstation automatically retain its VLAN membership. The main
problem with this method is that VLAN membership must be assigned
initially. In networks with thousands of users, this is no easy task.
Also, in environments where notebook PCs are used, the MAC address
is associated with the docking station and not with the notebook PC.
Consequently, when a notebook PC is moved to a different docking
station, its VLAN membership must be reconfigured.
Membership based on protocol information: VLAN membership
can be assigned based on IP address, transport protocol information, or
even higher-layer protocol information. This is a quite flexible
approach, but it does require switches to examine portions of the MAC
frame above the MAC layer, which may have a performance impact.
Figure 6.4 shows the position and content of the 802.1Q tag, referred to as the Tag Control Information (TCI). The presence of the two-octet TCI field is indicated by inserting a Length/Type field with a value of 8100 hex into the 802.3 MAC frame. The TCI consists of three subfields, as described in the list that follows.
User priority (3 bits): The priority level for this frame.
Canonical format indicator (1 bit): Always set to 0 for Ethernet switches. CFI is used for compatibility between Ethernet-type networks and Token Ring-type networks. If a frame received at an Ethernet port has its CFI set to 1, that frame should not be forwarded as is to an untagged port.
VLAN identifier (12 bits): The identification of the VLAN. Of the 4096 possible VIDs, a VID of 0 is used to indicate that the TCI contains only a priority value, and 4095 (0xFFF) is reserved, so the maximum possible number of VLAN configurations is 4094.
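The bit layout of the TCI can be expressed compactly as in the sketch below; this is only an illustration of the packing rule described above, with made-up example values.

def build_tci(priority, cfi, vid):
    """Pack user priority (3 bits), CFI (1 bit), and VID (12 bits) into the 16-bit TCI."""
    assert 0 <= priority < 8 and cfi in (0, 1) and 0 <= vid < 4096
    return (priority << 13) | (cfi << 12) | vid

print(hex(build_tci(5, 0, 100)))   # priority 5, CFI 0, VLAN 100 -> 0xa064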
Nested VLANs:
The original 802.1Q specification allowed for a single VLAN tag
field to be inserted into an Ethernet MAC frame. More recent versions of
the standard allow for the insertion of two VLAN tag fields, allowing the
definition of multiple sub-VLANs within a single VLAN.
SDN, and in particular OpenFlow, allows for much more flexible
management and control of VLANs. It should be clear how OpenFlow can
set up flow table entries for forwarding based on one or both VLAN tags,
and how tags can be added, modified, and removed.
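For example, the sketch below, written against the Ryu OpenFlow 1.3 parser with illustrative port and VLAN numbers, installs a flow entry that pushes an 802.1Q tag onto untagged traffic arriving on one port and forwards it out another. It is a sketch intended to run inside a Ryu controller application that already holds a datapath handle, not a complete application.

def push_vlan_flow(datapath, in_port=1, vlan_id=100, out_port=2):
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser
    match = parser.OFPMatch(in_port=in_port)
    actions = [
        parser.OFPActionPushVlan(0x8100),                        # add an 802.1Q tag
        parser.OFPActionSetField(vlan_vid=(0x1000 | vlan_id)),   # OFPVID_PRESENT | VID
        parser.OFPActionOutput(out_port),
    ]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=10,
                                        match=match, instructions=inst))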
Part a of Figure 5.7 shows the packet format for an IPsec option known as tunnel mode. Tunnel mode makes use of IPsec's combined authentication/encryption function, called Encapsulating Security Payload (ESP), together with a key exchange function. For VPNs, both
authentication and encryption are generally desired, because it is important
both to (1) ensure that unauthorized users do not penetrate the VPN, and
(2) ensure that eavesdroppers on the Internet cannot read messages sent
over the VPN.
IPsec in a firewall is resistant to bypass if all traffic from the outside
must use IP and the firewall is the only means of entrance from the
Internet into the organization.
IPsec is below the transport layer (TCP, UDP) and so is transparent to
applications. There is no need to change software on a user or server
system when IPsec is implemented in the firewall or router. Even if
IPsec is implemented in end systems, upper-layer software, including
applications, is not affected.
IPsec can be transparent to end users. There is no need to train users
on security mechanisms, issue keying material on a per-user basis, or
revoke keying material when users leave the organization.
IPsec can provide security for individual users if needed. This is useful
for offsite workers and for setting up a secure virtual subnetwork
within an organization for sensitive applications.
2. MPLS VPNs:
the assignment of a particular packet to a particular FEC is done just once,
when the packet enters the network of MPLS routers.
A Simplified Example:
The architecture depicts NV as consisting of four levels:
i. Physical resources
ii. Virtual resources
iii. Virtual networks
iv. Services
*****
UNIT IV
7
QUALITY OF SERVICE (QoS) AND USER
QUALITY OF EXPERIENCE (QoE)
Unit Structure
7.0 Objectives
7.1 Introduction
7.2 Background
7.3 QoS Architectural Framework
7.3.1 Data Plane
7.3.2 Control Plane
7.3.3 Management Plane
7.4 Integrated Services Architecture (ISA)
7.4.1 ISA Approach
7.4.2 ISA Components
7.4.3 ISA Services
7.4.4 Queuing Discipline
7.5 Differentiated Services
7.5.1 Services
7.5.2 DiffServ Field
7.5.3 DiffServ Configuration
7.5.4 DiffServ Operation
7.5.5 Per-Hop Behavior
7.6 Service Level Agreements
7.6.1 IP Performance Metrics
7.6.2 Openflow QoS Support
7.2.1 Queue Structures
7.2.2 Meters
7.7 User Quality of Experience (QoE)
7.8 Service Failures Due to Inadequate QoE Considerations
7.9 QoE Related Standardization Projects
7.10 Definition of Quality of Experience
7.11 QoE Strategies in Practice
7.12 Factors Influencing QoE
7.13 Measurements of QoE
7.13.1 Subjective Assessment
7.13.2 Objective Assessment
7.13.3 End-User Device Analytics
7.14 Applications of QoE
7.0 OBJECTIVES
Describe the ITU-T QoS architectural framework.
Summarize the key concepts of the Integrated Services Architecture.
Compare and contrast elastic and inelastic traffic.
Explain the concept of differentiated services.
Understand the use of service level agreements.
Describe IP performance metrics.
Present an overview of OpenFlow QoS support
Explain the motivations for QoE.
Define QoE.
Explain the factors that could influence QoE.
Present an overview of how QoE can be measured, including a discussion of the differences between subjective and objective assessment.
Discuss the various application areas of QoE.
7.1 INTRODUCTION
There is a strong need to be able to support a variety of traffic, with a
variety of QoS requirements, on IP- based networks.
7.2 BACKGROUND
In this more sophisticated environment, the term best effort refers
not to the network service as a whole but to a class of traffic treated in best
effort fashion. All packets in the best effort traffic class are transmitted
with no guarantee regarding the speed with which the packets will be
transmitted to the recipient or that the data will even be delivered entirely.
Traffic shaping controls the rate and volume of traffic entering and transiting the network on a per-flow basis. The entity responsible for traffic shaping buffers nonconformant packets until it brings the respective aggregate into compliance with the traffic limitations for this flow (a token bucket sketch, applicable to both shaping and policing, follows this list).
Congestion avoidance deals with means for keeping the load of the
network under its capacity such that it can operate at an acceptable
performance level. The specific objectives are to avoid significant
queuing delays and, especially, to avoid congestion collapse. A
typical congestion avoidance scheme acts by senders reducing the
amount of traffic entering the network upon an indication that
network congestion is occurring (or about to occur).
Traffic policing determines whether the traffic being presented is,
on a hop-by-hop basis, compliant with prenegotiated policies or
contracts. Nonconformant packets may be dropped, delayed, or
labeled as nonconformant.
Queuing and scheduling algorithms, also referred to as queuing
discipline algorithms, determine which packet to send next and are
used primarily to manage the allocation of transmission capacity
among flows.
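As noted above, a common way to realize both traffic shaping and traffic policing is a token bucket: tokens accumulate at the committed rate up to a burst limit, and a packet may pass only if enough tokens are available. The minimal sketch below uses illustrative parameters and is not tied to any specific device implementation.

import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst          # tokens per second, bucket depth
        self.tokens, self.last = burst, time.monotonic()

    def conforms(self, size):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True     # conformant: transmit now
        return False        # nonconformant: buffer (shaping) or drop/mark (policing)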
7.3.2 Control Plane:
The control plane is concerned with creating and managing the pathways
through which user data flows. It includes admission control, QoS routing,
and resource reservation.
Admission control determines what user traffic may enter the network. This may be determined in part by the QoS requirements of a data flow compared to the current resource commitment within the
network. But beyond balancing QoS requests with available capacity
to determine whether to accept a request, there are other
considerations in admission control.
QoS routing determines a network path that is likely to
accommodate the requested QoS of a flow. This contrasts with the
philosophy of the traditional routing protocols, which generally are
looking for a least-cost path through the network.
Resource reservation is a mechanism that reserves network
resources on demand for delivering desired network performance to
a requesting flow. An example of a protocol that uses this capability
is the Resource Reservation Protocol (RSVP).
The purpose of ISA is to enable the provision of QoS support over IP-
based internets. The central design issue for ISA is how to share the
available capacity in times of congestion. For an IP-based Internet that
provides only a best effort service, the tools for controlling congestion and
providing service are limited. In essence, routers have two mechanisms to
work with:
functions of the router; these are executed for each packet and therefore
must be highly optimized. The remaining functions, above the line, are
background functions that create data structures used by the forwarding
functions.
These background functions support the main task of the router, which is
the forwarding of packets. The two principal functional areas that
accomplish forwarding are the following:
may correspond to a single flow or to a set of flows with the same QoS
requirements. For example, the packets of all video flows or the
packets of all flows attributable to a particular organization may be
treated identically for purposes of resource allocation and queuing
discipline. The selection of class is based on fields in the IP header.
Based on the packet’s class and its destination IP address, this function
determines the next-hop address for this packet.
2. Packet scheduler: This function manages one or more queues for each
output port. It determines the order in which queued packets are
transmitted and the selection of packets for discard, if necessary.
Decisions are made based on a packet’s class, the contents of the
traffic control database, and current and past activity on this outgoing
port. Part of the packet scheduler’s task is that of policing, which is the
function of determining whether the packet traffic in a given flow
exceeds the requested capacity and, if so, deciding how to treat the
excess packets.
Guaranteed Service:
With this service, an application provides a characterization of its
expected traffic profile, and the service determines the end-to-end delay
that it can guarantee. The guaranteed service is the most demanding
service provided by ISA. Because the delay bound is firm, the delay has to
be set at a large value to cover rare cases of long queuing delays.
Controlled Load:
The key elements of the controlled load service are as follows:
The service tightly approximates the behavior visible to applications
receiving best effort service under unloaded conditions.
There is no specified upper bound on the queuing delay through the
network. However, the service ensures that a very high percentage of
the packets do not experience delays that greatly exceed the minimum
transit delay.
A very high percentage of transmitted packets will be successfully
delivered.
Best Effort:
A selfish TCP connection, which ignores the TCP congestion control
rules, can crowd out conforming connections. If congestion occurs and
one TCP connection fails to back off, other connections along the
same path segment must back off more than they would otherwise
have to do.
For priority queuing, each packet is assigned a priority level, and there is
one queue for each priority level. In the Cisco implementation, four levels
are used: high, medium, normal, and low. Packets not otherwise classified
are assigned to the normal priority. PQ can flexibly prioritize according to
network protocol, incoming interface, packet size, source/destination
address, or other parameters. The queuing discipline gives absolute
preference based on priority.
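A strict priority scheduler of this kind can be sketched with a single heap, as below; the priority names follow the Cisco levels mentioned above, and the tie-breaking counter simply preserves FIFO order within a level. This is an illustrative sketch, not Cisco's implementation.

import heapq
from itertools import count

PRIORITY = {"high": 0, "medium": 1, "normal": 2, "low": 3}   # lower value served first
_seq = count()
_queue = []

def enqueue(packet, level="normal"):
    heapq.heappush(_queue, (PRIORITY[level], next(_seq), packet))

def dequeue():
    """Always return the oldest packet of the highest priority present."""
    return heapq.heappop(_queue)[2] if _queue else None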
The term weighted fair queuing (WFQ) is used in the literature to refer to
a class of scheduling algorithms that use multiple queues to support
capacity allocation and delay bounds. WFQ may also take into account the
amount of service requested by each traffic flow and adjust the queuing
discipline accordingly.
Flow-based WFQ, which Cisco simply refers to as WFQ, creates flows based on a number of characteristics in a packet, including source and destination addresses, socket numbers, and session identifiers. The flows are assigned different weights based on IP Precedence bits, so that certain queues receive greater service.
7.5.1 Services:
Packets are labeled for service handling by means of the 6-bit DSField in
the IPv4 header or the IPv6 header. The value of the DSField, referred to
as the DiffServ codepoint (DSCP), is the label used to classify packets for
differentiated services.
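For instance, the Expedited Forwarding codepoint defined in RFC 3246 is DSCP 46; because the DSField occupies the upper six bits of the former ToS octet, the octet value to set is 46 << 2 = 184 (0xB8). The sketch below marks outgoing UDP datagrams with that value using the IP_TOS socket option, which is honored on Linux; behavior on other platforms may differ, and the destination address is illustrative.

import socket

EF_DSCP = 46
tos_byte = EF_DSCP << 2        # 0xB8: DSCP in the upper six bits of the ToS octet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
sock.sendto(b"probe", ("192.0.2.1", 9))     # illustrative destination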
Figure 7.3
Within a domain, the interpretation of DS codepoints is uniform, so that a
uniform, consistent service is provided.
1. Shaper: Delays packets as necessary so that the packet stream in a
given class does not exceed the traffic rate specified in the profile for
that class.
2. Dropper: Drops packets when the rate of packets of a given class
exceeds that specified in the profile for that class.
Figure 7.5 DS Functions
7.5.5 Per-Hop Behavior:
Figure 7.6 shows the DSCP encodings corresponding to the four classes.
The default class, referred to as default forwarding (DF), is the best effort
forwarding behavior in existing routers. Such packets are forwarded in the
order that they are received as soon as link capacity becomes available. If
other higher-priority packets in other DiffServ classes are available for
transmission, the latter are given preference over best effort default
packets. Application traffic in the Internet that uses default forwarding is
expected to be elastic in nature.
RFC 3246 defines the expedited forwarding (EF) PHB as a building block
for low-loss, low-delay, and low- jitter end-to-end services through
DiffServ domains. Therefore, unless the internet is grossly oversized to
eliminate all queuing effects, care must be taken in handling traffic for EF
PHB to ensure that queuing effects do not result in loss, delay, or jitter
above a given threshold.
Figure 7.1 shows a typical configuration that lends itself to an SLA. In this
case, a network service provider maintains an IP-based network. A
customer has a number of private networks (for example, LANs) at
various sites. Customer networks are connected to the provider via access
routers at the access points.
A standardized and effective set of metrics enables users and service
providers to have an accurate common understanding of the performance
of the Internet and private internets. Measurement data is useful for a
variety of purposes, including the following:
Supporting capacity planning and troubleshooting of large complex
internets.
Encouraging competition by providing uniform comparison metrics
across service providers.
Supporting Internet research in such areas as protocol design,
congestion control, and QoS.
Verification of SLAs.
These metrics are defined in three stages:
1. Singleton metric: The most elementary, or atomic, quantity that can
be measured for a given performance metric. For example, for a delay
metric, a singleton metric is the delay experienced by a single packet.
2. Sample metric: A collection of singleton measurements taken during
a given time period. For example, for a delay metric, a sample metric
is the set of delay values for all the measurements taken during a one-
hour period.
3. Statistical metric: A value derived from a given sample metric by
computing some statistic of the values defined by the singleton metric
on the sample. For example, the mean of all the one-way delay values
on a sample might be defined as a statistical metric.
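A small illustration of the three stages (a sketch with made-up sample values, not an IPPM-conformant measurement):

    from statistics import mean

    # Singleton metrics: one-way delay (in ms) observed for individual test packets.
    singletons = [12.4, 15.1, 11.8, 30.2, 13.6]

    # Sample metric: the collection of singletons taken during the measurement period.
    sample = singletons

    # Statistical metric: a statistic computed over the sample, here the mean delay.
    print(f"mean one-way delay = {mean(sample):.1f} ms")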
Figure 7.2 Model for Defining Packet Delay Variation
Figure 7.2 illustrates the packet delay variation metric. This metric is used
to measure jitter, or variability, in the delay of packets traversing the
network. The singleton metric is defined by selecting two packet
measurements and measuring the difference in the two delays. The
statistical measures make use of the absolute values of the delays.
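Following that definition (a sketch reusing the made-up delays from the previous example), a PDV singleton is the difference between the delays of two selected packets, and a statistic is then taken over the absolute values:

    from statistics import mean

    delays = [12.4, 15.1, 11.8, 30.2, 13.6]   # one-way delays (ms) of successive packets

    # Singleton PDV values: delay difference between each packet and its successor.
    pdv_singletons = [b - a for a, b in zip(delays, delays[1:])]

    # Statistical metric over the absolute values of the differences.
    print(f"mean |PDV| = {mean(abs(d) for d in pdv_singletons):.1f} ms")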
OpenFlow offers two tools for implementing QoS in data plane switches. A data structure defines each queue. The data structure includes a unique identifier, the port the queue is attached to, the guaranteed minimum data rate, and the maximum data rate. Counters associated with each queue capture the number of transmitted bytes and packets, the number of packets dropped because of overrun, and the elapsed time since the queue was installed in the switch.
A meter is a switch element that can measure and control the rate of
packets or bytes. Associated with each meter is a set of one or more bands.
If the packet or byte rate exceeds a predefined threshold, the meter triggers
the band. The band may drop the packet, in which case it is called a rate
limiter. Other QoS and policing mechanisms can be designed using meter
bands. Each meter is defined by an entry in the meter table for a switch.
Each meter has a unique identifier. Meters are not attached to a queue or a
port; rather, a meter can be invoked by an instruction from a flow table
entry. Multiple flow entries can point to the same meter.
Figure 7.9 shows the structure of a meter table entry and how it is related
to a flow table entry.
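The sketch below captures the two structures as plain data classes. The field names are illustrative, loosely following the OpenFlow concepts described above rather than the exact protocol encoding:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class QueueConfig:
        queue_id: int            # unique identifier
        port: int                # port this queue is attached to
        min_rate_kbps: int       # guaranteed minimum data rate
        max_rate_kbps: int       # maximum data rate
        tx_bytes: int = 0        # counter: transmitted bytes
        tx_packets: int = 0      # counter: transmitted packets
        overrun_drops: int = 0   # counter: packets dropped because of overrun
        duration_sec: int = 0    # elapsed time since the queue was installed

    @dataclass
    class MeterBand:
        rate_kbps: int           # threshold that triggers this band
        action: str              # e.g. "drop" (rate limiter) or "dscp_remark"

    @dataclass
    class MeterEntry:
        meter_id: int                                   # referenced from flow entries
        bands: List[MeterBand] = field(default_factory=list)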
Internet service providers (ISPs) do not own the entire content distribution
network, and the risk of quality degradations is high. The access network
may consist of coax, copper, fiber, or wireless (fixed and mobile)
technology. Issues such as packet delay, jitter, and loss may plague such
networks.
The growth and expansion of the Internet over the past couple of decades
has led to an equally huge growth in the availability of network-enabled
video streaming services. Giant technological strides have also been made
in the development of network access devices.
With the current popularity of these services, providers need to ensure that
user experiences are comparable to what the users would consider to be
their reference standards. Users' standards are often influenced by the typically high video quality of older technologies, that is, the services offered by cable and satellite TV operators. User expectations
can also be influenced by capabilities that currently can only be
adequately offered by broadcast TV. These capabilities include the
following:
Trick mode functionalities, which are features of video streaming systems
that mimic visual feedback given during fast-forward and rewind
operations.
Because the field of QoE has been growing rapidly, a number of projects have been initiated to address issues relating to best practices and standards. Table 7.1 summarizes the most prominent of these initiatives.
Definition of Quality:
Quality is the resulting verdict produced by a user after he or she has carried out a "comparison and judgment" process on an observable occurrence or event.
Thus, quality is evaluated in terms of the degree to which the user’s needs
have been fulfilled within the context of the event. The result of this
evaluation is usually referred to as the quality score (or rating) if it is
presented with reference to a scale.
Definition of Experience:
The reference path reflects the temporal and contextual nature of the
quality formation process. This path is influenced by memories of former
experienced qualities, as indicated by the arrow from experienced quality
to the reference path.
Definition of Quality of Experience:
Combining the concepts and definitions from the preceding sections, the
definition of QoE that reflects broad industry and academic consensus is
as follows:
Key findings from QoE-related projects show that for many services,
multiple QoS parameters contribute toward the overall user’s perception
of quality. This has resulted in the emergence of the concept of the
QoE/QoS layered approach.
The QoE/QoS layered approach does not ignore the QoS aspect of the
network, but instead, user and service level perspectives are
complementary, as shown in Figure 7.13.
Although the trade-offs between quality and network capacity may begin with application-level QoS because of network capacity considerations, an understanding of the user requirements at the service level (that is, in terms of QoE measures) enables a better choice of application-level QoS parameters to be mapped onto the network-level QoS parameters.
QoE must be studied and addressed by taking into account both technical
and nontechnical factors. Many factors contribute to producing a good
QoE. Here, the key factors are as follows:
User demographics: In this context, demographics refers to the relatively stable characteristics of a user that might have an indirect influence on perception and that interact with technical factors to determine QoE. Users may be grouped by demographic characteristics such as their attitudes toward adoption of new technologies, socio-demographic information, socioeconomic status, and prior knowledge. Cultural background is another user demographic factor that might also influence perception because of cultural attitudes toward quality.
Type of device: Different device types possess different
characteristics that may impact on QoE. An application designed to
run on more than one device type, for example on a connected TV
device such as Roku and on an iOS device such as an iPhone, may not
deliver the same QoE on every device.
Content: Content types can range from interactive content specifically
curated according to personal interests, to content that is produced for
linear TV transmission. Studies have suggested that people tend to
watch video on-demand (VoD) content with a higher level of
engagement than its competing alternative, linear TV. This may be
because users will make an active decision to watch specific VoD
content, and as a result, give their full attention to it.
Connection type: The type of connection used to access the service influences users' expectations and their QoE. Users have been found to have lower expectations when using 3G connections than when using a wireline connection, even when the two connection types were identical in terms of their technical conditions. Users have also been found to lower their expectations considerably, and to be more tolerant of visual impairments, on small devices.
Media (audio-visual) quality: This is a significant factor affecting
QoE, as it is the part of a service that is most noticeable by the user.
The overall audio and video quality appears to be content dependent.
For less-complex scenes (for example, head and shoulder content),
audio quality is slightly more important than video quality. In contrast,
for high-motion content, video quality tends to be significantly more
important than audio quality.
Network: Content delivery via the Internet is highly susceptible to the
effects of delays, jitter, packet loss, and available bandwidth. Delay
variation results in the user experiencing frame freeze and the lack of
lip synchronization between what is heard (audio) and what is seen
(video). Although video content can be delivered using a number of Internet protocols, not all of them are reliable; delivery is guaranteed only when a reliable transport such as TCP is used. Even so, bad network conditions degrade QoE because of increased rebuffering and increased interruptions in playback. Rebuffering interruptions in IP video playback are seen to be the worst degradation of user QoE and should be avoided even at the cost of additional startup delay.
Usability: Another QoE factor is the amount of effort that is required
to use the service. The service design must render good quality without
a great deal of technical input from the user.
Cost: The long-established practice of judging quality by price implies
that expectations are price dependent. If the tariff for a certain service
quality is high, users may be highly sensitive to any quality
degradations.
Characterize the service: The task at this stage is to choose the QoE
measures that affect user experience the most. As an example, for a
multimedia conferencing service, the quality of the voice takes
precedence over the quality of video. Also, the video quality required
for such applications does not demand a very high frame rate, provided
that audio-to-video synchronization is maintained. Therefore, the
resolution of individual frames can be considerably lower than the case
of other video streaming services, especially when the size of the
screen is small (such as a mobile phone). So, in multimedia
conferencing, the QoE measures might be prioritized as voice quality,
audio-video synchronization, and image quality.
Design and define test matrix: Once the service has been
characterized, the QoS factors that affect the QoE measures can be
identified. For instance, the video quality in streaming services might
be directly affected by network parameters such as bandwidth, packet
loss, and encoding parameters such as frame rate, resolution, and
codec. The capability of the rendering device will also play a
significant role in terms of screen size and processing power.
However, testing such a large combination of parameters may not be
feasible.
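To see why, consider a sketch that enumerates a test matrix as the cross-product of a few illustrative (made-up) factor levels; even this small design yields dozens of conditions, each of which must be rated by every subject:

    from itertools import product

    # Illustrative factor levels for a streaming-video test matrix.
    bandwidth_mbps = [1, 3, 10]
    packet_loss_pct = [0.0, 0.5, 2.0]
    frame_rate_fps = [15, 30]
    resolution = ["480p", "720p", "1080p"]

    test_matrix = list(product(bandwidth_mbps, packet_loss_pct,
                               frame_rate_fps, resolution))
    print(f"{len(test_matrix)} test conditions")   # 3 x 3 x 2 x 3 = 54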
Specify test equipment and materials: Subjective tests should be
designed to specify test equipment that will allow the test matrix to be
enforced in a controlled fashion. For instance, to assess the correlation
between NQoS parameters and the perceived QoE in a streaming
application, at least a client device and a streaming server separated by
an emulated network are needed.
Identify sample population: A representative sample population is
identified, possibly covering different classes of users categorized by
the user demographics that are of interest to the experimenter.
Depending on the target environment for the subjective test, at least 24 test subjects have been suggested as the ideal number for a controlled environment (for example, a laboratory) and at least 35 test subjects for a public environment. Fewer subjects may be used for pilot studies to indicate trends. The use of crowdsourcing in the context of
to indicate trending. The use of crowdsourcing in the context of
subjective assessment is still nascent, but it has the potential to further
increase the size of the sample population and could reduce the
completion time of the subjective test.
Subjective methods: Several subjective assessment methodologies
exist within the industry recommendations. However, in most of them,
the typical recommendation is for each test subject to be presented
with the test conditions under scrutiny along with a set of rating scales
that allows the correlation of the users’ responses with the actual QoS
test conditions being tested. There are several rating scales, depending
on the design of the experiment.
Analysis of results: When the test subjects have rated all QoS test
conditions, a post-screening process might be applied to the data to
remove any erroneous data from a test subject that appears to have
voted randomly. Depending on the design of the experiment, a variety
of statistical approaches could be used to analyze results. The simplest
and the most common quantification method is the mean opinion score
(MOS), which is the average of the opinions collected for a particular
QoS test condition. The results from subjective assessment
experiments are used to quantify QoE, and to model the impacts of
QoS factors. However, they are time-consuming, expensive to carry
out, and are not feasible for real-time in-service monitoring.
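A minimal sketch of this step (the post-screening rule shown is illustrative, not one of the standardized subject-screening procedures): discard ratings that lie far from the group mean for a condition, then average what remains.

    from statistics import mean, pstdev

    # ratings[condition] = opinion scores (1-5) from the test subjects
    ratings = {
        "3 Mbps, 0% loss": [4, 5, 4, 4, 3, 5],
        "3 Mbps, 2% loss": [2, 3, 2, 1, 3, 5],
    }

    def mos(scores, screen=True):
        """Mean opinion score, optionally dropping scores more than two
        standard deviations from the group mean (a simple, illustrative screen)."""
        if screen and len(scores) > 2:
            m, s = mean(scores), pstdev(scores)
            scores = [x for x in scores if abs(x - m) <= 2 * s] or scores
        return mean(scores)

    for condition, scores in ratings.items():
        print(condition, f"MOS = {mos(scores):.2f}")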
Database of subjective data: A starting point might be the collection
of a group of subjective datasets as this could serve as benchmark for
training and verifying the performance of the objective model. A
typical example of one of these datasets might be the subjective QoE
data generated from well-established subjective testing procedures.
Preparation of objective data: The data preparation for the objective
model might typically include a combination of the same QoS test
conditions as found in the subjective datasets, as well as other complex
QoS conditions. A variety of preprocessing procedures might be
applied to the video data prior to training, and refinement of the
algorithm.
Objective methods: There are various algorithms in existence that can
provide estimates of audio, video, and audiovisual quality as perceived
by the user. Some algorithms are specific to a perceived quality
artifact, while others can provide estimates for a wider scope of quality
artifacts. Examples of the perceived artifacts might include blurring,
blockiness, unnatural motion, pausing, skipping, rebuffering, and
imperfect error concealment after transmission errors.
Verification of results: After the objective algorithm has processed all
QoS test conditions, the predicted values might benefit from a post-
screening process to remove any outliers; this is the same concept
applied to the subjective datasets. The predicted values from the
objective algorithm might be on a different scale compared to the subjective QoE datasets.
Validation of objective model: The objective data analysis might be
evaluated with respect to its prediction accuracy, consistency, and
linearity by using a different subjective dataset. It is worth noting that
the performance of the model might depend on the training datasets
and the verification procedures. The Video Quality Experts Group
(VQEG) validates the performance of objective perceptual models.
The practical applications of QoE can be grouped into two areas based on
the main usage.
The first approach can be taken by a service provider, who can provide a
range of QoS offerings with an outline of the QoE that the customer might
reasonably expect.
The second approach can be taken by a customer who defines the required
QoE, and then determines what level of service will meet that need.
Figure 7.14 illustrates a scenario where the user can make a selection from
a range of services, including the required level of service (SLA). In contrast to purely QoS-based management, the SLA here is not
expressed in terms of raw network parameters. Instead, the user indicates a
QoE target; it is the service provider that maps this QoE target together
with the type of service selected, onto QoS demands.
The service provider selects the appropriate quality prediction model and
management strategy (for example, minimize network resource
consumption) and forwards a QoS request to the operator. It is possible
that the network cannot sustain the required level of QoS, making it
impossible to deliver the requested QoE. This situation leads to a signal
back to the user, prompting a reduced set of services/QoE values.
8
NETWORK DESIGN IMPLICATIONS OF
QOS AND QOE
Unit Structure
8.0 Objectives
8.1 Introduction to QoE/QoS mapping model
8.2 Classification of QoE/QoS Mapping Models
8.2.1 Black-Box Media-Based QoS/QoE Mapping Models
8.2.2 Glass-Box Parameter-Based QoS/QoE Mapping Models
8.2.3 Gray-Box QoS/QoE Mapping Models
8.3 IP-Oriented Parameter-Based QoS/QoE Mapping Models
8.3.1 Network Layer QoS/QoE Mapping Models for Video Services
8.3.2 Application Layer QoS/QoE Mapping Models for Video
Services
8.4 Actionable QoE Over IP-Based Networks
8.4.1 The System-Oriented Actionable QoE Solution
8.4.2 The Service-Oriented Actionable QoE Solution
8.5 QoE Versus QoS Service Monitoring
8.5.1 Monitoring and Its Classification
8.5.2 QoS Monitoring Solutions
8.5.3 QoE Monitoring Solutions
8.6 QoE-Based Network and Service Management
8.6.2 QoE-Based Host-Centric Vertical Handover
8.6.3 QoE-Based Network-Centric Vertical Handover
8.0 OBJECTIVES
Translate metrics from QoS to QoE domain.
Select the appropriate QoE/QoS mapping model for a given
operational situation.
Deploy QoE-centric monitoring solutions over a given infrastructure.
Deploy QoE-aware applications over QoE-centric infrastructure.
8.1 INTRODUCTION
A QoE/QoS mapping model is a function that transforms metrics from the QoS domain to the QoE domain.
8.2 CLASSIFICATION OF QOE/QOS MAPPING
MODELS
QoE/QoS mapping models can be classified according to their inputs into
three categories:
1. Black-box media-based models
2. Glass-box parameter-based models
3. Gray-box parameter-based models
8.2.1 Black-Box Media-Based QoS/QoE Mapping Models:
Black-box media-based quality models rely on the analysis of media gathered at the system entrance and exit. Hence, they account implicitly for the characteristics of the examined media processing system. They are classified into two categories:
(a) Double-sided or full-reference quality models: They use as inputs the clean stimulus and the corresponding degraded stimulus. They compare the clean and degraded stimulus in a perceptual domain that accounts for the psychophysical capabilities of the human sensory system. The perceptual domain is a transformation of the traditional physical temporal and frequency domains, performed according to the characteristics of users' perception. Basically, the larger the perceptual distance, the greater the degradation level. This type of model needs to align the clean and degraded stimuli because the comparison is made on a per-block basis. The stimulus alignment should be realized autonomously, that is, without adding extra control information describing the stimulus structure.
The full-reference black-box quality models are widely used for onsite benchmarking, diagnosis, and tuning of network equipment, where the clean stimulus is available. The black-box quality models are used offline for the evaluation of application-layer components, such as codecs, packet loss concealment (PLC), and buffering schemes.
enable a general overview of QoE values of a voice transmission system at an early phase. However, for service monitoring and management, online models are needed. In such a case, the variable model parameters should be acquired at run time. This is especially suitable for IP-based services, where control data, such as sequence numbers and time stamps, are included in each packet header. In such an environment, it is possible to extract static characterization parameters from signaling messages and variable ones from the received packets captured at the destination port. This means that parameters are acquired without accessing the media content, which is preferable for privacy reasons.
The network layer QoS/QoE mapping models rely solely on NQoS metrics
gathered from the TCP/IP stack except for the application layer (that is,
transport, network, link, and physical layers). Ketyko et al. proposed the
following parameter-based quality model for estimating video streaming quality in a 3G environment:
(Eq. 1)
where AL and VL refer respectively to the audio and video packet loss rates, AJ and VJ represent respectively the audio and video packet jitter, and
RSSI is the received signal strength indicator.
Kim and Choi presented a two-stage QoE/QoS mapping model for IPTV
over 3G networks. The first stage consists of combining a set of basic QoS
parameters into one metric as follows:
( Eq. 2)
(Eq. 3)
where, X is a vector of parameters {L, U, J, D, B} and Qr is a scalar
limiting the range of the IPTV QoE obtained as a function of the display
size/resolution of the screen. The constant A expresses the subscribed
service class and R is a constant reflecting the structure of the video
frames.
(Eq. 4)
where Lx refers to the start-up latency (the waiting time before a video sequence begins playing), NQS is the number of quality switches (the number of times the video bit rate is changed during a session), NRE is the number of rebuffering events, and TMR is the mean rebuffering time.
Khan et al. estimate the QoE of generic video content streamed over wireless networks using the MPEG-4 codec:
QoE(FR, SBR, PER) = (a1 + a2 · FR + a3 · ln(SBR)) / (1 + a4 · PER + a5 · PER^2)   (Eq. 5)
where FR, SBR, and PER refer, respectively, to the frame rate sampled at the application level, and the sent bit rate and packet error rate sampled at the network level. The coefficients a1 to a5 are used to calibrate the quality model. This model has been updated to account for three types of video content: slight movement, gentle walking, and rapid movement. The updated quality model is given by the following:
A QoE/QoS mapping model for IPTV was developed by Kuipers et al., which accounts for the startup latency and zapping time. The quality model is given by the following:
where QoE is a one-dimensional QoE component considering zapping behaviour, ZT is the zapping time expressed in seconds, and a and b are numeric constants that might be positive or negative.
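To make the shape of such parameter-based mappings concrete, the sketch below evaluates a model of the form of Eq. 5. The coefficient values are placeholders; in practice they are obtained by calibrating the model against subjective test data for each content type.

    import math

    def qoe_estimate(fr, sbr, per, a=(3.0, 0.02, 0.3, 8.0, 40.0)):
        """Parameter-based QoE/QoS mapping of the form of Eq. 5.
        fr  : frame rate (frames/s), sampled at the application level
        sbr : sent bit rate (kbps), sampled at the network level
        per : packet error rate (0..1), sampled at the network level
        a   : calibration coefficients a1..a5 (placeholder values here)
        """
        a1, a2, a3, a4, a5 = a
        return (a1 + a2 * fr + a3 * math.log(sbr)) / (1 + a4 * per + a5 * per ** 2)

    print(round(qoe_estimate(fr=25, sbr=512, per=0.01), 2))   # example evaluation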
Each service provider specifies a target QoE level that should be offered to its customers. The QoE/QoS mapping model should be selected in a
way that guarantees
(a) the availability of quality model input parameters and
(b) conformity with service specifications and conditions.
Figure 8.3 A Nominal Environment for Providing QoE-Centric
Services
The service-oriented actionable QoE solution offers several advantages:
Per-service, per-user, and per-content QoE monitoring and
management solutions are performed to provide a given QoE level.
It provides more adaptation possibility because it precisely discerns
capability and the role of each service component.
It reduces the communication overhead and balances computing loads.
It enables component-level granularity treatment of QoE in addition to
stream- and packet-level granularities.
The emerging QoS monitoring solutions are basically developed for data
centers and clouds where virtualization technology is supported. Figure
8.6 shows a network- and infrastructure-level monitoring solution built for
cloud-based IPTV service. The audiovisual content servers are placed on a
cloud. The traffic sent from the content servers to IPTV devices is
permanently monitored through a set of Vprobes deployed across the
network. A Vprobe is an open-ended investigatory tool that is used in the
cloud environment to inspect, record, and compute the state of the
hypervisor as well as each virtual machine running service business logic.
The flows of video packets are parsed at different measurement points.
The information collected by Vprobes is used next to reconstruct service-
level detailed records (SDRs). Each record contains the most relevant
information of the complete session between an origin (server) and a
destination (user). The critical parameters of the messages associated with
an IPTV session are stored inside the SDRs.
Resource: Composed of dimensions representing the characteristics
and performance of the technical system(s) and network resources
used to deliver the service. Examples of such factors include network
QoS in terms of delay, jitter, loss, error rate, and throughput.
Furthermore, system resources such as server processing capabilities
and end user device capabilities are included.
Application: Composed of dimensions representing
application/service configuration factors. Examples of such factors
include media encoding, resolution, sample rate, frame rate, buffer
sizes, SNR, etc.
Interface: Represents the physical equipment and interface through
which the user is interacting with the application (type of device,
screen size, mouse, etc.).
Context: Related to the physical context (e.g., geographical aspects, ambient light and noise, time of the day), the usage context (e.g., mobility/no-mobility or stress/no-stress), and the economic context (e.g., the cost that a user is paying for a service).
Human: Represents all factors related to the perceptual characteristics
of users (e.g. sensitivity to audiovisual stimulus, perception of
durations, etc.).
User: Users’ factors that are not represented in the Human layer.
These factors encompass all aspects of humans as users of services or
applications (e.g., history and social characteristics, motivation,
expectation, and level of expertise).
network parameters within a delivery path, such as queuing allocation and
congestion thresholds.
Figure 8.8 illustrates a likely envisaged scenario where the client could be served either by WiMAX or Wi-Fi systems. Appropriate equipment should be deployed and configured, such as outdoor and indoor units, server, router, and Wi-Fi and WiMAX access points, to enable network handover. During a voice call, the client may switch from the WiMAX system to the Wi-Fi system, and vice versa.
Figure 8.8 Network Selection Wi-Fi and WiMAX Based on Client and
Link Quality
8.6.3 QoE-Based Network-Centric Vertical Handover:
UNIT V
9
MODERN NETWORK ARCHITECTURE:
CLOUDS AND FOG: CLOUD COMPUTING
Unit Structure
9.0 Objectives
9.1 Basic Concepts
9.2 Cloud Services
9.2.1 Software as a Service
9.2.2 Platform as a Service
9.2.3 Infrastructure as a Service
9.2.4 Other Cloud Services
9.2.5 XaaS
9.3 Cloud Deployment Models
9.3.1 Public Cloud
9.3.2 Private Cloud
9.3.3 Community Cloud
9.3.4 Hybrid Cloud
9.4 Cloud Architecture
9.4.1 NIST Cloud Computing Reference Architecture
9.4.2 ITU-T Cloud Computing Reference Architecture
9.5 SDN and NFV
9.5.1 Service Provider Perspective
9.5.2 Private Cloud Perspective
9.5.3 ITU-T Cloud Computing Functional Reference Architecture
9.6 Summary
9.7 Unit End Questions
9.8 Bibliography, References and Further Reading
9.0 OBJECTIVES
The chapter begins with a definition of basic concepts, and then
covers cloud services, deployment models, and architecture. The chapter
then discusses the relationship between cloud computing and software-
defined networking (SDN) and network functions virtualization (NFV).
After studying this chapter, you should be able to:
Present an overview of cloud computing concepts.
List and define the principal cloud services.
List and define the cloud deployment models.
Compare and contrast the NIST and ITU-T cloud computing reference
architectures.
Discuss the relevance of SDN and NFV to cloud computing.
Figure 9.2: Cloud Service Models: (a) SaaS, (b) PaaS, (c) IaaS
The following list, derived from an ongoing industry survey by
OpenCrowd (http://cloudtaxonomy.opencrowd.com/taxonomy), describes
example SaaS services. The numbers in parentheses refer to the number of
vendors currently offering each service.
Billing (3): Application services to manage customer billing based on usage and subscriptions to products and services.
Collaboration (18): Platforms providing tools that allow users to collaborate in workgroups, within enterprises, and across enterprises.
Content management (7): Services for managing the production and access to content for Web-based applications.
Security (10): Hosted products for security services such as malware and virus scanning, single sign-on, and so on.
Social networks (4): Platforms for creating and customizing social networking applications.
9.2.3 Infrastructure as a Service:
A group of capabilities offered via cloud computing in which the
cloud service customer can provision and use processing, storage, or
networking resources. Typically, customers are able to self-provision this
infrastructure, using a web-based graphical user interface that serves as an
IT operations management console for the overall environment. API
access to the infrastructure may also be offered as an option. Examples of
IaaS are Amazon Elastic Compute Cloud (Amazon EC2), Microsoft
Windows Azure, Google Compute Engine (GCE), and Rackspace.
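As an illustration of API-based self-provisioning (a sketch using the AWS boto3 SDK for Amazon EC2; the AMI identifier and instance type are placeholders, and other providers expose comparable APIs), a customer can launch compute capacity programmatically:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single virtual machine; ImageId and InstanceType are placeholders.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])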
Figure 9.3: Separation of Responsibilities based on Cloud Service
Models
9.2.4 Other Cloud Services:
A number of other cloud services have been proposed, with some
available as vendor offerings. A useful list of these additional services is
provided by ITU-T Y.3500 (Cloud Computing — Overview and Vocabulary, August 2014), which includes the following cloud service categories:
9.2.5 XaaS:
Some providers package together SaaS, PaaS, and IaaS so that the
customer can do one-stop shopping for the basic cloud services that
enterprises are coming to rely on.
Risks are lowered: XaaS providers offer agreed service levels. This
eliminates the risks of cost overruns so common with internal projects.
The use of a single provider for a wide range of services provides a
single point of contact for resolving problems.
house or contract the management function to a third party. In addition,
the cloud servers and storage devices may exist on premises or off
premises.
A key motivation for opting for a private cloud is security. A private cloud
infrastructure offers tighter controls over the geographic location of data
storage and other aspects of security. Other benefits include easy resource
sharing and rapid deployment to organizational entities.
A hybrid public/private cloud solution can be particularly attractive for smaller businesses. Many applications with less stringent security concerns can be offloaded at considerable cost savings without committing the organization to moving more sensitive data and applications to the public cloud.
platform, such as runtime software execution stack, databases, and other
middleware components. Cloud consumers of PaaS can employ the tools
and execution resources provided by CPs to develop, test, deploy, and
manage the applications hosted in a cloud environment.
A cloud broker is useful when cloud services are too complex for
a cloud consumer to easily manage. Three areas of support can be offered
by a cloud broker:
Service intermediation: These are value-added services, such as
identity management, performance reporting, and enhanced security.
Service aggregation: The broker combines multiple cloud services to
meet consumer needs not specifically addressed by a single CP, or to
optimize performance or minimize cost.
Service arbitrage: This is similar to service aggregation except that
the services being aggregated are not fixed. Service arbitrage means a
broker has the flexibility to choose services from multiple agencies.
The cloud broker, for example, can use a credit-scoring service to
measure and select an agency with the best score.
Security and privacy are concerns that encompass all layers and elements
of the cloud provider’s architecture.
9.4.2 ITU-T Cloud Computing Reference Architecture:
ITU-T Cloud Computing Architecture (published in ITU-T
Y.3502, Cloud Computing Architecture, August 2014) is somewhat
broader in scope than the NIST architecture and views the architecture as a
layered functional architecture.
The ITU-T document defines three actors:
Cloud service customer or user: A party that is in a business
relationship for the purpose of using cloud services. The business
relationship is with a cloud service provider or a cloud service partner.
Key activities for a cloud service customer include, but are not limited
to, using cloud services, performing business administration, and
administering use of cloud services.
Cloud service provider: A party that makes cloud services available.
The cloud service provider focuses on activities necessary to provide a
cloud service and activities necessary to ensure its delivery to the
cloud service customer as well as cloud service maintenance. The
cloud service provider includes an extensive set of activities (for
example, provide service, deploy and monitor service, manage
business plan, provide audit data) as well as numerous sub-roles (for
example, business manager, service manager, network provider,
security and risk manager).
Cloud service partner: A party, which is engaged in support of, or
auxiliary to, activities of either the cloud service provider or the cloud
service customer, or both. A cloud service partner’s activities vary
depending on the type of partner and their relationship with the cloud
service provider and the cloud service customer. Examples of cloud
service partners include cloud auditor and cloud service broker.
The user layer is the user interface through which a cloud service
customer interacts with a cloud service provider and with cloud services,
performs customer related administrative activities, and monitors cloud
services. It can also offer the output of cloud services to another resource
layer instance. When the cloud receives service requests, it orchestrates its
own resources and/or other clouds’ resources and provides back cloud
services through the user layer. The user layer is where the CSU resides.
systems, device drivers, and so on), and arranges to offer the cloud
services to users via the access layer.
central controller, or a few distributed cooperating controllers, can
configure and manage virtual networks and provide QoS and security
services. This relieves network management of the need to individually
configure and program each networking device.
9.5.3 ITU-T Cloud Computing Functional Reference Architecture:
For our discussion of the relationship between cloud networking
and NFV, it is instructive to look at an earlier version of this architecture,
defined in the ITU-T Focus Group on Cloud Computing Technical Report, Part 2: Functional Requirements and Reference Architecture, February 2012, and shown in Figure 9.8. This architecture has the same four-layer
structure as that of Y.3502, but provides more detail of the lowest layer,
called the resources and network layer.
two layers correspond quite well to the NFVI portion of the NFV
architecture.
9.6 SUMMARY
Cloud Computing Architecture, August 2014) is somewhat broader in
scope than the NIST architecture and views the architecture as a
layered functional architecture.
*****
10
MODERN NETWORK ARCHITECTURE:
CLOUDS AND FOG: THE INTERNET OF THINGS
Unit Structure
10.0 Objectives
10.1 The IoT Era Begins
10.2 The Scope of the Internet of Things
10.3 Components of IoT-Enabled Things
10.3.1 Sensors
10.3.2 Actuators
10.3.3 Microcontrollers
10.3.4 Transceivers
10.3.5 RFID
10.4 IoT Architecture
10.4.1 ITU-T IoT Reference Model
10.4.2 IoT World Forum Reference Model
10.5 IoT Implementation
10.5.1 IoTivity
10.5.2 Cisco IoT System
10.5.3 ioBridge
10.6 Summary
10.7 Unit End Questions
10.8 Bibliography, References and Further Reading
10.0 OBJECTIVES
The future Internet will involve large numbers of objects that use
standard communications architectures to provide services to end users. It
is envisioned that tens of billions of such devices will be interconnected in
a few years. This will provide new interactions between the physical world
and computing, digital content, analysis, applications, and services. This
resulting networking paradigm is being called the Internet of Things (IoT).
This will provide unprecedented opportunities for users, manufacturers,
and service providers in a wide variety of sectors.
Equipment and
Ambulances,
Emergency personnel,
public security
Services police, fire,
regulatory vehicles
Fuel stations,
gaming,
Specialty bowling,
cinema, discos,
special events
Hotels,
POS terminals,
restaurants,
Hospitality tags, cash
Retail bars, cafes,
registers, vending
clubs
machines, signs
Supermarkets,
shopping
centres, single
Stores
site,
distribution
centre
Air, rail,
Nonvehicular
marine
Consumer,
commercial, Vehicles, lights,
Vehicles
Transportation construction, ships, signage,
off-road tolls
Tolls, traffic
Transportation
management,
systems
navigation
Pipelines,
Distribution material
handling,
conveyance
Metals, paper, Pumps, valves,
rubber, plastic, vats, conveyers,
Converting, metalworking, pipelines, motors,
discrete
electronics drives,
Industrial
assembly, test converting,
Petro-chemical, fabrication,
Fluid/processes hydrocarbon, assembly/packing,
food, beverage vessels, tanks
Mining,
Resource irrigation,
automation agricultural,
woodland
Hospital, ER, MRIs, PDAs,
Healthcare and
Care mobile PoC, implants, surgical
Life Sciences
clinic, labs, equipment,
doctor office pumps, monitors,
Implants, home telemedicine
In-vivo, home monitoring
systems
Drug
discovery,
Research
diagnostics,
labs
Wiring,
network access,
Infrastructure
energy Digital camera,
management power systems,
Security/alert, dishwashers,
fire safety, eReaders, desktop
Consumer and Awareness and environmental computers,
home safety safety, elderly, washer/dryer,
children, power meters, lights,
protection TVs, MP#, games
HVAC/climate, console, lighting,
Convenience
lightning, alarms
and appliance,
entertainment entertainment
Power
generation,
transport and
distribution,
Supply/Demand
low voltage, Turbines,
power quality, windmills,
energy uninterruptible
management power supply
Energy
Solar, wind, co- (UPS), batteries,
generation, generators,
Alternative
electro- meters, drills, fuel
chemical cells
Rigs, derricks,
well heads,
Oil/Gas
pumps,
pipelines
Office,
education,
retail,
Commercial, HVAC, transport,
hospitality,
institutional fire and safety,
Buildings healthcare,
lighting, security,
airports,
stadiums access
small numbers on the one hand, or in large numbers on the other. Table
10.2 lists various types of sensors, with examples of each type.
10.3.2 Actuators:
10.3.3 Microcontrollers:
Figure 10.4 Typical Microcontroller Chip Elements
10.3.4 Transceivers:
Figure 10.5 Simplified Transceiver Block Diagram
10.3.5 RFID:
Radio-frequency identification (RFID) is a data collection technology that uses electronic tags attached to items to allow the items to be identified and tracked by a remote system. The tag consists of an RFID chip attached to an antenna. RFID, which uses radio waves to identify items, is increasingly becoming an enabling technology
for IoT. The main elements of an RFID system are tags and readers. RFID
tags are small programmable devices used for object, animal and human
tracking. They come in a variety of shapes, sizes, functionalities, and
costs. RFID readers acquire and sometimes rewrite information stored on
RFID tags that come within operating range (a few inches up to several
feet). Readers are usually connected to a computer system that records and
formats the acquired information for further uses.
Access control is another widespread application area. RFID
proximity cards control building access at many companies and
universities. Ski resorts and other leisure venues are also heavy users of
this technology.
Figure 10.7 shows the two key components of a tag. The antenna is
a metallic path in the tag whose layout depends on the size and shape of
the tag and the operating frequency. Attached to the antenna is a simple
microchip with very limited processing and nonvolatile storage.
Figure 10.7 RFID Tag
10.4 IOT ARCHITECTURE
Figure 10.8: Types of Devices and their Relationship with Physical
Things
Figure 10.9 provides an overview of the elements of interest in
IoT. The various ways that physical devices can be connected are shown
on the left side of the figure. It is assumed that one or multiple networks
support communication among the devices.
The management capabilities layer covers the traditional network-
oriented management functions of fault, configuration, accounting, and
performance management. Y.2060 lists the following as examples of
generic support capabilities:
Figure 10.11 depicts the seven-level model. The IWF model white paper issued by Cisco indicates that the model is designed to have the
following characteristics:
model, the elements at this level are not physical things as such, but rather
devices that interact with physical things, such as sensors and actuators.
Among the capabilities that devices may have are analog-to-digital and
digital-to-analog conversion, data generation, and the ability to be
queried/controlled remotely.
The Collaboration and Processes Level recognizes the fact that
people must be able to communicate and collaborate to make an IoT
useful. This may involve multiple applications and exchange of data and
control information across the Internet or an enterprise network.
10.5 IOT IMPLEMENTATION
10.5.1 IoTivity:
Protocol Architecture:
To accommodate constrained devices, the overall protocol architecture (see Figure 10.14) is implemented in both constrained and unconstrained devices. At the transport level, the software relies on User
unconstrained devices. At the transport level, the software relies on User
Datagram Protocol (UDP), which requires minimal processing power and
memory, running on top of Internet Protocol (IP). Running on top of UDP
is the Constrained Application Protocol (CoAP), which is a simplified
query/response protocol designed for constrained devices. The IoTivity
implementation uses libcoap, which is a C implementation of CoAP that
can be used both on constrained and unconstrained devices.
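As an illustration of a CoAP request over UDP (a sketch using the third-party aiocoap Python library rather than IoTivity's libcoap; the device address and resource path are placeholders), a client can issue a GET to a constrained device as follows:

    import asyncio
    from aiocoap import Context, Message, GET

    async def read_sensor():
        # Create a CoAP client context (UDP transport underneath).
        ctx = await Context.create_client_context()
        # GET a resource on a constrained device; address and path are placeholders.
        request = Message(code=GET, uri="coap://192.0.2.42/sensors/temperature")
        response = await ctx.request(request).response
        print(response.code, response.payload.decode())

    asyncio.run(read_sensor())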
web while meeting specialized requirements such as multicast support,
very low overhead, and simplicity for constrained environments.
The IoTivity Base is software that runs on top of the CoAP API. It
presents a resource model to higher layers, consisting of clients and
servers. A server hosts resources, which are of two kinds: entity and entity
handler. An entity corresponds to an IoT thing, either an actuator or a
sensor. An entity handler is an associated device, such as one that caches
data from one or more sensors, or a proxy for gateway type protocol
conversion. The IoTivity Base provides the following services to higher
layers:
Resource registration:
This is used to register a resource for future access.
Resource and device discovery:
This operation returns identification information for all resources of a
given type on the network service. The operation is sent via multicast
to all services.
Querying resource (GET):
Get information from resource.
Setting a resource state (PUT):
This operation sets the value of a simple resource.
Observing resource state:
This operation fetches and registers as an observer for the value of a
simple resource. Notifications are then provided to the client on an
application-specific schedule.
IoTivity Services:
The IoTivity Base services provide a RESTful API for the basic
functions. On top of this, the current release includes four applications
referred to as IoTivity Services. IoTivity Services provide a common set
of functionalities to application development. These primitive services are
designed to provide easy, scalable access to applications and resources and
are fully managed by themselves. The four services are as follows:
Things Manager:
Control Manager:
Figure 10.16: Cisco IoT System
Network connectivity: Includes purpose-built routing, switching, and
wireless products available in ruggedized and nonruggedized form
factors.
Fog computing: Provides Cisco’s fog computing, or edge data
processing platform, IOx.
Data analytics: An optimized infrastructure to implement analytics
and harness actionable data for both the Cisco Connected Analytics
Portfolio and third-party analytics software.
Security: Unifies cyber and physical security to deliver operational
benefits and increase the protection of both physical and digital assets.
Cisco’s IP surveillance portfolio and network products with TrustSec
security and cloud/cyber security products allow users to monitor,
detect and respond to combined IT and operational technology (OT)
attacks.
Management and automation: Tools for managing endpoints and
applications.
Application enablement platform: A set of APIs for industries and
cities, ecosystem partners and third-party vendors to design, develop,
and deploy their own applications on the foundation of IoT System
capabilities.
Figure 10.17 highlights the key elements of each pillar followed by an
overview of each pillar.
Industrial switching: A range of compact, ruggedized Ethernet
switches that handle security, voice, and video traffic across industrial
networks.
Industrial routing: These products are certified to meet harsh
environmental standards. They support a variety of communications
interfaces, such as Ethernet, serial, cellular, WiMAX, and RF mesh.
Industrial wireless: Designed for deployment in a variety of harsh or
demanding environments. These products provide wireless access
point functionality and implement Cisco VideoStream, which uses
multicast encapsulated in unicast to improve multimedia applications.
Embedded networks: Cisco Embedded Service switches are
optimized for mobile and embedded networks that require switching
capability in harsh environments.
Fog Computing:
The fog computing component of IoT System consists of software
and hardware that extends IoT applications to the network edge, enabling
data to be efficiently analyzed and managed where generated, thus
reducing latency and bandwidth requirements. The goal of the fog
computing component is to provide a platform for IoT-related apps to be
deployed in routers, gateways, and other IoT devices. To host new and
existing applications on fog nodes, Cisco provides a new software
platform, called IOx, and an API for deploying applications on IOx. The
IOx platform combines the Cisco IOS operating system and Linux (see
Figure 10.18). Currently, IOx is implemented on Cisco routers.
enable partner companies to implement fog applications on the IOx
platform.
Data Analytics:
The data analytics component of IoT System consists of distributed
network infrastructure elements and IoT-specific APIs that run business-
specific software analytics packages throughout the network architecture
— from the cloud to the fog — and that allow customers to feed IoT data
intelligently into business analytics. The Cisco IoT analytics infrastructure
includes the following:
Infrastructure for realtime analytics: The integration of network,
storage, and compute capabilities on select Cisco routers, switches,
Unified Communications System (UCS) servers, and IP cameras
allows analytics to run directly on fog nodes for real-time collection,
storage, and analysis at the network edge.
Cloud to fog: Cisco Fog Data Services includes APIs to apply
business rules and control which data remains in the fog for real-time
analytics and which is sent to the cloud for long-term storage and
historical analysis.
Enterprise analytics integration: Using IOx APIs, enterprises can
run analytics on fog nodes for realtime intelligence. Fog Data Services
allows IoT data exporting to the cloud. Integration of IoT data can
increase operational efficiency, improve product quality, and lower
costs.
Analytics for security: Cisco IP cameras with storage and compute
capabilities support video, audio, and data analytics at the network
edge so enterprises gain real-time security intelligence, including event
processing and classification.
Security:
The intent of the security component is to provide solutions from
the cloud to the fog that address the full attack continuum—before, during,
and after an attack. The component includes cloud-based threat protection,
network and perimeter security, user- and group- based identity services,
video analytics, and secure physical access. The security portfolio includes
the following elements:
Cloud-based threat protection: Provided by Cisco’s Advanced
Malware Protection (AMP) package. This is a broad spectrum of
products that can be deployed on a variety of Cisco and third-party
platforms. AMP products use big data analytics, a telemetry model,
and global threat intelligence to help enable continuous malware
detection and blocking, continuous analysis, and retrospective alerting.
Network and perimeter security: Products include firewall and
intrusion prevention systems.
User- and group- based identity services: Products include an
Identity Service Engine, which is a security policy management
platform that automates and enforces context-aware security access to
network resources; and Cisco TrustSec technology, which uses
software-defined segmentation to simplify the provisioning of network
access, accelerate security operations, and consistently enforce policy
anywhere in the network.
Physical security: Cisco’s physical security approach consists of
hardware devices and software for security management. Products
include video surveillance, IP camera technology, electronic access
control, and incident response. Cisco physical security solutions can be
integrated with other Cisco and partner technologies to provide a
unified interface that delivers situational awareness and rapid,
informed decisions.
Management and Automation:
The management and automation component is designed to
provide simplified management of large IoT networks with support for
multiple siloed functions, and to enable the convergence of OT data with
the IT network. It includes the following elements:
IoT Field Network Director: A software platform that provides a
variety of tools for managing routers, switches, and endpoint devices.
These tools include fault management, configuration management,
accounting management, performance management, diagnostic and
troubleshooting, and a northbound API for industry-specific
applications.
Cisco Prime Management Portfolio: A remote management and
provisioning solution that provides visibility into the home network.
The package discovers detailed information about all connected
devices in the home and enables remote management.
Cisco Video Surveillance Manager: Provides video, analytics and
IoT sensor integration for providing physical security management.
Application Enablement Platform:
This component provides a platform for cloud-based app development and
deployment from cloud to fog, simply and at scale. Also offers open APIs
and app development environments for use by customers, partners, and
third parties. It features the following elements:
Cisco IOx App Hosting: With IOx capability, customers from all
segments and solution providers across industries will be able to
develop, manage, and run software applications directly on Cisco
industrial networked devices, including hardened routers, switches,
and IP video cameras.
Cisco Fog Director: Allows central management of multiple
applications running at the edge. This management platform gives
administrators control of application settings and lifecycle, for easier
access and visibility into large-scale IoT deployments.
Cisco IOx Middleware Services: Middleware is the software “glue”
that helps programs and databases (which may be on different
platforms) work together. Its most basic function is to enable
communication between different pieces of software. This element
provides tools necessary for IoT and cloud apps to communicate.
10.5.3 ioBridge:
IoBridge provides software, firmware, and web services designed
to make it simple and cost-effective to Internet-enable devices and
products for manufacturers, professionals and casual users. By providing
all the components necessary to web-enable things, ioBridge’s customers
avoid the complexity and cost associated with piecing together solutions
from multiple vendors. The ioBridge offering is essentially a turnkey
solution for a broad range of IoT users.
ioBridge Platform:
IoBridge provides a complete end-to-end platform that is secure,
private, and scalable for everything from do-it-yourself (DIY) home
projects to commercial products and professional applications. ioBridge is
both a hardware and cloud services provider. The IoT platform enables the
user to create the control and monitoring applications using scalable Web
technologies. ioBridge features end-to-end security, real-time I/O
streaming to web and mobile apps, and easy-to-install and easy-to-use
products. Figure 10.19 illustrates some of the major features of ioBridge’s
technology. The tight integration between the embedded devices and the
cloud services enable many of the features shown in the diagram that are
not possible with traditional web server technology. Note that the off-the-
shelf ioBridge embedded modules also include web-programmable control
or “rules and actions.” This enables the ioBridge embedded module to
control devices even when it is not connected to the ioBridge cloud server.
ThingSpeak:
ThingSpeak is an open source IoT platform developed by ioBridge.
ThingSpeak enables the creation of sensor logging applications, location-
tracking applications, and a social network of things with status updates. It
offers real-time data collection, visualization of the collected data in the form of charts, and the ability to create plug-ins and apps for collaborating with web services, social networks, and other APIs.
Eight fields for storing data of any type: These can be used to store
the data from a sensor or from an embedded device.
Three location fields: Can be used to store the latitude, longitude and
the elevation. These are very useful for tracking a moving device.
One status field: A short message to describe the data stored in the
channel.
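As an illustration (a sketch using ThingSpeak's public HTTP update API; the write API key is a placeholder), a device can push a sensor reading into a channel field with a single request:

    import requests

    # Write one sensor reading into field1 of a ThingSpeak channel.
    resp = requests.get(
        "https://api.thingspeak.com/update",
        params={
            "api_key": "YOUR_WRITE_API_KEY",   # placeholder channel write key
            "field1": 23.5,                    # e.g., a temperature reading
            "status": "ok",                    # optional short status message
        },
    )
    print(resp.text)   # ThingSpeak returns the new entry ID, or 0 on failure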
ThingSpeak provides apps that allow for an easier integration with web
services, social networks, and other APIs. Some of the apps provided by
ThingSpeak are the following:
RealTime.io:
Another offering of ioBridge is RealTime.io. This technology is
similar to, but more powerful and sophisticated than, ThingSpeak.
RealTime.io is a cloud platform that enables any device to connect to
cloud services and mobile phones to provide control, alerts, data analytics,
customer insights, remote maintenance, and feature selection. The intent is
that product manufacturers that leverage ioBridge’s technology will be
able to quickly and securely bring new connected home products to
market while slashing their cost-per-connected device.
The RealTime.io App Builder allows the user to build web apps directly
on the RealTime.io cloud platform. The user can write web applications
based on HTML5, CSS, and JavaScript and create interactions with
devices, social networks, external APIs, and ioBridge web services. There
is an in-browser code editor, JavaScript library, app update tracking,
device manager, and single sign on with existing ioBridge user accounts.
RealTime.io natively works with ioBridge Iota-based devices and
firmware. RealTime.io has built-in template apps or custom apps.
Template apps are prebuilt apps that the user can start with and then
customize. Custom apps allow the user to upload their own files and
images without any starter templates. Figure 10.20 shows the overall
ioBridge environment.