
CCNA

Implementing and Administering Cisco Solutions

Version 1.0.30
Welcome to Implementing and Administering Cisco
Solutions
Course Introduction
The Implementing and Administering Cisco Solutions (CCNA) course teaches learners how
to install, operate, configure, and verify a basic IPv4 and IPv6 network, including
configuring network components, such as switches, routers, and WLC. This course also
covers managing network devices and identifying basic security threats.
This course offers various learning methodologies, including videos and hands-on labs to
provide an interactive experience.
During this course you will:

• Read, visualize, and learn by doing
• Watch video demonstrations
• Have access to an extensive glossary of terms
• Practice what you have learned using the Discovery Labs
You can expect to learn to:

• Identify the components of a computer network and explain their basic characteristics
• Describe the features and functions of the Cisco IOS Software
• Explain the IPv4 and IPv6 addressing schemes
• Implement basic configurations on a Cisco Router
• Identify and resolve common switching and routing networking issues
• Describe network and device architectures and explain virtualization
• Describe smart network management solutions such as Cisco DNA Center, SD-Access, and SD-WAN
• Outline threat defense technologies
• And many, many more aspects of a basic IPv4 and IPv6 network
It is a lot to learn; however, this self-study course takes it one step at a time, at your pace, and
provides you with the tools that you'll need to excel.
Note: Cisco is updating content to be free of offensive or suggestive language. We are
changing terms such as blacklist/whitelist and master/slave to more appropriate
alternatives. While we update our portfolio of products and content, users may see
differences between some content and a product’s user interface or command syntax.
Please use your product’s current terminology as found in its documentation.
1.1 Exploring the Functions of Networking
Introduction
At the most basic level, a “network” is defined as a group of systems interconnected to
share resources. Examples include a social network used to share work experiences or
personal events, and a computer network used to share file storage, printer access, or
internet connectivity.
A network connects computers, mobile phones, peripherals, and even IoT (Internet of
Things) devices. Switches, routers, and wireless access points (APs) are the essential
networking basics. Through them, devices connected to your network can communicate
with one another and with other networks, such as the internet, which is a global system
of interconnected computer networks.
Networks carry data in many types of environments, including homes, small businesses,
and large enterprises. Large enterprise networks may have several locations that need to
communicate with each other. You can use a network in your home office to
communicate via the internet to locate information, place orders for merchandise, and
send messages to friends. You can also have a small office that is set up with a network
that connects other computers and printers in the office. Similarly, you can work in a large
enterprise with many computers, printers, storage devices, and servers running
applications that are used to communicate, store, and process information from many
departments over large geographic areas.

A network of computers and other components that are located relatively close together
in a limited area is often referred to as a LAN. Every LAN has specific components,
including hardware, interconnections, and software. WAN communication occurs
between geographically separated areas. It is typically provided by different
telecommunication providers using various technologies over different media such as
fiber, copper, cable, Asymmetric Digital Subscriber Line (ADSL), or wireless links. In
enterprise internetworks, WANs connect the main office, branches, Small Office Home
Office (SOHO), and mobile users.
Listed are some important skills that you will build upon when exploring the functions of
networking:

• Explain the functions, characteristics, and common components of a network.
• Read a network diagram, including comparing and contrasting the logical and
physical topologies.
• Describe the impact of user applications on the network.

1.2 Exploring the Functions of Networking


What Is a Computer Network?
The term network is used in many different contexts. Some examples of networks are
social networks, phone networks, television networks, neural networks, and computer
networks. A network is a system of connected elements that operate together. A
computer network connects devices such as PCs, printers, servers, phones, and cameras,
allowing them to exchange data with each other and share information and resources. In
a home, computers allow family
members to share files (such as photos) and print documents on a network printer,
televisions can play movies or other media stored on your computers, and internet-
enabled devices can connect to web pages, applications, and services anywhere in the
world.
In the business environment, you have many business operations such as marketing, sales,
and IT. You need to develop applications that allow information to be collected and
processed. Computer systems that collect and process the information need to
communicate with each other to share resources. You also need an infrastructure that
supports employees, who need to access these resources and interact with each other. A
network allows multiple devices such as laptops, mobile devices, servers, and shared
storage to exchange information with each other. There are various components
connected to each other that are necessary for this communication to take place. This
infrastructure allows a business to run, lets customers connect to the business (either
through salespeople or through an online store), and allows a business to sell its products
or services. To run normally, a business and its applications rely on networking
technology.
A computer network can exist on its own, independent of other computer networks and it
can also connect to other networks. The internet is an example of many networks
interconnected together. It is global in its span and scope. To operate successfully,
interconnected networks follow standardized rules to communicate. These rules are
accepted and adhered to by each participating network.
Years ago, the internet connected only a few mainframe computers with computer
terminals. The mainframe computers were large, and their computing power was
considered enormous (although roughly equivalent to that of today's mobile phone). Terminals were
simple and inexpensive devices, used only to input data and display the
results. The Teletype is an example of such a device. The range of devices that connect to the
internet has expanded greatly in recent decades. The internet now connects not only laptops,
smartphones, and tablets but also game consoles, television sets, home systems, medical
monitors, home appliances, thunder detectors, environment sensors, and much more.
The earlier concept of centralized computing resources is revived today in the form of
computing clouds.

Computer network engineers design, develop, and maintain highly available network
infrastructure to support the information technology activities of the business. Network
engineers interact with network users and provide support or consultancy services about
design and network optimization. Network engineers typically have more knowledge and
experience than network technicians, operators, and administrators. A network engineer
should constantly update their knowledge of networking to keep up with new trends and
practices.
Users who wish to connect their networks to the internet can acquire access through a
service provider's access network. Service provider networks can use different
technologies from dialup or broadband telephony networks, such as ADSL networks, cable
networks, mobile, radio, or fiber-optic networks. A service provider network can cover
large geographical areas. Service provider networks also maintain connections between
themselves and other service providers to enable global coverage.
Computer networks can be classified in several ways, and these classifications can be
combined to find the most appropriate description for a given implementation.
The distance between the user and the computer networks the user is accessing
distinguishes local networks from remote networks.
Examples of networks categorized by their purpose are data center networks and
storage area networks (SANs). Focusing on the technology used, you can distinguish
between wireless and wired networks.
Looking at the size of a network in terms of the number of devices it has, there are
various types: small networks, usually with fewer than ten devices; medium to large
networks consisting of tens to hundreds of devices; and very large, global
networks, such as the internet, which connects millions of devices across the world.
One of the most common categorizations looks at the geographical scope of the network.
There are LANs that connect devices located relatively close together in a limited area.
Contrasting LANs, there are WANs, which cover a broad geographic area and are managed
by service providers. An example of a LAN network is a university campus network that
can span several collocated buildings. An example of a WAN would be a
telecommunication provider’s network that interconnects multiple cities and states. This
categorization also includes metropolitan-area networks (MANs), which span a physical
area larger than LAN but smaller than WAN, for instance, a city.
Medium-to-large enterprise networks can span multiple locations. Usually, they have a
main office or Enterprise Campus, which holds most of the corporate resources, and
remote sites, such as branch offices or home offices of remote workers. A home office
usually has a small number of devices and is called a small office, home office (SOHO).
SOHO networks mostly use the internet to connect to the main office. The main office
network, which is a LAN in terms of its geographical span, may consist of several networks
that occupy many floors, or it may cover a campus that contains several buildings. Many
corporate environments require the deployment of wireless networks on a large scale,
and they use Wireless LAN Controllers (WLC) for centralizing the management of wireless
deployments. Enterprise Campuses also typically include a separate data center that is home to
the computational power, storage, and applications necessary to support an enterprise
business. Enterprises are also connected to the internet, and a firewall protects internet
connectivity. Branch offices have their own LANs with their own resources, such as
printers and servers, and may store corporate information, but their operations largely
depend on the main office, hence the network connection. They connect to the main
office by a WAN or internet using routers as gateways.

Cisco Enterprise Architecture Model

Networks support the activities of many businesses and organizations and are required to
be secure, resilient, and to allow growth. The design of a network requires considerable
technical knowledge. Network engineers commonly use validated network architecture
models to assist in the design and implementation of the network. Examples of validated
models are the Cisco three-tier hierarchical network architecture model, the spine-leaf
model, and the Cisco Enterprise Architecture model. These models provide hierarchical
structure to enterprise networks, which is used to design the network architecture in the
form of layers, for example, LAN Access and LAN Core, with each layer providing different
functionality.
Note: The words internet and web are often used interchangeably, but they do not share
the same meaning. The internet is a global network that interconnects many networks
and therefore provides a worldwide communication infrastructure. The World Wide Web
describes one way to provide and access information over the internet using a web
browser. It is a service that relies on connections provided by the internet for its function.
The exchange of data within the internet follows the same well-defined rules, called
protocols, designed specifically for internet communication. These protocols specify,
among other things, the usage of hyperlinks and Uniform Resource Identifiers (URIs). The
internet is a base for various data exchange services, such as email or file transfers. It is a
common global infrastructure, composed of many computer networks connected
together that follow communication rules standardized for the internet. A set of
documents called RFCs defines the protocols and processes of the internet.
1.3 Exploring the Functions of Networking
Components of a Network
A network can be as simple as two PCs connected by a wire or as complex as several
thousand devices that are connected through different types of media. The elements that
form a network can be roughly divided into three categories: devices, media, and services.
Devices are interconnected by media, which provides the channel over which the data
travels from source to destination. Services are software and processes that support
common networking applications in use today.

Network Devices
Devices can be further divided into endpoints and intermediary devices:

• Endpoints: In the context of a network, endpoints are called end-user devices and
include PCs, laptops, tablets, mobile phones, game consoles, and television sets.
Endpoints are also file servers, printers, sensors, cameras, manufacturing robots,
smart home components, and so on. All end devices were physical hardware units
years ago. Today, many end devices are virtualized, meaning that they do not exist
as separate hardware units anymore. In virtualization, one physical device is used
to emulate multiple end devices, including all the hardware components that
each end device would require. The emulated computer system operates as if it were a
separate physical unit and has its own operating system and other required
software. In a way, it behaves like a tenant living inside a host physical device,
using its resources (processor power, memory, and network interface capabilities)
to perform its functions. Virtualization is commonly applied to servers to optimize
resource utilization, because server resources are often underutilized when they
are implemented as separate physical units.
• Intermediary devices: These devices interconnect end devices or interconnect
networks. In doing so, they perform different functions, which include
regenerating and retransmitting signals, choosing the best paths between
networks, classifying and forwarding data according to priorities, filtering traffic to
allow or deny it based on security settings, and so on. As endpoints can be
virtualized, so can intermediary devices or even entire networks. The concept is
the same as in the endpoint virtualization—the virtualized element uses a subset
of resources available at the physical host system. Intermediary devices that are
commonly found in enterprise networks are:
o Switches: These devices enable multiple endpoints such as PCs, file servers,
printers, sensors, cameras, and manufacturing robots to connect to the
network. Switches are used to allow devices to communicate on the same
network. In general, a switch or group of interconnected switches attempts
to forward messages from the sender so that they are received only by the
destination device. Usually, all the devices that connect to a single switch or
a group of interconnected switches belong to a common network and can
therefore communicate directly with each other. If an end device wants to
communicate with a device that is on a different network, then it requires
"services" of a device that is known as a router, which connects different
networks together.
o Routers: These devices connect networks and intelligently choose the best
paths between networks. Their main function is to route traffic from one
network to another. For example, you need a router to connect your office
network to the internet. An analogy that may help you understand the
basic function of switches and routers is to imagine a network as a
neighborhood. A switch is a street that connects the houses, and routers
are the crossroads of those streets. The crossroads contain helpful
information such as road signs to help you in finding a destination address.
Sometimes, you might need the destination after just one crossroad, but
other times you might need to cross several. The same is true in
networking. Data sometimes "stops" at several routers before it is
delivered to the final recipient. Certain switches combine functionalities of
routers and switches, and they are called Layer 3 switches.
o APs: These devices allow wireless devices to connect to a wired network.
An AP usually connects to a switch as a standalone device, but it also can be
an integral component of the router itself.
o WLCs: These devices are used by network administrators or network
operations centers to facilitate the management of many APs. The WLC
automatically manages the configuration of wireless APs.
o Cisco Secure Firewalls: Firewalls are network security systems that monitor
and control the incoming and outgoing network traffic based on
predetermined security rules. A firewall typically establishes a barrier
between a trusted, secure internal network and another outside network,
such as the internet, that is assumed not to be secure or trusted.
o Intrusion Prevention System (IPS): An IPS is a system that performs a deep
analysis of network traffic while searching for signs that behavior is
suspicious or malicious. If the IPS detects such behavior, it can take
protective action immediately. An IPS and a firewall can work in
conjunction to defend a network.
o Management Services: A modern management service offers centralized
management that facilitates designing, provisioning, and applying policies
across a network. It includes features for discovery and management of
network inventory, management of software images, device configuration
automation, network diagnostics, and policy configuration. It provides end-
to-end network visibility and uses network insights to optimize the
network. An example of a centralized management service is Cisco DNA
Center.
In user homes, you can often find one device that provides connectivity for wired devices,
connectivity for wireless devices, and provides access to the internet. You may be
wondering which kind of device it is. It has the characteristics of a switch, because it
offers physical ports into which local devices plug; of a router, because it enables users to access other
networks and the internet; and of a WLAN AP, because it allows wireless devices to connect to it. It is
all three of these devices in a single package. This device is often called a wireless router.
Another example of a network device is a file server, which is an end device. A file server
runs software that implements standardized protocols to support file transfer from one
device to another over a network. This service can be implemented by either FTP or TFTP.
Having an FTP or TFTP server in a network allows uploads and downloads of files over the
network. An FTP or TFTP server is often used to store backup copies of files that are
important to network operation, such as operating system images and configuration files.
Having those files in one place makes file management and maintenance easier.
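A minimal sketch of such a backup transfer, using Python's standard ftplib, is shown below; the server address, credentials, and file name are hypothetical placeholders rather than values from this course.

# Minimal sketch: download a device configuration backup from an FTP server.
# The server address, credentials, and file name are hypothetical placeholders.
from ftplib import FTP

def download_backup(host: str, user: str, password: str, filename: str) -> None:
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(filename, "wb") as local_file:
            # RETR asks the server to send the named file; received chunks are written locally.
            ftp.retrbinary(f"RETR {filename}", local_file.write)

if __name__ == "__main__":
    download_backup("ftp.example.com", "backup", "backup-password", "router-config.txt")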
Media
Media are the physical elements that connect network devices. Media carry
electromagnetic signals that represent data. Depending on the medium, electromagnetic
signals can be guided in wires and fiber-optic cables or propagated through wireless
transmissions, such as Wi-Fi, mobile, and satellite. Different media have different
characteristics and selecting the most appropriate medium depends on the circumstances,
such as the environment in which the media is used, distances that need to be covered,
availability of financial resources, and so on. For instance, a satellite connection (air
medium) might be the only available option for a filming crew working in a desert.
Connecting wired media to network devices is considerably eased by the use of
connectors. A connector is a plug, which is attached to each end of the cable. The most
common type of connector on a LAN is the plug that looks like an analog phone connector.
It is called an RJ-45 connector.
To connect to the media that links a device to a network, devices use network
interface cards (NICs). The media "plugs" directly into the NIC. NICs translate the data
created by the device into a format that can be transmitted over the media. NICs used on
LANs are also called LAN adapters. End devices used in LANs usually come with several
types of NICs installed, such as wireless NICs and Ethernet NICs. NICs on a LAN are
uniquely identified by a MAC address. The MAC address is hardcoded or "burned in" by
the NIC manufacturer. NICs used to interface with WANs are called WAN interface cards
(WICs), and they use serial links to connect to a WAN network.
Network Services
Services in a network comprise software and processes that implement common network
applications, such as email and web, including the less obvious processes implemented
across the network. These generate data and determine how data is moved through the
network.
Companies typically centralize business-critical data and applications into central locations
called data centers. These data centers can include routers, switches, firewalls, storage
systems, servers, and application delivery controllers. Similar to data center centralization,
computing resources can also be centralized off-premises in the form of a cloud. Clouds
can be private, public, or hybrid, and they aggregate the computing, storage, network, and
application resources in central locations. Cloud computing resources are configurable and
shared among many end users. The resources are transparently available, regardless of
the user's point of entry (a personal computer at home, an office computer at work, a
smartphone or tablet, or a computer on a school campus). Data stored by the user is
available whenever the user is connected to the cloud.

1.4 Exploring the Functions of Networking


Characteristics of a Network
When you purchase a mobile phone or a PC, the specifications list tells you the important
characteristics of the device, just as specific characteristics of a network help describe its
performance and structure. When you understand what each characteristic of a network
means, you can better understand how the network is designed, how it performs, and
which aspects you may need to adjust to meet user expectations.
You can describe the qualities and features of a network by considering these
characteristics:
Topology: A network topology is the arrangement of its elements. Topologies give insight
into physical connections and data flows among devices. In a carefully designed network,
data flows are optimized, and the network performs as desired.
Bitrate or Bandwidth: Bitrate measures the data rate in bits per second (bps) of a given
link in the network. In device configurations, this measure is often referred to as bandwidth
or speed. However, it is not about how fast
a single bit is transmitted over a link, which is determined by the physical properties of the
medium that propagates the signal; it is about the number of bits transmitted in a
second. Link bitrates commonly encountered today are 1 and 10 gigabits per second (1 or
10 billion bits per second). 100-Gbps links are also not uncommon.
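To give these numbers some intuition, the short calculation below estimates the ideal time to transfer a 10-gigabyte file at several common bitrates; it is a simplified sketch that ignores protocol overhead, latency, and congestion.

# Simplified sketch: ideal time to transfer a file at several common link bitrates.
# Ignores protocol overhead, latency, and congestion.
FILE_SIZE_BITS = 10 * 8 * 10**9        # a 10-gigabyte file expressed in bits

for label, bps in (("100 Mbps", 100 * 10**6),
                   ("1 Gbps", 1 * 10**9),
                   ("10 Gbps", 10 * 10**9)):
    seconds = FILE_SIZE_BITS / bps
    print(f"{label}: about {seconds:.0f} seconds")
# Output: roughly 800 s, 80 s, and 8 s respectively.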
Availability: Availability indicates how much time a network is accessible and operational.
Availability is expressed in terms of the percentage of time the network is operational.
This percentage is calculated as a ratio of the time in minutes that the network is available
and the total number of minutes over an agreed period, multiplied by 100. In other words,
availability is the ratio of uptime and total time, expressed in percentage. To ensure high
availability, networks should be designed to limit the impact of failures and to allow quick
recovery when a failure does occur. High availability design usually incorporates
redundancy. The redundant design includes extra elements, which serve as backups to the
primary elements and take over the functionality if the primary element fails. Examples
include redundant links, components, and devices.
Reliability: Reliability indicates how well the network operates. It considers the ability of a
network to operate without failures and with the intended performance for a specified
time period. In other words, it tells you how much you can count on the network to
operate as you expect it to. For a network to be reliable, the reliability of all its
components should be considered. Highly reliable networks are highly available, but a
highly available network might not be highly reliable—its components might operate at
lower performance levels. A standard measure of reliability is the mean time between
failures (MTBF), which is calculated as the ratio between the total time in service and the
number of failures. Not meeting the required performance level is considered a failure.
Choosing highly reliable redundant components in the network design increases both
availability and reliability.
For instance, let’s consider a networking device that reboots every hour. The reboot takes
5 minutes, after which the device works as expected. The figure shows the calculations of
availability and reliability.
The availability percentage for one day can be calculated as follows:
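The referenced figure is not reproduced here, but the arithmetic can be sketched directly, assuming 24 reboots in the day and 5 minutes of downtime per reboot:

# Worked sketch of the availability and MTBF calculations for the example device:
# it reboots every hour, and each reboot takes 5 minutes.
minutes_per_day = 24 * 60            # 1440 minutes in the agreed period (one day)
downtime = 24 * 5                    # 24 reboots x 5 minutes = 120 minutes of downtime
uptime = minutes_per_day - downtime  # 1320 minutes available

availability = uptime / minutes_per_day * 100
print(f"Availability: {availability:.2f}%")   # about 91.67%

# Reliability via MTBF: total time in service divided by the number of failures.
mtbf = uptime / 24                   # 1320 / 24 = 55 minutes between failures
print(f"MTBF: {mtbf:.0f} minutes")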

Scalability: Scalability indicates how easily the network can accommodate more users and
data transmission requirements without affecting current network performance. If you
design and optimize a network only for the current conditions, it can be costly and difficult
to meet new needs when the network grows.
Security: Security tells you how well the network is defended from potential threats. Both
network infrastructure and the information that is transmitted over the network should
be secured. The subject of security is important, and defense techniques and practices are
constantly evolving. You should consider security whenever you take actions that affect
the network.
Quality of Service (QoS): QoS includes tools, mechanisms, and architectures, which allow
you to control how and when applications use network resources. QoS is essential for
prioritizing traffic when the network is congested.
Cost: Cost indicates the general expense for the initial purchase of the network
components and any costs associated with installing and maintaining these components.
Virtualization: Traditionally, network services and functions have only been provided via
hardware. Network virtualization creates a software solution that emulates network
services and functions. Virtualization solves many of the networking challenges in today’s
networks, helping organizations centrally automate and provision the network from a
central management point.
These characteristics and attributes provide a means to compare various networking
solutions.

1.5 Exploring the Functions of Networking


Physical vs. Logical Topologies
Each network has both a physical and a logical topology. The physical topology of a
network refers to the physical layout of the devices and cabling. The term node is
commonly used when discussing topology diagrams. For networking topology diagrams, a
node is a device.
Two networks might have the same physical topology, but distances between nodes,
physical interconnections, transmission rates, or signal types may differ. A physical
topology must be implemented using media that is appropriate for it. In wired networks,
recognizing the type of cabling used is important in describing the physical topology. The
figure represents some of the physical topologies that you may encounter.

The following are the primary physical topology categories:

• Bus: In a bus topology, every workstation is connected to a common transmission
medium, a single cable, which is called a backbone or bus. As a result, each
workstation can communicate directly with every other workstation in the network. In
early bus topologies, computers and other network devices were connected to
a central coaxial cable via connectors.
• Ring: In a ring topology, computers and other network devices are cabled in
succession, and the last device is connected to the first one to form a circle or ring.
Each device is connected to exactly two neighbors and has no direct connection to
a third. When one node sends data to another, the data passes through each node
that lies between them until it reaches the destination.
• Star: The most common physical topology is a star topology. In this topology, there
is a central device to which all other network devices connect via point-to-point
links. This topology is also called the hub and spoke topology. There are no direct
physical connections among spoke devices. This topology includes star and
extended star topologies. In an extended star topology, one or more spoke devices
are replaced by a device with its own spokes. In other words, it is composed of
multiple star topologies whose central devices are connected.
• Mesh: In a mesh topology, a device can be connected to more than one other
device. For one node to reach others, there are multiple paths available.
Redundant links increase reliability and self-healing. In a full mesh topology, every
node is connected to every other node. In partial mesh, certain nodes do not have
connections to all other nodes.
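A practical note on the full mesh option above: with n nodes, a full mesh requires n(n-1)/2 point-to-point links, so the link count grows quickly. A small illustrative sketch:

# Number of point-to-point links needed for a full mesh of n nodes: n * (n - 1) / 2.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(f"{n} nodes -> {full_mesh_links(n)} links")
# 4 nodes -> 6 links, 10 nodes -> 45 links, 50 nodes -> 1225 links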
The logical topology is the path along which data travels from one point in the network to
another. The diagram depicts the logical topology between PC A and the Server. In this
example, data does not follow the shortest physical path, which would go through two
switches. The logical topology requires data to also travel through the router for the two
devices to communicate. The same could be true for all other end devices. Logical
topology would then be a star, where the router is a central device.

The logical and physical topology of a network can be of the same type. However, physical
and logical topologies often differ. For example, an Ethernet hub is a legacy device that
functions as a central device to which other devices connect in a physical star. The
characteristic of a hub is that it "copies" every signal received on one port to all other
ports. So, a signal sent from one node is received by all other nodes. This behavior is
typical of a bus topology. Because data flow has the characteristics of a bus topology, it is
a logical bus topology.
The logical topology is determined by the intermediary devices and the protocols chosen
to implement the network. The intermediary devices and network protocols both
determine how end devices access the media and how they exchange data.
A physical star topology in which a switch is the central device is by far the most common
in implementations of LANs today. When using a switch to interconnect the devices, both
the physical and the logical topologies are star topologies.

1.6 Exploring the Functions of Networking


Interpreting a Network Diagram
Network diagrams are visual aids for understanding how a network is designed and how
it operates. In essence, they are maps of the network. They illustrate physical and
logical devices and their interconnections. Depending on the amount of information you
wish to present, you can have multiple diagrams for a network. The most common
diagrams are physical and logical diagrams. Other diagrams used in networking are
sequence diagrams, which illustrate the chronological exchange of messages between two
or more devices.
Both physical and logical diagrams use icons to represent devices and media. Usually,
there is additional information about devices, such as device names and models.
Physical diagrams focus on how physical interconnections are laid out and include device
interface labels (to indicate the physical ports to which media is connected) and location
identifiers (to indicate where devices can be found physically). Logical network diagrams
also include encircling symbols (ovals, circles, and rectangles), which indicate how devices
or cables are grouped. Logical diagrams further include device and network logical
identifiers, such as addresses. They also indicate which networking processes
are configured, such as routing protocols, and provide their basic parameters.
In the example, you can see the interface labels "S0/0/0," "Fa0/5," and "Gi0/1." The label is
composed of letters followed by numbers. Letters indicate the type of interface. In the
example, "S" stands for Serial, "Fa" stands for Fast Ethernet, and "Gi" for Gigabit Ethernet.
Devices can have multiple interfaces of the same type. The exact position of the interface
is indicated by the numbers that follow, which are subject to conventions. For instance,
the label S0/0/0 indicates serial port 0 (the last zero in the label), in the interface card slot
0 (the second zero) in the module slot 0 (the first zero).
Note: The name Fast Ethernet indicates an Ethernet link with the speed of 100 Mbps.
The diagram also includes the IPv4 address of the entire network given by 192.168.1.0/24.
This number format indicates the network address, 192.168.1.0, and the network's prefix,
a representation of its subnet mask, which is /24. IPv4 addresses of individual devices are
shown as ".1" and ".2." These numbers are only parts of the complete address, which is
constructed by combining the address of the entire network with the number shown. The
resulting address of the device in the diagram would be 192.168.1.1.
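A small sketch using Python's standard ipaddress module shows how the /24 prefix and the shortened host numbers from the diagram combine into complete addresses:

# Sketch: combining the network address 192.168.1.0/24 with the host numbers .1 and .2
# from the diagram to produce complete device addresses.
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
print(network.netmask)                 # 255.255.255.0, the subnet mask the /24 prefix represents

for host_number in (1, 2):
    # network address plus host number yields 192.168.1.1 and 192.168.1.2
    print(network.network_address + host_number)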

1.7 Exploring the Functions of Networking


Impact of User Applications on the Network
The traffic flowing in a network is either generated by end users or is control traffic.
Users generate traffic by using applications. Control traffic can be generated by
intermediary devices or by activities related to the operation, administration, and
management of the network. Today, users utilize many applications. The traffic created by
these applications differs in its characteristics. Usage of applications can affect network
performance and, in the same way, network performance can affect applications. Usage
translates to the user’s perception of the quality of the provided service—in other words,
a user experience that is good or bad. Recall that QoS is implemented to prioritize
network traffic and maximize the user experience.
User applications can be classified to better describe their traffic characteristics and
performance requirements. It is important to know what traffic is flowing in your network
and describe the traffic in technical terms. An example of traffic types found in today’s
network is given in the figure. This knowledge is used to optimize network design.
To classify applications, their traffic and performance requirements are described in
terms of these characteristics:

• Interactivity: Applications can be interactive or noninteractive. Interactivity
presumes that a response to a given request is expected for the application to
function normally. For interactive applications, it is important to evaluate how
sensitive they are to delays; some might tolerate larger delays up to practical
limits, but some might not.
• Real-time responsiveness: Real-time applications expect timely data delivery, and
they are not necessarily interactive. Examples of real-time applications are live
video streaming of a football game and video conferencing. Real-time
applications are sensitive to delay. Delay is sometimes used interchangeably with
the term latency. Latency refers to the total amount of time from the source
sending data to the destination receiving it. Latency accounts for the propagation
delay of signals through media, the time required for data processing on devices it
crosses along the path, and so on. Because of changing network conditions, latency
might vary during data exchange: some data might arrive with less latency than
other data. The variation in latency is called jitter (see the short sketch after this list).
• Amount of data generated: Some applications produce a low quantity of data,
such as voice applications. These applications do not require much bandwidth, and
they are usually referred to as bandwidth-benign applications. Video streaming
applications produce a significant amount of traffic and are termed bandwidth-greedy.
• Burstiness: Applications that always generate a consistent amount of data are
referred to as smooth or nonbursty applications. On the other hand, bursty
applications create a small amount of data most of the time but can change this behavior
for shorter periods. An example is web browsing. If you open a page in a browser
that contains a lot of text, a small amount of data is transferred. But, if you start
downloading a huge file, the amount of data will increase during the download.
• Drop sensitivity: Packet loss is the loss of packets along the data path, which can
severely degrade application performance. Some real-time applications (such
as Video on Demand) are sensitive to perceived packet loss when using
network resources. You can say that such applications are drop-sensitive.
• Criticality to business: This aspect of an application is "subjective" in that it
depends on someone's estimate of how valuable and important the application is
to a business. For instance, an enterprise that relies on video surveillance to secure
its premises might consider video traffic as a top priority, while another enterprise
might consider it totally irrelevant.
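As promised in the real-time responsiveness item above, here is a short sketch that computes latency and jitter from a handful of hypothetical per-packet latency samples; treating jitter simply as the variation between consecutive samples is an assumption made for illustration.

# Sketch: latency and jitter from hypothetical per-packet latency samples (milliseconds).
# Jitter is treated here simply as the variation between consecutive latency samples.
latencies_ms = [20.1, 22.4, 19.8, 25.0, 21.3]

average_latency = sum(latencies_ms) / len(latencies_ms)
jitter_samples = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
average_jitter = sum(jitter_samples) / len(jitter_samples)

print(f"Average latency: {average_latency:.1f} ms")
print(f"Average jitter:  {average_jitter:.1f} ms")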

One way that applications can be classified is as follows:

• Batch applications: Applications such as FTP and TFTP are considered batch
applications. Both are used to send and receive files. Typically, a user selects a
group of files that need to be retrieved and then starts the transfer. Once the
download starts, no additional human interaction is required. The amount of
available bandwidth determines the speed at which the download occurs. While
bandwidth is important for batch applications, it is not critical. Even with low
bandwidth, the download is completed eventually. Their principal characteristics
are:
o Typically do not require direct human interaction.
o Bandwidth is important but not critical.
o Examples: FTP, TFTP, inventory updates.

• Interactive applications: Applications in which the user waits for a response to
their action are interactive. Think of online shopping applications, which are
offered by many retail businesses today. Interactive applications require human
interaction, and their response times are more important than for batch
applications. However, strict response times or bandwidth guarantees might not
be required. If the appropriate amount of bandwidth is not available, then the
transaction may take longer, but it will eventually complete. The main
characteristics of the interactive applications are:
o Typically support human-to-machine interaction.
o Acceptable response times have different values depending on how
important the application is for the business.
o Examples: database inquiry, stock exchange transaction
• Real-time applications: These are applications, such as voice and video, that may also
involve human interaction. Because of the amount of information that is
transmitted, bandwidth is critical. In addition, because these applications are time-
critical, a delay on the network can cause a problem. Timely delivery of the data is
crucial. It is also important that data is not lost during transmission because real-
time applications, unlike other applications, do not retransmit lost data. Therefore,
sufficient bandwidth is mandatory, and the quality of the transmission must be
ensured by implementing QoS. QoS is a way of granting higher priority to certain
types of data, such as VoIP. The main characteristics of the real-time applications
are:
o Typically support human-to-human interaction.
o End-to-end latency is critical.
o Examples: Voice applications, video conferencing, and online streaming
such as live sports.
Applications may be required to manage different types of communications. One such
application is the factory-automation application. Factory-automation applications deal
with plant process-related data, such as readings from sensors and alarms, which require
guaranteed delivery times and typically require feedback within a prescribed response
time. On the other hand, the same factory-automation application must also handle
certain device configurations and commercial data, which is not time-critical.

2.1 Introducing the Host-To-Host Communications Model


Introduction
When a home user on a smartphone wants to send an email to a user in the enterprise
office, the email application on one host sends an email to an email application on the
other host. It appears that once the send button is pressed on one side, the message is
received almost immediately on the other side. But a series of
processes happens in between, including using the physical media on the end devices,
preparing the email to be sent through the network to the other side, and processing on
every device that connects devices and networks together. In order to logically describe
processes on individual hosts and make sure that both sides are compatible, we use
communication models which consist of different layers.
Cisco Enterprise Architecture Model
In a communication model, a layer does not define a single protocol; it defines a data
communication function that may be performed by any number of protocols. Because
each layer defines a function, it can contain multiple protocols, each of which provides a
service suitable to the function of that layer.
Every protocol communicates with its peer. A peer is an implementation of the same
protocol in the equivalent layer on a remote computer. Peer-level communications are
standardized to ensure that successful communication takes place.
Protocols and mechanisms are implemented in different devices, from hosts to network
devices in between. In the Cisco Enterprise Architecture model, the network architecture
and the decision to place different devices are often influenced by the functions
networking devices perform and protocols they run, so it is also important to understand
the functions of different layers from that perspective.
As a networking engineer, you need to understand the idea of the host-to-host
communications model, which includes important concepts:

• Identification of the layers and functions of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite
• Comparison to Open Systems Interconnection (OSI) reference model because it is
an alternative to the TCP/IP protocol suite
• How information is transmitted from the sender to the receiver across the network
• Encapsulation and de-encapsulation processes on network and end devices

2.2 Introducing the Host-To-Host Communications Model


Host-To-Host Communications Overview
Communication can be described as successful sharing or exchanging of information. It
involves a source and a destination of the information. Information is represented in some
form of message. In computer networks, the sources of messages are end devices, also
called endpoints or hosts. The messages are created at the source, transferred over the
network, and delivered to the destination. For communication to be successful, the
message has to traverse one or more networks. A network interconnects a large number
of devices produced by different hardware and software manufacturers over many
different transmission media, each with its own specific characteristics. All these factors make the
network very complex.
Communication models were created to organize internetworking complexity. Two
commonly used models today are ISO, OSI and TCP/IP. Both provide a model of
networking that describes internetworking functions and a set of rules called protocols
that set out requirements for internetworking functions.

Both models present a network in terms of layers. Layers group networking tasks by the
functions that they perform in implementing a network. Each layer has a particular role. In
performing its functions, a layer deals with the layer above it and the layer below it, which
is called "vertical" communication. A layer at the source creates data that is intended for
the same layer on the destination device. This communication of two corresponding layers
is also termed "horizontal".
The second aspect of communication models is protocols. In the same way that
communication functions are grouped in layers, so are the protocols. People usually talk
about the protocols of certain layers, protocol architectures, or protocol suites. In fact,
TCP/IP is a protocol suite.
A networking protocol is a set of rules that describe one type of communication. All
devices participating in internetworking agree with these rules, and this agreement makes
communication successful. Protocols define rules used to implement communication
functions.
Note: As defined by the ISO/International Electrotechnical Commission (IEC) 7498-1:1994
ISO standard, the word "open" in the OSI acronym indicates systems that are open for the
exchange of information using applicable standards. Open does not imply any particular
systems implementation, technology, or means of interconnection, but it refers to the
mutual recognition and support of the applicable standards.
While both ISO OSI and TCP/IP models define protocols, the protocols that are included in
TCP/IP are widely implemented in networking today. Nonetheless, as a general model, ISO
OSI aims at providing guidance for any type of computer system, and it is used in
comparing and contrasting different systems. Therefore, ISO OSI is called the reference
model.
Standards-based, layered models provide several benefits:

• Make complexity manageable by breaking communication tasks into smaller, simpler functional groups.
• Define and specify communication tasks to provide the same basis for everyone to
develop their own solutions.
• Facilitate modular engineering, allowing different types of network hardware and
software to communicate with one another.
• Prevent changes in one layer from affecting the other layers.
• Accelerate evolution, providing for effective updates and improvements to
individual components without affecting other components or having to rewrite
the entire protocol.
• Simplify teaching and learning.
Note: Knowledge of layers and the networking functions that they describe assists in
troubleshooting network issues, making it possible to narrow the problem to a layer or a
set of layers.
Computer networks were initially concerned only with the transfer of data. The term data
referred to information in an electronic form that could be stored and processed by a
computer. Additionally, different data transfer protocols required completely different
network topologies, equipment, and interconnections. IP, AppleTalk, Token Ring, and FDDI
are examples of data transfer communications protocols that required different hardware,
topologies, and equipment to operate properly. In addition to data transfer, other
communication networks existed in parallel. For example, telephone networks were built
using separate equipment and implemented different protocols and standards. Over the
years, computer networking evolved such that IP became a common data
communications standard. The technology has also been extended to include other types
of communication, such as voice conversations and video. Since only computer
networking protocols and standards are now used for voice, video, and "pure" computer
data, this type of networking is termed converged networking.
The need to interconnect devices is not exclusive to computer networks. Industrial
manufacturing companies used standards and protocols specifically designed to provide
automation and control over the production process. The management and monitoring of
the manufacturing plant were traditionally the task of the Operational Technology (OT)
departments. IT departments, which manage business applications, and OT departments
functioned independently. Today, thanks to the industrial Internet of Things (IoT),
manufacturers are collecting more data from the plant floor than ever before. However,
that data is only as valuable as the decisions it can support. OT and IT departments
collaborate to make the data meaningful and accessible for use across the organization.
The result is another example of a converged network, called Factory Network, which
connects factory automation and control systems with IT systems using standards-based
networking. The Factory Network provides real-time access to mission-critical data at the
plant level while sharing knowledge throughout the enterprise, helping operations leaders
make decisions that can contribute to safety and operational effectiveness.

2.3 Introducing the Host-To-Host Communications Model


ISO OSI Reference Model
To address the issues with network interoperability, the ISO researched different
communication systems. As a result of this research, the ISO created the ISO OSI model to
serve as a framework on which a suite of protocols can be built. The vision was that this
set of protocols would be used to develop an international network that would not
depend on proprietary systems. In the computer industry, proprietary means that one
company or a small group of companies uses their own interpretation of tasks and
processes to implement networking. Usually, the interpretation is not shared with others,
so their solutions are not compatible; hence they do not communicate. Meanwhile, the
TCP/IP protocol suite was used in the first network implementations. It quickly became a
standard, meaning that it was the protocol suite implemented in practice. Consequently, it
was chosen over the OSI protocol suite and became the standard in network
implementations today.
Note: ISO, the International Organization for Standardization, is an independent,
nongovernmental organization. It is the world's largest developer of voluntary
international standards. Those standards help businesses to increase productivity while
minimizing errors and waste.
The OSI reference model describes how data is transferred over a network. The model
addresses hardware and software equipment and transmission.
The OSI model provides an extensive list of functions and services that can occur at each
layer. It also describes the interaction of each layer with the layers directly above and
below it. More importantly, the OSI model facilitates an understanding of how
information travels throughout the network. It provides vendors with a set of standards
that ensure compatibility and interoperability between the various types of network
technologies that companies produce around the world. The OSI model is also used for
computer network design, operation specifications, and troubleshooting.
Roughly, the model layers can be grouped into upper and lower layers. Layers 5 to 7, or
upper layers, are concerned with user interaction and the information that is
communicated, its presentation, and how the communication proceeds. Layers 1 to 4, the
lower layers, are concerned with how this content is transferred over the network.
The OSI reference model separates network tasks into seven layers, which are named and
numbered.

• Layer 1: The physical layer defines electrical, mechanical, procedural, and
functional specifications for activating, maintaining, and deactivating the physical
link between devices. This layer deals with the electromagnetic representation of
bits of data and their transmission. The physical layer specifications define line
encoding, voltage levels, the timing of voltage changes, physical data rates,
maximum transmission distances, physical connectors, and other attributes. This
layer is the only layer implemented solely in hardware.
• Layer 2: The data link layer defines how data is formatted for transmission and
how access to physical media is controlled. This layer typically includes error detection
and correction to ensure reliable data delivery. The data link layer involves
network interface controller to network interface controller (NIC-to-NIC)
communication within the same network or subnet. This layer uses a physical
address, sometimes called a MAC address, to identify hosts on the local network.
• Layer 3: The network layer provides connectivity and path selection beyond the
local segment, all the way from the source to the final destination. The network
layer uses logical addressing to manage connectivity. In networking, the logical
address is used to identify the sender and the recipient. The postal system is
another common system that uses addressing to identify the sender and the
recipient. Postal addresses follow the format that includes name, street name, and
number, city, state, and country. Network logical addresses have a different format
than postal addresses; they are determined by the network layer rules. Logical
addressing ensures that a host has a unique address or that it can be uniquely
identified in terms of network communication.
• Layer 4: The transport layer defines segmenting and reassembling of data
belonging to multiple individual communications, defines the flow control, and
defines the mechanisms for reliable transport if required. The transport layer
serves the upper layers, which in turn interface with many user applications. To
distinguish between these application processes, the transport layer uses its own
addressing. This addressing is valid locally, within one host, unlike addressing at
the network layer. The transport services can be reliable or unreliable. The
selection of the appropriate service depends on application requirements. For
instance, the file transfer may be reliable to guarantee that the file arrives intact
and whole. On the other hand, a missing pixel when watching a video might go
unnoticed. In networking, this is called an unreliable service.
• Layer 5: The session layer establishes, manages, and terminates sessions between
two communicating hosts to allow them to exchange data over a prolonged time
period. The session layer is mainly concerned with issues that application
processes may encounter and not with lower-layer connectivity issues. The
sessions, also called dialogs, can determine whether to handle data in both
directions simultaneously or only handle data flow in one direction at a time. It
also takes care of checkpoints and recovery mechanisms. The session layer is
explicitly implemented with applications that use remote procedure calls.
• Layer 6: The presentation layer ensures that data sent by the application layer of
one system is "readable" by the application layer of another system. It achieves
that by translating data into a standard format before transmission and converting
it into a format known to the receiving application layer. It also provides special
data processing that must be done before transmission. It may compress and
decompress data to improve the throughput and may encrypt and decrypt data to
improve security. Compression/decompression and encryption/decryption may
also be done at lower layers.
• Layer 7: The application layer is the OSI layer that is closest to the user. It provides
services to user applications that want to use the network. Services include email,
file transfer, and terminal emulation. An example of a user application is the web
browser. It does not reside at the application layer but uses protocols that operate
at the application layer. Operating systems also use the application layer when performing tasks triggered by user actions that do not appear to involve network communication, such as opening a remotely located file with a text editor or importing a remotely located file into a spreadsheet. The
application layer differs from other layers in that it does not provide services to
any other OSI layer.

2.4 Introducing the Host-To-Host Communications Model


TCP/IP Protocol Suite
The TCP/IP model represents a protocol suite. It is similar to the ISO OSI model in that it
uses layers to organize protocols and explain which functions they perform. TCP/IP
protocols are actively used in actual networks today.
The TCP/IP model defines and describes requirements for the implementation of host
systems. These include standard protocols that these systems should use. It does not
specify how to implement the protocol functions but rather provides guidance for
vendors, implementors, and users of what should be provided within the system.
The TCP/IP protocol suite has four layers and includes many protocols, although its name
stands for only two: TCP, which stands for the Transmission Control Protocol, and IP,
which stands for the Internet Protocol. The reason is that layers represented by these two
protocols carry out functions crucial to successful network communication.
Note: Although this course refers to the TCP/IP protocol stack or protocol suite, it is
common in the industry to shorten this term to "IP stack."

Look at the four layers of the TCP/IP model:

• Link layer: This layer is also known as the media access layer. It defines protocols
used to interface the directly connected network. Tasks of the protocols at this
layer are closely related to the characteristics of the physical medium and deal
primarily with physical network details. The link layer is also referred to as a
network interface, network access, or even data link layer. Because there are many
different types of physical networks, there are many link layer protocols. An
example of the TCP/IP link layer protocol is Ethernet. The link layer introduces
physical addresses, sometimes called hardware addresses or MAC addresses, to
identify devices sharing a particular physical network segment.
• Internet layer: This layer routes data from the source to the destination, provides
a means to obtain information on reaching other networks and deals with
reporting errors. The Internet layer provides logical addressing. Logical addressing
ensures that a host is uniquely identified. An Internet layer logical address, called
an IP address, is used to identify a host. This address is valid globally and aims at
uniquely identifying the host. End devices, such as laptops, mobile phones, and
servers are configured with a logical address before connecting to the network. IP
protocols—namely, IPv4 and the newer version, IPv6—reside in this layer. This
layer serves the upper Transport layer and passes information to the Link layer.
• Transport layer: This layer, together with the internet layer, forms the core of the TCP/IP architecture. It is placed between the "data mover" protocols of the link and internet layers
and software-oriented protocols of the application layer. There are two main
protocols at this layer, TCP and UDP. These protocols serve many application-layer
protocols. Transport services "prepare" application data for transfer over the
network, follow the transfer process, and ensure that data from different
applications is not mixed. To distinguish between the applications, the transport
layer identifies each application with its own addressing. This addressing is valid
locally, within one host, unlike addressing at the Internet layer, which is valid
globally.
• Application layer: The functions of this layer mainly deal with user interaction. It
supports user applications by providing protocols and services that let you actually
use the network. It also supports network application programming interfaces
(APIs) that allow programs to access the network services, regardless of the
operating system that they are running on. This layer accommodates protocols
such as HTTP, HTTPS, Domain Name System (DNS), FTP, Simple Mail Transfer
Protocol (SMTP), Secure Shell (SSH), and many more. These protocols facilitate
applications for web browsing, file transfer, name-to-IP-address resolution,
sending of emails, remote access to devices, and many other functions that
network users perform.
2.5 Introducing the Host-To-Host Communications Model
Peer-To-Peer Communications
The term peer means the equal of a person or object. By analogy, peer-to-peer
communication means communication between equals. This concept is at the core of
layered modeling of a communication process. Although a layer deals with layers directly
above and below it in performing its functions, the data it creates is intended for the
corresponding layer at the receiving host. This concept is also called horizontal communication.
Except for the physical layer, functions of all layers are typically implemented in software.
Therefore, you hear about the logical communication of layers. Software processes at
different hosts are not communicating directly. Most likely, the hosts are not even
connected directly. Nevertheless, processes on one host manage to accomplish logical
communication with the corresponding processes on another host.
Note: The term peer-to-peer is often used in computing to indicate an application
architecture in which application tasks and workloads are equally distributed among
peers. In contrast, in client-server architectures, tasks and workloads are unequally divided.
Applications create data. The intended recipient of this data is the application at the
destination host, which can be distant. In order for application data to reach the recipient,
it first needs to reach the directly connected physical network. In the process, the data is
said to pass down the local protocol stack. First, an application protocol takes user data
and processes it. When processing by the application protocol is done, it passes processed
data down to the transport layer, which does its processing. The logic continues down the
rest of the protocol stack until data is ready for the physical transmission. The data
processing that happens as data traverses the protocol stack alters the initial data, which
means that original application data is not the same as the data represented in the
electromagnetic signal transmitted. On the receiving side, the process is reversed. The
signals that arrive at the destination host are received from the media by the link layer,
which serves data to the internet layer. From there, data is passed up the stack all the way
to the receiving application. Again, the data received as the electromagnetic signal is different from the data that will be delivered to the application, but the data that the receiving application sees is the same data that the sending application created.
Passing data up and down the stack is also referred to as vertical communication. For the horizontal, peer-to-peer communication of layers to happen, vertical communication must first take place, down the stack on the sending host and up the stack on the receiving host.
As data passes down or up the stack, the unit of data changes—and so does its name. The generic term used for a data unit, regardless of where it is found in the stack, is a protocol data unit (PDU). Its name depends on where it exists in the protocol stack.
Although there is no universal naming convention for PDUs, they are typically named as
follows:

• Data: general term for the PDU that is used at the Application layer
• Segment: transport layer PDU
• Packet: internet layer PDU
• Frame: link layer PDU
To look into PDUs from peer-to-peer communication, you can use a packet analyzer, such
as Wireshark, which is a free and open-source packet analyzer. Packet analyzers capture
all the PDUs on a selected interface. They then examine the content of each PDU, interpret it, and display it in text form or in a graphical interface. Packet analyzers, sometimes also called
sniffers, are used for network troubleshooting, analysis, software and communications
protocol development, and education.
The figure shows a screenshot of a Wireshark capture, which was started on a LAN
Ethernet interface. Wireshark organizes captured information into three windows. The
top window shows a table listing all captured frames. This listing can be filtered to ease
analysis. In the example, the filter is set to show only frames that carry DNS protocol data.
The second (middle) window, the details pane, shows the details of one frame selected from the list. Information is given first for the lower layers. For each layer, the information
includes data added by the protocol at that layer. In the third window (not shown in the
figure), the bytes pane displays information selected in the details pane, as it was
captured, in bytes.
In the figure, you can also see how Wireshark organizes analyzed information. In the
details pane, it displays data it finds in headers. It organizes header information by layers,
starting with the Link layer header and proceeding to the application layer. If you look
closely at the display of each header, you will see that information is organized into
meaningful groups—each group is recognizable by a name, followed by a colon and a value, for example, "Source: Cisco_29:ec:52 (04:fe:7f:29:ec:52)" or "Time to live: 127."
These groupings correspond to how information is organized in the header. Headers have
fields and the names Wireshark uses correspond to header field names. For instance,
Source and Destination in Wireshark correspond to Source Address and Destination
Address fields of a header.
2.6 Introducing the Host-To-Host Communications Model
Encapsulation and De-Encapsulation
Information that is transmitted over a network must undergo a process of conversion at
the sending and receiving ends of the communication. The conversion process is known as
encapsulation and de-encapsulation of data. Both processes provide means for
implementation of the concept of horizontal communication where the layer on the
transmitting side is communicating with the corresponding layer on the receiving side.
Have you ever opened a very large present and found a smaller box inside? And then an
even smaller box inside that one, until you got to the smallest box and, finally, to your
present? The process of encapsulation operates similarly in the TCP/IP model. The
application layer receives the user data and adds to it its information in the form of a
header. It then sends it to the transport layer. This process corresponds to putting a
present (user data) into the first box (a header), and adding some information on the box
(application layer data). The transport layer also adds its own header before sending the
package to the Internet layer, placing the first box into the second box and writing some
transport-related information on it. This second box must be larger than the first one to fit
the content. This process continues at each layer. The link layer adds a trailer in addition
to the header. The data is then sent across the physical media.
Note: Encapsulation increases the size of the PDU. The added information is required for
handling the PDU and is called overhead to distinguish it from user data.

The figure represents the encapsulation process. It shows how data passes through the
layers down the stack. The data is encapsulated as follows:
1. The user data is sent from a user application to the application layer, where the
application layer protocol adds its header. The PDU is now called data.
2. The transport layer adds the transport layer header to the data. This header
includes its own information, indicating which application layer protocol has sent
the data. The new data unit is now called a segment. The segment will be further
treated by the Internet layer, which is the next to process it.
3. The Internet layer encapsulates the received segment and adds its own header to
the data. The header and the previous data become a packet. The Internet layer
adds the information used to send the encapsulated data from the source of the
message across one or more networks to the final destination. The packet is then
passed down to the Link layer.
4. The Link layer adds its own header and also a trailer to form a frame. The trailer is
usually a data-dependent sequence, which is used to check for transmission errors.
An example of such a sequence is a Frame Check Sequence (FCS). The receiver will
use it to detect errors. This layer also converts the frame to a physical signal and
sends it across the network using physical media.

At the destination, each layer looks at the information in the header added by its
counterpart layer at the source. Based on this information, each layer performs its
functions and removes the header before passing it up the stack. This process is
equivalent to unpacking a box. In networking, this process is called de-encapsulation.
The de-encapsulation process is like reading the address on a package to see if it is
addressed to you and then, if you are the recipient, opening the package and removing
the contents of the package.
The following is an example of how the destination device de-encapsulates a sequence of
bits:
1. The link layer reads the whole frame and looks at both the frame header and the
trailer to check if the data has any errors. Typically, if an error is detected, the
frame is discarded, and other layers may ask for the data to be retransmitted. If
the data has no errors, the link layer reads and interprets the information in the
frame header. The frame header contains information relevant for further
processing, such as the type of encapsulated protocol. If the frame header
information indicates that the frame should be passed to upper layers, the link
layer strips the frame header and trailer and then passes the remaining data up to
the Internet layer to the appropriate protocol.
2. The internet layer examines the internet header in the packet received from the
link layer. Based on the information it finds in the header, it decides either to
process the packet at the same layer or to pass it up to the transport layer. Before
the internet layer passes the message to the appropriate protocol on the transport
layer, it first removes the packet header.
3. The transport layer examines the segment header of the received segment. The
information included in the segment header indicates which application layer
protocol should receive the data. The transport layer strips the segment header
from the segment and hands over data to the appropriate application layer
protocol.
4. The application layer protocol strips the data header. It uses the information in the
header to process the data before passing it to the user application.
Not all devices process PDUs at all layers. For instance, a switch might only process a PDU
at the link layer, meaning that it will “read” only frame information that is contained in the
frame header and trailer. Based on the information found in the frame header and trailer,
the switch will either forward the frame unchanged out of a specific port, forward it out all
ports except for the incoming port, or discard the frame if it detects errors. Routers might
look "deeper" into the PDU. A router de-encapsulates the frame header and trailer and
relies on the information contained in the packet header to make its forwarding
decisions. If the router is filtering the packets, it may also look even deeper, into the
information contained in the segment header before it decides on what to do with the
packet.
A host performs encapsulation as it sends data and performs de-encapsulation as it
receives it; it can perform both functions simultaneously as part of multiple
communications it maintains.
Note: In networking, you will often encounter the usage of both OSI and TCP/IP models,
sometimes even interchangeably. You should be familiar with both, so you can
competently communicate with network engineers.
2.7 Introducing the Host-To-Host Communications Model
TCP/IP Stack vs OSI Reference Model
The OSI model and the TCP/IP stack were developed by different organizations at
approximately the same time. The purpose was to organize and communicate the
components that guide the transmission of data.
The speed at which the TCP/IP-based Internet was adopted and the rate at which it
expanded caused the OSI protocol suite development and acceptance to lag behind.
Although few of the protocols that were developed using the OSI specifications are in
widespread use today, the seven-layer OSI model has made major contributions to the
development of other protocols and products for all types of new networks.

The layers of the TCP/IP stack correspond to the layers of the OSI model:

• The TCP/IP link layer corresponds to the OSI physical and data link layers and is
concerned primarily with interfacing with network hardware and accessing the
transmission media. Like layer two of the OSI model, the link layer of the TCP/IP
model is concerned with hardware addresses.
• The TCP/IP internet layer aligns with the network layer of the OSI model and
manages the addressing of and routing between network devices.
• The TCP/IP transport layer, like the OSI transport layer, provides the means for
multiple host applications to access the network layer in a best-effort mode or
through a reliable delivery mode.
• The TCP/IP application layer supports applications that communicate with the
lower layers of the TCP/IP model and corresponds to the separate application,
presentation, and session layers of the OSI model.
Because the functions of each OSI layer are clearly defined, the OSI layers are used when
referring to devices and protocols.
Take, for example, a “Layer 2 switch,” which is a LAN switch. The “Layer 2” in this case
refers to the OSI Layer 2, making it easy for people to know what is meant, as they
associate the OSI Layer 2 with a clearly defined set of functions.
Similarly, it is often said that IP is a “network layer protocol” or a “Layer 3 protocol” as the
TCP/IP’s internet layer can be matched to the OSI network layer.
Next, look at the TCP/IP transport layer, which corresponds to the OSI transport layer. The
functions defined at both layers are the same. However, different specific protocols are
involved. Because of this, it is common to refer to the TCP and UDP as “Layer 4 protocols”
again using the OSI layer number.
Another example is the term “Layer 3 switch.” A switch was traditionally thought of as a
device that works on the link layer level (Layer 2 of the OSI model). A Layer 3 switch is also
capable of providing Internet Layer (Layer 3 of the OSI model) services, which were
traditionally provided by routers.
It is very important to remember that the OSI model terminology and layer numbers are
often used rather than the TCP/IP model terminology and layer numbers when referring
to devices and protocols.

3.1 Operating Cisco IOS Software


Introduction
Just as a personal computer or a smartphone improves individual productivity, an efficient
internetwork improves the productivity of large groups of people, especially in an
enterprise environment. Both enterprises and small offices/home offices (SOHO) users
represented in the Cisco Enterprise Architecture Model depend on a sophisticated
operating system (also implemented in software) to effectively connect users within the
enterprise and all over the world.
An internetwork's intelligence lies in its operating system. Network hardware inevitably
changes every few years with the introduction of new generations of processors,
switching, and memory components. But the internetwork's software is the unifying
thread that connects otherwise disparate networks and provides a scalable migration path
as needs evolve.
Just as enterprises invest in network operating systems that can evolve as new hardware
and applications are introduced, Cisco's IOS supports change and migration through
integration in all evolving classes of network platforms. This includes routers, switches,
and other devices that have an impact on an organization's internetwork. Cisco IOS is an
operating system that implements and controls the logic and functions of many Cisco
devices, and it enables companies to build a single, integrated information systems
infrastructure.

Operating multitasking software is part of your job as a networking engineer. It all begins with the CLI, the primary user interface used for configuring, monitoring, and maintaining Cisco devices.
As a networking engineer, you will be able to operate Cisco IOS software and perform
various essential tasks, including:

• Use CLI to enter commands.


• Use the built-in help functionality and features in the CLI.
• Navigate various modes in the Cisco IOS CLI and their hierarchical structure.

3.2 Operating Cisco IOS Software


Cisco IOS Software Features and Functions
Like many common end devices (such as laptops, servers, and mobile phones), network
intermediary devices need an operating system to function. An operating system is the
software that manages hardware resources, for example, memory allocation and
input/output functions, and manages interaction among different hardware components.
Many Cisco devices run Cisco IOS Software. The operating system software includes basic
networking functions, but it may also include advanced features, such as management
and security services, quality of service (QoS), or call processing features. Examples of
devices that use Cisco IOS Software are routers, LAN switches, small wireless access points
(APs), and so on. The main function of Cisco IOS Software is to provide network features
and functions.
Cisco IOS Software delivers the following network features and functions:

• Support for basic and advanced networking functions and protocols


• Connectivity for high-speed traffic transmission
• Security for access control and prevention of unauthorized network use
• CLI-based and GUI-based access enabling users to execute configuration
commands
• Scalability to allow adding hardware and software components
• Reliability to ensure dependable access to networked resources

Networking devices run particular versions of the Cisco IOS Software. The IOS version
depends on the type of device being used and the required features. While all devices
come with a default IOS and feature set, it is possible to upgrade the IOS version and
feature set to obtain additional capabilities.
The portion of the operating system that interfaces with applications and the user is a
program known as a shell. Unlike common end devices, Cisco network devices do not have
a keyboard, monitor, or mouse device to allow direct user interaction. However, users can
interact with the shell using their own computer and accessing a CLI or a GUI. The figure
illustrates the examples of CLI-based and GUI-based access to a shell.
When using a CLI, the user interacts directly with the system in a text-based environment
by entering commands on the keyboard at a command prompt. The system executes the
command, often providing textual output. The CLI requires very little overhead to operate.
But the user must know the underlying structure that controls the system.
GUIs may not always be able to provide all features that are available in the CLI. Some
tasks will require you to use the CLI because they are not supported in the GUI.

3.3 Operating Cisco IOS Software


Cisco IOS Software CLI Functions
One way that you can access the CLI is through a direct, cabled connection that is called
the console connection. To access the CLI directly using a console connection, you must be
physically present at the location of the device. Accessing a device CLI through a console
connection is also called out-of-band (OOB) access, emphasizing that no network
bandwidth is consumed in the process.
Another way to access the CLI is through the network using protocols that are designed to
provide remote access to the CLI, such as Secure Shell (SSH) and Telnet. SSH is a secure
method to remotely access the CLI. Telnet is an unsecured method of establishing a CLI
session. Unlike a console connection, SSH and Telnet connections require an active
networking service running on the device. Because these CLI connections consume
network bandwidth, they are also called in-band access. Network-based access to the CLI
lets you be somewhere other than the location of the accessed device. Remote CLI access
is convenient for network engineers, but it is subject to security risks inherently present in
the networks used in remote access.
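As a simple illustration, the following minimal sketch shows how in-band SSH access might be prepared on a Cisco device. The exact commands vary by platform and software version, and the hostname SW1, the domain name example.com, and the username and password shown here are hypothetical values used only for this example; the configuration modes used here are covered later in this course.

SW1(config)# ip domain-name example.com
SW1(config)# crypto key generate rsa modulus 2048
SW1(config)# username admin secret Examp1ePass
SW1(config)# line vty 0 4
SW1(config-line)# transport input ssh
SW1(config-line)# login local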

Regardless of which connection method you use, access to the Cisco IOS CLI is generally
referred to as an executive or EXEC session. The features that you can access via the CLI
vary according to the version of Cisco IOS Software installed and the type of device.
Note: Some devices, such as routers, may also support a legacy auxiliary port that was
used to establish a CLI session remotely using a modem. Similar to a console connection,
the auxiliary (AUX) port is OOB and does not require networking services to be configured
or available.
The services that are provided by Cisco IOS Software are generally accessed using a CLI.
The CLI is a text-based interface that is similar to the old Microsoft operating system that
is called MS-DOS.
Once you access the shell via the CLI or GUI, you can enter different commands.
Commands are used to configure, monitor, and manage the device and are executed by
the device operating system. While Cisco IOS Software provides core software that
extends across many products, the details of its operation and also the available services
may vary across different devices. Therefore, different devices will have different
commands available for execution.
Cisco IOS software CLI functions

• The CLI is used to enter commands.


• Operations vary on different internetworking devices.
• Users type in or copy and paste entries in the console command modes.
• Command modes have distinctive prompts.
• Pressing Enter instructs the device to parse (translate) and execute the command.
• The two primary EXEC modes are user mode and privileged mode.

Cisco IOS Software is designed as a modal operating system. The term modal describes a
system that has various modes of operation. Each mode has its own set of commands and
command history and is intended for usage for a specific group of tasks. The CLI uses a
hierarchical structure for the modes. This hierarchy starts with the least specific command
mode or higher-level mode and proceeds with more specific or lower-level command
modes. A more specific command mode can be entered from the less specific mode,
which precedes it in the hierarchy.
To enter commands into the CLI, type in or copy and paste the entries within one of the
several console command modes. Each command mode is indicated with a distinctive
visual prompt. The term prompt is used because the system is prompting you to make an
entry. Pressing enter instructs the device to parse and execute the command.
Note: It is important to remember that the command is executed as soon as you enter it.
If you enter an incorrect command on a production router, it can negatively affect the
network.

Each command mode has a name and a distinctive visual prompt by which it can be
recognized. By default, every prompt begins with the device name. Following the device
name, the remainder of the prompt uses special characters and words to indicate the
mode. As you use commands and change the operation mode, the prompt changes to
reflect the current context. To enter a command, you can either type them in or copy and
paste the entries. Once you are done, press Enter and the device will parse and execute
the command if the command was entered correctly.
The example in the figure shows a CLI prompt switch>. The device in the example is
named switch, and the operating CLI mode is indicated by the greater-than sign (>).
As a security feature, to limit the commands that a user can view and execute, Cisco IOS Software separates CLI sessions into two primary access levels, as shown in the example after this list:

• User EXEC: Allows a person to execute only a limited number of basic monitoring
commands.
• Privileged EXEC: Allows a person to execute all device commands, for example, all
configuration and management commands. This level can be password protected
to allow only authorized users to execute the privileged set of commands.
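For example, on a hypothetical switch named SW1, the enable command moves the session from user EXEC to privileged EXEC, and the disable command returns it to user EXEC. The change of access level is visible in the prompt (the password prompt appears only if an enable password or secret has been configured):

SW1> enable
Password:
SW1# disable
SW1>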
3.4 Operating Cisco IOS Software
Cisco IOS Software Modes
Cisco IOS Software has various modes that are hierarchically structured. The highest
hierarchy is the user EXEC mode. It is followed by the privileged EXEC mode. From the
privileged EXEC mode, you can proceed to the global configuration mode and from there
to more specific configuration modes such as interface configuration mode and router
configuration mode, as shown below.

Because these modes have a hierarchy, you can only access a lower-level mode from a
higher-level mode. For example, to access Global Configuration Mode, you must be in the
Privileged EXEC mode. Each mode is used to accomplish particular tasks and has a specific
set of commands that are available in this mode. Interface-specific configuration
commands are available only in the Interface Configuration Mode. To access interface
configuration commands, your full path through operation mode hierarchy would be: User
EXEC Mode > Privileged EXEC Mode > Global Configuration Mode > Interface
Configuration Mode. All commands that you enter and execute in Interface Configuration
Mode apply only to the device interface you chose to configure.
You can tell the operation mode that you are in by looking at the prompt at the beginning
of the line. Normally when you connect to a device, you are allowed access to the User
EXEC Mode. In User EXEC Mode, you can change the console connection settings, perform
basic connectivity tests, and display system information, but you cannot configure the
device. To leave the User EXEC Mode (to close the console connection), you can use either
the logout, exit, or the quit commands.
To move between the modes, you must use predefined commands. The following table
offers an overview of basic IOS Software operation modes, commands or methods to access and leave them, their prompt identifications, and a short description.
You do not have to return to global configuration mode in order to move to a different
configuration mode. Rather, you can enter another configuration mode by typing the
appropriate command at any configuration mode prompt. (Note: you will not be able to
get any help for commands that are not valid at the prompt.)

The figure shows two configuration examples, both performing the same task of providing
descriptions for Ethernet 0/0 and Ethernet 0/1 interfaces. In the configuration on the left,
the administrator started in the Global Configuration Mode and entered the Interface
Configuration Mode by typing the command interface Ethernet 0/0. Note how the prompt
changed from SW1(config)# to SW1(config-if)#. In the second line, the administrator typed
the description command. In the next line, the administrator typed the interface Ethernet
0/1 command; this command causes the switch to enter Interface Configuration mode for
the Ethernet 0/1 interface. Note that the prompt did not change because the prompt does
not indicate the specific interface. The last line applies the description command to the
Ethernet 0/1 interface. In the example on the right, the same configuration is performed,
by exiting and re-entering the Interface Configuration Mode, as evident in the third and
the fourth lines. Both are valid configurations and have the same results.
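A sketch of the two equivalent configurations described above might look like the following; the interface description text is a hypothetical placeholder. In the first configuration, the administrator moves directly from one interface to the other:

SW1(config)# interface Ethernet 0/0
SW1(config-if)# description Link to printer
SW1(config-if)# interface Ethernet 0/1
SW1(config-if)# description Link to IP phone

In the second configuration, the administrator exits and re-enters Interface Configuration Mode between the two interfaces:

SW1(config)# interface Ethernet 0/0
SW1(config-if)# description Link to printer
SW1(config-if)# exit
SW1(config)# interface Ethernet 0/1
SW1(config-if)# description Link to IP phone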

4.1 Introducing LANs


Introduction
A small home business or a small-office environment can use a small LAN to connect two
or more computers and to connect the computers to one or more shared peripheral
devices, such as printers. A large corporate office can use multiple LANs to accommodate
hundreds of computers and shared peripheral devices, spanning many floors in an office
complex.
The enterprise campus LAN is the portion of the infrastructure that provides network
access to end users and devices located at the same geographical location. It may span
several floors in a single building or multiple buildings covering a larger geographical area.
The campus typically connects to a network core that provides access to other parts of the
network such as data centers, WAN, other campuses, and the internet.

As a networking engineer, you will work with LAN switches, and for successful completion
of various tasks, you first need to:
• Explain what a LAN is and be able to identify LAN components.
• Understand why you need switches.
• List important switch features and characteristics.

4.2 Introducing LANs


Local Area Networks
The LAN emerged to serve the needs of high-speed interconnections between the
computer systems. While there have been many types of LAN transports, Ethernet
became the favorite of businesses starting in the early 1990s. Since its introduction,
Ethernet bandwidth has scaled from the original shared-media 10 Mbps to 400 Gbps in
Cisco Nexus 9000 Series Switches for the data center.
A LAN is a network of endpoints and other components that are located relatively close
together in a limited area.

LANs can vary widely in size. A LAN may consist of only two computers in a home office or
small business, or it may include hundreds of computers in a large corporate office or
multiple buildings. A LAN is typically a network within your own premises (your
organization's campus, building, office suite, or even your home). Organizations or
individuals typically build and own the whole infrastructure, all the way down to the
physical cabling.
The defining characteristics of LANs, in contrast to WANs, include their typically higher
data transfer rates, smaller geographic area, and the lack of need for leased
telecommunication lines.
A WAN is a data communications network that provides access to other networks over a
large geographical area. WANs use facilities that an ISP or carriers, such as a telephone or
cable company, provides. The provider connects locations of an organization to each
other, to locations of other organizations, to external services, and remote users. WANs
carry various traffic types such as voice, data, and video.
4.3 Introducing LANs
LAN Components
On the first LANs, devices with Ethernet connectivity were mostly limited to PCs, file
servers, print servers, and legacy devices such as hubs and bridges. Hubs and bridges were
replaced by switches and are no longer used.
Today, a typical small office will include routers, switches, access points (APs), servers, IP
phones, mobile phones, PCs, and laptops.

Regardless of its size, a LAN requires these fundamental components for its operation:

• Hosts: Hosts include any device that can send or receive data on the LAN.
Sometimes hosts are also called endpoints. Those two terms are used
interchangeably throughout the course.
• Interconnections: Interconnections allow data to travel from one point to another
in the network. Interconnections include these components:
o Network Interface Cards (NICs): NICs translate the data that is produced by
the device into a frame format that can be transmitted over the LAN. NICs
connect a device to the LAN over copper cable, fiber-optic cable, or
wireless communication.
o Network media: In traditional LANs, data was primarily transmitted over
copper and fiber-optic cables. Modern LANs (even small home LANs)
generally include a wireless LAN (WLAN).
• Network devices: Network devices, like switches and routers, are responsible for
data delivery between hosts.
o Ethernet switches: Ethernet switches form the aggregation point for LANs.
Ethernet switches operate at Layer 2 of the Open Systems Interconnection
(OSI) model and provide intelligent distribution of frames within the LAN.
o Routers: Routers, sometimes called gateways, provide a means to connect
LAN segments and provide connectivity to the internet. Routers operate at
Layer 3 of the OSI model.
o APs: APs provide wireless connectivity to LAN devices. APs operate at Layer
2 of the OSI model.
• Protocols: Protocols are rules that govern how data is transmitted between
components of a network. Here are some commonly used LAN protocols:
o Ethernet protocols (IEEE 802.2 and IEEE 802.3)
o IP
o TCP
o UDP
o Address Resolution Protocol (ARP) for IPv4 and Neighbor Discovery
Protocol (NDP) for IPv6
o Common Internet File System (CIFS)
o DHCP
Functions of a LAN
LANs provide network users with communication and resource-sharing functions:

• Data and applications: When users are connected through a network, they can
share files and even software applications. This capability makes data more easily
available and promotes more efficient collaboration on work projects.
• Resources: The resources that can be shared include input devices, such as
cameras, and output devices, such as printers.
• Communication path to other networks: If a resource is not available locally, the
LAN can provide connectivity via a gateway to remote resources, such as the
internet.

4.4 Introducing LANs


Need for Switches
When you connect three or more devices, you need a dedicated network device to enable
communication between these hosts. Switches were introduced to LANs to divide a
network into segments.
A segment is a network connection that is made by a single unbroken network cable.
Ethernet cables and segments can span only a limited physical distance.
Historically, when network devices had few network segments, endpoints shared the
same media. Network segments that share the same media are known as collision
domains because frames may collide with each other. A network collision occurs when
two or more devices connected by a shared medium try to communicate at the same
time. In a collision domain, only one device was able to transmit at a time, while other devices had to wait before transmitting to avoid collisions. The total bandwidth was shared across all host devices on the shared media. Collisions also decrease network efficiency because host devices have to wait and then retransmit the data at another time.
Today, switches operating at the link layer divide a network into segments and reduce the number of devices that share the total bandwidth. Each segment then becomes its own collision domain, and with only one device per segment, collisions no longer occur.
However, switches have additional functionality and can also be a solution for the typical
causes of network congestion.
The most common causes of network congestion are as follows:

• Increasingly powerful computer and network technologies: CPUs, buses, and peripherals are consistently becoming faster and more powerful; therefore, they
can send more data at higher rates through the network.
• Increasing volume of network traffic: Network traffic is now more common, as
remote resources are used and are even necessary to carry out basic work.
• High-bandwidth applications: Software applications are becoming richer in their
functionality and are requiring more bandwidth to process. Applications such as
desktop publishing, engineering design, VoD, e-learning, and streaming video all
require considerable processing power and speed. This richer functionality puts a
large burden on networks to manage the transmission of their files and requires
sharing of the applications among users.
The figure shows that each switch interface (also called a switch port) connects to a single
PC or server. Each switch port represents a segment. By default, a switch and all
interconnected switches belong to a single LAN.

Switches have the following features and functions:

• Operate at the link layer of the TCP/IP protocol suite


• Selectively forward individual frames
• Have many ports to segment a large LAN into many smaller segments
• Have high speed and support various port speeds
The main purpose of a switch is to forward frames as fast and as efficiently as possible.
When a switch receives a frame on an input interface, it buffers that frame until the
switch performs the required processing and is ready to transmit the frame out an exit
interface. If switches did not have frame buffers, then the frames would be dropped when
the congestion occurs, or the link becomes saturated.
Ethernet switches selectively forward individual frames from the source port to the
destination port.

4.5 Introducing LANs


Characteristics and Features of Switches
Switches have become a fundamental part of most networks. LAN switches have special
characteristics that make them effective in alleviating network congestion by increasing
effective network bandwidth.

Switches provide the following important functions, resulting in even greater benefits for
eliminating network congestion:

• Dedicated communication between devices: This increases frame throughput. Switches with one user device per port have microsegmented the network. In this
type of configuration, each user receives access to the full bandwidth and does not
have to contend with other users for available bandwidth. As a result, collisions do
not occur.
• Multiple simultaneous conversations: Multiple simultaneous conversations can
occur by forwarding or switching several packets at the same time, increasing
network capacity by the number of conversations that are supported. For example,
when frames are being forwarded between ports 1 and 2, another conversation
can be happening between ports 5 and 6. This multiplication is possible because of
input/output (I/O) buffers and fast internal transfer speeds between ports. A
switch that can support all possible combinations of frame transfers between all
ports simultaneously offers wire-speed and nonblocking performance. Of course,
this class of switch is relatively expensive.
• Full-duplex communication: After a connection is microsegmented, it has only two
devices (the switch and the host). It is now possible to configure the ports so they
can both receive and send data at the same time, which is called full-duplex
communication. For example, point-to-point 100-Mbps connections have 100
Mbps of transmission capacity and 100 Mbps of receiving capacity for an effective
200-Mbps capacity on a single connection. The configuration between half duplex
and full duplex is automatically negotiated at the initial establishment of the link
connection. Half duplex means that there is a transmission of data in just one
direction at a time.
• Media-rate adaptation: A LAN switch that has ports with different media rates can adapt between those rates, for example, between 10, 100, and 1000 Mbps; 1 and 10 Gbps; 1, 10, and 25 Gbps; 40 Gbps; and 100 Gbps. This adaptability allows bandwidth to be matched as needed. Without this ability, it would not be possible to have ports of different media rates operating at the same time.

Switches connect LAN segments, determine the segment to send the data, and reduce
network traffic. Some important characteristics of switches:

• High port density: Switches have high port densities: 24-, 32-, and 48-port
switches operate at speeds of 100 Mbps, 1 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, and
100 Gbps. Large enterprise switches may support hundreds of ports.
• Large frame buffers: The ability to store more received frames before having to
start dropping them is useful, particularly when there may be congested ports
connected to servers or other heavily used parts of the network.
• Port speed: Depending on the switch, ports may support a range of bandwidths. Ports of 100 Mbps, 1 Gbps, and 10 Gbps are expected, but 40- or 100-Gbps ports allow even more flexibility.
• Fast internal switching: Having fast internal switching allows higher bandwidths:
100 Mbps, 1 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, and 100 Gbps.
• Low per-port cost: Switches provide high port density at a lower cost. For this
reason, LAN switches can accommodate network designs that feature fewer users
per segment. This feature, therefore, increases the average available bandwidth
per user.
Switches use ASICs, which are fundamental to how an Ethernet switch works. An ASIC is a
silicon microchip designed for a specific task (such as switching or routing packets) rather
than general-purpose processing such as a CPU. A generic CPU is too slow for forwarding
traffic in a switch. While a general-purpose CPU may be fast at running a random
application on a laptop or server, manipulating and forwarding network traffic is a
different matter. Traffic handling requires constant lookups against large memory tables.

5.1 Exploring the TCP/IP Link Layer


Introduction
When users want to communicate either in an Enterprise environment, at home, or
anywhere in the world, they need to have a way of being connected to the network by
physical media. This is where the TCP/IP link layer becomes vital. There are different ways
of being connected to the network, either using wired or wireless connectivity. In most
enterprise environments and at home, wireless communication is becoming more
common for end users. Still, for the majority of other network devices, Ethernet is the
basis of all enterprise communication.
The TCP/IP Link layer contains Ethernet and other protocols that computers use to deliver
data to the other computers and devices attached to the network. Unlike higher-level
protocols, the Link layer protocols must understand the details of the underlying physical
network, such as the structure of its protocol data units and the physical addressing scheme that is
used. Understanding the details and constraints of the physical network ensures that
these protocols can format the data correctly to be transmitted across the network. Keep
in mind how important the physical characteristics of the transmission medium are. They
include different cables, connectors, use of pins, electrical currents, encoding, light
modulation, and the rules for how to activate and deactivate the use of the physical
medium. These characteristics are essential in building any kind of Enterprise or home
network environment.
The design of TCP/IP hides the function of the Link layer from users; it focuses on getting data across a specific type of physical network (such as Ethernet). But as a
networking engineer designing, building, and troubleshooting Enterprise networks, you
need to understand the essential ideas:

• Distinguishing different Ethernet media options, including the most common connectors and cable types.
• Identifying the Ethernet frame structure, MAC addresses, and their functions.
• Understanding Ethernet switches operations and duplex options thoroughly.

5.2 Exploring the TCP/IP Link Layer


Ethernet LAN Connection Media
To connect a switch to a LAN, you must use some sort of media. The most common LAN
media is Ethernet. Ethernet is not just a type of cable or protocol. It is a network standard
published by the IEEE. So you may hear various Ethernet terms, such as Ethernet
protocols, Ethernet cables, Ethernet ports, and Ethernet switches. IEEE 802.3 and Ethernet
are often used synonymously, although they have some differences. The term Ethernet is
more common, and IEEE 802.3 is usually used when referring to a specific part of the
standard, such as a particular frame format. Ethernet is a set of guidelines that enable
various network components to work together. These guidelines specify cabling and
signaling at the physical and data link layers of the OSI model. For example, Ethernet
standards recommend different types of cabling and specify maximum segment lengths
for each type.
The names of the standards (shown in the table) specify the transmission speed, the type
of signaling, and the type of cabling. For example, the standard name 1000BASE-T denotes
the following:

• 1000: Specifies a transmission speed of 1000 Megabits per second (Mbps) or 1 Gigabit per second (Gbps)
• BASE: Refers to baseband signaling (which means that only Ethernet signals are
carried on the medium)
• T: Represents twisted-pair cabling.
Twisted-pair cabling is a type of wiring in which two conductors are twisted together for
the purpose of canceling EMI from external sources.

• The mechanical properties of Ethernet depend on the type of physical medium.


o Coaxial (not used anymore)
o Twisted pair copper
o Fiber optics
• Ethernet was originally based on the concept of computers communicating over a
shared coaxial cable.

Copper Media
Most Ethernet networks use unshielded twisted-pair (UTP) copper cabling for short and
medium-length distances because of its low cost compared to fiber-optic or coaxial cable.
Ethernet over twisted-pair technologies uses twisted-pair cables for the physical layer of
an Ethernet computer network. Twisted-pair cabling is a type of wiring in which two
conductors—the forward and return conductors of a single circuit—are twisted together
for the purposes of canceling EMI from external sources (for example, electromagnetic
radiation from UTP cables and crosstalk between neighboring pairs).
A UTP cable is a four-pair wire. Each of the eight individual copper wires in a UTP cable is
covered by an insulating material. In addition, the wires in each pair are twisted around
each other. The advantage of a UTP cable is its ability to cancel interference because the
twisted-wire pairs limit signal degradation from EMI and radio frequency interference
(RFI). To further reduce crosstalk between the pairs in a UTP cable, the number of twists in
the wire pairs varies. Cables must follow precise specifications regarding how many twists
or braids are permitted per meter.
A UTP cable is used in various types of networks. When used as a networking medium, a
UTP cable has four pairs of either 22- or 24-gauge copper wire. A UTP cable that is used as
a networking medium has an impedance of 100 ohms, differentiating it from other types
of twisted-pair wiring, such as what is used for telephone wiring. A UTP cable has an
external diameter of approximately 0.43 cm (0.17 inches), and its small size can be
advantageous during installation.
Several categories of UTP cable exist:

• Category 5: Capable of transmitting data at speeds of up to 100 Mbps


• Category 5e: Used in networks running at speeds of up to 1000 Mbps (1 Gbps)
• Category 6: Comprises four pairs of 24-gauge copper wires, which can transmit
data at speeds of up to 10 Gbps
• Category 6a: Used in networks running at speeds of up to 10 Gbps
• Category 7: Used in networks running at speeds of up to 10 Gbps
• Category 8: Used in networks running at speeds of up to 40 Gbps
RJ-45 Connector and Jack
UTP cables are used with RJ-45 connectors. The figure shows a UTP cable with an RJ-45
connector and a jack.

The RJ-45 plug is the male component, which is crimped at the end of the cable. As you
look at the male connector from the front, as shown in the figure, the pin locations are
numbered from 8 on the left to 1 on the right.
The jack is the female component in a network device, wall, cubicle partition outlet, or
patch panel. As you look at the female connector from the front, as shown in the figure,
the pin locations are numbered from 1 on the left to 8 on the right.
Power over Ethernet
Power over Ethernet (PoE) describes systems that pass electric power along with data on
Ethernet cabling. This action allows a single Ethernet cable to provide both data
connection and electric power to devices such as wireless access points, IP cameras, and
VoIP phones by utilizing all four pairs in the Category 5 cable or above.
Straight-Through or a Crossover UTP Cable?
When choosing a UTP cable, you must determine whether you need a straight-through
UTP cable or a crossover UTP cable. Straight-through cables are primarily used to connect electrically unlike devices, and crossover cables are used to connect electrically like devices. For example, two like devices receive on the same pins, so the wires must be crossed so that the transmit pins on one end connect to the receive pins on the other end.
To tell the difference between the two types of cabling, hold the ends of the cable next to
each other, with the connector side of each end facing you. As shown in the figure, the
cable is a straight-through cable if each of the eight pins corresponds to the same pin on
the opposite side. The cable is a crossover cable if some of the wires on one end of the
cable are crossed to a different pin on the other side of the cable, as shown in the figure.
Note: The need for crossover cables is considered legacy because most devices now use
straight-through cables and can internally cross-connect when a crossover is required.
When automatic medium-dependent interface crossover (auto-MDIX) is enabled on an
interface, the interface automatically detects the required cable connection type (straight-
through or crossover) and configures the connection appropriately. With auto-MDIX
enabled, you can use either type of cable to connect to other devices, and the interface
automatically corrects for any incorrect cabling.
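On many Cisco Catalyst switches, auto-MDIX is enabled by default, but it can also be configured explicitly on an interface. The following minimal sketch assumes a hypothetical interface GigabitEthernet0/1; speed and duplex are typically set to auto so that auto-MDIX can operate correctly:

SW1(config)# interface GigabitEthernet0/1
SW1(config-if)# speed auto
SW1(config-if)# duplex auto
SW1(config-if)# mdix auto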

The following figure shows when to use straight-through and crossover cables.
Optical Fiber

An optical fiber is a flexible, transparent fiber that is made of very pure glass (silica) and is
not much larger than human hair. It acts as a waveguide, or "light pipe," to transmit light
between the two ends of the fiber. Optical fibers are widely used in fiber-optic
communication, which permits transmission over longer distances and at higher
bandwidths (data rates) than other forms of communication. Fibers are used instead of
metal wires because signals travel along them with less loss, and they are immune to EMI.
The two fundamental components that allow a fiber to confine light are the core and the
cladding. Most of the light travels from the beginning to the end inside the core. The
cladding around the core provides confinement. The diameters of the core and cladding
are shown in the figure, but the core diameter may vary for various fiber types. In this
case, the core diameter of 9 micrometers is very small. (The diameter of a human hair is
about 50 micrometers.) The outer diameter of the cladding is a standard size of 125
micrometers. Standardizing the size means that component manufacturers can make
connectors for all fiber-optic cables.
The third element in this picture is the buffer (coating), which has nothing to do with the
confinement of the light in the fiber. Its purpose is to protect the glass from scratches and
moisture. The fiber-optic cable can be easily scratched and broken. If the fiber is
scratched, the scratch could propagate and break the fiber. Another important role of the
buffer is to keep the fiber dry.
Fiber Types
The most significant difference between multimode fiber (MMF) and single-mode fiber
(SMF) is in the ability of the fiber to send light over a long distance at high bit rates. In
general, MMF is used for shorter distances, while SMF is preferred for long-distance
communications. There are many variations of fiber for both MMF and SMF.
The most significant physical difference is in the size of the core. The glass in the two
fibers is the same, and the index of refraction (a way of measuring the speed of light in a
material) between the core and the cladding changes similarly. The diameter of the fiber
cladding is also the same. However, the core is a different size, which affects how the light
gets through the fiber. MMF supports multiple ways for the light from one source to travel
through the fiber— which is why it is called “multimode." Each path can be thought of as a
mode.
For SMF, the possible ways for light to get through the fiber have been reduced to one—a
"single mode." It is not exactly one, but it is a useful approximation.
MMF devices use an LED as the light source, which facilitates short-distance transmission. On the other hand, SMF devices use a laser to generate the signal, which provides higher transmission rates over longer distances.
The table summarizes MMF and SMF characteristics.
An optical fiber connector terminates the end of an optical fiber. Various optical fiber
connectors are available. The main differences among the types of connectors are the
dimensions and methods of mechanical coupling. Generally, organizations standardize on
one type of connector, depending on the equipment that they commonly use, or they
standardize per type of fiber (one for MMF and one for SMF). There are about 70
connector types in use today.
The three types of connectors follow:
• Threaded
• Bayonet
• Push-pull
Connectors are made of the following materials:
• Metal
• Plastic sleeve
Here is a list of the most common types of fiber connectors and their typical uses:
• LC: is for enterprise equipment and is commonly used on small form-factor
pluggable (SFP) modules.
• SC: is for enterprise equipment.
• ST: is for patch panels (for their durability).
• FC: is for patch panels and is used by service providers.
• MT-RJ: is a two-fiber connector (transmit and receive) with a small form factor
and is used for enterprise equipment.
In data communications and telecommunications applications today, small form factor
(SFF) connectors (for example, LCs) are replacing the traditional connectors (for example,
SCs) mainly to pack more connectors on the faceplate and, as a result, reduce system
footprints.
SFP and SFP+ Transceivers
The SFP transceiver modules are hot-pluggable I/O devices that plug into module sockets.
The transceiver connects the electrical circuitry of the module with the optical or copper
network. In LAN networking devices, SFP modules support Ethernet speeds up to 1 Gbps.
An optical SFP transceiver module with a fiber-optic LC connector is shown in the figure.
The SFP+ transceivers are an enhanced version of SFP transceivers. In LAN networking
devices, SFP+ modules support 10 Gbps Ethernet. SFP and SFP+ modules look the same.
SFP and SFP+ modules can be used in combination with LC or RJ45 connectors.
Different Cisco networking devices support different SFP and SFP+ modules. Different SFP
and SFP+ modules also support different types and lengths of fiber optic cables. You
should always check the device specifications and compatibility information.
5.3 Exploring the TCP/IP Link Layer
Ethernet Frame Structure
Bits that are transmitted over an Ethernet LAN are organized into frames.
In Ethernet terminology, the container into which data is placed for transmission is called
a frame. The frame contains header information, trailer information, and the actual data
that is being transmitted.
There are several types of Ethernet frames; the Ethernet II frame is the most common
type and is shown in the figure. This frame type is often used to send IP packets.
The table shows the fields of an Ethernet II frame, which are:
• Preamble: This field consists of 8 bytes of alternating 1s and 0s that are used to
synchronize the signals of the communicating computers.
• Destination Address (DA): The DA field contains the MAC address of the network
interface card (NIC) on the local network to which the frame is being sent.
• Source Address (SA): The SA field contains the MAC address of the NIC of the
sending computer.
• Type: This field contains a code that identifies the network layer protocol.
• Payload: This field contains the network layer data. If the data is shorter than the
minimum length of 46 bytes, a string of extraneous bits is used to pad the field.
This field is also known as “data and padding”.
• FCS: The frame check sequence (FCS) field includes a checking mechanism to
ensure that the frame of data has been transmitted without corruption. The
checking mechanism that is being used is the cyclic redundancy check (CRC).
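To make the field layout concrete, here is a short Python sketch that unpacks the Destination Address, Source Address, and Type fields from a raw Ethernet II frame. The parse_ethernet_ii helper and the sample frame bytes are made up for illustration; captured frames normally omit the preamble and FCS.

import struct

def parse_ethernet_ii(frame: bytes):
    # Destination MAC (6 bytes), Source MAC (6 bytes), Type (2 bytes, network byte order)
    dst, src, eth_type = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return {
        "destination": fmt(dst),
        "source": fmt(src),
        "type": hex(eth_type),   # 0x0800 = IPv4, 0x86dd = IPv6, 0x0806 = ARP
        "payload": frame[14:],   # network layer data, padded to at least 46 bytes
    }

# Hypothetical frame: broadcast destination, the sample source MAC 00:00:0c:43:2e:08,
# EtherType 0x0800 (IPv4), and 46 bytes of padded payload.
sample = bytes.fromhex("ffffffffffff" "00000c432e08" "0800") + b"\x00" * 46
print(parse_ethernet_ii(sample))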
5.4 Exploring the TCP/IP Link Layer
LAN Communication Types
The three major types of network communications are:
• Unicast: Communication in which a frame is sent from one host and is addressed
to one specific destination. In a unicast transmission, there is only one sender and
one receiver. Unicast transmission is the predominant form of transmission on
LANs and within the Internet.
• Broadcast: Communication in which a frame is sent from one address to all other
addresses. In this case, there is only one sender, but the information is sent to all
the connected receivers. Broadcast transmission is used for sending the same
message to all devices on the LAN.
• Multicast: Communication in which information is sent to a specific group of
devices or clients. Unlike broadcast transmission, in multicast transmission, clients
must be members of a multicast group to receive the information.
5.5 Exploring the TCP/IP Link Layer
MAC Addresses
A MAC address uniquely identifies a NIC interface of a device. It is used as a link-layer
address for technologies like Ethernet, Wi-Fi, and Bluetooth for communication within a
network segment. The MAC address is the means by which data is directed to the proper
destination device. The MAC address of a device is hardcoded into the NIC, so it is also
referred to as the physical address, burned-in address, or
that are organized in pairs or quads.
The following are different display formats:
• 0000.0c43.2e08
• 00:00:0c:43:2e:08
• 00-00-0C-43-2E-08
Note: Hexadecimal (often referred to as simply hex) is a numbering system with a base of
16. This means that it uses 16 unique symbols as digits. The decimal system that you use
on a daily basis has a base of 10, which means that it is composed of 10 unique symbols, 0
through 9. The valid symbols in hexadecimal are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and
F. In decimal, A, B, C, D, E, and F equal 10, 11, 12, 13, 14, and 15, respectively. Each
hexadecimal digit is 4 bits long because it requires 4 bits in binary to count to 15. Because
a MAC address is composed of 12 hexadecimal digits, it is 48 bits long. The letters A, B, C,
D, E, and F can be either upper or lower case.
A MAC address is composed of 12 hexadecimal numbers, which means it has 48 bits.
There are two main components of a MAC. The first 24 bits constitute the Organizationally
Unique Identifier (OUI). The last 24 bits constitute the vendor-assigned, end-station
address.
• 24-bit OUI: The OUI identifies the manufacturer of the NIC. The IEEE regulates the
assignment of OUI numbers. Within the OUI, there are 2 bits that have meaning
only when used in the destination address field:
o Broadcast or multicast bit: When the least significant bit in the first octet
of the MAC address is 1, it indicates to the receiving interface that the
frame is destined for all (broadcast) or a group of (multicast) end stations
on the LAN segment. This bit is referred to as the Individual/Group (I/G)
address bit.
o Locally administered address bit: The second least significant bit of the
first octet of the MAC address is referred to as the universally or locally (U/L)
administered address bit. Normally, the combination of the OUI and a 24-
bit station address is universally unique. However, if the address is
modified locally, this bit should be set to 1.
• 24-bit, vendor-assigned, end-station address: This portion uniquely identifies the
Ethernet hardware.
The MAC address identifies a specific computer interface on a LAN. Unlike other kinds of
addresses that are used in networks, the MAC address should not be changed unless there
is some specific need to do so.
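The following Python sketch mirrors the structure just described: it splits a MAC address into its OUI and vendor-assigned halves and tests the I/G and U/L bits of the first octet. The describe_mac helper is hypothetical and accepts any of the three display formats shown earlier.

def describe_mac(mac: str):
    # Accept any of the common display formats by stripping separators.
    hex_digits = "".join(c for c in mac if c not in ".:-")
    octets = [int(hex_digits[i:i + 2], 16) for i in range(0, 12, 2)]
    first = octets[0]
    return {
        "oui": "-".join(f"{o:02X}" for o in octets[:3]),                # first 24 bits
        "vendor_assigned": "-".join(f"{o:02X}" for o in octets[3:]),    # last 24 bits
        "group_address": bool(first & 0b00000001),         # I/G bit: 1 = broadcast/multicast
        "locally_administered": bool(first & 0b00000010),  # U/L bit: 1 = locally administered
    }

# The three display formats from the text all refer to the same address:
print(describe_mac("0000.0c43.2e08"))
print(describe_mac("00:00:0c:43:2e:08"))
print(describe_mac("00-00-0C-43-2E-08"))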
5.6 Exploring the TCP/IP Link Layer
Frame Switching
The switch builds and maintains a table called the MAC address table, which matches the
destination MAC address with the port that is used to connect to a node. The MAC
address table is stored in the content-addressable memory (CAM), enabling very fast
lookups. Therefore, you might see a switch's MAC address table referred to as a CAM
table.
For each incoming frame, the destination MAC address in the frame header is compared
to the list of addresses in the MAC address table. Switches then use MAC addresses as
they decide whether to filter, forward, or flood frames. When the destination MAC
address of a received unicast frame resides on the same switch port as the source, the
switch drops the frame, which is a behavior known as filtering. Flooding means that the
switch sends the incoming frame to all active ports, except the port on which it received
the frame.
The switch creates and maintains the MAC address table by using the source MAC
addresses of incoming frames and the port number through which the frame entered the
switch. In other words, a switch learns the network topology by analyzing the source
address of incoming frames from all attached networks.
The procedure below describes a specific example in which PC A sends a frame to PC B and
the switch starts with an empty MAC address table.
The switch performs learning and forwarding actions (including in situations that differ
from the example explained above), such as:
• Learning: When the switch receives the frame, it examines the source MAC
address and incoming port number. It performs one of the following actions
depending on whether the MAC address is present in the MAC address table:
o No: Adds the source MAC address and port number to the MAC address
table and starts the default 300-second aging timer for this MAC address.
o Yes: Resets the default 300-second aging timer.
Note: When the aging timer expires, the MAC address entry is removed from the MAC
address table.
• Unicast frames forwarding: The switch examines the destination MAC address
and, if it is unicast, performs one of the following actions depending on whether
the MAC address is present in the MAC address table:
o No: Forwards the frame out all ports except the incoming port (referred to
as unknown unicast).
o Yes: Forwards the frame out of the port from which that MAC address was
learned previously.
• Broadcast or multicast frames forwarding: The switch examines the destination MAC
address and, if it is broadcast or multicast, forwards the frame out all ports except the
incoming port (unless using Internet Group Management Protocol (IGMP) with multicast,
in which case it will only send the frame to specific ports).
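The learning and forwarding behavior described above can be summarized in a few lines of Python. This is a minimal sketch, assuming a fixed set of ports and ignoring the aging timer and IGMP snooping; the handle_frame helper and the port names are invented for illustration.

mac_table = {}                                    # learned MAC address -> switch port
ports = ["Fa0/1", "Fa0/2", "Fa0/3", "Fa0/4"]

def handle_frame(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port                  # learning: record the source MAC
    if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in mac_table:
        # broadcast or unknown unicast: flood out every port except the incoming one
        return [p for p in ports if p != in_port]
    out_port = mac_table[dst_mac]
    if out_port == in_port:
        return []                                 # filtering: drop the frame
    return [out_port]                             # known unicast: forward out a single port

# PC A (on Fa0/1) sends to PC B (on Fa0/2) while the table is still empty:
print(handle_frame("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", "Fa0/1"))  # flooded
print(handle_frame("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", "Fa0/2"))  # forwarded to Fa0/1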
5.7 Exploring the TCP/IP Link Layer
Duplex Communication
The term duplex communication is used to describe a communications channel that can
carry signals in both directions, as opposed to a simplex channel, which carries a signal in
only one direction. There are two types of duplex settings that are used for
communications on an Ethernet network—full duplex and half duplex.
Half Duplex
Half-duplex communication relies on a unidirectional data flow, which means that data
can go only in one direction at a time. Sending and receiving data are not performed at
the same time. Half-duplex communication is similar to communication with walkie-talkies
or two-way radios, in which only one person can talk at a time. Because data can flow in
only one direction at a time, each device in a half-duplex system must constantly wait its
turn to transmit data. This constant waiting results in performance issues. As a result, full-
duplex communication has replaced half-duplex communication in more current
hardware. Half-duplex connections are typically seen in older hardware such as hubs.
The following are characteristics of half-duplex operation:
• Unidirectional data flow
• Legacy connectivity
• May have collision issues
If a device transmits while another is also transmitting, a collision occurs. Therefore, half-
duplex communication implements Ethernet Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) to help reduce the potential for collisions and to detect them when
they do occur. CSMA/CD allows a collision to be detected, which causes the offending
devices to stop transmitting. Each device retransmits after a random amount of time has
passed. Because the time at which each device retransmits is random, the possibility that
they again collide during retransmission is very small.
Full Duplex
Full-duplex communication is like telephone communication, in which each person can
talk and hear what the other person says simultaneously. In full-duplex communication,
the data flow is bidirectional, so that data can be sent and received at the same time. The
bidirectional support enhances performance by reducing the wait time between
transmissions. Ethernet, Fast Ethernet, and Gigabit Ethernet NICs sold today offer the full-
duplex capability. In full-duplex mode, the collision-detection circuit is disabled. Frames
that the two connected end nodes send cannot collide because the end nodes use two
separate circuits in the network cable.
The following are characteristics of full-duplex operation:
• Point-to-point only
• Attached to a dedicated switched port
• Requires full-duplex support on both ends
Each full-duplex connection uses only one port. Full-duplex communications require a
direct connection between two nodes that both support full duplex. If one of the nodes is
a switch, the switch port to which the other node is connected must be configured to
operate in the full-duplex mode. The primary cause of duplex issues is mismatched
settings on two directly connected devices. For example, the switch is configured for full
duplex, and the attached PC is configured for half duplex.
The duplex Command
The duplex command is used to specify the duplex mode of operation for switch ports.
The duplex command supports the following options:
• The full option sets the full-duplex mode.
• The half option sets the half-duplex mode.
• The auto option sets autonegotiation of the duplex mode. With autonegotiation
enabled, the two ports communicate to decide the best mode of operation.
The figure shows an example of duplex and speed configurations on the Fast Ethernet
interfaces of two switches. To prevent mismatch issues, the settings on each interface are
configured to match the settings of the directly connected interfaces. For example, the
Fa0/5 interface on SwitchX and Fa0/3 on SwitchY are configured to autonegotiate speed
and duplex settings with the connected PC. The Fa0/1 interface on SwitchX that is
connected to SwitchY is configured for full duplex and speed 100 Mbps, which is the same
as the configuration on the Fa0/1 interface on SwitchY.
For 100BASE-FX ports, the default option is full, and they cannot autonegotiate. 100BASE-
FX ports operate only at 100 Mbps in full-duplex mode. For Fast Ethernet and
10/100/1000 ports, the default option is auto. The 10/100/1000 ports operate in either
half-duplex or full-duplex mode when their speed is set to 10 or 100 Mbps, but when their
speed is set to 1000 Mbps, they operate only in the full-duplex mode.
Autonegotiation can at times produce unpredictable results. By default, when
autonegotiation fails, a Cisco Catalyst switch sets the corresponding switch port to half-
duplex mode. Autonegotiation failure occurs when an attached device does not support
autonegotiation. If the attached device is manually configured to also operate in the half-
duplex mode, there is no problem. However, if the device is manually configured to
operate in the full-duplex mode, there is a duplex mismatch. A duplex mismatch causes
late collision errors at the end of the connection. To avoid this situation, manually set the
duplex parameters of the switch to match the attached device.
In the example, the switch ports connected to the PCs are configured for autonegotiation
since the PC's network card supports autonegotiation. The interconnection ports between
the switches have a static configuration to avoid autonegotiation failures if someone
connects a device that only does 10 Mbps or a hub.
You can use the show interfaces command in the privileged EXEC mode to verify the
duplex settings on a switch. This command displays statistics and statuses for all interfaces
or for the interface that you specify. The following example shows the duplex and speed
settings of a Fast Ethernet interface.
6.1 Starting a Switch
Introduction
In every Enterprise environment, switches are located in the heart of the network and link
together all the other equipment, so it is very important that they are configured
correctly. It all begins with the proper physical installation and then the basic
configuration – specifying the hostname, enabling the management interface, assigning an
IP address, and configuring the default gateway and interface descriptions.
Once you have managed to configure one Cisco switch, it is relatively simple to duplicate
the process and configure more switches in a similar way. You can even copy a standard
configuration from one switch to another with only minor changes. But if something goes
wrong, it is also important to recognize that there are issues. With switches, you can
recognize that from the LED indicators.
As a network engineer, it is important that you thoroughly understand the basic processes
of starting a switch:
• Review the general requirements for a physical switch installation.
• Read switch LED indicators to recognize the status of a switch.
• Access a switch CLI.
• Become familiar with CLI configuration commands.
• Review the show commands, which enable you to verify the status of the switch.
6.2 Starting a Switch
Switch Installation
Before you physically install a Catalyst switch, you must have the correct power and
operating environment. When you have correctly connected the power cable, the switch
will turn on if there is no on/off power button, which is the case on the Catalyst switch
shown in the figure.
Note: This figure depicts an example of a Cisco Catalyst switch, while the number and type
of ports and other connections on Cisco switches may vary.
Physical installation and startup of a Catalyst switch require completion of these steps:
1. Before performing physical installation, verify the following:
• Switch power requirements
• Switch operating environment requirements (operational temperature and
humidity)
2. Use the appropriate installation procedures for rack mounting, wall mounting, or
table or shelf mounting.
3. Before starting the switch, verify the network cables that connect end devices to
the LAN.
4. Attach the power cable plug to the power supply socket of the switch. The switch
will start. Some Catalyst switches do not have power buttons.
5. Observe the boot sequence:
• When the switch is on, the power on self-test (POST) begins. During POST, the
switch LED indicators blink while a series of tests determine that the switch is
functioning properly.
• The Cisco IOS Software output text is displayed on the console.
When all startup procedures are finished, the switch is ready to configure.
6.3 Starting a Switch
Connecting to a Console Port
Unlike a computer host, Cisco switches do not have a keyboard, monitor, or mouse device
to allow direct user interaction. Upon initial installation, you can configure the switch from
a PC that is connected directly through the console port on the switch.
Cisco devices traditionally have an RJ-45 connector on their serial console port. On newer
Cisco network devices, a USB serial console connection is also supported. An appropriate
console cable is typically included with the Cisco device. Since modern computers and
notebooks rarely include built-in serial ports, you may also need an adapter.
Note: On devices with two console ports, only one console port can be active at a time.
When a cable is plugged into the USB console port, the RJ-45 port becomes inactive.
When the USB cable is removed from the USB port, the RJ-45 port becomes active.
You need the following equipment to access a Cisco device through its console port:
• The appropriate cable and adapters, depending on the console port you use and
the connectors on your PC (such as an RJ-45-to-DB-9 console cable, a USB-to-DB-9
adapter, a USB Type A-to-5-pin, mini-Type B, or USB-C to RJ-45 console cable).
• PC or equivalent with a serial or USB port, an operating system device driver, and
terminal emulator software, such as HyperTerminal or Tera Term, configured with
these settings, as required by the switch or router:
o Speed: 9600 bps
o Data bits: 8
o Parity: None
o Stop bit: 1
o Flow control: None
Note: The console port can be located in various places on different switches.
When a console connection is established, you gain access to user EXEC mode by default.
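If you prefer to script the console session rather than use a terminal emulator, the settings listed above map directly onto the pyserial library. This is a minimal sketch, assuming pyserial is installed and that the console adapter appears as COM3; the port name is an assumption you must adjust for your system.

import serial   # provided by the pyserial package

console = serial.Serial(
    port="COM3",                     # e.g. "/dev/ttyUSB0" on Linux
    baudrate=9600,                   # speed: 9600 bps
    bytesize=serial.EIGHTBITS,       # data bits: 8
    parity=serial.PARITY_NONE,       # parity: none
    stopbits=serial.STOPBITS_ONE,    # stop bit: 1
    xonxoff=False, rtscts=False,     # flow control: none
    timeout=1,
)
console.write(b"\r\n")                              # wake up the console session
print(console.read(1024).decode(errors="ignore"))   # should show a user EXEC prompt
console.close()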
6.4 Starting a Switch
Switch LED Indicators
Typically, before turning on a device, you need to plug it in. However, some Cisco switches
do not have power switches, so when you plug them in, they power up automatically.
Because of this fact, you should make sure that the console cable is connected and the
terminal program is running before you plug in the switch for the first time. This
preparation will allow you to monitor the boot process of the switch. As the switch
powers on, it begins POST, which is a series of tests that run automatically to ensure that
the switch functions properly. Ensuring the switch passes POST is the first step of
deploying a switch.
When you need to examine how a switch is working or to verify its status and to
troubleshoot any problems, you usually use commands from the Cisco IOS CLI. However,
the switch hardware includes several LEDs that provide some status and troubleshooting
information. Generally, when the Cisco switch is functioning normally, the LEDs are lit in
green, and if there is a malfunction, the LEDs are lit in amber.
The following figure shows the front of a typical Cisco switch with six LEDs on the left, one
LED over each port, and a mode button.
To help make sense of the LEDs, consider the example of the SYST LED for a moment. This
LED provides a quick overall status of the switch with three simple states on most Cisco
switches:

• Off: The switch is not powered on.
• On (green): The switch is powered on and operational. Cisco IOS Software has
been loaded.
• On (amber): The switch POST process failed, and the Cisco IOS Software did not
load.
Note: So, just looking at the SYST LED on the switch tells you whether the switch is
working and, if it is not, whether this issue is due to the loss of power (the SYST LED is off)
or some kind of POST problem (the LED is amber).
6.5 Starting a Switch
Basic show Commands and Information
After you log into a Cisco switch, you can verify the switch software and hardware status
by using several commands that you execute from privileged EXEC mode. These
commands include the show interfaces, show version, and show running-config
commands. Here is a look at each of these commands in more detail.
Switch show interfaces Command
The show interfaces command displays the status and statistical information for the
network interfaces of the switch. The resulting output varies, depending on the network
for which a particular interface has been configured. You usually enter this command with
the options type and slot/number. The type option allows values such as FastEthernet and
GigabitEthernet. The slot/number option indicates the slot number and the port number
on the selected interface (for example, fa0/1).
The table shows some of the fields in the example display that you will find useful for
verifying fundamental switch details:
Note: The show interfaces and show ip interface brief commands are used frequently
when you configure and monitor network devices. You will see the show ip interface brief
command in the upcoming Discovery lab.
Switch show version Command
You can use the show version Cisco IOS command in privileged EXEC mode to verify the
Cisco IOS software version and release numbers of the Cisco IOS Software that is running
on a Cisco switch.
The following table describes some of the output fields of the show version command:
Switch show running-config Command
The show running-config command displays the current running (active) configuration file
of the switch. This command requires privileged EXEC mode access. This command
displays the IPv4 address, subnet mask, and default gateway settings, if they are
configured.
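Once the switch has a management IP address and SSH enabled, the same verification commands can also be collected with a script. The following sketch uses the netmiko library; the host address and credentials are placeholders, and netmiko must be installed separately.

from netmiko import ConnectHandler

switch = ConnectHandler(
    device_type="cisco_ios",
    host="192.0.2.10",        # placeholder management IP address
    username="admin",         # placeholder credentials
    password="cisco123",
)
# switch.enable()             # uncomment if the session starts in user EXEC mode (pass secret=... above)
for command in ("show version", "show interfaces FastEthernet0/1", "show running-config"):
    print("=====", command, "=====")
    print(switch.send_command(command))
switch.disconnect()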
7.1 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Introduction
In every LAN, Ethernet is used to exchange data locally. But suppose you want to
communicate between different LANs, for example, when a user in an Enterprise Campus
wants to communicate with a user at a remote site or, globally, with a web server. This
exchange will cross many different physical networks and devices. For
communication to happen, you need an addressing system that uniquely identifies every
device globally and enables the delivery of packets between them. The delivery function is
provided by the Transmission Control Protocol / Internet Protocol (TCP/IP) Internet Layer,
which provides services to exchange the data over the network between identified end
devices.
The most used protocol in the TCP/IP Internet layer is IPv4, which uses 32-bit numbers.
Remembering 32-bit IPv4 addresses would be cumbersome, so the address is represented
as a dotted decimal notation. As a networking engineer, you will need to use simple math
to convert between the binary and decimal worlds. The IPv4 address, which identifies the
device on the network, is typically accompanied by a subnet mask, which defines the
network.
Working as a network engineer, you will also manipulate the subnet mask to create
subnetworks for network segments of different sizes. This activity is called subnetting.
Subnetting allows you to create multiple logical networks within a single larger network,
which is especially important in large Enterprise environments where you need to logically
organize your environment. And you can do this very efficiently – by using a more
advanced subnetting technique called variable-length subnet mask (VLSM).
As a network engineer, you will encounter different features of the TCP/IP Internet layer
in everyday work, which will include various details:

• Describing IPv4, including IPv4 addressing and its general characteristics.
• Understanding IPv4 address representation, its structure (network and the host
portion of addresses), and the IPv4 address fields.
• Distinguishing between address classes and the types of reserved IPv4 addresses,
with focus on relevant types (network address, broadcast address).
• Demonstrating your knowledge of subnetting and VLSM.
• Verifying IPv4 settings on end-host devices.
7.2 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Internet Protocol
The IP component of TCP/IP determines where packets of data are routed based on their
destination addresses. IP has certain characteristics that are related to how it manages
this function.
IP uses packets to carry information through the network. A packet is a self-contained,
independent entity that contains data and sufficient information to be routed from the
source to the destination without reliance on previous packets.
IP has these characteristics:

• IP operates at Layer 3 (the network layer) of the Open Systems Interconnection
(OSI) reference model and at the Internet layer of the TCP/IP stack.
• IP is a connectionless protocol, in which a one-way packet is sent to the
destination without advance notification to the destination device. The destination
device receives the data and does not return any status information to the sending
device.
• Each packet is treated independently, which means that each packet can travel a
different way to the destination.
• IP uses hierarchical addressing, in which the network identification is the
equivalent of a street, and the host ID is the equivalent of a house or an office
building on that street.
• IP provides service on a best-effort basis and does not guarantee packet delivery. A
packet can be misdirected, duplicated, or lost on the way to its destination.
• IP does not provide any special features that recover corrupted packets. Instead,
the end systems of the network provide these services.
• IP operates independently of the medium that is carrying the data.
• There are two versions of IP: IPv4 and IPv6, with the latter becoming
increasingly important in modern networks.
Example: Delivering a Letter Through a Postal Service
An analogy for IP services would be mail delivery by postal service. For example, you live
in San Francisco, and your mother lives in New York. You write three letters to your
mother. You seal each letter in a separate envelope, address each letter, and write your
return address in the upper left-hand corner of each envelope.
You deposit the three letters in the outgoing mail slot at your local post office. The postal
service makes its best attempt to deliver all three letters to your mother in New York.
However, the postal service will not guarantee that the letters will arrive at their
destination. It will not guarantee that all three letters will be processed by the same
carrier or take the same route. And it will not guarantee that the letters will arrive in the
order in which you mailed them.
7.3 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Decimal and Binary Number Systems
Most people are accustomed to the decimal numbering system. The decimal (base 10)
system is the numbering system used in everyday mathematics. On the other hand, the
binary (base 2) system is the foundation of computer operations.
Network device addresses also use the binary system to define their location in the
network. IPv4 addresses are based on a dotted-decimal notation of a binary number: four
8-bit fields (octets) converted from binary to decimal numbers, separated by dots. An
example of an IPv4 address written in a dotted-decimal notation is 192.168.10.22. The
binary equivalent of this number is 11000000.10101000.00001010.00010110. You can use
any number of bits for a binary number, but for IPv4 addresses, you will always use 8 bits
when converting each of the decimal numbers to binary. You must have a basic
understanding of the mathematical properties of a binary system to understand
networking.
While the base number is important in any numbering system, it is the position of a digit
that confers value. In the decimal numbering system, the number 10 is represented by a 1
in the tens position and a 0 in the ones position. The number 100 is represented by a 1 in
the hundreds position, a 0 in the tens position, and a 0 in the ones position. In the decimal
system, the digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. When quantities higher than 9 are
required, the decimal system begins with 10 and continues to 99. When quantities higher
than 99 are required the decimal system begins again with 100, and so on, with each
column to the left raising the exponent by 1. All these tens, hundreds, thousands, and so
on are all powers of 10.
For example, a decimal number 27398 represents the sum (2 x 10,000) + (7 x 1000) + (3 x
100) + (9 x 10) + (8 x 1). If you write this with exponents, the sum would look like: (2 x 10^4)
+ (7 x 10^3) + (3 x 10^2) + (9 x 10^1) + (8 x 10^0).
The binary system uses only the digits 0 and 1. Therefore, the first digit is 0, followed by 1.
If a quantity higher than 1 is required, the binary system goes to 10, followed by 11. The
binary system continues with 100, 101, 110, 111, then 1000, and so on. The following
figure shows the binary equivalent of the decimal numbers 0 through 19.
Building a binary number follows the same logic as building a decimal number, with the
only difference that the base is 2 so the exponents represent the power of 2. If you take
the binary number 10011 for example, it represents a sum of (1 x 2^4) + (0 x 2^3) + (0 x 2^2)
+ (1 x 2^1) + (1 x 2^0), which is equal to (1 x 16) + (0 x 8) + (0 x 4) + (1 x 2) + (1 x 1) = 19.
7.4 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Binary-to-Decimal Conversion
Doing the conversion from binary into decimal is easy:
• Start by making a table with all of the 2^exponent values listed, for exponent values
0 through 7, as shown in the first row of the following table.
• Add a row that lists the decimal value of each of these exponents, as shown in the
second row; these are the positional or place values (and are also called
placeholders).
• Write out the given bit sequence in the table, as shown in the third row for the
example binary number 10111001.
• For each bit, multiply the place value by the bit value, as shown in the fourth row.
Notice that where the bit value is 0, the answer is 0, and where the bit value is 1,
the answer is the place value.
• Finally, add all of these values together; the result is the decimal value of the
binary number. In this example, the decimal value of the binary number 10111001
is 185.
10111001 = (128 x 1) + (64 x 0) + (32 x 1) + (16 x 1) + (8 x 1) + (4 x 0) + (2 x 0) + (1 x 1)
10111001 = 128 + 0 + 32 + 16 + 8 + 0 + 0 + 1
10111001 = 185
The minimum value of an 8-bit binary number is 00000000, which in decimal equals 0.
The maximum value of an 8-bit binary number is 11111111, which in decimal equals 255.
If you have a number that is larger than 255, it cannot be written with 8 bits. Each of the
decimal numbers in an IPv4 address must be a number between 0 and 255.
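The table method translates directly into a few lines of Python, which you can use to check a conversion by hand. The variable names are arbitrary; int(bits, 2) is the built-in equivalent of summing the place values.

bits = "10111001"
place_values = [2 ** e for e in range(7, -1, -1)]        # 128, 64, 32, 16, 8, 4, 2, 1
products = [int(b) * v for b, v in zip(bits, place_values)]
print(products)        # [128, 0, 32, 16, 8, 0, 0, 1]
print(sum(products))   # 185
print(int(bits, 2))    # 185, the built-in conversion agrees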
7.5 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Decimal-to-Binary Conversion
The process of converting a decimal number into binary can be simplified by using a table.
The table method utilizes elementary mathematics like addition and subtraction. This
process is simple and effective. With a bit of practice, you will learn it quickly.
When converting from decimal into binary, the idea is to find the right sequence of bits by
marking placeholders as 1 or 0. All bits are represented, and each placeholder marked
with 1 adds its value to the converted number, while 0s are ignored. For example, 255 is
represented by marking all placeholders with 1, meaning that summing up each
placeholder value produces the decimal number: 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255.
The process of converting a decimal number into binary is done by marking the closest
(lower) placeholder as 1 and subtracting the corresponding value from the decimal
number until there is no remainder. Any unused or skipped placeholders are marked as 0.
The binary representation of the decimal number is the 1 and 0 sequence that is
produced.
This figure illustrates the conversion of decimal number 147 to binary. Start by making a
table with all of the 2^exponent values listed, for exponent values 0 through 7, as shown in
the first row of the table. Add a row that lists the decimal value of each of these
exponents, as shown in the second row. These are the positional or place values (and are
also called placeholders). The binary number is put in the third row as it is determined.
The following table describes the steps for converting the number 147 to a binary number.
You can also have a number that is smaller than 128, for example 35. 35 in decimal
converts to 00100011 in binary. Note that the first 2 bits of the binary number are zeros;
these zeros are known as leading zeros. Recall that IPv4 addresses are most often written
in the dotted-decimal notation, which consists of four sets of 8-bits (octets) converted
from binary to decimal numbers, separated by dots. For IPv4 addresses, you will always
use 8 bits when converting each of the decimal numbers to binary. Some of these binary
numbers may have leading zeroes.
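In Python, the same decimal-to-binary conversion, including the leading zeros required for an 8-bit octet, can be checked with the 08b format specifier. This short sketch uses the example values from the text.

for value in (147, 35, 0, 255):
    print(f"{value:>3} -> {value:08b}")
# 147 -> 10010011
#  35 -> 00100011
#   0 -> 00000000
# 255 -> 11111111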
7.6 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
IPv4 Address Representation
Every device must be assigned a unique address to communicate on an IP network. This
includes hosts or endpoints (such as PCs, laptops, printers, web servers, smartphones, and
tablets), as well as intermediary devices (such as routers and switches).
Physical street addresses are necessary to identify the locations of specific homes and
businesses so that mail can reach them efficiently. In the same way, logical IP addresses
are used to identify the location of specific devices on an IP network so that data can
reach those network locations. Every host connected to a network or the internet has a
unique IP address that identifies it. Structured addressing is crucial to route packets
efficiently. Learning how IP addresses are structured and how they function in the
operation of a network provides an understanding of how IP packets are forwarded over
networks using TCP/IP.
An IPv4 address is a 32-bit number, is hierarchical, and consists of two parts:
• The network address portion (network ID): Network ID is the portion of an IPv4
address that uniquely identifies the network in which the device with this IPv4
address resides. The network ID is important because most hosts on a network can
communicate only with devices in the same network. If the hosts need to
communicate with devices with interfaces assigned to some other network ID, a
network device—a router or a multilayer switch—can route data between the
networks.
• The host address portion (host ID): Host ID is the portion of an IPv4 address that
uniquely identifies a device on a given IPv4 network. Host IDs are assigned to
individual devices, both hosts or endpoints and intermediary devices.
Note: There are two versions of IP that are in use: IPv4 and IPv6. IPv4 is the most common
and is currently used on the internet. It has been the mainstay protocol since the 1980s.
IPv6 was designed to solve the problem of global IPv4 address exhaustion. The adoption
of IPv6 was initially very slow but is now reaching wider deployment.
Practical Example of an IPv4 Address
Recall that IPv4 addresses are most often written in the dotted-decimal notation, which
consists of four sets of 8-bits (octets) converted from binary to decimal numbers,
separated by dots. The following example shows an IPv4 address in a decimal form
translated into its binary form, using the method described earlier.
7.7 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
IPv4 Header Fields
Before you can send an IP packet, there needs to be a format that all IP devices agree on
to route a packet from the source to the destination. All that information is contained in
the IP header. The IPv4 header is a container for values that are required to achieve host-
to-host communications. Some fields (such as the IP version) are static, and others, such
as Time to Live (TTL), are modified continually in transit.
The IPv4 header has several fields. First, you will learn about these four fields:
• Service type: Provides information on the desired quality of service (QoS)
• TTL: Limits the lifetime of a packet
Note: The TTL value does not use time measurement units. It is a value between 1 and
255. The packet source sets the value, and each router that receives the packet
decrements the value by 1. If the value remains above 0, the router forwards the packet. If
the value reaches 0, the packet is dropped. This mechanism keeps undeliverable packets
from traveling between networks for an indefinite amount of time.
• Source address: Specifies the 32-bit binary value that represents the IPv4 address of the
sending endpoint
• Destination address: Specifies the 32-bit binary value that represents the IPv4 address of
the receiving endpoint
Other fields in the header include:
• Version: Describes the version of IP.
• IHL: Internet Header Length (IHL) describes the length of the header.
• Total Length: Describes the length of a packet, including header and data.
• Identification: Used for unique fragment identification.
• Flag: Sets various control flags regarding fragmentation.
• Fragment Offset: Indicates where a specific fragment belongs.
• Protocol: Indicates the upper-layer protocol that is used in the data portion of an
IPv4 packet. For example, a protocol value of 6 indicates this packet carries a TCP
segment.
• Header Checksum: Used for header error detection.
• Options: Includes optional parameters
• Padding: Used to ensure that the header ends on a 32-bit boundary
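As an illustration of how these fields sit in the header, the following Python sketch packs and then unpacks the fixed 20-byte IPv4 header. The parse_ipv4_header helper and the sample values (TTL 64, protocol 6, placeholder addresses, checksum left at zero) are assumptions made for the example; options are ignored.

import struct

def parse_ipv4_header(packet: bytes):
    # Unpack the fixed 20-byte IPv4 header; any options are ignored.
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl_words": ver_ihl & 0x0F,   # header length in 32-bit words (5 = 20 bytes)
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,             # 6 = TCP, 17 = UDP, 1 = ICMP
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }

# Hypothetical header: version 4, IHL 5, TTL 64, protocol 6 (TCP),
# source 192.168.10.22, destination 192.0.2.10.
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 6, 0,
                     bytes([192, 168, 10, 22]), bytes([192, 0, 2, 10]))
print(parse_ipv4_header(header))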
7.8 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
IPv4 Address Classes
Nowadays, classless addressing is predominantly used. However, to fully understand the
concepts you will learn about here, you need to understand how the ever-changing needs
dictated the evolution of addressing solutions over time.
In the early days of the internet, the standard reserved the first 8 bits of an IPv4 address
for the network part and the remaining 24 bits for the host part. 24 host bits offer
16,777,214 IPv4 host addresses. It soon became clear that such address allocation is
inefficient because most organizations require several smaller networks of smaller size
rather than one network with thousands of computers. Also, most organizations need
several networks of different sizes.
The first step to address this need was made in 1981 when the IETF released RFC 790,
where the IPv4 address classes were introduced for the first time. Here the Internet
Assigned Numbers Authority (IANA) determined IPv4 Class A, Class B, and Class C.
Note: RFC is a formal document from the IETF communicating information about the
internet and defining internet standards.
Assigning IPv4 addresses to classes is known as classful addressing. Each IPv4 address is
broken down into a network ID and a host ID. In addition, a bit or bit sequence at the start
of each address determines the class of the address.
Note: IPv4 hosts only use Class A, B, and C IPv4 addresses for unicast (host-to-host)
communications. In 2002, RFC 3330 also introduced Class D and Class E, defining special-
use IPv4 addresses. This RFC has been later obsoleted by another RFC defining global and
other specialized IPv4 address blocks. Still, Class D and Class E are included here for
completeness, but they are outside the scope of this discussion.
Class A
A Class A address block is designed to support extremely large networks with more than
16 million host addresses. The Class A address uses only the first octet (8 bits) of the 32-bit
number to indicate the network address. The remaining 3 octets of the 32-bit number are
used for host addresses. The first bit of a Class A address is always a 0. Because the first bit
is a 0, the lowest number that can be represented is 00000000 (decimal 0), and the
highest number that can be represented is 01111111 (decimal 127). However, these two
network numbers, 0 and 127, are reserved and cannot be used as network addresses.
Therefore, any address that has a value between 1 and 126 in the first octet of the 32-bit
number is a Class A address.
Class B
The Class B address space is designed to support the needs of moderate to large networks
with more than 65,000 hosts. The Class B address uses two of the four octets (16 bits) to
indicate the network address. The remaining two octets specify host addresses. The first 2
bits of the first octet of a Class B address are always binary 10. Starting the first octet with
binary 10 ensures that the Class B space is separated from the upper levels of the Class A
space. The remaining 6 bits in the first octet may be populated with either ones or zeros.
Therefore, the lowest number that can be represented with a Class B address is 10000000
(decimal 128), and the highest number that can be represented is 10111111 (decimal
191). Any address that has a value in the range of 128 to 191 in the first octet is a Class B
address.
Class C
The Class C address space is the most commonly available address class. This address
space is intended to provide addresses for small networks with a maximum of 254 hosts.
In a Class C address, the first three octets (24 bits) of the address identify the network
portion, with the remaining octet reserved for the host portion. A Class C address begins
with binary 110. Therefore, the lowest number that can be represented is 11000000
(decimal 192), and the highest number that can be represented is 11011111 (decimal
223). If an address contains a number in the range of 192 to 223 in the first octet, it is a
Class C address.
Class D
Class D (multicast) IPv4 addresses are dedicated to multicast applications such as
streaming media. Multicasts are a special type of broadcast in that only hosts that request
to participate in the multicast group will receive the traffic to the IPv4 address of that
group. Unlike IPv4 addresses in Classes A, B, and C, multicast addresses are always the
destination address and never the source. A Class D address begins with binary 1110.
Therefore, the lowest number represented is 11100000 (decimal 224), and the highest
number that can be represented is 11101111 (decimal 239). If an address contains a
number in the range of 224 to 239 in the first octet, it is a Class D address.
Class E
Class E (reserved) IPv4 addresses are reserved by the IANA as a block of experimental
addresses. Class E IPv4 addresses should never be assigned to IPv4 hosts. A Class E address
begins with binary 1111. Therefore, the lowest number that can be represented is
11110000 (decimal 240), and the highest number that can be represented is 11111111
(decimal 255). If an address contains a number in the range of 240 to 255 in the first octet,
it is a Class E address.
The following table shows the IPv4 address range of the first octet (in decimal and binary)
for Class A, B, C, D, and E IPv4 addresses.
Note: Class A addresses 127.0.0.0 to 127.255.255.255 cannot be used. This range is
reserved for loopback and diagnostic functions.
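The first-octet ranges in the table can be expressed as a small helper function. This sketch is for study purposes only, since classful addressing is legacy; the ipv4_class function name and the sample addresses are arbitrary.

def ipv4_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet == 127:
        return "Loopback (reserved)"
    if 1 <= first_octet <= 126:
        return "A"
    if 128 <= first_octet <= 191:
        return "B"
    if 192 <= first_octet <= 223:
        return "C"
    if 224 <= first_octet <= 239:
        return "D (multicast)"
    if 240 <= first_octet <= 255:
        return "E (experimental)"
    return "Reserved (network 0)"

for ip in ("10.1.1.1", "172.16.0.1", "192.168.10.22", "224.0.0.5", "127.0.0.1"):
    print(ip, "->", ipv4_class(ip))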
7.9 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Subnet Masks
A subnet mask is a 32-bit number that describes which portion of an IPv4 address refers to
the network ID and which part refers to the host ID.
The subnet mask is configured on a device along with the IPv4 address.
If a subnet mask has a binary 1 in a bit position, the corresponding bit in the address is
part of the network ID. If a subnet mask has a binary 0 in a bit position, the corresponding
bit in the address is part of the host ID.
The figure represents an IPv4 address separated into a network and a host part. In the
example the network part ends on the octet boundary, which coincides with what you
learned about IPv4 address class boundaries. The address in the figure belongs to class B,
where the first two octets (16 bits) indicate the network part, and the remaining two
octets represent the host part. Therefore, you create the subnet mask by setting the first
16 bits of the subnet mask to binary 1 and the last 16 bits of the subnet mask to zero.
Notice the prefix /16; it is another way of expressing the subnet mask and it matches the
number of network bits that are set to binary 1 in the subnet mask.
Networks are not always assigned the same prefix. Depending on the number of hosts on
the network, the prefix that is assigned may be different. Having a different prefix number
changes the host range and broadcast address for each network.
Calculating the Network Address
An IPv4 address that has binary zeros in all the host bit positions is reserved for the
network address. The main purpose of the subnet mask is to identify the network address
of a host, which is crucial for routing purposes. Based on the network address, the host
can identify whether a packet's destination address is within the same network or not.
Given an IPv4 address and a subnet mask, you can calculate the network address by using
the AND function between the binary representation of the IPv4 address and the binary
representation of the subnet mask.
The calculation is performed bit-by-bit following these rules:

• 0 AND 0 = 0
• 1 AND 0 = 0
• 0 AND 1 = 0
• 1 AND 1 = 1
The result of the AND operation is the network address of the network on which the
device resides; this is also called the network prefix. You can see that in the network
address, the network part is the same as it is in the original IPv4 address, while the host
bits are all set to zero.
Usually you will use the decimal form of the network address, so you need to remember
the binary to decimal conversion. Look at the figure to remember the conversion process.
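You can verify the AND operation with Python's standard ipaddress module. The address 172.16.10.22 with mask 255.255.0.0 is a hypothetical example chosen to match the /16 case described above.

import ipaddress

ip = int(ipaddress.IPv4Address("172.16.10.22"))
mask = int(ipaddress.IPv4Address("255.255.0.0"))
print(ipaddress.IPv4Address(ip & mask))                   # 172.16.0.0, the network address

# ipaddress performs the same AND internally:
print(ipaddress.ip_interface("172.16.10.22/16").network)  # 172.16.0.0/16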
7.10 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Subnets
A lot of networks nowadays still use a flat network design. A flat topology is an OSI Layer 2
– switch-connected network where all devices see all the broadcasts in the Layer 2
broadcast domain. In such a network, all devices can reach each other by broadcast. Flat
network design is easy to implement and manage, reducing cost, maintenance, and
administration.
However, such design also brings some concerns:

• Security: Because the network is not segmented, you cannot apply security
policies adapted to individual segments. If one device is compromised, it can
quickly affect the whole network.
• Troubleshooting: Isolation of network faults is more challenging, especially in
bigger flat networks, because there is no logical separation or hierarchy.
• Address space utilization: In a large flat network, you can end up with a lot of
wasted IP addresses. You cannot use addresses from this network anywhere else.
• Scalability and speed: A flat network represents a single Layer 2 broadcast
domain. If there is a large amount of broadcast traffic, this can impose
considerable pressure on the available resources. A single broadcast domain
typically should not include more than a couple of hundred devices.
Network administrators can segment their networks, especially large networks, by using
subnetworks or subnets to tackle those challenges. Although subnets were initially
designed to solve the shortage of IPv4 addresses, they are used to address administrative,
organizational, security, and scalability considerations in today's networks. If you break a
bigger network into smaller subnetworks, you can create a network of interconnected
subnetworks.
Imagine a company that occupies a 30-story building divided into departments. Such a
company could prepare one large network to address all the IPv4 devices. But putting a
couple of hundred or even thousands of devices into one IPv4 network would make such a
network unusable because of broadcast traffic, security, and troubleshooting issues. A
better approach is to create a larger number of smaller networks based on departmental,
functional, or spatial separation. For example, think of the company as a group of
networks, the departments being used as subnets and the devices in the departments as
the individual host addresses belonging to these smaller subnets. This process of creating
smaller networks out of a bigger one is called subnetting.
A subnet segments the hosts within the network. Without subnets, the network has a flat
topology. You use routers to separate networks by breaking the network into multiple
subnets or multiple OSI Layer 3 broadcast domains.
Note: Recall that OSI Layer 2 is the data link layer, and it is equivalent to part of the TCP/IP
link layer. OSI Layer 3 is the network layer, and that it is equivalent to the TCP/IP internet
layer. A Layer 2 broadcast domain is a domain in which all devices see each other's Layer 2
broadcast frames, while a Layer 3 broadcast domain is a domain in which all devices see
each other's Layer 3 broadcast packets.
Segmenting your network using subnets brings several advantages:
• Smaller networks are easier to manage and map to geographical or functional
requirements.
• Better utilization of IP addressing space, because you can adapt subnet sizes.
• Subnetting enables you to create multiple logical networks from a single network
prefix.
• Overall, network traffic is reduced, which can improve performance.
• You can more easily apply network security measures at the interconnections
between subnets than within a single large network.
In multiple-subnetwork environments, each subnetwork may be connected to the internet
by a single router. The figure shows one router connecting multiple subnetworks to the
internet. The details of the internal network environment and how the network is divided
into multiple subnetworks are inconsequential to other IP networks.
As you already know, an IP address has two components: the network part and the host
part. In a flat network, all device IP addresses have the same network part. When the
network is broken into subnets, the IP addressing must be modified to accommodate the
required segmentation. The IP address of each device on a newly created subnetwork has
the same network part and the same subnet part. The subnet part is borrowed from the
host part of the address.
7.11 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Implementing Subnetting: Borrowing Bits
Subnetting allows you to create multiple logical networks that exist within a single larger
network. When you are designing a network addressing scheme, you need to be able to
determine how many logical networks you will need and how many devices you will be
able to fit into these smaller networks.
To subnet a network address, you will borrow host bits and use them as subnet bits. You
will use the subnet mask to indicate how many host bits have been borrowed. Bits must
be borrowed consecutively, starting with the first host bit on the left. This approach
introduces classless networks.
To implement subnets, follow this procedure:
• Determine the IP address for your network as assigned by the registry authority or
network administrator.
• Based on your organizational and administrative structure, determine the number
of subnets that are required for the network. Be sure to plan for growth.
• Based on the required number of subnets, determine the number of bits that you
need to borrow from the host bits.
• Determine the binary and decimal value of the new subnet mask that results from
borrowing bits from the host ID.
• Apply the subnet mask to the network IP address to determine the subnets and
the available host addresses. Also, determine the network and broadcast
addresses for each subnet.
• Assign subnet addresses to all subnets. Assign host addresses to all devices that
are connected to each subnet.
Take a look at the following figure. The top table shows a standard Class C network
address that is not subnetted. The bottom table shows the same address after it is
subnetted by borrowing one host bit. Notice that the prefix length has changed from 24 to
25. The network IPv4 address itself is unchanged, although it is now considered a
subnetwork (subnet) and is one of two subnets that have been created. The subnet mask
has changed from 255.255.255.0 in decimal to 255.255.255.128, because the 128 bit is
now turned on in the last octet.
Each time that a bit is borrowed, the number of subnet addresses increases, and the
number of host addresses that are available per subnet decreases. The algorithm that is
used to compute the number of subnets and hosts uses powers of two. Therefore,
borrowing one host bit enables you to create 2^1 = 2 subnets, borrowing 2 bits gives you
2^2 = 4 subnets, and so on.
Note: You can use the following formula to calculate the number of subnets that are
created by borrowing a given number of host bits: Number of subnets = 2^s (where s is the
number of bits that are borrowed)
As the following figure shows, you can also determine how many host addresses are
available per subnet when you borrow a given number of bits. Just like on a network, two
addresses are not available to be used as host addresses on a subnet; they are used for
the address of the subnet itself (with all of the host bits set to 0) and the directed
broadcast address on the subnet (with all of the host bits set to 1). The figure shows that
borrowing 1 bit for subnetting the address in the example leaves 7 bits for hosts.
Note: You can use a formula to calculate the number of host addresses that are available
when a given number of host bits are borrowed: Number of hosts = 2^h – 2 (where h is the
number of host bits that are remaining after bits are borrowed)
The formula to determine the number of hosts for this example is 2^7 – 2, which calculates
to 126 host addresses per subnet.
Here is another example using the same network, in which five host bits are borrowed for
subnetting. In this example, 2^5 = 32 subnets are created, and only 2^3 – 2 = 6 host
addresses are available for each subnet. The new subnet mask is
11111111.11111111.11111111.11111000, which equates to 255.255.255.248 in decimal.
The following figure shows the subnetting of a Class B network address. The top table
shows a network address with the default Class B subnet mask, 255.255.0.0. The second
table shows the same address after it is subnetted by borrowing six host bits. Notice that
the prefix length has changed from 16 to 22. The network IPv4 address itself is unchanged,
but the subnet mask has changed from 255.255.0.0 in decimal to 255.255.252.0.
The next figure shows the subnetting of a Class A network address. The top table shows a
network address with the default Class A subnet mask, 255.0.0.0. The bottom table shows
the same address after it is subnetted by borrowing 8 host bits. Notice that the prefix
length has changed from 8 to 16. The network IPv4 address is unchanged, but the subnet
mask has changed from 255.0.0.0 in decimal to 255.255.0.0.

7.12 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Implementing Subnetting: Determining the Addressing Scheme
If a network address is subnetted, the first subnet that is obtained after subnetting the
network address is called subnet zero, because all of the subnet bits are binary zero. To
determine each subsequent subnet address, increase the subnet address by the bit value
for the last bit that you borrowed.
In the following example, 8 bits are borrowed for subnetting the network address,
172.16.0.0/16. The first subnet address is 172.16.0.0/24; this is subnet zero. The last bit
borrowed is the bit with the value of 1 in the third octet, so the next subnet address is
172.16.1.0/24.
The following figure shows the first six subnets and the last subnet created by borrowing
the 8 bits. There are a total of 2^8 = 256 subnets.

Notice that the address of a subnet has all of the host bits set to binary 0. This address is
one of the reserved addresses on a subnet. The other reserved address is the subnet-
directed broadcast address, in which all of the host bits are set to binary 1. All of the
addresses between the subnet address and the subnet broadcast address are valid host
addresses on that subnet. On these subnets, there are 2^8 – 2 = 254 host addresses per
subnet.
Here are the host addresses and broadcast addresses for those subnets.
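Following the pattern described above, the first few subnets and the last subnet work out as follows:
• 172.16.0.0/24: hosts 172.16.0.1 through 172.16.0.254; broadcast 172.16.0.255
• 172.16.1.0/24: hosts 172.16.1.1 through 172.16.1.254; broadcast 172.16.1.255
• 172.16.2.0/24: hosts 172.16.2.1 through 172.16.2.254; broadcast 172.16.2.255
• ...
• 172.16.255.0/24: hosts 172.16.255.1 through 172.16.255.254; broadcast 172.16.255.255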
In the following figure, the Class B network address 172.16.0.0/16 has been subnetted by borrowing two host bits. The first subnet address is 172.16.0.0/18, the zero subnet. The last bit
borrowed is the bit with the value of 64, so the next subnet address is 172.16.64.0/18.

The following figure shows all the subnets that are created by borrowing the 2 bits. The
subnet 172.16.192.0/18 is the last subnet because 192 + 64 = 256, and the highest
possible value for any given octet is 255. However, if the borrowed bits extend across the octet
boundary, you get more subnets, as you will see in the next example.
The following table shows the valid host addresses for each subnet that was created by
borrowing 2 bits. The table shows the valid host IPv4 address range for each subnetwork.
There are 2^2 = 4 subnets, and 2^14 – 2 = 16,382 host addresses per subnet.
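Worked out from the increment of 64 in the third octet, the four /18 subnets and their address ranges are:
• 172.16.0.0/18: hosts 172.16.0.1 through 172.16.63.254; broadcast 172.16.63.255
• 172.16.64.0/18: hosts 172.16.64.1 through 172.16.127.254; broadcast 172.16.127.255
• 172.16.128.0/18: hosts 172.16.128.1 through 172.16.191.254; broadcast 172.16.191.255
• 172.16.192.0/18: hosts 172.16.192.1 through 172.16.255.254; broadcast 172.16.255.255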

Here is one more example of subnetting the same /16 network address, this time
borrowing 11 host bits for subnetting. The first subnet address is 172.16.0.0/27. The
second subnet address is 172.16.0.32/27 because the last borrowed bit has a value of 32.
Notice that this time, the last borrowed bit is in the fourth octet. Therefore, the increment
of 32 (the value of the last borrowed bit) is first applied in the fourth octet.
Once all the possible subnet addresses in the fourth octet have been calculated in this
manner, you move back into the third octet since you have borrowed bits from the third
octet as well. You can use all the third octet values from 1 to 255 for your subnet
addresses as well.
The following table shows the first 10 subnet addresses and the last subnet address (with
the corresponding host addresses and broadcast addresses) that result from subnetting
Class B network 172.16.0.0 by borrowing 11 host bits. There are 2^11 = 2,048 subnets, and
2^5 – 2 = 30 host addresses per subnet.
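Worked out with the increment of 32 in the fourth octet, the first few /27 subnets and the last one are:
• 172.16.0.0/27: hosts 172.16.0.1 through 172.16.0.30; broadcast 172.16.0.31
• 172.16.0.32/27: hosts 172.16.0.33 through 172.16.0.62; broadcast 172.16.0.63
• 172.16.0.64/27: hosts 172.16.0.65 through 172.16.0.94; broadcast 172.16.0.95
• ...
• 172.16.0.224/27: hosts 172.16.0.225 through 172.16.0.254; broadcast 172.16.0.255
• 172.16.1.0/27: hosts 172.16.1.1 through 172.16.1.30; broadcast 172.16.1.31
• ...
• 172.16.255.224/27: hosts 172.16.255.225 through 172.16.255.254; broadcast 172.16.255.255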
7.13 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Benefits of VLSM and Implementing VLSM
When you are using subnetting, the same subnet mask is applied for all the subnets of a
given network. This way, each subnet has the same number of available host addresses.
You may need this approach sometimes, but most organizations require several networks
of various sizes rather than one network with thousands of devices. So usually, having the
same subnet mask for all subnets of a given network ends up wasting address space
because each subnet has the same number of available host addresses.
For example, in the following figure, Class B network 172.16.0.0 is subnetted by borrowing
8 host bits and applying a 24-bit subnet mask, allowing 256 subnets with 254 host
addresses each. In this example, many host addresses are wasted. Each WAN link needs
only two host addresses, so 252 host addresses are wasted on each WAN link. Many host
addresses are also wasted on other subnets, and a Variable-length subnet mask (VLSM)
provides a solution.
VLSM allows you to use more than one subnet mask within a network to efficiently use IP
addresses. Instead of using the same subnet mask for all subnets, you can use the most
efficient subnet mask for each subnet. The most efficient subnet mask for a subnet is the
mask that provides an appropriate number of host addresses for that individual subnet.
For example, subnet 172.16.6.0 has only 19 hosts, so it does not need the 254 host
addresses that the 24-bit mask allows. A 27-bit mask would provide 30 host addresses,
which is much more appropriate for this subnet.

In the next figure, the 172.16.0.0/16 network is first divided into subnetworks using a 24-
bit subnet mask. However, one of the subnetworks in this range, 172.16.14.0/24, is
further divided into smaller subnetworks using a 27-bit mask to accommodate the subnets
that have 19 or 28 hosts. These smaller subnetworks range from 172.16.14.0/27 to
172.16.14.224/27. Then, one of these smaller subnets, 172.16.14.128/27, is further
divided using a 30-bit mask, which creates subnets with only two hosts to be used on the
WAN links. The subnets with the 30-bit mask range from 172.16.14.128/30 to
172.16.14.156/30.

In addition to providing a solution to the problem of wasted IP addresses, VLSM has another important benefit: support for route summarization, which is also called route
aggregation. The hierarchical addressing design of VLSM enables easier summarization of
network addresses. Route summarization reduces the number of routes in routing tables
by representing a range of network subnets in a single summary address. Smaller routing
tables require less CPU time for routing lookups.
In the previous figure, the summary address 172.16.14.0/24 covers all the addresses in the smaller subnets of 172.16.14.0, from subnet 172.16.14.0/27 through subnet 172.16.14.128/30.
VLSM is an important technology in large routed networks. VLSM can be used in all
modern networks that run classless routing protocols such as Routing Information
Protocol version 2 (RIPv2), Open Shortest Path First (OSPF), and Enhanced Interior
Gateway Routing Protocol (EIGRP). However, VLSM could not be used on a network using
legacy classful protocols such as Routing Information Protocol version 1 (RIPv1) and
Interior Gateway Routing Protocol (IGRP). Those protocols cannot carry subnet mask
information on their routing updates and are no longer used in today's networks.
Implementing VLSM
The network 172.16.0.0 has already been subnetted by applying a 20-bit subnet mask.
One of the resulting subnet addresses, 172.16.32.0/20, is used for the region of the
enterprise network that the following figure shows. This region needs to assign addresses
to multiple LANs. Each LAN must have 50 hosts. You can use VLSM to further subnet the
address 172.16.32.0/20 to give you more subnet addresses with fewer hosts per subnet.

The next figure shows in binary the original subnetting of the 172.16.0.0/16 network to
/20 by borrowing 4 host bits, which provided 16 subnets with 4094 host addresses each.
The figure also shows how further subnetting with VLSM increases the number of
subnets and provides the desired number of host addresses per subnet. Borrowing an
additional 6 subnet bits results in an additional 2^6 = 64 subnets. This leaves 6 host bits,
resulting in 2^6 – 2 = 62 hosts on each of these subnets.
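For instance, the first few /26 subnets of 172.16.32.0/20 and the last one are listed below; each provides 62 host addresses, which is enough for the 50-host LANs in this example:
• 172.16.32.0/26
• 172.16.32.64/26
• 172.16.32.128/26
• 172.16.32.192/26
• 172.16.33.0/26
• ...
• 172.16.47.192/26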

The following figure shows the subnet addresses and host addresses that are achieved by
using VLSM. The subnet for the region in this example, subnet 172.16.32.0/20, is further
subnetted by applying a 26-bit mask, as the previous figure shows.
The following figure shows some of the new VLSM subnet addresses that are applied to
the regional network.

To calculate the subnet addresses for the WAN links, further subnet one of the unused /26
subnets with a 30-bit subnet mask. For this example, subnet 172.16.33.0/26 will be
further subnetted. Borrowing an additional 4 subnet bits results in an additional 2^4 = 16
subnets. This leaves 2 host bits, resulting in 2^2 – 2 = 2 hosts on each subnet.
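Worked out with the increment of 4 in the fourth octet, the sixteen /30 subnets carved from 172.16.33.0/26 are 172.16.33.0/30, 172.16.33.4/30, 172.16.33.8/30, and so on, up to 172.16.33.60/30. For example, 172.16.33.4/30 provides the two host addresses 172.16.33.5 and 172.16.33.6, with 172.16.33.7 as the directed broadcast address.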
The following figure shows all the new VLSM subnet addresses that are applied to the
regional network.

As this example shows, when you use VLSM to divide an address block such as 172.16.32.0/20 into subnets of different sizes, the simplest approach is to allocate the subnets that require the largest number of hosts first.

7.14 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Private vs. Public IPv4 Addresses
As the internet began to grow exponentially in the 1990s, it became clear that if the
current growth trajectory continued, eventually, there would not be enough IPv4
addresses for everyone who wanted one. Work began on a permanent solution, which
would become IPv6, but several other solutions were developed in the interim. These
solutions included Network Address Translation (NAT), classless interdomain routing
(CIDR), private IPv4 addressing, and VLSM.
Public IPv4 Addresses
Hosts that are publicly accessible over the internet require public IP addresses. Internet
stability depends directly on the uniqueness of publicly used network addresses.
Therefore, a mechanism is needed to ensure that addresses are, in fact, unique. This
mechanism was originally managed by the InterNIC. The IANA succeeded the InterNIC. The
IANA carefully manages the remaining supply of IPv4 addresses to ensure that duplication
of publicly used addresses does not occur. Duplication would cause instability on the
internet and compromise its ability to deliver packets to networks using the duplicated
addresses.
With few exceptions, businesses and home internet users receive their IP address
assignment from their Local Internet Registry (LIR), which typically is their ISP. These IP
addresses are called provider-aggregatable (as opposed to provider-independent
addresses) because they are linked to the ISP. If you change ISPs, you will need to
readdress your internet-facing hosts.
The following table provides a summary of public IPv4 addresses.

LIRs obtain IP address pools from their Regional Internet Registry (RIR):
African Network Information Center (AFRINIC)
Asia Pacific Network Information Center (APNIC)
American Registry for Internet Numbers (ARIN)
Latin American and Caribbean Network Information Center (LACNIC)
Réseaux IP Européens Network Coordination Centre (RIPE NCC)
With the rapid growth of the internet, public IPv4 addresses began to run out. New
mechanisms such as NAT, CIDR, VLSM, and IPv6 were developed to help solve the
problem.
Private IPv4 Addresses
Internet hosts require a globally routable and unique IPv4 address, but private hosts that
are not connected to the internet can use any valid address, as long as it is unique within
the private network. However, because many private networks exist alongside public
networks, deploying random IPv4 addresses is strongly discouraged.
In February 1996, the IETF published RFC 1918, "Address Allocation for Private Internets,"
to ease the accelerating depletion of globally routable IPv4 addresses and provide
companies with an alternative to using arbitrary IPv4 addresses. Three blocks of IPv4
addresses (one Class A network, 16 Class B networks, and 256 Class C networks) are
designated for private, internal use.
Addresses in these ranges are not routed on the internet backbone. Internet routers are
configured to discard private addresses. In a private intranet, these private addresses can
be used instead of globally unique addresses. When a network that uses private addresses
must connect to the internet, private addresses must be translated to public addresses.
This translation process is called NAT. A router is often the network device that performs
NAT.
The following table provides a summary of private IPv4 addresses.

7.15 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Reserved IPv4 Addresses
Certain IPv4 addresses are reserved and cannot be assigned to individual devices on a
network. Reserved IPv4 addresses include a network address used to identify the network
itself and a broadcast address, which is used for broadcasting packets to all the devices on
a network.
Network Address
The network address is a standard way to refer to a network. An IPv4 address that has
binary zeros in all the host bit positions is reserved for the network address.
For example, in a Class A network, 10.0.0.0 is the IPv4 address of the network containing
the host 10.1.2.3. All hosts in 10.0.0.0 will have the same network bits. The IPv4 address
172.16.0.0 is a Class B network address, and 192.168.1.0 is a Class C network address. A
router uses the network IPv4 address when it searches its IPv4 routing table for the
destination network location.
When networks are subnetted, the IPv4 address with binary zeros in all the host bit
positions is still reserved for the address of the subnet. For example, 172.16.1.0/24 is the
address of a subnet.
Local Broadcast Address
If an IPv4 device wants to communicate with all the devices on the local network, it sets
the destination address to all ones (255.255.255.255) and transmits the packet. For
example, hosts that do not know their network number will use the 255.255.255.255
broadcast address to ask a server for the network address. The local broadcast is never
routed beyond the local network or subnet.

Directed Broadcast Address


The broadcast IPv4 address of a network is a special address for each network that allows
communication to all the hosts in that network. To send data to all the hosts in a network,
a host can send a single packet that is addressed to the broadcast address of the network.
The broadcast address uses the highest address in the network range, which is the address
in which all of the bits in the host portion are all ones. For network 10.0.0.0/8, with 8
network bits, the broadcast address would be 10.255.255.255. This address is also
referred to as the directed broadcast.
Assuming a hypothetical network in which every IPv4 host address was in use on the
10.0.0.0/8 network, a ping to 10.255.255.255 would receive a response from all
16,777,214 hosts.
For the network address 172.16.0.0/16, the last 16 bits make up the host field (or host
part of the address). The broadcast that would be sent out to all the devices on that
network would have a destination address of 172.16.255.255.
For the network address 192.168.11.0/24, the last 8 bits make up the host field (or host
part of the address). The broadcast that would be sent out to all the devices on that
network would have a destination address of 192.168.11.255.
For the subnet address 192.168.11.32/28, the last 4 bits are the host bits, so the directed
broadcast address would be 192.168.11.47.
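As a quick check in binary for the 192.168.11.32/28 example: the fourth octet of the subnet address is 0010 0000. Setting the 4 host bits to 1 gives 0010 1111, which is 47 in decimal, so the directed broadcast address is 192.168.11.47 and the valid host addresses are 192.168.11.33 through 192.168.11.46.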
Note: The directed broadcast address can be routed over your company intranet and over
the internet. In the 1990s, a popular denial of service (DoS) attack referred to as a Smurf
used directed broadcasts to send so much traffic to an intended victim that they could not
send or receive any legitimate traffic. For this reason, Cisco IOS Software defaults to
disallowing directed broadcasts. This capability can be restored with the ip directed-broadcast command in interface configuration mode. It is a best practice to leave
directed broadcasts disabled unless you have a specific use case. Routers began using the
no ip directed-broadcast command as a platform default, starting with Cisco IOS Release
12.0.
Local Loopback Address
A local loopback address is used to let the system send a message to itself for testing. The
loopback address creates a shortcut method for TCP/IP applications and services that run
on the same device to communicate with one another. A typical local loopback IPv4
address is 127.0.0.1. You can ping any IPv4 address in the 127.0.0.0/8 range to test the
local TCP/IP stack on a Microsoft Windows host. This is typically done to make sure that
the system's network software and hardware is functioning correctly.
Autoconfiguration IPv4 Addresses
When neither a statically nor a dynamically configured IPv4 address is found on startup,
those hosts supporting IPv4 link-local addresses (RFC 3927) will generate an address in the
169.254.0.0/16 range. This address can be used only for local network connectivity and
operates with many caveats, one of which is that it will not be routed. You will mostly see
this address as a failure condition when a PC fails to obtain an address via DHCP. This
feature, called Automatic Private IP Addressing (APIPA), is implemented in Microsoft and
Apple operating systems.
IPv4 Addresses for Documentation
Address blocks 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24 are assigned for use in
documentation and example code. They are often used along with example.com or
example.net domain names in vendor and protocol documentation. As described in RFC
5737, addresses within these blocks do not legitimately appear on the public internet and
can be used without any coordination with IANA or an internet registry.
All Zeros Address
The address 0.0.0.0 indicates the host in "this" network and is used only as a source
address. An example use case is the DHCP assignment process before the host has a valid
IPv4 address.
For more information about reserved IPv4 addresses, refer to RFC 5735.

7.16 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Verifying IPv4 Address of a Host
All operating systems that are capable of TCP/IP communications include utilities for
configuring, managing and monitoring the IPv4 networking configuration. Operating
systems such as Microsoft Windows, Apple macOS, and most Linux variants have CLI and
GUI tools.
Verifying IPv4 Address of a Host on Windows
On a PC running Microsoft Windows 10, in the Network and Sharing Center you can view
and set the IPv4 address associated with the network adapter by clicking Properties. In
this example, the PC is manually configured with a static IPv4 address.
Note: IP addresses can be either static or dynamic. At this point, all you need to know is
that a static IP address is a fixed IP address that is assigned manually to a device, while a
dynamic IP address is assigned automatically and changes whenever a user reboots a
device.

Note: Navigating to the TCP/IP network settings varies widely, depending on the operating
system that is installed.
Use the ipconfig command to display all current TCP/IP network configuration values at
the command line of a Windows computer.
Note: For additional information about ipconfig and the command syntax, use your
favorite search engine and search for this string: microsoft technet dd197434
site:microsoft.com
Verifying the IPv4 Address of a Host on Apple Mac
Just like in Windows, you can use either GUI or CLI to configure or verify your IP address
settings on Apple macOS. To use the GUI option, click the Apple logo in the menu bar,
choose System Preferences, and choose Network. A pop-up window will open, displaying
your connections. Click the connection you want to manage, choose Advanced, and click
the TCP/IP tab.
You can also acquire this information using a CLI. First, you will need to open the Terminal.
You can do so in several ways, including using the Finder menu bar by choosing Go >
Utilities > Terminal. Then, use the ifconfig {interface name} command to obtain the IPv4
address and other information.
Verifying the IPv4 Address of a Host on Linux
On most Linux operating systems, the ifconfig command performs the same tasks that
ipconfig performs on Microsoft Windows operating systems.
You can get the details of specific syntax on Linux systems for just about any command
using the man (manual) command. In the following example, man ifconfig was entered.
8.1 Explaining the TCP/IP Transport Layer and Application Layer
Introduction
IP addressing uniquely identifies devices globally. However, to provide a logical connection between the endpoints of a network and to provide transport services from a host to a destination, you need a different set of functions, which the TCP/IP transport layer provides. The transport layer also provides the interface between the application layer, which the various applications use to communicate, and the underlying Internet layer, and therefore hides the complexity of the network from the applications.
The two most important protocols used at the transport layer are TCP and UDP. TCP provides reliable delivery, while UDP provides only best-effort delivery. Application programmers can choose the service that is most appropriate for their specific applications. Both protocols support multiple simultaneous sessions from an end host, which allows the different applications running on that host to share a single IP address when communicating over the network.
The Application layer provides functions for users or their programs, and it is highly
specific to the application being performed. It provides the services that user applications
use to communicate over the network, and it is the layer in which user-access network
processes reside. These processes encompass the ones that users interact with directly
and other processes of which the users are not aware. There are many Application layer
protocols, and new protocols are constantly being developed.

As a network engineer, you will often design, configure, and troubleshoot different
networks to be suitable for different application layer protocols. You will need to, among
other characteristics, contrast reliable and unreliable transport services provided by TCP
and UDP.

8.2 Explaining the TCP/IP Transport Layer and Application Layer


TCP/IP Transport Layer Functions
The transport layer resides between the application and Internet layers of the TCP/IP
protocol stack. The TCP/IP Internet layer directs information to its destination. However, it
cannot guarantee that the information will arrive in the correct order, free of errors, or
even that the information will arrive at all. The two most common transport layer
protocols of the TCP/IP protocol suite are TCP and UDP. Both protocols manage the
communication of multiple applications and provide communication services directly to
the application process on the host.
The basic service that the transport layer provides is tracking communication between
applications on the source and destination hosts. This service is called session
multiplexing, and both UDP and TCP perform it. A major difference between TCP and UDP
is that TCP can ensure that the data is delivered, while UDP does not ensure delivery.
Note: Review of Open Systems Interconnection (OSI) and TCP/IP reference models: The
transport layer of the TCP/IP protocol stack maps to the transport layer of the OSI model.
The protocols that operate at this layer are said to operate at Layer 4 of the OSI model. If
you hear someone use the term "Layer 4," they are referring to the transport layer of the
OSI model.

Multiple communications often occur at once; for instance, you may be searching the web
and using FTP to transfer a file at the same time. The transport layer tracks these
communications and keeps them separate. Both UDP and TCP provide this tracking. To
pass data to the proper applications, the transport layer must identify the target
application. If TCP is used, the transport layer has the additional responsibilities of
establishing end-to-end connections, segmenting data and managing each piece,
reassembling the segments into streams of application data, managing flow control, and
applying reliability mechanisms.
Session Multiplexing
Session multiplexing is how an IP host can support multiple sessions simultaneously and
manage the individual traffic streams over a single link. A session is created when a source
machine needs to send data to a destination machine. Most often, this process involves a
reply, but a reply is not mandatory.

Note: The session multiplexing service that the transport layer provides supports multiple concurrent TCP or UDP sessions over a single link, not just the single TCP session and single UDP session indicated in the figure above.
Identifying the Applications
To pass data to the proper applications, the transport layer must identify the target
application. TCP/IP transport protocols use port numbers to accomplish this task. The
connection is established from a source port to a destination port. Each application
process that needs to access the network is assigned a unique port number in that host.
The destination port number is used in the transport layer header to indicate which target
application that piece of data is associated with. The sending host uses the source port to
help keep track of existing data streams and new connections it initiates. The source and
destination port numbers are not usually the same.
Segmentation
TCP takes variably sized data chunks from the Application layer and prepares them for
transport onto the network. The application relies on TCP to ensure that each chunk is
broken up into smaller segments that will fit the maximum transmission unit (MTU) of the
underlying network layers. UDP does not provide segmentation services. UDP instead
expects the application process to perform any necessary segmentation and supply it with
data chunks that do not exceed the MTU of lower layers.
Note: The MTU of the Ethernet protocol is 1500 bytes. Larger MTUs are possible, but 1500
bytes is the normal size.
Flow Control
If a sender transmits packets faster than the receiver can receive them, the receiver drops
some of the packets and requires them to be retransmitted. TCP is responsible for
detecting dropped packets and sending replacements. A high rate of retransmissions
introduces latency in the communication channel. To reduce the impact of retransmission-
related latency, flow control methods work to maximize the transfer rate and minimize
the required retransmissions.
Basic TCP flow control relies on acknowledgments that are generated by the receiver. The
sender sends some data while waiting for an acknowledgment from the receiver before
sending the next part. However, if the round-trip time (RTT) is significant, the overall
transmission rate may slow to an unacceptable level. To increase network efficiency, a
mechanism called windowing is combined with basic flow control. Windowing allows a
receiving computer to advertise how much data it can receive before transmitting an
acknowledgment to the sending computer.
Windowing enables the avoidance of congestion in the network.
Connection-Oriented Transport Protocol
A connection-oriented protocol establishes a session connection between two IP hosts
within the transport layer and then maintains the connection during the entire
transmission. When the transmission is complete, the session is terminated. TCP provides
connection-oriented reliable transport for application data.
Reliability
TCP reliability has these three main objectives:

• Detection and retransmission of dropped packets
• Detection and remediation of duplicate or out-of-order data
• Avoidance of congestion in the network

8.3 Explaining the TCP/IP Transport Layer and Application Layer


Reliable vs. Best-Effort Transport
The terms reliable and best effort are terms that describe two types of connections
between computers. TCP is a connection-oriented protocol designed to ensure reliable
transport, flow control, and guaranteed delivery of IP packets. For this reason, it is labeled
a "reliable" protocol. UDP is a connectionless protocol that relies on the Application layer
for sequencing and detection of dropped packets and is considered "best effort." Each
protocol has strengths that make them useful for particular applications.

Reliable (Connection-Oriented)
Some types of applications require a guarantee that packets arrive safely and in order. Any
missing packets could cause the data stream to be corrupted. Consider the example of
using your web browser to download an application. Every piece of that application must
be assembled on the receiver in the proper binary order, or it will not execute. FTP is an
application where the use of a connection-oriented protocol like TCP is indicated.
TCP uses a three-way handshake when setting up a connection. You can think of it as
being similar to a phone call. The phone rings, the called party says "hello," and the caller
says "hello." Here are the actual steps:
1. The source of the connection sends a synchronization (SYN) segment to the
destination requesting a session. The SYN segment includes the Sequence Number
(SN).
2. The destination responds to the SYN with a synchronization-acknowledgment
(SYN-ACK) and increments the initiator SN by 1.
3. If the source accepts the SYN-ACK, it sends an acknowledgment (ACK) segment to
complete the handshake.
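For example (the sequence numbers here are chosen arbitrarily for illustration): the client sends a SYN with SN = 100; the server replies with a SYN-ACK carrying its own SN = 300 and an acknowledgment number of 101; the client completes the handshake with an ACK of 301. Data transfer can then begin.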

Here are some common applications that use TCP:

• Web browsers
• Email
• FTP
• Network printing
• Database transactions
To support reliability, a connection is established between the IP source and destination
to ensure that the application is ready to receive data. During the initial connection
establishment process, information is exchanged about the receiver's capabilities, and
starting parameters are negotiated. These parameters are then used for tracking data
transfer during the connection.
When the sending computer transmits data, it assigns a sequence number to each packet.
The receiver then responds with an acknowledgment number that is equal to the next
expected sequence number. This exchange of sequence and acknowledgment numbers
allows the protocol to recognize when data has been lost, duplicated, or arrived out of
order.
Best Effort (Connectionless)
Reliability (guaranteed delivery) is not always necessary, or even desirable. For example, if
one or two segments of a VoIP stream fail to arrive, it would only create a momentary
disruption in the stream. This disruption might appear as a momentary distortion of the
voice quality, but the user may not even notice. In real-time applications, such as voice
streaming, dropped packets can be tolerated as long as the overall percentage of dropped
packets is low.
Here are some common applications that use UDP:

• Domain Name System (DNS)
• VoIP
• TFTP
UDP provides applications with best-effort delivery and does not need to maintain state
information about previously sent data. Also, UDP does not need to establish any
connection with the receiver and is termed connectionless. There are many situations in
which best-effort delivery is more desirable than reliable delivery. A connectionless
protocol is desirable for applications that require faster communication without
verification of receipt.
UDP is also better for transaction type services, such as DNS or DHCP. In transaction type
services, there is only a simple query and response. If the client does not receive a
response, it simply sends another query, which is more efficient and consumes fewer
resources than TCP.
TCP vs. UDP Analogy
The postal service can be used as an analogy to illustrate the differences between
connection-oriented TCP and connectionless services that UDP provides.
Example: TCP—Sending Certified Mail
Imagine that you are a popular author in Seattle. Your editor in Indianapolis is very
anxious to publish your next novel and demands that you mail her each page as you finish
one. You print each page of the book as you write them and put each page in a separate
envelope. To ensure that your editor reassembles the book correctly, you put a page
number on each envelope (a sequence number). You address the envelope and send the
first one as certified mail. The postal service delivers it by any truck and any route. Still,
because it is certified, the carrier who delivers it must get a signature from your editor and
return a delivery certificate to you.
Your contract with the publisher specifies that each page must be in a separate envelope.
But having to go to the post office to send each letter individually is too time-consuming,
so you send several envelopes together. The postal service again delivers each envelope
by any truck and any route. Your editor signs a separate receipt for each envelope in the
batch as she receives them. If one envelope is lost in transit, you will not receive a
certificate of delivery for that numbered envelope, and you will need to resend that page.
As your editor receives your envelopes, she uses the sequence numbers to assemble the
book in the proper order.
Like certified mail, TCP offers sequencing, acknowledgments, and retransmission.

Example: UDP—Sending Regular Mail


UDP services can be compared to using the postal service to pay your bills. You address
each bill payment to a specific company address, stamp the envelope, and include your
return address. The postal service guarantees its best effort to deliver each payment. The
postal service does not guarantee delivery, and it is not responsible for telling you that
delivery was successful or unsuccessful.
Like standard mail, UDP is a simple process that provides basic data transfer services.
8.4 Explaining the TCP/IP Transport Layer and Application Layer
TCP Characteristics
Applications use the connection-oriented services of TCP to provide data reliability
between hosts. TCP includes several important features that provide reliable data
transmission.
TCP can be characterized as follows:

• TCP operates at the transport layer of the TCP/IP stack (OSI Layer 4).
• TCP provides application access to the Internet layer (OSI Layer 3, the network
layer), where application data is routed from the source IP host to the destination
IP host.

• TCP is connection-oriented and requires that network devices set up a connection
to exchange data. The end systems synchronize with one another to manage
packet flows and adapt to congestion in the network.
• TCP provides error checking by including a checksum in the TCP segment to verify
that the TCP header and data are not corrupt.
• TCP establishes a connection between the source and destination that operates in
full-duplex mode, carrying a stream of data in each direction. This connection is
often called a virtual circuit because, at the transport layer, the source and
destination are unaware of the underlying network path.
• TCP segments are numbered and sequenced so that the destination can reorder
segments and determine if data is missing or arrives out of order.
• Upon receipt of one or more TCP segments, the receiver returns an
acknowledgment to the sender to indicate that it received the segment.
Acknowledgments form the basis of reliability within the TCP session. When the
source receives an acknowledgment, it knows that the data has been successfully
delivered. If the source does not receive an acknowledgment within a
predetermined period, it retransmits that data to the destination. The source may
also terminate the connection if it determines that the receiver is no longer
connected.
• TCP provides mechanisms for flow control. Flow control assists the reliability of
TCP transmission by adjusting the effective rate of data flow between the two
services in the session.
Reliable data delivery services are critical for applications such as file transfers, database
services, transaction processing, and other applications in which delivery of every packet
must be guaranteed. TCP segments are sent by using IP packets. The TCP header follows
the IP header and supplies information that is specific to the TCP protocol. Flow control,
reliability, and other TCP characteristics are achieved by using fields in the TCP header.
Each field has a specific function.

The TCP header is a minimum of 20 bytes; the fields in the TCP header are as follows:

• Source Port: Calling port number (16 bits)
• Destination Port: Called port number (16 bits)
• Sequence Number and Acknowledgment Number: Used for reliability and
congestion avoidance (32 bits each)
• Header Length: Size of the TCP header (4 bits)
• Reserved: For future use (3 bits)
• Flags: Or control bits (9 bits)
o Nonce Sum (NS): Enables the receiver to demonstrate to the sender that
segments are being acknowledged
o Congestion Window Reduced (CWR): Acknowledges that the congestion-
indication echoing was received
o Explicit Congestion Notification Echo (ECE): Indication of congestion
o Urgent (URG): Data that should be prioritized over other data
o Acknowledgment (ACK): Used for acknowledgment
o Push (PSH): Indicates that application data should be transmitted
immediately and not wait for the entire TCP segment
o Reset (RST): Indicates that the connection should be reset
o Synchronize (SYN): Synchronize sequence numbers
o Finish (FIN): Indicates there is no more data from sender
• Window size: Window size value, used for flow control (16 bits)
• Checksum: Calculated checksum from a constructed pseudo header (containing
the source address, destination address, and protocol from the IP header, TCP
segment length, and reserved bits), and the TCP segment (TCP header and
payload) for error checking (16 bits)
• Urgent Pointer: If the URG flag is set, this field is an offset from the sequence
number indicating the last urgent data byte (16 bits)
• Options: The length of this field is determined by the data offset field (from 0 to
320 bits)
• Data: Upper-layer protocol (ULP) data (varies in size)

8.5 Explaining the TCP/IP Transport Layer and Application Layer


UDP Characteristics
Applications use the connectionless services of UDP to provide high-performance, low-
overhead data communications between hosts. UDP includes several features that
provide for low-latency data transmission.
UDP is a simple protocol that provides basic transport layer functions:

• UDP operates at the transport layer of the TCP/IP stack (OSI Layer 4).
• UDP provides applications with access to the Internet layer (OSI Layer 3, the
network layer), without the overhead of reliability mechanisms.
• UDP is a connectionless protocol in which a one-way datagram is sent to a
destination without advance notification to the destination device.
• UDP performs only limited error checking. A UDP datagram includes a checksum
value, which the receiving device can use to test the integrity of the data.
• UDP provides service on a best-effort basis and does not guarantee data delivery
because packets can be misdirected, duplicated, or lost on the way to their
destination.
• UDP does not provide any special features that recover lost or corrupted packets.
UDP relies on applications that are using its transport services to provide recovery.
• Because of its low overhead, UDP is ideal for applications like DNS and Network
Time Protocol (NTP), where there is a simple request-and-response transaction.
The low overhead of UDP is evident when you review the UDP header length of only 64
bits (8 bytes). The UDP header length is significantly smaller compared with the TCP
minimum header length of 20 bytes.
The following list describes the field definitions in the UDP segment:

• Source port: Calling port number (16 bits)
• Destination port: Called port number (16 bits)
• Length: Length of UDP header and UDP data (16 bits)
• Checksum: Calculated checksum of the header and data fields (16 bits)
• Data: ULP data (varies in size)
Application layer protocols that use UDP include DNS, Simple Network Management
Protocol (SNMP), DHCP, Routing Information Protocol (RIP), TFTP, Network File System
(NFS), online games, and voice streaming.

8.6 Explaining the TCP/IP Transport Layer and Application Layer


TCP/IP Application Layer
UDP and TCP use internal software ports to support multiple conversations between
various network devices. To differentiate the segments and datagrams for each
application, TCP and UDP have header fields that uniquely identify these applications.
These unique identifiers are the port numbers.
Note: The combination of an IP address and a port is strictly known as an endpoint and is
sometimes called a socket. A TCP connection is defined by two endpoints (sockets).
Some of the applications that TCP/IP supports include:

• FTP (port 21, TCP): FTP is a reliable, connection-oriented service that uses TCP to
transfer files between systems that support FTP. FTP supports bidirectional binary
and ASCII file transfers. Besides using port 21 for exchange of control, FTP also uses
one additional port, 20, for data transmission.
• SSH (port 22, TCP): Secure Shell (SSH) provides the capability to access other
computers, servers, and networking devices remotely. SSH enables a user to log in
to a remote host and execute commands. SSH messages are encrypted.
• Telnet (port 23, TCP): Telnet is a predecessor to SSH. It sends messages in
unencrypted cleartext. As a security best practice, most organizations now use SSH
for remote communications.
• HTTP (port 80, TCP): HTTP defines how messages are formatted and transmitted
and which actions browsers and web servers can take in response to various
commands. It uses TCP.
• HTTPS (port 443, TCP): HTTPS combines HTTP with a security protocol (Secure
Sockets Layer [SSL]/Transport Layer Security [TLS]).
• DNS (port 53, TCP, and UDP): DNS is used to resolve Internet names to IP
addresses. DNS uses a distributed set of servers to resolve names that are
associated with numbered addresses. DNS uses TCP for zone transfer between
DNS servers and UDP for name queries.
• TFTP (port 69, UDP): TFTP is a connectionless service. Routers and switches use
TFTP to transfer configuration files and Cisco IOS images, and other files between
systems that support TFTP.
• SNMP (port 161, UDP): SNMP facilitates the exchange of management information
between network devices. SNMP enables network administrators to manage
network performance, find and solve network problems, and plan for network
growth.
Here, you have seen only some applications with their port numbers. Go to the Service
Name and Transport Protocol Port Number Registry for a complete list at
http://www.iana.org/assignments/service-names-port-numbers/service-names-port-
numbers.xhtml.

8.7 Explaining the TCP/IP Transport Layer and Application Layer


Introducing HTTP
In a world that is driven by the Internet and ever-increasing amounts of data, technologies
that enable and standardize the way we exchange information are very useful. HTTP and
HTTP-based application programming interfaces (APIs) form one of the foundations of
the World Wide Web and provide us with a way to communicate with remote systems.

HTTP is an application layer protocol and is the foundation of communication for the
World Wide Web. It is based on a client-server computing model, where the client (e.g., a
web browser) and the server (e.g., a web server) use a request-response message format
to transfer information. HTTP presumes a reliable underlying transport layer protocol, so
TCP is commonly used. However, UDP can also be used in some cases.
By default, HTTP is a stateless (or connectionless) protocol, meaning it works without the
receiver retaining any client information. Each request can be understood in isolation,
without knowing any commands that came before it. HTTP does have some mechanisms,
namely HTTP headers, to make the protocol behave as if it was stateful.
The information is media-independent, which means that any type of data can be sent by
HTTP as long as both the client and the server know how to handle the data content. Web
browsers commonly use HTTP and servers to transfer the files that make up web pages.
Although the HTTP specification allows for data to be transferred on port 80 using either
TCP or UDP, most implementations use TCP. A secure version of the protocol, HTTPS, uses
TCP port 443.
HTTP Request-Response Cycle
Data is exchanged via HTTP Requests and HTTP Responses, which are specialized message formats used for HTTP communication. A sequence of requests and responses is called an HTTP Session and is initiated by a client establishing a connection to the server.

1. Client sends an HTTP request to the server.
2. Server receives the request.
3. Server processes the request.
4. Server returns an HTTP response.
5. Client receives the response (e.g., web page content).
An example of using a request-response cycle is web browsing. When a user is browsing
the web, a browser sends an HTTP request to get the HTML document representing the
page. The server responds to the request and returns the HTTP response with a response
code and the content of the page, which is an HTML document. The client browser parses
this document, displays its content according to the layout information and resources contained within the page (usually images and videos), and sometimes issues additional requests, for example when executing embedded scripts. The web browser presents all of this content to the user as a complete web page.
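As a rough sketch of what this exchange looks like on the wire (the exact headers vary by browser and server, and example.com is used here only as a placeholder), a request and the beginning of its response might look like this:

GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html

<html> ... the HTML document for the page ... </html>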
8.8 Explaining the TCP/IP Transport Layer and Application Layer
Domain Name System
The DNS provides an efficient way to convert human-readable names of IP end systems
into machine-readable IP addresses necessary for routing.
On TCP/IP networks, hosts are assigned their unique 32-bit IPv4 addresses in the familiar
dotted decimal notation to send and receive messages over the local network and the
Internet. If there were no DNS, you would have to remember the IPv4 address of every
host that you would like to reach.
DNS uses a distributed database that is hosted on several servers, which are located
around the world, to resolve the names that are associated with IP addresses. The DNS
protocol defines an automated service that matches resource names with the required
numeric network address.
An easy way to observe DNS in action is to open a command window in Microsoft Windows, Apple macOS, or your favorite Linux distribution. When the command window is open, enter nslookup www.google.com. This command queries DNS to resolve the domain name into an IP address. The result will appear below your query.
Note: You may receive a different IP address in your response than the example shows.

Your host sends a DNS query for the IP address of www.google.com. If your DNS server
has the answer cached, it returns the answer directly.
8.9 Explaining the TCP/IP Transport Layer and Application Layer
Explaining DHCP for IPv4
Managing a network can be very time-consuming. Network clients break or are moved, and new clients that need network connectivity are purchased. Handling these tasks is all part of the network administrator's job. Depending on the number of IP hosts, manual configuration of IPv4 addresses for every device on the network can be virtually impossible.
DHCP can greatly decrease the workload of the network administrator. DHCP
automatically assigns an IPv4 address from an IPv4 address pool that the administrator
defines. However, DHCP is much more than just a mechanism that allocates IPv4
addresses. This service automates the assignment of IPv4 addresses, subnet masks,
gateways, and other required networking parameters.
DHCP is built on a client/server model. The DHCP server is allocated one or more network
addresses and sends configuration parameters to dynamically configured hosts that
request them. The term "client" refers to a host that is requesting initialization
parameters from a DHCP server. Most endpoint devices on today’s networks are DHCP
clients, including Cisco IP phones, desktop PCs, laptops, printers, and even Blu-Ray players.
Just about any device that you can configure to participate on a TCP/IP network has the
option of using DHCP to obtain its IPv4 configuration.
Depending on the actual DHCP server that is in use, there are three basic DHCP IPv4
address allocation mechanisms:

• Dynamic allocation: Dynamic allocation of IPv4 addresses is the most common
type of address assignment. As devices boot and activate their Ethernet interfaces,
the DHCP client service triggers a DHCP Discover broadcast that includes the DHCP
client MAC address. If a DHCP server is listening on that IPv4 subnet, it responds
with a DHCP Offer message. The DHCP Offer message offers an unused IPv4
address from the address pool that is on the DHCP server. If the IPv4 address is
acceptable, the DHCP client then sends a DHCP Request agreeing to the offered
address. The DHCP server then marks the IPv4 address as "in use" in its database
and sends a final DHCP ACK to the DHCP client. The DHCP server also starts the
countdown on a "lease timer." A DHCP client is given its IPv4 configuration for a
specified amount of time with a dynamic allocation. When the lease time expires,
the DHCP server can reclaim the address, return it to the address pool, and lease it
to another host. DHCP clients can renew their address before it expires.
• Automatic allocation: Automatic allocation of IPv4 addresses is very similar to
dynamic allocation, except that the lease time is set never to expire. This setting
results in the DHCP client always being associated with the same IPv4 address.
• Static allocation: Static allocation is an alternative that is generally used for
devices such as servers and printers, where the device needs to keep the same
IPv4 address configuration permanently. A static entry is made in the DHCP
database that maps the MAC address to an IPv4 address.
The following figure illustrates how a DHCP server assigns an IPv4 address to a DHCP client
computer, while the table provides additional information for the exchanged packets.
The DHCP client and the DHCP server exchange the following packets:
1. DHCP Discover: The DHCP client boots up and sends this message on its local
physical subnet to the subnet's broadcast (destination IPv4 address of
255.255.255.255 and MAC address of ff:ff:ff:ff:ff:ff), with a source IPv4 address of
0.0.0.0 and its MAC address.
2. DHCP Offer: The DHCP server responds and fills the yiaddr (your IPv4 address) field
with the requested IPv4 address. The DHCP server sends the DHCP Offer to the
broadcast address but includes the client hardware address in the chaddr (client
hardware address) field of the offer, so the client knows that it is the intended
destination.
3. DHCP Request: The DHCP client may receive multiple DHCP Offer messages, but it
chooses one and accepts only that server's offer, implicitly declining all other DHCP
Offer messages. The client identifies the server by
populating the Server Identifier option field with the DHCP server's IPv4 address.
The DHCP Request is also a broadcast, so all DHCP servers that sent a DHCP Offer
will receive it, and each will know whether it was accepted or declined. Even
though the client has been offered an IPv4 address, it will send the DHCP request
message with a source IPv4 address of 0.0.0.0.
4. DHCP ACK: The DHCP server acknowledges the request and completes the
initialization process. The DHCP ACK message has the DHCP server's IPv4 address as
its source, is once again sent to the broadcast address, and contains all the
parameters that the client requested in the DHCP Request message. When the
client receives the DHCP ACK, it enters the bound state and is now free to use the
IPv4 address to communicate on the network.
Configuring a Router as an IPv4 DHCP Client
An ISP sometimes provides a static address for a router interface that is connected to the
Internet. In other cases, an address is provided using DHCP. If the ISP uses DHCP to
provide interface addressing, no manual address can be configured. Instead, the router's
interface is configured to operate as a DHCP client.

The interface interface command on the router specifies an interface and enters interface configuration mode, while the ip address dhcp command enables the interface to acquire an IPv4 address through DHCP.
If the router receives the optional default gateway DHCP parameter from the server, it will
inject the default route into its routing table, pointing to the default gateway IPv4 address.
To verify that the router interface has acquired an IPv4 address through DHCP, you can
use the show ip interface brief command:
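As a minimal sketch, assuming the internet-facing interface is GigabitEthernet0/0 (the interface name is only an example), the configuration and verification would look similar to the following:

Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip address dhcp
Router(config-if)# no shutdown
Router(config-if)# end
Router# show ip interface brief

The show ip interface brief output should list the interface with the IPv4 address that it acquired from the DHCP server.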

Configuring an IPv4 DHCP Relay


A DHCP relay agent is any host that forwards DHCP packets between clients and servers.
Relay agents are used to forward requests and replies between clients and servers when they are not on the same subnet. DHCP requests are sent as broadcasts, and because routers do not forward broadcasts, you need relay functionality to reach the DHCP server.
Note: The ip helper-address address command should be issued on the interface where
the DHCP broadcasts are received.
To configure the DHCP relay agent to forward packets to a DHCP server, you should enter
the interface configuration mode using the interface interface command. Then, use the ip
helper-address address command to specify that the interface will forward UDP
broadcasts, including BOOTP and DHCP, to the specified server address.
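A minimal sketch, assuming the client-facing interface is GigabitEthernet0/0 and the DHCP server is reachable at 10.1.1.10 (both values are placeholders for this illustration):

Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip helper-address 10.1.1.10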
These steps show how DHCP requests are processed when DHCP relay is used:

• Step 1: A DHCP client broadcasts a DHCP request
• Step 2: DHCP relay includes option 82 and sends the DHCP request as a unicast
packet to the DHCP server. Option 82 includes remote ID and circuit ID.
• Step 3: The DHCP server responds to the DHCP relay
• Step 4: The DHCP relay strips off option 82 and sends the response to the DHCP
client
To verify the DHCP relay configuration in this example, you can check whether the client
computers in the customer LAN have acquired IPv4 addresses from the DHCP server.
You can use packet capture to examine the packets on the customer LAN to observe the
communication between the clients and the DHCP relay agent. Furthermore, you can
examine the packets towards the service provider network to observe that the router has
forwarded the DHCP Discover message from the clients towards the DHCP server using as
a source the IPv4 address from the router's interface GigabitEthernet0/1. You can also
observe that the DHCP server has sent the DHCP Offer, as a unicast packet, back to the
DHCP relay agent from which the DHCP Discover message came.
Configuring a Router as an IPv4 DHCP Server
The Cisco IOS DHCP server is a full DHCP server implementation that assigns and manages
IPv4 addresses from specified address pools within the device to DHCP clients. The DHCP
server can be configured to assign additional parameters such as the IPv4 address of the
DNS server and the default gateway. You can implement a DHCP server on both Cisco IOS
Routers and Cisco Catalyst switches.

To configure the DHCP server on a router, you should enter the DHCP pool configuration
mode using the ip dhcp pool name command. Then, assign the DHCP parameters to the
DHCP pool.

Use the following commands that are shown in the table to define the pool parameters.

You can also exclude a range of IPv4 addresses from DHCP assignment by using the ip dhcp excluded-address ip-address [last-ip-address] command in global configuration mode.
In the configuration example above, IPv4 addresses are assigned from the address
pool 10.1.50.0/24 with a lease time of 12 hours. Additional parameters are the default
gateway, domain name, and DNS server. Also, IPv4 addresses from 10.1.50.1 to 10.1.50.50
are not assigned to the end devices.
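A configuration sketch that matches this description follows; the pool name LAN_POOL and the specific gateway, DNS server, and domain values are placeholders chosen for illustration:

Router(config)# ip dhcp excluded-address 10.1.50.1 10.1.50.50
Router(config)# ip dhcp pool LAN_POOL
Router(dhcp-config)# network 10.1.50.0 255.255.255.0
Router(dhcp-config)# default-router 10.1.50.1
Router(dhcp-config)# dns-server 10.1.50.10
Router(dhcp-config)# domain-name example.com
Router(dhcp-config)# lease 0 12

The lease 0 12 command sets the lease time to 0 days and 12 hours.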
To verify information about the configured DHCP address pools, you can use the show ip dhcp pool command. To display the address binding information, which lists all IPv4 address-to-MAC address bindings, you can use the show ip dhcp binding command.
IPv4 DHCP Settings on Windows Host
On a Windows computer, you can use different ipconfig command options to view and
refresh DHCP and DNS settings.
The following is the syntax for the ipconfig command:
ipconfig [/all] [/renew [adapter]] [/release [adapter]] [/displaydns] [/flushdns]
The following command options are commonly used:

• /all This option displays the complete TCP/IP configuration for all adapters,
including DHCP and DNS configuration. Without this parameter, the ipconfig
command displays only the IP address, subnet mask, and default gateway values
for each adapter. Adapters can represent physical interfaces, such as installed
network adapters, or logical interfaces, such as dialup connections.
• /renew [adapter] This option renews DHCP configuration for all adapters (if an
adapter is not specified) or for a specific adapter if the adapter parameter is
included. This parameter is available only on computers with adapters that are
configured to obtain an IP address automatically. To specify an adapter name,
enter the adapter name that appears when you use ipconfig without parameters.
• /release [adapter] This option sends a DHCPRELEASE message to the DHCP server
to release the current DHCP configuration and discard the IP address configuration
for either all adapters (if an adapter is not specified) or for a specific adapter if the
adapter parameter is included. This parameter disables TCP/IP for adapters that
are configured to obtain an IP address automatically. To specify an adapter name,
enter the adapter name that appears when you use ipconfig without parameters.
• /displaydns This option displays the contents of the host DNS cache. When an IP
host makes a DNS query for a hostname, it caches the result to avoid unnecessary
queries.
• /flushdns This option deletes the host DNS cache. This option is useful if the IP
address associated with a hostname has changed, but the host is still caching the
old IP address.
• /? This option displays help at the command prompt.
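For example, a typical sequence to inspect the current settings, refresh a DHCP lease, and clear stale DNS entries on a Windows host might look like this (output omitted):

C:\> ipconfig /all
C:\> ipconfig /release
C:\> ipconfig /renew
C:\> ipconfig /flushdns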
9.1 Exploring the Functions of Routing
Introduction
One of the intriguing aspects of Cisco routers is how the router chooses which route is the
best among the routes presented by routing protocols, manual configuration, and various
other means. While route selection is much simpler than you might imagine, you need to
learn how Cisco routers work to understand it completely. Determining the best path
involves evaluating multiple paths to the same destination network and selecting the
optimal path to reach that network. This process is performed for every packet that goes
through a router.
A router is a networking device that forwards packets between different networks. A
router is typically positioned at the edge of a network and can provide connections to
other networks. In Enterprise Campus environments, you will typically find devices
providing routing in the center of the network or at the edge where they provide
connectivity to WANs or the internet. Routing functionality can often be provided not only
by routers but also by firewalls or Layer 3 switches. At home, a router is typically part of
an all-in-one device that also provides switching, wireless, and security functions.

As a network engineer, you need to understand routing functionality, which includes
different processes and ideas:
• The role of a router and router components.
• The routing table function and the information in it.
• The types of routes and how forwarding works in routers.
9.2 Exploring the Functions of Routing
Role of a Router
A router is a networking device that forwards packets between different networks.

While switches exchange data frames between segments to enable communication within
a single network, routers are required to reach hosts that are not in the same network.
Routers enable internetwork communication by connecting interfaces in multiple
networks. For example, the router in the figure above has one interface connected to the
192.168.1.0/24 network and another interface connected to the 192.168.2.0/24 network.
The router uses a routing table to route traffic between the two networks.
In the following figure, data frames travel between the various endpoints on LAN A. The
switch enables the communication to all devices within the same network, whose network
IPv4 address is 10.18.0.0/16. Likewise, the LAN B switch enables communication among
the hosts on LAN B, whose network IPv4 address is 10.22.0.0/16.

A host in LAN A cannot communicate with a host in LAN B without the router. Routers
enable communication between hosts that are not in the same local LAN. Routers can do
this function because they can be attached to multiple networks and can route between
them. In the figure, the router is attached to two networks, 10.18.0.0/16 and
10.22.0.0/16. Routers are essential components of large IP networks because they can
accommodate growth across wide geographical areas.
This figure illustrates another important routing concept. Networks to which the router is
attached are called local or directly connected networks. All other networks—networks
that a router is not directly attached to—are called remote networks.

The topology in the figure shows RouterX, which is directly attached to three networks
172.16.1.0/24, 172.16.2.0/24, and 192.168.100.0/24. To RouterX, all other networks, i.e.,
10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24, are remote networks. To RouterY,
networks 10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24 are directly connected
networks. RouterX and RouterY have a common directly connected network
192.168.100.0/24.

9.3 Exploring the Functions of Routing


Router Components
Cisco offers many different routers, which are suited for different networking
environments, such as enterprise LANs, service provider WANs, and so on. The various
models offer various features that are suitable for an array of different environments.
However, the core function of a router is to route packets, and for that reason, all routers
have many common components.
These components are as follows:

• CPU: A CPU, or processor, is the chip installed on the motherboard that carries out
the instructions of a computer program. For example, it processes all the
information gathered from other routers or sent to other routers.
• Motherboard: The motherboard is the central circuit board, which holds critical
electronic components of the system. The motherboard provides connections to
other peripherals and interfaces.
• Memory: There are four primary types of memory:
o RAM: RAM is memory on the motherboard that stores data during CPU
processing. It is a volatile type of memory in that its information is lost
when power is switched off. RAM provides temporary memory for the
router's running configuration while the router is powered on.
o NVRAM: NVRAM retains content when the router is powered down.
NVRAM stores the startup configuration file for most router platforms. It
also contains the software configuration register, which determines which
Cisco IOS image is used when booting the router.
o ROM: ROM is read-only memory on the motherboard. The content of
ROM is not lost when power is switched off. Data stored in ROM cannot be
modified, or it can be modified only slowly or with difficulty. ROM
sometimes contains a ROM monitor (ROMMON). ROM Monitor initializes
the hardware and boots the Cisco IOS software when you power on or
reload a router. You can use the ROM monitor to perform certain
configuration tasks, such as recovering a lost password or downloading
software over the console port. ROM also includes bootloader software
(bootstrap), which helps the router boot when it cannot find a valid Cisco
IOS image in the flash memory. During normal startup, the ROM Monitor
initializes the router, and then control passes to the Cisco IOS software.
o Flash: Flash memory is nonvolatile storage that can be electrically erased
and reprogrammed. Flash memory stores the Cisco IOS image. On some
platforms, it can also store configuration files or boot images.
• Ports (also referred to as interfaces): Ports are used to connect routers to other
devices in the network. Routers can have these types of ports:
o Management ports: Routers have a console port that can be used to attach
to a terminal used for management, configuration, and control. High-end
routers may also have a dedicated Ethernet port that can be used only for
management. An IP address can be assigned to the Ethernet port, and the
router can be accessed from a management subnet. The auxiliary (AUX)
interface on a router is used for remote management of the router.
Typically, a modem is connected to the AUX interface for dial-in access.
From a security standpoint, enabling the option to connect remotely to a
network device carries with it the responsibility of vigilant device security.
o Network ports: The router has many network ports, including various LAN
or WAN media ports, which may be copper or fiber cable. IP addresses are
assigned to network ports.
As an example, the following figure shows the ports on a Cisco integrated services router
(ISR) 4331 Router:

9.4 Exploring the Functions of Routing


Router Functions
Routers have these two important functions:

• Path determination: Routers use their routing tables to determine how to forward
packets. Each router must maintain its own local routing table, which contains a
list of all destinations known to the router and information about reaching those
destinations. When a router receives an incoming packet, it examines the
destination IP address in the packet and searches for the best match between the
destination address and the network addresses in the routing table. A matching
entry may indicate that the destination is directly connected to the router or that it
can be reached via another router. This router is called the next-hop router and is
on the path to the final destination. If there is no matching entry, the router sends
the packet to the default route. If there is no default route, the router drops the
packet.
• Packet forwarding: After a router determines the appropriate path for a packet, it
forwards it through a network interface toward the destination network. Routers
can have interfaces of different types. When forwarding a packet, routers perform
encapsulation following the OSI Layer 2 protocol implemented at the exit
interface. The figure shows router A, which has two FastEthernet interfaces and
one serial interface. When router A receives an Ethernet frame, it de-encapsulates
it, examines it, and determines the exit interface. If the router needs to forward
the packet out of the serial interface, the router will encapsulate the frame
according to the Layer 2 protocol used on the serial link. The figure also shows a
conceptual routing table that lists destination networks known to the router and
its corresponding exit interface or next-hop address. If an interface on the router
has an IPv4 address within the destination network, the destination network is
considered "directly connected" to the router. For example, assume that router A
receives a packet on its Serial0/0/0 interface destined for a host on network
10.1.1.0. Because the routing table indicates that network 10.1.1.0 is directly
connected, router A forwards the packet out of its FastEthernet 0/1 interface, and
the switches on the segment process the packet to the host. If a destination
network in the routing table is not directly connected, the packet must reach the
destination network via the next-hop router. For example, assume that router A
receives a packet on its Serial0/0/0 interface and the destination host address is on
the 10.1.3.0 network. In this case, it must forward the packet to the router B
interface with the IPv4 address 10.1.2.2.
9.5 Exploring the Functions of Routing
Routing Table
A routing table contains a list of all networks known to the router and information about
reaching those networks. Each line or entry of the routing table lists a destination network
and the interface or next-hop address by which that destination network can be reached.
A routing table may contain four types of entries:

• Directly connected networks
• Dynamic routes
• Static routes
• Default routes
Directly Connected Networks
All directly connected networks are added to the routing table automatically. A newly
deployed router, without any configured interfaces, has an empty routing table. The
directly connected routes are added after you assign a valid IP address to the router
interface, enable it with the no shutdown command, and when it receives a carrier signal
from another device (router, switch, end device, and so on). In other words, when the
interface status is up/up, the network of that interface is added to the routing table as a
directly connected network. If the hardware fails or is administratively shut down, the
entry for that network is removed from the routing table. The following figure shows
examples of routing table entries for directly connected networks. An active, properly
configured, directly connected interface creates two routing table entries. The following
figure displays the IPv4 routing table entries on R1 for the directly connected network
10.1.1.0/24.
The entries contain the following information:

• Route source: Identifies how the route was learned. Directly connected interfaces
have two route source codes. "C" identifies a directly connected network. "L"
identifies the local IPv4 address assigned to the router’s interface.
• Destination network: For directly connected networks, the destination networks
are local to the router. The destination network address is indicated with a
network address and subnet mask in the form of the prefix. Note that "L" entries,
which identify the local IPv4 address of the interface, have a prefix of /32.
• Outgoing interface: Identifies the exit interface to use when forwarding packets to
the destination network.
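As an illustrative sketch (the interface name and the interface address 10.1.1.1 are assumptions), the two entries for such a directly connected network would appear in the routing table in roughly this form:

C        10.1.1.0/24 is directly connected, GigabitEthernet0/0
L        10.1.1.1/32 is directly connected, GigabitEthernet0/0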

Dynamic Routes
Routers use dynamic routing protocols to share information about the reachability and
status of remote networks. A dynamic routing protocol allows routers to learn about
remote networks from other routers automatically. These networks, and the best path to
each, are added to the router's routing table and identified as a network learned by a
specific dynamic routing protocol. Cisco routers can support a variety of dynamic IPv4 and
IPv6 routing protocols, such as Border Gateway Protocol (BGP), Open Shortest Path First
(OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP), Intermediate System-to-
Intermediate System (IS-IS), Routing Information Protocol (RIP), and so on. The routing
information is updated when changes in the network occur. Larger networks require
dynamic routing because there are usually many subnets and constant changes. These
changes require updates to routing tables across all routers in the network to prevent
connectivity loss. Dynamic routing protocols ensure that the routing table is automatically
updated to reflect network changes. The following figure displays an IPv4 routing table
entry on R1 for the route to remote network 172.16.1.0/24.
From the example entry, you can tell the following:
• Route source: Identifies how the route was learned. "O" in the figure indicates that
the source of the entry was the OSPF dynamic routing protocol.
• Destination network: Identifies the address of the remote network. The router
knows how to reach 172.16.1.0/24 network.
• Administrative distance: Identifies the trustworthiness of the route source. Lower
values indicate the preferred route source. OSPF has a default administrative
distance value of 110.
• Metric: Identifies the value assigned to reach the remote network. Lower values
indicate preferred routes. This OSPF route has a metric of 2 for the destination
network 172.16.1.0/24.
• Next-hop: Identifies the IPv4 address of the next router to forward the packet to.
The IPv4 address of the next-hop is 192.168.10.2.
• Route time stamp: Identifies how much time has passed since the route was
learned. The information in the example entry was learned 3 minutes and 23
seconds ago.
• Outgoing interface: Identifies the exit interface to use to forward a packet toward
the final destination. The packets destined to the 172.16.1.0/24 network will be
forwarded out of the GigabitEthernet 0/1 interface.
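Putting these fields together, the entry described above would appear in the routing table in roughly this form (an illustrative sketch reconstructed from the values listed in the bullets):

O        172.16.1.0/24 [110/2] via 192.168.10.2, 00:03:23, GigabitEthernet0/1

The values in brackets are the administrative distance (110) and the metric (2).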

Static Routes
Static routes are entries that you manually enter directly into the configuration of the
router. Static routes are not automatically updated and must be manually reconfigured if
the network topology changes. Static routes can be effective for small, simple networks
that do not change frequently. The benefits of using static routes include improved
security and resource efficiency. The main disadvantage of using static routes is the lack of
automatic reconfiguration if the network topology changes. There are two common types
of static routes in the routing table—static routes to a specific network and the default
static route.
From the example entry, you can tell the following:

• Route source: Identifies how the route was learned. Static routes have a route
source code "S".
• Destination network: The destination network address is indicated with a network
address and subnet mask in the prefix. The router knows how to reach
192.168.30.0/24 network.
• Administrative distance: Identifies the trustworthiness of the route source. Lower
values indicate the preferred route source. Static routes have a default
administrative value of 1.
• Metric: Identifies the value assigned to reach the remote network. Static routes do
not calculate metrics the same way as dynamic routes; metric is set. The default
value for the metric of a static route is 0.
• Next-hop: Identifies the IPv4 address of the next router to forward the packet to.
The IPv4 address of the next-hop is 192.168.10.2.
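A minimal sketch of how such a static route could be configured, using the values from the entry described above:

ip route 192.168.30.0 255.255.255.0 192.168.10.2

The resulting routing table entry would then look roughly like this:

S        192.168.30.0/24 [1/0] via 192.168.10.2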
Default Routes
A default route is an optional entry that the router uses if a packet does not match any
other, more specific route in the routing table. A default route can be dynamically
learned or statically configured. More than one source can provide a default route, but the
selected default route is presented in the routing table as the Gateway of last resort.

From the example entry, you can tell the following:

• Route source: Identifies how the route was learned. The default
route is marked with an asterisk (*). Depending on the source of a default route,
an asterisk is added to the route source (S* in the example.)
• Destination network: The destination network for the default route is 0.0.0.0/0.
• Administrative distance: Identifies the trustworthiness of the route source. Lower
values indicate the preferred route source. The default route has the same
administrative distance as a source of a default route. In the example, the default
route's source is a static route, with a default administrative distance value of 1.
• Metric: Identifies the value assigned to reach the remote network. The default
route inherits the metric value from the route source. In the example, since the
default route is statically configured, the metric is 0.
• Next-hop: Identifies the IPv4 address of the next router to forward the packet to.
The IPv4 address of the next-hop for the default route in the example is 10.1.1.1.
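A minimal sketch consistent with the values above, in which the default route is configured statically with the next hop 10.1.1.1:

ip route 0.0.0.0 0.0.0.0 10.1.1.1

The routing table would then list 10.1.1.1 as the Gateway of last resort and show an entry roughly like this:

S*       0.0.0.0/0 [1/0] via 10.1.1.1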
IPv4 Routing Table Example
On a Cisco router, the show ip route command can be used to display the IPv4 routing
table of a router. The command output is used to verify that IPv4 networks and specific
interface addresses have been installed in the IPv4 routing table. The following output
displays the routing table of RouterA.

The output shows that RouterA has received routes from multiple sources (static routes
and routing protocols), which would be uncommon in a production network. However,
this table is used here to demonstrate the various route sources. The first part of the
output explains the codes, presenting the letters and the associated sources of the entries
in the routing table.
The letters are:

• C: Indicates directly connected networks; the first and seventh entries are directly
connected networks.
• L: Indicates local interfaces within connected networks; the second and eighth
entries are local interfaces.
• R: Indicates RIP; the third entry is the RIP route.
• O: Indicates OSPF; the fourth entry is an OSPF route.
• D: Indicates EIGRP; the fifth entry is an EIGRP route. The letter D stands for
Diffusing Update Algorithm (DUAL), which is the update algorithm that EIGRP uses.
The code letter E was previously taken by the legacy exterior gateway protocol
(EGP).
• S: Indicates static routes; the sixth and ninth entries are static routes.
• Asterisk (*): Indicates that this static route is a candidate for the default route.

9.6 Exploring the Functions of Routing


Path Determination
Determining the best path involves evaluating multiple paths to the same destination
network and selecting the optimum path to reach that network. When you statically
configure a route, then you determine what the best path to the network is. But when
dynamic routing protocols are used, the best path is selected by a routing protocol based
on the quantitative value called a metric. A metric is a quantitative value used to measure
how to get to a given network. A dynamic routing protocol's best path to a network is the
path with the lowest metric.
Dynamic routing protocols typically use their own rules and metrics. The routing algorithm
calculates a metric for each path to the destination network. Metrics can be based on
either a single characteristic, such as bandwidth, or several characteristics of a path, such
as bandwidth, delay, and reliability. Some routing protocols can base route selection on
multiple metrics, combining them into a single metric.
Review the example topology in the following figure.
Router R1 has multiple paths to LAN B network. One possible path is R1 > R2 > R3. An
alternative path is R1 > R3. If router R1 runs a routing protocol that uses "hop count" as a
metric, that protocol will count how many routers there are to the destination. Router R1
would choose the path R1 > R3, because there is only one router on that path to LAN B.
An example of a protocol that uses hop count as a metric is RIP.
OSPF and EIGRP routing protocols do not count routers, but both take into consideration
the bandwidth of the links on the path to the destination. In the example in the figure,
when bandwidth is considered, then R1 > R2 > R3 path along 1-Gbps links is a better path
than R1 > R3 with the bandwidth of 100 Mbps.
Each dynamic protocol offers its best path (its lowest metric route) to the routing table.
Administrative Distance
Routing tables can be populated from three sources: directly connected networks, static
routes, and routing protocols. The router must evaluate the routing information from all
the sources and select the best route to each destination network to install into the
routing table.
A router can be configured with multiple routing protocols and static routes. If this
occurs, the routing table may have more than one route source for the same destination
network. Cisco IOS Software uses what is known as the administrative distance to determine
the route to install into the IP routing table. The administrative distance represents the
"trustworthiness" of the route; the lower the administrative distance, the more
trustworthy the route source. For example, a static route has a default administrative
distance of 1, whereas an OSPF-learned route has a default administrative distance of 110.
Given separate routes to the same destination with different administrative distances, the
router chooses the route with the lowest administrative distance.
Administrative distance is used as a tiebreaker only when different sources offer the
information for the same destination network, i.e., the same network address and subnet
mask. For example, suppose both static and dynamic route sources offer information for
the 172.16.1.0/24 network. In that case, the administrative distance will decide whether a
static or dynamic entry will be installed in the routing table. But, if the static route source
offers information for 172.16.0.0/16 and the dynamic route source offers information for
172.16.1.0/24, then these are considered different routes, and there is no need for an
administrative distance to break a tie.
When a router must choose between a static route and an OSPF route, the static route takes
precedence. Similarly, a directly connected route with an administrative distance of 0
takes precedence over a static route with an administrative distance of 1.
Each source type has a default administrative distance. The figure lists several routing
sources and their associated administrative distances. The values in the table are default
values, which can be changed.
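For example, a static route can be given a non-default administrative distance by appending a distance value to the ip route command (the prefix, next hop, and the value 150 are assumptions used only for illustration):

ip route 172.16.1.0 255.255.255.0 192.168.10.2 150

Such a route is installed only if no source with a lower administrative distance offers the same prefix.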

Keep in mind these takeaways regarding routing table sources:

• Directly connected networks have an administrative distance of 0 and preempt all other entries for that destination network. Only a directly connected route can have an administrative distance of 0, and the administrative distance of 0 cannot be modified for directly connected networks.
• Static routes have a default administrative distance of 1; therefore, if you configure
a static route, it will be included in the routing table unless there is a direct
connection to the destination network.
• Each routing protocol has its own default administrative distance. The OSPF
administrative distance is 110, and the administrative distance of the EIGRP protocol is
90.
In the figure, the router has received two routing update messages—one from OSPF and
one from EIGRP. The metric that EIGRP uses has determined that the best path to network
172.17.8.0/24 is via 192.168.5.2, but the metric that OSPF uses has determined that the
best path to 172.17.8.0/24 is via 192.168.3.1. Each routing protocol uses a different metric
to calculate the best path to a given destination if it learns multiple paths to the same
destination.
The router has used the administrative distance feature to determine which route to
install in its routing table. Because the administrative distance for OSPF is 110 and the
administrative distance for EIGRP is 90, the router has chosen the EIGRP route and adds
only the EIGRP route to its routing table.
Route Selection in Cisco Routers
After installing route entries in the routing table, the routing table may contain entries for
a destination network and subnets. For example, the routing table may contain an entry
for 10.0.0.0/8 but also entries for subnets of that network, i.e., 10.10.0.0/16, 10.10.1.0/24,
and 10.10.2.0/24.
When a router receives a packet, it looks at the routing table to determine how to forward
it to the final destination. The router always tries to find an exact match for the
destination IPv4 address included in the IPv4 header of the packet, but very rarely does
such a route exist in the routing table; therefore, the router looks for the best match.
Because each entry in a routing table may specify a subnetwork, a packet's destination
address may match more than one routing table entry. For instance, a packet destined to
10.10.2.3 would match entries 10.0.0.0/8, 10.10.0.0/16, and 10.10.2.0/24. Although all
three routes match the destination address, they do not match it in the same way. The
10.10.2.3 destination IPv4 address matches the 10.0.0.0/8 destination network only in the
first 8 bits. The 10.10.2.3 destination IPv4 address matches the 10.10.0.0/16 destination
network only in the first 16 bits. Finally, the 10.10.2.3 destination IPv4 address matches
the 10.10.2.0/24 destination network in the first 24 bits. The routing table entry whose
leading address bits matches the largest number of the packet destination address bits is
called the longest prefix match. In this example, 10.10.2.0/24 is the longest prefix match.
The longest prefix match always wins among the routes installed in the routing table, i.e.
among entries already in the routing table.
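As a minimal sketch of this example (the next-hop addresses are assumptions), the three overlapping routes could exist as static routes:

! Three overlapping static routes to 10.0.0.0/8, 10.10.0.0/16, and 10.10.2.0/24
ip route 10.0.0.0   255.0.0.0     192.168.1.1
ip route 10.10.0.0  255.255.0.0   192.168.1.2
ip route 10.10.2.0  255.255.255.0 192.168.1.3

Issuing show ip route 10.10.2.3 in privileged EXEC mode would then display the routing entry for 10.10.2.0/24, the longest prefix match for that destination.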
Making a forwarding decision consists of three sets of processes: the routing processes,
the routing table, and the actual process that makes the forwarding decision and switches
packets.
Three processes are involved in building and maintaining the routing table in a Cisco
router:

• Various routing processes, which actually run a routing protocol, such as RIP
version 2 (RIPv2), EIGRP, IS-IS, and OSPF. The best route from a routing process has
the potential to be installed into the routing table. The routing protocol with the
lowest administrative distance always wins when installing routes into the routing
table.
• The routing table itself, which accepts information from the routing processes and
also replies to requests for information from the forwarding process.
• The forwarding process, which requests information from the routing table to
make a packet forwarding decision.
10.1 Configuring a Cisco Router
Introduction
As with a switch, proper physical installation of a router is very important. Since
there are many different models of routers, as a network engineer, you will have to install
and connect your router according to the model specifics, which are always described in
the installation documentation. After a router is physically set up, you will typically need
to connect to the router via a console interface and start configuring it. You need to
understand the initial configuration steps to configure the router properly; however,
different models' initial configurations are typically similar. But before you start with the
initial configuration, it’s always a smart idea to check if the router hardware is working
properly. Then, you can start setting up interfaces connected to different IP networks and
check their status. You can also check what network devices the router can communicate
with on the same link by using different discovery protocols.
In Enterprise environments, routers and other devices performing routing are located in
different parts of the campus. In contrast, at home or smaller branches, they are typically
located close to the link to the telecommunication provider. In either case, you will need
to configure the interfaces according to the Enterprise or internet provider IP addressing
plan.

As a network engineer, it is critical that you thoroughly understand the setup of a
router, including:
• Going through the initial steps to properly configure a router.
• Configuring and verifying an interface on a router.
• Configuring and checking the neighbors of your networking devices.
10.2 Configuring a Cisco Router
Initial Router Setup
Cisco provides several different types of router hardware, including routers that only do
routing, while other routers offer additional functions. In fact, Cisco has a series of
integrated services routers (ISRs), with the name emphasizing the fact that many functions
are integrated into a single device.
The following figure shows a Cisco ISR with some of the more important features
highlighted.

Unlike a computer end device, Cisco routers do not have a keyboard, monitor, or mouse
device to allow direct user interaction. However, you can configure the router from a PC.
At the initial installation, the PC has to be connected to the router directly through the
console port. To connect to the console port, you use a console cable, which is also called
a rollover cable.
The console port can be an RJ-45 port or a USB port. A Cisco router might have only one
type or both types of console ports. When the console port on a device is an RJ-45 port,
you require a console cable with an RJ-45 connector on one end. The other end can be a
serial DB-9 connector or a USB connector. Most modern computers have USB ports and
rarely include built-in serial ports. If your console cable has a serial connector, you will
need a serial-to-USB adapter and operating system driver (USB-to-RS-232-compatible
serial port adapter) to establish connectivity.
When the console port is a USB port, you need a suitable USB cable (for example, a USB
Type A-to-5-pin mini Type B) and an operating system device driver to establish
connectivity.
Your PC also needs a serial port and communications software, such as Tera Term or
PuTTY, configured with the following settings:

• Speed: 9600 bps


• Data bits: 8
• Parity: None
• Stop bit: 1
• Flow control: None
Note: On routers with two console ports, only one console port can be active at a time.
When a cable is plugged into the USB console port, the RJ-45 port becomes inactive.
When the USB cable is removed from the USB port, the RJ-45 port becomes active.
The startup of a Cisco router requires verifying the physical installation, powering up the
router, and viewing the Cisco IOS Software output on the console.
The router completes these tasks to start router operations:
1. Runs the power-on self-test (POST) to test the hardware. During POST, the router
executes diagnostics to verify the basic operation of the CPU, memory, and
interface circuitry.
2. Finds and loads the Cisco IOS Software that the router uses for its operating
system.
3. Finds and loads the configuration file if one exists. The configuration file contains
statements about router-specific attributes, protocol functions, and interface
addresses.
Note: Before you start the router, verify the power and cooling requirements, cabling, and
console connection. Then, push the power switch to "On" and observe both the boot
sequence and the Cisco IOS Software output on the console.
After a router completes POST and loads a Cisco IOS Software image, it looks for a device
configuration file in its NVRAM, known as the startup-config. If the router does not find
one, it executes a question-driven, initial configuration routine that is called "setup."
Setup is a prompt-driven program that allows minimal device configuration. If the router
has a startup configuration file in NVRAM, the user EXEC mode prompt appears.

A router without an existing configuration enters the system configuration dialog.


A configured router with an existing configuration displays a user EXEC mode prompt.

Note: If a username or password is configured, you will instead get a prompt to enter
credentials.
The setup mode is not intended to enter complex protocol features in the router but
rather for a minimal configuration. You do not have to use the setup mode; you can use
other configuration modes to configure the router.
The primary purpose of the setup mode is to rapidly bring up a minimal-feature
configuration for any router that cannot find its configuration from some other source. In
addition to being able to run the setup mode when the router boots, you may also initiate
it by entering the setup privileged EXEC mode command.
To skip the system configuration dialog and configure the router manually, answer the
first question in the system configuration dialog with no, or press Ctrl-C.

To verify the router status, use the show version command:


To verify the running configuration of the router, use the show running-config command:

10.3 Configuring a Cisco Router


Configuring Router Interfaces
One of the main functions of a router is to forward packets from one network device to
another. For the router to perform this task, you must define the characteristics of the
interfaces through which the router receives and forwards the packets.
There are generally two types of physical interfaces used to forward packets on Cisco
routers: Ethernet interfaces and serial interfaces.

• Ethernet interfaces: The term Ethernet interface refers to any type of Ethernet
interface. For example, some Cisco routers have an Ethernet interface that is
capable of only 10 Mbps, so to configure this type of interface, you would use the
interface Ethernet interface-identifier configuration command. However, other
routers have interfaces that are capable of operating up to 100 Mbps. These
interfaces are referred to as Fast Ethernet ports. You use the interface
FastEthernet interface-identifier command to configure these types of ports.
Similarly, the interfaces that are capable of Gigabit Ethernet speeds are referenced
with the interface GigabitEthernet interface-identifier command. The interfaces
that are capable of operating up to 10 Gbps, 25 Gbps, 40 Gbps, and 100 Gbps
Ethernet speed are referenced with the interface TenGigabitEthernet interface-
identifier, interface TwentyFiveGigE interface-identifier, interface
FortyGigabitEthernet interface-identifier, and interface HundredGigE interface-
identifier commands, respectively.
• Serial interfaces: Serial interfaces are the second major type of physical interfaces
on Cisco routers. To support point-to-point leased lines and Frame Relay access-
link standards, Cisco routers use serial interfaces. You can then choose which data
link layer protocol to use, such as High-Level Data Link Control (HDLC) or PPP for
leased lines or Frame Relay for Frame Relay connections, and configure the router
to use the correct data link layer protocol. Use the interface serial interface-
identifier command when configuring these types of interfaces.
Routers use interface identifiers to distinguish between interfaces of the same type.
Depending on the model of the router, the interface-identifier may be:

• An interface number, for example, interface ethernet 1


• A slot/interface number, for example, interface fastethernet 0/1
• A module/slot/interface number, for example, interface serial 1/0/1
Note: It is appropriate to mention the loopback interface here. A loopback interface is a
virtual interface that resides on a router. It is not connected to any other device. Loopback
interfaces are very useful because they will never go "down" unless the entire router goes
down or the interface is manually disabled. This helps manage routers because there will
always be at least one active interface on the routers—the loopback interface. To create a
loopback interface, all you need to do is enter the configuration mode for the interface.
Optionally, you may add an IPv4 address.

An IPv4 address with a mask of 255.255.255.255 (prefix /32, all bits set to binary 1) is
called the host IPv4 address. The host IPv4 address indicates that only one IPv4 address is
used in the subnet and is often used to address loopback interfaces.
You can configure the loopback address with something less than a /32. The routing table
will see that as a directly connected network, but the interface address will be given a /32.
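A minimal sketch (the interface number and the address are assumptions):

interface Loopback0
 ip address 10.255.255.1 255.255.255.255

Entering the interface Loopback0 command both creates the interface and places you in its configuration mode; the 255.255.255.255 mask marks the address as a host address.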
Note: The router interface characteristics include, but are not limited to, interface
description, the IP address of the interface, the data link encapsulation method, the media
type, the bandwidth, and the clock rate. You can enable many features on a per-interface
basis.
When you first configure an interface, except in the setup mode, you must
administratively enable the interface before the router can use it to transmit and receive
packets. Use the no shutdown command to enable the interface.
You may want to disable an interface to perform hardware maintenance on a specific
interface or a segment of a network. You may also want to disable an interface if a
problem exists on a specific segment of the network, and you must isolate this segment
from the rest of the network. The shutdown command disables or administratively turns
off an interface. To re-enable the interface, use the no shutdown command.
To enable an interface, use the following commands:

To disable an interface, use the following commands:
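A minimal sketch of both operations (the interface identifier GigabitEthernet0/0 is an assumption):

! Enable the interface
Router(config)# interface GigabitEthernet0/0
Router(config-if)# no shutdown
! Administratively disable the interface again
Router(config-if)# shutdown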

10.4 Configuring a Cisco Router


Configuring IPv4 Addresses on Router Interfaces
When you are sending mail via the postal service, you need street addresses to identify
the locations of specific homes and companies so that mail can reach those real-world
locations. In the same way, each interface on a Cisco router must have its own IP address
to uniquely identify it on the network. If no IP address is configured, even if the interface
is in the "up/up" state, the router will not attempt to send and receive IP packets on the
interface. To attain proper operation, for every interface that a router should use for
forwarding IPv4 packets, the router needs an IPv4 address.
The configuration of an IPv4 address on an interface is relatively simple. To configure the
address and mask, simply use the ip address ip-address mask interface subcommand. The
following example shows the configuration of an IPv4 address on the serial interface of a
router.
The specific steps to configure an interface on a Cisco router are as follows:
Note: Although the use of the 172.18.0.0/16 network is technically correct, it represents a
huge waste of IP addresses, because it allows for 65,534 hosts while a serial point-to-point
link has only two. A better IP address/mask would be 172.18.0.1/30 on Router X and
172.18.0.2/30 on Router Y.
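A minimal sketch that follows the recommendation in the note (the serial interface identifier is an assumption; Router Y would be configured in the same way with 172.18.0.2):

RouterX(config)# interface Serial0/0/0
RouterX(config-if)# ip address 172.18.0.1 255.255.255.252
RouterX(config-if)# no shutdown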

10.5 Configuring a Cisco Router


Checking Interface Configuration and Status
When you have completed the router interface configuration, you can verify the
configuration using various show commands.
You can view information about interfaces by using several commands:
• show ip interface brief: A brief list of interfaces and their IPv4 addresses
• show protocols type interface-identifier: Brief details about a particular interface
• show interfaces: Details about all the interfaces (for example, packets flowing in
and out of the interface)
• Optionally, you can include the interface type and slot/interface number on many
commands:
o show interfaces type interface-identifier: Details for a specific interface
Note: When you configure an IP address on an interface, the router automatically
calculates the subnet and installs it in the routing table as a connected route. This will
result in two lines added for that newly configured interface, one with the letter "C" (the
connected route) and another with the letter "L" (the local interface IPv4 address). Note
that "L" entries, which identify the IPv4 address of the interface, have a subnet mask of
/32.
The following examples show sample outputs from the presented commands.
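For instance, the output of show ip interface brief typically resembles the following sketch (the interface names and addresses are hypothetical):

RouterX# show ip interface brief
Interface                  IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0         10.1.1.1        YES manual up                    up
GigabitEthernet0/1         unassigned      YES unset  administratively down down
Serial0/0/0                172.18.0.1      YES manual up                    up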

The following table shows some of the output fields for a Gigabit Ethernet interface and
their meanings in this example.
Note: By truncating the words, you can significantly shorten the commands that refer to
router interfaces. For example, you can use show int Fa0/0 instead of show interfaces
FastEthernet0/0.
Each of the command outputs shown in the previous examples lists two interface status
codes. For a router to use an interface, the two interface status codes on the interface
must be in the up state. The first status code refers to whether the physical layer (Layer 1)
is working, and the second status code mainly (but not always) refers to whether the data
link layer (Layer 2) protocol is working.
Four combinations of settings exist for the status codes when troubleshooting a network.
The following table lists the four combinations and an explanation of the typical reasons
why an interface would be in this state. As you review the list, note that if the hardware
status (the first status code) is not "up," the second will always be "down" because the
data link layer functions cannot work if the physical layer has a problem.

10.6 Configuring a Cisco Router


Exploring Connected Devices
Most network devices, by definition, do not work in isolation. A Cisco device frequently
has other Cisco devices as neighbors on the network. If you can obtain information about
those other devices, it can help you with any network design decisions, troubleshooting,
and completing equipment changes.
If you do not have any documentation about the network topology or if the existing
documentation is not up to date, you may find yourself in a position of needing to
discover the neighboring devices of a router. You can sometimes manually inspect the
physical wiring if the devices are installed next to each other. If you are not local to the
devices or neighboring devices are in other buildings or cities, you must use a different
method.
One possibility is to use a dynamic discovery protocol that gathers information about
directly connected devices. Cisco devices support Cisco Discovery Protocol, which provides
information about connected Cisco devices and their functions and capabilities.
Cisco Discovery Protocol is a Cisco proprietary protocol that discovers basic information
about neighboring Cisco devices without knowing the passwords for the neighboring
devices. To discover information, routers and switches send Cisco Discovery Protocol
messages out each of their interfaces. Devices that support Cisco Discovery Protocol learn
information about other devices by listening to the advertisements of these devices.
From a troubleshooting perspective, you can use Cisco Discovery Protocol to confirm or fix
the information that a network diagram shows or even discover the devices and interfaces
that a network uses. Confirming that the network is cabled to match the network diagram
is a good step before predicting the normal flow of data in a network.
On media that support multicasts at the data link layer, Cisco Discovery Protocol uses
multicast frames; on other media, Cisco Discovery Protocol sends a copy of the Cisco
Discovery Protocol update to any known data link addresses. So, any Cisco Discovery
Protocol-supporting device that shares a physical medium with another Cisco Discovery
Protocol-supporting device can learn about the other device. It is common for network
administrators to disable Cisco Discovery Protocol for security reasons.
Another dynamic discovery protocol is Link Layer Discovery Protocol (LLDP), which is a
standardized, vendor-independent discovery protocol that discovers neighboring devices
from different vendors. The IEEE standardized this protocol as the 802.1AB standard. LLDP
performs functions that are similar to Cisco Discovery Protocol.

Information Obtained with Cisco Discovery Protocol


The figure displays an example of how the Cisco Discovery Protocol exchanges information
with its directly connected neighbors. You can display the results of this information
exchange on a console connected to a network device that is configured to run Cisco
Discovery Protocol on its interfaces.

Information provided by the Cisco Discovery Protocol about each neighboring device:

• Device identifiers: For example, the configured hostname of the device


• Address list: Up to one network layer address for each protocol that is supported
• Port identifier: The identifier of the local port (on the receiving device) and the
connected remote port (on the sending device)
• Capabilities list: Supported features—for example, the device acting as a source-
route bridge and also as a Router
• Platform: The hardware platform of the device—for example, Cisco 4000 Series
Routers
Notice that the upper router in the figure is not connected directly to switch A (the switch
that the administrator is connected to). To obtain Cisco Discovery Protocol information
about this upper router from the switch A console, the administrator could use Telnet or SSH
to connect to a switch that is directly connected to this router.
10.8 Configuring a Cisco Router
Using Cisco Discovery Protocol
You can enable or disable Cisco Discovery Protocol on a router as a whole (global) or on a
port-by-port (interface) basis. You can also view Cisco Discovery Protocol information with
the show cdp command. This command has several keywords that enable access to
different types of information and different levels of detail. The following example shows
the different show cdp options.

The Cisco Discovery Protocol is enabled by default on most interfaces (except for some
legacy interfaces), but you can disable this functionality at the device and interface level.
To prevent other Cisco Discovery Protocol capable devices from accessing information
about a specific device, use the no cdp run global configuration command. To disable
Cisco Discovery Protocol on an interface, use the no cdp enable command. To enable
Cisco Discovery Protocol on an interface, use the cdp enable interface configuration
command.
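A minimal sketch of these commands (the device name and interface are assumptions):

! Disable Cisco Discovery Protocol on the entire device
RouterA(config)# no cdp run
! Re-enable it globally, then disable it on a single interface only
RouterA(config)# cdp run
RouterA(config)# interface GigabitEthernet0/1
RouterA(config-if)# no cdp enable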

The show cdp neighbors command displays information about Cisco Discovery Protocol
neighbors. The following example shows the Cisco Discovery Protocol output for Router A.
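An illustrative sketch of this output (the neighbor names, platforms, and ports are hypothetical):

RouterA# show cdp neighbors
Device ID        Local Intrfce     Holdtme    Capability  Platform   Port ID
SwitchA          Gig 0/0           156          S I       WS-C2960X  Gig 0/1
RouterB          Gig 0/1           132          R         ISR4331    Gig 0/0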
For each Cisco Discovery Protocol neighbor, the following information is displayed:

• Device ID
• Local interface—the interface on this device that is connected to the neighbor
• Holdtime value, in seconds
• Device capability code
• Hardware platform
• Port ID—the interface on the neighboring device that is connected to this device
The hold time value indicates how long (in seconds) the receiving device should hold the
Cisco Discovery Protocol information before discarding it.
Cisco Discovery Protocol information is sent periodically; the hold time counts down, and
if it reaches zero, the information is discarded.
The format of the show cdp neighbors output varies among different types of devices, but
the available information is generally consistent across devices.
You can use the show cdp neighbors command on a Cisco Catalyst switch to display the
Cisco Discovery Protocol updates that the switch receives on the local interfaces. Note
that on a switch, the local interface is referred to as the local port.
If you add the detail argument to the show cdp neighbors command, the resulting output
includes additional information, such as the network layer addresses of neighboring
devices. The output from the show cdp neighbors detail command is identical to the one
that the show cdp entry * command produces.
Note: Cisco Discovery Protocol is limited to gathering information about the directly
connected Cisco neighbors. Other tools, such as Telnet and SSH, are available for
gathering information about remote devices that are not directly connected.

10.9 Configuring a Cisco Router


Configure and Verify LLDP
To permit the discovery of non-Cisco devices, Cisco devices also support LLDP, a vendor-
neutral device discovery protocol defined by IEEE 802.1AB standard. LLDP allows network
devices to advertise information about themselves to other devices on the network. This
protocol runs over the data link layer, which allows two systems running different
network-layer protocols to learn about each other.
LLDP is a protocol that transmits information about the capabilities and the status of a
device and its interfaces. LLDP devices use the protocol to solicit information only from
other LLDP devices.
LLDP supports a set of attributes that it uses to discover other devices. These attributes
contain type, length, value (TLV) descriptions. TLVs are blocks of information embedded in
LLDP advertisements, giving details about optional information elements such as IP
address, Device ID, and Platform. LLDP devices can use TLVs to send and receive
information to other devices on the network. Using this protocol, devices can advertise
details such as configuration information, device capabilities, and device identity.
Some of the TLVs that are advertised by LLDP:

• Management address: the IP address used to access the device for management
(configuring and verifying the device)
• System capabilities: different hardware and software specifications of the device
• System name: the hostname that was configured on that device
LLDP has these configuration guidelines and limitations:

• Must be enabled on the device before you can enable or disable it on any interface
• Is supported only on physical interfaces
• Can discover up to one device per port
• Can discover Linux servers
To enable or disable LLDP globally, use the following command:

To enable or disable LLDP on an interface, use the following commands:

To display information about neighbors, use the following command:
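A minimal sketch of these commands (the device name and interface are assumptions):

! Enable LLDP globally; use "no lldp run" to disable it
R1(config)# lldp run
! Control LLDP per interface
R1(config)# interface Ethernet0/1
R1(config-if)# lldp transmit
R1(config-if)# lldp receive
! Display discovered neighbors
R1# show lldp neighbors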

After you globally enable LLDP, it is enabled for transmit and receive on all supported
interfaces by default. The lldp transmit command enables the transmission of LLDP
packets on an interface. The lldp receive command enables the reception of LLDP packets
on an interface.
The show lldp neighbors command displays information about neighbors, including device
ID, interface type and number, hold time settings, capabilities, and port ID.
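An illustrative sketch of such output on R1 (the hold-time values and the remote port IDs are assumptions):

R1# show lldp neighbors
Device ID           Local Intf     Hold-time  Capability      Port ID
DSW1                Et0/1          120        R               Et0/1
DSW2                Et0/2          120        R               Et0/1

Total entries displayed: 2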

The output in the example tells you that router R1 has two neighbors, DSW1 and DSW2.
Both devices have routing functionality. The interfaces to reach them through are
Ethernet0/1 and Ethernet0/2. This output contains information only about the neighbors
that support LLDP and have it configured to exchange information.
11.1 Exploring the Packet Delivery Process
Introduction
Any device connected to either an Enterprise Campus, Branch, or home network uses an
IP address that identifies the device on the network and a subnet mask that describes
which portion of the address refers to the network ID and which part refers to the host ID.
Having this information, a device is "smart" enough to know if the devices it wants to
communicate with can be reached directly, which means they are on the same network.
In this case, the device can rely on a switch to deliver the frames to the receiver. But the
sender might still not know the physical address of such device. Therefore, a protocol that
can map IP addresses to the physical addresses of a receiver is required.
If the two hosts are on different subnets, then the sending host must send the data to its
default gateway, which will forward the data to the destination. The default gateway, a
router, allows devices on one subnet to communicate with other subnets.
Host-to-host packet delivery, either within the same network or between different networks,
involves a variety of processes. As a networking engineer, you need to feel confident about
them. This knowledge is especially important when troubleshooting, where understanding the
different components is crucial in diagnosing issues in packet delivery.

The packet delivery process includes:

• The role of the Layer 2 address and Layer 3 address.
• The role of Address Resolution Protocol (ARP).
11.2 Exploring the Packet Delivery Process
Layer 2 Addressing
Here you will observe where Open Systems Interconnection (OSI) Layer 2 (corresponding
to the TCP/IP link layer) addresses fit into the host-to-host packet delivery process.
Layer 2 Ethernet LANs have the following characteristics:

• They use MAC addresses.


• They identify end devices in the LAN.
• They enable the packet to be carried by the local media across each segment.

Layer 2 defines how data is formatted for transmission and how access to the physical
media is controlled. Layer 2 devices provide an interface with the physical media. Some
common examples are network interface cards (NICs) installed in a host.
Device-to-device communications require Layer 2 addresses, also known as physical
addresses. For example, Ethernet physical addresses or MAC addresses are embedded in
Ethernet NIC in end devices, such as hosts.
Although MAC addresses are unique, physical addresses are not hierarchical. They are
associated with a particular device, regardless of its location or connected network. These
Layer 2 addresses have no meaning outside the local network media. They are used to
locate the end devices in the local physical network on the data link layer.
An Ethernet MAC address is a two-part, 48-bit binary value that is expressed as 12
hexadecimal digits. The address formats might appear like 00-05-9A-3C-78-00,
00:05:9A:3C:78:00, or 0005.9A3C.7800.
All devices that are connected to an Ethernet LAN have MAC-addressed interfaces. The
NIC uses the MAC address in received frames to determine if a message should be passed
to the upper layers for processing. The MAC address is permanently encoded into a ROM
chip on a NIC. The MAC address is made up of the Organizationally Unique Identifier (OUI)
and the vendor assignment number.
Switches also have MAC addresses, but a device only sends a frame to these addresses
when communicating with the switch, for example, for management. Otherwise, frames
are addressed for other devices, and the switch forwards the frames to those devices.

The figure shows the Layer 2 (L2) addresses on two PCs and a router. Note that the router
has different MAC addresses on each interface.

11.3 Exploring the Packet Delivery Process


Layer 3 Addressing
You will now examine where OSI Layer 3 (corresponding to the TCP/IP Internet layer)
devices and addressing fit into the host-to-host communications model.

Layer 3 provides connectivity and path selection between two host systems located on
geographically separated networks. At the boundary of each local network, an
intermediary network device, usually a router, de-encapsulates the frame to read the
destination address contained in the packet's header (the Layer 3 protocol data unit
[PDU]). Routers use the network identifier portion of this address to determine which
path to use to reach the destination host. Once the path is determined, the router
encapsulates the packet in a new frame and sends it toward the destination end device.
Layer 3 addresses must include identifiers that enable intermediary network devices to
locate the networks that different hosts belong to. In the TCP/IP protocol suite, every IP
host address contains information about the network where the host is located.
Intermediary devices that connect networks are routers. The role of the router is to select
paths and direct packets toward a destination. This process is known as routing. A router
uses a list of paths located in a routing table to determine where to send data.
Layer 3 addresses are assigned to end devices such as hosts and network devices that
provide Layer 3 functions. The router has its own Layer 3 address on each interface. Each
network device that provides a Layer 3 function maintains a routing table.
As seen in the example, the two router interfaces belong to different networks. The left
interface and the directly connected PC belong to the 192.168.3.0/24 network, while the
right interface and the directly connected PC belong to the 192.168.4.0/24 network. For
devices in different IP networks, a Layer 3 device is needed to route traffic between them.

11.4 Exploring the Packet Delivery Process


Default Gateways
A source host can communicate directly (without a router) with a destination host only if
the two hosts are on the same subnet. If the two hosts are on different subnets, the
sending host must send the data to its default gateway, which will forward the data to the
destination. The default gateway is an address on a router (or Layer 3 switch) connected
to the same subnet that the source host is on.
Therefore, before a host can send a packet to its destination, it must first determine if the
destination address is on its local subnet or not. It uses the subnet mask in this
determination. The subnet mask describes which portion of an IPv4 address refers to the
network or subnet and which part refers to the host.
The source host first does an AND operation between its own IPv4 address and subnet
mask to arrive at its local subnet address. To determine if the destination address is on the
same subnet, the source host then does an AND operation between the destination IPv4
address and the source's subnet mask. This is because it doesn't know the subnet mask of
the destination address, and if the devices are on the same subnet, they must have the
same mask. If the resulting subnet address is the same, it knows the source and
destination are on the same subnet. Otherwise, they are on different subnets.
For example, IPv4 host 10.10.1.241/24 is on the 10.10.1.0/24 subnet. If the host that it
wants to communicate with is 10.10.1.175, it knows that this IPv4 host is also on the local
10.10.1.0/24 subnet.
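For illustration, the AND operation for this example works out as follows:

   10.10.1.241   = 00001010.00001010.00000001.11110001
   255.255.255.0 = 11111111.11111111.11111111.00000000
   AND result    = 00001010.00001010.00000001.00000000 = 10.10.1.0

ANDing 10.10.1.175 with 255.255.255.0 also yields 10.10.1.0, so the source concludes that both hosts are on the same subnet and that no default gateway is needed.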
If the source and destination devices are on the same subnet, then the source can deliver
the packet directly. If they are on different subnets, then the packet must be forwarded to
the default gateway, which will forward it to its destination. The default gateway address
must have the same network and subnet portion as the local host address; in other words,
the default gateway must be on the same subnet as the local host.
Hosts are configured with the address of their default gateway. On a Windows computer, the Internet Protocol (TCP/IP) Properties dialog is used to enter the default gateway IP address if you need to set the network parameters manually. These parameters may also
be learned automatically.

11.5 Exploring the Packet Delivery Process


Address Resolution Protocol
When a device sends a packet to a destination, it encapsulates the packet into a frame.
The packet contains IPv4 addresses, and the frame contains MAC addresses. Therefore,
there must be a way to map an IPv4 address to a MAC address. For example, if you enter
the ping 10.1.1.3 command, the MAC address of 10.1.1.3 must be included in the
destination MAC address field of the frame that is sent. To determine the MAC address of
the device with an IPv4 address 10.1.1.3, a process is performed by a Layer 2 protocol
called ARP.
ARP provides two essential services:

• Address resolution: Mapping IPv4 addresses to MAC addresses on a network


• Caching: Locally storing MAC addresses that are learned via ARP
The term address resolution in ARP refers to the process of binding or mapping the IPv4
address of a remote device to its MAC address. ARP sends a broadcast message to all
devices on the local network. This message includes its own IPv4 address and the
destination IPv4 address. The message asks the device on which the destination IPv4
address resides to respond with its MAC address. The address resolution procedure is
completed when the originator receives the reply frame, which contains the required MAC
address, and updates its table containing all the current bindings.
Note: The Layer 2 broadcast address is FF:FF:FF:FF:FF:FF.
Using ARP to Resolve the MAC of a Local IPv4 Address
Because ARP is a Layer 2 protocol, its scope is limited to the local LAN. If the source and
destination devices are on the same subnet, then the source can use ARP to determine
the destination’s MAC address.
For example, IPv4 host 10.10.1.241/24 is on the 10.10.1.0/24 subnet. If the host that it
wants to communicate with is 10.10.1.175, it knows that this IPv4 host is also on the local
10.10.1.0/24 subnet, and it can use ARP to determine its MAC address directly.

The following output shows the Wireshark analysis of the ARP messages. In the first
example, you can see an ARP request sent as a broadcast to find out the MAC address of
IPv4 host 10.10.1.175. In the second ARP message, you can see the ARP reply including the
MAC address of the host, which is 00:bc:22:a8:e0:a0.
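The capture itself is not reproduced in this text; in Wireshark, the Info column for this exchange would typically read along these lines:

   ARP request (broadcast):  Who has 10.10.1.175?  Tell 10.10.1.241
   ARP reply (unicast):      10.10.1.175 is at 00:bc:22:a8:e0:a0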

Using ARP to Resolve the MAC of a Remote IPv4 Address


If the source and destination devices are not on the same subnet, then the source uses
ARP to determine the default gateway’s MAC address.
For example, when the source host 10.10.1.241 wants to communicate with the
destination host 10.10.2.55, it compares this IPv4 address against its subnet mask and
discovers that the host is on a different IPv4 subnet (10.10.2.0/24). When a host wants to
send data to a device that is on another network or subnet, it encapsulates the packet in a
frame addressed to its default gateway. So, the destination MAC address in the frame
needs to be the MAC address of the default gateway. In this situation, the source must
send an ARP request to find the MAC address of the default gateway. In the example, host
10.10.1.241 sends a broadcast with an ARP Request for the MAC address of 10.10.1.1.
The following output shows the Wireshark analysis of ARP messages. In the first example,
you can see an ARP request sent as a broadcast to find out the MAC address of IPv4 host
10.10.1.1. In the second ARP message, you can see the ARP reply showing that the MAC
address of the default gateway is 00:25:b5:9c:34:27.

Understanding the ARP Cache


Each IPv4 device on a network segment maintains a table in memory—the ARP table or
ARP cache. The purpose of this table is to cache recent IPv4 addresses to MAC address
bindings. When a host wants to transmit data to another host on the same subnet, it
searches the ARP table to see if there is an entry. If there is an entry, the host uses it. If
there is no entry, the IPv4 host sends an ARP broadcast requesting resolution.
Note: By caching recent bindings, ARP broadcasts can be avoided for any mappings in the
cache. Without the ARP cache, each IPv4 host would have to send an ARP broadcast each
time it wanted to communicate with another IPv4 host.
Each entry, or row, of the ARP table, has a pair of values—an IPv4 address and a MAC
address. The relationship between the two values is a map, which simply means that you
can locate an IPv4 address in the table and discover the corresponding MAC address. The
ARP table caches the mapping for the devices on the local LAN, including the default
gateway.
The device creates and maintains the ARP table dynamically, adding and changing address
relationships as they are used on the local host. The entries in an ARP table expire after a
while; the default expiry time for Cisco devices is 4 hours. Other operating systems
(Windows, macOS) might have a different value; Windows uses a random value between
15 and 45 seconds. This timeout ensures that the table does not contain information for
systems that may be switched off or moved. When the local host wants to transmit data
again, the entry in the ARP table is regenerated through the ARP process.
If no device responds to the ARP request, then the original packet is dropped because a
frame to put the packet in cannot be created without the destination MAC address.
On a Microsoft Windows PC, the arp -a command displays the current ARP table for all
interfaces on the PC.
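A typical excerpt looks similar to the following; the interface index is illustrative, and the addresses reuse the values from the ARP example above:

   C:\> arp -a

   Interface: 10.10.1.241 --- 0xb
     Internet Address      Physical Address      Type
     10.10.1.1             00-25-b5-9c-34-27     dynamic
     10.10.1.175           00-bc-22-a8-e0-a0     dynamic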

To limit the output of the arp command to a single interface, use the arp -a -N ip_address
command.
To display the ARP table on a Cisco IOS router, use the show ip arp or show arp EXEC
command; the output is the same.

The proper syntax to display the ARP table is show ip arp [ip-address] [host-name] [mac-
address] [interface type number].
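An illustrative show ip arp output, with example addresses and interface names, looks like this; an Age of "-" marks the router's own interface address:

   Router# show ip arp
   Protocol  Address          Age (min)  Hardware Addr   Type   Interface
   Internet  192.168.3.1             12  0800.0222.2222  ARPA   GigabitEthernet0/0
   Internet  192.168.3.2              -  0800.0333.2222  ARPA   GigabitEthernet0/0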
11.7 Exploring the Packet Delivery Process
Host-To-Host Packet Delivery
Host-to-host packet delivery consists of an interesting series of processes. In this multipart example, you will discover what happens "behind the scenes" when an IPv4 host communicates with another IPv4 host: first, when a router forwards the traffic, and second, when a switch takes part in the host-to-host packet delivery process.
Host-To-Host Packet Delivery (Step 1 of 14)

In this example, the host 192.168.3.1 needs to send arbitrary application data to the host
192.168.4.2, located on another subnet. The application does not need a reliable
connection, so it uses UDP. Because it is unnecessary to set up a session, the application can start sending data immediately; the UDP port numbers identify the conversation and allow the segment to be delivered to the right application.
Host-To-Host Packet Delivery (Step 2 of 14)
UDP prepends a UDP header (UDP HDR) and passes the segment to the IPv4 layer (Layer
3) with an instruction to send the segment to 192.168.4.2. IPv4 encapsulates the segment
in a Layer 3 packet, setting the source address (SRC IP) of the packet to 192.168.3.1, while
the destination address (DST IP) is set to 192.168.4.2.
Host-To-Host Packet Delivery (Step 3 of 14)

When Host A analyzes the destination address, it finds that the destination address is on a
different network. The host forwards any packet that is not destined for the local IPv4
network in a frame addressed to the default gateway. The default gateway is the address
of the local router, which must be configured on hosts (PCs, servers, and so on). IPv4
passes the Layer 3 packet to Layer 2 with instructions to forward it to the default gateway.
Host A must place the packet in its “parking lot” (on hold) until it has the MAC address of
the default gateway.
Host-to-Host Packet Delivery (Step 4 of 14)
To deliver the packet, the host needs the Layer 2 information of the next-hop device. The
ARP table in the host does not have an entry and must resolve the Layer 2 address (MAC
address) of the default gateway. The default gateway is the next hop for the packet. The
packet waits while the host resolves the Layer 2 information.
Host-To-Host Packet Delivery (Step 5 of 14)

Because the host does not know the default gateway’s Layer 2 address, the host uses the
standard ARP process to obtain the mapping. The host sends a broadcast ARP request
looking for the MAC address of its default gateway.
Host-To-Host Packet Delivery (Step 6 of 14)
The host has previously been configured with 192.168.3.2 as the default gateway. The
host 192.168.3.1 sends out the ARP request, and the router receives it. The ARP request
contains information about Host A. Notice that the first thing the router does is add this
information to its own ARP table.
Host-To-Host Packet Delivery (Step 7 of 14)

The router processes the ARP request like any other host would and sends the ARP reply
with its own information directly to the host's MAC address.
Host-to-Host Packet Delivery (Step 8 of 14)
The host receives an ARP reply to its ARP request and enters the information in its local
ARP table.
Host-To-Host Packet Delivery (Step 9 of 14)

Now, the Layer 2 frame with the application data can be sent to the default gateway. The
pending frame is sent with the local host IPv4 address and MAC address as the source.
The destination IPv4 address is that of the remote host, but the destination MAC address is that of the default gateway.
Host-To-Host Packet Delivery (Step 10 of 14)
When the router receives the frame, it recognizes its MAC address and processes the
frame. At Layer 3, the router sees that the destination IPv4 address is not its own address. A host would discard the packet at this point. However, because this device is a router, it
passes all IPv4 packets that are not for the router itself to the routing process. The routing
process determines where to send the packet.
Host-To-Host Packet Delivery (Step 11 of 14)

The routing process checks for the longest prefix match of the destination IPv4 address in
its routing table. In this example, the destination network is directly connected. Therefore,
the routing process can pass the packet directly to Layer 2 for the appropriate interface.
Host-To-Host Packet Delivery (Step 12 of 14)
Assuming that the router does not have the mapping to 192.168.4.2, Layer 2 uses the ARP
process to obtain the mapping for the IPv4 address and the MAC address. The router asks
for the Layer 2 information in the same way as the hosts. An ARP request for the
destination MAC address is sent to the link.
The destination host receives and processes the ARP request.
Host-To-Host Packet Delivery (Step 13 of 14)

The destination host receives the frame that contains the ARP request and passes the
request to the ARP process. The ARP process takes the information about the router from
the ARP request and places the information in its local ARP table. The ARP process
generates the ARP reply and sends it back to the router.
The router receives the ARP reply, populates its local ARP table, and starts the packet-
forwarding process.
Host-To-Host Packet Delivery (Step 14 of 14)
The frame is forwarded to the destination. Note that the router changes the Layer 2 addresses in frames as needed, but it does not change the Layer 3 addresses in packets.
Role of a Switch in Packet Delivery (Step 1 of 4)
Typically, your network will have switches between hosts and routers. In this multipart
example, you will see what happens on a switch when a host communicates with a router.
Remember that a switch does not change the frame in any way. When a switch receives
the frame, it forwards it out of the proper port according to the MAC address table.
An application on Host A wishes to send data to a remote network. Before an IP packet can be forwarded to the default gateway, the gateway's MAC address must be obtained. ARP on Host A creates an ARP request and sends it out as a broadcast frame. Before the ARP request
reaches other devices on a network, the switch receives it.

When the switch receives the frame, it needs to forward it out the proper port. In this example, the source MAC address is not yet in the MAC address table of the
switch. The switch can learn the port mapping for the source host from the source MAC
address in the frame, so the switch adds the information to the table (0800:0222:2222 =
port FastEthernet0/1).
Role of a Switch in Packet Delivery (Step 2 of 4)
Because the destination address of the frame is a broadcast, the switch has to flood the
frame out to all the ports, except the one it came in.

Role of a Switch in Packet Delivery (Step 3 of 4)


The router replies to the ARP request and sends an ARP reply packet back to the sender as
a unicast frame.
The switch learns the port mapping for the router’s MAC address from the source MAC
address in the frame. The switch adds it to the MAC address table (0800:0333:2222 = port
FastEthernet0/3)

Role of a Switch in Packet Delivery (Step 4 of 4)


The destination address of the frame (Host A) is found in the MAC address table so that
the switch can forward the frame out on port FastEthernet0/1. If the destination address
were not found in the MAC address table, the switch would need to flood out the frame
on all ports, except the one it came in.

All frames pass through the switch unchanged. The switch builds its MAC address table
based on the source address of received frames, and it sends all unicast frames directly to
the destination host based on the destination MAC address and port that are stored in the
MAC address table.

12.1 Troubleshooting a Simple Network


Introduction
Smooth operation and high availability of the network are crucial to organizations.
Unplanned downtime can quickly lead to loss of productivity and, therefore, financial loss.
Recent studies have shown that 75% of total operating expenses are due to monitoring
and troubleshooting tasks.
Remember that most issues affecting a network are encountered during the original
implementation. If a network is correctly installed, it should continue to operate without
problems. However, this circumstance is only true in theory. Cabling becomes damaged,
configurations change, and new devices are connected, which may require configuration
changes. Ongoing maintenance is necessary. Therefore, diagnosing and resolving
problems is an essential skill that network engineers use as a part of their many different
job tasks.
There are no specific recipes for troubleshooting. A particular problem can be diagnosed
and sometimes even solved in many different ways. However, by employing a structured
approach to the troubleshooting process, you can greatly reduce the average amount of
time to diagnose and solve a problem.
Troubleshooting can be a very time-consuming process. By using the tools built into the
Cisco IOS Software and on different end-device operating systems, you can shorten the
time to diagnose and resolve problems. There are many technologies and protocols that
you can leverage in combination with specialized tools and applications to support
troubleshooting and maintenance processes.

12.2 Troubleshooting a Simple Network


Troubleshooting Methods
A troubleshooting method is a guiding principle that determines how you move through
the phases of the troubleshooting process.
All troubleshooting processes include the elements of gathering and analyzing
information, eliminating possible causes, and formulating and testing hypotheses.
However, the time one spends on each of these phases and how one moves from phase to
phase can be significantly different from person to person. It is a key differentiator
between novice and expert troubleshooters.
In a typical troubleshooting process, for a complex problem, you would continually move
between the different processes: gather some information, analyze it, eliminate some
possibilities, gather more information, analyze again, formulate a hypothesis, test it, reject
it, eliminate some more possibilities, gather more information, and so on.
If you do not use a structured approach but move between the phases randomly, you
might eventually find the solution, but the process will be very inefficient. In addition, if
your approach has no structure, it is practically impossible to hand it over to someone else
without losing all progress that was made up to this point. You may also need to stop and
resume your troubleshooting process.
A structured approach to troubleshooting (no matter what the exact method is) will yield
more predictable results in the end and will make it easier to pick up the process where
you left off in a later stage or to hand it over to someone else.
Quickly formulating a first hypothesis that is based on common problem causes and
corresponding solutions can be very effective in the short run.
A troubleshooting method commonly deployed by experienced and inexperienced
troubleshooters is the "shoot-from-the-hip" method. After a very short period of
gathering information, the troubleshooter quickly changes to see if it solves the problem.
This action might seem like random troubleshooting, but usually, the guiding principle for
this method is knowing common symptoms and their corresponding causes.
Look at the following example: A user reports a LAN performance problem to you. In 90
percent of similar problems in the past in this environment, the problem was caused by a
duplex mismatch, and the solution was to configure the switch port for 100 Mbps full
duplex. An obvious thing to do is to quickly verify the duplex setting of the switch port to
which the user connects and, if not correct, to change it to 100-Mbps full duplex to see if
this action fixes the problem.
This method can be very effective when it works because very little time is spent on
gathering data, analyzing, and eliminating possible causes. However, the downside is that
if it does not work, you have not come any closer to a possible solution.
Experienced troubleshooters can use this method effectively, and it can also be a useful
tool for inexperienced troubleshooters. However, the main factor in using this method
effectively is knowing when to stop and then switch to a more methodical approach.
A structured troubleshooting method is a guideline that helps you move through the
different phases of the troubleshooting process. The key to all structured troubleshooting
methods is the elimination of the causes of the issue.
By systematically eliminating possible problem causes, you can reduce the scope of the
problem until you manage to isolate and solve the problem. If it turns out that you lack
the knowledge or experience to solve the problem yourself, you can hand it over as a
better-defined problem. So, even if you do not manage to solve the problem, you will
increase the chances that someone else can find the cause of the problem and resolve it
quickly and efficiently.
Several different, structured troubleshooting approaches exist, and the approach to use
may be chosen depending on the problem.
The following troubleshooting methods are the most common:

• Top-down method: Work from the application layer in the Open Systems
Interconnection (OSI) model down to the physical layer. The top-down method
uses the OSI model as a guiding principle. One of the most important
characteristics of the OSI model is that each layer depends on the underlying layers
for its operation. This structure implies that if you find a layer to be operational,
you can safely assume that all underlying layers are fully operational. For example,
suppose you are researching a user who cannot browse a particular website and
find that you can establish a TCP connection on port 80 from this host to the server
and get a response from the server. In that case, you can typically conclude that
the transport layer and all layers below must be fully functional between the client
and the server. It is most likely a client or server problem and not a network
problem. Be aware that, in the example above, it is reasonable to conclude that
Layers 1 through 4 must be fully operational, but this idea is not definitively
proved. For example, unfragmented packets might be routed correctly, while
fragmented packets are dropped. The TCP connection to port 80 might not
uncover such a problem. Therefore, the goal of this method is to find the highest
OSI layer that is still working. All devices and processes that work on that layer or
the layers below it are then eliminated from the scope of your problem. It might
be clear that this method is most effective if the problem is on one of the higher
OSI layers. The top-down method is one of the most straightforward
troubleshooting methods because problems reported by users are typically
defined as application layer problems, so starting the troubleshooting process at
that layer is the obvious thing to do. A drawback or impediment to this method is
that you need to access the application layer software on the machine of the client
to initiate the troubleshooting process. If the software is installed on only a few machines, it might be hard to test it properly.
• Bottom-up method: Work from the physical layer in the OSI model up to the
application layer. The bottom-up approach also uses the OSI model as the guiding
principle, but this time you start on the physical layer and work your way up to the
application layer. By verifying layer by layer that the network is operating correctly,
you steadily eliminate more potential problem causes and narrow the scope of the
potential problems. For example, if you are researching a user who cannot browse
a particular website, you would first verify physical connectivity. You would log in
to the switch and verify the port status. After each test or verification step, you
would move up through the layers of the OSI model. A benefit of this method is
that all the initial troubleshooting takes place on the network, so access to clients,
servers, or applications is not necessary until later in the troubleshooting process.
Also, the thoroughness and steady progress of this method will give you a
relatively high probability of eventual success or, at the very least, a decent
reduction of the problem scope. A disadvantage of this method is that, in large
networks, it can be a very time-consuming process because a lot of effort will be
spent on gathering and analyzing data. Therefore, the best use of this method is to
first reduce the problem scope by using a different strategy and then switching to
this method for clearly bounded parts of the network topology.
• Divide-and-conquer method: Start in the middle of the OSI layers (usually the
network layer) and then go up or down, depending on the results. If it is not clear
whether the top-down or the bottom-up approach would be most effective, it can
be helpful to start in the middle (typically the network layer) and run an end-to-
end test, such as a ping. If the ping succeeds, you can assume that all lower layers
are good, and you can start bottom-up troubleshooting from the network layer.
Alternatively, you can start a top-down troubleshooting process from the network
layer if the test fails. Whether the result of the initial test is positive or negative,
this method usually results in faster elimination of potential problems than what
you would achieve by implementing a full top-down or bottom-up approach,
making the divide-and-conquer method a very effective strategy.
• Follow-the-path method: Determine the path that packets follow through the
network from the source to the destination and track the packets along the path.
Tracing the path of packets through the network eliminates irrelevant links and
devices from the troubleshooting process. The objective of a troubleshooting
method is to isolate the problem by eliminating potential problem areas from the
scope of the troubleshooting process. By analyzing and verifying the path that
packets and frames take through the network as they travel from the source to the
destination, you can reduce the scope of your troubleshooting to just those links
and devices that are actually in the forwarding path.
• Swap components method: Move components physically and observe if the
problem moves with the components or not. A common way to isolate the
problem is to start swapping components such as cables, switches, switch ports, or network interface cards (NICs) on the PC to confirm that the problem moves with
the specific component. This method allows you to isolate the problem, even if the
information you can gather is minimal, just by methodically executing simple tests.
Even if you do not solve the problem, you have scoped it to a single element, and
further troubleshooting can now be focused on that element. The drawbacks of
this method are as follows:
o You are isolating the problem to only a limited set of physical elements.
You cannot gain any real insight into what is happening because you are
gathering only very limited, indirect information.
o This method assumes that the problem is with a single component. If the
problem is with a particular combination of elements, you might not isolate
the problem correctly. Be sure to document everything that you change.

• Perform comparison method: Compare devices or processes of the network that


are operating correctly to devices or processes that are not operating as expected.
Gather clues by spotting significant differences. By comparing configurations,
software versions, hardware or other device properties, links, or processes
between working and nonworking situations and then seeing differences between
them, you might be able to resolve the problem by changing the nonoperational
situation to be consistent with the working situation. The biggest disadvantage of
this method is that it can lead to a working situation but not an understanding of
the root cause of the problem. Sometimes, you cannot even be sure if you have
implemented a real solution or only a workaround. Here is an example. You are
troubleshooting a connectivity problem with a branch office router. You have
managed to narrow down the problem to some issue with the default routing, but
you cannot seem to find the cause. You notice that this router is an older type
phased out in most of the other branch offices. You have one of the newer types of
routers in the trunk of your car because you plan to install that in another branch
office next week. You decide to copy the configuration of the existing branch
router to the newer router and replace it. Now everything starts to work as
expected. So what do you do? Do you consider the problem fixed? What was the
root cause? What should you do with the old and new routers now? As you can
see, this method has several drawbacks, but it is still a useful technique because
you can use it even when you lack the background to troubleshoot based on
knowledge of the technology. The effectiveness of this method depends on how
easy it is to compare the working and the nonworking devices, situations, or
processes. Having a good baseline of what constitutes normal behavior on the
network makes it easier to notice abnormal behavior. Also, consistent
configuration templates make it easier to see the significant differences between
functioning and malfunctioning devices. Therefore, the effectiveness of this
method depends on the quality of the overall network maintenance process.

12.3 Troubleshooting a Simple Network


Troubleshooting Tools
Network administrators spend a lot of time troubleshooting the network. The tools that are used for troubleshooting can generate outputs with a lot of information. One challenge in troubleshooting is knowing what to look for in a command output, because you want to check only the specific information relevant to the case. You can focus on that information with Cisco IOS troubleshooting tools and, if appropriate for your network, Microsoft Windows tools.
Logging
During operation, network devices generate messages about different events. These
messages are sent to an operating system process. This process is responsible for sending
these messages to various destinations, as directed by the device configuration. Logging
messages are also sent to the console by default. Even if the global logging process is
disabled, logging messages are nevertheless sent to the console. You can decide about the
severity level of the logged messages and their destination.
You can verify logging settings on networking devices by using a show logging command.
From the output of the command, you can chronologically see the events that have
triggered logging messages.
The logging messages may be sent to the console, the monitor, and the memory buffer,
which has a size of 4096 bytes. There are eight levels of severity of logging messages.
Levels are numbered from 0 to 7, from most severe to debugging messages: emergency,
alert, critical, error, warning, notification, informational, and debugging. Time stamps
show the time when each event occurred. By default, system logging is on, and the default
severity level is debugging, which means that all messages are logged.
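The referenced output is not reproduced in this text; a show logging excerpt consistent with that description would look roughly like the following (counter values are illustrative):

   Switch# show logging
   Syslog logging: enabled (0 messages dropped, 0 messages rate-limited, 0 flushes, 0 overruns, xml disabled, filtering disabled)
       Console logging: level debugging, 12 messages logged, xml disabled, filtering disabled
       Monitor logging: disabled
       Buffer logging: level debugging, 12 messages logged, xml disabled, filtering disabled
   ...
   Log Buffer (4096 bytes):
   *Mar  1 00:00:12.345: %SYS-5-RESTART: System restarted --
   *Mar  1 00:00:20.123: %LINK-3-UPDOWN: Interface FastEthernet0/1, changed state to up
   *Mar  1 00:00:21.456: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1, changed state to up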
In the output, you can see that the system was restarted once. After the restart, the
interfaces and the line protocols changed the state to "up." This message was logged as a
notification message—level 5.
Cisco IOS doesn't send log messages to a terminal session over IP (Telnet or Secure Shell
protocol [SSH] connections) by default. In the output, this is shown by the monitor logging setting, which is set to off (disabled). If you need to enable logging to terminal
sessions, you need to use the terminal monitor command. After using the terminal
monitor command, monitor logging enablement can be verified by a show logging
command:

Logging to the monitor (all tty lines) shows "disabled" or, if enabled, the severity level
limit, number of messages logged, and whether XML formatting or filtering is enabled.
Internet Control Message Protocol
Internet Control Message Protocol (ICMP) is a supporting protocol in the TCP/IP protocol
suite. It is used by network devices, including routers, to send error messages and
operational information indicating, for example, that a requested service is not available
or that a host or router could not be reached. ICMP differs from transport protocols such
as TCP and UDP. It is not typically used to exchange data between systems, nor is it
regularly employed by end user network applications (except for some diagnostic tools,
such as ping and traceroute).
ICMP messages are typically used for diagnostic or control purposes or generated in
response to errors in IP operations. ICMP errors are directed to the source IP address of
the originating packet. For example, every device (such as an intermediate router)
forwarding an IPv4 datagram first decrements the Time to Live (TTL) field in the IPv4
header by one. If the resulting TTL is 0, the packet is discarded, and an ICMP time
exceeded in transit message is sent to the packet's source address.
Many commonly used network utilities are based on ICMP messages. The traceroute
command (or tracert Microsoft Windows command) can be implemented by transmitting
packets with specially set IPv4 TTL header fields and looking for ICMP time exceeded in
transit and Destination unreachable messages generated in response. The related ping
utility is implemented using the ICMP echo request and echo reply messages.
ICMP uses the basic support of IP as if it were a higher-level protocol; however, ICMP is
integral to IP. Although ICMP messages are contained within standard IP packets, ICMP
messages are usually processed as a special case, distinguished from normal IP processing.
Often, it is necessary to inspect the contents of the ICMP message and deliver the
appropriate error message to the application responsible for the transmission of the IP
packet that prompted the sending of the ICMP message.
ICMP is a network layer protocol. There is no TCP or UDP port number associated with
ICMP packets as these numbers are associated with the transport layer above.
Verification of End-To-End IPv4 Connectivity
The following are several verification tools to verify end-to-end IPv4 connectivity:

• ping: A successful ping to an IPv4 address means that the endpoints have basic
IPv4 connectivity between them.
• traceroute (or Microsoft Windows tracert): The results of traceroute to an IPv4
address can help you determine how far along the path data can successfully
reach.
• Telnet or SSH: Used to test the transport layer connectivity for any TCP port over
IPv4.
• show ip arp or show arp (or Microsoft Windows arp -a): Used to display the
mapping of IPv4 addresses to MAC addresses to verify connected devices.
• show ip interface brief (or Microsoft Windows ipconfig /all): Used to display the
IPv4 address configuration of the interfaces.
Using ping
The ping command is a very common method for troubleshooting the accessibility of
devices. It uses a series of ICMP Echo messages to determine these parameters:

• Whether a remote host is active or inactive


• The round-trip time (RTT) in communicating with the host
• Packet loss
The ping command first sends an echo request packet to an address, then waits for a
reply. The ping is successful only if the echo request gets to the destination, and the
destination is able to send an echo reply to the source within a predetermined time called
a timeout. The default value of this timeout is two seconds on Cisco devices.
The ICMP header starts after the IPv4 header since the ICMP messages are encapsulated
in IPv4 packets. The first 4 bytes of the ICMP header are fixed in the following format:
• The first byte specifies the ICMP type.
• The second byte specifies the code, which depends on the ICMP type.
• The third and fourth bytes are used for the checksum.
The remaining part of the header depends on the ICMP message type. The ICMP control
messages are identified by the value in the type field. The code field gives additional
context information for the message.
The following table lists commonly used ICMP-type values during troubleshooting.

The table below lists the possible output characters from the Cisco IOS ping command:
For example, after sending ICMP echo requests, if an ICMP echo reply packet is received
within the default, 2-second (configurable) timeout, an exclamation point (!) is the output,
meaning that the reply was received before the timeout expired. A period (.) is the output
if the reply was not received before the timeout expired.
The device also outputs the min/avg/max RTT in milliseconds.
Note: When pinging, processing delays can be significant because the router considers
that responding to a ping is a low-priority task.
Test the end-to-end connectivity, using the following commands:
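(Illustrative; the target address is a placeholder.)

   Router# ping 192.168.4.2
   Type escape sequence to abort.
   Sending 5, 100-byte ICMP Echos to 192.168.4.2, timeout is 2 seconds:
   !!!!!
   Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms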

Ping with the source from the address of a specific interface, using the following
command:
When a normal ping command is sent from a device, the source address of the ping is the
IPv4 address of the interface that the packet uses to exit the device. The source address
can be changed to the address of any interface on the device.
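For example (the addresses and interface name are placeholders), the source can be given as an interface or as an address:

   Router# ping 192.168.4.2 source GigabitEthernet0/0
   Router# ping 192.168.4.2 source 192.168.3.2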
You can also perform an extended ping, and adjust parameters such as the source IPv4
address, as follows:
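(An abbreviated, illustrative dialog; the addresses are placeholders, and several prompts are omitted.)

   Router# ping
   Protocol [ip]:
   Target IP address: 192.168.4.2
   Repeat count [5]:
   Datagram size [100]:
   Timeout in seconds [2]:
   Extended commands [n]: y
   Source address or interface: 192.168.3.2
   ...
   Sending 5, 100-byte ICMP Echos to 192.168.4.2, timeout is 2 seconds:
   Packet sent with a source address of 192.168.3.2
   !!!!!
   Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms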

If ping fails or returns an unusual RTT, you can use the traceroute command to help
narrow down the problem. You can also vary the size of the ICMP echo payload to test
problems that are related to the MTU.
On a Microsoft Windows device, by default, four packets are sent; information displayed is
similar to the Cisco IOS output, as shown in the example:
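(Illustrative output; the address is a placeholder.)

   C:\> ping 192.168.4.2

   Pinging 192.168.4.2 with 32 bytes of data:
   Reply from 192.168.4.2: bytes=32 time=2ms TTL=127
   Reply from 192.168.4.2: bytes=32 time=1ms TTL=127
   Reply from 192.168.4.2: bytes=32 time=1ms TTL=127
   Reply from 192.168.4.2: bytes=32 time=1ms TTL=127

   Ping statistics for 192.168.4.2:
       Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
   Approximate round trip times in milli-seconds:
       Minimum = 1ms, Maximum = 2ms, Average = 1ms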
Using traceroute (Cisco IOS) or tracert (Microsoft Windows)
Traceroute is used to test the path that packets take through the network. It sends out
either an ICMP echo request (Microsoft Windows) or UDP (most implementations)
messages, gradually increasing IPv4 TTL values to probe the path by which a packet
traverses the network. The first packet with the TTL set to 1 will be discarded by the first-
hop router, which will send an ICMP "time exceeded" message sourced from its IPv4
address. The device that initiated the traceroute therefore knows the address of the first-
hop router. When the TTL is set to 2, the packets will arrive at the second router, which
will respond with an ICMP "time exceeded" message from its IPv4 address. This process
continues until the message reaches its final destination; the destination device will return
either an ICMP echo reply (Windows) or an ICMP port unreachable, indicating that the
request or message has reached its destination.
Cisco traceroute works by sending a sequence of three packets for each TTL value, with
different destination UDP ports, which allows it to report routers that have multiple,
equal-cost paths to the destination. For example, the first three packets with TTL 1 use
UDP ports 33434 (first packet), 33435 (second packet), and 33436 (third packet). The next
three UDP datagrams are sent with a TTL of 2 to destination ports 33437, 33438, and
33439.
Use the extended traceroute command to test connectivity from a specified source.
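An abbreviated, illustrative dialog and output might look like the following (addresses are placeholders, and some prompts are omitted):

   Router# traceroute
   Protocol [ip]:
   Target IP address: 198.51.100.10
   Source address: 10.1.1.1
   ...
   Type escape sequence to abort.
   Tracing the route to 198.51.100.10

     1 10.1.1.2 1 msec 1 msec 2 msec
     2 198.51.100.10 2 msec *  2 msec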

The table below lists the characters that can appear in the Cisco IOS traceroute command
output.

The tracert command is a Windows implementation of traceroute (and will not work on
Cisco devices).
Using Telnet and SSH
One way to obtain information about a remote network device is to connect to it using
either the Telnet or SSH applications. Telnet and SSH are virtual terminal protocols that
are part of the TCP/IP suite. The protocols allow connections and remote console sessions
from one network device to one or more remote devices.
When you use Telnet to connect to a remote device, the default port number is used. The
default port for Telnet is 23. You can use a different port number, from 1 to 65,535, to test
if a remote device is listening on that port.
Although Telnet can be used as a troubleshooting tool to check transport layer
functionality, it should not be used in a production environment to administer network
devices. Nowadays, SSH is used as it is a secure access method.
To log on to a host that supports Telnet, use the telnet EXEC command:
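(The address below is a placeholder.)

   Router# telnet 198.51.100.10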

Test the transport layer using the telnet command.
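For example, the following illustrative session (placeholder address) tests TCP port 80; this is the kind of output that the next paragraph describes:

   Router# telnet 198.51.100.10 80
   Trying 198.51.100.10, 80 ... Open
   ^C
   [Connection to 198.51.100.10 closed by foreign host]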

The telnet command in the output tests if HTTP, which listens on TCP port 80, is open.
Since we get an Open response, we can assume that the remote device is reachable and
listens on TCP port 80. On a Cisco router, to exit the established connection, you can enter Ctrl+C (^C), as shown in the output. The escape sequence that suspends a connection on a Cisco device is Ctrl+Shift+6, followed by x.
To start an encrypted session with a remote networking device, use the ssh EXEC
command:
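(The username and address below are placeholders.)

   Router# ssh -l admin 198.51.100.10
   Password: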

Verify ARP table


Devices use ARP to perform IPv4 address resolution for IPv4-to-MAC address mapping. The show ip arp (or show arp) command displays the ARP table on a Cisco router.
The arp -a command displays IPv4-to-MAC-address mappings on a Windows Host.

Verify IPv4 address information


The ipconfig command (Microsoft Windows) is a command-line utility that is available on all versions of Microsoft Windows, starting with Windows NT. This utility allows you to get the IPv4 address information of a Windows computer. The ipconfig /all option displays the IP address, subnet mask, and default gateway for all physical and virtual network adapters. It also displays the Domain Name System (DNS) and Microsoft Windows Internet Name Service (WINS) settings for each adapter.
For a brief overview of interface IPv4 addressing and status information on a Cisco device,
use the show ip interface brief command.
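An illustrative output, with example addresses and interface names:

   Router# show ip interface brief
   Interface                  IP-Address      OK? Method Status                Protocol
   GigabitEthernet0/0         192.168.3.2     YES manual up                    up
   GigabitEthernet0/1         192.168.4.1     YES manual up                    up
   GigabitEthernet0/2         unassigned      YES unset  administratively down down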

12.4 Troubleshooting a Simple Network


Troubleshooting Common Switch Media Issues
Switches operate at multiple layers of the OSI model. At Layer 1 of the OSI model,
switches provide an interface to the physical media. At Layer 2 of the OSI model, they
provide switching of frames based on MAC addresses. Therefore, switch problems are
generally seen as Layer 1 and Layer 2 issues. Layer 3 issues concerning IP connectivity to
the switch for management purposes could also occur.

In laying out the troubleshooting methodology, some people start at Layer 1, looking at potential media issues such as damaged wiring or interference from electromagnetic sources. The category of UTP wiring is critical: lower-category cables are more sensitive to certain sources of EMI, such as air-conditioning systems, whereas Category 5 cable has better jacketing around the wiring to protect it from such sources.
Poor cable management could, for example, put a strain on Registered Jack-45 (RJ-45)
connectors, causing some cables to break.
Physical security could also be a cause of media issues. If people are allowed to connect hubs or other unwanted sources of traffic to your switches, traffic patterns may change. This change is not necessarily related to the media or physical layer, but collisions could increase once a hub is connected to the switch. Because the problem is related to physical connectivity, it could be categorized as a physical layer or media issue.
When new equipment is connected to a switch and the connection operates in the half-
duplex mode, or a duplex mismatch occurs, this could lead to an excessive number of
collisions (layer 2 issue).
A collision occurs when a transmitting Ethernet station detects another signal while
transmitting a frame. A late collision is a special type of collision. If a collision occurs after
the first 512 bits (64 octets or bytes) of data are transmitted by the transmitting station,
then a late collision is said to have occurred. Most importantly, frames involved in late collisions are not resent by the network interface card; in other words, Ethernet does not retransmit them, unlike frames that collide within the first 64 octets or bytes. It is left to the upper layers of the protocol stack to determine that data was lost and to retransmit it.
Late collisions should never occur in a properly designed Ethernet network. Possible
causes are usually incorrect cabling or a noncompliant number of hubs in the network;
perhaps a bad network interface card could also cause late collisions. Late collisions are typically detected by using a protocol analyzer and by verifying cabling distances against the physical layer requirements and limitations of Ethernet.
A symptom of excessive noise could be several cyclic redundancy check (CRC) errors, or
rather changes in the number of CRC errors not related to collisions. In other words, if the
number of collisions is constant, consistent, and does not change or have peaks, then CRC
errors could be caused by excessive noise and not related to actual collisions.
When this issue happens, cable inspection is probably the first step; you can use the multitude of cable testers and tools available for that purpose. Poor design, such as using something other than Category 5 cabling for Fast Ethernet 100-Mbps networks, could be the cause, and cable testing plus documentation can tell you how to fix this problem.
If the rate of collisions exceeds the baseline for your network, then there are other types
of solutions to the problem. There are several guidelines regarding what that baseline
should be, including that the number of collisions compared to the total number of output
packets should be less than 0.1 percent.
If collisions are a problem, the cause could be a defective or misbehaving device, for example, a network interface card sending excessive garbage into the network. This situation typically happens when there are circuitry, logic, or even physical failures on the device. This condition is typically known as jabbering and relates to network interface cards and other devices continuously sending random or garbage data into the network. A time-domain reflectometer (TDR) can be used to find unterminated Ethernet cabling, which reflects signals back into the network and causes collisions.
Fiber media issues have these possible sources:

• Microbend and macrobend losses


o Bending the fiber in too small of a radius causes light to escape.
o Light strikes the core or cladding at less than the critical angle.
o Total internal reflection no longer occurs, and light leaks out.
• Splice losses
• Dirty connectors

There are several ways in which light can be lost from the fiber. Some are due to manufacturing problems (for example, microbends, macrobends, and splicing fibers whose cores are not centered), while others are physics problems (back reflections or refractions), because light reflects whenever it encounters a change in the index of refraction, which defines how much the path of light is bent or refracted when entering a medium. The index of refraction is calculated by dividing the speed of light in a vacuum by the speed of light in another medium, in this case, optical fiber.
Macrobends typically occur during fiber installation.
One cause of light leaking out at a macrobend is that part of the traveling wave, called the
evanescent wave, travels inside the cladding. Around the bend, part of the evanescent
wave would have to travel faster than the speed of light in the material, which is not
possible, so this light instead radiates out of the fiber.
Bend losses can be minimized by designing a larger index difference between the core and
the cladding. The core and the cladding have different refractive indexes. The refractive index
of the core is always greater than the index of the cladding. Another approach is to
operate at the shortest possible wavelength and perform good installations.
Splices are a way to connect two fibers by fusing their ends. The best way to align the fiber
core is by using the outside diameter of the fiber as a guide. If the core is at the center of
the fiber, a good splice can be achieved. If the core is off-center, then it is impossible to
create a good splice. You would have to cut the fiber further upstream and test again.
Another possibility is that the fibers to be spliced could have dirt on their ends. Dirt can
cause many problems, particularly if the dirt intercepts some or all the light from the core.
The core for single-mode fiber (SMF) is only 9 micrometers. Splicing fiber is a highly
specialized skill in which trained technicians use fusion splicing equipment to connect two
fiber runs.
Any contamination in the fiber connection can cause the failure of the component or
failure of the whole system. Even microscopic dust particles can cause a variety of
problems for optical connections. A particle that partially or completely blocks the core
generates strong back reflections, which can cause instability in the laser system. Dust
particles trapped between two fiber faces can scratch the glass surfaces. Even if a particle
is only situated on the cladding or the edge of the endface, it can cause an air gap or
misalignment between the fiber cores, which significantly degrades the optical signal. In
addition to dust, other types of contamination, such as oil, water, or powdery coatings, must
also be cleaned off the endface. These contaminants can be more difficult to remove than
dust particles and can also cause damage to equipment if not removed.
When you clean fiber components, always complete the steps in the procedures carefully.
The goal is to eliminate any dust or contamination and provide a clean environment for
the fiber-optic connection. Remember that inspection, cleaning, and reinspection are
critical steps that must be done before you make any fiber-optic connection. When
cleaning optical connectors, the most important warning is always to turn off any laser
sources before inspecting fiber connectors, optical components, or bulkheads.
Troubleshooting Media Issues Workflow
You can use the show interfaces command to diagnose media issues.
To troubleshoot media issues when you have no connection or a bad connection between
a switch and another device, follow this process:
1. Use the show interfaces command to check the interface status. If the interface is
not operational, check the cable and connectors for damage.
2. Use the show interfaces command to check for excessive noise. If there is
excessive noise, you will see increased error counters in the output of the
command. First, find and remove the source of the noise, if possible. Verify
that the cable does not exceed the maximum cable length and check the type of
cable used. For copper cable, it is recommended that you use at least Category 5.
3. Use the show interfaces command to check for excessive collisions. If there are
collisions or late collisions, verify the duplex settings on both ends of the
connection.
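As an illustration of the checks in the steps above, an abbreviated show interfaces output (counter values are examples only) looks like this; the duplex/speed line and the error and collision counters are the fields to focus on:

   Switch# show interfaces FastEthernet0/1
   FastEthernet0/1 is up, line protocol is up (connected)
     ...
     Full-duplex, 100Mb/s, media type is 10/100BaseTX
     ...
     0 runts, 0 giants, 0 throttles
     12 input errors, 12 CRC, 0 frame, 0 overrun, 0 ignored
     ...
     0 output errors, 4 collisions, 1 interface resets
     0 babbles, 0 late collision, 0 deferred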

12.5 Troubleshooting a Simple Network


Troubleshooting Common Switch Port Issues
Port issues will most likely have visible symptoms, such as users being unable to connect
to the network. These problems are sometimes related to faulty media and equipment,
such as NICs, but port issues are often related to duplex and speed settings.
The most common port issues are related to duplex and speed issues.
o Duplex-related issues result from a mismatch in duplex settings.

o Speed-related issues result from a mismatch in speed settings.

A common issue with speed and duplex occurs when the duplex settings are mismatched
between two switches, between a switch and a router, or between a switch and a
workstation or server. This mismatch can occur when the speed and duplex are manually hardcoded or when there are autonegotiation issues between the two devices.
Duplex and Speed-Related Issues
A duplex mismatch occurs when, for example, the switch operates at full duplex and the connected device operates at half duplex. The result of a duplex mismatch is extremely slow performance,
intermittent connectivity, and loss of connection. Other possible causes of data-link errors
at full duplex are bad cables, a faulty switch port, or NIC software or hardware issues.
Here are examples of duplex-related issues:
o One end is set to full duplex, and the other is set to half duplex, resulting in a mismatch.
o One end is set to full duplex, and the other is set to autonegotiation:
   o If autonegotiation fails and that end reverts to half duplex, the result is a mismatch.
o One end is set to half duplex, and the other is set to autonegotiation:
   o If autonegotiation fails, that end reverts to half duplex.
   o Both ends then operate at half duplex, and there is no mismatch.
o Autonegotiation is set on both ends, but one end settles at full duplex and the other at half duplex:
   o For example, a Gigabit Ethernet interface defaults to full duplex, while a 10/100 interface defaults to half duplex, resulting in a mismatch.
o Autonegotiation is set on both ends:
   o If autonegotiation fails on both ends, they both revert to half duplex.
   o Both ends then operate at half duplex, and there is no mismatch.
Here are examples of speed-related issues:
o One end is set to one speed, and the other is set to another speed, resulting in a mismatch.
o One end is set to a higher speed, and autonegotiation is enabled on the other end:
   o If autonegotiation fails, the switch senses what the other end is using and reverts to that speed.
o Autonegotiation is set on both ends:
   o If autonegotiation fails on both ends, they revert to their lowest speed.
   o Both ends then operate at the lowest speed, and there is no mismatch.
The IEEE 802.3ab Gigabit Ethernet standard mandates the use of autonegotiation for
speed and duplex. Although autonegotiation is not mandatory for other speeds,
practically all fast Ethernet NICs also use autonegotiation by default. The use of
autonegotiation for speed and duplex is the current recommended practice for ports that
are connected to noncritical endpoints. However, if duplex negotiation fails for some
reason, you might have to set the speed and duplex manually on both ends. Typically, this
would mean setting the duplex mode to full duplex on both ends of the connection. You
should manually set the speed and duplex on links between networking devices and ports
connected to critical endpoints, such as servers.
The table summarizes possible speed and duplex settings for a connection between a
switch port and an end-device NIC. The table gives just a general idea about speed and
duplex misconfiguration combinations.
Troubleshooting Process for Duplex and Speed-Related Issues
A common cause of performance problems in Ethernet-based networks is a duplex or
speed mismatch between two ends of a link.
o Guidelines for duplex configuration include:
o Point-to-point Ethernet links should always run in the full-duplex mode.
Half duplex is not common anymore—you can encounter it if hubs are
used.
o Autonegotiation of speed and duplex is recommended on ports that are
connected to noncritical endpoints.
o Manually set the speed and duplex on links between networking devices
and ports connected to critical endpoints.
o Verify duplex and speed settings on an interface.
To troubleshoot switch duplex and speed issues when you have no connection or a bad
connection between a switch and another device, use this general process:
o Use the show interfaces command to check whether there is a speed mismatch
between the switch and a device on the other side (switch, router, server, and so
on). If there is a speed mismatch, set the speed on both sides to the same value.
o Use the show interfaces command to check whether there is a duplex mismatch
between the switch and a device on the other side. It is recommended that you
use full duplex if both sides support it.

The example shows the show interfaces command output. The example highlights duplex
and speed settings for the FastEthernet0/1 interface. Based on the output of the show
interfaces command, you can find, diagnose, and correct the duplex or speed mismatch
between the switch and the device on the other side.
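The referenced output is not reproduced in this text; the relevant lines of a show interfaces FastEthernet0/1 output would resemble the following (illustrative values):

   FastEthernet0/1 is up, line protocol is up (connected)
     Hardware is Fast Ethernet, address is 0800.0222.2222 (bia 0800.0222.2222)
     ...
     Full-duplex, 100Mb/s, media type is 10/100BaseTX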

If the mismatch occurs between two Cisco devices with Cisco Discovery Protocol enabled,
you will see Cisco Discovery Protocol error messages on the console or in the logging
buffer of both devices. Cisco Discovery Protocol is useful for detecting errors and for
gathering port and system statistics on connected Cisco devices. Whenever there is a
duplex mismatch (in this example, on the FastEthernet0/1 interface), the consoles of Cisco
switches display these error messages:
%CDP-4-DUPLEX_MISMATCH: duplex mismatch discovered on FastEthernet0/1 (not half
duplex)
Use the duplex mode command to configure duplex operation on an interface. The
following are available duplex modes:
o full: Specifies full-duplex operation.
o half: Specifies half-duplex operation.
o auto: Specifies the autonegotiation capability. The interface automatically
operates at half or full duplex, depending on environmental factors such as the
type of media and the transmission speeds for the peer routers, hubs, and
switches that are used in the network configuration.
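For example, to manually set a switch port to 100 Mbps and full duplex (a minimal sketch using a hypothetical interface; the same settings would be applied on the other end of the link), you could use:
Switch(config)# interface FastEthernet0/1
Switch(config-if)# speed 100
Switch(config-if)# duplex full
To return the port to autonegotiation, use the speed auto and duplex auto commands.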
Troubleshooting Physical Connectivity Issue
Often troubleshooting processes involve a component of hardware troubleshooting.
There are three main categories of issues that could cause a failure on the network:
hardware failures, software failures (bugs), and configuration errors. A fourth category
might be performance problems, but performance problems are a symptom and not the
cause of a problem.
After you have used the ping and traceroute utilities to determine that a network
connectivity problem exists and where it exists, check to see if there are physical
connectivity issues before you get involved in more complex troubleshooting. You could
spend hours troubleshooting a situation only to find that a network cable is loose or
malfunctioning.
The interfaces that the traffic passes through are always worth verifying when you are
troubleshooting performance-related issues or when you suspect that hardware is at fault.
The interfaces are usually one of the first things you would verify while tracing the path
between devices.
If you have physical access to devices that you suspect are causing network problems, you
can save troubleshooting time by looking at the port LEDs. The port LEDs show the link
status and can indicate an error condition. If a link light for a port is not on, ensure that
both ends of the cable are plugged into the correct ports.
When troubleshooting small form-factor pluggable (SFP) and SFP+ modules, always check
whether SFP or SFP+ transceivers are installed in the switch ports. The transceiver type
should match the physical port specification and speed; SFP+ modules normally operate at
10 Gbps, and the transceivers should be the same type on both ends of the link. You
should also check that the same wavelength is used; a transceiver using a 1310 nm laser
will not communicate with an 850 nm transceiver. You also need to verify which fiber type
the SFP module supports, single-mode fiber (SMF) or multimode fiber (MMF), and confirm
that you are using the correct cable. Always refer to the documentation (typically the
installation guide) for a specific networking device to check the supported cables and modules.
The output of the show interfaces command lists important statistics that should be
checked. The first line of the output from this command tells you whether an interface is
up or down.
To verify the interface status, use the show interfaces command.
The output of the show interfaces command also displays the following important
statistics:
o Input queue drops: Input queue drops (and the related ignored and throttle
counters) signify the fact that at some point, more traffic was delivered to the
device than it could process. This situation does not necessarily indicate a problem
because it could be normal during traffic peaks. However, it could indicate that the
CPU cannot process packets in time. So if this number is consistently high, you
should try to determine at which moments these counters are increasing and how
this increase relates to the CPU usage.
o Output queue drops: Output queue drops indicate that packets were dropped due
to congestion on the interface. Seeing output drops is normal when the aggregate
input traffic is higher than the output traffic. During traffic peaks, the packets are
dropped if traffic is delivered to the interface faster than the interface can send it
out. However, although this is considered normal behavior, it leads to
packet drops and queuing delays, so applications sensitive to packet drops and
queuing delays, such as VoIP, might suffer from performance issues. Consistent
output drops might indicate that you need to implement an advanced queuing
mechanism to provide good quality of service (QoS) to each application.
o Input errors: Input errors indicate errors experienced during the reception of the
frame, such as CRC errors. High numbers of CRC errors could indicate cabling
problems, interface hardware problems, or in an Ethernet-based network, duplex
mismatches.
o Output errors: Output errors indicate errors, such as collisions, during the
transmission of a frame. In most Ethernet-based networks, full-duplex
transmission is the norm, and half-duplex transmission is the exception. In full-
duplex transmission, operation collisions cannot occur. Therefore, collisions,
especially late collisions, often indicate duplex mismatches.
12.6 Troubleshooting a Simple Network
Troubleshooting Common Problems Associated with IPv4 Addressing
Troubleshooting IPv4 addressing is an important skill and will prove valuable when
resolving several network issues. For example, assume that a host cannot communicate to
a server that is on a remote network.
The following are recommended troubleshooting steps to perform from the host:
1. Verify the host IPv4 address and subnet mask.
2. Ping the loopback address.
3. Ping the IPv4 address of the local interface.
4. Ping the default gateway.
5. Ping the remote server.

The following examples assume you are on a Windows host.
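As a sketch, the full sequence from a Windows command prompt might look like the following, assuming a hypothetical host address of 172.16.10.10, a default gateway of 172.16.10.1, and a remote server at 172.16.20.10 (the text in parentheses is a description, not part of the command):
C:\> ipconfig                 (verify the host IPv4 address and subnet mask)
C:\> ping 127.0.0.1           (test the local IPv4 stack)
C:\> ping 172.16.10.10        (test the IPv4 address of the local interface)
C:\> ping 172.16.10.1         (test reachability of the default gateway)
C:\> ping 172.16.20.10        (test reachability of the remote server)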


Verify the Host IPv4 Address and Subnet Mask
Access the command prompt and use the ipconfig command to display all current TCP/IP
network configuration parameters. If the Media State of your adapter indicates that the
media is disconnected (as in the Wireless LAN adapter example below), you should
troubleshoot the adapter or check the cable (for wired interfaces).

Ping the Loopback Address


Access the command prompt and ping 127.0.0.1. This address is the diagnostic or
loopback address. If you get a successful ping, your IPv4 stack is considered to be
initialized. If it fails, you have an IPv4 stack failure, and you need to reinstall TCP/IP on the
host.

Ping the IPv4 Address of the Local Interface


From the command prompt, ping the IPv4 address of the local host. If the ping is
successful, your network interface card (NIC) is functioning. If it fails, there is a problem
with the NIC. If the ping is successful, it does not mean that a cable is plugged into the NIC,
but only that the IPv4 protocol stack on the host can communicate to the NIC (via the LAN
driver).

Ping the Default Gateway


From the command prompt, ping the default gateway (router). If the ping works, the NIC
is plugged into the network and can communicate on the local network. If it fails, you have
a local physical network problem that could be anywhere from the NIC to the router.
Ping the Remote Server
If the previous steps are successful, try to ping the remote server. If the ping is successful,
you know that you have IPv4 connectivity between the local host and the remote server.
You also know that the remote physical network is working.

If the ping to the remote server fails, you may have some remote physical network
problem. Verify and correct this by going to the server and performing the same steps as
you just did from the host.
Check the Default Gateway
If the previous steps are unsuccessful, there may be an incorrect default gateway
configuration on either the host or the server, or there may be a routing issue.
You can use the traceroute utility to test the path that packets take through the network
to ensure they are going through the router.
To verify the host setting for the default gateway, use the appropriate CLI command or
check the settings in the GUI. A useful command in Windows besides ipconfig is route
print. In the example, the user host has a correct default gateway setting.
Another possible problem is an incorrect default gateway setting on the server.
Depending on the host operating system, you will need to use the proper CLI command or
check the settings in the GUI. The server should have a default gateway of 172.16.20.1.
Next, you should check the IPv4 addresses and subnet masks of the interfaces, and the
routing table, on the default gateway router. You should connect to the router and check
the status of interfaces using the show ip interface brief command. To confirm the IPv4
addresses and subnet masks, use the show running-config command. To check the
routing table, use the show ip route command and confirm that all the networks are listed
in the routing table.
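On the default gateway router itself, a minimal verification sketch could simply be the three commands named above, entered at the privileged EXEC prompt:
Router# show ip interface brief
Router# show running-config
Router# show ip route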
13.1 Introducing Basic IPv6
Introduction
As the global internet continues to grow, its overall architecture needs to evolve to
accommodate the new technologies that support the increasing numbers of users,
applications, appliances, and services. This evolution also includes Enterprise networks
and communication providers, which provide services to home users. IPv6 was proposed
when it became clear that the 32-bit addressing scheme of IPv4 cannot keep up with
internet growth demands. IPv6 quadruples the number of network address bits from 32
bits (in IPv4) to 128 bits. This means that the address pool for IPv6 is around 340
undecillion, or 340 trillion trillion trillion, which is an unimaginably large number.
The larger IPv6 address space allows networks to scale and provide global reachability.
The simplified IPv6 packet header format handles packets more efficiently. The IPv6
network is designed to embrace encryption and favor targeted multicast over often
problematic broadcast communication.
IPv6 as a protocol has been known for a while, but enterprises are beginning to
understand how it can help them achieve their goals, improve efficiency and gain
functionality.

As a network engineer, you will need to get familiar with IPv6, including:
o Describing IPv6 features and advantages and comparing them to IPv4.
o Configuring basic IPv6 addressing and testing IPv6 connectivity in the network.
13.2 Introducing Basic IPv6
IPv4 Address Exhaustion Workarounds
IPv4 provides approximately 4 billion unique addresses. Although 4 billion is a lot of
addresses, it is not enough to keep up with the growth of the internet.
To extend the lifetime and usefulness of IPv4 and to circumvent the address shortage,
several mechanisms were created:
o Classless interdomain routing (CIDR)
o Variable-length subnet masking (VLSM)
o Network Address Translation (NAT)
o Private IPv4 addresses space (RFC 1918)
Note: Over the years, hardware support has been added to devices to support IPv4
enhancements through Application Specific Integrated Circuits (ASICs), offloading the
processing from the equipment CPU to network hardware. This allows more simultaneous
transmission and higher bandwidth utilization.
To allocate IPv4 addresses efficiently, CIDR was developed. CIDR allows the address space
to be divided into smaller blocks, varying in size depending on the number of hosts
needed in individual blocks. These blocks are no longer associated with predefined IPv4
addresses classes, such as class A, B, and C. Instead, the allocation includes a subnet mask
or prefix length, which defines the size of the block.
VLSMs allow more efficient use of IPv4 addresses, specifically on small segments, such as
point-to-point serial links. VLSM usage was recommended in RFC 1817. CIDR and VLSM
support was a prerequisite for ISPs to improve the scalability of the routing on the
internet.
NAT introduced a model in which a device facing outward to the internet has a globally
routable IPv4 address, while the internal network is configured with private RFC 1918
addresses. These private addresses are never routed outside the site, because the same
address ranges may be in use in many different enterprise networks. In this way, even large
enterprises with thousands of systems can hide behind a few routable public addresses.
DHCP is used extensively in IPv4 networks to dynamically allocate addresses, typically
from private IPv4 addresses space (RFC 1918), then translated to public addresses using
NAT.
One of the arguments against deploying IPv6 is that NAT will solve the problems of limited
address space in IPv4. The use of NAT merely delays the exhaustion of the IPv4 address
space. Many large organizations and ISPs are moving to IPv6 because they are running out
of IPv4 private addresses, for example, as Internet of Things (IoT) devices are added to
their networks.
Negative implications of using NAT, some of which are identified in RFC 2775 and RFC
2993 include:
o NAT breaks the end-to-end model of IP, in which only the endpoints, not the
intermediary devices, should process the packets.
o NAT inhibits end-to-end network security. When cryptographic functions protect the
integrity of the IP header, the header cannot be changed between the origin of the
packet, where the integrity protection is applied, and the final destination, where the
integrity of the received packet is checked. Any translation of parts of the header
along the path breaks the integrity check.
o When applications are not NAT-friendly, which means that, for a specific
application, more than just the port and address mapping are necessary to forward
the packet through the NAT device, NAT must embed complete knowledge of the
applications to perform correctly. This fact is especially true for dynamically
allocated ports, embedded IP addresses in application protocols, security
associations, and so on. Therefore, the NAT device needs to be upgraded each
time a new non-NAT-friendly application is deployed (for example, peer-to-peer).
o When different networks use the same private address space and merge or
connect, an address space collision occurs. Hosts that are different but have the
same address cannot communicate with each other. There are NAT techniques
available to help with this issue, but they increase NAT complications.

13.3 Introducing Basic IPv6


IPv6 Features
Although VLSM, NAT, and other workarounds (for avoiding the transition to IPv6) are
available, networks with internet connectivity must begin the transition to IPv6 as soon as
possible. For IPv4 networks that provide goods and services to internet users, it is
especially important because the transition by the internet community is already
underway. New networks may be unable to acquire IPv4 addresses, and networks running
IPv6 exclusively will not be able to communicate with IPv4-only networks unless you
configure an intermediary gateway or another transition mechanism. IPv6 and IPv4 are
completely separate protocols, and IPv6 is not backward compatible with IPv4. As the
internet evolves, organizations must adopt IPv6 to support future business continuity,
growth, and global expansion. Furthermore, some ISPs and Regional Internet Registries
(RIRs) are administratively out of IPv4 addresses, meaning that their supply of IPv4
addresses is now limited, and organizations have to migrate to and support IPv6 networks.
IPv6 includes several features that make it attractive for building global-scale, highly
effective networks:
o Larger address space: The expanded address space includes several IP addressing
enhancements:
o It provides improved global reachability and flexibility.
o A better aggregation of IP prefixes is announced in the routing tables. The
aggregation of routing prefixes limits the number of routing table
entries, creating efficient and scalable routing tables.
o Multihoming increases the reliability of the internet connection of an IP
network. With IPv6, a host can have multiple IP addresses over one physical
upstream link. For example, a host can connect to several ISPs.
o Autoconfiguration is available.
o There are more plug-and-play options for more devices.
o Simplified mechanisms are available for address renumbering and
modification.
o Simpler header: Streamlined fixed header structures make the processing of IPv6
packets faster and more efficient for intermediate routers within the network. This
fact is especially true when large numbers of packets are routed in the core of the
IPv6 internet.
o Security and mobility: Features that were not part of the original IPv4
specification, such as security and mobility, are now built into IPv6. IP Security
(IPsec) is available in IPv6, allowing the IPv6 networks to be secure. Mobility
enables mobile network devices to move around in networks without breaks in
established network connections.
o Transition richness: IPv6 also includes a rich set of tools to aid in transitioning
networks from IPv4, to allow an easy, nondisruptive transition over time to IPv6-
dominant networks. An example is dual stacking, in which devices run both IPv4
and IPv6.
13.4 Introducing Basic IPv6
IPv6 Addresses and Address Types
IPv6 addresses consist of 128 bits and are represented as a series of eight 16-bit
hexadecimal fields separated by colons. Although the upper and lower case is permitted,
it is best practice to use lower case for IPv6 representation:
Address representation
o Format is x:x:x:x:x:x:x:x, where x is a 16-bit hexadecimal field.
o Example: 2001:0db8:010f:0001:0000:0000:0000:0acd
o Leading zeros in a field can be omitted.
o Example: 2001:db8:10f:1:0:0:0:acd
o Successive fields of 0 are represented as "::" but only once in an address.
o Example: 2001:db8:10f:1::acd
Note: The a, b, c, d, e, and f in hexadecimal fields can be either uppercase or lowercase
but it is best practice to use lower case for IPv6 representation.
Note: Although Cisco IOS accepts both lowercase and uppercase representation of an IPv6
address, RFC 5952 recommends that IPv6 addresses be represented in lowercase to
ensure compatibility with case-sensitive applications.
Here are two ways to shorten the writing of IPv6 addresses:
o The leading zeros in a field can be omitted, so that 010f can be written as 10f. A
field that contains all zeros (0000) can be written as 0.
o Successive fields of zeros can be represented as a double colon (::) but only once in
an address. An address parser can identify the number of missing zeros by
separating the two parts and filling in zeros until the 128 bits are completed.
However, if two double colons are placed in the address, there is no way to
identify the size of each block of zeros. Therefore, only one double colon is
possible in a valid IPv6 address.
The use of the double-colon technique makes many addresses very short; for example,
ff01:0:0:0:0:0:0:1 becomes ff01::1. The all-zeros address is written simply as a double
colon (::); this address is known as the unspecified address.
IPv6 Address Types
IPv6 supports three basic types of addresses. Each address type has specific rules
regarding its construction and use. These types of addresses are:
o Unicast: Unicast addresses are used in a one-to-one context.
o Multicast: A multicast address identifies a group of interfaces. Traffic that is sent
to a multicast address is sent to multiple destinations at the same time. An
interface may belong to any number of multicast groups.
o Anycast: An IPv6 anycast address is assigned to an interface on more than one
node. When a packet is sent to an anycast address, it is routed to the nearest
interface that has this address. The nearest interface is found according to the
measure of metric of the particular routing protocol that is running. All nodes that
share the same address should behave the same way so that the service is offered
similarly, regardless of the node that services the request.

IPv6 does not support broadcast addresses in the way that they are used in IPv4. Instead,
specific multicast addresses (such as the all-nodes multicast address) are used.
IPv6 unicast addresses are assigned to each node (interface). Their uses are discussed in
RFC 4291. The unicast addresses are listed below.
Note: An IPv6 address prefix, in the format ipv6-prefix/prefix-length, can be used to
represent bitwise contiguous blocks of the entire address space. The prefix length is a
decimal value that indicates how many of the high-order contiguous bits of the address
compose the prefix. An IPv6 address network prefix is represented in the same way as the
network prefix (as in 10.1.1.0/24) in IPv4. For example, 2001:db8:8086:6502::/32 is a valid
IPv6 prefix.
IPv6 Address Scopes and Prefixes
To fully understand IPv6 addressing, it is important to have a solid understanding of IPv6
scopes and prefixes. An IPv6 address scope specifies the region of the network in which
the address is valid. For example, the link-local address has a scope that is called "link-
local," which means that it is valid and should be used on a directly attached network
(link). Scopes can apply to both unicast and multicast addresses. There are several
different scopes or regions: the link scope, site scope, organization scope, and global
network scope.
Addresses in the link scope are called link-local addresses, and routers will not forward
these addresses to other links or networks. Addresses that are valid within a single site are
called site-local addresses. Addresses intended to span multiple sites belonging to one
organization are called organization-local addresses, and addresses in the global network
scope are called global unicast addresses.
Multiple IPv6 Addresses on an Interface
As with IPv4, IPv6 addresses are assigned to interfaces; however, unlike IPv4, an IPv6
interface is expected to have multiple addresses. The IPv6 addresses that are assigned to
an interface can be any of the basic types: unicast, multicast, or anycast.
IPv6 Unicast Addresses
An IPv6 unicast address generally uses 64 bits for the network ID and 64 bits for the
interface ID. The network ID is administratively assigned, and the interface ID can be
configured manually or autoconfigured.
Note: When you use the Stateless Address AutoConfiguration (SLAAC) IPv6 address
assignment method, a 64-bit interface ID is required.

Use of EUI-64 Format Interface ID in IPv6 Addresses


The interface ID in an IPv6 address is analogous to the host portion of an IPv4 address; it
uniquely identifies an interface on a link. A 64-bit interface ID is not required but is highly
recommended. However, a 64-bit interface ID is required when an IPv6 address is
autoconfigured. One way to guarantee that the interface ID is unique is to base it on the
MAC address of the interface.
The Extended Universal Identifier 64-bit format (EUI-64) defines the method to create an
interface identifier from an IEEE 48-bit MAC address. Since the EUI-64 format is based on
unique MAC addresses, using this format, a device can automatically assign itself a unique
64-bit IPv6 interface ID without the need for manual configuration or DHCP. The following
figure illustrates this process:

The EUI-64 format interface ID is derived from the 48-bit MAC address by inserting the
hexadecimal number fffe between the upper 3 bytes (the OUI field) and the lower 3
vendor-assigned bytes of the MAC address. Then, the seventh bit of the first octet, known
as the universal/local (U/L) bit, is inverted. In a MAC address, this bit is 0 for universally
administered (globally unique) addresses and 1 for locally administered addresses. In the
modified EUI-64 format used for IPv6 interface IDs, the meaning of this bit is the opposite,
so the bit is inverted; for a burned-in, globally unique MAC address, it changes from 0 to 1.
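As a worked sketch (the MAC address and prefix here are hypothetical), consider an interface with the MAC address 0060.2f17.fc0f:
1. Split the MAC address into the OUI half and the vendor-assigned half: 00-60-2f and 17-fc-0f.
2. Insert fffe between the two halves: 0060:2fff:fe17:fc0f.
3. Invert the seventh bit of the first octet (00, or 0000 0000 in binary, becomes 0000 0010, or 02): 0260:2fff:fe17:fc0f.
Combined with a /64 prefix such as 2001:db8:1:1::/64, the resulting address would be 2001:db8:1:1:260:2fff:fe17:fc0f.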
IPv6 Global Unicast Address
Both IPv4 and IPv6 addresses are generally assigned in a hierarchical manner. ISPs assign
users IP addresses. ISPs obtain allocations of IP addresses from a local internet registry
(LIR) or National Internet Registry (NIR) or their appropriate RIR. The RIR, in turn, obtains
IP addresses from The Internet Corporation for Assigned Names and Numbers (ICANN),
the operator for IANA.
RFC 4291 specifies the 2000::/3 prefix as the global unicast address space that the IANA
may allocate to the RIRs. A global unicast address (GUA) is an IPv6 address created from
the global unicast prefix. The structure of global unicast addresses enables the
aggregation of routing prefixes, limiting the number of routing table entries in the global
routing table. Global unicast addresses that are used on links are aggregated upward
through organizations and eventually to the ISPs.
The figure shows how address space can be allocated to the RIR and ISP. These values are
minimum allocations, which means that an RIR will get a /23 or shorter, an ISP will get a
/32 or shorter, and a site will get a /48 or shorter. A shorter prefix length allows more
available address space. For example, a site could get a /40 instead of a /48, giving it more
address space if it can justify the need to its ISP. The figure shows an aggregatable provider model where
the end customer obtains its IPv6 address from the ISP. The end customer can also choose
a provider-independent address space by going straight to the RIR. In this case, it is not
uncommon for an end customer to justify a /32 prefix. The example in the figure uses the
common and recommended size of the network with 64 bits used as interface ID.
Global unicast addresses are routable and reachable across the internet. They are
intended for widespread generic use. A global unicast address is structured hierarchically
to allow address aggregation. In the 2000::/3 prefix, the /3 prefix length states that only
the first 3 bits are significant in matching the prefix 2000. The first 3 bits of the first
hexadecimal value, 2, are 001. The fourth bit is insignificant and can be either a 0 or a 1.
Therefore, the first hex digit is either 2 (0010) or 3 (0011). The remaining 12 bits in the
hextet (16-bit segment) can each be a 0 or a 1. This results in global unicast addresses
ranging from 2000:: through 3fff:ffff:ffff:ffff:ffff:ffff:ffff:ffff, which together make up the 2000::/3 block.
A global routing prefix is assigned to a service provider by IANA. The fixed first three bits
plus the following 45 bits identify the organization's site within the public domain.
An individual organization can use a subnet ID to create its own local addressing hierarchy
and identify subnets. A subnet ID is similar to a subnet in IPv4, except that an organization
with an IPv6 subnet ID can support many more individual subnets (the actual number
depends on the global routing prefix). An organization with a 16-bit IPv6 subnet ID can
support up to 65,536 individual subnets.
The interface ID has the same meaning for all unicast addresses. It is used to identify the
interfaces on a link and must be unique to the link. The interface ID is 64 bits long and,
depending on the device operating system, can be created using the EUI-64 format or by
using a randomly generated number. An example of a global unicast address is
2001:0db8:bbbb:cccc:0987:65ff:fe01:2345.
IPv6 Link-Local Unicast Address
Link-local addresses (LLAs) have a smaller scope than site-local addresses: they refer only
to a particular physical link (physical network). The concept of the link-local scope is not
new to IPv6; RFC 3927 defined the 169.254.0.0/16 block as link-local for IPv4. Routers do
not forward packets using link-local addresses, not even within the organization; they are
only for local communication on a particular physical network segment.
A link-local address is an IPv6 unicast address that is automatically configured on any
interface. This address is the first IPv6 address that will be enabled on the interface. A
device does not have to have any other address but must have a link-local address. A link-
local address consists of the link-local prefix fe80::/10 (1111 1110 10) and an interface
identifier that can be created in the modified EUI-64 format or as a randomly generated
value, depending on the operating system installed on the networking device.
It is common practice to statically configure link-local addresses on the router interfaces
to make troubleshooting easier. Nodes on a local link can use link-local addresses to
communicate; the nodes do not need globally unique addresses to communicate.
Link-local addresses are used for link communications such as automatic address
configuration, neighbor discovery, and router discovery. Many IPv6 routing protocols also
use link-local addresses. For static routing, the address of the next-hop device should be
specified using the link-local address of the device; for dynamic routing, all IPv6 routing
protocols must exchange the link-local addresses of neighboring devices.
An example of a link-local unicast address is fe80:0000:0000:0000:0987:65ff:fe01:2345,
which would generally be written in shorthand notation as fe80::987:65ff:fe01:2345.
Note: The prefix fe80::/10 for link-local addresses includes addresses beginning with fe80
through febf. In common practice, though, link-local addresses typically begin with fe80.
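For example, to statically assign an easily recognizable link-local address to a router interface (a minimal sketch; the router name, interface, and address are hypothetical):
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ipv6 address fe80::1 link-local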
IPv6 Unique Local Unicast Address
Unique local unicast addresses are analogous to private IPv4 addresses in that they are
used for local communications, intersite VPNs, and so on, except for one important
difference – these addresses are not intended to be translated to a global unicast address.
They are not routable on the internet without IPv6 NAT, but they are routable inside a
limited area, such as a site. They may also be routed between a limited set of sites. A
unique local unicast address has these characteristics:
o It has a globally unique prefix—it has a high probability of uniqueness.
o It has a well-known prefix to enable easy filtering at site boundaries.
o It allows combining or privately interconnecting sites without creating any address
conflicts or requiring a renumbering of interfaces that use these prefixes.
o It is ISP-independent and can be used for communications inside a site without
having any permanent or intermittent internet connectivity.
o If it is accidentally leaked outside of a site via routing or the Domain Name System,
there is no conflict with any other addresses.
o Applications may treat unique local addresses like globally scoped addresses.
In unique local unicast addresses, global IDs are defined by the administrator of the local
domain. Subnet IDs are also defined by the administrator of the local domain. Subnet IDs
are typically defined using a hierarchical addressing plan, allowing routes to be
summarized and, therefore, reducing the size of routing updates and routing tables. An
example of a unique local unicast address is fc00:aaaa:bbbb:cccc:0987:65ff:fe01:2345.
Loopback Addresses
Just as with IPv4, a provision has been made for a special loopback IPv6 address for
testing. Packets that are sent to this address "loop back" to the sending device. However,
in IPv6, there is just one address, not a whole block, for this function. The loopback
address is 0:0:0:0:0:0:0:1, which is normally expressed as "::1."
Unspecified Addresses
In IPv4, an IPv4 address containing all zeroes has a special meaning—it refers to the host
itself and is used as a source address to indicate the absence of an address. In IPv6, this
concept has been formalized, and the all-zeros address is named the unspecified address.
It is typically used in the source field of a packet sent by a device requesting to have its
IPv6 address configured. You can apply address compression to this address. Because the
address is all zeroes, the address is simply expressed by two colons (::).
IPv6 Multicast Addresses
The following figure illustrates the format of an IPv6 multicast address. An IPv6 multicast
address defines a group of devices known as a multicast group. IPv6 multicast addresses
use the prefix ff00::/8, which is equivalent to the IPv4 multicast address 224.0.0.0/4. A
packet sent to a multicast group always has a unicast source address. A multicast address
can never be the source address. Unlike IPv4, there is no broadcast address in IPv6.
Instead, IPv6 uses multicast, including the well-known all-nodes multicast address
and a solicited-node multicast address.
The first 8 bits are ff, followed by 4 bits allocated for flags and a 4-bit Scope field. The
Scope field defines the range to which routers can forward the multicast packet. The next
112 bits represent the group ID.
The first three flags bits are 0 (reserved), R (rendezvous point), and P (network prefix),
which are beyond the scope of this course. The fourth flag, the least significant bit (LSB),
or the rightmost bit, is the transient flag (T flag). The T flag denotes the two types of
multicast addresses:
o Permanent (0): These addresses, known as predefined multicast addresses, are
assigned by IANA and include both well-known and solicited multicast.
o Nonpermanent (1): These are "transient" or "dynamically" assigned multicast
addresses. Multicast applications assign them.
The scope bits define the scope of the multicast group. For example, a scope value of 1
means interface-local (node-local) scope, which spans only a single interface on a
node. It is used for loopback transmission of multicast. The link-local scope is defined with
the value 2. It spans the topology area of a single link. The admin-local scope is not
automatically defined from the physical topology or another non-multicast-related
configuration and should be defined by an administrator. The admin-local scope is the
smallest administratively defined multicast scope. A site-local scope spans a single site,
whereas organization-local scope spans several sites in one organization.
The following table shows a few examples of well-known IPv6 multicast addresses that
have different scopes:
IPv6 Anycast Addresses
An IPv6 anycast address is an address that can be assigned to more than one interface
(typically on different devices). In other words, multiple devices can have the same
anycast address. According to the router's routing table, a packet sent to an anycast
address is routed to the "nearest" interface having that address.
Anycast addresses are available for both IPv4 and IPv6, initially defined in RFC 1546, Host
Anycasting Service. Anycast was meant to be used for Domain Name System (DNS) and
HTTP services but was never really implemented as designed.
Anycast addresses are syntactically indistinguishable from unicast addresses because
anycast addresses are allocated from the unicast address space. Assigning a unicast
address to more than one interface makes a unicast address an anycast address. The
nodes to which the anycast address is assigned must be explicitly configured to recognize
that the address is an anycast address.
Some reserved anycast address formats, such as the subnet-router anycast address, are
defined in RFC 4291 and RFC 2526. Such an anycast address has the following format:
The subnet-router anycast address has a prefix followed by a series of zeros (as the
interface ID). For example, if the prefix for the subnet is 2001:db8:10f:1::/64 then the
subnet router anycast address for that subnet is 2001:db8:10f:1::. If you send a packet to
the subnet-router anycast address, it will be delivered to one router with an interface in
that subnet. All routers must have subnet-router anycast addresses for the subnets that
are configured on their interfaces.
Reserved Addresses
The IETF reserved a portion of the IPv6 address space for various uses, both present, and
future. Reserved addresses represent 1/256th of the total IPv6 address space. The lowest
address within each subnet prefix (the interface identifier set to all zeroes) is reserved as
the subnet-router anycast address. The 128 highest addresses within each /64 subnet
prefix are reserved for use as anycast addresses.

13.5 Introducing Basic IPv6


Comparison of IPv4 and IPv6 Headers
The IPv6 header differs significantly from the IPv4 header in several ways.
The figure illustrates the IPv4 header format.

The IPv4 header contains 12 fields. Following these fields are an Options field of variable
length that the figure shows in yellow and a padding field followed by the data portion,
usually the transport layer segment. The basic IPv4 header has a size of 20 octets. The
Options field increases the size of the IPv4 header.
Of the 12 IPv4 header fields, 6 are removed in IPv6; these fields are shown in green in the
figure. The main reasons for removing these fields in IPv6 are as follows:
o The Internet Header Length field (shown as HD Len in the figure) was removed
because it is no longer required. Unlike the variable-length IPv4 header, the IPv6
header is fixed at 40 octets.
o Fragmentation is processed differently in IPv6 and does not need the related fields
in the basic IPv4 header. In IPv6, routers no longer process fragmentation. IPv6
hosts are responsible for path maximum transmission unit (MTU) discovery. If the
host needs to send data that exceeds the MTU, the host is responsible for
fragmentation (this process is recommended but not required). The related Flags
field option appears in the Fragmentation Extension Header in IPv6. This header is
attached only to a packet that is fragmented.
o The Header Checksum field at the IP layer was removed because most data link
layer technologies already perform checksum and error control. This change forces
formerly optional upper-layer checksums (such as UDP) to become mandatory.
The Options field is not present in IPv6. In IPv6, a chain of extension headers processes
any additional services. Examples of extension headers include Fragmentation,
Authentication Header, and Encapsulating Security Payload (ESP).
Most other fields were either unchanged or changed only slightly.
The figure illustrates the IPv6 header format.

The IPv6 header has 40 octets instead of 20 octets, as in IPv4. The IPv6 header has fewer
fields, and the header is aligned on 64-bit boundaries to enable fast processing by current
and next-generation processors. The Source and Destination address fields are four times
larger than in IPv4.
The IPv6 header contains eight fields:
1. Version: This 4-bit field contains the number 6, instead of the number 4 as in IPv4.
2. Traffic Class: This 8-bit field is similar to the type of service (ToS) field in IPv4. The
source node uses this field to mark the priority of outbound packets.
3. Flow Label: This new field has a length of 20 bits and is used to mark individual
traffic flows with unique values. Routers are expected to apply an identical quality
of service (QoS) treatment to each packet in a flow.
4. Payload Length: This field is like the Total Length field for IPv4, but because the
IPv6 base header is a fixed size, this field describes the length of the payload only,
not of the entire packet.
5. Next Header: The value of this field determines the type of information that
follows the basic IPv6 header.
6. Hop Limit: This field specifies the maximum number of hops that an IPv6 packet
can take. The initial hop limit value is set by the operating system (64 or 128 is
common, but the exact value depends on the operating system). Each IPv6 router decrements the hop
limit field along the path to the destination. An IPv6 packet is dropped when the
hop limit field reaches 0. The hop limit is designed to prevent packets from
circulating forever if there is a routing error. In normal routing, this limit should
never be reached.
7. Source Address: This field of 16 octets, or 128 bits, identifies the source of the
packet.
8. Destination Address: This field of 16 octets, or 128 bits, identifies the destination
of the packet.
The extension headers, if there are any, follow these eight fields. The number of extension
headers is not fixed, so the total length of the extension header chain is variable.
To further explore IPv6 header fields and their functions, see RFC 8200, Internet Protocol,
Version 6 (IPv6) Specification.
Connecting IPv6 and IPv4 Networks
Devices running different protocols - IPv4 and IPv6 - cannot communicate unless some
translation mechanism is implemented.
Three main options are available for transitioning to IPv6 from the existing IPv4 network
infrastructure: dual-stack network, tunneling, and translation. It is important to note that
the IPv4 and IPv6 devices cannot communicate with each other unless the translation is
configured.
In a dual-stack network, IPv4 and IPv6 are fully deployed across the infrastructure, so
configuration and routing protocols handle IPv4 and IPv6 addressing and adjacencies
separately.
Using the tunneling option, organizations build an overlay network that tunnels one
protocol over the other by encapsulating IPv6 packets within IPv4 packets over the IPv4
network, or IPv4 packets within IPv6 packets over the IPv6 network.
Translation facilitates communication between IPv6-only and IPv4-only hosts and
networks by performing IP header and address translation between the two address
families.
13.6 Introducing Basic IPv6
Internet Control Message Protocol Version 6
Internet Control Message Protocol Version 6 (ICMPv6) provides the same diagnostic
services as Internet Control Message Protocol Version 4 (ICMPv4), and it extends the
functionality for some specific IPv6 functions that did not exist in IPv4.

ICMPv6 enables nodes to perform diagnostic tests and report problems. Like ICMPv4,
ICMPv6 implements two kinds of messages—error messages (such as Destination
Unreachable, Packet Too Big, or Time Exceeded) and informational messages (such as
Echo Request and Echo Reply).
The ICMPv6 packet is identified as 58 in the Next Header field. Inside the ICMPv6 packet,
the Type field identifies the type of ICMP message. The Code field further details the
specifics of this type of message. The Data field contains information that is sent to the
receiver for diagnostics or information purposes.
ICMPv6 is used on-link for router solicitation and advertisement, for neighbor solicitation
and advertisement, and for the redirection of nodes to the best gateway.
Neighbor solicitation messages are sent on the local link when a node wants to determine
the data link layer address of another node on the same local link. After receiving the
neighbor solicitation message, the destination node replies by sending a neighbor
advertisement message. This message includes the data link layer address of the node
sending the neighbor advertisement message. Hosts send router Solicitation messages to
locate the routers on the local link, and routers respond with router advertisements which
enable autoconfiguration of the hosts.

13.7 Introducing Basic IPv6


Neighbor Discovery
Neighbor discovery uses ICMPv6 neighbor solicitation and neighbor advertisement
messages. The figure depicts the neighbor discovery process, where host A wants to
communicate with host B using IPv6. Since it does not know the data link layer address
(MAC address) of host B, it sends a neighbor solicitation message, and host B replies with
a neighbor advertisement message.

Neighbor discovery is a process that enables these functions:


o Determining the data link layer address of a neighbor on the same link, like
Address Resolution Protocol (ARP) does in IPv4
o Finding neighbor routers on a link
o Keeping track of neighbors
o Querying for duplicate addresses
The neighbor discovery process uses solicited-node multicast addresses.
Solicited-Node Multicast Address
The solicited-node address is a multicast address that has a link-local scope. All nodes
must join the solicited-node multicast group that corresponds to each of its unicast and
anycast addresses. The solicited-node address is composed of the ff02:0:0:0:0:1:ff00::/104
prefix, which is concatenated with the right-most 24 bits of the corresponding unicast or
anycast address.

The source node creates a solicited-node multicast address using the right-most 24 bits of
the IPv6 address of the destination node, and sends a Neighbor Solicitation message to
this multicast address. The corresponding node responds with its data link layer address in
a Neighbor Advertisement message.
Multicast Mapping over Ethernet
A packet destined to a solicited-node multicast address is put in a frame destined to an
associated multicast MAC address.
If an IPv6 address is known, then the associated IPv6 solicited-node multicast address is
known. The example in the figure gives the IPv6 address
2001:db8:1001:f:2c0:10ff:fe17:fc0f. The associated solicited-node multicast address is
ff02::1:ff17:fc0f.
If an IPv6 solicited-node multicast address is known, then the associated MAC address is
known, formed by concatenating the last 32 bits of the IPv6 solicited-node multicast
address to 33:33.
As the figure shows, the IPv6 solicited-node multicast address is ff02::1:ff17:fc0f. The
associated Ethernet MAC address is 33.33.ff.17.fc.0f.
Understand that the resulting MAC address is a virtual MAC address: It is not burned into
any Ethernet card. Depending on the IPv6 unicast address, which determines the IPv6
solicited-node multicast address, an Ethernet card may be instructed to listen to any of
the 2^24 possible virtual MAC addresses that begin with 33.33.ff. In IPv6, Ethernet cards
often listen to multiple virtual multicast MAC addresses and their own burned-in unicast
MAC addresses.
A solicited-node multicast is more efficient than an Ethernet broadcast used by IPv4 ARP.
With ARP, all nodes receive and must therefore process the broadcast requests. By using
IPv6 solicited-node multicast addresses, fewer devices receive the request. Therefore
fewer frames need to be passed to an upper layer to determine whether they are
intended for that specific host.

13.8 Introducing Basic IPv6


IPv6 Address Allocation
Interface identifiers in IPv6 addresses are used to identify interfaces on a link. They can
also be thought of as the "host portion" of an IPv6 address. Interface identifiers need to
be unique on a specific link. Interface IDs are typically 64 bits and can be configured in
multiple ways.
There are several ways to assign an IPv6 address to a device:
o Static assignment using a manual interface ID: One way to statically assign an IPv6
address to a device is to manually assign both the prefix (network) and interface ID
(host) portions of the IPv6 address. To configure an IPv6 address on a Cisco router
interface and enable IPv6 processing on that interface, use the ipv6 address ipv6-
address/prefix-length command in the interface configuration mode. The
following example shows how to statically configure a global unicast address and a
link-local address on a router's interface.
o Static assignment using an EUI-64 interface ID: Another way to statically assign an
IPv6 address is to configure the prefix (network) portion of the IPv6 address and
derive the interface ID (host) portion from the MAC address of the device, which is
known as the EUI-64 interface ID.
To configure an IPv6 address for an interface and enable IPv6 processing on the interface
using an EUI-64 interface ID in the low order 64 bits of the address (host), use the ipv6
address ipv6-prefix/prefix-length eui-64 command in the interface configuration mode.
The following example shows how to statically assign an IPv6 address on a router's
interface using an EUI-64 interface ID.
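A minimal configuration sketch covering both static methods described above (the router name, interfaces, and prefixes here are hypothetical) might look like this:
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ipv6 address 2001:db8:1:1::1/64
R1(config-if)# ipv6 address fe80::1 link-local
R1(config-if)# exit
R1(config)# interface GigabitEthernet0/1
R1(config-if)# ipv6 address 2001:db8:1:2::/64 eui-64
The first interface receives a manually defined global unicast address and link-local address; the second derives its interface ID from the interface MAC address using the EUI-64 method.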

Note: Static assignment using an EUI-64 interface ID is used in Cisco IOS Software but not
in all operating systems. For example, Windows operating systems take advantage of the
privacy extensions defined in RFC 4941, which allow the IPv6 address interface identifier
to be generated randomly.
o Stateless Address Autoconfiguration (SLAAC): As the name implies,
autoconfiguration is a mechanism that automatically configures the IPv6 address
of a node. SLAAC means that the client is assigned their own address based on the
prefix being advertised on their connected interface. As defined in RFC 4862, the
autoconfiguration process includes generating a link-local address, generating
global addresses through SLAAC, and the duplicate address detection procedure to
verify the uniqueness of the addresses on a link. Some clients may choose to use
EUI-64 or a randomized value for the Interface ID. SLAAC uses neighbor discovery
mechanisms to find routers and dynamically assign IPv6 addresses based on the
prefix advertised by the routers. The autoconfiguration mechanism was introduced
to enable plug-and-play networking of devices to help reduce administration
overhead.
o Stateful DHCPv6: DHCP for IPv6 enables DHCP servers to pass configuration
parameters, such as IPv6 network addresses, to IPv6 nodes. It offers the capability
of automatic allocation of reusable network addresses and additional
configuration flexibility. Stateful DHCP means that the DHCP server is responsible
for assigning the IPv6 address to the client. The DHCP server keeps a record of all
clients and the IPv6 address assigned to them.
o Stateless DHCPv6: Stateless DHCP works in combination with SLAAC. The device
gets its IPv6 address and default gateway using SLAAC. The device then sends a
query to a DHCPv6 server for other information such as domain names, DNS
servers, and other client relevant information. This is termed stateless DHCPv6
because the server does not track IPv6 address bindings per client.
IPv6 is supported by DNS record types that are used in the DNS name-to-address and
address-to-name lookup processes. IPv6 also supports the reverse mapping of IPv6
addresses to DNS names. The Dynamic DNS
support for the Cisco IOS Software feature enables Cisco IOS software devices to perform
Dynamic Domain Name System (DDNS) updates to ensure that an IPv6 host DNS name is
correctly associated with its IPv6 address.
Router Advertisements
Routers periodically send router advertisements on all their configured interfaces. The
router sends a router advertisement to the all-nodes multicast address, ff02::1, to all IPv6
nodes in the same link.
This figure depicts the router advertisements sent by the router.
Router advertisement packet features include the following:
o ICMP type: 134
o Source: Router link-local address
o Destination: ff02::1 (all-nodes multicast address)
o Data: Options, prefix, lifetime, autoconfiguration flag

Note: The default gateway is received by the hosts only through router advertisement; the
concept of DHCP in IPv6 has changed from IPv4, and the DHCP server no longer supplies
the default gateway.
Here are examples of the information that the message might contain:
o Prefixes that can be used on the link: This information enables stateless
autoconfiguration of the hosts. These prefixes must be /64 for stateless
autoconfiguration.
o Lifetime of the prefixes: The default valid lifetime is thirty days, and the default
preferred lifetime is seven days.
o Flags: Flags indicate the kind of autoconfiguration that the hosts can perform.
Unlike IPv4, the router advertisement message suggests to the host how to obtain
its addressing dynamically. There are three options:
o SLAAC
o SLAAC and stateless DHCPv6
o Stateful DHCPv6
o Default preference field: Provides coarse preference metric (low, medium, or high)
for default devices. For example, two devices on a link may provide equivalent but
not equal-cost routing, and the policy may dictate that one of the devices is
preferred.
o Other types of information for hosts: This information can include the default
MTU and hop count.
By sending prefixes, router advertisements allow host autoconfiguration. You can
configure other advertisement timing and other parameters on routers.
Router Solicitation
A router sends router advertisements every 200 seconds or immediately after a router
solicitation. Router solicitations ask routers that are connected to the local link to send an
immediate router advertisement so that the host can receive the autoconfiguration
information without waiting for the next scheduled router advertisement.

The router solicitation message is defined as follows:


o The ICMP type is 133.
o The source address is usually the unspecified address. (The unspecified address can be
used because the router advertisement is not sent back as a unicast but as an all-nodes
multicast, so the source address of the router solicitation is not important.) The source
address can also be the link-local address of the device.
o The destination address is the all-routers multicast address (ff02::2) with the link-
local scope.
When a router sends an answer to a router solicitation, the destination address of the
router advertisement is the all-nodes multicast (ff02::1). The router could be configured to
send solicited router advertisements as a unicast.
A host should send a router solicitation only at the host boot time and only three times.
This practice avoids flooding of router solicitation packets if there is no router on the local
network.
Configuring Stateless Autoconfiguration
The ipv6 address autoconfig command enables stateless autoconfiguration on routers on
an interface-by-interface basis.
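For example (the router name and interface are hypothetical):
R2(config)# interface GigabitEthernet0/1
R2(config-if)# ipv6 address autoconfig
The interface then builds its global unicast address from the prefix learned in router advertisements on that link.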

13.9 Introducing Basic IPv6


Verification of End-To-End IPv6 Connectivity
You can use several verification tools to verify end-to-end IPv6 connectivity:
o ping: A successful ping means that the device endpoints can communicate. This
result does not mean that there are no problems. It simply proves that the basic
IPv6 connectivity is working.
o traceroute: The traceroute results can help you determine how far along the path
data can successfully travel. Knowing at what point the data fails can help you
determine the location of the issue. Cisco devices use the UDP protocol when
running traceroute. The Windows operating system uses ICMP when running the
similar command tracert.
o Telnet: Used to test the transport layer connectivity for any TCP port over IPv6.
In the following scenario, PC1 wants to access applications on the server. The figure shows
the desirable path.

You can use the ping utility to test end-to-end IPv6 connectivity by providing the IPv6
address as the destination address. The utility recognizes the IPv6 address when one is
provided and uses IPv6 as a protocol to test connectivity.

Use the ping utility on the Windows PC to test IPv6 connectivity:
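For example, from the Windows command prompt (the server address 2001:db8:0:2::10 is hypothetical):
C:\> ping 2001:db8:0:2::10
A series of Reply from 2001:db8:0:2::10 lines indicates that basic IPv6 connectivity is working.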


You can also use the ping utility on the router to test IPv6 connectivity:
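A similar sketch from the router CLI, again with a hypothetical destination address:
R1# ping 2001:db8:0:2::10
Exclamation marks in the output and a 100 percent success rate indicate that the echo requests were answered.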

Traceroute is a utility that allows observation of the path between two hosts and supports
IPv6. Use the traceroute Cisco IOS command or tracert Windows command, followed by
the IPv6 destination address, to observe the path between two hosts. The trace generates
a list of IPv6 hops that are successfully reached along the path. This list provides important
verification and troubleshooting information.
The tracert utility on the Windows PC allows you to observe the IPv6 path:

You can also use the traceroute utility on the router to observe the IPv6 path:

Similar to IPv4, you can use Telnet to test end-to-end transport layer connectivity over
IPv6 using the Telnet command from a PC, router, or a switch. When you provide the IPv6
destination address, the protocol stack determines that the IPv6 protocol has to be used.
If you omit the port number, the client will connect to port 23. You can specify a specific
port number on the client and connect to any TCP port that you want to test.
Although Telnet can be used as a troubleshooting tool to check transport layer
functionality, it should not be used in a production environment to administer network
devices. Nowadays, a secure access method is used for that purpose using Secure Shell
protocol (SSH).
You can use the telnet command to test the transport layer connectivity for any TCP port
over IPv6.
Use Telnet to connect to the standard Telnet TCP port from a Windows PC.

Use Telnet to connect to the TCP port 80, which tests the availability of the HTTP service.
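For example, assuming the same hypothetical server address as before:
C:\> telnet 2001:db8:0:2::10
C:\> telnet 2001:db8:0:2::10 80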

In the example, you can see two connections from a PC to the Server. The first one
connects to port 23 and tests Telnet over IPv6. The second connects to port 80 and tests
HTTP over IPv6.
The telnet command in the output tests if HTTP, which listens on TCP port 80, is open.
The telnet command can also be used from a Cisco router. In this case, to leave the
established connection and return to the router prompt, you enter the Cisco escape
sequence, Ctrl+Shift+6 followed by x.
When troubleshooting end-to-end connectivity, verifying mappings between destination
IP addresses and MAC addresses on individual segments is useful. In IPv4, ARP provides
this functionality. In IPv6, the neighbor discovery process and ICMPv6 replace the ARP
functionality. The neighbor discovery table caches IPv6 addresses and their resolved MAC
addresses. As shown in the figure, the netsh interface ipv6 show neighbors Windows
command lists all devices that are currently in the IPv6 neighbor discovery table cache.
The information that is displayed for each device includes the IPv6 address, physical
(MAC) address, and the neighbor cache state, similar to an ARP table in IPv4. By examining
the neighbor discovery table, you can verify that the destination IPv6 addresses map to
the correct Ethernet addresses.
Neighbor discovery table on a PC

Neighbor discovery table on a router
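A minimal sketch of the two commands (no sample output is shown here because the cached entries depend on the topology):
C:\> netsh interface ipv6 show neighbors
Router# show ipv6 neighbors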

The figure also shows an example of the neighbor discovery table on the Cisco IOS router,
using the show ipv6 neighbors command. The table includes the IPv6 address of the
neighbor, age in minutes, the MAC address, the state, and the interface through which the
neighbor is reachable. The states are explained in the table:
You can use other commands to verify that IPv6 is configured correctly on Cisco routers.
o Verify that IPv6 routing has been enabled on the router. In the show running-
config command output, look for the ipv6 unicast-routing command.
o Verify that the interfaces have been configured with the correct IPv6 addresses.
You can use the show ipv6 interface command to display the statuses and
configurations for all IPv6 interfaces.
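For example, a sketch of the commands that correspond to these two checks (the brief keyword and the interface name are shown only as examples):
Router# show running-config | include ipv6 unicast-routing
Router# show ipv6 interface brief
Router# show ipv6 interface GigabitEthernet0/0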

14.1 Configuring Static Routing


Introduction
Routers preserve knowledge of the network topology and forward packets based on
destinations, choosing the best path across the topology. This knowledge of the topology
and changes in the topology can be maintained statically or dynamically. In large
enterprise campus environments, you would typically use one of the available routing
protocols that calculate route information using dynamic routing algorithms.
Static routes, which define explicit paths between two routers, cannot be automatically
updated. You must manually reconfigure static routes when network changes occur.
Therefore, you should use static routes in environments where network traffic is
predictable and where the network design is simple, for example, in networks with only one exit point, such as branches, smaller remote sites, Small Office Home Office (SOHO) sites, or stub networks.
You should not use static routes in large, constantly changing networks because static
routes cannot react to network changes fast enough. Most networks use dynamic routes
to communicate between routers but might have one or two static routes configured for
special cases. Static routes are also useful for specifying a gateway of last resort (a default
router).

As a network engineer, you will encounter various challenges concerning static routes:
o Explaining the difference between static and dynamic routing.
o Configuring and verifying both static and default static routes.
o Fixing problems with any of the static or static default routes configured on the
routers.

14.2 Configuring Static Routing


Routing Operation
Routing is the process of selecting a path to forward data that originated from one
network and is destined for a different network. Routers gather and maintain routing
information to enable the transmission and receipt of such data packets.
Conceptually, routing information takes the form of entries in a routing table, with one
entry for each identified route. You can manually configure the entries in the routing table
or the router can use a routing protocol to create and maintain the routing table
dynamically to accommodate network changes when they occur.
A router must perform the following actions to route data:
o Identify the destination of the packet: Determine the destination network address
of the packet that needs to be routed by using the subnet mask.
o Identify the sources of routing information: Determine from which sources a
router can learn paths to network destinations.
o Identify routes: Determine the possible routes to the intended destination.
o Select routes: Select the best path to the intended destination.
o Maintain and verify routing information: Update known routes and the selected
route according to network conditions.
The routing information that a router learns is offered to the routing table. The router
relies on this table to tell it which interfaces to use when forwarding packets. The figure
shows that the router on the left uses interface Serial0/0/0 to get to the 172.16.1.0/24
subnet.

If the destination network is directly connected—that is, if there is an interface on the router that belongs to that network—the router already knows which interface to use
when forwarding packets. If destination networks are not directly attached, the router
must learn which route to use when forwarding packets.
The destination information can be learned in two ways:
o You can enter destination network information manually, also known as a static
route.
o Routers can learn destination network information dynamically through a routing
protocol process that is running on the router.

14.3 Configuring Static Routing


Static and Dynamic Routing Comparison
There are two ways that a router can learn where to forward packets to destination
networks that are not directly connected:
1. Static routing: The router learns routes when an administrator manually
configures the static route. The administrator must manually update this static
route entry whenever an internetwork topology change requires an update. Static
routes are user-defined routes that specify the outgoing interface on the router
when packets should be sent to a specific destination. These administrator-defined
routes allow a very precise control over the routing behavior of the IP
internetwork.
2. Dynamic routing: The router dynamically learns routes after an administrator
configures a routing protocol that determines routes to remote networks. Unlike
static routes, after the network administrator enables dynamic routing, the routing
process automatically updates the routing table whenever the device receives new
topology information. The router learns and maintains routes to the remote
destinations by exchanging routing updates with other routers in the internetwork.
The following are characteristics of static and dynamic routes:
o Static routes
o A network administrator manually enters static routes into the router.
o A network topology change requires a manual update to the route.
o Routing behavior can be precisely controlled.
o Dynamic routes
o A network routing protocol automatically adjusts dynamic routes when the
topology or traffic changes.
o Routers learn and maintain routes to the remote destinations by
exchanging routing updates.
o Routers discover new networks or other changes in the topology by sharing
routing table information.

14.4 Configuring Static Routing


When to Use Static Routing
Static routes are best suited for small networks, such as LANs, where routes rarely change. If routes change, you need to update them manually to reflect the new data transmission paths.
Use static routes in these situations:
o In a small network that requires only simple routing
o In a hub-and-spoke network topology
o When you want to create a quick ad hoc route
o Common use is a default static route
Do not use static routes in these situations:
o In a large network
o When the network is expected to scale
Some of the advantages of using static routes include:
o Conserving router resources: Static routing does not consume network bandwidth
and the CPU resources of the router. When you use a routing protocol, the traffic
between routers adds some overhead as the routers exchange routing updates
about remote networks. Depending on the size of the network, a router requires
some CPU cycles to compute the best way to remote networks.
o Simple to configure in a small network: Static routes are commonly used in small
networks that have few routers. Many small networks are designed as stub
networks (a network that is accessed by a single link); for these types of networks,
static routes are the most appropriate solution. Also, most of these networks are
designed in a hub-and-spoke topology, where you can use default routes for
branches that are pointing to the hub router, which is the gateway to other
networks.
o Security: Sometimes, you may want to define static routes to control the data
transmission paths that are used by your data. This option may be useful in highly
secure environments.
Here are some disadvantages of using static routes:
o Scalability: Static routing might be appropriate for networks that have fewer than
four or five routers. Dynamic routing is more appropriate for large networks to
reduce the probability of errors in a routing configuration. If the network is planned, designed, and implemented correctly, it can be expanded easily to meet future demands. Using dynamic routing instead of static routing simplifies that expansion and avoids a significant redesign of the existing network infrastructure.
o Accuracy: If your network changes and you do not update the static routes, your
router does not have accurate knowledge of your network. Not having accurate
knowledge of your network can result in lost or delayed data transmissions.
o High maintenance: When the number of routers increases, the number of static
routes also increases. In large networks, adding even one router with only one new
network means that in addition to configuring the newly added router with static
routes to other networks, you must configure all existing routers in the network
with static routes to the new network.
14.5 Configuring Static Routing
IPv4 Static Route Configuration
Static routes are commonly used when you are routing from a network to a stub network.
Static routes can also be useful for specifying a "gateway of last resort" to which all
packets with an unknown destination address are sent.
Configure unidirectional static routes to and from a stub network to allow communication
to occur.

When configuring a static route, follow these steps as illustrated in the example in the
figure for router A:
o Specify an IPv4 destination network (172.16.1.0 255.255.255.0).
o Use the IPv4 address of the next-hop router (172.16.2.1).
o Or, use the outbound interface of the local router (Serial0/0/0).
Note: Using an egress interface in a static route declares that the static network is "directly connected" to that egress interface. This works without issues only on point-to-point links, such as serial interfaces running High-Level Data Link Control (HDLC) or PPP. When the egress interface used in the static route is a multiaccess interface such as Ethernet (or a serial interface running Frame Relay or Asynchronous Transfer Mode (ATM)), the results can be complicated and possibly disastrous. It is highly recommended to configure static routes using only the next-hop IPv4 address. Static routes defined using only egress interfaces might cause uncertain or unpredictable behavior in the network and should not be used unless absolutely necessary.
Static route pointing to the next-hop IPv4 address
In the figure, router A is configured with a static route to reach the 172.16.1.0/24 subnet
via the next hop IPv4 address 172.16.2.1 using the ip route command.
Alternatively, you can configure the static route by pointing to the exit interface instead of
using the next-hop IPv4 address.
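Based on this example, a sketch of the two equivalent configurations on router A (the next-hop form is the recommended one):
RouterA(config)# ip route 172.16.1.0 255.255.255.0 172.16.2.1
RouterA(config)# ip route 172.16.1.0 255.255.255.0 Serial0/0/0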

The table lists the ip route command parameters for this example.

In the figure, you would also need to configure router B with a static or default route to
reach the networks behind router A via the serial interface of router B.
Note: A static route is configured for connectivity to remote networks that are not directly
connected to your router. For end-to-end connectivity, you must configure a static route
in both directions.
A host route is a static route for a single host. A host route has a subnet mask of
255.255.255.255.
A floating static route is a static route with administrative distance greater than 1. By
default, static routes have a very low administrative distance of 1, which means that your
router will prefer a static route over any routes that were learned through a dynamic
routing protocol. If you want to use a static route as a backup route (so called floating
static route), you will have to change its administrative distance.
To change the administrative distance of a static route, add the administrative distance parameter at the end of the ip route command. For example, to change the administrative distance to 10, add the number 10 at the end of the ip route configuration.
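For example, a sketch of a floating static route for the same destination, with the administrative distance changed to 10:
RouterA(config)# ip route 172.16.1.0 255.255.255.0 172.16.2.1 10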

14.6 Configuring Static Routing


Default Routes
Use a default route when the route from a source to a destination is not known or when it
is not feasible for the router to maintain many routes in its routing table.

A default static route is a route that matches the destination address of all packets that
don’t match any other more specific routes in the routing table. Default static routes are
used in these instances:
o When no other routes in the routing table match the destination IP address of the
packet or when a more specific match does not exist. A common use for a default
static route is to connect the edge router of a company to an ISP network.
o When a router has only one other router to which it is connected. This condition is
known as a stub router.
The syntax for a default static route is like the one that is used for any other static route,
except that the network address is 0.0.0.0 and the subnet mask is 0.0.0.0.
Or

The 0.0.0.0 network address and 0.0.0.0 subnet mask are called a quad-zero route.
In the figure, router B is configured to forward to router A all packets for which there is no
route for the destination network in the router B routing table.
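As a sketch, assuming that the serial interface of router A uses the IPv4 address 172.16.2.2 (an assumption, because the figure is not reproduced here), the default route on router B can be configured with either a next-hop address or an exit interface:
RouterB(config)# ip route 0.0.0.0 0.0.0.0 172.16.2.2
RouterB(config)# ip route 0.0.0.0 0.0.0.0 Serial0/0/1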
This table lists the ip route command parameters for this example.

14.7 Configuring Static Routing


Verifying Static and Default Route Configuration
Most routing tables contain a combination of directly connected routes, static routes, and
dynamic routes. However, the routing table must first contain the directly connected
networks that are used to access the remote networks before any static or dynamic
routing can be used.
Verifying Static Route Configuration
To verify static routes in the routing table, examine the routing table with the show ip
route command.
o The static route includes the network address, subnet mask (in prefix form), and
IPv4 address of the next-hop router or exit interface.
o The static route is denoted with the code "S" in the routing table.
Routing tables must contain directly connected networks that are used to connect remote
networks before static or dynamic routing can be used. This means that a route will not
appear in the routing table of the router if the exit interface used for that specific route is
disabled (administratively down) or does not have an IP address assigned. The interface
state needs to be up/up.
A static route includes the network address and prefix of the remote network, along with
the IPv4 address of the next-hop router or exit interface. Static routes are denoted with
the code "S" in the routing table, as shown in the figure.
If you configure a static route to use an egress interface instead of a next-hop IPv4
address, the routing table entry is changed accordingly.
For example, if this default route pointing to the exit interface (Serial0/0/1) is configured
on router B:
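A sketch of that command:
RouterB(config)# ip route 0.0.0.0 0.0.0.0 Serial0/0/1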

The corresponding routing table entry of the static route in the routing table of router B is:
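A sketch of how that entry typically appears:
S*   0.0.0.0/0 is directly connected, Serial0/0/1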
Note that the entry in the routing table no longer refers to the next-hop IPv4 address but refers directly to the exit interface. This exit interface is the same one to which the static route was resolved when it used the next-hop IPv4 address. When the routing table process matches a packet against this static route, it can resolve the route to an exit interface in a single lookup.
Note: The routing table displays this static route as directly connected. It is important to understand that this does not mean that the route is a directly connected network or a directly connected route. It is still a static route with the "S" code.
Verifying Default Route Configuration
To verify the default route configuration, examine the routing table on router B:
The example in the figure shows the router B routing table after configuration of the
default route.
The asterisk (*) indicates that the route is a candidate default route.

14.9 Configuring Static Routing


Configuring IPv6 Static Routes
Routing for IPv6 is not enabled by default on Cisco routers. Therefore, you need to enable
IPv6 routing by using the ipv6 unicast-routing command in global configuration mode
before you start configuring IPv6 static routes. The ipv6 unicast-routing command is required for IPv6 packet forwarding and for configuring routing protocols, but it is not required to configure IPv6 addresses on interfaces.
There is an IPv6-specific requirement per RFC 2461 that a router must be able to
determine the link-local address of each of its neighboring routers to ensure that the
target address of a redirect message identifies the neighbor router by its link-local
address. This requirement means that using a global unicast address as a next-hop address
with IPv6 routing is not recommended.
Configuring a static route for IPv6 is almost the same as it is in IPv4. In IPv4, the next-hop
IPv4 address or the exit interface can be specified in the static route configuration,
although using a next-hop is highly recommended and using an exit interface alone should
be avoided. The same approach applies in IPv6, but the next-hop IPv6 address in IPv6 can
either be a link-local address or a global address. If a link-local address is used then the
exit interface must also be specified.
Static routes are used in IPv6 in the same situations as they are used in IPv4. They can
point to specific networks or hosts, or default static routes can be used for identifying a
"gateway of last resort". In addition, when redundancy to specific networks is required,
you can configure a backup route (floating static route) with higher administrative
distance than the primary route.
This example shows how to configure an IPv6 static route using different methods (link-
local or a global address):
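A sketch of the two methods, using a hypothetical destination prefix of 2001:db8:1::/64; the exit interface GigabitEthernet0/0 and the link-local next hop fe80::2 in the first command are also assumptions, while the global next hop 2001:db8:feed::1 is the one referenced below:
Router(config)# ipv6 route 2001:db8:1::/64 GigabitEthernet0/0 fe80::2
Router(config)# ipv6 route 2001:db8:1::/64 2001:db8:feed::1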

The first static route uses a link-local next hop address, specified with the fe80 prefix.
When using a link-local address as the next hop, you must also use an exit interface
because this link-local address could be used on any interface. The second static route
points to the next hop global IPv6 address 2001:0db8:feed::1.
Note: In an IPv6 address the alphanumeric characters used in hexadecimal format are not
case sensitive; therefore, uppercase and lowercase characters are equivalent. Although
Cisco IOS accepts both lowercase and uppercase representation of an IPv6 address, RFC
5952 recommends that IPv6 addresses be represented in lowercase to ensure
compatibility with case-sensitive applications.
IPv6 Static Route Configuration Example
Consider the next example to understand IPv6 static route configuration.

In this example, an IPv6 static network route is configured on the HQ router, pointing to
the Branch router in order to reach the Branch router’s LAN. An IPv6 default route is
configured on the Branch router, pointing to the HQ router in order to reach all other
networks:
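As a sketch, assuming that the Branch LAN uses the prefix 2001:db8:b::/64 and that the HQ-Branch link uses 2001:db8:ab::/64 with HQ at ::1 and Branch at ::2 (all hypothetical values), the configurations would be:
HQ(config)# ipv6 route 2001:db8:b::/64 2001:db8:ab::2
Branch(config)# ipv6 route ::/0 2001:db8:ab::1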
The table shows IPv6 static and default route commands:

Verifying IPv6 Static Route Configuration


Use the show ipv6 route static command to verify only the IPv6 static route configuration
in the routing table.
Verify the static IPv6 route on the HQ router.

Verify the IPv6 static route on the Branch router.


Alternatively, you can verify the static IPv6 route using the show ipv6 static command. For
example, here is the static route on the HQ router:

The table shows IPv6 static route verification commands:

You can also verify that the default IPv6 route on the Branch router is working by issuing
the ping command to the server:
15.1 Implementing VLANs and Trunks
Introduction
A poorly designed enterprise campus network with a large number of devices in the same LAN segment typically suffers from degraded performance because of the large broadcast and failure domain, limited security control, and so on. A router could be used to solve the issue because it blocks broadcasts, but routers are typically slower, more expensive, and often do not fit the design of an enterprise campus network.
A common solution is VLANs, which segment a network on a per-port basis and can span multiple switches. This allows you to logically segment a switched network on an
organizational basis by functions, project teams, or applications rather than on a physical
or geographical basis. For example, all workstations and servers used by a particular
workgroup team can be connected to the same VLAN, regardless of their physical
connections to the network or the fact that they might be intermingled with other teams.
Reconfiguration of the network can be done through software rather than by physically
unplugging and moving devices or wires.
In enterprise environments, switches often use links that carry data from multiple VLANs
and allow VLANs to be extended across an entire network. These links are called trunks.

As a networking engineer, you need to gain skills in the area of VLANs, such as:
o Identifying the common issues in a poorly designed local network.
o Familiarizing yourself with the operation of VLANs.
o Implementing correct steps to implement and verify VLANs and trunks.
15.2 Implementing VLANs and Trunks
VLAN Introduction
To understand VLANs, you need a solid understanding of LANs. A LAN is a group of devices
that share a common broadcast domain. When a device on the LAN sends broadcast
messages, the switch floods the broadcast messages (as well as unknown unicast) to all
ports except the incoming port. Therefore, all other devices on the LAN receive them. You
can think of a LAN and a broadcast domain as being basically the same thing. Without
VLANs, a switch considers all its interfaces to be in the same broadcast domain. In other
words, all connected devices are in the same LAN. With VLANs, a switch can put some
interfaces into one broadcast domain and some into another. The individual broadcast
domains that are created by the switch are called VLANs. A VLAN is a group of devices on
one or more LANs that are configured to communicate as if they were attached to the
same wire, when in fact they are located on a number of different LAN segments.

A VLAN allows a network administrator to create logical groups of network devices. These
devices act like they are in their own independent network, even if they share a common
infrastructure with other VLANs. Each VLAN is a separate Layer 2 broadcast domain which
is usually mapped to a unique IP subnet (Layer 3 broadcast domain). A VLAN can exist on a
single switch or span multiple switches. VLANs can include devices in a single building as
illustrated in the figure or multiple-building infrastructures.
Within the switched internetwork, VLANs provide segmentation and organizational
flexibility. You can design a VLAN structure that lets you group devices that are segmented
logically by functions, project teams, and applications without regard to the physical
location of the users. VLANs allow you to implement access and security policies for
particular groups of users. If a switch port is operating as an access port, it can be assigned
to only one VLAN, which adds a layer of security. Multiple ports can be assigned to each
VLAN. Ports in the same VLAN share broadcasts. Ports in different VLANs do not share
broadcasts. Containing broadcasts within a VLAN improves the overall performance of the
network.
If you want to carry traffic for multiple VLANs across multiple switches, you need a trunk
to connect each pair of switches. VLANs can also connect across WANs. It is important to
know that traffic cannot pass directly to another VLAN (between broadcast domains)
within the switch or between two switches. To interconnect two different VLANs, you
must use routers or Layer 3 switches. The process of forwarding network traffic from one
VLAN to another VLAN using a router is called inter-VLAN routing. Routers perform inter-
VLAN routing by either having a separate router interface for each VLAN, or by using a
trunk to carry traffic for all VLANs. The devices on the VLAN send traffic through the
router to reach other VLANs.
Usually, subnet numbers are chosen to reflect the VLANs with which they are associated. The
figure shows that VLAN 2 uses subnet 10.0.2.0/24, VLAN 3 uses 10.0.3.0/24, and VLAN 4
uses 10.0.4.0/24. In this example, the third octet clearly identifies the VLAN that the
device belongs to. The VLAN design must take into consideration the implementation of a
hierarchical, network-addressing scheme.
Cisco Catalyst Series switches have a factory default configuration in which various default
VLANs are preconfigured to support various media and protocol types. The default
Ethernet VLAN is VLAN 1, which contains all ports by default.
If you want to communicate with the Cisco Catalyst switch for management purposes
from a remote client that is on a different VLAN, which means it is on a different subnet,
then the switch must have an IP address and default-gateway configured. This IP address
must be in the management VLAN, which is by default VLAN 1.

15.3 Implementing VLANs and Trunks


Creating a VLAN
On Cisco Catalyst Series Switches, you can use the vlan global configuration command to
create a VLAN and enter the VLAN configuration mode. Use the no form of this command
to delete the VLAN. The example shows how to add VLAN 2 to the VLAN database and
how to name it "Sales."

Add VLAN 2 and name it "Sales".
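A minimal sketch of that configuration:
Switch(config)# vlan 2
Switch(config-vlan)# name Sales
Switch(config-vlan)# end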

The following table lists the VLAN ranges on Cisco Catalyst switches:

To add a VLAN to the VLAN database, use the vlan global configuration command by
entering a VID.
VLANs 1 and 1002–1005 are automatically created by the switch while the others have to
be created manually.
VLAN Trunking Protocol (VTP) is a Cisco proprietary Layer 2 messaging protocol that
maintains VLAN configuration consistency by managing the addition, deletion, and
renaming of VLANs on a networkwide basis. It reduces administration overhead in a
switched network. The switch supports VLANs in VTP client, server, and transparent
modes.
The configurations of VIDs 1 to 1005 are always saved in the VLAN database (vlan.dat file),
which is stored in flash memory. If the VTP mode is transparent, they are also stored in
the switch running configuration file, and you can save the configuration in the startup
configuration file.
In VTP versions 1 and 2, the switch must be in VTP transparent mode when you create
extended VLANs (VIDs 1006 to 4094). These VLANs are not stored in the VLAN database
but because VTP mode is transparent, they are stored in the switch running (and if saved
in startup) configuration file. However, extended-range VLANs created in VTP version 3
are stored in the VLAN database, and can be propagated by VTP. Thus, VTP version 3
supports extended VLANs creation and modification in server and transparent modes.
To create an Ethernet VLAN, you must specify at least a VLAN number. If you do not enter a name for the VLAN, the default is the word "VLAN" followed by the four-digit VLAN number. For example, VLAN0004 would be the default name for VLAN 4 if you don't specify a name.

15.4 Implementing VLANs and Trunks


Assigning a Port to a VLAN
The end device connected to the switch has no knowledge of a configured VLAN on the
switch. The configuration is only performed on the switch port. The end device has an IP
address and subnet mask that associates it with a subnet. This subnet then maps to the
VLAN that is configured on the switch port to which the end device is connected.
The commands that define the VLAN port membership mode and characteristics are the
following:
Note: In some other documentation, static-access ports may be referred to as untagged
ports, while the trunk ports may be referred to as tagged ports. Therefore, these two
terms may be used interchangeably.
Assigning a Port to a Data VLAN
When you connect a host to a switch port, you should associate the port with a VLAN in
accordance with the network design and the subnet that it belongs to. To associate a
device with a VLAN, assign the switch port to which the device connects to a single VLAN.
The switch port, therefore, becomes an access port.
After creating a VLAN, you can manually assign a port or many ports to this VLAN. A port
can belong to only one data VLAN at a time.
Note: VLAN 1 is the factory default VLAN. If you do not assign a VLAN to an access port,
VLAN 1 is assigned automatically.
The following example shows how you can assign the previously created VLAN 2 to the
FastEthernet0/3 interface.
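A sketch of that configuration:
Switch(config)# interface FastEthernet0/3
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 2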

Note: On some switches you must create the VLAN before assigning it to a port, or else no
traffic will flow.
The table lists the commands to use when assigning a port to a VLAN.
The following example shows how you use the interface range global configuration
command to enable FastEthernet interfaces 0/1 to 0/3 and assign them to VLAN 2:
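A sketch of that configuration:
Switch(config)# interface range FastEthernet0/1 - 3
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 2
Switch(config-if-range)# no shutdown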
The following example shows how you use the default interface global configuration
command to set the interface to factory defaults:
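For example, using FastEthernet0/3 as an assumed target interface:
Switch(config)# default interface FastEthernet0/3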

The table lists the commands to use when configuring a range of interfaces as well as to
set the interface to factory defaults.

Assigning a Port to a Voice VLAN


Usually, IP phones are placed next to a computer in the working environment. They use Ethernet and require the same network cables as computers. Hence, you can use two separate connections to the network: one for the computer and one for the IP phone.
Alternatively, you can connect the computer to an Ethernet port on the IP phone, and
then the connection from the IP phone to the network carries the traffic from both the
computer and the IP phone. This is enabled on some Cisco Catalyst switches with a unique
feature that is called voice VLAN; it lets you overlay a voice topology onto a data network.
You can segment phones into separate logical networks, even though the data and voice
infrastructure are physically the same.
With the IP phones in their own VLANs, network administrators can more easily identify
and troubleshoot network problems. Also, network administrators have the ability to
prioritize voice traffic over data traffic.
The voice VLAN feature allows voice traffic from the attached IP phone and data traffic
from an end station to be transmitted on different VLANs.
You create a voice VLAN in the same way as you create data VLAN, using the vlan global
configuration command. The following example shows how to create VLAN 3 and how to
assign this VLAN as a voice VLAN to the FastEthernet0/2 interface.

Add VLAN 3 and name it "telephony".

Assign interface FastEthernet0/2 to voice VLAN 3.
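A minimal sketch of both steps:
Switch(config)# vlan 3
Switch(config-vlan)# name telephony
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/2
Switch(config-if)# switchport voice vlan 3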

When an IP phone is connected to a switch port, this port should have a voice VLAN
associated with it. This process is done by assigning a single voice VLAN to the switch port
to which the phone is connected.

You can configure a data and voice VLAN on the same interface, as shown in this example:
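A sketch, reusing data VLAN 2 and voice VLAN 3 from the earlier examples:
Switch(config)# interface FastEthernet0/2
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 2
Switch(config-if)# switchport voice vlan 3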
Verifying VLANs
After you configure a VLAN, you should validate the parameters for that VLAN.
Use the show vlan command to display information on all configured VLANs. The
command displays configured VLANs, their names, and the ports on the switch that are
assigned to each VLAN. You can observe in the output all information about the VLANs.
To display information on all configured VLANs:
The example shows that VLAN 2 (data) and VLAN 3 (telephony) are created on the switch.
Both are active and are assigned to the FastEthernet0/2. All other interfaces are assigned
to the default VLAN—VLAN 1. Trunk ports that are connected to another device do not
appear in the output of the show vlan command.
Use the show vlan id vlan_number or show vlan name vlan-name command to display
information about a particular VLAN. The example shows the output of the show vlan
command for the "data" VLAN, which is VLAN 2.

On the other hand, you can use the show vlan brief command, which displays one line for
each VLAN with the VLAN name, status, and its ports. Connected trunk ports also do not
appear in the output of the show vlan brief command.

Dynamic Trunking Protocol (DTP) is used by Cisco switches to automatically negotiate whether an interface used for interconnection between two switches should be put into
access or trunk mode. When the interface is in trunk mode, DTP also negotiates trunk
encapsulation.
The DTP individual modes are:
o dynamic auto: the interface will form a trunk only if it receives DTP messages from the switch on the other side of the link. An interface configured in dynamic auto mode does not generate DTP messages and only listens for incoming DTP messages.
o dynamic desirable: the interface negotiates the mode automatically and actively tries to convert the link to a trunk link. An interface configured in dynamic desirable mode generates DTP messages and listens for incoming DTP messages. If the interface on the other switch is capable of forming a trunk, a trunk link will be formed.
Note: Interfaces on some switches are set to dynamic desirable by default and on other
switches they are set to dynamic auto by default.
The individual combinations of interface settings on the switches lead to the following results:

The best practice is to disable the autonegotiation and not use the dynamic auto and
dynamic desirable switch port modes. Instead, the best practice is to manually configure
the port mode as trunk on both sides. If you do not want the switch to negotiate at all, use
the switchport nonegotiate command (necessary only for trunk ports, as the static access
ports do not send DTP packets automatically.)
To verify the VLAN configuration of an interface, as well as its administrative and operational mode, use the show interfaces interface-id switchport command.
You can also use the show mac address-table command to verify which MAC addresses
belong to which port and VLAN. You can also see the MAC addresses that have been
learned on a particular VLAN with the show mac address-table vlan vlan-id command.

Note: If the MAC address has not yet been learned on a particular VLAN and port, then
you will see no entry in the MAC address table. Also remember that if a MAC address remains inactive for a specified number of seconds, it is removed from the MAC address table. The default aging time is 300 seconds.
Each port on a switch belongs to a VLAN. If the VLAN to which the port belongs is deleted,
the port becomes inactive. Also, a port becomes inactive if it is assigned to a nonexistent
VLAN. All inactive ports are unable to communicate with the rest of the network.
As shown in the following example, you can use the show interfaces interface-id switchport command to check whether the port is inactive. If the port is inactive, it will not be
functional until you create the missing VLAN using the vlan vlan_id command or until you
assign the port to a valid VLAN.
15.5 Implementing VLANs and Trunks
Trunking with 802.1Q
Without trunking, running many VLANs between switches would require the same
number of interconnecting links.

If every port belongs to one VLAN and you have several VLANs that are configured on
switches, then interconnecting them requires one physical cable per VLAN. When the
number of VLANs increases, the number of required interconnecting links also increases.
Ports are then used for interswitch connectivity instead of attaching end devices.
Instead, you can use one connection configured as a trunk:
Characteristics of Trunking with 802.1Q include the following:
o Combining many VLANs on the same port is called trunking.
o A trunk allows the transport of frames from different VLANs.
o Each frame has a tag that specifies the VLAN that it belongs to.
o The receiving device forwards the frames to the corresponding VLAN based on the
tag information.

A trunk is a point-to-point link between two network devices, such as switches, routers, and servers. Ethernet trunks carry the traffic of multiple VLANs over a single link and allow
you to extend the VLANs across an entire network. A trunk does not belong to a specific
VLAN. Rather, it is a conduit for VLANs between devices. By default, all configured VLANs
are carried over a trunk interface on a Cisco Catalyst switch.
Note: A trunk could also be used between a network device and a server or another
device that is equipped with an appropriate trunk capable network interface card (NIC).
VLAN Tagging
If your network includes VLANs that span multiple interconnected switches, the switches
must use VLAN trunking on the connections between them. Switches use a process called
VLAN tagging in which the sending switch adds another header to the frame before
sending it over the trunk. This extra header is called a tag and includes a VID field so that
the sending switch can list the VLAN ID and the receiving switch can identify the VLAN that
each frame belongs to, as illustrated in the figure.

Trunking allows switches to pass frames from multiple VLANs over a single physical
connection. For example, the figure shows Switch 1 receiving a broadcast frame on the
Fa0/1 interface, which is a member of VLAN 1. In a broadcast, the frame must be
forwarded to all ports in VLAN 1. Because there are ports on Switch 2 that are members of
the VLAN 1 switch, the frame must be forwarded to Switch 2. Before forwarding the
frame, Switch 1 adds a header that identifies the frame as belonging to VLAN 1. This
header tells Switch 2 that the frame should be forwarded to the VLAN 1 ports. Switch 2
removes the header and then forwards the frame for all ports that are part of VLAN 1.
As another example, the device on the Switch 1 Fa0/5 interface sends a broadcast. Switch
1 sends the broadcast out of port Fa0/6 (because this port is in VLAN 2) and out Fa0/23
(because it is a trunk, meaning that it supports multiple VLANs). Switch 1 adds a trunking
header to the frame, listing a VLAN ID of 2. Switch 2 strips off the trunking header, and
because the frame is part of VLAN 2, Switch 2 knows to forward the frame out of only
ports Fa0/5 and Fa0/6 and not ports Fa0/1 and Fa0/2.
IEEE 802.1Q
Cisco Catalyst switches support the IEEE 802.1Q trunking protocol.
When a switch puts an Ethernet frame on a trunk, it needs to add a VLAN tag with
information about the VLAN to which the frame belongs. The switch does so by using the
802.1Q encapsulation header. IEEE 802.1Q uses an internal tagging mechanism that
inserts an extra 4-byte tag field into the original Ethernet frame between the Source
Address and Type or Length fields. As a result, the frame still has the original source and
destination MAC addresses. Also, because the original header has been expanded, 802.1Q
encapsulation forces a recalculation of the original frame check sequence (FCS) field in the
Ethernet trailer, because the FCS is based on the content of the entire frame. It is the
responsibility of the receiving Ethernet switch to look at the 4-byte tag field and
determine where to deliver the frame.
The figure shows the 802.1Q header and framing of the revised Ethernet header.
Here are tag fields:
o Type or tag protocol identifier is set to a value of 0x8100 to identify the frame as
an IEEE 802.1Q-tagged frame.
o Priority indicates the frame priority level that can be used for the prioritization of
traffic.
o Canonical Format Identifier (CFI) is a 1-bit identifier that enables Token Ring
frames to be carried across Ethernet links.
o VLAN ID uniquely identifies the VLAN to which the frame belongs.
On an 802.1Q trunk port, there is one VLAN, called the native VLAN, which is untagged. By
default, the native VLAN is VLAN 1, which means that the switch does not insert an extra
802.1Q tag inside an Ethernet frame. When the switch on the receiving side receives the
Ethernet frame that does not have an 802.1Q tag, it knows that the frame belongs to the
native VLAN. All other VLANs are tagged with a VID. IEEE 802.1Q specifies the native VLAN for backward compatibility with legacy LAN scenarios, where untagged traffic is common.
Note: Both switches must be configured with the same native VLAN or errors will occur
and untagged traffic will go to the wrong VLAN on the receiving switch. By default, the native VLAN is VLAN 1.

15.6 Implementing VLANs and Trunks


Configuring an 802.1Q Trunk
The following example shows the configuration of interface Ethernet0/0 as a trunk. Use
the switchport mode interface configuration command to set an Ethernet port to trunk
mode. The example also shows reconfiguration of the native VLAN. VLAN 99 is configured
as the native VLAN; therefore, traffic from VLAN 99 is sent untagged. You must ensure
that the switch on the other end of the trunk link is configured the same way.
If you do not explicitly configure the VLANs that traverse the trunk, all VLANs will be
allowed to cross the link. Use the switchport trunk allowed vlan vlan_list command
to allow only certain VLANs on the trunk link. In the example, only VLANs 10, 20, 30, and
99 are allowed on a trunk link. If you need to add or remove allowed VLANs, use the
switchport trunk allowed vlan {add | remove} vlan_list command.
Note: Use the no form of those commands to reset the trunk port to the default state.
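A sketch of the configuration described in this example (the switchport trunk encapsulation dot1q command is needed only on platforms that support more than one trunk encapsulation; on switches that support only 802.1Q it is not available):
Switch(config)# interface Ethernet0/0
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk native vlan 99
Switch(config-if)# switchport trunk allowed vlan 10,20,30,99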
The table lists commands to use when creating a trunk port:

Note: Be extremely careful when adding a new VLAN to the list of allowed VLANs on a
trunk port. It is a common mistake to use the switchport trunk allowed vlan vlan-number
command. This command will overwrite the existing list of allowed VLANs and it will
replace it with the single VLAN you have just specified. Therefore, it is necessary to use
the switchport trunk allowed vlan add vlan-number command.
The following example shows you how to verify the configuration of a trunked interface
using the show interfaces interface-id switchport command.
Display VLAN information for an interface.

In the example, you can see that the interface Ethernet 0/0 operates as a trunk port and
has the VLAN 99 as the native VLAN. It only allows VLANs 10, 20, 30, and 99 to traverse
through the link.
To verify which ports are configured as trunks on a switch, you can use the show
interfaces trunk command.

You can also use the show interfaces status command to quickly verify which port is a
trunk, and which port belongs to a certain VLAN.
Unlike access ports, when a port is configured as a trunk port, it will not be seen under the
show vlan [brief] command. Notice that, in this example, interface Ethernet 0/0 is
missing.

15.7 Implementing VLANs and Trunks


VLAN Design Considerations
VLANs create boundaries that can isolate endpoints or traffic so you should design a multi-
VLAN topology thoughtfully. The general question that you should ask yourself is the
following: "Who is talking to whom and what are they trying to get done?" Here are some
considerations that you need to take into account before implementing VLANs:
The following are some VLAN design considerations:
o The maximum number of VLANs is switch-dependent.
o VLAN 1 is the factory-default Ethernet VLAN.
o Keep management traffic in a separate VLAN.
o Change the native VLAN to something other than VLAN 1.
Typically, access layer Cisco switches support up to 64, 256, or 1024 VLANs. The maximum
number of VLANs is switch-dependent.
Cisco switches have a factory-default configuration in which default VLANs are
preconfigured to support various media and protocol types. The default Ethernet VLAN is
VLAN 1. For security reasons, a good practice is to configure all the ports on all switches to
be associated with VLANs other than VLAN 1. Also, all unused switch ports should be
assigned to a black hole VLAN and set to be administratively down. A black hole VLAN is a term for a VLAN that is associated with a subnet that has no route or default gateway to other networks within your organization or to the internet. Hence, you can mitigate the security risks associated with the default VLAN 1.
In this example, a black hole VLAN is created and unused ports are placed into that VLAN.
Also, unused switch ports are shut down to prevent unauthorized access to the network.
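A sketch, assuming that VLAN 999 is used as the black hole VLAN and that FastEthernet0/10 - 24 are the unused ports (both values are assumptions):
Switch(config)# vlan 999
Switch(config-vlan)# name BlackHole
Switch(config-vlan)# exit
Switch(config)# interface range FastEthernet0/10 - 24
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 999
Switch(config-if-range)# shutdown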

Note: If you did not use the shutdown command in the above configuration, then if
someone plugs a device into an unused port, the port will come up, but the device will be
placed into a VLAN that does not have access to anything. Thus, you can successfully
mitigate some network attacks.
A good security practice is to separate management and user data traffic because you do
not want users to be able to establish Secure Shell (SSH) sessions to the switch. The
management VLAN by default is VLAN 1, and it should be changed to a different VLAN. If
you want to communicate with a Cisco switch remotely for management purposes, the
switch must have an IP address and a default-gateway configured and they must be in the
management VLAN. In this case, users who are not in the management VLAN cannot
access the switch, unless they were routed into the management VLAN.
When configuring a trunk port, consider the following:
o Make sure that the native VLAN for an 802.1Q trunk is the same on both ends of
the trunk port.
o Only allow specific VLANs to traverse through the trunk port.
o DTP manages trunk negotiations between Cisco switches.

Make sure that the native VLAN for an IEEE 802.1Q trunk is the same on both ends of the
trunk link. If the configuration is different on the two switches, the traffic will be
forwarded in the wrong VLAN. If IEEE 802.1Q trunk configuration is not the same on both
ends, Cisco IOS Software will report error messages. Note that native VLAN frames are
untagged.
SW1#*Mar 31 06:22:46.631: %CDP-4-NATIVE_VLAN_MISMATCH: Native VLAN mismatch
discovered on Ethernet0/0(999), with SW2 Ethernet0/0 (99).
Another good security practice is to change the native VLAN to something other than
VLAN 1 because all control traffic is sent on VLAN 1. The native VLAN should be changed
to be a VLAN that is not used for any other traffic. By default, the native VLAN is not
tagged, but it is recommended to tag the native VLAN. The example below shows how to
change the native VLAN and tag it.
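A sketch, assuming that VLAN 99 is the chosen native VLAN; the vlan dot1q tag native global command, which tags native VLAN traffic on trunk ports, is supported on many but not all Catalyst platforms:
Switch(config)# interface Ethernet0/0
Switch(config-if)# switchport trunk native vlan 99
Switch(config-if)# exit
Switch(config)# vlan dot1q tag native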

Switches from other vendors do not support DTP. As discussed, DTP is used by Cisco
switches to automatically negotiate whether an interface between two switches will be
put into access or trunk mode.

16.1 Routing Between VLANs


Introduction
Enterprises typically have several departments, which are separated into different VLANs.
VLANs are used to logically separate switch ports. Essentially, each VLAN behaves like a
separate physical switch with its own Layer 2 broadcast domain, which means broadcast
frames are only switched among the ports within the same VLAN. This behavior is
important in Enterprise environments, because the Campus network can be organized
based on departments, functions, projects, or applications. Each VLAN is also mapped to
its own subnet and Layer 3 broadcast domain.
Users and devices in different departments need to communicate as well, which means
that devices in different VLANs should be able to communicate with each other. You can
permit these devices to communicate by using a solution that is called inter-VLAN routing.
There are different options of achieving that goal depending on what kind of network
devices you use for this task.
As a network engineer, you need to enable routing between the VLANs, which means that
you need to gain skills in:
o Understanding inter-VLAN routing cases.
o Describing different inter-VLAN routing solutions.
o Demonstrating basic configuration examples for some solutions.

16.2 Routing Between VLANs


Purpose of Inter-VLAN Routing
Each VLAN is a unique Layer 2 broadcast domain. Devices on separate VLANs are, by
default, not able to communicate. Each VLAN is usually assigned to a different IP subnet,
which is a Layer 3 broadcast domain. You can permit these devices to communicate by
using a solution that is called inter-VLAN routing. Inter-VLAN communication occurs
between subnets via a Layer 3 device.
VLANs have the following characteristics:
o A VLAN creates a separate Layer 2 broadcast domain.
o Traffic cannot be switched between VLANs.
o Each VLAN is mapped to a separate IP subnet.
o Routing is necessary to forward traffic between VLANs.
VLANs perform network partitioning and traffic separation at Layer 2 and are usually
associated with unique IP subnets on the network, as illustrated in the figure for IPv4
subnets. This subnet configuration facilitates the routing process in a multi-VLAN
environment. Inter-VLAN communication cannot occur without a Layer 3 device. Layer 3
switches or routers perform inter-VLAN routing by either having a separate router
interface for each VLAN, or by using a trunk to carry traffic for all VLANs. The devices on
the VLANs send traffic through the router to reach other VLANs.

16.3 Routing Between VLANs


Options for Inter-VLAN Routing
Inter-VLAN routing is a process of forwarding network traffic from one VLAN to another
VLAN using a Layer 3 device.
Option 1: Traditional Inter-VLAN Routing
Traditional inter-VLAN routing requires multiple physical interfaces on both the router and
the switch. VLANs are associated with unique IP subnets on the network. This subnet
configuration facilitates the routing process in a multi-VLAN environment. When you use a
router to facilitate inter-VLAN routing, the router interfaces are connected to switch
interfaces that are in separate VLANs. Devices on these VLANs send traffic through the
router to reach other VLANs. However, when you use a separate interface for each VLAN
on a router, you can quickly run out of interfaces. This solution is not very scalable.
Option 2: Router on a Stick
Not all inter-VLAN routing configurations require multiple physical interfaces. Some router
software permits configuring router interfaces as trunk links. Trunk links open up new
possibilities for inter-VLAN routing. A router on a stick is a type of router configuration in
which a single physical interface routes traffic among multiple VLANs on a network.

The figure shows a router that is attached to a switch. The router interface is configured to
operate as a trunk link and is connected to a switch port that is configured as a trunk. The
router performs inter-VLAN routing by accepting VLAN-tagged traffic on the trunk
interface coming from the adjacent switch and internally routing between the VLANs
using subinterfaces. Subinterfaces are multiple virtual interfaces that are associated with
one physical interface. To perform inter-VLAN routing functions, the router must know
how to reach all VLANs that are being interconnected; there must be a separate logical
connection on the router for each VLAN. VLAN trunking (such as IEEE 802.1Q) must be
enabled on these connections.
These subinterfaces are configured in software. Each is independently configured with its
own IP addresses and VLAN assignment. The router routes packets incoming from one
subinterface and then sends the data on another subinterface by putting it in a VLAN-
tagged frame and sending it back out the same physical interface. Devices on the VLANs
have their default gateway set to the appropriate router IP address; in this figure, the
devices in VLAN 10 will have default gateway set to 10.1.10.1, and the devices in VLAN 20
will have default gateway set to 10.1.20.1.
Router Trunk Link Configuration Example
The following example shows how you can configure a router on a stick, by configuring
subinterfaces and trunking on the router:
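A sketch of the router configuration, using the VLANs and gateway addresses from the figure description (VLAN 10 with gateway 10.1.10.1 and VLAN 20 with gateway 10.1.20.1); the /24 masks are assumptions:
Router(config)# interface GigabitEthernet0/0
Router(config-if)# no shutdown
Router(config-if)# interface GigabitEthernet0/0.10
Router(config-subif)# encapsulation dot1q 10
Router(config-subif)# ip address 10.1.10.1 255.255.255.0
Router(config-subif)# interface GigabitEthernet0/0.20
Router(config-subif)# encapsulation dot1q 20
Router(config-subif)# ip address 10.1.20.1 255.255.255.0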

The commands used on the router are as follows:

In the figure, the GigabitEthernet0/0 interface is divided into two subinterfaces—GigabitEthernet0/0.10 and GigabitEthernet0/0.20. Each subinterface represents the
router in each of the VLANs for which it routes.
In the example, the encapsulation dot1q 20 command enables 802.1Q encapsulation
trunking on the GigabitEthernet0/0.20 subinterface. The value 20 represents the VLAN
number (or VLAN identifier), therefore associating 802.1Q-tagged traffic from this VLAN
with the subinterface.
Each 802.1Q-tagged VLAN on the trunk link requires a subinterface with 802.1Q
encapsulation trunking that is enabled in this manner. The subinterface number does not
have to be the same as the dot1q VLAN number. However, management and
troubleshooting are easier when the two numbers are the same.
In this example, devices in different VLANs use the subinterfaces of the router as default
gateways to access the devices that are connected to the other VLANs.
On the switch, assign ports to specific VLANs and configure the port toward the router as
a trunk. The trunk link will carry traffic from different VLANs, and the router will route
between these VLANs.
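A sketch of the corresponding switch configuration, assuming that the hosts connect to FastEthernet0/1 and FastEthernet0/2 and that the router connects to GigabitEthernet0/1 (the interface numbers are assumptions):
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# interface FastEthernet0/2
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 20
Switch(config-if)# interface GigabitEthernet0/1
Switch(config-if)# switchport mode trunk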

The commands used on the switch are as follows:

Verify VLAN Subinterfaces


To verify the router configuration, use the show commands to display the VLANs and IP
routing information for each VLAN to verify that the routing table includes the subnets of
all VLANs.
Verify the VLAN subinterfaces using the show vlans command.
The sample output shows two VLAN subinterfaces—GigabitEthernet0/0.10 and
GigabitEthernet0/0.20.
Verify the IPv4 routing table for the VLAN subinterfaces using the show ip route
command.

The show ip route command displays the state of the routing table. The sample output
shows two subinterfaces. The GigabitEthernet0/0.10 and GigabitEthernet0/0.20 VLAN
subinterfaces are directly connected to the router.
Option 3: Layer 3 Switch
Some switches can perform Layer 3 functions, replacing the need for dedicated routers to
perform basic routing on a network. Layer 3 switches are capable of performing inter-
VLAN routing. Traditionally, a switch makes forwarding decisions by looking at the Layer 2
header, whereas a router makes forwarding decisions by looking at the Layer 3 header. A
Layer 3 switch combines the functionality of a switch and a router in one device. It
switches traffic when the source and destination are in the same VLAN and routes traffic
when the source and destination are in different VLANs (that is, on different IP subnets).
To enable a Layer 3 switch to perform routing functions, you must properly configure
VLAN interfaces on the switch; these are called switch virtual interfaces (SVIs). You must
use the IP addresses that match the subnet that the VLAN is associated with on the
network. The Layer 3 switch must also have IP routing enabled. Devices on the VLANs
have their default gateway set to the appropriate Layer 3 switch IP address.
Layer 3 switching is more scalable than router on a stick because the latter can pass only
so much traffic through the trunk link. In general, a Layer 3 switch is primarily a Layer 2
device that has been upgraded to have some routing capabilities. A router is a Layer 3
device that can perform some switching functions. Layer 3 switches do not have WAN
interfaces, while routers do. Typically, routers also support more advanced Layer 3
features (for example, Network Address Translation, encryption, and tunneling) than Layer
3 switches.
However, the line between switches and routers becomes hazier every day. Some Layer 2
switches support limited Layer 3 functionality, such as static routing on SVIs, so you can
configure static routes, but routing protocols are not supported.
Following is an example configuration on the Layer 3 switch with PCs that are connected
to VLAN 10 and VLAN 20. PCs in VLAN 10 will have default gateway 10.1.10.1, and PCs in
VLAN 20 will have default gateway 10.1.20.1. The Layer 3 switch will perform routing
between VLAN 10 and VLAN 20.
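A sketch of that configuration, reusing VLAN 10 and VLAN 20 with gateways 10.1.10.1 and 10.1.20.1 (the /24 masks are assumptions):
Switch(config)# ip routing
Switch(config)# interface Vlan10
Switch(config-if)# ip address 10.1.10.1 255.255.255.0
Switch(config-if)# no shutdown
Switch(config-if)# interface Vlan20
Switch(config-if)# ip address 10.1.20.1 255.255.255.0
Switch(config-if)# no shutdown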
17.1 Introducing OSPF
Introduction
Efficient routing is crucial for network performance in larger networks that encompass
many buildings with endpoints, branches, and remote sites; all implementing different
VLANs. In such large environments, changes are frequent as new networks emerge, paths
change, or different configuration and interface issues occur. The network has to be able
to adapt quickly and automatically to changes. Relying on static routing could result in
long waiting times as network administrators implement the necessary configuration
changes. This is where the role of routing protocols becomes crucial.
The objective of the routing protocol is to exchange network reachability information
between routers and dynamically adapt to network changes. Routing protocols use
routing algorithms to determine the optimal path between different segments in the
network and update routing tables with the best paths. Dynamic routing protocols play an
important role in enterprise networks. There are several different protocols available;
each having its advantages and limitations. Convergence time, support for summarization,
and the ability to scale affect the choice of suitable routing protocols. It is a best practice
that you use one routing protocol throughout the enterprise, if possible.
In an enterprise campus, the routing protocol must support high-availability requirements
and provide very fast convergence. One of the most common IP routing protocols in such
an environment is Open Shortest Path First (OSPF), an open standard protocol that works
as an interior gateway protocol (IGP) at the corporate office and at all the branches.
Despite its relatively simple configuration in small and medium networks, OSPF
implementation and troubleshooting in large-scale networks may represent a real
challenge. Therefore, an understanding of basic OSPF concepts is vital.
As a networking engineer, you will encounter routing protocols when designing,
configuring, and troubleshooting networks. If the protocol used is OSPF, you will need
knowledge of various aspects of OSPF including:
o A solid understanding of OSPF functions.
o Familiarity with OSPF packet types and the link-state database (LSDB).
o The process of OSPF neighbor establishment.
o Configuring and verifying basic OSPF implementation.

17.2 Introducing OSPF


Dynamic Routing Protocols
A routing protocol is a set of processes, algorithms, and messages that are used to
exchange routing information. Routing information is used to populate the routing table
with the best paths to destinations in the network. As routers learn of changes to network
reachability, this information is dynamically passed on to other routers.
A dynamic routing protocol has the following purposes:
o Discovering remote networks
o Maintaining up-to-date routing information
o Choosing the best path to destination networks
o Finding a new best path if the current path is no longer available
All routing protocols have the same purpose: to learn about remote networks and to
quickly adapt whenever there is a change in the topology. The method that a routing
protocol uses to accomplish this purpose depends upon the algorithm that it uses and the
operational characteristics of the protocol. The operations of a dynamic routing protocol
vary depending on the type of routing protocol and on the routing protocol itself.
Although routing protocols provide routers with up-to-date routing tables, they put
additional demands on the memory and processing power of the router. First, the
exchange of route information adds overhead that consumes network bandwidth. Even
though this is almost never an issue in networks today, in rare cases this overhead might
be a problem particularly where low-bandwidth links are used between routers. Second,
after the router receives the route information, protocols such as Enhanced Interior
Gateway Routing Protocol (EIGRP) and OSPF process it extensively to offer information to
the routing table. So, the routers that use these protocols must have sufficient processing
capacity to implement the algorithms of the protocol and to perform timely packet
routing and forwarding.
An autonomous system (AS), otherwise known as a routing domain, is a collection of
routers under a common administration such as an internal company network or an ISP
network. Because the internet is based on the AS concept, the following two types of
routing protocols are required:
o IGP: An IGP routing protocol is used to exchange routing information within an AS.
EIGRP, Intermediate System-to-Intermediate System (IS-IS), OSPF, and the legacy
routing protocol—Routing Information Protocol (RIP)—are examples of IGPs for
IPv4.
o EGP: An Exterior Gateway Protocol (EGP) routing protocol is used to route
between autonomous systems. Border Gateway Protocol (BGP) is the EGP used for
IPv4 today.
Within an AS, most IGP routing can be classified as distance vector or link-state routing:
o Distance vector: The distance vector routing approach determines the direction
(vector) and distance (such as router hops) to any link in the internetwork. Some
distance vector protocols periodically send complete routing tables to all
connected neighbors. In large networks, these routing updates can become very
large, causing significant traffic on the links. The only information that a router
knows about a remote network is the distance or metric to reach this network and
the path or interface to use to get there. Distance vector routing protocols do not
have an actual map of the network topology. RIP is an example of a distance vector
routing protocol while EIGRP is an advanced distance vector routing protocol that
provides additional functionality.
o Link state: The link-state approach, which uses the shortest path first (SPF)
algorithm, creates an abstract of the exact topology of the entire internetwork or
at least of the partition in which the router is situated. A link-state routing protocol
is like having a complete map of the network topology. A link-state router uses the
link-state information to create a topology map and to select the best path to all
destination networks in the topology. The OSPF and IS-IS protocols are examples of
link-state routing protocols.
Routing protocols can also be classified as classful or classless:
o Classless routing protocol: RIP version 2 (RIPv2), EIGRP, OSPF, IS-IS and BGP are
classless routing protocols and can be considered second-generation protocols
because they are designed to address the limitations of classful routing protocols.
A classless routing protocol is a protocol that advertises subnet mask information
in the routing updates for the networks advertised to neighbors. As a result, this
feature enables the protocols to support discontiguous networks (where subnets
of the same major network are separated by a different major network) and
Variable Length Subnet Masking (VLSM). This allows the routers to exchange
routing information for subnets (such as 10.1.1.0/24) as well as for major networks
(for example, 10.0.0.0/8). In the following figure, when routers R1 and R3 send
routing advertisements to router R2, they include the subnet mask in the updates
(10.1.1.0/24 and 10.2.2.0/24), so R2 learns about those specific subnets.
o Classful routing protocol: Classful routing protocols such as RIP version 1 (RIPv1)
and Interior Gateway Routing Protocol (IGRP) are legacy protocols and not used
today. They do not advertise the subnet mask information within the routing
updates. Therefore, only one subnet mask can be used within a major network.
VLSM and discontiguous networks are not supported.
The concept of route summarization plays a particularly important role when using dynamic
routing protocols because it optimizes the number of routing updates exchanged between
the routers in the routing domain. The purpose of route summarization is to aggregate
multiple routes into one route advertisement. For example, if Router A knows about all
the subnets 10.1.0.0/24, 10.1.1.0/24, 10.1.2.0/24 and so on, all the way up to
10.1.255.0/24, then instead of sending all of these routes to its neighbors, you could
configure it to send the summary route 10.1.0.0/16. In this case, Router A is telling its
neighbors that it knows how to get to all networks that have the same first 16 bits as
10.1.0.0, in other words that start with “10.1”.
All classless routing protocols support manual route summarization. Some of these
protocols have autosummarization at the major network boundary, to the classful
network address, on by default. For example, assume Router A has autosummarization on
and it knows about all the subnets 10.1.0.0/24, 10.1.1.0/24, 10.1.2.0/24 and so on all the
way up to 10.1.255.0/24. In this case Router A would automatically send the 10.0.0.0/8
route to any of its neighbors that are in another major network.
This automatic summarization of a classless routing protocol like EIGRP can be a problem
if your subnets are discontiguous, meaning the 10.1.x.x subnets are separated from the
10.2.x.x subnets by a different classful network such as 172.16.0.0. To stop this from
happening, automatic route summarization must be disabled with the no auto-summary
command under EIGRP. The subnets could then be manually summarized with the /16
mask. Remember, the automatic summarization would not even occur if all subnets are in
the 10.0.0.0 network.
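A minimal sketch, assuming a hypothetical EIGRP autonomous system number 100 and interface GigabitEthernet0/0, of disabling autosummarization and configuring the manual /16 summary described above:
Router(config)# router eigrp 100
Router(config-router)# no auto-summary
Router(config-router)# exit
Router(config)# interface GigabitEthernet0/0
! advertise the 10.1.0.0/16 summary out of this interface instead of the individual /24 subnets
Router(config-if)# ip summary-address eigrp 100 10.1.0.0 255.255.0.0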
OSPF does not support autosummarization; hence, you must manually summarize the routes that should be advertised to neighbor routers. Otherwise, all subnets are sent separately, which may result in large routing tables on the receiving routers. As of Cisco IOS Release 15, EIGRP autosummarization is off by default. In older IOS versions it was on by default; EIGRP's autosummarization feature can be disabled by using the no auto-summary command.
Classful routing protocols do not support manual route summarization and perform only
autosummarization.

17.3 Introducing OSPF


Path Selection
The router determines the best path to the destination network by evaluating multiple
available paths and choosing the optimal one to reach that network. If you want to control
the choice of the best path to the network, you need to statically configure a route. When
the router uses dynamic routing protocols, it chooses the best path by evaluating a value
called metric, which quantifies the path to the destination network. (You will sometimes
see the metric referred to as distance.) A dynamic routing protocol’s best path to a
network is the path with the lowest metric. Dynamic routing protocols use their own rules
and metrics. Each dynamic protocol offers its best path (its lowest metric route) to the
routing table.
In an enterprise network, it is not uncommon to encounter multiple dynamic routing
protocols and static routes configured. If this occurs, the routing table may have more
than one route source (a connected route, a static route, and a dynamic route) for the
same destination network. Cisco IOS Software uses the administrative distance to
determine the route to install into the IP routing table. The administrative distance
represents the "trustworthiness" of the source of the route; the lower the administrative
distance, the more trustworthy the route source. For example, a static route has a default
administrative distance of 1, whereas an OSPF-learned route has a default administrative
distance of 110. Given separate routes to the same destination with different
administrative distances, the router chooses the route with the lowest administrative
distance. Administrative distance is used as a tie breaker only when different sources offer information for the same destination network, that is, the same network address and subnet mask.
The administrative distance is an integer from 0 to 255. A routing protocol with a lower
administrative distance is considered more trustworthy than one with a higher
administrative distance.
In the figure, the router has received two routing update messages—one from OSPF and
one from EIGRP. The metric that EIGRP uses has determined that the best path to network
172.17.8.0/24 is via 192.168.5.2, but the metric that OSPF uses has determined that the
best path to 172.17.8.0/24 is via 192.168.3.1. Each routing protocol uses a different metric
to calculate the best path to a given destination, if it learns multiple paths to the same
destination.
The router has used the administrative distance feature to determine which route to
install in its routing table. Because the administrative distance for OSPF is 110 and the
administrative distance for EIGRP is 90, the router has chosen the EIGRP route and added only the EIGRP route to its routing table.
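In the routing table output, such an entry is tagged with the EIGRP route code and shows the [administrative distance/metric] pair; the metric, timer, and exit interface below are illustrative only:
D    172.17.8.0/24 [90/2170112] via 192.168.5.2, 00:02:10, GigabitEthernet0/1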
Note: The default administrative distances can be tuned for each routing protocol.
The table shows the default administrative distance for selected routing information
sources.
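The commonly cited Cisco IOS defaults for selected route sources are as follows (exact values can vary by platform and software release):
o Connected interface: 0
o Static route: 1
o External BGP (eBGP): 20
o Internal EIGRP: 90
o OSPF: 110
o IS-IS: 115
o RIP: 120
o External EIGRP: 170
o Internal BGP (iBGP): 200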
17.4 Introducing OSPF
Link-State Routing Protocol Overview
As mentioned, the two basic types of routing protocols are distance vector and link state.
OSPF is an example of a link-state routing protocol.
Although most routing protocols belong to these two types, the Cisco proprietary EIGRP routing protocol is an exception: it is considered an advanced distance vector protocol because it combines properties of distance vector and link-state protocols. Although it is based on the distance vector architecture, the link-state features it implements play a key role in the protocol behavior. For example, EIGRP uses Hello packets to discover neighbors, multiple
parameters are included in the metric (value) calculation for the routes, it uses
incremental updates, and so on.
Link-state routing protocols such as OSPF have several advantages when compared to
traditional distance vector routing protocols.
o Link-state protocols are more scalable.
o Each router has a full map of the topology.
o Updates are sent when a topology change occurs and are reflooded periodically.
o Link-state protocols respond quickly to topology changes.
o More information is communicated between the routers.
When a failure occurs in a network, routing protocols should detect the failure as soon as
possible and find another path across the network. Link-state protocols support fast
convergence with support for scalability and multivendor environments so they are the
usual type of IGP that is found in large network environments. (As noted, EIGRP can also
be used in large networks and one of the benefits it offers is the fast convergence time).
The link-state protocols consist of the following key features:
o They are scalable: Link-state protocols use a hierarchical design and can scale to
very large networks, if properly designed.
o Each router has a full map of the topology: Because each router contains full
information about the routers and links in a network, each router is able to
independently select a loop-free and efficient pathway, which is based on cost, to
reach every neighbor in the network.
o Updates are sent when a topology change occurs and are reflooded periodically:
Link-state protocols send updates of a topology change by using triggered updates.
Also, updates are sent periodically—by default every 30 minutes.
o They respond quickly to topology changes: Link-state protocols establish neighbor
relationships with the adjacent routers. The failure of a neighbor is detected
quickly, and this failure is communicated by using triggered updates to all routers
in the network. This immediate reporting generally leads to fast convergence
times.
o More information is communicated between routers: Routers that run a link-
state protocol have a common view on the network. Each router has full
information about other routers and links between them, including the metric on
each link.

17.5 Introducing OSPF


Link-State Routing Protocol Data Structures
A router that runs a link-state routing protocol must first establish a neighbor adjacency
with its neighboring routers. A router achieves this neighbor adjacency by exchanging
hello packets with the neighboring routers. After neighbor adjacency is established, the
neighbor is put into the neighbor database.
In the example, router A recognizes routers B and D as neighbors.
After a neighbor relationship is established between routers, the routers synchronize their
LSDBs (also known as topology databases or topology tables) by reliably exchanging link-
state advertisements (LSAs). An LSA describes a router and the networks that are
connected to the router. LSAs are stored in the LSDB. By exchanging all LSAs, routers learn
the complete topology of the network. Each router will have the same topology database
within an area, which is a logical collection of OSPF networks, routers, and links that have
the same area identification within the autonomous system.
After the topology database is built, each router applies the SPF algorithm to the LSDB in that area. The SPF algorithm, which is based on Dijkstra's algorithm, calculates the best (also called the shortest) path to each destination.
The best paths to destinations are then offered to the routing table. The routing table
includes a destination network and the next-hop IP address. In the example, the routing
table on router A states that a packet should be sent to router D to reach network X.

17.6 Introducing OSPF


Introducing OSPF
OSPF is a link-state routing protocol. You can think of a link as an interface on a router.
The state of the link is a description of that interface and of its relationship to its
neighboring routers. A description of the interface would include, for example, the IP
address of the interface, the subnet mask, the type of network to which it is connected,
the routers that are connected to that network, and so on. The collection of all these link
states forms a LSDB. All routers in the same area share the same LSDB. Routers in other
OSPF areas will have different LSDBs.
OSPF was developed based on an open standard and is supported by several router
manufacturers. OSPF is widely used as an IGP, especially in large network environments.
OSPF was developed as a replacement for the distance vector routing protocol RIP. The
major advantages of OSPF over RIP are its fast convergence and its ability to scale to much
larger networks. The version of OSPF used for IPv4 networks is OSPF version 2 (OSPFv2).
With OSPF, an AS can be logically subdivided into multiple areas.

OSPF uses a two-layer network hierarchy that has two primary elements:
o AS: An AS consists of a collection of networks under a common administration that
share a common routing strategy. An AS, which is sometimes called a domain, can
be logically subdivided into multiple areas.
o Area: An area is a grouping of contiguous networks. Areas are logical subdivisions
of the AS.
Within each AS, a contiguous area 0 (backbone area) must be defined. In the multiarea
design, all other nonbackbone areas are connected to the backbone area.
A multiarea design is more effective because the network is segmented to limit the
propagation of LSAs inside an area. It is especially useful for large networks.
In a multiarea topology, there are some special commonly used OSPF terms, based on the
OSPF router roles. Routers that are only in Area 0 are known as backbone routers. Routers
that are only in nonbackbone (normal) areas are known as internal routers; they have all
interfaces in one area only. An area border router (ABR) connects Area 0 to the
nonbackbone areas. ABRs contain LSDB information for each area, make route
calculations for each area, and advertise routing information between areas. An AS
boundary router (ASBR) is a router that has at least one of its interfaces connected to an
OSPF area and at least one of its interfaces connected to an external non-OSPF domain,
such as an EIGRP routing domain.
Note: The optimal number of routers per area varies based on factors such as network
stability, but the general recommendation is to have no more than 50 routers per single
area.
In single-area OSPF, whenever there is a change in the topology, new LSAs are created and flooded throughout the area. All routers update their LSDBs when they receive the new LSAs, and the SPF algorithm is run again on the updated LSDB to recalculate the best paths to destinations within the area.
The OSPF dynamic routing protocol does the following:
o Creates a neighbor relationship by exchanging hello packets
o Propagates LSAs rather than routing table updates:
o Link: Router interface
o State: Description of an interface and its relationship to neighboring
routers
o Floods LSAs to all OSPF routers in the area, not just to the directly connected
routers
o Pieces together all the LSAs that OSPF routers generate to create the OSPF LSDB
o Uses the SPF algorithm to calculate the shortest path to each destination and
places it in the routing table
A router sends LSA packets immediately to advertise its state when there are state
changes. Moreover, the router resends (floods) its own LSAs every 30 minutes by default
as a periodic update. The information about the attached interfaces, the metrics that are
used, and other variables are included in OSPF LSAs. As OSPF routers accumulate link-state
information, they use the SPF algorithm to calculate the shortest path to each network.
Essentially, an LSDB is an overall map of the networks in relation to the routers. It contains
the collection of LSAs that all routers in the same area have sent. Because the routers
within the same area share the same information, they have identical topological
databases.
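A minimal single-area OSPFv2 configuration sketch is shown below; the process ID, router ID, and network statement values are illustrative only:
Router(config)# router ospf 1
Router(config-router)# router-id 1.1.1.1
! the wildcard mask 0.0.0.255 enables OSPF on interfaces in 10.1.1.0/24 and places them in area 0
Router(config-router)# network 10.1.1.0 0.0.0.255 area 0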
17.7 Introducing OSPF
Establishing OSPF Neighbor Adjacencies
Neighbor OSPF routers must recognize each other on the network before they can share
information because OSPF routing depends on the status of the link between two routers.
The Hello protocol completes this process. OSPF routers send hello packets on all OSPF-
enabled interfaces to determine if there are any neighbors on those links.
The Hello protocol establishes and maintains neighbor relationships by ensuring
bidirectional (two-way) communication between neighbors.

o OSPF routers first establish neighbor adjacencies.


o Hello packets are periodically sent to the all OSPF routers IPv4 address 224.0.0.5.
o Routers must agree on certain information (*) inside the hello packet before
adjacency can be established.
An OSPF neighbor relationship, or adjacency, is formed between two routers if they both
agree on the area ID, hello and dead intervals, authentication, and stub area flag. Of
course, the routers must also be on the same IPv4 subnet. Bidirectional communication
occurs when a router recognizes itself in the neighbors list in the hello packet that it
receives from a neighbor.
Each interface that participates in OSPF uses the all OSPF routers multicast address
224.0.0.5 to periodically send hello packets. A hello packet contains the following
information:
o Router ID: The router ID is a 32-bit number that uniquely identifies the router; it
must be unique on each router in the network. The router ID is, by default, the
highest IPv4 address on a loopback interface, if there is one configured. If a
loopback interface with an IPv4 address is not configured, the router ID is the
highest IPv4 address on any active interface. You can also manually configure the
router ID by using the router-id command (a configuration sketch follows this list). Even though using a loopback IPv4 address is a better approach than using a physical IPv4 address for a router ID, it is highly recommended to manually set the router ID. In this way, the router ID is stable and will not change, for example, if an interface goes down.
o Hello and dead intervals: The hello interval specifies the frequency in seconds at
which a router sends hello packets to its OSPF neighbors. The default hello interval
on broadcast and point-to-point links is 10 seconds. The dead interval is the time in
seconds that a router waits to hear from a neighbor before declaring the
neighboring router out of service. By default, the dead interval is four times the
hello interval. These timers must be the same on neighboring routers; otherwise,
an adjacency will not be established.
o Neighbors: The Neighbors field lists the adjacent routers from which the router
has received a hello packet. Bidirectional communication occurs when the router
recognizes itself in the Neighbors field of the hello packet from the neighbor.
o Area ID: To communicate, two routers must share a common segment and their
interfaces must belong to the same OSPF area on this segment. The neighbors
must also be on the same subnet (with the same subnet mask). These routers in
the same area will all have the same LSDB information for that area.
o Router priority: The router priority is an 8-bit number. OSPF uses the priority to
select a designated router (DR) and backup designated router (BDR). In certain
types of networks, OSPF elects DRs and BDRs. The DR acts as a central exchange
point to reduce traffic between routers.
o DR and BDR IPv4 addresses: These addresses are the IPv4 addresses of the DR and
BDR for the specific network, if they are known.
o Authentication data: If router authentication is enabled, two routers must
exchange the same authentication data. Authentication is not required, but if it is
enabled, all peer routers must have the same key configured.
o Stub area flag: A stub area is a special area. Designating a stub area is a technique
that reduces routing updates by replacing them with a default route. Two routers
have to agree on the stub area flag in the hello packets to become neighbors.
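The following sketch shows how the router ID and the interface hello and dead timers could be set; the process ID, router ID, and interface are assumptions, and 10 and 40 seconds are simply the defaults mentioned above:
Router(config)# router ospf 1
Router(config-router)# router-id 1.1.1.1
Router(config-router)# exit
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip ospf hello-interval 10
Router(config-if)# ip ospf dead-interval 40
Note that a manually configured router ID takes effect only after the OSPF process is restarted, for example with the clear ip ospf process command.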
OSPF routers establish a neighbor relationship over point-to-point links.
o Commonly a serial interface running either PPP or High-Level Data Link Control
(HDLC)
o May also be a point-to-point subinterface running Frame Relay or ATM
o Does not require DR or BDR election
A point-to-point network joins a single pair of routers. A serial line that is configured with
a data link layer protocol such as PPP or HDLC is an example of a point-to-point network.
On these types of networks, the router dynamically detects its neighboring routers by
multicasting its hello packets to all OSPF routers, using the 224.0.0.5 address. On point-to-
point networks, neighboring routers become adjacent whenever they can communicate
directly. No DR or BDR election is performed; there can be only two routers on a point-to-
point link, so there is no need for a DR or BDR. The default OSPF hello and dead timers on
point-to-point links are 10 seconds and 40 seconds, respectively.

17.8 Introducing OSPF


OSPF Neighbor States
When routers that run OSPF are initialized, an exchange process occurs, with the Hello
protocol as the first procedure.
OSPF routers go through different OSPF states.

The figure illustrates the exchange process that happens when routers appear on the
network:
1. A router interface is enabled on the network. The OSPF process is in a down state
because the router has not yet exchanged information with any other router. The
router begins by sending a hello packet out of the OSPF-enabled interface,
although it does not know the identity of any other routers.
2. All directly connected routers that are running OSPF receive the hello packet from
the first router and add the router to their lists of neighbors. After adding the
router to the list, other routers are in the initial state (INIT state).
3. Each router that received the hello packet sends a unicast reply hello packet to the
first router with its corresponding information. The Neighbors field in the hello
packet lists all neighboring routers, including the first router.
4. When the first router receives the hello packets from the neighboring routers
containing its own router ID inside the list of neighbors, it adds the neighboring
routers to its own neighbor relationship database. After recognizing itself in the
neighbor list, the first router goes into two-way state with those neighbors. At this
point, all routers that have each other in their lists of neighbors have established a
bidirectional (two way) communication. When routers are in two-way state, they
must decide whether to proceed with building an adjacency or to stay in the current state.
If the link type is a multiaccess broadcast network (for example, an Ethernet LAN), a DR
and BDR must first be selected. The DR acts as a central exchange point for routing
information to reduce the amount of routing information that the routers have to
exchange. The DR and BDR are selected after routers are in the two-way state. Note that
the DR and BDR are elected per LAN, not per area. The router with the highest priority becomes the
DR and the router with the second highest priority becomes the BDR. If there is a tie, the
router with the highest router ID becomes the DR and the router with the second highest
router ID becomes the BDR. Among the routers on a LAN that are not elected as the DR or
BDR, the exchange process stops at this point and the routers remain in the two-way
state. Routers then communicate only with the DR (or BDR) by using the OSPF DR
multicast IPv4 address 224.0.0.6. The DR uses the 224.0.0.5 multicast IPv4 address to
communicate with all other non-DR routers. On point-to-point links, there is no DR/BDR
election, because only two routers can be connected on a single point-to-point segment
and there is no need for using DR or BDR.

After the DR and BDR are selected, the routers are considered to be in the exstart state.
The routers are then ready to discover the link-state information about the internetwork
and create their LSDBs. The exchange protocol is used to discover the network routes, and
it brings all the routers from the exchange state to a full state of communication with the
DR and BDR.
As shown in the figure, the exchange protocol continues as follows:
1. In the exstart state a primary/secondary relationship is created between each
router and its adjacent DR and BDR. The router with the higher router ID acts as
the primary router during the exchange process. The primary/secondary election
dictates which router will start the exchange of routing information. This step is
not shown in the figure.
2. The primary/secondary routers exchange one or more database description (DBD)
packets, containing a summary of their LSDB. The routers are in the exchange
state.
3. A router compares the DBD that it received with the LSAs that it has. If the DBD has
a more up-to-date link-state entry, the router sends a link-state request (LSR) to
the other router. When routers start sending LSRs, they are in the loading state.
4. The router sends a link state update (LSU), containing the entries requested in the
LSR. This is acknowledged with a link state acknowledgment (LSAck). When all LSRs
have been satisfied for a given router, the adjacent routers are considered
synchronized and are in the full state.
All states except two-way and full are transitory, and routers should not remain in these
states for extended periods of time.
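Neighbor states can be verified with the show ip ospf neighbor command; the neighbor IDs, addresses, and interface in this sample output are illustrative:
R1# show ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface
10.2.2.2          1   FULL/DR         00:00:36    192.168.1.2     GigabitEthernet0/0
10.3.3.3          1   2WAY/DROTHER    00:00:34    192.168.1.3     GigabitEthernet0/0
As described above, remaining in the two-way state with another non-DR/BDR router on a LAN is normal, while the full state is expected with the DR and BDR.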

17.9 Introducing OSPF


SPF Algorithm
A metric is an indication of the overhead that is required to send packets across a certain
interface. OSPF uses cost as a metric. A smaller cost indicates a better path than a higher
cost. By default on Cisco devices, the cost of an interface is inversely proportional to the
bandwidth of this interface, so a higher bandwidth indicates a lower cost. For example,
there is more overhead, a higher cost, and more time delays that are involved in crossing
a 10-Mbps Ethernet line than in crossing a 100-Mbps Ethernet line.
On Cisco devices, the formula used to calculate OSPF cost is cost = reference bandwidth /
interface bandwidth (in bits per second).
The default reference bandwidth is 10^8 (100,000,000 bps), which is the equivalent of the bandwidth of Fast Ethernet. Therefore, the default cost of a 10-Mbps Ethernet link is 10^8 / 10^7 = 10, and the cost of a 100-Mbps link is 10^8 / 10^8 = 1. The problem arises with links that are faster than 100 Mbps. Because the OSPF cost has to be an integer, all links that are faster than Fast Ethernet will have an OSPF cost of 1.
There are three approaches you can take to make the cost more realistic, especially on high-speed links (a configuration sketch follows this list):
o Reference bandwidth: You can set the reference bandwidth on the router globally
to provide granular link costs.
o To adjust the reference bandwidth, use the auto-cost reference-bandwidth reference-bandwidth command in OSPF routing process configuration mode.
o Interface cost: You can choose to use arbitrary cost numbers on every interface.
o To override the cost that is calculated for an interface for the OSPF routing
process, use the ip ospf cost cost interface configuration command.
o Interface bandwidth: You can configure the bandwidth kilobits-per-second
command on an interface to override the default bandwidth.
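A sketch of the three methods with arbitrary example values follows; auto-cost reference-bandwidth is specified in Mbps (100000 corresponds to 100 Gbps) and the bandwidth command in kbps:
Router(config)# router ospf 1
Router(config-router)# auto-cost reference-bandwidth 100000
Router(config-router)# exit
Router(config)# interface GigabitEthernet0/1
! either of the following interface commands could be used instead of (or in addition to) the reference bandwidth
Router(config-if)# ip ospf cost 50
Router(config-if)# bandwidth 10000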
Note: Whether you choose the reference bandwidth method, interface cost method, or
interface bandwidth method for adjusting OSPF link costs, it is imperative that you
consistently configure adjustments on every router in the OSPF network. Inconsistent
application of OSPF link costs can lead to suboptimal path selection.
The cost to reach a distant network from a router is the cumulative cost of all links on the
path from the router to the network. In the example, the cost from R1 to the destination
network via R3 is 40 (20 + 10 + 10), and the cost via R2 is 30 (10 + 10 + 10). The path via R2
is better because it has a lower cost.
The figure represents the R1 view of the network, where R1 is the root and calculates the
pathways by assuming this view.

Each router has its own view of the topology even though all the routers build the shortest
path trees by using the same LSDB.
Each router places itself as the root of a tree and then runs the SPF algorithm. The path
calculation is based on the cumulative cost that is required to reach that destination. LSAs
are flooded throughout the area by using a reliable algorithm, which ensures that all the
routers in an area have the same LSDB (topological database). Because of the flooding
process, R1 has learned the link-state information for each router in its area. Each router
uses the information in its topological database to calculate a shortest path tree, with
itself as the root. The router then uses this tree to determine the best routes, which are
offered to the routing table to route network traffic.

For R1, the best path to each LAN and its cost are shown in the table. Note that in terms of
number of hops (routers) to reach the destination, the shortest path might not necessarily
be the best one, because the selection of the best route is based on the lowest total cost
value from the available paths. Each router has its own view of the topology, even though
the routers build shortest path trees by using the same LSDB.

17.10 Introducing OSPF


Building a Link-State Database
When two routers discover each other and establish adjacency by using hello packets,
they use the exchange protocol to exchange information about the LSAs.
OSPF uses five types of routing protocol packets, four of which are involved in building the LSDB.
As shown in the table, the exchange protocol operates as follows:
1. The routers exchange one or more DBD packets. A DBD includes information about
the LSA entry header that appears in the LSDB of the router. Each LSA entry header
includes information about the link-state type, the address of the advertising
router, the cost of the link, and the sequence number. The router uses the
sequence number to determine the "newness" of the received link-state
information.
2. When the router receives the DBD, it acknowledges the receipt of the DBD by using the LSAck packet.
3. The routers compare the information that they receive with the information that
they have. If the received DBD has a more up-to-date link-state entry, the router
sends an LSR to the other router to request the updated link-state entry.
4. The other router responds with complete information about the requested entry in
an LSU packet.
5. When the router receives an LSU, it adds the new link-state entries to its LSDB and
it sends an LSAck.
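The resulting LSDB contents can be inspected with the show ip ospf database command; the router ID and process ID in this output header are illustrative:
R1# show ip ospf database

            OSPF Router with ID (1.1.1.1) (Process ID 1)

                Router Link States (Area 0)

Link ID         ADV Router      Age         Seq#       Checksum Link count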

17.11 Introducing OSPF


Routing for IPv6
There are different routing protocols available for IPv6 because routing for IPv6 is needed
just as it is for IPv4.
The same rules apply for routers using IPv6 as they do for IPv4. Routers can learn where to
forward packets to destination networks that are not directly connected by performing
static or dynamic routing. Because static routing is best suited for small networks where
changes rarely happen, dynamic routing is recommended in medium or large networks.
To support IPv6, all the IPv4 routing protocols had to go through varying degrees of
changes, with the most obvious being that each had to be changed to support longer
addresses and prefixes.

As with IPv4, most IPv6 routing protocols are IGPs, with BGP still being the only EGP of
note. All these IGPs and BGP were updated to support IPv6. The table lists the routing
protocols and their new RFCs.
Each of these routing protocols had to be changed to support IPv6. The actual messages
that are used to send and receive routing information have changed, using IPv6 headers
instead of IPv4 headers and using IPv6 addresses in those headers. For example, RIP next
generation (RIPng) sends routing updates to the IPv6 destination multicast address ff02::9
instead of to the former RIPv2 IPv4 224.0.0.9 address. Also, the routing protocols typically
advertise their link-local IPv6 address as the next hop in a route.
The routing protocols still retain many of the same internal features. For example, RIPng is
based on RIPv2 and is still a distance vector protocol, with the hop count as the metric and
15 hops as the highest valid hop count (16 is infinity). OSPF version 3 (OSPFv3), which was
created specifically to support IPv6 (and also supports IPv4), is still a link-state protocol,
with the cost as the metric but with many internals, including LSA types, changed. OSPFv3
uses multicast addresses, including the all OSPF routers IPv6 address ff02::5, and the OSPF
DR IPv6 address ff02::6. As a result, OSPFv2 is not compatible with OSPFv3. However, the
core operational concepts remain the same.
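For reference, a minimal OSPFv3 sketch on Cisco IOS might look like the following; the process ID, router ID, and interface are assumptions, and note that the router ID is still a 32-bit value written in dotted-decimal format:
Router(config)# ipv6 unicast-routing
Router(config)# ipv6 router ospf 1
Router(config-rtr)# router-id 1.1.1.1
Router(config-rtr)# exit
Router(config)# interface GigabitEthernet0/0
! OSPFv3 is enabled directly on the interface rather than with network statements
Router(config-if)# ipv6 ospf 1 area 0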
18.1 Building Redundant Switched Topologies
Introduction
In any kind of enterprise environment, one of the most important aspects of network
design is to provide redundancy. A network should never rely on one device to be a single
point of failure because that can effectively cause the loss of digital communication within
and even outside the enterprise.
Therefore, it is crucial to build a redundant topology, including implementing additional
switches and redundant links between them. Although a redundant topology in a switched
network has its benefits, it can also cause problems, such as Open Systems Interconnection
(OSI) Layer 2 loops. To avoid Layer 2 loops in a switched topology, Spanning Tree Protocol
(STP) is used as a Layer 2 loop prevention mechanism while still providing network link
redundancy. Thus, you should never disable STP in Layer 2 environments.
A limitation of the traditional STP is the convergence delay after a topology change, so the
use of Rapid STP (RSTP) is recommended. Although RSTP is backward-compatible with
STP, the two protocols are different in many ways. To take full advantage of
RSTP, all switches in a spanning tree topology must run the rapid version of the protocol.

As a networking engineer working with Cisco Catalyst switches, you should be familiar
with STP and all its more optimized variants such as Cisco’s Per VLAN Spanning Tree Plus
(PVST+), and get a firm grip on physical redundancy and STP concepts, such as:
o Issues in redundant topologies
o STP and RSTP protocol operation
o Implementation of STP stability mechanisms
18.2 Building Redundant Switched Topologies
Physical Redundancy in a LAN
Enterprise voice and data networks are designed with physical component redundancy to
eliminate the possibility of any single point of failure causing a loss of function for an
entire switched network. Building a reliable switched network requires additional switches
and redundant physical links. However, redundant Layer 2 switch topologies require
planning and configuration to operate without introducing Layer 2 loops.
Physical loops may occur in the network as part of a design strategy for redundancy in a
switched network. Adding additional switches to LANs can add the benefit of redundancy.
Connecting two switches to the same network segments ensures continuous operation if
there are problems with one of the segments. Redundancy can ensure the constant
availability of the network. However, adding redundant physical links and additional switches creates a physical loop, and spanning a single VLAN across the connected switches also creates a Layer 2 loop.
Layer 2 LAN protocols, such as Ethernet, lack a mechanism for recognizing and eliminating
endless looping of frames, as illustrated in the figure. Some Layer 3 protocols implement a
Time to Live (TTL) or hop limit mechanism that limits the number of times that a Layer 3
networking device can retransmit a packet or limit how many Layer 3 devices a packet can
traverse. Lacking such a mechanism, Layer 2 devices would continue to retransmit looping
traffic indefinitely.

Layer 2 loops affect performance in a switched LAN. A loop-avoidance mechanism solves these problems, and STP was developed for that purpose.

18.3 Building Redundant Switched Topologies


Issues in Redundant Topologies
In the absence of a protocol to monitor link forwarding states, a redundant switch
topology is vulnerable to these conditions:
o Continuous frame duplication: Without some loop-avoidance process, each switch
floods broadcast, multicast, and unknown unicast frames endlessly. Switches flood
broadcast frames to all ports except the port on which the frame was received.
The frames then duplicate and travel endlessly around the loop in all directions.
The result of continuous broadcast frame duplication is called a broadcast storm.
o Multiple frame transmission: Multiple copies of unicast frames may be delivered
to destination stations. Many protocols expect to receive only a single copy of each
transmission. Multiple copies of the same frame can cause unrecoverable errors.
o MAC database instability: Instability in the content of the MAC address table
results from the fact that different ports of the switch receive copies of the same
frame. Data forwarding can be impaired when the switch consumes resources while coping with instability in the MAC address table.
For example, in the topology that is shown in the figure no Layer 2 loop prevention
mechanism is implemented. Suppose that host A sends a frame to host B. Host A resides
on network segment A, and host B resides on network segment B. Assume that none of
the switches have learned the address of host B.

Host A transmits the frame destined for host B on segment A.


Switch W receives the frame that is destined for host B, learns the MAC address of host A
on segment A, and floods it out to switches X and Y.
Switch X and switch Y both receive the frame from host A (via switch W) and correctly
learn that host A is on segment 1 for switch X and on segment 2 for switch Y. Switch X and
switch Y then forward the frame to switch Z. Switch Z receives two copies of the frame
from host A: one copy through switch X on segment 3 and one copy through switch Y on
segment 4.
Assume that the first copy of the frame from switch X arrives first. Switch Z learns that
host A resides on segment 3. Because switch Z does not know where host B is connected,
it forwards the frame to all its ports (except the incoming port on segment 3) and
therefore to host B and also to switch Y.
When the second copy of the frame from switch Y arrives at switch Z on segment 4, switch
Z updates its table to indicate that host A resides on segment 4. Switch Z then forwards
the frame to host B and switch X.
In this example, where no loop prevention mechanism exists, host B receives multiple copies of the frame, which can cause problems for the receiving application on host B.
Switches X and Y now change their internal tables to indicate that host A is on segment 3
for switch X and on segment 4 for switch Y. The copies of the initial frame from host A
being received on different segments of the switches results in MAC database instability.
Furthermore, if the initial frame from host A was a broadcast frame, then all switches
forward the frames endlessly. Switches flood broadcast frames to all ports except the port
on which the frame was received. The frames then duplicate and travel endlessly around
the loop in all directions. They eventually would use all available network bandwidth and
block transmission of other packets on both segments. This situation results in a broadcast
storm.

18.4 Building Redundant Switched Topologies


Spanning Tree Operation
The solution to prevent Layer 2 loops is STP. STP enables the use of physical path
redundancy while preventing the undesirable effects of active Layer 2 loops in the
network. By default, STP is turned on in Cisco Catalyst switches.
There are several varieties of STP. All variants of STP provide Layer 2 loop prevention by
managing the physical paths to given network segments. The original STP is an IEEE
committee standard that is defined as 802.1D and was created for a bridged network
using Ethernet bridges. Ethernet bridges are obsolete and have been replaced by Ethernet
switches, so the devices running any variant of STP nowadays are switches. Note,
however, that STP terminology includes bridge even though it is being run on switches.

STP behaves in the following way:


o STP uses bridge protocol data units (BPDUs) for communication between switches.
o STP forces certain ports into a blocked state so that they do not listen to, forward,
or flood data frames. The overall effect is that only one path to each network
segment is active at any time.
o If there is a connectivity problem with any active network segment, STP activates a
previously inactive path, if one exists (changing the blocked port to the forwarding
state).
To prevent Layer 2 loops in a network, STP uses a reference point called root bridge. The
root bridge is the logical center of the spanning tree topology. All paths that are not
needed to reach the root bridge from anywhere in the network are placed in STP blocking
mode.
The root bridge is chosen with an election. In the original STP, each switch has a unique
64-bit bridge ID (BID) that consists of the 16-bit bridge priority and 48-bit MAC address as
shown in the figure. The bridge priority is a number between 0 and 65535 and the default
on Cisco switches is 32768.

In evolved variants of STP, like Cisco PVST+, RSTP or Multiple Spanning Tree Protocol
(MSTP), the original bridge priority field in the BID is changed to include an Extended
System ID field as shown in the figure. This field carries information such as VLAN ID or
instance number required for the evolved variants of STP to operate. The bridge priority
field in this case is 4 bits and the Extended System ID field is 12 bits. In command outputs
you will either see this combination written as a 16-bit field, or as two components: a 16-
bit bridge priority where the lower 12 bits are binary 0, and a 12-bit Extended System ID.
In the latter case, the bridge priority is a number between 0 and 61440 in increments of 4096, and the default on Cisco switches is 32768.

The following are the steps of the spanning tree algorithm:


1. All interfaces on all switches in the spanning tree topology start in blocked mode.
2. The switches elect a root bridge. The root bridge is selected based on the lowest
BID (in all STP variants). If all switches in the network have the same bridge
priority, the switch with the lowest MAC address becomes the root bridge. You can
only have one root bridge per network in an original STP and one root bridge per
VLAN in Cisco PVST+. By default, if a switch that is elected as a root bridge fails, the
switch with the next lowest BID becomes the new root bridge. Cisco enables the
configuration of a primary and secondary root bridge. If a primary root bridge
failure occurs, the configured secondary becomes the new root bridge.
3. Each nonroot switch determines a root port. The root port is the port with the best
path to the root bridge. The root path cost value is used in this calculation; it is the
cumulative STP cost of all links to the root bridge. The root port is the port with the
lowest root path cost to the root bridge.
4. On each segment, a designated port is selected. This is again calculated based on
the lowest root path cost. The designated port on a segment is on the switch with
the lowest root path cost. On root bridges, all switch ports are designated ports.
Each network segment will have one designated port.
5. The root ports and designated ports transition to the forwarding state and any
other ports (called nondesignated ports) stay in the blocking state.
The STP path cost depends on the speed of the link. The first table shows the default STP
link costs. The second table shows the summary of the STP port roles.
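For reference, the classic IEEE 802.1D default path costs are listed below (newer standards also define a larger 32-bit cost scale), followed by a summary of the port roles described in the steps above:
o 10 Mbps: cost 100
o 100 Mbps: cost 19
o 1 Gbps: cost 4
o 10 Gbps: cost 2
o Root port: One per nonroot switch; the port with the lowest root path cost to the root bridge; forwarding.
o Designated port: One per segment; on the switch with the lowest root path cost for that segment; forwarding. All ports on the root bridge are designated ports.
o Nondesignated port: Any other port; blocking.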
Spanning Tree Operation Example
The first step in the spanning tree algorithm is the election of a root bridge. Initially, all
switches assume that they are the root. They start transmitting BPDUs with the Root ID
field containing the same value as the bridge ID field. Thus, each switch essentially claims
that it is the root bridge on the network.

When the switches start receiving BPDUs from the other switches, each switch compares
the root ID in the received BPDUs against the value that it currently has recorded as the
root ID. If the received value is lower than the recorded value (which was originally the
BID of that switch), the switch replaces the recorded value with the received value and
starts transmitting this value in the Root ID field in its own BPDUs.
Eventually, all switches learn and record the BID of the switch that has the lowest BID. The
switches all transmit this BID in the Root ID field of their BPDUs.
In the example, Switch B becomes the root bridge because it has the lowest BID. Switch A
and switch B have the same priority, but switch B has a lower MAC address value.

When a switch recognizes that it is not the root (because it is receiving BPDUs that have a
root ID value that is lower than its own BID), it marks the port on which it is receiving
those BPDUs as its root port.
A switch could receive BPDUs on multiple ports. In this case, the switch elects the port
that has the lowest-cost path to the root as its root port. If two ports have an equal path
cost to the root, the switch looks at the BID values in the received BPDUs to make a
decision (where the lowest BID is considered best, similar to root bridge election). If the
root path cost and the BID in both BPDUs are the same because both ports are connected
to the same upstream switch, the switch looks at the Port ID field in the received BPDUs
and selects its root port based on the lowest value in that field.
By default, the cost that is associated with each port is related to its speed (the higher the
interface bandwidth, the lower the cost), but the cost can be manually changed.
Switches A, C, and D mark the ports that are directly connected to switch B (which is the
root bridge) as the root port. These directly connected ports on switches A, C, and D have
the lowest cost to the root bridge.

After electing the root bridge and root ports, the switches determine which switch will
have the designated port for each Ethernet segment; the switch with the designated port
is called the designated bridge for the segment. This process is similar to the root bridge
and root port elections. Each switch that is connected to a segment sends BPDUs out of
the port that is connected to that segment, claiming to be the designated bridge for that
segment. At this point, it considers its port to be a designated port.
When a switch starts receiving BPDUs from other switches on that segment, it compares
the received values of the root path cost, BID, and port ID fields (in that order) against the
values in the BPDUs that it is sending out its own port. The switch stops transmitting
BPDUs on the port and marks it as a nondesignated port if the other switch has lower
values.
In the example, all ports on the root bridge (switch B) are designated ports. The ports on
switch A that are connecting to switch C and switch D become designated ports, because
switch A has the lower root path cost.

To prevent Layer 2 loops while STP executes its algorithm, all ports start out in the
blocking state. When STP marks a port as either a root port or a designated port, the
algorithm starts to transition this port to the forwarding state and all nondesignated ports
remain in the blocking state.
The original and rapid versions of STP both execute the same algorithm in the decision-
making process. However, in the transition of a port from the blocking (or discarding, in
rapid spanning tree terms) to the forwarding state, there is a big difference between
those two spanning tree versions. Classic 802.1D would simply take 30 seconds to
transition the port to forwarding. The rapid spanning tree algorithm can use additional
mechanisms to transition the port to forwarding in less than a second.
Although the order of the steps that are listed in the diagrams suggests that STP goes
through them in a coordinated, sequential manner, that is not actually the case. If you
look back at the description of each step in the process, you see that each switch is going
through these steps in parallel. Also, each switch might adapt its selection of root bridge,
root ports, and designated ports as it receives new BPDUs. As the BPDUs are propagated
through the network, all switches eventually have a consistent view of the topology of the
network. When this stable state is reached, BPDUs are transmitted only by designated
ports. However, all blocking ports are continuously listening for BPDUs that are sent every
2 seconds. If a blocking port stops receiving BPDUs, it will begin the transition to the
forwarding state.
There are two loops in the sample topology, meaning that two ports should be in the
blocking state to break both loops. The port on Switch C that is not directly connected to
Switch B (root bridge) is blocked, because it is a nondesignated port. The port on Switch D
that is not directly connected to Switch B (root bridge) is also blocked, because it is a
nondesignated port.

18.5 Building Redundant Switched Topologies


Types of Spanning Tree Protocols
The STP is a network protocol that ensures a loop-free topology.
Several varieties of spanning tree protocols exist:
o STP (IEEE 802.1D) is the legacy standard that provides a loop-free topology in a
network with redundant links. STP creates a Common Spanning Tree (CST) that
assumes one spanning tree instance for the entire bridged network, regardless of
the number of VLANs.
o PVST+ is a Cisco enhancement of STP that provides a separate 802.1D spanning
tree instance for each VLAN that is configured in the network.
o MSTP, or IEEE 802.1s, is an IEEE standard that is inspired by the earlier Cisco
proprietary Multi-Instance STP (MISTP) implementation. MSTP maps multiple
VLANs into the same spanning tree instance.
o RSTP, or IEEE 802.1w, is an evolution of STP that provides faster convergence of
STP. It redefines port roles and enhances BPDU exchanges.
o Rapid PVST+ is a Cisco enhancement of RSTP that uses PVST+. Rapid PVST+
provides a separate instance of 802.1w per VLAN.
Note: When Cisco documentation and this course refer to implementing RSTP, they are
referring to the Cisco RSTP implementation—Rapid PVST+.
Comparison of Spanning Tree Protocols
The following are the characteristics of various spanning tree protocols:
o STP assumes one 802.1D spanning tree instance for the entire bridged network,
regardless of the number of VLANs. Because only one instance exists, the central
processing unit (CPU) and memory requirements for this version are lower than for
the other protocols. However, because of only one instance, there is only one root
bridge and one tree. Traffic for all VLANs flows over the same path, which can lead
to suboptimal traffic flows. Because of the limitations of 802.1D, this version is
slow to converge.
o PVST+ is a Cisco enhancement of STP that provides a separate 802.1D spanning
tree instance for each VLAN that is configured in the network. The separate
instance supports features like PortFast, UplinkFast, BackboneFast, BPDU guard,
BPDU filter, root guard, and loop guard to enhance security. Creating an instance
for each VLAN increases the CPU and memory requirements but allows for per-
VLAN root bridges. The use of PVST+ gives the administrator the ability to load
balance traffic per VLAN. For example, the root bridge for VLAN 10 could be switch
A and the root bridge for VLAN 20 could be switch B. Convergence of this version is
similar to the convergence of 802.1D. However, convergence is per-VLAN.
o RSTP, or IEEE 802.1w, is an evolution of STP that provides faster STP convergence.
This version addresses many convergence issues, but because it still provides a
single instance of STP, it does not address the suboptimal traffic flow issues. To
support that faster convergence, the CPU usage and memory requirements of this
version are slightly higher than the requirements of original STP but lower than
requirements of PVST+.
o Rapid PVST+ is a Cisco enhancement of RSTP that uses PVST+. It provides a
separate instance of 802.1w per VLAN. This version addresses both the
convergence issues and the suboptimal traffic flow issues. However, this version
has the largest CPU and memory requirements.
o MSTP is an IEEE standard that is inspired by the earlier Cisco proprietary MISTP
implementation. To reduce the number of required STP instances, MSTP enables
mapping of multiple VLANs into the same spanning tree instance with common
root bridge. The Cisco implementation of MSTP provides up to 16 instances of
RSTP (802.1w) and combines many VLANs with the same physical and logical
topology into a common RSTP instance. Each instance supports PortFast, BPDU
guard, BPDU filter, root guard, and loop guard security enhancements. The CPU
and memory requirements of this version are lower than the requirements of
Rapid PVST+ but are higher than the requirements of RSTP.
Default Spanning Tree Configuration
The default spanning tree configuration for Cisco Catalyst switches includes the following
characteristics:
o PVST+
o Enabled on all ports in VLAN 1
o Slower convergence after topology change than with RSTP
The default spanning tree mode for Cisco Catalyst switches is PVST+, which is enabled on
all ports. PVST+ has much slower convergence after a topology change than Rapid PVST+
but requires fewer CPU and memory resources to compute the shortest path tree upon
topology changes.
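To check which spanning tree mode a switch is running and to change it to Rapid PVST+, you can use a short configuration such as the following. This is only a sketch; the host name SW1 is a placeholder and the command output is abbreviated:

SW1# show spanning-tree summary
Switch is in pvst mode
<... output omitted ...>
SW1# configure terminal
SW1(config)# spanning-tree mode rapid-pvst
SW1(config)# end

Changing the mode causes the existing per-VLAN spanning tree instances to be recalculated, so expect a brief reconvergence.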

18.6 Building Redundant Switched Topologies


PortFast and BPDU Guard
Two features that enhance STP are PortFast and BPDU guard. To fully appreciate the
benefits of these features, review the STP initialization process that a switch port
transitions through when it is enabled.
Because STP is responsible for maintaining a loop-free topology, precautions are required
each time that you enable a switch port. If the port is connected to another switch, BPDUs
are exchanged every two seconds to ensure that a loop is not introduced into the
topology. In STP and PVST+, a port goes through these stages when it is enabled:
1. Blocking: For up to 20 seconds, the port remains in the blocking state.
2. Listening: For 15 seconds, the port listens to BPDUs that it received and listens for
new topology information. The switch processes received BPDUs and determines if
any better BPDU was received that would cause the port to transition back to the
blocking state. If no better BPDU was received, the port transitions into a learning
state. In the listening state, the port does not populate the MAC address table with
the addresses it learns and it does not forward any frames.
3. Learning: For up to 15 seconds, the port updates the MAC address forwarding
table, but it does not begin forwarding.
4. Forwarding: Once the switch port is certain it will not form a loop by forwarding
frames, it enters the forwarding state. It still monitors for topology changes that
could require it to transition back to the blocking state to prevent a loop.

If a switch port connects to another switch, the STP initialization cycle must transition
from state to state to ensure a loop-free topology.
However, for access devices such as PCs, laptops, servers, and printers, the delays incurred
with STP initialization can cause problems such as DHCP timeouts. Cisco designed
the PortFast and BPDU guard features as enhancements to STP to reduce the time that is
required for an access device to enter the forwarding state.
STP is designed to prevent loops. Because there can be no loop on a port that is connected
directly to a host or server, the full function of STP is not needed for that port. PortFast is
a Cisco enhancement to STP that allows a switchport to begin forwarding much faster
than a switchport in normal STP mode.

When the PortFast feature is enabled on a switch port that is configured as an access port,
that port bypasses the typical STP listening and learning states. This feature allows the
port to transition from the blocking to the forwarding state immediately. You can use
PortFast on access ports that are connected to a single workstation or to a server to allow
those devices to connect to the network immediately rather than waiting for spanning
tree to converge.
In a valid PortFast configuration, no BPDUs should be received because access and Layer 3
devices do not generate BPDUs. If a port receives a BPDU, that would indicate that
another bridge or switch is connected to the port. This event could happen if a user
plugged a switch on their desk into the port where the user's PC was previously connected.
For example, assume that users decide they want more bandwidth. Since there are two
network access connections in their office, they decide to use both of them. To use them
both, they unplug their PCs from the network access ports and plug them into their
own switch. They then plug the new switch into both of the network access ports. If
PortFast is enabled on both ports of the network switch, this action could cause a loop
and bring the network to a halt.
To avoid such a situation when using PortFast, the BPDU guard enhancement is the
solution. It allows network designers to enforce the STP domain diameter and keep the
active topology predictable. The devices behind ports that have STP PortFast and BPDU
guard enabled cannot influence the STP topology, which prevents users from connecting
additional switches and violating the STP diameter. When a BPDU is received, BPDU guard
effectively disables the port that has PortFast configured by transitioning it into the
errdisable state. A message also appears on the switch console. For example, the following
message might appear:
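The exact wording depends on the platform and software release, but assuming the BPDU arrived on port GigabitEthernet0/1, the messages typically look similar to these:

%SPANTREE-2-BLOCK_BPDUGUARD: Received BPDU on port GigabitEthernet0/1 with BPDU Guard enabled. Disabling port.
%PM-4-ERR_DISABLE: bpduguard error detected on Gi0/1, putting Gi0/1 in err-disable state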

Note: Because the purpose of PortFast is to minimize the time that ports must wait for
spanning tree to converge, you should use it only on ports that no other switch is
connected to, like access ports for connecting user equipment and servers or on trunk
ports when connecting to a router in a router on a stick configuration. If you enable
PortFast on a port that is connecting to another switch, you risk creating a spanning tree
loop, or, with the BPDU guard feature enabled, the port will transition to the errdisable state.
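A minimal configuration sketch that enables PortFast and BPDU guard on an access port follows. The interface and VLAN numbers are placeholders, and on some software releases the PortFast command is entered as spanning-tree portfast edge:

SW1(config)# interface GigabitEthernet0/1
SW1(config-if)# switchport mode access
SW1(config-if)# switchport access vlan 10
SW1(config-if)# spanning-tree portfast
SW1(config-if)# spanning-tree bpduguard enable

You can also enable both features for all nontrunking access ports at once with the spanning-tree portfast default and spanning-tree portfast bpduguard default global configuration commands.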

18.7 Building Redundant Switched Topologies


Rapid Spanning Tree Protocol
A limitation of traditional STP is the convergence delay after a topology change, which is
why the use of RSTP is recommended. RSTP is an IEEE standard that redefines STP port
roles, states, and BPDUs. It greatly improves the recalculation of the spanning tree, and
thus the convergence time, when the Layer 2 topology changes, including when links
come up and for indirect link failures.

Note the following regarding IEEE 802.1w RSTP:


o 802.1D STP was designed for an era of networks that were more tolerant of 50-
second delays in redundancy.
o There are many proprietary mechanisms in place to enhance the performance and
convergence of STP; however, not all vendors or all switches have these
mechanisms.
o The 802.1w RSTP allows the network to converge faster than 802.1D.
o Because RSTP is a standards-based protocol, it operates across multiple vendor
platforms.
o RSTP is backwards compatible with 802.1D.
The main drawback of STP is its slow convergence. Depending on the type of failure, it takes
anywhere from 30 to 50 seconds to converge after a network change. RSTP helps with
convergence issues that plague traditional STP. Cisco proprietary Rapid PVST+ is based on
the 802.1w standard in the same way that PVST+ is based on 802.1D. The operation of
Rapid PVST+ is simply a separate instance of 802.1w for each VLAN.
RSTP is proactive and therefore negates the need for the 802.1D delay timers. RSTP
supersedes 802.1D while remaining backward compatible. Much of the 802.1D
terminology and most parameters remain unchanged. In addition, RSTP is capable of
reverting to 802.1D to interoperate with traditional switches on a per-port basis, and
negotiate port states on a peer switch basis, using a proposal and agreement process.
Numerous differences exist between RSTP and STP; for example, RSTP achieves its rapid
transitions only on full-duplex, point-to-point links between adjacent switches.
RSTP Port Roles
With RSTP, port roles are slightly different than with STP.

RSTP defines the following port roles.


o Root: The root port is the switch port on every nonroot bridge that is the best path
to the root bridge. There can be only one root port on each switch. The root port is
considered part of the active topology. It forwards, sends, and receives BPDUs.
o Designated: In the active topology, a designated port is the switch port that will
receive and forward frames toward the root bridge as needed. There can be only
one designated port per segment.
o Alternate: The alternate port is a switch port that offers an alternate path toward
the root bridge. It assumes a discarding state in an active topology. The alternate
port makes a transition to a designated port if the current designated port fails.
o Backup: The backup port is an additional switch port on the designated switch with
a redundant link to the shared segment for which the switch is designated. The
backup port is in the discarding state in active topology. The backup port moves to
the forwarding state if there is a failure on the designated port for the segment.
o Disabled: A disabled port has no role within the operation of spanning tree.
There is a difference between STP and RSTP port roles. Instead of the STP nondesignated
port role, there are now alternate and backup port roles. These additional port roles allow
RSTP to define a standby switch port before a failure or topology change.
Note: You will probably not see a backup port role in practice. It is used only when
switches are connected to a shared segment. To build shared segments, you need hubs,
which are obsolete.
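The port roles appear in the Role column of the show spanning-tree command output. The following fragment is only illustrative (the bridge addresses, costs, and interface numbers are assumptions), but it shows how a switch running Rapid PVST+ reports Root, Desg, and Altn ports:

SW3# show spanning-tree vlan 1
VLAN0001
  Spanning tree enabled protocol rstp
<... output omitted ...>
Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- ----------------
Gi0/1               Root FWD 4         128.1    P2p
Gi0/2               Desg FWD 4         128.2    P2p
Gi0/3               Altn BLK 4         128.3    P2p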
Comparison of RSTP and STP Port States

The RSTP port states correspond to the three basic operations of a switch port: discarding,
learning, and forwarding. There is no listening state as there was with STP. The listening
and blocking STP states are replaced with the discarding state.
In a stable topology, RSTP ensures that every root port and designated port transitions to
the forwarding state, while all alternate ports and backup ports are always in the discarding state.
The characteristics of RSTP port states are as follows:
o Discarding: The port does not learn MAC addresses and does not forward frames. This
state replaces the STP blocking and listening states.
o Learning: The port learns MAC addresses but does not yet forward frames.
o Forwarding: The port learns MAC addresses and forwards frames.
A port will accept and process BPDU frames in all port states.
Note: In RSTP the PortFast feature is known as an edge port concept. All ports directly
connected to end stations cannot create bridging loops in the network. Therefore, the
edge port directly transitions to the forwarding state, and skips the listening and learning
stages. Unlike PortFast, an edge port that receives a BPDU immediately loses its edge port
status and becomes a normal spanning-tree port.
19.1 Improving Redundant Switched Topologies with EtherChannel
Introduction
The increasing deployment of higher-speed switched Ethernet to the desktop can be
attributed to the proliferation of bandwidth-intensive intranet applications. Any-to-any
communications of new intranet applications such as video to the desktop, interactive
messaging, VoIP, and collaborative applications are increasing the need for scalable
bandwidth within the core and at the edge of campus networks.
Additional bandwidth is required at the access to the network where end devices
generate larger amounts of traffic, at the links that carry traffic aggregated from multiple
end devices (uplinks), and at the links that carry application traffic; for example, at the
links to the Data Center. When additional bandwidth is needed, the speed of these links
can be increased, but only to a certain point. As the speed increases on the links, this
solution finds its limitation where the fastest possible port is no longer fast enough to
aggregate the traffic coming from all the devices.
A second option is to multiply the number of physical links between the two switches to
increase the overall speed of the switch-to-switch communication. But if there are simply
multiple individual links between the two devices, the Spanning Tree Protocol (STP) will
block all except one link to avoid loops in the network.
A solution lies in a technology called EtherChannel. EtherChannel is a technology that
allows you to circumvent these issues by creating logical links made up of several physical
links.
As a network engineer, you will work with EtherChannel in enterprise environments so
you should be aware of the following:
o The need for EtherChannel technology.
o Different options for creating EtherChannels.
o Configuration steps for EtherChannel implementation.

19.2 Improving Redundant Switched Topologies with EtherChannel


EtherChannel Overview
The proliferation of bandwidth-intensive applications such as video and interactive
messaging created a necessity for links with greater bandwidth. Additional bandwidth is
required both at the access to the network, where end devices generate larger amounts of
traffic, and at the links that carry traffic aggregated from multiple end devices—for
instance, at the uplinks.
You can increase link bandwidth by choosing links of higher bit rate but higher bit-rate
links are more expensive. This solution cannot scale indefinitely and, at some point, even a
port with the greatest possible bandwidth might no longer suffice.
Another way to increase link bandwidth is by using more than one link between devices.
Aggregating multiple physical links increases the available bandwidth between two
devices. Aggregating multiple physical links also adds resiliency against link failure, by
providing link redundancy.
In a LAN environment, you can create logical aggregated Ethernet links. Both Layer 2 and
Layer 3 interfaces can be aggregated.
Note: Aggregation can also be implemented in the network at Layer 1. An example of
aggregation at the physical layer is combining frequency bands in wireless
communications.
EtherChannel is a technology that enables link aggregation. In the industry, you will often
encounter terms such as port channel and Ethernet port channel. In an Ethernet LAN
environment, EtherChannel, port channel, and Ethernet port channel refer to the same
technology.
Note: Many other terms are used to name the aggregation concept. Some of them are
link-bundling, network interface card (NIC) bonding, NIC teaming, network bonding, and
channel bonding.
EtherChannel enables packets to be sent over several physical interfaces as if over a single
interface. EtherChannel logically bonds several physical connections into one logical
connection. The process offers redundancy and load balancing while maintaining the
combined throughput of physical links.
Without EtherChannel technology, most control plane protocols such as Layer 2 STP or
Layer 3 routing protocols will treat multiple links as individual links. In the case of STP,
multiple links between the same two devices are treated as loops and, to avoid loops, STP
makes sure that only one link remains operational. Although the additional links add
resiliency, because of the STP, the available bandwidth between the two devices is not
increased.
EtherChannel bundles individual links into a channel group to create a single logical
interface called a port channel that provides the aggregate bandwidth of several physical
links. Each link can be in only one port channel. All the links in a port channel must be
compatible. Among other requirements, they must use the same speed and operate in
full-duplex mode.
The EtherChannel technology was originally developed by Cisco as a means of increasing
speed between switches by grouping several FastEthernet or Gigabit Ethernet ports into
one logical link, called an EtherChannel link, as shown in the following figure. Since the
multiple physical links are bundled into a single EtherChannel, STP no longer sees them as
separate physical links. Instead, it sees a single EtherChannel link. As a result, STP does not
detect a loop and puts the port channel interface (containing all ports)
in the forwarding state. Therefore, the combined bandwidth of bundled physical links is
available to the logical link.
Some devices other than switches also support link aggregation. You can create an
EtherChannel link between two switches or between an EtherChannel-enabled server and
a switch. EtherChannel always creates one-to-one logical links. You cannot send traffic to
two different switches through the same EtherChannel logical link. One EtherChannel
logical link always connects only two devices.
For an EtherChannel logical link to form, all ports on both devices must be correctly
configured. On both sides, ports that are part of the logical link all belong to a logical port
channel interface. You can group from two to eight physical ports (or more on some
platforms) into a port channel logical interface but you cannot mix port types within a
single EtherChannel. For example, you could group four FastEthernet ports into one logical
Ethernet link but you could not group two FastEthernet ports and two GigabitEthernet
ports into one logical Ethernet link.

You can also configure multiple EtherChannel links between two devices, as shown in the
figure above. However, when several logical EtherChannel links exist between two
switches, STP detects loops. To avoid loops, STP will make only one logical link
operational. When STP blocks the redundant links, it blocks one entire EtherChannel, thus
blocking all the ports belonging to that EtherChannel link.
The advantages of the EtherChannel link aggregation include:
o EtherChannel creates an aggregation that is seen as one logical link. Where there is
only one EtherChannel link, all physical links in the EtherChannel are active
because STP sees only one (logical) link. The bandwidth of physical links is
combined to provide increased bandwidth over the logical link.
o Because EtherChannel relies on the existing switch ports, you do not need to
upgrade the ports to faster and more expensive ones to obtain more bandwidth.
Most configuration tasks can be performed on the EtherChannel logical interface
instead of on each individual port, which ensures configuration consistency
throughout the links.
o Load balancing is possible across the physical links that are part of the same
EtherChannel.
o EtherChannel improves resiliency against link failure, as it provides link
redundancy. The loss of a physical link within an EtherChannel does not create a
change in the topology and there will not be a spanning-tree recalculation. As long
as at least one physical link is active, the EtherChannel is functional, even if its
overall throughput decreases.

19.3 Improving Redundant Switched Topologies with EtherChannel


EtherChannel Configuration Options
As a Cisco proprietary technology, EtherChannel was initially implemented using the Cisco
proprietary Port Aggregation Protocol (PAgP). As the link aggregation concept became
widely adopted within the industry, the aggregation control protocol was standardized as
IEEE 802.3ad, the Link Aggregation Control Protocol (LACP), to avoid interoperability
issues. LACP is currently defined in IEEE
802.1AX. Because LACP is an IEEE standard, you can use it to facilitate EtherChannel in
multivendor environments.
To implement aggregated logical links, you can choose to configure them statically or to
configure a dynamic aggregation protocol to automatically create them. Static
configuration is simpler, but more error prone. Link aggregation protocols define a
dynamic negotiation procedure between adjoining switches. Using dynamic protocols
provides a more efficient use of the aggregated logical link.
With LACP, you can control link aggregation (for example, the maximum number of
bundled ports allowed). LACP is also superior to static port channels because of its
automatic failover, where traffic from a failed link within the EtherChannel is sent over the
remaining working links in the EtherChannel.
LACP controls the bundling of physical interfaces to form a single logical interface. When
you configure LACP, LACP packets are sent between LACP enabled ports to negotiate the
forming of a channel. When LACP identifies matched Ethernet links, it groups the
matching links into a logical EtherChannel link.
The individual links must match on several parameters:
o Interface types cannot be mixed, for instance FastEthernet or Gigabit Ethernet
cannot be bundled into a single EtherChannel.
o Speed and duplex settings must be the same on all the participating links.
o Switchport mode and VLAN information must match. Access ports must be
assigned to the same VLAN. Trunk ports must have the same allowed range of
VLANs. The native VLAN must be the same on all the participating links.
The best practice is to ensure that interfaces have consistent settings, before you enable
LACP protocol on them. It is important to remember that interfaces on both sides must be
consistently configured.
The LACP protocol defines two modes:
o LACP active: This LACP mode places a port in an active negotiating state. In this
state, the port initiates negotiations with other ports by sending LACP packets.
o LACP passive: This LACP mode places a port in a passive negotiating state. In this
state, the port responds to the LACP packets that it receives but it does not initiate
LACP packet negotiation. The passive mode is useful when you do not know
whether the remote system supports LACP.
Note: When you configure the desired LACP mode on an interface, you automatically
enable the LACP protocol on that interface.
Manual static configuration places the interface in an EtherChannel manually, without any
negotiation. No negotiation between the two switches means that there is no checking to
make sure that all the ports have consistent settings. There are no link management
mechanisms either.
With static configuration, you define a mode for a port. There is only one static mode, the
on mode. When static on mode is configured, the interface does not negotiate—it does
not exchange any control packets. It immediately becomes part of the aggregated logical
link, even if the port on the other side is disabled. With the on mode, the EtherChannel
configuration is unconditional. If the port is not configured with the static on mode, then it
is not meant to be included in the aggregated link.
For the EtherChannel link to form, modes on both sides of the individual links must be
compatible. The table shows which modes result in aggregation and which do not.
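In summary, the mode combinations behave as follows (the PAgP-specific modes are not shown here):

Mode on one side     Mode on the other side     EtherChannel forms?
active               active                     Yes
active               passive                    Yes
passive              passive                    No
on                   on                         Yes
on                   active or passive          No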
As you can see from the table, if you configure one side to be in passive mode, it will
behave passively, waiting for the other side to initiate the EtherChannel negotiation. If the
other side is also set to passive, the negotiation never starts and the EtherChannel does
not form. If you disable all modes by using the no version of the command or if no mode is
configured, then the interface is placed in the off mode and EtherChannel is disabled.
For the LACP enabled link to be included in the EtherChannel, at least one of the ports
must be configured with the active mode.
The on mode manually places the interface in an EtherChannel, without any negotiation. It
works only if the other side is also set to on. If the other side is set to negotiate
parameters through LACP, no EtherChannel will form, because the side that is set to on
mode will not negotiate.
Note that once an EtherChannel is formed, whether by static configuration or dynamic
negotiation, if a link within the EtherChannel fails, the EtherChannel will still be functional,
as long as at least one physical link is active. The overall throughput would of course
decrease in this situation.

An advantage of configuring dynamic protocols to establish EtherChannel is protection
from misconfigurations. LACP can ensure that the configuration at both ends fulfills the link
aggregation requirements, before establishing an EtherChannel link. If you accidentally
misconfigure a port or if you accidentally make a mistake in cabling, for instance, by
plugging a cable in a trunk port on one side, and in an access port on the other side, LACP
will not allow an EtherChannel link to form. With static link aggregation, a cabling or
configuration mistake could go undetected and cause undesirable network behavior.
Because EtherChannel uses several links to transport packets through the physical
infrastructure, the packets will be distributed between the physical links through load
balancing. Load balancing takes place between links that are part of the same
EtherChannel. Depending on the hardware platform, one or more load-balancing methods
can be implemented. These methods include source MAC address to destination MAC
address load balancing—or source IP address to destination IP address load balancing—
across the physical links. Some methods can include source and destination port numbers
as well.
The goal of load balancing is not only to utilize all available links, but also to ensure that
packets with the same header information will be forwarded on the same physical link to
prevent unordered packet delivery. Load-balancing is performed in the hardware and is
enabled by default.
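On many Catalyst platforms the load-balancing method is configured globally and can be verified as shown in the following sketch; the available keywords, such as src-dst-ip, vary by platform:

SW1(config)# port-channel load-balance src-dst-ip
SW1(config)# end
SW1# show etherchannel load-balance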
After you configure an EtherChannel, any configuration changes applied to the port
channel interface apply to all the physical ports assigned to the port channel interface.
Configuration changes applied to the physical port affect only the port where you apply
the configuration, so it is best not to change the configuration of a physical port once it is
part of an EtherChannel. To change the parameters of all ports in an EtherChannel, apply
configuration commands to the port channel interface—for example, spanning-tree
commands or commands to configure a Layer 2 EtherChannel as a trunk.
Layer 2 and Layer 3 EtherChannel
Interfaces can be bundled into two types of EtherChannels, depending on the type of
interfaces you are attempting to join to the port channel:
o Layer 2 EtherChannel bundles access or trunk ports between switches or other
devices (for example, servers).
o Layer 3 EtherChannel bundles routed ports between switches or routers.
Both Layer 2 and Layer 3 EtherChannels are common in an enterprise network. Layer 3
EtherChannel links are implemented within the LAN, mostly between Layer 3 switches, or
between a Layer 3 switch and a router. Enterprises also implement Layer 3 EtherChannel
on the links connecting to the WAN service provider, where the aggregated link is
established between the enterprise edge router and the service provider’s router.
In the figure, you see an example of the enterprise LAN topology. Layer 2 and Layer 3
switches are connected using EtherChannel links, which consist of pairs of ports.
Aggregated links that exist between SW1 and the Access switch and SW2 and the Access
switch, are Layer 2 EtherChannel links. Aggregated links between the SW1 and SW2
switches and between the SW1 switch and router R1, are Layer 3 EtherChannel links.
When an aggregated link is a Layer 3 link, the IP addresses are assigned to the logical port
channel interfaces on both sides of the link, and not to the member interfaces. A port
channel for a Layer 3 aggregated link is a routed interface and it can have subinterfaces,
just like other nonaggregated routed interfaces. It can also be enabled for routing
protocols.
The configuration options are the same for both types of EtherChannel links – you can
choose to configure an aggregation protocol (LACP) or you can manually configure the
link. Whatever the aggregation method you choose, ports you are aggregating must be of
the same type, such as routed ports, and they must have the same attributes.
WAN service providers sometimes implement Layer 1 devices in the connection from the
customer’s router to the service provider router. Examples of these devices are media
converters and multiplexers. This intermediary Layer 1 equipment might block LACP
protocol messages. To ensure link aggregation, you should opt for static manual Layer 3
EtherChannel configuration.
Note: Some older router platforms do not support dynamic aggregation protocols. On
these platforms, you must configure EtherChannel manually.
In the default Layer 3 switch configuration, the routing function is disabled and all ports
are switched Layer 2 ports. Switched ports can be converted to routed ports. Routed ports
behave like ports found on router platforms. Routed ports do not run Layer 2
management protocols, like STP, Dynamic Trunking Protocol (DTP), and others. Routed
ports are not members of any VLANs manually configured on the switch. A routed port on
a switch represents a boundary between different Layer 2 domains. Because routed ports
do not run STP, they do not need to wait for STP calculations.
For successful establishment of a Layer 3 EtherChannel link, physical ports on each side of
the aggregated link must be configured as routed ports. They also must have matching
port attributes, such as bandwidth and duplex mode.
In addition, the logical port channel interface must be a routed interface. A Layer 3
EtherChannel will become active only when both the aggregated interface and its
constituent physical interfaces are routed interfaces.
The figure illustrates the difference between Layer 2 and Layer 3 EtherChannel. In the
figure, there are three aggregated links, represented by logical interfaces port-channel 20,
port-channel 21, and port-channel 22.
Port-channel 20 is an access Layer 2 logical interface for the aggregated link that bundles
physical ports GigabitEthernet 1/1 and GigabitEthernet 1/2. Note that both
GigabitEthernet 1/1 and GigabitEthernet 1/2 have the same Layer 2 attributes: they are
both access ports and they both belong to VLAN 1.
Similarly, port-channel 21 is a trunk Layer 2 logical interface for the aggregated link that
bundles GigabitEthernet 2/1 and GigabitEthernet 2/2 physical interfaces. Both
GigabitEthernet 2/1 and GigabitEthernet 2/2 are configured as trunks and allow the same
VLANs: VLAN 1 and VLAN 2.
VLAN 1 and VLAN 2 have corresponding Switch Virtual Interfaces (SVIs) configured. These
SVIs provide Layer 3 IP connectivity to their corresponding VLANs. Although related to
those SVIs, both EtherChannels that are represented by port-channel 20 and port-channel
21 are still Layer 2 aggregated links; for instance, they still run Layer 2 management protocols.
Port-channel 22 is a routed logical interface for the aggregated link that bundles physical
ports GigabitEthernet 1/3 and GigabitEthernet 1/4. Both GigabitEthernet 1/3 and
GigabitEthernet 1/4 are configured as routed ports. The figure also shows an
unaggregated routed port GigabitEthernet 2/3.

19.4 Improving Redundant Switched Topologies with EtherChannel


Configuring and Verifying EtherChannel
Configuring EtherChannel
EtherChannel bundles individual links into a single logical interface called a port channel.
There are slight differences in configuration of Layer 2 EtherChannel and Layer 3
EtherChannel. For both types of EtherChannels, you configure interface bundling on
physical interfaces. Once interfaces are bundled, you configure aggregated link
parameters on the logical port channel interface.
Requirements and restrictions for EtherChannel configuration include:
o All Ethernet interfaces must support EtherChannel.
o The same configuration options must be supported on both devices connected
with the aggregated link. You should verify which configuration options are
available on devices. Some router platforms support only manual configuration.
o You can typically group from two to eight physical ports, but some platforms allow
more ports to be included. Verify the maximum allowed number of ports that you
can bundle.
o The interface mode, switched or routed, should be the same for aggregated
interface and for member interfaces.
o You cannot mix port types within a single EtherChannel. For example, you could
group four FastEthernet ports into one logical Ethernet link, but you could not
group two FastEthernet ports and two GigabitEthernet ports into one logical
Ethernet link. There is no requirement that the interfaces should be physically
contiguous, or on the same module.
o All ports on both devices must be correctly configured. The individual links must
match on several parameters:
o Speed and duplex settings must be the same on all the participating links.
o Interface mode must match: you cannot aggregate switched and routed
ports in the same EtherChannel.
o Switchport mode must match: all interfaces in the EtherChannel bundle
must be assigned to the same VLAN or be configured as a trunk.
o VLAN information must match: access ports must be assigned to the same
VLAN. Trunk ports must have the same allowed range of VLANs. The native
VLAN must be the same on all the participating links.
You should ensure that interfaces have consistent settings before you bundle them. If you
have to change these settings later, configure them on the corresponding port channel
interface.
After you configure the port channel interface, any configuration that you apply to the
port channel interface affects member interfaces as well. The opposite does not apply:
configuring a member interface directly can cause interface incompatibility and suspension
of that link from the EtherChannel.
It is recommended that you configure LACP protocol to establish EtherChannel links, if the
platform you are working on supports it. LACP can prevent misconfiguration issues by
ensuring that configurations at both ends of the aggregated link fulfill the link aggregation
requirements.
To ensure configuration consistency of the physical interfaces, you can utilize the interface
range command. The syntax of the command requires that you enter the interface type,
followed by identifiers of the first and the last interface in the range.

The example in the figure shows how the interface range command is used to configure
four GigabitEthernet interfaces of SW1. The range is specified by providing interface type
(GigabitEthernet), and identifiers of the first interface and the last interface (0/1–4). The
command in the example specifies four interfaces, the first being GigabitEthernet 0/1, and
the last being GigabitEthernet 0/4. Once you specify the range, all the configuration
commands that follow apply to all the interfaces included in the range. Using the interface
range command, you easily ensure that all the interfaces have the same configuration. A
similar configuration must be applied on the SW2 switch also.
Once you successfully bundle the ports, you can ensure consistent configuration by
applying it to the port channel interface.
When configuring EtherChannel, a good practice is to start by shutting down the
interfaces to be aggregated, so that incomplete configuration will not start to create
activity on the link.
After shutting down the member interfaces, proceed by using the channel-group
command to specify the port channel identifier, also called channel group number, and
the method for establishing the aggregated link.
The command syntax is channel-group channel-group-number mode {on | active | passive}.
o The channel-group command assigns the interface to the port channel interface
and automatically creates the port channel interface. The channel-group-number
specifies the identifier of the port channel interface for the aggregated link.
o With the mode keyword, the channel-group command also specifies the method
for link aggregation. The keywords specifying the link aggregation method have the
following meanings:
o on: Forces the port to aggregate without LACP. In the on mode, an
EtherChannel is established only when a port group in the on mode is
connected to another port group in the on mode.
o active: Enables LACP only if a LACP device is detected at the other end of
the link. The active mode places the port into an active negotiating state in
which the port starts negotiations with other port by sending LACP packets.
o passive: Enables LACP on the port and places it into a passive negotiating
state in which the port responds to LACP packets that it receives, but does
not start LACP packet negotiation.
The example configuration in the previous figure bundles GigabitEthernet0/1,
GigabitEthernet0/2, GigabitEthernet0/3, and GigabitEthernet0/4 into a Layer 2
EtherChannel link represented by the logical interface port-channel 1. Layer 2 settings of
the EtherChannel interface, trunking, and VLANs allowed on the trunk, are configured on
the logical port channel interface.
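A configuration consistent with that description might look like the following on SW1. The allowed VLANs (10 and 20) are assumptions, and a matching configuration is required on SW2:

SW1(config)# interface range GigabitEthernet0/1 - 4
SW1(config-if-range)# shutdown
SW1(config-if-range)# channel-group 1 mode active
Creating a port-channel interface Port-channel 1
SW1(config-if-range)# exit
SW1(config)# interface port-channel 1
SW1(config-if)# switchport mode trunk
! Some platforms require switchport trunk encapsulation dot1q before the mode trunk command.
SW1(config-if)# switchport trunk allowed vlan 10,20
SW1(config-if)# exit
SW1(config)# interface range GigabitEthernet0/1 - 4
SW1(config-if-range)# no shutdown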
The following list summarizes the steps used to configure Layer 2 EtherChannel:
1. Use interface range command to configure interface attributes for interfaces that
are being aggregated [optional].
2. Shut down interfaces that will be aggregated, using the shutdown command.
3. Bundle the interfaces using channel-group command by specifying the port
channel identifier and aggregation method:
a. Choose on for manual unconditional aggregation
b. Choose active or passive to enable LACP
4. Configure the port channel interface.
5. Enable interfaces that were previously shut down.
Note: The channel-group identifier does not need to match on both sides of the port
channel. However, it is a good practice to do so because it makes it easier to manage the
configuration.
To configure Layer 2 EtherChannel, you do not need to change the default settings for the
interface modes. On switches, physical interfaces and port channel interfaces are Layer 2
ports, or switched ports, by default.
When configuring Layer 3 EtherChannel on a Layer 3 switch, there are several specifics
that you normally do not encounter with Layer 2 EtherChannels.
First, you should ensure that the interfaces that you are aggregating are all routed
interfaces—and that applies to both the port channel interface and to member interfaces.
When configuring Layer 3 EtherChannels, it is recommended that you first manually
create the port channel logical interface, and convert it to the routed interface.
To create the port channel logical interface, use the interface port-channel port-channel-
identifier global configuration mode command. By default, port channel interface is a
Layer 2 interface. Therefore, use the no switchport command to make it a routed
interface. The no switchport command deletes any configuration specific to Layer 2 on
the interface.
In the next step, you configure the port channel interface with an IP version 4 (IPv4)
address using the ip address command. Note that the IPv4 address is assigned to the
logical port channel interface, and not to any of the member physical interfaces. If a
member interface already has an IPv4 address assigned, and you wish to assign the same
IPv4 address to the port channel interface, you must first delete the IPv4 address from the
member interface before configuring it on the port channel interface.
Finally, you should configure member interface bundling. For successful bundling to a
Layer 3 EtherChannel, all member interfaces must be routed interfaces. Use the no
switchport command to make the interfaces routed interfaces.
The command used to bundle member interfaces is the same as for the Layer 2
EtherChannels. Use the channel-group command to specify the port channel identifier
and the method of aggregation. The port channel identifier that you choose for the logical
interface must match the number you use with the channel-group command when
configuring member interfaces.
Use LACP where the platforms allow it. If the platform does not support aggregation
protocols, you have to configure static aggregation. Beware that, with static configuration,
misconfigurations on devices are not going to be detected automatically.
The following is an example of Layer 3 EtherChannel configuration.
The example in the figure shows the configuration of a Layer 3 EtherChannel link between
two Layer 3 switches. The configuration example is given only for the SW1 switch. A
similar configuration must be applied on the SW2 switch also.
The first line of the configuration creates a logical port channel interface with the
identifier 3. When the port channel interface does not exist, it is created using interface
port-channel command. The port channel interface is configured as a routed interface
using the no switchport command. Once the port channel interface is a routed interface,
you can configure Layer 3 parameters, such as the IPv4 address. The IPv4 address assigned
to the port-channel 3 interface on SW1 is 172.16.3.10/24.
To configure member interface bundling, the example uses the interface range command.
All member interfaces are converted to routed interfaces using the no switchport
command. Interface bundling is specified with the channel-group command. The channel-
group identifier is 3, which matches the identifier of the previously created port channel
interface. The aggregation method is set to on, which means that the interfaces are
bundled manually. For the Layer 3 EtherChannel to be fully operational, the configuration
on SW2 switch must specify the same aggregation method. The port channel identifier
does not need to match between SW1 and SW2 switches, but it is best practice that they
do match.
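A sketch of the SW1 side of that configuration follows. The member interfaces GigabitEthernet 0/1 and GigabitEthernet 0/2 are assumptions, because the text does not name them:

SW1(config)# interface port-channel 3
SW1(config-if)# no switchport
SW1(config-if)# ip address 172.16.3.10 255.255.255.0
SW1(config-if)# exit
SW1(config)# interface range GigabitEthernet0/1 - 2
SW1(config-if-range)# no switchport
SW1(config-if-range)# channel-group 3 mode on
SW1(config-if-range)# end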
The following list summarizes the steps used to configure Layer 3 EtherChannel:
1. Create a logical port channel interface using interface port-channel command.
2. Turn the logical port channel interface into a routed interface, using the no
switchport command.
3. Assign an IPv4 address to the port channel interface.
4. Use the interface range command to configure member interfaces:
a. Convert member interfaces into routed ports using the no switchport
command
b. Bundle the interfaces using the channel-group command by specifying the
logical interface identifier and aggregation method: choose on for manual
unconditional aggregation, or choose active or passive to enable LACP.
Verifying EtherChannel Configuration
You can use several commands to verify an EtherChannel configuration. Using verification
commands, you can make sure that EtherChannel link is operational, at which layer it is
operating, whether all its member interfaces are active, which aggregation method is
configured, and so on.
The following commands are available for EtherChannel verification in Cisco IOS Software:
o The show interface port-channel command displays the general status of the
logical port channel interface that represents the aggregated link. In the example,
the interface port-channel 1 is operational.
o The show etherchannel summary command displays one line of information per
port channel and is particularly useful when several port channel interfaces are
configured on the same device. The output of the command provides, among
other, information on port channel interface status, method used for link
aggregation, member interfaces, and their status. In the example output, the
switch has one EtherChannel configured; group 1 uses LACP. The interface bundle
consists of the FastEthernet0/1 and FastEthernet0/2 interfaces; the letter P
indicates that these ports are bundled. You can see that the aggregated link is a
Layer 2 EtherChannel, and that it is in use. The letters SU indicate that the
interface is a Layer 2 interface: The letter S stands for Layer 2 and the letter U
stands for in use.
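An illustrative fragment of that output follows; the flag legend is shortened and the exact formatting varies between software releases:

SW1# show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        S - Layer2      U - in use
<... output omitted ...>
Group  Port-channel  Protocol    Ports
------+-------------+-----------+----------------------------
1      Po1(SU)         LACP      Fa0/1(P)    Fa0/2(P)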

o The show etherchannel port-channel command displays information about the
specific port channel interface. In the output, the Group: field indicates the port
channel identifier, which is 1 in the example. The protocol used to bundle the ports
is indicated in the Protocol: output field, and it is LACP in the example. Member
interfaces are indicated in the table called Ports in the Port-channel: and, in this
example, the members are two physical interfaces FastEthernet0/1 and
FastEthernet0/2. The EC state indicates whether the member interface is
operational, for instance, whether it actively participates in the EtherChannel. The
example output shows that both member interfaces are active.
Note: Load does not actually indicate the load over an interface. It is a hexadecimal value
that indicates which interface will be chosen for a specific flow of traffic.
o The show etherchannel summary command displays a summary of EtherChannel
information. In the following example, you see summarized information about the
Layer 3 EtherChannel links on an SW1 switch.
The one-line summary shows the "RU" indication next to the Po5 interface label. The “R”
tells you that port-channel 5 represents a Layer 3 EtherChannel. The member interfaces
GigabitEthernet 0/1 and GigabitEthernet 0/2, are indeed bundled, because the "P" flag,
standing for “bundled in port-channel” is next to each of them. Note that this command
will not give you information about the IPv4 address configured, which you can obtain
using the show ip interface port-channel 5 command.
GigabitEthernet 0/1 and GigabitEthernet 0/2 will now behave as a single logical Layer 3
interface. For instance, if you issue the show ip route command, routes will be seen as
reachable through Port-channel 5 and not through either GigabitEthernet 0/1 or
GigabitEthernet 0/2, as illustrated in the following example output:
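The following fragment is representative; the connected network 192.168.50.0/24 is an assumption used only for illustration:

SW1# show ip route
<... output omitted ...>
      192.168.50.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.50.0/24 is directly connected, Port-channel5
L        192.168.50.1/32 is directly connected, Port-channel5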
20.1 Exploring Layer 3 Redundancy
Introduction
End devices are typically configured with a single default gateway IP address that does not
change when the network topology changes. If the router whose IP address is configured
as the default gateway fails, the local device is unable to send packets off the local
network segment, so it effectively gets disconnected from the rest of the network. Even if
a redundant router exists that could serve as a default gateway for that segment, there is
no dynamic method to help the devices in the segment determine the address of a new
default gateway.
A solution lies in creating a type of router redundancy where a set of routers work
together to present the illusion of a single router to the hosts on the LAN. By sharing an IP
address and a MAC address, two or more routers can act as a single “virtual” router. The
redundancy protocol provides the mechanism for determining which router should take
the active role in forwarding traffic and determining when that role must be taken over by
a standby router. The transition from one forwarding router to another is transparent to
the end devices. Even though the example is explained on routers, in modern networks,
the devices performing this function would typically be Layer 3 switches.

As a networking engineer, you will need to be familiar with Layer 3 redundancy including:
o The need for default gateway redundancy.
o The default gateway redundancy protocol options.
20.2 Exploring Layer 3 Redundancy
Need for Default Gateway Redundancy
When routers have different paths to specific destinations (through redundant next-hop
routers) and the primary path becomes unavailable, the routing protocol between the
routers will dynamically converge, providing a connection through the secondary path.
Hosts that run routing protocols can react in the same manner when the primary path
towards different subnets fails, since they do not depend on a default gateway
configuration. For example, you can have a Microsoft Windows server with the LAN
routing feature that can communicate with two different routers to establish connectivity
to remote subnets. If the primary router fails, the server will use the information from the
routing protocol to switch to the secondary path.
However, most client computers, servers, printers, and so on do not support dynamic
routing protocols and whenever they need to communicate with a host that is located in a
different subnet, they must relay packets through the default gateway. Therefore, the
availability of this gateway is extremely important.
For example, a company that has dual redundant routers that connect users to the
internet may experience a problem when the primary router goes down. Without an extra
protocol, none of the devices on the company's network can access the internet because
of the primary router failure. Even though the secondary router is operational, the devices
may not be configured to access a secondary router when the primary router goes down.
Hence, an extra feature is needed that can provide default gateway redundancy to the
clients.
The following figure illustrates a topology with redundant routers that provide routing
functions in the specific segment. When the host determines that a destination IPv4
network is not on its local subnet, it forwards the packet to the default gateway. Most
IPv4 hosts do not run a dynamic routing protocol to build a list of reachable networks.
Instead, they rely on a manually configured or dynamically learned default gateway to
route all packets. Typically, IPv4 hosts are configured to request addressing information,
including the default gateway, from a DHCP server.
Redundant equipment alone does not guarantee failover. In this example, both Router A
and Router B are responsible for routing packets for the 10.1.10.0/24 subnet. Because the
routers are deployed as a redundant pair, if Router A becomes unavailable, the interior
gateway protocol (IGP) can quickly and dynamically converge and determine that Router B
will now transfer the packets that would otherwise have gone through Router A. Because
the end device does not run a routing protocol, it will not receive the dynamic routing
information.
The end device is configured with a single default gateway IPv4 address, which does not
dynamically update when the network topology changes. If the default gateway fails, the
local device is unable to send packets out of the local network segment. As a result, the
host is isolated from the rest of the network. Even if a redundant router that could serve
as a default gateway for that segment exists, there is no dynamic method by which these
devices can determine the address of a new default gateway.
Note: Though the example is illustrated on routers, it is equally valid on Layer 3 switches.

20.3 Exploring Layer 3 Redundancy


Understanding FHRP
Since most IP hosts have a single default gateway IP address, which does not change when
the network topology changes, an extra feature on routers is needed that can provide
Layer 3 gateway redundancy to the hosts. First Hop Redundancy Protocols (FHRPs) are a
group of protocols with similar functionality that enable a set of routers or Layer 3
switches to present an illusion of a "virtual" router. The virtual router is assigned a virtual
IP address and a virtual MAC address, which are shared by the participating routers. This is
the base prerequisite for providing gateway redundancy to hosts that cannot detect a change in the
network.
The following figure represents a generic FHRP scenario with a set of routers working
together to present the illusion of a single router to the hosts on the LAN. By sharing an IP
(Layer 3) address and a MAC (Layer 2) address, two or more routers can act as a single
"virtual" router.

Hosts that are on the local subnet should have the IP address of the virtual router as their
default gateway. When an IPv4 host needs to communicate to another IPv4 host on a
different subnet, it will use Address Resolution Protocol (ARP) to resolve the MAC address
of the default gateway. The ARP resolution returns the MAC address of the virtual router.
The host then encapsulates the packets inside frames sent to the MAC address of the
virtual router; these packets are then routed to their destination by any active router that
is part of that virtual router group. The standby router takes over if the active router fails.
Therefore, the virtual router as a concept has an active (forwarding) router and standby
router.
You use an FHRP to coordinate two or more routers as the devices that are responsible for
processing the packets that are sent to the virtual router. The host devices send traffic to
the address of the virtual router. The actual (physical) router that forwards this traffic is
transparent to the end stations.
The redundancy protocol provides the mechanism for determining which router should
take the active role in forwarding traffic and determining when a standby router should
take over that role. When the forwarding router fails, the standby router detects the
change and a failover occurs. Hence, the standby router becomes active and starts
forwarding traffic destined for the shared IP address and MAC address. The transition
from one forwarding router to another is transparent to the end devices.
A common feature of FHRP is to provide a default gateway failover that is transparent to
hosts. Cisco routers and switches typically support the use of three FHRPs:
1. Hot Standby Router Protocol (HSRP): HSRP is an FHRP that Cisco designed to
create a redundancy framework between network routers or Layer 3 switches to
achieve default gateway failover capabilities. Only one router per subnet forwards
traffic. HSRP is defined in RFC 2281.
2. Virtual Router Redundancy Protocol (VRRP): VRRP is an open FHRP standard that
offers the ability to add more than two routers for additional redundancy. Only
one router per subnet forwards traffic. VRRP is defined in RFC 5798.
3. Gateway Load Balancing Protocol (GLBP): GLBP is an FHRP that Cisco designed to
allow multiple active forwarders to load-balance outgoing traffic on a per-host basis
rather than on a per-subnet basis, as HSRP does.
The routers communicate FHRP information between each other through hello messages,
which also represent a keepalive mechanism. This figure illustrates the FHRP failover
process.

When the forwarding router or the link, where FHRP is configured, fails, these steps take
place:
1. The standby router stops seeing hello messages from the forwarding router.
2. The standby router assumes the role of the forwarding router.
3. Because the new forwarding router assumes both the IP and MAC addresses of the
virtual router, the end stations see no disruption in service.
20.4 Exploring Layer 3 Redundancy
Understanding HSRP
HSRP is an FHRP that facilitates transparent failover of the first-hop IP device (default
gateway). When you use HSRP, you configure the host with the HSRP virtual IP address as
its default gateway, instead of using the IP address of the router.
HSRP Overview
HSRP defines a standby group of routers, while one router is designated as the active
router, as depicted in this figure.

HSRP provides gateway redundancy by sharing IP and MAC addresses between redundant
gateways. The protocol consists of virtual IP and MAC addresses that the two routers that
belong to the same HSRP group share between each other.
Hosts on the IP subnet that are protected by HSRP have their default gateway configured
with the HSRP group virtual IP address.
When IPv4 hosts use ARP to resolve the MAC address of the default gateway IPv4 address,
the active HSRP router responds with the shared virtual MAC address. The packets that
are received on the virtual IPv4 address are forwarded to the active router.
The HSRP active and the standby router perform the following functions:
o Active router:
o Responds to default gateway ARP requests with the virtual router MAC
address.
o Assumes active forwarding of packets for the virtual router.
o Sends hello messages between the active and standby routers.
o Knows the virtual router IPv4 address.
o Standby router:
o Sends hello messages.
o Listens for periodic hello messages.
o Assumes active forwarding of packets if it does not hear from active router.
o Sends Gratuitous ARP message when standby becomes active.
HSRP routers send hello messages that reach all HSRP routers. The active router sources
hello packets from its configured IPv4 address and the shared virtual MAC address. The
standby router sources hellos from its configured IPv4 address and its burned-in MAC
address (BIA). Hence, the HSRP routers can identify which router is the active router and
which is the standby router.
The following table summarizes the HSRP terminology:
The function of the HSRP standby router is to monitor the operational status of the HSRP
group and to quickly assume the packet-forwarding responsibility if the active router
becomes inoperable. When the primary HSRP router comes back online, it will not regain
the active role by default. To transfer the active role to the primary router, you have to
configure pre-emption.
The standby preempt command enables the HSRP router with the highest priority to
immediately become the active router. Priority is determined first by the configured
priority value and then by the IPv4 address. In each case, a higher value is of greater
priority. Pre-emption is recommended because you want your network to have
deterministic behavior.
HSRP for IPv4 has two versions: Version 1 and Version 2. The default HSRP version is 1.
Because the two versions are not compatible, you must use the same version on your
HSRP enabled routers.
The shared virtual MAC address is generated by combining a specific MAC address range
and the HSRP group number. HSRP Version 1 uses a MAC address in the form
0000.0C07.ACXX and HSRP Version 2 uses a MAC address in the form 0000.0C9F.FXXX,
where XX or XXX stand for the group number. For example, the virtual MAC address for an
HSRP Version 2 virtual router in group 10 would be 0000.0C9F.F00A. The A in 00A is the
hexadecimal value for 10.
In addition, routers with HSRP Version 1 send hello packets to the multicast address of
224.0.0.2 (reserved multicast address used to communicate to all routers) on UDP port
1985, while HSRP Version 2 uses the 224.0.0.102 multicast address on UDP port 1985.
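To tie this to configuration, the following minimal sketch (with a hypothetical interface and addressing) shows how HSRP Version 2 could be enabled for group 10 on a router interface; with these commands, the group would use the virtual MAC address 0000.0C9F.F00A described above.
  ! Minimal sketch: HSRP Version 2, group 10 (interface and addresses are hypothetical)
  interface GigabitEthernet0/0
   ip address 10.0.0.2 255.255.255.0
   standby version 2
   ! Group 10 in Version 2 maps to virtual MAC 0000.0C9F.F00A
   standby 10 ip 10.0.0.1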
HSRP Advanced Features
Besides the default behavior, you can configure some other HSRP features to increase
your network availability and performance:
o Load balancing: Routers can simultaneously provide redundant backup and
perform load sharing across various subnets and VLANs.
o Interface tracking: When a tracked interface becomes unavailable, the HSRP
tracking feature ensures that a router with the unavailable interface will relinquish
the active router role.
The following figure illustrates a topology with multiple VLANs that can benefit from the
HSRP load-balancing feature.

The two Layer 3 switches have HSRP enabled in two separate VLANs. For each VLAN, HSRP
allocates a standby group, a virtual IPv4 address, and a virtual MAC address. The active
router for each HSRP group is on a different Layer 3 switch. Thus, the hosts in different
VLANs use a different Layer 3 switch, which enables load sharing across various subnets
and VLANs.
The active router in HSRP is elected based on the HSRP priority, which is 100 by default
and is configurable per HSRP group. In the case of an equal priority, the router with the
highest IPv4 address for the respective group is elected as an active router.
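The following configuration sketch, with hypothetical VLAN interfaces and addressing, shows how this load-sharing design might look on one of the two Layer 3 switches: a higher priority is set for VLAN 10, so this switch becomes the active router there, while the default priority is kept for VLAN 20, where the other switch (configured as the mirror image) is active.
  ! Layer 3 switch A (sketch; VLANs and addresses are hypothetical)
  interface Vlan10
   ip address 10.1.10.2 255.255.255.0
   standby 10 ip 10.1.10.1
   standby 10 priority 110
   standby 10 preempt
  !
  interface Vlan20
   ip address 10.1.20.2 255.255.255.0
   standby 20 ip 10.1.20.1
  ! Switch B would mirror this, with priority 110 and preempt configured under VLAN 20 instead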
The HSRP interface tracking feature decreases the priority of the router by a configured
value, when a tracked interface becomes unavailable. In this situation, the priority of a
standby group router may become higher and it will take the role of active router.
Therefore, a router with the unavailable interface will relinquish the active router role.
The following topology has two redundant routers that connect a host to the internet. The
routers have HSRP enabled on the interfaces that are facing the host network (interface
fa0/0 on each router.)
The primary router (Router 1) is configured with priority 110 while the secondary router
(Router 2) has a default priority of 100. Router 1 is also configured with the HSRP interface
tracking option for the interface Fa0/1, which is connected to the internet. If this interface
becomes unavailable, the HSRP priority of Router 1 decreases by 20, so Router 1
relinquishes the active role to Router 2, which then has the higher priority. When interface
Fa0/1 on Router 1 comes back online, Router 1 reverts to its configured priority and
becomes the active router again. These changes occur only if pre-emption is enabled.
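A configuration sketch for Router 1 in this scenario might look like the following; the addressing is hypothetical, and note that newer IOS releases may express interface tracking through a tracked object (the track command) rather than the legacy interface form shown here.
  ! Router 1 (sketch): priority 110, pre-emption, and tracking of Fa0/1
  interface FastEthernet0/0
   ip address 192.168.1.2 255.255.255.0
   standby 1 ip 192.168.1.1
   standby 1 priority 110
   standby 1 preempt
   standby 1 track FastEthernet0/1 20
  ! The last command decrements the HSRP priority by 20 if Fa0/1 goes down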
HSRP is a Cisco proprietary protocol and VRRP is a standard protocol. VRRP is similar to
HSRP, both in operation and configuration, and the differences between HSRP and VRRP
are very slight. The VRRP master is analogous to the HSRP active gateway, while the VRRP
backup is analogous to the HSRP standby gateway. Other VRRP differences from HSRP
include that it allows you to use the actual IP address of one of the VRRP group members
as a virtual IP address, and that it uses a different multicast address for communication
between peers.
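For comparison, a minimal VRRP sketch (hypothetical addressing) looks very similar to its HSRP counterpart; unlike HSRP, Cisco IOS enables VRRP pre-emption by default, and the virtual IP address may also be the real address of one group member.
  ! VRRP group 1 on the gateway interface (sketch; addresses are hypothetical)
  interface GigabitEthernet0/0
   ip address 192.168.1.2 255.255.255.0
   vrrp 1 ip 192.168.1.1
   vrrp 1 priority 110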
21.1 Introducing WAN Technologies
Introduction
When users in enterprise networks need access to remote sites, or a branch must connect
to the enterprise campus, or when remote users must access the enterprise LAN, a wide-
area network, or WAN, is needed. As the name suggests, WANs cover large geographical
areas. WANs are operated by companies such as telephone or cable companies, service
providers, or satellite companies. They build large networks that span entire cities or
regions and lease the right to use their networks to their customers.
Many WAN technologies exist today and new technologies, such as 4G and 5G Mobile
networks, are constantly emerging. An increasingly common option for enterprises is also
to use the global internet infrastructure for WAN connectivity.
One of the most important aspects of interconnecting enterprise sites and users is
security. In order to secure traffic in transit over the service provider networks or internet,
Virtual Private Networks (VPNs) are deployed. There are multiple options for VPNs and
sometimes enterprises need to combine multiple different services in the network,
depending on the availability of services and business needs.

As a network engineer, you should keep up on possible WAN connectivity options and
other WAN details by acquiring:
o Knowledge of WAN devices and cabling.
o Awareness of WAN protocols and topology options.
o Familiarity with VPN options.
21.2 Introducing WAN Technologies
Introduction to WAN Technologies
A WAN is a data communications network that operates beyond the geographic scope of a
LAN. To implement a WAN, enterprises use the facilities of service providers or carriers,
such as a telephone or cable company. The provider interconnects an enterprise's own
locations and connects them to the locations of other enterprises, to external services, and to
remote users. WANs carry various traffic types, such as voice, data, and video.

WANs have three major characteristics:


o WANs generally connect devices that are separated by a broader geographic area
than a LAN can serve.
o WANs use the services of carriers such as telephone companies, cable companies,
satellite systems, and network providers.
o WANs use connections of various types to provide access to bandwidth over large
geographic areas.
WAN operations focus primarily on the physical layer (Open Systems Interconnection [OSI]
Layer 1) and the data link layer (OSI Layer 2). WAN access standards typically describe
both physical layer delivery methods and data link layer requirements. The data link layer
requirements include physical addressing, flow control, and encapsulation.
WAN access standards are defined and managed by several recognized authorities; among
them are the EIA/TIA, ISO, and IEEE.
LAN technologies provide high speed and cost efficiency for the transmission of data in
organizations, but only in relatively small geographic areas. Businesses require
communication with distant sites, provided by a WAN, for several reasons including:
o People and processes in the enterprise’s regional and branch offices exchange and
share data.
o Enterprises often share information with other organizations across large
distances.
o Employees who travel or work from remote sites frequently need to access
information that resides on their corporate networks—that is, they need access to
centrally located applications and services.
o Applications and services used by employees can be hosted in the cloud.
WANs may provide high bandwidth over long distances across complex physical networks,
often with performance guarantees. WANs are very different from LANs; maintenance
costs increase with longer distances. To guarantee performance, the network should
recover from faults very quickly, the signal quality and bit-error rates must be kept under
control, and bandwidth should be carefully managed. Therefore, many technologies and
standards were developed specifically to meet WAN requirements.
For many enterprises, it is not feasible to build their own infrastructure to connect
computers across a country or around the world, in the same way they would do for a
LAN. Therefore, they use the existing WAN technologies and resources to fulfill this need.
Nevertheless, the cost of the network and its related services is a significant expense.
Increasingly, the internet is being used as an inexpensive alternative to an enterprise WAN
for some applications.
When a WAN service provider receives data from a client site, it must forward the data to
the remote site for final delivery to the recipient. Sometimes, the remote site may be
connected to the same service provider as the originating site, but it is not always the
case. If providers are not the same, the originating provider must pass the data to the
destination provider, through provider interconnections.
WAN service providers use several different technologies to connect their subscribers. The
connection type that is used may not be the same as the one the service provider employs
inside its own network or the one it uses to connect to other service providers.
Note: Service provider networks are complex. They are mostly built of high-bandwidth
fiber-optic media, using dense wavelength-division multiplexing (DWDM), SONET in North
America, and Synchronous Digital Hierarchy (SDH) in the rest of the world. These
standards define how to transfer data over optical fiber over great distances.

21.3 Introducing WAN Technologies


WAN Devices and Demarcation Point
Several types of devices are specific to WAN environments including modems and certain
types of routers and switches. The following are descriptions of WAN devices and terms
used in discussing WANs. Some of the devices were used in the past with legacy WAN
connectivity options.

Modems are devices that modulate and demodulate analog carriers to encode and
retrieve digital information. A modem interprets digital and analog signals, enabling data
to be transmitted over voice-grade telephone lines. At the source, digital signals are
converted to a form that is suitable for transmission over analog communication facilities.
At the destination, these analog signals are returned to their digital form. Pure analog
circuits are not often encountered today; modern modems modulate multiple carriers and
implement coding schemes that are digital. Nonetheless, the word modem is still used,
by convention, for devices that work on lines that were not primarily intended for data
service, such as phone lines of various types and cable TV lines. The terms transceiver,
converter, or media converter are used for fiber lines. Modems are part of the equipment
installed at the customer location, although they are not necessarily owned and
managed by the customer (for example, the enterprise). In the figure, a Digital Subscriber
Line (DSL) modem (which is used in broadband environments based on DSL technology)
connects to a router with an Ethernet cable and connects to the service provider network
with a telephone cable. A modem can also be implemented as a router module.
Optical fiber converters are used where a fiber-optic link terminates to convert optical
signals into electrical signals and vice versa. You can also implement the converter as a
router or switch module.
A router provides internetworking and WAN access interface ports that are used to
connect to the service provider network. These interfaces may be serial connections or
other WAN interfaces. With some types of WAN interfaces, you need an external device
such as a CSU/DSU or modem (analog, cable, or DSL) to connect the router to the local
point of presence (POP) of the service provider.
A core router or multilayer switch resides within the middle or backbone of the WAN,
rather than at its periphery. To fulfill this role, a router or multilayer switch must be able to
support multiple telecommunications interfaces of the highest speed in use in the WAN
core. It must also be able to forward IP packets at wire speed on all these interfaces. The
router or multilayer switch must support the routing protocols that are being used in the
core.
Wireless routers are used when you are using the wireless medium for WAN connectivity.
You can also use an access point instead of a wireless router.
Routers with cellular connectivity features are used when connecting to a WAN via a
cellular/mobile broadband access network. Such routers include an interface that
supports cellular communication standards and protocols. Interfaces for cellular
communication can be factory installed, or the router can embed a module that provides
cellular connectivity. A router can be moved between locations. It
can also operate while in motion (in trucks, buses, cars, trains). Enterprise grade routers
that support cellular connectivity also include diagnostic and management functions,
enable multiple cellular connections to one or more service providers, support Quality of
Service (QoS), etc.
DTE/DCE and CSU/DSU: data terminating equipment (DTE) and data communications
equipment (DCE) are terms that were used in the context of WAN connectivity options
that are mostly considered legacy today. The two terms name two separate devices. The
DTE device is either a source or a destination for digital data. Specifically, these devices
include PCs, servers, and routers. In the figure, a router in either office would be
considered a DTE. DCE devices convert the data received from the sending DTE into a form
acceptable to the WAN service provider. The purpose is to convert a signal from a form
used for local transmission to a form used for long-distance transmission. Converted
signals travel across the provider's network to the remote DCE device, which connects the
receiving DTE. You could say that a DCE translates data from LAN to WAN "language." To
simplify, the data path over a WAN would be DTE > DCE > DCE > DTE.

DCEs deal with both analog and digital data representations. When dealing only with
digitized data, a DCE is a CSU/DSU. In other words, when you connect a digital device to a
digital line, you use CSU/DSU. It connects two different types of digital signals. When
connecting a digital device to an analog circuit (such as phone line), the DCE is a modem.
In the figure, the router, a digital device, connects to a line, which is digital, via the
CSU/DSU unit. The CSU/DSU connects to the service provider infrastructure using a
telephone or coaxial cable, and it connects to the router with a serial cable. The DSU
converts the telephone line frames into frames that can be interpreted on the LAN and
vice versa. It also provides a clocking signal on the serial line. If a CSU/DSU is implemented
as a module within a router, a serial cable is not necessary.

Nowadays, CSU and DSU are two components within one piece of hardware. The DSU
manages the interface with the DTE. In serial communication, where clocking is required,
the DSU plays the role of the DCE and provides clocking. The DSU converts DTE serial
communications to frames which the CSU can understand and vice versa, it converts the
carrier’s signal into frames that can be interpreted on the LAN. The CSU deals with the
provider’s part of the network. It connects to the provider’s communication circuit and
places the frames from the DSU onto it and from it to the DSU. The CSU ensures
connection integrity through error correction and line monitoring.
WAN Interface Cards (WICs) in a router may contain an integrated CSU/DSU.

Note: The preceding list is not exhaustive and other devices may be required, depending
on the WAN access technology chosen.
The demarcation point is the point that separates a customer's WAN equipment from
the service provider’s equipment. The customer side of the demarcation point
accommodates the Customer Premises Equipment (CPE).
CPE are typically devices inside the wiring closet located on the subscriber’s premises. CPE
either belongs to the subscriber or is leased from the service provider. CPE is connected to
the closest point in the service provider’s network (an edge router or an exchange/central
office). This link is called the local loop or last mile. This point where the subscriber
connects to the service providers network is called a POP. Examples of CPE devices are
modems, routers, optical converters, and so on. A copper or fiber cable connects the CPE
to the nearest exchange or central office of the service provider.
The provider's side of the demarcation point includes the links that connect to the service
provider equipment—that is, the local loop or last mile.
Physically, the demarcation point can be a cabling junction box, located on the customer
premises, that connects the CPE wiring to the local loop. It is usually placed for easy access
by a technician.
The demarcation point is the place where the responsibility for the connection changes
from the user to the service provider. When problems arise, it is necessary to determine
whether the user or the service provider is responsible for troubleshooting or repair.
Note: The exact demarcation point is different from country to country.

21.4 Introducing WAN Technologies


WAN Topology Options
A physical topology describes the physical arrangement of network devices that allows for
data to move from a source to a destination network, while the logical topology
represents how data actually flows. When considering WAN topologies, you think about
logical topologies. The logical topologies below are the enterprise’s view of its data flows
through the service provider’s network. A physical topology might show all the various
service provider devices that are switching the data flows inside the cloud.
The four basic logical topologies in a WAN design are shown in the figure.
Point-to-point topology: This topology establishes a circuit (a logical connection) between
exactly two sites. It is also called a Layer 2 service because it creates a connection that
makes both sites appear to be on the same physical segment. It is transparent to the
enterprise network, as if there were a direct physical link between the two endpoints. The
link capacity is dedicated to the customer. Typically, a point-to-point topology is offered in
the form of leased lines. This solution does not scale well and can be costly.
Hub-and-spoke topology: This topology features a central router or multilayer switch,
acting as the hub, which is connected to all other remote devices, the spokes. All
communication among the spoke networks traverses the hub. The advantages of a hub-
and-spoke design are that it is a simple network with simplified management, requires
few circuits, allows for centralized network services, and consequently minimizes the
network operational costs. However, the disadvantages are significant:
o The central router (hub) represents a single point of failure.
o The central router limits the overall performance for access to centralized
resources. The central router is a single pipe that manages all traffic that is
intended either for the centralized resources or for the other regional routers.
o There may be suboptimal traffic flows, as traffic between spokes must go through
the hub.
With only one hub node, the topology is also called single-homed. Adding a second hub
node provides redundancy, and the topology is then referred to as dual-homed hub-and-
spoke.
Meshed topologies are the ones in which there are many redundant interconnections
between the nodes. There are two types of meshed topologies, full mesh and partial
mesh:
o Full mesh topology: In this topology, each remote node on the periphery of a
given service provider network has a direct logical connection, also called a circuit,
to every other remote node. Any site can communicate directly with any other
site. The key rationale for creating a full mesh environment is to provide a high
level of redundancy. A disadvantage of a full mesh topology is that it can be
complex to configure and maintain the large number of circuits required, and
therefore it does not scale well.
o Partial mesh topology: In this topology, most, but not all, remote nodes are
interconnected. It reduces the number of sites that have direct connections to all
other nodes. Partial meshes are highly flexible topologies that can take many
different forms. Some nodes are organized in a full mesh scheme, but
others are only connected to one or two in the network. A partial mesh topology is
commonly found in peripheral networks connected to a full meshed backbone.
This topology is often used in regionalized enterprise networks. Among all its
locations, an enterprise chooses several to act as regional centers. These centers
serve a selected area and implement a hub-and-spoke topology within them,
where they are the hub. This process creates multiple hub-and-spoke topologies in
an enterprise network. Hubs themselves are connected in a full mesh topology.
This setup lowers the number of required circuits, provides redundancy, and
ensures performance. The cost of a partial mesh topology is higher than hub-and-
spoke but less than full mesh.
Note: Large networks usually deploy a combination of these topologies—for example, a
partial mesh in the network core, redundant hub-and-spoke for larger branches, and
simple hub-and-spoke for noncritical remote locations.
Network downtime can be very expensive in terms of decreased productivity and
potential loss of revenue. Take, for example, a company that sells products online. Not
being able to access the warehouse records might significantly decrease the company's
income. To increase network availability, many organizations deploy a dual-carrier WAN
design to increase redundancy and path diversity. Dual-carrier WAN means that the
enterprise has connections to two different carriers (service providers).
Aspects of the WAN service are determined by a legal agreement between the enterprise
and the service provider. These agreements define technical, administrative, and financial
aspects of the service. Technical details are commonly included in Service Level
Agreements (SLAs) and they describe aspects of the service, such as quality, availability,
reliability, and so on.
Single-carrier WANs are simpler and easier to support and manage. However, network
outages can be detrimental.
Dual-carrier WANs provide better path diversity with better fault isolation between
providers. They offer enhanced network redundancy, load balancing, distributed
computing or processing, and the ability to implement backup service provider
connections. The disadvantage of dual-carrier topologies is that they are more expensive
to implement than single-carrier topologies, because they require additional networking
hardware. They are also more difficult to implement because they require additional, and
more complex, configurations. The cost of downtime to your organization usually exceeds
the additional cost of the second provider and the complexity of managing redundancy.

21.5 Introducing WAN Technologies


WAN Connectivity Options
Before physically connecting to a service provider network, an enterprise needs to choose
the type of WAN service or connectivity that it requires.
WAN networks have undergone many technological changes. When the internet was first
developed, it was built on the existing telecom infrastructure, which was intended for
telephony voice traffic. The telephony network, known under the term Public Switched
Telephone Network (PSTN), was constructed over a period of a hundred years. It had
global reach, extending to very remote areas. Most of the subscriber telephone links—that
is, local loops or last mile—used copper cabling. International traffic and traffic
within service provider networks used fiber-optic cabling and satellite systems.
The first WAN connectivity options and technologies leveraged the existing physical
infrastructure, because installing cables for the new WAN infrastructure represents most
of the cost in developing a new WAN network. The proliferation of the internet and
internet-related business activities economically justified the investment into new WAN
infrastructure. During the late 1990s, many telecommunications companies invested into
building an optical fiber global network. Today, the optical fiber network has largely
replaced the copper-based network, and it extends to many user homes—that is, it also
replaces the traditional copper cabling on the local loop.
When discussing WANs, it can be useful to distinguish its geographical elements, as WAN
technologies vary in these different geographical elements. A WAN network consists of:
o The local-loop/last-mile network, which represents end user connections to the
service providers. Local-loop connections terminate at service provider's access
nodes, which are the first points that aggregate traffic from multiple end users. A
local loop can connect only one subscriber with a service provider access node –
such as fiber point-to-point telephone lines, or it can connect a number of
subscribers with a service provider access node – such as coaxial cable TV systems.
The local loop is the smallest geographical WAN element. The local loop was
traditionally built using copper cabling, but is currently being replaced with optical
fiber.
o Backhaul networks, which connect multiple access nodes of the service provider’s
network. Service provider backhaul networks can span over smaller areas, such as
municipalities, or larger areas, such as countries and regions. Backhaul networks
are also connected to internet service providers and to the backbone network.
Backhaul networks can be implemented using optical fiber, or using microwave
links. Local loops together with backhaul networks are sometimes called access
networks. Examples of access networks are the telephone network, cable TV
network, and cellular network. These example access networks provide both
access to the internet, and access to another communication service, such as
telephony, or TV.
o The backbone network, or backbone, interconnects service provider networks.
Backbone networks are large, high-capacity networks with a very large geographic
span, owned and managed by governments, universities, and commercial entities.
Backbone networks are connected among themselves to create a redundant
network. Other service providers can connect to the backbone directly or through
another service provider. Backbone network service providers are also called Tier-1
providers. Backbone networks are built mainly using optical fiber.
Note: One of the first Internet backbone networks was NSFNET, which was built in 1987. It
was funded by the National Science Foundation (NSF) and it used the combination of
optical fiber and copper cable links to interconnect higher education communities.
Overview of WAN Connectivity Options
There are many options for implementing WAN solutions. These options differ in
technology, bandwidth, and cost.

The diagram in the figure gives an overview of available WAN connectivity options,
including the traditional, now mostly legacy, options that were built to leverage the
telephone network.
Both the traditional and the current and emerging WAN connectivity options can be broadly
classified as follows:
o Dedicated communication links, which provide permanent dedicated connections
using point-to-point links with various capacities that are limited only by the
underlying physical facilities and the willingness of enterprises to pay for these
dedicated lines. A point-to-point link provides a pre-established WAN
communications path from the customer premises through the provider network
to a remote destination. They are simple to implement and provide high quality
and permanent dedicated capacity. They are generally costly and have fixed
capacity, which makes them inflexible.
o Switched communication links can be either circuit-switched or packet-switched.
It is important to differentiate between the two switching models:
o Circuit-switched communication: Circuit switching establishes a dedicated
virtual connection, called a circuit, between a sender and a receiver. The
connection through the network of the service provider is established
dynamically, before communication can start, using signaling which varies
for different technologies. During transmission, all communication takes
the same path. The fixed capacity allocated to the circuit is available for the
duration of the connection, regardless of whether there is information to
transmit or not. Computer network traffic tends to be bursty in nature. Because
the subscriber pays for a fixed capacity allocation that sits idle between bursts,
switched circuits are generally not well suited for data communication. Examples of circuit-
switched communication links are PSTN analog dialup and Integrated
Services Digital Network (ISDN).
o Packet-switched communication: Using circuit switching does not make
efficient use of the allocated fixed bandwidth due to the data flow
fluctuations. In contrast to circuit switching, packet switching segments
data into packets that are routed over a shared network. Packet-switching
networks do not require a dedicated circuit to be established, and they
allow many pairs of nodes to communicate over the same channel. Packet-
switched communication links include Ethernet WAN (Metro Ethernet),
Multiprotocol Label Switching (MPLS), legacy Frame Relay, and legacy
Asynchronous Transfer Mode (ATM).
o Internet-based communication links: Instead of using a separate WAN
infrastructure, enterprises today commonly take advantage of the global internet
infrastructure for WAN connectivity. Previously, the internet was not a viable
option for a WAN connection due to many security risks and lack of SLA, that is,
the lack of adequate performance guarantees. Nowadays, with the development
of VPN technologies, the internet has become one of the most common
connection types that is cheap and secure. Internet WAN connection links include
various broadband access technologies, such as fiber, DSL, cable, and broadband
wireless. They are usually combined with VPN technologies to provide security.
Other access options are cellular (or mobile) networks and satellite systems.
Each of the WAN technologies provides advantages and disadvantages for the customer.
When choosing an appropriate WAN connection, consider whether to use internet-based
public connections or connections implemented within private service provider
networks. Internet-based connections are readily available, flexible, and cheaper, and they
can be made secure using technologies such as VPNs. Connections within a service
provider's network guarantee security and performance.
Another element to consider when deciding about WAN connections is the number of
nodes you need to interconnect. Also, consider the traffic requirements and QoS for each
of the required connections. If traffic is sensitive to delay, such as voice or video,
private dedicated or switched connections might be better. One of the factors that will
limit your choices is what connection options are locally available. In remote areas, you
might have only satellite access at your disposal. Since WAN costs can be significant, your
operating budget will also influence your choice.
Note: Software defined WAN (SD-WAN) is a new concept in WAN. SD-WAN uses a
different approach from legacy WAN networking when it comes to WAN device
communication and WAN device management. In a legacy network all routers are
independently configured. A small change on a network may require manual
reconfiguration of hundreds of routers. In SD-WAN, all changes are centrally managed and
require only a few clicks to deploy. SD-WAN is an industry response to a trend of more
and more users accessing enterprise resources from more locations. At the same time, the
resources, such as applications and services, are hosted by more and more clouds.
Different user locations have different WAN connectivity options available. For an
enterprise, managing a large number of WAN connections can become inefficient. SD-
WAN provides a software layer to control and manage available WAN connections and
provide users with the best connection for the applications/services they require. It also
provides security features.
Traditional WAN Connectivity Options
WAN technologies that emerged at the beginning of the data communications era were
developed so they could leverage the existing global telephone network. Nowadays, most
of them are considered legacy. However, even today, you might still encounter situations
in which these legacy connectivity options might be the only ones available.

The figure illustrates legacy WAN connectivity options.


Leased lines are an example of legacy dedicated communication links, which have existed
since the early 1950s, and for this reason, are referred to by different names such as
leased circuits, serial link, serial line, or point-to-point link. The term leased line refers to
the fact that the organization pays a monthly lease fee to a service provider to use the
line. Leased lines are available in different capacities and are generally priced based on the
bandwidth required and the distance between the two connected points.
In North America, service providers use the T-carrier system to define the digital
transmission capability of a serial copper media link, while Europe uses the E-carrier
system. For instance, a T1 link supports 1.544 Mbps, an E1 link supports 2.048 Mbps, a T3
link supports 44.736 Mbps, and an E3 link supports 34.368 Mbps. The copper cable physical
infrastructure has largely been replaced by an optical fiber network. Transmission rates in
optical fiber networks are given in terms of Optical Carrier (OC) transmission rates, which
define the digital transmitting capacity of a fiber optic network.
Note: Communication across a serial connection is a method of data transmission in
which the bits are transmitted sequentially over a single channel.
The two types of legacy circuit-switched WAN technologies are the PSTN analog dialup
connection and ISDN connections. Both connections utilize the copper cabling at the local
loop, to connect the equipment in the subscriber premises to the Central Office, which
acts as the access node of the service provider network.
o In dialup connections, binary computer data is transported through the voice
telephone network using a device called a modem. The physical characteristics of
the cabling and its connection to the PSTN limit the rate of the signal to less than
56 kbps. The legacy dialup WAN connection is a solution for remote areas with
limited WAN access options.
o ISDN technology enables the local loop of a PSTN to carry digital signals, resulting
in higher capacity switched connections. The capacity ranges from 64 kbps to
2.048 Mbps.
The examples of legacy packet switched networks are Frame Relay and ATM.
o Frame Relay is a Layer 2 technology which defines virtual circuits (VCs). Each VC
represents a logical end-to-end link mapped over the physical service provider’s
Frame Relay WAN. An enterprise can use a single router interface to connect to
multiple sites using different VCs. VCs are used to carry both voice and data traffic
between a source and a destination. Each frame carries the identification of the VC
it should be transferred over. This identification is called a Data-Link Connection
Identifier (DLCI). VCs are configurable, offering flexibility in defining WAN
connections.
o ATM technology is built on a cell-based architecture rather than on a frame-based
architecture. ATM cells are always a fixed length of 53 bytes. Small, fixed-length
cells are well-suited for carrying voice and video traffic because this traffic is
intolerant of delay. Video and voice traffic do not have to wait for larger data
packets to be transmitted. The 53-byte ATM cell is less efficient than the bigger
frames and packets. A typical ATM line needs almost 20 percent greater bandwidth
than Frame Relay to carry the same volume of network layer data. ATM was
designed to be extremely scalable and to support link speeds of T1/E1 to optical
fiber network speeds of 622 Mbps and faster. ATM also defines VCs and
allows multiple VCs on a single interface to the WAN network.
Current and Emerging WAN Connectivity Options
WAN technologies and approaches are constantly evolving. Some technologies that were
once commonplace, like dialup, ISDN and Frame Relay, are rarely seen today. The
following figure depicts WAN technologies that you are likely to encounter today.

Multiprotocol Label Switching


Service providers build networks by using different underlying technologies, the most
popular being MPLS. MPLS is an IETF standard that defines a packet label-based switching
technique, which was originally devised to perform fast switching in the core of IP
networks. This technique helped carriers and large enterprises scale their networks as
increasingly large routing tables became more complex to manage. Service providers
began implementing MPLS in 2001 as a way to allow enterprises to create end-to-end
circuits across any type of transport medium using any available WAN technology.

MPLS is an architecture that combines the advantages of Layer 3 routing with the benefits
of Layer 2 switching.
The multiprotocol in the name means that the technology is able to carry any protocol as
payload data. Payloads may be IPv4 and IPv6 packets, Ethernet, DSL, and so on. This
means that different sites can connect to the provider’s network using different access
technologies.
When a packet enters an MPLS network, the ingress router adds a short fixed-length
label to the packet, placed between the packet's data link layer header and its IP header.
The label is added by a provider edge (PE) router when the packet enters the MPLS network
and is removed by a PE router (the egress router) when the packet leaves the MPLS
network. This process is transparent to the customer.
MPLS routers are also called label switched routers (LSRs). Based on its location in the
network, a router can be a customer edge router (CE router), a provider edge router (PE
router), or an internal provider router (P router). To forward a packet, routers use the
label to determine the packet's next hop.
MPLS is a connection-oriented protocol. For a packet to be forwarded, a path must be
defined beforehand. A label-switched path (LSP) is constructed by defining a sequence of
labels that must be processed from the network entry to the network exit point. Using
dedicated protocols, routers exchange information about what labels to use for each flow.
Since packets sent between the same endpoints might belong to different MPLS flows,
they might flow through different paths in the network.
MPLS labels can be added one on top of another. This feature of MPLS is called label
stacking. Therefore, a protocol data unit (PDU) may carry multiple labels. The top label is
always processed first, making it possible to combine labels in many different ways. Label
stacking makes it possible to create many paths with different processing
characteristics, which in turn means that MPLS can accommodate a great variety of
customer requirements.
MPLS provides several services. The most common ones are QoS support, traffic
engineering, quick recovery from failures, and VPNs.
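Although label switching runs inside the service provider network and is transparent to the customer, the following minimal sketch (hypothetical interface name) indicates how MPLS forwarding is typically enabled on a provider router in Cisco IOS, assuming LDP is used for label distribution.
  ! Provider (P/PE) router sketch: enable CEF and MPLS label switching
  ip cef
  mpls ip
  interface GigabitEthernet0/1
   mpls ip
  ! The interface-level command runs label distribution and label switching on this link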
Ethernet over WAN
Ethernet was originally developed to be a LAN access technology. At that time, it was not
suitable as a WAN access technology because the maximum cable length supported was
only up to a kilometer. Over the years the Ethernet physical layer media and coding
schemes constantly changed, while the Ethernet frame has remained the same, enabling a
consistent link layer and upper layer interface. Ethernet has therefore become a
reasonable WAN access option.
Service providers now offer Ethernet WAN services using fiber optic cabling. The Ethernet
WAN service can go by many names, including Metropolitan Ethernet (Metro Ethernet),
Ethernet over MPLS (EoMPLS), and Virtual Private LAN Service (VPLS).
With Ethernet WAN, all sites look as if they are connected to the same Ethernet switch
inside the service provider network. Therefore, all sites are on a single multiaccess
network and each site can communicate directly with all others on the WAN. As Ethernet
operates at layer 2 of the OSI model, you can use your own IP addressing space for routing
purposes. You can also extend your internal LAN QoS policies across the service provider
network.
Ethernet as the WAN connectivity protocol can be deployed in several ways:
o Pure Ethernet connectivity, that is, end-to-end Ethernet connectivity without
transformations to other WAN technologies, has a geographic span determined by
the physical layer limitations. Therefore, the service is limited to specific
geographic regions and is more adequate for Metropolitan Area Network (MAN)
implementations, hence the name Metro Ethernet. Pure Ethernet-based
deployments are cheaper but less reliable and scalable. They can handle hundreds
of remote sites.
o Ethernet over SDH/SONET deployments are useful when there is an existing
SDH/SONET infrastructure already in place. SDH/SONET are two versions of the
protocol designed for, and used within, the service provider network
infrastructure. Ethernet frames must undergo reframing in order to be transferred
over a SDH/SONET network. Also, the bit-rate hierarchy of the SDH/SONET
network must be followed, which limits bandwidth flexibility.
o MPLS based deployments are a service provider solution that uses an MPLS
network to provide virtual private Layer 2 WAN connectivity for customers. MPLS
based Ethernet WANs can connect a very large number (thousands) of locations,
and are reliable and scalable.
Benefits of Ethernet WAN include:
o Reduced expenses and administration – Ethernet WAN provides a switched, high-
bandwidth Layer 2 network capable of managing data, voice, and video all on the
same infrastructure. These characteristics increase bandwidth and eliminate
expensive conversions to other WAN technologies. The technology enables
businesses to inexpensively connect numerous sites, in a metropolitan area, to
each other and to the internet. An all-Ethernet infrastructure simplifies the
network management process because every device uses the same protocol to
communicate.
o Easy integration with existing networks – Ethernet WAN connects easily to existing
Ethernet LANs, reducing installation costs and time.
o Enhanced business productivity – Ethernet WAN enables businesses to continue to
use IP-based business applications already developed, and to utilize the
accumulated knowledge, that is, to reuse the investment made in software and
training.
Broadband Internet Access
Broadband connectivity options could be classified into wired and wireless. Wired
connections use some sort of cabling, such as fiber or copper wires. These wired
connections tend to be permanent, that is, permanently enabled, dedicated, and mostly
offer consistent bandwidth. On the other hand, given the nature of wireless
communications, wireless connectivity solutions do not offer the same consistency of
bandwidth, error rate and latency as wired connections. This is due to factors such as
location (distance from radio towers, multipath propagation, radio interference from
other sources, etc.), weather, and bandwidth usage (local loop is usually shared among
multiple users). In addition, these factors can vary over time. In order to deliver highly
reliable and consistent performance, an understanding of the radio propagation and
conditions at each installation is needed. Examples of wired broadband connectivity are
DSL, cable TV connections, and optical fiber networks. Examples of wireless broadband
are cellular 3G/4G/5G or satellite internet services.
Broadband solutions are inexpensive when compared to other WAN connectivity options.
However, they do not allow the customer to control latency or QoS. In terms of broadband
throughput, there are usually several options from which to choose.
Wired Broadband Internet Access
DSL technology is an always-on connection technology that uses existing twisted-pair
telephone lines to transport high-bandwidth data, and provides IP services to subscribers.
Service providers deploy DSL connections in the local loop/last mile. The connection is set
up between a pair of modems on either end of a copper wire that extends between the
customer premises equipment (CPE) and the DSL access multiplexer (DSLAM). A DSLAM is
the device located at the Central Office (CO) of the provider, which concentrates
connections from multiple DSL subscribers. The DSLAM combines individual DSL
connections from users into one high-capacity link to an ISP, and, therefore, to the
internet. DSL is a broadband technology of choice for many remote workers.

There are many DSL varieties, differing in available bit rates, and underlying data link and
physical layer characteristics. Different DSL flavors are: asymmetric DSL (ADSL), with
different upload and download bit rates; ADSL2+, with higher data rates, longer reach, and
improvements for packet transmission; high-data-rate DSL (HDSL); ISDN-based DSL, with
the longest DSL reach of all DSL technologies; symmetric DSL (SDSL), which allows
symmetric bandwidth on the upstream and downstream and offers multiple rates; very-
high-data-rate DSL (VDSL); and so on. All these variations are encompassed under the
term xDSL, which denotes any of the DSL technologies.
Generally, a subscriber cannot choose to connect to an enterprise network directly, but
must first connect to an ISP, and then an IP connection is made through the internet to
the enterprise. Security risks are incurred in this process, but can be mitigated with
security measures.
Another wired broadband access option is cable access. Accessing the internet through
cable utilizes the cable network, which was primarily developed for TV signal distribution,
and is known as the cable TV system. At the physical layer, the coaxial cable was the
primary medium used to build cable TV systems. It carries radio frequency (RF) signals.
Most cable operators are deploying hybrid fiber-coaxial networks. Internet service is
provided by the ISP associated with the cable service provider.
To enable the transmission of data over the cable system and to add high-speed data
transfer to an existing cable TV system, the Data over Cable Service Interface Specification
(DOCSIS) international standard defines the communications requirements and operation
support interface requirements.
Two types of equipment are required to send signals upstream and downstream on a
cable system:
o Cable Modem (CM) on the subscriber end.
o Cable Modem Termination System (CMTS) at the headend of the cable operator.
The topology in the figure displays a sample cable WAN connection. A headend CMTS
communicates with CMs located in subscriber homes. The headend is actually a router
with databases for providing internet services to cable subscribers. When deploying
hybrid fiber-coaxial (HFC) networks, service providers enable high-speed transmission of
data to cable modems located in residential areas. Using optical fiber, the headend is
connected to a node that also connects to coaxial cables, called feeder cables, which
connect multiple subscribers. The node performs optical-to-RF signal conversion.
Wireless Broadband Internet Access
Wireless technology uses RF spectrum to send and receive data. One limitation of wireless
access was the need to be within the local transmission range (typically less than 150
feet/46 m) of a wireless router or a wireless modem that has a wired connection to the
internet. However, developments in broadband wireless technology are increasing the
reach of wireless connections and now include WANs.
The following technologies enable wireless broadband access:
o Municipal Wi-Fi: Many municipal governments, often working with service
providers, are deploying wireless networks. Some of these networks provide high-
speed internet access at no cost or for substantially less than the price of other
broadband services. Other cities reserve their Wi-Fi networks for official use,
providing police, fire fighters, and city workers remote access to the internet and
municipal networks. To connect to a municipal Wi-Fi, a subscriber typically needs a
wireless modem, which provides a stronger radio and directional antenna than
conventional wireless adapters. Most service providers provide the necessary
equipment for free or for a fee, much like they do with DSL or cable modems.
o Cellular/Mobile broadband refers to wireless internet access delivered through
mobile phone towers to computers, mobile phones, and other digital devices.
Devices use a small radio antenna to communicate with a larger antenna at the
phone tower, via radio waves. Organizations leverage cellular networks for a
variety of use cases, such as for metering devices (sensors, vehicle diagnostics),
temporary sites (sports/fair/conference access), and to connect smaller and
remote business sites. Three common terms that are used when discussing
cellular/mobile networks include:
o Mobile Internet or Mobile Data is a general term for the internet services
from a mobile phone or from any device that uses the same technology. A
mobile phone subscription does not necessarily include a mobile data
subscription.
o Long-Term Evolution (LTE): A mobile technology that increased the capacity
and speed of the wireless link compared to 2G and 3G technologies. It
introduced novelties, such as a different radio interface and core
network improvements. It is considered part of 4G technology,
although it was its predecessor.
o 2G/3G/4G/5G acronyms refer to the mobile wireless technologies and
standards, and stand for second, third, fourth, and fifth generations of
mobile wireless technologies. Each new generation is an evolution of the
previous one. Each generation defines its own standards and with each
new generation the access bit rates continue to increase. 4G standards
provided bit rates up to 450 Mbps download and 100 Mbps upload. The 5G
standard should provide cellular data transfer speeds from 100 Mbps to 10
Gbps and beyond. Also, 5G should significantly decrease latency and
improve the reliability of cellular broadband. 5G uses new and, so far, rarely
used radio frequency bands. 5G will also work in a directional way, meaning
that the effects of interference from other wireless signals will be minimized.
Low latency is one of 5G's most important attributes, making the
technology highly suitable for critical applications that require rapid
responsiveness.
o Satellite Internet is a high-speed bidirectional internet connection made through
geostationary communications satellites. Internet-by-satellite speed and cost
nowadays compare with DSL broadband offerings. Satellite Internet is typically
used in locations where land-based internet access is not available or for
temporary installations that are mobile. Internet access using satellites is available
worldwide, including for providing internet access to vessels at sea, airplanes in
flight, and vehicles moving on land. To access satellite internet services,
subscribers need a satellite dish, two modems (uplink and downlink), and coaxial
cables between the dish and the modem. The only prerequisite is that the dish can
see the sky; that is, it has a clear line of sight to a geostationary satellite. A company
can create a private WAN using satellite communications and Very Small Aperture
Terminals (VSAT). A VSAT is a type of satellite dish similar to the ones used for
satellite TV from the home and is usually about 1 meter in width. The VSAT dish
sits outside, pointed at a specific satellite, and is cabled to a special router
interface, with the router inside the building.
o Worldwide Interoperability for Microwave Access (WiMAX) provides high-speed
broadband service with wireless access and provides broad coverage similar to a
cell phone network rather than through small Wi-Fi hotspots. WiMAX is a wireless
technology for both fixed and mobile implementations. WiMAX operates in a
similar way to Wi-Fi, but at higher speeds, over greater distances, and for a greater
number of users. It uses a network of WiMAX towers that are similar to cell phone
towers. To access a WiMAX network, subscribers must subscribe to an ISP with a
WiMAX tower within 30 miles of their location. They also need some type of
WiMAX receiver and a special encryption code to get access to the base station.
WiMAX may still be relevant for some areas of the world. However, in most of the
world, WiMAX has largely been replaced by LTE for mobile access and cable or DSL
for wired access.
Optical Fiber in WAN connections
Due to much lower attenuation and interference, optical fiber has large advantages over
existing copper wire, especially in long-distance, high-demand applications. Until recently,
optical fiber infrastructures were complex and expensive to install and operate and they
have been installed mainly within the service provider backhaul and backbone networks,
where they could be utilized to their full capacity. In the late 1990s, the price for installing
fiber dropped and many telecommunication companies invested in building optical fiber
networks with sufficient capacity to take existing traffic and future traffic, which was
forecast to grow exponentially, and to expand the optical network to include the local
loop. At the same time, the technologies used for transmission of optical signals also
evolved. With the development of wavelength division multiplexing (WDM), the capacity
of the single strand of optical fiber increased significantly, and as a consequence, many
fiber optic cable runs were left "unlit"—that is, were not in use. Today, this optic fiber is
offered under the term "dark fiber."
Fiber to the x
Optical fiber network architectures, in which optical fiber reaches the subscriber home,
premises, or building, are referred to as Fiber to the x (FTTx), which includes Fiber to the
Home (FTTH), Fiber to the Premises (FTTP), or Fiber to the Building (FTTB). When optical
cabling reaches a device that serves several customers, with copper wires (twisted pair or
coaxial) completing the connection, the architecture is referred to as Fiber to the
Node/Neighborhood (FTTN), or Fiber to the Curb/Cabinet (FTTC). In FTTN, the final
subscriber gains broadband internet access using cable or some form of DSL.
SONET and SDH
The standards used in service provider optical fiber networks are SONET or SDH.
SONET/SDH were designed specifically as WAN physical layer standards. SONET is used in
the United States and Canada, while SDH is used in the rest of the world. Both standards
are essentially the same and, therefore, are often listed as SONET/SDH. Both define how
to transfer multiple data, voice, and video communications over optical fiber using lasers
or light-emitting diodes (LEDs) over great distances.
Note: The SDH standard was originally defined by the European Telecommunications
Standards Institute (ETSI) and is formalized as International Telecommunication Union
(ITU) standards G.707, G.783, G.784, and G.803. The SONET standard was defined by
Telcordia and the American National Standards Institute (ANSI) as standard T1.105, which
defines the set of transmission formats and transmission rates in the range above 51.840 Mbps.
SONET/SDH standards are used on the ring network topology. The ring contains
redundant fiber paths and allows traffic to flow in both directions.
SONET and SDH have a hierarchical signal structure. This means that a basic unit of
transmission is defined, which can be multiplexed, or combined, to achieve greater data
rates.
The figure shows the SONET and SDH signal hierarchy (STS = Synchronous Transport Signal,
OC = Optical Carrier, STM = Synchronous Transport Module). SONET and SDH both have their
own terminology for the basic unit of transmission. In SONET the basic unit of
transmission is called Synchronous Transport Signal 1 (STS-1) or Optical Carrier 1 (OC-1)
and operates at 51.84 Mbps. The higher-level signals are multiples of STS-1 signals and
operate at multiples of the base transmission rate. For example, STS-3 operates at a bit rate of
155.52 Mbps, interleaving frames coming from three STS-1 signals. Four STS-3 streams can
be multiplexed into an STS-12 stream and so on.
The STS-1 and OC-1 designations are often used interchangeably, though the OC refers to
the physical signal, that is, the signal in its optical form, while STS-1 specifies the
transmission format.
In SDH, the basic unit of transmission is the Synchronous Transport Module, level 1
(STM-1), which operates at 155.520 Mbps. The STM-1 bit rate is the same as the SONET STS-3
bit rate.
Each rate is an exact multiple of the lower rate, ensuring that the hierarchy is
synchronous.
Dense Wavelength-Division Multiplexing
Along with installing extensive optical fiber networks, the technologies for transmission of
optical signals also advanced. DWDM is a form of wavelength division multiplexing that
combines multiple high-bit-rate optical signals into one optical signal transmitted over one
fiber strand. Each of the input optical signals is assigned a specific light wavelength, or
“color”, and is transmitted using that wavelength. Different signals can be extracted from
the multiplexed signal at the reception in a way that there is no mixing of traffic. As
demands change, more capacity can be added, either by simple equipment upgrades or by
increasing the number of wavelengths on the fiber, without expensive upgrades. The
figure below illustrates this multiplexing concept.
Specifically, DWDM:
o Assigns incoming optical signals to specific wavelengths of light (that is,
frequencies).
o Can multiplex more than 96 different channels of data (that is, wavelengths) onto a
single fiber.
o Supports channels that are each capable of carrying a 200 Gbps multiplexed signal.
o Can amplify these wavelengths to boost the signal strength.
o Is protocol agnostic; it supports various protocols with different bit rates, including Ethernet, Fibre Channel, and the SONET and SDH standards.
DWDM circuits are used in all modern submarine communications cable systems and
other long-haul circuits.
Dark Fiber
The availability of WDM reduced the demand for fiber by increasing the capacity that
could be placed on a single fiber strand. As a result, many fiber optic cable runs were left
"unlit"—that is, were not in use.
Enterprises can use dark fiber to interconnect their remote locations directly. The
enterprises can create a privately operated optical fiber network over dark fiber leased or
purchased from another supplier. Both ends of the link are controlled by the same entity.
Dark fiber networks can operate using WDM to add capacity where needed. Leasing dark fiber is usually more expensive than other WAN options available today. On the
other hand, connecting remote sites using dark fiber offers the greatest flexibility and
control. Therefore, dark fiber is leased when speed and security are of utmost importance.
WAN-Related Protocols
In addition to understanding the various technologies available for broadband internet
access, it is also important to understand the underlying data link layer protocol used by
the ISP.
A data-link protocol that is commonly used by ISPs on links to their customers is PPP. PPP
originally emerged as an encapsulation protocol for transporting IP traffic over point-to-
point links, such as links in analog dialup and ISDN access networks. PPP specifies
standards for the assignment and management of IP addresses, encapsulation, network
protocol multiplexing, link configuration, link quality testing, error detection, and option
negotiation for such capabilities as network layer address negotiation and data
compression negotiation. PPP provides router-to-router and host-to-network connections
over both synchronous and asynchronous circuits. An example of an asynchronous
connection is a dialup connection. An example of a synchronous connection is a leased
line.
Additionally, ISPs often use PPP as the data link protocol over broadband DSL connections.
There are several reasons for this. First, PPP can automatically assign IP addresses to the remote ends of a PPP link, which allows ISPs to assign each customer one public IPv4 address. PPP also includes the link-quality
management feature. If too many errors are detected, PPP takes down the link. More
importantly, PPP supports authentication. ISPs often want to use this feature to
authenticate customers because during authentication, ISPs can check accounting records
to determine whether the customer’s bill is paid, prior to letting the customer connect to
the internet. Also, ISPs can use the same authentication model as the ones already in
place for analog and ISDN connections.
Analog dialup and ISDN WAN technologies supported PPP, but are largely deprecated
today. On the other hand, the DSL subscriber base is still significant. When deploying a DSL
network, ISPs often provide their customers with a DSL modem. A DSL modem has one
Ethernet interface to connect to the customer Ethernet segment, and another interface
for DSL line connectivity. While ISPs value PPP because of the authentication, accounting,
and link management features, customers appreciate the ease and availability of the
Ethernet connection. However, Ethernet links do not natively support PPP. PPP over
Ethernet (PPPoE) provides a solution to this situation. As shown in the figure, PPPoE
allows the sending of PPP frames encapsulated inside Ethernet frames.
PPPoE provides an emulated point-to-point link across a shared medium, typically a
broadband aggregation network such as the ones that you can find in DSL service
providers. A very common scenario is to run a PPPoE client on the customer side, which
connects to and obtains its configuration from the PPPoE server at the ISP side.
The figure illustrates a DSL deployment, in which the DSL modem is the only intermediary
device on the customer side, between the PC and the Internet. In such case, there can be
only one PPPoE client device on the LAN side of the connection, which is the PC. The
modem converts the Ethernet frames to PPP frames by stripping the Ethernet headers.
The modem then transmits these PPP frames on the ISP’s DSL network.
The following figure shows a typical network topology, in which a Cisco IOS router is added at the customer site. The Cisco IOS router connects to the Ethernet LAN on one side and to the DSL modem on the other. The customer's router is connected to the DSL modem using an Ethernet cable. You can run the PPPoE client IOS feature on the Cisco router. This
way, you can connect multiple PCs on the Ethernet segment that is connected to the Cisco
IOS router.
PPPoE creates a PPP tunnel over an Ethernet connection. This allows PPP frames to be
sent across the Ethernet cable to the service provider from the customer’s router. The
modem converts the Ethernet frames to PPP frames by stripping the Ethernet headers.
The modem then transmits these PPP frames on the service provider’s DSL network. The
PPPoE client initiates a PPPoE session. If the session has a timeout or is disconnected, the
PPPoE client will immediately attempt to reestablish the session.
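A minimal sketch of the PPPoE client feature on a Cisco IOS router follows. The interface numbers, dialer pool number, and MTU value are illustrative, and the PPP authentication commands that an ISP would normally require are omitted:
RouterX(config)# interface GigabitEthernet0/1
RouterX(config-if)# pppoe enable
RouterX(config-if)# pppoe-client dial-pool-number 1
RouterX(config-if)# exit
RouterX(config)# interface Dialer1
RouterX(config-if)# encapsulation ppp
RouterX(config-if)# ip address negotiated
RouterX(config-if)# dialer pool 1
RouterX(config-if)# mtu 1492
The MTU is typically lowered to 1492 bytes to leave room for the 8 bytes of PPPoE overhead inside the 1500-byte Ethernet payload.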
Enterprise Internet Connectivity Options
When connecting an enterprise network to an ISP, redundancy is a serious concern. There
are different aspects that can be addressed to achieve redundant connectivity.
Using redundant links protects your network against link failure between your router and
the ISP router. Deployment of redundant equipment protects your network against device
failure. If one router fails, internet connectivity is still established through the redundant
router. You also need redundant links to connect all devices.
If you are hosting important servers in your network, it is best to have two redundant
internet providers. If there is a failure in one ISP network, all traffic is automatically
rerouted through the second ISP.
There are multiple strategies for connecting your network to an ISP. The topology
depends on the needs of the company.
There are four basic ISP connectivity types:
o Single-homed: Single-homed ISP connectivity is used in cases when a loss in
internet connectivity is not as problematic to a customer (although the internet is
typically a vital resource). Single-homed customers use only one service provider
for the internet uplink, and use only one physical uplink to connect to the ISP so
there is no redundancy.
o Dual-homed: With a single ISP, customers can still achieve redundancy if two links
toward the same ISP are used, effectively making a customer dual-homed. The
dual-homed design could also be configured to load balance traffic over both of
the links, but the dual-homed redundancy option cannot protect the customer if
the ISP has an outage.
o Multihomed: If a customer network is connected to multiple ISPs, the solution is
said to be multihomed. The customer is responsible for announcing their own IP
address space to upstream ISPs. The customer should avoid forwarding any routing
information between ISPs, or they become a transit provider between the two
ISPs. This design provides more than redundancy—it also enables load-balancing of
customer traffic between both ISPs.
o Dual-multihomed: To enhance resiliency, a customer can have two links to each
ISP, making the solution dual-multihomed. This dual-multihomed solution gives an
organization the most redundancy possible. This would be an option for a data
center or a large enterprise with plenty of resources, as it would be the most costly
option.
21.6 Introducing WAN Technologies
Virtual Private Networks
VPN is a technology that secures communication across an untrusted network. According
to RFC 2828, a VPN is "a restricted-use, logical (for example, artificial or simulated)
computer network that is constructed from the system resources of a relatively public,
physical (for example, real) network (such as the internet), often by using encryption
(located at hosts or gateways), and often by tunneling links of the virtual network across
the real network".
Simply stated, a VPN can be defined as:
o Virtual: Logical networks, independent of physical architecture.
o Private: Independent of IP addressing and routing schemes (noncryptographic approaches); provides confidentiality, message integrity, and origin authentication (cryptographic approaches).
o Network: Interconnected computers, devices, and resources that are grouped to
share information.
With the advent of VPNs, enterprises can support remote users by leveraging the internet.
A mobile user simply needs access to the internet to communicate with the main office.
For telecommuters, internet connectivity is typically a broadband connection such as DSL or cable.
The word tunneling is often used in networking. To explain its networking meaning, think
of real-world tunnels. Usually, a tunnel construction involves building a tunnel pipe. If you
put something into a pipe, you hide it from view—it is the pipe surface that remains
visible, not what is inside it. You can put a pipe into another pipe, making the insides more
difficult to see. On the other hand, by removing the pipes, you can get to the content. In
networking, the tunnel effect is achieved by adding a new header to the packet, in front of
the existing one, for example, by additional encapsulation. The newly added header
becomes the first one “visible” and it is often called the outer header. Sometimes, the
trailer is added also. The new header can be added at the source endpoint or can be
added by a dedicated networking node. The tunneled packet is processed on its path
throughout the network. The processing can consider only the outside header, which
typically happens at devices that are not aware of tunneling actions. On the nodes that are
aware of the applied tunneling, the processing can go further to expose the inner data of
the packet.
As piping can be made more or less difficult to remove, so the tunneling can involve more
or less processing to protect content. For instance, if you make your physical piping
openable by using a key, only someone in possession of the key can get to the inside
content easily. The same is true of VPN tunnels. They can employ cryptographic functions,
in which case they are called cryptographic VPNs, or they can be constructed just by
adding readily readable information.
VPNs are classified according to the following criteria:
o Deployment mode: Site-to-site VPN and remote-access VPN. A site-to-site VPN
connects two entire networks, is statically configured, and serves traffic of many
hosts. A remote-access VPN connects an individual endpoint over the internet to
the VPN device at the edge of the remote network.
o Underlying technology: IP Security (IPsec) VPN, Secure Sockets Layer (SSL) VPN,
MPLS VPN, and hybrid VPNs combining multiple technologies.
IPsec and SSL VPNs are both cryptography-based VPNs. VPNs can also be network-based.
For example, a service provider can use a technology such as MPLS to segregate customer
traffic as it crosses a shared infrastructure. The service provider is providing a network
that is virtually private. While traffic physically crosses shared infrastructure, there is no
mixing of traffic. One customer cannot see another customer’s traffic. This behavior is
different from an IPsec VPN, which uses cryptographic technologies to transform packet
data to provide privacy, data integrity, and origin authentication.
The figure illustrates a site-to-site VPN and a remote-access VPN. These two basic VPN
deployment models typically use either IPsec or SSL technologies to secure the
communications.
VPNs provide these benefits:
o Cost savings: VPNs enable organizations to use a cost-effective, third-party
internet transport to connect remote offices and remote users to the main
corporate site. The use of VPNs therefore eliminates expensive, dedicated WAN
links. Furthermore, with the advent of cost-effective, high-bandwidth technologies
such as DSL, organizations can use VPNs to reduce their connectivity costs while
simultaneously increasing remote connection bandwidth.
o Scalability: VPNs enable corporations to use the internet infrastructure, which
makes it easy to add new users. Therefore, corporations can expand capacity
without adding significant infrastructure. For instance, a corporation with an
existing VPN between a branch office and the headquarters can securely connect
new offices by simply making a few changes to the VPN configuration and ensuring
that the new office has an internet connection. Scalability is a major benefit of
VPNs.
o Compatibility with broadband technology: VPNs allow mobile workers,
telecommuters, and people who want to extend their work day to take advantage
of high-speed, broadband connectivity, such as DSL and cable, to gain access to
their corporate network. This ability provides workers with significant flexibility
and efficiency. Furthermore, high-speed, broadband connections provide a cost-
effective solution for connecting remote offices.
o Security: Cryptographic VPNs can provide the highest level of security by using
advanced encryption and authentication protocols that protect data from
unauthorized access. The two available options are IPsec and SSL.
21.7 Introducing WAN Technologies
Enterprise-Managed VPNs
When an enterprise designs VPN connectivity, there are two deployment modes that are
usually implemented: site-to-site VPNs and remote-access VPNs.
Here are the two deployment modes in enterprise-managed VPNs:
o Site-to-site VPNs connect entire networks to each other, such as connecting branch office, home office, or business partner networks to the main office network. Each site has a VPN-capable device, called a VPN gateway. Routers,
firewalls, and other security appliances, such as the Cisco Adaptive Security
Appliance (ASA), can act as VPN gateways. The VPN gateways establish the
connection between themselves. VPN gateways have static VPN configurations
and are ‘aware’ of the VPN tunnel established between them. End hosts are not
aware of the tunneling. They send and receive normal traffic. The traffic that must
be “placed” into the tunnel passes through a VPN gateway, which
cryptographically processes it if required, encapsulates it, and sends it through the
tunnel to the peer VPN gateway. The peer VPN gateway strips the headers,
verifies, and decrypts the content and relays the packet toward the destination
end host within its inside network. The destination host receives the normal traffic,
again, unaware of tunneling that happened.
o Remote-access VPNs are used to connect individual hosts to remote networks over
the internet. These VPNs support the need of telecommuters, mobile users, and
enterprise customers to access remote networks and applications. Individual hosts
typically connect to the internet via broadband connections. To establish VPNs, end
hosts use VPN client software or a web-based client, such as an SSL-enabled web
browser. On the remote side, the end host connects to a VPN server device. The
end-host data is encrypted by the client software, sent over the internet to the
VPN gateway, where it is decapsulated, decrypted and relayed to the appropriate
host inside its network. Remote-access VPNs are not statically set up. They are
dynamic and can be set up when required.
Site-to-site VPN options in use today are:
o IPsec tunnel: IPsec is a framework of open standards that spells out the rules for
secure communications. IPsec relies on existing algorithms to implement
cryptographic functions. The framework allows technologies to be replaced over
time. When cryptographic technologies become obsolete, it does not make the
IPsec framework obsolete. Current technologies are swapped to replace the
obsolete ones, keeping the framework in place. The IPsec framework provides a
tunnel mode of operation, which enables you to use it as a standalone connection
method. This option is the most fundamental IPsec VPN design model. IPsec
provides security services to VPN tunnels.
o Generic Routing Encapsulation (GRE) over IPsec: Although IPsec provides a secure
method for tunneling data across an IP network, it has limitations. IPsec does not
support IP broadcast or IP multicast, for example, it cannot be used when
exchanging messages from protocols that rely on these features, such as routing
protocols. IPsec also does not support the use of non-IP protocols. GRE is a
tunneling protocol developed by Cisco that can encapsulate a wide variety of
network layer protocol packet types, such as IP broadcast or IP multicast, and non-
IP protocols, inside IP tunnels, but it does not support encryption. Using GRE tunnels with IPsec gives you the ability to securely run routing protocols, IP multicast, or multiprotocol traffic across the network between remote locations (a basic GRE tunnel configuration sketch follows this list).
o With a generic hub-and-spoke topology, you can typically implement static tunnels
(typically GRE over IPsec) between the central hub and remote spokes. When you
want to add a new spoke to the network, you need to configure it on the hub
router. Also, the traffic between spokes has to traverse the hub, where it must exit
one tunnel and enter another. Static tunnels may be an appropriate solution for
small networks, but this solution becomes unacceptable as the number of spokes
grows larger. Cisco Dynamic Multipoint Virtual Private Network (DMVPN) is a
Cisco proprietary software solution that simplifies the device configuration when
there is a need for many VPN connections. With Cisco DMVPN, a hub-and-spoke
topology is first implemented. The configuration of this network is facilitated by a
multipoint GRE tunnel interface, established on the hub. Multipoint in the name
signifies that a single GRE interface can support multiple IPsec tunnels. The hub is a
permanent tunnel source. The size of the configuration on the hub router remains
constant even if you add more spoke routers to the network. The spokes are
configured to establish a VPN connection with the hub. After building the hub-and-
spoke VPNs, the spokes can obtain information about other spokes from the hub
and establish direct spoke-to-spoke tunnels.
o IPsec virtual tunnel interface (VTI): IPsec VTI is a feature that associates an IPsec
tunnel endpoint with a virtual interface. Traffic is encrypted or decrypted when it
is forwarded from or to the tunnel interface and is managed by the IP routing
table. Using IP routing to forward the traffic to the tunnel interface simplifies the
IPsec VPN configuration compared to the conventional process, allowing for the
flexibility of sending and receiving both IP unicast and multicast encrypted traffic
on any physical interface. The IPsec tunnel protects the routing protocol and
multicast traffic, like with GRE over IPsec, but without the need to configure GRE. Keep in mind that all traffic through the tunnel is encrypted and that, like standard IPsec, each tunnel supports only one protocol (IPv4 or IPv6).
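The following is a minimal sketch of the GRE tunnel building block mentioned in the GRE over IPsec option above. The interface names and addresses are illustrative, and the IPsec protection that would normally be layered on top is omitted:
RouterX(config)# interface Tunnel0
RouterX(config-if)# ip address 192.168.100.1 255.255.255.252
RouterX(config-if)# tunnel source GigabitEthernet0/0
RouterX(config-if)# tunnel destination 203.0.113.2
The router at the far end of the tunnel would mirror this configuration with the tunnel source and destination reversed.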
21.8 Introducing WAN Technologies
Provider-Managed VPNs
Using the technology implemented in their networks, service providers can provide VPNs
for the enterprises. A service provider segregates customer traffic as it crosses a shared
infrastructure. While traffic physically crosses shared infrastructure, there is no mixing of
traffic. One customer cannot see another customer’s traffic. When using MPLS, service
providers can create Layer 2 MPLS VPNs or Layer 3 MPLS VPNs.
A Layer 2 MPLS VPN is useful for customers who run their own Layer 3 infrastructure and
require only Layer 2 connectivity from the service provider. In this case, the customer
manages its own routing information. One advantage that Layer 2 VPN has over its Layer 3
counterpart is that some applications do not work if nodes are not in the same Layer 2
network.
Some typical examples of Layer 2 VPN are Virtual Private LAN Service (VPLS) and Virtual Private Wire Service (VPWS). If
you look from the customer perspective, with Layer 2 MPLS VPN, you can imagine a whole
service provider network as one big virtual switch.
A Layer 3 MPLS VPN provides a Layer 3 service across the backbone. A separate IP subnet
is used on each customer site. When you deploy a routing protocol over this VPN, the
service provider needs to participate in the exchange of routes. Neighbor adjacency is established between your customer edge (CE) router and the provider edge (PE) router (which the service provider owns).
Within the service provider network, there are many P routers (service provider core
routers). The job of P routers is to provide connectivity between PE routers. What this
situation means is that the service provider becomes the backbone of your (customer)
network.
Layer 3 VPN is appropriate for customers who prefer to outsource their routing to a
service provider. The service provider maintains and manages routing for the customer
sites. If you look from the customer perspective, with Layer 3 MPLS VPN, you can imagine
the whole service provider network as one big virtual router.
22.1 Explaining the Basics of ACL
Introduction
In enterprise environments, users often need to communicate with devices that contain sensitive information, and access to these devices should be controlled at the network level. Therefore, enterprises implement network-based security solutions that
can control communication between various segments in the company network. The main
goal is to protect the network from unauthorized use. For example, you could use this
mechanism to specify which users, based on their network address, can connect to a
server with sensitive information in the data center. However, often enterprises want to
control network communication on all devices where possible, to increase the level of
overall security in the network.
In certain cases, only basic packet filtering is necessary. For example, internally in a
network, where firewalls do not tend to operate, devices such as routers and multilayer
switches can provide basic traffic filtering capabilities using access control lists (ACLs). An
ACL is a versatile tool that is used by a network administrator and supported by all Cisco
IOS routers and multilayer switches. Layer 2 ACLs also exist, but only Layer 3 ACLs for IPv4 are covered here. ACLs are a way to identify traffic; one of the many uses of ACLs is for packet
filtering. With ACLs, packets are filtered based only on information that is contained in the
packet itself. Therefore, packet filtering is often too simple to guard against modern
network attacks, so you need more sophisticated devices such as firewalls.
A networking engineer needs to have a good understanding of ACLs including:
o General operation and how ACLs can be used.
o Use of appropriate wildcard masks when implementing an ACL.
o Knowledge of different types of ACLs.
o Configuration and verification of different types of basic ACLs.
22.2 Explaining the Basics of ACL
ACL Overview
ACLs are among the most commonly used features of Cisco IOS Software. ACLs are often
associated with the control of packets traveling in and out of a router interface. However,
there are many different applications of ACLs. For example, they can also be used to
control route advertisements, to specify interesting traffic for a VPN, and to limit debug
output.
An ACL in isolation is a series of statements that specify traffic selection criteria. By using
an ACL, an administrator defines packet selection criteria, which are also called matching
rules. Matches are based on information found in packet headers. Along with the
matching rule, an administrator defines a permit or deny action for the packets that meet
the criteria. You can interpret the permit action as "include into selection" and the deny
action as "do not include into selection". What is done with selected packets depends on
how the ACL is applied at the device.
ACLs are supported on a wide range of products including routers and switches.
The example in the figure shows two access lists: access list 15 and access list
PING_BLOCK. Both access lists contain a sequence of commands. Each command contains
an action, indicated by the red font, and a rule for matching packets, indicated by the blue
font. The actions are either permit or deny. The ordering of statements is indicated by the
number at the front of the row. At this point, it is enough to identify the general structure
of an access list. The details of the syntax will be explained.
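For illustration only, two such access lists might look as follows when displayed by the device; the individual statements shown here are placeholders, not the ones from the original figure:
Standard IP access list 15
    10 permit 192.168.1.0, wildcard bits 0.0.0.255
    20 deny   192.168.2.2
Extended IP access list PING_BLOCK
    10 deny icmp any host 10.1.1.1 echo
    20 permit ip any any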
One of the common applications of ACLs is traffic filtering. By default, a device does not
have ACLs configured; therefore, a device does not filter traffic. For instance, traffic that
enters the router is routed solely based on information within the routing table. However,
when an ACL is applied to an interface, the router performs the additional task of
evaluating all network packets against the ACL as they pass through the interface to
determine if the packet can be forwarded.
When used for traffic filtering, ACLs provide these features:
o Limit network traffic to increase network performance. For example, if a corporate
policy does not allow video traffic on the network, ACLs that block video traffic
could be configured and applied. This action would greatly reduce the network
load and increase network performance.
o Provide traffic flow control. For example, ACLs are used for traffic prioritization or to limit certain types of traffic.
o Provide a basic level of security for network access. ACLs can allow one host to
access a part of the network and prevent another host from accessing the same
part. For example, access to the Human Resources network can be restricted to
specific users.
o Filter traffic based on traffic type. For example, an ACL can permit email traffic but
block all Telnet traffic.
o Screen hosts to permit or deny access to network services. For example, ACLs can
permit or deny a user access to FTP or HTTP servers.
In addition to either permitting or denying traffic, ACLs can be used for selecting types of
traffic to be analyzed, forwarded, or processed in other ways. For example, ACLs can be
used to classify traffic to enable priority processing. This capability is similar to having a
VIP pass at a concert or sporting event. The VIP pass gives selected guests privileges not
offered to general admission ticket holders, such as priority entry or being able to enter a
restricted area.
Note: There are many types of ACLs you can configure on Cisco devices. Examples are:
dynamic ACLs, reflexive ACLs, time-based ACLs, MAC ACLs, VLAN ACLs, port ACLs, and so
on. This course covers some types of IPv4 access lists.
22.3 Explaining the Basics of ACL
ACL Operation
An ACL is a sequential list of permit or deny statements, known as access control entries
(ACEs) or ACL statements. When network traffic is processed by an ACL, the device
compares packet header information against the ACE matching criteria. ACL statements
are evaluated one by one, in a sequential order from the first to the last, to determine if
the packet matches one of them. This process is called packet filtering.
IP packet filtering can be based only on information found in Open Systems
Interconnection (OSI) Layer 3 header or on both Layer 3 and Layer 4 header information. A
device extracts the relevant information from the packet headers and compares the
information to the ACE matching rule.
ACL statements operate in a sequential, logical order. When a packet matches a rule in the
statement, the corresponding action is executed, and ACL processing stops. For instance,
in an access list with 15 statements, if a packet matches the first statement, the packet is
not evaluated against the other 14 statements. Only the instruction of the first matching
statement is executed, even if the packet would match subsequent ones.
The matching process continues until the end of the list. If a match is not found, the
packet is processed with a deny action and dropped. The last statement of an ACL is
always an implicit deny. This statement is automatically inserted at the end of each ACL
even though you do not see it when you view the content of an ACL. If the ACL is used for traffic filtering, the implicit deny blocks all traffic that does not match any preceding statement. Because of this implicit deny, an ACL that does not have at least one permit statement will deny all traffic.
The processing of ACL in traffic filtering is displayed in the figure.
22.4 Explaining the Basics of ACL
ACL Wildcard Masking
ACE processing compares the packet header information to a matching rule in the ACE.
The rules that define matching of IPv4 addresses are written using wildcard masks. As with
a subnet mask and an IPv4 address, a wildcard mask is a string of 32 binary digits. However,
a wildcard mask is used by a device to determine which bits of the address to examine for
a match.
A wildcard mask is not used on its own. It is used in conjunction with an IPv4 address. The
matching rule consists of a reference IPv4 address and a wildcard mask that applies to it.
When a wildcard mask is applied to the reference IPv4 address, the result is the matching
pattern of binary digits. For a match to occur, the IPv4 address from the packet header
must match the resulting pattern.
When a wildcard mask is applied, the following rules are in place:
o Where a wildcard mask bit is 0: the value found at the same position in the
"reference" IPv4 address must be matched.
o Where a wildcard mask bit is 1: the value found at the same position in the
"reference" IPv4 address can be ignored.
A matching criterion, or matching rule, has two elements:
o IPv4 address provides a reference against which IPv4 packet information is
evaluated.
o Wildcard mask provides evaluation criteria:
o 0 = this bit must match the value in the reference IPv4 address
o 1 = this bit can have any value
When comparing an IPv4 address from the packet header with the reference address from
the ACL statement, a device is looking for a match only for those bits of the reference IPv4
address that are masked by 0s in the wildcard mask.
The example demonstrates how the wildcard mask is interpreted. The reference IPv4
address is 172.16.100.1. For clarity, the analysis is given only for the third octet. The
second column lists some possible values for a wildcard mask octet, from 0000 0000 to
1111 1111. The third column contains the reference octet value 100 decimal, presented in
its binary form. Where the wildcard mask has a 0, the binary digit is colored red to indicate
that the digit must match with the same digit in the analyzed packet. Where the wildcard
mask has a 1, the binary digit is green, to indicate that the digit can be of any value. For
each wildcard mask - reference value combination, the fourth column gives the resulting
matching pattern. The "×" character indicates a bit that can have any value. The final column
gives the decimal values that would match the criteria specified with the reference value
and the wildcard mask.
A wildcard mask is sometimes referred to as an inverse mask. In a subnet mask, binary 1 is
equal to a match and binary 0 is not a match. The reverse is true for wildcard masks. A 0 in
a bit position of the wildcard mask indicates that the corresponding bit in the address
must be matched. A 1 in a bit position of the wildcard mask indicates that the
corresponding bit in the address is not interesting and can be ignored. There is another
significant difference between the subnet mask and the wildcard mask. After the first zero
in a subnet mask, all subsequent bits are 0s. There is no such regularity in wildcard
masks—after the first one in a wildcard mask, subsequent bits can be either 0s or 1s.
The example illustrates different matching rules.
The figure illustrates several examples of matching rules. The first two examples have
different reference addresses (172.16.100.0 and 172.16.100.1), but they result in the same
range of addresses because the relevant portions of the reference addresses (the first 24
bits, as indicated by the wildcard mask) are equivalent in both addresses. The third
example shows a wildcard mask that does not have only continuous sequences of 0s and
1s. Note that the third octet of the 0.0.254.255 wildcard mask breaks the array of 1s. This
wildcard mask requires that the last bit of the third octet must match the same bit in the
reference address, which is 1. Since only odd values have the last bit 1, the matching
criteria require an IPv4 address to be from one of the odd-numbered /24 networks, such
as 192.168.1.0, 192.168.3.0, 192.168.5.0, and so on.
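For example, an ACE that uses this kind of wildcard mask might be written as follows; the list number and reference address are illustrative:
RouterX(config)# access-list 30 permit 192.168.1.0 0.0.254.255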
With wildcard masks you can define criteria for matching all bits of an IPv4 address, or you
can define rules that require only parts of the reference address to match. Partial match
requirement results in a range of addresses matching the criteria, such as IPv4 addresses
of many subnets. By carefully setting wildcard masks, with one ACE you can select a single
IPv4 address or multiple IPv4 addresses.
The figure illustrates two uses of wildcard masks: one to match only one subnet and
another to match a range of subnets. Assume that you have subnetted address
172.16.0.0, and you want to create a wildcard mask that matches packets from subnets
172.16.16.0/24 through 172.16.31.0/24. There are 16 different subnets in that range. All
16 subnets have the first two octets identical. The third octet has values from 16 to 31.
To create rules that would match all 16 subnets, one way would be to create 16 different
ACEs, one for each matching subnet. Each of the 16 ACE statements would include a
criterion composed of the subnet ID and the wildcard mask 0.0.0.255, as illustrated in the
first table in the example. However, that would unnecessarily create an ACL with 16 or
more statements, when the criterion could be met with a single statement. Minimizing
ACL statements optimizes ACL processing speed.
To minimize the number of statements in an access list, you should try to find a wildcard
mask that matches a wider range of subnets. To do so, look into the binary
representations of the desired address range and identify which bits are identical in all of
them. In the example for subnets 172.16.16.0/24 through 172.16.31.0/24, it is easy to see
that the first two octets are identical in all addresses. Therefore, for a packet to match the
range, it too must have the same first two octets equal to 172.16. The wildcard mask that
requires matching the first two octets has 0s at the first 16 positions.
The third octet can have any value from 16 to 31. If you look closer into binary
representations of numbers 16 to 31, you will notice that all of them start with the same 4
bits 0001. The last 4 bits differ, from 16 having all zeros 0000, to 31 having all ones, 1111.
Therefore, any packet that has an address with the third octet starting with 0001, belongs
to the desired range. You can now determine the wildcard mask value. Four "must match"
bits followed by 4 "whatever" bits translate to a wildcard mask octet 00001111, or 15 in
decimal representation.
The last octet can have any value because you wish to select all packets from desired
subnets. Therefore, the last octet of the wildcard mask is all 1s, or 255 in decimal
representation. The entire wildcard mask would be 0.0.15.255.
Now that you have the wildcard mask determined, you need to determine the reference
IPv4 address and you will have the range matching rule. As the reference IPv4 address,
you can select any address from the range you wish to match. Examples of matching rules
would be 172.16.16.1 0.0.15.255, 172.16.17.1 0.0.15.255, or 172.16.30.200 0.0.15.255. As
long as you keep the wildcard mask unchanged, all these matching rules result in the same
range of matched IPv4 addresses. However, if you configure an ACL with any of these
matching rules, it will be changed to be an entry that has all nonmatching bits in the
reference IPv4 address set to binary 0, so you may see the configured reference IPv4
address different than you typed in. In this example, the matching rule would be changed
to 172.16.16.0 0.0.15.255.
Note that wildcard mask 0.0.15.255 matches all possible subnets that have the same 4 bits
in the third octet. In the example, there are exactly 16 values that have 0001 as the first 4
bits. You were looking for one statement to match exactly 16 subnets—no more or less.
You will not always be able to find a perfect fit with just one ACL statement. For instance,
if you had to match subnets 172.16.16.0 to 172.16.27.0 (only 12 subnets), the same
matching rule 172.16.16.0 0.0.15.255 would include the desired 12 subnets, but it would
be too wide because it would also include subnets 172.16.28.0 to 172.16.31.0.
To match the desired range of addresses exactly, sometimes you will have to use more
than one ACL statement. For example, to match a range of addresses from 172.16.16.0/24
to 172.16.32.0/24, you should use two entries with the following matching rules:
172.16.16.0 0.0.15.255 and 172.16.32.0 0.0.0.255.
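Configured as standard ACL entries, the two matching rules from this example might look like the following; the list number is illustrative:
RouterX(config)# access-list 40 permit 172.16.16.0 0.0.15.255
RouterX(config)# access-list 40 permit 172.16.32.0 0.0.0.255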
Note: Unlike IPv4 ACLs, IPv6 ACLs do not use wildcard masks. Instead, the prefix-length is
used to indicate how much of an IPv6 source or destination address should be matched.
IPv6 ACLs are beyond the scope of this course.
22.5 Explaining the Basics of ACL
Wildcard Mask Abbreviations
Working with decimal representations of binary wildcard mask bits can be tedious. To
make configuration easier and to improve readability when viewing an access list, the
most commonly used wildcard masks are represented by the keywords host and any.
The host keyword is equal to the wildcard mask 0.0.0.0. The host keyword and all-zeros
mask require all bits of the IPv4 address to match the reference IPv4 address.
The any keyword is equal to the wildcard mask 255.255.255.255. The any keyword and all-
ones mask do not require any of the IPv4 address bits to match the reference IPv4
address.
When using the host keyword to specify a matching rule, use the keyword before the
reference IPv4 address.
When using the any keyword, the keyword alone is enough; you do not specify the reference IPv4 address.
The keywords host and any are used as follows: instead of typing 172.30.16.5 0.0.0.0, you can type host 172.30.16.5. Instead of typing 172.30.16.5
255.255.255.255, you can type the any keyword. Note that 0.0.0.0 255.255.255.255 is
equivalent to any; thus you could type any reference IPv4 address with the wildcard mask
255.255.255.255, and it would indeed match any address.
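Put together in a standard access list, these abbreviations might look like the following sketch; the list number and the choice of actions are illustrative:
RouterX(config)# access-list 20 permit host 172.30.16.5
RouterX(config)# access-list 20 deny any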
Note how using keywords shortens the matching rule, making it easier to read and interpret the ACL statement.
If a wildcard mask is omitted in the matching criteria in a standard IPv4 ACL, a wildcard
mask of 0.0.0.0 is assumed.
22.6 Explaining the Basics of ACL
Types of Basic ACLs
Cisco routers support the following two basic types of IP ACLs:
o Standard IP ACLs specify matching rules for source addresses of packets only. The
matching rules are not concerned with the destination addresses of packets nor
with the protocols, whose data is carried in those packets. Matching rules specify
ranges of networks, specific networks, or single IP addresses. Standard IP ACLs
filter IP packets based on the packet source address only. They filter traffic at the IP layer, which means that they do not distinguish between TCP, UDP, or HTTPS traffic, for example.
o Extended IP ACLs examine both the source and destination IP addresses. They can
also check for specific protocols, port numbers, and other parameters, which allow
administrators more flexibility and control.
The figure illustrates and compares standard ACL and extended ACL filtering for IPv4
traffic.
A standard ACL can only specify source IP addresses and source networks as matching
criteria, so it is not possible to filter based on a specific destination. For more precise
traffic filtering, you should use extended ACLs.
Extended ACLs provide a greater range of control. In addition to verifying packet source
addresses, extended ACLs also may check destination addresses, protocols, and source
and destination port numbers, as shown in the figure. They provide more criteria on which
to base the ACL. For example, an extended ACL can simultaneously allow email traffic
from a network to a specific destination and deny file transfers and web browsing for a
specific host. The ability to filter on a protocol and port number allows you to build very
specific extended ACLs. Using the appropriate port number or well-known protocol
names, you can permit or deny traffic from specific applications.
The following are the two general methods that you can use to create ACLs:
o Numbered ACLs use a number for identification of the specific access list. Each
type of ACL, standard or extended, is limited to a preassigned range of numbers.
For example, specifying an ACL number from 1 to 99 or 1300 to 1999 instructs the
router to accept numbered standard IPv4 ACL statements. Specifying an ACL
number from 100 to 199 or 2000 to 2699 instructs the router to accept numbered
extended IPv4 ACL statements. Based on ACL number it is easy to determine the
type of ACL that you are using. Numbering ACLs is an effective method on smaller
networks with more homogeneously defined traffic.
o Named ACLs allow you to identify ACLs with a descriptive alphanumeric string (a name) instead of a numeric representation. Naming can be used for both IP standard and extended ACLs.
Cisco IOS Software provides a specific configuration mode for named access lists, which is
called Named Access List configuration mode. It is recognized by HOSTNAME(config-std-
nacl)# CLI prompt. The Named Access List configuration mode provides more flexibility in
configuring and modifying ACL entries.
For IPv4 and IPv6 packet filtering, you have to create separate ACLs. For each of the
protocols (IPv4 or IPv6), you can create multiple ACLs that are differentiated by their
numbers or names for IPv4 and by names only for IPv6. However, you are restricted in
how many ACLs you can apply simultaneously, depending on the purpose of ACL. For
instance, to filter traffic on an interface, you can apply only one ACL per protocol and
traffic direction.
22.7 Explaining the Basics of ACL
Configuring Standard IPv4 ACLs
These commands are used in the global configuration mode to configure standard
numbered IPv4 ACLs:
Router(config)# access-list access-list-number permit|deny source [source-wildcard] |
host { address | name} | any
The access-list access-list-number command is used for the numbered access lists. To
configure a standard IPv4 ACL, you must choose a number from a range assigned for
standard access list. Specifying an ACL number from 1 to 99 or 1300 to 1999 instructs the
router to accept numbered standard IPv4 ACL statements. The CLI will allow only the
syntax applicable for standard ACL statements.
The following commands are used to configure standard named IPv4 ACLs. When adding
ACL statements to the configuration, Cisco IOS Software automatically numbers each
statement. By default, the numbering starts with 10 and subsequent numbers are
incremented by 10. The sequence number determines where the new entry will be placed
in the ACL. You can use any number that is not currently assigned, even if it is not a
multiple of 10.
Router(config)# ip access-list standard access-list-name
Router(config-std-nacl)# [sequence-number] permit | deny source [source-wildcard] |
host { address | name} | any
The figure shows the anatomy of a numbered standard ACL statement. A numbered
standard ACL statement consists of the access list identification (a number) followed by a
keyword indicating the action to be taken, and the matching criteria. Since standard IPv4
access lists allow matching only on source IPv4 address, the matching criteria always
refers to the source IPv4 address.
Each ACL statement includes a keyword indicating an action that a device must take for
the packet that matches the criteria. The action is either a permit action, allowing a
matching packet to be processed further (forwarded, analyzed, …) or a deny action, which
discards the matching packets when using ACL for packet filtering. To specify an action
that a device takes for the packet that matches the criteria, use the keywords permit or
deny.
You specify matching criteria either by using reference IPv4 address and a wildcard mask
or its abbreviated keyword. If wildcard mask is not specified, the 0.0.0.0 value is assumed.
An example of a standard ACL configuration on RouterX is:
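The original example is not reproduced here; one possible configuration that matches the description below uses list number 1, an arbitrary choice from the standard range:
RouterX(config)# access-list 1 deny host 172.16.3.3
RouterX(config)# access-list 1 permit 172.16.0.0 0.0.255.255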
If this ACL is used as a traffic filter, it would discard traffic from host 172.16.3.3 and allow
traffic from other devices in the network 172.16.0.0. Note that an access list must have at least one permit statement; otherwise, it blocks all traffic.
The ip access-list standard command is used for named standard IPv4 access lists. Note
the ip keyword added at the beginning of the command.
The name you choose for the access list is an arbitrary descriptive alphanumeric string. Because it is arbitrary, the CLI cannot determine the ACL type from the name alone. You must specify the type of list that you are naming, which is why the keyword standard must be used in the ip access-list command when configuring a named standard IPv4 ACL. Using meaningful descriptive names to identify access lists makes it easier to indicate their purpose. Capitalizing ACL names makes them stand out when viewing the device configuration and distinguishes ACL names from actual CLI commands.
Note: When using named ACL configuration, you can specify a number as a name. The
numbers have the same meaning as in numbered ACL configuration. You must use the
correct number for a standard ACL. For example, you cannot configure a standard ACL
with number 125, because this is not a valid number for a standard ACL.
Using the ip access-list standard command takes you to the Named Access List
configuration mode, which is indicated by the Router(config-std-nacl)# prompt. Note the
abbreviations std and nacl in the prompt, that stand for standard and named ACL,
respectively.
Once you are in the Named Access List configuration mode, you enter the ACL statements.
Each statement has the same elements as its numbered counterpart—it contains the
action keyword followed by matching criteria.
The configuration of the same access list used in the previous example using named
configuration method is the following:
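Assuming the same statements as in the numbered example above and a hypothetical name TRAFFIC_FILTER, the named configuration might look like this:
RouterX(config)# ip access-list standard TRAFFIC_FILTER
RouterX(config-std-nacl)# deny host 172.16.3.3
RouterX(config-std-nacl)# permit 172.16.0.0 0.0.255.255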
The order of the ACL statements is important. Recall that ACL processing stops after the
first match is encountered. Only the matching statement is executed. Other statements
are not evaluated. More specific statements, such as those permitting and denying
particular hosts, should be placed before the statements matching on a wider range of
addresses.
An implicit deny any statement is added to the end of each standard IPv4 access list,
denying all other packets that did not match ACL statements.
22.8 Explaining the Basics of ACL
Configuring Extended IPv4 ACLs
The following commands are used in global configuration mode to configure extended
numbered IPv4 ACLs:
Router(config)# access-list access-list-number permit|deny protocol
source_matching_criteria destination_matching_criteria
Specifying an ACL number from 100 to 199 or 2000 to 2699 instructs the router to accept
numbered extended IPv4 ACL statements. The CLI allows only the syntax applicable for
extended ACL statements.
The following commands are used to configure extended named IPv4 ACLs:
Router(config)# ip access-list extended access-list-name
Router(config-ext-nacl)# [sequence-number] permit | deny protocol
source_matching_criteria destination_matching_criteria
The figure shows the anatomy of a numbered extended ACL statement. An extended ACL
statement has more elements than a standard ACL statement.
In addition to the ACL number and action keyword, the extended ACL statement contains:
o A keyword indicating a protocol suite, such as ip, icmp, tcp, or udp. Keyword ip
matches all protocols.
o Matching criteria for the source IPv4 address and optionally port
o Matching criteria for the destination IPv4 address and optionally port
The syntax for specifying matching criteria allows the following:
o Specifying IPv4 address using syntax source [source-wildcard] | host {address |
name} | any
o Option 1: Reference IPv4 address and a wildcard mask
o Option 2: Keyword host and a reference IPv4 address or host name
o Option 3: Keyword any
o Optionally specifying either the source port or the destination port, or both ports,
using the syntax operator port
o Port matching criteria uses operators to specify a single port number or
range of port numbers
o Specify a single port using the operator eq (equal), lt (less than), gt (greater than), or neq (not equal), followed by a port number (for instance 80) or a protocol name (for well-known protocols, such as www).
o Specify a range of ports using the operator range (inclusive range) with the first and the last port number of the range.
In an extended access list, you must specify matching criteria both for the source and for
the destination header parameters. The previous figure shows a sample configuration
which includes a statement permitting only TCP connections from client port numbers in
the range of 56000 to 60000 on the host 172.16.3.3 to establish connection to port 80 on
host 203.0.113.30. Note that matching criteria is first fully specified for the source
information and only then matching criteria for the destination is given. If you are
specifying both IPv4 address and a port, specify both for the source part before you
specify the destination criteria.
An example of a numbered extended ACL configuration on RouterX, denying remote
access via Telnet or SSH from the devices in the 172.16.3.0/24 subnet and permitting
other traffic, is below:
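The original example is not reproduced here; one possible configuration that matches this description uses list number 101, an arbitrary choice from the extended range:
RouterX(config)# access-list 101 deny tcp 172.16.3.0 0.0.0.255 any eq telnet
RouterX(config)# access-list 101 deny tcp 172.16.3.0 0.0.0.255 any eq 22
RouterX(config)# access-list 101 permit ip any any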
It is very important to note that in this example the port numbers are destination port
numbers because they come after the destination address (which in this case is
represented by the keyword any). Port numbers that appear after the source address are
source port numbers.
The ip access-list extended command is used for the named extended IPv4 access lists.
Note the ip keyword added at the beginning of the command. You must specify the
keyword extended.
Using the ip access-list extended command takes you to the Named Access List
configuration mode, which is indicated by the Router(config-ext-nacl)# prompt. Note the
abbreviation ext in the prompt, standing for extended.
Named configuration mode allows you to specify numbers as names, as long as you use
the numbers assigned for the type of the access list you are configuring.
An example of a named configuration of the same extended ACL from the previous
example is:
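Assuming the same statements as above and a hypothetical name NO_REMOTE_ACCESS, the named configuration might look like this:
RouterX(config)# ip access-list extended NO_REMOTE_ACCESS
RouterX(config-ext-nacl)# deny tcp 172.16.3.0 0.0.0.255 any eq 23
RouterX(config-ext-nacl)# deny tcp 172.16.3.0 0.0.0.255 any eq 22
RouterX(config-ext-nacl)# permit ip any any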
Notice that in the named configuration of the same extended ACL the port number 23 was
used instead of keyword telnet.
The order of the ACL statements is also important for extended ACLs. ACL processing is sequential: it starts with the first ACL statement and continues top down until the first match is encountered. The matching ACL statement
is executed and the processing stops. Remaining statements are not evaluated. Therefore,
more specific ACL statements, such as those permitting and denying particular hosts,
should be placed before the statements matching on wider range of addresses.
An implicit deny any any statement is added to the end of each extended IPv4 access list,
denying all traffic that did not match ACL statements.
22.9 Explaining the Basics of ACL


Verifying and Modifying IPv4 ACLs
When adding ACL statements to the configuration, Cisco IOS Software automatically
numbers each statement. By default, numbering starts with 10 and subsequent numbers
are incremented by 10.
To verify the configured access-list, you can use the following commands:

• The show access-lists command displays the content of all configured ACLs. The
output can be narrowed to a specific list by providing its number or its name.
• The show ip access-lists command displays the content of all IPv4 access lists. The
output can be narrowed by specifying a specific ACL number or name.
Both commands display ACL statements with their sequence numbers.
The verification command output for the access lists from the previous examples would
be:
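Assuming the hypothetical number and name used in the sketches above, the output would resemble the following (Cisco IOS Software displays well-known ports by name, so port 23 appears as telnet):

RouterX# show access-lists
Extended IP access list 101
    10 deny tcp 172.16.3.0 0.0.0.255 any eq telnet
    20 deny tcp 172.16.3.0 0.0.0.255 any eq 22
    30 permit ip any any
Extended IP access list DENY-REMOTE-ACCESS
    10 deny tcp 172.16.3.0 0.0.0.255 any eq telnet
    20 deny tcp 172.16.3.0 0.0.0.255 any eq 22
    30 permit ip any any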

Modifying an access list differs between the numbered and the named configuration
methods. Adding and removing statements is more convenient when using the named
configuration method. When modifying access lists, you can delete entire lists or add
and remove specific entries.
To delete an IPv4 access list, you can use one of the following commands.
• no access-list access-list-number
• no ip access-list standard|extended access-list-name
Both commands require you to specify the number or the name of the access list you wish
to delete.
Using numbered configuration method, you cannot add or remove individual statements
directly. Instead, you would have to first copy the entire access list, modify it in the text
editor, delete it from the configuration and enter the modified ACL statements.
Because the show access-lists command displays ACL statements in a format different
from the syntax used to configure them, it is more convenient to use the show running-
config command when you want to edit the statements in an editor. In the running
configuration file, the ACL statements are stored with the proper configuration syntax,
so they can be easily reused. To display only the numbered access lists from the running
configuration, you can use the show running-config | include access-list command.
Note: You cannot delete individual entries using numbered configuration method. If you
mistakenly issue a no version of the ACL statement you used to configure an entry, for
instance no access-list 15 permit host 192.168.1.1, you will delete the entire access-list
15.
With named configuration method, modifying an ACL is significantly easier. Before you
implement the modification, you need to know the sequence number of the statement
you wish to add or remove. Modifications are implemented in the Named Access List
configuration mode.
To add an entry from within Named Access List configuration mode, use one of the
following commands, depending on whether you are modifying a standard or an extended
access list:

• For the standard ACL use the [sequence-number] permit|deny
source_matching_criteria command
• For the extended ACL use the [sequence-number] permit|deny protocol
source_matching_criteria destination_matching_criteria command
When you choose the sequence number, you choose where the new entry will be placed
in the ACL. You can use any number that is not currently assigned, even if it is not a
multiple of 10.
For example, if you want to modify an existing access list 1 from the previous example
with another specific host entry, you enter the following commands:
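A minimal sketch of such a modification (the host address 192.168.1.5 and the deny action are placeholders used only for this sketch):

Router(config)# ip access-list standard 1
Router(config-std-nacl)# 15 deny host 192.168.1.5
Router(config-std-nacl)# end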
In this example you assigned number 15 to a new entry. An entry matching a single IPv4
address is more specific than an entry matching a subnet or a range of subnets, therefore
it was added near the top of the ACL. The ACL entries are as follows:

Specific statements cannot be overwritten using the same sequence number as an existing
statement. The current statement must be deleted first, and then the new one can be
added.
To delete an entry, go to the Named Access List configuration mode. When deleting an
entry for the numbered access list, use the ACL number as the name of the list you wish to
modify. Again, you need to know the sequence number of the statement you wish to
delete. To delete a statement, use the command no sequence-number.
Note that a reload will resequence numbers in the ACL so that all numbers are multiples
of 10. To initiate resequencing on your own and avoid reloading, use the ip access-list
resequence access-list-name starting-sequence-number increment command.
After modifying an access list, verify the changes using the show access-lists command.

22.10 Explaining the Basics of ACL


Applying IPv4 ACLs to Filter Network Traffic
The proper placement of an ACL to filter traffic can make the network operate more
efficiently. An ACL can be placed to reduce unnecessary traffic. The general logic of
packet filtering is to filter as early as possible: you do not want to carry and route
packets through the network, wasting resources along the path, only to drop them
somewhere else.
To decide on the proper placement, you need to understand how ACLs process the
packets. Standard and extended ACLs process packets differently.
Standard IPv4 ACLs, whether numbered (1 to 99 and 1300 to 1999) or named, filter
packets based solely on the source address, and they permit or deny the entire TCP/IP
suite. It is not possible to implement granular, per protocol policies with standard ACLs.
When processing a standard ACL, a device needs to inspect only one field in the IP packet:
the source IP address in the IP header. The source IP address is matched against the
criteria specified in the ACL statements, and when a match is found, the packet is
permitted or denied based on the action specified in the matching statement.
The best practice is to place standard ACLs as close to the destination of the traffic as
possible. This placement may sound inefficient, because you allow traffic to travel almost
to its destination and consume resources even though it will eventually be discarded.
However, because standard ACLs are simple, placing them too close to the source might
filter out more traffic than you actually want.
The figure illustrates the logic for placing standard ACLs. In the example network, you
want to block IP traffic from the Guest VLAN to your internal servers (A). If you place a
standard ACL too close to the source of the traffic, at router R1, you would block traffic
going to the internal servers, but you would also block access to the internet (B). Standard
access lists do not consider the destination IP address or the protocols carried, so they
cannot differentiate between traffic flows going to the internal servers and traffic flows
going to the internet. Because you want to block access to the internal servers while still
allowing guests to access the internet, the best placement for the standard ACL is at
router R2, which is closest to the internal servers (C). This example shows how important
it is to place standard ACLs correctly.
In many situations, a standard ACL may not provide the required level or type of control
you need. You may need a more precise tool for selecting network traffic and more
flexibility in how you place the ACL. One of the solutions is to use extended access lists,
which allow you more granular control of traffic flows.
When you are using extended ACLs, it is best practice to place them as close to the source
of discarded traffic as possible. This way, you deny packets as soon as possible and do not
allow them to cross network infrastructure. Extended ACLs allow you to define matching
criteria that consider the carried protocol and destination addresses and ports, therefore,
refining packet selection.
In the example from the previous figure, you wanted to block IP access from the Guest
VLAN to your internal servers (A). With an extended access list, you can be selective and
block only the traffic destined to the internal servers. Therefore, you can place the
extended access list on router R1, closest to the Guest VLAN, and prevent the unwanted
traffic from crossing other devices (B). Note how the extended access list discards the
traffic early, so it does not cross other network devices and consume network resources,
as it did in the example using a standard ACL (C).
Placement of the ACL and, therefore, the type of ACL used may also depend on these
parameters:

• The extent of the network administrator’s control: Placement of the ACL can
depend on whether the network administrator has control of both the source and
destination networks.
• The bandwidth of the networks involved: Filtering unwanted traffic at the source
prevents transmission of the traffic before it consumes bandwidth on the path to a
destination—especially important in networks that have low bandwidth.
• Ease of configuration: If a network administrator wants to deny traffic coming
from several networks, one option is to use a single standard ACL on the router
closest to the destination. The disadvantage is that traffic from these networks will
use bandwidth unnecessarily. An extended ACL could be used on each router
where the traffic originated, which will save bandwidth by filtering the traffic at
the source, but requires the knowledge to create extended ACLs.
Traffic filtering is a common application of ACLs. Traffic filtering controls access to a
network by analyzing the incoming and outgoing packets and forwarding them or
discarding them based on ACL criteria. Traffic filtering can occur at Layer 3 or Layer 4.
Standard ACLs only filter at Layer 3. Extended ACLs can filter at both Layer 3 and Layer 4.
When you decide which device and which interface is the most appropriate for the
placement of an access list for traffic filtering, you need to decide to which traffic direction
the ACL should be applied.
There are two possible traffic directions: inbound and outbound.

Traffic directions are determined from the device’s point of view. The figure illustrates the
concept. Imagine that you are standing inside the device. Traffic arriving on an interface
that enters the device to be processed is called inbound, ingress, or incoming traffic.
Traffic leaving the device out through an interface is called outbound, egress, or exiting
traffic.
Note: ACLs for traffic filtering do not act on packets that originate from the router itself.

The figure illustrates how packet processing occurs when ACLs are applied:

• Inbound ACLs process incoming packets as they enter the interface, before they
are routed to the outbound interface. An inbound ACL is efficient because it saves
the overhead of routing lookups if the packet is discarded. If the packet is
permitted by the ACL, it is then processed for routing.
• Outbound ACLs process packets that are routed to the outbound interface. They
are processed before they exit the interfaces.
After you have configured an ACL, you link the ACL to an interface using the ip access-
group command. The following figure describes the command syntax and shows examples
of applying standard and extended access lists on an interface.
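The interface-level syntax is ip access-group {access-list-number | access-list-name} {in | out}. A short sketch of applying the earlier hypothetical access lists (interface names are assumptions):

Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip access-group 1 in
Router(config-if)# exit
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip access-group DENY-REMOTE-ACCESS out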
To remove an ACL from an interface, first enter the no ip access-group command on the
interface, then enter the global no access-list command to remove the entire ACL if
needed.
You can configure one ACL per protocol, per direction, per interface:

• One ACL per protocol: To control traffic flow on an interface, an ACL must be
defined for each protocol enabled on the interface. For instance, if you wish to
filter both IPv4 and IPv6 traffic on the interface in one direction, you have to create
and apply two access lists, one for each protocol.
• One ACL per direction: ACLs control traffic in one direction at a time. Two separate
ACLs may be created to control both inbound and outbound traffic on an interface,
or you can apply the same ACL in both directions, if it makes sense to do so.
• One ACL per interface: An ACL applies only to the specific interface on which it is
configured, for example, GigabitEthernet 0/0.

The figure shows a scenario in which ACL is used to deny access to the internet only to the
host IPv4 address 10.1.1.101. Traffic from other hosts within 10.1.1.0/24 is allowed.
Two ACL implementations are represented in the figure. The first uses a standard ACL 15.
A standard ACL is applied at the point closest to the destination. That point is the Gi0/1
interface on the Branch router. Filtering should happen for the traffic exiting Gi0/1
interface, in the outbound direction. Note that PC2 traffic would reach the router, be
processed to determine the outbound interface (it will be routed), and, only at the very
exit, it will be discarded. The processing power and bandwidth of the router are used for
both permitted and discarded traffic.
Note that another possible placement for standard ACL 15 would be the
GigabitEthernet0/0 interface. For the traffic to be filtered, the direction would have to be
inbound. However, this solution would not only prevent host PC2 from accessing the
internet but would also deny all communication between PC2 and the router.
The second implementation uses an extended ACL NOINTERNET_PC2. An extended access
list should be placed as close to the source of the denied traffic as possible. In the
example, the denied traffic is the PC2 traffic. The closest point to PC2 is the Gi0/0
interface on the Branch router. It should filter traffic incoming to the router; therefore,
the ACL should be applied in the inbound direction. Traffic from PC2 will be discarded
before it is routed, which saves the processing power and bandwidth of the router.
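A configuration sketch of the two options just described (the statements of ACL 15 and NOINTERNET_PC2 are assumptions consistent with the scenario; the interface names follow the figure):

! Option 1: standard ACL, applied outbound on Gi0/1, closest to the destination
Branch(config)# access-list 15 deny host 10.1.1.101
Branch(config)# access-list 15 permit 10.1.1.0 0.0.0.255
Branch(config)# interface GigabitEthernet0/1
Branch(config-if)# ip access-group 15 out

! Option 2: extended ACL, applied inbound on Gi0/0, closest to the source
Branch(config)# ip access-list extended NOINTERNET_PC2
Branch(config-ext-nacl)# deny ip host 10.1.1.101 any
Branch(config-ext-nacl)# permit ip any any
Branch(config-ext-nacl)# exit
Branch(config)# interface GigabitEthernet0/0
Branch(config-if)# ip access-group NOINTERNET_PC2 in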
In real-life networks, you could encounter complex security policies and ACLs. As a
network engineer, you need solid knowledge of how ACL statements affect traffic, so that
you can place ACLs where they have the greatest impact on efficiency.
23.1 Enabling Internet Connectivity
Introduction
One of the most important tasks when designing a network topology is planning for
enterprise internet connectivity. Modern corporate networks are connected to the global
internet and use it for some data transport needs. Corporations provide many services to
customers and business partners via the internet. When planning for internet
connectivity, it is also important to understand the process of assigning IP addresses. An
ISP can provide internet connectivity by providing statically assigned, public IP addresses,
or dynamically allocate them with DHCP. Depending on the option that is used, the
internet-facing interfaces need to be configured accordingly.
The IPv4 address space is not large enough to uniquely identify all network-capable
devices that need internet connectivity. As a response to this limitation, private addresses
have been reserved. However, since private addresses are not routed by internet routers,
there needs to be a mechanism in place to translate private addresses to public addresses,
which are routed by internet routers. The mechanism that is used to perform this
translation is called the Network Address Translation (NAT).
NAT is usually implemented on border devices such as firewalls or routers. This
implementation allows devices within an organization to have private addresses and to
only translate traffic when it needs to be sent to the internet.

As a networking engineer, you need to be able to tackle different situations related to
enterprise internet connectivity and know how to handle different tasks, for example:
• How DHCP can be used for address assignment by an ISP and what is needed on
the customer side.
• When to use NAT and when to use Port Address Translation (PAT).
• How to configure static and dynamic NAT.

23.2 Enabling Internet Connectivity


Introducing Network Address Translation
NAT is a mechanism used for connecting multiple devices on internal, private
networks to a public network such as the internet, using a limited number of public IPv4
addresses. It was designed to conserve IPv4 address space.
The IPv4 address space is not large enough to uniquely identify all network-capable
devices that need IP-based network connectivity. This limitation led to the development of
private addresses. Private addresses are described in RFC 1918. Private addresses are not
routed by internet routers and should be used only within an enterprise. Devices in the
enterprise network must have a mechanism in place to "procure" a public address when
they need internet access and to translate private addresses to public addresses. Public
addresses are routed by internet routers. The mechanism that "procures" a public address
for a device with a private address that needs access to the internet is NAT. NAT performs
translations. Most commonly, the subject of translation is an IPv4 address, and it is
translated from a private address to a public address.
Note: Although the IPv6 protocol suite also includes NAT mechanisms (mostly for
translation between IPv4 and IPv6), in this course, you will focus on its use in the context
of IPv4. Where translation happens only between two IPv4 addresses, NAT is also called
NAT44.
To illustrate how NAT performs its tasks, presume that an enterprise network uses a
private IPv4 addressing scheme. The translation usually happens when a device in the
enterprise network initiates communication with a device in the internet. Just before the
packets enter the internet realm, a device at the border between the enterprise network
and the internet translates or swaps the private address with a public address. The
packets reach their destination in the destination device and, eventually, the same border
device receives responses. It is important to note that responses are destined to the
public address, and the public address is now written in the destination IPv4 address
header field. The border device is the only device that knows how to translate the public
address back to the appropriate private address. The translation now happens in reverse,
public to private direction. The public address is translated back to the private address
before the responses are forwarded to the initiator of the communication. The key point
is that address translation, or address swapping, happens for traffic traveling in both
directions, outbound and inbound.
In an enterprise environment, NAT is usually implemented on border devices such as
firewalls or routers. This implementation allows devices within an enterprise network to
have private addresses to communicate among themselves and to translate addresses
only when they need to send traffic to the internet or outside networks in general. When
accessing the internet, the border device translates private addresses to public addresses
and keeps a mapping between them, to match the returning traffic. In a home
environment, this device might be an access point that has routing capability, or the DSL
or cable router.
NAT can also be used when there is an addressing overlap between two private networks.
An example of this implementation would be when two companies merge and they were
both using the same private address range. In this case, NAT can be used to translate one
intranet's private addresses into another private range, avoiding an addressing conflict
and enabling devices from one intranet to connect to devices on the other intranet.
Therefore, NAT is not implemented only for translations between private and public IPv4
address spaces, but it can also be used for generic translations between any two different
IPv4 address spaces.

23.3 Enabling Internet Connectivity


NAT Terminology and Translation Mechanisms

In NAT terminology, addresses are categorized into two types. All classifications described
apply to the border device that performs translations.
The first classification divides addresses based on where they exist in the network:
• Inside addresses are addresses that belong to the network in question, such as
addresses of devices internal to the network. The inside network is the set of
networks that are subject to translation.
• Outside addresses are all addresses that do not belong to the network in question.
The outside network refers to all other addresses.
The second classification divides addresses based on where they are "viewed:"

• Local addresses are address values that are "seen" by a local device or, in other
words, address values that are intended to be used by the devices in the local
(inside) network.
• Global addresses are address values as seen globally or, in other words, address
values meant to be used by the devices in external (outside) networks. You can also
think of a global address as the address seen or used by devices in the internet,
when they refer to an inside device. However, remember that NAT can also
translate between two private address realms. Devices in the internet always see
public addresses.
The figure illustrates a sample packet that is being transmitted from an inside network
through a border device to an outside network and back (from PC1 to SRV1 on the
internet and back). For IPv4 header fields, their NAT names are indicated. Each translation
is based on the mapping table entries that are created at the border device. Note that
both source and destination IPv4 addresses can be subject to NAT translation. Usually,
only source (inside) addresses are translated while destination (outside) addresses
remains unchanged. When only the inside address is translated, NAT is called inside NAT,
and when only the outside address is translated, NAT is called outside NAT.

All combinations of the two types of addresses are possible:


• Inside local address: IP address of an inside network device that is used in all
packets that remain in the inside network. An inside device always has the inside
local address configured. The IPv4 ranges here are typically those from the private
IPv4 address ranges described in RFC 1918. In this example, when PC1 sends
packets to SRV1, PC1 uses the 192.168.10.10 as source IPv4 address, which is
indicated as the inside local address in the mapping table.
• Inside global address: IP address of an internal device as it appears to the external
networks, that is, the translated inside local address. For example, when PC1 sends
packets to SRV1, its inside local address is translated to an inside global address,
which is 209.165.200.226. In typical NAT implementations, where only inside NAT
is performed, the inside global IPv4 address is a public IPv4 address.
• Outside local address: IP address of an external device as it appears to the internal
network. The outside local address is allocated from a routable address space on
the inside. Since only inside NAT is performed in this example, the outside local
IPv4 address is the SRV1’s IPv4 address 209.165.201.1.
• Outside global address: IP address of an external device as seen externally. The
external device had this address before arriving to the enterprise network border
device. This address is the address that the device owner assigns to the device for
the ‘external’ use. As far as the NAT device is concerned, this is the source IPv4
address the NAT device sees in the packets arriving on its outside interface. In this
example, that is the SRV1’s IPv4 address 209.165.201.1.
Note: The focus of this lesson is on inside NAT, where only the inside address is translated.
To classify NAT, it is important to first clarify which header fields can be the subject of
translation. So far, the NAT explanation has focused on IPv4 address fields, but NAT
implementations can also translate port numbers.
Depending on the scope of translation (only IPv4 address or both IPv4 address and port
number) and depending on the translation mechanism details, there are these NAT
implementations:

• Static NAT maps a local IPv4 address to a global IPv4 address (one to one). Port
numbers are not translated. Static NAT is particularly useful when a device must be
accessible from an external network, such as when a device must have a static,
unchanging address accessible from the internet. Static NAT is usually used when a
company has a server that must be always reachable, from both inside and outside
networks. Both server addresses, local and global, are static. So the translation is
also always static. The server's local IPv4 address will always be translated to the
known global IPv4 address. This fact also implies that one global address cannot be
assigned to any other device. It is an exclusive translation for one local address.
Static translations last forever.
• Dynamic NAT maps local IPv4 addresses to a pool of global IPv4 addresses. When
an inside device accesses an outside network, it is assigned a global address that is
available at the moment of translation. The assignment follows a first-come, first-
served algorithm; there are no fixed mappings, so the translation is
dynamic. The number of translations is limited by the size of the pool of global
addresses. When using dynamic NAT, make sure that enough global addresses are
available to satisfy the needed number of user sessions. Dynamic translations
usually have a limited duration. After this time elapses, the mapping is no longer
valid and the global IPv4 address is made available for new translations. An
example of when dynamic NAT is used is a merger of two companies that are using
the same private address space. Dynamic NAT effectively readdresses packets from
one network and is an alternative to complete readdressing of one network.
• Network Address and Port Translation (NAPT) or Port Address Translation (PAT)
maps multiple local IPv4 addresses to just a single global IPv4 address (many to
one). This process is possible because the source port number is translated also.
Therefore, when two local devices communicate to an external network, packets
from the first device will get the global IPv4 address and a port number X, and the
packets from the second device will get the same global IPv4 address but a
different port number Y. PAT is also known as NAT overloading, because you
overload one global address with ports until you exhaust available port numbers.
The mappings in the case of PAT have the format of local_IP:local_port –
global_IP:global_port. PAT enables multiple local devices to access the internet,
even when the device bordering the ISP has only one public IPv4 address assigned.
PAT is the most common type of network address translation.

The inside and outside definition is important for NAT operation. The figure illustrates the
importance of inside and outside definitions regarding the processing sequence. When a
packet travels from an inside domain to an outside domain, it is received at an inside
interface, routed, and, only then, addresses are translated to global addresses. At this
point, the border device automatically creates translation-mapping (basically a "dictionary
entry") if the mapping does not exist. The packet is then forwarded out the exit (outside)
interface. In dynamic translation, the border device also sets a timeout value for each
mapping it creates. The key point to remember is that with dynamic NAT implementation,
mapping creation is "provoked" by inside to outside traffic. Without outbound traffic, no
mappings are created.
When a packet travels from an outside domain to an inside domain, the process is
reversed: packets arriving from the outside with their global addresses are first translated
back to their local addresses and, only then, routed. Note that the inbound traffic has the
translated address (the inside global address) in the destination IPv4 header. Since the
routing happens after translation, it will be based on the original, local IPv4 address.
However, all outside routers—routers in external networks—must have a route towards
the global IPv4 address in order for packets to reach the inside network. Only the global
address is visible in the external world.
What happens if a packet arrives from the outside, and there is no mapping for its
destination address? When NAT service on a device cannot find a mapping for an inbound
packet, it will discard the packet. When is this situation encountered? Dynamic NAT
creates mappings when an inside host initiates communication with the outside. However,
dynamic mappings do not last forever. After a dynamic mapping timeout expires, the
mapping is automatically deleted. Recall that dynamic mappings are not created unless
there is inside to outside traffic. Also, when NAT is required, the outside to inside
communication will not be possible, unless there was prior outbound communication. In
other words, NAT does not allow requests initiated from the outside.
If the return communication is received after the timeout expires, there would be no
mappings, and the packets will be discarded. You will not encounter this issue in static
NAT. A static NAT configuration creates static mappings, which are not time limited. In
other words, statically created mappings are always present. Therefore, those packets
from outside can arrive at any moment, and they can be either requests initiating
communication from the outside, or they can be responses to requests sent from inside.

23.4 Enabling Internet Connectivity


Benefits and Drawbacks of NAT
Here are the benefits of NAT:

• NAT conserves public addresses by enabling multiple privately addressed hosts to
communicate using a limited, small number of public addresses instead of
acquiring a public address for each host that needs to connect to the internet. The
conserving effect of NAT is most pronounced with PAT, where internal hosts can
share a single public IPv4 address for all external communication.
• NAT increases the flexibility of connections to the public network.
• NAT provides consistency for internal network addressing schemes. When a public
IPv4 address scheme changes, NAT eliminates the need to readdress all hosts that
require external access, saving time and money. The changes are applied to the
NAT configuration only. Therefore, an organization could change ISPs and not need
to change any of its inside clients.
• NAT can be configured to translate all private addresses to only one public address
or to a smaller pool of public addresses. When NAT is configured, the entire
internal network hides behind one address or a few addresses. To the outside, it
seems that there is only one or a limited number of devices in the inside network.
This hiding of the internal network helps provide additional security as a side
benefit of NAT.
The disadvantages of NAT include:

• End-to-end functionality is lost. Many applications depend on the end-to-end
property of IPv4-based communication. Some applications expect the IPv4 header
parameters to be determined only at endpoints of communication. NAT interferes
by changing the IPv4 address and sometimes transport protocol port (if using PAT)
numbers at network intermediary points. Changed header information can block
applications.
o For instance, call signaling application protocols include the information
about the device's IPv4 address in its headers. Although the application
protocol information is going to be encapsulated in the IPv4 header as data
is passed down the TCP/IP stack, the application protocol header still
includes the device's IPv4 address as part of its own information.
o The transmitted packet will include the sender's IPv4 address twice: in the
IPv4 header and in the application header. When NAT makes changes to
the source IPv4 address (along the path of the packet), it will change only
the address in the IPv4 header. NAT will not change IPv4 address
information that is included in the application header.
o At the recipient, the application protocol will rely only on the information in
the application header. Other headers will be removed in the de-
encapsulation process. Therefore, the recipient application protocol will
not be aware of the change NAT has made and it will perform its functions
and create response packets using the information in the application
header.
o This process results in creating responses for unroutable IPv4 addresses
and ultimately prevents calls from being established. Besides signaling
protocols, some security applications, such as digital signatures, fail
because the source IPv4 address changes. Sometimes, you can avoid this
problem by implementing static NAT mappings.
• End-to-end IPv4 traceability is also lost. It becomes much more difficult to trace
packets that undergo numerous packet address changes over multiple NAT hops,
so troubleshooting is challenging. On the other hand, for malicious users, it
becomes more difficult to trace or obtain the original source or destination
addresses.
• Using NAT also creates difficulties for the tunneling protocols, such as IP Security
(IPsec), because NAT modifies the values in the headers. Integrity checks declare
packets invalid if anything changes in them along the path. NAT changes interfere
with the integrity checking mechanisms that IPsec and other tunneling protocols
perform.
• Services that require the initiation of TCP connections from an outside network (or
stateless protocols, such as those using UDP) can be disrupted. Unless the NAT
router makes specific effort to support such protocols, inbound packets cannot
reach their destination. Some protocols can accommodate one instance of NAT
between participating hosts (passive mode FTP, for example) but fail when NAT is
performed at multiple points between communicating systems, for instance both
in the source and in the destination network.
• NAT can degrade network performance. It increases forwarding delays because the
translation of each IPv4 address within the packet headers takes time. For each
packet, the router must determine whether it should undergo translation. If
translation is performed, the router alters the IPv4 header and possibly the TCP or
UDP header. All checksums must be recalculated for packets in order for packets to
pass the integrity checks at the destination. This processing is most time
consuming for the first packet of each defined mapping. The performance
degradation of NAT is particularly disadvantageous for real time applications, such
as VoIP.

23.5 Enabling Internet Connectivity


Static NAT and Port Forwarding
Static NAT is a one-to-one mapping between a local address and global address. It can
apply to both inside and outside addresses; however, you will focus only on inside
addresses.
As shown in the figure, static mappings define a global version of the local addresses. The
mapping includes local to global mapping for both inside and outside addresses. When
only inside address translation is performed, the outside local and outside global addresses
the same. In an outbound packet (a packet leaving an inside network and going to an
outside network), the inside address is present in the source address IPv4 header field,
and the outside address is present in the destination address IPv4 header field.
Static mappings are present from the moment they are manually configured until they are
manually removed. They do not have a timeout period defined. Because the mappings are
always present, the border device can perform translation for inside to outside traffic and
also for the outside to inside traffic, regardless of whether it was first initiated from the
inside. Remember that dynamic NAT does not allow requests initiated from the outside.
This "always present" property of static mappings allows external devices to initiate
connections to internal devices. Static NAT is especially useful when you want to make a
company's resources available to external networks. An example of such a resource is a
company's web server. With static NAT, you ensure that the web server will be presented
to the outside world with the same global address, so it will always be accessible at that
address.
The figure illustrates a router that is performing inside address translation when static
NAT is configured. Outside addresses, local and global, are the same. The router is
translating the source IPv4 address from a local to a global source IPv4 address. PC1 is
initiating communication with the SRV1 server.
In the figure, you can see these steps:
1. The user at PC1 with IPv4 address 192.168.10.10 wants to open a connection to
SRV1 using its public IPv4 address 209.165.201.1.
2. The first packet from 192.168.10.10 that the router receives on its interface in the
inside network causes the router to check its NAT table.
3. For static NAT, the router finds a mapping that specifies that 192.168.10.10 IPv4
address should be replaced with 209.165.200.226. You can see the mapping in the
table under the topology. This mapping was configured previously by the network
administrator. Note that configuration-based entries in the mapping table do not
have values for the outside parameters.
4. The source IPv4 address of the packet is swapped with the inside global IPv4
address. At this point, the new mapping that has the outside values is created.
Since only inside NAT is performed, both local and global outside IPv4 addresses
are the same, destination IPv4 address 209.165.201.1.
5. SRV1 receives the packet and responds to it. Since SRV1 sees the 209.165.200.226
as the source IPv4 address, it addresses its response to that address—the inside
global address of PC1.
6. When the router receives the response packet on its outside interface, it will check
the mapping table for the 209.165.200.226 inside global entry. Since the entry
exists, the router will replace the destination IPv4 address in the header with the
inside local address 192.168.10.10 found in the mapping. The packet is then routed
based on the 192.168.10.10 destination address and forwarded out of the router's
interface in the inside network.
7. PC1 receives a packet at its inside local address 192.168.10.10 and continues the
conversation. The router performs Steps 2 through 5 for each packet.
The example illustrated the inside address translation. In the same manner, the outside
address could be translated, as long as the outside local to outside global mapping is
configured. In the outbound packet, the outside address is the destination IPv4 address.
Port Forwarding
Besides configuring static mapping for IPv4 addresses, it is possible to configure static
mapping that also involves TCP and UDP port numbers. Port numbers can be used to
identify network services, because network services use application protocols, which use
different port numbers. Port forwarding uses this identifying property of port numbers.
Port forwarding specifies a static mapping that translates both inside local IPv4 address
and port number to inside global IPv4 address and port number. As with all static
mappings, port forwarding mapping will always be present at the border device, and when
packets arrive from outside networks, the border device would be able to translate global
address and port to corresponding local address and port.
Port forwarding allows users on the internet to access internal servers by using the WAN
(ISP facing) address of the border device and a selected outside port number. To the
outside, the border device appears to be providing the service. Outside devices are not
aware of the mapping that exists between the border device and the inside server. The
static nature of the mapping ensures that any traffic received at the specified port will be
translated and then forwarded to the internal server. The internal servers are typically
configured with RFC 1918 private IPv4 addresses.
Because the mapping in port forwarding is static and always present, a request from the
internet will always be "forwarded" to the inside server. The only condition for a packet to
be forwarded is that it must be properly addressed—the destination IPv4 address must be
the inside global address in the mapping and the destination port number must match the
chosen and configured port number. The administrator can therefore choose any value for
the global port number. For instance, instead of specifying port 80 for the web service,
the administrator can choose 8080. When an arbitrary value is used for the web service,
the browser must be instructed to use this value; otherwise, the browser would always
attempt to connect to the well-known port number 80. To instruct the browser to use an
unconventional port number, specify the URL in the format URL:port_number, for
instance http://www.example.com:8080.
Note: When a well-known port number is not being used, the client must specify the port
number in the application.
The figure shows an example of port forwarding on router R2. IPv4 address
192.168.10.254 is the inside local IPv4 address of the web server listening on port 80.
Users will access this internal web server using the global IPv4 address 209.165.200.226, a
globally unique public IPv4 address. In this case, it is the address of the outside interface
of R2. The global port is configured as 8080. This port will be the destination port used,
along with the global IPv4 address of 209.165.200.226 to access the internal web server.

23.6 Enabling Internet Connectivity


Dynamic NAT
While static NAT provides a permanent mapping between a single local and a single global
address, dynamic NAT maps multiple local to multiple global addresses. Therefore, you
must define two sets of addresses: the set of local addresses and the set of global
addresses. The sets usually do not have the same size. Since the set of global addresses
usually contains public IPv4 addresses, it is smaller than the set of local addresses.
In Cisco IOS Software terminology, a group of addresses is called a pool of addresses.
Address pools are named and are referenced by their name in commands and command
outputs. IP addresses that belong to a pool are specified using a reference IP address and
a subnet mask or prefix length.
Dynamic NAT takes global addresses from a pool on a first-come, first-served basis. Each
connection initiated from the inside will use one of the addresses from the pool of global
addresses. Once the global pool is exhausted, new connections will not be translated and
the communication with the outside networks will not be possible. Although the global
pool is smaller, it usually suffices, since local devices do not connect to the outside at the
same moment. Nevertheless, you should make sure to provide enough global addresses to
satisfy the need for outbound communications.
The figure illustrates an example of dynamic inside NAT implementation. The router is
translating inside addresses:
1. The users at PC1 and PC2 with IPv4 addresses 192.168.10.10 and 192.168.10.11
want to connect to SRV1 at IPv4 address 209.165.201.1.
2. PC1 initiates the connection first. When the router receives a packet from
192.168.10.10, it will check its NAT-mapping table. Because there are no static
configurations, and the packet is the first one processed, the router finds no
entries for the 192.168.10.10 address.
3. The router then selects an inside global address from the configured pool of
addresses and creates a mapping in the table. This type of entry is called a simple
entry. Based on the configuration, the router in the example selects
209.165.200.226 as the inside global address.
4. The router then swaps 192.168.10.10 source IPv4 address with 209.165.200.226,
adds an entry for the translation into the table, and forwards the packet. The
router also sets a timeout for the newly added translation.
5. When PC2 at 192.168.10.11 initiates connection to SRV1, the router performs
similar lookup in the mapping table, this time for its address. There are no entries
for 192.168.10.11 IPv4 address.
6. The router selects the next available global address from the address pool and
creates a second simple entry to map 192.168.10.11 to 209.165.200.227.
7. The router swaps the 192.168.10.11 with 209.165.200.227, creates a translation
entry in the mapping table, and forwards the packet out of its outside interface.
Both specific mappings (next to numbers 4 and 7 in the figure) have a timeout set.
8. When SRV1 receives the packet from 209.165.200.226, it sends the responses to
that address. It does the same when it receives a packet from 209.165.200.227.
SRV1 is not aware of the translations that happened. Packets from SRV1 to the
inside hosts have the inside global addresses as their destination IPv4 addresses.
9. When the router receives the packet with the inside global IPv4 address
209.165.200.226, the router performs a mapping table lookup. It searches the
inside global addresses looking for the packet's destination address (in this case
209.165.200.226). The router finds an entry and translates the destination address
to the inside local address 192.168.10.10 and forwards the packet out the inside
interface. The router behaves the same for all the packets that it receives on its
outside interface. When it receives a packet destined to IPv4 address
209.165.200.227, after searching the mapping table, the router translates the
address back to the inside local address 192.168.10.11 and forwards the packet to
the inside network.
10. PC1 and PC2 receive the packets and continue the conversation. The router
performs the previous steps for each packet.
23.7 Enabling Internet Connectivity
Port Address Translation
PAT is the most widely used form of NAT. Sometimes referred to as NAT overload, the PAT
translation mechanism is dynamic and it applies to IPv4 addresses and to TCP or UDP port
numbers. As far as the addresses are concerned, PAT maps multiple local addresses to a
single global address or to a pool of global addresses. As far as port numbers are
concerned, PAT maps multiple local port numbers to multiple global port numbers.
Mappings that are created by PAT always specify pairs of values, consisting of an IPv4
address and a port number.
PAT allows multiple devices to share a single or a few inside global addresses. Most home
routers operate in this way. Your ISP assigns one public address to your router, yet several
members of your family can simultaneously surf the internet.
PAT translates local IPv4 addresses to one or more global IPv4 addresses. In either case,
PAT has to ensure that each connection is translated unambiguously. When only a single
global IPv4 address is available, PAT will assign each translated packet the same global
IPv4 address, but different port number. When all available port numbers are exhausted,
the translations will not be possible, and no new connections would be created. The
number of available port numbers determines the number of simultaneous outbound
connections. Since the port numbers are not as scarce as global IPv4 addresses, PAT is
very efficient and can accommodate many outbound connections.
The mechanism of translating port numbers tries to preserve the original local port
number, meaning that it tries to avoid port translation. If more than one connection uses
the same original local port number, PAT will preserve the port number only for the first
connection translated. All other connections will have the port number translated.
Note: The range of available port numbers and device behavior, when there are multiple
global IPv4 addresses, are subject to configuration. However, this discussion is outside
the scope of this course.
As with dynamic NAT, all mappings created by PAT have a timeout. Once they expire, the
mappings are deleted from the mapping table.
When responses to the translated packets are received at the outside interface of the
border device, the destination IPv4 address and destination port numbers are translated
back from global to local values. For this "backward" translation to succeed, both
destination IPv4 address and destination port number of the inbound packet must have
entries in the mapping table. PAT can be implemented for both inside and outside
addresses, but this course focuses only on inside PAT translations.
Incoming packets from the outside network are delivered to the destination device on the
inside network by looking for a match in the NAT-mapping table. This mechanism is called
connection tracking.
The figure illustrates an example of using PAT. In the example, the R2 router is configured
to perform only inside PAT. It initially has no mappings in its mapping table. This example
shows the port preserving feature of PAT. There are two hosts on the inside network that
access servers in the internet.
1. Host 192.168.10.10 is first to initiate communication. It initiates an HTTP
connection to SRV1, using source port number 1555.
2. When packets from 192.168.10.10 reach inside interface of the R2 router, it will
make a lookup in its mapping table. Since the mapping table is empty at the
beginning, it finds no entries for host 192.168.10.10.
3. Because it is configured to perform PAT, the R2 router creates a mapping in the
mapping table. It enters values from the source fields of the packet,
192.168.10.10:1555, as inside local values. Values from the destination fields of
the packet, 209.165.201.1:80, are entered as outside local values. Since the R2
router is configured to translate only inside addresses, it will enter the same
outside local values as outside global ones. It allocates 209.165.200.226 IPv4 public
address, keeps the same port number 1555 for the translation, and enters
these values as inside global values.
4. The R2 router performs translation of source IPv4 address and source port number
and forwards the packet to SRV1.
5. Host 192.168.10.11 now initiates communication with SRV2.
6. R2 receives the packet from 192.168.10.11 on its inside interfaces and performs a
lookup in the mapping table. There is only one entry and its local inside values
(192.168.10.10:1555) do not match the source field values of the packet
(192.168.10.11:1331).
7. R2 creates a new mapping in the mapping table. It enters values from the source
fields of the packet, such as 192.168.10.11:1331, as inside local values. Values from
the destination fields of the packet, such as 209.165.202.129:80, are entered as
outside local IPv4 address and port number. It enters the same IPv4 address and
port number as outside global values. It allocates the same 209.165.200.226 IPv4
public address and preserves port number 1331, the port chosen by the client. It enters
209.165.200.226:1331 as inside global values.
8. R2 performs translation of source IPv4 address and source port number and
forwards the packet to SRV2.
9. Both hosts next initiate communication to other outside servers (host
192.168.10.10 to SRV3 and host 192.168.10.11 to SRV4); both hosts use the same
source port number. Packets from host 192.168.10.10 reach the router R2 inside
interface first. Since there are no mappings for 192.168.10.10:1444 in the mapping
table, the router creates a new one. The allocated global IPv4 address is the same
as in other translations. Since port number 1444 is not yet mapped, the router will
preserve the same port number 1444. The inside global entry will be
209.165.200.226:1444.
10. When packets from host 192.168.10.11 reach the R2 router, the router will
perform the same actions as for packets from 192.168.10.10 host. When it creates
the new mapping, it will find that port number 1444 is already mapped. Therefore,
it chooses a different value, which is 1024 in the example. The mapping for host
192.168.10.11 cannot preserve the original port value.

23.8 Enabling Internet Connectivity


Configuring and Verifying Inside IPv4 NAT
To configure any of the NAT types on a Cisco IOS Software device, perform the following
steps on NAT-enabled devices:

• Specify inside and outside interfaces. You must instruct the border device on
where to expect the inside traffic that needs to be translated (inside interface) and
where to inspect outside traffic (outside interface) that needs to be translated.
Inside/outside interface specification is required regardless of whether you are
configuring inside only NAT or outside only NAT.
• Specify local addresses that need to be translated. NAT might not be performed
for all inside segments and you have to specify exactly which local addresses
require translation.
• Specify global addresses available for translations.
• Specify NAT type using ip nat inside source command. The syntax of the command
is different for different NAT types.
Configuration commands that tell a device which interfaces are inside and which are
outside are common to all NAT types. To specify inside and outside interfaces use the ip
nat inside and ip nat outside interface configuration commands respectively.
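A reconstruction of the example configuration described below (the Router hostname is an assumption):

Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip address 209.165.200.226 255.255.255.224
Router(config-if)# ip nat outside
Router(config-if)# exit
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip address 172.16.1.1 255.255.255.0
Router(config-if)# ip nat inside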

In the example configuration, interface GigabitEthernet 0/1 with public IPv4 address
209.165.200.226/27 is configured as a NAT outside interface. Interface GigabitEthernet
0/0 with private IPv4 address 172.16.1.1/24 is a NAT inside interface.
You can specify more than one inside interface.
The remaining configuration steps differ in NAT types.
Configuring Static Inside IPv4 NAT and Port Forwarding
For static inside NAT, you have to configure a static mapping between exactly one local
and one global IPv4 address. Specification of the local address, global address, and NAT
type, are all done using one command only.
To configure static inside IPv4 NAT, use the ip nat inside source command with the
keyword static. The global configuration mode command has the following syntax: ip nat
inside source static local-ip global-ip.
Packets arriving on the inside interface and matching the defined local address will be
translated to the defined global address, and vice versa.
The keyword inside in the command specifies that only inside address is translated (from
local to global). The keyword static indicates that the mapping that follows is static.
Note: Do not confuse the ip nat inside source static and ip nat source static commands.
The latter does not include the word inside. The ip nat source static command is used
when configuring NAT on a virtual interface. If you wish to configure NAT for physical
interfaces, use ip nat inside source static command.
The ip nat inside source static local-ip global-ip creates an entry in the NAT-mapping
table. To verify which addresses are currently being translated, issue the show ip nat
translations command.
Static mapping entries appear in the translations table even when there is no traffic from
the inside to the outside interface.
The following is an example of creating and verifying a static entry which maps
172.16.1.10 local IPv4 address to 209.165.200.230 global IPv4 address.
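A sketch of this configuration and its verification (the output columns are abbreviated):

Router(config)# ip nat inside source static 172.16.1.10 209.165.200.230
Router(config)# end
Router# show ip nat translations
Pro  Inside global        Inside local      Outside local     Outside global
---  209.165.200.230      172.16.1.10       ---               ---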

In the example output of the show ip nat translations command, there is one mapping
present. When traffic is generated and static NAT is performed, both the outside local and
outside global fields are populated. Empty outside local and outside global fields indicate
that the entry is the result of configuration activity.
To configure port forwarding, you also specify a static inside mapping. However, in port
forwarding you must specify local and global port numbers and indicate the transport
protocol that the port numbers refer to.
To configure inside IPv4 port forwarding, use the ip nat inside source static tcp|udp local-
ip local-port global-ip global-port command.
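A sketch of the port forwarding example described below and its verification:

Router(config)# ip nat inside source static tcp 192.168.10.254 80 209.165.200.226 8080
Router(config)# end
Router# show ip nat translations
Pro  Inside global           Inside local         Outside local    Outside global
tcp  209.165.200.226:8080    192.168.10.254:80    ---              ---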

The sample configuration shows an example of configuring port forwarding. The web
server 192.168.10.254 in the inside network is listening on port 80 for the incoming
connections. Users will access this internal web server using the global IPv4 address
209.165.200.226 as the destination IPv4 address and destination port 8080.
In the example, the port forwarding entry is verified using the show ip nat translations
command. In the output, note that port forwarding mapping has the IPv4:port-number
format.
Configuring Dynamic IPv4 Inside NAT
Dynamic NAT configuration differs from static NAT, but it also has some similarities. Like
static NAT, it requires the configuration to identify each interface as an inside or outside
interface. However, rather than creating a static map between one local and only one
global IPv4 address, you can specify pools of addresses.
To specify a pool of local addresses that need to be translated, you use access control lists
(ACLs). With an ACL, you identify only those local addresses that are to be translated. You
can configure either a named or a numbered ACL.
Note: Remember that there is an implicit deny any statement at the end of each ACL. An
ACL that is too permissive can lead to unpredictable results. Using permit any can result in
NAT consuming too much router resources, which can cause network problems.
To specify a pool of global addresses available for dynamic translations, use the ip nat
pool name start-ip end-ip {netmask netmask | prefix-length prefix-length} command.
The pool of global IPv4 addresses is available to any device on the inside network on a
first-come first-served basis. The NAT pool is referenced in commands by its name.
Outside routers are not aware of NAT translations performed on the inside network. To
reach the inside network, outside routers must have a route to the network to which the
addresses are translated, in other words to the inside global network. The inside global
network contains the range of IPv4 addresses that is specified in the NAT pool.
It remains to specify how NAT should be performed. To configure dynamic inside IPv4
NAT, use the ip nat inside source command followed by the mapping between the ACL-
defined local addresses and the NAT pool defined global addresses. The ACL and NAT pool
are referenced by their names (or number for ACLs). The syntax of the global
configuration command is ip nat inside source list ACL-identifier pool pool-name.
Note: Whatever type of inside NAT you are specifying, the syntax of the ip nat inside
source command always specifies the local addresses first, followed by the specification of
global addresses.
The example configuration uses the numbered ACL 1, which identifies all addresses in the 10.1.1.0/24 subnet; therefore, packets from both PC1 and PC2 will be translated.
The available global addresses are identified in the NAT pool called NAT-POOL. The pool
includes six addresses, from 209.165.200.230 to 209.165.200.235, that belong to the
209.165.200.224/27 subnet as indicated by the subnet mask 255.255.255.224.
The ip nat inside source command creates a mapping between ACL 1 (list 1 in the
command) and NAT-POOL (pool NAT-POOL in the command), which indicates to the
router that dynamic many-to-many NAT is performed.
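A minimal sketch of this dynamic NAT configuration is shown below; the interface names and their inside/outside roles are assumptions, while the ACL, pool name, and address range follow the example.

! Identify the local addresses that are allowed to be translated
access-list 1 permit 10.1.1.0 0.0.0.255
!
! Define the pool of inside global addresses
ip nat pool NAT-POOL 209.165.200.230 209.165.200.235 netmask 255.255.255.224
!
! Bind the ACL-defined local addresses to the pool for dynamic translation
ip nat inside source list 1 pool NAT-POOL
!
interface GigabitEthernet0/0
 ip nat inside
!
interface GigabitEthernet0/1
 ip nat outside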
Finally, the translations are verified with the show ip nat translations command. In the example output, specific translations appear along with configuration-based entries; configuration-based entries have empty "Outside" fields. Note that the first IPv4 address from the NAT pool, 209.165.200.230, was used first to translate the 10.1.1.100 address, when Internet Control Message Protocol (ICMP) traffic was generated. The second address from the pool was used for the 10.1.1.101 address. The traffic that
crossed the router included both ICMP and TCP (Telnet) packets. ICMP packets do not
have port numbers. Instead of port numbers, for ICMP traffic, the value from the ICMP
message identifier field is used.
Note: Dynamic NAT entries time out. These entries have a default timeout value of 86400
seconds (24 hours), after which they are removed from the table if there is no activity for
the duration of the timeout.
Configuring IPv4 Inside PAT
PAT mappings include port numbers along with IPv4 addresses. To specify which local IPv4 addresses and port numbers are to be translated, use ACLs, as in the case of dynamic NAT.
Specification of global IPv4 addresses in PAT depends on whether you are using only one
global IPv4 address or a pool of global IPv4 addresses. When only one global IPv4 address
is used, it is usually the IPv4 address of the outside interface of the border device. To
configure this address as the global address, it is enough to specify the interface in the ip
nat inside source command.
The configuration of the pool of global IPv4 addresses for NAT uses the ip nat pool
command. The syntax of the command is the same as for the dynamic NAT: ip nat pool
name start-ip end-ip {netmask netmask | prefix-length prefix-length}.
When creating port mappings, the device tries to preserve the local port number value. If
the local value cannot be preserved, by default the mapped ports are chosen from the
same range of ports as the local port number.
To specify that PAT is to be performed, you use the ip nat inside source command. The local IPv4 addresses are specified by the list keyword followed by an ACL identifier.
The global IPv4 addresses are specified using one of the following options:

• When there is only one global IPv4 address, such as the address of the device's
outside interface, the interface label is specified in the ip nat inside source list
ACL-identifier interface interface-type-number overload command.
• When there is a pool of global addresses, the name of the NAT pool is specified.
The command syntax is ip nat inside source list ACL-identifier pool pool-name
overload.
The command syntax for PAT adds a keyword overload at the end. This keyword indicates
to the device that PAT is implemented.
In the example configuration, ACL 1 identifies all addresses in the 172.16.1.0/24 subnet as
local addresses. The router's GigabitEthernet0/1 interface with IPv4 address
209.165.200.226 is used for PAT. In the ip nat inside source command, it is specified by its
type and number. To instruct the router to perform PAT, the keyword overload is added
at the end of the command. The router will translate traffic from both PCs. It will try to
preserve the port numbers selected by PCs, if they are available. To the outside networks,
the entire inside network of 172.16.1.0/24 is represented by only one IPv4 address
209.165.200.226.
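A minimal sketch of this PAT configuration is shown below; the inside interface name is an assumption, while the ACL, the outside interface, and the overload keyword follow the example.

access-list 1 permit 172.16.1.0 0.0.0.255
!
interface GigabitEthernet0/0
 ip nat inside
!
interface GigabitEthernet0/1
 ip nat outside
!
! Translate all matching inside hosts to the address of GigabitEthernet0/1,
! reusing that single address with different port numbers (PAT)
ip nat inside source list 1 interface GigabitEthernet0/1 overload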
24.1 Introducing QoS
Introduction
IP was designed to provide best-effort service for delivery of data packets and to run
across virtually any network transmission media and system platform. As user applications
continue to drive network growth and evolution, the demand to support various types of
traffic is also increasing. Network traffic from business-critical and delay-sensitive
applications must be serviced with priority and protected from other types of traffic. To
manage applications such as VoIP, video, e-commerce, and databases, among others, a
network requires quality of service (QoS).
Networks must provide secure, predictable, measurable, and guaranteed services.
Network administrators and architects can achieve better performance from the network
by managing bandwidth provisioning, delay, jitter (delay variation), and packet loss with
QoS mechanisms. As networks increasingly converge to support voice, video, and data
traffic, there is a growing need for QoS.
QoS is a crucial element of any administrative policy that mandates how to handle
application traffic on an enterprise network. Many QoS building blocks or features operate
at different parts of a network to create an end-to-end QoS system. For example, traffic
can be classified and assigned a priority when forwarded by access switches. Then in the
LAN Core for example, different congestion management mechanisms for different types
of traffic can be used. QoS and its implementations in a converged network are complex
and create many challenges for network administrators and architects.

As a networking engineer, you will be asked to participate in designing QoS in network devices to accommodate traffic demands in enterprise environments, which include:
• Identifying common causes of QoS issues on converged networks.
• Describing mechanisms that are used to deploy a QoS.
• Describing models for providing QoS on a network.

24.2 Introducing QoS


Converged Networks
Converged networks carry multiple types of traffic, such as voice, video, and data, which
were traditionally transported on separate and dedicated networks.
Converged networks have the following important traffic characteristics:

• Competition between constant, small-packet voice flows and bursty video and
data flows
• Time-sensitive voice and video flows
• Critical traffic that must get priority

The figure illustrates a converged network in which voice, video, and data traffic use the
same network facilities instead of a dedicated network for each traffic type. Although
there are several advantages to converged networks, merging these different traffic
streams with dramatically differing requirements can lead to a number of quality
problems.
Data traffic is typically not real-time traffic. Data applications may be bursty in that they
create unpredictable traffic patterns, and thus have widely varying packet arrival times.
Many types of application data exist within an organization. For example, some are
relatively noninteractive and therefore not delay-sensitive (such as email). Other
applications involve users entering data and waiting for responses (such as database
applications) and are therefore very delay-sensitive. You can also classify data according to
its importance to the overall corporate business objectives. For example, a company that
provides interactive, live e-learning sessions to its customers would consider that traffic to
be mission-critical. On the other hand, a manufacturing company might consider that
same traffic important, but not critical to its operations.
Voice traffic is real-time traffic and comprises constant and predictable bandwidth and
packet arrival times.
Video traffic comprises several traffic subtypes, including passive streaming video and
real-time interactive video. Video traffic can be in real time, but not always. Video has
varied bandwidth requirements, and it comprises different types of packets with different
delay and tolerance for loss within the same session.
Interactive video, or video conferencing, has the same delay, jitter, and packet loss
requirements as voice traffic. The difference is the bandwidth requirements—voice
packets are small while video-conferencing packet sizes can vary, as can the data rate. A
general guideline for overhead is to provide 20-percent more bandwidth than the data
currently requires. Streaming video has different requirements than interactive video. An
example of the use of streaming video is when an employee views an online video during
an e-learning session. As such, this video stream is not nearly as sensitive to delay or loss
as interactive video is. Requirements for streaming video include a loss of no more than 5
percent and a delay of no more than 4 to 5 seconds. Depending on how important this
traffic is to the organization, it can be given precedence over other traffic.
Voice and some video traffic are not tolerant of delay, jitter, or packet loss, and excessive
amounts of any of these will result in a poor experience for the end users. Data flows are
typically more tolerant of delay, jitter, and packet loss but are very bursty in nature and
will typically use as much bandwidth as possible.
The different traffic flows on a converged network will be in competition for network
resources. Unless some mechanism mediates the overall traffic flow, voice and video
quality will be severely compromised at times of network congestion. The critical, time-
sensitive flows must be given priority in order to preserve the quality of this traffic.
Quality Issues in Converged Networks
Four major problems affect quality on converged networks:

• Bandwidth capacity: Large graphic files, multimedia uses, and increasing use of
voice and video can cause bandwidth capacity problems over data networks.
Multiple traffic flows compete for a limited amount of bandwidth and may require
more bandwidth than is available.
• Delay: Delay is the time that it takes for a packet to reach the receiving endpoint
after being transmitted by the sender. This period of time is called the end-to-end
delay and consists of variable delay components (processing and queueing delay)
and fixed delay components (serialization and propagation delay).
• Jitter: Jitter is the variation in latency or end-to-end delay that is experienced
between when a signal is sent and when it is received. It may also be described as
a disruption in the normal flow of packets as they traverse the network.
• Packet loss: Loss of packets is usually caused by congestion, faulty connectivity, or
faulty network equipment.
Multimedia streams, such as those used in IP telephony or video conferencing, are
sensitive to delivery delays. High delay can cause noticeable echo or talker overlap. Voice
transmissions can be choppy or unintelligible with high packet loss or jitter. Images may
be jerky, or the sound might not be synchronized with the image. Voice and video calls
may disconnect or not connect if signaling packets are not delivered.
Some data applications can also be severely affected by poor QoS. Time-sensitive
applications, such as virtual desktop or interactive data sessions, may appear
unresponsive. Delayed application data could have serious performance implications for
users that depend on timely responses, such as in brokerage houses or call centers.
Managing Quality Issues in Converged Networks
Different techniques are employed to manage quality issues:

• Available bandwidth across a network path is limited by the lowest-bandwidth circuit and the number of traffic flows competing for the bandwidth on the path.
The best way to manage persistent congestion is to increase the link capacity to
accommodate the bandwidth requirements. Circuit upgrades are not always
possible due to the cost or the amount of time that is required to perform an
upgrade. Alternatives to a link upgrade include utilizing a queuing technique to
prioritize critical traffic or enabling a compression technique to reduce the number
of bits that are transmitted for packets on the link.
• Delay can be managed by upgrading the link bandwidth, utilizing a queuing
technique to prioritize critical traffic, or enabling a compression technique to
reduce the number of bits that are transmitted for packets on the link.
• When a media endpoint such as an IP phone or a video gateway receives a stream
of IP packets, it must compensate for the jitter that is encountered on the IP
network. The mechanism that manages this function is a dejitter buffer. The
dejitter buffer must buffer these packets and then play them out in a steady
stream. This process adds to the total delay of the packet that is being delivered as
an audio or video stream but allows for a smooth delivery of the real-time traffic. If
the amount of jitter that is experienced by the packet exceeds the dejitter buffer
limits, the packet is dropped and the quality of the media stream is affected.
• Packet loss typically occurs when routers run out of space in a particular interface
output queue. The term used for these drops is output drop or tail drop. (Packets
are dropped at the tail of the queue.) Packet loss due to tail drop can be managed
by increasing the link bandwidth, using a queuing technique that guarantees
bandwidth and buffer space for applications that are sensitive to packet loss, or by
preventing congestion by shaping or dropping packets before congestion occurs.
24.3 Introducing QoS
QoS Defined
QoS offers intelligent network services that, when correctly applied, help to provide
consistent, predictable performance.

The goal of QoS is a better and more predictable network service with dedicated
bandwidth, controlled jitter and latency, and improved loss characteristics as required by
the business applications. QoS achieves these goals by providing tools for managing
network congestion, shaping network traffic, using expensive wide-area links more
efficiently, and setting traffic policies across the network.
QoS gives priority to some sessions over other sessions. Packets of delay-sensitive sessions
bypass queues of packets belonging to non-delay-sensitive sessions. When queue buffers overflow, packets are dropped from sessions that can recover from the loss or from sessions that can be eliminated with minimal business impact.
To make space for applications that are important and cannot tolerate loss without
affecting the end-user experience, QoS manages other sessions based on QoS policy
decisions that you implement in the network. Managing refers to selectively delaying or
dropping packets when contention arises.
QoS is not a substitute for bandwidth. If the network is congested, packets will be dropped. QoS gives administrators control over how, when, and which traffic is dropped during congestion.
Note: QoS describes technical network performance and you can measure QoS
quantitatively. Measurements are numerical: jitter, latency, bandwidth, and loss. Quality
of Experience (QoE) measures end-user perception of the network performance. QoE is
not a technical metric; it is a subjective metric describing the end user experience. You
deploy QoS features to maximize QoE for the end user. When you have a session between
two users, QoE is what these two users experience, regardless of how the network
between them works. QoS is often meaningless when you implement it on only a segment of your network, because QoE is determined by the worst-performing segment of the network that the traffic passes through on its way.

24.4 Introducing QoS


QoS Policy
A QoS policy is a definition of the QoS levels that are assigned across a network. In a
converged network, having a QoS policy is as important as having a security policy. A
written and public QoS policy allows users to understand and negotiate for QoS in the
network.

There are three basic steps involved in defining QoS policies for a network:
1. Identify traffic and its requirements. Study the network to determine the type of
traffic running on the network and then determine the QoS requirements for the
different types of traffic. The figure shows a network traffic discovery identifying
voice, video, and data traffic.
2. Group the traffic into classes with similar QoS requirements. For example, the
voice and video traffic are put into dedicated classes, and all of the data traffic is
put into a best-effort class.
3. Define QoS policies that will meet the QoS requirements for each traffic class. In
the example in the figure, the voice traffic is given top priority and always
transmitted first. The video traffic is transmitted after voice but before the best-
effort traffic that is only transmitted when no other traffic is present.
Identify Network Traffic and Requirements
Before deploying a QoS policy, network traffic must be identified.

Follow these steps to identify network traffic:


1. Network Audit
2. Business Audit
3. Service level review
The first step in creating a QoS policy is to identify network traffic, by determining the
traffic flows on the network, understanding the business requirements that are associated
with the types of traffic, and understanding the service-level requirements of the traffic.
Identifying network traffic can be accomplished by deploying classification tools on the
network such as Network-Based Application Recognition (NBAR), NetFlow, or packet
sniffers, and by conducting interviews with the different departments of the enterprise to
determine which applications they utilize to perform their job functions. Enterprise
networks typically have a prohibitively large number of applications running on them, so it
is important to utilize the department interviews to limit the scope of the network
discovery.
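As an illustration of one such classification tool, the following sketch enables NBAR protocol discovery on an interface (the interface name is an assumption). The collected statistics can then be reviewed to see which applications are actually present on the network.

interface GigabitEthernet0/1
 ip nbar protocol-discovery
!
! After traffic has been observed for a while, review the results with:
! show ip nbar protocol-discovery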
Once the network traffic has been identified, a business audit should be completed to
determine how the application requirements for each business unit map into the overall
business model and goals. It is important to have executive sponsorship for this process as
QoS inherently means that some traffic, and users, will receive priority over others.
Finally, define the service levels that are required by different traffic classes in terms of
delay and jitter requirements, packet loss tolerance, bandwidth that is required, and time
sensitivity. This will be determined by understanding the traffic profile and the business
use case of the application. For example, database access by end users might require low
delay for a good user experience, while database backups might be able to occur during
low network use without affecting business requirements.
Voice traffic typically has a smooth, relatively low bandwidth profile with strict latency,
delay, and packet loss requirements. Voice traffic has extremely stringent QoS
requirements. Voice traffic usually generates a smooth demand on bandwidth that will
cause minimal network impact with proper capacity planning.
Video-conferencing traffic typically has a bursty, relatively high bandwidth profile with
strict latency, delay, and packet loss requirements. Video-conferencing applications have
stringent QoS requirements. Video-conferencing traffic is bursty and greedy (requires high
bandwidth) in nature and, as a result, can impact other traffic.
Because of the wide variety of data applications, the profile of data traffic varies greatly.
There are literally hundreds of thousands of data networking applications, and the
associated QoS requirements vary greatly. Different applications may make very different
demands on the network, and even different versions of the same application may have
varying network traffic characteristics. Transport protocol, delay and packet loss
sensitivity, bandwidth requirement, and traffic profile will vary greatly depending on the
implementation of the data application. Data traffic differs from voice and video traffic in
that it typically has less stringent delay and packet loss requirements. Because data traffic
can normally not tolerate drops, the retransmit capabilities of TCP become important and,
as a result, many data applications use TCP.
Group Traffic into QoS Classes
It is recommended that data traffic be classified into no more than four groups as
described in the table.

After the majority of network traffic (which, besides the different classes of data traffic, includes voice and video) has been identified and measured, use the business and service-level requirements to define traffic classes.
Due to its stringent QoS requirements, voice traffic will almost always exist in a dedicated class. Cisco has developed specific QoS mechanisms that ensure voice receives priority treatment over all other traffic.
After you define the applications with the most critical requirements, you can define the
remaining traffic classes using the business requirements.
An enterprise might define traffic classes as follows:

• Voice: Absolute priority for VoIP traffic


• Mission-critical: Small set of locally defined applications that are critical to the
business
• Transactional and interactive: Database access, transaction services, interactive
traffic, and preferred data services
• Best-effort: Internet access and email
• Scavenger (less than best-effort): Nonbusiness applications such as peer-to-peer
file sharing, streaming audio or video sites, or gaming sites
Define Policies for Traffic Classes
Once traffic classes have been defined, QoS policies must be defined for each class.
The following parameters must be defined for each traffic class:

• Minimum or maximum bandwidth


• Priority
• Congestion management technique
Note: Priority values typically range from 0 for low priority traffic to 7 for high priority
traffic. These values can be carried in Layer 2 802.1Q frame headers as a class of service
(CoS) value or Layer 3 IP packets as IP precedence or as part of Differentiated Services
Code Point (DSCP) value.
Defining the QoS policy for each traffic class requires setting the minimum and maximum
bandwidth limits, assigning priority to the class, and using QoS technologies to manage
congestion.
An enterprise might determine QoS policies as follows for a four-class policy:
1. Voice: Minimum bandwidth is 1 Mbps. Mark as priority level 5 and use LLQ to
always give voice priority.
2. Mission-critical and transactional: Minimum bandwidth is 1 Mbps. Mark as
priority level 4 and use CBWFQ to prioritize traffic flow over best-effort and
scavenger queues.
3. Best-effort: Maximum bandwidth is 500 kbps. Mark as priority level 2 and use
CBWFQ to prioritize best-effort traffic that is below mission-critical and voice.
4. Scavenger: Maximum bandwidth is 100 kbps. Mark as priority level 0 and use
CBWFQ to prioritize scavenger traffic and WRED to drop these packets whenever
the network has a propensity for congestion.
24.5 Introducing QoS
QoS Mechanisms
QoS mechanisms refer to the set of tools and techniques to manage network resources
and are considered the key enabling technology for network convergence. The objective
of QoS mechanisms is to make voice, video and data convergence appear transparent to
end users. QoS mechanisms allow different types of traffic to contend inequitably for
network resources. Voice, video, and critical data applications may be granted priority or
preferential services from network devices so that the quality of these strategic
applications does not degrade to the point of being unusable. Therefore, QoS is a critical,
intrinsic element for successful network convergence.
Several important mechanisms are used to implement a QoS policy in an IP network:

• Classification
• Marking
• Policing and shaping
• Congestion management
• Congestion avoidance
• Link efficiency
The following mechanisms are used to implement QoS in a network:

• Classification: Packet identification and classification determines which treatment traffic should receive according to behavior and business policies. In a QoS-enabled network, all traffic is typically classified at the input interface of a QoS-aware device at the access layer and network edge.
• Marking: Packets are marked based upon classification, metering, or both so that
other network devices have a mechanism of easily identifying the required
treatment. Marking is typically performed as close to the network edge as possible.
• Congestion management: Each interface must have a queuing mechanism to
prioritize the transmission of packets based on the packet marking. Congestion
management is normally implemented on all output interfaces in a QoS-enabled
network.
• Congestion avoidance: Specific packets are dropped early, based on marking, in
order to avoid congestion on the network. Congestion avoidance mechanisms are
typically implemented on output interfaces wherever a high-speed link or set of
links feeds into a lower-speed link (such as a LAN feeding into a slower WAN link).
• Policing and shaping: Traffic conditioning mechanisms police traffic by dropping
misbehaving traffic to maintain network integrity or shape traffic to control bursts.
These mechanisms are used to enforce a rate limit that is based on metering, with
excess traffic being dropped, marked, or delayed. Policing mechanisms can be used
at either input or output interfaces, while shaping mechanisms are only used on
output interfaces.
• Link efficiency: Link efficiency mechanisms improve bandwidth efficiency or the
serialization delay impact of low-speed links through compression and link
fragmentation and interleaving. These mechanisms are normally implemented on
low-speed WAN links.

Cisco network devices can provide a complete toolset of QoS features and solutions for
addressing the diverse needs of voice, video and multiple classes of data applications. QoS
mechanisms allow complex network control and predictable service for a variety of
networked applications and traffic types. They can effectively control bandwidth, delay,
jitter, and packet loss. By ensuring the desired results, the QoS mechanisms lead to
efficient, predictable services for business-critical applications.
Classification and Marking
In any network in which networked applications require differentiated levels of service,
traffic must be sorted into different classes upon which QoS is applied. Classification and
marking are two critical functions of any successful QoS implementation. Classification
allows network devices to identify traffic as belonging to a specific class with specific QoS
requirements, as determined by an administrative QoS policy. After network traffic is
sorted, individual packets are marked (also called colored) so that other network devices
can apply QoS features uniformly to those packets in compliance with the defined QoS
policy.
A classifier is a tool that inspects packets within a flow to identify the type of traffic that
the packet is carrying. Traffic is then marked so that a policy enforcement mechanism will
implement the policy for that type of traffic.
Classification is the identification and splitting of traffic into different classes.

• It is the most fundamental QoS building block.


• Traffic can be classified by various means.
• Without classification, all packets are treated the same.
Commonly used traffic descriptors include CoS at Layer 2, incoming interface, IP precedence, Differentiated Services Code Point (DSCP) at Layer 3, source or destination address, and application. After the packet has been classified, the packet is then available
for QoS handling on the network.
Using packet classification, you can partition network traffic into multiple priority levels or
classes of service. When traffic descriptors are used to classify traffic, the source agrees to
adhere to the contracted terms and the network promises a QoS. Different QoS
mechanisms, such as traffic policing, traffic shaping, and queuing techniques, use the
classification of the packet to ensure adherence to this agreement.
Classification should take place at the network edge, typically in the wiring closet, in IP
phones or at network endpoints. It is recommended that classification occur as close to
the source of the traffic as possible.
The concept of trust is key for deploying QoS. When an end device (such as a workstation
or an IP phone) marks a packet with CoS or DSCP, a switch or router has the option of
accepting or not accepting values from the end device. If the switch or router chooses to
accept the values, the switch or router trusts the end device. If the switch or router trusts
the end device, it does not need to do any reclassification of packets coming from this
interface. If the switch or router does not trust the interface, it must perform a
reclassification to determine the appropriate QoS value for the packets coming from this
interface. Switches and routers are generally set to not trust end devices and must
specifically be configured to trust packets coming from an interface.
Marking is related to classification and allows network devices to classify a packet or
frame, based on a specific traffic descriptor.
Marking, also known as coloring, marks each packet as a member of a network class so
that the packet class can be quickly recognized throughout the rest of the network.
Marking a packet or frame with its classification allows network devices to easily
distinguish the marked packet or frame as belonging to a specific class. After the packets
or frames are identified as belonging to a specific class, other QoS mechanisms can use
these markings to uniformly apply QoS policies.
CoS, Type of Service (ToS), DSCP, Class Selector, and Traffic Identifier (TID) are different
terms to describe designated fields in a frame or packet header. How devices treat
packets in your network depends on the field values.

• CoS is usually used with Ethernet 802.1q frames and contains 3 bits.
• ToS is generally used to indicate the Layer 3 IPv4 packet field and comprises 8 bits,
3 of which are designated as the IP precedence field. IPv6 changes the terminology
for the same field in the packet header to "Traffic Class."
• DSCP is a set of 6-bit values that can describe the meaning of the Layer 3 IPv4 ToS
field. While IP precedence is the old way to mark ToS, DSCP is the new way. The
transition from IP precedence to DSCP was made because IP precedence only
offers 3 bits, or eight different values, to describe different classes of traffic. DSCP
is backward-compatible with IP precedence.
• Class Selector is a term that is used to indicate a 3-bit subset of DSCP values. The
class selector designates the same 3 bits of the field as IP precedence.
• TID is a term that is used to describe a 4-bit field in the QoS control field of
wireless frames (802.11 MAC frame). TID is used for wireless connections, and CoS
is used for wired Ethernet connections.
Ultimately, there are various Layer 2 and Layer 3 mechanisms that are used in the network
for marking traffic.
Layer 3 packet marking with IP precedence and DSCP is the most widely deployed marking
option because Layer 3 packet markings have end-to-end significance. Layer 3 markings
can also be easily translated to and from Layer 2 markings.
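The following Modular QoS CLI (MQC) sketch shows classification and marking at the network edge. The class, policy, and interface names and the UDP port range used to identify voice traffic are illustrative assumptions rather than values from this course example.

! Identify voice media traffic by the conventional RTP UDP port range (illustrative)
access-list 100 permit udp any any range 16384 32767
!
class-map match-all VOICE-TRAFFIC
 match access-group 100
!
policy-map MARK-EDGE
 class VOICE-TRAFFIC
  ! Mark voice packets with DSCP EF
  set dscp ef
 class class-default
  ! All remaining traffic keeps the best-effort marking
  set dscp default
!
interface GigabitEthernet0/1
 ! Classify and mark on ingress, as close to the source as possible
 service-policy input MARK-EDGE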
DSCP Encoding
DSCP is encoded in the header of both IPv4 and IPv6 packets.
DiffServ uses the DiffServ (DS) field in the IP header to mark packets according to their
classification. The DS field occupies the eight-bit ToS field in the IPv4 header or the Traffic
Class field in the IPv6 header.
The following three IETF standards describe the purpose of the eight bits of the DS field:
1. RFC 791 includes specification of the ToS field, where the high-order three bits are
used for IP precedence. The other bits are used for delay, throughput, reliability,
and cost.
2. RFC 1812 modifies the meaning of the ToS field by removing meaning from the five
low-order bits (which should all be “0”). This gained widespread use and became
known as the original IP precedence.
3. RFC 2474 replaces the ToS field with the DS field, where the six high-order bits are
used for the DSCP. The remaining two bits are used for explicit congestion
notification. RFC 3260 (New Terminology and Clarifications for Diffserv) updates
RFC 2474 and provides terminology clarifications.
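As a brief worked example of this encoding: the Expedited Forwarding (EF) code point is the 6-bit value 101110, which is decimal 46. Its three high-order bits, 101, equal 5, so a device that only understands IP precedence reads the same packet as precedence 5. This overlap in the high-order bits is what makes DSCP backward compatible with IP precedence.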
Policing and Shaping
Within a network, different forms of connectivity can have significantly different costs for
an organization. Because WAN bandwidth is relatively expensive, many organizations
would like to limit the amount of traffic that specific applications can send. This is
especially true when enterprise networks use internet connections for connectivity to
remote sites and the extranet. Downloading nonbusiness-critical images, music, and
movie files can greatly reduce the amount of bandwidth that is available for other
mission-critical applications. Traffic policing and traffic shaping are two QoS techniques
that can limit the amount of bandwidth that a specific application, user, or class of traffic
can use on a link.
Policers and shapers are both rate-limiters, but they differ in how they treat excess traffic;
policers drop it and shapers delay it.
Policers and shapers are tools that identify and respond to traffic violations. They usually
identify traffic violations in a similar manner, but they differ in their response:

• Policers perform checks for traffic violations against a configured rate. The action
that they take in response is either dropping or re-marking the excess traffic.
Policers do not delay traffic; they only check traffic and take action if needed.
• Shapers are traffic-smoothing tools that work in cooperation with buffering
mechanisms. A shaper does not drop traffic, but it smooths it out so it never
exceeds the configured rate. Shapers are usually used to meet service level
agreements (SLAs). Whenever the traffic spikes above the contracted rate, the
excess traffic is buffered and thus delayed until the offered traffic goes below the
contracted rate.
You can use traffic policing to control the maximum rate of traffic that is sent or received
on an interface. Traffic policing is often configured on interfaces at the edge of a network
to limit traffic into or out of the network. You can use traffic shaping to control the traffic
going out an interface in order to match its flow to the speed of the remote target
interface and to ensure that the traffic conforms to policies contracted for it.
Policer characteristics

• They are ideally placed as ingress tools (drop it as soon as possible so you do not
waste resources).
• They can be placed at egress to control the amount of traffic per class.
• When traffic is exceeded, policers can either drop traffic or re-mark it.
• A significant number of TCP re-sends can occur when traffic is dropped.
• They do not introduce jitter or delay.
Shaper characteristics

• They are usually deployed between the enterprise network and the service provider to make sure that enterprise traffic stays under the contracted rate.
• There are fewer TCP re-sends than with policers.
• Shapers introduce delay and jitter.
Policers make instantaneous decisions and are thus optimally deployed as ingress tools.
The logic is that if you are going to drop the packet, you might as well drop it before
spending valuable bandwidth and CPU cycles on it. However, policers can also be
deployed at egress to control the bandwidth that a particular class of traffic uses. Such
decisions sometimes cannot be made until the packet reaches the egress interface.
When traffic exceeds the allocated rate, the policer can take one of two actions. It can
either drop traffic or re-mark it to another class of service. The new class usually has a
higher drop probability, which means packets in this new class will be discarded earlier
than packets in classes with higher priority.
Shapers are commonly deployed on enterprise-to-service provider links on the enterprise
egress side. Shapers ensure that traffic going to the service provider does not exceed the
contracted rate. If the traffic exceeds the contracted rate, it would get policed by the
service provider and likely dropped.
While policers can cause a significant number of TCP re-sends when traffic is dropped,
shaping involves fewer TCP re-sends. Policing does not cause delay or jitter in a traffic
stream, but shaping does.
Traffic-policing mechanisms such as class-based policing also have marking capabilities in
addition to rate-limiting capabilities. Instead of dropping the excess traffic, traffic policing
can alternatively mark and then send the excess traffic. Excess traffic can be re-marked
with a lower priority before the excess traffic is sent out. Traffic shapers, on the other
hand, do not re-mark traffic; these only delay excess traffic bursts to conform to a
specified rate.
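A minimal MQC sketch showing both tools side by side follows; the class, policy, and interface names, the CS1 marking used to identify scavenger-type traffic, and the rates are illustrative assumptions.

class-map match-all SCAVENGER
 match dscp cs1
!
policy-map WAN-EDGE-OUT
 class SCAVENGER
  ! Policer: traffic above 100 kbps is dropped immediately
  police 100000 conform-action transmit exceed-action drop
 class class-default
  ! Shaper: buffer and delay bursts so the aggregate stays within an
  ! assumed 10-Mbps contracted rate toward the service provider
  shape average 10000000
!
interface GigabitEthernet0/1
 service-policy output WAN-EDGE-OUT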
Note: Regulating real-time traffic such as voice and video with policing and shaping is
generally counterproductive. You should use Call Admission Control (CAC) strategies to
prevent real-time traffic from exceeding the capacity of the network. Policing and shaping
tools are best employed to regulate TCP-based data traffic.
Tools for Managing Congestion
Congestion occurs any time an interface is presented with more traffic than it is able to
transmit. Aggressive traffic can fill interface queues and starve more fragile flows such as
voice, video, and interactive traffic. The results can be devastating for delay-sensitive
traffic types, making it difficult to meet the service-level requirements that these
applications require. There are many congestion management techniques available on
Cisco platforms that can provide effective means to manage software queues and to
allocate the required bandwidth to specific applications when congestion exists.
Whenever a packet arrives at an exit interface faster than it can exit, the potential for
congestion exists. If there is no congestion, packets are sent when they arrive at the exit
interface. If congestion occurs, congestion management tools are activated.

Congestion management tools include the following:

• Scheduling is a process of deciding which packet should be sent out next. Scheduling occurs regardless of whether there is congestion on the link; if there is no congestion, packets are sent as they arrive at the interface.
• Queuing (or buffering) is the logic of ordering packets in output buffers. It is only
activated when congestion occurs. When queues fill up, packets can be reordered
so that the higher-priority packets can be sent out of the exit interface sooner than
the lower-priority packets.
Traffic scheduling is the methodical output of packets at a desired frequency to
accomplish a consistent flow of traffic. You can apply traffic scheduling to different traffic
classes to weight the traffic by priority. Different scheduling mechanisms exist. The
following are three basic examples:

• Strict priority: The queues with lower priority are only served when the higher-
priority queues are empty. There is a risk with this kind of scheduler that the
lower-priority traffic will never be processed. This situation is commonly referred
to as traffic starvation.
• Round-robin: Packets in queues are served in a set sequence. There is no
starvation with this scheduler, but delays can badly affect the real-time traffic.
• Weighted fair: Queues are weighted, so that some are served more frequently
than others. This method thus solves starvation and also gives priority to real-time
traffic. One drawback is that this method does not provide bandwidth guarantees.
The resulting bandwidth per flow varies based on the number of flows present and
the weights of each of the other flows.
The scheduling tools that you use for QoS deployments therefore offer a combination of
these algorithms and various ways to mitigate their downsides. This combination allows
you to best tune your network for the actual traffic flows that are present.
Queuing algorithms are one of the primary ways to manage congestion in a network.
Network devices handle an overflow of arriving traffic by using a queuing algorithm to sort
traffic and determine a method of prioritizing the traffic onto an output link. Each queuing
algorithm was designed to solve a specific network traffic problem and has a particular
effect on network performance.
There are many different queuing mechanisms. Older methods are insufficient for modern
rich-media networks. However, you need to understand these older methods to
comprehend the newer methods:

• First-In, First-Out (FIFO) is a single queue with packets that are sent in the exact
order that they arrived.
• Priority Queuing (PQ) is a set of four queues that are served in strict-priority order.
By enforcing strict priority, the lower-priority queues are served only when the
higher-priority queues are empty. This method can starve traffic in the lower-
priority queues.
• Custom Queueing (CQ) is a set of 16 queues with a round-robin scheduler. To
prevent traffic starvation, it provides traffic guarantees. The drawback of this
method is that it does not provide strict priority for real-time traffic.
• Weighted Fair Queuing (WFQ) is an algorithm that divides the interface bandwidth
by the number of flows, thus ensuring proper distribution of the bandwidth for all
applications. This method provides a good service for the real-time traffic, but
there are no guarantees for a particular flow.
Here are two examples of newer queuing mechanisms that are recommended for rich-
media networks:

• CBWFQ is a combination of a bandwidth guarantee with dynamic fairness for other flows. It does not provide a latency guarantee and is only suitable for data traffic management.
• LLQ is a method that is essentially CBWFQ with strict priority. This method is
suitable for mixes of data and real-time traffic. LLQ provides both latency and
bandwidth guarantees.

Note: The figure shows the LLQ queuing mechanism, which is suitable for networks with
real-time traffic. If you remove the low-latency queue (at the top), what you are left with
is CBWFQ, which is only suitable for nonreal-time data traffic networks.
With CBWFQ, you define the traffic classes based on match criteria, including protocols,
Access Control Lists (ACLs), and input interfaces. Packets satisfying the match criteria for a
class constitute the traffic for that class. A queue is reserved for each class, and traffic
belonging to a class is directed to that class queue.
After a class has been defined according to its match criteria, you can assign
characteristics to it. To characterize a class, you assign it the minimum bandwidth that will be delivered to it during congestion.
To characterize a class, you also specify the queue limit for that class, which is the
maximum number of packets allowed to accumulate in the class queue. Packets belonging
to a class are subject to the bandwidth and queue limits that characterize the class. After a
queue has reached its configured queue limit, enqueuing of additional packets to the class
causes tail drop or random packet drop to take effect, depending on how the class policy
is configured.
For CBWFQ, the weight for a packet belonging to a specific class is derived from the
bandwidth that you assigned to the class when you configured it. Therefore, the
bandwidth assigned to the packets of a class determines the order in which packets are
sent. All packets are serviced fairly based on weight; no class of packets may be granted
strict priority. This scheme poses problems for voice traffic, which is largely intolerant of
delay, especially jitter.
The LLQ brings strict priority queuing to CBWFQ. Strict priority queuing allows delay-
sensitive data such as voice to be dequeued and sent first (before packets in other queues
are dequeued), giving delay-sensitive data preferential treatment over other traffic.
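A minimal MQC sketch of an LLQ policy along these lines is shown below; the DSCP values used for matching, the rates, and the names are illustrative assumptions.

class-map match-all VOICE
 match dscp ef
class-map match-all CRITICAL-DATA
 match dscp af31
!
policy-map WAN-QUEUING
 class VOICE
  ! Strict-priority (low-latency) queue, limited to 1 Mbps during congestion
  priority 1000
 class CRITICAL-DATA
  ! CBWFQ minimum bandwidth guarantee of 2 Mbps during congestion
  bandwidth 2000
 class class-default
  ! Remaining traffic shares the leftover bandwidth fairly
  fair-queue
!
interface GigabitEthernet0/1
 service-policy output WAN-QUEUING

Removing the priority class from this policy leaves a plain CBWFQ policy, which matches the note above about the relationship between the two mechanisms.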
Tools for Congestion Avoidance
Congestion is a normal occurrence in networks. Whether congestion occurs as a result of a
lack of buffer space, network aggregation points, or a low-speed wide-area link, many
congestion management techniques exist to ensure that specific applications and traffic
classes are given their share of available bandwidth when congestion occurs. When
congestion occurs, some traffic is delayed or even dropped at the expense of other traffic.
When drops occur, different problems may arise that can exacerbate the congestion, such
as retransmissions and TCP global synchronization in TCP/IP networks. Network
administrators can use congestion avoidance mechanisms to reduce the negative effects
of congestion by penalizing the most aggressive traffic streams as software queues begin
to fill.
TCP has built-in flow control mechanisms that operate by increasing the transmission
rates of traffic flows until packet loss occurs. When packet loss occurs, TCP drastically
slows down the transmission rate and then again begins to increase the transmission rate.
Because of TCP behavior, tail drop of traffic can result in suboptimal bandwidth utilization.
TCP global synchronization is a phenomenon that can happen to TCP flows during periods
of congestion because each sender will reduce the transmission rate at the same time
when packet loss occurs.
Congestion avoidance techniques are advanced packet-discard techniques that monitor
network traffic loads in an effort to anticipate and avoid congestion at common network
bottleneck points.
Queues are finite on any interface. Devices can either wait for queues to fill up and then
start dropping packets, or drop packets before the queues fill up. Dropping packets as
they arrive is called tail drop. Selective dropping of packets while queues are filling up is
called congestion avoidance. Queuing algorithms manage the front of the queue, and
congestion avoidance mechanisms manage the back of the queue.
Randomly dropping packets, instead of dropping them all at once as is done with tail drop, avoids global synchronization of TCP streams. One such mechanism that randomly
drops packets is random early detection (RED). RED monitors the buffer depth and
performs early discards (drops) on random packets when the minimum defined queue
threshold is exceeded.
Cisco IOS Software does not support pure RED, but does support WRED. The principle is
the same as with RED, except that the traffic weights skew the randomness of the packet
drop. In other words, traffic that is more important will be less likely to be dropped than
less important traffic.
The idea behind using WRED is both to maintain the queue length at a level somewhere
between the minimum and maximum thresholds and to implement different drop policies
for different classes of traffic. WRED can selectively discard lower-priority traffic when the
interface becomes congested, and it can provide differentiated performance
characteristics for different classes of service.
The figure shows how WRED is implemented, as well as the parameters that WRED uses to
influence packet-drop decisions.
WRED Building Blocks

The router constantly updates the WRED algorithm with the calculated average queue
length, which is based on the recent history of queue lengths.
When a packet arrives at the output queue, the QoS marking value is used to select the
correct WRED profile for the packet. The packet is then passed to WRED for processing.
Based on the selected traffic profile and the average queue length, WRED calculates the
probability for dropping the current packet (Probability Denominator). If the average
queue length is greater than the minimum threshold but less than the maximum
threshold, WRED will either queue the packet or perform a random drop. If the average
queue length is less than the minimum threshold, the packet is passed to the output
queue.
If the queue is already full, the packet is tail-dropped. Otherwise, the packet will
eventually be transmitted out onto the interface.
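A minimal MQC sketch that enables DSCP-based WRED on a class queue follows, extending the kind of queuing policy shown earlier; the bandwidth, thresholds, and probability denominator are illustrative assumptions.

policy-map WAN-QUEUING
 class class-default
  ! Reserve bandwidth for the queue and drop selectively before it fills
  bandwidth 5000
  random-detect dscp-based
  ! Optional per-DSCP profile: minimum threshold, maximum threshold,
  ! and mark-probability denominator (DSCP 18 corresponds to AF21)
  random-detect dscp 18 24 40 10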
Link Efficiency Mechanisms
Increasing the bandwidth of WAN links can be expensive. An alternative is to use QoS
techniques to improve the efficiency of low bandwidth links, which, in this context,
typically refer to links with speeds less than or equal to 768 kbps. Header compression and
payload compression mechanisms reduce the sizes of packets, reducing delay and
increasing available bandwidth on a link. Other QoS link efficiency techniques, such as Link
Fragmentation and Interleaving (LFI), allow traffic types, such as voice and interactive
traffic, to be sent either ahead or interleaved with larger, more aggressive flows. These
techniques decrease latency and assist in meeting the service-level requirements of delay-
sensitive traffic.
While many QoS mechanisms exist for optimizing throughput and reducing delay in
network traffic, QoS mechanisms do not create bandwidth. QoS mechanisms optimize the
use of existing resources, and they enable the differentiation of traffic according to a
policy. Link efficiency QoS mechanisms such as payload compression, header
compression, and LFI are deployed on WAN links to optimize the use of WAN links.
Payload compression increases the amount of data that can be sent through a
transmission resource. Payload compression is primarily performed on Layer 2 frames and
therefore compresses the entire Layer 3 packet. The Layer 2 payload compression
methods include Stacker, Predictor, and Microsoft Point-to-Point Compression (MPPC).
Compression methods are based on eliminating redundancy. The protocol header is an
item of repeated data. The protocol header information in each packet in the same flow
does not change much over the lifetime of that flow. Using header compression
mechanisms, most header information can be sent only at the beginning of the session,
stored in a dictionary, and then referenced in later packets by a short dictionary index.
Cisco IOS header compression methods include TCP header compression, Real-Time
Transport Protocol (RTP) header compression, class-based TCP header compression, and
class-based RTP header compression.
It is important to note that Layer 2 payload compression and header compression are
performed on a link-by-link basis. These compression techniques cannot be performed
across multiple routers because routers need full Layer 3 header information to be able to
route packets to the next hop.
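A minimal sketch of enabling header compression on a low-speed serial link follows; the interface name is an assumption, and because compression operates per link, the router at the other end of the link must be configured the same way.

interface Serial0/0/0
 ! Compress RTP (voice) and TCP headers on this link
 ip rtp header-compression
 ip tcp header-compression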
LFI is a Layer 2 technique in which large frames are broken into smaller, equally sized
fragments and then transmitted over the link in an interleaved fashion with more latency-
sensitive traffic flows (like Voice over IP). Using LFI, smaller frames are prioritized, and a
mixture of fragments is sent over the link. LFI reduces the queuing delay of small frames
because the frames are sent almost immediately. Link fragmentation, therefore, reduces
delay and jitter by expediting the transfer of smaller frames through the hardware
transmit queue.

24.6 Introducing QoS


QoS Models
Three different models exist for addressing QoS on a network. The best-effort model was
designed for best-effort, no-guarantee delivery of packets and is still the predominant
model in the internet today. The Integrated Services (IntServ) model was introduced to
supplement the best-effort delivery by setting aside some bandwidth for applications that
require bandwidth and delay guarantees. IntServ expects applications to signal their
requirements to the network. The Differentiated Services (DiffServ) model was added to
provide greater scalability in providing QoS to IP packets.
In a best-effort model, QoS is not applied to traffic, and packets are serviced in the order
they are received with no preferential treatment. If it is not important when or how
packets arrive, or if there is no need to differentiate between traffic flows, the best-effort
model is appropriate.
The IntServ model provides guaranteed QoS to IP packets. Applications signal to the
network that they will require special QoS for a period of time and that bandwidth is
reserved across the network. With IntServ, packet delivery is guaranteed; however, the
use of this model can limit the scalability of the network.
The DiffServ model provides scalability and flexibility in implementing QoS in a network.
Network devices recognize traffic classes and provide different levels of QoS to different
traffic classes.
Differentiated Services Model
DiffServ is a multiple-service model for implementing QoS in the network. With DiffServ,
the network tries to deliver a particular kind of service that is based on the QoS that is
specified by each packet. This specification can occur in different ways, such as using DSCP
or source and destination addresses in IP packets. The network uses the QoS specification
of each packet to classify, shape, and police traffic and to perform intelligent queuing.
DiffServ was designed to overcome the limitations of both the best-effort and IntServ
models. DiffServ can provide an "almost guaranteed" QoS while still being cost-effective
and scalable.
The DiffServ model has the following characteristics:

• It is similar to a package delivery service.


• The network traffic is identified by class.
• The network QoS policy enforces differentiated treatment of traffic classes.
• You choose the level of service for each traffic class.
DiffServ has these major benefits:

• It is highly scalable.
• It provides many different levels of quality.
DiffServ also has these drawbacks:

• No absolute guarantee of service quality can be made.


• It requires a set of complex mechanisms to work in concert throughout the
network.
With the DiffServ model, QoS mechanisms are used without prior signaling, and QoS
characteristics (for example, bandwidth and delay) are managed on a hop-by-hop basis
with policies that are established independently at each device in the network. This
approach is not considered an end-to-end QoS strategy because end-to-end guarantees
cannot be enforced. However, DiffServ is a more scalable approach to implementing QoS
because hundreds or potentially thousands of applications can be mapped into a small set
of classes upon which similar sets of QoS behaviors are implemented. Although QoS
mechanisms in this approach are enforced and applied on a hop-by-hop basis, uniformly
applying global meaning to each traffic class provides both flexibility and scalability.
With DiffServ, network traffic is divided into classes that are based on business
requirements. Each of the classes can then be assigned a different level of service. As the
packets traverse a network, each of the network devices identifies the packet class and
services the packet according to this class. You can choose many levels of service with
DiffServ. For example, voice traffic from IP phones is usually given preferential treatment
over all other application traffic. Email is generally given best-effort service. Nonbusiness,
or scavenger, traffic can either be given very poor service or blocked entirely.
DiffServ works like a package delivery service. You request (and pay for) a level of service
when you send your package. Throughout the package network, the level of service is
recognized and your package is given either preferential or normal service, depending on
what you requested.
DiffServ Terminology
Key DiffServ terminology is used to describe the DiffServ mechanisms.

• Behavior Aggregate (BA)


• DSCP
• Per-Hop Behavior (PHB)

To understand the DiffServ model, you must understand DiffServ terminology:

• BA: A collection of packets with the same DSCP value crossing a link in a particular
direction. Packets from multiple applications and sources can belong to the same
BA.
• DSCP: A value in the IP header that is used to select a QoS treatment for a packet.
In the DiffServ model, classification and QoS revolve around the DSCP.
• PHB: An externally observable forwarding behavior (or QoS treatment) that is
applied at a DiffServ-compliant node to a DiffServ BA. The term PHB refers to the
packet scheduling, queuing, policing, or shaping behavior of a node on any given
packet belonging to a BA. The DiffServ model itself does not specify how PHBs
must be implemented. A variety of techniques may be used to affect the desired
traffic conditioning and PHB. In Cisco IOS Software, you can configure PHBs by
using Modular QoS CLI (MQC) policy maps.
The DiffServ architecture is based on a simple model in which traffic entering a network is
classified at the boundaries of the network. The traffic class is then marked, using a DSCP
marking in the IP header. Packets with the same DSCP markings create BAs as they
traverse the network in a particular direction, and these aggregates are forwarded
according to the PHB that is associated with the DSCP marking.
Each DSCP value identifies a BA. Each BA is assigned a PHB. Each PHB is implemented
using the appropriate QoS mechanism or set of QoS mechanisms.
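As a minimal sketch of the MQC policy maps mentioned above, the following Cisco IOS configuration classifies packets already marked with DSCP EF into one BA and applies a priority (low-latency queuing) PHB to them at this hop. The class name, policy name, bandwidth percentage, and interface are hypothetical placeholders.

class-map match-any VOICE
 match dscp ef
policy-map WAN-EDGE
 class VOICE
  priority percent 10
 class class-default
  fair-queue
interface GigabitEthernet0/1
 service-policy output WAN-EDGE

In DiffServ terms, the class map identifies the BA (all packets marked EF), and the policy map defines the PHB that this node applies to that aggregate on the egress interface.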
One of the primary principles of DiffServ is that you should mark packets as close to the
edge of the network as possible. It is often a difficult and time-consuming task to
determine the traffic class for a data packet, and you should classify the data as few times
as possible. By marking the traffic at the network edge, core network devices and other
devices along the forwarding path will be able to quickly determine the proper QoS
treatment to apply to a given traffic flow, based on the PHB that is associated with the
DSCP marking.
Per-Hop Behaviors
Different PHBs are used in a network, based on the DSCP of the IP packets.
DSCP selects PHBs throughout the network.

• Default PHB: Tail drop


• Expedited Forwarding (EF)
• Assured Forwarding (AF)
• Class selector: IP precedence PHB
The following paragraphs describe the PHBs that are defined by IETF standards.
The default PHB essentially specifies that a packet marked with a DSCP value of 000000
(recommended) receives the traditional best-effort service.
The EF PHB is intended to provide a guaranteed bandwidth rate with the lowest possible
delay. This is achieved by providing prioritized forwarding for the EF PHB. Due to the
prioritized forwarding, this PHB polices excess bandwidth so that other classes that are
not using this PHB are not starved for bandwidth. Applications such as VoIP, video, and
online trading programs require this kind of robust service.
The AF PHB provides a mechanism to provide different levels of forwarding assurances.
Each level should be treated independently and should have allocated bandwidth that is
based on the QoS policy. An AF implementation must detect and respond to long-term
congestion within each class by dropping packets while handling short-term congestion
(packet bursts) by queuing packets. This implies the presence of a smoothing or filtering
function that monitors the instantaneous congestion level and computes a smoothed
congestion level. The dropping algorithm uses this smoothed congestion level to
determine when packets should be discarded.
Class selector DSCPs are values that are backward compatible with IP precedence. When
converting between IP precedence and DSCP, match the three most significant bits.
Packets with higher IP precedence should be (on average) forwarded in less time than
packets with lower IP precedence.
Expedited Forwarding
The EF PHB provides a mechanism to offer guaranteed bandwidth with the lowest delay.
EF PHB

• Ensures a minimum departure rate


• Guarantees an amount of bandwidth with prioritized forwarding
• Polices excess bandwidth
The EF PHB is intended to provide a guaranteed bandwidth rate with the lowest possible
delay. This is achieved by providing prioritized forwarding for the EF PHB. Due to the
prioritized forwarding, this PHB polices excess bandwidth so that other classes that are
not using this PHB are not starved for bandwidth. Strict priority queuing is typically used
for EF traffic.
Packets requiring EF PHB should be marked with a DSCP binary value of 101110, or
decimal 46. Non-DiffServ-compliant devices will regard the EF DSCP value as IP
precedence 5. This precedence is the highest user-definable IP precedence and is typically
used for delay-sensitive traffic such as VoIP.
Assured Forwarding
The AF PHB provides a mechanism to provide different levels of forwarding assurances.
AF PHB

• Guarantees bandwidth and allows access to extra bandwidth when available


• Has four standard classes (af1, af2, af3, and af4)
• DSCP values of aaadd0, where aaa is the class and dd is the drop probability

The AF PHB defines a method by which BAs can be given different forwarding assurances.
There are four standard defined AF classes that are represented by the aaa values of 001,
010, 011, and 100. Each class should be treated independently and should have allocated
bandwidth that is based on the QoS policy.
Traffic in different classes is usually given a proportional measure of priority. If congestion
occurs between classes, the traffic in the higher class is given priority. Also, instead of
using strict PQ, more balanced queue servicing algorithms are implemented (fair queuing
or weighted fair queuing). If congestion occurs within a class, the packets with the higher
drop probability are discarded first. Typically sophisticated drop selection algorithms like
RED are used to avoid tail drop issues.
Class Selector
The class selector provides interoperability between DSCP-based and IP precedence-based
devices in a network.
The following are characteristics of the class selector:

• Class-selector xxx000 DSCP


• Compatibility with current IP precedence usage = maps IP precedence to DSCP
• Differentiates probability of timely forwarding
o DSCP = 011000 has a greater probability of timely forwarding than DSCP = 001000

The meaning of the eight bits in the DS field of the IP packet has changed over time to
meet the expanding requirements of IP networks.
Originally, the DS field was referred to as the ToS field, and the first three bits of the field
(bits 5 to 7) defined a packet IP precedence value. A packet could be assigned one of six
priorities based on the IP precedence value (eight total values minus two reserved values).
IP precedence 5 (101) was the highest priority that could be assigned (RFC 791).
RFC 2474 replaced the ToS field with the DS field, where a range of eight values (class
selector) is used for backward compatibility with IP precedence. There is no compatibility
with other bits that are used by the ToS field.
The class-selector PHB was defined to provide backward compatibility for DSCP with ToS-
based IP precedence. RFC 1812 simply prioritizes packets according to the precedence
value. The PHB is defined as the probability of timely forwarding. Packets with higher IP
precedence should be (on average) forwarded in less time than packets with lower IP
precedence.
The last three bits of the DSCP (bits 2 to 4), set to 0, identify a class-selector PHB. You can
calculate the DSCP value for a CS PHB by multiplying the class number by 8. For example,
the DSCP value for CS3 would be equal to (3 * 8) = 24.
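Applying this formula to each class gives the following decimal DSCP values, each backward compatible with the IP precedence value of the same number: CS1 = 8, CS2 = 16, CS3 = 24, CS4 = 32, CS5 = 40, CS6 = 48, and CS7 = 56.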
24.7 Introducing QoS
Deploying End-to-End QoS
To facilitate true end-to-end QoS on an IP network, a QoS policy must be deployed in the
campus network and the WAN. Each network segment has specific QoS policy
requirements and associated best practices. When the enterprise uses a service provider
network that provides Layer 3 transport, end-to-end QoS requires close coordination
between the QoS policy of the enterprise and of the service provider. Designing and
testing QoS policies is just one step in the overall QoS deployment methodology in the
enterprise environment.
Deploying QoS in an enterprise is a multistep process that is repeated as the business
requirements of the enterprise change.
A successful QoS deployment in an enterprise comprises multiple phases:
1. Strategically defining QoS objectives
2. Analyzing application service-level requirements
3. Designing and testing QoS policies
4. Implementing QoS policies
5. Monitoring service levels to ensure business objectives are being met

A successful QoS policy deployment requires a clear definition of the business objectives
that the enterprise wants to achieve with the QoS implementation. Interview business
stakeholders to identify their business-critical applications and to understand the service-
level requirements of these applications as implemented in the enterprise. It is also crucial
to have executive approval of the QoS policy to ensure that the QoS policy aligns with the
overall strategy of the organization. Once a good understanding of the service-level
requirements for the critical business applications is understood and executive approval of
the proposed policy is in place, a detailed policy can be created and tested. Once tested,
the policy can be rolled out across the entire enterprise network. The policy and
performance of the business-critical applications should be constantly monitored to
ensure that the QoS objectives are being met.
These phases need to be repeated as business conditions evolve.
Enterprise Campus QoS Guidelines
A QoS policy is only as strong as the weakest point in the network. If VoIP or video traffic
experiences packet loss or jitter at any point in the network, the user experience will be
noticeably impacted. In order to provide QoS guarantees, an end-to-end QoS deployment
is required that covers traffic from endpoint to endpoint across the entire network path.
The rapid rise of highly sensitive collaboration traffic has made it even more critical to
ensure that QoS is not only deployed on the WAN, where congestion on low-speed links
was the typical cause of poor application performance, but also in the high-speed campus
environment.
Although network administrators sometimes equate QoS only with queuing, the QoS
toolset extends considerably beyond queuing tools. Classification, marking, and policing
are all important QoS functions that are optimally performed within the campus network,
particularly at the access layer ingress edge (the access edge).

• Classify and mark applications as close to their sources as technically and administratively feasible.
• Police unwanted traffic flows as close to their sources as possible.
• Always perform QoS in hardware rather than software when a choice exists.
• Enable queuing policies at every node where the potential for congestion exists.
• Protect the control plane and the data plane.

Classifying and marking applications as close to their sources as technically and administratively possible enables end-to-end DiffServ PHBs. Sometimes endpoints can be
trusted to set CoS or DSCP markings correctly, but this is not recommended because users
can easily abuse provisioned QoS policies if they are permitted to mark their own traffic.
For example, if DSCP EF received priority services throughout the enterprise, users could
easily configure the NIC on a PC to mark all traffic to DSCP EF, thus hijacking network
priority queues to service their nonreal-time traffic. Such abuse could easily ruin the
service quality of real-time applications, such as VoIP, throughout the enterprise. For this
reason, the phrase "as close as ... administratively feasible" is included in the design
principle.
There is little sense in forwarding unwanted traffic only to police and drop it at a
subsequent node, especially when the unwanted traffic is the result of attacks, such as
when the attackers flood the victim with a high volume of packets or connections
overwhelming the network. Such attacks can cause network outages by overwhelming
network device processors with traffic.
Always perform QoS in hardware rather than software when a choice exists. Cisco IOS
routers perform QoS in software. This situation places additional demands on the CPU,
depending on the complexity and functionality of the policy. Cisco Catalyst switches, on
the other hand, perform QoS in dedicated hardware ASICs and therefore do not tax their
main CPUs to administer QoS policies. You can therefore apply complex QoS policies at 1
Gigabit, 10 Gigabit, 25 Gigabit or 40 Gigabit Ethernet line speeds in these switches.
Most campus links are underutilized. Some studies have shown that 95 percent of campus
access layer links are utilized at less than 5 percent of their capacity. This underutilization
means that you can design campus networks to accommodate oversubscription between
access, distribution, and core layers. Oversubscription allows for uplinks to be utilized
more efficiently, and more importantly, it reduces the overall cost of building the campus
network. Common campus oversubscription values are 20:1 for the access-to-distribution
layers and 4:1 for the distribution-to-core layers.
It is quite rare, under normal operating conditions, for campus networks to suffer
congestion. If congestion does occur, it is usually momentary and not sustained, as at a
WAN edge. However, these short moments of congestion can lead to packet loss due to
instantaneous buffer overload on these high-speed links.
The only way to provide service guarantees is to enable queuing at any node that has the
potential for congestion, regardless of how rarely this congestion may actually occur. The
potential for congestion exists in campus uplinks because of oversubscription ratios and
speed mismatches in campus downlinks (for example, Gigabit Ethernet to Fast Ethernet
links).
Queuing helps to meet network requirements under normal operating conditions, but
enabling QoS within the campus is even more critical under abnormal network conditions.
During such conditions, network traffic may increase exponentially until links are fully
utilized. Without QoS, the attack-generated traffic drowns out applications and causes
denial of service through unavailability. Enabling QoS policies within the campus maintains
network availability by protecting and servicing critical applications such as VoIP, video,
and even best-effort traffic.
25.1 Explaining Wireless Fundamentals
Introduction
Today, wireless LANs (WLANs) are used in most enterprises. Most devices now have
wireless capabilities, enhancing the mobility of users. Mobility enhances employee
productivity, enhances collaboration, and improves responsiveness to customers. Simple
movement around a company is now seamless.
The size and number of Wi-Fi network deployments have increased exponentially over the
past few years. Beginning with a single access point (AP) replacing a cable on the floor, Wi-
Fi networks have become a true extension of the wired LAN, sometimes spreading over
entire warehouses or even campuses and providing simultaneous network connectivity to
thousands of laptops, tablets, and smart phones.
As the size of these networks increases, so does the complexity of the network design. To
create and maintain an effective and secure WLAN in today's environment, it is important
for the administrators and engineers to understand the various wireless components and
architectures that can be used.
For a long time, the only possible WLAN service was through an autonomous AP architecture. In a campus environment with 20 or 30 APs, configuration can be time-consuming. That is where a centralized deployment offers the ease of
centralized management, security, and mobility. Cisco offers such flexibility with wireless
LAN controllers (WLCs), which typically use a GUI to configure and manage various
features.

As a networking engineer, you need to be familiar with different aspects of Wi-Fi deployments, such as the following:

• Options for wireless architecture


• Options to deploy multiple APs in the wireless network to cover the complete
campus
• Wireless channel usage and how non-overlapping channels avoid interference and
poor connection quality.
• Familiarity with the WLAN management approach and required management
protocols
• Integration of the wireless solution into the existing wired infrastructure

25.2 Explaining Wireless Fundamentals


Wireless Technologies
Here are some general wireless topologies:

• Wireless personal area network (WPAN)


• WLAN
• Wireless metropolitan area network (WMAN)

Wireless Personal Area Network


A personal area network (PAN) is a network that exists within a relatively small area and
connects electronic devices such as desktop computers, printers, scanners, fax machines,
and notebook computers. In the past, connecting these devices required extensive
cabling, connectors, and adapters. Now, these devices typically use Bluetooth to connect
to the WPAN.
Typical applications for WPANs are in the office environment, where a WPAN enables devices in proximity (within several meters of each other) to communicate as if they were connected by a cable.
Wireless Local Area Network
In contrast to WPANs, WLANs provide more robust wireless network connectivity over a
local area, up to approximately 100 m (328 feet) between an AP and associated clients.
The goal is not to connect one device to another but to connect end devices to the
backbone network (typically a wired LAN) without the need for cables. WLANs today are
based on the IEEE 802.11 standard and are referred to as Wi-Fi networks. Because these
networks are common, a wireless network administrator needs to understand how they
work and how to configure and troubleshoot them.
Wireless Metropolitan Area Network
A WMAN is a wireless communications network that covers a large geographic area, such
as a city or a suburb. In this type of area, the wireless signal often provides a point-to-
point or point-to-multipoint backbone. Wireless can be used to create links at a low cost:
organizations need only two end devices that point at each other instead of a complex and
costly wired infrastructure.

25.3 Explaining Wireless Fundamentals


WLAN Architectures
Wireless networks usually consist of these components:

• Clients with wireless adapters.


• APs, which are Layer 2 devices whose primary function is to bridge 802.11 WLAN
traffic to 802.3 Ethernet traffic. APs can have internal (integrated) or external
antennas to radiate the wireless signal and provide coverage with the wireless
network.
• APs can be standalone or centralized:
o Standalone (autonomous) APs, managed individually
o Centralized APs, managed by a Cisco WLC
• (Optional) A Cisco WLC that presents a central point for AP management,
configuration of APs, and user traffic termination for centralized APs.
The most common wireless network architecture is the centralized, or lightweight, architecture.
Ad Hoc Networks
Ad hoc wireless networks are used among a small group of hosts.
Characteristics of an ad hoc network include the following:

• It creates an Independent Basic Service Set (IBSS).


• It exists when two wireless devices communicate.
• It contains a limited number of devices because of collision and organization
issues.
To create a Wi-Fi network, users need to have wireless-capable devices. When two
wireless-capable devices are in range of each other, they need only share a common set of
basic parameters (frequency and so on) to be able to communicate. Surprisingly, this set
of parameters is all it takes to create a personal area Wi-Fi network. The first station
defines the radio parameters and group name; the other station only needs to detect the
group name. The other station then adjusts its parameters to the parameters that the first
station defined, and a group is formed that is known as an ad hoc network. Most
operating systems are designed to make this type of network easy to set up.
A Basic Service Set (BSS) is the area within which a computer can be reached through its
wireless connection. Because the computers in an ad hoc network communicate without
other devices (AP, switch, and so on), the BSS in an ad hoc network is called an IBSS.
Computer-to-computer wireless communication is most commonly referred to as an ad
hoc network, IBSS, or peer-to-peer (wireless) network.
Wi-Fi Direct
Wi-Fi Direct is used to connect wireless devices for printing, sharing, syncing, and display.
Wi-Fi Direct in the Enterprise
Not everyone has (or wants) access to a Wi-Fi AP or hotspot. However, users often carry
content and applications that they want to share, print, display, or synchronize. Wi-Fi
Direct is a certification by the Wi-Fi Alliance. The intent is the creation of peer-to-peer Wi-
Fi connections between devices, without the need for an AP. It is another example of a
WPAN.
This connection, which can operate without a dedicated AP, does not operate in IBSS
mode. Wi-Fi Direct is an innovation that operates as an extension to the infrastructure
mode of operation. With the technology that underlies Wi-Fi Direct, a device can maintain
a peer-to-peer connection to another device inside an infrastructure network—an
impossible task in ad hoc mode.
Wi-Fi Direct devices include Wi-Fi Protected Setup (WPS), which makes it easy to set up a
connection and enable security protections. Often, these processes are as simple as
pushing a button on each device.
Devices can operate one-to-one or one-to-many for connectivity.
Wi-Fi Direct Predefined Services
Here are the predefined services that Wi-Fi Direct brings:

• Miracast connections over Wi-Fi Direct allow a device to display photos, files, and
videos on an external monitor or television.
• Wi-Fi Direct for Digital Living Network Alliance (DLNA) lets devices stream music
and video between each other.
• Wi-Fi Direct Print gives users the ability to print documents directly from a smart
phone, tablet, or PC.
Infrastructure Mode
In the infrastructure mode design, an AP is dedicated to centralizing the communication
between clients. This AP defines the frequency and wireless workgroup values. The clients
need to connect to the AP in order to communicate with the other clients in the group and
to access other network devices and resources.
The following are characteristics of the infrastructure mode:

• The AP functions as a translational bridge between 802.3 wired media and 802.11
wireless media.
• Wireless is a half-duplex environment.
• A basic service area (BSA) is also called a wireless cell.
• A BSS is the service that the AP provides.

The central device in the BSA or wireless cell is an AP, which is close in concept to an
Ethernet hub in relaying communication. But, as in an ad hoc network, all devices share
the same frequency. Only one device can communicate at a given time, sending its frame
to the AP, which then relays the frame to its final destination—this is half-duplex
communication.
Although the system might be more complex than a simple peer-to-peer network, an AP is
usually better equipped to manage congestion. An AP can also connect one client to
another in the same Wi-Fi space or to the wired network—a crucial capability.
The comparison to a hub is made because of the half-duplex aspect of the WLAN client
communication. However, APs have some functions that a wired hub simply does not
possess. For example, an AP can address and direct Wi-Fi traffic. Managed switches
maintain dynamic MAC address tables that can direct packets to ports that are based on
the destination MAC address of the frame. Similarly, an AP directs traffic to the network
backbone or back into the wireless medium, based on MAC addresses. The IEEE 802.11
header of a wireless frame typically has three MAC addresses but can have as many as
four in certain situations. The receiver is identified by MAC Address 1, and the transmitter
is identified by MAC Address 2. The receiver uses MAC Address 3 for filtering purposes,
and MAC Address 4 is only present in specific designs in a mesh network. The AP uses the
specific Layer 2 addressing scheme of the wireless frames to forward the upper-layer
information to the network backbone or back to the wireless space toward another
wireless client.
In a network, all wireless-capable devices are called stations. End devices are often called
client stations, whereas the AP is often referred to as an infrastructure device.
Like a PC in an ad hoc network, an AP offers a BSS. An AP does not offer an IBSS because
the AP is a dedicated device. The area that the AP radio covers is called a BSA or cell.
Because the client stations connect to a central device, this type of network is said to use
an infrastructure mode as opposed to an ad hoc mode.
If necessary, the AP converts 802.11 frames to IEEE 802.3 frames and forwards them to
the distribution system, which receives these packets and distributes them wherever they
need to be sent, even to another AP.
When the distribution system links two APs, or two cells, the group is called an Extended
Service Set (ESS). This scenario is common in most Wi-Fi networks because it allows Wi-Fi
stations in two separate areas of the network to communicate and, with the proper
design, also permits roaming.
In a Wi-Fi network, roaming occurs when a station moves. It leaves the coverage area of
the AP to which it was originally connected and arrives at the BSA of another AP. In a
proper design scenario, a station detects the signal of the second AP and jumps to it
before losing the signal of the first AP.
For the user, the experience is a seamless movement from connection to connection. For
the infrastructure, the designer must make sure that an overlapping area exists between
the two cells to avoid loss of connection. If an authentication mechanism exists,
credentials can be sent from one AP to another fast enough for the connection to remain
intact. Modern networks often use Cisco WLCs (not shown in the above figure)—central
devices that contain the parameters of all the APs and the credentials of connected users.
Because an overlap exists between the cells, it is better to ensure that the APs do not
work on the same frequency (also called a channel). Otherwise, any client that stays in the
overlapping area affects the communication of both cells. This problem occurs because
Wi-Fi is half duplex. The problem is called co-channel interference and must be avoided by
making sure that neighbor APs are set on frequencies that do not overlap.
Service Set Identifiers
To roam between different APs within a network, the APs must share the same network
name. This network name is called the Service Set Identifier (SSID), which has as many as
32 ASCII characters and is configured on both the AP and the client stations that wish to
join (associate) with this AP. However, the SSID may also require some type of
authorization to determine which station has the right to connect. The term WLAN is
often used to define both the SSID and the associated parameters (VLAN, security, quality
of service [QoS], and so on).

When a profile is configured on a client station, the SSID is a name that identifies which
WLAN the client station may connect to. The AP associates a MAC address to this SSID.
This MAC address can be the MAC address of the radio interface if the AP supports only
one SSID, or it can be derived from the MAC address of the radio interface if the AP
supports several SSIDs. Because each AP has a different radio MAC address, the derived
MAC address is different on each AP for the same SSID name. This configuration allows a
station that stays in the overlapping area to hear one SSID name and still understand that
the SSID is offered by two APs.
The MAC address, usually derived from the radio MAC address and associated with an
SSID, is the Basic Service Set Identifier (BSSID). The BSSID identifies the BSS that is
determined by the AP coverage area.
Because this BSSID is a MAC address that is derived from the radio MAC address, APs can
often generate several values. This ability allows the AP to support several SSIDs in a single
cell.
An administrator can create several SSIDs on the same AP (for example, a guest SSID and
an internal SSID). The criteria by which a station is allowed on one or the other SSID will be
different, but the AP will be the same. This configuration is an example of Multiple Basic
SSIDs (MBSSIDs).
MBSSIDs are basically virtual APs. All of the configured SSIDs share the same physical
device, which has a half-duplex radio. As a result, if two users of two SSIDs on the same AP
try to send a frame at the same time, the frames will collide. Even if the SSIDs are
different, the Wi-Fi space is the same. Using MBSSIDs is only a way of differentiating the
traffic that reaches the AP, not a way to increase the capacity of the AP.
Broadcast Versus Hidden SSID
SSIDs can be either broadcast (or advertised) or not broadcast (or hidden) by the APs. A
hidden network is still detectable. SSIDs are advertised in Wi-Fi packets that are sent from
the client, and SSIDs are advertised in Wi-Fi responses that are sent by the APs.
Client devices that are configured to connect to nonbroadcasting networks will send a Wi-
Fi packet with the network (SSID) that they wish to connect to. This is considered a
security risk because the client may advertise networks that it connects to from home
and/or work. This SSID can then be broadcasted by a hacker to entice the client to join the
hacker network and then exploit the client (connect to the client device or get the user to
provide security credentials).
Centralized Wireless Architecture
The centralized, or lightweight, architecture allows the splitting of 802.11 functions
between the controller-based AP, which processes real-time portions of the protocol, and
the WLC, which manages items that are not time-sensitive. This model is also called split
MAC. Split MAC is an architecture for the Control and Provisioning of Wireless Access
Points (CAPWAP) protocol defined in RFC 5415.
Alternatively, an AP can function as a standalone element, without a Cisco WLC, which is
called autonomous mode. In that case, there is no WLC and the AP supports all the
functionalities.
The following are features of Split MAC:

• Centralized tunneling of user traffic to the WLC (data plane and control plane)
• Systemwide coordination for wireless channel and power assignment, rogue AP
detection, security attacks, interference, and roaming
All MAC functionality that is not real time is processed by the Cisco WLC. The APs handle
only real-time MAC functionality, which includes the following:

• Frame exchange handshake between client and AP when connecting to a wireless network
• Frame exchange handshake between client and AP when transferring a frame
• Transmission of beacon frames, which advertise all the nonhidden SSIDs
• Buffering and transmission of frames for clients in a power-save operation
• Providing real-time signal quality information to WLC with every received frame
• Monitoring all radio channels for noise, interference, and other WLANs, and
monitoring for the presence of other APs
• Wireless encryption and decryption of 802.11 frames
All remaining functionality is managed in Cisco WLC, where time sensitivity is not a
concern and WLC-wide visibility is required. Some of the MAC functions that are provided
in the Cisco WLC are as follows:

• 802.11 authentication
• 802.11 association and reassociation (roaming)
• 802.11 frame translation and bridging to non-802.11 networks, such as 802.3
• Radio frequency (RF) management
• Security management
• QoS management
APs in a centralized architecture can have different modes of operation:

• Local mode, which is the default operational mode of APs when connected to the
Cisco WLC. When an AP is operating in local mode, all user traffic is tunneled to the
WLC, where VLANs are defined.
• FlexConnect mode, which is a Cisco wireless solution for branch and remote office
deployments, to eliminate the need for WLC on each location. In FlexConnect
mode, client traffic may be switched locally on the AP instead of tunneled to the
WLC.
Control and Provisioning of Wireless Access Points
CAPWAP is the current industry-standard protocol for managing APs. CAPWAP functions
for both IPv4 and IPv6.

• A CAPWAP tunnel uses the following UDP port numbers:


o Control plane uses UDP port number 5246
o Data plane uses UDP port number 5247

CAPWAP is an open protocol that enables a WLC to manage a collection of wireless APs.
CAPWAP control messages are exchanged between the WLC and AP across an encrypted
tunnel. CAPWAP includes the WLC discovery and join process, AP configuration and
firmware push from the WLC, and statistics gathering and wireless security enforcement.
After the AP discovers the WLC, a CAPWAP tunnel is formed between the WLC and AP.
This CAPWAP tunnel can be IPv4 or IPv6. CAPWAP supports only Layer 3 WLC discovery.
Once an AP joins a WLC, the APs will download any new software or configuration
changes. For CAPWAP operations, any firewalls should allow the control plane (UDP port
5246) and the data plane (UDP port 5247).
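For example, if an IOS-based router or firewall filters traffic between the APs and the WLC, the CAPWAP ports could be permitted with an ACL similar to the following sketch. The ACL name and the use of the any keyword are placeholders; a production filter would match the actual AP and WLC subnets.

ip access-list extended ALLOW-CAPWAP
 permit udp any any eq 5246
 permit udp any any eq 5247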
Mapping SSIDs to VLANs
VLANs provide an ideal way of separating users on different WLAN SSIDs when they access
the wired side of the network. By associating each SSID to a different VLAN, you can group
users on the Ethernet segment the same way that they were grouped in the WLAN. You
can also isolate groups from each other, in the same way that they were isolated on the
WLAN.
In the example illustrated in the figure, two SSIDs are associated with different VLANs. The
"Internal" SSID is intended for internal users in the company, while the "Guest" SSID is for
guests visiting the company. Hence, the internal traffic is separated from the guest traffic
in the wired and wireless environment.

When the frames are in different SSIDs in the wireless space, they are isolated from each
other. Different authentication and encryption mechanisms per SSID and subnet isolate
them, even though they share the same wireless space.
When frames come from the wireless space and reach the Cisco WLC, they contain the
SSID information in the 802.11 encapsulated header. The Cisco WLC uses the information
to determine which SSID the client was on.
When configuring the Cisco WLC, the administrator associates each SSID to a VLAN ID. As
a result, the Cisco WLC changes the 802.11 header into an 802.3 header, and adds the
VLAN ID that is associated with the SSID. The frame is then sent on the wired trunk link
with that VLAN ID.
Switch Configuration to Support WLANs
WLCs and APs are usually connected to switches. The switch interfaces must be
configured appropriately, and the switch must be configured with the appropriate VLANs.
The configuration on switches regarding the VLANs is the same as usual. The configuration
differs on interfaces though, depending on if the deployment is centralized (using a WLC)
or autonomous (without a WLC).
Switch VLAN Configuration to Support WLANs
The following types of VLANs are required with WLANs:
1. Management VLAN
2. AP VLAN
3. Data VLAN
The management VLAN is for the WLC management interface configured on the WLC. The
APs that register to the WLC can use the same VLAN as the WLC management VLAN, or
they can use a separate VLAN. The APs can use this VLAN to obtain IP addresses through
DHCP and send their discovery request to the WLC management interface using those IP
addresses. To support wireless clients, you will need a VLAN (or VLANs) with which to map
the client SSIDs to the WLC. You may also want to use the DHCP server for the clients.
Note: Layer 3 mode is the dominant mode today, where the AP interfaces are on a
different subnet than the WLC management interface.
On the switch, the VLANs must first be created to support the WLAN management, APs,
and wireless clients, as shown in the following example:
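A representative sketch of this VLAN configuration, assuming the VLAN IDs used later in this lesson (11 for WLC management, 12 for APs, and 14 for wireless client data) and hypothetical VLAN names, follows:

Switch(config)# vlan 11
Switch(config-vlan)# name WLC_MGMT
Switch(config-vlan)# vlan 12
Switch(config-vlan)# name AP_MGMT
Switch(config-vlan)# vlan 14
Switch(config-vlan)# name WLAN_DATA
Switch(config-vlan)# exit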

Note: It is good practice to use a naming convention that easily identifies your VLANs in
the switch.
Second, you will need either the Layer 3 switch or a router to perform inter-VLAN routing.
Usually, inter-VLAN routing is configured along with the VLAN creation. For this example,
assume that inter-VLAN routing is already configured.
Switch Port Connected to WLC Configuration
The following example shows the configuration of the switch interface that is connected
to the Cisco WLC. The WLC and the switch are, as usual, connected through a trunk port. Per security recommendations, only the VLANs that are needed should be allowed on the trunk; therefore, only the WLC management, AP, and data VLANs are allowed.

The following are the steps for configuration of the switch port connected to the WLC:
1. Enter global configuration mode.
2. Choose the physical port that the WLC is connected to on the switch.
3. Enter a description (for example, WLC hostname).
4. Set the port to trunk mode.
5. Set the allowed VLANs and, optionally, a native VLAN.
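A sketch that follows these steps, using a hypothetical interface number, description, and native VLAN, could look like this (on platforms that still support ISL, the switchport trunk encapsulation dot1q command must be entered before the trunk mode command):

Switch(config)# interface GigabitEthernet1/0/1
Switch(config-if)# description WLC-1
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 11,12,14
Switch(config-if)# switchport trunk native vlan 999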

In this example, VLAN 11 represents the WLC management VLAN, and VLAN 14 represents the wireless client VLAN (associated with an SSID). The AP VLAN 12 must also be allowed on this trunk, since the connectivity between the AP and the WLC is over a Layer 3 connection.
Optionally, you can use link aggregation (LAG) to bundle multiple ports on the WLC,
providing port redundancy and load balancing. Note that a WLC can still connect to only
one neighboring switch. In this case:

• The switch needs to bundle ports towards the WLC into an EtherChannel with
mode "on" configured.
• The switch port channel interface must be configured as trunk port, with all data
VLANs and the AP and management VLANs allowed.
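A sketch of the corresponding switch-side LAG configuration, with hypothetical member ports and channel-group number, might be:

Switch(config)# interface range GigabitEthernet1/0/1 - 2
Switch(config-if-range)# channel-group 1 mode on
Switch(config-if-range)# exit
Switch(config)# interface Port-channel1
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 11,12,14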
Switch Port Connected to WLC-Based AP Configuration
The WLC-based AP in local mode usually connects to an access port (nontrunking). The
access VLAN is used for traffic to and from the WLC. In a typical configuration, no traffic
from or to a wireless client can transit directly through the AP without going to the WLC.

The following are the steps for configuration of the switch port connected to the AP:
1. Enter global configuration mode.
2. Choose the physical port that the AP is connected to on the switch.
3. Enter a description (for example, AP hostname).
4. Set the access VLAN (AP VLAN).
5. Set the port to access mode.
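A sketch that follows these steps, with a hypothetical interface number and description, could be:

Switch(config)# interface GigabitEthernet1/0/10
Switch(config-if)# description AP-1
Switch(config-if)# switchport access vlan 12
Switch(config-if)# switchport mode access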

In this example, VLAN 12 represents the AP VLAN, which allows the AP to access its DHCP server. As indicated, this VLAN should have Layer 3 connectivity with the WLC management interface.

The figure and the following steps illustrate how CAPWAP communication works:
1. Based on the switch port configuration, the AP is connected to the switch on an
access port (the VLAN for AP to get DHCP). The WLC is connected to the switch on
a trunk port, allowing VLANs for WLC management (VLAN 11), AP (VLAN 12), and
the wireless clients (VLAN 14).
2. The AP and WLC create a CAPWAP tunnel.
3. The client associates to the AP with an SSID of "CORP."
4. The AP sends the client data that is marked with SSID "CORP" through the CAPWAP
tunnel to the WLC.
5. The WLC decapsulates the CAPWAP traffic.
6. The SSID of "CORP" is mapped in the WLC to VLAN ID 14.
7. The WLC tags the data with VLAN 14 before sending it back on the trunk port
(where VLAN 14 is allowed) to the switch.
8. The switch sends it on to the network (based on the destination in the packet).
Switch Port Connected to Autonomous AP Configuration
An autonomous AP connects to a trunk port. On the trunk a native (untagged) VLAN is
required for management of the AP. By default, all VLANs are allowed over the trunk link.
To enhance security, you should specify which VLANs are permitted over the trunk link,
which should include the AP management VLAN.

The configuration of the switch port in this case is very similar to the configuration of a
port connected to a WLC, as shown in the following example.
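A sketch of such a trunk port toward an autonomous AP, with a hypothetical interface number, could be the following, where VLAN 12 serves as the untagged AP management VLAN and VLAN 14 carries wireless client traffic:

Switch(config)# interface GigabitEthernet1/0/11
Switch(config-if)# description Autonomous-AP-1
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk native vlan 12
Switch(config-if)# switchport trunk allowed vlan 12,14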

Autonomous AP Communication: Locally Switched


The figure and the following steps illustrate how communication works with an
autonomous AP:
1. Based on the switch port configuration, the AP is connected to the switch as a
trunk port, allowing VLANs for AP management (VLAN 12) and the wireless clients
(VLAN 14).
2. The client associates to the AP with an SSID of "CORP."
3. The SSID of "CORP" is mapped in the AP to VLAN ID 14.
4. The AP tags the data with VLAN 14 before sending it on the trunk port (where
VLAN 14 is allowed) to the switch.
5. The switch may send it on to the network (based on the destination in the packet).
Workgroup Bridges
Devices can be located in places where inserting an Ethernet cable is not feasible because
the devices are supposed to be movable, or because of the environment (for example, a
warehouse where the distance to the switch could exceed 100 m). A wireless setup in
such cases is a natural way to provide access to the network, but devices might only have
an Ethernet connection, not a slot for a Wi-Fi card.
A workgroup bridge (WGB) is an AP that is configured to bridge between its Ethernet and
wireless interfaces.
A WGB provides a wireless connection to devices connected to its Ethernet port.
You can use a WGB with multiple clients, but the WGB in that case must be connected to a
hub or a switch.
Mesh Networks
Providing full wireless coverage is a challenge in various environments. To provide
pervasive network connectivity, enterprises must be able to deploy wireless APs wherever
necessary. Typical APs must connect to Ethernet cables that extend up to 100 meters (328
feet) from the Ethernet port. Running Ethernet cables to every AP to provide full coverage
is often too difficult in hard-to-wire environments.
The Cisco wireless mesh networking solution provides wireless connectivity to areas that,
until now, have been difficult or impossible to wire. These mesh APs deliver mobile
connectivity to users located in previously inaccessible areas, while backhauling wireless
traffic to traditional APs connected to Ethernet ports.
Mesh APs connect to the network using wireless.

• One AP radio is used to serve clients.


• The second AP radio is used to backhaul traffic.
Using one radio, each mesh AP can provide wireless coverage for client devices within its
area, while backhauling traffic through the second radio. Usually, network access to users
is delivered over the 2.4-GHz frequency and the 5-GHz band is used to backhaul traffic.

25.4 Explaining Wireless Fundamentals


Wi-Fi Channels
Wi-Fi-compatible devices can connect to a network or the internet via a WLAN and an AP.
Coverage can be as small as a single room, or as large as many square kilometers achieved
by using multiple overlapping APs. Wi-Fi works best in line of sight. Many construction
materials and other obstacles absorb or reflect Wi-Fi, which restricts Wi-Fi's range below
its 100-meter maximum distance.
Wi-Fi networks are based on the IEEE 802.11 standard and operate in the 2.4-GHz and 5-
GHz spectrum, which is allocated for unlicensed industrial, scientific, and medical (ISM)
usage.
Note: Devices that operate in unlicensed bands do not require any formal licensing
process, but when operating in these bands, the user is obligated to follow the
government regulations for that region. The regulatory domains in different parts of the
world monitor these bands according to different criteria, and the WLAN devices used in
these domains must comply with the specifications of the relevant governing regulatory
domain.
IEEE 802.11 amendments evolved through time. The original standard was created in
1997. At that time, wireless was designed to operate in the 2.4-GHz frequency band and supported only low data rates (1 and 2 Mbps), resulting in slow connectivity.
It was quickly realized that those data rates were too slow, so the first amendment to the
standard was developed—802.11b. 802.11b also operates in 2.4-GHz frequency band, but
it offers higher data rates (1, 2, 5.5, and 11 Mbps). At the same time, a new amendment
was developed, 802.11a, that extended wireless operation also in the 5-GHz frequency
band. This development was necessary, because the 2.4 GHz was becoming crowded,
causing interference. Since 802.11a operated in a different frequency band, another type
of modulation was used that changed (increased) the data rates up to 54 Mbps (6, 9, 12,
18, 24, 36, 48, and 54 Mbps).
What was achieved in the 5-GHz frequency band was also needed in the 2.4-GHz band. For
this reason, the 802.11g amendment was developed, offering the same data rates, up to
54 Mbps in the 2.4-GHz frequency band. For backward compatibility with 802.11b, data
rates of 1, 2, 5.5, and 11 Mbps are still supported in 802.11g.
In 2009, a new amendment was ratified, 802.11n, which tried to address the negative
aspects of previous amendments. Using different techniques (modulation, beamforming,
and so on), it improved the performance of the wireless significantly, also increasing the
data rates up to 600 Mbps. 802.11n is the first amendment that supports both frequency
bands, 2.4 GHz and 5 GHz, and is, therefore, backward compatible with all the existing
amendments—802.11a, b, and g.
802.11ac further improved those data rates that 802.11n brought and can, in theory,
achieve up to almost 3500 Mbps. However, 802.11ac operates only in the 5-GHz
frequency band and is, therefore, backward compatible only with 802.11a and n.
Each amendment is backward compatible with the other amendments that operate at the
same frequency. This compatibility enables you to, for example, replace the APs but still
keep older client devices.
The following are the available channels for Wi-Fi usage:
• The 2.4-GHz ISM band ranges from 2.4 to 2.4835 GHz or 2.497 GHz in Japan (11
available channels in the U.S., 13 in Europe, 14 in Japan).
• The 5-GHz Unlicensed National Information Infrastructure (UNII) band is
subdivided into four ranges:
o UNII-1 ranges from 5.15 to 5.25 GHz
o UNII-2 ranges from 5.25 to 5.35 GHz
o UNII-2 extended ranges from 5.470 to 5.725 GHz
o UNII-3 ranges from 5.725 to 5.825 GHz
• The 5-GHz spectrum also has an ISM band that ranges from 5.725 to 5.875 GHz (overlaps
with UNII-3 band).
Each AP operates in one channel. The goal is that neighboring APs do not use the same
channel, so you need multiple non-overlapping channels. Using overlapping channels
could lead to:

• Co-channel interference: APs use the same channel.


• Adjacent channel interference: APs use channels that are too close to each other (for example, channels 1 and 3).
The difference between co-channel and adjacent channel interference is that co-channel
interference just slows down the wireless operation, while adjacent channel interference
leads to collisions and, therefore, disrupts wireless operation.

The ISM band (2.4-GHz spectrum) was planned with channels that are 22-MHz wide. The
channels also require 5 MHz of separation from each other. There are 11 channels
available in the United States, 13 in Europe, and 14 in Japan.
But if a device uses a channel that is 22-MHz wide (11 MHz on each side of the peak
channel), then this channel will encroach on the neighboring channels. As a result, there
are only three nonoverlapping channels in the United States and in Europe: 1, 6, and 11.
Any attempt to use channels that are closer to each other will result in interference issues.
Nonoverlapping channels need to be separated by 25 MHz at center frequency or by five
channel bands. In Japan, four channels (1, 6, 11, and 14) can be used because channel 14
is far apart from the other channels. Channel 14 can only be used in 802.11b networks
(not IEEE 802.11g/n).
Note: 802.11n allows for 40-MHz channels for 2.4 GHz, but the implementation is only
feasible in residential deployments. Using 40-MHz channels in the 2.4-GHz band reduces
the nonoverlapping channels.

The 5-GHz band is divided into several sections: four UNII bands and one ISM band.
Channels in these sections are spaced at 20-MHz intervals and are considered noninterfering; however, they do have a slight overlap in frequency spectrum.
Consecutive channels can be used in neighboring cell coverage, but neighboring cell
channels should be separated by at least one channel when possible.
Since there are more non-overlapping channels in 5 GHz, you can use so-called "channel
bonding," where you can merge two adjacent channels together and achieve wider
channels (40-MHz, 80-MHz, or 160-MHz wide instead of 20 MHz), which in practice multiplies the data rates by 2, 4, or 8.
Many regulatory domains enforce different laws for each of these bands, so even though
they may all be considered 5-GHz bands, operation in each set of channels may be
different. Also, some of the channels might not be available in all regulatory domains
(United States, Europe, Japan).
2.4-GHz and 5-GHz Comparison
Signals in the 2.4-GHz frequency have greater range and better propagation through
obstacles. On the other hand, many devices are using the 2.4-GHz frequency band and,
therefore, producing interference. It is not only Wi-Fi devices, but also many nonwireless
devices exist, so the spectrum is really crowded. There are also a limited number of
channels that do not overlap.
The 5-GHz spectrum is less crowded with many more non-overlapping channels, but it still
has some drawbacks. Older devices do not support it, so you might still need 2.4 GHz in
your network. The signal is weaker and therefore the propagation is worse. Also, it is not
completely non-Wi-Fi interference-free because weather radars can operate in this
frequency.
Other Non-802.11 Radio Interferers
Because the 2.4-GHz ISM band is unlicensed, the band is crowded by transmissions from
many devices, such as RF video cameras, baby monitors, and microwave ovens. Most of
these devices are high-powered, and they do not send IEEE 802.11 frames but can still
cause interference for Wi-Fi networks.

For example, RF video cameras operate by exchanging information (the image stream)
between a transmitter (the camera) and the receiver (linking to a video display). These
cameras usually use 100 milliwatts (mW) of power and a channel that is narrower than Wi-Fi. The
stream of information is continuous and severely affects any Wi-Fi network in the
neighboring channels. These cameras and Wi-Fi are incompatible—an AP cannot natively
receive and understand a camera video stream.
As another example, baby monitors are found more in home environments than in
industrial or office networks (although they can be found in hospitals, nurseries, and many
other social service or education-related environments). The keepalive information exchanged between the monitoring stations can be one-way or two-way and is half duplex. Some of these monitors can use several channels for one monitor station to
control two devices. The monitors can use 100 mW of power. They are not 802.11
technologies but work in the same frequency and power as 802.11 devices.
Microwave ovens provide a pulse form of interference in the middle of the Wi-Fi, 2.4-GHz
band at a much higher power. Wi-Fi AP transmitters are measured in milliwatts, while
microwave ovens use a power level of over 1000 W.
Fluorescent lights also can interact with Wi-Fi systems but not as interference. The form of
the interaction is that the lamps are driven with alternating current (AC) power, so they
switch on and off many times each second. When the lights are on, the gas in the tube is
ionized and conductive. Because the gas is conductive, it reflects RF. When the tube is off,
the gas does not reflect RF. The net effect is a potential source of interference that comes
and goes many times per second.
Generally speaking, any device that uses a radio should be checked to determine whether
it works in one of the Wi-Fi spectrums.

25.5 Explaining Wireless Fundamentals


AP and WLC Management
Designing a WLAN infrastructure is similar in some ways to designing a LAN infrastructure.
There are DHCP servers, Domain Name System (DNS) servers, and management protocols
such as Simple Network Management Protocol (SNMP). The provisioning of services may
be different, depending on whether the deployment is centralized or distributed.
Protocols that are used for management and operations must not be blocked by firewalls
or other security devices.
Dynamic Host Configuration Protocol
Both clients and APs will need IP addresses in the WLAN. You will need to create different
subnets for each to break up the broadcast domain and segment for security and routing.
Using different IP subnets eliminates contention between wired and wireless clients.
Client VLANs can also have different subnets and DHCP servers from each other; for
example, the employee VLAN (and SSID) and subnet compared with the guest VLAN (SSID)
and subnet.

When APs and WLCs are on separated subnets (no common broadcast domain), DHCP
Option 43 is one method that can be used to map APs to their WLCs.
DHCP Option 43, the vendor-specific information option, is specified in RFC 2132. It is used to pass vendor-specific configuration to a DHCP client. Option 43 can be used to include the
IP address of the Cisco WLC interface that the AP is attached to.
Note: In IPv6, DHCP version 6 (DHCPv6) option 52 can be used for the same purpose. For
simplicity, only DHCP for IPv4 is discussed here.
There are two ways of implementing DHCP:

• Using an internal DHCP server on the Cisco WLC.


o WLCs contain an internal DHCP server. This server is typically used in
branch offices that do not already have a DHCP server.
o DHCP Option 43 is not supported on the WLC internal server. Therefore,
the AP must use an alternative method to locate the management interface
IPv4 address of the WLC, such as local subnet broadcast or DNS.
• Using a switch or a router as a DHCP server.
o Because the WLC captures the client IPv4 address that is obtained from a
DHCP server, it maintains the same IPv4 address for that client during its
roaming.
The internal DHCP server on the WLC has some limitations, such as the lack of support for DHCP Option 43, so using an external DHCP server (a switch or a router) is the preferred solution.
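As a sketch of the external DHCP server option, an IOS switch or router could serve the AP subnet and point the APs at the WLC with Option 43. All addresses, the pool name, and the domain name are hypothetical; in the Option 43 value, f1 is the Cisco AP suboption type, 04 is the length of a single IPv4 address, and 0a0a.0b05 encodes an assumed WLC management address of 10.10.11.5.

Router(config)# ip dhcp pool AP-POOL
Router(dhcp-config)# network 10.10.12.0 255.255.255.0
Router(dhcp-config)# default-router 10.10.12.1
Router(dhcp-config)# dns-server 10.10.10.53
Router(dhcp-config)# domain-name example.com
Router(dhcp-config)# option 43 hex f104.0a0a.0b05

The dns-server and domain-name statements correspond to DHCP Options 6 and 15, which are discussed in the next topic.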
Domain Name System
A DHCP server can be configured with various options that are included inside the DHCP
packet. If you have configured your DHCP server to provide both Option 6 (DNS server
address) and Option 15 (Domain name) information, both clients and APs can obtain this
information from the DHCP option.

An AP can use DNS during the boot process as a mechanism to discover WLCs that it can
join. This process is done using a DNS server entry for CISCO-CAPWAP-
CONTROLLER.localdomain.
The localdomain entry represents the domain name that is passed to the AP in DHCP
Option 15.
The DNS discovery option mode operates as follows:
1. The AP requests its IPv4 address from DHCP, and includes Options 6 and 15
configured to get DNS information.
2. The IPv4 address of the DNS server is provided by the DHCP server from the DHCP
option 6.
3. The AP will use this information to perform a hostname lookup using CISCO-
CAPWAP-CONTROLLER.localdomain. This hostname should be associated to the
available Cisco WLC management interface IP addresses (IPv4, IPv6, or both).
4. The AP will then be able to associate to responsive WLCs by sending packets to the
provided address.
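For example, if the domain name delivered in Option 15 were example.com and the WLC management address were 10.10.11.5 (both hypothetical values, matching the earlier DHCP sketch), the DNS zone would contain an entry similar to:

CISCO-CAPWAP-CONTROLLER.example.com.  IN  A  10.10.11.5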
Network Time Protocol
Network Time Protocol (NTP) is used in WLANs, much like it is in LANs. It provides
date/time synchronization for logs and scheduled events.
In WLANs, NTP also plays an important role in the AP join process. When an AP joins a Cisco WLC, the WLC verifies the embedded certificate of the AP. If the date and time configured on the WLC precede the creation and installation date of the certificates on the AP, the AP fails to join the WLC. Therefore, the WLC and the AP should synchronize their time using NTP.
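As a sketch with a hypothetical server address, the upstream switch or router would simply reference the time source:

Switch(config)# ntp server 192.0.2.10

On AireOS-based WLCs, the corresponding CLI command is config time ntp server 1 192.0.2.10, and the same setting can also be applied from the WLC GUI.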

Authentication, Authorization, and Accounting


Users that access the wireless network need to be authenticated. The most secure approach is for each user to have their own identity, which can be achieved using IEEE 802.1X
authentication.
With IEEE 802.1X, an Authentication, Authorization, and Accounting (AAA) server defines
conditions by which access to the network is granted or refused. Conditions can range from group membership to the VLAN of origin to the time of day. An AAA server does not need to contain all the information itself; rather, it can point to an external resource (for example, group membership can be matched against Active Directory).
The AAA server functionality can be provided:

• Locally by a Cisco WLC.


• Globally by an AAA server (for example, Cisco Identity Service Engine [ISE]).

When using a global AAA server, there must be IP reachability between the WLC and the
AAA server, because the WLC needs to authenticate itself to the server and relay client credentials to it.
Management Protocols
Small to midsize businesses can use HTTPS and manage their Cisco WLCs directly through
the GUI. From the GUI, you can view the status and trap logs from the Management
console menu.
Larger businesses can use SNMP to view the status of the Cisco WLC, or to control it from
a remote management station. Cisco Digital Network Architecture (DNA) Center is an
example of one such management station.
Command-Line Interface
A Cisco WLC does not have a default configuration, so you must run a setup wizard. The
initial WLC configuration is accomplished either via the console port and CLI or via the
WLC web interface. Setup via the console port requires a PC with an available serial (DB-9)
or Universal Serial Bus (USB) port and an appropriate adapter.
Like on other Cisco devices, the WLC CLI is available via the following:

• Telnet (not secured, so should not be used if possible)


• Secure Shell (SSH)
• Console port
o Registered Jack-45 (RJ-45) or USB port
o Default port configuration
▪ 9600 baud, 8 data bits, 1 stop bit, no parity, and no hardware flow
control
WLCs typically use an RJ-45 jack as their serial port. In addition to the RJ-45 jack, some
models offer the option to use a USB cable to establish a console connection to a PC using
a USB Type A-to-5-pin mini-Type B cable. An adapter may also be required, depending on
the interfaces available on the PC.
The PC also needs a serial port and communications software, such as HyperTerminal
or PuTTY, configured with the following settings:

• Speed: 9600 bps


• Data bits: 8
• Parity: None
• Stop bit: 1
• Flow control: None
Normal configuration changes can be made either from the CLI or from the web GUI.
The APs also have similar CLI access (console port and Telnet or SSH).
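On Cisco IOS devices, SSH access for CLI management is enabled along the following lines (the hostname, domain name, and credentials are assumptions); the WLC and AP use their own command syntax, but the same principle of preferring SSH over Telnet applies.

hostname SW1
ip domain-name example.local
crypto key generate rsa modulus 2048
username admin privilege 15 secret StrongPass123
ip ssh version 2
line vty 0 4
 login local
 transport input ssh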
26.1 Introducing Architectures and Virtualization
Introduction
Designing an enterprise campus network is no different than designing any large, complex
system, such as a piece of software or even something as sophisticated as the space
shuttle. The use of a guiding set of fundamental engineering principles serves to ensure
that the campus design provides for the balance of availability, security, flexibility, and
manageability required to meet current and future business and technological needs.
Designing a LAN for the campus use case is not a one-design-fits-all proposition. The scale
of a campus LAN can be as simple as a single switch and wireless access point (AP) at a
small remote site, or a large, distributed, multibuilding complex with high-density wired
port and wireless LAN (WLAN) requirements that are typically designed in hierarchical
tiers. Platform choices for these deployments are often driven by needs for network
capacity, the device and network capabilities offered, and also the need to meet any
compliance requirements that are important to the organization.
Even though the traditional multilayer campus design is a widely deployed, valid design
choice, there are also alternatives that are currently available that might better fit
customer needs. Perhaps virtualization or outsourcing Information Technology (IT)
resources to the cloud would be a more efficient or optimal solution for the customer.

As a networking engineer, you will be asked to participate in designing enterprise environments, but you first need to become familiar with basic network design principles, such as the following:

• Available network architecture models, including enterprise hierarchical and spine-leaf network design
• Cloud computing basics
• Virtualization fundamentals and the design of the way devices interconnect to
provide a full set of services required (device architecture)

26.2 Introducing Architectures and Virtualization


Introduction to Network Design
Network architecture is the result of planning and design. Network design requires
considerable technical knowledge and experience.
Design decisions are based on network analysis, which is based on quality and capacity
observations.
Principal objectives of network design are providing scalability and resiliency, while
meeting the needs of organizations.
A scalable network can expand quickly to support new users and applications without
impacting performance of the service being delivered to existing users. In a well-designed
network, it is relatively easy to add new elements. Scalability can be achieved through
modular structuring. It is done by constructing smaller units, called modules, which are
added or removed as required.
A resilient network is both highly available and highly reliable. Such a network allows for
almost immediate data flow recovery in the event of a failure of any component. An area
of the network that is impacted by a device or network service problem is called a failure
domain. Small, limited failure domains are a characteristic of a good design. High reliability
is achieved by choosing correctly sized devices with all the needed features in the correct
location. It also involves appropriate dimensioning of interlinks in terms of required
bandwidth. Resilient networks employ redundancy at multiple levels—device level,
interlink level, software, and processes level.
Security and quality of service (QoS) are common requirements in network design.
Designs that meet these requirements incorporate measures for physically securing the
devices and measures to protect information. A network designed with QoS requirements
in mind controls how and when network resources are used by applications, both under
normal operating conditions and when congestion occurs.
The modular design approach addresses both scalability and resiliency. The term module
can apply to hardware (a line card in modular switches), network design building block (a
switch block in a hierarchical architecture), or a functional segment of a network (data
center, service block, internet edge). Modularity also facilitates implementation of
services and helps in troubleshooting.
Cisco tiered models propose a hierarchical design. They divide the network into discrete
layers or tiers. Each tier provides specific functions that help to select and optimize the
right network hardware, software, and features. Hierarchical models apply to LAN and
WAN design. Examples of tiered models are the three-tier hierarchical and spine-and-leaf
models.
The following figure represents a three-tiered architectural model, which has three
distinct functional layers, each playing a specific role.

The spine-and-leaf model is a two-tiered architecture where servers connect to the leaf
switches in the topology, while the spine layer is the backbone that interconnects all leaf
switches.

A model that helps in the design of a larger enterprise network, is the Cisco Enterprise
Architecture model.
This design is based on modules that correspond to a specific place in the network or a
specific function they have in a network. These modules represent areas that have
different physical or logical connectivity. Basic modules in the Cisco Enterprise
Architecture model are Enterprise Campus, Enterprise Edge, and Service Provider Edge.
Larger network designs also include a module for Remote Locations, such as Enterprise
Branch, Remote Data Center, and Remote Workers.
Issues in a Poorly Designed Network
A poorly designed network has increased support costs, reduced service availability, and
limited support for new applications and solutions. A suboptimal performance directly
affects end users and their access to resources.
One symptom of a poorly designed network is congestion. Congestion is a result of
suboptimal traffic flow or the selection of inappropriate devices or links. In many cases, the
root cause of such problems is an inadequate or outdated design.
Even when a network is first implemented following a validated architecture, over time its
design can degrade into an undesirable state. This situation might result from
nonsystematic, uncontrolled expansion, or in other words, a liberal addition of devices
without overall consideration of the design.
The importance of careful design can be seen by examining an example of a flat network
that does not follow a structured design. Devices in a flat design are connected to each
other using Layer 2 switches without the use of VLANs. All devices on this network share
the available bandwidth and all are members of the same broadcast domain. They are
usually also in the same IP subnet. Layer 2 devices that build a flat network provide little
opportunity to control broadcasts or to filter undesirable traffic. As more devices and
applications are added to a flat network, network performance degrades until the
network becomes unpredictable, slow, or even unusable.
These issues are often found in poorly designed networks:

• Large broadcast domains: Broadcasts exist in every network. Many applications


and network operations use broadcasts to function properly. Therefore, you
cannot eliminate them completely. In the same way that avoiding large failure
domains involves clearly defining boundaries, broadcast domains should also have
clear boundaries. They should also include a limited number of devices to minimize
the negative effect of broadcasts.
• Management and support difficulties: A poorly designed network may be
disorganized, poorly documented, and lack easily identifiable traffic paths. These
issues can make support, maintenance, and troubleshooting time-consuming and
difficult.
• Possible security vulnerabilities: A switched network that has been designed with
little attention to security requirements at network access points can compromise
the integrity of the entire network.
• Failure domains: One of the reasons to implement an effective network design is
to minimize the extent of problems when they occur. When you do not clearly
define Layer 2 and Layer 3 boundaries, a failure in one network area can have a
far-reaching effect.
A poor network design always has a negative effect that is exacerbated over time. A
poorly designed network can quickly become a support and cost burden for its users.

26.3 Introducing Architectures and Virtualization


Enterprise Three-Tier Hierarchical Network Design
Tiered network design models use a hierarchical design. The key principle of a hierarchical
design is that each element in the hierarchy has a specific set of functions and services
that it offers and a specific role to play in the design, which allows you to choose the right
devices, systems, and features for the layer.
A tiered design brings these benefits:

• A tiered design allows you to better understand the features that may be needed,
where they will be needed, and which devices need them within your final
solution. Knowing which feature goes where helps when choosing the needed
devices.
• A tiered design has stood the test of time, because it can be upgraded as
technology changes and it evolves as needs grow. This adaptability allows a
corporation to continue with a design philosophy and reuse (or repurpose)
equipment, perhaps at a different level, as they upgrade over time.
• A tiered design makes it easy to discuss and learn about a particular part of the
solution.
• The modularity of tiered models is based on designing in layers, each with its own
functionalities and devices. The network can expand by adding additional devices
in different layers and interconnecting them.

The hierarchical three-tier model includes access, distribution, and core layers.

• The access layer provides the physical connection for devices to access the network.
• The distribution layer aggregates traffic from the access layer and is also called the
aggregation layer. As such, it represents a point that most of the traffic traverses. Such a
transitory position is appropriate for applying policies, such as QoS, routing, or security policies.
• The core layer provides fast transport between distribution layer devices and it is
an aggregation point for the rest of the network. All distribution layer devices
connect to the core layer. The core layer provides high-speed packet forwarding
and redundancy.
If you choose a hierarchical tiered architecture, the exact number of tiers that you would
implement in a network depends on the characteristics of the deployment site. For
example, a site that occupies a single building might only require two layers while a larger
campus of multiple buildings will most likely require three layers. In smaller networks,
core and distribution layers are combined and the resulting architecture is called a
collapsed core architecture.
End devices on the LAN communicate with end devices on the same or separate network
segments. If the destination end device is on the same network segment, the request will
get switched directly to the connected host. If the destination end device is in another
segment, the request traverses one or more extra network hops, through the distribution
layer to the core, which introduces latency. The communication of end devices that flows
through other tiers (goes "up and down" the devices) is said to have a "north-south"
nature.
Typically, devices placed in distribution and core layers are required to be more resilient,
have better performance characteristics, and support more features. They are usually
termed high-end or higher-end devices in contrast to low-end devices often found in the
access layer, which provide only basic functions and features.

The three-tier model is usually applied for server and desktop connectivity in a campus.
The model has evolved to include a design for small and midsize environments. For
example, the figure shows a data center that provides dedicated network services. Note
that the figure has the access layer at the top instead of the bottom. Network topologies
may have the layers in different positions. However, what is important is the function of
each layer, not its position in a diagram. The network provides access to all services
available in the data center, such as IP Telephony services, wireless controller services,
and network management. It can also include computing and data storage services,
located within the data center.
The three-tier approach is also used for private and public external connections, for
instance, in an enterprise edge module that includes private WAN and virtual private
network (VPN) connections, and public internet connectivity.
Access Layer
The main purpose of the access layer is to enable end devices to connect to the network
via high-bandwidth links. It attaches endpoints and devices that extend the network, such
as IP phones and wireless APs. The access layer handles different types of traffic, including
voice and video that has different demands on network resources.
The access layer serves several functions, including network access control such as:

• Port security and VLANs


• Access control lists (ACLs)
• DHCP snooping
• Address Resolution Protocol (ARP) inspection
• QoS classification and marking
• Support for multicast delivery, Power over Ethernet (PoE), and auxiliary VLANs for
VoIP
From the device perspective, the access layer is the entry point to the rest of the network
and provides redundant uplinks leading to the distribution layer.
The access layer can be designed with only Layer 2 devices, or it can include Layer 3
routing. When it provides only Layer 2 switching, VLANs expand up to the distribution
layer, where they are terminated. Redundant uplinks between access and distribution
layers are blocked due to the Spanning Tree Protocol (STP) operation, which means that
available links are underutilized. If the access layer introduces Layer 3 functions, VLANs
are terminated on the access layer devices, which participate in routing with distribution
devices. Using higher-end switches in the access layer offers greater control over the
traffic before it enters the distribution and core layers.
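A minimal sketch of several of these access layer functions on a Cisco IOS access switch follows; the VLAN numbers and the port are assumptions.

! Hypothetical values: data VLAN 10, voice VLAN 110, access port Gi1/0/5
ip dhcp snooping
ip dhcp snooping vlan 10
ip arp inspection vlan 10
!
interface GigabitEthernet1/0/5
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 110
 switchport port-security maximum 2
 switchport port-security
 spanning-tree portfast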
Distribution Layer
At sites with two or more access layer devices, it is impractical to interconnect all access
switches. The three-tier architecture model does not interconnect access layer switches.
The distribution layer aggregates the high number of connected ports from the access
layer below into the core layer above. All traffic generated at the access layer that is not
destined to the same access switch traverses a distribution layer device. The distribution
layer facilitates connectivity that needs to traverse the LAN end-to-end, whether between
different access layer devices or from an access layer device to the WAN or internet. The
distribution layer supports many important services.
Because of its centralized position in data flows, the distribution layer is the place where
routing and packet manipulation are performed and can act as a routing boundary
between the access and core layers. The distribution layer performs tasks, such as routing
decision-making and filtering to implement policy-based connectivity and QoS.
For some networks, the distribution layer offers a default route to access layer routers and
runs dynamic routing protocols when communicating with core routers.
The distribution layer uses a combination of Layer 2 switching and Layer 3 routing to
segment the network and isolate network problems, preventing these problems from
affecting the core layer and other access network segments. This segmentation creates
smaller failure domains that compartmentalize network issues.
The network services distribution layer is commonly used to terminate VLANs from access
layer switches, also referred to as Layer 2 boundaries. It is often the first point of routing
in the physical network and a central point for configuration of Layer 3 features, such as
route summarization, DHCP relay, and ACLs.
The distribution layer implements policies regarding QoS, security, traffic loading, and
routing. The distribution layer provides default gateway redundancy by using a First Hop
Redundancy Protocol (FHRP), such as Hot Standby Router Protocol (HSRP), Virtual Router
Redundancy Protocol (VRRP), or Gateway Load Balancing Protocol (GLBP).
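As an illustration of first-hop redundancy at the distribution layer, an HSRP configuration on one of the distribution switches might look like the following sketch; the VLAN, addresses, group number, and priority are assumptions.

! Hypothetical values: VLAN 10, virtual gateway 10.10.10.1, HSRP group 10
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt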
Core Layer
The core layer, also called the backbone, binds together all the elements of the campus
architecture. The core layer provides fast packet transport for all connecting access and
aggregation segments of the entire corporate network (switch blocks) and is the boundary
for the corporation when connecting to the outside world.
The core layer interconnects distribution layer switches. A large LAN environment often
has multiple distribution layer switches. When access layer switches are located in
multiple geographically dispersed buildings, each location has a distribution layer switch.
When the number of access layer switches connecting to a single distribution switch
exceeds the performance limits of the distribution switch, then an extra distribution
switch is required. In a modular and scalable design, you can colocate distribution layers
for the data center, WAN connectivity, or internet edge services.
In environments where multiple distribution layer switches exist in proximity and where
fiber optics provide the ability for high-bandwidth interconnects, a core layer reduces the
network complexity, as the figure shows. Without a core layer, the distribution layer
switches will need to be fully meshed. This design is difficult to scale, and increases the
cabling as well as port requirements. The routing complexity of a full-mesh design
increases as you expand the network.
The main purpose of the core layer is to provide scalability to minimize the risk from
failures while simplifying moves, adds, and changes in the campus. In general, a network
that requires routine configuration changes to the core devices does not yet have the
appropriate degree of design modularization. As the network increases in size or
complexity and changes begin to affect the core devices, it often points out design reasons
for physically separating the core and distribution layer functions into a different physical
device.
The core layer is, in some ways, the simplest yet most critical part of the campus. It
provides a very limited set of services but is redundant and is ideally always online. In the
modern business world, it is becoming ever more vital that the core of the network
operates as a nonstop, always available system. The core should also have sufficient
resources to handle the required data flow capacities of the corporate network.
The key design objectives for the core layer are based on providing the appropriate level
of redundancy to allow for near-immediate data-flow recovery in the event of the failure
of any hardware component. The network design must also permit the occasional, but
necessary, hardware and software upgrades or changes to be made without disrupting
network operation.
The core layer of the network should not implement any complex policy services, nor
should it have any directly attached user or server connections to keep the core of the
network manageable, fast, and secure.

26.4 Introducing Architectures and Virtualization


Spine-Leaf Network Design
The growth in data centers and advances in data center technologies have brought on a
new approach to network design. The new approach is known as a spine and leaf
architecture, or simply spine-leaf. Spine-leaf architecture is a two-tier architecture that
resembles Cisco’s original collapsed core design when using the three-tier approach.
Communication among the servers in the data center adds a considerable load on
networking devices. The data flows of these communications appear horizontal and are
said to be of the "east-west" nature. With virtualized servers, applications are increasingly
deployed in a distributed way, which leads to increased east-west traffic. Such traffic
needs to be handled efficiently, with low and predictable latency.

In spine-leaf two-tier architecture, every lower-tier switch (leaf layer) is connected to each
of the top-tier switches (spine layer) in a full-mesh topology. The leaf layer consists of
access switches that connect to devices such as servers. The spine layer is the backbone of
the network and is responsible for interconnecting all leaf switches. Every leaf switch
connects to every spine switch. Typically a Layer 3 network is established between leaves
and spines, so all the links can be used simultaneously.
The path between leaf and spine switches is randomly chosen so that the traffic load is
evenly distributed among the top-tier switches. If one of the top tier switches were to fail,
it would only slightly degrade performance throughout the data center. If
oversubscription of a link occurs (that is, if more traffic is generated than can be
aggregated on the active link at one time), the process for expanding the network is
straightforward. An extra spine switch can be added, and uplinks can be extended to
every leaf switch, resulting in the addition of interlayer bandwidth and reduction of the
oversubscription. If device port capacity becomes a concern, a new leaf switch can be
added by connecting it to every spine switch.

With a spine-leaf architecture, the traffic between two leaves always crosses the same
number of devices (unless the communicating devices are located on the same leaf). This
approach keeps latency at a predictable level because a payload only has to hop to a spine
switch and another leaf switch to reach its destination.
A spine-leaf approach allows architects to build a network that can expand and collapse
(be more elastic) as needed, meaning that components (servers, switches, and ports) can
be added dynamically as the load of applications grows. This elastic approach suits data
centers that host applications that are distributed across many hosts—with hosts being
dynamically added as the solution grows.
This approach is beneficial for topologies where end devices are relatively close together
and where fast scaling is necessary, such as modern data centers.
A main concern that the spine-leaf model addresses is the addition of new leaf (access)
layer switches and the redundant cross-connections that are needed for a scalable data
center. It has been estimated that a spine-leaf model allows for 25-percent greater
scalability over a three-tier model when used for data center designs.
The spine-leaf design has these additional benefits for a modern data center:

• Increased scale within the spine to create equal-cost multipaths from leaf to spine
• Support for higher-performance switches and higher-speed links (10 Gigabits per second [Gbps], 25 Gbps, 40 Gbps, and 100 Gbps)
• Reduced network congestion by isolating traffic and VLANs on a leaf-by-leaf basis
• Optimization and control of east-west traffic flows
26.5 Introducing Architectures and Virtualization
Cisco Enterprise Architecture Model
The Cisco Enterprise Architecture model recognizes several functional areas of a network
and provides a network module to support these functions. Failures within a module are
isolated from the rest of the network. Changes and upgrades can be applied to particular
modules and implemented in a controlled manner. Network services, such as security and
QoS, are also implemented on a modular basis.

The following modules make up the Cisco Enterprise Architecture:

• Enterprise Campus: A campus network spans a fixed geographic area. It consists of
a building or a group of buildings connected into one network, which consists of
many network segments. An example of a campus network is a university campus,
or an industrial complex. The enterprise campus module follows the three-tier
architecture with access, distribution, and core tiers, but it includes network
services, normally inside a data center submodule. The data center submodule
centralizes server resources that provide services to internal users, such as
application, file, email, and Domain Name System (DNS) servers. It typically
supports network management services for the enterprise, including monitoring,
logging, and troubleshooting. Inside the data center submodule, the architecture is
spine-leaf.
• Enterprise Edge: The enterprise edge module provides the connectivity outside the
enterprise. This module often functions as an intermediary between the enterprise
campus module, to which it connects via its core, and other modules. It can
contain submodules that provide internet connectivity to one or more ISPs,
termination for remote access and site-to-site VPN, WAN connectivity via
purchased WAN services (Multiprotocol Label Switching [MPLS], Metro Ethernet,
Synchronous Optical Network [SONET], and so on).
• Service Provider Edge: A module that provides connectivity between the
enterprise main site and its remote locations. This module’s functions and features
are determined by the service agreements between the enterprise and the
providers.
• Remote Locations: A module that represents geographically distant parts of the
enterprise network, such as branch offices, a teleworker’s network, or a remote
data center.

26.6 Introducing Architectures and Virtualization


Cloud Computing Overview
In its Special Publication 800-145, the National Institute of Standards and Technology
(NIST) defines cloud computing as "a model for enabling ubiquitous, convenient, on-
demand network access to a shared pool of configurable computing resources (for
example, networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service provider
interaction." This definition along with the description of cloud computing characteristics,
service models, and deployment mode was given in 2011. Although it might not be
comprehensive enough to accommodate all cloud computing developments of today, NIST
terminology is still accurate and in use by most cloud computing providers.

Clouds have become common in modern internet-driven technologies. Many popular internet services function as a cloud. Some are intended for individuals—for instance,
Gmail and Yahoo mail—providing an email service, or Microsoft OneDrive, Google Drive,
Adobe Creative Cloud, or Dropbox providing document sharing, storage, or media editing
services. Other cloud products address business and organizational needs, such as
Amazon Web Services, Google Cloud, Salesforce, Oracle Cloud, and Alibaba.
Enterprise networks often include data centers. Data centers are part of IT infrastructure,
where computing resources are centralized and virtualized to support applications and
access to those applications, while ensuring high availability and resiliency. Data centers
also provide support for a mobile workforce, varying access device types, and a high-
volume, data-driven business. When a data center is located at the enterprise premises, it
is called an on-premises data center.
Clouds are in essence large-scale data centers, or an interconnection of data centers,
which have all or some of the following characteristics, as described by NIST:

• On-demand self-service: Cloud computing capabilities, such as server computing time and network storage, are activated as needed without requiring human
interaction with each cloud provider.
• Broad network access: Clouds are accessible to the users (businesses and
individuals) remotely via some type of network connectivity, through the internet
or cloud-dedicated WAN networks. The cloud can be accessed by using a variety of
client platforms, such as mobile phones, tablets, laptops, and workstations.
• Resource pooling: Clouds serve multiple customers with different requirements.
Users of different customers are isolated. Users generally have no control or
knowledge over the exact location of the provided resources. The cloud resources
appear centralized to the user. Users can move from one device to another, or one
location to another, and always experience a familiar environment. Backups and
data management are centralized, so users and IT staff no longer need to be
concerned about backing up data on individual computers.
• Rapid elasticity: Customers can scale (in other words add or release) resources on
their own. Resources can be allocated to meet the realistic requirements, avoiding
overprovisioning. Optimization of resource allocation usually results in reducing
costs.
• Measured service: Clouds use metering (or measuring) to monitor and control
resource usage. Different usage elements can be measured, for instance, storage
capacity, processing, bandwidth, or numbers of concurrent users, therefore
providing a basis for billing. Clouds also provide reporting, which can be used to
control and optimize resource use and costs.
For an enterprise, outsourcing computing resources to a cloud provider can be a solution
in these cases:

• For an enterprise that may not have the in-house expertise to effectively manage
their current and future IT infrastructure, especially if cloud services primarily
involve basic elements such as email, DHCP, DNS, document processing, and
collaboration tools.
• For large enterprises and government or public organizations, where resources are
shared by many users or organizational units.
• For enterprises in which computing resource needs might increase on an ad hoc
basis and for a short term. This usage scenario is sometimes called cloud bursting.
When computing requirements increase, the cloud resources are coupled with on-
premises resources only while required.
• For enterprises that decide to outsource only part of their resources. For example,
an enterprise might outsource their web front-end infrastructure, while keeping
other resources on-premises, such as application and database services.
There are also situations in which cloud outsourcing would not be possible. Regulations
might dictate that an enterprise fully own and manage their infrastructure. For enterprises
running business applications that have strict response-time requirements, cloud
outsourcing might not be the appropriate solution.

Cloud deployment models describe the cloud ownership and control of data in the cloud.
The four cloud deployment models distinguished by NIST are as follows:

• Public clouds: Public clouds are open to use by the general public and managed by
a dedicated cloud service provider. The cloud infrastructure exists on the premises
of the cloud provider and is external to the customers (businesses and individuals).
The cloud provider owns and controls the infrastructure and data. Outsourcing
resources to a public cloud provider means that you have little or no control over
upgrades, security fixes, updates, feature additions, or how the cloud provider
implements technologies.
• Private cloud: The main characteristic of a private cloud is the lack of public access.
Users of private clouds are particular organizations or groups of users. A private
cloud infrastructure is owned, managed, and operated by a third party, or the user
itself. An enterprise might own a cloud data center and IT departments might
manage and operate it, which allows the user to enjoy advantages that a cloud
provides, such as resiliency, scalability, easier workload distribution, while
maintaining control over corporate data, security and performance.
• Community cloud: The community cloud is an infrastructure intended for users
from specific organizations that have common business-specific objectives or work
on joint projects and have the same requirements for security, privacy,
performance, compliance, and so on. Community clouds are "dedicated," in other
words, they are provisioned according to the community requirements. They can
be considered halfway between a public and private cloud—they have a
multitenant infrastructure, but are not open for public use. A community cloud can
be managed internally or by a third party and it may exist on or off premises. An
example of a community cloud is Worldwide LHC Computing Grid, a European
Organization for Nuclear Research global computing resource to store, distribute,
and analyze the data of operations from the Large Hadron Collider (LHC).
• Hybrid cloud: A hybrid cloud is the cloud infrastructure that is a composition of
two or more distinct cloud infrastructures, such as private, community, or public
cloud infrastructures. This deployment takes advantage of security provided in
private clouds and scalability of the public clouds. Some organizations outsource
certain IT functions to a public cloud but prefer to keep higher-risk or more
tailored functions in a private cloud or even in-house. An example of hybrid
deployment would be using public clouds for archiving of older data, while keeping
the current data in the private cloud. The user retains control over how resources
are distributed. For hybrid solutions to provide data protection, great care must be
taken that sensitive data is not exposed to the public.
Clouds are large data centers, whose computing resources, in other words storage,
processing, memory, and network bandwidth, are shared among many users. Computing
resources of a cloud are offered as a service, rather than a product. Clouds can offer
anything a computer can offer, from processing capabilities to operating system and
applications, therefore cloud services vary considerably. Service models define which
services are included in the cloud.

NIST has defined three service models which differ in the extent that the IT infrastructure
is provided by the cloud. The following three NIST-defined service models also define the
responsibilities for management of the equipment and the software between the service
provider and the customer.

• Infrastructure as a Service (IaaS) clouds offer pure computing, storage, and
network resources. For instance, you can purchase a certain number of virtual
machines. Software components in an IaaS cloud are added by the customer and
resources for the IaaS cloud, such as memory, disk space, and central processing
unit (CPU) speed. Any changes to the infrastructure, such as adding or removing
resources, are your responsibility and not the provider’s. The IaaS model offers
customers the greatest control. Examples of IaaS clouds are Amazon Elastic
Computing Cloud (EC2), Microsoft Azure Virtual Machines, Google Compute
Engine, Oracle Compute Cloud Service, IBM Cloud Virtual Servers, and others.
• Platform as a Service (PaaS) model offers a software platform with everything
required to support the complete life cycle of building and delivering applications.
Users can build, debug, and deploy their own applications and host them on the
PaaS provider's infrastructure. However, the provider decides which programming
languages, libraries, services, and tools to support. When hosting the applications
in the PaaS cloud, the scalability of the hardware and software is the responsibility
of the provider. Examples of PaaS offerings are Google App Engine, Salesforce,
Heroku, Oracle Cloud Platform, and others.
• Software as a Service (SaaS) is also called a hosted software model, and includes
ready-to-use applications or software with all the infrastructure elements required to run
them, such as the operating system, database, and network. The
SaaS provider is responsible for managing the software and performs installation,
maintenance, upgrading, and patching. Users access the applications from various
client devices through either a thin client (software that does not use many
resources on the client but uses the resources of a server, for example web-based
email), or through a program interface. The customer does not manage or control
the underlying cloud infrastructure. Examples of SaaS cloud services are Cisco
Webex, Salesforce, Microsoft 365, Adobe Creative Cloud, and others.
Today, other service models exist. Anything as a service (XaaS) is a concept that
emphasizes that the cloud offer can include any computing service. Examples of new
service models are serverless clouds, which provide computing resources to execute only
a specific function developed by the customer. Serverless clouds are also known as
Function as a Service (FaaS). Other emerging models include: Database as a Service
(DBaaS), Desktop as a Service (DaaS), Business Process as a Service (BPaaS), Network as a
Service (NaaS), and so on. Examples of XaaS services are Cisco DaaS, Microsoft Azure SQL
Database, Amazon Relational Database Service, Google Cloud Functions, and others.
26.7 Introducing Architectures and Virtualization
Network Device Architecture
Network devices implement processes that can be broken down into three functional
planes: the data plane, control plane, and management plane. Under normal network
operating conditions, the network traffic consists mostly of data plane transit packets.
Network devices are optimized to handle these packets efficiently. Typically, there are
considerably fewer control and management plane packets.

The data plane: The primary purpose of routers and switches is to forward packets and
frames through the device onward to final destinations. The data plane, also called the
forwarding plane, is responsible for the high-speed forwarding of data through a network
device. Its logic is kept simple so that it can be implemented by hardware to achieve fast
packet forwarding. The forwarding engine processes the arrived packet and then forwards
it out of the device. Data plane forwarding is very fast because it is performed in hardware. To
achieve efficient forwarding, routers and switches create and utilize data structures,
usually called tables, which facilitate the forwarding process. The control plane dictates
the creation of these data structures. Examples of data plane structures are Content
Addressable Memory (CAM) table, Ternary CAM (TCAM) table, Forwarding Information
Base (FIB) table, and Adjacency table.
Cisco routers and switches also offer many features to secure the data plane. Almost
every network device has the ability to utilize ACLs, which are processed in hardware, to
limit allowed traffic to only well known and desirable traffic.
Note: Data plane forwarding is implemented in specialized hardware. The actual
implementation depends on the switching platform. High-speed forwarding hardware
implementations can be based on application-specific integrated circuits (ASICs), field-
programmable gate arrays (FPGAs), or specialized network processors. Each of the
hardware solutions is designed to perform a particular operation in a highly efficient way.
Operations performed by ASIC may vary from compression and decompression of data, or
computing and verifying checksums to filter or forward frames based on their MAC
address.
The control plane consists of protocols and processes that communicate between
network devices to determine how data is to be forwarded. When packets that require
control plane processing arrive at the device, the data plane forwards them to the device’s
processor, where the control plane processes them.
In cases of Layer 3 devices, the control plane sets up the forwarding information based on
the information from routing protocols. The control plane is responsible for building the
routing table or Routing Information Base (RIB). The RIB in turn determines the content of
the forwarding tables, such as the FIB and the adjacency table, used by the data plane. In
Layer 2 devices, the control plane processes information from Layer 2 control protocols,
such as STP and Cisco Discovery Protocol, and processes Layer 2 keepalives. It also
processes information from incoming frames (such as the source MAC address to fill in the
MAC address table).
When high packet rates overload the control or management plane (or both), device
processor resources can be overwhelmed, reducing the availability of these resources for
tasks that are critical to the operation and maintenance of the network. Cisco networking
devices support features that facilitate control of traffic that is sent to the device
processor to prevent the processor itself from being overwhelmed and affecting system
performance.
The control plane processes the traffic that is directly or indirectly destined to the device
itself. Control plane packets are handled directly by the device processor, which is why
control plane traffic is called process switched traffic.
There are generally two types of process switched traffic. The first type of traffic is
directed, or addressed, to the device itself and must be handled directly by the device
processor. An example would be a routing protocol data exchange. The second type of
traffic that is handled by the CPU is data plane traffic with a destination beyond the device
itself, but which requires special processing by the device processor. One example of such
traffic is IPv4 packets whose Time to Live (TTL) value, or IPv6 packets whose Hop Limit
value, is less than or equal to 1. Such packets require Internet Control Message
Protocol (ICMP) Time Exceeded messages to be sent, which results in CPU processing.
The management plane consists of functions that achieve the management goals of the
network, which include interactive configuration sessions, and statistics gathering and
monitoring. The management plane performs management functions for a network and
coordinates functions among all the planes (data, control, and management). In addition,
the management plane is used to manage a device through its connection to the network.
The management plane is associated with traffic related to the management of the
network or the device. From the device point of view, management traffic can be destined
to the device itself or intended for other devices. The management plane encompasses
applications and protocols such as Secure Shell (SSH), Simple Network Management
Protocol (SNMP), HTTP, HTTPS, Network Time Protocol (NTP), TFTP, FTP, and others that
are used to manage the device and the network.
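On a Cisco IOS device, these control plane and data plane structures can be inspected with standard show commands, for example:

• show ip route displays the RIB, which is built by the control plane.
• show ip cef displays the FIB used by the data plane.
• show adjacency displays the adjacency table with the Layer 2 rewrite information.
• show mac address-table displays the CAM table on a Layer 2 switch.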

From the perspective of a network device, there are three general types of packets as
related to the functional planes:

• Transit packets and frames include packets and frames that are subjected to
standard, destination IP, and MAC-based forwarding functions. In most networks
and under normal operating conditions, transit packets are typically forwarded
with minimal CPU involvement or within specialized high-speed forwarding
hardware.
• Receive or for-us packets include control plane and management plane packets
that are destined for the network device itself. Receive packets must be handled
by the CPU within the device processor, because they are ultimately destined for
and handled by applications running at the process level within the device
operating system.
• Exception IP and non-IP information include IP packets that differ from standard IP
packets, such as IPv4 packets containing the Options field in the IPv4 header, IPv4
packets with a TTL that expires, and IPv4 packets with unreachable destinations.
Examples of non-IP packets are Layer 2 keepalives, ARP frames, and Cisco
Discovery Protocol frames. All packets and frames in this set must be handled by
the device processor.
In traditional networking, the control and data planes exist within one device. With the
introduction of software-defined networking (SDN), the management and control planes
are abstracted into a control layer, typically a centralized solution in the form of a specialized
network controller, which uses software orchestration to provide
network management and control functions. Infrastructure layer devices, such as switches
and routers, focus on forwarding data. The application layer consists of SDN applications,
which communicate network requirements towards the controller.

26.8 Introducing Architectures and Virtualization


Virtualization Fundamentals
Virtualization is a technology that transforms a hardware element into a software element
that emulates the behavior of the hardware. Virtualization is not new in computing. The
first work began as early as the 1960s. Today, virtualization is applied to various elements
of IT infrastructure, from end device hardware to storage and entire networks.
Virtualization is commonly used on servers. A virtual system is also referred to as a virtual
machine (VM). A VM is an emulation of a computer system. To create and run a VM on a
physical machine, you use specialized virtualization software. Virtualization software runs
on the physical hardware and emulates hardware elements that are required by the VM.
The VM operates like it has its own physical hardware and installs its own operating
system and other software.
Like a physical PC or a server, a VM also has hardware specifications. The difference is that
in the VM environment, some hardware specifications, such as memory and CPU capacity,
can vary according to the physical resources that are available. Other hardware
specifications do not vary, such as network interface cards (NICs) and disk controllers.
Note: When you create a VM, you in fact create a set of files. Some of the information
stored in these files include VM settings, logs, resource descriptors, the running state of a
VM (stored in a snapshot state file), and others.
Prior to virtualization, data centers and server farms consisted of multiple, clustered
physical servers that provided necessary redundancy for stable operation of applications.
However, they were often underutilized and it was difficult or impossible to redistribute
unused resources. Underutilization directly impacts both operational and capital
expenditure by increasing the number of servers that are required. Each extra server
requires additional physical space, power, and cooling systems. As the number of servers
grows, management challenges also increase.
The figure shows how three physical systems that are running different operating systems
can be consolidated on a single physical device that is split into three VMs, to better
manage resources.
Depending on the average load of existing deployments, it is not unusual to be able to put
three or more servers onto a single piece of hardware. In data center environments, the
ratio of virtual servers to physical servers can be in the range of hundreds.
The virtualization software is known as a hypervisor. The hypervisor divides (partitions)
physical hardware resources in software and allocates them to create multiple VM
instances. The hypervisor abstracts (isolates) operating systems and applications from the
underlying computer hardware. This abstraction allows the underlying physical machine,
also called the host machine, to independently operate one or more VMs as guest
machines.
The figure illustrates how the hypervisor isolates virtual hardware resources from
underlying physical hardware, for each VM.

A VM runs its own operating system and applications. The applications are not aware that
they are running in a virtualized environment.
A hypervisor has these tasks:

• Providing an operating platform to VMs, providing unified and consistent access to the host machine CPU, memory, network, and input and output units.
• Managing the execution of the guest operating system.
• Providing connectivity between VMs, and between the VMs and external network
resources.
There are different virtualization implementations in use today, which differ in how the
guest operating system, hypervisor, and hardware communicate. The most common
implementation is full virtualization.
Full virtualization provides a complete emulation of the hardware environment. VM
operating systems are completely unaware that they are running in a virtual environment.
There are two types of full virtualizations:

• The hypervisor is running directly on the physical server hardware. This is also
called native, bare-metal, or Type-1 hypervisors.
• The hypervisor runs on a host operating system (in other words the operating
system of the physical device). This is also called a hosted or Type-2 hypervisor.
The figure illustrates types of full virtualization.

Note: Other virtualization types are partial virtualization and paravirtualization. In partial
virtualization, the guest operating system is aware of the physical hardware that the
hypervisor is running on and adjusts so that the communication is easier to translate for
the hypervisor, reducing overhead. In paravirtualization, the guest operating system is
aware of the hypervisor communication requirements and translates complex calls that
cause most of the overhead into the hypervisor-optimized calls or initiates special features
of the hypervisor.
Examples of hypervisor software are VMware ESXi and VMware Workstation, Microsoft
Hyper-V and Microsoft Virtual PC, Citrix XenServer, Oracle VM and Oracle VM Virtual Box,
Red Hat Enterprise Virtualization, and others.
VMs offer several benefits over physical devices.

• Partitioning:
o VMs allow for a more efficient use of resources, because a single physical
device can serve many VMs, which can be rearranged across different
servers, according to load.
o A hypervisor divides host system resources between VMs and allows VM
provisioning and management.
• Isolation:
o VMs in a virtualized environment have as much security as is present in
traditional physical server environments because VMs are unaware of the
presence of other VMs.
o VMs that share the same host are completely isolated from each other, but
can communicate over the network.
o Recovery in cases of failure is much faster with VMs than with physical
servers. Failure of a critical hardware component, such as a motherboard or
power supply, can bring down all the VMs that reside on the affected host.
Affected VMs can be easily and automatically migrated to other hosts in
the virtual infrastructure, providing for shorter downtime.
• Encapsulation:
o VMs reside in a set of files that describe them and define their resource
usage and unique identifiers.
o VMs are extremely simple to back up, modify, or even duplicate in a
number of ways.
o This encapsulation can be deployed in environments that require multiple
instances of the same VM, such as classrooms.
• Hardware abstraction:
o Any VM can be provisioned or migrated to any other physical server that
has similar characteristics.
o Support is provided for multiple operating systems: Windows, Linux, and so
on.
o Broader support for hardware, since the VM is not reliant on drivers for
physical hardware.
An issue with hosting multiple VMs on physical servers is that the physical server
represents a single point of failure for all guest machines and services running on them.
Also, if maintenance of the physical server requires machine shutdown, it shuts down all
the software components on it. Since VMs exist as files, the migration of an entire VM,
with its operating system and applications, is a matter of copying a file to another physical
machine. Once files are copied, the VM can be started and resume its operation on the
new physical host.
The mobility of VMs is an advantage of virtualized environment and is beneficial for these
reasons:
• Optimum performance: If a VM on a given host starts exceeding the resources of
the host, it can be moved to another host that has sufficient resources.
• Maintenance: If there is a need to perform maintenance or upgrade a host, the
VMs from that host can be temporarily redistributed to other hosts. After the
maintenance is complete, the process can be reversed, resulting in no downtime
for users.
• Resource optimization: If the resource usage of one or more VMs decreases, one
or more hosts may no longer be needed. In this case, the VMs can be redistributed
and the hosts that are emptied can be powered off to reduce cooling and power
requirements.
Virtualization is not limited to servers but also extends to other infrastructure
components, including networks. Virtualized servers communicate among themselves and
with the external resources. Virtualization affects networking requirements because
communications of multiple VMs are multiplexed onto the same physical network
connections provided by the host machine. Networking functions such as NIC cards,
firewalls, and switches can also be virtualized and moved to reside inside a host machine.
A virtual switch emulates a Layer 2 switch. It runs as part of a hypervisor and provides
network connectivity for all VMs. When connected to a virtual switch, VMs behave as if
they are connected to a normal network switch.
The figure shows four VMs that are connected to the same virtual switch that has access
to the outside network.

Containers
Containers are similar to VMs in many ways, but also different. Just as with VMs,
containers are instances that run on a host (bare metal or virtual) machine. Like VMs, they
can be customized and built to whatever specification is desired, and can be used the
same way that a VM is used, allowing isolated processes, networking, users, and so on.
Containers differ from VMs in that a guest operating system is not installed. Rather, when
application code is run, the container only runs the necessary processes that support the
application. This is because containers are made possible using kernel features of the host
operating system and a layered file system instead of the emulation layer required to run
VMs. This also means that containers do not consist of full operating systems with
installed applications; instead, they include only the user-space components that
distinguish different Linux vendor versions and variants.
Furthermore, because a container does not require its own operating system, it uses fewer
resources, consuming only the resources required by the application that runs when the
container starts. Applications can therefore consist of smaller containerized components
(the binaries and libraries required by the applications) instead of legacy monolithic
applications installed on a virtual or bare-metal system.
Containers are similar to VMs in that they are also stored as images, although a big
difference is that container images are much smaller and more portable than VM images,
because they do not require an operating system installation as part of the image. This
makes it possible to have a packaged, ready-to-use application that runs the same
regardless of where it is deployed, as long as the host system supports containers (Linux
containers specifically).

A number of container technologies are available, with Linux leading the charge. One of
the more popular platforms is Docker, which is based on Linux container technology
(libcontainer). More precisely, Docker is a management system that is used to create, manage,
and monitor Linux containers. Automation tools such as Ansible, favored by Red Hat, are also
commonly used to manage container deployments.
Virtualization of Networking Functions
Networking functions can also be virtualized with networking devices acting as hosts. The
virtualization main principle remains the same: one physical device can be segmented into
several devices that function independently. Examples include subinterfaces and virtual
interfaces, Layer 2 VLANs, Layer 3 virtual routing and forwarding (VRF), and virtual
device contexts.
Network device interfaces can be logically divided into subinterfaces, which are created
without special virtualization software. Rather, subinterfaces are a configuration feature
supported by the network device operating system. Subinterfaces are used when
providing router-on-a-stick inter-VLAN routing, but there are other use cases also.
VLANs are a virtual element mostly related to Layer 2 switches. VLANs divide a Layer 2
switch into multiple virtual switches, one for each VLAN, effectively creating separate
network segments. Traffic from one VLAN is isolated from the traffic of another VLAN.
A switch virtual interface (SVI) is another virtualization element in Layer 2 devices. It is a
virtual interface that can have multiple physical ports associated with it. In a way, it acts as
a virtual switch in a virtualized machine. Again, to create VLANs and SVIs you only need to
configure them using features included in the device operating system.

To provide logical Layer 3 separation within a Layer 3 device, the data plane and control
plane functions of the device must be segmented into different VRF contexts. This process
is similar to the way that a Layer 2 switch separates the Layer 2 control and data planes
into different VLANs.
With VRFs, routing and related forwarding information is separated from other VRFs. Each
VRF is isolated from other VRFs. Each VRF contains a separate address space, and makes
routing decisions that are independent of any other VRF Layer 3 interfaces, logical or
physical.
27.1 Explaining the Evolution of Intelligent Networks
Introduction
Since the beginning of computer networking, network configuration practices have
centered on a device-by-device manual configuration methodology. In the early years, this
did not pose much of a problem, but more recently this method for configuring the
hundreds if not thousands of devices on a network has been a stumbling block for
efficient and speedy application service delivery. As the scale increases, changes that are
implemented by humans have a higher chance of misconfiguration, whether from simple
typos, applying a change to the wrong device, or missing a device altogether.
Performing repetitive tasks that demand a
high degree of consistency unfortunately always introduces a risk for error. And the
number of changes humans are making is increasing as there are more demands from the
business to deploy more applications at a faster rate than ever before.
The solution lies in automation. The economic forces of automation are manifested in the
network domain via network programmability and software-defined networking (SDN)
concepts. Network programmability helps reduce operational expenses (OPEX), which
represents a very significant portion of the overall network costs, and speeds up service
delivery by automating tasks that are typically done via CLI. The CLI is simply not the
optimal approach in large-scale automation.
Automation tools for network configuration have existed in the past, but they often suffer
from complexities that make deployment difficult. Taking responsibility away from individual
devices and driving dynamic changes from a central location is desirable, and it is a task well
suited to software that implements network programmability applications, which can scale
from automating just a couple of devices to an entire enterprise network
architecture. This solution takes into account the application and user demand and applies
configuration to the connected networking devices within the enterprise campus and
beyond.
As a networking engineer, you need to prepare yourself for the evolution of network
management, which includes developing skills in different areas:

• Network programmability in enterprise networks, including an overview of a model-driven programmability stack
• SDN concepts and Cisco SDN Enterprise Solutions (Cisco Digital Network
Architecture [DNA] Center, Software-Defined Access [SD-Access], and Software-
Defined WAN [SD-WAN])
• Configuration Management Tools like Ansible, Chef, and Puppet.

27.2 Explaining the Evolution of Intelligent Networks


Overview of Network Programmability in Enterprise Networks
Current broad industry trends have influenced more specific networking trends that are
redefining the way network engineers build and manage modern infrastructures. This has
created a need to develop new skills to deploy and manage networks, and
therefore programmability options are being built into networking devices.
An important concept to understand is that network programmability seeks to decrease
human-to-machine interaction in order to fulfill the goals of automation. There are
various threads that lead to the idea of automation with network programming, including
SDN, IT development and operations, agile development, and the demands of large-scale
service delivery. Network programmability speeds up service delivery by automating tasks
that are typically done via CLI.
Current Industry Trends
In today’s world, the emergence of artificial intelligence, the Internet of Things (IoT), the
cloud, ever-expanding amounts of data, and increasingly complex cybersecurity threats
are changing the technology landscape at breakneck speed. Applications and services are
moving everywhere in the enterprise—from the remote edge to branch offices, to
headquarters, and all the way to data centers in public-, private-, and hybrid-cloud
environments.
Specific networking industry trends include:

• DevOps: DevOps is a methodology that strives to develop and promote methods to
drive speed and agility in the deployment, maintenance, and continual
improvement of systems and infrastructure. This cultural trend is driving better
configuration and automation tools for network engineers. DevOps encourages a different
type of information flow, such as working in the open, and DevOps managers tend to
avoid a top-down style of communication among teams. Ultimately, engineers should be
comfortable asking and explaining
why a task is being done. Examples of tools that are used by a DevOps culture to
enable a robust deployment pipeline include various Linux operating systems,
several programming languages (such as Python, Go, and Ruby), configuration
management mechanisms (such as SaltStack, Ansible, Chef, and Puppet),
continuous integration build servers (such as Travis CI and Jenkins), and version
control using Git.
• Programmable Infrastructure: An important concept to understand is that
network programmability seeks to decrease human-to-machine interaction in
order to fulfill the goals of SDN and the DevOps movement. Therefore, network
programmability is a tangible aspect of SDN in production today. Automating and
scripting network tasks is the hallmark of a programmable infrastructure. In this
section, you will be investigating two forms of network programmability—"on-
box" and "off-box." On-box programming refers to scripting mechanisms such as
the Tool Command Language (TCL) and Embedded Event Manager (EEM), which
are both prebuilt into the network operating system (NOS) of various Cisco
platforms. Several platforms expose a native Linux interface and offer access to a
Python execution engine used to extend on-box programmability. On-box
mechanisms are normally specific to the platform itself. Off-box programming
refers to scripting mechanisms that exist outside a network device. It can be in the
form of an external controller or some external server that often communicates to
the network device using robust and modern application programming interfaces
(APIs). Examples of these APIs include Network Configuration Protocol (NETCONF),
Representational State Transfer (REST), and Representational State Transfer
Configuration Protocol (RESTCONF).
• Open source software (OSS): Open source generally refers to a community-driven
model of developing and maintaining software to increase flexibility and
customizability, while lowering the capital expense required. Therefore, OSS
development has the ability to outpace many commercial products. Closely related
to the goals of software-defined networking and the DevOps movement, open
networking seeks to improve networking by implementing an open source
foundation to the NOS and a community-driven model for continuous
improvement. The term "open" can refer to multiple concepts, such as open
source software, supporting open APIs, and supporting open protocol standards.
Therefore, there can be very divergent means of creating "openness" in order to
serve various purposes. A typical example is usage of Linux software, where many
network devices are now entirely Linux-based, which means that Linux is almost
always a part of the development of off-box methods for network
programmability.
• Software-defined networking: SDN refers to the set of techniques that are used to
manage and change a network behavior through an open interface rather than
closed-box methods. SDN seeks to program network devices either through a
controller or some other external mechanism. It allows the network to be
managed as a whole and increases the ability to configure the network in a more
deterministic and predictable manner. Several common themes in this trend are
disaggregation of a network device’s control and data planes, virtualization of
network functionality, policy-centric networking, and movement toward open
protocols.
• Intent Based Networking: A trend in the networking industry is to focus on
business goals and applications. Intent-based networking (IBN) transforms a
hardware-centric, manual network into a controller-led network that captures
business intent and translates it into policies that can be automated and applied
consistently across the network. The goal is for the network to continuously
monitor and adjust network performance to help assure desired business
outcomes. IBN builds on SDN principles, transforming from a hardware-centric and
manual approach to designing and operating networks that are software-centric
and fully automated and that add context, learning, and assurance capabilities. IBN
captures business intent and uses analytics, machine learning, and automation to
align the network continuously and dynamically to changing business needs. That
means continuously applying and assuring application performance requirements
and automating user, security, compliance, and IT operations policies across the
whole network.
Overview of Network Operations in an Enterprise Network
Current network operations are based on human interaction with network devices, which
is error-prone and less efficient when you need to scale the network. Therefore,
programmability options are being built into networking devices to build and manage
modern networks and minimize human-to-machine interaction.
Current network operations:

• CLI was built for manual human interaction.


• Configuration is one device at a time.
• Copying and pasting are the standard.
• Configuration is prone to error.
• Tasks are not easily repeatable.
• Notepad is the most common text editor.
Future network operations:

• Programmability tools will be used to automate.


• Version control will be used for all configurations, making changes easy to track.
• Automated systems will perform testing (system, style, reachability, and so on)
before any change is made to the configuration.
Network operations currently are based on human interaction with network devices.
Network engineers commonly use Notepad, one of the most common text editors, for
templating by hand to configure one device at a time. This approach is prone to error and
does not scale well, for the following reasons:

• The CLI was designed for human interaction, limiting the speed of configuration to
as fast as a person can work. While the CLI will continue to play an integral role in
troubleshooting and operations, it is error-prone and inefficient.
• Manual configuration and common copying and pasting methods are extremely
prone to error, especially when configuring multiple devices.
• Tasks are not easily repeatable, resulting in inefficient workflows.
• Unstructured text data used in the CLI requires postprocessing (screen scraping) to
transcode to machine-friendly formatting. The CLI does not return error or exit
codes on which the operator can act programmatically.
Using tools that are common in software development, network engineers can adopt more
efficient workflows, such as using version control systems to store network
configurations. This way, configurations are versioned and tracked and can serve as the
"single source of truth." In addition, any change that is accepted can be fully tested with
automated tooling to ensure that it is valid before it is deployed.
Uses of Network Automation
The following use cases illustrate the value of network programmability and suggest
possibilities for automation solutions.
Network automation is used for many common tasks, such as the following:
• Device provisioning: Device provisioning is likely one of the first things that comes
to an engineer's mind when they think about network automation. Device
provisioning is simply configuring network devices more efficiently, faster, and
with fewer errors, because automation decreases human interaction with each
network device. Automated processes also streamline the replacement of
faulty equipment. (A brief sketch follows this list.)
• Device software management: Controlling the download and deployment of
software updates is a relatively simple task, but it can be time-consuming and
prone to error. Many automated tools have been created to address this issue, but
they can lag behind customer requirements. A simple network programmability
solution for device software management is beneficial in many environments.
• Data collection and telemetry: A common part of effectively maintaining a
network is collecting data from network devices, including telemetry on network
behavior. The way that data is collected is changing as many devices, such as Cisco
IOS-XE devices, can push data (and stream) off-box in real time in contrast to being
polled every few minutes.
• Compliance checks: Network automation methods allow the unique ability to
quickly audit large groups of network devices for configuration errors and
automatically make the appropriate corrections with built-in regression tests.
• Reporting: Automation decreases the manual effort that is needed to extract
information and coordinate data from disparate information sources in order to
create meaningful and human-readable reports.
• Troubleshooting: Network automation makes troubleshooting easier by making
configuration analysis and real-time error checking very fast and simple, even with
many network devices.
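As a brief, hypothetical sketch of the device-provisioning use case above, the following Python snippet renders the same configuration for several devices from a single template; the hostnames and addresses are invented for illustration:

# Minimal sketch: rendering per-device configurations from one template.
# The device names and addresses below are hypothetical examples.
TEMPLATE = """hostname {hostname}
interface Loopback0
 ip address {loopback} 255.255.255.255
"""

devices = [
    {"hostname": "branch-rtr-1", "loopback": "10.0.0.1"},
    {"hostname": "branch-rtr-2", "loopback": "10.0.0.2"},
]

for device in devices:
    config = TEMPLATE.format(**device)
    # In practice, the rendered text would be pushed through an API or a tool
    # such as Ansible; here it is simply printed.
    print(config)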
Network Programmability Technology
Network programmability is so much more than having programmatic interfaces on
network devices.
There are many technologies that are used when introducing network programmability
and automation into a given environment:

• Linux: The foundation of everything begins with Linux. From version control to
programming languages and configuration management, tools such as Ansible and
Puppet almost always run on Linux operating systems.
• Device and controller APIs: The API is the mechanism by which an end user makes
a request of a network device and the network device responds to the end user.
This method provides increased functionality and scalability over traditional
network management methods, and is how modern tools interact with network
devices.
• Version control: All network configuration information should be versioned. Using
a platform such as Git makes it easier to share and collaborate on projects
involving anything from code to configuration files. You can use many different
tools to accomplish automated testing in an environment where version control is
used to manage configuration files.
• Software development: While not every network programmability engineer will
be an expert programmer, understanding software development processes is
critical to understanding how software development can be used to extend or
customize open source tools.
• Automated testing: A key area of network programmability and software
development is automated testing. Deploying proper testing, such as pre- and
post-change checks on the network, in an automated way improves the use of network
resources. Network administrators should use tests that run automatically under
defined conditions, or whenever a change is being proposed. (A brief sketch follows this list.)
• Continuous integration (CI): CI tools are used commonly by developers, and can
drastically improve the release cycle of software and network configuration
changes. Deploying CI tools and pipelines can help with execution of your tests so
that they run when changes are being proposed (using version control tools).
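The brief sketch below, referenced in the automated testing item above, shows the idea of a pre/post-change check in Python: it compares structured interface state captured before and after a change and flags anything that went down. The sample data is invented for illustration:

# Minimal sketch of a pre/post-change check on structured (JSON-like) data.
# The "before" and "after" snapshots are invented sample data.
before = {"Gig0/0": "up", "Gig0/1": "up", "Gig0/2": "down"}
after = {"Gig0/0": "up", "Gig0/1": "down", "Gig0/2": "down"}

regressions = [intf for intf, state in after.items()
               if state == "down" and before.get(intf) == "up"]

if regressions:
    print("Change validation failed; interfaces went down:", regressions)
else:
    print("Change validation passed.")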

Network Programmability Options


There are different network programmability options available today, as shown in the
figure.
The left side of the figure shows how network management applications and
monitoring tools access the device today, using the CLI, Simple Network Management
Protocol (SNMP), NetFlow, and so on. This approach has evolved in different directions.
When SDN started to evolve, Cisco and other vendors started offering vendor-specific APIs
to program and control the existing network devices. As you can see in the figure (option
1), the control and data planes are still in the same box as in the traditional approach. An
example would be the NX-API interface that is used on Cisco Nexus data center switches.
Later, open APIs (NETCONF, RESTCONF, and so on) were added to vendor-specific APIs.
Option 2a shows a pure SDN environment where the control plane has been separated to
a controller. OpenFlow was the first protocol for communication between the controller
and the data plane, but it required a hardware upgrade to understand OpenFlow
commands. Now, there are a variety of APIs that can be used. NETCONF, for example, is
one of the most popular network configuration protocols, but others can be used (for
example, Path Computation Element Protocol [PCEP] and Interface to the Routing System
[I2RS]).
The limitations of a pure SDN approach have led to a hybrid approach (option 2b), which
today is used by most vendors, including Cisco. A control plane is still needed on the
network devices so that they can independently run some network protocols (routing, for
example). Also, the controller uses an abstraction layer between the applications and the
network devices. Applications can communicate with a controller in a programmable way
and achieve automation through it.
The last option (3) represents an overlay approach, which commonly uses Virtual
Extensible LAN (VXLAN) protocol. The main idea is that the existing devices are kept intact
and that a virtual network using overlays is created. Automation (and programmability) is
achieved on top of the virtual network.
27.3 Explaining the Evolution of Intelligent Networks
Software-Defined Networking
One way to simplify how networks are built and managed is to use controller-based
networking solutions and architectures. SDN controllers centralize management of many
devices in one single point of administration. This method decreases complexity, human
error, and the time it takes to deliver a new service.
The SDN movement was started to rearchitect packet forwarding techniques, but it was
soon realized that the real improvement that was needed was in network operations.
There is a drive now to eliminate access to the CLI (and the GUI) to minimize touchpoints
on the network. You need to stop managing devices one at a time and stop using
commands that return a “blob” of raw text. Rather, you need better programmatic
interfaces that offer a means to automate while working with structured objects such as
eXtensible Markup Language (XML) and JavaScript Object Notation (JSON). The bottom
line is, in order to remain competitive, you need to learn how to do more with less. If it
means simply checking inventory, serial numbers, or getting ready for an IPv6 migration,
you need to start designing and thinking about network management much differently.
What is software-defined networking?

• An approach and architecture in networking where control and data planes are
decoupled, and intelligence and state are logically centralized
• An implementation where the underlying network infrastructure is abstracted
from the applications (via network virtualization)
• A concept that leverages programmatic interfaces to enable external systems to
influence network provisioning, control, and operations
SDN is a set of techniques, not necessarily a specific technology, that seeks to program
network devices either through a controller or some other external mechanism. SDN
refers to the capacity to control, manage, and change network behavior dynamically
through an open interface rather than through direct, closed-box methods. It allows the
network to be managed as a whole and increases the ability to configure the network in a
more deterministic and predictable way.
With SDN, you can reduce the complexity of your network by using a standardized
network topology and by building an abstract overlay network on top. In this way, you
move from a single device view of the network (box-oriented) to a global, high-level view
(network-oriented). This high-level view enables you to use abstractions and
simplifications when provisioning new services. For example, the network operator
configuring a virtual private network (VPN) for a remote office environment is not
concerned (and should not be) with the physical layout of the network. The only
requirement of the remote site and operator is that the network spans all geographic
regions required for the VPN (for example, the Main Campus and Remote Office). The
controller will figure out what needs to be provisioned. The prerequisite to this is that the
controller is the central point of management and the "source of truth" for the
configuration.
Using abstractions when managing your network also enables you to use standardized
components. SDN implementations typically define a standard architecture and APIs that
the network devices use. To a limited degree, you can also swap a network device for a
different model or a different vendor altogether. The controller will take care of
configuration, but the high-level view of the network for the operators and customers will
stay the same.
Simplification of configuration and automated management also directly results in OPEX
savings. Typically, the total cost of ownership (TCO) for a network in a five-year span
comprises about 30 percent capital expenditure (CAPEX) and about 70 percent OPEX.
Manual service configuration and activation represent a significant chunk of OPEX.
SDN addresses the need for the following:

• Centralized configuration, management, control, and monitoring of network devices (physical or virtual)
• The ability to override traditional forwarding algorithms to suit unique business or
technical needs
• Allowing external applications or systems to influence network provisioning and
operation
• Rapid and scalable deployment of network services with lifecycle management
Software-defined networking allows network engineers to provision, manage, and
program networks more rapidly, as it greatly simplifies automation tasks by providing a
single point of administration for the programming of the infrastructure.
Controller-based networking makes centralized policy easy to achieve. Networkwide
policy can be easily defined and distributed consistently to the devices connected to the
controller. For example, instead of attempting to manage access control lists across many
individual devices, a flow rule can be defined on the central controller and pushed down
to all the forwarding devices as part of the normal operations.
Compared to traditional networking, controller-based networking makes it easy to define
special treatment for specific network traffic. Instead of adding complexity to the network
through advanced mechanisms like policy-based routing, a traffic flow rule can be defined
on the controller and pushed down to all the forwarding devices as part of normal
operations. The largest benefit here is that there is a device, a controller, that has a
unified view of the network in one location.
This single point of administration addresses the scalability problem in that administrators
are no longer required to touch each individual device to be able to make changes to the
environment. This concept is not new, as controllers have been around for many
years and have been used for campus wireless networking. Similar to the behavior between a Cisco
Wireless LAN Controller (WLC) and its managed access points (APs), the controller
provides a single point to define business intent or policy, reducing overall complexity
through the consistent application of intent or policy to all devices that fall within the
management domain of the controllers. For example, think about how easy it is to enable
authentication, authorization, and accounting (AAA) for wireless clients using a WLC,
compared to enabling AAA for wired clients (where you would need AAA changes on every
switch if you are not using a controller).
With automated processes, the time to provision a new service or implement a change
request is drastically reduced. What would previously take days or weeks to implement
can be automated to run in hours, along with testing and verification. Another important
step in automation is lifecycle management, from day 0 design and installation of the
infrastructure components to day 1 service enablement and day 2 management and
operations. Also, after the customer no longer needs the service, you must deallocate the
resources that are used and clean up the configuration on the devices. Even with proper
change management procedures, this process is tedious at best if performed manually. If
the process is fully automated, you can make sure that the same configuration changes
that were applied when provisioning the new service will be removed when it is
deprovisioned.
Traditional versus Software-Defined Networks
Traditional networks comprise several devices (for example, routers, switches, and WLCs)
that are equipped with software and networking functionality:

• The data (or forwarding) plane is responsible for the forwarding of data through a
network device.
• The control plane is responsible for controlling the forwarding tables that the data
plane uses.
• The management plane is integrated into the control plane.
• In a traditional network, the data plane acts on the forwarding decisions.
• In a traditional network, the control and management planes learn/compute
forwarding decisions.
The figure shows a traditional network. Each device has a control and data plane. This
means that all devices are equally smart and can make decisions on their own, because each
has its own control plane. The data plane is responsible for the actual packet
forwarding. This network is now referred to as the traditional network, and it is still the
dominant network type deployed.
With SDN, the network changes.

• The control (and management) plane becomes centralized.


• Physical devices retain data plane functions only.

When SDN first emerged, the thought was that the control (and management) plane
should be removed from each device and that the control and management planes must
be centralized into an SDN controller. While a major benefit to this approach is that you
can evolve the control and management plane protocols independently of the hardware
while now having a central point of control, there were significant scaling problems with
this approach.
Note: The management and control planes are abstracted into a centralized, specialized
network controller, which implements a virtualized software orchestration to provide
network management and control functions.
The figure shows the network as it could be in a "hybrid SDN".

• A controller is centralized and separated from the physical device, but devices still
retain localized control plane intelligence.
The hybrid SDN option combines the best of both approaches. In a hybrid SDN, the
controller becomes an active part of the distributed network control plane, rather than a
means to configure the network control plane behavior in devices. This solution also offers
a centralized view of the network, giving an SDN controller the ability to act as the brain of
the network. In traditional networking, some network protocols, for example routing
protocols, scale well, meaning that they provide a certain level of automation. Adding a
controller offers a single pane of glass for administration while also offering a single API
to interface to the network, as opposed to establishing Secure Shell (SSH) connections to a
number of network devices to make a change or retrieve data.
SDN Layers

The SDN architecture differs from the architecture of traditional networks. It comprises
three stacked layers (from the bottom up):

• Infrastructure layer: Contains network elements (any physical or virtual device that
deals with traffic).
• Control layer: Represents the core layer of the SDN architecture. It contains SDN
controllers, which provide centralized control of the devices in the data plane.
• Application layer: Contains the SDN applications, which communicate network
requirements towards the controller.
The controller uses southbound APIs to control individual devices in the infrastructure
layer. The controller uses northbound APIs to provide an abstracted network view to
upstream applications in the application layer.
Note: SDN can be compared to network functions virtualization (NFV). Researchers
created SDN to easily test and implement new technologies and concepts in networking,
but a consortium of service providers created NFV. Their main motivation was to speed up
deployment of new services and reduce costs. NFV accomplishes these tasks by
virtualizing network devices that were previously sold only as a separate box (such as the
switch, router, firewall, and intrusion prevention system [IPS]) and by enabling them to
run on any server. It is perfectly possible to use both technologies at the same time to
complement each other. In other words, SDN decouples the control plane and data plane
of network devices, and NFV decouples network functions from proprietary hardware
appliances.
Northbound and Southbound APIs
Traditionally, methods such as SNMP, Telnet, and SSH were among the only options to
interact with a network device. However, over the last few years, networking vendors,
including Cisco, have developed and made available APIs on their platforms in order for
network operators to more easily manage network devices and gain flexibility in
functionality.
The API is the mechanism by which an end user makes a request of a network device and
the network device responds to the end user. This method provides increased
functionality and scalability over traditional network management methods. In order to
transmit information, APIs require a transport mechanism such as SSH, HTTP, and HTTPS,
though there are other possible transport mechanisms as well.
An SDN offers a centralized view of the network, giving an SDN controller the ability to act
as the brain of the network. The control layer of the SDN is usually a software solution
called the SDN controller. The SDN controller uses APIs to communicate with the
application and infrastructure layers. An API is a set of functions and procedures which
enable communication with a service. Using APIs, business applications can tell the SDN
controller what they need from the network. Then the controller uses the APIs to pass
instructions to network devices, such as routers, switches, and WLCs. However, those sets
of APIs are very different. Communication with the infrastructure layer is defined with
southbound APIs, while services are offered to the application layer using the northbound
APIs.
Northbound APIs or northbound interfaces are responsible for the communication
between the SDN controller and the services that run over the network. Northbound APIs
enable your applications to manage and control the network. So, rather than adjusting
and tweaking your network repeatedly to get a service or application running correctly,
you can set up a framework that allows the application to demand the network setup that
it needs. These applications range from network virtualization and dynamic virtual
network provisioning to more granular firewall monitoring, user identity management,
and access policy control. Currently, the REST API is predominantly used as a single
northbound interface for communication between the controller and all
applications.
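As a hedged sketch of what a northbound REST call can look like, the following Python snippet uses the requests library to retrieve a device inventory from a controller; the controller address, URL path, and token are placeholders rather than a specific Cisco API:

# Minimal sketch: calling a controller's northbound REST API with the requests library.
# The controller address, URL path, and token below are placeholders.
import requests

CONTROLLER = "https://sdn-controller.example.com"
TOKEN = "replace-with-a-real-token"

response = requests.get(
    CONTROLLER + "/api/v1/network-devices",   # hypothetical inventory endpoint
    headers={"X-Auth-Token": TOKEN, "Accept": "application/json"},
    verify=False,                              # lab use only: skip TLS validation
    timeout=10,
)
response.raise_for_status()

for device in response.json():                 # structured JSON instead of raw CLI text
    print(device.get("hostname"), device.get("managementIpAddress"))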
SDN controller architectures have evolved to include a southbound abstraction layer. This
abstraction layer hides the network details so that there is one single place to write
applications to, and it allows application policies to be translated from an
application through the APIs, using whichever southbound protocol is supported and
available on the controller and infrastructure device. This new approach allows for the
inclusion of both new protocols and southbound controller protocols and APIs, including
(but not limited to) the following:

• OpenFlow: An industry-standard API, which the Open Networking Foundation
(ONF) defines. OpenFlow allows direct access to and manipulation of the
forwarding plane of network devices such as switches and routers, both physical
and virtual (hypervisor-based). The actual configuration of the devices is done by
using the Network Configuration Protocol (NETCONF).
• NETCONF: An IETF standardized network management protocol. It provides
mechanisms to install, manipulate, and delete the configuration of network
devices via Remote Procedure Call (RPC) mechanisms. The messages are encoded
by using XML. Not all devices support NETCONF—the ones that do support it
advertise their capabilities via the API.
• RESTCONF: In the simplest terms, RESTCONF adds a REST API to NETCONF.
• OpFlex: An open-standard protocol that provides a distributed control system that
is based on a declarative policy information model. The big difference between
OpFlex and OpenFlow lies with their respective SDN models. OpenFlow uses an
imperative SDN model, where a centralized controller sends detailed and complex
instructions to the control plane of the network elements to implement a new
application policy. In contrast, OpFlex uses a declarative SDN model. The
controller, which, in this case, is called by its marketing name, Cisco Application
Policy Infrastructure Controller (APIC), sends a more abstract policy to the network
elements. The controller trusts the network elements to implement the required
changes using their own control planes.
• REST: The software architectural style of the world wide web. REST APIs allow
controllers to monitor and manage infrastructure through the HTTP and HTTPS
protocols, with the same HTTP verbs (GET, POST, PUT, DELETE, and so on) that web
browsers use to retrieve web pages.
• SNMP: SNMP is used to communicate management information between the
network management stations and the agents in the network elements.
• Vendor-specific protocols: Many vendors use their own proprietary solutions,
which provide REST API to a device, for example, Cisco uses NX-API for the Cisco
Nexus family of data center switches.
Note: In recent years, NETCONF has become a dominant protocol that allows you to
modify the configuration of a networking device, whereas OpenFlow is a protocol that
allows you to modify its forwarding table. If you need to reconfigure a device, NETCONF is
the way to go. If you want to implement a new functionality that is not easily configurable
within the software that your networking device is running, you should be able to modify
the forwarding plane directly by using OpenFlow, if the networking device supports
OpenFlow.
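Of the protocols listed above, RESTCONF is often the simplest to try from a script. The following is a minimal sketch that reads YANG-modeled interface data over RESTCONF (RFC 8040); the device address and credentials are placeholders, and RESTCONF must be enabled on the device:

# Minimal sketch: reading YANG-modeled interface data over RESTCONF.
# The device address and credentials are placeholders; RESTCONF must be enabled.
import requests

URL = "https://192.0.2.10/restconf/data/ietf-interfaces:interfaces"
HEADERS = {"Accept": "application/yang-data+json"}

resp = requests.get(URL, headers=HEADERS, auth=("admin", "admin"),
                    verify=False, timeout=10)
resp.raise_for_status()
print(resp.json())   # a JSON document that follows the ietf-interfaces YANG model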

27.4 Explaining the Evolution of Intelligent Networks


Common Programmability Protocols and Methods
Programmability and automation are relevant in all IT deployments—not just for the new
networks or "next generation" networks. The intent is to move away from the CLI, toward
a more automation-friendly way of interacting with network, server, and storage devices.
This configuration automation is not a new concept. However, in modern large-scale
networks with complex configurations, an effective management system is almost
unavoidable. Several solutions exist for this purpose, each with its own advantages and
disadvantages.
Evolution of Network Configuration
Networks have grown multifold in the number, classes, and complexity of devices. More
protocols and technologies are supported, and the network itself has many products
beyond just switches and routers, such as load balancers, firewalls, and security
appliances.
However, network management has not evolved to the point of replacing the CLI. The CLI
is still the primary tool used for provisioning. A new set of tools is required to be able to
provision and manage these new classes of devices and technologies in a more dynamic
fashion. Today, with a few clicks, a virtual machine (VM) can be brought up on one of
thousands of servers with a particular IP address and VLAN. The network must be able
to be provisioned in a similar manner, so that this new VM can communicate with its
peers on the network automatically and securely. As the networking industry evolves and
continues to refine network programmability methods and philosophies, several protocols
have emerged and taken center stage in the trend to program network devices more
efficiently, with less human interaction and with greater ability to manage entire
infrastructures programmatically.
Evolution of Device Management and Programmability
Managing networks began years ago, when networks were first deployed. Early networks
were deployed manually and via the command line, just as they often are today. The
difference is that there were far fewer devices to deploy and manage, along with far
fewer features, compared to what is deployed today. It was not
necessarily obvious that SNMP or CLI management was ultimately not the right choice for
device management.
In fact, those mechanisms served most environments just fine. SNMP still dominates the
market for monitoring basic up and down status, and CLI is still the primary means of
management. However, CLI was built for humans—it was not meant for machines.
Machines are some of the main drivers urging us to adopt and implement more advanced
APIs that are meant for automation.

• Managing networks via the CLI was (and is) the norm.
• Networks were static when protocols such as SNMP emerged.
• Networks have grown to be overly complex.
• Regular expressions and scripting were the main tools for those who worked with
automation.
CLI syntax and configurable options that are associated with features such as Border
Gateway Protocol (BGP), quality of service (QoS), or VPNs varied widely across vendors,
platforms, and software releases. Over time, these differences, combined with the
limitations of the CLI, started to inhibit the ability to configure and manage networks at
scale. Configuring and operating a single feature within a large network could require the
use of several different CLIs. Trying to automate with screen-scraping scripts and regular
expressions started to make matters worse.
Another traditional management protocol, SNMP, has been around for many years. It has
been the de facto way to monitor networks. It worked great when networks were small
and polling a device every 15 to 30 minutes met operational requirements. However,
SNMP often caused operational issues when polling devices too frequently. While SNMP
has served the industry reasonably well from a device monitoring perspective, it does
have plenty of weaknesses. One of the most problematic issues from the network
programmability perspective is that SNMP lacks libraries for various programming
languages.

If you consider the way devices have been managed, you can see that there has been no
good way to handle machine-to-machine communication with the network. Expect
scripting and custom parsers were the best the industry had to offer. That is no longer
acceptable, because the rate of change continues to increase, there are more devices,
and higher demands are being placed on the network.

If you look at where configuration management is with SNMP and CLI, you can easily
outline the requirements for next-generation configuration management:

• Provide easier-to-use management interfaces.
o Newer interfaces (APIs) need clear and well-defined ways to interact with
them. They should also be able to leverage custom and open source tools
to easily consume the APIs.
• Support client-side validation and error checking.
o APIs should support the ability to offer client-side validation. When using a
next-generation approach that is purely model-driven, it becomes a great
byproduct. Rather than have the device do error checking, the
management applications that leverage the device API and model
automatically handle it.
• Separate configuration and operational data.
o There is a delineation between configuration state and operational state,
and it needs to be reflected in the API. Any attribute, configuration
parameter, or statistic should be accessible via the API.
• Contain a built-in backup and restore capability.
o Next-generation APIs should support the ability to perform multiple types
of configurations, making it simpler to perform backups and restores, but
also to improve how changes are made.
• Be both human and machine-friendly.
o APIs need to be easy for humans to read. Having APIs that support readable
data formats such as JSON and XML simplifies adoption. Having the data encoded
in documents that are derived from data models further improves
machine readability. It also improves the pace at which changes can be
deployed on the device, and in turn, reflected and understood by humans.

As next-generation programmatic interfaces are being built, there are a few key attributes
that must be met:

• They must support different types of transport: HTTP, SSH, Transport Layer
Security (TLS).
• They must be flexible and support different types of data encoding formats such as
XML and JSON.
• There must be efficient and easy-to-use tooling that helps in using the new APIs,
for example, programming libraries (software development kits [SDKs]).
• There must be extensible and open APIs: REST, RESTCONF, NETCONF, and
Google remote procedure call (gRPC).
Also, they must be model-driven. Being model-driven is what enables support for any
transport, API, encoding, and data format.
Model-Driven Programmability
The solution for next generation management lies in adopting a programmatic and
standards-based way of writing configurations to any network device, replacing the
process of manual configuration. A main component of those innovations is model-driven
programmability.
Data models are developed in a standard, industry-defined language that can define the
configuration and state information of a network. Using data models, network devices
running on different Cisco operating systems can support the automation of configuration
for multiple devices across the network.
Model-driven programmability of Cisco devices allows you to automate the configuration
and control of those devices or even use orchestrators to provide end-to-end service
delivery (for example in Cloud Computing). Data modeling provides a programmatic and
standards-based method of writing configurations to network devices, replacing the
process of manual configuration. Although configuration using a CLI may be more human-
friendly, automating the configuration using data models results in better scalability.
Note: An orchestrator enables IT administrators to automate the management, coordination,
and deployment of IT infrastructure. It is typically used in cloud services delivery.

This figure represents the model-driven programmability stack.


The core components of the complete device API include the following:

• Data models: Data models are the foundation of the API. They define the
syntax and semantics, including constraints, of working with the API. They use well-
defined parameters to standardize the representation of data from a network
device so the output among various platforms is the same. Device configuration
can be validated against a data model in order to check if the changes are valid for
the device before committing the changes.
• Transport: Model-driven APIs support one or more transport methods including
SSH, TLS, and HTTP(S).
• Encoding: The separation of encodings from the choice of model and protocol
provides additional flexibility. Data can be encoded in JSON, XML, or Google
Protocol Buffers (GPB) format. While some transports are currently tied to specific
encodings (for example, NETCONF and XML), the programmability infrastructure is
designed to support different encodings of the same data model if the transport
protocol supports it.
• Protocols: Model-driven APIs also support multiple options for protocols, with the
three core protocols being NETCONF, RESTCONF, and gRPC. Data models
are not used to actually send information to devices and instead rely on these
protocols. REST is not explicitly listed because when REST is used in a modeled
device, it becomes RESTCONF. However, pure or native REST is also used in certain
network devices. Protocol choice will ultimately be influenced by your networking,
programming, and automation background, plus available tooling.
Note: An SDK is a set of tools and software libraries that allows an end user to create their
own custom applications for various purposes, including managing hardware platforms.
The process of automating configurations and monitoring in a network involves the use of
these core components:

• Client application: Manages the configurations and monitors the devices in the
network. A client application can be written in different programming languages
(such as Python) and SDKs are often used to simplify the implementation of
applications for network automation.
• Network device: Acts as a server, responds to requests from the client application,
and configures the devices in the network.
• Data model (YANG) module: Describes the configuration and operational data of the
network device and defines actions that can be performed.
• Communication protocol: Provides mechanisms to install, manipulate, and delete
the configuration of network devices. The protocol encodes data in a particular
format (XML, JSON, gRPC) and transports the data using one of the transport
methods (HTTP, HTTPS, SSH, TLS).
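To make the client application, protocol, and data model roles above concrete, here is a minimal sketch that uses the ncclient Python library to open a NETCONF session and retrieve the running configuration as XML; the address and credentials are placeholders, and NETCONF (normally TCP port 830) must be enabled on the device:

# Minimal sketch: retrieving the running configuration over NETCONF with ncclient.
# Host, username, and password are placeholders; NETCONF must be enabled on the device.
from ncclient import manager

with manager.connect(
    host="192.0.2.10",
    port=830,
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    reply = m.get_config(source="running")   # NETCONF <get-config> RPC
    print(reply.xml)                         # XML shaped by the device's YANG models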
Telemetry is an automated communications process by which measurements and other
data are collected at remote or inaccessible points and transmitted to receiving
equipment for monitoring. Model-driven telemetry provides a mechanism to stream data
from a model-driven telemetry-capable device to a destination.
Different Cisco operating systems provide several mechanisms such as SNMP, CLI, and
syslog to collect data from a network. These mechanisms have limitations that restrict
automation and scale. One limitation is the use of the pull model, where the initial request
for data from network elements originates from the client. The pull model does not scale
when there is more than one network management system (NMS) in the network. With
this model, the server sends data only when clients request it. To initiate such requests,
continual manual intervention is required. This continual manual intervention makes the
pull model inefficient. Model-driven streaming telemetry is able to push data off the
device to a defined endpoint, encoded in a format such as JSON or GPB, at a much higher
frequency and more efficiently.
Telemetry uses a subscription model to identify information sources and destinations.
Model-driven telemetry replaces the need for the periodic polling of network elements—
instead, a continuous request for information to be delivered to a subscriber is established
upon the network element. Then, either periodically or as objects change, a subscribed
set of YANG objects is streamed to that subscriber. The data to be streamed is driven
through subscription. Subscriptions allow applications to subscribe to updates (automatic
and continuous updates) from a YANG data store, which enables the publisher to push
and in effect stream those updates.
Data Models
What are data models?

• Data models describe a constrained set of data in the form of a schema language.
• They use well-defined parameters to standardize the representation of data from a
network device, so that the output among various platforms is the same.
• They are not used to actually send information to devices, but instead, they rely on
protocols such as NETCONF and RESTCONF to send JSON- and XML-encoded
documents that simply adhere to a given model.
• Device configuration can be validated against a data model in order to check if the
changes are valid for the device before committing the changes.
Data models are used to describe the syntax and semantics of working with specific data
objects. They can define attributes and answers such as the following:

• What is the range of a valid VLAN ID?


• Can a VLAN name have spaces in it?
• Should the values be enumerated and only support “up” or “down” for an admin
state?
• Should the value be a string or an integer?
The industry is migrating from a world of having no framework (no modeling) when using
CLI commands and text output, to a world of a fully modeled device. In other words, a
device that has a JSON and XML representation of its full configuration and that is fully
driven from a robust model such as YANG. Models also define operational data and
statistics on devices.
Data models provide a well-defined hierarchy of the configurational and operational data
of a router, and actions that can be performed by a protocol such as NETCONF:

• Configuration data: A set of writable data that is required to transform a system
from an initial default state into its current state. For example, configuring entries
of the IP routing tables, configuring the interface MTU to use a specific value,
configuring an Ethernet interface to run at a given speed, and so on.
• Operational state data: A set of data that is obtained by the system at runtime and
influences the behavior of the system in a manner similar to configuration data.
However, in contrast to configuration data, operational state data is transient. The
data is modified by interactions with internal components or other systems using
specialized protocols. For example, entries obtained from routing protocols such as
Open Shortest Path First (OSPF), attributes of the network interfaces, and so on.
• Actions: A set of actions that support robust networkwide configuration
transactions. When a change is attempted that affects multiple devices, the
actions simplify the management of failure scenarios, resulting in the ability to
have transactions that will dependably succeed or fail.
YANG Data Models
In recent years, YANG has become a de facto data modeling language. It is a standards-
based data modeling language used to create device configuration requests or requests
for operational (show command) data. It has a structured format similar to a computer
program that is human readable. Several applications are available that can be run on a
centralized management platform (for example, a laptop) to create these configuration
and operational data requests.
YANG

• Modeling language defined in RFC 6020


• Initially built for NETCONF, now also used by RESTCONF and gRPC
• Models configuration and operational state data
• Provides syntax and semantics
• Utilizes reusable data structures
An example of YANG usage can be seen in the next figure, where it is used to define an
Interface model.
YANG Interface model
YANG is a formal contract language with rich syntax and semantics, on which you can
build applications. Its rich semantics provide constraints, and it also provides
reusable structures that can be used within and between YANG models.
YANG has been around since 2010 but has been tightly coupled with the NETCONF
management protocol. It is defined in RFC 6020: YANG - A data modeling language for
NETCONF. More recently, YANG models are being used independently of NETCONF and
for other protocols such as RESTCONF and gRPC.
There are both standard (common) YANG data models that apply to all vendors (for
example, a request to disable or shut down an Ethernet interface should be identical for
both Cisco and non-Cisco devices) as well as device (native, vendor-specific) data models
that facilitate configuring or collecting operational data associated with proprietary
vendor features.
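As a small, hedged sketch of how a standard model shapes what a client sends, the Python snippet below builds an XML payload that follows the standard ietf-interfaces YANG model and pushes it with ncclient; the interface name, description, device address, and credentials are examples only, and some platforms expect the candidate datastore rather than running:

# Minimal sketch: an edit-config payload shaped by the standard ietf-interfaces YANG model.
# Interface name, description, and connection details are illustrative placeholders.
from ncclient import manager

CONFIG = """
<config>
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet2</name>
      <description>Configured via NETCONF</description>
      <type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
      <enabled>true</enabled>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host="192.0.2.10", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    m.edit_config(target="running", config=CONFIG)  # some devices use target="candidate"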
Note: While YANG is becoming the dominant way to model network configuration and
state information, it is not the only method. There are other Cisco platforms and solutions
(for example, Cisco Nexus 9000 family, Cisco Unified Computing System) that do not use
YANG, but are fully driven by an object model. Each of these platforms was built using a
custom proprietary object model that offers the same properties as if they were built
using YANG models.
Encoding Formats
There are many different encoding (or data) formats used for your applications to
communicate with a wide range of APIs available on the internet. Each format provides a
syntax for encoding data that can be read by another machine and that humans can also
understand.
Having APIs that support human-readable data formats simplifies adoption. Each of them
provides a structured way of using data formatting to send data between two systems.
This is in stark contrast to using SSH and issuing CLI commands, in which data is sent as
raw text, such as strings.
But if you want to use an API to configure a Cisco router, you have to check which data
types are supported by that API. Then, you can start writing a request to be handled by
that API that has an effect on your router configuration. An API server comprehends your
written code and translates it into instructions suitable for your router to process and
create an action based on that.
Working with Cisco network devices, you will most likely encounter these common data
formats:

• XML
• JavaScript Object Notation (JSON)
• Google Protocol Buffers (GPBs)
• YAML Ain't Markup Language (YAML)
Note: GPBs are really just numbers, not strings, and not easily read by a human. However,
there are benefits to using number codes. GPB is an efficient way of encoding telemetry
data and represents the ultimate in efficiency and speed.
Note: YAML, as the name suggests, is not a markup language like XML. With its
minimalistic format, it is easier for humans to write and read, but it works the
same way as the other data formats. In general, it is the most human-readable of all the
formats while remaining just as easy for programs to use, and it is gaining popularity
among engineers working with programmability. YAML is not supported on Cisco device
APIs, but it is still used to configure Cisco network devices through the Ansible
configuration management tool.
The following are common characteristics of API encoding formats:

• Format syntax
• Concept of an object
o Element that has characteristics
o Can contain many attributes
• Key/value notation
o Key: Identifies a set of data
o Value: Is the data itself
• Array or list
• Importance of whitespaces
• Case sensitivity
A syntax is a way to represent a specific data format in textual form. You will notice that some
formats use curly braces or square brackets, and others have tags marking the beginning
and end of an element. In some formats, quotation marks or commas are heavily used
while others do not require them at all. But no matter which syntax they use, each of
these data formats has a concept of an object. You can think of an object as a packet of
information, an element that has characteristics. An object can have one or more
attributes attached to it.
Many characteristics will be represented by the key/value concept, the key and value
often being separated by a colon. The key identifies a set of data and it is often positioned
on the left side of the colon. The values are the actual data that you are trying to
represent. In most cases, the data appears on the right side of the colon.
To extract the meaning of the syntax, you must recognize how keys and values are
notated when looking at the data format. A key must be a string, while a value could be a
string, a number, or a Boolean (for instance, true or false). Other values could be more
complicated, containing an array or an entirely new object that represents its own data.
Another thing to notice when looking at a particular data format is the importance of
whitespaces and case sensitivity. In some cases these could be of high importance, and in
others, they could carry no significance, as you will get to know through some examples.
One of the main points about data formats that you should bear in mind is that you can
represent any kind of data in any given format.
In the figure, there are the three previously mentioned common data formats—JSON,
XML, and YAML. Each of these examples provides details about a specific network
interface, GigabitEthernet5, providing description, IPv4 address, and more.
You can quickly recognize that the exact same data is represented in all three formats, so
it really comes down to two factors when considering which one to choose:

• If the system you are working with prefers one format over the other, pick the
format that the system prefers.
• If the system can support any of them, pick the format that you are more
comfortable working with.
In other words, if the API you are addressing uses one specific format or a handful of
them, you will have to choose one of those. If the API supports any given format, it is up
to you which one you prefer to use.
XML Overview
XML is a markup format that is human-readable while enabling computers to efficiently
parse the information stored in it. Although it is not the easiest format for humans to
read visually, it is easy for machines to parse and generate. XML was created
to structure, store, and transport information. The content is wrapped in tags.
The code block is an example of XML-formatted information:
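A minimal sketch of XML-formatted interface data might look like the following (the interface name, description, and address are illustrative values, not taken from an actual device):

<interface>
  <name>GigabitEthernet5</name>
  <description>Uplink to distribution switch</description>
  <enabled>true</enabled>
  <ipv4>
    <address>
      <ip>10.1.1.5</ip>
      <netmask>255.255.255.0</netmask>
    </address>
  </ipv4>
</interface>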
XML may look similar to HTML, but they are, in fact, different. While both use tags <tags>
to define objects and elements, HTML is used to display data. A web browser knows how
to display websites—it consumes an HTML object and displays it. XML, on the other hand,
is used to describe data such that your XML client (programming language, and so on) can
consume an object that has meaning to it.
XML namespaces are common when using XML APIs on network devices, so it is important
to understand them and know why they are used. As the number of XML files exchanged
on the internet rises, it becomes increasingly likely that two or more applications end up
using the same tag names but represent different objects. This creates a conflict with
systems trying to parse some information from a specific tag. Solving that issue requires
the use of namespaces. A namespace essentially becomes an identifier for each XML
element, distinguishing the element from any other similar element. Besides creating your
own namespaces, you can use an existing namespace, for example, referring to a YANG
model. YANG models are like templates used to generate consistent XML.
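As a hedged sketch, a namespaced XML element referring to the ietf-interfaces model might look like this (the interface name is illustrative):

<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
  <interface>
    <name>GigabitEthernet5</name>
  </interface>
</interfaces>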
In the example, a specific YANG model (ietf-interfaces) is referred to by the namespace
urn:ietf:params:xml:ns:yang:ietf-interfaces.
JavaScript Object Notation
JavaScript Object Notation (JSON) is a lightweight data format that is used in web services
for transmitting data. It is widely used in scripting-based platforms because of its simple
format.
Compared to XML, JSON has the following advantages:

• Simpler and more compact


• Faster for humans to write
• Better suited for object-oriented systems
Compared to XML, JSON has a disadvantage—it is less extensible.
Much like XML, JSON is plaintext and human-readable, perhaps more so than XML. JSON
uses a hierarchical structure and contains nested values. Unlike XML, JSON has no end
tags, is shorter, and quicker to read and write. JSON can be parsed using JavaScript. JSON
supports basic attribute-value-types such as string, number, and Boolean. In addition,
JSON also supports ordered lists such as arrays, hashes, and dictionaries.
JSON syntax uses curly braces, square brackets, and quotes for its data representation.
Typically, the very first character in a JSON file is a curly brace defining a new object
structure. Below that, other objects can be defined in the same way, starting with a name
of an object in quotes following a colon and a curly bracket. Underneath you will find all of
the information about that object.
This code block is an example of JSON formatted information about an interface, again
referring to the ietf-interfaces YANG data model.
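A minimal sketch of such JSON data follows (the description, address, and the ietf-ip:ipv4 augmentation key are illustrative assumptions):

{
  "ietf-interfaces:interface": {
    "name": "GigabitEthernet5",
    "description": "Uplink to distribution switch",
    "enabled": true,
    "ietf-ip:ipv4": {
      "address": [
        {
          "ip": "10.1.1.5",
          "netmask": "255.255.255.0"
        }
      ]
    }
  }
}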

JSON is language- and platform-independent. JSON parsers and JSON libraries exist for
many different programming languages. The JSON text format is syntactically identical to
the code for creating JavaScript objects, since JSON is a JavaScript Object.
JSON Data Types
JSON uses six data types. The first four data types (string, number, Boolean, and null) are
referred to as simple data types. The last two data types (object and array) are referred to
as complex data types.
String is any sequence of characters between two double quotes. For example:
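An illustrative key/value pair with a string value:

"name": "GigabitEthernet5"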
Number is a decimal number, which may use exponential notation and could contain a
fractional part. For example:
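An illustrative key/value pair with a number value:

"mtu": 1500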

Boolean can be either of the "true" or "false" values. For example:
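An illustrative key/value pair with a Boolean value:

"enabled": true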

Null is an empty value, represented by the word null. For example:
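An illustrative key/value pair with a null value:

"description": null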

Object is an unordered collection of name–value pairs. The names (also called keys) are
represented by strings. Objects are typically rendered in curly braces. For example,
information about an interface would look as follows:
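A hedged sketch of such an object (keys and values are illustrative):

{
  "interface": {
    "name": "GigabitEthernet5",
    "enabled": true,
    "mtu": 1500
  }
}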

Array is an ordered collection of values, which can be of any type. For example,
a configuration of two static routes would look as follows in JSON:
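A hedged sketch (the key names and addresses are illustrative, not a specific device schema):

{
  "route": [
    {
      "prefix": "10.10.0.0/16",
      "next-hop": "192.0.2.1"
    },
    {
      "prefix": "10.20.0.0/16",
      "next-hop": "192.0.2.2"
    }
  ]
}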
Namespaces
Similar to XML, JSON (and YAML) can also use namespaces that define the syntax and
semantics of a name element, and in that way avoid element name conflicts. Take a look
at the example code from each format:
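A compact sketch of the same namespaced data in each format (element values are illustrative; only the namespace usage matters here):

XML:
<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
  <interface>
    <name>GigabitEthernet5</name>
  </interface>
</interfaces>

JSON:
{
  "ietf-interfaces:interfaces": {
    "interface": [
      { "name": "GigabitEthernet5" }
    ]
  }
}

YAML:
ietf-interfaces:interfaces:
  interface:
    - name: GigabitEthernet5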

In each of these formats, you can find the same namespace, ietf-interfaces.
When you are using RESTCONF, JSON requires a namespace, which has to be in the
correct URI format (ietf-interfaces:interfaces). A corresponding URL, as used by the
RESTCONF protocol, would look as follows:
https://<ROUTER_ADDRESS>/restconf/data/ietf-
interfaces:interfaces/interface=GigabitEthernet5
Protocols
To manipulate and automate the data models supported on a network device, a
network management protocol needs to be used between the application client (such as
an SDN controller) and the network devices. Different devices support one or more
protocols such as REST, NETCONF, RESTCONF, and gRPC via a corresponding
programmable interface agent for these protocols—sometimes a native REST agent is
used.
When a request from a client is received via a NETCONF, RESTCONF, or gRPC protocol, the
corresponding programmable interface agent converts the request into an abstract
message object that is distributed to the underlying model infrastructure. The appropriate
model is selected and the request is passed to it for processing. The model infrastructure
executes the request (read or write) on the device data store, returning the results to the
originating agent for response transmission back to the requesting client.
Representational State Transfer
There is often a perception that REST is a complex topic to learn about, but in reality it is
analogous to browsing a website with a web browser.
REST is an architectural style (versus a protocol) for designing networked applications.
There are two types of URIs:

• Uniform Resource Name (URN): A name of something with no method specified to
look it up, such as example.com.
• URL: A name of something with the lookup method specified, such as
http://www.cisco.com/index.html. URLs may include the following:
o Protocol/scheme: HTTP(S), FTP, Telnet, mailto, Network News Transfer
Protocol (NNTP), and so on
o Hostname: For example, www.cisco.com
o Path and file name: For example, /index.html
REST uses a stateless client-server model that typically uses HTTP(S) to make calls
between entities, where resource representations are identified by a URL.
REST supports create, read, update, and delete (CRUD) operations by using specific HTTP
verbs. Create, retrieve, update, and delete refers to the four major functions that are
implemented in database applications (including network devices). CRUD operations are
how you can work with network APIs to create objects (for example, create loopback
interfaces), retrieve objects (particular sections of config or operational data), update
objects (perform a given change), or delete an object (remove a route, an IP address, and
so on).

CRUD operations are used with the URL and payload. It is how the server (network device)
knows what action to perform. With the REST API, your application passes a request for a
certain type of data by specifying the URL path that models the data. Both the request and
response are JSON- or XML-formatted data. The following image shows an example of a
URI address composition. The protocol used is HTTP.

The most common HTTP verbs that are used by REST are GET, POST, PUT, PATCH, and
DELETE. HTTP verbs are the methods that are used to perform some sort of action on a
specific resource. Because HTTP is a standardized and ubiquitous protocol, the semantics
are well known.
GET is used to read or retrieve information from a resource and returns a representation
of the data in JSON or XML. Because the GET method only reads data and does not change
it, it is considered a “safe” method, which means there is no risk of data corruption.
POST, on the other hand, creates new resources, which means it is not considered a
“safe” method.
PUT is normally used to update or replace an already existing resource. It is called “PUT-
ing” to a resource and involves sending a request with the updated representation of the
original resource.
PATCH is similar in some ways to PUT in that PATCH modifies the capabilities of a
resource. The difference between PUT and PATCH is that PATCH sends a request
containing only the changes to the resource and not a complete updated resource.
DELETE simply deletes a resource that is identified by a URI.
When an HTTP method is used, there is a specific response code returned. For example,
upon successful deletion of a resource using DELETE, the client will receive a 200 message
signifying that the request succeeded.
HTTP response codes are defined by the IETF and are therefore easy to look up online or
on its website, ietf.org. These codes are useful in troubleshooting because they provide
specific information regarding the error on the client side or server side. For example, if a
client receives a 400 response from a server, you can conclude that there is a syntax
problem in the request.
In the list below, you can see some of the most common HTTP response codes.
Common HTTP Response Codes

• 200 OK: The request succeeded.
• 201 Created: A new resource was created.
• 204 No Content: The request succeeded, but no content is returned.
• 400 Bad Request: The request contains a syntax error.
• 401 Unauthorized: Authentication is missing or has failed.
• 403 Forbidden: The request was understood, but the server refuses to authorize it.
• 404 Not Found: The requested resource does not exist.
• 500 Internal Server Error: The server encountered an error while processing the request.
Several tools exist that are used to test REST APIs:

• cURL: A simple Linux command-line tool, easily used within a shell script, that provides an easy
way to transfer data with URL syntax.
• Postman: A Google Chrome application that provides an easy GUI for exercising REST
APIs from within the Chrome web browser.
• Python: The Requests library provides a small set of methods for sending HTTP
requests to a resource API (a brief sketch follows this list).
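As a brief sketch of the Python option (the URL, credentials, and payload are hypothetical and only illustrate the request/response pattern):

import requests

# Hypothetical REST endpoint; replace with a real API URL.
url = "https://api.example.com/devices"

# POST creates a new resource; the payload is sent as JSON in the request body.
payload = {"hostname": "branch-rtr-01", "mgmt_ip": "192.0.2.10"}

response = requests.post(url, json=payload, auth=("admin", "password"))

if response.status_code == 201:   # 201 Created
    print("Resource created:", response.json())
else:
    print("Request failed with status", response.status_code)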
Network Configuration Protocol
NETCONF is an IETF standard transport protocol for communicating with network devices,
retrieving operational data and both setting and reading configuration data. Operational
data includes interface statistics, memory utilization, errors, and so on. The configuration
data refers to how particular interfaces, routing protocols, and other features are enabled
and provisioned. NETCONF purely defines how to communicate with the devices.
NETCONF uses an XML management interface for configuration data and protocol
messages. The protocol messages are exchanged on top of a secure transport protocol
such as SSH or TLS. NETCONF is session-oriented and stateful—worth pointing out because
other APIs such as native REST and RESTCONF are stateless.
NETCONF is fairly sophisticated and it uses an RPC paradigm to facilitate communication
between the client (for example, an NMS server or an open source script) and the server.
NETCONF supports device transactions, which means that when you make an API call
configuring multiple objects and one fails, the entire transaction fails, and you do not end
up with a partial configuration. NETCONF is fairly sophisticated—it is not simple CRUD
processing.
NETCONF encodes messages, operations, and content in XML, which is intended to be
machine and human-readable.
NETCONF utilizes multiple configuration data stores (including candidate, running, and
startup). This is one of the most unique attributes of NETCONF, though a device does not
have to implement this feature to “support” the protocol. NETCONF utilizes a candidate
configuration, which is simply a configuration with all proposed changes applied in an
uncommitted state. It is the equivalent of entering CLI commands and having them not
take effect right away. You would then “commit” all the changes as a single transaction.
Once committed, you would see them in the running configuration.
The example shows different NETCONF data stores:

There are four core layers to the NETCONF protocol stack:


1. Content: Consists of configuration data and notification data. Embedded as XML
objects within the operations tag are XML documents, specific data you want to
retrieve or configure. It is the content that is an XML representation of YANG
models or XML schema definitions.
2. Operations: Defines a set of base protocol operations to retrieve and edit config
data. Each device and platform supports a given number of operations. Common
operations include <get>, <get-config>, <edit-config>, <copy-config>,
<delete-config>, <lock>, <unlock>, and <close-session>.
3. Messages: A mechanism for encoding RPCs and notifications. NETCONF encodes
everything in XML, starting with the XML header and message. The first element in
the XML document is always the RPC element that is simply telling the server that
an RPC is going to be used on the device. These RPC elements map directly back to
specific operations on the device.
4. Transport: How the NETCONF client communicates with the NETCONF server.
Secure and reliable transport of messages between client and server.
There are a few steps that occur during a NETCONF session; they can be summarized as
follows:
1. Client first connects to the server NETCONF SSH subsystem.
2. After client connects to the server (network device) and establishes a connection,
the server sends a hello and it includes all its supported NETCONF capabilities.
3. When the server sends its hello, the client needs to send a hello with its supported
capabilities. The client can respond back with all the capabilities the server
supports (assuming the client does, too), or just with the bare minimum to do edits
and GETs.
4. Once the client sends its capabilities, it can then send NETCONF requests. When a
request is received via NETCONF, the request is converted into an abstract
message object. That message object is distributed to the underlying model
infrastructure based on the namespace in the request. Using the namespace, the
appropriate model is selected and the request is passed to it for processing. The
model infrastructure executes the request (read or write) on the device data store.
5. The server processes the client request and responds with the configuration as
expected.
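The session flow described above can be driven from Python with the widely used ncclient library. This is a minimal sketch, assuming a device with NETCONF over SSH enabled on the default port 830; the address and credentials are placeholders:

from ncclient import manager

# Connect to the NETCONF SSH subsystem; hello messages and capabilities
# are exchanged automatically when the session is established.
with manager.connect(
    host="10.0.0.1",        # placeholder device address
    port=830,               # default NETCONF-over-SSH port
    username="admin",       # placeholder credentials
    password="cisco123",
    hostkey_verify=False,
) as m:
    # Print the capabilities advertised by the server in its hello.
    for capability in m.server_capabilities:
        print(capability)

    # Retrieve the running configuration with a <get-config> RPC.
    reply = m.get_config(source="running")
    print(reply.xml)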
Representational State Transfer Configuration Protocol
RESTCONF characteristics include the following:

• Functional subset of NETCONF


• Exposes YANG models via a REST API (URL)
• Uses HTTP or HTTPS as transport
• Uses XML or JSON for encoding
• Developed to use HTTP tools and programming libraries
• Uses common HTTP verbs in REST APIs
Once you understand REST and NETCONF on their own, RESTCONF becomes much easier
to digest. RESTCONF, in simplest terms, adds a REST API to NETCONF. The same YANG
models are used when you use RESTCONF, and the URLs, HTTP verbs, and request bodies
are derived directly from the associated YANG model. Unlike NETCONF, RESTCONF
supports both XML and JSON.
Note: RESTCONF is a subset of NETCONF, so not all operations are supported. The HTTP
POST, PUT, PATCH, and DELETE methods are used to edit data resources represented by
YANG data models. These basic edit operations allow just the running configuration to be
altered by a RESTCONF client.
Remember that RESTCONF is just using REST principles and therefore uses HTTP verbs
appropriately. The one key operation to take note of is the use of the PUT operation. The
PUT operation has the ability to replace entire sections of configuration based on
what you send. It is analogous to declarative network configuration. For example, if you
PATCH one static route, it will add the route. If you PUT the route, you will end up with
just one route configured.
RESTCONF is not intended to replace NETCONF, but rather to provide an HTTP interface
that follows REST principles and is compatible with the NETCONF data store model.
RESTCONF provides a simplified interface that follows REST-like principles running on top
of HTTP or HTTPS transport, making RESTCONF an attractive choice for application
developers.
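As a minimal sketch, the RESTCONF URL shown earlier can be queried with the Python requests library (the router address and credentials are placeholders, and RESTCONF is assumed to be enabled on the device):

import requests

url = ("https://10.0.0.1/restconf/data/"
       "ietf-interfaces:interfaces/interface=GigabitEthernet5")

# RESTCONF uses YANG-derived media types for encoding.
headers = {"Accept": "application/yang-data+json"}

# verify=False skips certificate validation; acceptable only in a lab.
response = requests.get(url, headers=headers,
                        auth=("admin", "cisco123"), verify=False)

print(response.status_code)   # for example, 200 on success
print(response.json())        # interface data returned as JSON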
Google RPC
gRPC is an open-source RPC framework that provides simple client development. It is
based on Protocol Buffers (Protobuf), which is an open source binary serialization
protocol. gRPC provides a flexible, efficient, high performance automated mechanism for
serializing structured data, like XML, but is smaller and simpler to use. This makes it
especially useful in model-driven telemetry.
The user needs to define the structure by defining protocol buffer message types in .proto
files. Each protocol buffer message is a small logical record of information, containing a
series of name-value pairs. The structure of the data is defined by YANG models. gRPC
encodes requests and responses in binary. gRPC is extensible to other content types along
with Protobuf. The Protobuf binary data object in gRPC is transported over HTTP/2.
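A hedged sketch of a protocol buffer message definition in a .proto file (the message and field names are illustrative, not an actual Cisco telemetry definition):

syntax = "proto3";

// Illustrative telemetry record made up of simple name-value fields.
message InterfaceStats {
  string name       = 1;  // for example, "GigabitEthernet5"
  uint64 in_octets  = 2;
  uint64 out_octets = 3;
  bool   oper_up    = 4;
}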
Note: HTTP version 2 (HTTP/2) is a new, more efficient version of the HTTP protocol
defined in IETF RFC 7540.
27.5 Explaining the Evolution of Intelligent Networks
Configuration Management Tools
Configuration management is the practice of defining the performance, functional, and
physical attributes of a product and then ensuring the consistency of a system's
"configuration" throughout its life. Configuration management tools that have been
traditionally used in the systems realm for server and application deployments are now
being used as configuration management automation tools to improve network
operations. These tools are not new. What is new and exciting is how their use in
networking is revolutionizing how infrastructures are managed.
Configuration management tools offer the following benefits:

• Automate the provisioning and deployment of applications and infrastructure


• Require no knowledge of programming—they use the declarative model (intent),
not scripting
• Leverage software development practices for deployments, including version
control and testing
• Common tools are Puppet, Ansible, and Chef
From a networking perspective, it is common to deploy changes manually. A change could
be adding a VLAN across a data center or campus, or making daily changes to firewall
policies for new applications being deployed. When there is a defined manual workflow to
perform a set of tasks, the proper tools should be used to automate it. It does not make sense
to spend an hour performing a change that could take just a few minutes with a properly
engineered tool. This is where open-source tools such as Puppet, Chef,
and Ansible can dramatically reduce the number of manual interactions with the network.
With configuration management tools, you can define and enforce configurations related
to system level operations (for example, authentication, logging, image), interface level
configuration (for example, VLAN, QoS, security), routing configurations (for example,
OSPF or BGP specifications), and so on.
These tools are often referred to as DevOps tools. They are more specifically configuration
management and automation tools that happen to be used by those organizations that
have implemented some form of DevOps practices, such as version control and testing.
These tools enable you to automate applications, infrastructure, and networks to a high
degree without the need to do any manual programming. While they do reduce the time
that it takes to perform certain tasks, they also offer greater predictability.
A common architecture of configuration management tool systems consists of a central
server, where a required or intended state of the system is defined, and a number of
devices, where this state is pushed to and needs to be enforced.
Note: Configuration changes in a network have traditionally been made on a per-device
basis using a specific set of procedures and configuration line items. These devices were
configured imperatively, that is, the exact steps to achieve the desired end state were
specified. This differs from using a declarative model, where an administrator models how
they would like their environment to look, and the devices, configuration management
tool, or a combination thereof, decides on how best to implement the requested changes.
Agent vs. Agentless Approach
The two models of automated configuration management illustrated in the figure reflect
two different philosophies. The first model is the intent-based model, where a central
server defines the required or intended state of the system. Agents on system elements
enforce that state. Agent-based configuration management is pull-based, and requires
installation of an agent on a network device.
The second model is an evolution of traditional CLI and SSH techniques with automation
to create reusable command sets and frameworks for scalability. No agent or client is
required on the target elements. Agentless management is accomplished through remote
shell access. With remote shell access, a configuration management server or primary will
utilize a push model to deliver a payload, in the form of a script, through the network.
In the agent-based concept, exemplified by Puppet and Chef, the control software – the
Puppet Master or the Chef Server – defines the intent of the configuration state of the
target elements. On a target element, a software agent, or client, is monitoring the actual
configuration state of the element. The agent or client interacts with the master or server
to report actual state and to receive the intended state. If the actual state of the element
differs from the intended state, then the agent takes necessary steps to enforce the
configuration. An example of such a step would be configuring that element to bring it in
line with the intended state as defined by the master or server. The architecture is
uniform for all target elements that can support the agent or client. When agents are used
for switch management, they can be installed in the native Linux user space or in a Linux
container (LXC).
In the agentless approach, exemplified by Ansible, the desired configuration state is
defined in Ansible playbooks. Playbooks are text-based configuration files that define how
Ansible modules should be used. The Ansible framework interprets the contents of the
playbooks and uses the modules to provision (issue commands to configure) the target
elements. For Linux servers, for example, these commands are executed via Python scripts
which are deployed via SSH, that run on the target element. For Cisco platforms, these
commands are executed via CLI.
A major advantage to the remote shell access method is that there is little to no
configuration required on the device as there are no agents required for installation. A
potential drawback is the need to ensure that the security configuration on the device is
kept synchronized, as any change can have a significant impact on the configuration
management tool's ability to access the switch.
The key point with both approaches is that a single toolset can be used to configure IT
infrastructure devices—such as application servers and the software that runs on them—
and the network connectivity required between them. This means that the overall
deployment lifecycle for an application can be defined and managed from a single point,
by a single team. This is generally not the case with traditional network infrastructure
because it is generally managed and configured separately from IT infrastructure. That
separation of concerns can inhibit efficient application services deployments. Unifying
configuration of all layers in a single toolset increases efficiency and agility, and reduces
costs.
Puppet

• Puppet is a configuration management framework.


• Puppet agents get installed on the devices.
• Agents give us the ability to regularly poll the system to constantly check the
desired state and enforce configurations (manifests) as needed from the
centralized place, the Puppet Master (server).
• Puppet is written in the Ruby language.
Puppet was created in 2005 and has been around the longest compared to Chef and
Ansible. Puppet manages systems in a declarative manner, meaning that you define the
state that the target system should be in without worrying about how it happens. In
reality, that is true for all these tools. Puppet is written in Ruby and refers to its
automation instruction set as Puppet manifests. The major point to realize is that Puppet
is agent-based. Agent-based means a software agent needs to be installed on all devices
that you want to manage with Puppet, such as servers, routers, switches, and firewalls. It
is often not possible to load an agent on many network devices, which limits the
number of devices that can be managed with Puppet out of the box. (Proxy devices can be
used with Puppet to manage devices that cannot run an agent.) However, this requirement
gives Puppet a higher barrier to entry.
Puppet components include a puppet agent that runs on the managed device (node) and a
Puppet Master (server). The Puppet Master typically runs on a separate dedicated server
and serves multiple devices. The operation of the puppet agent involves periodically
connecting to the Puppet Master, which in turn compiles and sends a configuration
manifest to the agent. The agent reconciles this manifest with the current state of the
node and updates the state that is based on differences.
A puppet manifest is a collection of property definitions for setting the state on the
device. The details for checking and setting these property states are abstracted so that a
manifest can be used for more than one operating system or platform. Manifests are
commonly used for defining configuration settings, but they also can be used to install
software packages, copy files, and start services.
Chef

• Chef is open-source configuration management and system orchestration software.
• Chef installs a client (written in Ruby) on every device, and this client performs
the actual configuration.
• Each chef-client has a cookbook that tells how each node in your organization
should be configured.
• The Chef server stores cookbooks, the policies that are applied to the nodes.
• Using the chef-client, nodes ask the Chef server for configuration details.
Chef, another popular configuration management tool, follows much of the same model
as Puppet. Chef transforms complex infrastructure into code, enabling network
infrastructure automation using a declarative, intent-based model. Chef is based in Ruby,
uses a declarative model, is agent-based, and refers to its automation instruction as
recipes.
Chef is an open-source software package that is developed by Chef Software, Inc. The
software package is a systems and cloud infrastructure automation framework that
deploys servers and applications to any physical, virtual, or cloud location, no matter the
size of the infrastructure. Each organization consists of one or more workstations, a single
server, and every node that the chef-client has configured and is maintaining. Cookbooks
and recipes are used to tell the chef-client how each node should be configured. The chef-
client, which is installed on every node, does the actual configuration.
A Chef cookbook is the fundamental unit of configuration and policy distribution. A
cookbook defines a scenario and contains everything that is required to support that
scenario, including libraries, recipes, files, and more. A Chef recipe is a collection of
property definitions for setting state on the device. The details for checking and setting
these property states are abstracted away so that a recipe may be used for more than one
operating system or platform. While recipes are commonly used for defining configuration
settings, they also can be used to install software packages, copy files, start services, and
more.
Ansible

• Ansible is a configuration management orchestrator born from configuration file
management on Linux hosts that has extended to network applications.
• Ansible is a great way to organize scripts to allow for large collections of tasks to be
run together and iterated over any number of devices.
• It uses an agentless push model (easy to adopt).
• It leverages YAML to create Ansible playbooks.
Ansible is an open-source software platform for configuring and managing compute and
switching infrastructure using playbooks. It was created in 2012 as an alternative to
Puppet and Chef. Ansible was later acquired by Red Hat in 2015. Ansible features a state-
driven resource model that describes the desired state of computer systems and services.
It is used to automate the configuration of a company’s compute and switching resources
in an agentless manner. Another notable difference between Puppet, Chef, and Ansible is
that Ansible is written in Python. It also uses YAML as a language to write Playbooks,
resulting in human readable scripts.
Being natively agentless significantly lowers the barrier to entry from an automation
perspective. Since Ansible is agentless, it can integrate and automate any device using any
API. For example, integrations can use REST APIs, NETCONF, SSH, or even SNMP, if
desired.
The key Ansible components are:

• Inventory: Contains the hosts operated by Ansible.


• Modules: Modules are the components that do the actual work in Ansible. They
are what gets executed (applied) in each playbook task.
• Playbooks: A playbook is composed of one or more plays in an ordered list. Each
play executes part of the overall goal of the playbook, running one or more tasks.
Each task calls an Ansible module, which orchestrates, configures, administers, or
deploys. These playbooks describe the policy to be executed on the host or hosts.
People refer to these playbooks as “design plans” which are designed to be
human-readable and are developed in the basic text language, YAML.
• ansible.cfg: The default configuration file that controls the operation of Ansible.
How Ansible works

• The Ansible controller defines a playbook to describe the desired state.


• The controller connects to the target device using SSH (or APIs) and pushes the
configuration.
• The target device applies the playbook with the plays (and its tasks) being
performed in order—as defined in the playbook.
• Tasks are only performed if the resource is not already in the desired state.
A network or DevOps engineer typically defines a playbook with the desired state of the
target device on the Ansible controller. The Ansible controller then establishes an SSH
connection to a target and compares the current state of resources on a target with the
desired state as defined in the playbook. If a change is needed, the controller executes the
playbook and then notifies the administrator of the changed or unchanged state of the
device.
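A minimal playbook sketch is shown below (the inventory group, credential handling, and the use of the cisco.ios.ios_config module are assumptions for illustration):

---
- name: Ensure interface description on Cisco IOS devices
  hosts: ios_routers           # inventory group defined separately
  gather_facts: no
  connection: network_cli      # agentless, SSH-based connection

  tasks:
    - name: Set the interface description
      cisco.ios.ios_config:
        parents: interface GigabitEthernet0/1
        lines:
          - description Uplink to core

Because the module only pushes the lines that are missing, running the playbook again against a device that is already in the desired state reports no change.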

27.6 Explaining the Evolution of Intelligent Networks


Introducing Cisco DNA Center
The following trends in traditional networks present significant challenges:

• There are more users and endpoints and, therefore, more VLANs and subnets. It
becomes more difficult to keep track of and segment all those groups.
• There are so many different types of users coming in to the network that it is
becoming more complex to configure. Multiple steps are required to give users
credentials and support connectivity choices.
• As users and devices move around the network, policy is not consistent, which
makes it difficult to find users when they move around and troubleshoot issues.
Cisco DNA Center is a Cisco SDN controller for enterprise networks—branch, campus, and
WAN. Cisco DNA Center can program the network in an automated way, based on the
application requirements, and it represents a basis for intent-based networking.
Cisco DNA Center provides open programmability APIs for policy-based management and
security through a single controller. It provides an abstraction of the network, which leads
to simplification of the management of network services. This approach automates what
has typically been a tedious manual configuration.
The controller provisions network services consistently and provides rich network
information and analytics across all network resources: both LAN and WAN, wired and
wireless, and physical and virtual infrastructures. This visibility allows you to optimize
services and support new applications and business models. The controller bridges the
gap between open, programmable network elements and the applications that
communicate with them, automating the provisioning of the entire end-to-end
infrastructure.

Intent-based Networking
SDN is a foundational building block of intent-based networking. The good news for SDN
practitioners is that intent-based networking addresses the shortfalls of SDN by adding
automated translation of business policies into IT (security and compliance) policies,
automated deployment of those policies, and assurance that proactively notifies the
operator if the network is not delivering the requested policies. Intent-based
networking adds context, learning and assurance capabilities, by tightly coupling policy
with intent. “Intent” enables the expression of both business purpose and network
context through abstractions, which are then translated to achieve the desired outcome
for network management. SDN is purposely focused on instantiating change in network
functions.

There are three foundational elements of intent-based networking:

• The translation element enables the operator to focus on what they want to
accomplish, and not how they want to accomplish it. The translation element takes
the desired intent and translates it to associated network policies and security
policies. Before applying these new policies, the system checks if these policies are
consistent with the already deployed policies or if they will cause any
inconsistencies.
• Once approved, the new policies are then activated (automatically deployed across
the network).
• With assurance, an intent-based network performs continuous verification that the
network is operating as intended. Any discrepancies are identified; root-cause
analysis can recommend fixes to the network operator. The operator can then
"accept" the recommended fixes to be automatically applied, before another cycle
of verification. Assurance does not occur at discrete times in an intent-based
network. Continuous verification is essential since the state of the network is
constantly changing. Continuous verification assures network performance and
reliability.
Cisco DNA Center Features and Tools
Cisco DNA provides a single dashboard for managing and controlling the enterprise
network. It uses workflows to simplify provisioning of user access policies combined with
advanced assurance capabilities. It also provides open platform APIs, adapters, and SDKs
for integration with business applications and orchestrators.

How does Cisco DNA Center work? The enterprise programmable network infrastructure
sends data to the Cisco DNA Center Appliance. The appliance activates features and
capabilities on your network devices using Cisco DNA software. Everything is managed
from the Cisco DNA Center dashboard.
Cisco DNA Center is a software solution that resides on the Cisco DNA Center appliance.
The Cisco DNA Center dashboard provides an overview of network health and helps in
identifying and remediating issues. Automation and orchestration capabilities provide
zero-touch provisioning based on profiles, facilitating network deployment in remote
branches. Advanced assurance and analytics capabilities use deep insights from devices,
streaming telemetry, and rich context to deliver an experience while proactively
monitoring, troubleshooting, and optimizing your wired and wireless network.

The following are the tools of Cisco DNA Center:

• Discovery: This tool scans the network for new devices.


• Inventory: This tool provides the inventory for devices.
• Topology: This tool helps you to discover and map network devices to a physical
topology with detailed device-level data.
• Image Repository: This tool helps you to automatically download and manage
physical and virtual software images.
• Command Runner: This tool allows you to run diagnostic CLI commands against
one or more devices.
• License Manager: This tool visualizes and manages license usage.
• Template Editor: This tool is an interactive editor to author CLI templates.
• Network Plug and Play: This tool provides a simple and secure approach to
provision networks with a near zero touch experience.
• Telemetry: This tool provides telemetry design and provisioning.
• Data and Reports: This tool provides access to data sets and schedules data
extracts for download in multiple formats like PDF reports, comma-separated
values (CSV), Tableau, and so on.
Using Cisco DNA Center for Path Tracing
The Cisco DNA Center path trace service analysis allows you to examine the path that a
specific type of packet travels as it makes its way across the network from a source to a
destination node.
The path takes into account not only the source and destination IP addresses, but also the
TCP or UDP source and destination ports. This way, if there are specific configuration
settings that are related to these packet fields that would impact forwarding behavior,
then Cisco DNA Center will take these factors into account (for example, by showing
where the specific traffic gets blocked).
The result of a path trace is a visual and textual representation of the path that a packet
takes across all the devices, and links between the source and destination.

When you fill in the fields for the source, destination, and optionally the application, the
path trace is initiated. The output for a path trace consists of two elements:

• The graphical display of the path between the hosts.


• The list of each device along the path, with details about the interfaces.
This example shows how traffic passes between two particular devices and over all
network devices in a communication flow.
27.7 Explaining the Evolution of Intelligent Networks
Introducing Cisco SD-Access
Over the years, the networking technologies that have been the foundation of
interconnectivity between clients, devices, and applications have generally remained
static. While IT teams have a number of technology choices about ways to design and
operate their networks, there has not been a comprehensive, turnkey solution to address
the rapidly evolving enterprise needs around mobility, IoT, cloud, and security.
A slow-to-deploy network impedes the ability of many organizations to innovate rapidly
and adopt new technologies such as video, collaboration, and connected workspaces. The
ability of a company to adopt any of these is impeded if the network is slow to change and
adapt. In addition, one of the major challenges with wireless deployment today is that it
does not easily utilize network segmentation. While wireless can leverage multiple SSIDs
for traffic separation over the air, these are limited in the number that can be deployed
and are ultimately mapped back into VLANs at the WLC. The WLC itself has no concept of
virtual routing and forwarding (VRF) or Layer 3 segmentation, making deployment of a
true wired and wireless network virtualization solution very challenging.
Policy is one of those abstract words that can mean many different things to many
different people. However, in the context of networking, every organization has multiple
policies that they implement. Use of security ACLs on a switch, or security rulesets on a
firewall, is security policy. Using QoS to sort traffic into different classes, and using queues
on network devices to prioritize one application versus another, is QoS policy. Placing
devices into separate VLANs based on their role is device-level access control policy. The
traditional methods used today for policy administration (large and complex ACLs on
devices and firewalls) are difficult to implement and maintain. Also, most organizations
want to establish user and device identity and use it end to end for policy. Finally,
most organizations lack comprehensive visibility into network operation, limiting their
ability to proactively respond to changes. All of these factors influence how long it takes for a new
network service to be deployed. A more comprehensive, end-to-end approach is needed,
one that allows insights to be drawn from the mass of data that potentially can be
reported from the underlying infrastructure.
The Cisco Software-Defined Access (SD-Access) solution is a programmable network
architecture that provides software-based policy and segmentation from the edge of the
network to the applications. SD-Access is implemented via Cisco DNA Center, which
provides design settings, policy definition, and automated provisioning of the network
elements, as well as assurance analytics for an intelligent wired and wireless network.
In an enterprise architecture, the network may span multiple domains, locations, or sites
such as main campuses and remote branches, each with multiple devices, services, and
policies. The Cisco SD-Access solution offers an end-to-end architecture that ensures
consistency in terms of connectivity, segmentation, and policy across different locations
(sites).

Cisco SD-Access comprises these elements:

• Cisco DNA Center: automation, policy, assurance, and integration infrastructure


• SD-Access fabric: physical and logical network forwarding infrastructure
SD-Access Management with Cisco DNA Center
Cisco DNA Center provides a central management plane for building and operating an SD-
Access fabric. The management plane is responsible for forwarding configuration and
policy distribution, as well as device management and analytics.
There are two main functions of Cisco DNA Center: automation and assurance.
Cisco DNA Center automation provides the definition and management of SD-Access
group-based policies, along with the automation of all policy-related configurations. Cisco
DNA Center integrates directly with Cisco ISE to provide host onboarding and policy
enforcement capabilities. With SD-Access, Cisco DNA Center uses controller-based
automation as the primary configuration and orchestration model, to design, deploy,
verify, and optimize wired and wireless network components for both nonfabric and
fabric-based deployments.
Network assurance quantifies availability and risk from an IT network perspective, based
on a comprehensive set of network analytics. Beyond general network management,
network assurance measures the impact of network change on security, availability, and
compliance.
The key enabler to Cisco DNA Assurance is analytics—the ability to continually collect data
from the network and transform it into actionable insights. To achieve this, Cisco DNA
Center collects a variety of network telemetry, in traditional forms (SNMP, NetFlow,
syslogs, and so on) and also emerging forms (NETCONF, YANG, streaming telemetry, and
others). Cisco DNA Assurance then performs advanced processing to evaluate and
correlate events to continually monitor how devices, users, and applications are
performing.
Correlation of data is key since it allows for troubleshooting issues and analyzing network
performance across both the overlay and underlay portions of the SD-Access fabric. Other
solutions often lack this level of correlation and thus lose visibility into underlying traffic
issues that may affect the performance of the overlay network. By providing correlated
visibility into both underlay and overlay traffic patterns and usage via fabric-aware
enhancements to NetFlow, SD-Access ensures that network visibility is not compromised
when a fabric deployment is used.
SD-Access Fabric
Part of the complexity in a network comes from the fact that policies are tied to network
constructs such as IP addresses, VLANs, ACLs, and so on. The concept of fabric changes
that. With a fabric, an enterprise network is thought of as being divided into two different
layers, each for different objectives. One layer is dedicated to the physical devices and
forwarding of traffic (known as an underlay), and the other entirely virtual layer (known as
an overlay) is where wired and wireless users and devices are logically connected
together, and services and policies are applied. This provides a clear separation of
responsibilities and maximizes the capabilities of each sublayer while dramatically
simplifying deployment and operations since a change of policy would only affect the
overlay and the underlay would not be touched.
The combination of an underlay and an overlay is called a "network fabric".
The concepts of overlay and fabric are not new in the networking industry. Existing
technologies such as Multiprotocol Label Switching (MPLS), Generic Routing Encapsulation
(GRE), Locator/ID Separation Protocol (LISP), and Overlay Transport Virtualization (OTV)
are all examples of network tunneling technologies that implement an overlay. Another
common example is Cisco Unified Wireless Network (Cisco UWN), which uses Control and
Provisioning of Wireless Access Points (CAPWAP) to create an overlay network for wireless
traffic.
The SD-Access architecture is supported by a fabric technology implemented for the
campus, enabling the use of virtual networks (overlay networks) running on a physical
network (underlay network) creating alternative topologies to connect devices.
SD-Access network underlay (or simply, underlay) is comprised of the physical network
devices, such as routers, switches, and WLCs, plus a traditional Layer 3 routing protocol.
This provides a simple, scalable, and resilient foundation for communication between the
network devices. The network underlay is not used for client traffic (client traffic uses the
fabric overlay).
All network elements of the underlay must establish IPv4 connectivity between each
other. This means an existing IPv4 network can be leveraged as the network underlay.
Although any topology and routing protocol could be used in the underlay, the
implementation of a well-designed Layer 3 access topology (that is, a routed access
topology) is highly recommended. Using a routed access topology (leveraging routing all of
the way down to the access layer) eliminates the need for Spanning Tree Protocol (STP),
VLAN Trunk Protocol (VTP), Hot Standby Router Protocol (HSRP), Virtual Router
Redundancy Protocol (VRRP), and other similar protocols in the network underlay,
simplifying the network and at the same time increasing resiliency and improving fault
tolerance.
Cisco DNA Center provides a prescriptive LAN automation service to automatically
discover, provision, and deploy network devices according to Cisco design best practices.
Once discovered, the automated underlay provisioning leverages plug-and-play (PnP) to
apply the required IP address and routing protocol configurations.
The SD-Access fabric overlay (or simply, overlay) is the logical, virtualized topology built on
top of the physical underlay. An overlay network is created on top of the underlay to
create a virtualized network. In the SD-Access fabric, the overlay networks are used for
transporting user traffic within the fabric. The fabric encapsulation also carries scalable
group information used for traffic segmentation inside the overlay. The data plane traffic
and control plane signaling are contained within each virtualized network, maintaining
isolation among the networks as well as independence from the underlay network. The
SD-Access fabric implements virtualization by encapsulating user traffic in overlay
networks using IP packets that are sourced and terminated at the boundaries of the fabric.
The fabric boundaries include borders for ingress and egress to a fabric, fabric edge
switches for wired clients, and fabric APs for wireless clients. Overlay networks can run
across all or a subset of the underlay network devices. Multiple overlay networks can run
across the same underlay network to support multitenancy through virtualization.
A fundamental benefit of SD-Access is the ability to instantiate logical network policy,
based on services offered by the fabric.
There are three primary types of policies that can be automated in the SD-Access fabric:

• Security: Access control policy, which dictates who can access what
• QoS: Application policy, which invokes the QoS service to provision differentiated
access to users on the network, from an application experience perspective
• Copy: Traffic copy policy, which invokes the traffic copy service for monitoring
specific traffic flows
These services are offered across the entire fabric, independently of device-specific
address or location.
SD-Access benefits
SD-Access provides automated end-to-end services (such as segmentation, QoS, and
analytics) for user, device, and application traffic. SD-Access automates user policy so
organizations can ensure that the appropriate access control and application experience
are set for any user or device to any application across the network. This is accomplished
with a single network fabric across LAN and WLAN, which creates a consistent user
experience, anywhere, without compromising on security.
SD-Access benefits include the following:

• Automation: Plug-and-play for simplified deployment of new network devices,


along with consistent management of wired and wireless network configuration
provisioning
• Policy: Automated network segmentation and group-based policy
• Assurance: Contextual insights for fast issue resolution and capacity planning
• Integration: Open and programmable interfaces for integration with third-party
solutions

27.8 Explaining the Evolution of Intelligent Networks


Introducing Cisco SD-WAN
The traditional WAN function was connecting users at the branch or campus to
applications hosted on servers in the data center. Typically, dedicated MPLS circuits were
used to help ensure security and reliable connectivity. This no longer works in a cloud-
centric world, because WAN networks designed for a different era are not ready for the
unprecedented explosion of WAN traffic that cloud adoption brings. That traffic causes
management complexity, application performance unpredictability, and data vulnerability.
Cisco SD-WAN is a software-defined approach to managing WANs. Cisco SD-WAN
simplifies the management and operation of a WAN by separating the networking
hardware from its control mechanism. This solution virtualizes much of the routing that
used to require dedicated hardware.
SD-WAN represents an evolution of networking from an older, hardware-based model to a
secure, software-based, virtual IP fabric. The overlay network forms a software overlay
that runs over standard network transport services, including the public internet, MPLS,
and broadband. The overlay network also supports next-generation software services,
thereby accelerating the shift to cloud networking.
The Cisco SD-WAN solution is comprised of separate orchestration, management, control,
and data planes.

• The orchestration plane assists in the automatic onboarding of the SD-WAN


routers into the SD-WAN overlay.
• The management plane is responsible for centralized configuration and
monitoring.
• The control plane builds and maintains the network topology and makes decisions
on where traffic flows.
• The data plane is responsible for forwarding packets based on decisions from the
control plane.

The primary components for the Cisco SD-WAN solution consist of the vManage network
management system (management plane), the vSmart controller (control plane), the
vBond orchestrator (orchestration plane), and the vEdge router (data plane). The
components are:
• Management plane (vManage): Centralized network management system
provides a GUI interface to monitor, configure, and maintain all Cisco SD-WAN
devices and links in the underlay and overlay network.
• Control plane (vSmart Controller): This software-based component is responsible
for the centralized control plane of the SD-WAN network. It establishes a secure
connection to each vEdge router and distributes routes and policy information via
the Overlay Management Protocol (OMP). It also orchestrates the secure data
plane connectivity between the vEdge routers by distributing crypto key
information.
• Orchestration plane (vBond Orchestrator): This software-based component
performs the initial authentication of vEdge devices and orchestrates vSmart and
vEdge connectivity. It also has an important role in enabling the communication of
devices that sit behind Network Address Translation (NAT).
• Data plane (vEdge Router): This device, available as either a hardware appliance or
software-based router, sits at a physical site or in the cloud and provides secure
data plane connectivity among the sites over one or more WAN transports. It is
responsible for traffic forwarding, security, encryption, QoS, routing protocols such
as BGP and OSPF, and more.
• Programmatic APIs (REST): Programmatic control over all aspects of vManage
administration.
• Analytics (vAnalytics): Adds a cloud-based predictive analytics engine for Cisco SD-
WAN.

This sample topology depicts two sites and two public internet transports. The SD-WAN
controllers (the two vSmart controllers), and the vBond orchestrator, along with the
vManage management GUI that resides on the internet, are reachable through either
transport.
At each site, vEdge routers are used to directly connect to the available transports. Colors
are used to identify an individual WAN transport, as different WAN transports are
assigned different colors, such as mpls, private1, biz-internet, metro-ethernet, lte, and so
on. In this topology, each of the two internet transports is assigned its own color (for
example, biz-internet and public-internet).
The vEdge routers form a Datagram Transport Layer Security (DTLS) or Transport Layer
Security (TLS) control connection to the vSmart controllers and connect to both of the
vSmart controllers over each transport. The vEdge routers securely connect to vEdge
routers at other sites with IPsec tunnels over each transport. The Bidirectional Forwarding
Detection (BFD) protocol is enabled by default and will run over each of these tunnels,
detecting loss, latency, jitter, and path failures.
Policies are an important part of the Cisco SD-WAN solution and are used to influence the
flow of data traffic among the vEdge routers in the overlay network. Policies apply either
to control plane or data plane traffic and are configured either centrally on vSmart
controllers (centralized policy) or locally (localized policy) on vEdge routers.
Centralized control policies operate on the routing and transport location (TLOC)
information and allow for customizing routing decisions and determining routing paths
through the overlay network. These policies can be used in configuring traffic engineering,
path affinity, service insertion, and different types of VPN topologies (full-mesh, hub-and-
spoke, regional mesh, and so on). Another centralized control policy is application-aware
routing, which selects the optimal path based on real-time path performance
characteristics for different traffic types. Localized control policies allow you to affect
routing policy at a local site.
Data policies influence the flow of data traffic through the network based on fields in the
IP packet headers and VPN membership. Centralized data policies can be used in
configuring application firewalls, service chaining, traffic engineering, and QoS. Localized
data policies allow you to configure how data traffic is handled at a specific site, such as
ACLs, QoS, mirroring, and policing. Some centralized data policy may affect handling on
the vEdge itself, as in the case of app-route policies or a QoS classification policy. In these
cases, the configuration is still downloaded directly to the vSmart controllers, but any
policy information that needs to be conveyed to the vEdge routers is communicated through the Overlay Management Protocol (OMP).
28.1 Introducing System Monitoring
Introduction
The first step in understanding how the network performs is to gather as much
information about the network as possible. Often the existing documentation does not
provide sufficient information, because the most recent condition of the network is
required. This is where network audits and traffic and events analysis can provide the key
information that is needed and where system monitoring tools become important.
System monitoring is necessary to get a good overall picture of the network and can help
you quickly recognize issues and consequently make sure that the network performs as it
should. It also provides you with a proper network performance baseline so that you have
a comparison tool when troubleshooting.
Enterprises want to have proactive systems and find anomalies in their networks quicker.
They want to implement a central network management system (NMS), which
communicates with a few crucial protocols that are used in system monitoring. Examples
of such protocols are syslog and Simple Network Management Protocol (SNMP), whose
reporting should be configured on devices so that network or device events can be
forwarded to a central server, which can then provide a larger picture of the events
happening in the network. For this approach to work smoothly and efficiently, proper time
synchronization is important so that you can build a picture of the sequence of events
when multiple network components or networks are affected.
As a networking engineer, you will frequently work with system monitoring tools, and you need a good understanding of these important ideas:
• The purpose of time synchronization and how important it is that you have synchronized time on all network devices
• The structure of Cisco IOS system messages and how they can be stored on a centrally located external server for better readability and analysis
• Usage of the SNMP protocol to monitor performance of network devices
28.2 Introducing System Monitoring
Introducing Syslog
Syslog is a protocol that allows a device to send event notification messages across IP
networks to event message collectors. By default, a network device sends the output from
system messages and debug-privileged EXEC commands to a logging process. The logging
process controls the distribution of logging messages to various destinations, such as the
logging buffer, console line, terminal lines, or a syslog server, depending on your
configuration. Logging services enable you to gather logging information for monitoring
and troubleshooting, to select the type of logging information that is captured, and to
specify the destinations of captured syslog messages.
Note: The syslog receiver is commonly called syslogd, syslog daemon, or syslog server.
Syslog messages can be sent via UDP (port 514) or TCP (port 6514).
You can set the severity level of the messages to control the type of messages that the
consoles display and where the messages are displayed. You can configure the device to
add time stamps to log messages, and to set the syslog source address, to enhance real-
time debugging and management.
You can access logged system messages by using the device CLI or by saving them to a
syslog server. The switch or router software saves syslog messages in an internal buffer.
You can remotely monitor system messages by viewing the logs on a syslog server or by
accessing the device through Telnet, Secure Shell (SSH), or through the console port.
Administrators usually see syslog messages on the router console. Here are common
syslog messages that you may have seen:
28.3 Introducing System Monitoring
Syslog Message Format
The full format of a syslog message as sent by a device has three distinct parts:
• PRI (priority)
• Header
• MSG (message text)
The syslog packet size is limited to 1024 bytes.
Priority
Priority is an 8-bit number and its value represents the facility and severity of the
message. The three least significant bits represent the severity of the message (with 3 bits,
you can represent eight different severity levels), and the upper 5 bits represent the
facility of the message.
You can use the facility and severity values to apply certain filters on the events in the
syslog daemon.
Note: The priority and facility values are created by the syslog clients (applications or
hardware) on which the event is generated. The syslog server is just an aggregator of the
messages.
Facility
Syslog messages are broadly categorized based on the sources that generate them. These
sources can be the operating system, process, or an application. The source is defined in a
syslog message by a numeric value.
These integer values are called facilities. The local use facilities are not reserved; the
processes and applications that do not have preassigned facility values can choose any of
the eight local use facilities. As such, Cisco devices use one of the local use facilities for
sending syslog messages.
By default, Cisco IOS Software-based devices use facility local7. Most Cisco devices provide
options to change the facility level from their default value.
This table lists all facility values.
Severity
The log source or facility (a router or mail server, for example) that generates the syslog
message specifies the severity of the message using single-digit integers 0-7.
The severity levels are often used to filter out messages which are less important, to make
the amount of messages more manageable. Severity levels define how severe the issue
reported is, which is reflected in the severity definitions in the table.
The following table explains the eight levels of message severity, from the most severe
level to the least severe level.
Header
The header contains these fields:
• Time stamp
• Hostname
Time Stamp
The time stamp field is used to indicate the local time, in MMM DD HH:MM:SS format, of
the sending device when the message is generated.
For the time stamp information to be accurate, it is good administrative practice to
configure all the devices to use the Network Time Protocol (NTP). In recent years,
however, the time stamp and hostname in the header field have become less relevant in
the syslog packet itself because the syslog server will time stamp each received message
with the server time when the message is received, as well as the IP address (or
hostname) of the sender, taken from the source IP address of the packet.
A correct sequence of events is vital for troubleshooting in order to accurately determine
the cause of an issue. Often an informational message can indicate the cause of a critical
message. The events can follow each other by milliseconds.
Hostname
The hostname field consists of the host name (as configured on the host) or the IP
address. In devices such as routers or firewalls, which have multiple interfaces, syslog uses
the IP address of the interface from which the message is transmitted.
The terms "host name" and "hostname" are easily confused. The latter is typically associated with a Domain Name System (DNS) lookup. If the device includes its configured host name in the message, it may be (and often is) different from the DNS hostname of the device. A properly configured DNS system should include reverse lookups to help facilitate proper sourcing for incoming messages.
Syslog MSG
The message is the text of the syslog message, with additional information about the
process that generated the message.
How to Read System Messages
The general format of syslog messages that the syslog process on Cisco IOS Software generates by default is structured as follows:
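A sketch of that structure (the sequence number and time stamp are present only when the corresponding service sequence-numbers and service timestamps commands are enabled):

seq no:timestamp: %facility-severity-MNEMONIC: description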
The following table explains the items that a Cisco IOS Software syslog message contains.
An example of a syslog message that is informing the administrator that FastEthernet0/22 came up follows (note that this message does not contain a sequence number):
*Apr 22 11:05:55.423: %LINEPROTO-5-UPDOWN: Line protocol on Interface
FastEthernet0/22, changed state to up
Note these elements of the syslog message:
• LINEPROTO is the facility code.
• 5 is the severity level.
• UPDOWN is the mnemonic code.
• "Line protocol on Interface FastEthernet0/22, changed state to up" is the
description.
Note: The definition of "facility" in a Cisco System Message on Cisco IOS Software is not
the same as the RFC definition of "facility" (such as local7). Cisco facilities are a free-form
method of identifying the source message type such as SYS, IP, LDP, L2, MEM, FILESYS,
DOT11, LINEPROTO, and so on. (The list is very large.)
This table explains some of the facility codes that you may see in a Cisco IOS Software
syslog message:
Note: Many more facility codes exist and can be found here:
https://www.cisco.com/c/en/us/td/docs/ios/15_0sy/system/messages/15sysmg/sm15syo
vr.html
Note that sequence numbers are not enabled by default. You can change this behavior
with the following commands:
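A minimal sketch, entered in global configuration mode:

Router(config)# service sequence-numbers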
Note that time stamps are enabled by default because it is much easier to identify the
problem in a chronological order if you can see the time stamps on syslog messages. The
time stamp can be turned off with the following command:
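For example, time stamping of log messages could be disabled like this (generally not recommended in production):

Router(config)# no service timestamps log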
Within Cisco IOS Software, the severity levels that are associated with events often relate
more to device health and network management than to security. For example, the
following four messages are listed in order of severity:
Obviously, a power supply failure (severity level 1) is an urgent issue, as it affects the
operating health of a device and of the network in which it resides. An interface failure
(severity level 3) is generally less severe than a complete device failure, but it can certainly
affect the device and the network. A configuration change (severity level 5) is routine in
network maintenance and is assigned a relatively low severity. But from a security
perspective, auditing configuration changes is very important. The last example is a logged
hit on an access control list (ACL). If a security administrator has specified the log option
on a particular line in the ACL, this event is probably significant. However, the severity
level is only a 6. It is important to note that the severity levels on syslog messages are not
necessarily prioritized according to security.
Syslog Configuration
By default, the console receives debugging messages and numerically lower levels. To
change the level of messages that are sent to the console, use the logging console level
command. If severity level 0 is configured, it means that only emergency-level messages
will be displayed. For example, if severity level 4 is configured, all messages with severity
levels up to 4 will be displayed (Emergency, Alert, Critical, Error, and Warning).
Note: Network devices should log levels 0–6 under normal operation. Level 7 should be
used for console troubleshooting only.
While logging to the console is enabled by default, it is very expensive in terms of CPU
resources on a Cisco IOS device. The reason is that the console is a character-by-character
serial device. Each character that is displayed to the console requires a CPU interrupt. As
such, it is common to disable logging to the console when logging to a centralized syslog
server is configured.
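As a sketch, console logging can be limited to a given severity, or turned off entirely when a central syslog server is in use:

Router(config)# logging console warnings
Router(config)# no logging console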
To log messages to a syslog server, specify a syslog server host as a destination for syslog
messages and limit the syslog messages that are sent to the syslog server based on
severity, as shown in the example:
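A minimal sketch of such a configuration, consistent with the description that follows:

Router(config)# logging host 10.1.1.10
Router(config)# logging trap informational
Router(config)# logging source-interface Loopback0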
The example shows a configuration for logging syslog messages to a syslog server with the
IPv4 address 10.1.1.10. The router will send syslog messages with interface Loopback0’s
IPv4 address. The details of the commands used are shown in the table.
Note: After you use the logging ip-address command, the router will start to send syslog
messages to that IP address, even if no syslog server is configured there.
The Cisco IOS devices can also send syslog messages to multiple syslog servers. To do so,
you have to enter multiple logging host ip-address commands, each with a different IP
address.
If you want to check syslog messages that are stored in the router, you can use the show
logging command. This command also shows you how many messages are logged to
various destinations, and what severity level is configured for that destination.
The output indicates that R1 is now sending syslog messages to 10.1.1.10 with the
minimum severity threshold set to "informational." The output also indicates that five
messages have been sent to the syslog server. Syslog uses UDP for transport and is
inherently not reliable. If these five messages are lost somewhere in the transport path,
there is no mechanism to recognize the lost message or to request a retransmission.
There is a local logging buffer. It is in its default state, with a severity threshold of
"debugging" (severity 7) and sized at 4096 bytes. In the sample output, 29 messages have
been logged in the local buffer. The end of the show logging command output displays the
contents of the buffer. At the start, the buffer is mostly filled with the messages that were
produced when R1 booted. At the end of the buffer, however, are the three syslog
messages that were produced when a no shutdown command was issued on the router.
28.4 Introducing System Monitoring
SNMP Overview
In a complex network of routers, switches, and servers, it can be daunting to manage all
devices on your network, and make sure that they are not only up and running, but also
performing optimally. SNMP was introduced to meet the growing need for a standard of
managing IP devices.
SNMP allows an NMS to retrieve the environment and performance parameters of a
network device. The NMS will collect and process the data.
SNMP is a management protocol that supports message exchange.
• SNMP manager: Polls agents on the network and displays information.
• SNMP agent: Stores information and responds to manager requests. It also
generates traps, which are unsolicited messages.
o You can set thresholds to trigger a trap notification process when values
are exceeded.
• MIB: Contains a database of objects (information variables).
SNMP is an application layer protocol that defines how SNMP managers and SNMP agents
exchange management information. SNMP uses the UDP transport mechanism to retrieve
and send management information, such as MIB variables.
SNMP is broken down into these three components:
• SNMP manager: Periodically polls the SNMP agents on managed devices by
querying the device for data. The SNMP manager can be part of an NMS such as
Cisco Prime Infrastructure.
• SNMP agent: Runs directly on managed devices, collects device information, and
translates it into a compatible SNMP format according to the MIB.
• MIB: Represents a virtual information storage location that contains collections of
managed objects. Within the MIB, there are objects that relate to different defined
MIB modules (for example, the interface module).
Routers and other network devices locally keep statistics about their processes and interfaces. SNMP on a device runs a special process that is called an agent. This agent can be queried using SNMP. SNMP is typically used to gather
environment and performance data such as device CPU usage, memory usage, interface
traffic, interface error rate, and so on. By periodically querying or "polling" the SNMP
agent on a device, an NMS can gather or collect statistics over time. The NMS polls devices
periodically to obtain the values defined in the MIB objects that it is set up to collect. It
then offers a look into historical data and anticipated trends. Based on SNMP values, the
NMS triggers alarms to notify network operators.
To obtain information from the MIB on the SNMP agent, you can use several different
operations:
• Get: This operation is a request sent by the manager to the SNMP agent to retrieve
one or more values from the MIB of the managed device.
• Get-next: This operation is used to get the next object in the MIB from an SNMP
agent.
• Get-bulk: This operation allows a management application to retrieve a large
section of a table at once.
• Set: This operation is used to put information in the MIB from an SNMP manager.
• Trap: This operation is used by the SNMP agent to send a triggered piece of
information to the SNMP manager.
• Inform: This operation is the same as a trap, but it adds an acknowledgment that a
trap does not provide.
The SNMP manager polls the SNMP agents and queries the MIB via SNMP agents on UDP
port 161.
The SNMP agent can also send triggered messages called traps to the SNMP manager, on
UDP port 162. For example, if the interface fails, the SNMP agent can immediately send a
trap message to the SNMP manager, notifying the manager about the interface status.
This feature is extremely useful because you can get information almost immediately
when something happens. Remember, without traps the SNMP manager only learns about an event on its next poll of the agent. Depending on the polling interval, this could mean, for example, a 10-minute delay.
The SNMP trap operation is shown in the example.
1. Interface Ethernet0/0 fails on the branch router.
2. The branch router sends an SNMP trap to the NMS, informing that interface
Ethernet0/0 has failed.
3. The NMS receives the SNMP trap and raises an alarm, which notifies the network
operations center (NOC), which in turn can proactively solve the problem or notify
the customer regarding the problem.
Note: In step 3, the role of SNMP is just to send the trap. All other actions are performed
by NMS and NOC, if present.
All versions of SNMP utilize the concept of the MIB. The MIB organizes configuration and
status data into a tree structure. The figure below shows a small portion of an MIB tree
structure.
Objects in the MIB are referenced by their object ID (OID), which specifies the path from
the tree root to the object. For example, system identification data is located under
1.3.6.1.2.1.1. Some examples of system data include the system name (OID
1.3.6.1.2.1.1.5), system location (OID 1.3.6.1.2.1.1.6), and system uptime (OID
1.3.6.1.2.1.1.3).
Note that the following commands are not available on Cisco IOS Software, but they are
shown as an example of what you can achieve with SNMP. In these examples, a Linux PC is
used.
The snmpwalk command recursively pulls data from the MIB tree, starting from the
specified location. For example, you could use it to show which interfaces exist on a
router.
The snmpwalk command essentially performs a whole series of get-next requests
automatically for you and stops when it returns results that are no longer inside the range
of the OID that you originally specified.
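A sketch of such a query from a Linux host (the community string public and the router address 10.1.10.1 are made-up values; 1.3.6.1.2.1.2.2.1.2 is the ifDescr subtree, so each returned row corresponds to one interface):

snmpwalk -v2c -c public 10.1.10.1 1.3.6.1.2.1.2.2.1.2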
You can also use the snmpset command to reset the interface. In this example, the no
shutdown command was issued on Serial2/0 via the snmpset command.
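A sketch of that operation (the write community private, the router address, and the interface index 3 for Serial2/0 are assumptions; 1.3.6.1.2.1.2.2.1.7 is ifAdminStatus, and the integer value 1 means up):

snmpset -v2c -c private 10.1.10.1 1.3.6.1.2.1.2.2.1.7.3 i 1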
Here is the syslog message on the router, which shows that interface Serial2/0 changed
state to up.
*Apr 10 18:35:00.273: %SYS-5-CONFIG_I: Configured from 10.1.10.10 by snmp
*Apr 10 18:35:02.274: %LINK-3-UPDOWN: Interface Serial2/0, changed state to up
*Apr 10 18:35:03.278: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial2/0,
changed state to up
However, working with long MIB variable names like 1.3.6.1.2.1.2.2.1.2 can be
problematic for the average user. More commonly, the network operations staff uses a
network management product with an easy-to-use GUI, with the entire MIB data variable
naming transparent to the user.
When dealing with SNMP, other useful tools are the Cisco SNMP Object Navigator and
Cisco IOS MIB Locator. The Cisco SNMP Object Navigator allows you to find more details
about a particular OID, and the Cisco IOS MIB Locator can tell you which OIDs exist on a
particular Cisco platform or product. Both features are extremely helpful when you want
to create a new graph in your NMS for a particular set of OID return values.
Note: The tools that are mentioned above can be found here:
https://mibs.cloudapps.cisco.com/ITDIT/MIBS/servlet/index.
Use Case: Using SNMP to Gather Information
You are redesigning a network for a customer. An engineer on their side pointed out that
some users are complaining about slow internet connection. The engineer is asking you to
take this issue into account during the redesign.
You can use SNMP to monitor the behavior of the router that is connected to the internet.
CPU, memory, and link overutilization are usually the reason for a router's poor
performance.
Note: SNMP can only be used to interact with devices under your control. Devices and
services that exist outside of your network and may be actually the ones causing the issue
cannot be inspected by you using SNMP.
A network management application (for example, Cisco Prime Infrastructure) can display
data that is gathered via SNMP in the form of graphs and reports.
To gather information, configure SNMP on a router to gather performance data such as
CPU and memory usage, interface traffic, and so on. Send the data to the network
management system and represent it graphically. One example of such a system is Cacti,
an open source network monitoring solution.
From the graphs in the example, you can determine that the router has high CPU usage.
You have read the network documentation and determined that the customer's router
that is connected to the internet is a Cisco 1941 Series Integrated Services Router. It has a
limitation of 150-Mbps throughput, but since the customer is using a VPN—traffic
encryption is performed—the limitation is around 60 Mbps, according to Cisco
documentation. You can conclude that the router has increased CPU usage due to high
traffic on the interface that is connected to the internet (average value is a bit less than 58
Mbps). So, the router cannot process all the traffic; therefore, the users are experiencing
slow internet connectivity. To complete the test, you should also confirm the CPU usage
when there is no congestion and verify whether the user experience is flawless at low
load.
Consider the gathered information when redesigning the network. It might be time to install a more powerful router on the network. Also make sure that all the processes running on the router are relevant to the operation of your network and that the CPU load is not caused by an unnecessary service, such as console log output being left enabled after a troubleshooting session.
SNMP Versions
SNMP has evolved through three versions. SNMP versions 1 and 2 do not provide much
security. Operations are controlled with community strings, which function as
authentication strings (a password). Community strings can be read-only or read-write.
GET requests will be honored if the network management system provides either a valid
read-only or read-write community string. SET requests require the network management
system to provide a read-write community string. With SNMP versions 1 and 2, all data is
sent in cleartext, including the community strings. If an attacker can sniff the SNMP
communications, they can extract the read-write community strings. It is just as
dangerous for an attacker to have the read-write community string as it is for an attacker
to have the enable secret password for a Cisco IOS device, because the attacker can issue
SET commands on devices.
SNMPv3 adds a well-developed security model. SNMPv3 authentication verifies the origin
and data integrity. That is, SNMPv3 verifies the originator of the message and that the
message has not been altered in transit. SNMPv3 also offers privacy via encryption. Both
mechanisms are optional.
There are currently three versions of SNMP.
The following list describes the different versions of SNMP:
• SNMP version 1: SNMPv1 is the initial version of SNMP. SNMPv1 security is based
on communities that are nothing more than passwords: plaintext strings that allow
any SNMP-based application that knows the strings to gain access to the
management information of a device. There are typically three communities in
SNMPv1: read-only, read-write, and trap. A key security flaw in SNMPv1 is that the
only authentication available is through a community string. Anyone who knows
the community string is allowed access. Adding to this problem is the fact that all
SNMPv1 packets pass across the network unencrypted. Therefore, anyone who
can sniff a single SNMP packet now has the community string that is needed to get
access.
• SNMP version 2c: SNMPv2 was the first attempt to fix SNMPv1 security flaws.
However, SNMPv2 never really took off. The only prevalent version of SNMPv2
today is SNMPv2c, which contains SNMPv2 protocol enhancements but leaves out
the security features that no one could agree on. The letter "c" designates v2c as
being "community-based," which means that it uses the same authentication
mechanism as v1: community strings.
• SNMP version 3: SNMPv3 is the latest version. It adds support for strong
authentication and private communication between managed entities. You can
define a secure policy for each group, and optionally you can limit the IP addresses
to which its members can belong. You have to define encryption and hashing
algorithms and passwords for each user. The key security additions to SNMPv3 are
as follows:
o Can use Message Digest 5 (MD5) or Secure Hash Algorithm (SHA) hashes for
authentication
o Can encrypt the entire packet
o Can guarantee message integrity
SNMPv3 introduces three levels of security:
• Security level noAuthNoPriv: No authentication is required, and no privacy (encryption) is provided.
• Security level authNoPriv: Authentication is required, but no encryption is
provided.
• Security level authPriv: In addition to authentication, encryption is also used.
Note: Neither SNMPv1 nor SNMPv2c offer security features. Specifically, SNMPv1 and
SNMPv2c can neither authenticate the source of a management message nor provide
encryption.
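As a rough configuration sketch (the community string, trap receiver address, group name, and credentials are placeholder values), SNMPv2c read-only access (suitable only where its weak security is acceptable) and an SNMPv3 user with authentication and encryption could be enabled like this:

Router(config)# snmp-server community MONITORING ro
Router(config)# snmp-server host 10.1.1.10 version 2c MONITORING
Router(config)# snmp-server group NETOPS v3 priv
Router(config)# snmp-server user nmsuser NETOPS v3 auth sha AuthPass123 priv aes 128 PrivPass123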
28.5 Introducing System Monitoring
Enabling Network Time Protocol
Time synchronization is crucial in secure management and reporting. Reviewing log files
on multiple devices is common in a security event response process. If the clocks on the
reporting devices are not consistent with each other, the analysis is much more difficult.
In many jurisdictions, log files without valid time stamps are rejected as evidence in
criminal prosecution. Also, synchronized clocks in log files are often requirements of
security compliance standards. Accurate time is critical for other aspects of security as well. For example, access policies may be time-based, and digital certificates have explicit validity periods.
Imagine that there is an OSPF neighbor adjacency problem between two routers in your
network. The central and branch routers do not synchronize their clocks. You have
decided to look at the log messages that are stored on the routers. After further
inspection, you notice that at the central router the neighbor adjacency went down at
around 7:10 p.m. (1910), but you do not think to look for messages from the branch
router that has a timestamp of around 1:35 p.m. (1335).
Log messages on central router
*Apr 8 19:10:40.086: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0/0,
changed state to down
*Apr 8 19:10:40.086: %OSPF-5-ADJCHG: Process 1, Nbr 200.1.1.1 on Ethernet0/0 from
FULL to DOWN, Neighbor Down: Interface down or detached
Log messages on branch router
Apr 8 13:35:01.880: %OSPF-5-ADJCHG: Process 1, Nbr 200.1.1.2 on Ethernet0/2 from
FULL to DOWN, Neighbor Down: Interface down or detached
Apr 8 13:35:04.885: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0/2,
changed state to down
Both sets of messages were generated within 1 second of each other because the shutdown command was issued on one of the routers. However, because the clocks were not synchronized between the routers, the two sets of messages appear to describe two separate events, which makes it harder to troubleshoot problems in your network.
The heart of the time service is the system clock. Routers, switches, firewalls, and other
networking devices have an internal system clock. The system clock runs from the
moment the system starts and keeps track of the current date and time. The system clock
can be set from several sources and, in turn, can be used to distribute the current time
through various mechanisms to other systems. The system clock keeps track of time
internally based on Coordinated Universal Time (UTC), which is the same as Greenwich
Mean Time (GMT). The system clock keeps track of whether the time is authoritative or
not. If it is not authoritative, the time is available only for display purposes and cannot be
redistributed. Authoritative refers to the trustworthiness of the source. Nonauthoritative
sources do not guarantee accurate time. It is recommended to set clocks on all network
devices to UTC regardless of their location, and then configure the time zone to display
the local time if desired.
Software Clock
Like most computers, most Cisco IOS devices have two clocks: a software clock and a
hardware clock, which in the CLI commands is referred to as calendar. The software clock
is initialized at bootup from the hardware clock (which is operational even when the
device is powered off). The software clock is essentially a counter that keeps track of seconds and microseconds, starting at bootup. The software clock is also referred to as the system clock.
To change the system clock manually, you need to use the clock set command from
privileged exec mode and not global configuration mode. The date and time should be set
in UTC and not the local time zone. The local time zone and, if applicable, daylight saving
time needs to be configured.
To change the system clock, enter the following commands:
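For example (the date and time shown are arbitrary):

Router# clock set 14:30:00 5 May 2023
Router# show clock detail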
If you add the detail keyword to the show clock command, it will tell you what the source
of clock configuration is.
The system clock keeps an authoritative flag that indicates whether the time is
authoritative (believed to be accurate). The asterisk in front of the first line of the
command output means that the time is not believed to be accurate.
You can also change the time zone and enable daylight saving time. In this example,
Central European Time (CET) is used.
Notice how clock settings now reflect local time, because Central European Summer Time
(CEST) is 2 hours ahead of UTC.
To configure the time zone, use the clock timezone zone-name hours-offset [minutes-
offset] global configuration command.
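A sketch using the CET/CEST example above (the recurring rule reflects the European daylight saving dates):

Router(config)# clock timezone CET 1
Router(config)# clock summer-time CEST recurring last Sun Mar 2:00 last Sun Oct 3:00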
Hardware Clock
The hardware clock is a chip with a rechargeable backup battery that can retain the time
and date information across reboots of the device.
The hardware clock (also called the system calendar) maintains time separately from the
software clock, but is usually updated from the software clock when the software clock is
synchronized with an authoritative time source. The hardware clock continues to run
when the system is restarted or when the power is turned off. Typically, the hardware
clock needs to be manually set only once, when the system is installed, but to prevent
drifting over time it needs to be readjusted at regular intervals.
You should avoid setting the hardware clock if you have access to a reliable external time
source. Time synchronization should instead be established using NTP.
You can update the hardware clock with a new software clock setting with the following
command:
Router# clock update-calendar
Network Time Protocol
To maintain a consistent time across the network, the software clock must receive time
updates from an authoritative time on the network. Network Time Protocol (NTP) is a
protocol designed to time-synchronize a network of machines. A secure method of
providing clocking for the network is for network administrators to implement their own
private network master clocks that are synchronized to UTC using satellite or radio sources.
However, if network administrators do not wish to implement their own master clocks
because of cost or other reasons, other clock sources are available on the internet, such as
ntp.org, but this option is less secure.
Correct time within networks is important for the following reasons:
• Correct time allows the tracking of events in the network in the correct order.
• Clock synchronization is critical for the correct interpretation of events within
syslog data.
• Clock synchronization is critical for digital certificates and authentication protocols
such as Kerberos.
NTP runs over UDP, using port 123 as both the source and destination, which in turn runs
over IP. NTP distributes this time across the network. NTP is extremely efficient—no more
than one packet per minute is necessary to synchronize two devices to within a
millisecond of one another.
NTP uses the concept of a stratum to describe how many NTP hops away a machine is
from an authoritative time source, a stratum 0 source. A stratum 1 time server has a radio
or atomic clock that is directly attached, a stratum 2 time server receives its time from a
stratum 1 time server, and so on. A device running NTP automatically chooses as its time
source the device with the lowest stratum number that it is configured to communicate
with through NTP. This strategy effectively builds a self-organizing tree of NTP speakers.
NTP can get the correct time from an internal or external time source:
• Local master clock
• Master clock on the internet
• Global positioning system (GPS) or atomic clock (stratum 0)
Note: A master clock maintains accurate time from a hardware clock source. Examples are
global positioning systems such as GPS, GLONASS, Galileo or atomic clocks.
NTP has two ways to avoid synchronizing to a device whose time might be ambiguous:
• NTP never synchronizes to a device that is not synchronized itself.
• NTP compares the time that is reported by several devices and does not
synchronize to a device whose time is significantly different from the others, even
if its stratum is lower.
Configuring and Verifying NTP
A router can act as an NTP server and client. Other devices (NTP clients) synchronize time
with the router (acting as an NTP server).
To configure NTP on Cisco devices, use the following commands, as illustrated in the
example:
The figure shows an example configuration scenario. Both the Branch router and SW1
switch are configured as NTP clients using the ntp server ip-address global configuration
command. The IP address of the NTP server is configured.
Note that a Cisco IOS device acting as an NTP client will also respond to received time
requests. This fact enables SW1 to sync directly with the branch router and optimize
traffic flows. Alternatively, you could configure SW1 to sync with the Central router (or
even an external NTP server).
Cisco IOS devices can also act as NTP servers. To configure a Cisco IOS device as an NTP
master clock to which peers synchronize themselves, use the ntp master command in the
global configuration mode: ntp master [stratum].
Note: Use this command with caution. You can easily override valid time sources using
this command, especially if a low stratum number is configured. Configuring multiple
devices in the same network with the ntp master command can cause instability in
keeping time if the devices do not agree on the time.
When configuring NTP, consider the following:
• You should check with the organization where you are implementing NTP which stratum level you are supposed to set in the ntp master command. It must be a higher number than the stratum level of the upstream NTP device.
• The ntp master command should only be configured on a device that has
authoritative time. Therefore, it must either be configured to synchronize with
another NTP server (using the ntp server command) and actually be synchronized
with that server, or it must have its time set using the clock set command.
The stratum value is a number from 1 to 15. The lowest stratum value indicates a higher
NTP priority. It also indicates the NTP stratum number that the system will claim.
Optionally, you can also configure a loopback interface, whose IP address will be used as
the source IP address when sending NTP packets.
For example, consider the following scenario, where you have multiple routers. The
Central router acts as an authoritative NTP server, while the Branch1 and Branch2 routers
act as NTP clients. In this case, initially Branch1 and Branch2 routers are referencing their
clocks via NTP to the 172.16.1.5 IPv4 address, which belongs to Ethernet 0/0 interface on
the Central router. Now imagine if that interface on the Central router fails, what do you
think will happen? The Branch1 and Branch2 routers cannot reach that IPv4 address,
which means that they will stop referencing their clocks via NTP and their clocks will
become unsynchronized. The solution for that is to use a loopback interface, which is a
virtual interface on a router and is always in up/up state. Therefore, even if one of the
interfaces fails on the Central router, the Branch1 and Branch2 routers can still use NTP if
they have a backup path to the IPv4 address of the loopback interface on the Central
router.
Configure the Central router as an NTP server:
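A minimal sketch (the Loopback0 address 172.16.1.1 and the stratum value are assumed values):

Central(config)# interface Loopback0
Central(config-if)# ip address 172.16.1.1 255.255.255.255
Central(config-if)# exit
Central(config)# ntp master 3
Central(config)# ntp source Loopback0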
Configure the Branch1 router as an NTP client, which will synchronize its time with the
Central router:
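A corresponding sketch for the client, pointing at the assumed loopback address (Branch2 would be configured in the same way):

Branch1(config)# ntp server 172.16.1.1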
Configure the Branch2 router as an NTP client, which will synchronize its time with the
Central router.
Use the show ntp associations and the show ntp status commands to verify your
configuration.
29.1 Managing Cisco Devices
Introduction
In an enterprise environment, network engineers need to maintain control of software
and configuration running on networking devices. As a network grows, keeping track of all
the Cisco IOS Software images and configuration files running on your devices is crucial for
reliable network operations. Production internetworks usually span wide areas and
contain multiple routers and switches. For any network, it is prudent to retain a backup
copy of the Cisco IOS Software image or other network devices operating systems in case
the system image in the networking devices becomes corrupted or is accidentally erased.
New features are also constantly being added and potential software issues fixed, so it is extremely important to keep device operating systems up to date with the latest software releases.
Management and maintenance of routers and other networking device software and
configuration begins with understanding the sequence of events when a device boots up,
where and how the operating system is loaded, and when the configuration is applied.
This is of utmost importance for all operational and troubleshooting tasks that need to be
performed in an enterprise network.
As a networking engineer, you will need to manage different Cisco devices, operating
systems, and configuration files, which includes these responsibilities:
• Troubleshooting the boot sequence of the routers and switches
• Performing backups of the Cisco IOS images and configuration files from all devices
• Checking the Cisco IOS image on all devices for consistency
• Upgrading the system image on devices
Note: Different Cisco operating systems (such as Cisco IOS, NX-OS) use similar command
sets and system configuration approaches. There are, however, some differences in
commands and functionalities. Differences can also be present between different
software versions.
29.2 Managing Cisco Devices
Cisco IOS Integrated File System and Devices
Normally, the Cisco IOS image is a file that is stored in flash on the router. To understand
the management and maintenance of IOS, you must first understand the Cisco IOS file
system. There are many similarities between the Cisco IOS file system and the file systems
that are used by computer operating systems such as Windows, Linux, and UNIX. If you
are familiar with those file systems, the experience will translate well to the Cisco IOS file
system.
The Cisco IOS file system allows for the storage, retrieval, and manipulation of files by
Cisco IOS and by administrators. The file system interface uses URLs to specify the location
of a file. URLs are commonly used to specify files or locations on the web. On Cisco
routers, URLs can be used to specify the location of files on the router or remote file
servers. Files are stored in different locations or accessible by different protocols,
specified by a prefix. There are many potential prefixes. The hardware and configuration
of a device will handle local prefixes, and the network and server configuration will handle
the remote prefixes. Prefix specifications end with a colon (for example, "flash:," "ftp:").
Cisco IOS devices provide a feature that is called the Cisco IOS Integrated File System (IFS).
This system allows you to create, navigate, and manipulate files and directories on a Cisco
device. The directories that are available depend on the platform. The Cisco IFS feature
provides a single interface to all the file systems that a Cisco device uses, including these
systems:
• Flash memory file systems
• Network file systems such as TFTP, Remote Copy Protocol (RCP), and FTP
• Any other memory available for reading or writing data, such as NVRAM and RAM
The following example shows the output of the show file systems command, which lists
all the available file systems on a Cisco Integrated Services Router. This command provides
insightful information, such as the amount of available and free memory and the type of
file system and its permissions. Permissions include read only (indicated by the "ro" flag),
write only (indicated by the "wo" flag), and read and write (indicated by the "rw" flag).
The flash file system (FFS) has an asterisk preceding it, which indicates it is the current
default file system. The bootable Cisco IOS Software is located in the flash memory, so the
pound symbol (#) that is appended to the flash listing indicates a bootable disk.
An important feature of the Cisco IFS is the use of the URL convention to specify files on
network devices and the network. The URL prefix specifies the file location.
Commonly used prefix examples include the following:
• flash: The primary flash device. Some devices have more than one flash location,
such as slot0: and slot1:. In such cases, an alias is implemented to resolve flash: to
the flash device that is considered primary.
• nvram: NVRAM. Among other things, NVRAM is where the startup configuration is
stored.
• system: A partition that is built in RAM that holds, among other things, the running
configuration.
• tftp: Indicates that the file is stored on a server that can be accessed using the
TFTP protocol.
• ftp: Indicates that the file is stored on a server that can be accessed using the FTP
protocol.
• scp: Indicates that the file is stored on a server that can be accessed using the
Secure Copy Protocol (SCP).
Directories and subdirectories can be used to organize files in manageable containers. A
preceding slash (/) character indicates the root directory, and the slash character is also
used to separate directory names from the directory’s contents. Individual files have
names, which must be unique within the directory in which they are stored. URLs are used
to specify files and they provide full specification of a locally stored file, including the
prefix, directory path, and filename.
Here are some examples:
• flash:/c2900-universalk9-mz.SPA.153-1.T.bin
• nvram:/startup-config
• system:/running-config
URLs that specify remote files can be more complex. After the prefix, a server location (IP
address or resolvable hostname) must be specified. For protocols that require user
authentication, usernames and passwords may be specified. If they are not specified in
the URL, they will need to be specified interactively by the application.
Here are some examples of remote file specifications:
• tftp://10.10.10.10/backup-cfg.txt
• ftp://10.10.20.20/admin:Adm1nPwd/c2900-universalk9-mz.SPA.154-1.T.bin
• scp://cfg-srv/admin:Adm1nPwd/c2900-universalk9-mz.SPA.154-1.T2.bin
Note: The third example uses a hostname instead of an IP address. For it to function, the
router must have Domain Name System (DNS) properly configured or local IP host entries
defined in the running configuration, allowing the resolution of the name cfg-srv to an IP
address.
29.3 Managing Cisco Devices
Stages of the Router Power-On Boot Sequence
When a Cisco networking device boots, it performs a series of steps that include loading
Cisco operating system software and the device configuration.
The example shows a router power-on boot sequence, which consists of a series of steps
that include loading Cisco IOS Software and the router configuration.
The following stages and router components are used in the router power-on boot
sequence:
The sequence of events that occurs during the power-on (boot) of a router is explained in
detail here. Understanding these events will help you accomplish operational tasks and
troubleshoot router problems.
1. Perform POST: This event is a series of hardware tests that verifies that all
components of a Cisco router are functional. During this test, the router also
determines which hardware is present. Power-on self-test (POST) executes from
microcode that is resident in the system read-only memory (ROM).
2. Load and run bootstrap code: Bootstrap code is used to perform subsequent
events such as finding Cisco IOS Software at all possible locations, loading it into
RAM, and running it. After Cisco IOS Software is loaded and running, the bootstrap
code is not used until the next time the router is reloaded or power-cycled.
3. Locate Cisco IOS Software: The bootstrap code determines the location of Cisco
IOS Software that will be run. Normally, the Cisco IOS Software image is located in
the flash memory, but it can also be stored in other places such as a TFTP server.
The configuration register and configuration file, which are located in NVRAM,
determine where the Cisco IOS Software images are located and which image file
to use. If a complete Cisco IOS image cannot be located, a scaled-down version of
Cisco IOS Software is copied from ROM into RAM. This version of Cisco IOS
Software is used to help diagnose any problems and can be used to load a
complete version of Cisco IOS Software into RAM.
4. Load Cisco IOS Software: After the bootstrap code has found the correct image, it
loads this image into RAM and starts Cisco IOS Software. Some older routers do
not load the Cisco IOS Software image into RAM but execute it directly from flash
memory instead.
5. Locate the configuration: After Cisco IOS Software is loaded, the bootstrap
program searches for the startup configuration file in NVRAM.
6. Load the configuration: If a startup configuration file is found in NVRAM, Cisco IOS
Software loads it into RAM as the running configuration and executes the
commands in the file one line at a time. The running configuration file contains
interface addresses, starts routing processes, configures router passwords, and
defines other characteristics of the router. If no configuration file exists in NVRAM,
the router enters the setup utility or attempts an autoinstall to look for a
configuration file from a TFTP server.
7. Run the configured Cisco IOS Software: When the prompt is displayed, the router
is running Cisco IOS Software with the current running configuration file. You can
then begin using Cisco IOS commands on the router.
29.4 Managing Cisco Devices
Loading and Managing System Image Files
Usually, the Cisco networking devices operating system software image is located in the
flash memory, but it can also be stored in other places, such as a TFTP server. In the
example of Cisco IOS, the configuration register and configuration file determine where
the Cisco IOS Software images are located and which image file to use.
Locating Cisco IOS Image Files
When a Cisco networking device boots, one of the steps is searching for Cisco IOS
Software. For example, when a Cisco router boots, it searches for the Cisco IOS image in a
specific sequence, as shown in the figure.
The bootstrap code is responsible for locating Cisco IOS Software. It searches for the Cisco
IOS image in this sequence:
1. The bootstrap code checks the boot field of the configuration register. The
configuration register is a 16-bit value; the lower 4 bits are the boot field. The boot
field tells the router how to boot up. The boot field can indicate that the router
looks for the Cisco IOS image in flash memory, or looks in the startup configuration
file (if one exists) for commands that tell the router how to boot, or looks on a
remote TFTP server. Alternatively, the boot field can specify that no Cisco IOS
image will be loaded, and the router should start a Cisco ROM monitor (ROMMON)
session.
2. The bootstrap code evaluates the configuration register boot field value as
described in the following list. In a configuration register value, the "0x" indicates
that the digits that follow are in hexadecimal notation. A configuration register
value of 0x2102, which is also a default factory setting, has a boot field value of
0x2—the right-most digit in the register value is 2 and represents the lowest 4 bits
of the register.
a. If the boot field value is 0x0, the router boots to the ROMMON session.
b. If the boot field value is 0x1, the router searches flash memory for Cisco IOS
images.
c. If the boot field value is 0x2 to 0xF, the bootstrap code parses the startup
configuration file in NVRAM for boot system commands that specify the
name and location of the Cisco IOS Software image to load. (Examples of
boot system commands will follow.) If boot system commands are found,
the router sequentially processes each boot system command in the
configuration, until a valid image is found. If there are no boot system
commands in the configuration, the router searches the flash memory for a
Cisco IOS image.
3. If the router searches for and finds valid Cisco IOS images in flash memory, it loads
the first valid image and runs it.
4. If it does not find a valid Cisco IOS image in flash memory, the router attempts to
boot from a network TFTP server using the boot field value as part of the Cisco IOS
image filename.
5. After six unsuccessful attempts at locating a TFTP server, the router starts a
ROMMON session.
Note: The procedure for locating the Cisco IOS image depends on the Cisco router
platform and default configuration register values. The procedure that is described here
applies to the Cisco Integrated Services Routers.
Configuration Register
If you set the configuration register boot field value to 0x0, you must have console port
access to boot the operating system manually from the ROMMON session. If you set the
boot field to a value of 0x2 to 0xF, and there is a valid boot system command that is
stored in the configuration file, the router software processes each boot system
command in sequence until the process is successful or the end of the list is reached. If
there are no boot system commands in the configuration file, the router attempts to boot
the first file in the flash memory.
Entering boot system commands in sequence in a router configuration can create a fault-
tolerant boot plan. The boot system command is a global configuration command that
allows you to specify the source for the Cisco IOS Software image to load. For example,
the following command boots the system boot image file that is named c2900-
universalk9-mz.SPA.152-4.M1.bin from the flash memory device:
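A sketch of that command, entered in global configuration mode (the flash device name, flash0: here, varies by platform):

Router(config)# boot system flash0:c2900-universalk9-mz.SPA.152-4.M1.bin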
This next example specifies a TFTP server as a source of a Cisco IOS image, with a
ROMMON session as the backup:
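A sketch (the TFTP server address 10.1.1.100 is an assumed value, and support for the rom fallback keyword varies by platform):

Router(config)# boot system tftp c2900-universalk9-mz.SPA.152-4.M1.bin 10.1.1.100
Router(config)# boot system rom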
Loading Cisco IOS Image Files
When a Cisco router locates a valid Cisco operating system image file in the flash memory,
the Cisco operating system image is normally loaded into RAM to run. Image files are
typically compressed, so the file must first be decompressed. After the file is
decompressed into RAM, it is started.
For example, when Cisco IOS Software begins to load, you may see a string of hash signs
(#), as shown in the figure, while the image decompresses.
The Cisco IOS image file is decompressed and stored in RAM. The output shows the boot
process on a router.
Use the show version command to help verify and troubleshoot some of the basic
hardware and software components of the router. The show version command displays
information about the version of Cisco IOS Software that is currently running on the
router, the version of the bootstrap program, and information about the hardware
configuration, including the amount of system memory.
Output from the show version command includes the following:
• Cisco IOS version
Cisco IOS Software, C2900 Software (C2900-UNIVERSALK9-M), Version 15.2(4)M1,
RELEASE SOFTWARE (fc1)
This line from the example output shows the version of Cisco IOS Software in RAM that
the router is using.
• ROM bootstrap program
ROM: System Bootstrap, Version 15.0(1r)M15, RELEASE SOFTWARE (fc1)
This line from the example output shows the version of the system bootstrap software
that is stored in ROM and was initially used to boot up the router.
• Location of Cisco IOS image
System image file is "flash0:c2900-universalk9-mz.SPA.152-4.M1.bin"
This line from the example output shows where the Cisco IOS image is located and loaded
as well as its complete filename.
• Interfaces
2 Gigabit Ethernet interfaces
1 Serial (sync/async) interface
This section of the output displays the physical interfaces on the router. In this example,
the Cisco 2901 router has two Gigabit Ethernet interfaces and one serial interface.
• Amount of NVRAM
255 KB of NVRAM
This line from the example output shows the amount of NVRAM on the router.
• Amount of Flash
250880 KB of ATA System CompactFlash 0 (Read/Write)
This line from the example output shows the amount of flash memory on the router.
• Configuration register
Configuration register is 0x2102
The last line of the show version command displays the current configured value of the
software configuration register in hexadecimal format. This value indicates that the router
will attempt to load a Cisco IOS Software image from flash memory and load the startup
configuration file from NVRAM. 0x2102 is the factory-setting default.
29.5 Managing Cisco Devices
Loading Cisco IOS Configuration Files
After the Cisco IOS Software image is loaded and started, the networking device must be
configured to be useful. If there is an existing saved configuration file (startup-config) in
NVRAM, it is executed. If there is no saved configuration file in NVRAM, the device
typically either begins autoinstall or enters the setup utility. An example below illustrates
the procedure in a Cisco IOS router.
The router loads and executes the configuration from NVRAM. If no configuration is
present in NVRAM, it prompts for an initial configuration dialog (also called setup utility or
setup mode).
If the startup configuration file does not exist in NVRAM, the router may search for a TFTP
server. If the router detects that it has an active link, it sends a broadcast searching for a
configuration file across the active link. This condition will cause the router to pause, but if
the configuration source is not discovered, you will eventually see a console message such
as this one:
The setup utility prompts the user at the console for specific configuration information to
create a basic initial configuration on the router, as shown in this example:
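The dialog typically begins with a prompt similar to the following (exact wording varies by platform and software release):

--- System Configuration Dialog ---
Would you like to enter the initial configuration dialog? [yes/no]: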
Note: If you type "yes" at this stage, the setup utility prompts you for basic information
about your router and network, and it creates an initial configuration file. Since the resulting configuration is basic, typically you would type "no" at this point and continue with a
manual configuration.
To display the current configuration, enter the show running-config command.
To display the saved configuration, enter the show startup-config command.
The show running-config and show startup-config commands are among the most
common Cisco IOS Software EXEC commands because they allow you to see the current
running configuration in RAM on the router or the startup configuration commands in the
startup configuration file in NVRAM that the router will use at the next restart.
If the words "Current configuration" are displayed, the active running configuration from
RAM is being displayed.
If there is a message at the top indicating how much nonvolatile memory is being used
("Using 1318 out of 262136 bytes" in this example), the startup configuration file from
NVRAM is being displayed.
29.6 Managing Cisco Devices
Validating Cisco IOS Images Using MD5
If the Cisco IOS on a router is corrupt or compromised, service outages and network
attacks may result. Therefore, it is important to take security precautions throughout the
lifecycle of the Cisco IOS images on your Cisco routers. A first step that you can take is to
verify that the Cisco IOS image has not been corrupted in any way during transit from the
Cisco download center. Cisco makes the Message Digest 5 (MD5) version of the Cisco IOS
image files available for download.
Message Digest files provide a checksum verification of the integrity of the downloaded
IOS image. If the image is corrupted, the MD5 check will fail.
When retrieving IOS images from https://www.cisco.com, you should verify the Secure
Sockets Layer (SSL) connection to ensure that you are connected to the real site. You
should then note the MD5 digest that is reported on the download page. After retrieving
the image from the Cisco site, you should use an MD5 digest calculator on the local
system to verify that the digest matches the one posted on the Cisco site. Cisco IOS itself
offers the verify command to compute MD5 digests, as shown in the example:
Rtr-1# verify /md5 flash:/c2900-universalk9-mz.SPA.154-2.T1.bin
......................................................................
.........<…Output Omitted…>............................Done!
verify /md5 (flash:/c2900-universalk9-mz.SPA.154-2.T1.bin) =
c1cb5a732753825baf9cs68d
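Before copying the image to the device, you can also compute the digest on the local workstation and compare it with the value published on the download page. A minimal sketch on a Linux system, reusing the image filename from the example above (md5sum prints the digest followed by the filename):

$ md5sum c2900-universalk9-mz.SPA.154-2.T1.bin
<32-character-md5-digest>  c2900-universalk9-mz.SPA.154-2.T1.bin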

29.7 Managing Cisco Devices


Managing Cisco IOS Images and Device Configuration Files
The Cisco IOS on Cisco devices is packaged in system images. Your device typically already
has an image on it when you receive it. However, you may want to load a different image
onto a device at some point. For example, you may want to upgrade your software to the
latest release, or use the same version of the software for all the routers in a network.
Also, creating, loading, and maintaining configuration files enables you to generate a set
of user-configured commands to customize the functionality of your Cisco networking
device.
Managing Cisco IOS Images
As a network grows, keeping track of all of the Cisco IOS Software images and
configuration files running on your devices is key.
Production internetworks usually span wide areas and contain multiple routers. For any
network, it is important to retain a backup copy of the Cisco IOS Software image in case
the system image in the router becomes corrupted or is accidentally erased.
Widely distributed routers need a source or backup location for Cisco IOS Software
images. Using a network TFTP server allows image (and configuration) uploads and
downloads over the network. Storing Cisco IOS Software images and configuration files on
a central TFTP server enables you to control the number and revision level of the files that
must be maintained. The network TFTP server can be another router, a workstation, or a
host system.

You can copy Cisco IOS image files from a TFTP, RCP, FTP, or SCP server to the flash
memory of a networking device. You may want to perform this function to upgrade the
Cisco IOS image, or to use the same image as on other devices in your network.
You can also copy (upload) Cisco IOS image files from a networking device to a file server
by using TFTP, FTP, RCP, or SCP protocols, so that you have a backup of the current IOS
image file on the server.
The protocol you use depends on which type of server you are using. The FTP and RCP
transport mechanisms provide faster performance and more reliable delivery of data than
TFTP. These improvements are possible because the FTP and RCP transport mechanisms are built on TCP, which is connection-oriented.
Note: Just as Secure Shell (SSH) and HTTPS are more secure than Telnet and HTTP, SCP is
more secure than FTP or TFTP. SCP is an implementation of RCP that runs over an SSH
connection. By using SSH as the underlying data transfer conduit, SCP offers the same
security benefits as SSH. If the public key of the server is properly validated, then SCP
offers origin authentication, data integrity, and privacy.
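As a hedged sketch of such an image transfer, the following shows an upgrade pulled from a TFTP server; the server address and interactive prompts are illustrative and vary by platform and Cisco IOS version, and the image filename reuses the earlier example:

Router# copy tftp: flash:
Address or name of remote host []? 10.1.1.1
Source filename []? c2900-universalk9-mz.SPA.154-2.T1.bin
Destination filename [c2900-universalk9-mz.SPA.154-2.T1.bin]?

The reverse operation, copy flash: tftp:, uploads a backup of the current image from flash memory to the server.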
Managing Device Configuration Files
Device configuration files contain a set of user-defined configuration commands that
customize the functionality of a Cisco device.
Device configurations can be loaded from the following components:

• NVRAM
• Terminal
• Network file server (for example, TFTP, SCP, and others)

Cisco router configuration files are stored in these locations:

• The running configuration is stored in RAM.
• The startup configuration is stored in NVRAM.
Similar to Cisco IOS images, you can copy configuration files from a TFTP, RCP, FTP, or SCP
server to the running configuration or startup configuration of the networking device. You
may want to perform this function for one of the following reasons:

• To restore a backed-up configuration file.
• To use the configuration file for another networking device. For example, you may
add another router to your network and want it to have a similar configuration to
the original router. After copying the file to the new router, you can then change
the relevant parts, rather than re-creating the whole file.
• To load the same configuration commands on all the routers in your network so
that all the routers have similar configurations.
You can also copy (upload) configuration files from a networking device to a file server by
using TFTP, FTP, RCP, or SCP protocols. You might perform this task to back up a current
configuration file to a server before changing its contents, so that you can later restore the
original configuration file from the server.
For example, in the copy running-config tftp: command, the running configuration in RAM
is copied to a TFTP server.
Use the copy running-config startup-config command after a configuration change is
made in RAM to save the updated configuration to the startup configuration file in
NVRAM.
You can copy the startup configuration file in NVRAM back into RAM with the copy
startup-config running-config command (but be careful because this is a merge, not a
copy, as described in the following note).
Note: When a configuration is copied into RAM from any source, the configuration merges
with or overlays any existing configuration in RAM, rather than overwriting it. New
configuration parameters are added, and changes to existing parameters overwrite the
old parameters. Configuration commands that exist in RAM for which there are no
corresponding commands in NVRAM remain unaffected. In contrast, copying a
configuration from any source into the startup configuration file in NVRAM will overwrite
the startup configuration file in NVRAM.
Similar commands exist for copying between a TFTP server and either NVRAM or RAM.
The following examples show common copy command usage. The examples list two
methods to accomplish the same tasks. The first example is simple syntax, and the second
example provides more explicit syntax.

• Copy the running configuration from RAM to the startup configuration in NVRAM,
overwriting the existing file:

• Copy the running configuration from RAM to a remote TFTP server location,
overwriting the existing file:

• Copy a configuration from a remote source to the running configuration, merging the new content with the existing content:
• Copy a configuration from a remote source to the startup configuration,
overwriting the existing file:
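The original command examples are not reproduced here. As a hedged sketch of the two methods, the simple syntax is listed first for each of the four tasks above, followed by the more explicit syntax that spells out the file-system prefixes; the TFTP server address 10.1.1.1 and the filename backup-config are placeholders:

Router# copy running-config startup-config
Router# copy system:running-config nvram:startup-config

Router# copy running-config tftp://10.1.1.1/backup-config
Router# copy system:running-config tftp://10.1.1.1/backup-config

Router# copy tftp://10.1.1.1/backup-config running-config
Router# copy tftp://10.1.1.1/backup-config system:running-config

Router# copy tftp://10.1.1.1/backup-config startup-config
Router# copy tftp://10.1.1.1/backup-config nvram:startup-config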

Use the configure terminal command to interactively create configurations in RAM from
the console or remote terminal.
Use the erase startup-config command to delete the saved startup configuration file in
NVRAM. (Note that this command cannot be abbreviated.)
This figure shows an example of how to use the copy tftp: running-config command to
merge the running configuration in RAM with a saved configuration file on a TFTP server.

The following is an example of saving the current configuration to a TFTP server:
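(The original output is omitted. The following is an illustrative sketch only; the server address and filename are placeholders, and the exact prompts vary by Cisco IOS version. The exclamation marks indicate upload progress.)

Router# copy running-config tftp:
Address or name of remote host []? 10.1.1.1
Destination filename [router-confg]? rtr-1-confg
!!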

The following is an example of merging a configuration file from the TFTP server with the
running configuration in RAM:
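(Again, an illustrative sketch with placeholder values rather than the original output.)

Router# copy tftp: running-config
Address or name of remote host []? 10.1.1.1
Source filename []? rtr-1-confg
Destination filename [running-config]?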
You can use the TFTP servers to store configurations in a central place, allowing
centralized management and updating. Regardless of the size of the network, there
should always be a copy of the current running configuration online as a backup.
The copy running-config tftp: command allows the current configuration to be uploaded
and saved to a TFTP server. The IP address or name of the TFTP server and the destination
filename must be supplied. A series of exclamation marks in the display shows the
progress of the upload.
The copy tftp: running-config command downloads a configuration file from the TFTP
server to the running configuration in RAM. Again, the address or name of the TFTP
server and the source and destination filename must be supplied. In the example, IPv4 is
used as a transport protocol. In this case, because you are copying the file to the running
configuration, the destination filename should be running-config. This process is a merge
process, not an overwrite process.
30.1 Examining the Security Threat Landscape
Introduction
Modern networks are very large and intensely interconnected. As such, modern networks
are often open to being accessed, and a potential attacker can often easily attach to or
remotely access such networks. Widespread internetworking increases the probability
that more attacks are carried out over large, heavily interconnected networks such as the
internet.
Computer systems and applications that are attached to these networks are becoming
increasingly complex. Because of this, it has become more difficult to analyze, secure, and
properly test the security of computer systems and their applications. When these
systems and their applications are attached to large networks, the risk to the systems
dramatically increases.
The ever-evolving security landscape presents a continuous challenge to organizations.
The fast proliferation of botnets, the increasing sophistication of network attacks and the
alarming growth of Internet-based organized crime and espionage are examples of threats
that shape the security landscape. Security professionals also need to protect networks
and users from identity and data theft, more innovative insider attacks, and emerging new
forms of threats on mobile systems.
There are various concepts across the modern network security threat landscape. There is
no way to linearly convey all the branches, loops, and combinations of related concepts.
There is also no way to catalog the entire threat landscape statically, as new concepts and
combinations are always evolving.
As a network engineer, you must be aware of the security threats landscape, which
includes important concepts:

• The threats (possible attacks) that could compromise security, and
• The associated risks of the threats—that is, how relevant those threats are for a particular system

30.2 Examining the Security Threat Landscape


Security Threat Landscape Overview
It is easy to recognize the need for improved network security just by monitoring the
news. Every few months, there is a new story about large companies falling victim to
attacks and losing huge amounts of private data and intellectual property. No industry is
exempt. Companies in the financial, retail, entertainment, energy, and technology
industries have all been attacked.
Attackers are not limited to individuals or small teams of hackers. Organized crime and
even national governments are often implicated in attacks. Attackers today are
exceedingly clever and devious and have vast support through funding and resources.
As a defender, you must not be restricted by preconceptions on how things are designed
to work or strict classification of known network threats. Attackers do not restrict
themselves; they are creative thinkers and combine old and new techniques to produce
unique new threats. As a defender, you too must be prepared to think outside the box and
evolve to respond to the ever-changing threat landscape.
Here are some key security concepts that will help you as you learn about today's threat
landscape:

• Threat: Any circumstance or event with the potential to cause harm to an asset in
the form of destruction, disclosure, adverse modification of data, or denial of
service (DoS). An example of a threat is malicious software that targets
workstations.
• Vulnerability: A weakness that compromises either the security or the
functionality of a system. Weak or easily guessed passwords are considered
vulnerabilities.
• Exploit: A mechanism that uses a vulnerability to compromise the security or
functionality of a system. An example of an exploit is malicious code that gains
internal access. When a vulnerability is disclosed to the public, attackers often
create a tool that implements an exploit for the vulnerability. If they release this
tool or proof of concept code to the internet, other less-skilled attackers and
hackers (the so-called script kiddies) can then easily exploit the vulnerability.
• Risk: The likelihood that a particular threat using a specific attack will exploit a
particular vulnerability of an asset that results in an undesirable consequence.
• Mitigation techniques: Methods and corrective actions to protect against threats
and different exploits, such as implementing updates and patches, to reduce the
possible impact and minimize risks.

30.3 Examining the Security Threat Landscape


Malware
Malware is malicious software that comes in several forms; these are some of them:

• Viruses: A virus is a type of malware that propagates by inserting a copy of itself into another program and becoming part of that program. It spreads from one
computer to another, leaving infections as it travels. Viruses require human help
for propagation, such as inserting an infected USB drive into a USB port on a PC.
Viruses can range in severity from causing mildly annoying effects to damaging
data or software and causing DoS conditions.
• Worms: Computer worms are similar to viruses in that they replicate functional
copies of themselves and can cause the same type of damage. In contrast to
viruses, which require spreading an infected host file, worms are standalone
software and use self-propagation; they don't require a host program or human
help to propagate. To spread independently, worms often make use of common
attack and exploit techniques. A worm enters a computer through a vulnerability
in the system and takes advantage of file-transport or information-transport
features on the system, allowing it to travel unaided.
• Trojan horses: A Trojan horse is named after the wooden horse the Greeks used to
infiltrate Troy. It is a harmful piece of software that looks legitimate. Users are
typically tricked into loading and executing it on their systems. After it is activated,
it can achieve any number of attacks on the host, from irritating the user (popping
up windows or changing desktops) to damaging the host (deleting files, stealing
data, or activating and spreading other malware, such as viruses). Trojans are also
known to create back doors to give malicious users access to the system. Unlike
viruses and worms, Trojans do not reproduce by infecting other files, nor do they
self-replicate. Trojans must spread through user interaction, such as opening an
email attachment or downloading and running a file from the Internet.
The Morris worm is often credited as the first internet-based worm. It was launched in
1988. It was named after its author, a graduate student at Cornell University. The author
claimed that it was not written to cause any damage but instead to gauge the size of the
internet. However, the worm did cause damage as systems could be infected multiple
times. The more copies of the worm running on a system, the greater drain of resources it
caused, potentially making systems unusable. The worm was released from a network
belonging to the Massachusetts Institute of Technology to disguise its origin. It had the
capability of exploiting multiple vulnerabilities in sendmail, finger, and rsh/rexec. It could
use the local C compiler on systems to compile code. It utilized the words file on Unix
systems for dictionary attacks against weak passwords. Potentially the most interesting
aspect of this worm is that it was written so long ago. The use of multiple attack vectors
and the use of resources available on the compromised systems were quite ingenious for
the first worm. The security professional must understand that the ingenuity brought to
malware development has continued to compound over the decades.
Internet worm production was especially prolific between 1999 and 2004. Examples of
worms from this period include Melissa, ILOVEYOU, Anna Kournikova, Code Red, Nimda,
SQL Slammer, MyDoom, and Sasser. Details for any of these worms can be found with
simple internet search queries. In general, these worms were mostly about wreaking
havoc. Their targets were not directed as they victimized any vulnerable system. They
consumed resources such as networking bandwidth, system CPU and memory, and IT
resources to eradicate them.
Since the early 2000s, much has changed about worms in particular and network security
in general. The Conficker worm, first identified in late 2008, was very different. It was very
stealthy and resulted in a botnet with millions of infected machines. The worm mutated
from version to version with ever-changing propagation and update strategies. The
Stuxnet worm was discovered in June 2010. It was designed to attack industrial
programmable logic controllers. It reportedly targeted the country of Iran’s nuclear
program and was successful in destroying approximately one-fifth of the country’s nuclear
centrifuges.
More recently, with the rise of cryptocurrencies, ransomware has risen in prominence.
Ransomware is an attack that encrypts the data on your local drives and the shared
network drives, making it unusable without a key. The ransomware then presents means
of buying the encryption key from the attackers, usually utilizing cryptocurrencies. An
example of a ransomware attack is the WannaCry cryptoworm, which was introduced in
May 2017.
Malware is commonly utilized by advanced persistent threats (APTs), a set of continuous
hacking processes targeting a specific entity, often with a specific goal. Some
characteristics of APTs are obvious from the name. They are advanced; the attackers have
the most advanced intelligence systems and techniques at their disposal and will use what
is optimal for each step. They may utilize commonly available security tools when they are
sufficient, but they may also discover and exploit zero-day (unpublished) vulnerabilities
when necessary. They are also persistent. The attackers focus on their goal. They do not
cash in on short-term opportunities. Instead, they maintain discreet access, slowly but
surely infiltrating deeper into systems until their objectives can be met.
The structure of an APT attack does not follow a blueprint. As with any network attack,
the scenario varies with circumstance. However, a common methodology is as follows:

• Initial compromise
• Escalation of privileges
• Internal reconnaissance
• Lateral propagation, compromising other systems on track towards its goal
• Mission completion
Each of these steps is taken very stealthily, with the goal of evading detection and
maintaining a presence.

30.4 Examining the Security Threat Landscape


Hacking Tools
The distinction between a security tool and a hacking (or attack) tool is in the intent of the
user. A penetration tester legitimately uses tools to penetrate an organization's security
defenses. The organization uses the results of the penetration test to improve its security
defenses. However, the same tools that the penetration tester uses can be used
illegitimately by an attacker.
Innumerable quantities of hacking (or security) tools can be found on the internet. The
following list provides a few examples. More important than the details of any single
example is the understanding of how easy it is now to obtain and use very powerful attack
tools.
Hacking (or security) tools can be found in the following places:

• sectools.org: A website run by the Nmap Project, which regularly polls the network
security community regarding their favorite security tools. It lists the top security
tools in order of popularity. A short description is provided for each tool, along
with user reviews and links to the publisher's website. There are password
auditors, sniffers, vulnerability scanners, packet crafters, and exploitation tools,
among the many categories. The site provides a wealth of information. Security
professionals should review the list and read the descriptions of the tools. Network
attackers certainly will.
• Kali Linux: The Knoppix Security Tools Distribution was published in 2004. It was a
live Linux distribution that ran from a CD-ROM and included more than 100
security tools. Back when security tools were uncommon in Windows, Windows
users could boot their PCs with the Knoppix STD CD and have access to that
toolset. Over the years, Knoppix STD evolved through WHoppix, Whax, and
Backtrack to its current distribution as Kali Linux. The details of the evolution are
not as important as the fact that a live Linux distribution that can be easily booted
from removable media or installed in a virtual machine has been well supported
for over a decade. The technology continues to be updated to remain current and
relevant. Kali Linux packages over 300 security tools in a Debian-based Linux
distribution. Kali Linux may be deployed on removable media, much like the
original Knoppix Security Tools Distribution. It may also be deployed on physical
servers or run as a virtual machine (VM).
• Metasploit: When Metasploit was first introduced, it had a big impact on the
network security industry. It was a very potent addition to the penetration tester's
toolbox. While it provided a framework for advanced security engineers to develop
and test exploit code, it also lowered the threshold for the experience required for
a novice attacker to perform sophisticated attacks. The framework separates the
exploit (code that uses a system vulnerability) from the payload (code injected to
the compromised system). The framework is distributed with hundreds of exploit
modules and dozens of payload modules. To launch an attack with Metasploit, you
must first select and configure an exploit. Each exploit targets a vulnerability of an
unpatched operating system or application server. The use of a vulnerability
scanner can help determine the most appropriate exploits to attempt. The exploit
must be configured with relevant information such as the target IP address. Next,
you must select a payload. The payload might be remote shell access, Virtual
Network Computing (VNC) access, or remote file downloads. You can add exploits
incrementally. Metasploit exploits are often published with or shortly after the
public disclosure of vulnerabilities.
Note: Using security tools on networks is often a violation of the security policy governing
those networks. You should never experiment with security tools on a network where you
do not have explicit authorization to do so.

30.5 Examining the Security Threat Landscape


DoS and DDoS
DoS attacks attempt to consume all critical computer or network resources to make them
unavailable for proper use. DoS attacks are considered a major risk because they can
easily disrupt the operations of a business, and they are relatively simple to conduct. A
TCP synchronization (SYN) flood attack is a classic example of a DoS attack. The TCP SYN
flood attack exploits the TCP three-way handshake design by sending multiple TCP SYN
packets with random source addresses to a victim host. The victim host sends a
synchronization-acknowledgment (SYN-ACK) back to the random source address and adds
an entry to the connection table. Because the SYN-ACK is destined for an incorrect or
nonexistent host, the last part of the three-way handshake is never completed, and the
entry remains in the connection table until a timer expires. By generating TCP SYN packets
from random IP addresses rapidly, the attacker can fill up the connection table and deny
TCP services (such as email, file transfer, or World Wide Web) to legitimate users. There is
no easy way to trace the originator of the attack because the IP address of the source is
forged or spoofed. An attacker creates packets with random IP source addresses with IP
spoofing to obfuscate the actual originator.
Some DoS attacks, such as the Ping of Death, can cause a service, system, or group of
systems to crash. In Ping of Death attacks, the attacker creates a packet fragment,
specifying a fragment offset indicating a full packet size of more than 65,535 bytes. 65,535
bytes is the maximum packet size as defined by the IP protocol. A vulnerable machine that
receives this type of fragment will attempt to set up buffers to accommodate the packet
reassembly, and the out-of-bounds request causes the system to crash or reboot. The Ping
of Death exploits vulnerabilities in processing at the IP layer, but similar attacks exploit
vulnerabilities at the application layer. Attackers have also exploited vulnerabilities by
sending malformed Simple Network Management Protocol (SNMP), system logging
(syslog), Domain Name System, or other UDP-based protocol messages. These malformed
messages can cause various parsing and processing functions to fail, resulting in a system
crash and a reload in most circumstances. The IPv6 Ping of Death—the IPv6 version of the
original Ping of Death—was also created.
Note: Because most computer systems have been patched and today's modern firewalls
provide protection against protocol anomaly, Ping of Death and other basic protocol
attacks are not an issue for most systems.
Variants of the previously mentioned DoS attacks include Internet Control Message
Protocol (ICMP) or UDP floods, which can slow down network operations. These attacks
cause the victim to use resources such as bandwidth and system buffers to service attack
requests at the expense of valid requests. ICMP flood attacks have existed for many years.
In these attacks, the attacker overwhelms the targeted resource with ICMP packets such
as echo request (ping) packets to saturate and slow down the victim's network
infrastructure. A UDP flood attack is triggered by sending many UDP packets to the target
system.
When a DoS attempt derives from a single host on the network, it constitutes a DoS attack.
Malicious hosts can also coordinate to flood a victim with an abundance of attack packets,
so that the attack takes place simultaneously from potentially thousands of sources. This
type of attack is called a distributed denial of service (DDoS) attack. DDoS attacks typically
emanate from networks of compromised systems that are known as botnets. In many
cases, users and administrators are not even aware that their system is part of a botnet.
A botnet consists of a group of "zombie" programs known as robots or bots and a main
control mechanism that provides direction and control for the zombies. The originator of a
botnet uses the main control mechanism on a command-and-control server to control the
zombie computers remotely, often by using Internet Relay Chat (IRC) networks.
A botnet typically operates in this manner:

• A botnet operator infects computers with malicious code, which runs the malicious bot process. A malicious bot is self-propagating malware
designed to infect a host and connect back to the command-and-control server. In
addition to its worm-like ability to self-propagate, a bot can include the ability to
log keystrokes, gather passwords, capture and analyze packets, gather financial
information, launch DoS attacks, relay spam, and open back doors on the infected
host. Bots have all the advantages of worms but are generally much more versatile
in their infection vector and are often modified within hours of publication of a
new exploit. They have been known to exploit back doors opened by worms and
viruses, which allows them to access networks with good perimeter control. Bots
rarely announce their presence with visible actions such as high scan rates, which
negatively affect the network infrastructure; instead, they infect networks in a way
that escapes immediate notice.
• The bot on the newly infected host logs into the command-and-control server and
awaits commands. Often, the command-and-control server is an IRC channel or a
web server.
• Instructions are sent from the command-and-control server to each bot in the
botnet to execute actions. When the bots receive the instructions, they begin
generating malicious traffic that is aimed at the victim. Some bots also can be
updated to introduce new functionalities to the bot.
In the example, an attacker controls the bots to launch a DDoS attack against the victim's
infrastructure. These bots communicate with the command-and-control server that the attacker controls over a covert channel that is protected, obfuscated, or otherwise designed to evade detection. This communication often takes place over IRC,
encrypted channels, bot-specific peer-to-peer networks, and even Twitter.
30.6 Examining the Security Threat Landscape
Spoofing
An attack is considered a spoofing attack when an attacker injects traffic that appears to
be sourced from a system other than the attacker's system itself. Spoofing is not
specifically an attack, but spoofing can be incorporated into various types of attacks.
Unlike other attack types, most spoofing can be easily prevented by well-known
mitigation techniques.
There are several types of spoofing; here are some of them:

• IP address spoofing: IP address spoofing is the most common type of spoofing. To perform IP address spoofing, attackers inject a source IP address in the IP header
of packets different from their real IP addresses.
• MAC address spoofing: To perform MAC address spoofing, attackers use MAC
addresses that are not their own. MAC address spoofing is generally used to
exploit weaknesses at Layer 2 of the network.
• Application or service spoofing: An example is DHCP spoofing for IPv4, which can
be done with either the DHCP server or the DHCP client. To perform DHCP server
spoofing, the attacker enables a rogue DHCP server on a network. When a victim
host requests a DHCP configuration, the rogue DHCP server responds before the
authentic DHCP server. The victim is assigned an attacker-defined IPv4
configuration. An attacker can spoof many DHCP client requests from the client-
side, specifying a unique MAC address per request. This process may exhaust the
DHCP server's IPv4 address pool, leading to a DoS against valid DHCP client
requests. Another simple example of spoofing at the application layer is an email
from an attacker which appears to have been sourced from a trusted email
account.
The following figure illustrates IPv4 address spoofing. Attacker 172.25.9.7 sends a packet
to server 10.1.2.3 but specifies 192.168.6.4 as the source address of the packet. Server
10.1.2.3 sends its response packet to what it believes to be the originating system, host
192.168.6.4.
Another example of spoofing is a land attack. The attack is named for the name of the file,
land.c, used for the original source code that is compiled into an attack tool. In a land
attack, the attacker sends a TCP SYN request using the same IP address and port as both
the source and destination IP address and port. The IP address and port combination that
is used is that of the target system. The target system replies to itself and, if the system is
vulnerable, the response leads to a system crash.

30.7 Examining the Security Threat Landscape


Reflection and Amplification Attacks
A reflection attack is a type of DoS attack in which the attacker sends a flood of protocol
request packets to various IP hosts. The attacker spoofs the source IP address of the
packets such that each packet has as its source address the IP address of the intended
target rather than the IP address of the attacker. The IP hosts that receive these packets
become "reflectors." The reflectors respond by sending response packets to the spoofed
address (the target), thus flooding the unsuspecting target.
If the request packets that are sent by the attacker solicit a larger response, the attack is
also an amplification attack. In an amplification attack, a small forged packet elicits a large
reply from the reflectors. For example, some small DNS queries elicit large replies.
Amplification attacks enable an attacker to use a small amount of bandwidth to create a
massive attack on a victim by hosts around the internet.
It is important to note that reflection and amplification are two separate elements of an
attack. An attacker can use amplification with a single reflector or multiple reflectors.
Reflection and amplification attacks are very hard to trace because the actual source of
the attack is hidden.
A classic example of reflection and amplification attacks is the smurf attack, which was
common during the late 1990s. Although the smurf attack no longer poses much of a
threat (because mitigation techniques became standard practice some time ago), it
provides a good example of amplification. In a smurf attack, the attacker sends numerous
ICMP echo-request packets to the broadcast address of a large network. These packets
contain a spoofed address of the victim as the source IPv4 address. Every host that
belongs to the large network responds by sending ICMP echo-reply packets to the
proclaimed source of the initiating packet. Since the source is spoofed (source IPv4 of the
request changed to the IPv4 of the victim), the victim is flooded with unsolicited ICMP
echo-reply packets.
The figure illustrates a smurf attack. Note the differentials in the bandwidth of the
internet connections. The attacker has a very small, 56-Kbps dialup connection. The target
has a much larger T1 connection (1.544 Mbps). The reflector network has an even larger
DS3 connection (45 Mbps). The small 56K stream of echo requests with the spoofed
source address of victim 10.1.1.5 is sent to the broadcast addresses of the large network.
As a result, thousands of echo replies are sent to 10.1.1.5 for each spoofed echo, and the
target T1 is fully consumed.

Smurf attacks can easily be mitigated on a Cisco IOS device by using the no ip directed-
broadcast interface configuration command, which has been the default setting since
Cisco IOS Software Release 12.0. With the no ip directed-broadcast command configured
for an interface, broadcasts destined for the subnet to which that interface is attached will
be dropped rather than being broadcast.
Note: An IP directed broadcast is an IPv4 packet whose destination address is a valid
broadcast address for some IPv4 subnet but which originates from a node that is not itself
part of that destination subnet.
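As a brief, hedged sketch, the no ip directed-broadcast mitigation described above is applied per interface; the interface name is illustrative, and on modern Cisco IOS releases this is already the default behavior:

Router(config)# interface GigabitEthernet0/0
Router(config-if)# no ip directed-broadcast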
While smurf attacks no longer pose the threat they once did, newer reflection and
amplification attacks may pose a huge threat. For example, in March 2013, DNS
amplification was used to cause a DDoS that made it impossible for anyone to access an
organization's website. This attack was so massive that it also slowed internet traffic
worldwide. The attackers were able to generate up to 300 Gbps of attack traffic by
exploiting DNS open recursive resolvers, which respond to DNS queries, including queries
outside their IP range. By sending an open resolver a very small, deliberately formed query with the spoofed source address of a target, an attacker can evoke a significantly larger response directed at the intended target. These types of attacks use large numbers of
compromised source systems and multiple DNS open resolvers, so the effects on the
target devices are magnified. The Open Resolver Project cataloged 28 million open
recursive DNS resolvers on the internet in 2013.
In February 2014, a Network Time Protocol (NTP) amplification attack generated a new
record in attack traffic, over 400 Gbps. NTP has some characteristics that make it an
attractive attack vector. Like DNS, NTP uses UDP for transport. Like DNS, some NTP
requests can result in replies that are much larger than the request. For example, NTP
supports a command that is called monlist, which can be sent to an NTP server for
monitoring purposes. The monlist command returns the addresses of up to the last 600
machines with which the NTP server has interacted. If the NTP server is relatively active,
this response is much bigger than the request sent, making it ideal for an amplification
attack.

30.8 Examining the Security Threat Landscape


Social Engineering
Social engineering is the process of manipulating people to capitalize on expected
behaviors. Social engineering often involves utilizing social skills, relationships, or
understanding of cultural norms to manipulate people inside a network to provide the
information that is needed to access the network. The following are examples of social
engineering:

• Calling users on the phone claiming to be IT and convincing them that they need to
set their passwords to particular values in preparation for the server upgrade that
will take place tonight.
• An individual without a badge following a badged user into a badge-secured area
(tailgating).
• Sending an infected USB key along with book or magazine samples.
• Developing fictitious personalities on social networking sites to obtain and abuse
"friend" status.
• Sending an email enticing a user to click a link to a malicious website (this is called
phishing).
• Visual hacking, where the attacker physically observes the victim entering
credentials (such as a workstation login, a bank machine PIN, or the combination
on a physical lock).
Phishing is a common social engineering technique. Typically, a phishing email pretends to
be from a large, legitimate organization, as in the figure.

Since the organization from which the phishing email appears to originate is legitimate,
the target may have a real account with the organization. The malicious website generally
resembles that of the real organization. The goal is to get the victim to enter personal
information such as account numbers, social security numbers, usernames, or passwords.
Social engineering is a serious threat and may lead to other types of attacks, and therefore
organizations should take measures to mitigate the risk from these types of attacks.
Hence, an organization should raise user awareness and educate employees to defend
against social engineering deceptions that threaten organizational security, conduct
training sessions on this subject regularly, ensure that social engineering attackers find it
difficult to breach physical security in the organization, and so on.

30.9 Examining the Security Threat Landscape


Evolution of Phishing
The evolution of phishing provides a good example of how attacks morph over time. The
original concept of phishing (sending an email and enticing users to click a link to a
malicious website) was clever, and it remains effective. It is easy to send huge numbers of
emails. Obtaining a fraction of a percent of positive responses is significant. However,
more sophisticated forms of phishing have evolved.

• Spear phishing: Emails are sent to smaller, more targeted groups. Spear phishing
may even target a single individual. Knowing more about the target community
allows the attacker to craft an email that is more likely to deceive the target
successfully. For example, an attacker sends an email with the source address of
the human resources department to the employees.
• Whaling: Like spear phishing, whaling uses the concept of targeted emails;
however, it targets a high-profile target. The target of a whaling attack is often one
or more of the top executives of an organization. The whaling email content is
designed to get an executive's attention, such as a subpoena request or a
complaint from an important customer.
• Pharming: Whereas phishing entices the victim to a malicious website, pharming
lures victims by compromising name services. Pharming can be done by injecting
entries into localhost files or by poisoning the DNS in some fashion. When victims
attempt to visit a legitimate website, the name service instead provides the IP
address of a malicious website. In the following figure, an attacker has injected an
erroneous entry into the host file on the victim system. As a result, when the
victims attempt to do online banking with BIG-bank.com, they are directed to the
address of a malicious website instead. Pharming can be implemented in other
ways. For example, the attacker may compromise legitimate DNS servers. Another
possibility is for the attacker to compromise a DHCP server, causing the DHCP
server to specify a rogue DNS server to the DHCP clients. Consumer-market routers
acting as DHCP servers for residential networks are prime targets for this form of
pharming attack.

• Watering hole: A watering hole attack uses a compromised web server to target
select groups. The first step of a watering hole attack is determining the websites
that the target group visits regularly. The second step is to compromise one or
more of those websites. The attacker compromises the websites by infecting them
with malware that can identify members of the target group. Only members of the
target group are attacked. Other traffic is undisturbed. This makes it difficult to
recognize watering holes by analyzing web traffic. Most traffic from the infected
website is benign.
• Vishing: Vishing uses the same concept as phishing, except that it uses voice and
the phone system as its medium instead of email. For example, a visher may call a
victim claiming that the victim is delinquent in loan payments and attempt to
collect personal information such as the victim's social security number or credit
card information.
• Smishing: Smishing uses the same concept as phishing, except that it uses short
message service (SMS) texting as the medium instead of email.

30.10 Examining the Security Threat Landscape


Password Attacks
Password attacks have been a problem since the beginning of network security, and they
continue to be a dominant problem in current network security. Every year SplashData
publishes a report on the most commonly used passwords that are leaked online. In 2018,
they analyzed over 5 million leaked passwords and created a report based on how
commonly passwords are used. The password "password" was number two on the list.
Five of the top 10 were numeric sequence passwords starting with 1 and varying only in
the length of the sequence (for example, 123456). There were a few clever yet still poor
passwords such as iloveyou and letmein. Most of the remaining top passwords were
simple dictionary words, often all in lower case. According to SplashData, almost every
tenth user selected at least one of the 25 top most common passwords for at least one of
their services.
Attackers have several methods of obtaining user passwords; here are some of these
methods:

• Guessing: To perform password guessing, an attacker can either manually enter passwords or use a software tool to automate the process. Truly bad passwords
can be susceptible to a lone attacker making informed guesses.
• Brute force: Brute force password attacks are performed by computer programs
called "password crackers." A password cracker performs a brute force crack by
systematically trying every possible password until it succeeds. For example, it may
start by trying all one-character passwords, then move to two-character
passwords, and so on, trying all possible combinations until one works. The speed
at which an attacker can obtain a password with this method may depend upon
the speed of the attacker's computer (how many calculations it can perform per
second), the speed of the attacker's internet connection, and the length and
complexity of the password. Many password crackers are available, and many of
them are available at no cost.
• Dictionary attacks: Dictionary attacks use word lists to structure login attempts.
The word lists can contain millions of words, including words from natural
language dictionaries and words such as sports team names, profanity, and slang.
Dictionary attacks are not always successful and are often attempted before a
brute force attack. In some ways, however, a dictionary attack is similar to a brute
force attack. It is an automated process that is performed by a password cracker
program; the speed at which it enables the attacker to obtain a password may
depend upon the speed of the attacker's computer (how many calculations it can
perform per second), the speed of the attacker's internet connection, and the
length and complexity of the password; and many dictionary attack tools are
available for free on the internet.
A password attack can be either an online attack or an offline attack. In an online attack,
an attacker makes repeated attempts to log in. The activity is visible to the authentication
system, so the system can automatically lock the account after too many bad guesses.
Account lockout disables the account and makes it unavailable for further attacks during
the lockout period. The lockout period and the number of allowed login attempts are
configurable by a system administrator.
Offline attacks are far more dangerous. In an online attack, the system in which the password is stored protects the password, but there is no such protection in offline attacks. In an offline attack, the attacker captures the password hash or the encrypted form of the password. The attacker can then make countless attempts to crack the password without being noticed. The longer and more complex a password is, the more difficult and time-consuming it is for attackers to crack it.
Many authentication systems require a certain degree of password complexity. Specifying
a minimum length of a password and forcing an enlarged character set (upper case, lower
case, numeric, and special characters) can greatly influence the feasibility of brute force
attacks. However, if users attempt to meet the enlarged character set requirements by
making simple adjustments, such as capitalizing the first letter and appending a number
and an exclamation point (changing, for example, unicorn to Unicorn1!), little is gained
against a dictionary attack using some simple transforms.
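As a related, hedged example on a Cisco IOS device, a minimum password length and a temporary lockout after repeated failed logins (which blunts online guessing) can be configured roughly as follows; the specific values are illustrative only:

Router(config)# security passwords min-length 10
Router(config)# login block-for 120 attempts 3 within 60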

30.11 Examining the Security Threat Landscape


Reconnaissance Attacks
A reconnaissance attack attempts to learn more about the intended victim before
attempting a more intrusive attack. Attackers can use standard networking tools such as
dig, nslookup, and whois to gather public information about a target network from DNS
registries. All three are command-line tools. The nslookup and whois tools are available on
Windows, UNIX, and Linux platforms, and dig is available on UNIX and Linux systems.
The following example shows a partial output of a whois query:

The next example shows a partial output of a dig query:

This example shows a partial output of a nslookup query:
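(The original query outputs are not reproduced here. For reference, the three queries might be invoked from a command line as follows; example.com is a placeholder domain.)

$ whois example.com
$ dig example.com any
$ nslookup example.com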


The DNS queries can reveal such information as who owns a particular domain and
addresses assigned to that domain. Ping sweeps of the addresses revealed by the DNS
queries can present a picture of the live hosts in a particular environment. After a list of
live hosts is generated, the attacker can probe further by running port scans on the live
hosts. Port scanning tools can cycle through all well-known ports to provide a complete
list of all services running on the hosts. The attacker can use this information to determine
the easiest way to exploit a vulnerability.
An authorized security administrator can use vulnerability scanners such as Nessus and
OpenVAS to locate vulnerabilities in their own networks and patch them before being
exploited. Of course, attackers can also use these tools to locate vulnerabilities before an
organization even knows that they exist.
Note: The use of vulnerability scanners by unauthorized personnel is usually a violation of
organizational security policies. Do not experiment with vulnerability scanners on
networks unless you are explicitly authorized to do so.

30.12 Examining the Security Threat Landscape


Buffer Overflow Attacks
Attackers can analyze network server applications for flaws. A buffer overflow
vulnerability is one type of flaw. A buffer is typically volatile or nonpersistent memory used to "buffer" inputs or outputs. Buffers are a very common construct in most software
components and are typically allocated from system memory. Suppose a service accepts
input and expects the input to be within a certain size but does not verify the size of input
upon reception. In that case, the corresponding buffer can overflow, and the service may become vulnerable to a buffer overflow attack. This means that an attacker can provide larger than
expected input, and the service will accept the input and write it to memory, filling up the
associated buffer and overwriting adjacent memory. These overwrites may corrupt the
system and cause it to crash, resulting in a DoS. In the worst cases, the attacker can inject
malicious code in the buffer overflow, leading to a system compromise.
Buffer overflow attacks are a common vector for client-side attacks. Malicious code can be
injected into data files, and the code can be executed when a vulnerable client application
opens the data file.
For example, assume that an attacker posts such an infected file to the Internet. An
unsuspecting user downloads the file and opens it with a vulnerable application. A
malicious process connects to rogue systems on the Internet and downloads additional
payloads on the user's system. Firewalls generally do a much better job of preventing
inbound malicious connections from the Internet than they do of preventing outbound
malicious connections to the internet.

30.13 Examining the Security Threat Landscape


Man-in-the-Middle Attacks
A man-in-the-middle attack is a generalized concept that can be implemented in many different scenarios rather than being one specific attack. Generally, in these attacks, a system that can
view the communication between two systems imposes itself in the communication path
between those other systems. Man-in-the-middle attacks are complex attacks that require
successful attacks against IP routing or protocols (such as Address Resolution Protocol
[ARP], neighbor discovery [ND] for IPv6, DNS, or DHCP), resulting in the misdirection of
traffic.
For example, an ARP-based man-in-the-middle attack is achieved when an attacker
poisons the ARP cache of two devices with the MAC address of the attacker's network
interface card (NIC). Once the ARP caches have been successfully poisoned, each victim
device sends all its packets to the attacker when communicating to the other device. The
attacker is put in the middle of the communications path between the two victim devices.
This allows the attacker to easily monitor all communication between the victim devices. The intent is to intercept and view the information being passed between the two victims and potentially to inject sessions and traffic between them.
The figure illustrates an ARP-based man-in-the-middle attack.
The attacker poisons the ARP caches of hosts A and B so that each host will send all its
packets to the attacker when communicating to the other host.
A man-in-the-middle attack can be passive or active. In passive attacks, attackers steal
confidential information. In active attacks, attackers modify data in transit or inject data of
their own. ARP cache poisoning attacks often target a host and the host’s default gateway.
The attacker is put as a man-in-the-middle between the host and all other systems outside
of the local subnet.
Today, there are many standard approaches and best practices to protect against man-in-
the-middle attacks. Strong cryptography in combination with a fully verified trust chain is among the most effective protections.
Note: As you saw with vulnerability scanners, security technologies can often be used
either for attack or defense. The discussion above illustrates man-in-the-middle
techniques for attack. Some security products have features that also rely on man-in-the-
middle behavior. For example, the Cisco Web Security Appliance can be deployed as an
HTTPS proxy. It decrypts and re-encrypts Secure Sockets Layer (SSL) protected data to
analyze the contained data and provide services to protect against data loss and
exfiltration.

30.14 Examining the Security Threat Landscape


Vectors of Data Loss and Exfiltration
The expression "vector of data loss and exfiltration" refers to how data leaves the
organization without authorization.
Common vectors of data loss and exfiltration include the following:
• Email attachments: Email attachments often contain sensitive information like
confidential corporate, customer, and personal data. The attachments can leave
the organization in various ways. For example, the email with the attachment
might be intercepted, or a user might accidentally send the email to the wrong
person.
• Unencrypted devices: Smartphones and other personal devices are often
protected only with a password. Employees sometimes send sensitive company
information to these devices. While the data may be encrypted while traversing
the internet to the device, it can be unencrypted when it lands on the personal
device. If the device password is compromised, an attacker can steal corporate
data and perhaps even gain unauthorized access to the company network.
• Cloud storage services: Company employees are often tempted to transfer large
files by using cloud storage services of their own choosing without the approval of
the company's IT department. The result can be theft of sensitive documents by
someone like a social network "friend" with whom the employee shares a
directory on the cloud storage server.
• Removable storage devices: Putting sensitive data on a removable storage device
may pose more of a threat than putting that data on a smartphone. Such devices
are not only easily lost or stolen; they also typically do not have passwords,
encryption, or any other protection for the data they contain. While such
protection for removable storage devices is available, it is relatively expensive and
infrequently used as of this writing.
• Improper access controls: Without proper access controls such as access control
lists on firewalls, the risk of data loss is high. Organizations can lower their risk of
data loss by fine-tuning access controls and patching known vulnerabilities.

30.15 Examining the Security Threat Landscape


Other Considerations
The following are a few of the many common myths that pertain to network security:

• No one would be interested in my network: In the past, this statement might have
been true if your network was very small, but attackers are now interested in
smaller targets that are easier to attack. If you think no one would be interested in
attacking your network, your network is probably not as secure as it could be,
making it very interesting indeed to attackers. Even if you have a two-computer
network that contains no tempting information such as banking information, debit
card information, or national defense secrets, your computers can still be a target
for several reasons. One reason is that an attacker can use your computers to
launch larger, distributed attacks. Another reason is that an attacker may use your computers to access the remote systems that your computers can access.
• Router or gateway uses NAT; network is inaccessible and secure: NAT has been
created to address issues with overlapping IP address spaces and allow translation
between private and public address spaces. NAT has never been a security
mechanism, and indeed many techniques allow bidirectional communication over
NAT gateways. Without any rules, inspection, or other real firewalling procedures,
pure NAT gateways should never be considered part of a network security
architecture.
• The company has never been hacked: Unless you regularly monitor and analyze
the activity on your assets, you cannot be sure that you have never been hacked or
are not currently being attacked. Effective monitoring and analysis almost certainly
require software to automate the analysis.
• IT staff is responsible for implementing security: While IT staff play a very
important role in the configuration, maintenance, and monitoring of security
controls, end users play a primary role in implementing security. In a typical
environment, end users heavily outnumber IT staff. End users must understand the
need for security policies and their role in policy execution.
• The company has a firewall in place; it is secure: It used to be very common to use
resources to secure the perimeter and have very open systems within the
perimeter. The understanding that internal systems must be secured has gained
much traction, but there are still proponents of focusing on a hardened perimeter.
Also, reliance on a single security technology is risky. For example, firewalls can be
poorly configured, and client-side attacks are very difficult for firewalls to deal
with. Focusing on individual security points and relying on any single security
technology is insufficient in today’s networking environments.
This section provided an overview of the current networking threat landscape, but it only
addresses the basics. The threats are innumerable and constantly changing. The list below
provides more examples of today’s threat vectors:

• Cognitive threats via social networks: Social engineering takes a new meaning in
the era of social networking. Attackers can create false identities on social
networks, building and exploiting friend relationships with others on the social
network. Phishing attacks can much more accurately target susceptible audiences.
Confidential information may be exposed due to a lack of defined or enforced
policy.
• Consumer electronics exploits: The operating systems on consumer devices
(smartphones, tablets, and so on) are a target of choice for high-volume attacks.
The proliferation of applications for these operating systems, and the nature of the
development and certification processes for those applications, augments the
problem. The common expectation of bring your own device (BYOD) support
within an organization’s network increases the importance of this issue.
• Widespread website compromises: Malicious attackers compromise popular
websites, forcing the sites to download malware to connecting users. Attackers
typically are not interested in the data on the website, but they use it as a
springboard to infect the systems of users connecting to the site.
• Disruption of critical infrastructure: The Stuxnet worm confirmed concerns about
an increase in targeted attacks that are aimed at the power grid, nuclear plants,
and other critical infrastructure.
• Virtualization exploits: Device and service virtualization adds more complexity to
the network. Attackers know this fact and increasingly target virtual servers, virtual
switches, and trust relationships at the hypervisor level.
• Memory scraping: Increasingly popular, this technique is aimed at fetching
information directly from volatile memory. The attack tries to exploit operating
systems and applications that leave traces of data in memory. Attacks are
particularly aimed at accessing data that is encrypted when stored on a disk or
sent across a network but is clear text when processed in the RAM of the
compromised system.
• Hardware hacking: These attacks aim to exploit the hardware architecture of
specific devices, with consumer devices being increasingly popular. Attack
methods include bus sniffing, altering firmware, and memory dumping to find
crypto keys. Hardware-based keyloggers can be placed between a keyboard and a
computer system. Bank machines can be hacked with inconspicuous magnetic card
readers and microcameras.
• IPv6-based attacks: These attacks are becoming more pervasive as the migration
to IPv6 becomes widespread. Attackers initially focus on covert channels
through various tunneling techniques, and man-in-the-middle attacks use IPv6 to
exploit IPv4 in dual-stack deployments.
Note: Most modern operating systems on client devices have IPv6 enabled by default.
Even though you may not yet be routing IPv6 traffic, it may be flowing in your network.
Having appropriate protection and security mechanisms for IPv6 is therefore always
recommended.
31.1 Implementing Threat Defense Technologies
Introduction
As networks become increasingly interconnected and data flows more freely, it becomes
important to enable networks to provide security services. In the commercial world,
connectivity is no longer optional. Therefore, security services must provide adequate
protection to companies that conduct business in a relatively open environment. Trends in
security threats result in the need for dynamic security intelligence gathering and
distribution, early warning systems, and application layer inspection for mobile services
where data and applications are hosted in the cloud. Enterprise network design principles
must include technologies for threat control and containment, which typically include
using firewalls and intrusion prevention systems (IPSs).
Enterprises also use the internet to connect branch offices, remote employees, and
business partners to their resources. A reliable way to maintain company privacy while
streamlining operations and allowing flexible network administration is to use
cryptographic technologies.
WLANs are widely deployed in Enterprise environments such as corporate offices,
industrial warehouses, internet-ready classrooms, and even canteens. These WLANs
present new challenges for network administrators and information security
administrators alike. Unlike the relative simplicity of wired Ethernet deployments, 802.11-
based WLANs broadcast radio-frequency (RF) data for the client stations to hear. This
presents new and complex security issues.
As a networking engineer, you need to have skills in the security technologies that are
available to protect networks in the modern network security threatscape, such as in the
following areas:

• concepts of firewalls and IPSs


• cryptographic technologies
• wireless security protocols

31.2 Implementing Threat Defense Technologies


Information Security Overview
To provide adequate protection of networked resources, the procedures and technologies
that you deploy must guarantee three things:

• Confidentiality: Providing confidentiality of data guarantees that only authorized
users can view sensitive information.
• Integrity: Providing integrity of data guarantees that only authorized users can
change sensitive information. Integrity might also guarantee the authenticity of
data.
• System and data availability: Providing system and data availability guarantees
uninterrupted access by authorized users to important computing resources and
data.
When designing network security, a designer must be aware of the following:

• The threats (possible attacks) that could compromise security


• The associated risks of the threats—that is, how relevant those threats are for a
particular system
• The cost to implement the proper security countermeasures for a threat
• The need to perform a cost-benefit analysis to determine if it is worthwhile to
implement security countermeasures
Security Awareness
In order to get non-IT staff to think about information security, an organization must
regularly attempt to remind staff members about security. Members of the technical staff
also need regular reminders, because their jobs tend to emphasize performance rather
than secure performance. Therefore, leadership must develop a nonintrusive program
that keeps everyone aware of security and how to work together to maintain the security
of their data. The three primary components that are used to implement this type of
program are awareness, training, and education. As illustrated in the figure below, an
effective computer security awareness and training program requires proper planning,
implementation, maintenance, and periodic evaluation.

In general, a computer security awareness and training program should encompass the
following seven steps:
1. Identify program scope, goals, and objectives: The scope of the program should
provide training to all of the types of people who interact with IT systems. Because
users need training that relates directly to their use of particular systems, you need
to supplement a large organizationwide program with more system-specific
programs.
2. Identify training staff: It is important that trainers have sufficient knowledge of
computer security issues, principles, and techniques. It is also vital that they know
how to communicate information and ideas effectively.
3. Identify target audiences: Not everyone needs the same degree or type of
computer security information to do their jobs. A computer security awareness
and training program that distinguishes between groups of people and presents
only the information that is needed by that particular audience, omitting irrelevant
information, will obtain the best results.
4. Motivate management and employees: To successfully implement an awareness
and training program, it is important to gain the support of management and
employees. Consider using motivational techniques to show management and
employees how their participation in a computer security and awareness program
benefits the organization.
5. Administer the program: Several important considerations for administering the
program include visibility, selection of appropriate training methods, topics,
materials, and presentation techniques.
6. Maintain the program: The organization should make an effort to keep current
with changes in computer technology and security requirements. A training
program that meets the needs of an organization today might become ineffective
when the organization starts to use a new application or changes its environment.
7. Evaluate the program: An evaluation should attempt to ascertain how much
information is retained, to what extent computer security procedures are being
followed, and general attitudes toward computer security.

31.3 Implementing Threat Defense Technologies


Firewalls
A firewall is a system that enforces an access control policy between two or more security
zones. The figure below illustrates the concept.

There are several types of firewalls, but all firewalls should have these properties:

• The firewall itself must be resistant to attack—otherwise, it would allow an
attacker to disable the firewall or change its access rules.
• All traffic between security domains must flow through the firewall. This
requirement prevents a backdoor connection that could be used to bypass the
firewall, violating the network access policy.
• A firewall must have traffic-filtering capabilities.
The simplest type of a firewall is a packet filter. As the name implies, packet filters look at
individual packets in isolation. Based on the contents of the packet and the configured
policy, they make a permit or deny decision. Packet filters generally have robust options
for differentiating desirable and undesirable packets.
Firewalls commonly control access between the security zones that are based on the
packet source and destination IP address and port. The figure below shows a firewall that
permits HTTP traffic but denies Telnet traffic to a protected network. In this example, the
device providing the firewalling services is a Cisco ASA adaptive security appliance (Cisco
ASA).

Where a packet filter controls access on a packet-by-packet basis, stateful firewalls control
access on a session-by-session basis. It is called stateful because the firewall is
remembering the state of the session. By default, a stateful firewall does not allow any
traffic from the outside into the secure inside network, except for reply traffic, because
users from the secure inside network first initiated the traffic to the outside destination.
A firewall can be a hardware appliance, a virtual appliance, or a software that runs on
another device such as a router. Although firewalls can be placed in various locations
within a network (including on endpoints), they are typically placed at least at the internet
edge, where they provide vital security. Firewall threat controls should be implemented at
least at the most exposed and critical parts of enterprise networks. The internet edge is
the network infrastructure that provides connectivity to the internet and acts as the
gateway for the enterprise to the rest of the cyberspace. Because it is a public-facing
network infrastructure, it is particularly exposed to a large array of external threats.
Firewalls are also often used to protect data centers. The data center houses most of the
critical applications and data for an enterprise. The data center is primarily inward facing
and most clients are on the internal network. The intranet data center is still subject to
external threats, but must also be guarded against threat sources inside the network
perimeter.
Many firewalls also provide a suite of additional services such as Network Address
Translation (NAT) and multiple security zones. Another important service that is also
frequently provided by firewalls is Virtual Private Network (VPN) termination.
Note: NAT by itself does not provide security. Due to the stateful nature of NAT, if an
unknown packet arrives from the outside network, it is dropped because the NAT device
does not know to which device it should forward the packet. However, this function
should not be counted as a firewall feature. In addition, as soon as the inside host opens a
session through NAT, anyone can send TCP or UDP packets to the source port used by that
host.
Firewall products have evolved to meet the needs of borderless networks of today. From
simple perimeter security with access control lists (ACLs), based on IP addresses and ports,
firewalls have evolved to offer some advanced security services. The hard outer shell that
firewalls provided in the past is now superseded by security capabilities that are
integrated into the very fiber of the network to defend against multivector and persistent
threats. Because of the current threat landscape, Cisco Secure Firewalls (formerly Cisco
Next Generation Firewalls [NGFWs]) are needed.
In addition to the standard first-generation firewall capabilities, Cisco Secure Firewalls also
have these capabilities:

• Integrate security functions tightly to provide highly effective threat and advanced
malware protection
• Implement policies that are based on application visibility instead of transport
protocols and ports
• Provide URL filtering and other controls over web traffic
• Provide actionable indications of compromise to identify malware activity
• Offer comprehensive network visibility
• Help reduce complexity
• Integrate and interface smoothly with other security solutions

31.4 Implementing Threat Defense Technologies


Intrusion Prevention Systems
An IPS is a system that performs deep analysis of network traffic, searching for signs of
suspicious or malicious behavior. If it detects such behavior, the IPS can take protective
action. Because it can perform deep packet analysis, an IPS can complement a firewall by
blocking attacks that would normally pass through a traditional firewall device. For
example, an IPS can detect and block a wide range of malicious files and behavior,
including some botnet attacks, malware, and application abuse.
The figure below shows a firewall and an IPS working in conjunction to defend a network.
This example shows a network-based IPS, in which IPS devices are deployed at designated
network points to address network attacks regardless of the location of the attack target.
Network-based IPS technology is deployed in a sensor, which can be a dedicated appliance
or a module that is installed in another network device. There are also host-based IPSs
that only detect attacks that occur on the hosts on which they are installed.

Several methods of traffic inspection are used in various IPS systems:

• Signature-based inspection: A signature-based IPS examines the packet headers
and data payloads in network traffic and compares the data against a database of
known attack signatures. The database must be continually updated to remain
effective. A signature might be a sequence or a string of bytes in a certain context.
Signature-based inspection is sometimes referred to as rule-based or pattern-
matching inspection.
• Anomaly-based inspection: Anomaly-based network IPS devices observe network
traffic and act if a network event outside normal network behavior is detected.
There are two types of anomaly-based network IPS systems:

• Statistical anomaly detection (network behavior analysis): Observes network
traffic over time and builds a statistical profile of normal traffic behavior based
on communication patterns, traffic rate, mixture of protocols, and traffic
volume. After a normal profile has been established, statistical anomaly
detection systems detect or prevent activity that violates the normal profile.
• Protocol verification: Observes network traffic and compares network,
transport, and application layer protocols that are used inside network traffic
protocol to standards. If a deviation from standard-based protocol behavior is
detected (such as a malformed IP packet), the system can take appropriate
action.
• Policy-based inspection: A policy-based IPS analyzes network traffic and takes
action if it detects a network event outside a configured traffic policy.
Modern next-generation IPSs (NGIPSs) combine the benefits of these inspection methods.
They utilize technology such as traffic normalization and decode protocols to counter
evasive attacker techniques and to improve efficacy. They also utilize newer and more
sophisticated technologies such as reputation, context awareness, event correlation, and
cloud-based services to provide more robust and flexible protection.

31.5 Implementing Threat Defense Technologies


Protection Against Data Loss and Phishing Attacks
Targeted or directed attacks, such as phishing attacks, try to mislead employees into
releasing sensitive information such as credit card numbers, social security numbers, or
intellectual property. Phishing attacks might direct employees to inadvertently browse
malicious websites that distribute additional malware to computer endpoints. No matter
how much the threat landscape changes, malicious email remains a vital tool for
adversaries to distribute malware because they take threats straight to the endpoint. By
applying the right mix of social engineering techniques, such as phishing and malicious
links and attachments, adversaries need only to sit back and wait for unsuspecting users
to activate their exploits.
Cisco Email Security Appliance (Cisco ESA) keeps your inbox highly secure. This all-in-one
appliance defends against spam, advanced malware, phishing, and data loss. The Cisco
ESA protects the email infrastructure and employees who use email at work by filtering
unsolicited and malicious email before it reaches the user.
Graymail is categorized as marketing, social networking, and bulk messages. Using an
unsubscribe mechanism, end users can indicate to the sender that they want to opt out of
receiving such emails. Since mimicking an unsubscribe mechanism is a popular phishing
technique, users are wary of clicking the unsubscribe links. For that reason, the Cisco ESA
uses graymail detection and filters out the graymail according to the rules and actions that
you configure.
On the other side, the anti-malware system gives the Cisco Secure Web Appliance
(formerly, Cisco Web Security Appliance [WSA]), the distinction of being the first solution
on the market to offer multiple anti-malware scanning engines on a single, integrated
appliance. This system uses the Cisco Dynamic Vectoring and Streaming (DVS) engine, and
third-party verdict engines from Webroot, Sophos, and McAfee, to provide protection
against the widest variety of web-based threats. These threats can range from adware,
browser hijackers, phishing, and pharming attacks to more malicious threats such as
rootkits, Trojans, worms, system monitors, and keyloggers. Furthermore, Cisco Web-Based
Reputation Filtering prevents client devices from accessing dangerous websites that
contain viruses, spyware, malware, or phishing links. Web reputation filters analyze web
server behavior and assign a reputation score to a URL to determine the likelihood that it
contains URL-based malware. Web reputation filtering helps protect against URL-based
malware that threatens end-user privacy and sensitive corporate information. The Cisco
Secure Web Appliance uses URL reputation scores to identify suspicious activity and stop
malware attacks before they occur.
In addition to protecting against phishing attacks, enterprises can also protect against data
loss by scanning the web and email traffic leaving the networks. Intellectual property is
one of an organization's most important business assets and it can be lost through
inadvertent disclosure, or through malicious action by an employee or an outsider.
Businesses lose billions of dollars each year from theft of trade secrets.
Sensitive data can leave the network perimeter by many different means, such as email,
web applications, file transfers, and instant messaging. Enforcing content policies at the
network perimeter is an effective defense against accidental data loss. Cisco partners with
RSA, a data loss prevention (DLP) solution provider, to provide integrated DLP technology
on Cisco ESA and Cisco Secure Web Appliance.
A primary goal of data security systems is to protect against theft of intellectual property
and confidential customer data. Doing so helps organizations comply with legal and
regulatory standards. The DLP feature secures your organization's proprietary information
and intellectual property and enforces compliance with government regulations by
preventing users from maliciously or unintentionally emailing sensitive data from your
network or uploading content on public cloud services. You define the types of data that
your employees are not allowed to email or upload by creating DLP policies that are used
to scan email and web traffic for any data that violate laws or corporate policies.
The Cisco Secure Firewall also provides protection against phishing attacks and other
malware, such as viruses, worms, or Trojans that might be included in the incoming and
outgoing traffic, as well as various malware sites and applications.

31.6 Implementing Threat Defense Technologies


Defending Against DoS and DDoS Attacks
The challenge in preventing DoS and DDoS attacks lies in the nature of the traffic and the
nature of the "attack" because most often the traffic is legitimate as defined by a protocol.
Therefore, there is not a straightforward approach or method to filter or block the
offending traffic.
There are two types of attacks, volumetric and application-level. Volumetric attacks use an
increased attack footprint that seeks to overwhelm the target. This traffic can be
application specific, but it is most often simply random traffic sent at a high intensity to
overutilize the target's available resources. Volumetric attacks generally use botnets to
amplify the attack footprint. Additional examples of volumetric attacks are Domain Name
System (DNS) amplification attacks and TCP SYN floods. Application-level attacks exploit
specific applications or services on the targeted system. They typically bombard a protocol
and port a specific service uses to render the service useless. Most often, these attacks
target common services and ports, such as HTTP (TCP port 80) or DNS (TCP/UDP port 53).
Some of the various solutions that can protect against denial of service (DoS) and
distributed denial of service (DDoS) attacks are as follows:

• Stateful devices, such as firewalls and IPS systems: Stateful devices do not provide
complete coverage and mitigation for DDoS attacks because of their ability to
monitor connection states and maintain a state table. Maintaining such
information is central processing unit (CPU) and memory intensive. When
bombarded with an influx of traffic, the stateful device spends most, if not all, of
its resources tracking states and further connection-oriented details. This effort
often causes the stateful device to be the "choke point" or succumb to the attack.
• Route filtering techniques: Remotely triggered black hole (RTBH) filtering can drop
undesirable traffic before it enters a protected network. Network black holes are
places where traffic is forwarded and dropped. When an attack has been detected,
black holing can be used to drop all attack traffic at the network edge, based on
destination or source IP address.
• Unicast Reverse Path Forwarding: Network administrators can use Unicast
Reverse Path Forwarding (uRPF) to help limit malicious traffic flows occurring on a
network, as is often the case with DDoS attacks. This security feature works by
enabling a router to verify the "reachability" of the source address in packets being
forwarded. This capability can limit the appearance of spoofed addresses on a
network. If the source IP address is not valid, the packet is discarded.
• Geographic dispersion (global resources anycast): A newer solution for mitigating
DDoS attacks dilutes attack effects by distributing the footprint of DDoS attacks so
that the targets are not individually saturated by the volume of attack traffic. This
solution uses a routing concept known as anycast. Anycast is a routing
methodology that allows traffic from a source to be routed to various nodes
(representing the same destination address) via the nearest hop or node in a group
of potential transit points. This solution effectively provides "geographic
dispersion."
• Tightening connection limits and timeouts: Antispoofing measures such as limiting
connections and enforcing timeouts in a network environment seek to ensure that
DDoS attacks are not launched or spread from inside the network, intentionally or
unintentionally. Administrators are advised to leverage these solutions to enable
antispoofing and thwart random DDoS attacks on the inside "zones" or internal
network. Such limits that can be configured on firewalls include half-open
connection limits, global TCP SYN-flood limits, and so on.
• Reputation-based blocking: Reputation-based technology provides URL analysis
and establishes a reputation for each URL. Reputation technology has two aspects.
The intelligence aspect couples worldwide threat telemetry, intelligence engineers,
and analytics/modeling. The decision aspect focuses on the trustworthiness of a
URL. Reputation-based blocking limits the impact of untrustworthy URLs.
• Access control lists: ACLs provide a flexible option to a variety of security threats
and exploits, including DDoS. ACLs provide day zero or reactive mitigation for DDoS
attacks, as well as a first-level mitigation for application-level attacks. An ACL is an
ordered set of rules that filter traffic. Each rule specifies a set of conditions that a
packet must satisfy to match the rule. Firewalls, routers, and even switches
support ACLs.
• DDoS run books: The premise behind a DDoS run book is simply to provide a
"playbook" for an organization in the event that a DDoS attack arises. In essence,
the run book provides crisis management (better known as an incident response
plan) in the event of a DDoS attack. The run book provides details about who owns
which aspects of the network environment, which rules or regulations must still be
adhered to, and when to activate certain processes, solutions, and mitigation
plans.
• Manual responses to DDoS attacks: Manual responses to DDoS attacks focus on
measures and solutions that are based on details administrators discover about
the attack. For example, when an attack such as an HTTP GET/POST flood occurs,
given the information known, an organization can create an ACL to filter known
bad actors or bad IP addresses and domains. When an attack arises, administrators
can configure or tune firewalls or load balancers to limit connection attempts.

31.7 Implementing Threat Defense Technologies


Introduction to Cryptographic Technologies
Cryptography is the practice and study of techniques to secure communications in the
presence of third parties. Historically, cryptography was synonymous with encryption. Its
goal was to keep messages private. In modern times, cryptography includes other
responsibilities:

• Confidentiality: Ensuring that only authorized parties can read a message


• Data integrity: Ensuring that any changes to data in transit will be detected and
rejected
• Origin authentication: Ensuring that any messages received were actually sent
from the perceived origin
• Non-repudiation: Ensuring that the original source of a secured message cannot
deny having produced the message
Hashing Algorithms
Hashing is a mechanism that is used for data integrity assurance. Hashes confirm that the
message is authentic without transmitting the message itself. Hashing functions are
designed so that you cannot revert hashed data into the original message.
Hashing is based on a one-way mathematical function: functions that are relatively easy to
compute, but significantly difficult to reverse. Grinding coffee is a good example of a one-
way function: it is easy to grind coffee beans, but it is almost impossible to put back
together all the tiny pieces to rebuild the original beans.
Data of an arbitrary length is input into the hash function, and the result of the hash
function is the fixed-length hash, which is known as the "digest" or "fingerprint." If the
same data is passed through a hash algorithm at different times, the output is identical.
Any small modification to the data produces a drastically different output. For example,
flipping one bit in the data might produce output in which half the bits are flipped. This
characteristic is often referred to as the avalanche effect, because one bit flipped and
caused an avalanche of bits to flip. Data is deemed authentic if running the data through
the hash algorithm produces the expected result, also called the fingerprint.
The following output illustrates how adding a period to the end of the sentence results in
a different digest due to the avalanche effect.

Note: SHA-256 is one of the hashing algorithms that is used for data integrity.
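A minimal Python sketch, using only the standard hashlib module and an arbitrary sample sentence, reproduces this effect:

import hashlib

# Two messages that differ only by a trailing period
m1 = b"The quick brown fox jumps over the lazy dog"
m2 = b"The quick brown fox jumps over the lazy dog."

print(hashlib.sha256(m1).hexdigest())
print(hashlib.sha256(m2).hexdigest())

# The two 64-hex-digit digests share almost no visible structure, even
# though the inputs differ by a single character (the avalanche effect).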
Since hash algorithms produce a fixed-length output, there are a finite number of possible
outputs. It is possible for two different inputs to produce an identical output. They are
referred to as hash collisions.
Hashing is similar to the calculation of cyclic redundancy check (CRC) checksums, but it is
much stronger cryptographically. CRCs were designed to detect randomly occurring errors
in digital data, while hash algorithms were designed to assure data integrity even when
data modifications are intentional with the objective to pass fraudulent data as authentic.
One primary distinction is the size of the digest produced. CRC checksums are relatively
small, often 32 bits. Commonly used hash algorithms produce digests in the range of 128
to 512 bits in length. It is relatively easier for an attacker to find two inputs with identical
32-bit checksum values than it is to find two inputs with identical digests of 128 to 512 bits
in length.
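As a rough illustration of this difference, the following Python sketch (standard library only) searches random inputs for a CRC32 collision. Because the checksum is only 32 bits, the birthday effect typically produces a collision after a few tens of thousands of attempts, whereas no comparable search is feasible against a 256-bit SHA-2 digest:

import os
import zlib

seen = {}
attempts = 0
while True:
    data = os.urandom(16)
    attempts += 1
    crc = zlib.crc32(data)
    if crc in seen and seen[crc] != data:
        print("CRC32 collision after", attempts, "random inputs")
        print(seen[crc].hex(), "and", data.hex(), "share checksum", hex(crc))
        break
    seen[crc] = data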
The following figure illustrates how hashing is performed.

The following figure shows one use of hash algorithms to provide data integrity.
Organizations that offer software for download often publish hash digests on the
download page that can be used to verify data integrity of the downloaded software.
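A hedged sketch of that verification step is shown below; the file name and the published digest are placeholders that you would replace with the real values from the vendor's download page:

import hashlib

FILENAME = "downloaded-image.bin"                 # placeholder file name
PUBLISHED_SHA256 = "<digest copied from the download page>"

h = hashlib.sha256()
with open(FILENAME, "rb") as f:
    for chunk in iter(lambda: f.read(65536), b""):   # hash the file in chunks
        h.update(chunk)

if h.hexdigest() == PUBLISHED_SHA256.lower():
    print("Digest matches: integrity of the download is verified")
else:
    print("Digest mismatch: the file is corrupted or has been tampered with")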
Examples of hash algorithms include:

• Deprecated:
o MD5: Produces a 128-bit hash value that is typically represented as a
sequence of 32-hex digits. However, MD5 is considered insecure and
should be avoided.
o SHA-1: Produces a 160-bit hash value that is typically represented as a
sequence of 40-hex digits. It is a legacy algorithm and is no longer
considered adequately secure. Both MD5 and SHA-1 are vulnerable to hash collisions.
• Next-generation (recommended):
o SHA-2: Includes significant changes from its predecessor SHA-1, and is the
recommended hash algorithm today. The SHA-2 family consists of multiple
hash functions with different digest lengths; in general, the longer the
digest, the stronger the security.
o SHA-256: Produces a 256-bit hash value that is typically represented as a
sequence of 64-hex digits.
o SHA-384: Produces a 384-bit hash value that is typically represented as a
sequence of 96-hex digits.
o SHA-512: Produces a 512-bit hash value that is typically represented as a
sequence of 128-hex digits.
Encryption
A cipher is an algorithm for performing encryption and decryption. Ciphers are a series of
well-defined steps that you can follow as a procedure.
Encryption is the process of disguising a message in such a way as to hide its original
contents. With encryption, the plaintext readable message is converted to ciphertext,
which is the unreadable, "disguised" message. Decryption reverses this process.
Encryption is used to guarantee confidentiality so that only authorized entities can read
the original message.

Modern encryption relies on public algorithms that are cryptographically strong using
secret keys. It is much easier to change keys than it is to change algorithms. In fact, most
cryptographic systems dynamically generate new keys over time, limiting the amount of
data that may be compromised with the loss of a single key.
Encryption can provide confidentiality at different network layers, such as the following:

• Encrypt application layer data, such as encrypting email messages with Pretty
Good Privacy (PGP).
• Encrypt session layer data using a protocol such as Secure Sockets Layer (SSL) or
Transport Layer Security (TLS). Both SSL and TLS are considered to be operating at
the session layer and higher in the Open Systems Interconnection (OSI) reference
model.
• Encrypt network layer data using protocols such as those provided in the IP
Security (IPsec) protocol suite.
• Encrypt data link layer using MAC Security (MACsec) (IEEE 802.1AE) or proprietary
link-encrypting devices.
Encryption Algorithm Features
A good cryptographic algorithm is designed in such a way that it resists common
cryptographic attacks. The best way to break data that is protected by the algorithm is to
try to decrypt the data using all possible keys. The amount of time needed by such an
attack depends on the number of possible keys, but the time is generally very long. With
appropriately long keys, such attacks are usually considered unfeasible.
Variable key lengths and scalability are also desirable attributes of a good encryption
algorithm. The longer the encryption key is, the longer it takes an attacker to break it. For
example, a 16-bit key means that there are 65,536 possible keys, but a 56-bit key means
that there are around 72,000,000,000,000,000 possible keys. Scalability provides flexible
key length and allows you to select the strength and speed of encryption that you need.
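To put those numbers in perspective, the short Python sketch below compares the two key spaces; the attack rate of one billion guesses per second is an assumption chosen purely for illustration:

keys_16_bit = 2 ** 16
keys_56_bit = 2 ** 56
guesses_per_second = 10 ** 9          # assumed attacker speed

print(keys_16_bit)                    # 65536 possible keys
print(keys_56_bit)                    # 72057594037927936 possible keys
print(keys_56_bit / guesses_per_second / 86400)   # about 834 days to try them all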
Changing only a few bits of the plaintext message causes its ciphertext to change
completely, which is known as an avalanche effect. The avalanche effect is a desired
feature of an encryption algorithm, because it allows very similar messages to be sent
over an untrusted medium, with the encrypted (ciphertext) messages being completely
different.
You must carefully consider export and import restrictions when you use encryption
internationally. Some countries do not allow the export of encryption algorithms, or they
allow only the export of those algorithms with shorter keys. Some countries impose
import restrictions on cryptographic algorithms.
Encryption Algorithms and Keys
A key is a required parameter for encryption algorithms. There are two classes of
encryption algorithms, which differ in their use of keys:

• Symmetric encryption algorithm: Uses the same key to encrypt and decrypt data
• Asymmetric encryption algorithm: Uses different keys to encrypt and decrypt data
Symmetric Encryption Algorithms
Symmetric encryption algorithms use the same key for encryption and decryption.
Therefore, the sender and the receiver must share the same secret key before
communicating securely. The security of a symmetric algorithm rests in the secrecy of the
shared key; by obtaining the key, anyone can encrypt and decrypt messages. Symmetric
encryption is often called secret-key encryption. Symmetric encryption is the more
traditional form of cryptography. The typical key-length range of symmetric encryption
algorithms is 40 to 256 bits.

Because symmetric algorithms are usually quite fast, they are often used for wire-speed
encryption in data networks. Symmetric algorithms are based on simple mathematical
operations and can easily be accelerated by hardware.
Key management can be a challenge, because the communicating parties must obtain a
common secret key before any encryption can occur. Therefore, the security of any
cryptographic system depends greatly on the security of the key management methods.
Symmetric algorithms are frequently used for encryption services, with additional key
management algorithms providing secure key exchange. They are used for bulk encryption
when data privacy is required, such as to protect a VPN. Symmetric encryption algorithms
are used for most of the data in VPNs because they are much faster and consume less CPU
than asymmetric algorithms.
Some examples of where the symmetric encryption is used:

• Payment applications: Where the data needs to be protected to prevent identity
theft or fraudulent charges, such as card transactions.
• Validating information: To confirm that the sender of a message is who they claim
to be. For example, if device A receives an encrypted message and can decrypt it to
produce a valid message, this validates that the message was sent by device B: only
a holder of the valid key could have encrypted it, and in symmetric encryption the
same key is used at the receiver to decrypt the message, which validates the sender.
• Random number generation: Encrypted output has strong random characteristics, so
symmetric ciphers are often used as building blocks in random number generators.
Note: Symmetric encryption algorithms are sometimes referred to as private-key
encryption. Examples of symmetric encryption algorithms are Data Encryption Standard
(DES), Triple Data Encryption Standard (3DES), Advanced Encryption Standard (AES),
International Data Encryption Algorithm (IDEA), RC2/4/5/6 (RC stands for Rivest Cipher),
and Blowfish.
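The following sketch shows the shared-key idea in practice. It assumes the third-party Python package cryptography is installed and uses its Fernet recipe (built on AES and HMAC); the sample plaintext is arbitrary:

from cryptography.fernet import Fernet

key = Fernet.generate_key()       # the single shared secret key
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"card number 4111 1111 1111 1111")
plaintext = cipher.decrypt(ciphertext)
print(plaintext)

# Anyone who obtains "key" can both encrypt and decrypt, which is why
# secure distribution of the shared key is the main operational challenge.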
Asymmetric Encryption Algorithms

Asymmetric algorithms use a pair of keys for encryption and decryption. The paired keys
are intimately related and are generated together. Most commonly, an entity with a key
pair will share one of the keys (the public key) and it will keep the other key in complete
secrecy (the private key). The private key cannot, in any reasonable amount of time, be
calculated from the public key. Data that is encrypted with the private key requires the
public key to decrypt. Vice versa, data that is encrypted with the public key requires the
private key to decrypt. Asymmetric encryption is also known as public key encryption.
The typical key length range for asymmetric algorithms is 512 to 4096 bits. You cannot
directly compare the key length of asymmetric and symmetric algorithms, because the
underlying design of the two algorithm families differs greatly.
Asymmetric algorithms are substantially slower than symmetric algorithms. Their design is
based on computational problems, such as factoring extremely large numbers or
computing discrete logarithms of extremely large numbers. Because they lack speed,
asymmetric algorithms are typically used in low-volume cryptographic mechanisms, such
as digital signatures and key exchange. However, the key management of asymmetric
algorithms tends to be simpler than symmetric algorithms, because usually one of the two
encryption or decryption keys can be made public.
Examples of asymmetric cryptographic algorithms include Rivest, Shamir, and Adleman
(RSA), Digital Signature Algorithm (DSA), ElGamal, and elliptic curve algorithms.
Usually asymmetric algorithms, such as RSA and DSA, are used for digital signatures.
For example, a customer sends transaction instructions via an email to a stockbroker, and
the transaction turns out badly for the customer. It is conceivable that the customer could
claim never to have sent the transaction order or that someone forged the email. The
brokerage could protect itself by requiring the use of digital signatures before accepting
instructions via email.
Handwritten signatures have long been used as a proof of authorship of, or at least
agreement with, the contents of a document. Digital signatures can provide the same
functionality as handwritten signatures, and much more.
The idea of encrypting a file with your private key is a step toward digital signatures.
Anyone who decrypts the file with your public key knows that you were the one who
encrypted it. But, since asymmetric encryption is computationally expensive, this is not
optimal. Digital signatures leave the original data unencrypted. It does not require
expensive decryption to simply read the signed documents. Instead, digital signatures
use a hash algorithm to produce a much smaller fingerprint of the original data. This
fingerprint is then encrypted with the signer’s private key. The document and the
signature are delivered together. The digital signature is validated by taking the document
and running it through the hash algorithm to produce its fingerprint. The signature is then
decrypted with the sender’s public key. If the decrypted signature and the computed hash
match, then the document is identical to what was originally signed by the signer.
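A hash-then-sign round trip can be sketched as follows, again assuming the third-party cryptography package; the key size, padding choice, and message are illustrative, not a recommendation for any particular deployment:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Sell 100 shares of XYZ at market price"

# The signer hashes the document and signs the digest with the private key.
signature = private_key.sign(
    document,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The verifier recomputes the hash and checks it against the signature
# using the signer's public key.
try:
    public_key.verify(
        signature,
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature valid: document is authentic and unmodified")
except InvalidSignature:
    print("Signature invalid: document was altered or signer is not genuine")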
31.8 Implementing Threat Defense Technologies
IPsec Security Services
IPsec VPNs provide security services to traffic traversing a relatively less trustworthy
network between two relatively more trusted systems or networks. The less-trusted
network is usually the public internet. But IPsec VPNs can also be used for things like
protecting network management traffic as it crosses an organization intranet.
IPsec provides these essential security functions:

• Confidentiality: IPsec ensures confidentiality by using encryption. Data encryption
prevents third parties from reading the data. Only the IPsec peer can decrypt and
read the encrypted data.
• Data integrity: IPsec ensures that data arrives unchanged at the destination,
meaning that the data has not been manipulated at any point along the
communication path. IPsec ensures data integrity by using hash-based message
authentication.
• Origin authentication: Authentication ensures that the connection is made with
the desired communication partner. IPsec uses Internet Key Exchange (IKE) to
authenticate users and devices that can carry out communication independently.
IKE uses several methods to authenticate the peer system.
• Anti-replay protection: Anti-replay protection verifies that each packet is unique
and is not duplicated. IPsec packets are protected by comparing the sequence
number of the received packets with a sliding window on the destination host or
security gateway. A packet that has a sequence number that comes before the
sliding window is considered either late or a duplicate. Late and duplicate
packets are dropped (a simplified sliding-window check is sketched after this list).
• Key management: Allows for an initial secure exchange of dynamically generated
keys across a nontrusted network and a periodic rekeying process, limiting the
maximum amount of time and data that are protected with any one key.
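The anti-replay sliding window mentioned above can be approximated with a few lines of Python; the 64-packet window size and the set-based bookkeeping are simplifications for illustration and do not mirror any particular IPsec implementation:

WINDOW = 64

class ReplayWindow:
    def __init__(self):
        self.highest = 0          # highest sequence number seen so far
        self.seen = set()         # sequence numbers accepted inside the window

    def accept(self, seq):
        if seq + WINDOW <= self.highest:
            return False          # too old: falls before the window, so drop it
        if seq in self.seen:
            return False          # duplicate: already accepted, so drop it
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # discard state that has slid out of the window
            self.seen = {s for s in self.seen if s + WINDOW > self.highest}
        return True

rw = ReplayWindow()
print(rw.accept(1), rw.accept(2), rw.accept(2), rw.accept(100), rw.accept(30))
# True True False True False  (the duplicate and the too-old packet are dropped)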
IPsec is a framework of open standards that spells out the rules for secure
communications. IPsec relies on existing algorithms to implement encryption,
authentication, and key exchange. The figure below illustrates some of the standard
algorithms that IPsec uses. The framework is a modular design allowing technologies to be
replaced over time. When cryptographic technologies become obsolete, it does not make
the IPsec framework obsolete. Current technologies are swapped to replace the obsolete
technologies, keeping the framework in place. For example, AES is now commonly
implemented for confidentiality in place of the aging DES and 3DES technologies. Similarly,
SHA 2-based algorithms are used for data integrity, in place of the deprecated MD5 and
SHA-1-based algorithms.
IPsec Framework Protocols
There are two main IPsec framework protocols:

• Authentication Header (AH): AH, which is IP protocol 51, is the appropriate
protocol to use when confidentiality is not required or permitted. AH does not
provide data confidentiality (encryption). All text is transported unencrypted. If the
AH protocol is used alone, it provides weak protection. AH does, however, provide
origin authentication, data integrity, and anti-replay protection for IP packets that
are passed between two systems.
• Encapsulating Security Payload (ESP): ESP is a security protocol that provides
origin authentication, data integrity, and anti-replay protection. However, unlike
AH, it also provides confidentiality. ESP, which is IP protocol 50, provides
confidentiality by encrypting IP packets. The IP packet encryption conceals the
data payload and the identities of the ultimate source and destination.
Note: In modern IPsec VPN implementations, AH is not commonly used.
ESP supports various symmetric encryption algorithms. The original data is well protected
by ESP, because the original IP packet is encrypted. As illustrated in the figure for an IPv4
packet, an ESP header is added to the ciphertext, which consists of the encrypted IPv4
packet and ESP trailer. When ESP authentication is also used, the encrypted IPv4 packet
and ESP trailer, as well as the ESP header, are included in the hashing process.

When both authentication and encryption are used, the encryption is performed first.
Authentication is then performed by sending the encrypted information through a hash
algorithm. The hash provides data integrity and data origin authentication. Finally, a new
IPv4 header is prepended to the authenticated payload. The addresses in the new IPv4 header are used to
route the packet. ESP does not attempt to provide data integrity for this new external IP
header.
Note: AH does provide data integrity for the external IP header. Due to this, AH is not
compatible with NAT performed in the transmission path. NAT changes the IP addresses in
the IP header, causing AH data integrity checks to fail.
Performing encryption before authentication facilitates rapid detection and rejection of
replayed or bogus packets by the receiving device. Before decrypting the packet, the
receiver can authenticate inbound packets. By doing this authentication, it can quickly
detect problems and potentially reduce the impact of DoS attacks. ESP can, optionally,
enforce anti-replay protection by requiring that a receiving host sets the replay bit in the
header to indicate that the packet has been seen.
In modern IPsec VPN implementations, the use of ESP is common. Although both
encryption and authentication are optional in ESP, one of them must be used.
ESP can operate in either the transport mode or tunnel mode:

• ESP transport mode: Does not protect the original packet IP header. Only the
original packet payload is protected—the original packet payload and ESP trailer
are encrypted. An ESP header is inserted between the original IP header and the
protected payload. Transport mode can be negotiated directly between two IP
hosts. ESP transport mode can be used for site-to-site VPN if another technology,
such as Generic Routing Encapsulation (GRE) tunneling, is used to provide the
outer IP header.
• ESP tunnel mode: Protects the entire original IP packet, including its IP header. The
original IP packet (and ESP trailer) is encrypted. An ESP header is applied for the
transport layer header, and this is encapsulated in a new packet with a new IP
header. The new IP header specifies the VPN peers as the source and destination
IP addresses. The IP addresses specified in the original IP packet are not visible.
Note: AH can also be implemented in either the tunnel mode or transport mode. The key
distinction between these modes is what is done with the original IP header. Tunnel mode
provides a new IP header. Transport mode maintains the original IP header.
Confidentiality
Choosing an encryption algorithm is one of the most important decisions that a network
security professional makes when building a cryptosystem.
When choosing an algorithm, two main criteria are considered:

• The algorithm must be trusted by the cryptographic community.


• The algorithm must adequately protect against brute-force attacks.
Below are some of the encryption algorithms and key lengths that IPsec can use:

• DES algorithm: DES, developed by IBM, uses a 56-bit key, ensuring high-
performance encryption. DES is a symmetric key cryptosystem.
• 3DES algorithm: The 3DES algorithm is a variant of the 56-bit DES. 3DES operates
in a way that is similar to how DES operates, in that data is broken into 64-bit
blocks. 3DES then processes each block 3 times, each time with an independent
56-bit key. 3DES provides a significant improvement in encryption strength over
56-bit DES. 3DES is a symmetric key cryptosystem.
• AES: The National Institute of Standards and Technology (NIST) adopted AES to
replace the aging DES-based encryption in cryptographic devices. AES provides
stronger security than DES and is computationally more efficient than 3DES. AES
offers three different key lengths: 128-, 192-, and 256-bit keys.
• RSA: RSA is an asymmetrical key cryptosystem. It commonly uses a key length of
1024 bits or larger. IPsec does not use RSA for data encryption. IKE uses RSA
encryption only during the peer authentication phase.
• SEAL: Software-Optimized Encryption Algorithm (SEAL) is a stream cipher that was
developed in 1993 by Phillip Rogaway and Don Coppersmith, and uses a 160-bit
key for encryption.
Note: AES replaced DES and 3DES, because the key length of AES is much stronger than
DES. AES is more efficient and runs faster than DES and 3DES on comparable hardware,
usually by a factor of five when it is compared with DES. Also, AES is more suitable for
high-throughput, low-latency environments, especially if pure software encryption is used.
Symmetric encryption algorithms such as AES require a common shared-secret key to
perform encryption and decryption. You can use email, courier, or overnight express to
send the shared-secret keys to the administrators of the devices. This method is obviously
impractical, and does not guarantee that keys are not intercepted in transit.
Key Management
Public key exchange methods allow shared keys to be dynamically generated between
encrypting and decrypting devices:

• The Diffie-Hellman (DH) key agreement is a public key exchange method. This
method provides a way for two peers to establish a shared secret key, which only
they know, even though they are communicating over an insecure channel.
• Elliptical Curve Diffie-Hellman (ECDH) is a variant of the DH protocol using elliptic
curve cryptography (ECC). It is part of the Suite B standards.
These algorithms are used within IKE to establish session keys. They support different
prime sizes that are identified by different DH or ECDH groups.
DH groups vary in the computational expense that is required for key agreement and the
strength against cryptographic attacks. Larger prime sizes provide stronger security, but
require more computational horsepower to execute:

• DH1: 768-bit
• DH2: 1024-bit
• DH5: 1536-bit
• DH14: 2048-bit
• DH15: 3072-bit
• DH16: 4096-bit
• DH19: 256-bit ECDH
• DH20: 384-bit ECDH
• DH24: 2048-bit (MODP group with 256-bit prime order subgroup)
The following figure illustrates the key exchange process.
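As a toy illustration of the exchange, the Python sketch below uses deliberately tiny numbers; real DH groups use primes that are hundreds or thousands of bits long, as the group list above indicates:

p = 23          # shared prime (public)
g = 5           # shared generator (public)

a = 6           # Alice's private value (kept secret)
b = 15          # Bob's private value (kept secret)

A = pow(g, a, p)   # Alice sends A over the insecure channel
B = pow(g, b, p)   # Bob sends B over the insecure channel

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

print(A, B, shared_alice, shared_bob)   # both sides compute the same secret
assert shared_alice == shared_bob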
Internet Key Exchange
IPsec implements a VPN solution using an encryption process that involves the periodic
changing of encryption keys. IPsec uses the IKE protocol to authenticate a peer computer
and to generate encryption keys. IKE negotiates a security association (SA), which is an
agreement between two peers engaging in an IPsec exchange, and the SA consists of all
the required parameters that are necessary to establish successful communication.
IPsec uses the IKE protocol to provide the following functions:

• Negotiation of SA characteristics
• Automatic key generation
• Automatic key refresh
• Manageable manual configuration
There are two versions of the IKE protocol:

• IKE version 1 (IKEv1)


• IKE version 2 (IKEv2)
IKEv2 was created to overcome some of the IKEv1 limitations.
Data Integrity
VPN data is transported over untrusted networks such as the public internet. Potentially,
this data could be intercepted and read or modified. To guard against modification,
Hashed Message Authentication Codes (HMACs) are used by IPsec.
IPsec uses HMAC as the data integrity algorithm that verifies the integrity of the message.
Hashing algorithms such as SHA-2 are the basis of the protection mechanism of HMAC.
HMACs use existing hash algorithms, but with a significant difference. HMACs add a secret
key as input to the hash function. Only the sender and the receiver know the secret key,
and the output of the hash function now depends on the input data and the secret key.
Therefore, only parties who have access to that secret key can compute the digest of an
HMAC function.
The following figure depicts a keyed hash that is a simplification of the more complex
HMAC algorithm. The HMAC algorithm itself is beyond the scope of this material. HMAC is
defined in RFC 2104. Like a keyed hash, HMAC utilizes a secret key known to the sender
and the receiver.
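A minimal sketch of a keyed hash using Python's standard hmac module follows; the key and message are placeholders, since in IPsec the secret key is actually derived during the IKE exchange:

import hashlib
import hmac

secret_key = b"shared-secret-derived-by-ike"        # placeholder key
message = b"ESP payload to be integrity-protected"  # placeholder message

digest_sender = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the HMAC with the same key and compares the results;
# a third party without the key cannot forge a matching digest.
digest_receiver = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(digest_sender, digest_receiver))   # True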
Origin Authentication
When you conduct business long distance, it is important to know who is at the other end
of the phone, email, or fax. The same is true of VPN networks. The device on the other
end of the VPN tunnel must be authenticated before the communication path is
considered secure.
IPsec uses these methods for peer-authentication:

• Pre-shared keys (PSKs): A secret key value is entered into each peer manually and
is used to authenticate the peer. At each end, the PSK is combined with other
information to form the authentication key.
• RSA signatures: The exchange of digital certificates authenticates the peers. The
local device derives a hash and encrypts it with its private key. The encrypted hash
is attached to the message and is forwarded to the remote end, and it acts like a
signature. At the remote end, the encrypted hash is decrypted using the public key
of the local end. If the decrypted hash matches the recomputed hash, the
signature is genuine.
• RSA encrypted nonces: A nonce is a random number that is generated by the peer.
RSA-encrypted nonces use RSA to encrypt the nonce value and other values. This
method requires that each peer is aware of the public key of the other peer before
negotiation starts. For this reason, public keys must be manually copied to each
peer as part of the configuration process. This method is the least used of the
three authentication methods.
• ECDSA signatures: Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic
curve analog of the DSA signature method. ECDSA signatures are smaller than RSA
signatures of similar cryptographic strength. On many platforms, ECDSA operations
can be computed more quickly than similar-strength RSA operations. These
advantages of signature size, bandwidth, and computational efficiency might make
ECDSA an attractive choice for many IKE and IKEv2 implementations.

31.9 Implementing Threat Defense Technologies


Secure Sockets Layer and Transport Layer Security
To prevent unauthorized access to data, it is necessary to secure the transmission of
information over a public network through encryption. For example, if you go to your
online bank website, then not only do you want to prevent an attacker from seeing your
usernames, passwords, and personal information, but you also do not want an attacker to
be able to alter the packets in transit during a bank transaction.
This could be achieved by using IPsec to encrypt the data, to ensure that data is not
altered during transit, and to authenticate the bank server you are connected to.
Unfortunately, not every device has an IPsec client software installed. Therefore, other
cryptographic protocols are used to provide confidentiality, integrity, and authentication
services.
One such protocol is TLS, which is used to provide secure communication on the internet
for things such as web browsing, email, instant messaging, online banking, and other data
transfers.
Note: TLS is natively supported in modern browsers, therefore there is no need for users
to install any additional software on their device.
The SSL protocol is the predecessor of TLS, so the terms SSL and TLS are often used
interchangeably by IT professionals. Note that modern systems implement TLS and that
SSL itself is no longer used. To use TLS in a browser, the user has to connect to a
TLS-enabled server, which means that the web server itself must support TLS by using
HTTPS instead of HTTP.
For example, if you want to visit the cisco.com website, then even if you type
http://cisco.com, the site automatically redirects you to HTTPS. The user does not
need to configure any settings, because everything happens in the background and
secure communication is negotiated between the browser and the web server.
Note: Some companies might prevent users from accessing websites that only use HTTP, which
is considered insecure because the data is sent in plaintext.
Cryptographically, SSL and TLS rely on public key infrastructure (PKI) and digital certificates
for authentication. In this case, the web server sends a copy of its digital certificate to
the browser, which in turn authenticates the web server by checking the digital signature of
the Certification Authority (CA) that is on the certificate.
Two very important terms must be defined when talking about a PKI:

• CA: The trusted third party that signs the public keys of entities in a PKI-based
system.
• Certificate: A document that binds together the name of an entity and its public key,
and that has been signed by the CA.
The most widely used application-layer protocol that uses TLS is HTTPS, but other well-
known protocols also use it. Examples are Secure File Transfer Protocol (SFTP), Post Office
Protocol version 3 Secure (POP3S), Secure LDAP, wireless security (Extensible
Authentication Protocol-Transport Layer Security [EAP-TLS]), and other application-layer
protocols. It is important to note that even though the name TLS contains "transport layer
security," both SSL and TLS are considered to operate at the session layer and higher in the
OSI model. In that sense, these protocols encrypt and authenticate from the session layer up,
including the presentation and application layers.
The SSL and TLS protocols support the use of various cryptographic algorithms, or ciphers,
for use in operations such as authenticating the server and client to each other,
transmitting certificates, and establishing session keys. Symmetric algorithms are used for
bulk encryption; asymmetric algorithms are used for authentication and the exchange of
keys, and hashing is used as part of the authentication process.
The following figure depicts the steps that are taken in the negotiation of a new TLS
connection between a web browser and a web server. The figure illustrates the
cryptographic architecture of SSL and TLS, based on the negotiation process of the
protocol.
Cisco AnyConnect SSL VPN
TLS is not only used for communication on the internet, but is also used for remote-access
VPNs to secure data in transit between remote workers and the internal servers of the
company.
Cisco AnyConnect is a VPN remote-access client providing a secure endpoint access
solution. It delivers enforcement that is context-aware, comprehensive, and seamless. The
Cisco AnyConnect client uses TLS and Datagram TLS (DTLS). DTLS is the preferred protocol,
but if, for some reason, the Cisco AnyConnect client cannot negotiate DTLS, there is a
fallback to TLS.
Note: DTLS is an alternative VPN transport protocol to SSL or TLS. DTLS allows datagram-
based applications to communicate in a way that is designed to prevent eavesdropping,
tampering, or message forgery. The DTLS protocol is based on the stream-oriented TLS
protocol and is intended to provide similar security guarantees.
A basic Cisco AnyConnect SSL VPN provides users with flexible, client-based access to
sensitive resources over a remote-access VPN gateway, which is implemented on the
Cisco ASA. In a basic Cisco AnyConnect remote-access SSL VPN solution, the Cisco ASA
authenticates the user against its local user database, which is based on a username and
password. The client authenticates the Cisco ASA with a certificate-based authentication
method. In other words, the basic Cisco AnyConnect solution uses bidirectional
authentication.
After authentication, the Cisco ASA applies a set of authorization and accounting rules to
the user session. When the Cisco ASA has established an acceptable VPN environment
with the remote user, the remote user can forward IP traffic into the SSL/TLS tunnel. The
Cisco AnyConnect client creates a virtual network interface to provide this functionality.
This virtual adapter requires an IP address, and the most basic method to assign an IP
address to the adapter is to create a local pool of IP addresses on the Cisco ASA. The client
can use any application to access any resource behind the Cisco ASA VPN gateway, subject
to access rules and the split tunneling policy that are applied to the VPN session.
There are two types of tunneling policies for a VPN session:

• Full-tunneling: All traffic generated by the user is encrypted and sent to the Cisco
ASA, where it is routed. This occurs for all traffic, even when the user wants to
access resources on the internet. This type of tunneling policy is especially useful
when the endpoint is connected to an unsecured public wireless network.
• Split-tunneling: This approach tunnels only the traffic destined for the internal
resources of the organization. All other traffic uses the
client’s own internet connection for connectivity.
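As an illustration only (not taken from this course), the following sketch shows how such a split-tunneling policy might be expressed on a Cisco ASA; the ACL name, group-policy name, and internal prefix 10.0.0.0/8 are placeholder assumptions. Omitting the split-tunnel settings (or using split-tunnel-policy tunnelall) results in full tunneling.
asa(config)# access-list SPLIT-TUNNEL standard permit 10.0.0.0 255.0.0.0
asa(config)# group-policy REMOTE-USERS internal
asa(config)# group-policy REMOTE-USERS attributes
asa(config-group-policy)# split-tunnel-policy tunnelspecified
asa(config-group-policy)# split-tunnel-network-list value SPLIT-TUNNEL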

31.10 Implementing Threat Defense Technologies


Wireless Security Protocols
Unlike your LAN, a WLAN does not have exact boundaries. Vulnerabilities exist both inside
and outside your buildings. Because of the popularity of Wi-Fi and the low cost of
equipment, hackers can easily obtain devices and software to exploit a Wi-Fi network. Like
the LAN, the WLAN needs security.
The two primary facilities for securing the WLAN are authentication and encryption.
Current security standards require that both authentication and encryption are used to
protect the client devices from having their data intercepted.
Authentication
Authentication is a process of discovering whether something is exactly what it appears to
be. In wireless networks, authentication is used to prove the identity of the person or
device that is trying to gain access.
Proving an identity is difficult because it is rarely possible to physically identify the person
that is trying to connect. Approaches for human authentication rely on at least one of
these methods:

• Something that you know (such as a password): This type of authentication is the
most common for users. Unfortunately, something that users know can easily
become something that they forget. And if users write down the information to
help them remember it, other people might find it.
• Something that you have (such as a smart card): This method offers no risk of
forgetting information, but users must have physical possession of the item or they
cannot be authenticated. This object can be lost or stolen, after which it might be
used by an attacker.
• Something that you are (such as a fingerprint): This method is based on
something that is specific to the person who is being authenticated. Unfortunately,
biometric sensors imply physical contact, which is not a reasonable user-
authentication method in wireless networks. (However, this technique can be used
to authenticate devices.)
These authentication methods apply to human or user-based authentication. In secured
environments, authenticating the devices that are used to access the network is also
common. A device can be authenticated by using a signature that is based on its specific
hardware characteristics.
The limitation of device authentication is that it does not authenticate the person who
makes the connection. The same authentication process occurs whether the device is
being used by a valid user or by an attacker. For this reason, storing personal passwords
on laptop or desktop computers is considered dangerous, although many systems allow
this possibility. Unless authentication requires the user to enter information, the device,
not the user, is being authenticated.
Encryption
In wireless networks, privacy means that although an eavesdropper might receive the
wireless signal, this signal cannot be read and understood. Keeping the data private is the
role of encryption.
Several methods of encryption can be used:

• WEP: Wired Equivalent Privacy (WEP) used a shared key (both sides have the key)
and was a very weak form of encryption (no longer used).
• TKIP: Temporal Key Integrity Protocol (TKIP) wraps a suite of algorithms around WEP
to enhance its security and was used with Wi-Fi Protected Access (WPA); like WEP, it
is no longer used.
• AES: Advanced Encryption Standard (AES) allows for longer keys and is used for most
WLAN security today.
Key Management
Whether keys are used to authenticate users or to encrypt data, they are the secret values
upon which wireless networks rely.
Common Keys
A key can be common to several users. For wireless networks, the key can be stored on
the access point (AP) and shared among users who are allowed to access the network via
this AP.
A common key can be used in three ways:

• For authentication only: Limits access to the network to only users who have the
key, but their subsequent communication is sent unencrypted.
• For encryption only: Any user can associate to the WLAN, but only users who have
a valid key can send and receive traffic to and from other users of the AP.
• For authentication and encryption: The key is used for both authentication and
encryption.
The advantage of this common key system is its simplicity. Everyone uses the same
algorithm and the same key. The risk is that anyone who has the key can capture and read
the frames of any other user from the same AP. A hacker needs only to compromise one
device to be able to read the traffic of all the clients in the cell.
Individual Keys
To provide more security, an individual key can be defined for each user. This approach
can be accomplished in two ways:

• The key is individual from the beginning. This method implies that the
infrastructure must store and manage individual user keys, typically by using a
central authentication server.
• The key is common at first, but it is used to create a second key that is unique to
each user. This system has many advantages. A single key is stored on the AP, and
then individual keys are created in real time and are valid only during the user
session.
Security Standards
When WEP was found to be weak and easily breakable, both the IEEE 802.11 committee
and the Wi-Fi Alliance worked to replace it. Two generations of solutions emerged: WPA
and IEEE 802.11i Wi-Fi Protected Access 2 (WPA2). These solutions offer an authentication
and encryption framework.
Currently, multiple wireless security standards exist:

• WPA: Defines two modes of wireless protected access: WPA-Personal mode, which uses
PSKs (WPA-PSK), and WPA-Enterprise mode, which uses IEEE 802.1X.
• WPA2: WPA2 is the current implementation of the 802.11i security standard and
deprecates the use of WEP, WPA, and TKIP. WPA2 supports either 802.1X or PSK
authentication.
• WPA3: Wi-Fi Protected Access 3 (WPA3) was announced as a replacement of
WPA2 and is the next generation of wireless security standard that provides more
resiliency to network attacks.
WPA, WPA2, and WPA3 support two modes of wireless protected access: WPA-Enterprise and
WPA-Personal.
Note: WPA3 supports two additional features, called Open Networks and Internet of Things
(IoT) secure onboarding.
Wi-Fi Protected Access
Both WPA-Personal and WPA-Enterprise modes use TKIP for encryption, but they use
different authentication features for WLAN clients.

• WPA-Personal: Uses a PSK to authenticate WLAN clients. It is designed for home and
small office networks, because it does not require an external authentication server.
• WPA-Enterprise: Adds 802.1X and Extensible Authentication Protocol (EAP)-based
authentication. It is designed for enterprise networks and requires an external
authentication server. It requires a more complicated setup, but provides additional
security.
The initial authentication process of WLAN clients is either performed with a pre-shared
key or after an EAP exchange through 802.1X. This process ensures that the WLAN client is
authenticated with the AP. After the authentication process, a shared secret key, which is
called Pairwise Master Key (PMK), is generated. The PMK is derived from the password
that is put through the hash function. In WPA-Personal mode, the PSK is the PMK. On the
other hand, if WPA-Enterprise is used, then the PMK is derived during EAP authentication.
The four-way handshake was designed so that both the AP and WLAN client can prove to
each other that they know the PMK, without ever sharing the key. The access point and
WLAN client encrypt messages with the PMK and send them to each other. Therefore,
those messages can only be decrypted by using the PMK that they already have in
common. If the decryption is successful, then this proves that they know the PMK.
The four-way handshake is also used to derive two more keys, called the Pairwise Transient
Key (PTK) and the Group Transient Key (GTK). Those two keys are generated using various
attributes, including the PMK. The PTK is a unique key, which is used for unicast frames,
and the GTK is a common key, which is used for broadcast and multicast frames.
Another important thing to note is that in WPA-Personal, all WLAN clients encrypt their
data with the same PMK. This means if someone gets a hold of the PMK, then it can be
used to decrypt all data encrypted with that key. On the other hand, in WPA-Enterprise,
each WLAN client has a unique PMK, therefore if a single PMK is compromised, then only
that client session can be decrypted. In conclusion, WPA-Enterprise is more secure than
WPA-Personal.
Note: This four-way handshake has been shown to be vulnerable to KRACK (key
reinstallation attack). It is a severe replay attack on the WPA protocol. The weakness is
exhibited in the Wi-Fi standard itself, and not due to errors in the implementation.
Wi-Fi Protected Access 2
WPA2 is the current implementation of the 802.11i security standard and deprecates the
use of WEP, WPA, and TKIP. WPA2 supports either 802.1X or PSK authentication:

• PSK is for home or small office and is also known as WPA2-Personal.


• 802.1X is used by enterprise-class networks and is often referred to as WPA2-
Enterprise.
WPA2-Personal uses the same shared key as in WPA, but it uses AES-Counter with CBC-
MAC Protocol (AES-CCMP) to encrypt and protect frames. Strong keys should be used
because there are tools available that can successfully perform a dictionary attack on
WPA2-Personal.
WPA2-Enterprise uses the same 802.1X or EAP authentication as in WPA, but it uses the
AES-CCMP to encrypt and protect frames.
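As a hedged sketch only (assuming a Cisco Catalyst 9800 Series WLC running Cisco IOS XE; the WLAN profile name, ID, SSID, and passphrase are placeholder assumptions), a WPA2-Personal WLAN could be defined as follows:
WLC(config)# wlan EMPLOYEE-WLAN 1 Employee-SSID
WLC(config-wlan)# no security wpa akm dot1x
WLC(config-wlan)# security wpa akm psk
WLC(config-wlan)# security wpa psk set-key ascii 0 Str0ngPassphrase!
WLC(config-wlan)# no shutdown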
Wi-Fi Protected Access 3
WPA was the stopgap security measure that replaced WEP, but it was then replaced
(deprecated) by WPA2. However, the "KRACK attack" blog, published in October 2017, put
a spotlight on Wi-Fi security that highlighted a need for the industry to move to a new
generation of authentication and encryption mechanisms. This need brought forward
enhancements to the existing WPA2 features, creating the next iteration, named WPA3.
WPA3 covers four different features, with four different contexts:

• WPA3-Personal
• WPA3-Enterprise
• Open Networks
• IoT secure onboarding
WPA3 will be backward compatible with WPA2, meaning your WPA3 devices will be able to run
WPA2. However, it is expected that it will take a few years for vendors to fully transition
to WPA3-only modes, so WPA2 transmission capabilities may still be in use in the near
future. Cisco has been instrumental in the development of WPA3, and as this transition
starts to happen, Cisco will roll out WPA3 support in its wireless LAN controllers (WLCs)
and APs, allowing early adopters to start enjoying this added level of security.
WPA3-Personal
WPA2-Personal uses passwords, called PSKs. Attackers can eavesdrop on a valid WPA2 initial
"handshake" and attempt to use brute force to deduce the PSK. With the PSK, the attacker can
connect to the network, but can also decrypt previously captured traffic. The likelihood of
succeeding in such an attack depends on the password complexity: dictionary words or other
simple passwords are vulnerable.
WPA3-Personal utilizes Simultaneous Authentication of Equals (SAE), defined in the IEEE
802.11-2016 standard. With SAE, the experience for the user is unchanged (create a
password and use it for WPA3-Personal). However, WPA3 adds a step to the "handshake"
that makes brute force attacks ineffective. The passphrase is never exposed, making it
impossible for an attacker to find the passphrase through brute force dictionary attacks.
WPA3 also makes management frames more robust with the mandatory addition of
Protected Management Frames (PMF) that adds an extra layer of protection from
deauthentication and disassociation attacks.
WPA3-Enterprise
Enterprise Wi-Fi commonly uses individual user authentication through 802.1X/EAP.
Within such networks, PMF is also mandatory with WPA3. WPA3 also introduces a 192-bit
cryptographic security suite. This level of security provides consistent cryptography and
eliminates the "mixing and matching of security protocols" that are defined in the 802.11
Standards. This security suite is aligned with the recommendations from the Commercial
National Security Algorithm (CNSA) Suite, commonly in place in high security Wi-Fi
networks in government, defense, finance, and industrial verticals.
Open Networks
In public spaces, Wi-Fi networks are often unprotected, with no encryption and no
authentication, or simply a web-based onboarding page. As a result, Wi-Fi traffic is visible
to any eavesdropper. The upgrade to WPA3 Open Networks includes an extra mechanism
for public Wi-Fi, Opportunistic Wireless Encryption (OWE). With this mechanism, the end
user onboarding experience is unchanged and the Wi-Fi communication is automatically
encrypted.
IoT Secure Onboarding—DPP
Device Provisioning Protocol (DPP) is used for provisioning of IoT devices, making
onboarding of such devices easier. DPP allows an IoT device to be provisioned with the
Service Set Identifier (SSID) name and secure credentials through an out-of-band
connection. DPP is based on quick response (QR) codes and, in the future, Bluetooth,
near-field communication (NFC), or other connection methods.
32.1 Securing Administrative Access
Introduction
Securing the network infrastructure requires securing the management access to these
infrastructure devices. If infrastructure device access is compromised, the security and
management of the entire network can be compromised. Consequently, it is critical to
establish the appropriate controls to prevent unauthorized access to infrastructure
devices.
Network infrastructure devices often provide a range of different access mechanisms,
including console and asynchronous connections, as well as remote access based on
protocols such as Telnet, HTTP, and Secure Shell (SSH). Some mechanisms are typically
enabled by default with minimal security associated with them. For example, Cisco IOS
Software-based platforms are shipped with console and modem access that is enabled by
default. For this reason, each infrastructure device should be carefully reviewed and
configured to ensure that only supported access mechanisms are enabled and that they
are properly secured.

As a networking engineer, you will need to be able to secure administrative access to the
networking devices in enterprise environments, which will include important tasks such as
the following:

• Providing secure access to the privileged level of the networking devices.


• Configuring the enable password and the enable secret password on a device.
• Providing secure remote access to the devices, including enabling SSH on a Cisco
switch or router.
• Enabling protection on the vty lines with a standard, numbered access control list
(ACL).
32.2 Securing Administrative Access
Network Device Security Overview
Many forms of security threats have emerged because of the rapid growth of the internet.
Viruses, Trojan horse attacks, malicious hackers, and even the employees of an
organization are potential security hazards to corporate networks. Such threats have the
potential to steal and destroy sensitive corporate data, tie up valuable resources, and
inflict major damage due to network downtime, which might lead to a financial crisis
within the company. Security breaches are also encountered more frequently in home or
private networks. Everyone has a reason to be concerned.
Common threats to network device security, and mitigation strategies, can be summarized
as follows:

• Remote access threats: Unauthorized remote access is a threat when security is weak
in the remote access configuration. Mitigation techniques for this type of threat
include configuring strong authentication and encryption for remote access policy and
rules, configuration of login banners, use of ACLs, and virtual private network (VPN)
access.
• Local access and physical threats: These threats include physical damage to
network device hardware, password recovery that is allowed by weak physical
security policies, and device theft. Mitigation techniques for this type of threat
include locking the wiring closet and allowing access only to authorized personnel.
It also includes blocking physical access through a dropped ceiling, raised floor,
window, duct work, or other possible points of entry. Use electronic access control
and log all entry attempts. Monitor facilities with security cameras.
• Environmental threats: Extreme temperature (heat or cold) or humidity extremes
(too wet or too dry) can present a threat. Mitigation techniques for this type of
threat include creating the proper operating environment through temperature
control, humidity control, positive air flow, remote environmental alarms, and
recording and monitoring.
• Electrical threats: Voltage spikes, insufficient supply voltage (brownouts),
unconditioned power (noise), and total power loss are potential electrical threats.
Mitigation techniques for this type of threat include limiting potential electrical
supply problems by installing uninterruptible power supply (UPS) systems and
generator sets, following a preventative maintenance plan, installing redundant
power supplies, and using remote alarms and monitoring.
• Maintenance threats: These threats include improper handling of important
electronic components, lack of critical spare parts, poor cabling, and inadequate
labeling. Mitigation techniques for this type of threat include using neat cable runs,
labeling critical cables and components, stocking critical spares, and controlling
access to console ports.
Remote and local device access risks can also be mitigated with correct password policies.
Even if potential intruders gain access to the device’s network or physical access to it,
there is still a password needed before its configuration can be viewed or altered.
Password attacks are common. The three main methods of obtaining user passwords are
as follows:

• Guessing: The attacker enters the passwords either manually or using a tool to
automate the process.
• Brute force: Computer programs called "password crackers" perform the attack by
systematically trying all possible passwords until one succeeds.
• Dictionary attacks: This method is similar to brute force, but instead of random
character sequences it uses word lists containing millions of words as password
candidates.
A password attack can be either an online attack or an offline attack. In an online attack,
an attacker makes repeated attempts to log in. The activity is visible to the authentication
system, so the system can automatically lock the account after too many bad guesses.
Account lockout disables the account and makes it unavailable for further attacks during
the lockout period. The lockout period and the number of allowed login attempts are
configurable by a system administrator.
Offline attacks are far more dangerous. In an online attack, the password has the
protection of the system in which it is stored, but there is no such protection in offline
attacks. In an offline attack, the attacker captures the password hash or the encrypted
form of the password. The attacker can then make countless attempts to crack the
password without being noticed.
Longer and more complex passwords are more time-consuming for attackers to crack.
Specifying a minimum length of a password and forcing an enlarged character set (upper
case, lower case, numeric, and special characters) can have an enormous influence on the
feasibility of brute force attacks. However, if users attempt to meet the enlarged
character set requirements by making simple adjustments, such as capitalizing the first
letter and appending a number and an exclamation point (changing, for example, unicorn
to Unicorn1!), little is gained against a dictionary attack using some simple transforms.
Besides the password creation, password policy consists of password management, which
includes storage, protection, and password changes. In an enterprise environment,
password management should follow these guidelines:

• Storing, transmitting, or copying passwords in cleartext should not be possible.


• Passwords should be stored in a centralized database, where each user uses their own
credentials to access only the passwords they are authorized to access.
• Access to passwords should be audited, and a log should be kept with a timestamp, the
user, and which password was accessed.
• Creating, deleting, and editing passwords should follow the company’s security
guidelines.
• Passwords should be changed regularly, depending on the password complexity
and company security policy, and old passwords should never be reused.
In cases where a device is a high security risk, the combination of a username and a
password to access the device might not be enough. Additional or alternative security
measures can be introduced:

• Multi-factor authentication: In addition to the username and password, at least one
extra step is required for authentication. The second factor can take different forms,
such as a push notification from a server (used in web and mobile applications), or a
security token from a hardware device, a piece of software, or a text message.
One-time password (OTP), where a security value is used only once, is one of the most
commonly used approaches for multi-factor authentication. An example of this is when a
text message or email with the required additional information is sent to your mobile
phone.
• Digital certificate: A document that binds together the name of an entity and its
public key, and that has been signed by the certificate authority. The certificate
ensures that the holder is really who they claim to be, and this information can be
verified through the certificate authority. Certificates are commonly used on
websites, where, on the initial connection with the server, your computer verifies the
server’s certificate against the certificate authority in order to trust the server.
• Biometrics: This technology is widely used in phones and personal computers. It
offers ease of use and relatively high security compared to entering PINs or
passwords, and because it is linked to an individual person, it is hard to compromise.
Biometric technologies include fingerprint, iris, voice, face, heartbeat, and other
types of recognition. Many of them can be found in mobile devices, smartphones, and
even in home door access control systems today. To increase reliability and security,
systems might combine these technologies with traditional username and password
credentials to authenticate users.
32.3 Securing Administrative Access
Securing Access to Privileged EXEC Mode
You can secure a router or a switch by using passwords to restrict access. Using passwords
and assigning privilege levels is a way to provide terminal access control in a network. It is
a form of management plane hardening. You can establish passwords on individual lines, such
as the console, and for the privileged EXEC mode.
Note: Keep in mind that passwords are always case-sensitive.
Configure the enable secret password and configure the enable password:
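A minimal sketch, using placeholder password values:
Switch(config)# enable secret Cisco123Secret
Switch(config)# enable password Cisco123Pass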

Verify the configured passwords:
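For example, you can inspect the running configuration (illustrative output only; the hash string is shown as a placeholder):
Switch# show running-config | include enable
enable secret 5 $1$<hash>
enable password Cisco123Pass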

Note: The passwords shown in the example are for instructional purposes only. Passwords
that are used in an actual implementation should meet the requirements of strong
passwords.
The enable password and enable secret global configuration commands restrict access to the
privileged EXEC mode.
Note: A configured enable secret always takes precedence over a configured enable
password. It is recommended to use the enable secret command instead of the enable
password command.
The enable secret command on older devices uses Message Digest 5 (MD5) hashing by default.
The number 5 in the command in the configuration indicates that an MD5-type hash was used to
protect the password.
Note: MD5 has been deprecated due to the existence of predictable collisions. Latest
implementations use Secure Hashing Algorithm 2 (SHA-2) and its derived successors.
Encrypt plaintext passwords.
You can also add a further layer of security to any plaintext passwords in your
configuration, which is particularly useful when the configuration is viewed, or when it is
stored elsewhere, such as on a TFTP server. To enable encryption when plaintext
passwords are viewed, enter the service password-encryption command in the global
configuration mode. Passwords that are already configured, or set after you configure the
service password-encryption command, will no longer appear in plaintext when you view
the configuration. However, note that service password encryption uses type-7
obfuscation, which is not very secure. There are several tools and web pages available that
convert a type-7 protected password into a plaintext string.
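A minimal sketch of enabling this protection:
Switch(config)# service password-encryption
! Existing and future plaintext passwords now appear as type-7 strings in the configuration.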

32.4 Securing Administrative Access


Securing Console Access
Console access to a Cisco IOS device does not have access protection set by default. To
secure access to the console of a device, you first need to configure it. By securing the
console line, the device requires that access credentials (a password, or a username and
password) be entered before any further commands are allowed.
One way to secure console access is to use the line console 0 command followed by the
password and login subcommands to require the login and establish a login password on
a console terminal. By default, no login is required on the console line.
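A minimal sketch, using a placeholder password:
Switch(config)# line console 0
Switch(config-line)# password ConsolePass123
Switch(config-line)# login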
Since the password is in plaintext, use the service password-encryption command.
Remember, the password is protected with the type-7 encryption, which is actually only
an obfuscation.
Note: Enter the service password-encryption command in the global configuration mode
to encrypt the passwords, including the console password. Although this encryption is
weak and can be easily decrypted, it is still better than a plaintext password. At least you
are protected against exposing the password to casual observers.
Alternatively, you can use the username username password password command in the
global configuration mode to create a user with the corresponding password. Next, use
the line console 0 command followed by the login local subcommand to require the login
on the console terminal. The local keyword indicates that the required credentials are in
the local list of usernames and passwords. The next time that you connect to the console,
a username and password will be required. In the configuration, the password is not
encrypted. To encrypt the password, use the service password-encryption command, but,
as mentioned, it uses a weak algorithm. For a higher security level, use the username
username secret password command.
Configure a password in the username command and require it for access to the console.
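A minimal sketch, using placeholder credentials:
Switch(config)# username admin password AdminPass123
Switch(config)# line console 0
Switch(config-line)# login local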

Configure the secret password in the username command and require it for access to the
console.
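A minimal sketch, again with placeholder credentials:
Switch(config)# username admin secret AdminSecret123
Switch(config)# line console 0
Switch(config-line)# login local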

The secret is entered in plaintext and by default is encrypted with the SHA256 algorithm,
which is indicated by the number 4 before the ciphertext when displaying the username
command configuration.
Note: Remember to always configure a password when using the login command or a
username and password when using the login local command, before closing the console
session. Entering only the login or login local command in the configuration of the console
line will result in the console terminal being inaccessible if other methods of accessing the
device were not configured.
EXEC timeout configuration
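A minimal sketch that matches the 5-minute timeout described below:
Switch(config)# line console 0
Switch(config-line)# exec-timeout 5 0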

The exec-timeout minutes [seconds] command prevents users from remaining connected
to a line when the line is idle. In the example, when no user input is detected on the
console for 5 minutes, the user that is connected to the console port is automatically
disconnected. Using the exec-timeout 0 0 command disables the timeout. This should not
be used in a production environment because it is not a secure practice.

32.5 Securing Administrative Access


Securing Remote Access
You can secure the virtual terminal lines on Cisco devices in the same way that the console
line is secured, using a password, or a username and password. The EXEC timeout is also
configured the same way. By default, Telnet sessions are allowed if a password is
configured.
Virtual terminal password configuration:
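A minimal sketch, using a placeholder password:
Switch(config)# line vty 0 15
Switch(config-line)# password VtyPass123
Switch(config-line)# login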

EXEC timeout:
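A minimal sketch that matches the 5-minute timeout described below:
Switch(config)# line vty 0 15
Switch(config-line)# exec-timeout 5 0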

The line vty 0 15 command, followed by the login and password subcommands, requires
login and establishes a login password on incoming Telnet sessions.
The exec-timeout command prevents users from remaining connected to a vty line when
the line is idle. In the example, when no user input is detected on a vty line for 5 minutes,
the vty session is automatically disconnected.
You can use the login local command to require a username and password as vty line
credentials, the same as you can for the console line. The username and password or
secret password are specified with the username global configuration command.
To configure the secret password in the username command and require it for access to
the vty lines:
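A minimal sketch, using placeholder credentials:
Switch(config)# username admin secret AdminSecret123
Switch(config)# line vty 0 15
Switch(config-line)# login local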

However, devices such as switches and routers are usually accessed via SSH, which needs
to be configured on the device first.
SSH configuration:
To configure SSH on a Cisco switch or router, you need to complete these steps:
1. Use the hostname command to configure the hostname of the device so that it is
not Switch (on a Cisco switch) or Router (on a Cisco router).
2. Configure the Domain Name System (DNS) domain with the ip domain-name
command. The domain name is required to be able to generate the RSA keys.
3. Generate RSA keys that will be used for authentication. Use the crypto key
generate rsa command; you will also need to configure the modulus that defines
the key length.
4. Configure the user credentials that the user will use for authentication, using the
username username secret password command.
5. Specify the login local command for vty lines, so that it will use locally defined
credentials for authentication.
6. By default, Telnet is allowed. To limit access to the device to users that use SSH
and to block Telnet, use the transport input ssh command on the vty lines. If you want
to support login banners and enhanced security encryption algorithms, force SSH
version 2 (SSHv2) on your device with the ip ssh version 2 command in global configuration
mode.
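Putting the steps together, a minimal sketch might look as follows; the hostname, domain name, credentials, and key modulus are placeholder assumptions:
Router(config)# hostname R1
R1(config)# ip domain-name example.com
R1(config)# crypto key generate rsa modulus 2048
R1(config)# username admin secret AdminSecret123
R1(config)# ip ssh version 2
R1(config)# line vty 0 15
R1(config-line)# login local
R1(config-line)# transport input ssh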
Note: RSA is one of the most common asymmetric algorithms with variable key length,
usually from 1024 to 4096 bits. Smaller keys require less computational overhead to use,
while larger keys provide stronger security. The RSA algorithm is based on the fact that each
entity has two keys, a public key and a private key. The public key can be published and
given away, but the private key must be kept secret and one cannot be determined from
the other. What one of the keys encrypts, the other key decrypts, and vice versa. SSH uses
the RSA algorithm to securely exchange the symmetric keys used during the session for
the bulk data encryption in real time.
To display the version and configuration data for SSH on the device that you configured as
an SSH server, use the show ip ssh command. In the example, SSHv2 is enabled.
To check the SSH connection to the device, use the show ssh command.
Verify that SSH is enabled.
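Illustrative output only; the exact fields vary by platform and software version:
R1# show ip ssh
SSH Enabled - version 2.0
Authentication timeout: 120 secs; Authentication retries: 3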

Check the SSH connection to the device.
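Illustrative output only, showing a single active SSHv2 session; the column layout varies by software version:
R1# show ssh
Connection Version Mode Encryption  Hmac       State            Username
0          2.0     IN   aes256-cbc  hmac-sha1  Session started  admin
0          2.0     OUT  aes256-cbc  hmac-sha1  Session started  admin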

SSH and Telnet provide remote console access, but unlike Telnet, SSH is designed to
provide privacy, data integrity, and origin authentication. SSH version 1 (SSHv1)
introduced better cryptographic security features compared with Telnet. After its
introduction, a vulnerability was found in the implementation of SSHv1. Therefore, a
second version with additional security features, SSHv2, was introduced and adopted.
SSHv1 is legacy and obsolete. The key exchange methodology that is used by SSHv2 is more
complex, using Diffie-Hellman. For simplicity, the connection process of SSHv1 is presented
here.
SSHv1 uses asymmetric encryption to facilitate symmetric key exchange. Computationally
expensive asymmetric encryption is only required for a small step in the negotiation
process. After key exchange, a much more computationally efficient symmetric encryption
is used for bulk data encryption between the client and server.
The connection process used by SSHv1 is as follows:

• The client connects to the server and the server presents the client with its public
key.
• The client and server negotiate the security transforms. The two sides agree to a
mutually supported symmetric encryption algorithm. This negotiation occurs in the
clear. A party that intercepts the communication will be aware of the encryption
algorithm that is agreed upon.
• The client constructs a session key of the appropriate length to support the
agreed-upon encryption algorithm. The client encrypts the session key with the
server public key. Only the server has the appropriate private key that can decrypt
the session key.
• The client sends the encrypted session key to the server. The server decrypts the
session key using its private key. At this point, both the client and the server have
the shared session key. That key is not available to any other system. From this
point on, the session between the client and server is encrypted using a symmetric
encryption algorithm.
• With privacy in place, user authentication ensues. The user’s credentials and all
other data are protected.
Not only does the use of asymmetric encryption facilitate symmetric key exchange, it also
facilitates peer authentication. If the client is aware of the server’s public key, it would
recognize if it connected to a nonauthentic system when the nonauthentic system
provided a different public key. The nonauthentic system cannot provide the real server’s
public key because it does not have the corresponding private key. While the ability to
provide peer authentication is certainly a step in the right direction, the responsibility is
generally put on the user to have prior knowledge of the server’s public key. When the
SSH client software connects to a new server for the first time, it will generally display the
server’s public key (or a hash of the server’s public key) to the user. The client software
will only continue if the user authorizes the server’s public key.

32.6 Securing Administrative Access


Configuring the Login Banner
You can define a customized login banner to be displayed before the username and
password login prompts.
Configure the login banner.
Switch(config)# banner login "Access for authorized users only. Please enter your
username and password."
A user connecting to the device sees the following message:
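For example (illustrative; the exact prompts depend on the line and login configuration):
Access for authorized users only. Please enter your username and password.

User Access Verification
Username: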

To configure a login banner, use the banner login command in global configuration mode.
Enclose the banner text in quotation marks or use a delimiter that is different from any
character appearing in the banner string.
Note: Use caution when you create the text that is used in the login banner. Words such
as "welcome" may imply that access is not restricted and may allow hackers some legal
defense of their actions.
To define and enable a message-of-the-day (MOTD) banner, use the banner motd
command in global configuration mode.
This MOTD banner is displayed to all terminals that are connected and is useful for
sending messages that affect all users (such as impending system shutdowns).
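A minimal sketch, using the # character as the delimiter:
Switch(config)# banner motd # System maintenance scheduled for 22:00 UTC. Save your work. #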

32.7 Securing Administrative Access


External Authentication Options
Administrative access to a specific network device should be secured so that only
authenticated users can access the device.
In a small network, local authentication is often used. When you have more than a few
user accounts in a local device database, managing those user accounts becomes more
complex. For example, if you have 100 network devices, adding one user account means
that you have to add this user account on all 100 devices in the network. Also, when you
add one network device to the network, you have to add all user accounts to the local
device database to enable all users to access that device.
Because maintaining the local database on each network device is usually not feasible as
the network grows, you can use an external authentication, authorization, and
accounting (AAA) server that will manage all user and administrative access needs for an
entire network.
AAA refers to a security architecture for distributed systems that enables control over
which users are allowed access to which services and keeps track of how many resources
they have used. It is a framework that supports and provides these services:

• Authentication: This service identifies users, including login and password dialog,
challenge and response, messaging support, and encryption, depending on the
security protocol that you select.
• Authorization: This service provides access control by assembling a set of
attributes that describe what the user is authorized to perform.
• Accounting: This service provides the method for collecting information, logging
the information locally, and sending the information to the AAA server for billing,
auditing, and reporting.
To better understand the three services, imagine attending an invitation-only event.
Authentication can be compared to being stopped at the office lobby by a security guard.
After you provide a driver’s license to validate that you are on the guest list, you are given
an access badge. Authorization relates to which doors the access badge opens in the
building. Your access is restricted by the badge policy. Accounting is the system that tracks
your movements through the building and records which doors you accessed with your
badge and whether access was permitted or denied.
Here are the two most popular options for external AAA:

• RADIUS: RADIUS is an open standard that combines authentication and authorization
services as a single process: after users are authenticated, they are also authorized.
It uses UDP for the authentication and authorization service.
• TACACS+: TACACS+ is a Cisco proprietary security mechanism that separates AAA
services. Because it has separated services, you can, for example, use TACACS+
only for authorization and accounting, while using another method of
authentication. It uses TCP for all three services.
By using the RADIUS or TACACS+ authentication, all authentication requests are relayed to
the external server, which permits (access–allow) or denies (access–reject) the user
according to its user database. The server then instructs the network device to permit or
deny access.
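As a hedged illustration (the server name, IP address, and shared key are placeholder assumptions, and exact syntax varies by Cisco IOS Software release), login authentication against an external RADIUS server could be configured as follows:
Switch(config)# aaa new-model
Switch(config)# radius server RAD-SRV1
Switch(config-radius-server)# address ipv4 10.10.10.10 auth-port 1812 acct-port 1813
Switch(config-radius-server)# key RadiusSharedSecret
Switch(config-radius-server)# exit
Switch(config)# aaa authentication login default group radius local
The trailing local keyword provides a fallback to the local user database if the RADIUS server is unreachable.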
The figure shows the external authentication process:
1. A host connects to the network. At this point, the host is prompted for a username
and password.
2. The network device passes a RADIUS/TACACS+ access request, along with user
credentials, to the authentication server.
3. The authentication server uses an identity that is stored to validate user
credentials, and sends a RADIUS/TACACS+ response (Access-Accept or Access-
Reject) to the network device.
4. The network device will apply the decision.

IEEE 802.1X
The access layer is the point at which user devices connect to the network. This layer,
therefore, is the connection point between the network and any client device. So,
protecting the access layer is important for protecting other users, applications, and the
network itself from human errors and malicious attacks. Network access control at the
access layer can be managed by using the IEEE 802.1X protocol to secure the physical
ports where end users connect. A network where each user is verified before they access
it is called an identity-based network.
Identity-based networking allows you to verify users when they connect to a switch port.
Identity-based networking authenticates users and places them in the right VLAN, based
on their identity. Should any users fail to pass the authentication process, their access can
be rejected, or they might be simply put in a guest VLAN.
The IEEE 802.1X standard allows you to implement identity-based networking based on a
client-server access control model. The following three roles are defined by the standard:

• Client: Also known as the supplicant, it is the workstation with 802.1X-compliant
client software.
• Authenticator: Usually the switch, which controls the physical access to the
network; it acts as a proxy between the client and authentication server.
• Authentication server (RADIUS): The server that authenticates each client that
connects to a switch port before making available any services that the switch or
the LAN offer.

The procedure of a client connecting to a network with 802.1X port-based authentication has
five stages:
1. Session initiation: The client sends a request to initiate the authentication or the
authenticator detects a link on a port and initiates the authentication.
2. Session authentication: The authenticator relays messages between the client and
the authentication (RADIUS) server. The client sends the credentials to the RADIUS
server.
3. Session authorization: The RADIUS server validates the received credentials and if
valid credentials were submitted, the server sends a message to the authenticator
to allow the client access to the port. If the credentials are not valid, the RADIUS
server sends a message to the authenticator to deny access to the client.
4. Session accounting: When the client is connected to the network, the
authenticator collects the session data and sends it to the RADIUS server.
5. Session termination: When the client disconnects from the network, the session is
terminated immediately.
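A minimal sketch of enabling 802.1X on a switch access port; the interface is a placeholder assumption, and the RADIUS server configuration shown earlier is also required:
Switch(config)# aaa new-model
Switch(config)# aaa authentication dot1x default group radius
Switch(config)# dot1x system-auth-control
Switch(config)# interface gigabitethernet1/0/5
Switch(config-if)# switchport mode access
Switch(config-if)# authentication port-control auto
Switch(config-if)# dot1x pae authenticator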
33.1 Implementing Device Hardening
Introduction
An important element in the overall security posture of an organization is the security of
the network infrastructure. The network infrastructure is the foundation that is built with
routers, switches, and other equipment that provides the fundamental network services
that keep a network running. The infrastructure is often the target of denial of service
(DoS) and other attacks that can directly or indirectly disrupt the network operation. In
order to ensure the availability of the network, it is critical to implement the security tools
and best practices that help protect each network element, and the infrastructure as a
whole.
Different threats exploit different vulnerabilities in different functional areas of the
network. Spoofing attacks are commonly used to power DoS attacks from the external
network, to exhaust edge device resources and link bandwidth. On internal security
domains, such as server farms and the campus access layer, Layer 2 attacks are typically
aimed at trust exploitation and information theft using man-in-the-middle mechanisms.
Deploying the right policy and feature to the appropriate device is critical to the effective
mitigation of these threats.

As a networking engineer, you will need to be able to implement proper device hardening
mechanisms, which might include practices such as the following:

• securing unused ports on devices


• enabling infrastructure access control lists (iACLs)
• disabling unnecessary services on the devices to lower the risk of security exploits
on the unneeded services
• mitigating attacks on LAN infrastructure, such as VLAN-based attacks
• mitigating Address Resolution Protocol (ARP) and DHCP spoofing attacks

33.2 Implementing Device Hardening


Securing Unused Ports
Organizations commonly implement security solutions using routers, firewalls, intrusion
prevention system (IPS), and virtual private network (VPN) devices. These devices protect
the elements in Open Systems Interconnection Layer 3 through Layer 7. Layer 2 LANs are
often considered to be a safe and secure environment. However, Ethernet, the most
commonly deployed data link layer technology, provides no mechanisms for security. A
similar lack of security exists in many technologies that are built on top of Ethernet and
depend on Ethernet, such as transparent bridging and switching, Spanning Tree Protocol
(STP), and ARP. If Layer 2 is compromised, then all layers above it are also affected. Layer
2 security solutions must be implemented to help secure a network. Fortunately, features
have been designed and implemented within network devices to provide security services
for Layer 2.
Device security features aim to mitigate known LAN attacks, such as Cisco Discovery
Protocol reconnaissance attack, MAC address table flooding attack, VLAN attacks, and
DHCP attacks.
Disabling an Interface (Port)
Many of the attacks presume that the attacker has access to the network. Since the
attacker needs to physically connect to the network somehow, a simple method to help
secure the network from unauthorized access is to disable all unused ports on a switch.
By default, switch ports are forwarding, and they are not shut down. To disable a port, you
administratively shut it down in the interface configuration mode using the shutdown
command.
Disabling ports one by one is time-consuming. If a contiguous range of ports needs to be
shut down, use the interface range command.
Using the interface range command also ensures that you are providing uniform
configuration across multiple ports.
You can verify that you have disabled a port by looking for the shutdown command within
an interface configuration in the running configuration.
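A minimal sketch, assuming ports Gi1/0/10 through Gi1/0/24 are unused:
Switch(config)# interface range gigabitethernet1/0/10 - 24
Switch(config-if-range)# shutdown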
The process of enabling and disabling ports can be time-consuming, but it enhances
security on the network and is well worth the effort.
Adding Unused Ports to a Dedicated VLAN
Another security practice is to create a VLAN dedicated to unused ports. Do not create a
switch virtual interface (SVI) for that VLAN, to prevent an unauthorized person from
attacking the switch itself. Once the VLAN is created, add the unused ports to it.
Sometimes, this VLAN is called the "parking" or "black hole" VLAN.
Use the vlan command to configure a new VLAN. Then, on each unused interface, use the
switchport mode access command, so that the interface does not negotiate a trunk if the
other side attempts to trunk, and the switchport access vlan vlan-id command to assign the
port to the VLAN.
In addition, you can shut down the VLAN using the shutdown command in VLAN configuration
mode. After you shut down a VLAN, traffic ceases to flow on that VLAN.
This way, you prevent unused ports from communicating with each other.
Note: The order in which you enter the commands to shut down a VLAN is important. All
interfaces in the VLAN must be in the shutdown state before you shut down the VLAN.
Otherwise, the VLAN will remain active even if you administratively shut it down. Also,
after a VLAN is shut down, enabling interfaces that belong to the VLAN will have no
effect. You must first enable the VLAN before interfaces in the VLAN can be enabled.
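A minimal sketch, using VLAN 999 as the parking VLAN and a placeholder port range, in the order described in the note above:
Switch(config)# vlan 999
Switch(config-vlan)# name PARKING
Switch(config-vlan)# exit
Switch(config)# interface range gigabitethernet1/0/10 - 24
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 999
Switch(config-if-range)# shutdown
Switch(config-if-range)# exit
Switch(config)# vlan 999
Switch(config-vlan)# shutdown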

33.3 Implementing Device Hardening


Infrastructure ACL
To increase the overall security of your network, you have to protect the traffic that
transits the network devices using the forwarding path, such as email, web traffic, and so
on, as well as the traffic that is intended for the network devices themselves, such as
Secure Shell (SSH) and Simple Network Management Protocol (SNMP) for management,
Border Gateway Protocol (BGP) and Enhanced Interior Gateway Routing Protocol (EIGRP)
as routing protocols between network devices, and so on.
Access control lists (ACLs) provide a powerful mechanism to control the traffic entering or
leaving your network. To protect infrastructure devices and minimize the risk, impact, and
effectiveness of direct infrastructure attacks, administrators are advised to deploy
infrastructure access control lists (iACLs), as one of the most critical security controls that
can be implemented in networks. The iACLs explicitly permit only authorized traffic to the
infrastructure equipment, as well as permit transit traffic (all traffic not destined for the
infrastructure).
The protections provided by iACLs are relevant to both the management and control
planes on the devices. The iACLs enable you to implement policy enforcement to the
traffic sent to infrastructure devices. Therefore, you can construct an iACL by explicitly
permitting only authorized traffic sent to these devices in accordance with existing
security policies and configurations.
When compared to other types of ACLs, iACLs have the following characteristics:

• They protect the traffic destined to the network infrastructure equipment, to mitigate
directed attacks.
• Their design depends on the protocols used on the network infrastructure
equipment.
• Typically, they should be deployed at network ingress points, as a first line of
defense against external threats. For example, iACLs can provide defense against
certain types of invalid traffic on the internet.
• They can be deployed on other locations in the network, depending on the
positioning of critical network infrastructure equipment.
Once created, the iACL must be applied to all interfaces that face noninfrastructure
devices. This includes interfaces that connect to the internet, other organizations, remote
access segments, user segments, and segments in data centers.
Developing an iACL
Because iACLs are designed as a first line of defense against external threats, they should
be deployed at network ingress points to protect the infrastructure from various risks,
both accidental and malicious. More precisely, iACLs are extended ACLs that restrict
external access to the infrastructure address space. They allow only authorized devices to
communicate with infrastructure elements such as routers and switches, while letting
transit traffic flow freely.
For example, iACLs deployed at a network ingress point from the internet should
implement basic RFC 1918, RFC 3330, and antispoof filtering.
Note: RFC 1918 defines IPv4 private address space that is not a valid source address on
the internet. RFC 3330 defines IPv4 special use addresses that might require filtering. RFC
2827 provides network ingress filtering guidelines for antispoof protection.
When developing an iACL, you should understand the required protocols by the specific
infrastructure. Although every site has specific requirements, certain protocols are
commonly deployed and must be understood. For example, external BGP connections to external
peers need to be explicitly permitted. Any other protocols that require direct
access to the infrastructure router need to be explicitly permitted as well. For example, if
you terminate a Generic Routing Encapsulation (GRE) tunnel on a core infrastructure
router, then protocol 47 (GRE) also needs to be explicitly permitted. Similarly, if you
terminate an IPv6 over IPv4 tunnel on a core infrastructure router, then protocol 41 (IPv6
over IPv4) also needs to be explicitly permitted.
In addition to required protocols, the infrastructure address space needs to be identified
since this is the address space that the iACL protects. The infrastructure address space
includes any addresses that are used for the internal network and are rarely accessed by
external sources such as router interfaces, point-to-point link addressing, and critical
infrastructure services. Since these addresses are used for the destination portion of the
iACL, summarization is critical. Wherever possible, these addresses must be grouped into
classless interdomain routing (CIDR) blocks.
The following figure illustrates an example of a company that is multihomed to two ISPs
through two local routers. The iACLs are applied inbound on the ingress interfaces of
company routers, which represent entry points to the network. Each router has a BGP
connection to one ISP, for load sharing of the internet traffic on the basis of
predetermined policies.

This IPv4 example is based on the following addressing:

• The internet IPv4 address block that the company is using for the infrastructure is
209.165.200.224/27.
• The interface Gi 0/0 on router R1 is configured with the IPv4 address
209.165.201.1/30. This address is used to establish a BGP session with the ISP 1
router, which uses the IPv4 address 209.165.201.2/30 on its interface Gi 0/0.
• The interface Gi 0/0 on router R2 is configured with the IPv4 address
209.165.201.5/30. This address is used to establish a BGP session with the ISP 2
router, which uses the IPv4 address 209.165.201.6/30 on its interface Gi 0/0.
Since many attacks rely on flooding routers with fragmented packets, filtering incoming
fragments to the infrastructure provides an added measure of protection and helps
ensure that an attack cannot inject fragments by simply matching Layer 3 rules in the iACL.
ACLs can use the fragments keyword, which enables specialized handling of fragmented
packets. Without the fragments keyword, noninitial fragments that match the Layer 3
statements (irrespective of the Layer 4 information) in an ACL are affected by the permit
or deny statement of the matched entry. However, by adding the fragments keyword, you
can force ACLs to either deny or permit noninitial fragments with more granularity.
Filtering fragments can be added to the example as an additional layer of protection
against a denial of service (DoS) attack that uses noninitial fragments (that is, fragment
offset > 0). Using a deny statement for noninitial fragments at the beginning of the iACL
denies all noninitial fragments from accessing the router. Under rare circumstances, a
valid session might require fragmentation, and such a session will be filtered if a deny
fragments statement exists in the ACL.
To deny any noninitial fragments, while nonfragmented packets or initial fragments are
able to pass to the next lines of the ACL, use the following entries at the beginning of an
iACL:
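The entries could resemble the following minimal sketch; the ACL name iACL-INGRESS is illustrative and not part of the original example:

ip access-list extended iACL-INGRESS
 remark Drop noninitial fragments; other packets continue to the next ACL lines
 deny tcp any any fragments
 deny udp any any fragments
 deny icmp any any fragments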

These separate entries in the iACL facilitate classification of the attack, since each
protocol, TCP, UDP, and Internet Control Message Protocol (ICMP), increments separate
counters in the ACL.
As previously mentioned, an iACL built without the proper understanding of the protocols
and devices involved, may end up being ineffective and may even cause a DoS attack,
instead of preventing it. Therefore, you should have a clear understanding of the
legitimate traffic required by your infrastructure before deploying an iACL. Also, you
should use a conservative methodology for deploying iACLs, leveraging iterative iACL
configurations that can help you identify and incrementally filter unwanted traffic.
The following example illustrates the iACL applied inbound on interface Gi 0/0 on router
R1, which provides antispoof filters, permits external BGP peering to the external peer,
and protects the infrastructure from all external access. R2 uses a similar iACL.
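A minimal sketch of such an iACL for R1 follows, using the addressing from the figure. The ACL name, the particular RFC 1918 and special-use entries shown, and the ordering are illustrative rather than a complete, definitive policy:

ip access-list extended iACL-INGRESS
 remark Deny noninitial fragments destined to the router
 deny tcp any any fragments
 deny udp any any fragments
 deny icmp any any fragments
 remark Antispoof: deny packets claiming to be sourced from internal address space
 deny ip 209.165.200.224 0.0.0.31 any
 remark RFC 1918 and special-use sources (other RFC 3330 ranges filtered similarly)
 deny ip 10.0.0.0 0.255.255.255 any
 deny ip 172.16.0.0 0.15.255.255 any
 deny ip 192.168.0.0 0.0.255.255 any
 deny ip 127.0.0.0 0.255.255.255 any
 remark Permit the eBGP session with the ISP 1 peer
 permit tcp host 209.165.201.2 host 209.165.201.1 eq bgp
 permit tcp host 209.165.201.2 eq bgp host 209.165.201.1
 remark Deny all other traffic to the infrastructure address space
 deny ip any 209.165.201.0 0.0.0.3
 deny ip any 209.165.200.224 0.0.0.31
 remark Permit transit traffic to noninfrastructure destinations
 permit ip any any
!
interface GigabitEthernet0/0
 ip access-group iACL-INGRESS in

Additional permit entries would be added in the same way for any other protocols, such as GRE, that legitimately terminate on the router.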
Note: The iACL shown above permits the flow of transit traffic to noninfrastructure
destinations, expecting other devices to filter the internet traffic based on the security
policies in the company regarding its internet services, such as web and email servers, and
others. iACLs are designed to secure the infrastructure and do not provide protection from
attacks against targets other than the infrastructure itself.
The network troubleshooting tools ping and traceroute use ICMP, which you can also
filter in the iACLs. Therefore, you can permit ICMP messages by name or type and code, to
allow traffic from trusted management stations to the infrastructure devices while
blocking all other ICMP packets to these devices.
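For example, assuming a trusted management station at 192.0.2.10 (a hypothetical address used only for illustration), entries such as the following could be placed in the iACL ahead of the deny statements that protect the infrastructure address space:

permit icmp host 192.0.2.10 209.165.200.224 0.0.0.31 echo
permit icmp host 192.0.2.10 209.165.200.224 0.0.0.31 echo-reply
permit icmp host 192.0.2.10 209.165.200.224 0.0.0.31 time-exceeded

All other ICMP packets destined to the infrastructure address space would then be dropped by the existing deny entries.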

33.4 Implementing Device Hardening


Disabling Unused Services
To facilitate deployment, Cisco routers and switches start with a list of services that are
turned on and that are considered to be appropriate for most network environments.
However, because not all networks have the same requirements, some of these services
may not be needed. Disabling the unnecessary services has two benefits: It helps preserve
system resources, and it eliminates the potential for security exploits on the unneeded
services.
To display the list of UDP or TCP ports that the device is listening on and to determine
which services need to be disabled, use the show control-plane host open-ports
command.

In the example, services that are enabled on the router are SSH, Telnet, TACACS, and
DHCP.
Note: As an alternative, Cisco IOS Software provides the AutoSecure function that helps
disable these unnecessary services while enabling other security services.
Along with services at a higher level of the TCP/IP stack, lower-layer services should also
be considered.
Cisco Discovery Protocol can be useful for network troubleshooting. Cisco Discovery
Protocol is enabled by default in Cisco IOS Software Release 15.0 and later. Some network
management software takes advantage of Cisco Discovery Protocol neighbor data to map
out topological connectivity. Cisco VoIP deployments can take advantage of Cisco
Discovery Protocol to automatically assign the voice VLAN to Cisco IP phones. On the
other hand, Cisco Discovery Protocol provides an easy reconnaissance vector to any
attacker with an Ethernet connection. For example, when a switch sends a Cisco Discovery
Protocol announcement out of a port where a workstation is connected, the workstation
normally ignores it. However, with a simple tool such as Wireshark, an attacker can
capture and analyze the Cisco Discovery Protocol announcement. Included in the Cisco
Discovery Protocol data is the model number and operating system version of the switch.
An attacker can then use this information to look up published vulnerabilities that are
associated with that operating system version and potentially follow up with an exploit of
the vulnerability. The organization must decide whether the convenience that Cisco
Discovery Protocol brings is greater than the security risk that comes with Cisco Discovery
Protocol.
Here are some general best practices:

• Although the HTTP service enables convenient browser-based access to a Cisco
device, it is strongly recommended that you turn off the HTTP service that is
running. HTTPS can stay on.
o Use the no ip http server global configuration command to disable HTTP
service. To re-enable the HTTP service after disabling it, use the ip http
server command in global configuration mode.
• You should enable Cisco Discovery Protocol on selected ports only, where the
service does not represent a risk. Examples of interfaces that are at risk include
external interfaces, such as those at the internet edge, and data-only ports at the
campus and branch access.
o Use the no cdp enable command in interface configuration mode to disable
Cisco Discovery Protocol on an interface. To disable Cisco Discovery
Protocol on all interfaces, use the no cdp run command in global
configuration mode. You can use the cdp enable and cdp run commands to
re-enable Cisco Discovery Protocol as needed.
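A minimal sketch that combines both recommendations is shown here; the hostname and interface number are illustrative, with GigabitEthernet0/0 assumed to be an internet-facing interface:

Router(config)# no ip http server
Router(config)# interface GigabitEthernet0/0
Router(config-if)# no cdp enable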

33.5 Implementing Device Hardening


Port Security
All switch ports, both unused and used, should be secured before the switch is deployed
for production use. One way to secure ports is by implementing the port security feature.
Port Security Overview
Port security restricts access to a switch port based on the MAC addresses of the
connecting devices. A port that is configured with port security accepts frames only from secure
MAC addresses. The MAC addresses of legitimate devices are allowed access, while other
MAC addresses are denied. Port security limits the number of valid MAC addresses
allowed on a port. Enabling port security can be used to control unauthorized expansion
of the network. Also, port security is the simplest and most effective method to prevent
MAC address table flooding attacks.
Port security is not enabled by default. When enabled on a port, the port is considered to
be a secured port.
You can configure a secure port to allow only one specific MAC address, multiple specific
MAC addresses, or a maximum number of unspecified MAC addresses.
Allowed MAC addresses are called secure addresses and are stored in the secure MAC
address table. Depending on how the secure MAC address is learned by the switch, it can
be one of the following addresses:

• Static secure MAC addresses: Specific MAC addresses that are manually
configured on a port. MAC addresses configured in this way are stored in the
secure MAC address table and are added to the running configuration on the
switch.
• Dynamic secure MAC addresses: MAC addresses that are dynamically learned
from devices that connect to the port and are not specified manually. The
maximum number of such addresses accepted on a port is configured. This port
security configuration is used when you care only about how many MAC addresses
are permitted to use the port, rather than which MAC addresses are permitted.
Dynamically learned MAC addresses that are not secure are stored in the MAC
address table until they age out. However, dynamic secure MAC addresses do not
age out by default. Instead, they are removed when the switch restarts or the port
goes down. Dynamic secure MAC addresses are not stored in the running
configuration.
• Sticky secure MAC addresses: MAC addresses that are dynamically learned and
then stored in the address table and added to the running configuration. In other
words, sticky secure MAC addresses are learned dynamically and automatically
added to the configuration. If you save the running configuration to the startup
configuration, then the sticky secure MAC addresses are saved to the startup
configuration file, and then when the switch restarts the interface does not need
to relearn the addresses. If the sticky secure addresses are not saved, they will be
lost.
When a frame arrives on a port for which port security is configured, its source MAC
address is checked against the secure MAC address table. If the source MAC address
matches an entry in the table for this port, the device forwards the frame to be processed.
Otherwise, the device does not forward the frame.
In the example in the figure below, traffic from Attacker 1 and Attacker 2 will be dropped
at the switch because the source MAC addresses of these frames do not match MAC
addresses in the list of secured (allowed) addresses.
The following are Port Security recommendations:

• Implement port security on all switch ports.


• Specify one or more MAC addresses allowed on a port.
• Specify measures for unauthorized MAC address connection attempts.

A security violation occurs in these situations:

• When ingress traffic from a MAC address that is different than the allowed MAC
addresses arrives at an interface.
• When ingress traffic from a MAC address that is different than the allowed MAC
addresses tries to connect when the maximum number of allowed MAC addresses
on the port is already reached.
• When ingress traffic from a secure MAC address arrives at a different interface in
the same VLAN as the interface on which the address is secured.
Note: After a secure MAC address is configured or learned on one secure port, the
sequence of events that occurs when port security detects that secure MAC address on a
different port in the same VLAN is known as a MAC move violation.
As an administrator, you can configure how a switch reacts when a security violation
occurs by specifying the violation mode of the port.
One of these actions is taken, based on the configured violation mode:

• Protect: The offending frame is dropped.


• Restrict: The offending frame is dropped and an SNMP trap and a syslog message
are generated. The security violation causes the violation counter to increment.
• Shutdown: The offending frame is dropped. The interface is placed in an error-
disabled state and an SNMP trap and a syslog message are generated. The
violation counter increments. The port is inactive while in an error-disabled state.
Administrative action is required to return the port to a normal state. To make the
interface usable again, you must intervene manually, by first issuing the shutdown
command and then the no shutdown command on the interface, or you must
configure error-disabled recovery. Shutdown is the default violation mode.
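As an alternative to the manual shutdown and no shutdown sequence, error-disabled recovery can be configured so that a port in the error-disabled state is re-enabled automatically after a timer expires. A minimal sketch follows; the hostname and the 300-second interval are illustrative:

SwitchX(config)# errdisable recovery cause psecure-violation
SwitchX(config)# errdisable recovery interval 300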
You can also specify how dynamic secure MAC addresses age, by configuring aging type
and aging time; the default is that they do not age. Aging type can be absolute or based on
inactivity. When you configure absolute aging, all the dynamically learned secure
addresses age out when the aging time expires. When you configure inactivity aging, the
aging time defines the period of inactivity after which all the dynamically learned secure
addresses age out.
Configuring Port Security
To configure port security on a switch port, follow the steps described below.

• Change the switchport mode from the default Dynamic Trunking Protocol (DTP)
dynamic auto mode to either access or trunk. You can configure port security only
on static access ports or trunk ports. When an interface is in the default mode, it
cannot be configured as a secure port.
o In interface configuration mode, use the switchport mode { access | trunk
} command to set the mode to either access or trunk, or use the switchport
nonegotiate command to disable DTP.
o Use the switchport port-security interface command without keywords to
enable port security on an interface.
For all other parameters, such as a secure MAC address, a maximum number of secure
MAC addresses, or the violation mode, you use the switchport port-security interface
command with keywords. Use the no form of this command to disable port security or to
set the parameters to their default states.

• Optionally, set the maximum number of secure MAC addresses for the interface.
The range depends on the switch platform. The default value is 1.
o Set the maximum number of secure MAC addresses using the switchport
port-security maximum value command
• Optionally, specify the allowed MAC addresses or sticky learning.
When defining static entries, you have to specify the specific MAC address that is allowed
on an interface. You can enter as many secure MAC addresses as the configured maximum
allows. If you configure fewer secure MAC addresses than the
maximum, the remaining MAC addresses are dynamically learned.

• To specify a specific allowed MAC address, use the switchport port-security
mac-address mac-address command.
• To specify sticky learning of MAC addresses, use the switchport port-security
mac-address sticky command. The no version of the command, no switchport
port-security mac-address sticky, stops sticky learning and deletes learned
sticky MAC addresses from the running configuration, but keeps learned MAC
addresses in the address table.
Optionally, set the violation mode. The effect of each violation mode is summarized in the
violation mode descriptions earlier in this topic. The default mode is shutdown.

• To set the violation mode, use the switchport port-security violation { protect |
restrict | shutdown } command.
Optionally, set aging parameters for dynamically learned addresses. Use aging
parameters to remove and add devices on a secure port without manually deleting the
existing secure MAC addresses. Here are the supported aging types:
Absolute: The secure addresses on the port are deleted after the specified aging time.
Absolute aging is the default type if aging is enabled.
Inactivity: The secure addresses on the port are deleted only if the secure addresses
are inactive for a specified aging time. Aging time is specified in minutes.

• To set the aging type, use the switchport port-security aging type {absolute |
inactivity} command.
• To set the aging time, use the switchport port-security aging time minutes command.
When the port security violation mode is set to shutdown, the port with the security
violation goes to the error-disabled state and you receive syslog notification on the device:
Sep 20 12:44:54.966: %PM-4-ERR_DISABLE: psecure-violation error detected on Fa0/5,
putting Fa0/5 in err-disable state
Sep 20 12:44:54.966: %PORT_SECURITY-2-PSECURE_VIOLATION: Security violation
occurred, caused by MAC address 000c.292b.4c75 on port FastEthernet0/5.
Sep 20 12:44:55.973: %LINEPROTO-5-UPDOWN: Line protocol on Interface
FastEthernet0/5, changed state to down
Sep 20 12:44:56.971: %LINK-3-UPDOWN: Interface FastEthernet0/5, changed state to
down
To make the interface operational again, you need to disable the interface
administratively and then enable it again, as shown here:
SwitchX(config)# interface FastEthernet 0/5
SwitchX(config-if)# shutdown
Sep 20 12:57:28.532: %LINK-5-CHANGED: Interface FastEthernet0/5,changed state to
administratively down
SwitchX(config-if)# no shutdown
Sep 20 12:57:48.186: %LINK-3-UPDOWN: Interface FastEthernet0/5, changed state to up
Sep 20 12:57:49.193: %LINEPROTO-5-UPDOWN: Line protocol on Interface
FastEthernet0/5, changed state to up
The example below shows a typical port security configuration for a voice port. Two MAC
addresses are allowed and they are learned dynamically. One MAC address is for the IP
phone and the other MAC address is for the PC connected to the IP phone. Violations of
this policy result in the port being shut down. Aging timeout for the learned MAC
addresses is set to 2 hours.
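Such a configuration could resemble the following sketch; the hostname, interface number, and VLAN IDs are illustrative:

SwitchX(config)# interface GigabitEthernet0/5
SwitchX(config-if)# switchport mode access
SwitchX(config-if)# switchport access vlan 10
SwitchX(config-if)# switchport voice vlan 150
SwitchX(config-if)# switchport port-security
SwitchX(config-if)# switchport port-security maximum 2
SwitchX(config-if)# switchport port-security violation shutdown
SwitchX(config-if)# switchport port-security aging time 120
SwitchX(config-if)# switchport port-security aging type absolute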

Verifying Port Security


Cisco IOS Software includes several commands that allow you to verify port security
configuration. Most of the commands are derivatives of the show port-security command,
which is used without and with other keywords.
Use the show port-security command without keywords to view port security settings for
the switch, including violation count, configured interfaces, and security violation actions.
The example of the show port-security command shows that port security is enabled on
port GigabitEthernet0/8 with a maximum MAC address count of 2. Currently, no MAC
addresses are learned on that port and the violation action has been set to shutdown.
Use the show port-security interface interface-id command to view port security settings
for the specified interface, including the maximum allowed number of secure MAC
addresses for each interface, the number of secure MAC addresses on the interface, the
number of security violations that have occurred, and the violation mode.
The output displays the following information (from the top down):

• Whether the port security feature is enabled


• Current port status
• The violation mode
• Information about secure MAC addresses: the maximum allowed number, the
total number of different MAC addresses received, the number of statically
configured secure MAC addresses
• Aging time and type
• SecureStatic address aging
• The number of security violations that have occurred
Note: If the violation mode is set to protect, the violation counters do not increment, that
is, you might see a 0 as the number of security violations when they are in fact happening.
The example of the show port-security interface GigabitEthernet0/8 command shows
that a violation has occurred. The secure-down status of the port indicates that the port
has been shut down (error-disabled) because of the port-security policy violation.
You can also use the show interface status command to verify the status of the interface.
Use the show port-security [interface interface-id] address command to view all the
secure MAC addresses that are configured on all switch interfaces, or on a specified
interface, with aging information for each address.
The example below shows that port GigabitEthernet0/8 is in VLAN 1 and has a secured
MAC address of 0000.ffff.aaaa, which means that the host with the 0000.ffff.aaaa MAC
address can connect to port GigabitEthernet0/8.

33.7 Implementing Device Hardening


Mitigating VLAN Attacks
The VLAN architecture simplifies network maintenance and improves performance.
However, VLAN operation opens the door to abuse.
Two common VLAN-based attacks are:

• VLAN hopping attack


• Double-tagging VLAN hopping attack
VLAN hopping allows traffic from one VLAN to be seen by another VLAN without first
crossing a router. Under certain circumstances, attackers can sniff data and extract
passwords and other sensitive information at will.
The attack works by taking advantage of an incorrectly configured trunk port. By default,
trunk ports have access to all VLANs and pass traffic for multiple VLANs across the same
physical link, generally between switches.
In a basic VLAN hopping attack, the attacker takes advantage of the fact that DTP is
enabled by default on most switches. The network attacker configures a system to use
DTP to negotiate a trunk link to the switch. As a result, the attacker is a member of all the
VLANs that are trunked on the switch and can “hop” between VLANs. In other words, the
attacker can send and receive traffic on all those VLANs.
The best way to prevent a basic VLAN hopping attack is to turn off DTP on all ports, and
explicitly configure trunking mode or access mode as appropriate on each port.
The double-tagging (or double-encapsulated) VLAN hopping attack takes advantage of the
way that hardware operates on some switches. Some switches perform only one level of
802.1Q decapsulation and allow an attacker, in specific situations, to embed a second
802.1Q tag inside the frame. This tag allows the frame to go to a VLAN that the outer
802.1Q tag did not specify. An important characteristic of the double-encapsulated VLAN
hopping attack is that it can work even if DTP is disabled on the attacker’s access port.
A double-tagging VLAN hopping attack follows these steps, as illustrated in the figure
below:
1. The attacker sends a double-tagged 802.1Q frame to the switch. The outer header
has the VLAN tag of the attacker, which is the same as the native VLAN of the trunk
port. For the purposes of this example, assume that it is VLAN 10. The inner tag is
the victim's VLAN, VLAN 20.
2. The frame arrives on the switch. Assume that there is no MAC address table entry
for the destination MAC address. Therefore, the switch sends the frame out to all
VLAN 10 ports (including the trunk). On egress out of the trunk port, the switch
sees that the frame has an 802.1Q tag for the native VLAN, and it strips that tag
(because 802.1Q specifies that native VLAN traffic is not tagged). The inner tag to
VLAN 20 remains.
3. The frame arrives at the second switch, which has no knowledge that it was
supposed to be for VLAN 10. The second switch looks only at the 802.1Q tag (the
former inner tag that the attacker sent) and sees that the frame is destined for
VLAN 20 (the victim VLAN).
4. The second switch sends the packet on to the victim port, or floods it, depending
on whether there is an existing MAC address table entry for the victim host.

It is important to note that this attack is unidirectional and works only when the attacker’s
VLAN and trunk port native VLAN are the same. Stopping this type of attack is not as easy
as stopping basic VLAN hopping attacks. The best approach is to create a VLAN to use as
the native VLAN on all trunk ports and to make sure that this VLAN is not used for any access ports.
To prevent a VLAN hopping attack that uses double 802.1Q encapsulation, the switch
must look further into the packet to determine whether more than one VLAN tag is
attached to a given frame. Unfortunately, the application-specific integrated circuits
(ASICs) that many switches use are hardware-optimized to look for only one tag and then
switch the frame.
The double-tagging VLAN hop attack requires that the attacker is on the native VLAN of
the outbound trunk port. This attack can be mitigated by ensuring that no systems attach
to the native VLAN used by trunks. Specify a unique native VLAN for use on all trunk ports
and do not use that VLAN anywhere else on the switch.

In summary, you have the following two options to control trunking port behavior:

• For links that you do not intend to trunk across, use the switchport mode access
interface configuration command to disable trunking. This command configures
the port as an access port.
• For links that you do intend to trunk across, take the following actions:
o Use the switchport mode trunk interface configuration command to cause
the interface to become a trunk link and use the switchport nonegotiate
interface configuration command to prevent the generation of DTP frames.
o Use the switchport trunk native vlan vlan_number interface configuration
command to set the native VLAN on the trunk to an unused VLAN. The
default native VLAN is VLAN 1.
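The following sketch shows both options; the hostname, interface numbers, and VLAN 999 are illustrative, with VLAN 999 assumed to be otherwise unused:

SwitchX(config)# interface GigabitEthernet0/10
SwitchX(config-if)# switchport mode access
SwitchX(config-if)# exit
SwitchX(config)# interface GigabitEthernet0/24
SwitchX(config-if)# switchport mode trunk
SwitchX(config-if)# switchport nonegotiate
SwitchX(config-if)# switchport trunk native vlan 999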

33.8 Implementing Device Hardening


DHCP Snooping
DHCP-related attacks can be performed at Layer 2: DHCP server spoofing attacks and
DHCP starvation attacks.
DHCP does not include authentication and is therefore easily vulnerable to spoofing
attacks. The simplest attack is DHCP server spoofing. The attacker runs DHCP server
software and replies to DHCP requests from legitimate clients. As a rogue DHCP server,
the attacker can cause a DoS by providing invalid IP information. The attacker can also
perform confidentiality or integrity breaches via a man-in-the-middle attack. The attacker
can assign itself as the default gateway or Domain Name System (DNS) server in the DHCP
replies, later intercepting IP communications from the configured hosts to the rest of the
network.
To mitigate that threat, you can use static IPv4 addresses (static IPv4 addresses are
obviously not scalable in large environments) or configure the infrastructure to use DHCP
snooping to control DHCP traffic.

A DHCP starvation attack works by sending a flood of DHCP requests with spoofed MAC
addresses. If enough requests are sent, the network attacker can exhaust the address
space available on the DHCP servers. This flooding would cause a loss of network
availability to new DHCP clients as they connect to the network. A DHCP starvation attack
may be executed before a DHCP spoofing attack. If the legitimate DHCP server’s resources
are exhausted, then the rogue DHCP server on the attacker system has no competition
when it responds to new DHCP requests from clients on the network.
To mitigate DHCP address starvation attacks, deploy port security address limits, which set
an upper limit of secure MAC addresses that can be accepted into the MAC address table
from any single port. Because each DHCP request must be sourced from a separate MAC
address, this mitigation technique effectively limits the number of IP addresses that can
be requested from a switch-port-connected attacker. Set this parameter to a value that is
never legitimately exceeded in your environment.
DHCP for IPv4 Snooping
DHCP snooping is a Layer 2 security feature that specifically prevents DHCP server
spoofing attacks and mitigates DHCP starvation to a degree. DHCP snooping provides
DHCP control by filtering untrusted DHCP messages and by building and maintaining a
DHCP snooping binding database, which is also referred to as a DHCP snooping binding
table.
For DHCP snooping to work, each switch port must be labeled as trusted or untrusted.
Trusted ports are the ports over which the DHCP server is reachable and that will accept
DHCP server replies. All other ports should be labeled as untrusted ports and can only
source DHCP requests. Typically, this approach means the following:

• All access ports should be labeled as untrusted, except the port to which the DHCP
server is directly connected.
• All interswitch ports should be labeled as trusted.
• All ports pointing towards the DHCP server (that is, the ports over which the reply
from the DHCP server is expected) should be labeled as trusted.
Untrusted ports are those ports that are not explicitly configured as trusted. A DHCP
binding table is automatically built by analyzing normal DHCP transactions on all untrusted
ports. Each entry contains the client MAC address, IPv4 address, lease time, binding type,
VLAN number, and port ID that are recorded as clients make DHCP requests. The table is
then used to filter subsequent DHCP traffic. From a DHCP snooping perspective, untrusted
access ports should not send any DHCP server responses, such as DHCPOFFER, DHCPACK,
or DHCPNAK. The switch will drop all such DHCP packets.
The figure below shows the deployment of DHCP protection mechanisms on the access
layer of the network. User ports are designated as untrusted for DHCP snooping (indicated
by red dots), while interswitch links are designated as trusted (indicated by green dots), if
the DHCP server is reachable through the network core. User ports also have port security
to limit MAC addresses and prevent DHCP starvation attacks.
To mitigate the chances of DHCP spoofing, these procedures are recommended:

• Enable DHCP snooping globally. By default, the feature is not enabled.


o Use the ip dhcp snooping command without keywords.
• Enable DHCP snooping on selected VLANs.
o Use the ip dhcp snooping vlan vlan-list command.
• Configure trusted interfaces, because untrusted is the default configuration.
o Use the ip dhcp snooping trust interface configuration mode command.
Use the no version of the command to identify an interface as untrusted.
Below is an example DHCP snooping configuration. DHCP snooping must first be enabled
globally. DHCP snooping is then explicitly enabled on VLANs 10 through 19. Interface Gi0/24 is an
uplink to another switch where the DHCP server resides. It is configured to be trusted by
DHCP snooping, so DHCP replies from the server will be allowed. Assume that interfaces
Gi0/1 through Gi0/12 are access ports. By default, they are untrusted, therefore additional
DHCP snooping configuration is not necessary.
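Based on that description, the configuration could look like this sketch; the hostname is illustrative:

SwitchX(config)# ip dhcp snooping
SwitchX(config)# ip dhcp snooping vlan 10-19
SwitchX(config)# interface GigabitEthernet0/24
SwitchX(config-if)# ip dhcp snooping trust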

To verify DHCP snooping, use the show ip dhcp snooping command. The output shows
only trusted ports or the ports with a configured rate limit. In the example, DHCP
snooping is enabled for VLANs 10 through 19 and only GigabitEthernet 0/24 is specified as
trusted.

To display all the known DHCP bindings that have been learned on a switch, use the show
ip dhcp snooping binding command. In the example, there are two PCs connected to the
switch, so there is a binding for each of them in this table:
Switch# show ip dhcp snooping binding
MacAddress          IpAddress        Lease(sec)  Type           VLAN  Interface
------------------  ---------------  ----------  -------------  ----  ------------------
00:24:13:47:AF:C2   192.168.1.4      85858       dhcp-snooping  10    GigabitEthernet0/1
00:24:13:47:7D:B1   192.168.1.5      85859       dhcp-snooping  10    GigabitEthernet0/2
Total number of bindings: 2
33.9 Implementing Device Hardening
Dynamic ARP Inspection
ARP Spoofing Attack
In normal ARP operation, a host sends a broadcast to determine the MAC address of a
destination host with a particular IPv4 address. The device with the IPv4 address replies
with its MAC address. The originating host caches the ARP response, using it to populate
the destination MAC address in frames that encapsulate packets sent to that IPv4 address.
By spoofing an ARP reply from a legitimate device with a malicious ARP reply, an attacking
device appears to be the destination host that is sought by the sender. The ARP message
from the attacker causes the sender to store the MAC address of the attacking system in
its ARP cache. All packets that are destined for that IPv4 address are forwarded to the
attacker system.
An ARP spoofing attack, also known as ARP cache poisoning, can result in a man-in-the-
middle situation. In the figure below, the Attacker on Host B tricks both Host A and its
default gateway Router C. Host A sends traffic to Host B instead of the gateway and the
gateway sends traffic to host B instead of Host A. The attacker on Host B can passively
collect data from the packets before forwarding them on to their correct destination. The
attacker can also actively allow, deny, and insert data as a man-in-the-middle.
Mitigating the ARP Spoofing Attack
To prevent ARP spoofing, or "poisoning," a switch can inspect transit ARP traffic to ensure
that only valid ARP requests and responses are relayed. The ARP inspection feature of
Cisco Catalyst switches prevents ARP spoofing attacks by intercepting and validating all
ARP requests and responses. Each intercepted ARP reply is verified for valid MAC-to-IPv4
address bindings before it is forwarded. ARP replies with invalid MAC-to-IPv4 address
bindings are dropped.
Dynamic ARP Inspection (DAI) can determine the validity of an ARP reply based on
bindings that are stored in a DHCP snooping database. In non-DHCP environments, DAI
can validate ARP packets against user-configured ARP ACLs for hosts with statically
configured IPv4 addresses.
DAI associates each interface with a trusted state or an untrusted state. To ensure that
only valid ARP requests and responses are relayed, DAI takes these actions:

• Forwards ARP packets received on a trusted interface without any checks.


• Intercepts all ARP packets on untrusted ports.
• Verifies that each intercepted packet has a valid MAC-to-IPv4 address binding
before updating the local ARP cache or before forwarding the packet to the
appropriate destination.
• Drops ARP packets with invalid MAC-to-IPv4 address bindings.
In a typical network DAI configuration, configure all access switch ports that are
connected to host ports as untrusted and all switch ports that are connected to other
switches as trusted. With this configuration, all ARP packets entering a switch from
another switch bypass the security check, which is safe because all switches validate the
ARP packets as they are sent by hosts that are connected to untrusted access ports.
To mitigate the chances of ARP spoofing, these procedures are recommended:

• Enable DHCP snooping globally.


• Enable DHCP snooping on selected VLANs.
• Enable DAI:
o Use the ip arp inspection vlan vlan-id command to enable ARP inspection on
selected VLANs.
• Configure trusted interfaces for DHCP snooping and ARP inspection (untrusted is
the default configuration).
o Use the ip arp inspection trust interface configuration command to set the
interface as a trusted interface for DAI.
In the example configuration, switch SW has globally enabled DHCP snooping. DHCP
snooping and ARP inspection are enabled for the PC VLAN, VLAN 10. The uplink, the
Ethernet 0/0 interface, is configured as trusted for DHCP snooping and ARP inspection.
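Based on that description, the configuration on switch SW could resemble the following sketch:

SW(config)# ip dhcp snooping
SW(config)# ip dhcp snooping vlan 10
SW(config)# ip arp inspection vlan 10
SW(config)# interface Ethernet0/0
SW(config-if)# ip dhcp snooping trust
SW(config-if)# ip arp inspection trust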

To view the status of the DAI configuration, the show ip arp inspection and show ip arp
inspection interfaces commands can be used. To review DAI activity, the show ip arp
inspection log and the show ip arp inspection statistics commands can be used.
Dynamic ARP Inspection in Action
The figure below shows a user with an IPv4 address of 10.0.1.2 connected through a
switch to a default gateway with an IPv4 address of 10.0.1.1. An intruder residing on an
untrusted port sends an unsolicited ARP message in an attempt to poison the MAC-to-IPv4
bindings so that all traffic from 10.0.1.2 to the 10.0.1.1 default gateway goes to the
attacker. The attacker attempts to poison the ARP cache of 10.0.1.2, so 10.0.1.2 thinks the
attacker MAC address is the MAC address of the 10.0.1.1 default gateway.
DAI examines the ARP packet and compares its information with the information in the
switch DHCP binding table. Because there is no match for the 10.0.1.1 IPv4 address to the
attacker MAC address of aaaa.1111.2345 in the DHCP binding table, the ARP packet is
dropped.

33.10 Implementing Device Hardening


Mitigating STP Attacks
In a Layer 2 network, redundant designs can mitigate the possibility of a single point of
failure, which causes a loss of function for the entire switched or bridged network.
However, redundant designs can cause unwanted problems, such as broadcast storms,
multiple frame transmissions, and MAC database instability. For that reason, STP must
be used in Layer 2 networks. It provides redundancy while protecting
against the unwanted problems that might be caused by poor design implementations.
In addition, if STP runs without appropriate protection against various STP attacks,
suboptimal paths can be introduced into the network. Such attacks can lead to man-in-
the-middle situations and can even create loops, despite STP running on the switches.
The figure illustrates how a network attacker can use STP to change the topology of a
network so that the network attacker appears to be a root bridge with a lower priority.
The attacker sends out Bridge Protocol Data Units (BPDUs) with a better bridge ID and
thus becomes the root bridge. As a result, traffic between the two switches in the figure
passes through the new root bridge, which is actually the attacker system.
Note: This attack can be used against all three security objectives of confidentiality,
integrity, and availability.
The root guard feature of Cisco switches prevents a switch from becoming a root bridge
on configured ports. The root guard feature is designed to provide a way to enforce the
placement of root bridges in the network. Root guard limits the switch ports from which
the root bridge can be negotiated. If a port where root guard is enabled receives BPDUs
that are superior to the BPDUs that the current root bridge is sending, then the port
transitions to a root-inconsistent state, which is effectively equal to an STP listening state,
and no data traffic is forwarded across that port.

Root guard is best deployed toward ports that connect to switches that should not be the
root bridge. Root guard is enabled using the spanning-tree guard root command in
interface configuration mode.
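For example, on a port facing a switch that should never become the root bridge (the hostname and interface number are illustrative):

SwitchX(config)# interface GigabitEthernet0/10
SwitchX(config-if)# spanning-tree guard root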
The figure illustrates how the attacker sends out spoofed BPDUs to become the root
bridge. Upon receipt of a BPDU, the switch with the root guard feature configured on that
port ignores the BPDU and puts the port in a root-inconsistent state. The port will recover
when the offending BPDUs stop.

34.1 Describing the Cloud and the Cisco Meraki Dashboard


Introduction
Efficiency is a requisite focus of everyday life. Being able to manage multiple applications
at once is more than a simple thing that is “nice to have”—it is essential.
Cisco Meraki has combined its knowledge in a seamless management system with the
Cisco Meraki dashboard. From this single platform, the administrator can centrally
manage the entire system seamlessly, easily, and with an unparalleled reporting structure.
Modern enterprise networks are large, and they usually consist of multiple vendors and
devices, and many device types. Managing a network of this scale, density, and diversity is
very difficult, but the entire Cisco Meraki product family resides together under one roof.
The Cisco Meraki family of products meets the needs of the modern enterprise network
for scalability, security, and visibility.
Traditional methods of systems management no longer meet the needs of the dynamic
environment. The need for scalability, security, and immediate visibility require a robust
and coherent solution. The products introduced here fulfill the needs of the modern,
innovative enterprise.

34.2 Describing the Cloud and the Cisco Meraki Dashboard


Single Pane of Glass Management
The Cisco Meraki dashboard is referred to as a single pane of glass management system.
As the administrator, this system lets you easily and quickly view the real-time status of
the entire system. You can keep a watchful eye on the entire system, and you will be
immediately notified about issues that arise so that you can respond to them quickly.
The figure introduces the Cisco Meraki product family.
At the center of the family is the Cisco Meraki dashboard.
The dashboard is the Cisco Meraki flagship product. The dashboard is the central
management and monitoring interface for all Cisco Meraki products, and it runs in the
cloud.

34.3 Describing the Cloud and the Cisco Meraki Dashboard


Cisco Meraki Full Stack Capabilities
Cisco Meraki Full Stack Services brings multiple benefits to your organization through its
easy management system. It offers simple licensing tools and models, and it can easily
scale as your enterprise grows and your needs increase.

The following are Cisco Meraki benefits.

• Deployment: Many features can be deployed quickly and easily. Rolling out
deployments is much easier.
• Cloning configurations: When creating a new network, administrators can choose
to clone the configuration for the new network from an existing network.
• Configuration templates: An administrator can make one change that can be
applied to many networks and the devices within those networks.
• Zero-touch deployment: The cloud architecture allows you to configure devices
without having the hardware. This approach is possible because configurations are
stored and managed in the cloud, so administrators can stage configurations
before they have the hardware.
Cisco Meraki solutions are very scalable and can extend to hundreds of thousands of
devices. Scaling means simply adding more devices and licenses to the dashboard.

• Management and monitoring: All management and monitoring functions happen
through the dashboard. No additional hardware is required. Only the Cisco Meraki
devices themselves are needed. Remote troubleshooting tools can be accessed
from anywhere:
o Remote cable test
o Ping and traceroute
o Reboot device
o Packet capture
• Reduced licensing overhead: Simplified licensing makes it easier to keep track of
the license expiration date. This simplification alleviates one of the headaches of
IT.

34.4 Describing the Cloud and the Cisco Meraki Dashboard


Cisco Meraki Devices and the Cloud
In its effort to reduce the overhead that constricts enterprise growth, Cisco Meraki has
introduced a system that works from the cloud, offers full functionality, and can be easily
managed and monitored using helpful diagnostic tools. Cisco Meraki devices are
onboarded easily in the cloud.
Cisco Meraki devices have a highly effective out-of-band control plane.

• Step 1: Deploy
o Cisco Meraki appliances and devices are deployed in your campus or
remote branches.
• Step 2: Connect
o Devices automatically securely connect to the Cisco Meraki cloud, register
to the proper network, and download their configurations.
• Step 3: Manage
o The centralized dashboard provides visibility, diagnostic tools, and
management of the entire network.
You simply deploy the devices.
When powered on, Cisco Meraki devices automatically establish a connection with the
Cisco Meraki cloud.
These devices come preconfigured with the hostname and IP addresses to reach the
dashboard. They also come with a certificate that is used for encryption. All management
traffic is encrypted using a proprietary lightweight encryption tunnel with Advanced
Encryption Standard (AES) 256 encryption.
The Cisco Meraki dashboard manages and monitors all devices. The dashboard is a web
interface and can be accessed via any modern web browser.
Each device creates its own management tunnel. Management traffic has a low overhead:

• Each device uses 1 to 2 kbps for management data.


• Management data consists of configuration downloads and monitoring data that is
reported back to the cloud.
• User traffic is routed normally and is not proxied through the Cisco Meraki cloud.
Note: If the Motion Detection feature is enabled on a Cisco Meraki MV camera, there is an
increase in management data with spikes of up to 200 kbps. This feature can be disabled
to bring the bandwidth consumption back down to within 2 kbps.

34.5 Describing the Cloud and the Cisco Meraki Dashboard


Benefits of a Cloud-Based Solution
Some people have concerns about hosting their applications in the cloud because of
historical security issues. Is it safe? Can anyone see my data? Will I lose control of my
data? Over time, those questions, and many more, have been answered by using robust,
secure solutions. One such answer is the Cisco Meraki solution.
The cloud is sometimes explained as “someone else’s computer.” This definition may be
meant to be humorous, but it is important. “Someone else” now addresses redundancy,
availability, reliability, and scalability concerns. There is no controller hardware to
manage, and no controller software to worry about. You only need an Internet connection
and a web browser to manage your Cisco Meraki equipment.
This cloud is a collection of highly reliable multi-tenant servers distributed around the
world at Cisco Meraki data centers. Customer management data is replicated across
independent same-region data centers in real time. The same data is also replicated in
automatic nightly archival backups. The Cisco Meraki cloud does not store any customer
user data.
Take a look at some common customer questions.
Common questions about the architecture

• Security
o Does my network traffic flow through the Cisco Meraki cloud
infrastructure?
• Reliability
o What happens if the devices cannot access the Cisco Meraki cloud?
• Future-Proof
o How do firmware upgrades work? How often do I get new features?
• Scalability
o How do I scale?
Now, take a look at some Cisco Meraki answers.

• Security: User traffic never touches the Cisco Meraki cloud, so the cloud
infrastructure is Health Insurance Portability and Accountability Act (HIPAA)- and
Payment Card Industry (PCI)-compliant.
• Reliability: Cisco Meraki cloud has been achieving 99.99 percent uptime. Cisco
Meraki has globally distributed data centers. If a device loses connectivity to the
cloud, it is most often due to an upstream issue such as an ISP outage, Layer 1
issue, or firewall rules on a third-party device.
In these situations, Cisco Meraki devices continue to pass user traffic because they
store the last known configuration locally. Devices have a locally hosted configuration
page used to make basic configuration changes such as uplink settings.

• Future-proof: Firmware upgrades are scheduled and pushed via the
dashboard. No onsite presence is required. Cisco Meraki has three firmware
tracks: stable, stable release candidate, and beta. The firmware versions are
chosen on a per-network basis and can be chosen through the dashboard.
Firmware upgrades can be scheduled at a specific date and time, can be
manually activated, or can be turned off.
• Scalability: Cisco Meraki manages back-end scalability, so if you want to scale a
solution, you simply add more devices and licenses to the dashboard.
34.6 Describing the Cloud and the Cisco Meraki Dashboard
Cisco Meraki Dashboard Organizational Structure
The dashboard is organized into groups that are called organizations and networks. The
Cisco Meraki dashboard seamlessly brings together multiple solutions under one
umbrella. From this single and elegantly simple platform, you can control, manage,
monitor, and resolve every aspect of the entire network.

The Cisco Meraki dashboard is organized as follows:

• Organizations: Organizations contain dashboard networks, device licensing, device
inventory, and other high-level settings. Generally, a company will most often have
a single dashboard organization.
• Networks: Networks contain Cisco Meraki devices, their configurations, and
statistics and analytics. A common approach is to separate devices into networks
based on their physical location. For example, if a company has an office in Chicago
and an office in San Francisco, devices at those locations would be assigned to
separate dashboard networks. Networks can be separated according to their
device type. Combined networks are the most common type of network, but an
administrator could choose to divide a location into separate networks for each
type of Cisco Meraki device.
Note: Note that the event correlation may become a challenge if you combine similar
devices that reside across larger geographic locations. For this reason, you should group
devices into a single network within the same time zone.

• Dashboard accounts: Dashboard accounts access dashboard organizations and
networks. Dashboard accounts or administrators can be assigned different access
privileges at the level of organization and at the level of individual networks.
A dashboard account privilege defined at an organization level applies to everything in the
dashboard, including every network in the organization. The exception is those networks
where a network-specific account privilege has been defined.
Administrators can be given the following access to organizations:

• Full: An administrator with full organization access can view and make changes to
any dashboard network in the organization.
• Read-only: An administrator with read-only organization access can see and view
everything, but cannot make any changes.
• None: An administrator can be configured to have no organization-level access and
can instead be granted access to individual networks.
Network privileges can be used to restrict access to a network. Network-level privileges
allow an administrator to view or configure the networks and their devices for which they
have privileges assigned.
The following network access privileges are available:

• Full: An administrator with full network-level access can make changes to anything
in that network.
• Read-only: An administrator can view the configuration of this network but cannot
make any changes.
• Guest ambassador: An administrator can only see the list of Cisco Meraki
authentication users, can add users, can update existing users, and can authorize
or deauthorize users on an SSID or Client VPN.
• Monitor-only: An administrator can only view a subset of the Monitor section in
the dashboard and cannot make any changes. A monitor-only administrator can view
summary reports but cannot schedule reports via email in the dashboard.

• Network tags: Network tags can be used to assign privileges to administrators. For
example, 30 networks could be assigned a tag of IT Admin. An administrator could
then be given permission for the IT Admin tag, which means that the administrator
would then be able to configure all 30 of those networks. If tags are created and
assigned to networks based on roles, role-based access can be provisioned for
administrators. Cisco Meraki tags should not be confused or used interchangeably
with the traditional 802.1Q tag in a traditional Layer 2 Ethernet header.
Tags are also used for reporting purposes when generating summary reports for specific
networks or device groups.

34.7 Describing the Cloud and the Cisco Meraki Dashboard


Multiorganizational Structure
A dashboard account can have access to more than one organization. Enterprises are not
always simple structures. Some are incredibly complex and have multiple needs for each
aspect of the business. The Cisco Meraki multiorganizational structure provides a solution
for managing even the most complex arrangement. Each aspect can be individually
controlled, managed, and monitored.

It is uncommon for a single organizational entity, such as a company or school district, to
have more than one organization. Multiple organizations are more common with
Managed Service Providers (MSPs). MSPs typically have one organization for each of their
customers.
MSP Portal
If an administrator has permissions for more than one organization, upon signing in, they
will see a page in the dashboard that is called the MSP Portal. Here, they can view license
expiration dates for every organization and view high-level information such as network
health.
34.8 Describing the Cloud and the Cisco Meraki Dashboard
Licensing
Understanding licensing with Cisco Meraki is very important. Cisco Meraki has created
simple, easy-to-manage licensing models to help enterprises keep licensing up to date and
to avoid complexity. The streamlined licensing models keep track of licenses and their
associated products, and notify you if any problems occur or when licenses are nearing
renewal.

The Cisco Meraki solution has a simple licensing model. There is a 1:1 ratio of hardware to
license. If you have 20 access point (AP) devices, you need 20 AP licenses—it is that
simple.
The dashboard license includes the following:

• New features via firmware upgrades


• 24-by-7 support
• Lifetime warranty (Lifetime refers to the expected lifetime of the device; see the
Cisco Meraki website for details. The lifetime warranty excludes the outdoor APs,
which have a one-year warranty, and Cisco Meraki MV cameras, which have a
three-year warranty.)
Licensing Models
Customers can choose between two licensing models: cotermination licensing and per-
device licensing. Customers who are using cotermination licensing mode can choose to
migrate to per-device licensing, but it is not possible to then migrate back to
cotermination licensing.
Cisco Meraki cotermination uses a weighted average that takes into account the weight of
the license and the license term. Cisco Meraki offers 1-, 3-, 5-, 7-, and 10-year license
terms.
Cotermination Case Study
You will see an example of cotermination licensing in the following figures.

In January, the customer buys some Cisco Meraki devices and licensing.
The Cisco Meraki licensing time starts counting when the Cisco Meraki solution provides
the license key to a customer.
In the figure, the customer bought 20 devices, each with 12 months of licensing.

This figure shows that the customer delayed installation and activation for one month. At
this point, the customer has 11 months of licensing left.
Time passes, and the customer is happy. In this figure, the end of June has arrived.

In this figure, in July, the customer adds 20 more devices, each with 12 months of
licensing. The cotermination algorithm adjusts the total licenses to 40 with nine months
remaining. (In this simplified example, the devices are all equal. There is a weighting value
that adjusts for a higher-end device versus a lower-end device).
If a customer needs to replace a device using a Return Materials Authorization (RMA), a
case must be opened with support. Support will verify that the device needs to be
replaced and will then replace it with a new device. Replacement is typically accomplished
on the next business day. In the figure, the customer returned a faulty device, and
replaced it in September. The customer deletes the serial number of the old device from
the dashboard and adds the serial number of the new device. (Because licenses are not
tied to serial numbers, it is easy to have hardware cold spares.)

The figure shows that the customer is approaching the final 30 days on their license, so
they need to add a renewal license.

The dashboard licensing has now expired, and a renewal license needs to be added.
If the dashboard licensing is not renewed before the cotermination expiration date, the
organization enters a grace period. The figure illustrates this situation.

The grace period provides an extra 30 days, as shown in the figure. If dashboard licensing
is not renewed before the grace period ends, Cisco Meraki devices stop passing traffic and
administrators lose the ability to make configuration changes.
Per-Device Licensing Features
A new per-device licensing model is now available to give greater flexibility.

The new per-device licensing model includes the following features:


• Partial renewals: Enjoy the ability to renew all your devices or a subset of devices
as you prefer.
• Move licenses between organizations: An Org Admin (read/write) on multiple
organizations can move a license (or licenses and devices together) between those
organizations without calling in to Cisco Meraki support. This functionality is
available through the dashboard and application programming interfaces (APIs).
• 90-day license activation window: You will have up to 90 days to claim and assign
your licenses before they activate, which gives you more time to deploy Cisco
Meraki products before your licenses consume time.
• APIs: APIs are available to claim, assign, and move licenses, which allows a greater
level of automation and the ability to integrate with other systems.
• Individual device shutdowns: If a license expires on a device, the Cisco Meraki
solution will only shut down that device or product (after the 30-day grace period).
In addition, the following features are available:

• One-day SKU: This new stock-keeping unit (SKU) type enables fine-tuning of
expiration dates.
• License devices individually: Assign a license to a specific device (Cisco Meraki MR
wireless AP, MS switch, MX security appliance, MV camera, MG cellular gateway)
or a network (with vMX and SM licenses) and maintain a shared expiration date or
separate expiration dates across devices, networks, or organizations.
Per-Device Case Study
This figure illustrates a per-device licensing case study.

Customers can now choose a per-network expiration date.


In the example in the figure, Networks A and B have two different expiration dates.
During a renewal, customers can now choose to renew only a subset of devices, even
within a network.
In the example, the customer chooses to renew only an AP in Network C.
Cotermination vs. Per-Device Licensing
Look at a comparison of cotermination and per-device licensing.

Some of the available add-on licenses are Cisco Meraki MR Advanced license, Cisco Meraki
MS Advanced license, Cisco Meraki MX Advanced Security license, and Cisco Meraki
Secure SD-WAN Plus license.
35.1 Describing Cisco Meraki Products and Administration
Introduction
The Cisco Meraki solution is a suite of products that simplifies life for administrators. These products work together to keep enterprise networks running efficiently, and they are easy to set up, deploy, manage, monitor, and troubleshoot.
Each product offers a feature set that goes well beyond basic functionality, combining rich capabilities with ease of use. Many administrators who adopt these products wonder how they ever managed without them.

35.2 Describing Cisco Meraki Products and Administration


Cisco Meraki MX Security and SD-WAN Appliance
The Cisco Meraki MX security appliance is a feature-packed multifunctional security and
software-defined WAN (SD-WAN) enterprise appliance. It is robust, secure, resilient, and
highly customizable. This product is a one-box solution for security, network connectivity (including SD-WAN and Auto VPN), and application control. The Cisco Meraki MX security appliance is usually deployed at the edge of the customer's network, where the network connects to the wider IP world.

Because of its broad range of security and network connectivity features, the MX is well suited to the customer edge, and its unified threat management features help keep the network safe.
Features and Components
The Cisco Meraki MX security appliance includes security and application control feature
sets.
Next-generation firewall (NGFW) has the following components:

• Threat protection with Cisco Advanced Malware Protection (AMP) cloud


• Next-generation intrusion prevention system (IPS) using the Snort engine
• Content filtering that is based on BrightCloud category referencing
• Stateful firewall
• Security features that are cloud-managed for fast remediation
• Country-based geolocation rules
• Long-Term Evolution (LTE) backup connection via either a third-party dongle or built-in LTE
Auto VPN and SD-WAN have the following features:

• Auto VPN for cloud-brokered provisioning of IPsec tunnels


• Transport Independence
• Secure connectivity
• Intelligent path selection
• SD-WAN functionality that optimizes traffic over VPN tunnels
• SD-WAN allows dynamic traffic load balancing based on changing WAN conditions
• Dual uplink support and easy management of a redundant ISP link
• Redundant link that can be used for failover or have a dynamic load-balancing
policy applied

35.3 Describing Cisco Meraki Products and Administration


Cisco Meraki MS Switches
Cisco Meraki MS switches are versatile, capable, and fast, and they handle complex requirements with ease. Some models also support dynamic routing, all while maintaining a high level of security. These switches include several built-in troubleshooting tools that help keep the network running smoothly.
Features
The Cisco Meraki MS switches have all the traditional features and provide several
important additional features:

• Packet capture tool to observe live network traffic passed by Cisco Meraki devices
• Enterprise security features
• Cable test tool to test the integrity of the cable
• Live troubleshooting tools
• Virtual stacking to easily push configuration to hundreds of ports in the network
regardless of where the switches are physically located
• Topology page
• Port security
• Inherent visibility, which is the biggest differentiator
• Layer 7 visibility into data flowing through clients
• Voice and video quality of service (QoS)
The following features are hardware-dependent, require specialized hardware, and are
not supported on every model:

• Physical stacking
• Cisco Meraki StackPower redundant power feature
• DHCP server functionality
• Multigigabit Support
• Universal Power over Ethernet (UPoE)
• Dynamic routing (Open Shortest Path First [OSPF])
Newer Cisco Meraki MS switch models now scale into high-performance distribution and
aggregation switches.
35.4 Describing Cisco Meraki Products and Administration
Cisco Meraki MR Wireless APs
The Cisco Meraki MR wireless access point (AP) was the first Cisco Meraki product to launch, so its feature set has been refined over many years. Various models deliver solutions for a range of user needs, from basic coverage to high-usage, high-density requirements. Cisco Meraki MR wireless APs offer advanced features and can be customized to your exact requirements.

Models range from basic coverage to high-end, high-density, and stadium and outdoor
wireless networks.
Cisco Meraki AP features include the following:

• Application traffic shaping by prioritizing critical applications over high-bandwidth consumers
• Guest access
• Self-healing, zero-configuration mesh that automatically detects other APs and selects the best route to a wired gateway
• Wireless health engine that identifies anomalies impacting end users' experience
Most AP models have a third radio that is called the scanning (or security) radio. This extra
radio enables full-time wireless intrusion prevention system (WIPS) with Air Marshal, Auto
RF, and Location analytics.
35.5 Describing Cisco Meraki Products and Administration
Cisco Meraki Systems Manager Endpoint Management
The Cisco Meraki Systems Manager Endpoint Management solution is designed to manage
and control end-user devices such as mobile and desktop machines. In cases where dynamic provisioning or onboarding is required, this system lets you quickly and seamlessly keep the network secure while providing the coverage needed to meet end-user needs. It also offers the visibility to troubleshoot these
devices if needed. Most common platforms are supported, which gives visibility into
device behavior and allows profiles to be pushed down to devices.

Note: MDM stands for Mobile Device Management.


These profiles can disable functions or applications and hide features. Profiles are also useful for staging; for example, they can download an SSID configuration and enable auto-join.
There are many granular toggles:

• Disable specific operating system features, hide applications, and enforce restrictions
• Enforce security policies
• Geofences allow an administrator to be notified when a device is stolen
• Remote live tools for control
• Remotely wipe devices
• Take remote screenshots
• Shut down and reboot
• Push applications and profiles remotely
• Push certificates to devices
Cisco Meraki Systems Manager includes Cisco Meraki SM Sentry features, which unify Cisco Meraki Systems Manager with Cisco Meraki network solutions such as wireless and security.
The Cisco Meraki SM Sentry feature set includes the following:

• Cisco Meraki SM Sentry Enrollment: Only allow devices managed with Cisco
Meraki Systems Manager to access the network. With zero-touch deployment, the
unmanaged devices can install and enroll with Cisco Meraki Systems Manager to
gain access to the network. Cisco Meraki SM Sentry Enrollment is supported on
Android, Apple iOS, Apple macOS, and Microsoft Windows devices and enables
employee self-service for securing BYOD devices.
• Cisco Meraki SM Sentry VPN: VPN settings can be automatically provisioned to connect managed devices to a Cisco Meraki MX security appliance hosting client VPN. Changes to VPN configurations on the Cisco Meraki MX side are automatically reflected in Cisco Meraki Systems Manager without any manual action needed.
• Cisco Meraki SM Sentry Wi-Fi: Wi-Fi settings are provisioned automatically to connect managed devices to a Cisco Meraki MR wireless network. If a connected device fails security compliance, Cisco Meraki Systems Manager can automatically revoke device access to the network.
• Cisco Meraki SM Sentry Policies: Network settings such as firewall rules, traffic
shaping policies, and content filtering can be dynamically changed, controlled,
updated, and remediated automatically.

35.6 Describing Cisco Meraki Products and Administration


Cisco Meraki MV Security Cameras
Cisco Meraki MV security cameras are designed to help keep your environment secure. They constantly watch for issues and are themselves built with a high level of security. They offer numerous features that make deployment simple and effective, while allowing you to customize the deployment to meet the exact needs of the organization. These products also make it quick and easy to examine and investigate any incidents that the cameras detect.
Cisco Meraki MV cameras use an edge architecture and store footage on a local solid-state drive (SSD), which means that no network video recorder (NVR) or centralized storage is required. Camera footage can be streamed directly when the viewer is local to the camera, or through the cloud when the viewer is remote from the camera.
Cisco Meraki MV cameras include many features that contribute value to multiple teams
within an organization:

• Retroactive motion event searching


• Analytics including object detection and motion heatmaps
• Hardware designed for different deployments
• Indoor and outdoor models
• Infrared (IR) illumination for dark scenes
• Optical lenses and fixed lenses
• Wireless connectivity
• Pan-tilt-zoom (PTZ)-like functionality available
• Cisco Meraki MV Cloud Archive off-camera cloud storage (requires an extra
license)

35.7 Describing Cisco Meraki Products and Administration


Cisco Meraki Insight Web-Based Application Analytics
Cisco Meraki Insight Web-Based Application Analytics analyzes application performance. As its name suggests, it provides insight into how applications perform and operate.
This visibility makes Insight a powerful tool: it opens a view into applications, services, and performance so that you can closely monitor their activity.
Insight acts as a watchful gatekeeper for the network. It reports anomalies and records that data in graphic form for you to use as required. It enables you to respond to and mitigate problems quickly, efficiently, and in real time, avoiding more serious issues that could otherwise occur.
Cisco Meraki Insight runs in the cloud and uses the Cisco Meraki MX security appliance only as a data collector.
It can highlight poor WAN operation and packet loss. It alerts you when set thresholds are
exceeded. It backs up insight information with graphs and data that can be very helpful
when, for example, you are trying to convince your ISP that they are not meeting service
level agreements (SLAs).
Cisco Meraki also introduced the VoIP Health feature, which monitors uplink performance for VoIP traffic. Monitoring occurs via
ICMP probes from the Cisco Meraki MX to the VoIP endpoint every second. You can use
this information to estimate the call quality along each path. The results are provided
based on the mean opinion score (MOS) of the link.

36.1 Describing Cisco Meraki Troubleshooting


Introduction
Cisco Meraki offers a suite of utilities and tools to help users navigate, manage, monitor,
and troubleshoot its products and platforms. As you would expect from a well-developed
suite of integrated products, the available tools offer features that make life easier for the
administrator and offer the level of robust features that today’s enterprise customers
expect to address complex problems.

36.2 Describing Cisco Meraki Troubleshooting


Cisco Meraki Dashboard Sync and Real-Time Tools
Cisco Meraki Dashboard is a convenient front end for managing Cisco Meraki products.
There is an expected delay between configuring a device and the configuration arriving at
the device. This delay is partly due to cloud operation and partly due to the system following its required steps in sequence to ensure valid delivery.
On the other hand, the system offers several real-time commands, such as ping, DNS test, and traceroute, which run immediately. The ability to issue these commands in real time more than makes up for the small delay caused by configuration update synchronization.

Because the dashboard operates from the cloud and must apply steps in an ordered fashion, configuration changes take some time to filter down to the devices. Be patient while the changes are validated and downloaded safely to your devices; this delay is normal and expected.
Configuration updates that are made in the dashboard can have a 1- to 2-minute delay.
If a device is powered off and on, it could take 3 to 5 minutes for it to reboot and
download its configuration. The length of the delay is device dependent. Similarly, if a port is reset, it can briefly interrupt PoE, so any attached powered devices may take time to power on, reboot, and download their own configuration.
Live tools, however, do not have any significant delay and interactions should be nearly
instantaneous.

36.3 Describing Cisco Meraki Troubleshooting


Cisco Meraki Monitoring and Troubleshooting Tools
Cisco Meraki offers monitoring and troubleshooting tools that are valuable when
troubleshooting a problem. The alerting system notifies you of the location and nature of anomalies, which allows you to immediately begin an investigation, determine whether the issue is serious, and take early steps to mitigate or eliminate the problem.
These tools should be considered your go-to tools when you start troubleshooting, and
they can supply valuable information that can shorten the fault resolution process.
The following are some of the tools in this category:

• Topology View: The Topology view displays the status of devices and their
connectivity position in the network. Both their position and status icons are
dynamic. A device is green if all is good, red if there is a failure, and yellow if the
device is alerting. A simple issue like VLAN mismatch or DNS failure can be resolved
quickly and easily by checking the colors in the Topology view. The Cisco Meraki dashboard Topology view is highly valuable to operators because it maintains always up-to-date network and subnet topology diagrams without any manual input.
Note: You must deploy at least one Cisco Meraki MS switch in the network for the
Network Topology feature to be available. Cisco Meraki switches act as collectors of CDP
and LLDP information to build out topology, and therefore Topology View is only available
in deployments where Cisco Meraki switches are used.
The topology view may need up to 30 days to accurately reflect nodes and links that are
disconnected from the network.
Cisco Meraki offers Layer 2 and Layer 3 Topology views. Layer 2 Topology view displays
the physical topology of the network. Layer 3 Topology view displays the logical (subnet)
topology of the network.
The Topology view also shows other LLDP- or CDP-enabled Cisco and third-party devices
that are one hop away from a Cisco Meraki switch.

• Packet Capture: The Packet Capture tool is one of the most powerful tools and can
help in troubleshooting Cisco Meraki systems. It is available by default, with no
need for additional configuration. The Packet Capture tool gives you visibility into
the raw traffic running on the wire (or even wireless). You can capture packets
almost anywhere on the Cisco Meraki full stack. You can display captures directly
in the dashboard or export a pcap file to analyze in other tools (such as Wireshark).
Packet captures can also help Cisco Meraki support quickly identify and resolve a
problem. The Packet Capture tool allows you to set different capturing options for
different devices. For example, you can select which port or interface the packet
capture should run on, define capture output (pcap file or display), define if you
want to ignore broadcast or multicast traffic, set verbosity level of displayed
capture, or apply different capturing filters.
The Event Log and Change Log are enabled by default and hosted by Cisco Meraki in the
cloud.

• Event Log: The Event Log displays network events on a network-wide basis. These
events can include wireless client issues, spanning tree issues, and port flapping
issues. Also, RADIUS password issues will appear here. Logs can be valuable
sources of data during the troubleshooting process.
You can access the Event Log in Cisco Meraki dashboard under Network-wide > Monitor >
Event log.

• Change Log: The Change Log records configuration changes on an organization-wide basis. For example, the Change Log will show if an SSID has changed. The Change
Log adds accountability by identifying who changed what and when. The Change
Log shows the administrator who made the change, the old configuration, and the
new configuration.
You can access the Change Log in Cisco Meraki dashboard under Organization > Monitor >
Change Log.

36.4 Describing Cisco Meraki Troubleshooting


Integration of Cisco Meraki Monitoring and Troubleshooting Tools in
Existing Systems
The Cisco Meraki solution makes a great effort to ensure that its suite of products
seamlessly integrates with relevant third-party tools and utilities. From logs to
authentication, splash pages, and other features, the Cisco Meraki suite of products
integrates with other products to deliver a comprehensive solution.
The following logging options are supported with the Cisco Meraki solution:

• Syslog
• Simple Network Management Protocol (SNMP)
• SNMP traps
• NetFlow visibility
The following encryption and authentication options are supported with the Cisco Meraki solution:

• RADIUS for IEEE 802.1X and Wi-Fi Protected Access 2 (WPA2) enterprise (for
selected switches, access points, small branch security devices)
• Cisco Meraki Authentication, a user-defined username/password RADIUS server hosted in the dashboard
• On the wireless side, Wired Equivalent Privacy (WEP), identity pre-shared key (PSK) with RADIUS, and identity PSK without RADIUS authentication options are also supported.
There are several options for splash page authentication:

• Facebook
• Google auth
• Lightweight Directory Access Protocol (LDAP)
• RADIUS
• Cisco Meraki proprietary authentication
• Microsoft Active Directory
• MAC address-based authentication
Cisco Meraki can also be integrated with:

• Cisco Identity Services Engine (ISE) for RADIUS authentication and accounting,
change of authorization, central web authentication
• Cisco DNA Center as a unified monitoring and assurance platform

36.5 Describing Cisco Meraki Troubleshooting


Application Programming Interfaces
The Cisco Meraki solution allows you to automate the tasks of extracting analytics and
monitoring and controlling devices by using APIs.
APIs are the best way to automate time-consuming and error-prone repetitive manual
tasks. Integrated API capability in the dashboard provides methods to develop custom
tools for additional time-saving and labor-saving features that increase efficiency and
reduce the potential for user errors caused by manually performing repetitive tasks.
Cisco Meraki APIs allow you to extract analytics and monitor and control devices using
standard API functionality. They also allow you to perform some Cisco Meraki dashboard tasks from external third-party tools, such as adding new organizations, users, networks, devices, VLANs, and SSIDs.

Three notable Cisco Meraki API categories are available:

• Dashboard API: This API is used to pull and push information and configurations to and from the dashboard. It allows you to extract device statuses and post configurations; for example, you can export the serial numbers of the devices in a network or even create a network from scratch. (A minimal request sketch, written in Python, follows this list.)
• Scanning API: This API exports location analytics data (such as data collected by the AP Bluetooth radio) from the dashboard to a third-party application or server. The Scanning API is often used to export that data to Cisco DNA Spaces or other third-party software to analyze footfall and provide valuable market analytics and usage statistics for physical spaces.
• Captive Portal API: This API extends the power of the built-in Cisco Meraki splash process. It integrates third-party splash page tools that may offer more flexibility than the splash page natively available through the Cisco Meraki dashboard.
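To make the Dashboard API concrete, here is a minimal sketch in Python that lists the organizations an API key can administer. It assumes the v1 REST base URL (https://api.meraki.com/api/v1), bearer-token authentication, and the /organizations endpoint described in the Meraki API documentation; the API key value is a placeholder, and production code would also handle rate limiting and pagination.

import requests

BASE_URL = "https://api.meraki.com/api/v1"  # assumed v1 base URL
API_KEY = "<your-dashboard-api-key>"        # placeholder only

def list_organizations():
    """Return the organizations visible to the API key."""
    response = requests.get(
        f"{BASE_URL}/organizations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()  # raise on HTTP errors such as 401 or 429
    return response.json()       # a list of {"id": ..., "name": ...} objects

if __name__ == "__main__":
    for org in list_organizations():
        print(org.get("id"), org.get("name"))
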
Cisco Meraki Marketplace at https://apps.meraki.io/ offers you an extensive catalog of the
applications that were developed on top of the Cisco Meraki platform by the Cisco Meraki
technology partners. The marketplace allows customers and partners to view, demo, and
deploy solutions.

36.6 Describing Cisco Meraki Troubleshooting


How to Work with Cisco Meraki Support
Cisco Meraki support engineers are highly trained and offer a “one-stop support shop” for
resolving problems. Customers no longer have to experience being passed around from
engineer to engineer and department to department. Cisco Meraki offers highly
experienced and proficient support staff.
The engineers are trained specifically to complement one another and to help each other when a problem is too complex to solve alone.

• Phone support at Cisco Meraki support centers is always staffed for timely, one-
on-one case management.
• Online support cases that are opened via email or the dashboard allow Cisco
Meraki support to quickly locate and solve issues.
• Ongoing cases can be managed, updated, or audited directly in the dashboard
(Help > Cases).
Cisco Meraki support agents can access your systems and dashboard with your
permission. They can show you what the problem is, rather than simply telling you,
without having to start up remote control software.
Your dashboard license includes 24-hour support.

37.1 Brief History of Telephony


Introduction
The way the world works is changing. Cisco collaboration tools are an integral part of
changes many organizations are making towards smarter ways of working. This section
provides an overview of the different components of the collaboration architecture from
Cisco.

37.2 Brief History of Telephony


Define Collaboration Benefits
Collaboration: "the situation of two or more people working together to create or achieve the same thing." (Cambridge Dictionary)
Cisco Collaboration provides tools for people to collaborate effectively regardless of
distance.
The world today is even more connected and engaged across regional and worldwide
boundaries. In order for companies and employees to stay on top of their game, they
need to be able to engage and communicate in real-time with their peers and customers.
Here are some of today's most important business imperatives:
1. Improving customer experience
2. Greater operational efficiency
3. Growing revenues
4. Growing market share
5. Improving product/service innovation
Today, businesses focus on improving customer experience, achieving greater efficiency, growing revenue and market share, and improving product and service innovation. Cisco collaboration can play a strategic role in all these areas.

But today's digital economy demands greater agility. Organizations are challenged by
rapidly changing technology as well as business trends and industry disruptions. The
digital economy accelerates the pace of change and innovation. It places new demands on the business to stay competitive and to engage with customers, employees, and new and emerging ecosystems.
Cisco's collaboration portfolio is designed to help organizations meet these challenges
head-on by providing seamless connectivity between customers, partners, and employees
wherever they are with all the tools they would have access to if they were in the same
room.
The COVID-19 pandemic has forced many people to work from home away from
colleagues. Many companies realize just how important collaboration technologies can be
when a workforce is dispersed. The "new normal" will very likely see an upturn in mobile
and home workers long-term. Good, reliable collaboration tools are essential to maintain
communication with colleagues, partners, and customers.

37.3 Brief History of Telephony


Describe On-Premises, Cloud, and Hybrid Deployments
Collaboration Deployment Models: On-Premises
More and more customers are moving IT services to the cloud, and collaboration is no
different. Collaboration applications and services may be delivered solely on-premises,
solely in the cloud, or more commonly in a combination known as a hybrid deployment.

On-premises deployments are where collaboration applications are deployed within the
enterprise premises to provide voice and video calling; text, voice, and video messaging;
presence; and video conferencing and desk, screen, and content sharing.
Collaboration Deployment Models: Cloud
In the case of cloud deployments, collaboration services delivered from the cloud include
voice and video calling, messaging, and meetings with video, as well as content and screen
sharing. Webex is a cloud-based service used for delivering these services. In a cloud
deployment, end-user devices such as phones and telepresence systems are still located
on the customer site.
Additional cloud implementations of collaboration can include collaboration platform-
based services as provided by third-party managed service providers and integrators that
deliver traditional on-premises collaboration applications and services from the cloud.
Cisco Hosted Collaboration Solution (HCS) is an example of this type of cloud platform-
based service.
Collaboration Deployment Models: Hybrid

Enterprises that want the benefits of both on-premises services (such as protecting existing investment and high-quality voice and video calling) and cloud services (such as continuous delivery or mobile and web delivery) most often implement hybrid deployments, combining on-premises and cloud-based collaboration applications and services.

37.4 Brief History of Telephony


Describe On-Premises Collaboration Deployments
Cisco on-premises collaboration devices include call control devices, application devices,
edge devices, conferencing devices, and endpoint devices.
Cisco collaboration components can be broken down into five areas.

• Call Control: A call control device is responsible for routing calls and maintaining the connection between two endpoints. Cisco call control devices also provide a number of other services, such as bandwidth management, endpoint registration, phone feature management, directory services, and call admission control.
• Collaboration applications: Applications include voicemail, instant messaging,
presence, and contact center services.
• Edge: Edge devices manage connectivity outside of a company. These connections
may be to home workers, offices not connected by VPN, external customers and
partners via the internet, connections to Session Initiation Protocol (SIP) service
providers, and connections directly to the telephone network.
• Conferencing: Conferencing devices provide multiparty connections into a single
conference. These multiparty conferences may be voice only or video.
Conferencing resources can exist as separate devices or within other collaboration
devices.
• Endpoints: Cisco endpoints include everything from software running on computers and smartphones to fully integrated room systems.

37.5 Brief History of Telephony


Describe Cisco Collaboration Endpoints
Cisco Collaboration endpoints are changing all the time. Always consult www.cisco.com
for the latest information. Collaboration endpoints are available for different environment
settings, from phones on desks to immersive Cisco TelePresence rooms. These endpoints
are grouped into different categories.

• IP phones: Wired and wireless phones. Capabilities can include color or monochrome displays, built-in cameras, and specialty models for conference rooms, reception desks, and other nondesk locations.
• Desktop endpoints: Designed for a single user at the desk. They are HD capable, can be used as a monitor, and support screen sharing.
• Room endpoints: Designed for conference rooms, room endpoints come as a fully
integrated system including monitors and stands, a kit version that can be added
to an existing monitor, or a Webex Teams Board, which is essentially a whiteboard
with integrated Cisco TelePresence capabilities.
• Mobile endpoints: Collaboration clients for mobile devices include Cisco Jabber and Cisco Webex Teams.
• Integrator solutions: The Cisco Webex Room Kit Pro is designed for larger custom
rooms such as auditoriums and boardrooms.

37.6 Brief History of Telephony


Describe Cisco Collaboration On-Premises Call Controllers
Cisco call control devices, including Cisco Unified Communications Manager, Cisco
Expressway, and Cisco Unified Communications Manager Express, provide the following
features to endpoint devices.

• Call processing: Setting up and tearing down of calls, including the routing of
media channels and negotiation of codecs.
• Endpoint registration: Endpoints registered to the call control device are listed in a
database mapping user-facing names and numbers to IP addresses. Cisco Unified
Communications Manager and Cisco Unified Communications Manager Express
also provide endpoint devices with configuration files.
• Phone Feature Administration: The features available depend on the call control
device, but as an example, Cisco Unified Communications Manager administers
features such as Extension Mobility, Device Mobility, Call Park, and Call Pickup, to
name a few.
• Directory Services: Ability for users to access a directory of users rather than
remember each individual directory number. External directory services can also
be referenced.
• Call Admission Control: A mechanism that can control which users or devices can
call other users or devices. Call Admission Control (CAC) can also be used to allow
access to external resources to certain users, such as access to conferencing
capabilities.
• Call Routing Control: How calls are routed to services outside the call-processing
device. These services could be voicemail, contact center tools, other call-
processing devices within the organization, or gateways to other types of networks
such as Public Switched Telephone Network (PSTN), SIP service providers, or the
internet.
• Bandwidth Control: A feature that controls how much bandwidth a call is allowed
to use. This can be set per call and controlled per link. For example, calls between
site one and site two cannot exceed a set limit.
Collaboration Protocols
Cisco Collaboration uses a number of protocols for communication.

SIP: SIP is a protocol for the registration of devices and the initiation, management, and
termination of real-time sessions, such as voice and video, over IP networks. SIP was
developed by the IETF. SIP is becoming the default standard for voice and video.
H.323: H.323 is also a protocol for the registration of devices and the initiation,
management, and termination of real-time sessions. H.323 is an older standard developed
by the ITU based on the H.320 standard used for video conferencing over ISDN networks.
SCCP: Skinny Client Control Protocol (SCCP) is a Cisco proprietary protocol for the
registration of devices and the initiation, management, and termination of real-time
sessions.
MGCP: Media Gateway Control Protocol (MGCP) is used by Cisco Unified Communications Manager to control remote gateways.
Connectivity between Cisco devices generally uses the SIP protocol. Cisco phones can also
use SCCP, but any third-party phones and Cisco TelePresence devices all use SIP. Cisco
Expressway supports both H.323 and SIP endpoints. Connections between Cisco Unified
Communications Manager and voice gateways can use SIP, H.323, or MGCP.
Call Signaling and Media Flow
Cisco Unified Communications Manager uses different signaling protocols to communicate
with Cisco IP phones for call setup and maintenance tasks, including SIP and SCCP. After
the call setup is finished, media exchange normally occurs directly between Cisco IP
phones using Real-Time Transport Protocol (RTP) to carry the audio and potentially video
stream.
In the figure, User A on IP phone A (left device) wants to make a call to IP phone B (right
device). User A enters the number of User B. In this scenario, dialed digits are sent to Cisco
Unified Communications Manager (Cisco Unified CM), which performs its main function of
call processing. Cisco Unified Communications Manager finds the IP address of the
destination and determines where to route the call.
Using SCCP or SIP, Cisco Unified Communications Manager checks the current status of the called party's phone. If the called phone can accept the call, Cisco Unified Communications Manager sends the called party details and signals ringback to the calling party to indicate that the destination is ringing.
When User B accepts the call, the RTP media path opens between the two devices. User A
and User B may now begin a conversation.
Cisco IP phones require no further communication with Cisco Unified Communications
Manager until either User A or User B invokes a feature, such as a call transfer, call
conferencing or call termination.
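To give a sense of what SIP signaling looks like on the wire, the following sketch prints a minimal SIP INVITE request line and headers (the message that starts a call, as defined in RFC 3261). The users, hosts, tags, and branch value are placeholders, and a real INVITE also carries an SDP body describing the RTP audio or video streams to be set up.

# Illustrative only: the values below are placeholders, and the SDP body is omitted.
invite = "\r\n".join([
    "INVITE sip:userB@example.com SIP/2.0",
    "Via: SIP/2.0/UDP phoneA.example.com:5060;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    'From: "User A" <sip:userA@example.com>;tag=1928301774',
    "To: <sip:userB@example.com>",
    "Call-ID: a84b4c76e66710@phoneA.example.com",
    "CSeq: 314159 INVITE",
    "Contact: <sip:userA@phoneA.example.com>",
    "Content-Length: 0",  # zero because the illustrative SDP body is omitted
])
print(invite)
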

Cisco Unified Communications Manager is the core call-processing platform for most on-
premises customers and offers the largest number of features.

• Call processing: Call processing refers to the complete process of routing, originating, and terminating calls, including any billing and statistical collection processes.
• Signaling and device control: Cisco Unified Communications Manager sets up all
the signaling connections between call endpoints and directs devices such as
phones, gateways, and conference bridges to establish and tear down streaming
connections.
• Dial plan administration: The dial plan is a set of configurable lists that Cisco
Unified Communications Manager uses to determine call routing. Cisco Unified
Communications Manager provides the ability to create scalable dial plans for
users.
• Phone feature administration: Cisco Unified Communications Manager extends
services such as hold, transfer, forward, conference, speed dial, last number redial,
Call Park, and other features to IP phones and gateways.
• Directory services: Cisco Unified Communications Manager uses its own database
to store user information. You can authenticate users either locally or against an
external directory. You can provision users by directory synchronization. With
directory synchronization, you can automatically add users from the directory to
the local database.
• Programming interface to external applications: Cisco Unified Communications
Manager provides a programming interface to external applications such as Cisco
IP Communicator, Cisco Unified IP IVR, Cisco Personal Assistant, and Cisco Unified
Communications Manager Attendant Console.
• Bandwidth Control: Bandwidth per call and between locations is controlled using
Regions and Locations.
Cisco Expressway Call Control
Cisco Expressway can be used as a call control platform. Cisco Expressway also provides
call processing and bandwidth management capabilities, but these are not as
comprehensive as those provided by Cisco Unified Communications Manager. Cisco
Expressway's main strengths are its ability to interwork seamlessly between different
devices using different protocols and its role along with Cisco Unified Communications
Manager to provide edge services. Because Expressway was traditionally a video call-processing platform, it retains a number of video-centric capabilities, which are especially useful for customers who do not use Cisco Unified Communications Manager as a phone system.
Cisco Unified Communications Manager Express is essentially a call-processing software
component that runs on Cisco Routers. Cisco Unified CME is positioned more towards
smaller businesses and smaller offices of larger businesses.

37.7 Brief History of Telephony


Describe Cisco Collaboration On-Premises Edge Solutions
This topic describes the main function of Cisco Collaboration On-Premises Edge Solutions.
Cisco routers can accept a number of different interface cards. Cards providing T1, E1, or BRI interfaces can be installed to enable connectivity to the PSTN. Routers with voice connectivity are often referred to as voice gateways. Calls are routed
from Cisco Unified Communications Manager to the voice gateway using SIP/H.323 or
MGCP. The gateway converts the call signaling and media to a circuit-switched format
used on ISDN networks.
Cisco Expressway
Cisco Expressway is used to manage three types of edge communications.
B2B Communication

Business-to-business (B2B) communication enables calls to be made from an internal
telepresence system to a telepresence system at an external site and vice versa. When an
internal device makes a call to an external device, the Expressway-C routes the call to the
Expressway-E on the other side of the corporate firewall. The Expressway-E then uses DNS
to locate the IP address of the external company's firewall traversal device (it does not
have to be an Expressway). The call signaling messages are then sent to the external
company.
The Expressway-C maintains a keepalive signal to the Expressway-E device. When an
external call is received by the Expressway-E, the Expressway-E can reply to the Expressway-C's keepalive message with a call setup message. When the call setup message is sent to the endpoint, media negotiation is completed over the existing
link. Media channels are always set up from the inside to the outside, enabling the firewall
ports to be opened from the inside to the outside.
Mobile and Remote Access

Mobile and remote access allows mobile workers to use internal collaboration services from the public network without the need for a VPN. The external device uses DNS to
locate the Cisco Expressway-E device and send their registration messages to the
Expressway-E. The Expressway-E sends these messages onto the Expressway-C and
subsequently onto the Cisco Unified Communications Manager. The phone, Jabber
endpoint, or video device then registers as normal. As far as the endpoint is concerned, it
is talking directly to the Cisco Unified Communications Manager.
Hybrid Services
Expressway is also used to connect cloud services to on-premises services in a hybrid
deployment. Hybrid services that use Expressway include:
Hybrid Call Service: This service allows a Webex Teams customer with Cisco Unified
Communications Manager, Business Edition 6000, or Cisco Hosted Collaboration Solution
to integrate their current call control with the Cisco Collaboration Cloud.
Hybrid Calendar Service: This service allows any Webex Teams customer to schedule Webex meetings with an automatically created and associated Webex Teams space. By adding @meet to the location field of an Exchange appointment, all attendees of the appointment are automatically added to the Webex Teams space. With @Webex in the location field, the details of the user's personal Webex Meetings room are automatically added to the invitation sent for the appointment.
Hybrid Directory Service: This service allows any Webex Teams customer to synchronize
their current Active Directory with the Cisco Webex Cloud. This service makes onboarding
users to the cloud simple and more secure.
Cisco Unified Border Element
IP PSTN connectivity with the SIP trunk
Cisco Unified Border Element, also called the Session Border Controller (SBC), is also used
to connect Cisco on-premises devices to external devices, usually via an Internet
Telephony Service Provider (ITSP). Cisco Unified Border Element is an additional function
of a Cisco Router.
Individual SIP trunks can also be set up for customers or partners if required. Like
Expressway, Cisco Unified Border Element can be used to interwork calls and modify
media ports as required. Each call made through a Cisco Unified Border Element is split into two separate call legs. Cisco Unified Border Element can handle large volumes of calls and is typically used for voice, while Expressway business-to-business is typically used for video.

37.8 Brief History of Telephony


Describe Cisco Collaboration On-Premises Applications
Cisco applications include voicemail, instant messaging, presence, and contact center
solutions.
Cisco Unity Connection is the Cisco voicemail platform. As well as supporting traditional
voicemail recording and playback, messages can also be converted to text and sent via
email. In addition, Cisco Unity Connection provides an AutoAttendant feature that allows you to route calls to people or departments and to collect information using an interview call handler.
The Cisco Unified Communications Manager Instant Messaging and Presence (IM&P)
server provides chat messaging and presence indicators inside Jabber for on-premises
users. Presence indicators show when a person is available or not, often a green dot
beside a name indicating they are available and online. In contrast, a red dot indicates
they are busy.

Cisco has two on-premises contact center products:

• Cisco Unified Contact Center Express: A single-box solution for small- to medium-
sized businesses for up to 400 agents supporting voice, Interactive Voice Response
(IVR), and digital channels such as email and chat.
• Cisco Unified Contact Center Enterprise: A suite of products that can support up to
24,000 agents supporting voice, IVR, and digital channels. Additional products can
be integrated to provide features such as reporting and management.

37.9 Brief History of Telephony


Describe Cisco Collaboration On-Premises Conferencing Solutions
Cisco Unified Communications Manager provides limited conferencing capabilities
through the Cisco IP Voice Media Streaming Application service. The software conference
bridge supports G.711 audio by default and has no support for video.
Cisco IOS devices can support conferencing using digital signal processor (DSP) resources, which are hardware modules installed inside the router itself. The capabilities depend on the cards installed and on the router platform. IOS hardware conferencing is also voice only.
Cisco Meeting Server is a dedicated conferencing platform supporting voice, video, and
web conferencing. Cisco Meeting Server also supports third-party devices such as
Microsoft Skype for Business. Cisco Meeting Management platform works with Cisco
Meeting Server to provide operators the ability to monitor and control meetings on behalf
of clients.
Cisco TelePresence Management Suite provides a complete scheduling solution for
conferencing.

37.10 Brief History of Telephony


Describe Cisco Cloud Services
Cisco has a number of cloud-based services for collaboration. All Cisco collaboration cloud
offerings can be used on their own or integrated with existing on-premises infrastructure
in a hybrid deployment.

• Faster deployment
• Pay for what you need
• Expand and contract with business requirements
• Reduced onsite expertise
• Reduced large upfront investment
The main benefits of a cloud deployment include:

• Faster deployment: Cloud services can be up and running in days rather than
months.
• Pay for what you need: Cloud services are fully scalable and can be purchased per
user, allowing customers to deploy exactly what they need.
• Expand and contract with business requirements: Cloud services are flexible and
can be increased and decreased as the needs of a business change.
• Reduced onsite expertise: Endpoints are the only devices on the customer site. In most cases, endpoint configuration is simplified by a wizard that an end user can complete, reducing the need for support staff at every site.
• Reduced large upfront investment: No requirement to purchase servers to run
management software. Some investment in endpoints may be required upfront.
Running costs are operating expenses rather than capital expenditures.
Cisco Webex Meetings
Cisco Webex Meetings is managed and hosted by Cisco. The Cisco Webex Meetings platform provides multiperson meeting capabilities. There are four types of meeting platforms available.
Cisco Webex Meetings: Used for most day-to-day meetings with up to two hundred
named attendees in a single meeting. Cisco Webex Meetings has the capability for each
named user to have a personal meeting room. Meetings can also be scheduled using
calendar platform integrations as well as from the Webex Meetings app or web page.
Meetings include chat capabilities, recording capabilities, sharing of content including
computer applications, videos and whiteboards, and file transfer capabilities. Users can
connect to Cisco Webex Meetings using HD video systems, laptops, smart phones, or
traditional PSTN audio dial-up.
Cisco Webex Training: Specifically designed for training, Cisco Webex Training includes
sharing and whiteboard capabilities, breakout sessions, integrated labs, Q&A capabilities,
chat, polls, attention indicators, integrated test engines, file transfer, and recording. When
setting up a Cisco Webex Training session, you can include attachments, require
registration and integrate with a payment system.
Cisco Webex Events: Cisco Webex Events is specifically designed for larger groups of up to
3000 people in a nonvideo-enabled event and 500 in a video-enabled event. Speakers can
share multimedia and whiteboards the same way as Training and Meetings. Q&A, chat,
polling, recording, and attention monitoring capabilities are all included. Cisco Webex
Events also supports registration and payment capabilities for both live events and access
to recordings.
Cisco Webex Support: Enables a support representative to take control of a remote desktop while connected to a user with audio and video capabilities. For more complex issues, up to five participants can be connected to a support call. It has a very simple "click to connect" option to bring a customer into the call. Other features include file transfer capabilities, custom scripts, chat, the ability to have multiple connections at one time, and the ability to reboot and reconnect to customer machines.
Cisco Webex Teams
Cisco Webex Teams provides a single application for meetings, calls, and chats. Cisco Webex Teams is managed and hosted by Cisco.

From the main Cisco Webex Teams interface, you can chat with individuals (People) or create a space for multiple people. Spaces can exist on their own or can be part of a team with multiple spaces. Within a space or individual chat, you can also share files and launch or schedule a meeting. Users can be invited from outside your organization as well as from inside it. Cisco Webex Teams provides presence information on users and a custom status capability.
The Cisco Webex Teams app is available for Windows, macOS, Android, and iOS, as well as
a generic web browser version for all other devices.
Meetings can be set up directly from the Webex application or from a Webex device. When a meeting is set up using the app, participants can use any nearby video system without having to manually dial the device. Participants can also connect to meetings using standard SIP devices, a phone dial-in, or Microsoft Skype for Business. Each meeting is connected to a space, either because it was launched from one or because a new one is created. All whiteboard sessions, shared files, and chat within the meeting remain available to all participants after the meeting.

Cisco Webex devices include the desktop app, smart phone app, compliant phones
including conference phones, Cisco TelePresence devices, and Cisco Webex Teams boards.
All Cisco Webex Meeting and Teams solutions have a number of security features built-in,
including meeting passwords, end-to-end encryption, and Active Directory integration for
user and password management. The Webex application programming interface (API)
enables developers to extend the features for Webex, adding applications or using APIs,
software development kits (SDKs), and widgets to embed Webex into other applications.

Cisco Webex Calling is an enterprise-grade, cloud-based PBX optimized for midsized businesses. Cisco Webex Calling enables devices to register to the cloud and from there be
routed to the PSTN. The customer has the choice of how they wish to access the PSTN.
They can either use one of the Cisco Cloud Connected Partners (CCP) and have calls
routed straight from the Cisco Cloud to the CCP cloud or have calls routed back to the
customer premises and use their existing or preferred PSTN provider. Cisco Webex Calling
is a fully featured PBX providing all the features you would expect from an Enterprise-
grade solution, including Hunt Groups, Call Queues, Voicemail, Auto Attendants, Paging
Groups, and Call Park Groups, for example.
Cisco Webex Contact Center is a cloud-based contact center platform supporting voice,
email, and chat communication with customers. Cisco Webex Contact Center is managed
and hosted by Cisco. Webex Contact Center has the ability to route customer queries
based on agent skills and availability. Staff not from the contact center, such as managers
and subject matter experts, can join interactions using Cisco Webex Teams if needed.
Cisco Contact Center can integrate with a number of Customer Relationship Management
tools such as Salesforce and Microsoft. Data from customer interaction and agent activity
records, including IVR and Automatic Call Distributor (ACD), is brought together into real-
time and historical reports and dashboards.
Optional components, including workforce optimization, enable dynamic management of agent schedules, forecasting, and staff planning. Quality management tools enable customers to measure efficiency and performance, and an outbound campaign manager can be used for outbound sales and marketing campaigns.
Cisco Webex Teams, Meetings, Calling, and Contact Center are all administered using the
Cisco Webex Control Hub. From the control hub, an administrator can add, modify, and
delete users, import users from Active Directory, manage user subscriptions, configure
locations and physical devices, set up initial configuration and specific services, and run
reports.

Cisco Hosted Collaboration Solutions (HCS) are partner hosted rather than Cisco hosted.
Essentially all the component parts you would deploy in an on-premises solution are
available within an HCS solution. Deployments can be fully partner-hosted and managed,
hosted on the customer premises but managed by the partner or a dedicated data center
built for a customer and managed by the partner. Smaller customers may share devices
such as Cisco Unified Communications Manager in a multitenant environment, while
larger customers have an independent environment.
On top of the devices normally found in an on-premises solution, Cisco HCS also includes Cisco Hosted Collaboration Mediation Fulfillment (HCM-F), which performs centralized management for the entire Cisco HCS solution. The Hosted Collaboration Mediation (HCM) layer performs aggregation and provides a central connection to the service provider cloud. HCM provides northbound interface (NBI) services to integrate Cisco HCS with the service provider business support system (BSS), operations support system (OSS), and Manager of Managers (MoM).

38.1 Digital Voice


Introduction
This section describes some of the standards and protocols associated with voice and video calls. Topics include how calls are set up using the Session Initiation Protocol (SIP), the protocols used to carry media, and the codecs used to compress media.

38.2 Digital Voice


Define Codecs
This topic will cover the methods used to convert human speech into digital packets that
are compressed by using codecs.

The human ear and voice communicate using sound waves, which are analog signals.
Modern communication networks communicate using digital signals. Before data can be
sent from one phone to another, the data has to be converted into digital. At the receiving
end, the digital signal is converted back to analog so that the receiving party can
understand the message.
A digital format is used to transmit signals because any signal degrades over distance. First, digital signals can degrade much further and still be readable; you can still tell a "1" from a "0." Second, when an analog signal degrades, it is amplified at regular intervals, but amplification does not remove any unwanted noise that was picked up along the way. A digital signal is not amplified; it is regenerated, which removes the noise and produces a clean signal again. Noise created during transmission is analog in nature, so it can be distinguished from a digital signal but not from an analog signal.

The first three steps of the analog to digital conversion describe the pulse code
modulation (PCM) process, which corresponds to the G.711 codec. Step 4 explains
compression that is performed by low-bandwidth codecs, such as G.729, G.728, G.726, or
Internet Low Bitrate Codec (iLBC).
1. Sample the analog signal regularly: The sampling rate must be twice the highest
frequency to produce playback that does not appear either choppy or too smooth.
The sampling rate used in telephony is 8000 samples per second (8 kHz), which
reflects the fact that the bulk of human voice energy is carried in the spectrum of
0–4 kHz.
2. Quantize the sample: Quantization consists of a scale made up of 8 major
segments. Each segment is subdivided into 16 intervals. The segments are not
equally spaced but are actually finest near the origin. Intervals are equal within the
segments but different when they are compared between the segments. Finer
graduations at the origin result in less distortion for low-level tones.
3. Encode the value into an 8-bit digital form: Encoding maps a value derived from
the quantization to an 8-bit number (octet).
4. (Optional) Compress the samples to reduce bandwidth: Signal compression is
used to reduce the bandwidth usage per call.

Sampling is a process that takes readings of the waveform amplitude at regular intervals
by a process called pulse amplitude modulation (PAM). The output is a series of pulses
that approximate the analog waveform. For this output to have an acceptable level of
quality for the signal to be reconstructed, the sampling rate must be rapid enough.
Harry Nyquist developed a mathematical proof about the rate at which a waveform can be
sampled and the information that can be recovered from those samples. The Nyquist
theorem states that when a signal is instantaneously sampled at the transmitter in regular
intervals and has a rate of at least twice the highest channel frequency, the samples will
contain sufficient information to allow an accurate reconstruction of the signal at the
receiver.
While the human ear can sense sounds from 20 to 20,000 Hz, speech encompasses sounds
from about 200 to 9000 Hz. The telephone channel was designed to operate at
frequencies of 300 to 4000 Hz. This economical range offers enough fidelity for voice
communications, although higher frequency tones are not transmitted. The removal of
higher frequencies leads to issues with sounds such as “s” or “th.” The voice frequency of
4000 Hz requires 8000 samples per second; that is, one sample every 125 microseconds.
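The numbers above follow directly from the Nyquist theorem; a quick sketch of the arithmetic:

highest_voice_freq_hz = 4000                  # upper edge of the telephone channel
sample_rate = 2 * highest_voice_freq_hz       # Nyquist: at least twice the highest frequency
sample_interval_us = 1_000_000 / sample_rate  # microseconds between samples

print(sample_rate)         # 8000 samples per second
print(sample_interval_us)  # 125.0 microseconds, as stated above
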
The significant articulation range of the human voice falls between 300 and 4000 Hz. This is the range that telephones were designed to sample and the range that VoIP was initially designed to sample to match traditional telephony. This range is also known as narrowband. Although it works well for human speech, it does not capture the full range of human speech and does not work well for music, which is a concern when using Music on Hold (MOH).
Over time new codecs have been developed, and higher sampling rates have been
included to allow for crisper, more precise speech transmission. Codecs offering sampling
in the full band range are suitable for live music performances.

Quantization divides the range of amplitude values that are present in an analog signal
sample into a set of discrete steps that is closest in value to the original analog signal.
Quantization matches a PAM signal to a segmented scale. The scale measures the
amplitude (height) of the PAM signal and assigns an integer number to define that
amplitude.
The figure shows quantization. In the example, the x-axis represents time, and the y-axis
represents the voltage value (PAM). The voltage range is divided into 16 segments (0 to 7
positive and 0 to 7 negative). Starting with segment 0, each segment is twice the length of
the preceding one, which keeps the signal-to-noise ratio (SNR) more uniform across the amplitude range. This segmentation also corresponds closely to the logarithmic behavior of the
human ear. The two principal schemes for generating these samples in electronic
communication are a-law and mu-law.
The a-law and mu-law standards are audio compression schemes defined by ITU-T G.711
that compress 16-bit linear PCM data down to 8 bits of logarithmic data. The a-law
standard is primarily used in Europe and the rest of the world. The mu-law standard is
used in North America and Japan.
Although a-law and mu-law are very similar, there are a few differences that make them
incompatible. An international connection must use a-law. The mu-law to a-law
conversion is the responsibility of the mu-law country.
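To illustrate the logarithmic companding idea behind these standards, the following sketch applies the continuous-domain mu-law formula (mu = 255) to a normalized sample. This is a simplified model of the curve that the segmented G.711 scale approximates, not the exact codec algorithm.

import math

def mu_law_compress(x, mu=255):
    # x is a normalized sample between -1.0 and 1.0.
    # Low-level samples get proportionally more of the output range,
    # mirroring the finer quantization steps near the origin.
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

print(mu_law_compress(0.01))   # a quiet sample still maps to a sizable value (~0.23)
print(mu_law_compress(0.5))    # ~0.88
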

Encoding converts an integer base-10 number to a binary number. The output of encoding
is a binary expression in which each bit is either a 1 (pulse) or a 0 (no pulse). After PAM
samples an input analog voice signal, the next step is to encode these samples in
preparation for transmission over a telephony network. This process is called PCM.
The PCM process mathematically converts the value obtained from PAM sampling to
another binary value within the range –127 to +127. The first bit represents positive (1) or
negative (0), while the remaining 7 bits form a number between 0 and 127.
It is during this conversion that a-law and mu-law differ in their algorithms. A-law
represents the number +127 as 11111111, where the first bit is 1 (positive) and the
remaining bits equal 127. Mu-law inverts the last 7 bits, which results in +127 being
represented as 10000000.
It is at this stage that companding, the process of first compressing an analog signal at the
source and then expanding this signal back to its original size when it reaches its
destination, is applied. This whole process is generally referred to as PCM coding. A digital
signal processor (DSP), which is a specialized chip, performs the PCM process quickly.
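The following minimal sketch reproduces the 8-bit word construction described above (a sign bit followed by a 7-bit magnitude, with mu-law inverting the lower 7 bits). It is illustrative only and omits the segment and interval encoding that real G.711 DSPs perform.

def encode_pcm_word(value, law="a"):
    # value is an integer from -127 to +127: 1 sign bit + 7-bit magnitude
    sign_bit = 1 if value >= 0 else 0
    magnitude = abs(value) & 0x7F
    word = (sign_bit << 7) | magnitude
    if law == "mu":
        word ^= 0x7F                  # mu-law inverts the lower 7 bits
    return format(word, "08b")

print(encode_pcm_word(127, "a"))      # 11111111
print(encode_pcm_word(127, "mu"))     # 10000000
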

Uncompressed digital speech signals are sampled at a rate of 8000 samples per second,
with each sample consisting of 8 bits. Therefore, you have 64 kbps per call (8000 * 8).
Multiple algorithms have been developed to allow voice transmission at lower bandwidth
consumption. The most common coder-decoder (codec) algorithms are presented in the
table in the figure, together with their bandwidth usage. Codecs offer compression to
voice, much like .zip or .rar offer compression to files.
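The 64-kbps figure follows directly from multiplying the sampling rate by the bits per sample. A short sketch, with per-call payload rates taken from the codec descriptions later in this section, is shown below.

g711_kbps = 8000 * 8 / 1000          # 8000 samples/s * 8 bits = 64 kbps

# Nominal payload bit rates per call (kbps), as described for each codec below
codec_payload_kbps = {"G.711": 64, "G.722": 64, "iLBC": 13.3, "G.729": 8}
print(g711_kbps, codec_payload_kbps)
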

38.3 Digital Voice


Compare Audio Codecs
This topic will cover considerations when choosing an audio codec and a comparison
between different audio codecs.
Audio Codec Selection
The following are some audio codec considerations:

• Call quality
• Network latency and reliability
• Endpoint Support
• Codec complexity
• Transcoder avoidance
• Bandwidth
A codec is a software algorithm that compresses and decompresses speech or audio
signals. There are many standardized codecs that are used in VoIP networks.
When selecting a codec to use within an enterprise environment, consider the following:

• Bandwidth: Usually the first consideration when selecting a codec, especially when
considering total consumption of bandwidth for multiple simultaneous calls over
low-speed WAN links. On the LAN, bandwidth is not paramount because most
networks will have 100 Mbps at a minimum, with many networks now having 1
Gbps to each endpoint.
• Call quality: Second to bandwidth is the call quality the codec is capable of
providing. In the past, call quality was an especially important consideration; in
recent years it has become less of a deciding factor because all
mainstream codecs offer premium-quality calls. The most common method for
scoring quality is the Mean Opinion Score (MOS). Although there are other methods
available, MOS is still the first score most people will look at. The score is between
1 to 5, with 5 being perfect face-to-face quality and 4.3 being the highest quality
possible over a phone because of the Nyquist theorem.
• Network latency and reliability: The reliability and latency of your network will
have an impact on the quality of the call more so than the quality score for the
codec. However, there are differences between codecs in their ability to conceal
latency and packet loss issues experienced on the network.
• Endpoint support: Codecs improve year over year, and although the latest codecs
often offer the lowest bandwidth and highest quality, it is imperative to identify
codec support of the current endpoints in the environment. The introduction of
new codecs might require transcoding (translating from one codec to another),
which will require additional resources until all endpoints support the newer
codecs.
• Codec complexity: Each codec has a different amount of compression that it needs
to perform to maintain its quality score and bandwidth usage, so different
amounts of processing power are needed for different codecs. This processing
power is often consumed on the DSP chips on the routers. Knowing how much
processing a codec requires will help determine scalability issues with existing and
future hardware.
• Transcoder avoidance: Transcoders allow endpoints that have incompatible codec
support to communicate with each other through the transcoder, which will
translate between the two codecs in use. Transcoding, however, comes at a
resource cost because DSP resources are needed. Avoiding endpoints that are
incompatible from a codec point of view will remove this resource requirement.
G.711 is an ITU-T standard that uses PCM to encode analog signals into a digital
representation by regularly sampling the magnitude of the signal at uniform intervals and
then quantizing it into a series of symbols in a digital (usually binary) code. The voice
samples created by the PCM process generate 64 kbps of data.
G.722 is an ITU-T standard wideband speech codec operating at 48, 56, and 64 kbps.
G.722 is typically used in LAN deployments, where the required bandwidth is not
prohibitive. Unlike G.711, which samples at 8 kHz, G.722 samples at 16 kHz, doubling the
captured audio spectrum. In this type of deployment, G.722 offers a
significant improvement in audio quality over older narrowband codecs, such as G.711,
without causing an excessive increase in implementation complexity. Cisco Unified
Communications Manager calculates G.722 with 64 kbps.
iLBC is a speech codec that is suitable for robust voice communication over IP. The codec
is designed for narrowband speech and results in a payload bit rate of 13.3 kbps. The CPU
load is similar to that of G.729A, but with higher quality and a better response to packet loss. If there are
lost frames, iLBC processes voice quality issues through graceful speech quality
degradation. Lost frames often occur with lost or delayed IP packets. Ordinary low-bitrate
codecs exploit dependencies between speech frames, which unfortunately results in error
propagation when packets are lost or delayed. In contrast, iLBC-encoded speech frames
are independent, so this problem will not occur.
G.729 is the compression algorithm that Cisco uses for high-quality 8-kbps voice. G.729 is
a high-complexity, processor-intensive compression algorithm that monopolizes
processing resources.
Although G.729A is also an 8-kbps compression algorithm, it is not as processor-intensive
as G.729. G.729A is a medium-complexity variant of G.729 with slightly lower voice quality
and is more susceptible to network irregularities such as delay, delay variation, and
"tandeming." Tandeming causes distortion that occurs when speech is coded, decoded,
then coded and decoded again, much like the distortion that occurs when a videotape is
repeatedly copied.
The Annex B variant of G.729 is also a high-complexity algorithm that adds VAD (Voice
Activity Detection) and CNG (Comfort Noise Generation) to the codec. VAD detects silence
that occurs in typical conversations. This silence is present when one end is talking and the
other is listening. The listening end can have the Real-Time Transport Protocol (RTP)
stream that is going toward the talker temporarily suppressed. The benefit of this
suppression is an approximate 35 percent savings in bandwidth. The RTP stream is
reactivated upon the detection of sound on the listening end, which can cause clipping of
the first syllable when the RTP stream restarts. With traditional voice circuits, users are
used to hearing white noise. When users switch to digital circuits, the lack of white noise
can be mistaken for a disconnection. CNG inserts white noise into the line to
accommodate users who are changing from traditional voice circuits.
G.729AB is a medium-complexity variant of G.729B with slightly lower voice quality.
Opus is an open, royalty-free codec standardized by the IETF in 2012. It supports bitrates
from 6 kbps to 510 kbps and sampling rates from 8 kHz (narrowband) to 48 kHz (full
band). This sets Opus apart from other codecs, as it offers unmatched quality for
interactive speech and music transmission. It supports both constant bitrate (CBR) and variable bitrate
(VBR) operation. It is the codec of choice when using Webex.

38.4 Digital Voice


Compare Video Codecs
This topic will show the methods used to capture and transmit video over the network
and the differences between different codecs.
Before you look at video standards in more detail, there are a few terms you need to
understand. Firstly, a "frame" is an image made up of dots called "pixels." The picture size,
or resolution, is the number of pixels across the width by the number of pixels vertically.
Resolutions are not restricted to telepresence. Your PC has a screen resolution that it is
using now. If you right-click your desktop and go to Properties, then select the Settings
tab, you can adjust your screen resolution.
Different video standards use different screen resolutions. While a higher screen
resolution provides a higher picture quality, it also needs a lot more bandwidth to send. If
you send more bandwidth than the network can handle, then not all the information will
arrive at the far end, and actually, the picture quality will degrade significantly.
Quality is also dependent on the size of the screen you are using. Within the figure, the
motorcycle looks very clear in the top picture, but if you keep blowing it up, it will look
awful. Your system will negotiate the best resolution based on the bandwidth you made
the call with. But if you receive a poor quality image, then redialing the call at a lower
bandwidth might improve it.
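To see why resolution drives bandwidth, the rough sketch below computes the uncompressed data rate for a given picture size. The 1920x1080 resolution, 24 bits per pixel, and 30 frames per second are illustrative assumptions; compression is what makes transmitting such a stream practical.

width, height = 1920, 1080            # example resolution in pixels
bits_per_pixel = 24                   # assumed color depth
frames_per_second = 30                # assumed frame rate

raw_bps = width * height * bits_per_pixel * frames_per_second
print(raw_bps / 1_000_000, "Mbps uncompressed")   # roughly 1493 Mbps
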

Video codecs all fundamentally work the same way at the start. They take a single frame
of the video, group pixels into blocks, and then group blocks into macroblocks. Where
codecs differ is how this grouping process is performed and the quantity of each element
used in the groupings.
Macroblocks that are in a contiguous row are grouped into slices (a single row of
macroblocks).
To reduce the amount of data needed to create a video, video codecs are designed to
identify changes from one frame to the next and only update the changes. Initially, the
process requires a full frame, known as an I-frame (intra-coded), to be used
as the starting point. The second frame is compared to the first to identify which
macroblocks have changed, and only those blocks are updated and transmitted in the
form of P-frames (predictive-coded) or B-frames (bidirectional predictive-coded).
Pictures, or frames, are grouped into a group of pictures (GOP), with the I-frame as the
starting point and P-frames following it to the next I-frame.
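A highly simplified sketch of this temporal compression idea follows: a full I-frame starts the group of pictures, and each subsequent frame transmits only the macroblocks that changed. (Frames are modeled here as plain lists of macroblock values; real codecs also use motion vectors and spatial compression.)

def encode_gop(frames):
    # frames: list of frames, each frame a list of macroblock values
    encoded = [("I", frames[0])]                      # full keyframe starts the GOP
    for prev, curr in zip(frames, frames[1:]):
        deltas = {i: mb for i, (old, mb) in enumerate(zip(prev, curr)) if mb != old}
        encoded.append(("P", deltas))                 # only the changed macroblocks
    return encoded

frames = [[1, 1, 1, 1], [1, 2, 1, 1], [1, 2, 3, 1]]
print(encode_gop(frames))
# [('I', [1, 1, 1, 1]), ('P', {1: 2}), ('P', {2: 3})]
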

The main reason why video applications are more loss-sensitive than voice is that the
codecs that are used in video compression work differently from the way that voice
codecs work.
Commonly used video codecs, such as MPEG-2, MPEG-4, H.264, and H.265, use temporal
compression algorithms. A codec that uses temporal compression does not send a
complete frame sample (called an I-frame or keyframe) at every sampling interval. Only
some of the frames that are sent are I-frames. Between the full frames (I-frames), only the
differences with the previous frame, represented as motion vectors and prediction errors,
are encoded. The frames that carry these frame deltas (P- or B-frames) tend to be much
smaller than I-frames, which is how the compression algorithm reduces the bandwidth.
With the temporal compression algorithm, a spatial compression algorithm is typically
used on each frame to reduce the number of bytes that is necessary to encode the frame
itself. This process is similar to the type of encoding that is used in picture compression
methods such as JPEG.
What does this mean for the loss tolerance of video traffic?
At the commonly used 30-f/s frame rate, a frame is sent every 33 ms. Depending on the
resolution and spatial compression that is used, each frame is broken down into several
packets, and these packets are transmitted onto the network in a short burst.
What happens if you lose a single packet out of this burst?
To begin, losing a single packet means losing the complete frame, so the loss is magnified
by the fact that a sample is not a single packet as it is with voice. Next, if only spatial
compression were used, then a new frame would arrive after 33 ms, and you would
experience a 33-ms freeze. However, due to the temporal encoding scheme, you will not
always receive a new I-frame as the next frame. If you lose an I-frame, it can take several
hundred milliseconds (depending on the codec) before you get a new I-frame. If you lose a
P- or B-frame, the effect is slightly less severe, but this loss will still translate to clearly
visible artifacts in subsequent frames. Therefore, from a network design standpoint, to
provide a good user experience for video applications, you should design it to be as close
to lossless as possible for the video traffic.
Another related design objective is to design the network with very high availability in
mind. A commonly used target for network reconvergence in the campus is below 200 ms.
If a media application loses 200 ms worth of packets, this loss is definitely noticeable to
the user, but it will generally be accepted because it is an isolated incident. A longer period of
loss for media applications is noticeable and will detract from the user experience.
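A back-of-the-envelope sketch of these numbers is shown below; the 30-fps rate and 200-ms reconvergence window come from the discussion above, while the group-of-pictures length is an assumed value used only to illustrate how long the loss of an I-frame can remain visible.

frame_interval_ms = 1000 / 30                    # about 33 ms per frame at 30 fps
outage_ms = 200                                  # campus reconvergence target
frames_lost = outage_ms / frame_interval_ms      # about 6 frames affected

gop_length_frames = 15                           # assumed: a new I-frame every 15 frames
worst_case_artifact_ms = gop_length_frames * frame_interval_ms   # about 500 ms after losing an I-frame
print(round(frames_lost), round(worst_case_artifact_ms))
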
Video Codec Selection
The following are some video codec considerations:
• Transcoding
• Bitrate
• Quality
• Network latency and reliability
• Endpoint Support
• Complexity
When choosing a codec, especially for off-network calls, you must consider the following:

• Which codecs are supported on the off-network endpoints? If possible, you should
avoid the need for transcoding, especially video transcoding. Transcoding requires
dedicated hardware resources. Therefore, try to choose common codecs, if
possible. If endpoints do not have a codec in common, then a transcoder is
required.
• Is there enough bandwidth for the desired number of audio and video calls?
• Is quality of service (QoS) implemented? What latency, jitter, and packet loss do I
have to expect?
• What is the desired audio and video quality?

The two leading video codecs are H.264 and H.265. Although H.264 is still generally the
de-facto standard, H.265 has some benefits.
H.264, also known as MPEG-4 AVC (Advanced Video Coding), was released in
2003 and is used widely by most video hosting companies, including Netflix, Google, and
YouTube. It set itself apart from predecessors such as MPEG-2 and H.263 by reducing the bitrate
by half while maintaining the same quality, or alternatively by increasing the quality
substantially while maintaining the same storage usage. It was able to achieve
this benefit without increasing the processing requirements or complexity. It supports
spatial and temporal compression and resolutions up to 8K Ultra High-Definition,
with a maximum resolution of 8192x4320.
H.265 evolved from H.264 and was ratified in 2013. The bitrate was halved again when
compared to its predecessor, but the complexity increased, requiring a lot more
processing power to encode and decode H.265. All other characteristics of H.265 remained the
same when compared to H.264.

39.1 Network PSTN


Describe the Call Setup and Teardown Process
There are many different protocols and methods for call setup and teardown; however,
they all have a few fundamental steps in common. This overview will cover the steps
during call setup and teardown that are not specific to any protocol.

• Initiating the call: After the user has lifted the handset and received a dial tone,
they will dial the called party's number. These digits are sent to the call control
device.
• Endpoint Discovery: The call control device then needs to identify the location of
the called party. This endpoint could be registered locally on the same device, or it
might require call routing configuration in order to route the call to a remote
destination.
• Permission Check: A permission check may be performed to confirm the calling
party has sufficient rights to dial the number in question.
• Bandwidth Check: The call control agent may check to determine if there is
sufficient bandwidth on the network to allow the call. The bandwidth check is
better known as Call Admission Control (CAC). There are many options for the
implementation of CAC, each with its own requirements.
• Call progress tones: After the called party endpoint is ringing, a call progress tone
needs to be sent back to the calling party, which will then play the ringing tone to
the user. Similarly, if the called party is engaged, the engaged tone will be played
to the calling party. These are just two examples of call progress tones that are
available during call setup and teardown.

• Call answered: After the called party answers the call, call progress tones are no
longer needed.
• Call Detail Record (CDR): The call control agent can be configured to log all call
information. The logging can be done through several different technologies such
as syslog, RADIUS, or direct database entries. The method used for logging will
largely depend on the type of call control agent that is used.
• Codec negotiation: A capabilities exchange is now required in order to find a
common supported and requested codec between the two endpoints.
• Negotiating the streams: The two endpoints will then negotiate the ports to be
used to establish the bidirectional connections for RTP and Real-Time Transport
Control Protocol (RTCP).
o RTP will use an even-numbered port in the range 16,384 –
32,767.
o RTCP will use the RTP port plus one in order to form a paired connection,
which results in RTCP always using an odd port number.

• Hang up: When either the calling or called party terminates the call by hanging up
the endpoint, signaling is sent to the call control agent to tear down the call and
release the resources used.
• Close connection: Signaling between the call control agent and the called party is
sent to notify and acknowledge the termination of the call and the release of the
resources.
• CDR: Call details records are closed to mark the completion of the call.
• Bandwidth release: The bandwidth that may have been allocated to the call
through CAC gets released in order to make resources available for future calls.
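As a minimal illustration of the bandwidth check (CAC) and bandwidth release steps above, the sketch below admits a call only if enough link bandwidth remains and returns that bandwidth on teardown. Real CAC mechanisms are considerably more involved; the class, link size, and per-call rates here are hypothetical.

class SimpleCAC:
    def __init__(self, link_kbps):
        self.available_kbps = link_kbps

    def admit_call(self, call_kbps):
        # Bandwidth check: reject the call if the link cannot carry it
        if call_kbps > self.available_kbps:
            return False
        self.available_kbps -= call_kbps
        return True

    def release_call(self, call_kbps):
        # Bandwidth release on teardown
        self.available_kbps += call_kbps

cac = SimpleCAC(link_kbps=256)
print([cac.admit_call(80) for _ in range(4)])   # [True, True, True, False]
cac.release_call(80)
print(cac.admit_call(80))                       # True again after a call is released
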

39.2 Network PSTN


Describe SIP Call Signaling for Call Setup and Teardown
This figure depicts the call setup between two SIP endpoints. The process is detailed in the
following steps:
1. User dials 2000.
2. The phone sends a SIP Invite message to the Call Control containing the phone's
capable codecs inside the message; this is called a Session Description Protocol
(SDP) message.
3. The call control device reads the SIP invite and SDP message before routing the
message to the receiving phone.
4. The call control device also sends a 100 Trying message back to the originating
phone followed by a 180 Ringing message.
5. After the user answers the call, the receiving phone sends a 200 OK message to the
call control containing the receiving phone's SDP message.
6. The call control device reads the SDP message and sends the 200 OK message to
the dialing endpoint.
7. An acknowledgement (ACK) message is sent from the dialing endpoint via the call
control device to the receiving endpoint.
8. RTP media streams with corresponding RTCP streams (not shown on the figure) are
established directly between the phones.
This figure depicts the call teardown between two SIP endpoints. The process is detailed in
the following steps:
1. The user hangs up the call.
2. The phone sends a SIP BYE message to the call control device.
3. The call control device reads the SIP BYE message, then sends it on to the other
endpoint in the call.
4. The second endpoint terminates the call, then responds with a 200 OK message to
the call control device, which is then forwarded to the initial phone.
Session Description Protocol
SIP leverages a number of other standards-based protocols to provide a large set of
features based on relatively simple mechanisms. One of the relevant protocols is the SDP.
The SDP is an IETF-based format for describing streaming media initialization parameters
in an ASCII string. SDP is intended for describing multimedia communication sessions for
the purposes of session announcement, session invitation, and parameter negotiation.
SDP does not deliver media itself but is used for negotiation between endpoints of media
type, format, and all associated properties. The set of properties and parameters are often
called a session profile. SDP is designed to be extensible to support new media types and
formats.
This figure presents two SDP examples, and the table explains the parameters that are
used in these two examples as follows:

• Version: Protocol version.


• Origin: Describes the sender of the message and may include one or more of these
parameters: username, session ID, address type, and the address value.
• Times: Optionally defines the session start and end time stamps. The values are
not set when a call setup is signaled.
• Connection data: Provides the parameters for media endpoint termination:
network type ("IN" is defined as "Internet," and other types may be added),
address type (IPv4/v6), and the connection address (IP address)
• Media: Specifies the media type (audio/video), the UDP transport port, and one or
more media formats. Examples of Audio Video Profile (AVP) codes are:
o 0 (G.711 mu-law)
o 8 (G.711 a-law)
o 3 (Global System for Mobile Communications [GSM] codec)
o 18 (G.729)
Note: The list is ordered according to the priority. SDP content varies depending on the
message type.
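The sketch below assembles a minimal, illustrative SDP body containing the parameters described above, with a media line offering G.711 mu-law (0), G.711 a-law (8), and G.729 (18) in priority order. The address, session identifiers, and port are placeholder values, not taken from the course figures.

def build_sdp_offer(ip="192.0.2.10", audio_port=16384):
    lines = [
        "v=0",                                              # Version
        "o=user1 53655765 2353687637 IN IP4 " + ip,         # Origin
        "s=-",                                              # Session name (required by SDP, not discussed above)
        "c=IN IP4 " + ip,                                   # Connection data
        "t=0 0",                                            # Times (not set when call setup is signaled)
        "m=audio " + str(audio_port) + " RTP/AVP 0 8 18",   # Media: codecs listed in priority order
    ]
    return "\r\n".join(lines)

print(build_sdp_offer())
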
Offer Types
There are two ways to exchange the SDP Offer and Answer messages. These methods are
commonly known as Delayed Offers and Early Offers. In the simplest terms, an initial SIP
Invite that is sent with SDP in the message body defines an Early Offer, whereas an initial
SIP Invite without SDP in the message body defines a Delayed Offer.
Delayed Offer
In a Delayed Offer, the session initiator does not send its capabilities in the initial Invite
but waits for the called device to send its capabilities first (for example, the list of codecs
that are supported by the called device), thus allowing the calling device to choose the
codec to be used for the session.

The Delayed Offer is recommended for SIP trunks because it enables Internet
telephony service providers (ITSPs) to provide their capabilities first. Cisco Unified
Communications Manager allows the administrator to select the offer method. Cisco
gateways support both methods, but originating gateways default to Early Offer.
Early Offer
In an Early Offer, the session initiator (calling device) sends its capabilities (including
supported codecs) in the SDP contained in the initial Invite. This method allows the called
device to choose its preferred codec for the session. Early Offer is the default method that
is used by a Cisco voice gateway acting as the originating gateway.
40.1 Digital Protocols
Explore Media Streams at the Application Layer
This topic will compare the different media streams found at the application layer, namely
RTP, Secure Real-Time Transport Protocol (SRTP), and RTCP.

After signaling has been completed, the media stream is formed directly between the two
endpoints. This can either be done with RTP in an unsecured manner, meaning that the
traffic is not encrypted, or the traffic can be secured using SRTP.
In either scenario, RTCP is used and set up as a separate stream in order to control the
media stream.
Since packets are required to be sent continuously and constantly due to the real-time
nature of the traffic, a different protocol is required when compared to data traffic. RTP
provides end-to-end delivery for real-time data such as voice and video. Unlike traditional
data, which uses acknowledgments for each packet to confirm delivery, real-time
information cannot afford the delay associated with acknowledgments.
RTP runs on an even port number randomly selected from the UDP port range 16,384 –
32,767. Even though UDP is used as the underlying protocol and does not use
acknowledgments, RTP adds sequence numbering in order to make sure packets are in the
correct order.
RTP also includes the following:

• Payload type: This is used to identify the codec type and media format, which allows
the codec to change during the transmission.
• Sequence numbering: As already noted, this allows packets to be sorted into
the correct order and also makes it possible to identify whether packet loss has occurred.
• Time stamp: This allows the protocol to measure delay and jitter and to space the
packets correctly on the receiving end using a playout buffer, ensuring packets are
timed correctly and played back at the correct speed. It also helps remove jitter
caused by variations in delay experienced during transmission.
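A minimal sketch of how these fields appear on the wire follows: RTP uses a fixed 12-byte header (RFC 3550) carrying the payload type, sequence number, and time stamp alongside a synchronization source (SSRC) identifier. The field values used here are arbitrary examples.

import struct

def build_rtp_header(payload_type, sequence_number, timestamp, ssrc):
    first_byte = 2 << 6                   # version 2, no padding, no extension, no CSRCs
    second_byte = payload_type & 0x7F     # marker bit 0 + 7-bit payload type
    return struct.pack("!BBHII", first_byte, second_byte,
                       sequence_number, timestamp, ssrc)

header = build_rtp_header(payload_type=0, sequence_number=1, timestamp=160, ssrc=0x1234ABCD)
print(len(header), header.hex())          # 12-byte header
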
RTCP is set up as a separate stream from the RTP or SRTP stream. It uses the port
number selected by RTP plus one, which means it will always run on an odd port number.
RTCP provides out-of-band statistics and control information and includes the following:

• Packet count: How many packets have been sent since the start of the call in both
directions.
• Packet delay: The delay between packets since the last RTCP packet.
• Octet count: Bandwidth usage during the call, represented in octets (8 bits).
• Packet loss: The total number of packets lost.
• Jitter: The variation in delay between packets.
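A small sketch of the port-pairing rule described in this topic: RTP picks a random even UDP port in the 16,384 – 32,767 range, and RTCP is paired on the next (odd) port.

import random

def pick_media_ports():
    rtp_port = random.randrange(16384, 32767, 2)   # even ports only
    rtcp_port = rtp_port + 1                       # paired odd port for RTCP
    return rtp_port, rtcp_port

print(pick_media_ports())
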
SRTP allows for the authentication and encryption of voice and video traffic. As encrypting
the RTP header would introduce routing issues for the calls, the header can be validated
and authenticated but not encrypted. The RTP payload, which contains the voice and
video traffic, can be encrypted as well as authenticated, allowing for secure
transmission and anti-replay protection of the conversations.
