CCNAv2 Study Material
Version 1.0.30
Welcome to Implementing and Administering Cisco
Solutions
Course Introduction
The Implementing and Administering Cisco Solutions (CCNA) course teaches learners how
to install, operate, configure, and verify a basic IPv4 and IPv6 network, including
configuring network components, such as switches, routers, and WLC. This course also
covers managing network devices and identifying basic security threats.
This course offers various learning methodologies, including videos and hands-on labs to
provide an interactive experience.
During this course you will:
A network of computers and other components that are located relatively close together
in a limited area is often referred to as a LAN. Every LAN has specific components,
including hardware, interconnections, and software. WAN communication occurs
between geographically separated areas. It is typically provided by different
telecommunication providers using various technologies and different media, such as fiber, copper, cable, Asymmetric Digital Subscriber Line (ADSL), or wireless links. In
enterprise internetworks, WANs connect the main office, branches, Small Office Home
Office (SOHO), and mobile users.
Listed are some important skills that you will build upon when exploring the functions of
networking:
Computer network engineers design, develop, and maintain highly available network
infrastructure to support the information technology activities of the business. Network
engineers interact with network users and provide support or consultancy services about
design and network optimization. Network engineers typically have more knowledge and
experience than network technicians, operators, and administrators. A network engineer
should constantly update their knowledge of networking to keep up with new trends and
practices.
Users who wish to connect their networks to the internet can acquire access through a
service provider's access network. Service provider networks can use different
technologies from dialup or broadband telephony networks, such as ADSL networks, cable
networks, mobile, radio, or fiber-optic networks. A service provider network can cover
large geographical areas. Service provider networks also maintain connections between
themselves and other service providers to enable global coverage.
Computer networks can be classified in several ways, and these classifications can be combined to find the most appropriate description for a given implementation.
The distance between the user and the computer networks the user is accessing distinguishes local networks from remote networks.
Examples of networks categorized by their purpose would be data center networks and
SAN. Focusing on the technology used, you can distinguish between wireless or wired
networks.
Looking at the size of the network in terms of the number of devices it has, there are various types of networks, such as small networks (usually with fewer than ten devices), medium to large networks consisting of tens to hundreds of devices, and very large, global networks, such as the internet, which connects billions of devices across the world.
One of the most common categorizations looks at the geographical scope of the network.
There are LANs that connect devices located relatively close together in a limited area.
Contrasting LANs, there are WANs, which cover a broad geographic area and are managed
by service providers. An example of a LAN network is a university campus network that
can span several collocated buildings. An example of a WAN would be a
telecommunication provider’s network that interconnects multiple cities and states. This
categorization also includes metropolitan-area networks (MANs), which span a physical
area larger than a LAN but smaller than a WAN, for instance, a city.
Medium-to-large enterprise networks can span multiple locations. Usually, they have a
main office or Enterprise Campus, which holds most of the corporate resources, and
remote sites, such as branch offices or home offices of remote workers. A home office
usually has a small number of devices and is called a small office, home office (SOHO).
SOHO networks mostly use the internet to connect to the main office. The main office
network, which is a LAN in terms of its geographical span, may consist of several networks
that occupy many floors, or it may cover a campus that contains several buildings. Many
corporate environments require the deployment of wireless networks on a large scale,
and they use Wireless LAN Controllers (WLC) for centralizing the management of wireless
deployments. Enterprise Campuses also typically include a separate data center that houses the computational power, storage, and applications necessary to support an enterprise business. Enterprises are also connected to the internet, and a firewall protects internet
connectivity. Branch offices have their own LANs with their own resources, such as
printers and servers, and may store corporate information, but their operations largely
depend on the main office, hence the network connection. They connect to the main office over a WAN or the internet, using routers as gateways.
Networks support the activities of many businesses and organizations and are required to
be secure, resilient, and to allow growth. The design of a network requires considerable
technical knowledge. Network engineers commonly use validated network architecture
models to assist in the design and implementation of the network. Examples of validated
models are the Cisco three-tier hierarchical network architecture model, the spine-leaf
model, and the Cisco Enterprise Architecture model. These models provide hierarchical
structure to enterprise networks, which is used to design the network architecture in the form of layers, for example, LAN Access and LAN Core, with each layer providing different functionality.
Note: The words internet and web are often used interchangeably, but they do not share
the same meaning. The internet is a global network that interconnects many networks
and therefore provides a worldwide communication infrastructure. The World Wide Web
describes one way to provide and access information over the internet using a web
browser. It is a service that relies on connections provided by the internet for its function.
The exchange of data within the internet follows the same well-defined rules, called
protocols, designed specifically for internet communication. These protocols specify,
among other things, the usage of hyperlinks and Uniform Resource Identifiers (URIs). The
internet is a base for various data exchange services, such as email or file transfers. It is a
common global infrastructure, composed of many computer networks connected
together that follow communication rules standardized for the internet. A set of
documents called RFCs defines the protocols and processes of the internet.
1.3 Exploring the Functions of Networking
Components of a Network
A network can be as simple as two PCs connected by a wire or as complex as several
thousand devices that are connected through different types of media. The elements that
form a network can be roughly divided into three categories: devices, media, and services.
Devices are interconnected by media, which provides the channel over which the data
travels from source to destination. Services are software and processes that support
common networking applications in use today.
Network Devices
Devices can be further divided into endpoints and intermediary devices:
• Endpoints: In the context of a network, endpoints are called end-user devices and
include PCs, laptops, tablets, mobile phones, game consoles, and television sets.
Endpoints are also file servers, printers, sensors, cameras, manufacturing robots,
smart home components, and so on. All end devices were physical hardware units
years ago. Today, many end devices are virtualized, meaning that they do not exist
as separate hardware units anymore. In virtualization, one physical device is used to emulate multiple end devices, including all the hardware components that each end device would require. The emulated computer system operates as if it were a separate physical unit and has its own operating system and other required
software. In a way, it behaves like a tenant living inside a host physical device,
using its resources (processor power, memory, and network interface capabilities)
to perform its functions. Virtualization is commonly applied to servers to optimize
resource utilization, because server resources are often underutilized when they
are implemented as separate physical units.
• Intermediary devices: These devices interconnect end devices or interconnect
networks. In doing so, they perform different functions, which include
regenerating and retransmitting signals, choosing the best paths between
networks, classifying and forwarding data according to priorities, filtering traffic to
allow or deny it based on security settings, and so on. As endpoints can be
virtualized, so can intermediary devices or even entire networks. The concept is
the same as in the endpoint virtualization—the virtualized element uses a subset
of resources available at the physical host system. Intermediary devices that are
commonly found in enterprise networks are:
o Switches: These devices enable multiple endpoints such as PCs, file servers,
printers, sensors, cameras, and manufacturing robots to connect to the
network. Switches are used to allow devices to communicate on the same
network. In general, a switch or group of interconnected switches attempts to forward messages from the sender so that they are received only by the destination device. Usually, all the devices that connect to a single switch or
a group of interconnected switches belong to a common network and can
therefore communicate directly with each other. If an end device wants to
communicate with a device that is on a different network, then it requires
"services" of a device that is known as a router, which connects different
networks together.
o Routers: These devices connect networks and intelligently choose the best
paths between networks. Their main function is to route traffic from one
network to another. For example, you need a router to connect your office
network to the internet. An analogy that may help you understand the
basic function of switches and routers is to imagine a network as a
neighborhood. A switch is a street that connects the houses, and routers
are the crossroads of those streets. The crossroads contain helpful
information such as road signs to help you in finding a destination address.
Sometimes, you might need the destination after just one crossroad, but
other times you might need to cross several. The same is true in
networking. Data sometimes "stops" at several routers before it is
delivered to the final recipient. Certain switches combine functionalities of
routers and switches, and they are called Layer 3 switches.
o APs: These devices allow wireless devices to connect to a wired network.
An AP usually connects to a switch as a standalone device, but it also can be
an integral component of the router itself.
o WLCs: These devices are used by network administrators or network
operations centers to facilitate the management of many APs. The WLC
automatically manages the configuration of wireless APs.
o Cisco Secure Firewalls: Firewalls are network security systems that monitor
and control the incoming and outgoing network traffic based on
predetermined security rules. A firewall typically establishes a barrier
between a trusted, secure internal network and another outside network,
such as the internet, that is assumed not to be secure or trusted.
o Intrusion Prevention System (IPS): An IPS is a system that performs a deep
analysis of network traffic while searching for signs that behavior is
suspicious or malicious. If the IPS detects such behavior, it can take
protective action immediately. An IPS and a firewall can work in
conjunction to defend a network.
o Management Services: A modern management service offers centralized
management that facilitates designing, provisioning, and applying policies
across a network. It includes features for discovery and management of
network inventory, management of software images, device configuration
automation, network diagnostics, and policy configuration. It provides end-
to-end network visibility and uses network insights to optimize the
network. An example of a centralized management service is Cisco DNA
Center.
In user homes, you can often find one device that provides connectivity for wired devices,
connectivity for wireless devices, and provides access to the internet. You may be
wondering which kind of device it is. It has the characteristics of a switch, because it offers physical ports into which local devices plug; of a router, because it enables users to access other networks and the internet; and of a WLAN AP, because it allows wireless devices to connect to it. It is all three of these devices in a single package and is often called a wireless router.
Another example of a network device is a file server, which is an end device. A file server
runs software that implements standardized protocols to support file transfer from one
device to another over a network. This service can be implemented by either FTP or TFTP.
Having an FTP or TFTP server in a network allows uploads and downloads of files over the
network. An FTP or TFTP server is often used to store backup copies of files that are
important to network operation, such as operating system images and configuration files.
Having those files in one place makes file management and maintenance easier.
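As a hedged illustration of this practice, the Cisco IOS copy command can send the current configuration to a TFTP server or pull an operating system image from it. The server address 192.0.2.50 and the file names below are hypothetical placeholders.

Switch# copy running-config tftp://192.0.2.50/switch-backup.cfg
! Uploads the current running configuration to the TFTP server as a backup.
Switch# copy tftp://192.0.2.50/ios-image.bin flash:
! Downloads an operating system image from the same server into local flash storage.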
Media
Media are the physical elements that connect network devices. Media carry
electromagnetic signals that represent data. Depending on the medium, electromagnetic
signals can be guided in wires and fiber-optic cables or propagated through wireless
transmissions, such as Wi-Fi, mobile, and satellite. Different media have different
characteristics and selecting the most appropriate medium depends on the circumstances,
such as the environment in which the media is used, distances that need to be covered,
availability of financial resources, and so on. For instance, a satellite connection (air
medium) might be the only available option for a filming crew working in a desert.
Connecting wired media to network devices is considerably eased by the use of
connectors. A connector is a plug, which is attached to each end of the cable. The most
common type of connector on a LAN is the plug that looks like an analog phone connector.
It is called an RJ-45 connector.
To connect to the media that links a device to a network, devices use network interface cards (NICs). The media "plugs" directly into the NIC. NICs translate the data
created by the device into a format that can be transmitted over the media. NICs used on
LANs are also called LAN adapters. End devices used in LANs usually come with several
types of NICs installed, such as wireless NICs and Ethernet NICs. NICs on a LAN are
uniquely identified by a MAC address. The MAC address is hardcoded or "burned in" by
the NIC manufacturer. NICs used to interface with WANs are called WAN interface cards
(WICs), and they use serial links to connect to a WAN network.
Network Services
Services in a network comprise software and processes that implement common network
applications, such as email and web, including the less obvious processes implemented
across the network. These generate data and determine how data is moved through the
network.
Companies typically centralize business-critical data and applications into central locations
called data centers. These data centers can include routers, switches, firewalls, storage
systems, servers, and application delivery controllers. Similar to data center centralization,
computing resources can also be centralized off-premises in the form of a cloud. Clouds
can be private, public, or hybrid, and they aggregate the computing, storage, network, and
application resources in central locations. Cloud computing resources are configurable and
shared among many end users. The resources are transparently available, regardless of
the user's point of entry (a personal computer at home, an office computer at work, a
smartphone or tablet, or a computer on a school campus). Data stored by the user is
available whenever the user is connected to the cloud.
Scalability: Scalability indicates how easily the network can accommodate more users and
data transmission requirements without affecting current network performance. If you
design and optimize a network only for the current conditions, it can be costly and difficult
to meet new needs when the network grows.
Security: Security tells you how well the network is defended from potential threats. Both
network infrastructure and the information that is transmitted over the network should
be secured. The subject of security is important, and defense techniques and practices are
constantly evolving. You should consider security whenever you take actions that affect
the network.
Quality of Service (QoS): QoS includes tools, mechanisms, and architectures, which allow
you to control how and when applications use network resources. QoS is essential for prioritizing traffic when the network is congested (see the brief configuration sketch after this list of characteristics).
Cost: Cost indicates the general expense for the initial purchase of the network
components and any costs associated with installing and maintaining these components.
Virtualization: Traditionally, network services and functions have only been provided via
hardware. Network virtualization creates a software solution that emulates network
services and functions. Virtualization solves many of the networking challenges in today’s
networks, helping organizations automate and provision the network from a central management point.
These characteristics and attributes provide a means to compare various networking
solutions.
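The QoS characteristic mentioned above can be made more concrete with a minimal, hedged sketch of traffic prioritization on a Cisco IOS device using the Modular QoS CLI. The class name, policy name, interface, and percentage are hypothetical; a real deployment would classify traffic and assign bandwidth according to its own policy.

class-map match-any VOICE
 match ip dscp ef
! Classifies packets marked with the Expedited Forwarding DSCP value as voice traffic.
policy-map WAN-EDGE
 class VOICE
  priority percent 20
! Gives the classified voice traffic priority treatment, up to 20 percent of the bandwidth.
 class class-default
  fair-queue
! Queues all remaining traffic fairly.
interface GigabitEthernet0/1
 service-policy output WAN-EDGE
! Applies the policy to traffic leaving the interface.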
The logical and physical topology of a network can be of the same type. However, physical
and logical topologies often differ. For example, an Ethernet hub is a legacy device that
functions as a central device to which other devices connect in a physical star. The
characteristic of a hub is that it "copies" every signal received on one port to all other
ports. So, a signal sent from one node is received by all other nodes. This behavior is
typical of a bus topology. Because data flow has the characteristics of a bus topology, it is
a logical bus topology.
The logical topology is determined by the intermediary devices and the protocols chosen
to implement the network. The intermediary devices and network protocols both
determine how end devices access the media and how they exchange data.
A physical star topology in which a switch is the central device is by far the most common
in implementations of LANs today. When using a switch to interconnect the devices, both
the physical and the logical topologies are star topologies.
• Batch applications: Applications such as FTP and TFTP are considered batch
applications. Both are used to send and receive files. Typically, a user selects a
group of files that need to be retrieved and then starts the transfer. Once the
download starts, no additional human interaction is required. The amount of
available bandwidth determines the speed at which the download occurs. While
bandwidth is important for batch applications, it is not critical. Even with low
bandwidth, the download is completed eventually. Their principal characteristics
are:
o Typically do not require direct human interaction.
o Bandwidth is important but not critical.
o Examples: FTP, TFTP, inventory updates.
Both the TCP/IP and OSI models present a network in terms of layers. Layers group networking tasks by the
functions that they perform in implementing a network. Each layer has a particular role. In
performing its functions, a layer deals with the layer above it and the layer below it, which
is called "vertical" communication. A layer at the source creates data that is intended for
the same layer on the destination device. This communication of two corresponding layers
is also termed "horizontal".
The second aspect of communication models is protocols. In the same way that
communication functions are grouped in layers, so are the protocols. People usually talk
about the protocols of certain layers, protocol architectures, or protocol suites. In fact,
TCP/IP is a protocol suite.
A networking protocol is a set of rules that describe one type of communication. All
devices participating in internetworking agree with these rules, and this agreement makes
communication successful. Protocols define rules used to implement communication
functions.
Note: As defined by the ISO/International Electrotechnical Commission (IEC) 7498-1:1994
ISO standard, the word "open" in the OSI acronym indicates systems that are open for the
exchange of information using applicable standards. Open does not imply any particular
systems implementation, technology, or means of interconnection, but it refers to the
mutual recognition and support of the applicable standards.
While both ISO OSI and TCP/IP models define protocols, the protocols that are included in
TCP/IP are widely implemented in networking today. Nonetheless, as a general model, ISO
OSI aims at providing guidance for any type of computer system, and it is used in
comparing and contrasting different systems. Therefore, ISO OSI is called the reference
model.
Standards-based, layered models provide several benefits:
• Link layer: This layer is also known as the media access layer. It defines protocols
used to interface the directly connected network. Tasks of the protocols at this
layer are closely related to the characteristics of the physical medium and deal
primarily with physical network details. The link layer is also referred to as a
network interface, network access, or even data link layer. Because there are many
different types of physical networks, there are many link layer protocols. An
example of the TCP/IP link layer protocol is Ethernet. The link layer introduces
physical addresses, sometimes called hardware addresses or MAC addresses, to
identify devices sharing a particular physical network segment.
• Internet layer: This layer routes data from the source to the destination, provides
a means to obtain information on reaching other networks and deals with
reporting errors. The Internet layer provides logical addressing. Logical addressing
ensures that a host is uniquely identified. An Internet layer logical address, called
an IP address, is used to identify a host. This address is valid globally and aims at
uniquely identifying the host. End devices, such as laptops, mobile phones, and
servers are configured with a logical address before connecting to the network. IP
protocols—namely, IPv4 and the newer version, IPv6—reside in this layer. This
layer serves the upper Transport layer and passes information to the Link layer.
• Transport layer: This layer, together with the internet layer, forms the core of the TCP/IP architecture. It is placed between the "data mover" protocols of the link and internet layers
and software-oriented protocols of the application layer. There are two main
protocols at this layer, TCP and UDP. These protocols serve many application-layer
protocols. Transport services "prepare" application data for transfer over the
network, follow the transfer process, and ensure that data from different
applications is not mixed. To distinguish between the applications, the transport
layer identifies each application with its own addressing. This addressing is valid
locally, within one host, unlike addressing at the Internet layer, which is valid
globally.
• Application layer: The functions of this layer mainly deal with user interaction. It
supports user applications by providing protocols and services that let you actually
use the network. It also supports network application programming interfaces
(APIs) that allow programs to access the network services, regardless of the
operating system that they are running on. This layer accommodates protocols
such as HTTP, HTTPS, Domain Name System (DNS), FTP, Simple Mail Transfer
Protocol (SMTP), Secure Shell (SSH), and many more. These protocols facilitate
applications for web browsing, file transfer, names to IP addresses resolution,
sending of emails, remote access to devices, and many other functions that
network users perform.
2.5 Introducing the Host-To-Host Communications Model
Peer-To-Peer Communications
The term peer means the equal of a person or object. By analogy, peer-to-peer
communication means communication between equals. This concept is at the core of
layered modeling of a communication process. Although a layer deals with layers directly
above and below it in performing its functions, the data it creates is intended for the
corresponding layer at the receiving host. The concept is also called the horizontal
communication.
Except for the physical layer, functions of all layers are typically implemented in software.
Therefore, you hear about the logical communication of layers. Software processes at
different hosts are not communicating directly. Most likely, the hosts are not even
connected directly. Nevertheless, processes on one host manage to accomplish logical
communication with the corresponding processes on another host.
Note: The term peer-to-peer is often used in computing to indicate an application
architecture in which application tasks and workloads are equally distributed among
peers. In contrast to peer-to-peer are client-server architectures, in which tasks and workloads are divided unequally.
Applications create data. The intended recipient of this data is the application at the
destination host, which can be distant. In order for application data to reach the recipient,
it first needs to reach the directly connected physical network. In the process, the data is
said to pass down the local protocol stack. First, an application protocol takes user data
and processes it. When processing by the application protocol is done, it passes processed
data down to the transport layer, which does its processing. The logic continues down the
rest of the protocol stack until data is ready for the physical transmission. The data
processing that happens as data traverses the protocol stack alters the initial data, which
means that original application data is not the same as the data represented in the
electromagnetic signal transmitted. On the receiving side, the process is reversed. The
signals that arrive at the destination host are received from the media by the link layer,
which serves data to the internet layer. From there, data is passed up the stack all the way
to the receiving application. Thus, the data received as the electromagnetic signal differs from the data that will be delivered to the application, but the data that the application sees is the same data that the sending application created.
Passing data up and down the stack is also referred to as vertical communication. For the horizontal, peer-to-peer communication of layers to happen, vertical communication down the stack at the sender and up the stack at the receiver must occur first.
As data passes down or up the stack, the unit of data changes—and so does its name. The
generic term used for a data unit, regardless of where it is found in the stack, is a protocol
data unit (PDU). Its name depends on where it exists in the protocol stack.
Although there is no universal naming convention for PDUs, they are typically named as
follows:
• Data: general term for the PDU that is used at the Application layer
• Segment: transport layer PDU
• Packet: internet layer PDU
• Frame: link layer PDU
To look into PDUs from peer-to-peer communication, you can use a packet analyzer, such
as Wireshark, which is a free and open-source packet analyzer. Packet analyzers capture
all the PDUs on a selected interface. They then examine their content, interpret it and
display it in text or using a graphical interface. Packet analyzers, sometimes also called
sniffers, are used for network troubleshooting, analysis, software and communications
protocol development, and education.
The figure shows a screenshot of a Wireshark capture, which was started on a LAN
Ethernet interface. Wireshark organizes captured information into three windows. The
top window shows a table listing all captured frames. This listing can be filtered to ease
analysis. In the example, the filter is set to show only frames that carry DNS protocol data.
The second (middle) window, the details pane, shows the details of one frame selected from the list. Information is given first for the lower layers. For each layer, the information
includes data added by the protocol at that layer. In the third window (not shown in the
figure), the bytes pane displays information selected in the details pane, as it was
captured, in bytes.
In the figure, you can also see how Wireshark organizes analyzed information. In the
details pane, it displays data it finds in headers. It organizes header information by layers,
starting with the Link layer header and proceeding to the application layer. If you look
closely at the display of each header, you will see that information is organized into
meaningful groups—these groups are recognizable by the names, followed by a colon, and
a value, for example, "Source: Cisco_29:ec:52 (04:fe:7f:29:ec:52)" or "Time to live: 127."
These groupings correspond to how information is organized in the header. Headers have
fields and the names Wireshark uses correspond to header field names. For instance,
Source and Destination in Wireshark correspond to Source Address and Destination
Address fields of a header.
2.6 Introducing the Host-To-Host Communications Model
Encapsulation and De-Encapsulation
Information that is transmitted over a network must undergo a process of conversion at
the sending and receiving ends of the communication. The conversion process is known as
encapsulation and de-encapsulation of data. Both processes provide means for
implementation of the concept of horizontal communication where the layer on the
transmitting side is communicating with the corresponding layer on the receiving side.
Have you ever opened a very large present and found a smaller box inside? And then an
even smaller box inside that one, until you got to the smallest box and, finally, to your
present? The process of encapsulation operates similarly in the TCP/IP model. The
application layer receives the user data and adds to it its information in the form of a
header. It then sends it to the transport layer. This process corresponds to putting a
present (user data) into the first box (a header), and adding some information on the box
(application layer data). The transport layer also adds its own header before sending the
package to the Internet layer, placing the first box into the second box and writing some
transport-related information on it. This second box must be larger than the first one to fit
the content. This process continues at each layer. The link layer adds a trailer in addition
to the header. The data is then sent across the physical media.
Note: Encapsulation increases the size of the PDU. The added information is required for
handling the PDU and is called overhead to distinguish it from user data.
The figure represents the encapsulation process. It shows how data passes through the
layers down the stack. The data is encapsulated as follows:
1. The user data is sent from a user application to the application layer, where the
application layer protocol adds its header. The PDU is now called data.
2. The transport layer adds the transport layer header to the data. This header
includes its own information, indicating which application layer protocol has sent
the data. The new data unit is now called a segment. The segment will be further
treated by the Internet layer, which is the next to process it.
3. The Internet layer encapsulates the received segment and adds its own header to
the data. The header and the previous data become a packet. The Internet layer
adds the information used to send the encapsulated data from the source of the
message across one or more networks to the final destination. The packet is then
passed down to the Link layer.
4. The Link layer adds its own header and also a trailer to form a frame. The trailer is
usually a data-dependent sequence, which is used to check for transmission errors.
An example of such a sequence is a Frame Check Sequence (FCS). The receiver will
use it to detect errors. This layer also converts the frame to a physical signal and
sends it across the network using physical media.
At the destination, each layer looks at the information in the header added by its
counterpart layer at the source. Based on this information, each layer performs its
functions and removes the header before passing it up the stack. This process is
equivalent to unpacking a box. In networking, this process is called de-encapsulation.
The de-encapsulation process is like reading the address on a package to see if it is
addressed to you and then, if you are the recipient, opening the package and removing
the contents of the package.
The following is an example of how the destination device de-encapsulates a sequence of
bits:
1. The link layer reads the whole frame and looks at both the frame header and the
trailer to check if the data has any errors. Typically, if an error is detected, the
frame is discarded, and other layers may ask for the data to be retransmitted. If
the data has no errors, the link layer reads and interprets the information in the
frame header. The frame header contains information relevant for further
processing, such as the type of encapsulated protocol. If the frame header
information indicates that the frame should be passed to upper layers, the link
layer strips the frame header and trailer and then passes the remaining data up to
the Internet layer to the appropriate protocol.
2. The internet layer examines the internet header in the packet received from the
link layer. Based on the information it finds in the header, it decides either to
process the packet at the same layer or to pass it up to the transport layer. Before
the internet layer passes the message to the appropriate protocol on the transport
layer, it first removes the packet header.
3. The transport layer examines the segment header of the received segment. The
information included in the segment header indicates which application layer
protocol should receive the data. The transport layer strips the segment header
from the segment and hands over data to the appropriate application layer
protocol.
4. The application layer protocol strips the data header. It uses the information in the
header to process the data before passing it to the user application.
Not all devices process PDUs at all layers. For instance, a switch might only process a PDU
at the link layer, meaning that it will “read” only frame information that is contained in the
frame header and trailer. Based on the information found in the frame header and trailer,
the switch will either forward the frame unchanged out of a specific port, forward it out all
ports except for the incoming port, or discard the frame if it detects errors. Routers might
look "deeper" into the PDU. A router de-encapsulates the frame header and trailer and
relies on the information contained in the packet header to make its forwarding decisions. If the router is filtering packets, it may also look even deeper, into the
information contained in the segment header before it decides on what to do with the
packet.
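As a hedged illustration of this deeper inspection, an extended access list on a Cisco router can match not only the addresses in the packet header but also the TCP port carried in the segment header. The addresses, port, and interface below are hypothetical.

access-list 100 deny tcp any host 192.0.2.10 eq 23
! Drops Telnet traffic (TCP port 23, read from the segment header) destined for one host.
access-list 100 permit ip any any
! Permits all other traffic.
interface GigabitEthernet0/0
 ip access-group 100 in
! Applies the filter to packets arriving on this interface.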
A host performs encapsulation as it sends data and performs de-encapsulation as it
receives it; it can perform both functions simultaneously as part of multiple
communications it maintains.
Note: In networking, you will often encounter the usage of both OSI and TCP/IP models,
sometimes even interchangeably. You should be familiar with both, so you can
competently communicate with network engineers.
2.7 Introducing the Host-To-Host Communications Model
TCP/IP Stack vs OSI Reference Model
The OSI model and the TCP/IP stack were developed by different organizations at
approximately the same time. The purpose was to organize and communicate the
components that guide the transmission of data.
The speed at which the TCP/IP-based Internet was adopted and the rate at which it
expanded caused the OSI protocol suite development and acceptance to lag behind.
Although few of the protocols that were developed using the OSI specifications are in
widespread use today, the seven-layer OSI model has made major contributions to the
development of other protocols and products for all types of new networks.
The layers of the TCP/IP stack correspond to the layers of the OSI model:
• The TCP/IP link layer corresponds to the OSI physical and data link layers and is
concerned primarily with interfacing with network hardware and accessing the
transmission media. Like layer two of the OSI model, the link layer of the TCP/IP
model is concerned with hardware addresses.
• The TCP/IP internet layer aligns with the network layer of the OSI model and
manages the addressing of and routing between network devices.
• The TCP/IP transport layer, like the OSI transport layer, provides the means for
multiple host applications to access the network layer in a best-effort mode or
through a reliable delivery mode.
• The TCP/IP application layer supports applications that communicate with the
lower layers of the TCP/IP model and corresponds to the separate application,
presentation, and session layers of the OSI model.
Because the functions of each OSI layer are clearly defined, the OSI layers are used when
referring to devices and protocols.
Take, for example, a “Layer 2 switch,” which is a LAN switch. The “Layer 2” in this case
refers to the OSI Layer 2, making it easy for people to know what is meant, as they
associate the OSI Layer 2 with a clearly defined set of functions.
Similarly, it is often said that IP is a “network layer protocol” or a “Layer 3 protocol” as the
TCP/IP’s internet layer can be matched to the OSI network layer.
Next, look at the TCP/IP transport layer, which corresponds to the OSI transport layer. The
functions defined at both layers are the same. However, different specific protocols are
involved. Because of this, it is common to refer to the TCP and UDP as “Layer 4 protocols”
again using the OSI layer number.
Another example is the term “Layer 3 switch.” A switch was traditionally thought of as a
device that works on the link layer level (Layer 2 of the OSI model). A Layer 3 switch is also
capable of providing Internet Layer (Layer 3 of the OSI model) services, which were
traditionally provided by routers.
It is very important to remember that the OSI model terminology and layer numbers are
often used rather than the TCP/IP model terminology and layer numbers when referring
to devices and protocols.
Operating multitasking software is part of your job as a networking engineer. It all begins with using the CLI, the primary user interface for configuring, monitoring, and maintaining Cisco devices.
As a networking engineer, you will be able to operate Cisco IOS software and perform
various essential tasks, including:
Networking devices run particular versions of the Cisco IOS Software. The IOS version
depends on the type of device being used and the required features. While all devices
come with a default IOS and feature set, it is possible to upgrade the IOS version and
feature set to obtain additional capabilities.
The portion of the operating system that interfaces with applications and the user is a
program known as a shell. Unlike common end devices, Cisco network devices do not have
a keyboard, monitor, or mouse device to allow direct user interaction. However, users can
interact with the shell using their own computer and accessing a CLI or a GUI. The figure
illustrates the examples of CLI-based and GUI-based access to a shell.
When using a CLI, the user interacts directly with the system in a text-based environment
by entering commands on the keyboard at a command prompt. The system executes the
command, often providing textual output. The CLI requires very little overhead to operate.
But the user must know the underlying structure that controls the system.
GUIs may not always be able to provide all features that are available in the CLI. Some
tasks will require you to use the CLI because they are not supported in the GUI.
Regardless of which connection method you use, access to the Cisco IOS CLI is generally
referred to as an executive or EXEC session. The features that you can access via the CLI
vary according to the version of Cisco IOS Software installed and the type of device.
Note: Some devices, such as routers, may also support a legacy auxiliary port that was
used to establish a CLI session remotely using a modem. Similar to a console connection,
the auxiliary (AUX) port is OOB and does not require networking services to be configured
or available.
The services that are provided by Cisco IOS Software are generally accessed using a CLI.
The CLI is a text-based interface that is similar to the interface of the old Microsoft operating system called MS-DOS.
Once you access the shell via the CLI or GUI, you can enter different commands.
Commands are used to configure, monitor, and manage the device and are executed by
the device operating system. While Cisco IOS Software provides core software that
extends across many products, the details of its operation and also the available services
may vary across different devices. Therefore, different devices will have different
commands available for execution.
Cisco IOS Software CLI Functions
Cisco IOS Software is designed as a modal operating system. The term modal describes a
system that has various modes of operation. Each mode has its own set of commands and
command history and is intended for usage for a specific group of tasks. The CLI uses a
hierarchical structure for the modes. This hierarchy starts with the least specific command
mode or higher-level mode and proceeds with more specific or lower-level command
modes. A more specific command mode can be entered from the less specific mode,
which precedes it in the hierarchy.
To enter commands into the CLI, type in or copy and paste the entries within one of the
several console command modes. Each command mode is indicated with a distinctive
visual prompt. The term prompt is used because the system is prompting you to make an
entry. Pressing Enter instructs the device to parse and execute the command.
Note: It is important to remember that the command is executed as soon as you enter it.
If you enter an incorrect command on a production router, it can negatively affect the
network.
Each command mode has a name and a distinctive visual prompt by which it can be
recognized. By default, every prompt begins with the device name. Following the device
name, the remainder of the prompt uses special characters and words to indicate the
mode. As you use commands and change the operation mode, the prompt changes to
reflect the current context. To enter a command, you can either type it in or copy and paste it. Once you are done, press Enter, and the device will parse and execute the command if it was entered correctly.
The example in the figure shows a CLI prompt switch>. The device in the example is
named switch, and the operating CLI mode is indicated by the greater-than sign (>).
As a security feature, to limit the commands that a user can view and execute, Cisco IOS
Software separates CLI sessions into two primary access levels:
• User EXEC: Allows a person to execute only a limited number of basic monitoring
commands.
• Privileged EXEC: Allows a person to execute all device commands, for example, all configuration and management commands. This level can be password protected to allow only authorized users to execute the privileged set of commands. An example of moving between these two levels is shown below.
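In the hedged sketch that follows, the enable command moves a session from user EXEC to privileged EXEC on a device named Switch (the name is a placeholder), and the disable command returns it to user EXEC; if an enable password is configured, the device prompts for it first.

Switch> enable
Switch#
! The prompt changes from > to #, indicating privileged EXEC mode.
Switch# disable
Switch>
! The prompt returns to >, indicating user EXEC mode.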
3.4 Operating Cisco IOS Software
Cisco IOS Software Modes
Cisco IOS Software has various modes that are hierarchically structured. The highest
hierarchy is the user EXEC mode. It is followed by the privileged EXEC mode. From the
privileged EXEC mode, you can proceed to the global configuration mode and from there
to more specific configuration modes such as interface configuration mode and router
configuration mode, as shown below.
Because these modes have a hierarchy, you can only access a lower-level mode from a
higher-level mode. For example, to access Global Configuration Mode, you must be in the
Privileged EXEC mode. Each mode is used to accomplish particular tasks and has a specific
set of commands that are available in this mode. Interface-specific configuration
commands are available only in the Interface Configuration Mode. To access interface
configuration commands, your full path through operation mode hierarchy would be: User
EXEC Mode > Privileged EXEC Mode > Global Configuration Mode > Interface
Configuration Mode. All commands that you enter and execute in Interface Configuration
Mode apply only to the device interface you chose to configure.
You can tell the operation mode that you are in by looking at the prompt at the beginning
of the line. Normally when you connect to a device, you are allowed access to the User
EXEC Mode. In User EXEC Mode, you can change the console connection settings, perform
basic connectivity tests, and display system information, but you cannot configure the
device. To leave the User EXEC Mode (to close the console connection), you can use either
the logout, exit, or the quit commands.
To move between the modes, you must use predefined commands. The following table
offers an overview of basic IOS Software operation modes, commands or methods to access and leave them, their prompt identifications, and a short description.
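The following hedged walk-through illustrates the commands commonly used to move between these modes on a device named Switch; the device name and interface are placeholders, and each prompt reflects the current mode.

Switch> enable
! User EXEC to privileged EXEC mode.
Switch# configure terminal
! Privileged EXEC to global configuration mode.
Switch(config)# interface Ethernet 0/0
! Global configuration to interface configuration mode.
Switch(config-if)# exit
! Back up one level, to global configuration mode.
Switch(config)# end
! Return directly to privileged EXEC mode.
Switch#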
You do not have to return to global configuration mode in order to move to a different
configuration mode. Rather, you can enter another configuration mode by typing the
appropriate command at any configuration mode prompt. (Note: you will not be able to
get any help for commands that are not valid at the prompt.)
The figure shows two configuration examples, both performing the same task of providing
descriptions for Ethernet 0/0 and Ethernet 0/1 interfaces. In the configuration on the left,
the administrator started in the Global Configuration Mode and entered the Interface
Configuration Mode by typing the command interface Ethernet 0/0. Note how the prompt
changed from SW1(config)# to SW1(config-if)#. In the second line, the administrator typed
the description command. In the next line, the administrator typed the interface Ethernet
0/1 command; this command causes the switch to enter Interface Configuration mode for
the Ethernet 0/1 interface. Note that the prompt did not change because the prompt does
not indicate the specific interface. The last line applies the description command to the
Ethernet 0/1 interface. In the example on the right, the same configuration is performed,
by exiting and re-entering the Interface Configuration Mode, as evident in the third and
the fourth lines. Both are valid configurations and have the same results.
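Based on this description, the two equivalent configurations would look roughly like the following hedged sketch; the description texts are hypothetical placeholders.

First version (moving directly from one interface to the next):
SW1(config)# interface Ethernet 0/0
SW1(config-if)# description Link to Floor1 Users
SW1(config-if)# interface Ethernet 0/1
SW1(config-if)# description Link to Floor2 Users

Second version (exiting and re-entering Interface Configuration Mode):
SW1(config)# interface Ethernet 0/0
SW1(config-if)# description Link to Floor1 Users
SW1(config-if)# exit
SW1(config)# interface Ethernet 0/1
SW1(config-if)# description Link to Floor2 Users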
As a networking engineer, you will work with LAN switches, and for successful completion
of various tasks, you first need to:
• Explain what a LAN is and be able to identify LAN components.
• Understand why you need switches.
• List important switch features and characteristics.
LANs can vary widely in size. A LAN may consist of only two computers in a home office or
small business, or it may include hundreds of computers in a large corporate office or
multiple buildings. A LAN is typically a network within your own premises (your
organization's campus, building, office suite, or even your home). Organizations or
individuals typically build and own the whole infrastructure, all the way down to the
physical cabling.
The defining characteristics of LANs, in contrast to WANs, include their typically higher
data transfer rates, smaller geographic area, and the lack of need for leased
telecommunication lines.
A WAN is a data communications network that provides access to other networks over a
large geographical area. WANs use facilities that an ISP or carriers, such as a telephone or
cable company, provides. The provider connects locations of an organization to each other, to locations of other organizations, to external services, and to remote users. WANs
carry various traffic types such as voice, data, and video.
4.3 Introducing LANs
LAN Components
On the first LANs, devices with Ethernet connectivity were mostly limited to PCs, file
servers, print servers, and legacy devices such as hubs and bridges. Hubs and bridges were
replaced by switches and are no longer used.
Today, a typical small office will include routers, switches, access points (APs), servers, IP
phones, mobile phones, PCs, and laptops.
Regardless of its size, a LAN requires these fundamental components for its operation:
• Hosts: Hosts include any device that can send or receive data on the LAN.
Sometimes hosts are also called endpoints. Those two terms are used
interchangeably throughout the course.
• Interconnections: Interconnections allow data to travel from one point to another
in the network. Interconnections include these components:
o Network Interface Cards (NICs): NICs translate the data that is produced by
the device into a frame format that can be transmitted over the LAN. NICs
connect a device to the LAN over copper cable, fiber-optic cable, or
wireless communication.
o Network media: In traditional LANs, data was primarily transmitted over
copper and fiber-optic cables. Modern LANs (even small home LANs)
generally include a wireless LAN (WLAN).
• Network devices: Network devices, like switches and routers, are responsible for
data delivery between hosts.
o Ethernet switches: Ethernet switches form the aggregation point for LANs.
Ethernet switches operate at Layer 2 of the Open Systems Interconnection
(OSI) model and provide intelligent distribution of frames within the LAN.
o Routers: Routers, sometimes called gateways, provide a means to connect
LAN segments and provide connectivity to the internet. Routers operate at
Layer 3 of the OSI model.
o APs: APs provide wireless connectivity to LAN devices. APs operate at Layer
2 of the OSI model.
• Protocols: Protocols are rules that govern how data is transmitted between
components of a network. Here are some commonly used LAN protocols:
o Ethernet protocols (IEEE 802.2 and IEEE 802.3)
o IP
o TCP
o UDP
o Address Resolution Protocol (ARP) for IPv4 and Neighbor Discovery
Protocol (NDP) for IPv6
o Common Internet File System (CIFS)
o DHCP
Functions of a LAN
LANs provide network users with communication and resource-sharing functions:
• Data and applications: When users are connected through a network, they can
share files and even software applications. This capability makes data more easily
available and promotes more efficient collaboration on work projects.
• Resources: The resources that can be shared include input devices, such as
cameras, and output devices, such as printers.
• Communication path to other networks: If a resource is not available locally, the
LAN can provide connectivity via a gateway to remote resources, such as the
internet.
Switches provide important functions that result in even greater benefits for eliminating network congestion. Switches connect LAN segments, determine the segment to which to send the data, and reduce network traffic. Some important characteristics of switches are:
• High port density: Switches have high port densities: 24-, 32-, and 48-port
switches operate at speeds of 100 Mbps, 1 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, and
100 Gbps. Large enterprise switches may support hundreds of ports.
• Large frame buffers: The ability to store more received frames before having to
start dropping them is useful, particularly when there may be congested ports
connected to servers or other heavily used parts of the network.
• Port speed: Depending on the switch, ports may support a range of bandwidths. Ports of 100 Mbps, 1 Gbps, and 10 Gbps are expected, but 40- or 100-Gbps ports allow even more flexibility.
• Fast internal switching: Having fast internal switching allows higher bandwidths:
100 Mbps, 1 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, and 100 Gbps.
• Low per-port cost: Switches provide high port density at a lower cost. For this
reason, LAN switches can accommodate network designs that feature fewer users
per segment. This feature, therefore, increases the average available bandwidth
per user.
Switches use ASICs, which are fundamental to how an Ethernet switch works. An ASIC is a
silicon microchip designed for a specific task (such as switching or routing packets) rather than for general-purpose processing, as a CPU is. A generic CPU is too slow for forwarding
traffic in a switch. While a general-purpose CPU may be fast at running a random
application on a laptop or server, manipulating and forwarding network traffic is a
different matter. Traffic handling requires constant lookups against large memory tables.
Copper Media
Most Ethernet networks use unshielded twisted-pair (UTP) copper cabling for short and
medium-length distances because of its low cost compared to fiber-optic or coaxial cable.
Ethernet over twisted-pair technologies uses twisted-pair cables for the physical layer of
an Ethernet computer network. Twisted-pair cabling is a type of wiring in which two
conductors—the forward and return conductors of a single circuit—are twisted together
for the purposes of canceling EMI from external sources (for example, electromagnetic
radiation from UTP cables and crosstalk between neighboring pairs).
A UTP cable is a four-pair wire. Each of the eight individual copper wires in a UTP cable is
covered by an insulating material. In addition, the wires in each pair are twisted around
each other. The advantage of a UTP cable is its ability to cancel interference because the
twisted-wire pairs limit signal degradation from EMI and radio frequency interference
(RFI). To further reduce crosstalk between the pairs in a UTP cable, the number of twists in
the wire pairs varies. Cables must follow precise specifications regarding how many twists
or braids are permitted per meter.
A UTP cable is used in various types of networks. When used as a networking medium, a
UTP cable has four pairs of either 22- or 24-gauge copper wire. A UTP cable that is used as
a networking medium has an impedance of 100 ohms, differentiating it from other types
of twisted-pair wiring, such as what is used for telephone wiring. A UTP cable has an
external diameter of approximately 0.43 cm (0.17 inches), and its small size can be
advantageous during installation.
Several categories of UTP cable exist:
The RJ-45 plug is the male component, which is crimped at the end of the cable. As you
look at the male connector from the front, as shown in the figure, the pin locations are
numbered from 8 on the left to 1 on the right.
The jack is the female component in a network device, wall, cubicle partition outlet, or
patch panel. As you look at the female connector from the front, as shown in the figure,
the pin locations are numbered from 1 on the left to 8 on the right.
Power over Ethernet
Power over Ethernet (PoE) describes systems that pass electric power along with data on
Ethernet cabling. This action allows a single Ethernet cable to provide both data
connection and electric power to devices such as wireless access points, IP cameras, and
VoIP phones by utilizing all four pairs in the Category 5 cable or above.
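As a hedged illustration, on a PoE-capable Cisco switch the power behavior of a port can typically be controlled per interface; the interface below is hypothetical, and automatic detection is usually the default.

interface GigabitEthernet1/0/5
 power inline auto
! Lets the switch detect an attached powered device and negotiate power automatically.
! Using power inline never instead would disable PoE on this port.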
Straight-Through or a Crossover UTP Cable?
When choosing a UTP cable, you must determine whether you need a straight-through
UTP cable or a crossover UTP cable. Straight-through cables are primarily used to connect electrically unlike devices, and crossover cables are used to connect electrically alike devices. For example, when two alike devices use the same pin to receive, the wire must be crossed over to the transmit pin on the other end.
To tell the difference between the two types of cabling, hold the ends of the cable next to
each other, with the connector side of each end facing you. As shown in the figure, the
cable is a straight-through cable if each of the eight pins corresponds to the same pin on
the opposite side. The cable is a crossover cable if some of the wires on one end of the
cable are crossed to a different pin on the other side of the cable, as shown in the figure.
Note: The need for crossover cables is considered legacy because most devices now use
straight-through cables and can internally cross-connect when a crossover is required.
When automatic medium-dependent interface crossover (auto-MDIX) is enabled on an
interface, the interface automatically detects the required cable connection type (straight-
through or crossover) and configures the connection appropriately. With auto-MDIX
enabled, you can use either type of cable to connect to other devices, and the interface
automatically corrects for any incorrect cabling.
The following figure shows when to use straight-through and crossover cables.
Optical Fiber
An optical fiber is a flexible, transparent fiber that is made of very pure glass (silica) and is
not much larger than human hair. It acts as a waveguide, or "light pipe," to transmit light
between the two ends of the fiber. Optical fibers are widely used in fiber-optic
communication, which permits transmission over longer distances and at higher
bandwidths (data rates) than other forms of communication. Fibers are used instead of
metal wires because signals travel along them with less loss and are immune to EMI.
The two fundamental components that allow a fiber to confine light are the core and the
cladding. Most of the light travels from the beginning to the end inside the core. The
cladding around the core provides confinement. The diameters of the core and cladding
are shown in the figure, but the core diameter may vary for various fiber types. In this
case, the core diameter of 9 micrometers is very small. (The diameter of a human hair is
about 50 micrometers.) The outer diameter of the cladding is a standard size of 125
micrometers. Standardizing the size means that component manufacturers can make
connectors for all fiber-optic cables.
The third element in this picture is the buffer (coating), which has nothing to do with the
confinement of the light in the fiber. Its purpose is to protect the glass from scratches and
moisture. The fiber-optic cable can be easily scratched and broken. If the fiber is
scratched, the scratch could propagate and break the fiber. Another important role of the
buffer is to keep the fiber dry.
Fiber Types
The most significant difference between multimode fiber (MMF) and single-mode fiber
(SMF) is in the ability of the fiber to send light over a long distance at high bit rates. In
general, MMF is used for shorter distances, while SMF is preferred for long-distance
communications. There are many variations of fiber for both MMF and SMF.
The most significant physical difference is in the size of the core. The glass in the two
fibers is the same, and the index of refraction (a way of measuring the speed of light in a
material) between the core and the cladding changes similarly. The diameter of the fiber
cladding is also the same. However, the core is a different size, which affects how the light
gets through the fiber. MMF supports multiple ways for the light from one source to travel
through the fiber, which is why it is called “multimode." Each path can be thought of as a
mode.
For SMF, the possible ways for light to get through the fiber have been reduced to one—a
"single mode." It is not exactly one, but it is a useful approximation.
An MMF device uses an LED as its light source, which suits short-distance transmission. An
SMF device, on the other hand, uses a laser to generate the signal, which supports higher
transmission rates over longer distances.
The table summarizes MMF and SMF characteristics.
An optical fiber connector terminates the end of an optical fiber. Various optical fiber
connectors are available. The main differences among the types of connectors are the
dimensions and methods of mechanical coupling. Generally, organizations standardize on
one type of connector, depending on the equipment that they commonly use, or they
standardize per type of fiber (one for MMF and one for SMF). There are about 70
connector types in use today.
The three types of connectors follow:
• Threaded
• Bayonet
• Push-pull
Connectors are made of the following materials:
• Metal
• Plastic sleeve
Here is a list of the most common types of fiber connectors and their typical uses:
The SFP+ transceivers are an enhanced version of SFP transceivers. In LAN networking
devices, SFP+ modules support 10 Gbps Ethernet. SFP and SFP+ modules look the same.
SFP and SFP+ modules can be used in combination with LC or RJ45 connectors.
Different Cisco networking devices support different SFP and SFP+ modules. Different SFP
and SFP+ modules also support different types and lengths of fiber optic cables. You
should always check the device specifications and compatibility information.
5.3 Exploring the TCP/IP Link Layer
Ethernet Frame Structure
Bits that are transmitted over an Ethernet LAN are organized into frames.
In Ethernet terminology, the container into which data is placed for transmission is called
a frame. The frame contains header information, trailer information, and the actual data
that is being transmitted.
There are several types of Ethernet frames, while the Ethernet II frame is the most
common type and is shown in the figure. This frame type is often used to send IP packets.
The table shows the fields of an Ethernet II frame, which are:
• Preamble: This field consists of 8 bytes of alternating 1s and 0s that are used to
synchronize the signals of the communicating computers.
• Destination Address (DA): The DA field contains the MAC address of the network
interface card (NIC) on the local network to which the frame is being sent.
• Source Address (SA): The SA field contains the MAC address of the NIC of the
sending computer.
• Type: This field contains a code that identifies the network layer protocol.
• Payload: This field contains the network layer data. If the data is shorter than the
minimum length of 46 bytes, a string of extraneous bits is used to pad the field.
This field is also known as “data and padding”.
• FCS: The frame check sequence (FCS) field includes a checking mechanism to
ensure that the frame of data has been transmitted without corruption. The
checking mechanism that is being used is the cyclic redundancy check (CRC).
An Ethernet frame can be addressed to its destination in one of three ways:
• Unicast: Communication in which a frame is sent from one host and is addressed
to one specific destination. In a unicast transmission, there is only one sender and
one receiver. Unicast transmission is the predominant form of transmission on
LANs and within the Internet.
• Broadcast: Communication in which a frame is sent from one address to all other
addresses. In this case, there is only one sender, but the information is sent to all
the connected receivers. Broadcast transmission is used for sending the same
message to all devices on the LAN.
• Multicast: Communication in which information is sent to a specific group of
devices or clients. Unlike broadcast transmission, in multicast transmission, clients
must be members of a multicast group to receive the information.
The same MAC address can be written in several common notations, for example:
• 0000.0c43.2e08
• 00:00:0c:43:2e:08
• 00-00-0C-43-2E-08
Note: Hexadecimal (often referred to as simply hex) is a numbering system with a base of
16. This means that it uses 16 unique symbols as digits. The decimal system that you use
on a daily basis has a base of 10, which means that it is composed of 10 unique symbols, 0
through 9. The valid symbols in hexadecimal are 0,1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and
F. In decimal, A, B, C, D, E, and F equal 10, 11, 12, 13, 14, and 15, respectively. Each
hexadecimal digit is 4 bits long because it requires 4 bits in binary to count to 15. Because
a MAC address is composed of 12 hexadecimal digits, it is 48 bits long. The letters A, B, C,
D, E, and F can be either upper or lower case.
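The note's arithmetic is easy to verify with a short Python check (an illustrative snippet, not part of the course material):

# Each hexadecimal digit corresponds to 4 bits, so two hex digits describe one octet.
print(int("2E", 16))        # 46 (2 x 16 + 14)
print(format(0x2E, "08b"))  # 00101110
print(format(0xF, "04b"))   # 1111 (the highest single hex digit, decimal 15)
# A 12-digit hexadecimal MAC address is therefore 12 x 4 = 48 bits long.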
A MAC address is composed of 12 hexadecimal numbers, which means it has 48 bits.
There are two main components of a MAC. The first 24 bits constitute the Organizationally
Unique Identifier (OUI). The last 24 bits constitute the vendor-assigned, end-station
address.
• 24-bit OUI: The OUI identifies the manufacturer of the NIC. The IEEE regulates the
assignment of OUI numbers. Within the OUI, there are 2 bits that have meaning
only when used in the destination address field:
o Broadcast or multicast bit: When the least significant bit in the first octet
of the MAC address is 1, it indicates to the receiving interface that the
frame is destined for all (broadcast) or a group of (multicast) end stations
on the LAN segment. This bit is referred to as the Individual/Group (I/G)
address bit.
o Locally administered address bit: The second least significant bit of the
first octet of the MAC address is referred to as the universally or locally (U/L)
administered address bit. Normally, the combination of the OUI and a 24-
bit station address is universally unique. However, if the address is
modified locally, this bit should be set to 1.
• 24-bit, vendor-assigned, end-station address: This portion uniquely identifies the
Ethernet hardware.
The MAC address identifies a specific computer interface on a LAN. Unlike other kinds of
addresses that are used in networks, the MAC address should not be changed unless there
is some specific need to do so.
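As a short illustration of this structure, the following Python sketch splits a sample MAC address into its OUI and vendor-assigned parts and reads the I/G and U/L bits from the first octet. The helper name and its output format are assumptions made for this example only.

def parse_mac(mac: str) -> dict:
    # Accept any of the common notations shown earlier
    # (0000.0c43.2e08, 00:00:0c:43:2e:08, 00-00-0C-43-2E-08).
    hex_digits = "".join(ch for ch in mac if ch in "0123456789abcdefABCDEF")
    if len(hex_digits) != 12:
        raise ValueError("A MAC address contains 12 hexadecimal digits (48 bits)")
    first_octet = int(hex_digits[0:2], 16)
    return {
        "oui": hex_digits[0:6].upper(),                           # first 24 bits
        "vendor_assigned": hex_digits[6:12].upper(),              # last 24 bits
        "group_address": bool(first_octet & 0b00000001),          # I/G bit
        "locally_administered": bool(first_octet & 0b00000010),   # U/L bit
    }

print(parse_mac("00:00:0c:43:2e:08"))
# {'oui': '00000C', 'vendor_assigned': '432E08', 'group_address': False, 'locally_administered': False}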
5.6 Exploring the TCP/IP Link Layer
Frame Switching
The switch builds and maintains a table called the MAC address table, which matches the
destination MAC address with the port that is used to connect to a node. The MAC
address table is stored in the content-addressable memory (CAM), enabling very fast
lookups. Therefore, you might see a switch's MAC address table referred to as a CAM
table.
For each incoming frame, the destination MAC address in the frame header is compared
to the list of addresses in the MAC address table. Switches then use MAC addresses as
they decide whether to filter, forward, or flood frames. When the destination MAC
address of a received unicast frame resides on the same switch port as the source, the
switch drops the frame, which is a behavior known as filtering. Flooding means that the
switch sends the incoming frame to all active ports, except the port on which it received
the frame.
The switch creates and maintains the MAC address table by using the source MAC
addresses of incoming frames and the port number through which the frame entered the
switch. In other words, a switch learns the network topology by analyzing the source
address of incoming frames from all attached networks.
The procedure below describes a specific example when PC A sends a frame to PC B, and
the switch starts with an empty MAC address table.
The switch performs learning and forwarding actions (including in situations that differ
from the example explained above), such as:
• Learning: When the switch receives the frame, it examines the source MAC
address and incoming port number. It performs one of the following actions
depending on whether the MAC address is present in the MAC address table:
o No: Adds the source MAC address and port number to the MAC address
table and starts the default 300-second aging timer for this MAC address.
o Yes: Resets the default 300-second aging timer.
Note: When the aging timer expires, the MAC address entry is removed from the MAC
address table.
• Unicast frames forwarding: The switch examines the destination MAC address
and, if it is unicast, performs one of the following actions depending on whether
the MAC address is present in the MAC address table:
o No: Forwards the frame out all ports except the incoming port (referred to
as unknown unicast).
o Yes: Forwards the frame out of the port from which that MAC address was
learned previously.
• Broadcast or multicast frames forwarding: The switch examines the destination
MAC address and, if it is broadcast or multicast, forwards the frame out all ports
except the incoming port (unless Internet Group Management Protocol (IGMP)
snooping is used with multicast, in which case the switch sends the frame only out
specific ports).
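As a rough illustration of the learning, filtering, forwarding, and flooding behavior described above, the following Python sketch models the MAC address table as a dictionary. The port names and frame handling are illustrative assumptions, not switch software.

mac_table = {}  # learned MAC address -> port

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    # Learning: record (or refresh) the source MAC address against the incoming port.
    mac_table[src_mac] = in_port
    # Filtering: the destination sits on the same port the frame arrived on, so drop it.
    if mac_table.get(dst_mac) == in_port:
        return []
    # Known unicast: forward out the single port where the destination was learned.
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]
    # Unknown unicast, broadcast, and multicast: flood out all ports except the incoming one.
    return [port for port in all_ports if port != in_port]

ports = ["Gi0/1", "Gi0/2", "Gi0/3"]
print(handle_frame("aaaa.aaaa.aaaa", "bbbb.bbbb.bbbb", "Gi0/1", ports))  # flooded: ['Gi0/2', 'Gi0/3']
print(handle_frame("bbbb.bbbb.bbbb", "aaaa.aaaa.aaaa", "Gi0/2", ports))  # known unicast: ['Gi0/1']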
Half Duplex
If a device transmits while another is also transmitting, a collision occurs. Therefore, half-
duplex communication implements Ethernet Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) to help reduce the potential for collisions and to detect them when
they do occur. CSMA/CD allows a collision to be detected, which causes the offending
devices to stop transmitting. Each device retransmits after a random amount of time has
passed. Because the time at which each device retransmits is random, the possibility that
they again collide during retransmission is very small.
Full Duplex
Full-duplex communication is like telephone communication, in which each person can
talk and hear what the other person says simultaneously. In full-duplex communication,
the data flow is bidirectional, so that data can be sent and received at the same time. The
bidirectional support enhances performance by reducing the wait time between
transmissions. Ethernet, Fast Ethernet, and Gigabit Ethernet NICs sold today offer the full-
duplex capability. In full-duplex mode, the collision-detection circuit is disabled. Frames
that the two connected end nodes send cannot collide because the end nodes use two
separate circuits in the network cable.
The following are characteristics of full-duplex operation:
• Point-to-point only
• Attached to a dedicated switched port
• Requires full-duplex support on both ends
Each full-duplex connection uses only one port. Full-duplex communications require a
direct connection between two nodes that both support full duplex. If one of the nodes is
a switch, the switch port to which the other node is connected must be configured to
operate in the full-duplex mode. The primary cause of duplex issues is mismatched
settings on two directly connected devices. For example, the switch port is configured for full
duplex, and the attached PC is configured for a half duplex.
The duplex Command
The duplex command is used to specify the duplex mode of operation for switch ports.
The duplex command supports the following options:
• full: Sets the interface to full-duplex operation.
• half: Sets the interface to half-duplex operation.
• auto: Enables autonegotiation of the duplex mode.
For 100BASE-FX ports, the default option is full, and they cannot autonegotiate. 100BASE-
FX ports operate only at 100 Mbps in full-duplex mode. For Fast Ethernet and
10/100/1000 ports, the default option is auto. The 10/100/1000 ports operate in either
half-duplex or full-duplex mode when their speed is set to 10 or 100 Mbps, but when their
speed is set to 1000 Mbps, they operate only in the full-duplex mode.
Autonegotiation can at times produce unpredictable results. By default, when
autonegotiation fails, a Cisco Catalyst switch sets the corresponding switch port to half-
duplex mode. Autonegotiation failure occurs when an attached device does not support
autonegotiation. If the attached device is manually configured to also operate in the half-
duplex mode, there is no problem. However, if the device is manually configured to
operate in the full-duplex mode, there is a duplex mismatch. A duplex mismatch causes
late collision errors on the half-duplex end of the connection. To avoid this situation, manually set the
duplex parameters of the switch to match the attached device.
In the example, the switch ports connected to the PCs are configured for autonegotiation
because the PCs' network cards support autonegotiation. The interconnection ports between
the switches have a static configuration to avoid autonegotiation failures if, for example,
someone connects a hub or a device that supports only 10 Mbps.
You can use the show interfaces command in the privileged EXEC mode to verify the
duplex settings on a switch. This command displays statistics and statuses for all interfaces
or for the interface that you specify. The following example shows the duplex and speed
settings of a Fast Ethernet interface.
6.1 Starting a Switch
Introduction
In every enterprise environment, switches are located at the heart of the network and link
together all the other equipment, so it is very important that they are configured
correctly. It all begins with the proper physical installation and then the basic
configuration – specifying the hostname, enabling the management interface, assigning an
IP address, and configuring the default gateway and interface descriptions.
Once you have managed to configure one Cisco switch, it is relatively simple to duplicate
the process and configure more switches in a similar way. You can even copy a standard
configuration from one switch to another with only minor changes. But if something goes
wrong, it is also important to recognize that there are issues. With switches, you can
recognize that from the LED indicators.
As a network engineer, it is important that you thoroughly understand the basic processes
of starting a switch. To establish a console connection to the switch, you need:
• The appropriate cable and adapters, depending on the console port you use and
the connectors on your PC (such as an RJ-45-to-DB-9 console cable, a USB-to-DB-9
adapter, a USB Type A-to-5-pin mini-Type B cable, or a USB-C-to-RJ-45 console cable).
• PC or equivalent with a serial or USB port, an operating system device driver, and
terminal emulator software, such as HyperTerminal or Tera Term, configured with
these settings, as required by the switch or router:
o Speed: 9600 bps
o Data bits: 8
o Parity: None
o Stop bit: 1
o Flow control: None
Note: The console port can be located in various places on different switches.
When a console connection is established, you gain access to user EXEC mode by default.
7.1 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Introduction
In every LAN, Ethernet is used to exchange data locally. But suppose you want to
communicate between different LANs, for example when a user in an enterprise campus
wants to reach a user at a remote site or a web server somewhere on the internet. That
exchange will cross many different physical networks and devices. For
communication to happen, you need an addressing system that uniquely identifies every
device globally and enables the delivery of packets between them. The delivery function is
provided by the Transmission Control Protocol / Internet Protocol (TCP/IP) Internet Layer,
which provides services to exchange the data over the network between identified end
devices.
The most widely used protocol in the TCP/IP Internet layer is IPv4, which uses 32-bit addresses.
Remembering 32-bit binary IPv4 addresses would be cumbersome, so each address is written
in dotted-decimal notation. As a networking engineer, you will need to use simple math
to convert between the binary and decimal worlds. The IPv4 address, which identifies the
device on the network, is typically accompanied by a subnet mask, which defines the
network.
Working as a network engineer, you will also manipulate the subnet mask to create
subnetworks for network segments of different sizes. This activity is called subnetting.
Subnetting allows you to create multiple logical networks within a single larger network,
which is especially important in large Enterprise environments where you need to logically
organize your environment. And you can do this very efficiently – by using a more
advanced subnetting technique called variable-length subnet mask (VLSM).
As a network engineer, you will encounter different features of the TCP/IP Internet layer
in everyday work, which will include various details:
7.3 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Decimal and Binary Number Systems
Most people are accustomed to the decimal numbering system. The decimal (base 10)
system is the numbering system used in everyday mathematics. On the other hand, the
binary (base 2) system is the foundation of computer operations.
Network device addresses also use the binary system to define their location in the
network. IPv4 addresses are based on a dotted-decimal notation of a binary number: four
8-bit fields (octets) converted from binary to decimal numbers, separated by dots. An
example of an IPv4 address written in a dotted-decimal notation is 192.168.10.22. The
binary equivalent of this number is 11000000.10101000.00001010.00010110. You can use
any number of bits for a binary number, but for IPv4 addresses, you will always use 8 bits
when converting each of the decimal numbers to binary. You must have a basic
understanding of the mathematical properties of a binary system to understand
networking.
While the base number is important in any numbering system, it is the position of a digit
that confers value. In the decimal numbering system, the number 10 is represented by a 1
in the tens position and a 0 in the ones position. The number 100 is represented by a 1 in
the hundreds position, a 0 in the tens position, and a 0 in the ones position. In the decimal
system, the digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. When quantities higher than 9 are
required, the decimal system begins with 10 and continues to 99. When quantities higher
than 99 are required the decimal system begins again with 100, and so on, with each
column to the left raising the exponent by 1. All these tens, hundreds, thousands, and so
on are all powers of 10.
For example, a decimal number 27398 represents the sum (2 x 10,000) + (7 x 1,000) + (3 x
100) + (9 x 10) + (8 x 1). If you write this with exponents, the sum would look like:
(2 x 10^4) + (7 x 10^3) + (3 x 10^2) + (9 x 10^1) + (8 x 10^0).
The binary system uses only the digits 0 and 1. Therefore, the first digit is 0, followed by 1.
If a quantity higher than 1 is required, the binary system goes to 10, followed by 11. The
binary system continues with 100, 101, 110, 111, then 1000, and so on. The following
figure shows the binary equivalent of the decimal numbers 0 through 19.
Building a binary number follows the same logic as building a decimal number, with the
only difference that the base is 2 so the exponents represent the power of 2. If you take
the binary number 10011 for example, it represents a sum of (1 x 2^4) + (0 x 2^3) + (0 x 2^2)
+ (1 x 2^1) + (1 x 2^0), which is equal to (1 x 16) + (0 x 8) + (0 x 4) + (1 x 2) + (1 x 1) = 19.
7.4 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Binary-to-Decimal Conversion
Doing the conversion from binary into decimal is easy:
• Start by making a table with all of the 2^exponent values listed, for exponent values
0 through 7, as shown in the first row of the following table.
• Add a row that lists the decimal value of each of these exponents, as shown in the
second row; these are the positional or place values (and are also called
placeholders).
• Write out the given bit sequence in the table, as shown in the third row for the
example binary number 10111001.
• For each bit, multiply the place value by the bit value, as shown in the fourth row.
Notice that where the bit value is 0, the answer is 0, and where the bit value is 1,
the answer is the place value.
• Finally, add all of these values together; the result is the decimal value of the
binary number. In this example, the decimal value of the binary number 10111001
is 185.
10111001 = (128 x 1) + (64 x 0) + (32 x 1) + (16 x 1) + (8 x 1) + (4 x 0) + (2 x 0) + (1 x 1)
10111001 = 128 + 0 + 32 + 16 + 8 + 0 + 0 + 1
10111001 = 185
The minimum value of an 8-bit binary number is 00000000, which in decimal equals 0.
The maximum value of an 8-bit binary number is 11111111, which in decimal equals 255.
If you have a number that is larger than 255, it cannot be written with 8 bits. Each of the
decimal numbers in an IPv4 address must be a number between 0 and 255.
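If you want to check a manual conversion, the place-value procedure can be written as a short Python sketch (an illustrative helper, not part of the course material):

def binary_to_decimal(bits: str) -> int:
    # Place values for 8 bits, from 2^7 (128) down to 2^0 (1).
    place_values = [2 ** exponent for exponent in range(7, -1, -1)]
    # Multiply each bit by its place value and add the results.
    return sum(place * int(bit) for place, bit in zip(place_values, bits))

print(binary_to_decimal("10111001"))  # 185
print(binary_to_decimal("11111111"))  # 255
# Python's built-in conversion gives the same result: int("10111001", 2) == 185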
7.5 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Decimal-to-Binary Conversion
The process of converting a decimal number into binary can be simplified by using a table.
The table method utilizes elementary mathematics like addition and subtraction. This
process is simple and effective. With a bit of practice, you will learn it quickly.
When converting from decimal into binary, the idea is to find the right sequence of bits by
marking placeholders as 1 or 0. All bits are represented, and each placeholder marked
with 1 adds its value to the converted number, while 0s are ignored. For example, 255 is
represented by marking all placeholders with 1, meaning that summing up each
placeholder value produces the decimal number: 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255.
The process of converting a decimal number into binary is done by marking the closest
(lower) placeholder as 1 and subtracting the corresponding value from the decimal
number until there is no remainder. Any unused or skipped placeholders are marked as 0.
The binary representation of the decimal number is the 1 and 0 sequence that is
produced.
This figure illustrates the conversion of decimal number 147 to binary. Start by making a
table with all of the 2^exponent values listed, for exponent values 0 through 7, as shown in
the first row of the table. Add a row that lists the decimal value of each of these
exponents, as shown in the second row. These are the positional or place values (and are
also called placeholders). The binary number is put in the third row as it is determined.
The following table describes the steps for converting the number 147 to a binary number.
You can also have a number that is smaller than 128, for example 35. 35 in decimal
converts to 00100011 in binary. Note that the first 2 bits of the binary number are zeros;
these zeros are known as leading zeros. Recall that IPv4 addresses are most often written
in the dotted-decimal notation, which consists of four sets of 8 bits (octets) converted
from binary to decimal numbers, separated by dots. For IPv4 addresses, you will always
use 8 bits when converting each of the decimal numbers to binary. Some of these binary
numbers may have leading zeroes.
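The table method for the opposite direction can also be sketched in Python. The helper below walks the placeholders from 128 down to 1, as described above, and keeps the leading zeros; it is an illustration only.

def decimal_to_binary(value: int) -> str:
    # Mark each placeholder 1 if it fits into the remaining value, otherwise 0.
    bits = ""
    remainder = value
    for place in (128, 64, 32, 16, 8, 4, 2, 1):
        if remainder >= place:
            bits += "1"
            remainder -= place
        else:
            bits += "0"
    return bits

print(decimal_to_binary(147))  # 10010011
print(decimal_to_binary(35))   # 00100011 (note the leading zeros)
# Python's built-in formatting gives the same result: format(147, "08b") == "10010011"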
7.6 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
IPv4 Address Representation
Every device must be assigned a unique address to communicate on an IP network. This
includes hosts or endpoints (such as PCs, laptops, printers, web servers, smartphones, and
tablets), as well as intermediary devices (such as routers and switches).
Physical street addresses are necessary to identify the locations of specific homes and
businesses so that mail can reach them efficiently. In the same way, logical IP addresses
are used to identify the location of specific devices on an IP network so that data can
reach those network locations. Every host connected to a network or the internet has a
unique IP address that identifies it. Structured addressing is crucial to route packets
efficiently. Learning how IP addresses are structured and how they function in the
operation of a network provides an understanding of how IP packets are forwarded over
networks using TCP/IP.
An IPv4 address is a 32-bit number, is hierarchical, and consists of two parts:
• The network address portion (network ID): Network ID is the portion of an IPv4
address that uniquely identifies the network in which the device with this IPv4
address resides. The network ID is important because most hosts on a network can
communicate only with devices in the same network. If the hosts need to
communicate with devices with interfaces assigned to some other network ID, a
network device—a router or a multilayer switch—can route data between the
networks.
• The host address portion (host ID): Host ID is the portion of an IPv4 address that
uniquely identifies a device on a given IPv4 network. Host IDs are assigned to
individual devices, both hosts or endpoints and intermediary devices.
Note: There are two versions of IP that are in use: IPv4 and IPv6. IPv4 is the most common
and is currently used on the internet. It has been the mainstay protocol since the 1980s.
IPv6 was designed to solve the problem of global IPv4 address exhaustion. The adoption
of IPv6 was initially very slow but is now reaching wider deployment.
Practical Example of an IPv4 Address
Recall that IPv4 addresses are most often written in the dotted-decimal notation, which
consists of four sets of 8 bits (octets) converted from binary to decimal numbers,
separated by dots. The following example shows an IPv4 address in a decimal form
translated into its binary form, using the method described earlier.
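The same translation can be reproduced with a short Python sketch; 192.168.10.22 is the sample address used earlier in this module, and the snippet is illustrative only.

# Translate a dotted-decimal IPv4 address into its binary form, octet by octet.
address = "192.168.10.22"
print(".".join(format(int(octet), "08b") for octet in address.split(".")))
# 11000000.10101000.00001010.00010110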
7.7 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
IPv4 Header Fields
Before you can send an IP packet, there needs to be a format that all IP devices agree on
to route a packet from the source to the destination. All that information is contained in
the IP header. The IPv4 header is a container for values that are required to achieve host-
to-host communications. Some fields (such as the IP version) are static, and others, such
as Time to Live (TTL), are modified continually in transit.
The IPv4 header has several fields. First, you will learn about these four fields:
7.8 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
IPv4 Address Classes
Nowadays, classless addressing is predominantly used. However, to fully understand the
concepts you will learn about here, you need to understand how the ever-changing needs
dictated the evolution of addressing solutions over time.
In the early days of the internet, the standard reserved the first 8 bits of an IPv4 address
for the network part and the remaining 24 bits for the host part. 24 host bits offer
16,777,214 IPv4 host addresses. It soon became clear that such address allocation is
inefficient, because most organizations require several smaller networks rather than one
network with thousands of computers, and they usually need those networks to be of
different sizes.
The first step to address this need was made in 1981 when the IETF released RFC 790,
where the IPv4 address classes were introduced for the first time. Here the Internet
Assigned Numbers Authority (IANA) determined IPv4 Class A, Class B, and Class C.
Note: RFC is a formal document from the IETF communicating information about the
internet and defining internet standards.
Assigning IPv4 addresses to classes is known as classful addressing. Each IPv4 address is
broken down into a network ID and a host ID. In addition, a bit or bit sequence at the start
of each address determines the class of the address.
Note: IPv4 hosts only use Class A, B, and C IPv4 addresses for unicast (host-to-host)
communications. In 2002, RFC 3330 also introduced Class D and Class E, defining special-
use IPv4 addresses. This RFC was later obsoleted by another RFC defining global and
other specialized IPv4 address blocks. Still, Class D and Class E are included here for
completeness, but they are outside the scope of this discussion.
Class A
A Class A address block is designed to support extremely large networks with more than
16 million host addresses. The Class A address uses only the first octet (8 bits) of the 32-bit
number to indicate the network address. The remaining 3 octets of the 32-bit number are
used for host addresses. The first bit of a Class A address is always a 0. Because the first bit
is a 0, the lowest number that can be represented is 00000000 (decimal 0), and the
highest number that can be represented is 01111111 (decimal 127). However, these two
network numbers, 0 and 127, are reserved and cannot be used as network addresses.
Therefore, any address that has a value between 1 and 126 in the first octet of the 32-bit
number is a Class A address.
Class B
The Class B address space is designed to support the needs of moderate to large networks
with more than 65,000 hosts. The Class B address uses two of the four octets (16 bits) to
indicate the network address. The remaining two octets specify host addresses. The first 2
bits of the first octet of a Class B address are always binary 10. Starting the first octet with
binary 10 ensures that the Class B space is separated from the upper levels of the Class A
space. The remaining 6 bits in the first octet may be populated with either ones or zeros.
Therefore, the lowest number that can be represented with a Class B address is 10000000
(decimal 128), and the highest number that can be represented is 10111111 (decimal
191). Any address that has a value in the range of 128 to 191 in the first octet is a Class B
address.
Class C
The Class C address space is the most commonly available address class. This address
space is intended to provide addresses for small networks with a maximum of 254 hosts.
In a Class C address, the first three octets (24 bits) of the address identify the network
portion, with the remaining octet reserved for the host portion. A Class C address begins
with binary 110. Therefore, the lowest number that can be represented is 11000000
(decimal 192), and the highest number that can be represented is 11011111 (decimal
223). If an address contains a number in the range of 192 to 223 in the first octet, it is a
Class C address.
Class D
Class D (multicast) IPv4 addresses are dedicated to multicast applications such as
streaming media. Multicasts are a special type of broadcast in that only hosts that request
to participate in the multicast group will receive the traffic to the IPv4 address of that
group. Unlike IPv4 addresses in Classes A, B, and C, multicast addresses are always the
destination address and never the source. A Class D address begins with binary 1110.
Therefore, the lowest number represented is 11100000 (decimal 224), and the highest
number that can be represented is 11101111 (decimal 239). If an address contains a
number in the range of 224 to 239 in the first octet, it is a Class D address.
Class E
Class E (reserved) IPv4 addresses are reserved by the IANA as a block of experimental
addresses. Class E IPv4 addresses should never be assigned to IPv4 hosts. A Class E address
begins with binary 1111. Therefore, the lowest number that can be represented is
11110000 (decimal 240), and the highest number that can be represented is 11111111
(decimal 255). If an address contains a number in the range of 240 to 255 in the first octet,
it is a Class E address.
The following table shows the IPv4 address range of the first octet (in decimal and binary)
for Class A, B, C, D, and E IPv4 addresses.
Note: Class A addresses 127.0.0.0 to 127.255.255.255 cannot be used. This range is
reserved for loopback and diagnostic functions.
7.9 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Subnet Masks
A subnet mask is a 32-bit number that describes which portion of an IPv4 address refers to
the network ID and which part refers to the host ID.
The subnet mask is configured on a device along with the IPv4 address.
If a subnet mask has a binary 1 in a bit position, the corresponding bit in the address is
part of the network ID. If a subnet mask has a binary 0 in a bit position, the corresponding
bit in the address is part of the host ID.
The figure represents an IPv4 address separated into a network and a host part. In the
example the network part ends on the octet boundary, which coincides with what you
learned about IPv4 address class boundaries. The address in the figure belongs to class B,
where the first two octets (16 bits) indicate the network part, and the remaining two
octets represent the host part. Therefore, you create the subnet mask by setting the first
16 bits of the subnet mask to binary 1 and the last 16 bits of the subnet mask to zero.
Notice the prefix /16; it is another way of expressing the subnet mask and it matches the
number of network bits that are set to binary 1 in the subnet mask.
Networks are not always assigned the same prefix. Depending on the number of hosts on
the network, the prefix that is assigned may be different. Having a different prefix number
changes the host range and broadcast address for each network.
Calculating the Network Address
An IPv4 address that has binary zeros in all the host bit positions is reserved for the
network address. The main purpose of the subnet mask is to identify the network address
of a host, which is crucial for routing purposes. Based on the network address, the host
can identify whether a packet's destination address is within the same network or not.
Given an IPv4 address and a subnet mask, you can calculate the network address by using
the AND function between the binary representation of the IPv4 address and the binary
representation of the subnet mask.
The calculation is performed bit-by-bit following these rules:
• 0 AND 0 = 0
• 1 AND 0 = 0
• 0 AND 1 = 0
• 1 AND 1 = 1
The result of the AND operation is the network address of the network on which the
device resides; this is also called the network prefix. You can see that in the network
address, the network part is the same as it is in the original IPv4 address, while the host
bits are all set to zero.
Usually you will use the decimal form of the network address, so you need to remember
the binary to decimal conversion. Look at the figure to remember the conversion process.
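The AND operation can be checked with a short Python sketch. The address and mask below are illustrative values chosen for this example, not taken from the figure.

def network_address(ip: str, mask: str) -> str:
    # AND each address octet with the corresponding subnet mask octet.
    ip_octets = [int(octet) for octet in ip.split(".")]
    mask_octets = [int(octet) for octet in mask.split(".")]
    return ".".join(str(i & m) for i, m in zip(ip_octets, mask_octets))

print(network_address("172.16.128.17", "255.255.0.0"))  # 172.16.0.0
# The standard library gives the same result:
# ipaddress.ip_interface("172.16.128.17/16").network -> 172.16.0.0/16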
7.10 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Subnets
A lot of networks nowadays still use a flat network design. A flat topology is an OSI Layer 2
– switch-connected network where all devices see all the broadcasts in the Layer 2
broadcast domain. In such a network, all devices can reach each other by broadcast. Flat
network design is easy to implement and manage, reducing cost, maintenance, and
administration.
However, such design also brings some concerns:
• Security: Because the network is not segmented, you cannot apply security
policies adapted to individual segments. If one device is compromised, it can
quickly affect the whole network.
• Troubleshooting: Isolation of network faults is more challenging, especially in
bigger flat networks, because there is no logical separation or hierarchy.
• Address space utilization: In a large flat network, you can end up with a lot of
wasted IP addresses. You cannot use addresses from this network anywhere else.
• Scalability and speed: A flat network represents a single Layer 2 broadcast
domain. If there is a large amount of broadcast traffic, this can impose
considerable pressure on the available resources. A single broadcast domain
typically should not include more than a couple of hundred devices.
Network administrators can segment their networks, especially large networks, by using
subnetworks or subnets to tackle those challenges. Although subnets were initially
designed to solve the shortage of IPv4 addresses, they are used to address administrative,
organizational, security, and scalability considerations in today's networks. If you break a
bigger network into smaller subnetworks, you can create a network of interconnected
subnetworks.
Imagine a company that occupies a 30-story building divided into departments. Such a
company could prepare one large network to address all the IPv4 devices. But putting a
couple of hundred or even thousands of devices into one IPv4 network would make such a
network unusable because of broadcast traffic, security, and troubleshooting issues. A
better approach is to create a larger number of smaller networks, separated by
department, function, or location. For example, think of the company as a group of
networks, the departments being used as subnets and the devices in the departments as
the individual host addresses belonging to these smaller subnets. This process of creating
smaller networks out of a bigger one is called subnetting.
A subnet segments the hosts within the network. Without subnets, the network has a flat
topology. You use routers to separate networks by breaking the network into multiple
subnets or multiple OSI Layer 3 broadcast domains.
Note: Recall that OSI Layer 2 is the data link layer, and it is equivalent to part of the TCP/IP
link layer. OSI Layer 3 is the network layer, and it is equivalent to the TCP/IP internet
layer. A Layer 2 broadcast domain is a domain in which all devices see each other's Layer 2
broadcast frames, while a Layer 3 broadcast domain is a domain in which all devices see
each other's Layer 3 broadcast packets.
7.11 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Implementing Subnetting: Borrowing Bits
Subnetting allows you to create multiple logical networks that exist within a single larger
network. When you are designing a network addressing scheme, you need to be able to
determine how many logical networks you will need and how many devices you will be
able to fit into these smaller networks.
To subnet a network address, you will borrow host bits and use them as subnet bits. You
will use the subnet mask to indicate how many host bits have been borrowed. Bits must
be borrowed consecutively, starting with the first host bit on the left. This approach
introduces classless networks.
To implement subnets, follow this procedure:
• Determine the IP address for your network as assigned by the registry authority or
network administrator.
• Based on your organizational and administrative structure, determine the number
of subnets that are required for the network. Be sure to plan for growth.
• Based on the required number of subnets, determine the number of bits that you
need to borrow from the host bits.
• Determine the binary and decimal value of the new subnet mask that results from
borrowing bits from the host ID.
• Apply the subnet mask to the network IP address to determine the subnets and
the available host addresses. Also, determine the network and broadcast
addresses for each subnet.
• Assign subnet addresses to all subnets. Assign host addresses to all devices that
are connected to each subnet.
Take a look at the following figure. The top table shows a standard Class C network
address that is not subnetted. The bottom table shows the same address after it is
subnetted by borrowing one host bit. Notice that the prefix length has changed from 24 to
25. The network IPv4 address itself is unchanged, although it is now considered a
subnetwork (subnet) and is one of two subnets that have been created. The subnet mask
has changed from 255.255.255.0 in decimal to 255.255.255.128, because the 128 bit is
now turned on in the last octet.
Each time that a bit is borrowed, the number of subnet addresses increases, and the
number of host addresses that are available per subnet decreases. The algorithm that is
used to compute the number of subnets and hosts uses powers of two. Therefore,
borrowing one host bit enables you to create 2^1 = 2 subnets, borrowing 2 bits gives you
2^2 = 4 subnets, and so on.
Note: You can use the following formula to calculate the number of subnets that are
created by borrowing a given number of host bits: Number of subnets = 2^s (where s is the
number of bits that are borrowed).
As the following figure shows, you can also determine how many host addresses are
available per subnet when you borrow a given number of bits. Just like on a network, two
addresses are not available to be used as host addresses on a subnet; they are used for
the address of the subnet itself (with all of the host bits set to 0) and the directed
broadcast address on the subnet (with all of the host bits set to 1). The figure shows that
borrowing 1 bit for subnetting the address in the example leaves 7 bits for hosts.
Note: You can use a formula to calculate the number of host addresses that are available
when a given number of host bits are borrowed: Number of hosts = 2^h – 2 (where h is the
number of host bits that are remaining after bits are borrowed)
The formula to determine the number of hosts for this example is 2^7 – 2, which calculates
to 126 host addresses per subnet.
Here is another example using the same network, in which five host bits are borrowed for
subnetting. In this example, 2^5 = 32 subnets are created, and only 2^3 – 2 = 6 host
addresses are available for each subnet. The new subnet mask is
11111111.11111111.11111111.11111000, which equates to 255.255.255.248 in decimal.
The following figure shows the subnetting of a Class B network address. The top table
shows a network address with the default Class B subnet mask, 255.255.0.0. The second
table shows the same address after it is subnetted by borrowing six host bits. Notice that
the prefix length has changed from 16 to 22. The network IPv4 address itself is unchanged,
but the subnet mask has changed from 255.255.0.0 in decimal to 255.255.252.0.
The next figure shows the subnetting of a Class A network address. The top table shows a
network address with the default Class A subnet mask, 255.0.0.0. The bottom table shows
the same address after it is subnetted by borrowing 8 host bits. Notice that the prefix
length has changed from 8 to 16. The network IPv4 address is unchanged, but the subnet
mask has changed from 255.0.0.0 in decimal to 255.255.0.0.
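The two formulas (number of subnets = 2^s and number of hosts = 2^h – 2) can be combined in a small Python helper. The function name and its parameters are assumptions made for this sketch; the sample calls correspond to the Class C, Class B, and Class A cases discussed above.

def subnet_counts(borrowed_bits: int, total_host_bits: int = 8):
    # Number of subnets = 2^s, where s is the number of borrowed bits.
    subnets = 2 ** borrowed_bits
    # Usable hosts = 2^h - 2, where h is the number of host bits left after borrowing
    # (the subnet address and the directed broadcast address are reserved).
    hosts = 2 ** (total_host_bits - borrowed_bits) - 2
    return subnets, hosts

print(subnet_counts(1))                      # (2, 126)      Class C, 1 bit borrowed
print(subnet_counts(5))                      # (32, 6)       Class C, 5 bits borrowed
print(subnet_counts(6, total_host_bits=16))  # (64, 1022)    Class B, 6 bits borrowed
print(subnet_counts(8, total_host_bits=24))  # (256, 65534)  Class A, 8 bits borrowed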
7.12 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Implementing Subnetting: Determining the Addressing Scheme
If a network address is subnetted, the first subnet that is obtained after subnetting the
network address is called subnet zero, because all of the subnet bits are binary zero. To
determine each subsequent subnet address, increase the subnet address by the bit value
for the last bit that you borrowed.
In the following example, 8 bits are borrowed for subnetting the network address,
172.16.0.0/16. The first subnet address is 172.16.0.0/24; this is subnet zero. The last bit
borrowed is the bit with the value of 1 in the third octet, so the next subnet address is
172.16.1.0/24.
The following figure shows the first six subnets and the last subnet created by borrowing
the 8 bits. There are a total of 2^8 = 256 subnets.
Notice that the address of a subnet has all of the host bits set to binary 0. This address is
one of the reserved addresses on a subnet. The other reserved address is the subnet-
directed broadcast address, in which all of the host bits are set to binary 1. All of the
addresses between the subnet address and the subnet broadcast address are valid host
addresses on that subnet. On these subnets, there are 2^8 – 2 = 254 host addresses per
subnet.
Here are the host addresses and broadcast addresses for those subnets.
In the following figure, the Class B network address 172.16.0.0/16 has been subnetted by
borrowing two host bits. The first subnet address is 172.16.0.0/18, the zero subnet. The last bit
borrowed is the bit with the value of 64, so the next subnet address is 172.16.64.0/18.
The following figure shows all the subnets that are created by borrowing the 2 bits. The
subnet 172.16.192.0/18 is the last subnet because 192 + 64 = 256, and the highest
possible value for any given octet is 255. However, if the subnet goes over the octet
boundary, you have more subnets, as you will see in the next example.
The following table shows the valid host addresses for each subnet that was created by
borrowing 2 bits. The table shows the valid host IPv4 address range for each subnetwork.
There are 2^2 = 4 subnets, and 2^14 – 2 = 16,382 host addresses per subnet.
Here is one more example of subnetting the same /16 network address, this time
borrowing 11 host bits for subnetting. The first subnet address is 172.16.0.0/27. The
second subnet address is 172.16.0.32/27 because the last borrowed bit has a value of 32.
Notice that this time, the last borrowed bit is in the fourth octet. Therefore, the increment
of 32 (the value of the last borrowed bit) is first applied in the fourth octet.
Once all the possible subnet addresses in the fourth octet have been calculated in this
manner, you move back into the third octet since you have borrowed bits from the third
octet as well. You can use all the third octet values from 1 to 255 for your subnet
addresses as well.
The following table shows the first 10 subnet addresses and the last subnet address (with
the corresponding host addresses and broadcast addresses) that result from subnetting
Class B network 172.16.0.0 by borrowing 11 host bits. There are 2^11 = 2,048 subnets, and
2^5 – 2 = 30 host addresses per subnet.
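Python's standard ipaddress module can enumerate subnets and their reserved addresses, which is a convenient way to double-check an addressing scheme. A minimal sketch using the /18 example from earlier in this section:

import ipaddress

# Borrowing 2 host bits from 172.16.0.0/16 (prefixlen_diff=2) yields four /18 subnets.
network = ipaddress.ip_network("172.16.0.0/16")
for subnet in network.subnets(prefixlen_diff=2):
    print(subnet, "network:", subnet.network_address, "broadcast:", subnet.broadcast_address)
# 172.16.0.0/18 network: 172.16.0.0 broadcast: 172.16.63.255
# 172.16.64.0/18 network: 172.16.64.0 broadcast: 172.16.127.255
# 172.16.128.0/18 network: 172.16.128.0 broadcast: 172.16.191.255
# 172.16.192.0/18 network: 172.16.192.0 broadcast: 172.16.255.255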
7.13 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Benefits of VLSM and Implementing VLSM
When you are using subnetting, the same subnet mask is applied for all the subnets of a
given network. This way, each subnet has the same number of available host addresses.
You may need this approach sometimes, but most organizations require several networks
of various sizes rather than one network with thousands of devices. So usually, having the
same subnet mask for all subnets of a given network ends up wasting address space
because each subnet has the same number of available host addresses.
For example, in the following figure, Class B network 172.16.0.0 is subnetted by borrowing
8 host bits and applying a 24-bit subnet mask, allowing 256 subnets with 254 host
addresses each. In this example, many host addresses are wasted. Each WAN link needs
only two host addresses, so 252 host addresses are wasted on each WAN link. Many host
addresses are also wasted on other subnets. Variable-length subnet masking (VLSM)
provides a solution.
VLSM allows you to use more than one subnet mask within a network to efficiently use IP
addresses. Instead of using the same subnet mask for all subnets, you can use the most
efficient subnet mask for each subnet. The most efficient subnet mask for a subnet is the
mask that provides an appropriate number of host addresses for that individual subnet.
For example, subnet 172.16.6.0 has only 19 hosts, so it does not need the 254 host
addresses that the 24-bit mask allows. A 27-bit mask would provide 30 host addresses,
which is much more appropriate for this subnet.
In the next figure, the 172.16.0.0/16 network is first divided into subnetworks using a 24-
bit subnet mask. However, one of the subnetworks in this range, 172.16.14.0/24, is
further divided into smaller subnetworks using a 27-bit mask to accommodate the subnets
that have 19 or 28 hosts. These smaller subnetworks range from 172.16.14.0/27 to
172.16.14.224/27. Then, one of these smaller subnets, 172.16.14.128/27, is further
divided using a 30-bit mask, which creates subnets with only two hosts to be used on the
WAN links. The subnets with the 30-bit mask range from 172.16.14.128/30 to
172.16.14.156/30.
The next figure shows in binary the original subnetting of the 172.16.0.0/16 network to
/20 by borrowing 4 host bits, which provided 16 subnets with 4094 host addresses each.
The figure also shows how further subnetting with VLSM increases the number of
subnets and provides the desired number of host addresses per subnet. Borrowing an
additional 6 subnet bits results in an additional 2^6 = 64 subnets. This leaves 6 host bits,
resulting in 2^6 – 2 = 62 hosts on each of these subnets.
The following figure shows the subnet addresses and host addresses that are achieved by
using VLSM. The subnet for the region in this example, subnet 172.16.32.0/20, is further
subnetted by applying a 26-bit mask, as the previous figure shows.
The following figure shows some of the new VLSM subnet addresses that are applied to
the regional network.
To calculate the subnet addresses for the WAN links, further subnet one of the unused /26
subnets with a 30-bit subnet mask. For this example, subnet 172.16.33.0/26 will be
further subnetted. Borrowing an additional 4 subnet bits results in an additional 2^4 = 16
subnets. This leaves 2 host bits, resulting in 2^2 – 2 = 2 hosts on each subnet.
The following figure shows all the new VLSM subnet addresses that are applied to the
regional network.
As seen in this example, where we used VLSM to further subnet the address
172.16.32.0/20 into smaller subnets of different sizes, the easiest way to assign the
subnets is to assign the subnets with the largest number of hosts first.
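A minimal Python sketch of this largest-first assignment is shown below. Only the starting block 172.16.32.0/20 comes from the example above; the per-subnet host counts and the allocation loop are illustrative assumptions.

import ipaddress

base = ipaddress.ip_network("172.16.32.0/20")
requirements = [60, 60, 28, 19, 2, 2]  # hosts needed per subnet, sorted largest first

next_address = int(base.network_address)
for hosts in requirements:
    host_bits = (hosts + 1).bit_length()   # smallest h with 2^h - 2 >= hosts
    prefix = 32 - host_bits
    subnet = ipaddress.ip_network((next_address, prefix))
    print(f"{hosts:>2} hosts -> {subnet}")
    next_address += subnet.num_addresses
# 60 hosts -> 172.16.32.0/26
# 60 hosts -> 172.16.32.64/26
# 28 hosts -> 172.16.32.128/27
# 19 hosts -> 172.16.32.160/27
#  2 hosts -> 172.16.32.192/30
#  2 hosts -> 172.16.32.196/30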
7.14 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Private vs. Public IPv4 Addresses
As the internet began to grow exponentially in the 1990s, it became clear that if the
current growth trajectory continued, there would eventually not be enough IPv4
addresses for everyone who wanted one. Work began on a permanent solution, which
would become IPv6, but several other solutions were developed in the interim. These
solutions included Network Address Translation (NAT), classless interdomain routing
(CIDR), private IPv4 addressing, and VLSM.
Public IPv4 Addresses
Hosts that are publicly accessible over the internet require public IP addresses. Internet
stability depends directly on the uniqueness of publicly used network addresses.
Therefore, a mechanism is needed to ensure that addresses are, in fact, unique. This
mechanism was originally managed by the InterNIC. The IANA succeeded the InterNIC. The
IANA carefully manages the remaining supply of IPv4 addresses to ensure that duplication
of publicly used addresses does not occur. Duplication would cause instability on the
internet and compromise its ability to deliver packets to networks using the duplicated
addresses.
With few exceptions, businesses and home internet users receive their IP address
assignment from their Local Internet Registry (LIR), which typically is their ISP. These IP
addresses are called provider-aggregatable (as opposed to provider-independent
addresses) because they are linked to the ISP. If you change ISPs, you will need to
readdress your internet-facing hosts.
The following table provides a summary of public IPv4 addresses.
LIRs obtain IP address pools from their Regional Internet Registry (RIR):
• African Network Information Center (AFRINIC)
• Asia Pacific Network Information Center (APNIC)
• American Registry for Internet Numbers (ARIN)
• Latin American and Caribbean Network Information Center (LACNIC)
• Réseaux IP Européens Network Coordination Centre (RIPE NCC)
With the rapid growth of the internet, public IPv4 addresses began to run out. New
mechanisms such as NAT, CIDR, VLSM, and IPv6 were developed to help solve the
problem.
Private IPv4 Addresses
Internet hosts require a globally routable and unique IPv4 address, but private hosts that
are not connected to the internet can use any valid address, as long as it is unique within
the private network. However, because many private networks exist alongside public
networks, deploying random IPv4 addresses is strongly discouraged.
In February 1996, the IETF published RFC 1918, "Address Allocation for Private Internets,"
to ease the accelerating depletion of globally routable IPv4 addresses and provide
companies with an alternative to using arbitrary IPv4 addresses. Three blocks of IPv4
addresses (one Class A network, 16 Class B networks, and 256 Class C networks) are
designated for private, internal use.
Addresses in these ranges are not routed on the internet backbone. Internet routers are
configured to discard private addresses. In a private intranet, these private addresses can
be used instead of globally unique addresses. When a network that uses private addresses
must connect to the internet, private addresses must be translated to public addresses.
This translation process is called NAT. A router is often the network device that performs
NAT.
The following table provides a summary of private IPv4 addresses.
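If you need to check programmatically whether an address is private, the Python standard library exposes an is_private property. Note that is_private also covers several special-use ranges beyond the three RFC 1918 blocks.

import ipaddress

for address in ("10.1.2.3", "172.16.0.10", "192.168.1.5", "8.8.8.8"):
    print(address, ipaddress.ip_address(address).is_private)
# 10.1.2.3 True
# 172.16.0.10 True
# 192.168.1.5 True
# 8.8.8.8 False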
7.15 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Reserved IPv4 Addresses
Certain IPv4 addresses are reserved and cannot be assigned to individual devices on a
network. Reserved IPv4 addresses include a network address used to identify the network
itself and a broadcast address, which is used for broadcasting packets to all the devices on
a network.
Network Address
The network address is a standard way to refer to a network. An IPv4 address that has
binary zeros in all the host bit positions is reserved for the network address.
For example, in a Class A network, 10.0.0.0 is the IPv4 address of the network containing
the host 10.1.2.3. All hosts in 10.0.0.0 will have the same network bits. The IPv4 address
172.16.0.0 is a Class B network address, and 192.168.1.0 is a Class C network address. A
router uses the network IPv4 address when it searches its IPv4 routing table for the
destination network location.
When networks are subnetted, the IPv4 address with binary zeros in all the host bit
positions is still reserved for the address of the subnet. For example, 172.16.1.0/24 is the
address of a subnet.
Local Broadcast Address
If an IPv4 device wants to communicate with all the devices on the local network, it sets
the destination address to all ones (255.255.255.255) and transmits the packet. For
example, hosts that do not know their network number will use the 255.255.255.255
broadcast address to ask a server for the network address. The local broadcast is never
routed beyond the local network or subnet.
7.16 Introducing the TCP/IP Internet Layer, IPv4 Addressing, and Subnets
Verifying IPv4 Address of a Host
All operating systems that are capable of TCP/IP communications include utilities for
configuring, managing, and monitoring the IPv4 networking configuration. Operating
systems such as Microsoft Windows, Apple macOS, and most Linux variants have CLI and
GUI tools.
Verifying IPv4 Address of a Host on Windows
On a PC running Microsoft Windows 10, in the Network and Sharing Center you can view
and set the IPv4 address associated with the network adapter by clicking Properties. In
this example, the PC is manually configured with a static IPv4 address.
Note: IP addresses can be either static or dynamic. At this point, all you need to know is
that a static IP address is a fixed IP address that is assigned manually to a device, while a
dynamic IP address is assigned automatically and changes whenever a user reboots a
device.
Note: Navigating to the TCP/IP network settings varies widely, depending on the operating
system that is installed.
Use the ipconfig command to display all current TCP/IP network configuration values at
the command line of a Windows computer.
Note: For additional information about ipconfig and the command syntax, use your
favorite search engine and search for this string: microsoft technet dd197434
site:microsoft.com
Verifying the IPv4 Address of a Host on Apple Mac
Just like in Windows, you can use either GUI or CLI to configure or verify your IP address
settings on Apple macOS. To use the GUI option, click the Apple logo in the menu bar,
choose System Preferences, and choose Network. A pop-up window will open, displaying
your connections. Click the connection you want to manage, choose Advanced, and click
the TCP/IP tab.
You can also acquire this information using a CLI. First, you will need to open the Terminal.
You can do so in several ways, including using the Finder menu bar by choosing Go >
Utilities > Terminal. Then, use the ifconfig {interface name} command to obtain the IPv4
address and other information.
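The following is a minimal sketch of the output, assuming the active adapter is named en0; interface names, addresses, and flags vary by system:

MacBook:~ user$ ifconfig en0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether 8c:85:90:1a:2b:3c
        inet 192.168.1.23 netmask 0xffffff00 broadcast 192.168.1.255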
Verifying the IPv4 Address of a Host on Linux
On most Linux operating systems, the ifconfig command performs the same tasks that
ipconfig performs on Microsoft Windows operating systems.
You can get the details of specific syntax on Linux systems for just about any command
using the man (manual) command. In the following example, man ifconfig was entered.
8.1 Explaining the TCP/IP Transport Layer and Application Layer
Introduction
IP addressing uniquely identifies devices globally. But to provide a logical connection between the endpoints of a network and transport services from a host to a destination, you need a different set of functions, provided by the TCP/IP Transport layer. The Transport layer also provides the interface between the Application layer, which users reach through their various applications, and the underlying Internet layer, thereby hiding the complexity of the network from the applications.
The two most important protocols used at the Transport layer are TCP and UDP. TCP provides reliability, while UDP provides only best-effort delivery. Application programmers can choose the service that is most appropriate for their specific applications. Both protocols support establishing multiple sessions from an end host, which is important so that different applications running on the same host can use the same IP address to communicate over the network.
The Application layer provides functions for users or their programs, and it is highly
specific to the application being performed. It provides the services that user applications
use to communicate over the network, and it is the layer in which user-access network
processes reside. These processes encompass the ones that users interact with directly
and other processes of which the users are not aware. There are many Application layer
protocols, and new protocols are constantly being developed.
As a network engineer, you will often design, configure, and troubleshoot different
networks to be suitable for different application layer protocols. You will need to, among
other characteristics, contrast reliable and unreliable transport services provided by TCP
and UDP.
Multiple communications often occur at once; for instance, you may be searching the web
and using FTP to transfer a file at the same time. The transport layer tracks these
communications and keeps them separate. Both UDP and TCP provide this tracking. To
pass data to the proper applications, the transport layer must identify the target
application. If TCP is used, the transport layer has the additional responsibilities of
establishing end-to-end connections, segmenting data and managing each piece,
reassembling the segments into streams of application data, managing flow control, and
applying reliability mechanisms.
Session Multiplexing
Session multiplexing is how an IP host can support multiple sessions simultaneously and
manage the individual traffic streams over a single link. A session is created when a source
machine needs to send data to a destination machine. Most often, this process involves a
reply, but a reply is not mandatory.
Note: The session multiplexing service provided by the transport layer supports multiple simultaneous TCP or UDP sessions over a single link, not just one TCP and one UDP session as the figure above might suggest.
Identifying the Applications
To pass data to the proper applications, the transport layer must identify the target
application. TCP/IP transport protocols use port numbers to accomplish this task. The
connection is established from a source port to a destination port. Each application
process that needs to access the network is assigned a unique port number in that host.
The destination port number is used in the transport layer header to indicate which target
application that piece of data is associated with. The sending host uses the source port to
help keep track of existing data streams and new connections it initiates. The source and
destination port numbers are not usually the same.
Segmentation
TCP takes variably sized data chunks from the Application layer and prepares them for
transport onto the network. The application relies on TCP to ensure that each chunk is
broken up into smaller segments that will fit the maximum transmission unit (MTU) of the
underlying network layers. UDP does not provide segmentation services. UDP instead
expects the application process to perform any necessary segmentation and supply it with
data chunks that do not exceed the MTU of lower layers.
Note: The MTU of the Ethernet protocol is 1500 bytes. Larger MTUs are possible, but 1500
bytes is the normal size.
Flow Control
If a sender transmits packets faster than the receiver can receive them, the receiver drops
some of the packets and requires them to be retransmitted. TCP is responsible for
detecting dropped packets and sending replacements. A high rate of retransmissions
introduces latency in the communication channel. To reduce the impact of retransmission-
related latency, flow control methods work to maximize the transfer rate and minimize
the required retransmissions.
Basic TCP flow control relies on acknowledgments that are generated by the receiver. The
sender sends some data while waiting for an acknowledgment from the receiver before
sending the next part. However, if the round-trip time (RTT) is significant, the overall
transmission rate may slow to an unacceptable level. To increase network efficiency, a
mechanism called windowing is combined with basic flow control. Windowing allows a receiving computer to advertise how much data it can accept before it must transmit an acknowledgment to the sending computer. For example, if the receiver advertises a window of 3000 bytes, the sender can transmit up to 3000 bytes before it must stop and wait for an acknowledgment.
Windowing also helps the network avoid congestion.
Connection-Oriented Transport Protocol
A connection-oriented protocol establishes a session connection between two IP hosts
within the transport layer and then maintains the connection during the entire
transmission. When the transmission is complete, the session is terminated. TCP provides
connection-oriented reliable transport for application data.
Reliability
TCP reliability has these three main objectives:
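Broadly, these are to detect and retransmit segments that are lost, to detect and discard segments that are duplicated, and to reorder segments that arrive out of sequence before handing the data to the application.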
Reliable (Connection-Oriented)
Some types of applications require a guarantee that packets arrive safely and in order; any missing packet could corrupt the data stream. Consider the example of using your web browser to download an application. Every piece of that application must be assembled on the receiver in the proper binary order, or it will not execute. FTP is another example of an application that calls for a connection-oriented protocol such as TCP.
TCP uses a three-way handshake when setting up a connection. You can think of it as
being similar to a phone call. The phone rings, the called party says "hello," and the caller
says "hello." Here are the actual steps:
1. The source of the connection sends a synchronization (SYN) segment to the
destination requesting a session. The SYN segment includes the Sequence Number
(SN).
2. The destination responds to the SYN with a synchronization-acknowledgment
(SYN-ACK) and increments the initiator SN by 1.
3. If the source accepts the SYN-ACK, it sends an acknowledgment (ACK) segment to
complete the handshake.
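Here are some common applications that use TCP: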
• Web browsers
• Email
• FTP
• Network printing
• Database transactions
To support reliability, a connection is established between the IP source and destination
to ensure that the application is ready to receive data. During the initial connection
establishment process, information is exchanged about the receiver's capabilities, and
starting parameters are negotiated. These parameters are then used for tracking data
transfer during the connection.
When the sending computer transmits data, it assigns a sequence number to each packet. The receiver then responds with an acknowledgment number that is equal to the next expected sequence number. For example, if a receiver acknowledges with acknowledgment number 1001, it is confirming receipt of everything up through byte 1000 and expects the next segment to start at sequence number 1001. This exchange of sequence and acknowledgment numbers allows the protocol to recognize when data has been lost, duplicated, or arrived out of order.
Best Effort (Connectionless)
Reliability (guaranteed delivery) is not always necessary, or even desirable. For example, if
one or two segments of a VoIP stream fail to arrive, it would only create a momentary
disruption in the stream. This disruption might appear as a momentary distortion of the
voice quality, but the user may not even notice. In real-time applications, such as voice
streaming, dropped packets can be tolerated as long as the overall percentage of dropped
packets is low.
Here are some common applications that use UDP:
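• Voice and video streaming
• DNS
• DHCP
• TFTP
• SNMP
• RIP
TCP has the following characteristics: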
• TCP operates at the transport layer of the TCP/IP stack (OSI Layer 4).
• TCP provides application access to the Internet layer (OSI Layer 3, the network
layer), where application data is routed from the source IP host to the destination
IP host.
The TCP header is a minimum of 20 bytes; the fields in the TCP header are as follows:
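• Source port and destination port (16 bits each): Identify the sending and receiving applications.
• Sequence number (32 bits): Identifies the position of the segment's data in the overall byte stream.
• Acknowledgment number (32 bits): Identifies the next byte that the receiver expects.
• Header length (data offset) and reserved bits.
• Control bits (flags), such as SYN, ACK, FIN, and RST.
• Window (16 bits): The amount of data the receiver is currently willing to accept.
• Checksum (16 bits): Covers the header and the data.
• Urgent pointer and an optional Options field.
UDP has the following characteristics: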
• UDP operates at the transport layer of the TCP/IP stack (OSI Layer 4).
• UDP provides applications with access to the Internet layer (OSI Layer 3, the
network layer), without the overhead of reliability mechanisms.
• UDP is a connectionless protocol in which a one-way datagram is sent to a
destination without advance notification to the destination device.
• UDP performs only limited error checking. A UDP datagram includes a checksum
value, which the receiving device can use to test the integrity of the data.
• UDP provides service on a best-effort basis and does not guarantee data delivery
because packets can be misdirected, duplicated, or lost on the way to their
destination.
• UDP does not provide any special features that recover lost or corrupted packets.
UDP relies on applications that are using its transport services to provide recovery.
• Because of its low overhead, UDP is ideal for applications like DNS and Network
Time Protocol (NTP), where there is a simple request-and-response transaction.
The low overhead of UDP is evident when you review the UDP header length of only 64
bits (8 bytes). The UDP header length is significantly smaller compared with the TCP
minimum header length of 20 bytes.
The following list describes the field definitions in the UDP segment:
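• Source port (16 bits): Identifies the sending application; its use is optional in UDP.
• Destination port (16 bits): Identifies the receiving application.
• Length (16 bits): The length of the UDP header and data, in bytes.
• Checksum (16 bits): Covers the header and data; optional in IPv4.
The following are some popular applications and the well-known port numbers they use: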
• FTP (port 21, TCP): FTP is a reliable, connection-oriented service that uses TCP to
transfer files between systems that support FTP. FTP supports bidirectional binary
and ASCII file transfers. Besides using port 21 for exchange of control, FTP also uses
one additional port, 20, for data transmission.
• SSH (port 22, TCP): Secure Shell (SSH) provides the capability to access other
computers, servers, and networking devices remotely. SSH enables a user to log in
to a remote host and execute commands. SSH messages are encrypted.
• Telnet (port 23, TCP): Telnet is a predecessor to SSH. It sends messages in
unencrypted cleartext. As a security best practice, most organizations now use SSH
for remote communications.
• HTTP (port 80, TCP): HTTP defines how messages are formatted and transmitted
and which actions browsers and web servers can take in response to various
commands. It uses TCP.
• HTTPS (port 443, TCP): HTTPS combines HTTP with a security protocol (Secure Sockets Layer [SSL]/Transport Layer Security [TLS]).
• DNS (port 53, TCP and UDP): DNS is used to resolve Internet names to IP addresses. DNS uses a distributed set of servers to resolve names that are associated with numbered addresses. DNS uses TCP for zone transfers between DNS servers and UDP for name queries.
• TFTP (port 69, UDP): TFTP is a connectionless service. Routers and switches use
TFTP to transfer configuration files and Cisco IOS images, and other files between
systems that support TFTP.
• SNMP (port 161, UDP): SNMP facilitates the exchange of management information
between network devices. SNMP enables network administrators to manage
network performance, find and solve network problems, and plan for network
growth.
Here, you have seen only some applications with their port numbers. Go to the Service
Name and Transport Protocol Port Number Registry for a complete list at
http://www.iana.org/assignments/service-names-port-numbers/service-names-port-
numbers.xhtml.
HTTP is an application layer protocol and is the foundation of communication for the
World Wide Web. It is based on a client-server computing model, where the client (e.g., a
web browser) and the server (e.g., a web server) use a request-response message format
to transfer information. HTTP presumes a reliable underlying transport layer protocol, so
TCP is commonly used. However, UDP can also be used in some cases.
By default, HTTP is a stateless (or connectionless) protocol, meaning that the server does not retain any client information between requests. Each request can be understood in isolation, without knowledge of any requests that came before it. HTTP does have some mechanisms, namely HTTP headers, that can make the protocol behave as if it were stateful.
The information is media-independent, which means that any type of data can be sent over HTTP as long as both the client and the server know how to handle the data content. Web browsers and web servers commonly use HTTP to transfer the files that make up web pages.
Although the HTTP specification allows for data to be transferred on port 80 using either
TCP or UDP, most implementations use TCP. A secure version of the protocol, HTTPS, uses
TCP port 443.
HTTP Request-Response Cycle
The data is exchanged via HTTP Requests and HTTP Responses, which are specialized data
formats, used for HTTP communication. A sequence of requests and responses is called an
HTTP Session and is initiated by a client by establishing a connection to the server.
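As an illustrative sketch (the exact headers vary by browser and server), a minimal HTTP request and the corresponding response might look like this:

GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1256

<html> ... (page content) ... </html>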
When you browse to www.google.com, your host first sends a DNS query for the IP address of www.google.com. If your DNS server has the answer cached, it returns the answer directly.
8.9 Explaining the TCP/IP Transport Layer and Application Layer
Explaining DHCP for IPv4
Managing a network can be very time-consuming. Network clients break or are moved, and new clients that need network connectivity are purchased. These tasks are all part of the network administrator's job. Depending on the number of IP hosts, manually configuring IPv4 addresses for every device on the network can be virtually impossible.
DHCP can greatly decrease the workload of the network administrator. DHCP
automatically assigns an IPv4 address from an IPv4 address pool that the administrator
defines. However, DHCP is much more than just a mechanism that allocates IPv4
addresses. This service automates the assignment of IPv4 addresses, subnet masks,
gateways, and other required networking parameters.
DHCP is built on a client/server model. The DHCP server is allocated one or more network
addresses and sends configuration parameters to dynamically configured hosts that
request them. The term "client" refers to a host that is requesting initialization
parameters from a DHCP server. Most endpoint devices on today’s networks are DHCP
clients, including Cisco IP phones, desktop PCs, laptops, printers, and even Blu-Ray players.
Just about any device that you can configure to participate on a TCP/IP network has the
option of using DHCP to obtain its IPv4 configuration.
Depending on the actual DHCP server that is in use, there are three basic DHCP IPv4
address allocation mechanisms:
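• Automatic allocation: The DHCP server permanently assigns an IPv4 address to a client.
• Dynamic allocation: The DHCP server leases an IPv4 address to a client for a limited period of time, or until the client explicitly releases it. This is the most common mechanism.
• Manual allocation: The administrator assigns a specific IPv4 address to a client, and DHCP is used only to convey that address to the client.
A Cisco router can itself act as a DHCP client and obtain the IPv4 address of one of its interfaces from a DHCP server.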
The interface interface-identifier command on the router specifies an interface and enters interface configuration mode, while the ip address dhcp command enables the interface to acquire an IPv4 address through DHCP.
If the router receives the optional default gateway DHCP parameter from the server, it will
inject the default route into its routing table, pointing to the default gateway IPv4 address.
To verify that the router interface has acquired an IPv4 address through DHCP, you can
use the show ip interface brief command:
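The following is a minimal sketch, assuming the DHCP server is reachable through GigabitEthernet0/1; the interface numbers and the assigned address are illustrative:

Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip address dhcp
Router(config-if)# no shutdown

Router# show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0     10.1.1.1        YES manual up                    up
GigabitEthernet0/1     10.1.50.100     YES DHCP   up                    up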
To configure the DHCP server on a router, you should enter the DHCP pool configuration
mode using the ip dhcp pool name command. Then, assign the DHCP parameters to the
DHCP pool.
Use the following commands that are shown in the table to define the pool parameters.
You can also exclude the range of IPv4 addresses from the DHCP assignment, by using the
ip dhcp excluded-address ip-address [last-ip-address] command, which is used in the
global configuration mode.
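The following sketch produces the configuration described below; the pool name, default gateway, domain name, and DNS server values are illustrative assumptions:

Router(config)# ip dhcp excluded-address 10.1.50.1 10.1.50.50
Router(config)# ip dhcp pool LAN-POOL
Router(dhcp-config)# network 10.1.50.0 255.255.255.0
Router(dhcp-config)# default-router 10.1.50.1
Router(dhcp-config)# domain-name example.com
Router(dhcp-config)# dns-server 10.1.50.10
Router(dhcp-config)# lease 0 12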
In the configuration example above, the IPv4 addresses are assigned from the address
pool 10.1.50.0/24 with a lease time of 12 hours. Additional parameters are the default
gateway, domain name, and DNS server. Also, IPv4 addresses from 10.1.50.1 to 10.1.50.50
are not assigned to the end devices.
To verify information about the configured DHCP address pools, you can use the show ip
dhcp pool command, and to display the address binding information, which displays a list
of all IPv4 address-to-MAC bindings, you can use the show ip dhcp binding command.
IPv4 DHCP Settings on Windows Host
On a Windows computer, you can use different ipconfig command options to view and
refresh DHCP and DNS settings.
The following is the syntax for the ipconfig command:
ipconfig [/all] [/renew [adapter]] [/release [adapter]] [/displaydns] [/flushdns]
The following command options are commonly used:
• /all This option displays the complete TCP/IP configuration for all adapters,
including DHCP and DNS configuration. Without this parameter, the ipconfig
command displays only the IP address, subnet mask, and default gateway values
for each adapter. Adapters can represent physical interfaces, such as installed
network adapters, or logical interfaces, such as dialup connections.
• /renew [adapter] This option renews DHCP configuration for all adapters (if an
adapter is not specified) or for a specific adapter if the adapter parameter is
included. This parameter is available only on computers with adapters that are
configured to obtain an IP address automatically. To specify an adapter name,
enter the adapter name that appears when you use ipconfig without parameters.
• /release [adapter] This option sends a DHCPRELEASE message to the DHCP server
to release the current DHCP configuration and discard the IP address configuration
for either all adapters (if an adapter is not specified) or for a specific adapter if the
adapter parameter is included. This parameter disables TCP/IP for adapters that
are configured to obtain an IP address automatically. To specify an adapter name,
enter the adapter name that appears when you use ipconfig without parameters.
• /displaydns This option displays the contents of the host DNS cache. When an IP
host makes a DNS query for a hostname, it caches the result to avoid unnecessary
queries.
• /flushdns This option deletes the host DNS cache. This option is useful if the IP
address associated with a hostname has changed, but the host is still caching the
old IP address.
• /? This option displays help at the command prompt.
9.1 Exploring the Functions of Routing
Introduction
One of the intriguing aspects of Cisco routers is how the router chooses which route is the
best among the routes presented by routing protocols, manual configuration, and various
other means. While route selection is much simpler than you might imagine, you need to
learn how Cisco routers work to understand it completely. Determining the best path
involves evaluating multiple paths to the same destination network and selecting the
optimal path to reach that network. This process is performed for every packet that goes
through a router.
A router is a networking device that forwards packets between different networks. A
router is typically positioned at the edge of a network and can provide connections to
other networks. In Enterprise Campus environments, you will typically find devices
providing routing in the center of the network or at the edge where they provide
connectivity to WANs or the internet. Routing functionality can often be provided not only
by routers but also by firewalls or Layer 3 switches. At home, a router is typically part of an all-in-one device that also provides switching, wireless, and security functions.
While switches exchange data frames between segments to enable communication within
a single network, routers are required to reach hosts that are not in the same network.
Routers enable internetwork communication by connecting interfaces in multiple
networks. For example, the router in the figure above has one interface connected to the
192.168.1.0/24 network and another interface connected to the 192.168.2.0/24 network.
The router uses a routing table to route traffic between the two networks.
In the following figure, data frames travel between the various endpoints on LAN A. The
switch enables the communication to all devices within the same network, whose network
IPv4 address is 10.18.0.0/16. Likewise, the LAN B switch enables communication among
the hosts on LAN B, whose network IPv4 address is 10.22.0.0/16.
A host in LAN A cannot communicate with a host in LAN B without the router. Routers
enable communication between hosts that are not in the same local LAN. Routers can do
this function because they can be attached to multiple networks and can route between
them. In the figure, the router is attached to two networks, 10.18.0.0/16 and
10.22.0.0/16. Routers are essential components of large IP networks because they can
accommodate growth across wide geographical areas.
This figure illustrates another important routing concept. Networks to which the router is
attached are called local or directly connected networks. All other networks—networks
that a router is not directly attached to—are called remote networks.
The topology in the figure shows RouterX, which is directly attached to three networks
172.16.1.0/24, 172.16.2.0/24, and 192.168.100.0/24. To RouterX, all other networks, i.e., 10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24, are remote networks. To RouterY,
networks 10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24 are directly connected
networks. RouterX and RouterY have a common directly connected network
192.168.100.0/24.
• CPU: A CPU, or processor, is the chip installed on the motherboard that carries out
the instructions of a computer program. For example, it processes all the
information gathered from other routers or sent to other routers.
• Motherboard: The motherboard is the central circuit board, which holds critical
electronic components of the system. The motherboard provides connections to
other peripherals and interfaces.
• Memory: There are four primary types of memory:
o RAM: RAM is memory on the motherboard that stores data during CPU
processing. It is a volatile type of memory in that its information is lost
when power is switched off. RAM provides temporary memory for the
router's running configuration while the router is powered on.
o NVRAM: NVRAM retains content when the router is powered down.
NVRAM stores the startup configuration file for most router platforms. It
also contains the software configuration register, which determines which
Cisco IOS image is used when booting the router.
o ROM: ROM is read-only memory on the motherboard. The content of
ROM is not lost when power is switched off. Data stored in ROM cannot be
modified, or it can be modified only slowly or with difficulty. ROM
sometimes contains a ROM monitor (ROMMON). ROM Monitor initializes
the hardware and boots the Cisco IOS software when you power on or
reload a router. You can use the ROM monitor to perform certain
configuration tasks, such as recovering a lost password or downloading
software over the console port. ROM also includes bootloader software
(bootstrap), which helps the router boot when it cannot find a valid Cisco
IOS image in the flash memory. During normal startup, the ROM Monitor
initializes the router, and then control passes to the Cisco IOS software.
o Flash: Flash memory is nonvolatile storage that can be electrically erased
and reprogrammed. Flash memory stores the Cisco IOS image. On some
platforms, it can also store configuration files or boot images.
• Ports (also referred to as interfaces): Ports are used to connect routers to other
devices in the network. Routers can have these types of ports:
o Management ports: Routers have a console port that can be used to attach
to a terminal used for management, configuration, and control. High-end
routers may also have a dedicated Ethernet port that can be used only for
management. An IP address can be assigned to the Ethernet port, and the
router can be accessed from a management subnet. The auxiliary (AUX)
interface on a router is used for remote management of the router.
Typically, a modem is connected to the AUX interface for dial-in access.
From a security standpoint, enabling the option to connect remotely to a
network device carries with it the responsibility of vigilant device security.
o Network ports: The router has many network ports, including various LAN
or WAN media ports, which may be copper or fiber cable. IP addresses are
assigned to network ports.
As an example, the following figure shows the ports on a Cisco integrated services router
(ISR) 4331 Router:
• Path determination: Routers use their routing tables to determine how to forward
packets. Each router must maintain its own local routing table, which contains a
list of all destinations known to the router and information about reaching those
destinations. When a router receives an incoming packet, it examines the
destination IP address in the packet and searches for the best match between the
destination address and the network addresses in the routing table. A matching
entry may indicate that the destination is directly connected to the router or that it
can be reached via another router. This router is called the next-hop router and is
on the path to the final destination. If there is no matching entry, the router sends
the packet to the default route. If there is no default route, the router drops the
packet.
• Packet forwarding: After a router determines the appropriate path for a packet, it
forwards it through a network interface toward the destination network. Routers
can have interfaces of different types. When forwarding a packet, routers perform
encapsulation following the OSI Layer 2 protocol implemented at the exit
interface. The figure shows router A, which has two FastEthernet interfaces and
one serial interface. When router A receives an Ethernet frame, it de-encapsulates
it, examines it, and determines the exit interface. If the router needs to forward
the packet out of the serial interface, the router will encapsulate the frame
according to the Layer 2 protocol used on the serial link. The figure also shows a
conceptual routing table that lists destination networks known to the router and
its corresponding exit interface or next-hop address. If an interface on the router
has an IPv4 address within the destination network, the destination network is
considered "directly connected" to the router. For example, assume that router A
receives a packet on its Serial0/0/0 interface destined for a host on network
10.1.1.0. Because the routing table indicates that network 10.1.1.0 is directly
connected, router A forwards the packet out of its FastEthernet 0/1 interface, and
the switches on the segment process the packet to the host. If a destination
network in the routing table is not directly connected, the packet must reach the
destination network via the next-hop router. For example, assume that router A
receives a packet on its Serial0/0/0 interface and the destination host address is on
the 10.1.3.0 network. In this case, it must forward the packet to the router B
interface with the IPv4 address 10.1.2.2.
9.5 Exploring the Functions of Routing
Routing Table
A routing table contains a list of all networks known to the router and information about
reaching those networks. Each line or entry of the routing table lists a destination network
and the interface or next-hop address by which that destination network can be reached.
A routing table may contain four types of entries: directly connected networks, dynamic routes, static routes, and default routes. For a directly connected network entry, you can tell the following:
• Route source: Identifies how the route was learned. Directly connected interfaces
have two route source codes. "C" identifies a directly connected network. "L"
identifies the local IPv4 address assigned to the router’s interface.
• Destination network: For directly connected networks, the destination networks
are local to the router. The destination network address is indicated with a
network address and subnet mask in the form of the prefix. Note that "L" entries,
which identify the local IPv4 address of the interface, have a prefix of /32.
• Outgoing interface: Identifies the exit interface to use when forwarding packets to
the destination network.
Dynamic Routes
Routers use dynamic routing protocols to share information about the reachability and
status of remote networks. A dynamic routing protocol allows routers to learn about
remote networks from other routers automatically. These networks, and the best path to
each, are added to the router's routing table and identified as a network learned by a
specific dynamic routing protocol. Cisco routers can support a variety of dynamic IPv4 and
IPv6 routing protocols, such as Border Gateway Protocol (BGP), Open Shortest Path First
(OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP), Intermediate System-to-
Intermediate System (IS-IS), Routing Information Protocol (RIP), and so on. The routing
information is updated when changes in the network occur. Larger networks require
dynamic routing because there are usually many subnets and constant changes. These
changes require updates to routing tables across all routers in the network to prevent
connectivity loss. Dynamic routing protocols ensure that the routing table is automatically
updated to reflect network changes. The following figure displays an IPv4 routing table
entry on R1 for the route to remote network 172.16.1.0/24.
From the example entry, you can tell the following:
• Route source: Identifies how the route was learned. "O" in the figure indicates that
the source of the entry was the OSPF dynamic routing protocol.
• Destination network: Identifies the address of the remote network. The router
knows how to reach 172.16.1.0/24 network.
• Administrative distance: Identifies the trustworthiness of the route source. Lower
values indicate the preferred route source. OSPF has a default administrative
distance value of 110.
• Metric: Identifies the value assigned to reach the remote network. Lower values
indicate preferred routes. This OSPF route has a metric of 2 for the destination
network 172.16.1.0/24.
• Next-hop: Identifies the IPv4 address of the next router to forward the packet to.
The IPv4 address of the next-hop is 192.168.10.2.
• Route time stamp: Identifies how much time has passed since the route was
learned. The information in the example entry was learned 3 minutes and 23
seconds ago.
• Outgoing interface: Identifies the exit interface to use to forward a packet toward
the final destination. The packets destined to the 172.16.1.0/24 network will be
forwarded out of the GigabitEthernet 0/1 interface.
Static Routes
Static routes are entries that you manually enter directly into the configuration of the
router. Static routes are not automatically updated and must be manually reconfigured if
the network topology changes. Static routes can be effective for small, simple networks
that do not change frequently. The benefits of using static routes include improved
security and resource efficiency. The main disadvantage of using static routes is the lack of
automatic reconfiguration if the network topology changes. There are two common types
of static routes in the routing table—static routes to a specific network and the default
static route.
From the example entry, you can tell the following:
• Route source: Identifies how the route was learned. Static routes have a route
source code "S".
• Destination network: The destination network address is indicated with a network
address and subnet mask in the prefix. The router knows how to reach
192.168.30.0/24 network.
• Administrative distance: Identifies the trustworthiness of the route source. Lower
values indicate the preferred route source. Static routes have a default
administrative value of 1.
• Metric: Identifies the value assigned to reach the remote network. Static routes do not calculate a metric the way dynamic routing protocols do; the metric is simply set. The default metric value for a static route is 0.
• Next-hop: Identifies the IPv4 address of the next router to forward the packet to.
The IPv4 address of the next-hop is 192.168.10.2.
Default Routes
A default route is an optional entry that the router uses if a packet does not match any other, more specific route in the routing table. A default route can be dynamically learned or statically configured. More than one source can provide a default route, but the selected default route is presented in the routing table as the gateway of last resort.
• Route source: Identifies how the route was learned. The default route is marked with an asterisk (*). Depending on the source of the default route, an asterisk is added to the route source code (S* in the example).
• Destination network: The destination network for the default route is 0.0.0.0/0.
• Administrative distance: Identifies the trustworthiness of the route source. Lower values indicate the preferred route source. The default route inherits the administrative distance of its source. In the example, the default route's source is a static route, which has a default administrative distance value of 1.
• Metric: Identifies the value assigned to reach the remote network. The default
route inherits the metric value from the route source. In the example, since the
default route is statically configured, the metric is 0.
• Next-hop: Identifies the IPv4 address of the next router to forward the packet to.
The IPv4 address of the next-hop for the default route in the example is 10.1.1.1.
IPv4 Routing Table Example
On a Cisco router, the show ip route command can be used to display the IPv4 routing
table of a router. The command output is used to verify that IPv4 networks and specific
interface addresses have been installed in the IPv4 routing table. The following output
displays the routing table of RouterA.
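The following sketch is illustrative only; the networks, next hops, and ordering are assumptions chosen to match the entry descriptions below, and the code legend is abbreviated:

RouterA# show ip route
Codes: L - local, C - connected, S - static, R - RIP, D - EIGRP, O - OSPF,
       * - candidate default
       <legend abbreviated>

Gateway of last resort is 10.1.1.1 to network 0.0.0.0

C        10.1.1.0/24 is directly connected, GigabitEthernet0/0
L        10.1.1.2/32 is directly connected, GigabitEthernet0/0
R        10.2.2.0/24 [120/1] via 10.1.1.1, 00:00:21, GigabitEthernet0/0
O        172.16.1.0/24 [110/2] via 192.168.10.2, 00:03:23, GigabitEthernet0/1
D        172.16.2.0/24 [90/3072] via 192.168.10.2, 00:05:14, GigabitEthernet0/1
S        172.16.3.0/24 [1/0] via 192.168.10.2
C        192.168.10.0/24 is directly connected, GigabitEthernet0/1
L        192.168.10.1/32 is directly connected, GigabitEthernet0/1
S*       0.0.0.0/0 [1/0] via 10.1.1.1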
The output shows that RouterA has received routes from multiple sources (static routes
and routing protocols), which would be uncommon in a production network. However,
this table is used here to demonstrate the various route sources. The first part of the
output explains the codes, presenting the letters and the associated sources of the entries
in the routing table.
The letters are:
• C: Indicates directly connected networks; the first and seventh entries are directly
connected networks.
• L: Indicates local interfaces within connected networks; the second and eighth
entries are local interfaces.
• R: Indicates RIP; the third entry is the RIP route.
• O: Indicates OSPF; the fourth entry is an OSPF route.
• D: Indicates EIGRP; the fifth entry is an EIGRP route. The letter D stands for
Diffusing Update Algorithm (DUAL), which is the update algorithm that EIGRP uses.
The code letter E was previously taken by the legacy exterior gateway protocol
(EGP).
• S: Indicates static routes; the sixth and ninth entries are static routes.
• Asterisk (*): Indicates that this static route is a candidate for the default route.
• Various routing processes, which actually run a routing protocol, such as RIP
version 2 (RIPv2), EIGRP, IS-IS, and OSPF. The best route from a routing process has
the potential to be installed into the routing table. The routing protocol with the
lowest administrative distance always wins when installing routes into the routing
table.
• The routing table itself, which accepts information from the routing processes and
also replies to requests for information from the forwarding process.
• The forwarding process, which requests information from the routing table to
make a packet forwarding decision.
10.1 Configuring a Cisco Router
Introduction
As with a switch, the proper physical installation of a router is very important. Since
there are many different models of routers, as a network engineer, you will have to install
and connect your router according to the model specifics, which are always described in
the installation documentation. After a router is physically set up, you will typically need
to connect to the router via a console interface and start configuring it. You need to
understand the initial configuration steps to configure the router properly; however,
different models' initial configurations are typically similar. But before you start with the
initial configuration, it’s always a smart idea to check if the router hardware is working
properly. Then, you can start setting up interfaces connected to different IP networks and
check their status. You can also check what network devices the router can communicate
with on the same link by using different discovery protocols.
In Enterprise environments, routers and other devices performing routing are located in
different parts of the campus. In contrast, at home or smaller branches, they are typically
located close to the link to the telecommunication provider. You will need to configure the
interfaces according to some Enterprise or internet provider IP addressing plan in either
case.
Unlike a computer end device, Cisco routers do not have a keyboard, monitor, or mouse
device to allow direct user interaction. However, you can configure the router from a PC.
At the initial installation, the PC has to be connected to the router directly through the
console port. To connect to the console port, you use a console cable, which is also called
a rollover cable.
The console port can be an RJ-45 port or a USB port. A Cisco router might have only one
type or both types of console ports. When the console port on a device is an RJ-45 port,
you require a console cable with an RJ-45 connector on one end. The other end can be a
serial DB-9 connector or a USB connector. Most modern computers have USB ports and
rarely include built-in serial ports. If your console cable has a serial connector, you will
need a serial-to-USB adapter and operating system driver (USB-to-RS-232-compatible
serial port adapter) to establish connectivity.
When the console port is a USB port, you need a suitable USB cable (for example, a USB
Type A-to-5-pin mini Type B) and an operating system device driver to establish
connectivity.
Your PC also needs a serial port and communications software, such as Tera Term or PuTTY, configured with the following settings:
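• Speed: 9600 b/s
• Data bits: 8
• Parity: None
• Stop bits: 1
• Flow control: None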
Note: If a username or password is configured, you will instead get a prompt to enter
credentials.
The setup mode is not intended to enter complex protocol features in the router but
rather for a minimal configuration. You do not have to use the setup mode; you can use
other configuration modes to configure the router.
The primary purpose of the setup mode is to rapidly bring up a minimal-feature
configuration for any router that cannot find its configuration from some other source. In
addition to being able to run the setup mode when the router boots, you may also initiate
it by entering the setup privileged EXEC mode command.
To skip the system configuration dialog and configure the router manually, answer the
first question in the system configuration dialog with no, or press Ctrl-C.
• Ethernet interfaces: The term Ethernet interface refers to any type of Ethernet
interface. For example, some Cisco routers have an Ethernet interface that is
capable of only 10 Mbps, so to configure this type of interface, you would use the
interface Ethernet interface-identifier configuration command. However, other
routers have interfaces that are capable of operating up to 100 Mbps. These
interfaces are referred to as Fast Ethernet ports. You use the interface
FastEthernet interface-identifier command to configure these types of ports.
Similarly, the interfaces that are capable of Gigabit Ethernet speeds are referenced
with the interface GigabitEthernet interface-identifier command. The interfaces
that are capable of operating up to 10 Gbps, 25 Gbps, 40 Gbps, and 100 Gbps
Ethernet speed are referenced with the interface TenGigabitEthernet interface-
identifier, interface TwentyFiveGigE interface-identifier, interface
FortyGigabitEthernet interface-identifier, and interface HundredGigE interface-
identifier commands, respectively.
• Serial interfaces: Serial interfaces are the second major type of physical interfaces
on Cisco routers. To support point-to-point leased lines and Frame Relay access-
link standards, Cisco routers use serial interfaces. You can then choose which data
link layer protocol to use, such as High-Level Data Link Control (HDLC) or PPP for
leased lines or Frame Relay for Frame Relay connections, and configure the router
to use the correct data link layer protocol. Use the interface serial interface-
identifier command when configuring these types of interfaces.
Routers use interface identifiers to distinguish between interfaces of the same type.
Depending on the model of the router, the interface-identifier may be:
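• A single number (for example, interface GigabitEthernet0)
• A slot/port combination (for example, interface GigabitEthernet0/1)
• A slot/subslot/port combination (for example, interface GigabitEthernet0/0/1)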
An IPv4 address with a mask of 255.255.255.255 (prefix /32, all bits set to binary 1) is
called the host IPv4 address. The host IPv4 address indicates that only one IPv4 address is
used in the subnet and is often used to address loopback interfaces.
You can configure a loopback interface with a mask shorter than /32. The routing table will then show the connected network with that mask, but the interface address itself will still appear as a local /32 route.
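A minimal sketch of configuring a loopback interface with a host address; the interface number and address are arbitrary:

Router(config)# interface Loopback0
Router(config-if)# ip address 10.255.255.1 255.255.255.255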
Note: The router interface characteristics include, but are not limited to, interface
description, the IP address of the interface, the data link encapsulation method, the media
type, the bandwidth, and the clock rate. You can enable many features on a per-interface
basis.
When you first configure an interface, except in the setup mode, you must
administratively enable the interface before the router can use it to transmit and receive
packets. Use the no shutdown command to enable the interface.
You may want to disable an interface to perform hardware maintenance on a specific
interface or a segment of a network. You may also want to disable an interface if a
problem exists on a specific segment of the network, and you must isolate this segment
from the rest of the network. The shutdown command disables or administratively turns
off an interface. To re-enable the interface, use the no shutdown command.
To enable an interface, use the following commands:
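A minimal sketch of bringing up a LAN interface; the interface identifier and addressing are illustrative:

Router(config)# interface GigabitEthernet0/0
Router(config-if)# description Link to LAN A
Router(config-if)# ip address 192.168.1.1 255.255.255.0
Router(config-if)# no shutdown
Router(config-if)# end
Router# show interfaces GigabitEthernet0/0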
The following table shows the output fields and their meanings.
The table shows some of the output fields for a Gigabit Ethernet interface and their
meanings in this example.
Note: By truncating the words, you can significantly shorten the commands that refer to
router interfaces. For example, you can use show int Fa0/0 instead of show interfaces
FastEthernet0/0.
Each of the command outputs shown in the previous examples lists two interface status
codes. For a router to use an interface, the two interface status codes on the interface
must be in the up state. The first status code refers to whether the physical layer (Layer 1)
is working, and the second status code mainly (but not always) refers to whether the data
link layer (Layer 2) protocol is working.
Four combinations of settings exist for the status codes when troubleshooting a network.
The following table lists the four combinations and an explanation of the typical reasons
why an interface would be in this state. As you review the list, note that if the hardware
status (the first status code) is not "up," the second will always be "down" because the
data link layer functions cannot work if the physical layer has a problem.
The Cisco Discovery Protocol provides the following information about each neighboring device: device identifiers (hostname), address list, port identifier, capabilities list, and platform (hardware model and software version).
The Cisco Discovery Protocol is enabled by default on most interfaces (except for some
legacy interfaces), but you can disable this functionality at the device and interface level.
To prevent other Cisco Discovery Protocol capable devices from accessing information
about a specific device, use the no cdp run global configuration command. To disable
Cisco Discovery Protocol on an interface, use the no cdp enable command. To enable
Cisco Discovery Protocol on an interface, use the cdp enable interface configuration
command.
The show cdp neighbors command displays information about Cisco Discovery Protocol
neighbors. The following example shows the Cisco Discovery Protocol output for Router A.
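The following sketch shows the general output format; the neighbor names, platforms, and timers are illustrative assumptions:

RouterA# show cdp neighbors
Capability Codes: R - Router, S - Switch, ... <legend abbreviated>

Device ID        Local Intrfce     Holdtme    Capability  Platform   Port ID
SwitchB          Gig 0/0            151           S I      WS-C2960   Gig 0/1
RouterB          Ser 0/0/0          142           R        ISR4331    Ser 0/0/0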
For each Cisco Discovery Protocol neighbor, the following information is displayed:
• Device ID
• Local interface—the interface on this device that is connected to the neighbor
• Holdtime value, in seconds
• Device capability code
• Hardware platform
• Port ID—the interface on the neighboring device that is connected to this device
The hold time value indicates how long (in seconds) the receiving device should hold the
Cisco Discovery Protocol information before discarding it.
Cisco Discovery Protocol information is sent periodically; the hold time counts down, and
if it reaches zero, the information is discarded.
The format of the show cdp neighbors output varies among different types of devices, but
the available information is generally consistent across devices.
You can use the show cdp neighbors command on a Cisco Catalyst switch to display the
Cisco Discovery Protocol updates that the switch receives on the local interfaces. Note
that on a switch, the local interface is referred to as the local port.
If you add the detail argument to the show cdp neighbors command, the resulting output
includes additional information, such as the network layer addresses of neighboring
devices. The output from the show cdp neighbors detail command is identical to the one
that the show cdp entry * command produces.
Note: Cisco Discovery Protocol is limited to gathering information about the directly
connected Cisco neighbors. Other tools, such as Telnet and SSH, are available for
gathering information about remote devices that are not directly connected.
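Link Layer Discovery Protocol (LLDP) is a vendor-neutral discovery protocol, defined in IEEE 802.1AB, that performs a role similar to Cisco Discovery Protocol in multivendor environments. LLDP advertises and learns the following information about neighboring devices: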
• Management address: the IP address used to access the device for management
(configuring and verifying the device)
• System capabilities: different hardware and software specifications of the device
• System name: the hostname that was configured on that device
LLDP has these configuration guidelines and limitations:
• Must be enabled on the device before you can enable or disable it on any interface
• Is supported only on physical interfaces
• Can discover up to one device per port
• Can discover Linux servers
To enable or disable LLDP globally, use the following command:
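Router(config)# lldp run
Router(config)# no lldp run

The lldp run global configuration command enables LLDP on the device; the no lldp run command disables it.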
After you globally enable LLDP, it is enabled for transmit and receive on all supported
interfaces by default. The lldp transmit command enables the transmission of LLDP
packets on an interface. The lldp receive command enables the reception of LLDP packets
on an interface.
The show lldp neighbors command displays information about neighbors, including device
ID, interface type and number, hold time settings, capabilities, and port ID.
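A sketch of the output described below; the hold times and the neighbors' port IDs are illustrative assumptions:

R1# show lldp neighbors
Capability codes:
    (R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
    (W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other

Device ID           Local Intf     Hold-time  Capability      Port ID
DSW1                Et0/1          120        R               Et1/1
DSW2                Et0/2          120        R               Et1/1

Total entries displayed: 2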
The output in the example tells you that router R1 has two neighbors, DSW1 and DSW2. Both devices have routing functionality. The interfaces through which they are reached are Ethernet0/1 and Ethernet0/2. This output contains information only about the neighbors that support LLDP and have it configured to exchange information.
11.1 Exploring the Packet Delivery Process
Introduction
Any device connected to either an Enterprise Campus, Branch, or home network uses an
IP address that identifies the device on the network and a subnet mask that describes
which portion of the address refers to the network ID and which part refers to the host ID.
Having this information, a device is "smart" enough to know if the devices it wants to
communicate with can be reached directly, which means they are on the same network.
In this case, the device can rely on a switch to deliver the frames to the receiver. But the
sender might still not know the physical address of such a device. Therefore, a protocol that
can map IP addresses to the physical addresses of a receiver is required.
If the two hosts are on different subnets, then the sending host must send the data to its
default gateway, which will forward the data to the destination. The default gateway, a
router, allows devices on one subnet to communicate with other subnets.
Host-to-host packet delivery either in the same network or in different networks contains
a variety of processes. As a networking engineer, you need to feel confident about them.
This knowledge is especially important when troubleshooting, where different
components are crucial in diagnosing issues in packet delivery.
Layer 2 defines how data is formatted for transmission and how access to the physical
media is controlled. Layer 2 devices provide an interface with the physical media. Some
common examples are network interface cards (NICs) installed in a host.
Device-to-device communications require Layer 2 addresses, also known as physical
addresses. For example, Ethernet physical addresses or MAC addresses are embedded in
Ethernet NIC in end devices, such as hosts.
Although MAC addresses are unique, physical addresses are not hierarchical. They are
associated with a particular device, regardless of its location or connected network. These
Layer 2 addresses have no meaning outside the local network media. They are used to
locate the end devices in the local physical network on the data link layer.
An Ethernet MAC address is a two-part, 48-bit binary value that is expressed as 12
hexadecimal digits. The address formats might appear like 00-05-9A-3C-78-00,
00:05:9A:3C:78:00, or 0005.9A3C.7800.
All devices that are connected to an Ethernet LAN have MAC-addressed interfaces. The
NIC uses the MAC address in received frames to determine if a message should be passed
to the upper layers for processing. The MAC address is permanently encoded into a ROM
chip on a NIC. The MAC address is made up of the Organizationally Unique Identifier (OUI)
and the vendor assignment number.
Switches also have MAC addresses, but a device only sends a frame to these addresses
when communicating with the switch, for example, for management. Otherwise, frames
are addressed for other devices, and the switch forwards the frames to those devices.
The figure shows the Layer 2 (L2) addresses on two PCs and a router. Note that the router
has different MAC addresses on each interface.
Layer 3 provides connectivity and path selection between two host systems located on
geographically separated networks. At the boundary of each local network, an
intermediary network device, usually a router, de-encapsulates the frame to read the
destination address contained in the packet's header (the Layer 3 protocol data unit
[PDU]). Routers use the network identifier portion of this address to determine which
path to use to reach the destination host. Once the path is determined, the router
encapsulates the packet in a new frame and sends it toward the destination end device.
Layer 3 addresses must include identifiers that enable intermediary network devices to
locate the networks that different hosts belong to. In the TCP/IP protocol suite, every IP
host address contains information about the network where the host is located.
Intermediary devices that connect networks are routers. The role of the router is to select
paths and direct packets toward a destination. This process is known as routing. A router
uses a list of paths located in a routing table to determine where to send data.
Layer 3 addresses are assigned to end devices such as hosts and network devices that
provide Layer 3 functions. The router has its own Layer 3 address on each interface. Each
network device that provides a Layer 3 function maintains a routing table.
As seen in the example, the two router interfaces belong to different networks. The left
interface and the directly connected PC belong to the 192.168.3.0/24 network, while the
right interface and the directly connected PC belong to the 192.168.4.0/24 network. For
devices in different IP networks, a Layer 3 device is needed to route traffic between them.
The following output shows the Wireshark analysis of the ARP messages. In the first
example, you can see an ARP request sent as a broadcast to find out the MAC address of
IPv4 host 10.10.1.175. In the second ARP message, you can see the ARP reply including the
MAC address of the host, which is 00:bc:22:a8:e0:a0.
To limit the output of the arp command to a single interface, use the arp -a -N ip_address
command.
To display the ARP table on a Cisco IOS router, use the show ip arp or show arp EXEC
command; the output is the same.
The proper syntax to display the ARP table is show ip arp [ip-address] [host-name] [mac-
address] [interface type number].
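A sketch of the output, reusing the addresses from the Wireshark example above; the router's own entry and its MAC address are illustrative assumptions:

Router# show ip arp
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  10.10.1.1               -   00bc.22a8.e0b1  ARPA   GigabitEthernet0/0
Internet  10.10.1.175             5   00bc.22a8.e0a0  ARPA   GigabitEthernet0/0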
11.7 Exploring the Packet Delivery Process
Host-To-Host Packet Delivery
Host-to-host packet delivery consists of an interesting series of processes. In this multipart
example, you will discover what happens "behind the scenes" when an IPv4 host
communicates with another IPv4 host, firstly when a router is used. Secondly, when a
switch is responsible for the host-to-host packet delivery process.
Host-To-Host Packet Delivery (Step 1 of 14)
In this example, the host 192.168.3.1 needs to send arbitrary application data to the host
192.168.4.2, located on another subnet. The application does not need a reliable
connection, so it uses UDP. Because it is unnecessary to set up a session, the application can start sending data immediately, using UDP port numbers to identify the conversation and deliver each segment to the right application.
Host-To-Host Packet Delivery (Step 2 of 14)
UDP prepends a UDP header (UDP HDR) and passes the segment to the IPv4 layer (Layer
3) with an instruction to send the segment to 192.168.4.2. IPv4 encapsulates the segment
in a Layer 3 packet, setting the source address (SRC IP) of the packet to 192.168.3.1, while
the destination address (DST IP) is set to 192.168.4.2.
Host-To-Host Packet Delivery (Step 3 of 14)
When Host A analyzes the destination address, it finds that the destination address is on a
different network. The host forwards any packet that is not destined for the local IPv4
network in a frame addressed to the default gateway. The default gateway is the address
of the local router, which must be configured on hosts (PCs, servers, and so on). IPv4
passes the Layer 3 packet to Layer 2 with instructions to forward it to the default gateway.
Host A must place the packet in its “parking lot” (on hold) until it has the MAC address of
the default gateway.
Host-to-Host Packet Delivery (Step 4 of 14)
To deliver the packet, the host needs the Layer 2 information of the next-hop device. The
ARP table in the host does not have an entry and must resolve the Layer 2 address (MAC
address) of the default gateway. The default gateway is the next hop for the packet. The
packet waits while the host resolves the Layer 2 information.
Host-To-Host Packet Delivery (Step 5 of 14)
Because the host does not know the default gateway’s Layer 2 address, the host uses the
standard ARP process to obtain the mapping. The host sends a broadcast ARP request
looking for the MAC address of its default gateway.
Host-To-Host Packet Delivery (Step 6 of 14)
The host has previously been configured with 192.168.3.2 as the default gateway. The
host 192.168.3.1 sends out the ARP request, and the router receives it. The ARP request
contains information about Host A. Notice that the first thing the router does is add this
information to its own ARP table.
Host-To-Host Packet Delivery (Step 7 of 14)
The router processes the ARP request like any other host would and sends the ARP reply
with its own information directly to the host's MAC address.
Host-to-Host Packet Delivery (Step 8 of 14)
The host receives an ARP reply to its ARP request and enters the information in its local
ARP table.
Host-To-Host Packet Delivery (Step 9 of 14)
Now, the Layer 2 frame with the application data can be sent to the default gateway. The
pending frame is sent with the local host's IPv4 address and MAC address as the source.
The destination IPv4 address is that of the remote host, while the destination MAC
address is that of the default gateway.
Host-To-Host Packet Delivery (Step 10 of 14)
When the router receives the frame, it recognizes its MAC address and processes the
frame. At Layer 3, the router sees that the destination IPv4 address is not its own address.
An ordinary host would discard the packet at this point. However, because this device is a
router, it passes all IPv4 packets that are not addressed to the router itself to the routing
process. The routing process determines where to send the packet.
Host-To-Host Packet Delivery (Step 11 of 14)
The routing process checks for the longest prefix match of the destination IPv4 address in
its routing table. In this example, the destination network is directly connected. Therefore,
the routing process can pass the packet directly to Layer 2 for the appropriate interface.
Host-To-Host Packet Delivery (Step 12 of 14)
Assuming that the router does not have the mapping to 192.168.4.2, Layer 2 uses the ARP
process to obtain the mapping for the IPv4 address and the MAC address. The router asks
for the Layer 2 information in the same way as the hosts. An ARP request for the
destination MAC address is sent to the link.
The destination host receives and processes the ARP request.
Host-To-Host Packet Delivery (Step 13 of 14)
The destination host receives the frame that contains the ARP request and passes the
request to the ARP process. The ARP process takes the information about the router from
the ARP request and places the information in its local ARP table. The ARP process
generates the ARP reply and sends it back to the router.
The router receives the ARP reply, populates its local ARP table, and starts the packet-
forwarding process.
Host-To-Host Packet Delivery (Step 14 of 14)
The frame is forwarded to the destination. Note that the router changes the Layer 2
addresses in the frame as needed, but it does not change the Layer 3 addresses in the packet.
Role of a Switch in Packet Delivery (Step 1 of 4)
Typically, your network will have switches between hosts and routers. In this multipart
example, you will see what happens on a switch when a host communicates with a router.
Remember that a switch does not change the frame in any way. When a switch receives
the frame, it forwards it out of the proper port according to the MAC address table.
An application on Host A wishes to send data to a remote network. Before an IP packet
can be forwarded to the default gateway, the MAC address of the default gateway must
be obtained. ARP on Host
A creates an ARP request and sends it out as a broadcast frame. Before the ARP request
reaches other devices on a network, the switch receives it.
When the switch receives the frame, it needs to forward it out on the proper port.
However, in this example, the source MAC address is not in the MAC address table of the
switch. The switch can learn the port mapping for the source host from the source MAC
address in the frame, so the switch adds the information to the table (0800:0222:2222 =
port FastEthernet0/1).
Role of a Switch in Packet Delivery (Step 2 of 4)
Because the destination address of the frame is a broadcast, the switch has to flood the
frame out all ports, except the one on which it arrived.
All frames pass through the switch unchanged. The switch builds its MAC address table
based on the source address of received frames, and it sends all unicast frames directly to
the destination host based on the destination MAC address and port that are stored in the
MAC address table.
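A quick way to verify these MAC-to-port mappings is the show mac address-table command. The following is an illustrative sketch that reuses the MAC address and port from this example; the VLAN number and the second entry are assumptions:

Switch# show mac address-table dynamic
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
   1    0800.0222.2222    DYNAMIC     Fa0/1
   1    0800.0333.3333    DYNAMIC     Fa0/2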
• Top-down method: Work from the application layer in the Open Systems
Interconnection (OSI) model down to the physical layer. The top-down method
uses the OSI model as a guiding principle. One of the most important
characteristics of the OSI model is that each layer depends on the underlying layers
for its operation. This structure implies that if you find a layer to be operational,
you can safely assume that all underlying layers are fully operational. For example,
suppose you are researching a user who cannot browse a particular website and
find that you can establish a TCP connection on port 80 from this host to the server
and get a response from the server. In that case, you can typically conclude that
the transport layer and all layers below must be fully functional between the client
and the server. It is most likely a client or server problem and not a network
problem. Be aware that, in the example above, it is reasonable to conclude that
Layers 1 through 4 must be fully operational, but this idea is not definitively
proved. For example, unfragmented packets might be routed correctly, while
fragmented packets are dropped. The TCP connection to port 80 might not
uncover such a problem. Therefore, the goal of this method is to find the highest
OSI layer that is still working. All devices and processes that work on that layer or
the layers below it are then eliminated from the scope of your problem. It might
be clear that this method is most effective if the problem is on one of the higher
OSI layers. The top-down method is one of the most straightforward
troubleshooting methods because problems reported by users are typically
defined as application layer problems, so starting the troubleshooting process at
that layer is the obvious thing to do. A drawback or impediment to this method is
that you need to access the application layer software on the machine of the client
to initiate the troubleshooting process. If the software is installed on only a few
machines, it might be hard to test it properly.
• Bottom-up method: Work from the physical layer in the OSI model up to the
application layer. The bottom-up approach also uses the OSI model as the guiding
principle, but this time you start on the physical layer and work your way up to the
application layer. By verifying layer by layer that the network is operating correctly,
you steadily eliminate more potential problem causes and narrow the scope of the
potential problems. For example, if you are researching a user who cannot browse
a particular website, you would first verify physical connectivity. You would log in
to the switch and verify the port status. After each test or verification step, you
would move up through the layers of the OSI model. A benefit of this method is
that all the initial troubleshooting takes place on the network, so access to clients,
servers, or applications is not necessary until later in the troubleshooting process.
Also, the thoroughness and steady progress of this method will give you a
relatively high probability of eventual success or, at the very least, a decent
reduction of the problem scope. A disadvantage of this method is that, in large
networks, it can be a very time-consuming process because a lot of effort will be
spent on gathering and analyzing data. Therefore, the best use of this method is to
first reduce the problem scope by using a different strategy and then switch to
this method for clearly bounded parts of the network topology.
• Divide-and-conquer method: Start in the middle of the OSI layers (usually the
network layer) and then go up or down, depending on the results. If it is not clear
whether the top-down or the bottom-up approach would be most effective, it can
be helpful to start in the middle (typically the network layer) and run an end-to-
end test, such as a ping. If the ping succeeds, you can assume that all lower layers
are good, and you can start bottom-up troubleshooting from the network layer.
Alternatively, you can start a top-down troubleshooting process from the network
layer if the test fails. Whether the result of the initial test is positive or negative,
this method usually results in faster elimination of potential problems than what
you would achieve by implementing a full top-down or bottom-up approach,
making the divide-and-conquer method a very effective strategy.
• Follow-the-path method: Determine the path that packets follow through the
network from the source to the destination and track the packets along the path.
Tracing the path of packets through the network eliminates irrelevant links and
devices from the troubleshooting process. The objective of a troubleshooting
method is to isolate the problem by eliminating potential problem areas from the
scope of the troubleshooting process. By analyzing and verifying the path that
packets and frames take through the network as they travel from the source to the
destination, you can reduce the scope of your troubleshooting to just those links
and devices that are actually in the forwarding path.
• Swap components method: Move components physically and observe if the
problem moves with the components or not. A common way to isolate the
problem is to start swapping components such as cables, switches, switch ports, or
network interface cards (NICs) on the PC to confirm that the problem moves with
the specific component. This method allows you to isolate the problem, even if the
information you can gather is minimal, just by methodically executing simple tests.
Even if you do not solve the problem, you have scoped it to a single element, and
further troubleshooting can now be focused on that element. The drawbacks of
this method are as follows:
o You are isolating the problem to only a limited set of physical elements.
You cannot gain any real insight into what is happening because you are
gathering only very limited, indirect information.
o This method assumes that the problem is with a single component. If the
problem is with a particular combination of elements, you might not isolate
the problem correctly. Be sure to document everything that you change.
Logging to the monitor (all tty lines) shows "disabled" or, if enabled, the severity level
limit, number of messages logged, and whether XML formatting or filtering is enabled.
Internet Control Message Protocol
Internet Control Message Protocol (ICMP) is a supporting protocol in the TCP/IP protocol
suite. It is used by network devices, including routers, to send error messages and
operational information indicating, for example, that a requested service is not available
or that a host or router could not be reached. ICMP differs from transport protocols such
as TCP and UDP. It is not typically used to exchange data between systems, nor is it
regularly employed by end user network applications (except for some diagnostic tools,
such as ping and traceroute).
ICMP messages are typically used for diagnostic or control purposes or generated in
response to errors in IP operations. ICMP errors are directed to the source IP address of
the originating packet. For example, every device (such as an intermediate router)
forwarding an IPv4 datagram first decrements the Time to Live (TTL) field in the IPv4
header by one. If the resulting TTL is 0, the packet is discarded, and an ICMP time
exceeded in transit message is sent to the packet's source address.
Many commonly used network utilities are based on ICMP messages. The traceroute
command (or tracert Microsoft Windows command) can be implemented by transmitting
packets with specially set IPv4 TTL header fields and looking for ICMP time exceeded in
transit and Destination unreachable messages generated in response. The related ping
utility is implemented using the ICMP echo request and echo reply messages.
ICMP uses the basic support of IP as if it were a higher-level protocol; however, ICMP is
integral to IP. Although ICMP messages are contained within standard IP packets, ICMP
messages are usually processed as a special case, distinguished from normal IP processing.
Often, it is necessary to inspect the contents of the ICMP message and deliver the
appropriate error message to the application responsible for the transmission of the IP
packet that prompted the sending of the ICMP message.
ICMP is a network layer protocol. There is no TCP or UDP port number associated with
ICMP packets as these numbers are associated with the transport layer above.
Verification of End-To-End IPv4 Connectivity
The following are several verification tools to verify end-to-end IPv4 connectivity:
• ping: A successful ping to an IPv4 address means that the endpoints have basic
IPv4 connectivity between them.
• traceroute (or Microsoft Windows tracert): The results of traceroute to an IPv4
address can help you determine how far along the path data can successfully
reach.
• Telnet or SSH: Used to test the transport layer connectivity for any TCP port over
IPv4.
• show ip arp or show arp (or Microsoft Windows arp -a): Used to display the
mapping of IPv4 addresses to MAC addresses to verify connected devices.
• show ip interface brief (or Microsoft Windows ipconfig /all): Used to display the
IPv4 address configuration of the interfaces.
Using ping
The ping command is a very common method for troubleshooting the accessibility of
devices. It uses a series of ICMP Echo messages to determine these parameters:
The table below lists the possible output characters from the Cisco IOS ping command:
For example, after sending ICMP echo requests, if an ICMP echo reply packet is received
within the default, 2-second (configurable) timeout, an exclamation point (!) is the output,
meaning that the reply was received before the timeout expired. A period (.) is the output
if the reply was not received before the timeout expired.
The device also outputs the min/avg/max RTT in milliseconds.
Note: When pinging, processing delays can be significant because the router considers
that responding to a ping is a low-priority task.
Test the end-to-end connectivity, using the following commands:
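For example, using the addresses from the earlier topology (the router hostname is hypothetical), a basic ping test could look like this:

RouterX# ping 192.168.4.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.4.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms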
Ping with the source from the address of a specific interface, using the following
command:
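A sketch of the command, assuming the router's 192.168.3.2 interface address is used as the source:

RouterX# ping 192.168.4.2 source 192.168.3.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.4.2, timeout is 2 seconds:
Packet sent with a source address of 192.168.3.2
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms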
When a normal ping command is sent from a device, the source address of the ping is the
IPv4 address of the interface that the packet uses to exit the device. The source address
can be changed to the address of any interface on the device.
You can also perform an extended ping, and adjust parameters such as the source IPv4
address, as follows:
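Entering ping with no arguments starts the interactive extended ping dialog. The following is an illustrative session; the addresses are reused from the earlier example, and the answers shown are only one possible set of choices:

RouterX# ping
Protocol [ip]:
Target IP address: 192.168.4.2
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 192.168.3.2
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.4.2, timeout is 2 seconds:
Packet sent with a source address of 192.168.3.2
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms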
If ping fails or returns an unusual RTT, you can use the traceroute command to help
narrow down the problem. You can also vary the size of the ICMP echo payload to test
problems that are related to the MTU.
On a Microsoft Windows device, four packets are sent by default; the information
displayed is similar to the Cisco IOS output, as shown in the example:
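An illustrative Windows session (the target address is reused from the earlier example and is hypothetical):

C:\> ping 192.168.4.2

Pinging 192.168.4.2 with 32 bytes of data:
Reply from 192.168.4.2: bytes=32 time=2ms TTL=254
Reply from 192.168.4.2: bytes=32 time=1ms TTL=254
Reply from 192.168.4.2: bytes=32 time=1ms TTL=254
Reply from 192.168.4.2: bytes=32 time=2ms TTL=254

Ping statistics for 192.168.4.2:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 1ms, Maximum = 2ms, Average = 2ms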
Using traceroute (Cisco IOS) or tracert (Microsoft Windows)
Traceroute is used to test the path that packets take through the network. It sends out
either ICMP echo request messages (Microsoft Windows tracert) or UDP datagrams (most
other implementations), gradually increasing the IPv4 TTL value to probe the path by which a packet
traverses the network. The first packet with the TTL set to 1 will be discarded by the first-
hop router, which will send an ICMP "time exceeded" message sourced from its IPv4
address. The device that initiated the traceroute therefore knows the address of the first-
hop router. When the TTL is set to 2, the packets will arrive at the second router, which
will respond with an ICMP "time exceeded" message from its IPv4 address. This process
continues until the message reaches its final destination; the destination device will return
either an ICMP echo reply (Windows) or an ICMP port unreachable, indicating that the
request or message has reached its destination.
Cisco traceroute works by sending a sequence of three packets for each TTL value, with
different destination UDP ports, which allows it to report routers that have multiple,
equal-cost paths to the destination. For example, the first three packets with TTL 1 use
UDP ports 33434 (first packet), 33435 (second packet), and 33436 (third packet). The next
three UDP datagrams are sent with a TTL of 2 to destination ports 33437, 33438, and
33439.
Use the extended traceroute command to test connectivity from a specified source.
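As with extended ping, entering traceroute with no arguments starts an interactive dialog in which the source address can be set. The following is a minimal sketch using the addresses from the earlier example; the answers shown are only one possible set of choices:

RouterX# traceroute
Protocol [ip]:
Target IP address: 192.168.4.2
Source address: 192.168.3.2
Numeric display [n]:
Timeout in seconds [3]:
Probe count [3]:
Minimum Time to Live [1]:
Maximum Time to Live [30]:
Port Number [33434]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Type escape sequence to abort.
Tracing the route to 192.168.4.2

  1 192.168.4.2 1 msec 1 msec 2 msec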
The table below lists the characters that can appear in the Cisco IOS traceroute command
output.
The tracert command is a Windows implementation of traceroute (and will not work on
Cisco devices).
Using Telnet and SSH
One way to obtain information about a remote network device is to connect to it using
either the Telnet or SSH applications. Telnet and SSH are virtual terminal protocols that
are part of the TCP/IP suite. The protocols allow connections and remote console sessions
from one network device to one or more remote devices.
When you use Telnet to connect to a remote device, the default port number is used. The
default port for Telnet is 23. You can use a different port number, from 1 to 65,535, to test
if a remote device is listening to the port.
Although Telnet can be used as a troubleshooting tool to check transport layer
functionality, it should not be used in a production environment to administer network
devices. Nowadays, SSH is used as it is a secure access method.
To log on to a host that supports Telnet, use the telnet EXEC command:
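For example, to test whether a remote device (hypothetical address 192.168.4.2) is listening on TCP port 80:

RouterX# telnet 192.168.4.2 80
Trying 192.168.4.2, 80 ... Open
^C
[Connection to 192.168.4.2 closed by foreign host]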
The telnet command in the output tests if HTTP, which listens on TCP port 80, is open.
Since we get an Open response, we can assume that the remote device is reachable and
listens on TCP port 80. On a Cisco router, you can exit the established connection by
entering Ctrl-C (^C), as shown in the output. The escape sequence on a Cisco device,
Ctrl+Shift+6 followed by x, suspends the session and returns you to the local device.
To start an encrypted session with a remote networking device, use the ssh EXEC
command:
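A sketch of the command, assuming a username of admin and a hypothetical management address:

RouterX# ssh -l admin 192.168.4.1
Password: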
In laying out the troubleshooting methodology, some people start at Layer 1, looking at
potential media issues such as damage to wiring or interference from electromagnetic
sources. The category of UTP wiring is critical: lower-category cables are more sensitive to
certain sources of EMI, such as air-conditioning systems, whereas Category 5 and higher
cables provide better insulation and tighter twisting around the wiring to protect it from such sources.
Poor cable management could, for example, put a strain on Registered Jack-45 (RJ-45)
connectors, causing some cables to break.
Physical security could also be a cause of media issues. If you allow people to connect
hubs to your switches or to attach unwanted sources of traffic to the switch, traffic
patterns may change. Such a change is not necessarily related to the media or physical
layer, but collisions could increase once a hub is installed and connected to your switch.
Because this problem is related to physical connectivity, it could be categorized as a
physical layer or media issue.
When new equipment is connected to a switch and the connection operates in the half-
duplex mode, or a duplex mismatch occurs, this could lead to an excessive number of
collisions (layer 2 issue).
A collision occurs when a transmitting Ethernet station detects another signal while
transmitting a frame. A late collision is a special type of collision. If a collision occurs after
the first 512 bits (64 octets or bytes) of data are transmitted by the transmitting station,
then a late collision is said to have occurred. Most importantly, frames involved in late
collisions are not retransmitted by the network interface card; in other words, Ethernet
does not retransmit them, unlike frames involved in collisions that occur within the first
64 octets or bytes. It is left to the upper layers of the protocol stack to detect the loss of
data and retransmit.
Late collisions should never occur in a properly designed Ethernet network. Possible
causes are usually incorrect cabling or a noncompliant number of hubs in the network;
perhaps a bad network interface card could also cause late collisions. Late collisions are
typically diagnosed by using a protocol analyzer and by verifying that cabling distances
meet the physical layer requirements and limitations of Ethernet.
A symptom of excessive noise could be several cyclic redundancy check (CRC) errors, or
rather changes in the number of CRC errors not related to collisions. In other words, if the
number of collisions is constant, consistent, and does not change or have peaks, then CRC
errors could be caused by excessive noise and not related to actual collisions.
When this issue happens, cable inspection is probably the first step. You can use the
multitude of cable testers and tools available for that purpose. Poor design, such as using
something other than Category 5 cabling for Fast Ethernet (100-Mbps) networks, could be
the cause, and cable testing plus documentation could tell you how to fix this problem.
If the rate of collisions exceeds the baseline for your network, then there are other types
of solutions to the problem. There are several guidelines regarding what that baseline
should be, including that the number of collisions compared to the total number of output
packets should be less than 0.1 percent.
If collisions are a problem, the cause could be a defective or misbehaving device—for
example, a network interface card sending excessive garbage into the network. This
situation typically happens when there are circuitry or logic failures or even physical
failures on the device. This condition is typically known as jabbering and relates to
network interface cards and other devices continuously sending random or garbage data
into the network. A time-domain reflectometer (TDR) could be used to find unterminated
Ethernet cabling, reflecting signals back into the network and causing collisions.
Fiber media issues have these possible sources:
There are several ways in which light can be lost from the fiber. Some are due to
manufacturing problems (for example, microbends, macrobends, and splicing fibers that
do not have their cores centered). In contrast, others are physics problems (back
reflections or refractions) because light reflects whenever it encounters a change in the
index of refraction, which defines how much the path of light is bent or refracted when
entering a media. The index of refraction is calculated by dividing the speed of light in a
vacuum by the speed of light in another medium, in this case, optical fiber.
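As a rough worked example (the speed value for glass is an assumed, typical figure): light travels at about 3.0 x 10^8 m/s in a vacuum and roughly 2.05 x 10^8 m/s in silica glass, so the index of refraction of the fiber core would be approximately 3.0 x 10^8 / 2.05 x 10^8, or about 1.46.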
Macrobends typically occur during fiber installation.
One cause of light leaking out at a macrobend is that part of the traveling wave, called the
evanescent wave, travels inside the cladding. Around the bend, part of the evanescent
wave would have to travel faster than the speed of light in the material, which is not
possible, so this light instead radiates out of the fiber.
Bend losses can be minimized by designing a larger index difference between the core and
the cladding. Core and the cladding have different refractive indexes. The refractive index
of the core is always greater than the index of the cladding. Another approach is to
operate at the shortest possible wavelength and perform good installations.
Splices are a way to connect two fibers by fusing their ends. The best way to align the fiber
core is by using the outside diameter of the fiber as a guide. If the core is at the center of
the fiber, a good splice can be achieved. If the core is off-center, then it is impossible to
create a good splice. You would have to cut the fiber further upstream and test again.
Another possibility is that the fibers to be spliced could have dirt on their ends. Dirt can
cause many problems, particularly if the dirt intercepts some or all the light from the core.
The core for single-mode fiber (SMF) is only 9 micrometers. Splicing fiber is a highly
specialized skill in which trained technicians use fusion splicing equipment to connect two
fiber runs.
Any contamination in the fiber connection can cause the failure of the component or
failure of the whole system. Even microscopic dust particles can cause a variety of
problems for optical connections. A particle that partially or completely blocks the core
generates strong back reflections, which can cause instability in the laser system. Dust
particles trapped between two fiber faces can scratch the glass surfaces. Even if a particle
is only situated on the cladding or the edge of the endface, it can cause an air gap or
misalignment between the fiber cores, which significantly degrades the optical signal. In
addition to dust, other types of contamination, such as oil, water, and powdery coatings, must
also be cleaned off the endface. These contaminants can be more difficult to remove than
dust particles and can also cause damage to equipment if not removed.
When you clean fiber components, always complete the steps in the procedures carefully.
The goal is to eliminate any dust or contamination and provide a clean environment for
the fiber-optic connection. Remember that inspection, cleaning, and reinspection are
critical steps that must be done before you make any fiber-optic connection. When
cleaning optical connectors, the most important warning is always to turn off any laser
sources before inspecting fiber connectors, optical components, or bulkheads.
Troubleshooting Media Issues Workflow
You can use the show interfaces command to diagnose media issues.
To troubleshoot media issues when you have no connection or a bad connection between
a switch and another device, follow this process:
1. Use the show interfaces command to check the interface status. If the interface is
not operational, check the cable and connectors for damage.
2. Use the show interfaces command to check for excessive noise. If there is
excessive noise, you will see increased error counters in the output of the
command. First, find and remove the source of the noise, if possible. Verify
that the cable does not exceed the maximum cable length and check the type of
cable used. For copper cable, it is recommended that you use at least Category 5.
3. Use the show interfaces command to check for excessive collisions. If there are
collisions or late collisions, verify the duplex settings on both ends of the
connection.
A common issue with speed and duplex occurs when the duplex settings are mismatched
between two switches, between a switch and a router, or between a switch and a
workstation or server. This mismatch can occur when the speed and duplex are manually
hardcoded or when there are autonegotiation issues between the two devices.
Duplex and Speed-Related Issues
A duplex mismatch occurs when the switch operates at full duplex while the connected device
operates at half duplex. The result of a duplex mismatch is extremely slow performance,
intermittent connectivity, and loss of connection. Other possible causes of data-link errors
at full duplex are bad cables, a faulty switch port, or NIC software or hardware issues.
Here are examples of duplex-related issues:
o One end is set to full duplex, and the other is set to half duplex, resulting in a
   mismatch.
o One end is set to full duplex, and the other is set to autonegotiation:
   o If autonegotiation fails and the autonegotiating end reverts to half duplex, the
      result is a mismatch.
o One end is set to half duplex, and the other is set to autonegotiation:
   o If autonegotiation fails, the autonegotiating end reverts to half duplex.
   o Both ends then operate at half duplex, and there is no mismatch.
o Autonegotiation is set on both ends, but autonegotiation fails:
   o One end defaults to full duplex, and the other end defaults to half duplex,
      resulting in a mismatch. For example, a Gigabit Ethernet interface defaults to
      full duplex, while a 10/100 interface defaults to half duplex.
o Autonegotiation is set on both ends, but autonegotiation fails:
   o Both ends revert to half duplex, and there is no mismatch.
Here are examples of speed-related issues:
o One end is set to one speed, and the other is set to another speed, resulting in a
   mismatch.
o One end is set to a higher speed, and autonegotiation is enabled on the other end:
   o If autonegotiation fails, the autonegotiating end senses what the other end is
      using and reverts to that speed.
o Autonegotiation is set on both ends:
   o Autonegotiation fails on both ends, and they revert to their lowest speed.
   o Both ends are then set at the lowest speed, and there is no mismatch.
The IEEE 802.3ab Gigabit Ethernet standard mandates the use of autonegotiation for
speed and duplex. Although autonegotiation is not mandatory for other speeds,
practically all Fast Ethernet NICs also use autonegotiation by default. The use of
autonegotiation for speed and duplex is the current recommended practice for ports that
are connected to noncritical endpoints. However, if duplex negotiation fails for some
reason, you might have to set the speed and duplex manually on both ends. Typically, this
would mean setting the duplex mode to full duplex on both ends of the connection. You
should manually set the speed and duplex on links between networking devices and ports
connected to critical endpoints, such as servers.
The table summarizes possible speed and duplex settings for a connection between a
switch port and an end-device NIC. The table gives just a general idea about speed and
duplex misconfiguration combinations.
Troubleshooting Process for Duplex and Speed-Related Issues
A common cause of performance problems in Ethernet-based networks is a duplex or
speed mismatch between two ends of a link.
o Guidelines for duplex configuration include:
   o Point-to-point Ethernet links should always run in full-duplex mode. Half duplex
      is not common anymore; you may encounter it if hubs are used.
   o Autonegotiation of speed and duplex is recommended on ports that are
      connected to noncritical endpoints.
   o Manually set the speed and duplex on links between networking devices and on
      ports connected to critical endpoints.
o Verify duplex and speed settings on an interface.
To troubleshoot switch duplex and speed issues when you have no connection or a bad
connection between a switch and another device, use this general process:
o Use the show interfaces command to check whether there is a speed mismatch
between the switch and a device on the other side (switch, router, server, and so
on). If there is a speed mismatch, set the speed on both sides to the same value.
o Use the show interfaces command to check whether there is a duplex mismatch
between the switch and a device on the other side. It is recommended that you
use full duplex if both sides support it.
The example shows the show interfaces command output. The example highlights duplex
and speed settings for the FastEthernet0/1 interface. Based on the output of the show
interfaces command, you can find, diagnose, and correct the duplex or speed mismatch
between the switch and the device on the other side.
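An illustrative excerpt of such output (the MAC address is hypothetical and most counters are omitted), with the duplex and speed line visible:

Switch# show interfaces FastEthernet0/1
FastEthernet0/1 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 0019.0617.6e01 (bia 0019.0617.6e01)
  <... output omitted ...>
  Full-duplex, 100Mb/s, media type is 10/100BaseTX
  <... output omitted ...>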
If the mismatch occurs between two Cisco devices with Cisco Discovery Protocol enabled,
you will see Cisco Discovery Protocol error messages on the console or in the logging
buffer of both devices. Cisco Discovery Protocol is useful for detecting errors and for
gathering port and system statistics on connected Cisco devices. Whenever there is a
duplex mismatch (in this example, on the FastEthernet0/1 interface), the consoles of Cisco
switches display these error messages:
%CDP-4-DUPLEX_MISMATCH: duplex mismatch discovered on FastEthernet0/1 (not half
duplex)
Use the duplex mode command to configure duplex operation on an interface. The
following are available duplex modes:
o full: Specifies full-duplex operation.
o half: Specifies half-duplex operation.
o auto: Specifies the autonegotiation capability. The interface automatically
operates at half or full duplex, depending on environmental factors such as the
type of media and the transmission speeds for the peer routers, hubs, and
switches that are used in the network configuration.
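For example, a minimal configuration sketch that manually sets a switch port to 100 Mbps, full duplex (the interface name is illustrative):

Switch(config)# interface FastEthernet0/1
Switch(config-if)# speed 100
Switch(config-if)# duplex full
Switch(config-if)# end

Remember that the same settings must be applied to the device on the other end of the link to avoid creating a mismatch.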
Troubleshooting Physical Connectivity Issues
Often troubleshooting processes involve a component of hardware troubleshooting.
There are three main categories of issues that could cause a failure on the network:
hardware failures, software failures (bugs), and configuration errors. A fourth category
might be performance problems, but performance problems are a symptom and not the
cause of a problem.
After you have used the ping and traceroute utilities to determine that a network
connectivity problem exists and where it exists, check to see if there are physical
connectivity issues before you get involved in more complex troubleshooting. You could
spend hours troubleshooting a situation only to find that a network cable is loose or
malfunctioning.
The interfaces that the traffic passes through are always worth verifying when you are
troubleshooting performance-related issues in which you suspect hardware to be at fault.
The interfaces are usually one of the first things you verify while tracing the path between
devices.
If you have physical access to devices that you suspect are causing network problems, you
can save troubleshooting time by looking at the port LEDs. The port LEDs show the link
status and can indicate an error condition. If a link light for a port is not on, ensure that
both ends of the cable are plugged into the correct ports.
When troubleshooting small form-factor pluggable (SFP) and SFP+ modules, always check
if you are using SFP or SFP+ transceivers in the switch ports. The transceiver type should
match the physical port specification and speed; SFP+ transceivers typically operate at
10 Gbps, and the transceivers should be of the same type on both ends. You should also
check that the same wavelength is used; a transceiver using a 1310-nm laser will not
communicate with an 850-nm transceiver. You also need to verify which fiber type,
single-mode fiber (SMF) or multimode fiber (MMF), the SFP module supports and confirm
that you are using the correct cable.
You should always refer to the documentation (typically installation guide) for a specific
networking device to check the specific supported cables and modules.
The output of the show interfaces command lists important statistics that should be
checked. The first line of the output from this command tells you whether an interface is
up or down.
To verify the interface status, use the show interfaces command.
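An illustrative excerpt (the interface name and counter values are hypothetical) showing the status line and the counters discussed below:

Router# show interfaces GigabitEthernet0/0
GigabitEthernet0/0 is up, line protocol is up
  <... output omitted ...>
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  <... output omitted ...>
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
  <... output omitted ...>
     0 output errors, 0 collisions, 1 interface resets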
The output of the show interfaces command also displays the following important
statistics:
o Input queue drops: Input queue drops (and the related ignored and throttle
counters) signify the fact that at some point, more traffic was delivered to the
device than it could process. This situation does not necessarily indicate a problem
because it could be normal during traffic peaks. However, it could indicate that the
CPU cannot process packets in time. So if this number is consistently high, you
should try to determine at which moments these counters are increasing and how
this increase relates to the CPU usage.
o Output queue drops: Output queue drops indicate that packets were dropped due
to congestion on the interface. Seeing output drops is normal when the aggregate
input traffic exceeds what the output interface can transmit. During traffic peaks,
packets are dropped if traffic is delivered to the interface faster than the interface can
send it out. However, although this is considered normal behavior, it leads to
packet drops and queuing delays, so applications sensitive to packet drops and
queuing delays, such as VoIP, might suffer from performance issues. Consistent
output drops might indicate that you need to implement an advanced queuing
mechanism to provide good quality of service (QoS) to each application.
o Input errors: Input errors indicate errors experienced during the reception of the
frame, such as CRC errors. High numbers of CRC errors could indicate cabling
problems, interface hardware problems, or in an Ethernet-based network, duplex
mismatches.
o Output errors: Output errors indicate errors, such as collisions, during the
transmission of a frame. In most Ethernet-based networks, full-duplex
transmission is the norm, and half-duplex transmission is the exception. In full-duplex
operation, collisions cannot occur. Therefore, collisions,
especially late collisions, often indicate duplex mismatches.
12.6 Troubleshooting a Simple Network
Troubleshooting Common Problems Associated with IPv4 Addressing
Troubleshooting IPv4 addressing is an important skill and will prove valuable when
resolving several network issues. For example, assume that a host cannot communicate to
a server that is on a remote network.
The following are recommended troubleshooting steps to perform from the host:
1. Verify the host IPv4 address and subnet mask.
2. Ping the loopback address.
3. Ping the IPv4 address of the local interface.
4. Ping the default gateway.
5. Ping the remote server.
If the ping to the remote server fails, you may have some remote physical network
problem. Verify and correct this by going to the server and performing the same steps as
you just did from the host.
Check the Default Gateway
If the previous steps are unsuccessful, there may be an incorrect default gateway
configuration on either the host or the server, or there may be a routing issue.
You can use the traceroute utility to test the path that packets take through the network
to ensure they are going through the router.
To verify the host setting for the default gateway, use the appropriate CLI command or
check the settings in the GUI. A useful command in Windows besides ipconfig is route
print. In the example, the user host has a correct default gateway setting.
Another possible problem is an incorrect default gateway setting on the server.
Depending on the host operating system, you will need to use the proper CLI command or
check the settings in the GUI. The server should have a default gateway of 172.16.20.1.
Next, you should check the IPv4 addresses and subnet masks of the interfaces, and the
routing table, on the default gateway router. You should connect to the router and check
the status of interfaces using the show ip interface brief command. To confirm the IPv4
addresses and subnet masks, use the show running-config command. To check the
routing table, use the show ip route command and confirm that all the networks are listed
in the routing table.
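An illustrative sketch of the interface check on the default gateway router (the interface names and the 172.16.10.0/24 host subnet are assumptions; only the 172.16.20.1 server gateway comes from the example):

Router# show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0     172.16.10.1     YES manual up                    up
GigabitEthernet0/1     172.16.20.1     YES manual up                    up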
13.1 Introducing Basic IPv6
Introduction
As the global internet continues to grow, its overall architecture needs to evolve to
accommodate the new technologies that support the increasing numbers of users,
applications, appliances, and services. This evolution also includes Enterprise networks
and communication providers, which provide services to home users. IPv6 was proposed
when it became clear that the 32-bit addressing scheme of IPv4 cannot keep up with
internet growth demands. IPv6 quadruples the number of network address bits from 32
bits (in IPv4) to 128 bits. This means that the address pool for IPv6 is around 340
undecillion, or 340 trillion trillion trillion, which is an unimaginably large number.
The larger IPv6 address space allows networks to scale and provide global reachability.
The simplified IPv6 packet header format handles packets more efficiently. The IPv6
network is designed to embrace encryption and favor targeted multicast over often
problematic broadcast communication.
IPv6 as a protocol has been known for a while, but enterprises are beginning to
understand how it can help them achieve their goals, improve efficiency and gain
functionality.
As a network engineer, you will need to get familiar with IPv6, including:
o Describing IPv6 features and advantages and comparing them to IPv4.
o Configuring basic IPv6 addressing and testing IPv6 connectivity in the network.
13.2 Introducing Basic IPv6
IPv4 Address Exhaustion Workarounds
IPv4 provides approximately 4 billion unique addresses. Although 4 billion is a lot of
addresses, it is not enough to keep up with the growth of the internet.
To extend the lifetime and usefulness of IPv4 and to circumvent the address shortage,
several mechanisms were created:
o Classless interdomain routing (CIDR)
o Variable-length subnet masking (VLSM)
o Network Address Translation (NAT)
o Private IPv4 addresses space (RFC 1918)
Note: Over the years, hardware support has been added to devices to support IPv4
enhancements through Application Specific Integrated Circuits (ASICs), offloading the
processing from the equipment CPU to network hardware. This allows more simultaneous
transmission and higher bandwidth utilization.
To allocate IPv4 addresses efficiently, CIDR was developed. CIDR allows the address space
to be divided into smaller blocks, varying in size depending on the number of hosts
needed in individual blocks. These blocks are no longer associated with predefined IPv4
addresses classes, such as class A, B, and C. Instead, the allocation includes a subnet mask
or prefix length, which defines the size of the block.
VLSMs allow more efficient use of IPv4 addresses, specifically on small segments, such as
point-to-point serial links. VLSM usage was recommended in RFC 1817. CIDR and VLSM
support was a prerequisite for ISPs to improve the scalability of the routing on the
internet.
NAT introduced a model in which a device facing outward to the internet has a globally
routable IPv4 address, while the internal network is configured with private RFC 1918
addresses. These private addresses are never routed on the public internet, because the
same address ranges may be in use in many different enterprise networks. In this way, even large enterprises with
thousands of systems can hide behind a few routable public networks.
DHCP is used extensively in IPv4 networks to dynamically allocate addresses, typically
from private IPv4 addresses space (RFC 1918), then translated to public addresses using
NAT.
One of the arguments against deploying IPv6 is that NAT will solve the problems of limited
address space in IPv4. The use of NAT merely delays the exhaustion of the IPv4 address
space. Many large organizations and ISPs are moving to IPv6 because they are running out
of IPv4 private addresses, for example, as Internet of Things (IoT) devices are added to
their networks.
Negative implications of using NAT, some of which are identified in RFC 2775 and RFC
2993, include the following:
o NAT breaks the end-to-end model of IP, in which only the endpoints, not the
intermediary devices, should process the packets.
o NAT inhibits end-to-end network security. To protect the integrity of the IP header
by some cryptographic functions, the IP header cannot be changed between the
origin of the packet (to protect the integrity of the header) and the final
destination (to check the integrity of the received packet). Any translation of parts
of a header on the path will break the integrity check.
o When applications are not NAT-friendly, which means that, for a specific
application, more than just the port and address mapping are necessary to forward
the packet through the NAT device, NAT must embed complete knowledge of the
applications to perform correctly. This fact is especially true for dynamically
allocated ports, embedded IP addresses in application protocols, security
associations, and so on. Therefore, the NAT device needs to be upgraded each
time a new non-NAT-friendly application is deployed (for example, peer-to-peer).
o When different networks use the same private address space and merge or
connect, an address space collision occurs. Hosts that are different but have the
same address cannot communicate with each other. There are NAT techniques
available to help with this issue, but they increase NAT complications.
IPv6 does not support broadcast addresses in the way that they are used in IPv4. Instead,
specific multicast addresses (such as the all-nodes multicast address) are used.
IPv6 unicast addresses are assigned to each node (interface). Their uses are discussed in
RFC 4291. The unicast addresses are listed below.
Note: An IPv6 address prefix, in the format ipv6-prefix/prefix-length, can be used to
represent bitwise contiguous blocks of the entire address space. The prefix length is a
decimal value that indicates how many of the high-order contiguous bits of the address
compose the prefix. An IPv6 address network prefix is represented in the same way as the
network prefix (as in 10.1.1.0/24) in IPv4. For example, 2001:db8:8086:6502::/32 is a valid
IPv6 prefix.
IPv6 Address Scopes and Prefixes
To fully understand IPv6 addressing, it is important to have a solid understanding of IPv6
scopes and prefixes. An IPv6 address scope specifies the region of the network in which
the address is valid. For example, the link-local address has a scope that is called "link-
local," which means that it is valid and should be used on a directly attached network
(link). Scopes can apply to both unicast and multicast addresses. There are several
different scopes or regions: the link scope, site scope, organization scope, and global
network scope.
Addresses in the link scope are called link-local addresses, and routers will not forward
these addresses to other links or networks. Addresses that are valid within a single site are
called site-local addresses. Addresses intended to span multiple sites belonging to one
organization are called organization-local addresses, and addresses in the global network
scope are called global unicast addresses.
Multiple IPv6 Addresses on an Interface
As with IPv4, IPv6 addresses are assigned to interfaces; however, unlike IPv4, an IPv6
interface is expected to have multiple addresses. The IPv6 addresses that are assigned to
an interface can be any of the basic types: unicast, multicast, or anycast.
IPv6 Unicast Addresses
An IPv6 unicast address generally uses 64 bits for the network ID and 64 bits for the
interface ID. The network ID is administratively assigned, and the interface ID can be
configured manually or autoconfigured.
Note: When you use the Stateless Address AutoConfiguration (SLAAC) IPv6 address
assignment method, a 64-bit interface ID is required.
The EUI-64 format interface ID is derived from the 48-bit MAC address by inserting the
hexadecimal number fffe between the upper 3 bytes (OUI field) and the lower 3 vendor
assigned bytes of the MAC address. Then, the seventh bit of the first octet (the
universal/local bit) is inverted. (In a MAC address, this bit has a value of 0 for universally
administered, globally unique addresses and 1 for locally administered addresses. In the
modified EUI-64 format, the meaning of this bit is the opposite, so the bit is inverted.)
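For example, take the (hypothetical) MAC address 0090.2717.fc0f. Splitting it into 009027 and 17fc0f and inserting fffe in the middle gives 0090:27ff:fe17:fc0f; inverting the seventh bit of the first octet (00 becomes 02) produces the EUI-64 interface ID 0290:27ff:fe17:fc0f. Combined with a /64 prefix such as 2001:db8:1:1::/64, the resulting address would be 2001:db8:1:1:290:27ff:fe17:fc0f.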
IPv6 Global Unicast Address
Both IPv4 and IPv6 addresses are generally assigned in a hierarchical manner. ISPs assign
users IP addresses. ISPs obtain allocations of IP addresses from a local internet registry
(LIR) or National Internet Registry (NIR) or their appropriate RIR. The RIR, in turn, obtains
IP addresses from The Internet Corporation for Assigned Names and Numbers (ICANN),
the operator for IANA.
RFC 4291 specifies the 2000::/3 prefix as the global unicast address space that the IANA
may allocate to the RIRs. A global unicast address (GUA) is an IPv6 address created from
the global unicast prefix. The structure of global unicast addresses enables the
aggregation of routing prefixes, limiting the number of routing table entries in the global
routing table. Global unicast addresses that are used on links are aggregated upward
through organizations and eventually to the ISPs.
The figure shows how address space can be allocated to the RIR and ISP. These values are
minimum allocations, which means that an RIR will get a /23 or shorter, an ISP will get a
/32 or shorter, and a site will get a /48 or shorter. A shorter prefix length allows more
available address space. For example, a site could get a /40 instead of a /48, giving it more
addresses, provided it can justify the need to its ISP. The figure shows an aggregatable provider model where
the end customer obtains its IPv6 address from the ISP. The end customer can also choose
a provider-independent address space by going straight to the RIR. In this case, it is not
uncommon for an end customer to justify a /32 prefix. The example in the figure uses the
common and recommended size of the network with 64 bits used as interface ID.
Global unicast addresses are routable and reachable across the internet. They are
intended for widespread generic use. A global unicast address is structured hierarchically
to allow address aggregation. In the 2000::/3 prefix, the /3 prefix length states that only
the first 3 bits are significant in matching the prefix 2000. The first 3 bits of the first
hexadecimal value, 2, are 001. The fourth bit is insignificant and can be either a 0 or a 1.
Therefore, the first hex digit is either 2 (0010) or 3 (0011). The remaining 12 bits in the
hextet (16-bit segment) can be a 0 or a 1. As a result, the 2000::/3 block spans the global
unicast address range from 2000:: through 3fff:ffff:ffff:ffff:ffff:ffff:ffff:ffff.
A global routing prefix is assigned to a service provider by IANA. The fixed first three bits
plus the following 45 bits identify the organization's site within the public domain.
An individual organization can use a subnet ID to create its own local addressing hierarchy
and identify subnets. A subnet ID is similar to a subnet in IPv4, except that an organization
with an IPv6 subnet ID can support many more individual subnets (the actual number
depends on the global routing prefix). An organization with a 16-bit IPv6 subnet ID can
support up to 65,536 individual subnets.
The interface ID has the same meaning for all unicast addresses. It is used to identify the
interfaces on a link and must be unique to the link. The interface ID is 64 bits long and,
depending on the device operating system, can be created using the EUI-64 format or by
using a randomly generated number. An example of a global unicast address is
2001:0db8:bbbb:cccc:0987:65ff:fe01:2345.
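A minimal configuration sketch for assigning a global unicast address to a router interface (the interface name and address are illustrative and use the 2001:db8::/32 documentation prefix; ipv6 unicast-routing enables IPv6 packet forwarding on the router):

Router(config)# ipv6 unicast-routing
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ipv6 address 2001:db8:bbbb:cccc::1/64
Router(config-if)# no shutdown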
IPv6 Link-Local Unicast Address
Link-local addresses (LLAs) have a smaller scope than site-local addresses; they refer only
to a particular physical link (physical network). The concept of the link-local scope is not
new to IPv6: RFC 3927 defined the 169.254.0.0/16 block as link-local for IPv4. Routers do
not forward packets using link-local addresses, not even within the organization; they are
only for local communication on a particular physical network segment.
A link-local address is an IPv6 unicast address that is automatically configured on any
interface. This address is the first IPv6 address that will be enabled on the interface. A
device does not have to have any other address but must have a link-local address. A link-
local address consists of the link-local prefix fe80::/10 (1111 1110 10) and an interface
identifier that is created in the modified EUI-64 format or as a randomly generated value,
depending on the operating system installed on the networking device.
It is common practice to statically configure link-local addresses on the router interfaces
to make troubleshooting easier. Nodes on a local link can use link-local addresses to
communicate; the nodes do not need globally unique addresses to communicate.
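A minimal sketch of statically configuring a link-local address on a router interface (the interface name and the fe80::1 value are illustrative):

Router(config)# interface GigabitEthernet0/0
Router(config-if)# ipv6 address fe80::1 link-local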
Link-local addresses are used for link communications such as automatic address
configuration, neighbor discovery, and router discovery. Many IPv6 routing protocols also
use link-local addresses. For static routing, the address of the next-hop device should be
specified using the link-local address of the device; for dynamic routing, all IPv6 routing
protocols must exchange the link-local addresses of neighboring devices.
An example of a link-local unicast address is fe80:0000:0000:0000:0987:65ff:fe01:2345,
which is generally written in the shorthand notation fe80::987:65ff:fe01:2345.
Note: The prefix fe80::/10 for link-local addresses includes addresses beginning with fe80
through febf. In common practice, though, link-local addresses typically begin with fe80.
IPv6 Unique Local Unicast Address
Unique local unicast addresses are analogous to private IPv4 addresses in that they are
used for local communications, intersite VPNs, and so on, except for one important
difference – these addresses are not intended to be translated to a global unicast address.
They are not routable on the internet without IPv6 NAT, but they are routable inside a
limited area, such as a site. They may also be routed between a limited set of sites. A
unique local unicast address has these characteristics:
o It has a globally unique prefix—it has a high probability of uniqueness.
o It has a well-known prefix to enable easy filtering at site boundaries.
o It allows combining or privately interconnecting sites without creating any address
conflicts or requiring a renumbering of interfaces that use these prefixes.
o It is ISP-independent and can be used for communications inside a site without
having any permanent or intermittent internet connectivity.
o If it is accidentally leaked outside of a site via routing or the Domain Name System,
there is no conflict with any other addresses.
o Applications may treat unique local addresses like globally scoped addresses.
In unique local unicast addresses, global IDs are defined by the administrator of the local
domain. Subnet IDs are also defined by the administrator of the local domain. Subnet IDs
are typically defined using a hierarchical addressing plan, allowing routes to be
summarized and, therefore, reducing the size of routing updates and routing tables. An
example of a unique local unicast address is fc00:aaaa:bbbb:cccc:0987:65ff:fe01:2345.
Loopback Addresses
Just as with IPv4, a provision has been made for a special loopback IPv6 address for
testing. Packets that are sent to this address "loop back" to the sending device. However,
in IPv6, there is just one address, not a whole block, for this function. The loopback
address is 0:0:0:0:0:0:0:1, which is normally expressed as "::1."
Unspecified Addresses
In IPv4, an IPv4 address containing all zeroes has a special meaning—it refers to the host
itself and is used as a source address to indicate the absence of an address. In IPv6, this
concept has been formalized, and the all-zeros address is named the unspecified address.
It is typically used in the source field of a packet sent by a device requesting to have its
IPv6 address configured. You can apply address compression to this address. Because the
address is all zeroes, the address is simply expressed by two colons (::).
IPv6 Multicast Addresses
The following figure illustrates the format of an IPv6 multicast address. An IPv6 multicast
address defines a group of devices known as a multicast group. IPv6 multicast addresses
use the prefix ff00::/8, which is equivalent to the IPv4 multicast address 224.0.0.0/4. A
packet sent to a multicast group always has a unicast source address. A multicast address
can never be the source address. Unlike IPv4, there is no broadcast address in IPv6.
Instead, IPv6 uses multicast addresses, including the well-known all-nodes multicast
address and the solicited-node multicast address.
The first 8 bits are ff, followed by 4 bits allocated for flags and a 4-bit Scope field. The
Scope field defines the range to which routers can forward the multicast packet. The next
112 bits represent the group ID.
The first three flags bits are 0 (reserved), R (rendezvous point), and P (network prefix),
which are beyond the scope of this course. The fourth flag, the least significant bit (LSB),
or the rightmost bit, is the transient flag (T flag). The T flag denotes the two types of
multicast addresses:
o Permanent (0): These addresses, known as predefined multicast addresses, are
   assigned by IANA and include both well-known and solicited-node multicast addresses.
o Nonpermanent (1): These are "transient" or "dynamically" assigned multicast
addresses. Multicast applications assign them.
The scope bits define the scope of the multicast group. For example, a scope value 1
means interface-local scope or node-local scope, which spans only a single interface on a
node. It is used for loopback transmission of multicast. The link-local scope is defined with
the value 2. It spans the topology area of a single link. The admin-local scope is not
automatically defined from the physical topology or another non-multicast-related
configuration and should be defined by an administrator. The admin-local scope is the
smallest administratively defined multicast scope. A site-local scope spans a single site,
whereas organization-local scope spans several sites in one organization.
The following table shows a few examples of well-known IPv6 multicast addresses that
have different scopes:
IPv6 Anycast Addresses
An IPv6 anycast address is an address that can be assigned to more than one interface
(typically on different devices). In other words, multiple devices can have the same
anycast address. According to the router's routing table, a packet sent to an anycast
address is routed to the "nearest" interface having that address.
Anycast addresses are available for both IPv4 and IPv6, initially defined in RFC 1546, Host
Anycasting Service. Anycast was meant to be used for Domain Name System (DNS) and
HTTP services but was never really implemented as designed.
Anycast addresses are syntactically indistinguishable from unicast addresses because
anycast addresses are allocated from the unicast address space. Assigning a unicast
address to more than one interface makes a unicast address an anycast address. The
nodes to which the anycast address is assigned must be explicitly configured to recognize
that the address is an anycast address.
Some reserved anycast address formats, such as the subnet-router anycast address, are
defined in RFC 4291 and RFC 2526. Such an anycast address has the following format:
The subnet-router anycast address has a prefix followed by a series of zeros (as the
interface ID). For example, if the prefix for the subnet is 2001:db8:10f:1::/64 then the
subnet router anycast address for that subnet is 2001:db8:10f:1::. If you send a packet to
the subnet-router anycast address, it will be delivered to one router with an interface in
that subnet. All routers must have subnet-router anycast addresses for the subnets that
are configured on their interfaces.
Reserved Addresses
The IETF reserved a portion of the IPv6 address space for various uses, both present, and
future. Reserved addresses represent 1/256th of the total IPv6 address space. The lowest
address within each subnet prefix (the interface identifier set to all zeroes) is reserved as
the subnet-router anycast address. The 128 highest addresses within each /64 subnet
prefix are reserved for use as anycast addresses.
The IPv4 header contains 12 fields. These fields are followed by a variable-length Options
field (shown in yellow in the figure) and a Padding field, and then by the data portion,
usually the transport layer segment. The basic IPv4 header has a size of 20 octets; the
Options field increases the size of the IPv4 header.
Of the 12 IPv4 header fields, 6 are removed in IPv6; these fields are shown in green in the
figure. The main reasons for removing these fields in IPv6 are as follows:
o The Internet Header Length field (shown as HD Len in the figure) was removed
because it is no longer required. Unlike the variable-length IPv4 header, the IPv6
header is fixed at 40 octets.
o Fragmentation is processed differently in IPv6 and does not need the related fields
in the basic IPv4 header. In IPv6, routers no longer process fragmentation. IPv6
hosts are responsible for path maximum transmission unit (MTU) discovery. If the
host needs to send data that exceeds the MTU, the host is responsible for
fragmentation (this process is recommended but not required). The related Flags
field option appears in the Fragmentation Extension Header in IPv6. This header is
attached only to a packet that is fragmented.
o The Header Checksum field at the IP layer was removed because most data link
layer technologies already perform checksum and error control. This change forces
formerly optional upper-layer checksums (such as UDP) to become mandatory.
The Options field is not present in IPv6. In IPv6, a chain of extension headers processes
any additional services. Examples of extension headers include Fragmentation,
Authentication Header, and Encapsulating Security Payload (ESP).
Most other fields were either unchanged or changed only slightly.
The figure illustrates the IPv6 header format.
The IPv6 header has 40 octets instead of 20 octets, as in IPv4. The IPv6 header has fewer
fields, and the header is aligned on 64-bit boundaries to enable fast processing by current
and next-generation processors. The Source and Destination address fields are four times
larger than in IPv4.
The IPv6 header contains eight fields:
1. Version: This 4-bit field contains the number 6, instead of the number 4 as in IPv4.
2. Traffic Class: This 8-bit field is similar to the type of service (ToS) field in IPv4. The
source node uses this field to mark the priority of outbound packets.
3. Flow Label: This new field has a length of 20 bits and is used to mark individual
traffic flows with unique values. Routers are expected to apply an identical quality
of service (QoS) treatment to each packet in a flow.
4. Payload Length: This field is like the Total Length field for IPv4, but because the
IPv6 base header is a fixed size, this field describes the length of the payload only,
not of the entire packet.
5. Next Header: The value of this field determines the type of information that
follows the basic IPv6 header.
6. Hop Limit: This field specifies the maximum number of hops that an IPv6 packet
can take. The initial hop limit value is set by the sending operating system (64 or 128
is common, but the value depends on the operating system). Each IPv6 router decrements the hop
limit field along the path to the destination. An IPv6 packet is dropped when the
hop limit field reaches 0. The hop limit is designed to prevent packets from
circulating forever if there is a routing error. In normal routing, this limit should
never be reached.
7. Source Address: This field of 16 octets, or 128 bits, identifies the source of the
packet.
8. Destination Address: This field of 16 octets, or 128 bits, identifies the destination
of the packet.
The extension headers, if there are any, follow these eight fields. The number of extension
headers is not fixed, so the total length of the extension header chain is variable.
To further explore IPv6 header fields and their functions, see RFC 8200, Internet Protocol,
Version 6 (IPv6) Specification.
Connecting IPv6 and IPv4 Networks
Devices running different protocols - IPv4 and IPv6 - cannot communicate unless some
translation mechanism is implemented.
Three main options are available for transitioning to IPv6 from the existing IPv4 network
infrastructure: dual-stack network, tunneling, and translation. It is important to note that
the IPv4 and IPv6 devices cannot communicate with each other unless the translation is
configured.
In a dual-stack network, IPv4 and IPv6 are fully deployed across the infrastructure, so
configuration and routing protocols handle IPv4 and IPv6 addressing and adjacencies
separately.
Using the tunneling option, organizations build an overlay network that tunnels one
protocol over the other by encapsulating IPv6 packets within IPv4 packets over the IPv4
network, and IPv4 packets within IPv6 packets over the IPv6 network.
Translation facilitates communication between IPv6-only and IPv4-only hosts and
networks by performing IP header and address translation between the two address
families.
13.6 Introducing Basic IPv6
Internet Control Message Protocol Version 6
Internet Control Message Protocol Version 6 (ICMPv6) provides the same diagnostic
services as Internet Control Message Protocol Version 4 (ICMPv4), and it extends the
functionality for some specific IPv6 functions that did not exist in IPv4.
ICMPv6 enables nodes to perform diagnostic tests and report problems. Like ICMPv4,
ICMPv6 implements two kinds of messages—error messages (such as Destination
Unreachable, Packet Too Big, or Time Exceeded) and informational messages (such as
Echo Request and Echo Reply).
The ICMPv6 packet is identified as 58 in the Next Header field. Inside the ICMPv6 packet,
the Type field identifies the type of ICMP message. The Code field further details the
specifics of this type of message. The Data field contains information that is sent to the
receiver for diagnostics or information purposes.
ICMPv6 is used on-link for router solicitation and advertisement, for neighbor solicitation
and advertisement, and for the redirection of nodes to the best gateway.
Neighbor solicitation messages are sent on the local link when a node wants to determine
the data link layer address of another node on the same local link. After receiving the
neighbor solicitation message, the destination node replies by sending a neighbor
advertisement message. This message includes the data link layer address of the node
sending the neighbor advertisement message. Hosts send router solicitation messages to
locate the routers on the local link, and routers respond with router advertisements, which
enable autoconfiguration of the hosts.
The source node creates a solicited-node multicast address using the right-most 24 bits of
the IPv6 address of the destination node, and sends a Neighbor Solicitation message to
this multicast address. The corresponding node responds with its data link layer address in
a Neighbor Advertisement message.
Multicast Mapping over Ethernet
A packet destined to a solicited-node multicast address is put in a frame destined to an
associated multicast MAC address.
If an IPv6 address is known, then the associated IPv6 solicited-node multicast address is
known. The example in the figure gives the IPv6 address
2001:db8:1001:f:2c0:10ff:fe17:fc0f. The associated solicited-node multicast address is
ff02::1:ff17:fc0f.
If an IPv6 solicited-node multicast address is known, then the associated MAC address is
known; it is formed by concatenating the last 32 bits of the IPv6 solicited-node multicast
address to the prefix 33:33.
As the figure shows, the IPv6 solicited-node multicast address is ff02::1:ff17:fc0f. The
associated Ethernet MAC address is 33.33.ff.17.fc.0f.
Understand that the resulting MAC address is a virtual MAC address: It is not burned into
any Ethernet card. Depending on the IPv6 unicast address, which determines the IPv6
solicited-node multicast address, an Ethernet card may be instructed to listen to any of
the 2^24 possible virtual MAC addresses that begin with 33.33.ff. In IPv6, Ethernet cards
often listen to multiple virtual multicast MAC addresses and their own burned-in unicast
MAC addresses.
A solicited-node multicast is more efficient than an Ethernet broadcast used by IPv4 ARP.
With ARP, all nodes receive and must therefore process the broadcast requests. By using
IPv6 solicited-node multicast addresses, fewer devices receive the request. Therefore,
fewer frames need to be passed to an upper layer to determine whether they are
intended for that specific host.
Note: Static assignment using an EUI-64 interface ID is used in Cisco IOS Software but not
in all operating systems. For example, Windows operating systems take advantage of
additional privacy extensions defined in RFC 4941, which allow the interface identifier of
an IPv6 address to be generated randomly.
o Stateless Address Autoconfiguration (SLAAC): As the name implies,
autoconfiguration is a mechanism that automatically configures the IPv6 address
of a node. With SLAAC, the client assigns its own address based on the prefix being
advertised on its connected interface. As defined in RFC 4862, the
autoconfiguration process includes generating a link-local address, generating
global addresses through SLAAC, and the duplicate address detection procedure to
verify the uniqueness of the addresses on a link. Some clients may choose to use
EUI-64 or a randomized value for the Interface ID. SLAAC uses neighbor discovery
mechanisms to find routers and dynamically assign IPv6 addresses based on the
prefix advertised by the routers. The autoconfiguration mechanism was introduced
to enable plug-and-play networking of devices to help reduce administration
overhead.
o Stateful DHCPv6: DHCP for IPv6 enables DHCP servers to pass configuration
parameters, such as IPv6 network addresses, to IPv6 nodes. It offers the capability
of automatic allocation of reusable network addresses and additional
configuration flexibility. Stateful DHCP means that the DHCP server is responsible
for assigning the IPv6 address to the client. The DHCP server keeps a record of all
clients and the IPv6 address assigned to them.
o Stateless DHCPv6: Stateless DHCP works in combination with SLAAC. The device
gets its IPv6 address and default gateway using SLAAC. The device then sends a
query to a DHCPv6 server for other information such as domain names, DNS
servers, and other client-relevant information. This is termed stateless DHCPv6 because
the server does not track IPv6 address bindings per client. (A configuration sketch for
these assignment methods follows this list.)
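The following is a minimal Cisco IOS sketch of how a router interface can support the address assignment methods described above. The interface name and the prefix are hypothetical; the ipv6 nd flags simply tell hosts which method to use.

ipv6 unicast-routing
interface GigabitEthernet0/0
 ipv6 address 2001:db8:1:1::1/64
! With only the prefix advertised, hosts on this link can use SLAAC.
! For stateless DHCPv6 (SLAAC address plus DHCPv6 options), also set:
 ipv6 nd other-config-flag
! For stateful DHCPv6 (addresses assigned by a DHCPv6 server), set instead:
 ipv6 nd managed-config-flag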
DNS supports record types for IPv6, which are used in the name-to-address lookup
process, and IPv6 addresses can also be reverse-mapped to DNS names in the
address-to-name lookup process. The Dynamic DNS Support for Cisco IOS Software
feature enables Cisco IOS Software devices to perform
Dynamic Domain Name System (DDNS) updates to ensure that an IPv6 host DNS name is
correctly associated with its IPv6 address.
Router Advertisements
Routers periodically send router advertisements on all their configured interfaces. The
router sends a router advertisement to the all-nodes multicast address, ff02::1, to all IPv6
nodes in the same link.
This figure depicts the router advertisements sent by the router.
Router advertisement packet features include the following:
o ICMP type: 134
o Source: Router link-local address
o Destination: ff02::1 (all-nodes multicast address)
o Data: Options, prefix, lifetime, autoconfiguration flag
Note: Hosts receive the default gateway only through router advertisements; the concept
of DHCP in IPv6 has changed from IPv4, and the DHCP server no longer supplies the
default gateway.
Here are examples of the information that the message might contain:
o Prefixes that can be used on the link: This information enables stateless
autoconfiguration of the hosts. These prefixes must be /64 for stateless
autoconfiguration.
o Lifetime of the prefixes: The default valid lifetime is thirty days, and the default
preferred lifetime is seven days.
o Flags: Flags indicate the kind of autoconfiguration that the hosts can perform.
Unlike IPv4, the router advertisement message suggests to the host how to obtain
its addressing dynamically. There are three options:
o SLAAC
o SLAAC and stateless DHCPv6
o Stateful DHCPv6
o Default router preference field: Provides a coarse preference metric (low, medium, or high)
for default devices. For example, two devices on a link may provide equivalent but
not equal-cost routing, and the policy may dictate that one of the devices is
preferred.
o Other types of information for hosts: This information can include the default
MTU and hop count.
By sending prefixes, router advertisements allow host autoconfiguration. You can
configure other advertisement timing and other parameters on routers.
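For illustration, router advertisement timing and prefix parameters can be adjusted per interface in Cisco IOS. The interface, prefix, and timer values below are hypothetical:

interface GigabitEthernet0/0
! Send router advertisements every 30 seconds instead of the default
 ipv6 nd ra interval 30
! Advertise this prefix with explicit valid and preferred lifetimes (in seconds)
 ipv6 nd prefix 2001:db8:1:1::/64 2592000 604800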
Router Solicitation
A router sends router advertisements every 200 seconds or immediately after a router
solicitation. Router solicitations ask routers that are connected to the local link to send an
immediate router advertisement so that the host can receive the autoconfiguration
information without waiting for the next scheduled router advertisement.
You can use the ping utility to test end-to-end IPv6 connectivity by providing the IPv6
address as the destination address. The utility recognizes the IPv6 address when one is
provided and uses IPv6 as a protocol to test connectivity.
Traceroute is a utility that allows observation of the path between two hosts and supports
IPv6. Use the traceroute Cisco IOS command or tracert Windows command, followed by
the IPv6 destination address, to observe the path between two hosts. The trace generates
a list of IPv6 hops that are successfully reached along the path. This list provides important
verification and troubleshooting information.
The tracert utility on the Windows PC allows you to observe the IPv6 path:
You can also use the traceroute utility on the router to observe the IPv6 path:
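As a simple illustration (the destination address is hypothetical), the same tools accept an IPv6 address directly:

C:\> ping 2001:db8:a:b::10
C:\> tracert 2001:db8:a:b::10
Router# traceroute 2001:db8:a:b::10

The first two commands are entered on a Windows PC; the third is the equivalent Cisco IOS command on a router.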
Similar to IPv4, you can use Telnet to test end-to-end transport layer connectivity over
IPv6 using the Telnet command from a PC, router, or a switch. When you provide the IPv6
destination address, the protocol stack determines that the IPv6 protocol has to be used.
If you omit the port number, the client will connect to port 23. You can specify a specific
port number on the client and connect to any TCP port that you want to test.
Although Telnet can be used as a troubleshooting tool to check transport layer
functionality, it should not be used in a production environment to administer network
devices. Nowadays, a secure access method is used for that purpose using Secure Shell
protocol (SSH).
You can use the telnet command to test the transport layer connectivity for any TCP port
over IPv6.
Use Telnet to connect to the standard Telnet TCP port from a Windows PC.
Use Telnet to connect to the TCP port 80, which tests the availability of the HTTP service.
In the example, you can see two connections from a PC to the Server. The first one
connects to port 23 and tests Telnet over IPv6. The second connects to port 80 and tests
HTTP over IPv6.
The telnet command in the output tests if HTTP, which listens on TCP port 80, is open.
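A sketch of the two connections from a Windows PC follows; the server address is hypothetical:

C:\> telnet 2001:db8:a:b::100
C:\> telnet 2001:db8:a:b::100 80

The first command omits the port number, so the client connects to port 23 (Telnet). The second explicitly tests TCP port 80 (HTTP) over IPv6.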
The telnet command can also be used from a Cisco router. In this case, to exit the
established connection, you must enter an escape hotkey. The hotkey that closes the
connection on a Cisco device is Ctrl+Shift+6 followed by x.
When troubleshooting end-to-end connectivity, verifying mappings between destination
IP addresses and MAC addresses on individual segments is useful. In IPv4, ARP provides
this functionality. In IPv6, the neighbor discovery process and ICMPv6 replace the ARP
functionality. The neighbor discovery table caches IPv6 addresses and their resolved MAC
addresses. As shown in the figure, the netsh interface ipv6 show neighbors Windows
command lists all devices that are currently in the IPv6 neighbor discovery table cache.
The information that is displayed for each device includes the IPv6 address, physical
(MAC) address, and the neighbor cache state, similar to an ARP table in IPv4. By examining
the neighbor discovery table, you can verify that the destination IPv6 addresses map to
the correct Ethernet addresses.
Neighbor discovery table on a PC
The figure also shows an example of the neighbor discovery table on the Cisco IOS router,
using the show ipv6 neighbors command. The table includes the IPv6 address of the
neighbor, age in minutes, the MAC address, the state, and the interface through which the
neighbor is reachable. The states are explained in the table:
You can use other commands to verify that IPv6 is configured correctly on Cisco routers.
o Verify that IPv6 routing has been enabled on the router. In the show running-
config command output, look for the ipv6 unicast-routing command.
o Verify that the interfaces have been configured with the correct IPv6 addresses.
You can use the show ipv6 interface command to display the statuses and
configurations for all IPv6 interfaces, as shown in the example that follows.
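For example, a quick verification on a Cisco router could use the following commands (the interface name is hypothetical):

Router# show running-config | include ipv6 unicast-routing
Router# show ipv6 interface brief
Router# show ipv6 interface GigabitEthernet0/0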
As a network engineer, you will encounter various challenges concerning static routes:
o Explaining the difference between static and dynamic routing.
o Configuring and verifying both static and default static routes.
o Fixing problems with any of the static or static default routes configured on the
routers.
When configuring a static route, follow these steps as illustrated in the example in the
figure for router A:
o Specify an IPv4 destination network (172.16.1.0 255.255.255.0).
o Use the IPv4 address of the next-hop router (172.16.2.1).
o Or, use the outbound interface of the local router (Serial0/0/0).
Note: Using an egress interface in a static route declares that the destination network is
“directly connected” to that interface. This works reliably only on point-to-point links,
such as serial interfaces running High-Level Data Link Control (HDLC) or PPP. On the other
hand, when the egress interface used in the static route is a multiaccess interface such as
Ethernet (or a serial interface running Frame Relay or Asynchronous Transfer Mode
(ATM)), the result is likely to be complicated and possibly disastrous. It is highly
recommended to configure static routes using the next-hop IPv4 address. Static routes
defined using only egress interfaces might cause uncertainty or unpredictable behavior in
the network and should not be used unless absolutely necessary.
Static route pointing to the next-hop IPv4 address
In the figure, router A is configured with a static route to reach the 172.16.1.0/24 subnet
via the next hop IPv4 address 172.16.2.1 using the ip route command.
Alternatively, you can configure the static route by pointing to the exit interface instead of
using the next-hop IPv4 address.
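Based on the addressing in the figure, the two forms of the command on router A would look similar to this sketch:

RouterA(config)# ip route 172.16.1.0 255.255.255.0 172.16.2.1
! or, pointing to the exit interface instead of the next hop:
RouterA(config)# ip route 172.16.1.0 255.255.255.0 Serial0/0/0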
The table lists the ip route command parameters for this example.
In the figure, you would also need to configure router B with a static or default route to
reach the networks behind router A via the serial interface of router B.
Note: A static route is configured for connectivity to remote networks that are not directly
connected to your router. For end-to-end connectivity, you must configure a static route
in both directions.
A host route is a static route for a single host. A host route has a subnet mask of
255.255.255.255.
A floating static route is a static route with administrative distance greater than 1. By
default, static routes have a very low administrative distance of 1, which means that your
router will prefer a static route over any routes that were learned through a dynamic
routing protocol. If you want to use a static route as a backup route (so called floating
static route), you will have to change its administrative distance.
To change the administrative distance of a static route, add the administrative distance
parameter to the end of the ip route command. For example, to change the
administrative distance to 10, append the number 10 to the end of the command.
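For example, reusing the route to 172.16.1.0/24 from the earlier figure, a floating static route with an administrative distance of 10 would look like this:

RouterA(config)# ip route 172.16.1.0 255.255.255.0 172.16.2.1 10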
A default static route is a route that matches the destination address of all packets that
don’t match any other more specific routes in the routing table. Default static routes are
used in these instances:
o When no other routes in the routing table match the destination IP address of the
packet or when a more specific match does not exist. A common use for a default
static route is to connect the edge router of a company to an ISP network.
o When a router has only one other router to which it is connected. This condition is
known as a stub router.
The syntax for a default static route is like the one that is used for any other static route,
except that the network address is 0.0.0.0 and the subnet mask is 0.0.0.0. The 0.0.0.0
network address and 0.0.0.0 subnet mask are called a quad-zero route.
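A sketch of the two forms on router B follows; the next-hop address and the exit interface are hypothetical because they depend on the addressing in the figure:

RouterB(config)# ip route 0.0.0.0 0.0.0.0 172.16.2.2
! or, using the exit interface:
RouterB(config)# ip route 0.0.0.0 0.0.0.0 Serial0/0/0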
In the figure, router B is configured to forward to router A all packets for which there is no
route for the destination network in the router B routing table.
This table lists the ip route command parameters for this example.
The corresponding routing table entry of the static route in the routing table of router B is:
Note that the entry in the routing table no longer refers to the next-hop IPv4 address but
refers directly to the exit interface. This exit interface is the same one to which the static
route was resolved when it used the next-hop IPv4 address. Now, when the routing table
process matches a packet against this static route, it can resolve the route to an exit
interface in a single lookup.
Note: The static route displays the route as directly connected. It is important to
understand that this does not mean that this route is a directly connected network or a
directly connected route. This route is still a static route with the “S” code.
Verifying Default Route Configuration
To verify the default route configuration, examine the routing table on router B:
The example in the figure shows the router B routing table after configuration of the
default route.
The asterisk (*) indicates that the route is a candidate default route.
The first static route uses a link-local next hop address, specified with the fe80 prefix.
When using a link-local address as the next hop, you must also use an exit interface
because this link-local address could be used on any interface. The second static route
points to the next hop global IPv6 address 2001:0db8:feed::1.
Note: In an IPv6 address the alphanumeric characters used in hexadecimal format are not
case sensitive; therefore, uppercase and lowercase characters are equivalent. Although
Cisco IOS accepts both lowercase and uppercase representation of an IPv6 address, RFC
5952 recommends that IPv6 addresses be represented in lowercase to ensure
compatibility with case-sensitive applications.
IPv6 Static Route Configuration Example
Consider the next example to understand IPv6 static route configuration.
In this example, an IPv6 static network route is configured on the HQ router, pointing to
the Branch router in order to reach the Branch router’s LAN. An IPv6 default route is
configured on the Branch router, pointing to the HQ router in order to reach all other
networks:
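A sketch of such a configuration follows; the prefixes and next-hop addresses are hypothetical:

! On HQ: a static route to the Branch LAN via the Branch router
HQ(config)# ipv6 unicast-routing
HQ(config)# ipv6 route 2001:db8:b:1::/64 2001:db8:ab::2

! On Branch: a default route pointing to HQ for all other networks
Branch(config)# ipv6 unicast-routing
Branch(config)# ipv6 route ::/0 2001:db8:ab::1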
The table shows IPv6 static and default route commands:
You can also verify that the default IPv6 route on the Branch router is working by issuing
the ping command to the server:
15.1 Implementing VLANs and Trunks
Introduction
If an enterprise campus network is poorly designed, with a large number of devices in the
same LAN segment, the poor design will typically degrade network performance because
of a large broadcast and failure domain, limited security control, and so on. While a router
could be used to solve the issue because it blocks broadcasts, routers are typically slower
and more expensive, and they often do not fit the design of an enterprise campus network.
A common solution is VLANs, which segment a network on a per-port basis and can span
multiple switches. This allows you to logically segment a switched network on an
organizational basis by functions, project teams, or applications rather than on a physical
or geographical basis. For example, all workstations and servers used by a particular
workgroup team can be connected to the same VLAN, regardless of their physical
connections to the network or the fact that they might be intermingled with other teams.
Reconfiguration of the network can be done through software rather than by physically
unplugging and moving devices or wires.
In enterprise environments, switches often use links that carry data from multiple VLANs
and allow VLANs to be extended across an entire network. These links are called trunks.
As a networking engineer, you need to gain skills in the area of VLANs, such as:
o Identifying the common issues in a poorly designed local network.
o Familiarizing yourself with the operation of VLANs.
o Implementing correct steps to implement and verify VLANs and trunks.
15.2 Implementing VLANs and Trunks
VLAN Introduction
To understand VLANs, you need a solid understanding of LANs. A LAN is a group of devices
that share a common broadcast domain. When a device on the LAN sends broadcast
messages, the switch floods the broadcast messages (as well as unknown unicast) to all
ports except the incoming port. Therefore, all other devices on the LAN receive them. You
can think of a LAN and a broadcast domain as being basically the same thing. Without
VLANs, a switch considers all its interfaces to be in the same broadcast domain. In other
words, all connected devices are in the same LAN. With VLANs, a switch can put some
interfaces into one broadcast domain and some into another. The individual broadcast
domains that are created by the switch are called VLANs. A VLAN is a group of devices on
one or more LANs that are configured to communicate as if they were attached to the
same wire, when in fact they are located on a number of different LAN segments.
A VLAN allows a network administrator to create logical groups of network devices. These
devices act like they are in their own independent network, even if they share a common
infrastructure with other VLANs. Each VLAN is a separate Layer 2 broadcast domain which
is usually mapped to a unique IP subnet (Layer 3 broadcast domain). A VLAN can exist on a
single switch or span multiple switches. VLANs can include devices in a single building as
illustrated in the figure or multiple-building infrastructures.
Within the switched internetwork, VLANs provide segmentation and organizational
flexibility. You can design a VLAN structure that lets you group devices that are segmented
logically by functions, project teams, and applications without regard to the physical
location of the users. VLANs allow you to implement access and security policies for
particular groups of users. If a switch port is operating as an access port, it can be assigned
to only one VLAN, which adds a layer of security. Multiple ports can be assigned to each
VLAN. Ports in the same VLAN share broadcasts. Ports in different VLANs do not share
broadcasts. Containing broadcasts within a VLAN improves the overall performance of the
network.
If you want to carry traffic for multiple VLANs across multiple switches, you need a trunk
to connect each pair of switches. VLANs can also connect across WANs. It is important to
know that traffic cannot pass directly to another VLAN (between broadcast domains)
within the switch or between two switches. To interconnect two different VLANs, you
must use routers or Layer 3 switches. The process of forwarding network traffic from one
VLAN to another VLAN using a router is called inter-VLAN routing. Routers perform inter-
VLAN routing by either having a separate router interface for each VLAN, or by using a
trunk to carry traffic for all VLANs. The devices on the VLAN send traffic through the
router to reach other VLANs.
Usually, subnet numbers are chosen to reflect which VLANs they are associated with. The
figure shows that VLAN 2 uses subnet 10.0.2.0/24, VLAN 3 uses 10.0.3.0/24, and VLAN 4
uses 10.0.4.0/24. In this example, the third octet clearly identifies the VLAN that the
device belongs to. The VLAN design must take into consideration the implementation of a
hierarchical, network-addressing scheme.
Cisco Catalyst Series switches have a factory default configuration in which various default
VLANs are preconfigured to support various media and protocol types. The default
Ethernet VLAN is VLAN 1, which contains all ports by default.
If you want to communicate with the Cisco Catalyst switch for management purposes
from a remote client that is on a different VLAN, which means it is on a different subnet,
then the switch must have an IP address and default-gateway configured. This IP address
must be in the management VLAN, which is by default VLAN 1.
To add a VLAN to the VLAN database, use the vlan global configuration command by
entering a VID.
The following table lists the VLAN ranges on Cisco Catalyst switches:
VLANs 1 and 1002–1005 are automatically created by the switch while the others have to
be created manually.
VLAN Trunking Protocol (VTP) is a Cisco proprietary Layer 2 messaging protocol that
maintains VLAN configuration consistency by managing the addition, deletion, and
renaming of VLANs on a networkwide basis. It reduces administration overhead in a
switched network. The switch supports VLANs in VTP client, server, and transparent
modes.
The configurations of VIDs 1 to 1005 are always saved in the VLAN database (vlan.dat file),
which is stored in flash memory. If the VTP mode is transparent, they are also stored in
the switch running configuration file, and you can save the configuration in the startup
configuration file.
In VTP versions 1 and 2, the switch must be in VTP transparent mode when you create
extended VLANs (VIDs 1006 to 4094). These VLANs are not stored in the VLAN database
but because VTP mode is transparent, they are stored in the switch running (and if saved
in startup) configuration file. However, extended-range VLANs created in VTP version 3
are stored in the VLAN database, and can be propagated by VTP. Thus, VTP version 3
supports extended VLANs creation and modification in server and transparent modes.
To create an Ethernet VLAN, you must specify at least a VLAN number. If you do not enter
a name for the VLAN, the default name is the word VLAN followed by the four-digit VLAN number.
For example, VLAN0004 would be the default name for VLAN 4 if you don't specify a
name.
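For example, creating VLAN 2 and naming it data (the VLAN used later in this topic) looks like this:

Switch(config)# vlan 2
Switch(config-vlan)# name data
Switch(config-vlan)# exit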
Note: On some switches you must create the VLAN before assigning it to a port, or else no
traffic will flow.
The table lists the commands to use when assigning a port to a VLAN.
The following example shows how you use the interface range global configuration
command to enable FastEthernet interfaces 0/1 to 0/3 and assign them to VLAN 2:
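The original listing is not reproduced here; a configuration that matches the description would look like this:

Switch(config)# interface range FastEthernet0/1 - 3
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 2
Switch(config-if-range)# no shutdown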
The following example shows how you use the default interface global configuration
command to set the interface to factory defaults:
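For instance, returning FastEthernet0/1 to its factory defaults:

Switch(config)# default interface FastEthernet0/1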
The table lists the commands to use when configuring a range of interfaces as well as to
set the interface to factory defaults.
When an IP phone is connected to a switch port, this port should have a voice VLAN
associated with it. This process is done by assigning a single voice VLAN to the switch port
to which the phone is connected.
You can configure a data and voice VLAN on the same interface, as shown in this example:
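A sketch consistent with the verification output shown later (data VLAN 2 and voice VLAN 3 on FastEthernet0/2) would be:

Switch(config)# interface FastEthernet0/2
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 2
Switch(config-if)# switchport voice vlan 3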
Verifying VLANs
After you configure a VLAN, you should validate the parameters for that VLAN.
Use the show vlan command to display information on all configured VLANs. The
command displays configured VLANs, their names, and the ports on the switch that are
assigned to each VLAN. You can observe in the output all information about the VLANs.
To display information on all configured VLANs:
The example shows that VLAN 2 (data) and VLAN 3 (telephony) are created on the switch.
Both are active and are assigned to the FastEthernet0/2. All other interfaces are assigned
to the default VLAN—VLAN 1. Trunk ports that are connected to another device do not
appear in the output of the show vlan command.
Use the show vlan id vlan_number or show vlan name vlan-name command to display
information about a particular VLAN. The example shows the output of the show vlan
command for the "data" VLAN, which is VLAN 2.
On the other hand, you can use the show vlan brief command, which displays one line for
each VLAN with the VLAN name, status, and its ports. Connected trunk ports also do not
appear in the output of the show vlan brief command.
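In summary, the VLAN verification commands used in this topic are the following (the VLAN number and name follow the earlier example):

Switch# show vlan
Switch# show vlan brief
Switch# show vlan id 2
Switch# show vlan name data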
The best practice is to disable autonegotiation and not use the dynamic auto and
dynamic desirable switch port modes. Instead, manually configure
the port mode as trunk on both sides. If you do not want the switch to negotiate at all, use
the switchport nonegotiate command (necessary only for trunk ports, as the static access
ports do not send DTP packets automatically.)
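A sketch of this best practice on an interswitch link follows; the interface name is hypothetical:

Switch(config)# interface GigabitEthernet0/1
! Needed only on platforms that support more than one trunk encapsulation
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport nonegotiate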
To verify the VLAN configuration of an interface, as well as the administrative and
operational mode, use show interfaces interface-id switchport command.
You can also use the show mac address-table command to verify which MAC addresses
belong to which port and VLAN. You can also see the MAC addresses that have been
learned on a particular VLAN with the show mac address-table vlan vlan-id command.
Note: If the MAC address has not yet been learned on a particular VLAN and port, then
you will see no entry in the MAC address table. Also remember that if the MAC address
remains inactive for a specified number of seconds, it is removed from the MAC address
table. The default aging time is 300 seconds.
Each port on a switch belongs to a VLAN. If the VLAN to which the port belongs is deleted,
the port becomes inactive. Also, a port becomes inactive if it is assigned to a nonexistent
VLAN. All inactive ports are unable to communicate with the rest of the network.
As shown in the following example, you can use the show interface interface switchport
command to check whether the port is inactive. If the port is inactive, it will not be
functional until you create the missing VLAN using the vlan vlan_id command or until you
assign the port to a valid VLAN.
15.5 Implementing VLANs and Trunks
Trunking with 802.1Q
Without trunking, running many VLANs between switches would require the same
number of interconnecting links.
If every port belongs to one VLAN and you have several VLANs that are configured on
switches, then interconnecting them requires one physical cable per VLAN. When the
number of VLANs increases, the number of required interconnecting links also increases.
Ports are then used for interswitch connectivity instead of attaching end devices.
Instead, you can use one connection configured as a trunk:
Characteristics of Trunking with 802.1Q include the following:
o Combining many VLANs on the same port is called trunking.
o A trunk allows the transport of frames from different VLANs.
o Each frame has a tag that specifies the VLAN that it belongs to.
o The receiving device forwards the frames to the corresponding VLAN based on the
tag information.
A trunk is a point-to-point link between two network devices, such as switches, routers,
or servers. Ethernet trunks carry the traffic of multiple VLANs over a single link and allow
you to extend the VLANs across an entire network. A trunk does not belong to a specific
VLAN. Rather, it is a conduit for VLANs between devices. By default, all configured VLANs
are carried over a trunk interface on a Cisco Catalyst switch.
Note: A trunk could also be used between a network device and a server or another
device that is equipped with an appropriate trunk-capable network interface card (NIC).
VLAN Tagging
If your network includes VLANs that span multiple interconnected switches, the switches
must use VLAN trunking on the connections between them. Switches use a process called
VLAN tagging in which the sending switch adds another header to the frame before
sending it over the trunk. This extra header is called a tag and includes a VID field so that
the sending switch can list the VLAN ID and the receiving switch can identify the VLAN that
each frame belongs to, as illustrated in the figure.
Trunking allows switches to pass frames from multiple VLANs over a single physical
connection. For example, the figure shows Switch 1 receiving a broadcast frame on the
Fa0/1 interface, which is a member of VLAN 1. In a broadcast, the frame must be
forwarded to all ports in VLAN 1. Because there are ports on Switch 2 that are members of
VLAN 1, the frame must be forwarded to Switch 2. Before forwarding the frame, Switch 1
adds a header that identifies the frame as belonging to VLAN 1. This header tells Switch 2
that the frame should be forwarded to the VLAN 1 ports. Switch 2 removes the header
and then forwards the frame to all ports that are part of VLAN 1.
As another example, the device on the Switch 1 Fa0/5 interface sends a broadcast. Switch
1 sends the broadcast out of port Fa0/6 (because this port is in VLAN 2) and out Fa0/23
(because it is a trunk, meaning that it supports multiple VLANs). Switch 1 adds a trunking
header to the frame, listing a VLAN ID of 2. Switch 2 strips off the trunking header, and
because the frame is part of VLAN 2, Switch 2 knows to forward the frame out of only
ports Fa0/5 and Fa0/6 and not ports Fa0/1 and Fa0/2.
IEEE 802.1Q
Cisco Catalyst switches support the IEEE 802.1Q trunking protocol.
When a switch puts an Ethernet frame on a trunk, it needs to add a VLAN tag with
information about the VLAN to which the frame belongs. The switch does so by using the
802.1Q encapsulation header. IEEE 802.1Q uses an internal tagging mechanism that
inserts an extra 4-byte tag field into the original Ethernet frame between the Source
Address and Type or Length fields. As a result, the frame still has the original source and
destination MAC addresses. Also, because the original header has been expanded, 802.1Q
encapsulation forces a recalculation of the original frame check sequence (FCS) field in the
Ethernet trailer, because the FCS is based on the content of the entire frame. It is the
responsibility of the receiving Ethernet switch to look at the 4-byte tag field and
determine where to deliver the frame.
The figure shows the 802.1Q header and framing of the revised Ethernet header.
Here are tag fields:
o Type or tag protocol identifier is set to a value of 0x8100 to identify the frame as
an IEEE 802.1Q-tagged frame.
o Priority indicates the frame priority level that can be used for the prioritization of
traffic.
o Canonical Format Identifier (CFI) is a 1-bit identifier that enables Token Ring
frames to be carried across Ethernet links.
o VLAN ID uniquely identifies the VLAN to which the frame belongs.
On an 802.1Q trunk port, there is one VLAN, called the native VLAN, which is untagged. By
default, the native VLAN is VLAN 1, which means that the switch does not insert an extra
802.1Q tag inside an Ethernet frame. When the switch on the receiving side receives the
Ethernet frame that does not have an 802.1Q tag, it knows that the frame belongs to the
native VLAN. All other VLANs are tagged with a VID. IEEE 802.1Q specifies the native
VLAN for backward compatibility with legacy LAN scenarios, where untagged traffic is
common.
Note: Both switches must be configured with the same native VLAN, or errors will occur
and untagged traffic will go to the wrong VLAN on the receiving switch. By default, the
native VLAN is VLAN 1.
Note: Be extremely careful when adding a new VLAN to the list of allowed VLANs on a
trunk port. It is a common mistake to use the switchport trunk allowed vlan vlan-number
command. This command will overwrite the existing list of allowed VLANs and it will
replace it with the single VLAN you have just specified. Therefore, it is necessary to use
the switchport trunk allowed vlan add vlan-number command.
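For example, to permit an additional VLAN 40 (a hypothetical VLAN) without disturbing the VLANs already allowed on the trunk:

Switch(config)# interface Ethernet0/0
Switch(config-if)# switchport trunk allowed vlan add 40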
The following example shows you how to verify the configuration of a trunked interface
using the show interfaces interface-id switchport command.
Display VLAN information for an interface.
In the example, you can see that the interface Ethernet 0/0 operates as a trunk port and
has the VLAN 99 as the native VLAN. It only allows VLANs 10, 20, 30, and 99 to traverse
through the link.
To verify which ports are configured as trunks on a switch, you can use the show
interfaces trunk command.
You can also use the show interfaces status command to quickly verify which port is a
trunk, and which port belongs to a certain VLAN.
Unlike access ports, a port configured as a trunk port is not listed in the output of the
show vlan [brief] command. Notice that, in this example, interface Ethernet 0/0 is
missing.
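A common hardening sketch for unused ports, consistent with the note that follows, is to assign them to an otherwise unused VLAN and shut them down. The port range and the parking VLAN 900 are hypothetical:

Switch(config)# vlan 900
Switch(config-vlan)# name UNUSED
Switch(config-vlan)# exit
Switch(config)# interface range GigabitEthernet0/10 - 24
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 900
Switch(config-if-range)# shutdown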
Note: If you did not use the shutdown command in the above configuration, then if
someone plugs a device into an unused port, the port will come up, but the device will be
placed into a VLAN that does not have access to anything. Thus, you can successfully
mitigate some network attacks.
A good security practice is to separate management and user data traffic because you do
not want users to be able to establish Secure Shell (SSH) sessions to the switch. The
management VLAN by default is VLAN 1, and it should be changed to a different VLAN. If
you want to communicate with a Cisco switch remotely for management purposes, the
switch must have an IP address and a default-gateway configured and they must be in the
management VLAN. In this case, users who are not in the management VLAN cannot
access the switch, unless they were routed into the management VLAN.
When configuring a trunk port, consider the following:
o Make sure that the native VLAN for an 802.1Q trunk is the same on both ends of
the trunk port.
o Only allow specific VLANs to traverse through the trunk port.
o DTP manages trunk negotiations between Cisco switches.
Make sure that the native VLAN for an IEEE 802.1Q trunk is the same on both ends of the
trunk link. If the configuration is different on the two switches, the traffic will be
forwarded in the wrong VLAN. If IEEE 802.1Q trunk configuration is not the same on both
ends, Cisco IOS Software will report error messages. Note that native VLAN frames are
untagged.
SW1#*Mar 31 06:22:46.631: %CDP-4-NATIVE_VLAN_MISMATCH: Native VLAN mismatch
discovered on Ethernet0/0(999), with SW2 Ethernet0/0 (99).
Another good security practice is to change the native VLAN to something other than
VLAN 1 because all control traffic is sent on VLAN 1. The native VLAN should be changed
to be a VLAN that is not used for any other traffic. By default, the native VLAN is not
tagged, but it is recommended to tag the native VLAN. The example below shows how to
change the native VLAN and tag it.
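A sketch of that change follows, using VLAN 99 as the native VLAN to match the earlier trunk verification output, on the same Ethernet 0/0 trunk:

Switch(config)# interface Ethernet0/0
Switch(config-if)# switchport trunk native vlan 99
Switch(config-if)# exit
! Tag the native VLAN on all 802.1Q trunks (on platforms that support it)
Switch(config)# vlan dot1q tag native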
Switches from other vendors do not support DTP. As discussed, DTP is used by Cisco
switches to automatically negotiate whether an interface between two switches will be
put into access or trunk mode.
The figure shows a router that is attached to a switch. The router interface is configured to
operate as a trunk link and is connected to a switch port that is configured as a trunk. The
router performs inter-VLAN routing by accepting VLAN-tagged traffic on the trunk
interface coming from the adjacent switch and internally routing between the VLANs
using subinterfaces. Subinterfaces are multiple virtual interfaces that are associated with
one physical interface. To perform inter-VLAN routing functions, the router must know
how to reach all VLANs that are being interconnected; there must be a separate logical
connection on the router for each VLAN. VLAN trunking (such as IEEE 802.1Q) must be
enabled on these connections.
These subinterfaces are configured in software. Each is independently configured with its
own IP addresses and VLAN assignment. The router routes packets incoming from one
subinterface and then sends the data on another subinterface by putting it in a VLAN-
tagged frame and sending it back out the same physical interface. Devices on the VLANs
have their default gateway set to the appropriate router IP address; in this figure, the
devices in VLAN 10 will have default gateway set to 10.1.10.1, and the devices in VLAN 20
will have default gateway set to 10.1.20.1.
Router Trunk Link Configuration Example
The following example shows how you can configure a router on a stick, by configuring
subinterfaces and trunking on the router:
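The original listing is not reproduced here; based on the addressing in the figure, a sketch would look like this (the /24 subnet masks are assumed):

Router(config)# interface GigabitEthernet0/0
Router(config-if)# no shutdown
Router(config-if)# exit
Router(config)# interface GigabitEthernet0/0.10
Router(config-subif)# encapsulation dot1Q 10
Router(config-subif)# ip address 10.1.10.1 255.255.255.0
Router(config-subif)# exit
Router(config)# interface GigabitEthernet0/0.20
Router(config-subif)# encapsulation dot1Q 20
Router(config-subif)# ip address 10.1.20.1 255.255.255.0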
The show ip route command displays the state of the routing table. The sample output
shows two subinterfaces. The GigabitEthernet0/0.10 and GigabitEthernet0/0.20 VLAN
subinterfaces are directly connected to the router.
Some switches can perform Layer 3 functions, replacing the need for dedicated routers to
perform basic routing on a network. Layer 3 switches are capable of performing inter-
VLAN routing. Traditionally, a switch makes forwarding decisions by looking at the Layer 2
header, whereas a router makes forwarding decisions by looking at the Layer 3 header. A
Layer 3 switch combines the functionality of a switch and a router in one device. It
switches traffic when the source and destination are in the same VLAN and routes traffic
when the source and destination are in different VLANs (that is, on different IP subnets).
To enable a Layer 3 switch to perform routing functions, you must properly configure
VLAN interfaces on the switch; these are called switch virtual interfaces (SVIs). You must
use the IP addresses that match the subnet that the VLAN is associated with on the
network. The Layer 3 switch must also have IP routing enabled. Devices on the VLANs
have their default gateway set to the appropriate Layer 3 switch IP address.
Layer 3 switching is more scalable than router on a stick because the latter can pass only
so much traffic through the trunk link. In general, a Layer 3 switch is primarily a Layer 2
device that has been upgraded to have some routing capabilities. A router is a Layer 3
device that can perform some switching functions. Layer 3 switches do not have WAN
interfaces, while routers do. Typically, routers also support more advanced Layer 3
features (for example, Network Address Translation, encryption, and tunneling) than Layer
3 switches.
However, the line between switches and routers becomes hazier every day. Some Layer 2
switches support limited Layer 3 functionality, such as static routing on SVIs, so you can
configure static routes, but routing protocols are not supported.
Following is an example configuration on the Layer 3 switch with PCs that are connected
to VLAN 10 and VLAN 20. PCs in VLAN 10 will have default gateway 10.1.10.1, and PCs in
VLAN 20 will have default gateway 10.1.20.1. The Layer 3 switch will perform routing
between VLAN 10 and VLAN 20.
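The full listing is not reproduced here; a sketch consistent with the description (assuming /24 subnet masks) would be:

Switch(config)# ip routing
Switch(config)# interface vlan 10
Switch(config-if)# ip address 10.1.10.1 255.255.255.0
Switch(config-if)# no shutdown
Switch(config-if)# exit
Switch(config)# interface vlan 20
Switch(config-if)# ip address 10.1.20.1 255.255.255.0
Switch(config-if)# no shutdown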
17.1 Introducing OSPF
Introduction
Efficient routing is crucial for network performance in larger networks that encompass
many buildings with endpoints, branches, and remote sites; all implementing different
VLANs. In such large environments, changes are frequent as new networks emerge, paths
change, or different configuration and interface issues occur. The network has to be able
to adapt quickly and automatically to changes. Relying on static routing could result in
long waiting times as network administrators implement the necessary configuration
changes. This is where the role of routing protocols becomes crucial.
The objective of the routing protocol is to exchange network reachability information
between routers and dynamically adapt to network changes. Routing protocols use
routing algorithms to determine the optimal path between different segments in the
network and update routing tables with the best paths. Dynamic routing protocols play an
important role in enterprise networks. There are several different protocols available;
each having its advantages and limitations. Convergence time, support for summarization,
and the ability to scale affect the choice of suitable routing protocols. It is a best practice
that you use one routing protocol throughout the enterprise, if possible.
In an enterprise campus, the routing protocol must support high-availability requirements
and provide very fast convergence. One of the most common IP routing protocols in such
an environment is Open Shortest Path First (OSPF), an open standard protocol that works
as an interior gateway protocol (IGP) at the corporate office and at all the branches.
Despite its relatively simple configuration in small and medium networks, OSPF
implementation and troubleshooting in large-scale networks may represent a real
challenge. Therefore, an understanding of basic OSPF concepts is vital.
As a networking engineer, you will encounter routing protocols when designing,
configuring, and troubleshooting networks. If the protocol used is OSPF, you will need
knowledge of various aspects of OSPF including:
o A solid understanding of OSPF functions.
o Familiarity with OSPF packet types and the link-state database (LSDB).
o The process of OSPF neighbor establishment.
o Configuring and verifying basic OSPF implementation (a basic configuration sketch
follows this list).
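As a preview of the configuration covered later, a minimal single-area OSPF sketch on a Cisco router looks like this; the process ID, router ID, and network statements are hypothetical:

Router(config)# router ospf 1
Router(config-router)# router-id 1.1.1.1
Router(config-router)# network 10.1.10.0 0.0.0.255 area 0
Router(config-router)# network 10.1.20.0 0.0.0.255 area 0
! Verify that adjacencies have formed
Router# show ip ospf neighbor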
OSPF uses a two-layer network hierarchy that has two primary elements:
o AS: An AS consists of a collection of networks under a common administration that
share a common routing strategy. An AS, which is sometimes called a domain, can
be logically subdivided into multiple areas.
o Area: An area is a grouping of contiguous networks. Areas are logical subdivisions
of the AS.
Within each AS, a contiguous area 0 (backbone area) must be defined. In the multiarea
design, all other nonbackbone areas are connected to the backbone area.
A multiarea design is more effective because the network is segmented to limit the
propagation of LSAs inside an area. It is especially useful for large networks.
In a multiarea topology, there are some special commonly used OSPF terms, based on the
OSPF router roles. Routers that are only in Area 0 are known as backbone routers. Routers
that are only in nonbackbone (normal) areas are known as internal routers; they have all
interfaces in one area only. An area border router (ABR) connects Area 0 to the
nonbackbone areas. ABRs contain LSDB information for each area, make route
calculations for each area, and advertise routing information between areas. An AS
boundary router (ASBR) is a router that has at least one of its interfaces connected to an
OSPF area and at least one of its interfaces connected to an external non-OSPF domain,
such as an EIGRP routing domain.
Note: The optimal number of routers per area varies based on factors such as network
stability, but the general recommendation is to have no more than 50 routers per single
area.
In a single area OSPF, whenever there is a change in a topology, new LSAs are created and
sent throughout the area. All routers change their LSDB when they receive the new LSA,
and the SPF algorithm is run again on the updated LSDB to verify new paths to
destinations within the area.
The OSPF dynamic routing protocol does the following:
o Creates a neighbor relationship by exchanging hello packets
o Propagates LSAs rather than routing table updates:
o Link: Router interface
o State: Description of an interface and its relationship to neighboring
routers
o Floods LSAs to all OSPF routers in the area, not just to the directly connected
routers
o Pieces together all the LSAs that OSPF routers generate to create the OSPF LSDB
o Uses the SPF algorithm to calculate the shortest path to each destination and
places it in the routing table
A router sends LSA packets immediately to advertise its state when there are state
changes. Moreover, the router resends (floods) its own LSAs every 30 minutes by default
as a periodic update. The information about the attached interfaces, the metrics that are
used, and other variables are included in OSPF LSAs. As OSPF routers accumulate link-state
information, they use the SPF algorithm to calculate the shortest path to each network.
Essentially, an LSDB is an overall map of the networks in relation to the routers. It contains
the collection of LSAs that all routers in the same area have sent. Because the routers
within the same area share the same information, they have identical topological
databases.
17.7 Introducing OSPF
Establishing OSPF Neighbor Adjacencies
Neighbor OSPF routers must recognize each other on the network before they can share
information because OSPF routing depends on the status of the link between two routers.
The Hello protocol completes this process. OSPF routers send hello packets on all OSPF-
enabled interfaces to determine if there are any neighbors on those links.
The Hello protocol establishes and maintains neighbor relationships by ensuring
bidirectional (two-way) communication between neighbors.
The figure illustrates the exchange process that happens when routers appear on the
network:
1. A router interface is enabled on the network. The OSPF process is in a down state
because the router has not yet exchanged information with any other router. The
router begins by sending a hello packet out of the OSPF-enabled interface,
although it does not know the identity of any other routers.
2. All directly connected routers that are running OSPF receive the hello packet from
the first router and add the router to their lists of neighbors. After adding the
router to the list, other routers are in the initial state (INIT state).
3. Each router that received the hello packet sends a unicast reply hello packet to the
first router with its corresponding information. The Neighbors field in the hello
packet lists all neighboring routers, including the first router.
4. When the first router receives the hello packets from the neighboring routers
containing its own router ID inside the list of neighbors, it adds the neighboring
routers to its own neighbor relationship database. After recognizing itself in the
neighbor list, the first router goes into two-way state with those neighbors. At this
point, all routers that have each other in their lists of neighbors have established a
bidirectional (two-way) communication. When routers are in the two-way state, they
must decide whether to proceed with building an adjacency or stay in the current
state.
If the link type is a multiaccess broadcast network (for example, an Ethernet LAN), a DR
and BDR must first be selected. The DR acts as a central exchange point for routing
information to reduce the amount of routing information that the routers have to
exchange. The DR and BDR are selected after routers are in the two-way state. Note that
the DR and BDR are elected per LAN, not per area. The router with the highest priority becomes the
DR and the router with the second highest priority becomes the BDR. If there is a tie, the
router with the highest router ID becomes the DR and the router with the second highest
router ID becomes the BDR. Among the routers on a LAN that are not elected as the DR or
BDR, the exchange process stops at this point and the routers remain in the two-way
state. Routers then communicate only with the DR (or BDR) by using the OSPF DR
multicast IPv4 address 224.0.0.6. The DR uses the 224.0.0.5 multicast IPv4 address to
communicate with all other non-DR routers. On point-to-point links, there is no DR/BDR
election, because only two routers can be connected on a single point-to-point segment
and there is no need for using DR or BDR.
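On Cisco routers, the interface priority that influences this election can be adjusted with the ip ospf priority interface command (the default is 1, and a value of 0 prevents the router from becoming DR or BDR on that segment). The interface name and value below are hypothetical:
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip ospf priority 100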
After the DR and BDR are selected, the routers are considered to be in the exstart state.
The routers are then ready to discover the link-state information about the internetwork
and create their LSDBs. The exchange protocol is used to discover the network routes, and
it brings all the routers from the exstart state to a full state of communication with the
DR and BDR.
As shown in the figure, the exchange protocol continues as follows:
1. In the exstart state a primary/secondary relationship is created between each
router and its adjacent DR and BDR. The router with the higher router ID acts as
the primary router during the exchange process. The primary/secondary election
dictates which router will start the exchange of routing information. This step is
not shown in the figure.
2. The primary/secondary routers exchange one or more database description (DBD)
packets, containing a summary of their LSDB. The routers are in the exchange
state.
3. A router compares the DBD that it received with the LSAs that it has. If the DBD has
a more up-to-date link-state entry, the router sends a link-state request (LSR) to
the other router. When routers start sending LSRs, they are in the loading state.
4. The router sends a link state update (LSU), containing the entries requested in the
LSR. This is acknowledged with a link state acknowledgment (LSAck). When all LSRs
have been satisfied for a given router, the adjacent routers are considered
synchronized and are in the full state.
All states except two-way and full are transitory, and routers should not remain in these
states for extended periods of time.
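The following is a minimal, hypothetical single-area OSPFv2 configuration and verification sketch; the process ID, router ID, and network statement are illustrative values, not taken from the figures:
R1(config)# router ospf 1
R1(config-router)# router-id 1.1.1.1
R1(config-router)# network 10.1.1.0 0.0.0.255 area 0
R1(config-router)# end
R1# show ip ospf neighbor
The show ip ospf neighbor command lists each neighbor with its current state; neighbors that have completed database synchronization appear in the FULL state (for example, FULL/DR or FULL/BDR on multiaccess segments), while neighbors that legitimately remain in the two-way state appear as 2WAY/DROTHER.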
Each router has its own view of the topology even though all the routers build the shortest
path trees by using the same LSDB.
Each router places itself as the root of a tree and then runs the SPF algorithm. The path
calculation is based on the cumulative cost that is required to reach that destination. LSAs
are flooded throughout the area by using a reliable algorithm, which ensures that all the
routers in an area have the same LSDB (topological database). Because of the flooding
process, R1 has learned the link-state information for each router in its area. Each router
uses the information in its topological database to calculate a shortest path tree, with
itself as the root. The router then uses this tree to determine the best routes, which are
offered to the routing table to route network traffic.
For R1, the best path to each LAN and its cost are shown in the table. Note that in terms of
number of hops (routers) to reach the destination, the shortest path might not necessarily
be the best one, because the selection of the best route is based on the lowest total cost
value from the available paths. Each router has its own view of the topology, even though
the routers build shortest path trees by using the same LSDB.
As with IPv4, most IPv6 routing protocols are IGPs, with BGP still being the only EGP of
note. All these IGPs and BGP were updated to support IPv6. The table lists the routing
protocols and their new RFCs.
Each of these routing protocols had to be changed to support IPv6. The actual messages
that are used to send and receive routing information have changed, using IPv6 headers
instead of IPv4 headers and using IPv6 addresses in those headers. For example, RIP next
generation (RIPng) sends routing updates to the IPv6 destination multicast address ff02::9
instead of to the former RIPv2 IPv4 224.0.0.9 address. Also, the routing protocols typically
advertise their link-local IPv6 address as the next hop in a route.
The routing protocols still retain many of the same internal features. For example, RIPng is
based on RIPv2 and is still a distance vector protocol, with the hop count as the metric and
15 hops as the highest valid hop count (16 is infinity). OSPF version 3 (OSPFv3), which was
created specifically to support IPv6 (and also supports IPv4), is still a link-state protocol,
with the cost as the metric but with many internals, including LSA types, changed. OSPFv3
uses multicast addresses, including the all OSPF routers IPv6 address ff02::5, and the OSPF
DR IPv6 address ff02::6. As a result, OSPFv2 is not compatible with OSPFv3. However, the
core operational concepts remain the same.
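As a hedged illustration of the configuration difference, OSPFv3 is enabled per interface rather than with network statements under the routing process; the process ID, router ID, and interface below are hypothetical:
R1(config)# ipv6 unicast-routing
R1(config)# ipv6 router ospf 1
R1(config-rtr)# router-id 1.1.1.1
R1(config-rtr)# exit
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ipv6 ospf 1 area 0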
18.1 Building Redundant Switched Topologies
Introduction
In any kind of enterprise environment, one of the most important aspects of network
design is to provide redundancy. A network should never rely on one device to be a single
point of failure because that can effectively cause the loss of digital communication within
and even outside the enterprise.
Therefore, it is crucial to build a redundant topology, including implementing additional
switches and redundant links between them. Although a redundant topology in a switched
network has its benefits, it can also cause problems, such as Open Systems Interconnection
(OSI) Layer 2 loops. To avoid Layer 2 loops in a switched topology, Spanning Tree Protocol
(STP) is used as a Layer 2 loop prevention mechanism while still providing network link
redundancy. Thus, you should never disable STP in Layer 2 environments.
A limitation of the traditional STP is the convergence delay after a topology change, so the
use of Rapid STP (RSTP) is recommended. Although RSTP is backward-compatible with
STP, the two protocols are different in many ways. To take full advantage of RSTP, all
switches in a spanning tree topology must run the rapid version of the protocol.
As a networking engineer working with Cisco Catalyst switches, you should be familiar
with STP and all its more optimized variants such as Cisco’s Per VLAN Spanning Tree Plus
(PVST+), and get a firm grip on physical redundancy and STP concepts, such as:
o Issues in redundant topologies.
o STP and RSTP protocol operation.
o Implementation of STP stability mechanisms.
18.2 Building Redundant Switched Topologies
Physical Redundancy in a LAN
Enterprise voice and data networks are designed with physical component redundancy to
eliminate the possibility of any single point of failure causing a loss of function for an
entire switched network. Building a reliable switched network requires additional switches
and redundant physical links. However, redundant Layer 2 switch topologies require
planning and configuration to operate without introducing Layer 2 loops.
Physical loops may occur in the network as part of a design strategy for redundancy in a
switched network. Adding additional switches to LANs can add the benefit of redundancy.
Connecting two switches to the same network segments ensures continuous operation if
there are problems with one of the segments. Redundancy can ensure the constant
availability of the network. However, when adding redundant physical links and additional
switches, a physical loop is created and by spanning a single VLAN between connected
switches a Layer 2 loop is also created.
Layer 2 LAN protocols, such as Ethernet, lack a mechanism for recognizing and eliminating
endless looping of frames, as illustrated in the figure. Some Layer 3 protocols implement a
Time to Live (TTL) or hop limit mechanism that limits the number of times that a Layer 3
networking device can retransmit a packet or limit how many Layer 3 devices a packet can
traverse. Lacking such a mechanism, Layer 2 devices would continue to retransmit looping
traffic indefinitely.
In evolved variants of STP, like Cisco PVST+, RSTP or Multiple Spanning Tree Protocol
(MSTP), the original bridge priority field in the BID is changed to include an Extended
System ID field as shown in the figure. This field carries information such as VLAN ID or
instance number required for the evolved variants of STP to operate. The bridge priority
field in this case is 4 bits and the Extended System ID field is 12 bits. In command outputs
you will either see this combination written as a 16-bit field, or as two components: a 16-
bit bridge priority where the lower 12 bits are binary 0, and a 12-bit Extended System ID.
In the latter case, the configurable bridge priority is a multiple of 4096 between 0 and
61440, and the default on Cisco switches is 32768.
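For example, a switch can be made more likely to win the root bridge election for a VLAN by lowering its configurable priority; the VLAN number and priority value below are hypothetical:
SW1(config)# spanning-tree vlan 10 priority 24576
Alternatively, the spanning-tree vlan 10 root primary command lets the switch compute a priority value low enough to become the root bridge for that VLAN.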
When the switches start receiving BPDUs from the other switches, each switch compares
the root ID in the received BPDUs against the value that it currently has recorded as the
root ID. If the received value is lower than the recorded value (which was originally the
BID of that switch), the switch replaces the recorded value with the received value and
starts transmitting this value in the Root ID field in its own BPDUs.
Eventually, all switches learn and record the BID of the switch that has the lowest BID. The
switches all transmit this BID in the Root ID field of their BPDUs.
In the example, Switch B becomes the root bridge because it has the lowest BID. Switch A
and switch B have the same priority, but switch B has a lower MAC address value.
When a switch recognizes that it is not the root (because it is receiving BPDUs that have a
root ID value that is lower than its own BID), it marks the port on which it is receiving
those BPDUs as its root port.
A switch could receive BPDUs on multiple ports. In this case, the switch elects the port
that has the lowest-cost path to the root as its root port. If two ports have an equal path
cost to the root, the switch looks at the BID values in the received BPDUs to make a
decision (where the lowest BID is considered best, similar to root bridge election). If the
root path cost and the BID in both BPDUs are the same because both ports are connected
to the same upstream switch, the switch looks at the Port ID field in the received BPDUs
and selects its root port based on the lowest value in that field.
By default, the cost that is associated with each port is related to its speed (the higher the
interface bandwidth, the lower the cost), but the cost can be manually changed.
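A hypothetical example of manually changing the STP cost on a port (the interface and value are illustrative only):
SW3(config)# interface GigabitEthernet0/1
SW3(config-if)# spanning-tree cost 4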
Switches A, C, and D mark the ports that are directly connected to switch B (which is the
root bridge) as the root port. These directly connected ports on switches A, C, and D have
the lowest cost to the root bridge.
After electing the root bridge and root ports, the switches determine which switch will
have the designated port for each Ethernet segment; the switch with the designated port
is called the designated bridge for the segment. This process is similar to the root bridge
and root port elections. Each switch that is connected to a segment sends BPDUs out of
the port that is connected to that segment, claiming to be the designated bridge for that
segment. At this point, it considers its port to be a designated port.
When a switch starts receiving BPDUs from other switches on that segment, it compares
the received values of the root path cost, BID, and port ID fields (in that order) against the
values in the BPDUs that it is sending out its own port. The switch stops transmitting
BPDUs on the port and marks it as a nondesignated port if the other switch has lower
values.
In the example, all ports on the root bridge (switch B) are designated ports. The ports on
switch A that are connecting to switch C and switch D become designated ports, because
switch A has the lower root path cost.
To prevent Layer 2 loops while STP executes its algorithm, all ports start out in the
blocking state. When STP marks a port as either a root port or a designated port, the
algorithm starts to transition this port to the forwarding state and all nondesignated ports
remain in the blocking state.
The original and rapid versions of STP both execute the same algorithm in the decision-
making process. However, in the transition of a port from the blocking (or discarding, in
rapid spanning tree terms) to the forwarding state, there is a big difference between
those two spanning tree versions. Classic 802.1D would simply take 30 seconds to
transition the port to forwarding. The rapid spanning tree algorithm can use additional
mechanisms to transition the port to forwarding in less than a second.
Although the order of the steps that are listed in the diagrams suggests that STP goes
through them in a coordinated, sequential manner, that is not actually the case. If you
look back at the description of each step in the process, you see that each switch is going
through these steps in parallel. Also, each switch might adapt its selection of root bridge,
root ports, and designated ports as it receives new BPDUs. As the BPDUs are propagated
through the network, all switches eventually have a consistent view of the topology of the
network. When this stable state is reached, BPDUs are transmitted only by designated
ports. However, all blocking ports are continuously listening for BPDUs that are sent every
2 seconds. If a blocking port stops receiving BPDUs for the duration of the max age timer
(20 seconds by default), it will begin transitioning to the forwarding state.
There are two loops in the sample topology, meaning that two ports should be in the
blocking state to break both loops. The port on Switch C that is not directly connected to
Switch B (root bridge) is blocked, because it is a nondesignated port. The port on Switch D
that is not directly connected to Switch B (root bridge) is also blocked, because it is a
nondesignated port.
If a switch port connects to another switch, the STP initialization cycle must transition
from state to state to ensure a loop-free topology.
However, for access devices such as PCs, laptops, servers, and printers, the delays
incurred with STP initialization can cause problems such as DHCP timeouts. Cisco designed
the PortFast and BPDU guard features as enhancements to STP to reduce the time that is
required for an access device to enter the forwarding state.
STP is designed to prevent loops. Because there can be no loop on a port that is connected
directly to a host or server, the full function of STP is not needed for that port. PortFast is
a Cisco enhancement to STP that allows a switchport to begin forwarding much faster
than a switchport in normal STP mode.
When the PortFast feature is enabled on a switch port that is configured as an access port,
that port bypasses the typical STP listening and learning states. This feature allows the
port to transition from the blocking to the forwarding state immediately. You can use
PortFast on access ports that are connected to a single workstation or to a server to allow
those devices to connect to the network immediately rather than waiting for spanning
tree to converge.
In a valid PortFast configuration, no BPDUs should be received because access and Layer 3
devices do not generate BPDUs. If a port receives a BPDU, that would indicate that
another bridge or switch is connected to the port. This event could happen if a user
plugged a switch on their desk into the port to which their PC was previously
connected.
For example, assume that users decide they want more bandwidth. Since there are two
network access connections in their office, they decide to use both of them. To use them
both, they unplug their individual PCs from the network switches and plug them into their
own switch. They then plug the new switch into both of the network access ports. If
PortFast is enabled on both ports of the network switch, this action could cause a loop
and bring the network to a halt.
To avoid such a situation when using PortFast, the BPDU guard enhancement is the
solution. It allows network designers to enforce the STP domain diameter and keep the
active topology predictable. Devices behind ports that have STP PortFast and BPDU guard
enabled cannot influence the STP topology, which prevents users from connecting
additional switches and violating the STP diameter. Upon receiving a BPDU, BPDU guard
effectively disables the port that has PortFast configured by transitioning it into the
errdisable state. A message also appears on the switch console.
For example, the following message might appear:
Note: Because the purpose of PortFast is to minimize the time that ports must wait for
spanning tree to converge, you should use it only on ports that no other switch is
connected to, like access ports for connecting user equipment and servers or on trunk
ports when connecting to a router in a router-on-a-stick configuration. If you enable
PortFast on a port that connects to another switch, you risk creating a spanning tree
loop, or, with the BPDU guard feature enabled, the port will transition to the errdisable state.
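A configuration sketch for a typical access port follows; the interface number is hypothetical, and the commands shown are the standard interface-level PortFast and BPDU guard commands:
SW1(config)# interface GigabitEthernet0/5
SW1(config-if)# switchport mode access
SW1(config-if)# spanning-tree portfast
SW1(config-if)# spanning-tree bpduguard enable
On many platforms, the global spanning-tree portfast default and spanning-tree portfast bpduguard default commands apply the same behavior to all access ports, although availability depends on the platform and software version.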
The RSTP port states correspond to the three basic operations of a switch port: discarding,
learning, and forwarding. There is no listening state as there was with STP. The listening
and blocking STP states are replaced with the discarding state.
In a stable topology, RSTP ensures that every root port and designated port transitions to
the forwarding state, while all alternate ports and backup ports are always in the discarding state.
The characteristics of RSTP port states are as follows:
A port will accept and process BPDU frames in all port states.
Note: In RSTP, the PortFast feature is known as the edge port concept. Ports directly
connected to end stations are assumed not to create bridging loops in the network. Therefore, an
edge port directly transitions to the forwarding state, skipping the listening and learning
stages. Unlike PortFast, an edge port that receives a BPDU immediately loses its edge port
status and becomes a normal spanning-tree port.
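A minimal sketch of enabling the rapid variant on a Cisco Catalyst switch, assuming the platform supports Rapid PVST+:
SW1(config)# spanning-tree mode rapid-pvst
SW1# show spanning-tree summary
All switches in the topology should be configured the same way so that the faster convergence of RSTP is fully realized.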
19.1 Improving Redundant Switched Topologies with EtherChannel
Introduction
The increasing deployment of higher-speed switched Ethernet to the desktop can be
attributed to the proliferation of bandwidth-intensive intranet applications. Any-to-any
communications of new intranet applications such as video to the desktop, interactive
messaging, VoIP, and collaborative applications are increasing the need for scalable
bandwidth within the core and at the edge of campus networks.
Additional bandwidth is required at the access to the network where end devices
generate larger amounts of traffic, at the links that carry traffic aggregated from multiple
end devices (uplinks), and at the links that carry application traffic; for example, at the
links to the Data Center. When additional bandwidth is needed, the speed of these links
can be increased, but only to a certain point. As the speed increases on the links, this
solution finds its limitation where the fastest possible port is no longer fast enough to
aggregate the traffic coming from all the devices.
A second option is to multiply the number of physical links between the two switches to
increase the overall speed of the switch-to-switch communication. But if there are simply
multiple separate links between the two devices, the Spanning Tree Protocol (STP) will
block all except one link to avoid loops in the network.
A solution lies in a technology called EtherChannel. EtherChannel is a technology that
allows you to circumvent these issues by creating logical links made up of several physical
links.
As a network engineer, you will work with EtherChannel in enterprise environments so
you should be aware of the following:
o The need for EtherChannel technology.
o Different options for creating EtherChannels.
o Configuration steps for EtherChannel implementation.
You can also configure multiple EtherChannel links between two devices, as shown in the
figure above. However, when several logical EtherChannel links exist between two
switches, STP detects loops. To avoid loops, STP will make only one logical link
operational. When STP blocks the redundant links, it blocks one entire EtherChannel, thus
blocking all the ports belonging to that EtherChannel link.
The advantages of the EtherChannel link aggregation include:
o EtherChannel creates an aggregation that is seen as one logical link. Where there is
only one EtherChannel link, all physical links in the EtherChannel are active
because STP sees only one (logical) link. The bandwidth of physical links is
combined to provide increased bandwidth over the logical link.
o Because EtherChannel relies on the existing switch ports, you do not need to
upgrade the ports to faster and more expensive ones to obtain more bandwidth.
Most configuration tasks can be performed on the EtherChannel logical interface
instead of on each individual port, which ensures configuration consistency
throughout the links.
o Load balancing is possible across the physical links that are part of the same
EtherChannel.
o EtherChannel improves resiliency against link failure, as it provides link
redundancy. The loss of a physical link within an EtherChannel does not create a
change in the topology and there will not be a spanning-tree recalculation. As long
as at least one physical link is active, the EtherChannel is functional, even if its
overall throughput decreases.
The example in the figure shows how the interface range command is used to configure
four GigabitEthernet interfaces of SW1. The range is specified by providing interface type
(GigabitEthernet), and identifiers of the first interface and the last interface (0/1–4). The
command in the example specifies four interfaces, the first being GigabitEthernet 0/1, and
the last being GigabitEthernet 0/4. Once you specify the range, all the configuration
commands that follow apply to all the interfaces included in the range. Using the interface
range command, you can easily ensure that all the interfaces have the same configuration. A
similar configuration must also be applied on the SW2 switch.
Once you successfully bundle the ports, you can ensure consistent configuration by
applying it to the port channel interface.
When configuring EtherChannel, a good practice is to start by shutting down the
interfaces to be aggregated, so that incomplete configuration will not start to create
activity on the link.
After shutting down the member interfaces, proceed by using the channel-group
command to specify the port channel identifier, also called channel group number, and
the method for establishing the aggregated link.
The command syntax is channel-group channel-group-number mode {on | active |
passive}.
o The channel-group command assigns the interface to the port channel interface
and automatically creates the port channel interface. The channel-group-number
specifies the identifier of the port channel interface for the aggregated link.
o With the mode keyword, the channel-group command also specifies the method
for link aggregation. The keywords specifying the link aggregation method have the
following meanings:
o on: Forces the port to aggregate without LACP. In the on mode, an
EtherChannel is established only when a port group in the on mode is
connected to another port group in the on mode.
o active: Enables LACP unconditionally. The active mode places the port into an
active negotiating state in which the port starts negotiations with other
ports by sending LACP packets.
o passive: Enables LACP only if a LACP device is detected at the other end of
the link. The passive mode places the port into a passive negotiating state in
which the port responds to LACP packets that it receives but does not start
LACP packet negotiation.
The example configuration in the previous figure bundles GigabitEthernet0/1,
GigabitEthernet0/2, GigabitEthernet0/3, and GigabitEthernet0/4 into a Layer 2
EtherChannel link represented by the logical interface port-channel 1. Layer 2 settings of
the EtherChannel interface, such as trunking and the VLANs allowed on the trunk, are
configured on the logical port channel interface.
The following list summarizes the steps used to configure Layer 2 EtherChannel:
1. Use interface range command to configure interface attributes for interfaces that
are being aggregated [optional].
2. Shut down interfaces that will be aggregated, using the shutdown command.
3. Bundle the interfaces using channel-group command by specifying the port
channel identifier and aggregation method:
a. Choose on for manual unconditional aggregation
b. Choose active or passive to enable LACP
4. Configure the port channel interface.
5. Enable interfaces that were previously shut down.
Note: The channel-group identifier does not need to match on both sides of the port
channel. However, it is a good practice to do so because it makes it easier to manage the
configuration.
To configure Layer 2 EtherChannel, you do not need to change the default settings for the
interface modes. On switches, physical interfaces and port channel interfaces are Layer 2
ports, or switched ports, by default.
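Because the configuration figure is not reproduced here, the following is a hedged reconstruction of the Layer 2 EtherChannel configuration on SW1 that follows the steps above; the trunking settings on the port channel interface are illustrative:
SW1(config)# interface range GigabitEthernet0/1 - 4
SW1(config-if-range)# shutdown
SW1(config-if-range)# channel-group 1 mode active
SW1(config-if-range)# exit
SW1(config)# interface port-channel 1
SW1(config-if)# switchport mode trunk
SW1(config-if)# exit
SW1(config)# interface range GigabitEthernet0/1 - 4
SW1(config-if-range)# no shutdown
A matching configuration is applied on SW2; with LACP, at least one of the two sides must use the active mode for the channel to form.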
When configuring Layer 3 EtherChannel on a Layer 3 switch, there are several specifics
that you normally do not encounter with Layer 2 EtherChannels.
First, you should ensure that the interfaces that you are aggregating are all routed
interfaces—and that applies to both the port channel interface and to member interfaces.
When configuring Layer 3 EtherChannels, it is recommended that you first manually
create the port channel logical interface, and convert it to the routed interface.
To create the port channel logical interface, use the interface port-channel port-channel-
identifier global configuration mode command. By default, the port channel interface is a
Layer 2 interface. Therefore, use the no switchport command to make it a routed
interface. The no switchport command deletes any configuration specific to Layer 2 on
the interface.
In the next step, you configure the port channel interface with an IP version 4 (IPv4)
address using the ip address command. Note that the IPv4 address is assigned to the
logical port channel interface, and not to any of the member physical interfaces. If a
member interface already has an IPv4 address assigned, and you wish to assign the same
IPv4 address to the port channel interface, you must first delete the IPv4 address from the
member interface before configuring it on the port channel interface.
Finally, you should configure member interface bundling. For successful bundling to a
Layer 3 EtherChannel, all member interfaces must be routed interfaces. Use the no
switchport command to make the interfaces routed interfaces.
The command used to bundle member interfaces is the same as for the Layer 2
EtherChannels. Use the channel-group command to specify the port channel identifier
and the method of aggregation. The port channel identifier that you choose for the logical
interface must match the number you use with the channel-group command when
configuring member interfaces.
Use LACP where the platforms allow it. If the platform does not support aggregation
protocols, you have to configure static aggregation. Beware that, with static configuration,
misconfigurations on devices are not going to be detected automatically.
The following is an example of Layer 3 EtherChannel configuration.
The example in the figure shows the configuration of a Layer 3 EtherChannel link between
two Layer 3 switches. The configuration example is given only for the SW1 switch. A
similar configuration must be applied on the SW2 switch also.
The first line of the configuration creates a logical port channel interface with the
identifier 3. When the port channel interface does not exist, it is created using interface
port-channel command. The port channel interface is configured as a routed interface
using the no switchport command. Once the port channel interface is a routed interface,
you can configure Layer 3 parameters, such as the IPv4 address. The IPv4 address assigned
to the port-channel 3 interface on SW1 is 172.16.3.10/24.
To configure member interface bundling, the example uses the interface range command.
All member interfaces are converted to routed interfaces using the no switchport
command. Interface bundling is specified with the channel-group command. The channel-
group identifier is 3, which matches the identifier of the previously created port channel
interface. The aggregation method is set to on, which means that the interfaces are
bundled manually. For the Layer 3 EtherChannel to be fully operational, the configuration
on SW2 switch must specify the same aggregation method. The port channel identifier
does not need to match between SW1 and SW2 switches, but it is best practice that they
do match.
The following list summarizes the steps used to configure Layer 3 EtherChannel:
1. Create a logical port channel interface using interface port-channel command.
2. Turn the logical port channel interface into a routed interface, using the no
switchport command.
3. Assign IPv4 address to the port channel interface.
4. Use the interface range command to configure member interfaces:
a. Convert member interfaces into routed ports using the no switchport
command
b. Bundle the interfaces using the channel-group command by specifying the
logical interface identifier and aggregation method: choose on for manual
unconditional aggregation, or choose active or passive to enable LACP.
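The following sketch reconstructs the SW1 configuration described above; the member interfaces GigabitEthernet0/1 and GigabitEthernet0/2 are hypothetical because the original figure is not reproduced here:
SW1(config)# interface port-channel 3
SW1(config-if)# no switchport
SW1(config-if)# ip address 172.16.3.10 255.255.255.0
SW1(config-if)# exit
SW1(config)# interface range GigabitEthernet0/1 - 2
SW1(config-if-range)# no switchport
SW1(config-if-range)# channel-group 3 mode on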
Verifying EtherChannel Configuration
You can use several commands to verify an EtherChannel configuration. Using verification
commands, you can make sure that EtherChannel link is operational, at which layer it is
operating, whether all its member interfaces are active, which aggregation method is
configured, and so on.
The following commands are available for EtherChannel verification in Cisco IOS Software:
o The show interface port-channel command displays the general status of the
logical port channel interface that represents the aggregated link. In the example,
the interface port-channel 1 is operational.
o The show etherchannel summary command displays one line of information per
port channel and is particularly useful when several port channel interfaces are
configured on the same device. The output of the command provides, among
other things, information on the port channel interface status, the method used for
link aggregation, the member interfaces, and their status. In the example output, the
switch has one EtherChannel configured; group 1 uses LACP. The interface bundle
consists of the FastEthernet0/1 and FastEthernet0/2 interfaces; the letter P
indicates that these ports are bundled. You can see that the aggregated link is a
Layer 2 EtherChannel, and that it is in use. The letters SU indicate that the
interface is a Layer 2 interface: The letter S stands for Layer 2 and the letter U
stands for in use.
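In practice, these verification commands are run from privileged EXEC mode, for example (the port channel number is hypothetical):
SW1# show interface port-channel 1
SW1# show etherchannel summary
SW1# show etherchannel port-channel
SW1# show lacp neighbor
The show etherchannel port-channel and show lacp neighbor commands provide additional detail about the aggregation method in use and about the LACP peer, respectively.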
As a networking engineer, you will need to be familiar with Layer 3 redundancy, including:
o The need for default gateway redundancy.
o The default gateway redundancy protocol options.
20.2 Exploring Layer 3 Redundancy
Need for Default Gateway Redundancy
When routers have different paths to specific destinations (through redundant next-hop
routers) and the primary path becomes unavailable, the routing protocol between the
routers will dynamically converge, providing a connection through the secondary path.
Hosts that run routing protocols can react in the same manner when the primary path
towards different subnets fails, since they do not depend on a default gateway
configuration. For example, you can have a Microsoft Windows server with the LAN
routing feature that can communicate with two different routers to establish connectivity
to remote subnets. If the primary router fails, the server will use the information from the
routing protocol to switch to the secondary path.
However, most client computers, servers, printers, and so on do not support dynamic
routing protocols and whenever they need to communicate with a host that is located in a
different subnet, they must relay packets through the default gateway. Therefore, the
availability of this gateway is extremely important.
For example, a company that has dual redundant routers that connect users to the
internet may experience a problem when the primary router goes down. Without an extra
protocol, none of the devices on the company's network can access the internet because
of the primary router failure. Even though the secondary router is operational, the devices
may not be configured to access a secondary router when the primary router goes down.
Hence, an extra feature is needed that can provide default gateway redundancy to the
clients.
The following figure illustrates a topology with redundant routers that provide routing
functions in the specific segment. When the host determines that a destination IPv4
network is not on its local subnet, it forwards the packet to the default gateway. Most
IPv4 hosts do not run a dynamic routing protocol to build a list of reachable networks.
Instead, they rely on a manually configured or dynamically learned default gateway to
route all packets. Typically, IPv4 hosts are configured to request addressing information,
including the default gateway, from a DHCP server.
Redundant equipment alone does not guarantee failover. In this example, both Router A
and Router B are responsible for routing packets for the 10.1.10.0/24 subnet. Because the
routers are deployed as a redundant pair, if Router A becomes unavailable, the interior
gateway protocol (IGP) can quickly and dynamically converge and determine that Router B
will now transfer the packets that would otherwise have gone through Router A. Because
the end device does not run a routing protocol, it will not receive the dynamic routing
information.
The end device is configured with a single default gateway IPv4 address, which does not
dynamically update when the network topology changes. If the default gateway fails, the
local device is unable to send packets out of the local network segment. As a result, the
host is isolated from the rest of the network. Even if a redundant router that could serve
as a default gateway for that segment exists, there is no dynamic method by which these
devices can determine the address of a new default gateway.
Note: Though the example is illustrated on routers, it is equally valid on Layer 3 switches.
Hosts that are on the local subnet should have the IP address of the virtual router as their
default gateway. When an IPv4 host needs to communicate to another IPv4 host on a
different subnet, it will use Address Resolution Protocol (ARP) to resolve the MAC address
of the default gateway. The ARP resolution returns the MAC address of the virtual router.
The host then encapsulates the packets inside frames sent to the MAC address of the
virtual router; these packets are then routed to their destination by any active router that
is part of that virtual router group. The standby router takes over if the active router fails.
Therefore, the virtual router as a concept has an active (forwarding) router and standby
router.
You use an FHRP to coordinate two or more routers as the devices that are responsible for
processing the packets that are sent to the virtual router. The host devices send traffic to
the address of the virtual router. The actual (physical) router that forwards this traffic is
transparent to the end stations.
The redundancy protocol provides the mechanism for determining which router should
take the active role in forwarding traffic and determining when a standby router should
take over that role. When the forwarding router fails, the standby router detects the
change and a failover occurs. Hence, the standby router becomes active and starts
forwarding traffic destined for the shared IP address and MAC address. The transition
from one forwarding router to another is transparent to the end devices.
A common feature of FHRP is to provide a default gateway failover that is transparent to
hosts. Cisco routers and switches typically support the use of three FHRPs:
1. Hot Standby Router Protocol (HSRP): HSRP is an FHRP that Cisco designed to
create a redundancy framework between network routers or Layer 3 switches to
achieve default gateway failover capabilities. Only one router per subnet forwards
traffic. HSRP is defined in RFC 2281.
2. Virtual Router Redundancy Protocol (VRRP): VRRP is an open FHRP standard that
offers the ability to add more than two routers for additional redundancy. Only
one router per subnet forwards traffic. VRRP is defined in RFC 5798.
3. Gateway Load Balancing Protocol (GLBP): GLBP is an FHRP that Cisco designed to
allow multiple active forwarders to load-balance outgoing traffic on a per host
basis rather than a per subnet basis like HSRP.
The routers communicate FHRP information between each other through hello messages,
which also represent a keepalive mechanism. This figure illustrates the FHRP failover
process.
When the forwarding router or the link, where FHRP is configured, fails, these steps take
place:
1. The standby router stops seeing hello messages from the forwarding router.
2. The standby router assumes the role of the forwarding router.
3. Because the new forwarding router assumes both the IP and MAC addresses of the
virtual router, the end stations see no disruption in service.
20.4 Exploring Layer 3 Redundancy
Understanding HSRP
HSRP is an FHRP that facilitates transparent failover of the first-hop IP device (default
gateway). When you use HSRP, you configure the host with the HSRP virtual IP address as
its default gateway, instead of using the IP address of the router.
HSRP Overview
HSRP defines a standby group of routers, while one router is designated as the active
router, as depicted in this figure.
HSRP provides gateway redundancy by sharing IP and MAC addresses between redundant
gateways. The protocol consists of virtual IP and MAC addresses that the two routers that
belong to the same HSRP group share between each other.
Hosts on the IP subnet that are protected by HSRP have their default gateway configured
with the HSRP group virtual IP address.
When IPv4 hosts use ARP to resolve the MAC address of the default gateway IPv4 address,
the active HSRP router responds with the shared virtual MAC address. The packets that
are received on the virtual IPv4 address are forwarded to the active router.
The HSRP active and the standby router perform the following functions:
o Active router:
o Responds to default gateway ARP requests with the virtual router MAC
address.
o Assumes active forwarding of packets for the virtual router.
o Sends hello messages between the active and standby routers.
o Knows the virtual router IPv4 address.
o Standby router:
o Sends hello messages.
o Listens for periodic hello messages.
o Assumes active forwarding of packets if it does not hear from active router.
o Sends Gratuitous ARP message when standby becomes active.
HSRP routers send hello messages that reach all HSRP routers. The active router sources
hello packets from its configured IPv4 address and the shared virtual MAC address. The
standby router sources hellos from its configured IPv4 address and its burned-in MAC
address (BIA). Hence, the HSRP routers can identify which router is the active and which is
the standby router.
The following table summarizes the HSRP terminology:
The function of the HSRP standby router is to monitor the operational status of the HSRP
group and to quickly assume the packet-forwarding responsibility if the active router
becomes inoperable. When the primary HSRP router comes back online, it will not regain
the active role by default. To transfer the active role to the primary router, you have to
configure pre-emption.
The standby preempt command enables the HSRP router with the highest priority to
immediately become the active router. Priority is determined first by the configured
priority value and then by the IPv4 address. In each case, a higher value is of greater
priority. Pre-emption is recommended because you want your network to have
deterministic behavior.
HSRP for IPv4 has two versions: Version 1 and Version 2. The default HSRP version is 1.
Because the two versions are not compatible, you must use the same version on your
HSRP enabled routers.
The shared virtual MAC address is generated by combining a specific MAC address range
and the HSRP group number. HSRP Version 1 uses a MAC address in the form
0000.0C07.ACXX and HSRP Version 2 uses a MAC address in the form 0000.0C9F.FXXX,
where XX or XXX stand for the group number. For example, the virtual MAC address for a
HSRP Version 2 virtual router in group 10 would be 0000.0C9F.F00A. The A in 00A is the
hexadecimal value for 10.
In addition, routers with HSRP Version 1 send hello packets to the multicast address of
224.0.0.2 (reserved multicast address used to communicate to all routers) on UDP port
1985, while HSRP Version 2 uses the 224.0.0.102 multicast address on UDP port 1985.
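A minimal configuration sketch for the active router, assuming HSRP version 2 and group 10 (the interface, IPv4 addresses, and priority are hypothetical):
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip address 10.1.10.2 255.255.255.0
R1(config-if)# standby version 2
R1(config-if)# standby 10 ip 10.1.10.1
R1(config-if)# standby 10 priority 110
R1(config-if)# standby 10 preempt
Hosts on the subnet would use 10.1.10.1 as their default gateway, and the virtual MAC address for this group would be 0000.0C9F.F00A, as described above.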
HSRP Advanced Features
Besides the default behavior, you can configure some other HSRP features to increase
your network availability and performance:
o Load balancing: Routers can simultaneously provide redundant backup and
perform load sharing across various subnets and VLANs.
o Interface tracking: When a tracked interface becomes unavailable, the HSRP
tracking feature ensures that a router with the unavailable interface will relinquish
the active router role.
The following figure illustrates a topology with multiple VLANs that can benefit from the
HSRP load-balancing feature.
The two Layer 3 switches have HSRP enabled in two separate VLANs. For each VLAN, HSRP
allocates a standby group, a virtual IPv4 address, and a virtual MAC address. The active
router for each HSRP group is on a different Layer 3 switch. Thus, the hosts in different
VLANs use a different Layer 3 switch, which enables load sharing across various subnets
and VLANs.
The active router in HSRP is elected based on the HSRP priority, which is 100 by default
and is configurable per HSRP group. In the case of an equal priority, the router with the
highest IPv4 address for the respective group is elected as an active router.
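A hedged sketch of such load sharing on the first Layer 3 switch, using two SVIs and alternating priorities (all values are hypothetical; the second switch would mirror the configuration with the priorities reversed):
SWA(config)# interface Vlan10
SWA(config-if)# ip address 10.1.10.2 255.255.255.0
SWA(config-if)# standby 10 ip 10.1.10.1
SWA(config-if)# standby 10 priority 110
SWA(config-if)# standby 10 preempt
SWA(config-if)# interface Vlan20
SWA(config-if)# ip address 10.1.20.2 255.255.255.0
SWA(config-if)# standby 20 ip 10.1.20.1
SWA(config-if)# standby 20 priority 90
SWA(config-if)# standby 20 preempt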
The HSRP interface tracking feature decreases the priority of the router by a configured
value, when a tracked interface becomes unavailable. In this situation, the priority of a
standby group router may become higher and it will take the role of active router.
Therefore, a router with the unavailable interface will relinquish the active router role.
The following topology has two redundant routers that connect a host to the internet. The
routers have HSRP enabled on the interfaces that are facing the host network (interface
Fa0/0 on each router).
The primary router (Router 1) is configured with priority 110 while the secondary router
(Router 2) has a default priority of 100. Router 1 is also configured with the HSRP interface
tracking option for the interface Fa0/1, which is connected to the internet. If this interface
becomes unavailable, the Router 1 HSRP priority is configured to decrease by 20, which
will relinquish the active router role to Router 2, which will have a higher priority during
this incident. When interface Fa0/1 on Router 1 comes back online, the router will revert
to the configured priority and will become the active router. These changes will happen
only if you have enabled pre-emption.
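A sketch of the Router 1 configuration described above, assuming HSRP group 1 and a virtual IP address of 10.1.10.1 (these two values are hypothetical, while the priority of 110 and the decrement of 20 come from the example):
Router1(config)# interface FastEthernet0/0
Router1(config-if)# standby 1 ip 10.1.10.1
Router1(config-if)# standby 1 priority 110
Router1(config-if)# standby 1 preempt
Router1(config-if)# standby 1 track FastEthernet0/1 20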
HSRP is a Cisco proprietary protocol and VRRP is a standard protocol. VRRP is similar to
HSRP, both in operation and configuration, and the differences between HSRP and VRRP
are very slight. The VRRP master is analogous to the HSRP active gateway, while the VRRP
backup is analogous to the HSRP standby gateway. Other VRRP differences from HSRP
include that it allows you to use the actual IP address of one of the VRRP group members
as a virtual IP address, and that it uses a different multicast address for communication
between peers.
21.1 Introducing WAN Technologies
Introduction
When users in enterprise networks need access to remote sites, or a branch must connect
to the enterprise campus, or when remote users must access the enterprise LAN, a wide-
area network, or WAN, is needed. As the name suggests, WANs cover large geographical
areas. WANs are operated by companies such as telephone or cable companies, service
providers, or satellite companies. They build large networks that span entire cities or
regions and lease the right to use their networks to their customers.
Many WAN technologies exist today and new technologies, such as 4G and 5G Mobile
networks, are constantly emerging. An increasingly common option for enterprises is also
to use the global internet infrastructure for WAN connectivity.
One of the most important aspects of interconnecting enterprise sites and users is
security. In order to secure traffic in transit over the service provider networks or internet,
Virtual Private Networks (VPNs) are deployed. There are multiple options for VPNs and
sometimes enterprises need to combine multiple different services in the network,
depending on the availability of services and business needs.
As a network engineer, you should keep up on possible WAN connectivity options and
other WAN details by acquiring:
o Knowledge of WAN devices and cabling.
o Awareness of WAN protocols and topology options.
o Familiarity with VPN options.
21.2 Introducing WAN Technologies
Introduction to WAN Technologies
A WAN is a data communications network that operates beyond the geographic scope of a
LAN. To implement a WAN, enterprises use the facilities of service providers or carriers,
such as a telephone or cable company. The provider interconnects an enterprise's own
locations and connects them to the locations of other enterprises, to external services, and to
remote users. WANs carry various traffic types, such as voice, data, and video.
Modems are devices that modulate and demodulate analog carriers to encode and
retrieve digital information. A modem interprets digital and analog signals, enabling data
to be transmitted over voice-grade telephone lines. At the source, digital signals are
converted to a form that is suitable for transmission over analog communication facilities.
At the destination, these analog signals are returned to their digital form. Pure analog
circuits are not often encountered today. Modems still modulate multiple carriers and
implement coding schemes, but these are now digital. Nonetheless, the word modem is still in use,
by convention, for devices that work on lines that were not primarily intended for data
service, such as phone lines of various types and cable TV lines. The terms transceiver or
converter or media converter are used for fiber lines. Modems are part of the equipment
installed at the customer location, although it is not necessary that they are owned and
managed by the customer (for example, the enterprise). In the figure, a Digital Subscriber
Line (DSL) modem (which is used in broadband environments based on DSL technology)
connects to a router with an Ethernet cable and connects to the service provider network
with a telephone cable. A modem can also be implemented as a router module.
Optical fiber converters are used where a fiber-optic link terminates to convert optical
signals into electrical signals and vice versa. You can also implement the converter as a
router or switch module.
A router provides internetworking and WAN access interface ports that are used to
connect to the service provider network. These interfaces may be serial connections or
other WAN interfaces. With some types of WAN interfaces, you need an external device
such as a CSU/DSU or modem (analog, cable, or DSL) to connect the router to the local
point of presence (POP) of the service provider.
A core router or multilayer switch resides within the middle or backbone of the WAN,
rather than at its periphery. To fulfil this role, a router or multilayer switch must be able to
support multiple telecommunications interfaces of the highest speed in use in the WAN
core. It must also be able to forward IP packets at wire speed on all these interfaces. The
router or multilayer switch must support the routing protocols that are being used in the
core.
Wireless routers are used when you are using the wireless medium for WAN connectivity.
You can also use an access point instead of a wireless router.
Routers with cellular connectivity features are used when connecting to a WAN via a
cellular/mobile broadband access network. Routers with cellular connectivity features
include an interface that supports cellular communication standards and protocols.
Interfaces for cellular communication can be factory installed, or a module that provides
cellular connectivity can be added. A router can be moved between locations. It
can also operate while in motion (in trucks, buses, cars, trains). Enterprise grade routers
that support cellular connectivity also include diagnostic and management functions,
enable multiple cellular connections to one or more service providers, support Quality of
Service (QoS), etc.
DTE/DCE and CSU/DSU: data terminating equipment (DTE) and data communications
equipment (DCE) are terms that were used in the context of WAN connectivity options
that are mostly considered legacy today. The two terms name two separate devices. The
DTE device is either a source or a destination for digital data. Specifically, these devices
include PCs, servers, and routers. In the figure, a router in either office would be
considered a DTE. DCE devices convert the data received from the sending DTE into a form
acceptable to the WAN service provider. The purpose is to convert a signal from a form
used for local transmission to a form used for long distance transmission. Converted
signals travel across the provider’s network to the remote DCE device, which connects the
receiving DTE. You could say that a DCE translates data from LAN to WAN "language." To
simplify, the data path over a WAN would be DTE > DCE > DCE > DTE.
DCEs deal with both analog and digital data representations. When dealing only with
digitized data, a DCE is a CSU/DSU. In other words, when you connect a digital device to a
digital line, you use CSU/DSU. It connects two different types of digital signals. When
connecting a digital device to an analog circuit (such as phone line), the DCE is a modem.
In the figure, the router, a digital device, connects to a line, which is digital, via the
CSU/DSU unit. The CSU/DSU connects to the service provider infrastructure using a
telephone or coaxial cable, and it connects to the router with a serial cable. The DSU
converts the telephone line frames into frames that can be interpreted on the LAN and
vice versa. It also provides a clocking signal on the serial line. If a CSU/DSU is implemented
as a module within a router, a serial cable is not necessary.
Nowadays, CSU and DSU are two components within one piece of hardware. The DSU
manages the interface with the DTE. In serial communication, where clocking is required,
the DSU plays the role of the DCE and provides clocking. The DSU converts DTE serial
communications to frames which the CSU can understand and vice versa, it converts the
carrier’s signal into frames that can be interpreted on the LAN. The CSU deals with the
provider’s part of the network. It connects to the provider’s communication circuit and
places the frames from the DSU onto it and from it to the DSU. The CSU ensures
connection integrity through error correction and line monitoring.
WAN Interface Cards (WICs) in a router may contain an integrated CSU/DSU.
Note: The preceding list is not exhaustive and other devices may be required, depending
on the WAN access technology chosen.
The demarcation point is the point that separates a customer’s WAN equipment from
the service provider’s equipment. The customer side of the demarcation point
accommodates the Customer Premises Equipment (CPE).
CPE are typically devices inside the wiring closet located on the subscriber’s premises. CPE
either belongs to the subscriber or is leased from the service provider. CPE is connected to
the closest point in the service provider’s network (an edge router or an exchange/central
office). This link is called the local loop or last mile. This point where the subscriber
connects to the service providers network is called a POP. Examples of CPE devices are
modems, routers, optical converters, and so on. A copper or fiber cable connects the CPE
to the nearest exchange or central office of the service provider.
The provider’s side of the demarcation point includes the links that connect to the service
provider’s equipment—that is, the local loop or last mile.
Physically, the demarcation point can be a cabling junction box, located on the customer
premises, that connects the CPE wiring to the local loop. It is usually placed for easy access
by a technician.
The demarcation point is the place where the responsibility for the connection changes
from the user to the service provider. When problems arise, it is necessary to determine
whether the user or the service provider is responsible for troubleshooting or repair.
Note: The exact demarcation point is different from country to country.
The diagram in the figure gives an overview of available WAN connectivity options, taking
into consideration also the traditional, now mostly legacy connection options, that were
built to leverage the telephone network.
Both the traditional, and current and emerging WAN connectivity options can be broadly
classified into the following:
o Dedicated communication links, which provide permanent dedicated connections
using point-to-point links with various capacities that are limited only by the
underlying physical facilities and the willingness of enterprises to pay for these
dedicated lines. A point-to-point link provides a pre-established WAN
communications path from the customer premises through the provider network
to a remote destination. They are simple to implement and provide high quality
and permanent dedicated capacity. They are generally costly and have fixed
capacity, which makes them inflexible.
o Switched communication links can be either circuit-switched or packet-switched.
It is important to differentiate between the two switching models:
o Circuit-switched communication: Circuit switching establishes a dedicated
virtual connection, called a circuit, between a sender and a receiver. The
connection through the network of the service provider is established
dynamically, before communication can start, using signaling which varies
for different technologies. During transmission, all communication takes
the same path. The fixed capacity allocated to the circuit is available for the
duration of the connection, regardless of whether there is information to
transmit or not. Computer network traffic can be bursty in nature. Because
the subscriber has sole use of the fixed capacity allocation, switched circuits
are generally not suited for data communication. Examples of circuit-
switched communication links are PSTN analog dialup and Integrated
Services Digital Network (ISDN).
o Packet-switched communication: Using circuit switching does not make
efficient use of the allocated fixed bandwidth due to the data flow
fluctuations. In contrast to circuit switching, packet switching segments
data into packets that are routed over a shared network. Packet-switching
networks do not require a dedicated circuit to be established, and they
allow many pairs of nodes to communicate over the same channel. Packet-
switched communication links include Ethernet WAN (Metro Ethernet),
Multiprotocol Label Switching (MPLS), legacy Frame Relay, and legacy
Asynchronous Transfer Mode (ATM).
o Internet-based communication links: Instead of using a separate WAN
infrastructure, enterprises today commonly take advantage of the global internet
infrastructure for WAN connectivity. Previously, the internet was not a viable
option for a WAN connection due to many security risks and lack of SLA, that is,
the lack of adequate performance guarantees. Nowadays, with the development
of VPN technologies, the internet has become one of the most common
connection types that is cheap and secure. Internet WAN connection links include
various broadband access technologies, such as fiber, DSL, cable, and broadband
wireless. They are usually combined with VPN technologies to provide security.
Other access options are cellular (or mobile) networks and satellite systems.
Each of the WAN technologies provides advantages and disadvantages for the customer.
When choosing an appropriate WAN connection, consider whether to use internet-based public connections or connections implemented within a nonpublic service provider's network. Internet-based connections are readily available, flexible, and cheaper, and can be made secure using technologies such as VPNs. Connections within a service
provider’s network guarantee security and performance.
Another element to consider when deciding about WAN connections is the number of nodes you need to interconnect. Also, consider the traffic requirements and QoS for each of the required connections. If traffic is sensitive to delays, such as voice or video,
private dedicated or switched connections might be better. One of the factors that will
limit your choices is what connection options are locally available. In remote areas, you
might have only satellite access at your disposal. Since WAN costs can be significant, your
operating budget will also influence your choice.
Note: Software defined WAN (SD-WAN) is a new concept in WAN. SD-WAN uses a
different approach from legacy WAN networking when it comes to WAN device
communication and WAN device management. In a legacy network all routers are
independently configured. A small change on a network may require manual
reconfiguration of hundreds of routers. In SD-WAN, all changes are centrally managed and
require only a few clicks to deploy. SD-WAN is an industry response to a trend of more
and more users accessing enterprise resources from more locations. At the same time, the
resources, such as applications and services, are hosted by more and more clouds.
Different user locations have different WAN connectivity options available. For an
enterprise, managing a large number of WAN connections can become inefficient. SD-
WAN provides a software layer to control and manage available WAN connections and
provide users with the best connection for the applications/services they require. It also
provides security features.
Traditional WAN Connectivity Options
WAN technologies that emerged at the beginning of the data communications era were
developed so they could leverage the existing global telephone network. Nowadays, most
of them are considered legacy. However, even today, you might still encounter situations
in which these legacy connectivity options might be the only ones available.
MPLS is an architecture that combines the advantages of Layer 3 routing with the benefits
of Layer 2 switching.
The multiprotocol in the name means that the technology is able to carry any protocol as payload data. Payloads may be IPv4 or IPv6 packets, Ethernet frames, DSL traffic, and so on. This
means that different sites can connect to the provider’s network using different access
technologies.
When a packet enters an MPLS network, the first MPLS router adds a short fixed-length
label to each packet, placed between a packet's data link layer header and its IP header.
The label is removed by the egress router, when the packet leaves the MPLS network. The
label is added by a provider edge (PE) router when the packet enters the MPLS network
and is removed by a PE router when leaving the MPLS network. This process is transparent
to the customer.
MPLS routers are also called label switched routers (LSRs). Based on its location in the
network, a router can be a customer edge router (CE router), a provider edge router (PE
router), or an internal provider router (P router). To forward a packet, routers use the
label to determine the packet's next hop.
MPLS is a connection-oriented protocol. For a packet to be forwarded, a path must be
defined beforehand. A label-switched path (LSP) is constructed by defining a sequence of
labels that must be processed from the network entry to the network exit point. Using
dedicated protocols, routers exchange information about what labels to use for each flow.
Since packets sent between the same endpoints might belong to different MPLS flows,
they might flow through different paths in the network.
MPLS labels can be added one on top of another. This feature of MPLS is called label
stacking. Therefore, a protocol data unit (PDU) may carry multiple labels. The top label is
always processed first, making it possible to combine labels in many different ways. Label
stacking allows the possibility to create many paths, which can have different processing
characteristics, which in turn means that MPLS can accommodate a great variety of
customer requirements.
MPLS provides several services. The most common ones are QoS support, traffic
engineering, quick recovery from failures, and VPNs.
Ethernet over WAN
Ethernet was originally developed to be a LAN access technology. At that time, it was not
suitable as a WAN access technology because the maximum cable length supported was
only up to a kilometer. Over the years the Ethernet physical layer media and coding
schemes constantly changed, while the Ethernet frame has remained the same, enabling a
consistent link layer and upper layer interface. Ethernet has therefore become a
reasonable WAN access option.
Service providers now offer Ethernet WAN services using fiber optic cabling. The Ethernet
WAN service can go by many names, including Metropolitan Ethernet (Metro Ethernet),
Ethernet over MPLS (EoMPLS), and Virtual Private LAN Service (VPLS).
With Ethernet WAN, all sites look as if they are connected to the same Ethernet switch
inside the service provider network. Therefore, all sites are on a single multiaccess
network and each site can communicate directly with all others on the WAN. As Ethernet
operates at layer 2 of the OSI model, you can use your own IP addressing space for routing
purposes. You can also extend your internal LAN QoS policies across the service provider
network.
Ethernet as the WAN connectivity protocol can be deployed in several ways:
o Pure Ethernet connectivity, that is, end-to-end Ethernet connectivity without
transformations to other WAN technologies, has a geographic span determined by
the physical layer limitations. Therefore, the service is limited to specific
geographic regions and is more adequate for Metropolitan Area Network (MAN)
implementations, hence the name Metro Ethernet. Pure Ethernet-based
deployments are cheaper but less reliable and scalable. They can handle hundreds
of remote sites.
o Ethernet over SDH/SONET deployments are useful when there is an existing
SDH/SONET infrastructure already in place. SDH/SONET are two versions of the
protocol designed for, and used within, the service provider network
infrastructure. Ethernet frames must undergo reframing in order to be transferred
over a SDH/SONET network. Also, the bit-rate hierarchy of the SDH/SONET
network must be followed, which limits bandwidth flexibility.
o MPLS based deployments are a service provider solution that uses an MPLS
network to provide virtual private Layer 2 WAN connectivity for customers. MPLS
based Ethernet WANs can connect a very large number (thousands) of locations,
and are reliable and scalable.
Benefits of Ethernet WAN include:
o Reduced expenses and administration – Ethernet WAN provides a switched, high-
bandwidth Layer 2 network capable of managing data, voice, and video all on the
same infrastructure. These characteristics increase bandwidth and eliminate
expensive conversions to other WAN technologies. The technology enables
businesses to inexpensively connect numerous sites, in a metropolitan area, to
each other and to the internet. An all-Ethernet infrastructure simplifies the
network management process because every device uses the same protocol to
communicate.
o Easy integration with existing networks – Ethernet WAN connects easily to existing
Ethernet LANs, reducing installation costs and time.
o Enhanced business productivity – Ethernet WAN enables businesses to continue to
use IP-based business applications already developed, and to utilize the
accumulated knowledge, that is, to reuse the investment made in software and
training.
Broadband Internet Access
Broadband connectivity options could be classified into wired and wireless. Wired
connections use some sort of cabling, such as fiber or copper wires. These wired
connections tend to be permanent, that is, permanently enabled, dedicated, and mostly
offer consistent bandwidth. On the other hand, given the nature of wireless
communications, wireless connectivity solutions do not offer the same consistency of
bandwidth, error rate and latency as wired connections. This is due to factors such as
location (distance from radio towers, multipath propagation, radio interference from
other sources, etc.), weather, and bandwidth usage (local loop is usually shared among
multiple users). In addition, these factors can vary over time. In order to deliver highly
reliable and consistent performance, an understanding of the radio propagation and
conditions at each installation is needed. Examples of wired broadband connectivity are
DSL, cable TV connections, and optical fiber networks. Examples of wireless broadband
are cellular 3G/4G/5G or satellite internet services.
Broadband solutions are inexpensive when compared to other WAN connectivity options.
However, they do not allow the customer to control latency or QoS. In terms of broadband
throughput, there are usually several options from which to choose.
Wired Broadband Internet Access
DSL technology is an always-on connection technology that uses existing twisted-pair
telephone lines to transport high-bandwidth data, and provides IP services to subscribers.
Service providers deploy DSL connections in the local loop/last mile. The connection is set
up between a pair of modems on either end of a copper wire that extends between the
customer premises equipment (CPE) and the DSL access multiplexer (DSLAM). A DSLAM is
the device located at the Central Office (CO) of the provider, which concentrates
connections from multiple DSL subscribers. The DSLAM combines individual DSL
connections from users into one high-capacity link to an ISP, and, therefore, to the
internet. DSL is a broadband technology of choice for many remote workers.
There are many DSL varieties, differing in available bit rates, and underlying data link and
physical layer characteristics. Different DSL flavors are: asymmetric DSL (ADSL), with different upload and download bit rates; ADSL2+, with higher data rates, longer reach, and improvements for packet transmission; high-data-rate DSL (HDSL); ISDN-based DSL, with the longest reach of all DSL technologies; symmetric DSL (SDSL), which offers equal upstream and downstream bandwidth at multiple rates; very-high-data-rate DSL (VDSL); and so on. All these variations are encompassed under the
term xDSL, which denotes any of the DSL technologies.
Generally, a subscriber cannot choose to connect to an enterprise network directly, but
must first connect to an ISP, and then an IP connection is made through the internet to
the enterprise. Security risks are incurred in this process, but can be mitigated with
security measures.
Another wired broadband access option is cable access. Accessing the internet through
cable utilizes the cable network, which was primarily developed for TV signal distribution,
and is known as the cable TV system. At the physical layer, the coaxial cable was the
primary medium used to build cable TV systems. It carries radio frequency (RF) signals.
Most cable operators are deploying hybrid fiber-coaxial networks. Internet service is
provided by the ISP associated with the cable service provider.
To enable the transmission of data over the cable system and to add high-speed data
transfer to an existing cable TV system, the Data over Cable Service Interface Specification
(DOCSIS) international standard defines the communications requirements and operation
support interface requirements.
Two types of equipment are required to send signals upstream and downstream on a
cable system:
o Cable Modem (CM) on the subscriber end.
o Cable Modem Termination System (CMTS) at the headend of the cable operator.
The topology in the figure displays a sample cable WAN connection. A headend CMTS
communicates with CMs located in subscriber homes. The headend is actually a router
with databases for providing internet services to cable subscribers. When deploying
hybrid fiber-coaxial (HFC) networks, service providers enable high-speed transmission of
data to cable modems located in residential areas. Using optical fiber, the headend is
connected to a node that also connects to coaxial cables, called feeder cables, which
connect multiple subscribers. The node performs optical-to-RF signal conversion.
Wireless Broadband Internet Access
Wireless technology uses RF spectrum to send and receive data. One limitation of wireless
access was the need to be within the local transmission range (typically less than 150
feet/46 m) of a wireless router or a wireless modem that has a wired connection to the
internet. However, developments in broadband wireless technology are increasing the
reach of wireless connections and now include WANs.
The following technologies enable wireless broadband access:
o Municipal Wi-Fi: Many municipal governments, often working with service
providers, are deploying wireless networks. Some of these networks provide high-
speed internet access at no cost or for substantially less than the price of other
broadband services. Other cities reserve their Wi-Fi networks for official use,
providing police, fire fighters, and city workers remote access to the internet and
municipal networks. To connect to a municipal Wi-Fi, a subscriber typically needs a
wireless modem, which provides a stronger radio and directional antenna than
conventional wireless adapters. Most service providers provide the necessary
equipment for free or for a fee, much like they do with DSL or cable modems.
o Cellular/Mobile broadband refers to wireless internet access delivered through
mobile phone towers to computers, mobile phones, and other digital devices.
Devices use a small radio antenna to communicate with a larger antenna at the
phone tower, via radio waves. Organizations leverage cellular networks for a
variety of use cases, such as for metering devices (sensors, vehicle diagnostics),
temporary sites (sports/fair/conference access), and to connect smaller and
remote business sites. Three common terms that are used when discussing
cellular/mobile networks include:
o Mobile Internet or Mobile Data is a general term for the internet services
from a mobile phone or from any device that uses the same technology. A
mobile phone subscription does not necessarily include a mobile data
subscription.
o Long-Term Evolution (LTE): A mobile technology that increased the capacity and speed of the wireless link compared to 2G and 3G technologies. It introduced novelties, such as a different radio interface and core network improvements. It is commonly marketed as part of 4G technology, although strictly speaking it was a predecessor of true 4G.
o 2G/3G/4G/5G acronyms refer to the mobile wireless technologies and
standards, and stand for second, third, fourth, and fifth generations of
mobile wireless technologies. Each new generation is an evolution of the
previous one. Each generation defines its own standards and with each
new generation the access bit rates continue to increase. 4G standards
provided bit rates up to 450 Mbps download and 100 Mbps upload. The 5G
standard should provide cellular data transfer speeds from 100 Mbps to 10
Gbps and beyond. Also, 5G should significantly decrease latency and
improve reliability of cellular broadband. 5G uses new and, so far, rarely used radio frequency bands. 5G will also work in a directional way, meaning the effects of interference from other wireless signals will be minimized.
Low latency is one of 5G's most important attributes, making the
technology highly suitable for critical applications that require rapid
responsiveness.
o Satellite Internet is a high-speed bidirectional internet connection made through
geostationary communications satellites. Internet-by-satellite speed and cost
nowadays compare with DSL broadband offerings. Satellite Internet is typically
used in locations where land-based internet access is not available or for
temporary installations that are mobile. Internet access using satellites is available
worldwide, including for providing internet access to vessels at sea, airplanes in
flight, and vehicles moving on land. To access satellite internet services,
subscribers need a satellite dish, two modems (uplink and downlink), and coaxial
cables between the dish and the modem. The only prerequisite is that the dish can see the sky, that is, that it has a clear line of sight to a geostationary satellite. A company
can create a private WAN using satellite communications and Very Small Aperture
Terminals (VSAT). A VSAT is a type of satellite dish similar to the ones used for
satellite TV from the home and is usually about 1 meter in width. The VSAT dish
sits outside, pointed at a specific satellite, and is cabled to a special router
interface, with the router inside the building.
o Worldwide Interoperability for Microwave Access (WiMAX) provides high-speed
broadband service with wireless access and provides broad coverage similar to a
cell phone network rather than through small Wi-Fi hotspots. WiMAX is a wireless
technology for both fixed and mobile implementations. WiMAX operates in a
similar way to Wi-Fi, but at higher speeds, over greater distances, and for a greater
number of users. It uses a network of WiMAX towers that are similar to cell phone
towers. To access a WiMAX network, subscribers must subscribe to an ISP with a
WiMAX tower within 30 miles of their location. They also need some type of
WiMAX receiver and a special encryption code to get access to the base station.
WiMAX may still be relevant for some areas of the world. However, in most of the
world, WiMAX has largely been replaced by LTE for mobile access and cable or DSL
for wired access.
Optical Fiber in WAN connections
Due to much lower attenuation and interference, optical fiber has large advantages over
existing copper wire, especially in long-distance, high-demand applications. Until recently,
optical fiber infrastructures were complex and expensive to install and operate and they
have been installed mainly within the service provider backhaul and backbone networks,
where they could be utilized to their full capacity. In the late 1990s, the price for installing
fiber dropped and many telecommunication companies invested in building optical fiber
networks with sufficient capacity to take existing traffic and future traffic, which was
forecast to grow exponentially, and to extend the optical network to include the local
loop. At the same time, the technologies used for transmission of optical signals also
evolved. With the development of wavelength division multiplexing (WDM), the capacity
of the single strand of optical fiber increased significantly, and as a consequence, many
fiber optic cable runs were left "unlit"—that is, were not in use. Today, this optic fiber is
offered under the term "dark fiber."
Fiber to the x
Optical fiber network architectures, in which optical fiber reaches the subscriber home,
premises, or building, are referred to as Fiber to the x (FTTx), which includes Fiber to the
Home (FTTH), Fiber to the Premises (FTTP), or Fiber to the Building (FTTB). When optical
cabling reaches a device that serves several customers, with copper wires (twisted pair or
coaxial) completing the connection, the architecture is referred to as Fiber to the
Node/Neighborhood (FTTN), or Fiber to the Curb/Cabinet (FTTC). In FTTN, the final
subscriber gains broadband internet access using cable or some form of DSL.
SONET and SDH
The standards used in service provider optical fiber networks are SONET or SDH.
SONET/SDH were designed specifically as WAN physical layer standards. SONET is used in
the United States and Canada, while SDH is used in the rest of the world. Both standards
are essentially the same and, therefore, are often listed as SONET/SDH. Both define how
to transfer multiple data, voice, and video communications over optical fiber using lasers
or light-emitting diodes (LEDs) over great distances.
Note: The SDH standard was originally defined by the European Telecommunications
Standards Institute (ETSI) and is formalized as International Telecommunication Union
(ITU) standards G.707, G.783, G.784, and G.803. The SONET standard was defined by Telcordia and the American National Standards Institute (ANSI) in standard T1.105, which defines the set of transmission formats and transmission rates in the range above 51.840 Mbps.
SONET/SDH standards are used on the ring network topology. The ring contains
redundant fiber paths and allows traffic to flow in both directions.
SONET and SDH have a hierarchical signal structure. This means that a basic unit of
transmission is defined, which can be multiplexed, or combined, to achieve greater data
rates.
STS = Synchronous Transport Signal
OC = Optical Carrier
STM = Synchronous Transport Module
The figure shows the SONET and SDH signal hierarchy. SONET and SDH both have their
own terminology for the basic unit of transmission. In SONET the basic unit of
transmission is called Synchronous Transport Signal 1 (STS-1) or Optical Carrier 1 (OC-1)
and operates at 51.84 Mbps. The higher-level signals are multiples of STS-1 signals and
operate at multiples of base transmission rate. For example, STS-3 operates at a bit rate of
155.52 Mbps interleaving frames coming from three STS-1 signals. Four STS-3 streams can
be multiplexed into an STS-12 stream and so on.
The STS-1 and OC-1 designations are often used interchangeably, though the OC refers to
the physical signal, that is, the signal in its optical form, while STS-1 specifies the
transmission format.
In SDH, the basic unit of transmission is the Synchronous Transport Module, level 1 (STM-1), which operates at 155.52 Mbps. The STM-1 bit rate is the same as the SONET STS-3 bit rate.
Each rate is an exact multiple of the lower rate, ensuring that the hierarchy is
synchronous.
Dense Wavelength-Division Multiplexing
Along with installing extensive optical fiber networks, the technologies for transmission of
optical signals also advanced. DWDM is a form of wavelength division multiplexing that
combines multiple high-bit-rate optical signals into one optical signal transmitted over one
fiber strand. Each of the input optical signals is assigned a specific light wavelength, or
“color”, and is transmitted using that wavelength. Different signals can be extracted from
the multiplexed signal at the reception in a way that there is no mixing of traffic. As
demands change, more capacity can be added, either by simple equipment upgrades or by
increasing the number of wavelengths on the fiber, without expensive upgrades. The
figure below illustrates this multiplexing concept.
Specifically, DWDM:
o Assigns incoming optical signals to specific wavelengths of light (that is,
frequencies).
o Can multiplex more than 96 different channels of data (that is, wavelengths) onto a
single fiber.
o Supports channels that are each capable of carrying a 200 Gbps multiplexed signal.
o Can amplify these wavelengths to boost the signal strength.
o Is protocol agnostic; it supports various protocols with different bit rates, including Ethernet, Fibre Channel, and the SONET and SDH standards.
DWDM circuits are used in all modern submarine communications cable systems and
other long-haul circuits.
Dark Fiber
The availability of WDM reduced the demand for fiber by increasing the capacity that
could be placed on a single fiber strand. As a result, many fiber optic cable runs were left
"unlit"—that is, were not in use.
Enterprises can use dark fiber to interconnect their remote locations directly. The
enterprises can create a privately operated optical fiber network over dark fiber leased or
purchased from another supplier. Both ends of the link are controlled by the same entity.
Dark fiber networks can operate using WDM to add capacity where needed. The cost to lease dark fiber is usually higher than that of any other WAN option available today. On the
other hand, connecting remote sites using dark fiber offers the greatest flexibility and
control. Therefore, dark fiber is leased when speed and security are of utmost importance.
WAN-Related Protocols
In addition to understanding the various technologies available for broadband internet
access, it is also important to understand the underlying data link layer protocol used by
the ISP.
A data-link protocol that is commonly used by ISPs on links to customers is PPP. PPP
originally emerged as an encapsulation protocol for transporting IP traffic over point-to-
point links, such as links in analog dialup and ISDN access networks. PPP specifies
standards for the assignment and management of IP addresses, encapsulation, network
protocol multiplexing, link configuration, link quality testing, error detection, and option
negotiation for such capabilities as network layer address negotiation and data
compression negotiation. PPP provides router-to-router and host-to-network connections
over both synchronous and asynchronous circuits. An example of an asynchronous
connection is a dialup connection. An example of a synchronous connection is a leased
line.
Additionally, ISPs often use PPP as the data link protocol over broadband DSL connections.
There are several reasons for this. First, PPP supports the ability to automate the assignment of IP addresses to the remote ends of a PPP link. With PPP enabled, ISPs can use PPP to
assign each customer one public IPv4 address. PPP also includes the link-quality
management feature. If too many errors are detected, PPP takes down the link. More
importantly, PPP supports authentication. ISPs often want to use this feature to
authenticate customers because during authentication, ISPs can check accounting records
to determine whether the customer’s bill is paid, prior to letting the customer connect to
the internet. Also, ISPs can use the same authentication model as the ones already in
place for analog and ISDN connections.
Analog dialup and ISDN WAN technologies supported PPP, but are largely deprecated
today. On the other hand, the DSL subscriber base is still significant. When deploying a DSL
network, ISPs often provide their customers with a DSL modem. A DSL modem has one
Ethernet interface to connect to the customer Ethernet segment, and another interface
for DSL line connectivity. While ISPs value PPP because of the authentication, accounting,
and link management features, customers appreciate the ease and availability of the
Ethernet connection. However, Ethernet links do not natively support PPP. PPP over
Ethernet (PPPoE) provides a solution to this situation. As shown in the figure, PPPoE
allows the sending of PPP frames encapsulated inside Ethernet frames.
PPPoE provides an emulated point-to-point link across a shared medium, typically a
broadband aggregation network such as the ones that you can find in DSL service
providers. A very common scenario is to run a PPPoE client on the customer side, which
connects to and obtains its configuration from the PPPoE server at the ISP side.
The figure illustrates a DSL deployment, in which the DSL modem is the only intermediary
device on the customer side, between the PC and the Internet. In such a case, there can be
only one PPPoE client device on the LAN side of the connection, which is the PC. The
modem converts the Ethernet frames to PPP frames by stripping the Ethernet headers.
The modem then transmits these PPP frames on the ISP’s DSL network.
The following figure shows a typical network topology, in which a Cisco IOS router is added at the customer site. The Cisco IOS router connects to the Ethernet LAN on one side and to the DSL modem on the other. The customer's router is connected to the DSL modem using an Ethernet cable. You can run the PPPoE client IOS feature on the Cisco router. This way, you can connect multiple PCs on the Ethernet segment that is connected to the Cisco IOS router.
PPPoE creates a PPP tunnel over an Ethernet connection. This allows PPP frames to be
sent across the Ethernet cable to the service provider from the customer’s router. The
modem converts the Ethernet frames to PPP frames by stripping the Ethernet headers.
The modem then transmits these PPP frames on the service provider’s DSL network. The
PPPoE client initiates a PPPoE session. If the session has a timeout or is disconnected, the
PPPoE client will immediately attempt to reestablish the session.
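As an illustration only, a minimal PPPoE client configuration sketch for the customer router follows. The interface names, dialer and pool numbers, and CHAP credentials are assumptions and depend on the ISP:

interface GigabitEthernet0/1
 description Link to the DSL modem (interface name assumed)
 no ip address
 pppoe enable
 pppoe-client dial-pool-number 1
!
interface Dialer1
 description Logical PPPoE interface toward the ISP
 mtu 1492
 ip address negotiated
 encapsulation ppp
 dialer pool 1
 ppp authentication chap callin
 ppp chap hostname customer1
 ppp chap password ISP-ASSIGNED-PASSWORD

The MTU is lowered to 1492 bytes because the PPPoE header consumes 8 bytes of the standard 1500-byte Ethernet payload.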
Enterprise Internet Connectivity Options
When connecting an enterprise network to an ISP, redundancy is a serious concern. There
are different aspects that can be addressed to achieve redundant connectivity.
Using redundant links protects your network against link failure between your router and
the ISP router. Deployment of redundant equipment protects your network against device
failure. If one router fails, internet connectivity is still established through the redundant
router. You also need redundant links to connect all devices.
If you are hosting important servers in your network, it is best to have two redundant
internet providers. If there is a failure in one ISP network, all traffic is automatically
rerouted through the second ISP.
There are multiple strategies for connecting your network to an ISP. The topology
depends on the needs of the company.
The figure illustrates a site-to-site VPN and a remote-access VPN. These two basic VPN
deployment models typically use either IPsec or SSL technologies to secure the
communications.
VPNs provide these benefits:
o Cost savings: VPNs enable organizations to use a cost-effective, third-party
internet transport to connect remote offices and remote users to the main
corporate site. The use of VPNs therefore eliminates expensive, dedicated WAN
links. Furthermore, with the advent of cost-effective, high-bandwidth technologies
such as DSL, organizations can use VPNs to reduce their connectivity costs while
simultaneously increasing remote connection bandwidth.
o Scalability: VPNs enable corporations to use the internet infrastructure, which
makes it easy to add new users. Therefore, corporations can expand capacity
without adding significant infrastructure. For instance, a corporation with an
existing VPN between a branch office and the headquarters can securely connect
new offices by simply making a few changes to the VPN configuration and ensuring
that the new office has an internet connection. Scalability is a major benefit of
VPNs.
o Compatibility with broadband technology: VPNs allow mobile workers,
telecommuters, and people who want to extend their work day to take advantage
of high-speed, broadband connectivity, such as DSL and cable, to gain access to
their corporate network. This ability provides workers with significant flexibility
and efficiency. Furthermore, high-speed, broadband connections provide a cost-
effective solution for connecting remote offices.
o Security: Cryptographic VPNs can provide the highest level of security by using
advanced encryption and authentication protocols that protect data from
unauthorized access. The two available options are IPsec and SSL.
o Generic Routing Encapsulation (GRE) over IPsec: Although IPsec provides a secure method for tunneling data across an IP network, it has limitations. IPsec does not support IP broadcast or IP multicast; for example, it cannot carry messages from protocols that rely on these features, such as routing protocols. IPsec also does not support the use of non-IP protocols. GRE is a tunneling protocol developed by Cisco that can encapsulate a wide variety of network layer protocol packet types, such as IP broadcast or IP multicast, and non-IP protocols, inside IP tunnels, but it does not support encryption. Using GRE tunnels with IPsec gives you the ability to securely run routing protocols, IP multicast, or multiprotocol traffic across the network between the remote locations (a minimal configuration sketch is shown after this list).
o With a generic hub-and-spoke topology, you can typically implement static tunnels
(typically GRE over IPsec) between the central hub and remote spokes. When you
want to add a new spoke to the network, you need to configure it on the hub
router. Also, the traffic between spokes has to traverse the hub, where it must exit
one tunnel and enter another. Static tunnels may be an appropriate solution for
small networks, but this solution becomes unacceptable as the number of spokes
grows larger. Cisco Dynamic Multipoint Virtual Private Network (DMVPN) is a
Cisco proprietary software solution that simplifies the device configuration when
there is a need for many VPN connections. With Cisco DMVPN, a hub-and-spoke
topology is first implemented. The configuration of this network is facilitated by a
multipoint GRE tunnel interface, established on the hub. Multipoint in the name
signifies that a single GRE interface can support multiple IPsec tunnels. The hub is a
permanent tunnel source. The size of the configuration on the hub router remains
constant even if you add more spoke routers to the network. The spokes are
configured to establish a VPN connection with the hub. After building the hub-and-
spoke VPNs, the spokes can obtain information about other spokes from the hub
and establish direct spoke-to-spoke tunnels.
o IPsec virtual tunnel interface (VTI): IPsec VTI is a feature that associates an IPsec tunnel endpoint with a virtual interface. Traffic is encrypted or decrypted when it is forwarded from or to the tunnel interface and is managed by the IP routing table. Using IP routing to forward traffic to the tunnel interface simplifies the IPsec VPN configuration compared to the conventional process, and allows sending and receiving both IP unicast and multicast encrypted traffic on any physical interface, such as in the case of multiple paths. The IPsec tunnel protects routing protocol and multicast traffic, like GRE over IPsec, but without the need to configure GRE. Keep in mind that all traffic through the tunnel is encrypted and that, like standard IPsec, a VTI supports only one protocol (IPv4 or IPv6) per tunnel.
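As referenced in the GRE over IPsec item above, the following is a minimal single-endpoint sketch of a GRE tunnel protected by IPsec. The addresses, pre-shared key, and names are illustrative assumptions; the remote router would mirror this configuration:

crypto isakmp policy 10
 encryption aes 256
 authentication pre-share
 group 14
crypto isakmp key EXAMPLE-KEY address 198.51.100.2
!
crypto ipsec transform-set TSET esp-aes 256 esp-sha256-hmac
 mode transport
!
crypto ipsec profile GRE-PROTECTION
 set transform-set TSET
!
interface Tunnel0
 ip address 10.0.0.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 198.51.100.2
 tunnel protection ipsec profile GRE-PROTECTION

Because the GRE tunnel carries multicast, a routing protocol can then be enabled across the Tunnel0 interface just as on any other link.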
A Layer 3 MPLS VPN provides a Layer 3 service across the backbone. A separate IP subnet
is used on each customer site. When you deploy a routing protocol over this VPN, the
service provider needs to participate in the exchange of routes. Neighbor adjacency is
established between your CE router and the PE router (which the service provider owns).
Within the service provider network, there are many P routers (service provider core
routers). The job of P routers is to provide connectivity between PE routers. What this
situation means is that the service provider becomes the backbone of your (customer)
network.
Layer 3 VPN is appropriate for customers who prefer to outsource their routing to a
service provider. The service provider maintains and manages routing for the customer
sites. If you look from the customer perspective, with Layer 3 MPLS VPN, you can imagine
the whole service provider network as one big virtual router.
When comparing an IPv4 address from the packet header with the reference address from
the ACL statement, a device is looking for a match only for those bits of the reference IPv4
address that are masked by 0s in the wildcard mask.
The example demonstrates how the wildcard mask is interpreted. The reference IPv4
address is 172.16.100.1. For clarity, the analysis is given only for the third octet. The
second column lists some possible values for a wildcard mask octet, from 0000 0000 to
1111 1111. The third column contains the reference octet value 100 decimal, presented in
its binary form. Where the wildcard mask has a 0, the binary digit is colored red to indicate
that the digit must match with the same digit in the analyzed packet. Where the wildcard
mask has a 1, the binary digit is green, to indicate that the digit can be of any value. For
each wildcard mask - reference value combination, the fourth column gives the resulting
matching pattern. The "×" character indicates a bit of whatever value. The final column
gives the decimal values that would match the criteria specified with the reference value
and the wildcard mask.
A wildcard mask is sometimes referred to as an inverse mask. In a subnet mask, binary 1 is
equal to a match and binary 0 is not a match. The reverse is true for wildcard masks. A 0 in
a bit position of the wildcard mask indicates that the corresponding bit in the address
must be matched. A 1 in a bit position of the wildcard mask indicates that the
corresponding bit in the address is not interesting and can be ignored. There is another
significant difference between the subnet mask and the wildcard mask. After the first zero
in a subnet mask, all subsequent bits are 0s. There is no such regularity in wildcard
masks—after the first one in a wildcard mask, subsequent bits can be either 0s or 1s.
The figure illustrates several examples of matching rules. The first two examples have different reference addresses (172.16.100.0 and 172.16.100.1), but result in the same
range of addresses because the relevant portions of the reference addresses (the first 24
bits, as indicated by the wildcard mask) are equivalent in both addresses. The third
example shows a wildcard mask that does not have only continuous sequences of 0s and
1s. Note that the third octet of the 0.0.254.255 wildcard mask breaks the array of 1s. This
wildcard mask requires that the last bit of the third octet must match the same bit in the
reference address, which is 1. Since only odd values have the last bit 1, the matching
criteria requires an IPv4 address to be from one of the odd numbered /24 networks, such
as 192.168.1.0, 192.168.3.0, 192.168.5.0, and so on.
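As an illustration of such a discontiguous wildcard mask, a hypothetical entry (the list number and addresses are assumed) that matches sources in any odd-numbered /24 network under 192.168.0.0/16 would be:

RouterX(config)# access-list 5 permit 192.168.1.0 0.0.254.255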
With wildcard masks you can define criteria for matching all bits of an IPv4 address, or you
can define rules that require only parts of the reference address to match. Partial match
requirement results in a range of addresses matching the criteria, such as IPv4 addresses
of many subnets. By carefully setting wildcard masks, with one ACE you can select a single
IPv4 address or multiple IPv4 addresses.
The figure illustrates two uses of wildcard masks: one to match only one subnet and
another to match a range of subnets. Assume that you have subnetted address
172.16.0.0, and you want to create a wildcard mask that matches packets from subnets
172.16.16.0/24 through 172.16.31.0/24. There are 16 different subnets in that range. All
16 subnets have the first two octets identical. The third octet has values from 16 to 31.
To create rules that would match all 16 subnets, one way would be to create 16 different
ACEs, one for each matching subnet. Each of the 16 ACE statements would include a
criterion composed of the subnet ID and the wildcard mask 0.0.0.255, as illustrated in the
first table in the example. However, that would unnecessarily create an ACL with 16 or
more statements, when the criterion could be met with a single statement. Minimizing
ACL statements optimizes ACL processing speed.
To minimize the number of statements in an access list, you should try to find a wildcard
mask that matches a wider range of subnets. To do so, look into the binary
representations of the desired address range and identify which bits are identical in all of
them. In the example for subnets 172.16.16.0/24 through 172.16.31.0/24, it is easy to see
that the first two octets are identical in all addresses. Therefore, for a packet to match the
range, it too must have the same first two octets equal to 172.16. The wildcard mask that
requires matching the first two octets has 0s at the first 16 positions.
The third octet can have any value from 16 to 31. If you look closer into binary
representations of numbers 16 to 31, you will notice that all of them start with the same 4
bits 0001. The last 4 bits differ, from 16 having all zeros 0000, to 31 having all ones, 1111.
Therefore, any packet that has an address with the third octet starting with 0001, belongs
to the desired range. You can now determine the wildcard mask value. Four "must match"
bits followed by 4 "whatever" bits translate to a wildcard mask octet 00001111, or 15 in
decimal representation.
The last octet can have any value because you wish to select all packets from desired
subnets. Therefore, the last octet of the wildcard mask is all 1s, or 255 in decimal
representation. The entire wildcard mask would be 0.0.15.255.
Now that you have the wildcard mask determined, you need to determine the reference
IPv4 address and you will have the range matching rule. As the reference IPv4 address,
you can select any address from the range you wish to match. Examples of matching rules
would be 172.16.16.1 0.0.15.255, 172.16.17.1 0.0.15.255, or 172.16.30.200 0.0.15.255. As
long as you keep the wildcard mask unchanged, all these matching rules result in the same
range of matched IPv4 addresses. However, if you configure an ACL with any of these
matching rules, it will be changed to be an entry that has all nonmatching bits in the
reference IPv4 address set to binary 0, so you may see the configured reference IPv4
address different than you typed in. In this example, the matching rule would be changed
to 172.16.16.0 0.0.15.255.
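As a sketch (the list number 10 is assumed), configuring the entry with one reference address from the range and then displaying the list shows the normalized form with the nonmatching bits set to 0; the output would look roughly like this:

RouterX(config)# access-list 10 permit 172.16.30.200 0.0.15.255
RouterX(config)# end
RouterX# show access-lists 10
Standard IP access list 10
    10 permit 172.16.16.0, wildcard bits 0.0.15.255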
Note that wildcard mask 0.0.15.255 matches all possible subnets that have the same 4 bits
in the third octet. In the example, there are exactly 16 values that have 0001 as the first 4
bits. You were looking for one statement to match exactly 16 subnets—no more or less.
You will not always be able to find a perfect fit with just one ACL statement. For instance,
if you had to match subnets 172.16.16.0 to 172.16.27.0 (only 12 subnets), the same
matching rule 172.16.16.0 0.0.15.255 would include the desired 12 subnets, but it would
be too wide because it would also include subnets 172.16.28.0 to 172.16.31.0.
To match the desired range of addresses exactly, sometimes you will have to use more
than one ACL statement. For example, to match a range of addresses from 172.16.16.0/24
to 172.16.32.0/24, you should use two entries with the following matching rules:
172.16.16.0 0.0.15.255 and 172.16.32.0 0.0.0.255.
Note: Unlike IPv4 ACLs, IPv6 ACLs do not use wildcard masks. Instead, the prefix-length is
used to indicate how much of an IPv6 source or destination address should be matched.
IPv6 ACLs are beyond the scope of this course.
22.5 Explaining the Basics of ACL
Wildcard Mask Abbreviations
Working with decimal representations of binary wildcard mask bits can be tedious. To
make configuration easier and to improve readability when viewing an access list, the
most commonly used wildcard masks are represented by the keywords host and any.
The host keyword is equal to the wildcard mask 0.0.0.0. The host keyword and all-zeros
mask require all bits of the IPv4 address to match the reference IPv4 address.
The any keyword is equal to the wildcard mask 255.255.255.255. The any keyword and all-
ones mask do not require any of the IPv4 address bits to match the reference IPv4
address.
When using the host keyword to specify a matching rule, use the keyword before the
reference IPv4 address.
When using the any keyword, the keyword alone is enough; you do not specify the reference IPv4 address.
In the example, you can see how the keywords host and any are used. Instead of typing
172.30.16.5 0.0.0.0, you can type host 172.30.16.5. Instead of typing 172.30.16.5
255.255.255.255, you can type the any keyword. Note that 0.0.0.0 255.255.255.255 is
equivalent to any; thus you could type any reference IPv4 address with the wildcard mask
255.255.255.255, and it would indeed match any address.
Note how using keywords shortens the matching rule, making it easier to read and interpret the ACL statement.
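For instance, assuming a hypothetical standard list 20, the first two entries below are equivalent to each other, as are the last two:

RouterX(config)# access-list 20 permit 172.30.16.5 0.0.0.0
RouterX(config)# access-list 20 permit host 172.30.16.5
RouterX(config)# access-list 20 permit 172.30.16.5 255.255.255.255
RouterX(config)# access-list 20 permit any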
If a wildcard mask is omitted in the matching criteria in a standard IPv4 ACL, a wildcard
mask of 0.0.0.0 is assumed.
22.6 Explaining the Basics of ACL
Types of Basic ACLs
Cisco routers support the following two basic types of IP ACLs:
o Standard IP ACLs specify matching rules for source addresses of packets only. The
matching rules are not concerned with the destination addresses of packets nor
with the protocols, whose data is carried in those packets. Matching rules specify
ranges of networks, specific networks, or single IP addresses. Standard IP ACLs
filter IP packets based on the packet source address only. They filter traffic based on information at the IP layer, which means that they do not distinguish between TCP, UDP, or HTTPS traffic, for example.
o Extended IP ACLs examine both the source and destination IP addresses. They can
also check for specific protocols, port numbers, and other parameters, which allow
administrators more flexibility and control.
The figure illustrates and compares standard ACL and extended ACL filtering for IPv4
traffic.
A standard ACL can only specify source IP addresses and source networks as matching
criteria, so it is not possible to filter based on a specific destination. For more precise
traffic filtering, you should use extended ACLs.
Extended ACLs provide a greater range of control. In addition to verifying packet source
addresses, extended ACLs also may check destination addresses, protocols, and source
and destination port numbers, as shown in the figure. They provide more criteria on which
to base the ACL. For example, an extended ACL can simultaneously allow email traffic
from a network to a specific destination and deny file transfers and web browsing for a
specific host. The ability to filter on a protocol and port number allows you to build very
specific extended ACLs. Using the appropriate port number or well-known protocol
names, you can permit or deny traffic from specific applications.
The following are the two general methods that you can use to create ACLs:
o Numbered ACLs use a number for identification of the specific access list. Each
type of ACL, standard or extended, is limited to a preassigned range of numbers.
For example, specifying an ACL number from 1 to 99 or 1300 to 1999 instructs the
router to accept numbered standard IPv4 ACL statements. Specifying an ACL
number from 100 to 199 or 2000 to 2699 instructs the router to accept numbered
extended IPv4 ACL statements. Based on the ACL number, it is easy to determine the type of ACL that you are using. Numbering ACLs is an effective method on smaller networks with more homogeneously defined traffic.
o Named ACLs allow you to identify ACLs with a descriptive alphanumeric string (a name) instead of a numeric identifier. Naming can be used for both IP standard and extended ACLs.
Cisco IOS Software provides a specific configuration mode for named access lists, which is
called Named Access List configuration mode. It is recognized by HOSTNAME(config-std-
nacl)# CLI prompt. The Named Access List configuration mode provides more flexibility in
configuring and modifying ACL entries.
For IPv4 and IPv6 packet filtering, you have to create separate ACLs. For each of the
protocols (IPv4 or IPv6), you can create multiple ACLs that are differentiated by their
numbers or names for IPv4 and by names only for IPv6. However, you are restricted in
how many ACLs you can apply simultaneously, depending on the purpose of ACL. For
instance, to filter traffic on an interface, you can apply only one ACL per protocol and
traffic direction.
The figure shows the anatomy of a numbered standard ACL statement. A numbered
standard ACL statement consists of the access list identification (a number) followed by a
keyword indicating the action to be taken, and the matching criteria. Since standard IPv4
access lists allow matching only on source IPv4 address, the matching criteria always
refers to the source IPv4 address.
Each ACL statement includes a keyword indicating an action that a device must take for
the packet that matches the criteria. The action is either a permit action, allowing a
matching packet to be processed further (forwarded, analyzed, …) or a deny action, which
discards the matching packets when using ACL for packet filtering. To specify an action
that a device takes for the packet that matches the criteria, use the keywords permit or
deny.
You specify matching criteria either by using a reference IPv4 address and a wildcard mask, or by using an abbreviated keyword. If the wildcard mask is not specified, the value 0.0.0.0 is assumed.
An example of a standard ACL configuration on RouterX is:
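(A minimal sketch consistent with the description below; the list number 1 is assumed.)

RouterX(config)# access-list 1 deny host 172.16.3.3
RouterX(config)# access-list 1 permit 172.16.0.0 0.0.255.255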
If this ACL is used as a traffic filter, it would discard traffic from host 172.16.3.3 and allow
traffic from other devices in the network 172.16.0.0. Note that an access list must have at
least one permit statement, otherwise it blocks all traffic.
The ip access-list standard command is used for named standard IPv4 access lists. Note
the ip keyword added at the beginning of the command.
The name you choose for the access list is an arbitrary descriptive alphanumeric string. Because the name is arbitrary, the CLI cannot deduce the type of list from it, so you must specify the type of list that you are naming, which is why the keyword standard must be used in the ip access-list command when configuring a named standard IPv4 ACL. Using meaningful descriptive names to identify access lists makes it easier to indicate their purpose. Capitalizing ACL names makes them stand out when viewing the device configuration and helps distinguish ACL names from actual device CLI commands.
Note: When using named ACL configuration, you can specify a number as a name. The
numbers have the same meaning as in numbered ACL configuration. You must use the
correct number for a standard ACL. For example, you cannot configure a standard ACL
with number 125, because this is not a valid number for a standard ACL.
Using the ip access-list standard command takes you to the Named Access List
configuration mode, which is indicated by the Router(config-std-nacl)# prompt. Note the
abbreviations std and nacl in the prompt, that stand for standard and named ACL,
respectively.
Once you are in the Named Access List configuration mode, you enter the ACL statements.
Each statement has the same elements as its numbered counterpart—it contains the
action keyword followed by matching criteria.
The configuration of the same access list used in the previous example using named
configuration method is the following:
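(A sketch using an assumed name; the statements mirror the numbered list shown earlier.)

RouterX(config)# ip access-list standard TROUBLEMAKER
RouterX(config-std-nacl)# deny host 172.16.3.3
RouterX(config-std-nacl)# permit 172.16.0.0 0.0.255.255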
The order of the ACL statements is important. Recall that ACL processing stops after the
first match is encountered. Only the matching statement is executed. Other statements
are not evaluated. More specific statements, such as those permitting and denying
particular hosts, should be placed before the statements matching on a wider range of
addresses.
An implicit deny any statement is added to the end of each standard IPv4 access list,
denying all other packets that did not match ACL statements.
The figure shows the anatomy of a numbered extended ACL statement. An extended ACL
statement has more elements than a standard ACL statement.
In addition to the ACL number and action keyword, the extended ACL statement contains:
o A keyword indicating a protocol suite, such as ip, icmp, tcp, or udp. Keyword ip
matches all protocols.
o Matching criteria for the source IPv4 address and optionally port
o Matching criteria for the destination IPv4 address and optionally port
The syntax for specifying matching criteria allows the following:
o Specifying IPv4 address using syntax source [source-wildcard] | host {address |
name} | any
o Option 1: Reference IPv4 address and a wildcard mask
o Option 2: Keyword host and a reference IPv4 address or host name
o Option 3: Keyword any
o Optionally specifying either the source port or the destination port, or both ports,
using the syntax operator port
o Port matching criteria uses operators to specify a single port number or
range of port numbers
o Specify a single port using syntax operand eq (equal), lt (less than), gt (greater
than), or neq (not equal), followed by port number (for instance 80), or the
protocol name (for well-known protocols, such as www).
o Specify a range of ports using syntax: operand range (inclusive range) with the first
and the last port number of the range
In an extended access list, you must specify matching criteria both for the source and for
the destination header parameters. The previous figure shows a sample configuration
which includes a statement permitting only TCP connections from client port numbers in
the range of 56000 to 60000 on the host 172.16.3.3 to establish connection to port 80 on
host 203.0.113.30. Note that matching criteria is first fully specified for the source
information and only then matching criteria for the destination is given. If you are
specifying both IPv4 address and a port, specify both for the source part before you
specify the destination criteria.
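A hedged reconstruction of that sample statement (the list number 100 is an assumption) would be:

RouterX(config)# access-list 100 permit tcp host 172.16.3.3 range 56000 60000 host 203.0.113.30 eq 80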
An example of a numbered extended ACL configuration on RouterX, denying remote
access via Telnet or SSH from the devices in the 172.16.3.0/24 subnet and permitting
other traffic, is below:
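(A sketch consistent with the description; the list number 101 is assumed.)

RouterX(config)# access-list 101 deny tcp 172.16.3.0 0.0.0.255 any eq telnet
RouterX(config)# access-list 101 deny tcp 172.16.3.0 0.0.0.255 any eq 22
RouterX(config)# access-list 101 permit ip any any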
It is very important to note that in this example the port numbers are destination port
numbers because they come after the destination address (which in this case is
represented by the keyword any). Port numbers that appear after the source address are
source port numbers.
The ip access-list extended command is used for the named extended IPv4 access lists.
Note the ip keyword added at the beginning of the command. You must specify the
keyword extended.
Using the ip access-list extended command takes you to the Named Access List
configuration mode, which is indicated by the Router(config-ext-nacl)# prompt. Note the
abbreviation ext in the prompt, standing for extended.
Named configuration mode allows you to specify numbers as names, as long as you use
the numbers assigned for the type of the access list you are configuring.
An example of a named configuration of the same extended ACL from the previous
example is:
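(A sketch with an assumed name.)

RouterX(config)# ip access-list extended NO-REMOTE-ACCESS
RouterX(config-ext-nacl)# deny tcp 172.16.3.0 0.0.0.255 any eq 23
RouterX(config-ext-nacl)# deny tcp 172.16.3.0 0.0.0.255 any eq 22
RouterX(config-ext-nacl)# permit ip any any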
Notice that in the named configuration of the same extended ACL the port number 23 was
used instead of keyword telnet.
The order of the ACL statements is important also for the extended ACLs. ACL processing is sequential: it starts with the first ACL statement and continues top down until the first match is encountered. The matching ACL statement
is executed and the processing stops. Remaining statements are not evaluated. Therefore,
more specific ACL statements, such as those permitting and denying particular hosts,
should be placed before the statements matching on wider range of addresses.
An implicit deny any any statement is added to the end of each extended IPv4 access list,
denying all traffic that did not match ACL statements.
To display configured access lists, use the following commands:
• The show access-lists command displays the contents of all configured ACLs. The output can be narrowed to a specific list by providing its number or its name.
• The show ip access-lists command displays the contents of all IPv4 access lists. The output can be narrowed by specifying a specific ACL number or name.
Both commands display ACL statements with their sequence numbers.
The verification command output for the access lists from the previous examples would
be:
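An approximation of that output, assuming the hypothetical ACL number and name from the earlier sketches, could look like this:

RouterX# show access-lists
Extended IP access list 101
    10 deny tcp 172.16.3.0 0.0.0.255 any eq telnet
    20 deny tcp 172.16.3.0 0.0.0.255 any eq 22
    30 permit ip any any
Extended IP access list DENY-REMOTE-ACCESS
    10 deny tcp 172.16.3.0 0.0.0.255 any eq 23
    20 deny tcp 172.16.3.0 0.0.0.255 any eq 22
    30 permit ip any any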
Modifying an access list differs between the numbered and named configuration methods. Adding and removing statements is more convenient with the named configuration method.
When modifying access lists, you can delete the entire lists or add/remove specific entries.
To delete an IPv4 access list, you can use one of the following commands.
• no access-list access-list-number
• no ip access-list standard|extended access-list-name
Both commands require you to specify the number or the name of the access list you wish
to delete.
Using the numbered configuration method, you cannot add or remove individual statements directly. Instead, you have to copy the entire access list, modify it in a text editor, delete it from the configuration, and then enter the modified ACL statements.
Because the show access-lists command displays ACL statements in a format different from the syntax used to configure them, it is more convenient to use the show running-config command when you want to edit the statements in an editor. In the running configuration file, the ACL statements are stored with the proper syntax, so they can be easily reused. To filter out only numbered access lists from the show running-config output, use the include and access-list keywords.
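A sketch of this filtering, again assuming the hypothetical numbered ACL from the earlier example:

RouterX# show running-config | include access-list
access-list 101 deny tcp 172.16.3.0 0.0.0.255 any eq telnet
access-list 101 deny tcp 172.16.3.0 0.0.0.255 any eq 22
access-list 101 permit ip any any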
Note: You cannot delete individual entries using numbered configuration method. If you
mistakenly issue a no version of the ACL statement you used to configure an entry, for
instance no access-list 15 permit host 192.168.1.1, you will delete the entire access-list
15.
With named configuration method, modifying an ACL is significantly easier. Before you
implement the modification, you need to know the sequence number of the statement
you wish to add or remove. Modifications are implemented in the Named Access List
configuration mode.
To add an entry from within Named Access List configuration mode, use one of the
following commands, depending on whether you are modifying a standard or an extended
access list:
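As a sketch of the standard IOS syntax, the entries follow this general pattern, where the sequence number places the new entry at the desired position within the list:

Router(config-std-nacl)# sequence-number {permit | deny} source [source-wildcard]
Router(config-ext-nacl)# sequence-number {permit | deny} protocol source source-wildcard destination destination-wildcard [operator port]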
Specific statements cannot be overwritten using the same sequence number as an existing
statement. The current statement must be deleted first, and then the new one can be
added.
To delete an entry, go to the Named Access List configuration mode. When deleting an
entry for the numbered access list, use the ACL number as the name of the list you wish to
modify. Again, you need to know the sequence number of the statement you wish to
delete. To delete a statement, use the command no sequence-number.
Note that a reload will resequence numbers in the ACL so that all numbers are multiples of 10. To initiate resequencing on your own and avoid reloading, use the ip access-list resequence access-list-name starting-sequence-number increment global configuration command.
After modifying an access list, verify the changes using the show access-lists command.
Several factors influence where an ACL should be placed:
• The extent of the network administrator’s control: Placement of the ACL can depend on whether the network administrator has control of both the source and destination networks.
• The bandwidth of the networks involved: Filtering unwanted traffic at the source
prevents transmission of the traffic before it consumes bandwidth on the path to a
destination—especially important in networks that have low bandwidth.
• Ease of configuration: If a network administrator wants to deny traffic coming
from several networks, one option is to use a single standard ACL on the router
closest to the destination. The disadvantage is that traffic from these networks will
use bandwidth unnecessarily. An extended ACL could be used on each router
where the traffic originated, which will save bandwidth by filtering the traffic at
the source, but requires the knowledge to create extended ACLs.
Traffic filtering is a common application of ACLs. Traffic filtering controls access to a
network by analyzing the incoming and outgoing packets and forwarding them or
discarding them based on ACL criteria. Traffic filtering can occur at Layer 3 or Layer 4.
Standard ACLs only filter at Layer 3. Extended ACLs can filter at both Layer 3 and Layer 4.
When you decide which device and which interface are the most appropriate for the placement of an access list for traffic filtering, you also need to decide in which traffic direction the ACL should be applied.
There are two possible traffic directions: inbound and outbound.
Traffic directions are determined from the device’s point of view. The figure illustrates the
concept. Imagine that you are standing inside the device. Traffic arriving on an interface
that enters the device to be processed is called inbound, ingress, or incoming traffic.
Traffic leaving the device out through an interface is called outbound, egress, or exiting
traffic.
Note: ACLs for traffic filtering do not act on packets that originate from the router itself.
The figure illustrates how packet processing occurs when ACLs are applied:
• Inbound ACLs process incoming packets as they enter the interface, before they
are routed to the outbound interface. An inbound ACL is efficient because it saves
the overhead of routing lookups if the packet is discarded. If the packet is
permitted by the ACL, it is then processed for routing.
• Outbound ACLs process packets that are routed to the outbound interface. They
are processed before they exit the interfaces.
After you have configured an ACL, you link the ACL to an interface using the ip access-
group command. The following figure describes the command syntax and shows examples
of applying standard and extended access lists on an interface.
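As a sketch, the general syntax and a hypothetical application of a standard ACL look like this:

Router(config-if)# ip access-group {access-list-number | access-list-name} {in | out}

Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip access-group 15 out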
To remove an ACL from an interface, first enter the no ip access-group command on the
interface, then enter the global no access-list command to remove the entire ACL if
needed.
You can configure one ACL per protocol, per direction, per interface:
• One ACL per protocol: To control traffic flow on an interface, an ACL must be
defined for each protocol enabled on the interface. For instance, if you wish to
filter both IPv4 and IPv6 traffic on the interface in one direction, you have to create
and apply two access lists, one for each protocol.
• One ACL per direction: ACLs control traffic in one direction at a time. Two separate ACLs may be created to control both inbound and outbound traffic on an interface, or you can use the same ACL and apply it in both directions, if it makes sense to do so.
• One ACL per interface: An ACL applied to one interface does not affect traffic on other interfaces; each interface that requires filtering needs its own ACL assignment.
The figure shows a scenario in which an ACL is used to deny internet access only to the host with IPv4 address 10.1.1.101. Traffic from other hosts within 10.1.1.0/24 is allowed.
Two ACL implementations are represented in the figure. The first uses standard ACL 15. A standard ACL is applied at the point closest to the destination. That point is the Gi0/1 interface on the Branch router, so filtering should happen for traffic exiting the Gi0/1 interface, in the outbound direction. Note that PC2 traffic would reach the router, be processed to determine the outbound interface (that is, be routed), and be discarded only at the very exit. The processing power and bandwidth of the router are therefore used for both permitted and discarded traffic.
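One possible configuration for this placement, using the addresses and interface from the scenario, could be:

Branch(config)# access-list 15 deny host 10.1.1.101
Branch(config)# access-list 15 permit 10.1.1.0 0.0.0.255
Branch(config)# interface GigabitEthernet0/1
Branch(config-if)# ip access-group 15 out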
Note that another possible placement for standard ACL 15 would be the
GigabitEthernet0/0 interface. For the traffic to be filtered, the direction would have to be
inbound. However, this solution would not only prevent host PC2 from accessing the
internet but would also deny all communication between PC2 and the router.
The second implementation uses an extended ACL NOINTERNET_PC2. An extended access
list should be placed as close to the source of the denied traffic as possible. In the
example, the denied traffic is the PC2 traffic. The closest point to PC2 is the Gi0/0
interface on the Branch router. It should filter traffic incoming to the router; therefore,
the ACL should be applied in the inbound direction. Traffic from PC2 will be discarded
before it is routed, which saves the processing power and bandwidth of the router.
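A sketch of the extended ACL implementation from this scenario could be:

Branch(config)# ip access-list extended NOINTERNET_PC2
Branch(config-ext-nacl)# deny ip host 10.1.1.101 any
Branch(config-ext-nacl)# permit ip any any
Branch(config-ext-nacl)# exit
Branch(config)# interface GigabitEthernet0/0
Branch(config-if)# ip access-group NOINTERNET_PC2 in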
In real-life networks, you can encounter complex security policies and ACLs. As a network engineer, you need a solid understanding of how ACL statements affect traffic so that you can place ACLs where they have the greatest impact on efficiency.
23.1 Enabling Internet Connectivity
Introduction
One of the most important tasks when designing a network topology is planning for
enterprise internet connectivity. Modern corporate networks are connected to the global
internet and use it for some data transport needs. Corporations provide many services to
customers and business partners via the internet. When planning for internet
connectivity, it is also important to understand the process of assigning IP addresses. An
ISP can provide internet connectivity by providing statically assigned, public IP addresses,
or dynamically allocate them with DHCP. Depending on the option that is used, the
internet-facing interfaces need to be configured accordingly.
The IPv4 address space is not large enough to uniquely identify all network-capable
devices that need internet connectivity. As a response to this limitation, private addresses
have been reserved. However, since private addresses are not routed by internet routers,
there needs to be a mechanism in place to translate private addresses to public addresses,
which are routed by internet routers. The mechanism that is used to perform this
translation is called Network Address Translation (NAT).
NAT is usually implemented on border devices such as firewalls or routers. This
implementation allows devices within an organization to have private addresses and to
only translate traffic when it needs to be sent to the internet.
In NAT terminology, addresses are categorized into two types. All classifications described
apply to the border device that performs translations.
The first classification divides addresses based on where they exist in the network:
• Inside addresses are addresses that belong to the network in question, such as
addresses of devices internal to the network. The inside network is the set of
networks that are subject to translation.
• Outside addresses are all addresses that do not belong to the network in question.
The outside network refers to all other addresses.
The second classification divides addresses based on where they are "viewed":
• Local addresses are address values that are "seen" by a local device or, in other
words, address values that are intended to be used by the devices in the local
(inside) network.
• Global addresses are address values as seen globally or, in other words, address values meant to be used by the devices in external (outside) networks. You can also
think of a global address as the address seen or used by devices in the internet,
when they refer to an inside device. However, remember that NAT can also
translate between private only address realms. Devices in the internet always see
public addresses.
The figure illustrates a sample packet that is being transmitted from an inside network
through a border device to an outside network and back (from PC1 to SRV1 on the
internet and back). For IPv4 header fields, their NAT names are indicated. Each translation
is based on the mapping table entries that are created at the border device. Note that
both source and destination IPv4 addresses can be subject to NAT translation. Usually,
only source (inside) addresses are translated while destination (outside) addresses remain unchanged. When only the inside address is translated, NAT is called inside NAT,
and when only the outside address is translated, NAT is called outside NAT.
• Static NAT maps a local IPv4 address to a global IPv4 address (one to one). Port
numbers are not translated. Static NAT is particularly useful when a device must be
accessible from an external network, such as when a device must have a static,
unchanging address accessible from the internet. Static NAT is usually used when a
company has a server that must be always reachable, from both inside and outside
networks. Both server addresses, local and global, are static. So the translation is
also always static. The server's local IPv4 address will always be translated to the
known global IPv4 address. This fact also implies that one global address cannot be
assigned to any other device. It is an exclusive translation for one local address.
Static translations last forever.
• Dynamic NAT maps local IPv4 addresses to a pool of global IPv4 addresses. When
an inside device accesses an outside network, it is assigned a global address that is
available at the moment of translation. The assignment follows a first-come, first-served algorithm and there are no fixed mappings; therefore, the translation is
dynamic. The number of translations is limited by the size of the pool of global
addresses. When using dynamic NAT, make sure that enough global addresses are
available to satisfy the needed number of user sessions. Dynamic translations
usually have a limited duration. After this time elapses, the mapping is no longer
valid and the global IPv4 address is made available for new translations. An
example of when dynamic NAT is used is a merger of two companies that are using
the same private address space. Dynamic NAT effectively readdresses packets from
one network and is an alternative to complete readdressing of one network.
• Network Address and Port Translation (NAPT) or Port Address Translation (PAT)
maps multiple local IPv4 addresses to just a single global IPv4 address (many to
one). This process is possible because the source port number is also translated.
Therefore, when two local devices communicate to an external network, packets
from the first device will get the global IPv4 address and a port number X, and the
packets from the second device will get the same global IPv4 address but a
different port number Y. PAT is also known as NAT overloading, because you
overload one global address with ports until you exhaust available port numbers.
The mappings in the case of PAT have the format of local_IP:local_port –
global_IP:global_port. PAT enables multiple local devices to access the internet,
even when the device bordering the ISP has only one public IPv4 address assigned.
PAT is the most common type of network address translation.
The inside and outside definition is important for NAT operation. The figure illustrates the
importance of inside and outside definitions regarding the processing sequence. When a
packet travels from an inside domain to an outside domain, it is received at an inside
interface, routed, and, only then, addresses are translated to global addresses. At this
point, the border device automatically creates a translation mapping (basically a "dictionary entry") if the mapping does not exist. The packet is then forwarded out the exit (outside)
interface. In dynamic translation, the border device also sets a timeout value for each
mapping it creates. The key point to remember is that with dynamic NAT implementation,
mapping creation is "provoked" by inside to outside traffic. Without outbound traffic, no
mappings are created.
When a packet travels from an outside domain to an inside domain, the process is
reversed: packets arriving from the outside with their global addresses are first translated
back to their local addresses and, only then, routed. Note that the inbound traffic has the
translated address (the inside global address) in the destination IPv4 header. Since the
routing happens after translation, it will be based on the original, local IPv4 address.
However, all outside routers—routers in external networks—must have a route towards
the global IPv4 address in order for packets to reach the inside network. Only the global
address is visible in the external world.
What happens if a packet arrives from the outside, and there is no mapping for its
destination address? When NAT service on a device cannot find a mapping for an inbound
packet, it will discard the packet. When is this situation encountered? Dynamic NAT
creates mappings when an inside host initiates communication with the outside. However,
dynamic mappings do not last forever. After a dynamic mapping timeout expires, the
mapping is automatically deleted. Recall that dynamic mappings are not created unless there is inside-to-outside traffic. Therefore, with dynamic NAT, outside-to-inside communication is not possible unless there was prior outbound communication. In other words, dynamic NAT does not allow requests initiated from the outside.
If the return communication is received after the timeout expires, there would be no
mappings, and the packets will be discarded. You will not encounter this issue in static
NAT. A static NAT configuration creates static mappings, which are not time limited. In
other words, statically created mappings are always present. Therefore, those packets
from outside can arrive at any moment, and they can be either requests initiating
communication from the outside, or they can be responses to requests sent from inside.
Configuring inside NAT on a Cisco IOS router generally involves the following tasks:
• Specify inside and outside interfaces. You must instruct the border device where to expect the inside traffic that needs to be translated (inside interface) and where to expect the outside traffic (outside interface). The inside/outside interface specification is required regardless of whether you are configuring inside-only NAT or outside-only NAT.
• Specify local addresses that need to be translated. NAT might not be performed
for all inside segments and you have to specify exactly which local addresses
require translation.
• Specify global addresses available for translations.
• Specify the NAT type using the ip nat inside source command. The syntax of the command differs for the different NAT types.
Configuration commands that tell a device which interfaces are inside and which are
outside are common to all NAT types. To specify inside and outside interfaces use the ip
nat inside and ip nat outside interface configuration commands respectively.
In the example configuration, interface GigabitEthernet 0/1 with public IPv4 address
209.165.200.226/27 is configured as a NAT outside interface. Interface GigabitEthernet
0/0 with private IPv4 address 172.16.1.1/24 is a NAT inside interface.
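A sketch of this interface configuration, using the addresses from the example:

Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip address 209.165.200.226 255.255.255.224
Router(config-if)# ip nat outside
Router(config-if)# exit
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip address 172.16.1.1 255.255.255.0
Router(config-if)# ip nat inside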
You can specify more than one inside interface.
The remaining configuration steps differ in NAT types.
Configuring Static Inside IPv4 NAT and Port Forwarding
For static inside NAT, you have to configure a static mapping between exactly one local
and one global IPv4 address. Specification of the local address, global address, and NAT
type, are all done using one command only.
To configure static inside IPv4 NAT, use the ip nat inside source command with the
keyword static. The global configuration mode command has the following syntax: ip nat
inside source static local-ip global-ip.
Packets arriving on the inside interface and matching the defined local address will be
translated to the defined global address, and vice versa.
The keyword inside in the command specifies that only the inside address is translated (from
local to global). The keyword static indicates that the mapping that follows is static.
Note: Do not confuse the ip nat inside source static and ip nat source static commands.
The latter does not include the word inside. The ip nat source static command is used
when configuring NAT on a virtual interface. If you wish to configure NAT for physical
interfaces, use ip nat inside source static command.
The ip nat inside source static local-ip global-ip command creates an entry in the NAT mapping table. To verify which addresses are currently being translated, issue the show ip nat translations command.
Static mapping entries appear in the translations table even when there is no traffic from
the inside to the outside interface.
The following is an example of creating and verifying a static entry that maps the local IPv4 address 172.16.1.10 to the global IPv4 address 209.165.200.230.
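A sketch of the configuration and verification follows; the show command output is approximated:

Router(config)# ip nat inside source static 172.16.1.10 209.165.200.230
Router(config)# end
Router# show ip nat translations
Pro  Inside global        Inside local        Outside local      Outside global
---  209.165.200.230      172.16.1.10         ---                ---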
In the example output of the show ip nat translations command, there is a mapping present. When traffic is generated and static NAT is performed, both the outside local and outside global fields are populated. Empty outside local and outside global fields indicate that the entry is a result of the configuration activity only.
To configure port forwarding, you also specify a static inside mapping. However, in port
forwarding you must specify local and global port numbers and indicate the transport
protocol that the port numbers refer to.
To configure inside IPv4 port forwarding, use the ip nat inside source static tcp|udp local-
ip local-port global-ip global-port command.
The sample configuration shows an example of configuring port forwarding. The web
server 192.168.10.254 in the inside network is listening on port 80 for the incoming
connections. Users will access this internal web server using the global IPv4 address
209.165.200.226 as the destination IPv4 address and destination port 8080.
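A sketch of such a port forwarding entry, using the values from the example:

Router(config)# ip nat inside source static tcp 192.168.10.254 80 209.165.200.226 8080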
In the example, the port forwarding entry is verified using the show ip nat translations command. In the output, note that the port forwarding mapping uses the IPv4-address:port-number format.
Configuring Dynamic IPv4 Inside NAT
Dynamic NAT configuration differs from static NAT, but it also has some similarities. Like
static NAT, it requires the configuration to identify each interface as an inside or outside
interface. However, rather than creating a static map between one local and only one
global IPv4 address, you can specify pools of addresses.
To specify a pool of local addresses that need to be translated, you use access control lists
(ACLs). With an ACL, you identify only those local addresses that are to be translated. You
can configure either a named or a numbered ACL.
Note: Remember that there is an implicit deny any statement at the end of each ACL. An ACL that is too permissive can lead to unpredictable results. Using permit any can result in NAT consuming too many router resources, which can cause network problems.
To specify a pool of global addresses available for dynamic translations, use the ip nat
pool name start-ip end-ip {netmask netmask | prefix-length prefix-length} command.
The pool of global IPv4 addresses is available to any device on the inside network on a
first-come first-served basis. The NAT pool is referenced in commands by its name.
Outside routers are not aware of NAT translations performed on the inside network. To
reach the inside network, outside routers must have a route to the network to which the
addresses are translated, in other words to the inside global network. The inside global
network contains the range of IPv4 addresses that is specified in the NAT pool.
The final step is to specify how NAT should be performed. To configure dynamic inside IPv4 NAT, use the ip nat inside source command followed by the mapping between the ACL-defined local addresses and the NAT pool-defined global addresses. The ACL and NAT pool are referenced by their names (or number, for ACLs). The syntax of the global configuration command is ip nat inside source list ACL-identifier pool pool-name.
Note: Whatever type of inside NAT you are specifying, the syntax of the ip nat inside
source command always specifies the local addresses first, followed by the specification of
global addresses.
The example configuration has a numbered ACL 1 that identifies all addresses in the 10.1.1.0/24 subnet; therefore, packets from both PC1 and PC2 will be translated.
The available global addresses are identified in the NAT pool called NAT-POOL. The pool
includes six addresses, from 209.165.200.230 to 209.165.200.235, that belong to the
209.165.200.224/27 subnet as indicated by the subnet mask 255.255.255.224.
The ip nat inside source command creates a mapping between ACL 1 (list 1 in the
command) and NAT-POOL (pool NAT-POOL in the command), which indicates to the
router that dynamic many-to-many NAT is performed.
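A sketch of the dynamic NAT configuration described above could be:

Router(config)# access-list 1 permit 10.1.1.0 0.0.0.255
Router(config)# ip nat pool NAT-POOL 209.165.200.230 209.165.200.235 netmask 255.255.255.224
Router(config)# ip nat inside source list 1 pool NAT-POOL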
Finally, the translations are verified with the show ip nat translations command. In the example, the command output includes specific translations along with configuration-based entries. Configuration-based entries have empty "Outside" fields. Note that the first IPv4 address from the NAT pool, 209.165.200.230, was used to translate the 10.1.1.100 address when Internet Control Message Protocol (ICMP) traffic was generated. The second address from the pool was used for the 10.1.1.101 address. The traffic that
crossed the router included both ICMP and TCP (Telnet) packets. ICMP packets do not
have port numbers. Instead of port numbers, for ICMP traffic, the value from the ICMP
message identifier field is used.
Note: Dynamic NAT entries time out. These entries have a default timeout value of 86400
seconds (24 hours), after which they are removed from the table if there is no activity for
the duration of the timeout.
Configuring IPv4 Inside PAT
PAT mappings include port numbers along with IPv4 addresses. To specify which local IPv4 addresses and port numbers are to be translated, use ACLs, as in the case of dynamic NAT.
Specification of global IPv4 addresses in PAT depends on whether you are using only one
global IPv4 address or a pool of global IPv4 addresses. When only one global IPv4 address
is used, it is usually the IPv4 address of the outside interface of the border device. To
configure this address as the global address, it is enough to specify the interface in the ip
nat inside source command.
The configuration of the pool of global IPv4 addresses for NAT uses the ip nat pool
command. The syntax of the command is the same as for the dynamic NAT: ip nat pool
name start-ip end-ip {netmask netmask | prefix-length prefix-length}.
When creating port mappings, the device tries to preserve the local port number value. If
the local value cannot be preserved, by default the mapped ports are chosen from the
same range of ports as the local port number.
To specify that PAT is to be performed, you use the ip nat inside source command. The local IPv4 addresses are specified by the list keyword followed by an ACL identifier.
The global IPv4 addresses are specified using one of the following options:
• When there is only one global IPv4 address, such as the address of the device's
outside interface, the interface label is specified in the ip nat inside source list
ACL-identifier interface interface-type-number overload command.
• When there is a pool of global addresses, the name of the NAT pool is specified.
The command syntax is ip nat inside source list ACL-identifier pool pool-name
overload.
The command syntax for PAT adds a keyword overload at the end. This keyword indicates
to the device that PAT is implemented.
In the example configuration, ACL 1 identifies all addresses in the 172.16.1.0/24 subnet as
local addresses. The router's GigabitEthernet0/1 interface with IPv4 address
209.165.200.226 is used for PAT. In the ip nat inside source command, it is specified by its
type and number. To instruct the router to perform PAT, the keyword overload is added
at the end of the command. The router will translate traffic from both PCs. It will try to
preserve the port numbers selected by PCs, if they are available. To the outside networks,
the entire inside network of 172.16.1.0/24 is represented by only one IPv4 address
209.165.200.226.
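A sketch of the PAT configuration described in this example could be:

Router(config)# access-list 1 permit 172.16.1.0 0.0.0.255
Router(config)# ip nat inside source list 1 interface GigabitEthernet0/1 overload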
24.1 Introducing QoS
Introduction
IP was designed to provide best-effort service for delivery of data packets and to run
across virtually any network transmission media and system platform. As user applications
continue to drive network growth and evolution, the demand to support various types of
traffic is also increasing. Network traffic from business-critical and delay-sensitive
applications must be serviced with priority and protected from other types of traffic. To
manage applications such as VoIP, video, e-commerce, and databases, among others, a
network requires quality of service (QoS).
Networks must provide secure, predictable, measurable, and guaranteed services.
Network administrators and architects can achieve better performance from the network
by managing bandwidth provisioning, delay, jitter (delay variation), and packet loss with
QoS mechanisms. As networks increasingly converge to support voice, video, and data
traffic, there is a growing need for QoS.
QoS is a crucial element of any administrative policy that mandates how to handle
application traffic on an enterprise network. Many QoS building blocks or features operate
at different parts of a network to create an end-to-end QoS system. For example, traffic
can be classified and assigned a priority when forwarded by access switches. Then in the
LAN Core for example, different congestion management mechanisms for different types
of traffic can be used. QoS and its implementations in a converged network are complex
and create many challenges for network administrators and architects.
A converged network must handle the following traffic characteristics:
• Competition between constant, small-packet voice flows and bursty video and data flows
• Time-sensitive voice and video flows
• Critical traffic that must get priority
The figure illustrates a converged network in which voice, video, and data traffic use the
same network facilities instead of a dedicated network for each traffic type. Although
there are several advantages to converged networks, merging these different traffic
streams with dramatically differing requirements can lead to a number of quality
problems.
Data traffic is typically not real-time traffic. Data applications may be bursty in that they
create unpredictable traffic patterns, and thus have widely varying packet arrival times.
Many types of application data exist within an organization. For example, some are
relatively noninteractive and therefore not delay-sensitive (such as email). Other
applications involve users entering data and waiting for responses (such as database
applications) and are therefore very delay-sensitive. You can also classify data according to
its importance to the overall corporate business objectives. For example, a company that
provides interactive, live e-learning sessions to its customers would consider that traffic to
be mission-critical. On the other hand, a manufacturing company might consider that
same traffic important, but not critical to its operations.
Voice traffic is real-time traffic and comprises constant and predictable bandwidth and
packet arrival times.
Video traffic comprises several traffic subtypes, including passive streaming video and
real-time interactive video. Video traffic can be in real time, but not always. Video has
varied bandwidth requirements, and it comprises different types of packets with different
delay and tolerance for loss within the same session.
Interactive video, or video conferencing, has the same delay, jitter, and packet loss
requirements as voice traffic. The difference is the bandwidth requirements—voice
packets are small while video-conferencing packet sizes can vary, as can the data rate. A
general guideline for overhead is to provide 20-percent more bandwidth than the data
currently requires. Streaming video has different requirements than interactive video. An
example of the use of streaming video is when an employee views an online video during
an e-learning session. As such, this video stream is not nearly as sensitive to delay or loss
as interactive video is. Requirements for streaming video include a loss of no more than 5
percent and a delay of no more than 4 to 5 seconds. Depending on how important this
traffic is to the organization, it can be given precedence over other traffic.
Voice and some video traffic are not tolerant of delay, jitter, or packet loss, and excessive
amounts of any of these will result in a poor experience for the end users. Data flows are
typically more tolerant of delay, jitter, and packet loss but are very bursty in nature and
will typically use as much bandwidth as possible.
The different traffic flows on a converged network will be in competition for network
resources. Unless some mechanism mediates the overall traffic flow, voice and video
quality will be severely compromised at times of network congestion. The critical, time-
sensitive flows must be given priority in order to preserve the quality of this traffic.
Quality Issues in Converged Networks
Four major problems affect quality on converged networks:
• Bandwidth capacity: Large graphic files, multimedia uses, and increasing use of
voice and video can cause bandwidth capacity problems over data networks.
Multiple traffic flows compete for a limited amount of bandwidth and may require
more bandwidth than is available.
• Delay: Delay is the time that it takes for a packet to reach the receiving endpoint
after being transmitted by the sender. This period of time is called the end-to-end
delay and consists of variable delay components (processing and queueing delay)
and fixed delay components (serialization and propagation delay).
• Jitter: Jitter is the variation in latency or end-to-end delay that is experienced
between when a signal is sent and when it is received. It may also be described as
a disruption in the normal flow of packets as they traverse the network.
• Packet loss: Loss of packets is usually caused by congestion, faulty connectivity, or
faulty network equipment.
Multimedia streams, such as those used in IP telephony or video conferencing, are
sensitive to delivery delays. High delay can cause noticeable echo or talker overlap. Voice
transmissions can be choppy or unintelligible with high packet loss or jitter. Images may
be jerky, or the sound might not be synchronized with the image. Voice and video calls
may disconnect or not connect if signaling packets are not delivered.
Some data applications can also be severely affected by poor QoS. Time-sensitive
applications, such as virtual desktop or interactive data sessions, may appear
unresponsive. Delayed application data could have serious performance implications for
users that depend on timely responses, such as in brokerage houses or call centers.
Managing Quality Issues in Converged Networks
Different techniques are employed to manage quality issues:
The goal of QoS is a better and more predictable network service with dedicated
bandwidth, controlled jitter and latency, and improved loss characteristics as required by
the business applications. QoS achieves these goals by providing tools for managing
network congestion, shaping network traffic, using expensive wide-area links more
efficiently, and setting traffic policies across the network.
QoS gives priority to some sessions over others. Packets of delay-sensitive sessions bypass queues of packets belonging to non-delay-sensitive sessions. When queue buffers overflow, packets are dropped from sessions that can recover from the loss or from sessions that can be eliminated with minimal business impact.
To make space for applications that are important and cannot tolerate loss without
affecting the end-user experience, QoS manages other sessions based on QoS policy
decisions that you implement in the network. Managing refers to selectively delaying or
dropping packets when contention arises.
QoS is not a substitute for bandwidth. If the network is congested, packets will be dropped. QoS gives administrators control over how, when, and which traffic is dropped during congestion.
Note: QoS describes technical network performance and you can measure QoS
quantitatively. Measurements are numerical: jitter, latency, bandwidth, and loss. Quality
of Experience (QoE) measures end-user perception of the network performance. QoE is
not a technical metric; it is a subjective metric describing the end user experience. You
deploy QoS features to maximize QoE for the end user. When you have a session between
two users, QoE is what these two users experience, regardless of how the network
between them works. QoS is often meaningless when you implement it on only a segment of your network, because QoE is determined by the worst-performing segment of the network that the traffic passes along its way.
There are three basic steps involved in defining QoS policies for a network:
1. Identify traffic and its requirements. Study the network to determine the type of
traffic running on the network and then determine the QoS requirements for the
different types of traffic. The figure shows a network traffic discovery identifying
voice, video, and data traffic.
2. Group the traffic into classes with similar QoS requirements. For example, the
voice and video traffic are put into dedicated classes, and all of the data traffic is
put into a best-effort class.
3. Define QoS policies that will meet the QoS requirements for each traffic class. In
the example in the figure, the voice traffic is given top priority and always
transmitted first. The video traffic is transmitted after voice but before the best-
effort traffic that is only transmitted when no other traffic is present.
Identify Network Traffic and Requirements
Before deploying a QoS policy, network traffic must be identified.
After the majority of network traffic, which besides different classes of data traffic includes voice and video, has been identified and measured, use the business and service-level requirements to define traffic classes.
Due to its stringent QoS requirements, voice traffic will almost always exist in a dedicated class. Cisco has developed specific QoS mechanisms that ensure voice traffic receives priority treatment over all other traffic.
After you define the applications with the most critical requirements, you can define the
remaining traffic classes using the business requirements.
An enterprise might define traffic classes as follows:
The following mechanisms are used to implement QoS in a network:
• Classification
• Marking
• Policing and shaping
• Congestion management
• Congestion avoidance
• Link efficiency
Cisco network devices can provide a complete toolset of QoS features and solutions for
addressing the diverse needs of voice, video and multiple classes of data applications. QoS
mechanisms allow complex network control and predictable service for a variety of
networked applications and traffic types. They can effectively control bandwidth, delay,
jitter, and packet loss. By ensuring the desired results, the QoS mechanisms lead to
efficient, predictable services for business-critical applications.
Classification and Marking
In any network in which networked applications require differentiated levels of service,
traffic must be sorted into different classes upon which QoS is applied. Classification and
marking are two critical functions of any successful QoS implementation. Classification
allows network devices to identify traffic as belonging to a specific class with specific QoS
requirements, as determined by an administrative QoS policy. After network traffic is
sorted, individual packets are marked (also called colored) so that other network devices
can apply QoS features uniformly to those packets in compliance with the defined QoS
policy.
A classifier is a tool that inspects packets within a flow to identify the type of traffic that
the packet is carrying. Traffic is then marked so that a policy enforcement mechanism will
implement the policy for that type of traffic.
Classification is the identifying and splitting of traffic into different classes. Marking writes a value into the packet or frame header; several Layer 2 and Layer 3 fields can be used for this purpose:
• CoS is usually used with Ethernet 802.1q frames and contains 3 bits.
• ToS is generally used to indicate the Layer 3 IPv4 packet field and comprises 8 bits,
3 of which are designated as the IP precedence field. IPv6 changes the terminology
for the same field in the packet header to "Traffic Class."
• DSCP is a set of 6-bit values that can describe the meaning of the Layer 3 IPv4 ToS
field. While IP precedence is the old way to mark ToS, DSCP is the new way. The
transition from IP precedence to DSCP was made because IP precedence only
offers 3 bits, or eight different values, to describe different classes of traffic. DSCP
is backward-compatible with IP precedence.
• Class Selector is a term that is used to indicate a 3-bit subset of DSCP values. The
class selector designates the same 3 bits of the field as IP precedence.
• TID is a term that is used to describe a 4-bit field in the QoS control field of
wireless frames (802.11 MAC frame). TID is used for wireless connections, and CoS
is used for wired Ethernet connections.
Ultimately, there are various Layer 2 and Layer 3 mechanisms that are used in the network
for marking traffic.
Layer 3 packet marking with IP precedence and DSCP is the most widely deployed marking
option because Layer 3 packet markings have end-to-end significance. Layer 3 markings
can also be easily translated to and from Layer 2 markings.
DSCP Encoding
DSCP is encoded in the header of both IPv4 and IPv6 packets.
DiffServ uses the DiffServ (DS) field in the IP header to mark packets according to their
classification. The DS field occupies the eight-bit ToS field in the IPv4 header or the Traffic
Class field in the IPv6 header.
The following three IETF standards describe the purpose of the eight bits of the DS field:
1. RFC 791 includes specification of the ToS field, where the high-order three bits are
used for IP precedence. The other bits are used for delay, throughput, reliability,
and cost.
2. RFC 1812 modifies the meaning of the ToS field by removing meaning from the five
low-order bits (which should all be “0”). This gained widespread use and became
known as the original IP precedence.
3. RFC 2474 replaces the ToS field with the DS field, where the six high-order bits are
used for the DSCP. The remaining two bits are used for explicit congestion
notification. RFC 3260 (New Terminology and Clarifications for Diffserv) updates
RFC 2474 and provides terminology clarifications.
Policing and Shaping
Within a network, different forms of connectivity can have significantly different costs for
an organization. Because WAN bandwidth is relatively expensive, many organizations
would like to limit the amount of traffic that specific applications can send. This is
especially true when enterprise networks use internet connections for connectivity to
remote sites and the extranet. Downloading nonbusiness-critical images, music, and
movie files can greatly reduce the amount of bandwidth that is available for other
mission-critical applications. Traffic policing and traffic shaping are two QoS techniques
that can limit the amount of bandwidth that a specific application, user, or class of traffic
can use on a link.
Policers and shapers are both rate-limiters, but they differ in how they treat excess traffic;
policers drop it and shapers delay it.
Policers and shapers are tools that identify and respond to traffic violations. They usually
identify traffic violations in a similar manner, but they differ in their response:
• Policers perform checks for traffic violations against a configured rate. The action
that they take in response is either dropping or re-marking the excess traffic.
Policers do not delay traffic; they only check traffic and take action if needed.
• Shapers are traffic-smoothing tools that work in cooperation with buffering
mechanisms. A shaper does not drop traffic, but it smooths it out so it never
exceeds the configured rate. Shapers are usually used to meet service level
agreements (SLAs). Whenever the traffic spikes above the contracted rate, the
excess traffic is buffered and thus delayed until the offered traffic goes below the
contracted rate.
You can use traffic policing to control the maximum rate of traffic that is sent or received
on an interface. Traffic policing is often configured on interfaces at the edge of a network
to limit traffic into or out of the network. You can use traffic shaping to control the traffic
going out an interface in order to match its flow to the speed of the remote target
interface and to ensure that the traffic conforms to policies contracted for it.
Policer characteristics
• They are ideally placed as ingress tools (drop it as soon as possible so you do not
waste resources).
• They can be placed at egress to control the amount of traffic per class.
• When the configured rate is exceeded, policers can either drop traffic or re-mark it.
• A significant number of TCP re-sends can occur.
• They do not introduce jitter or delay.
Shaper characteristics
• They are usually deployed between the enterprise network and the service provider to make sure that enterprise traffic stays under the contracted rate.
• There are fewer TCP re-sends than with policers.
• Shapers introduce delay and jitter.
Policers make instantaneous decisions and are thus optimally deployed as ingress tools.
The logic is that if you are going to drop the packet, you might as well drop it before
spending valuable bandwidth and CPU cycles on it. However, policers can also be
deployed at egress to control the bandwidth that a particular class of traffic uses. Such
decisions sometimes cannot be made until the packet reaches the egress interface.
When traffic exceeds the allocated rate, the policer can take one of two actions. It can
either drop traffic or re-mark it to another class of service. The new class usually has a
higher drop probability, which means packets in this new class will be discarded earlier
than packets in classes with higher priority.
Shapers are commonly deployed on enterprise-to-service provider links on the enterprise
egress side. Shapers ensure that traffic going to the service provider does not exceed the
contracted rate. If the traffic exceeds the contracted rate, it would get policed by the
service provider and likely dropped.
While policers can cause a significant number of TCP re-sends when traffic is dropped,
shaping involves fewer TCP re-sends. Policing does not cause delay or jitter in a traffic
stream, but shaping does.
Traffic-policing mechanisms such as class-based policing also have marking capabilities in
addition to rate-limiting capabilities. Instead of dropping the excess traffic, traffic policing
can alternatively mark and then send the excess traffic. Excess traffic can be re-marked
with a lower priority before the excess traffic is sent out. Traffic shapers, on the other
hand, do not re-mark traffic; these only delay excess traffic bursts to conform to a
specified rate.
Note: Regulating real-time traffic such as voice and video with policing and shaping is
generally counterproductive. You should use Call Admission Control (CAC) strategies to
prevent real-time traffic from exceeding the capacity of the network. Policing and shaping
tools are best employed to regulate TCP-based data traffic.
Tools for Managing Congestion
Congestion occurs any time an interface is presented with more traffic than it is able to
transmit. Aggressive traffic can fill interface queues and starve more fragile flows such as
voice, video, and interactive traffic. The results can be devastating for delay-sensitive
traffic types, making it difficult to meet the service-level requirements that these
applications require. There are many congestion management techniques available on
Cisco platforms that can provide effective means to manage software queues and to
allocate the required bandwidth to specific applications when congestion exists.
Whenever a packet arrives at an exit interface faster than it can exit, the potential for
congestion exists. If there is no congestion, packets are sent when they arrive at the exit
interface. If congestion occurs, congestion management tools are activated.
Congestion management tools schedule packets out of the queues using one of the following general approaches:
• Strict priority: The queues with lower priority are served only when the higher-priority queues are empty. The risk with this kind of scheduler is that the lower-priority traffic may never be processed. This situation is commonly referred to as traffic starvation.
• Round-robin: Packets in queues are served in a set sequence. There is no
starvation with this scheduler, but delays can badly affect the real-time traffic.
• Weighted fair: Queues are weighted, so that some are served more frequently
than others. This method thus solves starvation and also gives priority to real-time
traffic. One drawback is that this method does not provide bandwidth guarantees. The resulting bandwidth per flow varies based on the number of flows present and the weights of each of the other flows.
The scheduling tools that you use for QoS deployments therefore offer a combination of
these algorithms and various ways to mitigate their downsides. This combination allows
you to best tune your network for the actual traffic flows that are present.
Queuing algorithms are one of the primary ways to manage congestion in a network.
Network devices handle an overflow of arriving traffic by using a queuing algorithm to sort
traffic and determine a method of prioritizing the traffic onto an output link. Each queuing
algorithm was designed to solve a specific network traffic problem and has a particular
effect on network performance.
There are many different queuing mechanisms. Older methods are insufficient for modern
rich-media networks. However, you need to understand these older methods to
comprehend the newer methods:
• First-In, First-Out (FIFO) is a single queue with packets that are sent in the exact
order that they arrived.
• Priority Queuing (PQ) is a set of four queues that are served in strict-priority order.
By enforcing strict priority, the lower-priority queues are served only when the
higher-priority queues are empty. This method can starve traffic in the lower-
priority queues.
• Custom Queueing (CQ) is a set of 16 queues with a round-robin scheduler. To
prevent traffic starvation, it provides traffic guarantees. The drawback of this
method is that it does not provide strict priority for real-time traffic.
• Weighted Fair Queuing (WFQ) is an algorithm that divides the interface bandwidth
by the number of flows, thus ensuring proper distribution of the bandwidth for all
applications. This method provides a good service for the real-time traffic, but
there are no guarantees for a particular flow.
Here are two examples of newer queuing mechanisms that are recommended for rich-media networks:
• Class-Based Weighted Fair Queuing (CBWFQ): Extends WFQ to support user-defined traffic classes, each with its own queue and a minimum bandwidth guarantee.
• Low-Latency Queuing (LLQ): Adds a strict-priority queue to CBWFQ so that delay-sensitive traffic, such as voice, is always sent first.
Note: The figure shows the LLQ queuing mechanism, which is suitable for networks with
real-time traffic. If you remove the low-latency queue (at the top), what you are left with
is CBWFQ, which is only suitable for nonreal-time data traffic networks.
With CBWFQ, you define the traffic classes based on match criteria, including protocols,
Access Control Lists (ACLs), and input interfaces. Packets satisfying the match criteria for a
class constitute the traffic for that class. A queue is reserved for each class, and traffic
belonging to a class is directed to that class queue.
After a class has been defined according to its match criteria, you can assign characteristics to it. To characterize a class, you assign it the minimum bandwidth that will be delivered to it during congestion.
To characterize a class, you also specify the queue limit for that class, which is the
maximum number of packets allowed to accumulate in the class queue. Packets belonging
to a class are subject to the bandwidth and queue limits that characterize the class. After a
queue has reached its configured queue limit, enqueuing of additional packets to the class
causes tail drop or random packet drop to take effect, depending on how the class policy
is configured.
For CBWFQ, the weight for a packet belonging to a specific class is derived from the
bandwidth that you assigned to the class when you configured it. Therefore, the
bandwidth assigned to the packets of a class determines the order in which packets are
sent. All packets are serviced fairly based on weight; no class of packets may be granted
strict priority. This scheme poses problems for voice traffic, which is largely intolerant of
delay, especially jitter.
The LLQ brings strict priority queuing to CBWFQ. Strict priority queuing allows delay-
sensitive data such as voice to be dequeued and sent first (before packets in other queues
are dequeued), giving delay-sensitive data preferential treatment over other traffic.
Tools for Congestion Avoidance
Congestion is a normal occurrence in networks. Whether congestion occurs as a result of a
lack of buffer space, network aggregation points, or a low-speed wide-area link, many
congestion management techniques exist to ensure that specific applications and traffic
classes are given their share of available bandwidth when congestion occurs. When
congestion occurs, some traffic is delayed or even dropped at the expense of other traffic.
When drops occur, different problems may arise that can exacerbate the congestion, such
as retransmissions and TCP global synchronization in TCP/IP networks. Network
administrators can use congestion avoidance mechanisms to reduce the negative effects
of congestion by penalizing the most aggressive traffic streams as software queues begin
to fill.
TCP has built-in flow control mechanisms that operate by increasing the transmission
rates of traffic flows until packet loss occurs. When packet loss occurs, TCP drastically
slows down the transmission rate and then again begins to increase the transmission rate.
Because of TCP behavior, tail drop of traffic can result in suboptimal bandwidth utilization.
TCP global synchronization is a phenomenon that can happen to TCP flows during periods of congestion because each sender reduces its transmission rate at the same time when packet loss occurs.
Congestion avoidance techniques are advanced packet-discard techniques that monitor
network traffic loads in an effort to anticipate and avoid congestion at common network
bottleneck points.
Queues are finite on any interface. Devices can either wait for queues to fill up and then start dropping packets, or drop packets before the queues fill up. Dropping packets that arrive when the queue is already full is called tail drop. Selectively dropping packets while queues are filling up is called congestion avoidance. Queuing algorithms manage the front of the queue, and congestion avoidance mechanisms manage the back of the queue.
Randomly dropping packets, instead of dropping them all at once as is done in tail drop, avoids global synchronization of TCP streams. One such mechanism that randomly
drops packets is random early detection (RED). RED monitors the buffer depth and
performs early discards (drops) on random packets when the minimum defined queue
threshold is exceeded.
Cisco IOS Software does not support pure RED, but does support WRED. The principle is
the same as with RED, except that the traffic weights skew the randomness of the packet
drop. In other words, traffic that is more important will be less likely to be dropped than
less important traffic.
The idea behind using WRED is both to maintain the queue length at a level somewhere
between the minimum and maximum thresholds and to implement different drop policies
for different classes of traffic. WRED can selectively discard lower-priority traffic when the
interface becomes congested, and it can provide differentiated performance
characteristics for different classes of service.
The figure shows how WRED is implemented, as well as the parameters that WRED uses to
influence packet-drop decisions.
WRED Building Blocks
The router constantly updates the WRED algorithm with the calculated average queue
length, which is based on the recent history of queue lengths.
When a packet arrives at the output queue, the QoS marking value is used to select the
correct WRED profile for the packet. The packet is then passed to WRED for processing.
Based on the selected traffic profile and the average queue length, WRED calculates the
probability for dropping the current packet (Probability Denominator). If the average
queue length is greater than the minimum threshold but less than the maximum
threshold, WRED will either queue the packet or perform a random drop. If the average
queue length is less than the minimum threshold, the packet is passed to the output
queue.
If the queue is already full, the packet is tail-dropped. Otherwise, the packet will
eventually be transmitted out onto the interface.
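In Cisco IOS Software, WRED is typically enabled inside a Modular QoS CLI (MQC) policy map. The following is only a minimal sketch; the class name, DSCP values, and interface are assumptions for illustration:
Router(config)# class-map match-any CRITICAL-DATA
Router(config-cmap)# match dscp af21 af22 af23
Router(config-cmap)# exit
Router(config)# policy-map WAN-EDGE
Router(config-pmap)# class CRITICAL-DATA
Router(config-pmap-c)# bandwidth percent 30
Router(config-pmap-c)# random-detect dscp-based
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface GigabitEthernet0/0/0
Router(config-if)# service-policy output WAN-EDGE
In this sketch, the bandwidth command reserves a share of bandwidth for the class, and random-detect dscp-based makes WRED select its drop thresholds per DSCP value rather than per IP precedence.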
Link Efficiency Mechanisms
Increasing the bandwidth of WAN links can be expensive. An alternative is to use QoS
techniques to improve the efficiency of low bandwidth links, which, in this context,
typically refer to links with speeds less than or equal to 768 kbps. Header compression and
payload compression mechanisms reduce the sizes of packets, reducing delay and
increasing available bandwidth on a link. Other QoS link efficiency techniques, such as Link
Fragmentation and Interleaving (LFI), allow delay-sensitive traffic types, such as voice and interactive traffic, to be sent ahead of, or interleaved with, larger, more aggressive flows. These
techniques decrease latency and assist in meeting the service-level requirements of delay-
sensitive traffic.
While many QoS mechanisms exist for optimizing throughput and reducing delay in
network traffic, QoS mechanisms do not create bandwidth. QoS mechanisms optimize the
use of existing resources, and they enable the differentiation of traffic according to a
policy. Link efficiency QoS mechanisms such as payload compression, header
compression, and LFI are deployed on WAN links to optimize the use of WAN links.
Payload compression increases the amount of data that can be sent through a
transmission resource. Payload compression is primarily performed on Layer 2 frames and
therefore compresses the entire Layer 3 packet. The Layer 2 payload compression
methods include Stacker, Predictor, and Microsoft Point-to-Point Compression (MPPC).
Compression methods are based on eliminating redundancy. The protocol header is an
item of repeated data. The protocol header information in each packet in the same flow
does not change much over the lifetime of that flow. Using header compression
mechanisms, most header information can be sent only at the beginning of the session,
stored in a dictionary, and then referenced in later packets by a short dictionary index.
Cisco IOS header compression methods include TCP header compression, Real-Time
Transport Protocol (RTP) header compression, class-based TCP header compression, and
class-based RTP header compression.
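As an illustrative sketch only, class-based RTP header compression can be enabled per class in an MQC policy map; the class map, policy map, and serial interface shown here are assumptions:
Router(config)# class-map match-all VOICE
Router(config-cmap)# match dscp ef
Router(config-cmap)# exit
Router(config)# policy-map WAN-LOWSPEED
Router(config-pmap)# class VOICE
Router(config-pmap-c)# compression header ip rtp
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface Serial0/0/0
Router(config-if)# service-policy output WAN-LOWSPEED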
It is important to note that Layer 2 payload compression and header compression are
performed on a link-by-link basis. These compression techniques cannot be performed
across multiple routers because routers need full Layer 3 header information to be able to
route packets to the next hop.
LFI is a Layer 2 technique in which large frames are broken into smaller, equally sized
fragments and then transmitted over the link in an interleaved fashion with more latency-
sensitive traffic flows (like Voice over IP). Using LFI, smaller frames are prioritized, and a
mixture of fragments is sent over the link. LFI reduces the queuing delay of small frames
because the frames are sent almost immediately. Link fragmentation, therefore, reduces
delay and jitter by expediting the transfer of smaller frames through the hardware
transmit queue.
The DiffServ model offers these benefits:
• It is highly scalable.
• It provides many different levels of quality.
DiffServ also has these drawbacks:
• It provides no absolute guarantee of service quality.
• It requires a set of complex mechanisms to work in concert throughout the network.
The DiffServ model uses the following key terms:
• BA: A collection of packets with the same DSCP value crossing a link in a particular
direction. Packets from multiple applications and sources can belong to the same
BA.
• DSCP: A value in the IP header that is used to select a QoS treatment for a packet.
In the DiffServ model, classification and QoS revolve around the DSCP.
• PHB: An externally observable forwarding behavior (or QoS treatment) that is
applied at a DiffServ-compliant node to a DiffServ BA. The term PHB refers to the
packet scheduling, queuing, policing, or shaping behavior of a node on any given
packet belonging to a BA. The DiffServ model itself does not specify how PHBs
must be implemented. A variety of techniques may be used to affect the desired
traffic conditioning and PHB. In Cisco IOS Software, you can configure PHBs by
using Modular QoS CLI (MQC) policy maps.
The DiffServ architecture is based on a simple model in which traffic entering a network is
classified at the boundaries of the network. The traffic class is then marked, using a DSCP
marking in the IP header. Packets with the same DSCP markings create BAs as they
traverse the network in a particular direction, and these aggregates are forwarded
according to the PHB that is associated with the DSCP marking.
Each DSCP value identifies a BA. Each BA is assigned a PHB. Each PHB is implemented
using the appropriate QoS mechanism or set of QoS mechanisms.
One of the primary principles of DiffServ is that you should mark packets as close to the
edge of the network as possible. It is often a difficult and time-consuming task to
determine the traffic class for a data packet, and you should classify the data as few times
as possible. By marking the traffic at the network edge, core network devices and other
devices along the forwarding path will be able to quickly determine the proper QoS
treatment to apply to a given traffic flow, based on the PHB that is associated with the
DSCP marking.
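As an example of marking at the access edge, the following MQC sketch classifies voice traffic with an ACL and sets its DSCP value on ingress; the ACL, class, policy, and interface names are assumptions for illustration:
Switch(config)# access-list 100 permit udp any any range 16384 32767
Switch(config)# class-map match-all VOICE-TRAFFIC
Switch(config-cmap)# match access-group 100
Switch(config-cmap)# exit
Switch(config)# policy-map MARK-EDGE
Switch(config-pmap)# class VOICE-TRAFFIC
Switch(config-pmap-c)# set dscp ef
Switch(config-pmap-c)# exit
Switch(config-pmap)# exit
Switch(config)# interface GigabitEthernet1/0/5
Switch(config-if)# service-policy input MARK-EDGE
Core and distribution devices can then trust the DSCP marking and apply the matching PHB without reclassifying the traffic.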
Per-Hop Behaviors
Different PHBs are used in a network, based on the DSCP of the IP packets.
DSCP selects PHBs throughout the network.
The AF PHB defines a method by which BAs can be given different forwarding assurances. There are four standard AF classes (AF1 through AF4), represented by the three most significant DSCP bits (the aaa bits) with values 001, 010, 011, and 100. Each class should be treated independently and should have allocated bandwidth that is based on the QoS policy.
Traffic in different classes is usually given a proportional measure of priority. If congestion
occurs between classes, the traffic in the higher class is given priority. Also, instead of
using strict PQ, more balanced queue servicing algorithms are implemented (fair queuing
or weighted fair queuing). If congestion occurs within a class, the packets with the higher
drop probability are discarded first. Typically sophisticated drop selection algorithms like
RED are used to avoid tail drop issues.
Class Selector
The class selector provides interoperability between DSCP-based and IP precedence-based
devices in a network.
The following are characteristics of the class selector:
The meaning of the eight bits in the DS field of the IP packet has changed over time to
meet the expanding requirements of IP networks.
Originally, the DS field was referred to as the ToS field, and the first three bits of the field
(bits 5 to 7) defined a packet IP precedence value. A packet could be assigned one of six
priorities based on the IP precedence value (eight total values minus two reserved values).
IP precedence 5 (101) was the highest priority that could be assigned (RFC 791).
RFC 2474 replaced the ToS field with the DS field, where a range of eight values (class
selector) is used for backward compatibility with IP precedence. There is no compatibility
with other bits that are used by the ToS field.
The class-selector PHB was defined to provide backward compatibility for DSCP with ToS-
based IP precedence. RFC 1812 simply prioritizes packets according to the precedence
value. The PHB is defined as the probability of timely forwarding. Packets with higher IP
precedence should be (on average) forwarded in less time than packets with lower IP
precedence.
The last three bits of the DSCP (bits 2 to 4), set to 0, identify a class-selector PHB. You can
calculate the DSCP value for a CS PHB by multiplying the class number by 8. For example,
the DSCP value for CS3 would be equal to (3 * 8) = 24.
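Applying the same calculation to the other class selectors gives CS1 = 8, CS2 = 16, CS3 = 24, CS4 = 32, CS5 = 40, CS6 = 48, and CS7 = 56, which align with IP precedence values 1 through 7.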
24.7 Introducing QoS
Deploying End-to-End QoS
To facilitate true end-to-end QoS on an IP network, a QoS policy must be deployed in the
campus network and the WAN. Each network segment has specific QoS policy
requirements and associated best practices. When the enterprise uses a service provider
network that provides Layer 3 transport, end-to-end QoS requires close coordination
between the QoS policy of the enterprise and of the service provider. Designing and
testing QoS policies is just one step in the overall QoS deployment methodology in the
enterprise environment.
Deploying QoS in an enterprise is a multistep process that is repeated as the business
requirements of the enterprise change.
A successful QoS deployment in an enterprise comprises multiple phases:
1. Strategically defining QoS objectives
2. Analyzing application service-level requirements
3. Designing and testing QoS policies
4. Implementing QoS policies
5. Monitoring service levels to ensure business objectives are being met
A successful QoS policy deployment requires a clear definition of the business objectives
that the enterprise wants to achieve with the QoS implementation. Interview business
stakeholders to identify their business-critical applications and to understand the service-
level requirements of these applications as implemented in the enterprise. It is also crucial
to have executive approval of the QoS policy to ensure that the QoS policy aligns with the
overall strategy of the organization. Once the service-level requirements for the critical business applications are well understood and executive approval of the proposed policy is in place, a detailed policy can be created and tested. Once tested,
the policy can be rolled out across the entire enterprise network. The policy and
performance of the business-critical applications should be constantly monitored to
ensure that the QoS objectives are being met.
These phases need to be repeated as business conditions evolve.
Enterprise Campus QoS Guidelines
A QoS policy is only as strong as the weakest point in the network. If VoIP or video traffic
experiences packet loss or jitter at any point in the network, the user experience will be
noticeably impacted. In order to provide QoS guarantees, an end-to-end QoS deployment
is required that covers traffic from endpoint to endpoint across the entire network path.
The rapid rise of highly sensitive collaboration traffic has made it even more critical to
ensure that QoS is not only deployed on the WAN, where congestion on low-speed links
was the typical cause of poor application performance, but also in the high-speed campus
environment.
Although network administrators sometimes equate QoS only with queuing, the QoS
toolset extends considerably beyond queuing tools. Classification, marking, and policing
are all important QoS functions that are optimally performed within the campus network,
particularly at the access layer ingress edge (the access edge).
• Miracast connections over Wi-Fi Direct allow a device to display photos, files, and
videos on an external monitor or television.
• Wi-Fi Direct for Digital Living Network Alliance (DLNA) lets devices stream music
and video between each other.
• Wi-Fi Direct Print gives users the ability to print documents directly from a smart
phone, tablet, or PC.
Infrastructure Mode
In the infrastructure mode design, an AP is dedicated to centralizing the communication
between clients. This AP defines the frequency and wireless workgroup values. The clients
need to connect to the AP in order to communicate with the other clients in the group and
to access other network devices and resources.
The following are characteristics of the infrastructure mode:
• The AP functions as a translational bridge between 802.3 wired media and 802.11
wireless media.
• Wireless is a half-duplex environment.
• A basic service area (BSA) is also called a wireless cell.
• A BSS is the service that the AP provides.
The central device in the BSA or wireless cell is an AP, which is close in concept to an
Ethernet hub in relaying communication. But, as in an ad hoc network, all devices share
the same frequency. Only one device can communicate at a given time, sending its frame
to the AP, which then relays the frame to its final destination—this is half-duplex
communication.
Although the system might be more complex than a simple peer-to-peer network, an AP is
usually better equipped to manage congestion. An AP can also connect one client to
another in the same Wi-Fi space or to the wired network—a crucial capability.
The comparison to a hub is made because of the half-duplex aspect of the WLAN client
communication. However, APs have some functions that a wired hub simply does not
possess. For example, an AP can address and direct Wi-Fi traffic. Managed switches
maintain dynamic MAC address tables that direct frames to ports based on the destination MAC address of the frame. Similarly, an AP directs traffic to the network
backbone or back into the wireless medium, based on MAC addresses. The IEEE 802.11
header of a wireless frame typically has three MAC addresses but can have as many as
four in certain situations. The receiver is identified by MAC Address 1, and the transmitter
is identified by MAC Address 2. The receiver uses MAC Address 3 for filtering purposes,
and MAC Address 4 is only present in specific designs in a mesh network. The AP uses the
specific Layer 2 addressing scheme of the wireless frames to forward the upper-layer
information to the network backbone or back to the wireless space toward another
wireless client.
In a network, all wireless-capable devices are called stations. End devices are often called
client stations, whereas the AP is often referred to as an infrastructure device.
Like a PC in an ad hoc network, an AP offers a BSS. An AP does not offer an IBSS because
the AP is a dedicated device. The area that the AP radio covers is called a BSA or cell.
Because the client stations connect to a central device, this type of network is said to use
an infrastructure mode as opposed to an ad hoc mode.
If necessary, the AP converts 802.11 frames to IEEE 802.3 frames and forwards them to
the distribution system, which receives these packets and distributes them wherever they
need to be sent, even to another AP.
When the distribution system links two APs, or two cells, the group is called an Extended
Service Set (ESS). This scenario is common in most Wi-Fi networks because it allows Wi-Fi
stations in two separate areas of the network to communicate and, with the proper
design, also permits roaming.
In a Wi-Fi network, roaming occurs when a station moves. It leaves the coverage area of
the AP to which it was originally connected and arrives at the BSA of another AP. In a
proper design scenario, a station detects the signal of the second AP and jumps to it
before losing the signal of the first AP.
For the user, the experience is a seamless movement from connection to connection. For
the infrastructure, the designer must make sure that an overlapping area exists between
the two cells to avoid loss of connection. If an authentication mechanism exists,
credentials can be sent from one AP to another fast enough for the connection to remain
intact. Modern networks often use Cisco WLCs (not shown in the above figure)—central
devices that contain the parameters of all the APs and the credentials of connected users.
Because an overlap exists between the cells, it is better to ensure that the APs do not
work on the same frequency (also called a channel). Otherwise, any client that stays in the
overlapping area affects the communication of both cells. This problem occurs because
Wi-Fi is half duplex. The problem is called co-channel interference and must be avoided by
making sure that neighbor APs are set on frequencies that do not overlap.
Service Set Identifiers
To roam between different APs within a network, the APs must share the same network
name. This network name is called the Service Set Identifier (SSID), which has as many as
32 ASCII characters and is configured on both the AP and the client stations that wish to
join (associate) with this AP. However, the SSID may also require some type of
authorization to determine which station has the right to connect. The term WLAN is
often used to define both the SSID and the associated parameters (VLAN, security, quality
of service [QoS], and so on).
When a profile is configured on a client station, the SSID is a name that identifies which
WLAN the client station may connect to. The AP associates a MAC address to this SSID.
This MAC address can be the MAC address of the radio interface if the AP supports only
one SSID, or it can be derived from the MAC address of the radio interface if the AP
supports several SSIDs. Because each AP has a different radio MAC address, the derived
MAC address is different on each AP for the same SSID name. This configuration allows a
station that stays in the overlapping area to hear one SSID name and still understand that
the SSID is offered by two APs.
The MAC address, usually derived from the radio MAC address and associated with an
SSID, is the Basic Service Set Identifier (BSSID). The BSSID identifies the BSS that is
determined by the AP coverage area.
Because this BSSID is a MAC address that is derived from the radio MAC address, APs can
often generate several values. This ability allows the AP to support several SSIDs in a single
cell.
An administrator can create several SSIDs on the same AP (for example, a guest SSID and
an internal SSID). The criteria by which a station is allowed on one or the other SSID will be
different, but the AP will be the same. This configuration is an example of Multiple Basic
SSIDs (MBSSIDs).
MBSSIDs are basically virtual APs. All of the configured SSIDs share the same physical
device, which has a half-duplex radio. As a result, if two users of two SSIDs on the same AP
try to send a frame at the same time, the frames will collide. Even if the SSIDs are
different, the Wi-Fi space is the same. Using MBSSIDs is only a way of differentiating the
traffic that reaches the AP, not a way to increase the capacity of the AP.
Broadcast Versus Hidden SSID
SSIDs can be either broadcast (or advertised) or not broadcast (or hidden) by the APs. A
hidden network is still detectable. SSIDs appear in probe request packets that are sent from the client, and in probe responses that are sent by the APs.
Client devices that are configured to connect to nonbroadcasting networks will send a Wi-
Fi packet with the network (SSID) that they wish to connect to. This is considered a
security risk because the client may advertise networks that it connects to from home
and/or work. This SSID can then be broadcast by a hacker to entice the client to join the
hacker network and then exploit the client (connect to the client device or get the user to
provide security credentials).
Centralized Wireless Architecture
The centralized, or lightweight, architecture allows the splitting of 802.11 functions
between the controller-based AP, which processes real-time portions of the protocol, and
the WLC, which manages items that are not time-sensitive. This model is also called split
MAC. Split MAC is an architecture for the Control and Provisioning of Wireless Access
Points (CAPWAP) protocol defined in RFC 5415.
Alternatively, an AP can function as a standalone element, without a Cisco WLC, which is
called autonomous mode. In that case, there is no WLC and the AP supports all the
functionalities.
The following are features of Split MAC:
• Centralized tunneling of user traffic to the WLC (data plane and control plane)
• Systemwide coordination for wireless channel and power assignment, rogue AP
detection, security attacks, interference, and roaming
The APs handle only real-time MAC functionality. All MAC functionality that is not real time is processed by the Cisco WLC, which includes the following:
• 802.11 authentication
• 802.11 association and reassociation (roaming)
• 802.11 frame translation and bridging to non-802.11 networks, such as 802.3
• Radio frequency (RF) management
• Security management
• QoS management
APs in a centralized architecture can have different modes of operation:
• Local mode, which is the default operational mode of APs when connected to the
Cisco WLC. When an AP is operating in local mode, all user traffic is tunneled to the
WLC, where VLANs are defined.
• FlexConnect mode, which is a Cisco wireless solution for branch and remote office
deployments, eliminating the need for a WLC at each location. In FlexConnect
mode, client traffic may be switched locally on the AP instead of tunneled to the
WLC.
Control and Provisioning of Wireless Access Points
CAPWAP is the current industry-standard protocol for managing APs. CAPWAP functions
for both IPv4 and IPv6.
CAPWAP is an open protocol that enables a WLC to manage a collection of wireless APs.
CAPWAP control messages are exchanged between the WLC and AP across an encrypted
tunnel. CAPWAP includes the WLC discovery and join process, AP configuration and
firmware push from the WLC, and statistics gathering and wireless security enforcement.
After the AP discovers the WLC, a CAPWAP tunnel is formed between the WLC and AP.
This CAPWAP tunnel can be IPv4 or IPv6. CAPWAP supports only Layer 3 WLC discovery.
Once an AP joins a WLC, the APs will download any new software or configuration
changes. For CAPWAP operations, any firewalls should allow the control plane (UDP port
5246) and the data plane (UDP port 5247).
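For illustration only, an extended ACL on an intermediate firewall or router could permit CAPWAP toward the WLC as in the following sketch; the ACL number and WLC management IP address (10.10.11.10) are assumptions:
Router(config)# access-list 110 permit udp any host 10.10.11.10 eq 5246
Router(config)# access-list 110 permit udp any host 10.10.11.10 eq 5247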
Mapping SSIDs to VLANs
VLANs provide an ideal way of separating users on different WLAN SSIDs when they access
the wired side of the network. By associating each SSID to a different VLAN, you can group
users on the Ethernet segment the same way that they were grouped in the WLAN. You
can also isolate groups from each other, in the same way that they were isolated on the
WLAN.
In the example illustrated in the figure, two SSIDs are associated with different VLANs. The
"Internal" SSID is intended for internal users in the company, while the "Guest" SSID is for
guests visiting the company. Hence, the internal traffic is separated from the guest traffic
in the wired and wireless environment.
When the frames are in different SSIDs in the wireless space, they are isolated from each
other. Different authentication and encryption mechanisms per SSID and subnet isolate
them, even though they share the same wireless space.
When frames come from the wireless space and reach the Cisco WLC, they contain the
SSID information in the 802.11 encapsulated header. The Cisco WLC uses the information
to determine which SSID the client was on.
When configuring the Cisco WLC, the administrator associates each SSID to a VLAN ID. As
a result, the Cisco WLC changes the 802.11 header into an 802.3 header, and adds the
VLAN ID that is associated with the SSID. The frame is then sent on the wired trunk link
with that VLAN ID.
Switch Configuration to Support WLANs
WLCs and APs are usually connected to switches. The switch interfaces must be
configured appropriately, and the switch must be configured with the appropriate VLANs.
The VLAN configuration on the switches is the same as usual. The interface configuration differs, however, depending on whether the deployment is centralized (using a WLC) or autonomous (without a WLC).
Switch VLAN Configuration to Support WLANs
The following types of VLANs are required with WLANs:
1. Management VLAN
2. AP VLAN
3. Data VLAN
The management VLAN is for the WLC management interface configured on the WLC. The
APs that register to the WLC can use the same VLAN as the WLC management VLAN, or
they can use a separate VLAN. The APs can use this VLAN to obtain IP addresses through
DHCP and send their discovery request to the WLC management interface using those IP
addresses. To support wireless clients, you will need a VLAN (or VLANs) with which to map
the client SSIDs to the WLC. You may also want to use the DHCP server for the clients.
Note: Layer 3 mode is the dominant mode today, where the AP interfaces are on a
different subnet than the WLC management interface.
On the switch, the VLANs must first be created to support the WLAN management, APs,
and wireless clients, as shown in the following example:
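A minimal sketch of that VLAN creation, assuming VLAN 11 for WLC management, VLAN 12 for the APs, and VLAN 14 for the wireless client data (the VLAN names are illustrative), could look like this:
Switch(config)# vlan 11
Switch(config-vlan)# name WLC-MANAGEMENT
Switch(config-vlan)# vlan 12
Switch(config-vlan)# name AP-MANAGEMENT
Switch(config-vlan)# vlan 14
Switch(config-vlan)# name WIRELESS-CLIENTS
Switch(config-vlan)# end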
Note: It is good practice to use a naming convention that easily identifies your VLANs in
the switch.
Second, you will need either a Layer 3 switch or a router to perform inter-VLAN routing.
Usually, inter-VLAN routing is configured along with the VLAN creation. For this example,
assume that inter-VLAN routing is already configured.
Switch Port Connected to WLC Configuration
The following example shows the configuration of the switch interface that is connected
to the Cisco WLC. The WLC and the switch are typically connected through a trunk port. Per security recommendations, only the VLANs that are needed should be allowed on the trunk; therefore, only the WLC management, AP, and data VLANs are allowed.
The following are the steps for configuration of the switch port connected to the WLC:
1. Enter global configuration mode.
2. Choose the physical port that the WLC is connected to on the switch.
3. Enter a description (for example, WLC hostname).
4. Set the port to trunk mode.
5. Set the allowed VLANs and, optionally, a native VLAN.
In this example, VLAN 11 represents the WLC management VLAN, and VLAN 14 represents
the wireless client VLAN (associated to an SSID). The AP VLAN 12 must be allowed on this
trunk, since the connectivity between the AP and the WLC is over a Layer 3 connection.
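A sketch of these steps, assuming the WLC connects to GigabitEthernet1/0/1 and is named WLC-1, could look like this:
Switch(config)# interface GigabitEthernet1/0/1
Switch(config-if)# description WLC-1
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 11,12,14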
Optionally, you can use link aggregation (LAG) to bundle multiple ports on the WLC,
providing port redundancy and load balancing. Note that a WLC can still connect to only
one neighboring switch. In this case:
• The switch needs to bundle ports towards the WLC into an EtherChannel with
mode "on" configured.
• The switch port channel interface must be configured as trunk port, with all data
VLANs and the AP and management VLANs allowed.
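As a sketch only, assuming two physical ports toward the WLC, the switch side of such a LAG could be configured like this:
Switch(config)# interface range GigabitEthernet1/0/1 - 2
Switch(config-if-range)# channel-group 1 mode on
Switch(config-if-range)# exit
Switch(config)# interface Port-channel1
Switch(config-if)# description WLC-1-LAG
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 11,12,14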
Switch Port Connected to WLC-Based AP Configuration
The WLC-based AP in local mode usually connects to an access port (nontrunking). The
access VLAN is used for traffic to and from the WLC. In a typical configuration, no traffic
from or to a wireless client can transit directly through the AP without going to the WLC.
The following are the steps for configuration of the switch port connected to the AP:
1. Enter global configuration mode.
2. Choose the physical port that the AP is connected to on the switch.
3. Enter a description (for example, AP hostname).
4. Set the access VLAN (AP VLAN).
5. Set the port to access mode.
In this example, VLAN 12 represents the AP VLAN, which allows the AP to access its DHCP server. As indicated, this VLAN should have Layer 3 connectivity with the WLC management interface.
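A sketch of these steps, assuming the AP connects to GigabitEthernet1/0/10, could look like this:
Switch(config)# interface GigabitEthernet1/0/10
Switch(config-if)# description AP-1
Switch(config-if)# switchport access vlan 12
Switch(config-if)# switchport mode access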
The figure and the following steps illustrate how CAPWAP communication works:
1. Based on the switch port configuration, the AP is connected to the switch on an
access port (the VLAN for AP to get DHCP). The WLC is connected to the switch on
a trunk port, allowing VLANs for WLC management (VLAN 11), AP (VLAN 12), and
the wireless clients (VLAN 14).
2. The AP and WLC create a CAPWAP tunnel.
3. The client associates to the AP with an SSID of "CORP."
4. The AP sends the client data that is marked with SSID "CORP" through the CAPWAP
tunnel to the WLC.
5. The WLC decapsulates the CAPWAP traffic.
6. The SSID of "CORP" is mapped in the WLC to VLAN ID 14.
7. The WLC tags the data with VLAN 14 before sending it back on the trunk port
(where VLAN 14 is allowed) to the switch.
8. The switch sends it on to the network (based on the destination in the packet).
Switch Port Connected to Autonomous AP Configuration
An autonomous AP connects to a trunk port. On the trunk a native (untagged) VLAN is
required for management of the AP. By default, all VLANs are allowed over the trunk link.
To enhance security, you should specify which VLANs are permitted over the trunk link,
which should include the AP management VLAN.
The configuration of the switch port in this case is very similar to the configuration of a
port connected to a WLC, as shown in the following example.
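A sketch of such a port, assuming VLAN 10 as the AP management (native) VLAN, VLAN 14 as a data VLAN, and interface GigabitEthernet1/0/11, could look like this:
Switch(config)# interface GigabitEthernet1/0/11
Switch(config-if)# description AUTONOMOUS-AP-1
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk native vlan 10
Switch(config-if)# switchport trunk allowed vlan 10,14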
The ISM band (2.4-GHz spectrum) was planned with channels that are 22-MHz wide. The channel center frequencies are spaced 5 MHz apart. There are 11 channels available in the United States, 13 in Europe, and 14 in Japan.
But if a device uses a channel that is 22-MHz wide (11 MHz on each side of the peak
channel), then this channel will encroach on the neighboring channels. As a result, there
are only three nonoverlapping channels in the United States and in Europe: 1, 6, and 11.
Any attempt to use channels that are closer to each other will result in interference issues.
Nonoverlapping channels need to be separated by 25 MHz at center frequency or by five
channel bands. In Japan, four channels (1, 6, 11, and 14) can be used because channel 14
is far apart from the other channels. Channel 14 can only be used in 802.11b networks
(not IEEE 802.11g/n).
Note: 802.11n allows for 40-MHz channels for 2.4 GHz, but the implementation is only
feasible in residential deployments. Using 40-MHz channels in the 2.4-GHz band reduces
the nonoverlapping channels.
The 5-GHz band is divided into several sections: four UNII bands and one ISM band.
Channels in these sections are spaced at 20-MHz intervals and are considered noninterfering; however, they do have a slight overlap in frequency spectrum.
Consecutive channels can be used in neighboring cell coverage, but neighboring cell
channels should be separated by at least one channel when possible.
Since there are more nonoverlapping channels in 5 GHz, you can use so-called "channel bonding," where adjacent channels are merged to achieve wider channels (40-MHz, 80-MHz, or 160-MHz wide instead of 20 MHz), which in practice roughly multiplies data rates by 2, 4, or 8.
Many regulatory domains enforce different laws for each of these bands, so even though
they may all be considered 5-GHz bands, operation in each set of channels may be
different. Also, some of the channels might not be available in all regulatory domains
(United States, Europe, Japan).
2.4-GHz and 5-GHz Comparison
Signals in the 2.4-GHz frequency have greater range and better propagation through
obstacles. On the other hand, many devices are using the 2.4-GHz frequency band and,
therefore, producing interference. It is not only Wi-Fi devices, but also many nonwireless
devices exist, so the spectrum is really crowded. There are also a limited number of
channels that do not overlap.
The 5-GHz spectrum is less crowded with many more non-overlapping channels, but it still
has some drawbacks. Older devices do not support it, so you might still need 2.4 GHz in
your network. The signal attenuates more quickly, so its range and propagation through obstacles are worse. Also, the band is not completely free of non-Wi-Fi interference, because weather radars can operate in this frequency range.
Other Non-802.11 Radio Interferers
Because the 2.4-GHz ISM band is unlicensed, the band is crowded by transmissions from
many devices, such as RF video cameras, baby monitors, and microwave ovens. Most of
these devices are high-powered, and they do not send IEEE 802.11 frames but can still
cause interference for Wi-Fi networks.
For example, RF video cameras operate by exchanging information (the image stream)
between a transmitter (the camera) and the receiver (linked to a video display). These cameras usually use 100 milliwatts (mW) of power and a channel that is narrower than Wi-Fi. The
stream of information is continuous and severely affects any Wi-Fi network in the
neighboring channels. These cameras and Wi-Fi are incompatible—an AP cannot natively
receive and understand a camera video stream.
As another example, baby monitors are found more in home environments than in
industrial or office networks (although they can be found in hospitals, nurseries, and many
other social service or education-related environments). The keepalive information exchanged between the monitoring stations can be one-way or two-way and is half duplex. Some of these monitors can use several channels for one monitor station to
control two devices. The monitors can use 100 mW of power. They are not 802.11
technologies but work in the same frequency and power as 802.11 devices.
Microwave ovens provide a pulse form of interference in the middle of the Wi-Fi, 2.4-GHz
band at a much higher power. Wi-Fi AP transmitters are measured in milliwatts, while
microwave ovens use a power level of over 1000 W.
Fluorescent lights can also interact with Wi-Fi systems, although not as RF transmitters. The lamps are driven with alternating current (AC) power, so they
switch on and off many times each second. When the lights are on, the gas in the tube is
ionized and conductive. Because the gas is conductive, it reflects RF. When the tube is off,
the gas does not reflect RF. The net effect is a potential source of interference that comes
and goes many times per second.
Generally speaking, any device that uses a radio should be checked to determine whether
it works in one of the Wi-Fi spectrums.
When APs and WLCs are on separated subnets (no common broadcast domain), DHCP
Option 43 is one method that can be used to map APs to their WLCs.
DHCP Option 43 is specified as a vendor class identifier in RFC 2132. It is used to identify
the vendor type and configuration of a DHCP client. Option 43 can be used to include the
IP address of the Cisco WLC interface that the AP is attached to.
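As an illustrative sketch, a DHCP pool for the AP VLAN on a Cisco IOS router could carry the WLC management address in Option 43. The subnet, default router, and WLC management address (10.10.11.10) are assumptions; for Cisco lightweight APs, the value is a TLV with type 0xf1, a length of 4 per controller, and then each controller IPv4 address in hex:
Router(config)# ip dhcp pool AP-POOL
Router(dhcp-config)# network 10.10.12.0 255.255.255.0
Router(dhcp-config)# default-router 10.10.12.1
Router(dhcp-config)# option 43 hex f104.0a0a.0b0a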
Note: In IPv6, DHCP version 6 (DHCPv6) Option 52 can be used for the same purpose. For simplicity, only DHCP for IPv4 is discussed here.
There are two ways of implementing DHCP:
An AP can use DNS during the boot process as a mechanism to discover WLCs that it can
join. This process is done using a DNS server entry for CISCO-CAPWAP-
CONTROLLER.localdomain.
The localdomain entry represents the domain name that is passed to the AP in DHCP
Option 15.
The DNS discovery option mode operates as follows:
1. The AP requests its IPv4 address from DHCP, and includes Options 6 and 15
configured to get DNS information.
2. The IPv4 address of the DNS server is provided by the DHCP server from the DHCP
option 6.
3. The AP will use this information to perform a hostname lookup using CISCO-
CAPWAP-CONTROLLER.localdomain. This hostname should be associated to the
available Cisco WLC management interface IP addresses (IPv4, IPv6, or both).
4. The AP will then be able to associate to responsive WLCs by sending packets to the
provided address.
Network Time Protocol
Network Time Protocol (NTP) is used in WLANs, much like it is in LANs. It provides
date/time synchronization for logs and scheduled events.
In WLANs, NTP also plays an important role in the AP join process. When an AP is joining a Cisco WLC, the WLC verifies the AP's embedded certificate. If the date and time configured on the WLC precede the creation and installation date of the certificates on the AP, the AP fails to join the WLC. Therefore, the WLC and AP should synchronize their time using NTP.
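On an AireOS-based WLC, for example, an NTP server can be added from the CLI; the server index and address here are assumptions:
(Cisco Controller) > config time ntp server 1 10.10.11.100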
When using a global AAA server, there must be IP reachability between the WLC and the AAA server, because the WLC needs to authenticate itself to the server and to pass client credentials to it as well.
Management Protocols
Small to midsize businesses can use HTTPS and manage their Cisco WLCs directly through
the GUI. From the GUI, you can view the status and trap logs from the Management
console menu.
Larger businesses can use SNMP to view the status of the Cisco WLC, or to control it from
a remote management station. Cisco Digital Network Architecture (DNA) Center is an
example of one such management station.
Command-Line Interface
A Cisco WLC does not have a default configuration, so you must run a setup wizard. The
initial WLC configuration is accomplished either via the console port and CLI or via the
WLC web interface. The setup using the console port requires a PC with either an available
serial (DB-9) or Universal Serial Bus (USB) port and appropriate adapter.
Like on other Cisco devices, the WLC CLI is available via the following:
The spine-and-leaf model is a two-tiered architecture where servers connect to the leaf
switches in the topology, while the spine layer is the backbone that interconnects all leaf
switches.
A model that helps in the design of a larger enterprise network is the Cisco Enterprise Architecture model.
This design is based on modules that correspond to a specific place in the network or a
specific function they have in a network. These modules represent areas that have
different physical or logical connectivity. Basic modules in the Cisco Enterprise
Architecture model are Enterprise Campus, Enterprise Edge, and Service Provider Edge.
Larger network designs also include a module for Remote Locations, such as Enterprise
Branch, Remote Data Center, and Remote Workers.
Issues in a Poorly Designed Network
A poorly designed network has increased support costs, reduced service availability, and
limited support for new applications and solutions. Suboptimal performance directly affects end users and their access to resources.
One symptom of a poorly designed network is congestion. Congestion is a result of
suboptimal traffic flow or the selection of inappropriate devices or links. The most
probable cause of any problem is an inadequate or outdated design.
Even when a network is first implemented following a validated architecture, its design can degrade over time into an undesirable one. This situation might result from
nonsystematic, uncontrolled expansion, or in other words, a liberal addition of devices
without overall consideration of the design.
The importance of careful design can be seen by examining an example of a flat network
that does not follow a structured design. Devices in a flat design are connected to each
other using Layer 2 switches without the use of VLANs. All devices on this network share
the available bandwidth and all are members of the same broadcast domain. They are
usually also in the same IP subnet. Layer 2 devices that build a flat network provide little
opportunity to control broadcasts or to filter undesirable traffic. As more devices and
applications are added to a flat network, network performance degrades until the
network becomes unpredictable, slow, or even unusable.
These issues are often found in poorly designed networks. A structured, tiered design helps to avoid them and offers these benefits:
• A tiered design allows you to better understand the features that may be needed,
where they will be needed, and which devices need them within your final
solution. Knowing which feature goes where helps when choosing the needed
devices.
• A tiered design has stood the test of time, because it can be upgraded as
technology changes and it evolves as needs grow. This adaptability allows a
corporation to continue with a design philosophy and reuse (or repurpose) equipment, perhaps at a different level, as they upgrade over time.
• A tiered design makes it easy to discuss and learn about a particular part of the
solution.
• The modularity of tiered models is based on designing in layers, each with its own
functionalities and devices. The network can expand by adding additional devices
in different layers and interconnecting them.
The hierarchical three-tier model includes access, distribution, and core layers.
• The access layer provides physical connection for devices to access the network.
The distribution layer is designed to aggregate traffic from the access layer.
• The distribution layer is also called the aggregation layer. As such, it represents a
point that most of the traffic traverses. Such a transitory position is appropriate for
applying policies, such as QoS, routing, or security policies.
• The core layer provides fast transport between distribution layer devices and it is
an aggregation point for the rest of the network. All distribution layer devices
connect to the core layer. The core layer provides high-speed packet forwarding
and redundancy.
If you choose a hierarchical tiered architecture, the exact number of tiers that you would
implement in a network depends on the characteristics of the deployment site. For
example, a site that occupies a single building might only require two layers while a larger
campus of multiple buildings will most likely require three layers. In smaller networks,
core and distribution layers are combined and the resulting architecture is called a
collapsed core architecture.
End devices on the LAN communicate with end devices on the same or separate network
segments. If the destination end device is on the same network segment, the request will
get switched directly to the connected host. If the destination end device is in another
segment, the request traverses one or more extra network hops, through the distribution
layer to the core, which introduces latency. The communication of end devices that flows
through other tiers (goes "up and down" the devices) is said to have a "north-south"
nature.
Typically, devices placed in distribution and core layers are required to be more resilient,
have better performance characteristics, and support more features. They are usually
termed high-end or higher-end devices in contrast to low-end devices often found in the
access layer, which provide only basic functions and features.
The three-tier model is usually applied for server and desktop connectivity in a campus.
The model has evolved to include a design for small and midsize environments. For
example, the figure shows a data center that provides dedicated network services. Note
that the figure has the access layer at the top instead of the bottom. Network topologies
may have the layers in different positions. However, what is important is the function of
each layer, not its position in a diagram. The network provides access to all services
available in the data center, such as IP Telephony services, wireless controller services,
and network management. It can also include computing and data storage services,
located within the data center.
The three-tier approach is also used for private and public external connections, for
instance, in an enterprise edge module that includes private WAN and virtual private
network (VPN) connections, and public internet connectivity.
Access Layer
The main purpose of the access layer is to enable end devices to connect to the network
via high-bandwidth links. It attaches endpoints and devices that extend the network, such
as IP phones and wireless APs. The access layer handles different types of traffic, including
voice and video that has different demands on network resources.
The access layer serves several functions, including network access control such as:
In spine-leaf two-tier architecture, every lower-tier switch (leaf layer) is connected to each
of the top-tier switches (spine layer) in a full-mesh topology. The leaf layer consists of
access switches that connect to devices such as servers. The spine layer is the backbone of
the network and is responsible for interconnecting all leaf switches. Every leaf switch
connects to every spine switch. Typically a Layer 3 network is established between leaves
and spines, so all the links can be used simultaneously.
The path between leaf and spine switches is randomly chosen so that the traffic load is
evenly distributed among the top-tier switches. If one of the top tier switches were to fail,
it would only slightly degrade performance throughout the data center. If
oversubscription of a link occurs (that is, if more traffic is generated than can be
aggregated on the active link at one time), the process for expanding the network is
straightforward. An extra spine switch can be added, and uplinks can be extended to
every leaf switch, resulting in the addition of interlayer bandwidth and reduction of the
oversubscription. If device port capacity becomes a concern, a new leaf switch can be
added by connecting it to every spine switch.
With a spine-leaf architecture, the traffic between two leaves always crosses the same
number of devices (unless communicating devices are located on the same leaf.) This
approach keeps latency at a predictable level because a payload only has to hop to a spine
switch and another leaf switch to reach its destination.
A spine-leaf approach allows architects to build a network that can expand and collapse
(be more elastic) as needed, meaning that components (servers, switches, and ports) can
be added dynamically as the load of applications grows. This elastic approach suits data
centers that host applications that are distributed across many hosts—with hosts being
dynamically added as the solution grows.
This approach is beneficial for topologies where end devices are relatively close together
and where fast scaling is necessary, such as modern data centers.
A main concern that the spine-leaf model addresses is the addition of new leaf (access)
layer switches and the redundant cross-connections that are needed for a scalable data
center. It has been estimated that a spine-leaf model allows for 25-percent greater
scalability over a three-tier model when used for data center designs.
The spine-leaf design has these additional benefits for a modern data center:
• Increased scale within the spine to create equal-cost multipaths from leaf to spine
• Support for higher performance switches and higher speed links (10-Gigabits per
second [Gbps], 25-Gbps, 40-Gbps, and 100-Gbps)
• Reduced network congestion by isolating traffic and VLANs on a leaf-by-leaf basis
• Optimization and control of east-west traffic flows
26.5 Introducing Architectures and Virtualization
Cisco Enterprise Architecture Model
The Cisco Enterprise Architecture model recognizes several functional areas of a network
and provides a network module to support these functions. Failures within a module are
isolated from the rest of the network. Changes and upgrades can be applied to particular
modules and implemented in a controlled manner. Network services, such as security and
QoS, are also implemented on a modular basis.
Cloud outsourcing can be appropriate in scenarios such as the following:
• For an enterprise that may not have the in-house expertise to effectively manage their current and future IT infrastructure, especially if cloud services primarily involve basic elements such as email, DHCP, DNS, document processing, and collaboration tools.
• For large enterprises and government or public organizations, where resources are
shared by many users or organizational units.
• For enterprises in which computing resource needs might increase on an ad hoc
basis and for a short term. This usage scenario is sometimes called cloud bursting.
When computing requirements increase, the cloud resources are coupled with on-
premises resources only while required.
• For enterprises that decide to outsource only part of their resources. For example,
an enterprise might outsource their web front-end infrastructure, while keeping
other resources on-premises, such as application and database services.
There are also situations in which cloud outsourcing would not be possible. Regulations
might dictate that an enterprise fully own and manage their infrastructure. For enterprises
running business applications that have strict response-time requirements, cloud
outsourcing might not be the appropriate solution.
Cloud deployment models describe the cloud ownership and control of data in the cloud.
The four cloud deployment models distinguished by NIST are as follows:
• Public clouds: Public clouds are open to use by the general public and managed by
a dedicated cloud service provider. The cloud infrastructure exists on the premises
of the cloud provider and is external to the customers (businesses and individuals).
The cloud provider owns and controls the infrastructure and data. Outsourcing
resources to a public cloud provider means that you have little or no control over
upgrades, security fixes, updates, feature additions, or how the cloud provider
implements technologies.
• Private cloud: The main characteristic of a private cloud is the lack of public access.
Users of private clouds are particular organizations or groups of users. A private
cloud infrastructure is owned, managed, and operated by a third party, or the user
itself. An enterprise might own a cloud data center and IT departments might
manage and operate it, which allows the user to enjoy advantages that a cloud
provides, such as resiliency, scalability, easier workload distribution, while
maintaining control over corporate data, security and performance.
• Community cloud: The community cloud is an infrastructure intended for users
from specific organizations that have common business-specific objectives or work
on joint projects and have the same requirements for security, privacy,
performance, compliance, and so on. Community clouds are "dedicated," in other
words, they are provisioned according to the community requirements. They can
be considered halfway between a public and private cloud—they have a
multitenant infrastructure, but are not open for public use. A community cloud can
be managed internally or by a third party and it may exist on or off premises. An
example of a community cloud is Worldwide LHC Computing Grid, a European
Organization for Nuclear Research global computing resource to store, distribute,
and analyze the data of operations from the Large Hadron Collider (LHC).
• Hybrid cloud: A hybrid cloud is the cloud infrastructure that is a composition of
two or more distinct cloud infrastructures, such as private, community, or public
cloud infrastructures. This deployment takes advantage of security provided in
private clouds and scalability of the public clouds. Some organizations outsource
certain IT functions to a public cloud but prefer to keep higher-risk or more
tailored functions in a private cloud or even in-house. An example of hybrid
deployment would be using public clouds for archiving of older data, while keeping
the current data in the private cloud. The user retains control over how resources
are distributed. For hybrid solutions to provide data protection, great care must be
taken that sensitive data is not exposed to the public.
Clouds are large data centers whose computing resources (storage, processing, memory, and network bandwidth) are shared among many users. The computing resources of a cloud are offered as a service rather than as a product. Clouds can offer
anything a computer can offer, from processing capabilities to operating system and
applications, therefore cloud services vary considerably. Service models define which
services are included in the cloud.
NIST has defined three service models, which differ in the extent to which the IT infrastructure is provided by the cloud. The following three NIST-defined service models also define the
responsibilities for management of the equipment and the software between the service
provider and the customer.
The data plane: The primary purpose of routers and switches is to forward packets and
frames through the device onward to final destinations. The data plane, also called the
forwarding plane, is responsible for the high-speed forwarding of data through a network
device. Its logic is kept simple so that it can be implemented by hardware to achieve fast
packet forwarding. The forwarding engine processes the arrived packet and then forwards
it out of the device. Data plane forwarding is very fast and is performed in hardware. To achieve efficient forwarding, routers and switches create and utilize data structures,
usually called tables, which facilitate the forwarding process. The control plane dictates
the creation of these data structures. Examples of data plane structures are Content
Addressable Memory (CAM) table, Ternary CAM (TCAM) table, Forwarding Information
Base (FIB) table, and Adjacency table.
Cisco routers and switches also offer many features to secure the data plane. Almost
every network device has the ability to utilize ACLs, which are processed in hardware, to
limit allowed traffic to only well known and desirable traffic.
Note: Data plane forwarding is implemented in specialized hardware. The actual
implementation depends on the switching platform. High-speed forwarding hardware
implementations can be based on specialized integrated circuits called ASICs, field-
programmable gate arrays (FPGAs), or specialized network processors. Each of the
hardware solutions is designed to perform a particular operation in a highly efficient way.
Operations performed by an ASIC may vary from compression and decompression of data, or computing and verifying checksums, to filtering or forwarding frames based on their MAC address.
The control plane consists of protocols and processes that communicate between
network devices to determine how data is to be forwarded. When packets that require
control plane processing arrive at the device, the data plane forwards them to the device’s
processor, where the control plane processes them.
In cases of Layer 3 devices, the control plane sets up the forwarding information based on
the information from routing protocols. The control plane is responsible for building the
routing table or Routing Information Base (RIB). The RIB in turn determines the content of
the forwarding tables, such as the FIB and the adjacency table, used by the data plane. In
Layer 2 devices, the control plane processes information from Layer 2 control protocols,
such as STP and Cisco Discovery Protocol, and processes Layer 2 keepalives. It also
processes information from incoming frames (such as the source MAC address to fill in the
MAC address table).
When high packet rates overload the control or management plane (or both), device
processor resources can be overwhelmed, reducing the availability of these resources for
tasks that are critical to the operation and maintenance of the network. Cisco networking
devices support features that facilitate control of traffic that is sent to the device
processor to prevent the processor itself from being overwhelmed and affecting system
performance.
The control plane processes the traffic that is directly or indirectly destined to the device
itself. Control plane packets are handled directly by the device processor, which is why
control plane traffic is called process switched traffic.
There are generally two types of process switched traffic. The first type of traffic is
directed, or addressed, to the device itself and must be handled directly by the device
processor. An example would be a routing protocol data exchange. The second type of
traffic that is handled by the CPU is data plane traffic with a destination beyond the device
itself, but which requires special processing by the device processor. One example of such
traffic is IPv4 packets with a Time to Live (TTL) value, or IPv6 packets with a Hop Limit value, that is less than or equal to one. Such packets require Internet Control Message
Protocol (ICMP) Time Exceeded messages to be sent, which results in CPU processing.
The management plane consists of functions that achieve the management goals of the
network, which include interactive configuration sessions, and statistics gathering and
monitoring. The management plane performs management functions for a network and
coordinates functions among all the planes (data, control, and management). In addition,
the management plane is used to manage a device through its connection to the network.
The management plane is associated with traffic related to the management of the
network or the device. From the device point of view, management traffic can be destined
to the device itself or intended for other devices. The management plane encompasses
applications and protocols such as Secure Shell (SSH), Simple Network Management
Protocol (SNMP), HTTP, HTTPS, Network Time Protocol (NTP), TFTP, FTP, and others that
are used to manage the device and the network.
From the perspective of a network device, there are three general types of packets as
related to the functional planes:
• Transit packets and frames include packets and frames that are subjected to
standard, destination IP, and MAC-based forwarding functions. In most networks
and under normal operating conditions, transit packets are typically forwarded
with minimal CPU involvement or within specialized high-speed forwarding
hardware.
• Receive or for-us packets include control plane and management plane packets
that are destined for the network device itself. Receive packets must be handled
by the CPU within the device processor, because they are ultimately destined for
and handled by applications running at the process level within the device
operating system.
• Exception IP and non-IP information include IP packets that differ from standard IP
packets, such as IPv4 packets containing the Options field in the IPv4 header, IPv4
packets with a TTL that expires, and IPv4 packets with unreachable destinations.
Examples of non-IP packets are Layer 2 keepalives, ARP frames, and Cisco
Discovery Protocol frames. All packets and frames in this set must be handled by
the device processor.
In traditional networking, the control and data planes exist within one device. With the
introduction of software-defined networking (SDN), the management and control planes
are abstracted into a control layer, typically a centralized solution: a specialized
network controller, which implements a virtualized software orchestration to provide
network management and control functions. Infrastructure layer devices, such as switches
and routers, focus on forwarding data. The application layer consists of SDN applications,
which communicate network requirements towards the controller.
A VM runs its own operating system and applications. The applications are not aware that
they are running in a virtualized environment.
A hypervisor can be deployed in one of two ways:
• The hypervisor is running directly on the physical server hardware. This is also
called native, bare-metal, or Type-1 hypervisors.
• The hypervisor runs on a host operating system (in other words the operating
system of the physical device). This is also called a hosted or Type-2 hypervisor.
The figure illustrates types of full virtualization.
Note: Other virtualization types are partial virtualization and paravirtualization. In partial
virtualization, the guest operating system is aware of the physical hardware that the
hypervisor is running on and adjusts so that the communication is easier to translate for
the hypervisor, reducing overhead. In paravirtualization, the guest operating system is
aware of the hypervisor communication requirements and translates complex calls that
cause most of the overhead into the hypervisor-optimized calls or initiates special features
of the hypervisor.
Examples of hypervisor software are VMware ESXi and VMware Workstation, Microsoft
Hyper-V and Microsoft Virtual PC, Citrix XenServer, Oracle VM and Oracle VM Virtual Box,
Red Hat Enterprise Virtualization, and others.
VMs offer several benefits over physical devices.
• Partitioning:
o VMs allow for a more efficient use of resources, because a single physical
device can serve many VMs, which can be rearranged across different
servers, according to load.
o A hypervisor divides host system resources between VMs and allows VM
provisioning and management.
• Isolation:
o VMs in a virtualized environment have as much security as is present in
traditional physical server environments because VMs are unaware of the
presence of other VMs.
o VMs that share the same host are completely isolated from each other, but
can communicate over the network.
o Recovery in cases of failure is much faster with VMs than with physical
servers. Failure of a critical hardware component, such as a motherboard or
power supply, can bring down all the VMs that reside on the affected host.
Affected VMs can be easily and automatically migrated to other hosts in
the virtual infrastructure, providing for shorter downtime.
• Encapsulation:
o VMs reside in a set of files that describe them and define their resource
usage and unique identifiers.
o VMs are extremely simple to back up, modify, or even duplicate in a
number of ways.
o This encapsulation can be deployed in environments that require multiple
instances of the same VM, such as classrooms.
• Hardware abstraction:
o Any VM can be provisioned or migrated to any other physical server that
has similar characteristics.
o Support is provided for multiple operating systems: Windows, Linux, and so
on.
o Broader support for hardware, since the VM is not reliant on drivers for
physical hardware.
An issue with hosting multiple VMs on physical servers is that the physical server
represents a single point of failure for all guest machines and services running on them.
Also, if maintenance of the physical server requires machine shutdown, it shuts down all
the software components on it. Since VMs exist as files, the migration of an entire VM,
with its operating system and applications, is a matter of copying a file to another physical
machine. Once files are copied, the VM can be started and resume its operation on the
new physical host.
The mobility of VMs is an advantage of a virtualized environment and is beneficial for these
reasons:
• Optimum performance: If a VM on a given host starts exceeding the resources of
the host, it can be moved to another host that has sufficient resources.
• Maintenance: If there is a need to perform maintenance or upgrade a host, the
VMs from that host can be temporarily redistributed to other hosts. After the
maintenance is complete, the process can be reversed, resulting in no downtime
for users.
• Resource optimization: If the resource usage of one or more VMs decreases, one
or more hosts may no longer be needed. In this case, the VMs can be redistributed
and the hosts that are emptied can be powered off to reduce cooling and power
requirements.
Virtualization is not limited to servers but also extends to other infrastructure
components, including networks. Virtualized servers communicate among themselves and
with the external resources. Virtualization affects networking requirements because
communications of multiple VMs are multiplexed onto the same physical network
connections provided by the host machine. Networking functions such as NICs, firewalls, and switches can also be virtualized and moved to reside inside a host machine.
A virtual switch emulates a Layer 2 switch. It runs as part of a hypervisor and provides
network connectivity for all VMs. When connected to a virtual switch, VMs behave as if
they are connected to a normal network switch.
The figure shows four VMs that are connected to the same virtual switch that has access
to the outside network.
Containers
Containers are similar to VMs in many ways, but also different. Just as with VMs,
containers are instances that run on a host (bare metal or virtual) machine. Like VMs, they
can be customized and built to whatever specification is desired, and can be used the
same way that a VM is used, allowing isolated processes, networking, users, and so on.
Containers differ from VMs in that a guest operating system is not installed. Rather, when
application code is run, the container only runs the necessary processes that support the
application. This is because containers are made possible using kernel features of the host
operating system and a layered file system instead of the emulation layer required to run
VMs. This also means that containers do not consist of different operating systems with installed applications, but instead contain only the necessary components that set them apart as different Linux vendor versions and variants.
Because a container does not require its own operating system, it uses fewer resources and consumes only the resources required for the application that is run upon starting the container. Therefore, applications can consist of
smaller containerized components (which are the binaries and libraries required by the
applications) instead of legacy monolithic applications installed on a virtual or bare metal
system.
Containers are also similar to VMs in that they are stored as images, although a big difference is that container images are much smaller and more portable than VM images, for the aforementioned reason that an operating system installation is not required as part of the image. This makes it possible to have a packaged, ready-to-use application
that runs the same regardless of where it is, as long as the host system runs containers
(Linux containers specifically).
A number of container technologies are available, with Linux leading the charge. One of
the more popular platforms is Docker, which is based on the Linux libcontainer library. Docker itself is a management system that is used to create, manage, and monitor Linux containers. Ansible, an automation tool favored by Red Hat, can also be used to manage containers.
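As a rough illustration, running a containerized application with the Docker CLI might look like the following (the image name and port mapping are arbitrary examples, not part of the course material):

    docker pull nginx                 # download a container image from a registry
    docker run -d -p 8080:80 nginx    # start the container, mapping host port 8080 to container port 80
    docker ps                         # list running containers and their IDs
    docker stop <container-id>        # stop the container when it is no longer needed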
Virtualization of Networking Functions
Networking functions can also be virtualized with networking devices acting as hosts. The
virtualization main principle remains the same: one physical device can be segmented into
several devices that function independently. Examples include subinterfaces and virtual interfaces, Layer 2 VLANs, Layer 3 virtual routing and forwarding (VRF) instances, and virtual device contexts.
Network device interfaces can be logically divided into subinterfaces, which are created
without special virtualization software. Rather, subinterfaces are a configuration feature
supported by the network device operating system. Subinterfaces are used when
providing router-on-a-stick inter-VLAN routing, but there are other use cases also.
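A minimal Cisco IOS sketch of router-on-a-stick subinterfaces follows; the interface numbers, VLAN IDs, and addresses are illustrative only:

    ! one subinterface per VLAN on a single physical trunk interface
    interface GigabitEthernet0/0.10
     encapsulation dot1Q 10
     ip address 192.168.10.1 255.255.255.0
    !
    interface GigabitEthernet0/0.20
     encapsulation dot1Q 20
     ip address 192.168.20.1 255.255.255.0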
VLANs are a virtual element mostly related to Layer 2 switches. VLANs divide a Layer 2
switch into multiple virtual switches, one for each VLAN, effectively creating separate
network segments. Traffic from one VLAN is isolated from the traffic of another VLAN.
A switch virtual interface (SVI) is another virtualization element in Layer 2 devices. It is a
virtual interface that can have multiple physical ports associated with it. In a way, it acts as
a virtual switch in a virtualized machine. Again, to create VLANs and SVIs you only need to
configure them using features included in the device operating system.
To provide logical Layer 3 separation within a Layer 3 device, the data plane and control
plane functions of the device must be segmented into different VRF contexts. This process
is similar to the way that a Layer 2 switch separates the Layer 2 control and data planes
into different VLANs.
With VRFs, routing and related forwarding information is separated from other VRFs. Each VRF is isolated from other VRFs. Each VRF contains a separate address space and makes routing decisions that are independent of any other VRF. Layer 3 interfaces, whether logical or physical, are assigned to a specific VRF.
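As a hedged IOS-style sketch (the names and addresses are illustrative, and exact syntax varies by platform and software release), assigning an interface to a VRF looks roughly like this:

    vrf definition CUSTOMER-A
     address-family ipv4
    !
    interface GigabitEthernet0/1
     vrf forwarding CUSTOMER-A
     ip address 10.1.1.1 255.255.255.0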
27.1 Explaining the Evolution of Intelligent Networks
Introduction
Since the beginning of computer networking, network configuration practices have
centered on a device-by-device manual configuration methodology. In the early years, this
did not pose much of a problem, but more recently this method for configuring the
hundreds if not thousands of devices on a network has been a stumbling block for
efficient and speedy application service delivery. As the scale increases, changes implemented by humans carry a higher chance of misconfiguration, whether simple typos, applying a new change to the wrong device, or missing a device altogether. Performing repetitive tasks that demand a high degree of consistency unfortunately always introduces a risk of error. And the
number of changes humans are making is increasing as there are more demands from the
business to deploy more applications at a faster rate than ever before.
The solution lies in automation. The economic forces of automation are manifested in the
network domain via network programmability and software-defined networking (SDN)
concepts. Network programmability helps reduce operational expenses (OPEX), which
represents a very significant portion of the overall network costs, and speeds up service
delivery by automating tasks that are typically done via CLI. The CLI is simply not the
optimal approach in large-scale automation.
Automation tools for network configuration have existed in the past, but often they suffer
from complexities that make deployment difficult. Taking responsibility away from individual devices and driving dynamic changes from a central location is desirable. This task is well suited to software implementing network programmability applications, which can scale from automating just a couple of devices to an entire enterprise network architecture. This solution takes into account the application and user demand and applies
configuration to the connected networking devices within the enterprise campus and
beyond.
As a networking engineer, you need to prepare yourself for the evolution of network management, which starts with recognizing the limitations of traditional, manual approaches:
• The CLI was designed for human interaction, limiting the speed of configuration to
as fast as a person can work. While the CLI will continue to play an integral role in
troubleshooting and operations, it is error-prone and inefficient.
• Manual configuration and common copying and pasting methods are extremely
prone to error, especially when configuring multiple devices.
• Tasks are not easily repeatable, resulting in inefficient workflows.
• Unstructured text data used in the CLI requires postprocessing (screen scraping) to
transcode to machine-friendly formatting. The CLI does not return error or exit
codes on which the operator can act programmatically.
Using tools that are common in software development, network engineers can perform
more optimal workflows such as using version control systems to store network
configurations. This way, configurations are versioned and tracked, and in addition, can be
used as the "single source of truth." Also, any change that is accepted will be fully tested,
using automated tooling, to ensure that changes are valid before deploying.
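For example, a minimal sketch of versioning device configurations with Git (the file names and commit message are arbitrary examples):

    git init                          # create a repository for configuration files
    git add router1.cfg switch1.cfg   # stage the configuration files
    git commit -m "Baseline configurations"
    git log --oneline                 # review the tracked change history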
Uses of Network Automation
The value of network programmability and use cases suggest possibilities for network
programmability solutions.
Network automation is used for many common tasks, such as the following:
• Device provisioning: Device provisioning is likely one of the first things that comes
to an engineer’s mind when they think about network automation. Device
provisioning is simply configuring network devices more efficiently, faster, and
with fewer errors, because with automation, human interaction with each network
device is decreased. Automated processes also streamline the replacement of
faulty equipment.
• Device software management: Controlling the download and deployment of
software updates is a relatively simple task, but it can be time-consuming and
prone to error. Many automated tools have been created to address this issue, but
they can lag behind customer requirements. A simple network programmability
solution for device software management is beneficial in many environments.
• Data collection and telemetry: A common part of effectively maintaining a
network is collecting data from network devices, including telemetry on network
behavior. The way that data is collected is changing: many devices, such as Cisco IOS XE devices, can push (stream) data off-box in real time, in contrast to being polled every few minutes.
• Compliance checks: Network automation methods allow the unique ability to
quickly audit large groups of network devices for configuration errors and
automatically make the appropriate corrections with built-in regression tests.
• Reporting: Automation decreases the manual effort that is needed to extract
information and coordinate data from disparate information sources in order to
create meaningful and human-readable reports.
• Troubleshooting: Network automation makes troubleshooting easier by making
configuration analysis and real-time error checking very fast and simple, even with
many network devices.
Network Programmability Technology
Network programmability is so much more than having programmatic interfaces on
network devices.
There are many technologies that are used when introducing network programmability
and automation into a given environment:
• Linux: The foundation of everything begins with Linux. From version control to
programming languages and configuration management, tools such as Ansible and
Puppet almost always run on Linux operating systems.
• Device and controller APIs: The API is the mechanism by which an end user makes
a request of a network device and the network device responds to the end user.
This method provides increased functionality and scalability over traditional
network management methods, and is how modern tools interact with network
devices.
• Version control: All network configuration information should be versioned. Using
a platform such as Git makes it easier to share and collaborate on projects
involving anything from code to configuration files. You can use many different
tools to accomplish automated testing in an environment where version control is
used to manage configuration files.
• Software development: While not every network programmability engineer will
be an expert programmer, understanding software development processes is
critical to understanding how software development can be used to extend or
customize open source tools.
• Automated testing: A key area of network programmability and software
development is automated testing. Deploying proper testing, such as pre- and
post-changes on the network, in an automated way improves the use of network
resources. Network administrators should use tests that run automatically under
defined conditions, or whenever a change is being proposed.
• Continuous integration (CI): CI tools are used commonly by developers, and can
drastically improve the release cycle of software and network configuration
changes. Deploying CI tools and pipelines can help with execution of your tests so
that they run when changes are being proposed (using version control tools).
SDN can be described as the following:
• An approach and architecture in networking where control and data planes are decoupled, and intelligence and state are logically centralized
• An implementation where the underlying network infrastructure is abstracted
from the applications (via network virtualization)
• A concept that leverages programmatic interfaces to enable external systems to
influence network provisioning, control, and operations
SDN is a set of techniques, not necessarily a specific technology, that seeks to program
network devices either through a controller or some other external mechanism. SDN
refers to the capacity to control, manage, and change network behavior dynamically
through an open interface rather than through direct, closed-box methods. It allows the
network to be managed as a whole and increases the ability to configure the network in a
more deterministic and predictable way.
With SDN, you can reduce the complexity of your network by using a standardized
network topology and by building an abstract overlay network on top. In this way, you
move from a single device view of the network (box-oriented) to a global, high-level view
(network-oriented). This high-level view enables you to use abstractions and
simplifications when provisioning new services. For example, the network operator
configuring a virtual private network (VPN) for a remote office environment is not
concerned (and should not be) with the physical layout of the network. The only
requirement of the remote site and operator is that the network spans all geographic
regions required for the VPN (for example, the Main Campus and Remote Office). The
controller will figure out what needs to be provisioned. The prerequisite to this is that the
controller is the central point of management and the "source of truth" for the
configuration.
Using abstractions when managing your network also enables you to use standardized
components. SDN implementations typically define a standard architecture and APIs that
the network devices use. To a limited degree, you can also swap a network device for a
different model or a different vendor altogether. The controller will take care of
configuration, but the high-level view of the network for the operators and customers will
stay the same.
Simplification of configuration and automated management also directly results in OPEX
savings. Typically, the total cost of ownership (TCO) for a network in a five-year span
comprises about 30 percent capital expenditure (CAPEX) and about 70 percent OPEX.
Manual service configuration and activation represent a significant chunk of OPEX.
To understand what SDN changes, first consider the functional planes of a network device and how they behave in a traditional network:
• The data (or forwarding) plane is responsible for the forwarding of data through a
network device.
• The control plane is responsible for controlling the forwarding tables that the data
plane uses.
• The management plane is integrated into the control plane.
• In a traditional network, the data plane acts on the forwarding decisions.
• In a traditional network, the control and management planes learn/compute
forwarding decisions.
The figure shows a traditional network. Each device has a control and data plane. This
means that all devices are equally smart and can make decisions on their own, since the
control plane exists. Of course, the data plane is what is responsible for the actual packet
forwarding. This network is now referred to as the traditional network, and is still the
dominant network type deployed.
With SDN, the network changes.
As SDN first emerged, there was the thought that the control (and management) plane
should be removed from each device and that the control and management planes must
be centralized into an SDN controller. While a major benefit to this approach is that you
can evolve the control and management plane protocols independently of the hardware
while now having a central point of control, there were significant scaling problems with
this approach.
Note: The management and control planes are abstracted into a centralized, specialized
network controller, which implements a virtualized software orchestration to provide
network management and control functions.
The figure shows the network as it could be in a "hybrid SDN".
• A controller is centralized and separated from the physical device, but devices still
retain localized control plane intelligence.
The hybrid SDN option combines the best of both approaches. In a hybrid SDN, the
controller becomes an active part of the distributed network control plane, rather than a
means to configure the network control plane behavior in devices. This solution also offers
a centralized view of the network, giving an SDN controller the ability to act as the brain of
the network. In traditional networking, some network protocols, for example routing
protocols, scale well, meaning that they provide a certain level of automation. Adding a
controller offers a single pane of glass for administration while also offering a single API
to interface to the network, as opposed to establishing Secure Shell (SSH) connections to a
number of network devices to make a change or retrieve data.
SDN Layers
The SDN architecture differs from the architecture of traditional networks. It comprises
three stacked layers (from the bottom up):
• Infrastructure layer: Contains network elements (any physical or virtual device that
deals with traffic).
• Control layer: Represents the core layer of the SDN architecture. It contains SDN
controllers, which provide centralized control of the devices in the data plane.
• Application layer: Contains the SDN applications, which communicate network
requirements towards the controller.
The controller uses southbound APIs to control individual devices in the infrastructure
layer. The controller uses northbound APIs to provide an abstracted network view to
upstream applications in the application layer.
Note: SDN can be compared to network functions virtualization (NFV). Researchers
created SDN to easily test and implement new technologies and concepts in networking,
but a consortium of service providers created NFV. Their main motivation was to speed up
deployment of new services and reduce costs. NFV accomplishes these tasks by
virtualizing network devices that were previously sold only as a separate box (such as the
switch, router, firewall, and intrusion prevention system [IPS]) and by enabling them to
run on any server. It is perfectly possible to use both technologies at the same time to
complement each other. In other words, SDN decouples the control plane and data plane
of network devices, and NFV decouples network functions from proprietary hardware
appliances.
Northbound and Southbound APIs
Traditionally, methods such as SNMP, Telnet, and SSH were among the only options to
interact with a network device. However, over the last few years, networking vendors,
including Cisco, have developed and made available APIs on their platforms in order for
network operators to more easily manage network devices and gain flexibility in
functionality.
The API is the mechanism by which an end user makes a request of a network device and
the network device responds to the end user. This method provides increased
functionality and scalability over traditional network management methods. In order to
transmit information, APIs require a transport mechanism such as SSH, HTTP, and HTTPS,
though there are other possible transport mechanisms as well.
An SDN offers a centralized view of the network, giving an SDN controller the ability to act
as the brain of the network. The control layer of the SDN is usually a software solution
called the SDN controller. The SDN controller uses APIs to communicate with the
application and infrastructure layers. An API is a set of functions and procedures which
enable communication with a service. Using APIs, business applications can tell the SDN
controller what they need from the network. Then the controller uses the APIs to pass
instructions to network devices, such as routers, switches, and WLCs. However, those sets
of APIs are very different. Communication with the infrastructure layer is defined with
southbound APIs, while services are offered to the application layer using the northbound
APIs.
Northbound APIs or northbound interfaces are responsible for the communication
between the SDN controller and the services that run over the network. Northbound APIs
enable your applications to manage and control the network. So, rather than adjusting
and tweaking your network repeatedly to get a service or application running correctly,
you can set up a framework that allows the application to demand the network setup that
it needs. These applications range from network virtualization and dynamic virtual
network provisioning to more granular firewall monitoring, user identity management,
and access policy control. Currently, the REST API is predominantly used as the single northbound interface for communication between the controller and all applications.
SDN controller architectures have evolved to include a southbound abstraction layer. This
abstraction layer abstracts the network away to have one single place where you start
writing the applications to and allows application policies to be translated from an
application through the APIs, using whichever southbound protocol is supported and
available on the controller and infrastructure device. This approach allows both new protocols and established southbound controller protocols and APIs to be supported.
To appreciate why these programmatic interfaces are needed, consider how networks have traditionally been managed:
• Managing networks via the CLI was (and is) the norm.
• Networks were static when protocols such as SNMP emerged.
• Networks have grown to be overly complex.
• Regular expressions and scripting were the main tools for those who worked with automation.
CLI syntax and configurable options that are associated with features such as Border
Gateway Protocol (BGP), quality of service (QoS), or VPNs varied widely across vendors,
platforms, and software releases. Over time, these differences, combined with the
limitations of the CLI, started to inhibit the ability to configure and manage networks at
scale. Configuring and operating a single feature within a large network could require the
use of several different CLIs. Trying to automate with screen-scraping scripts and regular
expressions started to make matters worse.
Another traditional management protocol, SNMP, has been around for many years. It has
been the de facto way to monitor networks. It worked great when networks were small
and polling a device every 15 to 30 minutes met operational requirements. However,
SNMP often caused operational issues when polling devices too frequently. While SNMP
has served the industry reasonably well from a device monitoring perspective, it does
have plenty of weaknesses. One of the most problematic issues from the network
programmability perspective is that SNMP lacks libraries for various programming
languages.
If you consider the way devices have been managed, you can see that there has been no
good way to handle machine-to-machine communication with the network. Expect
scripting and custom parsers were the best the industry had to offer. This is no longer acceptable, because the rate of change continues to increase, there are more devices, and higher demands are being placed on the network.
If you look at where configuration management stands with SNMP and the CLI, you can easily outline the requirements for next-generation configuration management. As next-generation programmatic interfaces are being built, there are a few key attributes that must be met:
• They must support different types of transport: HTTP, SSH, Transport Layer
Security (TLS).
• They must be flexible and support different types of data encoding formats such as
XML and JSON.
• There must be efficient and easy-to-use tooling that helps in using the new APIs,
for example, programming libraries (software development kits [SDKs]).
• There must be extensible and open APIs: REST, RESTCONF, NETCONF, Google-
defined remote procedure calls (gRPCs).
Also, they must be model-driven. Being model-driven is what allows the same data model to be used with any supported transport, API, encoding, and data format.
Model-Driven Programmability
The solution for next generation management lies in adopting a programmatic and
standards-based way of writing configurations to any network device, replacing the
process of manual configuration. A main component of those innovations is model-driven
programmability.
Data models are developed in a standard, industry-defined language that can define
configuration and state information of a network. Using data models, network devices
running on different Cisco operating systems can support the automation of configuration
for multiple devices across the network.
Model-driven programmability of Cisco devices allows you to automate the configuration
and control of those devices or even use orchestrators to provide end-to-end service
delivery (for example in Cloud Computing). Data modeling provides a programmatic and
standards-based method of writing configurations to network devices, replacing the
process of manual configuration. Although configuration using a CLI may be more human-
friendly, automating the configuration using data models results in better scalability.
Note: An orchestrator enables IT administrators to automate management, coordination,
and deployment of IT infrastructure. It is typically used in cloud services delivery.
• Data models: Data models are the foundation of the API. They define the
syntax and semantics, including constraints of working with the API. They use well-
defined parameters to standardize the representation of data from a network
device so the output among various platforms is the same. Device configuration
can be validated against a data model in order to check if the changes are valid for
the device before committing the changes.
• Transport: Model-driven APIs support one or more transport methods including
SSH, TLS, and HTTP(S).
• Encoding: The separation of encodings from the choice of model and protocol
provides additional flexibility. Data can be encoded in JSON, XML, or Google
Protocol Buffers (GPB) format. While some transports are currently tied to specific
encodings (for example, NETCONF and XML), the programmability infrastructure is
designed to support different encodings of the same data model if the transport
protocol supports it.
• Protocols: Model-driven APIs also support multiple options for protocols, with the three core protocols being NETCONF, RESTCONF, and gRPC. Data models
are not used to actually send information to devices and instead rely on these
protocols. REST is not explicitly listed because when REST is used in a modeled
device, it becomes RESTCONF. However, pure or native REST is also used in certain
network devices. Protocol choice will ultimately be influenced by your networking,
programming, and automation background, plus available tooling.
Note: An SDK is a set of tools and software libraries that allows an end user to create their
own custom applications for various purposes, including managing hardware platforms.
The process of automating configurations and monitoring in a network involves the use of
these core components:
• Client application: Manages the configurations and monitors the devices in the
network. A client application can be written in different programming languages
(such as Python) and SDKs are often used to simplify the implementation of
applications for network automation.
• Network device: Acts as a server and responds to requests from the client application, applying the requested configuration changes.
• Data Model (YANG) module: Describes configuration and operational data of the
network device, and performs actions.
• Communication protocol: Provides mechanisms to install, manipulate, and delete
the configuration of network devices. The protocol encodes data in a particular
format (XML, JSON, gRPC) and transports the data using one of the transport
methods (HTTP, HTTPS, SSH, TLS).
Telemetry is an automated communications process by which measurements and other
data are collected at remote or inaccessible points and transmitted to receiving
equipment for monitoring. Model-driven telemetry provides a mechanism to stream data
from a model-driven telemetry-capable device to a destination.
Different Cisco operating systems provide several mechanisms such as SNMP, CLI, and
syslog to collect data from a network. These mechanisms have limitations that restrict
automation and scale. One limitation is the use of the pull model, where the initial request
for data from network elements originates from the client. The pull model does not scale
when there is more than one network management system (NMS) in the network. With
this model, the server sends data only when clients request it. To initiate such requests,
continual manual intervention is required. This continual manual intervention makes the
pull model inefficient. Model-driven streaming telemetry is able to push data off the device to a defined endpoint, encoded in a format such as JSON or GPB, at a much higher frequency and more efficiently.
Telemetry uses a subscription model to identify information sources and destinations.
Model-driven telemetry replaces the need for the periodic polling of network elements—
instead, a continuous request for information to be delivered to a subscriber is established
upon the network element. Then, either periodically, or as objects change, a subscribed
set of YANG objects are streamed to that subscriber. The data to be streamed is driven
through subscription. Subscriptions allow applications to subscribe to updates (automatic
and continuous updates) from a YANG data store, which enables the publisher to push
and in effect stream those updates.
Data Models
What are data models?
• Data models describe a constrained set of data in the form of a schema language.
• They use well-defined parameters to standardize the representation of data from a
network device, so that the output among various platforms is the same.
• They are not used to actually send information to devices, but instead, they rely on
protocols such as NETCONF and RESTCONF to send JSON- and XML-encoded
documents that simply adhere to a given model.
• Device configuration can be validated against a data model in order to check if the
changes are valid for the device before committing the changes.
Data models are used to describe the syntax and semantics of working with specific data objects. The data itself can be encoded in one of several common formats, including the following:
• XML
• JavaScript Object Notation (JSON)
• Google Protocol Buffers (GPBs)
• YAML Ain't Markup Language (YAML)
Note: GPBs are really just numbers, not strings, and not easily read by a human. However,
there are benefits to using number codes. GPB is an efficient way of encoding telemetry
data and represents the ultimate in efficiency and speed.
Note: YAML, as the name suggests, is not a markup language such as XML. With its minimalistic format, it is more easily written and read by humans, but works the
same way as other data formats. In general, it is the most humanly readable of all the
formats and at the same time just as easy for programs to use, and it is gaining popularity
among engineers working with programmability. YAML is not supported on Cisco device
APIs, but it is still used to configure Cisco network devices through the Ansible
configuration management tool.
The following are common characteristics of API encoding formats:
• Format syntax
• Concept of an object
o Element that has characteristics
o Can contain many attributes
• Key/value notation
o Key: Identifies a set of data
o Value: Is the data itself
• Array or list
• Importance of whitespaces
• Case sensitivity
A syntax is a way to represent a specific data format in textual form. You notice that some
formats use curly braces or square brackets, and others have tags marking the beginning
and end of an element. In some formats, quotation marks or commas are heavily used
while others do not require them at all. But no matter which syntax they use, each of
these data formats has a concept of an object. You can think of an object as a packet of
information, an element that has characteristics. An object can have one or more
attributes attached to it.
Many characteristics will be represented by the key/value concept, the key and value
often being separated by a colon. The key identifies a set of data and it is often positioned
on the left side of the colon. The values are the actual data that you are trying to
represent. In most cases, the data appears on the right side of the colon.
To extract the meaning of the syntax, you must recognize how keys and values are
notated when looking at the data format. A key must be a string, while a value could be a
string, a number, or a Boolean (for instance, true or false). Other values could be more
complicated, containing an array or an entirely new object that represents its own data.
Another thing to notice when looking at a particular data format is the importance of
whitespaces and case sensitivity. In some cases these could be of high importance, and in
others, they could carry no significance, as you will get to know through some examples.
One of the main points about data formats that you should bear in mind is that you can
represent any kind of data in any given format.
In the figure, there are the three previously mentioned common data formats—JSON,
XML, and YAML. Each of these examples provides details about a specific network
interface, GigabitEthernet5, providing description, IPv4 address, and more.
You can quickly recognize that the exact same data is represented in all three formats, so
it really comes down to two factors when considering which one to choose:
• If the system you are working with prefers one format over the other, pick the
format that the system prefers.
• If the system can support any of them, pick the format that you are more
comfortable working with.
In other words, if the API you are addressing uses one specific format or a handful of
them, you will have to choose one of those. If the API supports any given format, it is up
to you which one you prefer to use.
XML Overview
XML is a markup format that is human-readable, while enabling computers to efficiently
parse the information stored in the XML format. While it is not as easy for humans to
understand visually, it is easy for machines to parse and generate. XML has been created
to structure, store, and transport information. The content is wrapped in tags.
The code block is an example of XML-formatted information:
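    <!-- illustrative interface data -->
    <interface>
      <name>GigabitEthernet5</name>
      <description>Uplink to distribution switch</description>
      <ipv4>
        <address>
          <ip>10.0.0.5</ip>
          <netmask>255.255.255.0</netmask>
        </address>
      </ipv4>
      <enabled>true</enabled>
    </interface>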
XML may look similar to HTML, but they are, in fact, different. While both use tags <tags>
to define objects and elements, HTML is used to display data. A web browser knows how
to display websites—it consumes an HTML object and displays it. XML, on the other hand,
is used to describe data such that your XML client (programming language, and so on) can
consume an object that has meaning to it.
XML namespaces are common when using XML APIs on network devices, so it is important
to understand them and know why they are used. As the number of XML files exchanged
on the internet rises, it becomes increasingly likely that two or more applications end up
using the same tag names but represent different objects. This creates a conflict with
systems trying to parse some information from a specific tag. Solving that issue requires
the use of namespaces. A namespace essentially becomes an identifier for each XML
element, distinguishing the element from any other similar element. Besides creating your
own namespaces, you can use an existing namespace, for example, referring to a YANG
model. YANG models are like templates used to generate consistent XML.
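For instance, an element declares its namespace with the xmlns attribute (the element content here is illustrative):

    <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
      <interface>
        <name>GigabitEthernet5</name>
      </interface>
    </interfaces>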
In the example, a specific YANG model (ietf-interfaces) is referred to by the namespace
urn:ietf:params:xml:ns:yang:ietf-interfaces.
JavaScript Object Notation
JavaScript Object Notation (JSON) is a lightweight data format that is used in web services
for transmitting data. It is widely used in scripting-based platforms because of its simple
format.
Compared to XML, JSON is more compact and is generally easier for humans to read and write and for machines to parse and generate.
JSON is language- and platform-independent. JSON parsers and JSON libraries exist for many different programming languages. The JSON text format is syntactically identical to the code for creating JavaScript objects, since JSON is derived from JavaScript object notation.
JSON Data Types
JSON uses six data types. The first four data types (string, number, Boolean, and null) are
referred to as simple data types. The last two data types (object and array) are referred to
as complex data types.
String is any sequence of characters between two double quotes. For example:
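    "GigabitEthernet5"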
Number is a decimal number, which may use exponential notation and could contain a
fractional part. For example:
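    1500
    24.5
    1.0e6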
Object is an unordered collection of name–value pairs. The names (also called keys) are
represented by strings. Objects are typically rendered in curly braces. For example,
information about an interface would look as follows:
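    {
      "name": "GigabitEthernet5",
      "description": "Uplink to distribution switch",
      "enabled": true
    }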
Array is an ordered collection of values, which can be of any type. For example,
a configuration of two static routes would look as follows in JSON:
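    {
      "static-routes": [
        { "prefix": "10.10.0.0/16", "next-hop": "192.168.1.1" },
        { "prefix": "10.20.0.0/16", "next-hop": "192.168.1.2" }
      ]
    }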
Namespaces
Similar to XML, JSON (and YAML) can also use namespaces that define the syntax and
semantics of a name element, and in that way avoid element name conflicts. Take a look
at the example code from each format:
In this figure, you can find the same namespace ietf-interfaces in each of the formats.
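As a compact sketch (the element contents are illustrative), the namespace appears as an xmlns attribute in XML and as a key prefix in JSON and YAML:

    XML:  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"> ... </interfaces>
    JSON: { "ietf-interfaces:interfaces": { ... } }
    YAML: ietf-interfaces:interfaces: ...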
When you are using RESTCONF, JSON requires a namespace, which has to be in the
correct URI format (ietf-interfaces:interfaces). A corresponding URL address, which is
used by the RESTCONF protocol, would look as follows:
https://<ROUTER_ADDRESS>/restconf/data/ietf-
interfaces:interfaces/interface=GigabitEthernet5
Protocols
To manipulate and automate on the data models supported on a network device, a
network management protocol needs to be used between the application client (such as
an SDN controller) and the network devices. Different devices support one or more
protocols such as REST, NETCONF, RESTCONF, and gRPC via a corresponding
programmable interface agent for these protocols—sometimes a native REST agent is
used.
When a request from a client is received via a NETCONF, RESTCONF, or gRPC protocol, the
corresponding programmable interface agent converts the request into an abstract
message object that is distributed to the underlying model infrastructure. The appropriate
model is selected and the request is passed to it for processing. The model infrastructure
executes the request (read or write) on the device data store, returning the results to the
originating agent for response transmission back to the requesting client.
Representational State Transfer
There is often a perception that REST is a complex topic to learn about, but in reality it is
analogous to browsing a website with a web browser.
REST is an architectural style (versus a protocol) for designing networked applications.
There are two types of URIs:
CRUD operations are used with the URL and payload; this is how the server (network device) knows what action to perform. With the REST API, your application passes a request for a certain type of data by specifying the URL path that models the data. Both the request and response are JSON- or XML-formatted data. The following example shows the composition of a URI; the protocol used is HTTPS.
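    https://10.0.0.1/restconf/data/ietf-interfaces:interfaces
• Scheme (protocol): https
• Authority (host): 10.0.0.1, the address of the network device (an illustrative address)
• Path: /restconf/data/ietf-interfaces:interfaces, which identifies the resource being requested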
The most common HTTP verbs that are used by REST are GET, POST, PUT, PATCH, and
DELETE. HTTP verbs are the methods that are used to perform some sort of action on a
specific resource. Because HTTP is a standardized and ubiquitous protocol, the semantics
are well known.
GET is used to read or retrieve information from a resource and returns a representation
of the data in JSON or XML. Because the GET method only reads data and does not change
it, it is considered a “safe” method, which means there is no risk of data corruption.
POST, on the other hand, creates new resources, which means it is not considered a
“safe” method.
PUT is normally used to update or replace an already existing resource. It is called “PUT-
ing” to a resource and involves sending a request with the updated representation of the
original resource.
PATCH is similar in some ways to PUT in that PATCH also modifies a resource. The difference between PUT and PATCH is that PATCH sends a request containing only the changes to the resource and not a complete updated resource.
DELETE simply deletes a resource that is identified by a URI.
When an HTTP method is used, there is a specific response code returned. For example,
upon successful deletion of a resource using DELETE, the client will receive a 200 message
signifying that the request succeeded.
HTTP response codes are defined by the IETF and are therefore easy to look up online or on its website, ietf.org. These codes are useful in troubleshooting because they provide
specific information regarding the error on the client side or server side. For example, if a
client receives a 400 response from a server, you can conclude that there is a syntax
problem in the request.
In the tables below, you can see some of the most common HTTP response codes.
Common HTTP Response Codes
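• 200 OK: The request succeeded.
• 201 Created: A new resource was created (a common response to POST).
• 204 No Content: The request succeeded, but there is no content to return.
• 400 Bad Request: The request was malformed, for example because of a syntax error.
• 401 Unauthorized: Authentication is required or has failed.
• 403 Forbidden: The request was understood, but access is not permitted.
• 404 Not Found: The requested resource does not exist.
• 500 Internal Server Error: The server encountered an error while processing the request.
• 503 Service Unavailable: The server is temporarily unable to handle the request.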
Several tools exist that are used to test REST APIs:
• cURL: A simple Linux command-line tool, usable within a shell script, that provides an easy way to transfer data with URL syntax.
• Postman: A Google Chrome application that provides an easy GUI for issuing REST API calls from within the Chrome web browser.
• Python: The Python requests library provides a small set of methods for sending HTTP requests to a resource API (a minimal sketch follows).
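A minimal Python sketch using the requests library against the RESTCONF URL format shown earlier follows; the device address and credentials are placeholders, and the target device must have RESTCONF enabled:

    import requests

    url = ("https://10.0.0.1/restconf/data/"
           "ietf-interfaces:interfaces/interface=GigabitEthernet5")
    headers = {"Accept": "application/yang-data+json"}

    # GET retrieves a representation of the resource.
    # verify=False skips TLS certificate validation, acceptable only in a lab.
    response = requests.get(url, auth=("admin", "admin"), headers=headers, verify=False)

    print(response.status_code)   # for example, 200 on success
    print(response.json())        # the interface data, decoded from JSON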
Network Configuration Protocol
NETCONF is an IETF standard network management protocol for communicating with network devices, retrieving operational data, and both setting and reading configuration data. Operational
data includes interface statistics, memory utilization, errors, and so on. The configuration
data refers to how particular interfaces, routing protocols, and other features are enabled
and provisioned. NETCONF purely defines how to communicate with the devices.
NETCONF uses an XML management interface for configuration data and protocol
messages. The protocol messages are exchanged on top of a secure transport protocol
such as SSH or TLS. NETCONF is session-oriented and stateful, which is worth pointing out because other APIs, such as native REST and RESTCONF, are stateless.
NETCONF is fairly sophisticated and it uses an RPC paradigm to facilitate communication
between the client (for example, an NMS server or an open source script) and the server.
NETCONF supports device transaction, which means that when you make an API call
configuring multiple objects and one fails, the entire transaction fails, and you do not end
up with a partial configuration. NETCONF is fairly sophisticated—it is not simple CRUD
processing.
NETCONF encodes messages, operations, and content in XML, which is intended to be
machine and human-readable.
NETCONF utilizes multiple configuration data stores (including candidate, running, and
startup). This is one of the most unique attributes of NETCONF, though a device does not
have to implement this feature to “support” the protocol. NETCONF utilizes a candidate
configuration, which is simply a configuration with all proposed changes applied in an
uncommitted state. It is the equivalent of entering CLI commands and having them not
take effect right away. You would then “commit” all the changes as a single transaction.
Once committed, you would see them in the running configuration.
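As a hedged sketch using the open-source ncclient Python library (the device address and credentials are placeholders, and the device must have NETCONF over SSH enabled), retrieving the running configuration looks roughly like this:

    from ncclient import manager

    # Open a NETCONF session over SSH; 830 is the default NETCONF port.
    with manager.connect(host="10.0.0.1", port=830, username="admin",
                         password="admin", hostkey_verify=False) as m:
        # Retrieve the running configuration; the reply is XML encoded.
        reply = m.get_config(source="running")
        print(reply)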
The different NETCONF data stores include the candidate, running, and startup configurations.
Enterprise networks, meanwhile, have become increasingly complex to manage:
• There are more users and endpoints, therefore, more VLANs and subnets. It
becomes more difficult to keep track of and segment all those groups.
• There are so many different types of users coming in to the network that it is
becoming more complex to configure. Multiple steps are required to give users
credentials and support connectivity choices.
• As users and devices move around the network, policy is not consistent, which
makes it difficult to find users when they move around and troubleshoot issues.
Cisco DNA Center is a Cisco SDN controller for enterprise networks—branch, campus, and
WAN. Cisco DNA Center can program the network in an automated way, based on the
application requirements, and it represents a basis for intent-based networking.
Cisco DNA Center provides open programmability APIs for policy-based management and
security through a single controller. It provides an abstraction of the network, which leads
to simplification of the management of network services. This approach automates what
has typically been a tedious manual configuration.
The controller provisions network services consistently and provides rich network
information and analytics across all network resources: both LAN and WAN, wired and
wireless, and physical and virtual infrastructures. This visibility allows you to optimize
services and support new applications and business models. The controller bridges the
gap between open, programmable network elements and the applications that
communicate with them, automating the provisioning of the entire end-to-end
infrastructure.
Intent-based Networking
SDN is a foundational building block of intent-based networking. The good news for SDN
practitioners is that intent-based networking addresses shortfalls of SDN, which include
automated translation of business policies to IT (security and compliance) policies,
automated deployment of these policies and assurance that if the network is not
providing the requested policies, they will receive proactive notification. Intent-based
networking adds context, learning and assurance capabilities, by tightly coupling policy
with intent. “Intent” enables the expression of both business purpose and network
context through abstractions, which are then translated to achieve the desired outcome
for network management. SDN is purposely focused on instantiating change in network
functions.
• The translation element enables the operator to focus on what they want to
accomplish, and not how they want to accomplish it. The translation element takes
the desired intent and translates it to associated network policies and security
policies. Before applying these new policies, the system checks if these policies are
consistent with the already deployed policies or if they will cause any
inconsistencies.
• Once approved, the new policies are then activated (automatically deployed across
the network).
• With assurance, an intent-based network performs continuous verification that the
network is operating as intended. Any discrepancies are identified; root-cause
analysis can recommend fixes to the network operator. The operator can then
"accept" the recommended fixes to be automatically applied, before another cycle
of verification. Assurance does not occur at discrete times in an intent-based
network. Continuous verification is essential since the state of the network is
constantly changing. Continuous verification assures network performance and
reliability.
Cisco DNA Center Features and Tools
Cisco DNA Center provides a single dashboard for managing and controlling the enterprise
network. It uses workflows to simplify provisioning of user access policies combined with
advanced assurance capabilities. It also provides open platform APIs, adapters, and SDKs
for integration with business applications and orchestrators.
How does Cisco DNA Center work? The enterprise programmable network infrastructure
sends data to the Cisco DNA Center Appliance. The appliance activates features and
capabilities on your network devices using Cisco DNA software. Everything is managed
from the Cisco DNA Center dashboard.
Cisco DNA Center is a software solution that resides on the Cisco DNA Center appliance.
The Cisco DNA Center dashboard provides an overview of network health and helps in
identifying and remediating issues. Automation and orchestration capabilities provide
zero-touch provisioning based on profiles, facilitating network deployment in remote
branches. Advanced assurance and analytics capabilities use deep insights from devices,
streaming telemetry, and rich context to deliver an experience while proactively
monitoring, troubleshooting, and optimizing your wired and wireless network.
When you fill in the fields for the source, destination, and optionally the application, the path trace is initiated and the resulting path through the network is displayed.
SD-Access offers several policy services across the fabric:
• Security: Access control policy, which dictates who can access what
• QoS: Application policy, which invokes the QoS service to provision differentiated
access to users on the network, from an application experience perspective
• Copy: Traffic copy policy, which invokes the traffic copy service for monitoring
specific traffic flows
These services are offered across the entire fabric, independently of device-specific
address or location.
SD-Access benefits
SD-Access provides automated end-to-end services (such as segmentation, QoS, and
analytics) for user, device, and application traffic. SD-Access automates user policy so
organizations can ensure that the appropriate access control and application experience
are set for any user or device to any application across the network. This is accomplished
with a single network fabric across LAN and WLAN, which creates a consistent user
experience, anywhere, without compromising on security.
SD-Access benefits include the following:
The primary components for the Cisco SD-WAN solution consist of the vManage network
management system (management plane), the vSmart controller (control plane), the
vBond orchestrator (orchestration plane), and the vEdge router (data plane). The
components are:
• Management plane (vManage): Centralized network management system
provides a GUI interface to monitor, configure, and maintain all Cisco SD-WAN
devices and links in the underlay and overlay network.
• Control plane (vSmart Controller): This software-based component is responsible
for the centralized control plane of the SD-WAN network. It establishes a secure
connection to each vEdge router and distributes routes and policy information via
the Overlay Management Protocol (OMP). It also orchestrates the secure data
plane connectivity between the vEdge routers by distributing crypto key
information.
• Orchestration plane (vBond Orchestrator): This software-based component
performs the initial authentication of vEdge devices and orchestrates vSmart and
vEdge connectivity. It also has an important role in enabling the communication of
devices that sit behind Network Address Translation (NAT).
• Data plane (vEdge Router): This device, available as either a hardware appliance or
software-based router, sits at a physical site or in the cloud and provides secure
data plane connectivity among the sites over one or more WAN transports. It is
responsible for traffic forwarding, security, encryption, QoS, routing protocols such
as BGP and OSPF, and more.
• Programmatic APIs (REST): Programmatic control over all aspects of vManage
administration.
• Analytics (vAnalytics): Adds a cloud-based predictive analytics engine for Cisco SD-
WAN.
This sample topology depicts two sites and two public internet transports. The SD-WAN
controllers (the two vSmart controllers), and the vBond orchestrator, along with the
vManage management GUI that resides on the internet, are reachable through either
transport.
At each site, vEdge routers are used to directly connect to the available transports. Colors
are used to identify an individual WAN transport, as different WAN transports are
assigned different colors, such as mpls, private1, biz-internet, metro-ethernet, lte, and so
on. In this topology, each of the two internet transports is assigned its own color, for
example, biz-internet for one and public-internet for the other.
The vEdge routers form a Datagram Transport Layer Security (DTLS) or Transport Layer
Security (TLS) control connection to the vSmart controllers and connect to both of the
vSmart controllers over each transport. The vEdge routers securely connect to vEdge
routers at other sites with IPsec tunnels over each transport. The Bidirectional Forwarding
Detection (BFD) protocol is enabled by default and will run over each of these tunnels,
detecting loss, latency, jitter, and path failures.
Policies are an important part of the Cisco SD-WAN solution and are used to influence the
flow of data traffic among the vEdge routers in the overlay network. Policies apply either
to control plane or data plane traffic and are configured either centrally on vSmart
controllers (centralized policy) or locally (localized policy) on vEdge routers.
Centralized control policies operate on the routing and transport location (TLOC)
information and allow for customizing routing decisions and determining routing paths
through the overlay network. These policies can be used in configuring traffic engineering,
path affinity, service insertion, and different types of VPN topologies (full-mesh, hub-and-
spoke, regional mesh, and so on). Another centralized control policy is application-aware
routing, which selects the optimal path based on real-time path performance
characteristics for different traffic types. Localized control policies allow you to affect
routing policy at a local site.
Data policies influence the flow of data traffic through the network based on fields in the
IP packet headers and VPN membership. Centralized data policies can be used in
configuring application firewalls, service chaining, traffic engineering, and QoS. Localized
data policies allow you to configure how data traffic is handled at a specific site, such as
ACLs, QoS, mirroring, and policing. Some centralized data policies may affect handling on
the vEdge itself, as in the case of app-route policies or a QoS classification policy. In these
cases, the configuration is still downloaded directly to the vSmart controllers, but any
policy information that needs to be conveyed to the vEdge routers is communicated
through OMP.
28.1 Introducing System Monitoring
Introduction
The first step in understanding how the network performs is to gather as much
information about the network as possible. Often the existing documentation does not
provide sufficient information, because the most recent condition of the network is
required. This is where network audits and traffic and events analysis can provide the key
information that is needed and where system monitoring tools become important.
System monitoring is necessary to get a good overall picture of the network and can help
you quickly recognize issues and consequently make sure that the network performs as it
should. It also provides you with a proper network performance baseline so that you have
a comparison tool when troubleshooting.
Enterprises want to have proactive systems and find anomalies in their networks quicker.
They want to implement a central network management system (NMS), which
communicates using a few crucial protocols for system monitoring. Examples
of such protocols are syslog and Simple Network Management Protocol (SNMP), whose
reporting should be configured on devices so that network or device events can be
forwarded to a central server, which can then provide a larger picture of the events
happening in the network. For this approach to work smoothly and efficiently, proper time
synchronization is important so that you can build a picture of the sequence of events
when multiple network components or networks are affected.
As a networking engineer, you will frequently work with system monitoring tools, and you
need to have a good understanding of these important ideas:
• The purpose of time synchronization and how important it is that you have
synchronized time on all network devices
• The structure of Cisco IOS system messages and how they can be stored on a
centrally located external server for better readability and analysis
• Usage of the SNMP protocol to monitor performance of network devices
Priority
Priority is an 8-bit number and its value represents the facility and severity of the
message. The three least significant bits represent the severity of the message (with 3 bits,
you can represent eight different severity levels), and the upper 5 bits represent the
facility of the message.
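As a worked example of this encoding, the priority value equals facility × 8 + severity. A message generated with facility local7 (facility code 23) and severity 5 (notification) would therefore carry a priority value of 23 × 8 + 5 = 189.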
You can use the facility and severity values to apply certain filters on the events in the
syslog daemon.
Note: The priority and facility values are created by the syslog clients (applications or
hardware) on which the event is generated. The syslog server is just an aggregator of the
messages.
Facility
Syslog messages are broadly categorized based on the sources that generate them. These
sources can be the operating system, process, or an application. The source is defined in a
syslog message by a numeric value.
These integer values are called facilities. The local use facilities are not reserved; the
processes and applications that do not have preassigned facility values can choose any of
the eight local use facilities. As such, Cisco devices use one of the local use facilities for
sending syslog messages.
By default, Cisco IOS Software-based devices use facility local7. Most Cisco devices provide
options to change the facility level from their default value.
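For example, a minimal sketch of changing the facility on a Cisco IOS device (the chosen facility is illustrative):
Router(config)# logging facility local5
! Send syslog messages using facility local5 instead of the default local7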
This table lists all facility values.
Severity
The log source or facility (a router or mail server, for example) that generates the syslog
message specifies the severity of the message using single-digit integers 0-7.
The severity levels are often used to filter out messages which are less important, to make
the amount of messages more manageable. Severity levels define how severe the issue
reported is, which is reflected in the severity definitions in the table.
The following table explains the eight levels of message severity, from the most severe
level to the least severe level.
Header
The header contains these fields:
• Time stamp
• Hostname
Time Stamp
The time stamp field is used to indicate the local time, in MMM DD HH:MM:SS format, of
the sending device when the message is generated.
For the time stamp information to be accurate, it is good administrative practice to
configure all the devices to use the Network Time Protocol (NTP). In recent years,
however, the time stamp and hostname in the header field have become less relevant in
the syslog packet itself because the syslog server will time stamp each received message
with the server time when the message is received, as well as the IP address (or
hostname) of the sender, taken from the source IP address of the packet.
A correct sequence of events is vital for troubleshooting in order to accurately determine
the cause of an issue. Often an informational message can indicate the cause of a critical
message. The events can follow each other by milliseconds.
Hostname
The hostname field consists of the host name (as configured on the host) or the IP
address. In devices such as routers or firewalls, which have multiple interfaces, syslog uses
the IP address of the interface from which the message is transmitted.
Many people can get confused by "host name" and "hostname." The latter is typically
associated with a Domain Name System (DNS) lookup. If the device includes its "host
name" in the actual message, it may be (and often is) different than the actual DNS
hostname of the device. A properly configured DNS system should include reverse lookups
to help facilitate proper sourcing for incoming messages.
Syslog MSG
The message is the text of the syslog message, with additional information about the
process that generated the message.
How to Read System Messages
The general format of syslog messages that the syslog process on Cisco IOS Software
generates by default is structured as follows:
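A sketch of the default format, following the layout documented by Cisco:
seq no: timestamp: %FACILITY-SEVERITY-MNEMONIC: description
For example, in the message %LINK-3-UPDOWN: Interface Serial2/0, changed state to up, the facility is LINK, the severity is 3 (error), and the mnemonic is UPDOWN.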
The following table explains the items that a Cisco IOS Software syslog message contains.
Note: Many more facility codes exist and can be found here:
https://www.cisco.com/c/en/us/td/docs/ios/15_0sy/system/messages/15sysmg/sm15syo
vr.html
Note that sequence numbers are not enabled by default. You can change this behavior
with the following commands:
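A minimal sketch using the standard Cisco IOS command for enabling sequence numbers:
Router(config)# service sequence-numbers
! Prefix each syslog message with a sequence number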
Note that time stamps are enabled by default because it is much easier to identify the
problem in a chronological order if you can see the time stamps on syslog messages. The
time stamp can be turned off with the following command:
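A minimal sketch; the no form of the service timestamps command removes the time stamps:
Router(config)# no service timestamps log
! Stop prefixing log messages with a time stamp (service timestamps log datetime msec re-enables them)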
Within Cisco IOS Software, the severity levels that are associated with events often relate
more to device health and network management than to security. For example, the
following four messages are listed in order of severity:
Obviously, a power supply failure (severity level 1) is an urgent issue, as it affects the
operating health of a device and of the network in which it resides. An interface failure
(severity level 3) is generally less severe than a complete device failure, but it can certainly
affect the device and the network. A configuration change (severity level 5) is routine in
network maintenance and is assigned a relatively low severity. But from a security
perspective, auditing configuration changes is very important. The last example is a logged
hit on an access control list (ACL). If a security administrator has specified the log option
on a particular line in the ACL, this event is probably significant. However, the severity
level is only a 6. It is important to note that the severity levels on syslog messages are not
necessarily prioritized according to security.
Syslog Configuration
By default, the console receives debugging messages and numerically lower levels. To
change the level of messages that are sent to the console, use the logging console level
command. If severity level 0 is configured, it means that only emergency-level messages
will be displayed. For example, if severity level 4 is configured, all messages with severity
levels up to 4 will be displayed (Emergency, Alert, Critical, Error, and Warning).
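For example, a hedged sketch of limiting console messages to severity levels 0 through 4:
Router(config)# logging console warnings
! Equivalent to logging console 4: the console shows Emergency through Warning messages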
Note: Network devices should log levels 0–6 under normal operation. Level 7 should be
used for console troubleshooting only.
While logging to the console is enabled by default, it is very expensive in terms of CPU
resources on a Cisco IOS device. The reason is that the console is a character-by-character
serial device. Each character that is displayed to the console requires a CPU interrupt. As
such, it is common to disable logging to the console when logging to a centralized syslog
server is configured.
To log messages to a syslog server, specify a syslog server host as a destination for syslog
messages and limit the syslog messages that are sent to the syslog server based on
severity, as shown in the example:
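A sketch consistent with the description that follows (the severity threshold is set to informational, matching the show logging output discussed later):
R1(config)# logging host 10.1.1.10
R1(config)# logging trap informational
R1(config)# logging source-interface Loopback0
! Send syslog messages of severity 0-6 to 10.1.1.10, sourced from Loopback0's IPv4 address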
The example shows a configuration for logging syslog messages to a syslog server with the
IPv4 address 10.1.1.10. The router will send syslog messages with interface Loopback0’s
IPv4 address. The details of the commands used are shown in the table.
Note: After you use the logging ip-address command, the router will start to send syslog
messages to that IP address, even if no syslog server is configured there.
The Cisco IOS devices can also send syslog messages to multiple syslog servers. To do so,
you have to enter multiple logging host ip-address commands, each with a different IP
address.
If you want to check syslog messages that are stored in the router, you can use the show
logging command. This command also shows you how many messages are logged to
various destinations, and what severity level is configured for that destination.
The output indicates that R1 is now sending syslog messages to 10.1.1.10 with the
minimum severity threshold set to "informational." The output also indicates that five
messages have been sent to the syslog server. Syslog uses UDP for transport and is
inherently not reliable. If these five messages are lost somewhere in the transport path,
there is no mechanism to recognize the lost message or to request a retransmission.
There is a local logging buffer. It is in its default state, with a severity threshold of
"debugging" (severity 7) and sized at 4096 bytes. In the sample output, 29 messages have
been logged in the local buffer. The end of the show logging command output displays the
contents of the buffer. At the start, the buffer is mostly filled with the messages that were
produced when R1 booted. At the end of the buffer, however, are the three syslog
messages that were produced when a no shutdown command was issued on the router.
SNMP is an application layer protocol that defines how SNMP managers and SNMP agents
exchange management information. SNMP uses the UDP transport mechanism to retrieve
and send management information, such as MIB variables.
SNMP is broken down into these three components:
• SNMP manager: Periodically polls the SNMP agents on managed devices by
querying the device for data. The SNMP manager can be part of an NMS such as
Cisco Prime Infrastructure.
• SNMP agent: Runs directly on managed devices, collects device information, and
translates it into a compatible SNMP format according to the MIB.
• MIB: Represents a virtual information storage location that contains collections of
managed objects. Within the MIB, there are objects that relate to different defined
MIB modules (for example, the interface module).
Routers and other network devices keep statistics about the information of their
processes and interfaces locally. SNMP on a device runs a special process that is called an
agent. This agent can be queried, using SNMP. SNMP is typically used to gather
environment and performance data such as device CPU usage, memory usage, interface
traffic, interface error rate, and so on. By periodically querying or "polling" the SNMP
agent on a device, an NMS can gather or collect statistics over time. The NMS polls devices
periodically to obtain the values defined in the MIB objects that it is set up to collect. It
then offers a look into historical data and anticipated trends. Based on SNMP values, the
NMS triggers alarms to notify network operators.
To obtain information from the MIB on the SNMP agent, you can use several different
operations:
• Get: This operation is a request sent by the manager to the SNMP agent to retrieve
one or more values from the MIB of the managed device.
• Get-next: This operation is used to get the next object in the MIB from an SNMP
agent.
• Get-bulk: This operation allows a management application to retrieve a large
section of a table at once.
• Set: This operation is used to put information in the MIB from an SNMP manager.
• Trap: This operation is used by the SNMP agent to send a triggered piece of
information to the SNMP manager.
• Inform: This operation is the same as a trap, but it adds an acknowledgment that a
trap does not provide.
The SNMP manager polls the SNMP agents and queries the MIB via SNMP agents on UDP
port 161.
The SNMP agent can also send triggered messages called traps to the SNMP manager, on
UDP port 162. For example, if the interface fails, the SNMP agent can immediately send a
trap message to the SNMP manager, notifying the manager about the interface status.
This feature is extremely useful because you can get information almost immediately
when something happens. Remember, without traps, the SNMP manager only learns about an
event when it next polls the SNMP agent. Depending on the polling interval, this could mean
a delay of 10 minutes or more.
The SNMP trap operation is shown in the example.
1. Interface Ethernet0/0 fails on the branch router.
2. The branch router sends an SNMP trap to the NMS, informing that interface
Ethernet0/0 has failed.
3. The NMS receives the SNMP trap and raises an alarm, which notifies the network
operations center (NOC), which in turn can proactively solve the problem or notify
the customer regarding the problem.
Note: In step 3, the role of SNMP is just to send the trap. All other actions are performed
by NMS and NOC, if present.
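For illustration, a hedged Cisco IOS sketch of enabling SNMP polling and traps toward an NMS; the community string is illustrative, and the NMS address reuses 10.1.10.10 from the snmpset example later in this section:
Router(config)# snmp-server community public RO
Router(config)# snmp-server host 10.1.10.10 version 2c public
Router(config)# snmp-server enable traps snmp linkdown linkup
! Allow read-only polling with community "public" and send link up/down traps to the NMS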
All versions of SNMP utilize the concept of the MIB. The MIB organizes configuration and
status data into a tree structure. The figure below shows a small portion of an MIB tree
structure.
Objects in the MIB are referenced by their object ID (OID), which specifies the path from
the tree root to the object. For example, system identification data is located under
1.3.6.1.2.1.1. Some examples of system data include the system name (OID
1.3.6.1.2.1.1.5), system location (OID 1.3.6.1.2.1.1.6), and system uptime (OID
1.3.6.1.2.1.1.3).
Note that the following commands are not available on Cisco IOS Software, but they are
shown as an example of what you can achieve with SNMP. In these examples, a Linux PC is
used.
The snmpwalk command recursively pulls data from the MIB tree, starting from the
specified location. For example, you could use it to show which interfaces exist on a
router.
The snmpwalk command essentially performs a whole series of get-next requests
automatically for you and stops when it returns results that are no longer inside the range
of the OID that you originally specified.
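A hedged sketch from a Linux PC; the router address and community string are illustrative, and 1.3.6.1.2.1.2.2.1.2 is the ifDescr OID that holds interface names:
$ snmpwalk -v2c -c public 10.1.10.1 1.3.6.1.2.1.2.2.1.2
# Walks the ifDescr column of the interfaces table to list the interfaces on the router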
You can also use the snmpset command to reset the interface. In this example, the no
shutdown command was issued on Serial2/0 via the snmpset command.
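A hedged sketch of such an snmpset call; the router address, community string, and the ifIndex value 3 for Serial2/0 are illustrative assumptions, while 1.3.6.1.2.1.2.2.1.7 is the ifAdminStatus OID (an integer value of 1 means administratively up):
$ snmpset -v2c -c private 10.1.10.1 1.3.6.1.2.1.2.2.1.7.3 i 1
# Sets ifAdminStatus for ifIndex 3 to up, equivalent to issuing no shutdown on that interface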
Here is the syslog message on the router, which shows that interface Serial2/0 changed
state to up.
*Apr 10 18:35:00.273: %SYS-5-CONFIG_I: Configured from 10.1.10.10 by snmp
*Apr 10 18:35:02.274: %LINK-3-UPDOWN: Interface Serial2/0, changed state to up
*Apr 10 18:35:03.278: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial2/0,
changed state to up
However, working with long MIB variable names like 1.3.6.1.2.1.2.2.1.2 can be
problematic for the average user. More commonly, the network operations staff uses a
network management product with an easy-to-use GUI, with the entire MIB data variable
naming transparent to the user.
When dealing with SNMP, other useful tools are the Cisco SNMP Object Navigator and
Cisco IOS MIB Locator. The Cisco SNMP Object Navigator allows you to find more details
about a particular OID, and the Cisco IOS MIB Locator can tell you which OIDs exist on a
particular Cisco platform or product. Both features are extremely helpful when you want
to create a new graph in your NMS for a particular set of OID return values.
Note: The tools that are mentioned above can be found here:
https://mibs.cloudapps.cisco.com/ITDIT/MIBS/servlet/index.
Use Case: Using SNMP to Gather Information
You are redesigning a network for a customer. An engineer on their side pointed out that
some users are complaining about slow internet connection. The engineer is asking you to
take this issue into account during the redesign.
You can use SNMP to monitor the behavior of the router that is connected to the internet.
CPU, memory, and link overutilization are usually the reason for a router's poor
performance.
Note: SNMP can only be used to interact with devices under your control. Devices and
services that exist outside of your network, which may actually be the ones causing the
issue, cannot be inspected using SNMP.
A network management application (for example, Cisco Prime Infrastructure) can display
data that is gathered via SNMP in the form of graphs and reports.
• SNMP version 1: SNMPv1 is the initial version of SNMP. SNMPv1 security is based
on communities that are nothing more than passwords: plaintext strings that allow
any SNMP-based application that knows the strings to gain access to the
management information of a device. There are typically three communities in
SNMPv1: read-only, read-write, and trap. A key security flaw in SNMPv1 is that the
only authentication available is through a community string. Anyone who knows
the community string is allowed access. Adding to this problem is the fact that all
SNMPv1 packets pass across the network unencrypted. Therefore, anyone who
can sniff a single SNMP packet now has the community string that is needed to get
access.
• SNMP version 2c: SNMPv2 was the first attempt to fix SNMPv1 security flaws.
However, SNMPv2 never really took off. The only prevalent version of SNMPv2
today is SNMPv2c, which contains SNMPv2 protocol enhancements but leaves out
the security features that no one could agree on. The letter "c" designates v2c as
being "community-based," which means that it uses the same authentication
mechanism as v1: community strings.
• SNMP version 3: SNMPv3 is the latest version. It adds support for strong
authentication and private communication between managed entities. You can
define a secure policy for each group, and optionally you can limit the IP addresses
to which its members can belong. You have to define encryption and hashing
algorithms and passwords for each user. The key security additions to SNMPv3 are
as follows:
o Can use Message Digest 5 (MD5) or Secure Hash Algorithm (SHA) hashes for
authentication
o Can encrypt the entire packet
o Can guarantee message integrity
SNMPv3 introduces three levels of security:
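These levels are commonly referred to as noAuthNoPriv, authNoPriv, and authPriv. A hedged Cisco IOS sketch of configuring the strongest level (authPriv); the group name, user name, and passwords are illustrative:
Router(config)# snmp-server group ADMINGROUP v3 priv
Router(config)# snmp-server user admin ADMINGROUP v3 auth sha AuthPass123 priv aes 128 PrivPass123
! SHA authentication plus AES-128 encryption for user "admin" in group ADMINGROUP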
If you add the detail keyword to the show clock command, it will tell you what the source
of clock configuration is.
The system clock keeps an authoritative flag that indicates whether the time is
authoritative (believed to be accurate). The asterisk in front of the first line of the
command output means that the time is not believed to be accurate.
You can also change the time zone and enable daylight saving time. In this example,
Central European Time (CET) is used.
Notice how clock settings now reflect local time, because Central European Summer Time
(CEST) is 2 hours ahead of UTC.
To configure the time zone, use the clock timezone zone-name hours-offset [minutes-
offset] global configuration command.
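A hedged sketch for the CET/CEST case mentioned above (the recurring dates follow the European daylight saving rules):
Router(config)# clock timezone CET 1
Router(config)# clock summer-time CEST recurring last Sun Mar 2:00 last Sun Oct 3:00
! UTC+1 in winter; daylight saving time from the last Sunday of March to the last Sunday of October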
Hardware Clock
The hardware clock is a chip with a rechargeable backup battery that can retain the time
and date information across reboots of the device.
The hardware clock (also called the system calendar) maintains time separately from the
software clock, but is usually updated from the software clock when the software clock is
synchronized with an authoritative time source. The hardware clock continues to run
when the system is restarted or when the power is turned off. Typically, the hardware
clock needs to be manually set only once, when the system is installed, but to prevent
drifting over time it needs to be readjusted at regular intervals.
You should avoid setting the hardware clock if you have access to a reliable external time
source. Time synchronization should instead be established using NTP.
You can update the hardware clock with a new software clock setting with the following
command:
Router# clock update-calendar
Network Time Protocol
To maintain a consistent time across the network, the software clock must receive time
updates from an authoritative time source on the network. Network Time Protocol (NTP) is a
protocol designed to time-synchronize a network of machines. A secure method of
providing clocking for the network is for network administrators to implement their own
private network master clocks that are synchronized to UTC using satellite or radio.
However, if network administrators do not wish to implement their own master clocks
because of cost or other reasons, other clock sources are available on the internet, such as
ntp.org, but this option is less secure.
Correct time within networks is important for the following reasons:
• Correct time allows the tracking of events in the network in the correct order.
• Clock synchronization is critical for the correct interpretation of events within
syslog data.
• Clock synchronization is critical for digital certificates and authentication protocols
such as Kerberos.
NTP runs over UDP, using port 123 as both the source and destination, which in turn runs
over IP. NTP distributes this time across the network. NTP is extremely efficient—no more
than one packet per minute is necessary to synchronize two devices to within a
millisecond of one another.
NTP uses the concept of a stratum to describe how many NTP hops away a machine is
from an authoritative time source, a stratum 0 source. A stratum 1 time server has a radio
or atomic clock that is directly attached, a stratum 2 time server receives its time from a
stratum 1 time server, and so on. A device running NTP automatically chooses as its time
source the device with the lowest stratum number that it is configured to communicate
with through NTP. This strategy effectively builds a self-organizing tree of NTP speakers.
NTP can get the correct time from an internal or external time source:
• You should check within the company where you are implementing the NTP and
what stratum level you are supposed to set in the ntp master command. It must be
a higher number than the stratum level of the upstream NTP device.
• The ntp master command should only be configured on a device that has
authoritative time. Therefore, it must either be configured to synchronize with
another NTP server (using the ntp server command) and actually be synchronized
with that server, or it must have its time set using the clock set command.
The stratum value is a number from 1 to 15. A lower stratum value indicates a higher
NTP priority. It also indicates the NTP stratum number that the system will claim.
Optionally, you can also configure a loopback interface, whose IP address will be used as
the source IP address when sending NTP packets.
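For illustration, a hedged sketch of the authoritative side of such a setup; the stratum number and interface are illustrative:
Central(config)# ntp master 3
Central(config)# ntp source Loopback0
! Claim stratum 3 as an authoritative time source and source NTP packets from the Loopback0 address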
For example, consider the following scenario, where you have multiple routers. The
Central router acts as an authoritative NTP server, while the Branch1 and Branch2 routers
act as NTP clients. In this case, initially Branch1 and Branch2 routers are referencing their
clocks via NTP to the 172.16.1.5 IPv4 address, which belongs to Ethernet 0/0 interface on
the Central router. Now imagine if that interface on the Central router fails, what do you
think will happen? The Branch1 and Branch2 routers cannot reach that IPv4 address,
which means that they will stop referencing their clocks via NTP and their clocks will
become unsynchronized. The solution for that is to use a loopback interface, which is a
virtual interface on a router and is always in up/up state. Therefore, even if one of the
interfaces fails on the Central router, the Branch1 and Branch2 routers can still use NTP if
they have a backup path to the IPv4 address of the loopback interface on the Central
router.
Configure the Branch1 router as an NTP client, which will synchronize its time with the
Central router:
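A minimal sketch, assuming the Central router's loopback address is 10.0.0.1 (hypothetical):
Branch1(config)# ntp server 10.0.0.1
! Synchronize the Branch1 clock with the Central router's loopback address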
Configure the Branch2 router as an NTP client, which will synchronize its time with the
Central router.
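The corresponding sketch for Branch2, using the same hypothetical address:
Branch2(config)# ntp server 10.0.0.1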
Use the show ntp associations and the show ntp status commands to verify your
configuration.
As a networking engineer, you will need to manage different Cisco devices, operating
systems, and configuration files, which includes these responsibilities:
An important feature of the Cisco IFS is the use of the URL convention to specify files on
network devices and the network. The URL prefix specifies the file location.
Commonly used prefix examples include the following:
• flash: The primary flash device. Some devices have more than one flash location,
such as slot0: and slot1:. In such cases, an alias is implemented to resolve flash: to
the flash device that is considered primary.
• nvram: NVRAM. Among other things, NVRAM is where the startup configuration is
stored.
• system: A partition that is built in RAM that holds, among other things, the running
configuration.
• tftp: Indicates that the file is stored on a server that can be accessed using the
TFTP protocol.
• ftp: Indicates that the file is stored on a server that can be accessed using the FTP
protocol.
• scp: Indicates that the file is stored on a server that can be accessed using the
Secure Copy Protocol (SCP).
Directories and subdirectories can be used to organize files in manageable containers. A
preceding slash (/) character indicates the root directory, and the slash character is also
used to separate directory names from the directory’s contents. Individual files have
names, which must be unique within the directory in which they are stored. URLs are used
to specify files and they provide full specification of a locally stored file, including the
prefix, directory path, and filename.
Here are some examples:
• flash:/c2900-universalk9-mz.SPA.153-1.T.bin
• nvram:/startup-config
• system:/running-config
URLs that specify remote files can be more complex. After the prefix, a server location (IP
address or resolvable hostname) must be specified. For protocols that require user
authentication, usernames and passwords may be specified. If they are not specified in
the URL, they will need to be specified interactively by the application.
Here are some examples of remote file specifications:
• tftp://10.10.10.10/backup-cfg.txt
• ftp://10.10.20.20/admin:Adm1nPwd/c2900-universalk9-mz.SPA.154-1.T.bin
• scp://cfg-srv/admin:Adm1nPwd/c2900-universalk9-mz.SPA.154-1.T2.bin
Note: The third example uses a hostname instead of an IP address. For it to function, the
router must have Domain Name System (DNS) properly configured or local IP host entries
defined in the running configuration, allowing the resolution of the name cfg-srv to an IP
address.
29.3 Managing Cisco Devices
Stages of the Router Power-On Boot Sequence
When a Cisco networking device boots, it performs a series of steps that include loading
Cisco operating system software and the device configuration.
The example shows a router power-on boot sequence, which consists of a series of steps
that include loading Cisco IOS Software and the router configuration.
The following stages and router components are used in the router power-on boot
sequence:
The sequence of events that occurs during the power-on (boot) of a router is explained in
detail here. Understanding these events will help you accomplish operational tasks and
troubleshoot router problems.
1. Perform POST: This event is a series of hardware tests that verifies that all
components of a Cisco router are functional. During this test, the router also
determines which hardware is present. Power-on self-test (POST) executes from
microcode that is resident in the system read-only memory (ROM).
2. Load and run bootstrap code: Bootstrap code is used to perform subsequent
events such as finding Cisco IOS Software at all possible locations, loading it into
RAM, and running it. After Cisco IOS Software is loaded and running, the bootstrap
code is not used until the next time the router is reloaded or power-cycled.
3. Locate Cisco IOS Software: The bootstrap code determines the location of Cisco
IOS Software that will be run. Normally, the Cisco IOS Software image is located in
the flash memory, but it can also be stored in other places such as a TFTP server.
The configuration register and configuration file, which are located in NVRAM,
determine where the Cisco IOS Software images are located and which image file
to use. If a complete Cisco IOS image cannot be located, a scaled-down version of
Cisco IOS Software is copied from ROM into RAM. This version of Cisco IOS
Software is used to help diagnose any problems and can be used to load a
complete version of Cisco IOS Software into RAM.
4. Load Cisco IOS Software: After the bootstrap code has found the correct image, it
loads this image into RAM and starts Cisco IOS Software. Some older routers do
not load the Cisco IOS Software image into RAM but execute it directly from flash
memory instead.
5. Locate the configuration: After Cisco IOS Software is loaded, the bootstrap
program searches for the startup configuration file in NVRAM.
6. Load the configuration: If a startup configuration file is found in NVRAM, Cisco IOS
Software loads it into RAM as the running configuration and executes the
commands in the file one line at a time. The running configuration file contains
interface addresses, starts routing processes, configures router passwords, and
defines other characteristics of the router. If no configuration file exists in NVRAM,
the router enters the setup utility or attempts an autoinstall to look for a
configuration file from a TFTP server.
7. Run the configured Cisco IOS Software: When the prompt is displayed, the router
is running Cisco IOS Software with the current running configuration file. You can
then begin using Cisco IOS commands on the router.
Entering boot system commands in sequence in a router configuration can create a fault-
tolerant boot plan. The boot system command is a global configuration command that
allows you to specify the source for the Cisco IOS Software image to load. For example,
the following command boots the system boot image file that is named c2900-
universalk9-mz.SPA.152-4.M1.bin from the flash memory device:
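A hedged sketch; the exact flash device prefix (flash: or flash0:) depends on the platform:
Router(config)# boot system flash0:c2900-universalk9-mz.SPA.152-4.M1.bin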
This next example specifies a TFTP server as a source of a Cisco IOS image, with a
ROMMON session as the backup:
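A hedged sketch; the server address is illustrative, and the rom keyword falls back to the boot image in ROM on platforms that support it:
Router(config)# boot system tftp c2900-universalk9-mz.SPA.152-4.M1.bin 10.1.1.10
Router(config)# boot system rom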
Loading Cisco IOS Image Files
When a Cisco router locates a valid Cisco operating system image file in the flash memory,
the Cisco operating system image is normally loaded into RAM to run. Image files are
typically compressed, so the file must first be decompressed. After the file is
decompressed into RAM, it is started.
For example, when Cisco IOS Software begins to load, you may see a string of hash signs
(#), as shown in the figure, while the image decompresses.
The Cisco IOS image file is decompressed and stored in RAM. The output shows the boot
process on a router.
Use the show version command to help verify and troubleshoot some of the basic
hardware and software components of the router. The show version command displays
information about the version of Cisco IOS Software that is currently running on the
router, the version of the bootstrap program, and information about the hardware
configuration, including the amount of system memory.
Output from the show version command includes the following:
• Interfaces
2 Gigabit Ethernet interfaces
1 Serial (sync/async) interface
This section of the output displays the physical interfaces on the router. In this example,
the Cisco 2901 router has two Gigabit Ethernet interfaces and one serial interface.
• Amount of NVRAM
255 KB of NVRAM
This line from the example output shows the amount of NVRAM on the router.
• Amount of Flash
250880 KB of ATA System CompactFlash 0 (Read/Write)
This line from the example output shows the amount of flash memory on the router.
• Configuration register
Configuration register is 0x2102
The last line of the show version command displays the current configured value of the
software configuration register in hexadecimal format. This value indicates that the router
will attempt to load a Cisco IOS Software image from flash memory and load the startup
configuration file from NVRAM. 0x2102 is the factory default setting.
The setup utility prompts the user at the console for specific configuration information to
create a basic initial configuration on the router, as shown in this example:
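The beginning of the dialog looks similar to the following:
         --- System Configuration Dialog ---
Would you like to enter the initial configuration dialog? [yes/no]: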
Note: If you type "yes" at this stage, the setup utility prompts you for basic information
about your router and network, and it creates an initial configuration file. Since the resulting
configuration is basic, typically you would type "no" at this point and continue with a
manual configuration.
To display the current configuration, enter the show running-config command.
To display the saved configuration, enter the show startup-config command.
The show running-config and show startup-config commands are among the most
common Cisco IOS Software EXEC commands because they allow you to see the current
running configuration in RAM on the router or the startup configuration commands in the
startup configuration file in NVRAM that the router will use at the next restart.
If the words "Current configuration" are displayed, the active running configuration from
RAM is being displayed.
If there is a message at the top indicating how much nonvolatile memory is being used
("Using 1318 out of 262136 bytes" in this example), the startup configuration file from
NVRAM is being displayed.
You can copy Cisco IOS image files from a TFTP, RCP, FTP, or SCP server to the flash
memory of a networking device. You may want to perform this function to upgrade the
Cisco IOS image, or to use the same image as on other devices in your network.
You can also copy (upload) Cisco IOS image files from a networking device to a file server
by using TFTP, FTP, RCP, or SCP protocols, so that you have a backup of the current IOS
image file on the server.
The protocol you use depends on which type of server you are using. The FTP and RCP
transport mechanisms provide faster performance and more reliable delivery of data than
TFTP. These improvements are possible because the FTP and RCP transport mechanisms
are built on and use the TCP/IP stack, which is connection-oriented.
Note: Just as Secure Shell (SSH) and HTTPS are more secure than Telnet and HTTP, SCP is
more secure than FTP or TFTP. SCP is an implementation of RCP that runs over an SSH
connection. By using SSH as the underlying data transfer conduit, SCP offers the same
security benefits as SSH. If the public key of the server is properly validated, then SCP
offers origin authentication, data integrity, and privacy.
Managing Device Configuration Files
Device configuration files contain a set of user-defined configuration commands that
customize the functionality of a Cisco device.
Device configurations can be loaded from the following components:
• NVRAM
• Terminal
• Network file server (for example, TFTP, SCP, and others)
• Copy the running configuration from RAM to the startup configuration in NVRAM,
overwriting the existing file:
• Copy the running configuration from RAM to a remote TFTP server location,
overwriting the existing file:
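Hedged examples of the two copy operations listed above:
Router# copy running-config startup-config
Router# copy running-config tftp: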
Use the configure terminal command to interactively create configurations in RAM from
the console or remote terminal.
Use the erase startup-config command to delete the saved startup configuration file in
NVRAM. (Note that this command cannot be abbreviated.)
This figure shows an example of how to use the copy tftp: running-config command to
merge the running configuration in RAM with a saved configuration file on a TFTP server.
The following is an example of merging a configuration file from the TFTP server with the
running configuration in RAM:
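A hedged sketch of the interactive dialog; the server address and filename are illustrative and reuse values from the URL examples earlier in this section:
Router# copy tftp: running-config
Address or name of remote host []? 10.10.10.10
Source filename []? backup-cfg.txt
Destination filename [running-config]?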
You can use the TFTP servers to store configurations in a central place, allowing
centralized management and updating. Regardless of the size of the network, there
should always be a copy of the current running configuration online as a backup.
The copy running-config tftp: command allows the current configuration to be uploaded
and saved to a TFTP server. The IP address or name of the TFTP server and the destination
filename must be supplied. A series of exclamation marks in the display shows the
progress of the upload.
The copy tftp: running-config command downloads a configuration file from the TFTP
server to the running configuration in RAM. Again, the address or name of the TFTP
server and the source and destination filename must be supplied. In the example, IPv4 is
used as the underlying network protocol. In this case, because you are copying the file to the running
configuration, the destination filename should be running-config. This process is a merge
process, not an overwrite process.
30.1 Examining the Security Threat Landscape
Introduction
Modern networks are very large and intensely interconnected. As such, modern networks
are often open to being accessed, and a potential attacker can often easily attach to or
remotely access such networks. Widespread internetworking increases the probability
that more attacks are carried out over large, heavily interconnected networks such as the
internet.
Computer systems and applications that are attached to these networks are becoming
increasingly complex. Because of this, it has become more difficult to analyze, secure, and
properly test the security of computer systems and their applications. When these
systems and their applications are attached to large networks, the risk to the systems
dramatically increases.
The ever-evolving security landscape presents a continuous challenge to organizations.
The fast proliferation of botnets, the increasing sophistication of network attacks and the
alarming growth of Internet-based organized crime and espionage are examples of threats
that shape the security landscape. Security professionals also need to protect networks
and users from identity and data theft, more innovative insider attacks, and emerging new
forms of threats on mobile systems.
There are various concepts across the modern network security threat landscape. There is
no way to linearly convey all the branches, loops, and combinations of related concepts.
There is also no way to catalog the entire threat landscape statically, as new concepts and
combinations are always evolving.
As a network engineer, you must be aware of the security threats landscape, which
includes important concepts:
• Threat: Any circumstance or event with the potential to cause harm to an asset in
the form of destruction, disclosure, adverse modification of data, or denial of
service (DoS). An example of a threat is malicious software that targets
workstations.
• Vulnerability: A weakness that compromises either the security or the
functionality of a system. Weak or easily guessed passwords are considered
vulnerabilities.
• Exploit: A mechanism that uses a vulnerability to compromise the security or
functionality of a system. An example of an exploit is malicious code that gains
internal access. When a vulnerability is disclosed to the public, attackers often
create a tool that implements an exploit for the vulnerability. If they release this
tool or proof of concept code to the internet, other less-skilled attackers and
hackers (the so-called script kiddies) can then easily exploit the vulnerability.
• Risk: The likelihood that a particular threat using a specific attack will exploit a
particular vulnerability of an asset that results in an undesirable consequence.
• Mitigation techniques: Methods and corrective actions to protect against threats
and different exploits, such as implementing updates and patches, to reduce the
possible impact and minimize risks.
• Initial compromise
• Escalation of privileges
• Internal reconnaissance
• Lateral propagation, compromising other systems on track towards its goal
• Mission completion
Each of these steps is taken very stealthily, with the goal of evading detection and
maintaining a presence.
• sectools.org: A website run by the Nmap Project, which regularly polls the network
security community regarding their favorite security tools. It lists the top security
tools in order of popularity. A short description is provided for each tool, along
with user reviews and links to the publisher's website. There are password
auditors, sniffers, vulnerability scanners, packet crafters, and exploitation tools,
among the many categories. The site is a valuable source of information for defenders and
attackers alike. Security professionals should review the list and read the descriptions of
the tools. Network attackers certainly will.
• Kali Linux: The Knoppix Security Tools Distribution was published in 2004. It was a
live Linux distribution that ran from a CD-ROM and included more than 100
security tools. Back when security tools were uncommon in Windows, Windows
users could boot their PCs with the Knoppix STD CD and have access to that
toolset. Over the years, Knoppix STD evolved through WHoppix, Whax, and
Backtrack to its current distribution as Kali Linux. The details of the evolution are
not as important as the fact that a live Linux distribution that can be easily booted
from removable media or installed in a virtual machine has been well supported
for over a decade. The technology continues to be updated to remain current and
relevant. Kali Linux packages over 300 security tools in a Debian-based Linux
distribution. Kali Linux may be deployed on removable media, much like the
original Knoppix Security Tools Distribution. It may also be deployed on physical
servers or run as a virtual machine (VM).
• Metasploit: When Metasploit was first introduced, it had a big impact on the
network security industry. It was a very potent addition to the penetration tester's
toolbox. While it provided a framework for advanced security engineers to develop
and test exploit code, it also lowered the threshold for the experience required for
a novice attacker to perform sophisticated attacks. The framework separates the
exploit (code that uses a system vulnerability) from the payload (code injected to
the compromised system). The framework is distributed with hundreds of exploit
modules and dozens of payload modules. To launch an attack with Metasploit, you
must first select and configure an exploit. Each exploit targets a vulnerability of an
unpatched operating system or application server. The use of a vulnerability
scanner can help determine the most appropriate exploits to attempt. The exploit
must be configured with relevant information such as the target IP address. Next,
you must select a payload. The payload might be remote shell access, Virtual
Network Computing (VNC) access, or remote file downloads. You can add exploits
incrementally. Metasploit exploits are often published with or shortly after the
public disclosure of vulnerabilities.
Note: Using security tools on networks is often a violation of the security policy governing
those networks. You should never experiment with security tools on a network where you
do not have explicit authorization to do so.
• A botnet operator builds the botnet by infecting computers with malicious code, which
runs the malicious bot process. A malicious bot is self-propagating malware
designed to infect a host and connect back to the command-and-control server. In
addition to its worm-like ability to self-propagate, a bot can include the ability to
log keystrokes, gather passwords, capture and analyze packets, gather financial
information, launch DoS attacks, relay spam, and open back doors on the infected
host. Bots have all the advantages of worms but are generally much more versatile
in their infection vector and are often modified within hours of publication of a
new exploit. They have been known to exploit back doors opened by worms and
viruses, which allows them to access networks with good perimeter control. Bots
rarely announce their presence with visible actions such as high scan rates, which
negatively affect the network infrastructure; instead, they infect networks in a way
that escapes immediate notice.
• The bot on the newly infected host logs into the command-and-control server and
awaits commands. Often, the command-and-control server is an IRC channel or a
web server.
• Instructions are sent from the command-and-control server to each bot in the
botnet to execute actions. When the bots receive the instructions, they begin
generating malicious traffic that is aimed at the victim. Some bots can also be
updated to introduce new functionality.
In the example, an attacker controls the bots to launch a DDoS attack against the victim's
infrastructure. These bots communicate with the command-and-control server that the
attacker controls over a covert channel that is protected, obfuscated, or otherwise designed
to evade detection. This communication often takes place over IRC,
encrypted channels, bot-specific peer-to-peer networks, and even Twitter.
30.6 Examining the Security Threat Landscape
Spoofing
An attack is considered a spoofing attack when an attacker injects traffic that appears to
be sourced from a system other than the attacker's system itself. Spoofing is not
specifically an attack, but spoofing can be incorporated into various types of attacks.
Unlike other attack types, most spoofing can be easily prevented by well-known
mitigation techniques.
There are several types of spoofing; here are some of them:
Smurf attacks can easily be mitigated on a Cisco IOS device by using the no ip directed-
broadcast interface configuration command, which has been the default setting since
Cisco IOS Software Release 12.0. With the no ip directed-broadcast command configured
for an interface, broadcasts destined for the subnet to which that interface is attached will
be dropped rather than being broadcast.
Note: An IP directed broadcast is an IPv4 packet whose destination address is a valid
broadcast address for some IPv4 subnet but which originates from a node that is not itself
part of that destination subnet.
While smurf attacks no longer pose the threat they once did, newer reflection and
amplification attacks may pose a huge threat. For example, in March 2013, DNS
amplification was used to cause a DDoS that made it impossible for anyone to access an
organization's website. This attack was so massive that it also slowed internet traffic
worldwide. The attackers were able to generate up to 300 Gbps of attack traffic by
exploiting DNS open recursive resolvers, which respond to DNS queries, including queries
outside their own IP range. By sending an open resolver a very small, deliberately formed
query with the spoofed source address of the target, an attacker can evoke a significantly
larger response directed at the intended target. These types of attacks use large numbers of
compromised source systems and multiple DNS open resolvers, so the effects on the
target devices are magnified. The Open Resolver Project cataloged 28 million open
recursive DNS resolvers on the internet in 2013.
In February 2014, a Network Time Protocol (NTP) amplification attack generated a new
record in attack traffic, over 400 Gbps. NTP has some characteristics that make it an
attractive attack vector. Like DNS, NTP uses UDP for transport. Like DNS, some NTP
requests can result in replies that are much larger than the request. For example, NTP
supports a command that is called monlist, which can be sent to an NTP server for
monitoring purposes. The monlist command returns the addresses of up to the last 600
machines with which the NTP server has interacted. If the NTP server is relatively active,
this response is much bigger than the request sent, making it ideal for an amplification
attack.
• Calling users on the phone claiming to be IT and convincing them that they need to
set their passwords to particular values in preparation for the server upgrade that
will take place tonight.
• An individual without a badge following a badged user into a badge-secured area
(tailgating).
• Sending an infected USB key along with book or magazine samples.
• Developing fictitious personalities on social networking sites to obtain and abuse
"friend" status.
• Sending an email enticing a user to click a link to a malicious website (this is called
phishing).
• Visual hacking, where the attacker physically observes the victim entering
credentials (such as a workstation login, a bank machine PIN, or the combination
on a physical lock).
Phishing is a common social engineering technique. Typically, a phishing email pretends to
be from a large, legitimate organization, as in the figure.
Since the organization from which the phishing email appears to originate is legitimate,
the target may have a real account with the organization. The malicious website generally
resembles that of the real organization. The goal is to get the victim to enter personal
information such as account numbers, social security numbers, usernames, or passwords.
Social engineering is a serious threat and may lead to other types of attacks, and therefore
organizations should take measures to mitigate the risk from these types of attacks.
Hence, an organization should raise user awareness and educate employees to defend
against social engineering deceptions that threaten organizational security, conduct
training sessions on this subject regularly, ensure that social engineering attackers find it
difficult to breach physical security in the organization, and so on.
• Spear phishing: Emails are sent to smaller, more targeted groups. Spear phishing
may even target a single individual. Knowing more about the target community
allows the attacker to craft an email that is more likely to deceive the target
successfully. For example, an attacker sends an email with the source address of
the human resources department to the employees.
• Whaling: Like spear phishing, whaling uses the concept of targeted emails;
however, it targets a high-profile target. The target of a whaling attack is often one
or more of the top executives of an organization. The whaling email content is
designed to get an executive's attention, such as a subpoena request or a
complaint from an important customer.
• Pharming: Whereas phishing entices the victim to a malicious website, pharming
lures victims by compromising name services. Pharming can be done by injecting
entries into localhost files or by poisoning the DNS in some fashion. When victims
attempt to visit a legitimate website, the name service instead provides the IP
address of a malicious website. In the following figure, an attacker has injected an
erroneous entry into the host file on the victim system. As a result, when the
victims attempt to do online banking with BIG-bank.com, they are directed to the
address of a malicious website instead. Pharming can be implemented in other
ways. For example, the attacker may compromise legitimate DNS servers. Another
possibility is for the attacker to compromise a DHCP server, causing the DHCP
server to specify a rogue DNS server to the DHCP clients. Consumer-market routers
acting as DHCP servers for residential networks are prime targets for this form of
pharming attack.
• Watering hole: A watering hole attack uses a compromised web server to target
select groups. The first step of a watering hole attack is determining the websites
that the target group visits regularly. The second step is to compromise one or
more of those websites. The attacker compromises the websites by infecting them
with malware that can identify members of the target group. Only members of the
target group are attacked. Other traffic is undisturbed, which makes it difficult to
recognize watering hole attacks by analyzing web traffic. Most traffic from the infected
website is benign.
• Vishing: Vishing uses the same concept as phishing, except that it uses voice and
the phone system as its medium instead of email. For example, a visher may call a
victim claiming that the victim is delinquent in loan payments and attempt to
collect personal information such as the victim's social security number or credit
card information.
• Smishing: Smishing uses the same concept as phishing, except that it uses short
message service (SMS) texting as the medium instead of email.
• No one would be interested in my network: In the past, this statement might have
been true if your network was very small, but attackers are now interested in
smaller targets that are easier to attack. If you think no one would be interested in
attacking your network, your network is probably not as secure as it could be,
making it very interesting indeed to attackers. Even if you have a two-computer
network that contains no tempting information such as banking information, debit
card information, or national defense secrets, your computers can still be a target
for several reasons. One reason is that an attacker can use your computers to
launch larger, distributed attacks. Another reason is that an attacker may use your
computers to access the remote systems that your computers can reach.
• Router or gateway uses NAT; network is inaccessible and secure: NAT has been
created to address issues with overlapping IP address spaces and allow translation
between private and public address spaces. NAT has never been a security
mechanism, and indeed many techniques allow bidirectional communication over
NAT gateways. Without any rules, inspection, or other real firewalling procedures,
pure NAT gateways should never be considered part of a network security
architecture.
• The company has never been hacked: Unless you regularly monitor and analyze
the activity on your assets, you cannot be sure that you have never been hacked or
are not currently being attacked. Effective monitoring and analysis almost certainly
require software to automate the analysis.
• IT staff is responsible for implementing security: While IT staff play a very
important role in the configuration, maintenance, and monitoring of security
controls, end users play a primary role in implementing security. In a typical
environment, end users heavily outnumber IT staff. End users must understand the
need for security policies and their role in policy execution.
• The company has a firewall in place; it is secure: It used to be very common to use
resources to secure the perimeter and have very open systems within the
perimeter. The understanding that internal systems must be secured has gained
much traction, but there are still proponents of focusing on a hardened perimeter.
Also, reliance on a single security technology is risky. For example, firewalls can be
poorly configured, and client-side attacks are very difficult for firewalls to deal
with. Focusing on individual security points and relying on any single security
technology is insufficient in today’s networking environments.
This section provided an overview of the current networking threat landscape, but it only
addresses the basics. The threats are innumerable and constantly changing. The list below
provides more examples of today’s threat vectors:
• Cognitive threats via social networks: Social engineering takes a new meaning in
the era of social networking. Attackers can create false identities on social
networks, building and exploiting friend relationships with others on the social
network. Phishing attacks can much more accurately target susceptible audiences.
Confidential information may be exposed due to a lack of defined or enforced
policy.
• Consumer electronics exploits: The operating systems on consumer devices
(smartphones, tablets, and so on) are a target of choice for high-volume attacks.
The proliferation of applications for these operating systems, and the nature of the
development and certification processes for those applications, augments the
problem. The common expectation of bring your own device (BYOD) support
within an organization’s network increases the importance of this issue.
• Widespread website compromises: Malicious attackers compromise popular
websites, forcing the sites to download malware to connecting users. Attackers
typically are not interested in the data on the website, but they use it as a
springboard to infect the systems of users connecting to the site.
• Disruption of critical infrastructure: The Stuxnet worm confirmed concerns about
an increase in targeted attacks that are aimed at the power grid, nuclear plants,
and other critical infrastructure.
• Virtualization exploits: Device and service virtualization adds more complexity to
the network. Attackers know this fact and increasingly target virtual servers, virtual
switches, and trust relationships at the hypervisor level.
• Memory scraping: Increasingly popular, this technique is aimed at fetching
information directly from volatile memory. The attack tries to exploit operating
systems and applications that leave traces of data in memory. Attacks are
particularly aimed at accessing data that is encrypted when stored on a disk or
sent across a network but is clear text when processed in the RAM of the
compromised system.
• Hardware hacking: These attacks aim to exploit the hardware architecture of
specific devices, with consumer devices being increasingly popular. Attack
methods include bus sniffing, altering firmware, and memory dumping to find
crypto keys. Hardware-based keyloggers can be placed between a keyboard and a
computer system. Bank machines can be hacked with inconspicuous magnetic card
readers and microcameras.
• IPv6-based attacks: These attacks are becoming more pervasive as the migration
to IPv6 becomes widespread. Attackers initially focus on covert channels created through various tunneling techniques and on man-in-the-middle attacks that use IPv6 to exploit IPv4 in dual-stack deployments.
Note: Most modern operating systems on client devices have IPv6 enabled by default.
Even though you may not yet be routing IPv6 traffic, it may be flowing in your network.
Having appropriate protection and security mechanisms for IPv6 is therefore always
recommended.
31.1 Implementing Threat Defense Technologies
Introduction
As networks become increasingly interconnected and data flows more freely, it becomes
important to enable networks to provide security services. In the commercial world,
connectivity is no longer optional. Therefore, security services must provide adequate
protection to companies that conduct business in a relatively open environment. Trends in
security threats result in the need for dynamic security intelligence gathering and
distribution, early warning systems, and application layer inspection for mobile services
where data and applications are hosted in the cloud. Enterprise network design principles
must include technologies for threat control and containment, which typically include
using firewalls and intrusion prevention systems (IPSs).
Enterprises also use the internet to connect branch offices, remote employees, and
business partners to their resources. A reliable way to maintain company privacy while
streamlining operations and allowing flexible network administration is to use
cryptographic technologies.
WLANs are widely deployed in Enterprise environments such as corporate offices,
industrial warehouses, internet-ready classrooms, and even canteens. These WLANs
present new challenges for network administrators and information security
administrators alike. Unlike the relative simplicity of wired Ethernet deployments, 802.11-
based WLANs broadcast radio-frequency (RF) data for the client stations to hear. This
presents new and complex security issues.
As a networking engineer, you need to have skills in the security technologies that are
available to protect networks in the modern network security threatscape, such as in the
following areas:
In general, a computer security awareness and training program should encompass the
following seven steps:
1. Identify program scope, goals, and objectives: The scope of the program should
provide training to all of the types of people who interact with IT systems. Because
users need training that relates directly to their use of particular systems, you need
to supplement a large organizationwide program with more system-specific
programs.
2. Identify training staff: It is important that trainers have sufficient knowledge of
computer security issues, principles, and techniques. It is also vital that they know
how to communicate information and ideas effectively.
3. Identify target audiences: Not everyone needs the same degree or type of
computer security information to do their jobs. A computer security awareness
and training program that distinguishes between groups of people and presents
only the information that is needed by that particular audience, omitting irrelevant
information, will obtain the best results.
4. Motivate management and employees: To successfully implement an awareness
and training program, it is important to gain the support of management and
employees. Consider using motivational techniques to show management and
employees how their participation in a computer security and awareness program
benefits the organization.
5. Administer the program: Several important considerations for administering the
program include visibility, selection of appropriate training methods, topics,
materials, and presentation techniques.
6. Maintain the program: The organization should make an effort to keep current
with changes in computer technology and security requirements. A training
program that meets the needs of an organization today might become ineffective
when the organization starts to use a new application or changes its environment.
7. Evaluate the program: An evaluation should attempt to ascertain how much
information is retained, to what extent computer security procedures are being
followed, and general attitudes toward computer security.
There are several types of firewalls, but all firewalls should have these properties:
Where a packet filter controls access on a packet-by-packet basis, stateful firewalls control
access on a session-by-session basis. It is called stateful because the firewall is
remembering the state of the session. By default, a stateful firewall does not allow any
traffic from the outside into the secure inside network, except for reply traffic, because
users from the secure inside network first initiated the traffic to the outside destination.
A firewall can be a hardware appliance, a virtual appliance, or a software that runs on
another device such as a router. Although firewalls can be placed in various locations
within a network (including on endpoints), they are typically placed at least at the internet
edge, where they provide vital security. Firewall threat controls should be implemented at
least at the most exposed and critical parts of enterprise networks. The internet edge is
the network infrastructure that provides connectivity to the internet and acts as the
gateway for the enterprise to the rest of the cyberspace. Because it is a public-facing
network infrastructure, it is particularly exposed to a large array of external threats.
Firewalls are also often used to protect data centers. The data center houses most of the
critical applications and data for an enterprise. The data center is primarily inward facing
and most clients are on the internal network. The intranet data center is still subject to
external threats, but must also be guarded against threat sources inside the network
perimeter.
Many firewalls also provide a suite of additional services such as Network Address
Translation (NAT) and multiple security zones. Another important service that is also
frequently provided by firewalls is Virtual Private Network (VPN) termination.
Note: NAT by itself does not provide security. Due to the stateful nature of NAT, if an
unknown packet arrives from the outside network, it is dropped because the NAT device
does not know to which device it should forward the packet. However, this function
should not be counted as a firewall feature. In addition, as soon as the inside host opens a
session through NAT, anyone can send TCP or UDP packets to the source port used by that
host.
Firewall products have evolved to meet the needs of borderless networks of today. From
simple perimeter security with access control lists (ACLs), based on IP addresses and ports,
firewalls have evolved to offer some advanced security services. The hard outer shell that
firewalls provided in the past is now superseded by security capabilities that are
integrated into the very fiber of the network to defend against multivector and persistent
threats. Because of the current threat landscape, Cisco Secure Firewalls (formerly Cisco
Next Generation Firewalls [NGFWs]) are needed.
In addition to the standard first-generation firewall capabilities, Cisco Secure Firewalls also
have these capabilities:
• Integrate security functions tightly to provide highly effective threat and advanced
malware protection
• Implement policies that are based on application visibility instead of transport
protocols and ports
• Provide URL filtering and other controls over web traffic
• Provide actionable indications of compromise to identify malware activity
• Offer comprehensive network visibility
• Help reduce complexity
• Integrate and interface smoothly with other security solutions
The following tools and techniques can help mitigate DDoS attacks, each with its own limitations and considerations:
• Stateful devices, such as firewalls and IPS systems: Stateful devices do not provide complete coverage and mitigation for DDoS attacks, because they must monitor connection states and maintain a state table. Maintaining such information is central processing unit (CPU) and memory intensive. When bombarded with an influx of traffic, the stateful device spends most, if not all, of its resources tracking states and further connection-oriented details. This effort often causes the stateful device to be the "choke point" or succumb to the attack.
• Route filtering techniques: Remotely triggered black hole (RTBH) filtering can drop
undesirable traffic before it enters a protected network. Network black holes are
places where traffic is forwarded and dropped. When an attack has been detected,
black holing can be used to drop all attack traffic at the network edge, based on
destination or source IP address.
• Unicast Reverse Path Forwarding: Network administrators can use Unicast
Reverse Path Forwarding (uRPF) to help limit malicious traffic flows occurring on a
network, as is often the case with DDoS attacks. This security feature works by
enabling a router to verify the "reachability" of the source address in packets being
forwarded. This capability can limit the appearance of spoofed addresses on a
network. If the source IP address is not valid, the packet is discarded.
• Geographic dispersion (global resources anycast): A newer solution for mitigating
DDoS attacks dilutes attack effects by distributing the footprint of DDoS attacks so
that the targets are not individually saturated by the volume of attack traffic. This
solution uses a routing concept known as anycast. Anycast is a routing
methodology that allows traffic from a source to be routed to various nodes
(representing the same destination address) via the nearest hop or node in a group
of potential transit points. This solution effectively provides "geographic
dispersion."
• Tightening connection limits and timeouts: Antispoofing measures such as limiting
connections and enforcing timeouts in a network environment seek to ensure that
DDoS attacks are not launched or spread from inside the network, intentionally or
unintentionally. Administrators are advised to leverage these solutions to enable
antispoofing and thwart random DDoS attacks on the inside "zones" or internal
network. Such limitations that can be configured on the firewalls are half-opened
connection limits, global TCP SYN-flood limits, and so on.
• Reputation-based blocking: Reputation-based technology provides URL analysis
and establishes a reputation for each URL. Reputation technology has two aspects.
The intelligence aspect couples worldwide threat telemetry, intelligence engineers,
and analytics/modeling. The decision aspect focuses on the trustworthiness of a
URL. Reputation-based blocking limits the impact of untrustworthy URLs.
• Access control lists: ACLs provide a flexible option for mitigating a variety of security threats and exploits, including DDoS. ACLs provide day zero or reactive mitigation for DDoS
attacks, as well as a first-level mitigation for application-level attacks. An ACL is an
ordered set of rules that filter traffic. Each rule specifies a set of conditions that a
packet must satisfy to match the rule. Firewalls, routers, and even switches
support ACLs.
• DDoS run books: The premise behind a DDoS run book is simply to provide a
"playbook" for an organization in the event that a DDoS attack arises. In essence,
the run book provides crisis management (better known as an incident response
plan) in the event of a DDoS attack. The run book provides details about who owns
which aspects of the network environment, which rules or regulations must still be
adhered to, and when to activate certain processes, solutions, and mitigation
plans.
• Manual responses to DDoS attacks: Manual responses to DDoS attacks focus on
measures and solutions that are based on details administrators discover about
the attack. For example, when an attack such as an HTTP GET/POST flood occurs,
given the information known, an organization can create an ACL to filter known bad actors or bad IP addresses and domains (a basic sketch follows this list). When an attack arises, administrators can configure or tune firewalls or load balancers to limit connection attempts.
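As a minimal sketch of this reactive approach on a Cisco IOS router, the following ACL drops traffic from a hypothetical attacking host; the address 203.0.113.5, the ACL number, and the interface name are illustrative, not taken from any particular incident:
! Drop traffic from the identified bad actor and permit everything else
access-list 150 deny ip host 203.0.113.5 any
access-list 150 permit ip any any
! Apply the ACL inbound on the internet-facing interface
interface GigabitEthernet0/0
 ip access-group 150 in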
Note: SHA-256 is one of the hashing algorithms that is used for data integrity.
Since hash algorithms produce a fixed-length output, there are a finite number of possible
outputs. It is therefore possible for two different inputs to produce an identical output; such occurrences are referred to as hash collisions.
Hashing is similar to the calculation of cyclic redundancy check (CRC) checksums, but it is
much stronger cryptographically. CRCs were designed to detect randomly occurring errors
in digital data, while hash algorithms were designed to assure data integrity even when
data modifications are intentional with the objective to pass fraudulent data as authentic.
One primary distinction is the size of the digest produced. CRC checksums are relatively
small, often 32 bits. Commonly used hash algorithms produce digests in the range of 128
to 512 bits in length. It is relatively easier for an attacker to find two inputs with identical
32-bit checksum values than it is to find two inputs with identical digests of 128 to 512 bits
in length.
The following figure illustrates how hashing is performed.
The following figure shows one use of hash algorithms to provide data integrity.
Organizations that offer software for download often publish hash digests on the
download page that can be used to verify data integrity of the downloaded software.
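Cisco IOS devices offer a similar check with the verify command, which computes the digest of a local file so it can be compared against a published value. A minimal sketch, assuming a hypothetical image file name and digest:
Router# verify /md5 flash:ios-image.bin a79e325e6e41c5b5b15e90e5fa6f8d57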
Examples of hash algorithms include:
• Deprecated:
o MD5: Produces a 128-bit hash value that is typically represented as a sequence of 32 hexadecimal digits. MD5 is considered insecure and should be avoided.
o SHA-1: Produces a 160-bit hash value that is typically represented as a sequence of 40 hexadecimal digits. It is a legacy algorithm and is no longer considered adequately secure. Both MD5 and SHA-1 are vulnerable to hash collisions.
• Next-generation (recommended):
o SHA-2: Includes significant changes from its predecessor SHA-1 and is the recommended hash algorithm today. The SHA-2 family consists of multiple hash functions that produce digests of different lengths; longer digests provide stronger security.
o SHA-256: Produces a 256-bit hash value that is typically represented as a sequence of 64 hexadecimal digits.
o SHA-384: Produces a 384-bit hash value that is typically represented as a sequence of 96 hexadecimal digits.
o SHA-512: Produces a 512-bit hash value that is typically represented as a sequence of 128 hexadecimal digits.
Encryption
A cipher is an algorithm for performing encryption and decryption. Ciphers are a series of
well-defined steps that you can follow as a procedure.
Encryption is the process of disguising a message in such a way as to hide its original
contents. With encryption, the plaintext readable message is converted to ciphertext,
which is the unreadable, "disguised" message. Decryption reverses this process.
Encryption is used to guarantee confidentiality so that only authorized entities can read
the original message.
Modern encryption relies on public algorithms that are cryptographically strong using
secret keys. It is much easier to change keys than it is to change algorithms. In fact, most
cryptographic systems dynamically generate new keys over time, limiting the amount of
data that may be compromised with the loss of a single key.
Encryption can provide confidentiality at different network layers, such as the following:
• Encrypt application layer data, such as encrypting email messages with Pretty
Good Privacy (PGP).
• Encrypt session layer data using a protocol such as Secure Sockets Layer (SSL) or
Transport Layer Security (TLS). Both SSL and TLS are considered to be operating at
the session layer and higher in the Open Systems Interconnection (OSI) reference
model.
• Encrypt network layer data using protocols such as those provided in the IP
Security (IPsec) protocol suite.
• Encrypt data link layer using MAC Security (MACsec) (IEEE 802.1AE) or proprietary
link-encrypting devices.
Encryption Algorithm Features
A good cryptographic algorithm is designed in such a way that it resists common
cryptographic attacks. The best way to break data that is protected by the algorithm is to
try to decrypt the data using all possible keys. The amount of time needed by such an
attack depends on the number of possible keys, but the time is generally very long. With
appropriately long keys, such attacks are usually considered unfeasible.
Variable key lengths and scalability are also desirable attributes of a good encryption
algorithm. The longer the encryption key is, the longer it takes an attacker to break it. For
example, a 16-bit key means that there are 65,536 possible keys, but a 56-bit key means
that there are around 72,000,000,000,000,000 possible keys. Scalability provides flexible
key length and allows you to select the strength and speed of encryption that you need.
Changing only a few bits of the plaintext message causes its ciphertext to change
completely, which is known as an avalanche effect. The avalanche effect is a desired
feature of an encryption algorithm, because it allows very similar messages to be sent
over an untrusted medium, with the encrypted (ciphertext) messages being completely
different.
You must carefully consider export and import restrictions when you use encryption
internationally. Some countries do not allow the export of encryption algorithms, or they
allow only the export of those algorithms with shorter keys. Some countries impose
import restrictions on cryptographic algorithms.
Encryption Algorithms and Keys
A key is a required parameter for encryption algorithms. There are two classes of
encryption algorithms, which differ in their use of keys:
• Symmetric encryption algorithm: Uses the same key to encrypt and decrypt data
• Asymmetric encryption algorithm: Uses different keys to encrypt and decrypt data
Symmetric Encryption Algorithms
Symmetric encryption algorithms use the same key for encryption and decryption.
Therefore, the sender and the receiver must share the same secret key before
communicating securely. The security of a symmetric algorithm rests in the secrecy of the
shared key; by obtaining the key, anyone can encrypt and decrypt messages. Symmetric
encryption is often called secret-key encryption. Symmetric encryption is the more
traditional form of cryptography. The typical key-length range of symmetric encryption
algorithms is 40 to 256 bits.
Because symmetric algorithms are usually quite fast, they are often used for wire-speed
encryption in data networks. Symmetric algorithms are based on simple mathematical
operations and can easily be accelerated by hardware.
Key management can be a challenge, because the communicating parties must obtain a
common secret key before any encryption can occur. Therefore, the security of any
cryptographic system depends greatly on the security of the key management methods.
Symmetric algorithms are frequently used for encryption services, with additional key
management algorithms providing secure key exchange. They are used for bulk encryption
when data privacy is required, such as to protect a VPN. Symmetric algorithms are used for most of the data in VPNs because they are much faster and consume less CPU than asymmetric algorithms.
Symmetric encryption is commonly used for bulk data protection, such as encrypting traffic in VPNs and providing wire-speed encryption in data networks.
Asymmetric Encryption Algorithms
Asymmetric algorithms use a pair of keys for encryption and decryption. The paired keys
are intimately related and are generated together. Most commonly, an entity with a key
pair will share one of the keys (the public key) and it will keep the other key in complete
secrecy (the private key). The private key cannot, in any reasonable amount of time, be
calculated from the public key. Data that is encrypted with the private key requires the
public key to decrypt. Vice versa, data that is encrypted with the public key requires the
private key to decrypt. Asymmetric encryption is also known as public key encryption.
The typical key length range for asymmetric algorithms is 512 to 4096 bits. You cannot
directly compare the key length of asymmetric and symmetric algorithms, because the
underlying design of the two algorithm families differs greatly.
Asymmetric algorithms are substantially slower than symmetric algorithms. Their design is
based on computational problems, such as factoring extremely large numbers or
computing discrete logarithms of extremely large numbers. Because they lack speed,
asymmetric algorithms are typically used in low-volume cryptographic mechanisms, such
as digital signatures and key exchange. However, the key management of asymmetric
algorithms tends to be simpler than symmetric algorithms, because usually one of the two
encryption or decryption keys can be made public.
Examples of asymmetric cryptographic algorithms include Rivest, Shamir, and Adleman
(RSA), Digital Signature Algorithm (DSA), ElGamal, and elliptic curve algorithms.
Usually asymmetric algorithms, such as RSA and DSA, are used for digital signatures.
For example, a customer sends transaction instructions via an email to a stockbroker, and
the transaction turns out badly for the customer. It is conceivable that the customer could
claim never to have sent the transaction order or that someone forged the email. The
brokerage could protect itself by requiring the use of digital signatures before accepting
instructions via email.
Handwritten signatures have long been used as a proof of authorship of, or at least
agreement with, the contents of a document. Digital signatures can provide the same
functionality as handwritten signatures, and much more.
The idea of encrypting a file with your private key is a step toward digital signatures.
Anyone who decrypts the file with your public key knows that you were the one who
encrypted it. But, since asymmetric encryption is computationally expensive, this is not
optimal. Digital signatures leave the original data unencrypted, so reading a signed document does not require expensive decryption. Instead, digital signatures use a hash algorithm to produce a much smaller fingerprint of the original data. This
fingerprint is then encrypted with the signer’s private key. The document and the
signature are delivered together. The digital signature is validated by taking the document
and running it through the hash algorithm to produce its fingerprint. The signature is then
decrypted with the sender’s public key. If the decrypted signature and the computed hash
match, then the document is identical to what was originally signed by the signer.
31.8 Implementing Threat Defense Technologies
IPsec Security Services
IPsec VPNs provide security services to traffic traversing a relatively less trustworthy
network between two relatively more trusted systems or networks. The less-trusted
network is usually the public internet, but IPsec VPNs can also be used for purposes such as protecting network management traffic as it crosses an organization's intranet.
IPsec provides these essential security functions:
• Confidentiality through encryption
• Data integrity
• Data origin authentication
• Anti-replay protection
When both authentication and encryption are used, the encryption is performed first.
Authentication is then performed by sending the encrypted information through a hash
algorithm. The hash provides data integrity and data origin authentication. Finally, a new
IPv4 header is prepended to the authenticated payload. The new IPv4 header is used to
route the packet. ESP does not attempt to provide data integrity for this new external IP
header.
Note: AH does provide data integrity for the external IP header. Due to this, AH is not
compatible with NAT performed in the transmission path. NAT changes the IP addresses in
the IP header, causing AH data integrity checks to fail.
Performing encryption before authentication facilitates rapid detection and rejection of
replayed or bogus packets by the receiving device. Before decrypting the packet, the
receiver can authenticate inbound packets. By doing this authentication, it can quickly
detect problems and potentially reduce the impact of DoS attacks. ESP can, optionally,
enforce anti-replay protection by requiring that a receiving host sets the replay bit in the
header to indicate that the packet has been seen.
In modern IPsec VPN implementations, the use of ESP is common. Although both
encryption and authentication are optional in ESP, one of them must be used.
ESP can operate in either the transport mode or tunnel mode:
• ESP transport mode: Does not protect the original packet IP header. Only the
original packet payload is protected—the original packet payload and ESP trailer
are encrypted. An ESP header is inserted between the original IP header and the
protected payload. Transport mode can be negotiated directly between two IP
hosts. ESP transport mode can be used for site-to-site VPN if another technology,
such as Generic Routing Encapsulation (GRE) tunneling, is used to provide the
outer IP header.
• ESP tunnel mode: Protects the entire original IP packet, including its IP header. The
original IP packet (and ESP trailer) is encrypted. An ESP header is prepended to the encrypted original packet, and the result is encapsulated in a new packet with a new IP
header. The new IP header specifies the VPN peers as the source and destination
IP addresses. The IP addresses specified in the original IP packet are not visible.
Note: AH can also be implemented in either the tunnel mode or transport mode. The key
distinction between these modes is what is done with the original IP header. Tunnel mode
provides a new IP header. Transport mode maintains the original IP header.
Confidentiality
Choosing an encryption algorithm is one of the most important decisions that a network
security professional makes when building a cryptosystem.
When choosing an algorithm, two main criteria are considered:
• DES algorithm: DES, developed by IBM, uses a 56-bit key, ensuring high-
performance encryption. DES is a symmetric key cryptosystem.
• 3DES algorithm: The 3DES algorithm is a variant of the 56-bit DES. 3DES operates
in a way that is similar to how DES operates, in that data is broken into 64-bit
blocks. 3DES then processes each block 3 times, each time with an independent
56-bit key. 3DES provides a significant improvement in encryption strength over
56-bit DES. 3DES is a symmetric key cryptosystem.
• AES: The National Institute of Standards and Technology (NIST) adopted AES to
replace the aging DES-based encryption in cryptographic devices. AES provides
stronger security than DES and is computationally more efficient than 3DES. AES
offers three different key lengths: 128-, 192-, and 256-bit keys.
• RSA: RSA is an asymmetrical key cryptosystem. It commonly uses a key length of
1024 bits or larger. IPsec does not use RSA for data encryption. IKE uses RSA
encryption only during the peer authentication phase.
• SEAL: Software-Optimized Encryption Algorithm (SEAL) is a stream cipher that was
developed in 1993 by Phillip Rogaway and Don Coppersmith, and uses a 160-bit
key for encryption.
Note: AES replaced DES and 3DES because AES supports much longer keys than DES. AES is also more efficient and runs faster than DES and 3DES on comparable hardware, usually by a factor of five when compared with DES. Also, AES is more suitable for
high-throughput, low-latency environments, especially if pure software encryption is used.
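As an illustration, on a Cisco IOS router whose software supports these algorithms, an IPsec transform set can pair AES-256 encryption with SHA-256-based integrity. A minimal sketch; the transform-set name is hypothetical:
! IPsec transform set: AES-256 for confidentiality, SHA-256 HMAC for integrity
crypto ipsec transform-set VPN-TS esp-aes 256 esp-sha256-hmac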
Symmetric encryption algorithms such as AES require a common shared-secret key to
perform encryption and decryption. You can use email, courier, or overnight express to
send the shared-secret keys to the administrators of the devices. This method is obviously
impractical and does not guarantee that keys are not intercepted in transit.
Key Management
Public key exchange methods allow shared keys to be dynamically generated between
encrypting and decrypting devices:
• The Diffie-Hellman (DH) key agreement is a public key exchange method. This
method provides a way for two peers to establish a shared secret key, which only
they know, even though they are communicating over an insecure channel.
• Elliptical Curve Diffie-Hellman (ECDH) is a variant of the DH protocol using elliptic
curve cryptography (ECC). It is part of the Suite B standards.
These algorithms are used within IKE to establish session keys. They support different
prime sizes that are identified by different DH or ECDH groups.
DH groups vary in the computational expense that is required for key agreement and the
strength against cryptographic attacks. Larger prime sizes provide stronger security, but
require more computational horsepower to execute:
• DH1: 768-bit
• DH2: 1024-bit
• DH5: 1536-bit
• DH14: 2048-bit
• DH15: 3072-bit
• DH16: 4096-bit
• DH19: 256-bit ECDH
• DH20: 384-bit ECDH
• DH24: 2048-bit ECDH
The following figure illustrates the key exchange process.
Internet Key Exchange
IPsec implements a VPN solution using an encryption process that involves the periodic
changing of encryption keys. IPsec uses the IKE protocol to authenticate a peer computer
and to generate encryption keys. IKE negotiates a security association (SA), which is an
agreement between two peers engaging in an IPsec exchange, and the SA consists of all
the required parameters that are necessary to establish successful communication.
IPsec uses the IKE protocol to provide the following functions:
• Negotiation of SA characteristics
• Automatic key generation
• Automatic key refresh
• Manageable manual configuration
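On Cisco IOS routers, these SA characteristics are defined in an IKE policy. A minimal IKEv1 policy sketch, assuming the router software supports SHA-256 hashing for IKE; the policy number and lifetime values are illustrative:
! IKEv1 (ISAKMP) policy: AES-256 encryption, SHA-256 hashing,
! pre-shared key authentication, and DH group 14 (2048-bit)
crypto isakmp policy 10
 encryption aes 256
 hash sha256
 authentication pre-share
 group 14
 lifetime 86400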
Two versions of the IKE protocol exist: IKE version 1 (IKEv1) and IKE version 2 (IKEv2). IKE supports several methods for authenticating peers:
• Pre-shared keys (PSKs): A secret key value is entered into each peer manually and
is used to authenticate the peer. At each end, the PSK is combined with other
information to form the authentication key.
• RSA signatures: The exchange of digital certificates authenticates the peers. The
local device derives a hash and encrypts it with its private key. The encrypted hash
is attached to the message and is forwarded to the remote end, and it acts like a
signature. At the remote end, the encrypted hash is decrypted using the public key
of the local end. If the decrypted hash matches the recomputed hash, the
signature is genuine.
• RSA encrypted nonces: A nonce is a random number that is generated by the peer.
RSA-encrypted nonces use RSA to encrypt the nonce value and other values. This
method requires that each peer is aware of the public key of the other peer before
negotiation starts. For this reason, public keys must be manually copied to each
peer as part of the configuration process. This method is the least used of the
three authentication methods.
• ECDSA signatures: Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic
curve analog of the DSA signature method. ECDSA signatures are smaller than RSA
signatures of similar cryptographic strength. On many platforms, ECDSA operations
can be computed more quickly than similar-strength RSA operations. These
advantages of signature size, bandwidth, and computational efficiency might make
ECDSA an attractive choice for many IKE and IKEv2 implementations.
• CA: The trusted third party that signs the public keys of entities in a PKI-based
system.
• Certificate: A document, which in essence binds together the name of the entity
and its public key, which has been signed by the CA.
The most widely used application-layer protocol that uses TLS is HTTPS, but other well-
known protocols also use it. Examples are Secure File Transfer Protocol (SFTP), Post Office
Protocol version 3 Secure (POP3S), Secure LDAP, wireless security (Extensible
Authentication Protocol-Transport Layer Security [EAP-TLS]), and other application-layer
protocols. It is important to note that even though TLS contains "transport layer" in its name, both SSL and TLS are considered to be operating at the session
layer and higher in the OSI model. In that sense, these protocols encrypt and authenticate
from the session layer up, including the presentation and application layers.
The SSL and TLS protocols support the use of various cryptographic algorithms, or ciphers,
for use in operations such as authenticating the server and client to each other,
transmitting certificates, and establishing session keys. Symmetric algorithms are used for
bulk encryption; asymmetric algorithms are used for authentication and the exchange of
keys, and hashing is used as part of the authentication process.
The following figure depicts the steps that are taken in the negotiation of a new TLS
connection between a web browser and a web server. The figure illustrates the
cryptographic architecture of SSL and TLS, based on the negotiation process of the
protocol.
Cisco AnyConnect SSL VPN
TLS is not only used for communication on the internet, but is also used for remote-access
VPNs to secure the transit data between the remote workers and internal servers of the
company.
Cisco AnyConnect is a VPN remote-access client providing a secure endpoint access
solution. It delivers enforcement that is context-aware, comprehensive, and seamless. The
Cisco AnyConnect client uses TLS and Datagram TLS (DTLS). DTLS is the preferred protocol,
but if, for some reason, the Cisco AnyConnect client cannot negotiate DTLS, there is a
fallback to TLS.
Note: DTLS is an alternative VPN transport protocol to SSL or TLS. DTLS allows datagram-
based applications to communicate in a way that is designed to prevent eavesdropping,
tampering, or message forgery. The DTLS protocol is based on the stream-oriented TLS
protocol and is intended to provide similar security guarantees.
A basic Cisco AnyConnect SSL VPN provides users with flexible, client-based access to
sensitive resources over a remote-access VPN gateway, which is implemented on the
Cisco ASA. In a basic Cisco AnyConnect remote-access SSL VPN solution, the Cisco ASA
authenticates the user against its local user database, which is based on a username and
password. The client authenticates the Cisco ASA with a certificate-based authentication
method. In other words, the basic Cisco AnyConnect solution uses bidirectional
authentication.
After authentication, the Cisco ASA applies a set of authorization and accounting rules to
the user session. When the Cisco ASA has established an acceptable VPN environment
with the remote user, the remote user can forward IP traffic into the SSL/TLS tunnel. The
Cisco AnyConnect client creates a virtual network interface to provide this functionality.
This virtual adapter requires an IP address, and the most basic method to assign an IP
address to the adapter is to create a local pool of IP addresses on the Cisco ASA. The client
can use any application to access any resource behind the Cisco ASA VPN gateway, subject
to access rules and the split tunneling policy that are applied to the VPN session.
There are two types of tunneling policies for a VPN session:
• Full-tunneling: The traffic generated from the user is fully encrypted and is sent to
the Cisco ASA, where it is routed. This process occurs for all traffic, even when the
users want to access the resources on the internet. It is especially useful to use this
type of tunneling policy when the endpoint is connected to an unsecured public wireless network.
• Split-tunneling: This approach only tunnels the traffic when the users want to
access any internal resources of the organization. The other traffic will utilize the
client’s own internet connection for connectivity.
• Something that you know (such as a password): This type of authentication is the
most common for users. Unfortunately, something that users know can easily
become something that they forget. And if users write down the information to
help them remember it, other people might find it.
• Something that you have (such as a smart card): This method offers no risk of
forgetting information, but users must have physical possession of the item or they
cannot be authenticated. This object can be lost or stolen, after which it might be
used by an attacker.
• Something that you are (such as a fingerprint): This method is based on
something that is specific to the person who is being authenticated. Unfortunately,
biometric sensors imply physical contact, which is not a reasonable user-
authentication method in wireless networks. (However, this technique can be used
to authenticate devices.)
These authentication methods apply to human or user-based authentication. In secured
environments, authenticating the devices that are used to access the network is also
common. A device can be authenticated by using a signature that is based on its specific
hardware characteristics.
The limitation of device authentication is that it does not authenticate the person who
makes the connection. The same authentication process occurs whether the device is
being used by a valid user or by an attacker. For this reason, storing personal passwords
on laptop or desktop computers is considered dangerous, although many systems allow
this possibility. Unless authentication requires the user to enter information, the device, not the user, is being authenticated.
Encryption
In wireless networks, privacy means that although an eavesdropper might receive the
wireless signal, this signal cannot be read and understood. Keeping the data private is the
role of encryption.
Several methods of encryption can be used:
• WEP: Wired Equivalent Privacy (WEP) uses a shared key (both sides have the key)
and was a very weak form of encryption (no longer used).
• TKIP: Temporal Key Integrity Protocol (TKIP) applies a suite of algorithms around WEP and was used by Wi-Fi Protected Access (WPA) to enhance WEP's security (no longer used).
• AES: AES allows for longer keys and is used for most WLAN security.
Key Management
Whether keys are used to authenticate users or to encrypt data, they are the secret values
upon which wireless networks rely.
Common Keys
A key can be common to several users. For wireless networks, the key can be stored on
the access point (AP) and shared among users who are allowed to access the network via
this AP.
A common key can be used in three ways:
• For authentication only: Limits access to the network to only users who have the
key, but their subsequent communication is sent unencrypted.
• For encryption only: Any user can associate to the WLAN, but only users who have
a valid key can send and receive traffic to and from other users of the AP.
• For authentication and encryption: The key is used for both authentication and
encryption.
The advantage of this common key system is its simplicity. Everyone uses the same
algorithm and the same key. The risk is that anyone who has the key can capture and read
the frames of any other user from the same AP. A hacker needs only to compromise one
device to be able to read the traffic of all the clients in the cell.
Individual Keys
To provide more security, an individual key can be defined for each user. This approach
can be accomplished in two ways:
• The key is individual from the beginning. This method implies that the
infrastructure must store and manage individual user keys, typically by using a
central authentication server.
• The key is common at first, but it is used to create a second key that is unique to
each user. This system has many advantages. A single key is stored on the AP, and
then individual keys are created in real time and are valid only during the user
session.
Security Standards
When WEP was found to be weak and easily breakable, both the IEEE 802.11 committee
and the Wi-Fi Alliance worked to replace it. Two generations of solutions emerged: WPA
and IEEE 802.11i Wi-Fi Protected Access 2 (WPA2). These solutions offer an authentication
and encryption framework.
Currently, multiple wireless security standards exist:
• WPA3-Personal
• WPA3-Enterprise
• Open Networks
• IoT secure onboarding
WPA3 will be backward compatible with WPA2, meaning your WPA3 devices will be able
to run WPA2. However, it is expected that it will take a few years for vendors to fully
transition to WPA3-only modes, therefore WPA2 transmission capabilities may still be in
use in the near future. Cisco has been instrumental in the development of WPA3, and as
this transition starts to happen, Cisco will roll out WPA3 support in the wireless LAN
controllers (WLCs) and APs, allowing early adopters to start enjoying this added level of
security.
WPA3-Personal
WPA2-Personal uses passwords, called PSKs. Attackers can eavesdrop on a valid WPA2 initial "handshake" and attempt to use brute force to deduce the PSK. With the PSK, the attacker can connect to the network and also decrypt previously captured traffic. The
likelihood of succeeding in such an attack depends on the password complexity: dictionary
words or other simple passwords are vulnerable.
WPA3-Personal utilizes Simultaneous Authentication of Equals (SAE), defined in the IEEE
802.11-2016 standard. With SAE, the experience for the user is unchanged (create a
password and use it for WPA3-Personal). However, WPA3 adds a step to the "handshake"
that makes brute force attacks ineffective. The passphrase is never exposed, making it
impossible for an attacker to find the passphrase through brute force dictionary attacks.
WPA3 also makes management frames more robust with the mandatory addition of
Protected Management Frames (PMF) that adds an extra layer of protection from
deauthentication and disassociation attacks.
WPA3-Enterprise
Enterprise Wi-Fi commonly uses individual user authentication through 802.1X/EAP.
Within such networks, PMF is also mandatory with WPA3. WPA3 also introduces a 192-bit
cryptographic security suite. This level of security provides consistent cryptography and
eliminates the "mixing and matching of security protocols" that are defined in the 802.11
Standards. This security suite is aligned with the recommendations from the Commercial
National Security Algorithm (CNSA) Suite, commonly in place in high security Wi-Fi
networks in government, defense, finance, and industrial verticals.
Open Networks
In public spaces, Wi-Fi networks are often unprotected, with no encryption and no
authentication, or simply a web-based onboarding page. As a result, Wi-Fi traffic is visible
to any eavesdropper. The upgrade to WPA3 Open Networks includes an extra mechanism
for public Wi-Fi, Opportunistic Wireless Encryption (OWE). With this mechanism, the end
user onboarding experience is unchanged and the Wi-Fi communication is automatically
encrypted.
IoT Secure Onboarding—DPP
Device Provisioning Protocol (DPP) is used for provisioning of IoT devices, making
onboarding of such devices easier. DPP allows an IoT device to be provisioned with the
Service Set Identifier (SSID) name and secure credentials through an out-of-band
connection. DPP is based on quick response (QR) codes and, in the future, Bluetooth, near field communication (NFC), or other connections.
32.1 Securing Administrative Access
Introduction
Securing the network infrastructure requires securing the management access to these
infrastructure devices. If infrastructure device access is compromised, the security and
management of the entire network can be compromised. Consequently, it is critical to
establish the appropriate controls to prevent unauthorized access to infrastructure
devices.
Network infrastructure devices often provide a range of different access mechanisms,
including console and asynchronous connections, as well as remote access based on
protocols such as Telnet, HTTP, and Secure Shell (SSH). Some mechanisms are typically
enabled by default with minimal security associated with them. For example, Cisco IOS
Software-based platforms are shipped with console and modem access that is enabled by
default. For this reason, each infrastructure device should be carefully reviewed and
configured to ensure that only supported access mechanisms are enabled and that they
are properly secured.
As a networking engineer, you will need to be able to secure administrative access to the
networking devices in enterprise environments, which will include important tasks such as
the following:
• Guessing: The attacker enters the passwords either manually or using a tool to
automate the process.
• Brute force: Computer programs called "password crackers" perform the attack by
systematically entering all possible passwords until one succeeds.
• Dictionary attacks: This method is similar to brute force, but it uses word lists containing millions of words, rather than random character sequences, to generate candidate passwords.
A password attack can be either an online attack or an offline attack. In an online attack,
an attacker makes repeated attempts to log in. The activity is visible to the authentication
system, so the system can automatically lock the account after too many bad guesses.
Account lockout disables the account and makes it unavailable for further attacks during
the lockout period. The lockout period and the number of allowed login attempts are
configurable by a system administrator.
Offline attacks are far more dangerous. In an online attack, the password has the
protection of the system in which it is stored, but there is no such protection in offline
attacks. In an offline attack, the attacker captures the password hash or the encrypted
form of the password. The attacker can then make countless attempts to crack the
password without being noticed.
Longer and more complex passwords take attackers more time to crack.
Specifying a minimum length of a password and forcing an enlarged character set (upper
case, lower case, numeric, and special characters) can have an enormous influence on the
feasibility of brute force attacks. However, if users attempt to meet the enlarged
character set requirements by making simple adjustments, such as capitalizing the first
letter and appending a number and an exclamation point (changing, for example, unicorn
to Unicorn1!), little is gained against a dictionary attack using some simple transforms.
Besides password creation, a password policy also covers password management, which
includes storage, protection, and password changes. In an enterprise environment,
password management should follow these guidelines:
Note: The passwords shown in the example are for instructional purposes only. Passwords
that are used in an actual implementation should meet the requirements of strong
passwords.
The enable password and enable secret global configuration commands restrict access to the privileged
EXEC mode.
Note: A configured enable secret always takes precedence over a configured enable
password. It is recommended to use the enable secret command instead of the enable
password command.
The enable secret command on older devices uses Message Digest 5 (MD5) hashing by default. The number 5 shown in the configuration indicates that an MD5-type hash was used to protect the password.
Note: MD5 has been deprecated due to the existence of predictable collisions. Latest
implementations use Secure Hashing Algorithm 2 (SHA-2) and its derived successors.
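For example, a hashed secret for privileged EXEC access could be configured as follows; the secret value shown is purely illustrative:
! Protect privileged EXEC mode with a hashed secret
enable secret C1sco-En@ble-99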
Encrypt plaintext passwords.
You can also add a further layer of security to any plaintext passwords in your
configuration, which is particularly useful when the configuration is viewed, or when it is
stored elsewhere, such as on a TFTP server. To enable encryption when plaintext
passwords are viewed, enter the service password-encryption command in the global
configuration mode. Passwords that are already configured, or set after you configure the
service password-encryption command, will no longer appear in plaintext when you view
the configuration. However, note that service password encryption uses type-7
obfuscation, which is not very secure. There are several tools and web pages available that
convert a type-7 protected password into a plaintext string.
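A minimal sketch of enabling this obfuscation in global configuration mode:
! Obfuscate plaintext passwords (type 7) in the running configuration
service password-encryption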
Configure the secret password in the username command and require it for access to the
console.
The secret is entered in plaintext and by default is encrypted with the SHA256 algorithm,
which is indicated by the number 4 before the ciphertext when displaying the username
command configuration.
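A minimal sketch, assuming a hypothetical local user named admin (the secret value is illustrative):
! Local user with a hashed secret
username admin secret Adm1n-S3cr3t
! Require the local username and secret on the console line
line console 0
 login local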
Note: Remember to always configure a password when using the login command or a
username and password when using the login local command, before closing the console
session. Entering only the login or login local command in the configuration of the console
line will result in the console terminal being inaccessible if other methods of accessing the
device were not configured.
EXEC timeout configuration
The exec-timeout minutes [seconds] command prevents users from remaining connected
to a line when the line is idle. In the example, when no user input is detected on the
console for 5 minutes, the user that is connected to the console port is automatically
disconnected. Using the exec-timeout 0 0 command disables the timeout. This should not
be used in a production environment because it is not a secure practice.
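A sketch of the console idle-timeout configuration described above:
! Disconnect an idle console session after 5 minutes and 0 seconds
line console 0
 exec-timeout 5 0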
EXEC timeout:
The line vty 0 15 command, followed by the login and password subcommands, requires
login and establishes a login password on incoming Telnet sessions.
The exec-timeout command prevents users from remaining connected to a vty line when
the line is idle. In the example, when no user input is detected on a vty line for 5 minutes,
the vty session is automatically disconnected.
You can use the login local command to require a username and password as vty line
credentials, the same as you can for the console line. The username and password or
secret password are specified with the username global configuration command.
To configure the secret password in the username command and require it for access to
the vty lines:
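A minimal sketch, reusing the hypothetical local user admin defined earlier and an illustrative idle timeout:
! Require local username/secret authentication on all vty lines
username admin secret Adm1n-S3cr3t
line vty 0 15
 login local
 exec-timeout 5 0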
However, devices such as switches and routers are usually accessed via SSH, which needs
to be configured on the device first.
SSH configuration:
To configure SSH on a Cisco switch or router, you need to complete these steps:
1. Use the hostname command to configure the hostname of the device so that it is
not Switch (on a Cisco switch) or Router (on a Cisco router).
2. Configure the Domain Name System (DNS) domain with the ip domain-name
command. The domain name is required to be able to generate certificate keys.
3. Generate RSA keys that will be used for authentication. Use the crypto key
generate rsa command; you will also need to configure the modulus that defines
the key length.
4. Configure the user credentials that the user will use for authentication, using the
username username secret password command.
5. Specify the login local command for vty lines, so that it will use locally defined
credentials for authentication.
6. By default, Telnet is allowed. To limit access to a device to users that use SSH and block Telnet, use the transport input ssh command in line configuration mode. If you want to support login banners and enhanced security encryption algorithms, force SSH version 2 (SSHv2) on your device with the ip ssh version 2 command in global configuration mode. A consolidated configuration sketch follows this list.
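A minimal sketch of the full sequence on a hypothetical switch; the hostname, domain name, username, secret, and key size are illustrative:
! Steps 1-2: unique hostname and DNS domain (needed to generate the RSA keys)
hostname SW-Access-1
ip domain-name example.com
! Step 3: generate the RSA key pair used by SSH
crypto key generate rsa modulus 2048
! Step 4: local user credentials
username admin secret Adm1n-S3cr3t
! Step 6: force SSHv2
ip ssh version 2
! Steps 5-6: local login on the vty lines and SSH-only access
line vty 0 15
 login local
 transport input ssh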
Note: RSA is one of the most common asymmetric algorithms with variable key length,
usually from 1024 to 4096 bits. Smaller keys require less computational overhead to use, but larger keys provide stronger security. The RSA algorithm is based on the fact that each
entity has two keys, a public key and a private key. The public key can be published and
given away, but the private key must be kept secret and one cannot be determined from
the other. What one of the keys encrypts, the other key decrypts, and vice versa. SSH uses
the RSA algorithm to securely exchange the symmetric keys used during the session for
the bulk data encryption in real time.
To display the version and configuration data for SSH on the device that you configured as
an SSH server, use the show ip ssh command. In the example, SSHv2 is enabled.
To check the SSH connection to the device, use the show ssh command.
Verify that SSH is enabled.
SSH and Telnet provide remote console access, but unlike Telnet, SSH is designed to
provide privacy, data integrity, and origin authentication. SSH version 1 (SSHv1)
introduced better cryptographic security features compared with Telnet. After its
introduction, a vulnerability was found in the implementation of SSHv1. Therefore, a
second version with additional security features, SSHv2, was introduced and adopted.
SSHv1 is legacy and obsolete. The key exchange methodology that is used by SSHv2 is
more complex, using Diffie-Hellman. The connection process of SSHv1 is presented here for simplicity.
SSHv1 uses asymmetric encryption to facilitate symmetric key exchange. Computationally
expensive asymmetric encryption is only required for a small step in the negotiation
process. After key exchange, a much more computationally efficient symmetric encryption
is used for bulk data encryption between the client and server.
The connection process used by SSHv1 is as follows:
• The client connects to the server and the server presents the client with its public
key.
• The client and server negotiate the security transforms. The two sides agree to a
mutually supported symmetric encryption algorithm. This negotiation occurs in the
clear. A party that intercepts the communication will be aware of the encryption
algorithm that is agreed upon.
• The client constructs a session key of the appropriate length to support the
agreed-upon encryption algorithm. The client encrypts the session key with the
server public key. Only the server has the appropriate private key that can decrypt
the session key.
• The client sends the encrypted session key to the server. The server decrypts the
session key using its private key. At this point, both the client and the server have
the shared session key. That key is not available to any other system. From this
point on, the session between the client and server is encrypted using a symmetric
encryption algorithm.
• With privacy in place, user authentication ensues. The user’s credentials and all
other data are protected.
Not only does the use of asymmetric encryption facilitate symmetric key exchange, it also
facilitates peer authentication. If the client already knows the server's public key, it can recognize a connection to a nonauthentic system, because that system would present a different public key. The nonauthentic system cannot present the real server's public key because it does not have the corresponding private key. While the ability to
provide peer authentication is certainly a step in the right direction, the responsibility is
generally put on the user to have prior knowledge of the server’s public key. When the
SSH client software connects to a new server for the first time, it will generally display the
server’s public key (or a hash of the server’s public key) to the user. The client software
will only continue if the user authorizes the server’s public key.
To configure a login banner, use the banner login command in global configuration mode.
Enclose the banner text in quotation marks or use a delimiter that is different from any
character appearing in the banner string.
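For example, assuming the # character does not appear anywhere in the banner text, a login banner could be configured as follows (the text is a placeholder):
Router(config)# banner login #
Authorized access only. Disconnect immediately if you are not an authorized user.
#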
Note: Use caution when you create the text that is used in the login banner. Words such
as "welcome" may imply that access is not restricted and may allow hackers some legal
defense of their actions.
To define and enable a message-of-the-day (MOTD) banner, use the banner motd
command in global configuration mode.
This MOTD banner is displayed to all terminals that are connected and is useful for
sending messages that affect all users (such as impending system shutdowns).
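For example, again using # as the delimiting character, an MOTD banner announcing a maintenance window might look like this (the text is a placeholder):
Router(config)# banner motd #
This system will be unavailable for scheduled maintenance on Saturday at 22:00 UTC.
#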
• Authentication: This service identifies users, including login and password dialog,
challenge and response, messaging support, and encryption, depending on the
security protocol that you select.
• Authorization: This service provides access control by assembling a set of
attributes that describe what the user is authorized to perform.
• Accounting: This service provides the method for collecting information, logging
the information locally, and sending the information to the AAA server for billing,
auditing, and reporting.
To better understand the three services, imagine attending an invitation-only event.
Authentication can be compared to being stopped at the office lobby by a security guard.
After you provide a driver’s license to validate that you are on the guest list, you are given
an access badge. Authorization relates to which doors the access badge opens in the
building. Your access is restricted by the badge policy. Accounting is the system that tracks
your movements through the building and records which doors you accessed with your
badge and whether access was permitted or denied.
Here are the two most popular options for external AAA:
• TACACS+: A Cisco-developed AAA protocol that separates authentication, authorization, and accounting and encrypts the entire payload of the packet.
• RADIUS: An open-standard AAA protocol that combines authentication and authorization and encrypts only the password.
IEEE 802.1X
The access layer is the point at which user devices connect to the network. This layer,
therefore, is the connection point between the network and any client device. So,
protecting the access layer is important for protecting other users, applications, and the
network itself from human errors and malicious attacks. Network access control at the
access layer can be managed by using the IEEE 802.1X protocol to secure the physical
ports where end users connect. A network where each user is verified before they access
it is called an identity-based network.
Identity-based networking allows you to verify users when they connect to a switch port.
Identity-based networking authenticates users and places them in the right VLAN, based
on their identity. Should any users fail to pass the authentication process, their access can
be rejected, or they can simply be placed in a guest VLAN.
The IEEE 802.1X standard allows you to implement identity-based networking based on a
client-server access control model. The following three roles are defined by the standard:
• Supplicant: The client device (or the 802.1X software running on it) that requests access to the network.
• Authenticator: The network device, such as a switch or wireless AP, that controls access to the port and relays credentials between the supplicant and the authentication server.
• Authentication server: The server, typically a RADIUS server, that validates the supplicant credentials and instructs the authenticator whether to grant access.
As a networking engineer, you will need to be able to implement proper device hardening
mechanisms, which might include practices such as the following:
• The internet IPv4 address block that the company is using for the infrastructure is
209.165.200.224/27.
• The interface Gi 0/0 on router R1 is configured with the IPv4 address
209.165.201.1/30. This address is used to establish BGP session with the ISP 1
router, which uses the IPv4 address 209.165.201.2/30 on its interface Gi 0/0.
• The interface Gi 0/0 on router R2 is configured with the IPv4 address
209.165.201.5/30. This address is used to establish BGP session with the ISP 2
router, which uses the IPv4 address 209.165.201.6/30 on its interface Gi 0/0.
Since many attacks rely on flooding routers with fragmented packets, filtering incoming
fragments to the infrastructure provides an added measure of protection and helps
ensure that an attack cannot inject fragments by simply matching Layer 3 rules in the iACL.
ACLs can use the fragments keyword that enables specialized fragmented packet-handling
behavior. Without this fragments keyword, noninitial fragments that match the Layer 3
statements (irrespective of the Layer 4 information) in an ACL are affected by the permit
or deny statement of the matched entry. However, by adding the fragments keyword, you
can force ACLs to either deny or permit noninitial fragments with more granularity.
Filtering fragments can be added to the example as an additional layer of protection
against a denial of service (DoS) attack that uses noninitial fragments (that is, fragment
offset > 0). Using a deny statement for noninitial fragments at the beginning of the iACL
denies all noninitial fragments from accessing the router. Under rare circumstances, a
valid session might require fragmentation, and will be filtered if a deny fragment
statement exists in the ACL.
To deny any noninitial fragments, while nonfragmented packets or initial fragments are
able to pass to the next lines of the ACL, use the following entries at the beginning of an
iACL:
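Using the infrastructure block from the scenario above (209.165.200.224/27) and an assumed ACL name, these entries might look like the following sketch:
Router(config)# ip access-list extended INFRASTRUCTURE-ACL
Router(config-ext-nacl)# deny tcp any 209.165.200.224 0.0.0.31 fragments
Router(config-ext-nacl)# deny udp any 209.165.200.224 0.0.0.31 fragments
Router(config-ext-nacl)# deny icmp any 209.165.200.224 0.0.0.31 fragments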
These separate entries in the iACL facilitate classification of the attack, since each
protocol, TCP, UDP, and Internet Control Message Protocol (ICMP), increments separate
counters in the ACL.
As previously mentioned, an iACL built without the proper understanding of the protocols
and devices involved may end up being ineffective and may even cause a DoS attack
instead of preventing it. Therefore, you should have a clear understanding of the
legitimate traffic required by your infrastructure before deploying an iACL. Also, you
should use a conservative methodology for deploying iACLs, leveraging iterative iACL
configurations that can help you identify and incrementally filter unwanted traffic.
The following example illustrates the iACL applied inbound on interface Gi 0/0 on router
R1, which provides antispoof filters, permits external BGP peering to the external peer,
and protects the infrastructure from all external access. R2 uses a similar iACL.
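A hedged reconstruction of such an iACL, based on the addressing described above, might look like the following (the ACL name and the exact set of permitted services are assumptions):
R1(config)# ip access-list extended INFRASTRUCTURE-ACL
R1(config-ext-nacl)# remark Deny noninitial fragments destined to the infrastructure
R1(config-ext-nacl)# deny tcp any 209.165.200.224 0.0.0.31 fragments
R1(config-ext-nacl)# deny udp any 209.165.200.224 0.0.0.31 fragments
R1(config-ext-nacl)# deny icmp any 209.165.200.224 0.0.0.31 fragments
R1(config-ext-nacl)# remark Antispoofing: deny external packets claiming an internal source
R1(config-ext-nacl)# deny ip 209.165.200.224 0.0.0.31 any
R1(config-ext-nacl)# remark Permit the eBGP peering with the ISP 1 router
R1(config-ext-nacl)# permit tcp host 209.165.201.2 host 209.165.201.1 eq bgp
R1(config-ext-nacl)# permit tcp host 209.165.201.2 eq bgp host 209.165.201.1
R1(config-ext-nacl)# remark Deny all other traffic destined to the infrastructure addresses
R1(config-ext-nacl)# deny ip any 209.165.200.224 0.0.0.31
R1(config-ext-nacl)# deny ip any 209.165.201.0 0.0.0.3
R1(config-ext-nacl)# remark Permit transit traffic to noninfrastructure destinations
R1(config-ext-nacl)# permit ip any any
R1(config-ext-nacl)# exit
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip access-group INFRASTRUCTURE-ACL in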
Note: The iACL shown above permits the flow of transit traffic to noninfrastructure
destinations, expecting other devices to filter the internet traffic based on the security
policies in the company regarding its internet services, such as web and email servers, and
others. iACLs are designed to secure the infrastructure and do not provide protection from
attacks against targets other than the infrastructure itself.
The network troubleshooting tools ping and traceroute use ICMP, which you can also
filter in the iACLs. Therefore, you can permit ICMP messages by name or type and code, to
allow traffic from trusted management stations to the infrastructure devices while
blocking all other ICMP packets to these devices.
In the example, services that are enabled on the router are SSH, Telnet, TACACS, and
DHCP.
Note: As an alternative, Cisco IOS Software provides the AutoSecure function that helps
disable these unnecessary services while enabling other security services.
Along with services at a higher level of the TCP/IP stack, lower-layer services should also
be considered.
Cisco Discovery Protocol can be useful for network troubleshooting. Cisco Discovery
Protocol is enabled by default in Cisco IOS Software Release 15.0 and later. Some network
management software takes advantage of Cisco Discovery Protocol neighbor data to map
out topological connectivity. Cisco VoIP deployments can take advantage of Cisco
Discovery Protocol to automatically assign the voice VLAN to Cisco IP phones. On the
other hand, Cisco Discovery Protocol provides an easy reconnaissance vector to any
attacker with an Ethernet connection. For example, when a switch sends a Cisco Discovery
Protocol announcement out of a port where a workstation is connected, the workstation
normally ignores it. However, with a simple tool such as Wireshark, an attacker can
capture and analyze the Cisco Discovery Protocol announcement. Included in the Cisco
Discovery Protocol data is the model number and operating system version of the switch.
An attacker can then use this information to look up published vulnerabilities that are
associated with that operating system version and potentially follow up with an exploit of
the vulnerability. The organization must decide whether the convenience that Cisco
Discovery Protocol brings is greater than the security risk that comes with Cisco Discovery
Protocol.
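If the risk outweighs the convenience, Cisco Discovery Protocol can be disabled globally or only on user-facing interfaces, as in this minimal sketch (the interface is a placeholder):
Switch(config)# no cdp run
or, per interface:
Switch(config)# interface GigabitEthernet0/10
Switch(config-if)# no cdp enable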
Here are some general best practices:
• Static secure MAC addresses: Specific MAC addresses that are manually
configured on a port. MAC addresses configured in this way are stored in the
secure MAC address table and are added to the running configuration on the
switch.
• Dynamic secure MAC addresses: MAC addresses that are dynamically learned
from devices that connect to the port and are not specified manually. The
maximum number of such addresses accepted on a port is configured. This port
security configuration is used when you care only about how many MAC addresses
are permitted to use the port, rather than which MAC addresses are permitted.
Dynamically learned MAC addresses that are not secure are stored in the MAC
address table until they age out. However, dynamic secure MAC addresses do not
age out by default. Instead, they are removed when the switch restarts or the port
goes down. Dynamic secure MAC addresses are not stored in the running
configuration.
• Sticky secure MAC addresses: MAC addresses that are dynamically learned and
then stored in the address table and added to the running configuration. In other
words, sticky secure MAC addresses are learned dynamically and automatically
added to the configuration. If you save the running configuration to the startup
configuration, then the sticky secure MAC addresses are saved to the startup
configuration file, and then, when the switch restarts, the interface does not need
to relearn the addresses. If the sticky secure addresses are not saved, they will be
lost.
When a frame arrives on a port for which port security is configured, its source MAC
address is checked against the secure MAC address table. If the source MAC address
matches an entry in the table for this port, the device forwards the frame to be processed.
Otherwise, the device does not forward the frame.
In the example in the figure below, traffic from Attacker 1 and Attacker 2 will be dropped
at the switch because the source MAC addresses of these frames do not match MAC
addresses in the list of secured (allowed) addresses.
A port security violation occurs in any of the following situations:
• When ingress traffic from a MAC address that is different than the allowed MAC
addresses arrives at an interface.
• When ingress traffic from a MAC address that is different than the allowed MAC
addresses tries to connect when the maximum number of allowed MAC addresses
on the port is already reached.
• When ingress traffic from a secure MAC address arrives at a different interface in
the same VLAN as the interface on which the address is secured.
Note: After a secure MAC address is configured or learned on one secure port, the
sequence of events that occurs when port security detects that secure MAC address on a
different port in the same VLAN is known as a MAC move violation.
As an administrator, you can configure how a switch reacts when a security violation
occurs by specifying the violation mode of the port.
One of these actions is taken, based on the configured violation mode:
• Protect: The port drops frames with unknown source MAC addresses; no notification is generated and the violation counter does not increment.
• Restrict: The port drops frames with unknown source MAC addresses, a syslog message and SNMP trap are generated, and the violation counter increments.
• Shutdown: The port is placed in the error-disabled state, a syslog message and SNMP trap are generated, and the violation counter increments. This is the default violation mode.
To configure port security on an interface, complete the following steps:
• Change the switchport mode from the default Dynamic Trunking Protocol (DTP)
dynamic auto mode to either access or trunk. You can configure port security only
on static access ports or trunk ports. When an interface is in the default mode, it
cannot be configured as a secure port.
o In interface configuration mode, use the switchport mode { access | trunk
} command to set the mode to either access or trunk, or use the switchport
nonegotiate command to disable DTP.
o Use the switchport port-security interface command without keywords to
enable port security on an interface.
For all other parameters, such as a secure MAC address, a maximum number of secure
MAC addresses, or the violation mode, you use the switchport port-security interface
command with keywords. Use the no form of this command to disable port security or to
set the parameters to their default states.
• Optionally, set the maximum number of secure MAC addresses for the interface.
The range depends on the switch platform. The default value is 1.
o Set the maximum number of secure MAC addresses using the switchport
port-security maximum value command.
• Optionally, specify the allowed MAC addresses or sticky learning.
When defining static entries, you must specify each MAC address that is allowed on an interface. You can enter up to as many secure MAC addresses as the maximum number
of MAC addresses you defined. If you configure fewer secure MAC addresses than the
maximum, the remaining MAC addresses are dynamically learned.
• To set the violation mode, use the switchport port-security violation { protect |
restrict | shutdown } command.
Optionally, set aging parameters for dynamically learned addresses. Use aging
parameters to remove and add devices on a secure port without manually deleting the
existing secure MAC addresses. Here are the supported aging types:
Absolute: The secure addresses on the port are deleted after the specified aging time.
Absolute aging is the default type if aging is enabled.
Inactivity: The secure addresses on the port are deleted only if the secure addresses
are inactive for a specified aging time. Aging time is specified in minutes.
• To set the aging type, use the switchport port-security aging type {absolute |
inactivity} command.
• To set the aging time, use switchport port-security aging time minutes command.
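Combining these steps, a hedged configuration sketch might look like the following (the interface, MAC address, and parameter values are placeholders):
SwitchX(config)# interface GigabitEthernet0/1
SwitchX(config-if)# switchport mode access
SwitchX(config-if)# switchport port-security
SwitchX(config-if)# switchport port-security maximum 3
SwitchX(config-if)# switchport port-security mac-address 0000.1111.2222
SwitchX(config-if)# switchport port-security violation restrict
SwitchX(config-if)# switchport port-security aging type inactivity
SwitchX(config-if)# switchport port-security aging time 10
In this sketch, one secure MAC address is defined statically, the remaining two are learned dynamically, and dynamically learned addresses age out after 10 minutes of inactivity.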
When the port security violation mode is set to shutdown, the port with the security
violation goes to the error-disabled state and you receive syslog notification on the device:
Sep 20 12:44:54.966: %PM-4-ERR_DISABLE: psecure-violation error detected on Fa0/5,
putting Fa0/5 in err-disable state
Sep 20 12:44:54.966: %PORT_SECURITY-2-PSECURE_VIOLATION: Security violation
occurred, caused by MAC address 000c.292b.4c75 on port FastEthernet0/5.
Sep 20 12:44:55.973: %LINEPROTO-5-UPDOWN: Line protocol on Interface
FastEthernet0/5, changed state to down
Sep 20 12:44:56.971: %LINK-3-UPDOWN: Interface FastEthernet0/5, changed state to
down
To make the interface operational again, you need to disable the interface
administratively and then enable it again, as shown here:
SwitchX(config)# interface FastEthernet 0/5
SwitchX(config-if)# shutdown
Sep 20 12:57:28.532: %LINK-5-CHANGED: Interface FastEthernet0/5,changed state to
administratively down
SwitchX(config-if)# no shutdown
Sep 20 12:57:48.186: %LINK-3-UPDOWN: Interface FastEthernet0/5, changed state to up
Sep 20 12:57:49.193: %LINEPROTO-5-UPDOWN: Line protocol on Interface
FastEthernet0/5, changed state to up
The example below shows a typical port security configuration for a voice port. Two MAC
addresses are allowed and they are learned dynamically. One MAC address is for the IP
phone and the other MAC address is for the PC connected to the IP phone. Violations of
this policy result in the port being shut down. Aging timeout for the learned MAC
addresses is set to 2 hours.
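A configuration matching this description might look like the following sketch (the interface and voice VLAN are assumed values):
SwitchX(config)# interface FastEthernet0/5
SwitchX(config-if)# switchport mode access
SwitchX(config-if)# switchport voice vlan 150
SwitchX(config-if)# switchport port-security
SwitchX(config-if)# switchport port-security maximum 2
SwitchX(config-if)# switchport port-security violation shutdown
SwitchX(config-if)# switchport port-security aging time 120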
It is important to note that this attack is unidirectional and works only when the attacker’s
VLAN and trunk port native VLAN are the same. Stopping this type of attack is not as easy
as stopping basic VLAN hopping attacks. The best approach is to create a VLAN to use as
the native VLAN on all trunk ports and to avoid using that VLAN on any access port.
To prevent a VLAN hopping attack that uses double 802.1Q encapsulation, the switch
must look further into the packet to determine whether more than one VLAN tag is
attached to a given frame. Unfortunately, the application-specific integrated circuits
(ASICs) that many switches use are hardware-optimized to look for only one tag before switching the frame.
The double-tagging VLAN hop attack requires that the attacker is on the native VLAN of
the outbound trunk port. This attack can be mitigated by ensuring that no systems attach
to the native VLAN used by trunks. Specify a unique native VLAN for use on all trunk ports
and do not use that VLAN anywhere else on the switch.
In summary, you have the following two options to control trunking port behavior:
• For links that you do not intend to trunk across, use the switchport mode access
interface configuration command to disable trunking. This command configures
the port as an access port.
• For links that you do intend to trunk across, take the following actions:
o Use the switchport mode trunk interface configuration command to cause
the interface to become a trunk link and use the switchport nonegotiate
interface configuration command to prevent the generation of DTP frames.
o Use the switchport trunk native vlan vlan_number interface configuration
command to set the native VLAN on the trunk to an unused VLAN. The
default native VLAN is VLAN 1.
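As a hedged illustration of both options (the interface numbers and the native VLAN ID are placeholders):
Switch(config)# interface GigabitEthernet0/2
Switch(config-if)# switchport mode access
Switch(config-if)# exit
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport nonegotiate
Switch(config-if)# switchport trunk native vlan 999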
A DHCP starvation attack works by sending a flood of DHCP requests with spoofed MAC
addresses. If enough requests are sent, the network attacker can exhaust the address
space available on the DHCP servers. This flooding would cause a loss of network
availability to new DHCP clients as they connect to the network. A DHCP starvation attack
may be executed before a DHCP spoofing attack. If the legitimate DHCP server’s resources
are exhausted, then the rogue DHCP server on the attacker system has no competition
when it responds to new DHCP requests from clients on the network.
To mitigate DHCP address starvation attacks, deploy port security address limits, which set
an upper limit of secure MAC addresses that can be accepted into the MAC address table
from any single port. Because each DHCP request must be sourced from a separate MAC
address, this mitigation technique effectively limits the number of IP addresses that can
be requested from a switch-port-connected attacker. Set this parameter to a value that is
never legitimately exceeded in your environment.
DHCP for IPv4 Snooping
DHCP snooping is a Layer 2 security feature that specifically prevents DHCP server
spoofing attacks and mitigates DHCP starvation to a degree. DHCP snooping provides
DHCP control by filtering untrusted DHCP messages and by building and maintaining a
DHCP snooping binding database, which is also referred to as a DHCP snooping binding
table.
For DHCP snooping to work, each switch port must be labeled as trusted or untrusted.
Trusted ports are the ports over which the DHCP server is reachable and that will accept
DHCP server replies. All other ports should be labeled as untrusted ports and can only
source DHCP requests. Typically, this approach means the following:
• All access ports should be labeled as untrusted, except the port to which the DHCP
server is directly connected.
• All interswitch ports should be labeled as trusted.
• All ports pointing towards the DHCP server (that is, the ports over which the reply
from the DHCP server is expected) should be labeled as trusted.
Untrusted ports are those ports that are not explicitly configured as trusted. A DHCP
binding table is automatically built by analyzing normal DHCP transactions on all untrusted
ports. Each entry contains the client MAC address, IPv4 address, lease time, binding type,
VLAN number, and port ID that are recorded as clients make DHCP requests. The table is
then used to filter subsequent DHCP traffic. From a DHCP snooping perspective, untrusted
access ports should not send any DHCP server responses, such as DHCPOFFER, DHCPACK,
or DHCPNAK. The switch will drop all such DHCP packets.
The figure below shows the deployment of DHCP protection mechanisms on the access
layer of the network. User ports are designated as untrusted for DHCP snooping (indicated
by red dots), while interswitch links are designated as trusted (indicated by green dots), if
the DHCP server is reachable through the network core. User ports also have port security
to limit MAC addresses and prevent DHCP starvation attacks.
To mitigate the chances of DHCP spoofing, these procedures are recommended:
• Enable DHCP snooping globally and for the VLANs where DHCP clients reside.
• Configure as trusted only the ports through which legitimate DHCP server replies are expected; leave all user-facing ports untrusted.
• Optionally, rate-limit DHCP messages on untrusted ports to help mitigate DHCP starvation.
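As a hedged sketch, a typical configuration might look like the following; the VLAN range and trusted port mirror the verification example described below, while the rate limit and the untrusted port range are assumptions:
Switch(config)# ip dhcp snooping
Switch(config)# ip dhcp snooping vlan 10-19
Switch(config)# interface GigabitEthernet0/24
Switch(config-if)# ip dhcp snooping trust
Switch(config-if)# exit
Switch(config)# interface range GigabitEthernet0/1 - 23
Switch(config-if-range)# ip dhcp snooping limit rate 10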
To verify DHCP snooping, use the show ip dhcp snooping command. The output shows
only trusted ports or the ports with a configured rate limit. In the example, DHCP
snooping is enabled for VLANs 10 through 19 and only GigabitEthernet 0/24 is specified as
trusted.
To display all the known DHCP bindings that have been learned on a switch, use the show
ip dhcp snooping binding command. In the example, there are two PCs connected to the
switch, so there is a binding for each of them in this table:
Switch# show ip dhcp snooping binding
MacAddress          IpAddress        Lease(sec)  Type           VLAN  Interface
------------------  ---------------  ----------  -------------  ----  --------------------
00:24:13:47:AF:C2   192.168.1.4      85858       dhcp-snooping  10    GigabitEthernet0/1
00:24:13:47:7D:B1   192.168.1.5      85859       dhcp-snooping  10    GigabitEthernet0/2
Total number of bindings: 2
33.9 Implementing Device Hardening
Dynamic ARP Inspection
ARP Spoofing Attack
In normal ARP operation, a host sends a broadcast to determine the MAC address of a
destination host with a particular IPv4 address. The device with the IPv4 address replies
with its MAC address. The originating host caches the ARP response, using it to populate
the destination MAC address in frames that encapsulate packets sent to that IPv4 address.
By spoofing an ARP reply from a legitimate device with a malicious ARP reply, an attacking
device appears to be the destination host that is sought by the sender. The ARP message
from the attacker causes the sender to store the MAC address of the attacking system in
its ARP cache. All packets that are destined for that IPv4 address are forwarded to the
attacker system.
An ARP spoofing attack, also known as ARP cache poisoning, can result in a man-in-the-
middle situation. In the figure below, the Attacker on Host B tricks both Host A and its
default gateway Router C. Host A sends traffic to Host B instead of the gateway and the
gateway sends traffic to Host B instead of Host A. The attacker on Host B can passively
collect data from the packets before forwarding them on to their correct destination. The
attacker can also actively allow, deny, and insert data as a man-in-the-middle.
Mitigating the ARP Spoofing Attack
To prevent ARP spoofing, or "poisoning," a switch can inspect transit ARP traffic to ensure
that only valid ARP requests and responses are relayed. The ARP inspection feature of
Cisco Catalyst switches prevents ARP spoofing attacks by intercepting and validating all
ARP requests and responses. Each intercepted ARP reply is verified for valid MAC-to-IPv4
address bindings before it is forwarded. ARP replies with invalid MAC-to-IPv4 address
bindings are dropped.
Dynamic ARP Inspection (DAI) can determine the validity of an ARP reply based on
bindings that are stored in a DHCP snooping database. In non-DHCP environments, DAI
can validate ARP packets against user-configured ARP ACLs for hosts with statically
configured IPv4 addresses.
DAI associates each interface with a trusted state or an untrusted state. To ensure that
only valid ARP requests and responses are relayed, DAI takes these actions:
• Forwards ARP packets received on a trusted interface without any checks.
• Intercepts all ARP packets on untrusted interfaces.
• Verifies that each intercepted packet has a valid IP-to-MAC address binding before forwarding packets that can update the local ARP cache.
• Drops and logs ARP packets with invalid IP-to-MAC address bindings.
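A minimal DAI configuration sketch might look like the following (the VLAN and the trusted uplink interface are assumed values, and DHCP snooping is assumed to already be enabled for that VLAN):
Switch(config)# ip arp inspection vlan 10
Switch(config)# interface GigabitEthernet0/24
Switch(config-if)# ip arp inspection trust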
To view the status of the DAI configuration, the show ip arp inspection and show ip arp
inspection interfaces commands can be used. To review DAI activity, the show ip arp
inspection log and the show ip arp inspection statistics commands can be used.
Dynamic ARP Inspection in Action
The figure below shows a user with an IPv4 address of 10.0.1.2 connected through a
switch to a default gateway with an IPv4 address of 10.0.1.1. An intruder residing on an
untrusted port sends an unsolicited ARP message in an attempt to poison the MAC-to-IPv4
bindings so that all traffic from 10.0.1.2 to the 10.0.1.1 default gateway goes to the
attacker. The attacker attempts to poison the ARP cache of 10.0.1.2, so 10.0.1.2 thinks the
attacker MAC address is the MAC address of the 10.0.1.1 default gateway.
DAI examines the ARP packet and compares its information with the information in the
switch DHCP binding table. Because there is no match for the 10.0.1.1 IPv4 address to the
attacker MAC address of aaaa.1111.2345 in the DHCP binding table, the ARP packet is
dropped.
Root guard is best deployed toward ports that connect to switches that should not be the
root bridge. Root guard is enabled using the spanning-tree guard root command in
interface configuration mode.
The figure illustrates how the attacker sends out spoofed BPDUs to become the root
bridge. Upon receipt of a BPDU, the switch with the root guard feature configured on that
port ignores the BPDU and puts the port in a root-inconsistent state. The port will recover
when the offending BPDUs stop.
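For example, root guard could be applied on a port facing a downstream switch that should never become the root bridge (the interface is a placeholder):
Switch(config)# interface GigabitEthernet0/2
Switch(config-if)# spanning-tree guard root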
• Deployment: Many features can be deployed quickly and easily. Rolling out
deployments is much easier.
• Cloning configurations: When creating a new network, administrators can choose
to clone the configuration for the new network from an existing network.
• Configuration templates: An administrator can make one change that can be
applied to many networks and the devices within those networks.
• Zero-touch deployment: The cloud architecture allows you to configure devices
without having the hardware. This approach is possible because configurations are
stored and managed in the cloud, so administrators can stage configurations
before they have the hardware.
Cisco Meraki solutions are very scalable and can extend to hundreds of thousands of
devices. Scaling means simply adding more devices and licenses to the dashboard.
• Step 1: Deploy
o Cisco Meraki appliances and devices are deployed in your campus or
remote branches.
• Step 2: Connect
o Devices automatically securely connect to the Cisco Meraki cloud, register
to the proper network, and download their configurations.
• Step 3: Manage
o The centralized dashboard provides visibility, diagnostic tools, and
management of the entire network.
You simply deploy the devices.
When powered on, Cisco Meraki devices automatically establish a connection with the
Cisco Meraki cloud.
These devices come preconfigured with the hostname and IP addresses to reach the
dashboard. They also come with a certificate that is used for encryption. All management
traffic is encrypted using a proprietary lightweight encryption tunnel with Advanced
Encryption Standard (AES) 256 encryption.
The Cisco Meraki dashboard manages and monitors all devices. The dashboard is a web
interface and can be accessed via any modern web browser.
Each device creates its own management tunnel, and management traffic has a low overhead. Customers often ask the following questions about the cloud-managed model:
• Security
o Does my network traffic flow through the Cisco Meraki cloud
infrastructure?
• Reliability
o What happens if the devices cannot access the Cisco Meraki cloud?
• Future-Proof
o How do firmware upgrades work? How often do I get new features?
• Scalability
o How do I scale?
Now, take a look at some Cisco Meraki answers.
• Security: User traffic never touches the Cisco Meraki cloud, so the cloud
infrastructure is Health Insurance Portability and Accountability Act (HIPAA)- and
Payment Card Industry (PCI)-compliant.
• Reliability: The Cisco Meraki cloud has historically achieved 99.99 percent uptime. Cisco
Meraki has globally distributed data centers. If a device loses connectivity to the
cloud, it is most often due to an upstream issue such as an ISP outage, Layer 1
issue, or firewall rules on a third-party device.
In these situations, Cisco Meraki devices continue to pass user traffic because they
store the last known configuration locally. Devices have a locally hosted configuration
page used to make basic configuration changes such as uplink settings.
• Full: An administrator with full organization access can view and make changes to
any dashboard network in the organization.
• Read-only: An administrator with read-only organization access can see and view
everything, but cannot make any changes.
• None: An administrator can be configured to have no access at the organization level and can instead be granted access to individual networks.
Network privileges can be used to restrict access to a network. Network-level privileges
allow an administrator to view or configure the networks and their devices for which they
have privileges assigned.
The following network access privileges are available:
• Full: An administrator with full network-level access can make changes to anything
in that network.
• Read-only: An administrator can view the configuration of this network but cannot
make any changes.
• Guest ambassador: An administrator can only see the list of Cisco Meraki
authentication users, can add users, can update existing users, and can authorize
or deauthorize users on an SSID or Client VPN.
• Monitor-only: An administrator can only view a subset of the Monitor section in
the dashboard and cannot make any changes. A monitor-only administrator can view
summary reports but cannot schedule reports via email in the dashboard.
• Network tags: Network tags can be used to assign privileges to administrators. For
example, 30 networks could be assigned a tag of IT Admin. An administrator could
then be given permission for the IT Admin tag, which means that the administrator
would then be able to configure all 30 of those networks. If tags are created and
assigned to networks based on roles, role-based access can be provisioned for
administrators. Cisco Meraki tags should not be confused or used interchangeably
with the traditional 802.1Q tag in a traditional Layer 2 Ethernet header.
Tags are also used for reporting purposes when generating summary reports for specific
networks or device groups.
The Cisco Meraki solution has a simple licensing model. There is a 1:1 ratio of hardware to
license. If you have 20 access point (AP) devices, you need 20 AP licenses—it is that
simple.
The dashboard license includes the following:
In January, the customer buys some Cisco Meraki devices and licensing.
The Cisco Meraki licensing time starts counting when the Cisco Meraki solution provides
the license key to a customer.
In the figure, the customer bought 20 devices, each with 12 months of licensing.
This figure shows that the customer delayed installation and activation for one month. At
this point, the customer has 11 months of licensing left.
Time passes, and the customer is happy. In this figure, the end of June has arrived.
In this figure, in July, the customer adds 20 more devices, each with 12 months of
licensing. The cotermination algorithm adjusts the total licenses to 40 with nine months
remaining. (In this simplified example, the devices are all equal. There is a weighting value
that adjusts for a higher-end device versus a lower-end device).
If a customer needs to replace a device using a Return Materials Authorization (RMA), a
case must be opened with support. Support will verify that the device needs to be
replaced and will then replace it with a new device. Replacement is typically accomplished
on the next business day. In the figure, the customer returned a faulty device, and
replaced it in September. The customer deletes the serial number of the old device from
the dashboard and adds the serial number of the new device. (Because licenses are not
tied to serial numbers, it is easy to have hardware cold spares.)
The figure shows that the customer is approaching the final 30 days on their license, so
they need to add a renewal license.
The dashboard licensing has now expired, and a renewal license needs to be added.
If the dashboard licensing is not renewed before the cotermination expiration date, the
organization enters a grace period. The figure illustrates this situation.
The grace period provides an extra 30 days, as shown in the figure. If dashboard licensing
is not renewed before the grace period ends, Cisco Meraki devices stop passing traffic and
administrators lose the ability to make configuration changes.
Per-Device Licensing Features
A new per-device licensing model is now available to give greater flexibility.
• One-day SKU: This new stock-keeping unit (SKU) type enables fine-tuning of
expiration dates.
• License devices individually: Assign a license to a specific device (Cisco Meraki MR
wireless AP, MS switch, MX security appliance, MV camera, MG cellular gateway)
or a network (with vMX and SM licenses) and maintain a shared expiration date or
separate expiration dates across devices, networks, or organizations.
Per-Device Case Study
This figure illustrates a per-device licensing case study.
Some of the available add-on licenses are Cisco Meraki MR Advanced license, Cisco Meraki
MS Advanced license, Cisco Meraki MX Advanced Security license, and Cisco Meraki
Secure SD-WAN Plus license.
35.1 Describing Cisco Meraki Products and Administration
Introduction
The Cisco Meraki solution is a suite of products that make life incredibly simple for
administrators. These products work together to perform the tasks for the efficient
functioning of the enterprise networks, and are easy to set up, deploy, manage, monitor,
and troubleshoot.
Each product comes with a rich set of features that goes beyond basic functionality while remaining easy to use. Each product is full
of useful features that will make most administrators wonder how they ever managed
before they started using them.
Because of its vast range of security and network connectivity features, the Cisco Meraki MX security appliance is the ideal device to sit at the network edge. It is the perfect unified solution for the customer edge, and its unified threat management features help keep your network safe.
Features and Components
The Cisco Meraki MX security appliance includes security and application control feature
sets.
Next-generation firewall (NGFW) has the following components:
• Packet capture tool to observe live network traffic passed by Cisco Meraki devices
• Enterprise security features
• Cable test tool to test the integrity of the cable
• Live troubleshooting tools
• Virtual stacking to easily push configuration to hundreds of ports in the network
regardless of where the switches are physically located
• Topology page
• Port security
• Biggest differentiator is the inherent visibility
• Layer 7 visibility into data flowing through clients
• Voice and video quality of service (QoS)
The following features are hardware-dependent, require specialized hardware, and are
not supported on every model:
• Physical stacking
• Cisco Meraki StackPower redundant power feature
• DHCP server functionality
• Multigigabit Support
• Universal Power over Ethernet (UPoE)
• Dynamic routing (Open Shortest Path First [OSPF])
Newer Cisco Meraki MS switch models now scale into high-performance distribution and
aggregation switches.
35.4 Describing Cisco Meraki Products and Administration
Cisco Meraki MR Wireless APs
The Cisco Meraki MR wireless access point (AP) was the first Cisco Meraki product that was launched, so its feature set has been well refined over time. Various models deliver solutions for a range of user needs, from basic coverage to high-usage, high-density requirements. Cisco Meraki MR wireless APs can be customized to your exact requirements while offering advanced features.
Models range from basic coverage to high-end, high-density, and stadium and outdoor
wireless networks.
Cisco Meraki AP features include the following:
• Cisco Meraki SM Sentry Enrollment: Only allow devices managed with Cisco
Meraki Systems Manager to access the network. With zero-touch deployment, the
unmanaged devices can install and enroll with Cisco Meraki Systems Manager to
gain access to the network. Cisco Meraki SM Sentry Enrollment is supported on
Android, Apple iOS, Apple macOS, and Microsoft Windows devices and enables
employee self-service for securing BYOD devices.
• Cisco Meraki SM Sentry VPN: VPN settings can be automatically provisioned to
connect managed devices to a Cisco Meraki MX security appliance hosting client
VPN. Changes to VPN configurations on the Cisco Meraki MX side are automatically
reflected in Cisco Meraki Systems Manager without any manual action needed.
• Cisco Meraki SM Sentry WiFi: WiFi settings are provisioned automatically to
connect managed devices to a Cisco Meraki MR wireless network. If a
connected device fails security compliance, Cisco Meraki Systems Manager can
automatically revoke device access to the network.
• Cisco Meraki SM Sentry Policies: Network settings such as firewall rules, traffic
shaping policies, and content filtering can be dynamically changed, controlled,
updated, and remediated automatically.
Because of the nature of operating from the cloud and the need to take steps in an
ordered fashion, it is expected that configuration changes will take time to filter down to
the devices. You will need to be patient while the configuration changes are made and
downloaded safely to your devices. This delay is normal and to be expected.
Configuration updates that are made in the dashboard can have a 1- to 2-minute delay.
If a device is powered off and on, it could take 3 to 5 minutes for it to reboot and
download its configuration. The length of the delay is device dependent. Similarly, if a port
is reset, it could interfere with PoE. Any attached powered devices may take time to
power on and reboot. They may also need to download a configuration.
Live tools, however, do not have any significant delay and interactions should be nearly
instantaneous.
• Topology View: The Topology view displays the status of devices and their
connectivity position in the network. Both their position and status icons are
dynamic. A device is green if all is good, red if there is a failure, and yellow if the
device is alerting. A simple issue such as a VLAN mismatch or a DNS failure can be resolved quickly and easily by checking the colors in the Topology view. The Cisco Meraki dashboard Topology view is highly valuable to operators because it maintains always up-to-date network topology and subnet topology diagrams without any manual input from the operator.
Note: You must deploy at least one Cisco Meraki MS switch in the network for the
Network Topology feature to be available. Cisco Meraki switches act as collectors of CDP
and LLDP information to build out topology, and therefore Topology View is only available
in deployments where Cisco Meraki switches are used.
The topology view may need up to 30 days to accurately reflect nodes and links that are
disconnected from the network.
Cisco Meraki offers Layer 2 and Layer 3 Topology views. Layer 2 Topology view displays
the physical topology of the network. Layer 3 Topology view displays the logical (subnet)
topology of the network.
The Topology view also shows other LLDP- or CDP-enabled Cisco and third-party devices
that are one hop away from a Cisco Meraki switch.
• Packet Capture: The Packet Capture tool is one of the most powerful tools and can
help in troubleshooting Cisco Meraki systems. It is available by default, with no
need for additional configuration. The Packet Capture tool gives you visibility into
the raw traffic running on the wire (or even wireless). You can capture packets
almost anywhere on the Cisco Meraki full stack. You can display captures directly
in the dashboard or export a pcap file to analyze in other tools (such as Wireshark).
Packet captures can also help Cisco Meraki support quickly identify and resolve a
problem. The Packet Capture tool allows you to set different capturing options for
different devices. For example, you can select which port or interface the packet
capture should run on, define capture output (pcap file or display), define if you
want to ignore broadcast or multicast traffic, set verbosity level of displayed
capture, or apply different capturing filters.
The Event Log and Change Log are enabled by default and hosted by Cisco Meraki in the
cloud.
• Event Log: The Event Log displays network events on a networkwide basis. These
events can include wireless client issues, spanning tree issues, and port flapping
issues. Also, RADIUS password issues will appear here. Logs can be valuable
sources of data during the troubleshooting process.
You can access the Event Log in Cisco Meraki dashboard under Network-wide > Monitor >
Event log.
• Change Log: The Change Log records configuration changes on an organization-wide
basis. As an example, the Change Log will show if an SSID has changed. The Change
Log adds accountability by identifying who changed what and when. The Change
Log shows the administrator who made the change, the old configuration, and the
new configuration.
You can access the Change Log in Cisco Meraki dashboard under Organization > Monitor >
Change Log.
• Syslog
• Simple Network Management Protocol (SNMP)
• SNMP traps
• NetFlow visibility
The following encryption and authentication options are supported with the Cisco Meraki solution:
• RADIUS for IEEE 802.1X and Wi-Fi Protected Access 2 (WPA2) enterprise (for
selected switches, access points, small branch security devices)
• Cisco Meraki Authentication is a user-defined username/password RADIUS server
in the dashboard.
• On the wireless side, Wired Equivalent Privacy (WEP), identity pre-shared key (PSK)
with RADIUS, and identity PSK without RADIUS authentication options are also
supported.
There are several options for splash page authentication:
• Facebook
• Google auth
• Lightweight Directory Access Protocol (LDAP)
• RADIUS
• Cisco Meraki proprietary authentication
• Microsoft Active Directory
• MAC address-based authentication
Cisco Meraki can also be integrated with:
• Cisco Identity Services Engine (ISE) for RADIUS authentication and accounting,
change of authorization, central web authentication
• Cisco DNA Center as a unified monitoring and assurance platform
• Dashboard API: This API is used to pull and push information and configurations to
and from the dashboard. This API allows you to extract device statuses and post
configurations. You can export serial numbers of the devices in the network or
even create a network from the beginning.
• Scanning API: Export location analytics data (such as data collected by the AP
Bluetooth radio) from the dashboard to a third-party application or server. The
Scanning API is often used to export that data to the Cisco DNA Spaces or other
third-party software to analyze footfall and provide valuable market analytics and
usage statistics for physical spaces.
• Captive Portal API: This API extends the power of the built-in Cisco Meraki splash
process. The Captive Portal API integrates third-party splash page tools that may offer more flexibility than the splash page natively available through the Cisco Meraki dashboard.
Cisco Meraki Marketplace at https://apps.meraki.io/ offers you an extensive catalog of the
applications that were developed on top of the Cisco Meraki platform by the Cisco Meraki
technology partners. The marketplace allows customers and partners to view, demo, and
deploy solutions.
• Phone support at Cisco Meraki support centers is always staffed for timely, one-
on-one case management.
• Online support cases that are opened via email or the dashboard allow Cisco
Meraki support to quickly locate and solve issues.
• Ongoing cases can be managed, updated, or audited directly in the dashboard
(Help > Cases).
Cisco Meraki support agents can access your systems and dashboard with your
permission. They can show you what the problem is, rather than simply telling you,
without having to start up remote control software.
Your dashboard license includes 24-hour support.
But today's digital economy demands greater agility. Organizations are challenged by
rapidly changing technology as well as business trends and industry disruptions. The
digital economy accelerates the pace of change and innovation. It places new engagement
demands on the business to achieve results to stay competitive and engage with
customers, employees, and across new and emerging ecosystems.
Cisco's collaboration portfolio is designed to help organizations meet these challenges
head-on by providing seamless connectivity between customers, partners, and employees
wherever they are with all the tools they would have access to if they were in the same
room.
The COVID-19 pandemic has forced many people to work from home away from
colleagues. Many companies realize just how important collaboration technologies can be
when a workforce is dispersed. The "new normal" will very likely see an upturn in mobile
and home workers long-term. Good, reliable collaboration tools are essential to maintain
communication with colleagues, partners, and customers.
On-premises deployments are where collaboration applications are deployed within the
enterprise premises to provide voice and video calling; text, voice, and video messaging;
presence; and video conferencing and desk, screen, and content sharing.
Collaboration Deployment Models: Cloud
In the case of cloud deployments, collaboration services delivered from the cloud include
voice and video calling, messaging, and meetings with video, as well as content and screen
sharing. Webex is a cloud-based service used for delivering these services. In a cloud
deployment, end-user devices such as phones and telepresence systems are still located
on the customer site.
Additional cloud implementations of collaboration can include collaboration platform-
based services as provided by third-party managed service providers and integrators that
deliver traditional on-premises collaboration applications and services from the cloud.
Cisco Hosted Collaboration Solution (HCS) is an example of this type of cloud platform-
based service.
Collaboration Deployment Models: Hybrid
In cases where enterprises desire the benefits of both on-premises services (such as
existing investment, high-quality voice and video calling, and so on) and cloud services
(such as continuous delivery or mobile and web delivery), those enterprises are most
often implementing hybrid deployments with a combination of both on-premises and
cloud-based collaboration applications and services.
• Call Control: A call control device is responsible for routing calls and maintaining
the connection between two endpoints. Cisco call control devices also provide a
number of other services such as bandwidth management, endpoint registration,
phone feature management, directory services, and call admission control.
• Collaboration applications: Applications include voicemail, instant messaging,
presence, and contact center services.
• Edge: Edge devices manage connectivity outside of a company. These connections
may be to home workers, offices not connected by VPN, external customers and
partners via the internet, connections to Session Initiation Protocol (SIP) service
providers, and connections directly to the telephone network.
• Conferencing: Conferencing devices provide multiparty connections into a single
conference. These multiparty conferences may be voice only or video.
Conferencing resources can exist as separate devices or within other collaboration
devices.
• Endpoints: Cisco Endpoints include everything from running software on
computers and smart phones to fully integrated room systems.
• IP phones: Available as wired and wireless models. Capabilities can include color or monochrome displays and built-in cameras, and specialty phones are available for conference rooms, reception desks, and other nondesk locations.
• Desktop endpoints: Designed for a single user at the desk. These HD-capable endpoints can also be used as a monitor and support screen sharing.
• Room endpoints: Designed for conference rooms, room endpoints come as a fully
integrated system including monitors and stands, a kit version that can be added
to an existing monitor, or a Webex Teams Board, which is essentially a whiteboard
with integrated Cisco TelePresence capabilities.
• Mobile endpoints: Collaboration endpoints for mobile devices include Cisco Jabber and Cisco Webex Teams.
• Integrator solutions: The Cisco Webex Room Kit Pro is designed for larger custom
rooms such as auditoriums and boardrooms.
• Call processing: Setting up and tearing down of calls, including the routing of
media channels and negotiation of codecs.
• Endpoint registration: Endpoints registered to the call control device are listed in a
database mapping user-facing names and numbers to IP addresses. Cisco Unified
Communications Manager and Cisco Unified Communications Manager Express
also provide endpoint devices with configuration files.
• Phone Feature Administration: The features available depend on the call control
device, but as an example, Cisco Unified Communications Manager administers
features such as Extension Mobility, Device Mobility, Call Park, and Call Pickup, to
name a few.
• Directory Services: Ability for users to access a directory of users rather than
remember each individual directory number. External directory services can also
be referenced.
• Call Admission Control: A mechanism that can control which users or devices can
call other users or devices. Call Admission Control (CAC) can also be used to allow
access to external resources to certain users, such as access to conferencing
capabilities.
• Call Routing Control: How calls are routed to services outside the call-processing
device. These services could be voicemail, contact center tools, other call-
processing devices within the organization, or gateways to other types of networks
such as Public Switched Telephone Network (PSTN), SIP service providers, or the
internet.
• Bandwidth Control: A feature that controls how much bandwidth a call is allowed
to use. This can be set per call and controlled per link. For example, calls between
site one and site two cannot exceed a set limit.
Collaboration Protocols
Cisco Collaboration uses a number of protocols for communication.
SIP: SIP is a protocol for the registration of devices and the initiation, management, and
termination of real-time sessions, such as voice and video, over IP networks. SIP was
developed by the IETF. SIP is becoming the default standard for voice and video.
H.323: H.323 is also a protocol for the registration of devices and the initiation,
management, and termination of real-time sessions. H.323 is an older standard developed
by the ITU based on the H.320 standard used for video conferencing over ISDN networks.
SCCP: Skinny Client Control Protocol (SCCP) is a Cisco proprietary protocol for the
registration of devices and the initiation, management, and termination of real-time
sessions.
MGCP: Media Gateway Control Protocol (MGCP) is used by Cisco Unified Communications Manager to control remote gateways.
Connectivity between Cisco devices generally uses the SIP protocol. Cisco phones can also
use SCCP, but any third-party phones and Cisco TelePresence devices all use SIP. Cisco
Expressway supports both H.323 and SIP endpoints. Connections between Cisco Unified
Communications Manager and voice gateways can use SIP, H.323, or MGCP.
Call Signaling and Media Flow
Cisco Unified Communications Manager uses different signaling protocols to communicate
with Cisco IP phones for call setup and maintenance tasks, including SIP and SCCP. After
the call setup is finished, media exchange normally occurs directly between Cisco IP
phones using Real-Time Transport Protocol (RTP) to carry the audio and potentially video
stream.
In the figure, User A on IP phone A (left device) wants to make a call to IP phone B (right
device). User A enters the number of User B. In this scenario, dialed digits are sent to Cisco
Unified Communications Manager (Cisco Unified CM), which performs its main function of
call processing. Cisco Unified Communications Manager finds the IP address of the
destination and determines where to route the call.
Using SCCP or SIP, Cisco Unified Communications Manager checks the current status of
the called party phone. If Cisco Unified Communications Manager is ready to accept the
call, it sends the called party details and signals, via ringback, to the calling party to
indicate that the destination is ringing.
When User B accepts the call, the RTP media path opens between the two devices. User A
and User B may now begin a conversation.
Cisco IP phones require no further communication with Cisco Unified Communications
Manager until either User A or User B invokes a feature, such as call transfer, call conferencing, or call termination.
Cisco Unified Communications Manager is the core call-processing platform for most on-
premises customers and offers the largest number of features.
Mobile and remote access allow mobile workers to use internal collaboration services
from the public network without the need for a VPN. The external device uses DNS to
locate the Cisco Expressway-E device and send their registration messages to the
Expressway-E. The Expressway-E sends these messages onto the Expressway-C and
subsequently onto the Cisco Unified Communications Manager. The phone, Jabber
endpoint, or video device then registers as normal. As far as the endpoint is concerned, it
is talking directly to the Cisco Unified Communications Manager.
Hybrid Services
Expressway is also used to connect cloud services to on-premises services in a hybrid
deployment. Hybrid services that use Expressway include:
Hybrid Call Service: This service allows a Webex Teams customer with Cisco Unified
Communications Manager, Business Edition 6000, or Cisco Hosted Collaboration Solution
to integrate their current call control with the Cisco Collaboration Cloud.
Hybrid Calendar Service: This service allows any Webex Teams customer to enable
scheduling of Webex meetings with an automatically created and associated Webex Team
space. By adding @meet to the location field of the exchange appointment, all attendees
of the appointment are automatically added to the Webex Teams room. With @Webex in
the location field, the details of the user's personal Webex Meetings room are
automatically added to the invitation sent for the appointment.
Hybrid Directory Service: This service allows any Webex Teams customer to synchronize
their current Active Directory with the Cisco Webex Cloud. This service makes onboarding
users to the cloud simple and more secure.
Cisco Unified Border Element
IP PSTN connectivity with the SIP trunk
Cisco Unified Border Element, also called the Session Border Controller (SBC), is also used
to connect Cisco on-premises devices to external devices, usually via an Internet
Telephony Service Provider (ITSP). Cisco Unified Border Element is an additional function
of a Cisco Router.
Individual SIP trunks can also be set up for customers or partners if required. Like
Expressway, Cisco Unified Border Element can be used to interwork calls and modify
media ports as required. Each call made through a Cisco Unified Border Element is split
into two separate call legs. Cisco Unified Border Element can handle large volumes of calls and is typically used for voice, while Expressway business-to-business connectivity is typically used for video.
• Cisco Unified Contact Center Express: A single-box solution for small- to medium-
sized businesses for up to 400 agents supporting voice, Interactive Voice Response
(IVR), and digital channels such as email and chat.
• Cisco Unified Contact Center Enterprise: A suite of products that can support up to
24,000 agents supporting voice, IVR, and digital channels. Additional products can
be integrated to provide features such as reporting and management.
• Faster deployment
• Pay for what you need
• Expand and contract with business requirements
• Reduced onsite expertise
• Reduced large upfront investment
The main benefits of a cloud deployment include:
• Faster deployment: Cloud services can be up and running in days rather than
months.
• Pay for what you need: Cloud services are fully scalable and can be purchased per
user, allowing customers to deploy exactly what they need.
• Expand and contract with business requirements: Cloud services are flexible and
can be increased and decreased as the needs of a business change.
• Reduced onsite expertise: Endpoints are the only devices on the customer site.
Configuration of endpoints is simplified using a wizard in most cases which can be
configured by an end-user, reducing the need for support functions on every site.
• Reduced large upfront investment: No requirement to purchase servers to run
management software. Some investment in endpoints may be required upfront.
Running costs are operating expenses rather than capital expenditures.
Cisco Webex Meetings
Cisco Webex Meetings is managed and hosted by Cisco. The Cisco Webex Meetings platform
provides multiperson meeting capabilities. There are four types of meeting platforms
available.
Cisco Webex Meetings: Used for most day-to-day meetings with up to two hundred
named attendees in a single meeting. Cisco Webex Meetings has the capability for each
named user to have a personal meeting room. Meetings can also be scheduled using
calendar platform integrations as well as from the Webex Meetings app or web page.
Meetings include chat capabilities, recording capabilities, sharing of content including
computer applications, videos and whiteboards, and file transfer capabilities. Users can
connect to Cisco Webex Meetings using HD video systems, laptops, smart phones, or
traditional PSTN audio dial-up.
Cisco Webex Training: Specifically designed for training, Cisco Webex Training includes
sharing and whiteboard capabilities, breakout sessions, integrated labs, Q&A capabilities,
chat, polls, attention indicators, integrated test engines, file transfer, and recording. When
setting up a Cisco Webex Training session, you can include attachments, require
registration and integrate with a payment system.
Cisco Webex Events: Cisco Webex Events is specifically designed for larger groups of up to
3000 people in a nonvideo-enabled event and 500 in a video-enabled event. Speakers can
share multimedia and whiteboards the same way as Training and Meetings. Q&A, chat,
polling, recording, and attention monitoring capabilities are all included. Cisco Webex
Events also supports registration and payment capabilities for both live events and access
to recordings.
Cisco Webex Support: Enables a support representative to take control of a remote desktop
while connected to a user with audio and video capabilities. For more complex issues, up
to five participants can be connected to a support call. It has a very simple "click to
connect" option to bring a customer into the call. Other features include file transfer
capabilities, custom scripts, chat, the ability to have multiple connections at one time,
and the ability to reboot and reconnect to customer machines.
Cisco Webex Teams
Cisco Webex Teams provides a single application for meetings, calls, and chats. Cisco
Webex Teams is managed and hosted by Cisco.
From the main Cisco Webex Teams interface, you can chat with individuals (People) or
create a space for multiple people. Spaces can exist on their own or can be part of a team
with multiple spaces. Within a space or individual chat, you can also share files and launch
or schedule a meeting. Users from outside your organization can be invited as well as
internal users. Cisco Webex Teams provides presence information on users and a custom
status capability.
The Cisco Webex Teams app is available for Windows, macOS, Android, and iOS, as well as
a generic web browser version for all other devices.
Meetings can be set up directly from the Webex Application or from a Webex device.
When setting a meeting up using the app, participants can use any nearby video system
without having to manually dial the device. Participants can also connect to meetings
using standard SIP devices, dial in from a phone, or join via Microsoft Skype for Business.
Each meeting is connected to a space, either because it was launched from one or because
a new one is created for it. All whiteboard sessions, files shared, and chat within the
meeting remain available to all participants after the meeting.
Cisco Webex devices include the desktop app, smart phone app, compliant phones
including conference phones, Cisco TelePresence devices, and Cisco Webex Teams boards.
All Cisco Webex Meeting and Teams solutions have a number of security features built-in,
including meeting passwords, end-to-end encryption, and Active Directory integration for
user and password management. The Webex application programming interface (API)
enables developers to extend the features for Webex, adding applications or using APIs,
software development kits (SDKs), and widgets to embed Webex into other applications.
Cisco Webex Calling is a cloud-based enterprise PBX optimized for midsized
businesses. Cisco Webex Calling enables devices to register to the cloud and from there be
routed to the PSTN. The customer has the choice of how they wish to access the PSTN.
They can either use one of the Cisco Cloud Connected Partners (CCP) and have calls
routed straight from the Cisco Cloud to the CCP cloud or have calls routed back to the
customer premises and use their existing or preferred PSTN provider. Cisco Webex Calling
is a fully featured PBX providing all the features you would expect from an Enterprise-
grade solution, including Hunt Groups, Call Queues, Voicemail, Auto Attendants, Paging
Groups, and Call Park Groups, for example.
Cisco Webex Contact Center is a cloud-based contact center platform supporting voice,
email, and chat communication with customers. Cisco Webex Contact Center is managed
and hosted by Cisco. Webex Contact Center has the ability to route customer queries
based on agent skills and availability. Staff not from the contact center, such as managers
and subject matter experts, can join interactions using Cisco Webex Teams if needed.
Cisco Webex Contact Center can integrate with a number of Customer Relationship Management
tools such as Salesforce and Microsoft. Data from customer interaction and agent activity
records, including IVR and Automatic Call Distributor (ACD), is brought together into real-
time and historical reports and dashboards.
Optional components, including workforce optimization, enable the dynamic management
of agent schedules, forecasts, and staff planning. Quality management tools enable
customers to measure efficiency and performance, and an outbound campaign manager
can be used for outbound sales and marketing campaigns.
Cisco Webex Teams, Meetings, Calling, and Contact Center are all administered using the
Cisco Webex Control Hub. From the control hub, an administrator can add, modify, and
delete users, import users from Active Directory, manage user subscriptions, configure
locations and physical devices, set up initial configuration and specific services, and run
reports.
Cisco Hosted Collaboration Solutions (HCS) are partner hosted rather than Cisco hosted.
Essentially all the component parts you would deploy in an on-premises solution are
available within an HCS solution. Deployments can be fully partner-hosted and managed,
hosted on the customer premises but managed by the partner, or built as a dedicated data
center for a customer and managed by the partner. Smaller customers may share devices
such as Cisco Unified Communications Manager in a multitenant environment, while
larger customers have an independent environment.
On top of the devices normally found in an on-premises solution, Cisco HCS also includes
Cisco Hosted Collaboration Mediation Fulfillment (HCM-F), which performs centralized
management for the entire Cisco HCS solution. HCM-F also performs aggregation, provides
a central connection to the service provider cloud, and offers northbound interface (NBI)
services to integrate Cisco HCS with the service provider business support system (BSS),
operations support system (OSS), and Manager of Managers (MoM).
The human ear and voice communicate using sound waves, which are analog signals.
Modern communication networks communicate using digital signals. Before voice can be
sent from one phone to another, it has to be converted into a digital format. At the
receiving end, the digital signal is converted back to analog so that the receiving party can
understand the message.
A digital format is used to transmit signals because any signal degrades over distance.
First, digital signals can degrade much further and still be readable; you can still tell a "1"
from a "0." Second, when an analog signal degrades, the signal is amplified at regular
intervals, but amplification does not remove any unwanted noise that was picked up
along the way. A digital signal is not amplified; it is recreated, which removes the noise
and produces a clean signal again. Noise created during transmission is analog in nature, so
it can be distinguished from a digital signal but not from an analog signal.
The first three steps of the analog to digital conversion describe the pulse code
modulation (PCM) process, which corresponds to the G.711 codec. Step 4 explains
compression that is performed by low-bandwidth codecs, such as G.729, G.728, G.726, or
Internet Low Bitrate Codec (iLBC).
1. Sample the analog signal regularly: The sampling rate must be twice the highest
frequency to produce playback that does not appear either choppy or too smooth.
The sampling rate used in telephony is 8000 samples per second (8 kHz), which
reflects the fact that the bulk of human voice energy is carried in the spectrum of
0–4 kHz.
2. Quantize the sample: Quantization consists of a scale made up of 8 major
segments. Each segment is subdivided into 16 intervals. The segments are not
equally spaced but are actually finest near the origin. Intervals are equal within the
segments but different when they are compared between the segments. Finer
graduations at the origin result in less distortion for low-level tones.
3. Encode the value into an 8-bit digital form: Encoding maps a value derived from
the quantization to an 8-bit number (octet).
4. (Optional) Compress the samples to reduce bandwidth: Signal compression is
used to reduce the bandwidth usage per call.
Sampling takes readings of the waveform amplitude at regular intervals using a process
called pulse amplitude modulation (PAM). The output is a series of pulses that
approximate the analog waveform. For this output to have an acceptable level of
quality for the signal to be reconstructed, the sampling rate must be rapid enough.
Harry Nyquist developed a mathematical proof about the rate at which a waveform can be
sampled and the information that can be recovered from those samples. The Nyquist
theorem states that when a signal is instantaneously sampled at the transmitter in regular
intervals and has a rate of at least twice the highest channel frequency, the samples will
contain sufficient information to allow an accurate reconstruction of the signal at the
receiver.
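The arithmetic behind these figures is simple; the following is a minimal Python sketch of the Nyquist calculation for the telephony voice band. The 4000 Hz upper limit and the resulting 8000 samples-per-second figure come from the text above; the function and variable names are illustrative only.

```python
# Minimal sketch: Nyquist sampling rate for the telephony voice band.

def nyquist_rate(highest_frequency_hz: float) -> float:
    """Minimum sampling rate: twice the highest frequency in the channel."""
    return 2 * highest_frequency_hz

voice_band_limit_hz = 4000            # telephone channel upper frequency (from the text)
rate = nyquist_rate(voice_band_limit_hz)
interval_us = 1_000_000 / rate        # time between samples, in microseconds

print(f"Sampling rate: {rate:.0f} samples/second")      # 8000
print(f"Sample interval: {interval_us:.0f} microseconds")  # 125
```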
While the human ear can sense sounds from 20 to 20,000 Hz, speech encompasses sounds
from about 200 to 9000 Hz. The telephone channel was designed to operate at
frequencies of 300 to 4000 Hz. This economical range offers enough fidelity for voice
communications, although higher frequency tones are not transmitted. The removal of
higher frequencies leads to issues with sounds such as “s” or “th.” The voice frequency of
4000 Hz requires 8000 samples per second; that is, one sample every 125 microseconds.
The significant articulation range of the human voice falls between 300 and 4000 Hz, and
applying the Nyquist theorem to this range yields the 8000 samples-per-second rate. This
is the range that telephones were designed to sample and the range that VoIP was initially
designed to sample to match traditional telephony. It is also known as narrowband.
Although this range works well with human speech, it still does not capture the full human
speech range and does not work well for music, which is a concern when using Music on
Hold (MOH).
Over time new codecs have been developed, and higher sampling rates have been
included to allow for crisper, more precise speech transmission. Codecs offering sampling
in the full band range are suitable for live music performances.
Quantization divides the range of amplitude values present in an analog signal sample into
a set of discrete steps and assigns each sample the step value closest to the original analog signal.
Quantization matches a PAM signal to a segmented scale. The scale measures the
amplitude (height) of the PAM signal and assigns an integer number to define that
amplitude.
The figure shows quantization. In the example, the x-axis represents time, and the y-axis
represents the voltage value (PAM). The voltage range is divided into 16 segments (0 to 7
positive and 0 to 7 negative). Starting with segment 0, each segment is twice the length of
the preceding one, which reduces quantization distortion for low-amplitude signals and
keeps the signal-to-noise ratio (SNR) more uniform across the amplitude range. This
segmentation also corresponds closely to the logarithmic behavior of the
human ear. The two principal schemes for generating these samples in electronic
communication are a-law and mu-law.
The a-law and mu-law standards are audio compression schemes defined by ITU-T G.711
that compress 16-bit linear PCM data down to 8 bits of logarithmic data. The a-law
standard is primarily used in Europe and the rest of the world. The mu-law standard is
used in North America and Japan.
Although a-law and mu-law are very similar, there are a few differences that make them
incompatible. An international connection must use a-law. The mu-law to a-law
conversion is the responsibility of the mu-law country.
Encoding converts an integer base-10 number to a binary number. The output of encoding
is a binary expression in which each bit is either a 1 (pulse) or a 0 (no pulse). After PAM
samples an input analog voice signal, the next step is to encode these samples in
preparation for transmission over a telephony network. This process is called PCM.
The PCM process mathematically converts the value obtained from PAM sampling to a
value within the range –127 to +127, which is then encoded as an 8-bit binary number.
The first bit represents positive (1) or negative (0), while the remaining 7 bits form the
number between 0 and 127.
It is during this conversion that a-law and mu-law differ in their algorithms. A-law
represents the number +127 as 11111111, where the first bit is 1 (positive) and the
remaining bits equal 127. Mu-law inverts the last 7 bits, which results in +127 being
represented as 10000000.
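The following is a simplified Python sketch of the sign-bit-plus-7-bit-magnitude encoding described above, reproducing only the +127 example from the text. It is not a complete G.711 companding implementation; the segment and interval logic is omitted, and the function names are illustrative.

```python
# Simplified sketch: sign bit followed by a 7-bit magnitude, with mu-law
# inverting the last 7 bits. Mirrors the text's +127 example only.

def encode_a_law_style(value: int) -> str:
    sign = "1" if value >= 0 else "0"         # 1 = positive, 0 = negative
    magnitude = format(abs(value), "07b")     # 7-bit magnitude, 0-127
    return sign + magnitude

def encode_mu_law_style(value: int) -> str:
    sign = "1" if value >= 0 else "0"
    magnitude = format(abs(value) ^ 0x7F, "07b")  # invert the 7 magnitude bits
    return sign + magnitude

print(encode_a_law_style(127))    # 11111111
print(encode_mu_law_style(127))   # 10000000
```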
It is at this stage that companding, the process of first compressing an analog signal at the
source and then expanding this signal back to its original size when it reaches its
destination, is applied. This whole process is generally referred to as PCM coding. A digital
signal processor (DSP), which is a specialized chip, performs the PCM process quickly.
Uncompressed digital speech signals are sampled at a rate of 8000 samples per second,
with each sample consisting of 8 bits. Therefore, you have 64 kbps per call (8000 * 8).
Multiple algorithms have been developed to allow voice transmission at lower bandwidth
consumption. The most common coder-decoder (codec) algorithms are presented in the
table in the figure, together with their bandwidth usage. Codecs offer compression to
voice, much like .zip or .rar offer compression to files.
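To make the 64-kbps figure explicit, the following minimal calculation repeats the arithmetic from the paragraph above; variable names are illustrative, and the result is the codec payload rate only, before any IP/UDP/RTP header overhead is added.

```python
# Worked example of the uncompressed G.711 payload rate cited above.
samples_per_second = 8000   # 8 kHz sampling rate
bits_per_sample = 8         # one octet per sample

payload_bps = samples_per_second * bits_per_sample
print(f"G.711 payload bit rate: {payload_bps} bps ({payload_bps // 1000} kbps)")  # 64000 bps / 64 kbps
```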
• Call quality
• Network latency and reliability
• Endpoint Support
• Codec complexity
• Transcoder avoidance
• Bandwidth
A codec is a software algorithm that compresses and decompresses speech or audio
signals. There are many standardized codecs that are used in VoIP networks.
When selecting a codec to use within an enterprise environment, consider the following:
• Bandwidth: Usually the first consideration when selecting a codec, especially when
considering the total bandwidth consumed by multiple simultaneous calls over
low-speed WAN links (see the sizing sketch after this list). On the LAN, bandwidth is
not paramount because most networks will have 100 Mbps at a minimum, with many
networks now having 1 Gbps to each endpoint.
• Call quality: Second to bandwidth is the call quality that the codec is capable of
providing. In the past, call quality was an especially important consideration,
although in recent years it has become less of a deciding factor because all
mainstream codecs offer premium-quality calls. The most common method for
scoring quality is the Mean Opinion Score (MOS). Although other methods are
available, MOS is still the first score most people will look at. The score ranges from
1 to 5, with 5 being perfect face-to-face quality and 4.3 being the highest quality
possible over a phone because of the Nyquist theorem.
• Network latency and reliability: The reliability and latency of your network will
have an impact on the quality of the call more so than the quality score for the
codec. However, there are differences between codecs in their ability to conceal
latency and packet loss issues experienced on the network.
• Endpoint support: Codecs improve year over year, and although the latest codecs
often offer the lowest bandwidth and highest quality, it is imperative to identify
codec support of the current endpoints in the environment. The introduction of
new codecs might require transcoding (translating from one codec to another),
which will require additional resources until all endpoints support the newer
codecs.
• Codec complexity: Each codec has a different amount of compression that it needs
to perform to maintain its quality score and bandwidth usage, so different
amounts of processing power are needed for different codecs. This processing
power is often consumed on the DSP chips on the routers. Knowing how much
processing a codec requires will help determine scalability issues with existing and
future hardware.
• Transcoder avoidance: Transcoders allow endpoints that have incompatible codec
support to communicate with each other through the transcoder, which will
translate between the two codecs in use. Transcoding, however, comes at a
resource cost because DSP resources are needed. Avoiding endpoints that are
incompatible from a codec point of view will remove this resource requirement.
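The sizing sketch below illustrates the bandwidth consideration above: total codec payload bandwidth for a given number of simultaneous calls over a WAN link. The payload rates are the ones cited in this section (G.711 64 kbps, G.729 8 kbps, iLBC 13.3 kbps); real per-call usage is higher once IP/UDP/RTP header overhead is included, and the function and table names are illustrative.

```python
# Hypothetical sizing sketch: total codec payload bandwidth for N simultaneous
# calls over a WAN link, using the payload rates mentioned in this section.

CODEC_PAYLOAD_KBPS = {
    "G.711": 64.0,
    "G.729": 8.0,
    "iLBC": 13.3,
}

def total_payload_kbps(codec: str, simultaneous_calls: int) -> float:
    return CODEC_PAYLOAD_KBPS[codec] * simultaneous_calls

for codec in CODEC_PAYLOAD_KBPS:
    print(f"{codec}: 20 calls need about {total_payload_kbps(codec, 20):.0f} kbps of payload")
```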
G.711 is an ITU-T standard that uses PCM to encode analog signals into a digital
representation by regularly sampling the magnitude of the signal at uniform intervals and
then quantizing it into a series of symbols in a digital (usually binary) code. The voice
samples created by the PCM process generate 64 kbps of data.
G.722 is an ITU-T standard wideband speech codec operating at 48, 56, and 64 kbps.
G.722 is typically used in LAN deployments, where the required bandwidth is not
prohibitive. Unlike G.711, which has a sampling rate of 8 kHz as per the Nyquist theorem,
G.722 samples at double that rate, 16 kHz. In this type of deployment, G.722 offers a
significant improvement in audio quality over older narrowband codecs, such as G.711,
without causing an excessive increase in implementation complexity. Cisco Unified
Communications Manager calculates G.722 bandwidth as 64 kbps.
iLBC is a speech codec that is suitable for robust voice communication over IP. The codec
is designed for narrowband speech and results in a payload bit rate of 13.3 kbps. The CPU
load is similar to that of G.729A, with higher quality and a better response to packet loss.
When frames are lost, iLBC handles the impairment through graceful speech quality
degradation. Lost frames often occur with lost or delayed IP packets. Ordinary low-bitrate
codecs exploit dependencies between speech frames, which unfortunately results in error
propagation when packets are lost or delayed. In contrast, iLBC-encoded speech frames
are independent, so this problem will not occur.
G.729 is the compression algorithm that Cisco uses for high-quality 8-kbps voice. G.729 is
a high-complexity, processor-intensive compression algorithm that monopolizes
processing resources.
Although G.729A is also an 8-kbps compression algorithm, it is not as processor-intensive
as G.729. G.729A is a medium-complexity variant of G.729 with slightly lower voice quality
and is more susceptible to network irregularities such as delay variation and
"tandeming." Tandeming causes distortion that occurs when speech is coded, decoded,
then coded and decoded again, much like the distortion that occurs when a videotape is
repeatedly copied.
The Annex B variant of G.729 is also a high-complexity algorithm that adds VAD (Voice
Activity Detection) and CNG (Comfort Noise Generation) to the codec. VAD detects silence
that occurs in typical conversations. This silence is present when one end is talking and the
other is listening. The listening end can have the Real-Time Transport Protocol (RTP)
stream that is going toward the talker temporarily suppressed. The benefit of this
suppression is an approximate 35 percent savings in bandwidth. The RTP stream is
reactivated upon the detection of sound on the listening end, which can cause clipping of
the first syllable when the RTP stream restarts. With traditional voice circuits, users are
used to hearing white noise. When users switch to digital circuits, the lack of white noise
can be mistaken for a disconnection. CNG inserts white noise into the line to
accommodate users who are changing from traditional voice circuits.
G.729AB is a medium-complexity variant of G.729B with slightly lower voice quality.
Opus is an open, royalty-free codec standardized by the IETF in 2012. It supports bitrates
from 6 kbps to 510 kbps and sampling rates from 8 kHz (narrowband) to 48 kHz (full
band). This sets Opus apart from other codecs, as it has unmatched quality for
interactive speech and music transmission. It supports both constant bitrate (CBR) and
variable bitrate (VBR). It is the codec of choice when using Webex.
Video codecs all fundamentally work the same way at the start. They take a single frame
of the video, group pixels into blocks, and then group blocks into macroblocks. Where
codecs differ is how this grouping process is performed and the quantity of each element
used in the groupings.
Macroblocks that are in a contiguous row are grouped into slices (a single row of
macroblocks).
To reduce the amount of data needed to create a video, video codecs are designed to
identify changes from one frame to the next and only update the changes. Initially, the
process requires a full frame, known as an I-frame (intra-coded frame), to be used as the
starting point. The second frame is compared to the first to identify which macroblocks
have changed, and only those blocks are updated and transmitted in the form of
P-frames (predictive-coded) or B-frames (bidirectional predictive-coded).
Pictures, or frames, are grouped into a group of pictures (GOP), with the I-frame as the
starting point and P-frames following it to the next I-frame.
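The toy model below illustrates this GOP structure and why a lost I-frame hurts more than a lost P-frame: every frame that depends on the lost frame is affected until the next I-frame arrives. It is not an implementation of any real codec; the GOP length, function names, and indices are illustrative assumptions.

```python
# Toy model of a group of pictures (GOP): one I-frame followed by P-frames.

def build_gop(p_frames_per_gop: int) -> list[str]:
    return ["I"] + ["P"] * p_frames_per_gop

def frames_affected_by_loss(gop: list[str], lost_index: int) -> int:
    # Frames from the lost one up to the next I-frame (end of this GOP) are affected.
    return len(gop) - lost_index

gop = build_gop(p_frames_per_gop=14)      # 15-frame GOP: I P P P ... P
print(frames_affected_by_loss(gop, 0))    # lost I-frame: all 15 frames affected
print(frames_affected_by_loss(gop, 10))   # lost P-frame: only 5 frames affected
```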
The main reason why video applications are more loss-sensitive than voice is that the
codecs that are used in video compression work differently from the way that voice
codecs work.
Commonly used video codecs, such as MPEG-2, MPEG-4, H.264, and H.265, use temporal
compression algorithms. A codec that uses temporal compression does not send a
complete frame sample (called an I-frame or keyframe) at every sampling interval. Only
some of the frames that are sent are I-frames. Between the full frames (I-frames), only the
differences with the previous frame, represented as motion vectors and prediction errors,
are encoded. The frames that carry these frame deltas (P- or B-frames) tend to be much
smaller than I-frames, which is how the compression algorithm reduces the bandwidth.
With the temporal compression algorithm, a spatial compression algorithm is typically
used on each frame to reduce the number of bytes that is necessary to encode the frame
itself. This process is similar to the type of encoding that is used in picture compression
methods such as JPEG.
What does this mean for the loss tolerance of video traffic?
At the commonly used 30-f/s frame rate, a frame is sent every 33 ms. Depending on the
resolution and spatial compression that is used, each frame is broken down into several
packets, and these packets are transmitted onto the network in a short burst.
What happens if you lose a single packet out of this burst?
To begin, losing a single packet means losing the complete frame, so the loss is magnified
by the fact that a sample is not a single packet as it is with voice. Next, if only spatial
compression would be used, then a new frame would arrive after 33 ms, and you would
experience a 33-ms freeze. However, due to the temporal encoding scheme, you will not
always receive a new I-frame as the next frame. If you lose an I-frame, it can take several
hundred milliseconds (depending on the codec) before you get a new I-frame. If you lose a
P- or B-frame, the effect is slightly less severe, but this loss will still translate to clearly
visible artifacts in subsequent frames. Therefore, from a network design standpoint, to
provide a good user experience for video applications, you should design the network to
be as close to lossless as possible for the video traffic.
Another related design objective is to design the network with very high availability in
mind. A commonly used target for network reconvergence in the campus is below 200 ms.
If a media application loses 200 ms worth of packets, this loss is definitely noticeable to
the user, but it will generally be accepted because it is an isolated incident. A longer period
of loss for media applications is noticeable and will detract from the user experience.
Video Codec Selection
The following are some video codec considerations:
• Transcoding
• Bitrate
• Quality
• Network latency and reliability
• Endpoint Support
• Complexity
When choosing a codec, especially for off-network calls, you must consider the following:
• Which codecs are supported on the off-network endpoints? If possible, you should
avoid the need for transcoding, especially video transcoding. Transcoding requires
dedicated hardware resources. Therefore, try to choose common codecs, if
possible. If endpoints do not have a codec in common, then a transcoder is
required.
• Is there enough bandwidth for the desired number of audio and video calls?
• Is quality of service (QoS) implemented? What latency, jitter, and packet loss do I
have to expect?
• What is the desired audio and video quality?
The two leading video codecs are H.264 and H.265. Although H.264 is still generally the
de-facto standard, H.265 has some benefits.
H.264, also known as MPEG-4 AVC (Advanced Video Coding), was released in 2003 and is
used widely by most video hosting companies, including Netflix, Google, and YouTube. It
set itself apart from its predecessors, such as MPEG-2 and H.263, by reducing the bitrate
by half while still maintaining the same quality, or alternatively by increasing the quality
substantially while maintaining the same storage usage. It was able to achieve this benefit
without increasing the processing requirements or complexity. It supports spatial and
temporal compression and supports resolutions up to 8K Ultra High Definition, with a
maximum resolution of 8192x4320.
H.265 evolved from H.264 and was ratified in 2013. The bitrate was halved again when
compared to its predecessor, but the complexity increased, requiring a lot more
processing power to encode and decode H.265. All other values of H.265 remained the
same when compared to H.264.
• Initiating the call: After the user has lifted the handset and received a dial tone,
they will dial the called party's number. These digits are sent to the call control
device.
• Endpoint Discovery: The call control device then needs to identify the location of
the called party. This endpoint could be registered locally on the same device, or it
might require call routing configuration in order to route the call to a remote
destination.
• Permission Check: A permission check may be performed to confirm the calling
party has sufficient rights to dial the number in question.
• Bandwidth Check: The call control agent may check to determine whether there is
sufficient bandwidth on the network to allow the call. The bandwidth check is
better known as Call Admission Control (CAC); a minimal admission-check sketch
follows this list. There are many options for the implementation of CAC, each with
its own requirements.
• Call progress tones: After the called party endpoint is ringing, a call progress tone
needs to be sent back to the calling party, which will then play the ringing tone to
the user. Similarly, if the called party is engaged, the engaged tone will be played
to the calling party. These are just two examples of call progress tones that are
available during call setup and teardown.
• Call answered: After the called party answers the call, call progress tones are no
longer needed.
• Call Detail Record (CDR): The call control agent can be configured to log all call
information. The logging can be done through several different technologies such
as syslog, RADIUS, or direct database entries. The method used for logging will
largely depend on the type of call control agent that is used.
• Codec negotiation: A capabilities exchange is now required in order to find a
common supported and requested codec between the two endpoints.
• Negotiating the streams: The two endpoints will then negotiate the ports to be
used to establish the bidirectional connections for RTP and Real-Time Transport
Control Protocol (RTCP).
o RTP will use an even-numbered port within the range 16,384 –
32,767.
o RTCP will use the RTP port plus one in order to form a paired connection,
which results in RTCP always using an odd port number.
• Hang up: When either the calling or called party terminates the call by hanging up
the endpoint, signaling is sent to the call control agent to tear down the call and
release the resources used.
• Close connection: Signaling between the call control agent and the called party is
sent to notify and acknowledge the termination of the call and the release of the
resources.
• CDR: Call details records are closed to mark the completion of the call.
• Bandwidth release: The bandwidth that may have been allocated to the call
through CAC gets released in order to make resources available for future calls.
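The sketch below, referenced from the Bandwidth Check item above, models a simple CAC decision: admit a call only if enough bandwidth remains on the link, reserve it for the duration of the call, and release it when the call ends. The class name, link capacity, and per-call figures are illustrative assumptions, not a Cisco feature or API.

```python
# Minimal sketch of a Call Admission Control (CAC) bandwidth check.

class SimpleCac:
    def __init__(self, link_capacity_kbps: int):
        self.link_capacity_kbps = link_capacity_kbps
        self.reserved_kbps = 0

    def admit_call(self, call_kbps: int) -> bool:
        """Admit the call only if enough bandwidth remains, then reserve it."""
        if self.reserved_kbps + call_kbps > self.link_capacity_kbps:
            return False
        self.reserved_kbps += call_kbps
        return True

    def release_call(self, call_kbps: int) -> None:
        """Release the reservation when the call ends (bandwidth release step)."""
        self.reserved_kbps = max(0, self.reserved_kbps - call_kbps)

cac = SimpleCac(link_capacity_kbps=256)
print(cac.admit_call(80))   # True  - first call admitted
print(cac.admit_call(80))   # True  - second call admitted
print(cac.admit_call(80))   # True  - third call admitted (240 <= 256)
print(cac.admit_call(80))   # False - fourth call rejected (would exceed 256)
cac.release_call(80)        # a call hangs up, bandwidth is released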
Delayed Offer is recommended for SIP trunks because it enables the Internet telephony
service provider (ITSP) to provide its capabilities first. Cisco Unified Communications
Manager allows the administrator to select the offer method. Cisco gateways support both
methods, but originating gateways default to Early Offer.
Early Offer
In an Early Offer, the session initiator (calling device) sends its capabilities (including
supported codecs) in the SDP contained in the initial Invite. This method allows the called
device to choose its preferred codec for the session. Early Offer is the default method that
is used by a Cisco voice gateway acting as the originating gateway.
40.1 Digital Protocols
Explore Media Streams at the Application Layer
This topic will compare the different media streams found at the application layer, namely
RTP, Secure Real-Time Transport Protocol (SRTP), and RTCP.
After signaling has been completed, the media stream is formed directly between the two
endpoints. This can either be done with RTP in an unsecured manner, meaning that the
traffic is not encrypted, or the traffic can be secured using SRTP.
In either scenario, RTCP is used and set up as a separate stream in order to control the
media stream.
Because packets must be sent continuously and constantly due to the real-time nature of
the traffic, a different protocol is required than for data traffic. RTP provides end-to-end
delivery for real-time data such as voice and video. Unlike traditional data, which uses
acknowledgments for each packet to confirm delivery, real-time information cannot afford
the delay associated with acknowledgments.
RTP runs on an even port number randomly selected from the UDP port range 16,384 –
32,767. Even though UDP is used as the underlying protocol and does not use
acknowledgments, RTP adds sequence numbering in order to make sure packets are in the
correct order.
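The port-pairing rule described here (and in the call flow list earlier) can be shown in a few lines; the sketch below picks a random even RTP port in the stated range and derives the RTCP port from it. The function name is illustrative.

```python
import random

# Sketch of the port-pairing rule: RTP uses a random even port in the range
# 16,384 - 32,767, and RTCP uses the RTP port plus one (always odd).

def pick_rtp_rtcp_ports() -> tuple[int, int]:
    rtp_port = random.randrange(16384, 32767, 2)  # even ports only
    rtcp_port = rtp_port + 1                      # always odd
    return rtp_port, rtcp_port

rtp, rtcp = pick_rtp_rtcp_ports()
print(f"RTP (even): {rtp}, RTCP (odd): {rtcp}")
```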
RTP also includes the following fields (a minimal header-packing sketch follows this list):
• Payload type: Identifies the codec type and media format, which allows the codec
to change during the transmission.
• Sequence numbering: As already noted, this allows packets to be sorted into the
correct order, and it also allows the receiver to identify whether packet loss has
occurred.
• Time stamp: Allows the protocol to measure delay and jitter and to space the
packets correctly on the receiving end using a playout buffer. This ensures packets
are timed correctly and played back at the correct speed, and it also helps the
protocol remove jitter caused by variations in delay experienced during transmission.
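The sketch below packs the fixed 12-byte RTP header defined in RFC 3550, showing where the payload type, sequence number, and timestamp fields described above sit. The example values are illustrative (payload type 0 is G.711 mu-law, and a timestamp increment of 160 corresponds to 20 ms of 8 kHz audio); no header extensions, CSRCs, or padding are used.

```python
import struct

# Minimal sketch of the fixed 12-byte RTP header (RFC 3550).

def build_rtp_header(payload_type: int, sequence: int, timestamp: int, ssrc: int) -> bytes:
    version = 2
    first_byte = version << 6            # V=2, P=0, X=0, CC=0
    second_byte = payload_type & 0x7F    # M=0, 7-bit payload type
    return struct.pack("!BBHII", first_byte, second_byte, sequence, timestamp, ssrc)

header = build_rtp_header(payload_type=0,    # 0 = PCMU (G.711 mu-law)
                          sequence=1,
                          timestamp=160,     # 20 ms of 8 kHz audio
                          ssrc=0x12345678)
print(len(header), header.hex())             # 12 bytes
```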
RTCP is set up as a separate stream from the RTP or SRTP stream. It runs on the RTP port
number plus one, which means it will always run on an odd port number.
RTCP provides out-of-band statistics and control information and includes the following:
• Packet count: How many packets have been sent since the start of the call in both
directions.
• Packet delay: The delay between packets since the last RTCP packet.
• Octet count: Bandwidth usage during the call, represented in octets (8 bits).
• Packet loss: Total number of packets lost.
• Jitter: The variation in delay between packets.
SRTP allows for the authentication and encryption of voice and video traffic. Because
encrypting the RTP header would introduce routing issues for the calls, the header can be
validated and authenticated but not encrypted. The RTP payload, which contains the voice
and video traffic, can be both encrypted and authenticated, allowing for the secure
transmission and anti-replay protection of the conversations.