Retele de Calculatoare
A NIC, or LAN adapter, provides network communication capabilities to and from a PC. On desktop computer
systems, it is a printed circuit board that resides in a slot on the motherboard and provides an interface
connection to the network media. On laptop computer systems, it is commonly integrated into the laptop or
available on a small, credit card-sized PCMCIA card. PCMCIA stands for Personal Computer Memory
Card International Association. PCMCIA cards are also known as PC cards. The type of NIC must match the
media and protocol used on the local network.
The NIC uses an interrupt request (IRQ), an input/output (I/O) address, and upper memory space to work
with the operating system. An IRQ value is an assigned location where the computer can expect a particular
device to interrupt it when the device sends the computer signals about its operation. For example, when a
printer has finished printing, it sends an interrupt signal to the computer. The signal momentarily interrupts
the computer so that it can decide what processing to do next. Since multiple signals to the computer on the
same interrupt line might not be understood by the computer, a unique value must be specified for each
device and its path to the computer. Prior to Plug-and-Play (PnP) devices, users often had to set IRQ values
manually, or be aware of them, when adding a new device to a computer.
These considerations are important in the selection of a NIC:
Protocols: Ethernet, Token Ring, or FDDI
Types of media: Twisted-pair, coaxial, wireless, or fiber-optic
Type of system bus: PCI or ISA
Students can use the Interactive Media Activity to view a NIC.
The next page will explain how NICs and modems are installed.
1.1.4 NIC and modem installation
This page will explain how an adapter card, which can be a modem or a NIC, provides Internet connectivity.
Students will also learn how to install a modem or a NIC.
A modem, or modulator-demodulator, is a device that provides the computer with connectivity to a telephone
line. A modem converts data from a digital signal to an analog signal that is compatible with a standard
phone line. The modem at the receiving end demodulates the signal, which converts it back to digital.
Modems may be installed internally or attached externally to the computer using a phone line.
A NIC must be installed for each device on a network. A NIC provides a network interface for each host.
Different types of NICs are used for various device configurations. Notebook computers may have a built-in
interface or use a PCMCIA card. Figure shows wired and wireless PCMCIA network cards and a Universal Serial Bus (USB) Ethernet adapter. Desktop systems may use an internal network adapter, called a NIC, or an external network adapter that connects to the network through a USB port.
Situations that require NIC installation include the following:
Installation of a NIC on a PC that does not already have one
Replacement of a malfunctioning or damaged NIC
Upgrade from a 10-Mbps NIC to a 10/100/1000-Mbps NIC
Change to a different type of NIC, such as wireless
Installation of a secondary, or backup, NIC for network security reasons
To perform the installation of a NIC or modem, the following resources may be required:
Knowledge of how the adapter, jumpers, and plug-and-play software are configured
Availability of diagnostic tools
Ability to resolve hardware resource conflicts
The next page will describe the history of network connectivity.
1.1.5 Overview of high-speed and dial-up connectivity
This page will explain how modem connectivity has evolved into high-speed services.
In the early 1960s, modems were introduced to connect dumb terminals to a central computer. Many
companies used to rent computer time since it was too expensive to own an on-site system. The connection
rate was very slow. It was 300 bits per second (bps), which is about 30 characters per second.
As PCs became more affordable in the 1970s, bulletin board systems (BBSs) appeared. These BBSs
allowed users to connect and post or read messages on a discussion board. The 300-bps speed was
acceptable since it was faster than the speed at which most people could read or type. In the early 1980s,
use of bulletin boards increased exponentially and the 300 bps speed quickly became too slow for the
transfer of large files and graphics. In the 1990s, modems could operate at 9600 bps. By 1998, they reached
the current standard of 56,000 bps, or 56 kbps.
Soon the high-speed services used in the corporate environment such as Digital Subscriber Line (DSL) and
cable modem access moved to the consumer market. These services no longer required expensive
equipment or a second phone line. These are "always on" services that provide instant access and do not
require a connection to be established for each session. This provides more reliability and flexibility and has
simplified Internet connection sharing in small office and home networks.
The next page will introduce an important set of network protocols.
Network Math
Decimal representation of IP addresses and network masks
Summary
This page summarizes the topics discussed in this module.
A connection to a computer network can be broken down into the physical connection, the logical connection,
and the applications that interpret the data and display the information. Establishment and maintenance of
the physical connection requires knowledge of PC components and peripherals. Connectivity to the Internet
requires an adapter card, which may be a modem or a network interface card (NIC).
In the early 1960s modems were introduced to provide connectivity to a central computer. Today, access
methods have progressed to services that provide constant, high-speed access.
The logical connection uses standards called protocols. The Transmission Control Protocol/Internet Protocol
(TCP/IP) suite is the primary group of protocols used on the Internet. TCP/IP can be configured on a
workstation using operating system tools. The ping utility can be used to test connectivity.
A web browser is software that is installed on the PC to gain access to the Internet and local web pages.
Occasionally a browser may require plug-in applications. These applications work in conjunction with the
browser to launch the program required to view special or proprietary files.
Computers recognize and process data using the binary, or Base 2, numbering system. Often the binary
output of a computer is expressed in hexadecimal to make it easier to read. The ability to convert decimal
numbers to binary numbers is valuable when converting dotted decimal IP addresses to machine-readable
binary format. Conversion of hexadecimal numbers to binary, and binary numbers to hexadecimal, is a
common task when dealing with the configuration register in Cisco routers.
Boolean logic is a binary logic that allows two numbers to be compared and a choice generated based on the
two numbers. Two networking operations that use Boolean logic are subnetting and wildcard masking.
The 32-bit binary addresses used on the Internet are referred to as Internet Protocol (IP) addresses.
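For readers who want to experiment with these conversions, the short Python sketch below (not part of the original curriculum) illustrates dotted-decimal to binary conversion, binary to hexadecimal conversion, and a Boolean AND between an address and a mask. The example address 192.168.1.10 and mask 255.255.255.0 are arbitrary illustrations.

```python
# Minimal sketch: dotted-decimal <-> binary conversion and a Boolean AND,
# as used when examining IP addresses and subnet masks.

def dotted_to_binary(address):
    """Convert a dotted-decimal IPv4 address to a 32-bit binary string."""
    return ".".join(format(int(octet), "08b") for octet in address.split("."))

def network_address(address, mask):
    """Apply a Boolean AND between an address and its mask, octet by octet."""
    octets = zip(address.split("."), mask.split("."))
    return ".".join(str(int(a) & int(m)) for a, m in octets)

# Example values chosen only for illustration.
ip = "192.168.1.10"
mask = "255.255.255.0"

print(dotted_to_binary(ip))       # 11000000.10101000.00000001.00001010
print(hex(int("11000000", 2)))    # 0xc0 -- binary-to-hexadecimal conversion
print(network_address(ip, mask))  # 192.168.1.0
```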
Overview
Bandwidth decisions are among the most important considerations when a network is designed. This module
discusses the importance of bandwidth and explains how it is measured.
Layered models are used to describe network functions. This module covers the two most important models,
which are the Open System Interconnection (OSI) model and the Transmission Control Protocol/Internet
Protocol (TCP/IP) model. The module also presents the differences and similarities between the two models.
This module also includes a brief history of networking. Students will learn about network devices and
different types of physical and logical layouts. This module also defines and compares LANs, MANs, WANs,
SANs, and VPNs.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Explain the importance of bandwidth in networking
Use an analogy to explain bandwidth
Identify bps, kbps, Mbps, and Gbps as units of bandwidth
Explain the difference between bandwidth and throughput
Calculate data transfer rates
Networking Terminology
In the mid-1980s PC users began to use modems to share files with other computers. This was referred to
as point-to-point, or dial-up communication. This concept was expanded by the use of computers that were
the central point of communication in a dial-up connection. These computers were called bulletin boards.
Users would connect to the bulletin boards, leave and pick up messages, as well as upload and download
files. The drawback to this type of system was that there was very little direct communication and then only
with those who knew about the bulletin board. Another limitation was that the bulletin board computer
required one modem per connection. If five people connected simultaneously it would require five modems
connected to five separate phone lines. As the number of people who wanted to use the system grew, the
system was not able to handle the demand. For example, imagine if 500 people wanted to connect at the
same time.
From the 1960s to the 1990s the U.S. Department of Defense (DoD) developed large, reliable, wide-area
networks (WANs) for military and scientific reasons. This technology was different from the point-to-point
communication used in bulletin boards. It allowed multiple computers to be connected together through many
different paths. The network itself would determine how to move data from one computer to another. One
connection could be used to reach many computers at the same time. The WAN developed by the DoD
eventually became the Internet.
2.1.3 Networking devices
This page will introduce some important networking devices.
Equipment that connects directly to a network segment is referred to as a device. These devices are broken
up into two classifications. The first classification is end-user devices. End-user devices include computers,
printers, scanners, and other devices that provide services directly to the user. The second classification is
network devices. Network devices include all the devices that connect the end-user devices together to allow
them to communicate.
End-user devices that provide users with a connection to the network are also referred to as hosts. These
devices allow users to share, create, and obtain information. The host devices can exist without a network,
but without the network the host capabilities are greatly reduced. NICs are used to physically connect host
devices to the network media. They use this connection to send e-mails, print reports, scan pictures, or
access databases.
A NIC is a printed circuit board that fits into the expansion slot of a bus on a computer motherboard. It can
also be a peripheral device. NICs are sometimes called network adapters. Laptop or notebook computer
NICs are usually the size of a PCMCIA card. Each NIC is identified by a unique code called a Media
Access Control (MAC) address. This address is used to control data communication for the host on the
network. More about the MAC address will be covered later. As the name implies, the NIC controls host
access to the network.
There are no standardized symbols for end-user devices in the networking industry. They appear similar to
the real devices to allow for quick recognition.
Network devices are used to extend cable connections, concentrate connections, convert data formats, and manage data transfers. Examples of devices that
perform these functions are repeaters, hubs, bridges, switches, and routers. All of the network devices
mentioned here are covered in depth later in the course. For now, a brief overview of networking devices will
be provided.
A repeater is a network device used to regenerate a signal. Repeaters regenerate analog or digital signals
that are distorted by transmission loss due to attenuation. A repeater does not make intelligent decisions about forwarding packets the way a router or bridge does.
Hubs concentrate connections. In other words, they take a group of hosts and allow the network to see them
as a single unit. This is done passively, without any other effect on the data transmission. Active hubs
concentrate hosts and also regenerate signals.
Bridges convert network data formats and perform basic data transmission management. Bridges provide
connections between LANs. They also check data to determine if it should cross the bridge. This makes
each part of the network more efficient.
Workgroup switches add more intelligence to data transfer management. They can determine if data
should remain on a LAN and transfer data only to the connection that needs it. Another difference between a
bridge and switch is that a switch does not convert data transmission formats.
Routers have all the capabilities listed above. Routers can regenerate signals, concentrate multiple
connections, convert data transmission formats, and manage data transfers. They can also connect to a
WAN, which allows them to connect LANs that are separated by great distances. None of the other devices
can provide this type of connection.
The Interactive Media Activities will allow students to become more familiar with network devices.
2.1.4 Network topology
This page will introduce students to the most common physical and logical network topologies.
Network topology defines the structure of the network. One part of the topology definition is the physical
topology, which is the actual layout of the wire or media. The other part is the logical topology, which defines how the hosts access the media to send data. The physical topologies that are commonly used are as
follows:
A bus topology uses a single backbone cable that is terminated at both ends. All the hosts connect
directly to this backbone.
A ring topology connects one host to the next and the last host to the first. This creates a physical
ring of cable.
A star topology connects all cables to a central point.
An extended star topology links individual stars together by connecting the hubs or switches.
A hierarchical topology is similar to an extended star. However, instead of linking the hubs or
switches together, the system is linked to a computer that controls the traffic on the topology.
A mesh topology is implemented to provide as much protection as possible from interruption of
service. For example, a nuclear power plant might use a mesh topology in the networked control
systems. As seen in the graphic, each host has its own connections to all other hosts. Although the
Internet has multiple paths to any one location, it does not adopt the full mesh topology.
The logical topology of a network determines how the hosts communicate across the medium. The two most
common types of logical topologies are broadcast and token passing.
The use of a broadcast topology indicates that each host sends its data to all other hosts on the network
medium. There is no order that the stations must follow to use the network. It is first come, first served. Ethernet works this way, as will be explained later in the course.
The second logical topology is token passing. In this type of topology, an electronic token is passed
sequentially to each host. When a host receives the token, that host can send data on the network. If the
host has no data to send, it passes the token to the next host and the process repeats itself. Two examples
of networks that use token passing are Token Ring and Fiber Distributed Data Interface (FDDI). A variation of
Token Ring and FDDI is Arcnet. Arcnet is token passing on a bus topology.
The diagram in Figure shows many different topologies connected by network devices. It shows a network
of moderate complexity that is typical of a school or a small business. The diagram includes many symbols
and networking concepts that will take time to learn.
2.2
Bandwidth
Bandwidth can be compared to the width of a pipe in a water system. The water is like the data, and the pipe width is like the bandwidth. Many networking experts say that they need to put in bigger pipes when they wish to add more information-carrying capacity.
Bandwidth is like the number of lanes on a highway. A network of roads serves every city or town. Large
highways with many traffic lanes are joined by smaller roads with fewer traffic lanes. These roads lead to
narrower roads that lead to the driveways of homes and businesses. When very few automobiles use the
highway system, each vehicle is able to move freely. When more traffic is added, each vehicle moves more
slowly. This is especially true on roads with fewer lanes. As more traffic enters the highway system, even
multi-lane highways become congested and slow. A data network is much like the highway system. The data
packets are comparable to automobiles, and the bandwidth is comparable to the number of lanes on the
highway. When a data network is viewed as a system of highways, it is easy to see how low bandwidth
connections can cause traffic to become congested all over the network.
2.2.3 Measurement
This page will explain how bandwidth is measured.
In digital systems, the basic unit of bandwidth is bits per second (bps). Bandwidth is the measure of how
many bits of information can flow from one place to another in a given amount of time. Although bandwidth
can be described in bps, a larger unit of measurement is generally used. Network bandwidth is typically
described as thousands of bits per second (kbps), millions of bits per second (Mbps), billions of bits per
second (Gbps), and trillions of bits per second (Tbps). Although the terms bandwidth and speed are often
used interchangeably, they are not exactly the same thing. One may say, for example, that a T3 connection
at 45 Mbps operates at a higher speed than a T1 connection at 1.544 Mbps. However, if only a small amount
of their data-carrying capacity is being used, each of these connection types will carry data at roughly the
same speed. For example, a small amount of water will flow at the same rate through a small pipe as
through a large pipe. Therefore, it is usually more accurate to say that a T3 connection has greater
bandwidth than a T1 connection. This is because the T3 connection is able to carry more information in the
same period of time, not because it has a higher speed.
2.2.4 Limitations
This page describes the limitations of bandwidth.
Bandwidth varies depending upon the type of media as well as the LAN and WAN technologies used. The
physics of the media account for some of the difference. Signals travel through twisted-pair copper wire,
coaxial cable, optical fiber, and air. The physical differences in the ways signals travel result in fundamental
limitations on the information-carrying capacity of a given medium. However, the actual bandwidth of a
network is determined by a combination of the physical media and the technologies chosen for signaling and
detecting network signals.
For example, current information about the physics of unshielded twisted-pair (UTP) copper cable puts the
theoretical bandwidth limit at over 1 Gbps. However, in actual practice, the bandwidth is determined by the
use of 10BASE-T, 100BASE-TX, or 1000BASE-TX Ethernet. The actual bandwidth is determined by the
signaling methods, NICs, and other network equipment that is chosen. Therefore, the bandwidth is not
determined solely by the limitations of the medium.
Figure shows some common networking media types along with their distance and bandwidth limitations.
Figure summarizes common WAN services and the bandwidth associated with each service.
2.2.5 Throughput
This page explains the concept of throughput.
Bandwidth is the measure of the amount of information that can move through the network in a given period
of time. Therefore, the amount of available bandwidth is a critical part of the specification of the network. A
typical LAN might be built to provide 100 Mbps to every desktop workstation, but this does not mean that
each user is actually able to move 100 megabits of data through the network for every second of use. This
would be true only under the most ideal circumstances.
Throughput refers to actual measured bandwidth, at a specific time of day, using specific Internet routes, and
while a specific set of data is transmitted on the network. Unfortunately, for many reasons, throughput is
often far less than the maximum possible digital bandwidth of the medium that is being used. The following
are some of the factors that determine throughput:
Internetworking devices
Type of data being transferred
Network topology
Number of users on the network
User computer
Server computer
Power conditions
The theoretical bandwidth of a network is an important consideration in network design, because the network
bandwidth will never be greater than the limits imposed by the chosen media and networking technologies.
However, it is just as important for a network designer and administrator to consider the factors that may
affect actual throughput. By measuring throughput on a regular basis, a network administrator will be aware
of changes in network performance and changes in the needs of network users. The network can then be
adjusted accordingly.
2.2.6 Data transfer calculation
This page provides the formula for data transfer calculation.
Network designers and administrators are often called upon to make decisions regarding bandwidth. One
decision might be whether to increase the size of the WAN connection to accommodate a new database.
Another decision might be whether the current LAN backbone is of sufficient bandwidth for a streaming-video
training program. The answers to problems like these are not always easy to find, but one place to start is
with a simple data transfer calculation.
Using the formula transfer time = size of file / bandwidth (T=S/BW) allows a network administrator to
estimate several of the important components of network performance. If the typical file size for a given
application is known, dividing the file size by the network bandwidth yields an estimate of the fastest time that
the file can be transferred.
Two important points should be considered when doing this calculation.
The result is an estimate only, because the file size does not include any overhead added by
encapsulation.
The result is likely to be a best-case transfer time, because available bandwidth is almost never at
the theoretical maximum for the network type. A more accurate estimate can be attained if
throughput is substituted for bandwidth in the equation.
Although the data transfer calculation is quite simple, one must be careful to use the same units throughout
the equation. In other words, if the bandwidth is measured in megabits per second (Mbps), the file size must
be in megabits (Mb), not megabytes (MB). Since file sizes are typically given in megabytes, it may be
necessary to multiply the number of megabytes by eight to convert to megabits.
Try to answer the following question, using the formula T=S/BW. Be sure to convert units of measurement as
necessary.
Would it take less time to send the contents of a floppy disk full of data (1.44 MB) over an ISDN line, or to
send the contents of a ten GB hard drive full of data over an OC-48 line?
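One way to check an answer is to apply T=S/BW directly in a short script. The Python sketch below assumes an ISDN BRI line at 128 kbps and an OC-48 line at roughly 2.488 Gbps; these are common figures for those services but are not stated in the text above.

```python
# Estimate best-case transfer time with T = S / BW.
# File sizes are converted from megabytes to bits (multiply by 8) so the
# units match the bandwidth, which is expressed in bits per second.

def transfer_time(size_megabytes, bandwidth_bps):
    size_bits = size_megabytes * 8 * 1_000_000   # MB -> bits (decimal megabytes)
    return size_bits / bandwidth_bps             # seconds

ISDN_BPS = 128_000          # assumed ISDN BRI rate (2 x 64 kbps channels)
OC48_BPS = 2_488_000_000    # assumed OC-48 rate (~2.488 Gbps)

floppy_mb = 1.44            # 1.44 MB floppy disk
drive_mb = 10 * 1000        # 10 GB drive expressed in MB

print(f"Floppy over ISDN : {transfer_time(floppy_mb, ISDN_BPS):.1f} s")   # ~90 s
print(f"10 GB over OC-48 : {transfer_time(drive_mb, OC48_BPS):.1f} s")    # ~32 s
```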
Analog bandwidth is measured in hertz. Kilohertz (kHz), megahertz (MHz), and gigahertz (GHz) are the units used to describe the frequencies of 802.11a and 802.11b wireless networks, which operate at 5 GHz and 2.4 GHz, respectively.
While analog signals are capable of carrying a variety of information, they have some significant
disadvantages in comparison to digital transmissions. The analog video signal that requires a wide frequency
range for transmission cannot be squeezed into a smaller band. Therefore, if the necessary analog
bandwidth is not available, the signal cannot be sent.
In digital signaling all information is sent as bits, regardless of the kind of information it is. Voice, video, and
data all become streams of bits when they are prepared for transmission over digital media. This type of
transmission gives digital bandwidth an important advantage over analog bandwidth. Unlimited amounts of
information can be sent over the smallest or lowest bandwidth digital channel. Regardless of how long it
takes for the digital information to arrive at its destination and be reassembled, it can be viewed, listened to,
read, or processed in its original form.
It is important to understand the differences and similarities between digital and analog bandwidth. Both
types of bandwidth are regularly encountered in the field of information technology. However, because this
course is concerned primarily with digital networking, the term bandwidth will refer to digital bandwidth.
This page concludes this lesson. The next lesson will discuss networking models. The first page will discuss
the concept of layers.
2.3
Networking Models
The early development of networks was disorganized in many ways. The early 1980s saw tremendous
increases in the number and size of networks. As companies realized the advantages of using networking
technology, networks were added or expanded almost as rapidly as new network technologies were
introduced.
By the mid-1980s, these companies began to experience problems from the rapid expansion. Just as people
who do not speak the same language have difficulty communicating with each other, it was difficult for
networks that used different specifications and implementations to exchange information. The same problem
occurred with the companies that developed private or proprietary networking technologies. Proprietary
means that one or a small group of companies controls all usage of the technology. Networking technologies
strictly following proprietary rules could not communicate with technologies that followed different proprietary
rules.
To address the problem of network incompatibility, the International Organization for Standardization (ISO)
researched networking models like Digital Equipment Corporation net (DECnet), Systems Network
Architecture (SNA), and TCP/IP in order to find a generally applicable set of rules for all networks. Using this
research, the ISO created a network model that helps vendors create networks that are compatible with
other networks.
The Open System Interconnection (OSI) reference model released in 1984 was the descriptive network
model that the ISO created. It provided vendors with a set of standards that ensured greater compatibility
and interoperability among various network technologies produced by companies around the world.
The OSI reference model has become the primary model for network communications. Although there are
other models in existence, most network vendors relate their products to the OSI reference model. This is
especially true when they want to educate users on the use of their products. It is considered the best tool
available for teaching people about sending and receiving data on a network.
In the Interactive Media Activity, students will identify the benefits of the OSI model.
Summary
This page summarizes the topics discussed in this module.
Computer networks developed in response to business and government computing needs. Applying
standards to network functions provided a set of guidelines for creating network hardware and software and
provided compatibility among equipment from different companies. Information could move within a company
and from one business to another.
Network devices, such as repeaters, hubs, bridges, switches and routers connect host devices together to
allow them to communicate. Protocols provide a set of rules for communication.
The physical topology of a network is the actual layout of the wire or media. The logical topology defines how
host devices access the media. The physical topologies that are commonly used are bus, ring, star,
extended star, hierarchical, and mesh. The two most common types of logical topologies are broadcast and
token passing.
A local-area network (LAN) is designed to operate within a limited geographical area. LANs allow multiaccess to high-bandwidth media, control the network privately under local administration, provide full-time
connectivity to local services and connect physically adjacent devices.
A wide-area network (WAN) is designed to operate over a large geographical area. WANs allow access over
serial interfaces operating at lower speeds, provide full-time and part-time connectivity and connect devices
separated over wide areas.
A metropolitan-area network (MAN) is a network that spans a metropolitan area such as a city or suburban
area. A MAN usually consists of two or more LANs in a common geographic area.
A storage-area network (SAN) is a dedicated, high-performance network used to move data between servers
and storage resources. A SAN provides enhanced system performance, is scalable, and has disaster
tolerance built in.
A virtual private network (VPN) is a private network that is constructed within a public network infrastructure.
Three main types of VPNs are access, Intranet, and Extranet VPNs. Access VPNs provide mobile workers or
small office/home office (SOHO) users with remote access to an Intranet or Extranet. Intranets are only
available to users who have access privileges to the internal network of an organization. Extranets are
designed to deliver applications and services that are Intranet based to external users or enterprises.
The amount of information that can flow through a network connection in a given period of time is referred to
as bandwidth. Network bandwidth is typically measured in thousands of bits per second (kbps), millions of
bits per second (Mbps), billions of bits per second (Gbps) and trillions of bits per second (Tbps). The
theoretical bandwidth of a network is an important consideration in network design. If the theoretical
bandwidth of a network connection is known, the formula T=S/BW (transfer time = size of file / bandwidth)
can be used to calculate potential data transfer time. However, the actual bandwidth, referred to as
throughput, is affected by multiple factors such as network devices and topology being used, type of data,
number of users, hardware and power conditions.
Data can be encoded on analog or digital signals. Analog bandwidth is a measure of how much of the
electromagnetic spectrum is occupied by each signal. For instance an analog video signal that requires a
wide frequency range for transmission cannot be squeezed into a smaller band. If the necessary analog
bandwidth is not available the signal cannot be sent. In digital signaling all information is sent as bits,
regardless of the kind of information it is. Unlimited amounts of information can be sent over the smallest
digital bandwidth channel.
The concept of layers is used to describe communication from one computer to another. Dividing the network
into layers provides the following advantages:
Reduces complexity
Standardizes interfaces
Facilitates modular engineering
Ensures interoperability
Accelerates evolution
Simplifies teaching and learning
Two such layered models are the Open System Interconnection (OSI) and the TCP/IP networking models. In
the OSI reference model, there are seven numbered layers, each of which illustrates a particular network
function: application, presentation, session, transport, network, data link, and physical. The TCP/IP model
has the following four layers: application, transport, Internet, and network access.
Although some of the layers in the TCP/IP model have the same name as layers in the OSI model, the layers
of the two models do not correspond exactly. The TCP/IP application layer is equivalent to the OSI
application, presentation, and session layers. The TCP/IP model combines the OSI data link and physical
layers into the network access layer.
No matter which model is applied, network layers perform the following five conversion steps (illustrated in the sketch after the list) in order to
encapsulate and transmit data:
1. Images and text are converted to data.
2. The data is packaged into segments.
3. The data segment is encapsulated in a packet with the source and destination addresses.
4. The packet is encapsulated in a frame with the MAC address of the next directly connected device.
5. The frame is converted to a pattern of ones and zeros (bits) for transmission on the media.
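The following Python sketch is an illustration of those five steps only, not a real protocol implementation; the header strings, field layout, and the encapsulate helper are invented for the example.

```python
# Simplified illustration of the five encapsulation steps: data -> segment
# -> packet -> frame -> bits. Header fields are placeholders, not real
# TCP/IP or Ethernet formats.

def encapsulate(user_data, src_ip, dst_ip, next_hop_mac):
    data = user_data.encode()                               # 1. text/images become data
    segment = b"SEG|" + data                                # 2. data packaged into a segment
    packet = f"PKT|{src_ip}>{dst_ip}|".encode() + segment   # 3. packet adds src/dst addresses
    frame = f"FRM|{next_hop_mac}|".encode() + packet        # 4. frame adds next-hop MAC address
    bits = "".join(format(byte, "08b") for byte in frame)   # 5. frame becomes ones and zeros
    return bits

bits = encapsulate("Hello", "10.0.0.1", "10.0.0.2", "00:0C:29:AA:BB:CC")
print(bits[:32], "...")   # first 32 bits placed on the media
```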
Overview
Copper cable is used in almost every LAN. Many different types of copper cable are available. Each type has
advantages and disadvantages. Proper selection of cabling is key to efficient network operation. Since
copper uses electrical currents to transmit information, it is important to understand some basics of
electricity.
Optical fiber is the most frequently used medium for the longer, high bandwidth, point-to-point transmissions
required on LAN backbones and on WANs. Optical media uses light to transmit data through thin glass or
plastic fiber. Electrical signals cause a fiber-optic transmitter to generate the light signals sent down the fiber.
The receiving host receives the light signals and converts them to electrical signals at the far end of the fiber.
However, there is no electricity in the fiber-optic cable. In fact, the glass used in fiber-optic cable is a very
good electrical insulator.
Physical connectivity allows users to share printers, servers, and software, which can increase productivity.
Traditional networked systems require the workstations to remain stationary and permit moves only within
the limits of the media and office area.
The introduction of wireless technology removes these restraints and brings true portability to computer
networks. Currently, wireless technology does not provide the high-speed transfers, security, or uptime
reliability of cabled networks. However, the flexibility of wireless has justified the trade-off.
Administrators often consider wireless when they install or upgrade a network. A simple wireless network
could be working just a few minutes after the workstations are turned on. Connectivity to the Internet is
provided through a wired connection, a router, a cable or DSL modem, and a wireless access point that acts as a
hub for the wireless nodes. In a residential or small office environment these devices may be combined into
a single unit.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Discuss the electrical properties of matter
Define voltage, resistance, impedance, current, and circuits
Describe the specifications and performances of different types of cable
Describe coaxial cable and its advantages and disadvantages compared to other types of cable
Describe STP cable and its uses
Describe UTP cable and its uses
Discuss the characteristics of straight-through, crossover, and rollover cables and where each is
used
Explain the basics of fiber-optic cable
Describe how fiber-optic cables can carry light signals over long distances
Describe multimode and single-mode fiber
Describe how fiber is installed
Describe the type of connectors and equipment used with fiber-optic cable
Explain how fiber is tested to ensure that it will function properly
Discuss safety issues related to fiber optics
3.1
Copper Media
Semiconductors are materials that allow the amount of electricity they conduct to be precisely controlled.
These materials are listed together in one column of the periodic chart. Examples include carbon (C), germanium (Ge), and the compound gallium arsenide (GaAs). Silicon (Si) is the most important semiconductor
because it makes the best microscopic-sized electronic circuits.
Silicon is very common and can be found in sand, glass, and many types of rocks. The region around San
Jose, California is known as Silicon Valley because the computer industry, which depends on silicon
microchips, started in that area.
The Lab Activity demonstrates how to measure resistance and continuity.
The Interactive Media Activity identifies the resistance and impedance characteristics of different types of
material.
3.1.4 Current
This page provides a detailed explanation of current.
Electrical current is the flow of charges created when electrons move. In electrical circuits, the current is
caused by a flow of free electrons. When voltage is applied and there is a path for the current, electrons
move from the negative terminal along the path to the positive terminal. The negative terminal repels the
electrons and the positive terminal attracts the electrons. The letter I represents current. The unit of
measurement for current is the ampere (A). An ampere is defined as the amount of charge, one coulomb, that passes a point along the path in one second.
Current can be thought of as the amount or volume of electron traffic that flows. Voltage can be thought of as
the speed of the electron traffic. Voltage multiplied by current equals wattage (P = V * I). Electrical
devices such as light bulbs, motors, and computer power supplies are rated in terms of watts. Wattage
indicates how much power a device consumes or produces.
It is the current or amperage in an electrical circuit that really does the work. For example, static electricity
has such a high voltage that it can jump a gap of an inch or more. However, it has very low amperage and as
a result can create a shock but not permanent injury. The starter motor in an automobile operates at a
relatively low 12 volts but requires very high amperage to generate enough energy to turn over the engine.
Lightning has very high voltage and high amperage and can cause severe damage or injury.
3.1.5 Circuits
This page explains circuits.
Current flows in closed loops called circuits. These circuits must be made of conductive materials and must
have sources of voltage. Voltage causes current to flow. Resistance and impedance oppose it. Current
consists of electrons that flow away from negative terminals and toward positive terminals. These facts allow
people to control the flow of current.
Electricity will naturally flow to the earth if there is a path. Current also flows along the path of least
resistance. If a human body provides the path of least resistance, the current will flow through it. When an
electric appliance has a plug with three prongs, one of the prongs acts as the ground, or 0 volts. The ground
provides a conductive path for the electrons to flow to the earth. The resistance of the body would be greater
than the resistance of the ground.
Ground typically means the 0-volts level in reference to electrical measurements. Voltage is created by the
separation of charges, which means that voltage measurements must be made between two points.
A water analogy can help explain the concept of electricity. The higher the water and the greater the
pressure, the more the water will flow. The water current also depends on the size of the space it must flow
through. Similarly, the higher the voltage and the greater the electrical pressure, the more current will be
produced. The electric current then encounters resistance that, like the water tap, reduces the flow. If the
electric current is in an AC circuit, then the amount of current will depend on how much impedance is
present. If the electric current is in a DC circuit, then the amount of current will depend on how much
resistance is present. The pump is like a battery. It provides pressure to keep the flow moving.
The relationship among voltage, resistance, and current is voltage (V) equals current (I) multiplied by
resistance (R). In other words, V=I*R. This is Ohm's law, named after the scientist who explored these
issues.
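As a quick numeric illustration of Ohm's law and the power relationship mentioned in the previous section, the sketch below uses an assumed 12-volt source and a 0.5-ohm load; the values are illustrative and are not taken from the text.

```python
# Ohm's law: V = I * R, so I = V / R. Power: P = V * I (watts).
# The 12 V / 0.5 ohm values are illustrative only.

voltage = 12.0      # volts (car battery)
resistance = 0.5    # ohms (assumed starter-motor circuit resistance)

current = voltage / resistance   # amperes
power = voltage * current        # watts

print(f"I = {current} A, P = {power} W")   # I = 24.0 A, P = 288.0 W
```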
Two ways in which current flows are alternating current (AC) and direct current (DC). AC voltages change
their polarity, or direction, over time. AC flows in one direction, then reverses its direction and flows in the
other direction, and then repeats the process. AC voltage is positive at one terminal, and negative at the
other. Then the AC voltage reverses its polarity, so that the positive terminal becomes negative, and the
negative terminal becomes positive. This process repeats itself continuously.
DC always flows in the same direction and DC voltages always have the same polarity. One terminal is
always positive, and the other is always negative. They do not change or reverse.
An oscilloscope is an electronic device used to measure electrical signals relative to time. An oscilloscope
graphs the electrical waves, pulses, and patterns. An oscilloscope has an x-axis that represents time, and a
y-axis that represents voltage. There are usually two y-axis voltage inputs so that two waves can be
observed and measured at the same time.
Power lines carry electricity in the form of AC because it can be delivered efficiently over large distances. DC
can be found in flashlight batteries, car batteries, and as power for the microchips on the motherboard of a
computer, where it only needs to go a short distance.
Electrons flow in closed circuits, or complete loops. Figure shows a simple circuit. The chemical processes
in the battery cause charges to build up. This provides a voltage, or electrical pressure, that enables
electrons to flow through various devices. The lines represent a conductor, which is usually copper wire.
Think of a switch as two ends of a single wire that can be opened or broken to prevent the flow of electrons.
When the two ends are closed, fixed, or shorted, electrons are allowed to flow. Finally, a light bulb provides
resistance to the flow of electrons, which causes the electrons to release energy in the form of light. The
circuits in networks use a much more complex version of this simple circuit.
For AC and DC electrical systems, the flow of electrons is always from a negatively charged source to a
positively charged source. However, for the controlled flow of electrons to occur, a complete circuit is
required. Figure shows part of the electrical circuit that brings power to a home or office.
The Lab Activity explores the basic properties of series circuits.
3.1.6 Cable specifications
This page discusses cable specifications and expectations.
Cables have different specifications and expectations. Important considerations related to performance are
as follows:
What speeds for data transmission can be achieved? The speed of bit transmission through the
cable is extremely important. The speed of transmission is affected by the kind of conduit used.
Will the transmissions be digital or analog? Digital or baseband transmission and analog or
broadband transmission require different types of cable.
How far can a signal travel before attenuation becomes a concern? If the signal is degraded,
network devices might not be able to receive and interpret the signal. The distance the signal travels
through the cable affects attenuation of the signal. Degradation is directly related to the distance the
signal travels and the type of cable used.
The following Ethernet specifications relate to cable type:
10BASE-T
10BASE5
10BASE2
10BASE-T refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or
digitally interpreted. The T stands for twisted pair.
10BASE5 refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or digitally
interpreted. The 5 indicates that a signal can travel for approximately 500 meters before attenuation could
disrupt the ability of the receiver to interpret the signal. 10BASE5 is often referred to as Thicknet. Thicknet is
a type of network and 10BASE5 is the cable used in that network.
10BASE2 refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or digitally
interpreted. The 2, in 10BASE2, refers to the approximate maximum segment length being 200 meters
before attenuation could disrupt the ability of the receiver to appropriately interpret the signal being received.
The maximum segment length is actually 185 meters. 10BASE2 is often referred to as Thinnet. Thinnet is a
type of network and 10BASE2 is the cable used in that network.
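The naming pattern described above (speed in Mbps, BASE for baseband signaling, then a suffix for the media type or approximate segment length) can be summarized in a small lookup table. The sketch below covers only the three specifications just listed.

```python
# Ethernet cable-type names follow a pattern: speed in Mbps, "BASE" for
# baseband signaling, then a suffix for the media type or segment length.
ETHERNET_SPECS = {
    "10BASE-T": {"speed_mbps": 10, "signaling": "baseband", "suffix": "T = twisted pair"},
    "10BASE5":  {"speed_mbps": 10, "signaling": "baseband", "suffix": "5 = ~500 m segments (Thicknet)"},
    "10BASE2":  {"speed_mbps": 10, "signaling": "baseband", "suffix": "2 = ~200 m (185 m actual, Thinnet)"},
}

for name, spec in ETHERNET_SPECS.items():
    print(f"{name}: {spec['speed_mbps']} Mbps, {spec['signaling']}, {spec['suffix']}")
```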
3.1.7 Coaxial Cable
This page provides detailed information about coaxial cable.
Coaxial cable consists of a copper conductor surrounded by a layer of flexible insulation. The center
conductor can also be made of tin-plated aluminium, which allows the cable to be manufactured
inexpensively. Over this insulating material is a woven copper braid or metallic foil that acts as the second
wire in the circuit and as a shield for the inner conductor. This second layer, or shield, also reduces
amount of outside electromagnetic interference. Covering this shield is the cable jacket.
For LANs, coaxial cable offers several advantages. It can be run longer distances than shielded twisted-pair (STP), unshielded twisted-pair (UTP), and screened twisted-pair (ScTP) cable without the need for repeaters.
Repeaters regenerate the signals in a network so that they can cover greater distances. Coaxial cable is less
expensive than fiber-optic cable and the technology is well known. It has been used for many years for many
types of data communication such as cable television.
It is important to consider the size of a cable. As the thickness increases, it becomes more difficult to work
with a cable. Remember that cable must be pulled through conduits and troughs that are limited in size.
Coaxial cable comes in a variety of sizes. The largest diameter was specified for use as Ethernet backbone
cable since it has greater transmission lengths and noise rejection characteristics. This type of coaxial cable
is frequently referred to as Thicknet. This type of cable can be too rigid to install easily in some situations.
Generally, the more difficult the network media is to install, the more expensive it is to install. Coaxial cable is
more expensive to install than twisted-pair cable. Thicknet cable is rarely used anymore aside from special
purpose installations.
In the past, Thinnet coaxial cable with an outside diameter of only 0.35 cm was used in Ethernet networks. It
was especially useful for cable installations that required the cable to make many twists and turns. Since
Thinnet was easier to install, it was also cheaper to install. This led some people to refer to it as Cheapernet.
The outer copper or metallic braid in coaxial cable comprises half the electric circuit. A solid electrical
connection at both ends is important to properly ground the cable. Poor shield connection is one of the
biggest sources of connection problems in the installation of coaxial cable. Connection problems result in
electrical noise that interferes with signal transmission. For this reason, Thinnet is no longer commonly used or supported by the latest Ethernet standards (100 Mbps and higher).
The following page describes STP cable.
3.1.8 STP Cable
This page provides detailed information about STP cable.
STP cable combines the techniques of cancellation, shielding, and wire twisting. Each pair of wires is
wrapped in metallic foil. The two pairs of wires are wrapped in an overall metallic braid or foil. It is usually
150-ohm cable. As specified for use in Token Ring network installations, STP reduces electrical noise within
the cable such as pair to pair coupling and crosstalk. STP also reduces electronic noise from outside the
cable such as electromagnetic interference (EMI) and radio frequency interference (RFI). STP cable shares
many of the advantages and disadvantages of UTP cable. STP provides more protection from all types of
external interference. However, STP is more expensive and difficult to install than UTP.
A new hybrid of UTP is Screened UTP (ScTP), which is also known as foil screened twisted pair (FTP).
ScTP is essentially UTP wrapped in a metallic foil shield, or screen. ScTP, like UTP, is also 100-ohm cable.
Many cable installers and manufacturers may use the term STP to describe ScTP cabling. It is important to
understand that most references made to STP today actually refer to four-pair shielded cabling. It is highly
unlikely that true STP cable will be used during a cable installation job.
The metallic shielding materials in STP and ScTP need to be grounded at both ends. If improperly grounded
or if there are any discontinuities in the entire length of the shielding material, STP and ScTP can become
susceptible to major noise problems. They are susceptible because they allow the shield to act like an
antenna that picks up unwanted signals. However, this effect works both ways. Not only does the shield
prevent incoming electromagnetic waves from causing noise on data wires, but it also minimizes the
outgoing radiated electromagnetic waves. These waves could cause noise in other devices. STP and ScTP
cable cannot be run as far as other networking media, such as coaxial cable or optical fiber, without the
signal being repeated. More insulation and shielding combine to considerably increase the size, weight, and
cost of the cable. The shielding materials make terminations more difficult and susceptible to poor
workmanship. However, STP and ScTP still have a role, especially in Europe or installations where there is
extensive EMI and RFI near the cabling.
The following page discusses UTP cable.
3.1.9 UTP Cable
This page provides detailed information about UTP cable.
UTP is a four-pair wire medium used in a variety of networks. Each of the eight copper wires in the UTP
cable is covered by insulating material. In addition, each pair of wires is twisted around each other. This type
of cable relies on the cancellation effect produced by the twisted wire pairs to limit signal degradation caused
by EMI and RFI. To further reduce crosstalk between the pairs in UTP cable, the number of twists in the wire
pairs varies. Like STP cable, UTP cable must follow precise specifications as to how many twists or braids
are permitted per foot of cable.
TIA/EIA-568-B.2 contains specifications that govern cable performance. It involves the connection of two
cables, one for voice and one for data, to each outlet. The cable for voice must be four-pair UTP. Category 5
is the cable most frequently recommended and implemented in installations. However, analyst predictions
and independent polls indicate that Category 6 cable will supersede Category 5 cable in network
installations. The fact that Category 6 link and channel requirements are backward compatible with Category 5e makes it very easy for customers to choose Category 6 and supersede Category 5e in their networks.
Applications that work over Category 5e will work over Category 6.
UTP cable has many advantages. It is easy to install and is less expensive than other types of networking
media. In fact, UTP costs less per meter than any other type of LAN cabling. However, the real advantage
is the size. Since it has such a small external diameter, UTP does not fill up wiring ducts as rapidly as other
types of cable. This can be an extremely important factor to consider, particularly when a network is installed
in an older building. When UTP cable is installed with an RJ-45 connector, potential sources of network noise
are greatly reduced and a good solid connection is almost guaranteed.
There are some disadvantages of twisted-pair cabling. UTP cable is more prone to electrical noise and
interference than other types of networking media, and the distance between signal boosts is shorter for UTP
than it is for coaxial and fiber optic cables.
Twisted pair cabling was once considered slower at transmitting data than other types of cable. This is no
longer true. In fact, today, twisted pair is considered the fastest copper-based media.
For communication to occur the signal that is transmitted by the source needs to be understood by the
destination. This is true from both a software and physical perspective. The transmitted signal needs to be
properly received by the circuit connection designed to receive signals. The transmit pin of the source needs
to ultimately connect to the receiving pin of the destination. The following are the types of cable connections
used between internetwork devices.
In Figure , a LAN switch is connected to a computer. The cable that connects from the switch port to the
computer NIC port is called a straight-through cable.
In Figure , two switches are connected together. The cable that connects from one switch port to another
switch port is called a crossover cable.
In Figure , the cable that connects the RJ-45 adapter on the com port of the computer to the console port
of the router or switch is called a rollover cable.
The cables are defined by the type of connections, or pinouts, from one end to the other end of the cable.
See Figures , , and . A technician can compare both ends of the same cable by placing them next to
each other, provided the cable has not yet been placed in a wall. The technician observes the colors of the
two RJ-45 connectors by holding both ends with the clips facing into the palm and the tops of both connectors pointing away from the technician. A straight-through cable should have both ends with identical color
patterns. While comparing the ends of a cross-over cable, the color of pins #1 and #2 will appear on the
other end at pins #3 and #6, and vice-versa. This occurs because the transmit and receive pins are in
different locations. On a rollover cable, the color combination from left to right on one end should be exactly
opposite to the color combination on the other end.
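The pin relationships described above can also be expressed as a simple check. The Python sketch below follows the text: a straight-through cable maps each pin to the same pin, while a crossover cable swaps pins 1 and 2 with pins 3 and 6. It is an illustration only, not a reference for wiring standards.

```python
# Straight-through vs. crossover, based on the pin relationships described
# above: a straight-through cable maps each pin to the same pin; a crossover
# cable swaps pins 1 and 2 on one end with pins 3 and 6 on the other.

STRAIGHT_THROUGH = {pin: pin for pin in range(1, 9)}
CROSSOVER = {**STRAIGHT_THROUGH, 1: 3, 2: 6, 3: 1, 6: 2}

def cable_type(pin_map):
    if pin_map == STRAIGHT_THROUGH:
        return "straight-through"
    if pin_map == CROSSOVER:
        return "crossover"
    return "other (possibly rollover or miswired)"

print(cable_type({pin: pin for pin in range(1, 9)}))   # straight-through
print(cable_type(CROSSOVER))                           # crossover
```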
In the first Lab Activity, a simple communication system is designed, built, and tested.
In the next Lab Activity, students will use a cable tester to determine if a straight-through or crossover cable
is good or bad.
The next three Lab Activities will provide hands-on experience with straight-through, rollover, and crossover
cable construction.
In the final Lab Activity, students will research cable costs.
This page concludes this lesson. The next lesson will discuss optical media. The first page will describe the
electromagnetic spectrum.
3.2
Optical Media
How much a light ray bends when it enters the glass depends on the angle at which the incident ray strikes the surface of the glass and on the different rates of speed at which light travels through the two substances.
The bending of light rays at the boundary of two substances is the reason why light rays are able to travel
through an optical fiber even if the fiber curves in a circle.
The optical density of the glass determines how much the rays of light in the glass bend. Optical density
refers to how much a light ray slows down when it passes through a substance. The greater the optical
density of a material, the more it slows light down from its speed in a vacuum. The index of refraction is
defined as the speed of light in vacuum divided by the speed of light in the medium. Therefore, the measure
of the optical density of a material is the index of refraction of that material. A material with a large index of
refraction is more optically dense and slows down more light than a material with a smaller index of
refraction.
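Written as a formula, the definition above is as follows. The numeric value for ordinary glass (about 1.5) is a typical figure added for illustration, not one given in the text.

```latex
% Index of refraction, from the definition above:
n = \frac{c}{v}
% where c is the speed of light in a vacuum and v is the speed of light in the medium.
% Typical illustrative value: for ordinary glass, n \approx 1.5, so light travels at
% roughly c / 1.5 \approx 2 \times 10^{8} \ \text{m/s} inside the glass.
```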
For a substance like glass, the index of refraction, or optical density, can be made larger by adding
chemicals to the glass. Making the glass very pure can make the index of refraction smaller. The next
lessons will provide further information about reflection and refraction, and their relation to the design and
function of optical fiber.
The Interactive Media Activity demonstrates how light travels.
3.2.3 Reflection
This page provides an overview of reflection.
When a ray of light (the incident ray) strikes the shiny surface of a flat piece of glass, some of the light
energy in the ray is reflected. The angle between the incident ray and a line perpendicular to the surface of
the glass at the point where the incident ray strikes the glass is called the angle of incidence. The
perpendicular line is called the normal. It is not a light ray but a tool to allow the measurement of angles. The
angle between the reflected ray and the normal is called the angle of reflection. The Law of Reflection states
that the angle of reflection of a light ray is equal to the angle of incidence. In other words, the angle at which
a light ray strikes a reflective surface determines the angle that the ray will reflect off the surface.
The Interactive Media Activity demonstrates the laws of reflection.
3.2.4
Refraction
When both of these conditions are met, the entire incident light in the fiber is reflected back inside the fiber.
This is called total internal reflection, which is the foundation upon which optical fiber is constructed. Total
internal reflection causes the light rays in the fiber to bounce off the core-cladding boundary and continue their journey towards the far end of the fiber. The light follows a zigzag path through the core of the fiber.
A fiber that meets the first condition can be easily created. In addition, the angle of incidence of the light rays
that enter the core can be controlled. Restricting the following two factors controls the angle of incidence:
The numerical aperture of the fiber The numerical aperture of a core is the range of angles of
incident light rays entering the fiber that will be completely reflected.
Modes The paths which a light ray can follow when traveling down a fiber.
When both factors are controlled, the fiber run will have total internal reflection. This provides a light waveguide
that can be used for data communications.
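The two quantities mentioned above can be computed with standard optics formulas. The short Python sketch below is illustrative only; the refractive indices are example values and do not describe any particular fiber.

```python
# Illustrative sketch of the critical angle and numerical aperture, using
# standard optics formulas. The refractive indices are example values.
import math

def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    """Smallest angle of incidence (measured from the normal) that still
    gives total internal reflection at the core-cladding boundary."""
    return math.degrees(math.asin(n_cladding / n_core))

def numerical_aperture(n_core: float, n_cladding: float) -> float:
    """Numerical aperture, which fixes the cone of entry angles that are
    completely reflected inside the core."""
    return math.sqrt(n_core**2 - n_cladding**2)

n_core, n_cladding = 1.48, 1.46          # example values: the core is denser
print(round(critical_angle_deg(n_core, n_cladding), 1))   # about 80.6 degrees
print(round(numerical_aperture(n_core, n_cladding), 3))   # about 0.242
```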
3.2.6 Multimode fiber
3.2.7 Single-mode fiber
Single-mode fiber consists of the same parts as multimode. The outer jacket of single-mode fiber is usually
yellow. The major difference between multimode and single-mode fiber is that single-mode allows only one
mode of light to propagate through the smaller, fiber-optic core. The single-mode core is eight to ten microns
in diameter. Nine-micron cores are the most common. A 9/125 marking on the jacket of the single-mode fiber
indicates that the core fiber has a diameter of 9 microns and the surrounding cladding is 125 microns in
diameter.
An infrared laser is used as the light source in single-mode fiber. The ray of light it generates enters the core
at a 90-degree angle. As a result, the data carrying light ray pulses in single-mode fiber are essentially
transmitted in a straight line right down the middle of the core. This greatly increases both the speed and
the distance that data can be transmitted.
Because of its design, single-mode fiber is capable of higher rates of data transmission (bandwidth) and
greater cable run distances than multimode fiber. Single-mode fiber can carry LAN data up to 3000 meters.
Although this distance is considered a standard, newer technologies have increased this distance and will be
discussed in a later module. Multimode fiber is only capable of carrying data up to 2000 meters. Lasers and single-mode fibers are more expensive than LEDs and multimode fiber. Because of these characteristics, single-mode fiber is often used for inter-building connectivity.
Warning: The laser light used with single-mode fiber has a longer wavelength than can be seen. The laser is so
strong that it can seriously damage eyes. Never look at the near end of a fiber that is connected to a device
at the far end. Never look into the transmit port on a NIC, switch, or router. Remember to keep protective
covers over the ends of fiber and inserted into the fiber-optic ports of switches and routers. Be very careful.
Figure compares the relative sizes of the core and cladding for both types of optical fiber in different
sectional views. The much smaller and more refined fiber core in single-mode fiber is the reason single-mode has a higher bandwidth and greater cable run distance than multimode fiber. However, it entails higher
manufacturing costs.
3.2.8 Other optical components
This page explains how optical devices are used to transmit data.
Most of the data sent over a LAN is in the form of electrical signals. However, optical fiber links use light to
send data. Something is needed to convert the electricity to light and at the other end of the fiber convert the
light back to electricity. This means that a transmitter and a receiver are required.
The transmitter receives data to be transmitted from switches and routers. This data is in the form of
electrical signals. The transmitter converts the electronic signals into their equivalent light pulses. There are
two types of light sources used to encode and transmit the data through the cable:
A light emitting diode (LED) producing infrared light with wavelengths of either 850 nm or 1310 nm.
These are used with multimode fiber in LANs. Lenses are used to focus the infrared light on the end
of the fiber.
Light amplification by stimulated emission of radiation (LASER) a light source producing a thin beam of
intense infrared light, usually with wavelengths of 1310 nm or 1550 nm. Lasers are used with single-mode fiber over the longer distances involved in WANs or campus backbones. Extra care should be
exercised to prevent eye injury.
Each of these light sources can be lighted and darkened very quickly to send data (1s and 0s) at a high
number of bits per second.
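The idea of turning the light source on and off to represent bits can be pictured with the minimal Python sketch below. The bit rate and the helper name are illustrative; this is not how any actual transmitter firmware is written.

```python
# A minimal sketch of the on/off idea described above: each bit of data is
# mapped to one interval in which the light source is either lit (1) or dark (0).
# The bit rate and the helper name are illustrative, not part of any standard.

def light_pulses(bits: str, bit_rate_bps: int):
    """Yield (start_time_in_seconds, light_on) pairs for a string of 1s and 0s."""
    bit_time = 1 / bit_rate_bps
    for i, bit in enumerate(bits):
        yield i * bit_time, bit == "1"

for start, on in light_pulses("1011", bit_rate_bps=100_000_000):  # 100 Mbps
    print(f"t = {start * 1e9:5.0f} ns  light {'ON' if on else 'OFF'}")
```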
At the other end of the optical fiber from the transmitter is the receiver. The receiver functions something like
the photoelectric cell in a solar powered calculator. When light strikes the receiver, it produces electricity. The
first job of the receiver is to detect a light pulse that arrives from the fiber. Then the receiver converts the light
pulse back into the original electrical signal that first entered the transmitter at the far end of the fiber. Now
the signal is again in the form of voltage changes. The signal is ready to be sent over copper wire into any
receiving electronic device such as a computer, switch, or router. The semiconductor devices that are usually
used as receivers with fiber-optic links are called p-intrinsic-n diodes (PIN photodiodes).
PIN photodiodes are manufactured to be sensitive to light at 850, 1310, or 1550 nm, the wavelengths that are generated by
the transmitter at the far end of the fiber. When struck by a pulse of light at the proper wavelength, the PIN
photodiode quickly produces an electric current of the proper voltage for the network. It instantly stops
producing the voltage when no light strikes the PIN photodiode. This generates the voltage changes that
represent the data 1s and 0s on a copper cable.
Connectors are attached to the fiber ends so that the fibers can be connected to the ports on the transmitter
and receiver. The type of connector most commonly used with multimode fiber is the Subscriber Connector
(SC). On single-mode fiber, the Straight Tip (ST) connector is frequently used.
In addition to the transmitters, receivers, connectors, and fibers that are always required on an optical
network, repeaters and fiber patch panels are often seen.
Repeaters are optical amplifiers that receive attenuating light pulses traveling long distances and restore
them to their original shapes, strengths, and timings. The restored signals can then be sent on along the
journey to the receiver at the far end of the fiber.
Fiber patch panels are similar to the patch panels used with copper cable. These panels increase the flexibility of
an optical network by allowing quick changes to the connection of devices like switches or routers with
various available fiber runs, or cable links.
The Lab Activity will teach students about the price of different types of fiber cables.
3.2.9 Signals and noise in optical fibers
This page explains some factors that reduce signal strength in optical media.
Fiber-optic cable is not affected by the sources of external noise that cause problems on copper media
because external light cannot enter the fiber except at the transmitter end. The cladding is covered by a
buffer and an outer jacket that stops light from entering or leaving the cable.
Furthermore, the transmission of light on one fiber in a cable does not generate interference that disturbs
transmission on any other fiber. This means that fiber does not have the problem with crosstalk that copper
media does. In fact, the quality of fiber-optic links is so good that the recent standards for gigabit and ten
gigabit Ethernet specify transmission distances that far exceed the traditional two-kilometer reach of the
original Ethernet. Fiber-optic transmission allows the Ethernet protocol to be used on metropolitan-area
networks (MANs) and wide-area networks (WANs).
Although fiber is the best of all the transmission media at carrying large amounts of data over long distances,
fiber is not without problems. When light travels through fiber, some of the light energy is lost. The farther a
light signal travels through a fiber, the more the signal loses strength. This attenuation of the signal is due to
several factors involving the nature of fiber itself. The most important factor is scattering. The scattering of
light in a fiber is caused by microscopic non-uniformities (distortions) in the fiber that reflect and scatter
some of the light energy.
Absorption is another cause of light energy loss. When a light ray strikes some types of chemical impurities
in a fiber, the impurities absorb part of the energy. This light energy is converted to a small amount of heat
energy. Absorption makes the light signal a little dimmer.
Another factor that causes attenuation of the light signal is manufacturing irregularities or roughness in the
core-to-cladding boundary. Power is lost from the light signal because of the less than perfect total internal
reflection in that rough area of the fiber. Any microscopic imperfections in the thickness or symmetry of the
fiber will cut down on total internal reflection and the cladding will absorb some light energy.
Dispersion of a light flash also limits transmission distances on a fiber. Dispersion is the technical term for the
spreading of pulses of light as they travel down the fiber.
Graded index multimode fiber is designed to compensate for the different distances the various modes of
light have to travel in the large diameter core. Single-mode fiber does not have the problem of multiple paths
that the light signal can follow. However, chromatic dispersion is a characteristic of both multimode and
single-mode fiber. Chromatic dispersion occurs because some wavelengths of light travel through glass at slightly
different speeds than other wavelengths. That is why a prism separates the wavelengths of light. Ideally,
an LED or Laser light source would emit light of just one frequency. Then chromatic dispersion would not be
a problem.
Unfortunately, lasers, and especially LEDs, generate a range of wavelengths, so chromatic dispersion limits
the distance that can be transmitted on a fiber. If a signal is transmitted too far, what started as a bright pulse
of light energy will be spread out, separated, and dim when it reaches the receiver. The receiver will not be
able to distinguish a one from a zero.
3.2.10 Installation, care, and testing of optical fiber
This page will teach students how to troubleshoot optical fiber.
A major cause of too much attenuation in fiber-optic cable is improper installation. If the fiber is stretched or
curved too tightly, it can cause tiny cracks in the core that will scatter the light rays. Bending the fiber in too
tight a curve can change the incident angle of light rays striking the core-to-cladding boundary. Then the
incident angle of the ray will become less than the critical angle for total internal reflection. Instead of
reflecting around the bend, some light rays will refract into the cladding and be lost.
To prevent fiber bends that are too sharp, fiber is usually pulled through a type of installed pipe called
interducting. The interducting is much stiffer than fiber and cannot be bent so sharply that the fiber inside the
interducting has too tight a curve. The interducting protects the fiber, makes it easier to pull the fiber, and
ensures that the bending radius (curve limit) of the fiber is not exceeded.
When the fiber has been pulled, the ends of the fiber must be cleaved (cut) and properly polished to ensure
that the ends are smooth. A microscope or test instrument with a built in magnifier is used to examine the
end of the fiber and verify that it is properly polished and shaped. Then the connector is carefully attached to
the fiber end. Improperly installed connectors, improper splices, or the splicing of two cables with different
core sizes will dramatically reduce the strength of a light signal.
Once the fiber-optic cable and connectors have been installed, the connectors and the ends of the fibers
must be kept spotlessly clean. The ends of the fibers should be covered with protective covers to prevent
damage to the fiber ends. When these covers are removed prior to connecting the fiber to a port on a switch
or a router, the fiber ends must be cleaned. Clean the fiber ends with lint free lens tissue moistened with pure
isopropyl alcohol. The fiber ports on a switch or router should also be kept covered when not in use and
cleaned with lens tissue and isopropyl alcohol before a connection is made. Dirty ends on a fiber will cause a
big drop in the amount of light that reaches the receiver.
Scattering, absorption, dispersion, improper installation, and dirty fiber ends diminish the strength of the light
signal and are referred to as fiber noise. Before a fiber-optic cable is used, it must be tested to ensure that
enough light actually reaches the receiver for it to detect the zeros and ones in the signal.
When a fiber-optic link is being planned, the amount of signal power loss that can be tolerated must be
calculated. This is referred to as the optical link loss budget. Imagine a monthly financial budget. After all of
the expenses are subtracted from initial income, enough money must be left to get through the month.
The decibel (dB) is the unit used to measure the amount of power loss. It expresses, on a logarithmic scale,
what fraction of the power that leaves the transmitter actually enters the receiver.
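The decibel calculation implied above can be sketched in a few lines of Python. The power figures used in the example are made up for illustration and are not taken from any standard.

```python
# A minimal sketch of the decibel calculation: compare the power that enters
# the receiver with the power that left the transmitter. Example values only.
import math

def power_loss_db(p_transmit_mw: float, p_receive_mw: float) -> float:
    """Loss in decibels: 10 * log10(received power / transmitted power).
    The result is negative when power is lost along the link."""
    return 10 * math.log10(p_receive_mw / p_transmit_mw)

# If only half of the transmitted power reaches the receiver, the loss is about -3 dB.
print(round(power_loss_db(1.0, 0.5), 2))   # -3.01 dB
# A -10 dB link delivers only 10 percent of the transmitted power.
print(round(power_loss_db(1.0, 0.1), 2))   # -10.0 dB
```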
Testing fiber links is extremely important and records of the results of these tests must be kept. Several types
of fiber-optic test equipment are used. Two of the most important instruments are Optical Loss Meters and
Optical Time Domain Reflectometers (OTDRs).
These meters both test optical cable to ensure that the cable meets the TIA standards for fiber. They also
test to verify that the link power loss does not fall below the optical link loss budget. OTDRs can provide
much additional detailed diagnostic information about a fiber link. They can be used to troubleshoot a link
when problems occur.
This page concludes this lesson. The next lesson will discuss wireless media. The first page will discuss
Wireless LAN organizations and standards.
3.3 Wireless Media
for movement of devices within the WLAN. Although not addressed in the IEEE standards, a 20-30% overlap
is desirable. This rate of overlap will permit roaming between cells, allowing for the disconnect and reconnect
activity to occur seamlessly without service interruption.
When a client is activated within the WLAN, it will start "listening" for a compatible device with which to
"associate". This is referred to as "scanning" and may be active or passive.
Active scanning causes a probe request to be sent from the wireless node seeking to join the network. The
probe request will contain the Service Set Identifier (SSID) of the network it wishes to join. When an AP with
the same SSID is found, the AP will issue a probe response. The authentication and association steps are
completed.
Passive scanning nodes listen for beacon management frames (beacons), which are transmitted by the AP
(infrastructure mode) or peer nodes (ad hoc). When a node receives a beacon that contains the SSID of the
network it is trying to join, an attempt is made to join the network. Passive scanning is a continuous process
and nodes may associate or disassociate with APs as signal strength changes.
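The scanning logic described above can be summarized in a short, simplified Python sketch. The dictionaries stand in for real 802.11 management frames, and the function names are illustrative; this is not a real wireless API.

```python
# A simplified sketch of active and passive scanning. Frame formats are
# reduced to plain dictionaries; field and function names are illustrative.

def active_scan(desired_ssid, probe_responses):
    """Return the first AP whose probe response carries the SSID we asked for."""
    for response in probe_responses:
        if response["ssid"] == desired_ssid:
            return response["ap"]
    return None

def passive_scan(desired_ssid, beacons):
    """Listen to beacon frames and join when one advertises the desired SSID."""
    for beacon in beacons:
        if beacon["ssid"] == desired_ssid:
            return beacon["ap"]
    return None

responses = [{"ssid": "GUEST", "ap": "AP-1"}, {"ssid": "CAMPUS", "ap": "AP-2"}]
print(active_scan("CAMPUS", responses))   # AP-2
```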
The first Interactive Media Activity shows the levels of the OSI reference model and the related networking
devices.
The second Interactive Media Activity shows the addition of a wireless hub to a wired network.
rejected by the AP. The client is notified of the response via an authentication response frame. The AP may
also be configured to hand off the authentication task to an authentication server, which would perform a
more thorough credentialing process.
Association, performed after authentication, is the state that permits a client to use the services of the AP to
transfer data.
Authentication and Association types
Unauthenticated and unassociated
The node is disconnected from the network and not associated to an access point.
Authenticated and unassociated
The node has been authenticated on the network but has not yet associated with the access
point.
Authenticated and associated
The node is connected to the network and able to transmit and receive data through the access
point.
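The three client states listed above can be pictured as a simple state machine in which authentication must succeed before association is attempted. The Python sketch below is illustrative only; the class and transition function are not part of the 802.11 standard.

```python
# A minimal sketch of the three client states and the order in which a client
# normally moves through them. Names are illustrative only.
from enum import Enum

class ClientState(Enum):
    UNAUTHENTICATED_UNASSOCIATED = 1
    AUTHENTICATED_UNASSOCIATED = 2
    AUTHENTICATED_ASSOCIATED = 3

def next_state(state: ClientState) -> ClientState:
    """Authentication must succeed before association is attempted."""
    if state is ClientState.UNAUTHENTICATED_UNASSOCIATED:
        return ClientState.AUTHENTICATED_UNASSOCIATED      # authentication step
    if state is ClientState.AUTHENTICATED_UNASSOCIATED:
        return ClientState.AUTHENTICATED_ASSOCIATED        # association step
    return state                                           # already able to send data

state = ClientState.UNAUTHENTICATED_UNASSOCIATED
state = next_state(state)   # after authentication
state = next_state(state)   # after association; data can flow through the AP
print(state.name)           # AUTHENTICATED_ASSOCIATED
```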
Methods of authentication
IEEE 802.11 lists two types of authentication processes.
The first authentication process is the open system. This is an open connectivity standard in which only the
SSID must match. This may be used in a secure or non-secure environment although the ability of low level
network sniffers to discover the SSID of the WLAN is high.
The second process is the shared key. This process requires the use of Wired Equivalent Privacy
(WEP) encryption. WEP is a fairly simple algorithm that uses 64- and 128-bit keys. The AP is configured with an
encrypted key and nodes attempting to access the network through the AP must have a matching key.
Statically assigned WEP keys provide a higher level of security than the open system but are definitely not
hack proof.
The problem of unauthorized entry into WLANs is being addressed by a number of new security solution
technologies.
3.3.5 The radio wave and microwave spectrums
This page describes radio waves and modulation.
Computers send data signals electronically. Radio transmitters convert these electrical signals to radio
waves. Changing electric currents in the antenna of a transmitter generates the radio waves. These radio
waves radiate out in straight lines from the antenna. However, radio waves attenuate as they move out
from the transmitting antenna. In a WLAN, a radio signal measured at a distance of just 10 meters (30 feet)
from the transmitting antenna would be only 1/100th of its original strength. Like light, radio waves can be
absorbed by some materials and reflected by others. When passing from one material, like air, into another
material, like a plaster wall, radio waves are refracted. Radio waves are also scattered and absorbed by
water droplets in the air.
These qualities of radio waves are important to remember when a WLAN is being planned for a building or
for a campus. The process of evaluating a location for the installation of a WLAN is called making a Site
Survey.
Because radio signals weaken as they travel away from the transmitter, the receiver must also be equipped
with an antenna. When radio waves hit the antenna of a receiver, weak electric currents are generated in that
antenna. These electric currents, caused by the received radio waves, follow the same pattern as the currents that
originally generated the radio waves in the antenna of the transmitter, but are much weaker. The receiver amplifies the strength of
these weak electrical signals.
In a transmitter, the electrical (data) signals from a computer or a LAN are not sent directly into the antenna
of the transmitter. Rather, these data signals are used to alter a second, strong signal called the carrier
signal.
The process of altering the carrier signal that will enter the antenna of the transmitter is called modulation.
There are three basic ways in which a radio carrier signal can be modulated. For example, Amplitude
Modulated (AM) radio stations modulate the height (amplitude) of the carrier signal. Frequency Modulated
(FM) radio stations modulate the frequency of the carrier signal as determined by the electrical signal from
the microphone. In WLANs, a third type of modulation called phase modulation is used to superimpose the
data signal onto the carrier signal that is broadcast by the transmitter.
In this type of modulation, the data bits in the electrical signal change the phase of the carrier signal.
A receiver demodulates the carrier signal that arrives from its antenna. The receiver interprets the phase
changes of the carrier signal and reconstructs from it the original electrical data signal.
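A simple form of phase modulation (two phase values, one per data bit) can be sketched with a few lines of Python. The carrier frequency, sample counts, and function name below are arbitrary illustration values, not parameters from the 802.11 standards.

```python
# A rough sketch of phase modulation: a data bit of 1 shifts the phase of the
# carrier by 180 degrees, a 0 leaves it unchanged. Values are arbitrary examples.
import math

def phase_modulated_samples(bits, carrier_hz=4, samples_per_bit=8):
    """Return carrier samples whose phase encodes the data bits."""
    samples = []
    for i, bit in enumerate(bits):
        phase = math.pi if bit == 1 else 0.0
        for k in range(samples_per_bit):
            t = (i * samples_per_bit + k) / (carrier_hz * samples_per_bit)
            samples.append(math.cos(2 * math.pi * carrier_hz * t + phase))
    return samples

print(len(phase_modulated_samples([1, 0, 1, 1])))   # 32 samples; phase flips mark the 1s
```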
The first Interactive Media Activity explains electromagnetic fields and polarization.
The second Interactive Media Activity shows the names, devices, frequencies, and wavelengths of the EM
spectrum.
3.3.6 Signals and noise on a WLAN
This page discusses how signals and noise can affect a WLAN.
On a wired Ethernet network, it is usually a simple process to diagnose the cause of interference. With
RF technology, many kinds of interference must be taken into consideration.
Narrowband interference is the opposite of spread spectrum technology. As the name implies, narrowband interference does not affect
the entire frequency spectrum of the wireless signal. One solution to a narrowband interference problem
could be simply changing the channel that the AP is using. Actually diagnosing the cause of narrowband
interference can be a costly and time-consuming experience. Identifying the source requires a spectrum
analyzer, and even a low-cost model is relatively expensive.
All band interference affects the entire spectrum range. Bluetooth technology hops across the entire 2.4
GHz band many times per second and can cause significant interference on an 802.11b network. It is not
uncommon to see signs in facilities that use wireless networks requesting that all Bluetooth devices be
shut down before entering. In homes and offices, a device that is often overlooked as a cause of interference is
the standard microwave oven. Leakage from a microwave oven of as little as one watt into the RF spectrum can
cause major network disruption. Wireless phones operating in the 2.4 GHz spectrum can also disrupt the
network.
Generally the RF signal will not be affected by even the most extreme weather conditions. However, fog or
very high moisture conditions can and do affect wireless networks. Lightning can also charge the
atmosphere and alter the path of a transmitted signal.
The first and most obvious source of a signal problem is the transmitting station and antenna type. A higher-output station will transmit the signal farther, and a parabolic dish antenna that concentrates the signal will
increase the transmission range.
In a SOHO environment, most access points use twin omnidirectional antennae that transmit the
signal in all directions, thereby reducing the range of communication.
3.3.7 Wireless security
This page will explain how wireless security can be achieved.
Where wireless networks exist, there is little security. This has been a problem since the earliest days of
WLANs. Currently, many administrators are lax about implementing effective security practices.
A number of new security solutions and protocols, such as Virtual Private Networking (VPN) and Extensible
Authentication Protocol (EAP), are emerging. With EAP, the access point does not provide authentication to
the client, but passes the duties to a more sophisticated device, possibly a dedicated server, designed for
that purpose. Using an integrated server, VPN technology creates a tunnel on top of an existing protocol such
as IP. This is a Layer 3 connection as opposed to the Layer 2 connection between the AP and the sending
node.
EAP-MD5 Challenge Extensible Authentication Protocol is the earliest authentication type, which
is very similar to CHAP password protection on a wired network.
LEAP (Cisco) Lightweight Extensible Authentication Protocol is the type primarily used on Cisco
WLAN access points. LEAP provides security during credential exchange, encrypts using dynamic
WEP keys, and supports mutual authentication.
User authentication Allows only authorized users to connect, send and receive data over the
wireless network.
Encryption Provides encryption services further protecting the data from intruders.
Data authentication Ensures the integrity of the data, authenticating source and destination
devices.
VPN technology effectively closes the wireless network since an unrestricted WLAN will automatically
forward traffic between nodes that appear to be on the same wireless network. WLANs often extend outside
the perimeter of the home or office in which they are installed, and without security, intruders may infiltrate the
network with little effort. Conversely, it takes minimal effort on the part of the network administrator to provide
low-level security to the WLAN.
This page concludes the lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Copper cable carries information using electrical current. The electrical specifications of a cable determine
the kind of signal a particular cable can transmit, the speed at which the signal is transmitted, and the
distance the signal will travel.
An understanding of the following electrical concepts is helpful when working with computer networks:
Voltage the pressure that moves electrons through a circuit from one place to another
Resistance opposition to the flow of electrons and why a signal becomes degraded as it travels
along the conduit
Current flow of charges created when electrons move
Circuits a closed loop through which an electrical current flows
Circuits must be composed of conducting materials, and must have sources of voltage. Voltage causes
current to flow, while resistance and impedance oppose it. A multimeter is used to measure voltage, current,
resistance, and other electrical quantities expressed in numeric form.
Coaxial cable, unshielded twisted pair (UTP) and shielded twisted pair (STP) are types of copper cables that
can be used in a network to provide different capabilities. Twisted-pair cable can be configured for straight-through, crossover, or rollover wiring. These terms refer to the individual wire connections, or pinouts,
from one end to the other end of the cable. A straight-through cable is used to connect unlike devices such
as a switch and a PC. A crossover cable is used to connect similar devices such as two switches. A rollover
cable is used to connect a PC to the console port of a router. Different pinouts are required because the
transmit and receive pins are in different locations on each of these devices.
Optical fiber is the most frequently used medium for the longer, high-bandwidth, point-to-point transmissions
required on LAN backbones and on WANs. Light energy is used to transmit large amounts of data securely
over relatively long distances. The light signal carried by a fiber is produced by a transmitter that converts an
electrical signal into a light signal. The receiver converts the light that arrives at the far end of the cable back
to the original electrical signal.
Every fiber-optic cable used for networking consists of two glass fibers encased in separate sheaths. Just as
copper twisted-pair uses separate wire pairs to transmit and receive, fiber-optic circuits use one fiber strand
to transmit and one to receive.
The part of an optical fiber through which light rays travel is called the core of the fiber. Surrounding the core
is the cladding. Its function is to reflect the signal back towards the core. Surrounding the cladding is a buffer
material that helps shield the core and cladding from damage. A strength material surrounds the buffer,
preventing the fiber cable from being stretched when installers pull it. The material used is often Kevlar. The
final element is the outer jacket that surrounds the cable to protect the fiber against abrasion, solvents, and
other contaminants.
The laws of reflection and refraction are used to design fiber media that guides the light waves through the
fiber with minimum energy and signal loss. Once the rays have entered the core of the fiber, there are a
limited number of optical paths that a light ray can follow through the fiber. These optical paths are called
modes. If the diameter of the core of the fiber is large enough so that there are many paths that light can take
through the fiber, the fiber is called multimode fiber. Single-mode fiber has a much smaller core that only
allows light rays to travel along one mode inside the fiber. Because of its design, single-mode fiber is capable
of higher rates of data transmission and greater cable run distances than multimode fiber.
Fiber is described as immune to noise because it is not affected by external noise or noise from other cables.
Light confined in one fiber has no way of inducing light in another fiber. Attenuation of a light signal becomes
a problem over long cables especially if sections of cable are connected at patch panels or spliced.
Both copper and fiber media require that devices remain stationary, permitting moves only within the limits of
the media. Wireless technology removes these restraints. Understanding the regulations and standards that
apply to wireless technology will ensure that deployed networks will be interoperable and in compliance with
IEEE 802.11 standards for WLANs.
A wireless network may consist of as few as two devices. The wireless equivalent of a peer-to-peer network
where end-user devices connect directly is referred to as an ad-hoc wireless topology. To solve compatibility
problems among devices, an infrastructure mode topology can be set up using an access point (AP) to act
as a central hub for the WLAN. Wireless communication uses three types of frames: control, management,
and data frames. To avoid collisions on the shared radio frequency media WLANs use Carrier Sense Multiple
Access/Collision Avoidance (CSMA/CA).
WLAN authentication is a Layer 2 process that authenticates the device, not the user. Association,
performed after authentication, permits a client to use the services of the access point to transfer data.
4.1
Networking media is literally and physically the backbone of a network. Inferior quality network cabling results
in network failures and unreliable performance. Copper,
optical fiber, and wireless networking media all require testing to ensure that they meet strict specification
guidelines. These tests involve certain electrical and mathematical concepts and terms such as signal, wave,
frequency, and noise. These terms will help students understand networks, cables, and cable testing.
The first lesson in this module will provide some basic definitions to help students understand the cable
testing concepts presented in the second lesson.
The second lesson of this module describes issues related to cable testing for physical layer connectivity in
LANs. In order for the LAN to function properly, the physical layer medium should meet the industry standard
specifications.
Attenuation, which is signal deterioration, and noise, which is signal interference, can cause problems in
networks because the data sent may be interpreted incorrectly or not recognized at all after it has been
received. Proper termination of cable connectors and proper cable installation are important. If standards are
followed during installations, repairs, and changes, attenuation and noise levels should be minimized.
After a cable has been installed, a cable certification meter can verify that the installation meets TIA/EIA
specifications. This module also describes some important tests that are performed.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Differentiate between sine waves and square waves
Define and calculate exponents and logarithms
Define and calculate decibels
4.1.1 Waves
This lesson provides definitions that relate to frequency-based cable testing. This page defines waves.
A wave is energy that travels from one place to another. There are many types of waves, but all can be
described with similar vocabulary.
It is helpful to think of waves as disturbances. A bucket of water that is completely still does not have waves
since there are no disturbances. Conversely, the ocean always has some sort of detectable waves due to
disturbances such as wind and tide.
Ocean waves can be described in terms of their height, or amplitude, which could be measured in meters.
They can also be described in terms of how frequently the waves reach the shore, which relates to period
and frequency. The period of the waves is the amount of time between each wave, measured in seconds.
The frequency is the number of waves that reach the shore each second, measured in hertz (Hz). 1 Hz is
equal to 1 wave per second, or 1 cycle per second. To experiment with these concepts, adjust the amplitude
and frequency in Figure .
Networking professionals are specifically interested in voltage waves on copper media, light waves in optical
fiber, and alternating electric and magnetic fields called electromagnetic waves. The amplitude of an
electrical signal still represents height, but it is measured in volts (V) instead of meters (m). The period is the
amount of time that it takes to complete 1 cycle. This is measured in seconds. The frequency is the number
of complete cycles per second. This is measured in Hz.
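The relationship between period and frequency can be verified with the small worked example below. The period values are chosen only for illustration.

```python
# A small worked example of the period/frequency relationship: frequency is
# the reciprocal of the period. The period values are illustrations only.

def frequency_hz(period_seconds: float) -> float:
    """Frequency is the reciprocal of the period: f = 1 / T."""
    return 1 / period_seconds

print(frequency_hz(0.5))      # a wave every 0.5 s -> 2 Hz (2 cycles per second)
print(frequency_hz(0.001))    # a 1 ms period -> 1000 Hz
```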
If a disturbance is deliberately caused, and involves a fixed, predictable duration, it is called a pulse. Pulses
are an important part of electrical signals because they are the basis of digital transmission. The pattern of
the pulses represents the value of the data being transmitted.
4.1.2 Sine waves and square waves (Core)
This page defines sine waves and square waves.
Sine waves, or sinusoids, are graphs of mathematical functions. Sine waves are periodic, which means
that they repeat the same pattern at regular intervals. Sine waves vary continuously, which means that no
adjacent points on the graph have the same value.
Sine waves are graphical representations of many natural occurrences that change regularly over time.
Some examples of these occurrences are the distance from the earth to the sun, the distance from the
ground while riding a Ferris wheel, and the time of day that the sun rises. Since sine waves vary
continuously, they are examples of analog waves.
Square waves, like sine waves, are periodic. However, square wave graphs do not continuously vary with
time. The wave maintains one value and then suddenly changes to a different value. After a short amount of
time it changes back to the original value. Square waves represent digital signals, or pulses. Like all waves,
square waves can be described in terms of amplitude, period, and frequency.
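The contrast between the two wave shapes can be seen in a short Python sketch. The square wave here is simply derived from the sign of a sine wave; the amplitude and frequency are arbitrary example values.

```python
# A short sketch contrasting the two wave shapes. A square wave is derived
# here from a sine wave by keeping only its sign; values are examples only.
import math

def sine_sample(t, amplitude=1.0, frequency_hz=1.0):
    """Continuously varying (analog) waveform."""
    return amplitude * math.sin(2 * math.pi * frequency_hz * t)

def square_sample(t, amplitude=1.0, frequency_hz=1.0):
    """Jumps between +amplitude and -amplitude (digital-style pulse train)."""
    return amplitude if sine_sample(t, 1.0, frequency_hz) >= 0 else -amplitude

for t in [0.0, 0.125, 0.25, 0.375, 0.5, 0.625]:
    print(f"t={t:5.3f}  sine={sine_sample(t):6.3f}  square={square_sample(t):4.1f}")
```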
4.1.3
by these voltage patterns can be converted to light waves or radio waves, and then back to voltage waves.
Consider the example of an analog telephone. The sound waves of the caller's voice enter a microphone in
the telephone. The microphone converts the patterns of sound energy into voltage patterns of electrical
energy that represent the voice.
If the voltage is graphed over time, the patterns that represent the voice will be displayed. An oscilloscope
is an important electronic device used to view electrical signals such as voltage waves and pulses. The x-axis on the display represents time and the y-axis represents voltage or current. There are usually two y-axis
inputs, so two waves can be observed and measured at the same time.
The analysis of signals with an oscilloscope is called time-domain analysis. The x-axis or domain of the
mathematical function represents time. Engineers also use frequency-domain analysis to study signals. In
frequency-domain analysis, the x-axis represents frequency. An electronic device called a spectrum analyzer
creates graphs for frequency-domain analysis.
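The same set of samples can be viewed in the time domain and in the frequency domain. The Python sketch below, which assumes the NumPy library is available, shows how a Fourier transform reveals the frequencies present in a signal; the sample rate and tone frequencies are example values.

```python
# A minimal sketch of frequency-domain analysis, assuming NumPy is available.
# The list of samples is the time-domain view; its Fourier transform shows
# which frequencies the signal contains.
import numpy as np

sample_rate = 1000                               # samples per second
t = np.arange(0, 1, 1 / sample_rate)             # one second of time-domain samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))           # frequency-domain view
freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

# The two strongest components sit at 50 Hz and 120 Hz, as expected.
strongest = freqs[np.argsort(spectrum)[-2:]]
print(sorted(strongest.tolist()))                # [50.0, 120.0]
```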
Electromagnetic signals use different frequencies for transmission so that different signals do not interfere
with each other. Frequency modulation (FM) radio signals use frequencies that are different from television
or satellite signals. When listeners change the station on a radio, they change the frequency that the radio
receives.
4.1.6 Analog and digital signals (Core)
This page will explain how analog signals vary with time and with frequency.
First, consider a single-frequency electrical sine wave, whose frequency can be detected by the human ear.
If this signal is transmitted to a speaker, a tone can be heard.
Next, imagine the combination of several sine waves. This will create a wave that is more complex than a
pure sine wave. This wave will include several tones. A graph of the tones will show several lines that
correspond to the frequency of each tone.
Finally, imagine a complex signal, like a voice or a musical instrument. If many different tones are present,
the graph will show a continuous spectrum of individual tones.
The Interactive Media Activity draws sine waves and complex waves based on amplitude, frequency, and
phase.
Noise that affects all transmission frequencies equally is called white noise. Noise that only affects small
ranges of frequencies is called narrowband interference. White noise on a radio receiver would interfere with
all radio stations. Narrowband interference would affect only a few stations whose frequencies are close
together. When detected on a LAN, white noise could affect all data transmissions, but narrowband
interference might disrupt only certain signals.
The Interactive Media Activity will allow students to generate white noise and narrowband noise.
4.1.8 Bandwidth
This page will describe bandwidth, which is an extremely important concept in networks.
Two types of bandwidth that are important for the study of LANs are analog and digital.
Analog bandwidth typically refers to the frequency range of an analog electronic system. Analog bandwidth
could be used to describe the range of frequencies transmitted by a radio station or an electronic amplifier.
The unit of measurement for analog bandwidth is hertz (Hz), the same as the unit of frequency.
Digital bandwidth measures how much information can flow from one place to another in a given amount of
time. The fundamental unit of measurement for digital bandwidth is bits per second (bps). Since LANs are capable of
speeds of thousands or millions of bits per second, measurements are expressed in kbps or Mbps. Physical
media, current technologies, and the laws of physics limit bandwidth.
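What digital bandwidth means in practice can be illustrated with a best-case transfer-time estimate. The Python sketch below is illustrative only; real transfers take longer because of protocol overhead and shared capacity.

```python
# A small worked example of digital bandwidth: a best-case estimate of the
# time needed to move a file over a link. Real transfers take longer because
# of overhead and shared capacity.

def best_case_transfer_seconds(file_size_bytes: int, bandwidth_bps: int) -> float:
    """Time = amount of data in bits / digital bandwidth in bits per second."""
    return (file_size_bytes * 8) / bandwidth_bps

# A 10 MB file over a 10 Mbps link versus a 100 Mbps link.
print(best_case_transfer_seconds(10_000_000, 10_000_000))    # 8.0 seconds
print(best_case_transfer_seconds(10_000_000, 100_000_000))   # 0.8 seconds
```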
During cable testing, analog bandwidth is used to determine the digital bandwidth of a copper cable. The
digital waveforms are made up of many sine waves (analog waves). Analog frequencies are transmitted from
one end and received on the opposite end. The two signals are then compared, and the amount of
attenuation of the signal is calculated. In general, media that will support higher analog bandwidths without
high degrees of attenuation will also support higher digital bandwidths.
This page concludes this lesson. The next lesson will discuss signals and noise. The first page describes
copper and fiber optic cables.
4.2 Signals and Noise
Attenuation is the decrease in signal amplitude over the length of a link. Long cable lengths and high signal
frequencies contribute to greater signal attenuation. For this reason, attenuation on a cable is measured by a
cable tester using the highest frequencies that the cable is rated to support. Attenuation is expressed in decibels (dB)
as negative numbers. Smaller negative dB values indicate better link performance.
There are several factors that contribute to attenuation. The resistance of the copper cable converts some of
the electrical energy of the signal to heat. Signal energy is also lost when it leaks through the insulation of
the cable and as a result of impedance discontinuities caused by defective connectors.
Impedance is a measurement of the resistance of the cable to alternating current (AC) and is measured in
ohms. The normal impedance of a Category 5 cable is 100 ohms. If a connector is improperly installed on a
Category 5 cable, it will have a different impedance value than the cable. This is called an impedance discontinuity
or an impedance mismatch.
Impedance discontinuities cause attenuation because a portion of a transmitted signal is reflected back, like
an echo, and does not reach the receiver. This effect is compounded if multiple discontinuities cause
additional portions of the signal to be reflected back to the transmitter. When the reflected signal strikes the
first discontinuity, some of the signal rebounds in the original direction, which creates multiple echo effects.
The echoes strike the receiver at different intervals. This makes it difficult for the receiver to detect data
values. This is called jitter and results in data errors.
The combination of the effects of signal attenuation and impedance discontinuities on a communications link
is called insertion loss. Proper network operation depends on constant characteristic impedance in all cables
and connectors, with no impedance discontinuities in the entire cable system.
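Why a mismatch reflects part of the signal can be sketched with the standard transmission-line reflection coefficient. The Python example below is illustrative only; the impedance values are examples, not measurements of any particular cable.

```python
# An illustrative sketch of why an impedance mismatch reflects part of the
# signal, using the standard reflection coefficient. Example values only.

def reflection_coefficient(z_cable_ohms: float, z_connector_ohms: float) -> float:
    """Fraction of the signal voltage reflected at the discontinuity."""
    return (z_connector_ohms - z_cable_ohms) / (z_connector_ohms + z_cable_ohms)

# A 100-ohm Category 5 cable meeting a badly installed 120-ohm connector:
gamma = reflection_coefficient(100, 120)
print(round(gamma, 3))           # 0.091 -> about 9% of the voltage echoes back
print(round(gamma**2 * 100, 1))  # about 0.8% of the signal power is reflected
```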
The Ethernet standard specifies that each of the pins on an RJ-45 connector have a particular purpose. A
NIC transmits signals on pins 1 and 2, and it receives signals on pins 3 and 6. The wires in UTP cable must
be connected to the proper pins at each end of a cable. The wire map test ensures that no open or short
circuits exist on the cable. An open circuit occurs if the wire does not attach properly at the connector. A short
circuit occurs if two wires are connected to each other.
The wire map test also verifies that all eight wires are connected to the correct pins on both ends of the
cable. There are several different wiring faults that the wire map test can detect. The reversed-pair fault
occurs when a wire pair is correctly installed on one connector, but reversed on the other connector. If the
white/orange wire is terminated on pin 1 and the orange wire is terminated on pin 2 at one end of a cable, but
reversed at the other end, then the cable has a reversed-pair fault. This example is shown in the graphic.
A split-pair wiring fault occurs when one wire from one pair is switched with one wire from a different pair at
both ends. Look carefully at the pin numbers in the graphic to detect the wiring fault. A split pair creates two
transmit or receive pairs, each with two wires that are not twisted together. This mixing hampers the cross-cancellation process and makes the cable more susceptible to crosstalk and interference. Contrast this with
a reversed-pair, where the same pair of pins is used at both ends.
Propagation delay is a simple measurement of how long it takes for a signal to travel along the cable being
tested. The delay in a wire pair depends on its length, twist rate, and electrical properties. Delays are
measured in hundredths of nanoseconds. One nanosecond is one-billionth of a second, or 0.000000001
second. The TIA/EIA-568-B standard sets a limit for propagation delay for the various categories of UTP.
Propagation delay measurements are the basis of the cable length measurement. TIA/EIA-568-B.1 specifies
that the physical length of the link shall be calculated using the wire pair with the shortest electrical delay.
Testers measure the length of the wire based on the electrical delay as measured by a Time Domain
Reflectometry (TDR) test, not by the physical length of the cable jacket. Since the wires inside the cable are
twisted, signals actually travel farther than the physical length of the cable. When a cable tester makes a
TDR measurement, it sends a pulse signal down a wire pair and measures the amount of time required for
the pulse to return on the same wire pair.
The TDR test is used not only to determine length, but also to identify the distance to wiring faults such as
shorts and opens. When the pulse encounters an open, short, or poor connection, all or part of the pulse
energy is reflected back to the tester. This can be used to calculate the approximate distance to the wiring
fault. The approximate distance can be helpful in locating a faulty connection point along a cable run, such
as a wall jack.
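The distance estimate a TDR performs can be sketched with the round-trip formula below. The nominal velocity of propagation (NVP) and the echo time in the example are illustrative figures, not values for any specific cable.

```python
# A minimal sketch of the distance estimate a TDR test performs. NVP (nominal
# velocity of propagation) expresses signal speed as a fraction of the speed
# of light; the NVP and echo time below are example figures.

C = 299_792_458  # speed of light in a vacuum, meters per second

def distance_to_fault_m(echo_time_s: float, nvp: float) -> float:
    """The pulse travels to the fault and back, so divide the round trip by 2."""
    return (nvp * C * echo_time_s) / 2

# An echo returning after 500 nanoseconds on a cable with an NVP of 0.69:
print(round(distance_to_fault_m(500e-9, 0.69), 1))   # roughly 51.7 meters
```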
The propagation delays of different wire pairs in a single cable can differ slightly because of differences in the
number of twists and electrical properties of each wire pair. The delay difference between pairs is called
delay skew. Delay skew is a critical parameter for high-speed networks in which data is simultaneously
transmitted over multiple wire pairs, such as 1000BASE-T Ethernet. If the delay skew between the pairs is
too great, the bits arrive at different times and the data cannot be properly reassembled. Even though a
cable link may not be intended for this type of data transmission, testing for delay skew helps ensure that the
link will support future upgrades to high-speed networks.
All cable links in a LAN must pass all of the tests previously mentioned, as specified in the TIA/EIA-568-B
standard, in order to be considered standards compliant. A certification meter must be used to verify that all
of the tests are passed. These tests ensure that the cable
links will function reliably at high speeds and frequencies. Cable tests should be performed when the cable is
installed and afterward on a regular basis to ensure that LAN cabling meets industry standards. High quality
cable test instruments should be correctly used to ensure that the tests are accurate. Test results should also
be carefully documented.
4.2.8 Testing optical fiber (Optional)
This page will explain how optical fiber is tested.
A fiber link consists of two separate glass fibers functioning as independent data pathways. One fiber carries
transmitted signals in one direction, while the second carries signals in the opposite direction. Each glass
fiber is surrounded by a sheath that light cannot pass through, so there are no crosstalk problems on fiber
optic cable. External electromagnetic interference or noise has no effect on fiber cabling. Attenuation does
occur on fiber links, but to a lesser extent than on copper cabling.
Fiber links are subject to the optical equivalent of UTP impedance discontinuities. When light encounters
an optical discontinuity, like an impurity in the glass or a micro-fracture, some of the light signal is reflected
back in the opposite direction. This means only a fraction of the original light signal will continue down the
fiber towards the receiver. This results in a reduced amount of light energy arriving at the receiver, making
signal recognition difficult. Just as with UTP cable, improperly installed connectors are the main cause of
light reflection and signal strength loss in optical fiber.
Because noise is not an issue when transmitting on optical fiber, the main concern with a fiber link is the
strength of the light signal that arrives at the receiver. If attenuation weakens the light signal at the receiver,
then data errors will result. Testing fiber optic cable primarily involves shining a light down the fiber and
measuring whether a sufficient amount of light reaches the receiver.
On a fiber optic link, the acceptable amount of signal power loss that can occur without dropping below the
requirements of the receiver must be calculated. This calculation is referred to as the optical link loss budget.
A fiber test instrument, known as a light source and power meter, checks whether the optical link loss budget
has been exceeded. If the fiber fails the test, another cable test instrument can be used to indicate where
the optical discontinuities occur along the length of the cable link. An optical TDR known as an OTDR is
capable of locating these discontinuities. Usually, the problem is one or more improperly attached
connectors. The OTDR will indicate the location of the faulty connections that must be replaced. When the
faults are corrected, the cable must be retested.
The standards for testing are updated regularly. The next page will introduce a new standard.
4.2.9 A new standard (Optional)
This page discusses the new test standards for Category 6 cable.
On June 20, 2002, the Category 6 addition to the TIA-568 standard was published. The official title of the
standard is ANSI/TIA/EIA-568-B.2-1. This new standard specifies the original set of performance parameters
that need to be tested for Ethernet cabling as well as the passing scores for each of these tests. Cables
certified as Category 6 cable must pass all ten tests.
Although the Category 6 tests are essentially the same as those specified by the Category 5 standard,
Category 6 cable must pass the tests with higher scores to be certified. Category 6 cable must be capable of
carrying frequencies up to 250 MHz and must have lower levels of crosstalk and return loss.
A quality cable tester similar to the Fluke DSP-4000 series or Fluke OMNIScanner2 can perform all the test
measurements required for Category 5, Category 5e, and Category 6 cable certifications of both permanent
links and channel links. Figure shows the Fluke DSP-4100 Cable Analyzer with a DSP-LIA013
Channel/Traffic Adapter for Category 5e.
The Lab Activities will teach students how to use a cable tester.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Data symbolizing characters, words, pictures, video, or music can be represented electrically by voltage
patterns on wires and in electronic devices. The data represented by these voltage patterns can be
converted to light waves or radio waves, and then back to voltage patterns. Waves are energy traveling from
one place to another, and are created by disturbances. All waves have similar attributes such as amplitude,
period, and frequency. Sine waves are periodic, continuously varying functions. Analog signals look like sine
waves. Square waves are periodic functions whose values remain constant for a period of time and then
change abruptly. Digital signals look like square waves.
Exponents are used to represent very large or very small numbers. The base of a number raised to a
positive exponent is equal to the base multiplied by itself exponent times. For example, 10^3 = 10 x 10 x 10 =
1000. Logarithms are similar to exponents. The logarithm to the base 10 of a number equals the exponent to
which 10 would have to be raised in order to equal that number. For example, log10(1000) = 3 because 10^3 =
1000.
Decibels are measurements of a gain or loss in the power of a signal. Negative values represent losses and
positive values represent gains. Time and frequency analysis can both be used to graph the voltage or power
of a signal.
Undesirable signals in a communications system are called noise. Noise originates from other cables, radio
frequency interference (RFI), and electromagnetic interference (EMI). Noise may affect all signal frequencies
or a subset of frequencies.
Analog bandwidth is the frequency range that is associated with certain analog transmission, such as
television or FM radio. Digital bandwidth measures how much information can flow from one place to another
in a given amount of time. Its units are in various multiples of bits per second.
On copper cable, data signals are represented by voltage levels that correspond to binary ones and zeros. In
order for the LAN to operate properly, the receiving device must be able to accurately interpret the bit signal.
Proper cable installation according to standards increases LAN reliability and performance.
Signal degradation is due to various factors such as attenuation, impedance mismatch, noise, and several
types of crosstalk. Attenuation is the decrease in signal amplitude over the length of a link. Impedance is a
measurement of resistance to the electrical signal. Cables and the connectors used on them must have
similar impedance values or some of the data signal may be reflected back from a connector. This is referred
to as impedance mismatch or impedance discontinuity. Noise is any electrical energy on the transmission
cable that makes it difficult for a receiver to interpret the data sent from the transmitter. Crosstalk involves the
transmission of signals from one wire to a nearby wire. There are three distinct types of crosstalk: Near-end
Crosstalk (NEXT), Far-end Crosstalk (FEXT), and Power Sum Near-end Crosstalk (PSNEXT).
STP and UTP cable are designed to take advantage of the effects of crosstalk in order to minimize noise.
Additionally, STP contains an outer conductive shield and inner foil shields that make it less susceptible to
noise. UTP contains no shielding and is more susceptible to external noise but is the most frequently used
because it is inexpensive and easier to install.
Fiber-optic cable is used to transmit data signals by increasing and decreasing the intensity of light to
represent binary ones and zeros. The strength of a light signal does not diminish like the strength of an
electrical signal does over an identical run length. Optical signals are not affected by electrical noise, and
optical fiber does not need to be grounded. Therefore, optical fiber is often used between buildings and
between floors within a building.
The TIA/EIA-568-B standard specifies ten tests that a copper cable must pass if it will be used for
modern, high-speed Ethernet LANs. Optical fiber must also be tested according to networking standards.
Category 6 cable must meet more rigorous frequency testing standards than Category 5 cable.
5.1 Cabling LANs
Overview
Even though each LAN is unique, there are many design aspects that are common to all LANs. For example,
most LANs follow the same standards and use the same components. This module presents information on
elements of Ethernet LANs and common LAN devices.
There are several types of WAN connections. They range from dial-up to broadband access and differ in
bandwidth, cost, and required equipment. This module presents information on the various types of WAN
connections.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Identify characteristics of Ethernet networks
Identify straight-through, crossover, and rollover cables
Describe the function, advantages, and disadvantages of repeaters, hubs, bridges, switches, and
wireless network components
Describe the function of peer-to-peer networks
Describe the function, advantages, and disadvantages of client-server networks
Describe and differentiate between serial, ISDN, DSL, and cable modem WAN connections
Identify router serial ports, cables, and connectors
Identify and describe the placement of equipment used in various WAN configurations
5.1.1 LAN physical layer
This page describes the LAN physical layer.
Various symbols are used to represent media types. Token Ring is represented by a circle. FDDI is
represented by two concentric circles and the Ethernet symbol is represented by a straight line. Serial
connections are represented by a lightning bolt.
Each computer network can be built with many different media types. The function of media is to carry a flow
of information through a LAN. Wireless LANs use the atmosphere, or space, as the medium. Other
networking media confine network signals to a wire, cable, or fiber. Networking media are considered Layer
1, or physical layer, components of LANs.
Each type of media has advantages and disadvantages. These are based on the following factors:
Cable length
Cost
Ease of installation
Susceptibility to interference
Coaxial cable, optical fiber, and space can carry network signals. This module will focus on Category 5 UTP,
which includes the Category 5e family of cables.
Many topologies support LANs, as well as many different physical media. Figure shows a subset of
physical layer implementations that can be deployed to support Ethernet.
The next page explains how Ethernet is implemented in a campus environment.
5.1.4 Connection media
This page describes the different connection types used by each physical layer implementation, as shown in
Figure . The RJ-45 connector and jack are the most common. RJ-45 connectors are discussed in more
detail in the next section.
The connector on a NIC may not match the media to which it needs to connect. As shown in Figure , an
interface may exist for the 15-pin attachment unit interface (AUI) connector. The AUI connector allows
different media to connect when used with the appropriate transceiver. A transceiver is an adapter that
converts one type of connection to another. A transceiver will usually convert an AUI to an RJ-45, a coax, or
a fiber optic connector. On 10BASE5 Ethernet, or Thicknet, a short cable is used to connect the AUI with a
transceiver on the main cable.
Use straight-through cables for the following connections:
Switch to PC or server
Hub to PC or server
Use crossover cables for the following connections:
Switch to switch
Switch to hub
Hub to hub
Router to router
PC to PC
Router to PC
Figure illustrates how a variety of cable types may be required in a given network. The category of UTP
cable required is based on the type of Ethernet that is chosen.
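The connection lists above can be restated as a small lookup helper. The Python sketch below is illustrative only; the device names and the function are not part of any standard, but the pairings follow the straight-through, crossover, and rollover rules given in this module.

```python
# A small helper that restates the connection lists above as a lookup.
# Device names and function are illustrative; pairings follow this section.

CROSSOVER_PAIRS = {
    frozenset({"switch"}), frozenset({"hub"}), frozenset({"router"}),
    frozenset({"pc"}), frozenset({"switch", "hub"}), frozenset({"router", "pc"}),
}

def cable_for(device_a: str, device_b: str) -> str:
    # frozenset collapses like devices (e.g. switch-switch) into one element
    pair = frozenset({device_a.lower(), device_b.lower()})
    if pair == frozenset({"pc", "router console"}):
        return "rollover"
    if pair in CROSSOVER_PAIRS:
        return "crossover"
    return "straight-through"

print(cable_for("switch", "pc"))          # straight-through
print(cable_for("switch", "switch"))      # crossover
print(cable_for("pc", "router console"))  # rollover
```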
The Lab Activity shows the termination process for an RJ-45 jack.
The Interactive Media Activities provide detailed views of a straight-through and crossover cable.
5.1.6 Repeaters
This page will discuss how a repeater is used on a network.
The term repeater comes from the early days of long distance communication. A repeater was a person on
one hill who would repeat the signal that was just received from the person on the previous hill. The process
would repeat until the message arrived at its destination. Telegraph, telephone, microwave, and optical
communications use repeaters to strengthen signals sent over long distances.
A repeater receives a signal, regenerates it, and passes it on. It can regenerate and retime network signals
at the bit level to allow them to travel a longer distance on the media. Ethernet and IEEE 802.3 implement
a rule, known as the 5-4-3 rule, for the number of repeaters and segments on shared access Ethernet
backbones in a tree topology. The 5-4-3 rule divides the network into two types of physical segments:
populated (user) segments, and unpopulated (link) segments. User segments have users' systems
connected to them. Link segments are used to connect the network repeaters together. The rule mandates
that between any two nodes on the network, there can only be a maximum of five segments, connected
through four repeaters, or concentrators, and only three of the five segments may contain user connections.
The Ethernet protocol requires that a signal sent out over the LAN reach every part of the network within a
specified length of time. The 5-4-3 rule ensures this. Each repeater that a signal goes through adds a small
amount of time to the process, so the rule is designed to minimize transmission times of the signals. Too
much latency on the LAN increases the number of late collisions and makes the LAN less efficient.
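As a rough illustration of the rule, the following Python sketch (written for this text, under the simplifying assumption that a path is just an ordered list of segments with a repeater between each pair) checks the three limits for a path between two nodes.

```python
# Hedged sketch of a 5-4-3 rule check. Each segment on the path between two
# nodes is marked True (populated, has user systems) or False (link segment);
# one repeater is assumed between each pair of adjacent segments.

def path_obeys_5_4_3(segments: list) -> bool:
    num_segments = len(segments)
    num_repeaters = max(num_segments - 1, 0)   # repeaters join consecutive segments
    num_populated = sum(1 for populated in segments if populated)
    return num_segments <= 5 and num_repeaters <= 4 and num_populated <= 3

if __name__ == "__main__":
    # Five segments, four repeaters, three populated segments: allowed.
    print(path_obeys_5_4_3([True, False, True, False, True]))   # True
    # Four of five segments populated: violates the rule.
    print(path_obeys_5_4_3([True, True, True, False, True]))    # False
```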
5.1.7 Hubs
This page will describe the three types of hubs.
Hubs are actually multiport repeaters. The difference between hubs and repeaters is usually the number of
ports that each device provides. A typical repeater usually has two ports. A hub generally has from 4 to 24
ports. Hubs are most commonly used in Ethernet 10BASE-T or 100BASE-T networks.
The use of a hub changes the network from a linear bus with each device plugged directly into the wire to a
star topology. Data that arrives over the cables to a hub port is electrically repeated on all the other ports
connected to the network segment.
Hubs come in three basic types:
Passive A passive hub serves as a physical connection point only. It does not manipulate or view
the traffic that crosses it. It does not boost or clean the signal. A passive hub is used only to share
the physical media. A passive hub does not need electrical power.
Active An active hub must be plugged into an electrical outlet because it needs power to amplify a
signal before it is sent to the other ports.
Intelligent Intelligent hubs are sometimes called smart hubs. They function like active hubs with
microprocessor chips and diagnostic capabilities. Intelligent hubs are more expensive than active
hubs. They are also more useful in troubleshooting situations.
Devices attached to a hub receive all traffic that travels through the hub. If many devices are attached to the
hub, collisions are more likely to occur. A collision occurs when two or more workstations send data over the
network wire at the same time. All data is corrupted when this occurs. All devices that are connected to the
same network segment are members of the same collision domain.
Sometimes hubs are called concentrators since they are central connection points for Ethernet LANs.
The Lab Activity will teach students about the price of different network components.
The next page discusses wireless networks.
5.1.8 Wireless
This page will explain how a wireless network can be created with much less cabling than other networks.
Wireless signals are electromagnetic waves that travel through the air. Wireless networks use radio
frequency (RF), laser, infrared (IR), satellite, or microwaves to carry signals between computers without a
permanent cable connection. The only permanent cabling can be to the access points for the network.
Workstations within the range of the wireless network can be moved easily without the need to connect and
reconnect network cables.
A common application of wireless data communication is for mobile use. Some examples of mobile use
include commuters, airplanes, satellites, remote space probes, space shuttles, and space stations.
At the core of wireless communication are devices called transmitters and receivers. The transmitter
converts source data to electromagnetic waves that are sent to the receiver. The receiver then converts
these electromagnetic waves back into data for the destination. For two-way communication, each device
requires a transmitter and a receiver. Many networking device manufacturers build the transmitter and
receiver into a single unit called a transceiver or wireless network card. All devices in a WLAN must have
the correct wireless network card installed.
The two most common wireless technologies used for networking are IR and RF. IR technology has its
weaknesses. Workstations and digital devices must be in the line of sight of the transmitter to work correctly.
An infrared-based network can be used when all the digital devices that require network connectivity are in
one room. IR networking technology can be installed quickly. However, the data signals can be weakened or
obstructed by people who walk across the room or by moisture in the air. New IR technologies will be able to
work out of sight.
RF technology allows devices to be in different rooms or buildings. The limited range of radio signals restricts
the use of this kind of network. RF technology can be on single or multiple frequencies. A single radio
frequency is subject to outside interference and geographic obstructions. It is also easily monitored by
others, which makes the transmissions of data insecure. Spread spectrum uses multiple frequencies to
increase the immunity to noise and to make it difficult for outsiders to intercept data transmissions.
Two approaches that are used to implement spread spectrum for WLAN transmissions are Frequency
Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS). The technical details of
how these technologies work are beyond the scope of this course.
A large LAN can be broken into smaller segments. The next page will explain how bridges are used to
accomplish this.
5.1.9 Bridges
This page will explain the function of bridges in a LAN.
There are times when it is necessary to break up a large LAN into smaller and more easily managed
segments.
This decreases the amount of traffic on a single LAN and can extend the geographical area
past what a single LAN can support. The devices that are used to connect network segments together
include bridges, switches, routers, and gateways. Switches and bridges operate at the data link layer of the
OSI model. The function of the bridge is to make intelligent decisions about whether or not to pass signals on
to the next segment of a network.
When a bridge receives a frame on the network, the destination MAC address is looked up in the bridge
table to determine whether to filter, flood, or copy the frame onto another segment. This decision process occurs as follows (a minimal sketch follows below):
If the destination device is on the same segment as the frame, the bridge will not send the frame
onto other segments. This process is known as filtering.
If the destination device is on a different segment, the bridge forwards the frame to the appropriate
segment.
If the destination address is unknown to the bridge, the bridge forwards the frame to all segments
except the one on which it was received. This process is known as flooding.
If placed strategically, a bridge can greatly improve network performance.
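The sketch referenced in the list above is shown here. It is a minimal Python illustration of the filter, forward, and flood decisions; the class and method names are invented for the example, and a real bridge also ages out table entries that have not been used for some time.

```python
# Minimal bridge-table sketch (illustrative, not a real implementation). The
# table maps each learned source MAC address to the segment it was seen on.

class Bridge:
    def __init__(self, segments):
        self.segments = set(segments)
        self.table = {}                                  # MAC address -> segment

    def receive(self, src_mac, dst_mac, arrival_segment):
        self.table[src_mac] = arrival_segment            # learn the source address

        known_segment = self.table.get(dst_mac)
        if known_segment == arrival_segment:
            return []                                    # filter: destination is on the same segment
        if known_segment is not None:
            return [known_segment]                       # forward to the known segment
        return sorted(self.segments - {arrival_segment}) # flood to all other segments

if __name__ == "__main__":
    bridge = Bridge(segments=["A", "B", "C"])
    print(bridge.receive("00:00:0C:11:11:11", "00:00:0C:22:22:22", "A"))  # ['B', 'C']  flood
    print(bridge.receive("00:00:0C:22:22:22", "00:00:0C:11:11:11", "B"))  # ['A']       forward
```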
5.1.10 Switches
This page will explain the function of switches.
A switch is sometimes described as a multiport bridge. A typical bridge may have only two ports that link
two network segments. A switch can have multiple ports based on the number of network segments that
need to be linked. Like bridges, switches learn information about the data packets that are received from
computers on the network. Switches use this information to build tables to determine the destination of data
that is sent between computers on the network.
Although there are some similarities between the two, a switch is a more sophisticated device than a bridge.
A bridge determines whether the frame should be forwarded to the other network segment based on the
destination MAC address. A switch has many ports with many network segments connected to them. A
switch chooses the port to which the destination device or workstation is connected. Ethernet switches are
popular connectivity solutions because they improve network speed, bandwidth, and performance.
Switching is a technology that alleviates congestion in Ethernet LANs. Switches reduce traffic and increase
bandwidth. Switches can easily replace hubs because switches work with the cable infrastructures that are
already in place. This improves performance with minimal changes to a network.
All switching equipment performs two basic operations. The first operation is called switching data frames.
This is the process by which a frame is received on an input medium and then transmitted to an output
medium. The second is the maintenance of switching operations where switches build and maintain
switching tables and search for loops.
Switches operate at much higher speeds than bridges and can support new functionality, such as virtual
LANs.
An Ethernet switch has many benefits. One benefit is that it allows many users to communicate at the same
time through the use of virtual circuits and dedicated network segments in a virtually collision-free
environment. This maximizes the bandwidth available on the shared medium. Another benefit is that a
switched LAN environment is very cost effective since the hardware and cables in place can be reused.
The Lab activity will help students understand the price of a LAN switch.
The next page will discuss NICs.
5.1.11 Host connectivity
This page will explain how NICs provide network connectivity.
The function of a NIC is to connect a host device to the network medium. A NIC is a printed circuit board that
fits into the expansion slot on the motherboard or peripheral device of a computer.
The NIC is also
referred to as a network adapter. On laptop or notebook computers a NIC is the size of a credit card.
NICs are considered Layer 2 devices because each NIC carries a unique code called a MAC address. This
address is used to control data communication for the host on the network. More will be learned about the
MAC address later. NICs control host access to the medium.
In some cases the type of connector on the NIC does not match the type of media that needs to be
connected to it. A good example is a Cisco 2500 router. This router has an AUI connector. That AUI
connector needs to connect to a UTP Category 5 Ethernet cable. A transceiver is used to do this. A
transceiver converts one type of signal or connector to another. For example, a transceiver can connect a
15-pin AUI interface to an RJ-45 jack. It is considered a Layer 1 device because it only works with bits and
not with any address information or higher-level protocols.
NICs have no standardized symbol. It is implied that, when networking devices are attached to network
media, there is a NIC or NIC-like device present. A dot on a topology map represents either a NIC interface
or port, which acts like a NIC.
The next page discusses peer-to-peer networks.
5.1.12 Peer-to-peer
This page covers peer-to-peer networks.
When LAN and WAN technologies are used, many computers are interconnected to provide services to their
users. To accomplish this, networked computers take on different roles or functions in relation to each other.
Some types of applications require computers to function as equal partners. Other types of applications
distribute their work so that one computer functions to serve a number of others in an unequal relationship.
Two computers generally use request and response protocols to communicate with each other. One
computer issues a request for a service, and a second computer receives and responds to that request. The
requestor acts like a client and the responder acts like a server.
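As a simple illustration of this request and response exchange, the hedged Python sketch below uses TCP sockets; the port number and message text are arbitrary choices made for the example. Run server() in one process and client() in another.

```python
# Illustrative request/response exchange over TCP (names and port are arbitrary).
import socket

HOST, PORT = "127.0.0.1", 50007

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()                        # wait for a client to connect
        with conn:
            request = conn.recv(1024)                 # receive the request
            conn.sendall(b"response to: " + request)  # respond to it

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"please send the file")          # issue a request for a service
        print(cli.recv(1024).decode())                # print the server's response
```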
In a peer-to-peer network, networked computers act as equal partners, or peers. As peers, each computer
can take on the client function or the server function. Computer A may request a file from Computer B,
which then sends the file to Computer A. Computer A acts like the client and Computer B acts like the server.
At a later time, Computers A and B can reverse roles.
In a peer-to-peer network, individual users control their own resources. The users may decide to share
certain files with other users.
The users may also require passwords before they allow others to access
their resources. Since individual users make these decisions, there is no central point of control or
administration in the network. In addition, individual users must back up their own systems to be able to
recover from data loss in case of failures. When a computer acts as a server, the user of that machine may
experience reduced performance as the machine serves the requests made by other systems.
Peer-to-peer networks are relatively easy to install and operate. No additional equipment is necessary
beyond a suitable operating system installed on each computer. Since users control their own resources, no
dedicated administrators are needed.
As networks grow, peer-to-peer relationships become increasingly difficult to coordinate. A peer-to-peer
network works well with ten or fewer computers. Since peer-to-peer networks do not scale well, their
efficiency decreases rapidly as the number of computers on the network increases. Also, individual users
control access to the resources on their computers, which means security may be difficult to maintain. The
client/server model of networking can be used to overcome the limitations of the peer-to-peer network.
Students will create a simple peer-to-peer network in the Lab Activity.
The next page discusses a client/server network.
5.1.13 Client/server
This page will describe a client/server environment.
In a client/server arrangement, network services are located on a dedicated computer called a server. The
server responds to the requests of clients. The server is a central computer that is continuously available to
respond to requests from clients for file, print, application, and other services. Most network operating
systems adopt the form of a client/server relationship. Typically, desktop computers function as clients and
one or more computers with additional processing power, memory, and specialized software function as
servers.
Servers are designed to handle requests from many clients simultaneously. Before a client can access the
server resources, the client must be identified and be authorized to use the resource. Each client is assigned
an account name and password that is verified by an authentication service. The authentication service
guards access to the network. With the centralization of user accounts, security, and access control, server-based networks simplify the administration of large networks.
The concentration of network resources such as files, printers, and applications on servers also makes it
easier to back-up and maintain the data. Resources can be located on specialized, dedicated servers for
easier access. Most client/server systems also include ways to enhance the network with new services that
extend the usefulness of the network.
The centralization of functions in a client/server network has substantial advantages and some disadvantages. Although a centralized server enhances security, ease of access, and control, it introduces a single point of failure into the network. Without an operational server, the network cannot function at all. Servers require trained, expert staff to administer and maintain them. Server systems also require additional hardware and specialized software that add to the cost.
Figures and summarize the advantages and disadvantages of peer-to-peer and client/server networks.
In the Lab Activities, students will build a hub-based network and a switch-based network.
This page concludes this lesson. The next lesson will discuss cabling WANs. The first page focuses on
the WAN physical layer.
5.2 Cabling WANs
Summary
This page summarizes the topics discussed in this module.
Ethernet is the most widely used LAN technology and can be implemented on a variety of media. Ethernet
technologies provide a variety of network speeds, from 10 Mbps to Gigabit Ethernet, which can be applied to
appropriate areas of a network. Media and connector requirements differ for various Ethernet
implementations.
The connector on a network interface card (NIC) must match the media. A bayonet nut connector (BNC) is required to connect to coaxial cable. A fiber connector is required to connect to fiber media. The registered jack (RJ-45) connector used with twisted-pair wire is the most common type of connector used in LAN implementations.
When twisted-pair wire is used to connect devices, the appropriate wire sequence, or pinout, must be
determined as well. A crossover cable is used to connect two similar devices, such as two PCs. A straight-through cable is used to connect different devices, such as connections between a switch and a PC. A
rollover cable is used to connect a PC to the console port of a router.
Repeaters regenerate and retime network signals and allow them to travel a longer distance on the media.
Hubs are multi-port repeaters. Data arriving at a hub port is electrically repeated on all the other ports
connected to the same network segment, except for the port on which the data arrived. Sometimes hubs are
called concentrators, because hubs often serve as a central connection point for an Ethernet LAN.
A wireless network can be created with much less cabling than other networks. The only permanent cabling
might be to the access points for the network. At the core of wireless communication are devices called
transmitters and receivers. The transmitter converts source data to electromagnetic (EM) waves that are
passed to the receiver. The receiver then converts these electromagnetic waves back into data for the
destination. The two most common wireless technologies used for networking are infrared (IR) and radio
frequency (RF).
There are times when it is necessary to break up a large LAN into smaller, more easily managed segments.
The devices that are used to define and connect network segments include bridges, switches, routers, and
gateways.
A bridge uses the destination MAC address to determine whether to filter, flood, or copy the frame onto
another segment. If placed strategically, a bridge can greatly improve network performance.
A switch is sometimes described as a multi-port bridge. Although there are some similarities between the
two, a switch is a more sophisticated device than a bridge. Switches operate at much higher speeds than
bridges and can support new functionality, such as virtual LANs.
Routers are responsible for routing data packets from source to destination within the LAN, and for providing
connectivity to the WAN. Within a LAN environment the router controls broadcasts, provides local address
resolution services, such as ARP and RARP, and may segment the network using a subnetwork structure.
Computers typically communicate with each other by using request/response protocols. One computer
issues a request for a service, and a second computer receives and responds to that request. In a peer-to-peer network, networked computers act as equal partners, or peers. As peers, each computer can take on
the client function or the server function. In a client/server arrangement, network services are located on a
dedicated computer called a server. The server responds to the requests of clients.
WAN connection types include high-speed serial links, ISDN, DSL, and cable modems. Each of these
requires a specific media and connector. To interconnect the ISDN BRI port to the service-provider device, a
UTP Category 5 straight-through cable with RJ-45 connectors is used. A phone cable and an RJ-11
connector are used to connect a router for DSL service. Coaxial cable and a BNC connector are used to
connect a router for cable service.
In addition to the connection type, it is necessary to determine whether DTE or DCE connectors are
required on internetworking devices. The DTE is the endpoint of the user's private network on the WAN
link. The DCE is typically the point where responsibility for delivering data passes to the service provider.
When connecting directly to a service provider, or to a device such as a CSU/DSU that will perform
signal clocking, the router is a DTE and needs a DTE serial cable. This is typically the case for routers.
However, there are cases when the router will need to be the DCE.
6.1 Ethernet Fundamentals
Overview
Ethernet is now the dominant LAN technology in the world. Ethernet is a family of LAN technologies that may
be best understood with the OSI reference model. All LANs must deal with the basic issue of how individual
stations, or nodes, are named. Ethernet specifications support different media, bandwidths, and other Layer
1 and 2 variations. However, the basic frame format and address scheme is the same for all varieties of
Ethernet.
Various MAC strategies have been invented to allow multiple stations to access physical media and network
devices. It is important to understand how network devices gain access to the network media before students
can comprehend and troubleshoot the entire network.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Describe the basics of Ethernet technology
Explain naming rules of Ethernet technology
Explain how Ethernet relates to the OSI model
Describe the Ethernet framing process and frame structure
List Ethernet frame field names and purposes
Identify the characteristics of CSMA/CD
Describe Ethernet timing, interframe spacing, and backoff time after a collision
Define Ethernet errors and collisions
Explain the concept of auto-negotiation in relation to speed and duplex
6.1.1 Introduction to Ethernet
This page provides an introduction to Ethernet. Most of the traffic on the Internet originates and ends with
Ethernet connections. Since it began in the 1970s, Ethernet has evolved to meet the increased demand for
high-speed LANs. When optical fiber media was introduced, Ethernet adapted to take advantage of the
superior bandwidth and low error rate that fiber offers. Now the same protocol that transported data at 3
Mbps in 1973 can carry data at 10 Gbps.
The success of Ethernet is due to the following factors:
Simplicity and ease of maintenance
Ability to incorporate new technologies
Reliability
Low cost of installation and upgrade
The introduction of Gigabit Ethernet has extended the original LAN technology to distances that make
Ethernet a MAN and WAN standard.
The original idea for Ethernet was to allow two or more hosts to use the same medium with no interference
between the signals. This problem of multiple user access to a shared medium was studied in the early
1970s at the University of Hawaii. A system called Alohanet was developed to allow various stations on the
Hawaiian Islands structured access to the shared radio frequency band in the atmosphere.
This work later
formed the basis for the Ethernet access method known as CSMA/CD.
The first LAN in the world was the original version of Ethernet. Robert Metcalfe and his coworkers at Xerox
designed it more than thirty years ago. The first Ethernet standard was published in 1980 by a consortium of
Digital Equipment Corporation, Intel, and Xerox (DIX). Metcalfe wanted Ethernet to be a shared standard from
which everyone could benefit, so it was released as an open standard. The first products that were
developed from the Ethernet standard were sold in the early 1980s. Ethernet transmitted at up to 10 Mbps
over thick coaxial cable up to a distance of 2 kilometers (km). This type of coaxial cable was referred to as
thicknet and was about the width of a small finger.
In 1985, the IEEE standards committee for Local and Metropolitan Area Networks published standards for LANs. These standards start with the number 802. The standard for Ethernet is 802.3. The IEEE wanted to make sure that its standards were compatible with the International Organization for Standardization (ISO) and the OSI model.
To do this, the IEEE 802.3 standard had to address the needs of Layer 1 and the lower portion of Layer 2 of
the OSI model. As a result, some small modifications to the original Ethernet standard were made in 802.3.
The differences between the two standards were so minor that any Ethernet NIC can transmit and receive
both Ethernet and 802.3 frames. Essentially, Ethernet and IEEE 802.3 are the same standards.
The 10-Mbps bandwidth of Ethernet was more than enough for the slow PCs of the 1980s. By the early
1990s PCs became much faster, file sizes increased, and data flow bottlenecks occurred. Most were caused
by the low availability of bandwidth. In 1995, IEEE announced a standard for a 100-Mbps Ethernet. This was
followed by standards for Gigabit Ethernet in 1998 and 1999.
All the standards are essentially compatible with the original Ethernet standard. An Ethernet frame could
leave an older coax 10-Mbps NIC in a PC, be placed onto a 10-Gbps Ethernet fiber link, and end up at a
100-Mbps NIC. As long as the packet stays on Ethernet networks it is not changed. For this reason Ethernet
is considered very scalable. The bandwidth of the network could be increased many times while the Ethernet
technology remains the same.
The original Ethernet standard has been amended many times to manage new media and higher
transmission rates. These amendments provide standards for new technologies and maintain compatibility
between Ethernet variations.
6.1.2 IEEE Ethernet naming rules
This page focuses on the Ethernet naming rules developed by IEEE.
Ethernet is not one networking technology, but a family of networking technologies that includes Legacy, Fast
Ethernet, and Gigabit Ethernet. Ethernet speeds can be 10, 100, 1000, or 10,000 Mbps. The basic frame
format and the IEEE sublayers of OSI Layers 1 and 2 remain consistent across all forms of Ethernet.
When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new
supplement to the 802.3 standard. The new supplements are given a one or two letter designation such as
802.3u. An abbreviated description, called an identifier, is also assigned to the supplement.
The abbreviated description consists of the following elements (a short parsing sketch follows this list):
A number that indicates the number of Mbps transmitted
The word BASE to indicate that baseband signaling is used
One or more letters of the alphabet indicating the type of medium used. For example, F = fiber-optic cable and T = copper unshielded twisted pair
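The parsing sketch referenced above is shown here. It is written for this text only and simply pulls the three elements out of identifiers such as 10BASE-T, 100BASE-FX, or 10BROAD36; the medium hint is taken from the first letter of the suffix.

```python
# Illustrative parser for IEEE-style identifiers (an assumption of this text).
import re

MEDIA_HINTS = {"T": "unshielded twisted pair", "F": "fiber-optic cable"}

def parse_identifier(identifier: str) -> dict:
    match = re.fullmatch(r"(\d+)(BASE|BROAD)-?(\w+)", identifier.upper())
    if not match:
        raise ValueError(f"unrecognized identifier: {identifier}")
    mbps, signaling, suffix = match.groups()
    return {
        "mbps": int(mbps),
        "signaling": "baseband" if signaling == "BASE" else "broadband",
        "media": MEDIA_HINTS.get(suffix[0], suffix),   # first letter hints at the medium
    }

if __name__ == "__main__":
    print(parse_identifier("10BASE-T"))
    print(parse_identifier("100BASE-FX"))
```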
Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium. The
data signal is transmitted directly over the transmission medium.
In broadband signaling, the data signal is no longer placed directly on the transmission medium. Ethernet
used broadband signaling in the 10BROAD36 standard. 10BROAD36 is the IEEE standard for an 802.3
Ethernet network using broadband transmission with thick coaxial cable running at 10 Mbps. 10BROAD36 is
now considered obsolete. An analog or carrier signal is modulated by the data signal and then transmitted.
Radio broadcasts and cable TV use broadband signaling.
IEEE cannot force manufacturers to fully comply with any standard. IEEE has two main objectives:
Supply the information necessary to build devices that comply with Ethernet standards
Promote innovation among manufacturers
Students will identify the IEEE 802 standards in the Interactive Media Activity.
6.1.3 Ethernet and the OSI model
This page will explain how Ethernet relates to the OSI model.
Ethernet operates in two areas of the OSI model. These are the lower half of the data link layer, which is
known as the MAC sublayer, and the physical layer.
Data that moves from one Ethernet station to another often passes through a repeater. All stations in the
same collision domain see traffic that passes through a repeater. A collision domain is a shared resource.
Problems that originate in one part of a collision domain will usually impact the entire collision domain.
A repeater forwards traffic to all other ports. A repeater never sends traffic out the same port from which it
was received. Any signal detected by a repeater will be forwarded. If the signal is degraded through
attenuation or noise, the repeater will attempt to reconstruct and regenerate the signal.
To guarantee minimum bandwidth and operability, standards specify the maximum number of stations per
segment, maximum segment length, and maximum number of repeaters between stations. Stations
separated by bridges or routers are in different collision domains.
Figure maps a variety of Ethernet technologies to the lower half of OSI Layer 2 and all of Layer 1. Ethernet
at Layer 1 involves signals, bit streams that travel on the media, components that put signals on media, and
various topologies. Ethernet Layer 1 performs a key role in the communication that takes place between
devices, but each of its functions has limitations. Layer 2 addresses these limitations.
Data link sublayers contribute significantly to technological compatibility and computer communications. The
MAC sublayer is concerned with the physical components that will be used to communicate the information.
The Logical Link Control (LLC) sublayer remains relatively independent of the physical equipment that will be
used for the communication process.
Figure maps a variety of Ethernet technologies to the lower half of OSI Layer 2 and all of Layer 1. While
there are other varieties of Ethernet, the ones shown are the most widely used.
The Interactive Media Activity reviews the layers of the OSI model.
6.1.4 Naming
This page will discuss the MAC addresses used by Ethernet networks.
An address system is required to uniquely identify computers and interfaces to allow for local delivery of
frames on the Ethernet. Ethernet uses MAC addresses that are 48 bits in length and expressed as 12
hexadecimal digits. The first six hexadecimal digits, which are administered by the IEEE, identify the
manufacturer or vendor. This portion of the MAC address is known as the Organizationally Unique Identifier (OUI). The remaining six hexadecimal digits represent the interface serial number or another value administered by the manufacturer. MAC addresses are sometimes referred to as burned-in addresses (BIAs) because they are burned into ROM and are copied into RAM when the NIC initializes.
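A minimal Python sketch, assuming the usual colon- or dash-separated notation, splits an address into these two parts.

```python
# Split a 48-bit MAC address into the IEEE-assigned OUI and the vendor-assigned part.
def split_mac(mac: str) -> tuple:
    digits = mac.replace(":", "").replace("-", "").upper()
    if len(digits) != 12:
        raise ValueError("a MAC address has 12 hexadecimal digits")
    return digits[:6], digits[6:]

if __name__ == "__main__":
    oui, vendor_part = split_mac("00:00:0C:12:34:56")
    print(oui)          # 00000C -> first six hex digits, administered by the IEEE
    print(vendor_part)  # 123456 -> last six hex digits, administered by the vendor
```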
At the data link layer MAC headers and trailers are added to upper layer data. The header and trailer contain
control information intended for the data link layer in the destination system. The data from upper layers is
encapsulated within the data link frame, between the header and trailer, and then sent out on the network.
The NIC uses the MAC address to determine if a message should be passed on to the upper layers of the
OSI model. The NIC does not use CPU processing time to make this assessment. This enables better
communication times on an Ethernet network.
When a device sends data on an Ethernet network, it can use the destination MAC address to open a
communication pathway to the other device. The source device attaches a header with the MAC address of
the intended destination and sends data through the network. As this data travels along the network media
the NIC in each device checks to see if the MAC address matches the physical destination address carried
by the data frame. If there is no match, the NIC discards the data frame. When the data reaches the
destination node, the NIC makes a copy and passes the frame up the OSI layers. On an Ethernet network,
all nodes must examine the MAC header.
All devices that are connected to the Ethernet LAN have MAC addressed interfaces. This includes
workstations, printers, routers, and switches.
6.1.5 Layer 2 framing
This page will explain how frames are created at Layer 2 of the OSI model.
Encoded bit streams, or data, on physical media represent a tremendous technological accomplishment, but
they, alone, are not enough to make communication happen. Framing provides essential information that
could not be obtained from coded bit streams alone. This information includes the following:
Which computers are in communication with each other
When communication between individual computers begins and when it ends
Which errors occurred while the computers communicated
Which computer will communicate next
Framing is the Layer 2 encapsulation process. A frame is the Layer 2 protocol data unit.
A voltage versus time graph could be used to visualize bits. However, it may be too difficult to graph address
and control information for larger units of data. Another type of diagram that could be used is the frame
format diagram, which is based on voltage versus time graphs. Frame format diagrams are read from left to
right, just like an oscilloscope graph. The frame format diagram shows different groupings of bits, or fields,
that perform other functions.
There are many different types of frames described by various standards. A single generic frame has sections called fields. Each field is composed of bytes. The names of the fields are as follows (a small framing sketch follows this list):
Start Frame field
Address field
Length/Type field
Data field
Frame Check Sequence (FCS) field
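The framing sketch referenced above assembles a generic frame from these fields. The field widths follow the Ethernet conventions discussed later in the module (6-octet addresses, a 2-octet Length/Type field, a 46-octet minimum Data field, and a 4-octet FCS); the start bytes are illustrative, and the FCS is left as a placeholder because its calculation is covered below.

```python
# Hedged sketch of the generic frame layout listed above (field widths follow
# common Ethernet conventions; the start bytes and FCS here are placeholders).
import struct

def build_frame(dst: bytes, src: bytes, length_type: int, data: bytes) -> bytes:
    start = b"\xAA" * 7 + b"\xAB"                  # start-of-frame bytes (illustrative values)
    if len(data) < 46:
        data = data + b"\x00" * (46 - len(data))   # pad the Data field to its minimum size
    header = dst + src + struct.pack("!H", length_type)
    fcs = b"\x00" * 4                              # placeholder; FCS calculation is shown later
    return start + header + data + fcs

if __name__ == "__main__":
    frame = build_frame(b"\xFF" * 6, b"\x00" * 6, 0x0800, b"hello")
    print(len(frame))                              # 8 + 14 + 46 + 4 = 72 octets
```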
When computers are connected to a physical medium, there must be a way to inform other computers when
they are about to transmit a frame. Various technologies do this in different ways. Regardless of the
technology, all frames begin with a sequence of bytes to signal the data transmission.
All frames contain naming information, such as the name of the source node, or source MAC address, and
the name of the destination node, or destination MAC address.
Most frames have some specialized fields. In some technologies, a Length field specifies the exact length of
a frame in bytes. Some frames have a Type field, which specifies the Layer 3 protocol used by the device
that wants to send data.
Frames are used to send upper-layer data and ultimately the user application data from a source to a
destination. The data package includes the message to be sent, or user application data. Extra bytes may be
added so frames have a minimum length for timing purposes. LLC bytes are also included with the Data field
in the IEEE standard frames. The LLC sublayer takes the network protocol data, which is an IP packet, and
adds control information to help deliver the packet to the destination node. Layer 2 communicates with the
upper layers through LLC.
All frames and the bits, bytes, and fields contained within them, are susceptible to errors from a variety of
sources. The FCS field contains a number that is calculated by the source node based on the data in the
frame. This number is added to the end of a frame that is sent. When the destination node receives the
frame the FCS number is recalculated and compared with the FCS number included in the frame. If the two
numbers are different, an error is assumed and the frame is discarded.
Because the source cannot detect that the frame has been discarded, retransmission has to be initiated by
higher layer connection-oriented protocols providing data flow control. Because these protocols, such as
TCP, expect frame acknowledgment, ACK, to be sent by the peer station within a certain time, retransmission
usually occurs.
There are three primary ways to calculate the FCS number (a brief verification sketch follows the list):
Cyclic redundancy check (CRC) performs calculations on the data.
Two-dimensional parity places individual bytes in a two-dimensional array and performs
redundancy checks vertically and horizontally on the array, creating an extra byte resulting in an
even or odd number of binary 1s.
Internet checksum adds the values of all of the data bits to arrive at a sum.
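The verification sketch referenced above illustrates the check itself: the sender appends a CRC computed over the frame contents, and the receiver recomputes and compares it. Python's zlib.crc32 stands in for the Ethernet CRC here; the real FCS uses a specific bit and byte ordering on the wire that this sketch does not reproduce.

```python
# Hedged FCS-style check: append a CRC at the sender, recompute at the receiver.
import zlib

def append_fcs(frame: bytes) -> bytes:
    return frame + zlib.crc32(frame).to_bytes(4, "big")

def fcs_is_valid(frame_with_fcs: bytes) -> bool:
    frame, received_fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(frame).to_bytes(4, "big") == received_fcs

if __name__ == "__main__":
    sent = append_fcs(b"some frame contents")
    print(fcs_is_valid(sent))            # True: the frame arrived intact
    corrupted = b"X" + sent[1:]          # a single byte changed in transit
    print(fcs_is_valid(corrupted))       # False: the frame would be discarded
```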
The node that transmits data must get the attention of other devices to start and end a frame. The
Length field indicates where the frame ends. The frame ends after the FCS. Sometimes there is a formal
byte sequence referred to as an end-frame delimiter.
6.2 Ethernet Operation
6.2.1 MAC
This page will define MAC and provide examples of deterministic and non-deterministic MAC protocols.
MAC refers to protocols that determine which computer in a shared-media environment, or collision domain,
is allowed to transmit data. MAC and LLC comprise the IEEE version of the OSI Layer 2. MAC and LLC are
sublayers of Layer 2. The two broad categories of MAC are deterministic and non-deterministic.
Examples of deterministic protocols include Token Ring and FDDI. In a Token Ring network, hosts are
arranged in a ring and a special data token travels around the ring to each host in sequence. When a host
wants to transmit, it seizes the token, transmits the data for a limited time, and then forwards the token to the
next host in the ring. Token Ring is a collisionless environment since only one host can transmit at a time.
Non-deterministic MAC protocols use a first-come, first-served approach. CSMA/CD is a simple system. The NIC listens for the absence of a signal on the media and then begins to transmit. If two nodes transmit at the same time, a collision occurs, the signals are corrupted, and neither transmission succeeds.
Three common Layer 2 technologies are Token Ring, FDDI, and Ethernet. All three specify Layer 2 issues,
LLC, naming, framing, and MAC, as well as Layer 1 signaling components and media issues. The specific
technologies for each are as follows:
Ethernet uses a logical bus topology to control information flow on a linear bus and a physical star
or extended star topology for the cables
Token Ring uses a logical ring topology to control information flow and a physical star topology
FDDI uses a logical ring topology to control information flow and a physical dual-ring topology
The next page explains how collisions are avoided in an Ethernet network.
6.2.3 Ethernet timing
This page explains the importance of slot times in an Ethernet network.
The basic rules and specifications for proper operation of Ethernet are not particularly complicated, though
some of the faster physical layer implementations are becoming so. Despite the basic simplicity, when a
problem occurs in Ethernet it is often quite difficult to isolate the source. Because of the common bus
architecture of Ethernet, also described as a distributed single point of failure, the scope of the problem
usually encompasses all devices within the collision domain. In situations where repeaters are used, this can
include devices up to four segments away.
Any station on an Ethernet network wishing to transmit a message first listens to ensure that no other
station is currently transmitting. If the cable is quiet, the station will begin transmitting immediately. The
electrical signal takes time to travel down the cable (delay), and each subsequent repeater introduces a
small amount of latency in forwarding the frame from one port to the next. Because of the delay and latency,
it is possible for more than one station to begin transmitting at or near the same time. This results in a
collision.
If the attached station is operating in full duplex then the station may send and receive simultaneously and
collisions should not occur. Full-duplex operation also changes the timing considerations and eliminates the
concept of slot time. Full-duplex operation allows for larger network architecture designs since the timing
restriction for collision detection is removed.
In half duplex, assuming that a collision does not occur, the sending station will transmit 64 bits of timing
synchronization information that is known as the preamble. The sending station will then transmit the
following information:
Destination and source MAC addressing information
Certain other header information
The actual data payload
Checksum (FCS) used to ensure that the message was not corrupted along the way
Stations receiving the frame recalculate the FCS to determine if the incoming message is valid and then
pass valid messages to the next higher layer in the protocol stack.
10 Mbps and slower versions of Ethernet are asynchronous. Asynchronous means that each receiving
station will use the eight octets of timing information to synchronize the receive circuit to the incoming data,
and then discard it. 100 Mbps and higher speed implementations of Ethernet are synchronous. Synchronous
means the timing information is not required, however for compatibility reasons the Preamble and Start
Frame Delimiter (SFD) are present.
For all speeds of Ethernet transmission at or below 1000 Mbps, the standard describes how a transmission
may be no smaller than the slot time. Slot time for 10 and 100-Mbps Ethernet is 512 bit-times, or 64 octets.
Slot time for 1000-Mbps Ethernet is 4096 bit-times, or 512 octets. Slot time is calculated assuming maximum
cable lengths on the largest legal network architecture. All hardware propagation delay times are at the legal
maximum and the 32-bit jam signal is used when collisions are detected.
The actual calculated slot time is just longer than the theoretical amount of time required to travel between
the furthest points of the collision domain, collide with another transmission at the last possible instant, and
then have the collision fragments return to the sending station and be detected. For the system to work the
first station must learn about the collision before it finishes sending the smallest legal frame size. To allow
1000-Mbps Ethernet to operate in half duplex the extension field was added when sending small frames
purely to keep the transmitter busy long enough for a collision fragment to return. This field is present only on
1000-Mbps, half-duplex links and allows minimum-sized frames to be long enough to meet slot time
requirements. Extension bits are discarded by the receiving station.
On 10-Mbps Ethernet one bit at the MAC layer requires 100 nanoseconds (ns) to transmit. At 100 Mbps that
same bit requires 10 ns to transmit and at 1000 Mbps takes only 1 ns. As a rough estimate, 20.3 cm (8 in) per nanosecond is often used for calculating propagation delay down a UTP cable. For 100 meters of UTP, this means that it takes just under 5 bit-times for a 10BASE-T signal to travel the length of the cable.
For CSMA/CD Ethernet to operate, the sending station must become aware of a collision before it has
completed transmission of a minimum-sized frame. At 100 Mbps the system timing is barely able to
accommodate 100 meter cables. At 1000 Mbps special adjustments are required as nearly an entire
minimum-sized frame would be transmitted before the first bit reached the end of the first 100 meters of UTP
cable. For this reason half duplex is not permitted in 10-Gigabit Ethernet.
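The arithmetic above can be reproduced directly, as in the short worked example below, which uses the same rough estimate of 20.3 cm per nanosecond for UTP.

```python
# Worked example of bit times and UTP propagation delay (rough estimates only).
def bit_time_ns(mbps: int) -> float:
    return 1000.0 / mbps                 # 10 Mbps -> 100 ns, 100 Mbps -> 10 ns, 1000 Mbps -> 1 ns

def utp_delay_ns(meters: float, cm_per_ns: float = 20.3) -> float:
    return meters * 100.0 / cm_per_ns    # convert meters to centimeters, then divide

if __name__ == "__main__":
    for speed in (10, 100, 1000):
        print(speed, "Mbps:", bit_time_ns(speed), "ns per bit")
    delay = utp_delay_ns(100)                                        # about 493 ns for 100 m of UTP
    print(round(delay / bit_time_ns(10), 2), "bit-times at 10 Mbps") # just under 5
```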
The Interactive Media Activity will help students identify the bit time of different Ethernet speeds.
6.2.4 Interframe spacing and backoff
This page explains how spacing is used in an Ethernet network for data transmission.
The minimum spacing between two non-colliding frames is also called the interframe spacing. This is
measured from the last bit of the FCS field of the first frame to the first bit of the preamble of the second
frame.
After a frame has been sent, all stations on a 10-Mbps Ethernet are required to wait a minimum of 96 bit-times (9.6 microseconds) before any station may legally transmit the next frame. On faster versions of
Ethernet the spacing remains the same, 96 bit-times, but the time required for that interval grows
correspondingly shorter. This interval is referred to as the spacing gap. The gap is intended to allow slow
stations time to process the previous frame and prepare for the next frame.
A repeater is expected to regenerate the full 64 bits of timing information, which is the preamble and SFD, at
the start of any frame. This is despite the potential loss of some of the beginning preamble bits because of
slow synchronization. Because of this forced reintroduction of timing bits, some minor reduction of the
interframe gap is not only possible but expected. Some Ethernet chipsets are sensitive to a shortening of the
interframe spacing, and will begin failing to see frames as the gap is reduced. With the increase in
processing power at the desktop, it would be very easy for a personal computer to saturate an Ethernet
segment with traffic and to begin transmitting again before the interframe spacing delay time is satisfied.
After a collision occurs and all stations allow the cable to become idle (each waits the full interframe
spacing), then the stations that collided must wait an additional and potentially progressively longer period of
time before attempting to retransmit the collided frame. The waiting period is intentionally designed to be
random so that two stations do not delay for the same amount of time before retransmitting, which would
result in more collisions. This is accomplished in part by expanding the interval from which the random
retransmission time is selected on each retransmission attempt. The waiting period is measured in
increments of the parameter slot time.
If the MAC layer is unable to send the frame after sixteen attempts, it gives up and generates an error to the
network layer. Such an occurrence is fairly rare and would happen only under extremely heavy network
loads, or when a physical problem exists on the network.
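The sketch below models this behaviour with the truncated binary exponential backoff commonly associated with CSMA/CD. It is a simplified illustration written for this text: a callback stands in for actual collision detection, and the waiting period is only printed rather than simulated in real time.

```python
# Simplified backoff model: the random waiting interval (in slot times) grows on
# each retry, and the station gives up after sixteen attempts.
import random

SLOT_TIME_BITS = 512          # slot time for 10- and 100-Mbps Ethernet, in bit-times
MAX_ATTEMPTS = 16

def backoff_slots(attempt: int) -> int:
    k = min(attempt, 10)                     # the selection interval stops growing after ten tries
    return random.randint(0, 2 ** k - 1)

def send_frame(collision_occurred) -> bool:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if not collision_occurred():         # the transmission completed without a collision
            return True
        wait_bits = backoff_slots(attempt) * SLOT_TIME_BITS
        print(f"collision on attempt {attempt}, backing off {wait_bits} bit-times")
    return False                             # give up; an error is reported to the network layer

if __name__ == "__main__":
    print(send_frame(lambda: random.random() < 0.5))   # roughly half the attempts collide
```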
6.2.5 Error handling
This page will describe collisions and how they are handled on a network.
The most common error condition on Ethernet networks is collisions. Collisions are the mechanism for
resolving contention for network access. A few collisions provide a smooth, simple, low overhead way for
network nodes to arbitrate contention for the network resource. When network contention becomes too great,
collisions can become a significant impediment to useful network operation.
Collisions result in network bandwidth loss that is equal to the initial transmission and the collision jam signal.
This is known as consumption delay, and it affects all network nodes, possibly causing a significant reduction in network
throughput.
The considerable majority of collisions occur very early in the frame, often before the SFD. Collisions
occurring before the SFD are usually not reported to the higher layers, as if the collision did not occur. As
soon as a collision is detected, the sending stations transmit a 32-bit jam signal that will enforce the
collision. This is done so that any data being transmitted is thoroughly corrupted and all stations have a
chance to detect the collision.
In Figure two stations listen to ensure that the cable is idle, then transmit. Station 1 was able to transmit a
significant percentage of the frame before the signal even reached the last cable segment. Station 2 had not
received the first bit of the transmission prior to beginning its own transmission and was only able to send
several bits before the NIC sensed the collision. Station 2 immediately truncated the current transmission,
substituted the 32-bit jam signal and ceased all transmissions. During the collision and jam event that Station
2 was experiencing, the collision fragments were working their way back through the repeated collision
domain toward Station 1. Station 2 completed transmission of the 32-bit jam signal and became silent before
the collision propagated back to Station 1 which was still unaware of the collision and continued to transmit.
When the collision fragments finally reached Station 1, it also truncated the current transmission and
substituted a 32-bit jam signal in place of the remainder of the frame it was transmitting. Upon sending the
32-bit jam signal Station 1 ceased all transmissions.
A jam signal may be composed of any binary data so long as it does not form a proper checksum for the
portion of the frame already transmitted. The most commonly observed data pattern for a jam signal is simply
a repeating one, zero, one, zero pattern, the same as Preamble. When viewed by a protocol analyzer this
pattern appears as either a repeating hexadecimal 5 or A sequence. The corrupted, partially transmitted
messages are often referred to as collision fragments or runts. Normal collisions are less than 64 octets in
length and therefore fail both the minimum length test and the FCS checksum test.
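The hexadecimal appearance of the alternating jam pattern mentioned above can be checked directly with a few lines of Python.

```python
# An alternating one/zero bit pattern reads as repeating hexadecimal A or 5.
print(hex(0b10101010))                  # 0xaa -> the '1010...' alignment
print(hex(0b01010101))                  # 0x55 -> the '0101...' alignment
print(bytes([0b01010101] * 4).hex())    # '55555555', as a protocol analyzer might display it
```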
On UTP media, a local collision is recognized when a station detects a signal on its receive pair at the same time that it is transmitting on its transmit pair. Conversely, a cable fault such as excessive crosstalk can cause a station to perceive its own transmission as a local collision.
The characteristics of a remote collision are a frame that is less than the minimum length, has an invalid FCS
checksum, but does not exhibit the local collision symptom of over-voltage or simultaneous RX/TX activity.
This sort of collision usually results from collisions occurring on the far side of a repeated connection. A
repeater will not forward an over-voltage state, and cannot cause a station to have both the TX and RX pairs
active at the same time. The station would have to be transmitting to have both pairs active, and that would
constitute a local collision. On UTP networks this is the most common sort of collision observed.
There is no possibility remaining for a normal or legal collision after the first 64 octets of data have been transmitted by the sending stations. Collisions occurring after the first 64 octets are called late collisions.
The most significant difference between late collisions and collisions occurring before the first 64 octets is
that the Ethernet NIC will retransmit a normally collided frame automatically, but will not automatically
retransmit a frame that was collided late. As far as the NIC is concerned everything went out fine, and the
upper layers of the protocol stack must determine that the frame was lost. Other than retransmission, a
station detecting a late collision handles it in exactly the same way as a normal collision.
The Interactive Media Activity will require students to identify the different types of collisions.
6.2.7 Ethernet errors
This page will define common Ethernet errors.
Knowledge of typical errors is invaluable for understanding both the operation and troubleshooting of
Ethernet networks.
The following are the sources of Ethernet error (a short classification sketch follows the list):
Collision or runt Simultaneous transmission occurring before slot time has elapsed
Late collision Simultaneous transmission occurring after slot time has elapsed
Jabber, long frame and range errors Excessively or illegally long transmission
Short frame, collision fragment or runt Illegally short transmission
FCS error Corrupted transmission
Alignment error Insufficient or excessive number of bits transmitted
Range error Actual and reported number of octets in frame do not match
Ghost or jabber Unusually long Preamble or Jam event
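The classification sketch referenced above applies the length and FCS tests behind several of these error names. The 64-octet minimum and 1518-octet maximum (four octets more for a tagged frame) are the usual limits; FCS validity is passed in rather than recomputed here.

```python
# Hedged frame classifier based on length and FCS validity (illustrative only).
MIN_FRAME = 64        # octets, excluding preamble
MAX_FRAME = 1518      # octets for an untagged frame

def classify(frame_len: int, fcs_ok: bool, tagged: bool = False) -> str:
    max_len = MAX_FRAME + (4 if tagged else 0)
    if frame_len < MIN_FRAME:
        return "short frame / runt" if fcs_ok else "collision fragment"
    if frame_len > max_len:
        return "long frame (often reported as jabber)"
    return "valid frame" if fcs_ok else "FCS error"

if __name__ == "__main__":
    print(classify(40, fcs_ok=False))     # collision fragment
    print(classify(60, fcs_ok=True))      # short frame / runt
    print(classify(2000, fcs_ok=True))    # long frame (often reported as jabber)
    print(classify(512, fcs_ok=False))    # FCS error
```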
While local and remote collisions are considered to be a normal part of Ethernet operation, late collisions are
considered to be an error. The presence of errors on a network always suggests that further investigation is
warranted. The severity of the problem indicates the troubleshooting urgency related to the detected errors. A
handful of errors detected over many minutes or over hours would be a low priority. Thousands detected
over a few minutes suggest that urgent attention is warranted.
Jabber is defined in several places in the 802.3 standard as being a transmission of at least 20,000 to 50,000
bit times in duration. However, most diagnostic tools report jabber whenever a detected transmission
exceeds the maximum legal frame size, which is considerably smaller than 20,000 to 50,000 bit times. Most
references to jabber are more properly called long frames.
A long frame is one that is longer than the maximum legal size, and takes into consideration whether or not
the frame was tagged. It does not consider whether or not the frame had a valid FCS checksum. This error
usually means that jabber was detected on the network.
A short frame is a frame smaller than the minimum legal size of 64 octets, with a good frame check
sequence. Some protocol analyzers and network monitors call these frames "runts". In general the presence
of short frames is not a guarantee that the network is failing.
The term runt is generally an imprecise slang term that means something less than a legal frame size. It may
refer to short frames with a valid FCS checksum although it usually refers to collision fragments.
The Interactive Media Activity will help students become familiar with Ethernet errors.
When an Auto-Negotiating station first attempts to link it is supposed to enable 100BASE-TX to attempt to
immediately establish a link. If 100BASE-TX signaling is present, and the station supports 100BASE-TX, it
will attempt to establish a link without negotiating. If either signaling produces a link or FLP bursts are
received, the station will proceed with that technology. If a link partner does not offer an FLP burst, but
instead offers NLPs, then that device is automatically assumed to be a 10BASE-T station. During this initial
interval of testing for other technologies, the transmit path is sending FLP bursts. The standard does not
permit parallel detection of any other technologies.
If a link is established through parallel detection, it is required to be half duplex. There are only two methods
of achieving a full-duplex link. One method is through a completed cycle of Auto-Negotiation, and the other is
to administratively force both link partners to full duplex. If one link partner is forced to full duplex, but the
other partner attempts to Auto-Negotiate, then there is certain to be a duplex mismatch. This will result in
collisions and errors on that link. Additionally, if one end is forced to full duplex, the other must also be forced.
The exception to this is 10-Gigabit Ethernet, which does not support half duplex.
Many vendors implement hardware in such a way that it cycles through the various possible states. It
transmits FLP bursts to Auto-Negotiate for a while, then it configures for Fast Ethernet, attempts to link for a
while, and then just listens. Some vendors do not offer any transmitted attempt to link until the interface first
hears an FLP burst or some other signaling scheme.
There are two duplex modes, half and full. For shared media, the half-duplex mode is mandatory. All coaxial
implementations are half duplex in nature and cannot operate in full duplex. UTP and fiber implementations
may be operated in half duplex. 10-Gbps implementations are specified for full duplex only.
In half duplex only one station may transmit at a time. For the coaxial implementations a second station
transmitting will cause the signals to overlap and become corrupted. Since UTP and fiber generally transmit
on separate pairs the signals have no opportunity to overlap and become corrupted. Ethernet has
established arbitration rules for resolving conflicts arising from instances when more than one station
attempts to transmit at the same time. Both stations in a point-to-point full-duplex link are permitted to
transmit at any time, regardless of whether the other station is transmitting.
Auto-Negotiation avoids most situations where one station in a point-to-point link is transmitting under half-duplex rules and the other under full-duplex rules.
In the event that link partners are capable of sharing more than one common technology, refer to the list in
Figure . This list is used to determine which technology should be chosen from the offered configurations.
Fiber-optic Ethernet implementations are not included in this priority resolution list because the interface
electronics and optics do not permit easy reconfiguration between implementations. It is assumed that the
interface configuration is fixed. If the two interfaces are able to Auto-Negotiate then they are already using
the same Ethernet implementation. However, there remain a number of configuration choices such as the
duplex setting, or which station will act as the Master for clocking purposes, that must be determined.
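As an illustration only, the Python sketch below resolves a configuration from two advertised ability sets by walking a priority list from highest to lowest. The ordering shown is an assumption made for this example; the authoritative ordering is the one given in the figure.

```python
# Illustrative priority resolution: pick the highest-ranked technology that both
# link partners advertise (the list below is an example ordering, not the standard's).
PRIORITY = [
    "1000BASE-T full duplex",
    "1000BASE-T",
    "100BASE-TX full duplex",
    "100BASE-TX",
    "10BASE-T full duplex",
    "10BASE-T",
]

def resolve(advertised_a: set, advertised_b: set) -> str:
    common = advertised_a & advertised_b
    for technology in PRIORITY:                # walk from highest to lowest priority
        if technology in common:
            return technology
    return "no common technology"

if __name__ == "__main__":
    a = {"100BASE-TX full duplex", "100BASE-TX", "10BASE-T"}
    b = {"100BASE-TX", "10BASE-T full duplex", "10BASE-T"}
    print(resolve(a, b))                       # 100BASE-TX
```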
The Interactive Media Activity will help students understand the link establishment process.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Ethernet is not one networking technology, but a family of LAN technologies that includes Legacy, Fast
Ethernet, and Gigabit Ethernet. When Ethernet needs to be expanded to add a new medium or capability,
the IEEE issues a new supplement to the 802.3 standard. The new supplements are given a one or two letter
designation such as 802.3u. Ethernet relies on baseband signaling, which uses the entire bandwidth of the
transmission medium. Ethernet operates at two layers of the OSI model, the lower half of the data link layer,
known as the MAC sublayer and the physical layer. Ethernet at Layer 1 involves interfacing with media,
signals, bit streams that travel on the media, components that put signals on media, and various physical
topologies. Layer 1 bits need structure so OSI Layer 2 frames are used. The MAC sublayer of Layer 2
determines the type of frame appropriate for the physical media.
The one thing common to all forms of Ethernet is the frame structure. This is what allows the interoperability
of the different types of Ethernet.
Some of the fields permitted or required in an 802.3 Ethernet Frame are:
Preamble
Start Frame Delimiter
Destination Address
Source Address
Length/Type
Data and Pad
Frame Check Sequence
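As an illustration of how these fields are laid out, the following Python sketch unpacks the header of a raw frame. It assumes the bytes begin at the Destination Address (the NIC hardware consumes the Preamble and SFD) and that the trailing FCS is still attached; both are assumptions made for the demonstration, not requirements of the standard.

```python
import struct

def parse_frame_header(frame: bytes) -> dict:
    # Destination (6 bytes), Source (6 bytes), Length/Type (2 bytes).
    dst, src, length_type = struct.unpack("!6s6sH", frame[:14])
    return {
        "destination": ":".join(f"{b:02x}" for b in dst),
        "source": ":".join(f"{b:02x}" for b in src),
        "length/type": length_type,       # <= 1500 means Length; 0x0600 and above means Type
        "data_and_pad": frame[14:-4],     # everything between the header and the FCS
        "fcs": frame[-4:].hex(),          # assumes the NIC did not strip the FCS
    }
```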
In 10 Mbps and slower versions of Ethernet, the Preamble provides timing information the receiving node
needs in order to interpret the electrical signals it is receiving. The Start Frame Delimiter marks the end of
the timing information. 10 Mbps and slower versions of Ethernet are asynchronous. That is, they will use the
preamble timing information to synchronize the receive circuit to the incoming data. 100 Mbps and higher
speed implementations of Ethernet are synchronous. Synchronous means that the timing information is not required; however, the Preamble and SFD are present for compatibility reasons.
The address fields of the Ethernet frame contain Layer 2, or MAC, addresses.
All frames are susceptible to errors from a variety of sources. The Frame Check Sequence (FCS) field of an
Ethernet frame contains a number that is calculated by the source node based on the data in the frame. At
the destination it is recalculated and compared to determine that the data received is complete and error
free.
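The FCS is a 32-bit cyclic redundancy check (CRC-32). A minimal sketch of that check follows, using Python's zlib.crc32, which implements the same generator polynomial; the bit-ordering details of how the value is actually serialized onto the wire are ignored here.

```python
import zlib

def fcs32(frame_without_fcs: bytes) -> int:
    # CRC-32 over the destination address, source address,
    # length/type, and data/pad fields.
    return zlib.crc32(frame_without_fcs) & 0xFFFFFFFF

def frame_is_error_free(frame_without_fcs: bytes, received_fcs: int) -> bool:
    # The destination recalculates the value and compares it with the
    # number the source placed in the FCS field.
    return fcs32(frame_without_fcs) == received_fcs
```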
Once the data is framed the Media Access Control (MAC) sublayer is also responsible to determine which
computer on a shared-medium environment, or collision domain, is allowed to transmit the data. There are
two broad categories of Media Access Control, deterministic (taking turns) and non-deterministic (first come,
first served).
Examples of deterministic protocols include Token Ring and FDDI. The carrier sense multiple access with
collision detection (CSMA/CD) access method is a simple non-deterministic system. The NIC listens for an
absence of a signal on the media and starts transmitting. If two or more nodes transmit at the same
time a collision occurs. If a collision is detected the nodes wait a random amount of time and retransmit.
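The random wait follows the truncated binary exponential backoff rule. The sketch below illustrates that rule with the standard CSMA/CD constants; it is a simplified model, not a NIC implementation.

```python
import random

SLOT_TIME_BITS = 512     # one slot time = 512 bit times at 10 and 100 Mbps
MAX_ATTEMPTS = 16        # the frame is discarded after 16 consecutive collisions

def backoff_bit_times(collision_count: int) -> int:
    # After the nth collision, wait a random number of slot times
    # chosen from 0 .. 2**min(n, 10) - 1 before retransmitting.
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions, frame discarded")
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME_BITS
```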
The minimum spacing between two non-colliding frames is also called the interframe spacing. Interframe
spacing is required to ensure that all stations have time to process the previous frame and prepare for the
next frame.
Collisions can occur at various points during transmission. A collision where a signal is detected on the
receive and transmit circuits at the same time is referred to as a local collision. A collision that occurs on the far side of a repeater, seen locally only as a frame shorter than the minimum length with an invalid FCS, is called a remote collision. A collision that occurs after the first sixty-four octets of data have been sent is considered a late collision. The NIC will not automatically
retransmit for this type of collision.
While local and remote collisions are considered to be a normal part of Ethernet operation, late collisions are
considered to be an error. Ethernet errors result from detection of frame sizes that are longer or shorter than
standards allow or excessively long or illegal transmissions called jabber. Runt is a slang term that refers to
something less than the legal frame size.
Auto-Negotiation detects the speed and duplex mode, half-duplex or full-duplex, of the device on the
other end of the wire and adjusts to match those settings.
7.1
Overview
Ethernet has been the most successful LAN technology mainly because of how easy it is to implement.
Ethernet has also been successful because it is a flexible technology that has evolved as needs and media
capabilities have changed. This module will provide details about the most important types of Ethernet. The
goal is to help students understand what is common to all forms of Ethernet.
Changes in Ethernet have resulted in major improvements over the 10-Mbps Ethernet of the early 1980s.
The 10-Mbps Ethernet standard remained virtually unchanged until 1995 when IEEE announced a standard
for a 100-Mbps Fast Ethernet. In recent years, an even more rapid growth in media speed has moved the
transition from Fast Ethernet to Gigabit Ethernet. The standards for Gigabit Ethernet emerged in only three
years. A faster Ethernet version called 10-Gigabit Ethernet is now widely available and faster versions will be
developed.
MAC addresses, CSMA/CD, and the frame format have not been changed from earlier versions of Ethernet.
However, other aspects of the MAC sublayer, physical layer, and medium have changed. Copper-based
NICs capable of 10, 100, or 1000 Mbps are now common. Gigabit switch and router ports are becoming the
standard for wiring closets. Optical fiber to support Gigabit Ethernet is considered a standard for backbone
cables in most new installations.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Describe the differences and similarities among 10BASE5, 10BASE2, and 10BASE-T Ethernet
Define Manchester encoding
List the factors that affect Ethernet timing limits
List 10BASE-T wiring parameters
Describe the key characteristics and varieties of 100-Mbps Ethernet
Describe the evolution of Ethernet
Explain the MAC methods, frame formats, and transmission process of Gigabit Ethernet
Describe the uses of specific media and encoding with Gigabit Ethernet
Identify the pinouts and wiring typical to the various implementations of Gigabit Ethernet
Describe the similarities and differences between Gigabit and 10-Gigabit Ethernet
Describe the basic architectural considerations of Gigabit and 10-Gigabit Ethernet
7.1.1
10-Mbps Ethernet
Several factors affect the timing limits of 10-Mbps Ethernet, including the following:
Delay of repeaters
Delay of transceivers
Interframe gap shrinkage
Delays within the station
10-Mbps Ethernet operates within the timing limits for a series of up to five segments separated by up to four
repeaters. This is known as the 5-4-3 rule. No more than four repeaters can be used in series between any
two stations. There can also be no more than three populated segments between any two stations.
7.1.2 10BASE5
This page will discuss the original 1980 Ethernet product, which is 10BASE5. 10BASE5 transmitted 10 Mbps
over a single thick coaxial cable bus.
10BASE5 is important because it was the first medium used for Ethernet. 10BASE5 was part of the original
802.3 standard. The primary benefit of 10BASE5 was length. 10BASE5 may be found in legacy installations.
It is not recommended for new installations. 10BASE5 systems are inexpensive and require no configuration.
Two disadvantages are that basic components like NICs are very difficult to find and it is sensitive to signal
reflections on the cable. 10BASE5 systems also represent a single point of failure.
10BASE5 uses Manchester encoding. It has a solid central conductor. Each segment of thick coax may be
up to 500 m (1640.4 ft) in length. The cable is large, heavy, and difficult to install. However, the distance
limitations were favorable and this prolonged its use in certain applications.
When the medium is a single coaxial cable, only one station can transmit at a time or a collision will occur.
Therefore, 10BASE5 only runs in half-duplex with a maximum transmission rate of 10 Mbps.
Figure illustrates a configuration for an end-to-end collision domain with the maximum number of segments
and repeaters. Remember that only three segments can have stations connected to them. The other two
repeated segments are used to extend the network.
The Lab Activity will help students decode a waveform.
The Interactive Media Activity will help students learn the features of 10BASE5 technology.
7.1.3 10BASE2
This page covers 10BASE2, which was introduced in 1985.
Installation was easier than with 10BASE5 because of the cable's smaller size, lighter weight, and greater flexibility. 10BASE2 still exists
in legacy networks. Like 10BASE5, it is no longer recommended for network installations. It has a low cost
and does not require hubs.
10BASE2 also uses Manchester encoding. Computers on a 10BASE2 LAN are linked together by an
unbroken series of coaxial cable lengths. These lengths are attached to a T-shaped connector on the NIC
with BNC connectors.
10BASE2 has a stranded central conductor. Each of the maximum five segments of thin coaxial cable may
be up to 185 m (607 ft) long and each station is connected directly to the BNC T-shaped connector on the
coaxial cable.
Only one station can transmit at a time or a collision will occur. 10BASE2 also uses half-duplex. The
maximum transmission rate of 10BASE2 is 10 Mbps.
There may be up to 30 stations on a 10BASE2 segment. Only three out of five consecutive segments
between any two stations can be populated.
The Interactive Media Activity will help students learn the features of 10BASE2 technology.
7.1.4 10BASE-T
This page covers 10BASE-T, which was introduced in 1990.
10BASE-T used cheaper, easier-to-install Category 3 UTP copper cable instead of coaxial cable. The cable
plugged into a central connection device that contained the shared bus. This device was a hub. It was at the
center of a set of cables that radiated out to the PCs like the spokes on a wheel. This is referred to as a star
topology. As additional stars were added and the cable distances grew, this formed an extended star
topology. Originally 10BASE-T was a half-duplex protocol, but full-duplex features were added later. The
explosion in the popularity of Ethernet in the mid-to-late 1990s was when Ethernet came to dominate LAN
technology.
10BASE-T also uses Manchester encoding. A 10BASE-T UTP cable has a solid conductor for each wire. The
maximum cable length is 90 m (295 ft). UTP cable uses eight-pin RJ-45 connectors. Though Category 3
cable is adequate for 10BASE-T networks, new cable installations should be made with Category 5e or
better. All four pairs of wires should be used either with the T568-A or T568-B cable pinout arrangement. This
type of cable installation supports the use of multiple protocols without the need to rewire. Figure shows
the pinout arrangement for a 10BASE-T connection. The pair that transmits data on one device is connected
to the pair that receives data on the other device.
Half duplex or full duplex is a configuration choice. 10BASE-T carries 10 Mbps of traffic in half-duplex mode
and 20 Mbps in full-duplex mode.
The Interactive Media Activity will help students learn the features of 10BASE-T technology.
7.1.7 100BASE-TX
This page will describe 100BASE-TX.
In 1995, 100BASE-TX, which uses Category 5 UTP cable, became the standard and achieved commercial success.
The original coaxial Ethernet used half-duplex transmission so only one device could transmit at a time. In
1997, Ethernet was expanded to include a full-duplex capability that allowed more than one PC on a network
to transmit at the same time. Switches replaced hubs in many networks. These switches had full-duplex
capabilities and could handle Ethernet frames quickly.
100BASE-TX uses 4B/5B encoding, which is then scrambled and converted to Multi-Level Transmit (MLT-3)
encoding. Figure shows four waveform examples. The top waveform has no transition in the center of the
timing window. No transition indicates a binary zero. The second waveform shows a transition in the center
of the timing window. A transition represents a binary one. The third waveform shows an alternating binary
sequence. The fourth waveform shows that signal changes indicate ones and horizontal lines indicate
zeros.
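For reference, the sixteen standard 4B/5B data code groups can be applied with a short Python sketch. The nibble ordering and the subsequent scrambling and MLT-3 stages are simplified away here; this only shows how each 4-bit group maps to a transition-rich 5-bit group.

```python
# The sixteen standard 4B/5B data code groups (shared by 100BASE-X and FDDI).
CODE_GROUPS = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data: bytes) -> str:
    # Each 4-bit nibble becomes a 5-bit code group with guaranteed transitions.
    out = []
    for byte in data:
        out.append(CODE_GROUPS[byte >> 4])
        out.append(CODE_GROUPS[byte & 0x0F])
    return "".join(out)

print(encode_4b5b(b"\x0e"))   # -> "1111011100"
```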
Figure shows the pinout for a 100BASE-TX connection. Notice that two separate transmit and receive paths exist. This is identical to the 10BASE-T configuration.
100BASE-TX carries 100 Mbps of traffic in half-duplex mode. In full-duplex mode, 100BASE-TX can
exchange 200 Mbps of traffic. The concept of full duplex will become more important as Ethernet speeds
increase.
7.1.8 100BASE-FX
This page covers 100BASE-FX.
When copper-based Fast Ethernet was introduced, a fiber version was also desired. A fiber version could be
used for backbone applications, connections between floors, buildings where copper is less desirable, and
also in high-noise environments. 100BASE-FX was introduced to satisfy this desire. However, 100BASE-FX
was never adopted successfully. This was due to the introduction of Gigabit Ethernet copper and fiber
standards. Gigabit Ethernet standards are now the dominant technology for backbone installations, high-speed cross-connects, and general infrastructure needs.
The timing, frame format, and transmission are the same in both versions of 100-Mbps Fast Ethernet. In
Figure , the top waveform has no transition, which indicates a binary 0. In the second waveform, the
transition in the center of the timing window indicates a binary 1. In the third waveform, there is an alternating
binary sequence. In the third and fourth waveforms it is more obvious that no transition indicates a binary
zero and the presence of a transition is a binary one.
Figure summarizes a 100BASE-FX link and pinouts. A fiber pair with either ST or SC connectors is most
commonly used.
The separate Transmit (Tx) and Receive (Rx) paths in 100BASE-FX optical fiber allow for 200-Mbps
transmission.
7.2
7.2.2 1000BASE-T
This page will describe 1000BASE-T.
As Fast Ethernet was installed to increase bandwidth to workstations, this began to create bottlenecks
upstream in the network. The 1000BASE-T standard, which is IEEE 802.3ab, was developed to provide
additional bandwidth to help alleviate these bottlenecks. It provided more throughput for devices such as
intra-building backbones, inter-switch links, server farms, and other wiring closet applications as well as
connections for high-end workstations. 1000BASE-T was designed to function over Category 5 copper cable that passes the Category 5e test. Most installed Category 5 cable can pass the Category 5e certification if
properly terminated. It is important for the 1000BASE-T standard to be interoperable with 10BASE-T and
100BASE-TX.
Since Category 5e cable can reliably carry up to 125 Mbps of traffic, 1000 Mbps or 1 Gigabit of bandwidth
was a design challenge. The first step to accomplish 1000BASE-T is to use all four pairs of wires instead of
the traditional two pairs of wires used by 10BASE-T and 100BASE-TX. This requires complex circuitry that
allows full-duplex transmissions on the same wire pair. This provides 250 Mbps per pair. With all four wire
pairs, this provides the desired 1000 Mbps. Since the information travels simultaneously across the four
paths, the circuitry has to divide frames at the transmitter and reassemble them at the receiver.
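Restated as a quick calculation, the reasoning above looks like this (illustrative arithmetic only, mirroring the description in the text rather than the encoding details of the standard):

```python
# A quick check of the arithmetic described above.
reliable_mbps_each_way = 125   # what Category 5e carries dependably per pair
full_duplex_factor = 2         # transmit and receive on the same pair
pairs = 4                      # all four wire pairs are used

per_pair = reliable_mbps_each_way * full_duplex_factor   # 250 Mbps
total = per_pair * pairs                                 # 1000 Mbps
print(per_pair, total)                                   # -> 250 1000
```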
1000BASE-T uses 4D-PAM5 line encoding on Category 5e, or better, UTP. That means
the transmission and reception of data happens in both directions on the same wire at the same time. As
might be expected, this results in a permanent collision on the wire pairs. These collisions result in complex
voltage patterns. With the complex integrated circuits using techniques such as echo cancellation, Layer 1
Forward Error Correction (FEC), and prudent selection of voltage levels, the system achieves the 1-Gigabit
throughput.
In idle periods there are nine voltage levels found on the cable, and during data transmission periods there
are 17 voltage levels found on the cable. With this large number of states and the effects of noise, the
signal on the wire looks more analog than digital. Like analog, the system is more susceptible to noise due to
cable and termination problems.
The data from the sending station is carefully divided into four parallel streams, encoded, transmitted and
detected in parallel, and then reassembled into one received bit stream. Figure represents the
simultaneous full duplex on four-wire pairs. 1000BASE-T supports both half-duplex as well as full-duplex
operation. The use of full-duplex 1000BASE-T is widespread.
7.2.3 1000BASE-SX and LX
This page will discuss single-mode and multimode optical fiber.
The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone
technology.
The timing, frame format, and transmission are common to all versions of 1000 Mbps. Two signal-encoding
schemes are defined at the physical layer. The 8B/10B scheme is used for optical fiber and shielded
copper media, and the pulse amplitude modulation 5 (PAM5) is used for UTP.
1000BASE-X uses 8B/10B encoding converted to non-return to zero (NRZ) line encoding. NRZ encoding
relies on the signal level found in the timing window to determine the binary value for that bit period. Unlike
most of the other encoding schemes described, this encoding system is level driven instead of edge driven.
That is the determination of whether a bit is a zero or a one is made by the level of the signal rather than
when the signal changes levels.
The NRZ signals are then pulsed into the fiber using either short-wavelength or long-wavelength light
sources. The short-wavelength option uses an 850 nm laser or LED source in multimode optical fiber (1000BASE-SX). It is the lower-cost option but supports shorter distances. The long-wavelength 1310 nm laser source
uses either single-mode or multimode optical fiber (1000BASE-LX). Laser sources used with single-mode
fiber can achieve distances of up to 5000 meters. Because of the length of time to completely turn the LED
or laser on and off each time, the light is pulsed using low and high power. A logic zero is represented by low
power, and a logic one by high power.
The Media Access Control method treats the link as point-to-point. Since separate fibers are used for
transmitting (Tx) and receiving (Rx) the connection is inherently full duplex. Gigabit Ethernet permits only a
single repeater between two stations. Figure is a 1000BASE Ethernet media comparison chart.
7.2.4 Gigabit Ethernet architecture
This page will discuss the architecture of Gigabit Ethernet.
Full-duplex link distances are limited only by the medium, and not by the round-trip delay.
Since most Gigabit Ethernet is switched, the values in Figures and are the practical limits between
devices. Daisy-chaining, star, and extended star topologies are all allowed. The issue then becomes one of
logical topology and data flow, not timing or distance limitations.
A 1000BASE-T UTP cable is the same as 10BASE-T and 100BASE-TX cable, except that link performance
must meet the higher quality Category 5e or ISO Class D (2000) requirements.
Modification of the architecture rules is strongly discouraged for 1000BASE-T. At 100 meters, 1000BASE-T is
operating close to the edge of the ability of the hardware to recover the transmitted signal. Any cabling
problems or environmental noise could render an otherwise compliant cable inoperable even at distances
that are within the specification.
It is recommended that all links between a station and a hub or switch be configured for Auto-Negotiation
to permit the highest common performance. This will avoid accidental misconfiguration of the other
required parameters for proper Gigabit Ethernet operation.
With the frame format and other Ethernet Layer 2 specifications compatible with previous standards, 10GbE can satisfy increased bandwidth needs while remaining interoperable with the existing network infrastructure.
A major conceptual change for Ethernet is emerging with 10GbE. Ethernet is traditionally thought of as a
LAN technology, but 10GbE physical layer standards allow both an extension in distance to 40 km over
single-mode fiber and compatibility with synchronous optical network (SONET) and synchronous digital
hierarchy (SDH) networks. Operation at 40 km distance makes 10GbE a viable MAN technology.
Compatibility with SONET/SDH networks operating up to OC-192 speeds (9.584640 Gbps) makes 10GbE a
viable WAN technology. 10GbE may also compete with ATM for certain applications.
To summarize, how does 10GbE compare to other varieties of Ethernet?
Frame format is the same, allowing interoperability between all varieties of legacy, fast, gigabit, and
10 gigabit, with no reframing or protocol conversions.
Bit time is now 0.1 nanoseconds. All other time variables scale accordingly.
Since only full-duplex fiber connections are used, CSMA/CD is not necessary.
The IEEE 802.3 sublayers within OSI Layers 1 and 2 are mostly preserved, with a few additions to
accommodate 40 km fiber links and interoperability with SONET/SDH technologies.
Flexible, efficient, reliable, relatively low cost end-to-end Ethernet networks become possible.
TCP/IP can run over LANs, MANs, and WANs with one Layer 2 transport method.
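The bit-time scaling mentioned in the list above can be checked with a short calculation:

```python
# Bit time scales inversely with the data rate: 1 second / bit rate.
NANOSECONDS_PER_SECOND = 1_000_000_000

for name, bits_per_second in [("10 Mbps", 10e6), ("100 Mbps", 100e6),
                              ("1000 Mbps", 1e9), ("10 Gbps", 10e9)]:
    bit_time_ns = NANOSECONDS_PER_SECOND / bits_per_second
    print(f"{name:>9}: {bit_time_ns} ns per bit")
# -> 100.0, 10.0, 1.0, and 0.1 ns respectively
```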
The basic standard governing CSMA/CD is IEEE 802.3. An IEEE 802.3 supplement, entitled 802.3ae,
governs the 10GbE family. As is typical for new technologies, a variety of implementations are being
considered, including:
10GBASE-SR Intended for short distances over already-installed multimode fiber, supports a
range of 26 m to 82 m
10GBASE-LX4 Uses wavelength division multiplexing (WDM), supports 240 m to 300 m over
already-installed multimode fiber and 10 km over single-mode fiber
10GBASE-LR and 10GBASE-ER Support 10 km and 40 km over single-mode fiber
10GBASE-SW, 10GBASE-LW, and 10GBASE-EW Known collectively as 10GBASE-W, intended
to work with OC-192 synchronous transport module SONET/SDH WAN equipment
The IEEE 802.3ae Task force and the 10-Gigabit Ethernet Alliance (10 GEA) are working to standardize
these emerging technologies.
10-Gbps Ethernet (IEEE 802.3ae) was standardized in June 2002. It is a full-duplex protocol that uses only
optic fiber as a transmission medium. The maximum transmission distances depend on the type of fiber
being used. When using single-mode fiber as the transmission medium, the maximum transmission distance
is 40 kilometers (25 miles). Some discussions between IEEE members have begun that suggest the
possibility of standards for 40, 80, and even 100-Gbps Ethernet.
The receiver relies on precise timing to separate the data from the effects of noise on the physical layer. This is the purpose of synchronization.
In response to these issues of synchronization, bandwidth, and Signal-to-Noise Ratio, 10-Gigabit Ethernet
uses two separate encoding steps. By using codes to represent the user data, transmission is made more
efficient. The encoded data provides synchronization, efficient usage of bandwidth, and improved Signal-to-Noise Ratio characteristics.
Complex serial bit streams are used for all versions of 10GbE except for 10GBASE-LX4, which uses Wide
Wavelength Division Multiplex (WWDM) to multiplex four simultaneous bit streams as four wavelengths of
light launched into the fiber at one time.
Figure represents the particular case of using four slightly different wavelength laser sources. Upon
receipt from the medium, the optical signal stream is demultiplexed into four separate optical signal streams.
The four optical signal streams are then converted back into four electronic bit streams as they travel in
approximately the reverse process back up through the sublayers to the MAC layer.
Currently, most 10GbE products are in the form of modules, or line cards, for addition to high-end switches
and routers. As the 10GbE technologies evolve, an increasing diversity of signaling components can be
expected. As optical technologies evolve, improved transmitters and receivers will be incorporated into these
products, taking further advantage of modularity. All 10GbE varieties use optical fiber media. Fiber types
include 10 µm single-mode fiber, and 50 µm and 62.5 µm multimode fibers. A range of fiber attenuation and
dispersion characteristics is supported, but they limit operating distances.
Even though support is limited to fiber optic media, some of the maximum cable lengths are surprisingly
short. No repeater is defined for 10-Gigabit Ethernet since half duplex is explicitly not supported.
As with 10 Mbps, 100 Mbps and 1000 Mbps versions, it is possible to modify some of the architecture
rules slightly. Possible architecture adjustments are related to signal loss and distortion along the
medium. Due to dispersion of the signal and other issues the light pulse becomes undecipherable
beyond certain distances.
Summary
This page summarizes the topics discussed in this module.
Ethernet is a technology that has increased in speed one thousand times, from 10 Mbps to 10,000 Mbps, in
less than a decade. All forms of Ethernet share a similar frame structure and this leads to excellent
interoperability. Most Ethernet copper connections are now switched full duplex, and the fastest copper-based Ethernet is 1000BASE-T, or Gigabit Ethernet. 10-Gigabit Ethernet and faster are exclusively optical
fiber-based technologies.
10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features
of Legacy Ethernet are timing parameters, frame format, transmission process, and a basic design rule.
Legacy Ethernet encodes data on an electrical signal. The form of encoding used in 10 Mbps systems is
called Manchester encoding. Manchester encoding uses a change in voltage to represent the binary
numbers zero and one. An increase or decrease in voltage during a timed period, called the bit period,
determines the binary value of the bit.
In addition to a standard bit period, Ethernet standards set limits for slot time and interframe spacing.
Different types of media can affect transmission timing and timing standards ensure interoperability. 10 Mbps
Ethernet operates within the timing limits offered by a series of no more than five segments separated by no
more than four repeaters.
A single thick coaxial cable was the first medium used for Ethernet. 10BASE2, using a thinner coax cable,
was introduced in 1985. 10BASE-T, using twisted-pair copper wire, was introduced in 1990. Because it used
multiple wires 10BASE-T offered the option of full-duplex signaling. 10BASE-T carries 10 Mbps of traffic in
half-duplex mode and 20 Mbps in full-duplex mode.
10BASE-T links can have unrepeated distances up to 100 m. Beyond that network devices such as
repeaters, hubs, bridges, and switches are used to extend the scope of the LAN. With the advent of switches,
the 4-repeater rule is not so relevant. You can extend the LAN indefinitely by daisy-chaining switches. Each
switch-to-switch connection, with maximum length of 100m, is essentially a point-to-point connection without
the media contention or timing issues of using repeaters and hubs.
100-Mbps Ethernet, also known as Fast Ethernet, can be implemented using twisted-pair copper wire, as in
100BASE-TX, or fiber media, as in 100BASE-FX. 100 Mbps forms of Ethernet can transmit 200 Mbps in full
duplex.
Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two separate
encoding steps are used by 100-Mbps Ethernet to enhance signal integrity.
Gigabit Ethernet over copper wire is accomplished by the following:
Category 5e UTP cable and careful improvements in electronics are used to boost 100 Mbps per
wire pair to 125 Mbps per wire pair.
All four wire pairs are used instead of just two. This allows 125 Mbps per wire pair, or 500 Mbps for the four
wire pairs.
Sophisticated electronics allow permanent collisions on each wire pair and run signals in full duplex,
doubling the 500 Mbps to 1000 Mbps.
On Gigabit Ethernet networks bit signals occur in one tenth of the time of 100 Mbps networks and 1/100 of
the time of 10 Mbps networks. With signals occurring in less time the bits become more susceptible to noise.
The issue becomes how fast the network adapter or interface can change voltage levels to signal bits and
still be detected reliably one hundred meters away at the receiving NIC or interface. At this speed encoding
and decoding data becomes even more complex.
The fiber versions of Gigabit Ethernet, 1000BASE-SX and 1000BASE-LX offer the following advantages:
noise immunity, small size, and increased unrepeated distances and bandwidth. The IEEE 802.3
standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.
8.1
Ethernet Switching
Overview
Shared Ethernet works extremely well under ideal conditions. If the number of devices that try to access the
network is low, the number of collisions stays well within acceptable limits. However, when the number of
users on the network increases, the number of collisions can significantly reduce performance. Bridges were
developed to help correct performance problems that arose from increased collisions. Switches evolved from
bridges to become the main technology in modern Ethernet LANs.
Collisions and broadcasts are expected events in modern networks. They are engineered into the design of
Ethernet and higher layer technologies. However, when collisions and broadcasts occur in numbers that are
above the optimum, network performance suffers. Collision domains and broadcast domains should be
designed to limit the negative effects of collisions and broadcasts. This module explores the effects of
collisions and broadcasts on network traffic and then describes how bridges and routers are used to segment
networks for improved performance.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Define bridging and switching
Define and describe the content-addressable memory (CAM) table
Define latency
Describe store-and-forward and cut-through packet switching modes
Explain Spanning-Tree Protocol (STP)
Define collisions, broadcasts, collision domains, and broadcast domains
Identify the Layers 1, 2, and 3 devices used to create collision domains and broadcast domains
Discuss data flow and problems with broadcasts
Explain network segmentation and list the devices used to create segments
8.1.1 Layer 2 bridging
This page will discuss the operation of Layer 2 bridges.
As more nodes are added to an Ethernet segment, use of the media increases. Ethernet is a shared media,
which means only one node can transmit data at a time. The addition of more nodes increases the demands
on the available bandwidth and places additional loads on the media. This also increases the probability of
collisions, which results in more retransmissions. A solution to the problem is to break the large segment into
parts and separate it into isolated collision domains.
To accomplish this a bridge keeps a table of MAC addresses and the associated ports. The bridge then
forwards or discards frames based on the table entries. The following steps illustrate the operation of a
bridge:
The bridge has just been started so the bridge table is empty. The bridge just waits for traffic on the
segment. When traffic is detected, it is processed by the bridge.
Host A pings Host B. Since the data is transmitted on the entire collision domain segment, both the
bridge and Host B process the packet.
The bridge adds the source address of the frame to its bridge table. Since the address was in the
source address field and the frame was received on Port 1, the address must be associated with Port 1
in the table.
The destination address of the frame is checked against the bridge table. Since the address is not in
the table, even though it is on the same collision domain, the frame is forwarded to the other
segment. The address of Host B has not been recorded yet.
Host B processes the ping request and transmits a ping reply back to Host A. The data is transmitted
over the whole collision domain. Both Host A and the bridge receive the frame and process it.
The bridge adds the source address of the frame to its bridge table. Since the source address was
not in the bridge table and was received on Port 1, the source address of the frame must be
associated with Port 1 in the table.
The destination address of the frame is checked against the bridge table to see if its entry is there.
Since the address is in the table, the port assignment is checked. The address of Host A is
associated with the port the frame was received on, so the frame is not forwarded.
Host A pings Host C. Since the data is transmitted on the entire collision domain segment, both the
bridge and Host B process the frame. Host B discards the frame since it was not the intended
destination.
The bridge adds the source address of the frame to its bridge table. Since the address is already
entered into the bridge table the entry is just renewed.
The destination address of the frame is checked against the bridge table. Since the address is not in
the table, the frame is forwarded to the other segment. The address of Host C has not been
recorded yet.
Host C processes the ping request and transmits a ping reply back to Host A. The data is transmitted
over the whole collision domain. Both Host D and the bridge receive the frame and process it. Host
D discards the frame since it is not the intended destination.
The bridge adds the source address of the frame to its bridge table. Since the address was in the
source address field and the frame was received on Port 2, the address must be associated with Port 2
in the table.
The destination address of the frame is checked against the bridge table to see if its entry is present.
The address is in the table but it is associated with Port 1, so the frame is forwarded to the other
segment.
When Host D transmits data, its MAC address will also be recorded in the bridge table. This is how
the bridge controls traffic between two collision domains.
These are the steps that a bridge uses to forward and discard frames that are received on any of its ports.
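The same learning-and-forwarding logic can be sketched in a few lines of Python. The two-port bridge and the MAC labels used below are hypothetical; the sketch only mirrors the steps described above.

```python
class LearningBridge:
    # A minimal two-port transparent bridge.
    def __init__(self):
        self.table = {}                    # MAC address -> port number

    def receive(self, src_mac, dst_mac, in_port):
        # Learn (or refresh) the source address on the incoming port.
        self.table[src_mac] = in_port

        out_port = self.table.get(dst_mac)
        if out_port == in_port:
            return None                       # same segment: filter, do not forward
        if out_port is None:
            return 2 if in_port == 1 else 1   # unknown destination: forward to the other port
        return out_port                       # known destination on the other port

bridge = LearningBridge()
print(bridge.receive("AA", "BB", in_port=1))   # "BB" unknown -> forwarded to port 2
print(bridge.receive("BB", "AA", in_port=1))   # "AA" known on port 1 -> filtered (None)
```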
8.1.2 Layer 2 switching
Generally, a bridge has only two ports and divides a collision domain into two parts. All decisions made by a
bridge are based on MAC or Layer 2 addresses and do not affect the logical or Layer 3 addresses. A bridge
will divide a collision domain but has no effect on a logical or broadcast domain. If a network does not have a
device that works with Layer 3 addresses, such as a router, the entire network will share the same logical
broadcast address space. A bridge will create more collision domains but will not add broadcast domains.
A switch is essentially a fast, multi-port bridge that can contain dozens of ports. Each port creates its own
collision domain. In a network of 20 nodes, 20 collision domains exist if each node is plugged into its own
switch port. If an uplink port is included, one switch creates 21 single-node collision domains. A switch
dynamically builds and maintains a content-addressable memory (CAM) table, which holds all of the
necessary MAC information for each port.
8.1.4 Latency
Latency is the delay between the time a frame begins to leave the source device and when the first part of
the frame reaches its destination. A variety of conditions can cause delays:
Media delays may be caused by the finite speed that signals can travel through the physical media.
Circuit delays may be caused by the electronics that process the signal along the path.
Software delays may be caused by the decisions that software must make to implement switching
and protocols.
Delays may be caused by the content of the frame and the location of the frame switching decisions.
For example, a device cannot forward a frame to a destination until the destination MAC address has
been read.
8.1.5 Switch modes
How a frame is switched to the destination port is a trade off between latency and reliability. A switch can
start to transfer the frame as soon as the destination MAC address is received. This is called cut-through
packet switching and results in the lowest latency through the switch. However, no error checking is
available. The switch can also receive the entire frame before it is sent to the destination port. This gives the
switch software an opportunity to verify the Frame Check Sequence (FCS). If the frame is invalid, it is
discarded at the switch. Since the entire frame is stored before it is forwarded, this is called store-and-forward packet switching. A compromise between cut-through and store-and-forward packet switching is
the fragment-free mode. Fragment-free packet switching reads the first 64 bytes, which includes the frame
header, and starts to send out the packet before the entire data field and checksum are read. This mode
verifies the reliability of the addresses and LLC protocol information to ensure the data will be handled
properly and arrive at the correct destination.
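One way to picture the trade-off is to compare how much of the frame each mode buffers before forwarding begins. The following sketch is a simplified model of that decision, not vendor switch code.

```python
# Bytes of the incoming frame each mode waits for before forwarding starts.
DESTINATION_MAC_BYTES = 6
FRAGMENT_FREE_BYTES = 64

def bytes_buffered_before_forwarding(mode: str, frame_length: int) -> int:
    if mode == "cut-through":
        return DESTINATION_MAC_BYTES    # forward once the destination address is known
    if mode == "fragment-free":
        return FRAGMENT_FREE_BYTES      # wait out the first 64 bytes
    if mode == "store-and-forward":
        return frame_length             # buffer everything, then verify the FCS
    raise ValueError(f"unknown mode: {mode}")
```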
When cut-through packet switching is used, the source and destination ports must have the same bit rate to
keep the frame intact. This is called symmetric switching. If the bit rates are not the same, the frame must be
stored at one bit rate before it is sent out at the other bit rate. This is known as asymmetric switching. Store-and-forward mode must be used for asymmetric switching.
Asymmetric switching provides switched connections between ports with different bandwidths. Asymmetric
switching is optimized for client/server traffic flows in which multiple clients communicate with a server at
once. More bandwidth must be dedicated to the server port to prevent a bottleneck.
The Interactive Media Activity will help students become familiar with the three types of switch modes.
8.1.6 Spanning-Tree Protocol
When multiple switches are arranged in a simple hierarchical tree, switching loops are unlikely to occur.
However, switched networks are often designed with redundant paths to provide for reliability and fault
tolerance. Redundant paths are desirable, but they can have undesirable side effects such as switching loops. Switching loops can occur by design or by accident, and they can lead to broadcast storms that will rapidly overwhelm a network. STP is a standards-based Layer 2 protocol that is used to avoid switching loops. Each switch in a LAN that uses STP sends messages called
Bridge Protocol Data Units (BPDUs) out all its ports to let other switches know of its existence. This
information is used to elect a root bridge for the network. The switches use the spanning-tree algorithm
(STA) to resolve and shut down the redundant paths.
Each port on a switch that uses STP exists in one of the following five states:
Blocking
Listening
Learning
Forwarding
Disabled
A port moves through these five states as follows:
From initialization to blocking
From blocking to listening or to disabled
From listening to learning or to disabled
From learning to forwarding or to disabled
From forwarding to disabled
STP is used to create a logical hierarchical tree with no loops. However, the alternate paths are still available
if necessary.
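The legal transitions listed above can be captured in a small lookup table. This is only an illustration of the state sequence, not a Spanning-Tree implementation.

```python
# Legal port-state transitions under the original Spanning-Tree Protocol.
STP_TRANSITIONS = {
    "initialization": {"blocking"},
    "blocking":       {"listening", "disabled"},
    "listening":      {"learning", "disabled"},
    "learning":       {"forwarding", "disabled"},
    "forwarding":     {"disabled"},
}

def is_legal_transition(current_state: str, next_state: str) -> bool:
    return next_state in STP_TRANSITIONS.get(current_state, set())

print(is_legal_transition("listening", "learning"))    # True
print(is_legal_transition("blocking", "forwarding"))   # False
```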
The Interactive Media Activity will help students learn the function of each spanning-tree state.
8.2
8.2.3 Segmentation
The history of how Ethernet handles collisions and collision domains dates back to research at the University
of Hawaii in 1970. In its attempts to develop a wireless communication system for the islands of Hawaii,
university researchers developed a protocol called Aloha. The Ethernet protocol is actually based on the
Aloha protocol.
One important skill for a networking professional is the ability to recognize collision domains. A collision
domain is created when several computers are connected to a single shared-access medium that is not
attached to other network devices. This situation limits the number of computers that can use the segment.
Layer 1 devices extend but do not control collision domains.
Layer 2 devices segment or divide collision domains. They use the MAC address assigned to every
Ethernet device to control frame propagation. Layer 2 devices are bridges and switches. They keep track of
the MAC addresses and their segments. This allows these devices to control the flow of traffic at the Layer 2
level. This function makes networks more efficient. It allows data to be transmitted on different segments of
the LAN at the same time without collisions. Bridges and switches divide collision domains into smaller parts.
Each part becomes its own collision domain.
These smaller collision domains will have fewer hosts and less traffic than the original domain. The fewer
hosts that exist in a collision domain, the more likely the media will be available. If the traffic between bridged
segments is not too heavy a bridged network works well. Otherwise, the Layer 2 device can slow down
communication and become a bottleneck.
Layer 2 and 3 devices do not forward collisions. Layer 3 devices divide collision domains into smaller
domains.
Layer 3 devices also perform other functions. These functions will be covered in the section on broadcast
domains.
The Interactive Media Activity will teach students about network segmentation.
Network administrators typically configure RIP on only five to ten routers. For a routing table that has a size of 50
packets, 10 RIP routers would generate about 16 broadcasts per second.
IP multicast applications can adversely affect the performance of large, scaled, switched networks.
Multicasting is an efficient way to send a stream of multimedia data to many users on a shared-media hub.
However, it affects every user on a flat switched network. A packet video application could generate a 7-MB
stream of multicast data that would be sent to every segment. This would result in severe congestion.
Layer 3 devices filter data packets based on IP destination address. The only way that a packet will be
forwarded is if its destination IP address is outside of the broadcast domain and the router has an identified
location to send the packet. A Layer 3 device creates multiple collision and broadcast domains.
Data flow through a routed IP-based network involves data moving across traffic management devices at
Layers 1, 2, and 3 of the OSI model. Layer 1 is used for transmission across the physical media, Layer 2 for
collision domain management, and Layer 3 for broadcast domain management.
Summary
Ethernet is a shared media, baseband technology, which means only one node can transmit data at a time.
Increasing the number of nodes on a single segment increases demand on the available bandwidth. This in
turn increases the probability of collisions. A solution to the problem is to break a large network segment into
parts and separate it into isolated collision domains. Bridges and switches are used to segment the network
into multiple collision domains.
A bridge builds a bridge table from the source addresses of packets it processes. An address is associated
with the port the frame came in on. Eventually the bridge table contains enough address information to allow
the bridge to forward a frame out a particular port based on the destination address. This is how the bridge
controls traffic between two collision domains.
Switches learn in much the same way as bridges but provide a virtual connection directly between the source
and destination nodes, rather than the source collision domain and destination collision domain. Each port
creates its own collision domain. A switch dynamically builds and maintains a Content-Addressable Memory
(CAM) table, holding all of the necessary MAC information for each port. CAM is memory that essentially
works backwards compared to conventional memory. Entering data into the memory will return the
associated address.
Two devices connected through switch ports become the only two nodes in a small collision domain. These
small physical segments are called microsegments. Microsegments connected using twisted pair cabling are
capable of full-duplex communications. In full duplex mode, when separate wires are used for transmitting
and receiving between two hosts, there is no contention for the media. Thus, a collision domain no longer
exists.
There is a propagation delay for the signals traveling along transmission medium. Additionally, as signals are
processed by network devices further delay, or latency, is introduced.
How a frame is switched affects latency and reliability. A switch can start to transfer the frame as soon as the
destination MAC address is received. Switching at this point is called cut-through switching and results in the
lowest latency through the switch. However, cut-through switching provides no error checking. At the other
extreme, the switch can receive the entire frame before sending it out the destination port. This is called
store-and-forward switching. Fragment-free switching reads and checks the first sixty-four bytes of the frame
before forwarding it to the destination port.
Switched networks are often designed with redundant paths to provide for reliability and fault tolerance.
Switches use the Spanning-Tree Protocol (STP) to identify and shut down redundant paths through the
network. The result is a logical hierarchical path through the network with no loops.
Using Layer 2 devices to break up a LAN into multiple collision domains increases available bandwidth for
every host. But Layer 2 devices forward broadcasts, such as ARP requests. A Layer 3 device is required to
control broadcasts and define broadcast domains.
Data flow through a routed IP network involves data moving across traffic management devices at
Layers 1, 2, and 3 of the OSI model. Layer 1 is used for transmission across the physical media, Layer 2
for collision domain management, and Layer 3 for broadcast domain management.
Overview
The Internet was developed to provide a communication network that could function in wartime. Although the
Internet has evolved from the original plan, it is still based on the TCP/IP protocol suite. The design of
TCP/IP is ideal for the decentralized and robust Internet. Many common protocols were designed based on
the four-layer TCP/IP model.
It is useful to know both the TCP/IP and OSI network models. Each model uses its own structure to explain
how a network works. However, there is much overlap between the two models. A system administrator
should be familiar with both models to understand how a network functions.
Any device on the Internet that wants to communicate with other Internet devices must have a unique
identifier. The identifier is known as the IP address because routers use a Layer 3 protocol called the IP
protocol to find the best route to that device. The current version of IP is IPv4. This was designed before
there was a large demand for addresses. Explosive growth of the Internet has threatened to deplete the
supply of IP addresses. Subnets, Network Address Translation (NAT), and private addresses are used to
extend the supply of IP addresses. IPv6 improves on IPv4 and provides a much larger address space.
Administrators can use IPv6 to integrate or eliminate the methods used to work with IPv4.
In addition to the physical MAC address, each computer needs a unique IP address to be part of the Internet.
This is also called the logical address. There are several ways to assign an IP address to a device. Some
devices always have a static address. Others have a temporary address assigned to them each time they
connect to the network. When a dynamically assigned IP address is needed, a device can obtain it several
ways.
For efficient routing to occur between devices, issues such as duplicate IP addresses must be resolved.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Explain why the Internet was developed and how TCP/IP fits the design of the Internet
List the four layers of the TCP/IP model
Describe the functions of each layer of the TCP/IP model
Compare the OSI model and the TCP/IP model
Describe the function and structure of IP addresses
Understand why subnetting is necessary
Explain the difference between public and private addressing
Understand the function of reserved IP addresses
Explain the use of static and dynamic addressing for a device
Understand how dynamic addresses can be assigned with RARP, BootP, and DHCP
Use ARP to obtain the MAC address to send a packet to another device
Understand the issues related to addressing between networks.
9.1
Introduction to TCP/IP
IP provides connectionless, best-effort delivery routing of packets. IP is not concerned with the
content of the packets but looks for a path to the destination.
Internet Control Message Protocol (ICMP) provides control and messaging capabilities.
Address Resolution Protocol (ARP) determines the data link layer address, or MAC address, for
known IP addresses.
Reverse Address Resolution Protocol (RARP) determines the IP address for a known MAC address.
IP performs the following operations:
Defines a packet and an addressing scheme
Transfers data between the Internet layer and network access layer
Routes packets to remote hosts
IP is sometimes referred to as an unreliable protocol. This does not mean that IP will not accurately deliver
data across a network. IP is unreliable because it does not perform error checking and correction. That
function is handled by upper layer protocols from the transport or application layers.
The Interactive Media Activity will help students become familiar with the protocols used in the Internet layer.
Two computers located anywhere in the world that follow certain hardware, software, and protocol
specifications can communicate reliably. The standardization of ways to move data across networks has
made the Internet possible.
9.2
Internet Addresses
9.2.1 IP addressing
For any two systems to communicate, they must be able to identify and locate each other. The addresses in
Figure are not actual network addresses. They represent and show the concept of address grouping.
A computer may be connected to more than one network. In this situation, the system must be given more
than one address. Each address will identify the connection of the computer to a different network. Each
connection point, or interface, on a device has an address to a network. This will allow other computers to
locate the device on that particular network. The combination of the network address and the host address
creates a unique address for each device on a network. Each computer in a TCP/IP network must be given a
unique identifier, or IP address. This address, which operates at Layer 3, allows one computer to locate
another computer on a network. All computers also have a unique physical address, which is known as a
MAC address. These are assigned by the manufacturer of the NIC. MAC addresses operate at Layer 2 of the
OSI model.
An IP address is a 32-bit sequence of ones and zeros. Figure shows a sample 32-bit number. To make the
IP address easier to work with, it is usually written as four decimal numbers separated by periods. For
example, an IP address of one computer is 192.168.1.2. Another computer might have the address
128.10.2.1. This is called the dotted decimal format. Each part of the address is called an octet because it is
made up of eight binary digits. For example, the IP address 192.168.1.8 would be
11000000.10101000.00000001.00001000 in binary notation. The dotted decimal notation is an easier
method to understand than the binary ones and zeros method. This dotted decimal notation also prevents a
large number of transposition errors that would result if only the binary numbers were used.
Both the binary and decimal numbers in Figure represent the same values. However, the address is easier
to understand in dotted decimal notation. This is one of the common problems associated with binary
numbers. The long strings of repeated ones and zeros make errors more likely.
It is easy to see the relationship between the numbers 192.168.1.8 and 192.168.1.9. The binary values
11000000.10101000.00000001.00001000 and 11000000.10101000.00000001.00001001 are not as easy to
recognize. It is more difficult to determine that the binary values are consecutive numbers.
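The conversion between the two notations is mechanical, as the short Python sketch below shows:

```python
def dotted_decimal_to_binary(address: str) -> str:
    # Each of the four octets is printed as eight binary digits.
    return ".".join(f"{int(octet):08b}" for octet in address.split("."))

print(dotted_decimal_to_binary("192.168.1.8"))
# -> 11000000.10101000.00000001.00001000
```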
Each of the octets breaks down into 256 subgroups and they break down into another 256 subgroups with 256
addresses in each. By referring to the group address directly above a group in the hierarchy, all of the groups
that branch from that address can be referenced as a single unit.
This kind of address is called a hierarchical address, because it contains different levels. An IP address
combines these two identifiers into one number. This number must be a unique number, because duplicate
addresses would make routing impossible. The first part identifies the system's network address. The second
part, called the host part, identifies which particular machine it is on the network.
IP addresses are divided into classes to define the large, medium, and small networks. Class A addresses
are assigned to larger networks. Class B addresses are used for medium-sized networks, and Class C for
small networks.
The first step in determining which part of the address identifies the network and which
part identifies the host is identifying the class of an IP address.
The Interactive Media Activity will require students to identify the different classes of addresses.
The first four bits of a Class D address are always set to 1110. Therefore, the first octet range for Class D addresses is 11100000 to 11101111, or 224 to 239. An IP address that starts with a value in the range of 224 to 239 in the
first octet is a Class D address.
A Class E address has been defined. However, the Internet Engineering Task Force (IETF) reserves these
addresses for its own research. Therefore, no Class E addresses have been released for use in the Internet.
The first four bits of a Class E address are always set to 1s. Therefore, the first octet range for Class E
addresses is 11110000 to 11111111, or 240 to 255.
Figure shows the IP address range of the first octet both in decimal and binary for each IP address class.
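A small sketch can classify an address by its first octet. The Class A, B, and C boundaries used here (1 to 126, 128 to 191, and 192 to 223, with 127 reserved for loopback) are the standard first-octet ranges summarized by the figure referenced above.

```python
def address_class(address: str) -> str:
    # Classify an IPv4 address by the value of its first octet.
    first_octet = int(address.split(".")[0])
    if first_octet == 127:
        return "reserved for loopback"
    if 1 <= first_octet <= 126:
        return "A"
    if 128 <= first_octet <= 191:
        return "B"
    if 192 <= first_octet <= 223:
        return "C"
    if 224 <= first_octet <= 239:
        return "D (multicast)"
    return "E (reserved)"    # 240 to 255

print(address_class("10.1.1.1"), address_class("192.168.1.2"), address_class("224.0.0.5"))
```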
9.3
Obtaining an IP address
Servers should be assigned a static IP address so workstations and other devices will always know how to
access needed services. Consider how difficult it would be to phone a business that changed its phone
number every day.
Other devices that should be assigned static IP addresses are network printers, application servers, and
routers.
9.3.3 RARP IP address assignment
Reverse Address Resolution Protocol (RARP) associates a known MAC address with an IP address.
This association allows network devices to encapsulate data before sending the data out on the network. A
network device, such as a diskless workstation, might know its MAC address but not its IP address. RARP
allows the device to make a request to learn its IP address. Devices using RARP require that a RARP server
be present on the network to answer RARP requests.
Consider an example where a source device wants to send data to another device. In this example, the
source device knows its own MAC address but is unable to locate its own IP address in the ARP table. The
source device must include both its MAC address and IP address in order for the destination device to
retrieve data, pass it to higher layers of the OSI model, and respond to the originating device. Therefore, the
source initiates a process called a RARP request. This request helps the source device detect its own IP
address. RARP requests are broadcast onto the LAN and are responded to by the RARP server which is
usually a router.
RARP uses the same packet format as ARP. However, in a RARP request, the MAC headers and
operation code are different from an ARP request.
The RARP packet format contains places for
MAC addresses of both the destination and source devices. The source IP address field is empty. The
broadcast goes to all devices on the network. Figures , , and depict the destination MAC address
as FF:FF:FF:FF:FF:FF. Workstations running RARP have codes in ROM that direct them to start the
RARP process. A step-by-step layout of the RARP process is illustrated in Figures through
.
The TCP/IP suite has a protocol, called Address Resolution Protocol (ARP), which can automatically obtain MAC
addresses for local transmission. Different issues are raised when data is sent outside of the local area
network.
Communications between two LAN segments have an additional task. Both the IP and MAC addresses are
needed for both the destination host and the intermediate routing device. TCP/IP has a variation on ARP
called Proxy ARP that will provide the MAC address of an intermediate device for transmission outside the
LAN to another network segment.
With Proxy ARP, the router sends the MAC address of the interface on which the request was received to the requesting host. The router responds in this
way to those requests in which the destination IP address is not in the range of addresses of the local
subnet.
Another method to send data to the address of a device that is on another network segment is to set up a
default gateway. The default gateway is a host option where the IP address of the router interface is stored
in the network configuration of the host. The source host compares the destination IP address and its own IP
address to determine if the two IP addresses are located on the same segment. If the receiving host is not on
the same segment, the source host sends the data using the actual IP address of the destination and the
MAC address of the router. The MAC address for the router was learned from the ARP table by using the IP
address of that router.
If the default gateway on the host or the proxy ARP feature on the router is not configured, no traffic can
leave the local area network. One or the other is required to have a connection outside of the local area
network.
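The comparison the source host performs can be sketched with Python's standard ipaddress module. The addresses, mask, and gateway below are illustrative assumptions, not values taken from the text.

import ipaddress

# A host decides whether to deliver locally or send to its default gateway
# by ANDing both IP addresses with its subnet mask and comparing the results.
def next_hop(own_ip, dest_ip, mask, default_gateway):
    own_net = ipaddress.ip_network(f"{own_ip}/{mask}", strict=False)
    if ipaddress.ip_address(dest_ip) in own_net:
        return dest_ip           # same segment: ARP for the destination itself
    return default_gateway       # different segment: ARP for the router

# Example values (assumed for illustration only)
print(next_hop("192.168.1.10", "192.168.1.20", "255.255.255.0", "192.168.1.1"))  # 192.168.1.20
print(next_hop("192.168.1.10", "10.0.0.5", "255.255.255.0", "192.168.1.1"))      # 192.168.1.1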
The Lab Activity will introduce the arp -a command.
The Interactive Media Activity will help students understand the ARP process.
Summary
The U.S. Department of Defense (DoD) TCP/IP reference model has four layers: the application layer,
transport layer, Internet layer, and the network access layer. The application layer handles high-level
protocols, issues of representation, encoding, and dialog control. The transport layer provides transport
services from the source host to the destination host. The purpose of the Internet layer is to select the best
path through the network for packet transmissions. The network access layer is concerned with the physical
link to the network media.
Although some layers of the TCP/IP reference model correspond to the seven layers of the OSI model, there
are differences. The TCP/IP model combines the presentation and session layers into its application layer.
The TCP/IP model combines the OSI data link and physical layers into its network access layer.
Routers use the IP address to move data packets between networks. In the current version, IPv4, IP addresses are
32 bits long and are divided into four octets of eight bits each. IP addresses operate at the
network layer, Layer 3, of the OSI model, which corresponds to the Internet layer of the TCP/IP model.
The IP address of a host is a logical address and can be changed. The Media Access Control (MAC)
address of the workstation is a 48-bit physical address. This address is usually burned into the network
interface card (NIC) and cannot change unless the NIC is replaced. TCP/IP communications within a LAN
segment require both a destination IP address and a destination MAC address for delivery. While IP addresses
are unique and routable throughout the Internet, when a packet arrives at the destination network there
needs to be a way to automatically map the IP address to a MAC address. The TCP/IP suite has a protocol,
called Address Resolution Protocol (ARP), which can automatically obtain MAC addresses for local
transmission. A variation on ARP called Proxy ARP will provide the MAC address of an intermediate device
for transmission to another network segment.
There are five classes of IP addresses, A through E. Only the first three classes are used commercially.
Depending on the class, the network and host part of the address will use a different number of bits. The
Class D address is used for multicast groups. Class E addresses are reserved for research use only.
An IP address that has binary zeros in all host bit positions is used to identify the network itself. An address
in which all of the host bits are set to one is the broadcast address and is used for broadcasting packets to all
the devices on a network.
Public IP addresses are unique. No two machines that connect to a public network can have the same IP
address because public IP addresses are global and standardized. Private networks that are not connected
to the Internet may use any host addresses, as long as each host within the private network is unique. Three
blocks of IP addresses are reserved for private, internal use. These three blocks consist of one Class A, a
range of Class B addresses, and a range of Class C addresses. Addresses that fall within these ranges are
discarded by routers and not routed on the Internet backbone.
Subnetting is another means of dividing and identifying separate networks throughout the LAN. Subnetting a
network means to use the subnet mask to divide the network and break a large network up into smaller,
more efficient and manageable segments, or subnets. Subnet addresses include the network portion, plus a
subnet field and a host field. The subnet field and the host field are created from the original host portion for
the entire network.
A more extendible and scalable version of IP, IP Version 6 (IPv6), has been defined and developed. IPv6
uses 128 bits rather than the 32 bits currently used in IPv4. IPv6 uses hexadecimal numbers to represent the
128 bits. IPv6 is being implemented in select networks and may eventually replace IPv4 as the dominant
Internet protocol.
IP addresses are assigned to hosts in the following ways:
Statically - manually, by a network administrator
Dynamically - automatically, using Reverse Address Resolution Protocol (RARP), Bootstrap Protocol (BOOTP), or Dynamic Host Configuration Protocol (DHCP)
Overview
Internet Protocol (IP) is the main routed protocol of the Internet. IP addresses are used to route packets from
a source to a destination through the best available path. The propagation of packets, encapsulation
changes, and connection-oriented and connectionless protocols are also critical to ensure that data is
properly transmitted to its destination. This module will provide an overview of each.
The difference between routing and routed protocols is a common source of confusion. The two words sound
similar but are quite different. Routers use routing protocols to build tables that are used to determine the
best path to a host on the Internet.
Not all organizations can fit into the three class system of A, B, and C addresses. Flexibility exists within the
class system through subnets. Subnets allow network administrators to determine the size of the network
they will work with. After they decide how to segment their networks, they can use subnet masks to
determine the location of each device on a network.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Describe routed protocols
List the steps of data encapsulation in an internetwork as data is routed to Layer 3 devices
Describe connectionless and connection-oriented delivery
Name the IP packet fields
Describe how data is routed
Compare and contrast different types of routing protocols
List and describe several metrics used by routing protocols
List several uses for subnetting
Determine the subnet mask for a given situation
Use a subnet mask to determine the subnet ID
10.1.1 Routable and routed protocols
This page will define routed and routable protocols.
A protocol is a set of rules that determines how computers communicate with each other across networks.
Computers exchange data messages to communicate with each other. To accept and act on these
messages, computers must have sets of rules that determine how a message is interpreted. Examples
include messages used to establish a connection to a remote machine, e-mail messages, and files
transferred over a network.
A protocol describes the following:
The required format of a message
The way that computers must exchange messages for specific activities
A routed protocol allows the router to forward data between nodes on different networks. A routable
protocol must provide the ability to assign a network number and a host number to each device. Some
protocols, such as IPX, require only a network number. These protocols use the MAC address of the host for
the host number. Other protocols, such as IP, require an address with a network portion and a host portion.
These protocols also require a network mask to differentiate the two numbers. The network address is
obtained by ANDing the address with the network mask.
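As an illustration of the AND operation described above, the following short Python sketch extracts the network number from an address and mask. The address and mask values are assumed examples only.

# Obtain the network address by ANDing each octet of the IP address
# with the corresponding octet of the network mask.
def network_address(ip, mask):
    ip_octets = [int(o) for o in ip.split(".")]
    mask_octets = [int(o) for o in mask.split(".")]
    return ".".join(str(i & m) for i, m in zip(ip_octets, mask_octets))

print(network_address("172.16.132.70", "255.255.0.0"))  # 172.16.0.0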
The reason that a network mask is used is to allow groups of sequential IP addresses to be treated as a
single unit. If this grouping were not allowed, each host would have to be mapped individually for
routing. This would be impossible, because according to the Internet Software Consortium there are
approximately 233,101,500 hosts on the Internet.
The encapsulation process adds the appropriate headers and trailers, and then the data is transmitted. The de-encapsulation process
removes the headers and trailers and then recombines the data into a seamless stream.
This course focuses on the most common routable protocol, which is IP. Other examples of routable
protocols include IPX/SPX and AppleTalk. These protocols provide Layer 3 support. Non-routable protocols
do not provide Layer 3 support. The most common non-routable protocol is NetBEUI. NetBEUI is a small,
fast, and efficient protocol that is limited to frame delivery within one segment.
When a router receives a packet, it checks the destination address and attempts to match this
address with a routing table entry.
Routing metric - Different routing protocols use different routing metrics. Routing metrics are used
to determine the desirability of a route. For example, RIP uses hop count as its only routing metric.
IGRP uses bandwidth, load, delay, and reliability metrics to create a composite metric value.
Outbound interfaces - The interface that the data must be sent out of to reach the final destination.
Routers communicate with one another to maintain their routing tables through the transmission of
routing update messages. Some routing protocols transmit update messages periodically. Other
protocols send them only when there are changes in the network topology. Some protocols transmit the
entire routing table in each update message and some transmit only routes that have changed. Routers
analyze the routing updates from directly connected routers to build and maintain their routing tables.
Load - Load is the amount of activity on a network resource such as a router or a link.
Reliability - Reliability is usually a reference to the error rate of each network link.
Hop count - Hop count is the number of routers that a packet must travel through before reaching
its destination. Each router is equal to one hop. A hop count of four indicates that data would have to
pass through four routers to reach its destination. If multiple paths are available to a destination, the
path with the least number of hops is preferred.
Ticks - The delay on a data link measured in IBM PC clock ticks. One tick is approximately 1/18 of a second.
Cost - Cost is an arbitrary value, usually based on bandwidth, monetary expense, or other
measurement, that is assigned by a network administrator.
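As a hedged illustration of how a router might match a destination against routing table entries and prefer the most specific route, here is a minimal Python sketch using longest-prefix matching with a metric tie-breaker. The table entries are invented examples, not routes from the text.

import ipaddress

# (prefix, metric, outbound interface) entries - invented example routes
routing_table = [
    ("10.0.0.0/8",     5,  "Serial0"),
    ("10.1.0.0/16",    3,  "Serial1"),
    ("192.168.1.0/24", 1,  "Ethernet0"),
    ("0.0.0.0/0",      10, "Serial0"),   # default route
]

def lookup(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(p), m, iface)
               for p, m, iface in routing_table
               if dest in ipaddress.ip_network(p)]
    # Prefer the most specific prefix; break ties with the lowest metric.
    return max(matches, key=lambda entry: (entry[0].prefixlen, -entry[1]))

print(lookup("10.1.2.3"))     # matches 10.1.0.0/16 via Serial1
print(lookup("172.16.5.9"))   # falls through to the default route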
10.2.7 IGP and EGP
This page will introduce two types of routing protocols.
An autonomous system is a network or set of networks under common administrative control, such as the
cisco.com domain. An autonomous system consists of routers that present a consistent view of routing to the
external world.
Two families of routing protocols are Interior Gateway Protocols (IGPs) and Exterior Gateway Protocols
(EGPs).
IGPs route data within an autonomous system:
RIP and RIPv2
IGRP
EIGRP
OSPF
Intermediate System-to-Intermediate System (IS-IS) protocol
EGPs route data between autonomous systems. An example of an EGP is the Border Gateway Protocol (BGP).
Interior Gateway Routing Protocol (IGRP) - This IGP was developed by Cisco to address issues
associated with routing in large, heterogeneous networks.
Enhanced IGRP (EIGRP) - This Cisco-proprietary IGP includes many of the features of a link-state
routing protocol. Because of this, it has been called a balanced-hybrid protocol, but it is really an
advanced distance-vector routing protocol.
Link-state routing protocols were designed to overcome the limitations of distance vector routing protocols. Link-state routing protocols respond quickly to network changes by sending triggered updates only when a network
change has occurred. They also send periodic updates, known as link-state refreshes, at
longer time intervals, such as every 30 minutes.
When a route or link changes, the device that detected the change creates a link-state advertisement (LSA)
concerning that link. The LSA is then transmitted to all neighboring devices. Each routing device takes a
copy of the LSA, updates its link-state database, and forwards the LSA to all neighboring devices. This
flooding of LSAs is required to ensure that all routing devices create databases that accurately reflect the
network topology before updating their routing tables.
Link-state algorithms typically use their databases to create routing table entries that prefer the shortest path.
Examples of link-state protocols include Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS).
The Interactive Media Activity will identify the differences between link-state and distance vector routing
protocols.
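Link-state protocols such as OSPF run a shortest path first (Dijkstra) computation over the link-state database. The following compact Python sketch shows the idea on an invented four-router topology; it illustrates the algorithm only and is not the protocol's actual implementation.

import heapq

# Invented link-state database: neighbor -> link cost, per router
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_paths(source):
    # Dijkstra's algorithm: always expand the cheapest known path next.
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue
        for neighbor, link_cost in topology[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return dist

print(shortest_paths("A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}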
10.3.1 Classes of network IP addresses
This page will review the classes of IP addresses. Depending on its class, a network can offer from 256
addresses up to approximately 16.8 million host addresses.
To efficiently manage a limited supply of IP addresses, all classes can be subdivided into smaller
subnetworks. Figure provides an overview of the division between networks and hosts.
The balance of the broadcast ID column can be filled in using the same process that was used in the
subnetwork ID column. Simply add 32 to the preceding broadcast ID of the subnet. Another option is to start
at the bottom of this column and work up to the top by subtracting one from the preceding subnetwork ID.
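The increment-of-32 pattern comes from borrowing three bits in the fourth octet, that is, a /27 mask. The following hedged Python sketch lists the subnetwork and broadcast IDs for an assumed 192.168.1.0/24 network divided this way; the network address is an example, not one from the text.

import ipaddress

# Divide an assumed 192.168.1.0/24 network into /27 subnets (increment of 32).
network = ipaddress.ip_network("192.168.1.0/24")
for subnet in network.subnets(new_prefix=27):
    print(subnet.network_address, subnet.broadcast_address)
# 192.168.1.0   192.168.1.31
# 192.168.1.32  192.168.1.63
# ... each subnetwork ID and broadcast ID is 32 higher than the previous one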
10.3.5 Subnetting Class A and B networks
This page will describe the process used to subnet Class A, B, and C networks.
The Class A and B subnetting procedure is identical to the process for Class C, except that there may be
significantly more bits involved. There are 22 bits available for assignment to the subnet field in a Class A
address, while a Class B address has 14 available bits.
Assigning 12 bits of a Class B address to the subnet field creates a subnet mask of 255.255.255.240 or /28.
All eight bits were assigned in the third octet resulting in 255, the total value of all eight bits. Four bits were
assigned in the fourth octet resulting in 240. Recall that the slash mask is the sum total of all bits assigned to
the subnet field plus the fixed network bits.
Assigning 20 bits of a Class A address to the subnet field creates a subnet mask of 255.255.255.240 or /28.
All eight bits of the second and third octets were assigned to the subnet field and four bits from the fourth
octet.
In this situation, it is apparent that the subnet mask for the Class A and Class B addresses appear identical.
Unless the mask is related to a network address it is not possible to decipher how many bits were assigned
to the subnet field.
Whichever class of address needs to be subnetted, the following rules are the same:
Total subnets = 2 to the power of the bits borrowed
Total hosts = 2 to the power of the bits remaining
Usable subnets = 2 to the power of the bits borrowed minus 2
Usable hosts = 2 to the power of the bits remaining minus 2
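These four rules can be checked with a short Python sketch. The example borrows 12 bits from a Class B host field, matching the /28 case discussed above; the helper function is an illustration, not a standard utility.

# Subnet arithmetic for classful addressing (subnet zero and the all-ones
# subnet excluded, as in the rules above).
def subnet_counts(borrowed_bits, default_host_bits):
    remaining = default_host_bits - borrowed_bits
    return {
        "total subnets": 2 ** borrowed_bits,
        "usable subnets": 2 ** borrowed_bits - 2,
        "total hosts": 2 ** remaining,
        "usable hosts": 2 ** remaining - 2,
    }

# Class B network (16 host bits) with 12 bits borrowed -> /28
print(subnet_counts(12, 16))
# {'total subnets': 4096, 'usable subnets': 4094, 'total hosts': 16, 'usable hosts': 14}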
A network administrator must still know how to develop the subnet scheme and assure the validity of the results from a subnet calculator. The subnet calculator will not provide
the initial scheme, only the final addressing. Also, no calculators of any kind are permitted during the
certification exam.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
IP is referred to as a connectionless protocol because no dedicated circuit connection is established between
source and destination prior to transmission. IP is referred to as unreliable because it does not verify that the
data reached its destination. If verification of delivery is required, a combination of IP and a connection-oriented transport protocol such as TCP is required. If verification of error-free delivery is not required, IP can
be used in combination with a connectionless transport protocol such as UDP. Connectionless network
processes are often referred to as packet-switched processes. Connection-oriented network processes are
often referred to as circuit-switched processes.
Protocols at each layer of the OSI model add control information to the data as it moves through the network.
Because this information is added at the beginning and end of the data, this process is referred to as
encapsulating the data. Layer 3 adds network, or logical, address information to the data and Layer 2 adds
local, or physical, address information.
Layer 3 routing and Layer 2 switching are used to direct and deliver data throughout the network. Initially, the
router receives a Layer 2 frame with a Layer 3 packet encapsulated within it. The router must strip off the
Layer 2 frame and examine the Layer 3 packet. If the packet is destined for local delivery the router must
encapsulate it in a new frame with the correct local MAC address as the destination. If the data must be
forwarded to another broadcast domain, the router must encapsulate the Layer 3 packet in a new Layer 2
frame that contains the MAC address of the next internetworking device. In this way a frame is transmitted
through networks from broadcast domain to broadcast domain and eventually delivered to the correct host.
Routed protocols, such as IP, transport data across a network. Routing protocols allow routers to choose the
best path for data from source to destination. These routes can be either static routes, which are entered
manually, or dynamic routes, which are learned through routing protocols. When dynamic routing protocols
are used, routers use routing update messages to communicate with one another and maintain their routing
tables. Routing algorithms use metrics to process routing updates and populate the routing table with the
best routes. Convergence describes the speed at which all routers agree on a change in the network.
Interior gateway protocols (IGP) are routing protocols that route data within autonomous systems, while
exterior gateway protocols (EGP) route data between autonomous systems. IGPs can be further categorized
as either distance-vector or link-state protocols. Routers using distance-vector routing protocols periodically
send routing updates consisting of all or part of their routing tables. Routers using link-state routing protocols
use link-state advertisements (LSAs) to send updates only when topological changes occur in the network,
and send complete routing tables much less frequently.
As a packet travels through the network, devices need a method of determining what portion of the IP
address identifies the network and what portion identifies the host. A 32-bit address mask, called a subnet
mask, is used to indicate the bits of an IP address that are being used for the network address. The default
subnet mask for a Class A address is 255.0.0.0. For a Class B address, the subnet mask always starts out
as 255.255.0.0, and a Class C subnet mask begins as 255.255.255.0. The subnet mask can be used to split
up an existing network into subnetworks, or subnets.
Subnetting reduces the size of broadcast domains, allows LAN segments in different geographical locations
to communicate through routers and provides improved security by separating one LAN segment from
another.
Custom subnet masks use more bits than the default subnet masks by borrowing these bits from the host
portion of the IP address. This creates a three-part address:
The original network address
The subnet address made up of the bits borrowed
The host address made up of the bits left after borrowing some for subnets
Routers use subnet masks to determine the subnetwork portion of an address for an incoming packet.
This process is referred to as logical ANDing.
Overview
The TCP/IP transport layer transports data between applications on source and destination devices.
Familiarity with the transport layer is essential to understand modern data networks. This module will
describe the functions and services of this layer.
Many of the network applications that are found at the TCP/IP application layer are familiar to most network
users. HTTP, FTP, and SMTP are acronyms that are commonly seen by users of Web browsers and e-mail
clients. This module also describes the function of these and other applications from the TCP/IP networking
model.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811
exams.
Students who complete this module should be able to perform the following tasks:
Describe the functions of the TCP/IP transport layer
Describe flow control
Explain how a connection is established between peer systems
Describe windowing
Describe acknowledgment
Identify and describe transport layer protocols
Describe TCP and UDP header formats
Describe TCP and UDP port numbers
List the major protocols of the TCP/IP application layer
Provide a brief description of the features and operation of well-known TCP/IP applications
The two primary duties of the transport layer are to provide flow control and reliability. The transport layer
defines end-to-end connectivity between host applications. Some basic transport services are as follows:
Segmentation of upper-layer application data
Establishment of end-to-end operations
Transportation of segments from one end host to another
Flow control provided by sliding windows
Reliability provided by sequence numbers and acknowledgments
TCP/IP is a combination of two individual protocols. IP operates at Layer 3 of the OSI model and is a
connectionless protocol that provides best-effort delivery across a network. TCP operates at the transport
layer and is a connection-oriented service that provides flow control and reliability. When these protocols are
combined they provide a wider range of services. The combined protocols are the basis for the TCP/IP
protocol suite. The Internet is built upon this TCP/IP protocol suite.
11.1.2 Flow control
This page will describe how the transport layer provides flow control.
As the transport layer sends data segments, it tries to ensure that data is not lost. Data loss may occur if a
host cannot process data as quickly as it arrives. The host is then forced to discard the data. Flow control
ensures that a source host does not overflow the buffers in a destination host. To provide flow control, TCP
allows the source and destination hosts to communicate. The two hosts then establish a data-transfer rate
that is agreeable to both.
11.1.3 Session establishment, maintenance, and termination
This page discusses transport functionality and how it is accomplished on a segment-by-segment basis.
Applications can send data segments on a first-come, first-served basis. The segments that arrive first will be
taken care of first. These segments can be routed to the same or different destinations. Multiple applications
can share the same transport connection in the OSI reference model. This is referred to as the multiplexing
of upper-layer conversations. Numerous simultaneous upper-layer conversations can be multiplexed over
a single connection.
One function of the transport layer is to establish a connection-oriented session between similar devices at
the application layer. For data transfer to begin, the source and destination applications inform the operating
systems that a connection will be initiated. One node initiates a connection that must be accepted by the
other. Protocol software modules in the two operating systems exchange messages across the network to
verify that the transfer is authorized and that both sides are ready.
The connection is established and the transfer of data begins after all synchronization has occurred. The two
machines continue to communicate through their protocol software to verify that the data is received
correctly.
Figure shows a typical connection between two systems. The first handshake requests synchronization.
The second handshake acknowledges the initial synchronization request and synchronizes the connection
parameters in the opposite direction. The third handshake segment is an acknowledgment used to inform the
destination that both sides agree that a connection has been established. After the connection has been
established, data transfer begins.
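The handshake itself is performed by the operating system's TCP implementation. A hedged Python sketch of the connection setup from the application's point of view might look like this; the host and port are assumed example values.

import socket

# connect() triggers the TCP three-way handshake (SYN, SYN-ACK, ACK)
# before any application data is exchanged.
HOST, PORT = "192.168.10.5", 80   # assumed example values

with socket.create_connection((HOST, PORT), timeout=5) as conn:
    # At this point the three-way handshake has completed and the
    # connection-oriented session is established.
    conn.sendall(b"hello")        # data transfer can now begin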
Congestion can occur for two reasons:
First, a high-speed computer might generate traffic faster than a network can transfer it.
Second, if many computers simultaneously need to send datagrams to a single destination, that
destination can experience congestion, although no single source caused the problem.
When datagrams arrive too quickly for a host or gateway to process, they are temporarily stored in memory.
If the traffic continues, the host or gateway eventually exhausts its memory and must discard additional
datagrams that arrive.
Instead of allowing data to be lost, the TCP process on the receiving host can issue a not ready indicator to
the sender. This indicator signals the sender to stop data transmission. When the receiver can handle
additional data, it sends a ready transport indicator. When this indicator is received, the sender can resume
the segment transmission.
At the end of data transfer, the source host sends a signal that indicates the end of the transmission. The
destination host acknowledges the end of transmission and the connection is terminated.
11.1.5 Windowing
This page will explain how windows are used to transmit data.
Data packets must be delivered to the recipient in the same order in which they were transmitted to have a
reliable, connection-oriented data transfer. The protocol fails if any data packets are lost, damaged,
duplicated, or received in a different order. An easy solution is to have a recipient acknowledge the receipt of
each packet before the next packet is sent.
If a sender had to wait for an ACK after each packet was sent, throughput would be low. Therefore, most
connection-oriented, reliable protocols allow multiple packets to be sent before an ACK is received. The time
interval after the sender transmits a data packet and before the sender processes any ACKs is used to
transmit more data. The number of data packets the sender can transmit before it receives an ACK is known
as the window size, or window.
TCP uses expectational ACKs. This means that the ACK number refers to the next packet that is expected.
Windowing refers to the fact that the window size is negotiated dynamically in the TCP session. Windowing
is a flow-control mechanism. Windowing requires the source device to receive an ACK from the destination
after a certain amount of data is transmitted. The destination host reports a window size to the source host.
This window specifies the number of packets that the destination host is prepared to receive. The first packet
is the ACK.
With a window size of three, the source device can send three bytes to the destination. The source device
must then wait for an ACK. If the destination receives the three bytes, it sends an acknowledgment to the
source device, which can now transmit three more bytes. If the destination does not receive the three bytes,
because of overflowing buffers, it does not send an acknowledgment. Because the source does not receive
an acknowledgment, it knows that the bytes should be retransmitted, and that the transmission rate should
be decreased.
In Figure , the sender sends three packets before it expects an ACK. If the receiver can handle only two
packets, the window drops packet three, specifies three as the next packet, and indicates a new window size
of two. The sender sends the next two packets, but still specifies a window size of three. This means that the
sender will still expect a three-packet ACK from the receiver. The receiver replies with a request for packet
five and again specifies a window size of two.
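A minimal, assumption-laden Python sketch of the bookkeeping described above might look like the following. It models only a fixed window with no loss, not real TCP behavior, and the packet counts are invented.

# Toy model of windowing: the sender may have at most `window` unacknowledged
# packets outstanding; the receiver acknowledges with the next expected number.
def simulate(total_packets, window):
    next_to_send = 1
    next_expected = 1          # receiver's next expected packet (the ACK value)
    while next_expected <= total_packets:
        # Send every packet allowed by the current window.
        while next_to_send < next_expected + window and next_to_send <= total_packets:
            print(f"send packet {next_to_send}")
            next_to_send += 1
        # Receiver acknowledges everything received in order.
        next_expected = next_to_send
        print(f"ACK: next expected packet {next_expected}")

simulate(total_packets=6, window=3)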
11.1.6 Acknowledgment
This page will discuss acknowledgments and the sequence of segments.
Reliable delivery guarantees that a stream of data sent from one device is delivered through a data link to
another device without duplication or data loss. Positive acknowledgment with retransmission is one
technique that guarantees reliable delivery of data. Positive acknowledgment requires a recipient to
communicate with the source and send back an ACK when the data is received. The sender keeps a record
of each data packet, or TCP segment, that it sends and expects an ACK. The sender also starts a timer
when it sends a segment and will retransmit a segment if the timer expires before an ACK arrives.
Figure shows a sender that transmits data packets 1, 2, and 3. The receiver acknowledges receipt of the
packets with a request for packet 4. When the sender receives the ACK, it sends packets 4, 5, and 6. If
packet 5 does not arrive at the destination, the receiver acknowledges with a request to resend packet 5.
The sender resends packet 5 and then receives an ACK to continue with the transmission of packet 7.
TCP provides sequencing of segments with a forward reference acknowledgment. Each segment is
numbered before transmission. At the destination, TCP reassembles the segments into a complete
message. If a sequence number is missing in the series, that segment is retransmitted. Segments that are
not acknowledged within a given time period will result in a retransmission.
11.1.7 TCP
This page will discuss the protocols that use TCP and the fields included in a TCP segment.
TCP is a connection-oriented transport layer protocol that provides reliable full-duplex data transmission.
TCP is part of the TCP/IP protocol stack. In a connection-oriented environment, a connection is established
between both ends before the transfer of information can begin. TCP breaks messages into segments,
reassembles them at the destination, and resends anything that is not received. TCP supplies a virtual circuit
between end-user applications.
The following protocols use TCP:
FTP
HTTP
SMTP
Telnet
The following are the definitions of the fields in the TCP segment:
Source port - Number of the port that sends data
Destination port - Number of the port that receives data
Sequence number - Number used to ensure the data arrives in the correct order
Acknowledgment number - Next expected TCP octet
HLEN - Number of 32-bit words in the header
Reserved - Set to zero
Code bits - Control functions, such as setup and termination of a session
Window - Number of octets that the sender will accept
Checksum - Calculated checksum of the header and data fields
Urgent pointer - Indicates the end of the urgent data
Option - One option currently defined: maximum TCP segment size
Data - Upper-layer protocol data
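To show how these fields map to bits on the wire, here is a hedged Python sketch that packs a bare-bones 20-byte TCP header with struct. The port, sequence, and window values are arbitrary examples, and the checksum is left at zero rather than computed over a pseudo-header as a real stack would do.

import struct

# Pack a minimal 20-byte TCP header (no options). Values are examples only.
src_port, dst_port = 49152, 80
seq, ack = 1000, 0
hlen_words = 5                     # 5 x 32-bit words = 20 bytes, no options
flags = 0x02                       # SYN bit set (code bits)
window = 65535                     # octets the sender will accept
checksum, urgent = 0, 0

header = struct.pack("!HHIIBBHHH",
                     src_port, dst_port, seq, ack,
                     (hlen_words << 4),   # HLEN in the high 4 bits, reserved = 0
                     flags, window, checksum, urgent)
print(len(header))   # 20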
11.1.8 UDP
This page will discuss UDP. UDP is the connectionless transport protocol in the TCP/IP protocol stack.
UDP is a simple protocol that exchanges datagrams without guaranteed delivery. It relies on higher-layer
protocols to handle errors and retransmit data.
UDP does not use windows or ACKs. Reliability is provided by application layer protocols. UDP is designed
for applications that do not need to put sequences of segments together.
The following protocols use UDP:
TFTP
SNMP
DHCP
DNS
The following are the definitions of the fields in the UDP segment:
Source port - Number of the port that sends data
Destination port - Number of the port that receives data
Length - Number of bytes in the header and data
Checksum - Calculated checksum of the header and data fields
Data - Upper-layer protocol data
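Because UDP is connectionless, an application simply hands a datagram to the stack with no handshake or acknowledgment. This minimal Python sketch illustrates the difference from the TCP example earlier; the destination address and port are assumed values.

import socket

# UDP: no connection setup, no acknowledgments - just send the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"status update", ("192.168.10.7", 5005))   # assumed host and port
sender.close()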
11.2
Usually a name or abbreviation that represents the numeric address of an Internet site will make up the domain name. There are more than 200 top-level domains on the Internet,
examples of which include the following:
.us - United States
.uk - United Kingdom
There are also generic names, examples of which include the following:
.edu - educational sites
.com - commercial sites
.gov - government sites
.org - non-profit sites
.net - network service
See the figure for a detailed explanation of these domains.
11.2.3 FTP and TFTP
This page will describe the features of FTP and TFTP.
FTP is a reliable, connection-oriented service that uses TCP to transfer files between systems that support
FTP. The main purpose of FTP is to transfer files from one computer to another by copying and moving files
from servers to clients, and from clients to servers. When files are copied from a server, FTP first establishes
a control connection between the client and the server. Then a second connection is established, which is a
link between the computers through which the data is transferred. Data transfer can occur in ASCII mode or
in binary mode. These modes determine the encoding used for the data file, which in the OSI model is a
presentation layer task. After the file transfer has ended, the data connection terminates automatically. When
the entire session of copying and moving files is complete, the command link is closed when the user logs off
and ends the session.
TFTP is a connectionless service that uses User Datagram Protocol (UDP). TFTP is used on the router to
transfer configuration files and Cisco IOS images and to transfer files between systems that support TFTP.
TFTP is designed to be small and easy to implement. Therefore, it lacks most of the features of FTP. TFTP
can read or write files to or from a remote server but it cannot list directories and currently has no provisions
for user authentication. It is useful in some LANs because it operates faster than FTP and in a stable
environment it works reliably.
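The two-connection behavior described above, a control connection plus a separate data connection per transfer, is what Python's standard ftplib drives. This hedged sketch uses an assumed server name, account, and file name for illustration only.

from ftplib import FTP

# ftplib opens the control connection on login; each transfer command
# then opens a separate data connection.
with FTP("ftp.example.com") as ftp:        # assumed server name
    ftp.login("anonymous", "guest@example.com")
    ftp.retrlines("LIST")                  # directory listing over a data connection
    with open("config.txt", "wb") as local_file:
        ftp.retrbinary("RETR config.txt", local_file.write)   # binary-mode transfer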
11.2.4 HTTP
This page will describe the features of HTTP.
Hypertext Transfer Protocol (HTTP) works with the World Wide Web, which is the fastest growing and most
used part of the Internet. One of the main reasons for the extraordinary growth of the Web is the ease with
which it allows access to information. A Web browser is a client-server application, which means that it
requires both a client and a server component in order to function. A Web browser presents data in
multimedia formats on Web pages that use text, graphics, sound, and video. The Web pages are created
with a format language called Hypertext Markup Language (HTML). HTML directs a Web browser on a
particular Web page to produce the appearance of the page in a specific manner. In addition, HTML specifies
locations for the placement of text, files, and objects that are to be transferred from the Web server to the
Web browser.
Hyperlinks make the World Wide Web easy to navigate. A hyperlink is an object, word, phrase, or picture, on
a Web page. When that hyperlink is clicked, it directs the browser to a new Web page. The Web page
contains, often hidden within its HTML description, an address location known as a Uniform Resource
Locator (URL).
In the URL http://www.cisco.com/edu/, the "http://" tells the browser which protocol to use. The second part,
"www", is the hostname or name of a specific machine with a specific IP address. The last part, /edu/
identifies the specific folder location on the server that contains the default web page.
A Web browser usually opens to a starting or "home" page. The URL of the home page has already been
stored in the configuration area of the Web browser and can be changed at any time. From the starting page,
click on one of the Web page hyperlinks, or type a URL in the address bar of the browser. The Web browser
examines the protocol to determine if it needs to open another program, and then determines the IP address
of the Web server using DNS. Then the transport layer, network layer, data link layer, and physical layer work
together to initiate a session with the Web server. The data that is transferred to the HTTP server contains
the folder name of the Web page location. The data can also contain a specific file name for an HTML page.
If no name is given, then the default name as specified in the configuration on the server is used.
The server responds to the request by sending to the Web client all of the text, audio, video, and graphic files
specified in the HTML instructions. The client browser reassembles all the files to create a view of the Web
page, and then terminates the session. If another page that is located on the same or a different server is
clicked, the whole process begins again.
The Lab Activity will help students become familiar with TCP and HTTP.
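The request and response exchange described above can also be reproduced with Python's standard http.client module. The host and path below mirror the URL example in the text but are used here only as an illustration.

import http.client

# The browser-side steps in miniature: resolve the server, open a TCP
# session, send an HTTP GET for a folder path, and read the response.
conn = http.client.HTTPConnection("www.cisco.com", 80, timeout=10)
conn.request("GET", "/edu/")               # protocol, host, and path from the URL
response = conn.getresponse()
print(response.status, response.reason)    # e.g. 200 OK or a redirect
body = response.read()                     # HTML plus references to other files
conn.close()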
11.2.5 SMTP
This page will discuss the features of SMTP.
Email servers communicate with each other using the Simple Mail Transfer Protocol (SMTP) to send and
receive mail. The SMTP protocol transports email messages in ASCII format using TCP.
When a mail server receives a message destined for a local client, it stores that message and waits for the
client to collect the mail. There are several ways for mail clients to collect their mail. They can use
programs that access the mail server files directly or collect their mail using one of many network protocols.
The most popular mail client protocols are POP3 and IMAP4, which both use TCP to transport data. Even
though mail clients use these special protocols to collect mail, they almost always use SMTP to send mail.
Since two different protocols, and possibly two different servers, are used to send and receive mail, it is
possible that mail clients can perform one task and not the other. Therefore, it is usually a good idea to
troubleshoot e-mail sending problems separately from e-mail receiving problems.
When checking the configuration of a mail client, verify that the SMTP and POP or IMAP settings are
correctly configured. A good way to test if a mail server is reachable is to Telnet to the SMTP port (25) or to
the POP3 port (110). The following command format is used at the Windows command line to test the ability
to reach the SMTP service on the mail server at IP address 192.168.10.5:
C:\>telnet 192.168.10.5 25
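The same reachability test can be scripted. This hedged Python sketch checks whether the SMTP (25) and POP3 (110) ports answer on the example server address from the text.

import socket

# Try to open a TCP connection to each mail-related port and report the result.
SERVER = "192.168.10.5"          # example address from the text
for service, port in (("SMTP", 25), ("POP3", 110)):
    try:
        with socket.create_connection((SERVER, port), timeout=5):
            print(f"{service} port {port} is reachable")
    except OSError as error:
        print(f"{service} port {port} is not reachable: {error}")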
The SMTP protocol does not offer much in the way of security and does not require any authentication.
Administrators often do not allow hosts that are not part of their network to use their SMTP server to send or
relay mail. This is to prevent unauthorized users from using their servers as mail relays.
11.2.6 SNMP
This page will define SNMP.
The Simple Network Management Protocol (SNMP) is an application layer protocol that facilitates the
exchange of management information between network devices. SNMP enables network administrators to
manage network performance, find and solve network problems, and plan for network growth. SNMP uses
UDP as its transport layer protocol.
An SNMP managed network consists of the following three key components:
Network management system (NMS) - The NMS executes applications that monitor and control
managed devices. The bulk of the processing and memory resources required for network
management are provided by the NMS. One or more NMSs must exist on any managed network.
Managed devices - Managed devices are network nodes that contain an SNMP agent and that
reside on a managed network. Managed devices collect and store management information and
make this information available to NMSs using SNMP. Managed devices, sometimes called network
elements, can be routers, access servers, switches, bridges, hubs, computer hosts, or printers.
Agents - Agents are network-management software modules that reside in managed devices. An
agent has local knowledge of management information and translates that information into a form
compatible with SNMP.
11.2.7 Telnet
This page will explain the features of Telnet.
Telnet client software provides the ability to login to a remote Internet host that is running a Telnet server
application and then to execute commands from the command line. A Telnet client is referred to as a local
host. A Telnet server, which uses special software called a daemon, is referred to as a remote host.
To make a connection from a Telnet client, the connection option must be selected. A dialog box typically
prompts for a host name and terminal type. The host name is the IP address or DNS name of the remote
computer. The terminal type describes the type of terminal emulation that the Telnet client should perform.
The Telnet operation uses none of the processing power from the transmitting computer. Instead, it transmits
the keystrokes to the remote host and sends the resulting screen output back to the local monitor. All
processing and storage take place on the remote computer.
Telnet works at the application layer of the TCP/IP model. Therefore, Telnet works at the top three layers of
the OSI model. The application layer deals with commands. The presentation layer handles formatting,
usually ASCII. The session layer transmits. In the TCP/IP model, all of these functions are considered to be
part of the application layer.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
The primary duties of the transport layer, Layer 4 of the OSI model, are to transport and regulate the flow of
information from the source to the destination reliably and accurately.
The transport layer multiplexes data from upper layer applications into a stream of data packets. It uses port
(socket) numbers to identify different conversations and delivers the data to the correct application.
The Transmission Control Protocol (TCP) is a connection-oriented transport protocol that provides flow
control as well as reliability. TCP uses a three-way handshake to establish a synchronized circuit between
end-user applications. Each datagram is numbered before transmission. At the receiving station, TCP
reassembles the segments into a complete message. If a sequence number is missing in the series, that
segment is retransmitted.
Flow control ensures that a transmitting node does not overwhelm a receiving node with data. The simplest
method of flow control used by TCP involves a not ready signal that notifies the transmitting device that the
buffers on the receiving device are full. When the receiver can handle additional data, the receiver sends a
ready transport indicator.
Positive acknowledgment with retransmission is another TCP protocol technique that guarantees reliable
delivery of data. Because having to wait for an acknowledgment after sending each packet would negatively
impact throughput, windowing is used to allow multiple packets to be transmitted before an acknowledgment
is received. TCP window sizes are variable during the lifetime of a connection.
If an application does not require flow control or an acknowledgment, as in the case of a broadcast
transmission, User Datagram Protocol (UDP) can be used instead of TCP. UDP is a connectionless transport
protocol in the TCP/IP protocol stack that allows multiple conversations to occur simultaneously but does not
provide acknowledgments or guaranteed delivery. A UDP header is much smaller than a TCP header
because it carries less control information.
Some of the protocols and applications that function at the application level are well known to Internet users:
Domain Name System (DNS) - Used in IP networks to translate names of network nodes into IP
addresses
File Transfer Protocol (FTP) - Used for transferring files between networks
Hypertext Transfer Protocol (HTTP) - Used to deliver hypertext markup language (HTML)
documents to a client application, such as a WWW browser
Simple Mail Transfer Protocol (SMTP) - Used to provide electronic mail services
Simple Network Management Protocol (SNMP) - Used to monitor and control network devices
and to manage configurations, statistics collection, performance and security
Telnet - Used to login to a remote host that is running a Telnet server application and then to execute
commands from the command line