Computer Network UNIT 2
UNIT -2
By: Nihal Kumar
Framing:
To provide service to the network layer, the data link layer must use the service provided
to it by the physical layer. What the physical layer does is accept a raw bit stream and attempt to
deliver it to the destination. This bit stream is not guaranteed to be error free. The number of bits
received may be less than, equal to, or more than the number of bits transmitted, and they may
have different values. It is up to the data link layer to detect and, if necessary, correct errors. The
usual approach is for the data link layer to break the bit stream up into discrete frames and
compute the checksum for each frame. When a frame arrives at the destination, the checksum is
recomputed. If the newly computed checksum is different from the one contained in the frame,
the data link layer knows that an error has occurred and takes steps to deal with it (e.g.,
discarding the bad frame and possibly also sending back an error report).
Breaking the bit stream up into frames is more difficult than it at first appears. One way to
achieve this framing is to insert time gaps between frames, much like the spaces between words
in ordinary text. However, networks rarely make any guarantees about timing, so it is possible
these gaps might be squeezed out or other gaps might be inserted during transmission. Since it is
too risky to count on timing to mark the start and end of each frame, other methods have been
devised. We will look at four methods:
1. Character count.
2. Starting and ending characters, with character stuffing.
3. Starting and ending flags, with bit stuffing.
4. Physical layer coding violations.
Character count
In this method a field in the header specifies the number of CHARACTERS in the frame.
When the data link layer at the destination sees the character count, it knows how many
characters follow and hence where the end of the frame is. The trouble with this
algorithm is that the count can be garbled by a transmission error, in which case the
destination gets out of synchronization and is unable to locate the start of the next frame.
There is no way of telling where the next frame starts. For this reason this method is
rarely used.
[Figure: A character stream divided by character counts. (a) Without errors, the counts
5, 5, 8, 8 delimit four frames:
5 1 2 3 4 | 5 6 7 8 9 | 8 0 1 2 3 4 5 6 | 8 7 8 9 0 1 2 3
(b) With one error, the count of frame 2 is corrupted from 5 to 7, so the receiver
misreads every frame boundary that follows and treats an ordinary data character as the
next count.]
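The receiver's parsing loop for the character-count method can be sketched in a few lines (a hypothetical `split_frames` helper; the count byte is assumed to count itself, as in the figure):

```python
def split_frames(stream):
    """Split a byte stream into frames using the character-count method.

    The first byte of each frame is the total frame length,
    count byte included.
    """
    frames = []
    i = 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i:i + count])
        i += count
    return frames

# The error-free stream from the figure: frames of lengths 5, 5, 8, 8.
stream = [5, 1, 2, 3, 4, 5, 6, 7, 8, 9,
          8, 0, 1, 2, 3, 4, 5, 6, 8, 7, 8, 9, 0, 1, 2, 3]
frames = split_frames(stream)
```

A single corrupted count byte shifts `i` to the wrong place, after which every later "count" is really a data byte, which is exactly the desynchronization failure described above.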
Starting and ending characters (character stuffing)
In this method each frame starts with a FLAG and ends with a FLAG.
The starting flag is DLE STX (Data Link Escape, Start of Text).
The ending flag is DLE ETX (Data Link Escape, End of Text).
Disadvantages:
1. 24 bits are unnecessarily stuffed.
2. Transmission delay.
Bit stuffing
In this method, whenever FIVE consecutive ONEs occur in the data, a ZERO is stuffed
after them.
Ex. The given data is 01111000011111110101001111110 01111101100
A 0 (the stuffed bit) is inserted after every run of five consecutive 1s before
transmission; the receiver removes these stuffed bits again.
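The stuffing and destuffing rules can be sketched as follows (hypothetical helper names; bits are represented as lists of 0s and 1s):

```python
def bit_stuff(bits):
    """After every run of five consecutive 1s, insert a stuffed 0."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)  # the stuffed bit
            run = 0
    return out

def bit_destuff(bits):
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:            # this bit is the stuffed 0; drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out

data = [0, 1, 1, 1, 1, 1, 1, 1, 0, 1]  # contains a run of seven 1s
assert bit_destuff(bit_stuff(data)) == data
```

Because the stuffed stream can never carry six 1s in a row inside the data, the flag pattern cannot be imitated by user data.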
Advantages:
1. Frames may contain an arbitrary number of bits, since framing no longer depends on
8-bit character codes.
Type of errors
Single-Bit Error
Single-bit error means that only one bit of a given data unit (for example a byte,
character, frame, or packet) is changed from one to zero or from zero to one.
Suppose that a sender transmits a group of 8 bits as an ASCII character. In the following
example, 01010001 (ASCII Q) was sent but 01000001 (ASCII A) was received at the other end.
Single-bit errors are most likely to occur in parallel transmission but are rare in serial
transmission. One reason is that one of the 8 parallel wires may be noisy.
Suppose a sender transfers data at 1 Mbps. Each bit then lasts only 1/1,000,000
second, or 1 µs. If a noise burst lasts only 1 µs (normally noise lasts much longer),
it can affect only one bit.
Burst Error
A burst error means that two or more bits in the data unit have changed from one to zero
or from zero to one.
In the following example, 01011101 01000011 was sent but 01000100 01000011 was
received. Burst errors need not occur in consecutive bits. The length of a burst error is
measured from the first corrupted bit to the last corrupted bit; some bits between them
may not be corrupted. In this example, three bits have changed from one to zero, but the
length of the burst error is 5.
Burst errors are most likely to occur in serial transmission, because data transfer in
serial transmission is at a slower speed, so a noise burst of fixed duration covers more
bits. Suppose a sender transfers data at 1 kbps. Each bit then lasts 1/1000 second. If a
noise burst lasts 1/100 second, it can affect 10 bits.
Given any two codewords W1 and W2, the number of bit positions in which the codewords
differ is called the Hamming distance, d(W1, W2), between them. To determine
how many bits differ, just Exclusive-OR the two codewords and count the number of 1
bits in the result:
d(W1, W2) = number of 1 bits in W1 ⊕ W2
The significance is that if two codewords are a Hamming distance d apart, it will require d
single-bit errors to convert one into the other.
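The XOR-and-count rule translates directly into code (a minimal sketch; `bin(...).count("1")` does the population count):

```python
def hamming_distance(w1, w2):
    """Number of bit positions in which two codewords differ:
    XOR the codewords and count the 1 bits in the result."""
    return bin(w1 ^ w2).count("1")

# 10001001 and 10110001 differ in three positions.
assert hamming_distance(0b10001001, 0b10110001) == 3
```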
1. PARITY METHOD
2. LRC METHOD (Longitudinal redundancy check)
3. CRC METHOD (Cyclic redundancy check)
4. HAMMING CODE METHOD
PARITY METHOD
If one bit or any odd number of bits is erroneously inverted during transmission, the
receiver will detect an error. However, if two or any even number of bits are inverted, the
error goes undetected.
The parity bit computed over each individual character is the Vertical Redundancy
Check (VRC); the parity check character computed column-wise over the whole block is
the Longitudinal Redundancy Check (LRC). Each bit of the LRC character is
Ci = bi1 ⊕ bi2 ⊕ ... ⊕ bim
where bij is the i-th bit of the j-th character.
[Figure: a block of m characters arranged as rows (for example 10110111, 11010111,
00111010, 11110000); each row carries its own VRC parity bit R1 ... Rm, and a final
parity check character (the LRC, bits c1 ... cn) is formed from the column parities.]
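A minimal sketch of these two checks in Python (hypothetical `vrc` and `lrc` helpers, assuming even parity):

```python
def vrc(char_bits):
    """Even-parity VRC bit for one character (row parity)."""
    return sum(char_bits) % 2

def lrc(block):
    """LRC parity check character: even parity over each column."""
    return [sum(col) % 2 for col in zip(*block)]

block = [
    [1, 0, 1, 1, 0, 1, 1, 1],
    [1, 1, 0, 1, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 0, 1, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
]
row_parity = [vrc(c) for c in block]  # one VRC bit per character
check_char = lrc(block)               # appended as the parity check character
```

Used together, the row and column parities can locate (not just detect) a single-bit error: the bad row is flagged by its VRC and the bad column by the LRC.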
CRC Method
1. The frame is expressed in the form of a polynomial F(x).
2. Both the sender and receiver will agree upon a generator polynomial G(x) in
advance.
3. Let ‘r’ be the degree of G(x). Append r zero bits to the low-order end of the
frame; it now contains m + r bits.
4. Divide the bit string by G(x) using Mod 2 operation.
5. Transmitted frame [T(x)] = frame + remainder
6. Divide T(x) by G(x) at the receiver end. If the remainder is zero, the frame was
transmitted correctly.
Ex. Frame: 1101011011
Generator: 10011
Message after appending 4 zero bits: 11010110110000
Mod-2 division at the sender:

             1100001010
           ______________
     10011)11010110110000
            10011
            -----
             10011
             10011
             -----
              00001
              00000
              -----
               00010
               00000
               -----
                00101
                00000
                -----
                 01011
                 00000
                 -----
                  10110
                  10011
                  -----
                   01010
                   00000
                   -----
                    10100
                    10011
                    -----
                     01110
                     00000
                     -----
                      1110   Remainder

Transmitted frame: 11010110111110
At the receiver, dividing the transmitted frame 11010110111110 by 10011 in the same
way leaves a remainder of 0000. Since the remainder is zero, there is no error in the
transmitted frame.
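The division above can be reproduced in a few lines (a sketch; `crc_remainder` and `crc_check` are hypothetical helper names):

```python
def crc_remainder(frame, generator):
    """Append r = len(generator) - 1 zeros, then perform mod-2 (XOR)
    long division and return the r-bit remainder."""
    r = len(generator) - 1
    bits = list(frame) + [0] * r
    for i in range(len(frame)):
        if bits[i]:                      # quotient bit is 1: subtract (XOR)
            for j, g in enumerate(generator):
                bits[i + j] ^= g
    return bits[-r:]

def crc_check(received, generator):
    """Receiver side: the received frame divides evenly iff error-free."""
    bits = list(received)
    for i in range(len(received) - len(generator) + 1):
        if bits[i]:
            for j, g in enumerate(generator):
                bits[i + j] ^= g
    return all(b == 0 for b in bits)

frame = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 1101011011
gen   = [1, 0, 0, 1, 1]                  # 10011
rem = crc_remainder(frame, gen)          # the 4-bit remainder 1110
assert crc_check(frame + rem, gen)       # transmitted frame checks out
```

Flipping any single bit of `frame + rem` makes `crc_check` fail, since a generator with more than one term detects all single-bit errors.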
HAMMING CODES
Hamming codes provide another method for error correction. Error bits, called Hamming
bits, are inserted into message bits at random locations. It is believed that the
randomness of their locations reduces the odds that these Hamming bits themselves
would be in error. This is based on a mathematical assumption that because there are so
many more message bits compared with Hamming bits, there is a greater chance for a
message bit to be in error than for a Hamming bit to be wrong. Determining the
placement and binary value of the Hamming bits can be implemented using hardware,
but it is often more practical to implement them using software. The number of bits in a
message (M) is counted and used to solve the following equation to determine the
number of Hamming bits (H) to be used:
2^H ≥ M + H + 1
Once the number of Hamming bits is determined, the actual placement of the bits into the
message is performed. It is important to note that despite the random nature of the
Hamming bit placements, the exact same placements must be known and used by both
the transmitter and receiver. Once the Hamming bits are inserted into their positions, the
numerical values of the bit positions of the logic 1 bits in the original message are listed.
The equivalent binary numbers of these values are added in the same manner as used in
previous error methods by discarding all carry results. The sum produced is used as the
states of the Hamming bits in the message. The numerical difference between the
Hamming values transmitted and that produced at the receiver indicates the bit position
that contains a bad bit, which is then inverted to correct it.
Ex. The given data
10010001100101(14- bits)
The number of hamming codes
2^H ≥ M + H + 1
H = ?, M = 14. To satisfy this equation H should be 5, i.e. 5 Hamming code
bits should be incorporated into the data bits.
1 0 0 1 0 0 0 1 1 0 H 0 H 1 H 0 H 1 H
(bit positions are numbered from the right, the rightmost bit being position 1)
Now list the positions where binary 1s are present and add them using mod-2 addition (Ex-OR).
The result gives the Hamming code at the transmitter end.
2 - 0 0 0 1 0
6 - 0 0 1 1 0
11 - 0 1 0 1 1
12 - 0 1 1 0 0
16 - 1 0 0 0 0
19 - 1 0 0 1 1
Hamming code = 0 0 0 0 0
This Hamming code will be incorporated at the places of ‘H’ in the data bits and the data
will be transmitted.
How is an error in the data found?
Suppose the receiver receives the 12th bit as zero. The receiver computes the Hamming
code in the same way as the transmitter; since position 12 no longer contributes a 1, the
result is 0 1 1 0 0. The decimal equivalent of this binary value is 12, so the error occurred
at the 12th place, and that bit is inverted to correct it.
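The scheme used in these notes (XOR together the position numbers of the 1 bits) can be sketched as follows; the helper names are illustrative, and positions are numbered from the right as in the example above:

```python
def hamming_bit_count(m):
    """Smallest H satisfying 2**H >= M + H + 1."""
    h = 0
    while 2 ** h < m + h + 1:
        h += 1
    return h

def position_code(one_positions, width):
    """XOR together the position numbers of the 1 bits."""
    code = 0
    for p in one_positions:
        code ^= p
    return format(code, "0{}b".format(width))

# 14 data bits need H = 5 Hamming bits.
assert hamming_bit_count(14) == 5
# Transmitter: 1s at positions 2, 6, 11, 12, 16, 19 -> Hamming code 00000.
assert position_code([2, 6, 11, 12, 16, 19], 5) == "00000"
# Receiver: bit 12 arrives as 0 and drops out of the XOR;
# the result 01100 (decimal 12) locates the bad bit.
assert position_code([2, 6, 11, 16, 19], 5) == "01100"
```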
Damaged Frame
A recognizable frame does arrive, but some of the bits have been altered during
transmission; the received frame contains errors.
Lost Frames
A frame fails to arrive at the other side. For example, a noise burst may damage a frame
to the extent that the receiver is not aware that a frame has been transmitted.
Lost acknowledgement
An acknowledgement fails to arrive at the source, so the sender is not aware that an
acknowledgement has been transmitted by the receiver.
The purpose of ARQ is to turn an unreliable data transmission into a reliable one. There
are two categories of ARQ:
1. Stop-and-wait ARQ
2. Continuous ARQ
Stop-and-wait ARQ
Stop-and-wait ARQ is based on stop-and-wait flow control technique. The stop-and-wait
process allows the transmitter DLC station to transmit a block of data and then wait for
the acknowledgement (ACK) signal from receiver station, which indicates correct
reception. No other data frames can be sent until the receiver’s reply (ACK) arrives at the
source station. There is chance that a frame that arrives at the destination (receiver) is
damaged. The receiver detects this error by using error detection technique. If the
receiver detects an error, it simply discards the corrupted frame and sends a negative
acknowledgement (NAK). On receiving the NAK, the sender retransmits the message.
This type of protocol is most often found in simplex and half-duplex
communication.
The ACK/NAK frame is a small feedback frame from receiver to sender. Little dummy
frames are used to carry ACK/NAK response.
The acknowledgement is attached to an outgoing data frame from receiver to sender (i.e.
full-duplex operation) using an ack field in the header of the frames traveling in the
opposite direction. The technique of temporarily delaying outgoing acknowledgements so
that they can be hooked onto the next outgoing data frame is known as piggybacking. In
effect, the acknowledgement gets a free ride on the next outgoing data frame.
The main advantage of piggybacking is better use of the available channel bandwidth,
since the acknowledgement does not require a frame of its own.
Continuous ARQ
In continuous ARQ, frames are transmitted continuously. The transmitter contains buffer
storage, and both transmitter and receiver maintain frame counts. If the receiver detects
an erroneous frame, a NAK is sent to the transmitter with the defective frame number N.
When the transmitter gets the NAK message, it can retransmit in either of the
following ways:
1. Go-Back-N
2. Selective Repeat
Go-Back-N
In Go-Back-N ARQ the transmitter retransmits all frames starting from N. Whenever
the transmitter receives a NAK message, it simply goes back to frame N and
resumes transmission from there. Every time a NAK is received, the transmitter
repositions its frame pointer back to frame N. The number of frames that must be
retransmitted is at least one and often more, since all frames from the erroneous frame
onward are retransmitted. The receiver simply discards all subsequent frames,
sending no acknowledgements for the discarded frames. Go-Back-N ARQ is the most
widely used type of ARQ protocol.
Selective Repeat
In Selective Repeat ARQ the transmitter retransmits only the defective frame N, not
the subsequent frames. The number of frames that must be retransmitted is always
one: the frame containing the error. The receiver buffers all correct frames
following the erroneous frame. When the transmitter receives a NAK message, it
retransmits just the defective frame, not all its successors. If the second try succeeds, the
receiver rearranges the frames into sequence. Selective repeat is more efficient than
Go-Back-N when few errors occur, but it can require a large amount of
buffer memory. Both require the use of a full-duplex link.
In comparison with the stop-and-wait protocol, link efficiency is improved by both
implementations because line idle and line turnaround times are eliminated.
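A toy trace makes the retransmission cost of Go-Back-N concrete (a sketch with hypothetical names; the fixed `pipeline_depth` of 4 frames in flight before the NAK arrives is an illustrative assumption):

```python
def go_back_n_trace(total, damaged, pipeline_depth=4):
    """Frame numbers the sender puts on the line when each frame in
    `damaged` is corrupted on its first transmission. After a NAK for
    frame N the sender rewinds to N and resends everything from there."""
    line, i = [], 0
    damaged = set(damaged)
    while i < total:
        if i in damaged:
            # frame i plus the frames already in flight are wasted
            line.extend(range(i, min(i + 1 + pipeline_depth, total)))
            damaged.discard(i)  # rewind: frame i is resent on the next pass
        else:
            line.append(i)
            i += 1
    return line

# An error in frame 3 forces frames 3-7 to be sent twice,
# as in the Go-Back-N example with 8 frames.
assert go_back_n_trace(8, damaged=[3]) == [0, 1, 2, 3, 4, 5, 6, 7, 3, 4, 5, 6, 7]
```

Under Selective Repeat the same error would cost only one extra transmission of frame 3, at the price of receiver buffering.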
[Figure: Go-Back-N strategy. The sender transmits frames 0 1 2 3 4 5 6 7; frame 3
arrives in error (E), so the receiver discards (D) frames 4-7. The sender goes back and
retransmits 3 4 5 6 7, which the receiver now accepts.]
[Figure: Selective Repeat strategy. The sender transmits frames 0 1 2 3 4 5 6; frame 3
arrives in error (E), so the receiver buffers the correct frames 4-6. The sender retransmits
only frame 3, after which transmission continues with 7 8 9 10 11.]
In the example the window size is 1, with a 2-bit sequence number. The corresponding
window operations are shown in the following figure.
• Initially the lower and upper edges of the sender’s window are equal. The
receiving window is open to accept the frame numbered 0.
• The sending window is open and frame 0 has been transmitted to the receiver. The
receiving window is still open to receive frame 0.
• After frame 0 is received, the receiving window is rotated by one to be ready to
receive the next frame. The acknowledgement is issued by receiver before the
window is rotated. Sending window open to receive acknowledgement of frame 0.
• The number of the acknowledgement frame indicates the number of the last frame
received correctly. If this number matches with the number of the outstanding frame
in the sender’s buffer, the frame is taken to be received properly by the receiver and
the sender takes up the next frame for further transmission. If there is a mismatch, the
sender retransmits the frame from the buffer.
[Figure: Sliding-window operation with a 2-bit sequence number (window size 1).
(a) Initial setting: the sender's and receiver's windows are aligned.
(b) After frame 0 is sent: the sending window is open, awaiting ACK 0.
(c) After frame 0 is received: the receiving window is rotated by one and ACK 0 is sent.
(d) After ACK 0 is received: the sending window is rotated and the next frame may be sent.]
- In a broadcast network, the key issue is how to share the channel among
several users.
- Ex.: a conference call with five people.
- Broadcast channels are also called multi-access channels or random access
channels.
- Multi-access channels belong to a sublayer of the data link layer called the MAC sublayer.
The Channel Allocation Problem:
Drawbacks of static channel allocation: 1) Channel capacity is wasted if one or more
stations do not send data.
2) It does not scale as the number of users increases.
Pure ALOHA
-In the 1970s, Norman Abramson and his colleagues devised this method, using ground-based
radio broadcasting. It is called the ALOHA system.
-The basic idea, many users are competing for the use of a single shared channel.
-There are two versions of ALOHA: Pure and Slotted.
-Pure ALOHA does not require global time synchronization, where as in slotted ALOHA
the time is divided into discrete slots into which all frames must fit.
-Let users transmit whenever they have data to be sent.
-There will be collisions and all collided frames will be damaged.
-Senders know through the feedback property whether a frame was destroyed, by
listening to the channel.
[-With a LAN the feedback is immediate; with a satellite, it takes 270 msec.]
-If the frame was destroyed, the sender waits a random amount of time and sends
the frame again.
-The waiting time must be random otherwise the same frame will collide over and over.
[Figure: In pure ALOHA, users transmit frames whenever they have data; overlapping
frames collide in time.]
[Figure: Vulnerable period for the shaded frame, which starts at t0 + t. A frame begun
between t0 and t0 + t collides with the start of the shaded frame; a frame begun between
t0 + t and t0 + 2t collides with its end. The vulnerable period is therefore two frame
times.]
[Figure: Throughput S versus offered traffic G (attempts per packet time).
Slotted ALOHA, S = Ge^-G, peaks at S = 0.368 when G = 1.0;
pure ALOHA, S = Ge^-2G, peaks at S = 0.184 when G = 0.5.]
Slotted ALOHA
-In 1972, Roberts’ devised a method for doubling the capacity of ALOHA system.
-In this system the time is divided into discrete intervals, each interval corresponding to
one frame.
-One way to achieve synchronization would be to have one special station emit a pip at
the start of each interval, like a clock.
-In Roberts’ method, which has come to be known as slotted ALOHA (in contrast to
Abramson’s pure ALOHA), a computer is not permitted to send whenever a carriage
return is typed.
-Instead, it is required to wait for the beginning of the next slot.
-Thus the continuous pure ALOHA is turned into a discrete one.
-Since the vulnerable period is now halved, the probability of no other traffic during the
same slot as our test frame is e^-G, which leads to
S = Ge^-G
- At G=1, slotted ALOHA will have maximum throughput.
- So S=1/e or about 0.368, twice that of pure ALOHA.
- The channel utilization is 37% in slotted ALOHA.
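These throughput formulas are easy to check numerically (a small sketch; pure ALOHA carries the doubled exponent because its vulnerable period is two frame times):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G): the vulnerable period is one slot."""
    return G * math.exp(-G)

# The peak values quoted above:
assert round(pure_aloha_throughput(0.5), 3) == 0.184
assert round(slotted_aloha_throughput(1.0), 3) == 0.368
```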
Carrier Sense Multiple Access Protocols
Protocols in which stations listen for a carrier (i.e., a transmission) and act accordingly
are called carrier sense protocols.
Persistent CSMA
When a station has data to send, it first listens to the channel to see if anyone else is
transmitting at that moment. If the channel is busy, the station waits until it becomes idle.
When the station detects an idle channel, it transmits a frame. If a collision occurs, the
station waits a random amount of time and starts all over again. The protocol is called
1-persistent because the station transmits with a probability of 1 when it finds the
channel idle.
The propagation delay has an important effect on the performance of the protocol. The
longer the propagation delay the worse the performance of the protocol.
Even if the propagation delay is zero, there will still be collisions. If two stations sense
the channel idle at the same time, both will send frames and a collision will occur.
With persistent CSMA, what happens if two stations become active while a third station
is busy? Both wait for the active station to finish, then simultaneously launch a packet,
resulting in a collision. There are two ways to handle this problem:
a) P-persistent CSMA b) exponential backoff.
P-persistent CSMA
The first technique is for a waiting station not to launch a packet immediately when the
channel becomes idle, but first toss a coin, and send a packet only if the coin comes up
heads. If the coin comes up tails, the station waits for some time (one slot for slotted
CSMA), then repeats the process. The idea is that if two stations are both waiting for the
medium, this reduces the chance of a collision from 100% to 25%. A simple
generalization of the scheme is to use a biased coin, so that the probability of sending a
packet when the medium becomes idle is not 0.5 but p, where 0 < p < 1. We call such a
scheme p-persistent CSMA. The original scheme, where p = 1, is thus called 1-persistent
CSMA.
Exponential backoff
The key idea is that each station, after transmitting a packet, checks whether the packet
transmission was successful. Successful transmission is indicated either by an explicit
acknowledgement from the receiver or the absence of a signal from a collision detection
circuit. If the transmission is successful, the station is done. Otherwise, the station
retransmits the packet, simultaneously realizing that at least one other station is also
contending for the medium. To prevent its retransmission from colliding with the other
station’s retransmission, each station backs off (that is, idles) for a random time chosen
from a backoff interval, which is typically doubled after each successive collision.
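The usual choice of that interval, binary exponential backoff as used by Ethernet, can be sketched as follows (the function name and the cap at 2^10 are illustrative assumptions):

```python
import random

def backoff_slots(collisions, max_exp=10):
    """Binary exponential backoff: after the k-th consecutive collision,
    idle for a random number of slot times drawn from [0, 2**k - 1],
    with the exponent capped at max_exp."""
    k = min(collisions, max_exp)
    return random.randint(0, 2 ** k - 1)

random.seed(1)  # for a reproducible demonstration
waits = [backoff_slots(k) for k in range(1, 6)]
assert all(0 <= w <= 2 ** k - 1 for k, w in zip(range(1, 6), waits))
```

Doubling the interval spreads contending stations apart quickly: after k collisions there are 2^k possible slots, so the chance that two stations pick the same one falls off rapidly.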
The Basic Bit-Map Protocol
In the basic bit-map protocol, each contention period consists of N slots, one per station.
If station 0 has a frame to send, it transmits a 1 bit during slot 0; station 1 gets the
opportunity to transmit a 1 during slot 1, but only if it has a frame queued. In general,
station j may announce the fact that it has a frame to send by inserting a 1 bit into slot j.
After all N slots have passed by, each station has complete knowledge of which stations
wish to transmit.
[Figure: The basic bit-map protocol with 8 contention slots (0-7). In the first contention
period stations 1, 3, and 7 set their bits and then transmit their frames in numerical order;
in the next period stations 1 and 5 do the same.]
Since everyone agrees on who goes next, there will never be any collisions. After the last
ready station has transmitted its frame, an event all stations can easily monitor, another N
bit contention period is begun. If a station becomes ready just after its bit slot has passed
by, it is out of luck and must remain silent until every station has had a chance and the bit
map has come around again. Protocols like this in which the desire to transmit is
broadcast before the actual transmission are called reservation protocols.
Binary Countdown
A problem with the basic bit-map protocol is that the overhead is 1 bit per station. A
station wanting to use the channel now broadcasts its address as a binary bit string,
starting with the high-order bit. All addresses are assumed to be the same length. The
bits in each address position from different stations are BOOLEAN ORed together. We
will call this protocol binary countdown. It is used in Datakit.
As soon as a station sees that a high-order bit position that is 0 in its address has been
overwritten with a 1, it gives up. For example, if stations 0010, 0100, 1001, and 1010 are all
trying to get the channel, in the first bit time the stations transmit 0,0,1, and 1,
respectively. Stations 0010 and 0100 see the 1 and know that a higher-numbered station
is competing for the channel, so they give up for the current round. Stations 1001 and
1010 continue.
The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up.
The winner is station 1010, because it has the highest address. After winning the bidding,
it may now transmit a frame, after which another bidding cycle starts.
The binary countdown protocol. A dash indicates silence.

Station    Bit time
address    0 1 2 3
0 0 1 0    0 - - -
0 1 0 0    0 - - -
1 0 0 1    1 0 0 -
1 0 1 0    1 0 1 0

Result: 1010
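The bidding process is easy to simulate (a sketch; `binary_countdown` and the 4-bit width are illustrative):

```python
def binary_countdown(addresses, width=4):
    """Each contender broadcasts its address MSB-first; the channel ORs
    the bits together. A station that sent a 0 but sees a 1 drops out."""
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):
        sent = [(a >> bit) & 1 for a in contenders]
        if any(sent):
            # stations whose current bit is 0 see the OR'd 1 and give up
            contenders = [a for a, s in zip(contenders, sent) if s]
    return contenders[0]

# The example from the text: stations 0010, 0100, 1001, 1010 contend.
winner = binary_countdown([0b0010, 0b0100, 0b1001, 0b1010])
assert format(winner, "04b") == "1010"  # the highest address always wins
```

Because the surviving station is always the one with the numerically largest address, the protocol implicitly gives higher-numbered stations priority.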
This chapter introduces the technologies employed in devices loosely referred to as bridges and
switches. Topics summarized here include general link layer device operations, local and remote
bridging, ATM switching, and LAN switching. Chapters in Part V, “Bridging and Switching,” address
specific technologies in more detail.
Bridges are capable of filtering frames based on any Layer 2 fields. For example, a bridge can be
programmed to reject (not forward) all frames sourced from a particular network. Because link layer
information often includes a reference to an upper-layer protocol, bridges usually can filter on this
parameter. Furthermore, filters can be helpful in dealing with unnecessary broadcast and multicast
packets.
By dividing large networks into self-contained units, bridges and switches provide several advantages.
Because only a certain percentage of traffic is forwarded, a bridge or switch diminishes the traffic
experienced by devices on all connected segments. The bridge or switch will act as a firewall for some
potentially damaging network errors and will accommodate communication between a larger number of
devices than would be supported on any single LAN connected to the bridge. Bridges and
switches also extend the effective length of a LAN, permitting the attachment of distant
stations that was not previously possible.
Although bridges and switches share most relevant attributes, several distinctions differentiate these
technologies. Bridges are generally used to segment a LAN into a couple of smaller segments. Switches
are generally used to segment a large LAN into many smaller segments. Bridges generally have only a
few ports for LAN connectivity, whereas switches generally have many. Small switches such as the
Cisco Catalyst 2924XL have 24 ports capable of creating 24 different network segments for a LAN.
Larger switches such as the Cisco Catalyst 6500 can have hundreds of ports. Switches can also be used
to connect LANs with different media—for example, a 10-Mbps Ethernet LAN and a 100-Mbps Ethernet
LAN can be connected using a switch. Some switches support cut-through switching, which reduces
latency and delays in the network, while bridges support only store-and-forward traffic switching.
Finally, switches reduce collisions on network segments because they provide dedicated bandwidth to
each network segment.
Types of Bridges
Bridges can be grouped into categories based on various product characteristics. Using one popular
classification scheme, bridges are either local or remote. Local bridges provide a direct connection
between multiple LAN segments in the same area. Remote bridges connect multiple LAN segments in
different areas, usually over telecommunications lines. Figure 4-1 illustrates these two configurations.
Figure 4-1 Local and Remote Bridges Connect LAN Segments in Specific Areas
[Figure: Local bridging: Ethernet segments in the same area joined directly by a single
bridge. Remote bridging: Token Ring segments in different areas joined by a pair of
bridges over a telecommunications line.]
Remote bridging presents several unique internetworking challenges, one of which is the difference
between LAN and WAN speeds. Although several fast WAN technologies now are establishing a
presence in geographically dispersed internetworks, LAN speeds are often much faster than WAN
speeds. Vast differences in LAN and WAN speeds can prevent users from running delay-sensitive LAN
applications over the WAN.
Remote bridges cannot improve WAN speeds, but they can compensate for speed discrepancies through
a sufficient buffering capability. If a LAN device capable of a 3-Mbps transmission rate wants to
communicate with a device on a remote LAN, the local bridge must regulate the 3-Mbps data stream so
that it does not overwhelm the 64-kbps serial link. This is done by storing the incoming data in onboard
buffers and sending it over the serial link at a rate that the serial link can accommodate. This buffering
can be achieved only for short bursts of data that do not overwhelm the bridge’s buffering capability.
The Institute of Electrical and Electronic Engineers (IEEE) differentiates the OSI link layer into two
separate sublayers: the Media Access Control (MAC) sublayer and the Logical Link Control (LLC)
sublayer. The MAC sublayer permits and orchestrates media access, such as contention and token
passing, while the LLC sublayer deals with framing, flow control, error control, and MAC sublayer
addressing.
Some bridges are MAC-layer bridges, which bridge between homogeneous networks (for example, IEEE
802.3 and IEEE 802.3), while other bridges can translate between different link layer protocols (for
example, IEEE 802.3 and IEEE 802.5). The basic mechanics of such a translation are shown in Figure
4-2.
Figure 4-2 A MAC-Layer Bridge Connects the IEEE 802.3 and IEEE 802.5 Networks
[Figure: Host A and Host B each run the full seven-layer stack (application, presentation,
session, transport, network, link, physical). The bridge implements only the link and
physical layers: an IEEE 802.3 MAC on one side and an IEEE 802.5 MAC on the other,
with a common LLC sublayer relaying the packet (PKT) between them.]
Figure 4-2 illustrates an IEEE 802.3 host (Host A) formulating a packet that contains application
information and encapsulating the packet in an IEEE 802.3-compatible frame for transit over the IEEE
802.3 medium to the bridge. At the bridge, the frame is stripped of its IEEE 802.3 header at the MAC
sublayer of the link layer and is subsequently passed up to the LLC sublayer for further processing. After
this processing, the packet is passed back down to an IEEE 802.5 implementation, which encapsulates
the packet in an IEEE 802.5 header for transmission on the IEEE 802.5 network to the IEEE 802.5 host
(Host B).
A bridge’s translation between networks of different types is never perfect because one network likely
will support certain frame fields and protocol functions not supported by the other network.
Types of Switches
Switches are data link layer devices that, like bridges, enable multiple physical LAN segments to be
interconnected into a single larger network. Similar to bridges, switches forward and flood traffic based
on MAC addresses. Any network device will create some latency. Switches can use different forwarding
techniques—two of these are store-and-forward switching and cut-through switching.
In store-and-forward switching, an entire frame must be received before it is forwarded. This means that
the latency through the switch is relative to the frame size—the larger the frame size, the longer the delay
through the switch. Cut-through switching allows the switch to begin forwarding the frame when enough
of the frame is received to make a forwarding decision. This reduces the latency through the switch.
Store-and-forward switching gives the switch the opportunity to evaluate the frame for errors before
forwarding it. This capability to not forward frames containing errors is one of the advantages of
switches over hubs. Cut-through switching does not offer this advantage, so the switch might forward
frames containing errors. Many types of switches exist, including ATM switches, LAN switches, and
various types of WAN switches.
ATM Switch
Asynchronous Transfer Mode (ATM) switches provide high-speed switching and scalable bandwidths in
the workgroup, the enterprise network backbone, and the wide area. ATM switches support voice, video,
and data applications, and are designed to switch fixed-size information units called cells, which are used
in ATM communications. Figure 4-3 illustrates an enterprise network comprised of multiple LANs
interconnected across an ATM backbone.
Figure 4-3 Multi-LAN Networks Can Use an ATM-Based Backbone When Switching Cells
[Figure: Engineering, R&D, Marketing, Sales, and Security LANs interconnected across
an ATM backbone.]
LAN Switch
LAN switches are used to interconnect multiple LAN segments. LAN switching provides dedicated,
collision-free communication between network devices, with support for multiple simultaneous
conversations. LAN switches are designed to switch data frames at high speeds. Figure 4-4 illustrates a
simple network in which a LAN switch interconnects a 10-Mbps and a 100-Mbps Ethernet LAN.
Figure 4-4 A LAN Switch Can Link 10-Mbps and 100-Mbps Ethernet Segments
[Figure: A LAN switch connecting a 10-Mbps Ethernet segment and a 100-Mbps
Ethernet segment.]
Review Questions
Q—At what layer of the OSI reference model do bridges and switches operate?
A—Bridges and switches are data communications devices that operate principally at Layer 2 of the OSI
reference model. As such, they are widely referred to as data link-layer devices.
Q—What is controlled at the link layer?
A—Bridging and switching occur at the link layer, which controls data flow, handles transmission errors,
provides physical (as opposed to logical) addressing, and manages access to the physical medium.
Q—Under one popular classification scheme what are bridges classified as?
A—Local or Remote: Local bridges provide a direct connection between multiple LAN segments in the
same area. Remote bridges connect multiple LAN segments in different areas, usually over
telecommunications lines.
Q—What is a switch?
A—Switches are data link-layer devices that, like bridges, enable multiple physical LAN segments to be
interconnected into a single larger network.