Computer Network UNIT 2


By: Nihal Kumar

UNIT -2

Framing:
To provide service to the network layer, the data link layer must use the service provided
to it by the physical layer. What the physical layer does is accept a raw bit stream and attempt to
deliver it to the destination. This bit stream is not guaranteed to be error free. The number of bits
received may be less than, equal to, or more than the number of bits transmitted, and they may
have different values. It is up to the data link layer to detect and, if necessary, correct errors. The
usual approach is for the data link layer to break the bit stream up into discrete frames and
compute the checksum for each frame. When a frame arrives at the destination, the checksum is
recomputed. If the newly computed checksum is different from the one contained in the frame,
the data link layer knows that an error has occurred and takes steps to deal with it (e.g.,
discarding the bad frame and possibly also sending back an error report).
Breaking the bit stream up into frames is more difficult than it at first appears. One way to
achieve this framing is to insert time gaps between frames, much like the spaces between words
in ordinary text. However, networks rarely make any guarantees about timing, so it is possible
these gaps might be squeezed out or other gaps might be inserted during transmission. Since it is
too risky to count on timing to mark the start and end of each frame, other methods have been
devised. We will look at three methods:

1. CHARACTER COUNT METHOD

2. STARTING AND ENDING CHARACTERS, WITH CHARACTER STUFFING

3. STARTING AND ENDING FLAGS, WITH BIT STUFFING

CHARACTER COUNT METHOD:

In this method a field in the header is used to specify the number of
CHARACTERS in the frame. When the data link layer at the destination sees the character
count, it knows how many characters follow and hence where the end of the frame is.
The trouble with this algorithm is that the count can be garbled by a transmission error,
in which case the destination gets out of synchronization and is unable to locate the
start of the next frame. There is no way of telling where the next frame starts. For this
reason this method is rarely used.

Character count

(a) Without errors (counts 5, 5, 8, 8):

  Frame 1: 5 1 2 3 4
  Frame 2: 5 6 7 8 9
  Frame 3: 8 0 1 2 3 4 5 6
  Frame 4: 8 7 8 9 0 1 2 3

(b) With one error (the count of frame 2 is garbled from 5 to 7):

  Frame 1: 5 1 2 3 4
  Frame 2 (wrong): 7 6 7 8 9 8 0
  ...the receiver now treats the next character as a count, so every later frame boundary is wrong.

A character stream: (a) without errors, (b) with one error.
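The failure mode in the figure can be reproduced with a short sketch (not part of the original notes; the frame values follow the figure above):

```python
# A minimal sketch of character-count framing: each frame starts with a
# count character giving the total frame length, including the count itself.

def split_frames(stream):
    """Split a character stream into frames using the count field."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]              # first character is the length
        if count <= 0:
            break                      # a zero count would stall the parser
        frames.append(stream[i:i + count])
        i += count                     # jump to the start of the next frame
    return frames

# Without errors: counts 5, 5, 8, 8 (as in the figure above).
good = [5,1,2,3,4, 5,6,7,8,9, 8,0,1,2,3,4,5,6, 8,7,8,9,0,1,2,3]
print(split_frames(good))   # four frames, as intended

# One corrupted count (5 -> 7): every later frame boundary is wrong.
bad = [5,1,2,3,4, 7,6,7,8,9, 8,0,1,2,3,4,5,6, 8,7,8,9,0,1,2,3]
print(split_frames(bad))    # receiver loses synchronization
```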

CHARACTER STUFFING METHOD:

In this method each frame starts with a FLAG and ends with a FLAG.

The starting flag is DLE STX ---- Data Link Escape, Start of Text.

The ending flag is DLE ETX ---- Data Link Escape, End of Text.

Ex 1. The given Data ABRFCXDGJHKK12435ASBGXRR


The Data will be sent
DLE STX ABRFCXDGJHKK12435ASBGXRR DLE ETX

Ex 2. The given Data ASHGTRDXZBNHG DLE STX %$#54378


The data will be sent as

DLE STX ASHGTRDXZBNHG DLE DLE STX %$#54378 DLE ETX

Disadvantages:
1. 24 bits are unnecessarily stuffed.

2. Transmission delay.
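Character stuffing can be sketched as follows (a hypothetical illustration; the DLE/STX/ETX byte values 0x10/0x02/0x03 are their usual ASCII control codes):

```python
# A sketch of DLE STX / DLE ETX character (byte) stuffing.
DLE, STX, ETX = b'\x10', b'\x02', b'\x03'

def stuff(data: bytes) -> bytes:
    """Frame the data, doubling any DLE that appears inside the payload."""
    return DLE + STX + data.replace(DLE, DLE + DLE) + DLE + ETX

def unstuff(frame: bytes) -> bytes:
    """Strip the flags and collapse doubled DLEs back to a single DLE."""
    body = frame[2:-2]                  # drop DLE STX ... DLE ETX
    return body.replace(DLE + DLE, DLE)

# The payload of Ex 2 above contains DLE STX, which must be escaped.
payload = b'ASHGTRDXZBNHG' + DLE + STX + b'%$#54378'
frame = stuff(payload)
assert unstuff(frame) == payload        # round-trips correctly
```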

BIT STUFFING METHOD

In this method every frame will start with a flag 01111110.

In the data, whenever FIVE consecutive ONEs occur, a ZERO is stuffed after them.
Ex. The given data is 0111100001111111010100111111001111101100

The data will be sent as

01111110 0111100001111101101010011111010011111001100

(a 0 has been stuffed after each run of five consecutive 1s)

Advantages:

1. Only one bit is stuffed.


2. No transmission delay
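The stuffing rule above can be sketched in code (an illustrative implementation operating on '0'/'1' strings):

```python
# A sketch of bit stuffing: insert a 0 after every run of five 1s.
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:              # five 1s seen: stuff a 0, reset the run
            out.append("0")
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1s."""
    out, ones, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            i += 1                 # the next bit is a stuffed 0: skip it
            ones = 0
        i += 1
    return "".join(out)

data = "0111100001111111010100111111001111101100"   # the example above
assert bit_unstuff(bit_stuff(data)) == data
print(FLAG + " " + bit_stuff(data))
```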

Error detection and Correction


When data units transfer from one device to another device, the data units can become
corrupted. Networks must be able to transfer data with complete accuracy. That is why a
reliable networking system must have a mechanism for detecting and correcting those
errors.

Type of errors

Errors are divided into two types:


1. Single-Bit Error.
2. Burst Error.

Single-Bit Error

Single-bit error means that only one bit of a given data unit (for example a byte, character,
frame or packet) is changed from one to zero or from zero to one.
Suppose that a sender transmits a group of 8 bits as an ASCII character. In the following example,
01010001 (ASCII Q) was sent but 01000001 (ASCII A) was received at the other end.
Single-bit errors are most likely to occur in parallel transmission, but are rare in serial
transmission. One reason is that one of the 8 wires may be noisy.
Suppose a sender transfers data at 1 Mbps. That means each bit lasts only 1/1,000,000
second, or 1 µs. If a noise burst lasts only 1 µs (normally noise lasts much longer than 1 µs),
it can affect only one bit.

Burst Error
A burst error means that two or more bits in the data unit have changed from one to zero
or from zero to one.

In the following example, 01011101 01000011 was sent but 01000100 01000011 was
received. Burst errors need not occur in consecutive bits. The length of a burst error is
measured from the first corrupted bit to the last corrupted bit; some bits in between may
not be corrupted. In this example, three bits have been changed from one to zero, but the
length of the burst error is 5.

Burst errors are most likely to occur in serial transmission, because data transfer in serial
transmission is at a slower speed. Suppose a sender transfers data at 1 Kbps. That means each
bit lasts 1/1000 second. If a noise burst lasts 1/100 second, it can affect 10 bits.

Given any two codewords W1 and W2, the number of bit positions in which the codewords
differ is called the Hamming distance, d(W1, W2), between them. To determine
how many bits differ, just EXCLUSIVE OR the two codewords and count the number of 1
bits in the result:
d(W1, W2) = number of 1 bits in W1 ⊕ W2
The significance is that if two codewords are a Hamming distance d apart, it will require d
single-bit errors to convert one into the other.
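This XOR-and-count rule is one line of code:

```python
# Hamming distance between two codewords via XOR, as described above.
def hamming_distance(w1: int, w2: int) -> int:
    """XOR the codewords and count the 1 bits in the result."""
    return bin(w1 ^ w2).count("1")

# 10011010 and 11011010 differ in exactly one bit position.
print(hamming_distance(0b10011010, 0b11011010))  # -> 1
```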

The error-correcting and error-detecting methods are

1. PARITY METHOD
2. LRC METHOD (Longitudinal redundancy check)
3. CRC METHOD (Cyclic redundancy check)
4. HAMMING CODE METHOD
PARITY METHOD

- appends a parity bit to the end of each word in the frame

- Even parity is used for asynchronous Transmission

- Odd parity is used for synchronous Transmission

Ex.
    Character code    Even parity    Odd parity
1.  1100100           1100100 1      1100100 0
2.  0011000           0011000 0      0011000 1

If one bit or any odd number of bits is erroneously inverted during transmission, the receiver
will detect an error. However, if two or any even number of bits are inverted, an undetected
error occurs.

Ex 3. The Transmitted data is 10011010. The received data is 11011010.


Let both the transmitter and receiver agree on EVEN parity.
Now an error will be detected, since the number of ones received is ODD.

4. The transmitted data is 10011010. The received data is 01011010.


The received data is wrong even though the number of ones is EVEN.
Since two bits are inverted, the error can't be detected.
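The four examples above can be checked with a small sketch (illustrative only):

```python
# A sketch of even/odd parity generation and checking for 7-bit characters.
def parity_bit(bits: str, even: bool = True) -> str:
    """Parity bit appended so the total count of 1s is even (or odd)."""
    ones = bits.count("1")
    return str(ones % 2) if even else str((ones + 1) % 2)

# Matches Ex 1 and 2 above.
assert parity_bit("1100100") == "1" and parity_bit("1100100", even=False) == "0"
assert parity_bit("0011000") == "0" and parity_bit("0011000", even=False) == "1"

def check_even(word_with_parity: str) -> bool:
    """Receiver's check: accept only if the total number of 1s is even."""
    return word_with_parity.count("1") % 2 == 0

assert check_even("11011010") is False  # Ex 3: odd count of 1s, error detected
assert check_even("01011010") is True   # Ex 4: two flipped bits slip through
```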

Longitudinal Redundancy Check(LRC)

The frame is viewed as a block of characters arranged in two dimensions. To each
character a parity bit is appended. In addition, a parity bit is generated for each bit position
across all characters, i.e., an additional character is generated in which the ith bit is the
parity bit for the ith bit of all other characters in the block. This can be expressed
mathematically using the exclusive-OR (⊕) operation. The parity bit at the end of
each character is the row parity:

Rj = b1j ⊕ b2j ⊕ ... ⊕ bnj

where Rj = parity bit of the jth character
bij = ith bit in the jth character
n = number of bits per character

This equation generates even parity.

Ci = bi1 ⊕ bi2 ⊕ ... ⊕ bim

where Ci = ith bit of the parity check character

m = number of characters in a frame
In this format the parity bits at the end of each character are referred to as

the Vertical Redundancy Check (VRC), and the parity check character is referred to as
the Longitudinal Redundancy Check (LRC).

                   bit 1   bit 2   ...   bit n   Parity bit (VRC)
Character 1        b11     b21     ...   bn1     R1
Character 2        b12     b22     ...   bn2     R2
...
Character m        b1m     b2m     ...   bnm     Rm
Parity check
character (LRC)    c1      c2      ...   cn      cn+1
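Under even parity, the LRC character is simply the bitwise XOR of all characters in the block. A sketch with four hypothetical 8-bit characters:

```python
# A sketch of LRC computation: column-wise even parity = XOR of all rows.
from functools import reduce

def lrc(chars):
    """LRC character of a block of equal-width characters (even parity)."""
    return reduce(lambda a, b: a ^ b, chars)

block = [0b10110111, 0b11010111, 0b00111010, 0b11110000]
print(format(lrc(block), "08b"))   # column-wise XOR of the four characters
```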

CRC Method
1. The frame is expressed in the form of a polynomial F(x).
2. Both the sender and receiver agree upon a generator polynomial G(x) in
advance.
3. Let 'r' be the degree of G(x). Append 'r' zero bits to the low-order end of
the frame, so it now contains m+r bits.
4. Divide this bit string by G(x) using mod-2 division.
5. Transmitted frame T(x) = frame + remainder.
6. Divide T(x) by G(x) at the receiver end. If the remainder is zero, the frame
was transmitted correctly.

Ex. Frame: 1101011011
Generator: 10011
Message after appending 4 zero bits: 11010110110000

Dividing 11010110110000 by 10011 (mod-2) gives quotient 1100001010 and
remainder 1110.

Transmitted frame: 11010110111110

At the receiver, dividing 11010110111110 by 10011 (mod-2) leaves remainder 0000.
Since the remainder is zero, there is no error in the transmitted frame.
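The mod-2 division can be sketched as follows, using the frame and generator from the example (the helper name `mod2_rem` is illustrative):

```python
# A sketch of the mod-2 (XOR) division used in CRC, on '0'/'1' strings.
def mod2_rem(dividend: str, divisor: str) -> str:
    """Remainder of mod-2 (XOR) division of one bit string by another."""
    bits = list(dividend)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == "1":                         # XOR the divisor in here
            for j, g in enumerate(divisor):
                bits[i + j] = "0" if bits[i + j] == g else "1"
    return "".join(bits[-(len(divisor) - 1):])     # last r bits

frame, gen = "1101011011", "10011"
rem = mod2_rem(frame + "0" * (len(gen) - 1), gen)  # append r = 4 zero bits
transmitted = frame + rem
assert transmitted == "11010110111110"             # as in the example above
# At the receiver, dividing T(x) by G(x) leaves a zero remainder.
assert mod2_rem(transmitted, gen) == "0000"
```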

HAMMING CODES

Hamming codes provide another method for error correction. Error bits, called Hamming
bits, are inserted into message bits at random locations. It is believed that the
randomness of their locations reduces the odds that these Hamming bits themselves
would be in error. This is based on a mathematical assumption that because there are so
many more message bits compared with Hamming bits, there is a greater chance for a
message bit to be in error than for a Hamming bit to be wrong. Determining the
placement and binary value of the Hamming bits can be implemented using hardware,
but it is often more practical to implement them using software. The number of bits in a
message (M) are counted and used to solve the following equation to determine the
number of Hamming bits (H) to be used:
2^H ≥ M + H + 1
Once the number of Hamming bits is determined, the actual placement of the bits into the
message is performed. It is important to note that despite the random nature of the
Hamming bit placements, the exact same placements must be known and used by both
the transmitter and receiver. Once the Hamming bits are inserted into their positions, the
numerical values of the bit positions of the logic 1 bits in the original message are listed.
The equivalent binary numbers of these values are added in the same manner as used in
previous error methods by discarding all carry results. The sum produced is used as the
states of the Hamming bits in the message. The numerical difference between the
Hamming values transmitted and that produced at the receiver indicates the bit position
that contains a bad bit, which is then inverted to correct it.
Ex. The given data
10010001100101(14- bits)
The number of Hamming bits:
2^H ≥ M + H + 1
H = ?, M = 14. To satisfy this equation H should be 5, i.e. 5 Hamming code
bits should be incorporated in the data bits.
1 0 0 1 0 0 0 1 1 0 H 0 H 1 H 0 H 1 H
Now count the positions (numbered from the right, rightmost = 1) where binary 1s are
present. Add them using mod-2 addition (EX-OR). The result gives the Hamming code at the
transmitter end.

1’s position Binary equivalent



2 - 0 0 0 1 0
6 - 0 0 1 1 0
11 - 0 1 0 1 1
12 - 0 1 1 0 0
16 - 1 0 0 0 0
19 - 1 0 0 1 1
Hamming code = 0 0 0 0 0

This Hamming code will be incorporated at the places of ‘H’ in the data bits and the data
will be transmitted.
How do we find out that there is an error in the data?
Suppose the receiver receives the 12th bit as zero. The receiver computes the Hamming
code in the same way as the transmitter.

1’s position Binary equivalent


2 - 0 0 0 1 0
6 - 0 0 1 1 0
11 - 0 1 0 1 1
16 - 1 0 0 0 0
19 - 1 0 0 1 1
Hamming code at the receiver = 0 1 1 0 0

Hamming code at the Tx:  0 0 0 0 0
Hamming code at the Rx:  0 1 1 0 0
EX-OR of the two:        0 1 1 0 0

The decimal equivalent of this binary value is 12, so the error occurred at the 12th place.
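The scheme of the example, XOR of the right-to-left positions of the 1 bits, can be sketched as follows (the 19-bit word below is the example data with the five H bits set to 0):

```python
# A sketch of the Hamming scheme used above: XOR the bit-position numbers
# (position 1 = rightmost) that hold a 1.
from functools import reduce

def hamming_value(word: str) -> int:
    n = len(word)
    return reduce(lambda a, b: a ^ b,
                  (n - i for i, b in enumerate(word) if b == "1"), 0)

# The 19-bit transmitted word from the example, H bits set to 0:
tx = "1001000110000100010"      # 1s at positions 2, 6, 11, 12, 16, 19
assert hamming_value(tx) == 0b00000          # Hamming code at the Tx

# The receiver sees bit position 12 (from the right) flipped to 0:
rx = tx[:19 - 12] + "0" + tx[19 - 12 + 1:]
assert hamming_value(rx) == 12               # points at the bad bit position
```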

FLOW CONTROL AND ERROR CONTROL


Automatic Repeat Request
Data link controls (DLCs) generally use the concept of Automatic Repeat reQuest
(ARQ). At the receiving end, the DLC module detects erroneous frames and then sends a
request to the transmitting DLC module to retransmit the incorrect frames.
Retransmission of data is necessary in three cases: damaged frame, lost frame and
lost acknowledgement.

Damaged Frame
A recognizable frame does arrive, but some of the bits have been altered during
transmission. That means receiving frame has some error.

Lost Frames
A frame fails to arrive at the other side. For example, a noise burst may damage a frame
to the extent that the receiver is not aware that a frame has been transmitted.

Lost acknowledgement
An acknowledgement fails to arrive at the source. The sender is not aware that an
acknowledgement has been transmitted from the receiver.

The purpose of ARQ is to turn an unreliable data transmission into a reliable one. There are
two categories of ARQ:
1. Stop-and-wait ARQ
2. Continuous ARQ

Stop-and-wait ARQ
Stop-and-wait ARQ is based on the stop-and-wait flow control technique. The stop-and-wait
process allows the transmitting DLC station to transmit a block of data and then wait for
the acknowledgement (ACK) signal from the receiving station, which indicates correct
reception. No other data frames can be sent until the receiver's reply (ACK) arrives at the
source station. There is a chance that a frame that arrives at the destination (receiver) is
damaged. The receiver detects this error using an error detection technique. If the
receiver detects an error, it simply discards the corrupted frame and sends a negative
acknowledgement (NAK). The sender waits for the acknowledgement, and the message is then
retransmitted. This type of protocol is most often found in simplex and half-duplex
communication.
The ACK/NAK frame is a small feedback frame from receiver to sender. Little dummy
frames are used to carry the ACK/NAK response.
The acknowledgement can also be attached to an outgoing data frame from receiver to sender
(i.e. full-duplex operation) using an ack field in the header of the frames in the opposite
direction. The technique of temporarily delaying outgoing acknowledgements so that
they can be hooked onto the next outgoing data frame is known as piggybacking. In
effect, the acknowledgement gets a free ride on the next outgoing data frame.
The advantages of piggybacking are

• Better use of available channel bandwidth.


• Less traffic due to absence of dummy frames.
• Less frame arrival interrupts to DLC software/hardware.
• Perhaps fewer buffer requirements in the receiver.

Continuous ARQ
In continuous ARQ, frames are transmitted continuously. The transmitter contains buffer
storage, and both transmitter and receiver monitor frame counts. If the receiver detects an
erroneous frame, a NAK is sent to the transmitter with the defective frame number N.
When the transmitter gets the NAK message, it can retransmit in either of the
following ways:

1. Go-Back-N
2. Selective Repeat

Go-Back-N
In Go-Back-N ARQ the transmitter retransmits all frames starting from N. That means
whenever the transmitter receives a NAK message, it simply goes back to frame N and
resumes transmission from there. Every time a NAK is received, the transmitter
repositions the frame pointer back to frame N. The number of frames which must be
retransmitted is at least one and often more, as all the frames from the erroneous frame
onward are retransmitted by the sender. The receiver simply discards all subsequent frames,
sending no acknowledgements for the discarded frames. Go-Back-N ARQ is the most
widely used type of ARQ protocol.

Selective Repeat
In Selective Repeat ARQ the transmitter retransmits only the defective frame N, not
the subsequent frames. The number of frames which must be retransmitted is always
one: the frame containing the error. The receiver buffers all correct frames
following the erroneous frame. When the transmitter receives a NAK message, it just
retransmits the defective frame, not all its successors. If the second try succeeds, the
receiver rearranges the frames in sequence. Selective repeat is more efficient than Go-
Back-N if fewer errors occur, but this approach can require a large amount of
buffer memory. Both require the use of a full-duplex link.
In comparison with the stop-and-wait protocol, link efficiency is improved overall by both
implementations because line idle and line turnaround times are eliminated.
Sender:    0   1   2   3   4   5   6   7   |  3   4   5   6   7 ...
ACKs:      ACK0 ACK1 ACK2 NAK3                ACK3 ...
Receiver:  0   1   2   E   D   D   D   D      3   4 ...
           (E = frame 3 in error; D = subsequent frames discarded)

Go-Back-N strategy

Sender:    0   1   2   3   4   5   6   3   7   8   9   10   11 ...
ACKs:      ACK0 ACK1 ACK2 NAK3 ACK4 ACK5 ACK6 ACK3 ACK7 ...
Receiver:  0   1   2   E   4   5   6   3   7   8 ...
           (E = frame 3 in error; frames 4-6 buffered until 3 arrives)

Selective repeat strategy

Sliding Window Protocols


When the channel is not error-free and/or the receiver has limited buffer space, the flow of
frames from sender to receiver must be controlled to prevent buffer overflow and/or
duplication of frames at the receiver.
A data link between a sender A and a receiver B is said to be window flow controlled if
there is an upper bound on the number of frames that can be transmitted by A but not yet
acknowledged by B. This upper bound (a positive integer) is called the window size,
or simply the window. The number of outstanding (i.e. unacknowledged) frames at any
instant should not exceed the window size. Acknowledgements are either contained in
special frames or are piggybacked on regular data frames in the reverse direction. The
sliding window technique follows either Go-Back-N or selective repeat.

The semantics of an n-bit sliding window protocol:


• Each transmitted frame contains a sequence number between 0 and 2^n − 1. For the
stop-and-wait sliding window protocol, n = 1, so only two sequence numbers (i.e.
0 and 1) are possible. A common value of n is 3, which allows eight frames to be
transmitted without any acknowledgement.
• At any point of time, the sender or receiver maintains a list of consecutive
sequence numbers corresponding to frames it is permitted to send or receive.
• Initially, no frames are outstanding, so the lower and upper edges of the sender's
window are equal.
• The frames are said to fall within the opening of the sending window. The
receiver also maintains a receiving window corresponding to the set of frames it is
permitted to accept. The sending and receiving window sizes need not be identical.
• The open portion of the sending window represents frames transmitted but not yet
acknowledged.
• When a new frame is to be sent, it is assigned the next highest sequence number and the
upper edge of the window is advanced by one, provided the window is not fully
closed.
• When an acknowledgement comes in, the lower edge of the window is advanced
by one. In this way the window continuously maintains a list of unacknowledged
frames.
• The open portion of the receiving window corresponds to the frames that the
receiver is waiting to accept. When a frame is received whose sequence number is
equal to the lower edge of the open portion, its acknowledgement is generated and the
window is rotated by one.
• The sender window always remains at its initial size. Since any frame within the
open portion of the sending window may have to be retransmitted, the sender must keep
all these frames in its buffer.
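The sender-side bookkeeping described above can be sketched as follows (an illustrative model assuming cumulative acknowledgements; the class name is hypothetical):

```python
# A sketch of sender-side window bookkeeping with n-bit sequence numbers.
from collections import deque

class SlidingWindowSender:
    def __init__(self, n_bits: int, window: int):
        self.mod = 2 ** n_bits          # sequence numbers 0 .. 2^n - 1
        self.window = window            # max outstanding (unacked) frames
        self.next_seq = 0               # upper edge of the window
        self.outstanding = deque()      # unacked frames kept for retransmission

    def can_send(self) -> bool:
        return len(self.outstanding) < self.window

    def send(self, frame):
        assert self.can_send(), "window full: must wait for an ACK"
        seq = self.next_seq
        self.outstanding.append((seq, frame))
        self.next_seq = (self.next_seq + 1) % self.mod   # advance upper edge
        return seq

    def ack(self, seq: int):
        """Cumulative ACK: advance the lower edge past every frame <= seq."""
        while self.outstanding and self.outstanding[0][0] != (seq + 1) % self.mod:
            self.outstanding.popleft()

s = SlidingWindowSender(n_bits=3, window=7)
for i in range(7):
    s.send(f"frame {i}")
assert not s.can_send()       # 7 outstanding frames: window fully closed
s.ack(2)                      # frames 0..2 acknowledged: lower edge advances
assert s.can_send()
```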

In the example, the window size is 1 with a 2-bit sequence number. The corresponding window
operations are shown in the following figure.
• Initially the lower and upper edges of the sender's window are equal. The
receiving window is open to accept the frame numbered 0.
• The sending window is open and has transmitted frame 0 to the receiver. The
receiving window is still open to receive frame 0.
• After frame 0 is received, the receiving window is rotated by one to be ready to
receive the next frame. The acknowledgement is issued by the receiver before the
window is rotated. The sending window is open to receive the acknowledgement of frame 0.
• The number of the acknowledgement frame indicates the number of the last frame
received correctly. If this number matches the number of the outstanding frame
in the sender's buffer, the frame is taken to be received properly by the receiver and
the sender takes up the next frame for further transmission. If there is a mismatch, the
sender retransmits the frame from the buffer.

[Figure: sender and receiver windows drawn as circles over sequence numbers 0-3.
(a) Initial setting: sender window open to send frame 0; receiver window open to receive frame 0.
(b) After frame 0 is sent: sender window has sent frame 0; receiver window still open to receive frame 0.
(c) After frame 0 is received and ACK 0 is sent: sender window open to receive ACK 0; receiver window rotated to accept frame 1.
(d) After ACK 0 is received: sender window open to send frame 1; receiver window open to receive frame 1.]

An example of a sliding window protocol. Window size is 1 with a 2-bit sequence number.

MEDIUM ACCESS CONTROL SUBLAYER (MAC)

Networks can be categorized in two ways:

a) Point-to-point b) Broadcast channel

- In a broadcast network, the key issue is how to share the channel among
several users. Ex: a conference call with five people.
- Broadcast channels are also called multi-access channels or random access
channels.
- Multi-access channels belong to a sublayer of the data link layer called the MAC sublayer.
The Channel Allocation problem:

a) Static channel allocation in LANs & MANs


i) FDM ii) TDM

Drawbacks: 1) Channel capacity is wasted if one or more stations do not send data.
2) If the number of users increases, this will not scale.

b) Dynamic channel allocation

i) Pure ALOHA & Slotted ALOHA

ii) CSMA (CSMA/CD and CSMA/CA)

Pure ALOHA
-In the 1970s, Norman Abramson and his colleagues devised this method, using ground-based
radio broadcasting. This is called the ALOHA system.
-The basic idea: many users compete for the use of a single shared channel.
-There are two versions of ALOHA: pure and slotted.
-Pure ALOHA does not require global time synchronization, whereas in slotted ALOHA
the time is divided into discrete slots into which all frames must fit.
-Users transmit whenever they have data to be sent.
-There will be collisions, and all collided frames will be damaged.
-Senders learn through the feedback property of broadcasting whether the frame was destroyed,
by listening to the channel.
[-With a LAN the feedback is immediate; with a satellite, it takes 270 msec.]
-If the frame was destroyed, the sender waits a random amount of time and sends
the frame again.
-The waiting time must be random; otherwise the same frames will collide over and over.

[Figure: frames from several users plotted against time.]

Frames are transmitted at completely arbitrary times


-Whenever two frames try to occupy the channel at the same time, there will be a collision
and both will be destroyed.
-What is the efficiency of an ALOHA channel?
-Let us consider an infinite collection of interactive users sitting at their stations.
-A user is always in one of two states: typing or waiting.
-Let the 'frame time' denote the time required to transmit one fixed-length frame.
-Assume that the infinite population of users generates new frames according to a
Poisson distribution with mean N frames per frame time.
-If N > 1, users are generating frames at a higher rate than the channel can handle.
-For reasonable throughput, 0 < N < 1.
-In addition to new frames, the stations also generate retransmissions of frames.
-Old and new frames together total G per frame time, so G ≥ N.
-At low load there will be few collisions, so G ≈ N.
-Under all loads, the throughput S = G·P0, where P0 is the probability that a frame does not
suffer a collision.
-A frame will not suffer a collision if no other frames are sent within one frame time of its
start.
-Let 't' be the time required to send a frame.
-If any other user has generated a frame between time t0 and t0+t, the end of that frame
will collide with the beginning of the shaded frame.
-Similarly, any other frame started between t0+t and t0+2t will bump into the end of the
shaded frame.
-The probability that 'k' frames are generated during a given frame time is given by the
Poisson distribution:
Pr[k] = (G^k e^-G) / k!
-The probability of zero frames is just e^-G.
-In an interval two frame times long, the mean number of frames generated is 2G.
-The probability of no other traffic being initiated during the entire vulnerable period is
given by
P0 = e^-2G
so S = G e^-2G [since S = G·P0]
-The maximum throughput occurs at G = 0.5, with S = 1/(2e) ≈ 0.184.
-The channel utilization of pure ALOHA is therefore at best about 18%.
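The two throughput formulas can be checked numerically (a small illustration):

```python
# Throughput S = G*e^(-2G) for pure ALOHA and S = G*e^(-G) for slotted ALOHA,
# evaluated at the offered loads where the maxima quoted above occur.
import math

def pure_aloha(G: float) -> float:
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))     # 1/(2e) ~ 0.184 at G = 0.5
print(round(slotted_aloha(1.0), 3))  # 1/e   ~ 0.368 at G = 1.0
```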

[Figure: the vulnerable period for the shaded frame is the interval from t0 to t0+2t. A frame
started in (t0, t0+t) collides with the start of the shaded frame; one started in (t0+t, t0+2t)
collides with its end.]

Vulnerable period for the shaded frame


[Figure: throughput S (per frame time) versus offered traffic G (attempts per packet time).
Slotted ALOHA, S = G e^-G, peaks at 0.368 when G = 1.0; pure ALOHA, S = G e^-2G,
peaks at 0.184 when G = 0.5.]

Throughput versus offered traffic for ALOHA systems

Slotted ALOHA
-In 1972, Roberts devised a method for doubling the capacity of an ALOHA system.
-In this system the time is divided into discrete intervals, each interval corresponding to
one frame.

-One way to achieve synchronization would be to have one special station emit a pip at
the start of each interval, like a clock.
-In Roberts' method, which has come to be known as slotted ALOHA, in contrast to
Abramson's pure ALOHA, a computer is not permitted to send whenever a carriage return
is typed.
-Instead, it is required to wait for the beginning of the next slot.
-Thus the continuous pure ALOHA is turned into a discrete one.
-Since the vulnerable period is now halved, the probability of no other traffic during the same
slot as our test frame is e^-G, which leads to
S = G e^-G
-At G = 1, slotted ALOHA has its maximum throughput: S = 1/e, or about 0.368, twice
that of pure ALOHA.
-The channel utilization is 37% in slotted ALOHA.
Carrier Sense Multiple Access Protocols

Protocols in which stations listen for a carrier (i.e., a transmission) and act accordingly are
called carrier sense protocols.
Persistent CSMA
When a station has data to send, it first listens to the channel to see if anyone else is
transmitting at that moment. If the channel is busy, the station waits until it becomes idle.
When the station detects an idle channel, it transmits a frame. If a collision occurs, the
station waits a random amount of time and starts all over again. The protocol is called
1-persistent because the station transmits with a probability of 1 when it finds the
channel idle.
The propagation delay has an important effect on the performance of the protocol: the
longer the propagation delay, the worse the performance.
Even if the propagation delay is zero, there will be collisions. If two stations find the
channel idle at the same time, both will send a frame and there will be a collision.

Non persistent CSMA


In this scheme, before sending, a station senses the channel. If no one else is sending, the
station begins doing so itself. However, if the channel is busy, the station does not
continually sense it; instead it waits a random amount of time and repeats the process.
This algorithm leads to better channel utilization but longer delays than
1-persistent CSMA.

With persistent CSMA, what happens if two stations become active while a third station is
busy? Both wait for the active station to finish, then simultaneously launch a packet,
resulting in a collision. There are two ways to handle this problem:
a) p-persistent CSMA b) exponential backoff

P-persistent CSMA

The first technique is for a waiting station not to launch a packet immediately when the
channel becomes idle, but first toss a coin, and send a packet only if the coin comes up
heads. If the coin comes up tails, the station waits for some time (one slot for slotted
CSMA), then repeats the process. The idea is that if two stations are both waiting for the
medium, this reduces the chance of a collision from 100% to 25%. A simple
generalization of the scheme is to use a biased coin, so that the probability of sending a
packet when the medium becomes idle is not 0.5 but p, where 0 < p < 1. We call such a
scheme p-persistent CSMA. The original scheme, where p = 1, is thus called 1-persistent
CSMA.

Exponential backoff

The key idea is that each station, after transmitting a packet, checks whether the packet
transmission was successful. Successful transmission is indicated either by an explicit
acknowledgement from the receiver or the absence of a signal from a collision detection
circuit. If the transmission is successful, the station is done. Otherwise, the station
retransmits the packet, simultaneously realizing that at least one other station is also
contending for the medium. To prevent its retransmission from colliding with the other
station’s retransmission, each station backs off (that is, idles) for a random time chosen
from the interval

[0, 2*max_propagation_delay] before retransmitting its packet. If the retransmission also
fails, then the station backs off for a random time in the interval
[0, 4*max_propagation_delay] and tries again. Each subsequent collision doubles the backoff
interval length until the retransmission finally succeeds. On a successful transmission,
the backoff interval is reset to the initial value. We call this type of backoff exponential
backoff.
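The doubling rule can be sketched as follows (the `max_prop_delay` value is a placeholder, not a figure from the notes):

```python
# A sketch of binary exponential backoff: the backoff interval doubles on
# each successive collision and resets after a successful transmission.
import random

def backoff_delay(attempt: int, max_prop_delay: float = 0.01) -> float:
    """Random idle time before the attempt-th retransmission (attempt >= 1)."""
    upper = (2 ** attempt) * max_prop_delay   # 2*d, 4*d, 8*d, ...
    return random.uniform(0.0, upper)

# First retry draws from [0, 2*d], second from [0, 4*d], and so on.
for attempt in range(1, 4):
    print(f"retry {attempt}: delay <= {(2 ** attempt) * 0.01:.2f}s")
```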
CSMA/CA
In many wireless LANS, unlike wired LANS, the station has no idea whether the packet
collided with another packet or not until it receives an acknowledgement from receiver. In
this situation, collisions have a greater effect on performance than with CSMA/CD, where
colliding packets can be quickly detected and aborted. Thus, it makes sense to try to
avoid collisions, if possible. CSMA/CA is basically p-persistence, with the twist that when
the medium becomes idle, a station must wait for a time called the interframe spacing or
IFS before contending for a slot. A station gets a higher priority if it is allocated smaller
inter frame spacing.
When a station wants to transmit data, it first checks if the medium is busy. If it is, it
continuously senses the medium, waiting for it to become idle. When the medium
becomes idle, the station first waits for an interframe spacing corresponding to its priority
level, then sets a contention timer to a time interval randomly selected in the range
[0, CW], where CW is a predefined contention window length. When this timer expires, it
transmits a packet and waits for the receiver to send an ack. If no ack is received, the
packet is assumed lost to collision, and the source tries again, choosing a contention
timer at random from an interval twice as long as the one before (binary exponential
backoff). If the station senses that another station has begun transmission while it was
waiting for the expiration of the contention timer, it does not reset its timer, but merely
freezes it, and restarts the countdown when the packet completes transmission. In this
way, stations that happen to choose a longer timer value get higher priority in the next
round of contention.
Collision-Free Protocols
A Bit-Map Protocol
In the basic bit-map method, each contention period consists of exactly N slots. If station
0 has a frame to send, it transmits a 1 bit during the zeroth slot. No other station is
allowed to transmit during this slot. Regardless of what station 0 does, station 1 gets the

opportunity to transmit a 1 during slot 1, but only if it has a frame queued. In general,
station j may announce that it has a frame to send by inserting a 1 bit into slot j.
After all N slots have passed by, each station has complete knowledge of which stations
wish to transmit.

[Figure: two 8-slot contention periods. In the first, stations 1, 3 and 7 insert a 1 bit in their
slots and then send their frames in order (1, 3, 7); in the second, stations 1 and 5 set their
bits and then send frames 1 and 5.]

The basic bit-map protocol

Since everyone agrees on who goes next, there will never be any collisions. After the last
ready station has transmitted its frame, an event all stations can easily monitor, another
N-bit contention period is begun. If a station becomes ready just after its bit slot has passed
by, it is out of luck and must remain silent until every station has had a chance and the bit
map has come around again. Protocols like this in which the desire to transmit is
broadcast before the actual transmission are called reservation protocols.
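One contention period of this protocol can be sketched as a toy model (the station numbers
mirror the first period in the figure; a real implementation works on a shared medium, one
bit slot at a time, rather than on a Python list):

```python
def bitmap_round(ready, n_stations):
    """One contention period of the basic bit-map protocol.

    Each station j with a queued frame sets bit j during slot j.  After
    all N slots, every station knows who wants to transmit, so frames are
    sent in station-number order with no possibility of collision.
    """
    slots = [1 if j in ready else 0 for j in range(n_stations)]
    order = [j for j, bit in enumerate(slots) if bit == 1]
    return slots, order

slots, order = bitmap_round(ready={1, 3, 7}, n_stations=8)
print(slots)   # [0, 1, 0, 1, 0, 0, 0, 1]
print(order)   # frames go out in the order [1, 3, 7]
```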
Binary Countdown
A problem with the basic bit-map protocol is that the overhead is 1 bit per station, so it
does not scale well to networks with large numbers of stations. We can do better by using
binary station addresses. A station wanting to use the channel now broadcasts its address
as a binary bit string, starting with the high-order bit. All addresses are assumed to be
the same length. The bits in each address position from different stations are Boolean
ORed together. We will call this protocol binary countdown. It is used in Datakit.
As soon as a station sees that a high-order bit position that is 0 in its address has been
overwritten with a 1, it gives up. For example, if stations 0010, 0100, 1001, and 1010 are
all trying to get the channel, in the first bit time the stations transmit 0, 0, 1, and 1,
respectively. Stations 0010 and 0100 see the 1 and know that a higher-numbered station
is competing for the channel, so they give up for the current round. Stations 1001 and
1010 continue.
The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up.
The winner is station 1010, because it has the highest address. After winning the bidding,
it may now transmit a frame, after which another bidding cycle starts.
The binary countdown protocol. A dash indicates silence.

    Bit time      0 1 2 3
    Station 0010: 0 - - -
    Station 0100: 0 - - -
    Station 1001: 1 0 0 -
    Station 1010: 1 0 1 0
    Result: 1010

(Stations 0010 and 0100 see the 1 in bit time 0 and give up; station 1001 sees the 1 in
bit time 2 and gives up.)
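The arbitration can be sketched as a bitwise simulation (a simplified model; on a real
channel the OR happens electrically, one bit time at a time):

```python
def binary_countdown(stations, width=4):
    """Resolve contention by ORing address bits, high-order bit first.

    A station drops out as soon as a 0 bit in its address is overwritten
    by a 1 on the channel, so the highest address always wins.
    """
    active = list(stations)
    for bit in range(width - 1, -1, -1):      # high-order bit first
        channel = 0
        for s in active:
            channel |= (s >> bit) & 1         # wired-OR of this bit position
        if channel == 1:
            active = [s for s in active if (s >> bit) & 1]
    return active[0]

winner = binary_countdown([0b0010, 0b0100, 0b1001, 0b1010])
print(f"{winner:04b}")  # 1010
```

Note that the winner is always simply the highest address among the contenders; the value
of the protocol is that this maximum is computed on the shared channel itself, in only
`width` bit times.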

IEEE Standard 802 for LANs and MANs


The IEEE 802.3 standard is for a 1-persistent CSMA/CD LAN. Xerox built a 2.94-Mbps
CSMA/CD system to connect over 100 personal workstations on a 1-km cable. This system
was called Ethernet, after the luminiferous ether, through which electromagnetic
radiation was once thought to propagate. Xerox, DEC, and Intel then drew up a standard
for a 10-Mbps Ethernet. The 802.3 standard differs from it in that it describes a whole
family of 1-persistent CSMA/CD systems running at speeds from 1 to 10 Mbps on various
media. The two also differ in one header field (the 802.3 length field is used for the
packet type in Ethernet).
Bridging and Switching Basics

This chapter introduces the technologies employed in devices loosely referred to as bridges and
switches. Topics summarized here include general link layer device operations, local and remote
bridging, ATM switching, and LAN switching. Chapters in Part V, “Bridging and Switching,” address
specific technologies in more detail.

What Are Bridges and Switches?


Bridges and switches are data communications devices that operate principally at Layer 2 of the OSI
reference model. As such, they are widely referred to as data link layer devices.
Bridges became commercially available in the early 1980s. At the time of their introduction, bridges
connected and enabled packet forwarding between homogeneous networks. More recently, bridging
between different networks has also been defined and standardized.
Several kinds of bridging have proven important as internetworking devices. Transparent bridging is
found primarily in Ethernet environments, while source-route bridging occurs primarily in Token Ring
environments. Translational bridging provides translation between the formats and transit principles of
different media types (usually Ethernet and Token Ring). Finally, source-route transparent bridging
combines the algorithms of transparent bridging and source-route bridging to enable communication in
mixed Ethernet/Token Ring environments.
Today, switching technology has emerged as the evolutionary heir to bridging-based internetworking
solutions. Switching implementations now dominate applications in which bridging technologies were
implemented in prior network designs. Superior throughput performance, higher port density, lower
per-port cost, and greater flexibility have contributed to the emergence of switches as replacement
technology for bridges and as complements to routing technology.

Link Layer Device Overview


Bridging and switching occur at the link layer, which controls data flow, handles transmission errors,
provides physical (as opposed to logical) addressing, and manages access to the physical medium.
Bridges provide these functions by using various link layer protocols that dictate specific flow control,
error handling, addressing, and media-access algorithms. Examples of popular link layer protocols
include Ethernet, Token Ring, and FDDI.
Bridges and switches are not complicated devices. They analyze incoming frames, make forwarding
decisions based on information contained in the frames, and forward the frames toward the destination.
In some cases, such as source-route bridging, the entire path to the destination is contained in each
frame. In other cases, such as transparent bridging, frames are forwarded one hop at a time toward the
destination.
Upper-layer protocol transparency is a primary advantage of both bridging and switching. Because both
device types operate at the link layer, they are not required to examine upper-layer information. This
means that they can rapidly forward traffic representing any network layer protocol. It is not uncommon
for a bridge to move AppleTalk, DECnet, TCP/IP, XNS, and other traffic between two or more networks.
Bridges are capable of filtering frames based on any Layer 2 fields. For example, a bridge can be
programmed to reject (not forward) all frames sourced from a particular network. Because link layer
information often includes a reference to an upper-layer protocol, bridges usually can filter on this
parameter. Furthermore, filters can be helpful in dealing with unnecessary broadcast and multicast
packets.
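The forward/filter decision can be sketched as a toy learning bridge (the class and port
numbering here are hypothetical, for illustration only; real transparent bridges also age
table entries and run the spanning tree algorithm):

```python
class LearningBridge:
    """Toy transparent bridge: learn source addresses, then forward,
    filter, or flood frames based on the destination address."""

    def __init__(self):
        self.table = {}                      # MAC address -> port number

    def handle(self, src, dst, in_port):
        self.table[src] = in_port            # learn where src lives
        out = self.table.get(dst)
        if out is None:
            return "flood"                   # unknown destination: all ports
        if out == in_port:
            return "filter"                  # same segment: drop the frame
        return f"forward:{out}"              # known, on a different segment

bridge = LearningBridge()
print(bridge.handle("A", "B", in_port=1))   # flood   (B not yet learned)
print(bridge.handle("B", "A", in_port=2))   # forward:1
print(bridge.handle("C", "A", in_port=1))   # filter  (A is on port 1 too)
```

The "filter" case is what reduces traffic on the other segments: frames whose source and
destination lie on the same segment never cross the bridge at all.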
By dividing large networks into self-contained units, bridges and switches provide several advantages.
Because only a certain percentage of traffic is forwarded, a bridge or switch diminishes the traffic
experienced by devices on all connected segments. The bridge or switch will act as a firewall for some
potentially damaging network errors and will accommodate communication between a larger number of
devices than would be supported on any single LAN connected to the bridge. Bridges and switches
extend the effective length of a LAN, permitting the attachment of distant stations that
was not previously possible.
Although bridges and switches share most relevant attributes, several distinctions differentiate these
technologies. Bridges are generally used to segment a LAN into a couple of smaller segments. Switches
are generally used to segment a large LAN into many smaller segments. Bridges generally have only a
few ports for LAN connectivity, whereas switches generally have many. Small switches such as the
Cisco Catalyst 2924XL have 24 ports capable of creating 24 different network segments for a LAN.
Larger switches such as the Cisco Catalyst 6500 can have hundreds of ports. Switches can also be used
to connect LANs with different media—for example, a 10-Mbps Ethernet LAN and a 100-Mbps Ethernet
LAN can be connected using a switch. Some switches support cut-through switching, which reduces
latency and delays in the network, while bridges support only store-and-forward traffic switching.
Finally, switches reduce collisions on network segments because they provide dedicated bandwidth to
each network segment.

Types of Bridges
Bridges can be grouped into categories based on various product characteristics. Using one popular
classification scheme, bridges are either local or remote. Local bridges provide a direct connection
between multiple LAN segments in the same area. Remote bridges connect multiple LAN segments in
different areas, usually over telecommunications lines. Figure 4-1 illustrates these two configurations.

[Figure 4-1: Local and Remote Bridges Connect LAN Segments in Specific Areas. Local
bridging: a single bridge directly joins two Ethernet segments. Remote bridging: two
bridges join a Token Ring segment to a distant segment over a telecommunications line.]

Remote bridging presents several unique internetworking challenges, one of which is the difference
between LAN and WAN speeds. Although several fast WAN technologies now are establishing a
presence in geographically dispersed internetworks, LAN speeds are often much faster than WAN
speeds. Vast differences in LAN and WAN speeds can prevent users from running delay-sensitive LAN
applications over the WAN.
Remote bridges cannot improve WAN speeds, but they can compensate for speed discrepancies through
a sufficient buffering capability. If a LAN device capable of a 3-Mbps transmission rate wants to
communicate with a device on a remote LAN, the local bridge must regulate the 3-Mbps data stream so
that it does not overwhelm the 64-kbps serial link. This is done by storing the incoming data in onboard
buffers and sending it over the serial link at a rate that the serial link can accommodate. This buffering
can be achieved only for short bursts of data that do not overwhelm the bridge’s buffering capability.
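The numbers in this example make the constraint concrete. With a hypothetical 1-MB onboard
buffer (an assumed figure, not from the text), the longest burst the bridge can absorb is
only a few seconds:

```python
# Burst-length limit for the 3-Mbps LAN / 64-kbps WAN example in the text.
lan_rate = 3_000_000            # bits/s arriving from the LAN
wan_rate = 64_000               # bits/s drained over the serial link
buffer_bits = 8 * 1_000_000     # assumed 1-MB onboard buffer

fill_rate = lan_rate - wan_rate         # net buffer growth, bits/s
burst_limit = buffer_bits / fill_rate   # seconds until the buffer overflows
print(round(burst_limit, 2))            # about 2.72 seconds
```

Any burst longer than that overflows the buffer and frames are dropped, which is why
remote bridges can compensate for speed mismatches only over short bursts.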
The Institute of Electrical and Electronic Engineers (IEEE) differentiates the OSI link layer into two
separate sublayers: the Media Access Control (MAC) sublayer and the Logical Link Control (LLC)
sublayer. The MAC sublayer permits and orchestrates media access, such as contention and token
passing, while the LLC sublayer deals with framing, flow control, error control, and MAC sublayer
addressing.
Some bridges are MAC-layer bridges, which bridge between homogeneous networks (for example, IEEE
802.3 and IEEE 802.3), while other bridges can translate between different link layer protocols (for
example, IEEE 802.3 and IEEE 802.5). The basic mechanics of such a translation are shown in Figure
4-2.

[Figure 4-2: A MAC-Layer Bridge Connects the IEEE 802.3 and IEEE 802.5 Networks. Host A
and Host B each run the full seven-layer stack; the bridge implements only the physical
layer and the link layer's MAC and LLC sublayers, accepting the packet from the 802.3
medium and passing it to the 802.5 medium.]

Figure 4-2 illustrates an IEEE 802.3 host (Host A) formulating a packet that contains application
information and encapsulating the packet in an IEEE 802.3-compatible frame for transit over the IEEE
802.3 medium to the bridge. At the bridge, the frame is stripped of its IEEE 802.3 header at the MAC
sublayer of the link layer and is subsequently passed up to the LLC sublayer for further processing. After
this processing, the packet is passed back down to an IEEE 802.5 implementation, which encapsulates
the packet in an IEEE 802.5 header for transmission on the IEEE 802.5 network to the IEEE 802.5 host
(Host B).
A bridge’s translation between networks of different types is never perfect because one network likely
will support certain frame fields and protocol functions not supported by the other network.
Types of Switches
Switches are data link layer devices that, like bridges, enable multiple physical LAN segments to be
interconnected into a single larger network. Similar to bridges, switches forward and flood traffic based
on MAC addresses. Any network device will create some latency. Switches can use different forwarding
techniques—two of these are store-and-forward switching and cut-through switching.
In store-and-forward switching, an entire frame must be received before it is forwarded. This means that
the latency through the switch is relative to the frame size—the larger the frame size, the longer the delay
through the switch. Cut-through switching allows the switch to begin forwarding the frame when enough
of the frame is received to make a forwarding decision. This reduces the latency through the switch.
Store-and-forward switching gives the switch the opportunity to evaluate the frame for errors before

forwarding it. This capability to not forward frames containing errors is one of the advantages of
switches over hubs. Cut-through switching does not offer this advantage, so the switch might forward
frames containing errors. Many types of switches exist, including ATM switches, LAN switches, and
various types of WAN switches.
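The latency difference is easy to quantify. For illustration, assume the switch can make a
cut-through decision after reading the first 14 bytes of the frame (destination and source
MAC addresses plus the type/length field) on a 100-Mbps port:

```python
def serialization_delay(bits, rate_bps):
    """Seconds needed to clock `bits` in from the wire at `rate_bps`."""
    return bits / rate_bps

rate = 100_000_000        # 100-Mbps Ethernet
frame = 1518 * 8          # maximum-size Ethernet frame, in bits
header = 14 * 8           # bytes needed for a forwarding decision (assumed)

store_and_forward = serialization_delay(frame, rate)   # whole frame first
cut_through = serialization_delay(header, rate)        # header only
print(f"{store_and_forward * 1e6:.2f} us vs {cut_through * 1e6:.2f} us")
```

At this speed a maximum-size frame incurs roughly 121 microseconds of store-and-forward
delay versus about 1 microsecond for cut-through, which is why cut-through appeals for
latency-sensitive traffic even though it forwards errored frames.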

ATM Switch
Asynchronous Transfer Mode (ATM) switches provide high-speed switching and scalable bandwidths in
the workgroup, the enterprise network backbone, and the wide area. ATM switches support voice, video,
and data applications, and are designed to switch fixed-size information units called cells, which are used
in ATM communications. Figure 4-3 illustrates an enterprise network comprised of multiple LANs
interconnected across an ATM backbone.

[Figure 4-3: Multi-LAN Networks Can Use an ATM-Based Backbone When Switching Cells.
Engineering, R&D, Marketing, Sales, and Security LANs are interconnected across an ATM
backbone.]

LAN Switch
LAN switches are used to interconnect multiple LAN segments. LAN switching provides dedicated,
collision-free communication between network devices, with support for multiple simultaneous
conversations. LAN switches are designed to switch data frames at high speeds. Figure 4-4 illustrates a
simple network in which a LAN switch interconnects a 10-Mbps and a 100-Mbps Ethernet LAN.
[Figure 4-4: A LAN Switch Can Link 10-Mbps and 100-Mbps Ethernet Segments. A LAN switch
joins a 10-Mbps Ethernet segment and a 100-Mbps Ethernet segment.]

Review Questions
Q—At what layer of the OSI reference model do bridges and switches operate?
A—Bridges and switches are data communications devices that operate principally at Layer 2 of the OSI
reference model. As such, they are widely referred to as data link-layer devices.
Q—What is controlled at the link layer?
A—Bridging and switching occur at the link layer, which controls data flow, handles transmission errors,
provides physical (as opposed to logical) addressing, and manages access to the physical medium.
Q—Under one popular classification scheme, what are bridges classified as?
A—Local or Remote: Local bridges provide a direct connection between multiple LAN segments in the
same area. Remote bridges connect multiple LAN segments in different areas, usually over
telecommunications lines.
Q—What is a switch?
A—Switches are data link-layer devices that, like bridges, enable multiple physical LAN segments to be
interconnected into a single larger network.
