CN Unit 2


COMPUTER NETWORKS

UNIT -2

Data link layer: Design issues, Framing: fixed size framing, variable size framing, flow
control, error control, error detection and correction codes, CRC, Checksum: idea, one's
complement Internet checksum, services provided to Network Layer,
Elementary Data Link Layer protocols: simplex protocol, Simplex stop and wait, Simplex
protocol for Noisy Channel.
Sliding window protocol: One bit, Go-back N, Selective repeat-Stop and wait protocol, Data
Link layer in HDLC: configuration and transfer modes, frames, control field, point to point
Protocol (PPP): framing transition phase, multiplexing, multi link PPP.

Introduction:
In the OSI Model, each layer uses the services of the layer below it and provides services to
the layer above it. The data link layer uses the services offered by the physical layer. The primary
function of this layer is to provide a well-defined service interface to the network layer above it.
The data link layer is the second layer, just above the physical layer, and is responsible for
maintaining the data link between two hosts or nodes.
Before going through the design issues of the data link layer, let us look at its sub-layers and their
functions, given below.
The data link layer is divided into two sub-layers :
1. Logical Link Control Sub-layer (LLC) –
It provides the logic for the data link; thus it controls the synchronization, flow control, and
error checking functions of the data link layer. Its functions are –
 (i) Error Recovery.
 (ii) It performs the flow control operations.
 (iii) User addressing.

2. Media Access Control Sub-layer (MAC) –


It is the second sub-layer of the data link layer. It controls access to, and multiplexing of, the
transmission medium. Transmission of data packets is controlled by this layer, and it is
responsible for sending the data over the network interface card.
Functions are –
 (i) It controls access to the medium.
 (ii) It performs unique addressing of stations directly connected to the LAN.
 (iii) Detection of errors.

Data Link Layer Design Issues:

This layer converts the raw transmission facility provided by the physical layer to a reliable and error-
free link.
The main functions and the design issues of this layer are

 Providing services to the network layer


 Framing
 Error Control
 Flow Control

To accomplish these goals, the data link layer takes the packets it gets from the network layer and
encapsulates them into frames for transmission. Each frame contains a frame header, a payload
field for holding the packet, and a frame trailer, as illustrated in the figure below. Frame management
forms the heart of what the data link layer does.

Relationship between packets and frames.

1. Services Provided to the Network Layer


 The function of the data link layer is to provide service to the network layer.
 The principal service is transferring data from the network layer on the source
machine to the network layer on the destination machine.
 The network layer hands some bits to the data link layer for transmission to the
destination; the job of the data link layer is to transmit the bits to the destination
machine, so they can be handed over to the network layer on the destination machine.

The job of the data link layer is to transmit the bits to the destination machine so they can be
handed over to the network layer there, as shown in Fig.(a). The actual transmission follows the
path of Fig.(b), but it is easier to think in terms of two data link layer processes communicating
using a data link protocol.

 The data link layer can be designed to offer various services. Three possibilities that
are commonly provided are:

1. Unacknowledged connectionless service.


2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.

Unacknowledged connectionless service

Unacknowledged connectionless service consists of having the source machine send


independent frames to the destination machine without having the destination machine
acknowledge them. No connection is established beforehand or released afterward.
This service is appropriate for channels with low error rates and for real-time traffic, such as speech.
Acknowledged connectionless service

When this service is offered, there are still no connections used, but each frame sent is
individually acknowledged. This way, the sender knows whether or not a frame has
arrived safely. Good for unreliable channels, such as wireless.
Acknowledged connection-oriented service
With this service, the source and destination machines establish a connection before
any data are transferred. Each frame sent over the connection is numbered, and the data
link layer guarantees that each frame sent is received. Furthermore, it guarantees that
each frame is received exactly once and that all frames are received in the right order.
When connection-oriented service is used, transfers have three distinct phases.

1. In the first phase the connection is established by having both sides initialize the variables
and counters needed to keep track of which frames have been received and which ones
have not.
2. In the second phase, one or more frames are actually transmitted.
3. In the third phase, the connection is released, freeing up the variables, buffers, and
other resources used to maintain the connection.
2. Framing
 In order to provide service to the network layer, the data link layer must use the
service provided to it by the physical layer.
 What the physical layer does is accept a raw bit stream and attempt to
deliver it to the destination.
 This bit stream is not guaranteed to be error free.
 It is up to the data link layer to detect, and if necessary, correct errors.
 The usual approach is for the data link layer to break the bit stream up into
discrete frames and compute the checksum for each frame. When the frames
arrive at the destination, the checksum is re-computed.
There are three methods of breaking up the bit stream

1. Character count.
2. Flag bytes with byte stuffing (or) character stuffing.
3. Starting and ending flags, with bit stuffing.

1. Character count
The first framing method uses a field in the header to specify the number of characters in
the frame. When the data link layer at the destination sees the character count, it knows
how many characters follow and hence where the end of the frame is. This technique is
shown in Fig.-(a) for four frames of sizes 5, 5, 8, and 8 characters, respectively

The trouble with this algorithm is that the count can be garbled by a
transmission error. For example, if the character count of 5 in the second frame of
Fig-(b) becomes a 7, the destination will get out of synchronization and will be
unable to locate the start of the next frame. Even if the checksum is incorrect so
the destination knows that the frame is bad, it still has no way of telling where the
next frame starts. Sending a frame back to the source asking for a retransmission
does not help either, since the destination does not know how many characters to
skip over to get to the start of the retransmission. For this reason, the character
count method is rarely used anymore.
2. Flag bytes with byte stuffing (or character stuffing)
This framing method gets around the problem of resynchronization after an error by
having each frame start with the ASCII character sequence DLE STX and end with the
sequence DLE ETX (DLE is Data Link Escape, STX is Start of Text, and ETX is End of Text).
A serious problem occurs with this method when binary data, such as object programs
or floating-point numbers, are being transmitted: the DLE, STX, and ETX characters can
occur in the data, which interferes with the framing. One way to solve this problem is to
have the sender's data link layer insert a DLE character just before each "accidental" DLE
in the data; the data link layer on the other machine removes the doubled DLE before it
gives the data to the network layer. This is called character stuffing.
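
To make the character-stuffing idea concrete, here is a minimal Python sketch. It assumes single-byte stand-ins for DLE, STX and ETX (their usual ASCII control codes); it illustrates the doubling rule described above and is not a production framer.

# Minimal character-stuffing sketch; DLE/STX/ETX use their ASCII control codes.
DLE, STX, ETX = b"\x10", b"\x02", b"\x03"

def frame(payload: bytes) -> bytes:
    """Double every accidental DLE in the payload, then add DLE STX ... DLE ETX."""
    return DLE + STX + payload.replace(DLE, DLE + DLE) + DLE + ETX

def unframe(frame_bytes: bytes) -> bytes:
    """Strip the DLE STX / DLE ETX markers and collapse the doubled DLEs."""
    body = frame_bytes[2:-2]
    return body.replace(DLE + DLE, DLE)

if __name__ == "__main__":
    data = b"A" + DLE + b"B"          # payload that happens to contain a DLE
    assert unframe(frame(data)) == data
    print(frame(data).hex(" "))       # 10 02 41 10 10 42 10 03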

3. Starting and Ending flags, with bit stuffing.

The new technique allows data frames to contain an arbitrary number of bits and
allows character codes with an arbitrary number of bits per character. It works like
this. Each frame begins and ends with a special bit pattern, 01111110 (in fact, a
flag byte). Whenever the sender's data link layer encounters five consecutive
1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream. This bit
stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the
outgoing character stream before a flag byte in the data.

When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it
automatically destuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely
transparent to the network layer in both computers, so is bit stuffing. If the user
data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but
stored in the receiver's memory as 01111110.

Figure gives an example of bit stuffing.

(a) The original data. (b) The data as they appear on the line. (c) The data as they
are stored in the receiver's memory after destuffing.

With bit stuffing, the boundary between two frames can be unambiguously
recognized by the flag pattern. Thus, if the receiver loses track of where it is, all it
has to do is scan the input for flag sequences, since they can only occur at frame
boundaries and never within the data.
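
The bit-stuffing rule above (stuff a 0 after five consecutive 1s, delete it again at the receiver) can be sketched directly on bit strings. This toy Python version also checks the claim that the flag pattern 01111110 appearing in user data goes onto the line as 011111010.

FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the payload bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")     # stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Delete the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                # this is the stuffed 0; drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

if __name__ == "__main__":
    data = "01111110"                       # payload that looks like the flag
    assert bit_stuff(data) == "011111010"   # sent as 011111010, as stated above
    assert bit_destuff(bit_stuff(data)) == data
    on_wire = FLAG + bit_stuff(data) + FLAG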

3. Error Control
The usual way to ensure reliable delivery is to provide the sender with some
feedback about what is happening at the other end of the line. Typically, the
protocol calls for the receiver to send back special control frames bearing positive
or negative acknowledgements about the incoming frames. If the sender receives a
positive acknowledgement about a frame, it knows the frame has arrived safely.
On the other hand, a negative acknowledgement means that something has gone
wrong, and the frame must be transmitted again.

An additional complication comes from the possibility that hardware troubles may
cause a frame to vanish completely (e.g., in a noise burst). In this case, the receiver
will not react at all, since it has no reason to react. It should be clear that a protocol
in which the sender transmits a frame and then waits for an acknowledgement,
positive or negative, will hang forever if a frame is ever lost due to, for example,
malfunctioning hardware.

4. Flow Control
 Another important design issue that occurs in the data link layer (and higher
layers as well) is what to do with a sender that systematically wants to transmit
frames faster than a receiver can accept them.
 This situation can easily occur when the sender is running on a fast computer
and the receiver is running on a slow machine.
 The usual solution is to introduce flow control to throttle the sender into
sending no faster than the receiver can handle the traffic.
 Various flow control schemes are known (e.g., in HDLC), but most of them use the same basic
principle: the protocol contains well-defined rules about when a
sender may transmit the next frame.

Error Detection and Correction

Error-Correcting Codes
There are two basic strategies for dealing with errors. One way is to include enough redundant
information along with each block of data sent to enable the receiver to deduce what the
transmitted data must have been. The other way is to include only enough redundancy to
allow the receiver to deduce that an error occurred, but not which error, and have it request a
retransmission.
The former strategy uses error-correcting codes and the latter uses error-detecting
codes. The use of error-correcting codes is often referred to as forward error correction.

To understand how errors can be handled, it is necessary to look closely at what an error
really is. Normally, a frame consists of m data (i.e., message) bits and r redundant, or check,
bits. Let the total length be n (i.e., n = m + r). An n-bit unit containing data and check bits is
often referred to as an n-bit codeword.

Type of Errors:

Single-bit Error

The term single-bit error means that only one bit of a given data unit (such as a byte, character,
or data unit) is changed from 1 to 0 or from 0 to 1, as shown in the figure below.

Single-bit errors are the least likely type of error in serial data transmission. To see why,
imagine a sender sending data at 10 Mbps. This means that each bit lasts for only 0.1 μs
(microsecond). For a single-bit error to occur, the noise must have a duration of only 0.1 μs
(microsecond), which is very rare. However, a single-bit error can happen if we are using
parallel data transmission. For example, if 16 wires are used to send all 16 bits of a word
at the same time and one of the wires is noisy, one bit is corrupted in each word.

Burst Error

The term burst error means that two or more bits in the data unit have changed from 0
to 1 or vice versa. Note that a burst error does not necessarily mean that the errors occur in
consecutive bits. The length of the burst error is measured from the first corrupted bit to the
last corrupted bit.

Burst errors are most likely to happen in serial transmission. The duration of the noise is
normally longer than the duration of a single bit, which means that the noise affects a set of
bits, as shown in the figure. The number of bits affected depends on the data rate
and the duration of the noise.
Error Control
Error control can be done in two ways
 Error detection − Error detection involves checking whether any error has occurred or not.
The number of error bits and the type of error does not matter.
 Error correction − Error correction involves ascertaining the exact number of bits that have
been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits
along with the data bits. The receiver performs necessary checks based upon the additional redundant
bits. If it finds that the data is free from errors, it removes the redundant bits before passing the
message to the upper layers.

ERROR DETECTION

Most networking equipment at the data-link layer inserts some type of error-detection code.
When a frame arrives at the next hop in the transmission sequence, the receiving hop extracts
the error-detection code and applies it to the frame. When an error is detected, the message is
normally discarded. In this case, the sender of the erroneous message is notified, and the
message is sent again. However, in real-time applications, it is not possible to resend
messages. The most common approaches to error detection are parity checks, checksums, and
cyclic redundancy checks (CRC), discussed below.

Redundancy
One error detection mechanism would be to send every data unit twice. The receiving
device would then be able to do a bit-for-bit comparison between the two versions of the
data. Any discrepancy would indicate an error, and an appropriate correction
mechanism could be set in place. The system would be completely accurate (the odds of
errors being introduced onto exactly the same bits in both sets of data are
infinitesimally small), but it would also be very slow: not only would the transmission time
double, but the time it takes to compare every unit bit by bit must be added as well.
Error detection instead uses the concept of redundancy, which means adding extra bits for
detecting errors at the destination. This technique is called redundancy because the extra bits
are redundant to the information; they are discarded as soon as the accuracy of the transmission
has been determined.
The figure below shows the process of using redundant bits to check the accuracy of a data unit.

Three types of redundancy checks are common in data communications: Parity check,
Cyclic Redundancy Check (CRC), and Checksum


Error Detecting Techniques:


The most popular Error Detecting Techniques are:

o Parity check
 Single parity check
 Two-dimensional parity check
o Checksum
o Cyclic redundancy check

1.Parity Check
The most common and least expensive mechanism for error detection is the parity
check. Parity checking can be simple or two dimensional.
Single Parity Check
o Single parity checking is the simplest and least expensive mechanism for detecting errors.
o In this technique, a redundant bit, known as a parity bit, is appended at
the end of the data unit so that the total number of 1s becomes even. For an 8-bit data
unit, the total number of transmitted bits is therefore 9.
o If the number of 1 bits is odd, then parity bit 1 is appended, and if the number of 1
bits is even, then parity bit 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
o This technique makes the total number of 1s even, so it is known as even-parity
checking.
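
A minimal sketch of even-parity generation and checking, assuming the data unit is given as a bit string; it only illustrates the rule described above.

def even_parity_bit(data_bits: str) -> str:
    """Parity bit that makes the total number of 1s even."""
    return "1" if data_bits.count("1") % 2 else "0"

def accept(codeword: str) -> bool:
    """The receiver accepts the codeword only when its count of 1s is even."""
    return codeword.count("1") % 2 == 0

if __name__ == "__main__":
    data = "1100001"                       # assumed 7-bit data unit
    sent = data + even_parity_bit(data)    # 8 transmitted bits
    assert accept(sent)
    assert not accept("0" + sent[1:])      # a single flipped bit is detected
    swapped = sent[2] + sent[1] + sent[0] + sent[3:]
    assert accept(swapped)                 # two interchanged bits go undetected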

Drawbacks Of Single Parity Checking


o It can only detect single-bit errors, which are very rare.
o If two bits are interchanged, then it cannot detect the error.

Two-Dimensional Parity Check
o Performance can be improved by using two-dimensional parity check, which
organizes the block of bits in the form of a table.
o Parity check bits are calculated for each row, which is equivalent to a simple
parity check bit.
o Parity check bits are also calculated for all columns then both are sent along with
the data.
o At the receiving end these are compared with the parity bits calculated on the
received data.
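
A small sketch of the two-dimensional parity calculation, assuming the block is given as rows of 0/1 integers: an even-parity bit is appended to each row, and a final row of column parities is appended.

def two_d_parity(rows):
    """Append an even-parity bit to each row, then append a row of column parities."""
    with_row_parity = [row + [sum(row) % 2] for row in rows]
    column_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [column_parity]

if __name__ == "__main__":
    block = [[1, 1, 0, 0],        # hypothetical 3 x 4 block of data bits
             [1, 0, 1, 1],
             [0, 1, 1, 1]]
    for row in two_d_parity(block):
        print(row)                # last column = row parities, last row = column parities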

Drawbacks Of 2D Parity Check


o If two bits in one data unit are corrupted and two bits in exactly the same positions in another
data unit are also corrupted, then the 2D parity checker will not be able to detect the error.
o This technique cannot detect errors of 4 bits or more in some cases.

2.Checksum

A Checksum is an error detection technique based on the concept of redundancy.


 In the checksum error detection scheme, the data is divided into k segments, each of m
bits.
 At the sender end, the segments are added using 1's complement arithmetic to get the
sum. The sum is complemented to get the checksum.
 The checksum segment is then sent along with the data segments.
 At the receiver end, all received segments are added using 1's complement arithmetic to
get the sum. The sum is complemented.
 If the result is zero, the data is accepted; otherwise it is rejected.
(OR)

The Sender (Checksum Generator) follows the given steps:


1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together by using one's complement to get the sum.
3. The sum is complemented and it becomes the checksum field.
4. The original data and checksum field are sent across the network.

The Receiver(Checksum checker) follows the given steps:


1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together by using one's complement algorithm to get the sum.
3. The sum is complemented.
4. If the result of the sum is zero, then the data is accepted otherwise the data is discarded.
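
The sender and receiver steps can be sketched as follows, assuming 8-bit sections; the end-around carry implements the one's complement addition described above.

MASK = 0xFF                                  # assuming 8-bit sections

def ones_complement_sum(segments):
    """Add the sections, wrapping any carry back in (one's complement addition)."""
    total = 0
    for seg in segments:
        total += seg
        total = (total & MASK) + (total >> 8)
    return total

def make_checksum(segments):
    """Sender: complement of the one's complement sum."""
    return ones_complement_sum(segments) ^ MASK

def accept(segments, checksum):
    """Receiver: add everything including the checksum, complement, accept if zero."""
    return ones_complement_sum(list(segments) + [checksum]) ^ MASK == 0

if __name__ == "__main__":
    data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]   # k = 4 example sections
    cks = make_checksum(data)
    assert accept(data, cks)
    assert not accept([0b10011001, 0b11100011, 0b00100100, 0b10000100], cks)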

3. Cyclic Redundancy Check (CRC)
CRC is a redundancy error technique used to determine the error.

Following are the steps used in CRC for error detection:

o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than
the number of bits in a predetermined binary number, known as the divisor, which is n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as binary
(modulo-2) division. The remainder generated from this division is known as the CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This
newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver treats this
whole unit as a single unit, and it is divided by the same divisor that was used to find the CRC
remainder.

 If the remainder of this division is zero, the data has no detected error and is
accepted.
 If the remainder of this division is not zero, the data contains an error.
Therefore, the data is discarded.

Let's understand this concept through an example:

Suppose the original data is 11100 and divisor is 1001.

CRC Generator
o A CRC generator uses a modulo-2 division. Firstly, three zeroes are appended at the end of
the data as the length of the divisor is 4 and we know that the length of the string 0s to be
appended is always one less than the length of the divisor.
o Now, the string becomes 11100000, and the resultant string is divided by the divisor 1001.
o The remainder generated from the binary division is known as CRC remainder. The generated
value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit, and the final
string would be 11100111 which is sent across the network.

CRC Checker
o The functionality of the CRC checker is similar to the CRC generator.
o When the string 11100111 is received at the receiving end, then CRC checker performs the
modulo-2 division.
o A string is divided by the same divisor, i.e., 1001.
o In this case, CRC checker generates the remainder of zero. Therefore, the data is accepted.
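
A short modulo-2 division sketch in Python reproduces the worked example: dividing 11100000 by 1001 leaves remainder 111, and the received codeword 11100111 divides evenly.

def mod2_div(bits: str, divisor: str) -> str:
    """Binary (modulo-2) long division; returns the remainder as a bit string."""
    work = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if work[i] == "1":                               # XOR the divisor in at position i
            for j, d in enumerate(divisor):
                work[i + j] = "0" if work[i + j] == d else "1"
    return "".join(work[-(len(divisor) - 1):])

def crc_encode(data: str, divisor: str) -> str:
    """Append len(divisor)-1 zeros, divide, replace the zeros by the remainder."""
    return data + mod2_div(data + "0" * (len(divisor) - 1), divisor)

def crc_accept(codeword: str, divisor: str) -> bool:
    """Receiver divides the whole codeword; a zero remainder means no error detected."""
    return set(mod2_div(codeword, divisor)) == {"0"}

if __name__ == "__main__":
    data, divisor = "11100", "1001"
    codeword = crc_encode(data, divisor)
    assert codeword == "11100111"                # remainder 111, as in the example
    assert crc_accept(codeword, divisor)
    assert not crc_accept("11100101", divisor)   # a corrupted codeword is rejected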

Error Correction
Error Correction codes are used to detect and correct the errors when data is transmitted from
the sender to the receiver.

Error Correction can be handled in two ways:

o Backward error correction: Once the error is discovered, the receiver requests the
sender to retransmit the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting code
which automatically corrects the errors.

A single additional bit can detect the error, but cannot correct it.
For correcting errors, one has to know the exact position of the error. For example, to
correct a single-bit error in a 7-bit data unit, the error correction code must determine which of
the seven bits is in error. To achieve this, we have to add some additional redundant bits.

Hamming Code
Hamming code is a set of error-correction codes that can be used to detect and correct
the errors that can occur when the data is moved or stored from the sender to the receiver.
It is a technique developed by R. W. Hamming for error correction.
Redundant bits – Redundant bits are extra binary bits that are generated and added to the
information-carrying bits of data transfer to ensure that no bits were lost during the data transfer.
The number of redundant bits can be calculated using the following formula:
2^r ≥ m + r + 1
where, r = redundant bit, m = data bit
Suppose the number of data bits is 7. Then the smallest r that satisfies the formula is r = 4,
since 2^4 ≥ 7 + 4 + 1. Thus, the number of redundant bits is 4.
Parity bits –
A parity bit is a bit appended to a string of binary bits to ensure that the total number of 1's
in the data is even or odd. Parity bits are used for error detection. There are two types of
parity bits:
1. Even parity bit: In the case of even parity, for a given set of bits, the number of 1's is
counted. If that count is odd, the parity bit value is set to 1, making the total count of
occurrences of 1's an even number. If the total number of 1's in a given set of bits is
already even, the parity bit's value is 0.
2. Odd parity bit: In the case of odd parity, for a given set of bits, the number of 1's is
counted. If that count is even, the parity bit value is set to 1, making the total count of
occurrences of 1's an odd number. If the total number of 1's in a given set of bits is
already odd, the parity bit's value is 0.

General Algorithm of Hamming code:


Hamming Code is simply the use of extra parity bits to allow the identification of an error.
1. Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc).
2. All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc).
3. All the other bit positions are marked as data bits.
4. Each data bit is included in a unique set of parity bits, as determined by its bit position in
binary form.
   a. Parity bit 1 covers all the bit positions whose binary representation includes a 1 in the
      least significant position (1, 3, 5, 7, 9, 11, etc.).
   b. Parity bit 2 covers all the bit positions whose binary representation includes a 1 in the
      second position from the least significant bit (2, 3, 6, 7, 10, 11, etc.).
   c. Parity bit 4 covers all the bit positions whose binary representation includes a 1 in the
      third position from the least significant bit (4–7, 12–15, 20–23, etc.).
   d. Parity bit 8 covers all the bit positions whose binary representation includes a 1 in the
      fourth position from the least significant bit (8–15, 24–31, 40–47, etc.).
   e. In general, each parity bit covers all bits where the bitwise AND of the parity position
      and the bit position is non-zero.
5. Since we check for even parity, set a parity bit to 1 if the total number of ones in the
positions it checks is odd.
6. Set a parity bit to 0 if the total number of ones in the positions it checks is even.

Determining the position of redundant bits – These redundancy bits are placed at
positions that correspond to powers of 2.
As in the above example:
 The number of data bits = 7
 The number of redundant bits = 4
 The total number of bits = 11
 The redundant bits are placed at positions corresponding to powers of 2: 1, 2, 4, and 8.

 Suppose the data to be transmitted is 1011001, the bits will be placed as follows:

Determining the Parity bits:


 R1 bit is calculated using parity check at all the bits positions whose binary
representation includes a 1 in the least significant position. R1: bits 1, 3, 5, 7, 9, 11

 To find the redundant bit R1, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R1 is an even number the value of R1 (parity bit’s
value) = 0
 R2 bit is calculated using parity check at all the bits positions whose binary
representation includes a 1 in the second position from the least significant bit. R2: bits
2,3,6,7,10,11

 To find the redundant bit R2, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R2 is odd the value of R2(parity bit’s value)=1
 R4 bit is calculated using parity check at all the bits positions whose binary
representation includes a 1 in the third position from the least significant bit. R4: bits 4,
5, 6, 7

 To find the redundant bit R4, we check for even parity. Since the total number of 1's in
all the bit positions corresponding to R4 is odd, the value of R4 (parity bit's value) = 1.
 R8 bit is calculated using parity check at all the bit positions whose binary
representation includes a 1 in the fourth position from the least significant bit. R8: bits
8, 9, 10, 11

 To find the redundant bit R8, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R8 is an even number the value of R8(parity bit’s
value)=0. Thus, the data transferred is:

Error detection and correction: Suppose in the above example the 6th bit is changed
from 0 to 1 during data transmission, then it gives new parity values in the binary number:

The bits give the binary number 0110 whose decimal representation is 6. Thus, bit 6
contains an error. To correct the error the 6th bit is changed from 1 to 0.
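
The whole procedure can be sketched in a few lines of Python. The encoder below reproduces the worked example (data 1011001, parity bits R1 = 0, R2 = 1, R4 = 1, R8 = 0), and the syndrome calculation returns 6 when bit 6 is flipped. It assumes even parity and numbers bit positions from 1 at the right, as in the example.

def hamming_encode(data_bits: str) -> str:
    """Even-parity Hamming code with parity bits at positions 1, 2, 4, 8, ..."""
    m = len(data_bits)
    r = 0
    while 2 ** r < m + r + 1:                 # smallest r with 2^r >= m + r + 1
        r += 1
    n = m + r
    code = {}                                 # position (1..n) -> bit value
    bits = iter(data_bits)
    for pos in range(n, 0, -1):               # leftmost data bit -> highest position
        if pos & (pos - 1):                   # not a power of two: a data position
            code[pos] = int(next(bits))
    for p in (2 ** k for k in range(r)):      # fill in each parity position
        covered = [code[i] for i in range(1, n + 1) if i & p and i != p]
        code[p] = sum(covered) % 2            # even parity over the covered positions
    return "".join(str(code[pos]) for pos in range(n, 0, -1))

def syndrome(codeword: str) -> int:
    """Position of a single-bit error (0 means all parity checks pass)."""
    n = len(codeword)
    bit = lambda pos: int(codeword[n - pos])  # position 1 is the rightmost bit
    s, p = 0, 1
    while p <= n:
        if sum(bit(i) for i in range(1, n + 1) if i & p) % 2:
            s += p
        p *= 2
    return s

if __name__ == "__main__":
    sent = hamming_encode("1011001")          # the 7-bit data of the worked example
    assert sent == "10101001110"              # R1=0, R2=1, R4=1, R8=0 at positions 1,2,4,8
    assert syndrome(sent) == 0
    received = list(sent)
    received[len(sent) - 6] = "1"             # flip bit 6 (0 -> 1), as in the example
    assert syndrome("".join(received)) == 6   # syndrome 0110 = 6 locates the error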

Flow Control
When the sender is running on a fast or lightly loaded machine and the receiver is on a
slow or heavily loaded machine, the transmitter will transmit frames faster than the receiver can
accept them; hence there is every possibility of losing data frames.

Flow Control mainly coordinates with the amount of data that can be sent before receiving an
acknowledgment from the receiver and it is one of the major duties of the data link layer.
 For most of the protocols, flow control is a set of procedures that mainly tells the sender how
much data the sender can send before it must wait for an acknowledgment from the
receiver.
 The data flow must not be allowed to overwhelm the receiver; because any receiving device
has a very limited speed at which the device can process the incoming data and the limited
amount of memory to store the incoming data.
 The processing rate is usually slower than the transmission rate; for this reason each receiving
device has a block of memory, commonly known as a buffer, that is used to store the
incoming data until it is processed. If the buffer begins to fill up, the
receiver must be able to tell the sender to halt the transmission until the receiver
becomes able to receive again.
 Thus flow control makes the sender wait for an acknowledgment from the receiver
before continuing to send more data.

Protocols
 Protocols are normally implemented in software using one of the
common programming languages. Protocols can be classified on
the basis of where they are used.
 Protocols can be designed for noiseless channels (that is, error-free ones) and for noisy
channels (that is, error-creating ones). The protocols for noiseless channels cannot be
used in real life and mainly serve as the basis for the protocols used for noisy
channels.

 All of these protocols are unidirectional in the sense that the data frames travel from
one node, the sender, to the other node, the receiver.
 The special frames called acknowledgment (ACK) and negative acknowledgment (NAK)
can flow in the opposite direction for flow and error control purposes, while the data can flow
in only one direction.

 But in the real-life network, the protocols of the data link layer are implemented as
bidirectional which means the flow of the data is in both directions. And in these protocols,
the flow control and error control information such as ACKs and NAKs are included in the
data frames in a technique that is commonly known as piggybacking.
 Also, bidirectional protocols are more complex than the unidirectional protocol.
 These protocols are covered in detail in the following sections.

NOISELESS CHANNELS:
Simplest Protocol
The flow control is not needed by the Simplest Protocol. The data link layer at the sender side
mainly gets the data from the network layer and then makes the frame out of data and sends it. On the
Receiver side, the data link layer receives the frame from the physical layer and then extracts the data
from the frame, and then delivers the data to its network layer.
 The simplest protocol is basically a unidirectional protocol in which data frames only travel
in one direction; from the sender to the receiver.
 In this protocol, the receiver can immediately handle any frame it receives; its processing time is
small enough to be considered negligible.
 Basically, the data link layer of the receiver immediately removes the header from the frame
and then hands the data packet over to the network layer, which also accepts the data packet
immediately.
 We can also say that in the case of this protocol the receiver never gets overwhelmed with the
incoming frames from the sender.

Flow Diagram for Simplest Protocol


 Using the simplest protocol the sender A sends a sequence of frames without even thinking
about receiver B.


 In order to send the three frames, there will be an occurrence of three events at sender A and
three events at the receiver B.
 It is important to note that in the above figure the data frames are shown with the help of
boxes.
 The height of the box mainly indicates the transmission time difference between the first bit
and the last bit of the frame.

Stop-and-Wait protocol
As the name suggests, when we use this protocol during transmission, then the sender sends one
frame, then stops until it receives the confirmation from the receiver, after receiving the confirmation
sender sends the next frame.

 There is unidirectional communication for the data frames, but the acknowledgment or
ACK frames travel in the other direction. Thus flow control is added here.
 Stop-and-wait is therefore a flow control protocol that makes use of the flow
control service provided by the data link layer.
 For every frame sent, an acknowledgment is needed, and it takes the same amount of time for
propagation to get back to the sender.
 In order to end the transmission, the sender transmits an end-of-transmission
(EOT) frame.

Flow diagram of the stop-and-wait protocol


Given below is the flow diagram of the stop-and-wait protocol:

Advantages

One of the main advantages of the stop-and-wait protocol is the accuracy it provides. Since the
next frame is transmitted only after the acknowledgment of the previous frame has been received,
there is no chance of data loss.

Disadvantages:

 Using this protocol only one frame can be transmitted at a time.


 Suppose a frame is sent by the sender but gets lost during transmission; then the
receiver can neither receive it nor send an acknowledgment back to the sender. Upon
not receiving the acknowledgment, the sender will not send the next frame. Two
situations therefore arise: the receiver waits for an infinite amount of time for
the data, and the sender waits for an infinite amount of time to send the next
frame.
 In the case of transmission over a long distance, this protocol is not suitable because the
propagation delay becomes much longer than the transmission delay.
 Suppose the sender sends the data and this data is received by the receiver. After
receiving the data the receiver sends the acknowledgment, but due to some reason this
acknowledgment is received by the sender after the timeout period. As this
acknowledgment is received too late, it can be wrongly considered as the
acknowledgment of another data packet.

NOISY CHANNELS.

 A flow control mechanism is incorporated which includes a FEEDBACK (ACK)
mechanism requesting the transmitter to re-transmit incorrect message frames.

 The most common re-transmission technique is known as AUTOMATIC REPEAT
REQUEST (ARQ).

 ARQ in the data link layer is used in THREE cases:
 Damaged frames.
 Lost frames.
 Lost acknowledgments.

 An ARQ protocol is characterized by FOUR fundamental steps:
 Transmission of frames.
 Error checking at the receiver end.
 Acknowledgement:
 Negative acknowledgement (NAK), if an error is detected.
 Positive acknowledgement (ACK), if no error is detected.
 Re-transmission, if the acknowledgement is negative (NAK) or if no acknowledgement
is received within a stipulated time.

 ARQ Techniques: ARQ techniques can be categorized in TWO ways, i.e., STOP-AND-WAIT
ARQ and SLIDING-WINDOW ARQ.

Stop-and-Wait ARQ:
The sender transmits a frame; when the frame arrives at the receiver, the receiver checks it for
damage and acknowledges to the sender accordingly.

 While transmitting a frame there can be any of the FOUR situations :

a) Normal Operation:

 The sender sends Frame1 and waits for ACK1. After receiving ACK1 sender sends next
Frame2, and waits for its acknowledgement ACK2. This operation is repeated until data
frames are completed.

 Usually a 'timer' is set by the sender after each frame is transmitted; its acknowledgement
must be received before the timer expires.

 b) Lost or Damaged Frame:

 When the receiver receives a frame and finds it damaged (or the frame is lost entirely), the
frame is discarded, but the receiver retains its number. When the sender does not receive an
acknowledgement, it re-transmits the same frame.

 c) Lost Acknowledgement :

 When an acknowledgement is lost, the sender does not know whether the frame was
received by the receiver. After the timer expires, the sender re-transmits the same
frame. But the receiver has already received this frame earlier, hence the second copy of
the frame is discarded.

 d) Delayed Acknowledgement :

 The ACK frame may be delayed due to some link problem.

 The acknowledgement is received after the timer has expired, by which time the sender has
already re-transmitted the same frame. A second acknowledgement is then initiated by the
receiver for the re-transmitted frame, and this second acknowledgement is
discarded.

 Features of STOP-AND-WAIT ARQ:
 The sender keeps a copy of the last transmitted frame until its acknowledgement is received.
 In case of damage or loss of a frame, the frame is discarded and no acknowledgement is
sent.
 The frames are numbered sequentially to avoid duplication.
 The sender maintains a timer; if an acknowledgement is not received in time, the sender
assumes the frame is lost.
 The receiver sends only positive acknowledgements to the sender.
 Disadvantages of Stop-and-Wait ARQ:
 If the sender's frame is lost, the receiver never sends an acknowledgement, and the
sender will wait for a long time.
 If the receiver's acknowledgement is lost, the sender will wait for a long time.
 If the acknowledgement is damaged, the sender may draw the wrong conclusion and
the protocol fails.
 Both sender and receiver do a lot of waiting.
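
The behaviour described above (a copy kept until the ACK arrives, 1-bit sequence numbers, retransmission on timeout, duplicates discarded) can be sketched as a toy Python simulation. The lossy channel is simulated with a random drop probability; there are no real timers or sockets.

import random

def stop_and_wait_arq(frames, loss_prob=0.3, seed=1):
    """Toy Stop-and-Wait ARQ: send one frame, wait for its ACK, retransmit on 'timeout'."""
    rng = random.Random(seed)
    delivered, expected, seq = [], 0, 0
    for payload in frames:
        while True:                                   # retransmit until the ACK gets back
            frame_lost = rng.random() < loss_prob
            ack_lost = rng.random() < loss_prob
            if not frame_lost:
                if seq == expected:                   # new frame: hand it to the network layer
                    delivered.append(payload)
                    expected ^= 1
                # a duplicate (retransmitted) frame is discarded but still acknowledged
                if not ack_lost:
                    break
            # frame or ACK lost: the timer expires and the same frame is sent again
        seq ^= 1                                      # 1-bit sequence number
    return delivered

if __name__ == "__main__":
    data = ["F0", "F1", "F2", "F3"]
    assert stop_and_wait_arq(data) == data            # everything arrives exactly once, in order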

GO-BACK-N ARQ PROTOCOL:

 In the Stop-and-Wait protocol, the sender has to wait for every ACK before the next
frame is transmitted. But in Go-Back-N, several frames can be transmitted without
waiting for acknowledgments. A copy of each transmitted frame is maintained until
the respective ACK is received. The different operations are:

 a) Normal Operation: The sender sends frames and updates the control variable 'S',
and the receiver updates the control variable 'R'.

b) Damaged or Lost Frame:

 Suppose Frame-3 is damaged or lost. If the receiver then receives Frame-4 and Frame-5,
they will be discarded, since the receiver is expecting Frame-3. The sender retransmits
Frame-3, 4, and 5, and the process continues.
 c) Lost Acknowledgment:

 The sender has transmitted all of its frames and is waiting for an acknowledgement that has
been lost along the way. The sender waits a predetermined amount of time and then retransmits
the unacknowledged frames. The receiver recognizes that the new transmission is a repeat of an
earlier one, sends another ACK, and discards the redundant data.
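
A toy trace of Go-Back-N behaviour under the assumptions above: the receiver accepts only the frame it expects, frames after a loss are discarded, and the sender goes back and resends from the first unacknowledged frame. The window size and loss pattern used here are arbitrary illustration values.

def go_back_n(num_frames, lost, window=4):
    """Toy Go-Back-N trace. Frame numbers in `lost` are dropped once by the channel;
    the receiver accepts only the frame it expects and discards everything else."""
    log, base, expected, lost = [], 0, 0, set(lost)
    while base < num_frames:
        top = min(base + window, num_frames)
        for seq in range(base, top):                 # (re)send every frame in the window
            if seq in lost:
                lost.discard(seq)                    # the channel drops it this one time
                log.append(f"frame {seq} lost")
            elif seq == expected:
                log.append(f"frame {seq} delivered")
                expected += 1
            else:
                log.append(f"frame {seq} discarded (receiver expects {expected})")
        if expected < top:
            log.append(f"timeout: go back to frame {expected}")
        base = expected                              # cumulative ACK slides the window
    return log

if __name__ == "__main__":
    for line in go_back_n(6, lost=[1]):              # frame 1 is lost once
        print(line)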

SELECTIVE REPEAT ARQ:


 It only re-transmits the damaged or lost frames instead of re-sending multiple frames.

 This selective re-transmission increases the efficiency of transmission and is more
suitable for noisy channels.

 A selective repeat ARQ system differs from a go-back-N ARQ system in the following ways:

 The receiving device must contain sorting logic to enable it to reorder frames
received out of sequence. It must also be able to store frames received after a
NAK has been sent until the damaged frame has been replaced.

 The sending device must contain a searching mechanism that allows it to find
and select only the requested frame for retransmission.

 A buffer in the receiver must keep all previously received frames on hold until all
retransmissions have been sorted and any duplicate frames have been identified and
discarded.

 To aid selectivity, ACK numbers, like NAK numbers, must refer to the frame
received (or lost) instead of the next frame expected.

 This complexity requires a smaller window size than is needed by the go-back-N
method if it is to work efficiently. It is recommended that the window size be less
than or equal to (n+1)/2, where n − 1 is the go-back-N window size.

 Damaged Frames: Frames 0 and 1 are received but not acknowledged. Data 2 arrives and is
found to contain an error, so a NAK 2 is returned. Like NAK frames in go-back-N error
correction, a NAK here both acknowledges the intact receipt of any previously
unacknowledged data frames and indicates an error in the current frame. NAK 2 tells the
sender that data 0 and data 1 have been accepted, but that data 2 must be resent.

 Lost Frames: Although frames can be accepted out of sequence, they cannot be
acknowledged out of sequence. If a frame is lost, the next frame will arrive out of sequence.

When the receiver tries to reorder the existing frames to include it, it will discover the
discrepancy and return a NAK. The receiver will recognize the omission only if other frames
follow. If the lost frame was the last of the transmission, the receiver does nothing and the
sender treats the silence like a lost acknowledgment.

 Lost Acknowledgement: When the sender device reaches either the capacity of its window
or the end of its transmission, it sets a timer. If no acknowledgement arrives in the time
allotted, the sender re-transmits all of the frames that remain unacknowledged. In the worst
case, the receiver will recognize any duplications and discard them.
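
A sketch of the receiver-side sorting and buffering logic described above: out-of-order frames are buffered, a NAK is recorded for the missing frame, and delivery to the network layer stays in order. The arrival order used here is just an illustration.

def sr_receiver(arrival_order):
    """Toy Selective Repeat receiver: buffer out-of-order frames, deliver in sequence."""
    buffered, delivered, expected, log = {}, [], 0, []
    for seq in arrival_order:
        if seq < expected:
            log.append(f"duplicate frame {seq} discarded")
            continue
        buffered[seq] = f"data{seq}"
        if seq != expected:
            log.append(f"frame {seq} buffered, NAK {expected} sent")
        while expected in buffered:                   # deliver everything now in sequence
            delivered.append(buffered.pop(expected))
            log.append(f"frame {expected} delivered in order")
            expected += 1
    return delivered, log

if __name__ == "__main__":
    # Frame 2 is lost at first and arrives later as a retransmission.
    data, log = sr_receiver([0, 1, 3, 4, 2, 5])
    assert data == [f"data{i}" for i in range(6)]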

 Comparison between Go – back – n and Selective Repeat:

 Retransmitting only specified damaged or lost frames may seem more efficient than
resending undamaged frames as well, but it is in fact less so.

 Because of the complexity of the sorting and storage required by the receiver, and the
extra logic needed by the sender to select specific frames for retransmission,
selective-repeat ARQ is expensive and not often used.

 Selective repeat gives better performance, but in practice it is usually discarded in
favor of go-back-N for simplicity of implementation.

 Sliding Window Protocols:

 Sliding window protocols are classified into three types. They are:

 A one bit sliding window protocol.

 A protocol using go – back – ‘N’.

 A protocol using selective repeat.

 In the previous protocols, data frames were transmitted in one direction only. But in
most practical situations, there is a need for transmitting data in both directions, i.e.,
full-duplex data transmission.

PIGGY BACKING:
When a data frame arrives, instead of immediately sending back a separate control frame, the
receiver restrains itself and waits until the network layer passes it the next data packet. The
acknowledgement is then attached to the outgoing data frame (i.e., using the ACK field in the
frame header).

 In effect, the acknowledgement gets a free ride on the next outgoing data frame.
This technique of temporarily delaying outgoing acknowledgements so that they
can be sent along with the next outgoing data frame is known as "PIGGYBACKING".

 The principal advantage of using piggybacking over having distinct
acknowledgement frames is a better use of the available channel bandwidth.

 One-bit Sliding Window Protocol:

 The sender maintains a set of sequence numbers corresponding to the frames it is
permitted to send. These frames are said to fall within the "sending window".

 Similarly, the receiver also maintains a "receiving window" corresponding to the set
of frames it is permitted to accept.

 In sliding window protocols, each outbound frame contains a sequence number, ranging from
‘0’ upto some maximum value.

 'A': This is the initial state. The sender has not started transmission, so its sending window is
empty. The receiver is ready to receive the first frame.

 'B': This is the state after sending the first frame. The window in the sending process indicates
that frame 0 has been sent.

 'C': This is the state after the first frame has been received. The receiver's window has advanced
by 1 so that it is now ready to receive frame 1.

 'D': This is the state after the first acknowledgement has been received. The sender's window
has advanced by 1 so that it is ready to transmit frame 1.

 Whenever a new packet arrives from the network layer, it is given the next highest sequence
number and the upper edge of the sliding window is advanced by 1. The process continues
until all the frames have been sent successfully.

Comparison between ‘GO – Back – ‘N’ and ‘Selective Repeat’ :
 In selective repeat, only the specific damaged or lost frames are re-transmitted.
However, the receiver needs storage capacity and sorting logic, and extra logic is
needed by the sender to select specific frames for re-transmission.

 In go-back-N, the sender re-transmits all the data frames starting from the one in which the
error occurred. No buffer is needed at the data link layer of the receiver,
because it discards all the frames that follow the erroneous frame.

 These two approaches are a trade-off between bandwidth and data link layer buffer
space. Depending on which resource is more valuable, one or the other can be used.

 TCP uses a form of selective repeat to provide end-to-end error control across a
network.

 HDLC is a data link control standard developed by ISO; it is a bit-oriented protocol
that operates using go-back-N.

Examples of Data Link Layer Protocols:

 1. High Level Data Link Control Protocol (HDLC)

 HDLC (High-Level Data Link Control) is a bit-oriented protocol that is used for
communication over point-to-point and multipoint links. This protocol
implements the mechanism of ARQ (Automatic Repeat Request). With the help of the
HDLC protocol, full-duplex communication is possible.

 HDLC is the most widely used protocol and offers reliability, efficiency, and a high
level of flexibility.

 HDLC defines three types of stations to satisfy a variety of applications. They are:
 1. Primary Station: It has the responsibility for controlling the operation of
the link. Frames issued by the primary station are called 'COMMANDS'.

 2. Secondary Station: These operate under the control of the primary
station. Frames issued by a secondary station are called 'RESPONSES'.
The primary station maintains a separate logical link with each secondary
station on the line.

 3. Combined Station: It combines the features of primary and secondary
stations. A combined station may issue both commands and responses.

 There are two types of link configurations in HDLC. They are :

 Unbalanced Configuration: Consists of one primary and one or more secondary stations and
supports both full-duplex and half-duplex transmission.

 Balanced Configuration: Consists of two combined stations and supports both full-duplex
and half-duplex transmission.

 The three data transfer modes in HDLC are :

 1. Normal Response Mode (NRM) : It is used in unbalanced configurations. The


primary station may initiate data transfer to a secondary station, but a secondary
station may only transmits data in response to a command from the primary station.

 NRM is used on Multi – drop lines, in which a number of terminals are


connected to a host computer. The computer polls each terminal for input.

 2. Asynchronous Balanced Mode (ABM): Used with a balanced configuration; either
combined station may initiate transmission without receiving permission from the other
combined station.

 ABM is the most widely used among the three modes; it makes more
efficient use of a full-duplex point-to-point link because there is no
polling overhead.

 3. Asynchronous Response Mode (ARM) : Used with an unbalanced configuration.


The secondary station may initiate transmission without explicit permission of the
primary station. The primary station still retains responsibility for the line, including
initialization, error recovery and logical disconnection.

 ARM is rarely used; it is applicable to some special situations in
which a secondary may need to initiate transmission.

 HDLC Frame Structure:

 It uses synchronous transmission. All transmissions are in the form of frames, and a
single frame format suffices for all types of data and control exchanges.

 Flag, Address and Control fields that precede the information field are known as
‘Header’. The FCS and Flag fields following the data field are referred to as a
‘Trailer’.

In the HDLC frame structure, the fields and their uses are as follows:

1. Flag Field
This field of the HDLC frame is an 8-bit sequence with the bit pattern 01111110,
and it is used to identify the beginning and end of the frame. The flag field also serves as a
synchronization pattern for the receiver.

2. Address Field
It is the second field of the HDLC frame and it mainly contains the address of the secondary
station. This field can be 1 byte or several bytes long which mainly depends upon the need of
the network. In case if the frame is sent by the primary station, then this field contains the
address(es) of the secondary stations. If the frame is sent by the secondary station, then this
field contains the address of the primary station.

3. Control Field
This is the third field of the HDLC frame and it is a 1 or 2-byte segment of the frame and is
mainly used for flow control and error control. Bits interpretation in this field mainly depends
upon the type of the frame.

4. Information Field
This field of the HDLC frame contains the user's data from the network layer or the
management information. The length of this field varies from one network to another.

5. FCS Field
FCS means Frame check sequence and it is the error detection field in the HDLC protocol.
There is a 16 bit CRC code for error detection.

In order to provide the flexibility necessary to support all the options possible in the
modes and configurations just described, three types of frames are
defined in HDLC:
 Information Frames (I-frames): These frames are used to transport the user data and
the control information that is related to the user data. If the first bit of the control
field is 0, the frame is identified as an I-frame.
 Supervisory Frames (S-frames): These frames are used only to transport control
information. If the first two bits of the control field are 1 and 0, the frame is
identified as an S-frame.
 Unnumbered Frames (U-frames): These frames are mainly reserved for system
management. They are used for exchanging control information between the
communicating devices.
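
Following the rules just listed (control field starting with 0 → I-frame, 10 → S-frame, 11 → U-frame), the frame type can be read straight off the control field. The sketch below takes the control field as a bit string written first bit first; the example values are arbitrary.

def hdlc_frame_type(control_bits: str) -> str:
    """Classify a frame from its control field, written first bit first:
    0... -> I-frame, 10... -> S-frame, 11... -> U-frame."""
    if control_bits[0] == "0":
        return "I-frame"
    return "S-frame" if control_bits[1] == "0" else "U-frame"

if __name__ == "__main__":
    assert hdlc_frame_type("01101000") == "I-frame"
    assert hdlc_frame_type("10010001") == "S-frame"
    assert hdlc_frame_type("11000011") == "U-frame"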

Each type of frame mainly serves as an envelope for the transmission of a different type of
message.

Frame Format
There are up to six fields in each HDLC frame: a beginning flag field, then the address
field, a control field, an information field, a frame check sequence (FCS) field, and an
ending flag field.
In the case of the multiple-frame transmission, the ending flag of the one frame acts as the
beginning flag of the next frame.
Let us take a look at different HDLC frames:

Features of HDLC Protocol


Given below are some of the features of the HDLC protocol:
1. This protocol uses bit stuffing when the flag pattern occurs in the data.
2. This protocol is used for point-to-point as well as multipoint link access.
3. HDLC is one of the most common protocols of the data link layer.
4. HDLC is a bit-oriented protocol.
5. This protocol implements error control as well as flow control.

Point-to-Point Protocol (PPP)


Point-to-Point Protocol (PPP) is a communication protocol of the data link layer
that is used to transmit multiprotocol data between two directly connected (point-to-point)
computers. It is a byte-oriented protocol that is widely used in broadband communications
having heavy loads and high speeds. Since it is a data link layer protocol, data is transmitted
in frames.

Services Provided by PPP


The main services provided by Point - to - Point Protocol are −
 Defining the frame format of the data to be transmitted.
 Defining the procedure for establishing a link between two points and
exchange of data.
 Stating the method of encapsulation of network layer data in the frame.
 Stating authentication rules of the communicating devices.
 Providing address for network communication.
 Providing connections over multiple links.

Frame format of PPP protocol
The frame format of PPP protocol contains the following fields:

o Flag: The flag field is used to indicate the start and end of the frame. The
flag field is a 1-byte field that appears at the beginning and the end of
the frame. The pattern of the flag is similar to the bit pattern in HDLC, i.e.,
01111110.
o Address: It is a 1-byte field that contains the constant value
11111111. These 8 ones represent a broadcast message.
o Control: It is a 1-byte field which is set to the constant value
11000000. It is not really needed, as PPP does not support flow
control and has only a very limited error control mechanism; a control field is
mainly useful in protocols that support flow and error control mechanisms.
o Protocol: It is a 1- or 2-byte field that defines what is to be carried in the
data field. The data can be user data or other information.
o Payload: The payload field carries either user data or other information.
The maximum length of the payload field is 1500 bytes.
o Checksum: It is a 16-bit field which is generally used for error detection.
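
A sketch that assembles the fields in the order just described. The flag, address and control values are the constants given above; the protocol and checksum bytes are placeholders (computing a real FCS and byte-stuffing accidental flag bytes are outside this sketch).

FLAG    = 0b01111110     # 1-byte flag, same bit pattern as in HDLC
ADDRESS = 0b11111111     # constant broadcast address
CONTROL = 0b11000000     # constant control value given above

def ppp_frame(protocol: bytes, payload: bytes, checksum: bytes) -> bytes:
    """Lay the fields out in order: flag, address, control, protocol, payload, checksum, flag."""
    assert len(payload) <= 1500 and len(checksum) == 2
    return bytes([FLAG, ADDRESS, CONTROL]) + protocol + payload + checksum + bytes([FLAG])

if __name__ == "__main__":
    # Placeholder protocol and checksum bytes, just to show the layout.
    frame = ppp_frame(b"\x00\x21", b"hello", b"\x00\x00")
    print(frame.hex(" "))    # 7e ff c0 00 21 68 65 6c 6c 6f 00 00 7e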

Transition phases of PPP protocol:

o Dead: Dead is a transition phase which means that the link is not used or
there is no active carrier at the physical layer.
o Establish: If one of the nodes starts working then the phase goes to the
establish phase. In short, we can say that when the node starts
communication or carrier is detected then it moves from the dead to the
establish phase.
o Authenticate: This is an optional phase, meaning that the
communication may also move to the authenticate phase. The connection
moves from the establish to the authenticate phase only when both
communicating nodes agree to make the communication authenticated.
o Network: Once the authentication is successful, the connection moves to the
network phase. In this phase, the negotiation of network layer
protocols takes place.
o Open: After the network phase, the connection moves to the open
phase. In the open phase, the exchange of data takes place; in other words,
the connection reaches the open phase after the configuration of the
network layer.
o Terminate: When all the work is done then the connection gets
terminated, and it moves to the terminate phase.
On reaching the terminate phase, the link moves to the dead phase which indicates
that the carrier is dropped which was earlier created.

*****

