
Unit 2 Computer Network

Data Link Control


Data Link Control is the service provided by the Data Link Layer to ensure reliable data
transfer over the physical medium.

Functions in Data Link Control


The functions included in data link control are:
1. Framing -
In the Physical layer, data transmission means moving bits, in the form of a signal, from the
source to the destination. The Physical layer also provides synchronization, ensuring that the
sender and the receiver use the same bit durations and timing.
The data link layer packs the bits into frames, so that each frame is distinguishable from the
next.
Framing in the data link layer separates a message from one source to one destination from
other messages to other destinations by adding a sender address and a destination address:
the destination address specifies where the frame has to go, and the sender address helps
the recipient acknowledge the receipt.

2. Flow and Error Control

Flow control and Error control are the two main responsibilities of the Data link layer. Let us
understand what these two terms specify. For the node-to-node delivery of the data, the flow and
error control are done at the data link layer.

Flow control coordinates the amount of data that can be sent before receiving an
acknowledgment from the receiver, and it is one of the major duties of the data link layer.

For most protocols, flow control is a set of procedures that tells the sender how much data
it can send before it must wait for an acknowledgment from the receiver.

The data flow must not be allowed to overwhelm the receiver, because any receiving device has
a limited speed at which it can process incoming data and a limited amount of memory to store
it.

Since the processing rate is usually slower than the transmission rate, each receiving device
has a block of memory, commonly known as a buffer, that stores incoming data until it is
processed. If the buffer begins to fill up, the receiver must be able to tell the sender to
halt transmission until the receiver is once again able to receive. Flow control thus makes
the sender wait for an acknowledgment from the receiver before continuing to send more data.

Some of the common flow control techniques are: Stop-and-Wait and sliding window technique.

Error Control contains both error detection and error correction. It mainly allows the receiver to
inform the sender about any damaged or lost frames during the transmission and then it
coordinates with the retransmission of those frames by the sender.

The term error control in the data link layer refers to methods of error detection and
retransmission. Error control is implemented in a simple way: whenever an error is detected
during an exchange, the specified frames are retransmitted. This process is also referred to
as Automatic Repeat reQuest (ARQ).

Data-link layer is responsible for implementation of point-to-point flow and error control
mechanism.

Flow Control
When a data frame (Layer-2 data) is sent from one host to another over a single medium, the
sender and receiver must work at the same speed; that is, the sender sends at a rate the
receiver can process and accept. What if the speed (hardware/software) of the sender or
receiver differs? If the sender sends too fast, the receiver may be overloaded (swamped) and
data may be lost.

Two types of mechanisms can be deployed to control the flow:

1. Stop and Wait

This flow control mechanism forces the sender, after transmitting a data frame, to stop and
wait until the acknowledgement of that frame is received.
This is the easiest and simplest form of flow control. The message or data is broken down
into multiple frames, and the receiver indicates its readiness to receive each frame. Only
when an acknowledgement is received does the sender transmit the next frame. This process
continues until the sender transmits an EOT (End of Transmission) frame. Only one frame can
be in transmission at a time, which leads to inefficiency, i.e. low productivity, when the
propagation delay is much longer than the transmission delay. In short, the sender sends a
single frame, the receiver accepts one frame at a time, and each acknowledgement (which
carries the next expected frame number) releases the next frame.
Advantages –
1. This method is very simple, and each frame is checked and acknowledged individually.
2. This method is also very accurate.
Disadvantages –
● This method is fairly slow.
● Only one packet or frame can be sent at a time.
● It is inefficient and makes the transmission process very slow.
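The one-frame-per-round-trip behaviour described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a real protocol implementation; the function name and the default frame size are invented for the example:

```python
def stop_and_wait_transfer(message, frame_size=4):
    """Toy Stop-and-Wait: the message is broken into frames and exactly
    one frame is in transit per round trip."""
    frames = [message[i:i + frame_size]
              for i in range(0, len(message), frame_size)]
    received = []
    next_expected = 0                  # sequence number the receiver expects
    for seq, frame in enumerate(frames):
        # Sender transmits frame `seq`, then stops and waits.
        if seq == next_expected:       # receiver accepts the in-order frame
            received.append(frame)
            next_expected += 1         # the ACK carries this number back
        # Only after the ACK arrives does the loop move to the next frame.
    return "".join(received), next_expected

# 12 characters in 4-byte frames -> 3 frames, final ACK number 3
print(stop_and_wait_transfer("HELLO WORLD!"))
```

Note that every frame costs a full round trip, which is exactly the inefficiency the disadvantages above point out.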

2. Sliding Window

In this flow control mechanism, both sender and receiver agree on the number of data frames
after which the acknowledgement should be sent. As we learnt, the stop-and-wait flow control
mechanism wastes resources; this protocol tries to make use of the underlying resources as
fully as possible.

This method is required where reliable, in-order delivery of packets or frames is needed, as
in the data link layer. It is a point-to-point protocol that assumes no other entity tries to
communicate until the current data transfer completes. The sender transmits several frames
before receiving any acknowledgement, and both sender and receiver agree upon the total
number of data frames after which an acknowledgement must be transmitted. This method allows
the sender to have more than one unacknowledged packet "in flight" at a time, which increases
and improves network throughput. In short, the sender sends multiple frames, while the
receiver accepts them one by one and acknowledges each completed frame (with the next
expected frame number).
Advantages –
● It performs much better than stop-and-wait flow control.
● This method increases efficiency.
● Multiple frames can be sent one after another.
Disadvantages –
● The main issue is complexity at the sender and receiver due to the transfer of
multiple frames.
● The receiver might receive data frames or packets out of sequence.
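A minimal sketch of the sliding-window idea follows; it is a simplified model (in-order ACKs, no losses), with invented names, meant only to show the window of unacknowledged frames:

```python
from collections import deque

def sliding_window_send(frames, window_size=4):
    """Toy sliding-window sender: up to `window_size` unacknowledged
    frames may be "in flight" at once; ACKs arrive in order."""
    in_flight = deque()          # (seq, frame) pairs awaiting an ACK
    acked = []
    seq = 0
    while seq < len(frames) or in_flight:
        # Keep transmitting while the window is not full.
        while seq < len(frames) and len(in_flight) < window_size:
            in_flight.append((seq, frames[seq]))
            seq += 1
        # The receiver acknowledges the oldest outstanding frame,
        # sliding the window forward by one.
        acked_seq, _ = in_flight.popleft()
        acked.append(acked_seq)
    return acked
```

For example, sliding_window_send(list("abcdefg"), window_size=3) acknowledges sequence numbers 0 through 6 in order while keeping at most three frames outstanding, rather than one as in stop-and-wait.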

Error Control
When a data frame is transmitted, there is a probability that it may be lost in transit or
received corrupted. In both cases, the receiver does not receive the correct data frame and
the sender knows nothing about the loss. In such cases, both sender and receiver are equipped
with protocols that help them detect transit errors such as the loss of a data frame. The
sender then retransmits the data frame, or the receiver may request that the previous data
frame be resent.

Requirements for error control mechanism:

Error detection - The sender and receiver, either or both, must be able to ascertain that an
error occurred in transit.

Positive ACK - When the receiver receives a correct frame, it should acknowledge it.

Negative ACK - When the receiver receives a damaged frame or a duplicate frame, it sends a
NACK back to the sender and the sender must retransmit the correct frame.
Retransmission: The sender maintains a clock and sets a timeout period. If an acknowledgement
of a previously transmitted data frame does not arrive before the timeout, the sender
retransmits the frame, assuming that the frame or its acknowledgement was lost in transit.

There are three techniques that the data link layer may deploy to control errors via
Automatic Repeat reQuest (ARQ):

Stop and Wait ARQ

The following transitions may occur in Stop-and-Wait ARQ:

● The sender maintains a timeout counter.
● When a frame is sent, the sender starts the timeout counter.
● If the acknowledgement of the frame arrives in time, the sender transmits the next frame
in the queue.
● If the acknowledgement does not arrive in time, the sender assumes that either the frame
or its acknowledgement was lost in transit. The sender retransmits the frame and restarts
the timeout counter.
● If a negative acknowledgement is received, the sender retransmits the frame.
Advantages
One of the main advantages of the stop-and-wait protocol is its accuracy: the next frame is
transmitted only after the acknowledgment of the previous frame has been received, so there
is little chance of data loss.
Disadvantages
Given below are some of the drawbacks of using the stop-and-wait Protocol:

Using this protocol only one frame can be transmitted at a time.

Suppose a frame sent by the sender is lost during transmission; the receiver can then neither
receive it nor send an acknowledgment back. Having received no acknowledgment, the sender
will not send the next frame. Two situations then arise: the receiver waits indefinitely for
the data, and the sender waits indefinitely to send the next frame.

In the case of the transmission over a long distance, this is not suitable because the propagation
delay becomes much longer than the transmission delay.

Suppose the sender sends the data and the receiver receives it. The receiver then sends the
acknowledgment, but for some reason this acknowledgment reaches the sender after the timeout
period. Arriving too late, it can be wrongly treated as the acknowledgment of another data
packet.

The time spent waiting for the acknowledgment for each frame also adds up in the total
transmission time.
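The retransmission-on-timeout behaviour can be modelled with a toy lossy channel. This is a sketch under stated assumptions: the loss rate, the fixed random seed, and the function names are all invented for the example, and the "timeout" is simply another loop iteration:

```python
import random

def stop_and_wait_arq(frames, loss_rate=0.3, seed=42, max_tries=50):
    """Toy Stop-and-Wait ARQ over a lossy channel: a frame is retransmitted
    (after a simulated timeout) until its acknowledgement gets through."""
    rng = random.Random(seed)          # fixed seed keeps the run repeatable
    delivered = []
    for seq, frame in enumerate(frames):
        for _ in range(max_tries):
            frame_lost = rng.random() < loss_rate   # frame lost in transit?
            ack_lost = rng.random() < loss_rate     # ACK lost in transit?
            if not frame_lost:
                # The receiver discards a duplicate caused by a lost ACK.
                if not delivered or delivered[-1][0] != seq:
                    delivered.append((seq, frame))
                if not ack_lost:
                    break              # ACK arrived before the timeout
            # Otherwise the timeout fires and the sender retransmits.
        else:
            raise RuntimeError("gave up after max_tries attempts")
    return [f for _, f in delivered]
```

The duplicate check illustrates why sequence numbers matter: when only the ACK is lost, the receiver sees the same frame twice and must not deliver it twice.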

Go-Back-N ARQ
The stop-and-wait ARQ mechanism does not utilize resources at their best: while waiting for
the acknowledgement, the sender sits idle and does nothing. In the Go-Back-N ARQ method, both
sender and receiver maintain a window.
The sending-window size enables the sender to send multiple frames without receiving the
acknowledgement of the previous ones. The receiving-window enables the receiver to receive
multiple frames and acknowledge them. The receiver keeps track of incoming frame’s sequence
number.
When the sender has sent all the frames in the window, it checks up to which sequence number
it has received positive acknowledgements. If all frames are positively acknowledged, the
sender sends the next set of frames. If the sender receives a NACK, or receives no ACK for a
particular frame, it retransmits all frames starting from the one for which no positive ACK
was received.
Advantages
Given below are some of the benefits of using the Go-Back-N ARQ protocol:
● The efficiency of this protocol is more.
● The waiting time is pretty much low in this protocol.
● With the help of this protocol, the timer can be set for many frames.Also, the sender can
send many frames at a time.
● Only one ACK frame can acknowledge more than one frame.

Disadvantages
Given below are some drawbacks:
● The timeout timer runs at the sender side only.
● The transmitter needs to store the last N packets.
● The retransmission of many error-free packets follows an erroneous packet.
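The last disadvantage, retransmission of error-free frames after one erroneous frame, can be seen in a toy simulation. Everything here is invented for illustration: losses are given as transmission indices, and the channel and window bookkeeping are heavily simplified:

```python
def go_back_n(frames, window=4, lost=frozenset({2, 5})):
    """Toy Go-Back-N: a lost frame forces retransmission of every later
    frame in the window. `lost` holds transmission indices (not sequence
    numbers), so each loss is suffered only once."""
    delivered = []
    base = 0                 # oldest unacknowledged sequence number
    tx_count = 0             # running count of transmissions on the wire
    while base < len(frames):
        resend_from = None
        # Send the window [base, base + window).
        for seq in range(base, min(base + window, len(frames))):
            dropped = tx_count in lost
            tx_count += 1
            if dropped or resend_from is not None:
                # Receiver discards this frame and every later
                # out-of-order frame in the window.
                if resend_from is None:
                    resend_from = seq
                continue
            delivered.append(frames[seq])   # in-order frame accepted
        base = (resend_from if resend_from is not None
                else min(base + window, len(frames)))
    return delivered, tx_count
```

With six frames and two losses, this model uses 11 transmissions: each loss wastes the error-free frames sent after it in the same window.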

Selective Repeat ARQ

In Go-Back-N ARQ, it is assumed that the receiver does not have buffer space for its window
size and has to process each frame as it comes. This forces the sender to retransmit all
frames that are not acknowledged.
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers the
frames in memory and sends a NACK only for the frame that is missing or damaged.

The sender, in this case, sends only the packet for which a NACK is received.
Selective repeat protocol, also known as Selective Repeat Automatic Repeat Request (ARQ),
is a data link layer protocol that uses the sliding window technique for reliable data frame
delivery. Only erroneous or lost frames are retransmitted in this case, while good frames are
received and buffered.
Selective Repeat ARQ is used in the data link layer for error detection and control. The sender
sends several frames specified by a window size in the selective repeat without waiting for
individual acknowledgement from the receiver as in Go-Back-N ARQ.

Working of Selective Repeat ARQ

● In Selective Repeat ARQ, only the erroneous or lost frames are retransmitted, while
correct frames are received and buffered.
● While keeping track of sequence numbers, the receiver buffers the frames in
memory and sends NACK (negative acknowledgement) for only the missing or
damaged frame.
● The sender will send/retransmit the packet for which NACK (negative
acknowledgement) is received.
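The contrast with Go-Back-N can be made concrete with a toy model in which only the NAKed frames are resent. As before, the loss set, names, and two-pass structure are invented simplifications, not the real protocol:

```python
def selective_repeat(frames, lost=frozenset({1, 4})):
    """Toy Selective Repeat: only missing frames are retransmitted; the
    receiver buffers out-of-order frames and releases them in order."""
    buffer = {}            # receiver buffer: seq -> frame
    tx_count = 0
    missing = []
    # First pass: send everything; frames whose seq is in `lost` are dropped.
    for seq, frame in enumerate(frames):
        tx_count += 1
        if seq in lost:
            missing.append(seq)      # receiver NAKs exactly this frame
        else:
            buffer[seq] = frame      # out-of-order frames are buffered
    # Second pass: retransmit only the NAKed frames.
    for seq in missing:
        tx_count += 1
        buffer[seq] = frames[seq]
    delivered = [buffer[seq] for seq in sorted(buffer)]
    return delivered, tx_count
```

Six frames with two losses cost only 8 transmissions here, versus 11 for the comparable Go-Back-N scenario, at the price of receiver buffering and per-frame bookkeeping.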
HDLC Protocol
HDLC (High-Level Data Link Control) is a bit-oriented protocol used for communication over
point-to-point and multipoint links. This protocol implements the mechanism of ARQ (Automatic
Repeat Request). With the HDLC protocol, full-duplex communication is possible.

HDLC is one of the most widely used data link protocols and offers reliability, efficiency,
and a high level of flexibility.

In order to make the HDLC protocol applicable for various network configurations, there are
three types of stations and these are as follows:

Primary Station This station looks after data link management. In communication between the
primary and a secondary station, it is the responsibility of the primary station to connect
and disconnect the data link. Frames issued by the primary station are commonly known as
commands.

Secondary Station The secondary station operates under the control of the primary station. The
Frames issued by the secondary stations are commonly known as responses.

Combined Station The combined station acts as both Primary stations as well as Secondary
stations. The combined station issues both commands as well as responses.

Transfer Modes in HDLC

The HDLC protocol offers two modes of transfer that mainly can be used in different
configurations. These are as follows:

1. Normal Response Mode (NRM)

2. Asynchronous Balanced Mode (ABM)


Let us now discuss both these modes one by one:

1. Normal Response Mode(NRM)

In this mode, the configuration of the stations is unbalanced: there is one primary station
and multiple secondary stations. The primary station can send commands, and the secondary
stations can only respond.

This mode is used for both point-to-point and multipoint links.


2. Asynchronous Balanced Mode (ABM)

In this mode, the configuration of the stations is balanced. The link is point-to-point, and
each station can function as both a primary and a secondary.

Asynchronous Balanced Mode (ABM) is the commonly used mode today.

HDLC Frames
To provide the flexibility necessary to support all the options possible in the modes and
configurations just described, three types of frames are defined in HDLC:

Information Frames(I-frames) These frames are used to transport the user data and the control
information that is related to the user data. If the first bit of the control field is 0 then it is
identified as I-frame.

Supervisory Frames(S-frames) These frames are only used to transport the control information.
If the first two bits of the control field are 1 and 0 then the frame is identified as S-frame

Unnumbered Frames(U-Frames) These frames are mainly reserved for system management.
These frames are used for exchanging control information between the communicating devices.
Each type of frame mainly serves as an envelope for the transmission of a different type of
message.

Frame Format
Each HDLC frame contains up to six fields: a beginning flag field, an address field, a
control field, an information field, a frame check sequence (FCS) field, and an ending flag
field.

In multiple-frame transmission, the ending flag of one frame can act as the beginning flag of
the next frame.

Let us now discuss the fields and their use in the different frame types:

1. Flag Field
This field of the HDLC frame is an 8-bit sequence with the bit pattern 01111110; it
identifies the beginning and end of the frame and serves as a synchronization pattern for the
receiver.

2. Address Field
It is the second field of the HDLC frame and contains the address of the secondary station.
This field can be one byte or several bytes long, depending on the needs of the network. If
the frame is sent by the primary station, this field contains the address(es) of the
secondary station(s); if the frame is sent by a secondary station, it contains the address of
the primary station.

3. Control Field
This is the third field of the HDLC frame and it is a 1 or 2-byte segment of the frame and is
mainly used for flow control and error control. Bits interpretation in this field mainly depends
upon the type of the frame.

4. Information Field
This field of the HDLC frame contains the user's data from the network layer or the management
information. The length of this field varies from one network to another.

5. FCS Field
FCS stands for Frame Check Sequence; it is the error detection field of the HDLC protocol and
contains a 16-bit CRC code for error detection.
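A 16-bit CRC like the one carried in the FCS can be computed bit by bit. The sketch below uses the widely documented CRC-16/CCITT-FALSE parameters (polynomial 0x1021, initial value 0xFFFF) as an assumption for illustration; the exact variant used on a given HDLC link may differ in bit ordering and final XOR:

```python
def crc16_ccitt(data: bytes, poly=0x1021, crc=0xFFFF) -> int:
    """Bitwise CRC-16 (CCITT-FALSE parameters: poly 0x1021, init 0xFFFF)."""
    for byte in data:
        crc ^= byte << 8                 # bring the next byte into the register
        for _ in range(8):
            if crc & 0x8000:             # top bit set: shift and apply the poly
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

The receiver recomputes the CRC over the received frame and compares it with the FCS field; any single-bit (and most multi-bit) corruption changes the computed value.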

Features of HDLC Protocol

Given below are some of the features of the HDLC protocol:

1. This protocol uses bit stuffing to escape flag patterns occurring in the data.

2.This protocol is used for point-to-point as well as multipoint link access.

3.HDLC is one of the most common protocols of the data link layer.

4.HDLC is a bit-oriented protocol.

5.This protocol implements error control as well as flow control.
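The bit stuffing mentioned in feature 1, which keeps the 01111110 flag pattern out of the payload, can be sketched as follows. Representing the bit stream as a string of "0"/"1" characters is purely an illustrative choice:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s, so the payload
    can never contain the 01111110 flag pattern."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")     # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the stuffed 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                # this is a stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)
```

Because the receiver removes a 0 after every five 1s, the only place six 1s in a row can ever appear on the wire is inside a real flag byte.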

Point-to-Point Protocol
PPP (Point-to-Point Protocol) is a protocol used in the data link layer. It is mainly used to
establish a direct connection between two nodes.

The Point-To-Point protocol mainly provides connections over multiple links.

This protocol defines how two devices can authenticate with each other.

PPP protocol also defines the format of the frames that are to be exchanged between the devices.

This protocol also defines how the data of the network layer are encapsulated in the data link
frame.

The PPP protocol defines how the two devices can negotiate the establishment of the link and
then can exchange the data.

This protocol provides multiple services of the network layer and also supports various
network-layer protocols.


Some services that are not offered by the PPP protocol are as follows:

This protocol does not provide a flow control mechanism: the sender may send any number of
frames to the receiver one after another, without regard to overwhelming the receiver.

This protocol does not provide any mechanism for addressing in order to handle the frames in the
multipoint configuration.

The PPP protocol provides only a very simple error control mechanism: a CRC field detects
errors, and a corrupted frame is silently discarded.
In the PPP protocol, framing is done using a byte-oriented technique.

PPP Frame Format

The format of the PPP frame is as follows:

Let us discuss each field of the PPP frame format one by one:

1. Flag
The PPP frame starts and ends with a 1-byte flag field that has the bit pattern 01111110.
Note that this pattern is the same as the flag pattern used in HDLC; the difference is that
PPP is a byte-oriented protocol, whereas HDLC is a bit-oriented protocol.

2. Address
The value of this field in PPP protocol is constant and it is set to 11111111 which is a broadcast
address. The two parties can negotiate and can omit this byte.
3. Control
The value of this field is also a constant: 11000000. As noted, PPP does not provide flow
control, and error control is limited to error detection. The two parties can negotiate to
omit this byte.

4. Protocol
This field defines what is being carried in the data field. It can either be user information or other
information. By default, this field is 2 bytes long.

5. Payload field
This field carries the data from the network layer. The maximum length of this field is 1500
bytes. This can also be negotiated between the endpoints of communication.

6. FCS
It is simply a 2-byte or 4-byte standard CRC(Cyclic redundancy check).

Byte Stuffing in PPP

As noted above, the major difference between PPP and HDLC is that PPP is a byte-oriented
protocol. This means the flag in PPP is a byte, and it must be escaped wherever it appears in
the data section of the frame.

The escape byte is 01111101: whenever a flag-like pattern appears in the data, this extra
byte is stuffed to tell the receiver that the next byte is not a flag.
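PPP byte stuffing (as standardized in RFC 1662) escapes a flag (0x7E) or escape (0x7D) byte by sending 0x7D followed by the original byte XORed with 0x20. A minimal sketch, with invented function names:

```python
FLAG, ESC = 0x7E, 0x7D

def byte_stuff(payload: bytes) -> bytes:
    """Escape every flag or escape byte as ESC, then byte XOR 0x20."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def byte_unstuff(data: bytes) -> bytes:
    """Undo the escaping: an ESC byte means the next byte is XORed back."""
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == ESC:
            out.append(data[i + 1] ^ 0x20)
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)
```

For example, a data byte 0x7E is transmitted as the pair 0x7D 0x5E, so the receiver never mistakes it for a frame boundary.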

Transition Phases in the PPP Protocol


The PPP protocol goes through the following phases:
Dead
In this phase, the link is not being used. There is no active carrier at the physical layer,
and the line is simply quiet.

Establish
If one of the nodes starts the communication, the connection goes into the establish phase,
where options are negotiated between the two parties. If the negotiation succeeds, the system
goes into the authenticate phase (if authentication is required) or directly into the network
phase. Several packets are exchanged here.

Authenticate
This is an optional phase. During the establishment phase, the two nodes decide whether to
use it. If the two nodes decide to proceed with authentication, they exchange several
authentication packets.
If authentication succeeds, the connection goes into the network phase; otherwise it goes
into the termination phase.

Network
In this phase, the protocols of the network layer are negotiated. The PPP protocol specifies
that the two nodes must establish a network-layer agreement before network-layer data can be
exchanged, because PPP supports multiple protocols at the network layer.

If a node runs multiple protocols at the network layer simultaneously, the receiving node
needs to know which protocol will receive the data.

Open
In this phase the transfer of the data takes place. Whenever a connection reaches this phase, then
the exchange of data packets can be started. The Connection remains in this phase until one of
the endpoints in the communication terminates the connection.

Terminate
In this phase, the connection is terminated. Several packets are exchanged between the two
ends for housekeeping and closing the link.

Components of PPP/ PPP stack

Basically, PPP is a layered protocol. There are three components of the PPP protocol and these
are as follows:
● Link Control Protocol
● Authentication Protocol
● Network Control Protocol

Link Control protocol


This protocol is mainly responsible for establishing, maintaining, configuring, and terminating
the links. The link control protocol provides the negotiation mechanism in order to set the
options between the two endpoints.
Both endpoints of the link must reach an agreement about the options before the link can be
established.

Authentication protocol
This protocol plays a very important role in the PPP protocol because the PPP is designed for use
over the dial-up links where the verification of user identity is necessary. Thus this protocol is
mainly used to authenticate the endpoints for the use of other services.
There are two protocols for authentication:

Password Authentication Protocol


Challenge handshake authentication Protocol

Network Control Protocol


The Network Control Protocol is mainly used for negotiating the parameters and facilities for the
network layer.
For every higher-layer protocol supported by the PPP protocol, there is one Network Control
Protocol; IPCP (IP Control Protocol), used for IP, is one example.
Multiple Access in Data Link Layer
The data link layer can be considered as two sublayers: the upper sublayer is mainly
responsible for data link control, and the lower sublayer is responsible for resolving access
to the shared media.
If the channel is dedicated, there is no need for the lower sublayer.

The upper sublayer of the data link layer is mainly responsible for the flow control and error
control and is also referred to as Logical link control(LLC); while the lower layer is mainly
responsible for the multiple-access resolution and thus is known as Media Access control (MAC)
layer.
The main objectives of multiple access protocols are optimizing the transmission time,
minimizing collisions, and avoiding crosstalk.
Multiple Access protocols mainly allow a number of nodes to access the shared network channel.
Several data streams originating from several nodes are transferred via the multi-point
transmission channel.

The Multiple access protocols are categorized as follows. Let us take a look at them:

Random Access Protocol

In random access, no station is superior to another, and none is assigned control over
another; no station grants another station permission to send.
Each station can transmit whenever it desires, on the condition that it follows the
predefined procedure, which includes testing the state of the medium.
There is no scheduled time for any station to transmit; transmission is random among all
stations, which is why these methods are called random access.
Given below are the protocols that lie under the category of Random Access protocol:
● ALOHA
● CSMA(Carrier sense multiple access)
● CSMA/CD(Carrier sense multiple access with collision detection)
● CSMA/CA(Carrier sense multiple access with collision avoidance)

Controlled Access Protocol

With controlled access protocols, the stations consult with one another to find which station
has the right to send the data. A station cannot send until it has been authorized by the
other stations.
The three main controlled access methods are as follows;
● Reservation
● Polling
● Token Passing

Channelization Protocols

Channelization is another method used for multiple access, in which the available bandwidth
of the link is shared in time, in frequency, or through code among the different stations.
Three channelization protocols used are as follows;
● FDMA(Frequency-division Multiple Access)
● TDMA(Time-Division Multiple Access)
● CDMA(Code-Division Multiple Access)

ALOHA
The earliest method used for random access was ALOHA, developed at the University of Hawaii
in the early 1970s. ALOHA was designed for radio (wireless) LANs, but it can also be used on
any shared medium.

As the medium is shared between the stations, when one station sends data, another station
may attempt to do so at the same time; the data from the two stations then collide.
The original ALOHA is simply termed "Pure ALOHA".

Pure ALOHA
This protocol is very simple but elegant at the same time.
The main idea behind this protocol is that each station sends a frame whenever it has a frame to
send.

Since the medium is shared among different stations, there is a possibility of collision
between frames from different stations.
Consider, for example, a pure ALOHA network with 4 stations in which each station sends two
frames, for a total of 8 frames on the shared medium. Some of these frames collide with one
another, and it is possible that only a single frame, say frame 1.1 from station 1, survives.
Frames destroyed during transmission must be resent. The pure ALOHA method relies on
acknowledgments from the receiver: whenever a station sends a frame, it expects the receiver
to send an acknowledgment. If no acknowledgment arrives within the time-out period, the
station assumes that the frame or the acknowledgment was destroyed, and it resends the frame.

A collision involves two or more stations; if all of them try to resend their frames as soon
as the time-out period passes, the frames will collide again. Thus in Pure ALOHA, when the
time-out period passes, each station waits a random amount of time, referred to as the
back-off time (TB), before resending its frame. This randomness helps avoid further
collisions.

The procedure for the Pure ALOHA protocol is: send the frame; wait for an acknowledgment for
a time-out period; if none arrives, wait a random back-off time and retransmit, giving up
after a maximum number of attempts.
Vulnerable time

The vulnerable time is the period during which there is a possibility of collision.

The vulnerable time in the case of Pure ALOHA is 2 × Tfr, where Tfr is the time required to
transmit one frame.

Slotted ALOHA

As the name suggests, in the slotted ALOHA the time of the shared channel is simply divided
into discrete intervals that are commonly known as Time Slots.
In Slotted ALOHA, each station may send data only at the beginning of a time slot.
If a station misses this moment, it must wait until the beginning of the next time slot.
If two stations try to send at the beginning of the same time slot, a collision can still
occur.
Pure ALOHA vs. Slotted ALOHA:
● In Pure ALOHA, a station can transmit a data frame whenever it has data to send; in
Slotted ALOHA, a station can transmit only at the beginning of a time slot.
● In Pure ALOHA, time is continuous; in Slotted ALOHA, time is discrete, in the form of
time slots.
● The main advantage of Pure ALOHA is its simple implementation; the main advantage of
Slotted ALOHA is the reduction in collisions and the increase in efficiency compared to
Pure ALOHA. In simple terms, collisions are reduced to half while the efficiency doubles.
● The maximum efficiency offered by Pure ALOHA is 18.4%; that of Slotted ALOHA is 36.8%.
● The probability of successful transmission is S = G × e^(−2G) for Pure ALOHA and
S = G × e^(−G) for Slotted ALOHA.
● The vulnerable time is 2 × Tfr for Pure ALOHA and Tfr for Slotted ALOHA.
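The throughput formulas can be checked numerically. A short sketch (function names invented; G is the offered load in frames per frame time):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G); maximized at G = 0.5."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G); maximized at G = 1."""
    return G * math.exp(-G)

# The maxima reproduce the efficiencies quoted in the text:
print(round(pure_aloha_throughput(0.5), 3))     # 0.184, i.e. 18.4%
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368, i.e. 36.8%
```

Evaluating at other values of G confirms that loading the channel either more lightly or more heavily than the optimum reduces the fraction of frames that get through.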

CSMA
To minimize the chance of collision, and thereby increase performance, the CSMA method was
developed.
If a station senses the medium before trying to use it, the chance of collision is reduced.
Carrier Sense Multiple Access (CSMA) requires that each station first listen to the medium
before sending data.
In simple words, the CSMA method is based on the principle "sense before transmit" or "listen
before talk".
CSMA can reduce the possibility of collision, but it cannot eliminate it.

Working of CSMA

On the shared medium, whenever a station has a data frame to transmit, it first attempts to
detect the presence of a carrier signal from the other stations connected to the medium.
If the station detects a carrier signal on the shared medium, another transmission is in
progress.
The station then waits until the ongoing transmission completes and only afterwards initiates
its own transmission. Generally, a station's transmission is received by all other stations
connected to the channel.
Because in CSMA all stations listen before sending their own frames, the number of frame
collisions is reduced.
However, if two stations detect the shared medium as idle and both initiate transmission
simultaneously, their frames will still collide.

Vulnerable time in CSMA

The vulnerable time in CSMA is the propagation time, denoted Tp. This is the time a
signal needs to propagate from one end of the medium to the other.
If another station tries to send a frame during this time, its transmission will collide
with the frame already on the medium.
Once the first bit of the frame has reached the end of the medium, every station has
heard it and refrains from sending.

Access Modes in CSMA

There are different versions of Access modes in CSMA and these are as follows:
1. 1-persistent CSMA
2. Non-persistent CSMA
3. p-Persistent CSMA
1-Persistent CSMA

This is one of the simplest and straightforward methods. In this method, once the station finds
that the medium is idle then it immediately sends the frame. By using this method there are
higher chances for collision because it is possible that two or more stations find the shared
medium idle at the same time and then they send their frames immediately.

Flow diagram of 1-Persistent approach:

CSMA stands for Carrier Sense Multiple Access. It is one of the network protocols that
works on the principle of 'carrier sense'. CSMA was developed to increase the performance
of the network and reduce the chance of collisions in the network.

● If any device wants to send data, it first senses (listens to) the network medium to
check whether the shared channel is free or not. If the channel is found idle, the
device transmits its data.
● This sensing reduces the chance of collision in the network, but it cannot eliminate
collisions entirely.
● Carrier Sense Multiple Access (CSMA) is a protocol that senses or listens to the medium
before any transmission of data in the medium.
● CSMA is used in Ethernet networks where two or more network devices are connected.

Working Principle of CSMA


● CSMA works on the principle of "Listen before Talking" or "Sense before Transmit".
When the device on the shared medium wants to transmit a data frame, then the device
first detects the channel to check the presence of any carrier signal from other connected
devices on the network.
● In this situation, if the device senses any carrier signal on the shared medium, then this
means that there is another transmission on the channel. And the device will wait until the
channel becomes idle and the transmission that is in progress currently completes.
● When the channel becomes idle the station starts its transmission. All other stations
connected in the network receive the transmission of the station.
● In CSMA, a station senses the channel before transmitting data, which reduces the
chance of collision during transmission.
● However, two stations may detect the channel as idle at the same time and start
transmitting simultaneously, in which case a collision still occurs.
● So CSMA reduces the chance of collision in data transmission, but it does not
eliminate collisions.

Example: In the network shown in the diagram below, if node 1 wants to transmit data, it
first senses the network. If data from any other device is on the network, it does not
send. When node 1 finds the channel idle, it transmits its data on the channel.

Refer to the below image for an example of CSMA


Vulnerable Time in CSMA
In CSMA, the vulnerable time is the propagation time, denoted Tp. It is the time a signal
takes to travel from one end of the channel to the other. If two stations send data
during this window, a collision results in the network. Once the first bit of a frame
sent by a station has reached the end of the shared medium, every station connected to
the network has heard that bit and refrains from sending.
Refer to the below image for the vulnerable time of CSMA

Types of CSMA Access Modes

1-Persistent
This is considered the simplest and most straightforward CSMA method. In this method, if
the station finds the medium idle, it immediately sends the data frame with probability
1.

● If the station wants to transmit data, it first senses the medium.
● If the medium is busy, the station waits until the channel becomes idle, sensing the
channel continuously until it does.
● If the station detects the channel as idle, it immediately sends the data frame with
probability 1, which is why this method is called 1-persistent.
Refer to the below image to show the flow diagram of the 1-persistent method of CSMA

In this method there is a high possibility of collision, because two or more stations may
sense the channel idle at the same time and transmit their frames simultaneously.
Refer to the below image to show the behavior of the 1-persistent method of CSMA
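The 1-persistent behaviour above can be sketched in a few lines of Python (the `ToyChannel` class and function names are purely illustrative, not part of any real networking API):

```python
class ToyChannel:
    """Toy shared medium: appears busy for a fixed number of sense
    attempts, then idle; counts transmitted frames."""
    def __init__(self, busy_polls):
        self.busy_polls = busy_polls
        self.sent = 0
    def busy(self):
        if self.busy_polls > 0:
            self.busy_polls -= 1
            return True
        return False
    def transmit(self):
        self.sent += 1

def one_persistent_send(channel_busy, transmit):
    """1-persistent CSMA: sense continuously while the medium is busy,
    then transmit immediately (with probability 1) once it goes idle."""
    while channel_busy():
        pass                 # keep sensing; no back-off at all
    transmit()

ch = ToyChannel(busy_polls=3)
one_persistent_send(ch.busy, ch.transmit)
print(ch.sent)  # 1
```

Because every waiting station transmits the instant the channel goes idle, two such stations would collide, which matches the high-collision behaviour described above.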
Non-Persistent
In this method of CSMA, if the station finds the channel busy, it waits for a random
amount of time before sensing the channel again.

● If the station wants to transmit data, it first senses the medium.
● If the medium is idle, the station immediately sends the data.
● Otherwise, if the medium is busy, the station waits for a random amount of time and
then senses the channel again.
● In Non-persistent CSMA there is less chance of collision than in the 1-persistent
method, because the station does not sense the channel continuously but only re-senses
it after waiting a random amount of time.

Refer to the below image to show the flow diagram of the Non-persistent method of CSMA

So the random amount of time is unlikely to be the same for two stations that’s why this method
reduces the chance of collision.

Refer to the below image to show the behavior of the Non-persistent method of CSMA
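The non-persistent rule can be sketched as follows (hypothetical helper names; the channel is faked with a list of sense results so the run is deterministic):

```python
import random

def non_persistent_send(channel_busy, transmit, wait):
    """Non-persistent CSMA: if the channel is busy, wait a random time
    and only then sense again -- there is no continuous sensing."""
    backoffs = 0
    while channel_busy():
        wait(random.uniform(0.1, 1.0))  # random pause before re-sensing
        backoffs += 1
    transmit()
    return backoffs

polls = [True, True, False]   # channel is busy for the first two senses
waits, sent = [], []
n = non_persistent_send(lambda: polls.pop(0),
                        lambda: sent.append("frame"),
                        waits.append)
print(n, len(sent))  # 2 1  -> two random waits, then one send
```

Since two stations are unlikely to draw the same random wait, they rarely re-sense the channel at the same instant, which is exactly why this mode reduces collisions.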
P-Persistent
The p-persistent method of CSMA is used when the channel is divided into time slots whose
duration is greater than or equal to the maximum propagation time. This method combines
the advantages of 1-persistent and Non-persistent CSMA: it reduces the chance of
collision and increases the efficiency of the network. When a station wants to transmit
data, it first senses the channel. If the channel is busy, the station senses it
continuously until it becomes idle. If the channel is idle, the station performs the
following steps.

1. The station transmits its data frame with probability p.
2. With probability q = 1 − p, the station waits for the start of the next time slot and
then senses the channel again.
3. If the channel is again idle, it repeats step 1. If the channel is busy, the station
assumes a collision has occurred and follows the back-off procedure.

Refer to the below image to show the flow diagram of the P-persistent method of CSMA
Refer to the below image to show the behavior of the P-persistent method of CSMA
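Steps 1–3 above reduce to one random decision per idle slot; a sketch (the `rng` parameter is injected only to make the example deterministic, and all names are illustrative):

```python
import random

def p_persistent_slot(p, rng=random.random):
    """One decision of p-persistent CSMA on an idle slot: transmit with
    probability p, otherwise (probability q = 1 - p) defer to the start
    of the next slot and sense the channel again."""
    return "transmit" if rng() < p else "defer"

print(p_persistent_slot(0.6, rng=lambda: 0.3))  # transmit (draw 0.3 < p)
print(p_persistent_slot(0.6, rng=lambda: 0.9))  # defer    (draw 0.9 >= p)
```

Choosing a small p makes simultaneous transmissions by waiting stations unlikely, which is how this mode trades a little delay for fewer collisions.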

O-Persistent
In this method of CSMA, a supervisory node assigns a transmission order to each node in
the network. When the channel is idle, a station does not send immediately; instead, it
waits for the transmission order assigned to it. This mode defines the priority of each
station before data transmission on the medium. If the channel is inactive, every station
waits for its turn to transmit, and each station transmits the data in its turn.

Variations of CSMA Protocol

Carrier Sense Multiple Access with Collision Detection (CSMA/CD)


Carrier Sense Multiple Access with Collision Detection is one of the network protocols
for transmission.
CSMA/CD is a media access control method that was widely used in early Ethernet LANs,
which used a shared bus topology in which each node (computer) was connected by coaxial
cable. Nowadays Ethernet is full duplex and the topology is either a star (connected via
a switch or router) or point to point (a direct connection). Hence CSMA/CD is no longer
used, although it is still supported.

Consider a scenario where there are ‘n’ stations on a link and all are waiting to transfer data
through that channel. In this case, all ‘n’ stations would want to access the link/channel to
transfer their own data. A problem arises when more than one station transmits data at
the same moment; in this case, the data from the different stations collide.

CSMA/CD is one such technique where different stations that follow this protocol agree on some
terms and collision detection measures for effective transmission. This protocol decides which
station will transmit when so that data reaches the destination without corruption.

That is why the station senses the channel before transmitting data. If the station finds
the channel idle, it transmits its data frame and then checks whether the transmission
was successful. If the frame was sent successfully, the station sends the next frame. If
the station detects a collision, then in CSMA/CD it sends a stop/jam signal to all the
stations connected to the network so that they terminate their transmissions. The station
then waits a random amount of time before attempting to transmit again.
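After a jam signal, the "random amount of time" is usually chosen by binary exponential back-off, as in classic Ethernet; a sketch (function name is illustrative):

```python
import random

def backoff_slots(attempt, max_exp=10):
    """Binary exponential back-off after the attempt-th collision:
    wait k slot times, with k drawn uniformly from
    0 .. 2^min(attempt, max_exp) - 1."""
    return random.randint(0, 2 ** min(attempt, max_exp) - 1)

# The maximum waiting window doubles with every successive collision:
print(2 ** 1 - 1, 2 ** 3 - 1, 2 ** 10 - 1)  # 1 7 1023
```

Doubling the window after each collision spreads the retries of competing stations further apart, so repeated collisions become progressively less likely.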

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)


Carrier Sense Multiple Access with Collision Avoidance is one of the network protocols
for data frame transmission. When a station sends a data frame on the channel, it
examines the signal it receives back to determine whether a collision occurred. If the
station receives only a single signal (its own), there has been no collision and the data
has been successfully received by the receiver. In the case of a collision, the station
receives two signals: its own signal and the signal sent by the other station.
CSMA/CA protocol is used in wireless networks because they cannot detect the collision so the
only solution is collision avoidance.
In CSMA/CA collision is avoided by using the following three strategies. Following are the
methods used in the CSMA/ CA to avoid the collision:

● Interframe space
● Contention window
● Acknowledgement

Interframe Space (IFS): If a station wants to transmit data, it waits until the channel
becomes idle; even then, it does not send immediately but waits for a further period of
time. This period is known as the Interframe Space, or IFS. The IFS can also define the
priority of a frame or station.

Contention window: The contention window is an amount of time divided into slots. When
the station is ready to transmit after waiting for the IFS, it chooses a random number of
slots to wait. If the channel is found busy during this wait, the station does not
restart the whole process; it merely stops its timer and restarts it when the channel is
sensed idle.

Acknowledgement:

There may be a chance of collision or data may be corrupted during the transmission. Positive
acknowledgment and time-out are used in addition to ensuring that the receiver has successfully
received the data.

Refer to the below image to show the behavior of the CSMA/CA


CSMA/CA Procedure:

Fig. Shows the flow chart explaining the principle of CSMA/CA.

This is the CSMA protocol with collision avoidance.

● A station that is ready to transmit senses the line using one of the persistence
strategies.
● As soon as it finds the line idle, the station waits for an IFG (interframe gap)
amount of time.
● It then waits for some random time and sends the frame.
● After sending the frame, it sets a timer and waits for an acknowledgement from the
receiver.
● If the acknowledgement is received before the timer expires, the transmission is
successful.
● But if the transmitting station does not receive the expected acknowledgement before
the timer expires, it increments the back-off parameter, waits for the back-off time,
and re-senses the line.
CSMA/CA can optionally be supplemented by the exchange of a Request to Send (RTS) packet
sent by the sender S and a Clear to Send (CTS) packet sent by the intended receiver R,
alerting all nodes within range of the sender, the receiver, or both not to transmit for
the duration of the main transmission. This is known as the IEEE 802.11 RTS/CTS exchange.
Implementing RTS/CTS helps to partially solve the hidden node problem that is often found
in wireless networking.
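The CSMA/CA procedure can be sketched as a loop (all names are illustrative and the IFS and slot waits are only simulated, not actually timed; this is a sketch of the flow, not of the IEEE 802.11 state machine):

```python
import random

def csma_ca_send(channel_idle, send_frame, ack_received, max_attempts=5):
    """Sense, wait the IFS, back off a random number of
    contention-window slots, send, then wait for the ACK;
    on timeout, double the back-off window and retry."""
    cw = 1                                  # contention window, in slots
    for _ in range(max_attempts):
        while not channel_idle():           # persistence strategy: sense
            pass
        _slots = random.randint(0, cw)      # IFS + random slot wait (simulated)
        send_frame()
        if ack_received():                  # ACK before the timer expires?
            return True
        cw *= 2                             # increment the back-off parameter
    return False

acks = [False, True]                        # first ACK lost, second received
sends = []
print(csma_ca_send(lambda: True, lambda: sends.append(1), lambda: acks.pop(0)))
print(len(sends))  # 2 attempts: one timeout, one success
```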

Let us now discuss the types of controlled access protocols. There are three types of Controlled
access protocols:
● Reservation
● Polling
● Token Passing
Let's learn about them one by one.

1). Reservation
Whenever we travel from a train or an airplane, the first thing we do is to reserve our seats,
similarly here a station must make a reservation first before transmitting any data-frames. This
reservation in Computer Network
In the reservation method, a station needs to make a reservation before sending data.timeline
consists of two kinds of periods:
● Reservation interval of a fixed time duration
● Data transmission period of variable frames
If there are N stations, the reservation interval is divided into N slots, and each
station has one slot.
Suppose station 1 has a frame to send: it transmits a 1 bit during slot 1. No other
station is allowed to transmit during this slot.
In general, the i-th station may announce that it has a frame to send by inserting a 1
bit into the i-th slot.
After all N slots have been checked, each station knows which stations wish to transmit.
The stations which have reserved their slots transfer their frames in that order.
After data transmission period, next reservation interval begins.
Since everyone agrees on who goes next, there will never be any collisions.

Consider four stations: the reservation interval is divided into 4 slots so that each
station has a slot. In other words, if there are n stations, n slots will be allotted.

Now let us assume these 4 stations are 4 friends. If friend 1 speaks in his slot 1, no
other friend can speak at this time. Similarly, if station 1 transmits a 1-bit
reservation in slot 1, then at that time no other station can transmit, and the others
must wait for their time slots. After all the slots have been transmitted and checked,
each station knows which stations wish to transmit.
The biggest advantage of this method is since all stations agree on which station is next to
transmit then there are no possible collisions.

The illustration below shows a scenario with five stations and a five-slot reservation
frame. In the first interval, stations 1, 3, and 4 are the only stations with
reservations; in the second interval, station 1 is the only station with a reservation.
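The reservation interval for the five-station scenario can be modelled directly (an illustrative function; stations are 0-indexed here):

```python
def reservation_frame(n_stations, wants_to_send):
    """Build one reservation interval for n stations: station i sets
    bit i if it has a frame to send; reserved stations then transmit
    in slot order, so no collisions are possible."""
    bits = [1 if i in wants_to_send else 0 for i in range(n_stations)]
    order = [i for i, b in enumerate(bits) if b]
    return bits, order

# First interval of the illustration: stations 1, 3 and 4 reserve
# (0-indexed here as 0, 2, 3).
bits, order = reservation_frame(5, {0, 2, 3})
print(bits)   # [1, 0, 1, 1, 0]
print(order)  # [0, 2, 3]
```

Because every station reads the same reservation bits, all stations agree on the transmission order before any data frame is sent.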

2). Polling
Recall your school or college classroom: what is the first thing the teacher does after
entering the class? The answer is roll call, or attendance. Let's compare the scenarios.
The teacher calls roll number 1 and gets a response if he/she is present, then moves to
the next roll number, say roll number 2; roll number 2 is absent, so the teacher gets no
response, or a negative response. Similarly, in a computer network there is a primary
station or controller (the teacher) and all other stations are secondary (the students);
the primary station sends a message to each station. The message sent by the primary
station contains the address of the station selected for granting access.

The point to remember is that all the nodes receive the message but the addressed one responds
and sends data in return, but if the station has no data to transmit then it sends a message called
Poll Reject or NAK (negative acknowledgment).

But this method has some drawbacks like the high overhead of the polling messages and high
dependence on the reliability of the primary station.
We calculate the efficiency of this method in terms of time for polling & time required for
transmission of data.

Polling works with topologies having Primary Station and Secondary Stations.
● The Primary Station Controls the link whereas the secondary station follows its
instructions.
● The exchange of data occurs only and only through the primary device even when the
final destination of transmission is secondary.
● Whenever primary Station has something to send, it sends the message to each node.
● Before sending the data, it creates and transmits a Select (SEL) Frame. One field of SEL
Frame includes the address of the intended secondary station.
● While sending, the primary station should know whether the target device is ready to
receive or not.
● Hence, it alerts the secondary station to the upcoming transmission and waits for an
acknowledgement (ACK) of the secondary station's status.
● Poll Function: When the primary is ready to receive data, it must ask (poll) each device
if it has anything to send.
● If the secondary has data to transmit, it sends the data frame. Else, it sends a negative
acknowledgement (NAK).
● The primary station then polls the next secondary. When the response is positive (a data
frame), the primary station reads the frame and returns an acknowledgement (ACK).
● There are two possibilities to terminate the transmission –
(a) either the secondary sends all data, finishing with an EOT frame.
(b) or the primary station's timer expires (time-out).

Tpoll = time for polling


Tt = time required for transmission of data
So, efficiency = Tt / (Tt + Tpoll)
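The efficiency formula above in code (Tt and Tpoll in the same time units; the numbers are made up for illustration):

```python
def polling_efficiency(t_t, t_poll):
    """Efficiency = Tt / (Tt + Tpoll): the fraction of channel time
    spent on data rather than on polling messages."""
    return t_t / (t_t + t_poll)

# If transmitting data takes 8 ms and polling overhead is 2 ms:
print(polling_efficiency(8.0, 2.0))  # 0.8
```

The formula makes the drawback concrete: as polling overhead Tpoll grows relative to Tt, efficiency falls toward zero.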

Whenever the primary station wants to receive data, it asks (polls) the secondary
stations on its channel; this method is polling. In the first diagram, we see that the
primary station asks station A if it has any data ready for transmission; since A does
not have any data queued, it sends a NAK (negative acknowledgement). The primary then
asks station B; since B has data ready for transmission, it transmits the data and in
return receives an acknowledgement from the primary station.
In the next case, if the primary station wants to send data to a secondary station, it
sends a select message; if the secondary station accepts the request, it sends back an
acknowledgement, then the primary station transmits the data and in return receives an
acknowledgement.

Advantage Of Polling

– No slot is ever wasted.

Disadvantage Of Polling

– No fair sharing.

3). Token Passing


Now, say 4 people are sitting at a round table and only the person who holds the token
can speak. In computer networks, a token is a special bit pattern that allows the system
possessing it to send data; in other words, a token represents permission to transmit
data. The token circulates around the table (or a network ring) in a predefined order. A
station can only pass the token to
its adjacent station and not to any other station in the network. If a station has some data queued
for transmission it can not transmit the data until it receives the token and makes sure it has
transmitted all the data before passing on the received token.

This method has some drawbacks: the token may be duplicated, damaged, or lost during
circulation, and introducing a new station or removing an existing station from the
network causes a significant disturbance. These issues must be handled so that the
efficiency of the method is not affected.
● In token passing scheme, the stations are connected logically to each other in form of ring
and access to stations is governed by tokens.
● A token is a special bit pattern or a small message, which circulate from one station to the
next in some predefined order.
● In a token ring, the token is passed from one station to the adjacent station in the
ring, whereas in a token bus each station uses the bus to send the token to the next
station in some predefined order.
● In both cases, token represents permission to send. If a station has a frame queued for
transmission when it receives the token, it can send that frame before it passes the token
to the next station. If it has no queued frame, it passes the token simply.
● After sending a frame, each station must wait for all N stations (including itself) to send
the token to their neighbours and the other N – 1 stations to send a frame, if they have
one.
● Problems such as duplication of the token, loss of the token, or the insertion or
removal of a station need to be tackled for correct and reliable operation of this
scheme.

The performance of a token ring is governed by two parameters: delay and throughput.
Delay is a measure of time: the time difference between a packet being ready for
transmission and its actual transmission. The average time required to send a token to
the next station is a/N.

Throughput is a measure of the successful traffic in the communication channel.


Throughput, S = 1/(1 + a/N) for a < 1

S = 1/[a(1 + 1/N)] for a > 1, where N = number of stations and a = Tp/Tt

Tp = propagation delay and Tt = transmission delay
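The two throughput formulas, with N and a = Tp/Tt as defined above, in a short sketch:

```python
def token_ring_throughput(a, n):
    """Token ring throughput for N stations, a = Tp/Tt:
    S = 1/(1 + a/N)       for a < 1
    S = 1/[a(1 + 1/N)]    for a > 1
    """
    if a < 1:
        return 1 / (1 + a / n)
    return 1 / (a * (1 + 1 / n))

print(round(token_ring_throughput(0.5, 10), 3))  # 0.952  (a < 1 case)
print(round(token_ring_throughput(2.0, 10), 3))  # 0.455  (a > 1 case)
```

Note how throughput stays close to 1 when propagation delay is small relative to transmission time (a < 1) and degrades roughly as 1/a once propagation dominates.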

In the diagram below, when station 1 possesses the token it transmits all the data frames
in its queue. After transmission, station 1 passes the token to station 2, and so on.
Station 1 can transmit again only when all the stations in the network have transmitted
their data and passed the token.

Frequency Division Multiple Access (FDMA)

Frequency Division Multiple Access (FDMA) is one of the most common analogue multiple
access methods. The frequency band is divided into channels of equal bandwidth so that each
conversation is carried on a different frequency (as shown in the figure below).

FDMA Overview

In the FDMA method, guard bands are used between the adjacent signal spectra to minimize
crosstalk between the channels. A specific frequency band is given to one user, and the
signal is recovered by identifying that frequency at the receiving end. FDMA was often
used in first-generation analog mobile phone systems.
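The channel arithmetic implied above is simple: with a guard band between each pair of adjacent channels, n channels occupy n·channel_bw + (n−1)·guard_bw of spectrum. A sketch with made-up bandwidths in kHz:

```python
def fdma_channels(total_bw, channel_bw, guard_bw):
    """Largest n with n*channel_bw + (n-1)*guard_bw <= total_bw:
    how many equal-bandwidth channels fit when a guard band separates
    each pair of adjacent channels."""
    n = 0
    while (n + 1) * channel_bw + n * guard_bw <= total_bw:
        n += 1
    return n

# e.g. a 1000 kHz band, 90 kHz channels, 10 kHz guard bands:
print(fdma_channels(1000.0, 90.0, 10.0))  # 10
```

The guard-band term (n−1)·guard_bw is exactly the "waste of capacity" listed under the disadvantages of FDMA.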
Advantages of FDMA
As FDMA systems use low bit rates (large symbol time) compared to average delay spread, it
offers the following advantages −
● Reducing the information bit rate and using efficient numerical codes increases the
capacity.
● It reduces the cost and lowers the inter-symbol interference (ISI).
● Equalization is not necessary.
● An FDMA system can be easily implemented. A system can be configured so that the
improvements in terms of speech encoder and bit rate reduction may be easily
incorporated.

Since the transmission is continuous, fewer bits are required for synchronization and
framing.

Disadvantages of FDMA
Although FDMA offers several advantages, it has a few drawbacks as well, which are listed
below −
● It does not differ significantly from analog systems; improving the capacity depends
on reducing the signal-to-interference ratio, i.e., on the signal-to-noise ratio (SNR).
● The maximum flow rate per channel is fixed and small.
● Guard bands lead to a waste of capacity.
● Hardware implies narrowband filters, which cannot be realized in VLSI and therefore
increases the cost.
● FDMA is different from frequency division duplexing (FDD). While FDMA permits
multiple users to simultaneously access a transmission system, FDD describes the way
the radio channel is shared between the downlink and uplink.

FDMA is also different from Frequency-division multiplexing (FDM). FDM refers to a physical
layer method that blends and transmits low-bandwidth channels via a high-bandwidth channel.
FDMA, in contrast, is a channel access technique in the data link layer.

Main features:

In FDMA, all users share the frequency band or satellite transponder simultaneously;
however, each user transmits at a single frequency.
FDMA is compatible with both digital and analog signals.
FDMA demands highly efficient filters in the radio hardware, contrary to CDMA and TDMA.
FDMA is devoid of timing issues that exist in TDMA.
As a result of the frequency filtering, FDMA is not prone to the near-far problem that exists in
CDMA.
All users transmit and receive at different frequencies because every user receives an individual
frequency slot.
One disadvantage of FDMA is crosstalk, which can cause interference between frequencies and
interrupt the transmission.

TDMA
Time Division Multiple Access (TDMA) is a digital modulation technique used in digital cellular
telephone and mobile radio communication. TDMA is one of two ways to divide the limited
spectrum available over a radio frequency (RF) cellular channel. The other is known as
frequency division multiple access (FDMA).

In simplest terms, TDMA enables multiple users to share the same frequency by dividing each
cellular channel into different time slots. In effect, a single frequency supports multiple and
simultaneous data channels. So, with a two-time slot TDMA, two users can share the same
frequency. With a three-time slot TDMA, three users can share the same frequency and so on.
How does TDMA work?

In TDMA, users transmit in rapid succession, each using their own time slot. This
shuttling process is so fast that each user appears to occupy the RF channel at the same
time. By
allocating a discrete amount of bandwidth to each user, TDMA increases the amount of data that
can be carried over the channel, while enabling simultaneous conversations.
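Time-slot sharing in its simplest round-robin form can be sketched as follows (illustrative only; real TDMA systems add framing, synchronization, and guard times):

```python
def tdma_owner(slot_index, users):
    """Round-robin TDMA: time slot t belongs to user t mod N, so N
    users share one frequency without overlapping in time."""
    return users[slot_index % len(users)]

schedule = [tdma_owner(t, ["A", "B", "C"]) for t in range(6)]
print(schedule)  # ['A', 'B', 'C', 'A', 'B', 'C']
```

With three users, each gets every third slot, which matches the three-user example discussed below.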

TDMA Overview

Time Division Multiple Access (TDMA) is a complex technology, because it requires
accurate synchronization between the transmitter and the receiver. TDMA is used in
digital mobile radio systems. The individual mobile stations are cyclically assigned a
frequency for exclusive use during a time interval.

In most cases, the entire system bandwidth is not assigned to a station for an interval
of time. Instead, the system's frequency range is divided into sub-bands, and TDMA is
used for multiple access within each sub-band. The sub-bands are known as carrier
frequencies, and a mobile system that uses this technique is referred to as a
multi-carrier system.

In the following example, the frequency band is shared by three users. Each user is
assigned definite time slots to send and receive data. In this example, user 'B' sends
after user 'A', and user 'C' sends thereafter. Because of the burst nature of the
transmission, the peak power requirement becomes larger, which is a drawback.

Advantages of TDMA

Here is a list of few notable advantages of TDMA −

● Permits flexible rates (i.e. several slots can be assigned to a user; for example, if
each time slot carries 32 kbps, a user assigned two slots per frame gets 64 kbps).
● Can handle bursty or variable-bit-rate traffic: the number of slots allocated to a
user can be changed frame by frame (for example, two slots in frame 1, three slots in
frame 2, one slot in frame 3, zero slots in frame 4, etc.).
● No guard band is required for the wideband system.
● No narrowband filter is required for the wideband system.
Disadvantages of TDMA

The disadvantages of TDMA are as follow −

● High data rates of broadband systems require complex equalization.


● Due to the burst mode, a large number of additional bits are required for
synchronization and supervision.
● Guard time is needed in each slot to accommodate timing inaccuracies (due to clock
instability).
● Electronics operating at high bit rates increase energy consumption.
● Complex signal processing is required to synchronize within short slots.

CDMA

Code Division Multiple Access (CDMA) is a form of multiplexing that allows various
signals to occupy a single transmission channel. It optimizes the use of available
bandwidth. The technology is commonly used in ultra-high-frequency (UHF) cellular
telephone systems, in bands ranging between 800 MHz and 1.9 GHz.

CDMA Overview

Code Division Multiple Access system is very different from time and frequency multiplexing.
In this system, a user has access to the whole bandwidth for the entire duration. The basic
principle is that different CDMA codes are used to distinguish among the different users.

Techniques generally used are direct sequence spread spectrum modulation (DS-CDMA),
frequency hopping or mixed CDMA detection (JDCDMA). Here, a signal is generated which
extends over a wide bandwidth. A code called spreading code is used to perform this action.
Using a group of codes, which are orthogonal to each other, it is possible to select a signal with a
given code in the presence of many other signals with different orthogonal codes.

How Does CDMA Work?

CDMA allows up to 61 concurrent users in a 1.2288 MHz channel by processing each voice
packet with two PN codes. There are 64 Walsh codes available to differentiate between
calls, which sets the theoretical limit. Operational limits and quality issues reduce the
maximum number of calls somewhat below this value.

In fact, many different "signals" baseband with different spreading codes can be modulated on
the same carrier to allow many different users to be supported. Using different orthogonal codes,
interference between the signals is minimal. Conversely, when signals are received from several
mobile stations, the base station is capable of isolating each as they have different orthogonal
spreading codes.

The following figure shows the technicality of the CDMA system. During propagation, the
signals of all users are mixed together; at the receiving side, by using the same code
that was used at the time of sending, the signal of each individual user can be
extracted.
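The role of orthogonal codes can be shown with two length-4 Walsh codes (a toy sketch: one bit per user, chips as ±1, and no channel noise; function names are illustrative):

```python
def spread(bit, code):
    # Transmit a ±1 bit as the bit multiplied by the user's chip sequence.
    return [bit * c for c in code]

def despread(mixed, code):
    """Correlate the received (summed) signal with one user's code;
    the orthogonal codes cancel, leaving only that user's bit."""
    return sum(m * c for m, c in zip(mixed, code)) // len(code)

# Two orthogonal (Walsh) codes of length 4
code_a = [1, 1, 1, 1]
code_b = [1, -1, 1, -1]

# Users A and B transmit simultaneously; the channel sums their chips
tx = [a + b for a, b in zip(spread(1, code_a), spread(-1, code_b))]

print(despread(tx, code_a))  # 1   (A sent +1)
print(despread(tx, code_b))  # -1  (B sent -1)
```

Because the dot product of two distinct Walsh codes is zero, correlating with one code removes the other user's contribution entirely, which is the "select a signal with a given code" property described above.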

Advantages of CDMA

CDMA has a soft capacity. The greater the number of codes, the more the number of users. It has
the following advantages −

● CDMA requires tight power control, as it suffers from the near-far effect. In other
words, a user near the base station transmitting with the same power would drown out
the signal of a user farther away; all signals must arrive at the receiver with more or
less equal power.
● Rake receivers can be used to improve signal reception: delayed versions (by one chip
time or more) of the signal (multipath signals) can be collected and used to make
decisions at the bit level.
● Soft handoff can be used: a mobile can switch base stations without changing carrier
frequency. Two base stations receive the mobile's signal, and the mobile receives
signals from both base stations.
● Burst transmission reduces interference.

Disadvantages of CDMA

The disadvantages of using CDMA are as follows −

● The code length must be carefully selected. A large code length can induce delay or
may cause interference.
● Time synchronization is required.
● Gradual (soft) handover increases the use of radio resources and may reduce capacity.
● Since the sum of the power received from and transmitted to the mobiles must be
managed, the base station needs constant tight power control. This can result in
several handovers.
