Unit 2 CN
The Data Link Layer is the second layer of the OSI layered model. It is one of the most
complicated layers, with complex functionalities and responsibilities. The data link layer hides the
details of the underlying hardware and presents itself to the upper layer as the medium to
communicate.
The data link layer works between two hosts that are directly connected in some sense. This
direct connection could be point-to-point or broadcast. Systems on a broadcast network are
said to be on the same link. The work of the data link layer tends to get more complex when it
deals with multiple hosts on a single collision domain.
The data link layer is responsible for converting the data stream into signals bit by bit and sending
it over the underlying hardware. At the receiving end, the data link layer picks up data from the
hardware, which arrives in the form of electrical signals, assembles it into a recognizable
frame format, and hands it over to the upper layer.
Logical Link Control: It deals with protocols, flow-control, and error control
Framing
The data-link layer takes packets from the Network Layer and encapsulates them into
frames. Then, it sends each frame bit by bit over the hardware. At the receiver's end,
the data link layer picks up signals from the hardware and assembles them into frames.
Addressing
Data-link layer provides a layer-2 hardware addressing mechanism. The hardware address is
assumed to be unique on the link and is encoded into the hardware at the time of manufacturing.
Synchronization
When data frames are sent on the link, both machines must be synchronized in
order for the transfer to take place.
Error Control
Sometimes signals may encounter problems in transit and the bits get flipped. These
errors are detected, and an attempt is made to recover the actual data bits. The layer
also provides an error reporting mechanism to the sender.
Flow Control
Stations on the same link may have different speeds or capacities. The data-link layer
ensures flow control, which enables both machines to exchange data at the same
speed.
Multi-Access
When hosts on a shared link try to transfer data, there is a high probability of
collision. The data-link layer provides mechanisms such as CSMA/CD to give
multiple systems the capability of accessing a shared medium.
At the data link layer, the message from the sender is delivered to the receiver by attaching
the sender's and receiver's addresses to each frame. The advantage of using frames is that data is
broken up into recoverable chunks that can easily be checked for corruption.
Problems in Framing –
Detecting the start of a frame: When a frame is transmitted, every station must be able
to detect it. Stations detect frames by looking for a special sequence of bits that marks the
beginning of the frame, i.e. the SFD (Start Frame Delimiter).
How do stations detect a frame: Every station listens to the link for the SFD pattern through a
sequential circuit. If the SFD is detected, the sequential circuit alerts the station. The station then
checks the destination address to accept or reject the frame.
Detecting the end of a frame: knowing when to stop reading the frame.
Types of framing – There are two types of framing:
1. Fixed size – The frame is of fixed size and there is no need to provide boundaries to
the frame; the length of the frame itself acts as the delimiter.
Drawback: It suffers from internal fragmentation if the data size is less than the frame
size.
Solution: Padding
2. Variable size – Here there is a need to define the end of one frame as well as the beginning of
the next frame to distinguish them. The methods used are:
Character count
Starting and ending characters, with character stuffing
Starting and ending flags, with bit stuffing
Physical layer coding violations
1. Character Count
This method uses a field in the header to specify the number of characters in the
frame. When the data link layer at the destination sees the character count, it knows
how many characters follow, and hence where the end of the frame is. The
disadvantage is that if the count is garbled by a transmission error, the destination
will lose synchronization and will be unable to locate the start of the next frame. So,
this method is rarely used.
Errors: if a transmission error changes a count field (say from 5 to 7), all subsequent frames go
out of synchronization. Even if the error is detected by a checksum, there is no way of
recovering, i.e. of finding where the next frame starts.
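A minimal Python sketch of character-count framing (the one-byte count field and the helper names are assumptions made here for illustration; a real link layer would use a larger, error-protected count):

    # Character-count framing: each frame starts with a 1-byte count field
    # (assumed here) followed by the data bytes; the count includes itself.
    def frame_char_count(messages):
        stream = bytearray()
        for msg in messages:
            stream.append(len(msg) + 1)   # count = data bytes + the count byte
            stream.extend(msg)
        return bytes(stream)

    def deframe_char_count(stream):
        frames, i = [], 0
        while i < len(stream):
            count = stream[i]             # if this byte is corrupted, every
            frames.append(stream[i + 1:i + count])   # later frame is lost
            i += count
        return frames

    frames = frame_char_count([b"abc", b"hello"])
    print(deframe_char_count(frames))     # [b'abc', b'hello']

The comment inside deframe_char_count shows exactly the weakness described above: one corrupted count desynchronizes everything that follows.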
2. Character Stuffing
In the second method, each frame starts with the ASCII character sequence DLE STX
and ends with the sequence DLE ETX (where DLE is Data Link Escape, STX is Start of
TeXt and ETX is End of TeXt). This method overcomes the drawback of the character
count method. If the destination ever loses synchronization, it only has to look for the
DLE STX and DLE ETX characters. If, however, binary data is being transmitted, then
there is a possibility of the characters DLE STX and DLE ETX occurring in the data.
Since this can interfere with the framing, a technique called character stuffing is
used: the sender's data link layer inserts an ASCII DLE character just before each DLE
character that occurs in the data, and the receiver's data link layer removes this extra DLE
before the data is given to the network layer. However, character stuffing is closely tied to 8-
bit characters, which is a major hurdle when transmitting characters of arbitrary size.
Each frame starts and ends with special flag bytes. Here we will imagine the start and end flag as the
same byte, FLAG.
After an error, the receiver can always find the start of the next frame by searching for the flag.
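A small Python sketch of character (byte) stuffing with DLE STX / DLE ETX framing. The byte values used are the standard ASCII codes for DLE, STX and ETX; the function names are just for illustration:

    # Byte stuffing sketch: DLE STX ... DLE ETX framing, with any DLE inside
    # the payload doubled by the sender and un-doubled by the receiver.
    DLE, STX, ETX = b'\x10', b'\x02', b'\x03'

    def stuff(payload: bytes) -> bytes:
        body = payload.replace(DLE, DLE + DLE)    # escape DLE in the data
        return DLE + STX + body + DLE + ETX

    def unstuff(frame: bytes) -> bytes:
        body = frame[2:-2]                        # strip DLE STX ... DLE ETX
        return body.replace(DLE + DLE, DLE)       # remove the stuffed DLEs

    data = b'A' + DLE + ETX + b'B'                # payload containing DLE ETX
    assert unstuff(stuff(data)) == data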
3. Bit stuffing
The third method allows data frames to contain an arbitrary number of bits and
allows character codes with an arbitrary number of bits per character. At the start
and end of each frame is a flag byte consisting of the special bit pattern 01111110 .
Whenever the sender's data link layer encounters five consecutive 1s in the data, it
automatically stuffs a zero bit into the outgoing bit stream. This technique is called
bit stuffing. When the receiver sees five consecutive 1s in the incoming data stream,
followed by a zero bit, it automatically destuffs the 0 bit. The boundary between two
frames can be determined by locating the flag pattern.
Byte stuffing assumes a fixed character format (e.g. 8 bits per character).
To allow an arbitrary number of bits per character, stuffing is done at the bit level rather than at the byte level.
Each frame begins and ends with the bit pattern 01111110 (six 1s bounded by 0s).
If five 1s appear in a row in the data, a 0 is stuffed in, so six 1s in a row never occur inside the frame.
The 0 is stuffed unconditionally, whether or not the next data bit would have been a 1.
The de-stuffer removes the 0 that follows any five consecutive 1s.
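A Python sketch of bit stuffing and de-stuffing; it works on strings of '0'/'1' characters for readability (a real implementation would operate on an actual bit stream):

    FLAG = "01111110"    # frame delimiter: six consecutive 1s bounded by 0s

    def bit_stuff(bits: str) -> str:
        """After any five consecutive 1s in the data, insert a 0."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == '1' else 0
            if run == 5:
                out.append('0')          # stuffed bit
                run = 0
        return FLAG + ''.join(out) + FLAG

    def bit_destuff(frame: str) -> str:
        """Strip the flags and drop the 0 that follows any five 1s."""
        bits = frame[len(FLAG):-len(FLAG)]
        out, run, skip = [], 0, False
        for b in bits:
            if skip:                     # this is the stuffed 0 -- drop it
                skip, run = False, 0
                continue
            out.append(b)
            run = run + 1 if b == '1' else 0
            if run == 5:
                skip = True
        return ''.join(out)

    data = "011111101111110"             # contains the flag pattern itself
    assert bit_destuff(bit_stuff(data)) == data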
Flow Control:-
• Flow control coordinates the amount of data that can be sent before receiving acknowledgement
• It is one of the most important functions of data link layer.
• Flow control is a set of procedures that tells the sender how much data it can transmit before it must
wait for an acknowledgement from the receiver.
• Receiver has a limited speed at which it can process incoming data and a limited amount of memory in
which to store incoming data.
• Receiver must inform the sender before the limits are reached and request that the transmitter
send fewer frames or stop temporarily.
• Since the rate of processing is often slower than the rate of transmission, receiver has a block of
memory (buffer) for storing incoming data until they are processed.
Stop and Wait Protocol-
It works under the following assumptions-
Communication channel is perfect.
No error occurs during transmission.
Working-
Sender sends a data packet to the receiver.
Sender stops and waits for the acknowledgement for the sent packet from the
receiver.
Receiver receives and processes the data packet.
Receiver sends an acknowledgement to the sender.
After receiving the acknowledgement, sender sends the next data packet to the
receiver.
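As a rough illustration, the send-wait-ACK alternation above can be sketched as below; send() and recv_ack() are placeholders for the underlying (assumed perfect) link, not a real API:

    # Minimal Stop-and-Wait sketch: the sender never has more than one
    # unacknowledged packet outstanding.
    def stop_and_wait_send(packets, send, recv_ack):
        for seq, packet in enumerate(packets):
            send(seq, packet)            # put the data packet on the link
            ack = recv_ack()             # stop and wait for the acknowledgement
            assert ack == seq            # channel is assumed perfect here

    # Toy "receiver": acknowledges each packet immediately.
    outbox = []
    stop_and_wait_send([b"p0", b"p1"],
                       send=lambda seq, p: outbox.append((seq, p)),
                       recv_ack=lambda: outbox[-1][0])
    print(outbox)                        # [(0, b'p0'), (1, b'p1')]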
Analysis-
Now, let us analyze in depth how the transmission is actually carried out-
Sender puts the data packet on the transmission link.
Data packet propagates towards the receiver’s end.
Data packet reaches the receiver and waits in its buffer.
Receiver processes the data packet.
Receiver puts the acknowledgement on the transmission link.
Acknowledgement propagates towards the sender’s end.
Acknowledgement reaches the sender and waits in its buffer.
Sender processes the acknowledgement.
Transmission delay : Time taken to put the data packet on the transmission link is called as transmission delay.
Propagation delay : Time taken for one bit to travel from sender to receiver end of the link is called as propagation
delay.
Queuing delay : Time spent by the data packet waiting in the queue before it is taken up for processing is called
as queuing delay.
Processing delay : Time taken by the processor to process the data packet is called as processing delay.
Total Time-
Total time taken in sending one data packet
= (Transmission delay + Propagation delay + Queuing delay + Processing delay) for the data packet
+ (Transmission delay + Propagation delay + Queuing delay + Processing delay) for the ACK
Assume-
Queuing delay and processing delay to be zero at both sender and receiver side.
Transmission time for the acknowledgement to be zero since its size is very small.
We know that propagation delay depends on the distance and the propagation speed,
so it is the same for both the data packet and the acknowledgement.
Under these assumptions, Total time = (Transmission delay)packet + 2 x Propagation delay.
Efficiency-
Efficiency (η) = Useful time / Total time
where-
Useful time = Transmission delay of the data packet = (Transmission delay)packet
Useless time = Time for which the sender is forced to wait and do nothing = 2 x Propagation delay
Total time = Useful time + Useless time
So, Efficiency (η) = (Transmission delay)packet / [(Transmission delay)packet + 2 x Propagation delay]
Throughput-
The number of bits that can be sent through the channel per second is called its throughput.
Throughput = Efficiency (η) x Bandwidth
Note-
The Stop and Wait protocol performs better for LANs than for WANs.
This is because-
The efficiency of the protocol decreases as the distance between sender and receiver
increases, since the propagation delay grows with distance.
So, the protocol performs better where the distance between sender and receiver is
small.
The distance is smaller in LANs as compared to WANs.
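A small worked example of these formulas; the packet size, bandwidth and propagation delay are illustrative assumptions, not values from the text:

    # Worked example of Stop-and-Wait efficiency and throughput.
    packet_bits = 1000 * 8           # 1000-byte packet (assumed)
    bandwidth   = 1_000_000          # 1 Mbps link (assumed), bits per second
    Tt = packet_bits / bandwidth     # transmission delay = 8 ms
    Tp = 0.010                       # propagation delay  = 10 ms (assumed)

    efficiency = Tt / (Tt + 2 * Tp)          # useful time / total time
    throughput = efficiency * bandwidth      # bits actually delivered per second
    print(f"efficiency = {efficiency:.2%}, throughput = {throughput/1000:.0f} kbps")

With these numbers the efficiency is about 28.6% and the throughput about 286 kbps; doubling the distance (and hence Tp) would lower both, which is why the protocol suits LANs better than WANs.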
Sliding Window Protocol
• In sliding window method, multiple frames are sent by sender at a time before needing
an acknowledgment.
• Multiple frames sent by source are acknowledged by receiver using a single ACK frame.
Sliding Window
• Sliding window refers to imaginary boxes that hold the frames on both the sender and
receiver side.
• It provides the upper limit on the number of frames that can be transmitted before
requiring an acknowledgment.
• Frames may be acknowledged by receiver at any point even when window is not full on
receiver side.
• Frames may be transmitted by source even when window is not yet full on sender side.
• The window has a specific size n, and the frames are numbered modulo n, which
means they are numbered from 0 to n-1. For example, if n = 8, the frames are numbered 0,
1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, …
• The size of the window is n (in this case 8). Therefore, a maximum of n-1
frames may be sent before an acknowledgment.
• When the receiver sends an ACK, it includes the number of next frame it expects to
receive. For example in order to acknowledge the group of frames ending in frame 4, the
receiver sends an ACK containing the number 5. When sender sees an ACK with number
5, it comes to know that all the frames up to number 4 have been received.
• Therefore, the sliding window of the sender shrinks from the left when frames of data are
sent, and it expands to the right when acknowledgments are received.
• The sliding window of the receiver shrinks from the left when frames of data are received,
and it expands to the right when acknowledgements are sent.
This sliding behaviour is sketched below.
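A rough Python sketch of how the sender's window slides on a cumulative ACK with n = 8; the list representation and the on_ack helper are assumptions made for illustration, not part of any standard API:

    n = 8                                    # sequence numbers 0..7
    window = list(range(0, 7))               # at most n-1 = 7 outstanding frames

    def on_ack(next_expected, window, next_to_assign):
        """Remove acknowledged frames and extend the window with new numbers."""
        while window and window[0] != next_expected:
            window.pop(0)                    # frame acknowledged: shrink from the left
            window.append(next_to_assign)    # expand to the right with a new frame
            next_to_assign = (next_to_assign + 1) % n
        return window, next_to_assign

    window, nxt = on_ack(5, window, 7)       # ACK 5 => frames 0..4 are received
    print(window)                            # [5, 6, 7, 0, 1, 2, 3]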
Error control
When a data-frame is transmitted, there is a probability that it may be lost in
transit or received corrupted. In both cases, the receiver does not receive the correct
data-frame and the sender does not know anything about the loss. In such cases, both sender
and receiver are equipped with protocols that help them detect transit errors
such as the loss of a data-frame. Then, either the sender retransmits the data-frame or the
receiver requests that the previous data-frame be resent.
Problems in Stop & Wait:
1. Problem of Lost Data Packet-
After sending a data packet to the receiver, the sender starts a time-out timer.
If the data packet gets acknowledged before the timer expires, the sender stops the time-out
timer.
If the timer goes off before the acknowledgement is received, the sender retransmits the
same data packet.
After retransmission, the sender resets the timer.
This prevents the occurrence of deadlock.
2. Problem of Lost Acknowledgement-
If the acknowledgement is lost, the same time-out timer makes the sender retransmit the data
packet; a sequence number on each data packet lets the receiver recognise the duplicate and
discard it.
Go-Back-N ARQ
Sequence numbers (modulo 8): 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1…
Sender Sliding Window
• At the sending site, to hold the outstanding frames until they are acknowledged,
we use the concept of a window.
• The size of the window is at most 2^m - 1, where m is the number of bits used for the
sequence number.
• Size of the window can be variable, e.g. TCP.
The window slides to include new unsent frames when the correct ACKs are received
• Frame 2 is lost
• When the receiver receives frame 3, it discards frame 3 as it is expecting frame 2
(according to window).
• After the timer for frame 2 expires at the sender site, the sender resends frames 2
and 3 (it goes back to 2).
Go-Back-N ARQ, damaged/lost/delayed ACK
• If an ACK is damaged/lost, we can have two situations:
• If the next ACK arrives before the expiration of any timer, there is no need for
retransmission of frames because ACKs are cumulative in this protocol.
• If ACK1, ACK2, and ACK3 are lost, ACK4 covers them if it arrives before the
timer expires.
• If ACK4 arrives after time-out, the last frame and all the frames after that are
resent.
• Receiver never resends an ACK.
A delayed ACK also triggers the resending of frames
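A minimal sketch of the go-back behaviour on a timeout; the helper names are illustrative, and a real implementation would also manage timers and frame buffers:

    # Go-Back-N sketch: on a timeout the sender goes back and resends every
    # frame from the first unacknowledged one onward.  ACKs are cumulative.
    m = 3                                # bits for the sequence number
    window_size = 2**m - 1               # at most 2^m - 1 = 7 outstanding frames

    def on_timeout(first_unacked, next_to_send, resend):
        """Resend frames first_unacked .. next_to_send-1 (go back N)."""
        seq = first_unacked
        while seq != next_to_send:
            resend(seq)
            seq = (seq + 1) % 2**m

    # Frame 2 was lost and frames 3 and 4 were sent after it: when the timer
    # for frame 2 expires, frames 2, 3 and 4 are all resent.
    on_timeout(2, 5, resend=lambda s: print("resend frame", s))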
MAC Sublayer
The MAC sublayer acts as an interface between the logical link control (LLC) sublayer and the network's
physical layer. The MAC layer emulates a full-duplex logical communication channel in a multi-point
network. This channel may provide unicast, multicast or broadcast communication service.
A MAC address is 48 bits long and is usually written as six pairs of hexadecimal digits in one of the
following formats:
MM:MM:MM:SS:SS:SS
MM-MM-MM-SS-SS-SS
The first half of a MAC address contains the ID number of the adapter manufacturer. These IDs are
regulated by the IEEE. The second half of a MAC address represents the serial number assigned to the
adapter by the manufacturer. In the example
00:A0:C9:14:C8:29
the prefix 00A0C9 indicates that the manufacturer is Intel Corporation.
Back-off Algorithm
The back-off algorithm is used, for example, for the retransmission of frames in carrier sense multiple
access with collision avoidance (CSMA/CA) and carrier sense multiple access with collision detection
(CSMA/CD) networks, where it is part of the channel access method used to send data on these
networks. In Ethernet networks, the algorithm is commonly used to schedule retransmissions after
collisions. The retransmission is delayed by an amount of time derived from the slot time and the
number of attempts to retransmit.
If a collision takes place between two stations, they may restart transmission as soon as they can after
the collision. This will always lead to another collision and form an infinite loop of collisions, leading to
a deadlock. To prevent such a scenario, the back-off algorithm is used.
After a collision, time is divided into discrete slots (Tslot) whose length is equal to 2t, where t is the
maximum propagation delay in the network.
The stations involved in the collision randomly pick an integer from the set K i.e {0, 1}. This set is called
the contention window. If the sources collide again because they picked the same integer, the
contention window size is doubled and it becomes {0, 1, 2, 3}. Now the sources involved in the second
collision randomly pick an integer from the set {0, 1, 2, 3} and wait that number of time slots before
trying again. Before they try to transmit, they listen to the channel and transmit only if the channel is
idle. This causes the source which picked the smallest integer in the contention window to succeed in
transmitting its frame.
So, Back-off algorithm defines a waiting time for the stations involved in collision, i.e. for how much
time the station should wait to re-transmit.
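A minimal sketch of the back-off computation; the 51.2 µs slot time is an assumption borrowed from classic Ethernet and is used only for illustration:

    import random

    # Binary exponential back-off: after the i-th collision a station picks
    # K from the contention window {0, 1, ..., 2^i - 1} and waits K slot times.
    def backoff_time(collision_count, slot_time):
        k = random.randint(0, 2**collision_count - 1)
        return k * slot_time                  # waiting time before the retry

    # Example: after the 3rd collision, K is drawn from {0, ..., 7}.
    print(backoff_time(3, slot_time=51.2e-6)) # seconds (51.2 us slot assumed)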
Example –
Case-1 :
Suppose two stations A and B start transmitting data (Packet 1) at the same time; then a collision occurs.
So, the collision number n for both their data (Packet 1) = 1. Now, both stations randomly pick an integer
from the set K, i.e. {0, 1}.
When both A and B choose K = 0
–> Waiting time for A = 0 * Tslot = 0
Waiting time for B = 0 * Tslot = 0
Therefore, both stations will transmit at the same time and hence collision occurs.
Case-2 :
Assume that A wins in Case 1 and transmits its data (Packet 1). Now, as soon as B transmits its packet
1, A transmits its packet 2. Hence, a collision occurs. Now the collision number n becomes 1 for packet 2
of A and 2 for packet 1 of B.
Advantage –
Collision probability decreases exponentially.
Disadvantages –
Capture effect: the station that wins once keeps on winning.
Works only for two stations or hosts.
Distributed Random Access Schemes/Contention Schemes: for Data
Services (ALOHA and Slotted ALOHA)
ALOHA: ALOHA is a system for coordinating and arbitrating access to a shared communication
channel. It was developed in the 1970s by Norman Abramson and his colleagues at the
University of Hawaii. The original system was used for ground-based radio broadcasting, but the system
has also been implemented in satellite communication systems.
A shared communication system like ALOHA requires a method of handling collisions that occur when
two or more systems attempt to transmit on the channel at the same time. In the ALOHA system, a node
transmits whenever data is available to send. If another node transmits at the same time, a collision
occurs, and the frames that were transmitted are lost. However, a node can listen to broadcasts on the
medium, even its own, and determine whether the frames were transmitted.
Aloha means "Hello". ALOHA is a multiple access protocol at the data link layer, and it specifies how multiple
terminals access the medium without interference or collision. In 1972, Roberts developed a protocol
that would increase the capacity of ALOHA twofold. The Slotted ALOHA protocol involves dividing the time
interval into discrete slots, where each slot interval corresponds to the time period of one frame. This
method requires synchronization between the sending nodes to prevent collisions.
Pure ALOHA
• In pure ALOHA, the stations transmit frames whenever they have data to send.
• When two or more stations transmit simultaneously, there is collision and the frames are destroyed.
• In pure ALOHA, whenever any station transmits a frame, it expects the acknowledgement from the
receiver.
• If acknowledgement is not received within specified time, the station assumes that the frame (or
acknowledgement) has been destroyed.
• If the frame is destroyed because of collision the station waits for a random amount of time and sends
it again. This waiting time must be random otherwise same frames will collide again and again.
• Therefore pure ALOHA dictates that when time-out period passes, each station must wait for a
random amount of time before resending its frame. This randomness will help avoid more collisions.
In the figure there are four stations that contend with one another for access to the shared channel. All
these stations are transmitting frames. Some of these frames collide because multiple frames
are in contention for the shared channel. Only two frames, frame 1.1 and frame 2.2, survive. All
other frames are destroyed.
Whenever two frames try to occupy the channel at the same time, there will be a collision and
both will be damaged. Even if the first bit of a new frame overlaps with just the last bit of a frame that is
almost finished, both frames will be totally destroyed and both will have to be retransmitted.
Slotted ALOHA
• Slotted ALOHA was invented to improve the efficiency of pure ALOHA as chances of collision in pure
ALOHA are very high.
• In slotted ALOHA, the time of the shared channel is divided into discrete intervals called slots.
• The stations can send a frame only at the beginning of the slot and only one frame is sent in each slot.
In slotted ALOHA, if any station is not able to place the frame onto the channel at the beginning of the
slot i.e. it misses the time slot then the station has to wait until the beginning of the next time slot.
• In slotted ALOHA, there is still a possibility of collision if two stations try to send at the beginning of the
same time slot as shown in fig.
• Slotted ALOHA still has an edge over pure ALOHA as chances of collision are reduced to one-half.
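For reference, the classical throughput results for the two schemes (standard formulas, quoted here for comparison rather than derived in these notes) can be evaluated numerically; S is the throughput and G the average number of transmission attempts per frame time:

    import math

    # Pure ALOHA:    S = G * e^(-2G), peak about 0.184 at G = 0.5
    # Slotted ALOHA: S = G * e^(-G),  peak about 0.368 at G = 1
    def pure_aloha(G):    return G * math.exp(-2 * G)
    def slotted_aloha(G): return G * math.exp(-G)

    print(f"pure ALOHA peak    : {pure_aloha(0.5):.3f}")
    print(f"slotted ALOHA peak : {slotted_aloha(1.0):.3f}")

The halved vulnerable period in slotted ALOHA is what doubles the achievable peak throughput.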
Explanation:
• A station which has a frame ready will send it.
• Then it waits for some time.
• If it receives the acknowledgement then the transmission is successful.
• Otherwise the station uses a backoff strategy, and sends the packet again.
• After many times if there is no acknowledgement then the station aborts the idea of transmission.
Carrier Sense Multiple Access (CSMA)
If any device wants to send data, then the device first senses or listens to the network
medium to check whether the shared network is free or not. If the channel is found idle,
then the device will transmit its data.
This sensing reduces the chance of collision in the network, but this method is not able to
eliminate collisions.
Carrier Sense Multiple Access (CSMA) is a protocol that senses or listens to the medium
before any transmission of data in the medium.
CSMA is used in Ethernet networks where two or more network devices are connected.
CSMA works on the principle of "Listen before Talking" or "Sense before Transmit".
When the device on the shared medium wants to transmit a data frame, then the device
first detects the channel to check the presence of any carrier signal from other connected
devices on the network.
In this situation, if the device senses any carrier signal on the shared medium, then this
means that there is another transmission on the channel. And the device will wait until the
channel becomes idle and the transmission that is in progress currently completes.
When the channel becomes idle the station starts its transmission. All other stations
connected in the network receive the transmission of the station.
In CSMA, the station senses or detects the channel before the transmission of data, so it
reduces the chance of collision in the transmission.
But there may be a situation where two stations detect the channel as idle at the same time
and both start data transmission simultaneously, so there is still a chance of
collision.
So CSMA reduces the chance of collision in data transmission, but it does not eliminate
collisions.
Example: In the network given below in the diagram if node 1 wants to transmit the data in the
network then, first of all, it will sense the network if data of any other device is available on the
network then it will not send the data. When node 1 finds the channel idle then it will transmit
data on the channel.
1-Persistent
This is considered the most straightforward and simplest method of CSMA. In this method, if the
station finds the medium idle, it immediately sends the data frame with probability 1.
If the station wants to transmit data, it first senses the medium.
If the medium is busy, the station waits until the channel becomes idle, and it
continuously senses the channel until the medium becomes idle.
If the station detects the channel as idle, it immediately sends the data frame
with probability 1, which is why this method is called 1-persistent.
Refer to the below image to show the flow diagram of the 1-persistent method of CSMA
In this method there is a high possibility of collision, because two or more stations may sense the
channel as idle at the same time and then transmit their frames simultaneously.
Refer to the below image to show the behavior of the 1-persistent method of CSMA
Non-Persistent
In this method of CSMA, if the station finds the channel busy, it will wait for a random amount of
time before sensing the channel again.
If the station wants to transmit the data then first of all it will sense the medium.
If the medium is idle then the station will immediately send the data.
Otherwise, if the medium is busy, the station waits for a random amount of time and then
senses the channel again.
In non-persistent CSMA there is less chance of collision in comparison to the 1-persistent method,
because the station does not continuously sense the channel but instead senses it again only after
waiting for a random amount of time.
Refer to the below image to show the flow diagram of the Non-persistent method of CSMA
So the random amount of time is unlikely to be the same for two stations that’s why this method
reduces the chance of collision.
Refer to the below image to show the behavior of the Non-persistent method of CSMA
P-Persistent
The p-persistent method of CSMA is used when the channel is divided into multiple time slots and the
duration of a time slot is greater than or equal to the maximum propagation time. This method is
designed as a combination of the advantages of 1-persistent and non-persistent CSMA. The p-persistent
method of CSMA reduces the chance of collision in the network and increases the
efficiency of the network. When any station wants to transmit data, it first senses the channel. If
the channel is busy, the station continuously senses the channel until the channel becomes idle. If
the channel is idle, the station does the following steps.
With probability p, the station sends its data frame.
With probability q = 1 - p, the station waits for the start of the next time slot and, after waiting,
senses the channel again.
If the channel is again idle, it repeats the two steps above. If the channel is busy, it assumes
that a collision has occurred in the network, and the station follows the back-off procedure.
Refer to the below image to show the flow diagram of the P-persistent method of CSMA
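A rough sketch of the p-persistent sending rule described above; channel_idle(), send_frame(), wait_one_slot() and backoff() are placeholders for the real medium and driver, and p is a design parameter of the protocol:

    import random

    def p_persistent_send(p, channel_idle, send_frame, wait_one_slot, backoff):
        while not channel_idle():            # keep sensing while the medium is busy
            pass
        while True:
            if random.random() < p:          # with probability p: transmit now
                send_frame()
                return
            wait_one_slot()                  # with probability q = 1 - p: wait a slot
            if not channel_idle():           # busy again => assume a collision
                backoff()                    #   and run the back-off procedure
                return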
Carrier Sense Multiple Access with Collision Avoidance
(CSMA/CA):
CSMA/CA protocol is used in wireless networks because they cannot detect the
collision so the only solution is collision avoidance.
1. Interframe Space (IFS)
• When the channel is sensed to be idle, it may be possible that some distant station has already
started transmitting and the signal of that distant station has not yet reached other stations.
• Therefore the purpose of the IFS time is to allow this transmitted signal to reach other stations.
• If after this IFS time the channel is still idle, the station can send, but it still needs to wait a time equal
to the contention time.
• The IFS variable can also be used to define the priority of a station or a frame.
2. Contention Window
• Contention window is an amount of time divided into slots.
• A station that is ready to send chooses a random number of slots as its wait time.
• The number of slots in the window changes according to the binary exponential back-off strategy. It
means that the window is set to one slot the first time and then doubles each time the station cannot
detect an idle channel after the IFS time.
• This is very similar to the p-persistent method except that a random outcome defines the number of
slots taken by the waiting station.
• In contention window the station needs to sense the channel after each time slot.
• If the station finds the channel busy, it does not restart the process. It just stops the timer & restarts it
when the channel is sensed as idle.
3. Acknowledgement
• Despite all the precautions, collisions may occur and destroy the data.
• The positive acknowledgment and the time-out timer can help guarantee that receiver has received
the frame.
CSMA/CA Procedure:
• The station that is ready to transmit senses the line by using one of the persistence strategies.
• As soon as it finds the line to be idle, the station waits for an IFG (interframe gap) amount of time.
• It then waits for some random (back-off) time and sends the frame.
• After sending the frame, it sets a timer and waits for the acknowledgement from the receiver.
• If the acknowledgement is received before expiry of the timer, then the transmission is successful.
• But if the transmitting station does not receive the expected acknowledgement before the timer
expires, then it increments the back-off parameter, waits for the back-off time, and re-senses the line.
A rough sketch of this procedure is given below.
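In this sketch every helper (channel_idle, wait, send_frame, wait_for_ack) is a placeholder, and the doubling contention window follows the binary exponential back-off described earlier; it is an illustration of the procedure, not a real driver:

    import random

    def csma_ca_send(channel_idle, wait, send_frame, wait_for_ack,
                     ifs, slot_time, max_attempts=8):
        backoff_stage = 0                            # binary exponential back-off
        while backoff_stage < max_attempts:
            while not channel_idle():                # persistence strategy: sense
                pass
            wait(ifs)                                # wait the interframe space
            k = random.randint(0, 2**backoff_stage - 1)
            wait(k * slot_time)                      # random slots of the contention window
            send_frame()
            if wait_for_ack():                       # ACK before timer expiry?
                return True                          # transmission successful
            backoff_stage += 1                       # no ACK: widen window, retry
        return False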
CSMA/CD (Carrier Sense Multiple Access/ Collision Detection)
CSMA/CD is a media-access control method widely used in Ethernet technology/LANs.
Consider a scenario where there are ‘n’ stations on a link and all are waiting to transfer data through
that channel. In this case all ‘n’ stations would want to access the link/channel to transfer their own
data. Problem arises when more than one station transmits the data at the moment. In this case, there
will be collisions in the data from different stations.
CSMA/CD is one such technique where different stations that follow this protocol agree on some terms
and collision detection measures for effective transmission. This protocol decides which station will
transmit when so that data reaches the destination without corruption.
The sender has to keep checking whether the transmission link/medium is idle. For this, it continuously
senses transmissions from other nodes. The sender may also send dummy data on the link; if it does not
receive any collision signal, this means the link is idle at the moment. If it senses that the carrier is free
and there are no collisions, it sends the data; otherwise it refrains from sending data.
The sender then transmits its data on the link. CSMA/CD does not use an acknowledgement system; it
checks for successful and unsuccessful transmissions through collision signals. During transmission, if a
collision signal is received by the node, transmission is stopped. The station then transmits a jam signal
onto the link and waits for a random time interval before it resends the frame. After this random time, it
again attempts to transfer the data and repeats the above process.
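A rough sketch of the CSMA/CD transmit loop described above; the helper functions are placeholders for the hardware interface, and the cap of 10 window doublings mirrors classic Ethernet practice (an assumption, not stated in the text):

    import random

    def csma_cd_send(frame, channel_idle, transmit, collision_detected,
                     send_jam, wait, slot_time, max_attempts=16):
        for attempt in range(1, max_attempts + 1):
            while not channel_idle():                 # carrier sense: wait for idle
                pass
            transmit(frame)
            if not collision_detected():              # no collision signal received
                return True                           # => transmission successful
            send_jam()                                # collision: jam the link, then
            k = random.randint(0, 2**min(attempt, 10) - 1)
            wait(k * slot_time)                       # binary exponential back-off
        return False                                  # too many attempts: give up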
Ethernet Frame Format
1. Preamble: It is seven bytes (56 bits) that provide bit synchronization. It consists of alternating 0s
and 1s. The purpose is to provide an alert and a timing pulse.
2. Start Frame Delimiter (SFD): It is a one-byte field with the unique pattern 10101011. It marks the
beginning of the frame.
3. Destination Address (DA): It is a six-byte field that contains the physical address of the packet's
destination.
4. Source Address (SA): It is also a six-byte field and contains the physical address of the source or the
last device to forward the packet (the most recent router to the receiver).
5. Length: This two-byte field specifies the length, i.e. the number of bytes, in the data field.
6. Data: It can be 46 to 1500 bytes, depending upon the type of frame and the length of
the information field.
7. Frame Check Sequence (FCS): This four-byte field contains a CRC for error detection.
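The field layout can be summarised with a small printing sketch; the sizes are taken directly from the list above:

    # Sketch of the Ethernet (IEEE 802.3) frame layout described above.
    ETHERNET_FIELDS = [
        ("Preamble",                7),         # alternating 1s and 0s for bit sync
        ("Start Frame Delimiter",   1),         # 10101011
        ("Destination Address",     6),
        ("Source Address",          6),
        ("Length",                  2),
        ("Data",                 "46-1500"),
        ("Frame Check Sequence",    4),         # CRC for error detection
    ]
    for name, size in ETHERNET_FIELDS:
        print(f"{name:<24} {size} byte(s)")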