UNIT-2
The Data Link Layer in the OSI model is responsible for node-to-node data
transfer, detecting and possibly correcting errors that may occur in the Physical
layer, and managing flow control. It ensures that the data transferred across a
physical link between two devices is reliable and error-free. When designing the
Data Link Layer, several issues arise, which need to be addressed for efficient
communication. These design issues include:
1. Framing
Problem: The physical layer delivers a raw stream of bits with no inherent structure, so the receiver cannot tell where one block of data ends and the next begins.
Solution: The Data Link Layer groups the bits into frames, using methods such as character count, flag bytes with byte stuffing, or flag bits with bit stuffing.
2. Error Control
Problem: Frames may be corrupted or lost because of noise and interference on the physical link.
Solution: Error-detection codes (parity, checksum, CRC) together with Automatic Repeat reQuest (ARQ) are used, where corrupted frames are retransmitted.
3. Flow Control
Problem: If a sender transmits data too quickly, the receiver's buffer may
overflow, leading to data loss.
Solution: Flow control protocols like Stop-and-Wait, Sliding Window, and
Acknowledgment (ACK) systems are employed. These allow the receiver
to manage the data rate by either halting the sender or using windowing
techniques to control the number of frames sent without receiving an
acknowledgment.
4. Link Management
The layer also handles establishing, maintaining, and releasing the logical connection between the two ends of the link.
These issues collectively ensure that data can be reliably and efficiently transferred
across a link, handling potential disruptions, errors, and differences in data rate
between devices.
FRAMING:
Framing is one of the key design issues in the Data Link Layer, which deals with
how to group or organize bits into manageable units called frames. A frame is the
data packet that is transmitted over the network, and it includes not only the raw
data but also information needed to manage the communication, such as addresses
and error-detection codes.
In a data communication system, the physical layer deals only with a stream of bits, without distinguishing where one piece of data starts or ends. Framing lets the receiver identify the boundaries of each unit of data so that addresses, payload, and error-detection codes can be interpreted correctly.
Methods of Framing
1. Character (Byte) Count: In this method, a field at the start of the frame specifies the number of bytes in the frame. The receiver reads this count and knows how many bytes to expect.
Advantages: Simple to implement.
Disadvantages: If the length field is corrupted, it can result in the loss of
synchronization and frame errors.
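As an illustration, here is a minimal Python sketch (not from the original notes) of count-based framing; the one-byte count field and the helper names frame_with_count and split_frames are assumptions made for this example:

def frame_with_count(payload: bytes) -> bytes:
    """Prefix the payload with a one-byte count covering the whole frame."""
    if len(payload) > 254:
        raise ValueError("payload too large for a one-byte count field")
    return bytes([len(payload) + 1]) + payload     # count includes the count byte itself

def split_frames(stream: bytes):
    """Receiver: repeatedly read the count, then that many bytes, from the stream."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]          # a corrupted count desynchronizes everything after it
        frames.append(stream[i + 1:i + count])
        i += count
    return frames

stream = frame_with_count(b"abc") + frame_with_count(b"de")
print(split_frames(stream))        # [b'abc', b'de']

Note how a single corrupted count byte would shift every later frame boundary, which is exactly the synchronization problem mentioned above.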
2. Flag Bytes with Byte Stuffing
Flag Bytes: The frame starts and ends with a special flag byte (for example,
the flag byte might be 01111110).
Byte Stuffing: If the flag byte appears in the data, a special escape character
is added before it (stuffing), so the receiver does not mistakenly interpret it
as a frame boundary.
3. Flag Bits with Bit Stuffing
Example Protocol: HDLC (High-level Data Link Control) uses this method.
1. HDLC frames start and end with a unique flag sequence (01111110).
2. Data is framed between these flags.
3. If the flag bit pattern appears in the actual data, bit stuffing is used to avoid confusion.
Error detection and correction are essential techniques in digital communications and data
storage to ensure data integrity. These methods identify and fix errors that can occur during
transmission or storage due to noise, interference, or other issues. Here’s an overview of both:
1. Error Detection
This involves identifying errors in data transmission. Several methods are used:
Parity Check: A parity bit is added to the data so that the total number of 1's is either even (even parity) or odd (odd parity). It is a simple method but detects only errors that flip an odd number of bits, so in practice it is used to catch single-bit errors (see the parity sketch after this list).
Checksum: A sum of the data's bits or bytes is calculated and sent along with the data.
The receiver re-calculates the sum and compares it to the received checksum. If they
differ, an error has occurred.
Cyclic Redundancy Check (CRC): A more sophisticated method that treats data as a
polynomial and divides it by a known generator polynomial. The remainder (CRC value)
is sent along with the data. The receiver performs the same division and checks for errors.
Hashing: Hash functions can be used to generate a fixed-length representation of data.
The receiver recalculates the hash to verify integrity.
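To make the parity check concrete, here is a minimal Python sketch (an illustration added to these notes, not part of any standard library API):

def parity_bit(bits, even=True):
    """Return the parity bit that makes the total number of 1s even (or odd)."""
    p = sum(bits) % 2                        # 1 if the count of 1s is currently odd
    return p if even else 1 - p

def check_parity(word, even=True):
    """True if the received word (data bits plus parity bit) has the expected parity."""
    ones = sum(word)
    return (ones % 2 == 0) if even else (ones % 2 == 1)

data = [1, 0, 1, 1, 0, 1, 0]
word = data + [parity_bit(data)]             # append the even-parity bit
print(check_parity(word))                    # True
word[2] ^= 1                                 # flip one bit in transit
print(check_parity(word))                    # False -> single-bit error detected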
2. Error Correction
Once an error is detected, error correction techniques try to recover the original data. Common
methods include:
Hamming Code: Adds redundant bits to the data to both detect and correct errors. It’s
capable of correcting single-bit errors and detecting two-bit errors. The position of the
error can be determined by checking the parity bits.
Reed-Solomon Code: A block-based error correction code used in many applications
like CDs, DVDs, and QR codes. It can correct multiple random symbol errors and is
widely used for its efficiency in correcting burst errors.
Forward Error Correction (FEC): A technique where redundant data (extra
information) is added so that the receiver can correct errors without needing
retransmission. FEC codes are widely used in communication systems like satellite links.
Convolutional Code: This is used for error correction in systems like deep space
communications. It encodes data by applying multiple convolutions, and decoding is
often done using the Viterbi algorithm.
Automatic Repeat Request (ARQ): Instead of correcting the error, the receiver requests
the sender to retransmit the erroneous packet.
Key Differences:
Detection vs. Correction: Detection only identifies that an error has occurred, while
correction identifies and fixes the error.
Overhead: Error correction typically requires more redundant data than error detection
alone.
These methods ensure that communication systems like the Internet, storage systems, and
wireless networks can function reliably despite potential data errors.
Checksum
Checksum error detection is a method used to identify errors in transmitted data.
The process involves dividing the data into equally sized segments and using
a 1’s complement to calculate the sum of these segments. The calculated sum is
then sent along with the data to the receiver. At the receiver’s end, the same
process is repeated and if all zeroes are obtained in the sum, it means that the data
is correct.
Checksum – Operation at Sender’s Side
Firstly, the data is divided into k segments each of m bits.
On the sender’s end, the segments are added using 1’s complement arithmetic
to get the sum. The sum is complemented to get the checksum.
The checksum segment is sent along with the data segments.
Checksum – Operation at Receiver’s Side
At the receiver’s end, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
If the result is zero, the received data is accepted; otherwise discarded.
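A minimal Python sketch of the procedure above, assuming 16-bit segments (the segment size is an assumption for the example):

def ones_complement_sum(segments, bits=16):
    """Add segments using 1's complement arithmetic (wrap carries around)."""
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        while total > mask:                      # fold any carry back into the sum
            total = (total & mask) + (total >> bits)
    return total

def make_checksum(segments, bits=16):
    """Sender: complement of the 1's-complement sum of the segments."""
    return (~ones_complement_sum(segments, bits)) & ((1 << bits) - 1)

def verify(segments, checksum, bits=16):
    """Receiver: segments plus checksum must sum to all 1s, so the complement is 0."""
    total = ones_complement_sum(segments + [checksum], bits)
    return (~total) & ((1 << bits) - 1) == 0

data = [0xABCD, 0x1234]                          # two 16-bit data segments
cks = make_checksum(data)
print(hex(cks), verify(data, cks))               # 0x41fe True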
Bit Stuffing and Byte Stuffing are two different techniques used in data
communication to handle control information embedded within the data stream,
especially in protocols that use flags to delimit data frames.
1. Bit Stuffing:
Purpose: Bit stuffing is used to ensure that the flag sequence (usually a series of
bits like 01111110 in HDLC) is not interpreted as part of the data.
How It Works:
In a data transmission protocol that uses flags (like 01111110), the receiver
identifies the start and end of the data frame by this bit pattern.
To ensure that this bit pattern does not accidentally appear in the data, a
process known as "bit stuffing" is used.
If five consecutive 1s appear in the data, a 0 is automatically inserted
(stuffed) after them. This prevents the receiver from mistakenly interpreting
the data as a flag.
Example:
The receiver will remove the extra stuffed 0s when the data is received, restoring
the original bit sequence.
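A minimal Python sketch of bit stuffing and unstuffing (an illustration, with bits represented as character strings for readability):

FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out.append("0")                  # stuffed bit
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, ones, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            i += 1                           # skip the stuffed 0 that follows
            ones = 0
        i += 1
    return "".join(out)

payload = "0110111111111100"
frame = FLAG + bit_stuff(payload) + FLAG     # flags delimit the frame
assert bit_unstuff(bit_stuff(payload)) == payload
print(bit_stuff(payload))                    # 011011111011111000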
Applications:
Bit stuffing is used in protocols like HDLC (High-Level Data Link Control)
and PPP (Point-to-Point Protocol).
2. Byte Stuffing:
Purpose: Byte stuffing is similar to bit stuffing but operates at the byte level. It is
used to distinguish between data and control information (such as frame
delimiters).
How It Works:
Byte stuffing is often used in protocols that use special characters or bytes to
indicate the start and end of a frame, like in PPP or SLIP.
If a special character (e.g., flag or escape character) appears in the data, an
escape character is inserted before it.
This allows the receiver to differentiate between actual data and control
sequences.
Example:
Suppose the flag byte is 0x7E, and the escape byte is 0x7D.
Original data: Hello 0x7E World
After byte stuffing: Hello 0x7D 0x5E World
Here, 0x7D 0x5E means that the original byte was 0x7E. The receiver can reverse
this when decoding.
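A minimal Python sketch of this PPP-style byte stuffing (an illustration; the XOR-with-0x20 escape rule matches the 0x7E to 0x7D 0x5E example above):

FLAG, ESC = 0x7E, 0x7D

def byte_stuff(data: bytes) -> bytes:
    """Escape any flag or escape byte inside the payload (ESC, then byte XOR 0x20)."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)
            out.append(b ^ 0x20)
        else:
            out.append(b)
    return bytes(out)

def byte_unstuff(data: bytes) -> bytes:
    out = bytearray()
    it = iter(data)
    for b in it:
        if b == ESC:
            out.append(next(it) ^ 0x20)      # restore the original byte
        else:
            out.append(b)
    return bytes(out)

raw = b"Hello \x7e World"
stuffed = byte_stuff(raw)                    # b'Hello \x7d\x5e World'
frame = bytes([FLAG]) + stuffed + bytes([FLAG])
assert byte_unstuff(stuffed) == raw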
Applications:
Byte stuffing is used in byte-oriented protocols such as PPP and SLIP.
Summary:
Bit Stuffing works at the bit level, inserting bits to avoid misinterpretation
of control sequences.
Byte Stuffing operates at the byte level, inserting escape sequences before
special control characters.
Both techniques are essential in ensuring that the data communication process
remains reliable, preventing the mix-up of control information with actual data.
CRC:
CRC (Cyclic Redundancy Check) treats the bits of a frame as the coefficients of a polynomial and divides them, using modulo-2 arithmetic, by an agreed generator polynomial. The remainder of this division is transmitted along with the data as the CRC value. The receiver performs the same division on what it receives; a non-zero remainder indicates that an error has occurred, and the data can be discarded or retransmitted.
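A minimal Python sketch of CRC generation and checking by modulo-2 division; the 4-bit generator 10011 (x^4 + x + 1) and the data word are a common textbook example chosen only for illustration:

def crc_remainder(data_bits: str, generator: str) -> str:
    """Append k zero bits, then divide mod 2 (XOR) by the generator; return the remainder."""
    k = len(generator) - 1
    dividend = list(data_bits + "0" * k)
    for i in range(len(data_bits)):
        if dividend[i] == "1":
            for j, g in enumerate(generator):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(g))
    return "".join(dividend[-k:])

def crc_check(received: str, generator: str) -> bool:
    """Receiver: the remainder of the whole codeword must be all zeros."""
    k = len(generator) - 1
    dividend = list(received)
    for i in range(len(received) - k):
        if dividend[i] == "1":
            for j, g in enumerate(generator):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(g))
    return set(dividend[-k:]) == {"0"}

data, gen = "1101011011", "10011"
rem = crc_remainder(data, gen)               # "1110"
codeword = data + rem
print(rem, crc_check(codeword, gen))         # 1110 True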
HAMMING CODES:
Hamming codes are a class of error-correcting codes that can detect and correct
single-bit errors. Developed by Richard Hamming in 1950, these codes are widely
used in digital communication systems, computer memory, and data transmission
systems where error correction is crucial.
Example: encoding the 4-bit data word 1011 with a (7,4) Hamming code. The bits are arranged as follows:
Bit Position 1 2 3 4 5 6 7
Type r1 r2 d1 r3 d2 d3 d4
Data - - 1 - 0 1 1
o Here, r1, r2, and r3 are the redundant bits, and d1, d2, d3, and d4 are
the data bits.
1. Step 1: Calculate Parity Bits: Each redundant bit (parity bit) covers several
data bits. The parity bits are calculated using even parity (i.e., the number of
1s in the covered bits should be even).
o r1 covers positions 1, 3, 5, and 7.
o r2 covers positions 2, 3, 6, and 7.
o r3 covers positions 4, 5, 6, and 7.
For example:
o r1 = parity(1, 3, 5, 7)
o r2 = parity(2, 3, 6, 7)
o r3 = parity(4, 5, 6, 7)
2. Step 2: Transmit the Data: Once the parity bits are calculated, they are
inserted into the appropriate positions, and the entire bit sequence is
transmitted.
3. Step 3: Detect and Correct Errors: At the receiver's end, the same parity
checks are performed:
o If no error is detected, the data is accepted as correct.
o If an error is detected, the position of the erroneous bit can be
identified by examining which parity checks fail. The position of the
erroneous bit is given by the binary number formed from the failed
parity check results. This allows the receiver to flip the erroneous bit
and correct the error.
1. Place the data bits in positions 3, 5, 6, and 7, and leave positions 1, 2, and 4
for the parity bits.
Position 1 2 3 4 5 6 7
Data r1 r2 1 r3 0 1 1
With even parity, r1 = 0, r2 = 1, and r3 = 0, so the transmitted codeword is 0110011.
Let's assume an error occurs during transmission and the receiver gets 1110011. The check for r1 (positions 1, 3, 5, 7) fails while the checks for r2 and r3 pass, so the syndrome is 001 (decimal 1): bit 1 is in error, and the receiver flips it to recover 0110011.
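The same worked example as a minimal Python sketch (an illustration; hamming74_encode and hamming74_decode are names made up for this sketch):

def hamming74_encode(d):                      # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    r1 = d1 ^ d2 ^ d4                         # covers positions 1, 3, 5, 7
    r2 = d1 ^ d3 ^ d4                         # covers positions 2, 3, 6, 7
    r3 = d2 ^ d3 ^ d4                         # covers positions 4, 5, 6, 7
    return [r1, r2, d1, r3, d2, d3, d4]

def hamming74_decode(code):
    c = [None] + code                         # index from 1 for readability
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]            # re-check r1
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]            # re-check r2
    s3 = c[4] ^ c[5] ^ c[6] ^ c[7]            # re-check r3
    error_pos = s1 + 2 * s2 + 4 * s3          # binary syndrome -> bit position
    if error_pos:
        c[error_pos] ^= 1                     # flip the erroneous bit
    return [c[3], c[5], c[6], c[7]], error_pos

codeword = hamming74_encode([1, 0, 1, 1])     # data 1011 -> [0, 1, 1, 0, 0, 1, 1]
received = codeword.copy()
received[0] ^= 1                              # corrupt bit 1 -> 1110011
data, pos = hamming74_decode(received)
print(codeword, data, pos)                    # [0, 1, 1, 0, 0, 1, 1] [1, 0, 1, 1] 1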
Limitations:
Limited error detection: Hamming codes can detect double-bit errors but
cannot correct them. They also cannot detect more than two errors.
Inefficiency for large data: As the number of bits increases, more
redundant bits are needed, making Hamming codes less efficient for very
large data blocks.
Applications:
Hamming codes are one of the simplest and most fundamental methods of error
correction, making them widely used in both small and large-scale systems.
Elementary Data Link Protocols are simple protocols used in the Data Link Layer
of the OSI model to manage data transmission between directly connected devices.
These protocols are responsible for error detection, framing, and flow control to
ensure reliable and organized data transfer. Below are some of the most basic or
"elementary" data link protocols:
5. Go-Back-N ARQ
In Go-Back-N ARQ, the sender may transmit several frames (up to a window of N) before receiving an acknowledgment. If a frame is lost or corrupted, the receiver discards it and all subsequent frames, and the sender goes back and retransmits from the damaged frame onward.
Example:
Used in systems where error rates are low, and retransmitting all frames
upon error is still feasible. For example, in older versions of TCP and
wireless communication systems.
Key functions provided by these protocols include the following (a Stop-and-Wait sketch appears after the list):
1. Flow Control: Ensures that the sender does not overwhelm the receiver with
too much data. Protocols like Stop-and-Wait ensure that each frame is
acknowledged before proceeding.
2. Error Detection and Correction:
o Error detection: Using techniques like checksums, CRC, or parity
bits to detect transmission errors.
o Error correction: ARQ (Automatic Repeat Request) techniques are
used to retransmit frames when errors are detected.
3. Acknowledgments: The receiver sends feedback (ACK for successful
receipt, NAK for errors) to the sender, prompting retransmission when
necessary.
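As a toy illustration of Stop-and-Wait ARQ with acknowledgments and retransmission, here is a minimal Python sketch; the random loss model and the function name are assumptions made for the example:

import random

def stop_and_wait_send(frames, loss_rate=0.3, max_tries=10):
    """Toy Stop-and-Wait ARQ: send one frame, wait for ACK, retransmit on timeout."""
    seq = 0                                            # alternating sequence number 0/1
    for payload in frames:
        for attempt in range(max_tries):
            delivered = random.random() > loss_rate    # did the frame survive the channel?
            ack_received = delivered and random.random() > loss_rate
            if ack_received:
                print(f"frame seq={seq} '{payload}' acknowledged "
                      f"after {attempt + 1} transmission(s)")
                break
            # Timeout: no ACK arrived, so the same frame is retransmitted.
        else:
            raise RuntimeError(f"gave up on frame seq={seq}")
        seq ^= 1                                       # flip the sequence number

stop_and_wait_send(["hello", "world"])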
RANDOM ACCESS:
Media Access Control (MAC) is a sublayer of the Data Link Layer in the OSI
model that controls how devices on a shared communication medium access and
transmit data. When multiple devices share the same transmission medium, a
method is needed to avoid collisions and ensure that the medium is used
efficiently. Random Access methods are a group of MAC techniques that allow
devices to transmit data whenever they are ready, with the risk of collisions, which
are resolved after they occur.
1. ALOHA
2. Carrier Sense Multiple Access (CSMA)
3. CSMA with Collision Detection (CSMA/CD)
4. CSMA with Collision Avoidance (CSMA/CA)
1. ALOHA Protocol
ALOHA is one of the simplest and earliest random access protocols, developed for
wireless communication systems like radio transmissions.
Types of ALOHA:
Pure ALOHA:
o Description: Devices transmit data whenever they have data to send.
If a collision occurs (two devices transmit at the same time), the
sender waits for a random amount of time (backoff) and retransmits
the data.
o Collision Handling: Since devices transmit without listening to the
channel, collisions can occur frequently. Each station must detect
whether its transmission was successful, usually via an
acknowledgment from the receiver.
o Efficiency: The maximum channel utilization is about 18.4% due to
frequent collisions.
Slotted ALOHA:
o Description: Time is divided into slots equal to the frame transmission time, and a device may begin transmitting only at the start of a slot. This reduces the chance of partial overlaps between transmissions.
o Collision Handling: Collisions can still occur when two devices choose the same slot; the colliding devices retransmit in a later slot after a random backoff.
o Efficiency: The maximum channel utilization improves to about 36.8%, roughly double that of Pure ALOHA.
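The quoted utilization figures follow from the standard throughput formulas S = G*e^(-2G) for Pure ALOHA and S = G*e^(-G) for Slotted ALOHA, where G is the offered load; a quick numerical check in Python:

import math

def pure_aloha(G):
    return G * math.exp(-2 * G)      # vulnerable period = 2 frame times

def slotted_aloha(G):
    return G * math.exp(-G)          # vulnerable period = 1 slot

print(max(pure_aloha(g / 100) for g in range(1, 500)))     # ~0.184 at G = 0.5
print(max(slotted_aloha(g / 100) for g in range(1, 500)))  # ~0.368 at G = 1.0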
2. Carrier Sense Multiple Access (CSMA)
In CSMA, devices first listen to the medium to check if it is free (carrier sense)
before attempting to transmit data. This helps reduce the probability of collisions
compared to ALOHA.
Types of CSMA:
1-persistent CSMA:
o Description: The device continuously listens to the channel. If the
channel is idle, it sends data immediately. If the channel is busy, it
waits until it becomes idle and then transmits.
o Problem: This increases the chances of collisions since multiple
devices may transmit immediately after the channel becomes free.
Non-persistent CSMA:
o Description: The device checks the channel, and if it is busy, it waits
for a random period before checking again. This reduces the chances
of collisions but increases the delay.
o Collision Handling: This type reduces the likelihood of collisions but
can increase idle times if devices wait too long before checking the
channel again.
p-persistent CSMA:
o Description: The device listens to the channel, and if it is free, it
transmits with probability p. If the channel is busy or the device does
not transmit, it waits for the next time slot and tries again with the
same probability.
o Collision Handling: Reduces collision risk while maintaining
efficient channel usage. The value of p is chosen to optimize
performance based on network conditions.
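A toy Python sketch of the p-persistent decision loop (an illustration; channel_idle is a hypothetical callback standing in for carrier sensing):

import random

def p_persistent_send(channel_idle, p=0.1, max_slots=1000):
    """Toy p-persistent CSMA: when the channel is idle, transmit with probability p."""
    for slot in range(max_slots):
        if channel_idle():
            if random.random() < p:
                return f"transmit in slot {slot}"
            # Otherwise defer to the next slot and try again with probability p.
        # If the channel is busy, keep sensing until it becomes idle.
    return "gave up"

# Example with a channel that is idle 70% of the time (purely illustrative).
print(p_persistent_send(lambda: random.random() < 0.7, p=0.3))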
Application:
CSMA is the basis of CSMA/CD (used in classic shared-medium Ethernet, where a device stops transmitting as soon as it detects a collision) and CSMA/CA (used in wireless LANs, where collisions are avoided rather than detected).
4. CSMA with Collision Avoidance (CSMA/CA)
Process:
Carrier Sensing: The device listens to the channel. If it is idle, the device
waits for a random backoff period to avoid collisions with other devices that
may have been waiting for the channel.
Collision Avoidance: The device may send a short control message (e.g.,
Request to Send or RTS) before transmitting the actual data to alert other
devices to hold off their transmissions.
Acknowledgments: After successfully receiving the data, the receiver sends
an acknowledgment (ACK) to confirm the successful reception.
Application:
Used in Wi-Fi (IEEE 802.11) and other wireless networks where collisions
are harder to detect due to signal interference and attenuation over long
distances.
Comparison of ALOHA variants:

Protocol       Time       Collision Handling                           Max Efficiency  Example Application
Pure ALOHA     Unslotted  Transmit whenever ready; collisions happen   ~18.4%          Early wireless/radio networks
Slotted ALOHA  Slotted    Reduced collision rate; retransmit in slots  ~36.8%          Satellite communication systems
CONTROLLED ACCESS:
In controlled access methods, devices coordinate with one another, or with a central controller, to decide which device may transmit on the shared medium. Key characteristics:
1. No Collisions: Since only one device can access the medium at a time,
collisions are avoided entirely.
2. Coordination: Devices use a coordination mechanism to determine which
one has the right to transmit.
3. Suitable for High Traffic: These protocols are more efficient when the
network has high traffic or when the shared medium is in constant use.
1. Reservation
2. Polling
3. Token Passing
1. Reservation Protocol
In the Reservation protocol, devices reserve the right to use the medium before
transmitting. This ensures that collisions are avoided by scheduling transmissions
ahead of time.
Process:
1. Each device sends a small control message during the reservation period,
indicating it wants to transmit.
2. If the reservation is successful, the device gets a slot to transmit its data in
the transmission period.
3. If multiple devices request the same slot, collisions may occur in the
reservation phase, but not in the data transmission phase.
4. Unsuccessful devices can try to reserve a slot in the next reservation period.
Example:
Reservation-based access is used in satellite communication systems, where the long propagation delay makes recovering from collisions costly.
2. Polling
In Polling, a central controller (primary station) asks each device (secondary station) in turn whether it has data to send; only the polled device is allowed to transmit.
Process:
1. The controller polls the first device; if it has data, that device transmits and the controller moves on.
2. If the polled device has nothing to send, it declines (or stays silent) and the controller polls the next device.
Advantages:
Fair access for all devices and no collisions, since transmissions are scheduled by the controller.
Disadvantages:
Polling introduces delay, and the central controller is a single point of failure.
Example:
Mainframe terminal systems and Bluetooth (where the master polls its slaves) use polling.
3. Token Passing
In Token Passing, a special control frame called a token circulates among the devices. A device that holds the token can transmit data; if it has no data to send, it simply passes the token to the next device. The token continues circulating, ensuring that every device gets an opportunity to transmit.
Process:
1. The token travels around the logical ring (or ordered list) of devices.
2. A device that wants to transmit captures the token, sends its frame(s), and then releases the token to the next device.
Advantages:
No Collisions: Since only the device with the token can transmit, collisions
are avoided.
Efficient for Heavy Traffic: Token passing can handle high traffic loads
effectively.
Disadvantages:
Token Loss: If the token is lost due to an error, the entire network can halt
until the token is recovered or regenerated.
Delay: Devices must wait their turn to receive the token, which can
introduce delay, especially in large networks.
Example:
Token Ring Networks (IEEE 802.5) were based on this principle. A logical
ring is formed, and a token circulates around the network.
FDDI (Fiber Distributed Data Interface) also uses token passing in high-
speed fiber optic networks.
Comparison of controlled access methods:

Protocol       Coordination Method       Collision Avoidance  Centralized/Distributed  Advantages                                Disadvantages                        Example Application
Reservation    Reservation period        Yes                  Distributed              Efficient use of the medium               Collisions can occur in reservation  Satellite communication
Polling        Central controller polls  Yes                  Centralized              Fair access, no collisions                Delay, single point of failure       Mainframe systems, Bluetooth
Token Passing  Token circulates          Yes                  Distributed              No collisions, efficient for heavy loads  Token loss can disrupt the network   Token Ring (IEEE 802.5), FDDI
CHANNELIZATION:
Channelization is a multiple-access method in which the available bandwidth of a link is shared among devices by frequency, by time, or by code. The three main channelization techniques are:
1. FDMA (Frequency Division Multiple Access)
2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)
Channelization techniques are widely used in both wired and wireless networks, including cellular networks, satellite communications, and cable systems. Because each user is pre-allocated a frequency band, time slot, or code, users do not interfere with each other's transmissions.
1. FDMA (Frequency Division Multiple Access)
In FDMA, the available bandwidth is divided into separate frequency bands, and each user is assigned a dedicated band for transmission.
Key Features:
No Collisions: Since each user has a dedicated frequency band, there are no
collisions between users.
Guard Bands: To prevent interference between adjacent frequency bands,
guard bands (unused frequencies) are inserted between channels.
Advantages:
Simple to implement.
Suitable for analog communication systems.
No need for complex synchronization between users.
Disadvantages:
Example:
2. TDMA (Time Division Multiple Access)
TDMA divides the communication medium into time slots, and each user is
allocated a specific time slot during which they can transmit data. Multiple users
share the same frequency band but take turns transmitting in their assigned time
slots.
The entire available bandwidth is used by all users, but each user is assigned
a specific time slot.
Users take turns transmitting their data in a synchronized manner.
Between time slots, the medium remains idle for each user until their next
time slot.
Key Features:
Time-Slot Allocation: Each user is assigned a time slot, and only one user
can transmit during a specific slot.
Synchronization: The system requires tight synchronization to ensure that
users transmit only in their assigned time slots.
Time Guard Intervals: To avoid overlap between time slots, guard
intervals may be used to separate them.
Advantages:
Disadvantages:
Example:
3. CDMA (Code Division Multiple Access)
CDMA is a more complex channelization technique where all users share the same
frequency band and time period, but each user is assigned a unique code. These
codes are used to encode and decode data, allowing multiple users to transmit
simultaneously without interfering with each other.
Each user's data is spread across the entire frequency band using a unique
spreading code.
The receiver uses the same spreading code to decode the signal and retrieve
the original data.
Since each user's data is encoded differently, the signals can overlap, but
they remain distinguishable by their codes.
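A minimal Python sketch of the idea, using 2-chip orthogonal (Walsh) codes for two users; the codes and data are chosen only for illustration:

CODE_A = [1, 1]
CODE_B = [1, -1]

def spread(bits, code):
    """Map each data bit to +1/-1 and multiply it by the user's chip sequence."""
    return [(1 if b else -1) * c for b in bits for c in code]

def despread(signal, code):
    """Correlate the combined signal with a code to recover that user's bits."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits

a_bits, b_bits = [1, 0, 1], [0, 0, 1]
# Both users transmit at the same time; the channel simply adds their signals.
channel = [x + y for x, y in zip(spread(a_bits, CODE_A), spread(b_bits, CODE_B))]
print(despread(channel, CODE_A))   # [1, 0, 1]
print(despread(channel, CODE_B))   # [0, 0, 1]

Because the codes are orthogonal, correlating the combined signal with one user's code cancels the other user's contribution, which is what lets the receiver separate the overlapping transmissions.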
Key Features:
Advantages:
Disadvantages:
Example:
Comparison of channelization techniques:

Feature              FDMA                                   TDMA                                  CDMA
Collision Avoidance  No collisions (dedicated frequencies)  No collisions (dedicated time slots)  No collisions (distinct codes)
ETHERNET STANDARD:
Ethernet is a family of networking technologies and standards used for local area
networks (LANs) and metropolitan area networks (MANs). Developed in the
1970s by Xerox and later standardized by the IEEE 802.3 working group, Ethernet
has evolved over the years, with multiple standards addressing different speeds,
media types, and topologies. Ethernet defines both physical layer and data link layer (MAC) specifications and uses copper (twisted-pair) cabling as well as optical fiber as transmission media.
Common Ethernet standards:

Standard       Speed     Medium               Maximum Distance  Notes
10BASE-T       10 Mbps   Twisted-pair copper  100 meters        Early Ethernet standard, now largely obsolete.
100BASE-TX     100 Mbps  Twisted-pair copper  100 meters        Also known as Fast Ethernet.
1000BASE-T     1 Gbps    Twisted-pair copper  100 meters        Commonly known as Gigabit Ethernet.
10GBASE-T      10 Gbps   Twisted-pair copper  55 to 100 meters  10 Gigabit Ethernet over twisted-pair copper.
40GBASE-T      40 Gbps   Twisted-pair copper  30 meters         High-speed Ethernet for data centers.
100GBASE-T     100 Gbps  Twisted-pair copper  30 meters         High-speed Ethernet used primarily in enterprise systems.
1000BASE-SX    1 Gbps    Multimode fiber      550 meters        Gigabit Ethernet over fiber.
1000BASE-LX    1 Gbps    Single-mode fiber    5-10 kilometers   Long-distance Ethernet over fiber.
10GBASE-SR     10 Gbps   Multimode fiber      400 meters        10 Gigabit Ethernet over fiber for shorter distances.
10GBASE-LR     10 Gbps   Single-mode fiber    10 kilometers     10 Gigabit Ethernet for long-distance fiber networks.
40GBASE-SR4    40 Gbps   Multimode fiber      100-150 meters    40 Gigabit Ethernet over fiber, often in data centers.
100GBASE-SR10  100 Gbps  Multimode fiber      100 meters        100 Gigabit Ethernet for high-speed data center networks.
1. Preamble: A 7-byte field that synchronizes the receiver's clock with the
incoming data.
2. Start Frame Delimiter (SFD): A 1-byte field indicating the start of the
frame.
3. Destination MAC Address: A 6-byte field containing the MAC address of
the destination device.
4. Source MAC Address: A 6-byte field containing the MAC address of the
source device.
5. EtherType/Length: A 2-byte field indicating the type of the payload (e.g.,
IPv4, ARP) or the length of the payload.
6. Payload: The actual data being transmitted (46–1500 bytes).
7. Frame Check Sequence (FCS): A 4-byte field used for error detection
(CRC).
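To make the field layout concrete, here is a minimal Python sketch that assembles an Ethernet II frame (an illustration: build_ethernet_frame is a made-up helper, the preamble and SFD are omitted because the hardware adds them, and the FCS byte ordering is simplified):

import struct, zlib

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble destination, source, EtherType, padded payload, and a CRC-32 trailer."""
    if len(payload) < 46:
        payload = payload + b"\x00" * (46 - len(payload))   # pad to the 46-byte minimum
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload))   # CRC-32 over header + payload
    return header + payload + fcs

frame = build_ethernet_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),   # broadcast destination
    src_mac=bytes.fromhex("02004c4f4f50"),   # hypothetical locally administered address
    ethertype=0x0800,                        # IPv4 payload
    payload=b"hello, ethernet",
)
print(len(frame), frame.hex())               # 64 bytes: 14 header + 46 payload + 4 FCS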
Full Duplex: Modern Ethernet supports full duplex, meaning devices can
send and receive data simultaneously, eliminating collisions and increasing
throughput.
Switched Ethernet: In modern networks, Ethernet switches manage data
transmission between devices. Each device is connected to a switch port, and
the switch manages packet forwarding, eliminating the need for CSMA/CD.