CN Module-2 Notes.
MODULE-2
INTRODUCTION
Types of Errors
• When bits flow from one point to another, they are subject to unpredictable changes
because of interference.
• The interference can change the shape of the signal.
• Two types of errors: 1) Single-bit error 2) Burst-error.
1) Single-Bit Error
Only 1 bit of a given data unit is changed
→ from 1 to 0 or
→ from 0 to 1 (Figure 10.1a).
2) Burst Error
Two or more bits in the data have changed
→ from 1 to 0 or
→ from 0 to 1 (Figure 10.1b).
A burst error is more likely to occur than a single-bit error.
This is because the duration of noise is normally longer than the duration of 1 bit.
When noise affects data, it affects a set of bits.
The no. of corrupted-bits depends on
→ data-rate and
→ duration of noise.
Redundancy
• The central concept in detecting/correcting errors is redundancy.
• Some extra-bits along with the data have to be sent to detect/correct errors.
These extra bits are called redundant-bits.
• The redundant-bits are
→ added by the sender and
→ removed by the receiver.
• The presence of redundant-bits allows the receiver to detect/correct errors.
Coding
• Redundancy is achieved through various coding-schemes.
1) Sender adds redundant-bits to the data-bits. This process creates a
relationship between
→ redundant-bits and
→ data-bits.
2) Receiver checks the relationship between redundant-bits & data-bits to
detect/correct errors.
• Two important factors to be considered:
1) Ratio of redundant-bits to the data-bits and
2) Robustness of the process.
• Two broad categories of coding schemes: 1) Block-coding and 2) Convolution
coding.
Block Coding
• The message is divided into k-bit blocks. These blocks are called data-words.
• Here, r-redundant-bits are added to each block to make the length n=k+r.
• The resulting n-bit blocks are called code-words.
• Since n>k, the number of possible code-words is larger than the number of
possible data-words.
• Block-coding process is 1-to-1; the same data-word is always encoded as the same
code-word.
• Thus, we have 2^n - 2^k code-words that are not used. These code-words are invalid or
illegal.
Error Detection
• If the following 2 conditions are met, the receiver can detect a change in the
original code-word:
1) The receiver has a list of valid code-words.
2) The original code-word has changed to an invalid code-word.
Example 3.1
Hamming Distance
• The main concept for error-control: Hamming distance.
• The Hamming distance b/w 2 words is the number of differences between the
corresponding bits.
• Let d(x,y) = Hamming distance b/w 2 words x and y.
• Hamming distance can be found by
→ applying the XOR operation on the 2 words and
→ counting the number of 1s in the result.
• For example:
1) The Hamming distance d(000, 011) is 2 because 000⊕011= 011 (two 1s).
2) The Hamming distance d(10101, 11110) is 3 because 10101⊕11110 =
01011 (three 1s).
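• As a quick illustration of the XOR-and-count method above, here is a minimal Python sketch (the function name is only illustrative):

    def hamming_distance(x: str, y: str) -> int:
        """Hamming distance between two equal-length bit strings."""
        assert len(x) == len(y)
        # XOR corresponding bits and count the 1s in the result.
        return sum(bx != by for bx, by in zip(x, y))

    print(hamming_distance("000", "011"))      # 2
    print(hamming_distance("10101", "11110"))  # 3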
Hamming Distance and Error
Hamming distance between the received word and the sent code-word
is the number of bits that are corrupted during transmission.
For example: Let Sent code-word = 00000
Received word = 01101
Hamming distance = d(00000, 01101) =3. Thus, 3 bits are in error.
• The code in Table 10.1 is a linear block code because the result of XORing any
code-word with any other code-word is a valid code-word.
For example, the XORing of the 2nd and 3rd code-words creates the 4th one.
Example 3.2
Cyclic Codes
• Cyclic codes are special linear block codes with one extra property:
If a code-word is cyclically shifted (rotated), the result is another code-word.
For ex: if code-word = 1011000 and we cyclically left-shift, then another code-word =
0110001.
• Let First-word = a0 to a6 and Second-word = b0 to b6. For a one-bit cyclic left shift, the
bits are related as follows:
b1 = a0, b2 = a1, b3 = a2, b4 = a3, b5 = a4, b6 = a5 and b0 = a6.
• A widely used subset of cyclic codes is the cyclic redundancy check (CRC). Here is how it works:
1) At Sender
n-k 0s are appended to the data-word to create the augmented data-word (here
n-k = 3).
The augmented data-word is fed into the generator (Figure 10.6).
The generator divides the augmented data-word by the divisor.
The remainder is called check-bits (r2r1r0).
The check-bits (r2r1r0) are appended to the data-word to create the code-
word.
2) At Receiver
The possibly corrupted code-word is fed into the checker.
The checker is a replica of the generator.
The checker divides the code-word by the divisor.
The remainder is called the syndrome bits (s2s1s0).
The syndrome bits are fed to the decision-logic-analyzer.
The decision-logic-analyzer performs following functions:
i) For No Error
¤ If all syndrome-bits are 0s, the received code-word is accepted.
¤ Data-word is extracted from received code-word (Figure 10.7a).
ii) For Error
¤ If all syndrome-bits are not 0s, the received code-word is discarded
(Figure 10.7b).
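• A minimal Python sketch of the generator/checker procedure above, using bit strings and modulo-2 (XOR) division; the data-word and divisor are example values, not fixed by these notes:

    def crc_remainder(bits: str, divisor: str) -> str:
        """Modulo-2 (XOR) division; returns the remainder as a bit string."""
        n = len(divisor)
        work = list(bits)
        for i in range(len(bits) - n + 1):
            if work[i] == '1':                      # divide only when the leading bit is 1
                for j in range(n):
                    work[i + j] = str(int(work[i + j]) ^ int(divisor[j]))
        return ''.join(work[-(n - 1):])             # the last n-1 bits are the remainder

    # Sender: append n-k = 3 zeros, divide, append the check bits to the data-word.
    dataword, divisor = "1001", "1011"              # example values (assumed)
    check = crc_remainder(dataword + "000", divisor)
    codeword = dataword + check                     # "1001110"

    # Receiver: divide the received code-word; an all-zero syndrome means "accept".
    syndrome = crc_remainder(codeword, divisor)
    print(codeword, syndrome)                       # syndrome is "000" if no error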
Polynomials
• A pattern of 0s and 1s can be represented as a polynomial with coefficients of 0
and 1 (Figure 10.8).
• The power of each term shows the position of the bit; the coefficient shows the
value of the bit.
• For example, the pattern 1000011 corresponds to the polynomial x^6 + x + 1.
Checksum
• Checksum is an error-detecting technique.
• In the Internet,
→ The checksum is mostly used at the network and transport layer.
→ The checksum is not used in the data link layer.
• Like linear and cyclic codes, the checksum is based on the concept of redundancy.
• Here is how it works (Figure 10.15):
1) At Source
Firstly the message is divided into m-bit units.
Then, the generator creates an extra m-bit unit called the checksum.
The checksum is sent with the message.
2) At Destination
The checker creates a new checksum from the combination of the message and the
sent checksum.
i) If the new checksum is all 0s, the message is accepted.
ii) If the new checksum is not all 0s, the message is discarded.
Concept of Checksum
Consider the following example:
Example 3.4
• Our data is a list of five 4-bit numbers that we want to send to a destination.
• In addition to sending these numbers, we send the sum of the numbers.
• For example:
Let set of numbers = (7, 11, 12, 0, 6).
We send (7, 11, 12, 0, 6, 36), where 36 is the sum of the original numbers.
• The receiver adds the five numbers and compares the result with the sum.
• If the result & the sum are the same,
The receiver assumes no error, accepts the five numbers, and
discards the sum. Otherwise, there is an error somewhere and the
data are not accepted.
Example 3.5
• To make the job of the receiver easier, we send the negative (complement) of
the sum, called the checksum.
• In this case, we send (7, 11, 12, 0, 6, -36).
• The receiver can add all the numbers received (including the checksum).
• If the result is 0, it assumes no error; otherwise, there is an error.
One's Complement
• The previous example has one major drawback.
All of our data can be written as a 4-bit word (they are less than 15) except
for the checksum.
• Solution: Use one's complement arithmetic.
We can represent unsigned numbers between 0 and 2^n - 1 using only n bits.
If the number has more than n bits, the extra leftmost bits need to
be added to the n rightmost bits (wrapping).
A negative number can be represented by inverting all bits (changing 0 to 1
and 1 to 0).
This is the same as subtracting the number from 2^n - 1.
Algorithm
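• A minimal Python sketch of the one's-complement checksum idea described above, assuming 4-bit words as in Example 3.4 (the function name is illustrative, not the textbook's algorithm):

    def ones_complement_checksum(words, bits=4):
        """One's-complement sum of the words, then the complement of the result."""
        mask = (1 << bits) - 1                  # 0xF for 4-bit words
        total = 0
        for w in words:
            total += w
            # Wrapping: add any carry beyond `bits` back into the rightmost bits.
            while total > mask:
                total = (total & mask) + (total >> bits)
        return (~total) & mask                  # complement = subtract from 2^bits - 1

    data = [7, 11, 12, 0, 6]
    checksum = ones_complement_checksum(data)            # 9 for this data set
    # At the receiver, the checksum over data + checksum is 0 when there is no error.
    print(checksum, ones_complement_checksum(data + [checksum]))   # 9 0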
Other Approaches to the Checksum
• If two 16-bit items are transposed in transmission, the checksum cannot catch this
error.
• The reason is that the traditional checksum is not weighted: it treats each data
item equally.
• In other words, the order of data items is immaterial to the calculation.
• Two approaches have been used to prevent this problem: 1) Fletcher and 2) Adler.
Fletcher Checksum
• The Fletcher checksum was devised to weight each data item according to its
position.
• Fletcher has proposed two algorithms: 8-bit and 16-bit (Figure 10.18).
• The first, 8-bit Fletcher, calculates on 8-bit data items and creates a 16-bit
checksum.
• The second, 16-bit Fletcher, calculates on 16-bit data items and creates a 32-bit
checksum.
• The 8-bit Fletcher is calculated over data octets (bytes) and creates a 16-bit
checksum.
• The calculation is done modulo 256 (2^8), which means the intermediate
results are divided by 256 and the remainder is kept.
• The algorithm uses two accumulators, L and R.
• The first simply adds the data items together; the second adds the running sum after
each data item, so each item is effectively weighted by its position.
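• A small Python sketch of the 8-bit Fletcher idea as described above (modulo-256 version; note that the widely deployed Fletcher-16 takes the sums modulo 255 instead):

    def fletcher8(data: bytes) -> int:
        """Two accumulators: a running sum of the bytes and a running sum of that sum."""
        r = 0   # simple sum of the data items
        l = 0   # sum of the running sums -> weights each item by its position
        for byte in data:
            r = (r + byte) % 256
            l = (l + r) % 256
        return (l << 8) | r     # 16-bit checksum built from the two accumulators

    print(hex(fletcher8(b"abcde")))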
Adler Checksum
• The Adler checksum is a 32-bit checksum.
• It is similar to the 16-bit Fletcher with three differences (Figure 10.19).
1) Calculation is done on single bytes instead of 2 bytes at a time.
2) The modulus is a prime number (65,521) instead of 65,536.
3) L is initialized to 1 instead of 0.
• A prime modulo has a better detecting capability in some combinations of data.
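• The Adler-32 computation itself is compact; here is a Python sketch, cross-checked against the standard library:

    import zlib

    def adler32(data: bytes) -> int:
        """Adler-32: prime modulus 65521, first accumulator initialized to 1."""
        MOD = 65521
        a, b = 1, 0             # 'a' plays the role of the accumulator initialized to 1
        for byte in data:
            a = (a + byte) % MOD
            b = (b + a) % MOD
        return (b << 16) | a

    data = b"hello world"
    assert adler32(data) == zlib.adler32(data)   # matches the library implementation
    print(hex(adler32(data)))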
3.2.1 Framing
• A frame is a group of bits.
• Framing means organizing the bits that are carried by the physical layer into frames.
• The data-link-layer needs to form frames, so that each frame is distinguishable
from another.
• Framing separates a message from other messages by adding sender-address &
destination-address.
• The destination-address defines where the packet is to go.
The sender-address helps the recipient acknowledge the receipt.
• Q: Why is the whole message not packed into one frame?
Ans: A large frame makes flow and error control very inefficient.
Even a single-bit error requires the re-transmission of the whole message.
When a message is divided into smaller frames, a single-bit error affects only that
small frame.
(Our postal system practices a type of framing. The simple act of inserting a letter
into an envelope separates one piece of information from another; the envelope
serves as the delimiter. In addition, each envelope defines the sender and
receiver addresses since the postal system is a many-to-many carrier facility).
• In byte-oriented (character-oriented) framing, a special flag byte marks the beginning and
the end of a frame, and an escape character (ESC) is stuffed before any flag byte that
appears in the data.
• Problem:
What happens if the text contains one or more escape characters followed
by a flag?
The receiver removes the escape character, but keeps the flag,
which is incorrectly interpreted as the end of the frame.
Solution:
An escape character that is part of the text must also be marked by another escape
character (Figure 11.2).
• In short, byte stuffing is the process of adding one extra byte whenever
there is a flag or escape character in the text.
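• A minimal Python sketch of byte stuffing and unstuffing; the flag and escape values are only example choices:

    FLAG, ESC = b"\x7e", b"\x7d"    # example byte values (assumed)

    def byte_stuff(payload: bytes) -> bytes:
        """Insert an ESC before every FLAG or ESC byte that appears in the payload."""
        out = bytearray()
        for b in payload:
            if bytes([b]) in (FLAG, ESC):
                out += ESC          # the stuffed extra byte
            out.append(b)
        return bytes(out)

    def byte_unstuff(stuffed: bytes) -> bytes:
        """Remove the stuffed ESC bytes; the byte after an ESC is taken as plain data."""
        out, escaped = bytearray(), False
        for b in stuffed:
            if not escaped and bytes([b]) == ESC:
                escaped = True      # the next byte is data, not a delimiter
                continue
            out.append(b)
            escaped = False
        return bytes(out)

    data = b"ab" + FLAG + b"cd" + ESC + b"ef"
    assert byte_unstuff(byte_stuff(data)) == data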
• Problem:
If the flag-pattern appears in the data-section, the receiver might think that it has
reached the end of the frame.
Solution: A bit-stuffing is used.
• In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0
is added. This extra stuffed bit is eventually removed from the data by the
receiver. (Figure 11.4).
• This guarantees that the flag field sequence does not inadvertently
appear in the frame.
• In short, bit stuffing is the process of adding one extra 0 whenever five
consecutive 1s follow a 0 in the data, so that the receiver does not mistake the
pattern 0111110 for a flag.
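• A minimal Python sketch of bit stuffing and unstuffing over a string of bits:

    def bit_stuff(bits: str) -> str:
        """Insert a '0' after every run of five consecutive '1's."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == '1' else 0
            if run == 5:
                out.append('0')     # the stuffed bit
                run = 0
        return ''.join(out)

    def bit_unstuff(bits: str) -> str:
        """Drop the '0' that follows every run of five consecutive '1's."""
        out, run, drop_next = [], 0, False
        for b in bits:
            if drop_next:           # this is the stuffed 0; remove it
                drop_next = False
                run = 0
                continue
            out.append(b)
            run = run + 1 if b == '1' else 0
            if run == 5:
                drop_next = True
                run = 0
        return ''.join(out)

    frame_body = "0001111110101111101"
    assert bit_unstuff(bit_stuff(frame_body)) == frame_body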
3.2.2 Flow Control and Error Control
• One of the responsibilities of the DLC sublayer is flow and error control at the data-
link layer.
3.2.2.1 Flow Control
• Whenever an entity produces items and another entity consumes them, there
should be a balance between production and consumption rates.
• If the items are produced faster than they can be consumed, the consumer can
be overwhelmed and may need to discard some items.
• We need to prevent losing the data items at the consumer site.
• At the sending node, the data-link layer tries to push frames toward the
data-link layer at the receiving node (Figure 11.5).
• If the receiving node cannot process and deliver the packet to its network at the
same rate that the frames arrive, it becomes overwhelmed with frames.
• Here, flow control can be feedback from the receiving node to the sending node
to stop or slow down pushing frames.
3.2.2.1.1 Buffers
• Flow control can be implemented by using buffers.
• A buffer is a set of memory locations that can hold packets at the sender and
receiver.
• Normally, two buffers can be used.
1) First buffer at the sender.
2) Second buffer at the receiver.
• The flow control communication can occur by sending signals from the consumer
to the producer.
• When the buffer of the receiver is full, it informs the sender to stop pushing
frames.
Example 3.6
3.3.2 Stop & Wait Protocol
• This uses both flow and error control.
• Normally, the receiver has limited storage-space.
• If the receiver is receiving data from many sources, the receiver may
→ be overloaded with frames &
→ discard the frames.
• To prevent the receiver from being overloaded with frames, we need to tell the sender to slow
down.
3.3.2.1 Design
1) At Sender
The sender
→ sends one frame & starts a timer
→ keeps a copy of the sent-frame and
→ waits for ACK-frame from the receiver (okay to go ahead).
Then,
1) If an ACK-frame arrives before the timer expires, the timer is stopped and the
sender sends the next frame.
Also, the sender discards the copy of the previous frame.
2) If the timer expires before ACK-frame arrives, the sender resends the previous
frame and restarts the timer
2) At Receiver
To detect corrupted frames, a CRC is added to each data frame.
When a frame arrives at the receiver-site, the frame is checked.
If frame’s CRC is incorrect, the frame is corrupted and discarded.
The silence of the receiver is a signal for the sender that a frame was either corrupted or
lost.
3.3.2.2 FSMs
• Here is how it works (Figure 11.11):
1) Sender States
• Sender is initially in the ready state, but it can move between the ready and blocking state.
i) Ready State: When the sender is in this state, it is only waiting for a packet from
the network layer.
If a packet comes from the network layer, the sender creates a frame, saves a
copy of the frame, starts the only timer and sends the frame. The sender then
moves to the blocking state.
ii) Blocking State: When the sender is in this state, three events can occur:
a) If a time-out occurs, the sender resends the saved copy of the frame
and restarts the timer.
b) If a corrupted ACK arrives, it is discarded.
c) If an error-free ACK arrives, the sender stops the timer and discards the
saved copy of the frame. It then moves to the ready state.
2) Receiver
• The receiver is always in the ready state. Two events may occur:
a) If an error-free frame arrives, the message in the frame is delivered to the
network layer and an ACK is sent.
b) If a corrupted frame arrives, the frame is discarded.
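• A simplified Python sketch of the sender FSM described above; the callback names and event handling are assumptions, not the textbook's code:

    class StopAndWaitSender:
        """Sender FSM that moves between the READY and BLOCKING states."""

        def __init__(self, send_frame, start_timer, stop_timer):
            self.state = "READY"
            self.saved_frame = None
            self.send_frame = send_frame      # callback that puts a frame on the link
            self.start_timer = start_timer
            self.stop_timer = stop_timer

        def packet_from_network_layer(self, packet):
            if self.state == "READY":
                self.saved_frame = packet     # keep a copy for possible retransmission
                self.send_frame(packet)
                self.start_timer()
                self.state = "BLOCKING"

        def timeout(self):
            if self.state == "BLOCKING":
                self.send_frame(self.saved_frame)   # resend the saved copy
                self.start_timer()

        def ack_arrived(self, corrupted: bool):
            if self.state == "BLOCKING":
                if corrupted:
                    return                    # corrupted ACK is discarded; keep waiting
                self.stop_timer()
                self.saved_frame = None       # discard the saved copy
                self.state = "READY"

    sender = StopAndWaitSender(send_frame=print, start_timer=lambda: None, stop_timer=lambda: None)
    sender.packet_from_network_layer("frame 0")   # prints "frame 0"; sender is now BLOCKING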
Example 3.7
Example 3.8
3.3.3 Piggybacking
• A technique called piggybacking is used to improve the efficiency of the bidirectional protocols.
• The data in one direction is piggybacked with the acknowledgment in the other direction.
• In other words, when node A is sending data to node B, Node A also acknowledges the data
received from node B.
3.4 High-Level Data Link Control (HDLC)
•HDLC is a bit-oriented protocol for communication over point-to-point and multipoint links.
• HDLC implements the ARQ mechanisms.
3.4.1 Configurations and Transfer Modes
• HDLC provides 2 common transfer modes that can be used in different configurations:
1) Normal response mode (NRM)
2) Asynchronous balanced mode (ABM).
NRM
• The station configuration is unbalanced (Figure 11.14).
• We have one primary station and multiple secondary stations.
• A primary station can send commands, a secondary station can only respond.
• The NRM is used for both point-to-point and multipoint links.
ABM
• The configuration is balanced (Figure 11.15).
• Link is point-to-point, and each station can function as a primary and a secondary (acting as peers).
• This is the common mode today.
3.4.2 Framing
• To provide the flexibility necessary to support all the options possible in the modes
and configurations, HDLC defines three types of frames:
1) Information frames (I-frames): are used to transport user data and control
information relating to user data (piggybacking).
2) Supervisory frames (S-frames): are used only to transport control information.
3) Unnumbered frames (U-frames): are reserved for system management.
Information carried by U-frames is intended for managing the link
itself.
• Each type of frame serves as an envelope for the transmission of a different type of message.
3.5.1 Framing
• PPP uses a character-oriented (or byte-oriented) frame (Figure 11.20).
• Various fields of PPP frame are:
1) Flag
This field has a synchronization pattern 01111110.
This field identifies both the beginning and the end of a frame.
2) Address
This field is set to the constant value 11111111 (broadcast address).
3) Control
This field is set to the constant value 00000011 (imitating unnumbered frames in HDLC).
PPP does not provide any flow control.
Error control is also limited to error detection.
4) Protocol
This field defines what is being carried in the payload field.
Payload field carries either i) user data or ii) other control information.
By default, size of this field = 2 bytes.
5) Payload field
This field carries either i) user data or ii) other control information.
By default, maximum size of this field = 1500 bytes.
This field is byte-stuffed if the flag-byte pattern appears in this field.
Padding is needed if the payload-size is less than the maximum size.
6) FCS
This field is the PPP error-detection field.
This field can contain either a 2- or 4-byte standard CRC.
4.1 Introduction
• When nodes use shared-medium, we need multiple-access protocol to coordinate access to
medium.
• Analogy:
This problem is similar to the rules of speaking in an assembly.
We need to ensure
→ Each person has the right to speak.
→ Two people do not speak at the same time
→ Two people do not interrupt each other (i.e. Collision Avoidance)
• Many protocols have been designed to handle access to a shared-link (Figure 12.1).
• These protocols belong to a sublayer in the data-link layer called Media Access Control (MAC).
1) Four random-access protocols (or Contention Methods):
i) ALOHA ii) CSMA iii) CSMA/CD
iv) CSMA/CA
These protocols are mostly used in LANs and WANs.
2) Three controlled-access protocols:
i) Reservation ii) Polling iii) Token-passing
Some of these protocols are used in LANs.
3) Three channelization protocols:
i) FDMA ii) TDMA iii) CDMA
These protocols are used in cellular telephony.
• In Figure 12.4,
If station B sends a frame between t-Tfr and t, this leads to a collision between the
frames from station A and station B.
If station C sends a frame between t and t+Tfr, this leads to a collision between the
frames from station A and station C.
Example 4.1
4.2.1.1.2 Throughput
Example 4.2
4.2.2 CSMA
• CSMA was developed to minimize the chance of collision and, therefore, increase the performance.
• CSMA is based on the principle “sense before transmit” or “listen before talk.”
• Here is how it works:
1) Each station checks the state of the medium: idle or busy.
2) i) If the medium is idle, the station sends the data.
ii) If the medium is busy, the station defers sending.
• CSMA can reduce the possibility of collision, but it cannot eliminate it.
1) 1-Persistent
• Before sending a frame, a station senses the line (Figure 12.10a).
i) If the line is idle, the station sends immediately (with probability = 1).
ii) If the line is busy, the station continues sensing the line.
• This method has the highest chance of collision because 2 or more stations:
→ may find the line idle and
→ send the frames immediately.
2) Non-Persistent
• Before sending a frame, a station senses the line (Figure 12.10b).
i) If the line is idle, the station sends immediately.
ii) If the line is busy, the station waits a random amount of time and then senses the line again.
• This method reduces the chance of collision because 2 or more stations:
→ will not wait for the same amount of time and
→ will not retry to send simultaneously.
3) P-Persistent
• This method is used if the channel has time-slots with a slot-duration equal to or greater than
the maximum propagation time (Figure 12.10c).
• Advantages:
i) It combines the advantages of the other 2 methods.
ii) It reduces the chance of collision and improves efficiency.
• After the station finds the line idle, it follows these steps:
1) With probability p, the station sends the frame.
2) With probability q=1-p, the station waits for the beginning of the next time-slot and
checks the line again.
i) If line is idle, it goes to step 1.
ii) If line is busy, it assumes that collision has occurred and uses the back off procedure.
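• A rough Python sketch of the p-persistent decision loop above; the callbacks (line_is_idle, send, wait_for_next_slot, backoff) are assumed helpers, not standard functions:

    import random

    def p_persistent_attempt(line_is_idle, send, wait_for_next_slot, backoff, p=0.1):
        """Run after the station has already found the line idle."""
        while True:
            if random.random() < p:      # step 1: with probability p, send the frame
                send()
                return
            wait_for_next_slot()         # step 2: with probability 1-p, wait one slot
            if not line_is_idle():       # line busy again -> assume a collision occurred
                backoff()
                return
            # line still idle -> go back to step 1

    p_persistent_attempt(line_is_idle=lambda: True, send=lambda: print("sent"),
                         wait_for_next_slot=lambda: None, backoff=lambda: print("backoff"), p=0.5)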
4.2.3 CSMA/CD
• Disadvantage of CSMA: CSMA does not specify the procedure to follow after a collision has
occurred.
• Solution: CSMA/CD enhances CSMA to handle the collision.
• Here is how it works (Figure 12.12):
1) A station
→ sends the frame &
→ then monitors the medium to see if the transmission was successful or not.
2) If the transmission was unsuccessful (i.e. there is a collision), the frame is sent again.
• In the Figure 12.11,
At time t1, station A has executed its procedure and starts sending the bits of its frame.
At time t2, station C has executed its procedure and starts sending the bits of its frame.
The collision occurs sometime after time t2.
Station C detects a collision at time t3 when it receives the first bit of A's
frame. Station C immediately aborts transmission.
Station A detects collision at time t4 when it receives the first bit of C's
frame. Station A also immediately aborts transmission.
• Station A transmits for the duration t4 - t1; station C transmits for the duration t3 - t2.
• For the protocol to work:
The length of any frame divided by the bit rate must be more than either of these durations.
Example 4.4
4.2.3.2 Procedure
• CSMA/CD is similar to ALOHA with 2 differences (Figure 12.13):
1) Addition of the persistence process.
¤ We need to sense the channel before sending the frame by using non-persistent,
1-persistent or p-persistent.
2) Frame transmission.
i) In ALOHA, the station first transmits the entire frame and then waits for an
acknowledgment.
ii) In CSMA/CD, transmission and collision detection are a continuous process.
4.2.4 CSMA/CA
• Here is how it works (Figure 12.15):
1) A station needs to be able to receive while transmitting to detect a collision.
i) When there is no collision, the station receives one signal: its own signal.
ii) When there is a collision, the station receives 2 signals:
a) Its own signal and
b) Signal transmitted by a second station.
2) To distinguish b/w these 2 cases, the received signals in these 2 cases must be different.
• CSMA/CA was invented to avoid collisions on wireless networks.
• Three methods to avoid collisions (Figure 12.16):
1) Interframe space 2) Contention window and 3) Acknowledgments
2) Contention Window
• The contention-window is an amount of time divided into time-slots.
• A ready-station chooses a random-number of slots as its wait time.
• In the window, the number of slots changes according to the binary exponential back-off
strategy (see the sketch after the acknowledgment item below).
• For example:
At first, the number of slots is set to one slot.
Then, the number of slots is doubled each time the station cannot detect an idle channel.
3) Acknowledgment
• There may be a collision resulting in destroyed-data.
• In addition, the data may be corrupted during the transmission.
• To help guarantee that the receiver has received the frame, we can use
i) Positive acknowledgment and
ii) Time-out timer
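• The contention-window doubling in item 2 above can be sketched as follows (the cap on the window size is an assumption):

    import random

    def contention_slots(attempt: int, max_exponent: int = 10) -> int:
        """Binary exponential back-off: the window starts at one slot and doubles
        on each failed attempt; the station picks a random slot inside the window."""
        window = 2 ** min(attempt, max_exponent)    # 1, 2, 4, 8, ... slots
        return random.randint(0, window - 1)        # number of slots to wait

    for attempt in range(5):
        print(attempt, contention_slots(attempt))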
4.3.1 Reservation
• Before sending data, each station needs to make a reservation of the medium.
• Time is divided into intervals.
• In each interval, a reservation-frame precedes the data-frames.
• If no. of stations = N, then there are N reservation mini-slots in the reservation-frame.
• Each mini-slot belongs to a station.
• When a station wants to send a data-frame, it makes a reservation in its own minislot.
• The stations that have made reservations can send their data-frames.
• For example (Figure 12.18):
5 stations have a 5-minislot reservation-frame.
In the first interval, only stations 1, 3, and 4 have made reservations.
In the second interval, only station-1 has made a reservation.
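• A tiny Python sketch of building the reservation-frame minislots for the example above:

    def reservation_frame(requests, n_stations=5):
        """Station i sets minislot i (stations numbered from 1) if it wants to send."""
        return ['1' if s in requests else '0' for s in range(1, n_stations + 1)]

    # First interval of Figure 12.18: stations 1, 3 and 4 have made reservations.
    print(reservation_frame({1, 3, 4}))   # ['1', '0', '1', '1', '0']
    # Second interval: only station 1 has made a reservation.
    print(reservation_frame({1}))         # ['1', '0', '0', '0', '0']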
4.3.2 Polling
• In a network,
One device is designated as a primary station and
Other devices are designated as secondary
stations.
• Functions of primary-device:
1) The primary-device controls the link.
2) The primary-device is always the initiator of a session.
3) The primary-device determines which device is allowed to use the channel at a given time.
4) All data exchanges must be made through the primary-device.
• The secondary devices follow instructions of primary-device.
• Disadvantage: If the primary station fails, the system goes down.
• Poll and select functions are used to prevent collisions (Figure 12.19).
1) Select
If the primary wants to send data, it tells the secondary to get ready to receive; this is
called select function.
The primary
→ alerts the secondary about upcoming transmission by sending select frame (SEL)
→ then waits for an acknowledgment (ACK) from secondary
→ then sends the data frame and
→ finally waits for an acknowledgment (ACK) from the secondary.
2) Poll
If the primary wants to receive data, it asks the secondaries if they have anything to send;
this is called poll function.
When the first secondary is approached, it responds either
→ with a NAK frame if it has no data to send or
→ with data-frame if it has data to send.
i) If the response is negative (NAK frame), then the primary polls the next secondary
in the same manner.
ii) When the response is positive (a data-frame), the primary
→ reads the frame and
→ returns an acknowledgment (ACK frame).
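• A toy Python sketch of one poll cycle as described above; receive_from and send_ack are assumed callbacks, and returning None stands for a NAK frame:

    def poll_cycle(secondaries, receive_from, send_ack):
        """The primary polls each secondary in turn."""
        for s in secondaries:
            frame = receive_from(s)      # poll the secondary
            if frame is None:
                continue                 # NAK: nothing to send, poll the next one
            send_ack(s)                  # positive response: read the frame and ACK it
            # ...the frame would be delivered to the upper layer here...

    poll_cycle(["S1", "S2", "S3"],
               receive_from=lambda s: "DATA" if s == "S2" else None,
               send_ack=lambda s: print("ACK sent to", s))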