CN UNIT IV Notes
22CS006
Introduction to
Computer Networks
Department of
Artificial Intelligence and Data Science
Batch/Year
2022-2026 / III Year
Created by
Date
31.08.2024
Table of Contents
1. Contents
2. Course Objectives
3. Course Outcomes
4. Lecture Plan
5. Introduction
6. Services
7. Port Numbers
8. SCTP
9. Part A (Q & A)
10. Part B Questions
11. Assessment Schedule
• To gain the knowledge of various protocols and techniques used in the data link
layer.
• To learn the services of network layer and network layer protocols.
PREREQUISITES
• C Programming
• Data Structures
SYLLABUS
UNIT –I INTRODUCTION AND PHYSICAL LAYER
Data Communications – Network Types – Protocol Layering – Network Models (OSI, TCP/IP) –
Networking Devices: Hubs, Bridges, Switches – Performance Metrics – Transmission Media –
Guided Media – Unguided Media – Switching – Circuit Switching – Packet Switching
CO1: Understand the fundamental concepts of computer networks and physical layer.
CO2: Gain knowledge of various protocols and techniques used in the data link layer.
CO3: Learn the network layer services and network layer protocols.
CO–PO/PSO Mapping

COs   PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2  PSO3
CO1    3    3    3    -    -    -    -    -    -    -     -     -     3     2     2
CO2    3    2    2    -    -    -    -    -    -    -     -     -     3     2     2
CO3    3    2    2    -    -    -    -    -    -    -     -     -     3     2     2
CO4    3    2    2    -    -    -    -    -    -    -     -     -     3     2     2
CO5    3    2    2    -    -    -    -    -    -    -     -     -     3     2     2
LECTURE PLAN
UNIT – IV

S.No  Topic                          Periods  Proposed Date  Actual Lecture Date  Taxonomy Level  Mode of Delivery  Pertaining CO
1     Introduction                   1        17.09.2024     17.09.2024           K2              ICT tools         CO4
2     Transport Layer Protocols      1        18.09.2024     18.09.2024           K2              ICT tools         CO4
3     Services                       1        19.09.2024     19.09.2024           K2              ICT tools         CO4
4     Port Numbers                   1        20.09.2024     20.09.2024           K2              ICT tools         CO4
5     User Datagram Protocol         1        21.09.2024     21.09.2024           K2              ICT tools         CO4
6     User Datagram Protocol         1        24.09.2024     24.09.2024           K2              ICT tools         CO4
7     Transmission Control Protocol  1        25.09.2024     25.09.2024           K2              ICT tools         CO4
8     SCTP                           1        26.09.2024     26.09.2024           K2              ICT tools         CO4
ACTIVITY BASED LEARNING: UNIT – IV
A L D C O T M Z B P F F C L X
R C S O G R G O E D M L U Y E
N E K N T A N J C J I O M P L
K R E N U T I I W E S W U P P
V R R E O S X U N F E C S E U
H O B C E W E T L J R O K E D
E R E T M O L M O M V N C R L
K C R I I L P E S A E T E T L
A O O O T S I X D R R R H O U
H N S N O I T S E G N O C P F
S T H L G L L K V A E L Y E W
D R Y E D K U L O T L M F E H
N O I S S I M S N A R T E R N
A L H S O C K E T D N Q R N M
H R E Y A L T R O P S N A R T
ACTIVITY BASED LEARNING: UNIT – IV
TRANSPORTLAYER
FLOWCONTROL
ERRORCONTROL
CONNECTIONLESS
DATAGRAM
QUALITYOFSERVICE
CHECKSUM
CONGESTION
SLOWSTART
HANDSHAKE
ACKNOWLEDGEMENT
MULTIPLEXING
TIMEOUT
KERBEROS
RETRANSMISSION
PEERTOPEER
SOCKET
CLIENT
SERVER
MESSAGE
ACKNOWLEDGEMENT
FULLDUPLEX
TRANSPORT LAYER
Introduction – Transport Layer Protocols – Services – Port Numbers –
User Datagram Protocol – Transmission Control Protocol – SCTP.
1. INTRODUCTION
The transport layer is located between the application layer and the network layer. It
provides a process-to-process communication between two application layers, one at
the local host and the other at the remote host. Communication is provided using a
logical connection, which means that the two application layers, which can be located
in different parts of the globe, assume that there is an imaginary direct connection
through which they can send and receive messages.
The transport layer is responsible for providing services to the application layer; it
receives services from the network layer.
Process-to-Process Communication
• For communication, we must define the local host, local process, remote host, and
remote process. The local host and the remote host are defined using IP
addresses. To define the processes, we need second identifiers, called port
numbers. In the TCP/IP protocol suite, the port numbers are integers between 0
and 65,535 (16 bits).
• The client program defines itself with a port number, called the ephemeral port
number. The word ephemeral means “short-lived” and is used because the life of a
client is normally short. An ephemeral port number is recommended to be greater
than 1023 for some client/server programs to work properly.
• The server process must also define itself with a port number. This port number,
however, cannot be chosen randomly. TCP/IP has decided to use universal port
numbers for servers.
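To make the idea of well-known versus ephemeral port numbers concrete, here is a minimal Python sketch (not part of the original notes; the host 127.0.0.1 and port 5005 are arbitrary illustrative choices, not IANA-assigned values):

# A UDP server bound to a fixed port, and a client that lets the OS pick
# an ephemeral ("short-lived") port automatically.
import socket

def run_server(port=5005):
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", port))           # server: fixed, well-known style port
    data, client_addr = srv.recvfrom(1024)  # client_addr = (IP, ephemeral port)
    print("request from", client_addr)
    srv.sendto(b"hello " + data, client_addr)
    srv.close()

def run_client(server_port=5005):
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # No bind(): the OS assigns an ephemeral port on the first send.
    cli.sendto(b"world", ("127.0.0.1", server_port))
    reply, _ = cli.recvfrom(1024)
    print("reply:", reply, "via local ephemeral port", cli.getsockname()[1])
    cli.close()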
Encapsulation and Decapsulation
• Encapsulation happens at the sender site. When a process has a message to send, it
passes the message to the transport layer along with a pair of socket addresses and
some other pieces of information, which depend on the transport-layer protocol.
• The transport layer receives the data and adds the transport-layer header. The
packets at the transport layer in the Internet are called user datagrams,
segments, or packets, depending on what transport-layer protocol we use.
• Decapsulation happens at the receiver site. When the message arrives at the
destination transport layer, the header is dropped and the transport layer delivers the
message to the process running at the application layer. The sender socket address is
passed to the process in case it needs to respond to the message received.
Whenever an entity accepts items from more than one source, this is referred to as
multiplexing (many to one); whenever an entity delivers items to more than one
destination, this is referred to as demultiplexing (one to many). The transport layer at
the source performs multiplexing; the transport layer at the destination performs
demultiplexing.
• The processes P1 and P3 need to send requests to the corresponding server process
running in a server.
• The client process P2 needs to send a request to the corresponding server process running
at another server. The transport layer at the client site accepts three messages from the
three processes and creates three packets. It acts as a multiplexer.
• Packets 1 and 3 use the same logical channel to reach the transport layer of the first
server. When they arrive at the server, the transport layer does the job of a demultiplexer
and distributes the messages to two different processes.
• The transport layer at the second server receives packet 2 and delivers it to the
corresponding process. Note that we still have demultiplexing although there is only one
message.
• The receiving transport layer also has a double role: it is the consumer for the packets
received from the sender and the producer that decapsulates the messages and delivers
them to the application layer.
• The last delivery, to the receiving process, is normally a pulling delivery; the transport layer waits
until the application-layer process asks for messages.
Buffers
Although flow control can be implemented in several ways, one of the solutions is normally
to use two buffers: one at the sending transport layer and the other at the receiving
transport layer.
A buffer is a set of memory locations that can hold packets at the sender and receiver. The
flow control communication can occur by sending signals from the consumer to the
producer. When the buffer of the sending transport layer is full, it informs the application
layer to stop passing chunks of messages; when there are some vacancies, it informs the
application layer that it can pass message chunks again.
When the buffer of the receiving transport layer is full, it informs the sending transport layer
to stop sending packets. When there are some vacancies, it informs the sending transport
layer that it can send packets again.
Error Control
Sliding Window
• Since the sequence numbers use modulo 2^m arithmetic, a circle can represent the sequence
numbers from 0 to 2^m − 1. The buffer is represented as a set of slices, called the sliding
window, that occupies part of the circle at any time.
• At the sender site, when a packet is sent, the corresponding slice is marked. When all the
slices are marked, it means that the buffer is full and no further messages can be accepted
from the application layer.
• In the figure, the sequence numbers are in modulo 16 (m = 4) and the size of the window is 7.
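A small illustrative sketch in Python (added here, not from the textbook) of the sender-side bookkeeping just described, with sequence numbers taken modulo 2^m and a window size of 7 as in the example:

M = 4
MODULO = 2 ** M          # 16 sequence numbers: 0 .. 15
WINDOW_SIZE = 7

outstanding = set()      # "marked slices": sent but not yet acknowledged
next_seq = 0             # next sequence number to assign

def can_send():
    return len(outstanding) < WINDOW_SIZE

def send_packet():
    global next_seq
    assert can_send(), "window full: refuse new messages from the application layer"
    seq = next_seq
    outstanding.add(seq)                 # mark the slice
    next_seq = (next_seq + 1) % MODULO   # sequence numbers wrap around the circle
    return seq

def acknowledge(seq):
    outstanding.discard(seq)             # unmark the slice; the window can slide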
Connectionless and Connection-Oriented Protocols
Connectionless Service
• In a connectionless service, the source process (application program) needs to divide its
message into chunks of data of the size acceptable by the transport layer and deliver them
to the transport layer one by one.
• The transport layer treats each chunk as a single unit without any relation between the
chunks. When a chunk arrives from the application layer, the transport layer encapsulates
it in a packet and sends it.
Connectionless service
• The figure shows that at the client site, the three chunks of messages are delivered to
the client transport layer in order (0, 1, and 2). Because of the extra delay in the
transportation of the second packet, the delivery of messages at the server is not in order
(0, 2, 1).
• If these three chunks of data belong to the same message, the server process may have
received a strange message. The situation would be worse if one of the packets were
lost. Since there is no numbering on the packets, the receiving transport layer has no idea
that one of the messages has been lost. It just delivers two chunks of data to the server
process.
• The above two problems arise from the fact that the two transport layers do not
coordinate with each other. The receiving transport layer does not know when the first
packet will come nor when all of the packets have arrived.
• We can say that no flow control, error control, or congestion control can be effectively
implemented in a connectionless service.
Connection-Oriented Service
In a connection-oriented service, the client and the server first need to establish a logical
connection between themselves. The data exchange can only happen after the
connection establishment. After data exchange, the connection needs to be torn down.
TRANSPORT-LAYER PROTOCOLS
Simple Protocol
Our first protocol is a simple connectionless protocol with neither flow nor error control. The
receiver can immediately handle any packet it receives. In other words, the receiver can
never be overwhelmed with incoming packets.
Simple protocol
The transport layer at the sender gets a message from its application layer, makes a packet
out of it, and sends the packet. The transport layer at the receiver receives a packet from its
network layer, extracts the message from the packet, and delivers the message to its
application layer.
FSMs
• The sender site should not send a packet until its application layer has a message to send.
The receiver site cannot deliver a message to its application layer until a packet arrives.
• We can show these requirements using two FSMs. Each FSM has only one state, the ready
state. The sending machine remains in the ready state until a request comes from the
process in the application layer. When this event occurs, the sending machine
encapsulates the message in a packet and sends it to the receiving machine.
• The receiving machine remains in the ready state until a packet arrives from the sending
machine. When this event occurs, the receiving machine decapsulates the message out of
the packet and delivers it to the process at the application layer.
• The communication using this protocol is very simple. The sender sends packets one after
another without even thinking about the receiver.
Stop-and-Wait Protocol
• Stop-and-Wait protocol uses both flow and error control. Both the sender and the
receiver use a sliding window of size 1. The sender sends one packet at a time and waits
for an acknowledgment before sending the next one.
• To detect corrupted packets, we need to add a checksum to each data packet. When a
packet arrives at the receiver site, it is checked. If its checksum is incorrect, the packet is
corrupted and silently discarded.
• The silence of the receiver is a signal for the sender that a packet was either corrupted or
lost.
• Every time the sender sends a packet, it starts a timer. If an acknowledgment arrives
before the timer expires, the timer is stopped and the sender sends the next packet (if it
has one to send).
• If the timer expires, the sender resends the previous packet, assuming that the packet
was either lost or corrupted. This means that the sender needs to keep a copy of the
packet until its acknowledgment arrives.
• The Stop-and-Wait protocol is a connection-oriented protocol that provides flow and error
control.
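The following Python sketch (an illustration added to these notes; send_to_network is a placeholder, not a real API) captures the Stop-and-Wait sender behaviour just described: one outstanding packet at a time, alternating sequence numbers, a saved copy, and retransmission on timeout.

import time

class StopAndWaitSender:
    def __init__(self, timeout=2.0):
        self.S = 0                 # sequence number of the next packet (modulo 2)
        self.timeout = timeout
        self.unacked = None        # copy of the outstanding packet, if any

    def send(self, message: bytes, send_to_network):
        packet = {"seq": self.S, "data": message,
                  "checksum": sum(message) % 65536}   # toy checksum for illustration
        self.unacked = packet
        self.sent_at = time.time()                    # "start the timer"
        send_to_network(packet)

    def on_ack(self, ackNo):
        # ackNo announces the next sequence number the receiver expects.
        if self.unacked is not None and ackNo == (self.S + 1) % 2:
            self.unacked = None                       # stop the timer
            self.S = (self.S + 1) % 2                 # slide the window

    def on_timeout(self, send_to_network):
        if self.unacked is not None and time.time() - self.sent_at >= self.timeout:
            send_to_network(self.unacked)             # resend the saved copy
            self.sent_at = time.time()                # restart the timer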
Sequence Numbers
• To prevent duplicate packets, the protocol uses sequence numbers and acknowledgment
numbers. A field is added to the packet header to hold the sequence number of that
packet. Assume we have used x as a sequence number; we only need to use x + 1 after
that. There is no need for x + 2. Three things can happen.
• 1. The packet arrives safe and sound at the receiver site; the receiver sends an
acknowledgment. The acknowledgment arrives at the sender site, causing the sender to
send the next packet numbered x + 1.
• 2. The packet is corrupted or never arrives at the receiver site; the sender resends the
packet (numbered x) after the time-out. The receiver returns an acknowledgment.
• 3. The packet arrives safe and sound at the receiver site; the receiver sends an
acknowledgment, but the acknowledgment is corrupted or lost. The sender resends the
packet (numbered x) after the time-out. Note that the packet here is a duplicate. The
receiver can recognize this fact because it expects packet x + 1 but packet x was received.
Acknowledgment Numbers
• The acknowledgment numbers always announce the sequence number of the next packet
expected by the receiver. For example, if packet 0 has arrived safe and sound, the receiver
sends an ACK with acknowledgment 1 (meaning packet 1 is expected next).
• If packet 1 has arrived safe and sound, the receiver sends an ACK with acknowledgment 0
(meaning packet 0 is expected).
Receiver
The receiver is always in the ready state. Three events may occur:
a. If an error-free packet with seqNo = R arrives, the message in the packet is delivered to
the application layer. The window then slides, R = (R + 1) modulo 2. Finally, an ACK with
ackNo = R is sent.
b. If an error-free packet with seqNo ≠ R arrives, the packet is discarded, but an ACK with
ackNo = R is sent.
c. If a corrupted packet arrives, the packet is discarded.
Pipelining
In networking and in other areas, a task is often begun before the previous task has ended.
This is known as pipelining. There is no pipelining in the Stop-and-Wait protocol because a
sender must wait for a packet to reach the destination and be acknowledged before the next
packet can be sent.
To improve the efficiency of transmission (to fill the pipe), multiple packets must be in
transition while the sender is waiting for acknowledgment. The key to Go-Back-N (GBN) is
that we can send several packets before receiving acknowledgments, but the receiver can
only buffer one packet. We keep a copy of the sent packets until the acknowledgments
arrive.
Sequence Numbers
The sequence numbers are modulo 2^m, where m is the size of the sequence number field in
bits.
Acknowledgment Numbers
An acknowledgment number in this protocol is cumulative and defines the sequence number
of the next packet expected. For example, if the acknowledgment number (ackNo) is 7, it
means all packets with sequence numbers up to 6 have arrived, safe and sound, and the
receiver is expecting the packet with sequence number 7.
Send Window
The send window is an imaginary box covering the sequence numbers of the data packets
that can be in transit or can be sent. In each window position, some of the sequence
numbers define the packets that have been sent; others define those that can be sent. The
maximum size of the window is 2^m − 1. The figure shows a sliding window of size 7 (m = 3) for
the Go-Back-N protocol.
Figure: Send window for Go-Back-N
• The send window at any time divides the possible sequence numbers into four regions.
• The first region, left of the window, defines the sequence numbers belonging to packets
that are already acknowledged. The sender does not worry about these packets and
keeps no copies of them.
• The second region, colored, defines the range of sequence numbers belonging to the
packets that have been sent, but have an unknown status. The sender needs to wait
to find out if these packets have been received or were lost. We call these outstanding
packets.
• The third region, white in the figure, defines the range of sequence numbers for
packets that can be sent; however, the corresponding data have not yet been received
from the application layer.
• Finally, the fourth region, right of the window, defines sequence numbers that cannot be
used until the window slides.
Receive Window
• The receive window makes sure that the correct data packets are received and that the
correct acknowledgments are sent. In Go-Back-N, the size of the receive window is always
1. The receiver is always looking for the arrival of a specific packet. Any packet arriving out
of order is discarded and needs to be resent.
• The figure shows the receive window. Only one variable, Rn (the next packet
expected), is needed to define this abstraction. The sequence numbers to the left of the window
belong to the packets already received and acknowledged; the sequence numbers to the
right of this window define the packets that cannot be received.
• Any received packet with a sequence number in these two regions is discarded. Only a
packet with a sequence number matching the value of Rn is accepted and
acknowledged. The receive window also slides, but only one slot at a time. When a
correct packet is received, the window slides, Rn = (Rn + 1) modulo 2^m.
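A rough Python sketch (illustrative only; the names Sf, Sn and Rn follow the notation used above) of the Go-Back-N bookkeeping: a send window of up to 2^m − 1 outstanding packets, a receive window of size 1, cumulative ACKs, and resending everything outstanding on a timeout.

M = 3
MODULO = 2 ** M
SEND_WINDOW = MODULO - 1      # maximum send window size for Go-Back-N

# Sender state
Sf = 0                        # first outstanding sequence number
Sn = 0                        # next sequence number to use
outstanding = []              # copies of sent-but-unacknowledged packets

def sender_on_ack(ackNo):
    """Cumulative ACK: ackNo is the next packet the receiver expects."""
    global Sf
    while Sf != ackNo and outstanding:   # slide the window, purge acknowledged copies
        outstanding.pop(0)
        Sf = (Sf + 1) % MODULO

def sender_on_timeout(resend):
    for packet in outstanding:           # "go back N": resend every outstanding packet
        resend(packet)

# Receiver state
Rn = 0                        # the only sequence number the receiver will accept

def receiver_on_packet(packet, deliver, send_ack):
    global Rn
    if packet["seq"] == Rn:              # in order: deliver and slide by one slot
        deliver(packet["data"])
        Rn = (Rn + 1) % MODULO
    send_ack(Rn)                         # always announce the next expected packet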
Selective-Repeat Protocol
The Go-Back-N protocol simplifies the process at the receiver. The receiver keeps track of
only one variable, and there is no need to buffer out-of-order packets; they are simply
discarded.
If the network layer is losing many packets because of congestion in the network, the
resending of all of these outstanding packets makes the congestion worse, and eventually
more packets are lost.
• The Selective-Repeat protocol also uses two windows: a send window and a
receivewindow.
• First, the maximum size of the send window is much smaller; it is 2^(m−1). Second, the
receive window is the same size as the send window. For example, if m = 4, the sequence
numbers go from 0 to 15, but the maximum size of the window is just 8 (it is 15 in the Go-
Back-N protocol).
• The Selective-Repeat protocol allows as many packets as the size of the receive window to
arrive out of order and be kept until there is a set of consecutive packets to be delivered
to the application layer.
• Because the sizes of the send window and receive window are the same, all the packets in
the send window can arrive out of order and be stored until they can be delivered.
• The four protocols we discussed earlier in this section are all unidirectional: data packets
flow in only one direction and acknowledgments travel in the other direction.
• In real life, data packets are normally flowing in both directions: from client to server
andfrom server to client. This means that acknowledgments also need to flow in both
directions.
• A technique called piggybacking is used to improve the efficiency of the bidirectional
protocols. When a packet is carrying data from A to B, it can also carry acknowledgment
feedback about arrived packets from B; when a packet is carrying data from B to A, it can
also carry acknowledgment feedback about the arrived packets from A.
Services
Each protocol provides a different type of service and should be used appropriately.
UDP
UDP is an unreliable connectionless transport-layer protocol used for its simplicity and
efficiency in applications where error control can be provided by the application- layer
process.
TCP
TCP is a reliable connection-oriented protocol that can be used in any
application where reliability is important.
SCTP
SCTP is a new transport-layer protocol that combines the features of UDP and TCP.
Port Numbers
A transport-layer protocol creates a process-to-process communication.
These protocols use port numbers to accomplish this.
Port numbers provide end-to-end addresses at the transport layer and allow multiplexing
and demultiplexing at this layer.
Most popularly used Port numbers are listed in Figure 4.2.
Reference video : https://www.youtube.com/watch?v=qsZ8Qcm6_8k
Note: UDP is not usually used for a process such as FTP that needs to send bulk data.
UDP is suitable for a process with internal flow- and error-control mechanisms.
For example, the Trivial File Transfer Protocol (TFTP) process includes flow and
error control.
UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software but not in the TCP software.
UDP is used for management processes such
as SNMP (Simple Network Management Protocol)
UDP is used for some route updating protocols such as Routing Information
Protocol (RIP)
UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.
Reference Video : https://www.youtube.com/watch?v=O22zHLcHHN0
Connection Establishment
The connection establishment in TCP is called three-way handshaking.
The three-way handshake involves the exchange of three messages between the
client and the server.
1. First, the client (the active participant) sends a segment to the server (the passive
participant) stating the initial sequence number it plans to use (Flags = SYN,
SequenceNum = x).
2. The server then responds with a single segment that both acknowledges the
client’s sequence number (Flags = ACK, Ack = x + 1) and states its own beginning
sequence number (Flags = SYN, SequenceNum = y). That is, both the SYN and ACK
bits are set in the Flags field of this second message.
3. Finally, the client responds with a third segment that acknowledges the server's
sequence number (Flags = ACK, Ack = y + 1).
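The three segments can be pictured with a tiny Python sketch (illustrative only; x = 4000 and y = 9000 are arbitrary example initial sequence numbers, and the field names follow the text above):

x = 4000   # client's chosen initial sequence number (example value)
y = 9000   # server's chosen initial sequence number (example value)

segment1 = {"Flags": {"SYN"},        "SequenceNum": x}                 # client -> server
segment2 = {"Flags": {"SYN", "ACK"}, "SequenceNum": y, "Ack": x + 1}   # server -> client
segment3 = {"Flags": {"ACK"},        "Ack": y + 1}                     # client -> server

for seg in (segment1, segment2, segment3):
    print(seg)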
Connection Termination
The client TCP, after receiving a close command from the client process, sends
the first segment, a FIN segment in which the FIN flag is set.
The server TCP, after receiving the FIN segment, sends the second segment, a
FIN + ACK segment, to confirm the receipt of the FIN segment from the client and
at the same time to announce the closing of the connection in the other direction.
The client TCP sends the last segment, an ACK segment, to confirm the receipt of
the FIN segment from the TCP server.
Figure 4.12 – Connection termination using three way handshaking
∙ If only one side closes the connection, then this means it has no more data to
send, but it is still available to receive data from the other side.
∙ Thus, on any one side there are three combinations of transitions that get a
connection from the ESTABLISHED state to the CLOSED state:
This side closes first:
ESTABLISHED → FIN WAIT 1 → FIN WAIT 2 → TIME WAIT → CLOSED.
The other side closes first:
ESTABLISHED → CLOSE WAIT → LAST ACK → CLOSED.
Both sides close at the same time:
ESTABLISHED → FIN WAIT 1 → CLOSING → TIME WAIT → CLOSED.
Figure 4.14 – Data flow and flow control feedbacks in TCP
Flow Control
Flow control balances the rate a producer creates data with the rate a
consumer can use the data.
Assume that the logical channel between the sending and receiving TCP is error-
free.
Paths 1, 2, and 3 - The data travel from the sending process down to the
sending TCP, from the sending TCP to the receiving TCP, and from the
receiving TCP up to the receiving process.
Paths 4 and 5- Flow control feedbacks, are traveling from the receiving TCP to
the sending TCP and from the sending TCP up to the sending process.
The receive window closes (moves its left wall to the right) when more bytes
arrive from the sender; it opens (moves its right wall to the right) when more
bytes are consumed (pulled) by the receiving process.
The opening, closing, and shrinking of the send window is controlled by the
receiver.
The send window closes (moves its left wall to the right) when a new
acknowledgment allows it to do so.
The send window opens (its right wall moves to the right) when the
receive window size (rwnd) advertised by the receiver allows it to do so
The send window shrinks if the newly advertised rwnd value does not allow the right wall
to keep its place or move to the right.
Shrinking of Windows
The receive window cannot shrink.
The send window, on the other hand, can shrink if the receiver defines a value for rwnd
that results in shrinking the window.
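As a small illustration (not from the textbook; the variable names are made up), the sender can compute how many more bytes it may transmit from the advertised rwnd and the number of bytes already in flight:

def usable_send_window(rwnd, last_byte_sent, last_byte_acked):
    outstanding = last_byte_sent - last_byte_acked   # bytes already in flight
    return max(0, rwnd - outstanding)                # bytes that may still be sent

# Example: rwnd = 4000 bytes advertised, 2500 bytes already in flight
print(usable_send_window(4000, 12500, 10000))        # -> 1500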
Error Control
TCP is a reliable transport-layer protocol. This means that an application program that
delivers a stream of data to TCP relies on TCP to deliver the entire stream to the
application program on the other end in order, without error, and without any part lost or
duplicated.
Error control includes mechanisms for detecting and resending corrupted segments,
resending lost segments, storing out-of-order segments until missing segments arrive,
and detecting and discarding duplicated segments.
Error control in TCP is achieved through the use of three simple tools:
• Checksum
• Acknowledgment
• Time-out
Checksum
Each segment includes a checksum field, which is used to check for a corrupted
segment.
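A Python sketch of the standard 16-bit one's-complement Internet checksum that TCP (and UDP) use for this purpose; it assumes the caller has already assembled the data to be covered (pseudo-header plus header plus payload):

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return (~total) & 0xFFFF                       # one's complement of the sum

# The receiver repeats the sum over the received segment including the checksum
# field; a folded sum of 0xFFFF (complement 0) means no error was detected.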
Acknowledgment
TCP uses acknowledgments to confirm the receipt of data segments.
Control segments that carry no data, but consume a sequence number, are also
acknowledged.
ACK segments are never acknowledged.
Acknowledgment Type
Selective Acknowledgment (SACK) - A SACK reports a block of bytes that is out of order,
and also a block of bytes that is duplicated
Generating Acknowledgments
2. The receiver needs to delay sending an ACK segment if there is only one outstanding in-
order segment.
3. When a segment arrives with a sequence number that is expected by the receiver, and
the previous in-order segment has not been acknowledged, the receiver immediately
sends an ACK segment.
4. When a segment arrives with an out-of-order sequence number that is higher than
expected, the receiver immediately sends an ACK segment announcing the sequence
number of the next expected segment.
5. When a missing segment arrives, the receiver sends an ACK segment to announce the
next sequence number expected.
6. If a duplicate segment arrives, the receiver discards the segment, but immediately
sends an acknowledgment indicating the next in-order segment expected.
The idea of TCP congestion control is for each source to determine how much capacity
is available in the network, so that it knows how many packets it can safely have in
transit.
Once a given source has this many packets in transit, it uses the arrival of an ACK as a
signal that one of its packets has left the network, and that it is therefore safe to insert a
new packet into the network without adding to the level of congestion.
TCP maintains a new state variable for each connection, called Congestion Window.
Congestion Window is used by the source to limit how much data it is allowed to have in
transit at a given time.
TCP is modified such that the maximum number of bytes of unacknowledged data
allowed is now the minimum of the congestion window and the advertised window.
This involves decreasing the congestion window when the level of congestion goes up and increasing
the congestion window when the level of congestion goes down.
This mechanism is commonly called additive increase/multiplicative decrease (AIMD).
How does the source determine that the network is congested and that it should decrease the
congestion window?
TCP interprets timeouts as a sign of congestion and reduces the rate at which it is transmitting.
Specifically, each time a timeout occurs, the source sets CongestionWindow to half of its previous value.
This halving of the CongestionWindow for each timeout corresponds to the "multiplicative decrease" part of AIMD.
For example, suppose the CongestionWindow is currently set to 16 packets. If a loss is detected,
CongestionWindow is set to 8.
Additional losses cause CongestionWindow to be reduced to 4, then 2, and finally to 1
packet.
CongestionWindow is not allowed to fall below the size of a single packet, or in TCP terminology, the
maximum segment size (MSS).
We also need to be able to increase the congestion window to take advantage of newly available
capacity in the network.
This is the "additive increase" part of AIMD, and it works as follows.
Every time the source successfully sends a Congestion Window’s worth of packets—that is, each packet
sent out during the last RTT has been ACKed—it adds the equivalent of one packet to
CongestionWindow.
This linear increase is illustrated in Figure.
Specifically, the congestion window is incremented as follows each time an ACK arrives:
Increment = MSS × (MSS/CongestionWindow)
CongestionWindow += Increment
Figure 4.15 - Packets in transit during additive increase, with one packet being
added each RTT.
The graph in Figure 4.16 is a plot of the current value of the congestion window as a
function of time.
Slow Start
The additive increase mechanism is used when the source is operating close to
the available capacity of the network.
Additive increase alone takes too long to ramp up a new TCP connection from a cold start.
Slow start is used to increase the congestion window rapidly from a cold start.
Slow start effectively increases the congestion window exponentially, rather than
linearly.
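The sketch below (illustrative; the initial ssthresh of 8 segments is an assumed value) combines the mechanisms described above: slow start below the threshold, the additive-increase increment of MSS × (MSS/CongestionWindow) per ACK, and multiplicative decrease on a timeout.

MSS = 1       # count the window in segments (1 "packet" = 1 MSS) for simplicity

class CongestionControl:
    def __init__(self, ssthresh=8):
        self.cwnd = 1 * MSS            # cold start: one segment in flight
        self.ssthresh = ssthresh       # slow-start threshold (assumed initial value)

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            # slow start: add one MSS per ACK, which doubles cwnd every RTT
            self.cwnd += MSS
        else:
            # additive increase: MSS * (MSS / cwnd) per ACK, about one MSS per RTT
            self.cwnd += MSS * (MSS / self.cwnd)

    def on_timeout(self):
        # the notes' AIMD rule: halve the window, but never below one MSS
        # (real TCPs also reset the slow-start threshold here)
        self.ssthresh = max(self.cwnd / 2, 2 * MSS)
        self.cwnd = max(self.cwnd / 2, 1 * MSS)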
Retransmission Timer
To retransmit lost segments, TCP employs one retransmission timer (for the
whole connection period) that handles the retransmission time-out (RTO), the
waiting time for an acknowledgment of a segment.
1. When TCP sends the segment in front of the sending queue, it starts the timer.
2. When the timer expires, TCP resends the first segment in front of the queue,
and restarts the timer.
3. When a segment or segments are cumulatively acknowledged, the segment or
segments are purged from the queue.
4. If the queue is empty, TCP stops the timer; otherwise, TCP restarts the timer.
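The notes later list simple averages, exponential (weighted) averages, and Jacobson's algorithm as ways of estimating the round-trip time behind this RTO. A sketch of the commonly used Jacobson/Karels estimator follows (the constants 1/8, 1/4 and the factor 4 are standard textbook values and are not spelled out in these notes):

class RTOEstimator:
    def __init__(self, initial_rtt=1.0):
        self.estimated_rtt = initial_rtt   # running mean of measured RTT samples
        self.deviation = 0.0               # running mean deviation
        self.alpha = 0.125                 # weight for the mean (1/8)
        self.beta = 0.25                   # weight for the deviation (1/4)

    def update(self, sample_rtt):
        error = sample_rtt - self.estimated_rtt
        self.estimated_rtt += self.alpha * error
        self.deviation += self.beta * (abs(error) - self.deviation)
        return self.rto()

    def rto(self):
        # retransmission time-out = estimated RTT + 4 * deviation
        return self.estimated_rtt + 4 * self.deviation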
Persistence Timer
To deal with a zero-window-size advertisement, TCP needs another timer.
If the receiving TCP announces a window size of zero, the sending TCP stops
transmitting segments until the receiving TCP sends an ACK segment announcing
a nonzero window size.
This ACK segment can be lost.
If this acknowledgment is lost, the receiving TCP thinks that it has done its job
and waits for the sending TCP to send more segments.
There is no retransmission timer for a segment containing only an
acknowledgment.
The sending TCP has not received an acknowledgment and waits for the other
TCP to send an acknowledgment advertising the size of the window.
When the persistence timer goes off, the sending TCP sends a special segment
called a probe.
The probe causes the receiving TCP to resend the acknowledgment.
The value of the persistence timer is set to the value of the retransmission time.
However, if a response is not received from the receiver, another probe segment
is sent and the value of the persistence timer is doubled and reset.
The sender continues sending the probe segments and doubling and resetting the
value of the persistence timer until the value reaches a threshold (usually 60 s).
After that the sender sends one probe segment every 60 seconds until the window
is reopened.
Keepalive Timer
A keepalive timer is used to prevent a long idle connection between two TCPs.
Example: Suppose that a client opens a TCP connection to a server, transfers
some data, and then crashes. In this case, the connection remains open forever.
The server has a keepalive timer. Each time the server hears from a client, it
resets this timer. The time-out is usually 2 hours. If the server does not hear from
the client after 2 hours, it sends a probe segment. If there is no response after 10
probes, each of which is 75 seconds apart, it assumes that the client is down and
terminates the connection.
TIME-WAIT Timer
The TIME-WAIT (2MSL) timer is used during connection termination.
The maximum segment lifetime (MSL) is the amount of time any segment can
exist in a network before being discarded.
Common values for MSL are 30 seconds, 1 minute, or even 2 minutes.
The 2MSL timer is used when TCP performs an active close and sends the final
ACK.
The connection must stay open for 2 MSL amount of time to allow TCP to resend
the final ACK in case the ACK is lost.
Reference video: https://www.youtube.com/watch?v=oEUP7RXzxDY
SCTP (Stream Control Transmission Protocol)
SCTP is a new transport-layer protocol designed to combine some
features of UDP and TCP in an effort to create a better protocol for
multimedia communication.
SCTP Services
1. Process-to-Process Communication
SCTP, like UDP or TCP, provides process-to-process communication.
2. Multiple Streams
SCTP allows multistream service in each connection.
If one of the streams is blocked, the other streams can still deliver their
data.
Figure 4.21 – Comparison between a TCP Segment and an SCTP Packet
2. Stream Identifier (SI)
There may be several streams in each association.
Each stream in SCTP needs to be identified using a stream identifier (SI).
Each data chunk must carry the SI in its header so that when it arrives at
the destination, it can be properly placed in its stream.
The SI is a 16-bit number starting from 0.
3. Stream Sequence Number (SSN)
When a data chunk arrives at the destination SCTP, it is delivered to the
appropriate stream and in the proper order.
This means that, in addition to an SI, SCTP defines each data chunk in each
stream with a stream sequence number (SSN).
4. Packets
Data are carried as data chunks, control information as control chunks.
Several control chunks and data chunks can be packed together in a packet.
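A minimal Python sketch (illustrative; a real SCTP data chunk also carries a TSN, flags, and length fields) of how the SI and SSN let the receiver place each data chunk in the right stream, in the right order, so one blocked stream does not hold back the others:

from dataclasses import dataclass
from collections import defaultdict

@dataclass
class DataChunk:
    si: int         # stream identifier (16-bit number starting from 0)
    ssn: int        # stream sequence number within that stream
    payload: bytes

streams = defaultdict(list)        # receiver side: one delivery queue per stream

def receive_chunk(chunk: DataChunk):
    streams[chunk.si].append(chunk)
    # each stream is ordered independently by its SSN
    streams[chunk.si].sort(key=lambda c: c.ssn)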
The receiver stores all chunks that have arrived in its queue, including the
out-of-order ones.
The last acknowledgment sent was for data chunk 20.
The available window size is 1000 bytes.
Chunks 21 to 23 have been received in order.
The round-trip time used for the retransmission timer can be estimated with a simple
average, an exponential (weighted) average, or Jacobson's algorithm.
IANA (Internet Assigned Numbers Authority) has divided port numbers into three
ranges: 1) well-known ports, 2) registered ports, and 3) dynamic ports.
4) List out the various features of sliding window protocol. (Nov/Dec 2012)
(CO4, K1)
5) What are the services provided by the transport layer? (May 2018)
(CO4, K1)
End - to End Delivery
Flow Control
Addressing
Multiplexing
Reliable delivery
Congestion occurs because the switches in a network have a limited buffer size to
store arriving packets, and also because packets can arrive at a faster rate than
the receiver can receive and process them.
7) Define congestion. (Nov 2011) (CO4,K1)
Congestion in a network occurs if users send data into the network at a rate
greater than that allowed by the network resources. Any given node has a number of
I/O ports attached to it. There are two buffers at each port: one to accept
arriving packets and another to hold packets that are waiting to depart. If
packets arrive too fast for the node to process them, or faster than packets can be
cleared from the outgoing buffers, there will be no empty buffer, thus
causing congestion and traffic in the network.
A very small packet of data is called a tinygram. Too many tinygrams can
congest a network connection.
It stands for User Datagram Protocol. It is part of the TCP/IP suite of protocols
used for data transfer. UDP is known as a "stateless" protocol, meaning it
does not acknowledge that the packets being sent have been received.
10) What are the advantages of using UDP over TCP? (Nov 10) (CO4,K1)
UDP is very useful for audio or video delivery, which does not need
acknowledgement. It is useful in the transmission of multimedia data. Connection
establishment delay occurs in TCP but not in UDP.
11) What are the different fields and use of UDP’s Pseudo header?
(CO4, K1)
The pseudo header consists of three fields from the IP header (the protocol number,
source IP address, and destination IP address) plus the UDP length field (which is
included twice in the checksum calculation). The pseudo header is used to check
whether the message is delivered between the correct two endpoints.
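A small Python sketch (illustrative) of how this pseudo header can be laid out when it is fed into the checksum computation; 17 is the IP protocol number for UDP:

import socket
import struct

def udp_pseudo_header(src_ip: str, dst_ip: str, udp_length: int) -> bytes:
    return struct.pack(
        "!4s4sBBH",
        socket.inet_aton(src_ip),   # source IP address (from the IP header)
        socket.inet_aton(dst_ip),   # destination IP address (from the IP header)
        0,                          # zero padding byte
        17,                         # protocol number for UDP
        udp_length,                 # UDP length (header + data)
    )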
A client would send a message to the Port Mapper's well-known port asking for
the port it should use to talk to the "whatever" service, and the Port Mapper
returns the appropriate port.
A more common approach, and the one used by UDP, is for processes to
indirectly identify each other using an abstract locator, often called a port or
mailbox.
14) Give the datagram format of UDP? (CO4, K1)
The basic idea of UDP is for a source process to send a message to a port and for
the destination process to receive the message from a port.
Source port address: It is the address of the application program that has
created the message.
Destination port address: It is the address of the application program that will
receive the message.
Total length: It defines the total length of the user datagram (header plus data) in bytes.
Checksum: It is used for error detection over the entire user datagram; it is optional in IPv4.
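A small parsing sketch in Python (illustrative) of the 8-byte UDP header implied by these fields, each of which is 16 bits wide:

import struct

def parse_udp_header(datagram: bytes):
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"source_port": src_port, "destination_port": dst_port,
            "total_length": length, "checksum": checksum, "data": datagram[8:]}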
17) Name the policies that can prevent (avoid) congestion. (CO4, K1)
Slow start
Fast retransmit
Recovery
19) Suppose TCP operates over a 1-Gbps link, utilizing the full bandwidth
continuously. How long will it take for the sequence numbers to wrap around
completely? Suppose an added 32-bit timestamp field increments 1000 times
during this wrap-around time, how long will it take for the timestamp field to wrap
around?
While a segment with sequence number x survives in the Internet, TCP cannot reuse the
same sequence number, so the question is how fast the 32-bit sequence number space
can be consumed. For a T3 (45 Mbps) link, the wrap-around time is
(2^32 × 8) / 45 Mbps ≈ 763.55 s ≈ 12.73 min.
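For the 1-Gbps case actually asked in the question (a worked sketch, assuming the full 1 Gbps is used continuously): wrap-around time = (2^32 × 8) / 10^9 ≈ 34.36 s. If the added 32-bit timestamp field increments 1000 times during this wrap-around time, it ticks once every ≈ 34.36 ms, so the timestamp field itself wraps around after 2^32 × 34.36 ms ≈ 1.48 × 10^8 s, which is roughly 4.7 years.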
It involves preventing too much data from being injected into the network, so that
switches or links do not become overloaded. Thus flow control is an
end-to-end issue, while congestion control is concerned with how hosts and
networks interact.
21) What do you mean by slow start in TCP congestion? (May 16)
(CO4,K1)
TCP slow start is an algorithm which balances the speed of a network connection.
Slow start gradually increases the amount of data transmitted until it finds the
network’s maximum carrying capacity.
22) What are the four aspects related to the reliable delivery of data? (May 12)
(CO4,K1)
The four aspects are Error control, Sequence control, Loss control and Duplication
Control.
23) List the different phases used in TCP Connection. (May 16) (CO4,K1)
A TCP connection goes through three phases: connection establishment, data transfer,
and connection termination.
Slow start is used to increase the congestion window rapidly from a cold start.
Slow start effectively increases the congestion window exponentially, rather
than linearly. If the source sends as many packets as the advertised window
allows, the routers may not be able to consume this burst of packets. Thus
slow start is much "slower" than sending an entire advertised window's worth
of data all at once.
27) What are the situations in which slow start is run? (CO4, K1)
There are two situations in which slow start runs: (1) at the very beginning of a
connection, when the source starts from a cold start with a congestion window of one
packet; and (2) when the connection goes dead while waiting for a timeout, after which
the source again uses slow start to restart the flow.
29) What is the use of SCTP Multiple stream service? (CO4, K1)
SCTP allows multi stream service in each connection, which is called association in
SCTP terminology. If one of the streams is blocked, the other streams can still
deliver their data. The idea is similar to multiple lanes on a highway. The figure
shows the idea of multi stream delivery.
(i) TCP segment format (ii) Silly window syndrome (Or) discuss the silly window
syndrome and explain how to avoid it. (CO4, K2)
2) With a neat architecture, explain TCP and its sliding window algorithm for
flow control. (Nov 15) (CO4,K1)
3) Define UDP. Discuss the operations of UDP. Explain the UDP checksum with
one example. (Nov/Dec 2011, May 16) (CO4,K2)
4) Identify and explain the states involved in TCP. Explain how TCP
manages a byte stream. (May 2018) (CO4,K3)
5)Explain the various fields of TCP header and working of the TCP protocol.
(May/June 2015, Nov/Dec 2015, Nov/Dec 2016) (CO4, K1)
8) Illustrate the features of TCP that can be used by the sender to insert
record boundaries into the byte stream. Also mention their original
purpose.(May 13) (CO4,K4)
10) With TCP's slow start and AIMD for congestion control, show how the
window size will vary for a transmission where every 5th packet is lost. Assume
an advertised window size of 50 MSS. (May 17) (CO4,K4)
11) What is SCTP? Explain the Association establishment of SCTP through four-way
handshake in detail.(May 17) (CO4,K1)
SUPPORTIVE ONLINE COURSES

S.No  Course Provider  Course Title                                         Link
1     Udemy            Introduction to Networking for Complete Beginners    https://www.udemy.com/course/introduction-to-networking-for-complete-beginners/
2     Coursera         Fundamentals of Network Communication                https://www.coursera.org/learn/fundamentals-network-communications/
3     Coursera         Peer-to-Peer Protocols and Local Area Networks       https://www.coursera.org/learn/peer-to-peer-protocols-local-area-networks/
4     Coursera         Packet Switching Networks and Algorithms             https://www.coursera.org/learn/packet-switching-networks-algorithms
5     Coursera         TCP/IP and Advanced Topics                           https://www.coursera.org/learn/tcp-ip-advanced
6     edX              Computer Networks and the Internet                   https://www.edx.org/course/computer-
REAL TIME APPLICATIONS IN DAY TO DAY LIFE
AND TO INDUSTRY
The real-time application-controlled TCP/IP trace NMI is a callable NMI that
provides real-time TCP/IP stack data to network management applications
based on filters that are set by an application trace instance. Each
application can use the NMI to open multiple trace instances and set unique
filters for each trace instance to obtain the desired data. Filters can be set
for the following trace types:
• Data trace
• Packet trace
The application will receive information about real-time data that is lost. The
information is provided in the form of lost trace records. The real-time data
that matches the application filters is provided in trace records. These trace
records are similar to the trace records that are provided by the real-time
TCP/IP network monitoring NMI.
As part of collecting the real-time data for the applications, the NMI uses 64-
bit shared storage that it shares with the application address space. The NMI
also uses 64-bit common storage that the TCP/IP address space owns.
CONTENT BEYOND SYLLABUS: UNIT – IV
TLS was derived from a security protocol called Secure Sockets Layer (SSL).
TLS ensures that no third party may eavesdrop or tamper with any
message.
There are several benefits of TLS:
1. Encryption
2. Interoperability
3. Algorithm flexibility
4. Ease of deployment
5. Ease of use
Working of TLS: https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/
MINI PROJECT

S.No  Name of the Assessment  Start Date  End Date  Portion
Prescribed Text Books & Reference Books
TEXT BOOK
Data Communications and Networking, Behrouz A. Forouzan, McGraw Hill Education,
5th Ed., 2017.
REFERENCES
1. Computer Networking- A Top Down Approach, James F. Kurose, University of
Massachusetts and Amherst Keith Ross, 8th Edition, 2021.
2. Computer Networks, Andrew S. Tanenbaum, Sixth Edition, Pearson, 2021.
3. Data Communications and Computer Networks, P.C. Gupta, Prentice-Hall of
India, 2006.
4. Computer Networks: A Systems Approach , L. L. Peterson and B. S. Davie,
Morgan Kaufmann, 3rd ed., 2003.
Thank You
Disclaimer:
This document is confidential and intended solely for the educational purpose of RMK Group of
Educational Institutions. If you have received this document through email in error, please notify the
system manager. This document contains proprietary information and is intended only to the
respective group / learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender immediately by e-mail if you
have received this document by mistake and delete this document from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing or taking any action in
reliance on the contents of this information is strictly prohibited.