CN UNIT IV Notes

Uploaded by anusad003

Please read this disclaimer before proceeding:
This document is confidential and intended solely for the educational purposes
of RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only for the respective group /
learning community as intended. If you are not the addressee, you should not
disseminate, distribute, or copy it through e-mail. Please notify the sender
immediately by e-mail if you have received this document by mistake and
delete this document from your system. If you are not the intended recipient,
you are notified that disclosing, copying, distributing, or taking any action in
reliance on the contents of this information is strictly prohibited.
22CS006
Introduction to Computer Networks

Department of Artificial Intelligence and Data Science

Batch/Year: 2022-2026 / III Year

Created by: Ms. N. Vani (ADS)

Date: 31.08.2024
Table of Contents

S.NO  CONTENTS                                           PAGE NO
1     Contents                                           5
2     Course Objectives                                  6
3     Pre Requisites (Course Names with Code)            6
4     Syllabus (With Subject Code, Name, LTPC details)   7
5     Course Outcomes                                    8
6     CO-PO/PSO Mapping                                  9
7     Lecture Plan                                       10
8     Activity Based Learning                            11
9     1. Introduction                                    13
      2. Transport Layer Protocols                       13
      3. Services                                        13
      4. Port Numbers                                    13
      5. User Datagram Protocol                          14
      6. Transmission Control Protocol                   17
      7. SCTP                                            33
10    Part A (Q & A)                                     40
11    Part B Qs                                          45
12    Supportive Online Certification Courses            46
13    Contents Beyond the Syllabus                       47
14    Assessment Schedule                                48
15    Prescribed Text Books & Reference Books            49

COURSE OBJECTIVES
• To study the fundamental concepts of computer networks and physical layer.
• To gain the knowledge of various protocols and techniques used in the data link layer.
• To learn the services of network layer and network layer protocols.
• To describe different protocols used in the transport layer.
• To understand the application layer protocols.

PRE REQUISITE

• Basics of Analog and Digital Transmission
• C Programming
• Data Structures
SYLLABUS
UNIT –I INTRODUCTION AND PHYSICAL LAYER
Data Communications – Network Types – Protocol Layering – Network Models (OSI, TCP/IP)
Networking Devices: Hubs, Bridges, Switches – Performance Metrics – Transmission media -
Guided media -Unguided media- Switching-Circuit Switching - Packet Switching

UNIT II DATA LINK LAYER


Introduction – Link-Layer Addressing- Error Detection and Correction - Wired LANs: Ethernet
- Wireless LANs – Introduction – IEEE 802.11, Bluetooth

UNIT – III NETWORK LAYER


Network Layer Services – IPV4 Addresses – Forwarding of IP Packets - Network Layer
Protocols: IP, ICMP v4 – Unicast Routing Algorithms – Protocols – Multicasting Basics – IPV6
Addressing – IPV6 Protocol.

UNIT IV TRANSPORT LAYER


Introduction – Transport Layer Protocols – Services – Port Numbers – User Datagram
Protocol –Transmission Control Protocol – SCTP.

UNIT V APPLICATION LAYER


Application layer-WWW and HTTP – FTP – Email –Telnet –SSH – DNS – SNMP
Course Outcomes

CO1: Understand the fundamental concepts of computer networks and physical layer.

CO2: Gain knowledge of various protocols and techniques used in the data link layer.

CO3: Learn the network layer services and network layer protocols.

CO4: Understand the various protocols used in the transport layer.

CO5: Analyze the various application layer protocols.


CO-PO/PSO Mapping

COs   PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
CO1    3   3   3   -   -   -   -   -   -   -    -    -    3    2    2
CO2    3   2   2   -   -   -   -   -   -   -    -    -    3    2    2
CO3    3   2   2   -   -   -   -   -   -   -    -    -    3    2    2
CO4    3   2   2   -   -   -   -   -   -   -    -    -    3    2    2
CO5    3   2   2   -   -   -   -   -   -   -    -    -    3    2    2
LECTURE PLAN
UNIT – IV

S.No  Topic                          Periods  Proposed Date  Actual Lecture Date  Taxonomy Level  Pertaining CO  Mode of Delivery
1     Introduction                   1        17.09.2024     17.09.2024           K2              CO4            ICT Tools
2     Transport Layer Protocols      1        18.09.2024     18.09.2024           K2              CO4            ICT Tools
3     Services                       1        19.09.2024     19.09.2024           K2              CO4            ICT Tools
4     Port Numbers                   1        20.09.2024     20.09.2024           K2              CO4            ICT Tools
5     User Datagram Protocol         1        21.09.2024     21.09.2024           K2              CO4            ICT Tools
6     User Datagram Protocol         1        24.09.2024     24.09.2024           K2              CO4            ICT Tools
7     Transmission Control Protocol  1        25.09.2024     25.09.2024           K2              CO4            ICT Tools
8     SCTP                           1        26.09.2024     26.09.2024           K2              CO4            ICT Tools
ACTIVITY BASED LEARNING: UNIT – IV

WORD SEARCH GAME ON TRANSPORT LAYER

A L D C O T M Z B P F F C L X
R C S O G R G O E D M L U Y E
N E K N T A N J C J I O M P L
K R E N U T I I W E S W U P P
V R R E O S X U N F E C S E U
H O B C E W E T L J R O K E D
E R E T M O L M O M V N C R L
K C R I I L P E S A E T E T L
A O O O T S I X D R R R H O U
H N S N O I T S E G N O C P F
S T H L G L L K V A E L Y E W
D R Y E D K U L O T L M F E H
N O I S S I M S N A R T E R N
A L H S O C K E T D N Q R N M
H R E Y A L T R O P S N A R T

11
ACTIVITY BASED LEARNING: UNIT – IV

SOLUTION FOR WORD SEARCH GAME ON TRANSPORT LAYER

TRANSPORTLAYER
FLOWCONTROL
ERRORCONTROL
CONNECTIONLESS
DATAGRAM
QUALITYOFSERVICE
CHECKSUM
CONGESTION
SLOWSTART
HANDSHAKE
ACKNOWLEDGEMENT
MULTIPLEXING
TIMEOUT
KERBEROS
RETRANSMISSION
PEERTOPEER
SOCKET
CLIENT
SERVER
MESSAGE
ACKNOWLEDGEMENT
FULLDUPLEX

12
TRANSPORT LAYER
Introduction – Transport Layer Protocols – Services – Port Numbers –
User Datagram Protocol – Transmission Control Protocol – SCTP.

1. INTRODUCTION

The transport layer is located between the application layer and the network layer. It
provides a process-to-process communication between two application layers, one at
the local host and the other at the remote host. Communication is provided using a
logical connection, which means that the two application layers, which can be located
in different parts of the globe, assume that there is an imaginary direct connection
through which they can send and receive messages.

4.1.1 Transport-Layer Services

The transport layer is responsible for providing services to the application layer; it
receives services from the network layer.

Process-to-Process Communication

A process is an application-layer entity (running program) that uses the services of


the transport layer. A transport-layer protocol is responsible for delivery of the
message to the appropriate process.

Addressing: Port Numbers

• To achieve process-to-process communication, the client-server paradigm is used.


A process on the local host, called a client, needs services from a process usually
on the remote host, called a server.

• For communication, we must define the local host, local process, remote host, and
remote process. The local host and the remote host are defined using IP
addresses. To define the processes, we need second identifiers, called port
numbers. In the TCP/IP protocol suite, the port numbers are integers between 0
and 65,535 (16 bits).

• The client program defines itself with a port number, called the ephemeral port
  number. The word ephemeral means “short-lived” and is used because the life of a
  client is normally short. An ephemeral port number is recommended to be greater
  than 1023 for some client/server programs to work properly.

• The server process must also define itself with a port number. This port number,
however, cannot be chosen randomly. TCP/IP has decided to use universal port
numbers for servers.
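The split between a server's fixed port and a client's ephemeral port can be seen with a short socket sketch. This is a minimal loopback demo, not part of the original notes: for portability the server binds port 0 so the OS picks a free port, whereas a real server would bind a universal port such as 80.

```python
import socket
import threading

# The server binds a fixed, advertised port (here chosen by the OS via
# port 0 so the demo runs anywhere; a real server uses a universal port).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server_port = server.getsockname()[1]   # the server's well-defined port
server.listen(1)

def serve():
    conn, _ = server.accept()
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

# The client does not choose its own port; the OS assigns an ephemeral one.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
ephemeral_port = client.getsockname()[1]
reply = client.recv(1024)
t.join()
client.close()
server.close()

print(ephemeral_port > 1023)  # ephemeral ports lie above 1023
print(reply)
```

The client never needed to know its own port in advance; it only needed the server's (IP address, port) pair.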
Encapsulation and Decapsulation

• To send a message from one process to another, the transport-layer protocol


encapsulates and decapsulates messages.

• Encapsulation happens at the sender site. When a process has a message to send, it
passes the message to the transport layer along with a pair of socket addresses and
some other pieces of information, which depend on the transport-layer protocol.

• The transport layer receives the data and adds the transport-layer header. The
packets at the transport layer in the Internet are called user datagrams,
segments, or packets, depending on what transport-layer protocol we use.

• Decapsulation happens at the receiver site. When the message arrives at the
destination transport layer, the header is dropped and the transport layer delivers the
message to the process running at the application layer. The sender socket address is
passed to the process in case it needs to respond to the message received.

Multiplexing and Demultiplexing

Whenever an entity accepts items from more than one source, this is referred to as
multiplexing (many to one); whenever an entity delivers items to more than one
source, this is referred to as demultiplexing (one to many). The transport layer at
thesource performs multiplexing; the transport layer at the destination performs
demultiplexing.
• The processes P1 and P3 need to send requests to the corresponding server
process running in a server.
• The client process P2 needs to send a request to the corresponding server process running
at another server. The transport layer at the client site accepts three messages from the
three processes and creates three packets. It acts as a multiplexer.

• The packets 1 and 3 use the same logical channel to reach the transport layer of the first
server. When they arrive at the server, the transport layer does the job of a demultiplexer
and distributes the messages to two different processes.

• The transport layer at the second server receives packet 2 and delivers it to the
corresponding process. Note that we still have demultiplexing although there is only one
message.
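The scenario above can be modeled as a toy demultiplexer (the port numbers and message strings are hypothetical): the receiving transport layer reads the destination port of each packet and hands the message to the process bound to that port.

```python
from collections import defaultdict

# port number -> messages delivered to the process bound to that port
process_inbox = defaultdict(list)

def demultiplex(packets):
    """Deliver each (dest_port, message) packet to the matching process."""
    for dest_port, message in packets:
        process_inbox[dest_port].append(message)

# Packets 1 and 3 target the same server process; packet 2 targets another.
demultiplex([(13, "request from P1"),
             (80, "request from P2"),
             (13, "request from P3")])

print(process_inbox[13])  # two messages reach one process
print(process_inbox[80])  # one message still requires demultiplexing
```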

Flow Control at Transport Layer


• In communication at the transport layer, we are dealing with four entities: sender
process, sender transport layer, receiver transport layer, and receiver
process.
• The sending process at the application layer is only a producer. It produces
message chunks and pushes them to the transport layer.
• The sending transport layer has a double role: it is both a consumer and a producer. It
consumes the messages pushed by the producer. It encapsulates the messages in
packets and pushes them to the receiving transport layer.

• The receiving transport layer also has a double role: it is the consumer for the packets
received from the sender and the producer that decapsulates the messages and delivers
them to the application layer.

• The last delivery, to the receiver process, is normally a pulling delivery; the transport layer
waits until the application-layer process asks for messages.
Buffers
Although flow control can be implemented in several ways, one of the solutions is normally
to use two buffers: one at the sending transport layer and the other at the receiving
transport layer.
A buffer is a set of memory locations that can hold packets at the sender and receiver. The
flow-control communication can occur by sending signals from the consumer to the
producer. When the buffer of the sending transport layer is full, it informs the application
layer to stop passing chunks of messages; when there are some vacancies, it informs the
application layer that it can pass message chunks again.
When the buffer of the receiving transport layer is full, it informs the sending transport layer
to stop sending packets. When there are some vacancies, it informs the sending transport
layer that it can send packets again.
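The buffer signaling described above can be sketched as a bounded queue (the capacity and chunk values are illustrative, not from the notes): a full buffer tells the producer to stop; a vacancy lets it resume.

```python
from collections import deque

CAPACITY = 4
send_buffer = deque()           # the sending transport layer's buffer

def can_accept_chunk():
    """Signal to the application layer: False means 'stop passing chunks'."""
    return len(send_buffer) < CAPACITY

# The application layer tries to push six chunks; the last two are refused.
refused = 0
for chunk in range(6):
    if can_accept_chunk():
        send_buffer.append(chunk)
    else:
        refused += 1

print(len(send_buffer), refused)   # buffer full at 4; 2 chunks held back

send_buffer.popleft()              # one packet sent: a vacancy opens
print(can_accept_chunk())          # the application layer may resume
```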
Error Control

Error control at the transport layer is responsible for


1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.

Sliding Window

• Since the sequence numbers use modulo 2^m, a circle can represent the sequence
numbers from 0 to 2^m − 1. The buffer is represented as a set of slices, called the sliding
window, that occupies part of the circle at any time.

• At the sender site, when a packet is sent, the corresponding slice is marked. When all the
slices are marked, it means that the buffer is full and no further messages can be accepted
from the application layer.

• When an acknowledgment arrives, the corresponding slice is unmarked. If some
consecutive slices from the beginning of the window are unmarked, the window slides over
the range of the corresponding sequence numbers to allow more free slices at the end of
the window.

• The sequence numbers are in modulo 16 (m = 4) and the size of the window is 7.
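The marking and sliding described above can be sketched for the text's parameters (m = 4, so sequence numbers are modulo 16, window size 7). This is a minimal model, not a full protocol implementation.

```python
M = 4
SEQ_SPACE = 2 ** M        # sequence numbers 0 .. 15 (modulo 2^m)
WINDOW_SIZE = 7

base = 0                  # oldest slice still inside the window
next_seq = 0              # next sequence number to assign
outstanding = set()       # marked slices: sent, awaiting acknowledgment

def send():
    """Mark the next slice; refuse if all window slices are marked."""
    global next_seq
    if len(outstanding) == WINDOW_SIZE:
        return False
    outstanding.add(next_seq)
    next_seq = (next_seq + 1) % SEQ_SPACE
    return True

def acknowledge(seq):
    """Unmark a slice, then slide over consecutive unmarked leading slices."""
    global base
    outstanding.discard(seq)
    while base != next_seq and base not in outstanding:
        base = (base + 1) % SEQ_SPACE

for _ in range(3):        # send packets 0, 1, 2
    send()

acknowledge(0)
print(base)               # 1: window slid past slice 0
acknowledge(2)
print(base)               # 1: slice 1 still marked, so no slide yet
acknowledge(1)
print(base)               # 3: window slides over slices 1 and 2 together
```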
Connectionless and Connection-Oriented Protocols

• A transport-layer protocol can provide two types of services: connectionless and


connection-oriented. At the transport layer, we are not concerned about the physical paths
of packets (we assume a logical connection between two transport layers).

• Connectionless service at the transport layer means independency between packets;


connection-oriented means dependency. Let us elaborate on these two services.

Connectionless Service

• In a connectionless service, the source process (application program) needs to divide its
message into chunks of data of the size acceptable by the transport layer and deliver them
to the transport layer one by one.

• The transport layer treats each chunk as a single unit without any relation between the
chunks. When a chunk arrives from the application layer, the transport layer encapsulates
it in a packet and sends it.

Connectionless service
• The figure shows that at the client site, the three chunks of messages are delivered to
the client transport layer in order (0, 1, and 2). Because of the extra delay in the
transportation of the second packet, the delivery of messages at the server is not in order
(0, 2, 1).

• If these three chunks of data belong to the same message, the server process may have
received a strange message. The situation would be worse if one of the packets were
lost. Since there is no numbering on the packets, the receiving transport layer has no idea
that one of the messages has been lost. It just delivers two chunks of data to the server
process.

• The above two problems arise from the fact that the two transport layers do not
coordinate with each other. The receiving transport layer does not know when the first
packet will come nor when all of the packets have arrived.

• We can say that no flow control, error control, or congestion control can be effectively
implemented in a connectionless service.
Connection-Oriented Service

In a connection-oriented service, the client and the server first need to establish a logical
connection between themselves. The data exchange can only happen after the
connection establishment. After data exchange, the connection needs to be torn down.

TRANSPORT-LAYER PROTOCOLS

Simple Protocol

Our first protocol is a simple connectionless protocol with neither flow nor error control. The
receiver can immediately handle any packet it receives. In other words, the receiver can
never be overwhelmed with incoming packets.

Simple protocol
The transport layer at the sender gets a message from its application layer, makes a packet
out of it, and sends the packet. The transport layer at the receiver receives a packet from its
network layer, extracts the message from the packet, and delivers the message to its
application layer.
FSMs

• The sender site should not send a packet until its application layer has a message to send.
The receiver site cannot deliver a message to its application layer until a packet arrives.

• We can show these requirements using two FSMs. Each FSM has only one state, the ready
state. The sending machine remains in the ready state until a request comes from the
process in the application layer. When this event occurs, the sending machine
encapsulates the message in a packet and sends it to the receiving machine.

• The receiving machine remains in the ready state until a packet arrives from the sending
machine. When this event occurs, the receiving machine decapsulates the message out of
the packet and delivers it to the process at the application layer.

Figure: FSMs for the simple protocol

• The communication using this protocol is very simple. The sender sends packets one after
another without even thinking about the receiver.

Stop-and-Wait Protocol

• Stop-and-Wait protocol uses both flow and error control. Both the sender and the
receiver use a sliding window of size 1. The sender sends one packet at a time and waits
for an acknowledgment before sending the next one.

• To detect corrupted packets, we need to add a checksum to each data packet. When a
packet arrives at the receiver site, it is checked. If its checksum is incorrect, the packet is
corrupted and silently discarded.
• The silence of the receiver is a signal for the sender that a packet was either corrupted or
lost.
• Every time the sender sends a packet, it starts a timer. If an acknowledgment arrives
before the timer expires, the timer is stopped and the sender sends the next packet (if it
has one to send).
• If the timer expires, the sender resends the previous packet, assuming that the packet
was either lost or corrupted. This means that the sender needs to keep a copy of the
packet until its acknowledgment arrives.
• The Stop-and-Wait protocol is a connection-oriented protocol that provides flow and error
control.

Sequence Numbers
• To prevent duplicate packets, the protocol uses sequence numbers and acknowledgment
numbers. A field is added to the packet header to hold the sequence number of that
packet. Assume we have used x as a sequence number; we only need to use x + 1 after
that. There is no need for x + 2. Three things can happen.

• 1. The packet arrives safe and sound at the receiver site; the receiver sends an
acknowledgment. The acknowledgment arrives at the sender site, causing the sender to
send the next packet numbered x + 1.

• 2. The packet is corrupted or never arrives at the receiver site; the sender resends the
packet (numbered x) after the time-out. The receiver returns an acknowledgment.

• 3. The packet arrives safe and sound at the receiver site; the receiver sends an
acknowledgment, but the acknowledgment is corrupted or lost. The sender resends the
packet (numbered x) after the time-out. Note that the packet here is a duplicate. The
receiver can recognize this fact because it expects packet x + 1 but packet x was received.
Acknowledgment Numbers
• The acknowledgment numbers always announce the sequence number of the next packet
expected by the receiver. For example, if packet 0 has arrived safe and sound, the receiver
sends an ACK with acknowledgment 1 (meaning packet 1 is expected next).

• If packet 1 has arrived safe and sound, the receiver sends an ACK with acknowledgment 0
(meaning packet 0 is expected).
Receiver

The receiver is always in the ready state. Three events may occur:
a. If an error-free packet with seqNo = R arrives, the message in the packet is delivered to
the application layer. The window then slides, R = (R + 1) modulo 2. Finally, an ACK with
ackNo = R is sent.
b. If an error-free packet with seqNo ≠ R arrives, the packet is discarded, but an ACK with
ackNo = R is sent.
c. If a corrupted packet arrives, the packet is discarded.
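The three receiver events can be sketched directly (the packet names are illustrative); R is the expected sequence number, modulo 2 since Stop-and-Wait uses m = 1.

```python
R = 0            # next sequence number the receiver expects
delivered = []   # messages handed to the application layer

def receive(seq_no, message, corrupted=False):
    """Return the ackNo sent, or None if the packet is silently discarded."""
    global R
    if corrupted:
        return None                 # event c: discard, no ACK
    if seq_no == R:                 # event a: deliver, slide, ACK
        delivered.append(message)
        R = (R + 1) % 2
        return R
    return R                        # event b: duplicate, re-ACK expected

print(receive(0, "pkt0"))                  # delivered, ACK 1
print(receive(0, "pkt0"))                  # duplicate (lost ACK), re-ACK 1
print(receive(1, "pkt1", corrupted=True))  # corrupted: silently discarded
print(receive(1, "pkt1"))                  # delivered, ACK 0
print(delivered)
```

Note that the duplicate in the second call never reaches the application layer: the sequence number is how the receiver recognizes it.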

Pipelining

In networking and in other areas, a task is often begun before the previous task has ended.
This is known as pipelining. There is no pipelining in the Stop-and-Wait protocol because a
sender must wait for a packet to reach the destination and be acknowledged before the next
packet can be sent.

Go-Back-N Protocol (GBN)

To improve the efficiency of transmission (to fill the pipe), multiple packets must be in
transition while the sender is waiting for acknowledgment. The key to Go-Back-N (GBN) is
that we can send several packets before receiving acknowledgments, but the receiver can
only buffer one packet. We keep a copy of the sent packets until the acknowledgments
arrive.

Sequence Numbers
The sequence numbers are modulo 2m, where m is the size of the sequence number field in
Bits.

Acknowledgment Numbers
An acknowledgment number in this protocol is cumulative and defines the sequence number
of the next packet expected. For example, if the acknowledgment number (ackNo) is 7, it
means all packets with sequence numbers up to 6 have arrived, safe and sound, and the
receiver is expecting the packet with sequence number 7.
Send Window
The send window is an imaginary box covering the sequence numbers of the data packets
that can be in transit or can be sent. In each window position, some of the sequence
numbers define the packets that have been sent; others define those that can be sent. The
maximum size of the window is 2^m − 1. The figure shows a sliding window of size 7 (m = 3)
for the Go-Back-N protocol.
Figure: Send window for Go-Back-N

• The send window at any time divides the possible sequence numbers into four regions.
• The first region, left of the window, defines the sequence numbers belonging to packets
that are already acknowledged. The sender does not worry about these packets and
keeps no copies of them.
• The second region, colored, defines the range of sequence numbers belonging to the
packets that have been sent, but have an unknown status. The sender needs to wait
to find out if these packets have been received or were lost. We call these outstanding
packets.
• The third range, white in the figure, defines the range of sequence numbers for
packets that can be sent; however, the corresponding data have not yet been received
from the application layer.
• Finally, the fourth region, right of the window, defines sequence numbers that cannot be
used until the window slides.
Receive Window
• The receive window makes sure that the correct data packets are received and that the
correct acknowledgments are sent. In Go-Back-N, the size of the receive window is always
1. The receiver is always looking for the arrival of a specific packet. Any packet arriving out
of order is discarded and needs to be resent.
• The figure shows the receive window. Only one variable, Rn (receive window, next packet
expected), is needed to define this abstraction. The sequence numbers to the left of the
window belong to the packets already received and acknowledged; the sequence numbers
to the right of this window define the packets that cannot be received.
• Any received packet with a sequence number in these two regions is discarded. Only a
packet with a sequence number matching the value of Rn is accepted and acknowledged.
The receive window also slides, but only one slot at a time. When a correct packet is
received, the window slides, Rn = (Rn + 1) modulo 2^m.
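The Go-Back-N receiver behavior above can be sketched for the text's parameters (m = 3; packet names are illustrative): only the packet matching Rn is accepted, everything else is discarded, and the cumulative ackNo always announces the next packet expected.

```python
M = 3
SEQ_SPACE = 2 ** M      # sequence numbers modulo 8; send window up to 7

Rn = 0                  # receive window: next packet expected
delivered = []          # in-order messages passed to the application layer

def receive(seq_no, message):
    """Return the cumulative acknowledgment number (next packet expected)."""
    global Rn
    if seq_no == Rn:
        delivered.append(message)
        Rn = (Rn + 1) % SEQ_SPACE   # the window slides one slot
    # out-of-order packets are discarded; the ACK still announces Rn
    return Rn

print(receive(0, "p0"))   # accepted -> ackNo 1
print(receive(2, "p2"))   # out of order, discarded -> ackNo still 1
print(receive(1, "p1"))   # accepted -> ackNo 2
print(delivered)          # only in-order packets reach the application
```

The discarded packet p2 must be resent by the sender after packet 1 is acknowledged, which is exactly the "go back" behavior that wastes bandwidth under heavy loss.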

Selective-Repeat Protocol
The Go-Back-N protocol simplifies the process at the receiver. The receiver keeps track of
only one variable, and there is no need to buffer out-of-order packets; they are simply
discarded.
If the network layer is losing many packets because of congestion in the network, the
resending of all of these outstanding packets makes the congestion worse, and eventually
more packets are lost.

• The Selective-Repeat protocol also uses two windows: a send window and a
receive window.
• First, the maximum size of the send window is much smaller; it is 2^(m−1). Second, the
receive window is the same size as the send window. For example, if m = 4, the sequence
numbers go from 0 to 15, but the maximum size of the window is just 8 (it is 15 in the Go-
Back-N protocol).
• The Selective-Repeat protocol allows as many packets as the size of the receive window to
arrive out of order and be kept until there is a set of consecutive packets to be delivered
to the application layer.
• Because the sizes of the send window and receive window are the same, all the packets in
the send window can arrive out of order and be stored until they can be delivered.
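The two window-size limits can be compared numerically for the text's example of m = 4:

```python
def gbn_max_window(m):
    """Go-Back-N: up to 2^m - 1 outstanding packets."""
    return 2 ** m - 1

def sr_max_window(m):
    """Selective-Repeat: send and receive windows of at most 2^(m-1)."""
    return 2 ** (m - 1)

print(gbn_max_window(4))  # 15
print(sr_max_window(4))   # 8
```

The smaller Selective-Repeat window is the price paid for letting the receiver buffer out-of-order packets: with a larger window, a retransmitted old packet could be mistaken for a new one with the same sequence number.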



Bidirectional Protocols: Piggybacking

• The four protocols we discussed earlier in this section are all unidirectional: data packets
flow in only one direction and acknowledgments travel in the other direction.
• In real life, data packets normally flow in both directions: from client to server
and from server to client. This means that acknowledgments also need to flow in both
directions.
• A technique called piggybacking is used to improve the efficiency of the bidirectional
protocols. When a packet is carrying data from A to B, it can also carry acknowledgment
feedback about arrived packets from B; when a packet is carrying data from B to A, it can
also carry acknowledgment feedback about the arrived packets from A.
Services
Each protocol provides a different type of service and should be used appropriately.
UDP
UDP is an unreliable connectionless transport-layer protocol used for its simplicity and
efficiency in applications where error control can be provided by the application- layer
process.
TCP
TCP is a reliable connection-oriented protocol that can be used in any
application where reliability is important.
SCTP
SCTP is a new transport-layer protocol that combines the features of UDP and TCP.
Port Numbers
A transport-layer protocol creates a process-to-process communication.
These protocols use port numbers to accomplish this.
Port numbers provide end-to-end addresses at the transport layer and allow multiplexing
and demultiplexing at this layer.
Most popularly used Port numbers are listed in Figure 4.2.
Reference video : https://www.youtube.com/watch?v=qsZ8Qcm6_8k

Figure – 4.1 - Position of transport-layer protocols in the TCP/IP protocol


suite
Figure 4.2 - Some well-known ports used with UDP and TCP
USER DATAGRAM PROTOCOL
∙ The User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.
∙ It provides process-to-process communication instead of host-to-host communication.
∙ UDP is a very simple protocol using a minimum of overhead.
∙ If a process wants to send a small message without considering reliability, it
can use UDP.
∙ Sending a small message using UDP takes much less interaction between the sender
and receiver than using TCP.
User Datagram
∙ UDP packets are called user datagrams (Figure 4.3).
∙ They have a fixed-size header of 8 bytes made of four fields, each of 2 bytes (16 bits).

Figure 4.3 - Format of a user datagram packet


The first two fields define the source and destination port numbers.
The third field defines the total length of the user datagram, header plus data (0
to 65,535 bytes).
The last field holds the optional checksum.
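The four 2-byte fields can be packed and unpacked with Python's struct module (the port numbers and payload here are illustrative, and the checksum is left at 0). All fields are big-endian, as on the wire.

```python
import struct

def pack_udp_header(src_port, dst_port, length, checksum):
    """Four 2-byte fields: source port, dest port, total length, checksum."""
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

payload = b"hello"
header = pack_udp_header(52000, 53, 8 + len(payload), 0)
datagram = header + payload

src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
print(len(header))        # 8 -> the fixed-size UDP header
print((src, dst, length)) # length covers header plus data
```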
UDP Services
1. Process-to-Process Communication
2. Connectionless Services
3. Flow Control
4. Error Control
5. Congestion Control
6. Encapsulation and Decapsulation
7. Queuing
8. Multiplexing and Demultiplexing
1. Process-to-Process Communication
UDP provides process-to-process communication using socket addresses.
2. Connectionless Services
UDP provides a connectionless service.
Each user datagram sent by UDP is an independent datagram.

The user datagrams are not numbered.


There is no connection establishment and no connection termination.
Each user datagram can travel on a different path.
One of the ramifications of being connectionless is that a process cannot hand UDP a
stream of data and expect UDP to chop it into related user datagrams.
Only those processes sending short messages, messages less than 65,507 bytes
(65,535 minus 8 bytes for the UDP header and minus 20 bytes for the
IP header), can use UDP.
3. Flow Control
There is no flow control, and hence no window mechanism.

The receiver may overflow with incoming messages.


4. Error Control
There is no error control mechanism in UDP except for the checksum.

Sender does not know if a message has been lost or duplicated.


When the receiver detects an error through the checksum, the user datagram is
silently discarded.
Checksum
UDP checksum calculation includes three sections (Figure 4.4):
∙ Pseudoheader
∙ UDP header
∙ Data coming from the application layer
If the checksum did not include the pseudoheader, a user datagram might arrive
safe and sound; however, if the IP header is corrupted, it might be delivered to
the wrong host. Including the pseudoheader lets the receiver detect such misdelivery.
The protocol field is added to ensure that the packet belongs to UDP, and
not to TCP.
The value of the protocol field for UDP is 17. If this value is changed during
transmission, the checksum calculation at the receiver will detect it and UDP
drops the packet.
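A sketch of the checksum calculation over pseudoheader + UDP header + data, assuming illustrative IP addresses and ports. The pseudoheader repeats the source and destination IP addresses, a zero byte, the protocol value 17, and the UDP length; the one's-complement sum over everything, including the computed checksum, must come out as 0xFFFF at the receiver.

```python
import struct

def ones_complement_sum16(data):
    """One's-complement sum of 16-bit big-endian words with end-around carry."""
    if len(data) % 2:
        data += b"\x00"                  # pad to whole 16-bit words
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # wrap carries around
    return total

def udp_checksum(src_ip, dst_ip, udp_header_and_data):
    """Checksum over pseudoheader (src IP, dst IP, 0, protocol 17, UDP length)."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17,
                                           len(udp_header_and_data))
    return (~ones_complement_sum16(pseudo + udp_header_and_data)) & 0xFFFF

src_ip = bytes([192, 168, 1, 2])
dst_ip = bytes([192, 168, 1, 7])
# UDP header with the checksum field set to 0, then the data.
udp = struct.pack("!HHHH", 52000, 53, 13, 0) + b"hello"

csum = udp_checksum(src_ip, dst_ip, udp)

# Receiver-side check: with the checksum filled in, the sum is all ones.
udp_with_csum = udp[:6] + struct.pack("!H", csum) + udp[8:]
pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_with_csum))
print(ones_complement_sum16(pseudo + udp_with_csum) == 0xFFFF)
```

Because the destination IP address is inside the sum, a datagram misdelivered due to a corrupted IP header fails this check and is silently discarded.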
5. Congestion Control
UDP does not provide congestion control.
UDP assumes that the packets sent are small and cannot create congestion in the
network.
6. Encapsulation and Decapsulation
To send a message from one process to another, the UDP protocol encapsulates
and decapsulates messages.
7. Queuing
In UDP, queues are associated with ports.
At the client site, when a process starts, it requests a port number from the
operating system.
Some implementations create both an incoming and an outgoing queue
associated with each process. Other implementations create only an incoming
queue associated with each process.

Figure 4.4-Pseudoheader for checksum calculation


8. Multiplexing and Demultiplexing
Several processes may want to use the services of UDP.
To handle this situation, UDP multiplexes and demultiplexes.
UDP Applications
UDP is suitable for a process that requires simple request-response
communication with little concern for flow and error control.

Note: It is not usually used for a process such as FTP that needs to send bulk data.
∙ UDP is suitable for a process with internal flow- and error-control mechanisms.
For example, the Trivial File Transfer Protocol (TFTP) process includes flow and
error control.
∙ UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software but not in the TCP software.
∙ UDP is used for management processes such as SNMP (Simple Network
Management Protocol).
∙ UDP is used for some route updating protocols such as the Routing Information
Protocol (RIP).
∙ UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.
Reference Video : https://www.youtube.com/watch?v=O22zHLcHHN0

TRANSMISSION CONTROL PROTOCOL (TCP)


Transmission Control Protocol (TCP) is a connection-oriented, reliable protocol.
It defines connection establishment, data transfer, and connection teardown
phases to provide a connection-oriented service.
TCP uses a combination of GBN and SR protocols to provide reliability.
TCP is the most common transport-layer protocol in the Internet.
TCP Services
Process-to-Process Communication
As with UDP, TCP provides process-to-process communication using port numbers.
Stream Delivery Service
TCP is a stream-oriented protocol. (Figure 4.5)
TCP allows the sending process to deliver data as a stream of bytes and allows the
receiving process to obtain data as a stream of bytes.
The sending process produces (writes to) the stream and the receiving process
consumes (reads from) it.
Figure 4.6 - Sending and Receiving Buffers
Sending and Receiving Buffers
At the sender, the buffer has three types of chambers:
Empty chambers that can be filled by the sending process (producer).
Chambers that hold bytes that have been sent but not yet acknowledged.
Chambers that contain bytes to be sent by the sending TCP.
The operation of the buffer at the receiver is simpler.
The circular buffer is divided into two types of chambers:
Empty chambers to be filled by bytes received from the network.
Chambers that contain received bytes that can be read by the receiving process.
When a byte is read by the receiving process, the chamber is recycled and added
to the pool of empty chambers.
Segments
TCP groups a number of bytes together into a packet called a segment.
TCP adds a header to each segment (for control purposes) and delivers the
segment to the network layer for transmission.

The segments are encapsulated in an IP datagram and transmitted. Segments
are not of the same size.


Full-Duplex Communication- TCP offers full-duplex service, where data can
Flow in both directions at the same time.
Multiplexing and Demultiplexing- TCP performs multiplexing at the sender
and demultiplexing at the receiver.
Connection-Oriented Service - TCP is a connection-oriented protocol, a
connection needs to be established for each pair of processes.
The two TCPs establish a logical connection between them.
Data are exchanged in both directions.


The connection is terminated.
Figure 4.7 – TCP Segments
Segment Format
The segment consists of a header of 20 to 60 bytes, followed by data from the
application program.
The header is 20 bytes if there are no options and up to 60 bytes if it contains
options.
Source port address - This is a 16-bit field that defines the port number of the
application program in the host that is sending the segment.
Destination port address - This is a 16-bit field that defines the port number of
the application program in the host that is receiving the segment.
Sequence number- This 32-bit field defines the number assigned to the first
byte of data contained in this segment.
Acknowledgment number- This 32-bit field defines the byte number that the
receiver of the segment is expecting to receive from the other party.
Header length - This 4-bit field indicates the number of 4-byte words in the TCP
header. The length of the header can be between 20 and 60 bytes.
Control - This field defines 6 different control bits or flags. These bits enable flow
control, connection establishment and termination, connection abortion, and the
mode of data transfer in TCP.

Figure 4.8 – TCP Segment Format


Figure 4. 9 – Control Field
Window size - This field defines the window size of the sending TCP in bytes.
Maximum size of the window is 65,535 bytes.
Checksum - This 16-bit field contains the checksum.
Urgent pointer - This 16-bit field, which is valid only if the urgent flag is set, is
used when the segment contains urgent data.
Options- There can be up to 40 bytes of optional information in the TCP header.
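The fixed-format fields above can be made concrete by packing and unpacking a 20-byte header (no options). This sketch is not part of the original notes; the port, sequence, and flag values are arbitrary examples.

```python
import struct

# TCP fixed header layout (20 bytes, no options), per the fields above:
# src port (16), dst port (16), seq (32), ack (32),
# data offset/reserved/flags (16), window (16), checksum (16), urgent ptr (16)
TCP_HDR = struct.Struct("!HHIIHHHH")

def build_header(src, dst, seq, ack, flags, window, checksum=0, urg=0):
    # Header length = 5 four-byte words (20 bytes), stored in the top 4 bits.
    offset_flags = (5 << 12) | (flags & 0x3F)
    return TCP_HDR.pack(src, dst, seq, ack, offset_flags, window, checksum, urg)

def parse_header(raw):
    src, dst, seq, ack, off_flags, window, checksum, urg = TCP_HDR.unpack(raw[:20])
    return {
        "src": src, "dst": dst, "seq": seq, "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # 4-bit field counts 4-byte words
        "flags": off_flags & 0x3F,            # the 6 control bits
        "window": window,
    }

hdr = build_header(src=5000, dst=80, seq=1000, ack=0, flags=0x02, window=65535)  # SYN
info = parse_header(hdr)
```

The 4-bit header-length field counts 4-byte words, so its minimum value of 5 corresponds to the 20-byte header built here.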
TCP Connection Establishment
TCP establishes a logical path between the source and destination.
This requires three phases:
Connection establishment
Data transfer

Connection termination.
The connection establishment in TCP is called three-way handshaking.
The three-way handshake involves the exchange of three messages between the
client and the server.
1. First, the client (the active participant) sends a segment to the server (the passive
participant) stating the initial sequence number it plans to use (Flags = SYN,
SequenceNum = x).

Figure 4.10 – Connection Establishment using three way handshaking


Figure 4.11 – Data
Transfer

2.The server then responds with a single segment that both acknowledges the
client’s sequence number (Flags = ACK, Ack = x + 1) and states its own beginning
sequence number (Flags = SYN, SequenceNum = y). That is, both the SYN and ACK
bits are set in the Flags field of this second message.

3. Finally, the client responds with a third segment that acknowledges the server's
sequence number (Flags = ACK, Ack = y + 1).
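The three steps above can be sketched as a short simulation; x and y stand for the client's and server's hypothetical initial sequence numbers, as in the text.

```python
# A minimal sketch of the three-way handshake sequence numbers.
# x and y are hypothetical initial sequence numbers, not from a real connection.
def three_way_handshake(x, y):
    msgs = [
        ("client->server", {"SYN": True, "seq": x}),                              # step 1
        ("server->client", {"SYN": True, "ACK": True, "seq": y, "ack": x + 1}),   # step 2
        ("client->server", {"ACK": True, "ack": y + 1}),                          # step 3
    ]
    return msgs

msgs = three_way_handshake(x=100, y=500)
```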

TCP Connection Termination


Three-way handshaking is also used for connection termination.

The client TCP, after receiving a close command from the client process, sends
the first segment, a FIN segment in which the FIN flag is set.

The server TCP, after receiving the FIN segment, sends the second segment, a
FIN + ACK segment, to confirm the receipt of the FIN segment from the client and
at the same time to announce the closing of the connection in the other direction.

The client TCP sends the last segment, an ACK segment, to confirm the receipt of
the FIN segment from the server TCP.
Figure 4.12 – Connection termination using three way handshaking

TCP State Transition Diagram


∙ This diagram shows only the states involved in opening a connection
(everything above ESTABLISHED) and in closing a connection (everything
below ESTABLISHED). Everything that goes on while a connection is open—that
is, the operation of the sliding window algorithm—is hidden in the
ESTABLISHED state.
∙ Each circle denotes a state that one end of a TCP connection can find itself in.
∙ All connections start in the CLOSED state.
∙ Each arc is labeled with a tag of the form event/action.
∙ Thus, if a connection is in the LISTEN state and a SYN segment arrives (i.e., a
segment with the SYN flag set), the connection makes a transition to the SYN
RCVD state and takes the action of replying with an ACK + SYN segment.
∙ When opening a connection, the server first invokes a passive open operation
on TCP, which causes TCP to move to the LISTEN state.
∙ At some later time, the client does an active open, which causes its end of the
connection to send a SYN segment to the server and to move to the SYN SENT
state.
∙ When the SYN segment arrives at the server, it moves to the SYN RCVD state
and responds with a SYN+ACK segment.
∙ The arrival of this segment causes the client to move to the ESTABLISHED
state and to send an ACK back to the server.
∙ When this ACK arrives, the server finally moves to the ESTABLISHED state. In
other words, we have just traced the three-way handshake.
Figure 4.13 – States for TCP

∙ Now, turning to the process of terminating a connection, the important thing to
keep in mind is that the application process on both sides of the connection must
independently close its half of the connection.

∙ If only one side closes the connection, then this means it has no more data to
send, but it is still available to receive data from the other side.

∙ Thus, on any one side there are three combinations of transitions that get a
connection from the ESTABLISHED state to the CLOSED state:
This side closes first:
ESTABLISHED → FIN WAIT 1 → FIN WAIT 2 → TIME WAIT → CLOSED.
The other side closes first:
ESTABLISHED → CLOSE WAIT → LAST ACK → CLOSED.
Both sides close at the same time:
ESTABLISHED → FIN WAIT 1 → CLOSING → TIME WAIT → CLOSED.
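The three closing paths can be checked against a small transition table. This is a simplified sketch: event labels such as "close/FIN" compress the event/action pairs of the real diagram, and only the closing states are modeled.

```python
# Partial transition table covering only the three closing paths listed above.
TRANSITIONS = {
    ("ESTABLISHED", "close/FIN"): "FIN_WAIT_1",
    ("FIN_WAIT_1", "ACK"): "FIN_WAIT_2",
    ("FIN_WAIT_2", "FIN/ACK"): "TIME_WAIT",
    ("FIN_WAIT_1", "FIN/ACK"): "CLOSING",       # simultaneous close
    ("CLOSING", "ACK"): "TIME_WAIT",
    ("TIME_WAIT", "2MSL timeout"): "CLOSED",
    ("ESTABLISHED", "FIN/ACK"): "CLOSE_WAIT",   # other side closes first
    ("CLOSE_WAIT", "close/FIN"): "LAST_ACK",
    ("LAST_ACK", "ACK"): "CLOSED",
}

def run(state, events):
    # Follow the arcs for a sequence of events and return the final state.
    for ev in events:
        state = TRANSITIONS[(state, ev)]
    return state

this_side_first = run("ESTABLISHED", ["close/FIN", "ACK", "FIN/ACK", "2MSL timeout"])
other_side_first = run("ESTABLISHED", ["FIN/ACK", "close/FIN", "ACK"])
```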
Figure 4.14 – Data flow and flow control feedbacks in TCP

Flow Control
Flow control balances the rate a producer creates data with the rate a
consumer can use the data.

Assume that the logical channel between the sending and receiving TCP is error-
free.

Paths 1, 2, and 3 - The data travel from the sending process down to the
sending TCP, from the sending TCP to the receiving TCP, and from the
receiving TCP up to the receiving process.

Paths 4 and 5- Flow control feedbacks, are traveling from the receiving TCP to
the sending TCP and from the sending TCP up to the sending process.

Opening and Closing Windows


To achieve flow control, TCP forces the sender and the receiver to adjust
their window sizes.

The receive window closes (moves its left wall to the right) when more bytes
arrive from the sender; it opens (moves its right wall to the right) when more
bytes are pulled by the process.

The opening, closing, and shrinking of the send window is controlled by the
receiver.

The send window closes (moves its left wall to the right) when a new
acknowledgment allows it to do so.

The send window opens (its right wall moves to the right) when the
receive window size (rwnd) advertised by the receiver allows it to do so:

new ackNo + new rwnd > last ackNo + last rwnd

The send window shrinks in the event this situation does not occur.
Shrinking of Windows
The receive window cannot shrink.

The send window, on the other hand, can shrink if the receiver defines a value for rwnd
that results in shrinking the window.

new ackNo + new rwnd ≥ last ackNo + last rwnd
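The open/shrink rule reduces to a one-line comparison: the send window may open only when new ackNo + new rwnd exceeds last ackNo + last rwnd. A minimal sketch with illustrative numbers:

```python
# Sender-side check for window opening, per the inequality above.
# All values are illustrative byte counts, not from a real trace.
def window_may_open(last_ack, last_rwnd, new_ack, new_rwnd):
    return new_ack + new_rwnd > last_ack + last_rwnd

opens = window_may_open(last_ack=100, last_rwnd=500, new_ack=300, new_rwnd=400)
shrinks = not window_may_open(last_ack=100, last_rwnd=500, new_ack=150, new_rwnd=300)
```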

Error Control
TCP is a reliable transport-layer protocol. This means that an application program that
delivers a stream of data to TCP relies on TCP to deliver the entire stream to the
application program on the other end in order, without error, and without any part lost or
duplicated.

TCP provides reliability using error control.

Error control includes mechanisms for detecting and resending corrupted segments,
resending lost segments, storing out-of order segments until missing segments arrive,
and detecting and discarding duplicated segments.
Error control in TCP is achieved through the use of three simple tools:

Checksum

Acknowledgment

time-out.

Checksum
Each segment includes a checksum field, which is used to check for a corrupted
segment.

If a segment is corrupted, as detected by an invalid checksum, the segment is discarded


by the destination TCP and is considered as lost.

TCP uses a 16-bit checksum that is mandatory in every segment.
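The checksum itself is the standard 16-bit Internet checksum (the one's complement of the one's complement sum of 16-bit words). The sketch below omits the pseudo-header that TCP also includes in the calculation:

```python
# 16-bit Internet checksum over raw bytes (pseudo-header not included).
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"              # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF           # one's complement of the sum

csum = internet_checksum(b"\x45\x00\x00\x1c")
```

Appending the computed checksum to the data makes the folded sum all ones, so verifying a received segment reduces to checking that the recomputed checksum is 0.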

Acknowledgment
TCP uses acknowledgments to confirm the receipt of data segments.

Control segments that carry no data, but consume a sequence number, are also
acknowledged.
ACK segments are never acknowledged.

Acknowledgment Type

Cumulative Acknowledgment (ACK) - TCP acknowledges receipt of segments


cumulatively. The receiver advertises the next byte it expects to receive, ignoring all
segments received and stored out of order.

Selective Acknowledgment (SACK) - A SACK reports a block of bytes that is out of order,
and also a block of bytes that is duplicated
Generating Acknowledgments

1. When end A sends a data segment to end B, it must include (piggyback) an
acknowledgment that gives the next sequence number it expects to receive.

2. The receiver needs to delay sending an ACK segment if there is only one outstanding in-
order segment.

3. When a segment arrives with a sequence number that is expected by the receiver, and
the previous in-order segment has not been acknowledged, the receiver immediately
sends an ACK segment.

4. When a segment arrives with an out-of-order sequence number that is higher than
expected, the receiver immediately sends an ACK segment announcing the sequence
number of the next expected segment.

5. When a missing segment arrives, the receiver sends an ACK segment to announce the
next sequence number expected.

6. If a duplicate segment arrives, the receiver discards the segment, but immediately
sends an acknowledgment indicating the next in-order segment expected.
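Rules 3 to 6 can be condensed into a small receiver sketch that tracks the next expected sequence number (segments are numbered rather than byte-indexed here for brevity, and the delayed ACK of rule 2 is not modeled):

```python
# Simplified receiver: decide on an ACK action for each arriving segment.
def on_segment(state, seq):
    """state = {'expected': next in-order segment number}; returns (action, ackNo)."""
    exp = state["expected"]
    if seq == exp:
        state["expected"] = exp + 1
        return ("ack", state["expected"])      # in order: acknowledge next expected
    if seq > exp:
        return ("dup-ack", exp)                # out of order: re-announce expected
    return ("discard-and-ack", exp)            # duplicate: discard, re-acknowledge

state = {"expected": 3}
a1 = on_segment(state, 3)   # in-order segment
a2 = on_segment(state, 6)   # gap: out-of-order segment
a3 = on_segment(state, 2)   # duplicate of already-delivered data
```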

TCP Congestion Control

The idea of TCP congestion control is for each source to determine how much capacity
is available in the network, so that it knows how many packets it can safely have in
transit.

Once a given source has this many packets in transit, it uses the arrival of an ACK as a
signal that one of its packets has left the network, and that it is therefore safe to insert a
new packet into the network without adding to the level of congestion.

By using ACKs to pace the transmission of packets, TCP is said to be
self-clocking.

Congestion Control Mechanisms

Additive Increase/Multiplicative Decrease

TCP maintains a new state variable for each connection, called Congestion Window.

Congestion Window is used by the source to limit how much data it is allowed to have in
transit at a given time.

The congestion window is congestion control’s counterpart to flow control’s advertised


window.

TCP is modified such that the maximum number of bytes of unacknowledged data
allowed is now the minimum of the congestion window and the advertised window.

Reference video: https://www.youtube.com/watch?v=uwoD5YsGACg


Effective window is revised as follows:
MaxWindow = MIN(CongestionWindow, AdvertisedWindow)
EffectiveWindow = MaxWindow − (LastByteSent − LastByteAcked)

That is, MaxWindow replaces AdvertisedWindow in the calculation of EffectiveWindow.


Thus, a TCP source is allowed to send no faster than the slowest component—the network or the
destination host—can accommodate.
The AdvertisedWindow, is sent by the receiving side of the connection.
How does TCP set a value for congestion window?
The TCP source sets the CongestionWindow based on the level of congestion it perceives to exist in the
network.

This involves decreasing the congestion window when the level of congestion goes up and increasing
the congestion window when the level of congestion goes down.
This mechanism is commonly called additive increase/multiplicative decrease (AIMD).
How does the source determine that the network is congested and that it should decrease the
Congestion window?

TCP interprets timeouts as a sign of congestion and reduces the rate at which it is transmitting.

Specifically, each time a timeout occurs, the source sets CongestionWindow to half of its previous value.
This halving of the CongestionWindow for each timeout corresponds to the ―multiplicative decrease‖ part of AIMD.

Although CongestionWindow is defined in terms of bytes, it is easiest to understand
multiplicative decrease if we think in terms of whole packets.

For example, suppose the CongestionWindow is currently set to 16 packets. If a loss is detected,
CongestionWindow is set to 8.
Additional losses cause CongestionWindow to be reduced to 4, then 2, and finally to 1
packet.

CongestionWindow is not allowed to fall below the size of a single packet, or in TCP terminology, the
maximum segment size (MSS).

We also need to be able to increase the congestion window to take advantage of newly available
capacity in the network.
This is the "additive increase" part of AIMD, and it works as follows.
Every time the source successfully sends a Congestion Window’s worth of packets—that is, each packet
sent out during the last RTT has been ACKed—it adds the equivalent of one packet to
CongestionWindow.
This linear increase is illustrated in Figure.
Specifically, the congestion window is incremented as follows each time an ACK arrives:
Increment = MSS × (MSS/CongestionWindow)
CongestionWindow += Increment
Figure 4.15 - Packets in transit during additive increase, with one packet being
added each RTT.
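The per-ACK increment above can be simulated to confirm that one RTT's worth of ACKs grows the window by roughly one MSS, while a timeout halves it (the MSS and window values are illustrative):

```python
MSS = 1000  # bytes; an illustrative maximum segment size

def on_ack(cwnd):
    # Additive increase: Increment = MSS * (MSS / CongestionWindow)
    return cwnd + MSS * (MSS / cwnd)

def on_timeout(cwnd):
    # Multiplicative decrease, never below one MSS
    return max(cwnd / 2, MSS)

cwnd = 4 * MSS                 # four packets' worth of data in flight
for _ in range(4):             # one ACK per packet sent in the last RTT
    cwnd = on_ack(cwnd)
# cwnd has now grown by slightly less than one MSS
```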
The graph in Figure 4.16 is a plot of the current value of congestion window as a
function of time.

Slow Start
The additive increase mechanism is used when the source is operating close to
the available capacity of the network.
It takes too long to ramp up a new TCP connection.
Slow start is used to increase the congestion window rapidly from a cold start.
Slow start effectively increases the congestion window exponentially, rather than
linearly.

Figure 4.16 - Typical TCP saw tooth


pattern
Figure 4.17 - Packets in transit during slow start.

Specifically, the source starts out by setting CongestionWindow to one packet.


When the ACK for this packet arrives, TCP adds 1 to CongestionWindow and then
sends two packets.
Upon receiving the corresponding two ACKs, TCP increments CongestionWindow
by 2—one for each ACK—and next sends four packets.
The end result is that TCP effectively doubles the number of packets it has in
transit every RTT.
The above figure shows the growth in the number of packets in transit during
slow start.
Why is it called slow start?
If the source sends as many packets as the advertised window allows, the
routers may not be able to consume this burst of packets.
It all depends on how much buffer space is available at the routers.
Slow start was therefore designed to space packets out so that this burst does
not occur.
Thus slow start is much "slower" than sending an entire advertised window's
worth of data all at once.
Situations under which slow start is used
There are actually two different situations in which slow start runs.
For a new connection
In this situation, slow start continues to double Congestion Window each RTT until
there is a loss, at which time a timeout causes multiplicative decrease to divide
Congestion Window by 2.
When connection goes dead waiting for a timeout to occur
In this situation, when a packet is lost, the source eventually reaches a point where it
has sent as much data as the advertised window allows, and so it blocks while waiting
for an ACK that will not arrive. Eventually, a timeout happens, but by this time there
are no packets in transit.
From the above graph, Congestion Window flattens out at about 34 KB.
The reason why the congestion window flattens is that there are no ACKs arriving,
due to the fact that several packets were lost.
When a timeout eventually happens, the congestion window is divided by 2 (i.e., cut
from approximately 34 KB to around 17 KB) and Congestion Threshold is set to this
value.
Congestion Window is reset to one packet and slow start is used till Congestion
Window reaches Congestion Threshold, after which Congestion Window increases
linearly.
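The combined behavior described above (exponential growth up to Congestion Threshold, then linear growth) can be sketched at packet granularity; the threshold value of 16 is illustrative:

```python
# One congestion-window update per RTT: slow start below the threshold,
# additive increase above it.
def next_cwnd(cwnd, ssthresh):
    if cwnd < ssthresh:
        return min(2 * cwnd, ssthresh)   # exponential growth phase (slow start)
    return cwnd + 1                      # linear growth phase (additive increase)

cwnd, ssthresh = 1, 16
trace = [cwnd]
for _ in range(6):
    cwnd = next_cwnd(cwnd, ssthresh)
    trace.append(cwnd)
```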
Fast Retransmit and Fast Recovery
Fast retransmit is a heuristic that sometimes triggers the retransmission of a
dropped packet sooner than the regular timeout mechanism.
The fast retransmit mechanism does not replace regular timeouts; it just enhances
that facility.
Every time a data packet arrives at the receiving side, the receiver responds with an
acknowledgment, even if this sequence number has already been acknowledged.
Thus, when a packet arrives out of order— that is, TCP cannot yet acknowledge the
data the packet contains because earlier data has not yet arrived—TCP resends the
same acknowledgment it sent the last time.
This second transmission of the same acknowledgment is called a duplicate ACK.
When the sending side sees a duplicate ACK, it knows that the other side must have
received a packet out of order, which suggests that an earlier packet might have been lost.
Since it is also possible that the earlier packet has only been delayed rather than lost, the
sender waits until it sees some number of duplicate ACKs and then retransmits the missing
packet.
In practice, TCP waits until it has seen three duplicate ACKs before retransmitting the
packet.
Figure illustrates how duplicate ACKs leads to a fast retransmit.
In this example, the destination receives packets 1 and 2, but packet 3 is lost in the
network.
Thus, the destination will send a duplicate ACK for packet 2 when packet 4 arrives, again
when packet 5 arrives, and so on.
When the sender sees the third duplicate ACK for packet 2—the one sent because the
receiver had gotten packet 6—it retransmits packet 3.
Note that when the retransmitted copy of packet 3 arrives at the destination, the receiver
then sends a cumulative ACK for everything up to and including packet 6 back to the
source.
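The three-duplicate-ACK rule can be sketched as a counter on the sending side; the ACK numbers replay the packet-3 loss example from the text, at segment granularity:

```python
# After `threshold` duplicate ACKs for the same number, retransmit the
# segment following the acknowledged one.
def fast_retransmit(acks, threshold=3):
    dup, last, retransmitted = 0, None, []
    for ack in acks:
        if ack == last:
            dup += 1
            if dup == threshold:
                retransmitted.append(ack + 1)   # resend the missing segment
        else:
            last, dup = ack, 0
    return retransmitted

# Receiver got 1 and 2, then 4, 5, 6 while 3 was lost: the ACK for 2 repeats.
resent = fast_retransmit([1, 2, 2, 2, 2])
```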
When the fast retransmit mechanism signals congestion, it is possible to use the ACKs that
are still in the pipe to clock the sending of packets.
This mechanism, which is called fast recovery, effectively removes the slow start
phase that happens between when fast retransmit detects a lost packet and when
additive increase begins.

Figure 4.18 - Fast retransmit based on duplicate ACKs.


For example, fast recovery avoids the slow start and instead simply cuts the
congestion window in half and resumes additive increase.
In other words, slow start is only used at the beginning of a connection.
At all other times, the congestion window follows a pure
additive increase/multiplicative decrease pattern.
TCP Timers
To perform their operations smoothly, most TCP implementations use at least four
timers:
Retransmission
Persistence
Keepalive
TIME-WAIT

Retransmission Timer
To retransmit lost segments, TCP employs one retransmission timer (for the
whole connection period) that handles the retransmission time-out (RTO), the
waiting time for an acknowledgment of a segment.
1. When TCP sends the segment in front of the sending queue, it starts the timer.
2.When the timer expires, TCP resends the first segment in front of the queue,
and restarts the timer.
3. When a segment or segments are cumulatively acknowledged, the segment or
segments are purged from the queue.
4. If the queue is empty, TCP stops the timer; otherwise, TCP restarts the timer.
Persistence Timer
To deal with a zero-window-size advertisement, TCP needs another timer.
If the receiving TCP announces a window size of zero, the sending TCP stops
transmitting segments until the receiving TCP sends an ACK segment announcing
a nonzero window size.
This ACK segment can be lost.
If this acknowledgment is lost, the receiving TCP thinks that it has done its job
and waits for the sending TCP to send more segments.
There is no retransmission timer for a segment containing only an
acknowledgment.

The sending TCP has not received an acknowledgment and waits for the other
TCP to send an acknowledgment advertising the size of the window.
When the persistence timer goes off, the sending TCP sends a special segment
called a probe.
The probe causes the receiving TCP to resend the acknowledgment.
The value of the persistence timer is set to the value of the retransmission time.
However, if a response is not received from the receiver, another probe segment
is sent and the value of the persistence timer is doubled and reset.
The sender continues sending the probe segments and doubling and resetting the
value of the persistence timer until the value reaches a threshold (usually 60 s).
After that the sender sends one probe segment every 60 seconds until the window
is reopened.
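The probe schedule described above, which doubles the interval after each unanswered probe up to the 60-second threshold, can be sketched as follows (the initial 5-second RTO is an illustrative value):

```python
# Intervals between successive zero-window probes: double after each
# unanswered probe, capped at the 60-second threshold.
def probe_intervals(rto, probes, cap=60):
    out, t = [], rto
    for _ in range(probes):
        out.append(t)
        t = min(t * 2, cap)
    return out

intervals = probe_intervals(rto=5, probes=6)
```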

Keepalive Timer
A keepalive timer is used to prevent a long idle connection between two TCPs.
Example: Suppose that a client opens a TCP connection to a server, transfers
some data, and has crashed. In this case, the connection remains open forever.
The server has a keepalive timer. Each time the server hears from a client, it
resets this timer. The time-out is usually 2 hours. If the server does not hear from
the client after 2 hours, it sends a probe segment. If there is no response after 10
probes, each of which is 75 seconds apart, it assumes that the client is down and
terminates the connection.
TIME-WAIT Timer
The TIME-WAIT (2MSL) timer is used during connection termination.
The maximum segment lifetime (MSL) is the amount of time any segment can
exist in a network before being discarded.
Common values for MSL are 30 seconds, 1 minute, or even 2 minutes.
The 2MSL timer is used when TCP performs an active close and sends the final
ACK.
The connection must stay open for 2 MSL amount of time to allow TCP to resend
the final ACK in case the ACK is lost.
Reference video: https://www.youtube.com/watch?v=oEUP7RXzxDY
SCTP (Stream Control Transmission Protocol)
(SCTP) is a new transport-layer protocol designed to combine some
features of UDP and TCP in an effort to create a better protocol for
multimedia communication.
SCTP Services
1. Process-to-Process Communication
SCTP, like UDP or TCP, provides process-to-process communication.
2. Multiple Streams
SCTP allows multistream service in each connection.
If one of the streams is blocked, the other streams can still deliver their
data.
Figure 4.21 – Comparison between a TCP Segment and an SCTP Packet
2. Stream Identifier (SI)
There may be several streams in each association.
Each stream in SCTP needs to be identified using a stream identifier (SI).
Each data chunk must carry the SI in its header so that when it arrives at
the destination, it can be properly placed in its stream.
The SI is a 16-bit number starting from 0.
3. Stream Sequence Number (SSN)
When a data chunk arrives at the destination SCTP, it is delivered to the
appropriate stream and in the proper order.
This means that, in addition to an SI, SCTP defines each data chunk in each
stream with a stream sequence number (SSN).
4. Packets
Data are carried as data chunks, control information as control chunks.

Several control chunks and data chunks can be packed together in a packet.

A packet in SCTP plays the same role as a segment in TCP.


5. Acknowledgment Number
SCTP acknowledgment numbers are chunk-oriented.
In SCTP, the control information is carried by control chunks, which do not need a
TSN.

Figure 4.22 - Packet Format


Figure 4.23 - General Header

These control chunks are acknowledged by another control chunk of the


appropriate type (some need no acknowledgment).
SCTP Association Establishment
Association establishment in SCTP requires a four-way handshake (Figure
4.24).
The client sends the first packet, which contains an INIT chunk. The verification
tag (VT) of this packet is 0 because no verification tag has yet been defined for
this direction (client to server). The INIT chunk includes an initiation tag to be used
for packets from the other direction (server to client). The chunk also defines the
initial TSN for this direction and advertises a value for rwnd. The value of rwnd is
normally advertised in a SACK chunk.
The server sends the second packet, which contains an INIT ACK chunk. The
verification tag is the value of the initial tag field in the INIT chunk. This chunk
initiates the tag to be used in the other direction, defines the initial TSN for data
flow from server to client, and sets the server's rwnd. The value of rwnd is defined
to allow the client to send a DATA chunk with the third packet. The INIT ACK
also sends a cookie that defines the state of the server at this moment.
The client sends the third packet, which includes a COOKIE ECHO chunk. This is a
very simple chunk that echoes, without change, the cookie sent by the server.
SCTP allows the inclusion of data chunks in this packet.
The server sends the fourth packet, which includes the COOKIE ACK chunk that
acknowledges the receipt of the COOKIE ECHO chunk. SCTP allows the inclusion
of data chunks with this packet.

Figure 4.24 – Four way handshaking


Figure 4.25 – Association termination

SCTP Association Termination


SCTP Flow Control
Receiver Site
The receiver has one buffer (queue) and three variables.
The queue holds the received data chunks that have not yet been read by the
process.
The first variable holds the last TSN received, cumTSN. The second variable holds
the available buffer size, winSize.
The third variable holds the last cumulative acknowledgment, lastACK.
When the site receives a data chunk, it stores it at the end of the buffer (queue)
and subtracts the size of the chunk from winSize. The TSN number of the chunk is
stored in the cumTSN variable.
When the process reads a chunk, it removes it from the queue and adds the size
of the removed chunk to winSize (recycling).
When the receiver decides to send a SACK, it checks the value of lastACK; if it is
less than cumTSN, it sends a SACK with a cumulative TSN number equal to
the cumTSN. It also includes the value of winSize as the advertised window size.
The value of lastACK is then updated to hold the value of cumTSN.
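The receiver-site bookkeeping above (winSize, cumTSN, lastACK) can be sketched as a small class; chunk sizes and TSNs are illustrative, and out-of-order arrival is not modeled:

```python
# Receiver-site flow control: one queue plus the three variables from the text.
class SctpReceiver:
    def __init__(self, win_size):
        self.win_size = win_size   # available buffer space (bytes)
        self.cum_tsn = -1          # last TSN received
        self.last_ack = -1         # last cumulative acknowledgment sent
        self.queue = []

    def receive(self, tsn, size):
        self.queue.append((tsn, size))
        self.win_size -= size      # buffer space consumed by the chunk
        self.cum_tsn = tsn

    def read(self):
        tsn, size = self.queue.pop(0)
        self.win_size += size      # buffer space recycled after the process reads

    def sack(self):
        if self.last_ack < self.cum_tsn:
            self.last_ack = self.cum_tsn
            return {"cumTSN": self.cum_tsn, "rwnd": self.win_size}
        return None                # nothing new to acknowledge

r = SctpReceiver(win_size=1000)
r.receive(tsn=1, size=100)
r.receive(tsn=2, size=100)
s1 = r.sack()   # advertises cumTSN 2 and the reduced window
r.read()
s2 = r.sack()   # no new chunks since the last SACK
```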

Figure 4.26 - Flow Control Receiver Site


Figure 4.27 – Sender site
The sender has one buffer (queue) and three variables: curTSN, rwnd, and
inTransit.
curTSN refers to the next chunk to be sent.
rwnd holds the last value advertised by the receiver (in bytes).
inTransit holds the number of bytes in transit, bytes sent but not yet
acknowledged.
1. A chunk pointed to by curTSN can be sent if the size of the data is less than
or equal to the quantity (rwnd − inTransit). After sending the chunk, the
value of curTSN is incremented by one and now points to the next chunk to
be sent. The value of inTransit is incremented by the size of the data in the
transmitted chunk.
2. When a SACK is received, the chunks with a TSN less than or equal to the
cumulative TSN in the SACK are removed from the queue and discarded.
The value of inTransit is reduced by the total size of the discarded chunks.
The value of rwnd is updated with the value of the advertised window in the
SACK.
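Rule 1 above reduces to a single comparison: a chunk may be sent only while its size fits within rwnd − inTransit. A minimal sketch with illustrative byte counts:

```python
# Sender-side send check, per rule 1 above (all sizes in bytes).
def can_send(chunk_size, rwnd, in_transit):
    return chunk_size <= rwnd - in_transit

ok = can_send(chunk_size=100, rwnd=1000, in_transit=800)        # 100 <= 200
blocked = not can_send(chunk_size=300, rwnd=1000, in_transit=800)  # 300 > 200
```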
Error Control
Receiver Site

The receiver stores all chunks that have arrived in its queue including the out-of-
order ones.
The last acknowledgment sent was for data chunk 20.
The available window size is1000 bytes.
Chunks 21 to 23 have been received in order.

Figure 4.28 – Error control, receiver site


Figure 4.29 – Error control, sender site
The first out-of-order block contains chunks 26 to 28.

The second out-of-order block contains chunks 31 to 34.

A variable holds the value of cumTSN.


An array of variables keeps track of the beginning and the end of each block that
is out of order.
An array of variables holds the duplicate chunks received.
Sender Site
At the sender site, our design demands two buffers (queues): a sending queue
and a retransmission queue.

We also use three variables: rwnd, inTransit, and curTSN.

The sending queue holds chunks 23 to 40.

The chunks 23 to 36 have already been sent, but not acknowledged; they are
outstanding chunks.
The curTSN points to the next chunk to be sent (37).
We assume that each chunk is 100 bytes, which means that 1400 bytes of data
(chunks 23 to 36) are in transit.
The sender at this moment has a retransmission queue.
When a packet is sent, a retransmission timer starts for that packet.
When the retransmission timer for a packet expires, or three SACKs arrive that
declares a packet as missing, the chunks in that packet are moved to the
retransmission queue to be resent.
These chunks are considered lost, rather than outstanding.

The chunks in the retransmission queue have priority.


In other words, the next time the sender sends a chunk, it would be chunk 21
from the retransmission queue.
UNIT IV QUESTION BANK
PART A
1) Give any two Transport layer service. (Dec 12) (CO4,K1)
Multiplexing: The transport layer performs multiplexing/demultiplexing
functions. Multiple applications employ the same transport protocol but use different
port numbers. Depending on the lower-layer network protocol, it performs upward or
downward multiplexing.

Reliability: Error Control and Flow Control.

2) Mention the various adaptive retransmission policy of TCP. (CO4,K1)

Simple average
Exponential / weighted average

Exponential RTT back off

Jacobson’s Algorithm

3) How IANA has divided port numbers? (CO4, K2)

IANA (Internet Assigned Number Authority) has divided port numbers into three
ranges: 1) Well Known ports 2) Registered ports 3) Dynamic Ports.

4) List out the various features of sliding window protocol. (Nov/Dec 2012)
(CO4, K1)

The sliding window protocol is a useful protocol in network communications. Many

important protocols, such as TCP and SPX, carry the idea of the sliding window
protocol. The sliding window protocol provides flow control in data communication
over an unreliable connection. The key feature of the sliding window protocol is
that it permits pipelined communication.

5) What are the services provided by the transport layer? (May 2018)
(CO4, K1)
End - to End Delivery
Flow Control
Addressing
Multiplexing

Reliable delivery

6) Why does the congestion occur in network? (CO4, K1)

Congestion occurs because the switches in a network have a limited buffer size to
store arrived packets, and because packets can arrive at a faster rate than the
receiver can receive and process them.
7) Define congestion. (Nov 2011) (CO4,K1)

Congestion in a network occurs if a user sends data into the network at a rate
greater than that allowed by network resources. Any given node has a number of
I/O ports attached to it. There are two buffers at each port: one to accept
arriving packets and another to hold packets that are waiting to depart. If
packets arrive faster than the node can process them, or faster than packets can be
cleared from the outgoing buffers, then there will be no empty buffer, thus
causing congestion and traffic in the network.

8) What is a tinygram? (CO4,K1)

A very small packet of data is called a tinygram. Too many tinygrams can
congest a network connection.

9) What is UDP? (CO4,K1)

It stands for User Datagram Protocol. It is part of the TCP/IP suite of protocols
used for data transferring. UDP is a known as a "stateless" protocol, meaning it
doesn't acknowledge that the packets being sent, have been received.

10) What are the advantages of using UDP over TCP? (Nov 10) (CO4,K1)

UDP is very useful for audio or video delivery which does not need
acknowledgement. It is useful in the transmission of multimedia data. Connection
Establishment delay will occur in TCP.

11) What are the different fields and use of UDP’s Pseudo header?
(CO4, K1)

The pseudo header consists of three fields from the IP header: the protocol number,
the source IP address, and the destination IP address, plus the UDP length field (which is
included twice in the checksum calculation). The pseudo header is used to check
whether the message is delivered between the correct two endpoints.

12) What is called a Port Mapper? (CO4, K1)

A client sends a message to the Port Mapper's well-known port asking which port it should use to talk to the "whatever" service, and the Port Mapper returns the appropriate port.

13) What is meant by PORT or MAILBOX related with UDP? (Nov/Dec 2012) (CO4, K1)

A more common approach, and the one used by UDP, is for processes to identify each other indirectly using an abstract locator, often called a port or mailbox.
14) Give the datagram format of UDP? (CO4, K1)

The basic idea of UDP is for a source process to send a message to a port and for
the destination process to receive the message from a port.

Source port address: It is the address of the application program that has
created the message.

Destination port address: It is the address of the application program that will
receive the message.

Total Length: It defines the total length of the user datagram in bytes.

Checksum: It is a 16-bit field used for error detection.
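The checksum computation used by UDP can be illustrated with a short sketch. This is an illustrative Python version of the standard 16-bit one's complement Internet checksum computed over the IPv4 pseudo-header plus the UDP segment; the function names are our own.

```python
import struct

def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's complement sum over data (zero-padded to even length)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header (source IP, destination IP,
    zero byte, protocol number 17, UDP length) followed by the segment.
    The segment's own checksum field must be zero when computing."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return (~ones_complement_sum16(pseudo + udp_segment)) & 0xFFFF
```

Verification at the receiver uses the same routine over the segment with the checksum field filled in; a valid segment yields 0. (Real UDP transmits 0xFFFF when the computed checksum happens to be 0, a detail omitted here for brevity.)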

15) What is TCP? (Nov 11) (CO4, K1)

Transmission Control Protocol (TCP) provides connection-oriented, reliable service. TCP guarantees the reliable delivery of a stream of bytes. It is a full-duplex protocol, meaning that each TCP connection supports a pair of byte streams, one flowing in each direction. The phases in the TCP state machine are connection establishment, data transfer, and connection release. The TCP services that provide reliable communication are error control, flow control, connection control, and congestion control.

16) How does the transport layer perform duplication control? (May/June 2015) (CO4, K1)

Duplication control is important because, as network speeds continue to increase, it becomes possible for distinct messages to be wrongly identified as duplicates and discarded. Similarly, if a packet becomes corrupted or erroneous, the sequence number of a genuine message may be incorrect and make it appear to be a duplicate. It is also entirely possible for the sender itself to transmit a duplicate message, and this duplicate must be detected to avoid errors.

17) Name the policies that can prevent (avoid) congestion. (CO4, K1)

DEC (Digital Equipment Corporation) bit.

Random Early Detection (RED).

Source based congestion avoidance.

18) List out various congestion control techniques. (CO4, K1)

AIMD (Additive Increase Multiplicative Decrease)
Slow start
Fast retransmit
Fast recovery
19) Suppose TCP operates over a 1-Gbps link, utilizing the full bandwidth continuously. How long will it take for the sequence numbers to wrap around completely? Suppose an added 32-bit timestamp field increments 1000 times during this wrap-around time; how long will it take the timestamp field to wrap around? (May 13) (CO4, K4)

Once a segment with sequence number x is alive in the Internet, TCP must not reuse that sequence number, so the question is how fast the 32-bit sequence number space is consumed. At 1 Gbps, the 2^32-byte sequence space wraps in (2^32 x 8) / 10^9 ≈ 34.4 s. If the 32-bit timestamp increments 1000 times during this wrap-around time, one tick is about 34.4 ms, so the timestamp field wraps after 2^32 ticks ≈ 1.48 x 10^8 s, roughly 4.7 years. (For comparison, on a T3 link at 45 Mbps the sequence space wraps in (2^32 x 8) / 45 Mbps ≈ 763.5 s ≈ 12.7 min.)
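The wrap-around arithmetic can be checked with a few lines of Python (an illustrative calculation, not part of any protocol implementation):

```python
SEQ_SPACE_BYTES = 2 ** 32          # 32-bit sequence numbers count bytes

def seq_wrap_time(link_bps: float) -> float:
    """Seconds to consume the whole sequence space at the given link rate."""
    return SEQ_SPACE_BYTES * 8 / link_bps

wrap_1g = seq_wrap_time(1e9)       # ≈ 34.36 s at 1 Gbps
tick = wrap_1g / 1000              # timestamp ticks 1000 times per wrap
ts_wrap = (2 ** 32) * tick         # seconds until the timestamp wraps
print(round(wrap_1g, 2), round(ts_wrap / (86400 * 365), 2))  # 34.36 4.68
```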

20) Write short notes on congestion control. (Nov 12) (CO4, K1)

Congestion control involves preventing too much data from being injected into the network, thereby causing switches or links to become overloaded. Flow control is an end-to-end issue, while congestion control is concerned with how hosts and the network interact.

21) What do you mean by slow start in TCP congestion? (May 16)
(CO4,K1)

TCP slow start is an algorithm which balances the speed of a network connection.
Slow start gradually increases the amount of data transmitted until it finds the
network’s maximum carrying capacity.

22) What are the four aspects related to the reliable delivery of data? (May 12)
(CO4,K1)

The four aspects are Error control, Sequence control, Loss control and Duplication
Control.

23) List the different phases used in TCP Connection. (May 16) (CO4, K1)

The phases used in a TCP connection are the connection establishment phase, data transfer, and the connection termination phase.

24) List the advantages of connection-oriented services over connectionless services. (May 17) (CO4, K1)

Buffers can be reserved in advance.
Sequencing can be guaranteed.
Short headers can be used.

25) How does the fast retransmit mechanism of TCP work? (May 17) (CO4, K1)

Fast retransmit is a modification to the congestion avoidance algorithm. As in Jacobson's fast retransmit algorithm, when the sender receives the third duplicate ACK it assumes that the packet is lost and retransmits it without waiting for the retransmission timer to expire.
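The duplicate-ACK counting behind fast retransmit can be sketched as follows (a simplified illustration with invented names; real TCP tracks this per connection alongside its timers):

```python
def detect_fast_retransmit(acks):
    """Return the ACK numbers that would trigger a fast retransmit,
    i.e. those seen as the third duplicate in a row."""
    last_ack, dup_count, retransmits = None, 0, []
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:           # third duplicate ACK: assume loss
                retransmits.append(ack)  # resend segment starting at `ack`
        else:
            last_ack, dup_count = ack, 0
    return retransmits
```

For example, the ACK stream 100, 200, 200, 200, 200, 300 triggers one fast retransmit of the segment starting at 200.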
26) What is slow start? Why is it called slow start? (CO4, K1)

Slow start is used to increase the congestion window rapidly from a cold start. It grows the congestion window exponentially rather than linearly. If the source instead sent as many packets as the advertised window allows, the routers might not be able to consume this burst of packets. Thus slow start is much "slower" than sending an entire advertised window's worth of data all at once.

27) What are the situations under the slow start is run? (CO4, K1)

There are actually two different situations in which slow start runs.

For a new connection

When connection goes dead waiting for a timeout to occur.
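The window growth described in questions 26 and 27 (exponential slow start up to ssthresh, then additive increase, with the window halved on loss) can be traced with a small sketch. This is illustrative only: the unit is MSS per RTT, and the loss event is simulated at a fixed round.

```python
def cwnd_trace(ssthresh: int = 8, rounds: int = 12, loss_at: int = 7):
    """Per-RTT congestion window (in MSS): slow start doubles cwnd while it
    is below ssthresh, congestion avoidance then adds 1 MSS per RTT, and a
    loss halves the window (Reno-style fast recovery)."""
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt == loss_at:                     # simulated loss event
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh                    # resume at half the window
        elif cwnd < ssthresh:
            cwnd *= 2                          # slow start: exponential
        else:
            cwnd += 1                          # additive increase: linear
    return trace

print(cwnd_trace())  # [1, 2, 4, 8, 9, 10, 11, 12, 6, 7, 8, 9]
```

The printed trace shows the characteristic sawtooth: exponential ramp-up, linear growth, then a halving at the loss and linear growth again.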

28) Define SCTP. (CO4, K1)

SCTP (Stream Control Transmission Protocol) is a reliable, message-oriented transport layer protocol. It combines the best features of UDP and TCP and is designed mainly for Internet applications.

29) What is the use of SCTP's multiple-stream service? (CO4, K1)

SCTP allows a multi-stream service in each connection, which is called an association in SCTP terminology. If one of the streams is blocked, the other streams can still deliver their data. The idea is similar to multiple lanes on a highway.

30) Define the multihoming concept of SCTP. (CO4, K1)

Multihoming is the ability of an SCTP association to support multiple IP paths to its peer endpoint. The benefit of multihoming is that it makes the association more fault-tolerant against physical network failures and other issues on the interfaces.
PART – B

1) Write short notes on (i) TCP segment format (ii) silly window syndrome, (or) discuss the silly window syndrome and explain how to avoid it. (May 12) (CO4, K2)

2) With a neat architecture, explain TCP and its sliding window algorithm for flow control. (Nov 15) (CO4, K1)

3) Define UDP. Discuss the operations of UDP. Explain the UDP checksum with one example. (Nov/Dec 2011, May 16) (CO4, K2)

4) Identify and explain the states involved in TCP. Explain how TCP manages a byte stream. (May 2018) (CO4, K3)

5) Explain the various fields of the TCP header and the working of the TCP protocol. (May/June 2015, Nov/Dec 2015, Nov/Dec 2016) (CO4, K1)

6) Explain the three-way handshake protocol used to establish a transport-level connection. (CO4, K1)

7) How is congestion controlled? Explain in detail the TCP congestion control methods/techniques. (May 2019, May 2018, Nov/Dec 2016, May/June 2016, Nov/Dec 2014, May/June 2014) (CO4, K4)

8) Illustrate the features of TCP that can be used by the sender to insert record boundaries into the byte stream. Also mention their original purpose. (May 13) (CO4, K4)

9) Explain connection establishment and connection closing in TCP, (or) describe how reliable and ordered delivery is achieved through TCP. (Nov 13) (May 15) (CO4, K1)

10) With TCP's slow start and AIMD for congestion control, show how the window size will vary for a transmission where every 5th packet is lost. Assume an advertised window size of 50 MSS. (May 17) (CO4, K4)

11) What is SCTP? Explain the association establishment of SCTP through the four-way handshake in detail. (May 17) (CO4, K1)
SUPPORTIVE ONLINE COURSES

1. Introduction to Networking for Complete Beginners (Udemy)
   https://www.udemy.com/course/introduction-to-networking-for-complete-beginners/

2. Fundamentals of Network Communication (Coursera)
   https://www.coursera.org/learn/fundamentals-network-communications/

3. Peer-to-Peer Protocols and Local Area Networks (Coursera)
   https://www.coursera.org/learn/peer-to-peer-protocols-local-area-networks/

4. Packet Switching Networks and Algorithms (Coursera)
   https://www.coursera.org/learn/packet-switching-networks-algorithms

5. TCP/IP and Advanced Topics (Coursera)
   https://www.coursera.org/learn/tcp-ip-advanced

6. Computer Networks and the Internet (edX)
   https://www.edx.org/course/computer-
REAL TIME APPLICATIONS IN DAY TO DAY LIFE

AND TO INDUSTRY
The real-time application-controlled TCP/IP trace NMI is a callable NMI that provides real-time TCP/IP stack data to network management applications based on filters that are set by an application trace instance. Each application can use the NMI to open multiple trace instances and set unique filters for each trace instance to obtain the desired data. Filters can be set for the following trace types:
• Data trace
• Packet trace
The application will receive information about real-time data that is lost. The
information is provided in the form of lost trace records. The real-time data
that matches the application filters is provided in trace records. These trace
records are similar to the trace records that are provided by the real-time
TCP/IP network monitoring NMI.
As part of collecting the real-time data for the applications, the NMI uses 64-
bit shared storage that it shares with the application address space. The NMI
also uses 64-bit common storage that the TCP/IP address space owns.
CONTENT BEYOND SYLLABUS: UNIT – IV

Transport Layer Security (TLS)

Transport Layer Security (TLS) is designed to provide security at the transport layer. TLS was derived from a security protocol called Secure Sockets Layer (SSL). TLS ensures that no third party may eavesdrop on or tamper with any message.

There are several benefits of TLS:
1. Encryption
2. Interoperability
3. Algorithm flexibility
4. Ease of deployment
5. Ease of use

What is the difference between TLS and SSL?


TLS evolved from a previous encryption protocol called Secure Sockets Layer (SSL),
which was developed by Netscape. TLS version 1.0 actually began development as
SSL version 3.1, but the name of the protocol was changed before publication in
order to indicate that it was no longer associated with Netscape. Because of this
history, the terms TLS and SSL are sometimes used interchangeably.
What does TLS do?
There are three main components to what the TLS protocol accomplishes:
• Encryption: hides the data being transferred from third parties.
• Authentication: ensures that the parties exchanging information are who they
claim to be.
• Integrity: verifies that the data has not been forged or tampered with.

How does TLS work?
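At the API level, TLS typically appears as a layer wrapped around an ordinary TCP socket. A minimal sketch using Python's standard-library ssl module (the host name in the commented-out part is a placeholder, not a required endpoint):

```python
import socket
import ssl

# A client-side context with the secure defaults: the server's certificate
# is verified against the system trust store and its hostname is checked.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate authentication on
print(ctx.check_hostname)                    # hostname must match the cert

# To actually talk TLS (not executed here), wrap a TCP socket before
# exchanging application data:
# with socket.create_connection(("example.com", 443)) as raw:
#     with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
#         print(tls.version())  # e.g. "TLSv1.3"
```

This mirrors the three components above: the wrapped socket encrypts the data, certificate verification provides authentication, and the record-layer MACs provide integrity.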


Online Reference:

https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/

MINIPROJECT

Using TCP/IP sockets, write a client-server program to make the client send a file name and to make the server send back the contents of the requested file if present. (USE NS2)
Assessment Schedule
• Tentative schedule for the assessments during the 2022-2023 odd semester

S.No  Name of the Assessment  Start Date   End Date     Portion
1     IAT 1                   22.08.2024   30.08.2024   UNIT 1 & 2
2     IAT 2                   30.09.2024   08.10.2024   UNIT 3 & 4
3     REVISION                17.10.2024   25.10.2024   UNIT 5, 1 & 2
4     MODEL                   26.10.2024   08.11.2024   ALL 5 UNITS
Prescribed Text Books & Reference Books

TEXT BOOK
Data Communications and Networking, Behrouz A. Forouzan, McGraw Hill Education,
5th Ed., 2017.

REFERENCES
1. Computer Networking- A Top Down Approach, James F. Kurose, University of
Massachusetts and Amherst Keith Ross, 8th Edition, 2021.
2. Computer Networks, Andrew S. Tanenbaum, Sixth Edition, Pearson, 2021.
3. Data Communications and Computer Networks, P.C. Gupta, Prentice-Hall of
India, 2006.
4. Computer Networks: A Systems Approach , L. L. Peterson and B. S. Davie,
Morgan Kaufmann, 3rd ed., 2003.
Thank You

