
Course Code: EC255

COURSE NAME: Computer Networks

UNIT - IV:

Transport Layer - Transport service - Elements of transport protocols - User Datagram Protocol
- Transmission Control Protocol

Prepared by:

Dr. M. Ambika, Assistant Professor.


Transport Layer

• The transport layer is the fourth layer of the OSI model and is the core of the Internet model.

• The main role of the transport layer is to provide the communication services directly to the application
processes running on different hosts.

• The transport layer provides an end-to-end connection between the source and the destination, and
reliable delivery of services. Therefore the transport layer is known as the end-to-end layer.

• Transport Layer provides the services to the session layer and it receives the services from the network layer.

• The hardware and/or software within the transport layer that does the work is called the transport entity.

• The main duty of transport layer is to provide process-to-process communication. The transport layer
provides a logical communication between application processes running on different hosts.

• The transport layer protocols are implemented in the end systems but not in the network routers.

• Segment is the unit of data encapsulation at the transport layer.

• It provides both connectionless and connection oriented service.


Transport Layer

• TCP and UDP are two transport layer protocols that provide different sets of services to the application layer.

• All transport layer protocols provide multiplexing/demultiplexing service.

• TCP additionally provides reliable data transfer; neither TCP nor UDP offers bandwidth or delay guarantees.

• Each of the applications in the application layer has the ability to send a message by using TCP or UDP.

• The application communicates by using either of these two protocols. Both TCP and UDP will then
communicate with the internet protocol in the internet layer. The applications can read and write to the
transport layer. Therefore, we can say that communication is a two-way process.
Services Provided to the Upper Layers
Transport Layer Services or Functions
• The services provided by the transport layer are similar to those of the data link layer.
• The data link layer provides the services within a single network while the transport layer provides the
services across an internetwork made up of many networks.
• The data link layer controls the physical layer while the transport layer controls all the lower layers.

The services provided by the transport layer protocols can be divided into the following categories:
• The process to process delivery
• End-to-end Connection between Hosts
• Addressing : port number
• Encapsulation and Decapsulation
• Multiplexing and Demultiplexing
• Flow control
• Error control
• Reliable delivery
• Congestion Control
Process-to-Process Communication
• The Transport Layer is responsible for delivering data to the appropriate application process on the host
computers.

• This involves multiplexing of data from different application processes, i.e. forming data packets, and adding
source and destination port numbers in the header of each Transport Layer data packet.

• Together with the source and destination IP addresses, the port numbers constitute a network socket, i.e. an
identification address for the process-to-process communication.
2. End-to-end Connection between Hosts
The transport layer is also responsible for creating the end-to-end connection between hosts, for which it mainly uses
TCP and UDP. TCP is a reliable, connection-oriented protocol that uses a handshake to establish a robust
connection between two end hosts. TCP ensures the reliable delivery of messages and is used in various applications.
UDP, on the other hand, is a stateless and unreliable protocol that provides best-effort delivery. It is suitable for
applications that have little concern for flow or error control and that require sending bulk data, such as video
conferencing. It is often used in multicasting protocols.
Addressing: Port Numbers
• Ports are the essential ways to address multiple entities in the same location.
• Using port addressing it is possible to use more than one network-based application at the same time.

• Three types of Port numbers are used :


• Well-known ports - These are permanent port numbers. They range from 0 to 1023. These port numbers
are used by server processes.
• Registered ports - The ports ranging from 1024 to 49,151 can be registered for applications but are not
assigned or controlled.
• Ephemeral ports (Dynamic Ports) – These are temporary port numbers. They range from 49,152 to
65,535. These port numbers are used by client processes.
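As a quick illustration, the three ranges can be checked with a short Python sketch (the function name is ours, not part of any standard library):

```python
def classify_port(port: int) -> str:
    """Classify a TCP/UDP port number into its IANA range."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 1023:
        return "well-known"       # permanent, used by server processes
    if port <= 49151:
        return "registered"
    return "ephemeral"            # temporary, used by client processes

print(classify_port(53))      # well-known (DNS server port)
print(classify_port(8080))    # registered
print(classify_port(51000))   # ephemeral
```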
Encapsulation and Decapsulation
• To send a message from one process to another, the transport-layer protocol encapsulates and decapsulates
messages.
• Encapsulation happens at the sender site.
• The transport layer receives the data and adds the transport-layer header.
• Decapsulation happens at the receiver site.
• When the message arrives at the destination transport layer, the header is dropped and the transport layer
delivers the message to the process running at the application layer.
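The encapsulation and decapsulation steps above can be sketched in Python with a toy two-field header (the header format and function names are illustrative, not a real transport header):

```python
import struct

HDR = struct.Struct("!HH")  # toy header: two 16-bit fields, src and dst port

def encapsulate(src_port: int, dst_port: int, message: bytes) -> bytes:
    # Sender site: prepend the transport-layer header to the payload.
    return HDR.pack(src_port, dst_port) + message

def decapsulate(segment: bytes):
    # Receiver site: drop the header and recover ports plus payload.
    src_port, dst_port = HDR.unpack_from(segment)
    return src_port, dst_port, segment[HDR.size:]

seg = encapsulate(49152, 80, b"hello")
print(decapsulate(seg))   # (49152, 80, b'hello')
```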
Multiplexing and Demultiplexing
• Whenever an entity accepts items from more than one source, this is referred to as multiplexing (many to one).
• Whenever an entity delivers items to more than one destination, this is referred to as demultiplexing (one to many).
• The transport layer at the source performs multiplexing
• The transport layer at the destination performs demultiplexing
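A minimal sketch of demultiplexing at the destination, assuming a UDP-style demux that looks only at the destination port (all names are illustrative):

```python
# Demultiplexing sketch: deliver each arriving segment to the process
# registered on its destination port.
sockets = {}          # port -> list of delivered messages

def bind(port):
    sockets[port] = []

def demultiplex(dst_port, payload):
    if dst_port in sockets:
        sockets[dst_port].append(payload)   # one arrival stream, many sockets
    # else: no listener; a real stack would report "port unreachable"

bind(53)
bind(80)
demultiplex(53, b"dns query")
demultiplex(80, b"http request")
demultiplex(53, b"another query")
print(sockets[53])   # [b'dns query', b'another query']
```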
Multiplexing can occur in two ways:
• Upward multiplexing: Upward multiplexing means multiple transport layer connections use the same
network connection. To make transmission more cost-effective, the transport layer sends several
transmissions bound for the same destination along the same path; this is achieved through upward multiplexing.
Multiplexing can occur in two ways:
• Downward multiplexing: Downward multiplexing means one transport layer connection
uses multiple network connections. Downward multiplexing allows the transport layer to
split a connection among several paths to improve the throughput. This type of multiplexing is
used when the available networks have low capacity.
Flow Control
• Flow Control is the process of managing the rate of data transmission between two nodes to
prevent a fast sender from overwhelming a slow receiver.
• It provides a mechanism for the receiver to control the transmission speed, so that the receiving
node is not overwhelmed with data from transmitting node.
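A credit-based sketch of this idea in Python (a simplification of TCP's sliding window; class and method names are our own):

```python
# Flow-control sketch: the receiver advertises how many bytes it can
# accept (its window); the sender never sends more than that credit.
class Receiver:
    def __init__(self, buffer_size):
        self.free = buffer_size          # advertised window

    def deliver(self, data: bytes) -> int:
        accepted = data[:self.free]      # never overflow the buffer
        self.free -= len(accepted)
        return len(accepted)

    def read(self, n):                   # application consumes data,
        self.free += n                   # opening the window again

rx = Receiver(buffer_size=4)
sender_data = b"abcdefgh"
sent = min(len(sender_data), rx.free)    # sender respects the window
print(rx.deliver(sender_data[:sent]))    # 4 -- only 4 bytes accepted
rx.read(2)
print(rx.free)                           # 2 -- window reopened
```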

Error Control
• Error control at the transport layer is responsible for
1.Detecting and discarding corrupted packets.
2.Keeping track of lost and discarded packets and resending them.
3.Recognizing duplicate packets and discarding them.
4.Buffering out-of-order packets until the missing packets arrive.

• Error Control involves Error Detection and Error Correction


Congestion Control
• Congestion in a network may occur if the load on the network (the number of packets sent to the
network) is greater than the capacity of the network (the number of packets a network can
handle).
• Congestion control refers to the mechanisms and techniques that control the congestion and keep
the load below the capacity.
• Congestion Control refers to techniques and mechanisms that can either prevent congestion,
before it happens, or remove congestion, after it has happened
• Congestion control mechanisms are divided into two categories,
1.Open loop - prevent the congestion before it happens.
2.Closed loop - remove the congestion after it happens.
Transport Service Primitives
Transport Service Primitives
• To see how these primitives might be used, consider an application with a server and a number of
remote clients.
• To start with, the server executes a LISTEN primitive, typically by calling a library procedure
that makes a system call to block the server until a client turns up.
• When a client wants to talk to the server, it executes a CONNECT primitive.
• The transport entity carries out this primitive by blocking the caller and sending a packet to
the server.
• Encapsulated in the payload of this packet is a transport layer message for the server's
transport entity.
Transport Service Primitives
Elements of Transport Protocols

• Addressing
• Connection establishment
• Connection release
• Error control and flow control
• Multiplexing
• Crash recovery
Elements of Transport Protocols
• Transport protocol similar to data link protocols
• Both do error control and flow control
• However, significant differences exist
Note : Network Service Access Point (NSAP)
For data transmission-
• TCP segment sits inside the IP datagram payload field.
• IP datagram sits inside the Ethernet payload field.
Transport Layer Protocols-

• UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity and efficiency in
applications where error control can be provided by the application-layer process.

• TCP - TCP is a reliable connection-oriented protocol that can be used in any application where reliability is
important.
UDP Protocol-

• UDP is short for User Datagram Protocol.


• It is the simplest transport layer protocol.
• It has been designed to send data packets over the Internet.
• It simply takes the datagram from the network layer, attaches its header and sends it to the user.

Characteristics of UDP-

• It is a connectionless protocol.
• It is a stateless protocol.
• It is an unreliable protocol.
• It is a fast protocol.
• It offers the minimal transport service.
• It is almost a null protocol.
• It does not guarantee in order delivery.
• It does not provide congestion control mechanism.
• It is a good protocol for data flowing in one direction.
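These characteristics can be seen directly with Python's socket API: a datagram is sent with no handshake at all. This sketch assumes a usable loopback interface:

```python
import socket

# Minimal UDP exchange on loopback: connectionless, no handshake,
# just sendto()/recvfrom().
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # port 0 = let the OS pick one
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)             # one datagram, no connection setup

data, peer = server.recvfrom(2048)
print(data)                              # b'ping'
server.close(); client.close()
```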
Need of UDP-

•TCP proves to be an overhead for certain kinds of applications.


•The Connection Establishment Phase, Connection Termination Phase etc of TCP are time consuming.
•To avoid this overhead, certain applications which require fast speed and less overhead use UDP.
UDP Header-
1. Source Port-
• Source Port is a 16 bit field.
• It identifies the port of the sending application.

2. Destination Port-
• Destination Port is a 16 bit field.
• It identifies the port of the receiving application.

3. Length-
• Length is a 16 bit field.
• It identifies the combined length of UDP Header and Encapsulated data.
• Length = Length of UDP Header + Length of encapsulated data

4. Checksum-
• Checksum is a 16 bit field used for error control.
• It is calculated on UDP Header, encapsulated data and IP pseudo header.
• Checksum calculation is not mandatory in UDP.
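The checksum computation, including the IP pseudo-header, can be sketched in Python. It follows the standard Internet one's-complement algorithm; the function names are ours:

```python
import socket
import struct

def inet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words (the Internet checksum)."""
    if len(data) % 2:
        data += b"\x00"                       # pad to an even length
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
    length = 8 + len(payload)                 # UDP header + data
    # Pseudo-header: source IP, destination IP, zero byte, protocol 17, length.
    pseudo = socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) + \
             struct.pack("!BBH", 0, 17, length)
    # UDP header with the checksum field set to zero for the computation.
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return inet_checksum(pseudo + header + payload)

print(hex(udp_checksum("1.2.3.4", "5.6.7.8", 1000, 53, b"hi")))
```

Verifying a received datagram uses the same sum: with the transmitted checksum included, the folded one's-complement result comes out to zero.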
Applications Using UDP-

Following applications use UDP-


• Applications which require one response for one request use UDP. Example- DNS.
• Routing protocols like RIP use UDP because they have a very small amount of data to be
transmitted.
• Trivial File Transfer Protocol (TFTP) uses UDP to send very small sized files.
• Broadcasting and multicasting applications use UDP.
• Streaming applications like multimedia, video conferencing etc use UDP since they require speed over
reliability.
• Real time applications like chatting and online games use UDP.
• Management protocols like SNMP (Simple Network Management Protocol) use UDP.
• Bootp / DHCP uses UDP.
• Other protocols that use UDP are- Kerberos, Network Time Protocol (NTP), Network News Protocol
(NNP), Quote of the day protocol etc.
• Following implementations uses UDP as a transport layer protocol:
• NTP (Network Time Protocol)
• DNS (Domain Name Service)
• BOOTP, DHCP.
• NNP (Network News Protocol)
• Quote of the day protocol
• TFTP, RTSP, RIP.

• The application layer can do some of the tasks through UDP-


• Trace Route
• Record Route
• Timestamp

• UDP takes a datagram from Network Layer, attaches its header, and sends it to the user. So, it works fast.

• Actually, UDP is a null protocol if you remove the checksum field.


• It reduces the requirement of computer resources.
• It suits multicast or broadcast transfers.
• It suits the transmission of real-time packets, mainly in multimedia applications.
Advantages of UDP:
1. Speed: UDP is faster than TCP because it does not have the overhead of establishing a connection and
ensuring reliable data delivery.
2. Lower latency: Since there is no connection establishment, there is lower latency and faster response time.
3. Simplicity: UDP has a simpler protocol design than TCP, making it easier to implement and manage.
4. Broadcast support: UDP supports broadcasting to multiple recipients, making it useful for applications such as
video streaming and online gaming.
5. Smaller header: UDP has a smaller header (8 bytes, versus at least 20 bytes for TCP), which reduces
per-packet overhead and can improve overall network performance.
Disadvantages of UDP:
1. No reliability: UDP does not guarantee delivery of packets or order of delivery, which can lead to
missing or duplicate data.
2. No congestion control: UDP does not have congestion control, which means that it can send packets
at a rate that can cause network congestion.
3. No flow control: UDP does not have flow control, which means that it can overwhelm the receiver with
packets that it cannot handle.
4. Vulnerable to attacks: UDP is vulnerable to denial-of-service attacks, where an attacker can flood a
network with UDP packets, overwhelming the network and causing it to crash.
5. Limited use cases: UDP is not suitable for applications that require reliable data delivery, such as email
or file transfers, and is better suited for applications that can tolerate some data loss, such as video
streaming or online gaming.
TCP
What is Transmission Control Protocol (TCP)?
• TCP (Transmission Control Protocol) is one of the main protocols of the Internet protocol suite.
• It lies between the Application and Network Layers and provides reliable delivery services.
• It is a connection-oriented protocol for communications that helps in the exchange of messages between
different devices over a network.
• The Internet Protocol (IP), which establishes the technique for sending data packets between computers,
works with TCP.
TCP
Working of TCP
• To make sure that each message reaches its target location intact, the TCP/IP model breaks down the data
into small bundles and afterward reassembles the bundles into the original message on the opposite end.
• Sending the information in little bundles of information makes it simpler to maintain efficiency as opposed
to sending everything in one go.
• After a particular message is broken down into bundles, these bundles may travel along multiple routes if
one route is jammed but the destination remains the same.
TCP
For example,
• When a user requests a web page on the internet, somewhere in the world, the server processes that
request and sends back an HTML Page to that user.
• The server makes use of a protocol called the HTTP Protocol.
• The HTTP then requests the TCP layer to set the required connection and send the HTML file.
• Now, the TCP breaks the data into small packets and forwards it toward the Internet Protocol (IP) layer.
• The packets are then sent to the destination through different routes.
• The TCP layer in the user’s system waits for the transmission to get finished and acknowledges once all
packets have been received.
TCP
Features of TCP/IP
Some of the most prominent features of Transmission control protocol are
1. Segment Numbering System
• TCP keeps track of the segments being transmitted or received by assigning numbers to each and every single
one of them.
• A specific Byte Number is assigned to data bytes that are to be transferred while segments are
assigned sequence numbers.
• Acknowledgment Numbers are assigned to received segments.

2. Flow Control
• Flow control limits the rate at which a sender transfers data. This is done to ensure reliable delivery.
• The receiver continually hints to the sender on how much data can be received (using a sliding window)
TCP
3. Error Control
• TCP implements an error control mechanism for reliable data transfer
• Error control is byte-oriented
• Segments are checked for error detection
• Error Control includes – Corrupted Segment & Lost Segment Management, Out-of-order segments, Duplicate
segments, etc.

4. Congestion Control
• TCP takes into account the level of congestion in the network
• Congestion level is determined by the amount of data sent by a sender
TCP
Advantages
• It is a reliable protocol.
• It provides an error-checking mechanism as well as one for recovery.
• It gives flow control.
• It makes sure that the data reaches the proper destination in the exact order that it was sent.
• Open Protocol, not owned by any organization or individual.
• As part of the TCP/IP suite, it gives each computer on the network an IP address and each site a domain
name, making every device and site distinguishable over the network.

Disadvantages
• TCP is made for Wide Area Networks, thus its size can become an issue for small networks with low resources.
• TCP runs several layers so it can slow down the speed of the network.
• It is not generic in nature. Meaning, it cannot represent any protocol stack other than the TCP/IP suite. E.g., it
cannot work with a Bluetooth connection.
• It has seen little modification since its development around 30 years ago.
The TCP Service Model
• TCP provides a byte stream abstraction to applications that use it.
• One end puts a stream of bytes into TCP, and the identical stream of bytes appears at the other end.
• Each endpoint individually chooses its read and write sizes.
• TCP does not interpret the contents of the bytes in the byte stream at all.
• It has no idea if the data bytes being exchanged are binary data, ASCII characters, EBCDIC characters, or something
else.
• The interpretation of this byte stream is up to the applications on each end of the connection.
TCP Header and Encapsulation
TCP is encapsulated in IP datagrams as shown in the figure below:

Figure 12-2 The TCP header appears immediately following the IP header or last IPv6 extension
header and is often 20 bytes long (with no TCP options). With options, the TCP header
can be as large as 60 bytes. Common options include Maximum Segment Size, Timestamps,
Window Scaling, and Selective ACKs.
TCP Header and Encapsulation
The TCP header is considerably more complicated than the UDP header, because it must keep each end of the
connection informed (synchronized) about the current state.
TCP Header Format
• Each TCP header has 10 required fields totaling 20 bytes (160 bits) in size.
• It can optionally include an options field of up to 40 bytes.

Let’s walk through all these fields:


• Source port: this is a 16 bit field that specifies the port number of the sender.
• Destination port: this is a 16 bit field that specifies the port number of the receiver.

NOTE
• A TCP connection is uniquely identified by the combination of port numbers and IP addresses of the
sender and receiver.
• IP Addresses indicate which systems are communicating.
• Port numbers indicate which end to end sockets are communicating.

• Sequence number
• Sequence number is a 32 bit field.
• TCP assigns a unique sequence number to each byte of data contained in the TCP segment.
• This field contains the sequence number of the first data byte.
Acknowledgment number:
• Acknowledgment number is a 32 bit field.
• It contains the sequence number of the data byte that the receiver expects to receive next from the
sender.
• It is always the sequence number of the last received data byte incremented by 1.
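A one-line arithmetic sketch of the rule:

```python
# Cumulative acknowledgment sketch: the ACK number is the sequence
# number of the next byte the receiver expects.
seq_first_byte = 1001                 # first byte of the arriving segment
segment_len = 200                     # bytes of data in the segment
ack = seq_first_byte + segment_len    # last byte (1200) + 1
print(ack)                            # 1201
```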

Header Length :
• This 4-bit field gives the length of the header in 32-bit words. This is required because the length of
the Options field is variable. With a 4-bit field, the TCP header is limited to 60 bytes.
• Without options, the size is 20 bytes.

Reserved (RSV): these are 3 bits for the reserved field. They are unused and are always set to 0.

Control bits: Each bit of a control field functions individually and independently. A control bit defines the use of a
segment or serves as a validity check for other fields.
• Flags: there are 9 bits for flags, we also call them control bits. We use them to establish connections,
send data and terminate connections:
• CWR. Congestion Window Reduced (the sender reduced its sending rate);
• ECE. ECN Echo (the sender received an earlier congestion notification);

In addition to CWR and ECE, there are six classic flags in the control field:

• URG. Urgent (the Urgent Pointer field is valid; rarely used);


• ACK. Acknowledgment (the Acknowledgment Number field is valid; always on after a connection is
established);
• PSH. Push (the receiver should pass this data to the application as soon as possible; not reliably
implemented or used);
• RST. Reset the connection (connection abort, usually because of an error);
• SYN. Synchronize sequence numbers to initiate a connection;
• FIN. The sender of the segment is finished sending data to its peer;
• Window: the 16 bit window field specifies how many bytes the receiver is willing to accept, counted
from the sequence number given in the acknowledgment field. The receiver uses it to tell the sender how
much more data it can currently take.
• Checksum: 16 bits used for a checksum computed over the TCP header, the data, and an IP pseudo-header.
• Urgent pointer: these 16 bits are used when the URG bit has been set, the urgent pointer is used to indicate
where the urgent data ends.
• Options: this field is optional and can be anywhere between 0 and 40 bytes (0 to 320 bits).

Options field is generally used for the following purposes-


• Time stamp
• Window size extension
• Parameter negotiation
• Padding
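Putting the fields together, a Python sketch that decodes the 20-byte fixed header with struct (field layout as described above; the function name is ours):

```python
import struct

def parse_tcp_header(segment: bytes):
    """Decode the 20-byte fixed part of a TCP header (network byte order)."""
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (off_flags >> 12) * 4        # header length in bytes
    flags = off_flags & 0x1FF                  # low 9 bits: control flags
    return {
        "src_port": src, "dst_port": dst, "seq": seq, "ack": ack,
        "header_len": data_offset, "window": window,
        "SYN": bool(flags & 0x02), "ACK": bool(flags & 0x10),
        "FIN": bool(flags & 0x01),
        "options": segment[20:data_offset],    # present if header_len > 20
    }

# A hand-built SYN segment: data offset 5 words (20 bytes), SYN flag set.
syn = struct.pack("!HHIIHHHH", 49152, 80, 1000, 0, (5 << 12) | 0x02,
                  65535, 0, 0)
h = parse_tcp_header(syn)
print(h["SYN"], h["header_len"])   # True 20
```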
Advantages of TCP
• TCP supports multiple routing protocols.
• TCP protocol operates independently of that of the operating system.
• TCP protocol provides the features of error control and flow control.
• TCP is connection-oriented and guarantees the delivery of data.

Disadvantages of TCP
• TCP protocol cannot be used for broadcast or multicast transmission.
• TCP protocol has no block boundaries.
• No clear separation is being offered by TCP protocol between its interface, services, and
protocols.
• Replacing a protocol in TCP/IP is difficult.
TCP CONNECTION MANAGEMENT

• TCP is connection-oriented.
• A connection-oriented transport protocol establishes a logical path between the source and
destination.
• All of the segments belonging to a message are then sent over this logical path.
• In TCP, connection-oriented transmission requires three phases: Connection Establishment, Data
Transfer and Connection Termination.

Connection Establishment
• While opening a TCP connection, the two nodes (client and server) want to agree on a set of
parameters.
• The parameters are the starting sequence numbers that are to be used for their respective byte
streams.
• Connection establishment in TCP is a three-way handshake.
Connection Termination
• Connection termination or teardown can be done in two ways :
• Three-way Close and
• Half-Close
Connection Termination
TCP Connection Management
TCP Connection Management
From start to finish
• Let's step through the process of transmitting a packet with TCP/IP.
Step 1: Establish connection
• When two computers want to send data to each other over TCP, they first need to establish a connection
using a three-way handshake.

• Computer 1 sends Computer 2 a segment labeled "SYN".
• Computer 2 replies to Computer 1 with a segment labeled "ACK SYN".
• Computer 1 sends Computer 2 a segment labeled "ACK".
• The first computer sends a packet with the SYN bit set to 1 (SYN = "synchronize?").
• The second computer sends back a packet with the ACK bit set to 1 (ACK = "acknowledge!") plus the SYN bit
set to 1. The first computer replies back with an ACK.
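The sequence-number bookkeeping of the handshake can be sketched as plain Python dictionaries (ISNs chosen at random, as a real stack would do):

```python
import random

# Three-way-handshake sketch: each side picks an initial sequence number
# (ISN); a SYN consumes one sequence number, so it is ACKed with ISN + 1.
client_isn = random.randrange(2**32)
server_isn = random.randrange(2**32)

syn     = {"flags": {"SYN"},        "seq": client_isn}
syn_ack = {"flags": {"SYN", "ACK"}, "seq": server_isn,
           "ack": syn["seq"] + 1}            # acknowledges the client's SYN
ack     = {"flags": {"ACK"},        "seq": client_isn + 1,
           "ack": syn_ack["seq"] + 1}        # acknowledges the server's SYN

assert ack["ack"] == server_isn + 1          # connection established
```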
TCP Connection Management
Step 2: Send packets of data
• When a packet of data is sent over TCP, the recipient must always acknowledge what they received.

• The first computer sends a packet with data and a sequence number.
• The second computer acknowledges it by setting the ACK bit and increasing the acknowledgement number
by the length of the received data.
• The sequence and acknowledgement numbers are part of the TCP header:
• Those two numbers help the computers to keep track of which data was successfully received, which data
was lost, and which data was accidentally sent twice.
TCP Connection Management
Step 3: Close the connection
• Either computer can close the connection when they no longer want to send or receive data.

• A computer initiates closing the connection by sending a packet with the FIN bit set to 1 (FIN = finish).
• The other computer replies with an ACK and another FIN.
• After one more ACK from the initiating computer, the connection is closed.
TCP Connection Management
Detecting lost packets
• TCP connections can detect lost packets using a timeout.

• After sending off a packet, the sender starts a timer and puts the packet in a retransmission queue.
• If the timer runs out and the sender has not yet received an ACK from the recipient, it sends the packet
again.
• The retransmission may lead to the recipient receiving duplicate packets, if a packet was not actually
lost but just very slow to arrive or be acknowledged. If so, the recipient can simply discard duplicate
packets. It's better to have the data twice than not at all!
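A sketch of the timer and retransmission-queue logic described above (the class name and fixed timeout are illustrative; real TCP derives its timeout from measured round-trip times):

```python
# Timeout-based retransmission sketch: unacknowledged segments wait in a
# queue; when their timer expires they are sent again.
class RetransmitQueue:
    def __init__(self, timeout):
        self.timeout = timeout
        self.queue = {}                    # seq -> (segment, deadline)

    def send(self, seq, segment, now):
        self.queue[seq] = (segment, now + self.timeout)

    def on_ack(self, seq):
        self.queue.pop(seq, None)          # acknowledged: stop tracking

    def check(self, now):
        resent = []
        for seq, (segment, deadline) in self.queue.items():
            if now >= deadline:            # no ACK in time -> retransmit
                resent.append(seq)
                self.queue[seq] = (segment, now + self.timeout)
        return resent

s = RetransmitQueue(timeout=3)
s.send(100, b"data", now=0)
s.send(200, b"more", now=0)
s.on_ack(100)
print(s.check(now=5))    # [200] -- only the unacknowledged segment
```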
TCP Connection Management
Handling out of order packets
• TCP connections can detect out of order packets by using the sequence and acknowledgement numbers.

• When the recipient sees a higher sequence number than what they have acknowledged so far, they know
that they are missing at least one packet in between.
• For example, the recipient might see a sequence number of #73 when it expected a sequence
number of #37.
• The recipient lets the sender know something is missing by sending a packet with an acknowledgement
number set to the expected sequence number.
TCP Connection Management
• Sometimes the missing packet is simply taking a slower route through the Internet and it arrives soon after.

• Other times, the missing packet may actually be a lost packet and the sender must retransmit the packet.
TCP Connection Management
• In both situations, the recipient has to deal with out of order packets.
• Fortunately, the recipient can use the sequence numbers to reassemble the packet data in the correct order.
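A sketch of in-order reassembly from a sequence-number-keyed buffer (the function is illustrative, assuming byte-offset sequence numbers starting at 0):

```python
# Out-of-order reassembly sketch: buffer segments by sequence number and
# deliver in-order bytes as soon as each gap is filled.
def reassemble(segments, expected=0):
    buffer, delivered = {}, b""
    for seq, data in segments:            # arrival order, possibly shuffled
        buffer[seq] = data
        while expected in buffer:         # deliver every contiguous segment
            chunk = buffer.pop(expected)
            delivered += chunk
            expected += len(chunk)
    return delivered

# The segment starting at byte 5 arrives before the one starting at byte 0.
arrivals = [(5, b"world"), (0, b"hello")]
print(reassemble(arrivals))   # b'helloworld'
```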
https://www.ques10.com/p/9391/tcp-connection-management-1/

https://www.javatpoint.com/tcp-connection-termination
TCP Connection Management
• TCP is a unicast connection-oriented protocol.
• Every connection-oriented protocol needs to establish a connection in order to reserve resources at both
communicating ends.
• In TCP, the connections are established using three way handshake technique.
• TCP Connection Management includes TCP Connection Establishment & TCP Connection Release.
TCP Connection Establishment and Termination
• A TCP connection is defined to be a 4-tuple consisting of two IP addresses and two port numbers.
• It is a pair of endpoints or sockets where each endpoint is identified by an (IP address, port number) pair.
• A connection typically goes through three phases:
• Setup (connection establishment)
• Data transfer
• Teardown (closing).
• Some of the difficulty in creating a robust TCP implementation is handling all of the transitions between and
among these phases correctly.
A typical TCP connection establishment and close (without any data transfer) is shown below:
Connection Establishment
TCP Connection (A 3-way handshake)
To establish a TCP connection, the following events usually take place:
1. The active opener (normally called the client) sends a SYN segment (a TCP/IP packet with the SYN bit field
turned on in the TCP header) specifying the port number of the peer to which it wants to connect and the
client's initial sequence number, or ISN(c). It typically sends one or more options at this point. This is segment 1.
2. The server responds with its own SYN segment containing its initial sequence number (ISN(s)). This is segment
2. The server also acknowledges the client’s SYN by ACKing ISN(c) plus 1. A SYN consumes one sequence
number and is retransmitted if lost.
3. The client must acknowledge this SYN from the server by ACKing ISN(s) plus 1. This is segment 3.

• These three segments complete the connection establishment. This is often called the three-way handshake. Its
main purposes are to let each end of the connection know that a connection is starting, to carry special details
as options, and to exchange the ISNs.
• The side that sends the first SYN is said to perform an active open. This is typically a client.
• The other side, which receives this SYN and sends the next SYN, performs a passive open. It is most commonly
called the server.
Connection Termination
TCP Termination (A 4-way handshake)
The figure above also shows how a TCP connection is closed (also called cleared or terminated). Either end can
initiate a close operation, and simultaneous closes are also supported but are rare. Traditionally, it was most
common for the client to initiate a close. However, certain servers (e.g., Web servers) initiate a close after they have
completed a request. Usually a close operation starts with an application indicating its desire to terminate its
connection (e.g., using the close() system call). The closing TCP initiates the close operation by sending a FIN
segment (a TCP segment with the FIN bit field set). The complete close operation occurs after both sides have
completed the close:

• The active closer sends a FIN segment specifying the current sequence number the receiver expects to see
(K in Figure 13-1). The FIN also includes an ACK for the last data sent in the other direction (labeled L in Figure
13-1).
• The passive closer responds by ACKing value K + 1 to indicate its successful receipt of the active closer’s FIN. At
this point, the application is notified that the other end of its connection has performed a close. Typically this
results in the application initiating its own close operation. The passive closer then effectively becomes another
active closer and sends its own FIN. The sequence number is equal to L.
• To complete the close, the final segment contains an ACK for the last FIN. Note that if a FIN is lost, it is
retransmitted until an ACK for it is received.
Connection Termination
• While it takes three segments to establish a connection, it takes four to terminate one.
• It is also possible for the connection to be in a half-open state, although this is not common.
• The reason is that TCP’s data communications model is bidirectional, meaning it is possible to have only one of
the two directions operating.
• The half-close operation in TCP closes only a single direction of the data flow.
• Two half-close operations together close the entire connection.
TCP Half-Close
• TCP supports a half-close operation.
• Few applications require this capability, so it is not common.
• To use this feature, the API must provide a way for the application to say, "I am done sending data, so send a FIN
to the other end, but I still want to receive data from the other end, until it sends me a FIN."
• The Berkeley sockets API supports half-close, if the application calls the shutdown() function instead of calling
the more typical close() function.
• Most applications, however, terminate both directions of the connection by calling close.
• The figure below shows an example of a half-close being used.
• It shows the client on the left side initiating the half-close, but either end can do this.
The first two segments are the same as for a regular close: a FIN by the initiator, followed by an ACK of the FIN by the
recipient. The operation then differs from Figure 13-1, because the side that receives the half-close can still send
data. We show only one data segment, followed by an ACK, but any number of data segments can be sent. (The
exchange of data segments and acknowledgments is detailed in Chapter 15.) When the end that received the half-
close is done sending data, it closes its end of the connection, causing a FIN to be sent, and this delivers an end-of-
file indication to the application that initiated the half-close. When this second FIN is acknowledged, the connection
is completely closed.
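The half-close described above maps onto the sockets API's `shutdown()` call. Below is a self-contained sketch of the semantics; `socketpair()` is used instead of real TCP sockets so the example runs locally, but with TCP sockets `shutdown(SHUT_WR)` is what causes a FIN to be sent while the receiving direction stays open:

```python
import socket

# Sketch of half-close semantics on a connected stream socket pair.
# With real TCP sockets, shutdown(SHUT_WR) sends a FIN in one direction
# only; socketpair() is used here to keep the example self-contained.
a, b = socket.socketpair()

a.shutdown(socket.SHUT_WR)   # "I am done sending" -- like sending a FIN
assert b.recv(100) == b""    # b sees end-of-file from a's direction

b.sendall(b"still open")     # the other direction still carries data
data = a.recv(100)           # a can still receive until b closes
b.close()
a.close()
print(data)
```

Calling `close()` instead would have terminated both directions at once, which is what most applications do.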
TCP Options
Timers in TCP
• Because data segment and acknowledgement can get lost in the network, timeout and retransmission
mechanism has to be used.
• In TCP, four different timers are used for each connection:
 Retransmission timer: This timer is used when expecting an acknowledgement from the other end;
 Persist timer: This is used to keep window size information flowing even if the other end closes its receive
window;
 Keepalive timer: This is used to detect when the other end of an otherwise idle connection crashes or
reboots;
 2MSL timer: This measures the time a connection has been in the TIME_WAIT state.
Timer Management
• TCP uses different types of timers to control and manage various tasks:
Keep-alive timer:
• This timer is used to check the integrity and validity of a connection.
• When keep-alive time expires, the host sends a probe to check if the connection still exists.

Retransmission timer:
• This timer keeps track of data that has been sent but not yet acknowledged.
• If the acknowledgement of sent data is not received within the retransmission time, the data segment is sent again.

Persist timer:
• TCP session can be paused by either host by sending Window Size 0.
• To resume the session a host needs to send Window Size with some larger value.
• If this segment never reaches the other end, both ends may wait for each other indefinitely.
• When the Persist timer expires, the host re-sends its window size to let the other end know.
• Persist Timer helps avoid deadlocks in communication.

Timed-Wait:
• After releasing a connection, either of the hosts waits for a Timed-Wait time to terminate the connection completely.
• This is in order to make sure that the other end has received the acknowledgement of its connection termination
request.
• The timed-wait period can be a maximum of 240 seconds (4 minutes), i.e., twice the Maximum Segment Lifetime (2MSL).
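When the retransmission timer expires, TCP typically doubles the timeout before the next retry (exponential backoff) so repeated losses do not flood the network. A minimal sketch, assuming an initial RTO of 1 second and a cap of 64 seconds (common practice, e.g., in RFC 6298, but an assumption here, not from the source):

```python
# Sketch of exponential backoff for the retransmission timer: on each
# timeout the RTO doubles, up to an assumed cap. The initial value (1 s)
# and cap (64 s) are illustrative, following common practice.
def backoff_schedule(initial_rto=1.0, cap=64.0, retries=8):
    rto, schedule = initial_rto, []
    for _ in range(retries):
        schedule.append(rto)
        rto = min(rto * 2, cap)   # double, but never exceed the cap
    return schedule

print(backoff_schedule())  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 64.0]
```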
TCP: The Transmission Control Protocol (Preliminaries)
• The protocols discussed so far [up to the network layer] do not include mechanisms for delivering data reliably;
• they may detect that erroneous data has been received, using a checksum or CRC,
• but they do not try very hard to repair errors:

Information theory and coding theory


• Error-correcting codes (adding redundant bits so that the real information can be retrieved even if some bits
are damaged) are one very important method for handling errors.
• Automatic Repeat Request (ARQ): which means "try sending again" until the information is finally received.
This approach forms the basis for many communications protocols, including TCP.
TCP: The Transmission Control Protocol (Preliminaries)
• ARQ and Retransmission
• Windows of Packets and Sliding Windows
• Variable Windows: Flow Control and Congestion Control
• Setting the Retransmission Timeout
1. ARQ and Retransmission
For a multihop communications channel, there are other problems besides packet bit errors:
• Problems that arise at an intermediate router
• Packet reordering
• Packet duplication
• Packet erasures (drops)

• An error-correcting protocol designed for use over a multihop communications channel (such as IP) must cope with
all of these problems.
1. ARQ and Retransmission
Packet drops and bit errors
• A straightforward method of dealing with packet drops (and bit errors) is to resend the packet until it is received
properly.
• This requires a way to determine:
• Whether the receiver has received the packet.
• Whether the packet it received was the same one the sender sent.

• This is solved by using acknowledgment (ACK):


• the sender sends a packet and awaits an ACK.
• When the receiver receives the packet, it sends the ACK.
• When the sender receives the ACK, it sends another packet, and the process continues.
1. ARQ and Retransmission
Packet drops and bit errors

Interesting questions to ask here are:


• How long should the sender (expect to) wait for an ACK?
• What if the ACK is lost?
• The sender simply sends the packet again. The receiver may receive two or more copies in that case, so it
must be prepared to handle that situation (duplication).
• What if the packet was received but had errors in it?
• When a receiver receives a packet containing an error, it refrains from sending an ACK.
Eventually, the sender resends the packet, which ideally arrives undamaged.
1. ARQ and Retransmission
Packet duplication

• The receiver might receive duplicate copies of the packet.


• This problem is addressed using a sequence number.
• Every unique packet gets a new sequence number when it is sent at the source, and this sequence number is carried
along in the packet itself.
• The receiver can use this number to determine whether it has already seen the packet and, if so, discard it.
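The ideas above (ACKs, retransmission on loss, and duplicate detection via sequence numbers) combine into stop-and-wait ARQ. A simulation sketch, with an alternating 0/1 sequence number and an illustrative lossy channel (the names and loss probability are assumptions for the example):

```python
import random

# Sketch of stop-and-wait ARQ over a simulated lossy channel. Sequence
# numbers alternate 0/1 so the receiver can detect duplicates caused by
# lost ACKs. The loss probability is illustrative.
random.seed(42)

def lossy(p_loss=0.3):
    return random.random() < p_loss

def transfer(messages):
    delivered, expected_seq, seq = [], 0, 0
    for msg in messages:
        while True:
            if lossy():            # data packet dropped in the network:
                continue           # sender times out and retransmits
            # receiver side: accept only the expected sequence number;
            # a duplicate is discarded here, but still acknowledged
            if seq == expected_seq:
                delivered.append(msg)
                expected_seq ^= 1  # flip 0 <-> 1
            if lossy():            # ACK dropped: sender retransmits,
                continue           # producing a duplicate at the receiver
            break                  # ACK received; move to the next packet
        seq ^= 1
    return delivered

print(transfer(["a", "b", "c"]))  # ['a', 'b', 'c']
```

Note that whenever an ACK is lost, the retransmitted packet arrives with a stale sequence number and is discarded, which is exactly the duplicate-handling case described above.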
1. ARQ and Retransmission
Efficiency

Allowing more than one packet to be in the network at a time:


• The sender must decide not only when to inject a packet into the network, but also how many. It must also manage
the timers used while waiting for ACKs, and it must keep a copy of each packet not yet acknowledged in case
retransmissions are necessary.
• The receiver needs to have a more sophisticated ACK mechanism: one that can distinguish which packets have been
received and which have not.
• The receiver may need a more sophisticated buffering (packet storage) mechanism: one that allows it to hold "out-of-
sequence" packets (those packets that have arrived earlier than those expected because of loss or reordering).

There are other issues:


• What if the receiver is slower than the sender? If the sender simply injects many packets at a very high rate, the receiver
might just drop them because of processing or memory limitations. The same question can be asked about the routers in
the middle.
• What if the network infrastructure cannot handle the rate of data the sender and receiver wish to use?
2. Windows of Packets and Sliding Windows
• Assume each unique packet has a sequence number.
• We define a window of packets as the collection of packets (or their sequence numbers) that have been injected by
the sender but not yet completely acknowledged (the sender has not received an ACK for them).
• We refer to the window size as the number of packets in the window.
2. Windows of Packets and Sliding Windows
In the figure:
• Packet number 3 has already been sent and acknowledged, so the copy of it that the sender was keeping can now
be released.
• Packet 7 is ready at the sender but not yet able to be sent because it is not yet "in" the window.
• When the sender receives an ACK for packet 4, the window "slides" to the right by one packet, meaning that the
copy of packet 4 can be released and packet 7 can be sent.
• This movement of the window gives rise to another name for this type of protocol, a sliding window protocol.

Typically, this window structure is kept at both the sender and the receiver.
• At the sender, it keeps track of which packets can be released, which are awaiting ACKs, and which cannot yet be sent.
• At the receiver, it keeps track of:
• What packets have already been received and acknowledged,
• What packets are expected (and how much memory has been allocated to hold them),
• Which packets (even if received) will not be kept because of limited memory.
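The sender-side bookkeeping just described can be sketched as a small class (the class and method names are illustrative, not a real API). An ACK for the lowest outstanding sequence number releases the stored copy and slides the window to the right:

```python
from collections import deque

# Sketch of sender-side sliding-window bookkeeping. Packets are injected
# while the window has room; a cumulative ACK releases stored copies and
# slides the window right. Names and window size are illustrative.
class SlidingWindowSender:
    def __init__(self, window_size):
        self.window_size = window_size
        self.next_seq = 0        # next sequence number to send
        self.unacked = deque()   # copies kept in case of retransmission

    def can_send(self):
        return len(self.unacked) < self.window_size

    def send(self, packet):
        assert self.can_send(), "window full"
        self.unacked.append((self.next_seq, packet))
        self.next_seq += 1

    def ack(self, seq):
        # cumulative ACK: release every copy up to and including seq
        while self.unacked and self.unacked[0][0] <= seq:
            self.unacked.popleft()

s = SlidingWindowSender(window_size=3)
for pkt in ["p0", "p1", "p2"]:
    s.send(pkt)
assert not s.can_send()   # window full: packet 3 must wait
s.ack(0)                  # ACK for packet 0 slides the window right
assert s.can_send()       # now packet 3 can be sent
s.send("p3")
print([seq for seq, _ in s.unacked])  # [1, 2, 3]
```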
2. Windows of Packets and Sliding Windows
Advantage

• The window structure is convenient for keeping track of data as it flows between sender and receiver.

Drawbacks:
• it does not provide guidance as to how large the window should be, or
• what happens if the receiver or network cannot handle the sender’s data rate.
3. Variable Windows: Flow Control and Congestion Control
Flow control
• It handles the problem that arises when a receiver is too slow relative to a sender, by forcing the sender to slow down
when the receiver cannot keep up.

• It is usually handled in one of two ways:


• Rate-based flow control -
• gives the sender a certain data rate allocation and ensures that data is never allowed to be sent at a rate
that exceeds the allocation.
• This type of flow control is used for streaming applications and can be used with broadcast and multicast
delivery.
• Window-based flow control is the most popular approach when sliding windows are being used. In this
approach, the window size is not fixed but is instead allowed to vary over time.
• Window advertisement, or simply a window update is a method for the receiver to signal the sender how
large a window to use. This value is used by the sender (the receiver of the window advertisement) to
adjust its window size.
• Logically, window update is separate from the ACKs we discussed previously, but in practice the window
update and ACK are carried in a single packet, meaning that the sender tends to adjust the size of its
window at the same time it slides it to the right.
3. Variable Windows: Flow Control and Congestion Control
• This approach works fine for protecting the receiver, but what about the network in between?
• We may have routers with limited memory between the sender and the receiver that have to contend with
slow network links.
• When this happens, it is possible for the sender’s rate to exceed a router’s ability to keep up, leading to packet
loss.
• This is addressed with a special form of flow control called congestion control.

Congestion control
• Congestion control involves the sender slowing down so as to not overwhelm the network between itself and
the receiver.
• Explicit signaling: an explicit indication, similar to a window advertisement, tells the sender to slow down.
• Implicit signaling: the sender guesses that it needs to slow down, deciding to do so based on some other
evidence (such as detecting packet loss).
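With implicit signaling, the classic reaction policy is AIMD (additive increase, multiplicative decrease): grow the congestion window by a fixed amount per loss-free round trip, and halve it when loss is inferred. This is a simplified sketch of that policy, not TCP's full congestion-control algorithm:

```python
# Simplified AIMD sketch: additively increase the congestion window each
# round trip without loss; halve it when loss is inferred (implicit
# signaling). Real TCP congestion control is considerably more elaborate.
def aimd(loss_events, cwnd=1.0, min_cwnd=1.0):
    trace = []
    for loss in loss_events:              # one entry per round trip
        if loss:
            cwnd = max(cwnd / 2, min_cwnd)   # multiplicative decrease
        else:
            cwnd += 1.0                      # additive increase
        trace.append(cwnd)
    return trace

print(aimd([False, False, False, True, False]))  # [2.0, 3.0, 4.0, 2.0, 3.0]
```

The resulting "sawtooth" pattern (slow growth, sharp cuts) is what keeps the sender probing for bandwidth without persistently overwhelming the routers in between.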
4. Setting the Retransmission Timeout
• One of the most important performance issues is how long to wait before concluding that a packet has been lost and
should be resent.
• That is, What should the retransmission timeout be?
• Intuitively, the amount of time the sender should wait before resending a packet is about the sum of the following
times:
• The time to send the packet,
• The time for the receiver to process it and send an ACK,
• The time for the ACK to travel back to the sender,
• The time for the sender to process the ACK.

• In practice, none of these times are known with certainty and any or all of them vary over time as additional load is
added to or removed from the end hosts or routers.

• Because it is not practical for the user to estimate all the times, a better strategy is to have the protocol
implementation try to estimate them.

• This is called round-trip-time estimation and is a statistical process.


• The true RTT is likely to be close to the sample mean of a collection of samples of RTTs.
• This average naturally changes over time (it is not stationary), as the paths taken through the network may change.
4. Setting the Retransmission Timeout
• It would not be sensible to set the retransmission timer to be exactly equal to the mean estimator, as it is likely that
many actual RTTs will be larger, thereby inducing unwanted retransmissions.
• The timeout should be set to something larger than the mean, but exactly what this relationship is (or even if
the mean should be directly used) is not yet clear.
• Setting the timeout too large is also undesirable, as this lets the network go idle while waiting, reducing
throughput.
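The standard way to set the timeout above the mean is Jacobson's algorithm (standardized in RFC 6298): keep a smoothed RTT estimate and a mean-deviation estimate, and set RTO = SRTT + 4·RTTVAR. A sketch with illustrative sample values:

```python
# Sketch of the standard RTT estimator (Jacobson's algorithm, as in
# RFC 6298): a smoothed mean (srtt) plus a deviation term (rttvar),
# with the timeout set well above the mean: RTO = srtt + 4 * rttvar.
ALPHA, BETA = 1 / 8, 1 / 4   # standard gains from RFC 6298

def update(srtt, rttvar, sample):
    if srtt is None:                      # first RTT measurement
        return sample, sample / 2
    # rttvar is updated using the old srtt, then srtt itself
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    return srtt, rttvar

srtt = rttvar = None
for sample in [100.0, 120.0, 110.0]:      # RTT samples in ms (illustrative)
    srtt, rttvar = update(srtt, rttvar, sample)
rto = srtt + 4 * rttvar
print(round(srtt, 2), round(rto, 2))
```

Because the deviation term grows when samples fluctuate, the RTO automatically sits farther above the mean on jittery paths, which is exactly the margin the discussion above calls for.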
