Unit 2-4 DCCN


Medium Access Control Sublayer (MAC Sublayer)
The medium access control (MAC) sublayer is part of the data link layer of the open
system interconnections (OSI) reference model for data transmission. It is
responsible for flow control and multiplexing on the transmission medium. It
controls the transmission of data packets over remotely shared channels and
sends data via the network interface card.

MAC Layer in the OSI Model


The Open System Interconnections (OSI) model is a layered networking
framework that conceptualizes how communications should be done between
heterogeneous systems. The data link layer is the second lowest layer. It is
divided into two sublayers −

• The logical link control (LLC) sublayer


• The medium access control (MAC) sublayer

The following diagram depicts the position of the MAC layer −

Functions of MAC Layer


• It provides an abstraction of the physical layer to the LLC and upper layers of the OSI
network.
• It is responsible for encapsulating frames so that they are suitable for transmission
via the physical medium.
• It resolves the addressing of source station as well as the destination station, or
groups of destination stations.
• It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
• It also performs collision resolution and initiates retransmission in case of collisions.
• It generates the frame check sequences and thus contributes to protection against
transmission errors.

MAC Addresses
MAC address or media access control address is a unique identifier allotted to
a network interface controller (NIC) of a device. It is used as a network address
for data transmission within a network segment like Ethernet, Wi-Fi,
and Bluetooth.

MAC address is assigned to a network adapter at the time of manufacturing.


It is hardwired or hard-coded in the network interface card (NIC). A MAC
address consists of six groups of two hexadecimal digits, separated by
hyphens, colons, or no separators at all. An example of a MAC address is
00:0A:89:5B:F0:11.
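The three separator styles mentioned above can be normalized with a small helper. This is a hypothetical sketch, not part of any standard library:

```python
# Hypothetical helper: normalize a MAC address written with hyphens,
# colons, or no separators into the colon-separated form used above.
def normalize_mac(mac: str) -> str:
    digits = mac.replace(":", "").replace("-", "").upper()
    if len(digits) != 12 or any(c not in "0123456789ABCDEF" for c in digits):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

print(normalize_mac("00-0A-89-5B-F0-11"))  # 00:0A:89:5B:F0:11
print(normalize_mac("000a895bf011"))       # 00:0A:89:5B:F0:11
```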

Channel allocation is a process in which a single channel is divided and
allotted to multiple users in order to carry out user-specific tasks. The
number of users may vary every time the process takes place. If there are N
users and the channel is divided into N equal-sized subchannels, each user is
assigned one portion. If the number of users is small and does not vary over
time, Frequency Division Multiplexing can be used, as it is a simple and
efficient channel bandwidth allocation technique.
The channel allocation problem can be solved by two schemes: Static Channel
Allocation in LANs and MANs, and Dynamic Channel Allocation.
These are explained below.
1. Static Channel Allocation in LANs and MANs:
It is the classical or traditional approach of allocating a single channel among
multiple competing users using Frequency Division Multiplexing (FDM). If
there are N users, the frequency channel is divided into N equal-sized
portions (bandwidth), each user being assigned one portion. Since each user
has a private frequency band, there is no interference between users.
However, dividing the channel into a fixed number of chunks is inefficient
when the number of users or their traffic varies.

T = 1/(UC − L)

T(FDM) = 1/(U(C/N) − L/N) = N × T

Where,

T = mean time delay,

C = capacity of the channel,
L = arrival rate of frames,
1/U = bits/frame,
N = number of subchannels,
T(FDM) = mean time delay with Frequency Division Multiplexing
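The delay formulas above can be checked numerically. The values below (capacity, frame size, arrival rate, and N) are illustrative assumptions, not taken from this text:

```python
# Numerical check of the mean-delay formulas for a shared channel vs. FDM.
C = 100e6                 # channel capacity, bits/sec (assumed)
bits_per_frame = 10_000   # 1/U, bits per frame (assumed)
L = 5_000                 # arrival rate, frames/sec (assumed)
N = 10                    # number of FDM subchannels (assumed)

U = 1 / bits_per_frame
T = 1 / (U * C - L)                   # delay on the single shared channel
T_fdm = 1 / (U * (C / N) - L / N)     # delay on one of N FDM subchannels

print(round(T * 1e6, 3), "microseconds")      # 200.0 microseconds
print(round(T_fdm * 1e6, 3), "microseconds")  # 2000.0 microseconds
```

Note that T(FDM) comes out exactly N times worse than T, which is why static FDM allocation performs poorly for bursty traffic.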
2. Dynamic Channel Allocation:
Possible assumptions include:

1. Station Model:
Assumes that each of the N independent stations generates frames. The
probability of a frame being generated in an interval of length Δt is λΔt,
where λ is the constant arrival rate of new frames.

2. Single Channel Assumption:
A single channel is available for all communication. All stations are
equivalent and can send and receive on that channel.

3. Collision Assumption:
If two frames overlap in time, they collide. Every collision is an error,
and both frames must be retransmitted. Collisions are the only possible
errors.

4. Time can be divided into slots or be continuous.

5. Stations can sense whether the channel is busy before they try to use it.

ALOHA

ALOHA is a multiple access protocol for transmission of data via a shared
network channel. It operates in the medium access control sublayer
(MAC sublayer) of the open systems interconnection (OSI) model. Using
this protocol, several data streams originating from multiple nodes are
transferred through a multipoint transmission channel.

In ALOHA, each node or station transmits a frame without trying to
detect whether the transmission channel is idle or busy. If the channel
is idle, the frames are transmitted successfully. If two frames
attempt to occupy the channel simultaneously, the frames collide and
are discarded. The stations may then retransmit the corrupted frames
repeatedly until successful transmission occurs.

Versions of ALOHA Protocols

Pure ALOHA
In pure ALOHA, the time of transmission is continuous. Whenever a
station has a frame available, it sends the frame. If there is a collision
and the frame is destroyed, the sender waits for a random amount of
time before retransmitting it.

Slotted ALOHA
Slotted ALOHA reduces the number of collisions and doubles the
capacity of pure ALOHA. The shared channel is divided into a number of
discrete time intervals called slots. A station can transmit only at the
beginning of a slot. However, there can still be collisions if more than
one station tries to transmit at the beginning of the same time slot.
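The "doubles the capacity" claim can be verified from the standard ALOHA throughput analysis, where throughput S (successful frames per frame time) as a function of offered load G is S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA:

```python
import math

def pure_aloha(G):
    # Vulnerable period is two frame times, hence the factor of 2.
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    # Slotting halves the vulnerable period to one frame time.
    return G * math.exp(-G)

# The maxima occur at G = 0.5 and G = 1.0 respectively.
print(round(pure_aloha(0.5), 3))     # 0.184
print(round(slotted_aloha(1.0), 3))  # 0.368 -- double pure ALOHA's peak
```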
Fiber Distributed Data Interface (FDDI)
Fiber Distributed Data Interface (FDDI) is a set of ANSI and ISO
standards for transmission of data in local area network (LAN) over fiber
optic cables. It is applicable in large LANs that can extend up to 200
kilometers in diameter.

• FDDI uses optical fiber as its physical medium.


• It operates in the physical and medium access control (MAC layer) of the Open
Systems Interconnection (OSI) network model.
• It provides a high data rate of 100 Mbps and can support thousands of users.
• It is used in LANs of up to 200 kilometers for long-distance voice and multimedia
communication.
• It uses a ring-based token passing mechanism derived from the IEEE 802.4 token bus
standard.
• It contains two token rings, a primary ring for data and token transmission and a
secondary ring that provides backup if the primary ring fails.
• FDDI technology can also be used as a backbone for a wide area network (WAN).

The following diagram shows FDDI −


Frame Format
The frame format of FDDI is similar to that of token bus as shown in the
following diagram −

The fields of an FDDI frame are −

• Preamble: 1 byte for synchronization.


• Start Delimiter: 1 byte that marks the beginning of the frame.
• Frame Control: 1 byte that specifies whether this is a data frame or control frame.
• Destination Address: 2-6 bytes that specifies address of destination station.
• Source Address: 2-6 bytes that specifies address of source station.
• Payload: A variable length field that carries the data from the network layer.
• Checksum: 4 bytes frame check sequence for error detection.
• End Delimiter: 1 byte that marks the end of the frame.

Protocols in the data link layer are designed so that this layer can perform its
basic functions: framing, error control and flow control. Framing is the process
of dividing bit-streams from the physical layer into data frames whose size ranges
from a few hundred to a few thousand bytes. Error control mechanisms deal
with transmission errors and retransmission of corrupted and lost frames. Flow
control regulates the speed of delivery so that a fast sender does not drown a
slow receiver.

Types of Data Link Protocols


Data link protocols can be broadly divided into two categories, depending on
whether the transmission channel is noiseless or noisy.

Elementary Data Link Protocols

Simplex Protocol

The Simplex protocol is a hypothetical protocol designed for unidirectional data
transmission over an ideal channel, i.e. a channel through which transmission
can never go wrong. It has distinct procedures for sender and receiver. The
sender simply sends all its data onto the channel as soon as it is
available in its buffer. The receiver is assumed to process all incoming data
instantly. It is hypothetical because it handles neither flow control nor error
control.

Stop – and – Wait Protocol

The Stop – and – Wait protocol is also for a noiseless channel. It provides
unidirectional data transmission without any error control facilities. However,
it provides flow control so that a fast sender does not drown a slow receiver.
The receiver has a finite buffer size and finite processing speed. The sender
can send a frame only when it has received an indication from the receiver that it
is available for further data processing.
Stop – and – Wait ARQ

Stop – and – wait Automatic Repeat Request (Stop – and – Wait ARQ) is a
variation of the above protocol with added error control mechanisms,
appropriate for noisy channels. The sender keeps a copy of the sent frame. It
then waits for a finite time to receive a positive acknowledgement from
receiver. If the timer expires or a negative acknowledgement is received, the
frame is retransmitted. If a positive acknowledgement is received then the
next frame is sent.

Go – Back – N ARQ

Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame. It uses the concept of sliding window,
and so is also called sliding window protocol. The frames are sequentially
numbered and a finite number of frames are sent. If the acknowledgement of
a frame is not received within the time period, all frames starting from that
frame are retransmitted.

Selective Repeat ARQ

This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame. However, here only the erroneous or lost
frames are retransmitted, while the good frames are received and buffered.

Sliding Window Protocol


The sliding window is a technique for sending multiple frames at a time. It controls the
flow of data packets between two devices where reliable, in-order delivery of data
frames is needed. It is also used in TCP (Transmission Control Protocol).

In this technique, each frame is assigned a sequence number. The sequence
numbers are used to find missing data at the receiver end and to avoid accepting
duplicate data.

Types of Sliding Window Protocol


Sliding window protocol has two types:
1. Go-Back-N ARQ
2. Selective Repeat ARQ

Go-Back-N ARQ
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is
a data link layer protocol that uses a sliding window method. In this, if any frame is
corrupted or lost, all subsequent frames have to be sent again.

The size of the sender window is N in this protocol. For example, in Go-Back-8 the
sender window size is 8. The receiver window size is always 1.

If the receiver receives a corrupted frame, it discards it. The receiver does not accept a
corrupted frame. When the timer expires, the sender sends the correct frame again.
The design of the Go-Back-N ARQ protocol is shown below.

The example of Go-Back-N ARQ is shown below in the figure.
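The Go-Back-N rule can be sketched as a toy model (not a real network stack): when a frame is lost, that frame and every frame sent after it in the window are transmitted again. The window size and loss pattern below are made-up illustrations:

```python
def go_back_n(n_frames, window, lost_once):
    lost = set(lost_once)   # frames lost on their first transmission
    sent = []               # transmission log
    base = 0                # first unacknowledged frame
    while base < n_frames:
        end = min(base + window, n_frames)
        sent.extend(range(base, end))       # send the whole window
        acked = base
        while acked < end and acked not in lost:
            acked += 1                      # receiver ACKs in order only
        if acked < end:
            lost.discard(acked)             # the retransmission succeeds
        base = acked                        # go back to first unACKed frame
    return sent

# Frame 1 is lost, so frames 2 and 3 are resent even though they arrived.
print(go_back_n(5, 4, lost_once={1}))  # [0, 1, 2, 3, 1, 2, 3, 4]
```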


Selective Repeat ARQ
Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat Request.
It is a data link layer protocol that uses a sliding window method. The Go-Back-N ARQ
protocol works well if there are few errors. But if many frames are in error, a lot
of bandwidth is lost in sending the frames again. So, we use the Selective Repeat ARQ
protocol. In this protocol, the size of the sender window is always equal to the size of
the receiver window. The size of the sliding window is always greater than 1.

If the receiver receives a corrupt frame, it does not directly discard it. It sends a negative
acknowledgment to the sender, and the sender sends that frame again as soon as it
receives the negative acknowledgment. There is no waiting for any time-out to send that
frame. The design of the Selective Repeat ARQ protocol is shown below.
The example of the Selective Repeat ARQ protocol is shown below in the figure.
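As a toy contrast with the Go-Back-N behavior (same assumed loss pattern), under Selective Repeat only the negatively acknowledged frame is retransmitted; correctly received frames are buffered:

```python
def selective_repeat(n_frames, lost_once):
    sent = list(range(n_frames))   # every frame goes out once
    sent += sorted(lost_once)      # only the lost frames are resent
    return sent

# Frame 1 is lost: one retransmission, versus three under Go-Back-N.
print(selective_repeat(5, {1}))  # [0, 1, 2, 3, 4, 1]
```

This is why Selective Repeat wastes less bandwidth on noisy channels, at the cost of receiver-side buffering.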

Error control in Data Link Layer


Error control in data link layer is the process of detecting and correcting data
frames that have been corrupted or lost during transmission.

In case of lost or corrupted frames, the receiver does not receive the correct
data-frame and the sender is ignorant about the loss. The data link layer follows a
technique to detect transit errors and take the necessary action, namely
retransmission of frames whenever an error is detected or a frame is lost. The
process is called Automatic Repeat Request (ARQ).

Phases in Error Control


The error control mechanism in data link layer involves the following phases −

• Detection of Error − Transmission error, if any, is detected by either the sender or


the receiver.
• Acknowledgment − An acknowledgment may be positive or negative.
o Positive ACK − On receiving a correct frame, the receiver sends a positive
acknowledgment.
o Negative ACK − On receiving a damaged frame or a duplicate frame, the
receiver sends a negative acknowledgment back to the sender.
• Retransmission − The sender maintains a clock and sets a timeout period. If an
acknowledgment of a data-frame previously transmitted does not arrive before the
timeout, or a negative acknowledgment is received, the sender retransmits the frame.

Error Control Techniques


There are three main techniques for error control −

• Stop and Wait ARQ


This protocol involves the following transitions −
o A timeout counter is maintained by the sender, which is started when a frame
is sent.
o If the sender receives acknowledgment of the sent frame within time, the
sender is confirmed about successful delivery of the frame. It then transmits
the next frame in queue.
o If the sender does not receive the acknowledgment within time, the sender
assumes that either the frame or its acknowledgment is lost in transit. It then
retransmits the frame.
o If the sender receives a negative acknowledgment, the sender retransmits the
frame.
• Go-Back-N ARQ
The working principle of this protocol is −
o The sender has a buffer called the sending window.
o The sender sends multiple frames based upon the sending-window size,
without receiving the acknowledgment of the previous ones.
o The receiver receives frames one by one. It keeps track of incoming frame’s
sequence number and sends the corresponding acknowledgment frames.
o After the sender has sent all the frames in window, it checks up to what
sequence number it has received positive acknowledgment.
o If the sender has received positive acknowledgment for all the frames, it sends
next set of frames.
o If the sender receives a NACK, or does not receive an ACK for a particular frame,
it retransmits that frame and all frames sent after it.
• Selective Repeat ARQ
o Both the sender and the receiver have buffers called sending window and
receiving window respectively.
o The sender sends multiple frames based upon the sending-window size,
without receiving the acknowledgment of the previous ones.
o The receiver also receives multiple frames within the receiving window size.
o The receiver keeps track of the incoming frames’ sequence numbers and buffers
the frames in memory.
o It sends ACKs for all successfully received frames and sends NACKs only for
frames which are missing or damaged.
o The sender, in this case, resends only the frames for which a NACK is received.
Line Configuration in Computer Networks
A network is two or more devices connected through a link. A link is a
communication pathway that transfers data from one device to another.
Devices can be a computer, printer, or any other device that is capable of sending
and receiving data. For visualization purposes, imagine any link as a line drawn
between two points.
For communication to occur, two devices must be connected in some way to
the same link at the same time. There are two possible types of connections:
1. Point-to-Point Connection
2. Multipoint Connection
Point-to-Point Connection:
1. A point-to-point connection provides a dedicated link between two devices.
2. The entire capacity of the link is reserved for transmission between those
two devices.
3. Most point-to-point connections use an actual length of wire or cable to
connect the two ends, but other options such as microwave or satellite links
are also possible.
4. Point-to-point network topology is considered to be one of the easiest and
most conventional network topologies.
5. It is also the simplest to establish and understand.

Point-to-Point:
• Uses a dedicated link to connect two devices
• Simple and easy to set up
• Limited to two devices only
• Does not require a network interface card (NIC) or a hub/switch
• Can become complex and difficult to manage as the network grows
Multipoint:
• Uses a single link to connect three or more devices
• More complex than point-to-point configuration
• Can be more efficient and cost-effective for larger networks
• Devices share the same link, which can lead to collisions and lower
performance
• Commonly used in LANs and MANs
Routing is a process that is performed by layer 3 (or network layer) devices in
order to deliver the packet by choosing an optimal path from one network to
another.
Types of Routing
There are 3 types of routing that are described below.
1. Static Routing
Static routing is a process in which we have to manually add routes to the
routing table.
Advantages
• No routing overhead for the router CPU, which means a cheaper router
can be used to do routing.
• It adds security because only the administrator can allow routing to
particular networks.
• No bandwidth usage between routers.
Disadvantage
• For a large network, it is a hectic task for administrators to manually add
each route for the network to the routing table on each router.
• The administrator should have good knowledge of the topology. If a new
administrator comes, then he has to manually add each route, so he
should have very good knowledge of the routes of the topology.
2. Default Routing
This is the method where the router is configured to send all packets toward
a single router (next hop). It doesn’t matter to which network the packet
belongs, it is forwarded out to the router which is configured for default
routing. It is generally used with stub routers. A stub router is a router that
has only one route to reach all other networks.

3. Dynamic Routing
Dynamic routing makes automatic adjustments of the routes according to the
current state of the route in the routing table. Dynamic routing uses protocols
to discover network destinations and the routes to reach
them. RIP and OSPF are the best examples of dynamic routing protocols.
Automatic adjustments will be made to reach the network destination if one
route goes down.
A dynamic protocol has the following features:
• The routers should have the same dynamic protocol running in order to
exchange routes.
• When a router finds a change in the topology then the router advertises it
to all other routers.
Advantages
• Easy to configure.
• More effective at selecting the best route to a destination remote network
and also for discovering remote networks.
Disadvantage
• Consumes more bandwidth for communicating with other neighbors.
• Less secure than static routing.
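Whichever way the routing table is populated (static, default, or dynamic), a router forwards a packet by picking the most specific matching entry. The sketch below uses an assumed, made-up routing table; the 0.0.0.0/0 entry plays the role of the default route described above:

```python
import ipaddress

# Toy forwarding decision: longest matching prefix wins.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"),  "via 192.168.1.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "via 192.168.1.2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "via 192.168.1.254"),  # default
]

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    # The most specific route (longest prefix) is chosen.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))  # via 192.168.1.2 (most specific match)
print(next_hop("8.8.8.8"))   # via 192.168.1.254 (falls to the default)
```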

Congestion Control in Computer Networks


What is congestion?
Congestion is a state occurring in the network layer when the message traffic is so
heavy that it slows down network response time.

Effects of Congestion
• As delay increases, performance decreases.
• If delay increases, retransmission occurs, making the situation worse.

Congestion control algorithms


• Congestion Control is a mechanism that controls the entry of data
packets into the network, enabling a better use of a shared network
infrastructure and avoiding congestive collapse.
• Congestive-Avoidance Algorithms (CAA) are implemented at the TCP
layer as the mechanism to avoid congestive collapse in a network.
• There are two congestion control algorithms, which are as follows:

• Leaky Bucket Algorithm


• The leaky bucket algorithm finds its use in the context of network
traffic shaping or rate-limiting.
• Leaky bucket and token bucket implementations are predominantly
used in traffic shaping algorithms.
• This algorithm is used to control the rate at which traffic is sent to the
network and to shape bursty traffic into a steady traffic stream.
• The disadvantage of the leaky bucket algorithm is the inefficient use of
available network resources.
• Large amounts of network resources, such as bandwidth, may not be
used effectively.

Let us consider an example to understand.

Imagine a bucket with a small hole in the bottom. No matter at what rate
water enters the bucket, the outflow is at a constant rate. When the bucket is
full, additional water entering spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket, and the
following steps are involved in the leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface
transmits packets at a constant rate.
3. Bursty traffic is converted to uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
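The steps above can be sketched as a per-tick simulation. The bucket capacity, drain rate, and arrival pattern are arbitrary values chosen for illustration:

```python
def leaky_bucket(arrivals, capacity, rate):
    """arrivals[t] = packets arriving at tick t. Returns the packets
    sent per tick and the count dropped when the bucket overflows."""
    queue, sent, dropped = 0, [], 0
    for a in arrivals:
        accepted = min(a, capacity - queue)
        dropped += a - accepted        # overflow spills over and is lost
        queue += accepted
        out = min(queue, rate)         # constant-rate outflow
        queue -= out
        sent.append(out)
    return sent, dropped

# A burst of 5 packets is smoothed to 2 per tick; 1 packet overflows.
sent, dropped = leaky_bucket([5, 0, 0, 4, 0], capacity=4, rate=2)
print(sent, dropped)  # [2, 2, 0, 2, 2] 1
```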
• Token bucket Algorithm
• The leaky bucket algorithm has a rigid output rate, independent of how
bursty the incoming traffic is.
• In some applications, when large bursts arrive, the output should be
allowed to speed up. This calls for a more flexible algorithm, preferably
one that never loses information. The token bucket algorithm therefore
also finds its uses in network traffic shaping or rate-limiting.
• It is a control algorithm that indicates when traffic may be sent, based
on the presence of tokens in the bucket.
• The bucket holds tokens, each of which permits the transmission of a
packet of predetermined size. A token is removed from the bucket each
time a packet is sent.
• When tokens are present, a flow is allowed to transmit traffic.
• No token means the flow cannot send its packets. Hence, a flow can
transmit traffic up to its peak burst rate as long as there are enough
tokens in the bucket.
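The token bucket can be sketched the same way; again the bucket size, token rate, and arrivals are arbitrary illustrative values. Note how saved-up tokens let a burst go out faster than the steady rate, which the leaky bucket would never allow:

```python
def token_bucket(arrivals, bucket_size, token_rate):
    tokens, sent = 0, []
    for a in arrivals:
        tokens = min(bucket_size, tokens + token_rate)  # accumulate tokens
        out = min(a, tokens)       # one token is consumed per packet sent
        tokens -= out
        sent.append(out)
    return sent

# Two idle ticks save up 4 tokens, so the burst sends 4 packets at once
# instead of being capped at the steady rate of 2 per tick.
print(token_bucket([0, 0, 5, 5, 0], bucket_size=4, token_rate=2))  # [0, 0, 4, 2, 0]
```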
TCP/IP Packet Format
TCP Packet Format
TCP (Transmission Control Protocol) is a fundamental protocol in the TCP/IP
(Transmission Control Protocol/Internet Protocol) family. It offers dependable,
ordered data delivery between applications running on different hosts in a
network. The TCP packet format must be understood in order to analyze and
resolve network communication issues. The TCP packet format will be
thoroughly examined in this article, along with its many fields and their
importance.

Diagram Showing the TCP packet Format

TCP Packet format has these fields


• Source Port(16 bits): It holds the source/transmitting application’s port
number and helps in determining the application where the data delivery
is planned.
• Destination Port (16 bits): This field has the port number of the
receiving application and helps to deliver the data to the appropriate
application.
• Sequence Number (32 bits): It identifies the position of the segment’s
data in the byte stream, so that segments can be reassembled in the
proper order at the receiving end.
• Acknowledgment Number (32 bits): This field contains the next
sequence number expected, acknowledging all data received up to that
point.
• Data Offset (4 bits): The data offset field indicates where the TCP data
payload starts, i.e. the size of the TCP header in 32-bit words.
• Control Flags (9 bits): TCP uses a few control flags to regulate
communication. Some of the important flags include:
• SYN (Synchronize): Used to establish a connection between the
sender and the receiver.
• ACK (Acknowledgment): Indicates that the acknowledgment number
field is valid, confirming that data has been received.
• FIN (Finish): Signals that the sender has finished sending data and
the TCP connection should be terminated.
• RST (Reset): Mainly used to reset the connection when an error
occurs.
• Window Size (16 bits): Specifies the size of the sender’s receive window,
i.e. how much data it is currently willing to accept.
• Checksum (16 bits): It reveals whether the segment was damaged during
transportation.
• Urgent Pointer (16 bits): This field points to the packet’s first byte of
urgent data.
• Options (Variable length): This field represents the different TCP
options.
• Data Payload: This field mainly contains the information which is the
actual application data that is being transmitted.
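The fixed 20-byte part of the header described above can be packed with Python's `struct` module. The port numbers and flag value below are illustrative; this sketch leaves the checksum zero rather than computing the real TCP pseudo-header checksum:

```python
import struct

def tcp_header(src_port, dst_port, seq, ack, flags, window):
    data_offset = 5                          # 5 x 32-bit words = 20 bytes
    offset_flags = (data_offset << 12) | flags   # offset + 9 flag bits
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,   # 16-bit ports
                       seq, ack,             # 32-bit seq/ack numbers
                       offset_flags,
                       window,               # receive window size
                       0,                    # checksum (left zero here)
                       0)                    # urgent pointer

SYN = 0x002
hdr = tcp_header(12345, 80, seq=0, ack=0, flags=SYN, window=65535)
print(len(hdr))  # 20
```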
IP Packet Format
IP (Internet Protocol) packet structure is used for data transmission in
computer networks. Data units received from the layer above are
encapsulated in an IP packet, and additional header data is added. IP Payload
is the term used to describe the enclosed data. All the information required to
deliver the packet at the other end is in the IP header.

Diagram Showing the IP Packet Format

Key IP Packet Format Fields


• Version (4 bits): This field contains the IP version number; for IPv4 the
value is 0100 (4). It is used to distinguish IPv4 packets from IPv6.
• Header Length (4 bits): This value is 4 bits in size and represents how
many 32-bit words are present in the IP header. In short, it is also called
HE-LEN.
• Type of Service (TOS) (8 bits): This field mainly carries information
about the quality of service to be delivered. The first 3 bits provide
distinction and prioritization of IP packets depending on certain service
needs, such as precedence, delay, throughput, dependability, and cost.
• Total Length (16 bits): The total length of the IP packet, header plus
payload, is stored in bytes. Subtracting the header length from this value
gives the size of the payload.
• Identification (16 bits): This field gives a specific IP packet a distinctive
identity. It helps to identify the fragments of an IP datagram uniquely.
• Flags (3 bits): Contains control flags for packet fragmentation and
reassembly, such as the “Don’t Fragment” and “More Fragments” flags.
• Fragment Offset (13 bits): When a packet is fragmented, this field
contains the position of the fragment inside the original packet. It
represents the number of data bytes ahead of the particular fragment in
the specific datagram.
• Time to Live (TTL) (8 bits): This field specifies the total lifetime of the
data packet in the internet system. It indicates how many hops
(routers) an IP packet can make before being discarded. The value
ranges from 0 to 255.
• Protocol (8 bits): This IPv4 header field specifies the transport layer
protocol, such as TCP, UDP, or ICMP, to which the IP packet will be
handed. For example, TCP is indicated by the number 6 and UDP is
denoted by the number 17.
• Header Checksum (16 bits): This is an error-checking field added to
identify errors in the header. By comparing the IP header with its
checksum for error detection, it ensures the IP header’s integrity.
• Source IP Address (32 bits): The IPv4 sender’s 32-bit address is
represented by this value.
• Destination IP Address (32 bits): This value represents the 32-bit IP
address of the intended recipient.
• Options (variable length): This field has options and parameters for
security, record route, time stamp, etc. You can see that the End of
Options, or EOL, usually marks the end of the list of options component.
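The header checksum field described above is the standard one's-complement sum of the header's 16-bit words (RFC 791), computed with the checksum field itself zeroed. The header values below (TTL 64, protocol 6 = TCP, addresses 10.0.0.1 → 10.0.0.2) are illustrative:

```python
import struct

def ip_checksum(header: bytes) -> int:
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF                        # one's complement

# A minimal 20-byte header: version 4, IHL 5, total length 20,
# checksum field zeroed for the computation.
hdr = struct.pack("!BBHHHBBH4s4s",
                  0x45, 0, 20, 0, 0, 64, 6, 0,
                  bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
csum = ip_checksum(hdr)
print(hex(csum))  # 0x66e2
```

A receiver verifies the header by running the same sum over the header with the checksum field filled in; a valid header yields 0.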

IP Address Format and Table


IP address is a short form of "Internet Protocol address." It is a unique number
assigned to every device connected to the internet, such as an Android phone,
laptop, Mac, etc. An IP address is represented as integers separated by dots
(.), for example, 192.167.12.46.

Types of IP Address
An IP address is categorized into two different types based on the number of
addresses it can provide. These are:

o IPv4 (Internet Protocol version 4)


o IPv6 (Internet Protocol version 6)

What is IPv4?
IPv4 is version 4 of IP. It is the current version and the most commonly used type of IP
address. It is a 32-bit address written as four numbers separated by dots (.), i.e., periods.
This address is unique for each device. For example, 66.94.29.13.

What is IPv6?
IPv4 provides about 4 billion addresses, which the developers thought would be
enough, but they were wrong. IPv6 is the next generation of IP addresses. The main
difference between IPv4 and IPv6 is the address size: IPv4 is a 32-bit address,
whereas IPv6 is a 128-bit hexadecimal address. IPv6 provides a large address space,
and it has a simpler header than IPv4.

IP Address Format
Originally IP addresses were divided into five different categories called classes:
class A, class B, class C, class D, and class E. Of these, classes A, B, and C are the
most important. Each address class defines a different number of bits for its network
prefix (network address) and host number (host address). The starting address bits
decide to which class an address belongs.

Network Address: The network address specifies the unique number which is assigned to
your network. In the above figure, the network address takes two bytes of IP address.
Host Address: A host address is a specific address number assigned to each host machine.
With the help of the host address, each machine is identified in your network. The network
address will be the same for each host in a network, but the host address must differ.

Address Format IPv4


The address format of IPv4 is represented as 4 octets (32 bits) and is divided into three
main classes, namely class A, class B, and class C.

The above diagram shows the address format of IPv4. An IPv4 is a 32-bit decimal address. It
contains four octets or fields separated by 'dot,' and each field is 8-bit in size. The number
that each field contains should be in the range of 0-255.
Class A
Class A addresses use only the first (higher-order) octet (byte) to identify the network prefix,
and the remaining three octets (bytes) are used to define the individual host addresses. Class A
addresses range from 0.0.0.0 to 127.255.255.255. The first bit of the first octet is always set
to 0 (zero), the next 7 bits determine the network address, and the remaining 24 bits determine
the host address. So the first octet ranges from 0 to 127 (00000000 to 01111111).

Class B
Class B addresses use the first two octets (two bytes) to identify the network prefix, and the
remaining two octets (two bytes) define host addresses. Class B addresses range
from 128.0.0.0 to 191.255.255.255. The first two bits of the first octet are always set
to 10 (one and zero), the next 14 bits determine the network address, and the remaining 16
bits determine the host address. So the first octet ranges from 128 to 191 (10000000 to
10111111).

Class C
Class C addresses use the first three octets (three bytes) to identify the network prefix, and
the remaining last octet (one byte) defines the host address. Class C addresses range
from 192.0.0.0 to 223.255.255.255. The first three bits of the first octet are always set to
110, the next 21 bits specify the network address, and the remaining 8 bits specify the host
address. The first octet ranges from 192 to 223 (11000000 to 11011111).
Class D
Class D IP addresses are reserved for multicast addresses. The first four bits of the first octet
are always set to 1110, and the remaining bits specify the multicast group address. Class D
addresses range from 224.0.0.0 to 239.255.255.255. In multicasting, data is not assigned to
any particular host machine, so it is not required to extract a host address from the IP
address, and there is no subnet mask in class D.

Class E
Class E IP addresses are reserved for experimental purposes and future use, and they have no
subnet mask. The first four bits of the first octet are always set to 1111. Class E addresses
range from 240.0.0.0 to 255.255.255.255.

In every IP address class, the number of host bits determines, as a power of 2, the total
number of host addresses that can be created for a particular network address. A Class A
network can contain a maximum of 2^24 (16,777,216) host numbers, a Class B network a maximum
of 2^16 (65,536), and a Class C network a maximum of 2^8 (256).
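As a quick check of the class boundaries above, a small Python helper (the function name is ours, for illustration) can classify an address by its first octet:

```python
# Classify an IPv4 address into its class (A-E) from the first octet,
# following the ranges given above: 0-127 A, 128-191 B, 192-223 C,
# 224-239 D, 240-255 E.
def ip_class(address):
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"
    elif first <= 191:
        return "B"
    elif first <= 223:
        return "C"
    elif first <= 239:
        return "D"
    else:
        return "E"

print(ip_class("11.65.27.1"))    # A
print(ip_class("130.5.5.5"))     # B
print(ip_class("224.0.0.5"))     # D
```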

Subnet address of an IP address, with an example:

Suppose a Class A address is 11.65.27.1, where 11 is the network prefix (address) and 65.27.1
identifies a particular host on the network. Suppose the network admin wants to use bits 23 to
6 to identify the subnet and the remaining bits 5 to 0 to identify the host. This is
represented by a subnet mask with 1 bits in positions 31 to 6 and 0 bits in the remaining
positions (5 to 0).

Subnet Mask (binary): 11111111 11111111 11111111 11000000


IP address (binary): 00001011 01000001 00011011 00000001

Now, the subnet address can be calculated by applying a bitwise AND (1 AND 1 = 1; 1 AND 0,
0 AND 1, and 0 AND 0 all give 0) between the complete IP address and the subnet mask. The result is:

00001011 01000001 00011011 00000000 = 11.65.27.0 subnet address
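The same AND operation can be reproduced in Python; this is an illustrative sketch (the helper names are ours) using the 11.65.27.1 / 255.255.255.192 example above:

```python
# Compute the subnet address by ANDing each octet of the IP address
# with the corresponding octet of the subnet mask.
def to_octets(dotted):
    return [int(o) for o in dotted.split(".")]

def subnet_address(ip, mask):
    return ".".join(str(i & m) for i, m in zip(to_octets(ip), to_octets(mask)))

# Mask 11111111 11111111 11111111 11000000 = 255.255.255.192
print(subnet_address("11.65.27.1", "255.255.255.192"))  # 11.65.27.0
```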

IPv6 Address Format
All IPv6 addresses are 128-bit addresses written in hexadecimal, divided into 8 sections of
16 bits each. Because IPv6 addresses are represented in hexadecimal, each section ranges from
0 to FFFF, and sections are separated by colons (:). The notation also allows the leading
zeros of each 16-bit section to be dropped, and if two or more consecutive 16-bit sections
contain all zeros (0:0), they can be compressed using a double colon (::).

An IPv6 address thus consists of 8 sections, each holding a 16-bit hexadecimal value separated
by colons (:), represented in the following format:

xxxx : xxxx : xxxx : xxxx : xxxx : xxxx : xxxx : xxxx

Each "xxxx" group contains a 16-bit hexadecimal value, and each "x" is a 4-bit
hexadecimal value. For example:

FDEC : BA98 : 0000 : 0000 : 0600 : BDFF : 0004 : FFFF
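Python's standard `ipaddress` module applies exactly the zero-suppression rules described above (dropping leading zeros and compressing the longest run of all-zero sections with `::`), so the example address can be checked directly:

```python
import ipaddress

# Parse the example address and show its compressed and fully expanded forms.
addr = ipaddress.IPv6Address("FDEC:BA98:0000:0000:0600:BDFF:0004:FFFF")
print(addr.compressed)  # fdec:ba98::600:bdff:4:ffff
print(addr.exploded)    # fdec:ba98:0000:0000:0600:bdff:0004:ffff
```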

Transport Layer
o The transport layer is the 4th layer from the top.
o The main role of the transport layer is to provide the communication services directly
to the application processes running on different hosts.
o The transport layer provides a logical communication between application processes
running on different hosts. Although the application processes on different hosts are
not physically connected, application processes use the logical communication
provided by the transport layer to send the messages to each other.
o The transport layer protocols are implemented in the end systems but not in the
network routers.
o A computer network provides more than one protocol to the network applications. For
example, TCP and UDP are two transport layer protocols that provide different sets of
services to the application layer.
o All transport layer protocols provide a multiplexing/demultiplexing service. The transport
layer may also provide other services such as reliable data transfer, bandwidth guarantees,
and delay guarantees.
o Each of the applications in the application layer has the ability to send a message by
using TCP or UDP. The application communicates by using either of these two
protocols. Both TCP and UDP will then communicate with the internet protocol in the
internet layer. The applications can read and write to the transport layer. Therefore, we
can say that communication is a two-way process.
Services provided by the Transport Layer
The services provided by the transport layer are similar to those of the data link layer.
The data link layer provides the services within a single network while the transport
layer provides the services across an internetwork made up of many networks. The
data link layer controls the physical layer while the transport layer controls all the lower
layers.

The services provided by the transport layer protocols can be divided into five
categories:

o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing

End-to-end delivery:
The transport layer transmits the entire message to the destination. Therefore, it
ensures the end-to-end delivery of an entire message from a source to the destination.

Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged
packets.

The reliable delivery has four aspects:

o Error control
o Sequence control
o Loss control
o Duplication control

Error Control

o The primary role of reliability is error control. In reality, no transmission is ever 100
percent error-free. Therefore, transport layer protocols are designed to provide error-free
transmission.
o The data link layer also provides the error handling mechanism, but it ensures only
node-to-node error-free delivery. However, node-to-node reliability does not ensure
the end-to-end reliability.
o The data link layer checks for errors on each link. If an error is introduced inside one of
the routers, it will not be caught by the data link layer, which only detects errors
introduced between the two ends of a single link. Therefore, the transport layer performs
end-to-end error checking to ensure that the packet has arrived correctly.
Sequence Control

o The second aspect of the reliability is sequence control which is implemented at the
transport layer.
o On the sending end, the transport layer is responsible for dividing the message received
from the upper layers into segments that the lower layers can handle. On the receiving end,
it ensures that the various segments of a transmission can be correctly reassembled.

Loss Control

Loss control is the third aspect of reliability. The transport layer ensures that all the
fragments of a transmission arrive at the destination, not just some of them. On the
sending end, all the fragments of a transmission are given sequence numbers by the
transport layer. These sequence numbers allow the receiver's transport layer to
identify any missing segment.

Duplication Control

Duplication control is the fourth aspect of reliability. The transport layer guarantees
that no duplicate data arrive at the destination. Just as sequence numbers are used to
identify lost packets, they allow the receiver to identify and discard duplicate
segments.
Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If the
receiver is overloaded with too much data, it discards packets and asks for their
retransmission. This increases network congestion and thus reduces system performance.
The transport layer is responsible for flow control. It uses the sliding window protocol,
which makes data transmission more efficient and controls the flow of data so that the
receiver is not overwhelmed. The sliding window protocol is byte-oriented rather than
frame-oriented.
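The sliding-window idea can be sketched in a few lines of Python. This is an illustrative in-memory simulation (the function name and window size are our assumptions), not real TCP, which also needs timers, ACK segments, and retransmission:

```python
# A minimal sketch of byte-oriented sliding-window flow control: the sender
# may have at most `window_size` unacknowledged bytes outstanding at a time.
def sliding_window_send(data, window_size):
    base = 0          # oldest unacknowledged byte
    next_seq = 0      # next byte to send
    sent_order = []
    while base < len(data):
        # Send while the window is open.
        while next_seq < len(data) and next_seq - base < window_size:
            sent_order.append(data[next_seq])
            next_seq += 1
        # Here the (simulated) receiver acknowledges everything sent so far,
        # sliding the window forward.
        base = next_seq
    return sent_order

print("".join(sliding_window_send("HELLO WORLD", 4)))  # HELLO WORLD
```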

Multiplexing
The transport layer uses the multiplexing to improve transmission efficiency.

Multiplexing can occur in two ways:

o Upward multiplexing: Upward multiplexing means multiple transport layer connections use
the same network connection. To make transmission more cost-effective, the transport layer
sends several transmissions bound for the same destination along the same path; this is
achieved through upward multiplexing.

o Downward multiplexing: Downward multiplexing means one transport layer connection uses
multiple network connections. It allows the transport layer to split a connection among
several paths to improve throughput. This type of multiplexing is used when networks have
low capacity.
Addressing

o According to the layered model, the transport layer interacts with the functions of the
session layer. Many protocols combine session, presentation, and application layer
protocols into a single layer known as the application layer. In these cases, delivery to
the session layer means the delivery to the application layer. Data generated by an
application on one machine must be transmitted to the correct application on another
machine. In this case, addressing is provided by the transport layer.
o The transport layer provides the user with an address, specified as a station plus a port.
The port variable represents a particular transport service (TS) user of a specified
station, known as a Transport Service Access Point (TSAP). Each station has only one
transport entity.
o The transport layer protocols need to know which upper-layer protocols are
communicating.
TCP Connection Management

A TCP connection is established using the three-way handshake discussed earlier. One side,
say the server, passively waits for an incoming connection by executing the LISTEN and
ACCEPT primitives, either specifying a particular peer or nobody in particular.

The other side executes a CONNECT primitive, specifying the IP address and port to which it
wants to connect, the maximum TCP segment size it will accept, and optionally some user
data (for example, a password).

The CONNECT primitive sends a TCP segment with the SYN bit on and the ACK bit off, and
waits for a response.

The sequence of TCP segments sent in the typical case is shown in the figure below −


When the segment sent by Host-1 reaches the destination, Host-2, the receiving TCP entity
checks to see whether there is a process that has done a LISTEN on the port given in the
destination port field. If not, it sends a reply with the RST bit on to refuse the
connection. Otherwise, it hands the TCP segment to the listening process, which can accept
or reject the connection (for example, if the client does not look legitimate).
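The LISTEN/ACCEPT and CONNECT primitives described above map directly onto the Berkeley sockets API, where the operating system carries out the SYN / SYN+ACK / ACK exchange. A self-contained loopback sketch (port 0 asks the OS for any free port):

```python
import socket
import threading

# Passive open: bind, LISTEN, and (in a thread) ACCEPT.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen()                         # LISTEN primitive
port = srv.getsockname()[1]

def server():
    conn, _ = srv.accept()           # ACCEPT: handshake completes here
    conn.sendall(b"connected")
    conn.close()

t = threading.Thread(target=server)
t.start()

# Active open: CONNECT sends the SYN segment; the OS finishes the handshake.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
reply = cli.recv(1024)
print(reply)                         # b'connected'
cli.close()
t.join()
srv.close()
```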
Call Collision
If two hosts try to establish a connection simultaneously between the same two sockets,
the sequence of events is as demonstrated in the figure. Only one connection is
established, because connections are identified by their endpoints, so two connections
cannot exist between the same pair of sockets.

Suppose the first setup results in a connection identified by (x, y), and the second
connection is also set up. In that case, only one table entry will be made, i.e., for
(x, y). For the initial sequence number, a clock-based scheme is used, with a clock tick
every 4 microseconds. For additional safety, when a host crashes it may not reboot for the
maximum packet lifetime. This is to make sure that no packets from previous connections
are still roaming around the network.

Design Issues with Session Layer :

1. Establish sessions between machines –
The establishment of a session between machines is an important service
provided by the session layer. A session is responsible for creating a
dialog between connected machines: the session layer provides a
mechanism for opening, closing, and managing a session between end-
user application processes, i.e. a semi-permanent dialogue. A session
consists of the requests and responses that occur between applications.
2. Enhanced Services –
Certain services such as checkpoints and management of tokens are the
key features of session layer and thus it becomes necessary to keep
enhancing these features during the layer’s design.
3. To help in token management and synchronization –
The session layer plays an important role in preventing the collision of several
critical operations, as well as in ensuring better data transfer over the network
by establishing synchronization points at specific intervals. It is therefore
highly important to ensure the proper execution of these services.

Remote Procedure Call (RPC)

A remote procedure call (RPC) is an interprocess communication technique used
for client-server based applications. It is also known as a subroutine call or a
function call.

A client has a request message that the RPC framework translates and sends to the server.
This request may be a procedure or a function call to a remote server. When
the server receives the request, it sends the required response back to the
client. The client is blocked while the server is processing the call and resumes
execution only after the server has finished.

The sequence of events in a remote procedure call are given as follows −

• The client stub is called by the client.
• The client stub puts the parameters in a message and makes a system call to send the
message to the server.
• The message is sent from the client to the server by the client’s operating system.
• The message is passed to the server stub by the server operating system.
• The parameters are removed from the message by the server stub.
• Then, the server procedure is called by the server stub.

A diagram that demonstrates this is as follows −
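The stub sequence above can be made concrete with Python's built-in XML-RPC library, one of many possible RPC implementations (the `add` procedure and port choice are ours, for illustration): the client stub marshals the call into a message, and the server stub unmarshals it and invokes the procedure.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure and serve a single request.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.socket.getsockname()[1]
server.register_function(lambda a, b: a + b, "add")

t = threading.Thread(target=server.handle_request)
t.start()

# Client side: the proxy acts as the client stub, so the remote call
# looks just like a local function call.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
print(result)  # 5
t.join()
server.server_close()
```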

Advantages of Remote Procedure Call
Some of the advantages of RPC are as follows −

• Remote procedure calls support process-oriented and thread-oriented models.
• The internal message passing mechanism of RPC is hidden from the user.
• The effort to re-write and re-develop the code is minimum in remote procedure
calls.
• Remote procedure calls can be used in distributed environment as well as the local
environment.
• Many of the protocol layers are omitted by RPC to improve performance.
Disadvantages of Remote Procedure Call
Some of the disadvantages of RPC are as follows −

• The remote procedure call is a concept that can be implemented in different ways; it
is not a standard.
• There is no flexibility in RPC for hardware architecture, as it is only interaction-based.
• There is an increase in cost because of remote procedure calls.
Design Issues in Presentation Layer
The syntax and the semantics of the information exchanged between two
communication systems is managed by the presentation layer of the OSI
Model.
Before going through the design issues in the presentation layer, some of its
main functions are:
1. Translation –
Information in the form of numbers, characters, and symbols must be
changed into bit streams before transmission. The presentation layer
handles the different encoding methods used by different machines and
manages the translation of data between the format required by the
network and the format used by the computer.
2. Encryption –
The data encryption at the transmission end as well as the decryption at
the receiver end is managed by the presentation layer.
3. Compression –
In order to reduce the number of bits to be transmitted, the presentation
layer performs the data compression. It increases efficiency in case of
multimedia files such as audio, video etc.
Design issues with Presentation Layer :
1. Standard way of encoding data –
The presentation layer follows a standard way to encode data when it
needs to be transmitted. The encoded data is represented as character
strings, integers, floating-point numbers, and data structures composed of
simpler components. Different machines handle such data differently,
depending on the encoding methods they follow.
2. Maintaining the Syntax and Semantics of distributed information –
The presentation layer manages and maintains the syntax as well as logic
and meaning of the information that is distributed.
3. Standard Encoding on the wire –
The data structures that are defined to be exchanged need to be abstract
along with the standard encoding to be used “on the wire”.
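One common way to realize a standard "on the wire" encoding is to agree on network byte order (big-endian) for every field, regardless of each machine's native representation. A sketch using Python's `struct` module (the field layout and values are illustrative assumptions):

```python
import struct

# Format "!Hf8s": network byte order (!), an unsigned 16-bit integer (H),
# a 32-bit float (f), and an 8-byte string (8s). Both ends agree on this
# layout, so the receiver can decode it regardless of its native endianness.
wire = struct.pack("!Hf8s", 500, 3.14, b"sensor-1")
num, val, name = struct.unpack("!Hf8s", wire)
print(num, name)  # 500 b'sensor-1'
```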
Cyclic Redundancy Check-

• Cyclic Redundancy Check (CRC) is an error detection method.


• It is based on binary division.

CRC Generator-

• CRC generator is an algebraic polynomial represented as a bit pattern.


• Bit pattern is obtained from the CRC generator using the following rule-

The power of each term gives the position of the bit and the coefficient gives
the value of the bit.

Example-
Consider the CRC generator x^7 + x^6 + x^4 + x^3 + x + 1.
The corresponding binary pattern is 11011011 (a 1 bit at positions 7, 6, 4, 3, 1, and 0).

Problem-01:

A bit stream 1101011011 is transmitted using the standard CRC method. The generator
polynomial is x^4 + x + 1. What is the actual bit string transmitted?

Solution-

• The generator polynomial G(x) = x^4 + x + 1 is encoded as 10011.
• Clearly, the generator polynomial consists of 5 bits.
• So, a string of 4 zeroes is appended to the bit stream to be transmitted.
• The resulting bit stream is 11010110110000.

Now, the binary division is performed as follows −


From here, CRC = 1110.
Now,
• The code word to be transmitted is obtained by replacing the last 4 zeroes of
11010110110000 with the CRC.
• Thus, the code word transmitted to the receiver = 11010110111110.
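The modulo-2 long division above can be reproduced in Python (the helper name is ours, for illustration), confirming the remainder 1110 and the transmitted code word:

```python
# CRC remainder by modulo-2 (XOR) long division: append len(generator)-1
# zeros to the message, then XOR the generator in at every 1 bit.
def crc_remainder(message, generator):
    bits = list(message + "0" * (len(generator) - 1))
    for i in range(len(message)):
        if bits[i] == "1":
            for j in range(len(generator)):
                bits[i + j] = str(int(bits[i + j]) ^ int(generator[j]))
    return "".join(bits[-(len(generator) - 1):])

crc = crc_remainder("1101011011", "10011")
print(crc)                  # 1110
print("1101011011" + crc)   # 11010110111110
```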

Domain Name System (DNS) in the Application Layer
Domain Name System (DNS) is a hostname-to-IP-address translation service. DNS is a
distributed database implemented in a hierarchy of name servers. It is an application
layer protocol for message exchange between clients and servers, and it is required for
the functioning of the Internet.

Types of Domain
There are various kinds of domains:
1. Generic domains: .com (commercial), .edu (educational), .mil (military),
.org (nonprofit organization), and .net (similar to commercial) are all
generic domains.
2. Country domains: .in (India), .us, .uk
3. Inverse domain: used when we want to know the domain name of a website
from its IP address, i.e., IP-to-domain-name mapping. DNS can therefore
provide both mappings; for example, to find the IP address of
geeksforgeeks.org, we perform a forward lookup on the domain name.

Organization of Domain
It is very difficult to find the IP address associated with a website because there are
millions of websites. Since we should be able to obtain the IP address of any of them
immediately, without long delays, the organization of the database is very important.

Root DNS Server

• DNS record: the domain name, the IP address, the validity, the time to
live, and all the information related to that domain name. These
records are stored in a tree-like structure.
• Namespace: the set of possible names, flat or hierarchical. The naming
system maintains a collection of bindings of names to values – given a
name, a resolution mechanism returns the corresponding value.
• Name server: an implementation of the resolution mechanism. DNS is the
name service of the Internet – a zone is an administrative unit, and a
domain is a subtree.
Name-to-Address Resolution
The host requests the DNS name server to resolve a domain name, and the name server
returns the IP address corresponding to that domain name so that the host can then
connect to that IP address.


Hierarchy of Name Servers
• Root name servers: contacted by name servers that cannot resolve a name.
A root server contacts the authoritative name server if the name mapping
is not known, gets the mapping, and returns the IP address to the host.
• Top-level domain (TLD) servers: responsible for com, org, edu, etc., and
all top-level country domains such as uk, fr, ca, and in. They have
information about authoritative domain servers and know the names and IP
addresses of each authoritative name server for the second-level
domains.
• Authoritative name servers: the organization's DNS servers, providing
authoritative hostname-to-IP mappings for the organization's servers. They
can be maintained by the organization or by a service provider. In order to
reach cse.dtu.in, we ask a root DNS server, which points to the top-level
domain server, which in turn points to the authoritative name server that
actually contains the IP address. The authoritative name server then
returns the associated IP address.
Domain Name Server
The client machine sends a request to the local name server which, if it does not find
the address in its database, sends a request to a root name server, which in turn
routes the query to a top-level domain (TLD) or authoritative name server. The root
name server can also contain some hostname-to-IP address mappings. The top-level
domain (TLD) server always knows who the authoritative name server is. Finally, the
IP address is returned to the local name server, which in turn returns it to the
host.
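From an application's point of view, the whole root → TLD → authoritative chain described above is hidden behind a single resolver call. A sketch using Python's standard library (the public hostname is an example and needs network access to resolve):

```python
import socket

# localhost resolves via the local hosts file, with no network needed.
print(socket.gethostbyname("localhost"))   # 127.0.0.1

# A real lookup walks the name-server hierarchy behind the scenes;
# it fails cleanly if there is no network access.
try:
    print(socket.gethostbyname("geeksforgeeks.org"))
except socket.gaierror:
    print("resolution failed (no network access)")
```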

