
COMPUTER NETWORKS

UNIT-4 CHAPTER-1
THE NETWORK LAYER DESIGN ISSUES.
Network Layer Design issues
Network layer Design issues are as follows −
 Store and forward packets switching
 Services provided to the transport layer
 Implementation of connectionless services
 Implementation of connection oriented services
 Comparison of virtual-circuit datagram networks
Now let us see one of the design issues of the network
layer.
Store-and-forward packet switching:
In the diagram, the Internet Service Provider (ISP) has six routers
(A to F) connected by transmission lines. There are two hosts:
host H1 is connected to router A, while host H2 is connected to
router F over a LAN. Suppose that H1 wants to send a data packet
to H2. H1 sends the packet to router A. The packet is stored in
router A until it has arrived in full. Router A verifies the checksum
using a CRC (cyclic redundancy check) code. If there is a CRC error, the
packet is discarded; otherwise it is transmitted to the next hop, here
router F. Router F follows the same store-and-forward process and finally
delivers the packet to host H2.
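As a rough illustration of the store-and-forward idea, the minimal Python sketch below buffers a complete packet, verifies a CRC-32 checksum, and only then forwards it to the next hop. The packet format, the CRC placement and the forwarding callables are illustrative assumptions, not part of any real router implementation.

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    # Append a CRC-32 checksum so the next hop can verify integrity.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def store_and_forward(packet: bytes, next_hop):
    # The packet is assumed to have been received and buffered in full.
    payload, received_crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    if zlib.crc32(payload) != received_crc:
        print("CRC error: packet discarded")
        return
    next_hop(packet)          # forward the verified packet toward the destination

# Usage: host H1 -> router A -> router F -> host H2 (shortened to two hops)
host_h2 = lambda pkt: print("H2 received:", pkt[:-4])
router_f = lambda pkt: store_and_forward(pkt, host_h2)
store_and_forward(make_packet(b"hello H2"), router_f)
```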

Services Provided to the Transport Layer:
The network layer provides services to the transport layer at the network layer /
transport layer interface. The services are designed with the following goals in mind:
 The services should be independent of the router technology.
 The transport layer should be shielded from the number, type and topology of the routers present.
 The network addresses made available to the transport layer should use a uniform numbering plan, even across LANs and WANs.
Implementation of Connectionless Service:
packet = datagram
network = datagram network
Example: IP (Internet Protocol)
When connectionless service is offered, packets are frequently
called datagrams. No advance setup is required, and such subnets
are called datagram subnets.
Implementation of Connection-Oriented Service:
connection = virtual circuit
network = virtual-circuit network
Example: MPLS (Multiprotocol Label Switching)
When connection-oriented service is provided, a path from the
source router to the destination router is established before
any packet is sent. This connection is called a virtual circuit
and the subnet is called a virtual-circuit subnet.

Comparison of Virtual-Circuit and Datagram Networks:

Difference between Circuit Switching and Packet Switching:

1. Circuit switching has three phases: (i) connection establishment, (ii) data transfer and (iii) connection release. In packet switching, data transfer takes place directly, without a setup phase.
2. In circuit switching, each data unit knows the entire path address, which is provided by the source. In packet switching, each data unit knows only the final destination address; the intermediate path is decided by the routers.
3. In circuit switching, data is processed at the source system only. In packet switching, data is processed at all intermediate nodes, including the source system.
4. The delay between data units in circuit switching is uniform. The delay between data units in packet switching is not uniform.
5. Resource reservation is a feature of circuit switching, because the path is fixed for data transmission. In packet switching there is no resource reservation, because bandwidth is shared among users.
6. Circuit switching is more reliable; packet switching is less reliable.
7. Wastage of resources is higher in circuit switching; packet switching wastes fewer resources.
8. Circuit switching is not a store-and-forward technique and does not support store-and-forward transmission. Packet switching is a store-and-forward technique and supports store-and-forward transmission.
9. In circuit switching, transmission of the data is done only by the source. In packet switching, transmission is done not only by the source but also by the intermediate routers.
10. In circuit switching, congestion can occur during the connection establishment phase, when a request is made for a channel that is already occupied. In packet switching, congestion can occur during the data transfer phase, when a large number of packets arrive in a short time.
11. Circuit switching is not convenient for handling bilateral traffic. Packet switching is suitable for handling bilateral traffic.
12. In circuit switching, the charge depends on time and distance, not on the traffic in the network. In packet switching, the charge is based on the number of bytes and the connection time.
13. Recording of packets is never possible in circuit switching. Recording of packets is possible in packet switching.
14. In circuit switching there is a physical path between the source and the destination. In packet switching there is no physical path between the source and the destination.
15. Call setup is required in circuit switching. No call setup is required in packet switching.
16. In circuit switching, each packet follows the same route. In packet switching, packets can follow any route.
17. Circuit switching is implemented at the physical layer. Packet switching is implemented at the data link layer and the network layer.
18. Circuit switching requires simple protocols for delivery. Packet switching requires complex protocols for delivery.

Routing Algorithms:
 In order to transfer the packets from source to the
destination, the network layer must determine the best
route through which packets can be transmitted.
 Whether the network layer provides datagram service or
virtual circuit service, the main job of the network layer
is to provide the best route. The routing protocol
performs this job.
 The routing protocol is a routing algorithm that provides
the best path from the source to the destination. The best
path is the path that has the "least-cost path" from source
to the destination.
 Routing is the process of forwarding the packets from
source to the destination but the best route to send the
packets is determined by the routing algorithm.
Classification of a Routing algorithm
The Routing algorithm is divided into two categories:
 Adaptive Routing algorithm
 Non-adaptive Routing algorithm

Non-Adaptive Routing Algorithms:

Flooding: In case of flooding, every incoming packet is sent out
on all the outgoing links except the one on which it has
arrived. The disadvantage of flooding is that a node may
receive several copies of a particular packet.

Random walks: In case of random walks, a packet is sent by the
node to one of its neighbours, chosen at random. An advantage of
using random walks is that alternative routes are used very
efficiently.
Adaptive Routing Algorithms:

An adaptive routing algorithm can be classified into three parts:
1) Centralized routing algorithm
2) Isolation routing algorithm
3) Distributed routing algorithm
Centralized algorithm: It is also known as global routing
algorithm as it computes the least-cost path between
source and destination by using complete and global
knowledge about the network. This algorithm takes the
connectivity between the nodes and link cost as input.
Isolation algorithm: It is an algorithm that obtains the
routing information by using local information rather than
gathering information from other nodes.
Distributed algorithm: It is also known as decentralized
algorithm as it computes the least-cost path between
source and destination in an iterative and distributed
manner. In the decentralized algorithm, no node has the
knowledge about the cost of all the network links.
Shortest path:
In computer networks, shortest path algorithms aim to
find optimal paths between the network nodes so that
the routing cost is minimized. They are direct applications of
the shortest path algorithms proposed in graph theory.
Consider a network comprising N vertices (nodes or
network devices) that are connected by M edges
(transmission lines). Each edge is associated with a weight
representing the physical distance or the transmission
delay of the line. The goal of a shortest path
algorithm is to find a route between any pair of vertices
along the edges such that the sum of the weights of the edges is
minimum. If the edges are of equal weight, the shortest
path algorithm aims to find a route with the minimum
number of hops (the number of routers traversed).

Common Shortest Path Algorithms

Some common shortest path algorithms are −
 Bellman-Ford's Algorithm
 Dijkstra's Algorithm
 Floyd-Warshall's Algorithm
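Since link state routing (discussed later in this chapter) relies on Dijkstra's algorithm, a minimal Python sketch of it is given below. The six-router graph and its weights are illustrative assumptions, not taken from any figure in this material.

```python
import heapq

def dijkstra(graph, source):
    """Return the least-cost distance from source to every reachable node.
    graph: dict mapping node -> list of (neighbour, edge_weight)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, already improved
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Assumed example network: six routers A..F with made-up link weights.
graph = {
    "A": [("B", 2), ("C", 5)], "B": [("A", 2), ("D", 1)],
    "C": [("A", 5), ("D", 2), ("F", 3)], "D": [("B", 1), ("C", 2), ("E", 4)],
    "E": [("D", 4), ("F", 1)], "F": [("C", 3), ("E", 1)],
}
print(dijkstra(graph, "A"))   # shortest costs from A: B=2, D=3, C=5, E=7, F=8
```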

Flooding:
Flooding is a non-adaptive routing technique following
this simple method: when a data packet arrives at a
router, it is sent to all the outgoing links except the one
it has arrived on.
For example, let us consider the network in the figure,
having six routers that are connected through
transmission lines.

Using the flooding technique −

 An incoming packet to A will be sent to B, C and D.
 B will send the packet to C and E.
 C will send the packet to B, D and F.
 D will send the packet to C and F.
 E will send the packet to F.
 F will send the packet to C and E.
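A rough simulation of the flood described above is sketched below in Python. The adjacency list matches the links implied by the example, and the hop-count limit that stops the flood is an added assumption; the point is simply to show how quickly duplicate copies multiply.

```python
# Adjacency list for the six-router example (assumed links).
links = {
    "A": ["B", "C", "D"], "B": ["A", "C", "E"], "C": ["A", "B", "D", "F"],
    "D": ["A", "C", "F"], "E": ["B", "F"], "F": ["C", "D", "E"],
}

def flood(source, max_hops):
    """Count packet copies generated when flooding with a hop-count limit."""
    copies = 0
    frontier = [(source, None)]           # (current node, link the copy arrived on)
    for _ in range(max_hops):
        next_frontier = []
        for node, came_from in frontier:
            for neighbour in links[node]:
                if neighbour != came_from:         # all links except the incoming one
                    copies += 1
                    next_frontier.append((neighbour, node))
        frontier = next_frontier
    return copies

for hops in range(1, 5):
    print(hops, "hops ->", flood("A", hops), "packet copies")
```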
Advantages of Flooding
 It is very simple to set up and implement, since a
router needs to know only its neighbours.
 It is extremely robust. Even if a large number of
routers malfunction, the packets find a way to reach
the destination.
 All nodes which are directly or indirectly
connected are visited, so no node is left out. This is
a main criterion in the case of broadcast messages.
 The shortest path is always found by flooding, since
every possible path is tried in parallel.
Limitations of Flooding
 Flooding tends to create an infinite number of
duplicate data packets, unless some measures are
adopted to damp packet generation.
 It is wasteful if a single destination needs the
packet, since it delivers the data packet to all nodes
irrespective of the destination.
 The network may be clogged with unwanted and
duplicate data packets. This may hamper delivery
of other data packets.
Flow based:
 It is a non-adaptive routing algorithm.
 This routing algorithm takes into account both the
topology and the average load.
 We can estimate the flow between all pairs of
routers.
 From the known average amount of traffic and the
average packet length, the mean packet delay can be
computed using queueing theory.
 Flow-based routing then seeks a routing table that
minimizes the average packet delay through the
subnet.
 Given the line capacity and the flow, we can
determine the delay using the formula for the mean
delay time T:

T = 1 / (μC − λ)

where λ = mean arrival rate (packets/s), 1/μ = mean
packet size (bits), and C = line capacity (bits/s).
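A small numerical illustration of this delay formula is sketched below; the capacity, packet size and arrival rate are made-up values, not taken from this material.

```python
def mean_delay(arrival_rate_pps, mean_packet_bits, capacity_bps):
    """Mean packet delay T = 1 / (mu*C - lambda) for a single line.
    mu = 1 / mean packet size (packets per bit), C = capacity, lambda = arrivals/s."""
    mu = 1.0 / mean_packet_bits
    service_rate = mu * capacity_bps          # packets the line can carry per second
    if arrival_rate_pps >= service_rate:
        raise ValueError("line is overloaded: delay grows without bound")
    return 1.0 / (service_rate - arrival_rate_pps)

# Assumed example: 1 Mbps line, 8000-bit packets, 100 packets/s offered load.
print(round(mean_delay(100, 8000, 1_000_000) * 1000, 3), "ms")   # about 40.0 ms
```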

Distance Vector Routing (DVR):

In a distance vector routing table, the first column is the network ID, i.e. the destination.

Let us take an example of five networks as shown below. For each
network we build a routing table with the columns destination,
distance and next hop, where X = source, Y = destination and
v = an intermediate (neighbouring) router.
The routing tables are shared among the neighbouring nodes.

We have to consider two main points

 Minimum cost
 Minimum number of intermediary nodes.
All the tables are updated in the same way.

 The distance vector algorithm is iterative, asynchronous and distributed.
 The distance vector algorithm is a dynamic algorithm.
 It was used in the ARPANET and is used in RIP.
 Each router maintains a distance table known as a vector.

Routing Table
Two processes occur:
 Creating the table
 Updating the table

Creating the Table

Initially, a routing table is created for each router; it contains at least three types of
information: the network ID, the cost and the next hop.

 NET ID: The network ID defines the final destination of the packet.
 Cost: The cost is the number of hops the packet must take to get there.
 Next hop: It is the router to which the packet must be delivered next.

Distance vector routing is a dynamic routing protocol. The distance vector protocol, also
called the Bellman-Ford algorithm, is used to calculate the shortest path.

The Bellman-Ford equation is defined as:

dx(y) = min over all neighbours v of { c(x,v) + dv(y) }

where dx(y) = the least cost from x to y,
c(x,v) = node x's cost to each of its neighbours v,
dv(y) = neighbour v's least cost to y,
min_v = the minimum taken over all neighbours v.

Consider a scenario where all the routers are set up and run the distance vector routing algorithm. Each
router in the network shares its distance information with its neighbouring routers. All the
information is gathered from the neighbouring routers. With each router's information, an optimal
distance is calculated and stored in the routing table. In this way, the process of calculating the optimal
path is carried out by the distance vector routing protocol.
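A minimal sketch of repeated Bellman-Ford updates until convergence is given below; the three-router topology and link costs are assumptions for illustration only, not the example used elsewhere in this material.

```python
INF = float("inf")

# Assumed link costs c(x, v) between directly connected routers.
cost = {("x", "y"): 2, ("y", "x"): 2, ("x", "z"): 7, ("z", "x"): 7,
        ("y", "z"): 1, ("z", "y"): 1}
nodes = ["x", "y", "z"]

# Each router starts knowing only the cost to its direct neighbours.
dist = {u: {v: (0 if u == v else cost.get((u, v), INF)) for v in nodes} for u in nodes}

def update(x):
    """Apply dx(y) = min over neighbours v of { c(x,v) + dv(y) }."""
    changed = False
    neighbours = [v for v in nodes if (x, v) in cost]
    for y in nodes:
        best = min([cost[(x, v)] + dist[v][y] for v in neighbours] + [dist[x][y]])
        if best < dist[x][y]:
            dist[x][y] = best
            changed = True
    return changed

# Iterate until a full pass produces no change in any router's vector.
while any(update(x) for x in nodes):
    pass
print(dist["x"])   # {'x': 0, 'y': 2, 'z': 3}: x reaches z via y
```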
Advantages of Distance Vector Routing
 The distance vector routing protocol is easy to implement for small networks.
 The protocol involves little redundancy (overhead) in a small network.

Disadvantages of Distance Vector Routing

 Distance vector routing suffers from slow convergence, as it takes time for accurate
routing information to propagate to every routing table.
 Extra traffic is created because routing tables are exchanged periodically and whenever
the network topology changes.
 Distance vector routing suffers from the count-to-infinity problem.

Link State Routing:

Every node sends its link state information to all of its neighbouring nodes.
Link State Routing (LSR) is a routing algorithm used
in computer networks to determine the best path for
data to travel from one node to another. LSR is
considered a more advanced and efficient method of
routing than the Distance Vector Routing (DVR)
algorithm.

In LSR, each node in the network maintains a map or
database, called a link state database (LSDB), that
contains information about the state of all the links
in the network. This information includes the cost of
each link, the status of each link (up or down), and
the neighbouring nodes that are connected to each
link.

When a node in the network wants to send data to
another node, it consults its LSDB to determine the
best path to take. The node selects the path with
the lowest cost, also known as the shortest path, to
reach the destination node. To determine the
shortest path, LSR uses Dijkstra's shortest path
algorithm.

The LSR process can be divided into several phases:

1. initialization phase: The first phase is the
initialization phase, where each router in the
network learns about its own directly connected
links. This information is then stored in the
router's link state database.
2. flooding phase: The second phase is the flooding
phase, where each router floods its link state
information to all other routers in the network.
This allows each router to learn about the entire
network topology.
3. path calculation phase: The third phase is the
shortest path calculation phase, where each
router uses the link state information to
calculate the shortest path to every other
router in the network. This is typically done using
Dijkstra’s algorithm.
4. route installation phase: The fourth and final
phase is the route installation phase, where each
router installs the calculated shortest paths in
its routing table. This allows the router to
forward packets along the optimal path to their
destination.

Advantages of Link State Routing Algorithm:

Some advantages of the link state routing algorithm
are given below −

 One of the main advantages of LSR is that each router
advertises only the state of its directly connected
links, rather than its entire routing table as in DVR.
Because every router then builds a complete map of the
network topology, LSR converges quickly and adapts to
changes in the network more quickly. This is particularly
useful in large networks where the topology
changes frequently.
 Another advantage of LSR is that it does not
suffer from the count-to-infinity problem which
is prevalent in DVR. In DVR, if two nodes have
incorrect information about the distance to a
destination, they will continue to update each
other indefinitely, leading to a stalemate.
However, with LSR, nodes only exchange
information about their directly connected links,
so there is no possibility of the count-to-infinity
problem.

Disadvantages of Link State Routing Algorithm:

However, LSR also has some drawbacks. Let's discuss
some of the disadvantages below −

 One of the main disadvantages is that it requires
more memory and processing power than DVR.
The LSDB of each node must be updated
constantly to reflect changes in the network, and
this can consume a lot of resources.
 Additionally, LSR is not as scalable as DVR. It
can be difficult to implement in very large
networks with thousands of nodes.

Conclusion:
In conclusion, Link State Routing (LSR) is a powerful
and efficient routing algorithm used in computer
networks. It uses a link state database (LSDB) to
store information about the state of all the links in
the network and uses Dijkstra’s shortest path
algorithm to determine the best path for data to
travel. LSR is particularly useful in large networks
where the topology changes frequently and it does
not suffer from the count-to-infinity problem.
However, it requires more memory and processing
power than DVR and is not as scalable for very large
networks.

Hierarchical Routing:
A router of one region that is connected to a router of another
region is called a gateway router; here in the diagram B, D and G
are the gateway routers.

Diagram when clusters are not formed

Diagram when the regions are clustered

The number of routing table entries when clusters are formed is
smaller than the number of entries without clustering.
In hierarchical routing, the routers are divided into regions.
Each router has complete details about how to route packets to
destinations within its own region. But it does not have any
idea about the internal structure of other regions.
As we know, in both LS and DV algorithms, every router
needs to save some information about other routers. When
the network size grows, the number of routers in the network
increases. Therefore the size of the routing table increases, and
routers can no longer handle network traffic efficiently. To
overcome this problem we use hierarchical routing.
In hierarchical routing, routers are classified in groups called
regions. Each router has information about the routers in its
own region and it has no information about routers in other
regions. So, routers save one record in their table for every
other region.
For huge networks, a two-level hierarchy may be insufficient
hence, it may be necessary to group the regions into clusters,
the clusters into zones, the zones into groups and so on.
Example
Consider an example of two-level hierarchy with five regions as
shown in figure −

Let see the full routing table for router 1A which has 17 entries, as
shown below –
When routing is done hierarchically then there will be only 7 entries
as shown below −

Hierarchical Table for 1A

Unfortunately, this reduction in table space comes at the cost of
increased path length.

Explanation

Step 1 − For example, the best path from 1A to 5C is via region 2, but
hierarchical routing of all traffic to region 5 goes via region 3 as it is
better for most of the other destinations of region 5.

Step 2 − Consider a subnet of 720 routers. If no hierarchy is used,


each router will have 720 entries in its routing table.
Step 3 − Now if the subnet is partitioned into 24 regions of 30 routers
each, then each router will require 30 local entries and 23 remote
entries for a total of 53 entries.

Example

If the same subnet of 720 routers is partitioned into 8 clusters, each
containing 9 regions, and each region containing 10 routers, then
what will be the total number of table entries in each router?

Solution

10 local entries + 8 remote regions + 7 clusters = 25 entries.
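The table-entry arithmetic can be checked with the short sketch below, a generic helper written under the assumption of one entry per local router, one per other region in the same cluster, and one per other cluster.

```python
def table_entries(routers_per_region, regions_per_cluster, clusters):
    """Entries per router in a hierarchy:
    local routers + other regions in the same cluster + other clusters."""
    return (routers_per_region
            + (regions_per_cluster - 1)
            + (clusters - 1))

# Two-level case from the text: 24 regions of 30 routers (treated as 1 cluster).
print(table_entries(30, 24, 1))      # 30 + 23 + 0 = 53 entries
# Three-level case: 8 clusters x 9 regions x 10 routers = 720 routers.
print(table_entries(10, 9, 8))       # 10 + 8 + 7 = 25 entries
```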

Congestion Control Algorithms:

Congestion prevention policies:
Congestion control refers to the techniques used to control or
prevent congestion. Congestion control techniques can be broadly
classified into two categories:

Open Loop Congestion Control

Open loop congestion control policies are applied to prevent
congestion before it happens. The congestion control is handled either
by the source or the destination.
Policies adopted by open loop congestion control –

1. Retransmission Policy :
This policy governs the retransmission of packets. If the sender
feels that a sent packet is lost or corrupted, the packet needs to
be retransmitted. Such retransmission may increase the congestion
in the network. To prevent this, retransmission timers must be
designed to prevent congestion while still optimizing efficiency.

2. Window Policy :
The type of window at the sender’s side may also affect the
congestion. Several packets in the Go-back-n window are re-
sent, although some packets may be received successfully at
the receiver side. This duplication may increase the congestion
in the network and make it worse.
Therefore, the selective repeat window should be adopted, as it
resends only the specific packet that may have been lost.

3. Discarding Policy :
A good discarding policy adopted by the routers may prevent
congestion by partially discarding corrupted or less sensitive
packets while still maintaining the quality of the message.
In the case of audio file transmission, routers can discard less
sensitive packets to prevent congestion and still maintain the
quality of the audio file.

4. Acknowledgment Policy :
Since acknowledgements are also part of the load on the
network, the acknowledgment policy imposed by the receiver
may also affect congestion. Several approaches can be used to
prevent congestion related to acknowledgments:
the receiver can send one acknowledgement for N packets
rather than acknowledging each packet individually, or send an
acknowledgment only when it has a packet to send or a timer
expires.

5. Admission Policy :
In the admission policy, a mechanism is used to prevent
congestion. Switches in a flow should first check the resource
requirement of a network flow before transmitting it further. If
there is a chance of congestion, or there is already congestion in the
network, the router should refuse to establish a virtual circuit
connection to prevent further congestion.

All the above policies are adopted to prevent congestion before it
happens in the network.

Closed Loop Congestion Control

Closed loop congestion control techniques are used to treat or
alleviate congestion after it happens. Several techniques are used by
different protocols; some of them are:

1. Backpressure :
Backpressure is a technique in which a congested node stops
receiving packets from its upstream node. This may cause the upstream
node or nodes to become congested and to reject data from the
nodes above them. Backpressure is a node-to-node congestion control
technique that propagates in the opposite direction of the data flow. The
backpressure technique can be applied only to virtual-circuit networks,
where each node has information about its upstream node.
In the diagram, the 3rd node is congested and stops
receiving packets; as a result, the 2nd node may get congested
due to the slowing down of the output data flow. Similarly, the 1st
node may get congested and inform the source to slow down.

2. Choke Packet Technique :


The choke packet technique is applicable to both virtual-circuit networks
and datagram subnets. A choke packet is a packet sent
by a node to the source to inform it of congestion. Each router
monitors its resources and the utilization at each of its output
lines. Whenever the resource utilization exceeds the threshold
value which is set by the administrator, the router directly
sends a choke packet to the source giving it a feedback to
reduce the traffic. The intermediate nodes through which the
packets has traveled are not warned about congestion.

3. Implicit Signaling :
In implicit signaling, there is no communication between the
congested nodes and the source. The source guesses that there
is congestion in the network. For example, when the sender sends
several packets and there is no acknowledgment for a while,
it may assume that the network is congested.

4. Explicit Signaling :
In explicit signaling, if a node experiences congestion, it can
explicitly send a signal to the source or destination to inform
it about the congestion. The difference between the choke packet technique and
explicit signaling is that the signal is included in the packets
that carry data, rather than being sent as a separate packet as in the case
of the choke packet technique.
Explicit signaling can occur in either forward or backward
direction.
 Forward Signaling : In forward signaling, a signal is
sent in the direction of the congestion. The destination is
warned about congestion. The receiver in this case adopts
policies to prevent further congestion.
 Backward Signaling : In backward signaling, a signal is
sent in the opposite direction of the congestion. The
source is warned about congestion and it needs to slow
down.
Approaches to Congestion Control:
The presence of congestion means that the load is greater than the
resources available in the network to handle it. The obvious remedies
are to increase the resources or decrease the load; in practice, these
remedies are applied on different time scales to either prevent
congestion or react to it once it has occurred.

Let us understand these approaches step wise as mentioned below −

Step 1 − The basic way to avoid congestion is to build a network that
is well matched to the traffic that it carries. If more traffic is directed
at a link than its bandwidth can carry, congestion will certainly occur.

Step 2 − Sometimes resources such as routers and links can be added
dynamically when there is serious congestion. This is called
provisioning, and it happens on a timescale of months, driven by
long-term trends.

Step 3 − To utilise most of the existing network capacity, routes can be
tailored to traffic patterns that change during the day as network
users in different time zones wake and sleep.

Step 4 − Some local radio stations have helicopters flying around
their cities to report on road congestion, making it possible for their
mobile listeners to route their packets (cars) around hotspots. This is
called traffic-aware routing.

Step 5 − Sometimes it is not possible to increase capacity. The only
way to reduce the congestion then is to decrease the load. In a virtual-
circuit network, new connections can be refused if they would cause
the network to become congested. This is called admission control.

Step 6 − Routers can monitor the average load, queueing delay, or
packet loss. In all these cases, a rising number indicates growing
congestion. The network is forced to discard packets that it cannot
deliver; the general name for this is load shedding. A better
technique for choosing which packets to discard can help to prevent
congestion collapse.

Traffic Aware Routing:

Whenever there is congestion in the network, one strategy is
network-wide congestion control through traffic awareness.
Congestion can be avoided by designing a network that is well suited
to the traffic it transports; congestion develops when more traffic is
offered than the available link bandwidth can carry.

Traffic-aware routing's main objective is to choose the optimum
routes by taking the load into account. It does this by setting the link
weight to be a function of the fixed link bandwidth and
propagation delay as well as the variable observed
load or average queueing delay.

Step 1: Consider a network that is separated into two regions, North and
South, connected by the links E1F1 and EF.

Step 2: In this scenario, assume that link E1F1 is heavily congested
because it carries the majority of the traffic between the two regions.
Queueing delays are significant, and there is a need to factor queueing
time into the shortest-path calculation.

Step 3: After the routing tables are updated to account for queueing delays,
most of the traffic between the regions will flow through the EF link, as
it now appears to be the faster route.

Step 4: This shift in traffic may lead to significant fluctuations in the
routing tables, causing unpredictable routing behaviour and potential
network issues.

Step 5: An alternative approach is to base the routing weights only on
bandwidth and propagation delay rather than on load. This alleviates the
problem and prevents routing oscillations.

Step 6: The following two strategies can help find a successful solution.

Multipath Routing: a routing scheme in which traffic is spread across
several routes between the same source and destination.

As a result, routing may oscillate greatly, which could cause inconsistent
routing and a variety of other issues. This will not happen if the load is
disregarded in favour of just bandwidth and propagation delay. In an
effort to reduce routing oscillations, the weights can also be changed
only within a narrow range. A successful solution can be achieved
using two strategies. First, there is multipath routing, in which there
may be several routes from one point to another.

Features

1. It is a congestion-control technique.

2. Routes can be changed in accordance with traffic patterns,
because network users wake and sleep in different time zones
throughout the day.

3. Routes can be changed to shift traffic away from heavily
used paths.

4. Traffic can be split across multiple paths.

Admission control:
Admission control is one of the techniques widely used in virtual-circuit
networks to keep congestion at bay. The idea is simple: do not set up a
new virtual circuit unless the network can carry the added
traffic without becoming congested.
Admission control can also be combined with traffic-aware
routing by considering routes that steer around traffic hotspots as part of
the setup procedure.
Example
Take two networks (a) A congestion network and (b) The
portion of the network that is not congested. A virtual circuit
A to B is also shown below −

Explanation
Step 1 − Suppose a host attached to router A wants to set up a
connection to a host attached to router B. Normally this
connection passes through one of the congested routers.
Step 2 − To avoid this situation, we can redraw the network
as shown in figure (b), removing the congested routers and all
of their lines.
Step 3 − The dashed line indicates a possible route for the
virtual circuit that avoids the congested routers.

Traffic throttling:
Traffic throttling is one of the approaches to congestion
control. In the Internet and other computer networks, senders
adjust their transmissions to send as much traffic
as the network can readily deliver. In this setting the network
aims to operate just before the onset of congestion.
There are some approaches to throttling traffic that can be
used in both datagram and virtual-circuit networks.
Each approach has to solve two problems −
First
Routers have to determine when congestion is approaching,
ideally before it has arrived. Each router can continuously
monitor the resources it is using.
There are three possibilities, which are as follows −
 Utilisation of the output links.
 Buffering of queued packets inside the router.
 Number of packets lost due to insufficient buffering.
Second
Averages of utilisation do not directly account for the burstiness
of most traffic, whereas the queueing delay inside routers directly
captures any congestion experienced by packets.
To maintain a good estimate of the queueing delay d, a sample
of the instantaneous queue length s can be taken periodically and
d updated according to

dnew = α·dold + (1 − α)·s

where the constant α determines how fast the router forgets
recent history. This is called an EWMA (Exponentially Weighted
Moving Average).
It smooths out fluctuations and is equivalent to a low-pass
filter. Whenever d moves above the threshold, the router notes
the onset of congestion.
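A minimal sketch of this EWMA update is shown below; the α value, the threshold and the queue-length samples are illustrative assumptions.

```python
def ewma_congestion_monitor(samples, alpha=0.9, threshold=8.0):
    """Track d_new = alpha*d_old + (1 - alpha)*s and flag congestion onset."""
    d = 0.0
    for s in samples:                     # s = sampled instantaneous queue length
        d = alpha * d + (1 - alpha) * s   # exponentially weighted moving average
        congested = d > threshold
        print(f"sample={s:3d}  estimate d={d:5.2f}  congested={congested}")

# Assumed queue-length samples: traffic builds up over time, then drains.
ewma_congestion_monitor([0, 2, 5, 20, 40, 60, 80, 10, 5])
```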
Routers must deliver timely feedback to the senders that are
causing the congestion. Routers must also identify the
appropriate senders, and must then warn them carefully, without
sending many more packets into an already congested
network.
There are many feedback mechanisms; one of them is as
follows −
Explicit Congestion Notification (ECN)
The Explicit Congestion Notification (ECN) is
diagrammatically represented as follows −

Explanation of ECN
Step 1 − Instead of generating additional packets to warn of
congestion, a router can tag any packet it forwards by setting a
bit in the packet header to signal that it is experiencing
congestion.
Step 2 − When the network delivers the packet, the
destination can note that there is congestion and inform the
sender when it sends a reply packet.
Step 3 − The sender can then throttle its transmissions as
before.
Step 4 − This design is called explicit congestion notification
and is mostly used on the Internet.

Load Shedding:
Load shedding is one of the techniques used for congestion control.
A network router contains a buffer, which is used to store
packets and then route them to their destination. Load shedding is
an approach in which packets are discarded when the buffer is
full, according to the discarding strategy implemented in the router. The
selection of which packets to discard is an important task. Often,
less important packets and old packets are discarded.

Selection of Packets to be Discarded

In the process of load shedding, packets need to be discarded in
order to avoid congestion; the question is therefore which packets
should be discarded. Below are the approaches used to select the
packets to discard.

1. Random selection of packets

When the router's buffer fills with packets, packets are selected
at random for discarding. The discarded packets may include old,
new, important, priority-based, or less important packets, so random
selection can lead to various disadvantages and problems.
2. Selection of packets based on the application

Depending on the application, the router discards either new packets
or old packets. When the application is file transfer, new packets
are discarded, because every old packet is still needed to reconstruct
the stream; when the application is multimedia, old packets are
discarded, because a fresh frame is worth more than a stale one.

3. Selection of packets based on priority

The source can mark each packet with a priority stating how
important the packet is. Depending upon the priority provided by the
sender, the packet is either kept or discarded. The priority can
be assigned according to price, the algorithm and methods used, the
functions the packet will perform, and the effect on other tasks of
keeping or discarding it.

4. Random early detection

Random early detection (RED) is an approach in which packets are
discarded before the buffer space becomes completely full, so that
congestion is dealt with earlier. In this approach, the
router maintains an average queue length for the outgoing
lines. When this average exceeds a set threshold, it signals
incipient congestion and begins discarding packets.
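The sketch below is a much simplified RED-style drop decision; the thresholds, EWMA weight and maximum drop probability are assumed values chosen for illustration, not defaults from any standard.

```python
import random

def red_should_drop(avg_queue, min_th=5, max_th=15, max_drop_p=0.1):
    """Simplified RED: no drops below min_th, probabilistic drops between
    the thresholds, and always drop above max_th."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    drop_p = max_drop_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < drop_p

avg = 0.0
for instant_queue in [2, 4, 8, 12, 16, 20, 18, 6]:   # assumed queue samples
    avg = 0.8 * avg + 0.2 * instant_queue            # EWMA of queue length
    print(f"avg={avg:5.2f}  drop incoming packet: {red_should_drop(avg)}")
```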

Advantages of Load Shedding

 Using the load shedding technique can help to recover from
congestion.
 The load shedding technique reduces the flow of network traffic.
 It discards packets from the network before congestion
occurs.
 Load shedding maintains a synchronized flow of packets in the
network.

Disadvantages of Load Shedding

 If the size of the buffer is very small, more packets are discarded.
 It is an overhead task for the router to continuously check whether
its buffer has become full.
 Load shedding can sometimes discard important packets that are
treated as old packets.
 Load shedding cannot completely guarantee the avoidance of
congestion.

Traffic Control Algorithms:

Fragmentation
1. Transparent Fragmentation:
Fragmentation performed by one network is made transparent to all other subsequent
networks through which the packet will pass. Whenever a large packet arrives at a gateway, the
gateway breaks the packet into smaller fragments as shown in the following figure, i.e. the gateway G1
breaks a packet into smaller fragments.
After this, each fragment is addressed to the same exit gateway. The exit gateway of the network
reassembles (recombines) all fragments, as shown in the figure: the exit gateway G2 of
network 1 recombines all fragments created by G1 before passing them to network 2. Thus, the
subsequent network is not aware that fragmentation has occurred. This type of strategy is
used by ATM networks. These networks use special hardware that provides transparent
fragmentation of packets.

There are some disadvantages of the transparent strategy, which are as follows:

 The exit gateway that recombines the fragments must know when it has
received all fragments.
 All fragments must leave via the same exit gateway, which can result in poor performance.
 It adds considerable overhead in repeatedly fragmenting and reassembling a large
packet.

2. Non-Transparent Fragmentation:
Fragmentation performed by one network is non-transparent to the subsequent networks
through which a packet passes. A packet fragmented by a gateway of a network is not
recombined by the exit gateway of the same network, as shown in the figure below.
Figure – Non-transparent Fragmentation

Once a packet is fragmented, each fragment is treated as an original packet. All fragments of a
packet are passed through the exit gateway, and recombination of these fragments is done at the
destination host.

Advantages of Non-Transparent Fragmentation are as follows:

 Multiple exit gateways can be used, which can improve network performance.
 It has a higher throughput.

Disadvantages of Non-Transparent Fragmentation are as follows:

 Every host must be capable of reassembling fragments.
 When a packet is fragmented, the fragments must be numbered in such a way that the
original data stream can be reconstructed.
 Total overhead increases due to fragmentation, as each fragment must have its own
header.
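A rough sketch of non-transparent (IP-style) fragmentation is given below: the payload is split into fragments that carry an offset in 8-byte units and a more-fragments flag, and reassembly happens only at the destination. The MTU value and the dictionary field layout are simplified assumptions for illustration.

```python
def fragment(payload: bytes, mtu_payload: int = 16):
    """Split payload into fragments; offsets are in 8-byte units as in IPv4."""
    assert mtu_payload % 8 == 0, "fragment size must be a multiple of 8 bytes"
    fragments = []
    for start in range(0, len(payload), mtu_payload):
        chunk = payload[start:start + mtu_payload]
        more = (start + mtu_payload) < len(payload)
        fragments.append({"offset": start // 8, "more_fragments": more, "data": chunk})
    return fragments

def reassemble(fragments) -> bytes:
    """Done only at the destination host in non-transparent fragmentation."""
    ordered = sorted(fragments, key=lambda f: f["offset"])
    return b"".join(f["data"] for f in ordered)

frags = fragment(b"this payload is too large for one small-MTU hop")
for f in frags:
    print(f)
assert reassemble(frags) == b"this payload is too large for one small-MTU hop"
```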

Types of Internet Protocols

Internet Protocols are a set of rules that governs the communication and exchange of data
over the internet. Both the sender and receiver should follow the same protocols in order to
communicate the data. In order to understand it better, let’s take an example of a language.
Any language has its own set of vocabulary and grammar which we need to know if we want
to communicate in that language. Similarly, over the internet whenever we access a website
or exchange some data with another device then these processes are governed by a set of
rules called the internet protocols.

Working of Internet Protocols

The internet and many other data networks work by organizing data into small pieces called
packets. Each large piece of data sent between two network devices is divided into smaller
packets by the underlying hardware and software. Each network protocol defines the rules for
how its data packets must be organized, according to the protocols the network supports.

Need of Protocols
It may be that the sender and receiver of data are parts of different networks, located in
different parts of the world having different data transfer rates. So, we need protocols to
manage the flow control of data, and access control of the link being shared in the
communication channel. Suppose there is a sender X who has a data transmission rate of 10
Mbps. And, there is a receiver Y who has a data receiving rate of 5Mbps. Since the rate of
receiving the data is slow so some data will be lost during transmission. In order to avoid this,
receiver Y needs to inform sender X about the speed mismatch so that sender X can adjust its
transmission rate. Similarly, access control decides which node will access the link
shared in the communication channel at a particular instant in time. Otherwise, the data
transmitted by many computers simultaneously through the same link will collide, resulting in
the corruption or loss of data.

What is IP Addressing?
An IP address is an Internet Protocol address: a unique address that identifies a
device on the network. The Internet Protocol is a set of rules governing the structure of data sent
over the Internet or through a local network. An IP address helps the Internet distinguish
between different routers, computers, and websites. It serves as a specific machine identifier
in a specific network and helps to establish communication between source and
destination.

Types of Internet Protocol

Internet protocols are of different types, having different uses. These are mentioned below:

1. TCP/IP (Transmission Control Protocol / Internet Protocol)
2. SMTP (Simple Mail Transfer Protocol)
3. PPP(Point-to-Point Protocol)
4. FTP (File Transfer Protocol)
5. SFTP(Secure File Transfer Protocol)
6. HTTP(Hyper Text Transfer Protocol)
7. HTTPS(HyperText Transfer Protocol Secure)
8. TELNET(Terminal Network)
9. POP3(Post Office Protocol 3)
10. IPv4
11. IPv6
12. ICMP
13. UDP
14. IMAP
15. SSH
16. Gopher

IPv4:
IP stands for Internet Protocol and v4 stands for Version Four (IPv4). IPv4 was the
primary version brought into production within the ARPANET in 1983.
IP version four addresses are 32-bit integers expressed in dotted decimal notation.
Example: 192.0.2.126 is an IPv4 address.

Parts of IPv4

 Network part:
The network part indicates the unique number that is assigned to the network. The
network part also identifies the class of the network.
 Host part:
The host part uniquely identifies the machine on your network. This part of the IPv4 address
is assigned to every host.
For each host on the network, the network part is the same, but the host part must
differ.
 Subnet number:
This is an optional part of IPv4. Local networks that have large numbers of hosts
are divided into subnets, and subnet numbers are assigned to them.

Characteristics of IPv4

 IPv4 is a 32-bit IP address.
 IPv4 is a numeric address whose bytes are separated by dots.
 The header has twelve mandatory fields and a minimum length of twenty bytes.
 It has unicast, broadcast, and multicast styles of addresses.
 IPv4 supports VLSM (Variable Length Subnet Mask).
 IPv4 uses the Address Resolution Protocol (ARP) to map IP addresses to MAC addresses.
 RIP is a routing protocol supported by the routed daemon.
 Addresses must be configured either manually or with DHCP.
 Packet fragmentation is permitted at routers and at the sending host.

Advantages of IPv4

 IPv4 security permits encryption to maintain privacy and security.
 IPv4 network allocation is significant and currently has more than 85000 practical routers.
 It becomes easy to connect multiple devices across a large network without NAT.
 It is a model of communication that provides quality of service as well as economical data
transfer.
 IPv4 addresses are well defined and permit straightforward encoding.
 Routing is more scalable and economical because addresses are aggregated more
effectively.
 Data communication across the network becomes more specific for multicast organizations.

Disadvantages of IPv4
o It limits Internet growth for existing users and hinders the use of the Internet for new
users.
o Internet routing is inefficient in IPv4.
o IPv4 has high system management costs; it is labour-intensive, complex, slow and
prone to errors.
o Security features are optional.
o It is difficult to add support for future needs, because adding anything on top of IPv4
carries very high overhead and hinders the ability to connect everything over IP.

Limitations of IPv4

 IP relies on network layer addresses to identify end-points on a network, and each network
has a unique IP address.
 The world's supply of unique IP addresses is dwindling, and they might eventually run out.
 If there are more hosts than a class can accommodate, IP addresses of the next class are needed.
 Complex host and routing configuration, non-hierarchical addressing, difficulty in
renumbering addresses, large routing tables, and non-trivial implementations of security,
QoS (Quality of Service), mobility, multi-homing and multicasting are the big limitations
of IPv4; that is why IPv6 came into the picture.

IPv4 Datagram Header
The size of the header is 20 to 60 bytes.


 VERSION: Version of the IP protocol (4 bits), which is 4 for IPv4
 HLEN: IP header length (4 bits), which is the number of 32 bit words in the
header. The minimum value for this field is 5 and the maximum is 15.
 Type of service: Low Delay, High Throughput, Reliability (8 bits)
 Total Length: Length of header + Data (16 bits), which has a minimum value
20 bytes and the maximum is 65,535 bytes.
 Identification: Unique Packet Id for identifying the group of fragments of a
single IP datagram (16 bits)
 Flags: 3 flags of 1 bit each : reserved bit (must be zero), do not fragment flag,
more fragments flag (same order)
 Fragment Offset: Represents the number of Data Bytes ahead of the particular
fragment in the particular Datagram. Specified in terms of number of 8 bytes,
which has the maximum value of 65,528 bytes.
 Time to live: Datagram's lifetime (8 bits). It prevents the datagram from looping
through the network by restricting the number of hops taken by a packet
before it is delivered to the destination.
 Protocol: Name of the protocol to which the data is to be passed (8 bits)
 Header Checksum: 16 bits header checksum for checking errors in the
datagram header
 Source IP address: 32 bits IP address of the sender
 Destination IP address: 32 bits IP address of the receiver
 Option: Optional information such as source route, record route. Used by the
Network administrator to check whether a path is working or not.
 Due to the presence of options, the size of the datagram header can be of variable
length (20 bytes to 60 bytes).
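As a rough illustration of this layout, the sketch below builds and parses a minimal 20-byte IPv4 header (without options) with Python's struct module; the field values in the example are arbitrary, not drawn from any real capture.

```python
import struct, socket

def parse_ipv4_header(raw: bytes):
    """Parse the fixed 20-byte part of an IPv4 header (no options)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len_bytes": (ver_ihl & 0x0F) * 4,      # IHL is in 32-bit words
        "total_length": total_len,
        "identification": ident,
        "flags": flags_frag >> 13,
        "fragment_offset": (flags_frag & 0x1FFF) * 8,  # offset is in 8-byte units
        "ttl": ttl,
        "protocol": proto,
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }

# Arbitrary example header: version 4, IHL 5, DF flag set, TTL 64, protocol 6 (TCP).
header = struct.pack("!BBHHHBBH4s4s", (4 << 4) | 5, 0, 40, 0x1C46, 0x4000, 64, 6,
                     0, socket.inet_aton("192.0.2.1"), socket.inet_aton("192.0.2.126"))
print(parse_ipv4_header(header))
```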

IPv6:
IPv6, or Internet Protocol Version 6, is a network layer protocol that allows communication to
take place over the network. IPv6 was designed by the Internet Engineering Task Force (IETF) in
December 1998 with the purpose of superseding IPv4, due to the exponential growth in the
number of Internet users worldwide.

Types of IPv6 Address

Now that we know what an IPv6 address is, let's take a look at its different types.

 Unicast addresses: A unicast address identifies a unique node on a network and usually refers to a single
sender or a single receiver.
 Multicast addresses: A multicast address represents a group of IP devices and can only be used as the
destination of a datagram.
 Anycast addresses: An anycast address is assigned to a set of interfaces that typically belong to different
nodes.

Advantages of IPv6

 Reliability
 Faster Speeds: IPv6 supports multicast rather than broadcast in IPv4.This feature allows
bandwidth-intensive packet flows (like multimedia streams) to be sent to multiple
destinations all at once.
 Stronger Security: IPSecurity, which provides confidentiality, and data integrity, is
embedded into IPv6.
 Routing efficiency
 Most importantly it’s the final solution for growing nodes in Global-network.

Disadvantages of IPv6

 Conversion: Due to widespread present usage of IPv4 it will take a long period to completely
shift to IPv6.
 Communication: IPv4 and IPv6 machines cannot communicate directly with each other.
They need an intermediate technology to make that possible.

IPv4 vs IPv6

The common type of IP address is known as IPv4, for "version 4". Here's an example of
what an IPv4 address might look like:

25.59.209.224

An IPv4 address consists of four numbers, each of which contains one to three digits, with a
single dot (.) separating each number or set of digits. Each of the four numbers can range
from 0 to 255. This group of separated numbers creates the addresses that let you and
everyone around the globe send and retrieve data over our Internet connections. IPv4
uses a 32-bit address scheme allowing 2^32 addresses, which is more than 4 billion
addresses.

An IPv6 address consists of eight groups of four hexadecimal digits. Here’s an example IPv6
address:

3001:0da8:75a3:0000:0000:8a2e:0370:7334
This new IP address version is being deployed to fulfil the need for more Internet addresses.
It is aimed at resolving the issues associated with IPv4. With a 128-bit address space, it
allows 340 undecillion unique addresses. IPv6 is also called IPng (Internet Protocol next
generation).
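The difference in address size can be seen with Python's standard ipaddress module, as in the brief sketch below; the two example addresses are the ones quoted in this text.

```python
import ipaddress

v4 = ipaddress.ip_address("25.59.209.224")
v6 = ipaddress.ip_address("3001:0da8:75a3:0000:0000:8a2e:0370:7334")

print(type(v4).__name__, v4, "-", v4.max_prefixlen, "bit address space:", 2 ** 32)
print(type(v6).__name__, v6, "-", v6.max_prefixlen, "bit address space:", 2 ** 128)
print("IPv6 compressed form:", v6.compressed)   # leading zeros and zero runs removed
```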

IP version 6 is the new version of Internet Protocol, which is way better than IP version 4 in
terms of complexity and efficiency. Let’s look at the header of IP version 6 and understand
how it is different from the IPv4 header.

IP version 6 Header Format:

The IPv6 fixed header is 40 bytes long and contains the following information.

Field & Description

1. Version (4 bits): Represents the version of the Internet Protocol, i.e. 0110.
2. Traffic Class (8 bits): These 8 bits are divided into two parts. The most significant 6
bits are used for the Type of Service, to let the routers know what service should be
provided to this packet. The least significant 2 bits are used for Explicit Congestion
Notification (ECN).
3. Flow Label (20 bits): This label is used to maintain the sequential flow of the packets
belonging to a communication. The source labels the sequence to help the routers
identify that a particular packet belongs to a specific flow of information. This field
helps avoid re-ordering of data packets and is designed for streaming/real-time media.
4. Payload Length (16 bits): This field tells the routers how much information a
particular packet contains in its payload. The payload is composed of extension headers and
upper-layer data. With 16 bits, up to 65535 bytes can be indicated; but if the extension
headers contain a Hop-by-Hop extension header, the payload may exceed 65535
bytes and this field is set to 0.
5. Next Header (8 bits): This field indicates either the type of extension header, or, if the
extension header is not present, the upper-layer PDU. The values for the type of
upper-layer PDU are the same as IPv4's.
6. Hop Limit (8 bits): This field is used to stop a packet from looping in the network
indefinitely. It is the same as TTL in IPv4. The value of the Hop Limit field is
decremented by 1 as it passes a link (router/hop). When the field reaches 0, the packet
is discarded.
7. Source Address (128 bits): This field indicates the address of the originator of the packet.
8. Destination Address (128 bits): This field provides the address of the intended recipient of
the packet.
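A minimal sketch of packing and unpacking this 40-byte fixed header is shown below; the field values are arbitrary examples and no extension headers are handled.

```python
import struct, socket

def parse_ipv6_header(raw: bytes):
    """Parse the 40-byte IPv6 fixed header."""
    first_word, payload_len, next_header, hop_limit = struct.unpack("!IHBB", raw[:8])
    return {
        "version": first_word >> 28,
        "traffic_class": (first_word >> 20) & 0xFF,
        "flow_label": first_word & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,          # e.g. 6 = TCP, 17 = UDP
        "hop_limit": hop_limit,
        "source": socket.inet_ntop(socket.AF_INET6, raw[8:24]),
        "destination": socket.inet_ntop(socket.AF_INET6, raw[24:40]),
    }

# Arbitrary example: version 6, flow label 0xABCDE, 1280-byte UDP payload, hop limit 64.
src = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
dst = socket.inet_pton(socket.AF_INET6, "2001:db8::2")
header = struct.pack("!IHBB", (6 << 28) | 0xABCDE, 1280, 17, 64) + src + dst
print(parse_ipv6_header(header))
```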

Differences between IPv4 and IPv6

The address through which any computer communicates with our computer is simply called
an Internet Protocol Address or IP address. For example, If we want to load a web page or we
want to download something, we require the address for delivery of that particular file or
webpage. That address is called an IP Address.

Types of IP Addresses
1. IPv4 (Internet Protocol Version 4)

2. IPv6 (Internet Protocol Version 6)

IPv4
IPv4 address consists of two things that are the network address and the host address. It
stands for Internet Protocol version four. It was introduced in 1981 by DARPA and was
the first deployed version in 1982 for production on SATNET and on the ARPANET in
January 1983.

IPv4 addresses are 32-bit integers that are expressed in dotted decimal notation. An address is
represented by 4 numbers separated by dots, each in the range 0-255, which are converted to
0s and 1s to be understood by computers. For example, an IPv4 address can be written as
189.123.123.90.

IPv4 Address Format

The IPv4 address format is a 32-bit address, written as four decimal numbers (representing binary octets) separated by dots (.).

IPv6
IPv6 is based on IPv4 and stands for Internet Protocol version 6. It was first introduced in
December 1995 by Internet Engineering Task Force. IP version 6 is the new version of
Internet Protocol, which is way better than IP version 4 in terms of complexity and
efficiency. IPv6 is written as a group of 8 hexadecimal numbers separated by colons (:), and it
can also be written as 128 bits of 0s and 1s.

IPv6 Address Format

The IPv6 address format is a 128-bit IP address, written as a group of 8 hexadecimal
numbers separated by colons (:).

Benefits of IPv6
The recent Version of IP IPv6 has a greater advantage over IPv4. Here are some of the
mentioned benefits:

 Larger Address Space: IPv6 has a greater address space than IPv4, which is required for
the expanding number of IP-connected devices. IPv6 has a 128-bit IP address, whereas IPv4
has a 32-bit address.

 Improved Security: IPv6 has improved security built in, offering features such as data
authentication and data encryption. An Internet connection over IPv6 is therefore more
secure.

 Simplified Header Format: As compared to IPv4, IPv6 has a simpler and more effective
header Structure, which is more cost-effective and also increases the speed of Internet
Connection.

 Prioritization: IPv6 contains stronger and more reliable support for QoS features, which helps
prioritize traffic and improves audio and video quality on web pages.

 Improved Support for Mobile Devices: IPv6 has increased and better support for Mobile
Devices. It helps in making quick connections over other Mobile Devices and in a safer way
than IPv4.


Difference Between IPv4 and IPv6

1. IPv4 has a 32-bit address length. IPv6 has a 128-bit address length.
2. IPv4 supports manual and DHCP address configuration. IPv6 supports auto-configuration and renumbering of addresses.
3. In IPv4, end-to-end connection integrity is unachievable. In IPv6, end-to-end connection integrity is achievable.
4. IPv4 can generate an address space of 4.29×10^9 addresses. The address space of IPv6 is much larger: it can produce 3.4×10^38 addresses.
5. In IPv4, the security feature is dependent on the application. IPSec is an inbuilt security feature of the IPv6 protocol.
6. The address representation of IPv4 is in decimal. The address representation of IPv6 is in hexadecimal.
7. In IPv4, fragmentation is performed by the sender and by forwarding routers. In IPv6, fragmentation is performed only by the sender.
8. In IPv4, packet flow identification is not available. In IPv6, packet flow identification is available and uses the flow label field in the header.
9. In IPv4, a checksum field is available. In IPv6, no checksum field is available.
10. IPv4 has a broadcast message transmission scheme. IPv6 has multicast and anycast message transmission schemes.
11. In IPv4, encryption and authentication facilities are not provided. In IPv6, encryption and authentication are provided.
12. IPv4 has a header of 20-60 bytes. IPv6 has a fixed header of 40 bytes.
13. IPv4 can be converted to IPv6. Not all IPv6 addresses can be converted to IPv4.
14. IPv4 consists of 4 fields separated by dots (.). IPv6 consists of 8 fields separated by colons (:).
15. IPv4's IP addresses are divided into five classes: Class A, Class B, Class C, Class D and Class E. IPv6 does not have any classes of IP address.
16. IPv4 supports VLSM (Variable Length Subnet Mask). IPv6 does not support VLSM.
17. Example of IPv4: 66.94.29.13. Example of IPv6: 2001:0000:3238:DFE1:0063:0000:0000:FEFB.

Classful addressing:

Introduction of Classful IP Addressing



An IP address is an address having information about how to reach a specific host, especially
outside the LAN. An IP address is a 32-bit unique address having an address space of 2^32.

Generally, there are two notations in which the IP address is written, dotted decimal notation
and hexadecimal notation.
Dotted Decimal Notation

Hexadecimal Notation

Some points to be noted about dotted decimal notation:

1. The value of any segment (byte) is between 0 and 255 (both included).

2. No zeroes precede the value in any segment (054 is wrong, 54 is correct).

Classful Addressing
The 32-bit IP address is divided into five sub-classes. These are:

 Class A

 Class B

 Class C

 Class D

 Class E
Each of these classes has a valid range of IP addresses. Classes D and E are reserved for
multicast and experimental purposes respectively. The order of bits in the first octet
determines the classes of the IP address.
The IPv4 address is divided into two parts:

 Network ID

 Host ID

The class of IP address is used to determine the bits used for network ID and host ID and the
number of total networks and hosts possible in that particular class. Each ISP or network
administrator assigns an IP address to each device that is connected to its network.

Classful Addressing

Note:

1. IP addresses are globally managed by Internet Assigned Numbers Authority(IANA) and


regional Internet registries(RIR).

2. While finding the total number of host IP addresses, 2 IP addresses are not counted and are
therefore subtracted from the total count, because the first IP address of any network is the
network number and the last IP address is reserved for the broadcast address.

Class A

IP addresses belonging to class A are assigned to the networks that contain a large number of
hosts.
 The network ID is 8 bits long.

 The host ID is 24 bits long.

The higher-order bit of the first octet in class A is always set to 0. The remaining 7 bits in the
first octet are used to determine network ID. The 24 bits of host ID are used to determine the
host in any network. The default subnet mask for Class A is 255.0.0.0. Therefore, class A has
a total of:

 2^7 – 2 = 126 network ID
 2^24 – 2 = 16,777,214 host ID

IP addresses belonging to class A ranges from 1.0.0.0 – 126.255.255.255.

Class A

Class B

IP addresses belonging to class B are assigned to networks that range from medium-sized to large-sized networks.

 The network ID is 16 bits long.

 The host ID is 16 bits long.

The higher-order bits of the first octet of IP addresses of class B are always set to 10. The
remaining 14 bits are used to determine the network ID. The 16 bits of host ID are used to
determine the host in any network. The default subnet mask for class B is 255.255.0.0. Class
B has a total of:

 2^14 = 16384 network address

 2^16 – 2 = 65534 host address

IP addresses belonging to class B ranges from 128.0.0.0 – 191.255.255.255.


Class B

Class C

IP addresses belonging to class C are assigned to small-sized networks.

 The network ID is 24 bits long.

 The host ID is 8 bits long.

The higher-order bits of the first octet of IP addresses of class C are always set to 110. The remaining 21 bits are used to determine the network ID. The 8 bits of host ID are used to determine the host in any network. The default subnet mask for class C is 255.255.255.0.
Class C has a total of:

 2^21 = 2097152 network address

 2^8 – 2 = 254 host address

IP addresses belonging to class C range from 192.0.0.0 – 223.255.255.255.

Class C

Class D

IP addresses belonging to class D are reserved for multicasting. The higher-order bits of the first octet of IP addresses belonging to class D are always set to 1110. The remaining 28 bits identify the multicast group address that interested hosts recognize.

Class D does not possess any subnet mask. IP addresses belonging to class D range from
224.0.0.0 – 239.255.255.255.
Class D

Class E

IP addresses belonging to class E are reserved for experimental and research purposes. IP
addresses of class E range from 240.0.0.0 – 255.255.255.254. This class doesn’t have any
subnet mask. The higher-order bits of the first octet of class E are always set to 1111.

Class E
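A small, hedged sketch (not from the original notes) that ties the five classes together: it reads the first octet to decide the class and, for classes A–C, splits the address into its network ID and host ID using the default classful masks described above.

    def dotted(value):
        """Render a 32-bit integer as dotted decimal notation."""
        return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

    def classify(ip):
        """Return (class letter, network prefix length) based on the first octet."""
        first = int(ip.split(".")[0])
        if first <= 127:
            return "A", 8        # leading bit 0,     default mask 255.0.0.0
        if first <= 191:
            return "B", 16       # leading bits 10,   default mask 255.255.0.0
        if first <= 223:
            return "C", 24       # leading bits 110,  default mask 255.255.255.0
        if first <= 239:
            return "D", None     # leading bits 1110, multicast (no mask)
        return "E", None         # leading bits 1111, experimental (no mask)

    def split(ip):
        """Split a classful address into (class, network ID, host ID)."""
        cls, prefix = classify(ip)
        if prefix is None:
            return cls, None, None
        value = 0
        for octet in ip.split("."):
            value = (value << 8) | int(octet)
        mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
        return cls, dotted(value & mask), dotted(value & ~mask & 0xFFFFFFFF)

    print(split("66.94.29.13"))   # ('A', '66.0.0.0', '0.94.29.13')
    print(split("192.0.2.55"))    # ('C', '192.0.2.0', '0.0.0.55')
    print(split("224.0.0.9"))     # ('D', None, None) -- multicast, no network/host split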

Range of Special IP Addresses

169.254.0.0 – 169.254.255.255 : Link-local addresses
127.0.0.0 – 127.255.255.255 : Loop-back addresses
0.0.0.0 – 0.255.255.255 : Used to communicate within the current network
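As a quick, hedged check (not part of the original notes), Python's standard ipaddress module exposes flags for these special ranges:

    import ipaddress

    for text in ("169.254.10.1", "127.0.0.1", "0.0.0.42", "66.94.29.13"):
        ip = ipaddress.ip_address(text)
        # is_link_local -> 169.254.0.0/16, is_loopback -> 127.0.0.0/8;
        # only 0.0.0.0 itself is reported as "unspecified".
        print(text, ip.is_link_local, ip.is_loopback, ip.is_unspecified)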

Problems with Classful Addressing


The problem with this classful addressing method is that millions of class A addresses are wasted, many of the class B addresses are wasted, whereas the number of addresses available in a class C network is so small that it cannot cater to the needs of many organizations. Class D addresses are used for multicast routing and are therefore available only as a single block. Class E addresses are reserved.

Because of these problems, classful addressing was replaced by Classless Inter-Domain Routing (CIDR) in 1993.

CIDR:
NAT:
Subnet:
ICMP Protocol (Internet Control Message Protocol)

ICMP stands for Internet Control Message Protocol. It is a network layer protocol used for error reporting in the network layer, and it is primarily used on network devices such as routers. Since different types of errors can occur in the network layer, ICMP is used to report these errors and to help debug them.

For example, suppose a sender transmits a packet to some destination but a router along the path cannot deliver it. In that case, the router sends an ICMP message back to the sender reporting that the packet could not be delivered to that destination.
Position of ICMP in the network layer

ICMP messages are carried inside IP packets, so ICMP resides in the IP (network) layer.

Messages

The ICMP messages are usually divided into two categories:

 Error-reporting messages

An error-reporting message means that a router has encountered a problem while processing an IP packet and reports the problem back to the source.

 Query messages

The query messages are those messages that help a host obtain specific information from another host. For example, if a client wants to know whether a server is alive, it sends an ICMP query message (an echo request) to the server.

ICMP Message Format

The message format begins with a category that tells which kind of message it is. Whether the message is an error or a query, it carries a type and a code: the type defines the type of message, while the code defines the subtype of that message.

The ICMP message contains the following fields:


 Type: An 8-bit field that defines the ICMP message type. (In ICMPv6, type values 0 to 127 are error messages, while values 128 to 255 are informational messages.)
 Code: An 8-bit field that defines the subtype of the ICMP message.
 Checksum: It is a 16-bit field to detect whether the error exists in the message or not.

Note: The ICMP protocol always reports error messages to the original source. For example, if an error occurs while a packet is in transit, the router reports the error to the sender (the original source) rather than to the receiver, since it is the sender that initiated the transmission and can act on the error.
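To make the format concrete, here is a hedged sketch (not from the notes) that builds an ICMP Echo Request, type 8, code 0, and fills in the 16-bit checksum field described above.

    import struct

    def internet_checksum(data):
        """16-bit one's-complement sum of all 16-bit words (the checksum field)."""
        if len(data) % 2:
            data += b"\x00"                            # pad to an even length
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
        return ~total & 0xFFFF

    def echo_request(identifier, sequence, payload=b"ping"):
        """Type=8, Code=0; the checksum is computed with the field set to 0 first."""
        header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
        checksum = internet_checksum(header + payload)
        return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

    packet = echo_request(identifier=0x1234, sequence=1)
    print(packet.hex())
    # Actually sending it would need a raw socket, e.g.
    # socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP),
    # which usually requires administrator privileges.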

ARP

Address Resolution Protocol (ARP) and its types

Address Resolution Protocol (ARP) is a communication protocol used to find the MAC
(Media Access Control) address of a device from its IP address. This protocol is used when a
device wants to communicate with another device on a Local Area Network or Ethernet.
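The request/reply idea can be sketched as follows (a simplified simulation, not real ARP code; the addresses are made-up examples): the host first checks its ARP cache, broadcasts a request only on a miss, and caches the answer it learns from the reply.

    # Simplified ARP resolution: cache lookup -> broadcast request -> cached reply.
    arp_cache = {}                                   # IP address -> MAC address learned so far

    lan_hosts = {                                    # stand-in for the other hosts on the LAN
        "192.168.1.1":  "aa:bb:cc:dd:ee:01",
        "192.168.1.20": "aa:bb:cc:dd:ee:14",
    }

    def resolve(ip):
        if ip in arp_cache:                          # cache hit: no traffic needed
            return arp_cache[ip]
        print(f"ARP request (broadcast): who has {ip}?")
        mac = lan_hosts.get(ip)                      # the owner of the IP replies (unicast)
        if mac is not None:
            print(f"ARP reply: {ip} is at {mac}")
            arp_cache[ip] = mac                      # remember the mapping for next time
        return mac

    resolve("192.168.1.1")     # triggers a broadcast request and a reply
    resolve("192.168.1.1")     # answered straight from the cache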

Types of ARP
There are four types of Address Resolution Protocol, which are given below:

 Proxy ARP
 Gratuitous ARP
 Reverse ARP (RARP)
 Inverse ARP
Proxy ARP - Proxy ARP is a method by which a Layer 3 device may respond to ARP requests for a target that is in a different network from the sender. The router configured for Proxy ARP answers the ARP request with its own MAC address for the target IP address, so the sender believes it has reached its destination.

At the back end, the proxy router forwards the packets to the appropriate destination, because the packets contain the necessary addressing information.

Dynamic Host Configuration Protocol (DHCP)

DHCP stands for Dynamic Host Configuration Protocol. It is a critical service that the users of an enterprise network rely on in order to communicate. DHCP helps enterprises smoothly manage the allocation of IP addresses to end-user client devices such as desktops, laptops, and cellphones. It is an application layer protocol that is used to provide:

Subnet Mask (Option 1 - e.g., 255.255.255.0)
Router Address (Option 3 - e.g., 192.168.1.1)
DNS Address (Option 6 - e.g., 8.8.8.8)
Vendor Class Identifier (Option 43 - e.g., 'unifi' = 192.168.1.9, where 'unifi' identifies the controller)

DHCP is based on a client-server model and works through four messages: Discover, Offer, Request, and Acknowledge (the DORA process).
Why Use DHCP?
DHCP helps in managing the entire process automatically and centrally. DHCP helps in maintaining a unique IP address for a host using the server. DHCP servers maintain information on the TCP/IP configuration and provide address configuration to DHCP-enabled clients in the form of a lease offer.

Components of DHCP
The main components of DHCP include:

 DHCP Server: DHCP Server is basically a server that holds IP Addresses and other
information related to configuration.
 DHCP Client: It is basically a device that receives configuration information from the server.
It can be a mobile, laptop, computer, or any other electronic device that requires a
connection.
 DHCP Relay: DHCP relays basically work as a communication channel between DHCP Client
and Server.
 IP Address Pool: It is the pool or container of IP Addresses possessed by the DHCP Server. It
has a range of addresses that can be allocated to devices.
 Subnets: Subnets are smaller portions of the IP network partitioned to keep networks under
control.
 Lease: The length of time for which the configuration information received from the server is valid; when the lease expires, the client must renew it or obtain a new one.
 DNS Servers: DHCP servers can also provide DNS (Domain Name System) server information
to DHCP clients, allowing them to resolve domain names to IP addresses.
 Default Gateway: DHCP servers can also provide information about the default gateway,
which is the device that packets are sent to when the destination is outside the local
network.
 Options: DHCP servers can provide additional configuration options to clients, such as the
subnet mask, domain name, and time server information.
 Renewal: DHCP clients can request to renew their lease before it expires to ensure that they
continue to have a valid IP address and configuration information.
 Failover: DHCP servers can be configured for failover, where two servers work together to
provide redundancy and ensure that clients can always obtain an IP address and
configuration information, even if one server goes down.
 Dynamic Updates: DHCP servers can also be configured to dynamically update DNS records
with the IP address of DHCP clients, allowing for easier management of network resources.
 Audit Logging: DHCP servers can keep audit logs of all DHCP transactions, providing
administrators with visibility into which devices are using which IP addresses and when
leases are being assigned or renewed.

The working of DHCP is as follows:

 DHCP works at the application layer of the TCP/IP protocol suite. The main task of
DHCP is to dynamically assign IP addresses to clients and to allocate TCP/IP
configuration information to them.
 The DHCP port number for the server is 67 and for the client is 68. It is a client-
server protocol that uses UDP services. An IP address is assigned from a pool of
addresses. In DHCP, the client and the server exchange mainly 4 DHCP messages in
order to make a connection, also called the DORA process (sketched below), although
8 DHCP message types are defined in the full process.
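As referenced above, here is a hedged sketch (not from the notes) of the DORA logic, with a toy server handing out leases from an IP address pool; real DHCP runs over UDP ports 67/68, which this simulation does not actually use.

    class DhcpServer:
        """Toy DHCP server: models Discover/Offer/Request/Ack over an address pool."""

        def __init__(self, pool, lease_time=3600):
            self.free = list(pool)                 # available IP addresses
            self.leases = {}                       # client MAC -> leased IP
            self.lease_time = lease_time           # lease validity in seconds

        def discover(self, mac):
            """DHCPDISCOVER -> DHCPOFFER: offer an address without committing it."""
            if mac in self.leases:
                return self.leases[mac]
            return self.free[0] if self.free else None

        def request(self, mac, ip):
            """DHCPREQUEST -> DHCPACK (or None): commit the lease and return options."""
            if ip in self.free:
                self.free.remove(ip)
                self.leases[mac] = ip
                return {"ip": ip, "subnet_mask": "255.255.255.0",
                        "router": "192.168.1.1", "dns": "8.8.8.8",
                        "lease_seconds": self.lease_time}
            return None

    server = DhcpServer(pool=["192.168.1.10", "192.168.1.11"])
    offer = server.discover("aa:bb:cc:dd:ee:14")       # Discover / Offer
    ack = server.request("aa:bb:cc:dd:ee:14", offer)   # Request / Ack
    print(offer, ack)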

Tunnelling

A technique of internetworking called tunneling is used when source and destination networks of the same type are to be connected through a network of a different type. Tunneling uses a layered protocol model such as that of the OSI or TCP/IP protocol suite.

In other words, when data moves from host A to host B it passes through the different layers of the specified protocol stack (OSI, TCP/IP, etc.); the data conversion (encapsulation) performed while crossing these layers, so that the data suits the interfaces of each particular layer and network, is what is called tunneling.

For example, let us consider an Ethernet to be connected to another Ethernet through a WAN
as:

Tunneling

The task is to send an IP packet from host A of Ethernet-1 to host B of Ethernet-2 via a WAN.

Steps
 Host A constructs a packet that contains the IP address of Host B.
 It then inserts this IP packet into an Ethernet frame and this frame is addressed to the
multiprotocol router M1
 Host A then puts this frame on Ethernet.
 When M1 receives this frame, it extracts the IP packet, inserts it into the payload field of a WAN network-layer packet, and addresses the WAN packet to M2.
 The multiprotocol router M2 removes the IP packet and sends it to host B in an Ethernet frame.

How Does Encapsulation Work?


Data travels from one place to another in the form of packets, and a packet has two parts: the header, which consists of the destination address and the protocol in use, and the payload, which holds the actual contents.

In simple terminology, encapsulation is the process of adding a new packet around an existing packet, a packet inside a packet. In an encapsulated packet, the first packet, header and all, becomes the payload section of the surrounding packet, whose own header carries the delivery information for the outer network. A minimal sketch of this nesting follows.
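(A hedged, toy illustration; the header layouts below are simplified stand-ins, not real Ethernet or WAN formats.)

    def encapsulate(header, inner_packet):
        """Put the whole inner packet, header and payload, inside a new packet."""
        return header + inner_packet

    def decapsulate(header, packet):
        """Strip the outer header to recover the inner packet."""
        return packet[len(header):]

    ip_packet = b"IPHDR[dst=HostB]" + b"application data"

    # Host A: the IP packet rides inside an Ethernet frame addressed to router M1.
    frame_to_m1 = encapsulate(b"ETH[dst=M1]", ip_packet)

    # Router M1: strip the Ethernet header, then tunnel the IP packet through the
    # WAN inside a WAN network-layer packet addressed to router M2.
    wan_packet = encapsulate(b"WAN[dst=M2]", decapsulate(b"ETH[dst=M1]", frame_to_m1))

    # Router M2: remove the WAN header and deliver the IP packet to Host B
    # inside a fresh Ethernet frame.
    frame_to_b = encapsulate(b"ETH[dst=HostB]", decapsulate(b"WAN[dst=M2]", wan_packet))
    print(frame_to_b)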

Why is this Technique Called Tunneling?


In this particular example, the IP packet does not have to deal with the WAN, and hosts A and B do not have to deal with the WAN either. Only the multiprotocol routers M1 and M2 have to understand both IP and WAN packets. Therefore, the WAN can be imagined as a big tunnel extending between the multiprotocol routers M1 and M2, and the technique is called tunneling.

Types of Tunneling Protocols


1. Generic Routing Encapsulation
2. Internet Protocol Security
3. IP-in-IP
4. SSH
5. Point-to-Point Tunneling Protocol
6. Secure Socket Tunneling Protocol
7. Layer 2 Tunneling Protocol
8. Virtual Extensible Local Area Network
