Unit - 3
Network Layer
Switching
Process of Switching
Frame Reception: The switch receives a data frame or packet from a computer connected to one of its ports.
MAC Address Extraction: The switch reads the header of the data frame and extracts the
destination MAC address from it.
MAC Address Table Lookup: Once the switch has retrieved the MAC address, it performs a lookup in
its switching table to find the port that leads to that destination MAC address.
Frame Transmission: Once the destination port is found, the switch sends the data frame out of that port
and forwards it to its target computer/network (see the sketch below).
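The lookup-and-forward steps above can be illustrated with a minimal Python sketch. It is only illustrative: the frame fields, MAC addresses, and port numbers used here are assumptions made for the example, not part of any real switch implementation.

# Minimal sketch of a switch's MAC-table lookup (illustrative only).
# The frame layout and table contents below are assumptions for the example.

FLOOD = "flood"  # action when the destination MAC is unknown

mac_table = {            # learned mapping: MAC address -> outgoing port
    "AA:BB:CC:00:00:01": 1,
    "AA:BB:CC:00:00:02": 2,
}

def forward(frame, in_port):
    """Return the port the frame should be sent out of."""
    # Learn: remember which port the source MAC was seen on.
    mac_table[frame["src_mac"]] = in_port

    # Lookup: find the port that leads to the destination MAC.
    out_port = mac_table.get(frame["dst_mac"])
    if out_port is None:
        return FLOOD          # unknown destination -> send on all other ports
    return out_port

frame = {"src_mac": "AA:BB:CC:00:00:03", "dst_mac": "AA:BB:CC:00:00:02"}
print(forward(frame, in_port=3))   # -> 2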
Types of Switching
Circuit Switching
Advantages
This type of switching technique is suitable for the continuous transmission of data, as the dedicated
path is maintained for the entire conversation.
The rate of communication is steady, because a dedicated path is reserved for the transmission.
Once the circuit is established, there are no intermediate delays, which makes it suitable for
voice and data transmission.
Disadvantages
Because a dedicated connection is established between the two ends, transmitting
any other data over that path is difficult.
Even data with a low volume demands the full dedicated bandwidth.
System resources are underutilized, because resources reserved for one connection
cannot be reused for other connections.
The circuit establishment time is high.
Space-Division Switches
In space-division switching, the paths in the circuit are physically separated from each other.
Space division was originally intended for analog networks; however, it is used for
both analog and digital switching.
A switching element known as a crosspoint is used in space-division switches.
It finds applications in digital communication and uses semiconductor gates as crosspoints.
The advantage of a space-division switch is that it is instantaneous; the disadvantage is that the
number of crosspoints required to keep blocking acceptable is very large.
Time-Division Switches
In the time-division switching method, a number of connections travel along the same trunk
line.
The streams are broken into segments with the help of time-division
multiplexing, which makes sure that the segments are sent at specific time intervals.
The segments are separated out again with the help of a demultiplexer.
Packet Switching
Packet switching is a switching technique in which the message is not sent in one go;
instead, it is divided into smaller pieces that are sent individually.
The message is split into smaller pieces known as packets, and each packet is given a unique
number so that its order can be identified at the receiving end.
Every packet contains some information in its header, such as the source address, destination
address, and sequence number.
Packets travel across the network, each taking the shortest available path.
All the packets are reassembled at the receiving end in the correct order.
If any packet is missing or corrupted, a message is sent asking the sender to resend it.
If all packets arrive in the correct order, an acknowledgment message is sent (a small
splitting-and-reassembly sketch follows below).
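As an illustration of the points above, here is a minimal Python sketch that splits a message into numbered packets, shuffles them to simulate out-of-order arrival, and reassembles them by sequence number. The header fields and the packet size chosen here are assumptions for the example.

import random

# Illustrative sketch of packet switching: the message is split into numbered
# packets, which may arrive out of order and are reassembled by sequence number.
# Header fields and the 4-character packet size are assumptions for the example.

def make_packets(message, src, dst, size=4):
    return [
        {"src": src, "dst": dst, "seq": i, "data": message[i*size:(i+1)*size]}
        for i in range((len(message) + size - 1) // size)
    ]

def reassemble(packets, expected_count):
    received = {p["seq"]: p["data"] for p in packets}
    missing = [i for i in range(expected_count) if i not in received]
    if missing:
        return None, missing          # receiver would request retransmission
    return "".join(received[i] for i in range(expected_count)), []

packets = make_packets("HELLO PACKET SWITCHING", src="H1", dst="H2")
random.shuffle(packets)               # packets may take different paths
message, missing = reassemble(packets, expected_count=len(packets))
print(message)                        # -> HELLO PACKET SWITCHING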
Message Switching
Advantages
Data channels are shared among the communicating devices, which improves the
efficiency of using the available bandwidth.
Traffic congestion can be reduced because the message is temporarily stored in
the nodes.
Message priority can be used to manage the network.
The size of the message sent over the network can vary; therefore, it supports
messages of unlimited size.
Disadvantages
The message switches must be equipped with sufficient storage to hold the
messages until they are forwarded.
Long delays can occur due to the store-and-forward mechanism used by the
message switching technique.
Routing Algorithm
In order to transfer packets from the source to the destination, the network layer
must determine the best route through which the packets can be transmitted.
Whether the network layer provides datagram service or virtual-circuit service,
its main job is to provide the best route. The routing protocol performs this job.
A routing protocol is a routing algorithm that provides the best path from the
source to the destination.
The best path is the path that has the "least cost" from the source to the
destination.
Routing is the process of forwarding packets from the source to the destination,
but the best route for sending the packets is determined by the routing algorithm.
Adaptive Algorithms
These are the algorithms that change their routing decisions whenever network
topology or traffic load changes.
Routing decisions are changed to reflect changes in the topology as well as the
traffic of the network.
Also known as dynamic routing, these make use of dynamic information such as
current topology, load, delay, etc. to select routes.
Optimization parameters are distance, number of hops, and estimated transit
time.
Isolated:
In this method, each node makes its routing decisions using the information it has,
without seeking information from other nodes.
The sending nodes don’t have information about the status of a particular link.
The disadvantage is that packets may be sent through a congested network which
may result in delay.
Examples: Hot potato routing, and backward learning.
Distributed:
In this method, the node receives information from its neighbors and then takes
the decision about routing the packets.
A disadvantage is that a packet may be delayed if the network changes between the
intervals at which the node receives information and sends packets.
It is also known as a decentralized algorithm as it computes the least-cost path
between source and destination.
Centralized:
In this method, a centralized node has entire information about the network and
makes all the routing decisions.
The advantage of this is that only one node is required to keep the information of the
entire network; the disadvantage is that if the central node goes down, the
entire network fails.
The link state algorithm is referred to as a centralized algorithm since it is aware
of the cost of each link in the network.
Non-Adaptive Algorithms
These are the algorithms that do not change their routing decisions
once they have been selected.
This is also known as static routing, as the route to be taken is computed
in advance and downloaded to the routers when they are booted.
Flooding:
Flooding adopts the technique in which every incoming packet is sent out on
every outgoing line except the one on which it arrived.
One problem with this is that packets may travel in a loop, and as a result
a node may receive duplicate packets.
These problems can be overcome with the help of sequence numbers,
hop counts, and spanning trees (a hop-count sketch follows below).
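A minimal Python sketch of flooding with a hop-count limit is given below. The topology and the hop limit are assumptions made for the example; it is only meant to show how a hop count keeps flooded copies from circulating forever.

# Sketch of flooding with a hop-count limit (one of the loop-control methods
# mentioned above). The topology and hop limit are assumptions for the example.

topology = {               # adjacency list: node -> neighbors
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def flood(node, came_from, hops_left, delivered):
    if hops_left == 0:
        return
    for neighbor in topology[node]:
        if neighbor == came_from:
            continue                     # never send back on the arrival line
        delivered.append(neighbor)
        flood(neighbor, node, hops_left - 1, delivered)

delivered = []
flood("A", came_from=None, hops_left=3, delivered=delivered)
print(delivered)
# Some nodes appear more than once (the duplicate-packet problem); the hop limit
# only guarantees that the flood eventually dies out. Sequence numbers or a
# spanning tree would suppress the duplicates as well.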
Random walk:
In this method, packets are sent node by node to one of the node's neighbors
chosen at random.
This is a highly robust method that is usually implemented by sending
packets onto the link which is least queued.
Hybrid Algorithms
As the name suggests, these algorithms are a combination of both
adaptive and non-adaptive algorithms.
In this approach, the network is divided into several regions, and each
region uses a different algorithm.
Link-state:
In this method, each router creates a detailed and complete map of the
network which is then shared with all other routers.
This allows for more accurate and efficient routing decisions to be
made.
Distance vector:
In this method, each router maintains a table that contains information
about the distance and direction to every other node in the network.
This table is then shared with other routers in the network.
The disadvantage of this method is that it may lead to routing loops.
Algorithm
At each node x:

Initialization:
    for all destinations y in N:
        Dx(y) = c(x,y)        // if y is not a neighbor, then c(x,y) = ∞
    for each neighbor w:
        Dw(y) = ? for all destinations y in N
    for each neighbor w:
        send distance vector Dx = [ Dx(y) : y in N ] to w

loop
    wait (until I receive a distance vector from some neighbor w)
    for each y in N:
        Dx(y) = min over all v { c(x,v) + Dv(y) }
    if Dx(y) changed for any destination y:
        send distance vector Dx = [ Dx(y) : y in N ] to all neighbors
forever
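The update step of this pseudocode, Dx(y) = min over v { c(x,v) + Dv(y) }, can be sketched in Python as follows. The sketch runs the updates for all nodes in a single loop rather than as separate communicating routers, and the three-node topology and its link costs are assumptions made for the example.

import math

# Python sketch of the distance-vector update Dx(y) = min_v { c(x,v) + Dv(y) }.
# The topology (nodes x, y, z and their link costs) is assumed for the example.

INF = math.inf
nodes = ["x", "y", "z"]
cost = {                    # c(i, j): direct link costs
    ("x", "y"): 1, ("y", "x"): 1,
    ("y", "z"): 2, ("z", "y"): 2,
    ("x", "z"): 7, ("z", "x"): 7,
}

def c(i, j):
    return 0 if i == j else cost.get((i, j), INF)

# Each node starts with only its direct link costs, then repeatedly applies the
# Bellman-Ford equation using the vectors of the other nodes.
D = {x: {y: c(x, y) for y in nodes} for x in nodes}

changed = True
while changed:                          # stops when no vector changes any more
    changed = False
    for x in nodes:
        for y in nodes:
            best = min(c(x, v) + D[v][y] for v in nodes)
            if best < D[x][y]:
                D[x][y] = best
                changed = True

print(D["x"])   # -> {'x': 0, 'y': 1, 'z': 3}  (x reaches z via y)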
Let's understand through an example:
In the above figure, each cloud represents the network, and the number
inside the cloud represents the network ID.
All the LANs are connected by routers, and they are represented in boxes
labeled as A, B, C, D, E, F.
Distance vector routing algorithm simplifies the routing process by
assuming the cost of every link is one unit. Therefore, the efficiency of
transmission can be measured by the number of links to reach the
destination.
In Distance vector routing, the cost is based on hop count.
In the above figure, we observe that each router sends its knowledge to its immediate
neighbors.
The neighbors add this knowledge to their own knowledge and send the updated
table to their own neighbors.
In this way, each router gets its own information plus new information about its
neighbors.
Routing Table
Two processes occur:
Creating the Table
Updating the Table
NET ID: The Network ID defines the final destination of the packet.
Cost: The cost is the number of hops the packet must take to get there.
Next hop: It is the next router to which the packet must be delivered.
A sends its routing table to B, F & E.
B sends its routing table to A & C.
C sends its routing table to B & D.
D sends its routing table to E & C.
E sends its routing table to A & D.
F sends its routing table to A.
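The table-update step in this example can be sketched in Python as follows. It assumes a hop-count cost of one per link; the network IDs and table contents below are made up for the illustration.

# Sketch of the table-update step: when router A receives B's table, it adds one
# hop for the A-B link and keeps the entry only if it is cheaper than what A has.
# The table contents below are assumptions for the example.

def merge(own_table, neighbor_table, neighbor_name, link_cost=1):
    for net_id, (cost, _next_hop) in neighbor_table.items():
        new_cost = cost + link_cost
        if net_id not in own_table or new_cost < own_table[net_id][0]:
            own_table[net_id] = (new_cost, neighbor_name)   # (cost, next hop)
    return own_table

# A's initial table: directly attached networks have cost 1 and no next hop.
table_A = {"14": (1, "-"), "23": (1, "-"), "66": (1, "-")}
table_B = {"23": (1, "-"), "55": (1, "-")}

print(merge(table_A, table_B, neighbor_name="B"))
# '55' is learned from B with cost 2 and next hop B; '23' keeps A's direct route.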
Knowledge about the neighborhood: Instead of sending its entire routing table, a router
sends information about its neighborhood only. A router broadcasts its identity
and the cost of its directly attached links to the other routers.
Flooding: Each router sends its information to every other router on the internetwork.
This process is known as flooding: the router sends the packet to its neighbors, and every
router that receives the packet sends copies to all of its neighbors. Finally, each and every
router receives a copy of the same information.
Information sharing: A router sends the information to every other router only when
a change occurs in its information.
c(i, j): Link cost from node i to node j. If nodes i and j are not directly linked, then
c(i, j) = ∞.
D(v): It defines the cost of the currently known least-cost path from the source node to
destination v.
P(v): It defines the previous node (neighbor of v) along the current least-cost path
from the source to v.
N: It is the total number of nodes available in the network.
Link State Algorithm
Shortest Path Algorithm in Computer Network
It refers to the algorithms that help to find the shortest path between a sender and
receiver for routing the data packets through the network in terms of shortest
distance, minimum cost, and minimum time.
It mainly involves building a graph or subnet with routers as the nodes and the
communication lines connecting them as the edges.
Hop count is one of the parameters that is used to measure the distance.
Hop count: It is the number that indicates how many routers are covered. If the
hop count is 6, there are 6 routers/nodes and the edges connecting them.
Another metric is geographic distance, such as kilometers.
The label on an arc can be computed as a function of bandwidth, average traffic,
distance, communication cost, measured delay, mean queue length, etc.
Dijkstra’s Algorithm
Bellman Ford’s Algorithm
Floyd Warshall’s Algorithm
Dijkstra’s Algorithm
Dijkstra’s Algorithm is a greedy algorithm that is used to find the minimum
distance between a node and all other nodes in a given graph. Here we can consider
a node as a router and the graph as a network. It uses the weight of an edge, i.e., the
distance between the nodes, to find a minimum-distance route.
Algorithm:
1: Mark the source node's current distance as 0 and all other nodes' distances as infinity.
2: Set the node with the smallest current distance among the non-visited nodes as the
current node.
3: For each neighbor, N, of the current node:
Calculate the potential new distance by adding the current distance of the current node
with the weight of the edge connecting the current node to N.
If the potential new distance is smaller than the current distance of node N, update N's
current distance with the new distance.
4: Mark the current node as visited.
5: If any unvisited node remains, go to step 2 to find the next node with the
smallest current distance and continue this process.
Example:
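As a worked example, here is a minimal Python sketch of the steps above. The four-node weighted graph used here is an assumption made for the illustration.

import heapq

# Minimal Dijkstra sketch following the steps above. The graph below
# (an adjacency list with link weights) is assumed for the example.

graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 2},
    "D": {"B": 4, "C": 2},
}

def dijkstra(source):
    dist = {node: float("inf") for node in graph}   # step 1: all infinity ...
    dist[source] = 0                                 # ... except the source
    visited = set()
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)                # step 2: smallest current distance
        if node in visited:
            continue
        visited.add(node)                            # step 4: mark as visited
        for neighbor, weight in graph[node].items():   # step 3: relax each neighbor
            new_dist = d + weight
            if new_dist < dist[neighbor]:
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

print(dijkstra("A"))   # -> {'A': 0, 'B': 2, 'C': 3, 'D': 5}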
Bellman Ford’s Algorithm
The Bellman-Ford algorithm is a single-source graph search algorithm which helps
us find the shortest path between a source vertex and every other vertex in a given
graph. We can use it on both weighted and unweighted graphs. This algorithm is
slower than Dijkstra's algorithm, but it can also handle negative edge weights.
Algorithm
1: First we initialize all vertices v in a distance array dist[] to INFINITY.
2: Then we take the source vertex as vertex 0 and assign dist[0] = 0.
3: Then we iteratively update the minimum distance to each node (dist[v]) by comparing
it with the sum of the distance to the previous node (dist[u]) and the edge weight
(weight), repeating this relaxation N-1 times.
4: To identify the presence of a negative edge cycle, do one more round of edge
relaxation and check the following cases:
A negative cycle exists if, for any edge (u, v), the sum of the distance to node u
(dist[u]) and the edge weight (weight) is less than the current distance to node v
(dist[v]).
If none of the edges satisfies the case above, there is no negative edge cycle
(a Python sketch follows below).
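A minimal Python sketch of these steps is given below. The edge list, including one negative edge weight, is an assumption made for the example.

# Minimal Bellman-Ford sketch following the steps above (edge list with possibly
# negative weights). The graph below is assumed for the example.

INF = float("inf")

def bellman_ford(num_vertices, edges, source=0):
    dist = [INF] * num_vertices          # step 1: initialise all distances to INFINITY
    dist[source] = 0                     # step 2: source distance is 0

    # Step 3: relax every edge N-1 times.
    for _ in range(num_vertices - 1):
        for u, v, weight in edges:
            if dist[u] != INF and dist[u] + weight < dist[v]:
                dist[v] = dist[u] + weight

    # Step 4: one more round of relaxation detects a negative cycle.
    for u, v, weight in edges:
        if dist[u] != INF and dist[u] + weight < dist[v]:
            raise ValueError("graph contains a negative edge cycle")
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]
print(bellman_ford(4, edges))   # -> [0, 4, 1, 5]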
Floyd Warshall’s Algorithm
The Floyd-Warshall Algorithm is used to find the shortest path between every pair of nodes in a
given graph. It keeps a matrix of distances between each pair of vertices and keeps
iterating over the matrix until the shortest paths are reached.
Algorithm:
1: Using the data about the graph, build a distance matrix.
2: By taking each vertex in turn as an intermediate vertex, update the matrix.
3: Note that at each step we pick one vertex k, and we update every shortest
path that can include this chosen vertex as an in-between point along the path.
4: When we select a vertex k as the middle of the path, all vertices {0, 1, 2, ..., k-1}
have already been considered as potential middle points in the previous calculations.
5: We have to consider the following subpoints while dealing with the source and destination
vertices i and j respectively:
If vertex k is not part of the shortest path from i to j, we do not have to change the dist[i][j]
value; it remains unchanged.
If vertex k is indeed part of the shortest path from i to j, update dist[i][j] to the sum of dist[i][k]
and dist[k][j], but only if dist[i][j] is greater than this newly calculated value (a Python sketch follows below).
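A minimal Python sketch of this matrix update is given below. The four-node distance matrix is an assumption made for the example.

# Minimal Floyd-Warshall sketch following the steps above: a distance matrix is
# updated by trying every vertex k as an intermediate point. The 4-node graph
# below is assumed for the example.

INF = float("inf")

def floyd_warshall(dist):
    n = len(dist)
    dist = [row[:] for row in dist]              # work on a copy of the matrix
    for k in range(n):                           # k: candidate intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
for row in floyd_warshall(graph):
    print(row)
# dist[0][2] becomes 5 (0 -> 1 -> 2), dist[1][3] becomes 3 (1 -> 2 -> 3), etc.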
What is Congestion?
Congestion in a computer network happens when there is too much data being
sent at the same time, causing the network to slow down.
Just like traffic congestion on a busy road, network congestion leads to delays
and sometimes data loss.
When the network can’t handle all the incoming data, it gets “clogged,” making it
difficult for information to travel smoothly from one place to another.
Benefits of Congestion Control
Improved Network Stability: Congestion control helps keep the network stable by
preventing it from getting overloaded. It manages the flow of data so the network
doesn’t crash or fail due to too much traffic.
Reduced Latency and Packet Loss: Without congestion control, data transmission
can slow down, causing delays and data loss. Congestion control helps manage
traffic better, reducing these delays and ensuring fewer data packets are lost,
making data transfer faster and the network more responsive.
Enhanced Throughput: By avoiding congestion, the network can use its resources
more effectively. This means more data can be sent in a shorter time, which is
important for handling large amounts of data and supporting high-speed
applications.
Fairness in Resource Allocation: Congestion control ensures that network
resources are shared fairly among users. No single user or application can take up
all the bandwidth, allowing everyone to have a fair share.
Mitigation of Network Congestion Collapse: Without congestion control, a
sudden spike in data traffic can overwhelm the network, causing severe
congestion and making it almost unusable. Congestion control helps prevent this
by managing traffic efficiently and avoiding such critical breakdowns.
Better User Experience: When data flows smoothly and quickly, users have a
better experience. Websites, online services, and applications work more reliably
and without annoying delays.
Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
Congestion-avoidance algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
Traffic-Aware Routing
Explanation
Step 1 − Consider a network which is divided into two parts, East and West, connected
by the links CF and EI.
Step 2 − Suppose most of the traffic in between East and West is using link CF, and
as a result CF link is heavily loaded with long delays. Including queueing delay in the
weight which is used for shortest path calculation will make EI more attractive.
Step 3 − After installing the new routing tables, most of East-West traffic will now go
over the EI link. As a result in the next update CF link will appear to be the shortest
path.
Step 4 − As a result the routing tables may oscillate widely, leading to erratic routing
and many potential problems.
Step 5 − If we consider only bandwidth and propagation delay and ignore the load,
this problem does not occur. Attempts to include the load but change the weights
within only a narrow range merely slow down the routing oscillations.
Features
It is a congestion control technique.
Routes can be changed in accordance with traffic patterns, which shift during the
day as network users in different time zones wake and sleep.
Routes can be changed to shift traffic away from heavily used paths.
Traffic can be split across multiple paths.
Admission Control
It is one of the techniques widely used in virtual-circuit networks to keep
congestion at bay.
The idea is simple: do not set up a new virtual circuit unless the network can carry the
added traffic without becoming congested.
Admission control can also be combined with traffic-aware routing by
considering routes that steer around traffic hotspots as part of the setup procedure.
Internetworking
The word “internetworking,” which combines the words “inter” and
“networking,” denotes a connection between completely distinct nodes/segments.
This connection is made possible by intermediary hardware such as routers or
gateways. “Catenet” was the initial name for an internetwork.
Private, public, commercial, industrial, and governmental networks frequently
connect to one another.
Therefore, an internetwork is a collection of several networks
that operate as a single large network and are connected by intermediate
networking devices.
The industry, products, and procedures used to address the challenge of creating and
administering internetworks are referred to as internetworking.
Intranet
An intranet is a set of interconnected networks that uses the Internet Protocol
and IP-based tools such as web browsers and FTP clients, and that is under the
control of a single administrative entity.
That administrative entity closes the intranet to the rest of the world and
allows only specific users.
Most typically, this network is the internal network of a corporation or other
enterprise.
A large intranet will usually have its own web server to provide
users with browsable information.
Internet
A specific internetwork, consisting of a worldwide interconnection of
governmental, academic, public, and private networks, based upon the
Advanced Research Projects Agency Network (ARPANET) developed by ARPA of
the U.S. Department of Defense. It is also home to the World Wide Web
(WWW) and is written as the ‘Internet’ to distinguish it from all other generic
internetworks.
Participants in the Internet, or their service providers, use IP addresses obtained
from address registries that control the assignments.
Internetwork Addressing
Internetwork addresses identify devices individually or as a group. Addressing
schemes vary depending on the protocol family and the OSI layer.
Data-link layer (DLL) addresses, MAC addresses, and network-layer addresses are the three types of
internetwork addresses that are typically used.
DLL Addresses
Every physical network connection of a network device is uniquely identified by
a data-link layer address.
Data-link addresses are frequently referred to as physical addresses or hardware
addresses.
Data-link addresses usually exist within a flat address space and have a
pre-established, fixed relationship to a particular device.
End systems typically have only one data-link address, since they have only one
physical network connection.
Because they have many physical network connections, routers and other
internetworking devices typically have multiple data-link addresses.
Network-Layer Addresses
Network addresses usually exist within a hierarchical address space and are
sometimes called virtual or logical addresses.
The relationship between a network address and a device is logical and flexible;
it is typically based either on the characteristics of the physical network or on
groupings that have no physical basis.
An end system requires one network-layer address for each network-layer
protocol it supports.
Routers and other internetworking devices require one network-layer address per
physical network connection for each supported network-layer protocol.
Challenges to Internetworking
There is no guarantee that a useful internetwork can be implemented.
There are many challenging areas, especially reliability, connectivity,
flexibility, and network management.
However, each and every one of these areas is crucial to the creation of an
efficient and cost-effective internetwork.
Fragmentation
Fragmentation is an important function of network layer.
It is a technique in which gateways break up or divide larger packets into smaller
ones called fragments.
Each fragment is then sent as a separate internal packet.
Each fragment has its separate header and trailer.
Sometimes, a fragmented datagram can also get fragmented further when it
encounters a network that handles smaller fragments.
Thus, a datagram can be fragmented several times before it reaches final
destination.
The reverse process of fragmentation (reassembly) is difficult.
Reassembly of fragments is usually done by the destination host, because each
fragment has become an independent datagram (a small fragmentation-and-reassembly sketch follows below).
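A minimal Python sketch of fragmentation and reassembly is given below. The field names, the MTU value, and the payload are assumptions made for the example; real IP fragmentation carries the offset, identification, and more-fragments flag in the IP header.

# Sketch of fragmentation at the network layer: a datagram larger than a link's
# MTU is split into fragments that each carry an offset, and the destination
# reassembles them. Field names and sizes here are assumptions for the example.

def fragment(datagram, mtu):
    """Split the payload into fragments no larger than the MTU."""
    payload = datagram["payload"]
    fragments = []
    for offset in range(0, len(payload), mtu):
        piece = payload[offset:offset + mtu]
        fragments.append({
            "id": datagram["id"],                       # same ID for all fragments
            "offset": offset,                           # position in original payload
            "more_fragments": offset + mtu < len(payload),
            "payload": piece,
        })
    return fragments

def reassemble(fragments):
    """Destination host puts the fragments back in order by offset."""
    ordered = sorted(fragments, key=lambda f: f["offset"])
    return "".join(f["payload"] for f in ordered)

datagram = {"id": 42, "payload": "A DATAGRAM THAT IS TOO LARGE FOR THE LINK"}
frags = fragment(datagram, mtu=10)
print(len(frags))                                  # -> 5 fragments
print(reassemble(frags) == datagram["payload"])    # -> True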
1. Transparent Fragmentation:
Advantage:
End devices (sender and receiver) do not need to handle fragmentation or
reassembly logic, reducing their computational burden.
Fragmentation and reassembly occur entirely within the network, making the
process invisible to end applications.
Transparent fragmentation ensures compatibility across networks with varying
MTU sizes without requiring changes at the endpoints.
Applications can send data without worrying about MTU sizes, as the network
handles fragmentation and reassembly.
Disadvantage:
The exit gateway that recombines the fragments in a network must know when it has
received all of the fragments.
If some fragments choose different gateways for exit, they cannot be recombined,
which results in poor performance.
Considerable overhead is added by repeatedly fragmenting and reassembling a large
packet.
2. Non-Transparent Fragmentation:
Fragmentation done by one network is non-transparent to the subsequent
networks through which the packet passes.
A packet fragmented by a gateway of one network is not recombined by the exit gateway
of the same network, as shown in the figure below.
Advantages of non-transparent fragmentation are as follows:
We can use multiple exit gateways, which can improve the network performance.
It has a higher throughput.
Disadvantages of non-transparent fragmentation are as follows:
Every host must have the capability of reassembling fragments.
When a packet is fragmented, the fragments should be numbered in such a way that
the original data stream can be reconstructed.
The total overhead increases due to fragmentation, as each fragment must have its
own header.