FON Unit III - IoT
A host with a packet to send transmits it to the nearest router, either on its own LAN or over a
point-to-point link to the ISP. The packet is stored there until it has fully arrived and the link
has finished its processing by verifying the checksum. Then it is forwarded to the next router
along the path until it reaches the destination host, where it is delivered. This mechanism is
store-and-forward packet switching.
Implementation of Connectionless Service
If connectionless service is offered, packets are injected into the network individually and
routed independently of each other. No advance setup is needed. In this context, the packets
are frequently called datagrams (in analogy with telegrams) and the network is called a
datagram network.
Figure: Routing within a datagram network, showing A's table (initially), A's table (later), C's table, and E's table.
Let us assume for this example that the message is four times longer than the maximum
packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send each
of them in turn to router A.
Every router has an internal table telling it where to send packets for each of the possible
destinations. Each table entry is a pair: the destination and the outgoing line to use for it. Only directly
connected lines can be used.
A’s initial routing table is shown in the figure under the label ‘‘initially.’’
At A, packets 1, 2, and 3 are stored briefly, having arrived on the incoming link. Then each
packet is forwarded according to A’s table, onto the outgoing link to C within a new frame.
Packet 1 is then forwarded to E and then to F.
However, something different happens to packet 4. When it gets to A it is sent to router B,
even though it is also destined for F. For some reason (a traffic jam along the ACE path), A decided
to send packet 4 via a different route than that of the first three packets. Router A updated its
routing table, as shown under the label ‘‘later.’’
The algorithm that manages the tables and makes the routing decisions is called the routing
algorithm.
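As a minimal sketch of the forwarding behaviour just described, the Python fragment below models router A's entry for destination F (via C at first, via B after the routing algorithm updates the table). The dictionary layout is an illustrative assumption, not how real routers store their tables.

```python
# A per-destination forwarding table for router A: destination -> outgoing line.
# Only the entry for F is taken from the example; it is illustrative only.
forwarding_table = {"F": "C"}

def forward(packet_dest):
    """Return the outgoing line for a packet, as the forwarding process would."""
    return forwarding_table[packet_dest]

print(forward("F"))          # 'C' -- packets 1, 2, and 3 leave on the line to C

# Later the routing algorithm updates the table (e.g. to avoid the ACE path),
# and packet 4 for the same destination leaves on a different line.
forwarding_table["F"] = "B"
print(forward("F"))          # 'B'
```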
Implementation of Connection-Oriented Service
If connection-oriented service is used, a path from the source router all the way to the
destination router must be established before any data packets can be sent. This connection is
called a VC (virtual circuit), and the network is called a virtual-circuit network.
When a connection is established, a route from the source machine to the destination
machine is chosen as part of the connection setup and stored in tables inside the routers. That
route is used for all traffic flowing over the connection, exactly the same way that the
telephone system works. When the connection is released, the virtual circuit is also
terminated. With connection-oriented service, each packet carries an identifier telling which
virtual circuit it belongs to.
As an example, consider the situation shown in the figure. Here, host H1 has established
connection 1 with host H2. This connection is remembered as the first entry in each of the
routing tables. The first line of A’s table says that if a packet bearing connection identifier 1
comes in from H1, it is to be sent to router C and given connection identifier 1. Similarly, the
first entry at C routes the packet to E, also with connection identifier 1.
Now let us consider what happens if H3 also wants to establish a connection to H2. It chooses
connection identifier 1 (because it is initiating the connection and this is its only connection)
and tells the network to establish the virtual circuit.
This leads to the second row in the tables. Note that we have a conflict here because although
A can easily distinguish connection 1 packets from H1 from connection 1 packets from H3, C
cannot do this. For this reason, A assigns a different connection identifier to the outgoing
traffic for the second connection. Avoiding conflicts of this kind is why routers need the ability
to replace connection identifiers in outgoing packets.
In some contexts, this process is called label switching. An example of a connection-oriented
network service is MPLS (MultiProtocol Label Switching).
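A minimal sketch of the virtual-circuit table behaviour described above is given below. The router names and identifiers mirror the example (A rewriting H3's identifier 1 to 2 on the outgoing line to C); the dictionary layout itself is an illustrative assumption.

```python
# Router A's VC table: (incoming port, incoming connection id) ->
#                      (outgoing port, outgoing connection id)
vc_table_A = {
    ("H1", 1): ("C", 1),   # H1's connection 1 keeps identifier 1 towards C
    ("H3", 1): ("C", 2),   # H3 also chose 1, so A rewrites it to 2 to avoid a clash at C
}

def forward(table, in_port, in_vc):
    """Look up the VC entry and return where the packet goes next."""
    out_port, out_vc = table[(in_port, in_vc)]
    return out_port, out_vc

print(forward(vc_table_A, "H1", 1))   # ('C', 1)
print(forward(vc_table_A, "H3", 1))   # ('C', 2) -- the label has been swapped
```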
Routing Algorithms
The main function of NL (Network Layer) is routing packets from the source machine to the
destination machine.
There are two processes inside the router:
a) One of them handles each packet as it arrives, looking up the outgoing line to use for it in
the routing table. This process is forwarding.
b) The other process is responsible for filling in and updating the routing tables. That is where
the routing algorithm comes into play. This process is routing.
Regardless of whether routes are chosen independently for each packet or only when new
connections are established, certain properties are desirable in a routing algorithm:
correctness, simplicity, robustness, stability, fairness, and optimality.
Routing algorithms can be grouped into two major classes:
1) nonadaptive (Static Routing)
2) adaptive. (Dynamic Routing)
Nonadaptive (static) algorithms do not base their routing decisions on measurements or estimates of the
current topology and traffic; the routes are computed in advance and loaded into the routers.
Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the
topology, and usually the traffic as well.
Adaptive algorithms differ in
1) Where they get their information (e.g., locally, from adjacent routers, or from all routers),
2) When they change the routes (e.g., every ∆T sec, when the load changes or when the
topology changes), and
3) What metric is used for optimization (e.g., distance, number of hops, or estimated transit
time).
This procedure is called dynamic routing.
The figure below shows the distance vector computation at router J. J has received a vector from each of
its neighbours A, I, H, and K giving that neighbour's delay to every destination, and J has measured its
own delays to those neighbours as 8, 10, 12, and 6, respectively. For each destination, J adds its delay
to a neighbour to that neighbour's advertised delay and keeps the minimum.

To     A    I    H    K     New estimated delay from J    Line
A      0   24   20   21      8                            A
B     12   36   31   28     20                            A
C     25   18   19   36     28                            I
D     40   27    8   24     20                            H
E     14    7   30   22     17                            I
F     23   20   19   40     30                            I
G     18   31    6   31     18                            H
H     17   20    0   19     12                            H
I     21    0   14   22     10                            I
J      9   11    7   10      0                            –
K     24   22   22    0      6                            K
L     29   33    9    9     15                            K
Figure 5-9. (a) A network. (b) Input from A, I, H, K, and the new routing table for J.
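The new routing table can be reproduced directly from the received vectors. The sketch below is a minimal Python rendering of that computation, assuming the delays shown in the figure; it is illustrative only, not a routing protocol implementation.

```python
# Vectors received by J from its four neighbours (each neighbour's delay to every destination).
vectors = {
    "A": {"A": 0,  "B": 12, "C": 25, "D": 40, "E": 14, "F": 23,
          "G": 18, "H": 17, "I": 21, "J": 9,  "K": 24, "L": 29},
    "I": {"A": 24, "B": 36, "C": 18, "D": 27, "E": 7,  "F": 20,
          "G": 31, "H": 20, "I": 0,  "J": 11, "K": 22, "L": 33},
    "H": {"A": 20, "B": 31, "C": 19, "D": 8,  "E": 30, "F": 19,
          "G": 6,  "H": 0,  "I": 14, "J": 7,  "K": 22, "L": 9},
    "K": {"A": 21, "B": 28, "C": 36, "D": 24, "E": 22, "F": 40,
          "G": 31, "H": 19, "I": 22, "J": 10, "K": 0,  "L": 9},
}
# J's own measured delay to each neighbour.
delay_to_neighbour = {"A": 8, "I": 10, "H": 12, "K": 6}

# For every destination, pick the neighbour that minimises
# (delay to that neighbour) + (that neighbour's advertised delay to the destination).
new_table = {}
for dest in vectors["A"]:
    if dest == "J":
        continue
    line, cost = min(
        ((n, delay_to_neighbour[n] + vectors[n][dest]) for n in vectors),
        key=lambda pair: pair[1],
    )
    new_table[dest] = (cost, line)

for dest, (cost, line) in sorted(new_table.items()):
    print(f"to {dest}: estimated delay {cost} via line {line}")
```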
To see how quickly good news propagates with distance vector routing, consider the five-node linear
network of Fig. 5-10, where the delay metric is the number of hops. Suppose A is down initially and all
the other routers know this. When A comes up, the other routers learn about it via the vector exchanges.
For simplicity, we will assume that there is a gigantic gong somewhere that is struck periodically to
initiate a vector exchange at all routers simultaneously. At the time of the first exchange, B learns
that its left-hand neighbor has zero delay to A. B now makes an entry in its routing table indicating
that A is one hop away to the left. All the other routers still think that A is down. At this point, the
routing table entries for A are as shown in the second row of Fig. 5-10(a). On the next exchange, C
learns that B has a path of length 1 to A, so it updates its routing table to indicate a path of length
2, but D and E do not hear the good news until later. Clearly, the good news is spreading at the rate of
one hop per exchange. In a network whose longest path is of length N hops, within N exchanges everyone
will know about newly revived links and routers.
Now let us consider the situation of Fig. 5-10(b), in which all the links and routers are initially up.
Routers B, C, D, and E have distances to A of 1, 2, 3, and 4 hops, respectively. Suddenly, either A goes
down or the link between A and B is cut (which is effectively the same thing from B's point of view).

At the first packet exchange, B does not hear anything from A. Fortunately, C says ''Do not worry; I have
a path to A of length 2.'' Little does B suspect that C's path runs through B itself. For all B knows, C
might have ten links all with separate paths to A of length 2. As a result, B thinks it can reach A via
C, with a path length of 3. D and E do not update their entries for A on the first exchange.

On the second exchange, C notices that each of its neighbors claims to have a path to A of length 3. It
picks one of them at random and makes its new distance to A 4, as shown in the third row of Fig. 5-10(b).
Subsequent exchanges produce the history shown in the rest of Fig. 5-10(b).

From this figure, it should be clear why bad news travels slowly: no router ever has a value more than
one higher than the minimum of all its neighbors. Gradually, all routers work their way up to infinity,
but the number of exchanges required depends on the numerical value used for infinity. For this reason,
it is wise to set infinity to the longest path plus 1. Not entirely surprisingly, this problem is known
as the count-to-infinity problem.
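As a small illustration of this behaviour, the sketch below replays the linear scenario in Python. The synchronous exchanges, the router names, and the choice of 16 to stand for infinity are all illustrative assumptions, not part of any real protocol implementation.

```python
# Count-to-infinity on a linear network A-B-C-D-E with hop count as the metric.
INFINITY = 16

# Distance to A as seen by B, C, D, E just before the A-B link is cut.
dist = {"B": 1, "C": 2, "D": 3, "E": 4}
neighbours = {"B": ["C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
# B's direct link to A is gone, so from now on every router relies on its neighbours.

for exchange in range(1, 7):
    advertised = dict(dist)          # the vectors sent at the start of this exchange
    for router, nbrs in neighbours.items():
        best_via_neighbour = min(advertised[n] for n in nbrs) + 1
        dist[router] = min(best_via_neighbour, INFINITY)
    print(f"after exchange {exchange}: {dist}")

# The distances creep upward (3,2,3,4 -> 3,4,3,4 -> 5,4,5,4 -> ...) instead of
# jumping straight to "unreachable": that is the count-to-infinity problem.
```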
Hierarchical Routing
As networks grow in size, the router routing tables grow proportionally. Not only is router memory
consumed by ever-increasing tables, but more CPU time is needed to scan them and more bandwidth is needed
to send status reports about them. At a certain point, the network may grow to the point where it is no
longer feasible for every router to have an entry for every other router, so the routing will have to be
done hierarchically, as it is in the telephone network.
When hierarchical routing is used, the routers are divided into what we will call regions. Each router
knows all the details about how to route packets to destinations within its own region but knows nothing
about the internal structure of other regions. When different networks are interconnected, it is natural
to regard each one as a separate region to free the routers in one network from having to know the
topological structure of the other ones. For huge networks, a two-level hierarchy may be insufficient; it
may be necessary to group the regions into clusters, the clusters into zones, the zones into groups, and
so on.
Figure 5-14 gives a quantitative example of routing in a two-level hierarchy with five regions. The full
routing table for router 1A has 17 entries, as shown in Fig. 5-14(b). When routing is done hierarchically,
as in Fig. 5-14(c), there are entries for all the local routers, as before, but all other regions are
condensed into a single entry, so all traffic for region 2 goes via the 1B-2A line, but the rest of the
remote traffic goes via the 1C-3B line. Hierarchical routing has reduced the table from 17 to 7 entries.
As the ratio of the number of regions to the number of routers per region grows, the savings in table
space increase.
(Fig. 5-14(a) shows the network itself: region 1 contains routers 1A, 1B, and 1C; region 2 contains
2A-2D; region 3 contains 3A and 3B; region 4 contains 4A-4C; and region 5 contains 5A-5E.)

Full table for 1A            Hierarchical table for 1A
Dest.   Line   Hops          Dest.   Line   Hops
1A      –      –             1A      –      –
1B      1B     1             1B      1B     1
1C      1C     1             1C      1C     1
2A      1B     2             2       1B     2
2B      1B     3             3       1C     2
2C      1B     3             4       1C     3
2D      1B     4             5       1C     4
3A      1C     3
3B      1C     2
4A      1C     3
4B      1C     4
4C      1C     4
5A      1C     4
5B      1C     5
5C      1B     5
5D      1C     6
5E      1C     5
Figure 5-14. Hierarchical routing.
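The saving comes from collapsing every remote region into a single entry. Below is a minimal sketch of the lookup, assuming 1A's condensed table above and using the leading digit of a destination's name as its region, which is just how this particular example is labelled, not a general rule.

```python
# Router 1A's hierarchical table: destination (or region) -> (outgoing line, hop count).
hierarchical_table_1A = {
    "1A": ("-", 0), "1B": ("1B", 1), "1C": ("1C", 1),   # local routers, one entry each
    "2":  ("1B", 2), "3": ("1C", 2),                    # one entry per remote region
    "4":  ("1C", 3), "5": ("1C", 4),
}

def lookup(dest):
    """Use the full name inside region 1, otherwise only the region digit."""
    key = dest if dest.startswith("1") else dest[0]
    return hierarchical_table_1A[key]

print(lookup("1C"))   # ('1C', 1) -- local destination, exact entry
print(lookup("5D"))   # ('1C', 4) -- any router in region 5 uses the region-5 entry
```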
Broadcast Routing
In some applications, hosts need to send messages to many
or all other hosts. Sending a packet to all destinations
simultaneously is called broadcasting. Various methods have
been proposed for doing it.
One broadcasting method that requires no special features from the network is for
the source to simply send a distinct packet to each destination.
A second method is multidestination routing, in which each packet contains either a list of destinations
or a bit map indicating the desired destinations. When a packet arrives at a router,
the router checks all the destinations to determine the set of output lines that will
be needed.
A third method is flooding. When implemented with a sequence number per source, flooding uses
links efficiently with a decision rule at routers that is relatively simple.
reverse path forwarding: When a broadcast packet arrives at a router, the router
checks to see if the packet arrived on the link that is normally used for sending
packets toward the source of the broadcast. If so, there is an excellent chance that
the broadcast packet itself followed the best route from the router.
Figure 5-15. Reverse path forwarding. (a) A network. (b) A sink tree.
(c) The tree built by reverse path forwarding.
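A minimal sketch of the per-packet check a router performs follows. The interface names and the argument standing in for the router's ordinary unicast route back to the source are illustrative assumptions.

```python
def handle_broadcast(arrival_interface, unicast_route_to_source, other_interfaces):
    """Forward the broadcast only if it arrived on the preferred path to the source."""
    if arrival_interface == unicast_route_to_source:
        # Very likely the first copy, which followed the best route: flood it onward.
        return list(other_interfaces)
    # Probably a duplicate that took a worse path: discard it.
    return []

# A packet from source S arrives on interface "if0", which is also how we reach S.
print(handle_broadcast("if0", "if0", ["if1", "if2"]))   # ['if1', 'if2']
# A later duplicate arrives on "if2": it is dropped.
print(handle_broadcast("if2", "if0", ["if0", "if1"]))   # []
```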
Multicast Routing
Some applications, such as a multiplayer game or live video of a sports event
streamed to many viewing locations, send packets to multiple receivers.
Sending a message to such a group is called multicasting, and the routing
algorithm used is called multicast routing. All multicasting schemes require some
way to create and destroy groups and to identify which routers are members of a
group.
As an example, consider the two groups, 1 and 2, in the network shown in Fig. 5-16(a). Some routers are
attached to hosts that belong to one or both of these groups, as indicated in the figure. A spanning tree
for the leftmost router is shown in Fig. 5-16(b). This tree can be used for broadcast but is overkill for
multicast, as can be seen from the two pruned versions that are shown next. In Fig. 5-16(c), all the links
that do not lead to hosts that are members of group 1 have been removed. The result is the multicast
spanning tree for the leftmost router to send to group 1. Packets are forwarded only along this spanning
tree, which is more efficient than the broadcast tree because there are 7 links instead of 10.
Fig. 5-16(d) shows the multicast spanning tree after pruning for group 2. It is efficient too, with only
five links this time. It also shows that different multicast groups have different spanning trees.
Figure 5-16. (a) A network. (b) A spanning tree for the leftmost router. (c) A multicast tree for group 1. (d) A
multicast tree for group 2.
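Conceptually, each multicast tree is obtained by pruning the broadcast spanning tree. The sketch below shows that pruning on a tiny assumed tree; the topology and group membership are illustrative, not the exact network of Fig. 5-16.

```python
def prune(tree, node, members):
    """Keep a subtree only if it contains at least one member of the group."""
    kept_children = [prune(tree, child, members) for child in tree.get(node, [])]
    kept_children = [c for c in kept_children if c is not None]
    if kept_children or node in members:
        return (node, kept_children)
    return None                      # no group member below: remove this branch

# Broadcast spanning tree rooted at the sending router "R", written as parent -> children.
tree = {"R": ["A", "B"], "A": ["C", "D"], "B": ["E"]}
print(prune(tree, "R", members={"D"}))       # only the R -> A -> D branch survives
print(prune(tree, "R", members={"E", "C"}))  # both sides are kept, D is pruned away
```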
Congestion: Congestion occurs in a network when the load on the network is greater than the capacity of the network.
Congestion Control Techniques or Policies: (VERY VERY IMPORTANT)
These techniques or policies are used to prevent congestion before it happens or to remove it after it happens.
They are of two types:
1)Open loop congestion control techniques
2)Closed loop congestion control techniques
Open-loop techniques prevent congestion before it happens. The policies are:
1) Retransmission policy: a packet or an acknowledgement is retransmitted when it is lost or damaged. Retransmission generally
adds load and can worsen congestion, but a good retransmission policy, such as the one used in TCP, reduces congestion.
2) Window policy: the Selective Repeat protocol has a better window policy than Go-Back-N. In Go-Back-N, if one frame is lost, all
subsequent frames are retransmitted even if they arrived successfully, whereas in Selective Repeat only the lost frame is retransmitted.
3) Acknowledgement policy: a good acknowledgement policy also prevents congestion. This is possible in two ways:
a) send one acknowledgement for several frames at a time with a single message;
b) send an acknowledgement only when the receiver has data of its own to send.
4) Discarding policy: a good discarding policy also prevents congestion. In audio transmission, for example, routers may discard
less sensitive or corrupted packets without affecting the quality of the sound.
Closed-loop techniques remove congestion after it happens. The policies are:
1) Back pressure
2) Choke packet
3) Implicit Signaling
4) Explicit Signaling
1) Back Pressure (BP):
Assume that data is flowing from a source to a destination through a chain of intermediate nodes
(source -> node 1 -> node 2 -> node 3 -> destination), and suppose node 3 becomes congested. Node 3 stops accepting data from its
immediate upstream node, so node 2 in turn becomes congested; node 2 does the same to node 1, and so on, until the pressure reaches
the source. Finally the source slows down its sending rate, which removes the congestion. The backpressure signal thus travels hop
by hop in the direction opposite to the data flow.
In backpressure, the nodes between the congested node and the source node are affected by the congestion.
2) Choke Packet:
Again assume that data is flowing from a source to a destination through a chain of intermediate nodes, and suppose node 3 becomes
congested. Node 3 sends a special packet, called a choke packet, directly back to the source node; node 2 and the other intermediate
nodes simply forward it without slowing down themselves. When the choke packet reaches the source, the source slows down its sending
rate to remove the congestion. (A hop-by-hop variant also exists, in which the choke packet takes effect at every node it passes.)
With choke packets, the nodes between the congested node and the source node are not affected by the congestion.
3) Implicit Signaling:
There is no direct communication between the congested node and the source. The source infers that there is congestion somewhere in
the network, for example when acknowledgements stop arriving, and slows down on its own.
4) Explicit Signaling:
Suppose node 3 is congested. It signals the congestion by setting a special bit in a packet that is already being sent, so the signal
travels along with the data rather than in a separate choke packet; the intermediate nodes simply forward the packet unchanged. When
the signal reaches the source, the source slows down its sending rate to remove the congestion.
As with choke packets, the nodes between the congested node and the source node are not affected by the signaling.
Quality of Service:
Four characteristics determine the quality of service of a flow. They are:
1) Reliability: the network should transfer the data and acknowledgements without loss.
2) Delay: applications like audio and video transmission require much lower delay than applications like file transfer and e-mail.
3) Jitter: the variation in packet delay is called jitter.
For example, if four packets depart at times 0, 1, 2, 3 and arrive at times 20, 21, 22, 23, the delays are 20-0, 21-1, 22-2, 23-3,
i.e. all equal to 20. Suppose instead the arrival times are 25, 28, 21, 29; the delays are then 25-0, 28-1, 21-2, 29-3, i.e. 25, 27,
19, 26. This variation in packet delay is the jitter (a small worked computation appears after this list).
Low jitter is required for audio and video applications.
4) Bandwidth: high bandwidth is required for applications like audio and video transmission, which must transfer millions of bits per
second to refresh the screen; low bandwidth is enough for e-mail.
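A small worked version of the jitter example from item 3 above, using the same departure times and the two arrival patterns from the text:

```python
departures = [0, 1, 2, 3]

def delays(arrivals):
    """Per-packet delay: arrival time minus departure time."""
    return [a - d for a, d in zip(arrivals, departures)]

print(delays([20, 21, 22, 23]))   # [20, 20, 20, 20] -- constant delay, no jitter
print(delays([25, 28, 21, 29]))   # [25, 27, 19, 26] -- varying delay, high jitter
```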
Leaky Bucket and Token Bucket Algorithms
Figure: (a) A leaky bucket with water. (b) A leaky bucket with packets.
The leaky bucket enforces a constant output rate (the average rate) regardless of the burstiness of the input.
• The host injects one packet per clock tick onto the network. This results in a uniform flow of packets, smoothing out bursts and
reducing congestion.
• Implementing the original leaky bucket algorithm is easy. The leaky bucket consists of a finite queue. When a packet arrives, if
there is room on the queue it is appended to the queue; otherwise, it is discarded. At every clock tick, one packet is transmitted
(unless the queue is empty).
• In contrast to the LB, the Token Bucket (TB) Algorithm allows the output rate to vary, depending on the size of the burst.
• In the TB algorithm, the bucket holds tokens. To transmit a packet, the host must capture and destroy one token.
• Tokens are generated by a clock at the rate of one token every t sec.
• Idle hosts can capture and save up tokens (up to the max. size of the bucket) in order to send larger bursts later.
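Below is a minimal sketch of both shaping algorithms described in the bullets above, assuming a finite packet queue for the leaky bucket and one token generated per clock tick for the token bucket; the class and parameter names are illustrative assumptions.

```python
from collections import deque

class LeakyBucket:
    """Finite queue drained at one packet per clock tick (constant output rate)."""
    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)      # room in the bucket: keep the packet
            return True
        return False                       # bucket full: the packet is discarded

    def tick(self):
        return self.queue.popleft() if self.queue else None   # send one packet per tick

class TokenBucket:
    """Tokens accumulate while idle, so a burst up to the bucket size may be sent."""
    def __init__(self, capacity, tokens_per_tick=1):
        self.tokens = 0
        self.capacity = capacity
        self.rate = tokens_per_tick

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self):
        if self.tokens >= 1:
            self.tokens -= 1               # capture and destroy one token
            return True
        return False                       # no token: the packet must wait

lb = LeakyBucket(capacity=2)
print([lb.arrive(p) for p in ("p1", "p2", "p3")])   # [True, True, False] -- p3 is dropped
print(lb.tick(), lb.tick(), lb.tick())              # p1 p2 None -- one packet per tick

# After 5 idle ticks, a token bucket of size 3 can emit a burst of 3 packets at once.
tb = TokenBucket(capacity=3)
for _ in range(5):
    tb.tick()
print([tb.try_send() for _ in range(4)])            # [True, True, True, False]
```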
Network Connecting Devices
2. Hub – A hub is basically a multiport repeater. A hub connects multiple wires coming from different branches,
for example, the connector in star topology which connects different stations. Hubs cannot filter data, so data
packets are sent to all connected devices. They do not have intelligence to find out best path for data packets
which leads to inefficiencies and wastage.
3. Bridge – A bridge operates at data link layer. A bridge is a repeater, with add on functionality of filtering content by
reading the MAC addresses of source and destination. It is also used for interconnecting two LANs working on the same
protocol. It has a single input and single output port.
4. Switch – A switch is a multi port bridge with a buffer and a design that can boost its efficiency(large number of ports
imply less traffic) and performance. A switch is a data link layer device. A switch can perform error checking before forwarding
data, which makes it very efficient: it does not forward packets that have errors and forwards good packets selectively to the
correct port only.
5. Routers – A router is a device like a switch that routes data packets based on their IP addresses. Router is mainly a
Network Layer device. Routers normally connect LANs and WANs together and have a dynamically updating routing table
based on which they make decisions on routing the data packets.
6. Gateway – A gateway, as the name suggests, is a passage to connect two networks together that may work upon
different networking models. They basically work as messenger agents that take data from one system, interpret it,
and transfer it to another system. Gateways are also called protocol converters and can operate at any layer of the network model.
Gateways are generally more complex than switch or router.
Tunneling:
A technique of internetworking called tunneling is used when the source and destination networks are of the same type but must be
connected through a network of a different type. For example, consider an Ethernet to be connected to another Ethernet through a WAN.
The task is to send an IP packet from host A on Ethernet 1 to host B on Ethernet 2 via the WAN.
Sequence of events:
1) Host A builds an IP packet addressed to B and sends it to the multiprotocol router that connects Ethernet 1 to the WAN.
2) That router encapsulates the IP packet inside the WAN's own packet or frame format, addressed to the multiprotocol router at the
far side of the WAN; the WAN acts as a tunnel and carries the packet without examining its contents.
3) The router at the far side removes the IP packet and delivers it to host B over Ethernet 2.
Fragmentation:
Different networks have different capacities for handling packets: a network with a small maximum packet size cannot carry large
packets unchanged. Dividing a large packet into smaller packets is called fragmentation. Two strategies are used for fragmentation.
1) Transparent fragmentation: the large packet is divided into small packets at the entry gateway of a network and reassembled
(merged) at the exit gateway of that same network. The same process is repeated in every small-packet network along the path, so the
fragmentation is invisible to the other networks.
2) Non-transparent fragmentation: the large packet is divided into small packets at the entry gateway of the first small-packet
network, and the fragments are reassembled only at the destination; intermediate gateways may fragment further but never reassemble.
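A minimal sketch of the splitting and recombining steps follows, assuming an 8-byte maximum packet size and ignoring the header fields (identifiers, offsets, flags) that real IP fragmentation carries.

```python
def fragment(payload: bytes, max_size: int):
    """Split the payload into pieces of at most max_size bytes, in order."""
    return [payload[i:i + max_size] for i in range(0, len(payload), max_size)]

def reassemble(fragments):
    """Merge the fragments back into the original payload (exit gateway or destination)."""
    return b"".join(fragments)

packet = b"a large packet that the next network cannot carry whole"
pieces = fragment(packet, 8)
print(len(pieces))                       # 7 fragments of at most 8 bytes each
print(reassemble(pieces) == packet)      # True -- reassembly restores the original packet
```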
ARP:
The Address Resolution Protocol is used for mapping a higher-level protocol address (an IP address) to a physical network address.
It is described in RFC 826.
ARP relates an IP address to a physical address. On a typical physical network such as a LAN, each device on a link is identified by
a physical address, usually printed on the network interface card (NIC). A physical address can change easily, for example when the
NIC on a particular machine fails and is replaced.
When one host wants to communicate with another host on the network, it needs to resolve the IP address of each host to the host's
hardware address.
This process is as follows:
When a host tries to interact with another host, an ARP request is initiated. If the IP address is on the local network, the source
host checks its ARP cache to find out the hardware address of the destination computer.
If the corresponding hardware address is not found in the cache, ARP broadcasts the request to all the local hosts.
All hosts receive the broadcast and check their own IP address. If no match is discovered, the request is ignored.
The destination host that finds the matching IP address sends an ARP reply to the source host along with its hardware
address, thus establishing the communication. The ARP cache is then updated with the hardware address of the destination
host.
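A minimal sketch of the resolution steps listed above is shown below, assuming an in-memory cache and a stand-in function for the broadcast; the addresses and function names are illustrative assumptions, not a real ARP implementation.

```python
arp_cache = {"192.168.1.7": "aa:bb:cc:dd:ee:01"}    # IP address -> hardware address

def broadcast_arp_request(ip):
    """Stand-in for the broadcast: only the host owning ip answers with its MAC."""
    local_hosts = {"192.168.1.9": "aa:bb:cc:dd:ee:09"}
    return local_hosts.get(ip)                       # None if nobody matches

def resolve(ip):
    if ip in arp_cache:                              # 1. check the ARP cache first
        return arp_cache[ip]
    mac = broadcast_arp_request(ip)                  # 2. otherwise broadcast an ARP request
    if mac is not None:
        arp_cache[ip] = mac                          # 3. cache the reply for next time
    return mac

print(resolve("192.168.1.9"))    # learned via broadcast, then cached
print(arp_cache)
```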
Subnetting:
If an organization has a large network, it is better to divide the network. The process of dividing a large network into smaller
networks is called subnetting.
Advantages of subnetting:
1) Reduces the network traffic
2) Increases the network performance
3) Increases the security in the network
4) Maintenance of the network is easy.
Allowing a single network address to span multiple physical networks is called subnet addressing.
For the subnet addressing scheme to work, one must know which part of the IP address is used as the subnet address. To accomplish
this, a subnet mask is used.
The network administrator creates a 32-bit subnet mask consisting of 1s and 0s: the 1s indicate the network (or subnet) portion of
the address and the 0s indicate the host portion.
An IP address consists of four 8-bit octets (32 bits). The default masks for the classful addresses are:

Class   Format                                    Default mask (binary)                  Dotted decimal
A       networkid.hostid.hostid.hostid            11111111.00000000.00000000.00000000    255.0.0.0
B       networkid.networkid.hostid.hostid         11111111.11111111.00000000.00000000    255.255.0.0
C       networkid.networkid.networkid.hostid      11111111.11111111.11111111.00000000    255.255.255.0
Masking:
b) IP address: 200.34.22.156
   Mask:       255.255.255.240
   ---------------------------
   Subnet address: 200.34.22.144

The last octet of the subnet address is the bitwise AND of 156 and 240 (the first three octets are ANDed with 255, so they are
unchanged):
   156:       1001 1100
   240:       1111 0000
   --------------------
   156 & 240: 1001 0000 = 144
c) IP address: 125.35.12.57
   Mask:       255.255.0.0
   ---------------------------
   Subnet address: 125.35.0.0
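The same masking arithmetic can be checked with Python's standard ipaddress module; the short sketch below reuses the addresses from examples (b) and (c) above.

```python
import ipaddress

def subnet_address(ip, mask):
    """Bitwise-AND the address with the mask and return the resulting subnet address."""
    net = ipaddress.ip_network(f"{ip}/{mask}", strict=False)
    return str(net.network_address)

print(subnet_address("200.34.22.156", "255.255.255.240"))   # 200.34.22.144
print(subnet_address("125.35.12.57", "255.255.0.0"))        # 125.35.0.0
```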