CN Unit IV
NETWORK LAYER
The network layer is concerned with getting packets from the source all the way to the
destination. Getting to the destination may require making many hops at intermediate routers
along the way. The main function of the network layer is routing packets from the source
machine to the destination machine.
Routers and the transmission lines connecting them are used as follows. A host with a packet to send transmits it to the nearest
router, either on its own LAN or over a point-to-point link to the carrier. The packet is stored
there until it has fully arrived so the checksum can be verified. Then it is forwarded to the
next router along the path until it reaches the destination host, where it is delivered. This
mechanism is store-and-forward packet switching.
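A minimal Python sketch of store-and-forward packet switching, using an invented path of routers and a CRC32 checksum purely for illustration: each router holds the complete packet, verifies the checksum, and only then forwards it toward the destination.

import zlib

def make_packet(payload: bytes) -> bytes:
    # Append a CRC32 checksum so each router can verify the packet arrived intact.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def checksum_ok(packet: bytes) -> bool:
    payload, received = packet[:-4], int.from_bytes(packet[-4:], "big")
    return zlib.crc32(payload) == received

def store_and_forward(packet: bytes, path: list[str]) -> None:
    # 'path' is the sequence of routers between the source and destination hosts.
    for router in path:
        # The packet is stored until it has fully arrived (here it is already a
        # complete byte string), the checksum is verified, then it is forwarded.
        if not checksum_ok(packet):
            print(router, ": checksum failed, packet dropped")
            return
        print(router, ": packet verified, forwarded")
    print("delivered to the destination host")

store_and_forward(make_packet(b"hello"), ["R1", "R2", "R3"])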
Services Provided to the Transport Layer
The network layer provides services to the transport layer at the network layer/transport layer
interface.
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the
routers present.
3. The network addresses made available to the transport layer should use a uniform
numbering plan, even across LANs and WANs.
ROUTING ALGORITHMS
The main function of the network layer is routing packets from the source machine to the
destination machine.The routing algorithm is that part of the network layer software
responsible for deciding which output line an incoming packet should be transmitted on.
Figure: The first five steps used in computing the shortest path from A to D. The arrows
indicate the working node.
Several algorithms for computing the shortest path between two nodes of a graph are known.
This one is due to Dijkstra (1959). Each node is labeled (in parentheses) with its distance
from the source node along the best known path. Initially, no paths are known, so all nodes
are labeled with infinity. As the algorithm proceeds and paths are found, the labels may
change, reflecting better paths.
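Below is a short Python sketch of this labeling process; the graph and its edge weights are invented for illustration and are not taken from the figure. Each node's label starts at infinity and shrinks whenever a shorter path to it is discovered.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> {neighbor: link cost}
    dist = {node: float("inf") for node in graph}   # every node starts at infinity
    prev = {node: None for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)                  # u becomes the working node
        if d > dist[u]:
            continue                                # stale heap entry, skip it
        for v, w in graph[u].items():
            if d + w < dist[v]:                     # a better path to v was found
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

graph = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "C": 7, "E": 2},
    "C": {"B": 7, "D": 3},
    "D": {"C": 3, "H": 2},
    "E": {"B": 2, "F": 2, "G": 1},
    "F": {"E": 2, "H": 2},
    "G": {"A": 6, "E": 1, "H": 4},
    "H": {"D": 2, "F": 2, "G": 4},
}
dist, prev = dijkstra(graph, "A")
print(dist["D"])    # length of the best known path from A to D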
Flooding
Flooding is a static algorithm in which every incoming packet is sent out on every outgoing
line except the one it arrived on. Flooding obviously generates vast numbers of duplicate
packets. One technique for damming the flood is to keep track of which packets have already
been flooded, to avoid sending them out a second time. One way to achieve this is to have the
source router put a sequence number in each packet it receives from its hosts.
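The following Python sketch shows flooding with this kind of duplicate suppression; the router names and packet format are invented. Each router remembers the (source, sequence number) pairs it has already flooded and drops any later copies.

class Router:
    def __init__(self, name):
        self.name = name
        self.neighbors = {}        # neighbor name -> Router object
        self.seen = set()          # (source, sequence number) pairs already flooded

    def connect(self, other):
        self.neighbors[other.name] = other
        other.neighbors[self.name] = self

    def receive(self, packet, came_from):
        key = (packet["source"], packet["seq"])
        if key in self.seen:
            return                 # already flooded this packet: drop the duplicate
        self.seen.add(key)
        print(self.name, "floods", key)
        # Send the packet out on every line except the one it arrived on.
        for name, neighbor in self.neighbors.items():
            if name != came_from:
                neighbor.receive(packet, came_from=self.name)

a, b, c, d = Router("A"), Router("B"), Router("C"), Router("D")
a.connect(b); b.connect(c); c.connect(d); d.connect(a)    # a ring, so loops exist
a.receive({"source": "A", "seq": 1, "data": "hello"}, came_from=None)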
A variation of flooding that is slightly more practical is selective flooding. In this algorithm
the routers do not send every incoming packet out on every line, but only on those lines that
are going approximately in the right direction.
Figure: (a) A subnet. (b) Input from A, I, H, K, and the new routing table for J.
Hierarchical Routing
As networks grow in size, the routers' routing tables grow proportionally. As a result, more
router memory is needed to store them, more CPU time is needed to scan them, and more
bandwidth is needed to send status reports about them.
When hierarchical routing is used, the routers are divided into regions, with each router
knowing all the details about how to route packets to destinations within its own region, but
knowing nothing about the internal structure of other regions.
Figure: Hierarchical routing.
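A small Python sketch, with invented regions and router names, shows why the tables shrink: router 1A needs one entry per router in its own region but only a single entry for each remote region.

routers_by_region = {
    "region 1": ["1A", "1B", "1C"],
    "region 2": ["2A", "2B", "2C", "2D"],
    "region 3": ["3A", "3B"],
}

def flat_table(my_router):
    # Non-hierarchical routing: one entry for every other router in the network.
    return {r: "some line" for region in routers_by_region.values()
            for r in region if r != my_router}

def hierarchical_table(my_router, my_region):
    # One entry per router in the local region, plus one entry per remote region.
    table = {r: "local line" for r in routers_by_region[my_region] if r != my_router}
    for region in routers_by_region:
        if region != my_region:
            table[region] = "line toward that region"
    return table

print(len(flat_table("1A")))                       # 8 entries: every other router
print(len(hierarchical_table("1A", "region 1")))   # 4 entries: 2 local + 2 regions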
Broadcast Routing
In some applications, hosts need to send messages to many or all other hosts. For example, a
service distributing weather reports, stock market updates, or live radio programs might work
best by broadcasting to all machines and letting those that are interested read the data.
Sending a packet to all destinations simultaneously is called broadcasting. Several methods
are used.
One broadcasting method that requires no special features from the subnet is for the source to
simply send a distinct packet to each destination. Flooding is another obvious method. A third
algorithm is multidestination routing. If this method is used, each packet contains either a
list of destinations or a bit map indicating the desired destinations. When a packet arrives at a
router, the router checks all the destinations to determine the set of output lines that will be
needed. A fourth broadcast algorithm makes explicit use of the sink tree for the router
initiating the broadcast—or any other convenient spanning tree for that matter. A spanning
tree is a subset of the subnet that includes all the routers but contains no loops.
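As an illustration of the multidestination routing described above, the sketch below (with an invented forwarding table) groups the destinations listed in a packet by output line and sends one copy per line, each carrying only the destinations reached over that line.

from collections import defaultdict

# Hypothetical unicast forwarding table for this router: destination -> output line.
forwarding = {"B": "line 1", "C": "line 1", "D": "line 2", "E": "line 3"}

def multidestination_forward(destinations):
    copies = defaultdict(list)
    for dest in destinations:
        copies[forwarding[dest]].append(dest)   # group destinations by output line
    for line, dests in copies.items():
        print("send one copy on", line, "listing destinations", dests)

multidestination_forward(["B", "C", "E"])
# one copy on line 1 for B and C, and one copy on line 3 for E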
Our last broadcast algorithm, called reverse path forwarding, is remarkably simple once it
has been pointed out. When a broadcast packet arrives at a router, the router checks to see if
the packet arrived on the line that is normally used for sending packets to the source of the
broadcast. If so, there is an excellent chance that the broadcast packet itself followed the best
route from the router and is therefore the first copy to arrive at the router. This being the case,
the router forwards copies of it onto all lines except the one it arrived on. If, however, the
broadcast packet arrived on a line other than the preferred one for reaching the source, the
packet is discarded as a likely duplicate.
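The check at the heart of reverse path forwarding can be sketched as follows (the unicast table is invented for illustration): a broadcast packet is forwarded only if it arrived on the line this router would normally use to reach the broadcast source.

# This router's normal unicast forwarding table: destination -> preferred line.
unicast_table = {"S": "line 0", "X": "line 1", "Y": "line 2"}

def handle_broadcast(source, arrival_line, all_lines):
    # Accept only if the packet came in on the reverse path toward its source;
    # otherwise it is probably a duplicate and is discarded.
    if arrival_line != unicast_table[source]:
        print("discard: not the preferred line toward", source)
        return
    for line in all_lines:
        if line != arrival_line:
            print("forward a copy on", line)

handle_broadcast("S", "line 0", ["line 0", "line 1", "line 2"])   # accepted, flooded
handle_broadcast("S", "line 2", ["line 0", "line 1", "line 2"])   # dropped as duplicate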
Multicast Routing
Sending a message to a well-defined group of hosts is called multicasting, and its routing
algorithm is called multicast routing.
Multicasting requires group management. Some way is needed to create and destroy groups,
and to allow processes to join and leave groups. To do multicast routing, each router
computes a spanning tree covering all other routers.
Figure: (a) A network. (b) A spanning tree for the leftmost router. (c) A multicast tree for
group 1. (d) A multicast tree for group 2.
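One common way to obtain a multicast tree from the spanning tree is to prune away branches that lead to no group members. The Python sketch below does this for an invented tree and invented group membership.

# Spanning tree for the sending router, given as node -> list of child nodes.
spanning_tree = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}
group_members = {"E", "F"}     # routers with hosts belonging to the multicast group

def prune(node):
    # Keep a subtree only if it contains at least one group member.
    kept = {}
    for child in spanning_tree[node]:
        sub = prune(child)
        if sub is not None:
            kept.update(sub)
    if kept or node in group_members:
        pruned = dict(kept)
        pruned[node] = [c for c in spanning_tree[node] if c in kept]
        return pruned
    return None

print(prune("A"))     # keeps A-B-E and A-C-F; the branch to D is pruned away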
********************************
TRANSPORT LAYER
The transport layer is not just another layer. It is the heart of the whole protocol hierarchy.
Its task is to provide reliable, cost-effective data transport from the source machine to the
destination machine, independently of the physical network or networks currently in use.
The unit of data exchanged by the transport layer is the TPDU (Transport Protocol Data
Unit); TPDUs are carried inside network layer packets, which in turn are carried inside
frames.
Getting back to our client-server example, the client's CONNECT call causes a
CONNECTION REQUEST TPDU to be sent to the server. When it arrives, the transport
entity checks to see that the server is blocked on a LISTEN (i.e., is interested in handling
requests). It then unblocks the server and sends a CONNECTION ACCEPTED TPDU
back to the client. When this TPDU arrives, the client is unblocked and the connection is
established.
Data can now be exchanged using the SEND and RECEIVE primitives. In the simplest
form, either party can do a (blocking) RECEIVE to wait for the other party to do a SEND.
When the TPDU arrives, the receiver is unblocked. It can then process the TPDU and send
a reply. As long as both sides can keep track of whose turn it is to send, this scheme works
fine.
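A minimal Python sketch of this exchange using the Berkeley socket interface (the port number and messages are invented): accept() plays the role of the blocked LISTEN, connect() issues the CONNECTION REQUEST, and sendall()/recv() stand in for SEND and RECEIVE.

import socket, threading

ready = threading.Event()

def server():
    with socket.socket() as s:
        s.bind(("127.0.0.1", 6000))
        s.listen()                      # LISTEN: wait for a connection request
        ready.set()
        conn, _ = s.accept()            # unblocked by the CONNECTION REQUEST
        with conn:
            request = conn.recv(1024)   # RECEIVE: block until the client's SEND
            conn.sendall(b"reply to " + request)    # SEND a reply

threading.Thread(target=server, daemon=True).start()
ready.wait()                            # make sure the server is listening first

with socket.socket() as c:
    c.connect(("127.0.0.1", 6000))      # CONNECT: CONNECTION ACCEPTED unblocks us
    c.sendall(b"request")               # SEND
    print(c.recv(1024))                 # RECEIVE the reply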
************************
Addressing
When an application (e.g., a user) process wishes to set up a connection to a remote
application process, it must specify which one to connect to. (Connectionless transport has
the same problem: To whom should each message be sent?) The method normally used is
to define transport addresses to which processes can listen for connection requests. In the
Internet, these end points are called ports. In ATM networks, they are called AAL-SAPs.
We will use the generic term TSAP (Transport Service Access Point). The analogous
end points in the network layer (i.e., network layer addresses) are then called NSAPs. IP
addresses are examples of NSAPs.
A possible scenario for a transport connection is as follows.
1. A time-of-day server process on host 2 attaches itself to TSAP 1522 to wait for an
incoming call. How a process attaches itself to a TSAP is outside the networking
model and depends entirely on the local operating system. A call such as our
LISTEN might be used, for example.
2. An application process on host 1 wants to find out the time-of-day, so it issues a
CONNECT request specifying TSAP 1208 as the source and TSAP 1522 as the
destination. This action ultimately results in a transport connection being
established between the application process on host 1 and the time-of-day server on host 2.
3. The application process then sends over a request for the time.
4. The time server process responds with the current time.
5. The transport connection is then released.
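The same scenario can be sketched in Python, treating TCP port numbers as the TSAPs; port 1522 is taken from the text above, though in practice any free port would do.

import socket, threading, time

def time_of_day_server():
    with socket.socket() as s:
        s.bind(("127.0.0.1", 1522))                 # step 1: attach to TSAP 1522
        s.listen()
        conn, _ = s.accept()                        # wait for an incoming call
        with conn:
            conn.recv(1024)                         # step 3: the request for the time
            conn.sendall(time.ctime().encode())     # step 4: reply with the time
                                                    # step 5: closing releases it

threading.Thread(target=time_of_day_server, daemon=True).start()
time.sleep(0.2)                       # crude wait until the server is listening

with socket.socket() as c:
    c.connect(("127.0.0.1", 1522))    # step 2: CONNECT to TSAP 1522
    c.sendall(b"what time is it?")    # step 3: send the request
    print(c.recv(1024).decode())      # step 4: receive the current time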
Figure: (a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large
circular buffer per connection.
Multiplexing
Multiplexing several conversations onto connections, virtual circuits, and physical links
plays a role in several layers of the network architecture. In the transport layer the need for
multiplexing can arise in a number of ways. For example, if only one network address is
available on a host, all transport connections on that machine have to use it. When a TPDU
comes in, some way is needed to tell which process to give it to. This situation is called
upward multiplexing.
Multiplexing can also be useful in the transport layer for another reason. Suppose, for
example, that a subnet uses virtual circuits internally and imposes a maximum data rate on
each one. If a user needs more bandwidth than one virtual circuit can provide, a way out is
to open multiple network connections and distribute the traffic among them on a round-robin
basis. This modus operandi is called downward multiplexing.
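A minimal sketch of downward multiplexing, with placeholder connection objects: TPDUs from a single transport connection are spread over several network connections in round-robin order to obtain more aggregate bandwidth.

import itertools

class NetworkConnection:
    def __init__(self, ident):
        self.ident = ident
    def send(self, tpdu):
        print("virtual circuit", self.ident, "carries", tpdu)

circuits = [NetworkConnection(i) for i in range(3)]   # three rate-limited circuits
round_robin = itertools.cycle(circuits)

def send_downward(tpdu):
    next(round_robin).send(tpdu)      # pick the next virtual circuit in rotation

for n in range(6):
    send_downward("TPDU %d" % n)
# TPDU 0 -> circuit 0, TPDU 1 -> circuit 1, TPDU 2 -> circuit 2, TPDU 3 -> circuit 0, ...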
Crash Recovery
If hosts and routers are subject to crashes, recovery from these crashes becomes an issue. If
the transport entity is entirely within the hosts, recovery from network and router crashes is
straightforward. If the network layer provides datagram service, the transport entities
expect lost TPDUs all the time and know how to cope with them. If the network layer
provides connection-oriented service, then loss of a virtual circuit is handled by
establishing a new one and then probing the remote transport entity to ask it which TPDUs
it has received and which ones it has not received. The latter ones can be retransmitted.
A more troublesome problem is how to recover from host crashes. In particular, it may be
desirable for clients to be able to continue working when servers crash and then quickly
reboot. To illustrate the difficulty, let us assume that one host, the client, is sending a long
file to another host, the file server, using a simple stop-and-wait protocol. The transport
layer on the server simply passes the incoming TPDUs to the transport user, one by one.
Partway through the transmission, the server crashes. When it comes back up, its tables are
reinitialized, so it no longer knows precisely where it was.
In an attempt to recover its previous status, the server might send a broadcast TPDU to all
other hosts, announcing that it had just crashed and requesting that its clients inform it of
the status of all open connections. Each client can be in one of two states: one TPDU
outstanding, S1, or no TPDUs outstanding, S0. Based on only this state information, the
client must decide whether to retransmit the most recent TPDU.
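A small illustrative sketch of the client-side bookkeeping (the state names S0 and S1 come from the text; everything else is invented): the client records whether a TPDU is outstanding and uses only that state to decide whether to retransmit after a crash announcement.

class Client:
    def __init__(self):
        self.state = "S0"          # S0: no TPDU outstanding
        self.last_tpdu = None

    def send(self, tpdu):
        self.last_tpdu = tpdu
        self.state = "S1"          # S1: one TPDU outstanding, waiting for the ack

    def ack_received(self):
        self.state = "S0"

    def server_crashed(self):
        # One possible strategy: retransmit only if a TPDU was outstanding.
        if self.state == "S1":
            print("retransmitting", self.last_tpdu)
        else:
            print("nothing outstanding, carry on")

c = Client()
c.send("TPDU 7")
c.server_crashed()                 # retransmits TPDU 7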
*****************************************