
UNIT 6: IPQOS

Congestion:

When too many packets are present in (a part of) a network, packet delay and loss rise and performance degrades. This situation is called congestion. In other words, congestion may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle). The network and transport layers share the responsibility for handling congestion. Since congestion occurs within the network, it is the network layer that directly experiences it and must ultimately determine what to do with the excess packets. However, the most effective way to control congestion is to reduce the load that the transport layer is placing on the network. Congestion control refers to the mechanisms and techniques used to control congestion and keep the load below the capacity.

Effects of Congestion:

• As delay increases, performance decreases.


• If delay increases, retransmissions occur, making the situation even worse



Congestion Control:

Congestion, in the context of data networking and queuing theory, is a network state in which a node or link carries more data than it can handle, reducing the quality of network service. Typical effects include queuing delay, frame or packet loss, and the blocking of new connections.

Congestion in a network or internetwork occurs because routers and switches have queues – buffers that
hold the packets before and after processing.

A router, for example, has an input queue and an output queue for each interface. When a packet arrives at the incoming interface, it undergoes three steps before departing:

• The packet is put at the end of the input queue while it waits to be checked.
• The processing module of the router removes the packet from the input queue once it reaches the front, and uses the routing table and the destination address to find the route.
• The packet is put in the appropriate output queue, where it waits its turn to be sent.
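
The three steps can be pictured as a small forwarding loop. The following Python sketch is purely illustrative; the queue structures, interface names, and routing-table entries are assumptions made for the example, not details of any particular router:

    from collections import deque

    # Illustrative routing table: destination prefix -> output interface.
    ROUTING_TABLE = {"10.0.1.0/24": "eth1", "10.0.2.0/24": "eth2"}

    input_queue = deque()                        # packets waiting to be checked
    output_queues = {"eth1": deque(), "eth2": deque()}

    def enqueue_arrival(packet):
        """Step 1: an arriving packet joins the end of the input queue."""
        input_queue.append(packet)

    def process_one():
        """Steps 2 and 3: route lookup, then move the packet to the
        appropriate output queue to wait its turn to be sent."""
        if input_queue:
            packet = input_queue.popleft()       # packet reached the front
            out_if = ROUTING_TABLE.get(packet["dest_prefix"])
            if out_if is not None:
                output_queues[out_if].append(packet)

    enqueue_arrival({"dest_prefix": "10.0.1.0/24", "payload": b"hello"})
    process_one()
    print(output_queues["eth1"])                 # packet now awaits transmission

Congestion arises precisely when these queues grow faster than they can be drained.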

General Principles of Congestion Control:

Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened.

The General Principles of Congestion Control are as follows:

Open Loop Principle: solve the problem through good design

• attempt to prevent congestion from happening
• once the system is running, no corrections are made

Closed Loop Principle: based on the concept of a feedback loop

• monitor the system to detect congestion
• pass the information to places where action can be taken
• adjust system operation to correct the problem

Congestion Control Policies:

Congestion control mechanisms can be divided into two broad categories:

• Open loop congestion control
• Closed loop congestion control



Open Loop Congestion Control

In open-loop congestion control, policies are applied to prevent congestion before it happens. In these mechanisms, congestion control is handled by either the source or the destination. Policies that can prevent congestion include the following.

Retransmission Policy:

This policy governs how packets are retransmitted. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted, and this retransmission may increase the congestion in the network. To prevent this, retransmission timers must be designed so that they avoid aggravating congestion while still keeping efficiency high.
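
As a concrete illustration, the sketch below shows an adaptive retransmission timeout in the spirit of TCP's classic estimator. The constants (alpha, beta, the factor 4) follow the well-known Jacobson algorithm and are an assumption; the notes themselves do not prescribe a formula:

    # Smoothed RTT estimator; a timer far below rto() retransmits too
    # eagerly and adds load, one far above it reacts slowly to real loss.
    class RtoEstimator:
        def __init__(self, first_rtt):
            self.srtt = first_rtt            # smoothed round-trip time
            self.rttvar = first_rtt / 2      # round-trip time variation

        def update(self, sample_rtt):
            alpha, beta = 0.125, 0.25
            self.rttvar = (1 - beta) * self.rttvar + beta * abs(self.srtt - sample_rtt)
            self.srtt = (1 - alpha) * self.srtt + alpha * sample_rtt

        def rto(self):
            return self.srtt + 4 * self.rttvar

    est = RtoEstimator(first_rtt=0.100)      # seconds
    est.update(0.120)
    print(round(est.rto(), 3))               # timeout adapts to measured RTTs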

Window Policy

The type of window used at the sender side may also affect congestion. In a Go-Back-N window, several packets are resent even though some of them may have been received successfully at the receiver side. This duplication may increase congestion in the network and make it worse.

Therefore, a Selective Repeat window should be adopted, as it resends only the specific packets that may have been lost or corrupted.
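
A toy scenario makes the difference concrete (the packet numbers and the single loss below are invented for illustration):

    # Packets 0..7 are outstanding and only packet 2 is lost.
    outstanding = list(range(8))
    lost = 2

    go_back_n_resend = [seq for seq in outstanding if seq >= lost]
    selective_repeat_resend = [lost]

    print(go_back_n_resend)          # [2, 3, 4, 5, 6, 7] -- duplicates 3..7
    print(selective_repeat_resend)   # [2] -- only the missing packet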

Acknowledgement Policy

Since acknowledgments are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent congestion related to acknowledgments.

The receiver can send one cumulative acknowledgment for N packets rather than acknowledging each packet separately, and it can send an acknowledgment only when it has data to send or when a timer expires.
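
A minimal receiver-side sketch of such a policy follows; the class name and the threshold N = 4 are illustrative assumptions:

    class AckPolicy:
        def __init__(self, n=4):
            self.n = n
            self.unacked = 0                 # packets received but not yet ACKed

        def on_packet_received(self, have_data_to_send, timer_expired):
            self.unacked += 1
            if self.unacked >= self.n or have_data_to_send or timer_expired:
                self.unacked = 0
                return True                  # send one cumulative ACK now
            return False                     # stay silent, saving ACK traffic

    policy = AckPolicy()
    acks = [policy.on_packet_received(False, False) for _ in range(8)]
    print(acks)   # [False, False, False, True, False, False, False, True]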

Discarding Policy



A good discarding policy allows routers to prevent congestion by discarding corrupted or less sensitive packets while still maintaining the quality of the message.

For example, in the transmission of an audio stream, routers can discard less sensitive packets to prevent congestion while still preserving the quality of the audio.

Admission Policy

The admission policy is a mechanism used to prevent congestion. Switches in a flow should first check the resource requirements of a flow before admitting it to the network. If there is congestion in the network, or a chance of future congestion, a router should deny establishing a virtual-circuit connection to prevent further congestion.
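
A minimal admission-control check might look like the sketch below; the capacity figures are invented for the example:

    class Link:
        def __init__(self, capacity_mbps):
            self.capacity = capacity_mbps
            self.reserved = 0.0

        def admit(self, flow_mbps):
            if self.reserved + flow_mbps > self.capacity:
                return False                  # deny the virtual connection
            self.reserved += flow_mbps        # reserve resources for the flow
            return True

    link = Link(capacity_mbps=100)
    print(link.admit(60))   # True  -- the flow fits
    print(link.admit(60))   # False -- admitting it would risk congestion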

Closed Loop Congestion Control

Closed-loop congestion control mechanisms try to reduce the effects of congestion after it happens.

Back Pressure:

Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested and in turn reject data from the nodes above them. Backpressure is thus a node-to-node congestion control technique that propagates in the direction opposite to the flow of data. It can be applied only to virtual circuits, where each node knows its upstream node.

For example, suppose the third node along a path becomes congested and stops receiving packets. The second node may then become congested because its output flow slows down; similarly, the first node may become congested and inform the source to slow down.
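
The propagation can be sketched as below (the node names and the queue limit of 5 are assumptions for the example):

    QUEUE_LIMIT = 5

    class Node:
        def __init__(self, name):
            self.name = name
            self.queue = []

        def accept(self, packet):
            if len(self.queue) >= QUEUE_LIMIT:
                return False                  # refuse: push pressure upstream
            self.queue.append(packet)
            return True

    n2, n3 = Node("n2"), Node("n3")
    n3.queue = ["p"] * QUEUE_LIMIT            # the third node is congested
    for i in range(QUEUE_LIMIT + 1):
        if not n3.accept(f"pkt{i}"):          # n3 refuses, so n2 must buffer
            if not n2.accept(f"pkt{i}"):      # eventually n2 is full too
                print("tell the source to slow down")
                break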

Choke Packet:

The choke packet technique is applicable to both virtual-circuit and datagram subnets. A choke packet is a packet sent by a node directly to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the utilization exceeds a threshold value set by the administrator, the router sends a choke packet straight to the source as feedback to reduce the traffic. The intermediate nodes through which the packet has traveled are not warned about the congestion.
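
On the router side, the mechanism is little more than a threshold test, as in this sketch (the 80% threshold and the send helper are assumptions):

    THRESHOLD = 0.8   # utilization level set by the administrator

    def check_output_line(utilization, source_addr, send):
        """send(dest, msg) stands in for the router's transmit path."""
        if utilization > THRESHOLD:
            # The choke packet goes straight to the source; routers
            # along the way are not warned.
            send(source_addr, {"type": "CHOKE"})

    check_output_line(0.93, "10.0.0.7",
                      lambda dst, msg: print("to", dst, ":", msg))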



Implicit Signaling:

In this method, there is no communication between the congested node or nodes and the source. The source guesses that there is congestion somewhere in the network from other symptoms. For example, when a source sends several packets and no acknowledgment arrives for a while, it can assume that the network is congested and slow down.

Explicit Signaling:

In explicit signaling, a node that experiences congestion explicitly signals the source or the destination to inform it of the congestion. The difference from the choke packet technique is that the signal is carried inside the packets that transport data, rather than in a separate packet.

Explicit signaling can occur in either the forward or the backward direction.

Forward Signaling: the signal is sent in the direction of the congestion, warning the destination. The receiver can then adopt policies to prevent further congestion.

Backward Signaling: the signal is sent in the direction opposite to the congestion, warning the source, which needs to slow down.
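
The idea resembles the FECN/BECN bits of Frame Relay: a congestion flag rides inside the data packets themselves. The field names in this sketch are illustrative:

    def mark_packet(packet, congested):
        if congested:
            if packet["direction"] == "toward_destination":
                packet["forward_congestion_bit"] = True    # warns the receiver
            else:
                packet["backward_congestion_bit"] = True   # warns the source

    pkt = {"direction": "toward_destination",
           "forward_congestion_bit": False,
           "backward_congestion_bit": False}
    mark_packet(pkt, congested=True)
    print(pkt["forward_congestion_bit"])   # True -> destination is warned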

Congestion Control in Virtual-Circuit Subnets

Congestion control in a virtual-circuit subnet is a closed-loop design for connection-oriented services and can be applied during connection setup. The basic principle is that when setting up a virtual circuit, we make sure that congestion is avoided.

The following methods are used for congestion control in virtual circuits.

• Admission Control

➢ Once congestion has been signaled, no new virtual circuits are set up until the problem has been resolved.

➢ This type of approach is often used in ordinary telephone networks: when the exchange is overloaded, no new calls are established.

• Another Approach: Alternative routes



➢ To allow new virtual connections, route them carefully so that none of the congested routers (none of the problem area) is part of the route, i.e., avoid the part of the network that is overloaded.

➢ Yet another approach is to negotiate parameters between the host and the network when the connection is set up. During setup, the host specifies the volume and shape of the traffic, the quality of service, the maximum delay, and other parameters related to the traffic it will offer to the network. Once the host specifies its requirements, the resources needed are reserved along the path before the actual packets follow.

For example, when router A sets up a connection to router B, the minimum-hop route would normally pass through one of the congested routers. To avoid the congestion, the subnet is temporarily redrawn with the congested routers eliminated, and a virtual circuit is then established over the remaining topology.
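
Redrawing the subnet amounts to searching for a path in a graph from which the congested routers have been removed, as in this sketch (the topology and the congested set are invented):

    from collections import deque

    GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["D"], "D": []}
    CONGESTED = {"B"}   # routers to keep off any new virtual circuit

    def route_avoiding_congestion(src, dst):
        seen, frontier = {src}, deque([[src]])
        while frontier:
            path = frontier.popleft()
            if path[-1] == dst:
                return path
            for nxt in GRAPH[path[-1]]:
                if nxt not in seen and nxt not in CONGESTED:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None   # no congestion-free route exists

    print(route_avoiding_congestion("A", "D"))   # ['A', 'C', 'E', 'D']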

Congestion Control in Datagram Subnets

The following congestion control approaches can be used in datagram subnets:

• Choke Packets.
• Load Shedding.
• Jitter control.

Choke Packets:

This approach can be used in virtual-circuit as well as datagram subnets. There are two variants:

1) Basic idea:



The router checks the status of each output line. If a line is too occupied, the router sends a choke packet to the source. The host is assumed to be cooperative and to slow down. When the source gets a choke packet, it cuts its rate by half and ignores further choke packets referring to the same destination for a fixed period. After that period has expired, the host listens for more choke packets: if one arrives, the host cuts its rate by half again; if none arrives, the host may increase the rate. (A sketch of this source-side reaction appears after item 2.)

2) Hop-by-Hop Choke Packets:


This technique improves on the basic choke packet method. At high speeds over long distances, sending a choke packet all the way back to the source does not help much, because by the time the choke packet reaches the source, a lot of packets destined for the same destination will already have left it. The solution is hop-by-hop choke packets, which take effect at every hop they pass through. For example, when a choke packet sent by congested router D reaches router F, F reduces its own traffic to D and also forwards the choke packet to router E. The problem D has is thus “pushed back” to F, and D gets relief quickly. The process is repeated along the route until the pressure finally reaches the original source A.
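
The source-side reaction described in item 1 can be sketched as follows; the ignore period of 2 seconds and the 10% probing increase are assumed values, not mandated by the notes:

    class ChokeReactingSource:
        IGNORE_PERIOD = 2.0   # seconds during which further chokes are ignored

        def __init__(self, rate_pps):
            self.rate = rate_pps
            self.ignore_until = 0.0

        def on_choke_packet(self, now):
            if now >= self.ignore_until:       # not inside the ignore window
                self.rate /= 2                 # cut the rate by half
                self.ignore_until = now + self.IGNORE_PERIOD

        def on_quiet_interval(self):
            self.rate *= 1.1                   # cautiously probe for more bandwidth

    src = ChokeReactingSource(rate_pps=1000)
    src.on_choke_packet(now=0.0)   # rate drops to 500
    src.on_choke_packet(now=1.0)   # ignored: still inside the window
    print(src.rate)                # 500.0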



Load shedding:

Admission control, choke packets, and fair queuing are techniques suitable for light congestion. If these techniques cannot make the congestion disappear, load shedding is used. The principle of load shedding is that when routers are being inundated by packets they cannot handle, they should simply throw packets away; a router flooded with packets due to congestion can drop packets at random. A better policy for choosing which packet to drop depends on the type of traffic: the policy for file transfer is called wine (old is better than new), and the policy for multimedia is called milk (new is better than old). To implement an intelligent discard policy, cooperation from the sender is essential: applications should mark their packets with priority classes so that, when packets must be discarded, routers can drop packets from the lowest class first.
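
Both policies, plus class-based dropping, fit in a few lines; the packet labels and class numbers below are invented for the example:

    def drop_one(queue, policy):
        if policy == "wine":       # file transfer: old is better than new
            return queue.pop()     # sacrifice the newest packet
        if policy == "milk":       # multimedia: new is better than old
            return queue.pop(0)    # sacrifice the oldest packet
        raise ValueError(policy)

    buffer = ["pkt1-oldest", "pkt2", "pkt3-newest"]
    print(drop_one(list(buffer), "wine"))   # pkt3-newest
    print(drop_one(list(buffer), "milk"))   # pkt1-oldest

    # With sender cooperation, packets carry a priority class and the
    # router discards from the lowest class first:
    packets = [{"cls": 2, "id": 1}, {"cls": 0, "id": 2}, {"cls": 1, "id": 3}]
    victim = min(packets, key=lambda p: p["cls"])
    print(victim["id"])                     # 2 -- lowest class dropped first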

Jitter control:

Jitter is defined as the variation in delay for packets belonging to the same flow. Real-time audio and video cannot tolerate jitter; on the other hand, jitter does not matter if the packets carry information contained in a file. For audio and video transmission, it does not matter whether packets take 20 msec or 30 msec to reach the destination, provided the delay remains constant. When a packet arrives at a router, the router checks whether the packet is behind or ahead of its schedule, and by how much; this information is stored in the packet and updated at every hop. If the packet is ahead of schedule, the router holds it for a slightly longer time; if the packet is behind schedule, the router tries to send it out as quickly as possible.
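
The per-hop decision can be sketched as follows; the schedule fields carried in the packet are illustrative assumptions:

    def handle(packet, now, hold, send_now):
        expected = packet["scheduled_arrival"]
        if now < expected:
            hold(packet, until=expected)   # ahead of schedule: delay it a little
        else:
            send_now(packet)               # behind schedule: expedite it
        # Each hop advances the schedule for the next router to check.
        packet["scheduled_arrival"] += packet["per_hop_budget"]

    pkt = {"scheduled_arrival": 10.0, "per_hop_budget": 5.0}
    handle(pkt, now=8.0,
           hold=lambda p, until: print("hold until", until),
           send_now=lambda p: print("send immediately"))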

QoS Concept:
Quality of service (QoS) refers to a network’s ability to deliver the bandwidth applications require and to manage other network performance elements such as latency, error rate, and uptime. Quality of service also involves controlling and managing network resources by setting priorities for specific types of data (video, audio, files) on the network. In other words, QoS is the overall performance of a network, particularly as seen by its users. To measure QoS quantitatively, several related aspects of the network service are considered, such as delay, bandwidth, jitter, and reliability.

• Reliability

➢ Lack of reliability means losing a packet or an acknowledgment, which entails retransmission.

➢ The sensitivity of application programs to reliability varies. For example, it is more important that electronic mail, file transfer, and internet access have reliable transmissions than telephony or audio conferencing.

• Delay

➢ Source-to-destination delay is another flow characteristic.

➢ Applications can tolerate delay in different degrees. For example, telephony, audio conferencing, video conferencing, and remote login need minimum delay, while delay in file transfer or email is less important.

• Jitter



➢ Jitter is defined as the variation in packet delay. High jitter means the difference between delays is large; low jitter means the variation is small.

➢ For example, if four packets depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same delay, 20 units of time. On the other hand, if the same four packets arrive at 21, 23, 21, and 28, they have different delays: 21, 22, 19, and 25 units (checked numerically in the sketch after this list).

➢ For applications such as audio and video, the first case is completely acceptable; the second case is not.

• Bandwidth

➢ Different applications need different bandwidths.

➢ In video conferencing we need to send millions of bits per second to refresh a color screen, while the total number of bits in an email may not even reach a million.
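
The two arrival patterns from the jitter example above can be checked numerically:

    departures = [0, 1, 2, 3]
    smooth     = [20, 21, 22, 23]
    jittery    = [21, 23, 21, 28]

    print([a - d for d, a in zip(departures, smooth)])    # [20, 20, 20, 20] -- no jitter
    print([a - d for d, a in zip(departures, jittery)])   # [21, 22, 19, 25] -- high jitter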

Compiled by: Krishna Bhandari www.genuinenotes.com
