Unit 4 ACN
In the above diagram, the 3rd node is congested and stops receiving packets. As a result, the 2nd node may become congested because its output data flow slows down. Similarly, the 1st node may become congested and inform the source to slow down.
2. Choke Packet Technique :
The choke packet technique is applicable to both virtual-circuit networks and datagram subnets. A choke packet is a packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the resource utilization exceeds a threshold value set by the administrator, the router sends a choke packet directly to the source, giving it feedback to reduce the traffic. The intermediate nodes through which the packet has traveled are not warned about the congestion.
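As a rough illustration, a few lines of Python sketching the router-side decision; THRESHOLD, send_choke_packet and check_output_line are hypothetical names, not part of any real router API:

THRESHOLD = 0.8   # utilization level set by the administrator

def send_choke_packet(source):
    # Stand-in for transmitting the warning back to the source.
    print(f"choke packet sent to {source}: reduce traffic")

def check_output_line(utilization, source):
    # Router-side check: warn the source directly when a line is over-utilized.
    if utilization > THRESHOLD:
        send_choke_packet(source)

check_output_line(0.93, "10.0.0.5")   # exceeds threshold -> choke packet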
3. Implicit Signaling :
In implicit signaling, there is no communication between the congested nodes and the source. The source guesses that there is congestion in the network. For example, when a sender sends several packets and no acknowledgment arrives for a while, one assumption is that the network is congested.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion it can explicitly send a packet to the source or destination to inform it of the congestion. The difference between the choke packet technique and explicit signaling is that in explicit signaling the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either the forward or the backward direction.
Forward Signaling : In forward signaling, the signal is sent in the direction of the congestion. The destination is warned about the congestion, and the receiver in this case adopts policies to prevent further congestion.
Backward Signaling : In backward signaling, the signal is sent in the direction opposite to the congestion. The source is warned about the congestion and needs to slow down.
TCP Congestion Control
TCP uses a congestion window and a congestion policy that avoid congestion. Previously, we assumed that only the receiver could dictate the sender's window size. We ignored another entity here: the network. If the network cannot deliver the data as fast as it is created by the sender, it must tell the sender to slow down. In other words, in addition to the receiver, the network is a second entity that determines the size of the sender's window.
Congestion Avoidance Phase : additive increment – After the congestion window reaches the slow start threshold (ssthresh), the window grows by one segment per round-trip time:
Initially cwnd = i
After 1 RTT, cwnd = i + 1
After 2 RTT, cwnd = i + 2
After 3 RTT, cwnd = i + 3
Congestion Detection Phase : multiplicative decrement – If congestion occurs, the congestion window size is decreased. The only way a sender can guess that congestion has occurred is by the need to retransmit a segment. Retransmission is needed to recover a missing packet that is assumed to have been dropped by a router due to congestion. Retransmission can occur in one of two cases: when the RTO timer times out or when three duplicate ACKs are received.
Case 1 : Retransmission due to Timeout – In this case, the possibility of congestion is high.
(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = 1
(c) start with the slow start phase again.
Case 2 : Retransmission due to 3 Duplicate ACKs – In this case, the possibility of congestion is lower.
(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = ssthresh
(c) start with the congestion avoidance phase.
Example – Assume a TCP connection exhibiting slow start behavior. At the 5th transmission round, with a threshold (ssthresh) value of 32, it goes into the congestion avoidance phase and continues until the 10th transmission round. At the 10th transmission round, 3 duplicate ACKs are received by the sender, and TCP enters additive increase mode again. A timeout occurs at the 16th transmission round. Plot the transmission round (time) vs. congestion window size of the TCP segments.
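A minimal Python sketch of this example, assuming the usual textbook rules (slow start doubles cwnd every round, additive increase adds 1 per round, 3 duplicate ACKs halve ssthresh and set cwnd = ssthresh, a timeout halves ssthresh and resets cwnd to 1); the (round, cwnd) pairs it prints are what the question asks to plot:

# Sketch of cwnd evolution for the worked example above.
ssthresh = 32
cwnd = 1
history = []                      # (round, cwnd) pairs for the plot

for rnd in range(1, 21):
    history.append((rnd, cwnd))
    if rnd == 10:                 # 3 duplicate ACKs at round 10
        ssthresh = cwnd // 2      # multiplicative decrease
        cwnd = ssthresh           # continue with congestion avoidance
    elif rnd == 16:               # timeout at round 16
        ssthresh = cwnd // 2
        cwnd = 1                  # back to slow start
    elif cwnd < ssthresh:
        cwnd *= 2                 # slow start: exponential growth
    else:
        cwnd += 1                 # congestion avoidance: additive increase

for rnd, w in history:
    print(rnd, w)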
Frame Relay:
Congestion in Frame Relay decreases throughput and increases delay, while high throughput and low delay are the main goals of the Frame Relay protocol. Frame Relay does not have flow control, and it allows users to transmit bursty data. This means that a Frame Relay network has the potential to become seriously congested with traffic, requiring congestion control. Frame Relay uses congestion avoidance by means of two bit fields present in the Frame Relay frame to explicitly warn the source and destination of the presence of congestion:
BECN:
Backward Explicit Congestion Notification (BECN) warns the sender of congestion present in the network. This is achieved by the switches in the network setting the BECN bit in frames traveling in the reverse direction, toward the sender. The sender can respond to this warning by reducing its transmission data rate, thus reducing the effects of congestion in the network.
FECN:
Forward Explicit Congestion Notification (FECN) is used to warn the receiver of congestion in the network. It might appear that the receiver cannot do anything to relieve the congestion; however, the Frame Relay protocol assumes that the sender and receiver are communicating with each other, so when the receiver sees the FECN bit set to 1 it delays its acknowledgements. This forces the sender to slow down, reducing the effects of congestion in the network.
Frame Relay Frame Format :-
The frame format is shown below:
This frame is very similar to the HDLC frame, except for the missing control field here.
• The control field is not needed because flow and error control are not needed.
• The Flag, FCS and information fields are the same as those of HDLC.
• The address field defines the DLCI along with some other bits required for congestion control and traffic control.
• The fields are described as follows:
1. DLCI field:
The first part of the DLCI is 6 bits long and the second part is 4 bits long. Together they form a 10-bit data link connection identifier.
2. Command / Response (C / R):
The C/R bit allows the upper layers to identify a frame as either a command or response. It is
not used by the frame relay protocol.
3. Extended Address (EA):
• This bit indicates whether the current byte is the final byte of the address.
• If EA = 1, it indicates that the current byte is the final one; if EA = 0, another address byte follows.
4. Forward Explicit Congestion Notification (FECN):
• This bit can be set by any switch to indicate that traffic is congested in the direction of
travel of the frame.
• The destination is informed about the congestion via this bit.
5. Backward Explicit Congestion Notification (BECN):
• This bit indicates the congestion in the direction opposite to the direction of frame travel.
• It informs the sender about the congestion.
6. Discard Eligibility (DE):
• The DE bit indicates the priority level of the frame. In overload situations a frame may have to be discarded.
• If DE = 1 then that frame can be discarded in the event of congestion.
• DE bit can be set by the sender or by any switch in the network.
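As an illustration, the following Python sketch unpacks these fields from the default two-byte address field, assuming the standard Q.922 bit layout (6 high DLCI bits, C/R and EA in the first byte; 4 low DLCI bits, FECN, BECN, DE and EA in the second); parse_fr_address is a hypothetical helper name:

def parse_fr_address(b1, b2):
    # Unpack the 2-byte Frame Relay address field.
    return {
        # 10-bit DLCI: 6 high bits from byte 1, 4 low bits from byte 2
        "dlci": (((b1 >> 2) & 0x3F) << 4) | ((b2 >> 4) & 0x0F),
        "cr":   (b1 >> 1) & 1,   # command/response, unused by Frame Relay itself
        "ea0":  b1 & 1,          # 0 = another address byte follows
        "fecn": (b2 >> 3) & 1,   # congestion in the frame's direction of travel
        "becn": (b2 >> 2) & 1,   # congestion in the opposite direction
        "de":   (b2 >> 1) & 1,   # discard eligibility
        "ea1":  b2 & 1,          # 1 = final address byte
    }

# Example: DLCI 100 with the FECN bit set
print(parse_fr_address(0x18, 0x49))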
QoS:
Important flow characteristics related to QoS are given below:
1. Reliability
If a packet gets lost or an acknowledgement is not received at the sender, the data will need to be retransmitted. This decreases the reliability. The importance of reliability differs according to the application.
For example:
E-mail and file transfer need more reliable transmission than audio conferencing.
2. Delay
The delay of a message from source to destination is a very important characteristic. However, different applications tolerate delay differently.
For example:
Time delay cannot be tolerated in audio conferencing (it needs a minimum time delay), while the time delay in e-mail or file transfer is less important.
3. Jitter
Jitter is the variation in packet delay. If the difference between delays is large, it is called high jitter; if the difference between delays is small, it is known as low jitter.
Example:
Case 1: If 3 packets are sent at times 0, 1, 2 and received at 10, 11, 12, the delay is the same (10 units) for all packets, which is acceptable for a telephonic conversation.
Case 2: If the same 3 packets are sent at times 0, 1, 2 but received at 31, 34, 39, the delay is different for each packet (31, 33, 37 units). In this case, the varying delay is not acceptable for a telephonic conversation.
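A tiny Python sketch of the two cases, computing per-packet delays and taking the differences between consecutive delays as a simple measure of jitter (one common definition among several):

def delays_and_jitter(sent, received):
    # Per-packet delays, and the change between consecutive delays.
    delays = [r - s for s, r in zip(sent, received)]
    jitter = [b - a for a, b in zip(delays, delays[1:])]
    return delays, jitter

print(delays_and_jitter([0, 1, 2], [10, 11, 12]))  # ([10, 10, 10], [0, 0]) -> low jitter
print(delays_and_jitter([0, 1, 2], [31, 34, 39]))  # ([31, 33, 37], [2, 4]) -> high jitter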
4. Bandwidth
Different applications need different amounts of bandwidth.
For example:
Video conferencing needs more bandwidth than sending an e-mail.
Classification of services
1. Overprovisioning –
The logic of overprovisioning is to provide greater router capacity, buffer space and bandwidth than strictly needed. It is an expensive technique because the resources are costly. E.g., the telephone system.
2. Buffering –
Flows can be buffered on the receiving side before being delivered. This does not affect reliability or bandwidth, but it helps to smooth out jitter, since packets can then be delivered at uniform intervals.
3. Traffic Shaping –
Traffic shaping is about regulating the average rate of data transmission. It smooths the traffic on the server side rather than the client side. When a connection is set up, the user machine and the subnet agree on a certain traffic pattern for that circuit, called a Service Level Agreement. Traffic shaping reduces congestion and thus helps the carrier deliver the packets in the agreed pattern.
Techniques to Improve QoS
Several techniques can be used to improve the quality of service. The four common methods are: scheduling, traffic shaping, admission control, and resource reservation.
a. Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling
techniques are designed to improve the quality of service. We discuss three of them here:
FIFO queuing, priority queuing, and weighted fair queuing.
i. FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or
switch) is ready to process them. If the average arrival rate is higher than the average
processing rate, the queue will fill up and new packets will be discarded. A FIFO queue is
familiar to those who have had to wait for a bus at a bus stop.
ii. Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest-priority queue are processed first; packets in the lowest-priority queue are processed last. Note that the system does not stop serving a queue until it is empty. Figure 4.32 shows priority queuing with two priority levels (for simplicity). A priority queue can provide better QoS than a FIFO queue because higher-priority traffic, such as multimedia, can reach the destination with less delay. However, there is a potential drawback: if there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed. This condition is called starvation.
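A minimal Python sketch of strict priority queuing as described above (the names are illustrative): the high-priority queue is always drained first, which is exactly what creates the starvation risk.

from collections import deque

queues = [deque(), deque()]   # index 0 = high priority, 1 = low priority

def enqueue(packet, priority):
    queues[priority].append(packet)

def dequeue():
    # Serve the highest-priority non-empty queue first (strict priority).
    for q in queues:
        if q:
            return q.popleft()
    return None               # nothing to send

enqueue("voice-1", 0)
enqueue("mail-1", 1)
enqueue("voice-2", 0)
# Low-priority "mail-1" waits until all high-priority packets are gone.
print(dequeue(), dequeue(), dequeue())   # voice-1 voice-2 mail-1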
b. Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two techniques can shape traffic: leaky bucket and token bucket.
i. Leaky Bucket
In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a host. The use of the leaky bucket shapes the input traffic to make it conform to this commitment. In Figure 4.34 the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data. The host is then silent for 5 s and sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps during the same 10 s.
A simple leaky bucket implementation is shown in Figure 4.35. A FIFO queue holds the
packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process
removes a fixed number of packets from the queue at each tick of the clock. If the traffic
consists of variable-length packets, the fixed output rate must be based on the number of
bytes or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the counter by the packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the
data rate. It may drop the packets if the bucket is full.
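A minimal sketch of this counter-based algorithm in Python, assuming a queue of variable-length packets (sizes in bytes) and a budget of n bytes per clock tick:

from collections import deque

def leaky_bucket_tick(queue, n):
    # One clock tick: send packets while the byte budget allows it.
    sent = []
    budget = n                    # step 1: initialize the counter to n
    while queue and queue[0] <= budget:
        size = queue.popleft()    # step 2: send and decrement the counter
        budget -= size
        sent.append(size)
    return sent                   # step 3: the counter resets on the next tick

# Bursty input shaped to at most 1000 bytes per tick.
q = deque([400, 400, 400, 400, 400])
for tick in range(3):
    print(f"tick {tick}: sent {leaky_bucket_tick(q, 1000)}")
# tick 0: [400, 400]; tick 1: [400, 400]; tick 2: [400]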
ii. Token Bucket
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host is
not sending for a while, its bucket becomes empty. Now if the host has bursty data, the leaky
bucket allows only an average rate. The time when the host was idle is not taken into account.
On the other hand, the token bucket algorithm allows idle hosts to accumulate credit for the
future in the form of tokens. For each tick of the clock, the system sends n tokens to the
bucket. The system removes one token for every cell (or byte) of data sent. For example, if n
is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens.
The token bucket can easily be implemented with a counter. The counter is initialized to zero. Each time a token is added, the counter is incremented by 1. Each time a unit of data is sent, the counter is decremented by 1. When the counter is zero, the host cannot send data.
The token bucket allows bursty traffic at a regulated maximum rate.
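A minimal Python sketch of the counter-based token bucket just described, assuming n tokens per tick and one token per unit of data (bucket capacity is left unbounded for simplicity):

class TokenBucket:
    # Counter-based token bucket: idle time accumulates sending credit.
    def __init__(self, tokens_per_tick):
        self.n = tokens_per_tick
        self.counter = 0            # the counter starts at zero

    def tick(self):
        self.counter += self.n      # each clock tick adds n tokens

    def send(self, units):
        if self.counter >= units:   # enough credit for this burst
            self.counter -= units
            return True
        return False                # counter too low: host must wait

tb = TokenBucket(tokens_per_tick=100)
for _ in range(100):                # host idles for 100 ticks...
    tb.tick()
print(tb.counter)                   # ...collecting 10,000 tokens
print(tb.send(5000))                # a burst of 5,000 units is now allowed: True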
Combining Token Bucket and Leaky Bucket
The two techniques can be combined to credit an idle host and at the same time regulate the traffic. The leaky bucket is applied after the token bucket; the rate of the leaky bucket needs to be higher than the rate at which tokens are dropped into the bucket.
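A short self-contained sketch of the combination, with illustrative rates: the token counter credits idle time, while the leak rate (set above the token rate, as required) caps the instantaneous output:

TOKEN_RATE = 100                    # tokens added per tick
LEAK_RATE = 150                     # leaky bucket output cap, > TOKEN_RATE
tokens = 5000                       # credit accumulated during a long idle period

def combined_tick(backlog):
    # Units sent this tick: limited by backlog, token credit and leak rate.
    global tokens
    tokens += TOKEN_RATE            # token bucket credits the host
    sent = min(backlog, tokens, LEAK_RATE)
    tokens -= sent
    return sent

print(combined_tick(backlog=1000))  # 150: the leaky bucket caps the burst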
c. Resource Reservation
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The
quality of service is improved if these resources are reserved beforehand. We discuss in this
section one QoS model called Integrated Services, which depends heavily on resource
reservation to improve the quality of service.
d. Admission Control
Admission control refers to the mechanism used by a router, or a switch, to accept or reject a
flow based on predefined parameters called flow specifications. Before a router accepts a
flow for processing, it checks the flow specifications to see if its capacity (in terms of
bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows can
handle the new flow.
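A minimal sketch of such a check in Python; the flow-spec fields (bandwidth and buffer only) and all names are illustrative assumptions, not a standard API:

CAPACITY = {"bandwidth_mbps": 100, "buffer_kb": 4096}   # router capacity
committed = {"bandwidth_mbps": 70, "buffer_kb": 3000}   # existing flows

def admit(flow_spec):
    # Accept the flow only if remaining capacity covers its specification.
    return all(committed[k] + flow_spec[k] <= CAPACITY[k] for k in CAPACITY)

print(admit({"bandwidth_mbps": 20, "buffer_kb": 500}))  # True: fits
print(admit({"bandwidth_mbps": 40, "buffer_kb": 500}))  # False: over bandwidth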
Emerging Trends in Networks
1. Networking Will Become More Automated
Over the last few years, networking vendors released automation platforms to make managing networks easier. But until now, they've felt sort of like flowers not ready to bloom. There have been interoperability issues, feature parity issues and even cultural issues as the networking community debated whether this was the end of the traditional network engineer.
As we enter 2020, I think we’ve finally reached the point when we’ll see automation blossom
into the dominant way to manage networks going forward. Though homegrown automation is
still popular in the Reddit threads and blogosphere, watch out for vendors incorporating
automation into their platforms—not as just another feature, but as the baseline for how to
operate networks.
2. 5G and Wi-Fi 6 Will Make Their Way into Our Homes and Offices
There will be a 5G versus Wi-Fi 6 battle for home and business use next year. Now, I know this isn't exactly a trend per se, but it's nevertheless an important thing to watch for in the coming year.
Cellular providers have an opportunity to make good on their promises to bring 5G right into
our homes and offices with personal 5G cellular networks. I realize this was promised a long
time ago, and it took years for ISPs to finally coalesce around a common 5G communications
language. In 2020, we'll see ISPs rally behind 5G NR and make a push to move from large-scale connectivity to small scale in our offices and homes.
This could help solve the challenge of having so many wireless devices connected at once, a problem that IoT makes worse by slowing wireless network performance even more. However, Wi-Fi 6 promises to solve the same problem as 5G and has much easier inroads into the market, which is why I'm not sure 5G from large ISPs will win out in the fight over small-scale wireless networking. Pay attention to how ISPs approach this new market niche in 2020.
Why Wi-Fi 6? In the second half of 2019, 802.11ax (better known as Wi-Fi 6) boomed. Networking vendors rushed to release new access points, wireless controllers and marketing initiatives. Now, Wi-Fi 6 infrastructure is ready to go, but devices such as phones, laptops and other Wi-Fi-capable chipsets weren't there until very recently.
3. AI and ML Will Lead to Autonomous Networks
Analyzing data with machine learning (ML) algorithms and artificial intelligence (AI) will become the common starting point for many technologies. ML can make predictions based on network data. And in the broader sense, AI can take intelligent action based on those predictions.
In 2020, analytics tools built on ML and AI will get better and more powerful. Instead of
being yet another management platform no one logs into, they’ll be built right into
networking platforms.
That doesn’t mean routers and switches will be doing network analysis for us. Remember
we’re also seeing a trend toward completely automated networks managed by some sort of
controller. I believe advanced analytics will be baked right into these automation platforms,
which will evolve into validation mechanisms and the beginnings of a self-operating network.
• Physical Layer
• Network Layer
• The nearby station is responsible for handling the node's transmission. Then the address changes according to the base station.
There are many aspects to security at the data dissemination level. In the following subsections, many of the largest security threats are defined and analyzed. Each type of threat falls into one of the three major security views: confidentiality, integrity, and availability [7]. Confidentiality can be defined as the ability of a system to service requests only for allowed nodes. An unauthorized node should not be able to access the data or tasks of another node. Integrity states that an unauthorized node should not be able to change the data or tasks of another node. Availability is the ability of the node or network to function, particularly under attack. The dissemination methods are then compared to each other for each particular threat.
A. Flooding
The act of flooding in a WSN attacks the network with artificial host queries or tasks or
with repetitive data, intent on causing resource exhaustion [1]. These attacks impact the
availability of the network. As explained previously, each transmission consumes energy
reserves as well as processing time. Query and task floods work at the host level by
incorrectly issuing data requests and jobs for the network nodes to perform. Data floods
occur when a node sends its sensed data repetitively, beyond its intended operation.
Query flooding only occurs in local and data-centric methods. This is because external
storage sends its data immediately, without the host explicitly requesting it. Local storage
nodes are impacted the most. When a query is initiated, the host does not know where the
requested data resides. All that is known is that each node holds its own data. Thus, queries
are sent to the entire network.
B. Compromise at the Gateway
All communication between the host and the sensor network must pass through the
gateway. Loss of this node will break all ties between the host and network. Compro- mises
may affect confidentiality and integrity, but more importantly availability. In
confidentiality and integrity compromises, all dissemination methods are affected equally.
Availability compromises are slightly different, but in an important way. In local and data-
centric storage, data is stored until requested - a link between the sensing nodes and the
host does not necessarily incur data loss. Since queries are not initiated, energy from
transmissions is not lost, but memory will eventually run out. External storage relies on the
host-network link via the gateway. When broken, data will ultimately be dropped as there
is no sink for it. Transmissions are affected unless the nodes can “learn” about gateway
failures and enter into a dormant phase.
C. Data Loss Due to a Single Node Failure
Node failure is inevitable. The larger the network, the higher the probability a node will
fail. Failure here is considered to be when a node ceases to collect and compile data. The
impact of node failure on data loss depends highly on the node’s job and its previously
compiled data. Networks using external storage are least affected. Only after data has been
gathered and before the data is sent can data loss occur, and only this newly gathered data
would be lost. Local storage is similar to external storage in these regards. The difference is that the data previously gathered at the failed node would be lost. The impact would be closely tied to the frequency of queries. Data-centric storage has the highest potential for large data loss, as all data for a task is stored at one node. Failures of other nodes would be similar to failures of external storage nodes. However, a clever attack could monitor the transmission patterns, determine the node with the highest source and sink transmission, and thus determine the storage node. Resiliency to this attack may come in the form of redundancy - using multiple nodes to store the data. This type of threat falls into the category of data integrity and availability.
D. Unauthorized Reads
Confidentiality comes into play in applications where a leak of information is undesirable. In the case of temperature readings in anticipation of wildfires, unauthorized reading of the data stored in a network may not be a concern. Military and commercial WSNs, however, may collect sensitive data, and thus unauthorized reading is a concern.
E. Node Movement, Removal, and Replacement
Physically moving, removing, or replacing a node is the most fundamental storage threat
in WSNs. Without a node to sense and store data, the network will not be able to operate at
full capacity. (For simplicity, it is assumed nodes do not have nearby redundant nodes in
which either node can be used.)
Moving a node can have the same impact as removal: the network does not know where the node is. For external storage, this is not a concern, since data need not be stored anywhere except the host and the network relies on other nodes purely for transmission hops. Local storage operates in the same fashion. Data-centric storage stands to have a higher risk due to its centralized storage. As with single node failures, data may be lost if the storage node is moved or removed, thus causing integrity loss. Replacing a node can have a drastically different impact on data-centric storage. Using a node specifically altered to perform as the attacker wishes (relaying information, altering data, etc.), the entire objective of the network may be compromised. This may lead to availability and confidentiality breaches.
F. Injection of Invalid Data
Altering the sensing environment of a node or network is a real concern for all storage methods. Consider the temperature sensing application one more time. If an attacker decided to artificially increase the temperature around a single node, monitoring agencies could instantiate an area-wide exodus to prevent loss of life. All storage methods are equally affected, but the extent of the damage depends on the application. The application may range from reporting a maximum value, a single event, an average, and so on.