QoS Building Blocks
[Figure: two workstations connected by routers across an internetwork or WAN; each router maintains a data queue holding packets from multiple flows (Flow 1, Flow 2)]
Traffic Shapers
• Simple leaky bucket
  • Produces an isochronous flow: regular intervals between packets
• Token bucket
  • Allows a bursty flow
Simple Leaky Bucket
Data packets leak from a bucket of depth β at a rate ρ:
• β = bucket size
• ρ = rate at which data is sent onto the network
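The mechanics are easy to sketch in code. Below is a minimal Python illustration (class and method names are ours, not from the slides): packets that fit within the depth β are buffered, and the bucket drains at the fixed rate ρ, giving the regular inter-packet spacing noted above.

```python
import time
from collections import deque

class LeakyBucket:
    """Sketch of a simple leaky-bucket shaper: packets enter a bucket
    of depth beta (bytes) and leak onto the wire at the fixed rate
    rho (bytes/second), so the output is smooth even if input bursts."""

    def __init__(self, beta, rho):
        self.beta = beta          # bucket depth in bytes
        self.rho = rho            # drain rate in bytes/second
        self.queue = deque()      # buffered packet sizes
        self.level = 0            # bytes currently in the bucket

    def enqueue(self, packet_len):
        """Admit a packet if it fits in the bucket, else drop it."""
        if self.level + packet_len > self.beta:
            return False          # bucket overflow: packet is dropped
        self.queue.append(packet_len)
        self.level += packet_len
        return True

    def drain(self, send):
        """Emit buffered packets paced at rate rho."""
        while self.queue:
            packet_len = self.queue.popleft()
            self.level -= packet_len
            send(packet_len)
            time.sleep(packet_len / self.rho)   # pacing: regular intervals
```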
Single-Rate Three-Color Marker
Definitions:
• CIR – Committed Information Rate
• CBS – Committed Burst Size (max)
• EBS – Excess Burst Size (max)
• Tc – Current size of CBS bucket
• Te – Current size of EBS bucket
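A sketch of how these definitions fit together, in the spirit of a single-rate three-color marker such as RFC 2697 (the class structure and names below are illustrative): tokens accumulate at CIR into the Tc bucket (depth CBS) and overflow into the Te bucket (depth EBS).

```python
import time

class SingleRateMarker:
    """Sketch of a single-rate three-color marker: one rate (CIR)
    fills two buckets, Tc (depth CBS) and Te (depth EBS)."""

    def __init__(self, cir, cbs, ebs):
        self.cir, self.cbs, self.ebs = cir, cbs, ebs
        self.tc, self.te = cbs, ebs        # buckets start full
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        tokens = (now - self.last) * self.cir
        self.last = now
        space = self.cbs - self.tc
        if tokens <= space:
            self.tc += tokens
        else:
            self.tc = self.cbs
            self.te = min(self.ebs, self.te + tokens - space)  # overflow into Te

    def mark(self, size):
        """Color a packet of `size` bytes: green within CIR/CBS,
        yellow within the excess burst, red otherwise."""
        self._refill()
        if self.tc >= size:
            self.tc -= size
            return "green"
        if self.te >= size:
            self.te -= size
            return "yellow"
        return "red"
```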
Two-Rate Three-Color Marker
Usage:
• Mark packets within CIR as conforming
• Mark packets between CIR and PIR as exceeding
• Drop packets above the PIR
Definitions:
• CIR – Committed Information Rate
• PIR – Peak Information Rate
• CBS – Committed Burst Size (max)
• PBS – Peak Burst Size (max)
• Tc – Current size of CBS bucket
• Tp – Current size of PBS bucket
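The same style of sketch for the two-rate marker, in the spirit of RFC 2698 (illustrative names, color-blind mode): Tc fills at CIR up to CBS and Tp fills at PIR up to PBS; a packet that exceeds Tp is red, one that exceeds only Tc is yellow, and the rest are green.

```python
import time

class TwoRateMarker:
    """Sketch of a two-rate three-color marker: Tc fills at CIR up
    to CBS, Tp fills at PIR up to PBS."""

    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs = cir, cbs
        self.pir, self.pbs = pir, pbs
        self.tc, self.tp = cbs, pbs        # buckets start full
        self.last = time.monotonic()

    def mark(self, size):
        now = time.monotonic()
        dt = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + dt * self.cir)
        self.tp = min(self.pbs, self.tp + dt * self.pir)
        if self.tp < size:                 # above PIR: drop (per the slide)
            return "red"
        if self.tc < size:                 # between CIR and PIR: exceeding
            self.tp -= size
            return "yellow"
        self.tp -= size                    # within CIR: conforming
        self.tc -= size
        return "green"
```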
Traffic Policing (cont.)
• Metering operates in two modes:
  • Color-blind mode
    • Packets are assumed to arrive uncolored
  • Color-aware mode
    • Packets may already be colored when they arrive at the meter
    • The policer can demote a packet by remarking it with a worse color, e.g. from yellow to red
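A tiny sketch of the demotion rule in color-aware mode (the ranking below is our own encoding): the policer emits the worse of the arriving color and its own measurement, never a better one.

```python
COLOR_RANK = {"green": 0, "yellow": 1, "red": 2}   # higher = worse

def police(arriving_color, measured_color):
    """Color-aware policing sketch: a packet may be demoted
    (e.g. yellow -> red) but never promoted."""
    if COLOR_RANK[measured_color] > COLOR_RANK[arriving_color]:
        return measured_color
    return arriving_color
```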
DSCP Field (former IPv4 TOS field)
• By using the (until now unused) IP precedence bits, we can color packets so that they are easily identifiable by routers and end-nodes
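For concreteness: the six DSCP bits occupy the upper part of the former TOS byte, with the low two bits left for ECN. A one-line helper (illustrative):

```python
def dscp_to_tos_byte(dscp: int) -> int:
    """Place a 6-bit DSCP codepoint in the former IPv4 TOS byte;
    the low two bits are left for ECN."""
    assert 0 <= dscp < 64
    return dscp << 2

# Example: Expedited Forwarding (EF) is DSCP 46, i.e. TOS byte 0xB8.
assert dscp_to_tos_byte(46) == 0xB8
```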
[Figure: drop-tail queuing — the drop probability stays at 0 while the buffer has room and jumps to 1 when the queue length reaches the buffer limit]
Bursty Loss From Drop-Tail Queuing
• TCP depends on packet loss
  • Packet loss is the indication of congestion
  • TCP's additive increase drives the network into loss
• Drop-tail leads to bursty loss
  • Congested link: many packets encounter a full queue
  • Synchronization: many connections lose packets at once
Slow Feedback from Drop Tail
• Feedback arrives only once the queue is already full, so senders learn of congestion too late
Random Early Detection (RED) Queue
• Router notices that the queue is getting full
  • … and randomly drops packets to signal congestion
• Packet drop probability
  • Drop probability increases as the average queue length increases
  • Below a minimum threshold, don't drop; otherwise set the drop probability to f(avg queue length)
[Figure: RED drop profile — the drop probability rises linearly from Pmin at THmin to Pmax at THmax as a function of the average queue length, then jumps to 1 beyond THmax]
RED – Using Averaging
• RED computes an average queue length
  • qlen_avg = (1 − w) · qlen_avg + w · qlen_curr
• The average queue length captures the notion of congestion more accurately
  • Because of the bursty nature of Internet traffic, queues can fill very quickly and empty again just as fast
• The weighted running average tries to detect long-lived congestion
[Figure: the instantaneous queue length oscillates rapidly over time, while the average queue length tracks only the long-term trend]
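Combining the averaging rule with the drop profile gives a compact sketch (THmin, THmax, Pmin, Pmax, and w follow the slides; the class structure is ours):

```python
import random

class RedQueue:
    """Sketch of RED's drop decision using the slides' parameters."""

    def __init__(self, th_min, th_max, p_min, p_max, w):
        self.th_min, self.th_max = th_min, th_max
        self.p_min, self.p_max = p_min, p_max
        self.w = w                 # weight of the running average
        self.qlen_avg = 0.0

    def should_drop(self, qlen_curr):
        """Update the average and decide whether to drop an arrival."""
        # From the slide: qlen_avg = (1 - w) * qlen_avg + w * qlen_curr
        self.qlen_avg = (1 - self.w) * self.qlen_avg + self.w * qlen_curr
        if self.qlen_avg < self.th_min:
            return False                   # below THmin: never drop
        if self.qlen_avg >= self.th_max:
            return True                    # above THmax: always drop
        # Between the thresholds: ramp linearly from Pmin to Pmax
        frac = (self.qlen_avg - self.th_min) / (self.th_max - self.th_min)
        return random.random() < self.p_min + frac * (self.p_max - self.p_min)
```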
Properties of RED
• Drops packets before queue is full
• In the hope of reducing the rates of some flows
Problems With RED
• Hard to get tunable parameters just right
• How early to start dropping packets?
• What slope for increase in drop probability?
• What time scale for averaging queue length?
WRED (Weighted RED) Queue
RED with multiple drop profiles.
[Figure: WRED drop profiles — out-of-profile (yellow) packets use an earlier, more aggressive profile (min_out, max_out, Pmax_out) than in-profile packets (min_in, max_in, Pmax_in); both drop probabilities are functions of the average total queue length. Note: THmin(i) = (1/2 + i/8) · THmax for i = 0..7]
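A sketch of the per-color lookup (the threshold values below are made-up placeholders): each color carries its own RED profile, evaluated against the average total queue length.

```python
import random

# Hypothetical drop profiles: yellow (out-of-profile) traffic is
# dropped earlier and more aggressively than green (in-profile).
PROFILES = {
    "green":  {"th_min": 40, "th_max": 80, "p_max": 0.05},
    "yellow": {"th_min": 20, "th_max": 60, "p_max": 0.20},
}

def wred_should_drop(color, avg_total_qlen):
    """Apply the RED rule with the profile of this packet's color."""
    prof = PROFILES[color]
    if avg_total_qlen < prof["th_min"]:
        return False
    if avg_total_qlen >= prof["th_max"]:
        return True
    frac = (avg_total_qlen - prof["th_min"]) / (prof["th_max"] - prof["th_min"])
    return random.random() < prof["p_max"] * frac
```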
Strict Priority
Priority Queuing
• Classes have different priorities; the class may depend on explicit marking or on other header info, e.g. IP source or destination address, TCP port numbers, etc.
• Preemptive and non-preemptive versions exist
• Problem:
  • lower-priority traffic may starve (see the sketch below)
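A sketch of the non-preemptive version (illustrative structure); the starvation problem is visible directly in the code:

```python
from collections import deque

class PriorityScheduler:
    """Sketch of non-preemptive strict-priority scheduling: always
    serve the highest-priority non-empty queue (queue 0 is highest)."""

    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, priority, packet):
        self.queues[priority].append(packet)

    def dequeue(self):
        for q in self.queues:          # scan from highest priority down
            if q:
                return q.popleft()
        return None                    # all queues empty

# Starvation: as long as queue 0 has packets, lower-priority
# queues are never served.
```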
Weighted Fair Queuing
• Round robin: scan the class queues, serving one packet from each class that has a non-empty queue
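A sketch of the weighted round-robin variant (weights and structure are illustrative). Because it counts packets rather than bytes, it also exhibits the large-packet bias discussed on the next slide:

```python
from collections import deque

class WeightedRoundRobin:
    """Sketch of weighted round robin over class queues: each pass,
    a class with weight w may send up to w packets."""

    def __init__(self, weights):
        self.weights = weights
        self.queues = [deque() for _ in weights]

    def enqueue(self, cls, packet):
        self.queues[cls].append(packet)

    def next_round(self):
        """One full pass over all classes; returns the packets sent."""
        sent = []
        for q, w in zip(self.queues, self.weights):
            for _ in range(w):         # up to `w` packets per class
                if not q:
                    break
                sent.append(q.popleft())
        return sent
```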
Discussion
• Advantages: protection among flows
  • Misbehaving flows do not affect the performance of well-behaving flows
  • A misbehaving flow is one that does not implement any congestion control
  • FIFO does not have this property
• Disadvantages:
  • More complex than FIFO: per-flow queues and state
  • Biased toward large packets – a flow receives service proportional to the number of packets it sends, regardless of packet size
Generalized Processor Sharing (GPS)
• Assume a fluid model of traffic
• Visit each non-empty queue in turn (round robin)
• Serve an infinitesimal amount from each
• Leads to "max-min" fairness
  • each connection gets no more than what it wants
  • the excess, if any, is equally shared
GPS is unimplementable! We cannot serve infinitesimals, only packets.
DRR (Deficit RR) Algorithm
• Each connection has a deficit counter (to store credits), with initial value zero
• Choose a quantum of bits to serve from each connection in order
• For each head-of-line packet:
  • credit := credit + quantum
  • if the packet size ≤ credit: send it and save the excess
  • otherwise save the entire credit for the next round
• If there is no packet to send, reset the counter to zero (to remain fair)
• If some packet was sent: counter = min{excess, quantum}
• Easier to implement than other fair policies such as WFQ (a sketch follows)
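A runnable sketch of these rules (queue entries are packet sizes in bytes; the structure is ours). Running it on the worked example of the next slide reproduces the counters shown there:

```python
from collections import deque

class DeficitRoundRobin:
    """Sketch of the DRR rules above: one deficit counter per
    connection, initially zero; quantum is in bytes."""

    def __init__(self, num_flows, quantum):
        self.quantum = quantum
        self.queues = [deque() for _ in range(num_flows)]   # packet sizes
        self.deficit = [0] * num_flows

    def enqueue(self, flow, size):
        self.queues[flow].append(size)

    def one_round(self, send):
        """Visit every connection once, in order."""
        for i, q in enumerate(self.queues):
            if not q:
                self.deficit[i] = 0            # no packet to send: reset
                continue
            self.deficit[i] += self.quantum    # credit := credit + quantum
            sent = False
            while q and q[0] <= self.deficit[i]:
                self.deficit[i] -= q[0]        # send and save the excess
                send(i, q.popleft())
                sent = True
            if sent:                           # counter = min{excess, quantum}
                self.deficit[i] = min(self.deficit[i], self.quantum)

# Worked example from the next slide:
drr = DeficitRoundRobin(num_flows=3, quantum=1000)
drr.enqueue(0, 1500)                           # A
drr.enqueue(1, 500); drr.enqueue(1, 300)       # B
drr.enqueue(2, 1200)                           # C
drr.one_round(lambda f, s: print("round 1: flow", f, "sent", s))
drr.one_round(lambda f, s: print("round 2: flow", f, "sent", s))
print(drr.deficit)   # counters after round 2: [500, 0, 800]
```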
Deficit Round-Robin
• Packet-based implementation of the ideal Generalized Processor Sharing
• DRR can handle variable packet sizes almost fairly
• Worked example – quantum size 1000 bytes; head-of-line packets: A = 1500, B = 500 then 300, C = 1200
• 1st round
  • A: credit 1000; the 1500-byte packet does not fit; count: 1000
  • B: credit 1000; served twice (500 + 300); count: 200
  • C: credit 1000; the 1200-byte packet does not fit; count: 1000
• 2nd round
  • A: credit 2000; the 1500-byte packet is served; count: 500
  • B: no packet to send; count reset to 0
  • C: credit 2000; the 1200-byte packet is served; count: 800
DiffServ Packet Scheduler (1)
• Priority-based, weight-based packet scheduler
[Figure: NCT1, NCT0, and EF queues feed a strict-priority scheduler directly; AF4–AF1 each receive a minimum rate from a rate-based scheduler (WRR or WFQ) whose output also enters the priority scheduler; BE has the lowest priority; the aggregate output passes through a traffic shaper configured with PIR/PBS and CIR/CBS+EBS]
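A toy composition of the two stages (the NCT classes, minimum rates, and the final shaper are omitted; names are illustrative): EF is served with strict priority, the AF classes share a round-robin pass, and BE comes last.

```python
from collections import deque
from itertools import cycle

# Illustrative queues following the figure above.
ef = deque()
af = {name: deque() for name in ("AF4", "AF3", "AF2", "AF1")}
af_order = cycle(af)                   # simple RR among the AF classes
be = deque()

def dequeue_diffserv():
    if ef:                                       # EF first: strict priority
        return ef.popleft()
    for _ in range(len(af)):                     # then one AF pass (RR)
        q = af[next(af_order)]
        if q:
            return q.popleft()
    return be.popleft() if be else None          # BE last
```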