High Speed Networks Lecture Notes
Unit I
Frame Relay Networks
Frame Relay is often described as a streamlined version of X.25, offering fewer of the
robust capabilities, such as windowing and retransmission of lost data, that are offered in
X.25.
Frame Relay Devices
Devices attached to a Frame Relay WAN fall into the following two general categories:
DTEs generally are considered to be terminating equipment for a specific network and
typically are located on the premises of a customer. In fact, they may be owned by the
customer. Examples of DTE devices are terminals, personal computers, routers, and
bridges.
DCEs are carrier-owned internetworking devices. The purpose of DCE equipment is to
provide clocking and switching services in a network; these are the devices that actually
transmit data through the WAN. In most cases, they are packet switches. Figure 10-1
shows the relationship between the two categories of devices.
www.tnlearner.net
1. Flag Field. The flag is used to perform high level data link synchronization which
indicates the beginning and end of the frame with the unique pattern 01111110.
To ensure that the 01111110 pattern does not appear somewhere inside the frame,
bit stuffing and destuffing procedures are used.
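The stuffing/destuffing procedure can be illustrated with a short sketch (not part of the original notes; the function names are invented):

```python
# Illustrative sketch of HDLC-style bit stuffing, which keeps the
# 01111110 flag pattern from appearing inside the frame body.

def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed bit
            run = 0
    return out

def bit_destuff(bits):
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:            # this is the stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip, run = True, 0
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0]   # would mimic the flag if sent raw
assert bit_stuff(data) == [0, 1, 1, 1, 1, 1, 0, 1, 0]
assert bit_destuff(bit_stuff(data)) == data
```

The receiver reverses the operation transparently, so the user data can contain any bit pattern.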
2. Address Field. Each address field may occupy either octets 2 to 3, octets 2 to 4, or
octets 2 to 5, depending on the range of the address in use. A two-octet address
field comprises the EA=ADDRESS FIELD EXTENSION BITS and the
C/R=COMMAND/RESPONSE BIT.
3. DLCI-Data Link Connection Identifier Bits. The DLCI serves to identify the
virtual connection so that the receiving end knows which information connection
a frame belongs to. Note that this DLCI has only local significance. A single
physical channel can multiplex several different virtual connections.
4. FECN, BECN, DE bits. These bits report congestion:
o FECN=Forward Explicit Congestion Notification bit
o BECN=Backward Explicit Congestion Notification bit
o DE=Discard Eligibility bit
5. Information Field. A system parameter defines the maximum number of data
bytes that a host can pack into a frame. Hosts may negotiate the actual maximum
frame length at call set-up time. The standard specifies the maximum information
field size (supportable by any network) as at least 262 octets. Since end-to-end
protocols typically operate on the basis of larger information units, frame relay
recommends that the network support the maximum value of at least 1600 octets
in order to avoid the need for segmentation and reassembling by end-users.
6. Frame Check Sequence (FCS) Field. Since one cannot completely ignore the bit
error rate of the medium, each switching node needs to implement error detection to
avoid wasting bandwidth due to the transmission of errored frames. The error detection
mechanism used in frame relay uses the cyclic redundancy check (CRC) as its basis.
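A CRC of this kind can be sketched as follows. This is a hedged illustration assuming the 16-bit CCITT polynomial commonly used for frame-level FCS fields; the function name and sample data are invented:

```python
# Bitwise CRC-16 using the CCITT polynomial x^16 + x^12 + x^5 + 1,
# the kind of cyclic redundancy check an FCS field is built on.

def crc16_ccitt(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

frame = b"frame relay payload"
fcs = crc16_ccitt(frame)
# A switching node recomputes the CRC and discards the frame on a
# mismatch: error detection only, not correction.
assert crc16_ccitt(frame) == fcs
assert crc16_ccitt(b"frame relay pAyload") != fcs
```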
Congestion-Control Mechanisms
Frame Relay reduces network overhead by implementing simple congestion-notification
mechanisms rather than explicit, per-virtual-circuit flow control. Frame Relay typically is
implemented on reliable network media, so data integrity is not sacrificed because flow
control can be left to higher-layer protocols. Frame Relay implements two congestion-notification
mechanisms: FECN and BECN.
Frame Relay versus X.25
The design of X.25 aimed to provide error-free delivery over links with high error-rates.
Frame relay takes advantage of the new links with lower error-rates, enabling it to
eliminate many of the services provided by X.25. The elimination of functions and fields,
combined with digital links, enables frame relay to operate at speeds 20 times greater
than X.25.
X.25 specifies processing at layers 1, 2 and 3 of the OSI model, while frame relay
operates at layers 1 and 2 only. This means that frame relay has significantly less
processing to do at each node, which improves throughput by an order of magnitude.
X.25 prepares and sends packets, while frame relay prepares and sends frames. X.25
packets contain several fields used for error and flow control, none of which frame relay
needs. The frames in frame relay contain an expanded address field that enables frame
relay nodes to direct frames to their destinations with minimal processing.
X.25 has a fixed bandwidth available. It uses or wastes portions of its bandwidth as the
load dictates. Frame relay can dynamically allocate bandwidth during call setup
negotiation at both the physical and logical channel level.
transferring voice and video traffic because such traffic is intolerant of delays that result
from having to wait for a large data packet to download, among other things. The figure
illustrates the basic format of an ATM cell.
Figure: An ATM Cell Consists of a Header and Payload Data
ATM layer. Combined with the ATM adaptation layer, the ATM layer is roughly
analogous to the data link layer of the OSI reference model. The ATM layer is
responsible for the simultaneous sharing of virtual circuits over a physical link (cell
multiplexing) and passing cells through the ATM network (cell relay). To do this, it uses
the VPI and VCI information in the header of each ATM cell.
ATM adaptation layer (AAL). Combined with the ATM layer, the AAL is roughly
analogous to the data link layer of the OSI model. The AAL is responsible for isolating
higher-layer protocols from the details of the ATM processes. The adaptation layer
prepares user data for conversion into cells and segments the data into 48-byte cell
payloads.
Finally, the higher layers residing above the AAL accept user data, arrange it into
packets, and hand it to the AAL. The figure illustrates the ATM reference model.
Diagram of the UNI ATM Cell

GFC (4 bits) | VPI (8 bits) | VCI (16 bits) | PT (3 bits) | CLP (1 bit) | HEC (8 bits) |
Payload (48 bytes)
addressing almost 2^12 VPs of up to almost 2^16 VCs each (in practice some of the VP
and VC numbers are reserved).
A Virtual Channel (VC) denotes the transport of ATM cells which have the same
unique identifier, called the Virtual Channel Identifier (VCI). This identifier is encoded in
the cell header. A virtual channel represents the basic means of communication between
two end-points, and is analogous to an X.25 virtual circuit.
A Virtual Path (VP) denotes the transport of ATM cells belonging to virtual channels
which share a common identifier, called the Virtual Path Identifier (VPI), which is also
encoded in the cell header. A virtual path, in other words, is a grouping of virtual
channels which connect the same end-points. This two layer approach results in improved
network performance. Once a virtual path is set up, the addition/removal of virtual
channels is straightforward.
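As a sketch of how the VPI and VCI are read out of a cell, the following parses the five header octets of a UNI cell, assuming the UNI layout of GFC(4), VPI(8), VCI(16), PT(3), CLP(1), HEC(8). This example is not from the notes; the function name and sample bytes are invented:

```python
# Unpack the routing and control fields of a 5-octet UNI cell header.

def parse_uni_header(hdr: bytes) -> dict:
    assert len(hdr) == 5
    b0, b1, b2, b3, hec = hdr
    return {
        "GFC": b0 >> 4,
        "VPI": ((b0 & 0x0F) << 4) | (b1 >> 4),
        "VCI": ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4),
        "PT":  (b3 >> 1) & 0x07,
        "CLP": b3 & 0x01,
        "HEC": hec,
    }

# VPI = 1, VCI = 5: a switch uses just these two labels to pick the
# outgoing link and the new label for the cell.
fields = parse_uni_header(bytes([0x00, 0x10, 0x00, 0x50, 0x00]))
assert fields["VPI"] == 1 and fields["VCI"] == 5
```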
Service Class: constant bit rate (CBR)
This class is used for emulating circuit switching. The cell rate is constant with time.
CBR applications are quite sensitive to cell-delay variation. Examples of applications
that can use CBR are telephone traffic (i.e., nx64 kbps), videoconferencing, and
television.

Service Class: variable bit rate non-real time (VBR-NRT)
This class allows users to send traffic at a rate that varies with time depending on the
availability of user information. Statistical multiplexing is provided to make optimum
use of network resources. Multimedia e-mail is an example of VBR-NRT.
Service Class: variable bit rate real time (VBR-RT)
This class is similar to VBR-NRT but is designed for applications that are sensitive to
cell-delay variation. Examples for real-time VBR are voice with speech activity
detection (SAD) and interactive compressed video.

Service Class: available bit rate (ABR)
This class of ATM services provides rate-based flow control and is aimed at data
traffic such as file transfer and e-mail. Although the standard does not require the cell
transfer delay and cell-loss ratio to be guaranteed or minimized, it is desirable for
switches to minimize delay and loss as much as possible. Depending upon the state of
congestion in the network, the source is required to control its rate. The users are
allowed to declare a minimum cell rate, which is guaranteed to the connection by the
network.

Service Class: unspecified bit rate (UBR)
This class is the catch-all, other class and is widely used today for TCP/IP.
The following ATM Adaptation Layer protocols (AALs) have been defined by the
ITU-T; they are meant to meet a variety of needs. The classification is based on
whether a timing relationship must be maintained between source and destination,
whether the application requires a constant bit rate, and whether the transfer is
connection-oriented or connectionless.
AAL Type 1 supports constant bit rate (CBR), synchronous, connection-oriented
traffic. Examples include T1 (DS1), E1, and nx64 kbit/s emulation.
AAL Type 2 supports time-dependent Variable Bit Rate (VBR-RT) of
connection-oriented, synchronous traffic. Examples include Voice over ATM.
AAL2 is also widely used in wireless applications due to the capability of
multiplexing voice packets from different users on a single ATM connection.
AAL Type 3/4 supports VBR, data traffic, connection-oriented, asynchronous
traffic (e.g. X.25 data) or connectionless packet data (e.g. SMDS traffic) with an
additional 4-byte header in the information payload of the cell. Examples include
Frame Relay and X.25.
www.tnlearner.net
AAL Type 5 is similar to AAL 3/4 with a simplified information header scheme.
This AAL assumes that the data is sequential from the end user and uses the
Payload Type Indicator (PTI) bit to indicate the last cell in a transmission.
Examples of services that use AAL 5 are classic IP over ATM, Ethernet Over
ATM, SMDS, and LAN Emulation (LANE). AAL 5 is a widely used ATM
adaptation layer protocol. This protocol was intended to provide a streamlined
transport facility for higher-layer protocols that are connection-oriented.
The AAL1 PDU
The structure of the AAL1 PDU is given in the following illustration:

SN (4 bits: CSI 1 bit, SC 3 bits) | SNP (4 bits: CRC 3 bits, EPC 1 bit) | Payload (47
bytes)

AAL1 PDU
SN
Sequence number. Numbers the stream of SAR PDUs of a CPCS PDU (modulo 16). The
sequence number is comprised of the CSI and the SC.
CSI
Convergence sublayer indicator. Used for residual time stamp for clocking.
SC
Sequence count. The sequence number for the entire CS PDU, which is generated by the
Convergence Sublayer.
SNP
Sequence number protection. Protects the sequence number with a CRC and a parity bit.
PDU payload
The 47-byte payload of the AAL1 PDU.
AAL2
AAL2 provides bandwidth-efficient transmission of low-rate, short and variable packets
in delay sensitive applications. It supports VBR and CBR. AAL2 also provides for
variable payload within cells and across cells. AAL type 2 is subdivided into the
Common Part Sublayer (CPS ) and the Service Specific Convergence Sublayer (SSCS ).
AAL2 CPS Packet
The CPS packet consists of a 3-octet header followed by a payload. The structure of the
AAL2 CPS packet is shown in the following illustration.

CID (8 bits) | LI (6 bits) | UUI (5 bits) | HEC (5 bits) | CPS packet payload

CPS packets are carried in the CPS-PDU payload behind a one-octet start field:

Start field (OSF 6 bits, SN 1 bit, P 1 bit) | CPS-PDU payload | PAD (0-47 bytes)
SN
Sequence number. Protects data integrity.
P
Parity. Protects the start field from errors.
SAR PDU payload
Information field of the SAR PDU.
PAD
Padding.
AAL2 SSCS Packet
The SSCS conveys narrowband calls consisting of voice, voiceband data or circuit mode
data. SSCS packets are transported as CPS packets over AAL2 connections. The CPS
packet contains a SSCS payload. There are 3 SSCS packet types.
Type 1 Unprotected; this is used by default.
Type 2 Partially protected.
Type 3 Fully protected: the entire payload is protected by a 10-bit CRC which is
computed as for OAM cells. The remaining 2 bits of the 2-octet trailer consist of the
message type field.
AAL2 SSCS Type 3 Packets:
The type 3 packets are used for the following:
Dialled digits
Channel associated signalling bits
Facsimile demodulated control data
Alarms
User state control operations.
The following illustration gives the general structure of AAL2 SSCS Type 3 PDUs. The
format varies and each message has its own format according to the actual message type.

Redundancy | Time stamp (14 bits) | Message dependent information (16 bits) |
Message type | CRC-10 (10 bits)
Redundancy
Packets are sent 3 times to ensure error correction. The value in this field signifies the
transmission number.
Time stamp
Counters packet delay variation and allows a receiver to accurately reproduce the relative
timing of successive events separated by a short interval.
Message dependent information
Packet content that varies, depending on the message type.
Message type
The message type code.
CRC-10
The 10-bit CRC.
AAL3/4
AAL3/4 consists of message and streaming modes. It provides for point-to-point and
point-to-multipoint (ATM layer) connections. The Convergence Sublayer (CS) of the
ATM Adaptation Layer (AAL) is divided into two parts: service specific (SSCS ) and
common part (CPCS ). This is illustrated in the following diagram:
AAL3/4 packets are used to carry computer data, mainly SMDS traffic.
AAL3/4 CPCS PDU
The functions of the AAL3/4 CPCS include connectionless network layer (Class D),
meaning no need for an SSCS; and frame relaying telecommunication service in Class C.
The CPCS PDU is composed of the following fields:

Header: CPI | Btag | BAsize
Info: CPCS SDU (0-65535 bytes) | Pad (0-3 bytes)
Trailer: 0 | Etag | Length (2 bytes)
CPI
Message type. Set to zero when the BAsize and Length fields are encoded in bytes.
Btag
Beginning tag. This is an identifier for the packet. It is repeated as the Etag.
BAsize
Buffer allocation size. Size (in bytes) that the receiver has to allocate to capture all the
data.
CPCS SDU
Variable information field up to 65535 bytes.
PAD
Padding field which is used to achieve 32-bit alignment of the length of the packet.
0
All-zero.
Etag
End tag. Must be the same as Btag.
Length
Must be the same as BAsize.
AAL3/4 SAR PDU
The structure of the AAL3/4 SAR PDU is illustrated below:

2-byte header: ST (2 bits) | SN (4 bits) | MID (10 bits)
Information (44 bytes = 352 bits)
2-byte trailer: LI (6 bits) | CRC (10 bits)
Total: 48 bytes

AAL3/4 SAR PDU
ST
Segment type. Values may be as follows:
SN
Sequence number. Numbers the stream of SAR PDUs of a CPCS PDU (modulo 16).
MID
Multiplexing identification. This is used for multiplexing several AAL3/4 connections
over one ATM link.
Information
This field has a fixed length of 44 bytes and contains parts of the CPCS PDU.
LI
Length indication. Contains the length of the SAR SDU in bytes, as follows:
CRC
Cyclic redundancy check.
Functions of AAL3/4 SAR include identification of SAR SDUs; error indication and
handling; SAR SDU sequence continuity; multiplexing and demultiplexing.
AAL5 The type 5 adaptation layer is a simplified version of AAL3/4. It also consists of
message and streaming modes, with the CS divided into the service specific and common
part. AAL5 provides point-to-point and point-to-multipoint (ATM layer) connections.
AAL5 is used to carry computer data such as TCP/IP. It is the most popular AAL and is
sometimes referred to as SEAL (simple and easy adaptation layer).
AAL5 CPCS PDU
The AAL5 CPCS PDU is composed of the following fields:
Info: CPCS payload (0-65535 bytes) | Pad (0-47 bytes) | Trailer
Length
Length of the user information without the Pad.
CRC
CRC-32. Used to allow identification of corrupted transmission.
AAL5 SAR PDU
The structure of the AAL5 CS PDU is as follows:

Information (1-48 bytes) | PAD (0-47 bytes) | UU | CPI | Length | CRC-32 (4 bytes)

The UU, CPI, Length and CRC-32 fields form the 8-byte trailer.
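The padding arithmetic behind the AAL5 CPCS PDU (pad so that the payload plus the 8-byte trailer fills whole 48-byte cells) can be sketched like this; the function name, the UU/CPI defaults and the use of zlib's CRC-32 are illustrative assumptions, not taken from the notes:

```python
# Build an AAL5-style CPCS PDU: payload + pad + 8-byte trailer must be a
# whole number of 48-byte cell payloads.
import struct
import zlib

def aal5_cpcs_pdu(payload: bytes, uu: int = 0, cpi: int = 0) -> bytes:
    pad_len = -(len(payload) + 8) % 48      # 0-47 bytes of padding
    body = payload + bytes(pad_len) + struct.pack("!BBH", uu, cpi, len(payload))
    crc = zlib.crc32(body) & 0xFFFFFFFF     # CRC-32 over payload, pad, trailer head
    pdu = body + struct.pack("!I", crc)
    assert len(pdu) % 48 == 0               # whole number of cells
    return pdu

# 100 bytes of data -> 36 bytes pad + 8-byte trailer = 144 bytes = 3 cells
assert len(aal5_cpcs_pdu(b"x" * 100)) == 144
```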
High-Speed LANs
Emergence of High-Speed LANs
2 Significant trends
Computing power of PCs continues to grow rapidly
Network computing
Examples of requirements
Centralized server farms
Power workgroups
High-speed local backbone
Classical Ethernet
Bus topology LAN
10 Mbps
CSMA/CD medium access control protocol
2 problems:
A transmission from any station can be received by all stations
How to regulate transmission
CSMA/CD rules: if the medium is idle, transmit; if busy, listen until idle, then transmit
immediately.
If a collision is detected during transmission, immediately cease transmitting.
After a collision, wait a random amount of time, then attempt to transmit again (repeat
the procedure).
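The random wait in the last rule is, in classical Ethernet, truncated binary exponential backoff; a minimal sketch under that assumption (the 51.2 us slot time of 10 Mbps Ethernet and the cap of 10 are illustrative):

```python
# Truncated binary exponential backoff after a collision.
import random

SLOT_TIME_US = 51.2   # slot time for 10 Mbps Ethernet

def backoff_delay_us(attempt: int) -> float:
    """Random delay before the attempt-th retransmission (attempt >= 1)."""
    k = min(attempt, 10)                  # truncate the exponent
    slots = random.randrange(0, 2 ** k)   # uniform in [0, 2^k - 1]
    return slots * SLOT_TIME_US

# After the 3rd collision the station waits between 0 and 7 slot times.
assert 0 <= backoff_delay_us(3) <= 7 * SLOT_TIME_US
```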
Star topology (hub or multipoint repeater at central point)
Bridge
Frame handling done in software
Analyze and forward one frame at a time
Store-and-forward
Layer 2 Switch
Frame handling done in hardware
Multiple data paths and can handle multiple frames at a time
Can do cut-through
Layer 2 Switches
Flat address space
Broadcast storm
Only one path between any 2 devices
Solution 1: subnetworks connected by routers
Solution 2: layer 3 switching, packet-forwarding logic in hardware
I/O channel
Hardware based, high-speed, short distance
Direct point-to-point or multipoint communications link
Data type qualifiers for routing payload
Link-level constructs for individual I/O operations
Protocol specific specifications to support e.g. SCSI
Fibre Channel Network-Oriented Facilities
Full multiplexing between multiple destinations
Peer-to-peer connectivity between any pair of ports
Internetworking with other connection technologies
Fibre Channel Requirements
Full duplex links with 2 fibres/link
100 Mbps to 800 Mbps
Distances up to 10 km
Small connectors
high-capacity
Greater connectivity than existing multidrop channels
Broad availability
Support for multiple cost/performance levels
Support for multiple existing interface command sets
Fibre Channel Protocol Architecture
FC-0 Physical Media
FC-1 Transmission Protocol
FC-2 Framing Protocol
FC-3 Common Services
FC-4 Mapping
Unit II
Queueing analysis
In queueing theory, a queueing model is used to approximate a real queueing
situation or system, so the queueing behaviour can be analysed
mathematically. Queueing models allow a number of useful steady state
performance measures to be determined, including the average number of customers in
the system, the average queue length, the average time spent in the system, the average
waiting time, and the server utilisation.
Single-server queue
Single-server queues are, perhaps, the most commonly encountered queueing
situation in real life. One encounters a queue with a single server in many
situations, including business (e.g. sales clerk), industry (e.g. a production
line), transport (e.g. a bus, a taxi rank, an intersection), telecommunications
(e.g. Telephone line), computing (e.g. processor sharing). Even where there are
multiple servers handling the situation it is possible to consider each server
individually as part of the larger system, in many cases (e.g. a supermarket
checkout has several single-server queues that the customer can select from).
Consequently, being able to model and analyse a single server queue's
behaviour is a particularly useful thing to do.
Multiple-servers queue
Multiple (identical)-servers queue situations are frequently encountered in
telecommunications or a customer service environment. When modelling these
situations care is needed to ensure that it is a multiple servers queue, not a
network of single server queues, because results may differ depending on how
the queuing model behaves.
One observational insight provided by comparing queuing models is that a
single queue with multiple servers performs better than each server having
their own queue and that a single large pool of servers performs better than two
or more smaller pools, even though there are the same total number of servers
in the system.
One simple example to prove the above fact is as follows: Consider a system
having 8 input lines and a single queue, whose output line has a capacity of
64 kbit/s. Taking the arrival rate at each input as 2 packets/s, the total arrival
rate is λ = 16 packets/s. With an average of 2000 bits per packet, the service
rate is μ = 64 kbit/s / 2000 bits = 32 packets/s. Hence, the average response
time of the system is 1/(μ − λ) = 1/(32 − 16) = 0.0625 s. Now, consider a second
system with 8 queues, one for each server. Each of the 8 output lines has a
capacity of 8 kbit/s, giving μ = 4 packets/s and λ = 2 packets/s per queue. The
calculation yields the response time 1/(μ − λ) = 1/(4 − 2) = 0.5 s. And the
average waiting time in the queue, ρ/(μ − λ), is 0.5/16 = 0.03125 s in the first
case, while in the second case it is 0.5/2 = 0.25 s.
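The arithmetic can be checked in a few lines, using the standard M/M/1 results T = 1/(μ − λ) and Wq = ρ/(μ − λ); the function name is invented:

```python
# M/M/1 mean time in system and mean waiting time in queue.

def mm1(lam: float, mu: float):
    rho = lam / mu
    T = 1.0 / (mu - lam)       # mean response time (time in system)
    Wq = rho / (mu - lam)      # mean waiting time in queue
    return T, Wq

T1, Wq1 = mm1(16, 32)   # one fast 64 kbit/s line
T2, Wq2 = mm1(2, 4)     # one of the eight slow 8 kbit/s lines

assert abs(T1 - 0.0625) < 1e-12 and abs(Wq1 - 0.03125) < 1e-12
assert abs(T2 - 0.5) < 1e-12 and abs(Wq2 - 0.25) < 1e-12
```

The single fast line wins on both measures, which is the point of the example.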
How do customers arrive in the restaurant? Are customer arrivals more during
lunch and dinner time (a regular restaurant)? Or is the customer traffic more
uniformly distributed (a cafe)?
How much time do customers spend in the restaurant? Do customers typically
leave the restaurant in a fixed amount of time? Does the customer service time
vary with the type of customer?
How many tables does the restaurant have for servicing customers?
The above three points correspond to the most important characteristics of a
queueing system. They are explained below:
Arrival Process
The probability density distribution that determines the customer arrivals in the system.
In a messaging system, this refers to the message arrival probability distribution.

Service Process
The probability density distribution that determines the customer service times in the
system. In a messaging system, this refers to the message transmission time distribution.
Since message transmission is directly proportional to the length of the message, this
parameter indirectly refers to the message length distribution.

Number of Servers
Number of servers available to service the customers. In a messaging system, this refers
to the number of links between the source and destination nodes.
Examples of queueing systems that can be defined with this convention are:
M/M/1: This is the simplest queueing system to analyze. Here the arrival and
service times are negative-exponentially distributed (Poisson process). The
system consists of only one server. This queueing system can be applied to a
wide variety of problems, as any system with a very large number of
independent customers can be approximated as a Poisson process. Using a
Poisson process for service time, however, is not applicable in many
applications and is only a crude approximation. Refer to M/M/1 Queueing
System for details.
M/D/n: Here the arrival process is Poisson and the service time distribution is
deterministic. The system has n servers (e.g. a ticket booking counter with n
cashiers, where the service time can be assumed to be the same for all customers).
G/G/n: This is the most general queueing system where the arrival and service
time processes are both arbitrary. The system has n servers. No analytical
solution is known for this queueing system.
Markovian arrival processes
In queueing theory, Markovian arrival processes are used to model the arrival of
customers to a queue.
Some of the most common include the Poisson process, the Markovian arrival
process and the batch Markovian arrival process.
A Markovian arrival process has two component processes. One is a continuous-time
Markov process j(t), which is generated by a generator or rate matrix, Q. The other
is a counting process N(t), whose state space is the set of all natural numbers. N(t)
increases every time there is a marked transition in j(t).
Poisson process
The Poisson arrival process or Poisson process counts the number of arrivals,
each of which has an exponentially distributed time between arrivals. In the most
general case this can be represented by the rate matrix,
Handily, this result applies to any system, and particularly, it applies to systems
within systems. So in a bank, the queue might be one subsystem, and each of
the tellers another subsystem, and Little's result could be applied to each one,
as well as the whole thing. The only requirement is that the system is stable:
it can't be in some transition state such as just starting up or just shutting down.
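Little's result N = λT can be spot-checked numerically with a small simulation; the parameter values below are arbitrary, and the exponential arrival/service assumptions are just one convenient choice:

```python
# Simulate a single-server FIFO queue and check N ~ lambda * T.
import random

random.seed(42)
lam, mu, n_jobs = 5.0, 8.0, 100_000

arrivals, departures, clock, free_at = [], [], 0.0, 0.0
for _ in range(n_jobs):
    clock += random.expovariate(lam)           # Poisson arrivals
    start = max(clock, free_at)                # wait if the server is busy
    free_at = start + random.expovariate(mu)   # exponential service
    arrivals.append(clock)
    departures.append(free_at)

sojourn = sum(d - a for a, d in zip(arrivals, departures))
T = sojourn / n_jobs                 # mean time in system
lam_hat = n_jobs / arrivals[-1]      # observed arrival rate
N = sojourn / departures[-1]         # time-average number in system

assert abs(N - lam_hat * T) / (lam_hat * T) < 0.05   # Little's result holds
```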
Mathematical formalization of Little's theorem
Let α(t) be the number of arrivals to some system in the interval [0, t]. Let β(t) be the
number of departures from the same system in the interval [0, t]. Both α(t) and β(t) are
integer-valued increasing functions by their definition. Let Tt be the mean time
spent in the system (during the interval [0, t]) for all the customers who were in
the system during the interval [0, t]. Let Nt be the mean number of customers
in the system over the duration of the interval [0, t].
If the following limits exist,
Ideal Performance
Effects of Congestion
Congestion-Control Mechanisms
Backpressure
Request from destination to source to reduce rate
Useful only on a logical connection basis
Requires hop-by-hop flow control mechanism
Policing
Measuring and restricting packets as they enter the network
Choke packet
Specific message back to source
E.g., ICMP Source Quench
Implicit congestion signaling
Source detects congestion from transmission delays and lost packets
and reduces flow
Explicit congestion signaling
Frame Relay implements two congestion-notification mechanisms:
FECN and BECN are each controlled by a single bit contained in the Frame Relay
frame header. The Frame Relay frame header also contains a Discard Eligibility (DE)
bit, which is used to identify less important traffic that can be dropped during periods
of congestion.
The FECN bit is part of the Address field in the Frame Relay frame header. The
FECN mechanism is initiated when a DTE device sends Frame Relay frames into the
network. If the network is congested, DCE devices (switches) set the value of the
frames' FECN bit to 1. When the frames reach the destination DTE device, the
Address field (with the FECN bit set) indicates that the frame experienced congestion
in the path from source to destination. The DTE device can relay this information to a
higher-layer protocol for processing. Depending on the implementation, flow control
may be initiated, or the indication may be ignored.
The BECN bit is part of the Address field in the Frame Relay frame header. DCE
devices set the value of the BECN bit to 1 in frames traveling in the opposite direction
of frames with their FECN bit set. This informs the receiving DTE device that a
particular path through the network is congested. The DTE device then can relay this
information to a higher-layer protocol for processing. Depending on the
implementation, flow-control may be initiated, or the indication may be ignored.
Frame Relay Discard Eligibility
The Discard Eligibility (DE) bit is used to indicate that a frame has lower importance
than other frames. The DE bit is part of the Address field in the Frame Relay frame
header.
DTE devices can set the value of the DE bit of a frame to 1 to indicate that the frame
has lower importance than other frames. When the network becomes congested, DCE
devices will discard frames with the DE bit set before discarding those that do not.
This reduces the likelihood of critical data being dropped by Frame Relay DCE
devices during periods of congestion.
Frame Relay Error Checking
Frame Relay uses a common error-checking mechanism known as the cyclic
redundancy check (CRC). The CRC compares two calculated values to determine
whether errors occurred during the transmission from source to destination. Frame
Relay reduces network overhead by implementing error checking rather than error
correction. Frame Relay typically is implemented on reliable network media, so data
integrity is not sacrificed because error correction can be left to higher-layer protocols
running on top of Frame Relay.
Traffic Management
Some considerations in a congested network:
Fairness
each frame handler monitors its queuing behavior and takes action
Frame Relay Traffic Rate Management Parameters
Committed Information Rate (CIR)
Average data rate in bits/second that the network agrees to support for a
connection
Data Rate of User Access Channel (Access Rate)
Fixed rate link between user and network (for network access)
Committed Burst Size (Bc)
Maximum amount of data that the network agrees to transfer, under normal
conditions, over a measurement interval
Excess Burst Size (Be)
Maximum data, above Bc, over an interval that the network will attempt to
transfer
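A sketch of how a switch might enforce these parameters within one measurement interval T = Bc/CIR: cumulative traffic up to Bc is forwarded, traffic within an excess allowance above Bc is forwarded with DE set, and anything beyond that is discarded. The function name, the excess allowance Be, and the numbers are illustrative:

```python
# Classify frames arriving within a single measurement interval.

def police(frame_bits, Bc, Be):
    decisions, total = [], 0
    for bits in frame_bits:
        total += bits
        if total <= Bc:
            decisions.append("forward")          # within committed burst
        elif total <= Bc + Be:
            decisions.append("forward, DE=1")    # excess: mark discard-eligible
        else:
            decisions.append("discard")          # beyond the excess allowance
    return decisions

# CIR = 64 kbps with T = 1 s gives Bc = 64000 bits; assume Be = 32000.
out = police([30000, 30000, 30000, 40000], Bc=64000, Be=32000)
assert out == ["forward", "forward", "forward, DE=1", "discard"]
```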
Unit III
Credit Policy
Receiver needs a policy for how much credit to give sender
Conservative approach: grant credit up to limit of available buffer space
May limit throughput in long-delay situations
Optimistic approach: grant credit based on expectation of freeing space before
data arrives
Effect of Window Size
W = TCP window size (octets)
R = Data rate (bps) at TCP source
D = Propagation delay (seconds)
After TCP source begins transmitting, it takes D seconds for first octet to arrive,
and D seconds for acknowledgement to return
TCP source could transmit at most 2RD bits, or RD/4 octets
Normalized Throughput S:

S = 1        if W ≥ RD/4
S = 4W/RD    if W < RD/4
Complicating Factors
Multiple TCP connections are multiplexed over same network interface, reducing
R and efficiency
For multi-hop connections, D is the sum of delays across each network plus
delays at each router
If source data rate R exceeds data rate on one of the hops, that hop will be a
bottleneck
Lost segments are retransmitted, reducing throughput. Impact depends on
retransmission policy
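The window-limited throughput formula can be put in executable form; the example numbers are invented:

```python
# Normalized TCP throughput as a function of window size W (octets),
# source rate R (bps) and one-way propagation delay D (seconds):
# S = 1 when W >= RD/4, else S = 4W/RD.

def normalized_throughput(W: float, R: float, D: float) -> float:
    limit = R * D / 4          # octets the pipe holds in one round trip
    return 1.0 if W >= limit else 4 * W / (R * D)

# R = 1 Mbps, D = 10 ms  =>  RD/4 = 2500 octets
assert normalized_throughput(4000, 1e6, 0.01) == 1.0   # window fills the pipe
assert normalized_throughput(1250, 1e6, 0.01) == 0.5   # half the pipe
```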
Retransmission Strategy
TCP relies exclusively on positive acknowledgements and retransmission on
acknowledgement timeout
There is no explicit negative acknowledgement
Retransmission required when:
Segment damaged in transit
Segment fails to arrive
Timers
A timer is associated with each segment as it is sent
If timer expires before segment acknowledged, sender must retransmit
Key Design Issue:
value of retransmission timer
Too small: many unnecessary retransmissions, wasting network bandwidth
Too large: delay in handling lost segment
Two Strategies
Timer should be longer than round-trip delay (send segment, receive ack)
Delay is variable
Strategies:
Fixed timer
Adaptive:
ARTT(K + 1) = (K/(K + 1)) ARTT(K) + (1/(K + 1)) RTT(K + 1)
RFC 793 Retransmission Timeout
SRTT(K + 1) = α SRTT(K) + (1 − α) RTT(K + 1)
RTO(K + 1) = Min(UB, Max(LB, β SRTT(K + 1)))
UB, LB: prechosen fixed upper and lower bounds
Example values for α, β:
0.8 < α < 0.9, 1.3 < β < 2.0
Sender cannot tell whether a loss is due to transmission error or to congestion
On the Internet, loss at the bottleneck is typically due to congestion
Retransmission Timer Management
Three Techniques to calculate retransmission timer (RTO):
RTT Variance Estimation
Exponential RTO Backoff
Karn's Algorithm
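The first technique, RTT variance estimation, is Jacobson's algorithm; a sketch with the conventional gains g = 1/8 and h = 1/4 and RTO = SRTT + 4·SDEV (the RTT samples are invented):

```python
# Jacobson's RTT variance estimation: smooth the RTT and its mean
# deviation, then set the retransmission timeout from both.

def jacobson_rto(rtt_samples, g=0.125, h=0.25):
    srtt, sdev = rtt_samples[0], rtt_samples[0] / 2
    for rtt in rtt_samples[1:]:
        err = rtt - srtt
        srtt += g * err                 # smoothed round-trip time
        sdev += h * (abs(err) - sdev)   # smoothed mean deviation
    return srtt + 4 * sdev              # retransmission timeout

# A jittery path earns a larger safety margin than a steady one.
steady = jacobson_rto([0.100] * 20)
jittery = jacobson_rto([0.100, 0.300] * 10)
assert jittery > steady
```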
Karn's Algorithm
Do not use measured RTT to update SRTT and SDEV
Calculate backoff RTO when a retransmission occurs
Use backoff RTO for segments until an ack arrives for a segment that has not been
retransmitted
Then use Jacobson's algorithm to calculate RTO
Window Management
Slow start
Dynamic window sizing on congestion
Fast retransmit
Fast recovery
Limited transmit
Slow Start
awnd = MIN[ credit, cwnd]
where
awnd = allowed window in segments
cwnd = congestion window in segments
credit = amount of unused credit granted in most recent ack
cwnd = 1 for a new connection and increased by 1 for each ack received, up to a
maximum
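The slow-start bookkeeping above can be sketched as follows (names invented; one ack per segment is assumed for simplicity):

```python
# Slow start: cwnd starts at 1 segment and grows by 1 per ack, which
# doubles the window each round trip; the allowed window awnd is capped
# by the receiver's credit.

def slow_start_rounds(credit: int, rounds: int):
    cwnd, history = 1, []
    for _ in range(rounds):
        awnd = min(credit, cwnd)    # awnd = MIN[credit, cwnd]
        history.append(awnd)
        cwnd += awnd                # one ack received per segment sent
    return history

# With ample credit the window doubles each round trip...
assert slow_start_rounds(credit=64, rounds=5) == [1, 2, 4, 8, 16]
# ...until the receiver's credit caps the growth.
assert slow_start_rounds(credit=6, rounds=5) == [1, 2, 4, 6, 6]
```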
Effect of Slow Start
Fast Retransmit
RTO is generally noticeably longer than actual RTT
If a segment is lost, TCP may be slow to retransmit
TCP rule: if a segment is received out of order, an ack must be issued immediately for
the last in-order segment
Fast Retransmit rule: if 4 acks received for same segment, highly likely it was lost, so
retransmit immediately, rather than waiting for timeout
Fast Recovery
When TCP retransmits a segment using Fast Retransmit, a segment was assumed lost
Congestion avoidance measures are appropriate at this point
E.g., slow-start/congestion avoidance procedure
This may be unnecessarily conservative since multiple acks indicate segments are
getting through
Fast Recovery: retransmit lost segment, cut cwnd in half, proceed with linear increase
of cwnd
This avoids initial exponential slow-start
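Fast retransmit and fast recovery together can be sketched as below. This is a simplified sketch with hypothetical names; real TCP implementations also temporarily inflate cwnd by one segment per additional duplicate ack.

```python
# Sketch of fast retransmit + fast recovery: on the 3rd duplicate ack
# (4th ack for the same segment) retransmit at once, halve cwnd, and
# continue with linear (congestion-avoidance) growth instead of slow start.

def on_duplicate_ack(dup_count, cwnd, ssthresh):
    dup_count += 1
    retransmit = False
    if dup_count == 3:                 # 4th ack for the same segment
        retransmit = True
        ssthresh = max(cwnd // 2, 2)   # cut cwnd in half
        cwnd = ssthresh                # skip slow start, proceed linearly
    return dup_count, cwnd, ssthresh, retransmit
```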
Limited Transmit
If congestion window at sender is small, fast retransmit may not get triggered, e.g.,
cwnd = 3
Under what circumstances does sender have small congestion window?
Is the problem common?
If the problem is common, why not reduce number of duplicate acks needed to trigger
retransmit?
Limited Transmit Algorithm
Sender can transmit new segment when 3 conditions are met:
Two consecutive duplicate acks are received
Destination advertised window allows transmission of segment
Amount of outstanding data after sending is less than or equal to cwnd + 2
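The three conditions can be sketched as a single check. The function is hypothetical; `awnd_allows` stands in for the advertised-window test on the new segment.

```python
# Sketch of the Limited Transmit check: the sender may transmit one new
# segment on the second consecutive duplicate ack, provided the
# destination's advertised window permits it and outstanding data stays
# within cwnd + 2.

def may_send_new_segment(dup_acks, awnd_allows, outstanding, cwnd):
    return (dup_acks == 2                    # two consecutive duplicate acks
            and awnd_allows                  # advertised window allows it
            and outstanding + 1 <= cwnd + 2) # bounded beyond cwnd by 2
```

The extra segments generate further acks, giving fast retransmit a chance to trigger even when cwnd is as small as 2 or 3.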
Performance of TCP over ATM
How best to manage TCP's segment size, window management and congestion
control
at the same time as ATM's quality of service and traffic control policies
TCP may operate end-to-end over one ATM network, or there may be multiple ATM
LANs or WANs interconnected by non-ATM networks
Smaller buffers increase the probability of dropped cells
Larger segment size increases number of useless cells transmitted if a single cell
dropped
Partial Packet and Early Packet Discard
Reduce the transmission of useless cells
Work on a per-virtual circuit basis
Partial Packet Discard
If a cell is dropped, then drop all subsequent cells in that segment (i.e., look for cell with
SDU type bit set to one)
Early Packet Discard
When a switch buffer reaches a threshold level, preemptively discard all cells in a
segment
Selective Drop
Ideally, N/V cells buffered for each of the V virtual circuits
W(i) = N(i) / (N/V) = N(i) × V / N
If N > R and W(i) > Z
then drop next new packet on VC i
Z is a parameter to be chosen
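The selective-drop test can be sketched as below; the names are hypothetical, with R the buffer load threshold and Z the fairness threshold from the text.

```python
# Sketch of Selective Drop / Fair Buffer Allocation: W(i) = N(i) / (N/V).
# Drop new arrivals on VC i when the buffer is loaded (N > R) and VC i
# holds more than its fair share of buffered cells (W(i) > Z).

def should_drop(n_i, n_total, num_vcs, load_threshold_r, z):
    w_i = n_i / (n_total / num_vcs)   # VC i's occupancy relative to fair share
    return n_total > load_threshold_r and w_i > z
```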
ATM Switch Buffer Layout
Good performance of TCP over UBR can be achieved with minor adjustments to switch
mechanisms
This reduces the incentive to use the more complex and more expensive ABR service
Performance and fairness of ABR quite sensitive to some ABR parameter settings
Overall, ABR does not provide a significant performance improvement over the simpler
and less expensive UBR-EPD or UBR-EPD-FBA
E.g. data rate 150 Mbps
Takes (53 × 8 bits)/(150 × 10^6 bps) = 2.8 × 10^-6 seconds to insert a cell
Transfer time depends on number of intermediate switches, switching time and
propagation delay. Assuming no switching delay and speed of light propagation,
round trip delay of 48 × 10^-3 sec across USA
A dropped cell notified by return message will arrive after source has transmitted
N further cells
N = (48 × 10^-3 seconds)/(2.8 × 10^-6 seconds per cell)
= 1.7 × 10^4 cells = 7.2 × 10^6 bits
i.e. over 7 Mbits
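The arithmetic above can be checked numerically with the values from the example:

```python
# Numerical check of the cell-insertion-time and in-flight-data example.

cell_bits = 53 * 8                    # one ATM cell = 424 bits
rate = 150e6                          # 150 Mbps link
insert_time = cell_bits / rate        # ~2.8e-6 s to insert one cell
rtt = 48e-3                           # cross-USA round trip, seconds
cells_in_flight = rtt / insert_time   # ~1.7e4 cells sent before feedback
bits_in_flight = cells_in_flight * cell_bits   # ~7.2e6 bits (over 7 Mbits)
```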
Cell Delay Variation
For digitized voice delay across network must be small
Rate of delivery must be constant
Variations will occur
Dealt with by Time Reassembly of CBR cells (see next slide)
Results in cells delivered at CBR with occasional gaps due to dropped cells
Subscriber requests minimum cell delay variation from network provider
Increase data rate at UNI relative to load
Increase resources within network
Time Reassembly of CBR Cells
Required for VBR
Minimum cell rate
Min commitment requested of network
Can be zero
Used with ABR and GFR
ABR & GFR provide rapid access to spare network capacity up to PCR
PCR − MCR represents elastic component of data flow
Shared among ABR and GFR flows
Maximum frame size
Max number of cells in frame that can be carried over GFR connection
Only relevant in GFR
Connection Traffic Descriptor
Includes source traffic descriptor plus:
Cell delay variation tolerance
Amount of variation in cell delay introduced by network interface and UNI
Bound on delay variability due to slotted nature of ATM, physical layer
overhead and layer functions (e.g. cell multiplexing)
Represented by time variable
Conformance definition
Specifies conforming cells of connection at UNI
Enforced by dropping or tagging cells that exceed the conformance definition
Quality of Service Parameters-maxCTD
Cell transfer delay (CTD)
Time between transmission of first bit of cell at source and reception of last
bit at destination
Typically has probability density function (see next slide)
Fixed delay due to propagation etc.
Cell delay variation due to buffering and scheduling
Maximum cell transfer delay (maxCTD) is max requested delay for connection
Fraction α of cells exceed this threshold
Discarded or delivered late
Peak-to-peak CDV & CLR
Peak-to-peak Cell Delay Variation
Remaining (1 − α) cells within QoS
Delay experienced by these cells is between fixed delay and maxCTD
This is peak-to-peak CDV
CDVT is an upper bound on CDV
Cell loss ratio
Ratio of cells lost to cells transmitted
No knowledge of QoS for individual VCC
User checks that VPC can meet VCCs' demands
User-to-network applications
VPC between UNI and network node
Network aware of and accommodates QoS of VCCs
Network-to-network applications
VPC between two network nodes
Network aware of and accommodates QoS of VCCs
VPC capacity >= average data rate of VCCs but < aggregate peak demand
Greater CDV and CTD
May have greater CLR
More efficient use of capacity
For VCCs requiring lower QoS
Group VCCs of similar traffic together
VPC level more important
Network resources allocated at this level
UPC Actions
Compliant cells pass, non-compliant cells discarded
If no additional resources are allocated to CLP=1 traffic, CLP=0 cells are policed as above
With two-level cell loss priority, a cell with:
CLP=0 that conforms passes
CLP=0 non-compliant for CLP=0 traffic but compliant for CLP=0+1 is
tagged and passes
CLP=0 non-compliant for CLP=0 and CLP=0+1 traffic is discarded
CLP=1 compliant for CLP=0+1 passes
CLP=1 non-compliant for CLP=0+1 discarded
Possible Actions of UPC
Provide feedback to sources to adjust load
Avoid cell loss
Share capacity fairly
Used for ABR
Characteristics of ABR
ABR connections share available capacity
Access instantaneous capacity unused by CBR/VBR
Increases utilization without affecting CBR/VBR QoS
Share used by single ABR connection is dynamic
Varies between agreed MCR and PCR
Network gives feedback to ABR sources
ABR flow limited to available capacity
Buffers absorb excess traffic prior to arrival of feedback
Low cell loss
Major distinction from UBR
Feedback Mechanisms
Cell transmission rate characterized by:
Allowable cell rate
Current rate
Minimum cell rate
Min for ACR
May be zero
Peak cell rate
Max for ACR
Initial cell rate
Start with ACR=ICR
Adjust ACR based on feedback
Feedback in resource management (RM) cells
Cell contains three fields for feedback
Congestion indicator bit (CI)
No increase bit (NI)
Explicit cell rate field (ER)
Source Reaction to Feedback
If CI=1
Reduce ACR by amount proportional to current ACR but not less than MCR
Else if NI=0
Increase ACR by amount proportional to PCR but not more than PCR
If ACR > ER, set ACR ← max[ER, MCR]
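The source reaction rules can be sketched as below. The function name and the rate increase/decrease factors RIF = RDF = 1/16 are illustrative assumptions; real networks negotiate these per connection.

```python
# Sketch of an ABR source's reaction to the CI, NI and ER fields of a
# returning RM cell.

def react(acr, mcr, pcr, ci, ni, er, rif=1/16, rdf=1/16):
    if ci:
        acr = max(acr - acr * rdf, mcr)   # multiplicative decrease, floor at MCR
    elif not ni:
        acr = min(acr + pcr * rif, pcr)   # additive increase, ceiling at PCR
    if acr > er:
        acr = max(er, mcr)                # obey explicit rate, floor at MCR
    return acr
```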
Cell Flow on ABR
Two types of cell
Data & resource management (RM)
Source receives regular RM cells
Feedback
Bulk of RM cells initiated by source
ATM Switch
EFCI marking
Explicit forward congestion indication
Causes destination to set CI bit in BRM
Relative rate marking
Switch directly sets CI or NI bit of RM
If set in FRM, remains set in BRM
Faster response by setting bit in passing BRM
Fastest by generating new BRM with bit set
Explicit rate marking
Switch reduces value of ER in FRM or BRM
Flow of Data and RM Cells
ABR Parameters
DPF=down pressure factor, typically 7/8
ER ← min[ER, DPF × MACR]
Load Factor
Adjustments based on load factor
LF=Input rate/target rate
Input rate measured over fixed averaging interval
Target rate slightly below link bandwidth (85 to 90%)
LF>1 congestion threatened
VCs will have to reduce rate
Explicit Rate Indication for Congestion Avoidance (ERICA)
Attempt to keep LF close to 1
Define:
fairshare = (target rate)/(number of connections)
VCshare = CCR/LF
= (CCR/(Input Rate)) *(Target Rate)
ERICA selectively adjusts VC rates
Total ER allocated to connections matches target rate
Allocation is fair
ER = max[fairshare, VCshare]
VCs whose VCshare is less than their fairshare get greater increase
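The ERICA computation for one VC can be sketched as follows (hypothetical function name):

```python
# Sketch of ERICA's explicit-rate calculation for a single VC.

def erica_er(ccr, input_rate, target_rate, num_connections):
    lf = input_rate / target_rate             # load factor
    fairshare = target_rate / num_connections
    vcshare = ccr / lf                        # = ccr * target_rate / input_rate
    return max(fairshare, vcshare)            # underweight VCs get a bigger boost
```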
Congestion Avoidance Using Proportional Control (CAPC)
If LF < 1: fairshare ← fairshare × min[ERU, 1 + (1 − LF) × Rup]
If LF > 1: fairshare ← fairshare × max[ERF, 1 − (LF − 1) × Rdn]
ERU>1, determines max increase
Rup between 0.025 and 0.1, slope parameter
Rdn, between 0.2 and 0.8, slope parameter
ERF typically 0.5, max decrease in allotment of fair share
If fairshare < ER value in RM cells, ER ← fairshare
Simpler than ERICA
Can show large rate oscillations if RIF (Rate increase factor) too high
Can lead to unfairness
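One CAPC fairshare update can be sketched as below; the parameter values are illustrative assumptions chosen from within the ranges quoted above.

```python
# Sketch of a CAPC fairshare update: a bounded multiplicative increase
# when underloaded (LF < 1), a bounded multiplicative decrease otherwise.

def capc_update(fairshare, lf, eru=1.5, erf=0.5, rup=0.1, rdn=0.5):
    if lf < 1:
        fairshare *= min(eru, 1 + (1 - lf) * rup)   # increase capped by ERU
    else:
        fairshare *= max(erf, 1 - (lf - 1) * rdn)   # decrease floored by ERF
    return fairshare
```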
GFR Overview
Simple as UBR from end system view
End system does no policing or traffic shaping
May transmit at line rate of ATM adaptor
Modest requirements on ATM network
No guarantee of frame delivery
Higher layers (e.g. TCP) react to congestion caused by dropped frames
User can reserve cell rate capacity for each VC
Application can send at min rate without loss
Network must recognise frames as well as cells
If congested, network discards entire frame
All cells of a frame have same CLP setting
CLP=0 guaranteed delivery, CLP=1 best efforts
GFR Traffic Contract
Unit IV
Integrated and Differentiated Services
Introduction
New additions to Internet increasing traffic
High-volume client/server applications
Web
Graphics
Real time voice and video
Need to manage traffic and control congestion
IETF standards
Integrated services
Collective service to set of traffic demands in domain
Limit demand & reserve resources
Differentiated services
Classify traffic in groups
Different group traffic handled differently
Integrated Services Architecture (ISA)
IPv4 header fields for precedence and type of service usually ignored
ATM is the only network designed to support TCP, UDP and real-time traffic
May need new installation
Need to support Quality of Service (QoS) within TCP/IP
Add functionality to routers
Means of requesting QoS
ISA Approach
Provision of QoS over IP
Sharing available capacity when congested
Router mechanisms
Routing Algorithms
Select to minimize delay
Packet discard
Causes TCP sender to back off and reduce load
Enhanced by ISA
Flow
IP packet can be associated with a flow
Distinguishable stream of related IP packets
From single user activity
Requiring same QoS
E.g. one transport connection or one video stream
Unidirectional
Can be more than one recipient
Multicast
Membership of flow identified by source and destination IP address, port numbers,
protocol type
IPv6 header flow identifier can be used but is not necessarily equivalent to ISA flow
ISA Functions
Admission control
For QoS, reservation required for new flow
RSVP used
Routing algorithm
Base decision on QoS parameters
Queuing discipline
Take account of different flow requirements
Discard policy
Manage congestion
Meet QoS
Forwarding functions
ISA Components Background Functions
Reservation Protocol
RSVP
Admission control
Management agent
Can use agent to modify traffic control database and direct admission control
Routing protocol
ISA Components Forwarding
Classifier and route selection
Incoming packets mapped to classes
Single flow or set of flows with same QoS
E.g. all video flows
Based on IP header fields
Determines next hop
Packet scheduler
Manages one or more queues for each output
Order queued packets sent
Based on class, traffic control database, current and past activity on outgoing port
Policing
ISA Services
Traffic specification (TSpec) defined as service for flow
On two levels
General categories of service
Guaranteed
Controlled load
Best effort (default)
Particular flow within category
TSpec is part of contract
Token Bucket
Many traffic sources can be defined by token bucket scheme
Provides concise description of load imposed by flow
Easy to determine resource requirements
Provides input parameters to policing function
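A minimal token-bucket sketch follows; the class name is hypothetical, with rate in tokens per second and depth in tokens.

```python
# Sketch of a token bucket: tokens accumulate at a fixed rate up to the
# bucket depth; a packet conforms only if enough tokens are available.

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0   # bucket starts full

    def conforms(self, size, now):
        # Refill for the time elapsed since the last check, capped at depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

The depth bounds the burst a flow may emit at once, while the rate bounds its long-term average, which is exactly the "concise description of load" the TSpec needs.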
Token Bucket Diagram
ISA Services
Guaranteed Service
Assured capacity level or data rate
Specific upper bound on queuing delay through network
Must be added to propagation delay or latency to get total delay
Set high to accommodate rare long queue delays
No queuing losses
I.e. no buffer overflow
E.g. Real time play back of incoming signal can use delay buffer for incoming signal
but will not tolerate packet loss
ISA Services
Controlled Load
Tightly approximates to best efforts under unloaded conditions
No upper bound on queuing delay
High percentage of packets do not experience delay over minimum transit delay
Propagation plus router processing with no queuing delay
Very high percentage delivered
Almost no queuing loss
Adaptive real time applications
Receiver measures jitter and sets playback point
Video can drop a frame or delay output slightly
Voice can adjust silence periods
Queuing Discipline
Traditionally first in first out (FIFO) or first come first served (FCFS) at each router
port
No special treatment to high priority packets (flows)
Small packets held up by large packets ahead of them in queue
Larger average delay for smaller packets
Flows of larger packets get better service
Greedy TCP connection can crowd out altruistic connections
If one connection does not back off, others may back off more
Fair Queuing (FQ)
Multiple queues for each port
One for each source or flow
Queues serviced round robin
Each busy queue (flow) gets exactly one packet per cycle
Load balancing among flows
No advantage to being greedy
Your queue gets longer, increasing your delay
Short packets penalized as each queue sends one packet per cycle
FIFO and FQ
Processor Sharing
Multiple queues as in FQ
Send one bit from each queue per round
Longer packets no longer get an advantage
Can work out virtual (number of cycles) start and finish time for a given packet
However, we wish to send packets, not bits
Bit-Round Fair Queuing (BRFQ)
Compute virtual start and finish time as before
When a packet finished, the next packet sent is the one with the earliest virtual finish
time
Good approximation to performance of PS
Throughput and delay converge as time increases
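The virtual-finish-time ordering of BRFQ can be sketched as below. This is a simplified sketch with a hypothetical function name, assuming all packets arrive at round R = 0.

```python
# Sketch of bit-round fair queuing: each packet is tagged with a virtual
# finish time F = max(R, F_prev_for_flow) + length, and packets are sent
# in order of earliest virtual finish time.

def brfq_order(arrivals):
    """arrivals: list of (flow_id, length), all arriving at round R = 0.
    Returns flow ids in transmission order."""
    last_finish, tagged = {}, []
    for seq, (flow, length) in enumerate(arrivals):
        f = last_finish.get(flow, 0) + length   # F = max(R, F_prev) + L, R = 0
        last_finish[flow] = f
        tagged.append((f, seq, flow))           # seq breaks ties stably
    return [flow for f, seq, flow in sorted(tagged)]
```

Short packets finish their virtual bit-rounds sooner, so they go out ahead of a long packet from another flow, removing the large-packet advantage of plain FQ.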
Comparison of FIFO, FQ and BRFQ
FIFO v WFQ
Adds to load and delay
Global synchronization
Traffic burst fills queues so packets lost
Many TCP connections enter slow start
Traffic drops so network under utilized
Connections leave slow start at same time causing burst
Bigger buffers do not help
Try to anticipate onset of congestion and tell one connection to slow down
Differentiated on basis of performance
Characteristics of DS
Use IPv4 header Type of Service or IPv6 Traffic Class field
No change to IP
Service level agreement (SLA) established between provider (internet domain) and
customer prior to use of DS
DS mechanisms not needed in applications
Build in aggregation
All traffic with same DS field treated same
E.g. multiple voice connections
DS implemented in individual routers by queuing and forwarding based on DS field
State information on flows not saved by routers
Services
Provided
within DS domain
Contiguous portion of Internet over which consistent set of DS policies administered
Typically under control of one administrative entity
Defined in SLA
Customer may be user organization or other DS domain
Packet class marked in DS field
Service provider configures routers' forwarding policies
Ongoing measure of performance provided for each class
DS domain expected to provide agreed service internally
If destination in another domain, DS domain attempts to forward packets through other
domains
Appropriate service level requested from each domain
SLA Parameters
Detailed service performance parameters
Throughput, drop probability, latency
Constraints on ingress and egress points
Indicate scope of service
Traffic profiles to be adhered to
Token bucket
Disposition of traffic in excess of profile
Example Services
Qualitative
A: Low latency
B: Low loss
Quantitative
C: 90% in-profile traffic delivered with no more than 50ms latency
D: 95% in-profile traffic delivered
Mixed
E: Twice bandwidth of F
F: Traffic with drop precedence X has higher delivery probability than that with drop
precedence Y
DS Field Detail
Leftmost 6 bits are DS codepoint
Measure traffic for conformance to profile
Marker
Policing by remarking codepoints if required
Shaper
Dropper
DS Traffic Conditioner
Unit V
Protocols for QoS Support
Increased Demands
Increased capacity
Faster links, switches, routers
Intelligent routing policies
End-to-end flow control
Multicasting
RSVP Characteristics
Unicast and Multicast
Simplex
Unidirectional data flow
Separate reservations in two directions
Receiver initiated
Receiver knows which subset of source transmissions it wants
Maintain soft state in internet
Responsibility of end users
Providing different reservation styles
Users specify how reservations for groups are aggregated
Transparent operation through non-RSVP routers
Support IPv4 (ToS field) and IPv6 (Flow label field)
Data Flows - Session
Data flow identified by destination
Resources allocated by router for duration of session
Defined by
Destination IP address
Unicast or multicast
IP protocol identifier
TCP, UDP etc.
Destination port
May not be used in multicast
Flow Descriptor
Reservation Request
Flow spec
Desired QoS
Used to set parameters in node's packet scheduler
Service class, Rspec (reserve), Tspec (traffic)
Filter spec
Set of packets for this reservation
Source address, source port
Treatment of Packets of One Session at One Router
RSVP Operation
G1, G2, G3 members of multicast group
S1, S2 sources transmitting to that group
Heavy black line is routing tree for S1, heavy grey line for S2
Arrowed lines are packet transmission from S1 (black) and S2 (grey)
All four routers need to know reservations for each multicast address
Resource requests must propagate back through routing tree
Filtering
G3 has reservation filter spec including S1 and S2
G1, G2 from S1 only
R3 delivers from S2 to G3 but does not forward to R4
G1, G2 send RSVP request with filter excluding S2
G1, G2 only members of group reached through R4
R4 doesn't need to forward packets from this session
R4 merges filter spec requests and sends to R3
R3 no longer forwards this session's packets to R4
Handling of filtered packets not specified
Here they are dropped but could be best efforts delivery
R3 needs to forward to G3
Stores filter spec but doesn't propagate it
Reservation Styles
Determines manner in which resource requirements from members of group are
aggregated
Reservation attribute
Reservation shared among senders (shared)
Characterizing entire flow received on multicast address
Allocated to each sender (distinct)
Simultaneously capable of receiving data flow from each sender
Sender selection
List of sources (explicit)
All sources, no filter spec (wild card)
Multicast applications with multiple data sources but unlikely to transmit
simultaneously
Two message types
Resv
Originate at multicast group receivers
Propagate upstream
Merged when appropriate
Create soft states
Reach sender
Allow host to set up traffic control for first hop
Path
Provide upstream routing information
Issued by sending hosts
Transmitted through distribution tree to all destinations
Summary
RSVP is a transport layer protocol that enables a network to provide differentiated levels
of service to specific flows of data. Ostensibly, different application types have different
performance requirements. RSVP acknowledges these differences and provides the
mechanisms necessary to detect the levels of performance required by different applications and to modify network behaviors to accommodate those required levels. Over
time, as time and latency-sensitive applications mature and proliferate, RSVP's
capabilities will become increasingly important.
Review Questions
Q: Is it necessary to migrate away from your existing routing protocol to support
RSVP?
A: RSVP is not a routing protocol. Instead, it was designed to work in conjunction with
existing routing protocols. Thus, it is not necessary to migrate to a new routing protocol
to support RSVP.
Q: Identify the three RSVP levels of service, and explain the differences among them.
A: RSVP's three levels of service include best-effort, rate-sensitive, and delay-sensitive
service. Best-effort service is used for applications that require reliable delivery rather
than a timely delivery. Rate-sensitive service is used for any traffic that is sensitive to
variation in the amount of bandwidth available. Such applications include H.323
videoconferencing, which was designed to run at a nearly constant rate. RSVP's third
level of service is delay-sensitive service. Delay-sensitive traffic requires timely but not
reliable delivery of data.
Q: What are the two RSVP reservation classes, and how do they differ?
A: A reservation style is a set of control options that defines how a reservation operates.
RSVP supports two primary types of reservation styles: distinct reservations and shared
reservations. A distinct reservation establishes a flow for each sending device in a
session. Shared reservations aggregate communications flows for a set of senders. Each
of these two reservation styles is defined by a series of filters.
Q: What are RSVP filters?
A: A filter in RSVP is a specific set of control options that specifies operational
parameters for a reservation. RSVP's styles include wildcard-filter (WF), fixed-filter
(FF), and shared-explicit (SE) filters.
Q: How can RSVP be used through network regions that do not support RSVP?
A: RSVP supports tunneling through network regions that do not support RSVP. This
capability was developed to enable a phased-in implementation of RSVP.
Background
Developments
IETF working group in 1997, proposed standard 2001
Routers developed to be as fast as ATM switches
Remove the need to provide both technologies in same network
MPLS does provide new capabilities
QoS support
Traffic engineering
Virtual private networks
Multiprotocol support
Connection Oriented QoS Support
Guarantee fixed capacity for specific applications
Control latency/jitter
Ensure capacity for voice
Provide specific, guaranteed quantifiable SLAs
Configure varying degrees of QoS for multiple customers
MPLS imposes connection oriented framework on IP based internets
Traffic Engineering
Ability to dynamically define routes, plan resource commitments based on known
demands and optimize network utilization
Basic IP allows primitive traffic engineering
E.g. dynamic routing
MPLS makes network resource commitment easy
Able to balance load in face of demand
Able to commit to different levels of support to meet user traffic
requirements
Aware of traffic flows with QoS requirements and predicted demand
Intelligent re-routing when congested
VPN Support
Traffic from a given enterprise or group passes transparently through an internet
Segregated from other traffic on internet
Performance guarantees
Security
Multiprotocol Support
MPLS can be used on different network technologies
IP
Requires router upgrades
Coexist with ordinary routers
ATM
Enables MPLS-enabled and ordinary switches to co-exist
Frame relay
Enables MPLS-enabled and ordinary switches to co-exist
Mixed network
MPLS Terminology
MPLS Operation
Label switched routers capable of switching and routing packets based on label
appended to packet
Labels define a flow of packets between end points or multicast destinations
Each distinct flow (forward equivalence class FEC) has specific path through
LSRs defined
Connection oriented
Each FEC has QoS requirements
IP header not examined
Forward based on label value
MPLS Operation Diagram
Explanation Setup
Labelled switched path established prior to routing and delivery of packets
QoS parameters established along path
Resource commitment
Queuing and discard policy at LSR
Interior routing protocol e.g. OSPF used
Labels assigned
Local significance only
Manually or using Label distribution protocol (LDP) or enhanced
version of RSVP
Explanation Packet Handling
Packet enters domain through edge LSR
Processed to determine QoS
LSR assigns packet to FEC and hence LSP
May need co-operation to set up new LSP
Append label
Forward packet
Within domain LSR receives packet
Remove incoming label, attach outgoing label and forward
Egress edge strips label, reads IP header and forwards
Notes
MPLS domain is contiguous set of MPLS enabled routers
Traffic may enter or exit via direct connection to MPLS router or from non-MPLS
router
FEC determined by parameters, e.g.
Source/destination IP address or network IP address
Port numbers
IP protocol id
Differentiated services codepoint
IPv6 flow label
Forwarding is simple lookup in predefined table
Map label to next hop
Can define PHB at an LSR for given FEC
Packets between same end points may belong to different FEC
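The per-LSR forwarding step can be sketched as a table lookup; the function name and table contents are hypothetical.

```python
# Sketch of MPLS label swapping: the incoming label indexes a predefined
# table yielding the outgoing label and next hop; the IP header is never
# examined inside the domain.

def forward(lfib, in_label):
    """lfib maps incoming label -> (outgoing label, next hop)."""
    out_label, next_hop = lfib[in_label]
    return out_label, next_hop

# Hypothetical label forwarding table for one LSR.
lfib = {17: (42, "LSR-B"), 42: (99, "LSR-C")}
```

The lookup is a single exact-match on a small integer, which is the key reason label forwarding is simpler than longest-prefix IP routing.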
MPLS Packet Forwarding
Label Stacking
Packet may carry number of labels
LIFO (stack)
Processing based on top label
Any LSR may push or pop label
Unlimited levels
Allows aggregation of LSPs into single LSP for part of route
C.f. ATM virtual channels inside virtual paths
E.g. aggregate all enterprise traffic into one LSP for access provider to handle
Reduces size of tables
Label Format Diagram
Topology of LSPs
Unique ingress and egress LSR
Single path through domain
Unique egress, multiple ingress LSRs
Multiple paths, possibly sharing final few hops
Multiple egress LSRs for unicast traffic
Multicast
Route Selection
Selection of LSP for particular FEC
Hop-by-hop
LSR independently chooses next hop
Ordinary routing protocols e.g. OSPF
Doesn't support traffic engineering or policy routing
Explicit
LSR (usually ingress or egress) specifies some or all LSRs in LSP for
given FEC
Selected by configuration, or dynamically
Constraint Based Routing Algorithm
Take into account traffic requirements of flows and resources available along
hops
Current utilization, existing capacity, committed services
Additional metrics over and above traditional routing protocols (OSPF)
Max link data rate
Current capacity reservation
Packet loss ratio
Link propagation delay
Label Distribution
Setting up LSP
Assign label to LSP
Inform all potential upstream nodes of label assigned by LSR to FEC
Allows proper packet labelling
Learn next hop for LSP and label that downstream node has assigned to
FEC
Allow LSR to map incoming to outgoing label
Real Time Transport Protocol
TCP not suited to real-time distributed applications
Point to point so not suitable for multicast
Retransmitted segments arrive out of order
No way to associate timing with segments
UDP does not include timing information nor any support for real time
applications
Solution is real-time transport protocol RTP
RTP Architecture
Close coupling between protocol and application layer functionality
Framework for application to implement single protocol
Application level framing
Integrated layer processing
Relays
Intermediate system acting as receiver and transmitter for given protocol layer
Mixers
Receives streams of RTP packets from one or more sources
Combines streams
Forwards new stream
Translators
Produce one or more outgoing RTP packets for each incoming packet
E.g. convert video to lower quality
RTP Header
Functions
QoS and congestion control
Identification
Session size estimation and scaling
Session control
RTCP Transmission
Number of separate RTCP packets bundled in single UDP datagram
Sender report
Receiver report
Source description
Goodbye
Application specific
RTCP Packet Formats