Feature Description - IP Multicast: Huawei NetEngine5000E Core Router V300R007C00
V300R007C00
Issue 02
Date 2009-12-10
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or representations
of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute the warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: support@huawei.com
Contents
1 PIM
1.1 Introduction to PIM
1.2 References
1.3 Principles
1.3.1 Basic Concepts
1.3.2 PIM-SM
1.3.3 PIM-SSM
1.3.4 PIM-DM
1.3.5 Comparison Among Protocols
1.3.6 PIM GR
1.3.7 PIM Security
1.4 Terms and Abbreviations
2 IGMP
2.1 Introduction to IGMP
2.2 References
2.3 Principles
2.3.1 IGMPv1&v2&v3
2.3.2 IGMP Group Compatibility
2.3.3 IGMP Querier Election
2.3.4 Router-Alert for IGMP
2.3.5 IGMP Only-Link
2.3.6 IGMP On-Demand
2.3.7 IGMP Prompt-Leave
2.3.8 Controllable Multicast
2.3.9 SSM Mapping
2.3.10 IGMP Host Address Filtering
2.3.11 Multi-Instance Supported by IGMP
2.3.12 Protocol Comparison
2.4 Typical IGMP Applications
2.4.1 Typical IGMP Applications
2.4.2 Applications of the IGMP Entry Limit
2.5 Terms and Abbreviations
3 MSDP
4 Multicast Management
4.1 Introduction to Multicast Management
4.2 References
4.3 Principles
4.3.1 MPing
4.3.2 MTrace
4.4 Terms and Abbreviations
6 Multicast VPN
6.1 Introduction to Multicast VPN
6.2 References
6.3 Principles
6.3.1 Concepts in MVPN
6.3.2 Inter-Domain Multicast Implemented by MVPN
6.3.3 PIM Neighbor Relationship Between CE, PE, and P
6.3.4 Process of Establishing a Share-MDT
6.3.5 MT Transmission Process Based on the Share-MDT
6.3.6 Switch-MDT Switchover
6.4 MVPN Applications
6.4.1 Single-AS MD VPN
6.4.2 Multi-AS MD VPN
6.5 Terms and Abbreviations
7 MLD
7.1 Introduction to MLD
7.2 References
7.3 Principles
7.3.1 MLDv1 and MLDv2
7.3.2 MLD Group Compatibility
7.3.3 MLD Querier Election Mechanism
7.3.4 Comparison Between Protocols
7.4 MLD Applications
7.5 Terms and Abbreviations
1 PIM
Purpose
A multicast source sends multicast packets to a multicast group, and these packets finally reach
all the multicast group members by traversing the intermediate networks. The routers in the
network must be configured with a multicast routing protocol so that intermediate networks
can replicate and forward multicast packets. PIM is an important protocol used to replicate and
forward multicast packets in the network.
1.2 References
The following table lists the references of this document.
1.3 Principles
1.3.1 Basic Concepts
1.3.2 PIM-SM
1.3.3 PIM-SSM
1.3.4 PIM-DM
PIM Router
The multicast router that supports PIM is called a PIM router. The interface enabled with PIM
is called a PIM interface.
PIM Domain
The network formed by PIM routers is called a PIM network.
By setting a boundary on a router interface, a large PIM network can be divided into multiple
PIM domains. The boundary can reject the transmission of specific multicast packets or limit
the transmission of PIM control messages.
MDT
In a PIM domain, a Point-to-Multipoint (P2MP) multicast forwarding path is set up for each
multicast group. Because the multicast forwarding path looks like a tree, it is also called a
Multicast Distribution Tree (MDT).
l The MDT with the multicast source as the root and group members as leaves is called a
Shortest Path Tree (SPT). The SPT is applicable to both PIM-DM and PIM-SM.
l The MDT with a Rendezvous Point (RP) as the root and group members as leaves is called
an RP Tree (RPT). The RPT is applicable only to PIM-SM.
l No matter how many group members exist in the network, each link has only one copy of
the same multicast data.
l The multicast data is copied and distributed at a branch as far from the source as possible.
Leaf Router
The PIM router connected to user hosts is called a leaf router.
Source DR
The Source DR refers to the PIM router that is directly connected to the multicast source and is
responsible for sending Register messages to the RP.
Receiver's DR
The Receiver's DR refers to the PIM router that is directly connected to group members
(receivers) and is responsible for forwarding multicast data to group members.
Intermediate Router
The intermediate router refers to the PIM router that exists between the first-hop router and the
last-hop router on the multicast forwarding path.
1.3.2 PIM-SM
PIM Sparse Mode (PIM-SM) is applicable to large-scale networks in which group members are
sparsely distributed. In PIM-SM, the Multicast Distribution Tree (MDT) is set up by receivers
joining the multicast group actively.
Basic Principle
In PIM-SM, multicast data forwarding is implemented based on the setup of the Rendezvous
Point Tree (RPT) and Shortest Path Tree (SPT).
Figure 1-1 RPT setup and multicast data forwarding
Setting up an RPT refers to setting up a forwarding path for multicast data. Figure 1-1 shows
the process of RPT setup and data forwarding.
l When an active multicast source appears in the network (that is, when the source sends the
first multicast packet to a multicast group G), the source DR encapsulates the multicast
packet in a Register message and unicasts the Register message to the RP. Thus, an (S, G)
entry is created on the RP and source information is registered.
l When a new group member appears in the network (that is, when a user host joins a multicast
group G through IGMP), the receiver DR at the group member side sends a Join message
to the RP. A (*, G) entry is created hop by hop and an RPT with the RP as a root is thus
generated.
l When a group member and a multicast source that sends multicast data to the group appear
in the network, multicast data is encapsulated in a Register message and unicast to
the RP. The RP then forwards the multicast data along the RPT to group members.
RPT implements on-demand multicast data forwarding and reduces the usage of network
bandwidth by unwanted data.
The classification of the Designated Router (DR) is as follows:
l In the shared network segment connected to the multicast source, the DR is responsible for
sending Register messages to the RP. The DR connected to the multicast source is called
the source DR.
l In the shared network segment connected to group members, the DR is responsible for
sending Join messages to the RP. The DR connected to group members is called the
receiver's DR.
NOTE
To reduce the forwarding workload of the RPT and improve the forwarding efficiency of multicast data,
PIM-SM allows SPT switchover. That is, a direct forwarding link is set up from the multicast source to the
receiver so that the multicast source can forward multicast data to the receiver along the SPT.
Figure 1-2 SPT switchover
NOTE
By default, the RP immediately performs SPT switchover after receiving the first Register message, and
the receiver's DR immediately performs SPT switchover after receiving the first multicast packet.
Neighbor Discovery
PIM routers send Hello messages through each interface enabled with PIM. The destination
address of the multicast packet in a Hello message is 224.0.0.13 (the address indicates all PIM
routers in the same network segment). Its source address is the IP address of the interface, and
the TTL value is 1.
The Hello message is used to discover neighbors, adjust protocol parameters, and
maintain neighbor relationships.
l Discovering PIM neighbors
All PIM routers in the same network segment listen to multicast packets with the
destination address 224.0.0.13. After receiving Hello messages, the directly
connected multicast routers learn their own neighbor information.
A router can receive PIM control messages or multicast packets to create multicast routing
entries and maintain the MDT only after it receives a Hello message from its neighbor.
l Adjusting protocol parameters
Hello messages contain the following protocol parameters:
– DR_Priority: indicates the priority used by the router interface to elect the DR. The
higher the priority of an interface, the more likely the interface is to become the
DR. This parameter is applicable to PIM-SM.
– Holdtime: indicates the timeout period during which the neighbor is in the reachable
state.
– LAN_Delay: indicates the delay for transmitting Prune messages on the shared network
segment.
– Neighbor-Tracking: indicates the neighbor tracking function.
– Override-Interval: specifies the interval carried in a Hello message for overriding the
Prune message.
l Maintaining neighbor relationship
PIM routers periodically send Hello messages to each other. If a PIM router does not receive
a new Hello message from its PIM neighbor within Holdtime, the router considers that the
neighbor is unreachable and deletes the neighbor from the neighbor list.
Changes of PIM neighbors lead to changes of the multicast topology of the network. If
an upstream or downstream neighbor in the MDT becomes unreachable, multicast
routes reconverge and the MDT is re-established.
RP Discovery
l RP classification
An RP can simultaneously serve multiple multicast groups, but a multicast group can
correspond to only one RP. The RP is the forwarding core of the PIM-SM network. All
routers in the PIM-SM network must know the address of the RP. The RP can be configured
in the following modes:
– Static RP: Users configure all routers in the network with the same RP address through
a configuration command.
– Dynamic RP: Several PIM routers are selected in the PIM domain and configured as
Candidate-RPs (C-RPs). The RP is elected from these C-RPs.
Figure 1-3 Dynamic RP election: C-RPs send Advertisement messages to the BSR, and the BSR
advertises C-RP information in Bootstrap messages across the PIM-SM network.
To interoperate with non-Huawei devices that run Auto-RP, Huawei routers can listen to
Auto-RP announcements; Huawei routers also support the IPv6 embedded RP.
l Anycast RP
In the traditional PIM-SM domain, each multicast group is mapped to only one RP. When
the network is overloaded or traffic is too concentrated, many problems arise. For
example, the RP may be overburdened, routes converge slowly after the RP fails, and the
multicast forwarding path may not be optimal.
Anycast RP emerges to solve the problems. In a PIM-SM domain, multiple RPs with the
same address are configured and Multicast Source Discovery Protocol (MSDP) peers are
set up between the RPs to share multicast data sources. The receiver and multicast source
respectively select their nearest RPs to create RPTs. The receiver determines whether to
perform SPT switchover after receiving multicast data. Thus, the optimal RP path and load
balancing are implemented.
BSR
l BSR election mechanism
The BSR is responsible for collecting C-RP information and advertising it to ensure that all
routers in the network know the locations of RPs.
The BSR is elected from multiple C-BSRs. At first, each C-BSR considers itself as a BSR
and sends Bootstrap messages to the entire network. A Bootstrap message carries the C-
BSR address and the priority of the C-BSR. Each router receives the Bootstrap messages
sent by all C-BSRs and compares them. The election winner serves as the BSR. The election
rules are as follows:
– The C-BSR of higher priority wins (the greater the value, the higher the priority).
– In case of the same priority, the C-BSR with the largest IP address wins.
The same election rules are applied to all routers, so only one BSR is selected.
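The following Python sketch models the BSR election rules described above. It is an illustrative model, not router code; the candidate priorities and addresses are hypothetical.

import ipaddress

def better_bsr(a, b):
    """Return the winner between two C-BSRs, each given as
    (priority, ip_address). A higher priority wins; on a tie,
    the larger IP address wins."""
    if a[0] != b[0]:
        return a if a[0] > b[0] else b
    return a if ipaddress.ip_address(a[1]) > ipaddress.ip_address(b[1]) else b

# Hypothetical candidates: (priority, address)
candidates = [(10, "192.168.1.1"), (20, "10.1.1.1"), (20, "10.1.1.2")]
winner = candidates[0]
for c in candidates[1:]:
    winner = better_bsr(winner, c)
print(winner)  # (20, '10.1.1.2'): highest priority, then largest address

Because every router applies the same comparison to the same set of Bootstrap messages, all routers arrive at the same single BSR.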
l BSR administrative domain
To provide precise management, a PIM-SM network is divided into multiple BSR
administrative domains and a global domain. This can reduce the workload of a single BSR
and provide special services for users in the specific domain by using the private group
address.
Each BSR administrative domain maintains only one BSR that serves a multicast group
within a specific address range. The global domain maintains a BSR that serves the other
multicast groups.
The relationship between the BSR domain and the global domain is described as follows
from the aspects of the region, group address range, and multicast function.
– Address space
Figure 1-4 BSR administrative domains: each BSR administrative domain (BSR1 domain, BSR2
domain) and the global domain has its own BSR and C-RPs.
Figure 1-5 Group address ranges: BSR1 serves the G1 address range, BSR2 serves G2, BSR3
serves G3, and the global domain serves the remaining G-G1-G2-G3 address range.
Each BSR administrative domain provides services for the multicast group within the
specific address range. The multicast groups that different BSR administrative domains
serve can overlap. The address of a multicast group that the BSR administrative domain
serves is valid only in its BSR administrative domain. That is, the multicast address is
used as the private group address. As shown in Figure 1-5, the group address range of
BSR1 and that of the BSR3 overlap.
The multicast group that does not belong to any BSR administrative domain belongs to
the global domain. That is, the group address range of the global domain is G-G1-G2-G3,
where G denotes the entire multicast group address space.
– Multicast function
As shown in Figure 1-4, the global domain and each BSR administrative domain have
their respective C-RP and BSR devices. These devices function only in the local domain.
That is, the BSR mechanism and the RP election are independent of each other among
administrative domains.
Each BSR administrative domain has its border. Multicast information of this domain,
such as the C-RP Advertisement message and BSR Bootstrap message, can be
transmitted only within the domain. Multicast information of the global domain can be
transmitted in the entire global domain and can traverse any BSR administrative domain.
Assert
When multiple PIM routers on a shared network segment forward the same multicast packet,
a unique forwarder must be elected through the exchange of Assert messages. The destination
address of an Assert message is 224.0.0.13 (the all-PIM-routers address); the TTL value of the
packet is 1. The Assert message carries the route cost from the PIM router to the source or RP,
the priority of the used unicast routing protocol, and the group address. A router compares its
own information with the information carried in the message sent by its neighbor. This is called
the Assert election. The election rules are as follows:
l The router whose unicast routing protocol has a higher priority wins.
l In case of the same priority, the router with the smaller route cost to S wins.
l In case of the same priority and the same route cost, the router with the largest IP address
for the downstream interface wins.
Based on the result of the Assert election, a router performs the following operations:
l If the router wins, the downstream interface of the router is responsible for forwarding
multicast packets in the network segment. The downstream interface is called an Assert
winner.
l If the router loses, its downstream interface is prohibited from forwarding multicast packets
and is deleted from the downstream interface list of the (S, G) entry. The downstream
interface is called an Assert loser.
After the Assert election is complete, only one upstream router with a downstream interface
exists on the network segment, and only one copy of each multicast packet is transmitted on
the segment. The Assert winner then periodically sends Assert messages to maintain the status
of the Assert losers. If an Assert loser does not receive any Assert message before the timer
expires, it re-adds its downstream interface for multicast data forwarding.
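The comparison performed during the Assert election can be modeled by the following Python sketch. It is illustrative only and implements the rules as stated above, not the on-wire message encoding; the priorities, route costs, and addresses are hypothetical.

import ipaddress

def assert_winner(routers):
    """Each router is (protocol_priority, route_cost_to_source,
    downstream_ip). A higher protocol priority wins; on a tie, the
    smaller route cost wins; on a further tie, the larger downstream
    interface address wins."""
    return max(
        routers,
        key=lambda r: (r[0], -r[1], ipaddress.ip_address(r[2])),
    )

contenders = [(100, 20, "10.0.0.2"), (100, 10, "10.0.0.3")]
print(assert_winner(contenders))  # (100, 10, '10.0.0.3'): same priority, lower cost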
DR Election
Figure 1-6 DR positions in a PIM-SM network: the source DR resides on the network segment
connected to the multicast source (server), and the receiver's DR resides on the network segment
connected to group members (User A and User B).
As shown in Figure 1-6, the DR is applied in the following positions in the PIM-SM network:
l In the shared network segment connected to S, the DR is responsible for sending Register
messages to the RP. The DR connected to S is called the source DR.
l In the shared network segment connected to group members, the DR is responsible for
sending Join messages to the RP. The DR connected to group members is called the
receiver's DR.
The network segment where S or group members reside is usually connected to multiple PIM
routers. The PIM routers exchange Hello messages to set up PIM neighbors. The Hello messages
carry the DR priority and the interface address of the network segment. A PIM router compares
its information with the information carried in the packet sent by its neighbor. This is called the
DR election. The election rules are as follows:
l The PIM router with the higher DR priority wins (all routers in the network segment support
the DR priority).
l If PIM routers have the same DR priority, or at least one PIM router does not support
carrying the DR priority in Hello messages, the PIM router with the largest IP address wins.
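The following Python sketch models the DR election rules above (illustrative only; the priorities and addresses are hypothetical). A priority of None represents a router that does not carry the DR priority in its Hello messages.

import ipaddress

def elect_dr(neighbors):
    """Each neighbor is (dr_priority, ip); dr_priority is None when
    the router does not carry a DR priority in Hello messages."""
    if all(n[0] is not None for n in neighbors):
        # All routers support the DR priority: the highest priority
        # wins, and the largest IP address breaks ties.
        return max(neighbors, key=lambda n: (n[0], ipaddress.ip_address(n[1])))
    # At least one router does not carry the priority: largest IP wins.
    return max(neighbors, key=lambda n: ipaddress.ip_address(n[1]))

print(elect_dr([(1, "10.1.1.1"), (3, "10.1.1.2")]))     # priority decides
print(elect_dr([(None, "10.1.1.1"), (3, "10.1.1.2")]))  # falls back to IP address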
When the existing DR is faulty, the PIM neighbor relationship times out. A new round of DR
election is triggered among other PIM neighbors.
By default, when the interface changes from a DR to a non-DR, the router stops using the
interface to forward data immediately. At that moment, if multicast data sent from a new DR
does not arrive at the interface yet, multicast data streams are temporarily discontinued.
After being configured with the PIM DR switchover delay, when a PIM-SM interface changes
from a DR to a non-DR due to receiving Hello messages from a new neighbor, this interface
still has the partial DR function and continues to forward multicast packets before the delay
times out.
If a router configured with the DR switchover delay receives packets from a new DR within the
delay, the router immediately stops forwarding packets, thus avoiding duplicate
packets. In this case, when a new IGMP Join message is received on the shared network segment,
the new DR, instead of the old DR configured with the DR switchover delay, sends a PIM Join
message to the upstream device.
NOTE
Within the DR switchover delay period, if the new DR receives multicast data from the old DR, Assert
election is triggered.
BFD for PIM
A fault on the link of a shared network segment can be detected by using the following mechanisms:
l Hardware detection: For example, the Synchronous Digital Hierarchy (SDH) alarm
function can be used to detect link faults. Hardware detection has the advantage of
detecting faults rapidly; this mechanism, however, is not applicable to all media.
l Slow Hello mechanism: It usually refers to the Hello mechanism of a routing protocol. The
slow Hello mechanism can detect a fault in seconds. In high-speed data transmission, for
example, at gigabit rates, the detection time longer than one second causes the loss of a
large amount of data. In delay-sensitive services such as the voice service, the delay longer
than one second is also unacceptable.
l Other detection mechanisms: Different protocols or vendors may provide proprietary
detection mechanisms; however, proprietary mechanisms are difficult to deploy when
systems from different vendors are interconnected.
Bidirectional Forwarding Detection (BFD) is a unified detection mechanism on the entire
network. It is applicable to all types of transmission medium and protocols. It can detect a fault
in milliseconds. In the BFD detection mechanism, two systems set up a BFD session, and
periodically send the BFD packets along the path between them. If one system does not receive
BFD packets within a specified period, the system considers that a fault occurs on the path.
In multicast applications, if the current DR or Assert winner on the shared network segment is
faulty, other PIM neighbors start new DR election or Assert election after the neighbor
relationship or the Assert timer times out. Consequently, multicast data transmission is
interrupted. The interruption, usually lasting several seconds, is no shorter than the timeout
period of the neighbor relationship or the Assert timer.
BFD for PIM can detect the status of the link on the shared network segment within milliseconds
and fast respond to the fault on the PIM neighbor. If the interface configured with BFD for PIM
does not receive any BFD packets from the current DR or Assert winner within a detection
period, it considers that a fault occurs on the current DR or Assert winner. BFD then immediately
instructs the PIM module to trigger a new DR election or Assert election rather than waiting until
the neighbor relationship or the Assert timer times out. This reduces the duration of multicast
data transmission interruption and thus improves the reliability of multicast data transmission.
Figure 1-7 BFD for PIM: a PIM BFD session is set up between GE 2/0/0 of Router B and
GE 1/0/0 of Router C on the shared network segment connected to the receiver.
As shown in Figure 1-7, on the shared network segment connected with the user hosts, a PIM
BFD session is set up between the downstream interface GE 2/0/0 of Router B and the
downstream interface GE 1/0/0 of Router C. The two interfaces send BFD packets for detecting
the status of the link between them.
PIM Silent
After a router interface connected to hosts is enabled with PIM, PIM neighbor relationships can
be set up on the interface and various PIM packets are processed. This poses a security risk: if a
host maliciously sends PIM Hello messages, the router may break down.
To avoid this, you can configure PIM Silent on a router interface connected to hosts to disable
the interface from receiving and forwarding any PIM packets. The IGMP function on the
interface is not affected.
1.3.3 PIM-SSM
PIM supports the Any-Source Multicast (ASM) model and the Source-Specific Multicast (SSM)
model. This section describes the SSM model.
The SSM model is based on partial PIM-SM technologies and IGMPv3/MLDv2. The procedure
for setting up a multicast forwarding tree in the SSM model is similar to the procedure for setting
up an SPT in PIM-SM. That is, the receiver's DR, with the knowledge of the exact position of
the multicast source, sends Join messages directly to the multicast source so that multicast data
streams can be sent to the receiver's DR.
By default, the SSM group address ranges from 232.0.0.0 to 232.255.255.255. When the address
of the multicast group that users join is within the address range of the SSM group, the SSM
model is used. Otherwise, the ASM model is used. The principles of the ASM model are the
same as those of PIM-SM.
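The default model-selection rule can be expressed as the following Python sketch (illustrative only; the group addresses in the example are hypothetical).

import ipaddress

SSM_RANGE = ipaddress.ip_network("232.0.0.0/8")  # 232.0.0.0-232.255.255.255

def multicast_model(group):
    # Groups in the default SSM range use the SSM model;
    # all other groups use the ASM model.
    return "SSM" if ipaddress.ip_address(group) in SSM_RANGE else "ASM"

print(multicast_model("232.1.1.1"))  # SSM
print(multicast_model("225.1.1.1"))  # ASM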
In the SSM model, users can know the exact position of the multicast source in advance.
Therefore, users can specify the source when joining a multicast group. After knowing the
requirements of users, the DR at the group member side sends a Join message to the multicast
source. Then the Join message is transmitted upstream hop by hop. The SPT is thus built between
the source and group members.
The SSM model adopts only part of the PIM-SM technology. That is, there is no need to maintain
the RP, construct the RPT, or register the multicast source. In addition, an SPT can be built
directly between the source and group members.
1.3.4 PIM-DM
Applicable Environment
PIM-DM adopts the Flooding-Prune method to forward multicast data. In the network where
multicast group members are distributed sparsely, a large number of Prune messages are
generated. In the large-scale network, the Flooding-Prune process takes a long time. Therefore,
PIM-DM is applicable to the small-scale network with multicast group members distributed
densely.
Basic Principle
PIM-DM assumes that all members are densely distributed on the network and each network
segment may have members. According to the assumption, the multicast source floods multicast
data to each network segment and then prunes the network segment that does not have any
member. Through periodic flooding and pruning, PIM-DM creates and maintains a
unidirectional and loop-free SPT connecting the multicast source and group members.
Neighbor Discovery
Neighbor discovery in PIM-DM is the same as that in PIM-SM. For details, see PIM-SM.
Flooding
As shown in Figure 1-8, the source sends data to Router A, and Router A floods the data to all
neighbors except the one from which it received the data. Router B and Router C also send
data to each other, but PIM-DM, adopting the Reverse Path Forwarding (RPF) mechanism,
ensures that each router accepts data from only one direction. Finally, the data is flooded to
Router B, which is connected to the receiver, and Router B sends the data to its receiver
User A.
Figure 1-8 Flooding
Prune
As shown in Figure 1-9, Router C has no receiver and needs no data, so it sends a Prune message
upstream to Router A to notify Router A to stop forwarding data to the interface connected to
Router C.
Router A then stops forwarding data to this downstream interface. Because other downstream
interfaces in the forwarding state still exist on Router A, Router A does not propagate the prune
further upstream and therefore keeps forwarding subsequent packets to Router B. Thus, a
unidirectional and loop-free SPT is set up from the source to User A.
Figure 1-9 Prune
Graft
As shown in Figure 1-10, if Router C receives an IGMP Report message from User B requesting
multicast data, Router C needs to forward data again. To avoid the delay of waiting for the next
periodic flooding, PIM-DM employs grafting to implement fast data forwarding.
Router C sends a Graft message upstream to require Router A to restore the forwarding of the
related outgoing interface. Router A then restores the forwarding of the outgoing interface
connected to Router C. Finally, multicast data is sent from this outgoing interface to Router C.
Figure 1-10 Graft
Assert
As shown in Figure 1-11, Router B and Router C can both receive multicast packets from the
multicast source S, and the packets pass the RPF check. Therefore, related (S, G) entries
are created on Router B and Router C. Because the downstream interfaces of Router B and
Router C are connected to the same network segment, Router B and Router C simultaneously
send multicast data to the segment. In this case, the Assert mechanism ensures
that only one multicast data forwarder exists on the network segment. The Assert procedure is
as follows:
1. Router B receives a multicast packet from Router C through a downstream interface, but
this packet fails the RPF check and therefore is discarded by Router B. At the same time,
Router B sends an Assert message to the network segment.
2. Router C compares its routing information with that carried in the message sent by Router
B. Router C loses because the route cost from Router B to the source is lower. Hence, the
downstream interface of Router C is prohibited from forwarding multicast packets and is
deleted from the downstream interface list of the (S, G) entry.
3. Router C receives a multicast packet from Router B through the network segment, but the
packet fails the RPF check and therefore is discarded.
The Assert process is then complete.
Figure 1-11 Assert
State Refresh
As shown in Figure 1-11, if the interface through which Router A forwards data to Router C is
in the prune state, Router A maintains a prune timer for this interface. When the prune timer
expires, Router A resumes forwarding data to Router C even if Router C still has no receivers.
This causes a waste of network resources.
PIM-DM uses the state refresh feature to solve this problem. To be specific, the first hop nearest
to the multicast source, namely, Router A, periodically floods State Refresh messages in the
entire network to refresh the status of prune timers on all routers.
PIM Silent
PIM silent in PIM-DM is the same as that in PIM-SM. For details, see PIM-SM.
1.3.6 PIM GR
Graceful Restart (GR) is a type of master/slave switchover protocol on the control plane. Protocol
Independent Multicast (PIM) GR can ensure nonstop multicast traffic forwarding during master/
slave switchover. At present, PIM GR supports PIM-Sparse Mode (PIM-SM) and PIM-Source
Specific Multicast (PIM-SSM) but does not support PIM-Dense Mode (PIM-DM).
Basic Principle
PIM GR is based on unicast GR. On a router that runs PIM-SM or PIM-SSM, when a
master/slave control board switchover occurs, the interface boards keep forwarding multicast
traffic in both hardware and software. By learning Join messages from downstream neighbors
or Report messages from Internet Group Management Protocol (IGMP) hosts, the PIM
protocol on the new master main control board performs the following operations:
l Recalculates PIM multicast routing entries.
l Maintains the Join status of upstream neighbors.
l Updates multicast routing entries of the forwarding plane.
Through these operations, after master/slave switchover, PIM routing entries of the main control
board are quickly restored and multicast forwarding entries are refreshed. This minimizes
multicast traffic interruption during master/slave switchover.
PIM GR is applicable to the PIM-SM/SSM network. Through PIM GR, the router in the PIM-
SM network can ensure nonstop multicast traffic forwarding during master/slave switchover.
PIM GR is also applicable to In Service Software Upgrade (ISSU). Through PIM GR, the
router can ensure multicast traffic forwarding during full-image ISSU of the main control board
and interface board.
As shown in Figure 1-12, take Router A as an example to show the PIM GR process.
Figure 1-12 PIM GR networking: Router A connects to downstream routers (Router C and
Router D) and IGMP receivers on a PIM-SM network.
PIM GR is based on unicast GR. It involves three phases: GR_START, GR_SYNC, and
GR_END.
GR_START
1. After Router A performs master/slave switchover, the PIM protocol starts the GR timer.
In this manner, PIM GR enters the GR_START phase. Meanwhile, unicast GR begins.
2. The PIM protocol sends Hello messages carrying new Generation IDs to all the interfaces
enabled with PIM-SM.
3. When Router D, the Reverse Path Forwarding (RPF) neighbor of Router A, finds that the
Generation ID of Router A changes, it re-sends a Join/Prune message to Router A.
4. If dynamic RP is used on the network, after the neighbor receives a Hello message with
the Generation ID being changed, the neighbor sends a BSM message to Router A to restore
BSR information and RP information on Router A.
5. After Router A receives the Join/Prune message from Router D, it creates a PIM routing
entry with an empty inbound interface to record the Join status of the downstream device.
6. During this period, the entries in the forwarding module remain unchanged to maintain
multicast traffic forwarding.
GR_SYNC
After unicast GR is complete, PIM GR enters the GR_SYNC phase. The PIM protocol builds a
Multicast Distribution Tree (MDT) according to unicast routing information, restores the
inbound interface of the PIM routing entry, and updates the Join queue to the source or the RP.
The PIM protocol then notifies the multicast forwarding module to update the forwarding table.
GR_END
After the GR timer expires, PIM GR enters the GR_END phase and the PIM protocol notifies
the multicast forwarding module of this event. The multicast forwarding module then ages the
forwarding entries that are not updated during GR.
1.3.7 PIM Security
Source-Address-based Filtering
This function is applicable to both PIM-DM and PIM-SM models.
With this function, the router filters the received multicast data packets based on source addresses
or source/group addresses. By setting ACL rules, you can configure a router to forward the
multicast data packets whose source addresses or both source and group addresses match the
ACL rules.
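The filtering logic can be sketched as follows in Python. This is a simplified model; the rule format and the addresses are hypothetical stand-ins for ACL rules on the router.

import ipaddress

def permit_packet(src, group, acl_rules):
    """acl_rules is a list of (source_network, group_network_or_None).
    A packet is forwarded only if some rule matches its source address
    and, when the rule carries a group network, its group address."""
    s, g = ipaddress.ip_address(src), ipaddress.ip_address(group)
    for src_net, grp_net in acl_rules:
        if s in ipaddress.ip_network(src_net):
            if grp_net is None or g in ipaddress.ip_network(grp_net):
                return True
    return False  # no rule matched: the packet is discarded

rules = [("10.1.1.0/24", None), ("10.2.0.0/16", "225.1.1.0/24")]
print(permit_packet("10.1.1.5", "226.0.0.1", rules))  # True: source-only rule
print(permit_packet("10.2.0.9", "226.0.0.1", rules))  # False: group mismatch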
You can also set the range of valid BSR addresses so that a router discards the Bootstrap
messages received from BSRs whose addresses are beyond the set address range, thereby
preventing BSR spoofing.
You can set the range of valid C-RP addresses and the range of multicast groups that each
C-RP serves. The BSR then discards the Advertisement messages received from C-RPs whose
addresses are beyond the set address range, thereby preventing C-RP spoofing.
You can also enable an RP to filter, based on ACL rules, the Register messages sent by the DR
at the multicast source side, thereby preventing attacks from illegal Register messages.
To prevent a router from setting up PIM neighbor relationships with unknown devices and
prevent an unknown device from becoming a DR, PIM neighbor filtering is required. After this
function is configured, an interface sets up neighbor relationships only with the addresses
matching the ACL rules and removes the neighbor relationships with addresses that do not
match the ACL rules.
PIM Silent
After a router interface connected to hosts is enabled with PIM, a PIM neighbor can be set up
on the interface to process various PIM packets. The configuration, however, has the security
vulnerability. To be specific, when a host maliciously sends PIM Hello messages, the router
may break down.
To avoid that, you can configure the function of PIM Silent on a router interface connected to
hosts to disable this interface from receiving and forwarding any PIM packet. At the same time,
the IGMP function on the interface is not affected.
1.4 Terms and Abbreviations
Abbreviation    Full Spelling
RP              Rendezvous Point
2 IGMP
Purpose
To ensure that multicast messages reach receivers, you need to connect the receivers to the IP
multicast network and let the receivers join the multicast group. In this case, you can use IGMP.
IGMP manages multicast group members by exchanging IGMP messages between hosts and
routers. In addition, IGMP records, on each interface, information about receivers joining and
leaving groups. This ensures that multicast data can be correctly forwarded to the interface.
2.2 References
The references of this feature are as follows:
2.3 Principles
2.3.1 IGMPv1&v2&v3
2.3.2 IGMP Group Compatibility
2.3.3 IGMP Querier Election
2.3.4 Router-Alert for IGMP
2.3.1 IGMPv1&v2&v3
Figure 2-1 Typical IGMP networking: Router A and Router B connect hosts on an Ethernet
network segment to the ISP network.
By sending IGMP Query messages to hosts and receiving IGMP Report messages and Leave
messages from hosts, a multicast router can identify the receivers (multicast group members)
on the relevant network segment. If a host is identified to be a receiver, the multicast router
forwards the corresponding multicast data to the network segment; if no host is identified to be
a receiver, the multicast router forwards no multicast data. Note that hosts can decide whether
to join or leave a multicast group by themselves.
As shown in Figure 2-1, the IGMP-enabled Router A automatically functions as the querier to
periodically send IGMP Query messages, and all hosts (Host A, Host B, and Host C) on the
same network segment of Router A can receive these IGMP Query messages.
l When a host receives an IGMP Query message, the processing flow is as follows:
– If the host has already joined the multicast group G, then within the response period
specified by Router A, the host replies with an IGMP Report message of G to Router A
after a random delay.
After receiving the IGMP Report message, Router A records information about G and
starts a timer for G (or refreshes the timer if it has already been started). In this way,
Router A can stop the multicast traffic to G as soon as no hosts respond. Router A
forwards multicast traffic of G to the network segment where the host resides.
– If a host does not join any multicast group, the host does not respond to the IGMP Query
message from Router A.
l When a host joins a multicast group, the processing flow is as follows:
After the host joins the multicast group G, the host actively sends an IGMP Report
message of G to Router A. In this way, Router A is informed to update its multicast group
information. Then the subsequent IGMP Report messages of the host are sent in response
to IGMP Query messages of Router A.
l When a host leaves a multicast group, the processing flow is as follows:
If the host decides to leave the multicast group G, the host sends an IGMP Leave message
of G to Router A. After receiving the IGMP Leave message, Router A triggers a query on
G to identify whether receivers remain on the network segment. If, after the query ends,
Router A still receives no IGMP Report message of G, Router A deletes the information
about G and stops forwarding the multicast traffic of G to the network segment.
IGMPv2 features the Report message suppression mechanism, which reduces the repetitive
IGMP report messages on the network.
After a host joins a multicast group G, the host receives an IGMP Query message from the
router, and then the host randomly selects a value from 0 to the maximum response time
(specified in the IGMP Query message) as the timer value. When the timer expires, the host
sends the IGMP Report message of G to the router. Nevertheless, if the host receives an IGMP
Report message from another host in G before the timer expires, the host does not send its
own IGMP Report message of G when the timer expires.
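The suppression behavior can be simulated by the following Python sketch (a simplified model; the host names and the maximum response time are hypothetical).

import random

def simulate_round(members, max_response_time=10.0):
    # Each member of group G draws a random timer value within the
    # maximum response time advertised in the Query message.
    timers = {h: random.uniform(0, max_response_time) for h in members}
    responder = min(timers, key=timers.get)  # the first timer to expire
    # The other members see this Report before their own timers expire
    # and therefore suppress their Reports.
    suppressed = [h for h in members if h != responder]
    return responder, suppressed

sender, suppressed = simulate_round(["HostA", "HostB", "HostC"])
print(f"{sender} sends the Report; suppressed: {suppressed}")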
When a host quits the multicast group G, the host sends the IGMP Leave message of G to the
router. Because of the Report message suppression mechanism in IGMPv2, the router cannot
determine whether other hosts remain in G. Therefore, the router triggers a query on G. If
another host is still in G, that host sends the IGMP Report message of G to the router.
If the router sends the query on G several times but receives no IGMP Report message from
any host, the router deletes the information about G and stops forwarding the multicast
data of G to the relevant network segment.
NOTE
Both the IGMP querier and non-queriers can process IGMP Report messages, while only the querier is
responsible for sending IGMP Query messages. In addition, IGMP non-queriers do not process
IGMP Leave messages.
2.3.2 IGMP Group Compatibility
When a router running a later IGMP version receives Report messages from hosts running an
earlier IGMP version, the router automatically lowers the version of the corresponding multicast
group to be the same as that of the hosts and then operates in this version.
For example, when the router of IGMPv2 or IGMPv3 version receives Report messages from
the hosts in the IGMPv1 version, the router lowers the version of the corresponding multicast
group to IGMPv1. Then, the router ignores the IGMPv2 Leave messages in the multicast group.
In addition, when the router of the IGMPv3 version receives Report messages from the hosts in
the IGMPv2 version, the router lowers the version of the corresponding multicast group to
IGMPv2. Then, the router ignores the IGMPv2 Leave messages, the IGMPv3 BLOCK messages,
the IGMPv3 TO_IN messages, and the multicast source list in the IGMPv3 TO_EX messages.
The multicast source-selecting function of IGMPv3 messages is suppressed.
If the IGMP version of a router is configured to be higher, a multicast group of the original
IGMP version can still function properly as long as the multicast group contains hosts.
2.3.3 IGMP Querier Election
IGMP routers on a network segment play either of the following roles:
l Querier
The router is responsible for sending IGMP Query messages and receiving IGMP Report
messages and Leave messages from hosts. In this way, the router knows which multicast
group has receivers (multicast group members) on the relevant network segment.
l Non-querier
The router only receives IGMP Report messages from hosts and thus knows which multicast
groups on the network segment have receivers. Then, according to the actions of the querier
on the network segment, the router identifies which receivers leave the network segment.
Generally, only one querier exists on a network segment. The querier is elected among routers
according to the following principles (taking Router A, Router B, and Router C as an
example):
l After Router A is enabled with IGMP, Router A considers itself as the default querier of
the network segment in the IGMP startup process, and sends IGMP Query messages on the
network segment. If Router A receives the IGMP Query message from Router B that has
a lower IP address, Router A is changed from the querier to the non-querier, starts the
another-querier-existing timer, and records Router B as the querier of the network segment.
l If Router A in the non-querier state receives the IGMP Query message from the querier
Router B, the another-querier-existing timer is updated; if Router A in the non-querier status
receives the IGMP Query message from Router C that has a lower IP address than the
querier Router B, the querier is updated to be Router C, and the another-querier-existing
timer is updated.
l When Router A is in the non-querier status, the another-querier-existing timer expires. Then
Router A is changed from the non-querier to the querier.
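The net effect of these principles is that the router with the lowest IP address on the network segment finally remains the querier, as the following Python sketch illustrates (the addresses are hypothetical).

import ipaddress

def elect_querier(router_ips):
    # Every router starts as the querier; on receiving a Query from a
    # lower address it becomes a non-querier, so the lowest address wins.
    return min(router_ips, key=ipaddress.ip_address)

routers = ["10.1.1.3", "10.1.1.1", "10.1.1.2"]
print(elect_querier(routers))  # 10.1.1.1 remains the querier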
NOTE
IGMPv1 does not support the querier election, and the querier in IGMPv1 is designated by the upper-layer
protocol, such as the Protocol Independent Multicast (PIM). At present, only the querier election for
routers of the same network segment and same IGMP version is supported. Therefore, all routers on the
same network segment must be configured with the same IGMP version.
NOTE
If PIM is enabled on the interface, the DR is responsible for guiding data forwarding. For details,
refer to the description of DR election in PIM-SM.
2.3.6 IGMP On-Demand
An access device enabled with IGMP proxy converges the Report/Leave status of its attached
hosts. It sends an IGMP Report message to the router only if the first member joins a multicast
group, and sends the IGMP Leave message to the router only if the last member leaves the
multicast group. This is called IGMP On-Demand.
A router enabled with IGMP On-Demand does not actively send IGMP Query messages to
identify whether multicast groups contain receivers on the network segment. Instead, the router
maintains the IGMP multicast groups by receiving the converged Report/Leave status of the
multicast groups from its connected access device (IGMP proxy).
A router enabled with IGMP On-Demand therefore implements IGMP differently from the
standard implementation, as described above.
2.3.7 IGMP Prompt-Leave
If the router is connected only to an access device that is enabled with IGMP proxy, when the
access device leaves a multicast group G and sends the IGMP Leave message of G to the
router, the router can determine that G contains no receivers and thus does not need to trigger
IGMP Query messages. The router can then delete all records about G and stop forwarding data
of G to the relevant network segment. This is called IGMP Prompt-Leave.
After the router is enabled with IGMP Prompt-Leave, the router triggers no IGMP Query
messages destined for a multicast group when it receives the IGMP Leave message of the
multicast group. The router then deletes all records about the multicast group and stops
forwarding the data of the multicast group to the relevant network segment. In this manner, the
router responds faster to the IGMP Leave message.
NOTE
The IGMP Prompt-Leave feature is only supported in IGMPv2, and other IGMP versions do not support
this feature. The IGMP On-Demand feature already includes the IGMP Prompt-Leave feature.
2.3.8 Controllable Multicast
IGMP provides the following functions to control multicast group memberships:
l IGMP-Limit: limits the number of multicast groups or source/groups by setting entry limits
on interfaces, on a single instance, and on all instances.
l Static-Group: statically binds multicast groups (or source/groups) to an interface.
l Group-Policy: filters the multicast groups that hosts can join.
IGMP-Limit
Figure 2-2 IGMP-Limit networking: Router A, Router B, and Router C connect receivers on the
leaf networks N1 and N2 to the PIM network.
When too many users watch multiple programs simultaneously, a large amount of router
bandwidth is consumed, degrading router performance. To avoid this, the number of multicast
groups joinable on each IGMP interface and globally must be limited. In this manner, the
number of joined multicast groups is restricted within the limit, and users who have joined
multicast groups can watch clearer and more stable programs.
IGMP-Limit sets the upper limit for the number of IGMP multicast groups on a router interface,
in a single instance, and across all instances. When the router receives an IGMP Report message
that would add a new multicast group, the router first checks whether the upper limit would be
exceeded. If the upper limit is not exceeded, the member is added to the new multicast group,
and the multicast data is forwarded to the member.
l IGMP entry limit on an interface
– You can set an IGMP entry limit on an interface. After the interface receives an IGMP
Join message, the interface determines whether to create an entry according to whether
the IGMP entry limit on the interface has been reached.
– By configuring the IGMP entry limit with an ACL, you can exempt the groups or
source/groups falling in a specified range from the limit.
l IGMP entry limit of a VPN instance
You can set the limit for IGMP entries of a single multicast VPN instance. That is, you can
limit the number of IGMP entries on all interfaces in the current VPN instance.
– After an interface receives an IGMP Join message, the interface determines whether to
create an entry according to whether the number of the IGMP entries on all interfaces
in the current VPN instance reaches the configured limit.
– When an interface deletes (*, G) or (S, G) entries, the interface decreases the IGMP
entries in the current instance correspondingly.
l IGMP entry limit on a router
You can set the IGMP entry limit on a router. That is, you can limit the number of IGMP
entries on the interfaces belonging to all instances on a router.
– After an interface receives an IGMP Join message, the interface determines whether to
create an entry according to whether the number of the IGMP entries on the whole
router reaches the configured limit.
– When an interface deletes (*, G) and (S, G) entries, the interface decreases the IGMP
entries on the router correspondingly.
The preceding IGMP entry limit policies are subject to the following rules:
l A (*, G) entry or an (S, G) entry is counted as one entry.
l A (*, G) entry used in SSM mapping is counted as one entry; however, the (S, G) entry
mapped by the (*, G) entry is not counted as an entry.
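These counting rules can be sketched as follows in Python (a simplified model; the entry representation is hypothetical).

def count_igmp_entries(entries):
    """entries is a list of dicts such as
    {"type": "*G" or "SG", "from_ssm_mapping": bool}.
    A (*, G) or (S, G) entry counts as one; an (S, G) entry mapped
    from a (*, G) entry by SSM mapping is not counted separately."""
    return sum(
        1 for e in entries
        if not (e["type"] == "SG" and e.get("from_ssm_mapping"))
    )

def accept_join(entries, limit):
    # A new Join is accepted only if the counted entries stay
    # within the configured limit.
    return count_igmp_entries(entries) < limit

table = [
    {"type": "*G", "from_ssm_mapping": False},
    {"type": "SG", "from_ssm_mapping": True},   # mapped from the (*, G) above
    {"type": "SG", "from_ssm_mapping": False},
]
print(count_igmp_entries(table))  # 2
print(accept_join(table, 3))      # True: one more entry may be created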
Static-Group
Figure 2-3 Static-Group networking: Router A and Router B connect Source1 and Source2 to
User1 and User2 across a PIM-DM or PIM-SM network.
Static-Group is implemented by configuring the static multicast group on the relevant interface.
After Static-Group is configured, the entries created on the router have no timer and never expire.
Therefore, the router continuously forwards data to receivers in the static multicast group. When
the receivers no longer need the multicast data, the corresponding entries cannot be deleted
automatically through expiration; they can be removed only by manually deleting the static
multicast group.
In real applications, Static-Group is configured on the router interface that is connected to the
host. When hosts or routers directly connected to this router have receivers that want the
multicast data, the router can forward the multicast data quickly, thus shortening the channel
switchover period.
Static-Group can be configured one by one or in batches. In other words, Static-Group supports
the joining of a single multicast group (or multicast source and group) and the joining of
multiple multicast groups (or multicast sources and groups).
Group-Policy
Figure 2-4 Group-Policy networking: Router A, Router B, and Router C connect receivers on
the leaf networks N1 and N2 to the PIM network.
Group-Policy refers to a filtering policy configured on the router interface. After Group-Policy
is configured, the router can set restrictions on certain multicast groups, and establish no entries
for these multicast groups.
When too many users watch multiple programs simultaneously, a large amount of router
bandwidth is consumed, degrading router performance. To avoid this, you can use
Group-Policy to restrict certain multicast groups and limit the number of multicast
groups. In addition, for network security or ease of management, you can also use Group-
Policy to reject IGMP Report messages for certain multicast groups and prohibit
forwarding the data of these multicast groups.
Group-Policy is configured through an ACL.
2.3.9 SSM Mapping
For multicast groups in the SSM range, the router does not process (*, G) requests but only
(S, G) requests. For details of SSM, see PIM-SSM.
Figure 2-5 SSM mapping: on the SSM network, Host A sends IGMPv3 Report messages, and
Host B and Host C send IGMPv2 and IGMPv1 Report messages to the router.
As shown in Figure 2-5, in the user network segment of the SSM network, Host A runs IGMPv3,
Host B runs IGMPv2, and Host C runs IGMPv1. If you want Host B and Host C to provide SSM
multicast services for all hosts in the network segment without upgrading their IGMP versions
to IGMPv3, the router needs to support SSM mapping.
If the router supports SSM mapping, and is configured with the relevant conversion principle,
the router performs either of the following after receiving the IGMP Report messages (*,G) from
Host B and Host C:
l If the multicast group of the messages indicates the ASM range, see the section
IGMPv2&v3 for the processing method.
l If the multicast group of the messages indicates the SSM range, follow the SSM mapping
mechanism to convert the (*,G) of IGMPv1/v2 into the (S,G) according to the configured
conversion principle.
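The conversion can be sketched as follows. The group range and source addresses are illustrative only; the real conversion principle is whatever mapping rules are configured on the router:

import ipaddress

SSM_RANGE = ipaddress.ip_network("232.0.0.0/8")   # default IPv4 SSM range

# Configured conversion principle: group range -> multicast sources.
ssm_mappings = [
    (ipaddress.ip_network("232.1.1.0/24"), ["10.1.1.1", "10.1.1.2"]),
]

def handle_v1v2_report(group):
    g = ipaddress.ip_address(group)
    if g not in SSM_RANGE:
        return "ASM range: process as described in IGMPv2&v3"
    for network, sources in ssm_mappings:
        if g in network:
            # Convert the (*, G) report into (S, G) pairs.
            return [(s, group) for s in sources]
    return "SSM range but no mapping rule: report cannot be served"

print(handle_v1v2_report("232.1.1.9"))   # two (S, G) pairs
print(handle_v1v2_report("225.1.1.9"))   # ASM processing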
(Figure 2-6: IGMP host address filtering. Router A connects an ISP network through Router B to hosts on the Ethernet network segment 10.0.0.1/24.)
To ensure that multicast traffic is sent precisely, you can configure the IGMP host address
filtering policy on a router interface.
l If the host address of an IGMP message is on the same network segment as the IP address
of the receiving interface, or the host address of the IGMP message is 0.0.0.0, the message
passes the IGMP host address filtering.
l If the host address of an IGMP message is not on the same network segment as the IP
address of the receiving interface, the message fails the IGMP host address filtering and is
discarded.
As shown in Figure 2-6, the IP addresses assigned to the interfaces that connect Router A to
hosts are on the network segment 10.0.0.1/24. The host address of the IGMP Report message
sent by Host A is 11.0.0.1; the host address of the IGMP Report message sent by Host B is
10.0.0.8; the host address of the IGMP Report message sent by Host C is 0.0.0.0. In this
situation, Router A processes the IGMP Report messages from Host B and Host C, and discards
the IGMP Report message from Host A.
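The check itself is simple enough to sketch directly; the function below reproduces the example, assuming the receiving interface is 10.0.0.1/24:

import ipaddress

def report_accepted(host_addr, interface_addr_with_prefix):
    # Pass if the host address is 0.0.0.0 or lies on the same network
    # segment as the receiving interface; otherwise discard.
    if host_addr == "0.0.0.0":
        return True
    interface = ipaddress.ip_interface(interface_addr_with_prefix)
    return ipaddress.ip_address(host_addr) in interface.network

# Host A (11.0.0.1) is discarded; Host B (10.0.0.8) and Host C (0.0.0.0) pass.
for host in ("11.0.0.1", "10.0.0.8", "0.0.0.0"):
    print(host, report_accepted(host, "10.0.0.1/24"))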
IGMPv2: The message contains the multicast group information, rather than the multicast source information.
IGMPv3: A message contains not only the multicast group information, but also the multicast source information.
Advantage of IGMPv3: The multicast source can be selected directly, and thus the selection is more precise.

IGMPv2: A message contains the record of a multicast group.
IGMPv3: A message contains records of multiple multicast groups.
Advantage of IGMPv3: The number of IGMP messages is reduced on the network segment.

IGMPv2: The IGMP Query message of a specified multicast group features no re-transmission mechanism.
IGMPv3: The IGMP Query message of a specified multicast group and a specified multicast source features the re-transmission mechanism.
Advantage of IGMPv3: The multicast information maintained by the non-querier and querier can be kept consistent better.
(Figure: Typical IGMP application. Source1 and Source2 connect across a PIM-SM network through Router A and Router B to User1 and User2.)
IGMP is the protocol responsible for adding hosts into the routing network. Therefore, IGMP
is applied to the area where the router and host are connected. Note that IGMP can be used for
hosts and routers of different versions.
The IGMP On-Demand and IGMP Prompt-Leave features are only applicable to the scenario
where only a single router and a single access device are located on the shared network segment.
2.4 Typical IGMP Applications
2.4.2 Applications of the IGMP Entry Limit
On the UPE, you can configure interface-based IGMP entry limit, global IGMP entry limit for
an instance, and global IGMP entry limit for all instances.
(Figure: Application of the IGMP entry limit. VoD edge servers of ISP1, ISP2, and ISP3 connect through NPEs across an IP/MPLS backbone to UPEs.)
Terms Description
IGMP The Internet Group Management Protocol (IGMP) refers to the signaling
mechanism between the host and router on the leaf network of IP multicast.
The host joins or leaves a multicast group by sending relevant IGMP messages;
the router identifies whether the multicast group contains members on the
downstream network.
(S,G) (S,G) refers to a multicast routing entry. S indicates a multicast source, and G
indicates a multicast group.
After a multicast message with S as the source address and G as the group address
reaches the router, it is forwarded through the downstream interface of the (S, G)
entry.
Usually, the multicast message is expressed as the (S, G) message.
(*,G) (*,G) refers to a PIM routing entry. * indicates any multicast source, and G
indicates a multicast group.
(*, G) is applicable to all multicast messages with the multicast group address as
G. That is, all the multicast messages sent to G are forwarded through the
downstream interface of the (*, G) entry, regardless of which multicast sources
send the multicast messages.
Abbreviations
Abbreviation Full Spelling
3 MSDP
Purpose
The network composed of multiple PIM-SM routers is called the PIM-SM network. A large
PIM-SM network may be maintained by multiple Internet Service Providers (ISPs).
PIM-SM domains are isolated by Rendezvous Points (RPs), and thereby the multicast source
can only register to the local RP and hosts can only send the Join message to the local RP. As a
result, the RP only knows the local multicast source and distributes the data from the multicast
source to the local users.
A PIM-SM network depends on RPs to forward multicast data. To implement load balancing
among RPs, enhance network reliability, and facilitate management, you can divide a PIM-SM
network into multiple domains, each served by its own RPs. Each domain is called a PIM-SM
domain.
After a PIM-SM network is divided into multiple PIM-SM domains, RPs in different domains
cannot communicate with each other. To implement the communication between PIM-SM
domains, MSDP is introduced.
NOTE
A PIM-SM domain can be considered as the service scope of an RP, and different PIM-SM domains can be
divided by the BootStrap Router (BSR) boundary or by configuring different static RPs on different
routers.
3.2 References
The references of this feature are as follows:
3.3 Principles
3.3.1 Inter-Domain Multicast in MSDP
For details of MBGP, refer to the chapter "MBGP Configuration" in the HUAWEI
NetEngine5000E Core Router Configuration Guide - IP Multicast.
Basic Principle
Setting up the MSDP peer relationships between RPs in different PIM-SM domains ensures the
communications between MSDP peers (RPs), and thereby forming an MSDP-connected graph.
MSDP peers then exchange Source Active (SA) messages. The SA message carries (S,G)
information registered on RP of the source DR. SA messages exchange among MSDP peers.
This ensures that SA messages sent by a RP can be received by all the other RPs.
As shown in Figure 3-1, the PIM-SM network is divided into four PIM-SM domains. The
multicast source of PIM-SM1 domain, namely, Source, sends data to the multicast group G.
Receiver in the PIM-SM3 domain, as a member of G, maintains an RP-rooted Shared Tree (RPT)
of G with RP3.
(Figure 3-1: Inter-domain multicast through MSDP. Four PIM-SM domains: Source and DR1 with RP1 in PIM-SM 1; RP2 in PIM-SM 2; Receiver, DR3, and RP3 in PIM-SM 3; and PIM-SM 4. Legend: MSDP peers, multicast packet, Register, SA message, Join.)
As shown in Figure 3-1, Receiver can receive the multicast data sent by Source after the MSDP
peer relationships between RP1, RP2, and RP3 are set up.
1. Source sends multicast data to G. DR1 then encapsulates the data into the Register message
and sends the message to RP1. As the RP of the multicast source, RP1 creates an SA
message, carrying IP addresses of the multicast source, multicast group G, and RP1, and
sends the SA message to the peer RP2.
2. After RP2 receives the SA message, it performs Reverse Path Forwarding (RPF) check on
the message. If the check succeeds, RP2 forwards the message to RP3.
3. After RP3 receives the SA message, it performs an RPF check on the message, and the
check succeeds. Because RP3 has a (*,G) entry, its domain contains members of G.
4. RP3 creates an (S, G) entry and sends a Join message with the (S, G) information to Source
hop by hop. A multicast path (source tree) from the Source to RP3 is thus set up. After the
multicast data reaches RP3 along the source tree, RP3 forwards it to Receiver along the
RPT.
5. After Receiver receives the multicast data, it determines whether to initiate the SPT
switchover.
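The SA processing in steps 2 through 4 can be sketched as follows. The data structures are hypothetical simplifications of per-RP state, and the peer-RPF rule is reduced to a single lookup:

def process_sa(sa, local_state, peers, rpf_peer_toward):
    # sa: {"source", "group", "origin_rp", "received_from"}
    # RPF check: accept the SA only from the peer on the best path
    # toward the originating RP, which prevents SA loops.
    if sa["received_from"] != rpf_peer_toward(sa["origin_rp"]):
        return "SA fails the RPF check: discarded"
    # Forward the SA to all other MSDP peers.
    forwarded_to = [p for p in peers if p != sa["received_from"]]
    actions = []
    # If a (*, G) entry exists, the local domain has members of G:
    # create (S, G) state and join toward the source (step 4).
    if ("*", sa["group"]) in local_state:
        local_state[(sa["source"], sa["group"])] = "created"
        actions.append("send (S, G) Join hop by hop toward the source")
    return forwarded_to, actions

state = {("*", "225.1.1.1"): "RPT exists"}
print(process_sa({"source": "10.1.1.1", "group": "225.1.1.1",
                  "origin_rp": "RP1", "received_from": "RP2"},
                 state, peers=["RP2", "RP4"],
                 rpf_peer_toward=lambda rp: "RP2"))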
Applicable Environment
In a traditional PIM-SM domain, each multicast group is mapped to only one RP. When the
network is overloaded or the traffic is too heavy, many network problems occur, such as the
heavy pressure of the RP, the slow convergence after the RP fails, and the non-optimal multicast
forwarding path.
Anycast RP addresses the problem of heavy load on a single RP in a PIM-SM domain, which
is caused by the convergence of all multicast source information and multicast join information
on that RP. Meanwhile, anycast RP ensures that the path to an RP is optimal, because receivers
and multicast sources join and register to the nearest RP.
Principles
As shown in Figure 3-2, in the PIM-SM domain, the multicast sources, S1 and S2, send multicast
data to the multicast group G that contains multicast members, U1 and U2.
(Figure 3-2: Anycast RP networking in a PIM-SM domain. S1 and S2 are multicast sources and U1 and U2 are members; RP1/DR1 and RP2/DR2 serve them; RP1 and RP2 are MSDP peers that exchange SA messages.)
1. Establish the MSDP peer relationship between RP1 and RP2, and then enable multicast in
the PIM-SM domain through the MSDP peers.
2. A receiver sends a Join message to the nearest RP to set up an RPT; in addition, each
multicast source registers to its nearest RP, and the RPs send each other SA messages to
share the multicast source information.
3. The RPs join the Shortest Path Tree (SPT) rooted at the DR of the multicast source. Then
the RPs receive and forward multicast data. After a receiver receives the multicast data, it
decides whether to initiate the SPT switchover.
(Figure: Anycast RP application within AS 100. Source in PIM-SM1 with RP1/Router1 and Receiver in PIM-SM2 with RP2/Router2; RP1 and RP2 are MSDP peers acting as anycast RPs.)
(Figure: Anycast RP in a PIM-SM domain. Router1 and Router2 each use the Loopback1 address as the RP address; S1 and S2 are sources, U1 and U2 are members; a BSR exists in the domain; Router1 and Router2 are MSDP peers.)
MSDP Multicast Source Discovery Protocol (MSDP) is only applicable to the PIM-SM
domain and only meaningful for the Any-Source Multicast (ASM) model.
After the MSDP peer relationship is set up between RPs of different PIM-SM
domains, multicast source information can be shared between PIM-SM domains,
and the inter-domain multicast can be implemented.
After the MSDP peer relationship is set up between RPs of the same PIM-SM
domain, multicast source information can be shared in the PIM-SM domain, and
anycast RP can be implemented.
PIM Protocol Independent Multicast (PIM) is one of the multicast routing protocols.
PIM forwarding can be implemented only if unicast routes are reachable. By using
the existing unicast routing information, PIM performs Reverse Path Forwarding
(RPF) check on multicast messages. In this manner, multicast routing entries are
created and the multicast distribution tree is set up.
SA Source Active (SA) refers to a type of the MSDP message. An SA message contains
multiple groups of (S,G) information or encapsulates a Register message. MSDP
peers exchange (S,G) information to share the multicast source information.
SPT Shortest Path Tree (SPT) distributes multicast data by taking the multicast source
as the root and multicast group members as leaves. SPT is applicable to PIM-DM,
PIM-SM, and PIM-SSM.
BSR BootStrap Router (BSR), also called Boot router, is the management core of the
PIM-SM network. The BSR collects the C-RP information into an RP-set,
encapsulates the RP-set into a Bootstrap message, and advertises the Bootstrap
message to each PIM-SM router in the entire network. The PIM-SM router then
calculates the RP corresponding to the specified multicast group according to the
RP-set.
Abbreviations
Abbreviations Full Spelling
AS Autonomous System
RP Rendezvous Point
4 Multicast Management
l Multicast Ping (MPing): is a tool used to probe multicast services. By sending ICMP Echo
Request messages, MPing triggers the setup of the multicast forwarding tree and detects
the members of reserved multicast groups over the network.
NOTE
Reserved multicast group: The reserved multicast group addresses are within the range from 224.0.0.0
to 224.0.0.255. For example, 224.0.0.5 is reserved for the OSPF multicast group; 224.0.0.13 is
reserved for the PIMv2 multicast group.
l Multicast trace route (MTrace): is a tool used to trace multicast forwarding paths. It can
trace the path from a receiver to a multicast source along the multicast forwarding tree.
Purpose
As multicast services are widely applied, MPing and MTrace become more important in
multicast service maintenance and fault location. When selecting the network devices that
support multicast, users demand that the devices should support not only multicast forwarding
and multicast routing protocols but also tools for diagnosing multicast faults. With the
development of multicast services, multicast maintenance and fault location are absolutely
necessary.
l Locating faulty nodes and finding configuration errors in multicast troubleshooting and
routine maintenance
l Tracing the actual forwarding path of packets and collecting traffic information during the
trace; calculating multicast traffic rate in cyclic path tracing
l Outputting information about the faulty nodes for the NMS to analyze the fault and generate
alarms
4.2 References
The following table lists the references of this document.
4.3 Principles
4.3.1 MPing
4.3.2 MTrace
4.3.1 MPing
MPing uses standard ICMP messages to detect the connectivity of a multicast path. A standard
ICMP message used by MPing is an ICMP Echo Request message, with the encapsulated
destination address being a multicast address (either a multicast address for the reserved
multicast group or a common multicast group address).
l If the encapsulated destination address is a multicast address for the reserved multicast
group, the querier router must specify the outgoing interface of the ICMP Echo Request
message. Finding that the destination address of the received ICMP Echo Request message
is the address of the reserved multicast group, the member (router) of the reserved multicast
group responds with an ICMP Echo Reply message. Therefore, MPing can be used to check
the members of reserved multicast groups over the network.
l If the encapsulated destination address is a common multicast group address, the querier
router cannot specify the outgoing interface of the ICMP Echo Request message. The ICMP
Echo Request message is forwarded across the multicast network as multicast traffic, which
triggers the setup of multicast routing entries along the path. The network quality analysis
(NQA) software can perform MPing operations on multicast groups, and then gather
information about delay and jitter. In this manner, multicast services can be maintained
and multicast faults can be located.
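The address-dependent behavior can be sketched as a small decision function; the interface name below is a placeholder:

import ipaddress

RESERVED_GROUPS = ipaddress.ip_network("224.0.0.0/24")

def mping_plan(group, outgoing_interface=None):
    g = ipaddress.ip_address(group)
    if g in RESERVED_GROUPS:
        # Reserved groups require an explicit outgoing interface.
        if outgoing_interface is None:
            return "error: outgoing interface must be specified"
        return f"send ICMP Echo Request to {group} out of {outgoing_interface}"
    # Common groups: the request travels as ordinary multicast traffic.
    return f"forward ICMP Echo Request to {group} along the multicast tree"

print(mping_plan("224.0.0.5"))              # reserved (OSPF): needs interface
print(mping_plan("224.0.0.5", "GE1/0/0"))
print(mping_plan("225.1.1.1"))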
4.3.2 MTrace
MTrace complies with the protocol standard draft-fenner-traceroute-ipm-01.txt defined by
the Internet Engineering Task Force (IETF).
This standard describes a mechanism to trace the path on which multicast data is forwarded from
a multicast source to a designated receiver.
MTrace runs on a multicast-enabled network, that is, a network running Protocol Independent
Multicast (PIM-DM or PIM-SM) with an established multicast distribution tree. MTrace
probes the multicast forwarding path by sending IGMP Tracert messages. IGMP Tracert
messages fall into the following types: IGMP Tracert Query message, IGMP Tracert Request
message, and IGMP Tracert Response message.
l The IGMP Tracert Request message is the IGMP Tracert Query message with an additional
response data block added to the end of the message.
l The IGMP Tracert Response message is the IGMP Tracert Request message with only the
message type field changed.
1. Run the MTrace command on the querier router, with the multicast source address,
destination host address, and multicast group being specified.
2. The querier router sends an IGMP Tracert Query message to the last-hop router connected
with the destination host.
3. After receiving the IGMP Tracert Query message, the last-hop router adds a response data
block containing the information about the interface receiving this IGMP Tracert Query
message to construct an IGMP Tracert Request message, and sends the message to the
previous-hop router.
4. The router of each hop adds a response data block to the IGMP Tracert Request message
and sends the message upstream.
5. When the first-hop router connected with the multicast source receives the IGMP Tracert
Request message, it also adds a response data block and sends the IGMP Tracert Response
message to the querier router.
6. The querier router parses the IGMP Tracert Response message and obtains the information
about the forwarding path from the multicast source to the destination host.
7. If the IGMP Tracert Request message cannot reach the first-hop router because of some
errors, the IGMP Tracert Response message is directly sent to the querier router. The querier
router then parses the data block information for locating the faulty node. In this way, faulty
node monitoring is realized.
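The message flow in steps 1 through 6 can be simulated with a short sketch; hop names and interface labels are invented for illustration:

def mtrace(path_last_to_first, querier):
    # path_last_to_first: routers from the last hop (near the
    # destination host) to the first hop (near the source).
    message = {"type": "Query", "blocks": []}
    for router in path_last_to_first:
        # Each hop appends a response data block recording the
        # interface on which it received the traced flow, turning
        # the Query into a Request.
        message["type"] = "Request"
        message["blocks"].append({"router": router,
                                  "incoming_interface": router + ":iif"})
    # The first-hop router changes the type and replies to the querier.
    message["type"] = "Response"
    return {"reply_to": querier, "blocks": message["blocks"]}

print(mtrace(["LastHop", "MidHop", "FirstHop"], querier="Querier"))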
An MTrace operation can be initiated in the following modes. The initiating modes vary
with networking environment.
l all-router: used when the querier router is directly connected to the destination host
but is not the last-hop router. 224.0.0.2 is set as the destination address of the message.
Such a message can be received by all routers residing on the network segment of the
destination host, including the last-hop router.
l last-hop: indicates that the IP address of the last-hop router is set as the destination
address of the message. This mode requires the user to input the IP address of the last-
hop router.
l destination: indicates that the IP address of the destination host is set as the destination
address of the message. When the router that directly connects the destination host
receives such a message, the router judges whether it is the last-hop router. If not, the
router re-encapsulates the IGMP Tracert Query message in all-router mode.
l multicast-tree: indicates that the querier router is just on the path from the multicast
source to the destination host (for example, the first-hop router). The IP address of the
traced multicast group is set as the destination address of the message, and the IP address
of the multicast source is set as the source address of the message. Then, the message
is forwarded along the multicast path and finally arrives at the last-hop router.
Abbreviation
Abbreviation Full Spelling
Purpose
l RPF check
This function is used to search for an optimal unicast route to a multicast source and create
a multicast forwarding tree. The outgoing interface of the unicast route is the incoming
interface of the forwarding entry. When the forwarding module receives multicast
data packets, it searches the forwarding entry and checks whether the incoming interface
of the data packets is correct. If the interface on which a multicast data packet arrives
matches the outgoing interface of the unicast route, the packet passes the RPF check;
otherwise, the packet fails the RPF check and is discarded. The RPF check effectively
avoids traffic loops during multicast data forwarding.
l Multicast load splitting
During multicast routing, you can configure a multicast load splitting policy on the
router so that the router can select different routes from the equal-cost routes as RPF routes
for different forwarding entries to guide data forwarding. Because the RPF routes of
forwarding entries can be distributed among different equal-cost routes, multicast load
splitting is implemented.
l Longest-match multicast routing
During multicast routing, the router prefers the route whose destination address mask and
source address mask are of the longest match to achieve accurate route matching.
l Multicast boundary designation
By configuring a multicast boundary on an interface, you can block multicast data on the
interface. That is, disable the interface from forwarding the received multicast data.
5.2 References
The following table lists the references of this document.
5.3 Principles
5.3.1 RPF Check
5.3.2 Multicast Load Splitting
5.3.3 Longest-Match Multicast Routing
5.3.4 Multicast Boundary Designation
(Figure 5-1: RPF check networking. Source (192.168.0.1/24) in an ISP network connects through Router A and Router B to Router C and receivers; interfaces POS 1/0/0 and POS 2/0/0 are marked.)
As shown in Figure 5-1, multicast packets reach Router C on POS 1/0/0, and Router C
performs the RPF check on them. Router C searches its own routing table and finds that
POS 2/0/0 is the outgoing interface of the shortest path to the source, which is consistent with
the incoming interface of the (S, G) entry. Because the actual incoming interface POS 1/0/0 is
inconsistent with the incoming interface in the forwarding entry, the RPF check fails: Router
C judges that the (S, G) entry is correct but the packets have arrived along an incorrect path,
and discards the packets. A sketch of this check follows.
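A minimal sketch of the check, using the interfaces from this example (the routing table is reduced to a single static lookup):

# Router C's unicast route: the shortest path to Source leaves
# through POS 2/0/0, so multicast from Source must arrive there.
unicast_route_outgoing_interface = {"Source": "POS2/0/0"}

def rpf_check(source, arriving_interface):
    expected = unicast_route_outgoing_interface[source]
    if arriving_interface == expected:
        return "pass: forward the packet"
    return f"fail: expected {expected}, got {arriving_interface}: discard"

print(rpf_check("Source", "POS1/0/0"))   # fails, as in Figure 5-1
print(rpf_check("Source", "POS2/0/0"))   # passes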
(Figure 5-2: Networking diagram of multicast group-based load splitting. Traffic from Source1 for (S,G1), (S,G2), (S,G3), (S,G4), and so on is split over equal-cost paths through Router A, Router B, Router C, and Router D toward Router E, Router F, and Router G.)
Based on a series of algorithms, a multicast router can select a proper route among several equal-
cost routes for each multicast group. This route is used for packet forwarding for this group. As
a result, multicast traffic for different groups can be split over different forwarding paths.
(Figure 5-3: Networking diagram of multicast source-based load splitting. Traffic from Source1 through Source10 for (S1,G) through (S10,G) is split over equal-cost paths through Router A, Router B, Router C, and Router D toward Router E, Router F, and Router G.)
Based on a series of algorithms, a multicast router can select a proper route among several
equal-cost routes for each multicast source. This route is used for packet forwarding for this
source. As a result, multicast traffic from different sources can be split over different forwarding
paths.
Figure 5-4 Networking diagram of multicast source- and multicast group-based load splitting
(Figure: Traffic for (S1,G1) through (S10,G10) from Source1 through Source10 is split over equal-cost paths through Router A, Router B, Router C, and Router D toward Router E, Router F, and Router G.)
Based on a series of algorithms, a multicast router can select a proper route among several
equal-cost routes for each source-specific multicast group. This route is used for packet
forwarding for this source-specific multicast group. As a result, multicast traffic for different
source-specific groups can be split over different forwarding paths. A sketch of such a selection
follows.
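One way to implement such a selection is a deterministic hash over the source and/or group address, as sketched below. The CRC-based hash is an illustrative choice, not the router's actual algorithm:

import zlib

equal_cost_routes = ["via RouterB", "via RouterC", "via RouterD"]

def select_route(source, group, mode="source-group"):
    # mode selects group-based, source-based, or source- and
    # group-based load splitting, matching the three variants above.
    key = {"group": group,
           "source": source,
           "source-group": source + "/" + group}[mode]
    index = zlib.crc32(key.encode()) % len(equal_cost_routes)
    return equal_cost_routes[index]

for group in ("225.1.1.1", "225.1.1.2", "225.1.1.3"):
    print(group, select_route("10.1.1.1", group, mode="group"))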
(Figure: Networking diagram of stable-preferred load splitting. Source connects through Router A, Router B, Router C, and Router D over equal-cost paths toward Router E, Router F, Router G, and Receiver.)
l Implementation principle
The router configured with stable-preferred load splitting selects the most proper route for
a newly created entry, that is, the route assigned the fewest entries. When the network
topology and entries are stable, all entries with the sources on the same network segment
are distributed evenly among the equal-cost routes.
If imbalance occurs because an entry is deleted or the weight of a route changes, the
router configured with stable-preferred load splitting does not rebalance the existing
entries. Instead, it restores balance by selecting the most proper routes for subsequently
created entries. This prevents frequent route changes for existing entries while balance is
gradually restored, as the sketch below illustrates.
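A sketch of the selection rule, with illustrative route names and loads:

from collections import Counter

def assign_entry(route_load, routes, entry):
    # A new entry goes to the equal-cost route currently carrying
    # the fewest entries; existing entries are never moved.
    least_loaded = min(routes, key=lambda r: route_load[r])
    route_load[least_loaded] += 1
    return entry, least_loaded

routes = ["R1", "R2", "R3"]
load = Counter({"R1": 4, "R2": 2, "R3": 2})    # imbalance after deletions
for i in range(4):
    print(assign_entry(load, routes, f"(S{i},G)"))
# New entries land on R2 and R3 first, so balance is restored
# gradually without touching the entries already on R1.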
Principles
(Figure 5-6: Multicast boundary networking. Source1 and Source2 connect to Router B and Router D; Router A, Router B, and Router C form multicast domain 1, and Router D, Router E, and Router F form multicast domain 2; the domains interconnect through GE 1/0/0 on Router B and GE 2/0/0 on Router D; receivers attach to Router C and Router F.)
As shown in Figure 5-6, Router A, Router B, and Router C form a multicast domain 1;
Router D, Router E, and Router F form a multicast domain 2. The two multicast domains
communicate through Router B and Router D.
If the data for a multicast group (G) in one multicast domain is required to be isolated from the
other multicast domain, you only need to configure GE 1/0/0 or GE 2/0/0 as a multicast boundary
for G so that the interface no longer forwards data to and receives data from G.
Multicast load splitting Multicast load splitting is different from load balancing. Multicast load
splitting indicates that multicast entries can be distributed to multiple
equal-cost routes and the number of multicast entries transmitted on each
equal-cost route can be different.
Abbreviations
Abbreviation Full Spelling
6 Multicast VPN
Purpose
MVPN implements multicast transmission on MPLS/BGP VPNs. It transmits multicast data and
control messages of the PIM instances in private network (VPN-specific PIM instances or PIM
C-instances, C indicating Customer) over the public network to the remote sites of the VPN.
The PIM instances in the public network (PIM P-instances) need not know the multicast data
transmitted between the private networks, and the PIM C-instances need not know the multicast
routing information of the PIM P-instances. In this way, the PIM instances of the public network
are isolated from those of the private networks.
6.2 References
The following table lists the references of this document:
6.3 Principles
6.3.1 Concepts in MVPN
6.3.2 Inter-Domain Multicast Implemented by MVPN
6.3.3 PIM Neighbor Relationship Between CE, PE, and P
6.3.4 Process of Establishing a Share-MDT
6.3.5 MT Transmission Process Based on the Share-MDT
6.3.6 Switch-MDT Switchover
MD is short for Multicast Domain. An MD is the set of all the VPN instances on the PEs that
can transmit multicast packets to each other.
l Share-Group
Based on the MD principle, all the VPN instances on the PEs in the same MD must join a
common group, called a Share-Group.
Currently, one VPN instance can be configured with only one Share-Group, that is, one
VPN instance can join only one MD.
l Share-MDT
Share-MDT is short for Share-Multicast Distribution Tree. Actually, it is set up when the
PIM C-instances on the PEs join Share-Groups. A Share-MDT transmits the PIM protocol
packets and low-rate data packets to other PEs within the same VPN. The Share-MDT is
regarded as a multicast tunnel (MT) within an MD.
l MTI
MTI is short for Multicast Tunnel Interface. It is the outgoing interface or incoming
interface of an MT. An MTI is equal to the outgoing interface or incoming interface of an
MD. The local PE and remote PE send and receive VPN data through MTIs.
The MTI is the channel through which the public network instance and VPN instances on
PEs communicate. PEs are connected to an MT by using MTIs, which is equal to the
situation that PEs are connected to a shared network segment. On each PE, VPN instances
that belong to the MD set up the PIM neighbor relationship on MTIs.
l Switch-Group
A Switch-Group is a group that all the PEs to which the VPN receivers are attached join,
to establish a Switch-MDT after a Share-MDT is established.
l Switch-MDT
Switch-MDT is short for Switch-Multicast Distribution Tree. It prevents multicast data
packets from being transmitted to unnecessary PEs. After a Share-MDT is set up, all the
PEs to which the receivers in the VPN are attached join an MDT set up based on Switch-
Groups. A Switch-MDT can transmit high-rate data packets to other PEs in the same VPN.
(Figure 6-1: MD VPN networking. VPN RED (CE1R and peers) and VPN BLUE (CE2B and peers) connect through PEs and a P router; PC1 and PC2 are hosts.)
The process of implementing the communication between PIM C-instances on the PEs through
MVPN is as follows:
In this manner, the VPN instances with the same Share-Group address form an MD.
As shown in Figure 6-1, VPN BLUE instances bound to PE1 and PE2 communicate through
the MD BLUE and similarly, VPN RED instances bound to PE1 and PE2 communicate through
the MD RED, as shown in Figure 6-2 and Figure 6-3.
(Figures 6-2 and 6-3: the VPN BLUE instances (CE1B, CE2B, PC2) communicate through MD BLUE; the VPN RED instances (CE1R, CE2R, PC1) communicate through MD RED.)
The PIM C-instance on the PE considers the MTI as a LAN interface and sets up the PIM
neighbor relationship with the remote PIM C-instance through MTIs. The PIM C-instances then
use MTIs to perform DR election, send Join/Prune messages, and forward and receive multicast
data.
The PIM C-instance sends PIM protocol packets or multicast data packets to the MTI and the
MTI encapsulates the received packets. The packets after encapsulation are public network
multicast data packets and therefore are forwarded by the PIM P-instances on the network. In
conclusion, an MT is actually a multicast distribution tree on the public network.
l Different VPNs use different MTs and each MT uses a unique packet encapsulation mode.
In this manner, multicast data in different VPNs is isolated from each other.
l The PIM C-instances on the PEs in the same VPN use the same MT and communicate
through this MT.
NOTE
A VPN uniquely defines an MD, and an MD serves only one VPN. The VPN, MD, MTI, Share-Group, and
Switch-Group-pool are all in one-to-one correspondence.
(Figure 6-4: PIM neighbor relationships. VPN A sites 1, 2, and 3 attach through CE1, CE2, and CE3 to the vpnA instances on PE1, PE2, and PE3, which form MD A; PE-P and PE-CE neighbor relationships are marked.)
(Figure 6-5: MD networking with PE1, PE2, and PE3, CE1 and CE2, and a P router.)
(Figure 6-6: Share-MDT establishment with PIM-SM on the public network. PE1 (IBGP: 11.1.1.1/24) and PE2 (IBGP: 11.1.2.1/24) attach to the MD; the P router acts as the RP.)
As shown in Figure 6-6, the public network runs PIM-SM. The process of establishing a Share-
MDT is as follows:
1. The PIM P-instance on PE1 sends a Join message with the Share-Group address being the
multicast group address to the RP in the public network. Routers that receive the Join
message then create the (*, 239.1.1.1) entry on themselves. PE2 and PE3 also send Join
messages to the RP in the public network. A Rendezvous Point Tree (RPT) is thus formed
in the MD, with the RP being the root and PE1, PE2, and PE3 being leaves.
2. The PIM P-instance on PE1 sends a Register message with the MTI address being the source
address and the Share-Group address being the group address to the RP in the public
network. The RP then creates the (11.1.1.1, 239.1.1.1) entry on itself. PE2 and PE3 also
send Register messages to the RP in the public network. Thus, three independent RP-source
trees that connect PEs to the RP are formed in the MD.
In the PIM-SM network, an RPT (*, 239.1.1.1) and three independent RP-source trees form a
Share-MDT.
(Figure 6-7: Share-MDT establishment with PIM-DM on the public network. PE1 (IBGP: 11.1.1.1/24) and PE2 (IBGP: 11.1.2.1/24) attach to the MD.)
As shown in Figure 6-7, the public network runs PIM-DM. The process of establishing a Share-
MDT is as follows.
A flooding-pruning process is initiated on the entire public network with the PIM P-instance on
PE1 being a multicast source, the Share-Group address being the multicast group address, and
other PEs that support VPN A being group members. During this process, the (11.1.1.1,
239.1.1.1) entry is created on the routers along the path in the public network. A Shortest Path
Tree (SPT) with PE1 being the root and PE2 and PE3 being leaves is thus set up. PE2 and PE3
also start similar flooding-pruning processes in the public network to form two more SPTs.
As a result, in the PIM-DM network, three independent SPTs are created and form a Share-
MDT.
4. The packet is forwarded to the public network instance on the remote PE along the Share-
MDT.
5. The remote PE decapsulates the packet, reverts it to a VPN multicast packet, and forwards
it to the VPN instance.
l All interfaces that belong to the same VPN, including the PE interfaces bound to the VPN instance and
MTI, must be in the same PIM mode.
l The VPN instance and the public network instance are independent of each other. They can be in
different PIM modes.
l If receivers and the VPN RP belong to different sites, the VPN multicast data is transmitted
across the public network along the VPN RPT.
l If the multicast source and the receiver belong to different sites, the VPN multicast data is
transmitted across the public network along the source tree.
In the following example, the public network and VPNs run PIM-DM. VPN multicast data is
transmitted across the public network along the SPT. An example is given to show the process
of transmitting multicast data packets along the Share-MDT.
As shown in Figure 6-9, the multicast source in VPN A sends multicast data to the group G
(225.1.1.1). The receiver belongs to Site2 and is connected to CE2.
(Figure 6-9: networking for VPN multicast data transmission along the Share-MDT; the public network includes a P router and an RP.)
The process of transmitting VPN multicast data across the public network is as follows:
5. The VPN instance on PE2 searches for the forwarding entry and then sends the VPN
multicast data to the receiver. So far, the process of transmitting VPN multicast data across
the public network is complete.
l The forwarding rate of VPN multicast data packets should be lower than the specified
threshold and should remain so during the switch-Holddown period.
l In some cases, the forwarding rate of VPN multicast data packets fluctuates around the
switchover threshold. To prevent the multicast data flow from being frequently switched
between the Switch-MDT and the Share-MDT, the system does not switch the data back
immediately when it finds that the forwarding rate is lower than the switchover threshold.
Instead, the system starts the Holddown timer, whose timeout period is configured through
the related command. Before the Holddown timer expires, the system continues to check
the data forwarding rate. If the rate stays lower than the switchover threshold for the whole
period, the data packets are switched back to the Share-MDT; otherwise, the packets are
still forwarded through the Switch-MDT (see the sketch after this list).
l When the switch-group-pool is changed, the switch-group address used to encapsulate the
VPN multicast data should be outside the switch-group-pool.
l If the advanced ACL rules used to control the switchover of VPN multicast data packets
to the Switch-MDT change, the VPN multicast data packets cannot pass the filtering of
new ACL rules.
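The rate-threshold and Holddown behavior can be sketched as follows; the sampling model is an illustrative simplification of the periodic rate checks:

def switchback_decision(rate_samples, threshold, holddown_samples):
    # rate_samples: periodic forwarding-rate measurements taken
    # after the rate first drops below the switchover threshold.
    below = 0
    for rate in rate_samples:
        if rate < threshold:
            below += 1
            if below >= holddown_samples:    # Holddown timer expired
                return "switch back to the Share-MDT"
        else:
            below = 0                        # rate rose: stay on Switch-MDT
    return "keep forwarding over the Switch-MDT"

print(switchback_decision([90, 80, 70, 60], threshold=100, holddown_samples=4))
print(switchback_decision([90, 120, 70, 60], threshold=100, holddown_samples=4))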
(Figure 6-10: Single-AS MD VPN networking. VPN RED (CE1R and peers) and VPN BLUE (CE2B and peers) connect through PEs and a P router; PC1 and PC2 are hosts.)
As shown in Figure 6-10, a single AS runs MPLS/BGP VPN. Both PE1 and PE2 are configured
with two VPN instances, namely, VPN BLUE and VPN RED, and the same Share-Group address
is set for the same VPN instances on the two PEs. In such a case, the VPN instances with the
same Share-Group address join the same MD. After the corresponding Share-MDT is
established, the protocol packets and low-rate data in the VPNs can be transmitted through their
respective MTs.
VPN BLUE is taken as an example to describe how multicast services are transmitted between
VPNs.
1. A VPN instance named VPN BLUE is configured on both PE1 and PE2 and the instances
on the two PEs use the same Share-Group address. After the corresponding Share-MDT is
established, the VPN BLUE instances connected with CE1B and CE2B can exchange
multicast protocol packets through the corresponding MT.
2. Routers in the VPNs connected with CE1B and CE2B can then establish neighbor
relationships, and send Join, Prune, and BSR messages to each other. The protocol packets
in the VPNs are encapsulated and decapsulated only on the MTs of the PEs. The routers,
however, do not known they are in VPN networks. They still process the multicast protocol
packets and forward multicast data packets like the routers in the public network. In this
way, multicast service transmission in one VPN instance is implemented and multicast
services in different VPN instances are isolated.
(Figure 6-11: PE3 and PE4 connect through their public instances as ASBRs.)
As shown in Figure 6-11, a VPN spans AS1 and AS2. PE3 and PE4 are AS Boundary Routers
(ASBRs) of AS1 and AS2 respectively. PE3 and PE4 are connected by their respective VPN
instances, regarding each other as a CE.
Multi-Hop EBGP
A VPN covers multiple ASs and the ASs are connected through the public network EBGP.
(Figure 6-12: Multi-hop EBGP networking. PE3 and PE4 (public instances, with P1 and P2) connect AS1 and AS2; CE1 and CE2 attach to the VPN instances on PE1 and PE2; a single MD with an MT spans both ASs through MTIs.)
As shown in Figure 6-12, a VPN covers AS1 and AS2. PE3 and PE4 are the ASBRs of AS1
and AS2 respectively. PE3 and PE4 are connected by their respective VPN instances, regarding
each other as a CE.
In multi-hop EBGP mode, only one MD needs to be set in AS1 and AS2. The public network
multicast data is transmitted across ASs in the MD.
2. After an EBGP connection is set up between the ASBRs PE3 and PE4 in the public network,
AS1 and AS2 can intercommunicate. Inter-AS multicast is thus realized, and the protocol
packets and data packets from the VPN, encapsulated as common multicast data packets,
can reach PE2. PE1 and PE2 do not care about the actual way in which the VPN protocol
packets and data packets are transmitted; they consider these packets to be transmitted
within one AS. In this manner, multi-AS MD VPN intercommunication is realized.
PIM It is a multicast routing protocol, with the full name being Protocol
Independent Multicast. Reachable unicast routes are the basis of PIM
forwarding. PIM uses the existing unicast routing information to perform
the RPF check on multicast packets to create multicast routing entries and
set up an MDT.
SPT It is a shortest path tree, with the multicast source being the root and group
members being leaves. SPT is applicable to PIM-DM, PIM-SM, and PIM-
SSM.
Share-Group Based on the MD principle, all the VPN instances on the PEs in the same
MD must join a common group, called a Share-Group.
Currently, one VPN instance can be configured with only one Share-Group,
that is, one VPN instance can join only one MD.
MTI MTI is short for Multicast Tunnel Interface. It is the outgoing interface or
incoming interface of an MT. An MTI is equal to the outgoing interface or
incoming interface of an MD. The local PE sends VPN data through an MTI.
The remote PE receives it through an MTI.
The MTI is the channel through which the public network instance and VPN
instances on PEs communicate. PEs are connected to an MT by using MTIs,
which is equal to the situation that PEs are connected to a shared network
segment. On each PE, VPN instances that belong to the MD set up the PIM
neighbor relationships on MTIs.
Abbreviations
Abbreviations Full Spelling
AS Autonomous System
RP Rendezvous Point
7 MLD
After MLD is configured on the receiver hosts and the multicast router to which the hosts are
directly connected, the hosts can dynamically join related groups and the multicast router can
manage members on the local network.
MLD has two versions: MLDv1 defined in RFC 2710 and MLDv2 defined in RFC 3810. Both
MLD versions support the Any-Source Multicast (ASM) model. MLDv2 supports the Source-
Specific Multicast (SSM) model, whereas MLDv1 supports the SSM model only with the help
of SSM Mapping.
MLD functions the same as the Internet Group Management Protocol (IGMP) for IPv6.
Implementation of MLD and IGMP is similar. For example, MLDv1 is similar to IGMPv2;
MLDv2 is similar to IGMPv3.
Some features of MLD and IGMP are implemented in the same manner. This section describes
the unique features of MLD, and the common features of MLD and IGMP are not mentioned
here. The common features are as follows:
l MLD Router-Alert
l MLD Only-Link
l MLD On-Demand
l MLD Prompt-Leave
l MLD static group
l MLD Group-Policy
l MLD SSM Mapping
l MLD Limit
NOTE
The unique features of MLD include the principles of MLDv1 and MLDv2, the MLD querier
election mechanism, and MLD group compatibility.
Purpose
On IPv6 networks, MLD can be configured on receiver hosts and the multicast router to which
the hosts are directly connected. This enables the hosts to dynamically join related groups and
the multicast router to manage members on the local network.
7.2 References
The following table lists the references of this document.
7.3 Principles
7.3.1 MLDv1 and MLDv2
7.3.2 MLD Group Compatibility
7.3.3 MLD Querier Election Mechanism
7.3.4 Comparison Between Protocols
(Figure 7-1: MLD networking. Router A and Router B connect an ISP network to hosts on an Ethernet network segment.)
By sending Multicast Listener Query messages to hosts and receiving Multicast Listener Report
messages and Multicast Listener Done messages from hosts, the router can know which multicast
group contains receivers on the relevant network segment. If receivers exist on the network
segment, the multicast router forwards the corresponding multicast data to the network segment;
if no receivers exist on the network segment, the multicast router forwards no multicast data.
What is more, hosts can determine whether to join or leave a multicast group by themselves.
As shown in Figure 7-1, the MLD-enabled Router A automatically functions as the querier to
periodically send Multicast Listener Query messages, and all hosts (Host A, Host B, and Host
C) on the same network segment of the router can receive these Multicast Listener Query
messages.
MLDv1 provides the report suppression mechanism, which reduces the repetitive reports on the
network.
After a host, Host A for example, joins the multicast group G, Host A receives a Multicast
Listener Query message from the router, randomly selects a value between 0 and the Maximum
Response Delay specified in the Multicast Listener Query message, and starts a timer with this
value. When the timer expires, Host A sends the Multicast Listener Report message of G to the
router. If Host A receives a Multicast Listener Report message of G from Host B in G before
the timer expires, Host A does not send its own Multicast Listener Report message of G when
the timer expires.
When a host quits the multicast group G, the host sends a Multicast Listener Done message of
G to the router. Because of the report suppression mechanism in MLDv1, the router cannot
determine whether G has other receiver hosts. Therefore, the router triggers a query on G. If G
still has receiver hosts, one of them sends the Multicast Listener Report message of G to the
router.
If the router sends the query of G several times but receives no Multicast Listener Report
message from any host, the router no longer records information about G, and stops forwarding
the multicast data of G to the relevant network segment.
NOTE
The MLD querier and non-querier can both process the Multicast Listener Report messages; whereas only
the querier is responsible for sending the Multicast Listener Query messages. The MLD non-querier does
not process the Multicast Listener Done messages of MLDv1.
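The suppression mechanism can be illustrated with a toy simulation. It assumes every member hears the first report before its own timer expires, which is the ideal case on a shared segment:

import random

def respond_to_query(member_hosts, max_response_delay):
    # Each member picks a random report delay within the Maximum
    # Response Delay carried in the Multicast Listener Query.
    timers = {h: random.uniform(0, max_response_delay) for h in member_hosts}
    first = min(timers, key=timers.get)
    # The first report is sent; every other member suppresses its own.
    suppressed = [h for h in member_hosts if h != first]
    return first, suppressed

sender, suppressed = respond_to_query(["HostA", "HostB", "HostC"], 10.0)
print("report sent by:", sender, "suppressed on:", suppressed)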
l MODE_IS_INCLUDE: indicates that the corresponding mode between a group and its
source list is Include. That is, hosts receive the data sent by a source in the source-specific
list to the group.
l MODE_IS_EXCLUDE: indicates that the corresponding mode between a group and its
source list is Exclude. That is, hosts receive the data sent to the group by any source that
is not in the source-specific list. When the Exclude source list is empty, the MLDv2 Report
message is equivalent to an MLDv1 Report message.
l CHANGE_TO_INCLUDE_MODE: indicates that the corresponding mode between a
group and its source list changes from Exclude to Include. If the source-specific list is
empty, the hosts leave the group.
l CHANGE_TO_EXCLUDE_MODE: indicates that the corresponding mode between a
group and its source list changes from Include to Exclude.
l ALLOW_NEW_SOURCES: indicates that a host still wants to receive data from certain
multicast sources. If the current relationship is Include, certain sources are added to the
current source list. If the current relationship is Exclude, certain sources are deleted from
the current source list.
l BLOCK_OLD_SOURCES: indicates that a host does not want to receive data from certain
multicast sources any longer. If the current relationship is Include, certain sources are
deleted from the current source list. If the current relationship is Exclude, certain sources
are added to the current source list.
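The effect of the last two record types on the source list reduces to set operations, directly following the rules above:

def apply_record(mode, sources, record, record_sources):
    s = set(sources)
    if record == "ALLOW_NEW_SOURCES":
        # Include: add wanted sources; Exclude: stop excluding them.
        s = s | record_sources if mode == "Include" else s - record_sources
    elif record == "BLOCK_OLD_SOURCES":
        # Include: remove unwanted sources; Exclude: start excluding them.
        s = s - record_sources if mode == "Include" else s | record_sources
    return mode, s

print(apply_record("Include", {"S1"}, "ALLOW_NEW_SOURCES", {"S2"}))
# ('Include', {'S1', 'S2'})
print(apply_record("Exclude", {"S1"}, "BLOCK_OLD_SOURCES", {"S2"}))
# ('Exclude', {'S1', 'S2'})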
On the router side, the querier sends Multicast Listener Query messages and receives Multicast
Listener Report. In this manner, the router can identify which multicast group on the network
segment contains receivers, and then forwards the multicast data to the network segment
accordingly. In MLDv2, records of multicast groups can be filtered in either Include mode or
Exclude mode.
l In Include mode:
– The multicast source in the activated state requires the router to forward its data.
– The multicast source in the deactivated state is deleted by the router and data forwarding
for the multicast source is ceased.
l In Exclude mode:
– The multicast source in the activated state is in the conflict domain. That is, no matter
whether hosts on the same network segment of the router interface require the data of
the multicast source, the data is forwarded.
l If Router A in the querier state receives a Multicast Listener Query message from Router
B, which has a lower IP address, Router A changes from the querier to the non-querier,
starts the timers of other queriers, and records Router B as the querier of the network
segment.
l If Router A in the non-querier state receives a Multicast Listener Query message from
Router B in the querier state, Router A updates the timers of other queriers; if the received
Multicast Listener Query message is sent from Router C whose IP address is lower than
that of Router B in the querier state, Router A records Router C as the querier of the network
segment and updates the timers of other queriers.
l When Router A is in the non-querier state, if the timer of another querier expires, Router
A moves to the querier state and resumes the role of the querier.
NOTE
At present, querier election is supported only among the routers of the same version on the same network
segment. Therefore, all routers on the same network segment must be configured with MLD of the same
version.
MLDv1: The message contains only the multicast group information, rather than the multicast source information.
MLDv2: The message contains the multicast group information and the multicast source information.
Advantage of MLDv2: The multicast source can be selected directly.

MLDv1: A message contains the record of a single multicast group.
MLDv2: A message contains records of multiple multicast groups.
Advantage of MLDv2: The number of MLD messages is reduced on the network segment.
(Figure: MLD version networking. Router A (GE 1/0/0, POS 2/0/0) serves leaf network N1 with Host A (Receiver) and Host B; Router B and Router C (GE 1/0/0, POS 2/0/0) serve leaf network N2 with Host C (Receiver) and Host D; the Ethernet leaf networks attach to a PIM network.)
Host A is the receiver on N1; Host C is the receiver on N2. MLDv1 is configured on GE 1/0/0
of Router A, which is directly connected to Host A; MLDv2 is configured on GE 1/0/0 of
Router B and Router C, which are directly connected to their respective hosts. That is, MLDv1
runs on N1 and MLDv2 runs on N2. The routers on the same network segment must run MLD
of the same version.
Term Description
MLD Multicast Listener Discovery (MLD) is used by IPv6 routers to discover the
multicast listeners on their directly connected network segments, and to set up and
maintain member relationships.
On IPv6 networks, after MLD is configured on the receiver hosts and the multicast
router to which the hosts are directly connected, the hosts can dynamically join
related groups and the multicast router can manage members on the local network.
(S,G) (S,G) refers to a multicast routing entry. S indicates a multicast source, and G
indicates a multicast group.
After a multicast packet with S as the source address and G as the group address
reaches the router, it is forwarded through the downstream interface of the (S, G)
entry. The packet is usually expressed as the (S, G) packet.
(*,G) (*,G) refers to a PIM routing entry. * indicates any multicast source, and G indicates
a multicast group.
(*, G) is applicable to multicast packets with G being the multicast group address.
That is, multicast packets sent to G are forwarded through the downstream interface
of the (*, G) entry, regardless of the multicast source that sends them.
Abbreviations
Abbreviation Full Spelling
Purpose
Layer 3 multicast CAC mainly implements the following functions:
l Limit the number of PIM routing entries to control the number of multicast groups that can
be served. This prevents the introduction of multicast data beyond the forwarding
capability.
l Plan multicast networks by reserving bandwidth for channels or interfaces. When
bandwidth resources of a channel or an interface are insufficient, no multicast group is
added to the channel or the interface. This ensures the quality of services.
As shown in Figure 8-1:
l Multicast CAC can prevent traffic beyond forwarding capability from entering an IP/MPLS
backbone network by limiting multicast entries or bandwidth on the multicast control plane
and establishing multicast distribution trees (MDTs) on the ingress NPE and egress UPE
in the backbone network.
l Multicast CAC can ensure the bandwidth for the MDTs in an IP/MPLS backbone network.
(Figure 8-1: Multicast CAC networking. Channels basic (CH1-100), silver (CH101-200), and gold (CH201-300) are delivered from the VoD edge servers of ISP1, ISP2, and ISP3 through NPEs across an IP/MPLS backbone to UPEs; PIM-DR and PIM-BDR roles are marked.)
8.2 References
None.
8.3 Principles
8.3.1 Implementation of Multicast CAC
8.3.2 Multicast CAC
Multicast CAC channel management: You can manage the group range or source/group range
based on channels globally and specify bandwidths for ICPs.
To facilitate multicast group management, channels are classified according to programs, and
each channel is configured with a multicast entry limit and a bandwidth limit. According to
the Any Source Multicast/Source Specific Multicast (ASM/SSM) model, entries are recorded
according to the following rules, as the sketch below illustrates:
l In the ASM model, all (*, G), (S1, G), and (S2, G) entries with the same G are recorded as
one entry.
l In the SSM model, each (S, G) entry is recorded as one entry.
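The two recording rules reduce to counting distinct groups in the ASM model and distinct (S, G) pairs in the SSM model; the following is a minimal illustration:

def recorded_entries(model, entries):
    # entries: iterable of (source, group); source may be "*".
    if model == "ASM":
        return len({g for _, g in entries})        # one entry per group G
    if model == "SSM":
        return len({(s, g) for s, g in entries})   # one entry per (S, G)
    raise ValueError(model)

asm_entries = [("*", "G1"), ("S1", "G1"), ("S2", "G1"), ("*", "G2")]
print(recorded_entries("ASM", asm_entries))   # 2: G1 counted once, plus G2
ssm_entries = [("S1", "G1"), ("S2", "G1")]
print(recorded_entries("SSM", ssm_entries))   # 2: each (S, G) counted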
After the global entry limit is set to a larger value, new PIM entries are created for the Join messages
previously discarded because of the global entry limit.
The multicast CAC limit on the outgoing interface controls the traffic volume to be copied by
controlling the number of joined users on the access interface. The multicast CAC limit on the
outgoing interface can control the number of joined users on the access interface by restricting
entries and bandwidth.
After the multicast CAC limit on the outgoing interface is set, the statistics of existing PIM
entries on the outgoing interface are automatically updated. After a router receives an IGMP or
a PIM Join message, PIM creates a new entry if the multicast CAC limit on the outgoing interface
is not reached.
If the configured multicast CAC limit on the outgoing interface is smaller than the number of
existing entries, the excessive entries are not deleted but the interface rejects new Join messages.
The multicast CAC limit on the outgoing interface implements the following policies to collect
statistics on and control the entries on the outgoing interface, and thus to limit the number of
joined users:
l Global multicast CAC limit on an outgoing interface
According to the entry recording method in the ASM and SSM models, multicast CAC
limits the number of joined users on the specified access interface by taking statistics of all
entries (regardless of whether they belong to the channel or not) and bandwidth on the
access interface.
l Multicast CAC channel-based entry limit on an outgoing interface
According to the entry recording method in the ASM and SSM models, multicast CAC
limits the number of joined users in the current channel on the specified access interface
by taking statistics of the bandwidth and the entries belonging to the current channel on the
access interface.
NOTE
After the multicast CAC entry limit or bandwidth limit on the outgoing interface is set to a larger value,
new PIM entries are created for the Join messages previously discarded because of multicast CAC entry
limit or bandwidth limit on the outgoing interface.
(Figure 8-2: Multicast CAC channel management on the UPE. Channels basic (CH1-100), silver (CH101-200), and gold (CH201-300) from the VoD edge servers of ISP1, ISP2, and ISP3 reach UPEs through NPEs across an IP/MPLS backbone; PIM-DR and PIM-BDR roles are marked.)
As shown in Figure 8-2, the group range or source/group range and permitted maximum
bandwidth can be configured on the UPE.
In a network running IPTV services, the IP core network may be connected to multiple ISP
networks. Each ISP network bears IPTV programs by using the pre-assigned group or source/
group. Each group or source/group occupies different bandwidths. To simplify the management
of the multicast entries of the groups or source/groups, you can specify a channel name for each
multicast group.
Operators providing IPTV services tend to classify groups or source/groups with the same
bandwidth into the same channel or classify groups or source/groups in the same ISP network
into the same channel. After that, the operators can set the unified multicast CAC policy for the
groups in the same channel.
The MDT adopts either of the following models:
l ASM model
If the MDT adopts the ASM model, users can receive multicast programs if they know the
multicast group that they join. The MDT adopting the ASM model is established by means
of PIM-SM.
l SSM model
If the MDT adopts the SSM model, users can receive multicast programs only when they
know the multicast group that they join and the multicast source from which they want to
receive programs. The MDT adopting the SSM model is established by means of PIM-
SSM.
When designing a channel, you must specify the MDT model for the channel.
To classify specified multicast groups into the same channel, you must conform to the following
rules when designing a channel:
l Each channel must be assigned a manageable name.
l The following rules should be followed when you design a channel adopting the ASM
model or SSM model:
– For a channel adopting the ASM model, you can configure only G/Mask.
After a channel adopting the ASM model is configured with one G1/Mask, the channel
and other channels cannot be configured with (G1/Mask, S/Mask), G2/Mask overlapped
with G1/Mask, or (G2/Mask, S/Mask) overlapped with G1/Mask. If G2/Mask overlaps
with G1/Mask, it means that G2/Mask includes G1/Mask or G1/Mask includes G2/
Mask.
– For a channel adopting the SSM model, you can configure only G/Mask and S/Mask.
After a channel adopting the SSM model is configured with one (G1/Mask, S1/Mask),
the channel and other channels cannot be configured with G1/Mask, G2/Mask that is
overlapped with G1/Mask, or (G2/Mask, S1/Mask) that is overlapped with G1/Mask;
however, the channel and other channels can be configured with (G1/Mask, S2/Mask)
if S2/Mask is not overlapped with S1/Mask.
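These overlap rules can be checked mechanically. The following sketch validates a proposed channel range against the configured ones; the address ranges are examples only:

import ipaddress

def conflicts(existing_channels, new_group_range, new_source_range=None):
    # existing_channels: list of (group_net, source_net_or_None);
    # source_net is None for an ASM channel.
    new_g = ipaddress.ip_network(new_group_range)
    new_s = ipaddress.ip_network(new_source_range) if new_source_range else None
    for g, s in existing_channels:
        if not new_g.overlaps(g):
            continue                  # disjoint group ranges never conflict
        if new_s is None or s is None:
            return True               # an ASM range conflicts with any overlap
        if new_s.overlaps(s):
            return True               # same groups and overlapping sources
    return False

channels = [(ipaddress.ip_network("232.1.0.0/16"),
             ipaddress.ip_network("10.1.0.0/16"))]
print(conflicts(channels, "232.1.1.0/24", "10.1.1.0/24"))   # True
print(conflicts(channels, "232.1.1.0/24", "10.2.0.0/16"))   # False: sources disjoint
print(conflicts(channels, "225.1.0.0/16"))                  # False: groups disjoint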
(Figure 8-3: Global multicast CAC limit on the NPE. Channels basic (CH1-100), silver (CH101-200), and gold (CH201-300) from the VoD edge servers of ISP1, ISP2, and ISP3 enter the IP/MPLS backbone through NPEs and reach UPEs.)
As shown in Figure 8-3, global multicast CAC limit is configured on the ingress NPE to control
the multicast traffic entering the IP/MPLS backbone network. When deploying IPTV services
in a network, to prevent multicast traffic exceeding routers' processing capability or bandwidth
limit from entering the access network, you can set the maximum number of global multicast
entries and the maximum number of channel-based multicast entries on the NPE.
After the configuration, you can manage all channels, including the channels provided by
multiple ISP networks connected with the IP core network, based on different control policies
in a centralized way.
NOTE
The global multicast CAC limit or channel-based multicast CAC limit does not take effect on multicast
entries created before the configuration of the limit. That is, the multicast entries created before the
configuration of the limit will not be deleted. The multicast entries created when an interface is added to
a group statically or users in the private network join the multicast group in the public network are not
controlled but only counted by the global multicast CAC limit configured on the NPE.
Figure 8-4 Networking diagram of multicast CAC limit on the outgoing interface
(Figure 8-4 shows the VoD edge servers of ISP1, ISP2, and ISP3 delivering channels basic (CH1-100), silver (CH101-200), and gold (CH201-300) through NPEs across an IP/MPLS backbone to UPEs; PIM-DR and PIM-BDR roles are marked.)
As shown in Figure 8-4, multicast CAC limit on the outgoing interface is configured on the
egress UPE to control the multicast traffic entering the IP/MPLS backbone network.
When deploying IPTV services in a network, to prevent multicast traffic exceeding routers'
processing capability or bandwidth limit from entering the access network, you can set the global
entry limit or channel-based entry limit and bandwidth limit on the access interface on the UPE.
The main scenarios where the access interface on the UPE receives IGMP Join messages are as
follows:
l Operators provide IPTV services by means of IGMPv2+PIM-SM.
IGMPv2 runs on the device at the user side and PIM-SM runs on the access interface on
the UPE. The global entry limit/channel-based entry limit and bandwidth limit are
configured on the access interface on the UPE.
l Operators provide IPTV services by means of IGMPv3+PIM-SM/SSM.
IGMPv3 runs on the device at the user side and PIM-SM or PIM-SSM runs on the access
interface on the UPE. The global entry limit/channel-based entry limit and bandwidth limit
are configured on the access interface on the UPE.
l Operators provide IPTV services by means of IGMPv2, SSM mapping, and PIM-SSM.
IGMPv2 runs on the device at the user side and PIM-SM or PIM-SSM runs on the access
interface on the UPE. The SSM mapping is configured on the UPE. The global entry limit/
channel-based entry limit and bandwidth limit are configured on the access interface on
the UPE.
Abbreviations
Abbreviations Full Spelling