Junos Multicast Protocols User Guide
Published
2021-04-18
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc.
in the United States and other countries. All other trademarks, service marks, registered marks, or registered service
marks are the property of their respective owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right
to change, modify, transfer, or otherwise revise this publication without notice.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related
limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use
with) Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License
Agreement ("EULA") posted at https://support.juniper.net/support/eula/. By downloading, installing or using such
software, you agree to the terms and conditions of that EULA.
Table of Contents
About This Guide | xlv
1 Overview
Understanding Multicast | 2
Multicast Overview | 2
Configuring IGMP | 25
Understanding IGMP | 27
Configuring IGMP | 29
Enabling IGMP | 31
Disabling IGMP | 57
Configuring MLD | 60
Understanding MLD | 60
Configuring MLD | 64
Enabling MLD | 65
Modifying the MLD Version | 67
Requirements | 73
Overview | 73
Configuration | 74
Verification | 75
Requirements | 86
Overview | 86
Configuration | 87
Verification | 89
Disabling MLD | 91
Requirements | 129
Configuration | 132
Requirements | 135
Configuration | 136
Requirements | 153
Configuration | 157
Verification | 161
Configuring IGMP Snooping Trace Operations | 161
Requirements | 164
Configuration | 165
Configuring MLD Snooping on a Switch VLAN with ELS Support (CLI Procedure) | 195
Requirements | 202
Configuration | 204
Requirements | 207
Configuration | 209
Configuring MLD Snooping Tracing Operations on EX Series Switches (CLI Procedure) | 214
Configuring MLD Snooping Tracing Operations on EX Series Switch VLANs (CLI Procedure) | 217
Requirements | 221
Configuration | 223
Requirements | 226
Configuration | 229
Viewing MVLAN and MVR Receiver VLAN Information on EX Series Switches with ELS | 263
Configuring Multicast VLAN Registration on non-ELS EX Series Switches | 264
Example: Configuring Multicast VLAN Registration on EX Series Switches Without ELS | 266
Requirements | 266
Configuration | 270
Routing Content to Densely Clustered Receivers with PIM Dense Mode | 294
Routing Content to Larger, Sparser Groups with PIM Sparse Mode | 305
Requirements | 321
Overview | 321
Configuration | 323
Verification | 326
Example: Configuring Multicast for Virtual Routers with IPv6 Interfaces | 334
Requirements | 334
Overview | 334
Configuration | 335
Verification | 340
Requirements | 345
Overview | 345
Configuration | 345
Verification | 347
Configuring the Static PIM RP Address on the Non-RP Routing Device | 349
Overview | 353
Configuration | 353
Verification | 356
Example: Rejecting PIM Bootstrap Messages at the Boundary of a PIM Domain | 368
Requirements | 381
Overview | 382
Configuration | 382
Verification | 384
Overview | 388
Configuration | 389
Verification | 391
Understanding Multicast Rendezvous Points, Shared Trees, and Rendezvous-Point Trees | 396
Requirements | 408
Overview | 408
Configuration | 410
Requirements | 412
Overview | 412
Configuration | 414
Verification | 416
Requirements | 440
Overview | 440
Configuration | 442
Verification | 444
Requirements | 459
Overview | 459
Configuration | 461
Verification | 463
Example: Configuring SSM Maps for Different Groups to Different Sources | 464
Requirements | 465
Overview | 465
Configuration | 465
Verification | 468
Requirements | 478
Overview | 478
Configuration | 482
Verification | 489
Rapidly Detecting Communication Failures with PIM and the BFD Protocol | 499
Configuring PIM and the Bidirectional Forwarding Detection (BFD) Protocol | 499
Requirements | 509
Overview | 509
Configuration | 510
Verification | 515
Requirements | 519
Overview | 519
Configuration | 521
Verification | 534
Requirements | 551
Overview | 552
Configuration | 555
Verification | 560
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
Requirements | 562
Overview | 563
Configuration | 567
Verification | 569
Requirements | 591
Overview | 592
Configuration | 593
Verification | 596
Requirements | 600
Overview | 601
Configuration | 602
Verification | 604
Requirements | 605
Overview | 605
Configuration | 607
Verification | 610
Requirements | 618
Overview | 618
Configuration | 621
Verification | 630
Example: Configuring a Specific Tunnel for IPv4 Multicast VPN Traffic (Using Draft-Rosen MVPNs) | 636
Requirements | 636
Overview | 636
PE Router Configuration | 638
Verification | 650
Requirements | 656
Overview | 656
Configuration | 659
Verification | 668
Requirements | 675
Overview | 676
Configuration | 680
Verification | 688
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode | 690
Requirements | 690
Overview | 691
Configuration | 694
Verification | 695
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast Mode | 696
Requirements | 696
Overview | 697
Configuration | 704
Verification | 709
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast Mode | 713
Requirements | 714
Overview | 714
Configuration | 721
Verification | 726
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode | 728
Requirements | 728
Overview | 728
Configuration | 731
Verification | 733
Requirements | 734
Overview | 734
Configuration | 735
Verification | 743
Generating Next-Generation MVPN VRF Import and Export Policies Overview | 765
Comparison of Draft Rosen Multicast VPNs and Next-Generation Multiprotocol BGP Multicast VPNs | 769
PIM Sparse Mode, PIM Dense Mode, Auto-RP, and BSR for MBGP MVPNs | 771
Example: Configuring Point-to-Multipoint LDP LSPs as the Data Plane for Intra-AS MBGP MVPNs | 781
Requirements | 781
Overview | 783
Configuration | 786
Verification | 788
Example: Configuring Ingress Replication for IP Multicast Using MBGP MVPNs | 789
Requirements | 789
Overview | 790
Configuration | 792
Verification | 798
Requirements | 807
Configuration | 809
Requirements | 832
Overview | 832
Configuration | 834
Verification | 844
Requirements | 844
Overview | 845
Configuration | 847
Verification | 851
Example: Configuring BGP Route Flap Damping Based on the MBGP MVPN Address Family | 851
Requirements | 852
Overview | 852
Configuration | 853
Verification | 865
Requirements | 868
Configuring Sender-Only and Receiver-Only Sites Using PIM ASM Provider Tunnels | 874
Requirements | 892
Configuration | 894
Requirements | 947
Overview | 947
Configuration | 948
Verification | 959
Requirements | 966
Overview | 967
Verification | 983
Example: Configuring Sender-Based RPF in a BGP MVPN with MLDP Point-to-Multipoint Provider Tunnels | 1003
Requirements | 1003
Overview | 1004
Verification | 1019
Understanding Wildcards to Configure Selective Point-to-Multipoint LSPs for an MBGP MVPN | 1039
Anti-spoofing support for MPLS labels in BGP/MPLS IP VPNs (Inter-AS Option B) | 1086
Example: Configuring PIM Join Load Balancing on Draft-Rosen Multicast VPN | 1098
Requirements | 1099
Configuration | 1104
Verification | 1108
Example: Configuring PIM Join Load Balancing on Next-Generation Multicast VPN | 1110
Requirements | 1111
Configuration | 1114
Verification | 1120
Requirements | 1123
Overview | 1124
Configuration | 1125
Verification | 1131
Requirements | 1140
Overview | 1140
Configuration | 1141
Verification | 1152
Requirements | 1159
Overview | 1160
Configuration | 1161
Requirements | 1165
Overview | 1165
Configuration | 1165
Verification | 1168
Requirements | 1171
Overview | 1171
Configuration | 1172
Verification | 1174
Requirements | 1174
Overview | 1175
Configuration | 1176
Verification | 1179
Use Multicast-Only Fast Reroute (MoFRR) to Minimize Packet Loss During Link Failures | 1180
Requirements | 1193
Overview | 1193
Verification | 1201
Requirements | 1204
Overview | 1205
Verification | 1212
Requirements | 1216
Overview | 1216
Configuration | 1226
Verification | 1233
Enable Multicast Between Layer 2 and Layer 3 Devices Using Snooping | 1239
Requirements | 1243
Configuration | 1246
Verification | 1249
Enabling Bulk Updates for Multicast Snooping | 1250
Enabling Multicast Snooping for Multichassis Link Aggregation Group Interfaces | 1251
Configuring Multicast Snooping to Ignore Spanning Tree Topology Change Messages | 1253
Requirements | 1259
Overview | 1259
Configuration | 1261
Verification | 1271
Requirements | 1278
Overview | 1279
Configuration | 1279
Verification | 1282
Requirements | 1282
Overview | 1283
Configuration | 1283
Verification | 1286
Requirements | 1290
Overview | 1291
Configuration | 1292
Verification | 1294
Requirements | 1294
Configuration | 1299
Verification | 1311
Requirements | 1317
Overview | 1317
Configuration | 1318
Verification | 1320
Requirements | 1321
Overview | 1321
Configuration | 1323
Verification | 1325
Requirements | 1327
Overview | 1327
Configuration | 1329
Verification | 1333
7 Troubleshooting
Knowledge Base | 1336
accept-remote-source | 1350
active-source-limit | 1360
advertise-from-main-vpn-tables | 1368
algorithm | 1370
anycast-pim | 1377
anycast-prefix | 1379
asm-override-ssm | 1380
assert-timeout | 1382
authentication-key | 1385
auto-rp | 1386
autodiscovery | 1388
autodiscovery-only | 1389
backoff-period | 1391
backup-pe-group | 1393
backups | 1396
bandwidth | 1397
bootstrap | 1403
bootstrap-export | 1405
bootstrap-import | 1406
bootstrap-priority | 1408
cont-stats-collection-interval | 1414
count | 1416
create-new-ucast-tunnel | 1417
dampen | 1419
data-encapsulation | 1420
data-forwarding | 1422
data-mdt-reuse | 1424
default-peer | 1425
default-vpn-source | 1427
defaults | 1428
dense-groups | 1430
df-election | 1433
disable | 1434
distributed-dr | 1450
dr-election-on-p2p | 1453
dr-register-policy | 1454
dvmrp | 1456
embedded-rp | 1458
export-target | 1468
flood-groups | 1479
flow-map | 1480
group-ranges | 1526
group-rp-mapping | 1528
hello-interval | 1533
host-only-interface | 1540
idle-standby-path-switchover-delay | 1545
igmp | 1547
igmp-snooping | 1551
igmp-snooping-options | 1557
ignore-stp-topology-change | 1558
immediate-leave | 1559
import-target | 1568
inclusive | 1570
infinity | 1571
ingress-replication | 1572
inet-mdt | 1576
interface | 1593
interface-name | 1600
interval | 1602
intra-as | 1605
join-load-balance | 1607
join-prune-timeout | 1608
l2-querier | 1613
ldp-p2mp | 1617
listen | 1623
local | 1624
loose-check | 1643
mapping-agent-election | 1644
maximum-bandwidth | 1649
maximum-rps | 1651
mdt | 1655
min-rate | 1661
minimum-receive-interval | 1665
mld | 1667
mld-snooping | 1669
mpls-internet-multicast | 1689
msdp | 1690
multicast | 1693
multicast-replication | 1697
multicast-snooping-options | 1703
multichassis-lag-replicate-state | 1707
multiplier | 1708
multiple-triggered-joins | 1710
mvpn | 1713
mvpn-iana-rt-import | 1716
mvpn-mode | 1720
neighbor-policy | 1721
nexthop-hold-time | 1723
no-bidirectional-mode | 1727
no-qos-adjust | 1730
offer-period | 1731
omit-wildcard-address | 1735
override-interval | 1738
pim | 1747
pim-asm | 1754
pim-snooping | 1755
pim-to-igmp-proxy | 1760
pim-to-mld-proxy | 1761
prefix | 1771
process-non-null-as-null-register | 1782
propagation-delay | 1784
provider-tunnel | 1787
proxy | 1793
qualified-vlan | 1797
receiver | 1817
redundant-sources | 1820
register-limit | 1822
register-probe-time | 1824
reset-tracking-bit | 1828
restart-duration | 1831
reverse-oif-mapping | 1832
robustness-count | 1846
rp | 1850
rp-register-policy | 1853
rp-set | 1855
rpf-selection | 1858
rpt-spt | 1861
sap | 1866
scope | 1868
scope-policy | 1869
secret-key-timeout | 1871
selective | 1872
sglimit | 1877
signaling | 1879
snoop-pseudowires | 1881
source-active-advertisement | 1882
source-address | 1899
spt-only | 1908
spt-threshold | 1909
ssm-groups | 1911
standby-path-creation-delay | 1921
static-lsp | 1932
stickydr | 1935
subscriber-leave-timer | 1939
threshold-rate | 1954
tunnel-source | 2001
unicast-umh-election | 2007
upstream-interface | 2008
use-p2mp-lsp | 2010
vrf-advertise-selective | 2019
vpn-group-address | 2031
wildcard-group-inet | 2032
wildcard-group-inet6 | 2034
mtrace | 2096
Multicast allows an IP network to support more than just the unicast model of data delivery that
prevailed in the early stages of the Internet. Multicast provides an efficient method for delivering traffic
flows that can be characterized as one-to-many or many-to-many.
RELATED DOCUMENTATION
Overview
Understanding Multicast | 2
CHAPTER 1
Understanding Multicast
IN THIS CHAPTER
Multicast Overview | 2
Multicast Overview
IN THIS SECTION
IP Multicast Uses | 4
IP Multicast Terminology | 6
IP Multicast Addressing | 8
Multicast Addresses | 9
IP has three fundamental types of addresses: unicast, broadcast, and multicast. A unicast address is used
to send a packet to a single destination. A broadcast address is used to send a datagram to an entire
subnetwork. A multicast address is used to send a datagram to a set of hosts that can be on different
subnetworks and that are configured as members of a multicast group.
A multicast datagram is delivered to destination group members with the same best-effort reliability as a
standard unicast IP datagram. This means that multicast datagrams are not guaranteed to reach all
members of a group or to arrive in the same order in which they were transmitted. The only difference
between a multicast IP packet and a unicast IP packet is the presence of a group address in the IP
header destination address field. Multicast addresses use the Class D address format.
NOTE: On all SRX Series devices, reordering is not supported for multicast fragments. Reordering
of unicast fragments is supported.
Individual hosts can join or leave a multicast group at any time. There are no restrictions on the physical
location or the number of members in a multicast group. A host can be a member of more than one
multicast group at any time. A host does not have to belong to a group to send packets to members of a
group.
Routers use a group membership protocol to learn about the presence of group members on directly
attached subnetworks. When a host joins a multicast group, it transmits a group membership protocol
message for the group or groups that it wants to receive and sets its IP process and network interface
card to receive frames addressed to the multicast group.
The Junos® operating system (Junos OS) routing protocol process supports a wide variety of routing
protocols. These routing protocols carry network information among routing devices not only for unicast
traffic streams sent between one pair of clients and servers, but also for multicast traffic streams
containing video, audio, or both, between a single server source and many client receivers. The routing
protocols used for multicast differ in many key ways from unicast routing protocols.
Information is delivered over a network by three basic methods: unicast, broadcast, and multicast.
The differences among unicast, broadcast, and multicast can be summarized as follows:
• Unicast: One-to-one, from one source to one destination.
• Broadcast: One-to-all, from one source to all possible destinations on the subnetwork.
• Multicast: One-to-many, from one source to multiple destinations expressing an interest in receiving
the traffic.
NOTE: This list does not include a special category for many-to-many applications, such as
online gaming or videoconferencing, where there are many sources for the same receiver and
where receivers often double as sources. Many-to-many is a service model that repeatedly
employs one-to-many multicast and therefore requires no unique protocol. The original
multicast specification, RFC 1112, supports both the any-source multicast (ASM) many-to-
many model and the source-specific multicast (SSM) one-to-many model.
With unicast traffic, many streams of IP packets that travel across networks flow from a single source,
such as a website server, to a single destination such as a client PC. Unicast traffic is still the most
common form of information transfer on networks.
Broadcast traffic flows from a single source to all possible destinations reachable on the network, which
is usually a LAN. Broadcasting is the easiest way to make sure traffic reaches its destinations.
Television networks use broadcasting to distribute video and audio. Even if the television network is a
cable television (CATV) system, the source signal reaches all possible destinations, which is the main
reason that some channels’ content is scrambled. Broadcasting is not feasible on the Internet because of
the enormous amount of unnecessary information that would constantly arrive at each end user's
device, the complexities and impact of scrambling, and related privacy issues.
Multicast traffic lies between the extremes of unicast (one source, one destination) and broadcast (one
source, all destinations). Multicast is a “one source, many destinations” method of traffic distribution,
meaning only the destinations that explicitly indicate their need to receive the information from a
particular source receive the traffic stream.
On an IP network, because destinations (clients) do not often communicate directly with sources
(servers), the routing devices between source and destination must be able to determine the topology of
the network from the unicast or multicast perspective to avoid routing traffic haphazardly. Multicast
routing devices replicate packets received on one input interface and send the copies out on multiple
output interfaces.
In IP multicast, the source and destination are almost always hosts and not routing devices. Multicast
routing devices distribute the multicast traffic across the network from source to destinations. The
multicast routing device must find multicast sources on the network, send out copies of packets on
several interfaces, prevent routing loops, connect interested destinations with the proper source, and
keep the flow of unwanted packets to a minimum. Standard multicast routing protocols provide most of
these capabilities, but some router architectures cannot send multiple copies of packets and so do not
support multicasting directly.
IP Multicast Uses
Multicast allows an IP network to support more than just the unicast model of data delivery that
prevailed in the early stages of the Internet. Multicast, originally defined as a host extension in RFC
1112 in 1989, provides an efficient method for delivering traffic flows that can be characterized as one-
to-many or many-to-many.
Unicast traffic is not strictly limited to data applications. Telephone conversations, wireless or not,
contain digital audio samples and might contain digital photographs or even video and still flow from a
single source to a single destination. In the same way, multicast traffic is not strictly limited to
multimedia applications. In some data applications, the flow of traffic is from a single source to many
destinations that require the packets, as in a news or stock ticker service delivered to many PCs. For this
reason, the term receiver is preferred to listener for multicast destinations, although both terms are
common.
Network applications that can function with unicast but are better suited for multicast include
collaborative groupware, teleconferencing, periodic or “push” data delivery (stock quotes, sports scores,
magazines, newspapers, and advertisements), server or website replication, and distributed interactive
simulation (DIS) such as war simulations or virtual reality. Any IP network concerned with reducing
network resource overhead for one-to-many or many-to-many data or multimedia applications with
multiple receivers benefits from multicast.
If unicast were employed by radio or news ticker services, the source would have to maintain a
separate traffic session for each listener or viewer at a PC (this is actually the method for some Web-
based services). The processing load and bandwidth consumed by the server would increase linearly as
more people “tune in” to the server. This is extremely inefficient when dealing with the global scale of
the Internet. Unicast places the burden of packet duplication on the server and consumes more and
more backbone bandwidth as the number of users grows.
If broadcast were employed instead, the source could generate a single IP packet stream using a
broadcast destination address. Although broadcast eliminates the server packet duplication issue, this is
not a good solution for IP because IP broadcasts can be sent only to a single subnetwork, and IP routing
devices normally isolate IP subnetworks on separate interfaces. Even if an IP packet stream could be
addressed to literally go everywhere, and there were no need to “tune” to any source at all, broadcast
would be extremely inefficient because of the bandwidth strain and need for uninterested hosts to
discard large numbers of packets. Broadcast places the burden of packet rejection on each host and
consumes the maximum amount of backbone bandwidth.
For radio station or news ticker traffic, multicast provides the most efficient and effective outcome, with
none of the drawbacks and all of the advantages of the other methods. A single source of multicast
packets finds its way to every interested receiver. As with broadcast, the transmitting host generates
only a single stream of IP packets, so the load remains constant whether there is one receiver or one
million. The network routing devices replicate the packets and deliver the packets to the proper
receivers, but only the replication role is a new one for routing devices. The links leading to subnets
consisting of entirely uninterested receivers carry no multicast traffic. Multicast minimizes the burden
placed on sender, network, and receiver.
IP Multicast Terminology
Multicast has its own particular set of terms and acronyms that apply to IP multicast routing devices and
networks. Figure 1 on page 6 depicts some of the terms commonly used in an IP multicast network.
In a multicast network, the key component is the routing device, which is able to replicate packets and is
therefore multicast-capable. The routing devices in the IP multicast network, which has exactly the same
topology as the unicast network it is based on, use a multicast routing protocol to build a distribution
tree that connects receivers (preferred to the multimedia implications of listeners, but listeners is also
used) to sources. In multicast terminology, the distribution tree is rooted at the source (the root of the
distribution tree is the source). The interface on the routing device leading toward the source is the
upstream interface, although the less precise terms incoming or inbound interface are used as well. To
keep bandwidth use to a minimum, it is best for only one upstream interface on the routing device to
receive multicast packets. The interface on the routing device leading toward the receivers is the
downstream interface, although the less precise terms outgoing or outbound interface are used as well.
There can be 0 to N–1 downstream interfaces on a routing device, where N is the number of logical
interfaces on the routing device. To prevent looping, the upstream interface must never receive copies
of downstream multicast packets.
Routing loops are disastrous in multicast networks because of the risk of repeatedly replicated packets.
One of the complexities of modern multicast routing protocols is the need to avoid routing loops, packet
by packet, much more rigorously than in unicast routing protocols.
The routing device builds its multicast forwarding state based on the reverse path, from the receiver
back to the root of the distribution tree; this process is called reverse-path forwarding (RPF). Every
multicast packet received must pass an RPF check before it can be replicated or forwarded on any
interface. When it receives a multicast packet on an interface, the routing device looks up the packet's
source address in the unicast routing table, as if that address were the destination of a unicast packet
sent back to the source.
If the outgoing interface found in the unicast routing table is the same interface that the multicast
packet was received on, the packet passes the RPF check. Multicast packets that fail the RPF check are
dropped, because the incoming interface is not on the shortest path back to the source. Routing devices
can build and maintain separate tables for RPF purposes.
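On Junos devices, you can typically inspect the RPF result that the router computes for a given source
with the following operational command (the source address is a placeholder, and output is omitted
here because it varies by platform and configuration):

user@host> show multicast rpf 192.0.2.10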
The distribution tree used for multicast is rooted at the source and is the shortest-path tree (SPT), but
this path can be long if the source is at the periphery of the network. Providing a shared tree on the
backbone as the distribution tree locates the multicast source more centrally in the network. Shared
distribution trees with roots in the core network are created and maintained by a multicast routing
device operating as a rendezvous point (RP), a feature of sparse mode multicast protocols.
Scoping limits the routing devices and interfaces that can forward a multicast packet. Multicast scoping
is administrative in the sense that a range of multicast addresses is reserved for scoping purposes, as
described in RFC 2365, Administratively Scoped IP Multicast. Routing devices at the boundary must
filter multicast packets and ensure that packets do not stray beyond the established limit.
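As a sketch of how such a boundary might be expressed in Junos, the following uses the scope
statement covered in the statement reference of this guide; the scope name, prefix, and interface are
placeholders for your own values:

routing-options {
    multicast {
        scope local-scope {                 /* placeholder scope name */
            prefix 239.255.0.0/16;          /* administratively scoped range to confine */
            interface ge-0/0/0.0;           /* boundary interface (placeholder) */
        }
    }
}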
Each subnetwork attached to the routing device that has at least one interested receiver is a leaf on
the distribution tree. Routing devices can have multiple leaves on different interfaces and must send a
copy of the IP multicast packet out on each interface with a leaf. When a new leaf subnetwork is added
to the tree (that is, the interface to the host subnetwork previously received no copies of the multicast
packets), a new branch is built, the leaf is joined to the tree, and replicated packets are sent out on the
interface. The number of leaves on a particular interface does not affect the routing device. The action is
the same for one leaf or a hundred.
NOTE: On Juniper Networks security devices, if the maximum number of leaves on a multicast
distribution tree is exceeded, multicast sessions are created up to the maximum number of
leaves, and any multicast sessions that exceed the maximum number of leaves are ignored. The
maximum number of leaves on a multicast distribution tree is device specific.
When a branch contains no leaves because there are no interested hosts on the routing device interface
leading to that IP subnetwork, the branch is pruned from the distribution tree, and no multicast packets
are sent out that interface. Packets are replicated and sent out multiple interfaces only where the
distribution tree branches at a routing device, and no link ever carries a duplicate flow of packets.
Collections of hosts all receiving the same stream of IP packets, usually from the same multicast source,
are called groups. In IP multicast networks, traffic is delivered to multicast groups based on an IP
multicast address, or group address. The groups determine the location of the leaves, and the leaves
determine the branches on the multicast network.
IP Multicast Addressing
Multicast uses the Class D IP address range (224.0.0.0 through 239.255.255.255). Class D addresses are
commonly referred to as multicast addresses because the entire classful address concept is obsolete.
Multicast addresses can never appear as the source address in an IP packet and can only be the
destination of a packet.
Multicast addresses usually have a prefix length of /32, although other prefix lengths are allowed.
Multicast addresses represent logical groupings of receivers and not physical collections of devices.
Blocks of multicast addresses can still be described in terms of prefix length in traditional notation, but
only for convenience. For example, the multicast address range from 232.0.0.0 through
232.255.255.255 can be written as 232.0.0.0/8 or 232/8.
Internet service providers (ISPs) do not typically allocate multicast addresses to their customers because
multicast addresses relate to content, not to physical devices. Receivers are not assigned their own
multicast addresses, but need to know the multicast address of the content. Sources need to be
assigned multicast addresses only to produce the content, not to identify their place in the network.
Every source and receiver still needs an ordinary, unicast IP address.
Multicast addressing most often references the receivers, and the source of multicast content is usually
not even a member of the multicast group for which it produces content. If the source needs to monitor
the packets it produces, monitoring can be done locally, and there is no need to make the packets
traverse the network.
Many applications have been assigned a range of multicast addresses for their own use. These
applications assign multicast addresses to sessions created by that application. You do not usually need
to statically assign a multicast address, but you can do so.
Multicast Addresses
Multicast host group addresses are defined to be the IP addresses whose high-order four bits are 1110,
giving an address range from 224.0.0.0 through 239.255.255.255, or simply 224.0.0.0/4. (These
addresses also are referred to as Class D addresses.)
The Internet Assigned Numbers Authority (IANA) maintains a list of registered IP multicast groups. The
base address 224.0.0.0 is reserved and cannot be assigned to any group. The block of multicast
addresses from 224.0.0.1 through 224.0.0.255 is reserved for local wire use. Groups in this range are
assigned for various uses, including routing protocols and local discovery mechanisms.
The range from 239.0.0.0 through 239.255.255.255 is reserved for administratively scoped addresses.
Because packets addressed to administratively scoped multicast addresses do not cross configured
administrative boundaries, and because administratively scoped multicast addresses are locally assigned,
these addresses do not need to be unique across administrative boundaries.
When an IP multicast packet is carried on a LAN, which MAC addresses are used on the frame
containing the packet? The packet source address—the
unicast IP address of the host originating the multicast content—translates easily and directly to the
MAC address of the source. But what about the packet’s destination address? This is the IP multicast
group address. Which destination MAC address for the frame corresponds to the packet’s multicast
group address?
One option is for LANs simply to use the LAN broadcast MAC address, which guarantees that the frame
is processed by every station on the LAN. However, this procedure defeats the whole purpose of
multicast, which is to limit the circulation of packets and frames to interested hosts. Also, hosts might
have access to many multicast groups, which multiplies the amount of traffic to noninterested
destinations. Broadcasting frames at the LAN level to support multicast groups makes no sense.
However, there is an easy way to effectively use Layer 2 frames for multicast purposes. The MAC
address has a bit that is set to 0 for unicast (the LAN term is individual address) and set to 1 to indicate
that this is a multicast address. Some of these addresses are reserved for multicast groups of specific
vendors or MAC-level protocols. Internet multicast applications use the range 0x01-00-5E-00-00-00 to
0x01-00-5E-FF-FF-FF. Multicast receivers (hosts running TCP/IP) listen for frames with one of these
addresses when the application joins a multicast group. The host stops listening when the application
terminates or the host leaves the group at the packet layer (Layer 3).
This means that 3 bytes, or 24 bits, are available to map IPv4 multicast addresses at Layer 3 to MAC
multicast addresses at Layer 2. However, all IPv4 addresses, including multicast addresses, are 32 bits
long, leaving 8 IP address bits left over. Which method of mapping IPv4 multicast addresses to MAC
multicast addresses minimizes the chance of “collisions” (that is, two different IP multicast groups at the
packet layer mapping to the same MAC multicast address at the frame layer)?
First, it is important to realize that all IPv4 multicast addresses begin with the same 4 bits (1110), so
there are really only 4 bits of concern, not 8. A LAN must not drop the last bits of the IPv4 address
because these are almost guaranteed to be host bits, depending on the subnet mask. But the high-order
bits, the leftmost address bits, are almost always network bits, and there is only one LAN (for now).
One other bit of the remaining 24 MAC address bits is reserved (an initial 0 indicates an Internet
multicast address), so the 5 bits following the initial 1110 in the IPv4 address are dropped. The 23
remaining bits are mapped, one for one, into the last 23 bits of the MAC address. An example of this
process is shown in Figure 2 on page 12.
Note that this process means that there are 32 (2^5) IPv4 multicast addresses that could map to the
same MAC multicast address. For example, multicast IPv4 addresses 224.8.7.6 and 229.136.7.6
translate to the same MAC address (0x01-00-5E-08-07-06). This is a real concern, and because the host
could be interested in frames sent to both of those multicast groups, the IP software must reject one or
the other.
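To make the mapping concrete, here is the arithmetic behind that example. In hexadecimal, 224.8.7.6 is
0xE0080706 and 229.136.7.6 is 0xE5880706. Masking each address with 0x007FFFFF keeps only the
low-order 23 bits, which are 0x080706 in both cases; prepending the fixed bits 0x01-00-5E (with the
following bit set to 0) yields 0x01-00-5E-08-07-06 for both groups.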
NOTE: This “collision” problem does not exist in IPv6 because of the way IPv6 handles multicast
groups, but it is always a concern in IPv4. The procedure for placing IPv6 multicast packets inside
multicast frames is nearly identical to that for IPv4, except for the MAC destination address
0x3333 prefix (and the lack of “collisions”).
Once the MAC address for the multicast group is determined, the host's operating system essentially
orders the LAN interface card to join or leave the multicast group. Once joined to a multicast group, the
host accepts frames sent to the multicast address as well as the host’s unicast address and ignores other
multicast group’s frames. It is possible for a host to join and receive multicast content from more than
one group at the same time, of course.
To avoid multicast routing loops, every multicast routing device must always be aware of the interface
that leads to the source of that multicast group content by the shortest path. This is the upstream
(incoming) interface, and packets are never to be forwarded back toward a multicast source. All other
interfaces are potential downstream (outgoing) interfaces, depending on the number of branches on the
distribution tree.
Routing devices closely monitor the status of the incoming and outgoing interfaces, a process that
determines the multicast forwarding state. A routing device with a multicast forwarding state for a
particular multicast group is essentially “turned on” for that group's content. Interfaces on the routing
device's outgoing interface list send copies of the group's packets received on the incoming interface list
for that group. The incoming and outgoing interface lists might be different for different multicast
groups.
The multicast forwarding state in a routing device is usually written in either (S,G) or (*,G) notation.
These are pronounced “ess comma gee” and “star comma gee,” respectively. In (S,G), the S refers to the
unicast IP address of the source for the multicast traffic, and the G refers to the particular multicast
group IP address for which S is the source. All multicast packets sent from this source have S as the
source address and G as the destination address.
The asterisk (*) in the (*,G) notation is a wildcard indicating that the state applies to any multicast
application source sending to group G. So, if two sources are originating exactly the same content for
multicast group 224.1.1.2, a routing device could use (*,224.1.1.2) to represent the state of a routing
device forwarding traffic from both sources to the group.
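On a Junos device, you can typically display this forwarding state with operational commands such as
the following; entries appear in the (S,G) and (*,G) forms just described (output omitted here):

user@host> show multicast route extensive
user@host> show pim join extensive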
Multicast routing protocols enable a collection of multicast routing devices to build (join) distribution
trees when a host on a directly attached subnet, typically a LAN, wants to receive traffic from a certain
multicast group, prune branches, locate sources and groups, and prevent routing loops.
• Distance Vector Multicast Routing Protocol (DVMRP)—The first of the multicast routing protocols
and hampered by a number of limitations that make this method unattractive for large-scale Internet
use. DVMRP is a dense-mode-only protocol, and uses the flood-and-prune or implicit join method to
deliver traffic everywhere and then determine where the uninterested receivers are. DVMRP uses
source-based distribution trees in the form (S,G), and builds its own multicast routing tables for RPF
checks.
• Multicast OSPF (MOSPF)—Extends OSPF for multicast use, but only for dense mode. However,
MOSPF has an explicit join message, so routing devices do not have to flood their entire domain with
multicast traffic from every source. MOSPF uses source-based distribution trees in the form (S,G).
• Bidirectional PIM mode—A variation of PIM. Bidirectional PIM builds bidirectional shared trees that
are rooted at a rendezvous point (RP) address. Bidirectional traffic does not switch to shortest path
trees as in PIM-SM and is therefore optimized for routing state size instead of path length. This
means that the end-to-end latency might be longer compared to PIM sparse mode. Bidirectional PIM
routes are always wildcard-source (*,G) routes. The protocol eliminates the need for (S,G) routes and
data-triggered events. The bidirectional (*,G) group trees carry traffic both upstream from senders
toward the RP, and downstream from the RP to receivers. As a consequence, the strict reverse path
forwarding (RPF)-based rules found in other PIM modes do not apply to bidirectional PIM. Instead,
bidirectional PIM (*,G) routes forward traffic from all sources and the RP. Bidirectional PIM routing
devices must have the ability to accept traffic on many potential incoming interfaces. Bidirectional
PIM scales well because it needs no source-specific (S,G) state. Bidirectional PIM is recommended in
deployments with many dispersed sources and many dispersed receivers.
• PIM dense mode—In this mode of PIM, the assumption is that almost all possible subnets have at
least one receiver wanting to receive the multicast traffic from a source, so the network is flooded
with traffic on all possible branches, then pruned back when branches do not express an interest in
receiving the packets, explicitly (by message) or implicitly (time-out silence). This is the dense mode
of multicast operation. LANs are appropriate networks for dense-mode operation. Some multicast
routing protocols, especially older ones, support only dense-mode operation, which makes them
inappropriate for use on the Internet. In contrast to DVMRP and MOSPF, PIM dense mode allows a
routing device to use any unicast routing protocol and performs RPF checks using the unicast routing
table. PIM dense mode has an implicit join message, so routing devices use the flood-and-prune
method to deliver traffic everywhere and then determine where the uninterested receivers are. PIM
dense mode uses source-based distribution trees in the form (S,G), as do all dense-mode protocols.
PIM also supports sparse-dense mode, with mixed sparse and dense groups, but there is no special
notation for that operational mode. If sparse-dense mode is supported, the multicast routing
protocol allows some multicast groups to be sparse and other groups to be dense.
• PIM sparse mode—In this mode of PIM, the assumption is that very few of the possible receivers
want packets from each source, so the network establishes and sends packets only on branches that
have at least one leaf indicating (by message) an interest in the traffic. This multicast protocol allows
a routing device to use any unicast routing protocol and performs reverse-path forwarding (RPF)
checks using the unicast routing table. PIM sparse mode has an explicit join message, so routing
devices determine where the interested receivers are and send join messages upstream to their
neighbors, building trees from receivers to the rendezvous point (RP). PIM sparse mode uses an RP
routing device as the initial source of multicast group traffic and therefore builds distribution trees in
the form (*,G), as do all sparse-mode protocols. PIM sparse mode migrates to an (S,G) source-based
tree if that path is shorter than through the RP for a particular multicast group's traffic. WANs are
appropriate networks for sparse-mode operation, and indeed a common multicast guideline is not to
run dense mode on a WAN under any circumstances. (A minimal sparse-mode configuration sketch
follows Table 1 below.)
• Core Based Trees (CBT)—Shares all of the characteristics of PIM sparse mode (sparse mode, explicit
join, and shared (*,G) trees), but is said to be more efficient at finding sources than PIM sparse mode.
CBT is rarely encountered outside academic discussions. There are no large-scale deployments of
CBT, commercial or otherwise.
• PIM source-specific multicast (SSM)—Enhancement to PIM sparse mode that allows a client to
receive multicast traffic directly from the source, without the help of an RP. Used with IGMPv3 to
create a shortest-path tree between receiver and source.
• IGMPv1—The original of the three Internet Group Management Protocol (IGMP) versions that run
between receiver hosts and routing devices, defined in RFC 1112, Host Extensions for IP Multicasting.
IGMPv1 sends an explicit join message to the routing device, but uses a timeout to determine when
hosts leave a group.
• IGMPv2—Defined in RFC 2236, Internet Group Management Protocol, Version 2. Among other
features, IGMPv2 adds an explicit leave message to the join message.
• IGMPv3—Defined in RFC 3376, Internet Group Management Protocol, Version 3. Among other
features, IGMPv3 optimizes support for a single source of content for a multicast group, or source-
specific multicast (SSM). Used with PIM SSM to create a shortest-path tree between receiver and
source.
• Bootstrap Router (BSR) and Auto-Rendezvous Point (RP)—Allow sparse-mode routing protocols to
find RPs within the routing domain (autonomous system, or AS). RP addresses can also be statically
configured.
• Multicast Source Discovery Protocol (MSDP)—Allows groups located in one multicast routing domain
to find RPs in other routing domains. MSDP typically runs on the same routing device as the PIM
sparse mode RP. It is not needed if all receivers and sources are located in the same routing domain.
• Session Announcement Protocol (SAP) and Session Description Protocol (SDP)—Display multicast
session names and correlate the names with multicast traffic. SDP is a session directory protocol that
advertises multimedia conference sessions and communicates setup information to participants who
want to join the session. A client commonly uses SDP to announce a conference session by
periodically multicasting an announcement packet to a well-known multicast address and port using
SAP.
• Pragmatic General Multicast (PGM)—Special protocol layer for multicast traffic that can be used
between the IP layer and the multicast application to add reliability to multicast traffic. PGM allows a
receiver to detect missing information in all cases and request replacement information if the
receiver application requires it.
The differences among the multicast routing protocols are summarized in Table 1 on page 16.
Table 1: Comparison of Multicast Routing Protocols

Multicast Routing Protocol  Dense Mode  Sparse Mode  Implicit Join  Explicit Join  (S,G) SBT  (*,G) Shared Tree
DVMRP                       Yes         -            Yes            -              Yes        -
MOSPF                       Yes         -            -              Yes            Yes        -
PIM dense mode              Yes         -            Yes            -              Yes        -
PIM sparse mode             -           Yes          -              Yes            Yes        Yes
Bidirectional PIM           -           Yes          -              Yes            -          Yes
CBT                         -           Yes          -              Yes            -          Yes
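As a concrete illustration of the sparse-mode operation described above, the following minimal Junos
configuration sketch enables PIM sparse mode on all interfaces and points the routing device at a
statically configured RP. The RP address is a placeholder; an auto-RP or BSR configuration could take its
place:

protocols {
    pim {
        rp {
            static {
                address 203.0.113.1;    /* placeholder RP address (documentation range) */
            }
        }
        interface all {
            mode sparse;                /* explicit joins toward the RP, per the description above */
        }
    }
}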
It is important to realize that retransmissions due to a high bit-error rate on a link or overloaded routing
device can make multicast as inefficient as repeated unicast. Therefore, many multicast applications
face a trade-off between the session support provided by the Transmission Control Protocol (TCP),
which always resends missing segments, and the simple drop-and-continue strategy of the User
Datagram Protocol (UDP) datagram service, where reordering can become an issue. Modern multicast
uses UDP almost exclusively.
The Juniper Networks T Series Core Routers handle extreme multicast packet replication requirements
with a minimum of router load. Each memory component replicates a multicast packet twice at most.
Even in the worst-case scenario involving maximum fan-out, when 1 input port and 63 output ports
need a copy of the packet, the T Series routing platform copies a multicast packet only six times. Most
multicast distribution trees are much sparser, so in many cases only two or three replications are
necessary. In no case does the T Series architecture have an impact on multicast performance, even with
the largest multicast fan-out requirements.
In the data plane of the SRX Series chassis, the SRX5000 line Module Port Concentrator (SRX5K-MPC)
forwards Layer 3 IP multicast packets, both multicast protocol packets (for example, MLD, IGMP, and
PIM packets) and multicast data packets.
In the incoming direction, the MPC receives multicast packets on an interface and forwards them to the
central point or to a Services Processing Unit (SPU). The SPU performs the multicast route lookup, flow-
based security checks, and packet replication.
In the outgoing direction, the MPC receives copies of a multicast packet or Layer 3 multicast control
protocol packets from the SPU, and transmits them to either multicast-capable routers or hosts in a
multicast group.
In the SRX Series chassis, the SPU performs a multicast route lookup, if a route is available, to forward
an incoming multicast packet, and replicates the packet for each multicast outgoing interface. After
receiving replicated multicast packets and their corresponding outgoing interface information from the
SPU, the MPC transmits these packets to their next hops.
NOTE: On all SRX Series devices, during an RG1 failover with multicast traffic and a high number
of multicast sessions, the failover delay is from 90 through 120 seconds before traffic resumes on
the secondary node. This delay applies only to the first failover. For subsequent failovers, traffic
resumes within 8 through 18 seconds.
You configure a router network to support multicast applications with a related family of protocols. To
use multicast, you must understand the basic components of a multicast network and their relationships,
and then configure the device to act as a node in the network.
RELATED DOCUMENTATION
Multicast Overview | 2
Verifying a Multicast Configuration
IN THIS SECTION
• Fragment handling
• Packet reordering
The structure and processing of IPv6 multicast data sessions are the same as those of IPv4.
The reverse path forwarding (RPF) check behavior for IPv6 is the same as that for IPv4. Incoming
multicast data is accepted only if the RPF check succeeds. In an IPv6 multicast flow, incoming Multicast
Listener Discovery (MLD) protocol packets are accepted only if MLD or PIM is enabled in the security
zone for the incoming interface. Sessions for multicast protocol packets have a default timeout value of
300 seconds; this value cannot be configured. The PIM null register packet is sent to the rendezvous point (RP).
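As an illustration of the zone requirement just described, the following sketch permits PIM as host-
inbound traffic on one security zone. The zone name is a placeholder, and whether MLD appears in your
platform's host-inbound protocol list depends on the release:

security {
    zones {
        security-zone trust {               /* zone name is a placeholder */
            host-inbound-traffic {
                protocols {
                    pim;                    /* accept incoming PIM packets on interfaces in this zone */
                }
            }
        }
    }
}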
In an IPv6 multicast flow, a multicast router plays one of the following three roles:
• Designated router
This router receives the multicast packets, encapsulates them with unicast IP headers (as PIM register
packets), and sends them toward the rendezvous point.
• Intermediate router
There are two sessions for the packets: the control session, for the outer unicast packets, and the
data session. Security policies are applied to the data session; the control session is used for
forwarding.
• Rendezvous point
The RP receives the unicast PIM register packet, removes the unicast header, and then forwards the
inner multicast packet. The packets received by the RP are sent to the pd interface for decapsulation
and are then handled like normal multicast packets.
On a Services Processing Unit (SPU), the multicast session is created as a template session for matching
the incoming packet's tuple, and leaf sessions are connected to the template session. On the central
point, only the template session is created. Each central point session carries the fan-out lists that are
used for load-balanced distribution of multicast SPU sessions.
NOTE: IPv6 multicast uses the IPv4 multicast behavior for session distribution.
The network service access point identifier (nsapi) of the leaf session is set up on the multicast transit
traffic going into the tunnels, to point to the outgoing tunnel. The zone ID of the tunnel is used for
policy lookup for the leaf session in the second stage. Multicast packets are unidirectional; thus, for
multicast transit sessions sent into the tunnels, forwarding sessions are not created.
When the multicast route ages out or changes, the corresponding chain of multicast sessions is deleted.
This forces the next packet hitting the multicast route to take the first path and re-create the chain of
sessions; the multicast route counter is not affected.
NOTE: The IPv6 multicast packet reordering approach is the same as that for IPv4.
For the encapsulating router, the incoming packet is multicast, and the outgoing packet is unicast. For
the intermediate router, the incoming packet is unicast, and the outgoing packet is unicast.
Junos OS substantially supports the following RFCs and Internet drafts, which define standards for IP
multicast protocols, including the Distance Vector Multicast Routing Protocol (DVMRP), Internet Group
Management Protocol (IGMP), Multicast Listener Discovery (MLD), Multicast Source Discovery Protocol
(MSDP), Pragmatic General Multicast (PGM), Protocol Independent Multicast (PIM), Session
Announcement Protocol (SAP), and Session Description Protocol (SDP).
• RFC 3956, Embedding the Rendezvous Point (RP) Address in an IPv6 Multicast Address
• RFC 3590, Source Address Selection for the Multicast Listener Discovery (MLD) Protocol
• RFC 7761, Protocol Independent Multicast – Sparse Mode (PIM-SM): Protocol Specification
• RFC 5059, Bootstrap Router (BSR) Mechanism for Protocol Independent Multicast (PIM)
• RFC 6514, BGP Encodings and Procedures for Multicast in MPLS/BGP IP VPNs
The following RFCs and Internet drafts do not define standards, but provide information about multicast
protocols and related technologies. The IETF classifies them variously as “Best Current Practice,”
“Experimental,” or “Informational.”
• RFC 3446, Anycast Rendevous Point (RP) mechanism using Protocol Independent Multicast (PIM)
and Multicast Source Discovery Protocol (MSDP)
• RFC 3973, Protocol Independent Multicast – Dense Mode (PIM-DM): Protocol Specification
(Revised)
CHAPTER 2
IN THIS CHAPTER
Configuring IGMP | 25
Configuring MLD | 60
Configuring IGMP
IN THIS SECTION
Understanding IGMP | 27
Configuring IGMP | 29
Enabling IGMP | 31
Disabling IGMP | 57
Multicast group membership protocols enable a routing device to detect when a host on a directly
attached subnet, typically a LAN, wants to receive traffic from a certain multicast group. Even if more
than one host on the LAN wants to receive traffic for that multicast group, the routing device sends only
one copy of each packet for that multicast group out on that interface, because of the inherent
broadcast nature of LANs. When the multicast group membership protocol informs the routing device
that there are no interested hosts on the subnet, the packets are withheld and that leaf is pruned from
the distribution tree.
The Internet Group Management Protocol (IGMP) and the Multicast Listener Discovery (MLD) Protocol
are the standard IP multicast group membership protocols. IGMP and MLD have several versions that
are supported by hosts and routing devices:
• IGMPv1—The original protocol defined in RFC 1112. An explicit join message is sent to the routing
device, but a timeout is used to determine when hosts leave a group. This process wastes processing
cycles on the routing device, especially on older or smaller routing devices.
• IGMPv2—Defined in RFC 2236. Among other features, IGMPv2 adds an explicit leave message to
the join message so that routing devices can more easily determine when a group has no interested
listeners on a LAN.
• IGMPv3—Defined in RFC 3376. Among other features, IGMPv3 optimizes support for a single source
of content for a multicast group, or source-specific multicast (SSM).
The various versions of IGMP and MLD are backward compatible. It is common for a routing device to
run multiple versions of IGMP and MLD on LAN interfaces. Backward compatibility is achieved by
dropping back to the most basic of all versions run on a LAN. For example, if one host is running
IGMPv1, any routing device attached to the LAN running IGMPv2 can drop back to IGMPv1 operation,
effectively eliminating the IGMPv2 advantages. Running multiple IGMP versions ensures that both
IGMPv1 and IGMPv2 hosts find peers for their versions on the routing device.
SEE ALSO
Configuring MLD
Understanding IGMP
The Internet Group Management Protocol (IGMP) manages the membership of hosts and routing
devices in multicast groups. IP hosts use IGMP to report their multicast group memberships to any
immediately neighboring multicast routing devices. Multicast routing devices use IGMP to learn, for
each of their attached physical networks, which groups have members.
IGMP is also used as the transport for several related multicast protocols (for example, Distance Vector
Multicast Routing Protocol [DVMRP] and Protocol Independent Multicast version 1 [PIMv1]).
A routing device receives explicit join and prune messages from those neighboring routing devices that
have downstream group members. When PIM is the multicast protocol in use, IGMP begins the process
as follows:
1. To join a multicast group, G, a host conveys its membership information through IGMP.
2. The routing device then forwards data packets addressed to a multicast group G to only those
interfaces on which explicit join messages have been received.
3. A designated router (DR) sends periodic join and prune messages toward a group-specific rendezvous
point (RP) for each group for which it has active members. One or more routing devices are
automatically or statically designated as the RP, and all routing devices must explicitly join through
the RP.
4. Each routing device along the path toward the RP builds a wildcard (any-source) state for the group
and sends join and prune messages toward the RP.
The term route entry is used to refer to the state maintained in a routing device to represent the
distribution tree. A route entry can include fields such as the:
• source address
• group address
• timers
• flag bits
The wildcard route entry's incoming interface points toward the RP.
The outgoing interfaces point to the neighboring downstream routing devices that have sent join and
prune messages toward the RP as well as the directly connected hosts that have requested
membership to group G.
5. This state creates a shared, RP-centered distribution tree that reaches all group members.
IGMP is an integral part of IP and must be enabled on all routing devices and hosts that need to receive
IP multicast traffic.
For each attached network, a multicast routing device can be either a querier or a nonquerier. The
querier routing device periodically sends general query messages to solicit group membership
information. Hosts on the network that are members of a multicast group send report messages. When
a host leaves a group, it sends a leave group message.
IGMP version 3 (IGMPv3) supports inclusion and exclusion lists. Inclusion lists enable you to specify
which sources can send to a multicast group. This type of multicast group is called a source-specific
multicast (SSM) group, and its multicast address is 232/8.
IGMPv3 provides support for source filtering. For example, a receiver can specify the particular
sources from which it accepts or rejects traffic. With IGMPv3, a multicast routing device can
learn which sources are of interest to neighboring routing devices.
Exclusion mode works the opposite of an inclusion list: it allows any source except the ones listed
to send to the SSM group.
IGMPv3 interoperates with versions 1 and 2 of the protocol. However, to remain compatible with older
IGMP hosts and routing devices, IGMPv3 routing devices must also implement versions 1 and 2 of the
protocol. IGMPv3 membership reports carry record types for the current filter mode (mode is include
and mode is exclude), filter-mode changes, and source-list changes (allow new sources and block old
sources).
SEE ALSO
Configuring IGMP
Before you begin:
1. Determine whether the router is directly attached to any multicast sources. Receivers must be able
to locate these sources.
2. Determine whether the router is directly attached to any multicast group receivers. If receivers are
present, IGMP is needed.
3. Determine whether to configure multicast to use sparse, dense, or sparse-dense mode. Each mode
has different configuration considerations.
4. Determine whether to locate the RP with the static configuration, BSR, or auto-RP method.
5. Determine whether to configure multicast to use its own RPF routing table when configuring PIM in
sparse, dense, or sparse-dense mode.
6. Configure the SAP and SDP protocols to listen for multicast session announcements. See Configuring
the Session Announcement Protocol.
To configure the Internet Group Management Protocol (IGMP), include the igmp statement:
igmp {
accounting;
interface interface-name {
disable;
(accounting | no-accounting);
group-policy [ policy-names ];
immediate-leave;
oif-map map-name;
promiscuous-mode;
ssm-map ssm-map-name;
static {
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
}
version version;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
You can include this statement at the following hierarchy levels:
• [edit protocols]
By default, IGMP is enabled on all interfaces on which you configure Protocol Independent Multicast
(PIM), and on all broadcast interfaces on which you configure the Distance Vector Multicast Routing
Protocol (DVMRP).
NOTE: You can configure IGMP on an interface without configuring PIM. PIM is generally not
needed on IGMP downstream interfaces. Therefore, only one “pseudo PIM interface” is created
to represent all IGMP downstream (IGMP-only) interfaces on the router. This reduces the amount
of router resources, such as memory, that are consumed. You must configure PIM on upstream
IGMP interfaces to enable multicast routing, perform reverse-path forwarding for multicast data
packets, populate the multicast forwarding table for upstream interfaces, and in the case of
bidirectional PIM and PIM sparse mode, to distribute IGMP group memberships into the
multicast routing domain.
Enabling IGMP
The Internet Group Management Protocol (IGMP) manages multicast groups by establishing,
maintaining, and removing groups on a subnet. Multicast routing devices use IGMP to learn which
groups have members on each of their attached physical networks. IGMP must be enabled for the router
to receive IPv4 multicast packets. IGMP is only needed for IPv4 networks, because multicast is handled
differently in IPv6 networks. IGMP is automatically enabled on all IPv4 interfaces on which you
configure PIM and on all IPv4 broadcast interfaces when you configure DVMRP.
If IGMP is not running on an interface—either because PIM and DVMRP are not configured on the
interface or because IGMP is explicitly disabled on the interface—you can explicitly enable IGMP.
1. If PIM and DVMRP are not running on the interface, explicitly enable IGMP by including the interface
name.
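For example, a minimal sketch (the interface name here is a placeholder for one on your router):

[edit]
user@host# set protocols igmp interface fe-0/0/0.0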
2. See if IGMP is disabled on any interfaces. In the following example, IGMP is disabled on a Gigabit
Ethernet interface.
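Such a configuration might look as follows (interface names are placeholders):

interface fe-0/0/0.0;
interface ge-0/0/0.0 {
    disable;
}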
3. Verify the operation of IGMP on the interfaces by checking the output of the show igmp interface
command.
SEE ALSO
Understanding IGMP
Disabling IGMP
show igmp interface
The query interval, the response interval, and the robustness variable are related in that they are all
variables that are used to calculate the group membership timeout. The group membership timeout is
the number of seconds that must pass before a multicast router determines that no more members of a
host group exist on a subnet. The group membership timeout is calculated as the (robustness variable x
query-interval) + (query-response-interval). If no reports are received for a particular group before the
group membership timeout has expired, the routing device stops forwarding remotely-originated
multicast packets for that group onto the attached network.
By default, host-query messages are sent every 125 seconds. You can change this interval to change the
number of IGMP messages sent on the subnet.
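For example, to lengthen the interval to 200 seconds (the value is illustrative only):

[edit]
user@host# set protocols igmp query-interval 200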
SEE ALSO
Understanding IGMP
Modifying the IGMP Query Response Interval
Modifying the IGMP Robustness Variable
show igmp interface
show igmp statistics
The query response interval, the host-query interval, and the robustness variable are related in that they
are all variables that are used to calculate the group membership timeout. The group membership
timeout is the number of seconds that must pass before a multicast router determines that no more
members of a host group exist on a subnet. The group membership timeout is calculated as the
(robustness variable x query-interval) + (query-response-interval). If no reports are received for a
particular group before the group membership timeout has expired, the routing device stops forwarding
remotely originated multicast packets for that group onto the attached network.
The default query response interval is 10 seconds. You can configure a subsecond interval with up to
one digit to the right of the decimal point. The configurable range is 0.1 through 0.9 seconds, and
then, in 1-second increments, 1 through 999,999 seconds.
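For example, to raise the interval to 15 seconds (the value is illustrative only):

[edit]
user@host# set protocols igmp query-response-interval 15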
2. Verify the configuration by checking the IGMP Query Response Interval field in the output of the
show igmp interface command.
3. Verify the operation of the query interval by checking the Membership Query field in the output of
the show igmp statistics command.
SEE ALSO
Understanding IGMP
Modifying the IGMP Host-Query Message Interval
Modifying the IGMP Robustness Variable
show igmp interface
show igmp statistics
The immediate-leave setting enables host tracking, meaning that the device keeps track of the hosts that
send join messages. This allows IGMP to determine when the last host sends a leave message for the
multicast group.
When the immediate leave setting is enabled, the device removes an interface from the forwarding-table
entry without first sending IGMP group-specific queries to the interface. The interface is pruned from
the multicast tree for the multicast group specified in the IGMP leave message. The immediate leave
setting ensures optimal bandwidth management for hosts on a switched network, even when multiple
multicast groups are being used simultaneously.
When immediate leave is disabled and one host sends a leave group message, the routing device first
sends a group query to determine if another receiver responds. If no receiver responds, the routing
device removes all hosts on the interface from the multicast group. Immediate leave is disabled by
default for both IGMP version 2 and IGMP version 3.
NOTE: Although host tracking is enabled for IGMPv2 and MLDv1 when you enable immediate
leave, use immediate leave with these versions only when there is one host on the interface. The
reason is that IGMPv2 and MLDv1 use a report suppression mechanism whereby only one host
on an interface sends a group join report in response to a membership query. The other
interested hosts suppress their reports. The purpose of this mechanism is to avoid a flood of
reports for the same group. But it also interferes with host tracking, because the router only
knows about the one interested host and does not know about the others.
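To enable the setting, you might enter the following (the interface name is a placeholder):

[edit]
user@host# set protocols igmp interface ge-0/0/0.0 immediate-leave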
2. Verify the configuration by checking the Immediate Leave field in the output of the show igmp
interface command.
SEE ALSO
Understanding IGMP
show igmp interface
You define the policy to match only IGMP group addresses (for IGMPv2) by using the policy's route-
filter statement to match the group address. You define the policy to match IGMP (source, group)
addresses (for IGMPv3) by using the policy's route-filter statement to match the group address and the
policy's source-address-filter statement to match the source address.
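As a sketch (the policy name and addresses are hypothetical), an IGMPv3 (source, group) policy might
look like this:

[edit policy-options]
policy-statement reject-igmp-reports {
    term t1 {
        from {
            route-filter 233.252.0.0/24 orlonger;
            source-address-filter 10.0.0.0/24 orlonger;
        }
        then reject;
    }
}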
3. Apply the policies to the IGMP interfaces on which you prefer not to receive specific group or
(source, group) reports. In this example, ge-0/0/0.1 is running IGMPv2, and ge-0/1/1.0 is running
IGMPv3.
4. Verify the operation of the filter by checking the Rejected Report field in the output of the show
igmp statistics command.
SEE ALSO
Understanding IGMP
Example: Configuring Policy Chains and Route Filters
NOTE: When you enable IGMP on an unnumbered Ethernet interface that uses a /32 loopback
address as a donor address, you must configure IGMP promiscuous mode to accept the IGMP
packets received on this interface.
NOTE: When enabling promiscuous mode, all routers on the Ethernet segment must be
configured with the promiscuous-mode statement. Otherwise, only the interface configured with
the lowest IPv4 address acts as the IGMP querier for the Ethernet segment.
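For example (the interface name is a placeholder):

[edit]
user@host# set protocols igmp interface ge-0/0/0.0 promiscuous-mode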
2. Verify the configuration by checking the Promiscuous Mode field in the output of the show igmp
interface command.
3. Verify the operation of the filter by checking the Rx non-local field in the output of the show igmp
statistics command.
SEE ALSO
Understanding IGMP
Loopback Interface Configuration
Junos OS Network Interfaces Library for Routing Devices
show igmp interface
show igmp statistics
When the routing device that is serving as the querier receives a leave-group message from a host, the
routing device sends multiple group-specific queries to the group being left. The querier sends a specific
number of these queries at a specific interval. The number of queries sent is called the last-member
query count. The interval at which the queries are sent is called the last-member query interval. Because
both settings are configurable, you can adjust the leave latency. The IGMP leave latency is the time
between a request to leave a multicast group and the receipt of the last byte of data for the multicast
group.
The last-member query count multiplied by the last-member query interval equals the amount of time it
takes a routing device to determine that the last member of a group has left the group and to stop
forwarding group traffic.
The default last-member query interval is 1 second. You can configure a subsecond interval with up to
one digit to the right of the decimal point. The configurable range is 0.1 through 0.9 seconds, and
then, in 1-second increments, 1 through 999,999 seconds.
1. Configure the time (in seconds) that the routing device waits for a report in response to a group-
specific query.
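For example (the value is illustrative only):

[edit]
user@host# set protocols igmp query-last-member-interval 0.5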
2. Verify the configuration by checking the IGMP Last Member Query Interval field in the output of the
show igmp interface command.
NOTE: You can configure the last-member query count by configuring the robustness variable.
The two are always equal.
When the query router receives an IGMP leave message on a shared network running IGMPv2, the
query router must send an IGMP group query message a specified number of times. The number of
IGMP group query messages sent is determined by the robust count.
The value of the robustness variable is also used in calculating the following IGMP message intervals:
• Group member interval—Amount of time that must pass before a multicast router determines that
there are no more members of a group on a network. This interval is calculated as follows:
(robustness variable x query-interval) + (1 x query-response-interval).
• Other querier present interval—The robust count is used to calculate the amount of time that must
pass before a multicast router determines that there is no longer another multicast router that is the
querier. This interval is calculated as follows: (robustness variable x query-interval) + (0.5 x query-
response-interval).
• Last-member query count—Number of group-specific queries sent before the router assumes there
are no local members of a group. The number of queries is equal to the value of the robustness
variable.
In IGMPv3, a change of interface state causes the system to immediately transmit a state-change report
from that interface. In case the state-change report is missed by one or more multicast routers, it is
retransmitted. The number of times it is retransmitted is the robust count minus one. In IGMPv3, the
robust count is also a factor in determining the group membership interval, the older version querier
interval, and the other querier present interval.
By default, the robustness variable is set to 2. You might want to increase this value if you expect a
subnet to lose packets.
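For example, to raise the robustness variable to 3 (the value is illustrative only):

[edit]
user@host# set protocols igmp robust-count 3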
2. Verify the configuration by checking the IGMP Robustness Count field in the output of the show
igmp interface command.
Increasing the maximum number of IGMP packets transmitted per second might be useful on a router
with a large number of interfaces participating in IGMP.
To change the limit for the maximum number of IGMP packets the router can transmit in 1 second,
include the maximum-transmit-rate statement and specify the maximum number of packets per second
to be transmitted.
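A sketch, assuming the statement lives at the [edit protocols igmp] hierarchy level as its MLD
counterpart does (the rate shown is illustrative only):

[edit]
user@host# set protocols igmp maximum-transmit-rate 20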
To enable source-specific multicast (SSM) functionality, you must configure version 3 on the host and
the host’s directly connected routing device. If a source address is specified in a multicast group that is
statically configured, the version must be set to IGMPv3.
If a static multicast group is configured with the source address defined, and the IGMP version is
configured to be version 2, the source is ignored and only the group is added. In this case, the join is
treated as an IGMPv2 group join.
BEST PRACTICE: If you configure the IGMP version setting at the individual interface hierarchy
level, it overrides the interface all statement. That is, the new interface does not inherit the
version number that you specified with the interface all statement. By default, that new interface
is enabled with version 2. You must explicitly specify a version number when adding a new
interface. For example, if you specified version 3 with interface all, you would need to configure
the version 3 statement for the new interface. Additionally, if you configure an interface for a
multicast group at the [edit protocols igmp interface interface-name static group multicast-group-address]
hierarchy level, you must specify a version number as well as the other group parameters.
Otherwise, the interface is enabled with the default version 2.
If you have already configured the routing device to use IGMP version 1 (IGMPv1) and then configure it
to use IGMPv2, the routing device continues to use IGMPv1 for up to 6 minutes and then uses IGMPv2.
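For example, to run IGMPv3 on one interface (the interface name is a placeholder):

[edit]
user@host# set protocols igmp interface ge-0/0/0.0 version 3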
2. Verify the configuration by checking the version field in the output of the show igmp interface
command. The show igmp statistics command has version-specific output fields, such as V1
Membership Report, V2 Membership Report, and V3 Membership Report.
SEE ALSO
Understanding IGMP
show pim interfaces
show igmp statistics
When enabling IGMP static group membership, you cannot configure multiple groups using the group-
count, group-increment, source-count, and source-increment statements if the all option is specified as
the IGMP interface.
Class-of-service (CoS) adjustment is not supported with IGMP static group membership.
1. On the DR, configure the static groups to be created by including the static statement and the
group statement and specifying the IP multicast address of the group to be created. When creating
groups individually, you must specify a unique address for each group.
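For example, the configuration shown in step 2 could be produced with the following command:

[edit]
user@host# set protocols igmp interface fe-0/1/2.0 static group 233.252.0.1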
2. After you commit the configuration, use the show configuration protocol igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
static {
group 233.252.0.1;
}
}
3. After you have committed the configuration and the source is sending traffic, use the show igmp
group command to verify that static group 233.252.0.1 has been created.
NOTE: When you configure static IGMP group entries on point-to-point links that connect
routing devices to a rendezvous point (RP), the static IGMP group entries do not generate join
messages toward the RP.
When you create IGMP static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, you can specify that a number of static groups be automatically
created. This is useful when you want to test forwarding to multiple receivers without having to
configure each receiver separately.
1. On the DR, configure the number of static groups to be created by including the group-count
statement and specifying the number of groups to be created.
2. After you commit the configuration, use the show configuration protocol igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
static {
group 233.252.0.1 {
group-count 3;
}
}
}
3. After you have committed the configuration and after the source is sending traffic, use the show
igmp group command to verify that static groups 233.252.0.1, 233.252.0.2, and 233.252.0.3 have
been created.
When you create IGMP static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, you can also configure the group address to be automatically
incremented for each group created. This is useful when you want to test forwarding to multiple
receivers without having to configure each receiver separately and when you do not want the group
addresses to be sequential.
In this example, you create three groups and increase the group address by an increment of two for each
group.
1. On the DR, configure the group address increment by including the group-increment statement and
specifying the number by which the address should be incremented for each group. The increment is
specified in dotted decimal notation similar to an IPv4 address.
2. After you commit the configuration, use the show configuration protocol igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
group-increment 0.0.0.2;
group-count 3;
}
}
}
3. After you have committed the configuration and after the source is sending traffic, use the show
igmp group command to verify that static groups 233.252.0.1, 233.252.0.3, and 233.252.0.5 have
been created.
When you create IGMP static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, and your network is operating in source-specific multicast (SSM)
mode, you can also specify that the multicast source address be accepted. This is useful when you want
to test forwarding to multicast receivers from a specific multicast source.
If you specify a group address in the SSM range, you must also specify a source.
If a source address is specified in a multicast group that is statically configured, the IGMP version on the
interface must be set to IGMPv3. IGMPv2 is the default value.
In this example, you create group 233.252.0.1 and accept IP address 10.0.0.2 as the only source.
1. On the DR, configure the source address by including the source statement and specifying the IPv4
address of the source host.
2. After you commit the configuration, use the show configuration protocol igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
source 10.0.0.2;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show igmp
group command to verify that static group 233.252.0.1 has been created and that source 10.0.0.2
has been accepted.
When you create IGMP static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, you can specify that a number of multicast sources be
automatically accepted. This is useful when you want to test forwarding to multicast receivers from
more than one specified multicast source.
In this example, you create group 233.252.0.1 and accept addresses 10.0.0.2, 10.0.0.3, and 10.0.0.4 as
the sources.
1. On the DR, configure the number of multicast source addresses to be accepted by including the
source-count statement and specifying the number of sources to be accepted.
2. After you commit the configuration, use the show configuration protocol igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
source 10.0.0.2 {
source-count 3;
}
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show igmp
group command to verify that static group 233.252.0.1 has been created and that sources 10.0.0.2,
10.0.0.3, and 10.0.0.4 have been accepted.
Source: 10.0.0.3
Last reported by: Local
Timeout: 0 Type: Static
Group: 233.252.0.1
Source: 10.0.0.4
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic, and
specify that a number of multicast sources be automatically accepted, you can also specify the number
by which the address should be incremented for each source accepted. This is useful when you want to
test forwarding to multiple receivers without having to configure each receiver separately and you do
not want the source addresses to be sequential.
In this example, you create group 233.252.0.1 and accept addresses 10.0.0.2, 10.0.0.4, and 10.0.0.6 as
the sources.
1. Configure the multicast source address increment by including the source-increment statement and
specifying the number by which the address should be incremented for each source. The increment is
specified in dotted decimal notation similar to an IPv4 address.
2. After you commit the configuration, use the show configuration protocol igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
source 10.0.0.2 {
source-count 3;
source-increment 0.0.0.2;
}
}
}
}
3. After you have committed the configuration and after the source is sending traffic, use the show
igmp group command to verify that static group 233.252.0.1 has been created and that sources
10.0.0.2, 10.0.0.4, and 10.0.0.6 have been accepted.
When you configure static groups on an interface on which you want to receive multicast traffic and
your network is operating in source-specific multicast (SSM) mode, you can specify that certain
multicast source addresses be excluded.
By default, the multicast source address configured in a static group operates in include mode. In
include mode, the multicast traffic for the group is accepted from the source address configured. You
can also configure the static group to operate in exclude mode. In exclude mode, the multicast traffic
for the group is accepted from any address other than the source address configured.
If a source address is specified in a multicast group that is statically configured, the IGMP version on the
interface must be set to IGMPv3. IGMPv2 is the default value.
In this example, you exclude address 10.0.0.2 as a source for group 233.252.0.1.
1. On the DR, configure a multicast static group to operate in exclude mode by including the exclude
statement and specifying which IPv4 source address to exclude.
2. After you commit the configuration, use the show configuration protocol igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
exclude;
source 10.0.0.2;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show igmp
group detail command to verify that static group 233.252.0.1 has been created and that the static
group is operating in exclude mode.
1. Enable accounting globally or on an IGMP interface. This example shows both options.
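A sketch of the two options (the interface name is a placeholder):

[edit]
user@host# set protocols igmp accounting
user@host# set protocols igmp interface ge-0/0/0.0 accounting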
2. Configure the events to be recorded and filter the events to a system log file with a descriptive
filename, such as igmp-events.
3. You can monitor the system log file as entries are added to the file by running the monitor start and
monitor stop commands.
SEE ALSO
Understanding IGMP
Specifying Log File Size, Number, and Archiving Properties
When configuring limits for IGMP multicast groups, keep the following in mind:
• Each any-source group (*,G) counts as one group toward the limit.
• Each source-specific group (S,G) counts as one group toward the limit.
• Multiple source-specific groups count individually toward the group limit, even if they are for the
same group. For example, (S1, G1) and (S2, G1) would count as two groups toward the configured
limit.
• Combinations of any-source groups and source-specific groups count individually toward the group
limit, even if they are for the same group. For example, (*, G1) and (S, G1) would count as two groups
toward the configured limit.
• Configuring and committing a group limit that is lower than the number of groups that already exist
on the network results in the removal of all groups from the configuration. The groups must then
request to rejoin the network (up to the newly configured group limit).
• You can dynamically limit multicast groups on IGMP logical interfaces using dynamic profiles.
Starting in Junos OS Release 12.2, you can optionally configure a system log warning threshold for
IGMP multicast group joins received on the logical interface. It is helpful to review the system log
messages for troubleshooting purposes and to detect whether an excessive number of IGMP multicast
group joins have been received on the interface. These log messages convey when the configured group
limit has been exceeded, when the configured threshold has been exceeded, and when the number of
groups drops below the configured threshold.
The group-threshold statement enables you to configure the threshold at which a warning message is
logged. The range is 1 through 100 percent. The warning threshold is a percentage of the group limit, so
you must configure the group-limit statement to configure a warning threshold. For instance, when the
number of groups exceeds the configured warning threshold but remains below the configured group
limit, multicast groups continue to be accepted, and the device logs the warning message. In addition,
the device logs a warning message after the number of groups drops below the configured warning
threshold. You can further specify the amount of time (in seconds) between the log messages by
configuring the log-interval statement. The range is 6 through 32,767 seconds.
You might consider throttling log messages because every entry added after the configured threshold
and every entry rejected after the configured limit causes a warning message to be logged. By
configuring a log interval, you can throttle the amount of system log warning messages generated for
IGMP multicast group joins.
NOTE: On ACX Series routers, the maximum number of multicast routes is 1024.
[edit]
user@host# edit protocols igmp interface interface-name
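From there, a sketch of the limit and threshold settings might look as follows (the values are
illustrative only):

[edit protocols igmp interface interface-name]
user@host# set group-limit 100
user@host# set group-threshold 80
user@host# set log-interval 60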
To confirm your configuration, use the show protocols igmp command. To verify the operation of IGMP
on the interface, including the configured group limit and the optional warning threshold and interval
between log messages, use the show igmp interface command.
In the following example, tracing is enabled for all routing protocol packets. Then tracing is narrowed to
focus only on IGMP packets of a particular type. To configure tracing operations for IGMP:
1. (Optional) Configure tracing at the routing options level to trace all protocol packets.
2. Configure tracing flags. Suppose you are troubleshooting issues with a particular multicast group. The
following example shows how to flag all events for packets associated with the group IP address.
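As a rough sketch of the IGMP-level tracing configuration (the file name is a placeholder, and the
flags available can vary by release):

[edit protocols igmp]
user@host# set traceoptions file igmp-trace size 1m files 5
user@host# set traceoptions flag report
user@host# set traceoptions flag leave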
SEE ALSO
Understanding IGMP
Tracing and Logging Junos OS Operations
mtrace
Disabling IGMP
To disable IGMP on an interface, include the disable statement:
disable;
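For example (the interface name is a placeholder):

[edit]
user@host# set protocols igmp interface ge-0/0/0.0 disable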
SEE ALSO
Understanding IGMP
Configuring IGMP
Enabling IGMP
12.2: Starting in Junos OS Release 12.2, you can optionally configure a system log warning threshold
for IGMP multicast group joins received on the logical interface.
RELATED DOCUMENTATION
Configuring MLD | 60
IN THIS SECTION
Purpose | 59
Action | 59
Meaning | 59
Purpose
Action
From operational mode, enter the show igmp interface command.
Sample Output
show igmp interface
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Derived Parameters:
IGMP Membership Timeout: 260.0
IGMP Other Querier Present Timeout: 255.0
Meaning
The output shows a list of the interfaces that are configured for IGMP, along with the configured and
derived IGMP timer parameters.
Configuring MLD
IN THIS SECTION
Understanding MLD | 60
Configuring MLD | 64
Enabling MLD | 65
Disabling MLD | 91
Understanding MLD
The Multicast Listener Discovery (MLD) Protocol manages the membership of hosts and routers in
multicast groups. IP version 6 (IPv6) multicast routers use MLD to learn, for each of their attached
physical networks, which groups have interested listeners. Each routing device maintains a list of host
multicast addresses that have listeners for each subnetwork, as well as a timer for each address.
However, the routing device does not need to know the address of each listener—just the address of
each host. The routing device provides addresses to the multicast routing protocol it uses, which
ensures that multicast packets are delivered to all subnetworks where there are interested listeners. In
this way, MLD is used as the transport for the Protocol Independent Multicast (PIM) Protocol.
MLD is an integral part of IPv6 and must be enabled on all IPv6 routing devices and hosts that need to
receive IP multicast traffic. The Junos OS supports MLD versions 1 and 2. Version 2 is supported for
source-specific multicast (SSM) include and exclude modes.
In include mode, the receiver specifies the source or sources it is interested in receiving the multicast
group traffic from. Exclude mode works the opposite of include mode. It allows the receiver to specify
the source or sources it is not interested in receiving the multicast group traffic from.
For each attached network, a multicast routing device can be either a querier or a nonquerier. A querier
routing device, usually one per subnet, solicits group membership information by transmitting MLD
queries. When a host reports to the querier routing device that it has interested listeners, the querier
routing device forwards the membership information to the rendezvous point (RP) routing device by
means of the receiver's (host's) designated router (DR). This builds the rendezvous-point tree (RPT)
connecting the host with interested listeners to the RP routing device. The RPT is the initial path used
by the sender to transmit information to the interested listeners. Nonquerier routing devices do not
transmit MLD queries on a subnet but can do so if the querier routing device fails.
All MLD-configured routing devices start as querier routing devices on each attached subnet (see Figure
3 on page 61). The querier routing device on the right is the receiver's DR.
To elect the querier routing device, the routing devices exchange query messages containing their IPv6
source addresses. If a routing device hears a query message whose IPv6 source address is numerically
lower than its own selected address, it becomes a nonquerier. In Figure 4 on page 62, the routing
device on the left has a source address numerically lower than the one on the right and therefore
becomes the querier routing device.
NOTE: In the practical application of MLD, several routing devices on a subnet are nonqueriers.
If the elected querier routing device fails, query messages are exchanged among the remaining
routing devices. The routing device with the lowest IPv6 source address becomes the new
querier routing device. The IPv6 Neighbor Discovery Protocol (NDP) implementation drops
incoming Neighbor Announcement (NA) messages that have a broadcast or multicast address in
the target link-layer address option. This behavior is recommended by RFC 2461.
The querier routing device sends general MLD queries on the link-scope all-nodes multicast address
FF02::1 at short intervals to all attached subnets to solicit group membership information (see Figure 5
on page 62). Within the query message is the maximum response delay value, specifying the maximum
allowed delay for the host to respond with a report message.
If interested listeners are attached to the host receiving the query, the host sends a report containing
the host's IPv6 address to the routing device (see Figure 6 on page 63). If the reported address is not
yet in the routing device's list of multicast addresses with interested listeners, the address is added to
the list and a timer is set for the address. If the address is already on the list, the timer is reset. The
host's address is transmitted to the RP in the PIM domain.
If the host has no interested multicast listeners, it sends a done message to the querier routing device.
On receipt, the querier routing device issues a multicast address-specific query containing the last
listener query interval value to the multicast address of the host. If the routing device does not receive a
report from the multicast address, it removes the multicast address from the list and notifies the RP in
the PIM domain of its removal (see Figure 7 on page 63).
Figure 7: Host Has No Interested Receivers and Sends a Done Message to Routing Device
If a done message is not received by the querier routing device, the querier routing device continues to
send multicast address-specific queries. If the timer set for the address on receipt of the last report
expires, the querier routing device assumes there are no longer interested listeners on that subnet,
removes the multicast address from the list, and notifies the RP in the PIM domain of its removal (see
Figure 8 on page 64).
Figure 8: Host Address Timer Expires and Address Is Removed from Multicast Address List
SEE ALSO
Enabling MLD
Example: Recording MLD Join and Leave Events
Example: Modifying the MLD Robustness Variable
Configuring MLD
To configure the Multicast Listener Discovery (MLD) Protocol, include the mld statement:
mld {
accounting;
interface interface-name {
disable;
(accounting | no-accounting);
group-policy [ policy-names ];
immediate-leave;
oif-map [ map-names ];
passive;
ssm-map ssm-map-name;
static {
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
}
version version;
}
maximum-transmit-rate packets-per-second;
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
You can include this statement at the following hierarchy levels:
• [edit protocols]
By default, MLD is enabled on all broadcast interfaces when you configure Protocol Independent
Multicast (PIM) or the Distance Vector Multicast Routing Protocol (DVMRP).
Enabling MLD
The Multicast Listener Discovery (MLD) Protocol manages multicast groups by establishing, maintaining,
and removing groups on a subnet. Multicast routing devices use MLD to learn which groups have
members on each of their attached physical networks. MLD must be enabled for the router to receive
IPv6 multicast packets. MLD is only needed for IPv6 networks, because multicast is handled differently
in IPv4 networks. MLD is enabled on all IPv6 interfaces on which you configure PIM and on all IPv6
broadcast interfaces when you configure DVMRP.
MLD specifies different behaviors for multicast listeners and for routers. When a router is also a listener,
the router responds to its own messages. If a router has more than one interface to the same link, it
needs to perform the router behavior over only one of those interfaces. Listeners, on the other hand,
must perform the listener behavior on all interfaces connected to potential receivers of multicast traffic.
If MLD is not running on an interface—either because PIM and DVMRP are not configured on the
interface or because MLD is explicitly disabled on the interface—you can explicitly enable MLD.
1. If PIM and DVMRP are not running on the interface, explicitly enable MLD by including the interface
name.
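For example, a minimal sketch (the interface name here is a placeholder for one on your router):

[edit]
user@host# set protocols mld interface fe-0/0/0.0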
2. Check to see if MLD is disabled on any interfaces. In the following example, MLD is disabled on a
Gigabit Ethernet interface.
interface fe-0/0/0.0;
interface ge-0/0/0.0 {
disable;
}
3. Delete the disable statement to reenable MLD on the interface. The configuration then looks like this:
interface fe-0/0/0.0;
interface ge-0/0/0.0;
4. Verify the operation of MLD by checking the output of the show mld interface command.
SEE ALSO
Understanding MLD
Disabling MLD
If you configure the MLD version setting at the individual interface hierarchy level, it overrides the
version configured with the interface all statement.
If a source address is specified in a multicast group that is statically configured, the version must be set
to MLDv2.
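For example (the interface name is a placeholder):

[edit]
user@host# set protocols mld interface ge-0/0/0.0 version 2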
2. Verify the configuration by checking the version field in the output of the show mld interface
command. The show mld statistics command has version-specific output fields, such as the
counters in the MLD Message type field.
SEE ALSO
Understanding MLD
Source-Specific Multicast Groups Overview
Example: Configuring Source-Specific Multicast Groups with Any-Source Override | 458
Example: Configuring an SSM-Only Domain | 454
Example: Configuring PIM SSM on a Network | 452
Example: Configuring SSM Mapping | 455
The querier routing device sends general host-query messages on the link-scope all-nodes address
FF02::1. A general host-query message has a maximum response time that you can set by configuring the
query response interval.
The query response timeout, the query interval, and the robustness variable are related in that they are
all variables that are used to calculate the multicast listener interval. The multicast listener interval is the
number of seconds that must pass before a multicast router determines that no more members of a host
group exist on a subnet. The multicast listener interval is calculated as the (robustness variable x query-
interval) + (1 x query-response-interval). If no reports are received for a particular group before the
multicast listener interval has expired, the routing device stops forwarding remotely-originated multicast
packets for that group onto the attached network.
By default, host-query messages are sent every 125 seconds. You can change this interval to change the
number of MLD messages sent on the subnet.
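For example, to lengthen the interval to 200 seconds (the value is illustrative only):

[edit]
user@host# set protocols mld query-interval 200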
SEE ALSO
Understanding MLD
Modifying the MLD Query Response Interval
Example: Modifying the MLD Robustness Variable
show mld interface | 2230
show mld statistics | 2237
You can change the query response interval to adjust the burst peaks of MLD messages on the subnet.
Set a larger interval to make the traffic less bursty.
The query response timeout, the query interval, and the robustness variable are related in that they are
all variables that are used to calculate the multicast listener interval. The multicast listener interval is the
number of seconds that must pass before a multicast router determines that no more members of a host
group exist on a subnet. The multicast listener interval is calculated as the (robustness variable x query-
interval) + (1 x query-response-interval). If no reports are received for a particular group before the
multicast listener interval has expired, the routing device stops forwarding remotely-originated multicast
packets for that group onto the attached network.
The default query response interval is 10 seconds. You can configure a subsecond interval with up to
one digit to the right of the decimal point. The configurable range is 0.1 through 0.9 seconds, and
then, in 1-second increments, 1 through 999,999 seconds.
2. Verify the configuration by checking the MLD Query Response Interval field in the output of the
show mld interface command.
3. Verify the operation of the query interval by checking the Listener Query field in the output of the
show mld statistics command.
SEE ALSO
Understanding MLD
Modifying the MLD Host-Query Message Interval
Example: Modifying the MLD Robustness Variable
show mld interface | 2230
show mld statistics | 2237
When a host leaves a group, it sends an MLD done message on the link-scope all-routers address
FF02::2. You can lower the last-listener query interval to reduce the amount of time it takes a router
to detect the loss of the last member of a group.
When the routing device that is serving as the querier receives a leave-group (done) message from a
host, the routing device sends multiple group-specific queries to the group. The querier sends a specific
number of these queries, and it sends them at a specific interval. The number of queries sent is called
the last-listener query count. The interval at which the queries are sent is called the last-listener query
interval. Both settings are configurable, thus allowing you to adjust the leave latency. The MLD leave
latency is the time between a request to leave a multicast group and the receipt of the last byte of
data for the multicast group.
The last-listener query count multiplied by the last-listener query interval equals the amount of time
it takes a routing device to determine that the last member of a group has left the group and to stop
forwarding group traffic.
The default last-listener query interval is 1 second. You can configure a subsecond interval with up
to one digit to the right of the decimal point. The configurable range is 0.1 through 0.9 seconds, and
then, in 1-second increments, 1 through 999,999 seconds.
1. Configure the time (in seconds) that the routing device waits for a report in response to a group-
specific query.
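For example (the value is illustrative only):

[edit]
user@host# set protocols mld query-last-member-interval 0.5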
2. Verify the configuration by checking the MLD Last Member Query Interval field in the output of the
show mld interface command.
NOTE: You can configure the last-member query count by configuring the robustness variable.
The two are always equal.
SEE ALSO
Understanding MLD
Modifying the MLD Query Response Interval
Example: Modifying the MLD Robustness Variable
show mld interface | 2230
The immediate-leave setting enables host tracking, meaning that the device keeps track of the hosts that
send join messages. This allows MLD to determine when the last host sends a leave message for the
multicast group.
When the immediate leave setting is enabled, the device removes an interface from the forwarding-table
entry without first sending MLD group-specific queries to the interface. The interface is pruned from the
multicast tree for the multicast group specified in the MLD leave message. The immediate leave setting
ensures optimal bandwidth management for hosts on a switched network, even when multiple multicast
groups are being used simultaneously.
When immediate leave is disabled and one host sends a leave group message, the routing device first
sends a group query to determine if another receiver responds. If no receiver responds, the routing
device removes all hosts on the interface from the multicast group. Immediate leave is disabled by
default for both MLD version 1 and MLD version 2.
NOTE: Although host tracking is enabled for IGMPv2 and MLDv1 when you enable immediate
leave, use immediate leave with these versions only when there is one host on the interface. The
reason is that IGMPv2 and MLDv1 use a report suppression mechanism whereby only one host
on an interface sends a group join report in response to a membership query. The other
interested hosts suppress their reports. The purpose of this mechanism is to avoid a flood of
reports for the same group. But it also interferes with host tracking, because the router only
knows about the one interested host and does not know about the others.
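To enable the setting, you might enter the following (the interface name is a placeholder):

[edit]
user@host# set protocols mld interface ge-0/0/0.0 immediate-leave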
2. Verify the configuration by checking the Immediate Leave field in the output of the show mld
interface command.
SEE ALSO
Understanding MLD
When the group-policy statement is enabled on a router, after the router receives an MLD report, the
router compares the group against the specified group policy and performs the action configured in that
policy (for example, rejects the report if the policy matches the defined address or network).
You define the policy to match only MLD group addresses (for MLDv1) by using the policy's route-filter
statement to match the group address. You define the policy to match MLD (source, group) addresses
(for MLDv2) by using the policy's route-filter statement to match the group address and the policy's
source-address-filter statement to match the source address.
3. Apply the policies to the MLD interfaces where you prefer not to receive specific group or (source,
group) reports. In this example, ge-0/0/0.1 is running MLDv1 and ge-0/1/1.0 is running MLDv2.
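A sketch of that application, assuming a hypothetical policy named reject-mld-reports:

[edit]
user@host# set protocols mld interface ge-0/0/0.1 group-policy reject-mld-reports
user@host# set protocols mld interface ge-0/1/1.0 group-policy reject-mld-reports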
4. Verify the operation of the filter by checking the Rejected Report field in the output of the show mld
statistics command.
SEE ALSO
Understanding MLD
Routing Policies, Firewall Filters, and Traffic Policers User Guide
show mld statistics | 2237
IN THIS SECTION
Requirements | 73
Overview | 73
Configuration | 74
Verification | 75
This example shows how to configure and verify the MLD robustness variable in a multicast domain.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Enable IPv6 unicast routing. See the Junos OS Routing Protocols Library for Routing Devices.
Overview
The MLD robustness variable can be fine-tuned to allow for expected packet loss on a subnet.
Increasing the robust count allows for more packet loss but increases the leave latency of the
subnetwork.
The value of the robustness variable is used in calculating the following MLD message intervals:
• Group member interval—Amount of time that must pass before a multicast router determines that
there are no more members of a group on a network. This interval is calculated as follows:
(robustness variable x query-interval) + (1 x query-response-interval).
• Other querier present interval—Amount of time that must pass before a multicast router determines
that there is no longer another multicast router that is the querier. This interval is calculated as
follows: (robustness variable x query-interval) + (0.5 x query-response-interval).
• Last-member query count—Number of group-specific queries sent before the router assumes there
are no local members of a group. The default number is the value of the robustness variable.
By default, the robustness variable is set to 2. The number can be from 2 through 10. You might want to
increase this value if you expect a subnet to lose packets.
Configuration
IN THIS SECTION
Procedure | 74
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
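A minimal sketch of such a configuration (the value is illustrative only):

[edit]
user@host# set protocols mld robust-count 3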
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
Verification
To verify the configuration is working properly, check the MLD Robustness Count field in the output of
the show mld interface command.
SEE ALSO
Understanding MLD
Modifying the MLD Query Response Interval
Modifying the MLD Last-Member Query Interval
show mld interface | 2230
Increasing the maximum number of MLD packets transmitted per second might be useful on a router
with a large number of interfaces participating in MLD.
To change the limit for the maximum number of MLD packets the router can transmit in 1 second,
include the maximum-transmit-rate statement and specify the maximum number of packets per second
to be transmitted.
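For example (the rate shown is illustrative only):

[edit]
user@host# set protocols mld maximum-transmit-rate 20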
You can create MLD static group membership to test multicast forwarding without a receiver host.
When you enable MLD static group membership, data is forwarded to an interface without that
interface receiving membership reports from downstream hosts.
Class-of-service (CoS) adjustment is not supported with MLD static group membership.
When you configure static groups on an interface on which you want to receive multicast traffic, you
create each static group by specifying its multicast group address.
1. Configure the static groups to be created by including the static statement and the group statement
and specifying the IPv6 multicast address of the group to be created.
2. After you commit the configuration, use the show configuration protocol mld command to verify the
MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d;
}
}
3. After you have committed the configuration and after the source is sending traffic, use the show mld
group command to verify that static group ff0e::1:ff05:1a8d has been created.
When you create MLD static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, you can specify that a number of static groups be automatically
created. This is useful when you want to test forwarding to multiple receivers without having to
configure each receiver separately.
1. Configure the number of static groups to be created by including the group-count statement and
specifying the number of groups to be created.
2. After you commit the configuration, use the show configuration protocol mld command to verify the
MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
group-count 3;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static groups ff0e::1:ff05:1a8d, ff0e::1:ff05:1a8e, and ff0e::1:ff05:1a8f
have been created.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8e
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8f
Source: fe80::2e0:81ff:fe05:1a8d
When you configure static groups on an interface on which you want to receive multicast traffic and you
specify the number of static groups to be automatically created, you can also configure the group
address to be automatically incremented by some number of addresses.
In this example, you create three groups and increase the group address by an increment of two for each
group.
1. Configure the group address increment by including the group-increment statement and specifying
the number by which the address should be incremented for each group. The increment is specified
in a format similar to an IPv6 address.
2. After you commit the configuration, use the show configuration protocol mld command to verify the
MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
group-increment ::2;
group-count 3;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static groups ff0e::1:ff05:1a8d, ff0e::1:ff05:1a8f, and ff0e::1:ff05:1a91
have been created.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8f
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a91
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic and
your network is operating in source-specific multicast (SSM) mode, you can specify the multicast source
address to be accepted.
If you specify a group address in the SSM range, you must also specify a source.
If a source address is specified in a multicast group that is statically configured, the MLD version must be
set to MLDv2 on the interface. MLDv1 is the default value.
In this example, you create group ff0e::1:ff05:1a8d and accept IPv6 address fe80::2e0:81ff:fe05:1a8d as
the only source.
1. Configure the source address by including the source statement and specifying the IPv6 address of
the source host.
2. After you commit the configuration, use the show configuration protocol mld command to verify the
MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
source fe80::2e0:81ff:fe05:1a8d;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static group ff0e::1:ff05:1a8d has been created and that source
fe80::2e0:81ff:fe05:1a8d has been accepted.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic, you
can specify a number of multicast sources to be automatically accepted.
In this example, you create static group ff0e::1:ff05:1a8d and accept fe80::2e0:81ff:fe05:1a8d,
fe80::2e0:81ff:fe05:1a8e, and fe80::2e0:81ff:fe05:1a8f as the source addresses.
1. Configure the number of multicast source addresses to be accepted by including the source-count
statement and specifying the number of sources to be accepted.
2. After you commit the configuration, use the show configuration protocol mld command to verify the
MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
source fe80::2e0:81ff:fe05:1a8d {
source-count 3;
}
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static group ff0e::1:ff05:1a8d has been created and that sources
fe80::2e0:81ff:fe05:1a8d, fe80::2e0:81ff:fe05:1a8e, and fe80::2e0:81ff:fe05:1a8f have been
accepted.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8e
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8f
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic, and
specify a number of multicast sources to be automatically accepted, you can also specify the number by
which the address should be incremented for each source accepted.
In this example, you create static group ff0e::1:ff05:1a8d and accept fe80::2e0:81ff:fe05:1a8d,
fe80::2e0:81ff:fe05:1a8f, and fe80::2e0:81ff:fe05:1a91 as the sources.
1. Configure the source address increment by including the source-increment statement and specifying
the number by which the address should be incremented for each source accepted.
2. After you commit the configuration, use the show configuration protocol mld command to verify the
MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
source fe80::2e0:81ff:fe05:1a8d {
source-count 3;
source-increment ::2;
}
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static group ff0e::1:ff05:1a8d has been created and that sources
fe80::2e0:81ff:fe05:1a8d, fe80::2e0:81ff:fe05:1a8f, and fe80::2e0:81ff:fe05:1a91 have been
accepted.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8f
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a91
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Group mode: Include
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Group: ff0e::1:ff05:1a8d
Group mode: Include
Source: fe80::2e0:81ff:fe05:1a8f
Last reported by: Local
Timeout: 0 Type: Static
Group: ff0e::1:ff05:1a8d
Group mode: Include
Source: fe80::2e0:81ff:fe05:1a91
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic and
your network is operating in source-specific multicast (SSM) mode, you can specify that certain
multicast source addresses be excluded.
By default the multicast source address configured in a static group operates in include mode. In include
mode the multicast traffic for the group is accepted from the configured source address. You can also
configure the static group to operate in exclude mode. In exclude mode the multicast traffic for the
group is accepted from any address other than the configured source address.
If a source address is specified in a multicast group that is statically configured, the MLD version must be
set to MLDv2 on the interface. MLDv1 is the default value.
In this example, you exclude address fe80::2e0:81ff:fe05:1a8d as a source for group ff0e::1:ff05:1a8d.
1. Configure a multicast static group to operate in exclude mode by including the exclude statement and specifying the IPv6 source address to be excluded.
2. After you commit the configuration, use the show configuration protocol mld command to verify the
MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
exclude;
source fe80::2e0:81ff:fe05:1a8d;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group detail command to verify that static group ff0e::1:ff05:1a8d has been created and that the
static group is operating in exclude mode.
Similar configuration is available for IPv4 multicast traffic using the IGMP protocol.
IN THIS SECTION
Requirements | 86
Overview | 86
Configuration | 87
Verification | 89
This example shows how to determine whether MLD tuning is needed in a network by configuring the
routing device to record MLD join and leave events.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Enable IPv6 unicast routing. See the Junos OS Routing Protocols Library for Routing Devices.
Overview
Table 3 on page 86 describes the recordable MLD join and leave events.
Configuration
IN THIS SECTION
Procedure | 87
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. Enable accounting globally or on an MLD interface. This example shows the interface configuration.
2. Configure the events to be recorded, and filter the events to a system log file with a descriptive
filename, such as mld-events.
This example rotates the file every 24 hours (1440 minutes) when it reaches 100 KB and keeps three
files.
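The following minimal sketch illustrates one way to carry out steps 1 and 2. The interface name fe-0/1/2.0, the match string, and the archive settings are illustrative placeholders rather than the exact configuration from the original example; adjust them to your network.
protocols {
    mld {
        /* Step 1: record MLD join and leave events on this interface */
        interface fe-0/1/2.0 {
            accounting;
        }
    }
}
system {
    syslog {
        /* Step 2: filter the events to a descriptively named log file */
        file mld-events {
            any info;
            match MLD;
            archive {
                size 100k;
                files 3;
                transfer-interval 1440;
            }
        }
    }
}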
Verification
You can view the system log file by running the file show command.
You can monitor the system log file as entries are added to the file by running the monitor start and
monitor stop commands.
SEE ALSO
Understanding MLD | 60
When configuring limits for MLD multicast groups, keep the following in mind:
• Each any-source group (*,G) counts as one group toward the limit.
• Each source-specific group (S,G) counts as one group toward the limit.
• Multiple source-specific groups count individually toward the group limit, even if they are for the
same group. For example, (S1, G1) and (S2, G1) would count as two groups toward the configured
limit.
• Combinations of any-source groups and source-specific groups count individually toward the group
limit, even if they are for the same group. For example, (*, G1) and (S, G1) would count as two groups
toward the configured limit.
• Configuring and committing a group limit that is lower than the number of groups that already exist on the network results in the removal of all existing groups. Hosts must then rejoin the network (up to the newly configured group limit).
• You can dynamically limit multicast groups on MLD logical interfaces by using dynamic profiles. For
detailed information about creating dynamic profiles, see the Junos OS Subscriber Management and
Services Library .
Beginning with Junos OS 12.2, you can optionally configure a system log warning threshold for MLD
multicast group joins received on the logical interface. It is helpful to review the system log messages for
troubleshooting purposes and to detect if an excessive amount of MLD multicast group joins have been
received on the interface. These log messages convey when the configured group limit has been
exceeded, when the configured threshold has been exceeded, and when the number of groups drop
below the configured threshold.
The group-threshold statement enables you to configure the threshold at which a warning message is
logged. The range is 1 through 100 percent. The warning threshold is a percentage of the group limit, so
you must configure the group-limit statement to configure a warning threshold. For instance, when the number of groups exceeds the configured warning threshold but remains below the configured group limit, multicast groups continue to be accepted, and the device logs a warning message. The device also logs a warning message after the number of groups drops below the configured warning threshold.
You can further specify the amount of time (in seconds) between the log messages by configuring the
log-interval statement. The range is 6 through 32,767 seconds.
You might consider throttling log messages because every entry added after the configured threshold
and every entry rejected after the configured limit causes a warning message to be logged. By
configuring a log interval, you can throttle the amount of system log warning messages generated for
MLD multicast group joins.
[edit]
user@host# edit protocols mld interface interface-name
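From this hierarchy level, a minimal sketch of the statements described above, using placeholder values (a limit of 100 groups, an 80 percent warning threshold, and a 60-second log interval):
[edit protocols mld interface interface-name]
user@host# set group-limit 100
user@host# set group-threshold 80
user@host# set log-interval 60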
To confirm your configuration, use the show protocols mld command. To verify the operation of MLD on
the interface, including the configured group limit and the optional warning threshold and interval
between log messages, use the show mld interface command.
Disabling MLD
To disable MLD on an interface, include the disable statement:
interface interface-name {
disable;
}
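For example, in set form (the interface name is a placeholder):
[edit]
user@host# set protocols mld interface fe-0/1/2.0 disable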
SEE ALSO
Enabling MLD | 65
Release Description
12.2 Beginning with Junos OS 12.2, you can optionally configure a system log warning threshold for MLD
multicast group joins received on the logical interface.
RELATED DOCUMENTATION
Configuring IGMP | 25
By default, Internet Group Management Protocol (IGMP) processing takes place on the Routing Engine
for MX Series routers. This centralized architecture may lead to reduced performance in scaled
environments or when the Routing Engine undergoes CLI changes or route updates. You can improve
system performance for IGMP processing by enabling distributed IGMP, which utilizes the Packet
Forwarding Engine to maintain a higher system-wide processing rate for join and leave events.
Distributed IGMP works by moving IGMP processing from the Routing Engine to the Packet Forwarding
Engine. When distributed IGMP is not enabled, IGMP processing is centralized on the routing protocol
process (rpd) running on the Routing Engine. When you enable distributed IGMP, join and leave events
are processed across Modular Port Concentrators (MPCs) on the Packet Forwarding Engine. Because
join and leave processing is distributed across multiple MPCs instead of being processed through a
centralized rpd on the Routing Engine, performance improves and join and leave latency decreases.
When you enable distributed IGMP, each Packet Forwarding Engine processes reports and generates
queries, maintains local group membership to the interface mapping table and updates the forwarding
state based on this table, runs distributed IGMP independently, and implements the group-policy and
ssm-map-policy IGMP interface options.
NOTE: Information from group-policy and ssm-map-policy IGMP interface options passes from
the Routing Engine to the Packet Forwarding Engine.
When you enable distributed IGMP, the rpd on the Routing Engine synchronizes all IGMP configurations
(including global and interface-level configurations) from the rpd to each Packet Forwarding Engine, runs
passive IGMP on distributed interfaces, and notifies Protocol Independent Multicast (PIM) of all group
memberships per distributed IGMP interface.
Consider the following guidelines when you configure distributed IGMP on an MX Series router with
MPCs:
• Distributed IGMP increases network performance by reducing maximum join and leave latency and by increasing the rate at which join and leave events are processed.
NOTE: Join and leave latency may increase if multicast traffic is not preprovisioned and
destined for an MX Series router when a join or leave event is received from a client interface.
• Distributed IGMP is supported for Ethernet interfaces. It does not improve performance on PIM
interfaces.
• Starting in Junos OS release 18.2, distributed IGMP is supported on aggregated Ethernet interfaces,
and for enhanced subscriber management. As such, IGMP processing for subscriber flows is moved
from the Routing Engine to the Packet Forwarding Engine of supported line cards. Multicast groups
can be comprised of mixed receivers, that is, some centralized IGMP and some distributed IGMP.
• You can reduce initial join delays by enabling Protocol Independent Multicast (PIM) static joins or
IGMP static joins. You can reduce initial delays even more by preprovisioning multicast traffic. When
you preprovision multicast traffic, MPCs with distributed IGMP interfaces receive multicast traffic.
• For distributed IGMP to function properly, you must enable enhanced IP network services on a
single-chassis MX Series router. Virtual Chassis is not supported.
• When you enable distributed IGMP, the following interface options are not supported on the Packet
Forwarding Engine: oif-map, group-limit, ssm-map, and static. The traceoptions and accounting
statements can only be enabled for IGMP operations still performed on the Routing Engine; they are
not supported on the Packet Forwarding Engine. The clear igmp membership command is not
supported when distributed IGMP is enabled.
Release Description
18.2 Starting in Junos OS release 18.2, distributed IGMP is supported on aggregated Ethernet interfaces, and
for enhanced subscriber management. As such, IGMP processing for subscriber flows is moved from the
Routing Engine to the Packet Forwarding Engine of supported line cards. Multicast groups can be
comprised of mixed receivers, that is, some centralized IGMP and some distributed IGMP.
RELATED DOCUMENTATION
Understanding IGMP | 27
Junos OS Multicast Protocols User Guide
Configuring distributed IGMP improves performance by reducing join and leave latency. It works by moving IGMP processing from the Routing Engine to the Packet Forwarding Engine. In contrast to centralized IGMP processing on the Routing Engine, distributed IGMP spreads join and leave processing across multiple Modular Port Concentrators (MPCs).
You can enable distributed IGMP on static interfaces or dynamic interfaces. As a prerequisite, you must
enable enhanced IP network services on a single-chassis MX Series router.
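For illustration, a minimal sketch of that prerequisite and of enabling distributed IGMP on a static interface; the interface name ge-1/0/0.0 is a placeholder, and changing the network services mode can require a system reboot:
[edit]
user@host# set chassis network-services enhanced-ip
user@host# set protocols igmp interface ge-1/0/0.0 distributed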
Issuing the distributed keyword at this hierarchy level enables static joins for specific multicast (S,G)
groups and preprovisions all of them so that all distributed IGMP Packet Forwarding Engines receive
traffic.
Issuing the distributed keyword at this hierarchy level enables static joins for multicast (S,G) groups
so that all distributed IGMP Packet Forwarding Engines receive traffic and preprovisions a specific
multicast group address (G).
Issuing the distributed keyword at this hierarchy level enables static joins for multicast (S,G) groups
so that all Packet Forwarding Engines receive traffic, but preprovisions a specific multicast (S,G)
group.
2. (Optional) Enable static joins for specific (S,G) addresses and preprovision all of them so that all
distributed IGMP Packet Forwarding Engines receive traffic. In the example, multicast traffic for all of
the groups (225.0.0.1, 10.10.10.1), (225.0.0.1, 10.10.10.2), and (225.0.0.2, *) is preprovisioned.
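A sketch of one way step 2 might look, assuming the distributed statement is applied at the static hierarchy level as the preceding descriptions suggest (the interface name is a placeholder; the addresses mirror the example):
[edit protocols igmp interface ge-1/0/0.0]
user@host# set static distributed
user@host# set static group 225.0.0.1 source 10.10.10.1
user@host# set static group 225.0.0.1 source 10.10.10.2
user@host# set static group 225.0.0.2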
3. (Optional) Enable static joins for specific multicast (S,G) groups so that all distributed IGMP Packet
Forwarding Engines receive traffic and preprovision a specific multicast group address (G). In the
example, multicast traffic for groups (225.0.0.1, 10.10.10.1) and (225.0.0.1, 10.10.10.2) is
preprovisioned, but group (225.0.0.2, *) is not preprovisioned.
4. (Optional) Enable a static join for specific multicast (S,G) groups so that all Packet Forwarding Engines
receive traffic, but preprovision only one specific multicast address group. In the example, multicast
traffic for group (225.0.0.1, 10.10.10.1) is preprovisioned, but all other groups are not preprovisioned.
Internet Group Management Protocol (IGMP) snooping constrains the flooding of IPv4 multicast traffic
on VLANs on a device. With IGMP snooping enabled, the device monitors IGMP traffic on the network
and uses what it learns to forward multicast traffic to only the downstream interfaces that are
connected to interested receivers. The device conserves bandwidth by sending multicast traffic only to
interfaces connected to devices that want to receive the traffic, instead of flooding the traffic to all the
downstream interfaces in a VLAN.
Devices usually learn unicast MAC addresses by checking the source address field of the frames they
receive and then send any traffic for that unicast address only to the appropriate interfaces. However, a
multicast MAC address can never be the source address for a packet. As a result, when a device receives
traffic for a multicast destination address, it floods the traffic on the relevant VLAN, sending a significant
amount of traffic for which there might not necessarily be interested receivers.
IGMP snooping prevents this flooding. When you enable IGMP snooping, the device monitors IGMP
packets between receivers and multicast routers and uses the content of the packets to build a multicast
forwarding table—a database of multicast groups and the interfaces that are connected to members of
the groups. When the device receives multicast packets, it uses the multicast forwarding table to
selectively forward the traffic to only the interfaces that are connected to members of the appropriate
multicast groups.
On EX Series and QFX Series switches that do not support the Enhanced Layer 2 Software (ELS)
configuration style, IGMP snooping is enabled by default on all VLANs (or only on the default VLAN on
some devices) and you can disable it selectively on one or more VLANs. On all other devices, you must
explicitly configure IGMP snooping on a VLAN or in a bridge domain to enable it.
NOTE: You can’t configure IGMP snooping on a secondary (private) VLAN (PVLAN). However,
starting in Junos OS Release 18.3R1 on EX4300 switches and EX4300 Virtual Chassis, and Junos
OS Release 19.2R1 on EX4300 multigigabit switches, when you enable IGMP snooping on a
primary VLAN, you also implicitly enable it on any secondary VLANs defined for that primary
VLAN. See "IGMP Snooping on Private VLANs (PVLANs)" on page 104 for details.
The device can use a routed VLAN interface (RVI) to forward traffic between VLANs in its configuration.
IGMP snooping works with Layer 2 interfaces and RVIs to forward multicast traffic in a switched
network.
When the device receives a multicast packet, its Packet Forwarding Engines perform a multicast lookup
on the packet to determine how to forward the packet to its local interfaces. From the results of the
lookup, each Packet Forwarding Engine extracts a list of Layer 3 interfaces that have ports local to the
Packet Forwarding Engine. If the list includes an RVI, the device provides a bridge multicast group ID for
the RVI to the Packet Forwarding Engine.
For VLANs that include multicast receivers, the bridge multicast ID includes a sub-next-hop ID, which
identifies the Layer 2 interfaces in the VLAN that are interested in receiving the multicast stream. The
Packet Forwarding Engine then forwards multicast traffic to bridge multicast IDs that have multicast
receivers for a given multicast group.
Multicast routers use IGMP to learn which groups have interested listeners for each of their attached
physical networks. In any given subnet, one multicast router acts as an IGMP querier. The IGMP querier sends out the following types of queries to hosts:
• General query—Asks whether any host is listening to any multicast group. The querier sends general queries periodically to refresh the group membership state.
• Group-specific query—(IGMPv2 and IGMPv3 only) Asks whether any host is listening to a specific
multicast group. This query is sent in response to a host leaving the multicast group and allows the
router to quickly determine if any remaining hosts are interested in the group.
Hosts that are multicast listeners send the following kinds of messages:
• Membership report—Indicates that the host wants to join a particular multicast group.
• Leave report—(IGMPv2 and IGMPv3 only) Indicates that the host wants to leave a particular
multicast group.
Hosts can join a multicast group in either of two ways:
• By sending an unsolicited IGMP join message to a multicast router that specifies the IP multicast group the host wants to join.
• By sending an IGMP join message in response to a general query from a multicast router.
A multicast router continues to forward multicast traffic to a VLAN provided that at least one host on
that VLAN responds to the periodic general IGMP queries. For a host to remain a member of a multicast
group, it must continue to respond to the periodic general IGMP queries.
Hosts can leave a multicast group in either of two ways:
• By not responding to periodic queries within a particular interval of time, which is considered a “silent leave.” This is the only leave method for IGMPv1 hosts.
• By sending a leave report. This method can be used by IGMPv2 and IGMPv3 hosts.
In IGMPv3, a host can send a membership report that includes a list of source addresses. When the host
sends a membership report in INCLUDE mode, the host is interested in group multicast traffic only from
those sources in the source address list. If a host sends a membership report in EXCLUDE mode, the host
is interested in group multicast traffic from any source except the sources in the source address list. A
host can also send an EXCLUDE report in which the source-list parameter is empty, which is known as
an EXCLUDE NULL report. An EXCLUDE NULL report indicates that the host wants to join the multicast
group and receive packets from all sources.
Devices that support IGMPv3 process INCLUDE and EXCLUDE membership reports, and most devices
forward source-specific multicast (SSM) traffic only from requested sources to subscribed receivers
accordingly. However, the device might not strictly forward multicast traffic on a per-source basis in some configurations, such as on:
• EX Series and QFX Series switches that do not use the Enhanced Layer 2 Software (ELS)
configuration style
• EX4300 switches running Junos OS Releases prior to 18.2R1, 18.1R2, 17.4R2, 17.3R3, 17.2R3, and
14.1X53-D47
In these cases, the device might consolidate all INCLUDE and EXCLUDE mode reports it receives on a
VLAN for a specified group into a single route that includes all multicast sources for that group, with the
next hop representing all interfaces that have interested receivers for the group. As a result, interested
receivers on the VLAN can receive traffic from a source that they did not include in their INCLUDE
report or from a source they excluded in their EXCLUDE report. For example, if Host 1 wants traffic for
G from Source A and Host 2 wants traffic for group G from Source B, they both receive traffic for group
G regardless of whether A or B sends the traffic.
To determine how to forward multicast traffic, the device with IGMP snooping enabled maintains
information about the following interfaces in its multicast forwarding table:
• Multicast-router interfaces—These interfaces lead toward multicast routers or IGMP queriers.
• Group-member interfaces—These interfaces lead toward hosts that are members of multicast groups.
The device learns about these interfaces by monitoring IGMP traffic. If an interface receives IGMP queries or Protocol Independent Multicast (PIM) updates, the device adds the interface to its multicast forwarding table as a multicast-router interface. If an interface receives membership reports for a multicast group, the device adds the interface to its multicast forwarding table as a group-member interface.
Learned interface table entries age out after a time period. For example, if a learned multicast-router
interface does not receive IGMP queries or PIM hellos within a certain interval, the device removes the
entry for that interface from its multicast forwarding table.
NOTE: For the device to learn multicast-router interfaces and group-member interfaces, the
network must include an IGMP querier. This is often a multicast router, but if there is no
multicast router on the local network, you can configure the device itself to be an IGMP querier.
An interface in a VLAN with IGMP snooping enabled receives multicast traffic and forwards it according
to the following rules.
IGMP traffic:
• Forward IGMP general queries received on a multicast-router interface to all other interfaces in the
VLAN.
• Forward IGMP reports received on a host interface to multicast-router interfaces in the same VLAN,
but not to the other host interfaces in the VLAN.
Multicast data traffic:
• Flood multicast packets with a destination address in the link-local range 224.0.0.0/24 to all other interfaces on the VLAN.
• Forward unregistered multicast packets (packets for a group that has no current members) to all
multicast-router interfaces in the VLAN.
• Forward registered multicast packets to those host interfaces in the VLAN that are members of the
multicast group and to all multicast-router interfaces in the VLAN.
With IGMP snooping on a pure Layer 2 local network (that is, Layer 3 is not enabled on the network), if
the network doesn’t include a multicast router, multicast traffic might not be properly forwarded
through the network. You might see this problem if the local network is configured such that multicast
traffic must be forwarded between devices in order to reach a multicast receiver. In this case, an
upstream device does not forward multicast traffic to a downstream device (and therefore to the
multicast receivers attached to the downstream device) because the downstream device does not
forward IGMP reports to the upstream device. You can solve this problem by configuring one of the
devices to be an IGMP querier. The IGMP querier device sends periodic general query packets to all the
devices in the network, which ensures that the snooping membership tables are updated and prevents
multicast traffic loss.
If you configure multiple devices to be IGMP queriers, the device with the lowest (smallest) IGMP
querier source address takes precedence and acts as the querier. The devices with higher IGMP querier
source addresses stop sending IGMP queries unless they do not receive IGMP queries for 255 seconds.
If the device with a higher IGMP querier source address does not receive any IGMP queries during that
period, it starts sending queries again.
NOTE: QFabric systems support the igmp-querier statement in Junos OS Release 14.1X53-D15, but this statement is not supported in Junos OS Release 15.1.
To configure a switch to act as an IGMP querier on a VLAN, enter the following:
[edit protocols]
user@host# set igmp-snooping vlan vlan-name l2-querier source-address ip-address
To configure a QFabric Node device switch to act as an IGMP querier, enter the following:
[edit protocols]
user@host# set igmp-snooping vlan vlan-name igmp-querier source-address ip-address
A PVLAN consists of secondary isolated and community VLANs configured within a primary VLAN.
Without IGMP snooping support on the secondary VLANs, multicast streams received on the primary
VLAN are flooded to the secondary VLANs.
Starting in Junos OS Release 18.3R1, EX4300 switches and EX4300 Virtual Chassis support IGMP
snooping with PVLANs. Starting in Junos OS Release 19.2R1, EX4300 multigigabit model switches
support IGMP snooping with PVLANs. When you enable IGMP snooping on a primary VLAN, you also implicitly enable it on all secondary VLANs. The device learns and stores multicast group information
on the primary VLAN, and also learns the multicast group information on the secondary VLANs in the
context of the primary VLAN. As a result, the device further constrains multicast streams only to
interested receivers on secondary VLANs, rather than flooding the traffic in all secondary VLANs.
The CLI prevents you from explicitly configuring IGMP snooping on secondary isolated or community
VLANs. You only need to configure IGMP snooping on the primary VLAN under which the secondary
VLANs are defined. For example, for a primary VLAN vlan-pri with a secondary isolated VLAN vlan-iso
and a secondary community VLAN vlan-comm:
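You enable IGMP snooping only on the primary VLAN; snooping on vlan-iso and vlan-comm is implied. A minimal sketch, using the hypothetical VLAN names above:
[edit protocols]
user@host# set igmp-snooping vlan vlan-pri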
IGMP reports and leave messages received on secondary VLAN ports are learned in the context of the
primary VLAN. Promiscuous trunk ports or inter-switch links acting as multicast router interfaces for the
PVLAN receive incoming multicast data streams from multicast sources and forward them only to the
secondary VLAN ports with learned multicast group entries.
This feature does not support secondary VLAN ports as multicast router interfaces. The CLI does not
strictly prevent you from statically configuring an interface on a community VLAN as a multicast router
port, but IGMP snooping does not work properly on PVLANs with this configuration. When IGMP
snooping is configured on a PVLAN, the switch also automatically disables dynamic multicast router port
learning on any isolated or community VLAN interfaces. IGMP snooping with PVLANs also does not
support configurations with an IGMP querier on isolated or community VLAN interfaces.
See Understanding Private VLANs and Creating a Private VLAN Spanning Multiple EX Series Switches
with ELS Support (CLI Procedure) for details on configuring PVLANs.
Release Description
19.2R1 Starting in Junos OS Release 19.2R1, EX4300 multigigabit model switches support IGMP snooping
with PVLANs.
18.3R1 Starting in Junos OS Release 18.3R1, EX4300 switches and EX4300 Virtual Chassis support IGMP
snooping with PVLANs.
14.1X53-D15 QFabric systems in Junos OS Release 14.1X53-D15 support the igmp-querier statement, but do
not support this statement in Junos OS 15.1.
IN THIS SECTION
Supported IGMP or MLD Versions and Group Membership Report Modes | 108
Use Case 2: Inter-VLAN Multicast Routing and Forwarding—IRB Interfaces with PIM | 114
Use Case 3: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer 2 Connectivity | 118
Use Case 4: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer 3 Connectivity | 121
Use Case 5: Inter-VLAN Multicast Routing and Forwarding—External Multicast Router | 123
Internet Group Management Protocol (IGMP) snooping and Multicast Listener Discovery (MLD)
snooping constrain multicast traffic in a broadcast domain to interested receivers and multicast devices.
In an environment with a significant volume of multicast traffic, using IGMP or MLD snooping preserves
bandwidth because multicast traffic is forwarded only on those interfaces where there are multicast
listeners. IGMP snooping optimizes IPv4 multicast traffic flow. MLD snooping optimizes IPv6 multicast
traffic flow.
Starting with Junos OS Release 17.2R1, QFX10000 switches support IGMP snooping in an Ethernet
VPN (EVPN)-Virtual Extensible LAN (VXLAN) edge-routed bridging overlay (EVPN-VXLAN topology
with a collapsed IP fabric).
Starting with Junos OS Release 17.3R1, QFX10000 switches support the exchange of traffic between
multicast sources and receivers in an EVPN-VXLAN edge-routed bridging overlay, which uses IGMP, and
sources and receivers in an external Protocol Independent Multicast (PIM) domain. A Layer 2 multicast
VLAN (MVLAN) and associated IRB interfaces enable the exchange of multicast traffic between these
two domains.
IGMP snooping support in an EVPN-VXLAN network is available on the following switches in the
QFX5000 line. In releases up until Junos OS Releases 18.4R2 and 19.1R2, with IGMP snooping enabled,
these switches only constrain flooding for multicast traffic coming in on the VXLAN tunnel network
ports; they still flood multicast traffic coming in from an access interface to all other access and network
interfaces:
• Starting with Junos OS Release 18.1R1, QFX5110 switches support IGMP snooping in an EVPN-
VXLAN centrally-routed bridging overlay (EVPN-VXLAN topology with a two-layer IP fabric) for
forwarding multicast traffic within VLANs. You can’t configure IRB interfaces on a VXLAN with IGMP
snooping for forwarding multicast traffic between VLANs. (You can only configure and use IRB
interfaces for unicast traffic.)
• Starting with Junos OS Release 18.4R2 (but not Junos OS Releases 19.1R1 and 19.2R1),
QFX5120-48Y switches support IGMP snooping in an EVPN-VXLAN centrally-routed bridging
overlay.
• Starting with Junos OS Release 19.1R1, QFX5120-32C switches support IGMP snooping in EVPN-
VXLAN centrally-routed and edge-routed bridging overlays.
• Starting in Junos OS Releases 18.4R2 and 19.1R2, selective multicast forwarding is enabled by
default on QFX5110 and QFX5120 switches when you configure IGMP snooping in EVPN-VXLAN
networks, further constraining multicast traffic flooding. With IGMP snooping and selective multicast
forwarding, these switches send the multicast traffic only to interested receivers in both the EVPN
core and on the access side for multicast traffic coming in either from an access interface or an EVPN
network interface.
Starting with Junos OS Release 19.3R1, EX9200 switches, MX Series routers, and vMX virtual routers
support IGMP version 2 (IGMPv2) and IGMP version 3 (IGMPv3), IGMP snooping, selective multicast
forwarding, external PIM gateways, and external multicast routers with an EVPN-VXLAN centrally-
routed bridging overlay.
NOTE: Unless called out explicitly, the information in this topic applies to IGMPv2, IGMPv3, MLDv1, and MLDv2 on the devices that support these protocols in the following IP fabric architectures:
• EVPN-VXLAN centrally-routed bridging overlays
• EVPN-VXLAN edge-routed bridging overlays
NOTE: On a Juniper Networks switching device, for example, a QFX10000 switch, you can
configure a VLAN. On a Juniper Networks routing device, for example, an MX480 router, you can
configure the same entity, which is called a bridge domain. To keep things simple, this topic uses
the term VLAN when referring to the same entity configured on both Juniper Networks
switching and routing devices.
• In an environment with a significant volume of multicast traffic, using IGMP snooping or MLD
snooping constrains multicast traffic in a VLAN to interested receivers and multicast devices, which
conserves network bandwidth.
• Synchronizing the IGMP or MLD state among all EVPN devices for multihomed receivers ensures
that all subscribed listeners receive multicast traffic, even in cases such as the following:
• IGMP or MLD membership reports for a multicast group might arrive on an EVPN device that is
not the Ethernet segment’s designated forwarder (DF).
• An IGMP or MLD message to leave a multicast group arrives at a different EVPN device than the
EVPN device where the corresponding join message for the group was received.
• Selective multicast forwarding conserves bandwidth usage in the EVPN core and reduces the load on
egress EVPN devices that do not have listeners.
• The support of external PIM gateways enables the exchange of multicast traffic between sources and
listeners in an EVPN-VXLAN network and sources and listeners in an external PIM domain. Without
this support, the sources and listeners in these two domains would not be able to communicate.
Table 4 on page 109 outlines the supported IGMP versions and the membership report modes
supported for each version.
To explicitly configure EVPN devices to process only SSM (S,G) membership reports for IGMPv3 or
MLDv2, set the evpn-ssm-reports-only configuration option at the [edit protocols igmp-snooping vlan
vlan-name] hierarchy level.
You can enable SSM-only processing for one or more VLANs in an EVPN routing instance (EVI). When
enabling this option for a routing instance of type virtual switch, the behavior applies to all VLANs in the
virtual switch instance. When you enable this option, the device doesn’t process ASM reports and drops
them.
If you don’t configure the evpn-ssm-reports-only option, by default, EVPN devices process IGMPv2,
IGMPv3, MLDv1, or MLDv2 ASM reports and drop IGMPv3 or MLDv2 SSM reports.
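For example, to enable SSM-only processing on a single VLAN (vlan-name is a placeholder):
[edit protocols]
user@host# set igmp-snooping vlan vlan-name evpn-ssm-reports-only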
Table 5 on page 110 provides a summary of the multicast traffic forwarding and routing use cases that
we support in EVPN-VXLAN networks and our recommendation for when you should apply a use case
to your EVPN-VXLAN network.
Table 5: Supported Multicast Traffic Forwarding and Routing Use Cases and Recommended Usage
For example, in a typical EVPN-VXLAN edge-routed bridging overlay, you can implement use case 1 for
intra-VLAN forwarding and use case 2 for inter-VLAN routing and forwarding. Or, if you want an
external multicast router to handle inter-VLAN routing in your EVPN-VXLAN network instead of EVPN
devices with IRB interfaces running PIM, you can implement use case 5 instead of use case 2. If there
are hosts in an existing external PIM domain that you want hosts in your EVPN-VXLAN network to
communicate with, you can also implement use case 3.
When implementing any of the use cases in an EVPN-VXLAN centrally-routed bridging overlay, you can
use a mix of spine devices—for example, MX Series routers, EX9200 switches, and QFX10000 switches.
However, if you do this, keep in mind that the functionality of all spine devices is determined by the
limitations of each spine device. For example, QFX10000 switches support a single routing instance of
type virtual-switch. Although MX Series routers and EX9200 switches support multiple routing
instances of type evpn or virtual-switch, on each of these devices, you would have to configure a single
routing instance of type virtual-switch to interoperate with the QFX10000 switches.
Use Case 1: Intra-VLAN Multicast Forwarding
This use case supports the forwarding of multicast traffic to hosts within the same VLAN and includes the following key features:
• Hosts that are single-homed to an EVPN device or multihomed to more than one EVPN device in all-
active mode.
NOTE: EVPN-VXLAN multicast uses special IGMP and MLD group leave processing to handle
multihomed sources and receivers, so we don’t support the immediate-leave configuration
option in the [edit protocols igmp-snooping] or [edit protocols mld-snooping] hierarchies in
EVPN-VXLAN networks.
• Routing instances:
• (MX Series routers, vMX virtual routers, and EX9200 switches) Multiple routing instances of type
evpn or virtual-switch.
• EVI route target extended community attributes associated with multihomed EVIs. BGP EVPN
Type 7 (Join Sync Route) and Type 8 (Leave Sync Route) routes carry these attributes to
enable the simultaneous support of multiple EVPN routing instances.
For information about another supported extended community, see the “EVPN Multicast Flags
Extended Community” section.
• IGMPv2, IGMPv3, MLDv1 or MLDv2. For information about the membership report modes
supported for each IGMP or MLD version, see Table 4 on page 109. For information about IGMP or
MLD route synchronization between multihomed EVPN devices, see Overview of Multicast
Forwarding with IGMP or MLD Snooping in an EVPN-MPLS Environment.
• IGMP snooping or MLD snooping. Hosts in a network send IGMP reports (for IPv4 traffic) or MLD
reports (for IPv6 traffic) expressing interest in particular multicast groups from multicast sources.
EVPN devices with IGMP snooping or MLD snooping enabled listen to the IGMP or MLD reports,
and use the snooped information on the access side to establish multicast routes that only forward
traffic for a multicast group to interested receivers.
IGMP snooping or MLD snooping supports multicast senders and receivers in the same or different
sites. A site can have either receivers only, sources only, or both senders and receivers attached to it.
• Selective multicast forwarding (advertising EVPN Type 6 Selective Multicast Ethernet Tag (SMET)
routes for forwarding only to interested receivers). This feature enables EVPN devices to selectively
forward multicast traffic to only the devices in the EVPN core that have expressed interest in that
multicast group.
NOTE: We support selective multicast forwarding to devices in the EVPN core only in EVPN-
VXLAN centrally-routed bridging overlays.
When you enable IGMP snooping or MLD snooping, selective multicast forwarding is enabled
by default.
• EVPN devices that do not support IGMP snooping, MLD snooping, and selective multicast
forwarding.
Although you can implement this use case in an EVPN single-homed environment, this use case is
particularly effective in an EVPN multihomed environment with a high volume of multicast traffic.
All multihomed interfaces must have the same configuration, and all multihomed peer EVPN devices
must be in active mode (not standby or passive mode).
An EVPN device that initially receives traffic from a multicast source is known as the ingress device. The
ingress device handles the forwarding of intra-VLAN multicast traffic as follows:
• With IGMP snooping or MLD snooping enabled (which also enable selective multicast forwarding on
supporting devices):
• As shown in Figure 9 on page 113, the ingress device (leaf 1) selectively forwards the traffic to
other EVPN devices with access interfaces where there are interested receivers for the same
multicast group.
• The traffic is then selectively forwarded to egress devices in the EVPN core that have advertised
the EVPN Type 6 SMET routes.
• If any EVPN devices do not support IGMP snooping or MLD snooping, or the ability to originate
EVPN Type 6 SMET routes, the ingress device floods multicast traffic to these devices.
• If a host is multihomed to more than one EVPN device, the EVPN devices exchange EVPN Type 7
and Type 8 routes as shown in Figure 9 on page 113. This exchange synchronizes IGMP or MLD
membership reports received on multihomed interfaces to coordinate status from messages that go
to different EVPN devices or in case one of the EVPN devices fails.
NOTE: The EVPN Type 7 and Type 8 routes carry EVI route extended community attributes
to ensure the right EVPN instance gets the IGMP state information on devices with multiple
routing instances. QFX Series switches support IGMP snooping only in the default EVPN
routing instance (default-switch). In Junos OS releases before 17.4R2, 17.3R3, or 18.1R1,
these switches did not include EVI route extended community attributes in Type 7 and Type 8
routes, so they don’t properly synchronize the IGMP state if you also have other routing
instances configured. Starting in Junos OS releases 17.4R2, 17.3R3, and 18.1R1, QFX10000
switches include the EVI route extended community attributes that identify the target routing
instance, and can synchronize IGMP state if IGMP snooping is enabled in the default EVPN
routing instance when other routing instances are configured.
In releases that support MLD and MLD snooping in EVPN-VXLAN fabrics with multihoming,
the same behavior applies to synchronizing the MLD state.
Figure 9: Intra-VLAN Multicast Traffic Flow with IGMP Snooping and Selective Multicast Forwarding
If you have configured IRB interfaces with PIM on one or more of the Layer 3 devices in your EVPN-
VXLAN network (use case 2), note that the ingress device forwards the multicast traffic to the Layer 3
devices. The ingress device takes this action to register itself with the Layer 3 device that acts as the
PIM rendezvous point (RP).
Use Case 2: Inter-VLAN Multicast Routing and Forwarding—IRB Interfaces with PIM
We recommend this basic use case for all EVPN-VXLAN networks except when you prefer to use an
external multicast router to handle inter-VLAN routing (see Use Case 5: Inter-VLAN Multicast Routing
and Forwarding—External Multicast Router).
For this use case, IRB interfaces using Protocol Independent Multicast (PIM) route multicast traffic
between source and receiver VLANs. The EVPN devices on which the IRB interfaces reside then forward
the routed traffic using these key features:
The default behavior of inclusive multicast forwarding is to replicate multicast traffic and flood the
traffic to all devices. For this use case, however, we support inclusive multicast forwarding coupled with
IGMP snooping (or MLD snooping) and selective multicast forwarding. As a result, the multicast traffic is
replicated but selectively forwarded to access interfaces and devices in the EVPN core that have
interested receivers.
For information about the EVPN multicast flags extended community, which Juniper Networks devices
that support EVPN and IGMP snooping (or MLD snooping) include in EVPN Type 3 (Inclusive Multicast
Ethernet Tag) routes, see the “EVPN Multicast Flags Extended Community” section.
In an EVPN-VXLAN centrally-routed bridging overlay, you can configure the spine devices so that some
of them perform inter-VLAN routing and forwarding of multicast traffic and some do not. At a minimum,
we recommend that you configure two spine devices to perform inter-VLAN routing and forwarding.
When there are multiple devices that can perform the inter-VLAN routing and forwarding of multicast
traffic, one device is elected as the designated router (DR) for each VLAN.
In the sample EVPN-VXLAN centrally-routed bridging overlay shown in Figure 10 on page 115, assume
that multicast traffic needs to be routed from source VLAN 100 to receiver VLAN 101. Receiver VLAN
101 is configured on spine 1, which is designated as the DR for that VLAN.
Figure 10: Inter-VLAN Multicast Traffic Flow with IRB Interface and PIM
After the inter-VLAN routing occurs, the EVPN device forwards the routed traffic to:
• Access interfaces that have multicast listeners (IGMP snooping or MLD snooping).
• Egress devices in the EVPN core that have sent EVPN Type 6 SMET routes for the multicast group members in receiver VLAN 101 (selective multicast forwarding).
To understand how IGMP snooping (or MLD snooping) and selective multicast forwarding reduce the
impact of the replicating and flooding behavior of inclusive multicast forwarding, assume that an EVPN-
VXLAN centrally-routed bridging overlay includes the following elements:
• 100 IRB interfaces using PIM starting with irb.1 and going up to irb.100
• 100 VLANs
• 20 EVPN devices
For the sample EVPN-VXLAN centrally-routed bridging overlay, m represents the number of VLANs, and n represents the number of EVPN devices. Assuming that IGMP snooping (or MLD snooping) and selective multicast forwarding are disabled, when multicast traffic arrives on irb.1, the EVPN device replicates the traffic m * n times, or 100 * 20 = 2,000 times. If the incoming traffic rate for a particular multicast group is 100 packets per second (pps), the EVPN device would have to replicate 200,000 pps for that multicast group.
If IGMP snooping (or MLD snooping) and selective multicast forwarding are enabled in the sample EVPN-VXLAN centrally-routed bridging overlay, assume that there are interested receivers for a particular multicast group on only 4 VLANs and 3 EVPN devices. In this case, the EVPN device replicates the traffic at a rate of 100 pps * 4 * 3, which equals 1,200 pps. Note the significant reduction in the replication rate and the amount of traffic that must be forwarded.
When implementing this use case, keep in mind that there are important differences for EVPN-VXLAN
centrally-routed bridging overlays and EVPN-VXLAN edge-routed bridging overlays. Table 6 on page
116 outlines these differences.
Table 6: Use Case 2: Important Differences for EVPN-VXLAN Edge-Routed and Centrally-Routed Bridging Overlays
The table compares the two EVPN-VXLAN IP fabric architectures across these dimensions: support for a mix of Juniper Networks devices, whether all EVPN devices are required to host all VLANs in the EVPN-VXLAN network, whether all EVPN devices are required to host all VLANs that include multicast listeners, and the required PIM configuration.
In addition to the differences described in Table 6 on page 116, a hairpinning issue exists with an EVPN-VXLAN centrally-routed bridging overlay. Multicast traffic typically flows from a source host to a leaf device to a spine device, which handles the inter-VLAN routing. The spine device then replicates and forwards the traffic to VLANs and EVPN devices with multicast listeners. When forwarding the traffic in this type of EVPN-VXLAN overlay, be aware that the spine device returns the traffic to the leaf device from which the traffic originated (hairpinning). This issue is inherent in the design of the EVPN-VXLAN centrally-routed bridging overlay. When designing your EVPN-VXLAN overlay, keep this issue in
mind especially if you expect the volume of multicast traffic in your overlay to be high and the
replication rate of traffic (m * n times) to be large.
Use Case 3: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer
2 Connectivity
We recommend the PIM gateway with Layer 2 connectivity use case for both EVPN-VXLAN edge-
routed bridging overlays and EVPN-VXLAN centrally-routed bridging overlays.
• There are multicast sources and receivers within the data center that you want to communicate with
multicast sources and receivers in an external PIM domain.
NOTE: We support this use case with both EVPN-VXLAN edge-routed bridging overlays and
EVPN-VXLAN centrally-routed bridging overlays.
The use case provides a mechanism for the data center, which uses IGMP (or MLD) and PIM, to exchange multicast traffic with the external PIM domain. Using a Layer 2 multicast VLAN (MVLAN) and associated IRB interfaces on the EVPN devices in the data center to connect to the PIM domain, you can enable the forwarding of multicast traffic from internal sources to external receivers and from external sources to internal receivers.
NOTE: In this section, external refers to components in the PIM domain. Internal refers to
components in your EVPN-VXLAN network that supports a data center.
Figure 11 on page 119 shows the required key components for this use case in a sample EVPN-VXLAN
centrally-routed bridging overlay.
Figure 11: Use Case 3: PIM Gateway with Layer 2 Connectivity—Key Components
• A PIM gateway that acts as an interface between an existing PIM domain and the EVPN-VXLAN
network. The PIM gateway is a Juniper Networks or third-party Layer 3 device on which PIM and
a routing protocol such as OSPF are configured. The PIM gateway does not run EVPN. You can
connect the PIM gateway to one, some, or all EVPN devices.
• A PIM rendezvous point (RP) is a Juniper Networks or third-party Layer 3 device on which PIM
and a routing protocol such as OSPF are configured. You must also configure the PIM RP to
translate PIM join or prune messages into corresponding IGMP (or MLD) report or leave messages, and then forward the reports and messages to the PIM gateway.
NOTE: These components are in addition to the components already configured for use cases
1 and 2.
• EVPN devices. For redundancy, we recommend multihoming the EVPN devices to the PIM
gateway through an aggregated Ethernet interface on which you configure an Ethernet segment
identifier (ESI). On each EVPN device, you must also configure the following for this use case:
• A Layer 2 multicast VLAN (MVLAN). The MVLAN is a VLAN that is used to connect the PIM
gateway. In the MVLAN, PIM is enabled.
• An MVLAN IRB interface on which you configure PIM, IGMP snooping (or MLD snooping), and
a routing protocol such as OSPF. To reach the PIM gateway, the EVPN device forwards
multicast traffic out of this interface.
• To enable the EVPN devices to forward multicast traffic to the external PIM domain, configure:
• PIM-to-IGMP translation. For EVPN-VXLAN edge-routed bridging overlays, include the pim-to-igmp-proxy upstream-interface irb-interface-name statement (or, for IPv6 multicast traffic, the pim-to-mld-proxy upstream-interface irb-interface-name statement) at the [edit routing-options multicast] hierarchy level.
For EVPN-VXLAN centrally-routed bridging overlays, you do not need to include the pim-to-igmp-proxy or pim-to-mld-proxy statements. In this type of overlay, the PIM protocol handles the routing of multicast traffic from the PIM domain to the EVPN-VXLAN network and vice versa.
• PIM passive mode. For EVPN-VXLAN edge-routed bridging overlays only, you must ensure that
the PIM gateway views the data center as only a Layer 2 multicast domain. To do so, include the
passive configuration statement at the [edit protocols pim] hierarchy level.
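For an edge-routed bridging overlay, the two items above might combine into the following minimal sketch; irb.100 stands in for the MVLAN IRB interface and is a placeholder:
[edit]
user@host# set routing-options multicast pim-to-igmp-proxy upstream-interface irb.100
user@host# set protocols pim passive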
Use Case 4: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer
3 Connectivity
We recommend the PIM gateway with Layer 3 connectivity use case for EVPN-VXLAN centrally-routed
bridging overlays only.
• There are multicast sources and receivers within the data center that you want to communicate with
multicast sources and receivers in an external PIM domain.
This use case provides a mechanism for the data center, which uses IGMP (or MLD) and PIM, to exchange multicast traffic with the external PIM domain. Using Layer 3 interfaces on the EVPN devices in the data center to connect to the PIM domain, you can enable the forwarding of multicast traffic from internal sources to external receivers and from external sources to internal receivers.
NOTE: In this section, external refers to components in the PIM domains. Internal refers to
components in your EVPN-VXLAN network that supports a data center.
Figure 12 on page 122 shows the required key components for this use case in a sample EVPN-VXLAN
centrally-routed bridging overlay.
Figure 12: Use Case 4: PIM Gateway with Layer 3 Connectivity—Key Components
• A PIM gateway that acts as an interface between an existing PIM domain and the EVPN-VXLAN
network. The PIM gateway is a Juniper Networks or third-party Layer 3 device on which PIM and
a routing protocol such as OSPF are configured. The PIM gateway does not run EVPN. You can
connect the PIM gateway to one, some, or all EVPN devices.
• A PIM rendezvous point (RP) is a Juniper Networks or third-party Layer 3 device on which PIM
and a routing protocol such as OSPF are configured. You must also configure the PIM RP to
translate PIM join or prune messages into corresponding IGMP or MLD report or leave messages, and then forward the reports and messages to the PIM gateway.
NOTE: These components are in addition to the components already configured for use cases
1 and 2.
• EVPN devices. You can connect one, some, or all EVPN devices to a PIM gateway. You must make
each connection through a Layer 3 interface on which PIM is configured. Other than the Layer 3
interface with PIM, this use case does not require additional configuration on the EVPN devices.
Use Case 5: Inter-VLAN Multicast Routing and Forwarding—External Multicast Router
Starting with Junos OS Release 17.3R1, you can configure an EVPN device to perform inter-VLAN
forwarding of multicast traffic without having to configure IRB interfaces on the EVPN device. In such a
scenario, an external multicast router is used to send IGMP or MLD queries to solicit reports and to
forward VLAN traffic through a Layer 3 multicast protocol such as PIM. IRB interfaces are not supported
with the use of an external multicast router.
For this use case, you must include the proxy configuration statement at the [edit routing-instances routing-instance-name protocols igmp-snooping vlan vlan-name] hierarchy level or, for IPv6 multicast traffic, at the [edit routing-instances routing-instance-name protocols mld-snooping vlan vlan-name] hierarchy level.
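For example, a minimal sketch with placeholder routing-instance and VLAN names:
[edit]
user@host# set routing-instances evpn-vs-1 protocols igmp-snooping vlan vlan-name proxy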
Juniper Networks devices that support EVPN-VXLAN and IGMP snooping also support the EVPN
multicast flags extended community. When you have enabled IGMP snooping on one of these devices,
the device adds the community to EVPN Type 3 (Inclusive Multicast Ethernet Tag) routes.
The absence of this community in an EVPN Type 3 route can indicate the following about the device
that advertises the route:
• The device is running a Junos OS software release that doesn’t support the community.
• The device does not support the advertising of EVPN Type 6 SMET routes.
• The device has IGMP snooping and a Layer 3 interface with PIM enabled on it. Although the Layer 3
interface with PIM performs snooping on the access side and selective multicast forwarding on the
EVPN core, the device needs to attract all traffic to perform source registration to the PIM RP and
inter-VLAN routing.
The behavior described above also applies to devices that support EVPN-VXLAN with MLD and MLD
snooping.
Figure 13 on page 124 shows the EVPN multicast flag extended community, which has the following
characteristics:
• The IGMP Proxy Support flag is set to 1, which means that the device supports IGMP proxy.
The same applies to the MLD Proxy Support flag; if that flag is set to 1, the device supports MLD
proxy. Either or both flags might be set.
Release Description
20.4R1 Starting in Junos OS Releases 20.4R1, in EVPN-VXLAN centrally-routed bridging overlay fabrics,
QFX5110, QFX5120 and the QFX10000 line of switches support IGMPv3 with IGMP snooping for IPv4
multicast traffic, and MLD version 1 (MLDv1) and MLD version 2 (MLDv2) with MLD snooping for IPv6
multicast traffic.
19.3R1 Starting with Junos OS Release 19.3R1, EX9200 switches, MX Series routers, and vMX virtual routers
support IGMP version 2 (IGMPv2) and IGMP version 3 (IGMPv3), IGMP snooping, selective multicast
forwarding, external PIM gateways, and external multicast routers with an EVPN-VXLAN centrally-
routed bridging overlay.
19.1R1 Starting with Junos OS Release 19.1R1, QFX5120-32C switches support IGMP snooping in EVPN-
VXLAN centrally-routed and edge-routed bridging overlays.
18.4R2 Starting with Junos OS Release 18.4R2 (but not Junos OS Releases 19.1R1 and 19.2R1), QFX5120-48Y
switches support IGMP snooping in an EVPN-VXLAN centrally-routed bridging overlay.
18.4R2 Starting in Junos OS Releases 18.4R2 and 19.1R2, selective multicast forwarding is enabled by default
on QFX5110 and QFX5120 switches when you configure IGMP snooping in EVPN-VXLAN networks,
further constraining multicast traffic flooding. With IGMP snooping and selective multicast forwarding,
these switches send the multicast traffic only to interested receivers in both the EVPN core and on the
access side for multicast traffic coming in either from an access interface or an EVPN network interface.
18.1R1 Starting with Junos OS Release 18.1R1, QFX5110 switches support IGMP snooping in an EVPN-VXLAN
centrally-routed bridging overlay (EVPN-VXLAN topology with a two-layer IP fabric) for forwarding
multicast traffic within VLANs.
17.3R1 Starting with Junos OS Release 17.3R1, QFX10000 switches support the exchange of traffic between
multicast sources and receivers in an EVPN-VXLAN edge-routed bridging overlay, which uses IGMP, and
sources and receivers in an external Protocol Independent Multicast (PIM) domain. A Layer 2 multicast
VLAN (MVLAN) and associated IRB interfaces enable the exchange of multicast traffic between these
two domains.
17.3R1 Starting with Junos OS Release 17.3R1, you can configure an EVPN device to perform inter-VLAN
forwarding of multicast traffic without having to configure IRB interfaces on the EVPN device.
17.2R1 Starting with Junos OS Release 17.2R1, QFX10000 switches support IGMP snooping in an Ethernet
VPN (EVPN)-Virtual Extensible LAN (VXLAN) edge-routed bridging overlay (EVPN-VXLAN topology
with a collapsed IP fabric).
RELATED DOCUMENTATION
distributed-dr
igmp-snooping
mld-snooping
multicast-router-interface
Example: Preserving Bandwidth with IGMP Snooping in an EVPN-VXLAN Environment
Internet Group Management Protocol (IGMP) snooping constrains the flooding of IPv4 multicast traffic
on VLANs on a device. With IGMP snooping enabled, the device monitors IGMP traffic on the network
and uses what it learns to forward multicast traffic to only the downstream interfaces that are
connected to interested receivers. The device conserves bandwidth by sending multicast traffic only to
interfaces connected to devices that want to receive the traffic, instead of flooding the traffic to all the
downstream interfaces in a VLAN.
NOTE: You cannot configure IGMP snooping on a secondary (private) VLAN (PVLAN). However,
starting in Junos OS Release 18.3R1 on EX4300 switches and EX4300 Virtual Chassis, and Junos
OS Release 19.2R1 on EX4300 multigigabit switches, you can configure the vlan statement at
the [edit protocols igmp-snooping] hierarchy level with a primary VLAN, which implicitly enables
IGMP snooping on its secondary VLANs and avoids flooding multicast traffic on PVLANs. See
"IGMP Snooping on Private VLANs (PVLANs)" on page 98 for details.
NOTE: Starting in Junos OS Releases 14.1X53 and 15.2, QFabric Systems support the igmp-
querier statement to configure a Node device as an IGMP querier.
The default factory configuration on legacy EX Series switches enables IGMP snooping by default on all
VLANs. In this case, you don’t need any other configuration for IGMP snooping to work. However, if you
want IGMP snooping enabled only on some VLANs, you can either disable the feature on all VLANs and
then enable it selectively on the desired VLANs, or simply disable the feature selectively on those where
you do not want IGMP snooping. You can also customize other available IGMP snooping options.
TIP: When you configure IGMP snooping using the vlan all statement (where supported), any
VLAN that is not individually configured for IGMP snooping inherits the vlan all configuration.
Any VLAN that is individually configured for IGMP snooping, on the other hand, does not inherit
the vlan all configuration. Any parameters that are not explicitly defined for the individual VLAN
assume their default values, not the values specified in the vlan all configuration. For example, in
the following configuration:
protocols {
    igmp-snooping {
        vlan all {
            robust-count 8;
        }
        vlan employee-vlan {
            interface ge-0/0/8.0 {
                static {
                    group 233.252.0.1;
                }
            }
        }
    }
}
all VLANs except employee-vlan have a robust count of 8. Because you individually configured
employee-vlan, its robust count value is not determined by the value set under vlan all. Instead,
its robust-count value is 2, the default value.
On switches without IGMP snooping enabled in the default factory configuration, you must explicitly
enable IGMP snooping and configure any other of the available IGMP snooping options you want on a
VLAN.
Use the following configuration steps as needed for your network to enable IGMP snooping on all
VLANs (where supported), enable or disable IGMP snooping selectively on a VLAN, and configure
available IGMP snooping options:
1. To enable IGMP snooping on all VLANs (where supported, such as on some EX Series switches):
[edit protocols]
user@switch# set igmp-snooping vlan all
NOTE: The default factory configuration on legacy EX Series switches has IGMP snooping
enabled on all VLANs.
Or disable IGMP snooping on all VLANs (where supported, such as on some EX Series switches):
[edit protocols]
user@switch# set igmp-snooping vlan all disable
2. To enable IGMP snooping on a specified VLAN, for example, on a VLAN named employee-vlan:
[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan
3. To configure the switch to immediately remove group memberships from interfaces on a VLAN when
it receives a leave message through that VLAN, so it doesn’t forward any membership queries for the
multicast group to the VLAN (IGMPv2 only):
[edit protocols]
user@switch# set igmp-snooping vlan vlan-name immediate-leave
4. To configure an interface on a VLAN as a static member of a multicast group:

[edit protocols]
user@switch# set igmp-snooping vlan vlan-name interface interface-name static group group-address

5. To statically configure an interface on a VLAN as a multicast-router interface:

[edit protocols]
user@switch# set igmp-snooping vlan vlan-name interface interface-name multicast-router-interface
6. To change the default number of timeout intervals the device waits before timing out and removing a
multicast group on a VLAN:
[edit protocols]
user@switch# set igmp-snooping vlan vlan-name robust-count number
7. To configure the device to act as an IGMP querier on a VLAN, sending queries with a specified
source address:

[edit protocols]
user@switch# set igmp-snooping vlan vlan-name l2-querier source-address source-address
Or on QFabric Systems only, if you want a QFabric Node device to act as an IGMP querier, enter the
following:
[edit protocols]
user@switch# set igmp-snooping vlan vlan-name igmp-querier source-address source-address
The switch sends IGMP queries with the configured source address. To ensure this switch is always
the IGMP querier on the network, make sure the source address is greater (a higher number) than the
IP addresses for any other multicast routers on the same local network.
Release Description
14.1X53 Starting in Junos OS Releases 14.1X53 and 15.2, QFabric Systems support the igmp-querier statement
to configure a Node device as an IGMP querier.
IN THIS SECTION
Requirements | 129
Configuration | 132
You can enable IGMP snooping on a VLAN to constrain the flooding of IPv4 multicast traffic on a VLAN.
When IGMP snooping is enabled, a switch examines IGMP messages between hosts and multicast
routers and learns which hosts are interested in receiving multicast traffic for a multicast group. Based
on what it learns, the switch then forwards multicast traffic only to those interfaces connected to
interested receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
IN THIS SECTION
Topology | 131
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the switch are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/12, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the IGMP querier and forwards multicast
traffic for group 225.100.100.100 to the switch from a multicast source.
Topology
In this example topology, the multicast router forwards multicast traffic to the switch from the source
when it receives a membership report for group 225.100.100.100 from one of the hosts—for example,
Host B. If IGMP snooping is not enabled on vlan100, the switch floods the multicast traffic on all
interfaces in vlan100 (except for interface ge-0/0/12). If IGMP snooping is enabled on vlan100, the
switch monitors the IGMP messages between the hosts and router, allowing it to determine that only
Host B is interested in receiving the multicast traffic. The switch then forwards the multicast traffic only
to interface ge-0/0/1.
IGMP snooping is enabled on all VLANs in the default factory configuration. For many implementations,
IGMP snooping requires no additional configuration. This example shows how to perform the following
optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the switch stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the switch waits until the group-
specific queries time out before it stops forwarding traffic.
Immediate leave is supported by IGMP version 2 (IGMPv2) and IGMPv3. With IGMPv2, we
recommend that you configure immediate leave only when there is only one IGMP host on an
interface. In IGMPv2, only one host on an interface sends a membership report in response to a
group-specific query—any other interested hosts suppress their reports to avoid a flood of reports for
the same group. This report-suppression feature means that the switch only knows about one
interested host at any given time.
• Configure ge-0/0/12 as a static multicast-router interface. In this topology, ge-0/0/12 always leads
to the multicast router. By statically configuring ge-0/0/12 as a multicast-router interface, you avoid
any delay imposed by the switch having to learn that ge-0/0/12 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 132
Procedure
To quickly configure IGMP snooping, copy the following commands and paste them into the switch
terminal window:
[edit]
set protocols igmp-snooping vlan vlan100 immediate-leave
set protocols igmp-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Step-by-Step Procedure
1. Configure the switch to immediately remove a group membership from an interface when it receives
a leave report from the last member of the group on the interface:
[edit protocols]
user@switch# set igmp-snooping vlan vlan100 immediate-leave
2. Configure ge-0/0/12 as a static multicast-router interface:

[edit protocols]
user@switch# set igmp-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Results
[edit protocols]
user@switch# show igmp-snooping
vlan all;
vlan vlan100 {
    immediate-leave;
    interface ge-0/0/12.0 {
        multicast-router-interface;
    }
}
To verify that IGMP snooping is operating as configured, perform the following task:
Purpose
Verify that IGMP snooping is enabled on vlan100 and that ge-0/0/12 is recognized as a multicast-router
interface.
Action
Meaning
By showing information for vlan100, the command output confirms that IGMP snooping is configured
on the VLAN. Interface ge-0/0/12.0 is listed as a multicast-router interface, as configured. Because none
of the host interfaces are listed, none of the hosts are currently receivers for the multicast group.
IN THIS SECTION
Requirements | 135
Configuration | 136
Internet Group Management Protocol (IGMP) snooping constrains the flooding of IPv4 multicast traffic
on VLANs on a device. With IGMP snooping enabled, the device monitors IGMP traffic on the network
and uses what it learns to forward multicast traffic to only the downstream interfaces that are
connected to interested receivers. The device conserves bandwidth by sending multicast traffic only to
interfaces connected to devices that want to receive the traffic, instead of flooding the traffic to all the
downstream interfaces in a VLAN.
Requirements
This example requires Junos OS Release 11.1 or later on a QFX Series product.
IN THIS SECTION
Topology | 135
In this example you configure an interface to receive multicast traffic from a source and configure some
multicast-related behavior for downstream interfaces. The example assumes that IGMP snooping was
previously disabled for the VLAN.
Topology
Table 7 on page 135 shows the components of the topology for this example.
Configuration
IN THIS SECTION
Procedure | 136
Procedure
To quickly configure IGMP snooping, copy the following commands and paste them into a terminal
window:
[edit protocols]
set igmp-snooping vlan employee-vlan
set igmp-snooping vlan employee-vlan interface ge-0/0/3 static group 225.100.100.100
set igmp-snooping vlan employee-vlan interface ge-0/0/2 multicast-router-interface
set igmp-snooping vlan employee-vlan robust-count 4
Step-by-Step Procedure
1. Enable IGMP snooping on the VLAN:

[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan

2. Configure interface ge-0/0/3 as a static member of multicast group 225.100.100.100:

[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan interface ge-0/0/3 static group 225.100.100.100
3. Configure interface ge-0/0/2 as a static multicast-router interface:

[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan interface ge-0/0/2 multicast-router-interface
4. Configure the switch to wait for four timeout intervals before timing out a multicast group on a
VLAN:
[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan robust-count 4
Results
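The output of the show igmp-snooping command for this configuration should look something like the
following sketch (this output is reconstructed from the set commands above; the exact ordering of
statements may vary):

[edit protocols]
user@switch# show igmp-snooping
vlan employee-vlan {
    robust-count 4;
    interface ge-0/0/3.0 {
        static {
            group 225.100.100.100;
        }
    }
    interface ge-0/0/2.0 {
        multicast-router-interface;
    }
}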
The IGMP snooping group timeout value determines how long a switch waits to receive an IGMP query
from a multicast router before removing a multicast group from its multicast cache table. A switch
calculates the timeout value by using the query-interval and query-response-interval values.
When you enable IGMP snooping, the query-interval and query-response-interval values are applied to
all VLANs on the switch. The values are:
• query-interval—125 seconds
• query-response-interval—10 seconds
The switch automatically calculates the group timeout value for an IGMP snooping-enabled switch by
multiplying the query-interval value by 2 (the default robust-count value) and then adding the query-
response-interval value. By default, the switch waits 260 seconds to receive an IGMP query before
removing a multicast group from its multicast cache table: (125 x 2) + 10 = 260.
You can modify the group timeout value by changing the robust-count value. For example, if you want
the system to wait 510 seconds before timing groups out—(125 x 4) + 10 = 510—enter this command:
[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan robust-count 4
IN THIS SECTION
Purpose | 139
Action | 139
Meaning | 140
Purpose
Use the monitoring feature to view status and information about the IGMP snooping configuration.
Action
To display details about IGMP snooping, enter the following operational commands:
• show igmp snooping interface—Display information about interfaces enabled with IGMP snooping,
including which interfaces are being snooped in a learning domain and the number of groups on each
interface.
• show igmp snooping membership—Display IGMP snooping membership information, including the
multicast group address and the number of active multicast groups.
• show igmp snooping options—Display brief or detailed information about IGMP snooping.
• show igmp snooping statistics—Display IGMP snooping statistics, including the number of messages
sent and received.
The show igmp snooping interface, show igmp snooping membership, and show igmp snooping
statistics commands also support the following options:
• instance instance-name
• interface interface-name
• qualified-vlan vlan-identifier
• vlan vlan-name
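For example, to restrict the membership display to a single VLAN (the VLAN name here is illustrative):

user@switch> show igmp snooping membership vlan v1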
140
Meaning
The output includes the Next-Hop field, which shows the next hop assigned by the switch after
performing the route lookup.
IN THIS SECTION
Internet Group Management Protocol (IGMP) snooping constrains the flooding of IPv4 multicast traffic
on VLANs on a switch. This topic describes how to verify IGMP snooping operation on the switch.
It covers:
IN THIS SECTION
Purpose | 141
Action | 141
Meaning | 142
Purpose
Determine group memberships, multicast-router interfaces, host IGMP versions, and the current values
of timeout counters.
Action
Group: 233.252.0.1
ge-1/0/17.0 259 Last reporter: 10.0.0.90 Receiver count: 1
Uptime: 00:00:19 timeout: 259 Flags: <V3-hosts>
Include source: 10.2.11.5, 10.2.11.12
Meaning
The switch has multicast membership information for one VLAN on the switch, vlan2. IGMP snooping
might be enabled on other VLANs, but the switch does not have any multicast membership information
for them. The following information is provided:
• Information on the multicast-router interfaces for the VLAN—in this case, ge-1/0/0.0. The multicast-
router interface has been learned by IGMP snooping, as indicated by the dynamic value. The timeout
value shows how many seconds from now the interface will be removed from the multicast
forwarding table if the switch does not receive IGMP queries or Protocol Independent Multicast
(PIM) updates on the interface.
• Currently, the VLAN has membership in only one multicast group, 233.252.0.1.
• The host or hosts that have reported membership in the group are on interface ge-1/0/17.0. The
last host that reported membership in the group has address 10.0.0.90. The number of hosts
belonging to the group on the interface is shown in the Receiver count field, which is displayed
only when host tracking is enabled (host tracking is enabled when immediate leave is configured on
the VLAN).
• The Uptime field shows that the multicast group has been active on the interface for 19 seconds.
The interface group membership will time out in 259 seconds if no hosts respond to membership
queries during this interval. The Flags field shows the lowest version of IGMP used by a host that
is currently a member of the group, which in this case is IGMP version 3 (IGMPv3).
• Because the interface has IGMPv3 hosts on it, the source addresses from which the IGMPv3
hosts want to receive group multicast traffic are shown (addresses 10.2.11.5 and 10.2.11.12). The
timeout value for the interface group membership is derived from the largest timeout value for all
source addresses for the group.
IN THIS SECTION
Purpose | 143
Action | 143
Meaning | 143
Purpose
Display IGMP snooping statistics, such as the number of IGMP queries, reports, and leaves received and
how many of these IGMP messages contained errors.
Action
Meaning
The output shows how many IGMP messages of each type—Queries, Reports, Leaves—the switch
received or transmitted on interfaces on which IGMP snooping is enabled. For each message type, it also
shows the number of IGMP packets the switch received that had errors—for example, packets that do
not conform to the IGMPv1, IGMPv2, or IGMPv3 standards. If the Recv Errors count increases, verify
that the hosts are compliant with IGMP standards. If the switch is unable to recognize the IGMP
message type for a packet, it counts the packet under Receive unknown.
IN THIS SECTION
Purpose | 144
Action | 144
Meaning | 144
Purpose
Action
Meaning
The output shows the next-hop interfaces for a given multicast group on a VLAN.
Routers handle both Layer 2 and Layer 3 addressing information because they must process the frame
and its addresses to access the encapsulated packet inside. Routers can run Layer 3 multicast protocols
such as PIM or IGMP and determine where to forward multicast content or when a host on an
interface joins or leaves a group. However, bridges and LAN switches, as Layer 2 devices, are not
supposed to have access to the multicast information inside the packets that their frames carry. How,
then, are bridges and other Layer 2 devices to determine when a device on an interface joins or leaves
a multicast tree, or whether a host on an attached LAN wants to receive the content of a particular
multicast group?
The answer is for the Layer 2 device to implement multicast snooping. Multicast snooping is a general
term and applies to the process of a Layer 2 device “snooping” at the Layer 3 packet content to
determine which actions are taken to process or forward a frame. There are more specific forms of
snooping, such as IGMP snooping or PIM snooping. In all cases, snooping involves a device configured to
function at Layer 2 having access to normally “forbidden” Layer 3 (packet) information. Snooping makes
multicasting more efficient in these devices.
Layer 2 devices (LAN switches or bridges) handle multicast packets, and the frames that contain them,
in much the same way that Layer 3 devices (routers) handle broadcasts. That is, a Layer 2 switch
processes an arriving frame having a multicast destination media access control (MAC) address by
forwarding a copy of the packet (frame) onto each of the other network interfaces of the switch that
are in a forwarding state.
However, this approach (sending multicast frames everywhere the device can) is not the most efficient
use of network bandwidth, particularly for IPTV applications. IGMP snooping functions by “snooping” at
the IGMP packets received by the switch interfaces and building a multicast database similar to the one
a multicast router builds in a Layer 3 network. Using this database, the switch can forward multicast traffic
only onto downstream interfaces with interested receivers, and this technique allows more efficient use
of network bandwidth.
You configure IGMP snooping for each bridge on the router. A bridge instance without qualified learning
has just one learning domain. For a bridge instance with qualified learning, snooping will function
separately within each learning domain in the bridge. That is, IGMP snooping and multicast forwarding
will proceed independently in each learning domain in the bridge.
This discussion focuses on bridge instances without qualified learning (those forming one learning
domain on the device). Therefore, all the interfaces mentioned are logical interfaces of the bridge or
VPLS instance.
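For example, on MX Series routers, which configure IGMP snooping per bridge domain, a minimal
sketch might look like this (the bridge-domain name bd0 is illustrative):

[edit]
user@host# set bridge-domains bd0 protocols igmp-snooping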
• Bridge or VPLS instance interfaces are either multicast-router interfaces or host-side interfaces.
NOTE: When integrated routing and bridging (IRB) is used, if the router is an IGMP querier, any
leave message received on any Layer 2 interface will cause a group-specific query on all Layer 2
interfaces (as a result of this practice, some corresponding reports might be received on all
Layer 2 interfaces). However, if some of the Layer 2 interfaces are also router (Layer 3)
interfaces, reports and leaves from other Layer 2 interfaces will not be forwarded on those
interfaces.
If an IRB interface is used as an outgoing interface in a multicast forwarding cache entry (as determined
by the routing process), then the output interface list is expanded into a subset of the Layer 2 interface
in the corresponding bridge. The subset is based on the snooped multicast membership information,
according to the multicast forwarding cache entry installed by the snooping process for the bridge.
If no snooping is configured, the IRB output interface list is expanded to all Layer 2 interfaces in the
bridge.
The Junos OS does not support IGMP snooping in a VPLS configuration on a virtual switch. This
configuration is disallowed in the CLI.
All interfaces that are not multicast-router interfaces are considered host-side interfaces.
Any multicast traffic received on a bridge interface with IGMP snooping configured is forwarded
according to the following rules:
• Any IGMP packet is sent to the Routing Engine for snooping processing.
• Other multicast traffic with a destination address in 224.0.0.0/24 is flooded onto all other interfaces
of the bridge.
• Other multicast traffic is sent to all the multicast-router interfaces but only to those host-side
interfaces that have hosts interested in receiving that multicast group.
• Query—All general and group-specific IGMP query messages received on a multicast-router interface
are forwarded to all other interfaces (both multicast-router interfaces and host-side interfaces) on
the bridge.
• Report—IGMP reports received on any interface of the bridge are forwarded toward other multicast-
router interfaces. The receiving interface is added as an interface for that group if a multicast routing
entry exists for this group. Also, a group timer is set for the group on that interface. If this timer
expires (that is, there was no report for this group during the IGMP group timer period), then the
interface is removed as an interface for that group.
• Leave—IGMP leave messages received on any interface of the bridge are forwarded toward other
multicast-router interfaces on the bridge. The Leave Group message reduces the time it takes for the
multicast router to stop forwarding multicast traffic when there are no longer any members in the
host group.
Proxy snooping reduces the number of IGMP reports sent toward an IGMP router.
NOTE: With proxy snooping configured, an IGMP router is not able to perform host tracking.
As proxy for its host-side interfaces, IGMP snooping in proxy mode replies to the queries it receives
from an IGMP router on a multicast-router interface. On the host-side interfaces, IGMP snooping in
proxy mode behaves as an IGMP router and sends general and group-specific queries on those
interfaces.
NOTE: Only group-specific queries are generated by IGMP snooping directly. General queries
received from the multicast-router interfaces are flooded to host-side interfaces.
All the queries generated by IGMP snooping are sent using 0.0.0.0 as the source address. Also, all
reports generated by IGMP snooping are sent with 0.0.0.0 as the source address unless there is a
configured source address to use.
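For example, to configure the source address that the proxy uses on a VLAN, you can use the
source-address statement shown in the statement hierarchy later in this topic (the VLAN name and
address here are illustrative):

[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan proxy source-address 10.93.14.100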
Proxy mode functions differently on multicast-router interfaces than it does on host-side interfaces.
Besides replying to queries, IGMP snooping in proxy mode forwards all queries, reports, and leaves
received on a multicast-router interface to other multicast-router interfaces. IGMP snooping keeps the
membership information learned on this interface but does not send a group-specific query for leave
messages received on this interface. It simply times out the groups learned on this interface if there are
no reports for the same group within the timer duration.
NOTE: For the hosts on all the multicast-router interfaces, it is the IGMP router, not the IGMP
snooping proxy, that generates general and group-specific queries.
If a group is removed from a host-side interface and this was the last host-side interface for that group, a
leave is sent to the multicast-router interfaces. If a group report is received on a host-side interface and
this was the first host-side interface for that group, a report is sent to all multicast-router interfaces.
igmp-snooping {
    immediate-leave;
    interface interface-name {
        group-limit limit;
        host-only-interface;
        immediate-leave;
        multicast-router-interface;
        static {
            group ip-address {
                source ip-address;
            }
        }
    }
    proxy {
        source-address ip-address;
    }
    query-interval seconds;
    query-last-member-interval seconds;
    query-response-interval seconds;
    robust-count number;
    vlan vlan-id {
        immediate-leave;
        interface interface-name {
            group-limit limit;
            host-only-interface;
            immediate-leave;
            multicast-router-interface;
            static {
                group ip-address {
                    source ip-address;
                }
            }
        }
        proxy {
            source-address ip-address;
        }
        query-interval seconds;
        query-last-member-interval seconds;
        query-response-interval seconds;
        robust-count number;
    }
}
By default, IGMP snooping is not enabled. Statements configured at the VLAN level apply only to that
particular VLAN.
vlan vlan-id {
    immediate-leave;
    interface interface-name {
        group-limit limit;
        host-only-interface;
        multicast-router-interface;
        static {
            group ip-address {
                source ip-address;
            }
        }
    }
    proxy {
        source-address ip-address;
    }
    query-interval seconds;
    query-last-member-interval seconds;
    query-response-interval seconds;
    robust-count number;
}
IN THIS SECTION
Requirements | 153
Configuration | 157
Verification | 161
This example shows how to configure IGMP snooping. IGMP snooping can reduce unnecessary traffic
from IP multicast applications.
Requirements
• Configure the interfaces. See the Interfaces User Guide for Security Devices.
• Configure an interior gateway protocol. See the Junos OS Routing Protocols Library for Routing
Devices.
• Configure a multicast protocol. This feature works with the following multicast protocols:
• DVMRP
• PIM-DM
• PIM-SM
• PIM-SSM
IN THIS SECTION
Topology | 156
IGMP snooping controls multicast traffic in a switched network. When IGMP snooping is not enabled,
the Layer 2 device broadcasts multicast traffic out of all of its ports, even if the hosts on the network do
not want the multicast traffic. With IGMP snooping enabled, a Layer 2 device monitors the IGMP join
and leave messages sent from each connected host to a multicast router. This enables the Layer 2
device to keep track of the multicast groups and associated member ports. The Layer 2 device uses this
information to make intelligent decisions and to forward multicast traffic to only the intended
destination hosts.
• proxy—Enables the Layer 2 device to actively filter IGMP packets to reduce load on the multicast
router. Joins and leaves heading upstream to the multicast router are filtered so that the multicast
router has a single entry for the group, regardless of how many active listeners have joined the
group. When a listener leaves a group but other listeners remain in the group, the leave message is
filtered because the multicast router does not need this information. The status of the group remains
the same from the router's point of view.
• immediate-leave—When only one IGMP host is connected, the immediate-leave statement enables
the multicast router to immediately remove the group membership from the interface and suppress
the sending of any group-specific queries for the multicast group.
When you configure this feature on IGMPv2 interfaces, ensure that the IGMP interface has only one
IGMP host connected. If more than one IGMPv2 host is connected to a LAN through the same
interface, and one host sends a leave message, the router removes all hosts on the interface from the
multicast group. The router loses contact with the hosts that properly remain in the multicast group
until they send join requests in response to the next general multicast listener query from the router.
When IGMPv3 snooping is enabled on a router, after the router receives a report with the type
BLOCK_OLD_SOURCES, the router suppresses the sending of group-and-source queries but relies on
the Junos OS host-tracking mechanism to determine whether or not to remove a particular source
group membership from the interface.
• query-interval—Enables you to change the number of IGMP messages sent on the subnet by
configuring the interval at which the IGMP querier router sends general host-query messages to
solicit membership information.
By default, the query interval is 125 seconds. You can configure any value in the range 1 through
1024 seconds.
The last-member query interval is the maximum amount of time between group-specific query
messages, including those sent in response to leave-group messages.
By default, the last-member query interval is 1 second. You can configure any value in the range 0.1
through 0.9 seconds, and then 1-second intervals from 1 through 1024 seconds.
• query-response-interval—Configures how long the router waits to receive a response from its host-
query messages.
By default, the query response interval is 10 seconds. You can configure any value in the range 1
through 1024 seconds. This interval should be less than the interval set in the query-interval
statement.
• robust-count—Provides fine-tuning to allow for expected packet loss on a subnet. It is basically the
number of intervals to wait before timing out a group. You can wait more intervals if subnet packet
loss is high and IGMP report messages might be lost.
By default, the robust count is 2. You can configure any value in the range 2 through 10 intervals.
• group-limit—Configures a limit for the number of multicast groups (or [S,G] channels in IGMPv3) that
can join an interface. After this limit is reached, new reports are ignored and all related flows are
discarded, not flooded.
By default, there is no limit to the number of groups that can join an interface. You can configure a
limit in the range 0 through a 32-bit number.
By default, the router learns about multicast groups on the interface dynamically.
Topology
Figure 15 on page 156 shows networks without IGMP snooping. Suppose host A is an IP multicast
sender and hosts B and C are multicast receivers. The router forwards IP multicast traffic only to those
segments with registered receivers (hosts B and C). However, the Layer 2 devices flood the traffic to all
hosts on all interfaces.
Figure 16 on page 157 shows the same networks with IGMP snooping configured. The Layer 2 devices
forward multicast traffic to registered receivers only.
Configuration
IN THIS SECTION
Procedure | 158
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
3. Configure the limit for the number of multicast groups allowed on the ge-0/0/1.1 interface to 50.
4. Configure the router to immediately remove a group membership from an interface when it receives
a leave message from that interface without waiting for any other IGMP messages to be exchanged.
7. Configure an interface to be an exclusively host-facing interface (to drop IGMP query messages).
user@host# commit
Results
Verification
You can configure tracing operations for IGMP snooping globally or in a routing instance. The following
example shows the global configuration.
5. Configure tracing flags. Suppose you are troubleshooting issues with a policy related to received
packets on a particular logical interface with an IP address of 192.168.0.1. The following example
shows how to flag all policy events for received packets associated with the IP address.
IN THIS SECTION
Requirements | 164
Configuration | 165
You can enable IGMP snooping on a VLAN to constrain the flooding of IPv4 multicast traffic on a VLAN.
When IGMP snooping is enabled, the device examines IGMP messages between hosts and multicast
routers and learns which hosts are interested in receiving multicast traffic for a multicast group. Based
on what it learns, the device then forwards multicast traffic only to those interfaces that are connected
to relevant receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following hardware and software components:
IN THIS SECTION
Topology | 165
IGMP snooping controls multicast traffic in a switched network. When IGMP snooping is not enabled,
the SRX Series device broadcasts multicast traffic out of all of its ports, even if the hosts on the network
do not want the multicast traffic. With IGMP snooping enabled, the SRX Series device monitors the
IGMP join and leave messages sent from each connected host to a multicast router. This enables the
SRX Series device to keep track of the multicast groups and associated member ports. The SRX Series
device uses this information to make intelligent decisions and to forward multicast traffic to only the
intended destination hosts.
Topology
In this sample topology, the multicast router forwards multicast traffic to the device from the source
when it receives a membership report for group 233.252.0.100 from one of the hosts—for example,
Host B. If IGMP snooping is not enabled on vlan100, the device floods the multicast traffic on all
interfaces in vlan100 (except for interface ge-0/0/2.0). If IGMP snooping is enabled on vlan100, the
device monitors the IGMP messages between the hosts and router, allowing it to determine that only
Host B is interested in receiving the multicast traffic. The device then forwards the multicast traffic only
to interface ge-0/0/2.
Configuration
IN THIS SECTION
Procedure | 166
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
instructions on how to do that, see Using the CLI Editor in Configuration Mode in the CLI User Guide.
1. Configure interface ge-0/0/1 as an access interface:

[edit]
user@host# set interfaces ge-0/0/1 unit 0 family ethernet-switching interface-mode access
2. Create VLAN v1 with VLAN ID 100:

[edit]
user@host# set vlans v1 vlan-id 100

3. Enable IGMP snooping in proxy mode on VLAN v1:

[edit]
user@host# set protocols igmp-snooping vlan v1 proxy
4. Configure the limit for the number of multicast groups allowed on the ge-0/0/1.0 interface to 50.
[edit]
user@host# set protocols igmp-snooping vlan v1 interface ge-0/0/1.0 group-limit 50
5. Configure the device to immediately remove a group membership from an interface when it receives
a leave message from that interface without waiting for any other IGMP messages to be exchanged.
[edit]
user@host# set protocols igmp-snooping vlan v1 immediate-leave
6. Configure interface ge-0/0/4.0 as a static member of multicast group 233.252.0.100:

[edit]
user@host# set protocols igmp-snooping vlan v1 interface ge-0/0/4.0 static group 233.252.0.100
7. Configure an interface to be an exclusively host-facing interface (to drop IGMP query messages).
[edit]
user@host# set protocols igmp-snooping vlan v1 interface ge-0/0/1.0 host-only-interface
8. Configure the query interval, query response interval, last-member query interval, and robust count
for the VLAN:

[edit]
user@host# set protocols igmp-snooping vlan v1 query-interval 200
user@host# set protocols igmp-snooping vlan v1 query-response-interval 0.4
user@host# set protocols igmp-snooping vlan v1 query-last-member-interval 0.1
user@host# set protocols igmp-snooping vlan v1 robust-count 4
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show protocols igmp-snooping
command. If the output does not display the intended configuration, repeat the configuration
instructions in this example to correct it.
[edit]
user@host# show protocols igmp-snooping
vlan v1 {
    query-interval 200;
    query-response-interval 0.4;
    query-last-member-interval 0.1;
    robust-count 4;
    immediate-leave;
    proxy;
    interface ge-0/0/1.0 {
        host-only-interface;
        group-limit 50;
    }
    interface ge-0/0/4.0 {
        static {
            group 233.252.0.100;
        }
    }
}
To verify that IGMP snooping is operating as configured, perform the following task:
Purpose
Verify that IGMP snooping is enabled on VLAN v1 and that interface ge-0/0/4.0 has a static
membership in multicast group 233.252.0.100.
Action
From operational mode, enter the show igmp snooping membership command.
Vlan: v1
Learning-Domain: default
Interface: ge-0/0/4.0, Groups: 1
Group: 233.252.0.100
Group mode: Exclude
Source: 0.0.0.0
Last reported by: Local
Group timeout: 0 Type: Static
Meaning
By showing information for VLAN v1, the command output confirms that IGMP snooping is configured
on the VLAN. Interface ge-0/0/4.0 is listed with a static membership in group 233.252.0.100, as
configured. Because no other interfaces are listed, no hosts have dynamically joined the multicast group.
By default, IGMP snooping in VPLS uses multiple parallel streams when forwarding multicast traffic to
PE routers participating in the VPLS. However, you can enable point-to-multipoint LSP for IGMP
snooping to have multicast data traffic in the core take the point-to-multipoint path rather than using a
pseudowire path. The effect is a reduction in the amount of traffic generated on the PE router when
sending multicast packets for multiple VPLS sessions.
Figure 18 on page 172 shows the effect on multicast traffic generated on the PE1 router (the device
where the setting is enabled). When pseudowire LSP is used, the PE1 router sends multiple packets,
whereas with point-to-multipoint LSP enabled, the PE1 router sends only a single copy of the packets.
The options configured for IGMP snooping are applied per routing instance, so all IGMP snooping
routes in the same instance use the same mode, point-to-multipoint or pseudowire.
NOTE: The point-to-multipoint option is available on MX960, MX480, MX240, and MX80
routers running Junos OS 13.3 and later.
NOTE: IGMP snooping is not supported on the core-facing pseudowire interfaces; all PE routers
participating in VPLS will continue to receive multicast data traffic even when this option is
enabled.
Figure 18: Point-to-multipoint LSP generates less traffic on the PE router than pseudowire.
In a VPLS instance with IGMP snooping that uses a point-to-multipoint LSP, mcsnoopd (the multicast
snooping process that allows Layer 3 inspection from a Layer 2 device) starts listening for point-to-
multipoint next-hop notifications and then manages the IGMP snooping routes accordingly. Enabling
the use-p2mp-lsp statement allows the IGMP snooping routes to start using this next hop. In short,
if point-to-multipoint is configured for a VPLS instance, multicast data traffic in the core can avoid
ingress replication by taking the point-to-multipoint path. If the point-to-multipoint next hop is
unavailable, packets are handled in the VPLS instance in the same way as broadcast packets or unknown
unicast frames. Note that IGMP snooping is not supported on the core-facing pseudowire interfaces. PE
routers participating in VPLS continue to receive multicast data traffic regardless of how point-to-
multipoint is set.
[edit]
user@host# set routing-instances instance-name instance-type vpls igmp-snooping-options use-p2mp-lsp
routing-instances {
    <instance-name> {
        instance-type vpls;
        igmp-snooping-options {
            use-p2mp-lsp;
        }
    }
}
To show the operational status of point-to-multipoint LSP for IGMP snooping routes, use the show
igmp snooping options CLI command:

user@host> show igmp snooping options
Instance: master
P2MP LSP in use: no
Instance: default-switch
P2MP LSP in use: no
Instance: name
P2MP LSP in use: yes
RELATED DOCUMENTATION
use-p2mp-lsp | 2010
show igmp snooping options | 2180
multicast-snooping-options | 1703
CHAPTER 4
IN THIS CHAPTER
Configuring MLD Snooping on a Switch VLAN with ELS Support (CLI Procedure) | 195
Configuring MLD Snooping Tracing Operations on EX Series Switches (CLI Procedure) | 214
Configuring MLD Snooping Tracing Operations on EX Series Switch VLANs (CLI Procedure) | 217
Multicast Listener Discovery (MLD) snooping constrains the flooding of IPv6 multicast traffic on VLANs.
When MLD snooping is enabled on a VLAN, a Juniper Networks device examines MLD messages
between hosts and multicast routers and learns which hosts are interested in receiving traffic for a
multicast group. On the basis of what it learns, the device then forwards multicast traffic only to those
interfaces in the VLAN that are connected to interested receivers instead of flooding the traffic to all
interfaces.
MLD snooping supports MLD version 1 (MLDv1) and MLDv2. For details on MLDv1 and MLDv2, see
the following standards:
• MLDv1—See RFC 2710, Multicast Listener Discovery (MLD) for IPv6.
• MLDv2—See RFC 3810, Multicast Listener Discovery Version 2 (MLDv2) for IPv6.
By default, the device floods Layer 2 multicast traffic received on a VLAN to all of the interfaces
belonging to that VLAN on the device, except for the interface that is the source of the multicast
traffic. This behavior can consume significant amounts of bandwidth.
You can enable MLD snooping to avoid this flooding. When you enable MLD snooping, the device
monitors MLD messages between receivers (hosts) and multicast routers and uses the content of the
messages to build an IPv6 multicast forwarding table—a database of IPv6 multicast groups and the
interfaces that are connected to the interested members of each group. When the device receives
multicast traffic for a multicast group, it uses the forwarding table to forward the traffic only to
interfaces that are connected to receivers that belong to the multicast group.
Figure 19 on page 176 shows an example of multicast traffic flow with MLD snooping enabled.
Multicast routers use MLD to learn, for each of their attached physical networks, which groups have
interested listeners. In any given subnet, one multicast router is elected to act as an MLD querier. The
MLD querier sends out the following types of queries to hosts:
• General query—Asks whether any host on the link is listening to any multicast group. The querier
sends general queries periodically to refresh group membership state.
• Group-specific query—Asks whether any host is listening to a specific multicast group. This query is
sent in response to a host leaving the multicast group and allows the router to quickly determine if
any remaining hosts are interested in the group.
• Group-and-source-specific query—(MLD version 2 only) Asks whether any host is listening to group
multicast traffic from a specific multicast source. This query is sent in response to a host indicating
that it is no longer interested in receiving group multicast traffic from the multicast source and allows
the router to quickly determine whether any remaining hosts are interested in receiving group
multicast traffic from that source.
Hosts that are multicast listeners send the following kinds of messages:
• Membership report—Indicates that the host wants to join a particular multicast group.
• Leave report—Indicates that the host wants to leave a particular multicast group.
Only MLDv1 hosts use two different kinds of reports to indicate whether they want to join or leave a
group. MLDv2 hosts send only one kind of report, the contents of which indicate whether they want to
join or leave a group. However, for simplicity’s sake, the MLD snooping documentation uses the term
membership report for a report that indicates that a host wants to join a group and uses the term leave
report for a report that indicates a host wants to leave a group.
A host can join a multicast group in either of two ways:
• By sending an unsolicited membership report that specifies the multicast group that the host is
attempting to join.
• By responding with a membership report when it receives a general query from the MLD querier.
A multicast router continues to forward multicast traffic to an interface provided that at least one host
on that interface responds to the periodic general queries indicating its membership. For a host to
remain a member of a multicast group, therefore, it must continue to respond to the periodic general
queries.
A host can leave a multicast group in either of two ways:
• By sending a leave report.
• By not responding to periodic queries within a set interval of time. This results in what is known as a
“silent leave.”
NOTE: If a host is connected to the device through a hub, the host does not automatically leave
the multicast group if it disconnects from the hub. The host remains a member of the group until
group membership times out and a silent leave occurs. If another host connects to the hub port
before the silent leave occurs, the new host might receive the group multicast traffic until the
silent leave, even though it never sent a membership report.
In MLDv2, a host can send a membership report that includes a list of source addresses. When the host
sends a membership report in INCLUDE mode, the host is interested in group multicast traffic only from
those sources in the source address list. If a host sends a membership report in EXCLUDE mode, the host
is interested in group multicast traffic from any source except the sources in the source address list. A
host can also send an EXCLUDE report in which the source-list parameter is empty, which is known as
an EXCLUDE NULL report. An EXCLUDE NULL report indicates that the host wants to join the multicast
group and receive packets from all sources.
Devices that support MLD snooping support MLDv2 membership reports that are in INCLUDE and
EXCLUDE mode. However, SRX Series devices, QFX Series switches, and EX Series switches running
MLD snooping, except for EX9200 switches, do not support forwarding on a per-source basis. Instead,
the device consolidates all INCLUDE and EXCLUDE mode reports it receives on a VLAN for a specified
group into a single route that includes all multicast sources for that group, with the next hop being all
interfaces that have interested receivers for the group. As a result, interested receivers on the VLAN can
receive traffic from a source that they did not include in their INCLUDE report or from a source they
excluded in their EXCLUDE report. For example, if Host 1 wants traffic for group G from Source A and
Host 2 wants traffic for group G from Source B, they both receive traffic for group G regardless of
whether A or B sends the traffic.
To determine how to forward multicast traffic, the device with MLD snooping enabled maintains
information about the following interfaces in its multicast forwarding table:
• Multicast-router interfaces—These interfaces lead toward multicast routers or MLD queriers.
• Group-member interfaces—These interfaces lead toward hosts that are members of multicast groups.
The device learns about these interfaces by monitoring MLD traffic. If an interface receives MLD
queries, the device adds the interface to its multicast forwarding table as a multicast-router interface. If
an interface receives membership reports for a multicast group, the device adds the interface to its
multicast forwarding table as a group-member interface.
Table entries for interfaces that the device learns about are subject to aging. For example, if a learned
multicast-router interface does not receive MLD queries within a certain interval, the device removes
the entry for that interface from its multicast forwarding table.
NOTE: For the device to learn multicast-router interfaces and group-member interfaces, an MLD
querier must exist in the network. For the device itself to function as an MLD querier, MLD must
be enabled on the device.
Multicast traffic received on a device interface in a VLAN on which MLD snooping is enabled is
forwarded according to the following rules:
• MLD general queries received on a multicast-router interface are forwarded to all other interfaces in
the VLAN.
• MLD group-specific queries received on a multicast-router interface are forwarded to only those
interfaces in the VLAN that are members of the group.
• MLD reports received on a host interface are forwarded to multicast-router interfaces in the same
VLAN, but not to the other host interfaces in the VLAN.
• An unregistered multicast packet—that is, a packet for a group that has no current members—is
forwarded to all multicast-router interfaces in the VLAN.
• A registered multicast packet is forwarded only to those host interfaces in the VLAN that are
members of the multicast group and to all multicast-router interfaces in the VLAN.
NOTE: When IGMP snooping and MLD snooping are both enabled on the same VLAN, multicast-router
interfaces are created as part of both the IGMP snooping and the MLD snooping configurations.
Unregistered multicast traffic is not blocked and can pass through router interfaces. Because of
hardware limitations, unregistered IPv4 multicast traffic might therefore pass through the multicast-
router interfaces created as part of the MLD snooping configuration, and unregistered IPv6 multicast
traffic might pass through the multicast-router interfaces created as part of the IGMP snooping
configuration.
The following examples are provided to illustrate how MLD snooping forwards multicast traffic in
different topologies:
In the topology shown in Figure 20 on page 181, the device acting as a Layer 2 device receives multicast
traffic belonging to multicast group ff1e::2010 from Source A, which is connected to the multicast
router. It also receives multicast traffic belonging to multicast group ff15::2 from Source B, which is
connected directly to the device. All interfaces on the device belong to the same VLAN.
Because the device receives MLD queries from the multicast router on interface P1, MLD snooping
learns that interface P1 is a multicast-router interface and adds the interface to its multicast forwarding
table. It forwards any MLD general queries it receives on this interface to all host interfaces on the
device, and, in turn, forwards membership reports it receives from hosts to the multicast-router
interface.
In the example, Hosts A and C have responded to the general queries with membership reports for
group ff1e::2010. MLD snooping adds interfaces P2 and P4 to its multicast forwarding table as member
interfaces for group ff1e::2010. It forwards the group multicast traffic received from Source A to Hosts
A and C, but not to Hosts B and D.
Host B has responded to the general queries with a membership report for group ff15::2. The device
adds interface P3 to its multicast forwarding table as a member interface for group ff15::2 and forwards
multicast traffic it receives from Source B to Host B. The device also forwards the multicast traffic it
receives from Source B to the multicast-router interface P1.
Figure 20: Scenario 1: Device Forwarding Multicast Traffic to a Multicast Router and Hosts
In the topology shown in Figure 21 on page 182, a multicast source is connected to Device A. Device A in
turn is connected to another device, Device B. Hosts on both Device A and B are potential members of
the multicast group. Both devices are acting as Layer 2 devices, and all interfaces on the devices are
members of the same VLAN.
Device A receives MLD queries from the multicast router on interface P1, making interface P1 a
multicast-router interface for Device A. Device A forwards all general queries it receives on this
interface to the other interfaces on the device, including the interface connecting Device B. Because
Device B receives the forwarded MLD queries on interface P6, P6 is the multicast-router interface for
Device B. Device B forwards the membership report it receives from Host C to Device A through its
multicast-router interface. Device A forwards the membership report to its multicast-router interface,
includes interface P5 in its multicast forwarding table as a group-member interface, and forwards
multicast traffic from the source to Device B.
In the topology shown in Figure 22 on page 184, the device is connected to a multicast source and to
hosts. There is no multicast router in this topology—hence there is no MLD querier. Without an MLD
querier to respond to, a host does not send periodic membership reports. As a result, even if the host
sends an unsolicited membership report to join a multicast group, its membership in the multicast group
will time out.
For MLD snooping to work correctly in this network so that the device forwards multicast traffic to
Hosts A and C only, you can:
• Configure a routed VLAN interface (RVI), also referred to as an integrated routing and bridging (IRB)
interface, on the VLAN and enable MLD on it. In this case, the device itself acts as an MLD querier,
and the hosts can dynamically join the multicast group and refresh their group membership by
responding to the queries.
Figure 22: Scenario 3: Device Connected to Hosts Only (No MLD Querier)
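A minimal sketch of the RVI approach, assuming a switch that uses RVIs, VLAN ID 10, a VLAN named
v10, and an illustrative IPv6 address:

[edit]
user@switch# set interfaces vlan unit 10 family inet6 address 2001:db8::1/64
user@switch# set vlans v10 l3-interface vlan.10
user@switch# set protocols mld interface vlan.10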
In the topology shown in Figure 23 on page 185, a multicast source, Multicast Router A, and Hosts A
and B are connected to the device and are in VLAN 10. Multicast Router B and Hosts C and D are also
connected to the device and are in VLAN 20.
In a pure Layer 2 environment, traffic is not forwarded between VLANs. For Host C to receive the
multicast traffic from the source on VLAN 10, RVIs (or IRB interfaces) must be created on VLAN 10 and
VLAN 20 to permit routing of the multicast traffic between the VLANs.
Figure 23: Scenario 4: Layer 2/Layer 3 Device Forwarding Multicast Traffic Between VLANs
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast routers
and learns which hosts are interested in receiving multicast traffic for a multicast group. Based on what
it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to interested
receivers instead of flooding the traffic to all interfaces.
MLD snooping is not enabled on the switch by default. To enable MLD snooping on all VLANs:
[edit]
user@switch# set protocols mld-snooping vlan all
You can also configure the following optional MLD snooping settings:
• Specify the MLD version for the general query that the switch sends on an interface when the
interface comes up.
• Enable immediate leave on a VLAN or all VLANs. Immediate leave reduces the length of time it takes
the switch to stop forwarding multicast traffic when the last member host on the interface leaves the
group.
• Configure an interface as a static multicast-router interface for a VLAN or for all VLANs so that the
switch does not need to dynamically learn that the interface is a multicast-router interface.
• Configure an interface as a static member of a multicast group so that the switch does not need to
dynamically learn the interface’s membership.
• Change the value for certain timers and counters to match the values configured on the multicast
router serving as the MLD querier.
TIP: When you configure MLD snooping using the vlan all statement, any VLAN that is not
individually configured for MLD snooping inherits the vlan all configuration. Any VLAN that is
individually configured for MLD snooping, on the other hand, inherits none of its configuration
from vlan all. Any parameters that are not explicitly defined for the individual VLAN assume their
default values, not the values specified in the vlan all configuration. For example, in the following
configuration:
protocols {
mld-snooping {
vlan all {
robust-count 8;
}
vlan employee {
interface ge-0/0/8.0 {
static {
group ff1e::1;
}
}
}
}
}
all VLANs, except employee, have a robust count of 8. Because employee has been individually
configured, its robust count value is not determined by the value set under vlan all. Instead, its
robust count is the default value of 2.
This topic describes how you can enable or disable MLD snooping on specific VLANs or on all VLANs on
the switch.
For example, to enable MLD snooping on all VLANs except vlan100 and vlan200:
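[edit]
user@switch# set protocols mld-snooping vlan all
user@switch# set protocols mld-snooping vlan vlan100 disable
user@switch# set protocols mld-snooping vlan vlan200 disable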
You can also deactivate the MLD snooping protocol on the switch without changing the MLD snooping
VLAN configurations:
[edit]
user@switch# deactivate protocols mld-snooping
Typically, a switch passively monitors MLD messages sent between multicast routers and hosts and does
not send MLD queries. The exception is when a switch detects that an interface has come up. When an
interface comes up, the switch sends an immediate general membership query to all hosts on the
interface. By doing so, the switch enables the multicast routers to learn group memberships more
quickly than they would if they had to wait until the MLD querier sent its next general query.
The MLD version of the general query determines the MLD version of the host membership reports as
follows:
• MLD version 1 (MLDv1) general query—Both MLDv1 and MLDv2 hosts respond with an MLDv1
membership report.
• MLDv2 general query—MLDv2 hosts respond with an MLDv2 membership report, while MLDv1
hosts are unable to respond to the query.
By default, the switch sends MLDv1 queries. This ensures compatibility with hosts and multicast routers
that support MLDv1 only and cannot process MLDv2 reports. However, if your VLAN contains MLDv2
multicast routers and hosts and the routers are running PIM-SSM, we recommend that you configure
MLD snooping for MLDv2. Doing so enables the routers to quickly learn which multicast sources the
hosts on the interface want to receive traffic from.
NOTE: Configuring the MLD version does not limit the version of MLD messages that the switch
can snoop. A switch can snoop both MLDv1 and MLDv2 messages regardless of the MLD
version configured.
For example, to set the MLD version to version 2 for VLAN marketing:
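[edit protocols mld-snooping]
user@switch# set vlan marketing version 2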
By default, when the last member host of a group leaves the group, the switch waits until the group-
specific membership queries time out before it stops forwarding multicast traffic to the interface. You
can decrease the leave latency created by this default behavior by enabling immediate leave on a
VLAN.
When you enable immediate leave on a VLAN, host tracking is also enabled, allowing the switch to keep
track of the hosts on an interface that have joined a multicast group. When the switch receives a leave
report from the last member of the group, it immediately stops forwarding traffic to the interface and
does not wait for the interface group membership to time out.
Immediate leave is supported for both MLD version 1 (MLDv1) and MLDv2. However, with MLDv1, we
recommend that you configure immediate leave only when there is only one MLD host on an interface.
In MLDv1, only one host on an interface sends a membership report in response to a group-specific query
—any other interested hosts suppress their reports. This report-suppression feature means that the
switch only knows about one interested host at any given time.
In addition to dynamically learned interfaces, the multicast forwarding table can include interfaces that
you explicitly configure to be multicast router interfaces. Unlike the table entries for dynamically learned
interfaces, table entries for statically configured interfaces are not subject to aging and deletion from the
forwarding table.
Examples of when you might want to configure a static multicast-router interface include:
• You have an unusual network configuration that prevents MLD snooping from reliably learning about
a multicast-router interface through monitoring MLD queries or PIM updates.
• You have a stable topology and want to avoid the delay the dynamic learning process entails.
NOTE: If the interface you are configuring as a multicast-router interface is a trunk port, the
interface becomes a multicast-router interface for all VLANs configured on the trunk port even if
you have not explicitly configured it for all the VLANs. In addition, all unregistered multicast
packets, whether they are IPv4 or IPv6 packets, are forwarded to the multicast-router interface,
even if the interface is configured as a multicast-router interface only for MLD snooping.
For example, to configure ge-0/0/5.0 as a multicast-router interface for all VLANs on the switch:
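[edit protocols mld-snooping]
user@switch# set vlan all interface ge-0/0/5.0 multicast-router-interface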
In addition to interfaces whose group memberships are learned dynamically, the multicast forwarding table can include interfaces
that you statically configure to be members of multicast groups. When you configure a static group
interface, the switch adds the interface to the forwarding table as a host interface for the group. Unlike
an entry for a dynamically learned interface, a static interface entry is not subject to aging and deletion
from the forwarding table.
Examples of when you might want to configure static group membership on an interface include:
• The interface has receivers that cannot send MLD membership reports.
• You want the multicast traffic for a specific group to be immediately available to a receiver without
any delay imposed by the dynamic join process.
You cannot configure multicast source addresses for a static group interface. The MLD version of a
static group interface is always MLD version 1.
NOTE: The switch does not simulate MLD membership reports on behalf of a statically
configured interface. Thus a multicast router might be unaware that the switch has an interface
that is a member of the multicast group. You can configure a static group interface on the router
to ensure that the switch receives the group multicast traffic.
For example, to configure interface ge-0/0/11.0 in VLAN ip-camera-vlan as a static member of multicast
group ff1e::1:
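[edit protocols mld-snooping]
user@switch# set vlan ip-camera-vlan interface ge-0/0/11.0 static group ff1e::1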
In most cases, the default MLD timer and counter values work well. There might be cases, however, where you want to adjust the timer and counter values—for
example, to reduce burstiness, to reduce leave latency, or to adjust for expected packet loss on a subnet.
If you change a timer or counter value for the MLD querier on a VLAN, we recommend that you change
the value for all multicast routers and switches on the VLAN so that all devices time out group
memberships at approximately the same time.
• query-interval—The length of time the MLD querier waits between sending general queries (the
default is 125 seconds). You can change this interval to tune the number of MLD messages on the
subnet; larger values cause general queries to be sent less often.
You cannot configure this value directly for MLD snooping. MLD snooping inherits the value from the
MLD configuration on the switch, which applies to all VLANs on the switch.
• query-response-interval—The maximum length of time the host can wait until it responds (the default
is 10 seconds). You can change this interval to adjust the burst peaks of MLD messages on the
subnet. Set a larger interval to make the traffic less bursty.
You cannot configure this value directly for MLD snooping. MLD snooping inherits the value from the
MLD configuration on the switch, which applies to all VLANs on the switch.
• query-last-member-interval—The length of time the MLD querier waits between sending group-
specific membership queries (the default is 1 second). The MLD querier sends a group-specific query
after receiving a leave report from a host. You can decrease this interval to reduce the amount of
time it takes for multicast traffic to stop forwarding after the last member leaves a group.
You cannot configure this value directly for MLD snooping. MLD snooping inherits the value from the
MLD configuration on the switch, which applies to all VLANs on the switch.
• robust-count—The number of times the querier resends a general membership query or a group-
specific membership query (the default is 2 times). You can increase this count to tune for higher
expected packet loss.
For MLD snooping, you can configure robust-count for a specific VLAN. If a VLAN does not have
robust-count configured, the robust-count value is inherited from the value configured for MLD.
The values configured for query-interval, query-response-interval, and robust-count determine the
multicast listener interval—the length of time the switch waits for a group membership report after a
general query before removing a multicast group from its multicast forwarding table. The switch
calculates the multicast listener interval by multiplying the query-interval by the robust-count and then
adding the query-response-interval:

multicast listener interval = (query-interval x robust-count) + query-response-interval
For example, the multicast listener interval is 260 seconds when the default settings for query-interval,
query-response-interval, and robust-count are used:
(125 x 2) + 10 = 260
You can display the time remaining in the multicast listener interval before a group times out by using
the show mld-snooping membership command.
RELATED DOCUMENTATION
Configuring MLD | 60
NOTE: This task uses Junos OS with support for the Enhanced Layer 2 Software (ELS)
configuration style. If your switch runs software that does not support ELS, see "Configuring
MLD Snooping on an EX Series Switch VLAN (CLI Procedure)" on page 186. For ELS details, see
Using the Enhanced Layer 2 Software CLI.
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on the
VLAN. When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast
routers and learns which hosts are interested in receiving multicast traffic for a multicast group. Based
on what it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to
interested receivers instead of flooding the traffic to all interfaces.
MLD snooping is not enabled on the switch by default. To enable MLD snooping on a VLAN:

[edit]
user@switch# set protocols mld-snooping vlan vlan-name

In addition to enabling MLD snooping, you can optionally perform the following configuration tasks:
• Specify the MLD version for the general query that the switch sends on an interface when the
interface comes up.
• Enable immediate leave to reduce the length of time it takes the switch to stop forwarding multicast
traffic when the last member host on the interface leaves the group.
• Configure an interface as a static multicast-router interface so that the switch does not need to
dynamically learn that the interface is a multicast-router interface.
• Configure an interface as a static member of a multicast group so that the switch does not need to
dynamically learn the interface’s membership.
• Change the value for certain timers and counters to match the values configured on the multicast
router serving as the MLD querier.
You can also deactivate the MLD snooping protocol on the switch without changing the MLD snooping
VLAN configurations:
[edit]
user@switch# deactivate protocols mld-snooping
Typically, a switch passively monitors MLD messages sent between multicast routers and hosts and does
not send MLD queries. The exception is when a switch detects that an interface has come up. When an
interface comes up, the switch sends an immediate general membership query to all hosts on the
interface. By doing so, the switch enables the multicast routers to learn group memberships more
quickly than they would if they had to wait until the MLD querier sent its next general query.
The MLD version of the general query determines the MLD version of the host membership reports as
follows:
• MLD version 1 (MLDv1) general query—Both MLDv1 and MLDv2 hosts respond with an MLDv1
membership report.
• MLDv2 general query—MLDv2 hosts respond with an MLDv2 membership report, while MLDv1
hosts are unable to respond to the query.
By default, the switch sends MLDv1 queries. This ensures compatibility with hosts and multicast routers
that support MLDv1 only and cannot process MLDv2 reports. However, if your VLAN contains MLDv2
multicast routers and hosts and the routers are running PIM-SSM, we recommend that you configure
MLD snooping for MLDv2. Doing so enables the routers to quickly learn which multicast sources the
hosts on the interface want to receive traffic from.
NOTE: Configuring the MLD version does not limit the version of MLD messages that the switch
can snoop. A switch can snoop both MLDv1 and MLDv2 messages regardless of the MLD
version configured.
By default, when the last member host of a group leaves the group, the switch waits until the group-
specific membership queries time out before it stops forwarding multicast traffic to the interface. You
can decrease the leave latency created by this default behavior by enabling immediate leave on a
VLAN.
When you enable immediate leave on a VLAN, host tracking is also enabled, allowing the switch to keep
track of the hosts on an interface that have joined a multicast group. When the switch receives a leave
report from the last member of the group, it immediately stops forwarding traffic to the interface and
does not wait for the interface group membership to time out.
Immediate leave is supported for both MLD version 1 (MLDv1) and MLDv2. However, with MLDv1, we
recommend that you configure immediate leave only when there is only one MLD host on an interface.
In MLDv1, only one host on an interface sends a membership report in response to a group-specific query
—any other interested hosts suppress their reports. This report-suppression feature means that the
switch only knows about one interested host at any given time.
In addition to dynamically learned interfaces, the multicast forwarding table can include interfaces that
you explicitly configure to be multicast router interfaces. Unlike the table entries for dynamically learned
interfaces, table entries for statically configured interfaces are not subject to aging and deletion from the
forwarding table.
Examples of when you might want to configure a static multicast-router interface include:
• You have an unusual network configuration that prevents MLD snooping from reliably learning about
a multicast-router interface through monitoring MLD queries or PIM updates.
• You have a stable topology and want to avoid the delay the dynamic learning process entails.
In addition to interfaces whose group memberships are learned dynamically, the multicast forwarding table can include interfaces
that you statically configure to be members of multicast groups. When you configure a static group
interface, the switch adds the interface to the forwarding table as a host interface for the group. Unlike
an entry for a dynamically learned interface, a static interface entry is not subject to aging and deletion
from the forwarding table.
Examples of when you might want to configure static group membership on an interface include:
• The interface has receivers that cannot send MLD membership reports.
• You want the multicast traffic for a specific group to be immediately available to a receiver without
any delay imposed by the dynamic join process.
You cannot configure multicast source addresses for a static group interface. The MLD version of a
static group interface is always MLD version 1.
NOTE: The switch does not simulate MLD membership reports on behalf of a statically
configured interface. Thus a multicast router might be unaware that the switch has an interface
that is a member of the multicast group. You can configure a static group interface on the router
to ensure that the switch receives the group multicast traffic.
For example, to configure interface ge-0/0/11.0 in VLAN employee as a static member of multicast
group ff1e::1:
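[edit protocols mld-snooping]
user@switch# set vlan employee interface ge-0/0/11.0 static group ff1e::1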
In most cases, the default MLD timer and counter values work well. There might be cases, however, where you want to adjust the timer and counter values—for
example, to reduce burstiness, to reduce leave latency, or to adjust for expected packet loss on a subnet.
If you change a timer or counter value for the MLD querier on a VLAN, we recommend that you change
the value for all multicast routers and switches on the VLAN so that all devices time out group
memberships at approximately the same time.
• query-interval—The length of time in seconds the MLD querier waits between sending general
queries (the default is 125 seconds). You can change this interval to tune the number of MLD
messages on the subnet; larger values cause general queries to be sent less often.
• query-response-interval—The maximum length of time in seconds the host waits before it responds
(the default is 10 seconds). You can change this interval to accommodate the burst peaks of MLD
messages on the subnet. Set a larger interval to make the traffic less bursty.
• query-last-member-interval—The length of time the MLD querier waits between sending group-
specific membership queries (the default is 1 second). The MLD querier sends a group-specific query
after receiving a leave report from a host. You can decrease this interval to reduce the amount of
time it takes for multicast traffic to stop forwarding after the last member leaves a group.
• robust-count—The number of times the querier resends a general membership query or a group-
specific membership query (the default is 2 times). You can increase this count to tune for higher
anticipated packet loss.
For MLD snooping, you can configure robust-count for a specific VLAN. If a VLAN does not have
robust-count configured, the value is inherited from the value configured for MLD.
The values configured for query-interval, query-response-interval, and robust-count determine the
multicast listener interval—the length of time the switch waits for a group membership report after a
general query before removing a multicast group from its multicast forwarding table. The switch
calculates the multicast listener interval by multiplying the query-interval value by the robust-count value
and then adding the query-response-interval to the product:

multicast listener interval = (query-interval x robust-count) + query-response-interval
For example, the multicast listener interval is 260 seconds when the default settings for query-interval,
query-response-interval, and robust-count are used:
(125 x 2) + 10 = 260
To display the time remaining in the multicast listener interval before a group times out, use the show
mld-snooping membership command.
IN THIS SECTION
Requirements | 202
Configuration | 204
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast routers
and learns which hosts are interested in receiving multicast traffic for a multicast group. Based on what
it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to interested
receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
IN THIS SECTION
Topology | 203
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the switch are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/12, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the MLD querier and forwards multicast
traffic for group ff1e::2010 to the switch from a multicast source.
Topology
In this example topology, the multicast router forwards multicast traffic to the switch from the source
when it receives a membership report for group ff1e::2010 from one of the hosts—for example, Host B.
If MLD snooping is not enabled on vlan100, the switch floods the multicast traffic on all interfaces in
vlan100 (except for interface ge-0/0/12). If MLD snooping is enabled on vlan100, the switch monitors
the MLD messages between the hosts and router, allowing it to determine that only Host B is interested
in receiving the multicast traffic. The switch then forwards the multicast traffic only to interface
ge-0/0/1.
This example shows how to enable MLD snooping on vlan100. It also shows how to perform the
following optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the switch stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the switch waits until the group-
specific membership queries time out before it stops forwarding traffic.
• Configure ge-0/0/12 as a static multicast-router interface. In this topology, ge-0/0/12 always leads
to the multicast router. By statically configuring ge-0/0/12 as a multicast-router interface, you avoid
any delay imposed by the switch having to learn that ge-0/0/12 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 204
Procedure
To quickly configure MLD snooping, copy the following commands and paste them into the switch
terminal window:
[edit]
set protocols mld-snooping vlan vlan100
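set protocols mld-snooping vlan vlan100 immediate-leave
set protocols mld-snooping vlan vlan100 interface ge-0/0/12.0 multicast-router-interface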
Step-by-Step Procedure
1. Enable MLD snooping on vlan100:

[edit protocols]
user@switch# set mld-snooping vlan vlan100
2. Configure the switch to immediately remove a group membership from an interface when it receives
a leave report from the last member of the group on the interface:
[edit protocols]
user@switch# set mld-snooping vlan vlan100 immediate-leave
3. Configure ge-0/0/12 as a static multicast-router interface:

[edit protocols]
user@switch# set mld-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Results
[edit protocols]
user@switch# show mld-snooping
vlan vlan100 {
immediate-leave;
interface ge-0/0/12.0 {
multicast-router-interface;
}
}
To verify that MLD snooping is enabled on the VLAN and the MLD snooping forwarding interfaces are
correct, perform the following task:
Purpose
Verify that MLD snooping is enabled on vlan100 and that the multicast-router interface is statically
configured:
Action
From operational mode, enter the show mld-snooping membership command.
Meaning
MLD snooping is running on vlan100, and interface ge-0/0/12.0 is a statically configured multicast-
router interface. Because the multicast group ff1e::2010 is listed, at least one host in the VLAN is a
current member of the multicast group and that host is on interface ge-0/0/1.0.
IN THIS SECTION
Requirements | 207
Configuration | 209
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, an SRX Series device examines MLD messages between hosts and
multicast routers and learns which hosts are interested in receiving multicast traffic for a multicast
group. Based on what it learns, the device then forwards IPv6 multicast traffic only to those interfaces
connected to interested receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
IN THIS SECTION
Topology | 208
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the device are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/3, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the MLD querier and forwards multicast
traffic for group 2001:db8::1 to the device from a multicast source.
Topology
In this example topology, the multicast router forwards multicast traffic to the device from the source
when it receives a membership report for group 2001:db8::1 from one of the hosts—for example, Host
B. If MLD snooping is not enabled on vlan100, then the device floods the multicast traffic on all
interfaces in vlan100 (except for interface ge-0/0/3). If MLD snooping is enabled on vlan100, the device
monitors the MLD messages between the hosts and router, allowing it to determine that only Host B is
interested in receiving the multicast traffic. The device then forwards the multicast traffic only to
interface ge-0/0/1.
This example shows how to enable MLD snooping on vlan100. It also shows how to perform the
following optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the device stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the device waits until the group-
specific membership queries time out before it stops forwarding traffic.
• Configure ge-0/0/3 as a static multicast-router interface. In this topology, ge-0/0/3 always leads to
the multicast router. By statically configuring ge-0/0/3 as a multicast-router interface, you avoid any
delay imposed by the device having to learn that ge-0/0/3 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 209
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
instructions on how to do that, see Using the CLI Editor in Configuration Mode in the CLI User Guide.
1. Configure the access interfaces connected to the hosts and assign them to vlan100:

[edit interfaces]
user@host# set ge-0/0/0 unit 0 family ethernet-switching interface-mode access
user@host# set ge-0/0/0 unit 0 family ethernet-switching vlan members vlan100
user@host# set ge-0/0/1 unit 0 family ethernet-switching interface-mode access
user@host# set ge-0/0/1 unit 0 family ethernet-switching vlan members vlan100
user@host# set ge-0/0/2 unit 0 family ethernet-switching interface-mode access
user@host# set ge-0/0/2 unit 0 family ethernet-switching vlan members vlan100
2. Configure the trunk interface connected to the multicast router:

[edit interfaces]
user@host# set ge-0/0/3 unit 0 family ethernet-switching interface-mode trunk
user@host# set ge-0/0/3 unit 0 family ethernet-switching vlan members vlan100
3. Enable nonstop routing:

[edit]
user@host# set routing-options nonstop-routing
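4. Enable MLD snooping on vlan100:

[edit protocols]
user@host# set mld-snooping vlan vlan100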
5. Configure a limit of 50 for the number of multicast groups allowed on the ge-0/0/0.0 interface:

[edit protocols]
user@host# set mld-snooping vlan vlan100 interface ge-0/0/0.0 group-limit 50
6. Configure the device to immediately remove a group membership from an interface when it
receives a leave message from that interface without waiting for any other MLD messages to be
exchanged.
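[edit protocols]
user@host# set mld-snooping vlan vlan100 immediate-leave

7. Configure the MLD snooping query timers and the robustness count for the VLAN:

[edit protocols]
user@host# set mld-snooping vlan vlan100 query-interval 200
user@host# set mld-snooping vlan vlan100 query-response-interval 0.4
user@host# set mld-snooping vlan vlan100 query-last-member-interval 0.1
user@host# set mld-snooping vlan vlan100 robust-count 4

8. Configure interface ge-0/0/2.0 as a static member of multicast group 2001:db8::1:

[edit protocols]
user@host# set mld-snooping vlan vlan100 interface ge-0/0/2.0 static group 2001:db8::1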
9. Configure an interface to be an exclusively host-facing interface (to drop MLD query messages).
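[edit protocols]
user@host# set mld-snooping vlan vlan100 interface ge-0/0/1.0 host-only-interface

10. Configure ge-0/0/3.0 as a static multicast-router interface:

[edit protocols]
user@host# set mld-snooping vlan vlan100 interface ge-0/0/3.0 multicast-router-interface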
11. If you are done configuring the device, commit the configuration.
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show protocols mld-snooping
command. If the output does not display the intended configuration, repeat the configuration
instructions in this example to correct it.
[edit]
user@host# show protocols mld-snooping
vlan vlan100 {
query-interval 200;
query-response-interval 0.4;
query-last-member-interval 0.1;
robust-count 4;
immediate-leave;
interface ge-0/0/1.0 {
host-only-interface;
}
interface ge-0/0/0.0 {
group-limit 50;
}
interface ge-0/0/2.0 {
static {
group 2001:db8::1;
}
}
interface ge-0/0/3.0 {
multicast-router-interface;
}
}
To verify that MLD snooping is enabled on the VLAN and the MLD snooping forwarding interfaces are
correct, perform the following task:
Purpose
Verify that MLD snooping is enabled on vlan100 and that the multicast-router interface is statically
configured:
Action
From operational mode, enter the show mld-snooping membership command.
Vlan: vlan100
Learning-Domain: default
Interface: ge-0/0/0.0, Groups: 0
Interface: ge-0/0/1.0, Groups: 0
Interface: ge-0/0/2.0, Groups: 1
Group: 2001:db8::1
Group mode: Exclude
Source: ::
Last reported by: Local
Group timeout: 0 Type: Static
Meaning
MLD snooping is running on vlan100, and interface ge-0/0/3.0 is a statically configured multicast-
router interface. The multicast group 2001:db8::1 is listed on interface ge-0/0/2.0 with Type: Static,
confirming that the interface is a statically configured member of the group.
RELATED DOCUMENTATION
mld-snooping | 1669
Understanding MLD Snooping | 174
By enabling tracing operations for MLD snooping, you can record detailed messages about the
operation of the protocol, such as the various types of protocol packets sent and received. Table 9 on
page 214 describes the tracing operations you can enable and the flags used to specify them in the
tracing configuration.
For example, the normal flag traces normal MLD snooping protocol events. If you do not specify this
flag, only unusual or abnormal operations are traced.
To configure MLD snooping trace operations:
1. Configure a file to receive the trace output:

[edit protocols mld-snooping]
user@switch# set traceoptions file filename

For example:

[edit protocols mld-snooping]
user@switch# set traceoptions file mld-snoop-trace
2. (Optional) Configure the maximum number of trace files and size of the trace files:
[edit protocols mld-snooping]
user@switch# set traceoptions file files number size size
For example:
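[edit protocols mld-snooping]
user@switch# set traceoptions file files 5 size 1m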
causes the contents of the trace file to be emptied and archived in a .gz file when the file reaches 1
MB. Four archive files are maintained, the contents of which are rotated whenever the current active
trace file is archived.
If you omit this step, the maximum number of trace files defaults to 10, and the maximum file size
defaults to 128 KB.
3. Specify one of the tracing flags shown in Table 9 on page 214:

[edit protocols mld-snooping]
user@switch# set traceoptions flag flag-name
For example, to perform trace operations on VLAN-related events and MLD query messages:
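[edit protocols mld-snooping]
user@switch# set traceoptions flag vlan
user@switch# set traceoptions flag query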
You can stop and restart tracing operations by deactivating and reactivating the configuration:
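[edit]
user@switch# deactivate protocols mld-snooping traceoptions
user@switch# activate protocols mld-snooping traceoptions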
By enabling tracing operations for MLD snooping, you can record detailed messages about the
operation of the protocol, such as the various types of protocol packets sent and received. Table 10 on
page 218 describes the tracing operations you can enable and the flags used to specify them in the
tracing configuration.
For example, the normal flag traces normal MLD snooping protocol events. If you do not specify this
flag, only unusual or abnormal operations are traced.
To configure MLD snooping trace operations for a VLAN:
1. Configure a file to receive the trace output:

[edit protocols mld-snooping]
user@switch# set vlan vlan-name traceoptions file filename

For example:

[edit protocols mld-snooping]
user@switch# set vlan vlan100 traceoptions file mld-snoop-trace
2. (Optional) Configure the maximum number of trace files and size of the trace files:
[edit protocols mld-snooping]
user@switch# set vlan vlan-name traceoptions file files number size size
For example:
[edit protocols mld-snooping]
user@switch# set vlan vlan100 traceoptions file files 5 size 1m
causes the contents of the trace file to be emptied and archived in a .gz file when the file reaches 1
MB. Four archive files are maintained, the contents of which are rotated whenever the current active
trace file is archived.
If you omit this step, the maximum number of trace files defaults to 10, and the maximum file size to
128 KB.
3. Specify one of the tracing flags shown in Table 10 on page 218:

[edit protocols mld-snooping]
user@switch# set vlan vlan-name traceoptions flag flag-name
For example, to perform trace operations on VLAN-related events and on MLD query messages:
[edit protocols mld-snooping]
user@switch# set vlan vlan100 traceoptions flag vlan
user@switch# set vlan vlan100 traceoptions flag query
You can stop and restart tracing operations by deactivating and reactivating the configuration:
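[edit]
user@switch# deactivate protocols mld-snooping vlan vlan100 traceoptions
user@switch# activate protocols mld-snooping vlan vlan100 traceoptions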
IN THIS SECTION
Requirements | 221
Configuration | 223
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast routers
and learns which hosts are interested in receiving multicast traffic for a multicast group. Based on what
it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to interested
receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
IN THIS SECTION
Topology | 222
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the switch are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/12, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the MLD querier and forwards multicast
traffic for group ff1e::2010 to the switch from a multicast source.
Topology
In this example topology, the multicast router forwards multicast traffic to the switch from the source
when it receives a membership report for group ff1e::2010 from one of the hosts—for example, Host B.
If MLD snooping is not enabled on vlan100, the switch floods the multicast traffic on all interfaces in
vlan100 (except for interface ge-0/0/12). If MLD snooping is enabled on vlan100, the switch monitors
the MLD messages between the hosts and router, allowing it to determine that only Host B is interested
in receiving the multicast traffic. The switch then forwards the multicast traffic only to interface
ge-0/0/1.
This example shows how to enable MLD snooping on vlan100. It also shows how to perform the
following optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the switch stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the switch waits until the group-
specific membership queries time out before it stops forwarding traffic.
• Configure ge-0/0/12 as a static multicast-router interface. In this topology, ge-0/0/12 always leads
to the multicast router. By statically configuring ge-0/0/12 as a multicast-router interface, you avoid
any delay imposed by the switch having to learn that ge-0/0/12 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 223
Procedure
To quickly configure MLD snooping, copy the following commands and paste them into the switch
terminal window:
[edit]
set protocols mld-snooping vlan vlan100
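set protocols mld-snooping vlan vlan100 immediate-leave
set protocols mld-snooping vlan vlan100 interface ge-0/0/12.0 multicast-router-interface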
Step-by-Step Procedure
1. Enable MLD snooping on vlan100:

[edit protocols]
user@switch# set mld-snooping vlan vlan100
2. Configure the switch to immediately remove a group membership from an interface when it receives
a leave report from the last member of the group on the interface:
[edit protocols]
user@switch# set mld-snooping vlan vlan100 immediate-leave
3. Configure ge-0/0/12 as a static multicast-router interface:

[edit protocols]
user@switch# set mld-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Results
[edit protocols]
user@switch# show mld-snooping
vlan vlan100 {
immediate-leave;
interface ge-0/0/12.0 {
multicast-router-interface;
}
}
To verify that MLD snooping is enabled on the VLAN and the MLD snooping forwarding interfaces are
correct, perform the following task:
Purpose
Verify that MLD snooping is enabled on vlan100 and that the multicast-router interface is statically
configured:
Action
From operational mode, enter the show mld-snooping membership command.
Meaning
MLD snooping is running on vlan100, and interface ge-0/0/12.0 is a statically configured multicast-
router interface. Because the multicast group ff1e::2010 is listed, at least one host in the VLAN is a
current member of the multicast group and that host is on interface ge-0/0/1.0.
IN THIS SECTION
Requirements | 226
Configuration | 229
NOTE: This example uses Junos OS with support for the Enhanced Layer 2 Software (ELS)
configuration style. For ELS details, see Using the Enhanced Layer 2 Software CLI.
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast routers
and learns which hosts are interested in receiving multicast traffic for a multicast group. On the basis of
what it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to
interested receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
• Junos OS Release 13.3 or later for EX Series switches or Junos OS Release 15.1X53-D10 or later for
QFX10000 switches
Before you begin, create and configure the VLAN on which you will enable MLD snooping. See
Configuring VLANs for EX Series Switches or Configuring VLANs on Switches with Enhanced Layer
2 Support.
IN THIS SECTION
Topology | 228
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the switch are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/12, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the MLD querier and forwards multicast
traffic for group ff1e::2010 to the switch from a multicast source.
Topology
In this sample topology, the multicast router forwards multicast traffic to the switch from the source
when it receives a membership report for group ff1e::2010 from one of the hosts—for example, Host B.
If MLD snooping is not enabled on vlan100, the switch floods the multicast traffic on all interfaces in
vlan100 (except for interface ge-0/0/12). If MLD snooping is enabled on vlan100, the switch monitors
the MLD messages between the hosts and router, allowing it to determine that only Host B is interested
in receiving the multicast traffic. The switch then forwards the multicast traffic only to interface
ge-0/0/1.
This example shows how to enable MLD snooping on vlan100. It also shows how to perform the
following optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the switch stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the switch waits until the group-
specific membership queries time out before it stops forwarding traffic.
• Configure ge-0/0/12 as a static multicast-router interface. In this topology, ge-0/0/12 always leads
to the multicast router. By statically configuring ge-0/0/12 as a multicast-router interface, you avoid
any delay imposed by the switch having to learn that ge-0/0/12 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 229
Procedure
To quickly configure MLD snooping, copy the following commands and paste them into the switch
terminal window:
[edit]
set protocols mld-snooping vlan vlan100
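set protocols mld-snooping vlan vlan100 immediate-leave
set protocols mld-snooping vlan vlan100 interface ge-0/0/12.0 multicast-router-interface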
Step-by-Step Procedure
1. Enable MLD snooping on vlan100:

[edit protocols]
user@switch# set mld-snooping vlan vlan100
2. Configure the switch to immediately remove a group membership from an interface when it receives
a leave report from the last member of the group on the interface:
[edit protocols]
user@switch# set mld-snooping vlan vlan100 immediate-leave
3. Configure ge-0/0/12 as a static multicast-router interface:

[edit protocols]
user@switch# set mld-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Results
[edit protocols]
user@switch# show mld-snooping
vlan vlan100 {
immediate-leave;
interface ge-0/0/12.0 {
multicast-router-interface;
}
}
To verify that MLD snooping is enabled on the VLAN and the MLD snooping forwarding interfaces are
correct, perform the following task:
Purpose
Verify that MLD snooping is enabled on vlan100 and that the multicast-router interface is
statically configured:
Action
From operational mode, enter the show mld-snooping interface command.
Vlan: vlan100
Learning-Domain: default
Interface: ge-0/0/12.0
State: Up Groups: 3
Immediate leave: On
Router interface: yes
Configured Parameters:
MLD Query Interval: 125.0
MLD Query Response Interval: 10.0
MLD Last Member Query Interval: 1.0
MLD Robustness Count: 2
Meaning
MLD snooping is running on vlan100, and interface ge-0/0/12.0 is a statically configured multicast-
router interface. Immediate leave is enabled on the interface.
RELATED DOCUMENTATION
Configuring MLD Snooping on a Switch VLAN with ELS Support (CLI Procedure) | 195
Verifying MLD Snooping on Switches | 237
Understanding MLD Snooping | 174
Multicast Listener Discovery (MLD) snooping constrains the flooding of IPv6 multicast traffic on VLANs
on a switch. This topic describes how to verify MLD snooping operation on the switch.
IN THIS SECTION
Purpose | 232
Action | 232
Meaning | 233
Purpose
Determine group memberships, multicast-router interfaces, host MLD versions, and the current values
of timeout counters.
Action
From operational mode, enter the show mld-snooping membership command.
Meaning
The switch has multicast membership information for one VLAN on the switch, mld-vlan. MLD snooping
might be enabled on other VLANs, but the switch does not have any multicast membership information
for them. The following information is provided:
• Information on the multicast-router interfaces for the VLAN—in this case, ge-1/0/0.0. The multicast-
router interface has been learned by MLD snooping, as indicated by dynamic. The timeout value
shows how many seconds from now the interface will be removed from the multicast forwarding
table if the switch does not receive MLD queries or Protocol Independent Multicast (PIM) updates on
the interface.
• Currently, the VLAN has membership in only one multicast group, ff1e::2010.
• The host or hosts that have reported membership in the group are on interface ge-1/0/30.0. The
interface group membership will time out in 180 seconds if no hosts respond to membership
queries during this interval. The flags field shows the lowest version of MLD used by a host that is
currently a member of the group, which in this case is MLD version 2 (MLDv2).
• The last host that reported membership in the group has address fe80::2020:1:1:3.
• Because the interface has MLDv2 hosts on it, the source addresses from which the MLDv2 hosts
want to receive group multicast traffic are shown (addresses 2020:1:1:1::2 and 2020:1:1:1::5). The
timeout value for the interface group membership is derived from the largest timeout value for all
source addresses for the group.
IN THIS SECTION
Purpose | 234
Action | 234
Meaning | 234
Purpose
Verify that MLD snooping is enabled on a VLAN and display MLD snooping information for each VLAN
on which MLD snooping is enabled.
Action
From operational mode, enter the show mld-snooping vlans command.
Meaning
MLD snooping is configured on two VLANs on the switch: v10 and v20. Each interface in each VLAN is
listed along with its MLD snooping state and group membership information.
IN THIS SECTION
Purpose | 235
Action | 235
Meaning | 235
Purpose
Display MLD snooping statistics, such as number of MLD queries, reports, and leaves received and how
many of these MLD messages contained errors.
Action
From operational mode, enter the show mld-snooping statistics command.
Meaning
The output shows how many MLD messages of each type—Queries, Reports, Leaves—the switch
received or transmitted on interfaces on which MLD snooping is enabled. For each message type, it also
shows the number of MLD packets the switch received that had errors—for example, packets that do
not conform to the MLDv1 or MLDv2 standards. If the Recv Errors count increases, verify that the hosts
are compliant with MLDv1 or MLDv2 standards. If the switch is unable to recognize the MLD message
type for a packet, it counts the packet under Receive unknown.
IN THIS SECTION
Purpose | 236
Action | 236
Meaning | 236
Purpose
Display the next-hop information maintained in the multicast snooping forwarding table.
Action
From operational mode, enter the show mld-snooping route command.
Meaning
The output shows the next-hop interfaces for a given multicast group on a VLAN. Only the last 32 bits
of the group address are shown because the switch uses only these bits in determining multicast routes.
For example, route ::0000:2010 on mld-vlan has next-hop interfaces ge-1/0/30.0 and ge-1/0/33.0.
NOTE: This topic uses Junos OS with support for the Enhanced Layer 2 Software (ELS)
configuration style. If your switch runs software that does not support ELS, see "Verifying MLD
Snooping on EX Series Switches (CLI Procedure)" on page 232. For ELS details, see Using the
Enhanced Layer 2 Software CLI.
Multicast Listener Discovery (MLD) snooping constrains the flooding of IPv6 multicast traffic on VLANs.
This topic describes how to verify MLD snooping operation on a VLAN.
IN THIS SECTION
Purpose | 237
Action | 238
Meaning | 238
Purpose
Verify that MLD snooping is enabled on a VLAN and determine group memberships.
Action
From operational mode, enter the show mld-snooping membership command.
Vlan: v1
Learning-Domain: default
Interface: ge-0/0/1.0, Groups: 1
Group: ff05::1
Group mode: Exclude
Source: ::
Last reported by: fe80::
Group timeout: 259 Type: Dynamic
Interface: ge-0/0/2.0, Groups: 0
Meaning
The switch has multicast membership information for one VLAN on the switch, v1. MLD snooping might
be enabled on other VLANs, but the switch does not have any multicast membership information for
them.
The following information is provided about the group memberships for the VLAN:
• Currently, the VLAN has membership in only one multicast group, ff05::1.
• The host or hosts that have reported membership in the group are on interface ge-0/0/1.0.
• The last host that reported membership in the group has address fe80::.
• The interface group membership will time out in 259 seconds if no hosts respond to membership
queries during this interval.
• The group membership has been learned by MLD snooping, as indicated by Dynamic.
IN THIS SECTION
Purpose | 239
Action | 239
Meaning | 239
Purpose
Display MLD snooping information for each interface on which MLD snooping is enabled.
Action
From operational mode, enter the show mld-snooping interface command.
Vlan: v100
Learning-Domain: default
Interface: ge-0/0/1.0
State: Up Groups: 1
Immediate leave: Off
Router interface: no
Interface: ge-0/0/2.0
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Configured Parameters:
MLD Query Interval: 125.0
MLD Query Response Interval: 10.0
MLD Last Member Query Interval: 1.0
MLD Robustness Count: 2
Meaning
MLD snooping is configured on one VLAN on the switch, v100. Each interface in the VLAN is listed
with its state, the number of groups joined on it, whether immediate leave is enabled, and whether the
interface is a multicast-router interface.
The output also shows the configured parameters for the MLD querier.
IN THIS SECTION
Purpose | 240
Action | 240
Meaning | 241
Purpose
Display MLD snooping statistics, such as number of MLD queries, reports, and leaves received and how
many of these MLD messages contained errors.
Action
From operational mode, enter the show mld-snooping statistics command.
Vlan: v2
MLD Message type Received Sent Rx errors
Listener Query (v1/v2) 0 4 0
Listener Report (v1) 154 0 0
Listener Done (v1/v2) 0 0 0
Listener Report (v2) 0 0 0
Other Unknown types 0
Instance: default-switch
MLD Message type Received Sent Rx errors
Listener Query (v1/v2) 0 8 0
Listener Report (v1) 601 0 0
Listener Done (v1/v2) 0 0 0
Listener Report (v2) 0 0 0
Other Unknown types 0
Meaning
The output shows how many MLD messages of each type—Queries, Done, Report—the switch received
or transmitted on interfaces on which MLD snooping is enabled. For each message type, it also shows
the number of MLD packets the switch received that had errors—for example, packets that do not
conform to the MLDv1 or MLDv2 standards. If the Rx errors count increases, verify that the hosts are
compliant with MLDv1 or MLDv2 standards. If the switch is unable to recognize the MLD message type
for a packet, it counts the packet under Other Unknown types.
IN THIS SECTION
Purpose | 241
Action | 242
Meaning | 242
Purpose
Display the next-hop information maintained in the multicast snooping forwarding table.
Action
From operational mode, enter the show multicast snooping route inet6 command.
Family: INET6
Group: ff00::/8
Source: ::/128
Vlan: v1
Group: ff02::1/128
Source: ::/128
Vlan: v1
Downstream interface list:
ge-1/0/16.0
Group: ff05::1/128
Source: ::/128
Vlan: v1
Downstream interface list:
ge-1/0/16.0
Group: ff06::1/128
Source: ::/128
Vlan: v1
Downstream interface list:
ge-1/0/16.0
Meaning
The output shows the next-hop interfaces for a given multicast group on a VLAN. For example, route
ff02::1/128 on VLAN v1 has the next-hop interface ge-1/0/16.0.
CHAPTER 5
IN THIS CHAPTER
Example: Configuring Multicast VLAN Registration on EX Series Switches Without ELS | 266
Multicast VLAN registration (MVR) enables more efficient distribution of IPTV multicast streams across
an Ethernet ring-based Layer 2 network.
In a standard Layer 2 network, a multicast stream received on one VLAN is never distributed to
interfaces outside that VLAN. If hosts in multiple VLANs request the same multicast stream, a separate
copy of that multicast stream is distributed to each requesting VLAN.
When you configure MVR, you create a multicast VLAN (MVLAN) that becomes the only VLAN over
which IPTV multicast traffic flows throughout the Layer 2 network. Devices with MVR enabled
selectively forward IPTV multicast traffic from interfaces on the MVLAN (source interfaces) to hosts
connected to interfaces outside the MVLAN that you designate as MVR receiver ports. MVR receiver
ports can receive traffic from a port on the MVLAN but cannot send traffic onto the MVLAN, and they
remain in their own VLANs for bandwidth and security reasons.
MVR reduces the bandwidth required to distribute IPTV multicast streams by eliminating duplication of
multicast streams from the same source to interested receivers on different VLANs.
MVR operates similarly to and in conjunction with Internet Group Management Protocol (IGMP)
snooping. Both MVR and IGMP snooping monitor IGMP join and leave messages and build forwarding
tables based on the media access control (MAC) addresses of the hosts sending those IGMP messages.
Whereas IGMP snooping operates within a given VLAN to regulate multicast traffic, MVR can operate
with hosts on different VLANs in a Layer 2 network to selectively deliver IPTV multicast traffic to any
requesting hosts. This reduces the bandwidth needed to forward the traffic.
MVR Basics
MVR is not enabled by default on devices that support MVR. You explicitly configure an MVLAN and
assign a range of multicast group addresses to it. That VLAN carries MVLAN traffic for the configured
multicast groups. You then configure other VLANs to be MVR receiver VLANs that receive multicast
streams from the MVLAN. When MVR is configured on a device, the device receives only one copy of
each MVR multicast stream, and then replicates the stream only to the hosts that want to receive it,
while forwarding all other types of multicast traffic without modification.
You can configure multiple MVLANs on a device, but they must have disjoint multicast group subnets.
An MVR receiver VLAN can be associated with more than one MVLAN on the device.
MVR does not support MVLANs or MVR receiver VLANs on a private VLAN (PVLAN).
On non-ELS switches, the MVR receiver ports comprise all the interfaces that exist on any of the MVR
receiver VLANs.
On ELS switches, the MVR receiver ports are all the interfaces on the MVR receiver VLANs except the
multicast router ports; an interface can be configured in both an MVR receiver VLAN and its MVLAN
only if it is configured as a multicast router port in both VLANs. ELS EX Series switches support MVR as
follows:
• Starting in Junos OS Release 18.3R1, EX4300 switches and Virtual Chassis support MVR. You can
configure up to 10 MVLANs on these devices.
• Starting in Junos OS Release 18.4R1, EX2300 and EX3400 switches and Virtual Chassis support
MVR. You can configure up to 5 MVLANs on these devices.
• Starting in Junos OS Release 19.4R1, EX4300 multigigabit model (EX4300-48MP) switches and
Virtual Chassis support MVR. You can configure up to 10 MVLANs on these devices.
NOTE: MVR has some configuration and operational differences on EX Series switches that use
the Enhanced Layer 2 Software (ELS) configuration style compared to MVR on switches that do
not support ELS. Where applicable, the following sections explain these differences.
MVR Modes
MVR can operate in two modes: MVR transparent mode and MVR proxy mode. Both modes enable
MVR to forward only one copy of a multicast stream to the Layer 2 network. However, the main
difference between the two modes is in how the device sends IGMP reports upstream to the multicast
router. The device essentially handles IGMP queries the same way in either mode.
You configure MVR modes differently on non-ELS and ELS switches. Also, on ELS switches, you can
associate an MVLAN with some MVR receiver VLANs operating in proxy mode and others operating in
transparent mode if you have multicast requirements for both modes in your network.
Transparent mode is the default mode when you configure an MVR receiver VLAN, also called a data-
forwarding receiver VLAN.
NOTE: On ELS switches, you can explicitly configure transparent mode, although it is also the
default setting if you don’t configure an MVR receiver mode.
In MVR transparent mode, the device handles IGMP packets destined for both the multicast source
VLAN and multicast receiver VLANs similarly to the way that it handles them when MVR is not being
used. Without MVR, when a host on a VLAN sends IGMP join and leave messages, the device forwards
the messages to all multicast router interfaces in the VLAN. Similarly, when a VLAN receives IGMP
queries from its multicast router interfaces, it forwards the queries to all interfaces in the VLAN.
With MVR in transparent mode, the device handles IGMP reports and queries as follows:
• Receives IGMP join and leave messages on MVR receiver VLAN interfaces and forwards them to the
multicast router ports on the MVR receiver VLAN.
• Forwards IGMP queries on the MVR receiver VLAN to all MVR receiver ports.
• Forwards IGMP queries received on the MVLAN only to the MVR receiver ports that are in receiver
VLANs associated with that MVLAN, even though those ports might not be on the MVLAN itself.
NOTE: Devices in transparent mode only send IGMP reports in the context of the MVR receiver
VLAN. In other words, if MVR receiver ports receive an IGMP query from an upstream multicast
router on the MVLAN, they only send replies on the MVR receiver VLAN multicast router ports.
The upstream router (that sent the queries on the MVLAN) does not receive the replies and does
not forward any traffic, so to solve this problem, you must configure static membership. As a
result, we recommend that you use MVR proxy mode instead of transparent mode on the device
that is closest to the upstream multicast router. See "MVR Proxy Mode" on page 246.
If a host on a multicast receiver port in the MVR receiver VLAN joins a group, the device adds the
appropriate bridging entry on the MVLAN for that group. When the device receives traffic on the
MVLAN for that group, it forwards the traffic on that port tagged with the MVLAN tag (even though the
port is not in the MVLAN). Likewise, if a host on a multicast receiver port on the MVR receiver VLAN
leaves a group, the device deletes the matching bridging entry, and the MVLAN stops forwarding that
group’s MVR traffic on that port.
When in transparent mode, by default, the device installs bridging entries only on the MVLAN that is the
source for the group address, so if the device receives MVR receiver VLAN traffic for that group, the
device would not forward the traffic to receiver ports on the MVR receiver VLAN that sent the join
message for that group. The device only forwards traffic to MVR receiver interfaces on the MVLAN. To
enable MVR receiver VLAN ports to receive traffic forwarded on the MVR receiver VLAN, you can
configure the install option at the [edit protocols igmp-snooping vlans vlan-name data-forwarding
receiver] hierarchy level so the device also installs the bridging entries on the MVR receiver VLAN.
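For illustration only, a minimal sketch of such a receiver-VLAN configuration might look like the
following. The VLAN names are examples, and the source-vlans statement used here to associate the
receiver VLAN with its MVLAN is an assumption; verify the exact statement names for your platform
and release.

[edit protocols igmp-snooping]
user@switch# set vlans v10 data-forwarding receiver source-vlans mvlan100
user@switch# set vlans v10 data-forwarding receiver install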
When you configure MVR in proxy mode, the device acts as an IGMP proxy to the multicast router for
MVR group membership requests received on MVR receiver VLANs. That means the device forwards
IGMP reports from hosts on MVR receiver VLANs in the context of the MVLAN and forwards them
only to the multicast router ports on the MVLAN. The multicast router receives IGMP reports only on
the MVLAN for those MVR receiver hosts.
The device handles IGMP queries in the same way as in transparent mode:
• Forwards IGMP queries received on the MVR receiver VLAN to all MVR receiver ports.
• Forwards IGMP queries received on the MVLAN only to the MVR receiver ports that are in receiver
VLANs belonging to that MVLAN, even though those ports might not be on the MVLAN itself.
In proxy mode, for multicast group memberships established in the context of the MVLAN, the device
installs bridging entries only on the MVLAN and forwards incoming MVLAN traffic to hosts on the MVR
receiver VLANs subscribed to those groups. Proxy mode doesn’t support the install option that enables
the device to also install bridging entries on the MVR receiver VLANs. As a result, when the device
receives traffic on an MVR receiver VLAN, it does not forward the traffic to the hosts on the MVR
receiver VLAN because the device does not have bridging entries for those MVR receiver ports on the
MVR receiver VLANs.
On non-ELS switches, you configure MVR proxy mode on an MVLAN using the "proxy" on page 1795
statement at the [edit protocols igmp-snooping vlan vlan-name] hierarchy level along with other IGMP
snooping configuration options.
NOTE: On non-ELS switches, this proxy configuration statement only supports MVR proxy mode
configuration. General IGMP snooping proxy operation is not supported.
When this option is enabled on non-ELS switches, the device acts as an IGMP proxy for any MVR
groups sourced by the MVLAN in both the upstream and downstream directions. In the downstream
direction, the device acts as the querier for those multicast groups in the MVR receiver VLANs. In the
upstream direction, the device originates the IGMP reports and leave messages, and answers IGMP
queries from multicast routers. Configuring this proxy option on an MVLAN automatically enables MVR
proxy operation for all MVR receiver VLANs associated with the MVLAN.
On ELS switches, you configure MVR proxy mode on the MVR receiver VLANs. You can configure MVR
proxy mode separately from IGMP snooping proxy mode, as follows:
• IGMP snooping proxy mode—You can use the "proxy" on page 1795 statement at the [edit protocols
igmp-snooping vlan vlan-name] hierarchy level on ELS switches to enable IGMP proxy operation
with or without MVR configuration. When you configure this option for a VLAN without configuring
MVR, the device acts as an IGMP proxy to the multicast router for ports in that VLAN. When you
configure this option on an MVLAN, the device acts as an IGMP proxy between the multicast router
and hosts in any associated MVR receiver VLANs.
NOTE: You configure this proxy mode on the MVLAN only, not on MVR receiver VLANs.
• MVR proxy mode—On ELS switches, you configure MVR proxy mode on an MVR receiver VLAN
(rather than on the MVLAN), using the proxy option at the [edit protocols igmp-snooping vlan vlan-
name data-forwarding receiver mode] hierarchy level, when you associate the MVR receiver VLAN with an
MVLAN. An ELS switch operating in MVR proxy mode for an MVR receiver VLAN acts as an IGMP
proxy for that MVR receiver VLAN to the multicast router in the context of the MVLAN.
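For example, a sketch of enabling MVR proxy mode on an ELS switch for an MVR receiver VLAN v10
(the name is illustrative):
[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode proxy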
When you configure MVR, the device sends multicast traffic and IGMP query packets downstream to
hosts in the context of the MVLAN by default. The MVLAN tag is included for VLAN-tagged traffic
egressing on trunk ports, while traffic egressing on access ports is untagged.
On ELS EX Series switches that support MVR, for VLANs with trunk ports and hosts on a multicast
receiver VLAN that expect traffic in the context of that receiver VLAN, you can configure the device to
translate the MVLAN tags into the multicast receiver VLAN tags. See the translate option at the [edit
protocols igmp-snooping vlan vlan-name data-forwarding receiver] hierarchy level.
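For example, a sketch of enabling tag translation for an MVR receiver VLAN v10 (the name is
illustrative):
[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver translate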
Based on the access layer topology of your network, the following sections describe recommended
ways you should configure MVR on devices in the access layer to smoothly deliver a single multicast
stream to subscribed hosts in multiple VLANs.
NOTE: These sections apply to EX Series switches running Junos OS with the Enhanced Layer 2
Software (ELS) configuration style only.
Figure 28 on page 249 shows a device in a single-tier access layer topology. The device is connected to
a multicast router in the upstream direction (INTF-1), with host trunk or access ports in the downstream
direction connected to multicast receivers in two different VLANs (v10 on INTF-2 and v20 on INTF-3).
Without MVR, the upstream interface (INTF-1) acts as a multicast router interface to the upstream
router and a trunk port in both VLANs. In this configuration, the upstream router would require two
integrated routing and bridging (IRB) interfaces to send two copies of the multicast stream to the device,
which then would forward the traffic to the receivers on the two different VLANs on INTF-2 and
INTF-3.
With MVR configured as indicated in Figure 28 on page 249, the multicast stream can be sent to
receivers in different VLANs in the context of a single MVLAN, and the upstream router only requires
one downstream IRB interface on which to send one MVLAN stream to the device.
For MVR to operate smoothly in this topology, we recommend you set up the following elements on the
single-tier device as illustrated in Figure 28 on page 249:
• An MVLAN with the device’s upstream multicast router interface configured as a trunk port and a
multicast router interface in the MVLAN. This upstream interface was already a trunk port and a
multicast router port for the receiver VLANs that will be associated with the MVLAN.
Figure 28 on page 249 shows an MVLAN configured on the device, and the upstream interface
INTF-1 configured previously as a trunk port and multicast router port in v10 and v20, is
subsequently added as a trunk and multicast router port in the MVLAN as well.
In Figure 28 on page 249, the device is connected to Host 1 on VLAN v10 (using trunk interface
INTF-2) and Host 2 on v20 (using access interface INTF-3). VLANs v10 and v20 use INTF-1 as a
trunk port and multicast router port in the upstream direction. These VLANs become MVR receiver
VLANs for the MVLAN, with INTF-1 also added as a trunk port and multicast router port in the
MVLAN.
• MVR running in proxy mode on the device, so the device processes MVR receiver VLAN IGMP group
memberships in the context of the MVLAN. The upstream router sends only one multicast stream on
the MVLAN downstream to the device, which is forwarded to hosts on the MVR receiver VLANs that
are subscribed to the multicast groups sourced by the MVLAN.
The device in Figure 28 on page 249 is configured in proxy mode and establishes group memberships
on the MVLAN for hosts on MVR receiver VLANs v10 and v20. The upstream router in the figure
sends only one multicast stream on the MVLAN through INTF-1 to the device, which forwards the
traffic to subscribed hosts on MVR receiver VLANs v10 and v20.
• MVR receiver VLAN tag translation enabled on receiver VLANs that have hosts on trunk ports, so
those hosts receive the multicast traffic in the context of their receiver VLANs. Hosts reached by way
of access ports receive untagged multicast packets (and don’t need MVR VLAN tag translation).
In Figure 28 on page 249, the device has translation enabled on v10 and substitutes the v10 VLAN
tag for the mvlan VLAN tag when forwarding the multicast stream on trunk interface INTF-2. The
device does not have translation enabled on v20, and forwards untagged multicast packets on access
port INTF-3.
Figure 29 on page 251 shows devices in a two-tier access layer topology. The upper or upstream device
is connected to the multicast router in the upstream direction (INTF-1) and to a second device
downstream (INTF-2). The lower or downstream device connects to the upstream device (INTF-3), and
uses trunk or access ports in the downstream direction to connect to multicast receivers in two different
VLANs (v10 on INTF-4 and v20 on INTF-5).
Without MVR, similar to the single-tier access layer topology, the upper device connects to the
upstream multicast router using a multicast router interface that is also a trunk port in both receiver
VLANs. The two layers of devices are connected with trunk ports in the receiver VLANs. The lower
device has trunk or access ports in the receiver VLANs connected to the multicast receiver hosts. In this
configuration, the upstream router must duplicate the multicast stream and use two IRB interfaces to
send copies of the same data to the two VLANs. The upstream device also sends duplicate streams
downstream for receivers on the two VLANs.
With MVR configured as shown in Figure 29 on page 251, the multicast stream can be sent to receivers
in different VLANs in the context of a single MVLAN from the upstream router and through the multiple
tiers in the access layer.
For MVR to operate smoothly in this topology, we recommend that you set up the following elements on
the different tiers of devices in the access layer, as illustrated in Figure 29 on page 251:
• An MVLAN configured on the devices in all tiers in the access layer. The device in the uppermost tier
connects to the upstream multicast router with a multicast router interface and a trunk port in the
MVLAN. This upstream interface was already a trunk port and a multicast router port for the receiver
VLANs that will be associated with the MVLAN.
Figure 29 on page 251 shows an MVLAN configured on all tiers of devices. The upper-tier device is
connected to the multicast router using interface INTF-1, configured previously as a trunk port and
multicast router port in v10 and v20, and subsequently added to the configuration as a trunk and
multicast router port in the MVLAN as well.
• MVR receiver VLANs associated with the MVLAN on the devices in all tiers in the access layer.
In Figure 29 on page 251, the lower-tier device is connected to Host 1 on VLAN v10 (using trunk
interface INTF-4) and Host 2 on v20 (using access interface INTF-5). VLANs v10 and v20 use INTF-3
as a trunk port and multicast router port in the upstream direction to the upper-tier device. The
upper device connects to the lower device using INTF-2 as a trunk port in the downstream direction
to send IGMP queries and forward multicast traffic on v10 and v20. VLANs v10 and v20 are then
configured as MVR receiver VLANs for the MVLAN, with INTF-3 also added as a trunk port and
multicast router port in the MVLAN. VLANs v10 and v20 are also configured on the upper-tier
device as MVR receiver VLANs for the MVLAN.
• MVR running in proxy mode on the device in the uppermost tier for the MVR receiver VLANs, so the
device acts as a proxy to the multicast router for group membership requests received on the MVR
receiver VLANs. The upstream router sends only one multicast stream on the MVLAN downstream
to the device.
In Figure 29 on page 251, the upper-tier device is configured in proxy mode and establishes group
memberships on the MVLAN for hosts on MVR receiver VLANs v10 and v20. The upstream router in
the figure sends only one multicast stream on the MVLAN, which reaches the upper device through
INTF-1. The upper device forwards the stream to the devices in the lower tiers using INTF-2.
• No MVR receiver VLAN tag translation enabled on MVLAN traffic egressing from upper-tier devices.
Devices in the intermediate tiers should forward MVLAN traffic downstream in the context of the
MVLAN, tagged with the MVLAN tag.
The upper device in the figure does not have translation enabled for either receiver VLAN v10 or v20
for the interface INTF-2 that connects to the lower-tier device.
• MVR running in transparent mode on the devices in the lower tiers of the access layer. The lower
devices send IGMP reports upstream in the context of the receiver VLANs because they are
operating in transparent mode, and install bridging entries for the MVLAN only, by default, or with
the install option configured, for both the MVLAN and the MVR receiver VLANs. The uppermost
device is running in proxy mode and installs bridging entries for the MVLAN only. The upstream
router sends only one multicast stream on the MVLAN downstream toward the receivers, and the
traffic is forwarded to the MVR receiver VLANs in the context of the MVLAN, with VLAN tag
translation if the translate option is enabled (described next).
In Figure 29 on page 251, the lower device is connected to the upper device with INTF-3 as a trunk
port and the multicast router port for receiver VLANs v10 and v20. To enable MVR on the lower-tier
device, the two MVR receiver VLANs are configured in MVR transparent mode, and INTF-3 is
additionally configured to be a trunk port and multicast router port for the MVLAN.
• MVR receiver VLAN tag translation enabled on receiver VLANs on lower-tier devices that have hosts
on trunk ports, so those hosts receive the multicast traffic in the context of their receiver VLANs.
Hosts reached by way of access ports receive untagged packets, so no VLAN tag translation is
needed in that case.
In Figure 29 on page 251, the device has translation enabled on v10 and substitutes the v10 receiver
VLAN tag for mvlan’s VLAN tag when forwarding the multicast stream on trunk interface INTF-4.
The device does not have translation enabled on v20, and forwards untagged multicast packets on
access port INTF-5.
Release Description
19.4R1 Starting in Junos OS Release 19.4R1, EX4300 multigigabit model (EX4300-48MP) switches and Virtual
Chassis support MVR. You can configure up to 10 MVLANs on these devices.
18.4R1 Starting in Junos OS Release 18.4R1, EX2300 and EX3400 switches and Virtual Chassis support MVR.
You can configure up to 5 MVLANs on these devices.
18.3R1 Starting in Junos OS Release 18.3R1, EX4300 switches and Virtual Chassis support MVR. You can
configure up to 10 MVLANs on these devices.
IN THIS SECTION
Viewing MVLAN and MVR Receiver VLAN Information on EX Series Switches with ELS | 263
Multicast VLAN registration (MVR) enables hosts that are not part of a multicast VLAN (MVLAN) to
receive multicast streams from the MVLAN, sharing the MVLAN across multiple VLANs in a Layer 2
network. Hosts remain in their own VLANs for bandwidth and security reasons but are able to receive
multicast streams on the MVLAN.
MVR is not enabled by default on switches that support MVR. You must explicitly configure a switch
with a data-forwarding source MVLAN and associate it with one or more data-forwarding MVR receiver
VLANs. When you configure one or more VLANs on a switch to be MVR receiver VLANs, you must
configure at least one associated source MVLAN. However, you can configure a source MVLAN without
associating MVR receiver VLANs with it at the same time.
The overall purpose and benefits of employing MVR are the same on switches that use Enhanced
Layer 2 Software (ELS) configuration style and those that do not use ELS. However, there are differences
in MVR configuration and operation on the two types of switches.
• In an access layer with a single tier of switches, where a switch is connected to a multicast router in
the upstream direction, and has host trunk or access ports connecting to downstream multicast
receivers:
• Statically configure the upstream interface to the multicast router as a multicast router port in the
MVLAN.
• Configure the translate option on MVR receiver VLANs that have trunk ports, so hosts on those
trunk ports receive the multicast packets tagged for their own VLANs.
• In an access layer with multiple tiers of switches, with a switch connected upstream to the multicast
router and a path through one or more downstream switches to multicast receivers:
• Configure MVR on the receiver VLANs to operate in proxy mode on the uppermost switch that is
directly connected to the upstream multicast router.
• Configure MVR on the receiver VLANs to operate in transparent mode for the remaining
downstream tiers of switches.
• Statically configure a multicast router port to the switch in the upstream direction on each tier for
the MVLAN.
• On the lowest tier of MVR switches (connected to receiver hosts), configure MVLAN tag
translation for MVR receiver VLANs that have trunk ports, so hosts on those trunk ports receive
the multicast stream with the packets tagged with their own VLANs.
NOTE: When enabling MVR on ELS switches, depending on your multicast network
requirements, you can have some MVR receiver VLANs configured in proxy mode and some in
transparent mode that are associated with the same MVLAN, because the MVR mode setting
applies individually to an MVR receiver VLAN. The mode configurations described here are only
recommendations for smooth MVR operation in those topologies.
The following constraints apply when configuring MVR on ELS EX Series switches:
• A VLAN can be configured as either an MVLAN or an MVR receiver VLAN, not both. However, an
MVR receiver VLAN can be associated with more than one MVLAN.
• An MVLAN can be the source for only one multicast group subnet, so multiple MVLANs configured
on a switch must have unique multicast group subnet ranges.
• You can configure an interface in both an MVR receiver VLAN and its MVLAN only if it is configured
as a multicast router port in both VLANs.
• You cannot configure proxy mode with the install option to also install forwarding entries on an MVR
receiver VLAN. In proxy mode, IGMP reports are sent to the upstream router only in the context of
the MVLAN. Multicast sources will not receive IGMP reports on the MVR receiver VLAN, and
multicast traffic will not be sent on the MVR receiver VLAN.
• MVR does not support configuring an MVLAN or MVR receiver VLANs on private VLANs (PVLANs).
To configure MVR on an ELS switch:
1. Configure a VLAN to be a data-forwarding source MVLAN for a multicast group subnet.
For example, configure VLAN mvlan as an MVLAN for multicast group subnet 233.252.0.0/24:
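A command similar to the following implements this step (a sketch; verify the statement path for
your platform and release):
[edit protocols]
user@switch# set igmp-snooping vlan mvlan data-forwarding source groups 233.252.0.0/24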
2. Configure one or more data-forwarding MVR receiver VLANs associated with the source MVLAN:
For example, configure two MVR receiver VLANs v10 and v20 associated with the MVLAN named
mvlan:
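Commands similar to the following implement this step (a sketch):
[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v20 data-forwarding receiver source-vlans mvlan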
3. On the switch in a single-tier topology, or on the uppermost switch in a multiple-tier topology,
configure the MVR receiver VLANs to operate in proxy mode.
For example, configure the two MVR receiver VLANs v10 and v20 (associated with the MVLAN
named mvlan) from the previous step to use proxy mode:
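Commands similar to the following implement this step (a sketch):
[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode proxy
user@switch# set igmp-snooping vlan v20 data-forwarding receiver mode proxy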
NOTE: On ELS switches, the MVR mode setting applies to individual MVR receiver VLANs. All
MVR receiver VLANS associated with an MVLAN are not required to have the same mode
setting. Depending on your multicast network requirements, you might want to configure
some MVR receiver VLANs in proxy mode and others that are associated with the same
MVLAN in transparent mode.
4. In a multiple-tier topology, for the remaining switches that are not the uppermost switch, configure
each MVR receiver VLAN on each switch to operate in transparent mode. An MVR receiver VLAN
operates in transparent mode by default if you do not set the mode explicitly, so this step is optional
on these switches.
For example, configure two MVR receiver VLANs v10 and v20 that are associated with the MVLAN
named mvlan to use transparent mode:
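Commands similar to the following implement this step (a sketch):
[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode transparent
user@switch# set igmp-snooping vlan v20 data-forwarding receiver mode transparent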
5. Configure a multicast router port in the upstream direction for the MVLAN on the MVR switch in a
single-tier topology or on the MVR switch in each tier of a multiple-tier topology:
For example, configure a multicast router interface ge-0/0/10.0 for the MVLAN named mvlan:
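A command similar to the following implements this step (a sketch):
[edit protocols]
user@switch# set igmp-snooping vlan mvlan interface ge-0/0/10.0 multicast-router-interface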
6. On an MVR switch connected to the receiver hosts with trunk or access ports (applies only to the
lowest tier in a multiple-tier topology), configure MVLAN tag translation on MVR receiver VLANs
that have trunk ports, so hosts on the trunk ports can receive the multicast stream with the packets
tagged with their own VLANs:
For example, a switch connects to receiver hosts on MVR receiver VLAN v10 using a trunk port, but
reaches receiver hosts on MVR receiver VLAN v20 on an access port, so configure the MVR translate
option only on VLAN v10:
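A command similar to the following implements this step (a sketch):
[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver translate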
7. (Optional and applicable only to MVR receiver VLANs configured in transparent mode) Install
forwarding entries for an MVR receiver VLAN as well as the MVLAN:
NOTE: This option cannot be configured for an MVR receiver VLAN configured in proxy
mode.
For example:
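A command similar to the following implements this step (a sketch):
[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver install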
Figure 30 on page 259 illustrates a single-tier access layer topology in which MVR is employed with an
MVLAN named mvlan and receiver hosts on MVR receiver VLANs v10 and v20. A sample of the
recommended MVR configuration for this topology follows the figure.
The MVR switch in Figure 30 on page 259 is configured in proxy mode, connects to the upstream
multicast router on interface INTF-1, and connects to receiver hosts on v10 using trunk port INTF-2 and
on v20 using access port INTF-3. The switch is configured to translate MVLAN tags in the multicast
stream into the receiver VLAN tags only for v10 on INTF-2.
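A configuration similar to the following implements these recommendations (a sketch: the group
subnet is illustrative, and ge-0/0/1.0 is assumed to correspond to INTF-1):
[edit protocols]
user@switch# set igmp-snooping vlan mvlan data-forwarding source groups 233.252.0.0/24
user@switch# set igmp-snooping vlan mvlan interface ge-0/0/1.0 multicast-router-interface
user@switch# set igmp-snooping vlan v10 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode proxy
user@switch# set igmp-snooping vlan v10 data-forwarding receiver translate
user@switch# set igmp-snooping vlan v20 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v20 data-forwarding receiver mode proxy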
Figure 31 on page 261 illustrates a two-tier access layer topology in which MVR is employed with an
MVLAN named mvlan, MVR receiver VLANs v10 and v20, and receiver hosts connected to trunk port
INTF-4 on v10 and access port INTF-5 on v20. A sample of the recommended MVR configuration for
this topology follows the figure.
The upper switch in Figure 31 on page 261 connects to the upstream multicast router on INTF-1, and
the lower switch connects to the upper switch on INTF-3, both configured as trunk ports and multicast
router interfaces in the MVLAN. The upper switch is configured in proxy mode and the lower switch is
configured in transparent mode for all MVR receiver VLANs. The lower switch is configured to translate
MVLAN tags in the multicast stream into the receiver VLAN tags for v10 on INTF-4.
Upper Switch:
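A sketch of the upper-switch configuration (the group subnet is illustrative, and ge-0/0/1.0 is assumed
to correspond to INTF-1):
[edit protocols]
user@switch# set igmp-snooping vlan mvlan data-forwarding source groups 233.252.0.0/24
user@switch# set igmp-snooping vlan mvlan interface ge-0/0/1.0 multicast-router-interface
user@switch# set igmp-snooping vlan v10 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode proxy
user@switch# set igmp-snooping vlan v20 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v20 data-forwarding receiver mode proxy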
Lower Switch:
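A sketch of the lower-switch configuration (ge-0/0/3.0 is assumed to correspond to INTF-3):
[edit protocols]
user@switch# set igmp-snooping vlan mvlan data-forwarding source groups 233.252.0.0/24
user@switch# set igmp-snooping vlan mvlan interface ge-0/0/3.0 multicast-router-interface
user@switch# set igmp-snooping vlan v10 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode transparent
user@switch# set igmp-snooping vlan v10 data-forwarding receiver translate
user@switch# set igmp-snooping vlan v20 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v20 data-forwarding receiver mode transparent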
Viewing MVLAN and MVR Receiver VLAN Information on EX Series Switches with
ELS
On EX Series switches with the Enhanced Layer 2 Software (ELS) configuration style that support MVR,
you can use the "show igmp snooping data-forwarding" on page 2159 command to view information
about the MVLANs and MVR receiver VLANs configured on a switch, as follows:
Vlan: v2
Learning-Domain : default
Type : MVR Source Vlan
Group subnet : 225.0.0.0/24
Receiver vlans:
vlan: v1
vlan: v3
Vlan: v1
Learning-Domain : default
Type : MVR Receiver Vlan
Mode : PROXY
Egress translate : FALSE
Install route : FALSE
Source vlans:
vlan: v2
Vlan: v3
Learning-Domain : default
Type : MVR Receiver Vlan
Mode : TRANSPARENT
Egress translate : FALSE
Install route : TRUE
Source vlans:
vlan: v2
MVLANs are listed as Type: MVR Source Vlan with the associated group subnet range and MVR
receiver VLANs. MVR receiver VLANs are listed as Type: MVR Receiver Vlan with the associated source
MVLANs and configured options (proxy or transparent mode, VLAN tag translation, and installation of
receiver VLAN forwarding entries).
In addition, the "show igmp snooping interface" on page 2163 and "show igmp snooping membership" on
page 2171 commands on ELS EX Series switches list MVR receiver VLAN interfaces under both the MVR
receiver VLAN and its MVLAN, and display the output field Data-forwarding receiver: yes when MVR
receiver ports are listed under the MVLAN. This field is not displayed for other interfaces in an MVLAN
listed under the MVLAN that are not in MVR receiver VLANs.
• A VLAN can be configured as an MVLAN or an MVR receiver VLAN, but not both. However, an MVR
receiver VLAN can be associated with more than one MVLAN.
• An MVLAN can be the source for only one multicast group subnet, so multiple MVLANs configured
on a switch must have disjoint multicast group subnets.
• After you configure a VLAN as an MVLAN, that VLAN is no longer available for other uses.
• You cannot enable multicast protocols on VLAN interfaces that are members of MVLANs.
• If you configure an MVLAN in proxy mode, IGMP snooping proxy mode is automatically enabled on
all MVR receiver VLANs of this MVLAN. If a VLAN is an MVR receiver VLAN for multiple MVLANs,
all of the MVLANs must have proxy mode enabled or all must have proxy mode disabled. You can
enable proxy mode only on VLANs that are configured as MVR source VLANs and that are not
configured for Q-in-Q tunneling.
• You cannot configure proxy mode with the install option to also install forwarding entries for
received IGMP packets on an MVR receiver VLAN.
1. Configure the VLAN named mv0 to be a data-forwarding source MVLAN for the multicast group
subnet 225.10.0.0/16:
[edit protocols]
user@switch# set igmp-snooping vlan mv0 data-forwarding source groups 225.10.0.0/16
2. (Optional) To operate MVR in proxy mode, configure the proxy option on the MVLAN and specify the
source address the switch uses in the IGMP packets it generates:
[edit protocols]
user@switch# set igmp-snooping vlan mv0 proxy source-address 10.0.0.1
3. Configure the VLAN named v2 to be an MVR receiver VLAN with mv0 as its source:
[edit protocols]
user@switch# set igmp-snooping vlan v2 data-forwarding receiver source-vlans mv0
4. (Optional, transparent mode only) Install forwarding entries for received IGMP packets on the MVR
receiver VLAN as well as on the MVLAN:
[edit protocols]
user@switch# set igmp-snooping vlan v2 data-forwarding receiver install
IN THIS SECTION
Requirements | 266
Configuration | 270
Multicast VLAN registration (MVR) enables hosts that are not part of a multicast VLAN (MVLAN) to
receive multicast streams from the MVLAN, which enables the MVLAN to be shared across the Layer 2
network and eliminates the need to send duplicate multicast streams to each requesting VLAN in the
network. Hosts remain in their own VLANs for bandwidth and security reasons.
NOTE: This example describes configuring MVR only on EX Series and QFX Series switches that
do not support the Enhanced Layer 2 Software configuration style.
Requirements
This example uses the following hardware and software components:
• Junos OS Release 9.6 or later for EX Series switches or Junos OS Release 12.3 or later for the QFX
Series
• Configured two or more VLANs on the switch. See the task for your platform:
• Example: Setting Up Bridging with Multiple VLANs on Switches for the QFX Series and EX4600
switch
• Connected the switch to a network that can transmit IPTV multicast streams from a video server.
• Connected a host that is capable of receiving IPTV multicast streams to an interface in one of the
VLANs.
IN THIS SECTION
Topology | 267
In a standard Layer 2 network, a multicast stream received on one VLAN is never distributed to
interfaces outside that VLAN. If hosts in multiple VLANs request the same multicast stream, a separate
copy of that multicast stream is distributed to the requesting VLANs.
MVR introduces the concept of a multicast source VLAN (MVLAN), which is created by MVR and
becomes the only VLAN over which multicast traffic flows throughout the Layer 2 network. Multicast
traffic can then be selectively forwarded from interfaces on the MVLAN (source ports) to hosts that are
connected to interfaces (multicast receiver ports) that are not part of the multicast source VLAN. When
you configure an MVLAN, you assign a range of multicast group addresses to it. You then configure
other VLANs to be MVR receiver VLANs, which receive multicast streams from the MVLAN. The MVR
receiver ports comprise all the interfaces that exist on any of the MVR receiver VLANs.
Topology
You can configure MVR to operate in one of two modes: transparent mode (the default mode) or proxy
mode. Both modes enable MVR to forward only one copy of a multicast stream to the Layer 2 network.
In transparent mode, the switch receives one copy of each IPTV multicast stream and then replicates the
stream only to those hosts that want to receive it, while forwarding all other types of multicast traffic
without modification. Figure 32 on page 268 shows how MVR operates in transparent mode.
In proxy mode, the switch acts as a proxy for the IGMP multicast router in the MVLAN for MVR group
memberships established in the MVR receiver VLANs and generates and sends IGMP packets into the
MVLAN as needed. Figure 33 on page 269 shows how MVR operates in proxy mode.
This example shows how to configure MVR in both transparent mode and proxy mode on an EX Series
switch or the QFX Series. The topology includes a video server that is connected to a multicast router,
which in turn forwards the IPTV multicast traffic in the MVLAN to the Layer 2 network.
Figure 32 on page 268 shows the MVR topology in transparent mode. Interfaces P1 and P2 on Switch C
belong to service VLAN s0 and MVLAN mv0. Interface P4 of Switch C also belongs to service VLAN s0.
In the upstream direction of the network, only non-IPTV traffic is being carried in individual customer
VLANs of service VLAN s0. VLAN c0 is an example of this type of customer VLAN. IPTV traffic is being
carried on MVLAN mv0. If any host on any customer VLAN connected to port P4 requests an MVR
stream, Switch C takes the stream from VLAN mv0 and replicates that stream onto port P4 with tag
mv0. IPTV traffic, along with other network traffic, flows from port P4 out to the Digital Subscriber Line
Access Multiplexer (DSLAM) D1.
Figure 33 on page 269 shows the MVR topology in proxy mode. Interfaces P1 and P2 on Switch C
belong to MVLAN mv0 and customer VLAN c0. Interface P4 on Switch C is an access port of customer
VLAN c0. In the upstream direction of the network, only non-IPTV traffic is being carried on customer
VLAN c0. Any IPTV traffic requested by hosts on VLAN c0 is replicated untagged to port P4 based on
streams received in MVLAN mv0. IPTV traffic flows from port P4 out to an IPTV-enabled device in Host
H1. Other traffic, such as data and voice traffic, also flows from port P4 to other network devices in
Host H1.
For information on VLAN tagging, see the topic for your platform:
Configuration
IN THIS SECTION
Procedure | 270
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit protocols igmp-snooping] hierarchy level.
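For example, a sketch consistent with the topology in this example (the multicast group range is
illustrative; omit the last command to leave MVR in the default transparent mode, or include it for
proxy mode):
set vlan mv0 data-forwarding source groups 233.252.0.0/24
set vlan s0 data-forwarding receiver source-vlans mv0
set vlan c0 data-forwarding receiver source-vlans mv0
set vlan mv0 proxy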
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User
Guide.
To configure MVR:
Results
From configuration mode, confirm your configuration by entering the show command at the [edit
protocols igmp-snooping] hierarchy level. If the output does not display the intended configuration,
repeat the instructions in this example to correct the configuration.
RELATED DOCUMENTATION
Routing Content to Densely Clustered Receivers with PIM Dense Mode | 294
Routing Content to Larger, Sparser Groups with PIM Sparse Mode | 305
Rapidly Detecting Communication Failures with PIM and the BFD Protocol | 499
CHAPTER 6
Understanding PIM
PIM Overview
The predominant multicast routing protocol in use on the Internet today is Protocol Independent
Multicast, or PIM. The type of PIM used on the Internet is PIM sparse mode. PIM sparse mode is so
accepted that when the simple term “PIM” is used in an Internet context, some form of sparse mode
operation is assumed.
PIM emerged as an algorithm to overcome the limitations of dense-mode protocols such as the Distance
Vector Multicast Routing Protocol (DVMRP), which was efficient for dense clusters of multicast
receivers, but did not scale well for the larger, sparser groups encountered on the Internet. The Core
Based Trees (CBT) Protocol was intended to support sparse mode as well, but CBT, with its all-powerful
core approach, made placement of the core critical, and large conference-type applications (many-to-
many) resulted in bottlenecks in the core. PIM was designed to avoid the dense-mode scaling issues of
DVMRP and the potential performance issues of CBT at the same time.
Starting in Junos OS Release 15.2, only PIM version 2 is supported. In the CLI, the command for
specifying a version (1 or 2) is removed.
PIMv1 and PIMv2 can coexist on the same routing device and even on the same interface. The main
difference between PIMv1 and PIMv2 is the packet format. PIMv1 messages use Internet Group
Management Protocol (IGMP) packets, whereas PIMv2 has its own IP protocol number (103) and packet
structure. All routing devices connecting to an IP subnet such as a LAN must use the same PIM version.
Some PIM implementations can recognize PIMv1 packets and automatically switch the routing device
interface to PIMv1. Because the difference between PIMv1 and PIMv2 involves the message format, but
not the meaning of the message or how the routing device processes the PIM message, a routing device
can easily mix PIMv1 and PIMv2 interfaces.
PIM is used for efficient routing to multicast groups that might span wide-area and interdomain
internetworks. It is called “protocol independent” because it does not depend on a particular unicast
routing protocol. Junos OS supports bidirectional mode, sparse mode, dense mode, and sparse-dense
mode.
NOTE: ACX Series routers support only sparse mode. Dense mode on ACX Series routers is
supported only for control multicast groups used for auto-discovery of the rendezvous point (auto-RP).
PIM operates in several modes: bidirectional mode, sparse mode, dense mode, and sparse-dense mode.
In sparse-dense mode, some multicast groups are configured as dense mode (flood-and-prune, [S,G]
state) and others are configured as sparse mode (explicit join to rendezvous point [RP], [*,G] state).
PIM drafts also establish a mode known as PIM source-specific mode, or PIM SSM. In PIM SSM there is
only one specific source for the content of a multicast group within a given domain.
Because the PIM mode you choose determines the PIM configuration properties, you first must decide
whether PIM operates in bidirectional, sparse, dense, or sparse-dense mode in your network. Each mode
has distinct operating advantages in different network environments.
• In sparse mode, routing devices must join and leave multicast groups explicitly. Upstream routing
devices do not forward multicast traffic to a downstream routing device unless the downstream
routing device has sent an explicit request (by means of a join message) to the rendezvous point (RP)
routing device to receive this traffic. The RP serves as the root of the shared multicast delivery tree
and is responsible for forwarding multicast data from different sources to the receivers.
Sparse mode is well suited to the Internet, where frequent interdomain join messages and prune
messages are common.
Starting in Junos OS Release 19.2R1, on SRX300, SRX320, SRX340, SRX345, SRX550, SRX1500, and
vSRX 2.0 and vSRX 3.0 (with 2 vCPUs) Series devices, Protocol Independent Multicast (PIM) using
point-to-multipoint (P2MP) mode supports AutoVPN and Auto Discovery VPN in which a new p2mp
interface type is introduced for PIM. The p2mp interface tracks all PIM joins per neighbor to ensure
multicast forwarding or replication only happens to those neighbors that are in joined state. In
addition, the PIM using point-to-multipoint mode supports chassis cluster mode.
NOTE: On all EX Series switches (except EX4300 and EX9200), QFX5100 switches, and
OCX Series switches, the rate limit is set to 1 pps per (S,G) entry to avoid overwhelming the
rendezvous point (RP) and first-hop router (FHR) with PIM sparse mode (PIM-SM) register
messages and causing CPU hogs. This rate limit improves scaling and convergence
times by preventing duplicate packets from being trapped and tunneled to the RP in software.
(Platform support depends on the Junos OS release in your installation.)
• Bidirectional PIM is similar to sparse mode, and is especially suited to applications that must scale to
support a large number of dispersed sources and receivers. In bidirectional PIM, routing devices build
shared bidirectional trees and do not switch to a source-based tree. Bidirectional PIM scales well
because it needs no source-specific (S,G) state. Instead, it builds only group-specific (*,G) state.
• Unlike sparse mode and bidirectional mode, in which data is forwarded only to routing devices
sending an explicit PIM join request, dense mode implements a flood-and-prune mechanism, similar
to the Distance Vector Multicast Routing Protocol (DVMRP). In dense mode, a routing device
receives the multicast data on the incoming interface, then forwards the traffic to the outgoing
interface list. Flooding occurs periodically and is used to refresh state information, such as the source
IP address and multicast group pair. If the routing device has no interested receivers for the data, and
the outgoing interface list becomes empty, the routing device sends a PIM prune message upstream.
Dense mode works best in networks where few or no prunes occur. In such instances, dense mode is
actually more efficient than sparse mode.
• Sparse-dense mode, as the name implies, allows the interface to operate on a per-group basis in
either sparse or dense mode. A group specified as “dense” is not mapped to an RP. Instead, data
packets destined for that group are forwarded by means of PIM dense mode rules. A group specified
as “sparse” is mapped to an RP, and data packets are forwarded by means of PIM sparse-mode rules.
Sparse-dense mode is useful in networks implementing auto-RP for PIM sparse mode.
NOTE: On SRX Series devices, PIM does not support upstream and downstream interfaces
across different virtual routers in flow mode.
PIM dense mode requires only a multicast source and series of multicast-enabled routing devices
running PIM dense mode to allow receivers to obtain multicast content. Dense mode makes sure that all
multicast traffic gets everywhere by periodically flooding the network with multicast traffic, and relies
on prune messages to make sure that subnets where all receivers are uninterested in that particular
multicast group stop receiving packets.
PIM sparse mode is more complicated and requires the establishment of special routing devices called
rendezvous points (RPs) in the network core. These routing devices are where upstream join messages
from interested receivers meet downstream traffic from the source of the multicast group content. A
network can have many RPs, but PIM sparse mode allows only one RP to be active for any multicast
group.
If there is only one RP in a routing domain, the RP and adjacent links might become congested and form
a single point of failure for all multicast traffic. Thus, multiple RPs are the rule, but the issue then
becomes how other multicast routing devices find the RP that is the source of the multicast group the
receiver is trying to join. This RP-to-group mapping is controlled by a special bootstrap router (BSR)
running the PIM BSR mechanism. There can be more than one bootstrap router as well, also for single-
point-of-failure reasons.
The bootstrap router does not have to be an RP itself, although this is a common implementation. The
bootstrap router's main function is to manage the collection of RPs and allow interested receivers to find
the source of their group's multicast traffic. PIM bootstrap messages are sourced from the loopback
address, which is always up. The loopback address must be routable. If it is not routable, then the
bootstrap router is unable to send bootstrap messages to update the RP domain members. The show
pim bootstrap command displays only those bootstrap routers that have routable loopback addresses.
PIM SSM can be seen as a subset, or special case, of PIM sparse mode and requires no specialized
equipment other than that used for PIM sparse mode (and IGMP version 3).
Bidirectional PIM RPs, unlike RPs for PIM sparse mode, do not need to perform PIM Register tunneling
or other specific protocol action. Bidirectional PIM RPs implement no specific functionality. RP
addresses are simply a location in the network to rendezvous toward. In fact, for bidirectional PIM, RP
addresses need not be loopback interface addresses or even be addresses configured on any routing
device, as long as they are covered by a subnet that is connected to a bidirectional PIM-capable routing
device and advertised to the network.
Release Description
19.2R1 Starting in Junos OS Release 19.2R1, on SRX300, SRX320, SRX340, SRX345, SRX550, SRX1500, and
vSRX 2.0 and vSRX 3.0 (with 2 vCPUs) Series devices, Protocol Independent Multicast (PIM) using point-
to-multipoint (P2MP) mode supports AutoVPN and Auto Discovery VPN in which a new p2mp interface
type is introduced for PIM.
15.2 Starting in Junos OS Release 15.2, only PIM version 2 is supported. In the CLI, the command for
specifying a version (1 or 2) is removed.
You can configure several Protocol Independent Multicast (PIM) features on an interface regardless of its
PIM mode (bidirectional, sparse, dense, or sparse-dense mode).
NOTE: ACX Series routers support only sparse mode. Dense mode on ACX Series routers is
supported only for control multicast groups used for auto-discovery of the rendezvous point (auto-RP).
If you configure PIM on an aggregated (ae- or as-) interface, each of the interfaces in the aggregate is
included in the multicast output interface list and carries the single stream of replicated packets in a
load-sharing fashion. The multicast aggregate interface is “expanded” into its constituent interfaces in
the next-hop database.
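For example, a sketch of enabling PIM sparse mode on an aggregated Ethernet interface (the interface
name is illustrative):
[edit protocols pim]
user@host# set interface ae0.0 mode sparse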
CHAPTER 7
PIM instances are supported only for VRF instance types. You can configure multiple instances of PIM to
support multicast over VPNs.
routing-instances {
    routing-instance-name {
        interface interface-name;
        instance-type vrf;
        protocols {
            pim {
                ... pim-configuration ...
            }
        }
    }
}
Starting in Junos OS Release 15.2, it is no longer necessary to configure the PIM version. Support for
PIM version 1 has been removed, and the remaining, default version is PIM version 2.
PIM version 2 is the default for both rendezvous point (RP) mode (at the [edit protocols pim rp static
address address] hierarchy level) and for interface mode (at the [edit protocols pim interface interface-
name] hierarchy level).
15.2 Starting in Junos OS Release 15.2, it is no longer necessary to configure the PIM version.
Because of the distributed nature of QFabric systems, the default configuration does not allow the
maximum number of supported Layer 3 multicast flows to be created. To allow a QFabric system to
create the maximum number of supported flows, configure the following statement:
After configuring this statement, you must reboot the QFabric Director group to make the change take
effect.
Routing devices send hello messages at a fixed interval on all PIM-enabled interfaces. By using hello
messages, routing devices advertise their existence as PIM routing devices on the subnet. With all PIM-
enabled routing devices advertised, a single designated router for the subnet is established.
When a routing device is configured for PIM, it sends a hello message at a 30-second default interval.
The interval range is from 0 through 255 seconds. When the interval counts down to 0, the routing
device sends another hello message, and the timer is reset. A routing device that receives no response
from a neighbor within 3.5 times the interval value drops the neighbor. In the case of a 30-second interval, the
amount of time a routing device waits for a response is 105 seconds.
If a PIM hello message contains the hold-time option, the neighbor timeout is set to the hold-time sent
in the message. If a PIM hello message does not contain the hold-time option, the neighbor timeout is
set to the default hello hold time.
To modify how often the routing device sends hello messages out of an interface:
1. Configure the hello interval on the interface, either globally or in a routing instance. This example
shows the configuration in a routing instance.
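For example, a sketch (the instance name VPN-A, the interface, and the 45-second interval are
illustrative):
[edit routing-instances VPN-A protocols pim]
user@host# set interface ge-0/0/0.0 hello-interval 45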
2. Verify the configuration by checking the Hello Option Holdtime field in the output of the show pim
neighbors detail command.
Interface: lo0.0
Address: 10.255.245.91, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 255 seconds
Interface: pd-6/0/0.32768
Address: 0.0.0.0, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 255 seconds
Hello Option DR Priority: 0
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Join Suppression supported
The ping utility uses ICMP Echo messages to verify connectivity to any device with an IP address.
However, in the case of multicast applications, a single ping sent to a multicast address can degrade the
performance of routers because the stream of packets is replicated multiple times.
You can disable the router's response to ping (ICMP Echo) packets sent to multicast addresses. The
system responds normally to unicast ping packets.
1. Disable the router's response to multicast ping packets:
[edit system]
user@host# set no-multicast-echo
2. Verify the configuration by checking the echo drops with broadcast or multicast destination address
field in the output of the show system statistics icmp command.
icmp:
0 drops due to rate limit
0 calls to icmp_error
0 errors not generated because old message was icmp
Output histogram:
echo reply: 21
0 messages with bad code fields
0 messages less than the minimum length
0 messages with bad checksum
0 messages with bad source address
0 messages with bad length
100 echo drops with broadcast or multicast destination address
0 timestamp drops with broadcast or multicast destination address
Input histogram:
echo: 21
21 message responses generated
Tracing operations record detailed messages about the operation of routing protocols, such as the
various types of routing protocol packets sent and received, and routing policy actions. You can specify
which trace operations are logged by including specific tracing flags. The following table describes the
flags that you can include.
Flag Description
join Trace join messages, which are sent to join a branch onto
the multicast distribution tree.
prune Trace prune messages, which are sent to prune a branch off
the multicast distribution tree.
In the following example, tracing is enabled for all routing protocol packets. Then tracing is narrowed to
focus only on PIM packets of a particular type.
1. (Optional) Configure tracing at the [edit routing-options] hierarchy level to trace all protocol packets.
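For example, a sketch (the file name is illustrative):
[edit routing-options]
user@host# set traceoptions file routing-trace
user@host# set traceoptions flag all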
6. Configure tracing flags. Suppose you are troubleshooting issues with PIM version 1 control packets
that are received on an interface configured for PIM version 2. The following example shows how to
trace messages associated with this problem.
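For example, a sketch (the file name is illustrative, and the available flags and modifiers vary by
release):
[edit protocols pim]
user@host# set traceoptions file pim-trace
user@host# set traceoptions flag hello receive detail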
The Bidirectional Forwarding Detection (BFD) Protocol is a simple hello mechanism that detects failures
in a network. BFD works with a wide variety of network environments and topologies. A pair of routing
devices exchanges BFD packets. Hello packets are sent at a specified, regular interval. A neighbor failure
is detected when the routing device stops receiving a reply after a specified interval. The BFD failure
detection timers have shorter time limits than the Protocol Independent Multicast (PIM) hello hold time,
so they provide faster detection.
The BFD failure detection timers are adaptive and can be adjusted to be faster or slower. The lower the
BFD failure detection timer value, the faster the failure detection and vice versa. For example, the
timers can adapt to a higher value if the adjacency fails (that is, the timer detects failures more slowly).
Or a neighbor can negotiate a higher value for a timer than the configured value. The timers adapt to a
higher value when a BFD session flap occurs more than three times in a span of 15 seconds. A back-off
algorithm increases the receive (Rx) interval by two if the local BFD instance is the reason for the session
flap. The transmission (Tx) interval is increased by two if the remote BFD instance is the reason for the
session flap. You can use the clear bfd adaptation command to return BFD interval timers to their
configured values. The clear bfd adaptation command is hitless, meaning that the command does not
affect traffic flow on the routing device.
You must specify the minimum transmit and minimum receive intervals to enable BFD on PIM.
3. Configure the minimum interval after which the routing device expects to receive a reply from a
neighbor with which it has established a BFD session.
Specifying an interval smaller than 300 ms can cause undesired BFD flapping.
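For example, a sketch (the interface name and the 600-ms value are illustrative):
[edit protocols pim]
user@host# set interface ge-0/0/0.0 family inet bfd-liveness-detection minimum-receive-interval 600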
5. Configure the threshold for the adaptation of the BFD session detection time.
When the detection time adapts to a value equal to or greater than the threshold, a single trap and a
single system log message are sent.
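For example, a sketch (values illustrative):
[edit protocols pim]
user@host# set interface ge-0/0/0.0 family inet bfd-liveness-detection detection-time threshold 800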
6. Configure the number of hello packets not received by a neighbor that causes the originating
interface to be declared down.
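For example, a sketch:
[edit protocols pim]
user@host# set interface ge-0/0/0.0 family inet bfd-liveness-detection multiplier 4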
8. Specify that BFD sessions should not adapt to changing network conditions.
We recommend that you not disable BFD adaptation unless it is preferable not to have BFD
adaptation enabled in your network.
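For example, a sketch:
[edit protocols pim]
user@host# set interface ge-0/0/0.0 family inet bfd-liveness-detection no-adaptation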
9. Verify the configuration by checking the output of the show bfd session command.
Beginning with Junos OS Release 9.6, you can configure authentication for Bidirectional Forwarding
Detection (BFD) sessions running over Protocol Independent Multicast (PIM). Routing instances are also
supported.
The following sections provide instructions for configuring and viewing BFD authentication on PIM:
NOTE: Nonstop active routing (NSR) is not supported with the meticulous-keyed-md5 and
meticulous-keyed-sha-1 authentication algorithms. BFD sessions using these algorithms
might go down after a switchover.
2. Specify the keychain to be used to associate BFD sessions on the specified PIM route or routing
instance with the unique security authentication keychain attributes.
The keychain you specify must match the keychain name configured at the [edit security
authentication key-chains] hierarchy level.
NOTE: The algorithm and keychain must be configured on both ends of the BFD session, and
they must match. Any mismatch in configuration prevents the BFD session from being
created.
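For example, a sketch that pairs a keychain named bfd-pim with the keyed SHA-1 algorithm (the
interface name is illustrative):
[edit protocols pim]
user@host# set interface ge-0/1/5.0 family inet bfd-liveness-detection authentication keychain bfd-pim
user@host# set interface ge-0/1/5.0 family inet bfd-liveness-detection authentication algorithm keyed-sha-1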
3. Specify the security authentication keychain at the [edit security authentication-key-chains]
hierarchy level. The keychain must include:
• At least one key, a unique integer between 0 and 63. Creating multiple keys allows multiple clients
to use the BFD session.
• The secret data used to allow access to the session.
• The time at which the authentication key becomes active, in the format yyyy-mm-dd.hh:mm:ss.
[edit security]
user@host# set authentication-key-chains key-chain bfd-pim key 53 secret $ABC123$/ start-time
2009-06-14.10:00:00
4. (Optional) Specify loose authentication checking if you are transitioning from nonauthenticated
sessions to authenticated sessions.
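For example, a sketch:
[edit protocols pim]
user@host# set interface ge-0/1/5.0 family inet bfd-liveness-detection authentication loose-check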
5. (Optional) View your configuration by using the show bfd session detail or show bfd session
extensive command.
6. Repeat these steps to configure the other end of the BFD session.
The following example shows BFD authentication configured for the ge-0/1/5 interface. It specifies the
keyed SHA-1 authentication algorithm and a keychain name of bfd-pim. The authentication keychain is
configured with two keys. Key 1 contains the secret data “$ABC123/” and a start time of June 1, 2009,
at 9:46:02 AM PST. Key 2 contains the secret data “$ABC123/” and a start time of June 1, 2009, at
3:29:20 PM PST.
If you commit these updates to your configuration, you see output similar to the following example. In
the output for the show bfd session detail command, Authenticate is displayed to indicate that BFD
authentication is configured. For more information about the configuration, use the show bfd session
extensive command. The output for this command provides the keychain name, the authentication
algorithm and mode for each client in the session, and the overall BFD authentication configuration
status, keychain name, and authentication algorithm and mode.
Detect Transmit
Address State Interface Time Interval Multiplier
192.0.2.2 Up ge-0/1/5.0 0.900 0.300 3
Client PIM, TX interval 0.300, RX interval 0.300, Authenticate
Session up time 3d 00:34
Local diagnostic None, remote diagnostic NbrSignal
Remote state Up, version 1
Replicated
Release Description
9.6 Beginning with Junos OS Release 9.6, you can configure authentication for Bidirectional Forwarding
Detection (BFD) sessions running over Protocol Independent Multicast (PIM). Routing instances are also
supported.
CHAPTER 8
PIM dense mode is less sophisticated than PIM sparse mode. PIM dense mode is useful for multicast
LAN applications, the main environment for all dense mode protocols.
PIM dense mode implements the same flood-and-prune mechanism that DVMRP and other dense mode
routing protocols employ. The main difference between DVMRP and PIM dense mode is that PIM dense
mode introduces the concept of protocol independence. PIM dense mode can use the routing table
populated by any underlying unicast routing protocol to perform reverse-path-forwarding (RPF) checks.
Internet service providers (ISPs) typically appreciate the ability to use any underlying unicast routing
protocol with PIM dense mode because they do not need to introduce and manage a separate routing
protocol just for RPF checks. While unicast routing protocols extended as multiprotocol BGP (MBGP)
and Multitopology Routing in IS-IS (M-IS-IS) were later employed to build special tables to perform RPF
checks, PIM dense mode does not require them.
PIM dense mode can use the unicast routing table populated by OSPF, IS-IS, BGP, and so on, or PIM
dense mode can be configured to use a special multicast RPF table populated by MBGP or M-IS-IS when
performing RPF checks.
Unlike sparse mode, in which data is forwarded only to routing devices sending an explicit request,
dense mode implements a flood-and-prune mechanism, similar to DVMRP. In PIM dense mode, there is
no RP. A routing device receives the multicast data on the interface closest to the source, then forwards
the traffic to all other interfaces (see Figure 34 on page 295).
Figure 34: Multicast Traffic Flooded from the Source Using PIM Dense Mode
Flooding occurs periodically. It is used to refresh state information, such as the source IP address and
multicast group pair. If the routing device has no interested receivers for the data, and the OIL becomes
empty, the routing device sends a prune message upstream to stop delivery of multicast traffic (see
Figure 35 on page 296).
Figure 35: Prune Messages Sent Back to the Source to Stop Unwanted Multicast Traffic
Sparse-dense mode, as the name implies, allows the interface to operate on a per-group basis in either
sparse or dense mode. A group specified as dense is not mapped to an RP. Instead, data packets
destined for that group are forwarded by means of PIM dense-mode rules. A group specified as sparse is
mapped to an RP, and data packets are forwarded by means of PIM sparse-mode rules.
For information about PIM sparse-mode and PIM dense-mode rules, see "Understanding PIM Sparse
Mode" on page 305 and "Understanding PIM Dense Mode" on page 294.
It is possible to mix PIM dense mode, PIM sparse mode, and PIM source-specific multicast (SSM) on the
same network, the same routing device, and even the same interface. This is because modes are
effectively tied to multicast groups, an IP multicast group address must be unique for a particular
group's traffic, and scoping limits enforce the division between potential or actual overlaps.
NOTE: PIM sparse mode was capable of forming shortest-path trees (SPTs) already. Changes to
PIM sparse mode to support PIM SSM mainly involved defining behavior in the SSM address
range, because shared-tree behavior is prohibited for groups in the SSM address range.
A multicast routing device employing sparse-dense mode is a good example of mixing PIM modes on the
same network or routing device or interface. Dense modes are easy to support because of the flooding,
but scaling issues make dense modes inappropriate for Internet use beyond very restricted uses.
PIM dense mode uses the flood-and-prune mechanism, protocol independence, and RPF behavior
described in "Understanding PIM Dense Mode" on page 294: there is no RP, a routing device floods
multicast data received on the interface closest to the source out all other interfaces, and prune
messages sent upstream stop delivery where there are no interested receivers.
By default, PIM is disabled. When you enable PIM, it operates in sparse mode by default.
You can configure PIM dense mode globally or for a routing instance. This example shows how to
configure the routing instance and how to specify that PIM dense mode use inet.2 as its RPF routing
table instead of inet.0.
1. (Optional) Create an IPv4 routing table group so that interface routes are installed into two routing
tables, inet.0 and inet.2.
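For example, a sketch (the rib-group name if-rib is illustrative):
[edit routing-options]
user@host# set rib-groups if-rib import-rib [ inet.0 inet.2 ]
user@host# set interface-routes rib-group inet if-rib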
2. (Optional) Associate the routing table group with a PIM routing instance.
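For example, a sketch (the rib-group and instance names are illustrative):
[edit routing-instances VPN-A protocols pim]
user@host# set rib-group inet if-rib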
3. Configure the PIM interface. If you do not specify any interfaces, PIM is enabled on all router
interfaces. Generally, you specify interface names only if you are disabling PIM on certain interfaces.
NOTE: You cannot configure both PIM and Distance Vector Multicast Routing Protocol
(DVMRP) in forwarding mode on the same interface. You can configure PIM on the same
interface only if you configured DVMRP in unicast-routing mode.
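For example, a sketch enabling dense mode on all interfaces in a routing instance (the instance name
VPN-A is illustrative):
[edit routing-instances VPN-A protocols pim]
user@host# set interface all mode dense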
4. Monitor the operation of PIM dense mode by running the show pim interfaces, show pim join, show
pim neighbors, and show pim statistics commands.
For information about PIM sparse-mode and PIM dense-mode rules, see "Understanding PIM Sparse
Mode" on page 305 and "Understanding PIM Dense Mode" on page 294.
By default, PIM is disabled. When you enable PIM, it operates in sparse mode by default.
You can configure PIM sparse-dense mode globally or for a routing instance. This example shows how to
configure PIM sparse-dense mode globally on all interfaces, specifying that the groups 224.0.1.39 and
224.0.1.40 are using dense mode.
1. Configure the groups that are to operate in dense mode.

[edit protocols pim]
user@host# set dense-groups 224.0.1.39
user@host# set dense-groups 224.0.1.40
2. Configure all interfaces on the routing device to use sparse-dense mode. When configuring all
interfaces, exclude the fxp0.0 management interface by adding the disable statement for that
interface.
3. Monitor the operation of PIM sparse-dense mode by running the show pim interfaces, show pim join,
show pim neighbors, and show pim statistics commands.
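A sketch of step 2, using statements that appear elsewhere in this guide:

[edit protocols pim]
user@host# set interface all mode sparse-dense
user@host# set interface fxp0.0 disable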
Understanding PIM Sparse Mode
A Protocol Independent Multicast (PIM) sparse-mode domain uses reverse-path forwarding (RPF) to
create a path from a data source to the receiver requesting the data. When a receiver issues an explicit
join request, an RPF check is triggered. A (*,G) PIM join message is sent toward the RP from the
receiver's designated router (DR). (By definition, this message is actually called a join/prune message, but
for clarity in this description, it is called either join or prune, depending on its context.) The join message
is multicast hop by hop upstream to the ALL-PIM-ROUTERS group (224.0.0.13) by means of each
router’s RPF interface until it reaches the RP. The RP router receives the (*,G) PIM join message and
adds the interface on which it was received to the outgoing interface list (OIL) of the rendezvous-point
tree (RPT) forwarding state entry. This builds the RPT connecting the receiver with the RP. The RPT
remains in effect, even if no active sources generate traffic.
NOTE: State—the (*,G) or (S,G) entries—is the information used for forwarding unicast or
multicast packets. S is the source IP address, G is the multicast group address, and * represents
any source sending to group G. Routers keep track of the multicast forwarding state for the
incoming and outgoing interfaces for each group.
When a source becomes active, the source DR encapsulates multicast data packets into a PIM register
message and sends them by means of unicast to the RP router.
If the RP router has interested receivers in the PIM sparse-mode domain, it sends a PIM join message
toward the source to build a shortest-path tree (SPT) back to the source. The source sends multicast
packets out on the LAN, and the source DR encapsulates the packets in a PIM register message and
forwards the message toward the RP router by means of unicast. The RP router receives PIM register
messages back from the source, and thus adds a new source to the distribution tree, keeping track of
sources in a PIM table. Once an RP router receives packets natively (with S,G), it sends a register stop
message to stop receiving the register messages by means of unicast.
In actual application, many receivers with multiple SPTs are involved in a multicast traffic flow. To
illustrate the process, we track the multicast traffic from the RP router to one receiver. In such a case,
the RP router begins sending multicast packets down the RPT toward the receiver’s DR for delivery to
the interested receivers. When the receiver’s DR receives the first packet from the RPT, the DR sends a
PIM join message toward the source DR to start building an SPT back to the source. When the source
DR receives the PIM join message from the receiver’s DR, it starts sending traffic down all SPTs. When
the first multicast packet is received by the receiver’s DR, the receiver’s DR sends a PIM prune message
to the RP router to stop duplicate packets from being sent through the RPT. In turn, the RP router stops
sending multicast packets to the receiver’s DR, and sends a PIM prune message for this source over the
RPT toward the source DR to halt multicast packet delivery to the RP router from that particular source.
If the RP router receives a PIM register message from an active source but has no interested receivers in
the PIM sparse-mode domain, it still adds the active source into the PIM table. However, after adding
the active source into the PIM table, the RP router sends a register stop message. The RP router
discovers the active source’s existence and no longer needs to receive advertisement of the source
(which utilizes resources).
NOTE: In IPv6 PIM sparse mode, PIM join messages whose size exceeds the configured MTU are
fragmented. To avoid fragmentation of PIM join messages, the multicast traffic uses the interface
MTU instead of the path MTU.
• Routers with downstream receivers join a PIM sparse-mode tree through an explicit join message.
• PIM sparse-mode RPs are the routers where receivers meet sources.
• Senders announce their existence to one or more RPs, and receivers query RPs to find multicast
sessions.
• Once receivers get content from sources through the RP, the last-hop router (the router closest to
the receiver) can optionally remove the RP from the shared distribution tree (*,G) if the new source-
based tree (S,G) is shorter. Receivers can then get content directly from the source.
The transitional aspect of PIM sparse mode from shared to source-based tree is one of the major
features of PIM, because it prevents overloading the RP or surrounding core links.
There are related issues regarding source, RPs, and receivers when sparse mode multicast is used:
• Receivers initially need to know only one RP (they later learn about others).
• Receivers that never transition to a source-based tree are effectively running Core Based Trees (CBT).
PIM sparse mode has standard features for all of these issues.
Rendezvous Point
The RP router serves as the information exchange point for the other routers. All routers in a PIM
domain must provide mapping to an RP router. It is the only router that needs to know the active
sources for a domain—the other routers just need to know how to reach the RP. In this way, the RP
matches receivers with sources.
The RP router is downstream from the source and forms one end of the shortest-path tree. As shown in
Figure 38 on page 308, the RP router is upstream from the receiver and thus forms one end of the
rendezvous-point tree.
The benefit of using the RP as the information exchange point is that it reduces the amount of state in
non-RP routers. No network flooding is required to provide non-RP routers information about active
sources.
RP Mapping Options
• Static configuration
• Anycast RP
• Auto-RP
• Bootstrap router
We recommend a static RP mapping with anycast RP over a bootstrap router (BSR) and auto-RP
configuration, because static mapping provides all the benefits of a bootstrap router and auto-RP
without the complexity of the full BSR and auto-RP mechanisms.
RELATED DOCUMENTATION
Example: Configuring Multicast for Virtual Routers with IPv6 Interfaces | 334
Designated Router

Each PIM network elects a designated router (DR), which acts on behalf of directly connected hosts:
• The receiver DR sends PIM join and PIM prune messages from the receiver network toward the RP.
• The source DR sends PIM register messages from the source network to the RP.
Neighboring PIM routers multicast periodic PIM hello messages to each other every 30 seconds (the
default). The PIM hello message usually includes a holdtime value for the neighbor to use, but this is not
a requirement. If the PIM hello message does not include a holdtime value, a default timeout value (in
Junos OS, 105 seconds) is used. On receipt of a PIM hello message, a router stores the IP address and
priority for that neighbor. The neighbor with the highest DR priority is selected as the DR. If the DR
priorities match, the router with the highest IP address is selected as the DR.
If a DR fails, a new one is selected using the same process of comparing IP addresses.
NOTE: DR priority is specific to PIM sparse mode. As described in RFC 3973, DR priority cannot
be configured explicitly in PIM dense mode (PIM-DM); PIM-DM uses a DR only on networks
running IGMPv1.
CAUTION: For redundancy, we strongly recommend that each routing device has
multiple Tunnel Services PICs. In the case of MX Series routers, the recommendation is
to configure multiple tunnel-services statements.
We also recommend that the Tunnel PICs be installed (or configured) on different FPCs.
If you have only one Tunnel PIC or if you have multiple Tunnel PICs installed on a single
FPC and then that FPC is removed, the multicast session will not come up. Having
redundant Tunnel PICs on separate FPCs can help ensure that at least one Tunnel PIC is
available and that multicast will continue working.
On MX Series routers, the redundant configuration looks like the following example:
[edit chassis]
user@mx-host# set fpc 1 pic 0 tunnel-services bandwidth 1g
user@mx-host# set fpc 2 pic 0 tunnel-services bandwidth 1g
In PIM sparse mode, the source DR takes the initial multicast packets and encapsulates them in PIM
register messages. The source DR then unicasts the packets to the PIM sparse-mode RP router, where
the PIM register message is de-encapsulated.
When a router is configured as a PIM sparse-mode RP router (by specifying an address using the
address statement at the [edit protocols pim rp local] hierarchy level) and a Tunnel PIC is present on the
router, a PIM register de-encapsulation interface, or pd interface, is automatically created. The pd
interface receives PIM register messages and de-encapsulates them by means of the hardware.
If PIM sparse mode is enabled and a Tunnel Services PIC is present on the router, a PIM register
encapsulation interface (pe interface) is automatically created for each RP address. The pe interface is
used to encapsulate source data packets and send the packets to RP addresses on the PIM DR and the
PIM RP. The pe interface receives PIM register messages and encapsulates the packets by means of the
hardware.
Do not confuse the configurable pe and pd hardware interfaces with the nonconfigurable pime and pimd
software interfaces. Both pairs encapsulate and de-encapsulate multicast packets, and are created
automatically. However, the pe and pd interfaces appear only if a Tunnel Services PIC is present. The
pime and pimd interfaces are not useful in situations requiring the pe and pd interfaces.
If the source DR is the RP, then there is no need for PIM register messages and consequently no need
for a Tunnel Services PIC.
When PIM sparse mode is used with IP version 6 (IPv6), a Tunnel PIC is required on the RP, but not on
the IPv6 PIM DR. The lack of a Tunnel PIC requirement on the IPv6 DR applies only to IPv6 PIM sparse
mode and is not to be confused with IPv4 PIM sparse-mode requirements.
Table 11 on page 314 shows the complete matrix of IPv4 and IPv6 PIM Tunnel PIC requirements.
Table 11: Tunnel PIC Requirements for IPv4 and IPv6 Multicast

Protocol    Tunnel PIC Required on the RP    Tunnel PIC Required on the Source DR
IPv4        Yes                              Yes
IPv6        Yes                              No
Starting in Junos OS Release 16.1, PIM is disabled by default. When you enable PIM, it operates in
sparse mode by default. You do not need to configure Internet Group Management Protocol (IGMP)
version 2 for a sparse mode configuration. After you enable PIM, by default, IGMP version 2 is also
enabled.
Junos OS uses PIM version 2 for both rendezvous point (RP) mode (at the [edit protocols pim rp static
address address] hierarchy level) and interface mode (at the [edit protocols pim interface interface-
name] hierarchy level).
You can configure PIM sparse mode globally or for a routing instance. This example shows how to
configure PIM sparse mode globally on all interfaces. It also shows how to configure a static RP router
and how to configure the non-RP routers.
2. Configure the RP router interfaces. When configuring all interfaces, exclude the fxp0.0 management
interface by including the disable statement for that interface.
3. Configure the non-RP routers. Include the following configuration on all of the non-RP routers.
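The configuration for steps 2 and 3 might look like the following sketch; the RP loopback address
192.0.2.1 is a hypothetical value, not part of the original example. On the RP router:

[edit protocols pim]
user@host# set rp local address 192.0.2.1
user@host# set interface all mode sparse
user@host# set interface fxp0.0 disable

On each non-RP router:

[edit protocols pim]
user@host# set rp static address 192.0.2.1
user@host# set interface all mode sparse
user@host# set interface fxp0.0 disable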
For PIM sparse mode, you can configure PIM join load balancing to spread join messages and traffic
across equal-cost upstream paths (interfaces and routing devices) provided by unicast routing toward a
source. PIM join load balancing is only supported for PIM sparse mode configurations.
PIM join load balancing is supported on draft-rosen multicast VPNs (also referred to as dual PIM
multicast VPNs) and multiprotocol BGP-based multicast VPNs (also referred to as next-generation
Layer 3 VPN multicast). When PIM join load balancing is enabled in a draft-rosen Layer 3 VPN scenario,
the load balancing is achieved based on the join counts for the far-end PE routing devices, not for any
intermediate P routing devices.
If an internal BGP (IBGP) multipath forwarding VPN route is available, the Junos OS uses the multipath
forwarding VPN route to send join messages to the remote PE routers to achieve load balancing over
the VPN.
By default, when multiple PIM joins are received for different groups, all joins are sent to the same
upstream gateway chosen by the unicast routing protocol. Even if there are multiple equal-cost paths
available, these alternative paths are not utilized to distribute multicast traffic from the source to the
various groups.
When PIM join load balancing is configured, the PIM joins are distributed equally among all equal-cost
upstream interfaces and neighbors. Every new join triggers the selection of the least-loaded upstream
interface and neighbor. If there are multiple neighbors on the same interface (for example, on a LAN),
join load balancing maintains a value for each of the neighbors and distributes multicast joins (and
downstream traffic) among these as well.
Join counts for interfaces and neighbors are maintained globally, not on a per-source basis. Therefore,
there is no guarantee that joins for a particular source are load-balanced. However, the joins for all
sources and all groups known to the routing device are load-balanced. There is also no way to
administratively give preference to one neighbor over another: all equal-cost paths are treated the same
way.
You can configure PIM join load balancing globally or for a routing instance. This example shows the
global configuration.
You configure PIM join load balancing on the non-RP routers in the PIM domain.
1. Determine if there are multiple paths available for a source (for example, an RP) with the output of
the show pim join extensive or show pim source commands.
Group: 224.1.1.1
Source: *
RP: 10.255.245.6
Flags: sparse,rptree,wildcard
Upstream interface: t1-0/2/3.0
Upstream neighbor: 192.168.38.57
Upstream state: Join to RP
Downstream neighbors:
Interface: t1-0/2/1.0
192.168.38.16 State: JOIN Flags: SRW Timeout: 164

Group: 224.2.127.254
Source: *
RP: 10.255.245.6
Flags: sparse,rptree,wildcard
Upstream interface: so-0/3/0.0
Upstream neighbor: 192.168.38.47
Upstream state: Join to RP
Downstream neighbors:
Interface: t1-0/2/3.0
192.168.38.16 State: JOIN Flags: SRW Timeout: 164
Note that for this router, the RP at IP address 10.255.245.6 is the source for two multicast groups:
224.1.1.1 and 224.2.127.254. This router has two equal-cost paths through two different upstream
interfaces (t1-0/2/3.0 and so-0/3/0.0) with two different neighbors (192.168.38.57 and
192.168.38.47). This router is a good candidate for PIM join load balancing.
2. On the non-RP router, configure PIM sparse mode and join load balancing.
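A minimal sketch of this step:

[edit protocols pim]
user@host# set interface all mode sparse
user@host# set interface fxp0.0 disable
user@host# set join-load-balance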
Note that the two equal-cost paths shown by the show pim interfaces command now have nonzero
join counts. If the counts differ by more than one and were zero (0) when load balancing commenced,
an error occurs (joins before load balancing are not redistributed). The join count also appears in the
show pim neighbors detail output:
Interface: t1-0/2/3.0
Note that the join count is nonzero on the two load-balanced interfaces toward the upstream
neighbors.
PIM join load balancing only takes effect when the feature is configured. Prior joins are not
redistributed to achieve perfect load balancing. In addition, if an interface or neighbor fails, the new
joins are redistributed among remaining active interfaces and neighbors. However, when the
interface or neighbor is restored, prior joins are not redistributed. The clear pim join-distribution
command redistributes the existing flows to new or restored upstream neighbors. Redistributing the
existing flows causes traffic to be disrupted, so we recommend that you perform PIM join
redistribution during a maintenance window.
A downstream router periodically sends join messages to refresh the join state on the upstream router. If
the join state is not refreshed before the timeout expires, the join state is removed.
By default, the join state timeout is 210 seconds. You can change this timeout to allow additional time
to receive the join messages. Because the messages are called join-prune messages, the name used is
the join-prune-timeout statement.
The join timeout value can be from 210 through 420 seconds.
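For example, to raise the timeout to 300 seconds (an illustrative value within the permitted range):

[edit protocols pim]
user@host# set join-prune-timeout 300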
SEE ALSO
join-prune-timeout
IN THIS SECTION
Requirements | 321
Overview | 321
Configuration | 323
Verification | 326
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure PIM Sparse Mode on the interfaces. See Enabling PIM Sparse Mode.
Overview
IN THIS SECTION
Topology | 322
PIM join suppression enables a router on a multiaccess network to defer sending join messages to an
upstream router when it sees identical join messages on the same network. Eventually, only one router
sends these join messages, and the other routers suppress identical messages. Limiting the number of
join messages improves scalability and efficiency by reducing the number of messages sent to the same
router.
• override-interval—Sets the maximum time in milliseconds to delay sending override join messages.
When a router sees a prune message for a join it is currently suppressing, it waits before it sends an
override join message. Waiting helps avoid multiple downstream routers sending override join
messages at the same time. The override interval is a random timer with a value of 0 through the
maximum override value.
• propagation-delay—Sets a value in milliseconds for a prune pending timer, which specifies how long
to wait before executing a prune on an upstream router. During this period, the router waits for any
prune override join messages that might be currently suppressed. The period for the prune pending
timer is the sum of the override-interval value and the value specified for propagation-delay.
When multiple identical join messages are received, a random join suppression timer is activated,
with a range of 66 through 84 seconds. The timer is reset each time join suppression is triggered.
Topology
• Routers R2, R3, R4, and R5 are downstream routers in the multicast LAN.
This example shows the configuration of the downstream devices: Routers R2, R3, R4, and R5.
Configuration
IN THIS SECTION
Procedure | 324
Results | 325
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
[edit]
set protocols pim traceoptions file pim.log
set protocols pim traceoptions file size 5m
set protocols pim traceoptions file world-readable
set protocols pim traceoptions flag join detail
set protocols pim traceoptions flag prune detail
set protocols pim traceoptions flag normal detail
set protocols pim traceoptions flag register detail
set protocols pim rp static address 10.255.112.160
set protocols pim interface all mode sparse
set protocols pim interface all version 2
set protocols pim interface fxp0.0 disable
set protocols pim reset-tracking-bit
set protocols pim propagation-delay 500
set protocols pim override-interval 4000
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
To configure PIM join suppression on a non-RP downstream router in the multicast LAN:
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp static address 10.255.112.160
[edit protocols pim]
user@host# set interface all mode sparse
[edit protocols pim]
user@host# set interface all version 2
[edit protocols pim]
user@host# set interface fxp0.0 disable
[edit protocols pim]
user@host# set reset-tracking-bit
[edit protocols pim]
user@host# set propagation-delay 500
[edit protocols pim]
user@host# set override-interval 4000
Results
From configuration mode, confirm your configuration by entering the show protocols command. If the
output does not display the intended configuration, repeat the instructions in this example to correct
the configuration.
pim {
    rp {
        static {
            address 10.255.112.160;
        }
    }
    interface all {
        mode sparse;
        version 2;
    }
    interface fxp0.0 {
        disable;
    }
    reset-tracking-bit;
    propagation-delay 500;
    override-interval 4000;
}
Verification
To verify the configuration, run the show pim join extensive and show pim neighbors commands on the upstream and downstream routers.
The tunnel endpoints do not need to be the same platform type. For example, the device on one end of
the tunnel can be a JCS1200 router, while the device on the other end can be a standalone T Series
router. The two routers that are the tunnel endpoints can be in the same autonomous system or in
different autonomous systems.
In the configuration shown in this example, OSPF is configured between the tunnel endpoints. In Figure
41 on page 327, the tunnel endpoints are R0 and R1. The network that contains the multicast source is
connected to R0. The network that contains the multicast receivers is connected to R1. R1 serves as the
statically configured rendezvous point (RP).
1. On R0, configure the incoming interface, which connects to the multicast source network.

[edit interfaces]
user@host# set ge-0/1/1 description "incoming interface"
user@host# set ge-0/1/1 unit 0 family inet address 10.20.0.1/30

2. On R0, configure the outgoing interface, which faces R1.

[edit interfaces]
user@host# set ge-0/0/7 description "outgoing interface"
user@host# set ge-0/0/7 unit 0 family inet address 10.10.1.1/30
3. On R0, configure unit 0 on the sp- interface. The Junos OS uses unit 0 for service logging and other
communication from the services PIC.
[edit interfaces]
user@host# set sp-0/2/0 unit 0 family inet
4. On R0, configure the logical interfaces that participate in the IPsec services. In this example, unit 1
is the inward-facing interface. Unit 1001 is the interface that faces the remote IPsec site.
[edit interfaces]
user@host# set sp-0/2/0 unit 1 family inet
user@host# set sp-0/2/0 unit 1 service-domain inside
user@host# set sp-0/2/0 unit 1001 family inet
user@host# set sp-0/2/0 unit 1001 service-domain outside
6. On R0, configure PIM sparse mode. This example uses static RP configuration. Because R0 is a non-
RP router, configure the address of the RP router, which is the routable address assigned to the
loopback interface on R1.
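A sketch of this step, using the R1 loopback address that is configured later in this example:

[edit protocols pim]
user@host# set rp static address 10.255.0.156
user@host# set interface all mode sparse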
7. On R0, create a rule for a bidirectional dynamic IKE security association (SA) that references the IKE
policy and the IPsec policy.
8. On R0, configure the IPsec proposal. This example uses the Authentication Header (AH) Protocol.
12. On R0, create a service set that defines IPsec-specific information. The first command associates
the IKE SA rule with IPsec. The second command defines the address of the local end of the IPsec
security tunnel. The last two commands configure the logical interfaces that participate in the IPsec
services. Unit 1 is for the IPsec inward-facing traffic. Unit 1001 is for the IPsec outward-facing
traffic.
13. On R1, configure the incoming interface, which faces R0.

[edit interfaces]
user@host# set ge-2/0/1 description "incoming interface"
user@host# set ge-2/0/1 unit 0 family inet address 10.10.1.2/30

14. On R1, configure the outgoing interface, which connects to the multicast receiver network.

[edit interfaces]
user@host# set ge-2/0/0 description "outgoing interface"
user@host# set ge-2/0/0 unit 0 family inet address 10.20.0.5/30

15. On R1, configure the loopback interface.

[edit interfaces]
user@host# set lo0 unit 0 family inet address 10.255.0.156
16. On R1, configure unit 0 on the sp- interface. The Junos OS uses unit 0 for service logging and other
communication from the services PIC.
[edit interfaces]
user@host# set sp-2/1/0 unit 0 family inet
17. On R1, configure the logical interfaces that participate in the IPsec services. In this example, unit 1
is the inward-facing interface. Unit 1001 is the interface that faces the remote IPsec site.
[edit interfaces]
user@host# set sp-2/1/0 unit 1 family inet
user@host# set sp-2/1/0 unit 1 service-domain inside
user@host# set sp-2/1/0 unit 1001 family inet
user@host# set sp-2/1/0 unit 1001 service-domain outside
19. On R1, configure PIM sparse mode. R1 is an RP router. When you configure the local RP address,
use the shared address, which is the address of R1’s loopback interface.
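A sketch of this step:

[edit protocols pim]
user@host# set rp local address 10.255.0.156
user@host# set interface all mode sparse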
20. On R1, create a rule for a bidirectional dynamic Internet Key Exchange (IKE) security association
(SA) that references the IKE policy and the IPsec policy.
21. On R1, define the IPsec proposal for the dynamic SA.
25. On R1, create a service set that defines IPsec-specific information. The first command associates
the IKE SA rule with IPsec. The second command defines the address of the local end of the IPsec
security tunnel. The last two commands configure the logical interfaces that participate in the IPsec
services. Unit 1 is for the IPsec inward-facing traffic. Unit 1001 is for the IPsec outward-facing
traffic.
SEE ALSO
IN THIS SECTION
Requirements | 334
Overview | 334
Configuration | 335
Verification | 340
A virtual router is a type of simplified routing instance that has a single routing table. This example
shows how to configure PIM in a virtual router.
Requirements
Before you begin, configure an interior gateway protocol or static routing. See the Junos OS Routing
Protocols Library for Routing Devices.
Overview
IN THIS SECTION
Topology | 335
You can configure PIM for the virtual-router instance type as well as for the vrf instance type. The
virtual-router instance type is similar to the vrf instance type used with Layer 3 VPNs, except that it is
used for non-VPN-related applications.
The virtual-router instance type has no VPN routing and forwarding (VRF) import, VRF export, VRF
target, or route distinguisher requirements. The virtual-router instance type is used for non-Layer 3 VPN
situations.
When PIM is configured under the virtual-router instance type, the VPN configuration is not based on
RFC 2547, BGP/MPLS VPNs, so PIM operation does not comply with the Internet draft draft-rosen-
vpn-mcast-07.txt, Multicast in MPLS/BGP VPNs. In the virtual-router instance type, PIM operates in a
routing instance by itself, forming adjacencies with PIM neighbors over the routing instance interfaces
as the other routing protocols do with neighbors in the routing instance.
1. On R1, configure a virtual router instance with three interfaces (ge-0/0/0.0, ge-0/1/0.0, and
ge-0/1/1.0).
After you configure this example, you should be able to send multicast traffic from R2 through ge-0/0/0
on R1 to the static group and verify that the traffic egresses from ge-0/1/0.0 and ge-0/1/1.0.
NOTE: Do not include the group-address statement for the virtual-router instance type.
Topology
Configuration
IN THIS SECTION
Procedure | 336
Results | 338
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
[edit]
set interfaces ge-0/0/0 unit 0 family inet6 address 2001:4:4:4::1/64
set interfaces ge-0/1/0 unit 0 family inet6 address 2001:24:24:24::1/64
set interfaces ge-0/1/1 unit 0 family inet6 address 2001:7:7:7::1/64
set protocols mld interface ge-0/1/0.0 static group ff0e::10
set protocols mld interface ge-0/1/1.0 static group ff0e::10
set routing-instances mvrf1 instance-type virtual-router
set routing-instances mvrf1 interface ge-0/0/0.0
set routing-instances mvrf1 interface ge-0/1/0.0
set routing-instances mvrf1 interface ge-0/1/1.0
set routing-instances mvrf1 protocols pim rp local family inet6 address 2001:1:1:1::1
set routing-instances mvrf1 protocols pim interface ge-0/0/0.0
set routing-instances mvrf1 protocols pim interface ge-0/1/0.0
set routing-instances mvrf1 protocols pim interface ge-0/1/1.0
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit]
user@host# edit interfaces
[edit interfaces]
user@host# set ge-0/0/0 unit 0 family inet6 address 2001:4:4:4::1/64
[edit interfaces]
user@host# set ge-0/1/0 unit 0 family inet6 address 2001:24:24:24::1/64
[edit interfaces]
user@host# set ge-0/1/1 unit 0 family inet6 address 2001:7:7:7::1/64
[edit]
user@host# edit routing-instances
[edit routing-instances]
user@host# set mvrf1 instance-type virtual-router
[edit routing-instances]
user@host# set mvrf1 interface ge-0/0/0
[edit routing-instances]
user@host# set mvrf1 interface ge-0/1/0
[edit routing-instances]
user@host# set mvrf1 interface ge-0/1/1
[edit routing-instances]
user@host# set mvrf1 protocols pim rp local family inet6 address 2001:1:1:1::1
[edit routing-instances]
user@host# set mvrf1 protocols pim interface ge-0/0/0
[edit routing-instances]
user@host# set mvrf1 protocols pim interface ge-0/1/0
[edit routing-instances]
user@host# set mvrf1 protocols pim interface ge-0/1/1
[edit routing-instances]
user@host# exit
[edit]
user@host# edit protocols mld
[edit protocols mld]
user@host# set interface ge-0/1/0.0 static group ff0e::10
[edit protocols mld]
user@host# set interface ge-0/1/1.0 static group ff0e::10
[edit]
user@host# commit
Results
Confirm your configuration by entering the show interfaces, show routing-instances, and show protocols
commands.
Verification
Release Description
16.1 Starting in Junos OS Release 16.1, PIM is disabled by default. When you enable PIM, it operates in
sparse mode by default.
RELATED DOCUMENTATION
Configuring Static RP
IN THIS SECTION
Configuring the Static PIM RP Address on the Non-RP Routing Device | 349
Understanding Static RP
Protocol Independent Multicast (PIM) sparse mode is the most common multicast protocol used on the
Internet. PIM sparse mode is the default mode whenever PIM is configured on any interface of the
device. However, because PIM must not be configured on the network management interface, you must
disable it on that interface.
Each any-source multicast (ASM) group has a shared tree through which receivers learn about new
multicast sources and new receivers learn about all multicast sources. The rendezvous point (RP) router
is the root of this shared tree and receives the multicast traffic from the source. To receive multicast
traffic from the groups served by the RP, the device must determine the IP address of the RP for the
source.
You can configure a static rendezvous point (RP); doing so is similar to configuring static routes. A static
configuration has the benefit of operating in PIM version 1 or version 2. When you configure the static
RP, the RP address that you select for a particular group must be consistent across all routers in a
multicast domain.
Starting in Junos OS Release 15.2, the static configuration uses PIM version 2 by default, which is the
only version supported in that release and beyond.
One common way for the device to locate RPs is by static configuration of the IP address of the RP. A
static configuration is simple and convenient. However, if the statically defined RP router becomes
unreachable, there is no automatic failover to another RP router. To remedy this problem, you can use
anycast RP.
You can configure a local RP globally or for a routing instance. This example shows how to configure a
local RP in a routing instance for IPv4 or IPv6.
By default, PIM operates in sparse mode on an interface. If you explicitly configure sparse mode, PIM
uses this setting for all IPv6 multicast groups. However, if you configure sparse-dense mode, PIM
does not accept IPv6 multicast groups as dense groups and operates in sparse mode over them.
NOTE: The priority statement is not supported for IPv6, but is included here for informational
purposes. The routing device’s priority value for becoming the RP is included in the bootstrap
messages that the routing device sends. Use a smaller number to increase the likelihood that
the routing device becomes the RP for local multicast groups. Each PIM routing device uses
the priority value and other factors to determine the candidate RPs for a particular group
range. After the set of candidate RPs is distributed, each routing device determines
algorithmically the RP from the candidate RP set using a hash function. By default, the priority
value is set to 1. If this value is set to 0, the bootstrap router can override the group range
being advertised by the candidate RP.
4. Configure the groups for which the routing device is the RP.
By default, a routing device running PIM is eligible to be the RP for all IPv4 or IPv6 groups
(224.0.0.0/4 or FF70::/12 to FFF0::/12). The following example limits the groups for which this
routing device can be the RP.
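A sketch of such a limit, with an illustrative group range:

[edit protocols pim rp local family inet]
user@host# set group-ranges 224.1.0.0/16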
If the bootstrap router does not receive a candidate RP advertisement from an RP within the hold time,
it removes that routing device from its list of candidate RPs. The default hold time is 150 seconds.
If you exclude this statement from the configuration and you use both static and dynamic RP
mechanisms for different group ranges within the same routing instance, the dynamic RP mapping
takes precedence over the static RP mapping, even if static RP is defined for a specific group range.
7. Monitor the operation of PIM by running the show pim commands. Run show pim ? to display the
supported commands.
SEE ALSO
PIM Overview
Understanding MLD
IN THIS SECTION
Requirements | 345
Overview | 345
Configuration | 345
Verification | 347
This example shows how to configure PIM sparse mode and RP static IP addresses.
Requirements
1. Determine whether the router is directly attached to any multicast sources. Receivers must be able
to locate these sources.
2. Determine whether the router is directly attached to any multicast group receivers. If receivers are
present, IGMP is needed.
3. Determine whether to configure multicast to use sparse, dense, or sparse-dense mode. Each mode
has different configuration considerations.
5. Determine whether to locate the RP with the static configuration, BSR, or auto-RP method.
6. Determine whether to configure multicast to use its own RPF routing table when configuring PIM in
sparse, dense, or sparse-dense mode.
7. Configure the SAP and SDP protocols to listen for multicast session announcements.
8. Configure IGMP.
Overview
In this example, you set the interface value to all and disable the ge-0/0/0 interface. Then you configure
the IP address of the RP as 192.168.14.27.
Configuration
IN THIS SECTION
Procedure | 346
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level and then enter commit from configuration mode.

set protocols pim rp static address 192.168.14.27
set protocols pim interface all
set protocols pim interface ge-0/0/0.0 disable
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
instructions on how to do that, see Using the CLI Editor in Configuration Mode in the Junos OS CLI User
Guide.
1. Configure PIM.
[edit]
user@host# edit protocols pim
2. Configure the PIM interfaces.

[edit protocols pim]
user@host# set interface all

3. Disable PIM on the ge-0/0/0.0 interface.

[edit protocols pim]
user@host# set interface ge-0/0/0.0 disable

4. Configure the static RP address.

[edit protocols pim]
user@host# set rp static address 192.168.14.27
Results
From configuration mode, confirm your configuration by entering the show protocols command. If the
output does not display the intended configuration, repeat the configuration instructions in this example
to correct it.
[edit]
user@host# show protocols
pim {
rp {
static {
address 192.168.14.27;
}
}
interface all;
interface ge-0/0/0.0 {
disable;
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Purpose

Verify that SAP and SDP are configured to listen on the correct group addresses and ports.
You configure a static RP address on the non-RP routing device. This enables the non-RP routing device
to recognize the local statically defined RP. For example, if R0 is a non-RP router and R1 is the local RP
router, you configure R0 with the static RP address of R1. The static IP address is the routable address
assigned to the loopback interface on R1. In the following example, the loopback address of the RP is
2001:db8:85a3::8a2e:370:7334.
Starting in Junos OS Release 15.2, the default PIM version is version 2, and version 1 is not supported.
For Junos OS Release 15.1 and earlier, the default PIM version can be version 1 or version 2, depending
on the mode you are configuring. PIM version 1 is the default for RP mode ([edit pim rp static address
address]). PIM version 2 is the default for interface mode ([edit pim interface interface-name]). An
explicitly configured PIM version will override the default setting.
You can configure a static RP address globally or for a routing instance. This example shows how to
configure a static RP address in a routing instance for IPv6.
1. On a non-RP routing device, configure the routing instance to point to the routable address assigned
to the loopback interface of the RP.
NOTE: Logical systems are also supported. You can configure a static RP address in a logical
system only if the logical system is not directly connected to a source.
The RP that you select for a particular group must be consistent across all routers in a multicast
domain.
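A sketch of step 1, assuming a routing instance named VR1 (the instance name is illustrative):

[edit routing-instances VR1 protocols pim]
user@host# set rp static address 2001:db8:85a3::8a2e:370:7334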
4. (Optional) Override dynamic RP for the specified group address range.
If you configure both static RP mapping and dynamic RP mapping (such as auto-RP) in a single
routing instance, allow the static mapping to take precedence for the given static RP group range,
and allow dynamic RP mapping for all other groups.
If you exclude this statement from the configuration and you use both static and dynamic RP
mechanisms for different group ranges within the same routing instance, the dynamic RP mapping
takes precedence over the static RP mapping, even if static RP is defined for a specific group range.
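A sketch of this step, reusing the instance name from the previous sketch; the override statement at the
static RP address hierarchy level gives the static mapping precedence:

[edit routing-instances VR1 protocols pim]
user@host# set rp static address 2001:db8:85a3::8a2e:370:7334 override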
5. Monitor the operation of PIM by running the show pim commands. Run show pim ? to display the
supported commands.
SEE ALSO
PIM Overview
Understanding MLD
Release Description
15.2 Starting in Junos OS Release 15.2, the static configuration uses PIM version 2 by default, which is the
only version supported in that release and beyond.
15.2 Starting in Junos OS Release 15.2, the default PIM version is version 2, and version 1 is not supported.
15.1 For Junos OS Release 15.1 and earlier, the default PIM version can be version 1 or version 2, depending
on the mode you are configuring. PIM version 1 is the default for RP mode ([edit pim rp static address
address]). PIM version 2 is the default for interface mode ([edit pim interface interface-name]). An
explicitly configured PIM version will override the default setting.
RELATED DOCUMENTATION
IN THIS SECTION
A single statically configured RP is a single point of failure: backup resources sit idle, and convergence is
slow when the resource fails. In multicast specifically, there might be closer RPs on the shared tree, so
the use of a single RP is suboptimal.
For the purposes of load balancing and redundancy, you can configure anycast RP. You can use anycast
RP within a domain to provide redundancy and RP load sharing. When an RP fails, sources and receivers
are taken to a new RP by means of unicast routing. When you configure anycast RP, you bypass the
restriction of having one active RP per multicast group, and instead deploy multiple RPs for the same
group range. The RP routers share one unicast IP address. Sources from one RP are known to other RPs
that use the Multicast Source Discovery Protocol (MSDP). Sources and receivers use the closest RP, as
determined by the interior gateway protocol (IGP).
Anycast means that multiple RP routers share the same unicast IP address. Anycast addresses are
advertised by the routing protocols. Packets sent to the anycast address are sent to the nearest RP with
this address. Anycast addressing is a generic concept and is used in PIM sparse mode to add load
balancing and service reliability to RPs.
Anycast RP is defined in RFC 3446, Anycast RP Mechanism Using PIM and MSDP, available at
https://www.ietf.org/rfc/rfc3446.txt.
IN THIS SECTION
Requirements | 353
Overview | 353
Configuration | 353
Verification | 356
This example shows how to configure anycast RP on each RP router in the PIM-SM domain. With this
configuration you can deploy more than one RP for a single group range. This enables load balancing and
redundancy.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure PIM Sparse Mode on the interfaces. See Enabling PIM Sparse Mode.
Overview
When you configure anycast RP, the RP routers in the PIM-SM domain use a shared address. In this
example, the shared address is 10.1.1.2/32. Anycast RP uses Multicast Source Discovery Protocol
(MSDP) to discover and maintain a consistent view of the active sources. Anycast RP also requires an RP
selection method, such as static, auto-RP, or bootstrap RP. This example uses static RP and shows only
one RP router configuration.
Configuration
IN THIS SECTION
Procedure | 354
Results | 355
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
RP Routers
Non-RP Routers
Procedure
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. On each RP router in the domain, configure the shared anycast address on the router’s loopback
address.
[edit interfaces]
user@host# set lo0 unit 0 family inet address 10.1.1.2/32
2. On each RP router in the domain, make sure that the router’s regular loopback address is the primary
address for the interface, and set the router ID.
[edit interfaces]
user@host# set lo0 unit 0 family inet address 192.168.132.1/32 primary
[edit routing-options]
user@host# set router-id 192.168.132.1
3. On each RP router in the domain, configure the local RP address, using the shared address.
4. On each RP router in the domain, create MSDP sessions to the other RPs in the domain.
5. On each non-RP router in the domain, configure a static RP address using the shared address.
user@host# commit
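Steps 3 through 5 might look like the following sketch; the MSDP peer address 192.168.132.2
represents a hypothetical second RP in the domain. On each RP router:

[edit protocols pim]
user@host# set rp local address 10.1.1.2
[edit protocols msdp]
user@host# set peer 192.168.132.2 local-address 192.168.132.1

On each non-RP router:

[edit protocols pim]
user@host# set rp static address 10.1.1.2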
Results
From configuration mode, confirm your configuration by entering the show interfaces, show protocols,
and show routing-options commands. If the output does not display the intended configuration, repeat
the instructions in this example to correct the configuration.
On the RP routers:
Verification
To verify the configuration, run the show pim rps extensive inet command.
You can use anycast RP within a domain to provide redundancy and RP load sharing. When an RP stops
operating, sources and receivers are taken to a new RP by means of unicast routing.
You can configure anycast RP to use PIM and MSDP for IPv4, or PIM alone for both IPv4 and IPv6
scenarios. Both are discussed in this section.
We recommend a static RP mapping with anycast RP over a bootstrap router and auto-RP configuration
because it provides all the benefits of a bootstrap router and auto-RP without the complexity of the BSR
and auto-RP mechanisms.
Starting in Junos OS Release 16.1, all systems on a subnet must run the same version of PIM.
The default PIM version can be version 1 or version 2, depending on the mode you are configuring.
PIMv1 is the default RP mode (at the [edit protocols pim rp static address address] hierarchy level).
However, PIMv2 is the default for interface mode (at the [edit protocols pim interface interface-name]
hierarchy level). Explicitly configured versions override the defaults. This example explicitly configures
PIMv2 on the interfaces.
The following example shows an anycast RP configuration for the RP routers, first with MSDP and then
using PIM alone, and for non-RP routers.
1. For a network using an RP with MSDP, configure the RP using the lo0 loopback interface, which is
always up. Include the address statement and specify the unique and routable router ID and the RP
address at the [edit interfaces lo0 unit 0 family inet] hierarchy level. In this example, the router ID is
198.51.100.254 and the shared RP address is 198.51.100.253. Include the primary statement for the
first address. Including the primary statement selects the router’s primary address from all the
preferred addresses on all interfaces.
interfaces {
lo0 {
description "PIM RP";
unit 0 {
family inet {
address 198.51.100.254/32 {
primary;
}
address 198.51.100.253/32;
}
}
}
}
2. Specify the RP address. Include the address statement at the [edit protocols pim rp local] hierarchy
level (the same address as the secondary lo0 interface).
For all interfaces, include the mode statement to set the mode to sparse and the version statement
to specify PIM version 2 at the [edit protocols pim rp local interface all] hierarchy level. When
configuring all interfaces, exclude the fxp0.0 management interface by including the disable
statement for that interface.
protocols {
pim {
rp {
local {
family inet;
address 198.51.100.253;
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
}
3. Configure MSDP peering. Include the peer statement to configure the address of the MSDP peer at
the [edit protocols msdp] hierarchy level. For MSDP peering, use the unique, primary addresses
instead of the anycast address. To specify the local address for MSDP peering, include the local-
address statement at the [edit protocols msdp peer] hierarchy level.
protocols {
msdp {
peer 198.51.100.250 {
local-address 198.51.100.254;
}
}
}
NOTE: If you need to configure a PIM RP for both IPv4 and IPv6 scenarios, perform Step "4"
on page 359 and Step "5" on page 359. Otherwise, go to Step "6" on page 360.
4. Configure an RP using the lo0 loopback interface, which is always up. Include the address statement
to specify the unique and routable router address and the RP address at the [edit interfaces lo0 unit
0 family inet] hierarchy level. In this example, the router ID is 198.51.100.254 and the shared RP
address is 198.51.100.253. Include the primary statement on the first address. Including the primary
statement selects the router’s primary address from all the preferred addresses on all interfaces.
interfaces {
lo0 {
description "PIM RP";
unit 0 {
family inet {
address 198.51.100.254/32 {
primary;
}
address 198.51.100.253/32;
}
}
}
}
5. Include the address statement at the [edit protocols pim rp local] hierarchy level to specify the RP
address (the same address as the secondary lo0 interface).
For all interfaces, include the mode statement to set the mode to sparse, and the version statement
to specify PIM version 2 at the [edit protocols pim rp local interface all] hierarchy level. When
configuring all interfaces, exclude the fxp0.0 management interface by including the disable
statement for that interface.
Include the anycast-pim statement to configure anycast RP without MSDP (for example, if IPv6 is
used for multicasting). The other RP routers that share the same IP address are configured using the
rp-set statement. There is one entry for each RP, and the maximum that can be configured is 15. For
each RP, specify the routable IP address of the router and whether MSDP source active (SA)
messages are forwarded to the RP.
MSDP configuration is not necessary for this type of IPv4 anycast RP configuration.
protocols {
pim {
rp {
local {
family inet {
address 198.51.100.253;
anycast-pim {
rp-set {
address 198.51.100.240;
address 198.51.100.241 forward-msdp-sa;
}
local-address 198.51.100.254; # If not configured, the lo0 primary address is used
}
}
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
6. Configure the non-RP routers. The anycast RP configuration for a non-RP router is the same whether
MSDP is used or not. Specify a static RP by adding the address at the [edit protocols pim rp static]
hierarchy level. Include the version statement at the [edit protocols pim rp static address] hierarchy
level to specify PIM version 2.
protocols {
pim {
rp {
static {
address 198.51.100.253 {
version 2;
}
}
}
}
}
7. Include the mode statement at the [edit protocols pim interface all] hierarchy level to specify sparse
mode on all interfaces. Then include the version statement at the same hierarchy level to configure all
interfaces for PIM version 2. When configuring all interfaces, exclude the
fxp0.0 management interface by including the disable statement for that interface.
protocols {
pim {
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
interfaces {
lo0 {
description "PIM RP";
unit 0 {
family inet {
address 198.51.100.254/32 {
primary;
}
address 198.51.100.253/32;
}
}
362
}
}
Add the address statement at the [edit protocols pim rp local] hierarchy level to specify the RP address
(the same address as the secondary lo0 interface).
For all interfaces, use the mode statement to set the mode to sparse, and include the version statement
to specify PIM version 2 at the [edit protocols pim rp local interface all] hierarchy level. When
configuring all interfaces, exclude the fxp0.0 management interface by adding the disable statement for
that interface.
Use the anycast-pim statement to configure anycast RP without MSDP (for example, if IPv6 is used for
multicasting). The other RP routers that share the same IP address are configured using the rp-set
statement. There is one entry for each RP, and the maximum that can be configured is 15. For each RP,
specify the routable IP address of the router and whether MSDP source active (SA) messages are
forwarded to the RP.
protocols {
pim {
rp {
local {
family inet {
address 198.51.100.253;
anycast-pim {
rp-set {
address 198.51.100.240;
address 198.51.100.241 forward-msdp-sa;
}
local-address 198.51.100.254; #If not configured, use
lo0 primary
}
}
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
MSDP configuration is not necessary for this type of IPv4 anycast RP configuration.
Release History Table
Release   Description
16.1      Starting in Junos OS Release 16.1, all systems on a subnet must run the same version of PIM.
IN THIS SECTION
Example: Rejecting PIM Bootstrap Messages at the Boundary of a PIM Domain | 368
NOTE: For legacy configuration purposes, there are two sections that describe the configuration
of bootstrap routers: one section for both IPv4 and IPv6, and this section, which is for IPv4 only.
The method described in Configuring PIM Bootstrap Properties for IPv4 or IPv6 is
recommended. A commit error occurs if the same IPv4 bootstrap statements are included in both
the IPv4-only and the IPv4-and-IPv6 sections of the hierarchy. The error message is “duplicate
IPv4 bootstrap configuration.”
To determine which routing device is the RP, all routing devices within a PIM domain collect bootstrap
messages. A PIM domain is a contiguous set of routing devices that implement PIM. All are configured
to operate within a common boundary. The domain's bootstrap router initiates bootstrap messages,
which are sent hop by hop within the domain. The routing devices use bootstrap messages to distribute
RP information dynamically and to elect a bootstrap router when necessary.
You can configure bootstrap properties globally or for a routing instance. This example shows the global
configuration.
1. Configure a bootstrap priority value on the routing device. A simple bootstrap configuration assigns
a bootstrap priority value to a routing device; if more than one candidate has the same priority, the
routing device with the highest IP address is elected to be the bootstrap router.
2. (Optional) Create import and export policies to control the flow of IPv4 bootstrap messages to and
from the RP, and apply the policies to PIM. Import and export policies are useful when some of the
routing devices in your PIM domain have interfaces that connect to other PIM domains. Configuring
a policy prevents bootstrap messages from crossing domain boundaries. A reject policy applied with
the bootstrap-import statement prevents messages from being imported into the RP; one applied with
the bootstrap-export statement prevents messages from being exported from the RP.
4. Monitor the operation of PIM bootstrap routing devices by running the show pim bootstrap
command.
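For illustration, steps 1 and 2 might be entered as follows. This sketch assumes the legacy
bootstrap-priority statement at the [edit protocols pim rp] hierarchy level; the priority value and the
no-bsr policy name are example choices only:

[edit]
set protocols pim rp bootstrap-priority 1
set policy-options policy-statement no-bsr then reject
set protocols pim rp bootstrap-import no-bsr
set protocols pim rp bootstrap-export no-bsr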
NOTE: For legacy configuration purposes, there are two sections that describe the configuration
of bootstrap routers: one section for IPv4 only, and this section, which is for both IPv4 and IPv6.
The method described in this section is recommended. A commit error occurs if the same IPv4
bootstrap statements are included in both the IPv4-only and the IPv4-and-IPv6 sections of the
hierarchy. The error message is “duplicate IPv4 bootstrap configuration.”
To determine which routing device is the RP, all routing devices within a PIM domain collect bootstrap
messages. A PIM domain is a contiguous set of routing devices that implement PIM. All devices are
configured to operate within a common boundary. The domain's bootstrap router initiates bootstrap
messages, which are sent hop by hop within the domain. The routing devices use bootstrap messages to
distribute RP information dynamically and to elect a bootstrap router when necessary.
You can configure bootstrap properties globally or for a routing instance. This example shows the global
configuration.
1. Configure a bootstrap priority value for IPv4, IPv6, or both by including a value in the priority field.
To disable the bootstrap function in the IPv4 and IPv6 configuration, delete the bootstrap statement.
2. (Optional) Create import and export policies to control the flow of bootstrap messages to and from
the RP, and apply the policies to PIM. Import and export policies are useful when some of the routing
devices in your PIM domain have interfaces that connect to other PIM domains. Configuring a policy
prevents bootstrap messages from crossing domain boundaries. A reject policy applied with the import
statement prevents messages from being imported into the RP; one applied with the export statement
prevents messages from being exported from the RP.
4. Monitor the operation of PIM bootstrap routing devices by running the show pim bootstrap
command.
protocols {
pim {
rp {
bootstrap {
family inet {
priority 1;
import pim-import;
export pim-export;
}
family inet6 {
priority 1;
import pim-import;
export pim-export;
}
}
}
}
}
policy-options {
policy-statement pim-import {
from interface so-0/1/0;
then reject;
}
policy-statement pim-export {
to interface so-0/1/0;
then reject;
}
}
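Expressed as set commands, the preceding bootstrap configuration is, in sketch form:

[edit]
set protocols pim rp bootstrap family inet priority 1
set protocols pim rp bootstrap family inet import pim-import
set protocols pim rp bootstrap family inet export pim-export
set protocols pim rp bootstrap family inet6 priority 1
set protocols pim rp bootstrap family inet6 import pim-import
set protocols pim rp bootstrap family inet6 export pim-export
set policy-options policy-statement pim-import from interface so-0/1/0
set policy-options policy-statement pim-import then reject
set policy-options policy-statement pim-export to interface so-0/1/0
set policy-options policy-statement pim-export then reject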
protocols {
pim {
rp {
bootstrap-import no-bsr;
bootstrap-export no-bsr;
}
}
}
policy-options {
policy-statement no-bsr {
then reject;
}
}
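This second snippet, which applies a reject-everything policy with the bootstrap-import and
bootstrap-export statements, corresponds to rejecting PIM bootstrap messages at the boundary of a
PIM domain. The same boundary configuration as set commands, in sketch form:

[edit]
set policy-options policy-statement no-bsr then reject
set protocols pim rp bootstrap-import no-bsr
set protocols pim rp bootstrap-export no-bsr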
You can configure a more dynamic way of assigning rendezvous points (RPs) in a multicast network by
means of auto-RP. When you configure auto-RP, a router learns the address of the RP in the network
automatically; auto-RP also has the added advantage of operating with both PIM version 1 and
version 2. Although auto-RP is a nonstandard (non-RFC-based) function that typically uses dense mode
PIM to advertise control traffic, it provides an important failover advantage that simple static RP
assignment does not: you can configure multiple routers as RP candidates, and if the elected RP fails,
one of the other preconfigured routers takes over the RP functions. This capability is controlled by the
auto-RP mapping agent.
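As an illustration (this sample is not part of the original procedure), a minimal candidate-RP
configuration for auto-RP might look like the following. It assumes the auto-rp statement at the
[edit protocols pim rp] hierarchy level and uses sparse-dense mode with the dense-groups statement so
that the auto-RP announce and discovery groups (224.0.1.39 and 224.0.1.40) are flooded in dense
mode; the RP address is an example value:

protocols {
    pim {
        dense-groups {
            224.0.1.39/32;
            224.0.1.40/32;
        }
        interface all {
            mode sparse-dense;
        }
        rp {
            local {
                family inet {
                    address 198.51.100.254;
                }
            }
            auto-rp announce;
        }
    }
}

A candidate RP that should also act as the mapping agent would use auto-rp mapping instead of
auto-rp announce; a router that only listens for RP mappings would use auto-rp discovery.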
Use the mode statement at the [edit protocols pim interface all] hierarchy level to specify sparse
mode on all interfaces. Then add the version statement at the same [edit protocols pim interface all]
hierarchy level to configure all interfaces for PIM version 2. When configuring all interfaces, exclude the
fxp0.0 management interface by adding the disable statement for that interface.
protocols {
pim {
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
Add the address statement at the [edit protocols pim rp local] hierarchy level to specify the RP address
(the same address as the secondary lo0 interface).
For all interfaces, use the mode statement to set the mode to sparse and the version statement to
specify PIM version 2 at the [edit protocols pim interface all] hierarchy level. When configuring
all interfaces, exclude the fxp0.0 management interface by adding the disable statement for that
interface.
protocols {
    pim {
        rp {
            local {
                family inet {
                    address 198.51.100.253;
                }
            }
        }
        interface all {
            mode sparse;
            version 2;
        }
        interface fxp0.0 {
            disable;
        }
    }
}
To configure MSDP peering, add the peer statement to configure the address of the MSDP peer at the
[edit protocols msdp] hierarchy level. For MSDP peering, use the unique, primary addresses instead of
the anycast address. To specify the local address for MSDP peering, add the local-address statement at
the [edit protocols msdp peer] hierarchy level.
protocols {
msdp {
peer 198.51.100.250 {
local-address 198.51.100.254;
}
}
}
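Because MSDP peering is configured on both ends, the other RP needs a mirror-image configuration;
each router names the peer's unique primary address as the peer and its own primary address as the
local address. A sketch, assuming 198.51.100.250 is the primary address of the second RP:

protocols {
    msdp {
        peer 198.51.100.254 {
            local-address 198.51.100.250;
        }
    }
}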
Configuring Embedded RP
IN THIS SECTION