VMware NSX-T Data Center: Install, Configure, Manage (V3.2)
Lecture Manual
Copyright © 2022 VMware, Inc. All rights reserved. This manual and its accompanying materials are
protected by U.S. and international copyright and intellectual property laws. VMware products are covered
by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or
trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names
mentioned herein may be trademarks of their respective companies. VMware vSphere® with VMware
Tanzu®, VMware vSphere® vMotion®, VMware vSphere® Lifecycle Manager™, VMware vSphere® Fault
Tolerance, VMware vSphere® Distributed Switch™, VMware vSphere® Client™, VMware vSphere® 2015,
VMware vSphere®, VMware vRealize® Network Insight™, VMware vRealize® Automation™, VMware
vCenter Server®, VMware Workspace ONE® Access™, VMware View®, VMware Horizon® View™, VMware
Verify™, VMware SD-WAN™ by VeloCloud®, VMware SD-WAN™ by VeloCloud® – WFH Pro Subscription,
VMware SD-WAN™ by VeloCloud® – WFH Subscription, VMware SD-WAN™, VMware SD-WAN™ for
AWS GovCloud (US), VMware SD-WAN™ on AWS GovCloud (US), VMware Ports and Protocols™,
VMware HCX®, VMware HCX® for Telco Cloud, VMware Customer Connect™, VMware Carbon Black
Cloud™, VMware vSphere® Distributed Switch™, VMware Tanzu® Service Mesh™ Advanced edition,
VMware Tanzu® Kubernetes Grid™ Integrated Edition, VMware Pivotal Labs® Platform Deployment™,
VMware vSphere® Network I/O Control, VMware NSX-T™ Data Center, VMware NSX-T™, VMware NSX®
Network Detection and Response™, VMware NSX® Manager™, VMware NSX® Intelligence™, VMware NSX®
Gateway Firewall™, VMware NSX® Edge™, VMware NSX® Distributed IDS/IPS™, VMware NSX® Distributed
Firewall™, VMware NSX® Defender™, VMware NSX® Data Center Enterprise Plus, VMware NSX® Data
Center, VMware NSX® Controller™, VMware NSX Cloud™, VMware NSX® Advanced Threat Prevention™,
VMware NSX® Advanced Threat Analyzer™, VMware NSX® Advanced Load Balancer™ Enterprise with
Cloud services, VMware NSX® Advanced Load Balancer Controller™, VMware NSX® Advanced Load
Balancer™, VMware NSX® Advanced Load Balancer™ – Basic Edition, VMware NSX® API™, VMware NSX®,
VMware Lab Connect™, VMware Horizon® Standard Edition, VMware Go™, VMware ESXi™, VMware ESX®,
Cloud Foundry, and VMware Accelerate™ are registered trademarks or trademarks of VMware, Inc. in the
United States and/or other jurisdictions.
The training material is provided “as is,” and all express or implied conditions, representations, and warranties,
including any implied warranty of merchantability, fitness for a particular purpose or noninfringement, are
disclaimed, even if VMware, Inc., has been advised of the possibility of such claims. This material is designed
to be used for reference purposes in conjunction with a training course.
The training material is not a standalone training tool. Use of the training material for self-study without class
attendance is not recommended. These materials and the computer programs to which they relate are the
property of, and embody trade secrets and confidential information proprietary to, VMware, Inc., and may
not be reproduced, copied, disclosed, transferred, adapted or modified without the express written approval
of VMware, Inc.
www.vmware.com/education
Module 2 VMware Virtual Cloud Network and NSX-T Data Center ............... 9
2-2 Importance ............................................................................................................................................... 9
2-3 Lesson 1: VMware Virtual Cloud Network and NSX-T Data Center ................................ 10
2-4 Learner Objectives.............................................................................................................................. 10
2-5 Virtual Cloud Network Framework ................................................................................................ 11
2-6 NSX Portfolio ........................................................................................................................................ 13
2-7 Use Cases for NSX-T Data Center ............................................................................................... 15
2-8 NSX-T Data Center Features (1) ................................................................................................... 16
2-9 NSX-T Data Center Features (2) .................................................................................................. 16
2-10 High-Level Architecture of NSX-T Data Center.......................................................................17
2-11 Management and Control Planes................................................................................................... 18
2-12 About the NSX Management Cluster .......................................................................................... 19
2-13 NSX Management Cluster with Virtual IP Address ................................................................. 21
2-14 NSX Management Cluster with Load Balancer ....................................................................... 22
2-15 About the NSX Policy....................................................................................................................... 23
2-16 About NSX Manager ......................................................................................................................... 24
2-17 NSX Policy and NSX Manager Workflow ................................................................................. 25
2-18 About NSX Controller....................................................................................................................... 26
2-19 Control Plane Components (1)........................................................................................................27
2-20 Control Plane Components (2)...................................................................................................... 28
2-21 Control Plane Change Propagation ............................................................................................. 30
2-22 Control Plane Sharding Function ................................................................................................... 31
2-23 Handling Controller Failure .............................................................................................................. 32
2-24 About the Data Plane ....................................................................................................................... 33
2-25 Data Plane Functions ........................................................................................................................ 33
2-26 Data Plane Components .................................................................................................................. 34
2-27 Data Plane Communication Channels ......................................................................................... 35
2-28 Review of Learner Objectives ....................................................................................................... 36
2-29 Key Points ............................................................................................................................................. 36
3-59 About Uplink Profiles......................................................................................................................... 85
3-60 Default Uplink Profiles ....................................................................................................................... 86
3-61 About Teaming Policies ................................................................................................................... 86
3-62 Teaming Policy Modes ..................................................................................................................... 87
3-63 Teaming Policies Supported by ESXi and KVM Hosts ......................................................... 89
3-64 About LLDP.......................................................................................................................................... 90
3-65 Enabling LLDP...................................................................................................................................... 90
3-66 About Network I/O Control Profiles ............................................................................................ 91
3-67 Creating Network I/O Control Profiles ...................................................................................... 92
3-68 About Transport Node Profiles .................................................................................................... 93
3-69 Benefits of Transport Node Profiles ........................................................................................... 94
3-70 Prerequisites for Transport Node Profile .................................................................................. 95
3-71 Attaching a Transport Node Profile to the vSphere Cluster (1) ....................................... 96
3-72 Attaching a Transport Node Profile to the vSphere Cluster (2) ...................................... 97
3-73 Managing ESXi: Host Preparation (1) ........................................................................................... 98
3-74 Managing ESXi: Host Preparation (2) .......................................................................................... 98
3-75 Reviewing the ESXi Transport Node Status (1) ...................................................................... 99
3-76 Reviewing the ESXi Transport Node Status (2) ................................................................... 100
3-77 Verifying the ESXi Transport Node by CLI .............................................................................. 101
3-78 Transport Node Preparation: KVM ............................................................................................ 102
3-79 Configuring KVM Hosts as Transport Nodes (1) ................................................................... 103
3-80 Configuring KVM Hosts as Transport Nodes (2) .................................................................. 103
3-81 Reviewing the KVM Transport Node Status .......................................................................... 104
3-82 Verifying the KVM Transport Node by CLI ............................................................................ 104
3-83 Lab 4: Preparing the NSX Infrastructure ................................................................................. 105
3-84 Review of Learner Objectives ..................................................................................................... 105
3-85 Key Points (1) ..................................................................................................................................... 106
3-86 Key Points (2) .................................................................................................................................... 106
4-45 Creating an IP Discovery Segment Profile ...............................................................................141
4-46 MAC Discovery Segment Profile ................................................................................................142
4-47 QoS Segment Profile ...................................................................................................................... 144
4-48 Segment Security Profile ................................................................................................................146
4-49 SpoofGuard Segment Profile ........................................................................................................147
4-50 Creating a SpoofGuard Segment Profile ..................................................................................148
4-51 Review of Learner Objectives ......................................................................................................149
4-52 Lesson 5: Logical Switching Packet Forwarding .................................................................. 150
4-53 Learner Objectives........................................................................................................................... 150
4-54 NSX-T Data Center Controller Tables ...................................................................................... 150
4-55 TEP Table Update (1) ........................................................................................................................ 151
4-56 TEP Table Update (2) ...................................................................................................................... 152
4-57 TEP Table Update (3) ......................................................................................................................153
4-58 TEP Table Update (4) ......................................................................................................................154
4-59 MAC Table Update (1) ..................................................................................................................... 155
4-60 MAC Table Update (2) ....................................................................................................................156
4-61 MAC Table Update (3) .................................................................................................................... 157
4-62 MAC Table Update (4) ....................................................................................................................158
4-63 About the ARP Table ......................................................................................................................158
4-64 ARP Table Update (1) ......................................................................................................................159
4-65 ARP Table Update (2) .................................................................................................................... 160
4-66 ARP Table Update (3) ......................................................................................................................161
4-67 ARP Table Update (4) .....................................................................................................................162
4-68 Unicast Packet Forwarding Across Hosts (1) ......................................................................... 163
4-69 Unicast Packet Forwarding Across Hosts (2) ........................................................................164
4-70 Unicast Packet Forwarding Across Hosts (3) ........................................................................ 165
4-71 Unicast Packet Forwarding Across Hosts (4) ........................................................................166
4-72 Overview of BUM Traffic................................................................................................................ 167
4-73 Managing BUM Traffic: Head Replication..................................................................................169
4-74 Managing BUM Traffic: Hierarchical Two-Tier Replication .................................................170
4-75 Lab 5: Configuring Segments ......................................................................................................... 171
4-76 Review of Learner Objectives ....................................................................................................... 171
4-77 Key Points ............................................................................................................................................ 172
5-40 Joining NSX Edge Bare Metal with the Management Plane............................................ 208
5-41 Verifying the Edge Transport Node Status ........................................................................... 209
5-42 Changing the NSX Edge VM Resource Reservations ........................................................ 210
5-43 Changing Node Settings .................................................................................................................. 211
5-44 Postdeployment Verification Checklist ...................................................................................... 211
5-45 Creating an NSX Edge Cluster ..................................................................................................... 212
5-46 Lab 6: Deploying and Configuring NSX Edge Nodes........................................................... 213
5-47 Review of Learner Objectives ...................................................................................................... 213
5-48 Lesson 3: Configuring Tier-0 and Tier-1 Gateways ..............................................................214
5-49 Learner Objectives............................................................................................................................214
5-50 Gateway Configuration Tasks ...................................................................................................... 215
5-51 Creating the Tier-1 Gateway .........................................................................................................216
5-52 Connecting Segments to the Tier-1 Gateway ........................................................................ 217
5-53 Using Network Topology to Validate the Tier-1 Gateway Configuration .................... 218
5-54 Testing East-West Connectivity..................................................................................................219
5-55 Creating the Uplink Segments .................................................................................................... 220
5-56 Creating the Tier-0 Gateway (1) .................................................................................................. 221
5-57 Creating the Tier-0 Gateway (2) ................................................................................................. 221
5-58 Configuring Routing ......................................................................................................................... 222
5-59 Connecting the Tier-1 and Tier-0 Gateways .......................................................................... 222
5-60 Enabling Route Advertisement in the Tier-1 Gateway ....................................................... 223
5-61 Configuring Route Redistribution on the Tier-0 Gateway ................................................224
5-62 Using Network Topology to Validate the Tier-0 Gateway Configuration .................. 225
5-63 Testing North-South Connectivity .............................................................................................226
5-64 Lab 7: Configuring the Tier-1 Gateway ..................................................................................... 227
5-65 Review of Learner Objectives ..................................................................................................... 227
5-66 Lesson 4: Configuring Static and Dynamic Routing ............................................................. 228
5-67 Learner Objectives...........................................................................................................................228
5-68 Static and Dynamic Routing..........................................................................................................229
5-69 Tier-0 Gateway Routing Configurations (1) ........................................................................... 230
5-70 Tier-0 Gateway Routing Configurations (2) ............................................................................ 231
5-71 Configuring Static Routes on a Tier-0 Gateway (1) ............................................................. 232
5-72 Configuring Static Routes on a Tier-0 Gateway (2) ............................................................ 233
5-73 Configuring Dynamic Routing with BGP on Tier-0 Gateways (1) ...................................234
5-74 Configuring Dynamic Routing with BGP on Tier-0 Gateways (2) .................................. 235
5-75 Verifying the BGP Configuration of the Tier-0 Gateways ................................................ 236
5-76 BGP Route Aggregation ................................................................................................................ 237
5-77 Configuring Route Aggregation with BGP ..............................................................................238
5-78 Configuring Dynamic Routing with OSPF on Tier-0 Gateways (1) ................................ 239
5-79 Configuring Dynamic Routing with OSPF on Tier-0 Gateways (2) .............................. 240
5-80 Configuring Dynamic Routing with OSPF on Tier-0 Gateways (3) ................................ 241
5-81 Verifying OSPF Configuration of the Tier-0 Gateways .....................................................242
5-82 OSPF Route Summarization .........................................................................................................243
5-83 Configuring Route Summarization with OSPF ...................................................................... 244
5-84 Lab 8: Creating and Configuring a Tier-0 Gateway with OSPF..................................... 244
5-85 Lab 9: Configuring the Tier-0 Gateway with BGP ...............................................................245
5-86 Review of Learner Objectives .....................................................................................................245
5-87 Lesson 5: ECMP and High Availability.......................................................................................246
5-88 Learner Objectives...........................................................................................................................246
5-89 About Equal-Cost Multipath Routing.........................................................................................247
5-90 Enabling ECMP in BGP....................................................................................................................248
5-91 Enabling ECMP in OSPF.................................................................................................................248
5-92 About High Availability ...................................................................................................................249
5-93 Active-Active HA Mode ............................................................................................................... 250
5-94 Active-Active Topology with BGP ............................................................................................. 251
5-95 Active-Active Topology with OSPF ......................................................................................... 252
5-96 Active-Standby HA Mode............................................................................................................. 253
5-97 Active-Standby Topology with BGP ........................................................................................ 255
5-98 Active-Standby Topology with OSPF .....................................................................................256
5-99 Failover Detection Mechanisms .................................................................................................. 257
5-100 About BFD ..........................................................................................................................................258
5-101 Failover Scenario with BFD ..........................................................................................................259
5-102 Failover Scenario with Dynamic Routing ................................................................................ 260
5-103 Failover Modes ...................................................................................................................................261
5-104 Review of Learner Objectives ......................................................................................................261
5-105 Lesson 6: Logical Routing Packet Walk...................................................................................262
5-106 Learner Objectives...........................................................................................................................262
5-107 Single-Tier Routing: Egress to Physical Network (1) ........................................................... 262
5-108 Single-Tier Routing: Egress to Physical Network (2) .......................................................... 263
5-109 Single-Tier Routing: Egress to Physical Network (3) ..........................................................264
5-110 Single-Tier Routing: Egress to Physical Network (4) .......................................................... 265
5-111 Single-Tier Routing: Egress to Physical Network (5) ..........................................................266
5-112 Single-Tier Routing: Egress to Physical Network (6) .......................................................... 267
5-113 Single-Tier Routing: Ingress from Physical Network (7) ....................................................268
5-114 Single-Tier Routing: Ingress from Physical Network (8) ....................................................269
5-115 Single-Tier Routing: Ingress from Physical Network (9) ....................................................270
5-116 Single-Tier Routing: Ingress from Physical Network (10) ................................................... 271
5-117 Single-Tier Routing: Ingress from Physical Network (11).................................................... 272
5-118 Multitier Routing: Egress to Physical Network (1) ................................................................. 273
5-119 Multitier Routing: Egress to Physical Network (2) ................................................................ 274
5-120 Multitier Routing: Egress to Physical Network (3) ................................................................ 275
5-121 Multitier Routing: Egress to Physical Network (4) ............................................................... 276
5-122 Multitier Routing: Egress to Physical Network (5) ................................................................ 277
5-123 Multitier Routing: Egress to Physical Network (6)................................................................ 278
5-124 Multitier Routing: Egress to Physical Network (7) ................................................................ 279
5-125 Multitier Routing: Egress to Physical Network (8)............................................................... 280
5-126 Multitier Routing: Egress to Physical Network (9)................................................................. 281
5-127 Multitier Routing: Ingress from Physical Network (10) ........................................................ 282
5-128 Multitier Routing: Ingress from Physical Network (11) ......................................................... 283
5-129 Multitier Routing: Ingress from Physical Network (12) ........................................................284
5-130 Multitier Routing: Ingress from Physical Network (13) ........................................................ 285
5-131 Multitier Routing: Ingress from Physical Network (14) ........................................................286
5-132 Multitier Routing: Ingress from Physical Network (15) ........................................................ 287
5-133 Multitier Routing: Ingress from Physical Network (16) ........................................................288
5-134 Review of Learner Objectives .....................................................................................................288
5-135 Lesson 7: VRF Lite ...........................................................................................................................289
5-136 Learner Objectives...........................................................................................................................289
5-137 About VRF Lite ................................................................................................................................ 290
5-138 VRF Lite Requirements and Limitations ....................................................................................291
5-139 Use Cases for VRF Lite ..................................................................................................................292
5-140 VRF Lite Topologies........................................................................................................................293
5-141 VRF Lite Gateway Interfaces.......................................................................................................294
5-142 VRF Lite: Control and Data Planes ............................................................................................295
5-143 Configuring VRF Lite .......................................................................................................................296
5-144 Deploying the Default Tier-0 Gateway .................................................................................... 297
5-145 Adding Uplink Interfaces to the Default Tier-0 Gateway..................................................298
5-146 Configuring BGP for the Default Tier-0 Gateway................................................................299
5-147 Adding the Uplink Trunk Segment for the VRF Gateway................................................ 300
5-148 Deploying the VRF Gateway ....................................................................................................... 301
5-149 Adding Uplink Interfaces to the VRF Gateway .................................................................... 302
5-150 Configuring BGP for the VRF Gateway .......................................................................... 303
5-151 Connecting a Tier-1 Gateway to the VRF Gateway........................................................... 304
5-152 VRF Lite Validation ......................................................................................................................... 305
5-153 Lab 10: Configuring VRF Lite ...................................................................................................... 306
5-154 Review of Learner Objectives .................................................................................................... 306
5-155 Key Points (1) .....................................................................................................................................307
5-156 Key Points (2) ....................................................................................................................................307
Module 7 NSX-T Data Center Firewalls ................................................................. 321
7-2 Importance ........................................................................................................................................... 321
7-3 Module Lessons.................................................................................................................................. 321
7-4 Lesson 1: NSX Segmentation ....................................................................................................... 322
7-5 Learner Objectives........................................................................................................................... 322
7-6 Traditional Security Challenges ................................................................................................... 323
7-7 About Zero-Trust Security ...........................................................................................................324
7-8 About NSX Segmentation ............................................................................................................ 325
7-9 Use Cases for NSX Segmentation .............................................................................................326
7-10 NSX Segmentation Benefits ......................................................................................................... 327
7-11 Enforcing Zero-Trust with NSX Segmentation ..................................................................... 328
7-12 Step 1: Creating Virtual Security Zones ....................................................................................329
7-13 Step 2: Identifying the Application Boundaries .......................................................... 330
7-14 Step 3: Implementing Micro-Segmentation............................................................................. 332
7-15 Step 4: Securing Through Context ............................................................................................ 333
7-16 Review of Learner Objectives .....................................................................................................334
7-17 Lesson 2: NSX-T Data Center Distributed Firewall ............................................................. 335
7-18 Learner Objectives...........................................................................................................................335
7-19 NSX-T Data Center Firewalls .......................................................................................................336
7-20 Features of the Distributed Firewall .......................................................................................... 337
7-21 Distributed Firewall: Key Concepts............................................................................................338
7-22 Overview of a Security Policy .....................................................................................................339
7-23 Distributed Firewall Policy Categories ..................................................................................... 340
7-24 About Distributed Firewall Policies .............................................................................................341
7-25 Distributed Firewall Rule Processing within a Policy ...........................................................342
7-26 Applied To Field for the Policy....................................................................................................343
7-27 Configuring Distributed Firewall Policy Settings ...................................................................343
7-28 Configuring Time-Based Firewall Policies ............................................................................... 344
7-29 Creating Distributed Firewall Rules ............................................................................................345
7-30 Configuring Distributed Firewall Rule Parameters ................................................................346
7-31 Specifying Sources and Destinations for a Rule ...................................................................347
7-32 Creating Groups ................................................................................................................................347
7-33 Adding Members and Member Criteria for a Group ............................................................348
7-34 Creating Groups Based on Tags .................................................................................................348
7-35 Specifying Services for a Rule .....................................................................................................349
7-36 Adding a Context Profile to a Rule ........................................................................................... 350
7-37 Configuring Context Profile Attributes...................................................................................... 351
7-38 Custom FQDN Filtering .................................................................................................................. 352
7-39 Setting the Scope of Rule Enforcement .................................................................................. 353
7-40 Specifying the Action for a Rule .................................................................................................354
7-41 Jump To Application DFW Rules (1) ......................................................................................... 355
7-42 Jump To Application DFW Rules (2) ........................................................................................356
7-43 Distributed Firewall Rule Settings ............................................................................................... 357
7-44 Saving and Viewing the Distributed Firewall Configuration .............................................. 358
7-45 Rolling Back to a Saved Distributed Firewall Configuration ............................................. 359
7-46 Distributed Firewall Configuration Export and Import ....................................................... 360
7-47 Distributed Firewall Architecture .................................................................................................361
7-48 Distributed Firewall Architecture: ESXi ....................................................................................362
7-49 Distributed Firewall Rule Processing: ESXi..............................................................................363
7-50 Distributed Firewall Architecture: KVM ....................................................................................364
7-51 Distributed Firewall Rule Processing: KVM ............................................................................. 365
7-52 Lab 11: Configuring the NSX Distributed Firewall ..................................................................366
7-53 Review of Learner Objectives .....................................................................................................366
7-54 Lesson 3: Use Case for Security in Distributed Firewall on VDS .................................... 367
7-55 Learner Objectives...........................................................................................................................367
7-56 About Distributed Firewall on VDS............................................................................................368
7-57 Supported Features ........................................................................................................................369
7-58 Distributed Firewall on VDS Requirements.............................................................................369
7-59 Installation Workflow.......................................................................................................................370
7-60 Preparing the Cluster for Security .............................................................................................370
7-61 Validating the Security Cluster Preparation from the NSX UI .......................................... 371
7-62 Transport Node Preparation ......................................................................................................... 371
7-63 Autoconfigured Transport Node Profile.................................................................................. 372
7-64 VLAN Transport Zones .................................................................................................................. 373
7-65 Discovered Segments (1)............................................................................................................... 373
7-66 Discovered Segments (2)..............................................................................................................374
7-67 Configuring Segment Profiles ...................................................................................................... 375
7-68 Grouping Enhancement .................................................................................................................. 376
7-69 Review of Learner Objectives ..................................................................................................... 376
7-70 Lesson 4: NSX-T Data Center Gateway Firewall ................................................................. 377
7-71 Learner Objectives........................................................................................................................... 377
7-72 About the Gateway Firewall ........................................................................................................ 378
7-73 Predefined Gateway Firewall Categories .............................................................................. 380
7-74 Gateway Firewall Policy ..................................................................................................................381
7-75 Configuring Gateway Firewall Policy Settings ....................................................................... 382
7-76 Configuring Gateway Firewall Rules ..........................................................................................383
7-77 Configuring Gateway Firewall Rules Settings ........................................................................384
7-78 Gateway Firewall Architecture ....................................................................................................385
7-79 Gateway Firewall Rule Processing .............................................................................................386
7-80 Lab 12: Configuring the NSX Gateway Firewall..................................................................... 387
7-81 Review of Learner Objectives ..................................................................................................... 387
7-82 Key Points ...........................................................................................................................................388
8-63 Creating Rules for East-West Malware Prevention............................................................ 444
8-64 About North-South Malware Prevention................................................................................ 445
8-65 Use Cases for North-South Malware Prevention ................................................................ 446
8-66 Requirements for North-South Malware Prevention ..........................................................447
8-67 North-South Malware Prevention Architecture ................................................................... 448
8-68 NSX Edge Components ................................................................................................................ 449
8-69 North-South Malware Prevention Packet Flow for a Known File ................................. 450
8-70 North-South Malware Prevention Packet Flow for an Unknown File............................451
8-71 Enabling Malware Prevention on Tier-1 Gateways...............................................................452
8-72 Creating North-South Malware Prevention Profiles ............................................................453
8-73 Creating Rules for North-South Malware Prevention ........................................................ 454
8-74 Malware Prevention Dashboard (1) ............................................................................................455
8-75 Malware Prevention Dashboard (2) ...........................................................................................456
8-76 About the Allowlist ..........................................................................................................................457
8-77 Lab 15: (Simulation) Configuring Malware Prevention for East-West Traffic.............458
8-78 Review of Learner Objectives .....................................................................................................458
8-79 Lesson 4: NSX Intelligence............................................................................................................459
8-80 Learner Objectives...........................................................................................................................459
8-81 About NSX Intelligence ................................................................................................................. 460
8-82 Use Cases for NSX Intelligence....................................................................................................461
8-83 NSX Intelligence Requirements ...................................................................................................462
8-84 NSX Intelligence Installation ..........................................................................................................463
8-85 Validating the NSX Intelligence Installation ............................................................................ 464
8-86 Granular Data Collection ................................................................................................................465
8-87 NSX Intelligence Visualization (1)................................................................................................ 466
8-88 NSX Intelligence Visualization (2)............................................................................................... 468
8-89 NSX Intelligence Recommendations (1) .................................................................................. 469
8-90 NSX Intelligence Recommendations (2) ................................................................................. 470
8-91 NSX Intelligence Recommendations (3) ..................................................................................472
8-92 NSX Intelligence Recommendations (4) ..................................................................................473
8-93 Suspicious Traffic Detection .........................................................................................................474
8-94 Configuring Detector Definitions ................................................................................................476
8-95 Visualizing Detected Threats (1) .................................................................................................. 477
8-96 Visualizing Detected Threats (2) .................................................................................................478
8-97 Review of Learner Objectives .....................................................................................................478
8-98 Lesson 5: NSX Network Detection and Response..............................................................479
8-99 Learner Objectives...........................................................................................................................479
8-100 About NSX Network Detection and Response ....................................................................479
8-101 NSX Network Detection and Response Use Cases........................................................... 480
8-102 NSX Network Detection and Response High-Level Architecture ..................................481
8-103 NSX Network Detection and Response in NSX-T Data Center .....................................482
8-104 NSX Network Detection and Response Architecture (1) ..................................................483
8-105 NSX Network Detection and Response Architecture (2) ................................................ 484
8-106 NSX Network Detection and Response Requirements .....................................................485
8-107 NSX Network Detection and Response Activation (1) ..................................................... 486
8-108 NSX Network Detection and Response Activation (2) .....................................................488
8-109 Validating the NDR and Cloud Connector Deployments...................................................489
8-110 Visualizing and Mitigating Attacks ............................................................................................. 490
8-111 Accessing the NSX Network Detection and Response UI ................................................491
8-112 Campaign Overview: Active Threats and Attack Stages .................................................492
8-113 Campaign Blueprint ..........................................................................................................................492
8-114 Campaign Timeline ...........................................................................................................................493
8-115 Reviewing Events .............................................................................................................................493
8-116 Lab 16: (Simulation) Using NSX Network Detection and Response to Detect Threats ..... 494
8-117 Review of Learner Objectives .................................................................................................... 494
8-118 Key Points (1) .....................................................................................................................................495
8-119 Key Points (2) ....................................................................................................................................495
9-51 Benefits of NSX Advanced Load Balancer............................................................................. 539
9-52 NSX Advanced Load Balancer Feature Edition Comparison (1).................................... 540
9-53 NSX Advanced Load Balancer Feature Edition Comparison (2).....................................541
9-54 NSX Advanced Load Balancer Architecture .........................................................................542
9-55 NSX Advanced Load Balancer Deployment Workflow ....................................................543
9-56 NSX Advanced Load Balancer Consumption Workflow.................................................. 544
9-57 Requirements for NSX Advanced Load Balancer................................................................545
9-58 Deploying the NSX Advanced Load Balancer Controller Cluster ..................................546
9-59 Service Engines Deployment and Connectivity ....................................................................547
9-60 Creating a Cloud Connector (1) ...................................................................................................548
9-61 Creating a Cloud Connector (2) ..................................................................................................549
9-62 Creating a Service Engine Group............................................................................................... 550
9-63 NSX Advanced Load Balancer Components.......................................................................... 551
9-64 NSX Advanced Load Balancer Topologies ............................................................................ 552
9-65 VIP Placement and Route Redistribution ................................................................................ 553
9-66 North-South Traffic..........................................................................................................................554
9-67 East-West Traffic (1) ....................................................................................................................... 555
9-68 East-West Traffic (2) ......................................................................................................................556
9-69 Creating a Virtual IP Address ....................................................................................................... 557
9-70 Creating a Virtual Service ..............................................................................................................558
9-71 Creating a Server Pool .................................................................................................................. 560
9-72 Configuring Load-Balancing Algorithms ....................................................................................561
9-73 Configuring Server Pool Security Settings ............................................................................. 562
9-74 Configuring Health Monitor Profiles ...........................................................................................563
9-75 Configuring Persistence Profiles .................................................................................................564
9-76 Validating Virtual Services and Server Pools from the NSX UI ....................................... 565
9-77 Accessing the NSX Advanced Load Balancer UI (1) ...........................................................566
9-78 Accessing the NSX Advanced Load Balancer UI (2) .......................................................... 567
9-79 Lab 18: Configuring NSX Advanced Load Balancer ............................................................568
9-80 Review of Learner Objectives .....................................................................................................568
9-81 Lesson 4: IPSec VPN ......................................................................................................................569
9-82 Learner Objectives...........................................................................................................................569
9-83 Use Cases for IPSec VPN .............................................................................................................570
9-84 IPSec VPN Protocols and Algorithms ....................................................................................... 571
9-85 IPSec VPN Methods ........................................................................................................................ 572
9-86 IPSec VPN Modes ............................................................................................................................ 573
9-87 IPSec VPN Types .............................................................................................................................574
9-88 NSX-T Data Center IPSec VPN Deployment ........................................................................ 575
9-89 IPSec VPN: High Availability ......................................................................................................... 576
9-90 Configuring IPSec VPN................................................................................................................... 577
9-91 Configuring an IPSec VPN Service ............................................................................................ 578
9-92 Configuring DPD Profiles ............................................................................................................... 579
9-93 Configuring IKE Profiles ................................................................................................................. 580
9-94 Configuring IPSec Profiles ..............................................................................................................581
9-95 Configuring a Local Endpoint .......................................................................................................582
9-96 Configuring IPSec VPN Sessions (1) ..........................................................................................583
9-97 Configuring IPSec VPN Sessions (2) .........................................................................................584
9-98 Configuring IPSec VPN Sessions (3) .........................................................................................585
9-99 Configuring IPSec VPN Sessions (4) .........................................................................................586
9-100 Review of Learner Objectives ..................................................................................................... 587
9-101 Lesson 5: L2 VPN .............................................................................................................................588
9-102 Learner Objectives...........................................................................................................................588
9-103 About Layer 2 VPN .........................................................................................................................588
9-104 Overview of L2 VPN.......................................................................................................................589
9-105 L2 VPN Edge Packet Flow .......................................................................................................... 590
9-106 L2 VPN Considerations ...................................................................................................................591
9-107 Supported L2 VPN Clients ............................................................................................................592
9-108 About Autonomous Edge .............................................................................................................593
9-109 About Standalone Edge.................................................................................................................594
9-110 About Managed NSX Edge Nodes ............................................................................................595
9-111 Sample L2 VPN Network Topology .........................................................................................595
9-112 L2 VPN Server Configuration Steps .........................................................................................596
9-113 Configuring an IPSec for the L2 VPN Service ....................................................................... 597
9-114 Configuring an IPSec for L2 VPN Local Endpoint ................................................................598
9-115 Configuring the L2 VPN Server Service ..................................................................................599
9-116 Configuring an L2 VPN Server Session ................................................................................... 600
9-117 Configuring the L2 VPN Segments (1) ...................................................................................... 601
9-118 Configuring the L2 VPN Segments (2) .................................................................................... 602
9-119 L2 VPN Client Configuration Steps .......................................................................................... 602
9-120 Configuring the L2 VPN Client Service ................................................................................... 603
9-121 Configuring the L2 VPN Client Session (1) ............................................................................. 603
9-122 Configuring the L2 VPN Client Session (2) ............................................................................ 604
9-123 Configuring the L2 VPN Segments........................................................................................... 605
9-124 Lab 19: Deploying Virtual Private Networks .......................................................................... 605
9-125 Review of Learner Objectives .................................................................................................... 606
9-126 Key Points (1) .................................................................................................................................... 606
9-127 Key Points (2) ................................................................................................................................... 606
Module 10 NSX-T Data Center User and Role Management ........................ 607
10-2 Importance ......................................................................................................................................... 607
10-3 Module Lessons................................................................................................................................ 607
10-4 Lesson 1: Integrating NSX-T Data Center with VMware Identity Manager ............... 608
10-5 Learner Objectives.......................................................................................................................... 608
10-6 About VMware Identity Manager .............................................................................................. 609
10-7 Benefits of Integrating VMware Identity Manager with NSX-T Data Center............. 610
10-8 Prerequisites for VMware Identity Manager Integration...................................................... 611
10-9 Configuring VMware Identity Manager ......................................................................................612
10-10 Overview of the VMware Identity Manager and NSX-T Data Center Integration ... 613
10-11 Creating an OAuth Client................................................................................................................614
10-12 Obtaining the SHA-256 Certificate Thumbprint .................................................................... 615
10-13 Configuring the VMware Identity Manager Details in NSX-T Data Center .................. 616
10-14 Verifying the VMware Identity Manager Integration ............................................................ 617
10-15 Default UI Login ..................................................................................................................................618
10-16 UI Login with VMware Identity Manager...................................................................................619
10-17 Local Login with VMware Identity Manager .......................................................................... 620
10-18 Review of Learner Objectives .................................................................................................... 620
10-19 Lesson 2: Integrating NSX-T Data Center with LDAP ........................................................ 621
10-20 Learner Objectives............................................................................................................................621
10-21 About LDAP........................................................................................................................................621
10-22 Benefits of Integrating LDAP with NSX-T Data Center .................................................... 622
10-23 Authentication with LDAP ............................................................................................................622
10-24 Adding an Identity Source.............................................................................................................623
10-25 Configuring the LDAP Server ......................................................................................................624
10-26 UI Login with LDAP .........................................................................................................................625
10-27 Review of Learner Objectives .....................................................................................................625
10-28 Lesson 3: Managing Users and Configuring RBAC ..............................................................626
10-29 Learner Objectives...........................................................................................................................626
10-30 NSX-T Data Center Users ............................................................................................................. 627
10-31 Activate Guest Users ......................................................................................................................628
10-32 Using Role-Based Access Control .............................................................................................629
10-33 Built-In Roles (1) ................................................................................................................................ 630
10-34 Built-In Roles (2) ............................................................................................................................... 630
10-35 Custom Role-Based Access Control..........................................................................................631
10-36 Creating Custom Roles (1) .............................................................................................................632
10-37 Creating Custom Roles (2) ............................................................................................................633
10-38 Role Assignment ...............................................................................................................................634
10-39 Lab 20: Managing Users and Roles ............................................................................................635
10-40 Review of Learner Objectives .....................................................................................................635
10-41 Key Points ...........................................................................................................................................635
11-63 GM Groups and Span (1) ................................................................................................................689
11-64 GM Groups and Span (2) .............................................................................................................. 690
11-65 Group Span and Dynamic Members Span................................................................................691
11-66 Dynamic Groups Based on the VM Tag (1) .............................................................................692
11-67 Dynamic Groups Based on the VM Tag (2) ............................................................................693
11-68 Review of Learner Objectives .................................................................................................... 694
11-69 Key Points .......................................................................................................................................... 694
Module 1
Course Introduction
1-3 Importance
NSX-T Data Center is the network virtualization and security platform that enables the virtual
cloud network. The virtual cloud network is a software-defined approach to networking that
extends across data centers, clouds, and application frameworks. An application might run on
virtual machines, containers, or bare metal. NSX-T Data Center brings networking and security
closer to the location where the application runs. The application framework that you create can
support multiple hypervisors, containers, bare-metal servers, and public clouds.
In an NSX-T Data Center environment, you can select the technologies that best suit your
particular applications. You can also perform your daily operational and management tasks with
various tools supported by NSX-T Data Center.
• Prepare ESXi and KVM hosts to participate in NSX-T Data Center networking
• Create and configure Tier-0 and Tier-1 gateways for logical routing
• Use distributed and gateway firewall policies to filter east-west and north-south traffic in
NSX-T Data Center
• Use VMware Identity Manager and LDAP to manage users and access
1-5 Course Outline
1. Course Introduction
1-6 Typographical Conventions
The following typographical conventions are used in this course.
• <ESXi_host_name>
1-7 References
Title Location
1-8 VMware Online Resources
Documentation for NSX-T Data Center: https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/
• Start a discussion.
• Access communities.
1-9 VMware Learning Overview
You can access the following Education Services:
— Help you find the course that you need based on the product, your role, and your level
of experience
• VMware Customer Connect Learning, which is the official source of digital training, includes
the following options:
— On Demand Courses: Self-paced learning that combines lecture modules with hands-on
practice labs
— VMware Lab Connect: Self-paced, technical lab environment where you can practice
skills learned during instructor-led training
1-10 VMware Certification Overview
VMware certifications validate your expertise and recognize your technical knowledge and skills
with VMware technology.
VMware certification sets the standards for IT professionals who work with VMware technology.
Certifications are grouped into technology tracks. Each track offers one or more levels of
certification (up to four levels).
For the complete list of certifications and details about how to attain these certifications, see
https://vmware.com/certification.
1-11 VMware Credentials Overview
VMware badges are digital emblems of skills and achievements. Career certifications align to job
roles and validate expertise across a solution domain. Certifications can cover multiple products
in the same certification.
Specialist certifications and skills badges align to products and verticals and show expanded
expertise.
• Easy to share in social media (LinkedIn, Twitter, Facebook, blogs, and so on)
Module 2
VMware Virtual Cloud Network and NSX-T
Data Center
2-2 Importance
As a network administrator, you must understand the VMware Virtual Cloud Network
framework and the solutions that it offers for addressing challenges in your data center. You
must also understand the NSX-T Data Center architecture and components so that you can
properly design, deploy, and manage a data center that meets your business requirements.
2-3 Lesson 1: VMware Virtual Cloud Network
and NSX-T Data Center
• Identify the benefits and recognize the use cases for NSX-T Data Center
• Describe how NSX-T Data Center fits into the NSX product portfolio
• Recognize features and the main elements in the NSX-T Data Center architecture
• Identify the functions of control plane components, data plane components, and
communication channels
2-5 Virtual Cloud Network Framework
Virtual Cloud Network is the VMware framework for connecting and protecting different types
of workloads running across various environments.
Virtual Cloud Network is a software layer. This layer provides connectivity between data center,
cloud, and edge infrastructure with data visibility and security.
Virtual Cloud Network connects and protects applications and data, regardless of their physical
locations. Virtual Cloud Network also connects and protects workloads running across any
environment. Workloads might be running on premises in a customer data center, in a branch, or
in a public cloud such as Amazon AWS or Microsoft Azure.
Virtual Cloud Network enables organizations to embrace cloud networking as the software-
defined architecture for connecting components in a distributed world.
Virtual Cloud Network is a ubiquitous software layer that provides maximum visibility into, and
context for, the interaction among various users, applications, and data. NSX supports various
types of endpoints.
The VMware software-based approach delivers a networking and security platform that enables
customers to connect, secure, and operate an end-to-end architecture to deliver services to
applications.
The VMware software-based approach provides the following benefits:
• Enables you to design and build the next-generation policy-driven data center.
This data center connects, secures, and automates traditional hypervisors and new
microservices-based (container) applications across a range of deployment targets such as
the data center, cloud, and so on.
• Delivers a WAN solution that provides full visibility, metrics, control, and automation of all
endpoints.
2-6 NSX Portfolio
NSX-T Data Center provides consistent networking and security across the entire IT
environment.
Virtual Cloud Network is based on a robust portfolio of products built on the foundations of the
concept of any infrastructure, any cloud, any application, any platform, and any device. Virtual
Cloud Network includes several key solutions that provide security, integration, extensibility,
automation, and elasticity.
VMware Virtual Cloud Network enables you to run your applications everywhere.
You can bring key capabilities from one central control point to wherever your applications run.
• NSX Data Center is the industry's only complete layer 2 to layer 7 software-defined
networking stack, including networking, load balancing, security, and analytics. With NSX
Data Center, you can provision networking and security services across multiple hypervisors
and bare-metal servers.
• NSX Cloud extends the networking and security capabilities of NSX Data Center to the
public cloud. You can provide your workloads running natively on Amazon AWS or
Microsoft Azure with consistent networking and security policies, helping you improve
scalability, control, and visibility.
• NSX Distributed IDS/IPS is an advanced threat detection engine built to detect lateral threat
movement on east-west network traffic across multicloud environments.
• NSX Advanced Load Balancer enables you to deliver multicloud application services such as
load balancing, application, security, autoscaling, container networking, and web application
firewall.
• VMware HCX makes it easy to migrate thousands of virtual machines within and across data
centers or clouds, without requiring a reboot.
• NSX Intelligence is a distributed analytics solution that provides visibility and dynamic
security policy enforcement for NSX Data Center environments. NSX Intelligence enables
network and application security teams to deliver a granular security posture, simplify
compliance analysis, and enable proactive security. It also supports network traffic analysis
to help identify advanced threats in the environment.
• vRealize Network Insight provides visibility across virtual and physical networks. It helps with
operations management for NSX Data Center and NSX Cloud.
• vRealize Automation is the VMware infrastructure automation platform for the modern
software-defined data center. When used with NSX, it automates an application's network
connectivity, security, performance, and availability.
• Tanzu Service Mesh Advanced, built on VMware NSX, is the VMware enterprise-class
service mesh solution that provides consistent control and security for microservices, end
users, and data across the most demanding multicluster and multicloud environments.
2-7 Use Cases for NSX-T Data Center
NSX-T Data Center can be used in several ways.
You can use NSX-T Data Center for the following purposes:
• Security: Delivers application-centric security at the workload level to prevent the lateral
spread of threats
• Multicloud networking: Brings consistency in networking and security across varied sites and
streamlines multicloud operations
2-8 NSX-T Data Center Features (1)
2-10 High-Level Architecture of NSX-T Data
Center
The three main elements of NSX-T Data Center architecture are the management, control, and
data planes. This architectural separation enables scalability without affecting workloads.
• Control plane: The control plane manages computing and distributing the runtime virtual
networking and security state of the NSX-T Data Center environment. The control plane
includes a central control plane (CCP) and a local control plane (LCP). This separation
significantly simplifies the work of the CCP and enables the platform to extend and scale for
various endpoints. The management plane and control plane are converged. Each manager
node in NSX-T Data Center is an appliance with converged functions, including
management, control, and policy.
• Data plane: The data plane includes a group of ESXi or KVM hosts and NSX Edge nodes.
The group of servers and edge nodes prepared for NSX-T Data Center are called transport
nodes. Transport nodes manage the distributed forwarding of network traffic. Rather than
relying on the distributed virtual switch, the data plane includes a virtual distributed switch
managed by NSX (N-VDS), which decouples the data plane from vCenter Server and
normalizes the networking connectivity. The ESXi hosts managed by vCenter Server can
also be configured to use the vSphere Distributed Switch (VDS) during the transport node
preparation.
• Consumption plane: Although the consumption plane is not part of NSX-T Data Center, it
provides integration into any CMP through the REST API and integration with VMware
cloud management planes, such as vRealize Automation:
— The consumption of NSX-T Data Center can be driven directly through the NSX UI.
— Typically, end users tie network virtualization to their cloud management plane for
deploying applications.
The management plane performs all operations. These operations include create, read, update,
and delete (CRUD).
• The management plane provides the REST API and web-based UI interface for all user
configurations.
• The control plane manages computing and distributing the network runtime state.
2-12 About the NSX Management Cluster
The NSX management cluster is formed by a group of three NSX Manager nodes for high
availability and scalability.
The NSX Manager appliance has the built-in policy, manager, and controller roles:
The desired state is replicated in the distributed persistent database, providing the same
configuration view to all nodes in the cluster.
The NSX Manager appliance is available in different sizes for different deployment scenarios.
NSX Manager is a standalone appliance. It includes the manager, controller, and policy roles. As a
result of this integrated approach, users do not need to install the manager, controller, and policy
roles as separate VMs.
The diagram shows that the manager and controller instances run on all three nodes and provide
resiliency. Three manager nodes can handle requests from users through the API or UI, resulting
in shared workloads and efficiency.
Although the three services are merged on each node in the cluster, separate resources (CPU,
memory, and so on) are allocated for each of the services.
The distributed persistent database runs across all three nodes, providing the same configuration
view to each node. A manager or controller running on one node has the same view of the
configuration topology as managers or controllers running on the other two nodes.
2-13 NSX Management Cluster with Virtual IP
Address
The NSX management cluster is highly available. It is configured in the following way:
• Traffic is not load balanced across the managers while using VIP.
• The cluster virtual IP address is used for traffic destined for NSX Manager nodes.
• Traffic destined for any transport node uses the management IP of the node.
• A single virtual IP address is used for API and GUI client access.
The API and GUI are available on all three manager nodes in the cluster. When a user request is
sent to the virtual IP address, the active manager (the leader that has the virtual IP address
attached) responds to the request. If the leader fails, the two remaining managers elect a new
leader. The new leader responds to the requests sent to that virtual IP address.
When the cluster VIP is used, requests are not load balanced across the manager nodes; the leader node that owns the VIP services them.
If the leader node that owns VIP fails, a new leader is elected. The new leader sends a GARP
request to take the ownership of the VIP. The new leader node then receives all new API and UI
requests from users.
The diagram shows an administrator's perspective, where a single IP address (the virtual IP
address) is always used to access the NSX management cluster.
The diagram shows how a traditional load balancer can balance the traffic across multiple
manager nodes.
2-15 About the NSX Policy
The policy role performs several functions:
• Provides a centralized location for configuring networking and security across the
environment
• Enables users to specify the final desired state of the system without being concerned
about the current state or underlying implementation
2-16 About NSX Manager
NSX Manager performs several functions:
2-17 NSX Policy and NSX Manager Workflow
The components of an NSX Manager node interact with each other:
• The policy role manages all networking and security policies and enforces them in the
manager role.
• Proton is the core component of the NSX Manager node. Proton manages various
functionalities such as logical switching, logical routing, distributed firewall, and so on.
NSX Policy Manager and Proton are internal web applications that communicate with each other
through HTTP.
CorfuDB is a persistent in-memory object store. Persistence is achieved by writing each
transaction in a shared transaction log file. Queries are served from memory and provide better
performance and scalability.
2-18 About NSX Controller
NSX Controller maintains the realized state of the system and configures the data plane.
• Providing control plane functionality, such as logical switching, routing, and distributed
firewall
• Computing all ephemeral runtime states based on the configuration from the management
plane
2-19 Control Plane Components (1)
In NSX-T Data Center, the control plane is divided into the CCP and local control plane (LCP).
The CCP exists as part of the NSX Manager nodes and is offered by the NSX Controller role.
The LCP exists on host transport nodes or on NSX Edge transport nodes.
2-20 Control Plane Components (2)
The CCP and LCP perform different functions.
• The CCP:
— Computes the ephemeral runtime state based on the configuration from the
management plane
— Disseminates information reported by the data plane elements by using the LCP
• The LCP:
— Computes local ephemeral runtime states based on updates from the data plane and
the CCP
The CCP computes and disseminates the ephemeral runtime state based on the configuration
from the management plane and topology information reported by the data plane elements.
The LCP runs on the compute endpoints. It computes the local ephemeral runtime state for the
endpoint based on updates from the CCP and local data plane information. The LCP pushes
stateless configurations to forwarding engines in the data plane and reports the information back
to the CCP. This process simplifies the work of the CCP significantly and enables the platform to
scale to thousands of different types of endpoints (hypervisor, container host, bare metal, or
public cloud).
In NSX-T Data Center 2.5 and earlier, two messaging protocols (RabbitMQ and NSX-RPC) were used for communication among the management plane, CCP, and data plane. Starting with NSX-T Data Center 3.0, RabbitMQ-based messaging is no longer used.
The NSX-RPC messaging protocol is a messaging solution for all communications between the
management plane, CCP, and data plane.
Remote procedure call (RPC) is a protocol that one program can use to request a service from
another program on another computer without the need to understand the network's details.
2-21 Control Plane Change Propagation
The CCP receives the configuration information from NSX Manager and propagates the
information to the LCP of the transport nodes.
If a change occurs, the LCP on the transport node notifies its assigned CCP, which further
propagates these changes to the transport nodes.
The LCP on the transport node reports local runtime changes to its master CCP node. The
master CCP nodes receive the changes and propagate the changes to other controllers in the
cluster. All controllers propagate the changes to the transport nodes that they manage.
2-22 Control Plane Sharding Function
The NSX management cluster includes a three-node CCP.
• Each transport node is assigned to a controller for L2 and L3 configuration and distributed
firewall rule distribution.
• Each controller receives configuration updates from the management and data planes but
maintains only the relevant information on the nodes that it is assigned to.
2-23 Handling Controller Failure
When a controller fails, its load is redistributed:
• The sharding table is recalculated to redistribute the load among the remaining controller
nodes.
• The traffic in the data plane continues to flow without being affected.
In the diagram, controller 3 is assigned to two transport nodes. When controller 3 fails, the
nodes are moved to controllers 1 and 2.
2-24 About the Data Plane
The data plane has several components and functions:
• Includes multiple endpoints (ESXi hosts, KVM hosts, bare-metal servers, and NSX Edge
nodes)
• Contains various workloads, such as VMs, containers, and applications running on bare-metal
servers
• Implements logical switching, distributed and centralized routing, and firewall filtering
• Maintains the status of and manages failover between multiple links or tunnels
• Performs stateless forwarding based on tables and rules populated by the control plane
2-26 Data Plane Components
Types of data plane components, called transport nodes, include hypervisor (ESXi and KVM) hosts, bare-metal servers, and NSX Edge nodes.
For packet forwarding, ESXi transport nodes use N-VDS or VDS v7 (depending on the vSphere version and the transport node configuration), and KVM transport nodes use Open vSwitch.
2-27 Data Plane Communication Channels
Appliance Proxy Hub (APH) acts as a communication channel between NSX Manager and the
transport node.
• Uses port 1234 for communication between the management plane and transport node
• Uses port 1235 for communication between the CCP and transport node
The NSX Manager management plane communicates with the transport nodes by using APH
Server over NSX-RPC/TCP through port 1234.
CCP communicates with the transport nodes by using APH Server over NSX-RPC/TCP through
port 1235.
The NSX-Proxy on the transport node receives the NSX-RPC messages from NSX Manager and
CCP.
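As an illustration only, the following Python sketch (not part of the course material) shows one way to confirm these channels after transport nodes are prepared (see Module 3). It assumes the manager API endpoints /api/v1/transport-nodes and /api/v1/transport-nodes/<node-id>/status, uses the third-party requests library with basic authentication, and the manager address and credentials are placeholders. verify=False is acceptable only with lab self-signed certificates.
# Minimal sketch: list transport nodes and print their reported connection status.
import requests

NSX = "https://nsx-mgr.example.local"          # placeholder NSX Manager FQDN or VIP
AUTH = ("admin", "VMware1!VMware1!")           # placeholder credentials

nodes = requests.get(f"{NSX}/api/v1/transport-nodes", auth=AUTH, verify=False).json()
for node in nodes.get("results", []):
    status = requests.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/status",
                          auth=AUTH, verify=False).json()
    # Exact status field names vary by NSX-T version; the overall status is printed here.
    print(node.get("display_name", node["id"]), "->", status.get("status"))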
2-28 Review of Learner Objectives
• Describe the purpose of VMware Virtual Cloud Network and its framework
• Identify the benefits and recognize the use cases for NSX-T Data Center
• Describe how NSX-T Data Center fits into the NSX product portfolio
• Recognize features and the main elements in the NSX-T Data Center architecture
• Identify the functions of control plane components, data plane components, and
communication channels
• The NSX family is a portfolio of various offerings, including NSX-T Data Center, vRealize
Network Insight, NSX Cloud, NSX Intelligence, NSX Distributed IDS/IPS, NSX Advanced
Load Balancer, Tanzu Service Mesh Advanced, VMware SD-WAN, and VMware HCX.
• In an NSX management cluster, each node performs the management, control, and policy
roles.
• NSX policy provides consistency in networking and security configuration across the NSX-T
Data Center environment.
• The data plane in NSX-T Data Center forwards packets based on tables populated by the
control plane and reports topology information to the control plane.
Questions?
Module 3
Preparing the NSX-T Data Center
Infrastructure
3-2 Importance
As the network administrator of a software-defined data center, you must plan and deploy a
network infrastructure that meets the business requirements and growth of users and
applications. You must thoroughly understand the function and configuration of the NSX
management cluster and transport nodes to ensure a fully prepared environment for NSX-T
Data Center.
3-4 Lesson 1: Deploying the NSX
Management Cluster
• Verify the deployment status of NSX Manager nodes and the NSX management cluster
3-6 Implementing NSX-T Data Center in
vSphere
5. Preconfigure transport nodes, including transport zones, IP pools, and uplink profiles.
3-7 Implementing NSX-T Data Center in KVM
4. Use the CLI to join the new NSX Manager nodes to the existing NSX management cluster.
5. Preconfigure transport nodes, including transport zones, IP pools, and uplink profiles
creation.
3-8 Considerations for Deploying NSX
Manager
You can deploy NSX Manager instances in the following ways:
• NSX Manager can be deployed on ESXi hosts managed by vCenter Server or on standalone
ESXi hosts.
• Automated deployment of NSX Manager by using the UI or API is supported only on ESXi
hosts managed by vCenter Server.
NSX Manager combines the roles of the policy, manager, and controller in a single node (virtual
appliance).
NSX Manager nodes can be installed on supported hypervisors (vSphere, ESXi, RHEL KVM, and
Ubuntu KVM) for an on-premises deployment.
3-9 NSX Manager Node Sizing
NSX Manager supports the small, medium, and large form factors.
Appliance Size   vCPU   Memory (GB)   Disk Space (GB)
Small            4      16            300
Medium           6      24            300
Large            12     48            300
3-11 Accessing the NSX UI
You enter the FQDN or IP address of the newly deployed NSX Manager instance.
3-12 Accessing the NSX CLI
You can access the NSX CLI in the following ways:
• Open an SSH session and enter the admin user credentials that were configured during the
NSX Manager installation.
• You can also use the NSX Manager virtual machine console to access the NSX CLI.
• Use the list command to retrieve all available commands to query and configure the
environment.
The NSX CLI is also available on all types of transport nodes and provides a consistent view of
the NSX configuration across the environment.
You can access the CLI mode of NSX Manager either through the NSX Manager virtual machine
console or by taking the SSH session remotely by using PuTTY.
You can access the NSX CLI mode through SSH only if the SSH mode is enabled in the NSX
Manager appliance.
By default, the following levels of access are available when connecting through the CLI mode:
• Root: This role can view, deploy, and configure any component in NSX by using Linux-based
commands.
Root is an administrative-level role. Use it only under the guidance of VMware technical support.
• Admin: This role can view, deploy, and configure any component in NSX by using the CLI
commands.
• Audit: This role can view settings, events, and reports. This role is read-only.
To access the admin mode, log in as the admin user through SSH or through the NSX Manager
console by using the admin credentials.
1. Start a PuTTY session or open the NSX Manager virtual machine console.
Use the list command to retrieve all available commands to query and configure the
environment.
For example, you can use the get command to query information, for example, the get
services command.
The NSX CLI is also available on all types of transport nodes (ESXi, KVM, and NSX Edge nodes)
and provides a consistent view of the NSX configuration across the environment. You can
access the NSX CLI from the transport nodes by running the nsxcli command.
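The following Python sketch (illustrative only, not part of the course material) runs a read-only NSX CLI command over SSH by using the third-party paramiko library. It assumes that SSH is enabled on the NSX Manager appliance; the hostname and credentials are placeholders.
# Minimal sketch: run "get cluster status" on NSX Manager over SSH.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab convenience only
client.connect("nsx-mgr.example.local", username="admin",
               password="VMware1!VMware1!")                    # placeholder credentials

stdin, stdout, stderr = client.exec_command("get cluster status")
print(stdout.read().decode())
client.close()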
3-13 Accessing NSX Manager with API
REST APIs are used when the GUI is not practical or when you want to automate configuration by using scripts or other tools.
• NSX Manager accepts API requests over HTTPS on TCP port 443 to programmatically create, retrieve, modify, or delete NSX objects.
• To use the NSX API, you must configure a client and verify that the required ports are open
between your client and NSX Manager.
For information about the various API calls and functionality, see NSX-T Data Center API Guide
at https://developer.vmware.com/apis/1198/nsx-t.
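For example, the following Python sketch (illustrative only) queries the management cluster status with the requests library. The manager FQDN and credentials are placeholders, and verify=False is appropriate only for lab self-signed certificates.
# Minimal sketch: read the NSX management cluster status over the REST API.
import requests

resp = requests.get("https://nsx-mgr.example.local/api/v1/cluster/status",
                    auth=("admin", "VMware1!VMware1!"),   # placeholder credentials
                    verify=False)
resp.raise_for_status()
print(resp.json())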
3-14 Registering vCenter Server with NSX
Manager
You register vCenter Server to NSX-T Data Center.
You add the configuration details to register the vCenter Server system to NSX-T Data Center.
You turn on the Create a Service Account toggle for features, such as vSphere Lifecycle
Manager, which need to authenticate with NSX-T Data Center APIs. During registration,
compute manager creates a service account.
NSX-T Data Center 3.2 introduces the Enable Trust feature for vCenter Server 7.0 or later. This
feature enables vCenter Server to execute tasks on NSX Manager.
• Full Access to NSX: This access level ensures that vSphere with Tanzu and vSphere Lifecycle
Manager can communicate with NSX-T Data Center. This access level is selected by default.
• Limited Access to NSX: This access level ensures that vSphere Lifecycle Manager can
communicate with NSX-T Data Center.
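The registration can also be scripted. The following Python sketch is illustrative only; the field names follow the /api/v1/fabric/compute-managers schema of the manager API, the server name, credentials, and thumbprint are placeholders, and options such as the service account and trust toggles are omitted for brevity.
# Minimal sketch: register a vCenter Server system as a compute manager.
import requests

NSX = "https://nsx-mgr.example.local"              # placeholder NSX Manager FQDN or VIP
AUTH = ("admin", "VMware1!VMware1!")               # placeholder credentials

body = {
    "display_name": "sa-vcsa-01",
    "server": "sa-vcsa-01.vclass.local",           # placeholder vCenter Server FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "VMware1!",
        "thumbprint": "<vcenter-sha256-thumbprint>"  # SHA-256 thumbprint of vCenter Server
    }
}
resp = requests.post(f"{NSX}/api/v1/fabric/compute-managers",
                     json=body, auth=AUTH, verify=False)
print(resp.status_code, resp.json().get("id"))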
3-15 Verifying vCenter Server Registration
with NSX Manager
You verify that vCenter Server is successfully registered and the connection status appears as
Up.
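As an alternative to the UI, the following Python sketch (illustrative only) checks the registration status through the API. It assumes a /status sub-resource that reports a connection_status value; the manager address and credentials are placeholders.
# Minimal sketch: verify that registered compute managers report an Up connection.
import requests

NSX = "https://nsx-mgr.example.local"
AUTH = ("admin", "VMware1!VMware1!")

cms = requests.get(f"{NSX}/api/v1/fabric/compute-managers",
                   auth=AUTH, verify=False).json()
for cm in cms.get("results", []):
    status = requests.get(f"{NSX}/api/v1/fabric/compute-managers/{cm['id']}/status",
                          auth=AUTH, verify=False).json()
    print(cm.get("display_name"), "->", status.get("connection_status"))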
The screenshot shows the process for automatically deploying NSX Manager instances from the
NSX UI.
3-17 Deploying Additional NSX Manager
Instances (2)
For information about manually joining the NSX Manager nodes to form a cluster, see NSX-T
Data Center Installation Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/3.2/installation/GUID-3E0C4CEC-D593-4395-84C4-150CD6285963.html.
You can view the status of the nodes in the cluster from the NSX UI.
You can check the status of nodes by selecting Home > Monitoring Dashboard > System in the
NSX UI.
Different colors indicate different states. For example, red indicates degraded performance, such as NSX Manager memory usage remaining above 90 percent for the past 5 minutes.
3-19 Management Cluster Status: GUI (2)
On the Overview page in the NSX UI, you can view the status of the management cluster and its
nodes.
The following functions are performed by the most relevant operational components:
• Manager: Manager provides a GUI and REST APIs for creating, configuring, and monitoring
NSX-T Data Center components such as logical switches, logical routers, firewall, and so on.
• Policy: With NSX policies, you can manage resource access and usage without worrying
about low-level details.
• HTTPS: The main HTTPS endpoint that receives API calls. This component distributes incoming calls among the components of NSX Manager.
3-20 Configuring the Virtual IP Address
You can manually configure a virtual IP address for the NSX management cluster. You access
the GUI by using the virtual IP shared by all NSX Manager nodes.
You can configure a virtual IP address for the management cluster to provide the availability
among the management nodes:
• You can configure the address for the management nodes to share.
• You might need to wait a few minutes for the newly configured address to take effect.
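The virtual IP address can also be set through the cluster API. The following Python sketch is illustrative only; it uses the set_virtual_ip action of /api/v1/cluster/api-virtual-ip, and the manager address, VIP, and credentials are placeholders.
# Minimal sketch: assign a virtual IP address to the NSX management cluster.
import requests

resp = requests.post("https://sa-nsxmgr-01.vclass.local/api/v1/cluster/api-virtual-ip",
                     params={"action": "set_virtual_ip",
                             "ip_address": "172.20.10.50"},       # placeholder VIP
                     auth=("admin", "VMware1!VMware1!"),           # placeholder credentials
                     verify=False)
print(resp.status_code, resp.text)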
3-21 Management Cluster Status: CLI (1)
Run get cluster status to query the NSX management cluster status.
You connect to an appliance in the cluster and enter the get cluster status command.
The number and status of the nodes in the cluster appear.
The example output lists the Cluster Boot Manager, controller, and manager groups. It also
shows each group’s status, with its members and member status.
3-22 Management Cluster Status: CLI (2)
When troubleshooting an NSX Manager node, check for the following common misconfigurations:
• ESXi or KVM host with insufficient resources (CPUs, memory, or hard disk)
• Incorrect network details, such as the gateway address, network mask, DNS, and so on
3-23 Deploying NSX Manager on KVM Hosts
To deploy an NSX Manager node on a KVM host:
For more information about deploying NSX Manager on a KVM host, see Install NSX Manager
on KVM at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/installation/GUID-
5229A83D-1B97-4203-BA30-F52716F68F7F.html.
3-24 Replacing Self-Signed Certificates (1)
After you install NSX-T Data Center, the manager nodes and cluster have self-signed certificates. NSX Manager requires a signed certificate to authenticate the identity of the NSX Manager web service and to encrypt information sent to the NSX Manager web server. As a security best practice, replace the self-signed certificates with CA-signed certificates.
2. You send the CSR file to a certificate authority (CA) to apply for a digital identity certificate.
To generate a CSR:
4. Select GENERATE CSR > GENERATE CSR and enter the CSR details.
5. Click SAVE.
4. Click SAVE.
• To replace the certificate of a manager node, use the POST API call:
https://<nsx-mgr>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate_id>
• To replace the certificate of the manager cluster VIP, use the POST API call:
https://<nsx-mgr>/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=<certificate_id>
After importing the CA-signed certificate in NSX Manager, replace the certificate:
1. In your browser, log in with admin privileges to an NSX Manager instance at https://<nsx-
manager-ip-address>.
3. Expand the certificate to show its details and copy the certificate ID.
Ensure that the Service Certificate option was set to No when this certificate was imported.
4. To replace the certificate of a manager node, use the POST /api/v1/node/services/http?action=apply_certificate API call.
For example, POST https://<nsx-mgr>/api/v1/node/services/http?action=apply_certificate&certificate_id=e61c7537-3090-4149-b2b6-19915c20504f
5. To replace the certificate of the manager cluster VIP, use the POST /api/v1/cluster/api-certificate?action=set_cluster_certificate API call.
For example, POST https://<nsx-mgr>/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=d60c6a07-6e59-4873-8edb-339bf75711ac
This step is not necessary if you did not configure VIP. For more information, see NSX-T Data Center REST API at https://code.vmware.com/apis/1198/nsx-t.
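The following Python sketch (illustrative only) wraps the apply_certificate call shown above in the requests library. The certificate ID is the placeholder value from the example, and the manager address and credentials must be replaced for your environment.
# Minimal sketch: apply an imported CA-signed certificate to an NSX Manager node.
import requests

CERT_ID = "e61c7537-3090-4149-b2b6-19915c20504f"   # certificate ID copied from the UI

resp = requests.post("https://sa-nsxmgr-01.vclass.local/api/v1/node/services/http",
                     params={"action": "apply_certificate",
                             "certificate_id": CERT_ID},
                     auth=("admin", "VMware1!VMware1!"),           # placeholder credentials
                     verify=False)
print(resp.status_code)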
• Verify the deployment status of NSX Manager nodes and the NSX management cluster
3-27 Lesson 2: Navigating the NSX UI
3-29 NSX Manager Policy and Manager Views
The NSX Manager interface provides the following modes for configuring resources:
• Policy mode
• Manager mode
Use the Manager mode in the following cases:
• When NSX is integrated with cloud management platforms, for example, vRealize Automation, VMware Integrated OpenStack, and so on
• To work with objects that are not available in Policy mode
• To work with objects that were created under Advanced Networking & Security before the upgrade to NSX-T Data Center 3.0
For an overview of NSX Manager, see NSX-T Data Center Administration Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/administration/GUID-
FBFD577B-745C-4658-B713-A3016D18CB9A.html.
NSX-T Data Center 3.2 introduces the Management Plane to Policy Promotion tool. This tool supports the promotion of Manager mode objects to policy objects. You can use this tool through the UI or the API. For more information about the Management Plane to Policy Promotion tool, see NSX-T Data Center Administration Guide at <add-link>.
3-30 User Interface Preferences for Policy and
Manager Modes
By default, new installations display the UI in Policy mode, and the UI Mode toggle is hidden. You
can change the visibility of the UI mode toggle under System > Settings > General Settings >
User Interface.
Environments that contain objects created through the Manager mode, such as from NSX
upgrades or cloud management platforms, display the UI Mode toggle by default in the top-right
corner of the UI.
3-31 About the Networking Tab
On the Networking tab, you can configure functions such as switching, routing, and layer 3
services. Layer 3 services include NAT, VPN, load balancing, and so on. A Policy view and a
Manager view are available.
A segment in the Policy view is called a logical switch in the Manager view.
The Tier-0 or Tier-1 gateways in the Policy view are called T-0 or T-1 logical routers in the
Manager view.
3-32 About the Security Tab
On the Security tab, you can create firewall policies and endpoint security policies.
3-33 About the Inventory Tab
On the Inventory tab, you can review information about services, groups, VMs, containers,
physical servers, and context profiles.
3-34 About the Plan & Troubleshoot Tab
On the Plan & Troubleshoot tab, you can select IPFIX, Port Mirroring, Traceflow, and
Consolidated Capacity for monitoring and troubleshooting.
3-35 About the System Tab
On the System tab, you can deploy the transport node and management cluster, add licenses,
register compute managers, and so on.
The Overview page shows the number and details of the management nodes and the cluster.
The System tab does not have separate Policy and Manager views.
3-36 Lab 1: Reviewing the Lab Environment
and Topologies
Review the lab environment and network topologies:
5. Use the NSX CLI to Review the NSX Management Cluster Information
3-38 Lab 3: (Simulation) Deploying a Three-
Node NSX Management Cluster
Deploy a three-node NSX Management cluster from the NSX UI:
5. Review the NSX Management Cluster Information from the NSX CLI
3-40 Lesson 3: Preparing the Data Plane
• Explain the relationships among transport nodes, transport zones, VDS, and N-VDS
• Prepare ESXi and KVM hosts to participate in NSX-T Data Center networking
• Uses a scale-out distributed forwarding model and carries data over designated transport
networks in the physical network
• Performs logical switching, distributed and centralized routing, and firewall filtering
3-43 Overview of the Transport Node
NSX-T Data Center requires transport nodes to perform networking (overlay or VLAN) and
security functions.
A transport node is responsible for forwarding the data plane traffic that originates from VMs,
containers, or applications running on bare-metal servers.
• NSX Edge
The NSX-T Data Center logical topology is decoupled from the hypervisor-type transport
nodes.
ESXi and KVM transport nodes can work together. Networks and topologies can extend to both
ESXi and KVM environments, regardless of the hypervisor type.
Since the 3.0 release, NSX-T Data Center also supports Windows bare-metal servers as
transport nodes.
3-44 Transport Node Components and
Architecture
Each transport node contains:
• Virtual distributed switch: Virtual distributed switch managed by NSX (N-VDS) or vSphere
Distributed Switch (VDS). It is the core data plane component on the transport nodes.
• NSX-Proxy: It is an agent running on all transport nodes that receives configuration and
control plane data from CCP.
• NSX-T Data Center virtual switch (nsxt-vswitch) or VDS (vSwitch) for the ESXi host
3-45 Physical Connectivity of a Transport
Node
For the physical connectivity of a transport node, you can select one of these options:
• Use dedicated physical NICs for management and transport (overlay or VLAN) traffic.
• Share the physical NIC for both management and transport traffic.
3-46 About IP Address Pools
Each transport node has a tunnel endpoint (TEP). Each TEP requires an IP address.
You can create one or more IP address pools to assign addresses to TEPs.
You can manually configure IP address pools. If you use both ESXi and KVM hosts, you can use
two different subnets for the ESXi TEP IP pool and the KVM TEP IP pool. You must create a
static route with a dedicated default gateway on the KVM hosts.
Each transport node has a TEP. Each TEP has an IP address. These IP addresses can be in the
same subnet or in different subnets, depending on the IP pools or DHCP configured for the
transport nodes.
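The following Python sketch (illustrative only) creates a TEP IP pool through the manager API (/api/v1/pools/ip-pools). The subnet, range, and gateway values are placeholders; an equivalent ip-pools object also exists in the policy API.
# Minimal sketch: create an IP pool for ESXi TEP addresses.
import requests

body = {
    "display_name": "ESXi-TEP-Pool",
    "subnets": [{
        "cidr": "172.16.30.0/24",                              # placeholder subnet
        "gateway_ip": "172.16.30.1",
        "allocation_ranges": [{"start": "172.16.30.10",
                               "end": "172.16.30.100"}]
    }]
}
resp = requests.post("https://nsx-mgr.example.local/api/v1/pools/ip-pools",
                     json=body, auth=("admin", "VMware1!VMware1!"), verify=False)
print(resp.status_code, resp.json().get("id"))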
3-47 About Transport Zones (1)
A transport zone defines the span of a logical network over the physical infrastructure.
• Overlay:
— Used as the internal tunnel between NSX hosts and NSX Edge transport nodes
• VLAN:
A transport zone defines a collection of transport nodes that can communicate with each other
across a physical infrastructure over one or more interfaces (TEPs).
Transport nodes are hypervisor hosts, NSX Edge nodes, and bare-metal nodes that participate
in an NSX-T Data Center overlay.
3-48 About Transport Zones (2)
Transport zones determine which hosts can participate in a network and have the following
characteristics:
• A single transport zone can have all types of transport nodes (ESXi, KVM, bare-metal
servers, and NSX Edge).
• A hypervisor transport node can belong to multiple transport zones. A segment can belong
to only one transport zone.
• The NSX Edge nodes can belong to multiple transport zones: one overlay transport zone
and multiple VLAN transport zones.
• Each N-VDS or VDS switch can be associated with one or more transport zones.
To validate the versions of vSphere compatible with NSX-T Data Center 3.2, see VMware
Product Interoperability Matrix at https://interopmatrix.vmware.com/Interoperability.
3-50 About N-VDS
N-VDS is a software logical switch that provides the forwarding service on a transport node. N-
VDS is created and managed centrally by NSX Manager.
N-VDS is created and distributed across hypervisors (ESXi and KVM) and NSX Edge transport
nodes with a consistent configuration.
N-VDS, previously called the host switch, is the software that operates in hypervisors to form a
software abstraction layer between servers and the physical network. N-VDS is based on
vSphere Distributed Switch, and provides uplinks for the host connectivity to physical switches.
When an ESXi host is prepared for NSX-T Data Center, N-VDS is created. N-VDS is similar in
function to a KVM Open vSwitch on a KVM host.
• N-VDS has a name assigned for grouping and management. For example, the diagram
shows two N-VDS instances that are configured on the transport nodes: one N-VDS named
Lab and another N-VDS named Prod (production).
The networks configured by NSX Manager appear as opaque networks to compute managers such as vCenter Server. From the vSphere Client, a user can see these network components and select them, but the user cannot edit their settings.
The control plane and data plane are optimized for logical switching.
3-51 About VDS
The ESXi hosts managed by vCenter Server can be configured to use VDS during the transport
node preparation.
VDS is backed by the vswitch kernel module and is supported only for the ESXi environment.
The segments from NSX Manager are realized as distributed port groups in vCenter Server.
3-52 N-VDS or VDS on the ESXi Transport
Nodes
In the ESXi transport node, both N-VDS and VDS are supported.
• Both N-VDS and VDS support layer 2 forwarding, VLAN, port mirroring, NIC teaming, link
aggregation groups (LAGs), and so on.
• VDS depends on vCenter Server and relies on the configuration of VDS v7.
NSX-T Data Center does not require vCenter Server to operate. NSX Manager is responsible
for the creation of N-VDS and it is independent of vCenter Server.
vCenter Server views N-VDS as an opaque network. vCenter Server is aware of its existence
but cannot configure it.
N-VDS performs layer 2 forwarding and supports VLAN, port mirroring, and NIC teaming. The
teaming configuration is applied across the switch. Link aggregation groups are implemented as
ports.
If you use VDS v7, you must set its MTU to 1,600 bytes or greater from the vSphere Client before using it with NSX-T Data Center.
3-53 N-VDS on the KVM Transport Nodes
In the KVM transport nodes, N-VDS is an implementation of OVS:
• N-VDS supports logical switching, logical routing, distributed firewall, and Internet Protocol
Flow Information Export (IPFIX) protocol.
• The NSX agent configures the OVS through OVSDB and the OpenFlow protocol.
3-54 Transport Zone and N-VDS or VDS
Mapping
A transport node uses N-VDS or VDS to connect to the transport zone.
Meaningful transport zone and N-VDS or VDS names can be used to help link related objects.
Corresponding names assist with troubleshooting in scenarios with multiple virtual switches and
transport zones.
• A transport zone is mapped to N-VDS or VDS by using the Switch Name text box.
• Multiple overlay N-VDS or VDS instances can be configured on hosts to isolate overlay-
backed traffic.
3-55 Creating Transport Zones
Transport zones have the following characteristics:
• Dictate which transport nodes and which workloads can participate in a network.
• When using ESXi transport nodes, transport zones can span one or more vSphere clusters.
When creating a transport zone, you must specify the traffic type.
Transport zones dictate which transport nodes and which workloads can participate in a
network:
• The overlay transport zone is used by host transport nodes, NSX Edge nodes, and bare-
metal transport nodes.
• The VLAN transport zone is used by NSX Edge nodes for their VLAN uplinks.
An NSX-T Data Center environment can contain one or more transport zones, depending on
your requirements. A transport node can belong to multiple transport zones. A logical switch can
belong to only one transport zone.
By default, NSX Manager has the following preconfigured transport zones:
NSX-T Data Center does not allow VMs in different transport zones to connect at layer 2. The span of a logical switch is limited to its transport zone, so virtual machines in different transport zones cannot be on the same layer 2 network.
In the example, two transport zones are created. PROD-Overlay-TZ is an overlay transport zone used by host transport nodes and NSX Edge nodes. PROD-VLAN-TZ is a VLAN transport zone used by NSX Edge and host transport nodes for their VLAN uplinks.
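The two transport zones in this example can also be created programmatically. The following Python sketch is illustrative only; it uses the manager API endpoint /api/v1/transport-zones, and the manager address and credentials are placeholders.
# Minimal sketch: create an overlay and a VLAN transport zone.
import requests

NSX = "https://nsx-mgr.example.local"
AUTH = ("admin", "VMware1!VMware1!")

for name, tz_type in [("PROD-Overlay-TZ", "OVERLAY"), ("PROD-VLAN-TZ", "VLAN")]:
    body = {"display_name": name, "transport_type": tz_type}
    resp = requests.post(f"{NSX}/api/v1/transport-zones",
                         json=body, auth=AUTH, verify=False)
    print(name, resp.status_code)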
3-57 N-VDS and VDS Operational Modes
N-VDS and VDS switches can be configured in one of three modes based on performance
requirements:
• Standard datapath: Configured for regular workloads, where normal workload traffic
throughput is expected.
• Enhanced datapath: Configured for telecom workloads, where high traffic throughput is
expected on the workloads only on ESXi hypervisors.
The enhanced datapath virtual switch is optimized for Network Function Virtualization (NFV), where workloads typically perform networking functions with demanding latency and packet-rate requirements. To benefit from this mode, workloads must be compiled with DPDK and use VMXNET3 for their vNICs. This mode is available only on ESXi hypervisors (6.7 and later; 6.7 U2 and later is recommended) and is unavailable on KVM, NSX Edge, and Public Cloud Gateway nodes. Not all features are available in this mode.
The ENS interrupt and enhanced datapath modes require compatible NICs and CPU cores dedicated to packet processing to support telco-type environments with high packet counts and small packet sizes (64 bytes).
For information about identifying suitable hardware components, see VMware Compatibility
Guide at https://www.vmware.com/resources/compatibility/search.php.
Telecom service providers use SDN to deploy network function virtualization (NFV), which virtualizes functions that traditionally run on physical servers or other dedicated hardware.
The two N-VDS or VDS operational modes can coexist on the same hypervisor.
3-58 Physical NICs, LAGs, and Uplinks
A host can have several physical ports called physical NICs. Several physical NICs can be bundled to form an aggregated link called a LAG.
An N-VDS uplink typically maps to an individual physical NIC or a LAG on the host. This mapping
is needed when a transport node is configured.
With a VDS implementation, the ESXi host uplinks must be configured to carry the overlay and
VLAN traffic.
N-VDS allows for the virtual-to-physical packet flow by binding logical router uplinks and
downlinks to physical NICs.
Link Aggregation Groups (LAGs) use Link Aggregation Control Protocol (LACP) for the
transport network.
For VDS, map the uplinks on NSX to uplinks on VDS and not to physical NICs directly.
In the example, logical uplink 1 is mapped to a physical LAG (composed of physical ports p1 and p2). Logical uplink 2 is mapped to physical port p3.
3-59 About Uplink Profiles
The uplink profile is a template that defines how N-VDS or VDS connects to the physical
network.
An uplink profile is a container of properties or capabilities that you want your network adapters
to have. It allows you to consistently configure identical capabilities for network adapters across
multiple hosts or nodes.
When an administrator modifies a parameter in the uplink profile, it is automatically updated in all
the transport nodes following the uplink profile.
If NSX Edge is installed on bare metal, you can use the default uplink profile. The default uplink profile requires one active uplink and one standby uplink.
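For illustration, an uplink profile with a Failover Order teaming policy can also be defined through the API. The sketch below assumes the NSX-T 3.x management API endpoint /api/v1/host-switch-profiles; the host name, credentials, uplink names, and transport VLAN are placeholders and should be adapted to your environment.

    # Minimal sketch: create an uplink profile with a failover-order teaming policy.
    # Verify endpoint and field names against the NSX-T API guide before use.
    import requests

    NSX = "https://nsx-mgr.example.com"            # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials

    profile = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "esxi-uplink-profile",
        "teaming": {
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
            "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        },
        "transport_vlan": 0,                        # VLAN ID carried by TEP traffic
    }
    r = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                      json=profile, auth=AUTH, verify=False)
    r.raise_for_status()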
3-60 Default Uplink Profiles
Uplink profiles enable you to configure consistent capabilities for network adapters across
multiple transport nodes.
You can find the default uplink profiles by navigating to System > Fabric > Profiles > Uplink
Profiles in the NSX UI.
The teaming policy only defines how the NSX virtual switch balances traffic across its uplinks.
The uplinks can in turn be individual pNICs or LAGs. A LAG uplink has its own hashing options.
However, those hashing options only define how traffic is distributed across the physical
members of the LAG uplink, whereas the teaming policy defines how traffic is distributed
between NSX virtual switch uplinks.
3-62 Teaming Policy Modes
You can select a teaming policy for the new uplink profile:
• Failover Order: Uses one active port and a list of standby ports
• Load Balanced Source: Pins each interface to one active uplink based on the source port ID
• Load Balanced Source Mac: Determines the uplink based on the source VM’s MAC address
The image shows that you can specify a type of teaming policy for the uplink profile.
You can select from the following teaming policy modes:
• Failover Order: An active uplink is specified with an optional list of standby uplinks. If the
active uplink fails, the next uplink in the standby list replaces the active uplink. No actual load
balancing is performed with this option.
• Load Balanced Source: A list of active uplinks is specified, and each interface on the
transport node is pinned to one active uplink based on the source port ID. This configuration
allows use of several active uplinks at the same time.
• Load Balanced Source Mac: This option determines the uplink based on the source VM’s
MAC address.
The number of VTEPs on transport nodes is determined by the NIC teaming policy:
• The failover order NIC teaming policy creates a single VTEP on transport nodes.
• The Load Balanced Source and Load Balanced Source Mac policies create multiple VTEPs on transport nodes.
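The following conceptual Python sketch (not NSX code) summarizes how the teaming policy drives the number of VTEPs and how a source-based policy pins a virtual port to one active uplink; the function names and the modulo-based pinning are illustrative simplifications.

    # Conceptual sketch: teaming policy vs. TEP count and uplink pinning.
    def tep_count(policy, active_uplinks):
        # Failover Order uses a single TEP; the source-based policies create
        # one TEP per active uplink so several uplinks can carry overlay traffic.
        return 1 if policy == "FAILOVER_ORDER" else len(active_uplinks)

    def pin_port(source_port_id, active_uplinks):
        # Deterministic pinning of a virtual port to one active uplink,
        # similar in spirit to source-port-ID-based load balancing.
        return active_uplinks[source_port_id % len(active_uplinks)]

    uplinks = ["uplink-1", "uplink-2"]
    print(tep_count("LOADBALANCE_SRCID", uplinks))   # 2
    print(pin_port(7, uplinks))                      # uplink-2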
3-63 Teaming Policies Supported by ESXi and
KVM Hosts
On ESXi hosts, the Failover Order, Load Balanced Source, and Load Balanced Source Mac teaming policies are supported.
On KVM hosts, only the Failover Order teaming policy is supported.
The Load Balanced Source and Load Balanced Source Mac teaming policies do not allow the
configuration of standby uplinks.
The Load Balanced Source and Load Balanced Source Mac teaming policies are not supported
on KVM transport nodes.
KVM hosts are limited to the Failover Order teaming policy and support for a single LAG. Multiple LACP LAGs are not supported on KVM hosts.
3-66 About Network I/O Control Profiles
The Network I/O Control profile manages traffic contention by reserving bandwidth for the
system traffic based on the capacity of the physical adapters on a host:
• The Network I/O Control profile is only available for ESXi hosts.
• You can use the Network I/O Control profile to allocate the network bandwidth to
business-critical applications.
• Network I/O Control version 3 provisions bandwidth to the network adapters of VMs by
using shares, reservation, and limit parameters.
• You can configure Network I/O Control to allocate a certain amount of bandwidth for traffic
generated by vSphere Fault Tolerance, vSphere vMotion, VMs, and so on.
• Network I/O Control version 3 for NSX-T Data Center supports resource management of
the system traffic related to VMs and to infrastructure services, such as vSphere Fault
Tolerance. System traffic is strictly associated with an ESXi host.
3-67 Creating Network I/O Control Profiles
You can create Network I/O Control profiles from the NSX UI.
Using several configuration parameters, the Network I/O Control service allocates bandwidth to traffic from basic vSphere system features:
• Shares: This parameter, ranging from 1 through 100, reflects the relative priority of a system traffic type against the other system traffic types that are active on the same physical adapter.
• Reservation: This parameter sets the minimum bandwidth, as a percentage, that must be guaranteed on a single physical adapter. You can reserve up to 75 percent of the bandwidth in NIOC version 3.
• Limit: This parameter sets the maximum bandwidth, as a percentage, that a system traffic type can consume on a single physical adapter.
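A short worked example makes the shares parameter concrete. The sketch below is illustrative only: it shows how shares translate into bandwidth on a saturated 10 Gbps adapter when vMotion, VM, and Fault Tolerance traffic compete; the share values are arbitrary examples.

    # Worked example (illustrative only): shares on a congested 10 Gbps adapter.
    link_gbps = 10
    shares = {"vMotion": 50, "VM traffic": 100, "Fault Tolerance": 50}

    total = sum(shares.values())
    for traffic_type, s in shares.items():
        print(f"{traffic_type}: {link_gbps * s / total:.1f} Gbps")
    # VM traffic gets 5.0 Gbps; vMotion and FT get 2.5 Gbps each, because shares
    # express relative priority only when the adapter is saturated.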
3-68 About Transport Node Profiles
A transport node profile captures the configuration required to create a transport node.
The transport node profile can be applied to an existing vSphere cluster to create transport nodes for the member hosts. A transport node profile includes the following configuration:
• Transport zones
• Uplink profile
• IP assignment
Transport node creation begins when a transport node profile is applied to a vSphere cluster.
NSX Manager prepares the hosts in the cluster and installs the NSX-T Data Center components
on them.
Transport nodes are created based on the configuration specified in the transport node profile.
3-69 Benefits of Transport Node Profiles
Transport node profiles make deployments easier for customers by using vSphere clusters:
• Speed up deployments
3-70 Prerequisites for Transport Node Profile
Before you configure transport node profiles, several requirements must be met:
• The ESXi hosts planned for NSX-T Data Center configuration must be part of a vCenter Server instance.
3-71 Attaching a Transport Node Profile to
the vSphere Cluster (1)
You create a transport node profile with the N-VDS setting and define the transport zone.
Attaching a transport node profile is required only when configuring ESXi hosts managed by
vCenter Server at the cluster level.
When modifying a transport node profile, the clusters using that profile are immediately updated.
Also, when an ESXi host is added to a cluster, the host is updated with the transport node
profile attached to the cluster.
3-72 Attaching a Transport Node Profile to
the vSphere Cluster (2)
You create a transport node profile with the VDS setting and define the transport zone.
3-73 Managing ESXi: Host Preparation (1)
In the NSX UI, you can prepare a host or a host cluster managed by vCenter Server.
The diagram shows how you can prepare a host or a host cluster managed by a compute
manager, such as vCenter Server.
The screenshot shows that the ESXi hosts prepared for NSX-T Data Center, sa-esxi-04.vclass.local and sa-esx-05.vclass.local, are automatically listed as transport nodes in the NSX UI.
3-75 Reviewing the ESXi Transport Node
Status (1)
After the ESXi host is prepared, you verify that the Configuration Status appears as Success
and the Node Status appears as Up.
3-76 Reviewing the ESXi Transport Node
Status (2)
You can view the status of the host transport nodes from the NSX Manager dashboard.
You can check the status of host transport nodes in the System view of the dashboard. Point to
the circle and messages appear. These messages provide details about the nodes. For example,
in the screenshot, out of seven nodes, four nodes are configured as transport nodes and three
nodes are not configured for NSX-T Data Center.
• Green: Indicates a healthy environment where all components are working without issues
3-77 Verifying the ESXi Transport Node by
CLI
The NSX-T Data Center kernel modules are packaged in VIB files and downloaded to hosts. The
kernel modules provide services such as distributed routing, distributed firewall, and so on.
After an ESXi host is prepared for NSX-T Data Center, VIBs are installed for the host to
participate in networking and security operations.
• nsx-exporter: Provides host agents that report runtime state to the aggregation service.
• nsx-host: Provides metadata for the VIB bundle that is installed on the host.
• nsx-sfhc: Service fabric host component (SFHC) provides a host agent for managing the life
cycle of the hypervisor as a fabric host.
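As a quick verification aid, the installed NSX VIBs can be listed from the host CLI. The following minimal Python sketch wraps the standard esxcli software vib list command and filters for names containing nsx; it assumes it runs locally on the ESXi host.

    # Minimal sketch: list the NSX kernel module VIBs on a prepared ESXi host.
    import subprocess

    out = subprocess.run(["esxcli", "software", "vib", "list"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "nsx" in line.lower():
            print(line)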
3-78 Transport Node Preparation: KVM
The KVM hypervisor can be configured as a transport node either manually or automatically.
The KVM host preparation manual workflow includes the following steps:
1. Install the NSX-T Data Center Deb/RPM packages on the KVM host.
2. Run the join management-plane command to register the KVM host with NSX Manager.
To automatically configure the transport node from the NSX UI, add the KVM host to NSX Manager through the UI:
• The Deb/RPM packages are downloaded from NSX Manager and installed on the host.
If the transport node is already configured, then automated transport node creation is not applicable for that node.
3-79 Configuring KVM Hosts as Transport
Nodes (1)
From the NSX UI, you can configure a KVM host to be a transport node.
3-81 Reviewing the KVM Transport Node
Status
After the KVM host is prepared, you verify that the Configuration State appears as Success and
the Node Status appears as Up. Successfully prepared KVM hosts are listed as transport nodes.
The screenshot uses dpkg on an Ubuntu host, but other Linux distributions use a different
package manager. For example, RHEL and CentOS use RPM and yum, while SLES uses Zypper
and YaST. See your distribution's documentation for information about querying its respective
package database.
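The same check can be scripted. The sketch below assumes an Ubuntu KVM transport node and wraps dpkg -l to list the installed NSX packages; on RPM-based distributions the command differs, as noted above.

    # Minimal sketch: confirm the NSX Deb packages on an Ubuntu KVM transport node.
    import subprocess

    out = subprocess.run(["dpkg", "-l"], capture_output=True, text=True,
                         check=True).stdout
    nsx_packages = [l for l in out.splitlines() if "nsx" in l]
    print("\n".join(nsx_packages) or "No NSX packages found")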
3-83 Lab 4: Preparing the NSX Infrastructure
Deploy transport zones, create IP pools, and prepare hosts for use by NSX:
3. Create IP Pools
• Explain the relationships among transport nodes, transport zones, VDS, and N-VDS
• Prepare ESXi and KVM hosts to participate in NSX-T Data Center networking
3-85 Key Points (1)
• The management plane, control plane, and policy functions are deployed in NSX Manager.
• You can deploy the NSX Manager nodes on ESXi or KVM hosts.
• N-VDS is a software switch that provides the underlying layer 2 forwarding service on a
transport node (hypervisor host or NSX Edge).
• The ESXi hosts that are managed by vCenter Server can be configured to use VDS during
the transport node preparation.
• Uplink profiles enable you to configure consistent capabilities for network adapters across multiple hosts or nodes.
• A teaming policy applies to each VDS or N-VDS uplink and defines how VDS or N-VDS uses
its uplinks for redundancy and load balancing.
• VIBs install kernel modules that run in the hypervisor kernel and provide services such as
distributed routing, distributed firewall, and other capabilities.
Questions?
Module 4
NSX-T Data Center Logical Switching
4-2 Importance
To build and run layer 2 switching in NSX-T Data Center, you must understand the overall
architecture and components that interact during logical switching. You must deploy, configure,
and manage layer 2 features that are provided by NSX-T Data Center, such as segments,
segment profiles, and Generic Network Virtualization Encapsulation (Geneve).
This module covers the following lessons:
1. Overview of Logical Switching
2. Logical Switching Architecture
3. Configuring Segments
4. Configuring Segment Profiles
5. Logical Switching Packet Forwarding
4-4 Lesson 1: Overview of Logical Switching
4-6 Use Cases for Logical Switching
Traditional data center switching challenges:
4-7 Prerequisites for Logical Switching
Before configuring logical switching, ensure that the following conditions are met:
• The NSX management cluster must be formed, stable, and ready to use.
Transport nodes are hypervisor hosts, bare-metal servers, and NSX Edge instances that
participate in NSX-T Data Center.
4-8 Logical Switching Terminology
Logical switching involves several concepts.
A segment, also known as a logical switch, reproduces switching functionality in an NSX-T Data
Center virtual environment. Segments are similar to VLANs. Segments segregate networks and
provide network connections to which you can attach VMs. The VMs can then communicate
with each other over tunnels between hypervisors if the VMs are connected to the same
segment. Each segment has a virtual network identifier (VNI), similar to a VLAN ID. However,
unlike VLANs, VNIs scale beyond the limits of VLAN IDs.
A segment contains multiple segment ports. Entities such as routers, VMs, or containers are
connected to a segment through the segment ports.
Segment profiles include layer 2 networking configuration details for logical switches and logical
ports. NSX Manager supports several types of switching profiles and maintains one or more
system-defined default switching profiles for each profile type.
Segment profiles contain different configurations of the logical ports. These profiles can be
applied at a port level or at a segment level. Profiles applied on a segment are applicable on all
ports of the segment unless they are explicitly overwritten at the port level. Multiple segment
profiles are supported, including IP Discovery, MAC Discovery, SpoofGuard, Segment Security,
and Quality of Service (QoS).
The virtual distributed switch managed by NSX (N-VDS) is configured on each transport node,
which provides layer 2 functionality. N-VDS exists on each transport node within the transport zone.
In vSphere environments, ESXi hosts can use both N-VDS and VDS for layer 2 forwarding.
4-9 About Segments (1)
A segment is a representation of a layer 2 broadcast domain across transport nodes.
VMs attached to the same segment can communicate with each other, even across transport
nodes.
Each segment is assigned a virtual network identifier (VNI), which is similar to a VLAN ID.
One or more VMs can be attached to a segment. The VMs connected to a segment can
communicate with each other through tunnels between hosts.
Segments are similar to VLANs. Segments separate networks from each other. Each segment
has a virtual network identifier (VNI), similar to a VLAN ID.
4-10 About Segments (2)
The type of segment created on a host depends on the transport zone to which it is attached.
Segment configuration changes are allowed only from the NSX UI and API.
Workloads, such as VMs and containers, are connected to the segment ports.
4-11 About Tunneling
Tunneling encapsulates the virtual network traffic data and carries it over the physical network.
VM frames are encapsulated with Geneve tunnel headers and sent across the tunnel.
The NSX-T Data Center overlay network implementation is based on tunneling. It provides
isolation between the underlay network (physical network) and the overlay network (virtual
network). This isolation is achieved by encapsulating the overlay frame with a Geneve header.
The underlying transport network can be another layer 2 network, or it can cross layer 3
boundaries.
The transport node endpoints in an NSX-T Data Center overlay network are called the tunnel
endpoints (TEPs):
• TEPs are the source and destination IP addresses used in the external IP header to identify
the transport nodes.
• TEPs typically carry two types of traffic: VM traffic and control (health check) traffic.
4-12 About Geneve
Geneve is an IETF overlay tunneling mechanism providing L2 over L3 encapsulation of data
plane packets.
1. The source TEP encapsulates the VM's frame in the Geneve header.
2. The encapsulated UDP packet is transmitted to the destination TEP over port 6081.
3. The destination TEP decapsulates the Geneve header and delivers the source frame to the
destination VM.
NSX-T Data Center uses a tunneling encapsulation mechanism called Generic Network
Virtualization Encapsulation (Geneve).
The Geneve protocol is comparable to other tunneling protocols (such as VXLAN, NVGRE, and
STT) and is more flexible.
The Geneve-encapsulated packets are communicated over standard back planes, switches, and
routers:
• Packets are sent from one tunnel endpoint to one or more tunnel endpoints using unicast
addressing.
• The Geneve protocol does not modify the end-user application and the VMs in which the
application runs.
• The tunnel endpoint encapsulates the end-user Ethernet frame in the Geneve header.
• The completed Geneve packet is transmitted to the destination endpoint in a standard User
Datagram Protocol (UDP) packet. Both IPv4 and IPv6 are supported.
• The receiving tunnel endpoint strips the Geneve header, interprets any included options, and
directs the end-user frame to its destination in the virtual network.
• Runs on UDP
To support the needs of network virtualization, the tunneling protocol draws on the evolving
capabilities of each type of device in both the underlay and overlay networks.
This process imposes a few requirements on the data plane tunneling protocol:
• The data plane is generic and extensible enough to support current and future control
planes.
• Tunnel components are efficiently implemented in both hardware and software without
restricting capabilities to the lowest common denominator.
The Geneve packet format includes a compact tunnel header encapsulated in UDP over either
IPv4 or IPv6. A small fixed tunnel header provides control information, as well as a base level of
functionality and interoperability with a focus on simplicity. This header is followed by a set of
variable options for future development. The payload consists of a protocol data unit of the
indicated type, such as an Ethernet frame.
• Options Length (6 bits): This variable results in a minimum total Geneve header size of 8
bytes and a maximum of 260 bytes.
• O (1 bit): Operations, Administration, and Maintenance (OAM) packet. This packet contains a
control message instead of a data payload.
• Rsvd. (6 bits): The Reserved field must be zero on transmission and ignored on receipt.
• Protocol Type (16 bits): The field indicates the type of protocol data unit appearing after the
Geneve header.
• Reserved (8 bits): The Reserved field must be zero on transmission and ignored on receipt.
• Virtual Network Identifier: A unique VNI identifies each logical network. The VNI uniquely
identifies the segment that the inner Ethernet frame belongs to. It is a 24-bit number that is
added to the Geneve frame, allowing a theoretical limit of 16 million separate networks. The
NSX VNI range is 5,000 through 16,777,216.
The base Geneve header is followed by zero or more options in type-length-value format. Each
option includes a 4-byte option header and a variable amount of option data interpreted
according to the type. Geneve provides NSX-T Data Center with the complete flexibility of
inserting metadata in the type, length, and value fields that can be used for new features. One of
the examples of this metadata is the VNI. You must use an MTU of at least 1600 to account for the encapsulation header.
• Can add new metadata to the encapsulation without revising the Geneve standard
• Provides the same kind of NIC offloads as VXLAN (check compatibility list)
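To make the header layout concrete, the following illustrative Python sketch packs the 8-byte fixed Geneve header described above (version, option length, O bit, protocol type, and 24-bit VNI). It is a learning aid only, not an encapsulation implementation; the VNI value is an example (69632, the Web-Segment VNI used later in this module).

    # Illustrative sketch: packing the 8-byte fixed Geneve header (RFC 8926 layout).
    import struct

    def geneve_base_header(vni, opt_len_words=0, oam=False,
                           protocol_type=0x6558):      # 0x6558 = Ethernet payload
        byte0 = (0 << 6) | (opt_len_words & 0x3F)      # Ver=0, option length in 4-byte words
        byte1 = 0x80 if oam else 0x00                  # O bit; C bit and Rsvd left at 0
        word2 = (vni << 8) & 0xFFFFFF00                # 24-bit VNI + 8 reserved bits
        return struct.pack("!BBHI", byte0, byte1, protocol_type, word2)

    hdr = geneve_base_header(vni=69632)
    print(hdr.hex(), len(hdr), "bytes")                # minimum header size is 8 bytes
    # The header is then carried in a UDP datagram with destination port 6081.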
4-14 Logical Switching: End-to-End
Communication
Two VMs connected to the same segment communicate end to end.
• The ESXi host is configured as a transport node with TEP IP: 172.20.11.51, and PROD-NVDS
is installed on the hypervisor during the transport node creation. The VMkernel interface
VMK10 is created on the ESXi host.
• The KVM host is configured as a transport node with TEP IP: 172.20.11.52, and PROD-NVDS
is installed on the hypervisor during the transport node creation. The nsx-vtep0.0 interface
is created on the KVM host.
• The ESXi and KVM transport nodes are configured in the transport zone named PROD-
OVERLAY-TZ.
• Transport node A is running VM-1 with IP address 10.1.10.11 and MAC address ABC.
• Transport node B is running VM-2 with IP address 10.1.10.12 and MAC address DEF.
• VM-1 and VM-2 are connected to the segment ports on Web-Segment 69632. This web
segment is an overlay-based segment configured in the transport zone named PROD-
OVERLAY-TZ.
• When VM-1 communicates with VM-2, the source hypervisor encapsulates the packet with
the Geneve header and sends it to the destination transport node, which decapsulates the
packet and forwards it to the destination VM.
1. VM-1 sends a frame destined for VM-2.
2. The source hypervisor encapsulates the packet with the Geneve header.
3. The source transport node forwards the packet to the physical network.
4. The destination transport node receives the packet and performs the decapsulation.
4-16 Lesson 2: Logical Switching Architecture
3. NSX Manager realizes the segment information as logical switches in the Corfu database.
4-19 Creating Segments: ESXi Hosts
On ESXi hosts, additional steps are required:
6. The APH service sends the switching configuration to the local control plane (nsx-proxy)
over port 1235.
7. The nsx-proxy agent forwards the switching configuration to the nsxt-vdl2 kernel module,
which creates and configures the segments in the datapath.
The nsx-proxy agent is the local control plane agent running on each ESXi transport node.
The CCP sends the information to the nsx-proxy agent running on the ESXi hypervisor through
the Appliance Proxy Hub.
The nsx-proxy uses the nsxt-vdl2 module that creates and configures layer 2 segments.
The APH service sends the configuration to nsx-proxy through nsx-rpc messages. The nsx-rpc messages include the segment information to configure on the ESXi transport node.
4-20 Creating Segments: KVM Hosts
On KVM hosts, additional steps are required:
6. The APH service sends the switching configuration to the local control plane (nsx-proxy)
over port 1235.
7. The nsx-proxy agent forwards the switching configuration to Open vSwitch on the KVM
host, which creates and configures the segments in the datapath.
4-22 Lesson 3: Configuring Segments
1. Create a segment.
2. Attach a VM to a segment.
4-25 Creating Segments
You use the NSX UI to create segments.
A segment connects to gateways and VMs. A segment performs the functions of a logical
switch.
To create segments:
1. From your browser, log in to NSX Manager and select Networking > Segments > NSX > ADD SEGMENT.
2. Enter a name for the segment.
3. From the Connected Gateway drop-down menu, select one of the following options:
— None
— Tier-0 Gateway
— Tier-1 Gateway
4. From the Transport Zone drop-down menu, select the transport zone.
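Segments can also be created through the NSX-T policy API. The following minimal Python sketch issues a PATCH to /policy/api/v1/infra/segments/<segment-id>; the Manager FQDN, credentials, transport zone path, and subnet are placeholders, and the field names should be checked against the API guide for your version.

    # Minimal sketch: create an overlay segment through the NSX-T policy API.
    import requests

    NSX = "https://nsx-mgr.example.com"             # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")             # placeholder credentials
    TZ_PATH = ("/infra/sites/default/enforcement-points/default/"
               "transport-zones/<overlay-tz-uuid>")  # placeholder transport zone UUID

    segment = {
        "display_name": "Web-Segment",
        "transport_zone_path": TZ_PATH,
        "subnets": [{"gateway_address": "172.16.10.1/24"}],  # optional gateway CIDR
    }
    r = requests.patch(f"{NSX}/policy/api/v1/infra/segments/Web-Segment",
                       json=segment, auth=AUTH, verify=False)
    r.raise_for_status()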
4-26 Viewing Configured Segments
You can connect to vCenter Server with the vSphere Client to view the configured segments.
The ESXi hosts can be prepared as transport nodes by using either N-VDS or VDS. In environments where ESXi hosts are configured with N-VDS, segments appear as opaque networks in vCenter Server. However, if ESXi hosts are configured with VDS, segments are represented as NSX distributed port groups in vCenter Server.
4-27 Attaching a vSphere VM to a Segment
You can attach a VM, which either runs on a standalone ESXi host or is managed by vCenter
Server, to a segment.
If the VM runs on an ESXi host managed by vCenter Server, you attach the VM to the desired
segment by editing its settings in the vSphere Client.
For the procedure to attach a VM hosted on a standalone ESXi host to a segment, see the NSX-T Data Center 3.2 Administration Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/administration/GUID-FBFD577B-745C-4658-B713-A3016D18CB9A.html.
4-28 Workflow: Attaching a vSphere VM to a
Segment (1)
To attach a VM managed by vCenter Server to a segment:
4-29 Workflow: Attaching a vSphere VM to a
Segment (2)
4. NSX Manager configures logical interface 1 (LIF 1) on the segment with a virtual interface
(VIF 1) attachment.
6. The CCP sends the request to the ESXi host on which the VM resides.
4-30 Attaching a KVM VM to a Segment
For VMs residing on KVM hosts, you must manually add the VM’s UUID to the segment.
If your VM resides on a KVM host, you must manually create a logical port and attach the VM:
1. From the KVM CLI, run the virsh dumpxml <VM_name> | grep interfaceid
command and record the UUID information.
2. In the NSX UI, add a segment port by configuring the UUID, attachment type, and other
settings.
For more information about creating the UUID, see VMware knowledge base article 2150850 at
https://kb.vmware.com/s/article/2150850.
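The UUID lookup can also be scripted. The sketch below wraps the virsh dumpxml command shown above and reads the interfaceid attribute from the libvirt domain XML; the VM name is a placeholder.

    # Minimal sketch: retrieve a KVM VM's interface UUID(s) for the segment port
    # attachment by parsing the libvirt domain XML.
    import subprocess
    import xml.etree.ElementTree as ET

    def vif_ids(vm_name):
        xml = subprocess.run(["virsh", "dumpxml", vm_name],
                             capture_output=True, text=True, check=True).stdout
        root = ET.fromstring(xml)
        # interfaceid lives on <virtualport><parameters interfaceid="..."/>
        return [p.get("interfaceid")
                for p in root.iter("parameters") if p.get("interfaceid")]

    print(vif_ids("T1-Web-03"))   # UUID(s) to use in the segment port attachment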
When adding segment ports, you can select a port type from the Type drop-down menu:
• Leave this field blank except for use cases such as containers or VMware HCX.
— If the type is set to Child, enter the parent virtual interface (VIF) ID in the Context ID
text box.
— If the type is set to Static, enter the transport node ID in the Context ID text box.
4-31 Workflow: Attaching a KVM VM to a
Segment (1)
To attach a VM on a KVM host to a segment:
1. You retrieve the VM’s virtual interface (VIF) UUID from the KVM host.
2. You add a segment port on the segment and attach the VIF ID to the segment port.
4-32 Workflow: Attaching a KVM VM to a
Segment (2)
4. NSX Manager advertises the attachment configuration to the CCP.
5. The CCP sends the request to the KVM host on which the VM resides.
4-33 Verifying the Segment Port Status
You verify that the status of the segment and the status of the port (to which the VM is
connected) appear as Success. VMs attached to the same segment should be able to ping each
other.
After you successfully set up the segment and attach VMs to it, you can test the connectivity
between VMs on the same segment. In the example, you can test the connectivity in the
following way:
1. Using SSH or the VM console, log in to the T1-Web-01 (172.16.10.11) VM, which is attached to
Web-Segment.
2. Ping the T1-Web-03 (172.16.10.13) VM, which resides on another KVM host. This VM is also
attached to Web-Segment.
4-34 About Network Topology
The Network Topology feature enables users to understand how the different NSX networking
components are configured and interconnected.
• Tier-0 and Tier-1 gateways with their attached segments and workloads
• Details for each VM, segment, and Tier-1 and Tier-0 gateway
4-35 Using Network Topology to Validate the
Segment Configuration
The Network Topology tool shows the segment configuration and the VMs that are attached to
segments.
4-37 Lesson 4: Configuring Segment Profiles
4-40 About Segment Profiles (2)
Each type of segment profile has a different function:
• SpoofGuard: Helps prevent NIC spoofing by authenticating the IP and MAC address of the
virtual NIC
• Quality of Service (QoS): Provides high-quality and dedicated network performance for
preferred traffic
NSX-T Data Center supports several types of segment profiles and maintains one or more
system-defined default segment profiles:
• The IP Discovery profile uses DHCP snooping, Address Resolution Protocol (ARP)
snooping, or VMware Tools to learn the VM MAC and IP addresses.
• The MAC Discovery profile supports two functionalities: MAC learning and MAC address
change.
• SpoofGuard prevents traffic with incorrect source IP and MAC addresses from being
transmitted.
• Segment Security provides stateless layer 2 and layer 3 security by checking the ingress
traffic to the segment and matching the IP address, MAC address, and protocols to a set of
allowed addresses and protocols. Unauthorized packets are dropped.
• QoS provides high-quality and dedicated network performance for preferred traffic.
4-41 Default Segment Profiles
The system default segment profiles are not editable.
You cannot edit or delete the default segment profiles, but you can create custom segment
profiles.
4-42 Applying Segment Profiles to Segments
You can apply default or user-created profiles to a segment.
4-43 Applying Segment Profiles to Segment
Ports
You can apply default or custom profiles to segment ports. A segment or segment port can be
associated with only one segment profile of each type.
For example, two QoS segment profiles cannot be associated with a segment or segment port.
When the segment profile is associated or disassociated from a segment, the segment profile
for the child segment ports is applied based on the following criteria:
• If the parent segment has a profile associated with it, the child segment port inherits the
segment profile from the parent.
• If the parent segment does not have a segment profile associated with it, a default segment
profile is assigned to the segment, and the segment port inherits that default segment
profile.
• If you explicitly associate a custom profile with a segment port, this custom profile overrides
the existing segment profile.
You can associate a custom segment profile with a segment and still retain the default segment profile for one of its child segment ports. To do so, you must make a copy of the default segment profile and associate it with the specific segment port.
The IP Discovery segment profile learns VM addresses by using the following methods:
• DHCP/DHCPv6 snooping
• ARP snooping
• VMware Tools
• ND snooping
Learned addresses are shared with the CCP to achieve ARP/ND suppression.
In NSX-T Data Center, the IP Discovery profile works in the following ways:
• ARP Snooping inspects a VM's outgoing ARPs and GARPs to learn the IP and MAC
addresses of the VM.
• The VMware Tools software runs on a VM hosted on ESXi and can provide the VM's
configuration information.
• ND Snooping is the IPv6 equivalent of ARP snooping. It inspects neighbor solicitation (NS)
and neighbor advertisement (NA) messages to learn the IP and MAC addresses.
The VMware Tools IP Discovery method can also provide the VM's configuration information
and is available for only VMs hosted by ESXi.
The IP Discovery profile might be used in the following scenario: The distributed firewall depends
on the IP-to-port mapping to create firewall rules. Without IP Discovery, the distributed firewall
must find the IP of a logical port through SpoofGuard and manual address bindings, which is a
cumbersome and error-prone process.
4-45 Creating an IP Discovery Segment Profile
You can create an IP Discovery segment profile on the SEGMENT PROFILES tab in the NSX UI.
By default, the discovery methods ARP snooping and ND snooping operate in a mode called
trust on first use (TOFU). In the TOFU mode, when an address is discovered and added to the
realized bindings list, that binding remains in the realized list forever. TOFU applies to the first n
unique <IP, MAC, VLAN> bindings discovered using ARP/ND snooping, where n is the binding
limit that you can configure. You can disable TOFU for ARP/ND snooping. The methods then
operate in trust on every use (TOEU) mode. In the TOEU mode, when an address is discovered,
it is added to the realized bindings list, and when it is deleted or expired, it is removed from the
realized bindings list. DHCP snooping and VMware Tools always operate in TOEU mode.
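The following conceptual Python sketch (not NSX code) contrasts the TOFU and TOEU behaviors described above: in TOFU mode the first n discovered bindings are kept permanently, whereas in TOEU mode bindings are added and removed as they are discovered and expire.

    # Conceptual sketch of TOFU vs. TOEU handling of discovered address bindings.
    class DiscoveredBindings:
        def __init__(self, mode="TOFU", limit=1):
            self.mode, self.limit, self.bindings = mode, limit, set()

        def learn(self, ip, mac, vlan):
            if self.mode == "TOFU" and len(self.bindings) >= self.limit:
                return                        # first n bindings are trusted forever
            self.bindings.add((ip, mac, vlan))

        def expire(self, ip, mac, vlan):
            if self.mode == "TOEU":           # only TOEU removes expired bindings
                self.bindings.discard((ip, mac, vlan))

    table = DiscoveredBindings(mode="TOFU", limit=1)
    table.learn("10.1.10.11", "AA:BB:CC:00:00:01", 0)
    table.learn("10.1.10.99", "AA:BB:CC:00:00:99", 0)   # ignored: limit reached
    print(table.bindings)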
4-46 MAC Discovery Segment Profile
The MAC Discovery profile supports MAC learning, MAC address change, unknown unicast
flooding, MAC limit, and MAC limit policy functions.
• Source MAC address-based learning is a common feature in the physical world for learning
the MAC address of a machine. The MAC Learning feature provides network connectivity
to deployments where multiple MAC addresses are configured behind one vNIC. For
example, in a nested hypervisor deployment an ESXi VM runs on an ESXi host and multiple
VMs run in the ESXi VM.
• Without MAC Learning, when the ESXi VM’s vNIC connects to a segment port, its MAC
address is static. VMs running inside the ESXi VM do not have network connectivity because
their packets have different source MAC addresses. With MAC Learning, the source MAC
address of every packet coming from the vNIC is inspected, the MAC address is learned,
and the packet is allowed to go through. If a MAC address that is learned is not used for 10
minutes, it is removed. This aging property is not configurable.
• MAC Learning also supports Unknown Unicast Flooding. When a unicast packet is received
by a port that has an unknown destination MAC address, the packet is flooded out on all
segment ports that have MAC Learning and Unknown Unicast Flooding enabled. This
property is enabled by default, but only if MAC Learning is enabled.
The MAC Discovery profile also supports a VM's ability to change its MAC address:
• A VM connected to a port with MAC Change enabled can run an administrative command
to change the MAC address of its vNIC and still send and receive traffic on that vNIC.
• This feature (disabled by default) is used when a VM needs the ability to change its MAC
address and not lose network connectivity.
The number of MAC addresses that can be learned is configurable. The maximum value is 4,096,
which is the default. You can also set the policy for when the limit is reached. The options are:
• Drop: Packets from an unknown source MAC address are dropped. Packets inbound to this
MAC address are treated as unknown unicast. The port receives the packets only if it has
unknown unicast flooding enabled.
• Allow: Packets from an unknown source MAC address are forwarded although the address
is not learned. Packets inbound to this MAC address are treated as unknown unicast. The
port receives the packets only if it has unknown unicast flooding enabled.
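A conceptual sketch (not NSX code) of MAC learning with a MAC limit illustrates the Drop and Allow policies described above; the table, limit, and policy names are simplifications of the profile settings.

    # Conceptual sketch: MAC learning with a MAC limit and a limit policy.
    def handle_frame(src_mac, mac_table, limit=4096, policy="Drop"):
        if src_mac in mac_table:
            return "forward"
        if len(mac_table) < limit:
            mac_table.add(src_mac)            # learn the new source MAC
            return "forward"
        # Limit reached: Allow forwards without learning, Drop discards the frame.
        return "forward (not learned)" if policy == "Allow" else "drop"

    table = {"00:50:56:00:00:01"}
    print(handle_frame("00:50:56:00:00:02", table, limit=1, policy="Drop"))   # drop
    print(handle_frame("00:50:56:00:00:02", table, limit=1, policy="Allow"))  # forward (not learned)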
If you enable both MAC Learning and MAC Change, you should also enable SpoofGuard to
improve security.
For information about creating a MAC Discovery profile and associating the profile with a
segment or a port, see NSX-T Data Center Administration Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/administration/GUID-
FBFD577B-745C-4658-B713-A3016D18CB9A.html.
4-47 QoS Segment Profile
The QoS profile provides high-quality network performance for preferred traffic that requires
high bandwidth.
QoS provides high-quality and dedicated network performance for preferred traffic that requires
high bandwidth. The QoS mechanism achieves this performance by providing sufficient
bandwidth, controlling latency and jitter, and reducing data loss for preferred packets even with
network congestion. This level of network service is provided by using the existing network
resources efficiently.
• Class of Service (CoS): Marks the packet’s layer 2 header to specify its priority
• Differentiated Services Code Point (DSCP): Inserts a code value into the packet’s layer 3
header for prioritization
The layer 2 CoS allows you to specify priority for data packets when traffic is buffered in the
segment because of congestion. The layer 3 DSCP detects packets based on their DSCP
values. CoS is always applied to the data packet regardless of the trusted mode.
NSX-T Data Center trusts the DSCP setting applied by a VM or modifies and sets the DSCP
value at the segment level. In each case, the DSCP value is propagated to the outer IP header of
encapsulated frames. In this way, the external physical network can prioritize the traffic based on
the DSCP setting on the external header. When DSCP is in the trusted mode, the DSCP value is
copied from the inner header. When in the untrusted mode, the DSCP value is not preserved for
the inner header. DSCP settings work only on tunneled traffic. These settings do not apply to
traffic inside the same hypervisor.
You can use the QoS segment profile to configure the average ingress and egress bandwidth
values to set the transmit limit rate. To prevent congestion on the northbound network links, you
can use the peak bandwidth rate to specify the upper limit that traffic on a segment is allowed to
burst. The settings in a QoS segment profile do not guarantee the bandwidth but help limit the
use of network bandwidth. The actual bandwidth you observe is determined by the link speed of
the port or the values in the segment profile, whichever is lower.
For information about the QoS segment profile, see NSX-T Data Center Administration Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/administration/GUID-
62BB7145-EDD7-4611-A50D-17F4A0EAE57C.html.
4-48 Segment Security Profile
The Segment Security profile provides stateless layer 2 and layer 3 security. It protects segment
integrity by filtering malicious attacks from VMs in the network.
The Segment Security profile provides stateless layer 2 and layer 3 security by checking the
ingress traffic to the segment and dropping unauthorized packets sent from VMs. The profile
matches the IP address, MAC address, and protocols to a set of allowed addresses and
protocols.
You can configure the Bridge Protocol Data Unit (BPDU) filter, DHCP snooping, DHCP server
block, and rate limiting options:
• BPDU Filter: When the BPDU filter is enabled, all BPDU traffic is blocked for each port on
the segment.
• BPDU Filter Allow List: You select a destination MAC address from the BPDU destination MAC addresses list to allow traffic to that permitted destination.
• To enable DHCP filtering, you turn on the Server Block and Client Block toggles. DHCP
Server Block blocks traffic from a DHCP server to a DHCP client. It does not block traffic
from a DHCP server to a DHCP relay agent.
• DHCP filtering can also be configured for IPv6 traffic by using the Server Block-IPv6 and
Client Block-IPv6 options.
• Turning on the Non-IP Traffic Block toggle allows only IPv4, IPv6, ARP, GARP, and BPDU
traffic. The rest of the non-IP traffic is blocked. The permitted IPv4, IPv6, ARP, GARP, and
BPDU traffic is based on other policies set in address binding and SpoofGuard
configurations. By default, this option is disabled to allow non-IP traffic to be handled as
regular traffic.
• Turn on the RA Guard toggle to filter out ingress IPv6 router advertisements. The ICMPv6-
type 134 packets are filtered out. This option is enabled by default.
• You can configure rate limits for the ingress or egress broadcast and multicast traffic. Rate
limits are configured to protect the segment or the VM from threats such as broadcast
storms. To avoid any connectivity problems, the minimum rate limit value must be greater
than or equal to 10 PPS.
SpoofGuard provides the following protections:
• Ensures that the IP addresses of VMs are not altered without intervention
• Ensures that distributed firewall rules are not inadvertently or deliberately bypassed
SpoofGuard might be used in your environment for the following reasons:
• Ensuring that the IP addresses of VMs cannot be altered without intervention. You might
not want VMs to alter their IP addresses without proper change control review. You can use
SpoofGuard to ensure that the VM owner cannot alter the IP address and continue working
unimpeded.
• Ensuring that the distributed firewall rules are not inadvertently (or deliberately) bypassed.
For distributed firewall rules created using IP sets as sources or destinations, a VM could
have its IP address forged in the packet header, thereby bypassing the rules in question.
If the IP address of a VM changes, traffic from the VM might be blocked by SpoofGuard until the
corresponding configured port-segment address bindings are updated with the new IP address.
You can enable SpoofGuard for the port groups containing the guests. When enabled for each
network adapter, SpoofGuard inspects packets for the prescribed MAC and its corresponding IP
address.
• At the port level, the allowed MAC, VLAN, or IP allowlist is provided through the Address
Bindings property of the port. When the VM sends traffic, it is dropped if its MAC, VLAN, or
IP address does not match the MAC, VLAN, or IP properties of the port. The port-level
SpoofGuard deals with traffic authentication, that is, the traffic consistent with VIF
configuration.
• At the segment level, the allowed MAC, VLAN, or IP allowlist is provided through the
Address Bindings property of the segment. This property is typically an allowed IP range or
subnet for the segment, and the segment-level SpoofGuard deals with traffic authorization.
Traffic must be permitted by the port and the segment levels by SpoofGuard before it is
allowed into a segment. Enabling or disabling port- and segment-level SpoofGuard can be
controlled using the SpoofGuard segment profile.
4-52 Lesson 5: Logical Switching Packet
Forwarding
To achieve network virtualization, a network controller must configure the hypervisor virtual switches with network flow tables that form the logical broadcast domains.
4-55 TEP Table Update (1)
When a powered-on VM is connected to a segment:
1. Each transport node registers the VNI-to-TEP IP mapping in its local TEP table.
4-56 TEP Table Update (2)
2. Each transport node updates the CCP about the learned VNI-to-TEP IP mapping.
4-57 TEP Table Update (3)
3. The CCP maintains the consolidated entries of VNI-to-TEP IP mappings.
4-58 TEP Table Update (4)
4. The CCP sends the updated TEP table to all the transport nodes where the VNI is realized.
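Conceptually, the CCP merges the per-node reports into one table per VNI and pushes the result back to the transport nodes where that VNI is realized. The following Python sketch (not NSX code) illustrates that consolidation with the TEP IP addresses used in this module's example.

    # Conceptual sketch: CCP consolidation of VNI-to-TEP reports.
    from collections import defaultdict

    reports = {                                # TEP IPs reported per transport node
        "ESXi-TN-A": {69632: "172.20.11.51"},
        "KVM-TN-B":  {69632: "172.20.11.52"},
    }

    tep_table = defaultdict(set)               # consolidated VNI -> set of TEP IPs
    for node, mappings in reports.items():
        for vni, tep_ip in mappings.items():
            tep_table[vni].add(tep_ip)

    # Push the consolidated entries to every node where the VNI is realized.
    for node in reports:
        print(node, "receives", dict(tep_table))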
4-59 MAC Table Update (1)
When a powered-on VM is connected to a segment:
1. Each transport node registers the VM MAC-to-TEP IP mapping in its local MAC table.
4-60 MAC Table Update (2)
2. Each transport node updates the CCP about the learned VM MAC-to-TEP IP mapping.
4-61 MAC Table Update (3)
3. The CCP maintains the consolidated entries of VM MAC-to-TEP IP mappings.
4-62 MAC Table Update (4)
4. The CCP sends the updated MAC table to all the transport nodes where the VNI is realized.
• NSX-T Data Center uses the ARP table maintained in the CCP to provide ARP suppression.
• Transport nodes learn the MAC-to-IP association by snooping the ARP and DHCP traffic.
• The learned information is pushed from each transport node to the control plane.
4-64 ARP Table Update (1)
1. Each transport node records the local VM IP-to-MAC mapping in its local table.
4-65 ARP Table Update (2)
2. Each transport node sends known VM IP-to-MAC mappings to the CCP.
4-66 ARP Table Update (3)
3. The CCP updates its ARP table based on the VM IP-to-MAC mappings received from
transport nodes.
4-67 ARP Table Update (4)
4. The CCP sends the updated ARP table to all the transport nodes.
The ARP table values in both the CCP and the transport nodes are flushed after 10 minutes.
4-68 Unicast Packet Forwarding Across Hosts
(1)
VM1 assumes that the ARP is resolved:
4-69 Unicast Packet Forwarding Across Hosts
(2)
2. The original packet is encapsulated in the Geneve header by the ESXi-A source transport
node.
4-70 Unicast Packet Forwarding Across Hosts
(3)
3. The packet is sent to the ESXi-B destination transport node.
4-71 Unicast Packet Forwarding Across Hosts
(4)
4. The destination transport node decapsulates the Geneve header and delivers the original source VM frame to VM2.
4-72 Overview of BUM Traffic
A VM’s broadcast, unknown unicast, and multicast (BUM) traffic must be flooded to all other
VMs that belong to the same segment.
The BUM traffic originated by a VM on a transport node must be replicated to remote transport
nodes (running the VMs connected to the same segment).
• Head replication
• Hierarchical two-tier replication
All broadcast, unknown unicast, and multicast (BUM) traffic is treated the same: it is flooded to all participating hypervisors in the segment. The replication is performed in software.
Each host transport node is a tunnel endpoint. Each TEP has an IP address. These IP addresses
can be in the same subnet or in different subnets, depending on your configuration of IP pools or
DHCP for your transport nodes.
When two VMs on different hosts communicate directly and ARP is resolved, unicast-
encapsulated traffic is exchanged between the two TEP IP addresses without any need for
flooding. However, as with any layer 2 network, sometimes traffic that is originated by a VM,
such as an ARP request, needs to be flooded. For layer 2 BUM traffic, the packet must be sent to all the other VMs belonging to the same segment.
In the diagram, VM2 residing on transport node 2 (TN2) must send traffic to VM9 residing on
TN9. VM9’s MAC address is unknown to TN2 or the control plane. Therefore, VM2 sends an
ARP request (broadcast frame) seeking VM9’s MAC address. TN2 floods this ARP request
frame out to all other transport nodes within VNI 73728. VM9 on TN9 receives the ARP request
and responds with an ARP reply. ARP tables on hosts are then updated to reduce future
flooding.
To enable flooding, NSX-T Data Center segment supports the following types of replication
modes:
• Head Replication mode: This mode is also known as Source Mode or Headend Replication.
The source host duplicates each BUM frame and sends a copy to each TEP (on a particular
VNI) that it knows.
• Hierarchical Two-Tier Replication: This mode is also known as the MTEP mode. It involves a
host in another L2 domain that performs replication of BUM traffic to other hosts within the
same VNI.
The TEP only replicates traffic in which the replication option TLV is set in the Geneve header.
4-73 Managing BUM Traffic: Head Replication
The Head replication mode performs source-based replication. The BUM packet is replicated by
the source transport node to all other transport nodes participating in that VNI.
• TN1 replicates to TN2 and TN3 because they are in the same L2 domain.
TN1 replicates because the control plane does not have the desired information.
• Meanwhile, TN1 also needs to replicate the packet to the remote transport nodes (TN4 and
TN5 in one L2 domain and TN7, TN8, TN9 in another L2 domain).
• Because TN6 does not participate in VNI 73728, the packet is not replicated to TN6.
4-74 Managing BUM Traffic: Hierarchical Two-
Tier Replication
The source transport node replicates the BUM packet locally within its L2 domain.
The source transport node elects a proxy TEP (MTEP) from each remote L2 domain and sends
the BUM packet to each proxy TEP.
The proxy TEP replicates the BUM packet to the transport nodes within its L2 domain.
• An MTEP is elected for each L2 domain (segment). TN1 elects an MTEP for each remote L2
domain.
• TN1 sends a copy of the BUM packet to each remote MTEP with the replication option TLV
embedded in the Geneve header.
The role of MTEP is to replicate the received BUM packet locally and forward it to other
TNs within the same L2 domain.
• Because TN6 does not participate in VNI 73728, the packet is not sent to TN6.
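The difference between the two replication modes can be summarized in a short conceptual sketch (not NSX code): it lists which remote TEPs the source transport node itself must send a copy to, using the TN1 through TN9 topology from the diagrams. The MTEP election here is simplified to picking the first member of each remote L2 domain.

    # Conceptual sketch: targets the source TN must send a BUM copy to.
    def replication_targets(source, mode, l2_domains, vni_members):
        local = next(d for d in l2_domains if source in d)
        targets = [tn for tn in local if tn != source and tn in vni_members]
        for domain in l2_domains:
            if domain is local:
                continue
            remote = [tn for tn in domain if tn in vni_members]
            if not remote:
                continue                       # e.g., TN6 does not participate in the VNI
            # Head replication sends to every remote TEP; two-tier sends one copy
            # to an elected proxy TEP (MTEP), which replicates within its domain.
            targets += remote if mode == "head" else [remote[0]]
        return targets

    domains = [["TN1", "TN2", "TN3"], ["TN4", "TN5", "TN6"], ["TN7", "TN8", "TN9"]]
    members = {"TN1", "TN2", "TN3", "TN4", "TN5", "TN7", "TN8", "TN9"}
    print(replication_targets("TN1", "head", domains, members))
    print(replication_targets("TN1", "two-tier", domains, members))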
2. Create Segments
4-77 Key Points
• A segment is a representation of the L2 broadcast domain across transport nodes.
• Segment profiles provide L2 networking configuration details for segments and ports.
• Five types of segment profiles are available: IP Discovery, MAC Discovery, SpoofGuard,
Segment Security, and QoS.
• Network flow tables used in packet forwarding include the TEP, ARP, and MAC tables.
• BUM traffic replication supports head mode and hierarchical two-tier mode.
Questions?
Module 5
NSX-T Data Center Logical Routing
5-2 Importance
In NSX-T Data Center, logical routing provides an optimized and scalable way to manage east-
west and north-south traffic. You must understand the NSX-T Data Center logical routing
architecture, routing components, and routing features to build an efficient and secure layer 3
network infrastructure.
7. VRF Lite
5-4 Lesson 1: Overview of Logical Routing
5-6 Use Cases for Logical Routing
In NSX-T Data Center, logical routing is used in many ways:
The NSX-T Data Center logical routing has many use cases:
• NSX-T Data Center meets the demands of containerized workload, multihypervisor, and
multicloud environments.
• The logical routing functionality focuses on multitenant environments. Gateways can support
multiple instances where a separation of tenants and networks is required.
• Logical routing is optimized for cloud environments. It suits containerized workload and
multihypervisor and multicloud data centers.
• The distributed routing architecture provides optimal routing paths. Routing is done closest
to the source. For example, traffic from two VMs on different subnets residing on the same
host can be routed in the kernel. The traffic does not need to leave the host to be routed.
This method helps avoid hairpinning.
• NSX Edge transport nodes that host gateways provide network services that cannot be
distributed to hosts.
• Gateways exist where east-west routing, north-south routing, and centralized services (such
as NAT or load balancing) are required.
• A dynamic routing protocol is not needed between the two-tiered gateways, simplifying
data center routing.
5-7 Prerequisites for Logical Routing
For logical routing to work, certain requirements must be met:
• Hypervisors must be prepared as NSX-T Data Center transport nodes and added to the
management plane.
• The NSX Edge nodes must be deployed and preconfigured according to the requirements.
5-8 Logical Routing in NSX-T Data Center
The NSX-T Data Center gateways provide:
• Multitenancy
• High availability
An NSX-T Data Center gateway reproduces routing functionality in a virtual environment:
• Logical routing is distributed and decoupled from the underlying hardware. Basic forwarding
decisions are made locally on the prepared transport nodes.
• Gateways also provide centralized services. Layer 3 functionalities, such as NAT, are
provided through the services running on the NSX edge nodes.
• When multiple gateway instances are installed, multitenancy and network separation are
supported on a single gateway. Logical routing is enhanced for most cloud use cases that
involve multiple service providers and tenants.
The NSX-T Data Center gateways provide north-south and east-west connectivity:
• North-south routing enables tenants to access public networks. Traffic leaves or enters a
tenant administrative domain. Connections to and from the entities outside the tenant's
premises are considered as north-south connectivity.
• East-west traffic flows between various networks in the same tenant. Traffic is sent
between logical networks (between logical switches) under the same administrative domain.
5-9 Tier-0 and Tier-1 Gateways
NSX-T Data Center provides Tier-0 and Tier-1 gateways. Each gateway has several
characteristics.
Tier-0 gateway:
• Supports static or dynamic routing
• Supports equal-cost multipath (ECMP) routing to upstream physical gateways
• Forwards the traffic between logical and physical networks (north-south)
Tier-1 gateway:
• Does not use dynamic routing protocols
• Does not support ECMP routing
• Enables routing between segments (east-west) and must be connected to a Tier-0 gateway to provide external connectivity
Gateways are distributed across the kernel of each host. A gateway can be deployed as either a
Tier-0 or a Tier-1 gateway:
The Tier-1 gateway must connect to the Tier-0 gateway to access external networks. The Tier-
0 gateway is directly connected to upstream physical gateways.
The Tier-1 gateway does not require an edge node if no services are used. It has
preprogrammed (by the management plane) connections toward its upstream Tier-0 gateway.
Both Tier-0 and Tier-1 gateways support stateful services, such as NAT. Stateful services are
centralized on edge nodes.
Dynamic routing is a centralized service, while packet forwarding is distributed and stateless.
5-10 Single-Tier Topology
In a single-tier topology:
In a single-tier deployment, only Tier-0 gateways are used. Tier-1 gateways are not used. The
segments are directly connected to the Tier-0 layer. The upstream connectivity is provided by
the service provider. The tenant performs southbound connectivity.
5-11 Multitier Topology
In a multitier topology:
The two-tier routing topology is not mandatory. If the provider and the tenant do not need to
be separated, a single-tier topology can be used.
In most use cases, the provider owns and configures the Tier-0 gateway. The tenants own and
configure the Tier-1 gateway. Cloud management platforms (CMPs) typically provision Tier-1
gateways.
5-12 Edge Nodes and Edge Clusters
NSX Edge nodes have the following functions:
• Run gateways with centralized and stateful services such as NAT or load balancing.
NSX Edge nodes provide computational resources to deliver dynamic routing and services for
NSX gateways.
5-13 Tier-0 Gateway Uplink Connections
Each Tier-0 gateway can have one or multiple uplinks per NSX Edge node to the physical world.
The diagram shows two different configurations for the edge node uplinks:
• On the left, the Tier-0 gateway has one uplink per NSX Edge mapped to one VLAN to
connect to the outside world.
• On the right, the Tier-0 gateway has two uplinks per NSX Edge mapped to different
VLANs.
In both scenarios, the NSX Edge cluster contains two NSX Edge nodes.
The Tier-0 deployment can be active-active or active-standby. When using a dynamic routing
protocol, ECMP can be enabled for multiple northbound uplinks.
5-14 Gateway Components: Distributed
Router and Service Router (1)
A distributed router (DR) has the following features:
• Runs as a kernel module in the ESXi hypervisor and as part of the OVS-based data path on KVM hosts
5-15 Gateway Components: Distributed
Router and Service Router (2)
A gateway can be either a Tier-0 or a Tier-1 gateway, depending on the design requirements:
• The DR component is distributed among all hypervisors and provides basic packet
forwarding:
• The SR component is only located in the NSX Edge nodes and provides services:
— An SR is automatically created on the edge node when you configure the gateway with
an edge cluster.
5-16 Realization of Distributed Routers and
Service Routers
Distributed routers and service routers are realized in the following manner:
• Distributed router instances can be realized on host and edge transport nodes.
The diagram shows that SR and DR instances can be distributed across the Compute and
Management clusters.
SR instances are realized only on the edge transport nodes running in the Management cluster, while DR instances span both clusters because they run on host transport nodes and on edge transport nodes. The service and distributed routers are interconnected through TransitLink ports that are created automatically at deployment time.
5-17 Gateway Components in a Single-Tier
Topology
The diagram shows a logical and physical view of a single-tier configuration.
The diagram represents a single-tier topology where the Tier-0 gateway (T0-GW) has two
uplinks configured to the physical world and each uplink is connected to a different SR to
provide redundancy.
In the physical view, the DR component of the Tier-0 gateway is distributed across all transport
nodes (ESXi-1, ESXi-2, KVM, Edge Node-1, and Edge-Node-2), whereas the SR components are
located only on the edge nodes.
5-18 Gateway Components in a Multitier
Topology (1)
The diagram shows a logical and physical view of a multitier configuration where the Tier-1
gateways are not configured with an edge cluster.
The diagram represents a multitier topology where the Tier-0 gateway (T0-GW) has two
uplinks configured to the physical world. The two Tier-1 gateways have no configured services
and therefore were not configured with an edge cluster. The Tier-1 gateways have no SR
components.
In the physical view, the DR component of the Tier-0 gateway is visible. The DR component of
the Tier-1 Gateway A and the DR component of the Tier-1 Gateway B are distributed across all
transport nodes (ESXi-1, ESXi-2, KVM, Edge Node-1, and Edge-Node-2).
The Tier-0 gateway is configured as active-active. SR1 is located in Edge Node-1, and SR2 in
Edge Node-2.
5-19 Gateway Components in a Multitier
Topology (2)
The diagram shows the logical and physical views of a multitier configuration with services
configured on both Tier-1 gateways.
The diagram represents a multitier topology where the Tier-0 gateway (T0-GW) has two
uplinks configured to the physical world.
The two Tier-1 gateways have some services configured and each gateway has an SR component.
The DR component of Tier-1 Gateway A and the DR component of Tier-1 Gateway B are
distributed across all transport nodes (ESXi-1, ESXi-2, KVM, Edge Node-1, and Edge-Node-2).
The DR component of the Tier-0 gateway is distributed across the edge transport nodes (Edge
Node-1 and Edge-Node-2). It is not distributed across the host transport nodes (ESXi-1, ESXi-2,
and KVM) when the Tier-1 gateway SR components are deployed.
The Tier-0 gateway has each uplink connected to a different SR. SR1 is in Edge Node-1, and SR2
is in Edge Node-2.
Tier-1 gateways are automatically configured in active-standby mode, so the SRs are deployed on the two edge nodes. You can select the preferred active node for each gateway.
A dedicated edge cluster can be used to deploy Tier-1 gateways with services to achieve better
performance in larger environments.
5-20 Gateway Interfaces
The following types of interfaces are used by gateways:
• Uplink interfaces connect the Tier-0 gateways to the upstream physical devices.
• An intratier TransitLink is an internal link between the distributed and service routers on a
gateway.
• The service interface is a special interface for VLAN-based services and partner service
redirection.
In a logical router deployment in NSX-T Data Center, different types of connections require
different types of interfaces:
• The uplink interface provides connections to the external physical infrastructure. VLAN and
overlay interface types are supported depending on the use case. The uplink interface is
where the external BGP peerings and OSPF adjacencies can be established. External service connections, such as IPSec VPN, can also be used through the uplink interface.
• The downlink interface connects workload networks (where endpoint VMs are running) to
the routing infrastructure. A downlink interface is configured to connect to a logical switch
(corresponding to the segment defined at the policy). This interface provides the default
gateway for the VMs in that subnet.
• RouterLink is a type of interface that connects Tier-0 and Tier-1 gateways. The interface is
created automatically when Tier-0 and Tier-1 gateways are connected through an internal
logical switch also created automatically. It uses a subnet assigned from the 100.64.0.0/10
IPv4 address space by default.
• The intratier TransitLink connection is also created when a service router is created. It is an
internal logical switch between the distributed and service routers on a gateway. By default,
the intratier TransitLink has an IP address in the 169.254.0.0/28 subnet range.
5-22 Lesson 2: NSX Edge and Edge Clusters
• Identify the NSX Edge node form factors and sizing options
5-24 About the NSX Edge Node
The NSX Edge node has several functions:
• Runs the dynamic routing processes and services such as DHCP, NAT, or load balancing
NSX Edge is an important component of the NSX-T Data Center transport zone.
NSX Edge nodes support the Data Plane Development Kit (DPDK) for faster packet forwarding in high-performance environments.
5-25 About the NSX Edge Cluster
An NSX Edge cluster is formed by a group of edge nodes and has the following characteristics:
Edge nodes must join an edge cluster before they can be used to host gateway services.
An edge cluster can be formed with edge nodes of different form factor types.
Failure domains can be configured with NSX APIs and are used to automatically place the Tier-1
gateway active and standby instances. Failure domains guarantee service availability, for
example, if a rack failure occurs. Active and standby Tier-1 gateway services always run in
different failure domains.
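As a minimal sketch of the API-driven workflow mentioned above, the following Python example creates a failure domain through the NSX Manager REST API. The manager FQDN, credentials, and failure domain name are hypothetical, and the /api/v1/failure-domains endpoint and payload should be verified against the NSX-T Data Center API guide for your version.

import requests

NSX_MANAGER = "https://sa-nsxmgr-01.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")                  # hypothetical credentials

# Create a failure domain that edge nodes in one rack can later be assigned to.
response = requests.post(
    f"{NSX_MANAGER}/api/v1/failure-domains",
    auth=AUTH,
    verify=False,  # lab-only shortcut; use a trusted certificate in production
    json={"display_name": "FD-Rack-A"},
)
response.raise_for_status()
print(response.json())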
The NSX-T Data Center edge cluster scaling and maximums are available at
https://configmax.vmware.com.
• VM on an ESXi host
• Bare-metal node
5-27 NSX Edge VM Sizing Options
For NSX Edge nodes deployed as VMs on hypervisors, several deployment sizes are available.
For an NSX Edge node VM deployment, the following sizes are available:
• The medium size is suitable when only L2 through L4 features, such as NAT, routing, L4
firewall, and L4 load balancer, are required and the total throughput requirement is less than
2 Gbps.
• The large size is suitable when only L2 through L4 features, such as NAT, routing, L4
firewall, and L4 load balancer, are required and the total throughput is 2 through 10 Gbps. It
is also suitable when L7 load balancer, for example, SSL offload, is required.
• The extra large size is suitable when the total throughput required is multiple Gbps for L7
load balancer and VPN.
For additional information, see NSX-T Data Center 3.2 Installation Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/nsxt_32_install.pdf.
5-28 Prerequisites for Deploying the NSX
Edge Node VM
For deploying an NSX Edge node in the VM form factor, the following prerequisites must be
satisfied:
• The supported deployment media are OVA, OVF, ISO, and preboot execution environment
(PXE).
• You can only deploy the NSX Edge node VM on an ESXi hypervisor.
• If using PXE, the password for root and admin users must be encrypted with SHA-512.
• You cannot remove or replace VMware Tools on the NSX Edge node VM.
• All the edge nodes in an edge cluster should use the same NTP service.
For DPDK support, the underlying platform must meet the following requirements:
For additional information, see NSX-T Data Center 3.2 Installation Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/nsxt_32_install.pdf.
For information about the required ports and protocols, see VMware Ports and Protocols at
https://ports.esp.vmware.com/home/NSX-T-Data-Center.
5-29 Deployment Considerations for NSX
Edge Node VM Interfaces
An edge node deployment requires various interface types and assignments:
• In the vSphere virtual switch, you must allocate at least two ports for the NSX Edge node.
• The first interface must be defined for management access (eth0) by using one NSX Edge
VM vNIC.
• The other interfaces are datapath interfaces (fp-ethX) and are dedicated for overlay
tunneling and uplink connections by using the remaining vNICs.
• Other interfaces must be assigned to the datapath process that creates the overlay or
VLAN-based N-VDS.
5-30 Deploying the NSX Edge Node VM with
Multiple N-VDS
An edge node VM with multiple N-VDS has the following characteristics:
• The remaining interfaces are allocated for the datapath module (fp-ethX).
• Multiple N-VDS exist on the edge for overlay and VLAN uplink traffic.
Each N-VDS in the edge node can have its own teaming policy.
When the edge node VM is running on an ESXi host that has been prepared for NSX-T Data
Center and is connected to an N-VDS, the ESXi TEP and the edge node TEP must be in
different subnets.
5-31 Deploying the NSX Edge Node VM with
a Single N-VDS
An edge node VM with single N-VDS has the following characteristics:
• Two TEPs are configured to provide load balancing for the overlay traffic.
The ESXi TEP and the edge node TEP IP addresses are in the same subnet.
A named teaming policy can be used for the VLAN uplink traffic to better balance the traffic across the uplinks and to override the default teaming policy.
This architecture aligns with the existing support for single N-VDS in bare-metal edge nodes.
5-32 Requirements for the NSX Edge Bare-
Metal Node
The NSX Edge node can be installed on bare-metal hardware.
The NSX Edge bare-metal node supports only specific CPU types and has specific NIC requirements.
For a list of requirements, see NSX-T 3.2 Data Center Installation Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/nsxt_32_install.pdf.
If the hardware is not listed, the storage, video adapter, or motherboard components might not
work on the NSX Edge node.
• The only supported deployment media is ISO either with or without the preboot execution
environment (PXE).
• If using PXE, the password for root and admin users must be encrypted with SHA-512.
• All the edge nodes in an edge cluster should use the same NTP service.
5-34 Deployment Methods for NSX Edge
Nodes
The following ways are available to deploy an edge node in the VM form factor:
• Use an ISO file and a PXE server to automate the network configuration.
For the bare-metal form factor, an ISO file is used for the installation:
5-35 Deploying NSX Edge Nodes from the
NSX UI (1)
You can deploy edge transport nodes directly from the NSX UI by navigating to System >
Fabric > Nodes > Edge Transport Nodes.
In the NSX UI, select System > Fabric > Nodes and click the Edge Transport Nodes tab to add
an edge VM.
On the Name and Description page of the Add Edge VM wizard, configure the following
settings:
• Name
• Form factor
You can also select the resource reservation (CPU, memory, and shares) during the NSX Edge
deployment.
5-36 Deploying NSX Edge Nodes from the
NSX UI (2)
The datapath interfaces are defined when adding the edge transport node:
• An NSX Edge node can belong to one overlay transport zone and multiple VLAN transport zones.
• An NSX Edge node must belong to at least one VLAN transport zone to provide the uplink
access.
All nonmanagement links on the edge node are used for the uplinks and tunnels.
In the example, one uplink is used for the tunnel endpoint (overlay network), and another uplink
is used for the external physical network (VLAN).
During the N-VDS creation, the uplinks can be individually assigned per N-VDS. The uplink
profiles (single-nic-uplink-profile in the diagram) determine the number of uplink interfaces.
You can modify the datapath interfaces later by editing the edge transport nodes.
5-37 Deploying NSX Edge Nodes from
vCenter Server
You can deploy NSX Edge nodes in the vSphere Client from an OVF template.
The NSX-T Data Center edge nodes can be installed or deployed using various methods. If you
prefer an interactive edge installation, you can use a UI-based VM management tool, such as the
vSphere Client connected to vCenter Server.
The image shows the option to deploy through vCenter Server or the vSphere Client. A wizard
guides you through the steps so that you can provide the required details.
This process does not register the NSX-T Data Center edge node with the management plane.
Additional command-line operations are required.
5-38 Using PXE to Deploy NSX Edge Nodes
from an ISO File
By using PXE, the networking settings, such as IP address, gateway, network mask, NTP, and
DNS, are automatically configured.
The PXE boot process includes several components, including DHCP, HTTP, and TFTP servers.
This operation automates the installation process. You can preconfigure the deployment with all
the required network settings for the appliance.
The password for root and admin users must be encrypted with SHA-512.
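As an illustration of this SHA-512 requirement, the following Python sketch generates a SHA-512 password hash that can be placed in a PXE preseed or kickstart configuration. The example password is hypothetical, and the crypt module is available only in POSIX Python builds and is deprecated in recent Python releases, so treat this as one possible way to produce the hash.

import crypt

# Hypothetical password; replace with the real root or admin password.
password = "VMware1!VMware1!"

# Produce a SHA-512 crypt hash (the string starting with $6$).
hashed = crypt.crypt(password, crypt.mksalt(crypt.METHOD_SHA512))
print(hashed)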
The preboot execution environment (PXE) boot can also be used to install NSX Edge nodes on
a bare-metal platform.
The PXE method supports only the NSX Edge node deployment. It does not support NSX
Manager deployments.
5-39 Installing NSX Edge on Bare Metal
To install NSX Edge for NSX-T Data Center on bare metal by using an ISO file:
2. Create a bootable disk with the NSX Edge ISO file on it.
Manual installation is also available when you install NSX Edge nodes on a bare-metal server.
After the listed requirements are verified, the installation process should start automatically from
the installation media.
After the bootup and power-on processes are complete, the system requests an IP address
(manual or DHCP).
• Password: default
Further setup procedures include enabling the interfaces and joining the edge node to the
management plane.
5-40 Joining NSX Edge Bare Metal with the
Management Plane
Installing the NSX Edge node by any method other than the NSX UI does not automatically join
NSX Edge to the management plane.
1. Open an SSH session to the NSX Manager appliance and retrieve the SSL thumbprint by
entering get certificate api thumbprint at the command prompt.
2. Open an SSH session to the edge node and run the join management-plane
command.
The manual installation of NSX Edge nodes does not include an automated procedure to ensure
that the management plane sees edge nodes as available resources.
You must join NSX Edge with the management plane so that they can communicate with each
other.
Joining NSX Edge nodes to the management plane ensures that the edge nodes are available
from the management plane as managed nodes.
First, you must verify that you have administration privileges to access NSX Edge nodes and the
NSX UI. Then you can use the CLI to join the NSX Edge nodes to the management plane.
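If you script the join operation instead of copying the thumbprint manually, a sketch such as the following can compute the SHA-256 thumbprint of the NSX Manager API certificate, assuming that the thumbprint expected by the join management-plane command is the SHA-256 digest of that certificate. The manager FQDN is hypothetical.

import hashlib
import ssl

MANAGER = "sa-nsxmgr-01.example.local"   # hypothetical manager FQDN

# Fetch the API certificate presented on port 443 and compute its SHA-256 digest.
pem_cert = ssl.get_server_certificate((MANAGER, 443))
der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
print(hashlib.sha256(der_cert).hexdigest())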
5-41 Verifying the Edge Transport Node
Status
In the NSX UI, navigate to System > Fabric > Nodes > Edge Transport Nodes to verify the
nodes status and configuration state.
In the NSX UI, select System > Fabric > Nodes and click the Edge Transport Nodes tab to view
the status of the edge nodes known by NSX Manager or the management plane.
• Configuration state
• Node status
• N-VDS
• NSX version
Clicking the information icon next to the node status provides additional information about the
reasons for a given status.
Clicking the number of N-VDS gives information about the attached transport zones.
If you want to verify the datapath interfaces, click Edit on the given edge transport node.
5-42 Changing the NSX Edge VM Resource
Reservations
You can change the VM resource reservation for the NSX Edge VMs deployed by using NSX
Manager. Navigate to Actions > Change Edge VM Resource Reservations.
1. Click Change Edge VM Resource Reservations to access the reservations for the particular
NSX Edge node selected.
5-43 Changing Node Settings
Select Change Node Settings from the Actions menu to modify the NSX Edge node settings.
1. Click Change Node Settings to access the settings for the particular NSX Edge node selected.
— Host name/FQDN
— DNS Servers
— NTP Servers
If you enabled SSH, verify that you can use SSH to access the newly deployed edge nodes.
Verify that the NSX Edge nodes can ping their corresponding default gateway.
Verify that the NSX Edge nodes can ping the hypervisor hosts that are in the same network
as the NSX Edge nodes.
Verify that the NSX Edge nodes can reach their configured DNS server and NTP server.
5-45 Creating an NSX Edge Cluster
You can deploy an NSX Edge cluster from the NSX UI by navigating to System > Fabric >
Nodes > Edge Clusters.
You might want to create an NSX Edge cluster for the following reasons:
• Having a multinode cluster of NSX Edge nodes ensures that at least one NSX Edge node is
always available.
• An NSX Edge cluster is required to configure Tier-0 gateway uplinks and to enable stateful services such as NAT, load balancing, and so on.
1. Click +ADD EDGE CLUSTER to start the process for creating an NSX Edge cluster.
An NSX Edge transport node can be added to only one NSX Edge cluster.
After creating the NSX Edge cluster, you can later edit it to add NSX Edge nodes.
5-46 Lab 6: Deploying and Configuring NSX
Edge Nodes
Deploy NSX Edge nodes and configure them as transport nodes:
• Identify the NSX Edge node form factors and sizing options
5-48 Lesson 3: Configuring Tier-0 and Tier-1
Gateways
5-50 Gateway Configuration Tasks
To achieve full network connectivity, you must configure the following components:
Depending on the environment, the order of the configuration tasks can vary. Sometimes you
might want to create the Tier-0 gateway before the Tier-1 gateway.
Before configuring the Tier-1 and Tier-0 gateways, verify the following settings:
The gateways are not automatically connected to each other during the creation process. The
management plane cannot determine which Tier-1 instance should connect to which Tier-0
instance. You must manually connect the gateways after their creation.
After you manually connect these instances, the management plane programs the routes in
these instances to establish connectivity between tiers.
If you are not planning to use any stateful services for the Tier-1 gateway, you should not select
any edge cluster.
If no cluster is selected for the Tier-1 gateway, no SR is created for this gateway (only the DR
component is created). This method saves resources and protects from unintended hairpinning
of traffic over the edge nodes.
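The same task can be automated. The following Python sketch creates a Tier-1 gateway and links it to an existing Tier-0 gateway through the NSX Policy API, deliberately omitting any edge cluster so that only a DR component is created. The manager address, credentials, gateway IDs, and exact payload fields are assumptions to verify against the NSX-T API guide.

import requests

NSX_MANAGER = "https://sa-nsxmgr-01.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")                  # hypothetical credentials

# Create (or update) a Tier-1 gateway and connect it to an existing Tier-0.
# No edge cluster is referenced, so no SR is instantiated for this gateway.
response = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-1s/T1-GW-01",
    auth=AUTH,
    verify=False,  # lab-only shortcut
    json={
        "display_name": "T1-GW-01",
        "tier0_path": "/infra/tier-0s/T0-GW-01",
    },
)
response.raise_for_status()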
5-52 Connecting Segments to the Tier-1
Gateway
Connect segments to the Tier-1 gateway by navigating to Networking > Connectivity >
Segments > NSX.
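Connecting a segment to the Tier-1 gateway can also be expressed through the Policy API. The sketch below attaches a hypothetical overlay segment to T1-GW-01 and defines its gateway address; the IDs, transport zone path, and field names are assumptions to verify against the API guide.

import requests

NSX_MANAGER = "https://sa-nsxmgr-01.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")                  # hypothetical credentials

# Attach an overlay segment to the Tier-1 gateway and set its gateway IP.
response = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/Web-Segment",
    auth=AUTH,
    verify=False,
    json={
        "display_name": "Web-Segment",
        "connectivity_path": "/infra/tier-1s/T1-GW-01",
        "subnets": [{"gateway_address": "172.16.10.1/24"}],
        # Placeholder path; real deployments usually reference the overlay
        # transport zone by its UUID.
        "transport_zone_path": (
            "/infra/sites/default/enforcement-points/default/"
            "transport-zones/Overlay-TZ"
        ),
    },
)
response.raise_for_status()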
5-53 Using Network Topology to Validate the
Tier-1 Gateway Configuration
The topology diagram shows the segments connected to T1-GW-01 and its subnets.
Pointing to an entity, for example, a VM, a segment, or a Tier-1 gateway, highlights the path to the root.
If the zoom level is less than 1, the entity names are not displayed in the topology diagram.
Clicking an entity, for example, a VM, a segment, or a gateway, opens a side panel with more details for that entity type.
5-54 Testing East-West Connectivity
VMs on various subnets (segments) attached to the Tier-1 gateway can reach each other.
The Tier-1 gateway is created and the interfaces for various logical networks are configured.
Now you can verify the east-west connectivity in the tenant environment.
5-55 Creating the Uplink Segments
Create the uplink segments that are associated with the Tier-0 gateway uplinks.
Each Tier-0 gateway can have multiple uplink connections, depending on the requirements and
the actual configuration.
In the example, two different segments are configured to connect the Tier-0 gateway uplink
interfaces.
5-56 Creating the Tier-0 Gateway (1)
Create a Tier-0 gateway by navigating to Networking > Connectivity > Tier-0 Gateways.
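A hedged API equivalent of this UI step is sketched below: it creates a Tier-0 gateway in active-active HA mode through the Policy API. The gateway ID and HA mode value are examples; interfaces, the edge cluster, and routing are configured in later steps.

import requests

NSX_MANAGER = "https://sa-nsxmgr-01.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")                  # hypothetical credentials

# Create a Tier-0 gateway object; uplinks and BGP are configured separately.
response = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/T0-GW-01",
    auth=AUTH,
    verify=False,
    json={
        "display_name": "T0-GW-01",
        "ha_mode": "ACTIVE_ACTIVE",
    },
)
response.raise_for_status()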
5-58 Configuring Routing
Configure static or dynamic routing to the remote networks by editing the Tier-0 gateway.
You can configure a static or dynamic route to remote networks. BGP is the dynamic routing
protocol enabled by default.
You can have the BGP and OSPF routing protocols active at the same time, but for simplicity, you might want to disable BGP before enabling the OSPF dynamic routing protocol.
Edit the Tier-1 gateway to connect it to the desired Tier-0 gateway to provide north-south
routing and access to external networks.
5-60 Enabling Route Advertisement in the
Tier-1 Gateway
Enable route advertisement on the Tier-1 gateway for tenant networks to be propagated to the
Tier-0 gateway.
Using route advertisement ensures that the networks defined for tenant segments are available
for the connected Tier-0 gateway, which can advertise them with the preferred dynamic routing
protocol.
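The following sketch enables route advertisement on the Tier-1 gateway through the Policy API. The advertisement type names shown (for connected and static routes) are assumptions drawn from the Policy API schema and should be checked against the documentation for your release.

import requests

NSX_MANAGER = "https://sa-nsxmgr-01.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")                  # hypothetical credentials

# Advertise connected segments and static routes toward the linked Tier-0.
response = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-1s/T1-GW-01",
    auth=AUTH,
    verify=False,
    json={
        "route_advertisement_types": [
            "TIER1_CONNECTED",
            "TIER1_STATIC_ROUTES",
        ]
    },
)
response.raise_for_status()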
5-61 Configuring Route Redistribution on the
Tier-0 Gateway
Configure route redistribution on the Tier-0 gateway to redistribute learned routes to the
upstream routers.
Navigate to Networking > Connectivity > Tier-0 Gateway and edit the Tier-0 gateway to
configure route redistribution.
If only static routes are used in your environment, you do not have to configure route
redistribution in Tier-0 gateways.
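As a sketch of the equivalent API call, the example below enables redistribution of Tier-1 connected routes into BGP under the Tier-0 gateway locale services. The locale services ID ("default"), the route_redistribution_config structure, and the redistribution type names are assumptions to confirm in the NSX-T Policy API reference.

import requests

NSX_MANAGER = "https://sa-nsxmgr-01.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")                  # hypothetical credentials

# Redistribute Tier-1 connected networks into BGP on the Tier-0 gateway.
response = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/T0-GW-01"
    "/locale-services/default",
    auth=AUTH,
    verify=False,
    json={
        "route_redistribution_config": {
            "bgp_enabled": True,
            "redistribution_rules": [
                {
                    "name": "redistribute-t1-connected",
                    "route_redistribution_types": ["TIER1_CONNECTED"],
                }
            ],
        }
    },
)
response.raise_for_status()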
5-62 Using Network Topology to Validate the
Tier-0 Gateway Configuration
Network Topology shows the Tier-0 gateway connected to the Tier-1 gateway and their
network subnets.
The topology diagram shows the IP addresses, such as uplink IPs, router link IPs, and interface IPs, between the NSX objects, based on the zoom level.
If the zoom level is less than 1, then the entity names do not appear in the topology diagram.
5-63 Testing North-South Connectivity
VMs on the tenant networks can communicate with external workloads.
In the diagram and the command output, the sa-web-01 (172.16.10.11) VM can ping the Tier-0
gateway (192.168.100.2) and the upstream physical router (192.168.100.1), assuming that routing
is configured on the physical router.
5-64 Lab 7: Configuring the Tier-1 Gateway
Create and configure a Tier-1 gateway for east-west L3 connectivity:
5-66 Lesson 4: Configuring Static and Dynamic
Routing
5-68 Static and Dynamic Routing
Static routing:
— Network administrators must design and account for all network failure scenarios.
Dynamic routing:
• Dynamic route configuration enables gateways to exchange information about the network.
• Routing protocols are used to dynamically obtain routes to access the networks.
• Interior Gateway Protocols (IGPs): These protocols are used for routing in a single routing
domain under the administration of a single organization. Some IGP routing protocols are
RIP, EIGRP, OSPF, and IS-IS.
• Exterior Gateway Protocols (EGPs): These protocols are used to establish network
connectivity between autonomous systems (AS) run by different organizations. BGP
protocol is an example of an EGP.
NSX-T Data Center implements Border Gateway Protocol (BGP) and Open Shortest Path First
(OSPF).
5-69 Tier-0 Gateway Routing Configurations
(1)
The Tier-0 gateway supports the following routing configurations:
In the diagram, external BGP is used to establish neighbor relationships between the Tier-0 gateway and upstream physical gateways in different autonomous systems. Network prefixes are exchanged between the BGP peers.
The Tier-0 gateway BGP topology should be configured with redundancy and symmetry
between the Tier-0 gateways and the external peers.
5-70 Tier-0 Gateway Routing Configurations
(2)
Tier-0 gateways also support dynamic routing configurations using OSPF:
In the diagram OSPF establishes adjacencies between Tier-0 and upstream physical gateways.
OSPF adjacencies have the following characteristics:
• OSPF is a link-state routing protocol that establishes and maintains neighbor relationships for exchanging routing updates with other routers.
• Two OSPF routers are neighbors if they are members of the same subnet and share the
same area ID, subnet mask, timers, and authentication.
• Setting a password is optional. Authentication methods can be MD5 hashing or clear text.
• Broadcast networks support multiple routers connected to the same network. A single
broadcast packet can reach all the attached routers. The Ethernet protocol is an example of
a broadcast network.
• Point-to-Point networks are networks that only join a single pair of routers. They are
typically seen on WAN links.
5-72 Configuring Static Routes on a Tier-0
Gateway (2)
You can add one or multiple static routes and configure the next hops.
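The sketch below adds a default static route to the Tier-0 gateway through the Policy API, pointing at a hypothetical upstream router address. The route ID, network, next-hop fields, and admin distance value are examples to validate against the API guide.

import requests

NSX_MANAGER = "https://sa-nsxmgr-01.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")                  # hypothetical credentials

# Add a default route on the Tier-0 gateway toward the physical next hop.
response = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/T0-GW-01"
    "/static-routes/default-route",
    auth=AUTH,
    verify=False,
    json={
        "network": "0.0.0.0/0",
        "next_hops": [{"ip_address": "192.168.100.1", "admin_distance": 1}],
    },
)
response.raise_for_status()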
5-73 Configuring Dynamic Routing with BGP
on Tier-0 Gateways (1)
To configure dynamic routing, you can configure BGP in the BGP section of the Tier-0 gateway.
BGP is enabled by default on Tier-0 gateways. You must set the local AS and configure the
BGP neighbors.
You can also configure the following advanced BGP settings:
• Inter-SR routing, so that the service router (SR) components of the same Tier-0 gateway exchange routing information through iBGP.
• Multipath Relax to enable ECMP across different neighboring ASNs if all other BGP
attributes are equal.
— IP prefix lists to define the networks with subnet masks that are permitted or denied
based on a match condition.
— Route maps include a sequence of IP prefix lists or community lists with an associated
action to filter or modify the routes advertised. When a match occurs, the gateway
performs the action and stops scanning the rest of the route map.
5-74 Configuring Dynamic Routing with BGP
on Tier-0 Gateways (2)
You can configure BGP neighbors by adding their AS number, IP addresses, and source
addresses.
• Enable Allowas-in to prevent the BGP process from dropping received routes that contain the same AS number as the one defined on the Tier-0 gateway. Do not enable this option unless it is required, because it overrides the default BGP loop-prevention behavior.
• Graceful Restart can eliminate or reduce the disruption of traffic associated with routes
learned from a BGP neighbor when a control plane failover occurs. The default mode is
Helper Only.
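Building on the neighbor settings described above, the following sketch configures the global BGP settings and one BGP neighbor on the Tier-0 gateway through the Policy API. The locale services ID, AS numbers, neighbor address, and field names are assumptions modeled on the Policy API schema; verify them against the API guide before use.

import requests

NSX_MANAGER = "https://sa-nsxmgr-01.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")                  # hypothetical credentials
T0_BGP = (
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/T0-GW-01"
    "/locale-services/default/bgp"
)

# Set the local AS and keep ECMP and inter-SR iBGP enabled.
requests.patch(
    T0_BGP,
    auth=AUTH,
    verify=False,
    json={
        "enabled": True,
        "local_as_num": "65001",
        "ecmp": True,
        "inter_sr_ibgp": True,
    },
).raise_for_status()

# Define an eBGP neighbor on the upstream physical router.
requests.patch(
    f"{T0_BGP}/neighbors/tor-a",
    auth=AUTH,
    verify=False,
    json={"neighbor_address": "192.168.100.1", "remote_as_num": "65000"},
).raise_for_status()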
5-75 Verifying the BGP Configuration of the
Tier-0 Gateways
You verify the BGP Connectivity Status for each neighbor.
You can also use the get bgp neighbor summary nsxcli command to verify that the
BGP neighbor state is established.
5-76 BGP Route Aggregation
Route aggregation is a BGP feature that allows the aggregation of specific routes into one route:
5-77 Configuring Route Aggregation with BGP
Route aggregation can be configured in the BGP section of the Tier-0 gateway.
5-78 Configuring Dynamic Routing with OSPF
on Tier-0 Gateways (1)
OSPF is not enabled by default when a Tier-0 gateway is created. It must be enabled before you set any OSPF parameters.
Enable OSPF Graceful Restart to keep sending traffic if a control plane failover occurs in a ToR
router.
5-79 Configuring Dynamic Routing with OSPF
on Tier-0 Gateways (2)
You configure the area for the Tier-0 gateway.
An OSPF network is divided into areas that are the logical groupings of hosts and networks:
• OSPF routers in an area have the same detailed topology for only their own area.
• An area border router (ABR) is the OSPF boundary between two areas.
• Backbone Area: Must be designed while considering redundancy and cannot be partitioned.
This area has knowledge of the entire topology. Inter-area traffic must flow through this area.
• Not-So-Stubby Area (NSSA): Blocks external routes received from other areas but can import external type-2 routes from other autonomous systems.
Stub Areas, Totally Stubby Areas, and Virtual links to connect areas to the Backbone Area
through nonbackbone areas are not supported in NSX-T Data Center.
• Area ID must be either a single number (0) or use dotted format (0.0.0.0).
• Authentication is optional and can be configured using MD5 (hashing) or Password (plain text).
5-80 Configuring Dynamic Routing with OSPF
on Tier-0 Gateways (3)
Configure the Tier-0 gateway interfaces that will form OSPF adjacencies.
You must follow these criteria when configuring OSPF in the interfaces:
• You can configure a maximum of two uplink interfaces in OSPF per edge node.
• The two interfaces on the NSX Edge node must be in the same area.
• The BFD Hello interval supports a minimum value of 500 milliseconds for an interface
configured in an edge VM and 50 milliseconds for an interface configured on bare-metal
edges. Dead interval minimum values are 1,500 milliseconds for edge VM and 150
milliseconds for bare-metal edges.
5-81 Verifying OSPF Configuration of the Tier-
0 Gateways
You verify the OSPF Neighbors State.
You can also use the get ospf neighbor nsxcli command to verify that the OSPF
adjacencies are established.
5-82 OSPF Route Summarization
You must use route summarization in large-scale environments for the following reasons:
• Ease troubleshooting
Link State Advertisements (LSAs) are the messaging system used in OSPF routing protocol:
• LSAs are stored in a local Link State Database (LSDB) in each OSPF router.
• Network 10.1.1.0/24 can be advertised as a summary route for both /25 Tier-1 gateway
segments.
• The LSA for the summary route is advertised as a type 5 LSA. These LSA types are used to advertise routes redistributed from other routing protocols, including static routes. The summarized route is advertised as an OSPF external route of type-2 (N E2).
5-83 Configuring Route Summarization with
OSPF
Set Route Summarization in the OSPF section to define the summarized networks.
5-85 Lab 9: Configuring the Tier-0 Gateway
with BGP
Create a Tier-0 gateway and use BGP to configure the north-south end-to-end connectivity:
5-87 Lesson 5: ECMP and High Availability
5-89 About Equal-Cost Multipath Routing
Equal-cost multipath (ECMP) routing has several features and functions:
ECMP hashing is based on a 5-tuple algorithm that uses source IP address, destination IP
address, source port, destination port, and IP protocol. This method allows a better distribution
of the traffic across all the available paths.
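To make the 5-tuple idea concrete, the following Python sketch shows how a hash of the five header fields can select one of several equal-cost paths so that a given flow always uses the same path. This is a conceptual illustration only, not the hashing algorithm used by the NSX data path.

import hashlib

def select_path(src_ip, dst_ip, src_port, dst_port, protocol, paths):
    """Pick an equal-cost path deterministically from the flow's 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}".encode()
    index = int(hashlib.sha256(key).hexdigest(), 16) % len(paths)
    return paths[index]

uplinks = ["SR1-uplink", "SR2-uplink"]
# Packets of the same flow always hash to the same uplink.
print(select_path("172.16.10.11", "203.0.113.10", 49152, 443, "tcp", uplinks))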
5-90 Enabling ECMP in BGP
ECMP is enabled by default on Tier-0 gateways when Border Gateway Protocol (BGP) is
enabled. ECMP can be disabled in the BGP section on the Tier-0 Gateway configuration page.
When configuring ECMP in OSPF, a maximum of two uplink interfaces can be enabled per edge
node.
5-92 About High Availability
You can configure high availability on the gateways for redundancy.
• Active-active:
— All the edge nodes are active and run the gateway services simultaneously.
— The workload is distributed between all nodes to prevent overloading one single node.
• Active-standby:
— One edge node is active, and one edge node remains on standby.
— The standby node takes over when the active node becomes unavailable.
Grouping edge nodes offers the benefits of high availability for edge node services. The service
router runs on an edge node and has two modes of operation: active-active or active-standby.
• Logical routing is active on more than one NSX Edge node at a time.
5-93 Active-Active HA Mode
The active-active mode is the default high availability mode for Tier-0 gateways and provides the following benefit:
• Logical routing services are active on more than one edge node at a time.
Active-active is a high availability mode where a gateway is hosted on more than one edge
node at a time:
• For northbound traffic, the DR component sends traffic across the different active SR
components.
• When one node fails, traffic is not disrupted but bandwidth is constrained.
• A gateway can span up to eight edge nodes to provide load balancing and redundancy.
By default, Tier-0 gateways are configured with this mode and do not enable stateful services
such as NAT and firewall. Only routing and stateless services, such as reflexive NAT, are
enabled.
In the active-active mode, all the SRs process the northbound and southbound traffic.
• In the diagram, DR sends traffic to both active SRs with IPs 169.254.0.2 and 169.254.0.3 in
the Transit Segment.
5-95 Active-Active Topology with OSPF
Active-active HA mode topologies with OSPF have the following characteristics:
• OSPF adjacencies with physical routers are established in all SRs using the same cost
• ECMP can be leveraged with the physical routers but a maximum of two uplink interfaces
can be enabled for OSPF per edge node.
• An OSPF cost of 20 is announced by all Tier-0 gateways, influencing the routing decision on
physical routers with equal cost paths.
5-96 Active-Standby HA Mode
Active-standby is a high availability mode where a gateway is operational on only a single edge
node at a time.
The following centralized stateful services are provided in the active-standby mode:
• SNAT/DNAT
• Edge firewall
• VPN
The active-standby mode is supported on both Tier-1 and Tier-0 service routers (SRs).
In the active-standby mode, an elected active member processes all traffic. If the active member
fails, a new member is elected to be active:
• Tier-0:
— The active-standby SRs have different northbound IP addresses and have dynamic
routing sessions established on both links.
— The gateway state is synchronized to the standby SR, but the standby SR does not actively forward traffic. Both SRs maintain dynamic routing peering with the physical gateway.
• Tier-1:
5-97 Active-Standby Topology with BGP
Active-standby HA mode topologies with BGP have the following characteristics:
• The standby SR performs AS path prepending and does not forward traffic to the physical
routers.
• AS path prepending influences BGP peer path selection so the standby SR is less preferred
to receive any traffic.
• BGP peering over the standby path ensures optimal BGP route convergence time during
failover.
5-98 Active-Standby Topology with OSPF
The standby Tier-0 uses a high OSPF cost to influence route selection on physical routers.
• The standby Tier-0 uses the OSPF cost to influence the routing decisions on the physical
routers.
• The OSPF cost sent by the standby Tier-0 is always 65534, a hard-coded value that cannot
be adjusted.
• The route with the lowest value for cost is chosen as the best southbound route.
• The DR sends traffic to the active SR only, using the active SR as the northbound route.
5-99 Failover Detection Mechanisms
The failover process uses the following mechanisms to check the connectivity between tiers:
5-100 About BFD
High availability uses BFD to detect forwarding path failures.
BFD provides a low-overhead detection of faults even on physical media that do not support
failure detection of any kind, such as Ethernet.
BFD is a network protocol used to detect faults between two forwarding engines connected by
a link. Failures are detected per logical router. The conditions used to declare an edge node as
down are the same in active-active and active-standby high availability modes.
To ensure uninterrupted routing of network traffic, the NSX Edge nodes exchange keepalive
messages, which are BFD sessions running between the nodes. The edge nodes in an edge
cluster exchange BFD keepalive on the management and tunnel interfaces. When the standby
Tier-0 gateway fails to receive keepalives on both management and tunnel interfaces, it
announces itself as active.
5-101 Failover Scenario with BFD
If a standby gateway fails to receive BFD keepalives on both management and tunnel interfaces,
the gateway becomes active.
The BFD protocol provides fast detection of failure for forwarding paths or forwarding engines,
improving convergence. Edge VMs support BFD with a minimum BFD timer of 500 milliseconds
with three retries, providing 1.5 seconds failure detection time. Bare-metal edges support BFD
with a minimum BFD timer of 50 milliseconds with three retries, which implies a 150 milliseconds
failure-detection time.
5-102 Failover Scenario with Dynamic Routing
Dynamic routing peering sessions are established on the uplinks with physical routers. If an active gateway loses all its routing neighbors and a standby gateway is available, the active gateway steps down to standby, and the standby gateway is promoted to become the new active gateway.
If an active gateway loses all its dynamic routing peerings and a standby gateway is configured,
failover occurs. An active SR on an edge node is declared down when all the dynamic routing
sessions on the peer SR are down.
BGP or OSPF is configured on the uplink between each NSX Edge node and the exterior
physical gateways.
The default BGP timers are a keepalive interval of 60 seconds and the minimum time between
advertisements is 30 seconds. The default OSPF timers are a Hello interval of 10 seconds and a
Dead interval of 40 seconds.
If all overlay tunnels to the compute hypervisors are down, the active edge node does not
receive tunnel traffic from compute hypervisors. Then the standby edge node takes over.
5-103 Failover Modes
You can select different failover modes in Active Standby HA Mode:
• Preemptive: If the preferred node fails and recovers, it takes over its peer and becomes the
active node. The peer changes its state to standby.
• Non Preemptive: If the preferred node fails and recovers, it checks whether its peer is
active. If the peer is active, the preferred node stays in the standby mode.
Preemptive and non-preemptive modes are used in a failback scenario after a failover occurs.
The failback happens when the node that failed becomes available again:
• If non-preemptive mode is configured, nothing happens.
• If preemptive is configured, the original active (preferred) node takes over again.
5-105 Lesson 6: Logical Routing Packet Walk
The default gateway for the 10.1.1.0/24 segment is on the Tier-0 DR (T0 DR) component of the hypervisor where the VM resides.
5-108 Single-Tier Routing: Egress to Physical
Network (2)
2. The gateway (T0 DR) checks its forwarding table. Because a specific route does not exist
for the 192.168.10.0/24 network, the packet is sent to the default 169.254.0.2 gateway,
which is the T0 SR component on the edge node.
The packet is sent to the default 169.254.0.2 gateway over the Internal Transit Subnet.
5-109 Single-Tier Routing: Egress to Physical
Network (3)
3. To send the packet from the hypervisor to the edge node, the packet is encapsulated with
a Geneve header.
The source host (TEP 172.16.215.67) encapsulates the packet with a Geneve header to send it to
the remote host (TEP 172.16.215.124). The original packet is intact.
5-110 Single-Tier Routing: Egress to Physical
Network (4)
4. The encapsulated packet is sent to the edge node across the overlay tunnel.
5-111 Single-Tier Routing: Egress to Physical
Network (5)
5. The edge node decapsulates the packet and sends it to its SR component. The gateway
(T0 SR) routing table shows a route for the 192.168.10.0/24 network over the uplink
segment.
5-112 Single-Tier Routing: Egress to Physical
Network (6)
6. The edge node sends the packet to its upstream physical gateway, which routes the packet
to its destination 192.168.10.1.
5-113 Single-Tier Routing: Ingress from Physical
Network (7)
7. For the return packet, the source VM 192.168.10.1 sends the packet to its default gateway,
which routes the packet to the edge node.
5-114 Single-Tier Routing: Ingress from Physical
Network (8)
8. The SR and the DR components on an edge node share their routing table. A directly connected route exists for the 10.1.1.0/24 network over Segment 1. The packet is sent to the remote host by using the T0 DR interface.
In the edge node, the SR and DR components share their routing table. This method removes
the extra step of using the Internal Transit Subnet to route from SR to DR.
The Internal Transit Subnet is used when routing from a DR component from a hypervisor to an
SR component in an edge node.
Because the routing table is shared, when the T0 SR component in the edge receives the
packet, it sends the packet to the remote host (Hypervisor with TEP 172.16.215.67) through the
T0 DR interface.
5-115 Single-Tier Routing: Ingress from Physical
Network (9)
9. To send the packet from the edge node to the hypervisor, the packet is encapsulated with
a Geneve header.
5-116 Single-Tier Routing: Ingress from Physical
Network (10)
10. The encapsulated packet is sent across the overlay tunnel.
5-117 Single-Tier Routing: Ingress from Physical
Network (11)
11. The receiving host decapsulates the packet and routes it to its destination (VM 10.1.1.10).
5-118 Multitier Routing: Egress to Physical
Network (1)
A packet needs to be sent from the source VM 10.1.1.10 to the destination VM 192.168.10.1:
In the example, the Tier-1 gateway has no service configured and therefore has no SR
components.
The default gateway for the 10.1.1.0/24 segment is on the Tier-1 DR component of the hypervisor where the VM resides.
5-119 Multitier Routing: Egress to Physical
Network (2)
2. The gateway (T1 DR) checks its forwarding table to make a routing decision. Because no
specific route exists for the 192.168.10.0/24 network, the packet is sent to the default
100.64.16.0 gateway, which is the DR instance of Tier-0 on the same hypervisor.
5-120 Multitier Routing: Egress to Physical
Network (3)
3. The packet is sent to the T0 DR instance on the same hypervisor through T0-T1 Transit
Subnet.
5-121 Multitier Routing: Egress to Physical
Network (4)
4. The gateway (T0 DR) checks its forwarding table to make a routing decision. The packet is
sent to the default 169.254.0.2 gateway, which is the T0 SR component on the edge node.
The packet is sent to the default 169.254.0.2 gateway over the Transit segment. 169.254.0.2 is
an interface of the Tier-0 SR component that attaches to the Internal Transit network.
5-122 Multitier Routing: Egress to Physical
Network (5)
5. To send the packet from the hypervisor to the edge node, the packet is encapsulated with
a Geneve header.
The source host (TEP 172.16.215.67) encapsulates the packet with a Geneve header to send it to
the edge node (TEP 172.16.215.124). The original packet is intact.
5-123 Multitier Routing: Egress to Physical
Network (6)
6. The encapsulated packet is sent to the edge node across the overlay tunnel.
5-124 Multitier Routing: Egress to Physical
Network (7)
7. The edge node decapsulates the packet and sends it to its T0 SR instance.
5-125 Multitier Routing: Egress to Physical
Network (8)
8. The gateway (T0 SR) routing table shows a route for the 192.168.10.0/24 network over the
uplink segment.
5-126 Multitier Routing: Egress to Physical
Network (9)
9. The edge node sends the packet to its upstream physical gateway, which routes the packet
to its destination, 192.168.10.1.
5-127 Multitier Routing: Ingress from Physical
Network (10)
10. For the return packet, the source VM 192.168.10.1 sends the packet to its default gateway,
which routes the packet to the edge node.
5-128 Multitier Routing: Ingress from Physical
Network (11)
11. The SR and the DR components of the Tier-0 gateway share their routing table because
they are both on the edge node. The routing decision is made to send the packet to the
Tier-1 DR instance in the same edge node.
In the edge node, the SR and DR components share their routing table. This method removes
the extra step of using the Internal Transit Subnet to route from SR to DR.
The Internal Transit Subnet is used when routing from a DR component from a hypervisor to an
SR component in an edge node.
As the routing table is shared, when the Tier-0 SR component receives the packet, the packet is
sent to the Tier-1 DR component of the edge node through the DR interface.
5-129 Multitier Routing: Ingress from Physical
Network (12)
12. The packet is sent to the T1 DR instance on the edge node through T0-T1 Transit Subnet.
5-130 Multitier Routing: Ingress from Physical
Network (13)
13. The gateway (T1 DR) checks its forwarding table to make a routing decision. A directly connected route exists for the 10.1.1.0/24 network over Segment 1. The packet is sent to the remote host.
5-131 Multitier Routing: Ingress from Physical
Network (14)
14. To send the packet from the edge node to the hypervisor, the packet is encapsulated with
a Geneve header.
The source host (TEP 172.16.215.124) encapsulates the packet with a Geneve header to send it
to the remote host (TEP 172.16.215.67). The original packet is intact.
5-132 Multitier Routing: Ingress from Physical
Network (15)
15. The encapsulated packet is sent to the destination hypervisor across the overlay tunnel.
5-133 Multitier Routing: Ingress from Physical
Network (16)
16. The receiving host decapsulates the packet and routes it to its destination (VM 10.1.1.10).
5-135 Lesson 7: VRF Lite
5-137 About VRF Lite
VRF Lite has the following characteristics:
• Multiple routing instances can be configured without deploying additional Tier-0 gateways
and edge nodes.
• Logical routing isolation is provided in NSX and to external peers that are compatible with
the VRF Lite technology.
Virtual Routing and Forwarding (VRF) allows the coexistence of multiple routing instances in one
routing device. Independent routing and forwarding tables are maintained for each instance.
With VRF Lite, separating tenants and applications does not require additional Tier-0 gateways and edge nodes.
VRF Lite provides logical routing isolation in NSX and spans it to external peer devices that
support this technology.
VRF Lite differs from other VRF implementations because it does not rely on MPLS and MP-
BGP protocols running in the physical network.
• Multiprotocol Label Switching (MPLS): This protocol forwards traffic based on labels rather than network addresses such as IP addresses. The labels identify the paths between the endpoints in VRFs.
• Multiprotocol Border Gateway Protocol (MP-BGP): This BGP protocol extension is used to
propagate the VRF routing information across MPLS network devices.
The following services are not supported on a VRF gateway:
• VPN
• Load balancer
• OSPF routing
A Tier-0 gateway must be used to deploy VRF gateways. It is the default Tier-0 gateway and is
the parent gateway of the VRF gateways.
The Tier-0 gateway, used as the default Tier-0 gateway, can be an existing Tier-0 gateway with
connected Tier-1 gateways.
You can have more than one Tier-0 gateway with VRF gateways.
VLAN tagging is used to separate the VRFs in the uplink segment that connects with the
external devices.
These limitations apply only to the VRF gateway. You can connect a Tier-1 gateway configured
with a load balancer or a VPN to a VRF gateway and that is fully supported.
5-139 Use Cases for VRF Lite
VRF Lite can be used to enable the following features:
• Run multiple routing instances in the same gateway to optimize existing resources.
Without VRF Lite, the only solution for customers that require a separate network routing instance for each tenant is to deploy multiple Tier-0 gateways. However, these deployments can create scalability issues because only a single Tier-0 gateway can be deployed per edge node, particularly for deployments based on bare-metal edges.
VRF Lite helps network administrators to deal with the overlapping of network ranges in the
same routing domain between business units or after a merger.
It also allows existing VRF Lite deployments in the physical network infrastructure to be
extended to NSX-T Data Center.
5-140 VRF Lite Topologies
VRF Lite can be deployed in single-tier and multitier topologies.
• A trunk is used to interconnect the different VRFs with the Data Center Gateway.
• The Data Center Gateway and the underlying infrastructure like vSphere Distributed Port
Groups have to support trunking.
5-141 VRF Lite Gateway Interfaces
The following types of interfaces are used with VRF gateways:
• The Logical Router (LR) trunk port connects the parent Tier-0 gateway to upstream
physical devices.
• The VRF Uplink interface is internally connected to the LR trunk port of the parent Tier-0
gateway.
• The Intratier Transit Link is the internal link between the service router (SR) and distributed
router (DR) of a VRF gateway.
• The Downlink interfaces connect VRF gateways to segments with attached workloads.
The LR trunk port is a network interface while the VRF uplink port can be seen as a subinterface
with a specific VLAN ID.
The LR trunk port is internally created in the parent Tier-0 gateway and is the only port
connected to the uplink trunk segment.
The other interfaces are the same type as the interfaces used in the standard Tier-0 and Tier-1
gateways.
• BGP protocol instance in each VRF provides the control plane functionality.
A dedicated BGP instance runs in every VRF. You do not need to use the extensions in the MP-
BGP protocol to exchange the VRF routing information.
BGP is the control plane because it dynamically propagates and updates routing information to
all VRF peers.
Each VLAN is mapped to a VRF and only transports traffic for that particular VRF.
5-143 Configuring VRF Lite
Follow these steps to configure VRF Lite.
The deployment of a Tier-0 gateway is optional if an existing Tier-0 gateway is used instead as
the default Tier-0 for the VRF gateway.
VRF gateways inherit the following configuration options from the default Tier-0 gateway:
• HA mode
• Edge cluster
• BGP AS number
You do not need to connect a Tier-1 gateway to the VRF gateways. Tenants can be directly
connected to VRF gateways.
5-144 Deploying the Default Tier-0 Gateway
To deploy and configure the default Tier-0 gateway as a standard Tier-0 gateway:
1. Navigate to Networking > Connectivity > Tier-0 Gateways in the NSX UI.
You configure the following parameters to deploy the default Tier-0 gateway:
• HA mode
• Edge cluster
• Uplink interfaces
5-145 Adding Uplink Interfaces to the Default
Tier-0 Gateway
Connect the default Tier-0 uplink interfaces to the uplink segments in the Set Interfaces window.
Uplink interfaces are required to deploy the default Tier-0 gateway in the edge nodes.
5-146 Configuring BGP for the Default Tier-0
Gateway
Configure BGP parameters to use dynamic routing with external routers in the BGP
configuration section.
• Local AS
• Graceful restart
• Multipath relax
5-147 Adding the Uplink Trunk Segment for the
VRF Gateway
To configure the trunk segment for connecting the VRF gateway uplinks:
1. Navigate to Networking > Connectivity > Segments > NSX in the NSX UI.
A segment is configured as a trunk when more than one VLAN is configured. A range of VLANs
can also be specified (VLAN X-Y).
Uplink trunk segments specify which VLANs are allowed but do not add 802.1Q VLAN tagging.
Tags are added in the uplink interface of VRF gateways.
You can configure a dedicated uplink trunk segment for each VRF uplink if the trunk is
configured as a single VLAN range (X-X). As a best practice, you should connect the VRF
uplinks to the same uplink trunk segment. This method reduces the amount of resources
required (segments, logical switch ports, and logical router ports).
5-148 Deploying the VRF Gateway
To deploy and configure the VRF gateway:
1. Navigate to Networking > Connectivity > Tier-0 Gateways in the NSX UI.
Edge cluster and HA mode configuration values are automatically taken from the default Tier-0
gateway.
You do not need to configure VRF settings for VRF Lite. These settings are used for Ethernet
VPN (EVPN).
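The VRF gateway deployment described above can also be performed through the Policy API, where a VRF gateway is a Tier-0 object that references its parent. The sketch below assumes a parent Tier-0 gateway with the ID T0-GW-01 and creates a VRF gateway named vrf-red; the vrf_config structure should be confirmed against the API reference for your NSX-T version.

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials

    # A VRF gateway is modeled as a Tier-0 gateway whose vrf_config points to the parent Tier-0.
    # Edge cluster, HA mode, and BGP AS number are inherited from the parent.
    vrf_gateway = {
        "display_name": "VRF-Red",
        "vrf_config": {"tier0_path": "/infra/tier-0s/T0-GW-01"},   # assumed parent gateway ID
    }
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/vrf-red",
        auth=AUTH, json=vrf_gateway, verify=False,
    )
    resp.raise_for_status()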
5-149 Adding Uplink Interfaces to the VRF
Gateway
In the Set Interfaces window, connect the VRF gateway uplink interfaces to the uplink trunk
segment.
• Access VLAN ID is required and must belong to the range specified for the trunk segment.
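The uplink interface of the VRF gateway can be added with a similar Policy API call. In the sketch below, the segment path, edge node path, IP address, and access VLAN ID 100 are illustrative values, and the access_vlan_id field name is an assumption to verify against the API guide.

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials

    # External interface of the VRF gateway, attached to the uplink trunk segment
    # and tagged with the access VLAN for this VRF.
    uplink_if = {
        "type": "EXTERNAL",
        "segment_path": "/infra/segments/vrf-uplink-trunk",
        "subnets": [{"ip_addresses": ["10.10.100.2"], "prefix_len": 24}],
        "access_vlan_id": 100,                 # assumed field; must fall inside the trunk VLAN range
        "edge_path": "/infra/sites/default/enforcement-points/default/"
                     "edge-clusters/edge-cluster-1/edge-nodes/edge-node-1",   # placeholder edge node path
    }
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/vrf-red/locale-services/default/interfaces/uplink-red-1",
        auth=AUTH, json=uplink_if, verify=False,
    )
    resp.raise_for_status()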
5-150 Configuring the BGP for the VRF
Gateway
Set up the BGP parameters related to the VRF.
The following parameters are inherited from the default Tier-0 gateway and cannot be modified
at the VRF level:
• Local AS
• Graceful restart
• Multipath relax
Route aggregation and BGP neighbors are local configurations per VRF.
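Because BGP neighbors are local to each VRF, they are created under the VRF gateway itself. A minimal Policy API sketch, assuming a top-of-rack peer at 10.10.100.1 in AS 65000 and the hypothetical NSX Manager used in the earlier examples:

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials

    # BGP neighbor defined under the VRF gateway (vrf-red); Local AS, graceful restart,
    # and multipath relax are inherited from the parent Tier-0 and are not set here.
    neighbor = {
        "neighbor_address": "10.10.100.1",     # assumed ToR address reachable in this VRF's VLAN
        "remote_as_num": "65000",
    }
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/vrf-red/locale-services/default/bgp/neighbors/tor-red-a",
        auth=AUTH, json=neighbor, verify=False,
    )
    resp.raise_for_status()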
5-151 Connecting a Tier-1 Gateway to the VRF
Gateway
To connect the Tier-1 gateway to the VRF gateway:
1. Navigate to Networking > Connectivity > Tier-1 Gateways in the NSX UI.
2. Select a Tier-1 gateway, click the options menu (three vertical dots) next to it, and select Edit.
3. From the Linked Tier-0 Gateway drop-down menu, select the VRF gateway.
All Tier-0 gateways and VRF gateways are listed in the Linked Tier-0 Gateway drop-down
menu.
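The same linking can be done through the Policy API by patching the Tier-1 gateway with the policy path of the VRF gateway. A minimal sketch, assuming a Tier-1 gateway with the ID t1-red:

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials

    # Point the Tier-1 gateway at the VRF gateway instead of a standard Tier-0.
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-1s/t1-red",
        auth=AUTH, json={"tier0_path": "/infra/tier-0s/vrf-red"}, verify=False,
    )
    resp.raise_for_status()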
5-152 VRF Lite Validation
Navigate to Networking > Connectivity > Tier-0 Gateways to obtain the list of VRF gateways
with their status and any associated errors.
VRF gateways are marked with the VRF tag in the name field.
5-153 Lab 10: Configuring VRF Lite
Configure and verify the VRF Lite functionality to isolate routing domains:
5-155 Key Points (1)
• The NSX-T Data Center routing function meets the needs of service providers and tenants.
• Dynamic route configuration enables gateways to exchange information about the network.
• Tier-1 gateways have downlink ports to connect to NSX segments and uplink ports to
connect to Tier-0 gateways.
• A gateway includes two components: a distributed router (DR) and, optionally, one or more
service routers (SRs).
• You can deploy an NSX Edge node through the NSX UI, with the OVF tool, or from an ISO file in a
PXE environment.
• Joining NSX Edge nodes with the management plane ensures that NSX Manager and the
NSX Edge nodes can communicate with one another.
• NSX-T Data Center implements Border Gateway Protocol (BGP) and Open Shortest Path
First (OSPF) dynamic routing protocols.
• External BGP (eBGP) is used to exchange routing information between different autonomous
systems.
• OSPF is a link state routing protocol that maintains adjacencies with neighbor routers over
Broadcast and Point-to-Point networks.
• VRF Lite enables you to configure multiple routing instances without deploying additional
Tier-0 gateways and edge nodes.
Questions?
Module 6
NSX-T Data Center Logical Bridging
6-2 Importance
Logical bridging enables layer 2 communication between devices on NSX-T Data Center
overlay-backed virtual networks and VLAN-backed physical networks. Logical bridging is also
useful in a physical-to-virtual migration scenario, where you must split a subnet across physical
and virtual workloads.
• Create a bridge-backed segment and bridge traffic between virtual and physical
environments
6-4 Overview of Logical Bridging
The bridging function provides layer 2 connectivity between the overlay segments and VLAN-
backed physical networks:
• The traffic is bridged in and out of the NSX-T Data Center domain.
• The NSX Edge firewall provides granular control over the traffic that is bridged.
The bridge feature is available in the bare-metal edges and in the VM edges.
Layer 2 bridging is useful in a physical-to-virtual migration scenario, where you might need to
split a subnet across physical and virtual workloads.
6-6 Routing and Bridging for Physical-to-
Virtual Communication
You can achieve physical-to-virtual communication by using these methods.
Routing:
• Physical and virtual workloads are placed on separate subnets, and traffic between them is
routed by gateways running standard routing protocols.
Bridging:
• A single flat layer 2 broadcast domain is shared by both physical and virtual workloads, resulting
in a limited domain size and a lack of scalability.
When connecting your physical workloads on traditional physical networks to a virtualized
environment, you can use routers running standard routing protocols to route traffic between
workloads in the two environments.
If you do not want to use routing, and you have to place your physical and virtual devices on a
single layer 2 subnet, you can enable bridging.
6-7 Example of Virtual-to-Physical Routing
Routing occurs between VMs in the NSX-T Data Center virtual environment and a server in the
physical environment:
• The web tier and application tier belong to the NSX-T Data Center overlay.
• The Tier-0 gateway provides north-south routing between the physical server and the
application tier servers.
6-8 Example of Virtual-to-Physical Bridging
Bridging occurs between a VM in the NSX-T Data Center virtual environment and a server in the
physical environment on the same subnet:
• The physical server is on the same subnet as the application tier servers.
• The communication between the physical server and the application tier occurs through
NSX Edge.
The diagram demonstrates how the physical server and the App1 server (virtual) can exist on
the same subnet.
In the diagram, the traffic between the physical server and the application tier does not pass
through the Tier-1 and Tier-0 gateways.
6-9 Logical Bridging Components
NSX-T Data Center components for layer 2 bridging:
• A bridge profile is used to specify which edge nodes are involved in bridging.
An NSX-T Data Center segment that is attached to a bridge profile provides the following
information:
For more information about the additional edge configurations, see NSX-T Data Center
Administration Guide at https://docs-staging.vmware.com/en/VMware-NSX-T-Data-
Center/3.2/administration/GUID-0E28AC86-9A87-47D4-BE25-5E425DAF7585.html.
6-10 Using Multiple Bridge Profiles
You can configure multiple bridge profiles on an NSX Edge node.
Edge 1 is the primary edge for bridge profile 1 and the backup edge for bridge profile 2.
Edge 2 is the primary edge for bridge profile 2 and the backup edge for bridge profile 1.
6-11 Creating an Edge Bridge Profile
To create an edge bridge profile, navigate to Networking > Connectivity > Segments > Profiles
> Edge Bridge Profiles.
Failover mode:
• Preemptive: The bridge on the primary edge node always becomes the active bridge when
the edge node is available again after a failure.
• Nonpreemptive: The bridge on the primary edge node remains as standby if it becomes
available after a failure when the bridge on the other edge node is already active.
6-12 Creating a Layer 2 Bridge-Backed
Segment
The bridge-backed segment provides layer 2 connectivity to overlay VMs outside NSX-T Data
Center:
• After the segment creation, set up an edge bridge to attach the edge bridge profile.
• At least one ESXi or KVM host must exist to serve as a regular transport node. This node
hosts VMs that require connectivity with devices outside an NSX-T Data Center
deployment.
• A VM or another physical device must exist outside the NSX-T Data Center deployment.
This physical device must be attached to a VLAN port matching the VLAN ID of the bridge-
backed logical segment.
A regular transport node and a physical device are not mandatory for creating a bridge-
backed logical segment. But if the transport node and physical device are not available, you
cannot use the bridge.
The option to add an edge bridge profile is not visible immediately after you create a segment.
You need to save the configuration and edit it again.
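For reference, the edge bridge profile and the bridge attachment on the segment can also be driven through the Policy API. The paths and field names below (edge-bridge-profiles, edge_paths, failover_mode, bridge_profiles, vlan_ids) are assumptions based on the policy object model and should be checked against the API reference for your release; the edge node paths and VLAN ID 120 are illustrative.

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials
    EP = "/infra/sites/default/enforcement-points/default"

    # 1. Edge bridge profile: primary and backup edge nodes plus the failover mode.
    bridge_profile = {
        "edge_paths": [f"{EP}/edge-clusters/edge-cluster-1/edge-nodes/edge-node-1",   # primary (assumed)
                       f"{EP}/edge-clusters/edge-cluster-1/edge-nodes/edge-node-2"],  # backup (assumed)
        "failover_mode": "PREEMPTIVE",
    }
    requests.patch(f"{NSX}/policy/api/v1{EP}/edge-bridge-profiles/bridge-profile-1",
                   auth=AUTH, json=bridge_profile, verify=False).raise_for_status()

    # 2. Attach the profile to the overlay segment and map it to VLAN 120 on the physical side.
    segment_patch = {
        "bridge_profiles": [{
            "bridge_profile_path": f"{EP}/edge-bridge-profiles/bridge-profile-1",
            "vlan_ids": ["120"],
            "vlan_transport_zone_path": f"{EP}/transport-zones/edge-vlan-tz",          # assumed VLAN TZ
        }]
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/segments/app-bridged-segment",
                   auth=AUTH, json=segment_patch, verify=False).raise_for_status()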
6-13 Monitoring the Bridged Traffic Statistics
To monitor the bridged traffic statistics, select VIEW STATISTICS when expanding the bridge-
backed segment.
6-14 Review of Learner Objectives
• Describe the purpose and function of logical bridging
• Create a bridge-backed segment and bridge traffic between virtual and physical
environments
• A bridge profile enables an NSX Edge cluster to provide layer 2 bridging to a segment.
• The traffic bridged in and out of the NSX-T Data Center domain is subject to the NSX Edge
layer 2 bridge firewall.
Questions?
Module 7
NSX-T Data Center Firewalls
7-2 Importance
NSX-T Data Center includes a distributed firewall and gateway firewall to protect both east-
west and north-south traffic. You must understand the architecture and configuration of the
NSX-T Data Center firewalls to ensure that your workloads are protected.
7-4 Lesson 1: NSX Segmentation
7-6 Traditional Security Challenges
The traditional security model assumes that all users and components in an organization's
network can be trusted.
The foundation of IT security has remained almost the same for the last 30 years.
Virtual Private Network (VPN) and Multifactor Authentication (MFA) were introduced later to
ensure external protection. But these types of authentication are not enough.
This approach assumes that a user's identity is not compromised and that all users act
responsibly and can be trusted.
This traditional perimeter-centric security approach has proven inadequate for protecting
modern IT environments.
7-7 About Zero-Trust Security
Zero-Trust is a security model that does not automatically trust entities in the security perimeter.
This model emerged to mitigate the increase of network attacks and insider threats that exploit
the breaches of a traditional perimeter-centric approach to security.
The rapidly changing work styles and increased use of SaaS applications resulted in Zero-Trust
security becoming one of the most important forms of alternative security.
Zero-Trust moves the architecture from a single large DMZ to multiple smaller boundaries
around each application and data. If an attacker succeeds in penetrating one of these
boundaries, the attacker can only move in that perimeter and be easily contained.
7-8 About NSX Segmentation
Segmentation is the process of dividing data center infrastructure into small zones, allowing fine-
grain control and inspection of traffic flows.
NSX-T Data Center includes a distributed, scale-out internal firewall that simplifies and automates
both macro-segmentation and micro-segmentation:
Micro-segmentation enables security teams to define and enforce granular controls to the
workload level of an application.
7-9 Use Cases for NSX Segmentation
NSX segmentation has the following use cases:
With NSX-T Data Center, security teams can deploy network segments easily, enable
application isolation, and enforce a Zero-Trust architecture with a single solution.
Use cases:
• NSX segmentation enforces a Zero-Trust architecture by creating granular policies between
applications, services, and workloads.
• Network segments, virtual security zones, and partner domains are quickly created and
configured as they are entirely defined in software. NSX-T Data Center also removes the
need to architect the network again and to deploy discrete appliances.
• Critical applications and shared services are protected from being compromised by two
mechanisms: discovery of application boundaries using NSX Intelligence and setting up
segmentation policies at the application level. NSX-T Data Center also ensures that policies
stay up-to-date as applications evolve or move.
7-10 NSX Segmentation Benefits
NSX Segmentation offers key business and functional benefits:
7-11 Enforcing Zero-Trust with NSX
Segmentation
NSX segmentation helps build a Zero-Trust approach to security by defining a security perimeter
around each application.
NSX-T Data Center improves the security of today’s modern workloads by preventing lateral
movement using network segmentation. It is distributed, application-aware, and simple to
operate.
Follow this process to secure a data center environment with NSX segmentation:
You must always monitor the environment for changes or unexpected behavior and adapt the
security policies.
7-12 Step 1: Creating Virtual Security Zones
Protect segments of the network by creating virtual security zones.
Using macro-segmentation to isolate environments improves the security of the data center. It
prevents lateral movement between virtual zones.
Depending on their business structure and use cases, a security team typically chooses to
segment environments that should not be able to directly communicate with each other.
Examples include different business units (such as HR and Finance), partner environments,
and production environments.
Macro-segmentation is the first step to the Zero-Trust journey. It starts defining security zones in
the data center environment that will be further secured.
7-13 Step 2: Identifying the Applications
Boundaries
Identify the virtual machines and containers used by an application and the network traffic that is
necessary for the application to function.
When the network is macro-segmented into virtual security zones, you can move to micro-
segmentation and secure applications in a virtual zone.
• Define the application boundaries by identifying the VMs and containers that an application is
using. Also, define the data, assets, applications, and services that need protection.
• Identify how the traffic moves across the organization in relation to the previously defined
boundaries.
— External application traffic: Which user is connecting to the app, which shared services
the application is using, and so on
A good understanding of the application footprint and the network traffic is the only way to
determine and enforce policies that secure access to the data.
This identification process is tedious and time-consuming when performed manually. NSX
Intelligence or vRealize Network Insight can be used to automate the discovery of the
application boundaries.
7-14 Step 3: Implementing Micro-Segmentation
Use micro-segmentation to allow necessary network traffic.
After the application's composition and necessary network traffic are identified, firewall rules
must be configured to allow the necessary network traffic.
NSX Distributed Firewall enables users to configure firewall rules from a single point, which are
then pushed to all hosts that participate in the NSX network. The creation of rules can be
automated with NSX Intelligence. NSX Intelligence can recommend distributed firewall rules
based on the discovered traffic flows in the environment.
• Security groups include different objects that are added both statically and dynamically, and
can be used as the source and destination of a firewall rule. Security groups can use security
tags or other criteria (such as IP sets, MAC sets, segment ports, AD user groups, and so on)
to group virtual machines together.
7-15 Step 4: Securing Through Context
Set up security policies to establish the behavior of virtual machines and containers.
This step secures the traffic and the context of the application by setting up policies based on
the behavior of virtual machines and containers.
In the example, a security administrator wants to create a firewall policy to restrict network
access to VMs with an earlier version of Windows:
2. Create a security group that gathers all VMs that do not match the OS version threshold.
3. Create a security policy to restrict access to the members of that security group.
When a VM is created and does not meet the OS version criteria, it is automatically placed in that
security group and blocked by the firewall rule. This approach removes the need to check
each VM individually.
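The group in this example could be defined through the Policy API with a dynamic membership condition on the operating system name. A minimal sketch, assuming the hypothetical group ID legacy-windows and a match on "Windows Server 2008"; the firewall rule that restricts this group is configured separately (see the distributed firewall examples later in this module).

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials

    # Dynamic group: every VM whose OS name contains the string below becomes a member
    # automatically, so newly created legacy VMs are picked up without manual intervention.
    group = {
        "display_name": "Legacy-Windows-VMs",
        "expression": [{
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "OSName",
            "operator": "CONTAINS",
            "value": "Windows Server 2008",     # illustrative OS version threshold
        }],
    }
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/domains/default/groups/legacy-windows",
        auth=AUTH, json=group, verify=False,
    )
    resp.raise_for_status()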
Third-party services can be integrated to create more granular control. For more information
about a list of NSX partners, see NSX Data Center Technology Partners at
https://www.vmware.com/products/nsx/technology-partners.html.
7-16 Review of Learner Objectives
• Define NSX segmentation
7-17 Lesson 2: NSX-T Data Center Distributed
Firewall
• Save, roll back, export, and import the distributed firewall configuration
7-19 NSX-T Data Center Firewalls
NSX-T Data Center includes the distributed firewall (east-west) and the gateway firewall (north-
south).
The distributed firewall is used for east-west traffic between workloads:
• It resides in the kernel of the hypervisor and outside the guest OS of the VM.
The gateway firewall is used for north-south traffic between the NSX gateways and the physical
network:
7-20 Features of the Distributed Firewall
The distributed firewall provides visibility and control for virtualized workloads and networks.
• FQDN Filtering
• Time-based policies
7-21 Distributed Firewall: Key Concepts
Several key concepts apply to distributed firewalls:
• Firewall rule: A set of instructions that determine whether a packet should be allowed or
blocked.
• Service: Defines a port and protocol combination and is used to specify the type of traffic
to be blocked or allowed in firewall rules.
• Context profile: Inspects the layer 7 content of the packets to allow or deny them.
7-22 Overview of a Security Policy
A security policy is a collection of firewall rules. You can configure different types of security
policies from the NSX UI.
• Firewall policies: Used for configuring firewall rules to control north-south and east-west
traffic.
• Endpoint policies: Used for configuring Guest Introspection services and rules.
• IDS/IPS policies: You can use these policies to define Intrusion Detection and Prevention
rules for east-west and north-south traffic.
• Malware Prevention policies: You can use these policies to define anti-malware rules for
east-west and north-south traffic.
• Network Introspection policies: Used for configuring north-south and east-west traffic
redirection rules.
• TLS Inspection policies: Used for configuring TLS Inspection rules for the north-south traffic.
7-23 Distributed Firewall Policy Categories
A Distributed Firewall policy is a collection of firewall rules applied to east-west traffic.
The NSX UI enables you to group distributed firewall policies into different categories.
• Ethernet: All layer 2 policies. Layer 2 firewall rules are always evaluated before layer 3 rules.
• Environment: High-level policy groupings, for example, the production group cannot communicate
with the testing group, or the testing group cannot communicate with the development group.
• Application: Specific and granular application policy rules, such as rules between applications
or application tiers, or rules between microservices.
Each of these categories has its own policies and rules. Firewall rules are enforced left to right
and top to bottom across these categories.
You can reorder policies and rules in a specific category. However, you cannot move policies or
rules across different categories.
7-24 About Distributed Firewall Policies
A firewall policy includes one or more firewall rules, which contain specific instructions for
managing various types of traffic.
In a firewall policy, each firewall rule contains instructions that determine the following factors:
Policies are used for multitenancy, such as creating specific rules for sales and engineering
departments in separate policies.
A policy can be defined as enforcing stateful or stateless rules. Stateless rules are treated as
traditional stateless access-control lists (ACLs).
7-25 Distributed Firewall Rule Processing
within a Policy
Firewall rules are processed in a top-to-bottom order:
• Packets that do not match any other rule are matched by the default rule.
• Like firewall policies, firewall rules are processed in the top-to-bottom order.
• Each packet is checked against the top rule in the rule table before moving down the
subsequent rules in the table.
• The first rule in the table that matches the traffic parameters is enforced. Subsequent rules
cannot be enforced because the search is terminated for that packet.
Because of this behavior, you must place the most granular policies at the top of the rule table.
Packets not matching any other rule are enforced by the default rule. The default rule is originally
set to the Allow action. This rule ensures that VM-to-VM communication is not broken during
the staging or migration phases. To implement a Zero-Trust model where only specified traffic is
allowed, change the action of the default rule to drop or reject, and define firewall policies that
explicitly allow the required traffic.
7-26 Applied To Field for the Policy
When creating a Distributed Firewall policy, you can define the scope of the policy. It can be the
full DFW or a specific security group.
The Applied To field configured at the policy level overrides the Applied To field configured on
the rules within it. You must configure the Applied To field either at the policy level or at the rule
level, but not at both levels. For more granularity, consider configuring the Applied To field at
the rule level.
7-28 Configuring Time-Based Firewall Policies
You can configure security policies that are only valid for a specific period. You can specify the
following parameters in the Time Window:
• Name
• Recurring days
The From and Till parameters must be configured in 30-minute increments. For example, from
08:00 to 08:30 is a valid configuration. However, if a user configures an interval from 08:15 to
08:45, a configuration error appears in the UI.
Before configuring a time-based rule, you must configure NTP servers for the transport nodes.
7-29 Creating Distributed Firewall Rules
Rules are a set of criteria used to evaluate traffic flows. They contain instructions that determine
whether a packet should be allowed, dropped, or rejected.
7-30 Configuring Distributed Firewall Rule
Parameters
A distributed firewall rule includes parameters such as source, destination, service, and context
profile. This rule defines the scope where the rules should be applied to and the action that
should be taken on a rule match. It also provides an option of logging when the traffic matches a
rule.
• Applied To: Defines the scope of the rule. It can be the full DFW or a specific security
group.
• Action: You can select from the following firewall rule actions:
— Allow
— Drop
— Reject
The order of firewall rules determines how the traffic is managed. You can drag the rules in the
UI to change the order.
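The same rule parameters map directly onto the Policy API. The sketch below creates a stateful policy in the Application category with one allow rule; the group IDs (web-vms, app-vms), the predefined HTTPS service, and the NSX Manager details are assumptions for illustration.

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials
    GROUPS = "/infra/domains/default/groups"

    # Security policy (Application category) containing a single distributed firewall rule.
    policy = {
        "display_name": "Web-to-App",
        "category": "Application",
        "stateful": True,
        "rules": [{
            "resource_type": "Rule",
            "id": "allow-web-to-app",
            "display_name": "Allow Web to App over HTTPS",
            "source_groups": [f"{GROUPS}/web-vms"],
            "destination_groups": [f"{GROUPS}/app-vms"],
            "services": ["/infra/services/HTTPS"],                  # predefined service
            "action": "ALLOW",
            "scope": [f"{GROUPS}/web-vms", f"{GROUPS}/app-vms"],    # Applied To: source and destination groups
            "logged": True,
            "sequence_number": 10,
        }],
    }
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/domains/default/security-policies/web-to-app",
        auth=AUTH, json=policy, verify=False,
    )
    resp.raise_for_status()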
7-31 Specifying Sources and Destinations for a
Rule
When specifying sources and destinations for a firewall rule, you can use an IP or MAC address
or an object (such as a group). If you do not specify these parameters, they match Any.
Both IPv4 and IPv6 addresses are supported for sources and destinations options of the firewall
rule. Multicast addresses are also supported.
A group can contain VMs, VIFs, segments, segment ports, IP and MAC addresses, AD user
groups, and physical servers.
Before creating a group that includes AD users, you must add an AD domain to NSX Manager.
You add this domain through the NSX UI by navigating to System > Configuration > Identity
Firewall AD > ADD ACTIVE DIRECTORY.
The main use case for creating a group that includes AD users is to configure identity-based
firewall rules.
7-33 Adding Members and Member Criteria for
a Group
Groups can be defined by using dynamic or static membership criteria:
• Dynamic group inclusion for VMs can be based on tags, VM names, OS names, or
computer names.
• Static group inclusion criteria apply to VMs, VIFs, segments, segment ports, IP sets, MAC
sets, AD user groups, physical servers, and nested groups.
Security administrators can assign one or multiple tags to workloads based on a given criteria.
These tags can then be used to create dynamic security groups for use in firewall rules.
7-35 Specifying Services for a Rule
When configuring distributed firewall rules, you specify one or more services.
Services contain the port and protocol definition for network traffic.
NSX includes an extensive list of predefined services. You cannot modify or delete these
services. However, you can create additional services to meet your communication
requirements.
You can create a service while configuring a distributed firewall rule. Alternatively, you can
create additional services by navigating to Inventory > Services > ADD SERVICE.
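A custom service can also be created through the Policy API. A minimal sketch, assuming a hypothetical TCP application listening on port 8443:

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials

    # Custom L4 service: TCP destination port 8443, usable in firewall rules
    # alongside the predefined services.
    service = {
        "display_name": "TCP-8443",
        "service_entries": [{
            "resource_type": "L4PortSetServiceEntry",
            "id": "tcp-8443",
            "l4_protocol": "TCP",
            "destination_ports": ["8443"],
        }],
    }
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/services/tcp-8443",
        auth=AUTH, json=service, verify=False,
    )
    resp.raise_for_status()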
7-36 Adding a Context Profile to a Rule
You can apply a context profile to a distributed firewall rule to enable a layer 7 firewall.
NSX Manager includes a list of predefined context profiles. You can also configure custom
context profiles for your firewall rules. Layer 7 firewall rules can be defined only in a stateful
firewall policy.
Alternatively, you can create context profiles by navigating to Inventory > Profiles > Profiles >
Context Profiles > ADD CONTEXT PROFILE.
7-37 Configuring Context Profile Attributes
When creating a context profile for a distributed firewall rule, you configure two attributes:
domain name and application ID.
A context profile defines context-aware attributes, including application ID, domain name, and
subattributes such as application version or cipher set.
Context profiles for distributed firewall rules include the following main attributes:
• DOMAIN_NAME: You can choose from a static list of fully qualified domain names (FQDNs)
or add your own FQDN.
• APP_ID: You can choose from a list of preconfigured applications. You cannot add
applications. Examples include FTP, SSH, and SSL. Certain applications allow users to
specify subattributes. For example, when choosing SSL Application, you can specify the
TLS_VERSION and the TLS_CIPHER_SUITE. For CIFS, you can specify the
SMB_VERSION.
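A custom context profile can be created in the same way through the Policy API. The sketch below defines a DOMAIN_NAME attribute for a hypothetical corporate domain; the attribute structure (key, value, datatype) should be verified against the Policy API reference for your release.

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials

    # Layer 7 context profile matching traffic to *.example.com (illustrative FQDN).
    # An APP_ID attribute (for example SSL with TLS sub-attributes) could be added the same way.
    profile = {
        "display_name": "corp-domains",
        "attributes": [{
            "key": "DOMAIN_NAME",
            "value": ["*.example.com"],
            "datatype": "STRING",
        }],
    }
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/context-profiles/corp-domains",
        auth=AUTH, json=profile, verify=False,
    )
    resp.raise_for_status()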
7-38 Custom FQDN Filtering
You can add your own fully qualified domain name (FQDN) for custom filtering.
Alternatively, you can create context profiles by navigating to Inventory > Profiles > Attribute
Types > FQDNs > ACTIONS > Add FQDN.
7-39 Setting the Scope of Rule Enforcement
The Applied To attribute optimizes the resource utilization on the ESXi and KVM hosts. It also
helps in defining targeted policies at specific zones or tenants without affecting the policy
defined on other zones or tenants.
The appropriate use of the Applied To field is paramount to optimize resource utilization on the
transport nodes and to avoid scalability issues. You must configure the Applied To field in a
distributed firewall rule to match the security groups used as the source and destination.
7-40 Specifying the Action for a Rule
You configure the following actions in a distributed firewall rule:
• Allow: Allows all traffic with the specified source, destination, and protocol.
• Drop: Drops packets with the specified source, destination, and protocol. Dropping a packet
is a silent action with no notification to the source and destination systems.
• Reject: Rejects packets with the specified source, destination, and protocol. Rejecting a
packet is a more graceful way to deny a packet, because it sends a destination unreachable
message to the sender.
7-41 Jump To Application DFW Rules (1)
Jump To Application is an action type that allows you to skip the entire rule processing in the
Environment category and jump ahead to process the Application category rule set:
The Jump to Application action enables you to do more granular rule processing for
communications between and across application levels.
You can create a firewall rule with Jump To Action with the same ease as writing any other
distributed firewall rule.
7-42 Jump To Application DFW Rules (2)
When a Jump To rule is configured in the Environment category, the traffic is managed as
follows:
1. The WEB to DB traffic flow matches the JUMP TO APP rule in the PROD TO PROD policy,
under the Environment category. Such a rule allows the traffic flow to jump from the
Environment category to the Application category.
2. After jumping to the Application category, the WEB to DB traffic flow is checked against all
the rules in the Application category in a top-down order. When the traffic hits the WEB TO
DB rule in the INTRA-APP policy, the traffic flow is dropped based on the action of the
WEB TO DB rule.
7-43 Distributed Firewall Rule Settings
You configure distributed firewall rule settings, such as logging, direction, and IP protocol.
• Logging: You can turn logging off or on. Logs are stored in the
/var/log/dfwpktlogs.log file on ESXi and KVM hosts.
• Direction: This setting matches the direction of traffic from the point of view of the
destination object.
• Log Label: Log labels can be used to identify a rule when analyzing the log files.
7-44 Saving and Viewing the Distributed
Firewall Configuration
You can save and view distributed firewall configurations. Every time you publish a distributed
firewall rule, a draft of the configuration is saved automatically.
From this view, you can see a timeline with all saved distributed firewall configurations. The
following types of saved items are available:
• Auto-saved: Drafts automatically saved by the system immediately after distributed firewall
changes are published. This feature is enabled by default but can be disabled if required.
Rolling back to the previous configuration requires reverting to the previously published
autosave.
— You can disable the Auto-saved feature in the NSX UI by navigating to Security >
Policy Management > Distributed Firewall > ACTIONS > General Settings and turning off
the Auto Save Drafts toggle.
• Saved by others: Distributed firewall configurations saved by other users different from the
user currently logged in to the system.
• Saved by me: Distributed firewall configurations saved by the user currently logged in to the
system.
7-45 Rolling Back to a Saved Distributed
Firewall Configuration
You can roll back to a previously saved distributed firewall configuration.
When you click the name of the saved configuration in the histogram, a new wizard displays
details about the distributed firewall configuration on the top, including name, description, and
creation date. The bottom part of the screen displays the differences between the saved
configuration and the last published configuration.
7-46 Distributed Firewall Configuration Export
and Import
You can also export and import the distributed firewall configuration.
While exporting a firewall configuration, you must provide a passphrase. This passphrase is
needed when importing this configuration into the firewall. After the export is complete, you can
download the firewall configuration.
While importing a firewall configuration with the passphrase, you must also provide a name for
this configuration. All the imports are saved as drafts in the firewall.
7-47 Distributed Firewall Architecture
The high-level distributed firewall workflow includes the following steps:
3. Distributed firewall policies are then pushed to the manager role and persisted.
4. The manager role forwards the distributed firewall rule configuration to the central control
plane (CCP).
5. The CCP forwards the configuration to the LCP (nsx-proxy) through the Appliance Proxy
Hub (APH).
6. The host transport nodes (ESXi/KVM) store the distributed firewall configuration and
configure the datapath accordingly.
7. The transport nodes send rule statistics and status to NSX Manager.
7-48 Distributed Firewall Architecture: ESXi
On an ESXi host, the distributed firewall includes several components:
1. nsx-proxy receives the configuration changes from the CCP and configures the datapath modules.
2. Datapath modules process the traffic:
— VSIP: Receives firewall rules and downloads them on each VM's vNIC
— VDPI: Performs layer 7 deep packet inspection in the user space
3. Stats Exporter collects flow records from the distributed firewall data plane kernel modules
and generates rule statistics.
4. nsx-proxy passes rule statistics and real-time data to the management plane.
The following datapath modules are responsible for distributed firewall rule processing:
• VMware Internetworking Service Insertion Platform (VSIP): This module is the main part of
the distributed firewall kernel module that receives the firewall rules and downloads them on
each VM’s vNIC.
• VMware Deep Packet Inspection (VDPI): This deep packet inspection module daemon in the
user space is responsible for L7 packet inspection. VDPI can identify application IDs and
extract context for a traffic flow.
L7 rules, like the remaining DFW rules, are programmed into VSIP. VSIP forwards L7 packets to
VDPI, which inspects and extracts the L7 information from the packets and returns them to VSIP.
Stats Exporter collects flow records from the VSIP kernel module and generates rule statistics.
7-49 Distributed Firewall Rule Processing: ESXi
vmware-sfw is a software construct where distributed firewall rules are stored and enforced.
When a distributed firewall rule is configured, its information is stored in the rule table. After the
initial rule configuration, the traffic is managed as follows:
1. The first packet of a flow is checked against the connection table to determine whether an
entry for the connection already exists.
2. If the connection is not present, the packet is matched against the rule table.
3. If the packet is allowed and the traffic type is stateful, a connection entry is created in the
connection table.
4. All subsequent packets for the same connection are serviced directly from the connection
table. Stateless packets are always matched against the rule table.
The vSphere ESXi network IOChain is a framework that enables you to insert functions into the
network datapath.
The IOChain framework is used by NSX to host vmware-sfw, which is a software construct
where distributed firewall rules are stored and enforced.
• Connection Table: Caches flow entries for stateful rules with an action
7-50 Distributed Firewall Architecture: KVM
On a KVM host, the distributed firewall includes several components:
1. nsx-proxy: Receives configuration changes from the CCP and configures the datapath modules
2. Datapath modules: OVS, Conntrack, and VDPI process and filter the traffic
3. Stats Exporter: Collects flow records from the datapath and generates rule statistics
4. nsx-proxy: Passes rule statistics and real-time data to the management plane
The diagram shows the distributed firewall architecture on a KVM. The same architecture applies
to bare-metal servers.
The following datapath modules are responsible for distributed firewall rule processing on a KVM:
• OVS: Core data path component for L2, L3, and distributed firewall. It provides ingress and
egress filtering for stateless rules.
• Conntrack: Module responsible for tracking established connections for stateful firewall rules.
• VDPI: A deep packet inspection module daemon in the user space that is responsible for L7
packet inspection. VDPI can identify application IDs and extract context for a traffic flow.
7-51 Distributed Firewall Rule Processing: KVM
In a KVM environment:
• The conntrack module is responsible for tracking established connections for stateful firewall
rules.
When a distributed firewall rule is configured, its information is stored in OVS as a flow. After the
initial configuration, the traffic is managed as follows:
1. The first packet of a flow is checked against the connection table to determine whether an
entry for the connection already exists.
2. If the connection is not present, the packet is matched against the OVS flow table.
3. If the packet is allowed and the type of traffic is stateful, an entry is created in the
connection table.
4. All subsequent packets for the same connection are serviced directly from the connection
table. Stateless packets are always matched against the OVS flow table.
7-52 Lab 11: Configuring the NSX Distributed
Firewall
Create NSX distributed firewall rules to allow or deny the application traffic:
• Save, roll back, export, and import the distributed firewall configuration
7-54 Lesson 3: Use Case for Security in
Distributed Firewall on VDS
7-56 About Distributed Firewall on VDS
Distributed firewall on VDS enables NSX security features on existing vSphere distributed port
groups (DVPG) without changing the vSphere platform.
• Removes the need for migrating workloads when configuring NSX security
• Implements the same security policies and rules for network objects irrespective of whether
they are created or owned by NSX Manager or vSphere
• Enables the security administrator to deploy the distributed firewall and other security
features without involving the networking administrators
Distributed firewall on VDS is a new feature in NSX-T Data Center 3.2. This feature enables NSX
Security features on workloads attached to a distributed port group (DVPG) managed by
vCenter Server. Earlier versions of NSX-T Data Center required VMs to be attached to
segments managed by NSX (VLAN or overlay) to take advantage of distributed security
functions. As of NSX-T Data Center 3.2, this functionality has been expanded to cover DVPG
managed by vCenter Server.
Distributed firewall on VDS is implemented on the NSX Management Plane based on information
reported by the vCenter Server inventory.
7-57 Supported Features
Several features are supported when you prepare your vCenter Server cluster for NSX Security.
• The vCenter Server system must be registered as a Compute Manager in NSX Manager.
This feature can be configured only through the NSX API, the Quick Start installation in the
NSX UI, or the vCenter Server NSX plug-in.
A VDS can span multiple vSphere clusters; some of these clusters can have NSX security mode
enabled while others do not.
7-59 Installation Workflow
You follow these steps to install the NSX distributed firewall on a VDS with existing
dvportgroups.
1. Navigate to System > Configuration > Quick Start > Prepare Clusters for Networking and
Security > GET STARTED.
All hosts in the cluster must have identical VDS configuration to perform an NSX Quick Install.
You can also configure the host for security only with a REST API call.
7-61 Validating the Security Cluster
Preparation from the NSX UI
You can validate the Security Cluster Preparation by navigating to System > Configuration >
Fabric > Nodes > Host Transport Nodes.
• One host switch per VDS was created and the Type is VDS.
Because the cluster is not prepared for NSX networking features, it is not expected to have
tunnels configured on the host.
During the transport nodes preparation, the following objects are created:
• Discovered segments
When the ESXi hosts are prepared for NSX security, vCenter Server performs a full inventory
sync with NSX Manager to share its objects.
7-63 Autoconfigured Transport Node Profile
A separate transport node profile is created for each cluster.
You can check the autocreated transport node profile by navigating to System > Fabric >
Profile > Transport Node Profile.
A separate transport node profile is created for each cluster even if multiple clusters have
identical VDS configuration.
The created transport node profiles are neither configurable nor editable.
7-64 VLAN Transport Zones
A VLAN transport zone is created for every VDS.
You can check the autocreated transport zone by navigating to System > Fabric > Transport
Zones > Transport Zones.
The autocreated transport zones are also neither configurable nor editable.
A single instance of a port has two IDs: a vSphere ID and an NSX ID. The NSX Control plane
uses the NSX ID.
Discovered segments can be consumed by the NSX security features such as the distributed
firewall.
7-66 Discovered Segments (2)
You can access the discovered segments by navigating to Networking > Connectivity >
Segments > Distributed Port Groups.
Discovered segments do not appear in the NSX UI as segments, but they appear as distributed
port groups.
The vCenter Server inventory is monitored. Any change that occurs (DVPG added, host added,
and so on) is reflected in NSX-T Data Center.
7-67 Configuring Segment Profiles
You can configure IP discovery, segment security, and Spoofguard from the NSX UI.
7-68 Grouping Enhancement
You can use Distributed Port, Distributed Port Groups, and their corresponding tags to define
NSX Group membership.
Since NSX-T Data Center 3.2, you can select distributed port group and distributed port as
members and membership criteria.
You can use these groups when configuring distributed firewall rules.
7-70 Lesson 4: NSX-T Data Center Gateway
Firewall
7-72 About the Gateway Firewall
The gateway firewall has the following characteristics:
• Implemented on both Tier-0 and Tier-1 gateways, and requires the service router (SR)
component of the gateway
The NSX-T Data Center gateway firewall provides essential perimeter firewall protection that
can be used in addition to a physical perimeter firewall. The gateway firewall supports stateless
and stateful firewall rules.
The gateway firewall works independently of the distributed firewall. A user can consume the
gateway firewall using either the UI or REST API framework provided by NSX Manager. The
gateway firewall configuration is similar to the distributed firewall policy. This configuration is
defined as a set of individual rules in a policy. Like the distributed firewall, the gateway firewall
rules can use tagging and groups to build policies.
The service router component of a Tier-0 or Tier-1 gateway provides north-south routing
functionality and centralized services, such as NAT, load balancing, and so on.
From NSX-T Data Center 3.2, the Tier-0 gateway firewall supports stateful firewall filtering with
both active-active and active-standby high availability modes.
The gateway firewall service is part of the NSX Edge node for both bare-metal and VM form
factors. The gateway firewall is useful in developing PCI zones, multitenant environments, or
DevOps-style connectivity without forcing the intertenant or interzone traffic onto the physical
network. The gateway firewall datapath uses the Data Plane Development Kit (DPDK)
framework supported on NSX Edge to provide better throughput.
7-73 Predefined Gateway Firewall Categories
The gateway firewall includes predefined categories on the All Shared Rules tab, where rules
across all gateways are visible.
• Emergency: Used for quarantine and can also be used for Allow rules.
• System: Automatically generated by NSX-T Data Center and specific to internal control
plane traffic, such as BFD rules, VPN rules, and so on.
Categories are evaluated from left to right. Each category can have its own rules, which are
evaluated top to bottom.
7-74 Gateway Firewall Policy
A Gateway Firewall policy includes one or more individual firewall rules and is applied to north-
south traffic.
Gateway policies can be applied to Tier-0 and Tier-1 gateways and their interfaces.
7-75 Configuring Gateway Firewall Policy
Settings
To create a Gateway Firewall policy, you assign a policy name and configure the settings. You
can also set a Time Window so that the policy is only applicable during the specified period.
You can configure the following settings when creating a Gateway Firewall policy:
• TCP Strict: In certain circumstances, the firewall might not see the TCP three-way
handshake for a particular flow (for example, due to asymmetric traffic). By default, the firewall
does not enforce the need to see a three-way handshake and will pick up sessions that are
already established. TCP Strict can be enabled per section to turn off midsession pickup.
When enabling the TCP Strict mode for a particular firewall policy and using a default ANY-
ANY Block rule, packets that do not complete the three-way handshake connection
requirements and that match a TCP-based rule in this policy section are dropped.
• Stateful: When this option is enabled, the gateway firewall performs stateful packet
inspection and tracks the state of network connections. Packets matching a known active
connection are allowed by the firewall, and packets that do not match are inspected against
the gateway firewall rules.
• Locked: This setting allows you to lock a policy while making configuration changes so that
others cannot make modifications at the same time.
You can also set a Time Window so that the policy is only applicable during the specified period.
For this feature, the NSX Edge nodes need to have an NTP server configured.
7-76 Configuring Gateway Firewall Rules
You create one or more rules in the policy to allow, drop, or reject traffic.
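A gateway firewall policy and its rules can likewise be pushed through the Policy API. The sketch below scopes a single allow rule to a hypothetical Tier-1 gateway t1-web; the category name (LocalGatewayRules) and the group and gateway IDs are assumptions to confirm for your environment.

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials

    # Gateway policy with one rule allowing inbound HTTPS to the web tier,
    # enforced on the SR of the t1-web gateway.
    gateway_policy = {
        "display_name": "T1-Web-Inbound",
        "category": "LocalGatewayRules",       # assumed category for gateway-specific rules
        "stateful": True,
        "rules": [{
            "resource_type": "Rule",
            "id": "allow-inbound-https",
            "source_groups": ["ANY"],
            "destination_groups": ["/infra/domains/default/groups/web-vms"],
            "services": ["/infra/services/HTTPS"],
            "action": "ALLOW",
            "scope": ["/infra/tier-1s/t1-web"],  # Applied To: the gateway (or specific interfaces)
            "logged": True,
        }],
    }
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/domains/default/gateway-policies/t1-web-inbound",
        auth=AUTH, json=gateway_policy, verify=False,
    )
    resp.raise_for_status()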
7-77 Configuring Gateway Firewall Rules
Settings
You can specify the logging, direction, and IP protocol for the gateway firewall rule. A firewall
rule must be published for it to take effect.
The rules are logged in the Syslog file of the edge node.
• In: The rule checks only traffic entering the gateway interface.
• Out: The rule checks only traffic leaving the gateway interface.
7-78 Gateway Firewall Architecture
The gateway firewall workflow is as follows:
3. Gateway policies are sent to the manager role, which validates and forwards them to the
CCP.
4. The CCP distributes the firewall configuration through APH to the relevant edge nodes.
5. nsx-proxy receives the firewall configuration from the CCP and configures the edge data
path.
6. The Stats Exporter collects flow records from the datapath and generates rule statistics.
7. nsx-proxy reports the firewall rules statistics and status to the management plane.
7-79 Gateway Firewall Rule Processing
On an NSX Edge node:
• The flow table tracks established connections for stateful firewall rules.
When a Gateway Firewall rule is configured, its information is stored in a rule table in the Rule
Classifier module. After the initial rule configuration, the traffic is managed as follows:
1. The first packet of a flow is checked against the flow table to determine whether a session
already exists.
2. If the connection is not present, the packet is matched against the rule table in the Rule Classifier.
3. If the packet is allowed and the traffic type is stateful, an entry is created in the flow table.
4. All subsequent packets for the same connection are serviced directly from the flow table.
Stateless packets are always matched against the rule table.
The Rule Classifier maintains stateful and stateless rules for the following features:
• Gateway Firewall
• NAT
• Load Balancing
• IPSec
• Service Insertion
The flow table is responsible for tracking established connections for stateful firewall rules, NAT,
and load balancing edge services. When a new connection is made, the first packet is matched
against the flow table to determine if a session exists.
A rule classifier and flow table are created for each gateway. If two gateways are present in the
NSX Edge node, two rule classifier instances and flow tables are created: one for each gateway.
7-82 Key Points
• Zero-Trust is a security model that does not automatically trust entities in the security
perimeter.
• Macro-segmentation is the process of dividing data center infrastructure into smaller zones.
• The distributed firewall resides outside the VM guest OS and controls the I/O path to and
from the vNIC.
• The gateway firewall, also called the perimeter firewall, protects north-south traffic between
the NSX environment and the physical network.
Questions?
Module 8
NSX-T Data Center Advanced Threat
Prevention
8-2 Importance
NSX Distributed IDS/IPS, NSX Malware Prevention, NSX Intelligence, and NSX Network
Detection and Response provide visibility and protection against advanced threats in your
network. As a security administrator, you must learn to properly configure these features to
successfully prevent malicious attacks against your environment.
3. Malware Prevention
4. NSX Intelligence
8-4 Lesson 1: Distributed Intrusion Detection
and Prevention
8-6 About NSX Distributed IDS/IPS
NSX Distributed IDS/IPS uses real-time deep packet inspection to identify and prevent attempts
at exploiting vulnerabilities in your applications:
Using real-time deep packet inspection, NSX Distributed IDS/IPS performs the following tasks:
• Protects east-west traffic and prevents lateral threat movement: The objective of many
malicious attacks is not solely to penetrate the network. Once in, attackers often pivot
through multiple systems and explore the network to find their main target and gain access
to it. NSX Distributed IDS/IPS recognizes and prevents lateral movement across the
network when perimeter security is compromised.
• Uses signatures to identify malicious traffic patterns: NSX Distributed IDS/IPS uses an
external cloud-based signature store to remain up-to-date with known malicious activity,
including zero-day vulnerabilities. It helps protect against both L4 and L7 attacks.
• Is implemented as a distributed solution across multiple ESXi hosts: Similar to the distributed
firewall architecture, NSX Distributed IDS/IPS is implemented as a kernel module in the ESXi
hosts. This distributed architecture significantly reduces hairpinning by processing traffic that
is closer to the source.
8-7 Use Cases for NSX Distributed IDS/IPS
NSX Distributed IDS/IPS protects against malicious activity, including the following activities:
• Lateral movement
With NSX Distributed IDS/IPS, security administrators can augment or replace discrete
appliances.
Application denial of service and client and server exploits have the following characteristics:
• Client-side and server-side exploits: Client-side attacks exploit the trust between users and
the website or server that they visit. Common client-side and server-side exploits are as
follows:
With NSX Distributed IDS/IPS, security administrators can replace or augment discrete
appliances. By using native IDS/IPS capabilities in NSX, you can replace traditional IDS/IPS
appliances, including standalone, firewall-based, or virtual host-based solutions. You might also
decide to keep the traditional IDS/IPS appliances, while using NSX Distributed IDS/IPS for
additional east-west traffic protection.
Behavior-based IDS/IPS does not require a separate installation because it is part of the NSX
Distributed IDS/IPS implementation. Examples of behavior-based detection include:
• Identifying periodic callback behavior in a given flow: A variety of remote access Trojan
(RAT) toolkits expose this type of behavior when checking in with a command-and-control
server. RATs are a type of malware threat where an attacker takes control of your
computer.
• Identifying a client with a high failure rate in SSH authentication with a server (can indicate a
credential enumeration attack).
• Tunneling of network data over anonymous proxies such as Tor (not necessarily malicious,
but unusual in an enterprise environment). Tor aims to conceal users' identities and their
online activity from surveillance and traffic analysis by separating identification and routing.
8-9 Requirements for NSX Distributed
IDS/IPS
Before using NSX Distributed IDS/IPS, administrators must consider the following factors:
• The NSX Distributed IDS/IPS components are installed as part of the host preparation.
• NSX Distributed IDS/IPS can be enabled at the vSphere cluster level and for standalone
ESXi hosts.
• You can configure NSX Manager to download intrusion detection signatures from the
Internet or manually download and upload the signatures to NSX Manager.
• The NSX-T Data Center environment must be configured with a valid license for NSX
Distributed IDS/IPS.
For additional information about the type of licenses that are valid for NSX Distributed IDS/IPS,
see the VMware NSX Data Center Datasheet at
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmwa
re-nsx-datasheet.pdf.
8-10 About IDS/IPS Signatures
An IDS/IPS signature contains metadata that is used to identify an attacker's attempt to exploit
a known operating system or application vulnerability.
NSX Manager downloads IDS/IPS signatures daily from a cloud-based signature repository.
• Critical
• High
• Medium
• Low
• Suspicious
An IDS/IPS signature contains metadata that is used to identify an attacker's attempt to exploit
a known operating system or application vulnerability. Such metadata provides context about
the attempt, such as the affected product, attack target, and so on.
IDS/IPS signatures are matched against traffic headers by using regular expressions.
IDS/IPS signatures are classified into severity categories based on their Common Vulnerability
Scoring System (CVSS) score.
8-11 About IDS/IPS Profiles
An IDS/IPS profile defines the IDS signatures that are included or excluded from detection.
The default IDS profile is configured to include all signatures that are labeled as critical.
8-12 About IDS/IPS Policies and Rules
An IDS/IPS policy is a collection of IDS/IPS rules.
An IDS/IPS rule contains a set of instructions that determine which traffic is analyzed, including
values for the following parameters:
• Services
• Applied to
• Mode
NSX-T Data Center 3.2 includes the following modes for an IDS/IPS rule:
• Detect Only: Detects intrusion signatures in the traffic and generates alerts, but does not take
any preventive action.
• Detect & Prevent: Detects signatures and performs the action specified by the security
administrator. Available actions are alert, drop, and reject.
Different rules can be configured with different modes.
You typically start with the Detect Only mode. After tuning for false positives, you change to
Detect & Prevent.
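For reference, IDS/IPS policies and rules are exposed through the intrusion-services branch of the Policy API. The sketch below is only an illustration: the endpoint, the IdsRule resource type, the ids_profiles field, the default profile path, and the DETECT action string are assumptions that must be checked against the API reference for your NSX-T release.

    import requests

    NSX = "https://nsx-mgr.example.com"        # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # replace with real credentials

    # IDS/IPS policy with one rule in detect-only mode, applied to an assumed app-vms group.
    ids_policy = {
        "display_name": "IDS-App-Tier",
        "rules": [{
            "resource_type": "IdsRule",          # assumed resource type
            "id": "detect-app-tier",
            "source_groups": ["ANY"],
            "destination_groups": ["/infra/domains/default/groups/app-vms"],
            "services": ["ANY"],
            "ids_profiles": ["/infra/settings/firewall/security/intrusion-services/profiles/DefaultIDSProfile"],
            "action": "DETECT",                  # switch to DETECT_PREVENT after tuning false positives
            "scope": ["/infra/domains/default/groups/app-vms"],
        }],
    }
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/domains/default/intrusion-service-policies/ids-app-tier",
        auth=AUTH, json=ids_policy, verify=False,
    )
    resp.raise_for_status()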
8-13 IDS/IPS Signature Curation
The NSX IDS curator engine combines the IDS signatures from Trustwave, Secureworks, and
Lastline into a single signature set, which it pushes, as an NSX IDS bundle, to NSX Threat
Intelligence Cloud.
NSX Threat Intelligence Cloud forwards the NSX IDS bundle to NSX Manager for consumption in
the NSX-T Data Center environment.
The NSX IDS curator engine performs the following tasks in the back end:
— If two signatures match the same traffic, only one signature is kept.
8-14 NSX Distributed IDS/IPS Architecture
NSX Distributed IDS/IPS operates as follows:
1. NSX Manager downloads curated IDS/IPS signatures from NSX Threat Intelligence Cloud.
4. NSX Manager passes the information to the central control plane (CCP).
5. CCP pushes the IDS/IPS configuration to hosts through the appliance proxy hub.
6. The ESXi hosts store the signature information locally and configure the datapath.
7. The ESXi hosts collect traffic data and send events to NSX Manager.
IDS signatures are written into the IDS module in the datapath, and IDS rules are stored in the
VSIP module.
VSIP evaluates traffic against IDS rules. If a match is found, the packet is sent to IDS.
Distributed firewall rules are always evaluated before distributed IDS/IPS rules. If a distributed
firewall rule rejects a traffic flow, this traffic is never evaluated by IDS/IPS.
8-15 Configuring NSX Distributed IDS/IPS
To enable distributed intrusion detection and prevention for standalone hosts or clusters, you
select Security > IDS/IPS & Malware Prevention > Settings > Shared.
If the Auto Update new versions (recommended) check box is selected, signatures are
automatically applied to the ESXi hosts after they are downloaded from the cloud. If the check
box is not selected, the signatures remain pinned at the listed version.
8-17 Global Intrusion Signature Management
With Global Intrusion Signature Management, you can override the default action for a given
signature and globally disable signatures that are not relevant to your environment.
If a signature is disabled globally, it is removed from custom profiles. You cannot include the
disabled signature in newly created custom profiles.
All signatures are preconfigured with a default action that is recommended by VMware. You can
override this action globally or per profile.
The following actions are available:
• Alert: This action is typically used in new deployments or for new signatures.
• Drop and Reject actions are commonly used in the following circumstances:
— High-impact exploits
Dropping a packet is a silent action with no notification to the source or destination
systems. Rejecting a packet is a more graceful way to deny it because a destination
unreachable message is sent to the sender. Rejecting is also faster because the action
occurs immediately, but the drawback is that it can notify a potential attacker of the
defense that was invoked.
8-18 Configuring Custom IDS/IPS Profiles
Using custom IDS/IPS profiles, you specify the IDS signatures that you want to include or
exclude for detection based on their severity, attack type, attack target, CVSS, and affected
products.
Creating granular workload-specific profiles reduces noise and false positives in your
environment.
You create custom IDS profiles by selecting Security > IDS/IPS & Malware Prevention >
Profiles.
You configure the IDS signatures that you want to include or exclude for detection based on
their severity and more granular criteria:
• Attack Types: Categorizes signatures by attack techniques such as Trojan activity or
attempted denial of service (DoS). The types align with the MITRE ATT&CK framework.
• Attack Targets: Broad category of possible attack targets such as IoT, mobile client, or
networking equipment.
• CVSS: Include or exclude signatures based on their CVSS score range (none, low, medium,
high, and critical).
You can also click the Manage signatures for this profile link to disable individual profile
signatures that might not be relevant to your workloads, or to override the global action
configured for a given signature.
Suspicious IDS signatures are used to detect traffic anomalies in the network.
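Custom IDS profiles can also be managed through the NSX Policy API. The following curl sketch is illustrative only: the profile ID, credentials, and body fields (such as profile_severity) are assumptions based on the NSX-T 3.x intrusion-services Policy API and should be verified against the NSX-T Data Center API reference for your version.
# Illustrative sketch: create or update a custom IDS profile through the Policy API.
# The profile ID, credentials, and body fields are assumptions; verify the exact
# schema in the NSX-T Data Center API reference before use.
curl -k -u admin:'<password>' -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "Web-Servers-Profile",
        "profile_severity": ["CRITICAL", "HIGH"]
      }' \
  https://sa-nsxmgr-01.vclass.local/policy/api/v1/infra/settings/firewall/security/intrusion-services/profiles/Web-Servers-Profile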
8-19 Configuring IDS/IPS Rules
You specify how traffic is managed in your environment by configuring one or more rules with
an IDS profile and the mode of operation.
As with distributed firewall rules, IDS/IPS rules are evaluated from top to bottom.
To create IDS policies and rules, you select Security > Policy Management > IDS/IPS &
Malware Prevention > Distributed FW Rules.
The Sources, Destinations, Services, and Applied To values work in the same way as those in a
distributed firewall rule.
IDS Profile specifies the group of signatures that the traffic is matched against.
• Detect Only: Regardless of the global or per-signature action, only alerts are generated, and
no preventive action is taken. This mode is equivalent to an intrusion detection system.
• Detect & Prevent: The action that is specified for the given signature either globally or at
the profile level is taken (alert, drop, reject). The action that is specified at the profile level
overrides the action configured globally. This mode is equivalent to an inline intrusion
prevention system.
When configuring IDS/IPS rules, do not use the drop action in a rule that is configured with a
security profile that includes suspicious-level signatures. With this configuration, you
guarantee that any abnormal traffic is inspected.
Like distributed firewall rules, IDS/IPS rules are evaluated from top to bottom. You must place
the most hit rules at the top to avoid the unnecessary evaluation of subsequent rules.
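IDS/IPS policies and rules can also be created through the Policy API. The following curl sketch is an illustration under assumptions: the policy and rule IDs, group paths, and body fields (such as ids_profiles and action) are assumptions based on the NSX-T 3.x intrusion-service-policies schema and must be checked against the API reference for your version. The sketch also assumes that the parent policy Web-IDS-Policy and the referenced group and profile already exist.
# Illustrative sketch: create an IDS/IPS rule in Detect Only mode under an existing policy.
# Policy ID, rule ID, group path, profile path, and body fields are assumptions;
# verify them against the NSX-T Data Center API reference.
curl -k -u admin:'<password>' -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "Web-IDS-Rule",
        "source_groups": ["ANY"],
        "destination_groups": ["/infra/domains/default/groups/Web-Servers"],
        "services": ["ANY"],
        "scope": ["/infra/domains/default/groups/Web-Servers"],
        "ids_profiles": ["/infra/settings/firewall/security/intrusion-services/profiles/Web-Servers-Profile"],
        "action": "DETECT"
      }' \
  https://sa-nsxmgr-01.vclass.local/policy/api/v1/infra/domains/default/intrusion-service-policies/Web-IDS-Policy/rules/Web-IDS-Rule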
8-20 Monitoring IDS/IPS Events (1)
To monitor IDS/IPS events, you select Security > IDS/IPS.
The Events tab shows all intrusion attempts that are detected by the system:
Administrators can filter events based on their severity. Free-form text is also available for
further filtering of events.
IDS events are graphically represented by using a histogram. Security administrators can specify
the period that they are interested in by adjusting the vertical lines in the diagram.
Each IDS/IPS event type is represented by a dot in the histogram. The size of the dot is
proportional to the number of occurrences of an event.
NSX Manager keeps the last 14 days of data or up to 1.5 million records.
8-21 Monitoring IDS/IPS Events (2)
Each event can be expanded to retrieve details about the intrusion attempt, including the
attacker, victim, protocol, attack type, and so on.
• Signature ID
• Severity
• Product affected
• Users affected
• VMs affected
• Bytes exchanged
The Intrusion Activity diagram shows the occurrences for a particular signature event on the
selected date. You can click the bars for prevented occurrences (green bar) and for detected-
only occurrences (purple bar) to view a detailed intrusion history for a given signature.
8-22 About North-South IDS/IPS
NSX-T Data Center 3.2 introduces North-South IDS/IPS as a tech preview feature.
North-South IDS/IPS uses real-time deep packet inspection to identify and prevent attempts at
exploiting vulnerabilities in your applications:
• Protects north-south traffic and prevents malicious traffic from entering your internal
network
Tech preview features can be tested and consumed by users, but they are not officially supported by VMware.
You can configure North-South IDS/IPS for the following use cases:
• Detecting and preventing intrusion attempts across different zones in the data center
8-23 Lab 13: Configuring Distributed Intrusion
Detection and Prevention
Configure Distributed Intrusion Detection and analyze the malicious traffic:
8-25 Lesson 2: NSX Application Platform
You must deploy NSX Application Platform before using the following NSX security features:
• NSX Intelligence
• NSX Metrics
Before deploying NSX Application Platform, you must understand the concepts presented in the
Kubernetes Fundamentals course.
8-28 Prerequisites for NSX Application
Platform Deployment
NSX Application Platform deployment does not automatically prepare the underlying
Kubernetes cluster.
NSX Application Platform can run over a Tanzu Kubernetes cluster or an upstream Kubernetes
cluster.
You must provide the configuration file for an existing Kubernetes cluster during the deployment
of NSX Application Platform.
You must also set up a private Harbor registry with chart repository service to deploy NSX
Application Platform.
For information about the NSX Application Platform deployment prerequisites, including the
supported Tanzu Kubernetes cluster and upstream Kubernetes versions, see Deploying and
Managing the VMware NSX Application Platform at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/nsx-application-platform/GUID-D54C1B87-8EF3-45B3-AB27-EFE90A154DD3.html.
8-29 Setting Up a Private Harbor Registry
Before you deploy NSX Application Platform, you must set up a private Harbor registry with
chart repository service.
You then use this registry to upload the Helm charts and Docker images required to deploy NSX
Application Platform.
The Helm charts specify the configuration settings to be used for the deployment.
The Docker images include the container images to be used for the deployment.
For production environments, the private Harbor instance must be configured using external CA-
signed certificates.
VMware Harbor Registry is an enterprise-class registry server that stores and distributes
container images.
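VMware provides upload scripts with the NSX Application Platform installation bundle. The commands below are only a generic sketch of the Harbor workflow, with a hypothetical registry FQDN (harbor.vclass.local) and project name (nsxapp), to show where the Docker images and Helm charts are uploaded.
# Generic Harbor workflow sketch; registry FQDN, project, image, and tag are placeholders.
docker login harbor.vclass.local
docker push harbor.vclass.local/nsxapp/<image>:<tag>
helm repo add nsxapp https://harbor.vclass.local/chartrepo/nsxapp
helm repo update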
For information about installing a Harbor registry with a chart repository service, and uploading
the NSX Application Platform Docker images and Helm charts, see Deploying and Managing the
VMware NSX Application Platform at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/nsx-application-platform/GUID-FAC9DBE3-A8EE-4891-A723-942D0AB679F6.html#GUID-FAC9DBE3-A8EE-4891-A723-942D0AB679F6.
8-30 NSX Application Platform Form Factors
Based on the required features, NSX Application Platform can be deployed in different form
factors:
• Standard
• Advanced
• Evaluation
The NSX Application Platform form factors have the following features:
• Standard:
— Supports NSX Network Detection and Response, NSX Malware Prevention, and Metrics
— Requires one controller and three worker nodes in the Kubernetes cluster
• Advanced:
— Supports NSX Network Detection and Response, NSX Malware Prevention, NSX
Intelligence, and Metrics
— Requires one controller and three worker nodes in the Kubernetes cluster
• Evaluation:
— Requires one controller and one worker node in the Kubernetes cluster
This form factor is not supported in production environments. It is intended only for
evaluation or proof-of-concept.
8-31 NSX Application Platform Deployment (1)
You deploy NSX Application Platform from the NSX UI by navigating to System >
Configuration > NSX Application Platform.
The Helm Repository and Docker Registry URLs must point to your private Harbor registry.
8-32 NSX Application Platform Deployment (2)
During deployment, you must specify the configuration file for the underlying Kubernetes
infrastructure. You also select the form factor based on your feature requirements.
The configuration file for the Kubernetes cluster must be stored locally on the machine where
the NSX Application Platform deployment is initiated. This file is typically in YAML format and
must be provided by your Kubernetes administrator.
The exact steps required to obtain the Kubernetes cluster configuration file depend on the
platform where your Kubernetes cluster is running.
In vSphere with Tanzu environments, you must work with your infrastructure administrator to
create a kubeconfig file with a long-lived token to be used in the NSX Application Platform
deployment. For information about creating this kubeconfig file with a long-lived token, see
Deploying and Managing the VMware NSX Application Platform at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/nsx-application-platform/GUID-52A52C0B-9575-43B6-ADE2-E8640E22C29F.html#GUID-52A52C0B-9575-43B6-ADE2-E8640E22C29F.
You may then use a file transfer utility to copy the YAML configuration file to your local system.
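For example, a utility such as scp can be used; the host name and path in the following sketch are placeholders, not values from the lab environment.
# Copy the Tanzu Kubernetes cluster kubeconfig to the local system (host and path are placeholders).
scp ubuntu@jump-host.vclass.local:/home/ubuntu/tkc-kubeconfig.yaml .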
8-33 NSX Application Platform
Predeployment Checks
Before proceeding with the deployment of NSX Application Platform, the wizard checks the
connection, resources, and correct configuration of the specified Kubernetes cluster.
8-34 NSX Application Platform Deployment
Validation
After the deployment, you can validate the status of NSX Application Platform nodes from the
NSX UI.
The example illustrates the nodes available after the deployment of NSX Application Platform
with an Advanced form factor:
• One controller node: Runs the Kubernetes control plane for the cluster
• Three worker nodes: Used to perform data processing and analytics tasks
You can review the resource utilization at the overall cluster level or view the specific resource
utilization for each node.
8-35 NSX Application Platform Services
The following core services are deployed as part of NSX Application Platform in both the
Standard and the Advanced form factor:
• Messaging
• Configuration Database
• Platform Services
• Metrics
The following core services are deployed only as part of the Advanced form factor:
• Analytics
• Data Storage
The following core services are deployed as part of NSX Application Platform in both the
Standard and the Advanced form factor:
• Platform Services: Includes all services related to certificate and cluster management.
The following core services are deployed only as part of the Advanced form factor:
• Analytics: Processes and correlates the network flows, and generates recommendations
and events. Analytics is only available in the Advanced form factor deployment.
• Data Storage: Includes a distributed database used to persistently store correlated flows.
Data Storage is only available in the Advanced form factor deployment.
During the deployment of NSX Application Platform with a Standard form factor, only the
messaging, configuration database, metrics, and platform services are enabled. Installing
additional security features on top of NSX Application Platform with a Standard form factor will
enable additional services. For example, the installation of Malware Prevention and NSX
Network Detection and Response will automatically enable the analytics service.
Pods run in the worker nodes and can run one or more container processes providing the
functionality for multiple NSX security features.
One or more pods are deployed per core service of NSX Application Platform.
8-37 Basic kubectl Commands (1)
Because NSX Application Platform is a container-based solution, you must be familiar with the
key kubectl commands to perform basic troubleshooting.
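The following commonly used kubectl commands cover the basic checks for NSX Application Platform troubleshooting. The nsxi-platform namespace is created by the deployment; <pod-name> is a placeholder.
kubectl get nodes                                 # list the cluster nodes and their status
kubectl get pods -A                               # list the pods in all namespaces
kubectl describe pod <pod-name> -n nsxi-platform  # show status details and events for a pod
kubectl logs <pod-name> -n nsxi-platform          # view the logs of a pod
kubectl get services -n nsxi-platform             # list the services exposed by the platform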
8-39 Namespaces Available After NSX
Application Platform Deployment
The cert-manager, nsxi-platform, and projectcontour namespaces are automatically created as
part of NSX Application Platform deployment.
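You can confirm that these namespaces exist with kubectl; the expected names listed in the comment are the ones created by the deployment.
kubectl get namespaces
# After a successful deployment, cert-manager, nsxi-platform, and projectcontour appear
# in the output, along with the default Kubernetes namespaces.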
8-40 Pods Available After NSX Application
Platform Deployment
The example shows an extract of the pods created during NSX Application Platform deployment.
The example shows an extract of the pods created during NSX Application Platform
deployment with an Advanced form factor in the nsxi-platform namespace. The output is
cropped. In a healthy environment, the status of all pods should be Running. In a Standard form
factor deployment, fewer pods are visible because the analytics and data storage pods are not
deployed.
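A quick way to validate the pods is to list them in the nsxi-platform namespace and filter out the ones that are not in the Running state; the following is a generic kubectl sketch.
kubectl get pods -n nsxi-platform                                         # list all platform pods
kubectl get pods -n nsxi-platform --field-selector=status.phase!=Running  # show only pods that are not Running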
8-41 Lab 14: (Simulation) Deploying NSX
Application Platform
Deploy and validate NSX Application Platform:
5. Validate the NSX Application Platform Deployment from the Tanzu Kubernetes Cluster
8-43 Lesson 3: Malware Prevention
• Describe the malware prevention packet flows for known and unknown files
8-45 About Malware Prevention
Malware Prevention detects and prevents malicious file transfers by using a combination of
signature-based detection of known malware, and static and dynamic analysis of malware
samples.
• East-west malware prevention detects and prevents malicious files directly on the guest
VMs.
Malware Prevention protects your environment against viruses, worms, Trojan horses, spyware,
and ransomware.
Malware Prevention uses the following techniques to detect and prevent malicious file transfers:
• Signature-based detection uses databases of known malware patterns and scans the file
and memory of a system for any data matching the pattern of known malicious software.
• Static file analysis extracts the unique characteristics of a file, such as its structure, and uses
machine learning algorithms to classify and identify malware indicators.
• Dynamic file analysis performs memory analysis and observes how the file interacts with the
system, identifying indicators of malicious activity.
Static file analysis examines a file without executing it, whereas dynamic analysis examines the
actions of the file and can occur only while the file is executing. Dynamic analysis is therefore a
behavior-based malware detection mechanism, whereas static analysis relies on the structure
and characteristics of the file.
Malware prevention is a new feature in NSX-T Data Center 3.2 and can be configured in two
different locations:
• East-west malware prevention is configured directly on the guest VMs to prevent malware
from spreading laterally in the data center.
• North-south malware prevention is configured on the edge node to prevent malware from
entering the perimeter.
8-46 About East-West Malware Prevention
East-west malware prevention detects known malicious files on guest VMs and prevents them
from being downloaded.
Guest Introspection agents are installed on the guest VMs and perform the following functions:
• Extract unknown files and send for local and cloud-based analysis:
East-west malware prevention protects the data center from the spread of internal malware and
from malware that makes it past the network perimeter. To perform these tasks, it monitors files
downloaded on the guest VMs for malicious content.
Malware prevention uses signature-based detection of known malware as well as static and
dynamic analyses of malware samples to detect and prevent malware from spreading in the
environment.
Local analysis, which is a combination of static analysis and machine learning-based analysis of
files, is performed on the ESXi host.
8-47 Use Cases for East-West Malware
Prevention
East-west malware prevention provides malware detection and prevention capabilities for data
center and VDI end users.
East-west malware prevention uses distributed firewall capabilities to protect end users from
downloading malicious content.
East-west malware prevention protects guest VMs and VDIs (virtual desktops) from lateral
malware spreads. Each ESXi host includes a service virtual machine (SVM), which inspects all files
that are downloaded at the guest level.
8-48 Requirements for East-West Malware
Prevention
NSX east-west malware prevention has the following requirements:
• The NSX-T Data Center environment must be configured with a valid license for malware
prevention.
• At a minimum, NSX Application Platform must be deployed with the standard form factor in
the environment.
• ESXi nodes require Internet access to get file reputations and to send files for cloud-based
analysis.
For more information about types of valid licenses for malware prevention, see
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmware-nsx-datasheet.pdf.
In Windows operating systems, the thin agent is installed as part of VMware Tools.
In Linux operating systems, the thin agent is a separate package that can be downloaded from
the VMware web page. For more information about installing the Guest Introspection thin agent
on Linux virtual machines, see NSX-T Data Center Administration Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-4871C429-CFE6-41C9-86C9-7FCFE9C95EC8.html.
8-49 East-West Malware Prevention
Architecture
East-west malware prevention includes ESXi hosts, NSX Application Platform, NSX Threat
Intelligence Cloud, and NSX Advanced Threat Analyzer Cloud.
— Obtains file reputations from VMware Carbon Black Cloud and caches them for use in
the NSX environment
8-50 NSX Application Platform Components
The malware prevention feature requires the Standard form factor for NSX Application
Platform.
In NSX Application Platform, the main components that are deployed for malware prevention
are as follows:
• Security Analyzer:
— Reputation service
— Messaging pod
The Standard form factor is sufficient for the Malware Prevention feature, but the Advanced
form factor is mandatory to install NSX Intelligence.
In NSX Application Platform, the main components that are deployed for malware prevention
are as follows:
• Security Analyzer collects all information that is received from the messaging pod and
fetches reputations and signatures from VMware Carbon Black Cloud through the
reputation service. Security Analyzer maintains two databases:
— The NSX Advanced Signature Distribution Service (ASDS) back end gathers the verdict
and reputations for all known files.
• NSX Cloud Connector sends files for dynamic analysis to NSX Advanced Threat Analyzer
Cloud. NSX Cloud Connector acts as a gateway between on-premises services and NSX
Advanced Threat Analyzer Cloud. Its purpose is to centralize communication and provide an
authenticated channel between clients and the cloud.
8-51 ESXi Host Components
The main ESXi host components for east-west malware prevention are as follows:
• Service VM:
— Security hub
— RAPID
— ASDS cache
The main ESXi host components for east-west malware prevention are as follows:
• Guest virtual machine: Runs a Guest Introspection agent, called the thin agent, which
offloads files for scanning to the service virtual machine (SVM).
— In Windows operating systems, the thin agent is installed as part of VMware Tools and
includes the following components: NSX Network Introspection (vnetWFP.sys), NSX
File Introspection (vsepflt.sys), and VMCI (vsock.sys).
— In Linux operating systems, the thin agent is a separate package that can be
downloaded from the VMware webpage. It does not require open-vm-tools. GLib 2.0
must also be installed on the Linux VM.
• Context Multiplexer (MUX): Relays messages between the guest VMs and the SVM,
maintains the SVM configuration, and processes the east-west malware prevention policies.
The Context Multiplexer (MUX) is installed as a VIB during transport node preparation.
• The Service VM (SVM) is a VM appliance that is deployed on every ESXi host that is part of a
Malware Prevention-enabled cluster. The SVM monitors files that are offloaded from the
guest VMs, performs local analysis, and connects to the NSX application platform for file
reputation and cloud-based analysis. The SVM contains the following modules:
— Guest Introspection agent: Relays events and data received from the guest VM to the
Security Hub.
— Security Hub: Collects file events, gets verdict for known files, and sends files for local
and cloud-based analysis.
— RAPID (Rapid API for Detection): Provides local analysis of the file. It uses a
combination of static analysis and machine learning based analysis of the files.
8-52 East-West Malware Prevention Packet
Flow for a Known File
The following packet flow occurs when a transfer of a known file is detected on the guest VM:
1. The thin agent extracts the file, computes the hash, and provides information to the security
hub through the MUX and the Guest Introspection agent.
2. The security hub checks whether the file is known in the local ASDS cache by sending the
hash.
4. If the file is not in the local cache, ASDS queries the ASDS back end internally for the file
reputation.
5. The ASDS back end sends the verdict back to the security hub, and the appropriate action
is taken.
8-53 East-West Malware Prevention Packet
Flow for an Unknown File
If the security hub cannot retrieve the file reputation from the ASDS back end, the file is
considered unknown and is sent for local and cloud-based analysis:
6. The security hub sends the file to the RAPID module to perform local analysis.
7. Based on the malware prevention policy and local analysis results, RAPID sends the file to
the NSX Advanced Threat Analyzer Cloud for analysis through the NSX Cloud Connector.
This step occurs only if the policy is set up for cloud-based analysis. If the cloud-based
analysis is not set up, only the local analysis verdict is sent to the security hub.
8. NSX Advanced Threat Analyzer Cloud sends the combined verdicts of the local and cloud-
based analysis to the security hub, and the appropriate action is taken.
If the verdict of a file is malicious, and if the file's type is portable executable, the security
hub sends the file's hash to the reputation service to cross-check its reputation. This step is
performed to reduce the false positives. The file reputation service queries VMware Carbon
Black Cloud to retrieve the file reputation.
9. The security hub collects verdicts and statistics and sends an event to the security analyzer.
10. The security analyzer reports the verdict and statistics to NSX Manager.
The security analyzer polls the security hub for the local and cloud-based analysis verdicts and
updates the ASDS back end accordingly. This data is used for future downloads of the same file.
8-54 Activating Malware Prevention on the
NSX Application Platform
To enable the Malware Prevention feature, you select System > Configuration > NSX
Application Platform and select the feature.
8-55 Setting the Cloud Region
You select the cloud region, run prechecks, and click ACTIVATE.
The NSX Malware Prevention installation deploys both NSX Cloud Connector and the
components required for malware prevention.
You specify the NSX Advanced Threat Analyzer Cloud instance that you want your
environment to connect to.
The FQDN for the United States cloud is nsx.west.us.lastline.com, and the FQDN for the
European cloud is nsx.nl.emea.lastline.com. NSX Malware Prevention uses HTTPS port 443 to
access NSX Advanced Threat Analyzer Cloud.
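You can verify that the selected cloud region is reachable over HTTPS before starting the activation. The following connectivity sketch uses the FQDNs listed above and standard tools; it only checks basic reachability on port 443.
# Check HTTPS reachability to NSX Advanced Threat Analyzer Cloud (United States region shown).
curl -v --connect-timeout 10 https://nsx.west.us.lastline.com >/dev/null
# Alternatively, inspect the TLS handshake directly.
openssl s_client -connect nsx.west.us.lastline.com:443 </dev/null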
Before proceeding with the installation of NSX Malware Prevention, the installation wizard
verifies that the license is valid and that NSX Advanced Threat Analyzer Cloud is accessible.
NSX Cloud Connector is a shared component between NSX Network Detection and Response
and NSX Malware Prevention. The NSX Cloud Connector deployment is skipped if NSX
Network Detection and Response is already configured in the environment.
In environments where both these features are installed, changing the cloud region requires
reinstalling both NSX Network Detection and Response and NSX Malware Prevention. Modifying
the cloud region after the installation is not supported.
8-57 Service Registration
Before you can use Malware Prevention on the transport nodes, you must register the Malware
Prevention service and deploy SVMs on each host.
You register the Malware Prevention service with NSX Manager with the following API call:
POST https://sa-nsxmgr-01.vclass.local/napp/api/v1/malware-prevention/svm-spec
Body:
{
  "ovf_url": "<OVF_PATH>",
  "deployment_spec_name": "MPS-SVM",
  "svm_version": "3.2"
}
You must use a REST API call to register Malware Prevention.
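For example, the registration call might be issued with curl as follows. The endpoint and body are taken from this slide; the credentials and OVF path are placeholders.
# Register the Malware Prevention service (credentials and OVF path are placeholders).
curl -k -u admin:'<password>' -X POST \
  -H 'Content-Type: application/json' \
  -d '{
        "ovf_url": "<OVF_PATH>",
        "deployment_spec_name": "MPS-SVM",
        "svm_version": "3.2"
      }' \
  https://sa-nsxmgr-01.vclass.local/napp/api/v1/malware-prevention/svm-spec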
8-59 Service Deployment
To deploy the SVMs on each host, you select System > Configuration > Service Deployments
> Deployment.
• All hosts inside the cluster are deployed with an instance of the service.
• If a new host is added to the cluster, it is automatically deployed with a service instance.
After clicking SAVE, the deployment of the SVMs on each ESXi host starts. You can monitor the
deployment by looking at the tasks in vCenter Server.
8-60 Service Deployment Validation from the
NSX UI
You can validate the service instance deployment from the NSX UI by navigating to System >
Configuration > Service Deployments > Service Instances.
In a healthy environment, both the Deployment Status and the Health Status appear as Up.
The SVM is deployed in vCenter Server and is up and running with two configured interfaces:
• The Control network is defined by the system and gets the IP 169.254.1.22.
8-62 Creating East-West Malware Prevention
Profile
The Malware Prevention profile defines the types of files to be analyzed.
You create a profile for Malware Prevention by navigating to Security > Policy Management >
IDS/IPS & Malware Prevention > Profiles > Malware Prevention.
8-63 Creating Rules for East-West Malware
Prevention
To create a rule, you select Security > Policy Management > IDS/IPS & Malware Prevention >
Distributed FW Rules.
A malware prevention rule contains a set of instructions that determine which file is analyzed,
including the source and destination, the services, the malware prevention profile, where to
apply the rule, and the detection mode.
Malware prevention rules must be applied to the group of VMs that you want to protect. The
rules do not take effect if the Applied To field is set to the distributed firewall.
NSX-T Data Center 3.2 includes the following modes for a malware prevention rule:
• Detect Only: Malicious files are detected and reported, but they are not blocked.
• Detect & Prevent: Malicious files are detected and blocked from being downloaded.
Only one malware prevention profile can be attached to a rule. But a rule can have both a
malware prevention profile and an IDS/IPS profile.
8-64 About North-South Malware Prevention
North-south malware prevention detects known malicious files when they enter the perimeter
on the NSX gateway firewall.
If the file is unknown, NSX Edge extracts the file and sends it for local and cloud-based analysis.
In NSX-T Data Center 3.2, only detection of malicious files is supported for north-south traffic;
prevention is not supported.
The detection system compares the file hashes (SHA1/MD5/SHA256) to file hashes of known
malware.
Local analysis, which is a combination of static analysis and machine learning-based analysis of
the files, is performed on NSX Edge.
8-65 Use Cases for North-South Malware
Prevention
North-south malware prevention provides malware detection capabilities at the perimeter of the
data center.
North-south malware prevention monitors and generates alerts when users want to download
malicious files from an external network or from public clouds.
8-66 Requirements for North-South Malware
Prevention
North-south malware prevention has the following requirements:
• The NSX-T Data Center environment must be configured with a valid license for malware
prevention.
• At a minimum, the NSX application platform with the standard form factor must be
deployed in the environment.
• NSX Edge nodes require Internet access to get the file reputations and to send files for
cloud-based analysis.
For additional information about the types of licenses that are valid for malware prevention, see
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmware-nsx-datasheet.pdf.
The sizing requirements for extra-large edge nodes are 16 vCPUs, 64 GB of RAM, and 200 GB of
storage.
8-67 North-South Malware Prevention
Architecture
North-south malware prevention architecture includes the NSX Edge node, NSX Application
Platform, NSX Threat Intelligence Cloud, and NSX Advanced Threat Analyzer Cloud.
The north-south malware prevention architecture shares the same components as east-west
malware prevention, except that the NSX Edge node takes the place of the ESXi hosts.
8-68 NSX Edge Components
The main components on the edge node for north-south malware prevention are as follows:
• IDS/IPS engine
• Security Hub
• RAPID
• ASDS Cache
The main components on the edge node for north-south malware prevention perform the
following functions:
• IDS/IPS engine: Extracts files and relays events and data to the security hub
North-south malware prevention uses the file extraction features of the IDS/IPS engine that
runs on NSX Edge for north-south traffic.
• Security hub: Collects file events, gets verdicts for known files, sends files for local and
cloud-based analysis, and sends information to the security analyzer
• RAPID (Rapid API for Detection): Provides local analysis of the file
• ASDS Cache (NSX Advanced Signature Distribution Service): Caches reputation and
verdicts of known files
8-69 North-South Malware Prevention Packet
Flow for a Known File
The packet flow for north-south malware prevention is similar to east-west malware prevention.
2. The IDS/IPS engine running on the edge extracts the file, computes the hash, and provides
information to the security hub.
3. The security hub uses the hash to verify whether the file is known to the local ASDS cache.
5. If the file is not in the local cache, ASDS queries the ASDS back end internally for the file
reputation.
6. The ASDS back end sends the verdict back to the security hub, and appropriate action is
taken.
8-70 North-South Malware Prevention Packet
Flow for an Unknown File
If the security hub cannot retrieve the file reputation from the ASDS back end, the file is sent for
local and cloud-based analysis.
7. The security hub sends the file to the RAPID module to perform local analysis.
8. Based on the NSX Policy and local analysis results, RAPID sends the file to NSX Advanced
Threat Analyzer Cloud for analysis through NSX Cloud Connector.
This step takes place only if the policy is set up for cloud-based analysis. If the cloud-based
analysis is not set up, only the local analysis verdict is used and sent back to the security hub.
9. NSX Advanced Threat Analyzer Cloud sends the combined verdicts of the local and cloud-
based analysis to the security hub, and the appropriate action is taken.
If the verdict of a file is malicious and if the file's type is Portable Executable, the security
hub sends the file's hash to the reputation service to cross-check its reputation. This step is
performed to reduce the false positives. The file reputation service queries VMware Carbon
Black Cloud to retrieve the file reputation.
10. The security hub collects verdict information and statistics and sends an event to the
security analyzer.
11. The security analyzer reports the verdict and statistics to NSX Manager.
The security analyzer polls the security hub for the local and cloud-based analysis verdicts and
updates the ASDS back end accordingly. This data is used for future downloads of the same
file.
8-72 Creating North-South Malware
Prevention Profiles
You create a malware prevention profile by navigating to Security > Policy Management >
IDS/IPS & Malware Prevention > Profiles > Malware Prevention.
8-73 Creating Rules for North-South Malware
Prevention
Gateway firewall rules specify the parameters on which north-south malware prevention is
applied. The rules are enforced on the selected Tier-1 gateway.
Only one malware prevention profile can be attached to a rule, but a rule can have both a
malware prevention profile and an IDS/IPS profile.
You create a rule by navigating to Security > Policy Management > IDS/IPS & Malware
Prevention > Gateway FW Rules.
8-74 Malware Prevention Dashboard (1)
On the Potential Malware tab, you can find the list of all detected files that are potentially
harmful to the system for both east-west and north-south traffic.
You can access the UI dashboard by selecting Security > Threat Detection & Response >
Malware Prevention.
On the Potential Malware tab, only files with a potentially harmful verdict appear. Files with a
benign verdict are not shown on this tab.
The NSX UI shows the number of files detected, the time of the detection, the verdict
(malicious, suspicious, or uninspected), and the malware family and class.
• -1 (gray): Uninspected
Uninspected files are not analyzed because they appear in the allowlist. Files in the allowlist
are not blocked even if they are classified as suspicious or malicious.
8-75 Malware Prevention Dashboard (2)
On the All Files tab, you can find the list of all inspected files.
You can click an event to view more information about the files, including details such as file
type, filename, last client that downloaded the file, the number of inspections, and so on.
8-76 About the Allowlist
Files in the allowlist are not blocked even if they are classified as suspicious or malicious.
You can add files to the allowlist after they are detected by the system. You select Security >
Threat Detection & Response > Malware Prevention.
You can list all files that are present in the allowlist by selecting Security > Policy Management >
IDS/IPS & Malware Prevention > Settings > Malware Prevention.
8-77 Lab 15: (Simulation) Configuring Malware
Prevention for East-West Traffic
Configure Malware Prevention for east-west traffic:
• Describe the malware prevention packet flows for known and unknown files
8-79 Lesson 4: NSX Intelligence
8-81 About NSX Intelligence
NSX Intelligence is a distributed analytics solution that provides visibility and dynamic security
policy enforcement for NSX-T Data Center environments, including:
8-82 Use Cases for NSX Intelligence
NSX Intelligence enables several capabilities for security administrators.
8-83 NSX Intelligence Requirements
The requirements for using NSX Intelligence are:
• NSX Application Platform with an Advanced form factor must be deployed in the
environment.
• You require a valid NSX license to use the NSX Intelligence features.
• You must have an Enterprise Administrator role to start and stop data collection.
The Enterprise Administrator role starts and stops data collection. Other user roles, such as
Security Administrators, can visualize the NSX Intelligence data and create and apply
recommendations. However, an Enterprise Administrator role is mandatory to start and stop
data collection.
8-84 NSX Intelligence Installation
Before you can start using the NSX Intelligence visualization, recommendation, and suspicious
traffic detection capabilities, you must install the NSX Intelligence feature from the NSX UI or
API.
The installation process runs some prechecks to ensure that the feature can be successfully
installed in the environment.
You can install NSX Intelligence from the NSX UI by navigating to System > Configuration >
NSX Application Platform. The NSX Intelligence tile can be found under the features section of
NSX Application Platform.
The NSX Intelligence tile is unavailable if NSX Application Platform is not deployed with an
Advanced form factor in the environment or if the required licenses are not available.
For information about upgrading from an earlier version of NSX Intelligence to NSX Intelligence
3.2 or later, see Activating and Upgrading VMware NSX Intelligence at
https://docs.vmware.com/en/VMware-NSX-Intelligence/3.2/install-upgrade/GUID-9F91CFBC-DE26-451C-90E0-5AC07117BFFD.html.
8-85 Validating the NSX Intelligence Installation
You can use common kubectl commands to better understand the state of the NSX Intelligence
deployment.
Validate the successful deployment of the NSX Intelligence pods by running the following
command in your Kubernetes cluster:
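A minimal sketch of this check, assuming the default nsxi-platform namespace, is shown below. The exact pod names vary by version, so the grep pattern is only an example.
kubectl get pods -n nsxi-platform                           # all NSX Application Platform and NSX Intelligence pods
kubectl get pods -n nsxi-platform | grep -i intelligence    # example filter for NSX Intelligence pods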
8-86 Granular Data Collection
NSX Intelligence 3.2 provides the ability to select the standalone hosts or clusters for which you
want to enable data collection.
On new deployments of NSX Intelligence, data collection is enabled by default on all hosts and
clusters. As an NSX administrator, you can choose to collect data only for particular standalone
hosts or clusters. This approach reduces the amount of data to capture and process, which frees
resources in the environment.
You can adjust the NSX Intelligence data collection settings from the NSX UI by navigating to
System > Settings > NSX Intelligence.
8-87 NSX Intelligence Visualization (1)
You can visualize flows for VMs and security groups by navigating to Plan & Troubleshoot >
Discover & Plan > Discover & Take Action.
Users can filter data flows based on Security groups, VMs, physical servers, and IP addresses.
In addition, NSX Intelligence offers enhanced filtering capabilities to more granularly define flows
that are displayed in the canvas.
• Unprotected: The traffic flow matches the default firewall rule to allow, drop, or reject any
traffic from any source and any destination. More granular security policies are needed to
secure the environment.
• Blocked: The traffic flow matches a more granular rule than the default rule that drops or
rejects traffic.
• Allowed: The traffic flow matches a more granular rule than the default rule that allows
traffic.
NSX Intelligence 3.2 includes the following visualization enhancements:
• Flow visualization of current traffic: NSX Intelligence offers a new time range for the
visualization of traffic flows called Now. This time range allows you to visualize the most
recent flows in your environment.
• Display of external IP addresses for traffic flows: Traffic flows now display public IP
addresses used as sources and destinations instead of ANY.
8-88 NSX Intelligence Visualization (2)
You can examine the details of traffic flows by clicking the corresponding line in the canvas.
The screenshot displays details about two unprotected flows from sa-app-01 and sa-db-01 by
using HTTP and MySQL, respectively.
• Source: Name of the source VM, source IP, the user, and the process run
8-89 NSX Intelligence Recommendations (1)
You can start security recommendations by navigating to Plan & Troubleshoot > Discover &
Plan > Recommendations.
You can initiate multiple recommendations at one time. However, they are processed serially.
8-90 NSX Intelligence Recommendations (2)
Recommendations analyze traffic data for a given set of VMs or a security group for a specified
period. Recommendations suggest security groups, services, and distributed firewall rules.
• Time Range
• Connectivity Strategy
• Recommendation Output
You configure the following parameters to start a recommendation:
• Selected Entities in Scope: VMs, physical servers, and security groups can be used as inputs
for the recommendation. Security groups can include virtual machines, segment ports,
segments, and VIFs.
NSX Intelligence 3.2 allows the selection of multiple groups as the scope for a new
recommendation. These groups must contain no more than 250 effective compute entities in total.
To enhance the fidelity of the recommendations in brownfield deployments, NSX
Intelligence 3.2 also considers the existing distributed firewall policies applied to the groups
selected as the scope for a new recommendation.
• Time Range: Period for which data is analyzed. It ranges from the last 1 month to the last 1 hour.
• Connectivity Strategy:
— Create Rules For:
– All Traffic: This default option considers all outbound, inbound, and intra-application
traffic flow types.
– Incoming and Outgoing Traffic: This option considers all traffic flow types that
originate from inside and from outside the application boundary.
– Incoming Traffic: This option only considers traffic flow that originates outside the
application boundary.
— Default Rule:
– None: This default option does not create any default rule for the security policy.
– Allowlist: This option creates a default drop rule.
– Denylist: This option creates a default allow rule.
• Recommendation Output:
— Compute-based: This default option recommends security groups, including VMs.
— IP-based: This option recommends security groups, including a static list of IPs.
• Recommendation Service Type:
— L4 Services: This default option generates L4 services and rules as an output for the
recommendation.
— L7 Context Profiles: L7 rules, including context-profile information, are recommended
for flows with L7 application id information. If application information is not available, L4
recommendations are generated.
• Group Reuse Threshold: With NSX Intelligence 3.2, users can customize the group reuse
threshold to determine whether existing groups should preferably be reused or new ones
created instead. A low threshold of around 10 percent represents an aggressive group
reuse, whereas a high threshold of 100 percent indicates minimal group reuse. The default
group reuse threshold is set to 80 percent.
8-91 NSX Intelligence Recommendations (3)
The Monitoring column indicates whether VM changes are detected after the initial
recommendation.
If VMs change their group membership after the initial analysis session, a rerun flag is set and
users are prompted to rerun the recommendation to analyze changes.
8-92 NSX Intelligence Recommendations (4)
The recommended distributed firewall rules, security groups, and services can be published in
NSX Manager.
After the recommendation session is completed, recommendations about the distributed firewall
rules, security groups, and services that should be created to secure the environment are
provided. You can publish these recommendations in NSX Manager and the distributed firewall
rules, security groups, and services are automatically configured for you. You can customize the
recommendations before final publication. This customization can include changing the names of
the recommended rules and security groups.
8-93 Suspicious Traffic Detection
Suspicious Traffic Detection analyzes network traffic to gain insights into advanced threats. The
threat detection capabilities of NSX Intelligence have been significantly enhanced in the 3.2
release with the addition of new detectors.
Netflow Beaconing
Suspicious Traffic Detection was first introduced in NSX Intelligence 1.2 as a Tech Preview
feature, and it was formerly called Network Traffic Analysis.
• Horizontal Port Scan: Detects and alerts if an intruder tries to attack a single port or service
across multiple virtual machines. It is also known as sweeping.
• Vertical Port Scan: Detects and alerts if an intruder tries to attack multiple open ports or
services of a target virtual machine.
• Unusual Traffic Drop: Detects and alerts if an unusually high amount of traffic is dropped by
a distributed firewall rule.
• Uncommonly Used Port: Detects and alerts if a nonstandard port is used for a given
protocol. For example, SSH traffic runs on a port other than the standard port 22.
• Unusual Remote Services Connections: Detects and alerts if suspicious behavior is observed
for remote connections such as Telnet, SSH, and VNC, as well as remote RDP/RDS sessions.
In the 3.2 release, the threat detection capabilities of NSX Intelligence have been significantly
enhanced with the addition of the following detectors:
• Data Upload/Download: Detects and alerts if an unusually large amount of data is uploaded
or downloaded from a virtual machine.
• Destination IP Profiler: Detects and alerts if a virtual machine connects to an IP address that
is not part of its typical communication pattern.
• Server Port Profiler: Detects and alerts about suspicious ports accessed on a target
machine.
• Port Profiler: Detects and alerts about suspicious ports accessed from a source virtual
machine.
• DNS Tunneling: Detects and alerts about an unusual volume of differing DNS requests
towards the same root DNS name. This action might suggest an attempt to exfiltrate data
over DNS.
• Domain Generator Algorithm: Detects and alerts about suspicious DNS traffic from a virtual
machine, indicating potential activity from DGA malware. Domain Generating Algorithms are
used by cybercriminals to prevent their servers from being blocked or taken down. The
algorithm produces new domains on demand that a malware sample can use as its
Command & Control server.
• Unusual Network Traffic Pattern: Detects and alerts about deviations from predicted
network traffic patterns for a given virtual machine.
8-94 Configuring Detector Definitions
You can enable the detectors you are interested in from the Detector Definitions tab in the
NSX UI. All detectors are disabled by default.
Suspicious Traffic Detection is only supported for hosts and clusters that have data collection
enabled.
When data collection is enabled, you can enable the detectors you are interested in by
navigating to Threat Detection & Response > Suspicious Traffic > Detector Definitions.
8-95 Visualizing Detected Threats (1)
Detected Threats are displayed in the Detection Events tab. Threats are classified according to
the MITRE ATT&CK Framework.
You visualize the threat detection events by navigating to Threat Detection & Response >
Suspicious Traffic > Detection Events.
The Detection Events tab displays all threat detection events identified in the system classified
according to the MITRE ATT&CK Framework:
Events that cannot be clearly mapped to an existing MITRE ATT&CK Framework tactic or
technique are categorized under the Other category.
Threat detection events are graphically represented by using a histogram. Security administrators
can specify the period that they are interested in by adjusting the blue vertical lines.
Each threat event type is represented by a dot in the histogram. The size of the dot is
proportional to the number of occurrences of an event.
8-96 Visualizing Detected Threats (2)
Each detected threat can be expanded to retrieve additional details, including its impact score,
severity, detector type, affected objects, detected anomalies, and so on.
The impact score for a given event is calculated by combining its severity and the confidence of
the detector technique.
The detector type is also displayed in the event details, along with a brief description.
The affected objects, such as target or source virtual machines, are also displayed here.
Finally, depending on the detector type, the deviation between the normal pattern of behavior
and the anomaly is also included.
8-98 Lesson 5: NSX Network Detection and
Response
• Explain the architecture of NSX Network Detection and Response in NSX-T Data Center
• It collects network traffic across on-premises networks, cloud, and hybrid cloud
infrastructures.
• It uses artificial intelligence techniques to analyze network traffic and gain insights into
advanced threats.
• It helps security teams to visualize the entire attack and trigger the appropriate response.
8-101 NSX Network Detection and Response
Use Cases
Security teams use NSX Network Detection and Response to perform several functions.
Security teams can use NSX Network Detection and Response for the following purposes:
• Detect all threat movements: NSX Network Detection and Response can detect threats
entering the network perimeter (north-south), as well as attacks that move laterally within the
network perimeter (east-west).
• Visualize the entire attack: NSX Network Detection and Response enables you to visualize a
complete campaign blueprint and a detailed threat timeline across the network so that you
can quickly understand the scope of an attack and prioritize resources. Additionally, NDR
maps to the MITRE ATT&CK tactics and techniques for greater understanding of the key
events in a campaign.
• Prevent intrusions faster: NSX Network Detection and Response uses real-time, scalable AI
and machine learning to detect and stop threats at wire speed.
• Reduce false positives: NSX Network Detection and Response delivers the industry’s
highest fidelity insights into advanced threats and reduces false positives by up to 90
percent. NDR learns in real time to update detection fidelity.
8-102 NSX Network Detection and Response
High-Level Architecture
The high-level architecture of NSX Network Detection and Response contains the following
tiers:
• Scalable Analytics tier
• Management tier
• Sensor tier
The high-level architecture of NSX Network Detection and Response contains distinct tiers with
the following functions:
• The Scalable Analytics tier is like the brain of NSX Network Detection and Response. It is a
distributed analytics platform containing multiple nodes that perform deep content
inspection, network traffic analysis, network and asset profiling, and anomaly detection.
• The Management tier provides the REST API and a web-based UI for all user configurations.
It also displays alerts and intrusion events.
• The Sensor tier includes one or more sensors, and it is responsible for collecting network
traffic across the environment.
8-103 NSX Network Detection and Response in
NSX-T Data Center
In NSX-T Data Center 3.2, the capabilities of NSX Network Detection and Response are tightly
integrated with on-premises NSX deployments:
• NSX Network Detection and Response collects security events and system configuration
data from the NSX Edge nodes, NSX Manager, and ESXi hosts (Sensor tier).
• The collected data is analyzed and correlated using a cloud-based distributed analytics
platform called NSX Advanced Threat Analyzer Cloud (Scalable Analytics tier).
• Cloud Connector displays alerts and intrusion events by using a web-based UI
(Management tier).
NSX Network Detection and Response is also used for dynamic file analysis or sandboxing when
Malware Prevention is enabled in the NSX environment.
8-104 NSX Network Detection and Response
Architecture (1)
NSX Network Detection and Response collects the following data from the NSX-T Data Center
platform:
• Files and anti-malware file events from the NSX Edge nodes and Security Analyzer
In this release, North-South IDPS events and East-West anti-malware events are not collected.
8-105 NSX Network Detection and Response
Architecture (2)
Data collected from the NSX-T Data Center environment is aggregated and analyzed as follows:
1. Security Analyzer receives anti-malware file events from the NSX Edge nodes and forwards
them to the Cloud Connector.
2. Cloud Connector gathers IDPS events, anti-malware events, files, and suspicious traffic
events from the NSX platform and forwards them to NSX Advanced Threat Analyzer
Cloud.
3. NSX Advanced Threat Analyzer Cloud analyzes and correlates the IDPS, malware, and
suspicious traffic events and provides insights about ongoing campaigns.
4. Campaign information appears in the NSX Network Detection and Response UI.
Data collected from the NSX-T Data Center environment is aggregated and analyzed as follows:
1. Security Analyzer receives file events from the anti-malware modules in the NSX Edge
nodes and persists them locally. These events are then forwarded to Cloud Connector.
3. NSX Advanced Threat Analyzer Cloud analyzes and correlates the IDPS, malware, and
suspicious traffic events and provides insights about ongoing campaigns. A campaign is a
set of related events that use specific techniques and can be mapped to the MITRE
ATT&CK Framework stages to define an attack story.
4. Campaign information is displayed in the NSX Network Detection and Response UI.
The requirements for installing NSX Network Detection and Response are:
• At a minimum, NSX Application Platform must be deployed with a Standard form factor in the environment. The Advanced and Evaluation form factors are also supported.
• NSX Network Detection and Response requires an NSX Advanced Threat Prevention license.
• NSX Advanced Threat Analyzer Cloud must be reachable from NSX Application Platform.
By installing NSX Network Detection and Response, customers accept sending the following
data to the public cloud:
• Suspicious traffic events for correlation, if NSX Intelligence is available in the environment
NSX Advanced Threat Analyzer Cloud must be reachable from NSX Application Platform.
Deployments without Internet connectivity are not supported.
8-107 NSX Network Detection and Response
Activation (1)
You activate NSX Network Detection and Response by navigating to System > Configuration >
NSX Application Platform > Features.
The NSX Network Detection and Response activation deploys both the Cloud Connector and
the components required by Network Detection and Response.
As part of the activation, you specify the NSX Advanced Threat Analyzer Cloud instance you
want your environment to connect to.
The Cloud Connector is a shared component between NDR and NSX Malware Prevention. The
Cloud Connector deployment is skipped if the NSX Malware Prevention feature is already
configured in the environment.
In environments where both these features are activated, changing the cloud region requires
reinstalling both NDR and NSX Malware Prevention. Modifying the cloud region after the
installation is not supported.
To fully use the capabilities of NSX Network Detection and Response, enable the following NSX
features in the environment:
• NSX Intelligence
• IDPS
Before proceeding with the activation of NDR, the activation wizard verifies that the applied
NSX Advanced Threat Prevention license is valid and that NSX Advanced Threat Analyzer
Cloud is accessible.
The FQDN for the United States cloud is nsx.west.us.lastline.com and the FQDN for the
European cloud is nsx.nl.emea.lastline.com. NDR uses HTTPS port 443 to access NSX Advanced
Threat Analyzer Cloud.
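You can confirm this connectivity requirement before activation with a quick reachability test. The following Python sketch is illustrative only (the FQDNs and port come from this page; the script is not part of the product) and simply attempts a TCP connection to each cloud endpoint on port 443:

    import socket

    # FQDNs listed on this page; NDR reaches NSX Advanced Threat Analyzer Cloud over HTTPS (TCP 443).
    ATA_CLOUD_FQDNS = ["nsx.west.us.lastline.com", "nsx.nl.emea.lastline.com"]

    def is_reachable(host, port=443, timeout=5.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for fqdn in ATA_CLOUD_FQDNS:
        print(fqdn, "reachable on 443:", is_reachable(fqdn))

Run the test from a host on the same network path as NSX Application Platform so that the result reflects the connectivity that NDR actually uses.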
8-108 NSX Network Detection and Response
Activation (2)
You can validate the successful activation of NSX Network Detection and Response from the
NSX UI.
8-109 Validating the NDR and Cloud Connector
Deployments
You verify that Cloud Connector and NDR components are running by using the following
command in your Kubernetes cluster:
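The command itself appears in the accompanying screen capture. As a hedged sketch of the same check, the pod listing can also be scripted as follows; the nsxi-platform namespace used below is an assumption and might differ in your NSX Application Platform deployment:

    import subprocess

    # Namespace is an assumption; replace it with the namespace used by your NSX Application Platform deployment.
    NAMESPACE = "nsxi-platform"

    # Equivalent to running: kubectl get pods -n <namespace> --no-headers
    result = subprocess.run(
        ["kubectl", "get", "pods", "-n", NAMESPACE, "--no-headers"],
        capture_output=True, text=True, check=True,
    )

    # Flag any pod that is not Running or Completed.
    for line in result.stdout.splitlines():
        name, _ready, status = line.split()[:3]
        if status not in ("Running", "Completed"):
            print("Pod", name, "is in unexpected state:", status)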
In a healthy environment, all Cloud Connector and NDR-related pods must show a status of
Running or Completed. The sample output corresponds to an environment with a Standard form
factor where NSX Intelligence is not installed. In an Advanced form factor deployment with NSX
Intelligence installed, additional pods are available, such as the nsx-ndr-worker-nta-
event-processor that is responsible for processing Suspicious Traffic events from NSX
Intelligence.
8-110 Visualizing and Mitigating Attacks
NSX Network Detection and Response creates attack visualizations that give security teams the
context they need to quickly understand the scope of an attack and prioritize their response,
including:
8-111 Accessing the NSX Network Detection
and Response UI
The NSX Network Detection and Response UI is installed as a plug-in in the NSX UI during the
Cloud Connector deployment.
You can access the NSX Network Detection and Response UI by selecting the GO TO
CAMPAIGNS link in the Security Overview page.
Alternatively, you can also access the NSX Network Detection and Response UI from the NSX
Network Detection and Response tile in NSX Application Platform.
8-112 Campaign Overview: Active Threats and
Attack Stages
NSX Network Detection and Response provides a summary of malicious activity in the network
by showing the affected hosts and the stages of the active threats.
NSX Network Detection and Response classifies malicious activity by attack stage in alignment
with the MITRE ATT&CK framework.
8-114 Campaign Timeline
NSX Network Detection and Response also provides a detailed chronology of each stage of an
attack to assist with remediation.
Depending on the severity of the event and other events occurring in the environment at the
time, an event might be correlated into a campaign. Regardless of whether the event is part of
a campaign, it appears on the Events tab in the NSX Network Detection and Response UI.
8-116 Lab 16: (Simulation) Using NSX Network
Detection and Response to Detect
Threats
Install and use NSX Network Detection and Response to detect and visualize advanced threats:
2. Validate the NSX Network Detection and Response Deployment from the CLI
• Explain the architecture of NSX Network Detection and Response in NSX-T Data Center
8-118 Key Points (1)
• NSX Distributed IDS/IPS uses real-time deep packet inspection to identify and prevent
attempts at exploiting vulnerabilities in your applications.
• NSX Distributed IDS/IPS protects against malicious activity, including the exploits of known
application-level vulnerabilities, application denial of service, lateral movement, and client-
side and server-side exploits.
• NSX-T Data Center 3.2 introduces support for behavior-based IDS/IPS, which helps to detect unusual traffic, malicious attacks, and security breaches by comparing network traffic to a baseline of normal traffic.
• NSX Malware Prevention detects and prevents malicious file transfers by using a
combination of signature-based detection of known malware and static and dynamic
analysis of malware samples.
• North-south malware prevention detects known malicious files when they enter the
perimeter on the NSX Edge gateway firewall. It uses the IDS/IPS Engine of the edge node
to extract files.
• NSX Intelligence is a native distributed analytics solution that provides visibility and dynamic
security policy enforcement for NSX-T Data Center environments.
• NSX Network Detection and Response is an advanced threat prevention platform that
provides complete network visibility, detection, and prevention of sophisticated threats.
• In NSX-T Data Center 3.2, the capabilities of NSX Network Detection and Response are
integrated with on-premises NSX deployments.
Questions?
Module 9
NSX-T Data Center Services
9-2 Importance
NSX-T Data Center provides several layer 3 services that can help you address operational
challenges in the virtual network architecture.
4. IPSec VPN
5. L2 VPN
9-4 Lesson 1: Configuring NAT
• Explain how NAT64 facilitates communication between IPv6 and IPv4 networks
9-6 About NAT
In NSX-T Data Center, you can configure NAT on Tier-0 and Tier-1 gateways.
Tier-0 and Tier-1 gateways in active-standby mode support SNAT and DNAT.
Network address translation (NAT) was designed originally to conserve the public Internet
address space. During the 1990s, Internet providers quickly depleted the available IPv4 address
supply. NAT became the primary method for IPv4 address conservation. NAT performs one-to-
one mapping (one public IP address is mapped to one private IP address) or one-to-many
mapping (one public IP address is mapped to multiple private IP addresses).
• Source NAT (SNAT) translates the source IP of the outbound packets to a known public IP
address so that the application can communicate with the outside world without using its
private IP address. SNAT also tracks the reply.
• Destination NAT (DNAT) enables access to internal private IP addresses from the outside
world by translating the destination IP address when inbound communication is initiated.
DNAT also manages the reply. For both SNAT and DNAT, users can apply NAT rules based
on the 5-tuple match criteria.
• Reflexive NAT rules are stateless access control lists (ACLs) that must be defined in both
directions. These rules do not track the connection. Reflexive NAT rules are applied when
stateful NAT cannot be used.
9-7 About SNAT
The SNAT rule changes the source address in the IP header of a packet. The rule can also
change the source port in the TCP or UDP headers.
In the diagram, as packets are received from the NAT VM, the T1-GW-01 Tier-1 gateway
changes the source IP address of the packets from 172.16.101.11 to 80.80.80.1.
You can selectively bypass an existing SNAT rule for specific traffic by creating a No SNAT rule.
SNAT changes the source address in the IP header of a packet. It can also change the source
port in the TCP/UDP headers.
The typical usage is to change a private (RFC 1918) address or port into a public address or port for packets leaving your network. You can create a rule to either apply source NAT or bypass it (No SNAT).
9-8 About DNAT
The DNAT rule changes the destination address in the IP header of a packet. It can also change
the destination port in the TCP or UDP headers.
This rule is typically used to redirect incoming packets with a destination public IP address or
port to a private IP address or port in the network.
You can selectively bypass an existing DNAT rule for specific traffic by creating a No DNAT rule.
9-9 About Reflexive NAT
You can use the reflexive NAT rule when a Tier-0 gateway runs in active-active mode and when
stateful NAT might lead to issues because of asymmetric paths.
Reflexive NAT rules are stateless access control lists (ACLs) that must be defined in both
directions. These rules do not track connections and are sometimes called stateless NAT.
In the diagram, because the same NAT is applied to the NSX Edge Edge-1 and the NSX Edge
Edge-2, asymmetric flows are supported.
When a Tier-0 gateway runs in active-active mode, you cannot configure stateful NAT because asymmetric paths might cause issues. For active-active gateways, you can use reflexive NAT, which is also called stateless NAT.
For reflexive NAT, you can configure a single source address to be translated or a range of
addresses. If you configure a range of source addresses, you must also configure a range of
translated addresses.
For reflexive NAT, you can configure a single source address to be translated or a range of
addresses. If you configure a range of source addresses, you must also configure a range of
translated addresses. The size of the two ranges must be the same. The address translation is
deterministic. The first address in the source address range is translated to the first address in
the translated address range. The second address in the source range is translated to the
second address in the translated range, and so on.
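The position-based mapping can be illustrated with a short, self-contained Python sketch. The ranges below are examples only and are not taken from the diagram:

    from ipaddress import IPv4Address

    def reflexive_translate(source_start, translated_start, range_size, source_ip):
        """Translate source_ip by its position in the source range (first-to-first, second-to-second, ...)."""
        offset = int(IPv4Address(source_ip)) - int(IPv4Address(source_start))
        if not 0 <= offset < range_size:
            raise ValueError("source IP is outside the configured source range")
        return str(IPv4Address(int(IPv4Address(translated_start)) + offset))

    # Example: an 8-address source range mapped to an equal-sized translated range.
    print(reflexive_translate("172.16.101.8", "80.80.80.8", 8, "172.16.101.11"))  # 80.80.80.11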
In the diagram, the source VM (172.16.101.11) on the internal network sends a packet to an
external client (x.x.x.x) on the Internet. The packet is routed to the Tier-0 gateway hosted on
NSX Edge Edge-2, which creates a reflexive NAT entry with source IP 172.16.101.11 and translated IP 80.80.80.1. When the return traffic arrives on NSX Edge Edge-1 (with the destination 80.80.80.1), the same reflexive NAT entry is used to translate 80.80.80.1 to 172.16.101.11.
Because the same NAT is applied to NSX Edge Edge-1 and NSX Edge Edge-2, asymmetric
flows are supported.
• You typically use SNAT to change a private address or port into a public address or port for
packets leaving your network.
• You typically use DNAT to redirect incoming packets with a destination public address or
port to a private IP address or port in your network.
To configure SNAT and DNAT, you provide values for the following options:
• Source IP: Specify a source IP address or an IP address range in CIDR format. If you leave
this text box blank, the NAT rule applies to all sources outside the local subnet.
• Service: Select a single service entry on which the NAT rule is applied.
• Applied To: Select objects that this NAT rule applies to. The available objects are gateways,
interfaces, labels, service instance endpoints, and virtual endpoints.
• Firewall includes the following options:
— Match External Address: To match the firewall rule with the external address of the NAT rule
— Match Internal Address: To match the firewall rule with the internal address of the NAT rule
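The options above correspond to fields in the NSX Policy API. The following Python sketch creates an SNAT rule on a Tier-1 gateway with the requests library. The Manager FQDN, credentials, gateway ID, URL path, and field names are assumptions based on the NSX-T Policy API and should be verified against the API guide for your release; a DNAT rule follows the same pattern with the DNAT action and a destination network.

    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # assumption: replace with your NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # assumption: replace with your credentials

    # Assumed Policy API path: NAT rules live under the gateway's NAT section named USER.
    url = NSX_MANAGER + "/policy/api/v1/infra/tier-1s/T1-GW-01/nat/USER/nat-rules/SNAT-Web"

    snat_rule = {
        "action": "SNAT",
        "source_network": "172.16.101.0/24",         # Source IP (CIDR); leaving it empty matches any source
        "translated_network": "80.80.80.1",          # Translated public IP address
        "firewall_match": "MATCH_INTERNAL_ADDRESS",  # Firewall option described on this page
        "enabled": True,
    }

    response = requests.patch(url, json=snat_rule, auth=AUTH, verify=False)
    response.raise_for_status()
    print("NAT rule applied:", response.status_code)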
9-11 Configuring Reflexive NAT
When a Tier-0 gateway runs in active-active mode, you can use reflexive NAT:
• The address translation is sequential, for example, the first address in the source range is
translated to the first address in the translated range, and so on.
• The two ranges (source range and translated range) must be of equal size.
To configure reflexive NAT, you provide values for the following options:
• Source IP: Specify a source IP address or an IP address range in CIDR format. If you leave
this text box blank, the NAT rule applies to all sources outside the local subnet.
• Service: Select a single service entry on which the NAT rule is applied.
• Applied To: Select objects that this NAT rule applies to. The available objects are gateways,
interfaces, labels, service instance endpoints, and virtual endpoints.
• Firewall includes the following options:
— Match External Address: To match the firewall rule with the external address of the
NAT rule
— Match Internal Address: To match the firewall rule with the internal address of the NAT
rule
9-12 About NAT64
• NAT64 is stateful and requires the Tier-0 gateway to be deployed in active-standby mode.
• NAT64 requires the Tier-1 gateway to be configured with an active-standby edge cluster.
The NAT64 mechanism enables IPv6-to-IPv4 connectivity. NAT64 is based on the RFC 6146 and RFC 6145 standards.
NAT64 allows an IPv6-only client to initiate communications to an IPv4-only server. IPv6 must
initiate the traffic.
NAT64 translates IPv6 packets to IPv4 packets and forwards them to the IPv4 network. This
functionality is designed so that changes are not required to either IPv6 or IPv4 nodes.
9-13 Configuring NAT64 Rules
To configure NAT64 rules:
3. Specify the Tier-0 or Tier-1 gateway where you want to create the rule and click ADD
NAT64 RULE.
• Action: The only supported action is NAT64, which translates between IPv6 and IPv4
addresses.
• Destination: An IPv6 address or IPv6 address range in CIDR format. The prefix must be /96 because the destination IPv4 address is embedded as the last 4 bytes of the IPv6 address.
• Translated: This address is the IPv4 address or address pool for the source IPv6 address.
• Applied To: NAT64 rules can only be applied to uplink interfaces or tags.
• Firewall includes the following options:
— Match External Address: To match the firewall rule with the external address of the
NAT rule
— Match Internal Address: To match the firewall rule with the internal address of the NAT
rule
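Because the rule embeds the IPv4 destination in the last 4 bytes of a /96 prefix, the mapping can be reproduced with Python's ipaddress module. The 64:ff9b::/96 prefix below is the RFC 6052 well-known prefix and is used only as an example; any /96 prefix configured in the NAT64 rule behaves the same way:

    from ipaddress import IPv4Address, IPv6Address, IPv6Network

    PREFIX = IPv6Network("64:ff9b::/96")  # example /96 prefix

    def embed_ipv4(prefix, ipv4):
        """Embed an IPv4 address as the last 4 bytes of a /96 IPv6 prefix."""
        return IPv6Address(int(prefix.network_address) + int(IPv4Address(ipv4)))

    def extract_ipv4(ipv6_address):
        """Recover the IPv4 destination from the low-order 32 bits of the IPv6 address."""
        return IPv4Address(int(ipv6_address) & 0xFFFFFFFF)

    mapped = embed_ipv4(PREFIX, "198.51.100.10")
    print(mapped)                # 64:ff9b::c633:640a
    print(extract_ipv4(mapped))  # 198.51.100.10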
9-15 NAT Packet Flow (1)
The VM (172.16.101.11) attached to NAT-Segment:5000 sends a packet to its default T1-GW-01
gateway:
9-16 NAT Packet Flow (2)
2. The distributed router (DR) of T1-GW-01 manages the packet.
9-17 NAT Packet Flow (3)
3. The T1-GW-01 DR determines that this packet requires address translation, which is
provided by the service router (SR) of the Tier-1 gateway on a remote host. The T1-GW-01
DR sends the packet to the local TEP.
9-18 NAT Packet Flow (4)
4. The packet is encapsulated with the Geneve header, sent across the overlay tunnel, and
arrives at the remote host (TEP).
9-19 NAT Packet Flow (5)
5. The receiving TEP decapsulates the packet and sends the original packet to the SR of T1-
GW-01 for processing. NAT (provided by the SR) translates the source address to
80.80.80.1.
9-20 NAT Packet Flow (6)
6. The packet with the translated IP (80.80.80.1) is sent to the DR of the T0-GW-01 Tier-0
gateway for routing.
9-21 NAT Packet Flow (7)
7. Based on its routing table, T0-GW-01 routes the packet to the DR of T1-GW-02.
9-22 NAT Packet Flow (8)
8. The T1-GW-02 DR determines that the packet is destined for a remote VM and sends the
packet to the local TEP.
9-23 NAT Packet Flow (9)
9. Transport Node B encapsulates the packet and sends it across the overlay tunnel.
9-24 NAT Packet Flow (10)
10. The destination segment, Prod-Segment:5001, is attached to T1-GW-02 on Transport Node
A. The DR of T1-GW-02 routes the packet to the segment based on its routing table.
9-25 NAT Packet Flow (11)
11. The packet arrives at the destination VM.
9-26 Lab 17: Configuring Network Address
Translation
Configure source and destination network address translation rules on the Tier-1 gateway:
3. Create a Segment
4. Attach a VM to NAT-Segment
5. Configure NAT
• Explain how NAT64 facilitates communication between IPv6 and IPv4 networks
9-28 Lesson 2: Configuring DHCP and DNS
Services
9-30 About DHCP
DHCP allows clients to automatically obtain network configuration settings, such as IP
addresses, subnet masks, default gateways, and DNS configuration, from a DHCP server.
The DHCP protocol eliminates the need to configure each network device manually.
• The DHCP client broadcasts a DHCP Discover message to locate all available DHCP servers
on the subnet.
• A DHCP server broadcasts a DHCP Offer message, informing the client that it is available,
proposing a client IP address, subnet mask, default gateway IP address, DNS IP address, IP
lease time, and a DHCP server IP address.
• The DHCP client broadcasts a DHCP Request message to the server, requesting the
proposed IP network configuration data.
• The DHCP client configures its network interface with the proposed IP network
configuration.
9-31 About DHCP Relay
A DHCP relay agent offers a more centralized approach to providing DHCP services across
multiple subnets.
A DHCP relay agent forwards requests and replies between a DHCP server and a DHCP client,
offering the flexibility of placing the DHCP server on a remote network.
From the DHCP client perspective, the IP allocation process remains broadcast based.
The DHCP relay agent converts the local DHCP broadcast message into a unicast message and
sends the unicast message to the DHCP server.
9-32 DHCP in NSX-T Data Center
NSX-T Data Center supports three types of DHCP services.
• DHCP Local Server: A DHCP server that is centrally managed by NSX-T Data Center. This server is local to a single segment.
• DHCP Relay: A DHCP relay agent that is local to a single segment and relays client requests to an external DHCP server.
• Gateway DHCP: A DHCP server that is centrally managed by NSX-T Data Center. This server is available to all segments that are connected to the gateway.
9-33 DHCP Local Server
DHCP Local Server is a DHCP service managed by NSX-T Data Center that is local to the
segment and not available to the other segments in the network.
• It provides a dynamic IP assignment service only to the VMs that are attached to the segment.
• It runs as a service (service router) in the edge nodes of an NSX Edge cluster.
• The IP address of a local DHCP server must be in the subnet that is configured on the
segment.
• It supports Tier-0 and Tier-1 gateways and segments not connected to a gateway.
In this configuration, the DHCP requests are managed in the NSX-T Data Center environment
without relying on an external DHCP server.
DHCP Local Server runs as a service (service router) in the edge nodes of an NSX Edge cluster.
9-34 DHCP Relay
A DHCP relay agent is local to a single segment and relays client requests to an external DHCP
server.
The DHCP client segment must be connected to either a Tier-0 or Tier-1 gateway.
9-35 Gateway DHCP
Gateway DHCP is a DHCP server that is centrally managed by NSX-T Data Center. This server
is available to all segments that are connected to the gateway.
By default, segments that are connected to a Tier-0 or Tier-1 gateway use Gateway DHCP.
The IP address of a Gateway DHCP server can be different from the subnets that are
configured in the segments.
Individual segments connected to a gateway configured for Gateway DHCP can be selectively configured to use a different DHCP type: DHCP Local Server or DHCP Relay.
9-37 Creating a DHCP Profile
To create a DHCP Profile, click Networking > DHCP > ADD DHCP PROFILE and select either
DHCP Relay or DHCP Server as the required profile type.
To create a DHCP Profile, you provide values for the following options:
• (Optional) Tags: Add tags to label static bindings so that you can quickly search or filter
bindings, troubleshoot and trace binding-related issues, or do other tasks.
For DHCP Server profiles, you provide values for these additional options:
• (Optional) Lease Time: Enter the amount of time in seconds for which the IP address is
bound to the DHCP client.
• Edge Cluster: Select an NSX Edge cluster from the drop-down menu.
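The same profile can be created through the NSX Policy API. The Python sketch below is a hedged example: the Manager FQDN, credentials, URL path, field names, and edge cluster path are assumptions and should be checked against the API guide for your release.

    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # assumption
    AUTH = ("admin", "VMware1!VMware1!")            # assumption

    # Assumed Policy API path for a DHCP server profile.
    url = NSX_MANAGER + "/policy/api/v1/infra/dhcp-server-configs/DHCP-Profile-01"

    dhcp_profile = {
        "server_addresses": ["10.10.10.2/24"],   # DHCP server IP address
        "lease_time": 86400,                     # optional lease time, in seconds
        # Assumed path format; use the path of an existing edge cluster in your environment.
        "edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>",
    }

    requests.patch(url, json=dhcp_profile, auth=AUTH, verify=False).raise_for_status()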
9-38 Connecting the DHCP Profile to a
Segment
To connect the DHCP profile to a segment, click Networking > Segments > Edit an existing
segment > SET DHCP CONFIG and select a configured profile.
To connect the DHCP profile to a segment, you provide values for the following options:
9-39 Connecting the DHCP Profile to a
Gateway
To connect the DHCP Profile to a gateway, click Networking > Tier-0 or Tier-1 Gateway, edit
the gateway, set the DHCP configuration, and select a configured profile.
To connect the DHCP profile to a gateway, you provide values for the following options:
9-40 Configuring DHCP Services
To configure DHCP services, click Networking > Segments > Edit an existing segment > EDIT
DHCP CONFIG.
To configure DHCP services, you provide values for the following options:
• DHCP Server Address: If you are configuring a DHCP local server, a server IP address is required.
• Lease Time: Optionally enter the amount of time in seconds for which the IP address is
bound to the DHCP client.
• DNS Servers: Optionally enter the IP address of the domain name server (DNS) to use for
name resolution. A maximum of two DNS servers are permitted.
9-41 About DNS Services
A Domain Name System (DNS) server translates domain names to IP addresses.
Depending on the DNS zone, a DNS forwarder can forward DNS queries to specific DNS
servers.
The resource to resolve, vmbeans.lab.com, is in the lab.com DNS namespace. The DNS
forwarder can be configured to forward DNS queries to the lab.com DNS zone servers.
9-42 About DNS Forwarder
In NSX-T Data Center, the DNS client requests can be forwarded to the external DNS server by
configuring a DNS forwarder in Tier-0 or Tier-1 gateways:
• The DNS forwarder forwards DNS requests from clients to upstream DNS servers.
• The DNS forwarder caches the responses received from the upstream servers, reducing system load and improving performance.
9-44 Configuring DNS Zones
A default DNS zone is required. Optionally, you can configure one or more FQDN DNS zones.
To create DNS zones, click Networking > DNS > DNS Zones > ADD DNS ZONE.
To configure DNS zones, you provide values for the following options:
• Domain: Select Any for the default zone. Select an FQDN for the domain for an FQDN
zone.
• DNS Servers: Enter the IP address of up to three remote DNS servers for this DNS zone.
• (Optional) Source IP: You must specify a source IP if the DNS forwarder service listener IP
is an internal address that is not reachable from the external upstream DNS server.
When you configure a DNS zone, you can specify a source IP for a DNS forwarder to use when
forwarding DNS queries to an upstream DNS server. If you do not specify a source IP, the DNS
query packet source IP will be the DNS forwarder's listener IP.
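As a hedged example, an FQDN zone for lab.com can also be defined through the Policy API. The URL path and field names below are assumptions based on the NSX-T Policy API and should be verified for your release.

    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # assumption
    AUTH = ("admin", "VMware1!VMware1!")            # assumption

    # Assumed Policy API path for a DNS forwarder zone.
    url = NSX_MANAGER + "/policy/api/v1/infra/dns-forwarder-zones/lab-com-zone"

    fqdn_zone = {
        "dns_domain_names": ["lab.com"],          # omit this field for the default zone
        "upstream_servers": ["192.168.110.10"],   # up to three upstream DNS servers
        "source_ip": "10.10.99.1",                # optional source IP, as described above
    }

    requests.patch(url, json=fqdn_zone, auth=AUTH, verify=False).raise_for_status()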
9-45 Configuring DNS Forwarder Services
To create a DNS forwarder service, click Networking > DNS > DNS Services > ADD DNS
SERVICE.
To configure DNS forwarder services, you provide values for the following options:
• Name: You use this name to identify the DNS forwarder service.
• DNS Service IP: Clients send DNS queries to this IP address, which is also known as the
DNS forwarder's listener IP.
• (Optional) Log Level: Used in analysis and troubleshooting. The default log level is info.
When you configure a DNS zone, you can specify a source IP for a DNS forwarder to use when
forwarding DNS queries to an upstream DNS server. If you do not specify a source IP, the DNS
query packet source IP will be the DNS forwarder's listener IP.
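A matching DNS forwarder service on a Tier-1 gateway can be sketched in the same way. Again, the URL path, gateway ID, zone paths, and field names are assumptions to verify against the API guide for your release.

    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # assumption
    AUTH = ("admin", "VMware1!VMware1!")            # assumption

    # Assumed Policy API path for the DNS forwarder on a Tier-1 gateway.
    url = NSX_MANAGER + "/policy/api/v1/infra/tier-1s/T1-GW-01/dns-forwarder"

    dns_forwarder = {
        "listener_ip": "10.10.99.2",                                        # DNS Service IP (listener)
        "default_forwarder_zone_path": "/infra/dns-forwarder-zones/default-zone",
        "conditional_forwarder_zone_paths": ["/infra/dns-forwarder-zones/lab-com-zone"],
        "log_level": "INFO",
    }

    requests.patch(url, json=dns_forwarder, auth=AUTH, verify=False).raise_for_status()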
9-46 Configuring Forwarder Connectivity
The external upstream DNS servers require IP connectivity to the DNS forwarder, which can be
achieved using route advertisement or SNAT.
If the DNS listener IP is not routed to the upstream servers, forwarder connectivity can be provided through route advertisement or SNAT.
9-48 Lesson 3: Configuring NSX Advanced
Load Balancer
• Explain the NSX Advanced Load Balancer components and how they manage traffic
9-50 About NSX Advanced Load Balancer
VMware will use NSX Advanced Load Balancer as part of its load-balancing strategy.
NSX Advanced Load Balancer provides multicloud load balancing, web application firewall,
application analytics, and container ingress services across data centers and clouds.
In NSX-T Data Center 3.2, NSX Advanced Load Balancer is integrated with the NSX
environment. Network administrators can configure and manage the NSX Advanced Load
Balancer components directly from the NSX UI.
9-51 Benefits of NSX Advanced Load
Balancer
NSX Advanced Load Balancer offers several advantages.
• End-to-end automation: NSX Advanced Load Balancer fully automates the life cycle
management and placement of load-balancing components.
• Higher performance: NSX Advanced Load Balancer provides optimal traffic flows with no
traffic hairpinning. Additionally, it supports ECMP-based active-active scale-out mode.
• Easy to troubleshoot: NSX Advanced Load Balancer offers native built-in log analysis and
rich analytics tools that provide end-to-end visibility of the environment. This improved
visibility reduces the troubleshooting time from days to minutes.
• On-demand scalability: The NSX Advanced Load Balancer platform automatically scales
horizontally based on the traffic needs and rebalances the load across all components to
ensure high performance.
9-52 NSX Advanced Load Balancer Feature
Edition Comparison (1)
The table provides a feature comparison between the Basic and Enterprise licenses of NSX
Advanced Load Balancer.
NSX Data Center Advanced and NSX Data Center Enterprise Plus editions include the NSX
Advanced Load Balancer Basic entitlement. The Basic entitlement provides load-balancing
features that are equivalent to the native NSX load balancer.
The NSX Advanced Load Balancer Enterprise edition requires an additional license and provides
all features available in NSX Advanced Load Balancer, including GSLB, WAF, multicloud support,
and so on.
For more information about the licenses and features available for NSX Advanced Load
Balancer, see "NSX Advanced Load Balancer Editions" at
https://avinetworks.com/docs/21.1/nsx-license-editions/.
9-53 NSX Advanced Load Balancer Feature
Edition Comparison (2)
9-54 NSX Advanced Load Balancer Architecture
The NSX Advanced Load Balancer architecture includes the following main components:
• NSX Manager:
— Provides an entry point for the NSX Advanced Load Balancer configuration
— Processes the configuration information and configures the data plane load-balancing
functions
The NSX Advanced Load Balancer architecture is built on software-defined principles. It separates
the data and control plane to deliver scalable application load balancing. The platform provides a
centrally managed, dynamic pool of load-balancing resources for virtual machines and containers.
Since NSX-T Data Center 3.2, NSX Advanced Load Balancer is integrated with the NSX-T Data
Center platform. Users can deploy NSX Advanced Load Balancer Controller instances and
configure application load-balancing services directly from the NSX Manager user interface.
However, the integration between the two platforms is not fully complete at this point. Some
Day-0 operations, such as the initial integration between the NSX Advanced Load Balancer
platform and the NSX-T Data Center environment, must be performed directly on the NSX
Advanced Load Balancer UI.
1. The user uploads the NSX Advanced Load Balancer Controller .ova file to NSX Manager.
2. NSX Manager invokes the NSX Advanced Load Balancer Controller node deployment in
vCenter Server.
4. On startup, NSX Advanced Load Balancer Controller registers with NSX Manager by using
the Avi Lifecycle Manager (Avi LCM).
5. After registration, Avi LCM performs the basic configuration of NSX Advanced Load
Balancer Controller.
6. In a cluster deployment, the NSX Advanced Load Balancer Controller instances 2 and 3 are
deployed, and the cluster is created.
7. Service engine VMs are created or deleted based on the load-balancer configuration.
The NSX Advanced Load Balancer Controller OVA must be version 20.1.6 or later. Previous
versions of the image are not accepted. Only one OVA file can be uploaded at a time. The disk
space required for the OVA bundle is about 4 GB.
Before registration of NSX Advanced Load Balancer Controller with the Avi Lifecycle Manager,
trust is established between the two entities using certificates.
During the provisioning phase, the Avi Lifecycle Manager configures the following parameters on
the controller:
• Controller cluster information, including the cluster VIP if specified during the deployment
1. The user configures NSX Advanced Load Balancer through the Policy UI or Policy API.
2. The Policy module receives and processes the intended user configuration.
5. Advanced LB Provider converts the policy intent to constructs that are specific to NSX
Advanced Load Balancer and configures the NSX Advanced Load Balancer Controller.
9-57 Requirements for NSX Advanced Load
Balancer
Before configuring NSX Advanced Load Balancer from the NSX UI, you must perform the
following steps:
A cloud connector is used to integrate the NSX Advanced Load Balancer platform with the
NSX-T Data Center environment. The cloud connector defines the connectivity information for
the service engines, including the NSX segments and the Tier-1 gateway they are connected to.
9-58 Deploying the NSX Advanced Load
Balancer Controller Cluster
You can deploy the NSX Advanced Load Balancer Controller cluster from the NSX UI by
navigating to System > Configuration > Appliances > NSX Advanced Load Balancer.
Before deploying the NSX Advanced Load Balancer Controller cluster:
• A virtual IP address must be configured for the NSX Advanced Load Balancer Controller
cluster.
Each NSX Advanced Load Balancer Controller cluster requires only one management IP
address. This IP address is used to configure the controller. The management IP address is also
used by the controller to communicate with the service engines.
In a cluster deployment, the management IP addresses for all controllers must belong to the
same subnet.
An NSX Advanced Load Balancer Controller cluster includes one or three nodes. All nodes must
be deployed individually. To ensure that the cluster quorum is maintained, the deployment of the
second node is queued until the third node is deployed.
• Management IP address: Used for communication with the NSX Advanced Load Balancer
Controller cluster.
• Data plane IP address: Used for communication with the server pool network.
9-60 Creating a Cloud Connector (1)
Before NSX Advanced Load Balancer Controller can create service engine VMs, you must
create a cloud connector for the NSX-T Data Center environment.
The cloud connector defines the connectivity information for both the management and the
data plane networks of the service engines, including:
• NSX segment
In a typical deployment, separate NSX segments are used for the management and data plane
interface of the service engines.
All required NSX constructs must be manually preconfigured in NSX Manager before creating
the cloud connector.
Placing the service engines on NSX segments directly connected to a Tier-0 gateway is not
supported.
Both NSX overlay and NSX VLAN-backed segments can be used to configure the management
and data plane network for the service engines.
All required NSX constructs, including transport zones, Tier-1 gateway, and segments, must be
manually preconfigured in NSX Manager before creating the cloud connector.
9-61 Creating a Cloud Connector (2)
You create a Cloud Connector from the NSX Advanced Load Balancer UI by navigating to
Infrastructure > Clouds.
The cloud connector wizard first connects to NSX Manager to fetch the network constructs
available and presents them as options to configure the SE management and data plane networks.
Only one management network is supported for all SE groups created for the NSX-T Data
Center environment.
9-62 Creating a Service Engine Group
A service engine group is a method of grouping service engines to provide data plane isolation
and redundancy:
• SE groups are used to manage load-balancing traffic for a given load balancer service.
• If a service engine fails, another service engine within the same SE group takes over.
Creating a service engine group is optional. A default service engine group is automatically
created for the NSX-T Data Center cloud. However, you can modify the existing service engine
group or create another service engine group.
To create a service engine group from the NSX Advanced Load Balancer UI, navigate to
Infrastructure > Cloud Resources > Service Engine Group.
The high availability mode of the SE group controls the behavior of the SE group if an SE failure
occurs. It also controls how the load is distributed across SEs.
• Legacy high availability active-standby mode: This mode is primarily intended to mimic a
legacy appliance for easy migration to NSX Advanced Load Balancer. It deploys two service engines: one in active and the other in standby mode.
• Elastic high availability N+M mode: This default mode deploys N active SEs for load-balancing purposes, but also deploys M additional SEs within the SE group as a buffer to
absorb any SE failures.
• Elastic high availability active-active mode: This high availability mode load balances services
across a minimum of two SEs that are both in active mode.
The VS Placement across SEs option determines whether the creation of load-balancing
services also creates new SEs or uses those already available.
The following options exist:
• Compact: Attempts to place load balancer services on already existing service engines. This option is the default for the elastic high availability N+M and legacy high availability active-standby modes.
For more information about the SE group configuration options, see "Service Engine Group" at
https://avinetworks.com/docs/21.1/service-engine-group/.
The virtual service is associated with a single SE group and attached to a Tier-1 gateway.
• A server pool is associated with a virtual service and includes the group of servers
responsible for load balancing the client requests.
• A health monitor is attached to a server pool and verifies the status of the servers within it.
• A persistence profile is attached to a server pool and reconnects clients to the same pool
member.
A virtual service is a software construct containing a virtual IP address, a port, and a protocol.
External clients use this combination to access the servers behind the load balancer.
9-64 NSX Advanced Load Balancer
Topologies
NSX Advanced Load Balancer supports different topologies for the SE and server pool
deployment.
NSX Advanced Load Balancer supports two different topologies for the SE and server pool
deployment:
• Service engines on a dedicated segment: This option allows you to manage the IP address
assignments for the SE data plane interfaces and the server pool separately. In the current
version, the NSX segment used for the SE data plane must be created in NSX-T Data
Center before creating an NSX-T Cloud Connector in NSX Advanced Load Balancer
Controller.
• Service engines on a shared segment: With this option, the SE data plane interfaces share
the same address space as the server pool servers and reside in the same NSX segment.
9-65 VIP Placement and Route Redistribution
Before deploying a virtual service, the Tier-1 and Tier-0 gateways must be configured to
distribute the LB VIP.
During the virtual service configuration, the following actions are automatically performed:
1. The VIP is placed on one or more service engines depending on the high availability mode
configured.
2. VIP static routes are created on the Tier-1 gateway where the SE data plane network is
connected.
The VIP static routes include as many next hops as available service engines.
The NSX administrator is expected to configure the Tier-1 gateway to advertise the virtual service VIP to the Tier-0 gateway. For north-south reachability of the VIP, the administrator
should also configure the Tier-0 gateway to redistribute the VIP to the external router through
BGP.
After a virtual service is placed on an SE group, the NSX Advanced Load Balancer Controller
creates VIP static routes on the Tier-1 gateway where the SE data plane network is connected.
The VIP static routes include as many next hops as available service engines. These static routes
do not need to be advertised or redistributed, because they are only locally relevant to the Tier-
1 gateway and the back-end service engines for ECMP purposes.
9-66 North-South Traffic
An external client request to the VIP is managed as follows:
1. The external client traffic enters through the uplink of the Tier-0 gateway.
2. The Tier-0 gateway forwards the traffic to the appropriate Tier-1 gateway.
3. Using the static VIP routes, the Tier-1 gateway routes the request to the VIP on the service
engines.
4. The service engines forward the traffic to the back-end server pool through the Tier-1
gateway.
9-67 East-West Traffic (1)
An internal client request to a VIP connected to the same Tier-1 gateway is managed as follows:
1. The request is sent to the Tier-1A gateway where the client network is connected.
2. Using the static VIP routes, the Tier-1A gateway routes the request to the VIP on the
service engines.
3. The service engines forward the traffic to the back-end server pool through the Tier-1A
gateway.
9-68 East-West Traffic (2)
An internal client request to a VIP connected to a different Tier-1 gateway is managed as
follows:
1. The request is sent to the Tier-1B gateway where the client network is connected.
2. The traffic is routed to the Tier-0 gateway, which forwards it to the Tier-1A gateway.
3. Using the static VIP routes, the Tier-1A gateway routes the request to the VIP on the
service engines.
4. The service engines forward the traffic to the back-end server pool through the Tier-1A
gateway.
9-69 Creating a Virtual IP Address
You create a virtual IP address that you can associate with a virtual service.
During the creation of the virtual IP, you specify the Tier-1 gateway to which the service engine
data plane network is connected.
9-70 Creating a Virtual Service
When creating a virtual service, you specify its virtual IP, port and protocol combination, the
server pool, and the SE group responsible for managing the load-balancing traffic.
You specify the following main parameters during the creation of a virtual service instance:
• Cloud Connector: Specify the cloud connector details for the NSX-T Data Center
environment.
• Application Profile: Determines the behavior of the virtual services, based on application
type.
— System-DNS: Default for processing DNS traffic and uses UDP port 53.
— System-HTTP: Default for processing nonsecure layer 7 HTTP traffic and uses port 80.
— System-L4-Application: The virtual service listens for layer 4 requests on the port that
you specify in the Service Port field. Select this option to use the virtual service for non-
HTTP applications, such as mail or a database.
— System-Secure-HTTP: Default for processing secure layer 7 HTTPS traffic. Uses port
443.
• Virtual Hosting: When selected, this virtual service participates in virtual hosting through the
SSL Server Name Indication (SNI). This method allows a single SSL decrypting virtual
service IP:port to forward traffic to different internal virtual services based on the name of
the site requested by the client. The virtual hosting VS must be either a parent or a child.
Unless your application has a specific requirement for virtual hosting, leave this option blank.
• Virtual IP Address: Select an existing virtual service IP address from the drop-down menu
or create an address.
• Service Port: Specify a port number or range for the virtual service.
• Pool/Pool Group: Select the pool to use for this virtual service using the Pool drop-down
menu. A pool may only be associated with one virtual service.
• Service Engine Group: The group of SEs used to manage load-balancing traffic for the
virtual service. If no group is specified, the Default-SE group is used.
Under the profile section, you can specify the SSL and TCP/UDP profiles for the virtual server.
For more information about the Virtual Service configuration options, see Create a Virtual
Service at https://avinetworks.com/docs/21.1/architectural-overview/applications/virtual-
services/create-virtual-service/.
9-71 Creating a Server Pool
When configuring a server pool, you specify the pool members, load-balancing algorithm, health
monitors, persistence profiles, and other parameters.
You specify the following parameters during the creation of a server pool:
• Load-balancing algorithm: The selected load-balancing algorithm controls how the incoming
connections are distributed among the servers in the pool.
— Static Membership: Manually enter the IP address and port for each member.
— IP Group: Select an existing IP pool profile or create a profile.
• Cloud Connector: Specify the cloud connector details for the NSX-T Data Center
environment.
• VRF Context: Virtual Routing and Forwarding (VRF) is a method of isolating traffic within a system. VRF is also called a route domain in the load balancer community. A global VRF context is created by default. Network administrators might create custom VRF contexts to isolate traffic between different tenants or subnets.
• Tier-1 gateway: Specify the Tier-1 gateway that you want to attach the server pool to. This
value matches the Tier-1 gateway specified for the virtual service and VIP.
• Health monitors: Verify server health by applying one or more health monitors. Active
monitors generate traffic from each service engine and mark a server up or down based on
the response. The passive monitor listens only to client to server communication.
• In the Additional Settings section, you can configure a persistence profile to ensure that
subsequent connections from the same client connect to the same server. Persistent
connections are critical for most servers that maintain client session information locally, such
as HTTP applications that need to keep a user's information for a period of time.
• In the Security section, you can enable SSL encryption between the service engines and the
back-end servers.
For more information about the server pool configuration options, see
https://avinetworks.com/docs/21.1/architectural-overview/applications/pools/.
• Least Connections (Default): New connections are sent to the server that currently has the least number of outstanding concurrent connections.
• Round Robin: New connections are sent to the next eligible server in the pool in sequential order.
• Fastest Response: New connections are sent to the server that is currently providing the fastest response to new connections or requests.
• Consistent Hash: New connections are distributed across the servers by using a hash key.
• Least Load: New connections are sent to the server with the lightest load, regardless of the number of connections that server has.
• Fewest Servers: Instead of trying to distribute all connections or requests across all servers, NSX Advanced Load Balancer determines the fewest number of servers required to satisfy the current client load.
For a full list of supported load balancing algorithms, see "Load Balancing Algorithms" at
https://avinetworks.com/docs/21.1/load-balancing-algorithms/.
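The behavioral difference between the first two algorithms in the list above can be illustrated with a small conceptual sketch. This is teaching pseudologic, not NSX Advanced Load Balancer code:

    from itertools import cycle

    servers = ["web-01", "web-02", "web-03"]
    active_connections = {s: 0 for s in servers}
    rr = cycle(servers)

    def round_robin():
        """Next eligible server in sequential order, regardless of load."""
        return next(rr)

    def least_connections():
        """Server with the fewest outstanding concurrent connections."""
        return min(servers, key=lambda s: active_connections[s])

    # Simulate: web-01 already handles 5 connections, the others are idle.
    active_connections["web-01"] = 5
    print(round_robin())        # web-01 (position-based)
    print(least_connections())  # web-02 (load-aware)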
9-73 Configuring Server Pool Security Settings
You can encrypt the traffic between the service engines and the back-end servers by enabling
SSL in the server pool.
The chosen SSL profile defines which ciphers and SSL versions are used to encrypt the traffic.
You can encrypt the traffic between the service engines and the back-end servers by specifying
the following parameters:
• SSL profile: The SSL profile defines which ciphers and SSL versions are supported to
encrypt the traffic.
• PKI profile: This option validates the certificate presented by the server against the selected
PKI profile. When not enabled, the service engine automatically accepts the certificate
presented by the server when sending health checks.
• Service engine client certificate: When establishing an SSL connection with a server, either
for normal client-to-server communications or when executing a health monitor, the service
engine presents this certificate to the server.
9-74 Configuring Health Monitor Profiles
Health monitors validate the status of the servers in a pool to make forwarding decisions. You
can configure health monitor profiles for HTTP and HTTPS applications from the NSX UI.
The following common configuration settings are available for both HTTP and HTTPS health
monitors:
• Send interval: Frequency at which the health monitor initiates a server check, in seconds.
• Receive interval: Maximum amount of time before the server must return a valid response to
the health monitor, in seconds.
• Successful checks: Number of continuous successful health checks before the server is
marked up.
• Failed checks: Number of continuous failed health checks before the server is marked down.
• Health monitor port: Specify a port that should be used for the health check.
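The successful checks and failed checks thresholds behave like a simple counter of consecutive results, sketched conceptually below (not product code):

    class ServerHealth:
        """Marks a server up or down only after N consecutive successful or failed health checks."""

        def __init__(self, successful_checks=3, failed_checks=3):
            self.successful_checks = successful_checks
            self.failed_checks = failed_checks
            self.state = "DOWN"
            self._consecutive_ok = 0
            self._consecutive_fail = 0

        def record(self, check_passed):
            if check_passed:
                self._consecutive_ok += 1
                self._consecutive_fail = 0
                if self.state == "DOWN" and self._consecutive_ok >= self.successful_checks:
                    self.state = "UP"
            else:
                self._consecutive_fail += 1
                self._consecutive_ok = 0
                if self.state == "UP" and self._consecutive_fail >= self.failed_checks:
                    self.state = "DOWN"
            return self.state

    monitor = ServerHealth(successful_checks=2, failed_checks=2)
    print([monitor.record(r) for r in (True, True, False, False)])  # ['DOWN', 'UP', 'UP', 'DOWN']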
9-75 Configuring Persistence Profiles
Persistence profiles ensure the stability of stateful applications by directing all related
connections to the same back-end server.
NSX Advanced Load Balancer supports the following types of persistence profiles based on
cookies:
• HTTP Cookie: The NSX Advanced Load Balancer service engines insert an HTTP cookie
into a server's first response to a client. In this mode, no configuration changes are required
on the back-end servers.
• App Cookie: Rather than NSX Advanced Load Balancer inserting its own cookie into HTTP
responses for persistence, this mode relies on an existing cookie that is inserted by the
back-end server.
9-76 Validating Virtual Services and Server
Pools from the NSX UI
You verify the realization status of virtual services and server pools from the NSX UI.
In the example, the realization of both the VS-Web virtual service and the Web-Pool server pool was successful. It might take a few minutes for the virtual service status to change to success, while
the service engines are being deployed in the back end.
9-77 Accessing the NSX Advanced Load
Balancer UI (1)
In this release, the NSX Advanced Load Balancer Analytics and Troubleshooting tools are not
yet integrated into the NSX UI. Sometimes, you might need to directly access the NSX
Advanced Load Balancer UI to troubleshoot issues.
You can start the NSX Advanced Load Balancer UI from NSX Manager.
9-78 Accessing the NSX Advanced Load
Balancer UI (2)
On the Applications tab in the NSX Advanced Load Balancer UI, you can review the following
details for each virtual service and server pool instance:
• Analytics
• Logs
• Health
• Clients/servers
• Events
• Alerts
On the Applications tab in the NSX Advanced Load Balancer UI, you can review the following
details for each virtual service and server pool instance:
• Analytics: Provides insights into performance through the real-time analysis of key
performance indicators.
• Logs: Logs can be indexed, viewed, and filtered locally in the NSX Advanced Load Balancer UI.
• Health: The health score denotes both the responsiveness of the virtual service or pool, and
any vulnerabilities.
• Clients/servers: The Clients tab is available for virtual services and displays information
about clients accessing that service. The Servers tab is available for server pools and
displays information about the status, health, and throughput of each member.
• Security: This tab is only available for virtual services. It provides detailed security
information, including SSL and DDoS information.
• Events: Events are used to provide a history of relevant changes that have occurred in a
virtual service or service pool.
• Alerts: Alerts are intended to inform administrators of significant events within a virtual
service or service pool.
• Explain the NSX Advanced Load Balancer components and how they manage traffic
9-81 Lesson 4: IPSec VPN
9-83 Use Cases for IPSec VPN
IPSec VPN has several use cases:
• Provides a secure communication channel for other nonsecure protocols, such as Generic
Routing Encapsulation (GRE)
IPSec VPN secures the traffic that flows between two networks. These networks are
connected over a public network through IPSec gateways called endpoints.
NSX Edge supports site-to-site IPSec VPN between an NSX Edge instance and remote IPSec-
capable gateways.
NSX Edge can be one or both endpoints, supporting site-to-site IPSec VPN with another NSX Edge or another vendor's IPSec gateway.
9-84 IPSec VPN Protocols and Algorithms
IPSec VPN includes several protocols and algorithms:
• Key management: IPSec VPN uses the Internet Key Exchange (IKE) protocol to negotiate
security parameters:
— IKE runs over UDP port 500. If NAT is detected in the gateway, the port is set to UDP 4500.
— IKEv1 (RFC 2409) and IKEv2 (RFC 5996) are supported.
• Authentication:
— Preshared key
— Certificates
• Encryption:
— Advanced Encryption Standard (AES)
• Data integrity:
— Secure Hash Algorithm (SHA)
IPSec is not a single protocol. It is a suite of protocols designed to provide confidentiality,
authentication, and integrity for a VPN.
To accomplish these goals, IPSec uses Internet Key Exchange (IKE) to:
• Manage the connection to a peer.
• Define security associations used to secure and validate data exchanges.
• Define security protocols used to carry IP traffic over the VPN.
Security Associations (SA): An SA is a basic component of IPSec and contains information about
the security parameters negotiated between peers.
The following types of SAs are available:
• IKE (or ISAKMP) SA
• IPSec SA
The IKE SA is used for the control plane of the VPN and contains a combination of mandatory
and optional values:
• Encryption Algorithm: Mandatory
• Hash Algorithm: Mandatory
• Authentication Method: Mandatory
• Diffie-Hellman Group: Mandatory
• Lifetime: Optional
9-85 IPSec VPN Methods
IPSec VPN tunnel packets can use different types of headers:
• The authentication header (AH) provides data integrity and authentication without
encryption.
• The encapsulating security payload header (ESP) provides encryption, data integrity, and
authentication.
The authentication header does not provide encryption, whereas the encapsulating security
payload header enables the encryption of the protected payload.
9-86 IPSec VPN Modes
IPSec VPN supports the following modes:
• Transport mode:
• Tunnel mode:
9-87 IPSec VPN Types
The following IPSec VPN types are available.
• Policy-based VPN:
— A VPN policy is used to determine which traffic is protected by IPSec and passes
through the VPN tunnel.
— This static configuration requires modification of the VPN policy when network
topology changes occur.
• Route-based VPN:
— The remote IPSec VPN gateway is a BGP peer and protected local and remote
networks are learned based on routes exchanged by using BGP.
— Routes are learned over a specific interface called a Virtual Tunnel Interface (VTI).
• All packets routed through the VTI are protected by using IPSec.
• OSPF dynamic routing is not supported for routing through IPSec VPN tunnels.
— If the peer is unreachable on the primary tunnel, the secondary tunnel becomes active.
9-88 NSX-T Data Center IPSec VPN
Deployment
When you deploy IPSec VPN, you should consider several factors:
• IPSec VPN services are available on both Tier-1 and Tier-0 gateways.
• Protected networks must be segments created through the NSX UI or policy APIs.
• Segments can be connected to either Tier-0 or Tier-1 gateways to use VPN services.
9-89 IPSec VPN: High Availability
• VPN supports active-standby high availability on the Tier-1 and Tier-0 gateways.
• This feature is supported for both policy-based and route-based IPSec VPN services.
• You can use high availability virtual IP addresses for external connections.
9-90 Configuring IPSec VPN
To create an IPSec VPN tunnel:
577
VMware Confidential Internal Use Only
9-91 Configuring an IPSec VPN Service
To configure an IPSec VPN service, you select Networking > VPN > VPN Services > ADD SERVICE > IPSec. Then you specify the values for the options.
• Tier-0/Tier-1 Gateway: From the Tier-0/Tier-1 Gateway drop-down menu, select a Tier-0 or Tier-1 gateway to associate with this IPSec VPN service.
• Admin Status: This option enables or disables the IPSec VPN service. By default, the value
is set to Enabled to enable the service on the Tier-0 gateway.
• IKE Log Level: This option enables VPN service logging. The IKE logging level determines
the amount of information that you want collected for the IPSec VPN traffic. The default is
set to the Info level.
• Session Sync: This option enables or disables the stateful synchronization of VPN sessions.
By default, this parameter is set to Enabled.
• Tags: You enter a value for tags if you want to include this service in a tag group.
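The same IPSec VPN service can also be created through the NSX policy REST API. The following curl sketch is illustrative only: the gateway ID (T0-GW), locale-services ID (default), service ID (my-ipsec-vpn), and the exact URI and property names are assumptions that should be verified against the NSX-T Data Center API guide for your release.
curl -k -u admin:'<password>' -X PATCH \
  'https://<nsx-manager>/policy/api/v1/infra/tier-0s/T0-GW/locale-services/default/ipsec-vpn-services/my-ipsec-vpn' \
  -H 'Content-Type: application/json' \
  -d '{ "enabled": true, "ike_log_level": "INFO" }'
In the NSX policy API, PATCH typically creates the object if it does not exist or updates it in place.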
578
VMware Confidential Internal Use Only
9-92 Configuring DPD Profiles
Dead peer detection (DPD) is a method for detecting whether an IPSec connection is alive.
To configure a DPD profile, you select DPD PROFILES on the PROFILES tab. This option
appears after you configure an IPSec service.
A DPD profile specifies the number of seconds to wait between probes to detect whether an
IPSec peer is alive.
To configure a DPD profile, you select values for the following options:
• DPD Probe Mode: The probe mode can be either periodic or on demand:
— Periodic: A DPD probe is sent every time the specified DPD probe interval is reached.
— On Demand: A DPD probe is sent only if no IPSec packet is received from the peer site after an idle period. The value in the DPD Probe Interval text box determines the idle period.
• DPD Probe Interval (sec): You provide a value in seconds that defines how often (or, in on-demand mode, after what idle period) a DPD probe packet is sent.
• Tags: For cloud-based installations, almost every entity can hold a tag.
579
VMware Confidential Internal Use Only
9-93 Configuring IKE Profiles
The Internet Key Exchange (IKE) profile defines the algorithms used for arranging for secure,
authenticated communication.
To configure an IKE profile, you select IKE PROFILES. This step is optional after you configure
an IPSec service.
• IKE Version: The options are IKE V1, IKE V2, or IKE FLEX. The selection depends on your
business requirements.
• Encryption Algorithm: The encryption algorithm used during the Internet Key Exchange
(IKE) negotiation.
• Digest Algorithm: The secure hashing algorithm used during the IKE negotiation.
• Diffie-Hellman: The cryptography schemes that the peer site and the NSX Edge instance
use to establish a shared secret over an insecure communications channel.
• SA Lifetime (sec): The lifetime (in seconds) of the security associations (individual
communicating peer identifiers) after which a renewal is required.
• Tags: For cloud-based installations, almost every entity can hold a tag.
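If you prefer to define the IKE profile through the API, it can be expressed as a small JSON object. The URI (/infra/ipsec-vpn-ike-profiles) and the enumerated values shown below (IKE_V2, AES_128, SHA2_256, GROUP14) are assumptions for illustration; confirm the exact names in the NSX-T Data Center API guide.
curl -k -u admin:'<password>' -X PATCH \
  'https://<nsx-manager>/policy/api/v1/infra/ipsec-vpn-ike-profiles/my-ike-profile' \
  -H 'Content-Type: application/json' \
  -d '{
        "ike_version": "IKE_V2",
        "encryption_algorithms": ["AES_128"],
        "digest_algorithms": ["SHA2_256"],
        "dh_groups": ["GROUP14"],
        "sa_life_time": 28800
      }'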
580
VMware Confidential Internal Use Only
9-94 Configuring IPSec Profiles
The IPSec profile defines the security parameters used for negotiations to establish and maintain
a secure tunnel between two peers.
To configure the IPSec profile, you select IPSEC PROFILES. This step is optional after you
configure an IPSec service.
You provide values for the following settings to configure the IPSec profile:
• Encryption Algorithm: The encryption algorithm used during the Internet Protocol Security
(IPSec) negotiation.
• Digest Algorithm: The secure hashing algorithm used during the IPSec negotiation.
• PFS Group: This setting specifies the Perfect Forward Secrecy (PFS) group, which adds
protection to the keys used for building secure channels. You can enable or disable this
option.
• SA Lifetime (sec): The setting specifies the lifetime (in seconds) of the security associations
(individual communicating peer identifiers) after which a renewal is required.
• DF Bit: This setting defines whether the encrypted traffic should copy the Don't Fragment
(DF) bit from the inner payload to the encrypted traffic.
• Tags: For cloud-based installations, almost every entity can hold a tag.
581
VMware Confidential Internal Use Only
9-95 Configuring a Local Endpoint
To configure the local endpoint, you select Local Endpoints. This step is required in preparation
for configuring an IPSec VPN session.
You provide values for the following settings to configure the local endpoint:
• VPN Service: This setting is the predefined service to use with this session.
— For an IPSec VPN service running on a Tier-0 gateway, the local endpoint IP address
must be different from the Tier-0 gateway's uplink interface IP address.
— For an IPSec VPN service running on a Tier-1 gateway, the route advertisement for
IPSec local endpoints must be enabled in the Tier-1 gateway configuration.
• Site Certificate: Local site certificate, used for certificate-based authentication mode for the
IPSec VPN session.
• Trusted CA / Self Signed Certificates: Remote site certificate, used for certificate-based
authentication mode for the IPSec VPN session.
• Certificate Revocation List: A list of digital certificates that were revoked before their
scheduled expiration date and should no longer be trusted.
• Local ID: Used for identifying the local NSX Edge instance. This local ID is the peer ID
configured on the remote site. The local ID can be any string but is typically the public IP
address of the VPN or a fully qualified domain name (FQDN) for the local VPN service.
• Tags: For cloud-based installations, almost every entity can hold a tag.
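A local endpoint can also be created through the policy API as a child of its IPSec VPN service. The path segment (local-endpoints) and the property names (local_address, local_id) below are assumptions for illustration; verify them against the NSX-T Data Center API guide. The IP address is a placeholder from the documentation range.
curl -k -u admin:'<password>' -X PATCH \
  'https://<nsx-manager>/policy/api/v1/infra/tier-0s/T0-GW/locale-services/default/ipsec-vpn-services/my-ipsec-vpn/local-endpoints/my-local-endpoint' \
  -H 'Content-Type: application/json' \
  -d '{ "local_address": "203.0.113.10", "local_id": "203.0.113.10" }'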
582
VMware Confidential Internal Use Only
9-96 Configuring IPSec VPN Sessions (1)
You add an IPSec session to define the VPN type: either policy-based or route-based. You can
find this configuration option by selecting Networking > VPN > IPSec VPN Sessions.
583
VMware Confidential Internal Use Only
9-97 Configuring IPSec VPN Sessions (2)
When you select the policy-based option, IPSec tunnels are used to connect multiple local
subnets that are behind the NSX Edge instance, with peer subnets on the remote VPN site.
To configure the policy-based IPSec session, you specify the following settings:
• Name: You use the name to identify the session when you need to use it later.
• VPN Service: This setting is the predefined service to use with this session.
• Local Endpoint: This setting is the earlier configured local endpoint for use with this
configuration session.
• Remote IP: The setting specifies the IP address of the remote IPSec-capable gateway for
building the secure connection.
• Authentication Mode: This setting defines whether to use the preshared key (PSK) or
certificate-based connection authentication.
• Local Networks and Remote Networks: These settings define the interesting traffic that
should be encrypted through this VPN session.
• Pre-shared Key: This setting specifies the string to define the key if the authentication
mode is PSK.
• Remote ID: This setting defines the identifier of the remote peer for verifying the
authenticity of the peering.
• Tags: For cloud-based installations, almost every entity can hold a tag.
584
VMware Confidential Internal Use Only
9-98 Configuring IPSec VPN Sessions (3)
In the Profiles and Initiation Mode section, you select the predefined DPD, IKE, and IPsec profiles
from the drop-down menus. You also define which side initiates the connection.
585
VMware Confidential Internal Use Only
9-99 Configuring IPSec VPN Sessions (4)
When you select the route-based option, tunneling is applied to traffic based on routes that are learned dynamically over a virtual tunnel interface (VTI) by using a preferred protocol, such as BGP. IPSec secures all the traffic flowing through the VTI.
• Name: You use the name to identify the service for later use.
• VPN Service: This setting is the predefined service to use with this session.
• Local Endpoint: This setting defines an earlier configured local endpoint to use with this
session configuration.
• Remote IP: This setting defines the IP address of the remote IPSec-capable gateway for
building the secure connection.
• Compliance suite: You can specify a security compliance suite such as CNSA, FIPS,
Foundation, PRIME, or Suite-B to configure the security profiles used for an IPSec VPN
session.
• Authentication Mode: This setting defines whether to use a preshared key (PSK) or
certificate-based connection authentication.
• Tunnel Interface: This setting defines the IP address of the local virtual tunnel interface
(VTI) that is created to use with this session.
• Pre-shared Key: This setting provides the string for defining the key if the authentication
mode is PSK.
• Remote ID: This setting specifies the identifier of the remote peer for verifying the
authenticity of the peering.
• Tags: For cloud-based installations, almost every entity can hold a tag.
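A route-based session ties together the local endpoint, the peer address, the authentication settings, and the VTI subnet. The sketch below is illustrative only: the resource_type value, the /sessions child path, and the property names are assumptions to check against the NSX-T Data Center API guide, and the 169.254.31.0/30 VTI subnet is an example that must match the tunnel addressing agreed with the remote peer.
curl -k -u admin:'<password>' -X PATCH \
  'https://<nsx-manager>/policy/api/v1/infra/tier-0s/T0-GW/locale-services/default/ipsec-vpn-services/my-ipsec-vpn/sessions/to-branch-1' \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "RouteBasedIPSecVpnSession",
        "local_endpoint_path": "/infra/tier-0s/T0-GW/locale-services/default/ipsec-vpn-services/my-ipsec-vpn/local-endpoints/my-local-endpoint",
        "peer_address": "198.51.100.20",
        "peer_id": "198.51.100.20",
        "authentication_mode": "PSK",
        "psk": "<pre-shared-key>",
        "tunnel_interfaces": [
          { "ip_subnets": [ { "ip_addresses": ["169.254.31.1"], "prefix_length": 30 } ] }
      ] }'
After the session is up, BGP peering is configured between the local and remote VTI addresses so that protected routes are exchanged dynamically.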
586
VMware Confidential Internal Use Only
9-100 Review of Learner Objectives
• Explain how IPSec-based technologies are used to establish VPNs
587
VMware Confidential Internal Use Only
9-101 Lesson 5: L2 VPN
• Enables VM mobility, such as vSphere vMotion migration and disaster recovery without IP
address changes
L2 tunnels are established between L2 VPN endpoints that are interconnected over L3
networks.
588
VMware Confidential Internal Use Only
9-104 Overview of L2 VPN
NSX-T Data Center L2 VPNs are established using GRE tunnels carried over IPSec. Combined, GRE over IPSec provides the secure extension of multicast traffic between sites, which is required to stretch an L2 broadcast domain.
• L2 VPN client: This source initiates communication with the destination L2 VPN server.
589
VMware Confidential Internal Use Only
9-105 L2 VPN Edge Packet Flow
The inbound and outbound L2 VPN Edge packet flows illustrate the sequence of operations that
includes GRE and IPSec tunneling.
For outbound L2 VPN traffic (traffic from the internal network behind the edge node) that is destined for a remote L2 network, the first step is to decapsulate the Geneve frames. The destination address of the inner frame determines whether the traffic is handled locally or goes through the local bridge port toward the remote site. The next steps are to insert the appropriate VLAN ID and send the traffic to the local VTI interface, where it is encapsulated into GRE, protected by IPSec, and forwarded to its destination.
In the inbound direction, when L2 VPN traffic is identified by the IPSec engine, the traffic is first decrypted by IPSec and then decapsulated from GRE. After being sent to the bridge interface, the traffic is forwarded to the local networks. The required Geneve encapsulation parameters are based on the actual tunnel IDs for the traffic.
590
VMware Confidential Internal Use Only
9-106 L2 VPN Considerations
When deploying L2 VPN, you must consider several important points:
• Segments can be connected to either Tier-1 or Tier-0 gateways to use L2 VPN services.
• Only one L2 VPN service (either client or server) can be configured per Tier-0 or Tier-1 gateway.
In NSX-T Data Center, layer 2 VPN has the following additional characteristics:
• L2 VPN interoperability is available only between NSX Data Center for vSphere and NSX-T Data Center. Third-party interoperability is not supported.
• The supported number of L2 VPN server and client sessions depends on the size and type
of the edge node. To review recommended L2 VPN configuration limits, see the VMware
Configuration Maximums tool at https://configmax.vmware.com/home.
591
VMware Confidential Internal Use Only
9-107 Supported L2 VPN Clients
The following L2 VPN clients are supported:
• Autonomous NSX Edge L2 VPN Client Platform: Deployed by using an OVF file on a host that is not managed by NSX-T Data Center
• NSX Data Center for vSphere managed NSX Edge (overlay and VLAN-backed segments): The edge services gateway in the NSX Data Center for vSphere environment can act as an L2 VPN client or server endpoint
• NSX-T Data Center managed NSX Edge (overlay and VLAN-backed segments): Deployed and configured in an environment that is managed by NSX-T Data Center
The NSX Standalone Edge is available for download for both NSX Data Center for vSphere and
NSX-T Data Center products.
The NSX Standalone Edge is compatible with previous releases for customers who have
workflows using this solution and do not want to adopt the newer high performance
autonomous Edge.
592
VMware Confidential Internal Use Only
9-108 About Autonomous Edge
Autonomous edge has the following characteristics:
• Is deployed by using an OVF file on a host that is not managed by NSX-T Data Center
• Acts as an NSX Edge gateway, which can be deployed on on-premises data centers and
public clouds (for example, Amazon AWS and Microsoft Azure)
• Runs independently without the management plane/central control plane installed in the
NSX domain
The NSX Autonomous Edge VM deployment is currently not supported in the KVM
environments.
The NSX Autonomous Edge L2 VPN Client has its own API / UI for configuration and provides
better performance than Standalone Edge – Client. This L2 VPN client is recommended over
Standalone Edge – Client for any new deployment of NSX-T Data Center.
593
VMware Confidential Internal Use Only
9-109 About Standalone Edge
You can configure a standalone edge as an L2 VPN client:
• From the deployment bundle, select the L2 OVF file (large or extra large) to install an edge
VM.
• During deployment, the L2T section requests the peer code. No additional L2 VPN
configuration is needed.
The standalone edge provides OVF and CLI configuration, which is compatible with previous
releases for customers who have workflows using this solution and do not want to adopt the
new autonomous edge.
594
VMware Confidential Internal Use Only
9-110 About Managed NSX Edge Nodes
In NSX Data Center for vSphere and in NSX-T Data Center, you can either configure the
managed edge as an L2 VPN client or an L2 VPN server:
• Beginning with NSX Data Center for vSphere version 6.4.2, the IPSec-based L2 VPN client
or server is available on the NSX Data Center for vSphere edges.
The information presented on the slide relates to the NSX Data Center for vSphere requirements to act as a peer for the NSX-T Data Center L2 VPN. For more detailed configuration steps, see the NSX API Guide at https://docs.vmware.com/en/VMware-NSX-Data-
Center-for-vSphere/6.4/nsx_64_api.pdf.
595
VMware Confidential Internal Use Only
9-112 L2 VPN Server Configuration Steps
The L2 VPN configuration uses the IPSec features of the gateway. To enable L2 VPN, you must
first configure an IPSec VPN Service and an IPSec local endpoint. Then, you can configure the
L2 VPN server.
596
VMware Confidential Internal Use Only
9-113 Configuring an IPSec for the L2 VPN
Service
To configure the IPSec VPN service to be used for L2 VPN, you select Networking > VPN >
VPN Services. You click ADD Service to associate the IPSec VPN service with a Tier-0 or Tier-1
gateway.
To configure the IPSec for L2 VPN service, you provide values for the following options:
• Name: You use this name to identify the IPSec for L2 VPN service.
• Session Sync: Enables or disables the stateful synchronization of the IPSec for L2 VPN session.
• IKE Log Level: The Internet Key Exchange log level. The default is Info level.
• Tags: Enter a value for Tags if you want to include this service in a tag group.
• Global Bypass Rules: Enter the list of local and remote subnets between which IPSec
protection is bypassed.
597
VMware Confidential Internal Use Only
9-114 Configuring an IPSec for L2 VPN Local
Endpoint
To configure the IPSec endpoint to be used for L2 VPN, you select Networking > VPN > Local
Endpoints. You click ADD LOCAL ENDPOINT to define the local side of the IPSec connection.
To configure the local endpoints, you provide values for the following options:
• VPN Service: This setting specifies which IPSec VPN service to use with the endpoint.
• Site Certificate: You use this setting with certificate-based authentication to specify which
certificate to use with this endpoint.
• Local ID: This setting specifies the IPsec ID of the local side. The local ID is usually the same
as the local IP address.
• Tags: For cloud-based installations, almost every entity can hold a tag.
598
VMware Confidential Internal Use Only
9-115 Configuring the L2 VPN Server Service
To configure the L2 VPN Server service, you select Networking > VPN > VPN Services. To
begin, you click ADD SERVICE and select L2 VPN Server to define an L2 VPN Server service.
To configure the L2 VPN server service, you provide values for the following options:
• Name: You use this name to identify the L2 VPN server service.
• Hub & Spoke: By default, the value is set to Disabled, which means the traffic received from
the L2 VPN clients is only replicated to the segments connected to the L2 VPN server. If
this property is set to Enabled, the traffic from any L2 VPN client is replicated to all other L2
VPN clients.
• Tags: For cloud-based installations, almost every entity can hold a tag.
599
VMware Confidential Internal Use Only
9-116 Configuring an L2 VPN Server Session
To configure the L2 VPN server session, you select Networking > VPN > L2 VPN Sessions.
You click ADD L2 VPN SESSION to complete the L2 VPN Server configuration.
To configure the L2 VPN session, you provide values for the following options:
• VPN Service: This setting specifies which L2 VPN server service to use with this L2 VPN
session.
• Local Endpoint/IP: This setting specifies which local endpoint to use with this L2 VPN
session.
• Remote IP: The IP address of the client-side IPSec tunnel endpoint.
• Pre-shared Key: A shared secret common to both L2 VPN client and the L2 VPN server
configurations.
• Remote ID: This setting specifies the IPsec ID of the remote side. The remote ID is usually
the same as the remote IP address.
• Tags: For cloud-based installations, almost every entity can hold a tag.
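In the policy API, an L2 VPN server session typically references an underlying route-based IPSec session as its transport tunnel, while the remote IP and pre-shared key are defined on that IPSec session. The l2vpn-services path and the transport_tunnels property below are assumptions for illustration; verify them against the NSX-T Data Center API guide.
curl -k -u admin:'<password>' -X PATCH \
  'https://<nsx-manager>/policy/api/v1/infra/tier-0s/T0-GW/locale-services/default/l2vpn-services/my-l2vpn-server/sessions/site-b' \
  -H 'Content-Type: application/json' \
  -d '{
        "transport_tunnels": [
          "/infra/tier-0s/T0-GW/locale-services/default/ipsec-vpn-services/my-ipsec-vpn/sessions/to-branch-1"
      ] }'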
600
VMware Confidential Internal Use Only
9-117 Configuring the L2 VPN Segments (1)
You identify the segments that should be extended through the L2 VPN tunnels. You can click
the vertical ellipsis icon to edit an existing segment or create a segment.
You can optionally configure a local egress gateway IP so that all VMs on the segment use it as
their default gateway.
When you configure the L2 VPN segments, the following key settings are available:
• L2 VPN: This setting defines the previously configured L2 VPN session. The segment that is
defined is used through that session.
• VPN Tunnel ID: This number is used to identify the communicating local and remote L2
networks. The same ID on both sides means that they are on the same L2 broadcast
domain.
• Local Egress Gateway IP: The IP address of the local gateway that the VMs on the
segment use as their default gateway. The same IP address can be configured in the
remote site on the extended segment.
601
VMware Confidential Internal Use Only
9-118 Configuring the L2 VPN Segments (2)
You can also configure segments on the L2 VPN Session page. After selecting the edit mode
and clicking SEGMENTS, you can add segments and define the tunnel ID for each segment.
For steps 1 and 2, follow the L2 VPN server configuration steps, switching local and remote
endpoint IPs.
602
VMware Confidential Internal Use Only
9-120 Configuring the L2 VPN Client Service
To configure the L2 VPN client service, you select Networking > VPN > VPN Services. To
begin, you click ADD SERVICE and select L2 VPN Client to define an L2 VPN client service.
To configure the L2 VPN client service, you provide values for the following options:
• Name: You use this name to identify the L2 VPN client service.
The peer code is a Base64-encoded configuration string that is available from the L2 VPN
server through the DOWNLOAD CONFIG option or through a REST API call.
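The peer code can also be retrieved programmatically from the L2 VPN server. The call below is a hypothetical example of such a REST API request; the actual URI for downloading the peer configuration varies by release and is documented in the NSX-T Data Center API guide.
curl -k -u admin:'<password>' \
  'https://<nsx-manager>/policy/api/v1/infra/tier-0s/T0-GW/locale-services/default/l2vpn-services/my-l2vpn-server/sessions/site-b/peer-config'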
603
VMware Confidential Internal Use Only
9-122 Configuring the L2 VPN Client Session
(2)
You configure the local and remote IP addresses and the peer code, which was retrieved in a
previous step. You can also click the Admin Status toggle to enable or disable the session.
To configure the L2 VPN client session, you provide values for the following options:
• Name: You use this name to identify the L2 VPN client session.
• VPN Service: This setting specifies which L2 VPN client service to use with this L2 VPN
session.
• Local Endpoint/IP: This setting specifies which L2 VPN client local endpoint to use with this
L2 VPN session.
• Remote IP: The IP address of the server-side IPSec tunnel endpoint.
• Peer Configuration: The peer code downloaded from the L2 VPN server.
604
VMware Confidential Internal Use Only
9-123 Configuring the L2 VPN Segments
On the Set Segments page, you define the network and the tunnel ID. The same configuration is
also available by selecting the SEGMENTS option.
4. Deploy and Configure a New Tier-0 Gateway and Segments for VPN Support
605
VMware Confidential Internal Use Only
9-125 Review of Learner Objectives
• Describe L2 VPN technologies in an NSX-T Data Center
• Typically, source translation is used to change a private address or port to a public address
or port for packets leaving your network.
• Reflexive NAT can be used when a Tier-0 gateway runs in active-active mode with
asymmetric traffic paths.
• A DNS is a computer application that implements a service for resolving a computer name
to an IP address.
• NSX Advanced Load Balancer includes multiple components such as virtual IP address,
virtual service, and a server pool with associated health and monitor profiles.
• IPSec VPN services are available on Tier-0 gateways to interconnect different IP networks.
• Using GRE over IPSec, L2 VPN tunnels can be used to extend Layer 2 networks.
Questions?
606
VMware Confidential Internal Use Only
Module 10
NSX-T Data Center User and Role
Management
10-2 Importance
You must manage users and roles to enforce the least user privilege and provide clear
separation of duties. By integrating NSX-T Data Center with VMware Identity Manager or LDAP,
you can configure role-based access control (RBAC) for external users.
607
VMware Confidential Internal Use Only
10-4 Lesson 1: Integrating NSX-T Data Center
with VMware Identity Manager
• Identify the benefits of integrating NSX-T Data Center with VMware Identity Manager
• Configure the integration between NSX-T Data Center and VMware Identity Manager
• Verify the integration between NSX-T Data Center and VMware Identity Manager
608
VMware Confidential Internal Use Only
10-6 About VMware Identity Manager
VMware Identity Manager is an identity as a service (IDaaS) solution.
VMware Identity Manager provides the following services for software as a service (SaaS), web,
cloud, and native mobile applications:
• Application provisioning
VMware products can use VMware Identity Manager as an enterprise SSO solution.
To verify the version compatibility between NSX-T Data Center and VMware Identity Manager,
use the VMware Product Interoperability Matrix at
https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop&175=&140
=.
609
VMware Confidential Internal Use Only
10-7 Benefits of Integrating VMware Identity
Manager with NSX-T Data Center
The integration of VMware Identity Manager with NSX-T Data Center provides the following
benefits related to user authentication:
— RADIUS
— RSA SecurID
• Enterprise SSO:
NSX-T Data Center has its own native LDAP and Active Directory integration, but VMware
Identity Manager also offers this capability.
610
VMware Confidential Internal Use Only
10-8 Prerequisites for VMware Identity
Manager Integration
The following prerequisites must be met before integrating VMware Identity Manager with NSX-
T Data Center:
2. Configure time synchronization for the VMware Identity Manager virtual machine.
The following steps must be completed before integrating VMware Identity Manager with NSX-
T Data Center:
1. From the vSphere Client, deploy the VMware Identity Manager appliance from an OVF
template.
2. Synchronize the VMware Identity Manager virtual machine time with the ESXi host where it
is running.
b. Scroll down to the Time section and select the Synchronize guest time with host check box.
3. After deploying the VMware Identity Manager appliance, use the Setup wizard available at
https://<VMware_Identity_Manager_FQDN>.
a. Set passwords for the admin, root, and remote SSH user.
611
VMware Confidential Internal Use Only
10-9 Configuring VMware Identity Manager
After the initial setup, connect to the VMware Identity Manager administration console.
• Identity sources
• Authentication methods
• Access policies
• LDAP
• Local directory
To configure authentication methods, select Identity & Access Management > Authentication Methods.
To define access policies, select Identity & Access Management > Policies.
Administrators can configure rules that specify the network ranges and types of devices that
users can use to sign in.
612
VMware Confidential Internal Use Only
10-10 Overview of the VMware Identity
Manager and NSX-T Data Center
Integration
After both NSX-T Data Center and VMware Identity Manager appliances are deployed and
configured, you can integrate these components:
1. Create an OAuth client for NSX-T Data Center in VMware Identity Manager.
2. Obtain the SHA-256 certificate thumbprint for the VMware Identity Manager appliance.
613
VMware Confidential Internal Use Only
10-11 Creating an OAuth Client
Before enabling the integration of VMware Identity Manager and NSX-T Data Center, you must
register NSX-T Data Center as a trusted OAuth client in VMware Identity Manager:
1. From the VMware Identity Manager administration console, click the Catalog tab.
VMware Identity Manager uses the OAuth 2.0 authorization framework to enable NSX-T Data
Center and its users to access specific data and services.
Before enabling the integration between VMware Identity Manager and NSX-T Data Center, you
must register NSX-T Data Center as a trusted OAuth client in VMware Identity Manager.
614
VMware Confidential Internal Use Only
When configuring NSX-T Data Center details, you select Service Client Token from the Access
Type drop-down menu. This selection indicates that the application, NSX-T Data Center in this
example, accesses the APIs for itself. The application does not access the APIs for a particular
user.
You must specify a client ID to uniquely identify NSX. You need this value to enable the VMware
Identity Manager integration.
You must also click Generate Shared Secret. You need this value to enable the VMware Identity
Manager integration.
On the Create Client page, you can optionally set the token time-to-live values by specifying the
access, refresh, and idle timers.
4. Retrieve the SHA-256 certificate thumbprint of VMware Identity Manager. From the VMware Identity Manager appliance command line, change to the certificate directory:
cd /usr/local/horizon/conf
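From that directory, you can compute the SHA-256 fingerprint of the appliance certificate with openssl. The certificate file name varies by VMware Identity Manager version, so the file below is a placeholder:
openssl x509 -in <certificate_file>.pem -noout -fingerprint -sha256
You copy the resulting fingerprint string into the SSL Thumbprint text box when you enable the integration in NSX Manager.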
615
VMware Confidential Internal Use Only
10-13 Configuring the VMware Identity
Manager Details in NSX-T Data Center
To enable the VMware Identity Manager integration from the UI:
3. Click EDIT.
In the OAuth Client ID and OAuth Client Secret text boxes, you enter the client ID and shared
secret that you generated when you created the OAuth client for NSX-T Data Center in
VMware Identity Manager.
In the SSL Thumbprint text box, you enter the SHA-256 certificate thumbprint value that you
generated from the VMware Identity Manager appliance command line.
The value entered in the NSX Appliance text box must be used to access NSX Manager after
the integration. If you enter the fully qualified domain name (FQDN) of NSX Manager and try to
access the appliance through its IP address, the authentication fails.
If a virtual IP (VIP) is set up on the NSX Management cluster, you cannot use the external load-balancer integration even if you enable it. When configuring VMware Identity Manager, you can use either the VIP or an external load balancer, but not both. Disable the VIP if you want to use an external load balancer.
616
VMware Confidential Internal Use Only
10-14 Verifying the VMware Identity Manager
Integration
You can validate the successful communication between NSX-T Data Center and VMware
Identity Manager from the NSX UI.
Navigate to System > Settings > User Management > VMware Identity Manager to validate
the VMware Identity Manager integration. If the integration is successful, the VMware Identity
Manager integration appears as Enabled. The VMware Identity Manager appliance, the OAuth
Client ID, and the NSX Appliance fields are populated.
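You can also check the integration from the NSX management API. The endpoint below is believed to exist in NSX-T 3.x but should be confirmed against the API guide for your release; the response reports the runtime state of the VMware Identity Manager connection.
curl -k -u admin:'<password>' \
  'https://<nsx-manager>/api/v1/node/aaa/providers/vidm/status'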
617
VMware Confidential Internal Use Only
10-15 Default UI Login
The default login page appears when integration with VMware Identity Manager is not enabled.
The default login page also appears if integration with VMware Identity Manager is configured,
but VMware Identity Manager is down or not reachable at the time of the login.
618
VMware Confidential Internal Use Only
10-16 UI Login with VMware Identity Manager
After the integration with VMware Identity Manager is enabled, you are redirected to the
VMware Identity Manager login page for authentication.
619
VMware Confidential Internal Use Only
10-17 Local Login with VMware Identity
Manager
For troubleshooting or administration, you might need to bypass VMware Identity Manager
when the integration is enabled. Go to https://<NSX_Manager_FQDN>/login.jsp?local=true.
• Identify the benefits of integrating NSX-T Data Center with VMware Identity Manager
• Configure the integration between NSX-T Data Center and VMware Identity Manager
• Verify the integration between NSX-T Data Center and VMware Identity Manager
620
VMware Confidential Internal Use Only
10-19 Lesson 2: Integrating NSX-T Data Center
with LDAP
Distributed directory services store information about users and groups, the network
infrastructure, and network services.
NSX-T Data Center 3.2 supports the following directory services or identity sources:
• OpenLDAP
621
VMware Confidential Internal Use Only
10-22 Benefits of Integrating LDAP with NSX-T
Data Center
Integrating LDAP with NSX-T Data Center offers the following benefits:
• Does not require the deployment of the VMware Identity Manager appliance
After integrating LDAP with NSX-T Data Center, the authentication process is as follows:
1. The user initiates a login request from the UI or the API.
2. NSX Manager receives the login request and creates an LDAP bind request to the
appropriate identity source, for example, Active Directory.
4. The bind response might succeed or fail during authentication. If the status is success, NSX
Manager provides the appropriate access privilege based on the assigned RBAC role for the
user or group. If the status is failure, NSX Manager displays an authentication error.
622
VMware Confidential Internal Use Only
10-24 Adding an Identity Source
You can add up to three identity sources.
You can add a new identity source by navigating to System > Settings > User Management > LDAP.
You specify the following settings as part of the identity source configuration:
• Domain Name: The domain that you want to add as an identity source.
• Type
— OpenLDAP
• LDAP Servers: The connection settings to your LDAP servers (mandatory).
• Base DN: The point from where a server searches for users.
623
VMware Confidential Internal Use Only
10-25 Configuring the LDAP Server
As part of adding an identity source, you specify the connection settings to the LDAP server.
Starting with NSX-T Data Center 3.2, up to three LDAP servers are allowed per LDAP domain. The servers are tried in order for each request.
You specify the following settings when configuring the connection to the LDAP server:
• Hostname/IP: The fully qualified domain name or the IP address of the LDAP server. In
LDAPS configurations, the FQDN must match the host name in the LDAP server certificate.
• Port: The default LDAP port 389 and LDAPS port 636 are used for the directory sync. Do
not change the default values.
• Use StartTLS: This toggle is available only for the LDAP protocol. If enabled, SSL/TLS is
used to establish a secure connection.
• Certificate: The LDAP server provides a certificate as part of the ADD/Check status
workflow. Accept the certificate.
• Bind Identity: Domain account with read permission for all objects in the domain tree.
• Password: Password for the bind identity account.
To verify that you can connect to the LDAP server, click Check Status (under Connection
Status).
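Before pointing NSX Manager at the directory, you can verify the bind identity and Base DN independently with a standard ldapsearch client. The host name, bind DN, base DN, and filter below are placeholders for illustration:
ldapsearch -H ldap://dc01.corp.local:389 \
  -D 'CN=svc-nsx,OU=Service Accounts,DC=corp,DC=local' -w '<password>' \
  -b 'DC=corp,DC=local' '(sAMAccountName=jdoe)'
If this bind fails, the Check Status test in the NSX UI fails for the same reason, so fixing the credentials or base DN here saves troubleshooting time later.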
624
VMware Confidential Internal Use Only
10-26 UI Login with LDAP
You log in as an Active Directory or OpenLDAP user by specifying the domain name on the
NSX login page.
625
VMware Confidential Internal Use Only
10-28 Lesson 3: Managing Users and
Configuring RBAC
626
VMware Confidential Internal Use Only
10-30 NSX-T Data Center Users
The following types of users can access the NSX-T Data Center environment:
• Local users
• External users:
• audit has the Auditor role. It can be renamed but it cannot be configured with other roles.
• guestuser1 and guestuser2 can be renamed and configured with RBAC roles.
Principal identity users are unique users. These users own the object that they create and ensure
that the object can only be modified or deleted by the owning principal identity. Principal identity
users are authenticated by client certificate. The authentication is local to NSX Manager. Principal
identities are usually used by third-party management platforms, such as VMware Integrated
OpenStack, Tanzu Kubernetes Grid Integrated Edition, vRealize Automation, and so on, but they
are also used by NSX Application Platform.
627
VMware Confidential Internal Use Only
10-31 Activate Guest Users
The two local users, guestuser1 and guestuser2, are inactive by default and can be activated by
using the UI or the API.
You can activate the two guest users by navigating to System > Settings > User Management
> Local Users.
The two local users, guestuser1 and guestuser2, can only be activated by the admin user.
Unlike the admin and audit users, guestuser1 and guestuser2 can be configured with RBAC roles.
628
VMware Confidential Internal Use Only
10-32 Using Role-Based Access Control
RBAC enables you to restrict system access to users based on their role in the company.
Users are assigned roles, and each role has specific permissions:
• Local users, admin, and audit are preconfigured with specific roles that cannot be modified.
• Guest users, principal identity users, and external users can be configured with any of the
built-in roles or custom roles.
Role-based access control (RBAC) is a method to enforce the least privilege and separation of
duties principles.
629
VMware Confidential Internal Use Only
10-33 Built-In Roles (1)
NSX-T Data Center provides built-in roles.
• GI Partner Admin: Role used for third-party endpoint protection service insertion
• LB Admin: Read permissions on all networking services and full access permissions on load-balancing features
• Netx Partner Admin: Role used for third-party Network Introspection service insertion
• VPN Admin: Read permissions on all networking services and full access permissions on VPN features
630
VMware Confidential Internal Use Only
10-35 Custom Role-Based Access Control
The custom role-based access control (Custom RBAC) feature enables you to create custom
roles in addition to the existing built-in roles.
• Provide flexibility to create custom roles and grant custom permissions to NSX users.
• Extend the current RBAC capabilities beyond the built-in roles with preconfigured
permissions.
• Help companies to meet regulatory guidelines and compliance requirements for RBAC.
When default roles cannot enforce least user privileges and clear separation of duties, Custom
RBAC helps enforce these options by providing more granularity. Custom RBAC provides
flexibility and customization opportunities for specific deployment considerations and use cases.
631
VMware Confidential Internal Use Only
10-36 Creating Custom Roles (1)
You can either clone an existing role or create a role in the NSX UI.
Only an Enterprise administrator can create a custom role. However, Enterprise administrators
can delegate the custom role creation to another custom role.
632
VMware Confidential Internal Use Only
10-37 Creating Custom Roles (2)
You can set permissions to the new role.
The following NSX features are not supported with the Custom RBAC role:
• Upgrade
• Migrate
• Fabric
• TraceFlow
• NSX Intelligence
633
VMware Confidential Internal Use Only
10-38 Role Assignment
You can add, change, and delete role assignments for users or user groups:
1. Select System > Settings > User Management > User Role Assignment.
4. Search for the users or user groups that you want to assign the roles to.
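Role assignments can also be automated through the API. The endpoint (/policy/api/v1/aaa/role-bindings) and the property names below (type, identity_source_type, roles) are assumptions to verify in the NSX-T Data Center API guide; the group name is a placeholder.
curl -k -u admin:'<password>' -X POST \
  'https://<nsx-manager>/policy/api/v1/aaa/role-bindings' \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "nsx-auditors@corp.local",
        "type": "remote_group",
        "identity_source_type": "LDAP",
        "roles": [ { "role": "auditor" } ]
      }'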
634
VMware Confidential Internal Use Only
10-39 Lab 20: Managing Users and Roles
Integrate NSX Manager with Active Directory over LDAP:
• You can integrate NSX-T Data Center with VMware Identity Manager and configure RBAC
for users that VMware Identity Manager manages.
• You can add Active Directory over LDAP or OpenLDAP identity sources to NSX-T Data
Center and configure RBAC for these users.
• Role-based access control (RBAC) enables you to restrict system access to users based on
their role in the company.
• The Custom RBAC feature in NSX-T Data Center enables administrators to create custom
roles in addition to the existing built-in roles.
Questions?
635
VMware Confidential Internal Use Only
636
VMware Confidential Internal Use Only
Module 11
NSX-T Data Center Federation
11-2 Importance
In NSX-T Data Center, Federation enables network administrators to define global configuration
settings and policies that span multiple sites. You must understand the Federation architecture
and configuration, logical switching, logical routing, and security to successfully configure NSX
Federation in your environment.
3. Federation Networking
4. Federation Security
637
VMware Confidential Internal Use Only
11-4 Lesson 1: Federation Architecture
638
VMware Confidential Internal Use Only
11-6 About NSX Federation
NSX Federation provides consistent networking and security policies across multiple sites. NSX-
T Data Center 3.2 supports up to eight locations.
• Provide the most cost-effective hosting solution depending on the application criticality
639
VMware Confidential Internal Use Only
11-7 NSX Federation Use Cases
NSX Federation has the following use cases:
• Operational simplicity: NSX Federation enables users to configure networking and security
constructs for multiple locations from a single console.
• Consistent policy configuration: Networking objects that span multiple locations are called
stretched, and security objects are called global.
640
VMware Confidential Internal Use Only
NSX Federation helps deliver a cloud-like operating model by simplifying the consumption of
networking and security constructs.
NSX Federation introduces the NSX Global Manager (GM), a centralized console that enables
users to view and configure the Federation services from a single management user interface.
NSX Federation provides operational simplicity and consistent policy configuration to manage
the network as a single entity while keeping configuration and operational state synchronized
across multiple locations.
Networking objects that span across multiple locations are called stretched and security objects
are called global.
From an infrastructure operations perspective, a single location (rack, building, or site) might not offer enough capacity to host all of an organization's applications. Capacity can be of different types, such as compute (servers), storage, and network (bandwidth). Each application can be deployed in a specific location or can be stretched across multiple locations.
641
VMware Confidential Internal Use Only
11-8 Federation Components: Global Manager
GM provides the GUI and the REST APIs for configuring objects across geographical sites.
• Any configuration that needs protection against site failures must be configured in GM.
• During site onboarding, GM offers a choice to move the existing LM configuration to GM.
642
VMware Confidential Internal Use Only
11-10 Federation Components: GM and LM
Clusters
In NSX-T Data Center 3.2, NSX Federation supports eight Local Managers to scale the
stretched or global objects:
• Local Managers are deployed in each location as a three-node cluster for high availability
and scalability.
• Global Managers are deployed as an active cluster and a standby cluster in two locations.
• The GM node includes a Management plane. It does not include a Control plane.
643
VMware Confidential Internal Use Only
11-11 Federation Configuration Types
NSX Federation supports the following types of configurations:
• Global configuration:
• Local configuration:
644
VMware Confidential Internal Use Only
11-12 Ownership of Logical Configuration (1)
Network objects that are created by GM are owned by GM:
• Tier-0 and Tier-1 gateways, segments, segment profiles, and so on, can be configured from
GM.
• GM is the single source of truth for these objects. These objects can only be modified or
deleted by GM.
645
VMware Confidential Internal Use Only
11-13 Ownership of Logical Configuration (2)
Network objects that are created by LM are owned by LM:
• Tier-0 and Tier-1 gateways, segments, segment profiles, and so on, can be configured from
LM.
• LM is the single source of truth for these objects. These objects can only be modified or
deleted by LM.
• During the onboarding process, you can move the object to GM. Then, LM loses ownership
of these objects.
The following configurations can be imported from the Local Manager into Global Manager:
• T0 gateway
• T1 gateway
• Services
• Security profiles
• Context profiles
• Groups
• NAT
• DHCP
• DNS
• Gateway profiles
For more information about the local manager configurations supported for importing into global
manager, see NSX-T Data Center Installation Guide at https://docs.vmware.com/en/VMware-
NSX-T-Data-Center/3.1/installation/GUID-388CE659-3FE3-4EF4-ABA3-AE3FCAA191E9.html.
NSX releases earlier than 3.2.0 supported onboarding of existing Local Manager sites. This onboarding support is not available in 3.2.0 and will be introduced in a later 3.2 point release.
646
VMware Confidential Internal Use Only
11-14 Infrastructure Ownership
LM always manages infrastructure objects:
• These objects include transport nodes, edge nodes, transport zones, and so on.
647
VMware Confidential Internal Use Only
11-15 Global Configuration
The global configuration workflow is as follows:
2. The active GM stores the configuration locally and replicates it to the standby GM.
648
VMware Confidential Internal Use Only
11-16 Local Configuration
The local configuration workflow is as follows:
649
VMware Confidential Internal Use Only
11-17 Federation Configuration Example
In the configuration example, networks are created and realized only in two locations (Location 1
and Location 2).
T0, T1, and their associated segment are stretched to Location 1 and Location 2.
650
VMware Confidential Internal Use Only
11-18 Federation Configuration Example
Workflow (1)
As the T0, T1, and their associated segment are only stretched to Locations 1 and 2, the
configuration is not sent to LM at Location 3.
In the example, the user wants to apply a configuration that is stretched to Location 1 and
Location 2 only:
1. The user sends the configuration by using REST over HTTPS to the active GM node in
Location 1, which stores the configuration locally.
651
VMware Confidential Internal Use Only
11-19 Federation Configuration Example
Workflow (2)
LMs of each location initiate a sync operation to exchange the Control Plane configuration of a
given topology.
In the example, the control plane of LM in each location learns the IP and MAC information of the
VMs associated with Blue_Segment:
1. LMs at each location initiate a sync operation to exchange the IP and MAC information of
VMs associated with Blue_Segment.
2. Each LM stores the realization information of a given topology discovered through the sync
operation. As a result, the Control Plane of LM in each location has the IP and MAC
information of all the VMs associated with Blue_Segment across Location 1 and Location 2.
LM does not update control plane information to GM, because GM has no control plane.
However, the GM UI enables you to see the inventory of all locations by querying the relevant
LMs.
652
VMware Confidential Internal Use Only
11-20 Review of Learner Objectives
• Describe Federation and its use cases
653
VMware Confidential Internal Use Only
11-21 Lesson 2: Installing and Onboarding
Federation
11-22 Learner Objectives
• Describe the prerequisites for Federation
• WAN Bandwidth: No congestion for the management plane or the data plane
• Data Plane: For different Internet Service Providers (ISPs), a public address must be advertised from both locations
— MTU of 1,700 bytes or more to avoid fragmentation of edge node RTEP traffic
654
VMware Confidential Internal Use Only
11-24 Onboarding Process
The onboarding process involves connecting an LM to a GM for enabling the Federation
functionality.
The onboarding process is repeated for each location until all locations are on board.
655
VMware Confidential Internal Use Only
11-25 Active Global Manager Configuration
From the Site-A-GM UI, select Action and mark GM as active.
The Location Manager tab is the central management point for configuring GMs (active and
standby).
You can add different locations from this tab to GM. The managers in these locations are Local
Manager (LM).
You can configure the standby GM from the active GM site to provide flexibility for configuring
the Federation component from a single console.
656
VMware Confidential Internal Use Only
11-26 Adding Standby Global Manager (1)
You must add a standby GM. If the active GM fails, the standby GM can be activated from its UI.
You can configure the standby GM from the active GM site to provide flexibility for configuring
the Federation component from a single console.
The compatibility check is mandatory because it checks version compatibility between the two
GMs.
From the CLI, run the get certificate api thumbprint command to obtain the SHA-256 thumbprint value that you enter in the UI.
657
VMware Confidential Internal Use Only
11-28 Adding a Local Manager
You add the LM from ADD ON-PREM LOCATION, and provide the details about NSX Manager
at Location A.
You can obtain the certificate thumbprint by running the following command on NSX Manager:
get certificate api thumbprint
VMware Confidential Internal Use Only
11-29 Validating the Local Manager
The Location Manager tab on LM provides information about the Sync status of LM to GM.
The Location Manager tab also provides details about the active and standby GM. The details
are read-only.
659
VMware Confidential Internal Use Only
11-30 Importing Local Objects
You can import LM objects after your location is onboarded.
LM objects can be imported either during onboarding or after onboarding. The screenshot
shows an option to import after your location is onboarded.
A mandatory check verifies that the LM configuration is backed up. The backup timestamp can be used to validate the backup.
When LM objects are imported into GM, you mark them with a prefix or a suffix that you define so that the imported objects can be distinguished.
You can use this time to prepare the networking and security workloads and configuration.
The screenshot is from a release earlier than 3.2.0. This onboarding support is not available in 3.2.0 and will be introduced in a later 3.2 point release.
660
VMware Confidential Internal Use Only
11-32 Lesson 3: Federation Networking
661
VMware Confidential Internal Use Only
11-34 Stretched Networking (1)
NSX Federation supports the stretching of the following networking constructs across locations:
• Segments
• NAT
• Gateway firewall
• IPv6
• DHCP
• DNS
The Global Manager UI enables the user to create stretched networks, for example, segments,
gateways and security policies, and rules.
662
VMware Confidential Internal Use Only
11-35 Stretched Networking (2)
GM can stretch the Tier-0 and Tier-1 gateways across locations:
• All segments that are connected to the downlink port of the stretched T1 gateway are
automatically stretched across locations.
• This method eliminates the need for tunnels between hypervisors across locations.
On the edge nodes, RTEPs are configured to forward the traffic across sites.
NSX-T Data Center does not encrypt the traffic, but the user can encrypt this cross-location
traffic with a third-party tool.
663
VMware Confidential Internal Use Only
11-36 Tier-0 and Tier-1 Gateways: Logical
Topologies (1)
GM supports stretched networks for cross-location communication and nonstretched networks for a single location.
664
VMware Confidential Internal Use Only
11-37 Tier-0 and Tier-1 Gateways: Logical
Topologies (2)
GM-supported configuration:
• Tier-0 and Tier-1 gateways can be stretched to all or some of the locations.
• Segments associated with a stretched Tier-0 or Tier-1 gateway are also stretched to the same span:
• If the scope of the Tier-1 gateway is equal to or is a subset of the Tier-0 gateway, a
stretched Tier-0 gateway can connect to a nonstretched Tier-1 gateway.
• For a Tier-1 gateway without services, the span of Tier-1 is equal to the span of the Tier-0
gateway.
The example includes the stretched network objects, including the stretched segments and
stretched Tier-0 and Tier-1 gateways.
• The span of the Tier-0 gateway is equal to the span of a Tier-1 gateway without services.
• The span of a Tier-1 gateway with services is equal to or a subset of the span of the Tier-0 gateway.
665
VMware Confidential Internal Use Only
11-38 Tier-0 and Tier-1 Gateways: Logical
Topologies (3)
In the example, the spans of the T0-Stretched and T1-Stretched gateways are not the same, so the connection is not possible.
However, the span of T1-Not Stretched is a subset of the T0-Stretched span. The connection is
possible.
The example shows how a stretched and nonstretched network can co-exist.
A nonstretched Tier-1 gateway is local to Location A only, and the Tier-0 gateway is stretched across Location A and Location B. The connection is possible because the span of the nonstretched Tier-1 gateway is a subset of the span of the stretched Tier-0 gateway.
666
VMware Confidential Internal Use Only
11-39 Single-Location Tier-0 Gateway
Deployments
GM supports the following configuration for Tier-0 gateways for each location:
— All Tier-0 gateways are active on all the NSX Edge nodes in the edge cluster.
— The traffic ingresses or egresses from all the edge nodes with the active Tier-0
gateway.
— One Tier-0 gateway is active on one NSX Edge node and one standby gateway exists
in the edge cluster.
— The traffic ingresses or egresses from the edge nodes with the active Tier-0 gateway.
667
VMware Confidential Internal Use Only
11-40 Single-Location Tier-1 Gateway
Deployments
GM supports the configuration for the Tier-1 gateway for each location.
• The Tier-1 gateway is active on one NSX Edge node and is standby on another node.
• The traffic ingresses or egresses from the edge node deployed with the active Tier-1 gateway.
668
VMware Confidential Internal Use Only
11-41 Multilocation Tier-0 and Tier-1 Gateway
Deployments (1)
A location is configured as either primary or secondary:
• Only one location can be configured as primary, and all other locations are secondary.
Location 1 is primary, and all northbound egress traffic is routed by using the primary Tier-0 gateway's edge node.
The Tier-1 gateways at Location B and Location C send the traffic through RTEPs to the
Location A edge uplink.
669
VMware Confidential Internal Use Only
11-42 Multilocation Tier-0 and Tier-1 Gateway
Deployments (2)
T0 can be deployed in all primary locations (All_P):
• Traffic to Tier-0 gateway from Tier-1 gateway and segments is contained in the location.
• All locations have an active Tier-0 gateway to send northbound egress traffic.
For all primary locations, the northbound egress traffic is routed locally.
670
VMware Confidential Internal Use Only
11-43 Multilocation T0-Stretched Gateway
Modes (1)
Tier-0 is deployed in A/A mode in P/S locations:
• For the Tier-0 gateway configured in the active-active HA mode, the Tier-0 gateway will be
active on all the NSX Edge nodes in the edge cluster.
• For the Tier-0 gateways configured in the primary-secondary mode, the north-south traffic
from all locations is forwarded to the Tier-0 gateways configured in a primary location.
• The Tier-0 gateway in the active-active high availability mode does not support stateful
NAT. However, stateless NAT can be used.
In the diagram:
• EN3 and EN4 at Location 2 egress northbound traffic to either EN1 or EN2, or both, at Location 1.
671
VMware Confidential Internal Use Only
11-44 Multilocation T0-Stretched Gateway
Modes (2)
Tier-0 is deployed in A/A mode in all primary locations:
• For the Tier-0 gateway configured in active-active HA mode, the Tier-0 gateway will be
active on all the NSX Edge nodes in the edge cluster.
• For the Tier-0 gateway configured in a primary setup, northbound and southbound are sent
and received through their respective Tier-0 gateways in their location.
In the diagram:
• In this use case for local egress, all Tier-0 sites route the northbound traffic locally.
672
VMware Confidential Internal Use Only
11-45 Multilocation T1-Stretched Gateway
Modes (1)
Tier-1 is deployed without services:
• Only Distributed Router (DR) is realized because services are not configured and Service
Router (SR) does not exist.
• The T1-no-Services gateway does not require edge nodes for routing, but edge is required
for L2 stretching.
673
VMware Confidential Internal Use Only
11-46 Multilocation T1-Stretched Gateway
Modes (2)
Tier-1 is deployed in A/S mode in P/S locations:
• Active edge nodes in both primary and secondary locations can receive southbound traffic
for their respective locations.
• Only the primary location’s active edge node can send the northbound traffic of both
locations to stretched Tier-0.
The concept is similar to active-standby Tier-0, where the active node on the primary site routes
the traffic.
674
VMware Confidential Internal Use Only
11-47 About RTEP
RTEP is an edge node IP that is used for edge node communication across sites:
To configure RTEP:
2. Click the System tab to display the configuration for Location Manager.
3. Click Location Manager to display details about the location where you want to create the
RTEP.
You can select the LM edge node cluster where you need to configure RTEP.
5. Click CONFIGURE.
675
VMware Confidential Internal Use Only
11-48 Stretched Layer-2 Network
The cross-location layer-2 communication is provided by the edge nodes of an edge cluster in
each location.
This method avoids the management of many tunnels and BFD sessions between all hosts
across locations.
1. The source ESXi host sends frames to the edge node (TEP-TEP communication).
2. The source edge node forwards a frame to the destination edge node (RTEP-RTEP
communication).
3. The destination edge node forwards a frame to the destination ESXi host (TEP-TEP
communication).
The MAC address of a remote VM is learned through RTEP on the edge nodes. The edge node
passes this information to its local transport node.
676
VMware Confidential Internal Use Only
11-49 Stretched L2: VNI Mapping
When a stretched segment is created from GM:
— The VNI selection is local to each LM, so the segment can have different VNIs in different locations.
Example: If you create a stretched segment from GM and it is stretched to two locations, the VNI ID can be 5001 in one location and 6003 in the other location.
The VNI can differ, but the UUID of the stretched segment is the same across all locations.
677
VMware Confidential Internal Use Only
11-50 Review of Learner Objectives
• Describe the stretched networking concepts in Federation
678
VMware Confidential Internal Use Only
11-51 Lesson 4: Federation Security
Federation provides centralized security for on-premises data centers and for use cases such as data center extension, all managed from a central console.
679
VMware Confidential Internal Use Only
11-54 Federation Security Components
Federation security includes the following components:
• Region: Collection of locations. Each location that is added to the GM becomes a region.
Custom regions can also be created.
• Tags: Objects can be tagged and searched by a specific location or can be defined globally
and added to the Global group.
• Policy/Section: A firewall policy includes one or more individual firewall rules. The policy can
be Global, Regional, or Local.
• Rules: Firewall rules are based on objects and tags instead of IP addresses. The rules have the same span as the policy that contains them.
11-55 About Regions
Regions are used to create focused groups for security and networking policies.
Some regions are created automatically after the onboarding process in GM.
Global, Site-A-LM, and Site-B-LM are created by the system when the LMs are registered and onboarded to the GM. Up to eight locations can appear on this page, based on the number of locations supported by NSX Federation.
Region A+B is created manually. In this use case, regional groups were added based on the regions, and Locations A and B were manually added to the region.
11-56 About Groups
NSX objects can be grouped for use in firewall rules that are specific to a Global, Regional, or Local scope.
Groups are defined based on the region and can be Global, Regional, or Local.
The Global groups span across all sites in Federation, for example, Global-Web-servers.
The Regional groups span across two or more locations, for example, Location A+B.
The Local groups are specific to the site, for example, Region-A-Web-Servers, Region-B-Web-
Servers, and Region-A-App-Servers.
11-57 Global Groups
The Global groups have the following characteristics:
• Global groups are owned by NSX and are created with discovered objects. They can also
be imported from a file (IP or MAC list).
• Users can create, update, and delete Global groups only from the GM.
Discovered members: These members are LM-created objects that GM discovers later on
demand (according to the criteria and tags).
From LM, the Global groups are visible. The groups and the other NSX inventory objects in LM
are imported into Global Manager on demand.
Bare-metal servers (BMS) and cloud-native storage (CNS) are also discovered objects.
11-58 Federation Tags
You create tag-based criteria to include both local and global objects.
Because grouping behavior is agnostic of whether members are local or global, a tag-based criterion matches both kinds of objects, as illustrated in the example below.
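As a hedged illustration, a tag-based membership criterion is typically expressed as a Condition in a Policy-style group definition. The group name, tag scope, and tag value below are placeholders; verify the exact payload fields against the NSX-T Policy API reference for your release.

# Illustrative tag-based group expression in NSX Policy API style.
# Tag values follow the "scope|tag" convention; adjust to your environment.
group_body = {
    "resource_type": "Group",
    "display_name": "Global-Web-Servers",
    "expression": [
        {
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Tag",
            "operator": "EQUALS",
            "value": "env|web",   # matches VMs tagged with scope "env" and tag "web"
        }
    ],
}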
11-59 GM-Based Policy
You can configure the following settings for a policy:
— All logical switch ports (VMs and containers) of the span receive the rules in that
section.
— All group members (VMs and containers) receive the rules in that section.
You can create a section and apply it to a location, groups, and so on.
11-60 GM-Based Rules
You can configure the following settings for a rule (see the example after this list):
— All logical switch ports (VMs and containers) of the span receive the rules within that
section.
— All group members (VMs and containers) receive the rules within that section.
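As a hedged example, a GM-created distributed firewall section with groups and an Applied To setting can be expressed as a Policy-style payload such as the one below. The policy name, group paths, and service path are placeholders, and the Global Manager endpoint shown in the comment should be verified against the NSX Federation API documentation for your release.

# Illustrative Global Manager security policy body in NSX Policy API style.
# Such a body is typically sent to a Global Manager path similar to
#   /global-manager/api/v1/global-infra/domains/default/security-policies/<policy-id>
# (verify the exact endpoint and fields for your NSX-T release).
gm_policy = {
    "resource_type": "SecurityPolicy",
    "display_name": "Global-Web-Policy",
    "category": "Application",
    "scope": ["/global-infra/domains/default/groups/Global-Web-Servers"],  # Applied To
    "rules": [
        {
            "display_name": "allow-https-to-web",
            "source_groups": ["ANY"],
            "destination_groups": ["/global-infra/domains/default/groups/Global-Web-Servers"],
            "services": ["/infra/services/HTTPS"],
            "action": "ALLOW",
        }
    ],
}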
11-61 Overlap of GM and LM Sections
GM and LM can create gateway firewall and distributed firewall sections in the same category.
For GM and LM sections created in the same category, GM sections are always at the top.
• GM and LM sections are individually created under the T0/T1 gateway firewall and under
the distributed firewall.
11-62 Security Configuration Workflow
The workflow includes the following steps:
4. The central control planes (CCPs) at each location synchronize the configuration across
locations.
11-63 GM Groups and Span (1)
The GM group span can be Global, Regional, or Local.
11-64 GM Groups and Span (2)
GM groups:
For dynamic groups, each LM resolves its local members and updates other LMs that belong to
the span.
• Group1: Global and stretched to all sites, that is, Locations A, B, and C
In the example, Group1 contains segment S1, which has a larger span than Group1. As a result, VM3 does not receive the required policies.
The span of the group should always cover the span of the objects associated with it:
• In the example, the span of Group1 is Location 1 and Location 2. The span of segment S1 is all three locations, because it contains VMs from all three locations.
• However, if you associate Group1 with segment S1, VM3 does not get the necessary security policies, as shown in the sketch after this list.
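A minimal sketch of the span check described above, using illustrative location names: the group's span must cover every location spanned by its member objects, otherwise workloads in the uncovered location (VM3 here) do not receive the policies.

def span_covers(group_span, member_span):
    """Return True only if the group's span includes every location
    spanned by the member object (for example, segment S1)."""
    return set(member_span).issubset(set(group_span))

group1_span = {"Location-1", "Location-2"}
segment_s1_span = {"Location-1", "Location-2", "Location-3"}   # S1 has VMs in all three sites

# False: Location-3 is outside Group1's span, so VM3 misses the security policies.
print(span_covers(group1_span, segment_s1_span))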
11-66 Dynamic Groups Based on the VM Tag
(1)
Dynamic group membership can be based on VM tags.
In the example, the groups use VM tag criteria for dynamic membership. As a result, VM1, VM2, and VM3 are part of the group.
11-67 Dynamic Groups Based on the VM Tag
(2)
Tags remain applied to VMs even after migration with vSphere vMotion to Location 2.
A check is performed to ensure that the segment has an equal or smaller span than the group.
11-68 Review of Learner Objectives
• Explain the Federation security use cases
• Firewall rules must be created with at least one of the source or destination groups belonging to the same domain.
Questions?
Module 12
Appendix: Configuring Load Balancing
12-2 Importance
The NSX-T Data Center logical load balancer offers a high availability service for applications and
distributes the network traffic load among multiple servers.
12-5 Use Cases for Load Balancing
The NSX-T Data Center load balancer distributes incoming service requests among multiple
servers and offers high availability for applications.
• Fast response times are achieved by spreading client requests across multiple servers.
12-6 Layer 4 Load Balancing
The layer 4 load balancer is connection-based and supports the following protocols:
• TCP
• UDP
12-7 Layer 7 Load Balancing
The layer 7 load balancer is content based:
12-8 Load Balancer Architecture
The load balancer must be attached to a Tier-1 gateway.
A load balancer includes virtual servers, profiles, server pools, and monitors.
12-9 Connecting Load Balancers to Tier-1
Gateways
A load balancer must be connected to a Tier-1 gateway:
12-10 About Virtual Servers
A virtual server is a service abstraction represented by a combination of a virtual IP address, a
port, and a protocol. External clients use this combination to access the servers behind the load
balancer.
12-11 About Profiles
Profiles are used to configure the characteristics of virtual servers. You can configure the
following types of profiles:
• Application: Defines how the virtual server processes the network traffic
• Persistence: Used in stateful applications to redirect all related connections to the same
back-end server
• SSL: Defines the SSL protocol type and ciphers to be used by the client and server (layer 7
only)
12-12 About Server Pools
A server pool includes a group of servers that provide a specific functionality.
12-13 About Monitors
Monitors are used to verify the status of the servers in a server pool.
• Passive monitors check for failures during client connections and mark servers causing
consistent failures as down.
12-14 Relationships Among Load Balancer
Components
Load balancer components work together:
NSX-T Data Center load balancers have different sizes:
• Small
• Medium
• Large
• Xlarge
To check the maximum number of virtual servers and pool members for each size, see VMware
Configuration Maximums at https://configmax.vmware.com.
12-15 Deployment Modes for Load Balancing
In NSX-T Data Center, load balancing is commonly deployed in one of the following modes:
• Inline
• One arm
12-16 Inline Topology
With the inline topology, the load balancer is in the traffic path between the client and the
server.
Clients and servers must not be connected to the same Tier-1 gateway.
12-17 One-Arm Topology (1)
With the one-arm topology, the load balancer is not in the traffic path between the client and
the server.
SNAT is required.
12-18 One-Arm Topology (2)
In the one-arm deployment mode, the load balancer performs SNAT to force the return traffic from the back-end servers to the client through the load balancer (see the sketch after these steps):
2. Depending on the SNAT configuration, the load balancer replaces the client IP address with
the load balancer virtual IP address.
3. The back-end server sends a response to the load balancer virtual IP address.
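A minimal sketch of the address rewriting in this flow, using illustrative addresses (the real data path is implemented by the NSX edge load balancer, not in Python):

# Illustrative one-arm flow: SNAT makes the server reply to the load balancer,
# so the return traffic is forced back through the load balancer.
client_ip, virtual_ip, server_ip = "10.0.0.10", "192.168.10.5", "192.168.20.11"

request_from_client = {"src": client_ip, "dst": virtual_ip}    # client connects to the virtual IP
request_to_server = {"src": virtual_ip, "dst": server_ip}      # SNAT: client IP replaced with the virtual IP
reply_from_server = {"src": server_ip, "dst": virtual_ip}      # server replies to the virtual IP
reply_to_client = {"src": virtual_ip, "dst": client_ip}        # load balancer restores the addresses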
12-19 Configuration Steps for Load Balancing
The NSX UI provides a nested wizard that enables you to configure load balancing.
You must configure the Tier-1 gateway before configuring the load balancer.
12-20 Creating Load Balancers
To create a load balancer, you navigate to Networking > Load Balancing > Load Balancers >
ADD LOAD BALANCER.
In the ADD LOAD BALANCER wizard, you provide the name of the load balancer, specify the
deployment size, and provide the Tier-1 gateway to attach your load balancer to.
In this wizard, you can also select the Set link under Virtual Servers to configure the virtual
servers for the load balancer that you created.
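As a hedged alternative to the UI wizard, the same attachment can be expressed through the NSX Policy API. The manager FQDN, credentials, object IDs, and Tier-1 path below are placeholders; the resource type and field names follow the NSX-T Policy API reference, but verify them for your release before use.

import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # placeholder FQDN
AUTH = ("admin", "VMware1!VMware1!")                 # placeholder credentials

# Create or update a small load balancer attached to an existing Tier-1 gateway.
lb_service = {
    "resource_type": "LBService",
    "display_name": "web-lb",
    "size": "SMALL",                                 # SMALL, MEDIUM, LARGE, or XLARGE
    "connectivity_path": "/infra/tier-1s/T1-LB",     # placeholder Tier-1 gateway path
}

response = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/lb-services/web-lb",
    json=lb_service,
    auth=AUTH,
    verify=False,                                    # lab only; use valid certificates in production
)
response.raise_for_status()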
12-21 Creating Virtual Servers
When creating a virtual server, you can select from several protocols:
• L4 TCP: Applications running on TCP with load-balancing requirements for only layer 4
• L4 UDP: Applications running on UDP with load-balancing requirements for only layer 4
• L7 HTTP: HTTP and HTTPS applications where the load balancer must act based on the
layer 7 parameters
12-22 Configuring Layer 4 Virtual Servers
When configuring a layer 4 virtual server, you provide values for the following parameters:
• Name
• Virtual IP address
• Ports:
• Server Pool:
• Application Profile:
— This setting is populated by default based on the protocol type specified when you
created the virtual server.
Application profiles define the behavior of a particular type of network traffic. The
associated virtual server processes network traffic according to the values specified in
the application profile. Fast TCP, Fast UDP, and HTTP application profiles are the
supported types of profiles. The HTTP application profile is used for both HTTP and
HTTPS applications when the load balancer must act based on layer 7, such as load
balancing all image requests to a specific server pool member or terminating HTTPS to
offload SSL from pool members. Unlike the TCP application profile, the HTTP application
profile terminates the client TCP connection before selecting the server pool member.
• Persistence profile:
12-23 Configuring Layer 7 Virtual Servers
When configuring a layer 7 virtual server, you provide values for the following parameters:
• Name
• IP address
• Ports:
— Port ranges are not supported when configuring a layer 7 virtual server.
• Server Pool:
• Application Profile:
— This setting is populated by default based on the protocol type specified when you
create the virtual server.
• Persistence:
— Layer 7 virtual servers support both Source IP and Cookie persistence options.
• SSL Configuration:
— You can configure SSL parameters on both the server and client.
12-24 Configuring Application Profiles
Application profiles define how the virtual server processes network traffic:
• You can create additional application profiles, based on TCP, UDP, and HTTP, to suit your
requirements.
12-25 Configuring Persistence Profiles
Persistence profiles ensure the stability of stateful applications by directing all related
connections to the same back-end server.
• Source IP: Tracks sessions based on the client’s source IP address. This profile can be used
with both layer 4 and layer 7 virtual servers.
• Cookie: Uses a unique HTTP cookie to identify the session, enabling the client to remain
with a server during the session. This profile is used with layer 7 virtual servers only.
A generic persistence profile is available by default. You can also create custom persistence profiles, based on source IP or cookies, to suit your application needs (see the sketch below).
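A minimal sketch of the source IP persistence idea, using an in-memory table and placeholder addresses (the actual implementation runs in the NSX edge load balancer):

import hashlib

servers = ["192.168.20.11", "192.168.20.12", "192.168.20.13"]
persistence_table = {}   # client source IP -> selected back-end server

def pick_server(client_ip):
    """Return the same back-end server for every connection from a given client IP."""
    if client_ip not in persistence_table:
        index = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16) % len(servers)
        persistence_table[client_ip] = servers[index]
    return persistence_table[client_ip]

print(pick_server("10.0.0.10"))   # first connection selects a server
print(pick_server("10.0.0.10"))   # later connections stick to the same server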
12-26 Layer 7 Load Balancer SSL Modes
In NSX-T Data Center, the layer 7 load balancer supports the following SSL modes:
• SSL Offload:
— The load balancer uses HTTP to communicate with the servers in the pool.
• End-to-end SSL:
— The load balancer creates an SSL connection (HTTPS) toward the servers in the pool.
• SSL Passthrough:
12-27 Configuring Layer 7 SSL Profiles
SSL profiles define the SSL protocol type and ciphers to be used by the client and server.
• Client SSL profile:
— Used to configure the SSL connection between the client and the load balancer
• Server SSL profile:
— Used to configure the SSL connection between the load balancer and the back-end server pool
12-28 Configuring Layer 7 Load Balancer Rules
When configuring a layer 7 virtual server, you can optionally configure load balancer rules to
customize the load-balancing behavior:
12-29 Creating Server Pools
When configuring a server pool, you provide values for the following parameters:
• Name
• Algorithm:
— The load-balancing algorithm controls how the incoming connections are distributed among the members.
• Pool Members/Group:
• SNAT Translation Mode:
— Depending on the topology, SNAT might be required so that the load balancer receives the traffic from the server destined to the client.
• Active Monitor:
— The active health monitor is used to test whether a server within the server pool is available.
12-30 Configuring Load-Balancing Algorithms
Load-balancing algorithms control how the incoming connections are distributed among the
servers in the pool.
• Round Robin: Client requests are cycled through the available servers.
• Weighted Round Robin: Each server is assigned a weight based on its performance. This weight determines the number of client requests that the server receives compared to other servers in the pool.
• Least Connection: New connections are sent to the server that has the fewest connections.
• Weighted Least Connection: Each server is assigned a weight based on its connection capacity. This weight determines the number of client requests that the server receives compared to other servers in the pool.
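The unweighted algorithms can be sketched in a few lines of Python; the weighted variants simply scale each server's share of requests by its configured weight. Server names are illustrative.

import itertools

servers = ["web-01", "web-02", "web-03"]
active_connections = {server: 0 for server in servers}

# Round robin: cycle through the available servers in order.
round_robin = itertools.cycle(servers)

def next_round_robin():
    return next(round_robin)

# Least connection: select the server that currently has the fewest connections.
def next_least_connection():
    return min(servers, key=lambda server: active_connections[server])

selected = next_least_connection()
active_connections[selected] += 1   # account for the new connection on the selected server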
12-31 Configuring SNAT Translation Modes
Depending on the topology, SNAT might be required so that the load balancer receives the
traffic from the server destined to the client. SNAT can be enabled per server pool.
When creating a server pool, the following SNAT translation modes are available:
• Automap: The load balancer uses its interface IP address and ephemeral port to continue
the communication with a client who initially connected to one of the server's established
listening ports.
• Disabled: No SNAT.
• IP Pool: Allows users to specify a single IP address range that the load balancer uses as the SNAT source addresses.
Automap is typically used for load balancer pools with a small or medium load (below 1,000 new connections per minute).
The IP pool mode is used for load balancer pools with a large load.
12-32 Configuring Active Monitors
You use active monitors to verify whether back-end servers are available.
• HTTP
• HTTPS
• ICMP
• TCP
• UDP
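A minimal sketch of an active TCP health check, assuming the monitoring host can reach the pool members directly (in NSX-T Data Center, the edge node hosting the load balancer performs the equivalent checks):

import socket

def tcp_health_check(member_ip, port, timeout=3):
    """Return True if a TCP connection to the pool member succeeds within the timeout."""
    try:
        with socket.create_connection((member_ip, port), timeout=timeout):
            return True
    except OSError:
        return False

pool_members = ["192.168.20.11", "192.168.20.12"]
status = {member: tcp_health_check(member, 80) for member in pool_members}
print(status)   # members that fail repeated checks would be marked as down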
12-33 Configuring Passive Monitors
Passive monitors check for errors during client connections and mark servers causing consistent
failures as down.
• Layer 4 UDP virtual servers: Internet Control Message Protocol (ICMP) errors
12-34 Review of Learner Objectives
• Describe the load balancer architecture and components
• The load balancer must be attached to a Tier-1 gateway and can be deployed in one-arm or
inline mode.
• The load balancer accepts TCP, UDP, HTTP, or HTTPS requests on the virtual server IP
address and decides which pool server to use.
Questions?