
Dell PowerFlex Appliance with PowerFlex 4.x


Deployment Guide

January 2023
Rev. 1.1

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2022 - 2023 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.

Contents

Chapter 1: Introduction................................................................................................................. 8

Chapter 2: Revision history........................................................................................................... 9

Chapter 3: Deployment requirements...........................................................................................10


Software requirements.....................................................................................................................................................10
Hardware requirements.................................................................................................................................................... 10
Resource requirements..................................................................................................................................................... 11
PowerFlex management controller datastore and virtual machine details........................................................... 11
Jump server.........................................................................................................................................................................12
License and registration requirements..........................................................................................................................12
Download files from the Dell Support site................................................................................................................... 12
Ports and security configuration data requirements................................................................................................. 12
Networking pre-requisites............................................................................................................................................... 13
Network requirements for a PowerFlex appliance deployment..............................................................................16
Full network automation............................................................................................................................................. 16
Partial network automation....................................................................................................................................... 16
PowerFlex appliance and Cisco application centric infrastructure........................................................................ 17
PowerFlex appliance node cabling................................................................................................................................. 17

Chapter 4: Network configuration................................................................................................19


Configuration data............................................................................................................................................................. 19
Configuring Dell PowerSwitch switches.......................................................................................................................21
Configure the Dell PowerSwitch access switches...............................................................................................21
Upgrading the switch software................................................................................................................................23
Configuring the network for deployment.............................................................................................................. 34
Configuring Cisco Nexus switches................................................................................................................................38
Configure Cisco Nexus access switches............................................................................................................... 39
Upgrading the switch software................................................................................................................................ 41
Configuring the network for deployment.............................................................................................................. 45

Chapter 5: Configuring the iDRAC...............................................................................................50


Configure iDRAC network settings...............................................................................................................................50

Chapter 6: Installing and configuring PowerFlex management controller 2.0............................... 51


Installing and configuring a PowerFlex R650 controller node.................................................................................51
Configure the switch ports........................................................................................................................................51
PowerFlex node network configurations................................................................................................................51
Configure local RAID storage on a PowerFlex management node..................................................................56
Upgrade the firmware................................................................................................................................................ 56
Configure the BOSS card.......................................................................................................................................... 57
Install VMware ESXi ...................................................................................................................................................57
Configure VMware ESXi............................................................................................................................................ 58
Install the Dell Integrated Service Module............................................................................................................ 58


Modify the existing VM network............................................................................................................................. 58


Configure NTP on the host.......................................................................................................................................59
Rename the BOSS datastore................................................................................................................................... 59
Create a PERC datastore.......................................................................................................................................... 59
Deploy VMware vCenter Server Appliance (vCSA) on the PowerFlex management controller............. 60
Create a datacenter and add a host........................................................................................................................61
Add VMware vSphere licenses................................................................................................................................. 61
Installing and configuring a PowerFlex management controller 2.0..................................................................... 62
Configure the switch ports....................................................................................................................................... 62
Upgrade the firmware................................................................................................................................................ 62
Configure the BOSS card..........................................................................................................................................62
Installing VMware ESXi.............................................................................................................................................. 63
Deploying VMware vCenter......................................................................................................................................66
Deploying PowerFlex...................................................................................................................................................74
Delete vSwitch0.......................................................................................................................................................... 82
Migrate the storage VMware vMotion vCSA....................................................................................................... 82
Enable VMware vSphere HA and DRS for the new cluster.............................................................................. 82
Install and configure the embedded operating system jump server..................................................................... 83

Chapter 7: Deploying the PowerFlex management platform......................................................... 85


Deploying and configuring the Powerflex management platform installer VM.................................................. 85
Deploying and configuring the PowerFlex management platform installer VM using VMware
vSphere..................................................................................................................................................................... 85
Deploying and configuring the PowerFlex management platform installer using Linux KVM................... 87
Deploying and configuring the PowerFlex management platform VMs...............................................................89
Deploying and configuring the PowerFlex management platform using VMware vSphere...................... 89
Deploying and configuring the PowerFlex management platform using Linux KVM.................................. 96

Chapter 8: Configuring PowerFlex Manager .............................................................................. 103


Log in to PowerFlex Manager.......................................................................................................................................103
Perform the initial setup................................................................................................................................................ 103
Enable SupportAssist................................................................................................................................................ 104
Configure the initial setup for compliance...........................................................................................................105
Specify the installation type....................................................................................................................................105
Verifying the initial setup......................................................................................................................................... 106
Getting started................................................................................................................................................................. 107
Change your password...................................................................................................................................................108
Configuring the PowerFlex Manager settings.......................................................................................................... 109
Configure repositories.............................................................................................................................................. 109
Configuring networking.............................................................................................................................................110
Configure license management............................................................................................................................... 112
Discover resources...........................................................................................................................................................112
Upgrading the switch software.....................................................................................................................................114
Creating or cloning a template...................................................................................................................................... 114
Clone an existing template....................................................................................................................................... 114
Create a template.......................................................................................................................................................115
Publish a template............................................................................................................................................................ 116
Deploy CloudLink Center with PowerFlex Manager.................................................................................................117
Create a VM-VM affinity rule.................................................................................................................................. 119
Deploy resource groups.................................................................................................................................................. 119


Configure individual trunk with per NIC VLAN setup for storage-only nodes with a bonded
management interface............................................................................................................................................... 120
Verify resource group status........................................................................................................................................ 123
Supported modes for a new deployment...................................................................................................................124
Adding the PowerFlex management service to PowerFlex Manager................................................................. 125
Gather PowerFlex system information................................................................................................................. 125
Add the PowerFlex system as a resource............................................................................................................125
Add as an existing resource group.........................................................................................................................126
Upload a management data store license............................................................................................................ 127

Chapter 9: Deploying the PowerFlex file nodes.......................................................................... 128


Deployment requirements for PowerFlex file services........................................................................................... 129
Resource group deployment......................................................................................................................................... 130
Define networks......................................................................................................................................................... 130
Edit a network............................................................................................................................................................. 131
Delete a network.........................................................................................................................................................131
Discover resources.....................................................................................................................................................131
Build or clone a template..........................................................................................................................................133
Component types.......................................................................................................................................................133
Node settings..............................................................................................................................................................133
Cluster component settings.................................................................................................................................... 136
Create a template...................................................................................................................................................... 137
Clone a template........................................................................................................................................................ 138
Build and publish a template....................................................................................................................................138
Edit template information........................................................................................................................................ 139
Edit a template........................................................................................................................................................... 140
Deploy a resource group.......................................................................................................................................... 140
Verify resource group status................................................................................................................................... 141

Chapter 10: Deploying PowerFlex NVMe over TCP......................................................................142


Create the storage NVMe template............................................................................................................................142
Deploy storage with the NVMe/TCP template........................................................................................................ 142
Configuring NVMe over TCP on a VMware ESXi compute-only node............................................................... 143
Enable the NVMe/TCP VMkernel ports...............................................................................................................143
Add NVMe over TCP software storage adapter................................................................................................ 143
Copy the host NQN...................................................................................................................................................144
Add a host to PowerFlex Manager........................................................................................................................ 144
Create a volume......................................................................................................................................................... 144
Map a volume to the host........................................................................................................................................144
Discover target IP addresses.................................................................................................................................. 145
Configuring NVMe over TCP on SLES....................................................................................................................... 145
Add a host to PowerFlex Manager........................................................................................................................ 145
Create a volume......................................................................................................................................................... 146
Map a volume to the host........................................................................................................................................146
Discover target IP addresses.................................................................................................................................. 146
Configuring NVMe over TCP on Red Hat Enterprise Linux.................................................................................. 148
Pre-configure the embedded operating system 7.x.......................................................................................... 148
Create a volume......................................................................................................................................................... 149
Map a volume to the host........................................................................................................................................149
Discover target IP addresses.................................................................................................................................. 149


Add a PowerFlex compute-only host to the PowerFlex storage system.....................................................150

Chapter 11: Deploying the VMware NSX-T Ready nodes..............................................................152


Configuring the Cisco Nexus switches.......................................................................................................................152
Update management switches............................................................................................................................... 152
Update aggregation switches................................................................................................................................. 153
Update access switches.......................................................................................................................................... 159
Update border leaf switches...................................................................................................................................159
Update leaf switches................................................................................................................................................ 166
Configuring the VMware NSX-T Edge hosts for VMware ESXi ......................................................................... 166
Configure iDRAC network settings....................................................................................................................... 166
Update the BIOS and system firmware................................................................................................................ 167
Disable the hot spare power supply...................................................................................................................... 167
Configure system monitoring..................................................................................................................................168
Enable UEFI and configure data protection for the BOSS card.....................................................................168
Disabling IPMI for NSX-T Edge nodes..................................................................................................................169
Configure data protection for the PERC Mini Controller..................................................................................171
Install and configure VMware ESXi........................................................................................................................172
Create a vSphere cluster and add NSX-T Edge hosts to VMware vCenter............................................... 173
Add the new VMware ESXi local datastore and rename the operating system datastore (RAID
local storage only)..................................................................................................................................................173
Enable and configure vSAN on the NSX-T Edge cluster (vSAN storage option)......................................174
Configure NTP settings............................................................................................................................................175
Configuring virtual networking for NSX-T Edge nodes.................................................................................... 175
Patch and install drivers for VMware ESXi..........................................................................................................179
Configuring the hyperconverged or compute-only transport nodes ................................................................. 180
Configure the NSX-T overlay distributed virtual port group.......................................................................... 180
Convert trunk access to LACP-enabled switch ports for flex_dvswitch (option 1) .................................181
Convert LACP to trunk access enabled switch ports for cust_dvswitch (option 2)................................183
Add the VMware NSX-T nodes using PowerFlex Manager............................................................................ 185

Chapter 12: Optional deployment tasks...................................................................................... 187


Configuring replication on PowerFlex nodes.............................................................................................................187
Clone the storage replication template................................................................................................................ 187
Deploy storage with replication template ........................................................................................................... 188
Clone the hyperconverged replication template................................................................................................ 188
Deploy hyperconverged nodes with replication template ...............................................................................189
Create and copy certificates...................................................................................................................................190
Create remote consistency groups (RCG)..........................................................................................................190
Add peer replication systems...................................................................................................................................191
Storage data client authentication.............................................................................................................................. 192
Prepare for SDC authentication.............................................................................................................................192
Configure storage data client to use authentication.........................................................................................192
Enable storage data client authentication........................................................................................................... 194
Installing a Windows compute-only node with LACP bonding NIC port design............................................... 194
Mount the Windows Server 2016 or 2019 ISO................................................................................................... 195
Install the Windows Server 2016 or 2019 on a PowerFlex compute-only node..........................................195
Download and install drivers....................................................................................................................................196
Configure networks...................................................................................................................................................196
Disable Windows Firewall......................................................................................................................................... 197


Enable the hyper-V role through Windows Server 2016 or 2019................................................................... 197
Enable the Hyper-V role through Windows PowerShell................................................................................... 197
Enable Remote Desktop access............................................................................................................................. 198
Install and configure SDC........................................................................................................................................ 198
Map volumes............................................................................................................................................................... 198
Activate the license...................................................................................................................................................199
Enable PowerFlex file on an existing PowerFlex appliance................................................................................... 199
Configure VMware vCenter high availability............................................................................................................200

Chapter 13: Post-deployment tasks........................................................................................... 201


Enabling SupportAssist...................................................................................................................................................201
Deploy or configure Secure Connect Gateway.................................................................................................. 201
Configuring the initial setup and generating the access key and pin........................................................... 202
Configuring SupportAssist on PowerFlex Manager..........................................................................................202
Events and alerts...................................................................................................................................................... 204
Redistribute the MDM cluster..................................................................................................................................... 205
Verify the PowerFlex Manager resource group...................................................................................................... 206
Verify PowerFlex status................................................................................................................................................ 206
Export a compliance report.......................................................................................................................................... 207
Export a configuration report.......................................................................................................................................207
Back up using PowerFlex Manager............................................................................................................................ 208
Back up the networking switch configuration......................................................................................................... 209
Backing up the VMware vCenter................................................................................................................................209
Log in to PowerFlex using scli...................................................................................................................................... 210


1
Introduction
The Dell PowerFlex Appliance with PowerFlex 4.x Deployment Guide provides the steps to deploy the software applications and hardware components required to build a PowerFlex appliance and configure it with PowerFlex Manager.
The target audience for this guide is Dell Technologies Services personnel who deploy a PowerFlex appliance and configure it with PowerFlex Manager.
A PowerFlex appliance deployment proceeds as follows:
1. Check the prerequisites
2. Complete the node cabling
3. Configure the networking
4. Configure iDRAC
5. Configure the PowerFlex management controller
6. Configure the PowerFlex management platform
7. Deploy PowerFlex appliance
8. Verify the deployment status
See the Dell PowerFlex 4.0.x Administration Guide for additional documentation about using PowerFlex Manager.
See Dell Support to search the knowledge base for FAQs, Tech Alerts, and Tutorials.


2
Revision history
Date         | Document revision | Description of changes
January 2023 | 1.1               | Added support for the Broadcom 57414 and 57508 network adapters and CloudLink 7.1.5. Updated support for the PowerFlex management platform.
August 2022  | 1.0               | Initial release


3
Deployment requirements
This section lists the hardware and software required to build a PowerFlex appliance.
For a complete list of supported hardware, refer to the Dell PowerFlex Appliance with PowerFlex 4.x Support Matrix.

Related information
Deploying and configuring the PowerFlex management platform installer VM using VMware vSphere
Deploying and configuring the PowerFlex management platform installer using Linux KVM
Deploying and configuring the PowerFlex management platform using VMware vSphere
Deploying and configuring the PowerFlex management platform using Linux KVM

Software requirements
Download the Intelligent Catalog (IC) before starting the deployment. The following operating systems, software, and packages are required; all of them are delivered as part of the IC except for the Dell CloudLink and secure connect gateway (SCG) images.
● VMware vSphere vCenter and ESXi 7.x
● Dell embedded operating system
● PowerFlex 4.x packages
● PowerFlex management platform packages
● Jump server image
● Dell CloudLink (optional)
● Secure connect gateway (optional)

Other requirements
● Enterprise Management Platform (EMP) - prepare before starting the deployment
● Licenses:
○ PowerFlex Manager
○ CloudLink

Hardware requirements
Before deploying a PowerFlex appliance, the hardware requirements must be met.
Ensure you have:
● A minimum of four PowerFlex appliance nodes.
● PowerFlex appliance management controller nodes (PowerFlex R650).
● Supported PowerFlex appliance management controller configurations:
○ Single node (Dell provided or customer provided)
○ Multiple nodes (minimum of three nodes)
● A minimum of two PowerFlex Manager supported access/leaf switches.
● SFP28 25 Gb direct attach copper cables (four for each PowerFlex appliance node and four for the PowerFlex management controller node).
● QSFP28 100 Gb direct attach copper cables (two for the access switch uplinks and two for the access switch VLT or VPC interconnects).
● 1 Gb CAT5/CAT6 cables (one for each node for iDRAC connectivity and one for each access switch for management connectivity).


Resource requirements
Resource requirements must be met before you deploy.
For all the examples in this document, the following conventions are used:
● The third octets of the example IP addresses match the VLAN of the interface.
● All networks in the example have a subnet mask of 255.255.255.0.
The following table lists the minimum resource requirements for the infrastructure virtual machines:

Application | RAM (GB) | Number of vCPUs | Hard disk (GB)
Secure connect gateway | 4 | 2 | 16
Dell CloudLink | 6 | 4 | 64
VMware vCenter Server Appliance | 32 | 16 | 1065 - 1765
Embedded operating system based jump server | 8 (minimum) | 2 | 320
PowerFlex management virtual machines (NOTE: The PowerFlex management platform installer is required for the initial install) | 3 VMs, 32 GB each, 96 total | 3 VMs, 16 vCPUs each, 48 total | 3 VMs, 650 GB each, 1950 total
PowerFlex management platform installer (NOTE: Removed after the PowerFlex management platform is installed) | 16 | 4 | 500

PowerFlex management controller datastore and virtual machine details
The following table lists the datastores to use:

Controller type | Volume name | Size (GB) | VMs | Domain name | Storage pool
PowerFlex management controller 2.0 | vcsa | 3000 | pfmc_vcsa | PFMC | PFMC-pool
PowerFlex management controller 2.0 | general | 1500 | Management VMs (for example: management gateway, customer gateway, CloudLink, additional VMs) | PFMC | PFMC-pool
PowerFlex management controller 2.0 | pfmp | 3000 | PowerFlex Manager | PFMC | PFMC-pool


Jump server
The jump server is an embedded operating system-based VM that is used to access and manage all the devices in the PowerFlex appliance system.
The embedded operating system-based jump server is marked as internal-only and can be downloaded only by the professional
services or manufacturing team. The embedded operating system-based jump server does not provide the DNS or NTP services
that are needed for full PowerFlex appliance functionality.

License and registration requirements


Specific licensing and registration are required before you can deploy a PowerFlex appliance.
The following are required for a customer deployment to enable secure connect gateway:
● Site ID - Required for secure connect gateway provisioning. The site name (Site ID) is provided in a license fulfillment email that is sent to the customer.
● Enterprise License Management Systems (ELMS) software unique ID - Required for device registration for alerts. The activation serial number (software unique ID) is provided in a software license activation notification email that is sent to the customer. If the customer does not receive the email messages, or if the site name must be changed, go to Dell Licensing and open a service request.
● PowerFlex Manager - Required for full functionality. Users are restricted to Unmanaged mode until a license is installed.
● CloudLink - Requires a capacity license or an SED license, depending on customer requirements.

Download files from the Dell Support site


Ensure the integrity of files from the Dell Support site.

About this task


The Dell Technologies Support site contains the files (Intelligent Catalog (IC) bundle) that are required for a PowerFlex appliance deployment.
Compare the SHA-256 value of each downloaded file with the SHA-256 value that is published for that file on the Dell Support site.

Steps
1. On the Dell Support site, hover over the question mark (?) next to the File Description to see the SHA-256 hash value.
2. In Windows File Explorer, right-click the downloaded file and select CRC SHA > SHA-256. The CRC SHA option is available only if the 7-Zip application is installed.
The SHA-256 value is calculated.
3. The SHA-256 value that is shown on the Dell Technologies Support site and the SHA-256 value that is generated on the Windows system must match. If the values do not match, the file is corrupted. Download the file again.
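
If 7-Zip is not installed, PowerShell can produce the same hash. The following is a minimal sketch; the file name is a hypothetical example, and the expected value is the one copied from the Dell Support site.

# Compute the SHA-256 hash of the downloaded bundle (example file name)
Get-FileHash -Path .\PowerFlex_IC_bundle.zip -Algorithm SHA256

# Compare the computed hash against the published value (string comparison is case-insensitive)
(Get-FileHash -Path .\PowerFlex_IC_bundle.zip -Algorithm SHA256).Hash -eq '<SHA-256 value from the Dell Support site>'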

Ports and security configuration data requirements

PowerFlex ports
For information about the ports and protocols used by the components, see the Dell PowerFlex Rack with PowerFlex 4.x
Security Configuration Guide.


PowerFlex Manager
Port TCP Service
20 Yes FTP
21 Yes FTP
22 Yes SSH
80 Yes HTTP
443 Yes HTTPS

Jump server ports


Port TCP Service
22 Yes SSH

CloudLink Center ports


See Network port information for CloudLink Center in the Dell PowerFlex Rack with PowerFlex 4.x Security Configuration Guide
for information about the ports used by CloudLink Center.

Networking pre-requisites
Configure the customer network for routing and layer-2 access for the various networks before PowerFlex Manager deploys the
PowerFlex appliance cluster.
The pre-deployment customer network requirements are as follows:
● Redundant connections to access switches using virtual link trunking (VLT) or virtual port channel (VPC).
● MTU=9216 on all ports or link aggregation interfaces carrying PowerFlex data VLANs.
● MTU=9216 as default on VMware vMotion and PowerFlex data interfaces.
The following table lists customer network pre-deployment VLAN configuration options:
● Example VLAN: Lists the VLANs that are used in the PowerFlex appliance deployment.
● Network Name: The network name and/or VLAN as defined by PowerFlex Manager.
● Description: Describes each network or VLAN.
NOTE: The VLAN numbers in the table are examples; they may change depending on customer requirements.

In a default PowerFlex setup, two data networks are standard. Four data networks are required only for specific customer requirements, for example, high performance or the use of trunk ports. For more information, contact your Dell Sales Engineer.
CAUTION: All defined data networks must be accessible from all storage data clients (SDCs). If you have implemented a solution with four data networks, all four must be assigned and accessible from each storage data client. Using fewer than the configured number of networks results in an error in PowerFlex and can lead to path failures and other problems.

VLAN network requirements:


● VLAN flex-node-mgmt (105) and flex-stor-mgmt (150) must be routable to each other
● VLAN flex-node-mgmt (105) and pfmc-sds-mgmt (140) must be routable to each other
● VLAN pfmc-sds-mgmt (140) and flex-stor-mgmt (150) must not route to each other


Network requirements for PowerFlex management controller 2.0


Example VLAN | Network Name | Description | Properties
101 | Hardware Management | For connection to the PowerFlex management controller node iDRAC interface and PowerFlex Manager (PowerFlex management platform VMs and ingress controller) | Layer-2/Layer-3 connectivity, MTU=1500/9216
103 | VMware vCenter HA | For the VMware vCenter High Availability (vCenter HA) network interface | Layer-2 connectivity, MTU=1500
105 | Hypervisor management | For the management interface of ESXi, VMware vCenter, CloudLink Center, jump server, Secure Connect Gateway (SCG), and PowerFlex Manager (PowerFlex management platform VMs and ingress controller) | Layer-3 connectivity, MTU=1500/9216
140 | PowerFlex management controller 2.0 PowerFlex management | For the SVM management interface on the PowerFlex management controller PowerFlex cluster | Layer-3 connectivity, MTU=1500/9216
141 | PowerFlex management controller 2.0 Hypervisor migration | For the VMware vMotion interface on the management controller vSphere cluster | Layer-2 connectivity, MTU=1500/9216
142 | PowerFlex management controller 2.0 PowerFlex data 1 | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
143 | PowerFlex management controller 2.0 PowerFlex data 2 | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
150 | PowerFlex management | For the SVM and storage-only node management interface | Layer-3 connectivity, MTU=1500/9216
151 | PowerFlex data 1 | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
152 | PowerFlex data 2 | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
153 | PowerFlex data 3 (if required) | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
154 | PowerFlex data 4 (if required) | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216

Network requirements for PowerFlex production cluster


Example VLAN | Network Name | Description | Properties
101 | Hardware management | For connection to the PowerFlex production node iDRAC interface | Layer-2/Layer-3 connectivity, MTU=1500/9216
105 | Hypervisor management | For the VMware ESXi management interface on the production vSphere cluster | Layer-3 connectivity, MTU=1500/9216
106 | Hypervisor migration | For the VMware vMotion interface on the production vSphere cluster | Layer-2 connectivity, MTU=1500/9216
150 | PowerFlex management | For the SVM and PowerFlex storage-only node management interface | Layer-3 connectivity, MTU=1500/9216
151 | PowerFlex data 1 | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
152 | PowerFlex data 2 | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
153 | PowerFlex data 3 (if required) | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
154 | PowerFlex data 4 (if required) | For SDS-to-SDS and SDS-to-SDC data path | Layer-2 connectivity, MTU=9216

Network requirements for PowerFlex Asynchronous replication (optional)


Example VLAN | Network Name | Description | Properties
161 | PowerFlex replication 1 | For SDR-to-SDR external communication | Layer-3 connectivity, MTU=9216. Routable to the replication peer system
162 | PowerFlex replication 2 | For SDR-to-SDR external communication | Layer-3 connectivity, MTU=9216. Routable to the replication peer system

Network requirements for PowerFlex file (optional)


Example VLAN | Network Name | Description | Properties
101 | Hardware management | For connection to the PowerFlex file node iDRAC interface | Layer-2/Layer-3 connectivity, MTU=1500/9216
150 | PowerFlex management | For the PowerFlex file node operating system management | Layer-3 connectivity, MTU=1500/9216
151 | PowerFlex data 1 | For SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
152 | PowerFlex data 2 | For SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
153 | PowerFlex data 3 (if required) | For SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
154 | PowerFlex data 4 (if required) | For SDS-to-SDC data path | Layer-2 connectivity, MTU=9216
250 | NAS file management (untagged VLAN) | For NAS management traffic | Layer-3 connectivity, MTU=1500/9216
251 | NAS file data 1 | For accessing PowerFlex file data from the client | Layer-2/Layer-3 connectivity, MTU=1500/9000
252 | NAS file data 2 | For accessing PowerFlex file data from the client | Layer-2/Layer-3 connectivity, MTU=1500/9000

Network requirements for NSX-T (optional)


Example VLAN | Network Name | Description | Properties
101 | Hardware management | For connection to the NSX-T node iDRAC interface | Layer-2/Layer-3 connectivity, MTU=1500/9216
105 | Hypervisor management | For the VMware ESXi management interface on the NSX-T Edge cluster (shared with the production vSphere cluster) | Layer-3 connectivity, MTU=1500/9216
113 | Hypervisor migration (only if required) | For the VMware vMotion interface on the NSX-T Edge cluster (optional; only if the customer chooses to deploy vSAN on the NSX-T Edge cluster) | Layer-2 connectivity, MTU=1500/9216
116 | nsx-vsan (only if required) | For the VMware vSAN interface on the NSX-T Edge cluster (optional; only if the customer chooses to deploy vSAN on the NSX-T Edge cluster) | Layer-2 connectivity, MTU=9216
121 | nsx-transport | For the NSX-T transport interface on the NSX-T Edge cluster (used for the NSX-T overlay) | Layer-2 connectivity, MTU=9216
122 | nsx-edge1 | For NSX-T Edge external VLAN 1, used for BGP uplink | Layer-3 connectivity, MTU=1500
123 | nsx-edge2 | For NSX-T Edge external VLAN 2, used for BGP uplink | Layer-3 connectivity, MTU=1500

Related information
Partial network automation
Configuring Cisco Nexus switches
Configure the Dell PowerSwitch access switches

Network requirements for a PowerFlex appliance deployment
Configure the access switches with the pre-deployment configurations. See Configuring Cisco Nexus switches and Configuring Dell PowerSwitch switches for information on the specific switch configuration.
Whether the PowerFlex management controllers are Dell provided or customer provided, configure the switch ports manually; customers are responsible for switch port configuration. PowerFlex Manager does not deploy the PowerFlex management controller, but once the controller is installed and configured, PowerFlex Manager can take over and manage it. If the management controller infrastructure is customer provided, advise the customer to configure it.
See the Network configuration section for more details.

Related information
Configuring Cisco Nexus switches
Configuring Dell PowerSwitch switches
Network configuration

Full network automation


Full network automation allows you to work with supported switches and requires less manual configuration.
Full network automation also provides better error handling since PowerFlex Manager can communicate with the switches and
identify any problems that may exist with the switch configurations.
Specific networks are required for a PowerFlex appliance deployment. Each network requires enough IP addresses allocated for
the deployment and future expansion. If the access switches are supported by PowerFlex Manager, server facing switch ports
are configured automatically.

Partial network automation


Partial network automation allows you to work with unsupported switches but requires more manual configuration before a
deployment can proceed successfully.
If you choose to use partial network automation, you give up the error handling and network automation features that are
available with a full network configuration that includes supported switches. For a partial network deployment, the switches are
not discovered, so PowerFlex Manager does not have access to switch configuration information. You must ensure that the
switches are configured correctly, since PowerFlex Manager does not have the ability to configure the switches for you. If your switch is not configured correctly, the deployment may fail and PowerFlex Manager is not able to provide information about why
the deployment failed.
If you select a partial networking template, configure the switches before deploying the service. For example configurations for Dell PowerSwitch and Cisco Nexus switches, see the Configuring Dell PowerSwitch switches and Configuring Cisco Nexus switches sections, and the port configuration sketch that follows the requirements list below.
The pre-deployment access switch requirements are as follows:
● Management interfaces IP addresses configured.
● Switches and interconnect link aggregation interfaces configured to support VLT or VPC.
● MTU=9216 on redundant uplinks with link aggregation (VLT or VPC) to customer data center network.
● MTU=9216 on VLT or VPC interconnect link aggregation interfaces.
● LLDP enabled on switch ports that are connected to PowerFlex appliance node ports.
● SNMP enabled, community string set (public) and trap destination set to PowerFlex Manager.
● All uplink, VLT or VPC, and PowerFlex appliance connected ports are not shut down.
● Interface port configuration for the downlink to each PowerFlex node (only applicable for partial network automation).
Dell recommends that only PowerFlex appliance nodes (including the PowerFlex management node, if present) be connected to the access switches.
NOTE: VLANs 140 through 143 are only required for PowerFlex management controller 2.0.

See the table in Configuration data for the VLAN and network information.
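
As a reference for partial network automation, the following is a minimal sketch of what a node-facing access port might look like on a Cisco Nexus switch. The interface number, VLAN list, and trap destination address are hypothetical examples; use the VLANs from the Configuration data section and the addresses from the customer EMP.

feature lldp
snmp-server community public ro
snmp-server host 192.168.105.121 traps version 2c public

interface Ethernet1/1
  description PowerFlex node 1, NIC X port 1
  switchport mode trunk
  switchport trunk allowed vlan 105,106,150,151,152
  mtu 9216
  spanning-tree port type edge trunk
  no shutdown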

Related information
Configuration data
Networking pre-requisites

PowerFlex appliance and Cisco application centric infrastructure
If the customer network is a Cisco application centric infrastructure (ACI) software-defined network, the following configuration
restrictions apply:
● Cisco ACI must have network-centric mode enabled.
● There must be no Virtual Machine Manager (VMM) integration with any of the VMware ESXi-based PowerFlex appliance nodes.
● When attaching more than one domain to an endpoint group, ensure that the VLAN pools tied to the domains do not overlap.
● The PowerFlex appliance nodes are configured using PowerFlex Manager in partial network automation mode:
○ The requirements must include having all PowerFlex related VLANs previously configured on the leaf switches.
○ The nodes must be configured using Link Aggregation Control Protocol (LACP).
NOTE:
● The testing performed to validate ACI was done using ACI version 5.x; other versions are expected to function similarly.
● The PowerFlex appliance Intelligent Catalog (IC) should be 36.354.A05 or newer.

PowerFlex appliance node cabling


Redundancy and throughput are the main considerations when cabling the PowerFlex appliance nodes.
The two power connections to the PSUs are made from two different power sources (UPSs). The two PowerFlex data network connections for each node are on different NICs and are connected to each access switch. The two management network connections for each node are also on different NICs and are connected to each access switch. PowerFlex Manager discovers which PowerFlex appliance NIC ports are connected to which access switch ports. Dell recommends cabling in a methodical and logical way to simplify debugging of connectivity issues. You must also know the cabling of NIC ports to switches when creating PowerFlex Manager templates to maintain redundancy. The cabling of NIC ports to the access switches for each model is outlined in the following table.


NOTE: The PowerFlex appliance iDRAC NIC is connected to a separate customer-provided or Dell-provided switch, which is the out-of-band management switch.
The following example shows the physical network connectivity between PowerFlex hyperconverged nodes, PowerFlex storage-only nodes, PowerFlex compute-only nodes, PowerFlex controller nodes, the access switches, and the management switch.

Node number | Node type  | NIC X, port 1     | NIC X, port 2     | NIC Y, port 1     | NIC Y, port 2     | NIC Z, port 1 | M0/iDRAC
1           | PowerFlex  | Access A, port 1  | Access B, port 2  | Access B, port 1  | Access A, port 2  | NA            | OOB_switch
2           | PowerFlex  | Access A, port 3  | Access B, port 4  | Access B, port 3  | Access A, port 4  | NA            | OOB_switch
3           | PowerFlex  | Access A, port 5  | Access B, port 6  | Access B, port 5  | Access A, port 6  | NA            | OOB_switch
4           | PowerFlex  | Access A, port 7  | Access B, port 8  | Access B, port 7  | Access A, port 8  | NA            | OOB_switch
1           | Controller | Access A, port 23 | Access B, port 24 | Access B, port 23 | Access A, port 24 | OOB_switch    | OOB_switch
2           | Controller | Access A, port 21 | Access B, port 22 | Access B, port 21 | Access A, port 22 | OOB_switch    | OOB_switch
3           | Controller | Access A, port 19 | Access B, port 20 | Access B, port 19 | Access A, port 20 | OOB_switch    | OOB_switch
4           | Controller | Access A, port 17 | Access B, port 18 | Access B, port 17 | Access A, port 18 | OOB_switch    | OOB_switch

On some servers, the NIC cards in certain PCIe slots are inverted, which means that NIC port 1 is on the right and NIC port 2 is on the left. Consider the NIC port positions during cabling and deployment. The port numbers might also be printed on the card itself.

Related information
Configure storage data client on the PowerFlex management controller


4
Network configuration
PowerFlex appliance is available in two standard network architectures: access-aggregation (Cisco Nexus or Dell PowerSwitch) or leaf-spine (Cisco Nexus). For most PowerFlex appliance deployments, the access-aggregation network configuration provides the simplest integration. However, when customer scale or east-west bandwidth requirements exceed the capabilities of the access-aggregation design, the leaf-spine architecture is used instead.

Network configuration workflow


The following workflow lists the configuration requirements and points to reference examples for the customer switch configuration that must be in place before the PowerFlex appliance can be deployed. A brief VLAN-definition sketch follows the workflow.
NOTE: The customer may already have some of the tasks completed if the deployment is connecting to an existing
network.
1. Initialize the switch (if needed)
2. Upgrade firmware (if needed)
3. Configure the basic requirements
4. Configure the VLANs
5. Configure for access/aggregation or leaf-spine
6. Configure controller access ports
7. Configure customer uplinks
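
To illustrate the VLAN-definition step, the following is a minimal sketch for a Cisco Nexus access switch. The VLAN IDs follow the examples used throughout this guide; the names used for the data VLANs are hypothetical and should match the customer EMP.

vlan 105
  name flex-node-mgmt
vlan 140
  name pfmc-sds-mgmt
vlan 150
  name flex-stor-mgmt
vlan 151
  name flex-data1
vlan 152
  name flex-data2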

Information required from the Enterprise Management Platform (EMP)
● Customer uplink number, type, and speed
● VLAN names and numbers
● L2 and L3 information
● Interface types (access, port-channel, Link Aggregation Control Protocol)

Related information
Network requirements for a PowerFlex appliance deployment

Configuration data
This section describes the supported networking configurations for PowerFlex appliance and applies to both Dell PowerSwitch and Cisco Nexus switches.
PowerFlex appliance supports the following node connectivity network configurations:
● Port-channel - All PowerFlex nodes are connected to access or leaf pair switches.
● Port-channel with link aggregation control protocol (LACP) - All PowerFlex nodes are connected to access and leaf pair
switches.
● Individual trunk - All PowerFlex nodes are connected using trunk configuration.
Access/leaf switch ports connected to nodes require different configuration parameters for PowerFlex appliance deployment. If
partial network automation is used, the customer is responsible for the access/leaf switch configuration.
The following shows the different configuration parameters required on the access or leaf switch ports connected to the nodes for the deployment of PowerFlex appliance. For each virtual switch or bond, the entries list the port-channel or interface mode, speed, LACP mode, and required VLANs, followed by the node load balancing method:


NOTE: The VLANs listed here are examples; they may change depending on customer requirements.

Port-channel with LACP (manual build) - PowerFlex management controller 2.0:
● fe_dvSwitch: Trunk, 25 GbE, LACP Active, VLANs 105, 140, 150
● be_dvSwitch: Trunk, 25 GbE, LACP Active, VLANs 103, 141, 142, 143, 151, 152 (if required: 153 and 154)
● oob_dvSwitch: Access, 25 GbE, LACP NA, VLAN 101
● Node load balancing: LAG - Active - Source and destination IP and TCP/UDP (oob_dvSwitch: NA)

Port-channel for full network automation - PowerFlex compute-only nodes (VMware ESXi based):
● cust_dvSwitch: Trunk, 25/100 GbE, LACP mode ON, VLANs 105-106, 150
● flex_dvSwitch: Trunk, 25/100 GbE, LACP mode ON, VLANs 151, 152 (if required: 153 and 154)
● Node load balancing: Route based on IP hash

Port-channel for full network automation - PowerFlex hyperconverged nodes:
● cust_dvSwitch: Trunk, 25/100 GbE, LACP mode ON, VLANs 105-106, 150
● flex_dvSwitch: Trunk, 25/100 GbE, LACP mode ON, VLANs 151, 152 (if required: 153, 154, 161, and 162)
● Node load balancing: Route based on IP hash

Port-channel for full network automation - PowerFlex storage-only nodes / PowerFlex file nodes: NA

Port-channel with Link Aggregation Control Protocol (LACP) for full network automation/partial network automation - PowerFlex compute-only nodes (VMware ESXi based):
● cust_dvSwitch: Trunk, 25/100 GbE, LACP Active, VLANs 105-106
● flex_dvSwitch: Trunk, 25/100 GbE, LACP Active, VLANs 151, 152 (if required: 153 and 154)
● Node load balancing: LAG - Active - Source and destination IP and TCP/UDP

Port-channel with LACP for full network automation/partial network automation - PowerFlex hyperconverged nodes:
● cust_dvSwitch: Trunk, 25/100 GbE, LACP Active, VLANs 105-106, 150
● flex_dvSwitch: Trunk, 25/100 GbE, LACP Active, VLANs 151, 152 (if required: 153, 154, 161, and 162)

Port-channel with LACP for full network automation/partial network automation - PowerFlex storage-only nodes:
● Bond0: Trunk, 25/100 GbE, LACP Active, VLANs 150, 151 (if required: 153 and 161)
● Bond1: Trunk, 25/100 GbE, LACP Active, VLAN 152 (if required: 154 and 162)
● Node load balancing: Mode 4

Port-channel with LACP for full network automation/partial network automation - PowerFlex file nodes:
● Bond0: Trunk, 25/100 GbE, LACP Active, VLANs 150, 151, 152 (if required: 153 and 154)
● Bond1: Trunk, 25/100 GbE, LACP Active, VLANs 250 (untagged VLAN), 251, 252

Trunk for full network automation/partial network automation - PowerFlex compute-only nodes (VMware ESXi based):
● cust_dvSwitch: Trunk, 25/100 GbE, LACP NA, VLANs 105-106
● flex_dvSwitch: Trunk, 25/100 GbE, LACP NA, VLANs 151-152 (if required: 153, 154)
● Node load balancing: Originating virtual port (recommended), Physical NIC load, Source MAC hash

Trunk for full network automation/partial network automation - PowerFlex hyperconverged nodes:
● cust_dvSwitch: Trunk, 25/100 GbE, LACP NA, VLANs 105-106, 150
● flex_dvSwitch: Trunk, 25/100 GbE, LACP NA
● Node load balancing: Originating virtual port (recommended), Physical NIC load, Source MAC hash

Trunk for full network automation/partial network automation - PowerFlex storage-only nodes (option 1):
● Bond0: Trunk, 25/100 GbE, LACP NA, VLANs 150, 151 (if required: 153 and 161)
● Bond1: Trunk, 25/100 GbE, LACP NA, VLAN 152 (if required: 154 and 162)
● Node load balancing: Mode0-RR, Mode1-Active backup, Mode6-Adaptive LB (recommended)

Individual trunk for full network automation/partial network automation - PowerFlex storage-only nodes (option 2):
● Per NIC VLAN trunk: Trunk, 25/100 GbE, LACP NA, VLANs 151, 152 bonded (if required: 150, 153, 154, 161, 162)
● Node load balancing: Mode0-RR, Mode1-Active backup, Mode6-Adaptive LB (recommended)
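For orientation only, the following Dell SmartFabric OS10 fragments sketch how the three node connectivity options above differ on the switch side. The port-channel number, interface, and VLAN IDs are illustrative and must match the values planned for your environment and used in the PowerFlex Manager templates.

Port-channel (static, LACP mode ON):
interface port-channel 10
switchport mode trunk
switchport trunk allowed vlan 151,152
interface ethernet 1/1/5
channel-group 10 mode on
no shutdown

Port-channel with LACP (LACP mode Active); only the member-port mode changes:
interface ethernet 1/1/5
channel-group 10 mode active
no shutdown

Individual trunk (no port channel):
interface ethernet 1/1/5
switchport mode trunk
switchport trunk allowed vlan 151,152
no shutdown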

Related information
Partial network automation
Configure the Dell PowerSwitch access switches
Create new dvSwitches
Create distributed port groups on dvswitches
Create LAG on dvSwitches
Add hosts to dvSwitches
Assign LAG as an active uplink for the dvSwitch
Set load balancing for dvSwitch

Configuring Dell PowerSwitch switches

Related information
Network requirements for a PowerFlex appliance deployment

Configure the Dell PowerSwitch access switches


Use this procedure to configure two Dell PowerSwitch access switches for deploying a PowerFlex appliance using PowerFlex
Manager and to configure the PowerFlex management node (optional). Perform the steps in this procedure on both switches.

About this task


Before you can deploy a PowerFlex appliance through PowerFlex Manager, the Dell PowerSwitch access switches running Dell
SmartFabric OS10.x need specific configuration. For the requirements, see Configuration data.


See the Dell PowerFlex with PowerFlex 4.x Appliance Support Matrix and Intelligent Catalog for supported switch model and
software versions in the current release.
NOTE: If the access switches are provided and configured by the customer, the following configuration is for reference only. See Configuration data for more details about supported full network automation options.

NOTE: VLANs 140 through 143 are only required for PowerFlex management controller 2.0.

Ensure that you are in configuration mode before you run the commands in the following procedures.
● Type configure terminal to enter configuration mode in the CLI.
● Type end to fully exit configuration mode.

Steps
1. Configure the hardware, by completing the following:
a. Turn on both switches.
b. Connect a serial cable to the serial port of the first switch.
c. Use a terminal utility to open the terminal emulator and configure it to use the serial port. The serial port is usually COM1, but this may vary depending on your system. Configure serial communications for 115200, 8, N, 1 and no flow control.
d. Connect the switches by connecting port 53 on switch 1 to port 53 on switch 2 and port 54 on switch 1 to port 54 on
switch 2.
2. Configure the IP address for the management ports, enter the following:
interface mgmt 1/1/1
no shutdown
no ip address dhcp
ip address <ipaddress>/<mask>
exit
3. Set a global username and password and an enable mode password, enter the following:
username <admin> password <admin> role <sysadmin>
4. Enable SSH, complete the following:
a. Regenerate keys for the SSH server in EXEC mode, enter the following:
crypto ssh-key generate {rsa {2048}}
b. If prompted to overwrite an existing key, enter the following:
Host key already exists. Overwrite [confirm yes/no]:yes
Generated 2048-bit RSA key
c. Display the SSH public keys in EXEC mode, enter the following:
show crypto ssh-key rsa
d. Save the configuration, enter the following:
copy running-config startup-config
5. Set the SSH login attempts, enter the following:
password-attributes max-retry 5 lockout-period 30
6. Configure SNMP, enter the following:
snmp-server community <snmpCommunityString> ro
7. Set SNMP destinations, enter the following:
snmp-server host <PowerFlex Manager IP> traps version 2c stringtest entity lldp snmp
envmon
8. Enable LLDP, enter the following:
lldp enable

Related information
Networking pre-requisites
Configuration data


Upgrading the switch software


Upgrade the Dell PowerSwitches running out-of-date software levels.

About this task


Use the show version command to verify the current version. PowerFlex Manager can be used to upgrade the switch software.
Ensure that PowerFlex Manager is up and running and the switches are discovered before starting the switch software upgrade.
See Upgrade switch software in PowerFlex appliance service deployment for more information.

Upgrade the Dell network


There are several options for upgrading OS10. It can be installed manually using onie-nos-install <URL> while in ONIE, or it can be upgraded from the OS10# command prompt using the image install and boot system commands.

About this task


Several protocols are supported for transferring OS10 files over the network to the switch: TFTP, FTP, HTTP, and SCP. You can also copy and install the OS from a local file using a USB device or from the /image directory on the switch.
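As a quick reference, and assuming the OS10 image has already been staged on an SCP server, the in-service upgrade from the OS10 prompt follows this general sequence. The file name, credentials, and server address are placeholders; the detailed steps are in the procedures later in this chapter.

OS10# image download scp://<userid>:<password>@<server-ip>:/<filepath>/PKGS_OS10-Enterprise-<version>-installer-x86_64.bin
OS10# show image status
OS10# image install image://PKGS_OS10-Enterprise-<version>-installer-x86_64.bin
OS10# show image status
OS10# boot system standby
OS10# reload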

Prerequisites
● Ensure the primary MDM is not residing with the same PowerFlex appliance as the current switch upgrade.
● The primary MDM usually resides on R01S01. When upgrading the access switches, the secondary MDM (which usually resides on R02S01) is promoted to primary. The primary MDM is moved back after the primary switch upgrade is complete.
● To switch ownership between the primary and secondary MDM, type the following on the primary MDM (see the example after this list): scli --switch_mdm_ownership --new_master_mdm_id <MDM_ID>
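For example, a minimal sketch of moving MDM ownership before upgrading the first switch. The MDM ID value is environment specific and is taken from the cluster query output:

scli --query_cluster
scli --switch_mdm_ownership --new_master_mdm_id <MDM_ID>

Run scli --query_cluster again afterward to confirm that the ownership change took effect.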

Steps
1. Check the current version of switch operating system:
a. Log in to the OS10 CLI as admin using PuTTY. The default password is admin.
b. Check the operating system version, by entering the following command: show version.
The screen displays an output similar to the following:

Dell EMC Networking OS10-Enterprise


Copyright (c) 1999-2019 by Dell Inc. All Rights Reserved.
OS Version: 10.x.x
Build Version: 10.x.x
Build Time: 2019-03-01T10:51:29-0800
System Type: Z9100-ON
Architecture: x86_64
Up Time: 1 day 00:02:03

The OS Version field shows the operating system version, which should be 10.x.x or later.
2. Save the license file and the configuration:
a. In the Dell-OS CLI, type show license status to get the license path.
The screen displays an output similar to the following:

my-switch# show license status


System Information
---------------------------------------------------------
Vendor Name      : DELL
Product Name     : Z9100-ON
Hardware Version : A03
Platform Name    : x86_64-dell_z9100_c2538-r0
PPID             : xxxxxxxxxxxxxx
Service Tag      : xxxxxxxx
License Details
----------------
Software         : OS10-Enterprise
Version          : 10.x.x
License Type     : PERPETUAL
License Duration : Unlimited
License Status   : Active
License location : xxxxxxx/xxxxxxx/xx.xx
---------------------------------------------------------

You can find the license path in the license location row.


b. Get the switch address (IP address configured by DHCP) and hostname, by entering the following command: show
interface mgmt.
The screen displays an output similar to the following:

my-switch# show interface mgmt


Management 1/1/1 is up, line protocol is up
Hardware is Dell EMC Eth, address is 34:17:eb:42:ed:00
Current address is 34:17:eb:42:ed:00
Interface index is 9
Internet address is 5.5.169.236/20
Mode of IPv4 Address Assignment: DHCP
Interface IPv6 oper status: Disabled
Virtual-IP is not set
Virtual-IP IPv6 address is not set
MTU 1532 bytes, IP MTU 1500 bytes
LineSpeed 1000M
Flowcontrol rx off tx off
ARP type: ARPA, ARP Timeout: 60
Last clearing of "show interface" counters: 1 weeks 01:31:32
Queuing strategy: fifo
Input statistics:
  Input 43661179 packets, 6924867854 bytes, 0 multicast
  Received 0 errors, 0 discarded
Output statistics:
  Output 24878 packets, 2163269 bytes, 0 multicast
  Output 0 errors, Output 0 invalid protocol

Record the IP address for the switch.


c. Determine if the IP addresses and management route are set manually, by entering the following command:
my-switch# show running-configuration interface mgmt 1/1/1
!
interface mgmt 1/1/1
no shutdown
no ip address dhcp
ip address 5.5.169.236/20
no ipv6 enable
If the management IP address is not set to DHCP, record the management IP address.

my-switch# show running-configuration management-route


!
management route 0.0.0.0/0 managementethernet

If the hostname is not set to OS10, record the hostname.

Download Dell SmartFabric OS10


Use this procedure to download Dell SmartFabric OS10 and license for a new switch.

About this task


OS10 runs with a perpetual license on a device with OS10 factory-loaded. The license file is installed on the switch. If the license
becomes corrupted or wiped out, you must download the license from DDL under the purchaser's account and re-install it.

Steps
1. Sign in to the Dell Digital Locker (DDL) using your account credentials.
2. Locate your entitlement ID and order number sent by email, and select the product name.
3. On the Product page, the Assigned To: field on the Product tab is blank. Click Key Available for Download.
4. Enter the device service tag you purchased the OS10 Enterprise Edition for in the Bind to: and Re-enter ID: fields.
This step binds the software entitlement to the service tag of the switch.
5. Select how to receive the license key — by email or downloaded to your local device.
6. Click Submit to download the License.zip file.
7. Select the Available Downloads tab.
8. Select the OS10 Enterprise Edition release to download, and click Download.
9. Read the Dell End User License Agreement. Scroll to the end of the agreement, and click Yes, I agree.


10. Select how to download the software files, and click Download Now.
11. After you download the OS10 Enterprise Edition image, unpack the TAR file and store the OS10 binary image on a local
server. To unpack the TAR file, follow these guidelines:
● Extract the OS10 binary file from the TAR file. For example, to unpack a TAR file on a Linux server or from the ONIE
prompt, enter:

tar -xf tar_filename

12. Some Windows unzip applications insert extra carriage returns (CR) or line feeds (LF) when extracting the contents of
a .TAR file. The additional CRs or LFs may corrupt the downloaded OS10 binary image. Turn this option off if you use a
Windows-based tool to untar an OS10 binary file.
13. Generate a checksum for the downloaded OS10 binary image by running the md5sum command on the image file. Ensure
that the generated checksum matches the checksum extracted from the TAR file.

md5sum image_filename

14. Use the copy command to copy the OS10 image file to a local server.

Connect to a switch
Use this procedure to connect to the switch.

Steps
Use one of the following methods to verify that the system is properly connected before starting installation:
● Connect a serial cable and terminal emulator to the console serial port on the switch. The serial port settings can be found
in the Installation Guide for your particular switch model. For example, the S4100-ON serial port settings are 115200, 8 data
bits, and no parity.
● Connect the management port to the network if you prefer downloading the image over the network. Use the Installation
Guide for your particular switch model for more information about setting up the management port.

Configure a USB drive for OS10 installation


Use this procedure to prepare and mount the USB drive on the switch.

About this task


This process is required for automatic or manual installations using a USB.
NOTE: Optionally, you can use secure copy protocol (SCP) for this procedure. Download the image:
scp://userid:password@<hostip>:/filepath/PKGS_OS10-Enterprise-10.5.1.0EX.110strech-installer-x86_64.bin

Steps
1. Extract the TAR file, and copy the contents to a FAT32 formatted USB flash drive.
2. Plug the USB flash drive into the USB port on the switch.
3. From the ONIE menu, select ONIE: Install OS, then press the Ctrl + C key sequence to cancel.
4. From the ONIE:/ # command prompt, type:
ONIE:/ # onie-discovery-stop (this optional command stops the scrolling)
ONIE:/ # mkdir /mnt/usb
ONIE:/ # cd /mnt
ONIE:/mnt # fdisk -l (this command shows the device the USB is using)
The switches storage devices and partitions are displayed.
5. Use the device or partition that is formatted FAT32 (example: /dev/sdb1 ) in the next command.
ONIE:/mnt # mount -t vfat /dev/sdb1 /mnt/usb
ONIE:/mnt # mount -a


The USB is now available for installing OS10 onto the switch.

Manual install using USB


A USB device can be used to manually upgrade OS10.

Steps
1. Use the output of the following command to copy/paste the BIN filename into the install command below.

ONIE:/ # ls /mnt/usb

2. Change to the USB directory.

ONIE:/ # cd /mnt/usb

3. Manually install using the onie-nos-install command. If installing version 10.x.x, the command is:

ONIE:/mnt/usb # onie-nos-install PKGS_OS10-Enterprise-10.x.x-installer-x86_64.bin

The OS10 update takes approximately 10 minutes to complete and boots to the OS10 login: prompt when done. Several
messages display during the installation process.
4. Log in to OS10 and run the show version command to verify that the update was successful.

OS10# show version


Dell EMC Networking OS10 Enterprise
Copyright (c) 1999-2018 by Dell Inc. All Rights Reserved.
OS Version: 10.x.x
Build Version: 10.x.x
Build Time: 2018-03-30T18:05:41-0700
System Type: S4148F-ON
Architecture: x86_64
Up Time: 00:02:14

Upgrade OS10 image from existing OS10 install


Use this procedure to upgrade the OS10 image from an existing OS10.

Steps
1. Once you download the OS10 Enterprise Edition image, extract the TAR file.
● Some Windows unzip applications insert extra carriage returns (CR) or line feeds (LF) when they extract the contents of
a TAR file, which may corrupt the downloaded OS10 binary image. Turn OFF this option if you use a Windows-based tool
to untar an OS10 binary file.
● For example, in WinRAR under the Advanced Options tab de-select the TAR file smart CR/LF conversion feature.
2. Save the current configuration on the switch, and backup the startup configuration.

Command                                                              Parameter
OS10#write memory                                                    Write the current configuration to startup-config
OS10#copy running-configuration tftp://10.1.1.1/switch-config.txt    Back up the startup-config to a TFTP server

3. Format a USB as VFAT/FAT32 and add the BIN file, or move the BIN file to a TFTP/FTP Server.
● Use the native Windows tool, or equivalent, to format as VFAT/FAT32.
● Starting with OS10.4, OS10 will auto-mount a new USB key after a reboot.
4. Save the BIN file in EXEC mode, and view the status. Update file name to match your firmware version.


CAUTION: Do NOT use the TAR file.

The image download command only downloads the software image - it does not install the software on your device. The
image install command installs the downloaded image to the standby partition.

Command Parameter
OS10#image download usb://PKGS_OS10-Enterprise-10.version-info-here.BIN                                       Update via USB
-OR- ftp://userid:passwd@hostip:/filepath/PKGS_OS10-Enterprise-10.version-info-here.BIN                        Update via FTP
-OR- scp://userid:password@<hostip>:/filepath/PKGS_OS10-Enterprise-10.5.1.0EX.110strech-installer-x86_64.bin   Update via SCP

OS10#show image status View status


Monitor and wait for State Detail to change from Progress to
Complete. For example:

OS10# show image status


============================================
======
File Transfer State: idle
--------------------------------------------
------
State Detail: Completed: No error
Task Start:
2020-02-11T17:03:54Z
Task End:
2020-02-11T17:04:05Z
Transfer Progress: 100 %
Transfer Bytes: 563709117 bytes
File Size: 563709117 bytes
Transfer Rate: 49829 kbps

Installation State: idle


--------------------------------------------
------
State Detail: No install
information available
Task Start:
0000-00-00T00:00:00Z
Task End:
0000-00-00T00:00:00Z

OS10#dir image View status


For example:

OS10#dir image

Directory contents for folder: image


Date (modified) Size (bytes) Name
--------------------- ------------
------------------------------------------
2020-02-11T17:34:12Z 563709117
PKGS_OS10-
Enterrprise-10.5.1.0EX.110stretch-installer-
x86_64.bin

5. Install the software image in EXEC mode.


Command Parameter
OS10#image install image://PKGS_OS10- Installs OS
Enterprise-10.version-info-here.bin

NOTE: On older versions of OS10, the image install command will appear frozen, without showing the current status.
Duplicating the ssh/telnet session will allow you to run show image status to see the current status.

6. View the status of the current software install in EXEC mode. If the install status shows FAILED, check to make sure the
TAR file is extracted correctly.

Command Parameter
OS10#show image status Verify OS was updated

7. Change the next boot partition to the standby partition in EXEC mode.

Command Parameter
OS10#boot system standby Changes next boot partition

8. Check whether the next boot partition has changed to standby in EXEC mode.

Command Parameter
OS10#show boot detail Verify next boot partition is new firmware

9. Reload the new software image in EXEC mode.

Command Parameter
OS10#reload Reboots the switch

10. After the reload, verify the firmware is updated:

OS10# show version


Dell EMC Networking OS10 Enterprise
Copyright (c) 1999-2018 by Dell Inc. All Rights Reserved.
OS Version: 10.4.0E(X2)
Build Version: 10.4.0E(X2.22)
Build Time: 2018-01-26T17:46:11-0800
System Type: S4148F-ON
Architecture: x86_64
Up Time: 02:50:18

Install OS from ONIE


Steps
To install the OS from within ONIE instead, see How to Install Dell Networking FTOS on Dell Open Networking (ON) switches.

Update ONIE using an existing ONIE and TFTP


Use this procedure to update ONIE with an existing installation using a TFTP server.

Steps
1. Download the ONIE software from support.dell.com and place it on the TFTP server.
NOTE: In this example, the file name is onie-updater-x86_64-dellemc_s5200_c3538-r0.3.40.1.1-6.


2. Reload the switch.


3. From the GRUB menu, select ONIE and then ONIE: Update ONIE.
4. From the CLI, enter onie-self-update tftp://<TFTP server IP>/onie-updater-x86_64-dellemc_s5200_c3538-r0.3.40.1.1-6. Once ONIE is updated, the switch reboots into the active operating system partition.
5. Enter ONIE:/ # onie-sysinfo -v to verify the version.

Install or update DIAG OS


Use this procedure to install or update the DIAG OS. This procedure is a firmware upgrade.

About this task


Load or update the DIAG-OS—the diag installer image—using the onie-nos-install command. The DIAG-OS installer runs
in two modes: Update mode or Install mode.
● In Update mode, the DIAG-OS updates the existing DIAG-OS and boots back to ONIE.
● In Install mode, the DIAG-OS erases the existing DIAG-OS and loads the new DIAG-OS.
NOTE: If you have a recovery USB plugged into your system, remove it before using the onie-nos-install command.

NOTE: Before you begin, go to www.dell.com/support and download the diagnostic package.

To activate the DIAG installer:


1. Boot in to ONIE: Rescue mode.
2. Enter ONIE:/ # touch /tmp/diag_os_install_mode to activate the DIAG installer.
3. Run the installer file.
4. Enter ONIE:/ # onie-nos-install tftp://<ip address>/diag-installer-x86_64-
dellemc_<model>_c2338-r0-<version>-<date>.bin to ensure that the file location is accessible over the
network.

Steps
1. Enter the onie-discovery-stop command to stop ONIE Discovery mode.
2. Assign an IP address to the management interface and verify the network connectivity.

ONIE:/ # ifconfig eth0 xx.xx.xx.xx netmask xxx.xxx.x.x up


ONIE:/ # ifconfig
eth0 Link encap:Ethernet HWaddr 34:17:EB:05:B4:00
inet addr:xx.xx.xx.xx Bcast:xx.xx.xxx.xxx Mask:xxx.xxx.x.x
inet6 addr: fe80::3617:ebff:fe05:b400/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:43 errors:0 dropped:0 overruns:0 frame:0
TX packets:31 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5118 (4.9 KiB) TX bytes:7104 (6.9 KiB)
Memory:dff40000-dff5ffff

3. Upgrade the DIAG Installer.


NOTE: In Install mode, the DIAG-OS installation removes any existing NOS and DIAG-OS partition. If you do not create
file /tmp/diag_os_install_mode, the DIAG-OS installs in Upgrade mode. In this case, the installation process
does NOT touch any existing NOS.

ONIE:/ onie-nos-install tftp://<tftp-server ip>/<filepath>/filename/diag-installer-x86_64-


dell_< platform >_c2538-r0-2016-08-12.bin
discover: installer mode detected.
Stopping: discover... done.
Info: Fetching tftp://<tftp-server ip>/users/<user>/<platform>/diag-installer-x86_64-
dell_<platform>_c2538-r0-2016-08-12.bin ...
users/<user>/<platform> 100% |*******************************| 154M 0:00:00 ETA
ONIE: Executing installer: tftp://<tftp-server ip>/users/<user>/<platform>/diag-
installer-x86_64-dell_<platform>_c2538-r0-2016-08-12.bin
Ignoring Verifying image checksum ... OK.
cur_dir / archive_path /var/tmp/installer tmp_dir /tmp/tmp.qlnVIY
Preparing image archive ...sed -e '1,/^exit_marker$/d' /var/tmp/installer | tar xf -


OK.
Diag-OS Installer: platform: x86_64-dell_<platform>_c2538-r0

EDA-DIAG Partiton not found.


Diag OS Installer Mode : INSTALL

Creating new diag-os partition /dev/sda3 ...


Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

EDA-DIAG dev is /dev/sda3


mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 63fc156f-b6c1-415d-9676-ae4478704c5a
Superblock backups stored on blocks:
32768, 98304, 163840, 229376

Allocating group tables: done


Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Created filesystem on /dev/sda3 with label EDA-DIAG

Mounted /dev/sda3 on /tmp/tmp.BBEygm

Preparing /dev/sda3 EDA-DIAG for rootfs install


untaring into /tmp/tmp.BBEygm

rootfs copy done


Success: Support tarball created: /tmp/tmp.BBEygm/onie-support.tar.bz2

Updating Grub Cfg /dev/sda3 EDA-DIAG

ONIE uefi_uuid 69AD-9CBF

INSTALLER DONE...
Removing /tmp/tmp.qlnVIY
ONIE: NOS install successful: tftp://<tftp-server ip>/users/<user>/<platform>/diag-
installer-x86_64-dell_<platform>_c2538-r0-2016-08-12.bin
ONIE: Rebooting...
ONIE:/ # discover: installer mode detected.
Stopping: discover...start-stop-daemon: warning: killing process 2605: No such process
done.
Stopping: dropbear ssh daemon... done.
Stopping: telnetd... done.
Stopping: syslogd... done.
Info: Unmounting kernel filesystems
umount: can't umount /: Invalid argument
The system is going down NOW!
Sent SIGTERM to all processes
Sent SIGKILL tosd 4:0:0:0: [sda] Synchronizing SCSI cache
reboot: Restarting system
reboot: machine restart

BIOS Boot Selector for <platform>


Primary BIOS Version x.xx.x.x_MRC48

SMF Version: MSS x.x.x, FPGA x.x


Last POR=0x11, Reset Cause=0x55

POST Configuration
CPU Signature 406D8
CPU FamilyID=6, Model=4D, SteppingId=8, Processor=0
Microcode Revision 125
Platform ID: 0x10041A43
PMG_CST_CFG_CTL: 0x40006
BBL_CR_CTL3: 0x7E2801FF
Misc EN: 0x840081


Gen PM Con1: 0x203808


Therm Status: 0x884C0000
POST Control=0xEA000100, Status=0xE6000000

BIOS initializations...

CPGC Memtest ................................ PASS

CPGC Memtest ................................ PASS

Booting `EDA-DIAG'

Loading DIAG-OS ...


[ 3.786758] dummy-irq: no IRQ given. Use irq=N
[ 3.792812] esas2r: driver will not be loaded because no ATTO esas2r devices were
found
[ 3.818171] mtdoops: mtd device (mtddev=name/number) must be supplied
[ 4.880285] i8042: No controller found
[ 4.890134] fmc_write_eeprom fake-design-for-testing-f001: fmc_write_eeprom: no
busid passed, refusing all cards
[ 4.901699] intel_rapl: driver does not support CPU family 6 model 77

Debian GNU/Linux 8 dell-diag-os ttyS1

dell-diag-os login: root


Password:
Linux dell-diag-os x.xx.xx #1 SMP Fri Aug 12 05:14:52 PDT 2016 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent


permitted by applicable law.
Diag OS version <platform>_DIAG_OS_x.xx.x.x
Build date/time Fri Aug 12 05:23:56 PDT 2016
Build server netlogin-eqx-03
Build by <name>
Kernel Info:
Linux x.xx.xx #1 SMP Fri Aug 12 05:14:52 PDT 2016 x86_64 GNU/Linux
Debian GNU/Linux 8 \n \l

Done Initializing Ethernet


root@dell-diag-os:~#

4. Start diagnostics.
To start the ONIE diagnostics, use the EDA-DIAG option from the GRUB menu.
a. Boot into the EDA Diags.
b. Log in as root.
Password: calvin.
c. Install the EDA-DIAG tools package.

Next steps

NOTE: To return to your networking operating software, enter the reboot command.

Install or upgrade EDA-DIAG tools


To install or upgrade the DIAGs in the DIAGs OS, use the dpkg --install dn-diags-<platform>-DiagOS-
<version>-<date>.deb command.

Steps
1. Download the diagnostic tools from support.dell.com and unzip.


2. Copy the dn-diags-sssss-DiagOS-vvvvvv-ddddd.deb file to the switch using SCP. For example:

root@dellemc-diag-os:~# ls dn-diags-S4100-DiagOS-3.33.4.1-6-2018-01-21.deb

3. Run the dpkg command to upgrade the tools.

root@dell-diag-os:~#dpkg --install dn-diags-< platform >-DiagOS-< version >-< date >.deb


Selecting previously unselected package dn-diags-<platform>.deb.
(Reading database ... 18873 files and directories currently installed.)
Preparing to unpack dn-diags-<platform>-DiagOS-<version>-<date>.deb ...
Unpacking dn-diags-<platform>.deb (1.10) ...
Setting up dn-diags-<platform>.deb (1.10) ...
root@dell-diag-os:~#

Firmware requirements
CAUTION: The minimum required ONIE version is 3.40.1.1-6. Before using ONIE firmware updater, if your switch
has an ONIE version lower than 3.40.1.1-6, you must first upgrade your switch to this minimum requirement.

NOTE: Boot the switch and choose ONIE: Rescue mode to perform firmware upgrade.

To upgrade the ONIE version, stop ONIE discovery and run the ONIE self-update, as shown:

# onie-discovery-stop
# onie-self-update onie-updater-x86_64-dellemc__c3538-r0.3.40.1.1-6

After you upgrade your switch to the minimum ONIE version requirement, you can use the ONIE firmware updater, as shown:

# onie-discovery-stop
# onie-fwpkg add onie-firmware-x86_64-dellemc__c3538-r0.3.40.5.1-9.bin
# onie-discovery-start

New in the release

BMC
● Changes the system LED to solid amber when the temperature sensor reaches the critical temperature threshold.
● Changes the system LED to solid amber when a CPU thermal trip event occurs.
● Fixes the fault LED that did not recover to normal after the temperature sensor reading goes low.
● Fixes the PSU and voltage sensors that did not work when the CPU is powered off.
● Fixes the front panel fan LED that did not blink amber when fan 4 failed.
BIOS
● Updates the Code Base to Label 40.
● Updates the Microcode to 0x2E to fix an Intel security issue.
● Sets the system LED in the BIOS at a specific time.
● Gets the BMC IP and displays this information under the BIOS setup.
● Adds the FW version GUID for AMI Afu support.
● Disables unused 10 GbE LAN.
● Disables the preserved NVRAM region during flash BIOS.
● Improves the system so that serial output is still available if the BMC crashes.
SSD
● Updates SATA image L18702C.
● Adds invalid count to the Smarttools display list.
CPLD Master v00_06
● Fixes internal test features. No change to functionality.


NOTE: During a firmware update, if there is an efivars duplicate issue, the BIOS configuration is reset to the default and the efivars duplicate issue is resolved.

Verify Dell PowerSwitch firmware


Use this procedure to verify Dell PowerSwitch firmware.

Steps
1. In the command prompt, type:
# system "/mnt/onie-boot/onie/tools/bin/onie-fwpkg show-log | grep Firmware | grep version"
A message is displayed:

2021-07-08 11:04:34 ONIE: Success: Firmware update version: 3.33.1.1-7


2021-07-08 12:31:22 ONIE: Success: Firmware update version: 3.33.5.1-20
2022-01-21 20:48:03 ONIE: Success: Firmware update version: 3.33.5.1-23

3.33.5.1-23 will correspond to onie-firmware-x86_64-dellemc_s4100_c2338-r0.3.33.5.1-23.bin

2. Type show system to verify BIOS and CPLD.


Example output:

Node Id : 1
MAC : 50:9a:4c:e2:21:00
Number of MACs : 256
Up Time : 00:28:17

-- Unit 1 --

Status : up
System Identifier : 1
Down Reason : user-triggered
Digital Optical Monitoring : disable
System Location LED : off
Required Type : S4148T
Current Type : S4148T
Hardware Revision : A02
Software Version : 10.5.2.3
Physical Ports : 48x10GbE, 2x40GbE, 4x100GbE
BIOS : 3.33.0.1-11
System CPLD : 1.3
Master CPLD : 1.2

-- Power Supplies --

PSU-ID Status Type AirFlow Fan Speed(rpm) Status


----------------------------------------------------------------
1 up AC REVERSE 1 14000 up

2 up AC REVERSE 1 13936 up

-- Fan Status –

FanTray Status AirFlow Fan Speed(rpm) Status


----------------------------------------------------------------
1 up REVERSE 1 9637 up
2 9614 up

2 up REVERSE 1 9590 up
2 9590 up

3 up REVERSE 1 9567 up


2 9637 up

4 up REVERSE 1 9590 up
2 9567 up

To map the BIOS and CPLD versions to a firmware release, see the firmware release notes on Dell Support.

IP address assignment in ONIE

Prerequisites
By default, DHCP is enabled in ONIE. If your network has DHCP configured, ONIE gets the valid IP address for the management
port using DHCP, as shown.

Info: Using eth0 MAC address: xx:xx:xx:xx:xx:xx


Info: Using eth1 MAC address: xx:xx:xx:xx:xx:xx
Info: eth0: Checking link... up.
Info: Trying DHCPv4 on interface: eth0
ONIE: Using DHCPv4 addr: eth0: xx.xx.xxx.xx / xxx.xxx.xxx.x

About this task


You can manually assign an IP address.

Steps
1. Wait for ONIE to complete a DHCP timeout and return to the prompt.
2. Wait for ONIE to assign a random default IP address. This address may not be valid for your network.
3. Enter the ifconfig command to assign a valid IP address.
This command is not persistent. After you reboot, you must reconfigure the IP address.

** Rescue Mode Enabled ** ONIE:/ #


ONIE:/ # ifconfig eth0 xx.xx.xxx.xxx/xx up

Configuring the network for deployment


The following procedures are applicable to both full network automation and partial network automation based deployments.
NOTE: If the switches are already configured, consider these configurations an example and verify the existing
configuration.

Configure the VLANs


Use this procedure to configure the specific VLANs on both access switches for a successfully deployed PowerFlex cluster.

Steps
To configure the VLANs, enter the following command:
Interface vlan <vlan number>
name <vlan-name>
no shutdown
NOTE:
● To remove the created VLAN, enter the following command: Dell(config)# no interface vlan <VLAN ID>


● To describe the created VLAN, enter the interface mode and enter the following command: Dell(config VLAN
ID)# description <enter the description>
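For example, a minimal sketch that creates two PowerFlex data VLANs. The VLAN IDs and descriptions are illustrative; use the values planned for your environment:

interface vlan 151
description flex-data1
no shutdown
exit
interface vlan 152
description flex-data2
no shutdown
exit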

Configure the VLT domain


Use this procedure to configure the VLT domain.

About this task

NOTE: This procedure is optional. If a customer is not planning to configure VLT, skip this step.

Steps
1. For switch A, enter the following commands:
vlt-domain 10
backup destination <ip address of second access switch>
discovery-interface ethernet<slot>/<port>-<slot>/<port+1>
peer-routing
primary-priority 1
vlt-mac <VLT mac address>
2. For switch B, enter the following commands:
vlt-domain 10
backup destination <ip address of first access switch>
discovery-interface ethernet<slot>/<port>-<slot>/<port+1>
peer-routing
primary-priority 8192
vlt-mac <VLT mac address>

NOTE: primary-priority should be different on both the switches.

3. Configure the VLTi interfaces (use 2 x 100 GbE interfaces as the VLTi), enter the following commands:
interface range eth 1/1/X-1/1/X
description "VLTi interfaces"
no switchport
no shutdown
exit

NOTE: The starting and ending values of the command should match the ports.
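After both switches are configured, you can confirm that the VLT domain and VLTi are up. This is a sketch and assumes domain ID 10 from the example above:

show vlt 10
show vlt 10 vlt-port-detail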

Disable the telnet service


Use this procedure to disable the telnet service in the configure mode.

Steps
At the command prompt, type:
no ip telnet server enable


Configure REST API


Use these manual procedures to enable the REST API service on the switch.

Steps
1. Enable the REST API service on the switch, type:
OS10(config)# rest api restconf
2. Limit the ciphers used to encrypt and decrypt the REST HTTPS data, type:
OS10(config)#rest https cipher-suite <encryption-suite>
Where <encryption-suite> must be determined by the customer in advance so that it matches the cipher suites used for communication through the REST methods.

3. Configure the REST HTTPS server certificate, type:
OS10(config)#rest https server-certificate name <hostname>
Where <hostname> is the IP address or domain name of the switch.

4. Configure REST HTTPS session timeout, type:


OS10(config)#rest https session timeout <timeout value>
Where the <timeout> is 30 seconds by default.

Configure streaming telemetry


Use this procedure to configure the streaming telemetry.

Steps
1. To enable the telemetry, type:
OS10(config)# telemetry
OS10(conf-telemetry)# enable
2. Configure a destination group, type:
OS10(conf-telemetry)# destination-group dest1
OS10(conf-telemetry-dg-dest1)# destination <PowerFlex Manager IP> <PowerFlex Manager
port>
3. Return to telemetry mode, type:
OS10(conf-telemetry-dg-dest1)# exit
4. Configure a subscription profile, type:
OS10(conf-telemetry)# subscription-profile subscription-1
OS10(conf-telemetry-sp-subscription-1)# sensor-group bgp 300000
OS10(conf-telemetry-sp-subscription-1)# sensor-group bgp-peer 0
OS10(conf-telemetry-sp-subscription-1)# sensor-group buffer 15000
OS10(conf-telemetry-sp-subscription-1)# sensor-group device 300000
OS10(conf-telemetry-sp-subscription-1)# sensor-group environment 300000
OS10(conf-telemetry-sp-subscription-1)# sensor-group interface 180000
OS10(conf-telemetry-sp-subscription-1)# sensor-group lag 0
OS10(conf-telemetry-sp-subscription-1)# sensor-group system 300000
OS10(conf-telemetry-sp-subscription-1)# destination-group dest1
OS10(conf-telemetry-sp-subscription-1)# encoding gpb
OS10(conf-telemetry-sp-subscription-1)# transport grpc no-tls
OS10(conf-telemetry-sp-subscription-1)# source-interface ethernet 1/1/1
OS10(conf-telemetry-sp-subscription-1)# end


NOTE: The above mentioned sensor groups are pre-configured.

Configure the port channel uplink to the customer network


Use this task to configure the uplink ports from the access switches connecting to the aggregation switches.

Steps
Configure the port channels uplink to the customer network, type:
interface port-channel 101
description <Uplink-Port-Channel-to-customer network>
no shutdown
switchport mode trunk
switchport trunk allowed vlan 105, 150, 161, 162
mtu 9216
vlt-port-channel 101

Add interfaces to the newly created port channel for customer network
Use this procedure to add interfaces to the newly created port channel for customer network.

Steps
Add ethernet interfaces to newly created port channels, type:
interface Ethernet <ID> (change the ethernet ID based on the interface)
description <description>
no shutdown
channel-group 101 mode active
no switchport
mtu 9216
speed 25000
flowcontrol receive off

Configure port channels for partial network automation

Steps
To configure the port channels, enter the following commands:
interface port-channel <port-channel number>
Description "Port Channel to <node info>"
switchport trunk allowed vlan <vlan list>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
switchport mode trunk
lacp fallback enable # applicable only for port-channel with LACP
speed <speed>


vlt-port-channel <vlt number same as port-channel number>
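For example, a minimal sketch for one PowerFlex node attached with an LACP port channel. The port-channel number, node-facing interface, and VLAN list are illustrative and must follow the values in Configuration data:

interface port-channel 1
description "Port Channel to PowerFlex node 1"
switchport mode trunk
switchport trunk allowed vlan 105-106,150,151,152
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
lacp fallback enable
vlt-port-channel 1
interface ethernet 1/1/1
description "Connected to PowerFlex node 1"
channel-group 1 mode active
no shutdown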

Configure interfaces for partial network automation

Steps
Configure the interface depending on the interface, enter the following commands:

If the interface type Run the following command using command prompt...
is...
Port channel
interface <interface number>
Description “Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown

Access
interface <interface number> # applicable only for access interface
switchport mode access
switchport access vlan <vlan number>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>

Trunk
interface <interface number>
switchport mode trunk
switchport trunk allowed vlan <vlan-list>
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>

Save the configuration

Steps
To save the configuration, enter the following command: Dell#copy running-config startup-config.

Configuring Cisco Nexus switches


Before you can deploy a PowerFlex appliance through PowerFlex Manager, the Cisco Nexus access switch needs specific
configuration.
See Networking pre-requisites for more information.

Related information
Network requirements for a PowerFlex appliance deployment
Networking pre-requisites


Configure Cisco Nexus access switches


Use this procedure to configure the Cisco Nexus access switches.

Prerequisites
For correct functionality, the switch must run a supported firmware or software version listed in the Intelligent Catalog (IC). Using firmware or software other than the versions specified in the IC may have unpredictable results.

About this task

NOTE: VLANs 140 through 143 are required only for PowerFlex management controller 2.0.

Steps
1. Turn on both switches.
2. Connect a serial cable to the serial port of the first switch.
3. Use a terminal utility to open the terminal emulator and configure it to use the serial port (usually COM1, but this may vary depending on your system). Configure serial communications for 9600, 8, N, 1 and no flow control.
4. Connect the switches by connecting port 53 on switch 1 to port 53 on switch 2 and port 54 on switch 1 to port 54 on switch
2.
5. Delete the startup configuration using the following command:
NOTE: This example assumes a switch at its default configuration settings. Using the write erase command sets
the startup configuration file to its default settings. You should always back up your configuration settings prior to
performing any configuration changes.

# write erase, type Y for confirmation.

6. Reboot the switch using following command:


# reload, type Y for confirmation.
7. Perform the initial switch configuration.
After the switch fully reboots, the following prompts appear:

Abort Power On Auto Provisioning and continue with normal setup ?(yes/no)[n]: yes
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no): yes
Enter the password for "admin":
Confirm the password for "admin":
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of the system.
Setup configures only enough connectivity for management of the system.
Please register Cisco Nexus9000 Family devices promptly with your supplier.
Failure to register may affect response times for initial service calls.
Nexus9000 devices must be registered to receive entitled support services.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no):yes
Create another login account (yes/no) [n]: no
Configure read-only SNMP community string (yes/no) [n]: no
Configure read-write SNMP community string (yes/no) [n]: no
Enter the switch name : Cisco_Access-A
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: yes
Mgmt0 IPv4 address : 192.168.101.45
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: yes
IPv4 address of the default gateway : 192.168.101.254
Configure advanced IP options? (yes/no) [n]: no
Enable the telnet service? (yes/no) [n]: no
Enable the ssh service? (yes/no) [y]: yes
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <1024-2048> [1024]: 1024
Configure the ntp server? (yes/no) [n]: no
Configure default interface layer (L3/L2) [L2]: L2
Configure default switchport interface state (shut/noshut) [noshut]: noshut
Configure CoPP system profile (strict/moderate/lenient/dense) [strict]: strict


The following configuration will be applied:


password strength-check
switchname Cisco_Access-A
vrf context management
ip route 0.0.0.0/0 192.168.101.254
exit
feature telnet
ssh key rsa 1024 force
feature ssh
system default switchport
no system default switchport shutdown
copp profile strict
interface mgmt0
ip address 192.168.101.45 255.255.255.0
no shutdown
Would you like to edit the configuration? (yes/no) [n]: no
Use this configuration and save it? (yes/no) [y]: yes
2019 Jun 6 10:13:08 Cisco_Access-A %$ VDC-1 %$ %COPP-2-COPP_POLICY: Control-Plane
is protected with policy copp-system-p-policy-strict.
[########################################] 100%
Copy complete.
User Access Verification
Cisco_Access-A login: admin
Password:
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (C) 2002-2016, Cisco and/or its affiliates.
All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under their own
licenses, such as open source. This software is provided "as is," and unless
otherwise stated, there is no warranty, express or implied, including but not
limited to warranties of merchantability and fitness for a particular purpose.
Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or
GNU General Public License (GPL) version 3.0 or the GNU
Lesser General Public License (LGPL) Version 2.1 or
Lesser General Public License (LGPL) Version 2.0.
A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://opensource.org/licenses/gpl-3.0.html and
http://www.opensource.org/licenses/lgpl-2.1.php and
http://www.gnu.org/licenses/old-licenses/library.txt.
Cisco_Access-A#

8. Configure LLDP, at the command prompt, type:


Cisco_Access-A# config
Enter configuration commands, one per line. End with CNTL/Z.
Cisco_Access-A(config)# feature lldp
Cisco_Access-A(config)# no lldp tlv-select dcbxp
9. Configure LACP, at the command prompt, type:
Cisco_Access-A(config)# feature lacp
10. Configure LACP load balancing, at the command prompt, type:
Cisco_Access-A# config t
Enter configuration commands, one per line. End with CNTL/Z.
Cisco_Access-A(config)# port-channel load-balance src-dst ip-l4port
11. Configure VPC, at the command prompt, type:
Cisco_Access-A(config)# feature vpc
12. Configure SNMP, at the command prompt, type:
Cisco_Access-A(config)# snmp-server community public ro
Cisco_Access-A(config)# snmp-server host <PowerFlex Manager IP> traps version 2c public
udp-port 162
Cisco_Access-A(config)# snmp-server enable trap
13. Configure NTP, at the command prompt, type:


Cisco_Access-A(config)# ntp server 192.168.200.101 use-vrf default

Upgrading the switch software


Upgrade any Cisco Nexus switches running out-of-date software levels.
PowerFlex Manager can be used to upgrade the switch software. Ensure PowerFlex Manager is up and running and switches are
discovered before starting the switch software upgrade.
See Upgrade switch software in PowerFlex appliance service deployment for more information.

Upgrade Cisco Nexus switches


Use this procedure if the Cisco Nexus switches are running an older version of NX-OS and need to be upgraded.

About this task


The Cisco Nexus 3000 series switches provide a limited amount of flash memory. When performing an upgrade, additional steps
are required to compact the NX-OS image files for the upgrade to complete successfully.

Steps
1. Start an SSH session to the switch.
2. Enter the following command to commit to persistent storage. In addition, copy the configuration to a remote server or jump
server:
copy running-config startup-config

3. Type the show version command to determine the current running version.
NOTE: The output from the command displays a running firmware version. Depending on your switch model, near the
bottom of the display, the previous running version might display and should not be confused with the current running
version.

4. Check the contents of the bootflash directory to verify that enough free space is available for the new Cisco NX-OS software image.
a. Enter the following command to check the free space on the flash:
dir bootflash:

The following is an example of the output:


Usage for bootflash:// 1275002880 bytes used
375902208 bytes free
1650905088 bytes total
b. If necessary, delete older firmware files to create additional space.
CAUTION: Do not delete the current running version of the firmware files, as shown in the previous show
version display.

NOTE: The Cisco Nexus 3000 and Cisco Nexus 9000 switches do not provide a confirmation prompt before deleting files.

delete bootflash:nxos.7.0.2.I7.6.bin
5. If upgrading a Cisco Nexus 3000 switch, enter the following command to compact the current running image file:
switch# install all nxos bootflash:nxos.7.0.3.I7.bin compact

6. From an SCP, FTP, or TFTP server, enter one of the following commands to copy the firmware file to local storage on the Cisco Nexus switches.
Use the TFTP command to copy the image:


copy tftp://XXX.XXX.XXX.XXX/nxos.9.3.3.bin bootflash:

Use the SCP command to copy the image:

copy scp://filescp@x.x.x.x//home/filescp/image/nxos.9.3.3.bin bootflash:

NOTE: The firmware files are hardware model-specific. The firmware follows the same naming convention as the current running firmware files that are displayed in the show version command. If you receive warnings of insufficient space to copy files, you must perform an SCP copy with the compact option to compact the file while it is copied. Doing this might result in encountering Cisco defect CSCvg51567. The workaround for this defect requires cabling the management port and configuring its IP address on a shared network with the SCP server, allowing the copy to take place across that management port. After the process is complete, go to Step 7.

Enter vrf (If no input, current vrf 'default' is considered): management Trying to
connect to tftp server..... Connecting to Server Established. TFTP get operation was
successful
Copy complete, now saving to disk (please wait)..

7. Enter the show install all impact command to identify the upgrade impact.
switch# show install all impact nxos bootflash:nxos.9.3.3.bin

Validate the output if the image is compatible for an upgrade.

8. Enter the following command to start the upgrade process:

install all nxos bootflash:nxos.9.3.3.bin

NOTE: If you receive errors regarding free space on the bootflash, go to Step 3 to ensure that you have removed older
firmware files to free additional disk space for the upgrade to complete. Check all subdirectories on bootflash when
searching for older bootflash files.

NOTE: After the upgrade, the switch reboot can take 5 to 10 minutes. Use a continuous ping command from the jump
server to validate when the switch is online.

Installer will perform compatibility check first. Please wait. Installer is forced
disruptive

Verifying image bootflash:/nxos.9.3.23.bin for boot variable "nxos".


[###############################] 100% -- SUCCESS
Verifying image type. [###############################] 100% -- SUCCESS Preparing
"nxos" version info using image bootflash:/nxos.9.3.32.bin
[###############################] 100% -- SUCCESS
Preparing "bios" version info using image bootflash:/ nxos.9.3.23.bin
[###############################] 100% -- SUCCESS
Performing module support checks. [###############################] 100% -- SUCCESS
Notifying services about system upgrade. [###############################] 100% --
SUCCESS

Switch will be reloaded for disruptive upgrade.


Do you want to continue with the installation (y/n)? [n] y Install is in progress,
please wait.
Performing runtime checks, [###############################] 100% -- SUCCESS Setting
boot variables. [###############################] 100% -- SUCCESS Performing
configuration copy. [###############################] 100% -- SUCCESS Module 1:
Refreshing compact flash and upgrading bios/loader/bootrom.
Warning: please do not remove or power off the module at this time.
[###############################] 100% -- SUCCESS

Finishing the upgrade, switch will reboot in 10 seconds.


Example of continuous ping:


ping 1.1.1.1 -t
9. Using SSH, log back in to the switch with the username and password.
10. Enter the following command to display the entire upgrade process:
switch# show install all status
11. Enter the following command to verify that the switch is running the correct version:
switch# show version


Upgrade an electronic programmable logic device


Cisco provides electronic programmable logic device (EPLD) image upgrades to enhance hardware functionality or to resolve
known issues. Upgrade the primary and backup regions using these steps.

About this task


The primary and backup EPLD regions must be upgraded during the same process.

NOTE: The screen captures below are examples. Versions might vary, based on the Intelligent Catalog (IC).

Steps
1. Start an SSH session to the switch.
2. Enter the following command to commit to persistent storage. In addition, copy the configuration to a remote server or jump
server:
copy running-config startup-config

3. Enter the show version module <number> epld command to determine the current running version.
4. Check the contents of the bootflash directory to verify that enough free space is available for the software image.
a. Enter the following command to check the free space on the flash:
dir bootflash:

The following is an example command output:

Usage for bootflash:// 1275002880 bytes used


375902208 bytes free
1650905088 bytes total

b. Delete older firmware files to make additional space, if needed.


NOTE: The Cisco Nexus 9000 switches do not provide a confirmation prompt before deleting files.

5. From the SCP, FTP, or TFTP server, enter the following command to copy the firmware file to local storage on the Cisco
Nexus switches:
Use the following TFTP command to copy the image:


copy tftp://XXX.XXX.XXX.XXX/n9000-epld.9.3.3.img bootflash:

You can also use SCP to copy the image.

6. To determine if you must upgrade, use the show install all impact epld bootflash:n9000-epld.9.3.3.img command.

switch# show install all impact epld bootflash:n9000-epld.9.3.3.img

7. Enter the following command to start the upgrade process:


install epld bootflash:n9000-epld.9.3.3.img module all


NOTE: After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump
server to validate when the switch is back online.

8. Update the backup region of EPLD by typing this command:


install epld bootflash:n9000-epld.9.3.3.img module all golden

9. Using SSH, log back in to the switch with username and password.
10. Enter the following command to verify that the switch is running the correct version:
switch# show install epld status

Configuring the network for deployment


The following procedures are applicable to both full network automation and partial network automation based deployments.
NOTE: If the switches are already configured, consider these configurations an example and verify the existing
configuration.

Configure the VLANs


Configure specific VLANs on both the access switches for a successfully deployed PowerFlex appliance.

Steps
At the command prompt, type:
Cisco_Access-A(config)# vlan 100,104,105,106,150,151,152,153,154,161,162
Cisco_Access-A(config-vlan)# exit

Configure spanning tree protocol


Perform this procedure to configure spanning tree protocol for both the Cisco Nexus access switches.

Prerequisites
Confirm with the network administrator that enabling spanning tree is appropriate for the network and discuss any specific
spanning tree mode/feature configuration options.


Steps
At the command prompt, type:
Cisco_Access-A(config)# spanning-tree vlan 1-3967
Cisco_Access-A(config)# spanning-tree port type edge bpduguard default
Cisco_Access-A(config)# spanning-tree port type edge bpdufilter default

Configure vPC domain


Perform this procedure to configure the vPC domain.

About this task

NOTE: This is an optional procedure. If you are not planning to configure vPC, skip this step.

Steps
At the command prompt, for the first access switch, type:
vpc domain 60
peer-switch
role priority 8192
system-priority 8192
peer-keepalive destination <oob mgmt ip> source <oob mgmt ip>
delay restore 300
auto-recovery reload-delay 360
ip arp synchronize

NOTE: The role priority must be different on each switch, and the system priority must be the same on both switches.
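
The following is a minimal sketch of the corresponding configuration on the second access switch; the role priority value of 16384 is an example only and simply needs to differ from the first switch:

vpc domain 60
peer-switch
role priority 16384
system-priority 8192
peer-keepalive destination <oob mgmt ip> source <oob mgmt ip>
delay restore 300
auto-recovery reload-delay 360
ip arp synchronize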

Configure the port channel for vPC peer link


Perform this procedure to configure the port channel for vPC peer-link on both the access switches.

About this task

NOTE: This is an optional procedure. If you are not planning to configure vPC, skip this step.

Steps
At the command prompt, type:
interface port-channel 100
description "virtual port-channel vpc-peer-link"
switchport mode trunk
spanning-tree port type network
vpc peer-link

Configure the interfaces for vPC peer-link


Perform this procedure to configure the ports for vPC peer-link on both the access switches.

About this task

NOTE: This is an optional procedure. If you are not planning to configure vPC, skip this step.


Steps
At the command prompt, type:
interface <interface>
Description "Peerlink to Peer Switch "
channel-group 100 mode active
no shutdown

Configure the PowerFlex node interfaces for PowerFlex Manager discovery


Configure the PowerFlex appliance interfaces for PowerFlex Manager discovery.

Prerequisites
For PowerFlex Manager, the switch ports must be up (no shutdown) and unconfigured.

Steps
At the command prompt, type:
interface range eth 1/1/1-1/1/X
no shutdown
exit
Where X is the number of ports used by the PowerFlex node.
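
For example, for a node that uses the first eight ports, the range might be entered as follows; the port numbering and count are examples and depend on the cabling:

interface range eth 1/1/1-1/1/8
no shutdown
exit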

Configure the port channel for uplink to customer network


Perform this procedure to configure the port channel for uplink to the customer network.

Steps
At the command prompt, type:
Cisco_Access-A(config)# interface port-channel 101
Cisco_Access-A(config-if)# switchport mode trunk
Cisco_Access-A(config-if)# switchport trunk allowed vlan 105,150,161,162
Cisco_Access-A(config-if)# spanning-tree port type network
Cisco_Access-A(config-if)# mtu 9216
Cisco_Access-A(config-if)# vpc 101

Add interfaces to newly created port channel for customer network


Perform this procedure to configure the interface ports for uplink to the customer network.

Steps
At the command prompt, enter the following commands:
Cisco_Access-A(config)# interface ethernet 1/49
Cisco_Access-A(config-if)# switchport mode trunk
Cisco_Access-A(config-if)# switchport trunk allowed vlan 105,150,161,162
Cisco_Access-A(config-if)# spanning-tree port type network
Cisco_Access-A(config-if)# mtu 9216
Cisco_Access-A(config-if)# channel-group 101 mode active
Cisco_Access-A(config-if)# no shutdown
Cisco_Access-A(config-if)# exit


Configuring port channels for partial network automation

Steps
To configure the port channels, enter the following commands:
interface port-channel <port-channel number>
Description "Port Channel to <node info>"
switchport trunk allowed vlan <vlan list>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
switchport mode trunk
no lacp suspend-individual
lacp vpc-convergence    # only for LACP-based networks
speed <speed>
vpc <vpc number same as port-channel number>
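
The following filled-in example is a sketch only, assuming port channel 20 connects to a PowerFlex node over 25 GbE using LACP; the description, VLAN list, speed, and vPC number are placeholders that must match the configuration data:

interface port-channel 20
description "Port Channel to PowerFlex node"
switchport trunk allowed vlan 105,150,151,152
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
switchport mode trunk
no lacp suspend-individual
lacp vpc-convergence
speed 25000
vpc 20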

Configure interfaces for partial network automation


If using a 25 GbE network controller, use the following command to break out the 100 GbE ports into 4x 25 GbE on Cisco Nexus
9364C-GX and Cisco Nexus 9336C-FX2 switches: interface breakout module 1 port <breakoutPorts> map 25g-4x

Steps
Configure the interface depending on the interface type:

If the interface type is port channel, run the following commands at the command prompt:

interface <interface number>
description "Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown

If the interface type is access, run the following commands at the command prompt:

interface <interface number>    # applicable only for access interfaces
switchport mode access
switchport access vlan <vlan number>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>

If the interface type is trunk, run the following commands at the command prompt:

interface <interface number>
switchport mode trunk
switchport trunk allowed vlan <vlan-list>
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>

Save the configuration

Steps
To save the configuration, type:
Cisco_Access-A# copy running-config startup-config
[########################################] 100%
Copy complete.
Cisco_Access-A#


5
Configuring the iDRAC
Related information
Deploying the PowerFlex file nodes

Configure iDRAC network settings


Use this procedure to configure the iDRAC network settings on each PowerFlex node and PowerFlex controller node (only if the
PowerFlex controller nodes are Dell provided).

About this task


The iDRAC is a hardware controller embedded on the server motherboard that allows system administrators to update and
manage Dell systems, even when the server is turned off. The iDRAC also provides a web interface and a command-line
interface that allow administrators to perform remote management tasks.

Prerequisites
For console operations, ensure that you have a crash cart. A crash cart provides a keyboard, video, and mouse (KVM)
connection to the node.

Steps
1. Connect the KVM to the node.
2. During boot, to access the Main Menu, press F2.
3. From System Setup Main Menu, select the iDRAC Settings menu option. To configure the network settings, do the
following:
a. From the iDRAC Settings pane, select Network.
b. From the iDRAC Settings-Network pane, verify the following parameter values:
● Enable NIC = Enabled
● NIC Selection = Dedicated
c. From the IPv4 Settings pane, configure the IPv4 parameter values for the iDRAC port as follows:
● Enable IPv4 = Enabled
● Enable DHCP = Disabled
● Static IP Address = <ip address > # select the IP address from this range for each node (192.168.101.21 to
192.168.101.24)
● Static Gateway = 192.168.101.254
● Static Subnet Mask = 255.255.255.0
● Static Preferred DNS Server = 192.168.200.101
4. After configuring the parameters, click Back to display the iDRAC Settings pane.
5. From the iDRAC Settings pane, select User Configuration and configure the following:
a. Enter a user name in the User name field.
b. LAN User privilege = Administrator
c. Enter a new password in the Change Password field.
d. In the Re-enter password dialog box, type the password again and press Enter twice.
e. Click Back.
6. From the iDRAC Settings pane, click Finish > Yes. Click OK to return to the System Setup Main Menu pane.
7. To exit the BIOS and apply all settings after boot, select Finish.
8. Reboot the node and confirm iDRAC settings by accessing the iDRAC using the web interface.


6
Installing and configuring PowerFlex management controller 2.0

Installing and configuring a PowerFlex R650 controller node


This section provides information about installing and configuring a PowerFlex R650 controller node.

Configure the switch ports


Switch configuration commands differ by vendor. See the vendor documentation for the correct commands. Because of the
number of switch vendors available, it is not possible to provide configurations for each switch. For more information, see
Configuration data.

PowerFlex node network configurations


NOTE: For PowerFlex compute-only nodes and PowerFlex hyperconverged nodes, these configurations are automated
using PowerFlex Manager. This section is provided for reference if manual configuration is required. PowerFlex management
controller node configurations are performed manually using these procedures.

Add the VMkernel adapter to the hosts


Use this procedure to add the VMkernel adapter to the hosts.

Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure in the right pane.
3. Under the Networking tab, select the VMkernel adapter.
4. Click Add.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. In port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion. For any other networks, retain the default service.
NOTE: The MTU for pfmc-vmotion is 1500.

8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat steps 2 through 9 to create the VMkernel adapters for the VLANs referenced in the configuration data as port
groups.


Create new dvSwitches


Use this procedure to create new dvSwitches.

About this task


The dvSwitch names are examples only and may not match the configured system. Do not change the configured names, or a
data unavailable or data loss event may occur. See Configuration data for information on the dvSwitches.

Steps
1. Log in to VMware vCenter.
2. Click Networking.
3. Right-click the data center and select Distributed Switch > New Distributed Switch.
4. On the name and location page, type the dvSwitch name for the new distributed switch and click Next.
5. On the Select Version tab, select the latest VMware ESXi version, and click Next.
6. On the Configure tab, select 2 for the number of uplinks and click Next.
7. Click Finish.
8. Repeat steps 3 through 7 to create additional dvSwitches for the PowerFlex node.

Related information
Configuration data

Create distributed port groups on dvswitches


Use this procedure to create distributed port groups on dvswitches.

Steps
1. Log in to the VMware vSphere client and select the Networking inventory view.
2. Select Inventory, right-click the dvswitch, and select New Port Group.
3. Enter the dvswitch port group name and click Next. See Configuration data for more information on the VLANs.
4. From the VLAN type, select VLAN and enter 105 as the VLAN ID.
5. Click Next > Finish.
6. Repeat steps 2 to 4 to create the additional port groups.

Related information
Configuration data

Create LAG on dvSwitches


Use this procedure to create Link Aggregation Group (LAG) on the new dvSwitches.

Steps
1. Log in to the VMware vSphere client and select Networking inventory.
2. Select Inventory, right-click the dvswitch, and select Configure.
3. In Settings, select LACP.
4. Click New, type name as FE-LAG or BE-LAG.
The default number of ports is 2.
5. Select mode as active.
6. Select the load balancing option. See Configuration data for more information.
7. Click OK to create LAG.
Repeat steps 1 through 6 to create LAG on additional dvswitches.


Related information
Configuration data

Assign LAG as a standby uplink for the dvSwitch


Use this procedure to assign LAG as a standby uplink for the dvSwitch.

Steps
1. Select the dvSwitch.
2. Click Configure and from Settings, select LACP.
3. Click Migrating network traffic to LAGs.
4. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
5. Select all port groups, and click Next.
6. Select LAG and move it to Standby Uplinks.
7. Click Finish.

Add hosts to dvSwitches


Use this procedure to add a host with one vmnic and migrate the VM networking to the dvSwitch port-groups.

Prerequisites
See Configuration data for naming information of the dvSwitches.

Steps
1. Select the dvSwitch.
NOTE: If you are not using LACP, right-click the dvSwitch and skip to step 4.

2. Click Configure and in Settings select LACP.


3. Click Migrating network traffic to LAGs.
4. Click Add and Manage Hosts.
5. Click Add Hosts and click Next.
6. Click New Host, select the host in maintenance mode, and click OK.
7. Click Next.
8. Select <vmnicX> on <dvSwitch> and click Assign Uplink.
9. Select LAG-0 for an LACP bonding NIC port design or Uplink1, click OK, and click Next.
10. Assign the respective port groups for VMkernel adapters.
11. Click OK > Next.
12. On Migrating VM networking, select all the VMs and assign to corresponding portgroup.
13. Click Next > Finish.
14. Add a second vmnic to the dvSwitch:
a. Select the dvSwitch.
NOTE: If you are not using LACP, right-click the dvSwitch and skip to step d.

b. On the right-hand pane, click Configure and in Settings select LACP.


c. Click Migrating network traffic to LAGs.
d. Click Add and Manage Hosts.
e. Click Add Hosts and click Next.
f. Click Attached Hosts, select the server in maintenance mode, and click Next.
g. Click Next.
h. Select <vmnicX> on <dvSwitch> and click Assign Uplink.
i. Select LAG-1 for an LACP bonding NIC port design or Uplink2, and click OK.


j. Click Next > Next > Next > Finish.

Related information
Configuration data

Assign LAG as an active uplink for the dvSwitch


Use this procedure to assign LAG as an active uplink for the dvSwitch.

Prerequisites
See Configuration data for naming information of the dvSwitches and load balancing options.

Steps
1. Select the dvSwitch.
2. Click Configure and from Settings, select LACP.
3. Click Migrating network traffic to LAGs.
4. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
5. Select all port groups, and click Next.
6. Select a load balancing option.
7. Select LAG and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.

Related information
Configuration data

Set load balancing for dvSwitch


Use this procedure for setting the load balancing for a dvSwitch without LAG.

Prerequisites
See Configuration data for naming information of the dvSwitches and load balancing options.

Steps
1. Select the dvSwitch.
2. Right-click the dvSwitch, select Distributed Portgroup > Manage distributed portgroups.
3. Select teaming and failover and select all the port groups, and click Next.
4. Select load balancing.

Related information
Configuration data

Create the distributed switch (oob_dvswitch) for the PowerFlex management node network


Use this procedure to create virtual distributed switches on the PowerFlex management node hosts.

Steps
1. Log in to VMware vSphere client.
2. From Home, click Networking and expand the data center.


3. Right-click the data center and perform the following:


a. Click Distributed Switch > New Distributed Switch.
b. Update the name to oob_dvswitch and click Next.
c. On the Select Version page, select 7.0.0 - ESXi 7.0 and later, and click Next.
d. Under Edit Settings, select 1 for Number of uplinks.
e. Select Enabled from Network I/O Control.
f. Clear the Create default port group option.
g. Click Next.
h. On the Ready to complete page, click Finish.

Create a distributed port group on oob_dvswitch


Use this procedure to create a distributed port group for the PowerFlex management node.

Steps
1. Log in to the VMware vSphere client and click Networking.
2. Right-click oob_dvswitch and select Distributed Port Group > New Distributed Port Group.
3. Retain the default values for the following port related options:
● Port binding
● Port allocation
● # of ports
4. Select VLAN as the VLAN type.
5. Enter flex-oob-mgmt-<vlanID> and click Next.

Add a host to oob_dvswitch


Use this procedure to add a host to oob_dvswitch.

Steps
1. Log in to the VMware vSphere client.
2. Click Networking and select oob_dvswitch.
3. Right-click oob_dvswitch and select Add and Manage Hosts.
4. Select Add Hosts and click Next.
5. Click New Host, select the host in maintenance mode, and click OK.
6. Click Next.
7. Select vmnic4 and click Assign Uplink.
8. Select Uplink 1, and click OK.
9. Click Next > Next > Next.
10. Click Finish.

Delete the standard switch (vSwitch0)


Use this procedure to delete the standard switch (vSwitch0).

Steps
1. Log in to VMware vSphere Client.
2. On Menu, click Host and Cluster.
3. Select Host.
4. Click Configure > Networking > Virtual Switches.
5. Right-click Standard Switch: vSwitch0 and click ... > Remove.
6. On the Remove Standard Switch window, click Yes.


Configure local RAID storage on a PowerFlex management node


Perform this procedure to configure the local redundant disk storage on the optional PowerFlex management node.

Prerequisites
Ensure that iDRAC is configured and connected to the management network.

Steps
1. Connect to the iDRAC web interface.
2. Click Storage > Overview > Controllers.
3. In the Actions drop down for the PERC H755 Front (embedded), select Reset Configurations > OK > Apply now.
4. Click Job Queue and wait for the task to complete.
5. Select Storage > Overview > Controller.
6. In the Actions drop down for the PERC H755 Front (embedded), select Create Virtual Disk.
7. For Setup Virtual Disk:
● Name: Leave blank for auto-name
● Layout: RAID-5
● Media type: SSD
● Physical disk selection: New Group
8. For Advanced Settings:
● Security: Disabled
● Stripe element size: 256 KB
● Read policy: Read Ahead
● Write policy: Write Back
9. Click Next.
10. For the Select Physical Disk, select All SSDs and click Next.
11. For Virtual Disk Settings:
● Leave Defaults
12. Click Next.
13. On the Confirmation page, confirm the settings and select Add to Pending.
14. Select Apply Now.
15. Click Job Queue and wait for the task to complete.
16. Click Storage > Overview > Virtual disks.
17. Confirm the PERC-01 virtual disk.

Upgrade the firmware


Use this procedure to upgrade the firmware on the PowerFlex management controller.

Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate Intelligent Catalog folder and select the appropriate files.
Required firmware:
● Dell iDRAC or Lifecycle Controller firmware
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell Mellanox ConnectX-5 EN or Broadcom NetXtreme firmware
● PERC H755P controller firmware
4. Click Upload.
5. Click Install and Reboot.


Configure the BOSS card


Use this procedure to configure the BOSS card on the PowerFlex management controller. Use this procedure only if the BOSS
card RAID1 is not configured.

Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. Press F2 to enter the System Setup main menu, select Device Settings.
5. Select AHCI Controller in SlotX: BOSS-X Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled
● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
● Clean all Sysprep order and variables: None
14. Click Back > Finish > Finish and click Yes to reboot the node.

Install VMware ESXi


Use this procedure to install VMware ESXi on the PowerFlex management controller.

Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, press Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.


Configure VMware ESXi


Use this procedure to configure VMware ESXi on the PowerFlex management controller.

Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to DCUI and select Troubleshooting Options.
4. Select Enable SSH.
5. Select Enable ESXi Shell.
6. Press ESC to exit from troubleshooting mode options.
7. Go to Direct Console User Interface (DCUI) > Configure Management Network.
8. Set Network Adapter to VMNIC2.
9. Set the ESXi Management VLAN ID to the required VLAN value.
10. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
11. Select IPV6 Configuration > Disable IPV6 and press Enter.
12. Go to DNS Configuration and set the customer provided value.
13. Go to Custom DNS Suffixes and set the customer provided value.
14. Press ESC to exit the network configuration and enter Y to apply the changes.
15. Enter Y to commit the changes and the node restarts.
16. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.

Install the Dell Integrated Service Module


Use this procedure to install the Dell Integrated Service Module (ISM).

Prerequisites
Download the latest supported version from Dell iDRAC Service Module.

Steps
1. Copy ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip to the /vmfs/volumes/<datastore>/ folder on
the PowerFlex management node running VMware ESXi.
2. Start an SSH session with the new appliance management host running VMware ESXi using PuTTY.
3. To install VMware vSphere 7.x Dell iDRAC service module, type esxcli software vib install -d /vmfs/
volumes/<datastore>/ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip.
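
To confirm that the module is installed, a check similar to the following can be run from the same SSH session; the filter string is an example:

esxcli software vib list | grep -i ism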

Modify the existing VM network


Use this procedure to modify the VLAN ID to ensure communication during the VMware vCenter deployment.

About this task

NOTE: Modify the VM network on the PowerFlex controller node planned for VMware vCenter deployment.

Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Networking.
3. Right-click VM Network and click Edit Settings.
4. Change the VLAN ID to the flex-node-mgmt VLAN ID (<vlanid>) and click Save.


Configure NTP on the host


Use this procedure to configure the NTP on the PowerFlex management controller.

Steps
1. Log in to VMware ESXi host client as root.
2. In the left pane, click Manage.
3. Click System and Time & Date.
4. Click Edit NTP Settings.
5. Select Use Network Time Protocol (enable NTP client).
6. Select Start and Stop with host from the drop-down list.
7. Enter NTP IP Addresses.
8. Click Save.
9. Click Services > ntpd.
10. Click Start.
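
Alternatively, on recent ESXi 7.x builds the same NTP configuration can be applied from an SSH session; the command below is a sketch, and the server value is a placeholder:

esxcli system ntp set --enabled true --server <ntp-server-ip>
esxcli system ntp get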

Rename the BOSS datastore


Use this procedure to rename the BOSS datastore for easier identification.

Steps
1. Log in to VMware ESXi host client as a root.
2. On the left pane, click Storage.
3. Click Datastores.
4. Right-click datastore1 and click Rename.
5. Enter PFMC_DS<last_ip_octet>.
6. Click Save.

Create a PERC datastore


Perform this procedure to create a PERC datastore. This allows VMs to directly access the physical PCI devices.

Steps
1. Log in to VMware ESXi host client as a root.
2. On the left pane, click Storage.
3. Click New Datastore.
4. Select Create new VMFS datastore and click Next.
5. Enter PERC-01 in the Name box and select Local Dell Disk.
6. Click Next.
7. Make sure that Use Full Disk and VMFS 6 are selected and click Next.
8. Verify Summary and click Finish.
9. Click Yes on content warning.


Deploy VMware vCenter Server Appliance (vCSA) on the PowerFlex management controller


Perform this procedure to deploy the VMware vCenter Server Appliance (vCSA) on the optional PowerFlex management
controller.

About this task


Deploying VMware vCenter Server Appliance (vCSA) is a two-step process:
1. Deploy a new appliance to the target VMware vCenter server or ESXi host.
2. Copy data from the source appliance to the VMware vCenter Server Appliance.

Steps
NOTE: VMware vCSA 7.0 installation fails if the FQDN is not specified or DNS records are not created for the corresponding
assigned FQDN during installation. Ensure that the correct forward and reverse records are created in DNS for this service.
It is assumed that the customer provides DNS and may need to create the required records.
1. Deploy a new appliance to the target VMware vCenter server or ESXi host:
a. Mount the ISO and open the VMware vCSA 7.x installer from \vcsa-ui-installer\win32\installer.exe.
b. Select Install from the VMware vCSA 7.x installer.
c. Click Next in Stage 1: Deploy vCenter Server wizard.
d. Select I accept the terms of the License Agreement and click Next.
e. Type the host FQDN of the PowerFlex management controller:
i. Provide the login credentials.
ii. Click Next and click Yes.
f. Type the host FQDN of the PowerFlex management controller.
g. Enter the vCenter VM name (FQDN), set the root password, and confirm the root password. Click Next.
h. Set the deployment size to Large and leave the storage size as the default. Click Next.
i. Select Install on an existing datastore accessible from the target host, select PERC-01 (that was created
previously), and Enable Thin Disk Mode. Click Next.
j. In Configure network settings page, do the following:
Select the following:
● VM network from network
● IPv4 from IP version
● Static from IP assignment
Enter the following:
● FQDN
● IP address
● Subnet
● Default gateway
● DNS server information
k. Click Next.
l. Review the summary and click Finish.
2. Copy data from the source appliance to the VMware vCenter Server Appliance:
a. After selecting Continue from stage 1, select Next from the stage 2 introduction page.
b. Select Synchronize time with NTP Server and enter NTP Server IP Address(es) and select Disabled for SSH
access. Click Next.
c. Enter Single Sign-On domain name and password for SSO. Click Next.
d. Clear the Customer Experience Improvement Program (CEIP) check box. Click Next.
e. Review the summary information and click Finish > OK to continue.
f. Click Close on completion.
g. Log in to validate that the new controller vCSA is operational using SSO credentials.


Create a datacenter and add a host


Perform this procedure to create a datacenter and then add a host to the data center.

Steps
1. Create a data center:
a. Log in to the VMware vSphere Client.
b. Right-click vCenter and click New Datacenter.
c. Enter data center name as PowerFlex Management and click OK.
2. Add a host to the data center:
NOTE: The vCLS VMs are deployed on the local datastore when the node is added to the cluster from vCSA 7.0Ux.
These VMs are deployed automatically by VMware vCenter when hosts are added to the cluster, and they are used to
manage the HA and DRS services on the cluster.

a. Right-click Datacenter and click New Cluster.


b. Enter the cluster name as PowerFlex Management Cluster and retain the default for DRS, HA, and vSAN. Click OK.
c. Right-click the cluster and click Add Host.
d. Enter FQDN of host.
e. Enter root username and password and click Next.
f. Select the certificate and click OK for certificate alert.
g. Verify the Host Summary and click Next.
h. Verify the summary and click Finish.
NOTE: If the node goes into maintenance mode, right-click the VMware ESXi host and click Maintenance Mode >
Exit Maintenance Mode.

Add VMware vSphere licenses


Use this procedure to add VMware vSphere licenses.

Steps
1. In the VMware vSphere client, log in to the vCSA. On the Administration tab, select Licensing.
2. Click Add to open the New Licenses wizard.
3. Enter or paste the license keys for VMware vSphere and vCenter. Click Next.
4. Optionally, provide an identifying name for each license. Click Next.
5. Click Finish to complete the addition of licenses to the system inventory.
6. Select the vCenter license from the list and click OK.
7. Click Assets.
8. Click vCenter Server Systems and select the vCenter server and click Assign License.
9. In the Licenses view, the added licenses should be visible. Click the Assets tab.
10. Click Hosts.
11. Select the controller nodes.
12. Click Assign License.
13. In the Assign License dialog box, select the vSphere license from the list and click OK.

Related information
Deploying the PowerFlex management platform


Installing and configuring a PowerFlex management controller 2.0


This section provides information about installing and configuring a multi-node PowerFlex management controller 2.0.

Configure the switch ports


Switch configuration commands differ by vendor. See the vendor documentation for the correct commands. Because of the
number of switch vendors available, it is not possible to provide configurations for each switch. For more information, see
Configuration data.

Upgrade the firmware


Perform this procedure to upgrade the firmware.

Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate IC folder and select the appropriate files.
Required firmware:
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell iDRAC or Lifecycle Controller firmware
● Dell Mellanox ConnectX-5 EN or Broadcom NetXtreme firmware
● HBA 355i (multi-node) controller firmware
4. Click Upload.
5. Click Install and Reboot.

Configure the BOSS card


Use this procedure to configure the BOSS card on the PowerFlex management controller. Use this procedure only if the BOSS
card RAID1 is not configured.

Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. Press F2 to enter the System Setup main menu, select Device Settings.
5. Select AHCI Controller in SlotX: BOSS-X Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled


● Hard Disk Failover: Disabled


● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
● Clean all Sysprep order and variables: None
14. Click Back > Finish > Finish and click Yes to reboot the node.

Installing VMware ESXi

Install VMware ESXi


Use this procedure to install VMware ESXi on the PowerFlex management controller.

Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, press Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.

Configure VMware ESXi


Use this procedure to configure VMware ESXi on the PowerFlex management controller.

Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to DCUI and select Troubleshooting Options.
4. Select Enable SSH.
5. Select Enable ESXi Shell.
6. Press ESC to exit from troubleshooting mode options.
7. Go to Direct Console User Interface (DCUI) > Configure Management Network.
8. Set Network Adapter to VMNIC2.
9. Set the ESXi Management VLAN ID to the required VLAN value.
10. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
11. Select IPV6 Configuration > Disable IPV6 and press Enter.
12. Go to DNS Configuration and set the customer provided value.
13. Go to Custom DNS Suffixes and set the customer provided value.


14. Press ESC to exit the network configuration and enter Y to apply the changes.
15. Enter Y to commit the changes and the node restarts.
16. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.

Install the Dell Integrated Service Module


Use this procedure to install the Dell Integrated Service Module (ISM).

Prerequisites
Download the latest supported version from Dell iDRAC Service Module.

Steps
1. Copy ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip to the /vmfs/volumes/<datastore>/ folder on
the PowerFlex management node running VMware ESXi.
2. Start an SSH session with the new appliance management host running VMware ESXi using PuTTY.
3. To install VMware vSphere 7.x Dell iDRAC service module, type esxcli software vib install -d /vmfs/
volumes/<datastore>/ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip.

Modify the existing VM network


Use this procedure to modify the VLAN ID to ensure communication during the VMware vCenter deployment.

About this task

NOTE: Modify the VM network on the PowerFlex controller node planned for VMware vCenter deployment.

Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Networking.
3. Right-click VM Network and click Edit Settings.
4. Change the VLAN ID to the flex-node-mgmt VLAN ID (<vlanid>) and click Save.

Configure NTP on the host


Use this procedure to configure the NTP on the PowerFlex management controller.

Steps
1. Log in to VMware ESXi host client as root.
2. In the left pane, click Manage.
3. Click System and Time & Date.
4. Click Edit NTP Settings.
5. Select Use Network Time Protocol (enable NTP client).
6. Select Start and Stop with host from the drop-down list.
7. Enter NTP IP Addresses.
8. Click Save.
9. Click Services > ntpd.
10. Click Start.


Rename the BOSS datastore


Use this procedure to rename the BOSS datastore for easier identification.

Steps
1. Log in to VMware ESXi host client as a root.
2. On the left pane, click Storage.
3. Click Datastores.
4. Right-click datastore1 and click Rename.
5. Enter PFMC_DS<last_ip_octet>.
6. Click Save.

Enable PCI passthrough for the HBA 355 on the PowerFlex controller nodes
Use this procedure to enable PCI passthrough on the PowerFlex management controller.

About this task

NOTE: This task is only applicable to a multi-node configuration.

Steps
1. Log in to the VMware ESXi host.
2. Select Manage > Hardware > PCI Devices.
3. Select Broadcom / LSI HBA H355i Front Device > Toggle passthrough.
4. A reboot is required after the storage data client (SDC) is installed.
NOTE: Ignore the VMware popup warning: Failed to configure passthrough devices.

Install the storage data client on the PowerFlex management controller


Use this procedure to install the PowerFlex storage data client on the PowerFlex management controller.

Steps
1. Copy the storage data client file to the local datastore on the VMware ESXi server.
2. Use SSH to log in to each VMware ESXi host as root.
3. Type the following command to install the storage data client (SDC): esxcli software component apply -d /
vmfs/volumes/PFMC_DS<last ip octet>/sdc.zip.
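
After the installation completes, the component can be verified before proceeding; the filter string is an example, and the component name may differ by release:

esxcli software component list | grep -i sdc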

Configure storage data client on the PowerFlex management controller


Use this procedure to manually configure the storage data client on the PowerFlex management controller 2.0.

Prerequisites
Each storage data client requires a unique UUID.

Steps
1. To configure the storage data client, generate one UUID per server (https://www.guidgenerator.com/online-
guid-generator.aspx).

NOTE: Use the default UUID settings.

2. Use SSH to log in to the VMware ESXi host as root.


3. Substitute the new UUID, the pfmc-data1-vip, and the pfmc-data2-vip in the following command:

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=<guid>
IoctlMdmIPStr=pfmc-data1-vip,pfmc-data2-vip"

4. Type the following command to verify scini configuration: esxcli system module parameters list -m scini |
head
5. Reboot the PowerFlex management controller 2.0.

Related information
PowerFlex appliance node cabling

Deploying VMware vCenter

Deploy the VMware vCenter Server Appliance (vCSA) on the PowerFlex


management controller
Use this procedure to deploy the VMware vCenter Server Appliance (vCSA) on the PowerFlex management controller.

About this task


The GUI deployment is a two stage process. The first stage is a deployment wizard that deploys the OVA file of the appliance on
the target VMware ESXi host or vCenter Server instance. After the OVA deployment finishes, you are redirected to the second
stage of the process that sets up and starts the services of the newly deployed vCSA appliance.

Steps
1. Deploy a new appliance to the target VMware vCenter server or VMware ESXi host:
a. Mount the ISO and open the VMware vCSA 7.x installer from \vcsa-ui-installer\win32\installer.exe.
b. Select Install from the VMware vCSA 7.x installer.
c. Click Next in Stage 1: Deploy vCenter Server wizard.
d. Accept the End User License Agreement and click Next.
e. Type the host FQDN of the PowerFlex management controller 2.0 (install on the node with the modified VM network):
i. Provide all the log in credentials.
ii. Click Next and click Yes.
f. Enter the vCenter VM name (FQDN), set the root password, and confirm the root password. Click Next.
g. Set the deployment size to Large, leave the storage size as the default, and click Next.
h. In the Select Datastore page, select the following:
● Select PFMC_DSxxx.
● Select Enable Thin Disk Mode
i. Click Next.
j. In Configure network settings page, do the following:
Select the following:
● VM network from network
● IPv4 from IP version
● Static from IP assignment
Enter the following:
● FQDN
● IP address
● Subnet
● Default gateway
● DNS server information
k. Click Next.
l. Review the summary and click Finish.


2. Copy data from the source appliance to the VMware vCenter Server Appliance (vCSA):
a. Click Continue to continue from Stage 1 and select Next from the Stage 2 Introduction page.
b. Select Synchronize time with NTP Server and enter the NTP Server IP addresses and select Disabled for
SSH access. Click Next.
c. Enter the Single Sign-On domain name and password for SSO, and click Next.
d. Clear the Customer Experience Improvement Program (CEIP) check box, and click Next.
e. Review the summary information and click Finish > OK to continue.
f. Click Close when it completes.
g. Log in to validate that the new controller vCSA is operational using SSO credentials.

Create a datacenter
Use this procedure to create a datacenter. This will be the container for all the PowerFlex management controller inventory.

Steps
1. Log in to the VMware vSphere Client.
2. Right-click vCenter and click New Datacenter.
3. Enter data center name as PFMC-Datacenter and click OK.

Create a cluster
Use this procedure to create a cluster.

Steps
1. Right-click Datacenter and click New Cluster.
2. Enter the cluster name as PFMC-Management-Cluster, retain the defaults for DRS and HA, and click OK.
3. Verify the summary and click Finish.

Add hosts to the data center


Use this procedure to add all the PowerFlex management controller 2.0 nodes to the data center.

Steps
1. Log in to the VMware vSphere Client.
2. In the left pane, click vCenter > Hosts and Clusters.
3. Right-click the PowerFlex Management Cluster and click Add Host.
4. Enter the FQDN.
5. Enter root username and password for the host and click Next.
6. Repeat steps 2 through 5 for all the controller hosts.
7. On the security alert popup, select All Hosts and click OK.
8. Verify the summary and click Finish.
NOTE: If the node goes into maintenance mode, right-click the VMware ESXi host and click Maintenance Mode > Exit
Maintenance Mode. vCLS VMs are migrated using PowerFlex Manager after takeover.

Add VMware vSphere licenses


Perform this procedure to add VMware vSphere licenses.

Steps
1. In the VMware vSphere client, log in to the vCSA. On the Administration tab, select Licensing.
2. Click Add to open the New Licenses wizard.


3. Enter or paste the license keys for VMware vSphere and vCenter. Click Next.
4. Optionally, provide an identifying name for each license. Click Next.
5. Click Finish to complete the addition of licenses to the system inventory.
6. Select the vCenter license from the list and click OK.
7. Click Assets.
8. Click vCenter Server Systems and select the vCenter server and click Assign License.
9. In the Licenses view, the added licenses should be visible. Click the Assets tab.
10. Click Hosts.
11. Select the controller nodes.
12. Click Assign License.
13. In the Assign License dialog box, select the vSphere license from the list and click OK.

VMware vSphere logical networking details for PowerFlex management controller 2.0

● fe_dvSwitch: Port-channel with LACP, port mode Trunk, speed 25 G, LACP mode Active. Required VLANs: 105, 140, 150.
  Node load balancing: LAG, Active, Source and destination IP and TCP/UDP.
● be_dvSwitch: Port-channel with LACP, port mode Trunk, speed 25 G, LACP mode Active. Required VLANs: 103, 141, 142,
  143, 151, 152, 153 (if required), 154 (if required). Node load balancing: LAG, Active, Source and destination IP and TCP/UDP.
● oob_dvSwitch: port mode Access, speed 10 / 25 G, LACP mode N/A. Required VLAN: 101. Node load balancing: N/A.

Create the first distributed switch (FE_dvSwitch) for the PowerFlex management controller 2.0


Set up virtual distributed switches on the PowerFlex management controller 2.0 FE_dvswitch.

About this task


The FE_dvswitch contains the following VLANs:
● flex-node-mgmt
● flex-stor-mgmt
● pfmc-sds-mgmt

Steps
1. Log in to the VMware vSphere Client.
2. From Home, click Networking and expand the data center.
3. Right-click data center:
a. Click Distributed Switch > New Distributed Switch.
b. Update the name to FE_dvSwitch and click Next.
c. On the Select version page, select 7.0.3 - ESXi 7.0.3 and later and click Next.
d. Under Configure Settings, select 2 for Number of uplinks.
e. Select Enabled from the Network I/O Control menu.
f. Clear the Create default port group option.
g. Click Next.
h. In Ready to complete, click Finish.
4. Right-click FE_dvSwitch and click Settings > Edit Settings.


5. Select Advanced.
6. Set MTU to 9000.
7. Under Discovery Protocol, set the type to Link Layer Discovery Protocol and the operation to Both, and click OK.

Create the distributed port group for the FE_dvSwitch


Create a distributed port group for the PowerFlex management node network.

Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click FE_dvSwitch and select Distributed Port Group > New Distributed Port Group.
3. Enter flex-node-mgmt-<vlanid> and click Next.
4. Leave the port related options (port binding, allocation, and number of ports) as the default values.
5. Select VLAN as the VLAN type.
6. Set the VLAN ID to the appropriate VLAN number and click Next.
7. In the Ready to complete screen, verify the details and click Finish.
8. Repeat steps 2 through 7 to create the following port groups:
● pfmc-sds-mgmt-<vlanid>
● flex-stor-mgmt-<vlanid>

Create the distributed switch (BE_dvSwitch) for the PowerFlex management node network


Set up virtual distributed switches on the PowerFlex management controller 2.0 BE_dvswitch.

About this task


The BE_dvSwitch contains the following VLANs:
● flex-vcsa-ha
● pfmc-vmotion
● pfmc-sds-data1
● pfmc-sds-data2
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid> (if required)
● flex-data4-<vlanid> (if required)

Steps
1. Log in to the VMware vSphere Client.
2. From Home, click Networking and expand the data center.
3. Right-click the data center:
a. Click Distributed Switch > New Distributed Switch.
b. Update the name to BE_dvSwitch and click Next.
c. On the Select version page, select 7.0.3 - ESXi 7.0.3 and later and click Next.
d. Under Configure Settings, select 2 for Number of uplinks.
e. Select Enabled from the Network I/O Control menu.
f. Clear the Create default port group option.
g. Click Next.
h. In Ready to complete, click Finish.
4. Right-click BE_dvSwitch and click Settings > Edit Settings.
5. Select Advanced.
6. Set MTU to 9000.
7. Under Discovery Protocol, set the type to Link Layer Discovery Protocol and the operation to Both, and click OK.


Create the distributed port groups for the BE_dvSwitch


Use this procedure to create a distributed port group for the PowerFlex management node network.

Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click BE_dvSwitch and select Distributed Port Group > New Distributed Port Group.
3. Enter flex-vcsa-ha-<vlanid> and click Next.
4. Leave the port-related options (Port binding, Port allocation, and # of ports) as the default values.
5. Select VLAN as the VLAN type.
6. Set the VLAN ID to the appropriate VLAN number.
7. Clear the Customize default policies configuration and click Next > Finish.
8. Repeat steps 2 through 7 to create the following port groups:
● pfmc-vmotion-<vlanid>
● pfmc-sds-data1-<vlanid>
● pfmc-sds-data2-<vlanid>
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid> (if required)
● flex-data4-<vlanid> (if required)

Create a link aggregation group on the new FE_dvswitch


Use this procedure to create a link aggregation group (LAG) on the new FE_dvswitch.

Steps
1. Log in to the VMware vSphere client.
2. Select FE_dvswitch.
3. Click Configure. In Settings, select LACP.
4. Click New and enter the name LAG-FE. The default number of ports is 2.
5. Select the mode as active.
6. Select Load Balancing Mode as Source and Destination IP address and TCP/UDP Port.
7. Set time out mode to Slow.
8. Click OK to create LAG.

Create a link aggregation group on the new BE_dvswitch


Use this procedure to create a link aggregation group (LAG) on the new BE_dvswitch.

Steps
1. Log in to the VMware vSphere client.
2. Select BE_dvswitch.
3. Click Configure. In Settings, select LACP.
4. Click New and enter the name LAG-BE. The default number of ports is 2.
5. Select mode as active.
6. Select Load Balancing Mode as Source and Destination IP address and TCP/UDP Port.
7. Set time out mode to Slow.
8. Click OK to create LAG.


Modify the failover order for the FE_dvSwitch


Use this procedure to modify the failover order for the FE_dvSwitch by assigning the physical NICs to the LAG ports and
setting the LAG as active in the teaming and failover order of the distributed port groups.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure, and from Settings, select LACP.
4. Click Migrating network traffic to LAGs.
5. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
6. Select All port groups and click Next.
7. Select LAG-FE and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.

Modify the failover order for the BE_dvSwitch


Use this procedure to modify the failover order for the BE_dvSwitch by assigning the physical NICs to the LAG ports and
setting the LAG as active in the teaming and failover order of the distributed port groups.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select BE_dvSwitch.
3. Click Configure, and from Settings, select LACP.
4. Click Migrating network traffic to LAGs.
5. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
6. Select All port groups and click Next.
7. Select LAG-BE and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.

Migrate interfaces to the BE_dvSwitch


Use this procedure to migrate the interfaces vmnic3 and vmnic7 to the BE_dvSwitch.

Steps
1. Log in to the VMware vSphere Client.
2. Select BE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Select Add Hosts and click Next.
6. Click New Hosts, select All Hosts, and click OK > Next.
7. Select vmnic3 and click Assign Uplink.
8. Select LAG-BE-0 and select Apply this uplink assignment to the rest of the hosts, and click OK.
9. Select vmnic7 and click Assign Uplink.
10. Select LAG-BE-1 and select Apply this uplink assignment to the rest of the hosts, and click OK.
11. Click Next > Next > Next > Finish.


Add the first uplink of the vCenter PowerFlex management controller to the FE_dvSwitch
Use this procedure to migrate the first uplink on the PowerFlex management controller with VMware vCenter.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Click Add Hosts and click Next.
6. Select the host with the vCSA.
7. On the Manage physical adapters page, select the Adapters on all hosts tab.
8. For vmnic6, using the drop-down menu for assign uplink, select LAG-FE-1.
9. Click Next > Next > Next > Finish.

Migrate VMware vCenter on the PowerFlex management controller to the FE_dvSwitch


Use this procedure to migrate the VMware vCenter to the FE_dvSwitch for the PowerFlex management controller.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Click Manage host networking and click Next.
6. Select the host with the vCSA, and click OK > Next > Next > Next.
7. On the Migrate VM networking page, select the Configure per virtual machine tab.
8. For the vCSA, under the Destination port group column, select Assign Port Group to open the Select network page.
9. Under Actions, click Assign for the flex-node-mgmt network.
10. Click Next > Finish.

Migrate the second uplink and vmkernel of the vCenter PowerFlex management controller


Use this procedure to migrate the second uplink and vmkernel port on the PowerFlex management controller with VMware
vCenter.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Click Manage host networking and click Next.
6. Select the host with the vCSA and click Next.
7. On the Manage physical adapters page, select the Adapters on all hosts tab.
8. For vmnic2, using the drop-down menu for assign uplink, select LAG-FE-0.
9. Click Next.


10. On the Manage VMkernel adapters page, for vmk0, under the Destination port group column, select Assign Port Group
to open the Select network page.
11. Under actions, click Assign for the flex-node-mgmt network.
12. Click Next > Next > Finish.

Add the PowerFlex management controllers to the uplinks of the FE_dvSwitch


Use this procedure to add the PowerFlex management controllers to the uplinks of the FE_dvSwitch.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Click Add Hosts and click Next.
6. Click Select All and click Next.
7. On the Manage physical adapters page, select the Adapters on all hosts tab.
8. For vmnic2, using the drop-down menu for assign uplink, select LAG-FE-0.
9. For vmnic6, using the drop down menu for assign uplink, select LAG-FE-1.
10. Click Next.
11. On the Manage VMkernel adapters page, for vmk0, under the Destination port group column, select Assign Port Group
to open the Select network page.
12. Under Actions, click Assign for the flex-node-mgmt network.
13. Click Next > Next > Finish.

Create the distributed switch (OOB_dvSwitch) for the PowerFlex management controller


Use this procedure to set up virtual distributed switches on the PowerFlex management controllers.

Steps
1. Log in to the VMware vSphere Client.
2. From Home, click Networking and expand the data center.
3. Right-click the data center:
a. Click Distributed Switch > New Distributed Switch.
b. Update the name to oob_dvSwitch and click Next.
c. On the Select version page, select 7.0.3 - ESXi 7.0.3 and later, and click Next.
d. Under Edit Settings, select 1 for Number of uplinks.
e. Select Enabled from the Network I/O Control menu.
f. Clear the Create default port group option.
g. Click Next.
h. On Ready to complete page, click Finish.

Create the distributed port group for the OOB_dvSwitch


Use this procedure to create a distributed port group for the PowerFlex management controller network.

Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click oob_dvSwitch and select Distributed Port Group > New Distributed Port Group.


3. Leave the port-related options (Port binding, Port allocation, and # of ports) as the default values.
4. Select VLAN as the VLAN type.
5. Enter flex-oob-mgmt-<vlanid> and click Next.
6. Click Next > Finish.

Add a host to the oob_dvSwitch


Use this procedure to add a PowerFlex management controller to the oob_dvswitch.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select oob_dvSwitch.
3. Right-click oob_dvSwitch and select Add and Manage Hosts.
4. Select Add Hosts and click Next.
5. Click Select All and click Next.
6. For vmnic4, using the drop-down menu for assign uplink, select Uplink 1.
7. Click Next > Next > Next > Finish.

Add a VMkernel adapter to the hosts


Use this procedure to add a VMkernel adapter to the PowerFlex management controllers.

Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure on the right pane.
3. Under the Networking tab, select the VMkernel adapter.
4. Click Add Networking.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. On the port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion. For any other networks, retain the default service.
8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat steps 2 through 9 to create the VMkernel adapters for the following port groups:
● pfmc-vmotion-<vlanid>
● pfmc-sds-data1-<vlanid>
● pfmc-sds-data2-<vlanid>

Deploying PowerFlex

Manually deploy the storage VM


Use this procedure to manually deploy the storage VM of the selected Intelligent Catalog (IC). The storage VM is a Linux-based
VM dedicated to PowerFlex and is used to host the PowerFlex software components.

About this task

Deploy the storage VM on the PowerFlex controller nodes.


NOTE: Manually deploy the PowerFlex SVM on each PowerFlex controller node. The SVM on the PowerFlex management controller 2.0 is installed on local storage. The SVM on the PowerFlex management controller is installed on the PERC-01 storage.

Steps
1. Log in to the VMware vCSA.
2. Select Hosts and Clusters.
3. Right-click the ESXi host > Select Deploy OVF Template.
4. Select Local file > Upload file > Browse to the SVM OVA template.
5. Click Open > Next.
6. Enter pfmc-<svm-ip-address> for VM name.
7. Click Next.
8. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click
Next.
9. Click Next.
10. Review details and click Next.
11. Select the local datastore, select Thin Provision, select Disable Storage DRS for this VM, and click Next.
12. Select pfmc-sds-mgmt-<vlanid> for VM network and click Next.
13. Click Finish.

Configure the storage VM


Use this procedure to configure the settings for a storage VM.

About this task

NOTE: The deployed OVA has five network interfaces configured. Remove the two unused interfaces from the OVA.

Steps
1. Right-click each SVM, and click Edit Settings.
a. Set CPU to 12 CPUs with 12 cores per socket.
b. Select Reservation and enter the GHz value: 17.4.
c. Set Memory to 18 GB and check Reserve all guest memory (all locked).
d. Set Network Adapter 1 to the pfmc-sds-mgmt-<vlanid>.
e. Set Network Adapter 2 to the pfmc-sds-data1-<vlanid>.
f. Set Network Adapter 3 to the pfmc-sds-data2-<vlanid>.
g. Click Add New Device and select PCI Device (new single PowerFlex management controller).
h. Enable Toggle DirectPath IO.
i. For PCI Device, select HBA 355i Front Broadcom / LSI.
j. Click OK.
2. Power on the SVM and open a console.
3. Log in using the following credentials:
● Username: root
● Password: admin
4. To change the root password, type passwd and enter the new SVM root password twice.
5. To set the hostname, type: hostnamectl set-hostname <hostname>.


Configure the pfmc-sds-mgmt-<vlanid> networking interface


Use this procedure to configure the networking interface for SDS-to-PowerFlex gateway communication.

Steps
1. Configure the PowerFlex management controller 2.0 network. Type: vi /etc/sysconfig/network/ifcfg-eth0 and
enter the following information:

DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<netmask>

For example:

DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.11
NETMASK=255.255.255.224

2. Configure the default route. Type: vi /etc/sysconfig/network/routes.

default <gateway ip> - <interface>

For example:

default 10.10.10.1 - eth0

3. Configure DNS search and DNS servers. Type: vi /etc/sysconfig/network/config and modify the following:

NETCONFIG_DNS_STATIC_SEARCHLIST="<search domain>"
NETCONFIG_DNS_STATIC_SERVERS="<dns ip>"

For example:

NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
NETCONFIG_DNS_STATIC_SERVERS="10.10.10.240"

4. To restart the network, type: systemctl restart network.

Configure the pfmc-sds-data1-<vlanid> networking interface


Use this procedure to configure the networking interface for SDS-to-SDS, SDS-to-SDC, and SDS-to-PowerFlex gateway
communication.

Steps
1. Configure the PowerFlex management controller 2.0 network. Type: vi /etc/sysconfig/network/ifcfg-eth1 and
enter the following information:

DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<netmask>
MTU=<mtu>


For example:

DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.11.11
NETMASK=255.255.255.224
MTU=9000

2. To restart the network, type: systemctl restart network.

Configure the pfmc-sds-data2-<vlanid> networking interface


Use this procedure to configure the networking interface for SDS-to-SDS, SDS-to-SDC, and SDS-to-PowerFlex gateway
communication.

Steps
1. Configure the PowerFlex management controller 2.0 network. Type: vi /etc/sysconfig/network/ifcfg-eth2 and
enter the following information:

DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<netmask>
MTU=<mtu>

For example:

DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.12.11
NETMASK=255.255.255.224
MTU=9000

2. To restart the network, type: systemctl restart network.

Verify connectivity between the storage VMs


Use this procedure to validate jumbo frames between the SVMs by verifying that an 8972-byte ICMP payload (the 9000-byte MTU minus 28 bytes of headers) passes without fragmentation.

Steps
1. Log in to the VMware vCSA.
2. From Home, select PFMC-Datacenter.
3. Select Hosts and Clusters and expand PFMC-Management-Cluster.
4. Select the SVM and on the VM summary page, select Launch Web Console.
5. Log in to the SVM as root.
6. Run the following commands to verify connectivity between the SVMs:
● For the pfmc-sds-mgmt interface, run: ping [destination pfmc-sds-mgmt-ip]
● For the pfmc-sds-data interfaces, run: ping -M do -s 8972 [pfmc-sds-data-ip]
7. Confirm connectivity for all interfaces to all SVMs.
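For reference, a minimal set of checks, assuming hypothetical neighbor SVM addresses 10.10.10.12 on pfmc-sds-mgmt and 10.10.11.12/10.10.12.12 on the data networks. The 8972-byte payload plus 28 bytes of ICMP/IP headers equals the 9000-byte MTU, and -M do sets the do-not-fragment bit so an undersized MTU anywhere in the path causes the test to fail:

# pfmc-sds-mgmt: basic reachability
ping -c 4 10.10.10.12

# pfmc-sds-data1 and pfmc-sds-data2: full-size jumbo frames, no fragmentation allowed
ping -M do -s 8972 -c 4 10.10.11.12
ping -M do -s 8972 -c 4 10.10.12.12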


Install the required PowerFlex packages


Use this procedure to install the required PowerFlex packages.

About this task


The required PowerFlex packages include the following:
● LIA
● SDS
● OpenJDK
● ActiveMq
● MDM

Steps
1. On all PowerFlex controller nodes perform the following:
a. Install LIA on all the PowerFlex management controllers, enter the following command:
TOKEN='<TOKEN-PASSWORD>' rpm -ivh /root/install/EMC-ScaleIO-lia-x.x-
x.sles15.3.x86_64.rpm
b. Install the SDS on all PowerFlex management controllers, enter the following command:
rpm -ivh /root/install/EMC-ScaleIO-sds-x.xxx.xxx.sles15.3.x86_64.rpm
c. To verify that Java is installed, enter the following command: java -version
Example output if Java is installed:

java -version
openjdk version "11.0.13" 2021-10-19
OpenJDK Runtime Environment (build 11.0.13+8-suse-3.68.1-x8664)
OpenJDK 64-Bit Server VM (build 11.0.13+8-suse-3.68.1-x8664, mixed mode)

If not, install OpenJDK on all the PowerFlex management controllers by entering the following command: rpm -ivh /root/install/java-11-openjdk-headless-x.xxx.xxx.x86_64.rpm
d. Install ActiveMq on all the PowerFlex management controllers, type:
rpm -ivh /root/install/EMC-ScaleIO-activemq-x.xxx.xxx.noarch.rpm
2. On the MDM PowerFlex controller nodes, perform the following:
a. Install MDM on the SVM1 and SVM2 by running the following command:
MDM_ROLE_IS_MANAGER=1 rpm -ivh /root/install/EMC-ScaleIO-mdm-x.x-
xxxx.xxx.el7.x86_64.rpm
3. On the tiebreaker PowerFlex controller nodes, perform the following:
a. Install MDM on SVM3 by running the following command:
MDM_ROLE_IS_MANAGER=0 rpm -ivh /root/install/EMC-ScaleIO-mdm-x.x-
xxxx.xxx.el7.x86_64.rpm
b. To reboot, type reboot.
4. Reboot all SVMs.
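Before rebooting, it can be useful to confirm that all of the packages registered with RPM. A minimal check, assuming the EMC-ScaleIO-* and OpenJDK package naming shown above:

# List the PowerFlex components and the Java runtime that were just installed
rpm -qa | grep -i -E 'scaleio|openjdk'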

Generate PowerFlex MDM certificates


Use this procedure to generate a temporary self-signed certificate required to install the PowerFlex cluster.

Steps
1. Log in as a root user to the primary MDM.
2. Go to the config folder, type: cd /opt/emc/scaleio/mdm/cfg
3. Generate CA certificate, type: python3 certificate_generator_MDM_USER.py --generate_ca
mgmt_ca.pem.
4. Create a CLI certificate, type: python3 certificate_generator_MDM_USER.py --generate_cli
cli_certificate.p12 -CA mgmt_ca.pem --password <password>.


5. Create MDM certificate, type: python3 certificate_generator_MDM_USER.py --generate_mdm


mdm_certificate.pem -CA mgmt_ca.pem.
6. Create additional MDM certificates (each MDM needs a certificate), type: python3
certificate_generator_MDM_USER.py --generate_mdm slv1_mdm_certificate.pem -CA
mgmt_ca.pem.
7. Using SCP, copy the new MDM certificates and mgmt_ca.pem to the config folder to each MDM cluster member (primary
and secondary).
scp slv1_mdm_certificate.pem root@<mdm ip address>:/opt/emc/scaleio/mdm/cfg/mdm_certificate.pem
scp cli_certificate.p12 root@<mdm ip address>:/opt/emc/scaleio/mdm/cfg/
scp mgmt_ca.pem root@<mdm ip address>:/opt/emc/scaleio/mdm/cfg/
8. On the primary and secondary MDMs, add the CA certificate, type: cd /opt/emc/scaleio/mdm/cfg; scli --add_certificate --certificate_file mgmt_ca.pem.
9. Start the MDM service on all SVMs, type: systemctl restart mdm.service.
10. Check the status of the MDM service on all SVMs, type: systemctl status mdm.service.
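If several MDMs need the shared files, the copies from step 7 can be scripted. A minimal sketch, assuming hypothetical MDM management addresses 10.10.10.12 and 10.10.10.13; each MDM still needs its own mdm_certificate.pem copied individually as shown in step 7:

# Copy the CLI certificate and the CA certificate to every other cluster member
for mdm in 10.10.10.12 10.10.10.13; do
    scp cli_certificate.p12 mgmt_ca.pem root@${mdm}:/opt/emc/scaleio/mdm/cfg/
done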

Deploy PowerFlex manually on the PowerFlex management controller


Use this procedure to create the PowerFlex cluster.

Steps
1. Create the MDM cluster in the SVM, type: scli --create_mdm_cluster --master_mdm_ip <data1
ip address,data2 ip address> --master_mdm_management_ip <mdm mgmt ip address>
--cluster_virtual_ip <vip 1,vip 2> --master_mdm_virtual_ip_interface eth1,eth2 --
master_mdm_name <pfmc-svm-last ip octet> --accept_license --approve_certificate.
2. Log in, type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --
p12_password <password>.
3. Query the cluster, type: scli --query_cluster.
4. Add a secondary MDM to the cluster, type: scli --add_standby_mdm --new_mdm_ip <data1 ip
address,data2 ip address> --new_mdm_virtual_ip_interface eth1,eth2 --mdm_role manager --
new_mdm_management_ip <mdm mgmt ip address> --new_mdm_name <pfmc-svm-last ip octet> --
i_am_sure.
5. Add the Tiebreaker MDM to the cluster, type: scli --add_standby_mdm --mdm_role tb --new_mdm_ip <data1
ip address,data2 ip address> --new_mdm_name <pfmc-svm-last ip octet> --i_am_sure.
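For illustration only, using the command syntax from step 1 with hypothetical addresses (data IPs 10.10.11.11 and 10.10.12.11, management IP 10.10.10.11, cluster virtual IPs 10.10.11.10 and 10.10.12.10) and the pfmc-svm-<last octet> naming convention used in this guide:

scli --create_mdm_cluster \
    --master_mdm_ip 10.10.11.11,10.10.12.11 \
    --master_mdm_management_ip 10.10.10.11 \
    --cluster_virtual_ip 10.10.11.10,10.10.12.10 \
    --master_mdm_virtual_ip_interface eth1,eth2 \
    --master_mdm_name pfmc-svm-11 \
    --accept_license --approve_certificate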

Convert a PowerFlex single cluster to a multi-node cluster


Use this procedure to convert a single cluster to a multi-node cluster.

Steps
1. To log in as root to the primary MDM, type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/
cli_certificate.p12 --p12_password <password>.
2. To verify cluster status (cluster mode is 1_node), type: scli --query_cluster.
Example:
Cluster: Mode: 1_node

3. To convert a single node cluster to a three node cluster, type: scli --switch_cluster_mode --cluster_mode
3_node --add_slave_mdm_name <standby-mdm-name> --add_tb_name <tiebreaker-mdm-name>.
Example:
scli --switch_cluster_mode --cluster_mode 3_node --add_slave_mdm_name pfmc-svm-39 --
add_tb_name pfmc-svm-40

4. To verify the three-node cluster, type: scli --query_cluster


Example:


Cluster: Mode: 3_node

Add a protection domain


Use this procedure to add a protection domain.

Steps
1. Log in to the MDM. Type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12
--p12_password <password>

NOTE: After the MDM is discovered in PowerFlex Manager, the login command is as follows:

scli --login --username admin --password <PFxM_password> --management_system_ip <PFxM IP> --insecure

2. Create a protection domain. Type: scli --add_protection_domain --protection_domain_name PFMC

Add a storage pool


Use this procedure to add storage pools.

Steps
1. Log in to the MDM. Type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12
--p12_password <password>.
2. To create the storage pool. Type: scli --add_storage_pool --protection_domain_name PFMC --
dont_use_rmcache --media_type SSD --data_layout medium_granularity --storage_pool_name
PFMC-Pool.

Set the spare capacity for the medium granularity storage pool
Use this procedure to set the spare capacity for the medium granularity storage pool.

Steps
1. Log in to the primary MDM, type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/
cli_certificate.p12 --p12_password <password>.
2. To modify the capacity pool, type scli --modify_spare_policy --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --spare_percentage <percentage>.
NOTE: Spare percentage is 1/n (where n is the number of nodes in the cluster). For example, the spare percentage for
a three-node cluster is 34%.

3. Type Y to proceed.
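For example, for a three-node PowerFlex management controller, the spare policy from step 2 would be set to 34 percent:

scli --modify_spare_policy --protection_domain_name PFMC \
    --storage_pool_name PFMC-Pool --spare_percentage 34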

Add storage data servers


Use this procedure to add storage data servers.

Steps
1. Log in to the MDM. Type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12
--p12_password <password>.
2. To add storage data servers, type scli --add_sds --sds_ip <pfmc-sds-data1-ip,pfmc-sds-data2-ip>
--protection_domain_name PFMC --storage_pool_name PFMC-Pool --disable_rmcache --sds_name
PFMC-SDS-<last ip octet>.
3. Repeat for each PowerFlex management controller.
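As an illustration of step 2, assuming hypothetical data IP addresses 10.10.11.11 and 10.10.12.11 for the first controller and the naming conventions used in this guide:

scli --add_sds --sds_ip 10.10.11.11,10.10.12.11 \
    --protection_domain_name PFMC --storage_pool_name PFMC-Pool \
    --disable_rmcache --sds_name PFMC-SDS-11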


Identify the disks on the PowerFlex controller nodes


Use this procedure to identify the disks on each of the PowerFlex controller nodes.

Steps
1. Log in as root to each of the storage VMs.
2. To identify all available disks on the SVM, type lsblk.
3. Repeat for all PowerFlex controller node SVMs.
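To make the device list easier to read, lsblk can be limited to whole disks. A minimal sketch; device names and sizes vary by node:

# Show only physical disks (no partitions), with size and model
lsblk -d -o NAME,SIZE,TYPE,MODEL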

Add storage devices


Use this procedure to add storage devices.

Steps
1. Log in to the MDM: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --
p12_password <password>.
2. To add SDS storage devices, type scli --add_sds_device --sds_name <sds name> --storage_pool_name
<storage pool name> --device_path /dev/sd(x).
3. Repeat for all storage devices and for all PowerFlex management controller SVMs.
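As an illustration of step 2, assuming a hypothetical SDS named PFMC-SDS-11 and devices /dev/sdb through /dev/sdd identified with lsblk; adjust the device list to match each SVM:

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    scli --add_sds_device --sds_name PFMC-SDS-11 \
        --storage_pool_name PFMC-Pool --device_path ${dev}
done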

Create datastores
Use this procedure to create datastores and add volumes.

About this task


This procedure creates the following datastores and volumes:
● vCSA
● General
● PFMP

Steps
1. Log in to the MDM: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --
p12_password <password>.
2. To create the vCSA datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 3000 --volume_name vcsa --dont_use_rmcache.
3. To create the general datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 1500 --volume_name general --dont_use_rmcache.
4. To create the PowerFlex Manager datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 3000 --volume_name PFMP --dont_use_rmcache.

Add PowerFlex storage to PowerFlex management controller nodes


Use this procedure to map the volumes to all the PowerFlex management controllers.

Steps
1. Log in to the MDM: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --
p12_password <password>.
2. To query all storage data clients (SDC) to capture the SDC IDs, type scli --query_all_sdc.
3. To query all the volumes to capture volume names, type scli --query_all_volumes.
4. To map volumes to SDCs, type scli --map_volume_to_sdc --volume_name <volume name> --sdc_id <sdc
id> --allow_multi_map.
5. Repeat steps 2 through 4 for all volumes and SDCs.
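As an illustration of steps 2 through 4, assuming a hypothetical SDC ID of 6f3c21a500000001 returned by the query; repeat the mapping for each SDC ID reported:

scli --query_all_sdc
scli --query_all_volumes
scli --map_volume_to_sdc --volume_name vcsa --sdc_id 6f3c21a500000001 --allow_multi_map
scli --map_volume_to_sdc --volume_name general --sdc_id 6f3c21a500000001 --allow_multi_map
scli --map_volume_to_sdc --volume_name PFMP --sdc_id 6f3c21a500000001 --allow_multi_map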


Create VMFS datastores for PowerFlex controller nodes


Use this procedure to create VMFS datastores for PowerFlex controller nodes.

Steps
1. Log in to the VMware vCSA.
2. Right-click the new PowerFlex management controller node in the Hosts and Clusters view.
3. Select Storage > New Datastore
4. Select VMFS and click Next.
5. Enter a name for the datastore, select an available LUN, and click Next.
6. Select VMFS 6 and click Next.
7. For partition configuration, retain the default settings and click Next.
8. Click Finish to start creating the datastore.
9. Repeat for all additional volumes created in the PowerFlex cluster.

Delete vSwitch0
Use this procedure to delete the standard switch (vSwitch0) on all PowerFlex management controllers.

Steps
1. Log in to the VMware vSphere Client.
2. On the Menu, click Hosts and Clusters.
3. Select Controller A node.
4. Click Configure > Networking > Virtual Switches.
5. Expand Standard Switch: vSwitch0.
6. Click ... > Remove.
7. On the Remove Standard Switch window, click Yes.

Migrate the VMware vCSA storage using VMware vMotion


Steps
1. Right-click the VMware vCSA VM.
2. Click Migrate.
3. For the migration type, select Change storage only.
4. Click Next.
5. Select vcsa.
Ensure compatibility checks succeed.
6. Click Next.
7. Click Finish.

Enable VMware vSphere HA and DRS for the new cluster


Use this procedure to enable vSphere DRS and vSphere Availability for the new cluster.

Steps
1. Log in to the VMware vSphere Client.
2. Click VMware vCenter > Hosts and Clusters > Cluster Name.
3. To enable vSphere HA, click vSphere Availability under Services, and click Edit.
4. Select Turn ON VMware vSphere HA.


5. Click Heartbeat Datastores.


6. Select Use datastores only from the specified list.
7. In the Available heartbeat datastores, select the following:
● general
● pfmp
● vcsa
8. Click OK.

Install and configure the embedded operating system jump server
Perform this procedure to set up an embedded operating system jump server (embeddedOS15 SPx).

Prerequisites
● Ensure VMware ESXi is installed on all the PowerFlex controller nodes.
● Copy the IC code repository to the /home/admin/share path of the jump server.
● Confirm the availability of the virtual machine template: Embedded-JumpSrv-YYYYMMDD.ova, as specified in the
appropriate IC.
● Obtain an IP address from flex-node-mgmt-<vlanid> for the jump server main interface.

Steps
1. Deploy the OVA:
a. Log in to VMware vSphere Client using credentials.
b. Select vSphere Client and select Host and Clusters.
c. Right-click the controller cluster PFMC Cluster and select Deploy OVF Template. The Deploy OVF Template wizard opens.
d. On the Select an OVF Template page, upload the OVF template using either the URL or Local file option, and click Next.
e. On the Select a name and folder page, enter the name of the virtual machine according to the Enterprise Management Platform (EMP), select the destination folder, and click Next.
f. On the Select a compute resource page, select the node where you want the jump server to be hosted and click Next.
g. On the Review details page, verify the template details and click Next.
h. On the Select storage page, choose the datastore as per the EMP, select Thin Provision as the virtual disk format, and click Next.
i. On the Select networks page, assign the destination networks to the VM:
● Primary NIC for management access (flex-node-mgmt-<vlanid>)
● Secondary NIC for iDRAC access (flex-oob-mgmt-<vlanid>)
● Third NIC for initial deployment support access (optional)
j. Click Next.
k. On the Ready to Complete page, review the settings and click Finish.
2. On the first boot:
a. Right-click the VM and power it on. Wait for the initial boot to complete.
b. Log in as admin.
c. Set up the networking:
i. Use the yast command to configure the management network.

ii. On the YaST Control Center page, select System and then select Network Settings.
iii. Select the Global Options tab.
iv. Clear Enable IPv6 to disable IPv6.

d. Configure the interface:


i. On the Network Settings page, select the Overview tab and select eth0.
ii. Press F4 to edit the interface.
iii. Select Statically Assigned IP Address.
iv. Provide the management IP address and subnet mask.
v. Move to the General tab.
vi. Provide the MTU size as per the EMP.
vii. Press F10 to save the changes.

e. Configure the hostname/DNS:


i. On the Network Settings page, select the Hostname/DNS tab.
ii. Provide the hostname in the Static Hostname field.
iii. Provide the DNS server IP addresses in the Name Server fields and the search domain in the Domain Search List.

f. Configure the routing:


i. On the Network Settings page, select the Routing tab.
ii. Press F3 to add a new destination gateway.
iii. In the pop-up, provide the gateway IP address and select eth0 (the device used for the management IP address) as the Device.

g. Press F10 to save all the changes, and press F9 to exit the YaST Control Center.
h. Use ip addr s to verify that the IP addresses are configured properly and that the interfaces are up.
i. Edit the /etc/exports file and add the flex-node-mgmt-<vlanid> subnet for NFS shares (see the example /etc/exports entry after this procedure).
j. To change the default password, run sudo passwd and, at the prompt, provide the password as per the EMP.
k. Power off the VM.
3. Upgrade the VM hardware version:
a. Select Upgrade.
b. Check Schedule VM Compatibility Upgrade.
c. Expand Upgrade.
d. Select Compatible with (*): ESXi 7.x and later.
e. Click OK.
4. Power on the VM.
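The NFS export referenced in the /etc/exports edit in step 2 can be a single line. A minimal sketch, assuming the /home/admin/share path used for the IC repository and a hypothetical flex-node-mgmt subnet of 192.168.105.0/24; run sudo exportfs -ra after editing to apply the change:

# /etc/exports: allow the flex-node-mgmt subnet to mount the IC repository share
/home/admin/share 192.168.105.0/24(rw,sync,no_root_squash)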


7
Deploying the PowerFlex management platform
This section describes how to install and configure the PowerFlex management platform.
This includes the deployment and configuration of the temporary PowerFlex management platform installer virtual machine.
This PowerFlex management platform installer VM is used to deploy the containerized services required for the PowerFlex
management platform. Remove the installer VM after the deployment of the PowerFlex management platform. PowerFlex
management platform deployment types are:
● PowerFlex controller node - A single node VMware ESXi system with local (RAID) storage.
● PowerFlex management controller - A multi-node highly available cluster based on PowerFlex storage and VMware ESXi.
● Customer provided hypervisor based on kernel-based VM (KVM) - A customer deploys our eSLES VMs on their hypervisor to
run the management.
The PowerFlex management controller (single node or multi-node) must be configured before installing the PowerFlex management platform. Verify that the PowerFlex management controller meets the recommended resource requirements before proceeding.
NOTE:
● The PowerFlex management platform installer VM is removed after installation of the PowerFlex management platform
cluster.
● The PowerFlex management platform cluster requires three VMs to be deployed.
● Ensure the network VLAN requirements are met:
○ VLAN flex-node-mgmt (105) and flex-stor-mgmt (150) must be routable to each other
○ VLAN flex-node-mgmt (105) and pfmc-sds-mgmt (140) must be routable to each other
○ VLAN pfmc-sds-mgmt (140) and flex-stor-mgmt (150) must not route to each other
○ If VLAN 150 and 105 are not routed to each other, contact Dell support.
● Ensure the NTP is configured for correct time synchronization for all hosts and VMs.
● Ensure the DNS and PTR records are setup and properly configured.

Related information
Add VMware vSphere licenses

Deploying and configuring the PowerFlex management platform installer VM


Deploying and configuring the PowerFlex management platform installer VM using VMware vSphere

Use the following procedures to deploy and configure the PowerFlex management platform installer VM using VMware vSphere.

Related information
Deployment requirements


Deploy the PowerFlex management platform installer VM


Use this procedure to deploy and configure the PowerFlex management platform installer VM in a VMware environment.

Steps
1. Log in to the VMware vCSA.
2. Click Menu > Shortcuts > Hosts and Clusters.
3. Right-click the ESXi Host > Select Deploy OVF Template.
4. Select Local File > Upload Files > Browse to the PowerFlex management platform OVA Template
5. Click Open > Next.
6. Enter pfmp-installer for the VM name.
7. Click Next.
8. Verify that there are no compatibility warnings and click Next.
9. Click Next.
10. Review details and click Next.
11. Select Virtual disk format > Thin provision > Next.
12. Select the datastore.
13. Select flex-node-mgmt-<vlanid> > OK > Next.
14. Click Finish.
15. Right-click the VM and select Power > Power On.

Configure the PowerFlex management platform installer networking interface

Use this procedure to configure the networking for the PowerFlex management platform installer VM.

About this task


The following network adapter needs to be configured on the PowerFlex management platform installer:
● 105: flex-node-mgmt (eth0)

Steps
1. Launch the web console from vCSA and log in as delladmin.
2. To configure the <flex-node-mgmt> interface, type sudo vi /etc/sysconfig/network/ifcfg-eth0

DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>

Example:

DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.12
NETMASK=255.255.255.0

a. To configure the default route, type sudo vi /etc/sysconfig/network/routes and add the following:
default <gateway ip> - <interface>
Example: default 10.10.10.1 - eth0


b. To configure DNS search and DNS servers, type sudo vi /etc/sysconfig/network/config and modify the following:
NETCONFIG_DNS_STATIC_SEARCHLIST="<search domain>"
NETCONFIG_DNS_STATIC_SERVERS="<dns_ip1 dns_ip2>"
Example:
NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
NETCONFIG_DNS_STATIC_SERVERS="10.10.10.240 10.10.10.241"

Configure the NTP service


Configure the NTP services for the PowerFlex management platform installer VM.

Steps
1. To configure the NTP server, type sudo vi /etc/chrony.conf and add the following: server <ntp ip address>
iburst.
For example: server 10.10.10.240 iburst
2. To enable chronyd, type sudo systemctl enable chronyd.
3. To reboot, type sudo reboot.
4. Log in as delladmin.
5. To check that the server is synced to NTP, type chronyc tracking.

NOTE: This may take a few minutes to sync. Examples of chronyc tracking status are as follows:

Successful chronyc tracking status: a configured NTP server should be listed under Reference ID, Stratum should be a value greater than 0, and System time should report much less than 1 second of time difference:

node1:~ # chronyc tracking
Reference ID    : 64400012 (0.pool.ntp.org)
Stratum         : 4
Ref time (UTC)  : Mon Mar 07 17:15:39 2022
System time     : 0.000001647 seconds slow of NTP time
Last offset     : -0.000006101 seconds
RMS offset      : 0.000020258 seconds
Frequency       : 33.247 ppm slow
Residual freq   : -0.000 ppm
Skew            : 0.016 ppm
Root delay      : 0.040418178 seconds
Root dispersion : 0.033837218 seconds
Update interval : 1024.6 seconds
Leap status     : Normal

Unsuccessful chronyc tracking status: Stratum 0 means the server is not synced and Leap status reports Not synchronized:

node1:~ # chronyc tracking
Reference ID    : 00000000 ()
Stratum         : 0
Ref time (UTC)  : Thu Jan 01 00:00:00 1970
System time     : 0.000000000 seconds fast of NTP time
Last offset     : +0.000000000 seconds
RMS offset      : 0.000000000 seconds
Frequency       : 0.000 ppm slow
Residual freq   : +0.000 ppm
Skew            : 0.000 ppm
Root delay      : 1.000000000 seconds
Root dispersion : 1.000000000 seconds
Update interval : 0.0 seconds
Leap status     : Not synchronized

Deploying and configuring the PowerFlex management platform installer using Linux KVM

Use the following procedures to deploy and configure the PowerFlex management platform installer VM using Linux KVM.
Ensure the Linux KVM environment has the recommended resources available for creating the management platform installer VM.
● Ensure the NTP is configured for correct time synchronization for all hosts and VMs.
● Ensure the reverse DNS and PTR records are set up and properly configured.

Related information
Deployment requirements


Deploy the PowerFlex management platform installer VM


Use this procedure to set up the management platform installer virtual machine on a Linux KVM.

Steps
1. Log in to the KVM server.
2. Copy the management eSLES QCOW image to the KVM server.
3. Open terminal and type virt-manager.
4. Click File > New Virtual Machine.
5. Select Import existing disk image and click Forward.
6. Click Browse and select the eSLES QCOW image from the saved path.
7. Select the operating system as Generic OS and click Forward.
8. Complete necessary changes to the CPU and RAM as per requirements and click Forward.
9. Enter the VM name, select Bridge device in the network selection, enter the device name, and click Finish.

Configure the PowerFlex management platform installer networking interface

Use this procedure to configure the networking for the PowerFlex management platform installer VM.

About this task


The following network adapter needs to be configured on the PowerFlex management platform installer:
● 105: flex-node-mgmt (eth0)

Steps
1. Launch the VM console from the Virtual Machine Manager (virt-manager) and log in as delladmin.
2. To configure the <flex-node-mgmt> interface, type sudo vi /etc/sysconfig/network/ifcfg-eth0

DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>

Example:

DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.12
NETMASK=255.255.255.0

a. To configure the default route, type sudo vi /etc/sysconfig/network/routes and add the following:
default <gateway ip> - <interface>
Example: default 10.10.10.1 - eth0
b. To configure DNS search and DNS servers, type sudo vi /etc/sysconfig/network/config and modify the following:
NETCONFIG_DNS_STATIC_SEARCHLIST="<search domain>"
NETCONFIG_DNS_STATIC_SERVERS="<dns_ip1 dns_ip2>"
Example:
NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
NETCONFIG_DNS_STATIC_SERVERS="10.10.10.240 10.10.10.241"


Configure the NTP service


Configure the NTP services for the PowerFlex management platform installer VM.

Steps
1. To configure the NTP server, type sudo vi /etc/chrony.conf and add the following: server <ntp ip address>
iburst.
For example: server 10.10.10.240 iburst
2. To enable chronyd, type sudo systemctl enable chronyd.
3. To reboot, type sudo reboot.
4. Log in as delladmin.
5. To check that the server is synced to NTP, type chronyc tracking.

NOTE: This may take a few minutes to sync. Examples of chronyc tracking status are as follows:

Successful chronyc tracking status: a configured NTP server should be listed under Reference ID, Stratum should be a value greater than 0, and System time should report much less than 1 second of time difference:

node1:~ # chronyc tracking
Reference ID    : 64400012 (0.pool.ntp.org)
Stratum         : 4
Ref time (UTC)  : Mon Mar 07 17:15:39 2022
System time     : 0.000001647 seconds slow of NTP time
Last offset     : -0.000006101 seconds
RMS offset      : 0.000020258 seconds
Frequency       : 33.247 ppm slow
Residual freq   : -0.000 ppm
Skew            : 0.016 ppm
Root delay      : 0.040418178 seconds
Root dispersion : 0.033837218 seconds
Update interval : 1024.6 seconds
Leap status     : Normal

Unsuccessful chronyc tracking status: Stratum 0 means the server is not synced and Leap status reports Not synchronized:

node1:~ # chronyc tracking
Reference ID    : 00000000 ()
Stratum         : 0
Ref time (UTC)  : Thu Jan 01 00:00:00 1970
System time     : 0.000000000 seconds fast of NTP time
Last offset     : +0.000000000 seconds
RMS offset      : 0.000000000 seconds
Frequency       : 0.000 ppm slow
Residual freq   : +0.000 ppm
Skew            : 0.000 ppm
Root delay      : 1.000000000 seconds
Root dispersion : 1.000000000 seconds
Update interval : 0.0 seconds
Leap status     : Not synchronized

Deploying and configuring the PowerFlex management platform VMs
Deploying and configuring the PowerFlex management platform using VMware vSphere
Use the following procedures to deploy and configure the PowerFlex management platform using VMware vSphere.

Related information
Deployment requirements


Deploy the PowerFlex management platform


Use this procedure to complete the steps needed to deploy the PowerFlex management platform cluster VMs.

Prerequisites
The PowerFlex management platform cluster requires three VMs to be deployed.

Steps
1. Log in to the VMware vCSA.
2. Click Menu > Shortcuts > Hosts and Clusters.
3. Right-click the ESXi Host > Select Deploy OVF Template.
4. Select Local File > Upload Files > Browse to the PowerFlex management platform OVA
5. Click Open > Next.
6. Enter pfmp-mvm-<number> for the VM name.
7. Click Next.
8. Verify that there are no compatibility warnings and click Next.
9. Click Next.
10. Review details and click Next.
11. Select Virtual disk format > Thin provision > Next.
12. Select the datastore.
13. Select Desired Destination Network > OK > Next.
14. Click Finish.
15. Repeat the above for the three management VMs.
NOTE: The management VMs and the installer VM use the flex-node-mgmt network. They must be on the same
network.

Add the PowerFlex management platform networking interfaces


Use this procedure to complete the steps needed to configure the networking for the PowerFlex management platform VMs.

About this task


The following network adapters need to be configured on the management virtual machines:

Network Adapter # VLAN Network Interface


1 flex-node-mgmt-105 eth0
2 flex-oob-mgmt-101 eth1
3 flex-data1 eth2
4 flex-data2 eth3
5 (if required) flex-data3 (if required) eth4 (if required)
6 (if required) flex-data4 (if required) eth5 (if required)

Prerequisites
PowerFlex Manager requires these interfaces for alerting, upgrade, management, and other services.

Steps
1. Log in to the VMware vCSA.
2. Right-click a management virtual machine and click Edit Settings.
3. Select Virtual Hardware > Add New Device > Network Adapter.


4. For the new network adapter, click the drop-down menu and select the appropriate <vlan id> port group.
5. Repeat for all required network adapters.
6. Click OK.
7. Repeat for all management virtual machines.
8. Power on the management virtual machine.

Configure the PowerFlex management platform networking interfaces


Use this procedure to complete the steps needed to configure the networking files for the PowerFlex management platform
cluster VMs.

Steps
1. Launch the web console from the VMware vCSA and log in as delladmin.
2. Configure the flex-node-mgmt-<vlanid> eth0 interface:
a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth0

DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>

For example:

DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.12
NETMASK=255.255.255.0

b. Configure the default route. Type: sudo vi /etc/sysconfig/network/routes.


default <gateway ip> - <interface>
For example:
default 10.10.10.1 - eth0
c. Configure DNS search and DNS servers. Type: sudo vi /etc/sysconfig/network/config and modify the
following:
NETCONFIG_DNS_STATIC_SEARCHLIST="<search domain>"
NETCONFIG_DNS_STATIC_SERVERS="< dns_ip1 dns_ip2>"
For example:
NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
NETCONFIG_DNS_STATIC_SERVERS=" 10.10.10.240 10.10.10.241"
3. Configure the flex-oob-mgmt-<vlanid> eth1 interface:
a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth1.

DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>

For example:

DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static


IPADDR=10.10.9.11
NETMASK=255.255.255.0

4. Configure the flex-data1-<vlanid> eth2 interface:


a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth2.

DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>

For example:

DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=192.168.151.10
NETMASK=255.255.255.0

5. Configure the flex-data2-<vlanid> eth3 interface:


a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth3.

DEVICE=eth3
NAME=eth3
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>

For example:

DEVICE=eth3
NAME=eth3
STARTMODE=auto
BOOTPROTO=static
IPADDR=192.168.152.10
NETMASK=255.255.255.0

NOTE: Configure flex-data3-<vlanid> and flex-data4-<vlanid>, if required.

6. Repeat steps 1 through 5 for all PowerFlex management platform VMs.

Configure the NTP service


Configure the NTP services for the PowerFlex management platform VMs.

Steps
1. To configure the NTP server, type sudo vi /etc/chrony.conf and add the following: server <ntp ip address>
iburst.
For example: server 10.10.10.240 iburst
2. To enable chronyd, type sudo systemctl enable chronyd.
3. To reboot, type sudo reboot.
4. Log in as delladmin.
5. To check that the server is synced to NTP, type chronyc tracking.

NOTE: This may take a few minutes to sync. Examples of chronyc tracking status are as follows:


Successful chronyc tracking status: a configured NTP server should be listed under Reference ID, Stratum should be a value greater than 0, and System time should report much less than 1 second of time difference:

node1:~ # chronyc tracking
Reference ID    : 64400012 (0.pool.ntp.org)
Stratum         : 4
Ref time (UTC)  : Mon Mar 07 17:15:39 2022
System time     : 0.000001647 seconds slow of NTP time
Last offset     : -0.000006101 seconds
RMS offset      : 0.000020258 seconds
Frequency       : 33.247 ppm slow
Residual freq   : -0.000 ppm
Skew            : 0.016 ppm
Root delay      : 0.040418178 seconds
Root dispersion : 0.033837218 seconds
Update interval : 1024.6 seconds
Leap status     : Normal

Unsuccessful chronyc tracking status: Stratum 0 means the server is not synced and Leap status reports Not synchronized:

node1:~ # chronyc tracking
Reference ID    : 00000000 ()
Stratum         : 0
Ref time (UTC)  : Thu Jan 01 00:00:00 1970
System time     : 0.000000000 seconds fast of NTP time
Last offset     : +0.000000000 seconds
RMS offset      : 0.000000000 seconds
Frequency       : 0.000 ppm slow
Residual freq   : +0.000 ppm
Skew            : 0.000 ppm
Root delay      : 1.000000000 seconds
Root dispersion : 1.000000000 seconds
Update interval : 0.0 seconds
Leap status     : Not synchronized

6. Repeat these steps for all PowerFlex management platform VMs.

Deploy the PowerFlex management platform cluster


Use this task to complete the steps needed to configure the PFMP_Config.json file which is used for the configuration and
deployment of the PowerFlex management platform cluster.

Prerequisites
The following locations contain log files for troubleshooting:
● PowerFlex management platform installer logs: /opt/dell/pfmp/PFMP_Installer/logs
● Platform installer logs: /opt/dell/pfmp/atlantic/logs/bedrock.log
This table describes the PFMP_Config.json and its configuration parameters:

Key: Nodes (three nodes/VMs recommended)
Value: hostname and IP address for each node
Description: The IP addresses of the nodes on which the PowerFlex management platform cluster will be deployed, along with the hostnames that need to be used for these nodes. Ensure the hostnames are in lowercase.

Key: ClusterReservedIPPoolCIDR
Value: IP subnet (a /23 network is recommended)
Description: A private subnet that does not conflict with any of the subnets in the data center. The same subnet can be used for multiple PowerFlex management platform deployments.

Key: ServiceReservedIPPoolCIDR
Value: IP subnet (a /23 network is recommended)
Description: A private subnet that does not conflict with any of the subnets in the data center. The same subnet can be used for multiple PowerFlex management platform deployments.

Key: RoutableIPPoolCIDR
Value: IP ranges for flex-node-mgmt-<vlanid>, flex-oob-mgmt-<vlanid>, flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid> (if required), and flex-data4-<vlanid> (if required)
Description: IP requirements (number of IP addresses):
● flex-node-mgmt-<vlanid> (5)
● flex-oob-mgmt-<vlanid> (5)
● flex-data1-<vlanid> (5)
● flex-data2-<vlanid> (5)
● flex-data3-<vlanid> (5) (if required)
● flex-data4-<vlanid> (5) (if required)
The IP addresses are used for the following services:
● Ingress (web server) IP address
● NFS IP address
● SNMP IP address (receiving SNMP traps)
● Support Assist Remote
● Additional IP address for NFS services (used only during upgrade)

Key: PFMPHostname
Value: FQDN
Description: The FQDN or hostname that a user can use to connect to the PowerFlex management platform UI through a browser. The FQDN/hostname must be resolvable through DNS or a hosts file (on the system where the browser is running).

Key: PFMPHostIP
Value: IP address
Description: This parameter specifies the ingress IP address.
NOTE: The ingress IP address must be selected from one of the IP addresses specified in the flex-node-mgmt-<vlanid> IP service range.

Steps
1. To SSH as non-root user to the PowerFlex management platform Installer, run the following command: ssh
delladmin@<pfmp installer ip>.
2. To navigate to the config directory, run the following command: cd /opt/dell/pfmp/PFMP_Installer/config
For example: cd /opt/dell/pfmp/PFMP_Installer/config/
3. To configure the PFMP_Config.json, run the following command: sudo vi PFMP_Config.json and update the
configuration parameters.
For example:

{
"Nodes":
[
{
"hostname": "pfmp-mgmt-01",
"ipaddress": "10.10.10.01"
},
{
"hostname": "pfmp-mgmt-02",
"ipaddress": "10.10.10.02"
},
{
"hostname": "pfmp-mgmt-03",
"ipaddress": "10.10.10.03"
}
],

"ClusterReservedIPPoolCIDR" : "10.42.0.0/23",

"ServiceReservedIPPoolCIDR" : "10.43.0.0/23",

"RoutableIPPoolCIDR" : [{"flex-node-mgmt-<vlanid>":"10.10.10.20-10.10.10.24"},
{"flex-oob-mgmt-<vlanid>":"10.10.20.20-10.10.20.24"},

{"flex-data1-<vlanid>”:”192.168.151.20-192.169.151.24"},

94 Deploying the PowerFlex management platform


Internal Use - Confidential

{"flex-data2-<vlanid>”:”192.168.152.20-192.168.152.24"}],

"PFMPHostname" : "dellpowerflex.com",

“PFMPHostIP” : “10.10.10.20”

NOTE: PFMPHostIP has to be in the range specified for flex-node-mgmt-<vlanid> in "RoutableIPPoolCIDR", for example "10.10.10.20-10.10.10.24".

NOTE: If the customer is planning for four data networks, add flex-data3-<vlanid> and flex-data4-<vlanid> also in
PFMP_Config.json file.

4. Navigate to the scripts folder in the working directory, type: cd /opt/dell/pfmp/PFMP_Installer/scripts


5. To set up the installer, run the following command: sudo ./setup_installer.sh.
6. To install the PowerFlex management platform, run the following command: sudo ./install_PFMP.sh. Enter the sudo user and password for the target nodes when prompted, and enter Y when asked Are passwords the same for all the cluster nodes?.
7. Once the installation is complete, log in to PowerFlex Manager using your username and password.
If the PowerFlex management platform installer fails, the relevant errors are located in /opt/dell/pfmp/atlantic/logs/bedrock.log.
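Because PFMPHostname must resolve to the ingress IP address (PFMPHostIP), it can help to confirm name resolution from the workstation that will run the browser before logging in. A minimal check using the example values above; if DNS records are not available, a local hosts-file entry achieves the same result:

# Verify DNS resolution of the PowerFlex Manager FQDN
nslookup dellpowerflex.com

# Alternative: add a static entry to the local hosts file (Linux/macOS)
echo "10.10.10.20 dellpowerflex.com" | sudo tee -a /etc/hosts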

Remove the PowerFlex management platform installer VM


Use this procedure to remove the PowerFlex management platform installer VM.

Steps
1. Log in to the VMware vCSA.
2. Select Hosts and Clusters.
3. Right-click PowerFlex management platform installer VM > Power > Power Off.
4. Once powered off, right-click Delete from Disk.

Creating VM anti-affinity rules for management virtual machines


A VM-host affinity rule establishes an affinity (or anti-affinity) relationship between a virtual machine DRS group and a host DRS group. You must create both groups before you can create a rule that links them. This section covers the procedures to
create both the DRS groups and the anti-affinity rule.
NOTE: Creating VM anti-affinity rules for management virtual machines is not applicable to the single-node controller; it is used only in multi-node environments.

Create a host DRS group


Use this procedure to create a host Distributed Resource Scheduler (DRS) group.

About this task


A VM-host affinity rule establishes an affinity (or anti-affinity) relationship between a virtual machine DRS group and a host DRS group. You must create both groups before you can create a rule that links them.

Steps
1. Log in to the VMware vSphere Client.
2. Browse to the cluster in the VMware vSphere Client and click the Configure tab.
3. Under Configuration, select VM/Host Groups and click Add.
4. From the Create VM/Host Group window, type MVM Host Group for the group.


5. Select Host Group from the Type list and click Add.
6. Ensure that all PowerFlex management controller hosts are selected and click OK > OK.

Create a virtual machine DRS group


Use this procedure to create a virtual machine Distributed Resource Scheduler (DRS) group.

About this task


A VM-host affinity rule establishes an affinity (or anti-affinity) relationship between a virtual machine DRS group and a host DRS group. You must create both groups before you can create a rule that links them.

Steps
1. Log in to the VMware vSphere Client.
2. Browse to the cluster in the VMware vSphere Client and click the Configure tab.
3. Under Configuration, select VM/Host Groups and click Add.
4. From the Create VM/Host Group window, type MVM VM Group for the group.
5. Select VM Group from the Type list and click Add.
6. Ensure that all PowerFlex management virtual machines are selected and click OK > OK.

Create VM anti-affinity rules for management virtual machines


Use this procedure to create VM anti-affinity rules for management virtual machines.

About this task


This procedure applies only to multi-node (three or more) controllers. For a single-node controller, skip these steps.

Prerequisites
Ensure you have created the host and virtual machine DRS groups to which the VM-host anti-affinity rule applies.

Steps
1. Log in to the VMware vSphere Client.
2. Select Hosts and Clusters.
3. Browse to the PowerFlex management cluster in the VMware vSphere Client and click the Configure tab.
4. Under Configuration, click VM/Host Rules.
5. Click Add.
6. From the Create VM/Host Rules window, type MVM Rule for the rule.
7. From the Type menu, select Virtual Machines to Hosts.
8. Select the virtual machine DRS group (management virtual machines VM group) and the host DRS group to which the rule
applies.
9. Select the Should run on hosts in group check box.
10. Click OK to save the rule.

Deploying and configuring the PowerFlex management platform using Linux KVM

Use the following procedures to deploy and configure the PowerFlex management platform using Linux KVM.

Related information
Deployment requirements


Deploy the PowerFlex management platform VM


Use this procedure to set up the platform management virtual machines on Linux KVM.

Steps
1. Log in to the KVM server.
2. Copy the management eSLES QCOW image to KVM server.
3. Open terminal, type: virt-manager.
4. Click File > New Virtual Machine.
5. Select Import existing disk image and click Forward.
6. Click Browse and select the eSLES QCOW image from the saved path.
7. Select operating system as Generic OS and click Forward.
8. Complete necessary changes to CPU and RAM as per requirements and click Forward.
9. Enter the VM name, select Bridge device in the network selection, enter the device name, and click Finish.
10. Repeat Steps 4 through 9 for all management VMs.

Add the PowerFlex management platform networking interfaces


Use this procedure to complete the steps needed to configure the networking for the PowerFlex management platform VMs.

About this task


The following network adapters need to be configured on the management virtual machines:

Network Adapter # VLAN Network Interface


1 flex-node-mgmt-105 eth0
2 flex-oob-mgmt-101 eth1
3 flex-data1 eth2
4 flex-data2 eth3
5 (if required) flex-data3 (if required) eth4 (if required)
6 (if required) flex-data4 (if required) eth5 (if required)

Configure the PowerFlex management platform networking interfaces


Use this procedure to complete the steps needed to configure the networking files for the PowerFlex management platform
cluster VMs.

Steps
1. Launch the web console from the Virtual Machine Manager. Log in as delladmin.
2. Configure the flex-node-mgmt-<vlanid> eth0 interface:
a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth0

DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>

For example:

DEVICE=eth0
NAME=eth0


STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.12
NETMASK=255.255.255.0

b. Configure the default route. Type: sudo vi /etc/sysconfig/network/routes.


default <gateway ip> - <interface>
For example:
default 10.10.10.1 - eth0
c. Configure DNS search and DNS servers. Type: sudo vi /etc/sysconfig/network/config and modify the
following:
NETCONFIG_DNS_STATIC_SEARCHLIST="<search domain>"
NETCONFIG_DNS_STATIC_SERVERS="< dns_ip1 dns_ip2>"
For example:
NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
NETCONFIG_DNS_STATIC_SERVERS=" 10.10.10.240 10.10.10.241"
3. Configure the flex-oob-mgmt-<vlanid> eth1 interface:
a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth1.

DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>

For example:

DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.9.11
NETMASK=255.255.255.0

4. Configure the flex-data1-<vlanid> eth2 interface:


a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth2.

DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>

For example:

DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=192.168.151.10
NETMASK=255.255.255.0

5. Configure the flex-data2-<vlanid> eth3 interface:


a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth3.

DEVICE=eth3
NAME=eth3
STARTMODE=auto
BOOTPROTO=static


IPADDR=<ip address>
NETMASK=<mask>

For example:

DEVICE=eth3
NAME=eth3
STARTMODE=auto
BOOTPROTO=static
IPADDR=192.168.152.10
NETMASK=255.255.255.0

NOTE: Configure flex-data3-<vlanid> and flex-data4-<vlanid>, if required.

6. Repeat steps 1 through 5 for all PowerFlex management platform VMs.

Configure the NTP service


Configure the NTP services for the PowerFlex management platform VMs.

Steps
1. To configure the NTP server, type sudo vi /etc/chrony.conf and add the following: server <ntp ip address>
iburst.
For example: server 10.10.10.240 iburst
2. To enable chronyd, type sudo systemctl enable chronyd.
3. To reboot, type sudo reboot.
4. Log in as delladmin.
5. To check that the server is synced to NTP, type chronyc tracking.

NOTE: This may take a few minutes to sync. Examples of chronyc tracking status are as follows:

Successful chronyc tracking status: a configured NTP server should be listed under Reference ID, Stratum should be a value greater than 0, and System time should report much less than 1 second of time difference:

node1:~ # chronyc tracking
Reference ID    : 64400012 (0.pool.ntp.org)
Stratum         : 4
Ref time (UTC)  : Mon Mar 07 17:15:39 2022
System time     : 0.000001647 seconds slow of NTP time
Last offset     : -0.000006101 seconds
RMS offset      : 0.000020258 seconds
Frequency       : 33.247 ppm slow
Residual freq   : -0.000 ppm
Skew            : 0.016 ppm
Root delay      : 0.040418178 seconds
Root dispersion : 0.033837218 seconds
Update interval : 1024.6 seconds
Leap status     : Normal

Unsuccessful chronyc tracking status: Stratum 0 means the server is not synced and Leap status reports Not synchronized:

node1:~ # chronyc tracking
Reference ID    : 00000000 ()
Stratum         : 0
Ref time (UTC)  : Thu Jan 01 00:00:00 1970
System time     : 0.000000000 seconds fast of NTP time
Last offset     : +0.000000000 seconds
RMS offset      : 0.000000000 seconds
Frequency       : 0.000 ppm slow
Residual freq   : +0.000 ppm
Skew            : 0.000 ppm
Root delay      : 1.000000000 seconds
Root dispersion : 1.000000000 seconds
Update interval : 0.0 seconds
Leap status     : Not synchronized

6. Repeat these steps for all PowerFlex management platform VMs.


Deploy the PowerFlex management platform cluster


Use this task to complete the steps needed to configure the PFMP_Config.json file which is used for the configuration and
deployment of the PowerFlex management platform cluster.

Prerequisites
The following locations contain log files for troubleshooting:
● PowerFlex management platform installer logs: /opt/dell/pfmp/PFMP_Installer/logs
● Platform installer logs: /opt/dell/pfmp/atlantic/logs/bedrock.log
This table describes the PFMP_Config.json and its configuration parameters:

Key: Nodes (three nodes/VMs recommended)
Value: hostname and IP address for each node
Description: The IP addresses of the nodes on which the PowerFlex management platform cluster will be deployed, along with the hostnames that need to be used for these nodes. Ensure the hostnames are in lowercase.

Key: ClusterReservedIPPoolCIDR
Value: IP subnet (a /23 network is recommended)
Description: A private subnet that does not conflict with any of the subnets in the data center. The same subnet can be used for multiple PowerFlex management platform deployments.

Key: ServiceReservedIPPoolCIDR
Value: IP subnet (a /23 network is recommended)
Description: A private subnet that does not conflict with any of the subnets in the data center. The same subnet can be used for multiple PowerFlex management platform deployments.

Key: RoutableIPPoolCIDR
Value: IP ranges for flex-node-mgmt-<vlanid>, flex-oob-mgmt-<vlanid>, flex-data1-<vlanid>, flex-data2-<vlanid>, flex-data3-<vlanid> (if required), and flex-data4-<vlanid> (if required)
Description: IP requirements (number of IP addresses):
● flex-node-mgmt-<vlanid> (5)
● flex-oob-mgmt-<vlanid> (5)
● flex-data1-<vlanid> (5)
● flex-data2-<vlanid> (5)
● flex-data3-<vlanid> (5) (if required)
● flex-data4-<vlanid> (5) (if required)
The IP addresses are used for the following services:
● Ingress (web server) IP address
● NFS IP address
● SNMP IP address (receiving SNMP traps)
● Support Assist Remote
● Additional IP address for NFS services (used only during upgrade)

Key: PFMPHostname
Value: FQDN
Description: The FQDN or hostname that a user can use to connect to the PowerFlex management platform UI through a browser. The FQDN/hostname must be resolvable through DNS or a hosts file (on the system where the browser is running).

Key: PFMPHostIP
Value: IP address
Description: This parameter specifies the ingress IP address.
NOTE: The ingress IP address must be selected from one of the IP addresses specified in the flex-node-mgmt-<vlanid> IP service range.

Steps
1. To SSH as non-root user to the PowerFlex management platform Installer, run the following command: ssh
delladmin@<pfmp installer ip>.
2. To navigate to the config directory, run the following command: cd /opt/dell/pfmp/PFMP_Installer/config
For example: cd /opt/dell/pfmp/PFMP_Installer/config/
3. To configure the PFMP_Config.json, run the following command: sudo vi PFMP_Config.json and update the
configuration parameters.
For example:

{
"Nodes":
[
{
"hostname": "pfmp-mgmt-01",
"ipaddress": "10.10.10.01"
},
{
"hostname": "pfmp-mgmt-02",
"ipaddress": "10.10.10.02"
},
{
"hostname": "pfmp-mgmt-03",
"ipaddress": "10.10.10.03"
}
],

"ClusterReservedIPPoolCIDR" : "10.42.0.0/23",

"ServiceReservedIPPoolCIDR" : "10.43.0.0/23",

"RoutableIPPoolCIDR" : [{"flex-node-mgmt-<vlanid>":"10.10.10.20-10.10.10.24"},
{"flex-oob-mgmt-<vlanid>":"10.10.20.20-10.10.20.24"},

{"flex-data1-<vlanid>”:”192.168.151.20-192.169.151.24"},

{"flex-data2-<vlanid>”:”192.168.152.20-192.168.152.24"}],

"PFMPHostname" : "dellpowerflex.com",

“PFMPHostIP” : “10.10.10.20”

NOTE: PFMPHostIP has to be in the range specified for flex-node-mgmt-<vlanid> in "RoutableIPPoolCIDR", for example "10.10.10.20-10.10.10.24".

NOTE: If the customer is planning for four data networks, add flex-data3-<vlanid> and flex-data4-<vlanid> also in
PFMP_Config.json file.

4. Navigate to the scripts folder in the working directory, type: cd /opt/dell/pfmp/PFMP_Installer/scripts


5. To set up the installer, run the following command: sudo ./setup_installer.sh.
6. To install the PowerFlex management platform, run the following command: sudo ./install_PFMP.sh. Enter the sudo user and password for the target nodes when prompted, and enter Y when asked Are passwords the same for all the cluster nodes?.
7. Once the installation is complete, log in to PowerFlex Manager using your username and password.
If the PowerFlex management platform installer fails, the relevant errors are located in /opt/dell/pfmp/atlantic/logs/bedrock.log.


Remove the PowerFlex management platform installer VM


Use this procedure to remove the PowerFlex management platform installer VM.

Steps
1. Log in to the KVM server.
2. Connect to the virt-manager.
3. Select the installer VM to be deleted.
4. Right-click and select Delete.
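Alternatively, if virt-manager is not convenient, the installer VM can usually be removed from the KVM server command line with the standard virsh utility. This is a generic sketch, not a documented requirement; the VM name pfmp-installer is a placeholder, so confirm the actual name first:

virsh list --all                                    # identify the installer VM name
virsh destroy pfmp-installer                        # force power-off if the VM is still running
virsh undefine pfmp-installer --remove-all-storage  # remove the VM definition and its storage volumes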


8
Configuring PowerFlex Manager
Use the procedures in this section to configure PowerFlex Manager.

Log in to PowerFlex Manager


Log in to PowerFlex Manager to continue configuration activities and deploy storage resources.

Prerequisites
● Ensure that you have access to a web browser that has network connectivity with PowerFlex Manager.
● Ensure that you know the address that was configured for accessing PowerFlex Manager. This address was configured as
PFMPHostname in the JSON file during PowerFlex Manager installation. The default address is dellpowerflex.com. The
FQDN or hostname must be resolvable through DNS or through a hosts file on the system running the browser (see the
example after this list).
● Prepare a new password for accessing PowerFlex Manager. Admin123! cannot be used. Password rules are:
○ Contains fewer than 32 characters
○ Contains only alphanumeric and punctuation characters
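For example, if the FQDN is not registered in DNS, a minimal hosts file entry such as the following can be used on the system running the browser (assuming the ingress IP address and default hostname from the configuration example earlier in this guide; on Linux the file is /etc/hosts, on Windows it is C:\Windows\System32\drivers\etc\hosts):

10.10.10.20    dellpowerflex.com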

About this task


After installing PowerFlex Manager, the next step is to log in to PowerFlex Manager using a web browser. When you log in for the
first time, you must set a user name and a password. You are then presented with a wizard that guides you through the initial
configuration activities.

Steps
1. Point your browser to the address configured for PowerFlex Manager.
The PowerFlex Manager login page is displayed.
2. Enter your new password in the New Password box, and enter it again in the Confirm Password box.
3. Make a note of the password for future use.
This password is for the SuperUser who is performing the initial configuration activities. Additional users and passwords can
be configured later in the process.
4. Click Submit.
You are now logged into the system. The initial setup wizard is displayed. Proceed with initial configuration activities, guided
by this wizard.

Perform the initial setup


The first time you log in to PowerFlex Manager, you are prompted with an Initial Configuration wizard, which prompts you to
configure the basic settings that are required to start using PowerFlex Manager.
Before you begin, have the following information available:
● SupportAssist configuration details
● Information about whether you intend to use an Intelligent Catalog (IC)
● Information about the type of installation you want to perform, including details about your existing PowerFlex instance, if
you intend to import from another PowerFlex instance
To configure the basic settings:
1. On the Welcome page, read the instructions and click Next.
2. On the SupportAssist page, optionally enable SupportAssist and specify SupportAssist connection settings, and click Next.
3. On the Compliance page, indicate whether you are planning to use an Intelligent Catalog (IC), and click Next.
4. On the Installation Type page, specify whether you want to deploy a new instance of PowerFlex or import an existing
instance, and click Next.


5. On the Summary page, verify all settings for SupportAssist, compliance, and installation type. Click Finish to complete the
initial setup.
After completing the initial setup, you can begin to configure PowerFlex Manager and deploy resources from the Getting
Started page.

Enable SupportAssist
SupportAssist is a secure support technology for the data center. You can enable SupportAssist as part of the initial
configuration wizard. Alternatively, you can enable it later by adding it as a destination to a notification policy in Events and
Alerts.

About this task


Depending on your service agreement, SupportAssist provides the following features:
● Automated issue detection - SupportAssist monitors your Dell Technologies devices and automatically detects hardware
issues, both proactively and predictively.
● Automated case creation - When an issue is detected, SupportAssist automatically opens a support case with Dell
Technologies Technical Support.
● Automated diagnostic collection - SupportAssist automatically collects system state information from your devices and
uploads it securely to Dell Technologies. Dell Technologies Technical Support uses this information to troubleshoot the issue.
● Proactive contact - A Dell Technologies Technical Support agent contacts you about the support case and helps you resolve
the issue.
The first time that you access the initial configuration wizard, the connection status displays as not configured.

Prerequisites
Ensure you have the details about your SupportAssist configuration.

Steps
1. Click Enable SupportAssist.
2. From the Connection Type tab, there are two options:

If you want to: Do the following:


Connect directly to Dell Click Connect Directly.
Technologies.
Connect to a SupportAssist Click Connect via Gateway Server and provide the following details:
gateway server that connects ● The IP address and port number that is assigned to the primary and secondary
to Dell Technologies by remote gateway servers. The port number is auto populated to 9443.
access. ● Optionally, click Add Gateway to include an additional gateway server. The
maximum number of gateways is eight.
● If your environment has a proxy server, enter the IP address, port number,
username, and password for the proxy server.
● Enter the access key and PIN for authentication to SupportAssist.
● Optionally, click Test connection to test the connection status.
● Optionally, click Send Test Alert to send a test alert.
3. From the Support Contacts tab, identify the primary support contact by providing their first name, last name, email, and
phone details. Optionally, you can add up to two additional support contacts.
4. From the Device Registration tab, register your device by completing the following steps:
a. Select the type of device registration from the Type menu.
b. Depending on the device registration type, you may have to enter information:

Device registration type: PowerFlex rack or PowerFlex appliance
Information required:
● Enterprise License Management Systems (ELMS) Software Unique ID (eSWUID)
● Solution serial number
● Site ID

Device registration type: PowerFlex software
Information required: N/A
NOTE: No information is required because the site ID is registered when the PowerFlex license is uploaded.

5. From the Customize Connection to SupportAssist tab, you can:


a. Click Connect to CloudIQ to enable Dell Technologies to send telemetry data, alerts, and analytics through
SupportAssist.
b. Click Enable Remote Support to enable authorized Dell Technologies support engineers to troubleshoot your system
remotely.
6. From the Re-authenticate SupportAssist tab, click Request a New Access Key and PIN to generate an access key and
PIN. The AccessKey Portal opens.
a. From the menu, select SERIALNUMBER.
b. Enter the serial number of your device or PowerFlex software and click Submit.
c. Click Generate New Access Key.
7. From the Re-authenticate SupportAssist tab, enter the access key and PIN.
8. Click Next.

Related information
Enabling SupportAssist

Configure the initial setup for compliance


If you want to be able to use an Intelligent Catalog (IC) to deploy and update operating systems, firmware, switches, and other
resources, you need to enable compliance as part of the initial setup.

Steps
1. Click I use RCM or IC to manage other components in my system if you want to enable the full compliance features of
PowerFlex Manager.
If you choose this option, PowerFlex Manager allows you to upload a compliance file on the Getting Started page.
2. Click I only manage PowerFlex if you only want to use PowerFlex Manager to manage PowerFlex software.
If you choose this option, PowerFlex Manager does not allow you to upload a compliance file on the Getting Started page.
If you are unsure about which option you want, select this option.

Specify the installation type


If you are importing an existing PowerFlex deployment, you can specify details about this deployment as part of the initial setup.
Alternatively, if you are deploying a new instance, skip this task.

About this task


The initial setup wizard supports three different installation workflows:
● No migration required
Deploying a new instance of PowerFlex.
● Migration from a core PowerFlex (software-only) instance that was not managed with PowerFlex Manager
In this workflow, you need to provide the IP and credentials for the PowerFlex MDM cluster
● Migration from a full PowerFlex Manager environment that had previously been used to manage a PowerFlex instance
In this workflow, you need to provide the IP and credentials for the PowerFlex Manager virtual appliance instance


Prerequisites
If you are importing an existing PowerFlex deployment that was not managed by PowerFlex Manager, make sure you have the IP
address, username, and password for the primary and secondary MDMs. If you are importing an existing PowerFlex deployment
that was managed by PowerFlex Manager, make sure you have the IP address, username, and password for the PowerFlex
Manager virtual appliance.

Steps
1. Click one of the following options:

Option Description
I want to deploy a new instance of PowerFlex If you do not have an existing PowerFlex deployment and would
like to bypass the import step.
I have a PowerFlex instance to import If you would like to import an existing PowerFlex instance that
was not managed by PowerFlex Manager
Provide the following details about the existing PowerFlex
instance:
● IP addresses for the primary and secondary MDMs (separated
by a comma with no spaces)
● Admin username and password for the primary MDM
● Operating system username and password for the primary
MDM
● LIA password
I have a PowerFlex instance managed by If you would like to import an existing PowerFlex instance directly
PowerFlex Manager to import from an existing PowerFlex Manager virtual appliance.
Provide the following details about the existing PowerFlex
Manager virtual appliance:
● IP address or DNS name for the virtual appliance
● Username and password for the virtual appliance

2. Click Next to proceed.

Results
For a full PowerFlex Manager migration, the import process backs up and restores information from the old PowerFlex Manager
virtual appliance. The migration process for the full PowerFlex Manager workflow imports all resources, templates, and services
from a previous instance of PowerFlex Manager. The migration also connects the legacy PowerFlex gateway to the MDM
cluster, thereby enabling the Block tab in the user interface to function.
The migrated environment includes a PowerFlex gateway instance called "block-legacy-gateway". It will not include the gateway
instance for the Management Data Store ("block-legacy-gateway-mds") until you discover the PowerFlex System resource on
the Resources page.
For a software-only PowerFlex system, there will be no PowerFlex Manager information available after the migration completes.
The migrated environment will not include resources, templates, and services.

Verifying the initial setup


Before you complete the initial setup, you can review all of the settings you provided for SupportAssist, compliance, and
installation type.

Steps
1. On the Summary page, verify the settings that you configured on the previous pages.
2. To edit any information, click Back or click the corresponding page name in the left pane.
3. If you are importing an existing PowerFlex instance from PowerFlex Manager, type IMPORT POWERFLEX MANAGER.
4. If the information is correct, click Finish to complete the initial setup.


If you are importing an existing environment, PowerFlex Manager displays a message indicating that the import operation is
in progress. When the import operation is complete, PowerFlex Manager displays the Getting Started page. The steps you
can perform after the initial setup vary depending on the compliance option you selected. If you indicated that you would
only like to manage PowerFlex, the Getting Started page displays steps that are suitable for a software-only environment
that does not require the compliance step. Otherwise, the Getting Started page displays steps that are suitable for a
full-featured installation that includes the compliance step.

Next steps
If you did not migrate an existing PowerFlex environment, you now have the option to deploy a new instance of PowerFlex.
After completing the migration wizard for a full PowerFlex Manager import, you need to perform these steps:
1. On the Settings page, upload the compatibility matrix file and upload the latest repository catalog (IC), or use the software-
only catalog.
The software-only catalog is new in this release. This catalog only includes the components required for an upgrade of
PowerFlex.
2. On the Resources page, select the PowerFlex entry, and perform a non-disruptive update.
3. On the Resource Groups page, perform an IC upgrade on any migrated service that needs to be upgraded.
The migrated resource groups are initially non-compliant, because PowerFlex Manager 4.0 is running a later IC that includes
PowerFlex 4.0. These resource groups must be upgraded to the latest IC before they can be expanded or managed with
automation operations.
4. Power down the old PowerFlex Manager VM, the old PowerFlex gateway VM, and the presentation server VM.
The upgrade of the cluster to version 4.0 will cause the old PowerFlex Manager virtual appliances to stop working.
5. After validating the upgrade to version 4.0, decommission the old instances of PowerFlex Manager, the PowerFlex gateway,
and the presentation server.
Do not delete the old instances until you have had a chance to review the initial setup and confirm that the old environment
was migrated successfully.

After completing the migration wizard for a PowerFlex (software only) import, you need to perform these steps:
1. On the Settings page, upload the compatibility matrix file and confirm that you have the latest software-only catalog.
The software-only catalog is new in this release. This catalog only includes the components required for an upgrade of
PowerFlex.
2. On the Resources page, select the PowerFlex entry, and perform a non-disruptive update.
You do not need a resource group (service) to perform an upgrade of the PowerFlex environment. In addition, PowerFlex
Manager does not support Add Existing Resource Group operations for a software-only migration. If you want to be able
to perform any deployments, you will need a new resource group. Therefore, you need to create a new template (or clone a
sample template), and then deploy a new resource group from the template.

Getting started
The Getting Started page guides you through the common configurations that are required to prepare a new PowerFlex
Manager environment. A green check mark on a step indicates that you have completed the step. Only super users have access
to the Getting Started page.
The following table describes each step:

Step Description

Upload Compliance File Provide compliance file location and authentication information for use within
PowerFlex Manager. The compliance file defines the specific hardware
components and software version combinations that are tested and certified
by Dell for hyperconverged infrastructure and other Dell products. This step
enables you to choose a default compliance version for compliance or add new
compliance versions.
NOTE: Use http as the preferred loading method for the RCM.
This step is enabled after you complete the initial setup if you selected I use
RCM or IC to manage other components in my system. Otherwise, this
step is not available on the Getting Started page.
You can also click Settings > Repositories > Compliance Versions.
NOTE: Before you make an RCM or IC the default compliance version,
you first need to upload a suitable compatibility management file under
Settings > Repositories > Compatibility Management.

Define Networks Enter detailed information about the available networks in the environment.
This information is used later during deployments to configure nodes and
switches to have the right network connectivity. PowerFlex Manager uses
the defined networks in templates to specify the networks or VLANs that are
configured on nodes and switches for your resource groups.
This step is enabled immediately after you perform an initial setup for
PowerFlex Manager.
You can also click Settings > Networking > Networks.

Discover Resources Grant PowerFlex Manager access to resources (nodes, switches, virtual
machine managers) in the environment by providing the management IP and
credential for the resources to be discovered.
This step is not enabled until you define your networks.
You can also click Resources > Discover Resources.

Manage Deployed Resources (Optional) Add existing resource group for a cluster that is already deployed and manage
the resources within PowerFlex Manager.
This step is not enabled until you define your networks.
You can also click Lifecycle > Resource Groups > Add Existing Resource
Group.

Deploy Resources Create a template with requirements that must be followed during a
deployment. Templates enable you to automate the process of configuring
and deploying infrastructure and workloads. For most environments, you can
clone one of the sample templates that are provided with PowerFlex Manager
and make modifications as needed. Choose the sample template that is most
appropriate for your environment.
For example, for a hyperconverged deployment, clone one of the
hyperconverged templates.
For a two-layer deployment, clone the compute-only templates. Then clone one
of the storage templates.
This step is not enabled until you define your networks.
You can also click Lifecycle > Templates.

Manage PowerFlex License Configure licensing for PowerFlex.


You can also click Settings > License Management.

To revisit the Getting Started page, click Getting Started on the help menu.

Change your password


When you first log in to PowerFlex Manager, you need to set your password. You can also change your password at any time
after the first login.


Steps
1. Click the user icon in the upper right corner of PowerFlex Manager.
2. Click Change password.
3. Type the password in the New Password field.
4. Type the password again in the Verify Password field.
5. Click Apply.

Configuring the PowerFlex Manager settings


NOTE: Skip this section if the PowerFlex Manager settings are already configured as part of Getting Started.

Configure repositories
Use this section to configure the repositories.

About this task


The Intelligent Catalog (IC) includes the operating system images (the embedded operating system based on SUSE Linux) required
for deploying storage-only and PowerFlex file resource groups, so there is no need to upload them separately. However, the ESXi
image is not part of the IC, so you must upload it separately to deploy ESXi-based hyperconverged and compute-only resource groups.

Configure compliance versions


Use the Compliance Versions tab to load compliance versions and specify a default version for compliance.

About this task


PowerFlex Manager shows any deviation from the baseline in the compliance status of the resources. You can use PowerFlex
Manager to initiate updates to bring the resources to a compliant state. To facilitate upgrades of the PowerFlex Manager virtual
appliance, and Intelligent Catalog (IC), PowerFlex Manager steers you toward valid upgrade paths. If the current version of
the software is incompatible with the target version, when you attempt an upgrade, PowerFlex Manager displays a warning.
PowerFlex Manager also warns you if any of the IC versions that are loaded on the virtual appliance are incompatible with the
target compliance versions.

Steps
1. On the menu bar, select Settings and choose Repositories.
2. Select Compliance Version and click Add.
3. In the Add Compliance File dialog, select one of the following options:
a. Download from Secure Connect Gateway (SCG) - Select this option to import the compliance file that contains the
firmware bundles you need. (SupportAssist)
b. Download from local network path - Select this option to download the compliance file from an NFS or CIFS file share.
4. Optionally, set this compliance file as the default by choosing Make this default version for compliance checking. and
click Save.
5. PowerFlex Manager takes some time to unpack the packages from the compliance bundle.
a. If you attempt to add an unsigned compliance file, the compliance file state displays as Needs Approval. You can choose to
do either of the following from the Available Actions drop-down menu:
● Allow Unsigned File - Select this option to allow PowerFlex Manager to use the unsigned compliance file. The
compliance file then moves to an Available state.
● Delete - Select this option to remove the unsigned compliance file.


Configure compatibility management

About this task


PowerFlex Manager uses information that is provided in the Dell PowerFlex Appliance with PowerFlex 4.x Compatibility Matrix
file to determine the valid paths. The Dell PowerFlex Appliance with PowerFlex 4.x Compatibility Matrix file maps all the known
valid and invalid paths for all previous releases of the software.

Steps
1. From the menu, select Settings > Repositories > Compatibility management.
2. Click Edit Settings.
3. In the Compatibility Management dialog, select one of the following options:
a. Download from the configured Dell Technologies SupportAssist.
b. Upload from the local system. Click Choose File to select the GPG file.
4. Click Save.

Configure OS image repositories

About this task


NOTE: Optionally, you can select a custom operating system image that will be used for future add node operations on the
resource group.

Prerequisites
Upload OS images only when deploying ESXi-based hyperconverged and compute-only resource groups. The customer can also
upload supported Linux-based OS images for deploying compute-only resource groups.

Steps
1. From the menu, click Settings > Repositories > OS Images.
2. Click Add.
3. In the Add OS image Repository dialog, enter the following:
a. For Repository Name, enter the name of the repository.
The repository name must be unique and case insensitive.
b. For Image Type, enter the image type.
c. For Source Path and Filename, enter the path of the OS image file name in a file share.
● To enter the CIFS share, use the following format example: \\host\lab\isos\filename.iso
● To enter the NFS share, use the following format example: Host:/var/nfs/filename.iso
d. If you are using the CIFS share, enter the Username and Password to access the share.
4. Click Add.

Configuring networking
Adding the details of an existing network enables PowerFlex Manager to automatically configure nodes that are connected to
the network.

Define a network

Steps
1. On the menu bar, click Settings > Networking and click Networks.


2. Click Define.
3. In the Name field, enter the name of the network. Optionally, in the Description field, enter a description for the network.
4. From the Network Type drop-down, select one of the following network types:
● General purpose LAN
● Hypervisor management
● Hypervisor migration
● Hardware management
● PowerFlex data
● PowerFlex data (client traffic only)
● PowerFlex data (server traffic only)
● PowerFlex replication
● PowerFlex management
NOTE:
● For a PowerFlex configuration that uses a hyperconverged architecture with two or four data networks, you typically
have two or four networks that are defined with the PowerFlex data network type.
● The PowerFlex data network type supports both client and server communications and is used with hyperconverged
resource groups.
● For a PowerFlex configuration that uses a two-layer architecture with four dedicated data networks, you typically
have two PowerFlex data (client traffic only) VLANs and two PowerFlex data (server traffic only) VLANs. These network
types are used with storage-only and compute-only resource groups.

5. In the VLAN ID field, enter a VLAN ID between 1 and 4094.


NOTE: PowerFlex Manager uses the VLAN ID to configure I/O modules to enable network traffic to flow from the node
to configured networks during deployment.

6. Optionally, select the Configure Static IP Address Ranges check box, and do the following:
a. In the Subnet box, enter the IP address for the subnet. The subnet is used to support static routes for data and
replication networks.
b. In the Subnet Mask box, enter the subnet mask.
c. In the Gateway box, enter the default gateway IP address for routing network traffic.
d. Optionally, in the Primary DNS and Secondary DNS fields, enter the IP addresses of primary DNS and secondary DNS.
e. Optionally, in the DNS Suffix field, enter the DNS suffix to append for hostname resolution.
f. To add an IP address range, click Add IP Address Range. In the row, indicate the role in PowerFlex nodes for the IP
address range and then specify a starting and ending IP address for the range. For the Role, select either:
● Server or Client: Default; range is assigned to the server and client roles.
● Client Only: Range is assigned to the client role on PowerFlex hyperconverged nodes and PowerFlex compute-only
nodes.
● Server Only: Range is assigned to the server role on PowerFlex hyperconverged nodes and PowerFlex storage-only
nodes.
NOTE: The Configure Static IP Address Ranges check box is not available for all network types. For example,
you cannot configure a static IP address range for the operating system Installation network type. You cannot select
or clear this check box to configure static IP address pools after a network is created.

7. Click Save.
8. If replicating the network, repeat steps 1 through 7 to add the remote replication networks.

Edit a network
If a network is not associated with a template or resource group, you can edit the network name, the VLAN ID, or the IP address
range.

Steps
1. On the menu bar, click Settings > Networking and click Networks.
2. Select the network that you want to modify, and click Modify.
3. Edit the information in any of the following fields: Name, VLAN ID, IP Address Range.


For a PowerFlex data or replication network, you can specify a subnet IP address for a static route configuration. The subnet
is used to support static routes for data and replication networks.
4. Click Save.

Delete a network
You cannot delete a network that is associated with a template or resource group.

Steps
1. On the menu bar, click Settings > Networking and click Networks.
2. Click the network that you want to delete, and click Delete.
3. Click Yes when the confirmation message is displayed.

Configure license management


Use this task to configure the license management.

About this task


PowerFlex Manager includes a default trial/evaluation license that is valid for 90 days. After 90 days, a production license must be
installed; otherwise, PowerFlex Manager displays an error message and no operations can be performed. A single license format
covers both PowerFlex Manager and PowerFlex. The license is capacity based with no expiration date. Once the licensed capacity
is reached, a new license with more capacity must be uploaded.

NOTE: New license capacity is the aggregate of the old capacity with newly purchased capacity.

Steps
1. To upload the PowerFlex license:
a. Click Settings and License Management from the left pane.
b. Select PowerFlex License and, under Production License, click Choose File.
c. Browse and select the license to upload and click Open.
d. Click Save.
2. To upload CloudLink license:
a. Click Other Software Licenses.
b. Click ADD, and the Add Software License dialog box opens.
c. Under Upload License, select Choose License.
d. Browse and select the license to upload and click Open.
e. Select the Type as CloudLink and click Save.

Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address or hostname and credential for each discoverable resource.

About this task


Dell Technologies recommends using separate operating system credentials for SVM and VMware ESXi. For information about
creating or updating credentials in PowerFlex Manager, click Settings > Credentials Management and access the Online
Help.
NOTE: The gateway is container based and is automatically discovered on the Resources page. The Resources page shows:
● Powerflex - PowerFlex Gateway
● powerflex-mds - PowerFlex system
● powerflex-file - PowerFlex file


NOTE: The powerflex-mds (PowerFlex system) will not be available until you complete the Add the PowerFlex system as
a resource section.
During node discovery, you can configure the iDRAC nodes to automatically send alerts to PowerFlex Manager. If the PowerFlex
nodes are not configured for alert connector, SupportAssist does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:

Resource type Resource state Example


Element manager Managed CloudLink Center IP address
Nodes (hardware/software management) Managed PowerEdge iDRAC management IP address
If you want to perform firmware updates or deployments on a discovered node, ensure that you change the default
state to Managed. Perform firmware or catalog updates from the Services page, not the Resources page.
Switch Managed Switch management IP address
VM manager Managed vCenter IP address

NOTE: The gateways are container based and automatically discovered.

Prerequisites
Ensure you gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use the name-based searches to discover a range of nodes that were
assigned the IP addresses through DHCP to iDRAC. For more information about this feature, see Dell PowerFlex 4.0.x
Administration Guide.

Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Select IP address option or Hostname option. Enter the IP address/hostname of the resource in the IP address/hostname
range field.
● To discover one or more nodes by IP address, select IP address and provide a starting and ending IP address.
● To discover one or more nodes by hostname, select hostname and identify the nodes to discover in one of the following
ways:
○ Enter the fully qualified domain name (FQDN) with a domain suffix.
○ Enter the FQDN without a domain suffix
If you use a variable, you must provide a start number and end number for the hostname search.
5. In the Resource State list, select Managed, Unmanaged or Reserved.

Option Description
Managed ● Select this option to monitor the firmware version compliance, upgrade firmware,
and deploy resource groups on the discovered resources. A managed state is the
default option for the switch, VMware vCenter, element manager, and PowerFlex
Gateway resource types.
● Resource state must be set to Managed for PowerFlex Manager to send alerts to
Secure Connect Gateway.
Unmanaged ● Select this option to monitor the health status of a device and the firmware version
compliance only. The discovered resources are not available for a firmware upgrade
or deploying resource groups by PowerFlex Manager. This is the default option for
the node resource type.
● If you did not upload a license in the Initial Setup wizard, PowerFlex Manager is
configured for monitoring and alerting only. In this case, Unmanaged is the only
option available.

Reserved ● Select this option to monitor firmware version compliance and upgrade firmware. The
discovered resources are not available for deploying resource groups by PowerFlex
Manager.

6. For a PowerFlex node, to discover resources into a selected node pool instead of the global (default), select the node pool
from the Discover into Node Pool list. To create a node pool, click + to the right of the Discover into Node Pool.
7. Select the appropriate credential from the Credentials list. To create a credential, click + to the right of Credentials.
PowerFlex Manager maps the credential type to the type of resource you are discovering.
8. For a PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
9. For a PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery.
11. On the Discovered Resources page, select the resources from which you want to collect inventory data and click Finish.
The discovered resources are listed on the Resources page.
NOTE: The gateway is container based. It is automatically discovered on the Resource page:
● Powerflex - PowerFlex Gateway
● powerflex-mds - PowerFlex system
● powerflex-file - PowerFlex file

Upgrading the switch software


This section describes how to upgrade access switches using PowerFlex Manager. If the switches are running with the latest
supported software based on the IC or already upgraded using manual procedures, skip these tasks.

Creating or cloning a template


Use this section to clone an existing template or create a new template.

Clone an existing template


For most environments, you can clone one of the sample templates that are provided with PowerFlex Manager and modify as
needed.

About this task


Choose the sample template that is most appropriate for the environment. For example, for a hyperconverged deployment,
clone the PowerFlex hyperconverged nodes template. For a two-layer deployment, clone the Compute Only - VMware ESXi
template and then clone one of the storage templates. If deploying a PowerFlex storage-only node, PowerFlex compute-only
node, and PowerFlex hyperconverged node, you must create or clone three templates.
PowerFlex Manager can deploy CloudLink Center. Clone the appropriate sample template to deploy it. For CloudLink Center,
clone the Management - CloudLink Center template. Ensure the following:
● The template that you first deploy depends on whether you want primary MDMs on storage-only or PowerFlex
hyperconverged nodes.
● If deploying storage-only and compute-only nodes, deploy the storage-only template first.

Steps
1. If your system will be configured with CloudLink encryption, deploy CloudLink Center VM.
NOTE: Before deploying the PowerFlex hyperconverged node or PowerFlex storage-only node template, you must
deploy the CloudLink VM if you choose encryption enabled service.


2. On the PowerFlex Manager menu, click Lifecycle > Templates.


3. Select the applicable template from the sample template list and click View Details.
Example list of a few sample templates:
● For a hyperconverged node, choose Hyperconverged.
● For a storage-only node, choose Storage-only.
● For a compute-only node, choose Compute-only.
● For a CloudLink enabled storage-only node, choose Storage-only with encryption.
4. From Sample Templates, select the template to be cloned. Click View Details, then click More Actions and select
Clone.
5. On the Template Information page, provide the following information:
a. Enter a Template Name.
b. From the Template Category list, select a category. To create a category, select Create New Category from the list.
c. Enter a Template Description (optional).
d. Specify the version to use for compliance by selecting it from the Firmware and Software Compliance list or choose
Use PowerFlex Manager appliance default catalog.
e. Specify the service permissions for the template under Who should have access to the service deployed from this
template? by performing one of the following actions:
● Restrict access to Only PowerFlex SuperUser.
● Grant access to PowerFlex SuperUser and Specific Lifecycle Admin and DriveReplacer.
● Click Add User(s) to add standard users to the list.
● Grant access to PowerFlex SuperUser and All Lifecycle Admin and DriveReplacer.
6. Click Next.
7. On the Additional Settings page, select the appropriate values for the Network Settings, OS Settings, Cluster
Settings, PowerFlex Gateway Settings, and Node Pool Settings.
8. Click Validate Settings to validate that the list of nodes matches the network configuration parameters.
The list of nodes is filtered according to the target boot device and NIC type settings specified.
When you enable the PowerFlex settings for the node, the Validate Settings page filters the list of nodes according to the
supported storage types (NVMe, All Flash, and HDD). Within the section for each storage type, the nodes are also sorted by
health, with the healthy (green) nodes displayed first and the critical (red) nodes displayed last.

9. Click Finish.

Create a template
Create a template with requirements to follow during deployment.

About this task


The create feature allows you to create a template, clone the components of an existing template into a new template, or import
a pre-existing template.
For most environments, you can simply clone one of the sample templates that are provided with PowerFlex Manager and edit
as needed. Choose the sample template that is most appropriate for your environment.

Steps
1. On the menu bar, click Lifecycle > Templates.
2. On the Templates page, click Create.
3. In the Create dialog box, select one of the following options:
● Clone an existing PowerFlex Manager template
● Upload External Template
● Create a new template
If you select Clone an existing PowerFlex Manager template, select the Category and the Template to be cloned.
The components of the selected template are in the new template.
● For software-only block storage, ensure that you select a template that includes "SW Only" in the name.
● For software-only file storage, ensure that you select a template that includes "File-SW Only" in the name.


4. Enter a Template Name.


5. From the Template Category list, select a template category. To create a category, select Create New Category from the
list.
6. Enter a Template Description (optional).
7. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or choose
Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, since it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does
not show any minimal compliance versions in the Firmware and Software Compliance list.
NOTE: Changing the compliance version might update the firmware level on nodes for this resource group. Firmware on
shared devices will still be maintained by the global default firmware repository.

8. Specify the resource group permissions for this template under Who should have access to the resource group deployed
from this template by performing one of the following actions:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
○ Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users.

Publish a template
Use this procedure to publish the template.

Steps
1. On the Templates page, perform the following steps to modify a component type to the template:
a. Select Node Component and click Modify.
If you select a template from Sample templates, PowerFlex Manager selects the default number of PowerFlex nodes
for deployment.
b. Under Related Components, perform one of the following actions:
● To associate the component with all existing components, click Associate All.
● To associate the component with selected components, click Associate Selected and then select the components to
associate.
Based on the component type, specific required settings and properties appear automatically. You can edit components
as needed.
c. Click Continue. Ensure that appropriate values are available for all the settings.
d. Click Validate Settings.
PowerFlex Manager lists the identified resources that are valid or invalid (if any) based on the settings specified in the
template.
e. Click Save.
The cluster component is available only for hyperconverged (HC) and compute-only (CO) ESXi-based templates. If it is not yet configured, complete step 2.
2. Select Cluster Component and click Modify.
If you select a template from Sample templates, PowerFlex Manager selects the default component name VMware Cluster.
a. Under Related Components, perform one of the following actions:
● To associate the component with all existing components, click Associate All.
● To associate the component with selected components, click Associate Selected and then select the components
to associate.
Based on the component type, specific required settings and properties appear automatically. You can edit components
as needed.
b. Ensure that the cluster settings are populated with the correct details.
c. To configure the vSphere distributed switch settings, click Configure VDS Settings.
Under VDS Port Group Configuration, perform one of the following actions
i. Click User Entered Port Groups and click Next:
● Under VDS Naming, provide the name for each VDS. For each VDS, click Create VDS and type the VDS name
and click Next.


● On the Port Group Select page, for each VDS, click Create Port Group and type the port group name. Initially,
the port group name defaults to the name of the network, but you can type over the default to suit for your
requirements. Alternatively, you can click Select and choose an existing port group.
● Click Next.
ii. Click Auto Create All Port Groups and click Next.
NOTE: PowerFlex Manager determines the VDS order based on the following criteria: PowerFlex Manager first
considers the number of port groups on each VDS. Then, PowerFlex Manager considers whether a management
port group is present on a particular VDS. PowerFlex Manager considers the network type for port groups on a
VDS by performing lifecycle operations for a resource group.
PowerFlex Manager considers the network name for port groups on a VDS.
d. On the VDS Naming page, provide the name for each VDS.
e. For each VDS, click Create VDS and type the VDS name. Click Next.
f. On the Port Group Select page, review the port group names automatically assigned for the networks
g. Click Next.
h. On Advanced Networking, select the MTU Selection as per the LCS. Click Next.
i. On the Summary Page, verify all the details and click Finish.
j. Click Save.
3. Ensure that the components do not have any warnings or errors.
4. Click Publish Template.
Ensure that there are no warnings in the template. The template remains in draft state until it is published, and a template
must be published before it can be deployed.
After publishing a template, you can use the template to deploy a service. For more information, see the PowerFlex Manager
online help.

Deploy CloudLink Center with PowerFlex Manager


PowerFlex Manager supports up to three instances of CloudLink Center; however, two instances are recommended.

About this task

NOTE: Skip this task if the deployment type is without CloudLink (encryption).

Prerequisites
● Ensure hypervisor management or PowerFlex Manager management networks are added on the PowerFlex Manager
Networks page.
● The latest, valid release IC should be uploaded and be in the Available state.
● A VMware vCenter with a valid data center, cluster, network (matching with the network from the first item), and datastore
should be discovered in the Resources page.

Steps
1. For a CloudLink Center deployment, clone the Management - CloudLink Center from the sample template.
2. Select on View Details > More Actions > Clone.
3. In the Clone Template wizard, complete the following:
a. Enter a Template name.
b. From the Template Category list, select a template category.
To create a category, select Create New Category from the list.
c. Optionally, enter a Template Description.
d. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
e. Specify the resource group permissions for this template under Who should have access to the resource group
deployed from this template:
i. To restrict access to administrators, select Only PowerFlex SuperUser.
ii. To grant access to administrators and specific standard users, select PowerFlex SuperUser and Specific Lifecycle
Admin and Drive Replacer and perform the following steps:


● Click Add User(s) to add one or more standard or operator users to the list.
● To remove a standard or operator user from the list, select the user and click Remove User(s).
● After adding the standard or operator users, select or clear the check box next to the users to grant or block
access to this template.
iii. To grant access to administrators and all standard users, select PowerFlex SuperUser and All Lifecycle
Admin and Drive Replacer.
f. Click Next.
4. From the Additional Settings page:
a. Under Network Settings, select Hypervisor Network (PowerFlex management network).
b. Under OS Settings, select CLC credential or create a credential with root or CloudLink user by clicking +.
c. Under Cloudlink Settings, select the Secadmin credential from the list or create a secadmin credential by clicking +
and do the following:
i. Enter Credential Name
ii. Enter Username as secadmin
iii. Leave the Domain empty.
iv. Enter the password for secadmin in Password and Confirm Password.
v. Select V2 in SNMP Type and click Save.
d. Select a License File from the list based on the types of drives or select + to upload a license through the Add
Software License page.
NOTE: For SSD/NVMe drives, upload a capacity-based license. For SED drives, upload an SED-based license.

e. Under Cluster Settings, select vCenter and click Finish.


5. In the Template screen, click the cluster and select Edit.
a. Select VMware Cluster as the component and click Continue.
b. Go to Cluster Settings and select the Target Virtual Machine Manager (vCenter), Data Center, and Cluster under
the CloudLink Center that will be deployed.
c. Expand the VMware vSphere network settings and select Port group from the drop-down menu.
d. Click Save.
6. On the Template screen, click the VM component and click Edit.
a. Select CloudLink Center as the component. Select 2 for the number of instances to deploy.
b. Select VMware Cluster in Related Components. Click Continue.
c. Under VM Settings:
i. Select the datastore to associate with the CloudLink Center.
ii. Select Network in PowerFlex Manager. This option is required for IP address assignment and VLAN selection.
d. Under CloudLink Settings:
i. Select HostName selection. Specify this option at deployment time, or auto generate.
● Auto generate provides a template option to generate the name.
● Specify at deployment time expects a user manual entry to specify at deployment time.
ii. Select or create an operating system credential. The password should be 10 characters minimum, including at least one
special character.
iii. NTP is auto-populated from the Virtual Appliance Management page, or you can enter it manually.
iv. Select or create the secadmin credential.
● The username should be changed to secadmin.
● The password should be a minimum of 10 characters, with at least one special character.
v. Provide vault passwords. Only one vault password is required. The password should be 10 characters minimum,
including at least one special character.
vi. Select or upload the license file.
The license file is mandatory and must have a future expiration date.

e. Under Additional Cloudlink Settings (Optional):


i. Select the check box for Configure syslog forwarding. Select syslog as the syslog facility. This is the
recommended method.
ii. Select the check box for Configure Email Notifications.
● The server address should be an IP address
● Port: 25
● The sender address should be in email address format


● Provide the username


● Provide the password. The password should be 10 characters minimum, including at least one special character
f. Click Save.
7. Ensure that there are no warnings.
8. Publish the template.
9. Complete all fields on the Deploy service page.
10. Provide the CloudLink VM name and IP address.
11. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now - Select this option to deploy the service immediately.
● Deploy Later - Select this option and enter the date and time to deploy the service.
12. Click Finish to begin the deployment. Click Yes to confirm.
13. After the deployment is successful, the CloudLink VM is automatically discovered on the Resources page.
14. Verify the CloudLink VM discovery by logging into one of the CloudLink VMs with the secadmin credentials.

Create a VM-VM affinity rule


Use this procedure to create a VM-VM affinity rule for CloudLink VMs.

Steps
1. Log in to the vSphere Web Client and access the cluster.
2. Click the Configure tab.
3. Under Configuration, select VM/Host Rules and click Add.
4. In Create VM/Host Rule, enter a rule name.
5. From the Type menu, select Separate Virtual Machines and click Add.
6. Select both CloudLink Center VMs to which the rule will apply, and click OK.

Deploy resource groups


Deploy the resource group using the published template.

About this task


NOTE: You cannot deploy the resource group using the template in draft state. Publish the template before deploying a
resource group.

Prerequisites
Ensure that:
● The compliance version and compatibility management file are uploaded.
● The template to be used is in the published state.
● The networks are defined.
● CloudLink Center is deployed, if this is a CloudLink-based PowerFlex hyperconverged or PowerFlex storage-only deployment.

Steps
1. On the menu bar, click Lifecycle > Resource Groups and click Deploy New Resource Group.
2. The Deploy Resource Group wizard opens. On the Deploy Resource Group page, perform the following steps:
a. From the Select Published Template list, select the template to deploy a Resource Group.
b. Enter the Resource Group Name and Resource Group Description (optional) that identifies the Resource Group.
c. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
NOTE: Changing the firmware repository might update the firmware level on nodes for this resource group. The
global default firmware repository maintains the firmware on the shared devices.


d. Indicate Who should have access to the service deployed from this template by selecting one of the available options.
Click Next.
i. Grant access to Only PowerFlex Manager Administrators.
ii. To grant access to administrators and specific standard and operator users, select the PowerFlex Manager
Administrators and Specific Standard and Operator Users option, and perform the following steps
● Click Add User(s) to add one or more standard or operator users to the displayed list.
● Select which users will have access to this resource group.
● To delete a standard or operator user from the list, select the user and click Remove User(s).
● After adding the standard or operator users, select or clear the check box next to the users to
grant or block access to use this template.
iii. Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
3. On the Deployment Settings page, configure the required settings. You can override many of the settings that are
specified in the template. You must specify other settings that are not part of the template:
If you are deploying a resource group with CloudLink, ensure that the correct CloudLink Center is displayed under the
CloudLink Center settings.
a. Under PowerFlex Settings, choose one of the following options for PowerFlex MDM virtual IP address source:
i. PowerFlex Manager Selected IP instructs PowerFlex Manager to select the virtual IP addresses.
ii. User Entered IP enables you to specify the IP address manually for each PowerFlex data network that is part of the
node definition in the resource group template.
b. Under PowerFlex Cluster, to configure OS Settings, select an IP address source. To manually enter the IP address,
select User Entered IP.
c. From the IP Source list, select Manual Entry. Then enter the IP address in the Static IP Address field.
d. To configure Hardware Settings, select the node source from the Node Source list.
i. If you select Node Pool, you can view all user-defined node pools and the global pool. Standard users can see
only the pools for which they have permission. Select Retry on Failure to ensure that PowerFlex Manager selects
another node from the node pool for deployment if any node fails. Each node can be retried up to five times.
ii. If you select Manual Entry, the Choose Node list is displayed. Select the node for deployment from the list by its
Service Tag.
e. Click Next.
f. On the Schedule Deployment page, select one of the following options and click Next.
i. Deploy Now - Select this option to deploy the resource group immediately.
ii. Deploy Later - Select this option and enter the date and time to deploy the service.
g. Review the Summary page.
The Summary page gives you a preview of what the Resource group will look like after the deployment.
h. Click Finish when you are ready to begin the deployment.

Configure individual trunk with per NIC VLAN setup for storage-only nodes with a bonded management interface

Use this procedure to configure individual trunk with per NIC VLAN setup for storage-only nodes with a bonded management
interface.

About this task


This procedure documents how to set up a PowerFlex storage-only node with the new performance configuration, where an
individual PowerFlex data VLAN is set up per network interface and the management interface is shared on one of the interfaces.

NOTE: This configuration removes some of the redundancy from the PowerFlex system in exchange for raw speed.

This procedure assumes that the switches are already set up with trunks allowing the management VLAN on each port and the
data VLANs on their individual ports.


Steps
1. Log in as root.
2. Change directory into /etc/sysconfig/network-scripts, enter the following command: cd /etc/sysconfig/
network-scripts.
3. Gather the interface configuration file name, enter the following command: grep -H <data1 IP> ifcfg-p*
a. Note the file name that matches.
b. Perform the same command with the other data network IP addresses and note the filenames.
4. Set up the data networks using the data gathered in step 3.
5. Create ifcfg-<interface name>.<vlan id>, enter the following command: vi ifcfg-<interface
name>.<vlan id> and add the following:
VLAN=yes
TYPE=Vlan
PHYSDEV=<interface name>
VLAN_ID=<vlan id>
REORDER_HDR=yes
GVRP=no
MVRP=no
MTU=9000
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=<ip address>
PREFIX=<subnet in CIDR notation>
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=<interface name>.<vlan id>
DEVICE=<interface name>.<vlan id>
ONBOOT=yes
NM_CONTROLLED=no
For example, the data1 network is:
VLAN=yes
TYPE=Vlan
PHYSDEV=em1
VLAN_ID=151
REORDER_HDR=yes
GVRP=no
MVRP=no
MTU=9000
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.10.151.246
PREFIX=24
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=em1.151
DEVICE=em1.151
ONBOOT=yes
NM_CONTROLLED=no
6. Repeat step 5 for data2 network and if required, repeat for data3 and data4 networks.
7. Create the bond sub-interfaces by entering the following command: vi ifcfg-<interface name>-bond, and add the
following:
MTU=9000
TYPE=Ethernet
NAME=<interface name>-bond
DEVICE=<interface name>
ONBOOT=yes
PRIMARY=bond0.<mgmt.vlan id>
SECONDARY=yes
For example, the data1 bond primary is:
MTU=9000
TYPE=Ethernet
NAME=em1-bond
DEVICE=em1
ONBOOT=yes
PRIMARY=bond0.150
SECONDARY=yes
8. Repeat step 7 for data2 network and if required, repeat for data3 and data4 networks.
9. Create the bond interface for management by entering the following command: vi ifcfg-bond0.<mgmt.vlan id>, and add
the following:
BONDING_OPTS="ad_select=stable all_secondary_active=0 arp_all_targets=any downdelay=0
fail_over_mac=none lp_interval=1 miimon=100 min_links=0 mode=balance-alb num_grat_arp=1
num_unsol_na=1 primary_reselect=always resend_igmp=1 updelay=0 use_carrier=1
xmit_hash_policy=layer2"
TYPE=Bond
BONDING_PRIMARY=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=<mgmt.ip>
PREFIX=<subnet mask in CIDR notation>
GATEWAY=<mgmt.gateway ip>
DNS1=<dns1>
DNS2=<dns2>
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=bond0.<mgmt.vlan id>
DEVICE=bond0.<mgmt.vlan id>
ONBOOT=yes
10. Save the file.
For example:

BONDING_OPTS="ad_select=stable all_secondary_active=0 arp_all_targets=any downdelay=0


fail_over_mac=none lp_interval=1 miimon=100 min_links=0 mode=balance-alb num_grat_arp=1

122 Configuring PowerFlex Manager


Internal Use - Confidential

num_unsol_na=1 primary_reselect=always resend_igmp=1 updelay=0 use_carrier=1


xmit_hash_policy=layer2"
TYPE=Bond
BONDING_PRIMARY=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.10.150.246
PREFIX=27
GATEWAY=10.10.150.225
DNS1=10.10.10.10
DNS2=10.10.10.11
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=bond0.150
DEVICE=bond0.150
ONBOOT=yes
11. Restart the network by entering the following command: systemctl restart network.
12. Confirm connectivity on all interfaces and IP addresses.
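For example, a quick verification of the sample configuration above might look like the following. The interface names, VLAN IDs, gateway, and peer addresses are taken from or modeled on the earlier examples and will differ in your environment:

ip addr show bond0.150
ip addr show em1.151
ping -c 3 10.10.150.225
ping -c 3 <data1 peer IP address>

The first two commands confirm that the management bond and data VLAN interfaces are up with the expected IP addresses. The pings verify reachability of the management gateway (10.10.150.225 in the example) and of a peer node on the data1 network.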

Verify resource group status


Use this procedure after the resource group is successfully deployed to verify the status of the resource group on the Resource
Groups page.

Steps
1. On the menu bar, click Lifecycle > Resource Groups.
2. On the Resource Groups page, ensure that the Status and Deployment State show as Healthy.
See the following table for the various resource group states:

State Description
Healthy The resource group is successfully deployed and healthy.
Warning One or more resources in the resource group requires corrective action.
Critical The resource group is in a severely degraded or nonfunctional state and requires attention.
Pending The deployment is scheduled for a later time or date.
In Progress The resource group deployment is in progress, or has other actions currently in process, such as a node expansion or removal.
Cancelled The resource group deployment has been stopped. You can update the resources or retry the deployment, if necessary.
Incomplete The resource group is not fully functional because it has no volumes that are associated with it. Click Add Resources to add volumes.
Service Mode The resource group is in service mode.
Lifecycle Mode The resource group is in lifecycle mode. Resource groups in lifecycle mode are enabled with health and compliance monitoring, and non-disruptive upgrade features only.
Managed Mode The resource group is in managed mode. Resource groups in managed mode are enabled with health and compliance monitoring, non-disruptive upgrade, automated resource addition, and automated resource replacement features.

The Resource Groups page displays the resource groups in these states in both Tile and List view.

Supported modes for a new deployment


There are a number of modes available for a new deployment.

Supported mode Description


Lifecycle mode The service supports health and compliance monitoring, service mode, and non-disruptive
upgrades. All other service operations are blocked. Lifecycle mode controls the operations
that can be performed for configurations that have limited support.
For example, a VMware NSX-T enabled environment will be in Lifecycle mode.

Service Mode The resource group is in service mode.

Incomplete The resource group is not fully functional because it has no volumes that are associated with
it. Click Add Resources to add volumes.

Pending The deployment is scheduled for a later time or date.

Cancelled The resource group deployment has been stopped. You can update the resources or retry
the deployment, if necessary.
● Healthy - The resource group is successfully deployed and is healthy.
● Warning - One or more resources in the resource group requires corrective action.
● Critical - The resource group is in a severely degraded or nonfunctional state and
requires attention.
● In Progress - The resource group deployment is in progress, or has other actions
currently in process, such as a node expansion or removal.

Managed mode The service supports health and compliance monitoring, non-disruptive upgrades, automated
resource addition, and automated resource replacement features.
Apart from a VMware NSX-T environment, all other supported deployments would be in
managed mode regardless of full network automation or partial network automation.

Full network automation PowerFlex Manager configures the required interface port configuration on supported
access or leaf switches for downlink to the PowerFlex appliance node.

Partial network Requires manual interface port configuration on the customer-managed access or leaf
automation switches for downlink to the PowerFlex appliance node. Partial network automation uses
iDRAC virtual media for installing the operating system.


Adding the PowerFlex management service to PowerFlex Manager

Use this procedure to add the PowerFlex management controller to PowerFlex Manager.

Gather PowerFlex system information


Gather the PowerFlex cluster system ID and the PowerFlex MDM management IP addresses. These are needed when adding the
cluster as a resource in PowerFlex Manager.

Steps
1. Start an SSH session to the primary MDM as a non-root user.
2. Type scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --p12_password
password to capture the system ID used to discover the PowerFlex system.
Example output:

Logged in. User role is SuperUser. System ID is b5cf87e367b2b50f

3. Type scli --query_cluster to discover the PowerFlex cluster management IP address.


Example output:

Cluster:
Mode: 3_node, State: Normal, Active: 3/3, Replicas: 2/2
Virtual IP Addresses: 192.168.109.250, 192.168.110.250
Master MDM:
Name: pfmc-svm-38, ID: 0x13ca7e24633b9200
IP Addresses: 192.168.109.138, 192.168.110.138, Port: 9011, Virtual IP
interfaces: eth1, eth2
Management IP Addresses: 10.10.10.38, Port: 8611
Status: Normal, Version: 4.0.9999
Slave MDMs:
Name: pfmc-svm-39, ID: 0x7741eb2c255c6101
IP Addresses: 192.168.109.139, 192.168.110.139, Port: 9011, Virtual IP
interfaces: eth1, eth2
Management IP Addresses: 10.10.10.39, Port: 8611
Status: Normal, Version: 4.0.9999
Tie-Breakers:
Name: pfmc-svm-40, ID: 0x089bab052efed002
IP Addresses: 192.168.109.140, 192.168.110.140, Port: 9011
Status: Normal, Version: 4.0.9999

Add the PowerFlex system as a resource


Use this procedure to add the PowerFlex system as a resource in PowerFlex Manager.

Steps
1. Log in to PowerFlex Manager.
2. Navigate to the Resources tab and click Discover Resources > Next.
3. Click Add Resource Type.
4. For Resource, select the PowerFlex system.
5. For the MDM cluster IP address, enter all the PowerFlex cluster management IP addresses.
Enter the management IP addresses of the LIA nodes in the MDM cluster IP address field. You need to provide the IP
addresses for all of the nodes in a comma-separated list. The list should include a minimum of three nodes and a maximum of
five nodes.
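For example, if the first two MDM nodes use the management IP addresses shown in the sample scli --query_cluster output earlier (10.10.10.38 and 10.10.10.39) and the tie-breaker node uses 10.10.10.40 (an assumed value, not shown in that output), the entry would be:

10.10.10.38,10.10.10.39,10.10.10.40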


If you forget to add a node, the node will not be reachable after discovery. To fix this, you can rerun the discovery later to
provide the missing node. You can enter just the one missing node, or all of the nodes again. If you enter IP addresses for any
nodes that were previously discovered, these will be ignored on the second run.

6. For System ID, enter the PowerFlex system ID.


7. Click + to add new credentials:
a. Enter the credential name.
b. Enter the LIA password.
c. Confirm the LIA password.
8. Click Save > Next.
9. Review the summary details and click Finish > Yes.

Add as an existing resource group


Use this procedure to add the PowerFlex system as an existing resource group.

Prerequisites
Ensure the following before you add an existing resource group:
● The VMware vCenter, switches, and hosts are discovered in the resource list.
● Ensure that Add the PowerFlex system as a resource has been completed.
● The oob-mgmt and vcsa-ha networks must be of type general purpose for PowerFlex Manager to run without error.

Steps
1. In PowerFlex Manager, on the menu bar, click Lifecycle > Resource Groups > + Add Existing Resource Group > Next.
2. On the Service Information page, enter a service name in the Name field.
3. Enter a description in the Description field.
4. For Type, select Hyperconverged.
5. From Firmware and Software Compliance, select the applicable IC version.
6. Specify the resource group permissions for this template under Who should have access to the resource group
deployed from this template:
a. To restrict access to administrators, select Only PowerFlex SuperUser.
b. To grant access to administrators and specific standard users, select PowerFlex SuperUser and Specific Lifecycle
Admin and Drive Replacer and perform the following steps:
● Click Add User(s) to add one or more standard or operator users to the list.
● To remove a standard or operator user from the list, select the user and click Remove User(s).
● After adding the standard and or operator users, select or clear the check box next to the users to grant or block
access to this template.
● To grant access to administrators and all standard users, select PowerFlex SuperUser and Specific Lifecycle
Admin and Drive Replacer.
7. Click Next.
8. Select the network automation type: Full network automation (FNA).
9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:

Cluster settings Description


Target Virtual Machine Manager Select the VMware vCenter name where the cluster is
available.
Data Center Name Select the data center name where the cluster is available.
Cluster Name Select the name of the cluster that you want to discover.
Target PowerFlex Gateway Select the name of the gateway that you want to discover.
Target Protection Domain Select the name of the Protection Domain you want to
discover.

OS Image Choose your VMware ESXi image.

11. Click Next.


12. On the OS Credentials page, select the OS credential that you want to use for each node and SVM and click Next.
13. Review the inventory on the Inventory Summary page and click Next.
14. On the Network Mapping page, review the networks that are mapped to port groups and make any required edits, and click
Next.
15. Review the Summary page, and click Finish when the service is ready to be added.
16. Automatically migrate the vCLS VMs:
a. In the Service Detail window, select Service Action and click Migrate vCLS VMs.
b. In the Migrate vCLS VMs window, select Storage Pool, and then select PFMC-POOL.
c. Type migrate vcls virtual machines.
d. Click Confirm.

Upload a management data store license


You can upload a management data store license and a single production license for the PowerFlex system (including PowerFlex
and PowerFlex Manager).

About this task


No license is required for the first 90 days of use. During this period, you are running PowerFlex in trial mode, and all features
are enabled. PowerFlex Manager shows an alert on the Monitoring > Alerts page when you are running in trial mode.

Prerequisites
You need to deploy the MDM cluster before uploading a PowerFlex license. You need to discover an MDS gateway before
uploading an MDS license.

Steps
1. On the menu bar, click Settings and click License Management.
2. Click PowerFlex License.
3. To upload an MDS license, click Choose File in the Management Data Store (MDS) License section and select the
license file. Click Save.
4. To upload a production license for PowerFlex, click Choose File in the Production License section and select the license
file. Click Save.

Results
When you upload a license file, PowerFlex Manager checks the license file to ensure that it is valid.
After the upload is complete, PowerFlex Manager stores the license details and displays them on the PowerFlex Manager
License page. You can see the Installation ID, System Name, and SWID for the PowerFlex system. In addition, you can see the
Total Licensed Capacity, as well as the License Capacity Left. You can upload a second license, as long as the licensed capacity
is equal to or greater than the Total System Capacity.


9
Deploying the PowerFlex file nodes
Use this chapter to deploy PowerFlex file nodes.

File storage
File storage is managed through NAS servers, which must be created prior to creating file systems. NAS servers can be created
to support SMB protocol, NFS protocol, or both. Once NAS servers are created, you can create file systems as containers for
your SMB shares for Windows users, or NFS exports for UNIX users.

PowerFlex file capabilities


PowerFlex features a file solution that is highly scalable, efficient, performance-focused, and flexible. This design enables
accessing data over file protocols such as Server Message Block (SMB), Network File System (NFS), File Transfer Protocol
(FTP), and SSH file transfer protocol (SFTP).
PowerFlex uses virtualized NAS servers to enable access to file systems, provide data separation, and act as the basis for
multitenancy. File systems can be accessed through a wide range of protocols and can take advantage of advanced protocol
features. Services such as anti-virus, scheduled snapshots, and Network Data Management Protocol (NDMP) backups ensure
that the data on the file systems is well protected.
PowerFlex file is available on PowerFlex appliance, which is designed as a true unified storage system. To enable file capability,
a minimum of two physical nodes is required, and a maximum of 16 nodes is supported. Monitoring and provisioning capabilities
are available in the HTML5-based PowerFlex Manager.

PowerFlex file terminology


The following table provides definitions for some PowerFlex file terms:

Term Definition
File system A storage resource that can be accessed through file sharing protocols such as SMB or NFS.
PowerFlex file services A virtualized network-attached storage server that uses the SMB, NFS, FTP, and SFTP protocols
to catalog, organize, and transfer files within file system shares and exports. A NAS server,
the basis for multitenancy, must be created before you can create file-level storage resources.
PowerFlex file services is responsible for the configuration parameters on the set of file systems
that it serves.
Network file system (NFS) An access protocol that enables users to access files and folders on a network. NFS is typically
used by Linux/UNIX hosts.
PowerFlex Manager An HTML5 user interface used to manage PowerFlex appliance.
Server message block An access protocol that allows remote file data access from clients to hosts on a network. SMB is
(SMB) typically used in Microsoft Windows environments.
Snapshot A point-in-time view of data stored on a storage resource. A user can recover files from a
snapshot or restore a storage resource from a snapshot.

PowerFlex file node definition


● The PowerFlex file services solution is offered with the PowerFlex R650 node only.


● A single PowerFlex file cluster supports a minimum of two nodes and a maximum of 16 nodes.
● Expansion can be done one or more nodes at a time, up to a maximum of 16 nodes.
● Each PowerFlex R650 node in a cluster has the same CPU, memory, and NIC configuration.

Node configurations
Config Cores RAM (GB) NICs (Gb) Local storage (GB)
Small 2 x 12 (24) 128 4 x 25 480 BOSS M.2
Medium 16 x 2 (32) 256 4 x 25 480 BOSS M.2
Large 28 x 2 (56) 256 4 x 25 or 4 x 100 480 BOSS M.2

Node slot matrix


PowerFlex file node Dual CPU with OCP
Slot 0 (OCP) CX5
Slot 1 Empty
Slot 2 Empty
Slot 3 CX5

PowerFlex file node Dual CPU with no OCP


Slot 0 Empty
Slot 1 CX6
Slot 2 CX6
Slot 3 Empty

Configure the iDRAC before starting the deployment. See Configuring the iDRAC.

Related information
Configuring the iDRAC

Deployment requirements for PowerFlex file services


For a complete list of supported hardware and software for PowerFlex file, see the Dell PowerFlex Appliance with PowerFlex
4.x Support Matrix.
● PowerFlex R650 node
● PowerFlex appliance with PowerFlex storage-only nodes or PowerFlex hyperconverged nodes
○ Dedicated storage pool with 5.5 TB of free space for PowerFlex file metadata
○ It is recommended to have multiple protection domains; each protection domain must have a dedicated storage pool for
the PowerFlex file cluster.
● Only port-channel with LACP networking is supported
● Two NICs with four ports
● Required license

Networking pre-requisites
● Create the required VLANs in the access switches


● For VLAN information, see VLAN mapping


PowerFlex Manager does not have an option to change the M&O ingress CA certificate after PowerFlex file services resource
group deployment. If you want to use your own signed certificate for M&O (ingress), upload it before deploying the
PowerFlex file services resource group.

Resource group deployment

Define networks
Adding the details of an existing network enables PowerFlex Manager to automatically configure nodes that are connected to
the network.

About this task


Enter detailed information about the available networks in the environment. This information is used later during deployments
to configure nodes and switches to have the right network connectivity. PowerFlex Manager uses the defined networks in
templates to specify the networks or VLANs that are configured on nodes and switches for your resource groups. This step is
enabled immediately after you perform an initial setup for PowerFlex Manager.

Steps
1. On the menu bar, click Settings and click Networks.
The Networks page opens.
2. Click Define. The Define Network page opens.
3. In the Name field, enter the name of the network. Optionally, in the Description field, enter a description for the network.
4. From the Network Type drop-down list, select one of the following network types:
● PowerFlex Management - The number of IP addresses depends on the number of nodes in the PowerFlex file cluster.
● PowerFlex Data (Client Traffic Only) - Define the same number of data networks as are configured on the PowerFlex
block storage.
● NAS File Management - Always define one additional IP address for the NAS cluster. For example, if you have three
PowerFlex file nodes in the cluster, define four IP addresses (three IP addresses for the nodes and one IP address for
the cluster). Ensure that the VLAN for the NAS File Management network is configured as untagged on the switch side
if the deployment is in PNA mode.
● NAS File Data - The range can also be used without defining it. The number of IP addresses depends on the number of
NAS servers that you want to create.
5. In the VLAN ID field, enter a VLAN ID between 1 and 4094.
NOTE: PowerFlex Manager uses the VLAN ID to configure I/O modules to enable network traffic to flow from the node
to configured networks during deployment.

6. Optionally, select the Configure Static IP Address Ranges check box, and then do the following:
a. In the Subnet box, enter the IP address for the subnet. The subnet is used to support static routes for data and
replication networks.
b. In the Subnet Mask box, enter the subnet mask.
c. In the Gateway box, enter the default gateway IP address for routing network traffic.
d. Optionally, in the Primary DNS and Secondary DNS fields, enter the IP addresses of primary DNS and secondary DNS.
e. Optionally, in the DNS Suffix field, enter the DNS suffix to append for hostname resolution.
f. To add an IP address range, click Add IP Address Range. In the row, indicate the role in PowerFlex nodes for the IP
address range and then specify a starting and ending IP address for the range. For the Role, select Client Only. The
range is assigned to the client role on PowerFlex file nodes.
NOTE: IP address ranges cannot overlap. For example, you cannot create an IP address range of 10.10.10.1–
10.10.10.100 and another range of 10.10.10.50–10.10.10.150.
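For example (illustrative values only, not taken from any referenced configuration), a NAS File Management network for a three-node PowerFlex file cluster could be defined with Subnet 10.10.160.0, Subnet Mask 255.255.255.0, Gateway 10.10.160.1, and an IP address range of 10.10.160.11-10.10.160.14 with the Client Only role. The four addresses would cover the three file nodes plus the NAS cluster IP address described in step 4.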

7. Click Save.


Edit a network
If a network is not associated with a template or resource group, you can edit the network name, the VLAN ID, or the IP address
range.

Steps
1. On the menu bar, click Settings and click Networks. The Networks page opens.
2. Select the network that you want to modify, and click Edit. The Edit Network page opens.
3. Edit the information in any of the following fields: Name, VLAN ID, IP Address Range.
For a PowerFlex data or replication network, you can specify a subnet IP address for a static route configuration. The subnet
is used to support static routes for data and replication networks.
4. Click Save.

Delete a network
You cannot delete a network that is associated with a template or resource group.

Steps
1. On the menu bar, click Settings and click Networks. The Networks page is displayed.
2. Click the network that you want to delete, and click Delete.
3. Click OK when the confirmation message is displayed.

Discover resources
A resource is a physical or virtual data center object that PowerFlex Manager interacts with, including but not limited to nodes,
network switches, VM managers (for example, VMware vCenter), and element managers (for example, CloudLink Center,
PowerFlex file gateway).

About this task


A resource must be discovered in PowerFlex Manager in order for PowerFlex Manager to manage it. A resource in PowerFlex
Manager is categorized into one of the following groups: element manager, node, switch, VM manager.

Prerequisites
Before you start discovering a resource, complete the following:
NOTE: In this case, the resources are PowerFlex file nodes. The PowerFlex file gateway is automatically deployed and
discovered as part of the PowerFlex management platform deployment.
● Gather the IP addresses and credentials that are associated with the resources.
● Ensure that both the resources and the PowerFlex Manager are available on the network.

Steps
1. Access the Discovery Wizard by performing either of the following actions:
a. On the Getting Started page, click Discover Resources.
b. On the menu bar, click Resources. On the Resources page, click Discover on the All Resources tab.
2. On the Welcome page of the Discovery Wizard, read the instructions, and click Next.
3. On the Identify Resources page, click Add Resource Type, and perform the following steps:
a. From the Resource Type list, select a resource that you want to discover.
● Element Manager, for example, CloudLink Center.
● Node (Hardware / Software Management)
● Switch
● VM Manager
● PowerFlex Gateway
● Node (Software Management): For PowerFlex, click Node (Software Management).


● PowerFlex System
The PowerFlex system resource type is used to discover an MDS gateway.

b. Enter the management IP address (or hostname) of the resources that you want to discover in the IP/Hostname Range
field.
To discover one or more nodes by IP address, select IP Address and provide a starting and ending IP address.
To discover one or more nodes by hostname, select Hostname and identify the nodes to discover in one of the following
ways:
● Enter the fully qualified domain name (FQDN) with a domain suffix.
● Enter the FQDN without a domain suffix.
● Enter a hostname search string that includes one of the following variables:

Variable Description
$(num) Produces an automatically generated unique number.
$(num_2d) Produces an automatically generated unique number that
has two digits.
$(num_3d) Produces an automatically generated unique number that
has three digits.

If you use a variable, you must provide a start number and end number for the hostname search.
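For example, a hostname search string of pfmc-node-$(num_2d) with a start number of 1 and an end number of 3 would search for the hostnames pfmc-node-01, pfmc-node-02, and pfmc-node-03. The pfmc-node- prefix is only an illustration; use the naming convention that your environment follows.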

c. The following options are available in the Resource State list.

Option Description
Managed Select this option to monitor the firmware version compliance, upgrade
firmware, and deploy resource groups on the discovered resources. A managed
state is the default option for the switch, vCenter, element manager, and
PowerFlex gateway resource types.
Resource state must be set to Managed for PowerFlex Manager to send alerts
to secure connect gateway (SCG).
For PowerFlex file nodes, select the Managed option.

Unmanaged Select this option to monitor the health status of a device and the firmware
version compliance only. The discovered resources are not available for a
firmware upgrade or deploying resource groups by PowerFlex Manager. This
is the default option for the node resource type.
If you did not upload a license in the Initial Setup wizard, PowerFlex Manager
is configured for monitoring and alerting only. In this case, Unmanaged is the
only option available.

Reserved Select this option to monitor firmware version compliance and upgrade
firmware. The discovered resources are not available for deploying resource
groups by PowerFlex Manager.

d. To discover resources into a selected node pool instead of the global pool (default), select an existing or create a node
pool from the Discover into Node Pool list. To create a node pool, click the + sign to the right of the Discover into
Node Pool box.
e. Select an existing credential or create a credential from the Credentials list to discover resource types. To create a
credential, click the + sign to the right of the Credentials box. PowerFlex Manager maps the credential type to the type
of resource you are discovering. The default node credential type is Dell EMC PowerEdge iDRAC Default.
f. If you want PowerFlex Manager to automatically reconfigure the iDRAC nodes it finds, select the Reconfigure
discovered nodes with new management IP and credentials check box. This option is not selected by default,
because it is faster to discover the nodes if you bypass the reconfiguration.
g. To have PowerFlex Manager automatically configure iDRAC nodes to send alerts to PowerFlex Manager, select the Auto
configure nodes to send alerts to PowerFlex Manager check box.
4. Click Next.


You might have to wait while PowerFlex Manager locates and displays all the resources that are connected to the managed
networks.
To discover multiple resources with different IP address ranges, repeat steps 2 and 3.

5. On the Discovered Resources page, select the resources from which you want to collect inventory data and click Finish.
The discovered resources are listed on the Resources page.
NOTE: PowerFlex file cluster deployment also uses the same compliance version and compatibility management files
which are used for PowerFlex hyperconverged or storage-only deployment (backend block storage).

Build or clone a template


The template builder allows you to build a customized template by configuring both physical and virtual components. On the
template builder page, you can set the component properties.

About this task


If you are deploying a standard PowerFlex file cluster configuration, Dell recommends using the clone template option instead
of building the template manually.
NOTE: A newly created or cloned template appears in a draft state on the Templates page and remains in that state
until published.
You can configure node, cluster, and VM components in a template. The template builder page displays a graphical
representation of the topology that is created within a particular template. From this page, you can:
● Add node, cluster, and VM components to a template
● Build and publish a template
● Delete a template
● Import a template
● Deploy a resource group (this feature is available only on published templates)

Component types
Components (physical, virtual, or application) are the main building blocks of a template.
PowerFlex Manager has the following component types:
● Node
● Cluster
● VM
Specific to the PowerFlex file template, PowerFlex Manager has three component types:
● PowerFlex cluster
● PowerFlex file cluster
● Nodes

Node settings
This reference table describes the following node settings: hardware, BIOS, operating system, and network.

Setting Description
Component name Indicates the node component name. In the PowerFlex appliance case, it is Node (Software/
Hardware).
NOTE: This is applicable only when you manually build the template.

Full network automation Allows you to perform deployments with full network automation. This feature allows you to
(FNA) work with supported switches and requires less manual configuration. Full network automation
also provides better error handling since PowerFlex Manager can communicate with the
switches and identify any problems that may exist with the switch configurations.
NOTE: This is applicable only when you manually build the template.
Partial network automation Allows you to perform switchless deployments with partial network automation. This feature
(PNA) allows you to work with unsupported switches, but requires more manual configuration before
a deployment can proceed successfully. If you choose to use partial network automation,
you give up the error handling and network automation features that are available with a full
network configuration that includes supported switches. For a partial network deployment,
the switches are not discovered, so PowerFlex Manager does not have access to switch
configuration information. You must ensure that the switches are configured correctly,
since PowerFlex Manager does not have the ability to configure the switches for you. If
your switch is not configured correctly, the deployment may fail and PowerFlex Manager
is not able to provide information about why the deployment failed. For a partial network
deployment, you must add all the interfaces and ports, as you would when deploying with full
network automation. The Switch Port Configuration must be set to Port Channel (LACP
enabled). In addition, the LACP fallback or LACP ungroup option must be configured on the
port channels. NOTE: In this release, PowerFlex Manager supports PowerFlex file deployment
only with Port Channel (LACP enabled).
Number of instances Enter the number of instances that you want to add. If you select more than one instance,
a single component representing multiple instances of an identically configured component
is created. Edit the component to add extra instances. If you require different configuration
settings, you can create multiple components.
Related components Select Associate All or Associate Selected to associate all or specific components to the
new component.
Import configuration from Click this option to import an existing node configuration and use it for the node component
reference node settings. On the Select Reference Node page, select the node from which you want to
import the settings and click Select.
OS Settings
Host name selection If you choose Specify At Deployment Time, you must type the name for the host at
deployment time. If you choose Auto Generate, PowerFlex Manager displays the Host Name
Template field to enable you to specify a macro that includes variables that produce a unique
hostname. For details on which variables are supported, see the context-sensitive help for
the field. If you choose Reverse DNS Lookup, PowerFlex Manager assigns the hostname by
performing a reverse DNS lookup of the host IP address at deployment time.
OS Image Specifies the location of the operating system image install files. You must choose Use
Compliance File Linux Image (provided with the target compliance file) for deploying a
PowerFlex file cluster.
NOTE: The IC includes the operating system image (embedded operating system based on
SUSE Linux) required for deploying a PowerFlex file resource group. Choose the image that
is part of the IC.

OS Credential Select the credential that you created on the Credentials Management page. Alternatively,
you can create a credential while you are editing a template. If you select a credential that was
created on the Credentials Management page, you do not need to type the username and
password, since they are part of the credential definition. For nodes running Linux, the user is
root.
NTP Server Specifies the IP address of the NTP server for time synchronization. If adding more than one
NTP server in the operating system section of a node component, be sure to separate the IP
addresses with commas.
Use Node For Dell PowerFlex Indicates that this node component is used for a PowerFlex deployment. When this option is
selected, the deployment installs the SDC components, as required for a PowerFlex file cluster
to access the PowerFlex volume in Linux environment. To deploy a PowerFlex file cluster
successfully, include at least two nodes in the template.

PowerFlex Role Specifies the following deployment type for PowerFlex: Compute Only indicates that the
node is only used for compute resources. For a PowerFlex file template, be sure to select
Compute Only as the role and add Node, PowerFlex File Cluster, and PowerFlex
Cluster components to the template.
Enable PowerFlex File Enables PowerFlex file capabilities on the node. This option is only available if you choose Use
Compliance File Linux Image as the OS Image and then choose Compute Only as the
PowerFlex Role. If Enable PowerFlex File is selected, you must ensure that the template
includes the necessary NAS File Management network. NAS File Data network is optional.
If you do not configure NAS File Management on the template, the template validation will
fail.
Switch Port Configuration Specifies whether Cisco virtual PortChannel (vPC) or Dell Virtual Link Trunking (VLT) is
enabled or disabled for the switch port. For PowerFlex file templates that use a Linux operating
system image, the only available option is Port Channel (LACP enabled), which turns on vPC or
VLT with the Link Aggregation Control Protocol enabled.
Teaming And Bonding For a PowerFlex file template, if you choose Port Channel (LACP enabled) as the switch port
Configuration configuration, the only teaming and bonding option is Mode 4 (IEEE 802.3ad policy).
Hardware Settings
Target Boot Device Specifies the target boot device. Local Flash storage for Dell EMC PowerFlex: Installs the
operating system to the BOSS flash storage device that is present in the node and configures
the node to support PowerFlex file. If you select the option to Use Node for Dell EMC
PowerFlex under OS Settings , the Local Flash storage for Dell EMC PowerFlex option is
automatically selected as the target boot device.
Node Pool Specifies the pool from which nodes are selected for the deployment.
BIOS Settings
System Profile Select the system power and performance profile for the node. Default selection is
Performance.
User Accessible USB Ports Enables or disables the user-accessible USB ports. Default selection is All Ports On.
Number of Cores per Specifies the number of enabled cores per processor. Default selection is All.
Processor
Virtualization Technology Enables the additional hardware capabilities of virtualization technology. Default selection is
Enabled.
Logical Processor Each processor core supports up to two logical processors. If enabled, the BIOS reports all
logical processors. If disabled, the BIOS reports only one logical processor per core. Default
selection is Enabled.
Execute Disable Enables or disables execute disable memory protection. Default selection is Enabled.
Node Interleaving Enable or disable the interleaving of allocated memory across nodes.
● If enabled, only nodes that support interleaving and have the read/write attribute for node
interleaving set to enabled are displayed. Node interleaving is automatically set to enabled
when a resource group is deployed on a node.
● If disabled, any nodes that support interleaving are displayed. Node interleaving is
automatically set to disabled when a resource group is deployed on a node. Node
interleaving is also disabled for a resource group with NVDIMM compression.
● If not applicable is selected, all nodes are displayed irrespective of whether interleaving is
enabled or disabled. This setting is the default. Default selection is Disabled.
Network Settings
Add New Interface Click Add New Interface to create a network interface in a template component. Under
this interface, all network settings are specified for a node. This interface is used to find a
compatible node in the inventory. For example, if you add Two Port, 100 gigabit to the
template, when the template is deployed PowerFlex Manager matches a node with a two
port 100-gigabit network card as its first interface. To add one or more networks to the port,
select Add Networks to this Port. Then, choose the networks to add, or mirror network
settings defined on another port. To see network changes that were previously made to a
template, you can click View/Edit under Interfaces. Or you can click View All Settings
on the template, and then click View Networks. To see network changes at resource group
deployment time, click View Networks under Interfaces.
NOTE: If you used the sample template, standard configuration ports are selected by
default. Verify the selected ports before validating settings.

Add New Static Route Click Add New Static Route to create a static route in a template. To add a static route, you
must first select Enabled under Static Routes. A static route allows nodes to communicate
across different networks. A static route requires a Source Network and a Destination
Network, as well as a Gateway. The source and destination network must each be a
PowerFlex data network or replication network that has the Subnet field defined. If you add
or remove a network for one of the ports, the Source Network drop-down list does not get
updated and still shows the old networks. In order to see the changes, save the node settings
and edit the node again.
Validate Settings Click Validate Settings to determine what can be chosen for a deployment with this
template component. The Validate Settings wizard displays a banner when one or more
resources in the template do not match the configuration settings that are specified in the
template. The wizard displays the following tabs:
● Valid (number) lists the resources that match the configuration settings.
● Invalid (number) lists the resources that do not match the configuration settings. The
reason for the mismatch is shown at the bottom of the wizard.
For example, you might see Network Configuration Mismatch as the reason for the mismatch
if you set the port layout to use a 100-Gb network architecture, but one of the nodes is using
a 25-Gb architecture.

Cluster component settings


This table describes the cluster component settings.

Field name Description


Select a Component Select PowerFlex Cluster or PowerFlex File Cluster.
Component name Indicates the cluster component name.
Related components Select Associate All or Associate Selected to associate all
or specific components to the new component.
PowerFlex cluster
Target PowerFlex gateway Select a target PowerFlex Gateway, which acts as an endpoint
for PowerFlex API calls. During the provisioning process,
PowerFlex Manager connects to the PowerFlex Gateway and
uses its APIs to configure all SDS and SDC parameters.
PowerFlex File cluster
PowerFlex File gateway Select the PowerFlex file gateway from the drop-down list. The
PowerFlex file gateway is discovered automatically after a
successful PowerFlex Manager platform deployment.
Number of protection domains Determines the number of protection domains that are used
for PowerFlex File configuration data. Control volumes will be
created automatically for every node in the PowerFlex File
cluster, and spread across the number of protection domains
specified for improved cluster resiliency. To add data volumes,
you need to use the tools provided on the File tab. You can
have between one and four protection domains.
Protection domain <n> Includes a separate section for each protection domain used in
the template.

Storage pool <n> Includes a separate section for each storage pool used in the
template.

Create a template
The Create feature allows you to create a template, clone the components of an existing template into a new template, or
import a pre-existing template.

About this task


For most environments, you can simply clone one of the sample templates that are provided with PowerFlex Manager and edit
as needed. Choose the sample template that is most appropriate for your environment.

Steps
1. On the menu bar, click Lifecycle > Templates.
2. On the Templates page, click Create.
3. In the Create dialog box, select one of the following options:
● Clone an existing PowerFlex Manager template
● Upload External Template
● Create a new template

If you select Clone an existing PowerFlex Manager template, select the Category and the Template to be Cloned.
The components of the selected template are included in the new template. You can clone one of your own templates or a
sample template.

4. Enter a Template Name and click Next.


5. From the Template Category list, select a template category. To create a category, select Create New Category from the
list.
6. Optionally, enter a Template Description.
7. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, since it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does
not show any minimal compliance versions in the Firmware and Software Compliance list.
NOTE: Changing the compliance version might update the firmware level on nodes for this resource group. Firmware on
shared devices will still be maintained by the global default firmware repository.

8. Specify the resource group permissions for this template under Who should have access to the resource group
deployed from this template? by performing one of the following actions:
● To restrict access to administrators, select Only PowerFlex SuperUser.
● To grant access to administrators and specific standard users, select PowerFlex SuperUser and Specific Lifecycle
Admin and Drive Replacer and perform the following steps:
a. Click Add User(s) to add one or more standard or operator users to the list.
b. To remove a standard or operator user from the list, select the user and click Remove User(s).
c. After adding the standard and or operator users, select or clear the check box next to the standard or operator users
to grant or block access to use this template.
● To grant access to administrators and all standard users, select PowerFlex SuperUser and Specific Lifecycle Admin
and Drive Replacer.
9. Click Save.


Clone a template

About this task


The Clone feature allows you to copy an existing template into a new template. A cloned template contains the components
that existed in the original template. You can edit it to add components or modify the cloned components.
For most environments, you can clone one of the sample templates that are provided with PowerFlex Manager and edit as
needed. Choose the sample template that is most appropriate for your environment.

Steps
1. On the menu bar, click Lifecycle > Templates.
2. Open a PowerFlex File template from Sample Templates, and then click More Actions > Clone in the right pane.
You can also click Create > Clone an existing PowerFlex Manager template on the My Templates page if you want to
clone one of your own templates or the sample templates.
3. In the Clone Template dialog box, enter a template name in the Template Name box.
4. Select a template category from the Template Category list. To create a template category, select Create New Category.
5. In the Template Description box, enter a description for the template.
6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, since it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does
not show any minimal compliance versions in the Firmware and Software Compliance list.
NOTE: Changing the compliance version might update the firmware level on nodes for this resource group. Firmware on
shared devices will still be maintained by the global default firmware repository

7. Indicate Who should have access to the resource group deployed from this template by selecting one of the following
options:
● Grant access to Only PowerFlex Manager Administrators.
● Grant access to PowerFlex Manager Administrators and Specific Standard and Operator Users. Click Add
User(s) to add one or more standard and or operator users to the list. Click Remove User(s) to remove users from the
list.
● Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
8. Click Next.
9. On the Additional Settings page, provide new values for the Network Settings, OS Settings, PowerFlex Gateway
Settings, and Node Pool Settings.
10. Click Finish.
11. After you click Finish, you are redirected to the Templates page. Add or modify the values of each component (PowerFlex
cluster, PowerFlex file cluster, node) based on the details in the table above and ensure that there are no warnings on any
of the three components. Then click Publish Template to publish the template. After publishing a template, you can use
the template to deploy a resource group on the Resource Groups page.

Build and publish a template


After creating a template using Create, you use the template builder page to build and publish the customized template.
Publishing a template indicates that a template is ready for deployment.

Steps
1. Click Modify Template.
2. To add a component type to the template, click Add Node and Add Cluster (PowerFlex Cluster and PowerFlex File Cluster)
at the top of the template builder.
The corresponding <component type> component dialog box appears.
3. If you are adding a node, choose one of the following network automation types:
● Full Network Automation


● Partial Network Automation

When you choose Partial Network Automation, PowerFlex Manager skips the switch configuration step, which is normally
performed for a resource group with Full Network Automation. Partial network automation allows you to work with
unsupported switches. However, it also requires more manual configuration before deployments can proceed successfully.
If you choose to use partial network automation, you give up the error handling and network automation features that
are available with a full network configuration that includes supported switches. For more information about the manual
configuration steps needed for partial network automation, see the networking section in this document. In the Number
of Instances box, provide the number of component instances that you want to include in the template.

NOTE: Minimum is two nodes and maximum is 16 nodes for PowerFlex File deployment.

4. Click Continue.
5. On the Node page, provide new values for the OS Settings, Hardware Settings, BIOS Settings, and Network Settings.
6. Click Validate Settings to determine what can be chosen for a deployment with this template component. The Validate
Settings wizard displays a banner when one or more resources in the template do not match the configuration settings
that are specified in the template.
7. Click Save.
8. If you are adding a cluster, in the Select a Component box, choose one of the following cluster types:
● PowerFlex cluster
● PowerFlex File cluster
9. Under Related Components, perform one of the following actions:
● To associate the component with all existing components, click Associate All.
● To associate the component with only selected components, click Associate Selected and then select the components
to associate.
Based on the component type, specific settings and properties appear automatically that are required and can be edited.
10. Click Save to add the component to the template builder.
11. Repeat steps 1 through 6 to add additional components.
12. After you finish adding components to your template, click Publish Template.
A template must be published to be deployed. It remains in draft state until published.
After publishing a template, you can use the template to deploy a resource group on the Resource Groups page.

Edit template information

Steps
1. On the menu bar, click Lifecycle > Templates.
2. On the Templates page, click the template that you want to edit and click Modify Template in the right pane.
3. On the template builder page, in the right pane, click Modify.
4. In the Modify Template Information dialog box, enter a template name in the Template Name box.
5. Select a template category from the Template Category list. To create a template category, select Create New Category.
6. In the Template Description box, enter a description for the template.
7. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
8. Indicate Who should have access to the resource group deployed from this template by selecting one of the following
options:
● Grant access to Only PowerFlex Manager Administrators
● Grant access to PowerFlex Manager Administrators and Specific Standard and Operator Users. Click Add
User(s) to add one or more standard and or operator users to the list. Click Remove User(s) to remove users from the
list.
● Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
9. Click Save.


Edit a template
You can edit an existing template to change its draft state to published for deployment, or to modify its components and their
properties.

Steps
1. On the menu bar, click Lifecycle > Templates.
2. Open a template, and click Modify Template.
3. Make changes as needed to the settings for components within the template. Based on the component type, required
settings and properties are displayed automatically and can be edited.
a. To edit PowerFlex cluster settings, select the PowerFlex Cluster component and click Modify. Make the necessary
changes, and click Save.
b. To edit PowerFlex File cluster settings, select the PowerFlex File cluster component and click Modify. Make the
necessary changes, and click Save.
c. To edit node settings, select the Node component and click Modify. Make the necessary changes, and click Save.
4. Optionally, click Publish Template to make the template ready for deployment.

Deploy a resource group

About this task


Deployment is the automated process of selecting and configuring specific resource requirements that are outlined in a template
using PowerFlex Manager's integrated automation workflows. You cannot use a template that is in a draft state to deploy a
resource group. Publish the template before using it to deploy a resource group.

Prerequisites
Ensure LLDP is enabled on the switches, and update the inventory in PowerFlex Manager.

Steps
1. On the menu bar, click one of the following:
● Lifecycle > Resource Groups and, click Deploy New Resource Group.
● Lifecycle > Templates and click Deploy.
The Deploy Resource Group wizard opens.
2. On the Deploy Resource Group page, perform the following steps:
a. From the Select Published Template list, select the template to deploy a resource group.
b. Enter the Resource Group Name (required) and Resource Group Description (optional) that identifies the resource
group.
c. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version when you deploy a new resource group, since it only includes server
firmware updates. The compliance version for a new resource group must include the full set of compliance update
capabilities. PowerFlex Manager does not show any minimal compliance versions in the Firmware and Software
Compliance list.
NOTE: Changing the firmware repository might update the firmware level on nodes for this resource group. The
global default firmware repository maintains the firmware on the shared devices.

d. Indicate Who should have access to the resource group deployed from this template by selecting one of the
following options:
● Grant access to Only PowerFlex Manager Administrators.
● To grant access to administrators and specific standard and operator users, select the PowerFlex Manager
Administrators and Specific Standard and Operator Users option, and perform the following steps:
i. Click Add User(s) to add one or more standard or operator users to the list displayed.
ii. Select which users will have access to this resource group.
iii. To delete a standard and or operator user from the list, select the user and click Remove User(s).


iv. After adding the standard and or operator users, select or clear the check box next to the standard or operator users to
grant or block access to use this template.
● Grant access toPowerFlex Manager Administrators and All Standard and Operator Users.
3. Click Next.
4. On the screens that follow the Deployment Settings page, configure the settings, as needed for your deployment.
5. Click Next.
6. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now - Select this option to deploy the resource group immediately.
● Deploy Later - Select this option and enter the date and time to deploy the resource group.
7. Review the Summary page.
The Summary page gives you a preview of what the resource group will look like after the deployment.
8. Click Finish when you are ready to begin the deployment. If you want to edit the resource group, click Back.

Verify resource group status


Use this procedure after the resource group is successfully deployed to verify the status of the resource group on the Resource
Groups page.

Steps
1. On the menu bar, click Lifecycle > Resource Groups.
2. On the Resource Groups page, ensure that the Status and Deployment State show as Healthy.
See the following table for the various resource group states:

State Description
Healthy The resource group is successfully deployed and healthy.
Warning One or more resources in the resource group requires corrective action.
Critical The resource group is in a severely degraded or nonfunctional state and requires attention.
Pending The deployment is scheduled for a later time or date.
In Progress The resource group deployment is in progress, or has other actions currently in process, such as a node expansion or removal.
Cancelled The resource group deployment has been stopped. You can update the resources or retry the deployment, if necessary.
Incomplete The resource group is not fully functional because it has no volumes that are associated with it. Click Add Resources to add volumes.
Service Mode The resource group is in service mode.
Lifecycle Mode The resource group is in lifecycle mode. Resource groups in lifecycle mode are enabled with health and compliance monitoring, and non-disruptive upgrade features only.
Managed Mode The resource group is in managed mode. Resource groups in managed mode are enabled with health and compliance monitoring, non-disruptive upgrade, automated resource addition, and automated resource replacement features.

The Resource Groups page displays the resource groups in these states in both Tile and List view.


10
Deploying PowerFlex NVMe over TCP
Nonvolatile Memory Express (NVMe) is a high-speed storage protocol designed specifically to take advantage of solid-state
drive performance and bandwidth. NVMe over fabrics allows hosts to use existing network architectures such as Fibre Channel
and Ethernet to access NVMe devices at greater speed and lower latency than legacy storage protocols.
Requirements:
● PowerFlex Manager 4.0 must be deployed and configured
● Four storage-only nodes (with standard SSD or NVMe disks)

Create the storage NVMe template


Steps
1. Log in to PowerFlex Manager.
2. Click Lifecycle > Templates > My Templates > Create.
3. Click Clone an existing PowerFlex Manager template > Sample Templates.
4. From the template to be cloned, click Storage with NVMe/TCP and click Next.
5. Enter a template name.
6. Select or create a new category.
7. Enter a description of the template.
8. From the Firmware and Software Compliance field, select the customer IC.
9. Select the security group.
10. Click Next.
11. Under Network Settings, select the matching customer networks for each category.
12. Under OS Settings:
a. Select or create (+) the OS Credential to be used for the root user.
b. Under Use Compliance File Linux Image, select Use Compliance File Linux Image (or custom if requested).
13. Under PowerFlex Gateway Settings, select the appropriate PowerFlex gateway. The default is block-legacy-gateway.
14. Under Node Pool Settings:
a. Select the node pool that contains the NVMe nodes (or default Global).
b. Click Finish.
15. Select the node, modify the node count as necessary, and select Continue.
16. Add NTP and time zone information and click Save.
17. Click Publish Template.
18. Click Yes to confirm.

Deploy storage with the NVMe/TCP template


Steps
1. Log in to PowerFlex Manager.
2. Click Lifecycle > Templates and select the template that you just created.
3. Click Deploy Resource Group, enter the Resource Group Name, and enter a description of the resource group.
4. Select the IC version.
5. Select the administration group for this resource.
6. Click Next.


7. Under Deployment Settings:


a. Auto-generate or fill out the following fields:
● Protection Domain Name
● Protection Domain Name Template
● Storage Pool Name
● Number of Storage Pools
● Storage Pool Name Template
b. Allow PowerFlex to select the IP addresses or manually provide the MDM virtual IP addresses.
c. Allow PowerFlex to select the IP addresses or manually provide the storage-only node OS IP addresses.
NOTE: If already deployed, the PowerFlex MDM virtual IP address will be displayed.

d. Manually select each storage-only node by the serial number or the iDRAC IP address, or allow PowerFlex to select the
nodes automatically from the selected node pool.
e. Click Next.
f. Click Deploy Now > Next.
g. Review the summary screen and click Next.
Monitor deployment activity on the right panel under Recent Activity.

Configuring NVMe over TCP on a VMware ESXi compute-only node
Requirements:
● Deployed NVMe over TCP resource group
● Deployed VMware ESXi compute-only nodes
● The host must be at VMware ESXi version 7.0U3 or higher
● VMware vSphere Distributed Switch must be at version 7.0.3 or higher

Enable the NVMe/TCP VMkernel ports


Use this procedure to enable the NVMe/TCP VMkernel ports.

Steps
1. Log in to the VMware vSphere Client.
2. Click Home > Inventory and select the host.
3. Select Configure > VMkernel adapters.
4. Edit PowerFlex-Data 1.
5. Select the NVMe over TCP check box and click OK.
6. Repeat these steps for the remaining PowerFlex data networks.
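If you prefer the command line, the same change can usually be made with esxcli on the host. This is a minimal sketch only;
the VMkernel adapter names are assumptions (substitute the PowerFlex data adapters on your host), and the NVMeTCP tag
requires VMware ESXi 7.0U3 or later:

# Tag each PowerFlex data VMkernel adapter for NVMe/TCP (vmk2 and vmk3 are example names)
esxcli network ip interface tag add -i vmk2 -t NVMeTCP
esxcli network ip interface tag add -i vmk3 -t NVMeTCP
# Confirm the tags
esxcli network ip interface tag get -i vmk2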

Add NVMe over TCP software storage adapter


Use this procedure to add NVMe over TCP software storage adapter.

Steps
1. Log in to the VMware vSphere Client.
2. Click Home > Inventory > Hosts and Clusters.
3. In the VMware vSphere console, browse to the customer data center, compute-only cluster, and select the added host.
4. From the right pane, click Configure > Storage Adapters.
5. From the right pane, click Add Software Adapter.
6. Click Add NVMe over TCP adapter.


7. Select the first flex_dvswitch VMNIC and click OK.


8. Click Add NVMe over TCP adapter.
9. Select the second flex_dvswitch VMNIC and click OK.
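Alternatively, the software adapters can be created from the ESXi command line. A minimal sketch, assuming the flex_dvswitch
uplinks are vmnic4 and vmnic5 (the VMNIC names are assumptions; verify them on your host before running the commands):

# Create one NVMe over TCP software adapter per flex_dvswitch uplink
esxcli nvme fabrics enable --protocol TCP --device vmnic4
esxcli nvme fabrics enable --protocol TCP --device vmnic5
# List the storage adapters to confirm the new vmhba entries
esxcli storage core adapter list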

Copy the host NQN


Use this procedure to copy the host NQN to the copy buffer. The host NQN details are required when you add the host to the
PowerFlex Manager.

Steps
1. Log in to VMware vSphere Client.
2. On the right pane, select Configure > Storage Adapters.
3. Select the first VMware NVMe over TCP storage adapter. For example, vmhba6x.
4. From the pane, select Controllers > Add Controller.
The host NQN is listed at the top of the form.
5. Click COPY and place the host NQN in the copy buffer.
6. Click CANCEL.
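The host NQN can also be read directly from the ESXi command line instead of the vSphere Client, which is convenient when
adding several hosts. A minimal sketch (output formatting may vary by ESXi build):

# Print the NVMe host information, including the Host NQN to paste into PowerFlex Manager
esxcli nvme info get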

Add a host to PowerFlex Manager


Use this procedure to add a host to PowerFlex Manager.

Steps
1. Log in to PowerFlex Manager.
2. Click Block > Hosts.
3. Click +Add Host.
4. Enter the hostname and paste the host NQN from the copy buffer. The default number of paths is four.
5. Click Add.

Create a volume
Use this procedure to create a volume.

Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Click +Create Volume.
3. Enter the number of volumes and the name of the volumes.
4. Select Thick or Thin. Thin is the default.
5. Enter the required volume size in GB, in 8 GB increments.
6. Select the NVMe storage pool and click Create.
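If you prefer the PowerFlex CLI, a volume can also be created with scli from the primary MDM. This is a hedged sketch only;
the protection domain, storage pool, and volume names are placeholders, and the option names should be verified against the
scli help for your PowerFlex release:

# Create a 64 GB thin volume in the NVMe storage pool (all names are examples)
scli --add_volume --protection_domain_name PD-1 --storage_pool_name SP-NVMe \
     --size_gb 64 --volume_name Linux-CO-Test-Vol --thin_provisioned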

Map a volume to the host


Use this procedure to map a volume to the host.

Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Select the Volume check box and click Mapping > Map.
3. Select the check box on the host that you are mapping the volume to.
4. Click Map > Apply.


Discover target IP addresses


Use this procedure to discover target IP addresses.

Prerequisites
Ensure that a volume is mapped to the host to be able to connect the SDT paths.

Steps
1. From PowerFlex Manager, click Block > NVMe Targets.
2. Select any one of the listed SDTs.
3. Record the IP addresses and discovery port in the lower right corner.
Example output:

IP address Storage port I/O port Discovery port


192.168.151.2 12200 4420 8009
192.168.152.2 12200 4420 8009
192.168.153.2 12200 4420 8009
192.168.154.2 12200 4420 8009
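On the VMware ESXi compute-only node, the recorded addresses can be checked from the command line before any controllers
are added in the vSphere Client. A minimal sketch, assuming the NVMe over TCP adapter created earlier is vmhba65 (the
adapter name is an assumption, and the option spellings should be verified with esxcli nvme fabrics discover --help):

# Query the discovery service on one SDT using the discovery port recorded above
esxcli nvme fabrics discover -a vmhba65 -i 192.168.151.2 -p 8009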

Configuring NVMe over TCP on SLES


Requirements:
● Deployed NVMe over TCP resource group
● SUSE Linux Enterprise Server 15 SP3 or higher with repository access

Add a host to PowerFlex Manager


Use this procedure to add a host to PowerFlex Manager.

Prerequisites
If the host is not connected to the embedded operating system 15 SPx repository, perform the following steps:
1. Run zypper ar http://<customer-repository-address>/pub/suse/sles/15/dell-sles15.repo
2. Type zypper in nvme-cli to install the NVMe command line.
3. Type Y to confirm if additional modules are required.
4. Type cat /etc/nvme/hostnqn to find the host's NQN address. For example,
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-4e10-8034-b7c04f463333.

Steps
1. Log in to PowerFlex Manager.
2. Click Block > Hosts.
3. Click +Add Host.
4. Enter the hostname and paste the host NQN.
5. The default number of paths is four.
6. Click Add.


Create a volume
Use this procedure to create a volume.

Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Click +Create Volume.
3. Enter the number of volumes and the name of the volumes.
4. Select Thick or Thin. Thin is the default.
5. Enter the required volume size in GB, in 8 GB increments.
6. Select the NVMe storage pool and click Create.

Map a volume to the host


Use this procedure to map a volume to the host.

Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Select the Volume check box and click Mapping > Map.
3. Select the check box on the host that you are mapping the volume to.
4. Click Map > Apply.

Discover target IP addresses


Use this procedure to discover target IP addresses.

Prerequisites
Ensure a volume is mapped to the host to be able to connect the SDT paths.

Steps
1. From PowerFlex Manager, click Block > NVMe Targets.
2. Select any one of the listed SDTs.
3. Record the IP addresses and discovery port in the lower right corner.
Example output:

IP address Storage port I/O port Discovery port


192.168.151.2 12200 4420 8009
192.168.152.2 12200 4420 8009
192.168.153.2 12200 4420 8009
192.168.154.2 12200 4420 8009

4. Type echo "nvme-tcp" | tee -a /etc/modules-load.d/nvme-tcp.conf to load the NVMe kernel module on
startup and add it to the nvme-tcp.conf file.
5. Reboot the host. After the system returns to operation, type lsmod |grep nvme to verify that the modules are loaded.
Example output:

nvme_tcp 36864 0
nvme_fabrics 28672 1 nvme_tcp
nvme_core 135168 2 nvme_tcp,nvme_fabrics
t10_pi 16384 2 sd_mod,nvme_core


6. Type nvme discover -t tcp -a <SDT IP ADDRESS> -s 4420 to discover the PowerFlex NVMe SDT interfaces.
Use one of the SDT IP addresses recorded in step 3. If discovery fails, use the next IP address in the list and try again.
Example output from a successful discovery:

# nvme discover -t tcp -a 192.168.151.2 -s 4420

Discovery Log Number of Records 3, Generation counter 2


=====Discovery Log Entry 0======
trtype: tcp
adrfam: ipv4
subtype: nvme subsystem
treq: not specified, sq flow control disable supported
portid: 0
trsvcid: 4420
subnqn: nqn.1988-11.com.dell:powerflex:00:6b63b30579b1ac0f
traddr: 192.168.151.3
sectype: none
=====Discovery Log Entry 1======
trtype: tcp
adrfam: ipv4
subtype: nvme subsystem
treq: not specified, sq flow control disable supported
portid: 19
trsvcid: 4420
subnqn: nqn.1988-11.com.dell:powerflex:00:6b63b30579b1ac0f
traddr: 192.168.152.2
sectype: none
=====Discovery Log Entry 2======
trtype: tcp
adrfam: ipv4
subtype: nvme subsystem
treq: not specified, sq flow control disable supported
portid: 34
trsvcid: 4420
subnqn: nqn.1988-11.com.dell:powerflex:00:6b63b30579b1ac0f
traddr: 192.168.154.4
sectype: none

The traddr field in each entry is the SDT IP address.


7. Type nvme connect-all -t tcp -a <SDT IP ADDRESS> to connect all SDT paths.
8. Type the following to verify path connectivity:
nvme list-subsys
nvme-subsys0 - NQN=nqn.1988-11.com.dell:powerflex:00:6b63b30579b1ac0f
\
+- nvme0 tcp traddr=192.168.151.3 trsvcid=4420 live
+- nvme1 tcp traddr=192.168.152.2 trsvcid=4420 live
+- nvme2 tcp traddr=192.168.154.4 trsvcid=4420 live
9. Type lsblk to verify that the storage is presented to the host.
Example output:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT


sda 8:0 0 1.7T 0 disk
nvme0n1 259:1 0 64G 0 disk (This is the NVMe volume)

10. (Optional) To enable the NVMe path and storage persistence beyond a reboot, type:
echo "-t tcp -a <SDT IP ADDRESS> -s 4420" | tee -a /etc/nvme/discovery.conf
systemctl enable nvmf-autoconnect.service
11. Reboot the host and verify paths and volumes persist.


Configuring NVMe over TCP on Red Hat Enterprise Linux
Requirements:
● Deployed NVMe over TCP resource group
● Red Hat Enterprise Linux 8.5

Pre-configure the embedded operating system 7.x


Use this procedure to pre-configure the embedded operating system 7.x.

Steps
1. The following kernel modules are required for NVMe over TCP connectivity: nvme, nvme_fabrics, and nvme_tcp. Use the
lsmod command to confirm that the modules are loaded.
The following output is for example purpose only. The output may vary depending on the deployment and node type.

[root@chargers-r640-151 ~]# lsmod |grep nvme


nvme_fabrics 24576 0
nvme 45056 0
nvme_core 114688 2 nvme,nvme_fabrics
t10_pi 16384 2 sd_mod,nvme_core

2. If nvme_tcp and nvme_fabrics are not listed, use the following command to add lines to the nvme_tcp.conf file. This
forces the modules to load on boot:
The following output is for example purpose only. The output may vary depending on the deployment and node type.

[root@chargers-r640-151 ~]# echo -e "nvme_tcp\nnvme_fabrics" >> /etc/modules-load.d/


nvme_tcp.conf

3. Re-run lsmod |grep nvme to confirm nvme_tcp and nvme_fabrics are now listed:

[root@vision174-150 ~]# lsmod |grep nvme


nvme_tcp 32768 0
nvme_fabrics 24576 1 nvme_tcp
nvme_core 114688 2 nvme_tcp,nvme_fabrics
t10_pi 16384 2 sd_mod,nvme_core

4. Enter the following command to confirm that an nvme hostnqn is set:

[root@vision174-150 ~]# cat /etc/nvme/hostnqn


nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0043-5a10-8034-b3c04f483133

5. If the command in step 4 returns no value, enter the following command to generate an nvme hostnqn:

[root@vision174-150 ~]# nvme gen-hostnqn > /etc/nvme/hostnqn

6. Enter the following command to confirm an nvme hostid exists:

[root@vision174-150 ~]# cat /etc/nvme/hostid


9d14cb98-7167-4a17-b2b6-d6f47df4bb46

7. If nvme hostid does not exist, enter the following command to generate an nvme hostid:

[root@vision174-150 ~]# nvme gen-hostnqn > /etc/nvme/hostid


NOTE: After completing the nvme gen-hostnqn command, edit the newly created file: /etc/nvme/hostid and
remove nqn.2014-08.org.nvmexpress:uuid: from the beginning of the line.
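The edit described in the note can also be scripted so that the prefix never ends up in the file. A minimal sketch, assuming
sed is available on the host:

# Generate an identifier and keep only the UUID portion for /etc/nvme/hostid, as described in the note above
nvme gen-hostnqn | sed 's/^nqn\.2014-08\.org\.nvmexpress:uuid://' > /etc/nvme/hostid
cat /etc/nvme/hostid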

Create a volume
Use this procedure to create a volume.

Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Click +Create Volume.
3. Enter the number of volumes and the name of the volumes.
4. Select Thick or Thin. Thin is the default.
5. Enter the required volume size in GB, in 8 GB increments.
6. Select the NVMe storage pool and click Create.

Map a volume to the host


Use this procedure to map a volume to the host.

Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Select the Volume check box and click Mapping > Map.
3. Select the check box on the host that you are mapping the volume to.
4. Click Map > Apply.

Discover target IP addresses


Use this procedure to discover target IP addresses.

About this task


Ensure a volume is mapped to the host to be able to connect the SDT paths.

Steps
1. From PowerFlex Manager, click Block > NVMe Targets.
2. Select any one of the listed SDTs.
3. Record the IP addresses and discovery port in the lower right corner.
Example output:

IP address Storage port I/O port Discovery port


192.168.151.2 12200 4420 8009
192.168.152.2 12200 4420 8009
192.168.153.2 12200 4420 8009
192.168.154.2 12200 4420 8009


Add a PowerFlex compute-only host to the PowerFlex storage system

Use this procedure to add a PowerFlex compute-only host based on embedded operating system 7.x to the PowerFlex storage
system.

Steps
1. Use SSH to log in to the primary MDM and run the scli --add_nvme_host command, specifying --nvme_host_nqn and --host_name.
NOTE: The value for host_name should correspond to the abbreviated hostname of the system to be added.

Example command to identify the hostname:

chargers-pfmp-deployer:~ # hostname

Example output of an identified hostname:

chargers-pfmp-deployer

The following output is for example purpose only. The output may vary depending on the deployment and node type.

chargers-r740-sds-138:~ # scli --add_nvme_host --nvme_host_nqn


nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0043-5a10-8034-b3c04f483133 --host_name
chargers-640-151

Example output:

Successfully created NVMe Host chargers-640-151. Object ID 1d73aad400010001

2. To confirm that the compute-only host is added to PowerFlex, enter the scli --query_host command.
The following output is for example purpose only. The output may vary depending on the deployment and node type.

chargers-r740-sds-138:/tmp # scli --query_host --host_name chargers-640-151

Example output:

NVMe Host ID: 1d73aad400010001 Name: chargers-640-151 NQN:


nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0043-5a10-8034-b3c04f483133
Max number of paths per mapped volume: 4
Max number of system ports per Protection Domain: 10
OS full type: GENERIC
Last discovery: N/A, Last successful connection: N/A, Last failure connection: N/A

3. Enter the scli --map_volume_to_host command to map the volume to the host.
The following output is for example purpose only. The output may vary depending on the deployment and node type.

chargers-r740-sds-138:/tmp # scli --map_volume_to_host --host_name chargers-640-151 --


volume_name Linux-CO-Test-Vol

Example output:

Successfully mapped volume Linux-CO-Test-Vol to host chargers-640-151

4. Using nvme connect-all, connect the compute-only node to the volume. Replace the IP address listed with one of the SDT IP
addresses gathered in Discover target IP addresses.
If the connection fails, use the next IP address in the list and try again.
For example:

[root@vision174-150 ~]# nvme connect-all -t tcp -a 192.168.152.2

5. Type lsblk to verify the connection to the PowerFlex system on the compute-only host.


Example output:

[root@vision174-150 ~]# lsblk


NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 600M 0 part /boot/efi
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 1.8T 0 part
├─rl-root 253:0 0 70G 0 lvm /
├─rl-swap 253:1 0 4G 0 lvm [SWAP]
└─rl-home 253:2 0 1.7T 0 lvm /home
sdb 8:16 0 1.8T 0 disk
nvme0n1 259:4 0 664G 0 disk

You can also verify using the nvme list-subsys command.


Example output:

[root@vision174-150 ~]# nvme list-subsys


nvme-subsys0 - NQN=nqn.1988-11.com.dell:powerflex:00:6b63b30579b1ac0f
\
+- nvme0 tcp traddr=192.168.152.4 trsvcid=4420 live

6. (Optional) To enable the NVMe path and storage persistence beyond a reboot, type echo "-t tcp -a
<SDT IP ADDRESS> -s 4420" | tee -a /etc/nvme/discovery.conf, and then type systemctl enable
nvmf-autoconnect.service. A scripted example for all SDT IP addresses follows this procedure.
7. Reboot the host and verify that the paths and volumes persist.
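The optional persistence entries from step 6 can be scripted for every SDT IP address recorded in Discover target IP
addresses. This is a sketch only; the IP addresses below are the examples used earlier in this chapter and must be replaced
with the values from your environment:

# Append one discovery entry per SDT IP address, then enable automatic reconnect at boot
for ip in 192.168.151.2 192.168.152.2 192.168.153.2 192.168.154.2; do
    echo "-t tcp -a ${ip} -s 4420" | tee -a /etc/nvme/discovery.conf
done
systemctl enable nvmf-autoconnect.service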


11
Deploying the VMware NSX-T Ready nodes
Use this chapter to deploy the VMware NSX-T Ready nodes.
After the PowerFlex appliance is configured at the customer location by Dell, VMware services perform the NSX-T data center
installation. The NSX-T Edge hardware, physical switch configuration, and virtualization components are pre-configured for the
VMware engineer to perform the NSX-T data center installation.
If the PowerFlex management controllers (four), the NSX-T Edge nodes, or both are provided by the customer, treat the
instructions for those nodes as a recommended customer configuration.
Refer to Dell PowerFlex Appliance and PowerFlex Rack with PowerFlex 4.x Cabling and Connectivity Guide for information
about setting up VMware NSX-T in your environment.

Configuring the Cisco Nexus switches


This section provides procedures for configuring the physical switches to prepare the environment for VMware NSX-T Ready
nodes. It covers both the aggregation and access topology and the leaf-spine topology. Depending on the topology used in
the current system deployment, follow the steps for that network topology. This guide includes only Cisco Nexus switch
configuration examples.
CAUTION: The aggregation and access topology is supported with Dell switches, but this guide does not cover
those configurations. Dell switches are not supported with leaf-spine topology.

Update management switches


Perform this procedure to manually configure the management switch ports for the VMware NSX-T Edge nodes.

Prerequisites
● Determine which switch ports to use and install the cabling.
● Confirm that there are no hardware issues and all cablings are in place.

Steps
1. Configure the out-of-band (OOB) management on the management switch ports for the VMware NSX-T Edge nodes (the
port examples are for the Cisco Nexus 92348GC-X switch), as follows:

interface e1/31
description edge-01 (00:M0) m0 – nsx-edge-01
switchport access vlan 101 (Provided in Enterprise Management Platform)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
no shutdown

interface e1/32
description edge-02 (00:M0) m0 – nsx-edge-02
switchport access vlan 101 (Provided in Enterprise Management Platform)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
no shutdown

2. From the Cisco Nexus NX-OS switch CLI, type the following to save the configuration on all the switches:

copy running-config startup-config


Update aggregation switches


Perform this procedure to manually configure the aggregation switches and configure switch ports for the physical VMware
NSX-T Edge nodes. Each VMware NSX-T Edge node is connected to the aggregation switches by default. However, if port
capacity or cable distance is an issue, the VMware NSX-T Edge nodes can connect both the management and NSX-T transport
traffic ports to access switches instead of the aggregation switches. See the Dell PowerFlex Appliance and PowerFlex Rack
with PowerFlex 4.x Cabling and Connectivity Guide section.

Prerequisites
● Determine which switch ports to use and install the cabling.
● Confirm that there are no hardware issues and all cablings are in place.
● Be sure to look up the following SVIs within the Enterprise Management Platform (EMP). The SVIs must be created on the
aggregation switches depending on which networking topology is being used within the build.
● There is an option to configure either RAID 1+0 (local storage) or vSAN on the VMware NSX-T Edge nodes. If configuring a
vSAN scenario, create the vSAN VLAN and add the VLAN ID to the appropriate trunk ports.

VLAN name                                   VLAN ID   IP address    Subnet mask   Gateway       Switch side
flex-node-mgmt-<vlanid>                     105       See the EMP   See the EMP   See the EMP   Switch A, Switch B
nsx-edge-vmotion (only if required)         113       See the EMP   See the EMP   See the EMP   Switch A, Switch B
nsx-edge-vsan (only if required)            116       See the EMP   See the EMP   See the EMP   Switch A, Switch B
nsx-edge-transport-1, nsx-edge-transport-2  121       See the EMP   See the EMP   See the EMP   Switch A, Switch B
nsx-edge-external-1                         122       See the EMP   See the EMP   See the EMP   Switch A
nsx-edge-external-2                         123       See the EMP   See the EMP   See the EMP   Switch B

Steps
1. Perform this step only if configuring vSAN on the NSX-T Edge cluster and the links are connecting to the aggregation
switches. If vSAN is not being configured or the management cables are connecting to the access switches instead of the
aggregation switches, then skip this step. The vMotion traffic is not configured on the NSX-T Edge nodes since VMware
does not recommend that NSX-T Gateway VMs be migrated between VMware ESXi hosts. Configure only the vSAN VLANs
on both switch sides for the aggregation switches as follows:
CAUTION: By default, the VMware NSX-T Edge nodes do not connect to the access switches. However, if
port capacity or cable distance is an issue, the VMware NSX-T Edge nodes can connect the two management
ports to the access switches instead of the aggregation switches. The configuration below is identical if
configured on the access switches.
● Configure VLAN for the vSAN traffic on aggregation Switch A and aggregation Switch B:

vlan 116 (Provided in EMP)


name nsx-edge-vsan-116
exit

● Configure VLAN for the vMotion traffic on aggregation Switch A and aggregation Switch B:

vlan 113 (Provided in EMP)


name nsx-edge-vmotion-113
exit


2. Configure the transport VLAN and SVI on both switch sides for the aggregation switches as follows:
a. Configure transport VLAN and SVI on aggregation Switch A:

vlan 121 (Provided in EMP)


name nsx-edge-transport-1
exit

interface vlan121 (Provided in EMP)


description nsx-edge-transport-1
no shutdown
mtu 9216
no ip redirects
ip address 192.168.121.2/24 (Provided in EMP)
no ipv6 redirects
hsrp version 2
hsrp 121
authentication text dell1234
preempt
priority 110
ip 192.168.121.1 (Provided in EMP)
exit

b. Configure transport VLAN and SVI on aggregation Switch B:

vlan 121 (Provided in EMP)


name nsx-edge-transport-1
exit

interface vlan121 (Provided in EMP)


description nsx-edge-transport-1
no shutdown
mtu 9216
no ip redirects
ip address 192.168.121.3/24 (Provided in EMP)
no ipv6 redirects
hsrp version 2
hsrp 121
authentication text dell1234
preempt
ip 192.168.121.1 (Provided in EMP)
exit

3. Configure two NSX-T Edge external VLANs, and SVIs on the appropriate aggregation switch as follows.
NOTE: Do not create both on the same switch.

a. Configure NSX-T Edge external 1 VLAN and SVI only to aggregation Switch A as follows:

vlan 122 (Provided in EMP)


name nsx-edge-external-1
exit

interface port-channel 50 # This is the default peer-link where all VLANs pass
through between switches
description switch peerlink port-channel
switchport trunk allowed vlan remove 122

interface vlan122 (Provided in EMP)


description nsx-edge-external-1
no shutdown
mtu 9216
no ip redirects
ip address 192.168.122.1/28 (Provided in EMP)
no ipv6 redirects

b. Configure NSX-T Edge external 2 VLAN and SVI only to aggregation Switch B as follows:

vlan 123 (Provided in EMP)


name nsx-edge-external-2
exit


interface port-channel 50 # This is the default peer-link where all VLANs pass
through between switches
description switch peerlink port-channel
switchport trunk allowed vlan remove 123

interface vlan123 (Provided in EMP)


description nsx-edge-external-2
no shutdown
mtu 9216
no ip redirects
ip address 192.168.123.1/28 (Provided in EMP)
no ipv6 redirects

4. Configure BGP on aggregation Switch A as follows:

feature bgp

router bgp 65100 (Provided in EMP)


router-id 2.2.2.2 (Provided in EMP)
address-family ipv4 unicast
maximum-paths 10
maximum-paths ibgp 10
neighbor 192.168.122.4 (Provided in EMP)
description edge-01-vm1-uplink1
remote-as 65001 (Provided in EMP)
timers 1 3
neighbor 192.168.122.5 (Provided in EMP)
description edge-02-vm1-uplink1
remote-as 65001 (Provided in EMP)
timers 1 3
neighbor <IP of the customer peer switch> (Provided in EMP)
description peering to customer network
remote-as 65200 (Provided in EMP)
timers 1 3
exit

5. Configure BGP on aggregation Switch B as follows:

feature bgp

router bgp 65100 (Provided in EMP)


router-id 2.2.2.3 (Provided in EMP)
address-family ipv4 unicast
maximum-paths 10
maximum-paths ibgp 10
neighbor 192.168.123.4 (Provided in EMP)
description edge-01-vm1-uplink2
remote-as 65001 (Provided in EMP)
timers 1 3
neighbor 192.168.123.5 (Provided in EMP)
description edge-02-vm1-uplink2
remote-as 65001 (Provided in EMP)
timers 1 3
neighbor <IP to customer peer> (Provided in EMP)
description peering to customer network
remote-as 65200 (Provided in EMP)
timers 1 3
exit

6. Configure port-channel (LACP) on aggregation Switch A and aggregation Switch B for each Edge as follows:
NOTE: Add VLAN 116 (vSAN) to the port channel only if vSAN is being configured. If RAID 1+0 is configured instead, do
not add VLAN 116 vSAN.


The provided configuration accounts only for two VMware NSX-T Edge nodes, which is the default number when configuring
the local storage with RAID 1+0. If vSAN is being configured, ensure that a port channel is also configured for the third and
fourth VMware NSX-T Edge node.

interface port-channel60
description to NSX-Edge-1
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 60

interface port-channel61
description to NSX-Edge-2
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 61

7. Configure the VMware ESXi access ports on aggregation Switch A for each Edge as follows:
a. Configure the switch ports with LACP enabled for the VMware ESXi management traffic on aggregation switch A. If
vSAN is required, this configuration must also include vSAN traffic that shares the same two interfaces. The example
configuration below provides two NSX-T Edge nodes:
CAUTION: By default, the NSX-T Edge nodes do not connect to the access switches. However, if port
capacity or cable distance is an issue, the NSX-T Edge servers can connect the odd port connections
(management and transport traffic) to the access switches instead of the aggregation switches. The
configuration below is identical if configured on the access switches. The last two connections that are
used for external edge always reside on the aggregation switches.

NOTE: Add VLAN 116 (vSAN) to the appropriate trunk port only if vSAN is being configured. If RAID 1+0 is
configured instead, do not add VLAN 116 (vSAN) to the switches.

interface e1/29/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 60 mode active
no shutdown

interface e1/29/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 61 mode active
no shutdown


b. Configure the switch ports as trunks for the NSX-T transport traffic on aggregation Switch A. The example configuration
below provides two NSX-T Edge nodes:
CAUTION: By default, the NSX-T Edge nodes do not connect to the access switches. However, if port
capacity or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections
(management and transport traffic) to the access switches instead of the aggregation switches. The
configuration below is identical if configured on the access switches. The last two connections that are
used for external edge always reside on the aggregation switches.

interface e1/30/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown

interface e1/30/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown

c. Configure the switch port as trunks for the NSX-T external edge traffic on aggregation switch A. The example
configuration below provides two NSX-T Edge nodes:

interface e1/28/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 122 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown

interface e1/28/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 122 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown

d. From the Cisco NX-OS switch CLI, type the following to save the configuration on all the switches:

copy running-config startup-config

8. Configure ESXi access ports on aggregation Switch B for each Edge as follows:
a. Configure the switch port with LACP enabled for the ESXi management traffic on aggregation switch B. If vSAN is
required, then this configuration must also include vSAN traffic that shares the same two interfaces. The example
configuration below provides two NSX-T Edge nodes:
CAUTION: By default, the NSX-T Edge nodes do not connect to the access switches. However, if port
capacity or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections
(management and transport traffic) to the access switches instead of the aggregation switches. The
last two connections that are used for external edge always reside on the aggregation switches. The
configuration below is identical if configured on the access switches.

NOTE: Add VLAN 116 (vSAN) to the appropriate switch port only if vSAN is being configured. If RAID 1+0 is
configured instead, do not add VLAN 116 (vSAN) to the switches.

interface e1/29/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 60 mode active
no shutdown

interface e1/29/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 61 mode active
no shutdown

b. Configure the switch ports as trunks for the NSX-T transport traffic on aggregation Switch B. The example configuration
below provides two NSX-T Edge nodes:
CAUTION: By default, the NSX-T Edge nodes do not connect to the access switches. However, if port
capacity or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections
(management and transport traffic) to the access switches instead of the aggregation switches. The
last two connections that are used for external edge always reside on the aggregation switches. The
configuration below is identical if configured on the access switches.

interface e1/30/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown

interface e1/30/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown

c. Configure the switch port as trunk for the NSX-T external edge traffic on aggregation Switch B. The example
configuration below provides two NSX-T Edge nodes:

interface e1/28/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 123 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown

interface e1/28/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 123 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown

d. From the Cisco NX-OS switch CLI, type the following to save the configuration on all the switches:

copy running-config startup-config
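After saving the aggregation switch configuration, a quick verification pass helps confirm that the VLANs, port channels,
vPCs, and BGP peerings came up as expected. The commands below are a sketch only; availability and output format depend on
the Cisco NX-OS release:

show vlan brief
show port-channel summary
show vpc
show ip bgp summary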

Update access switches


CAUTION: By default, the VMware NSX-T Edge nodes do not connect to the access switches. However, if
port capacity or cable distance is an issue, then the NSX-T Edge nodes can connect the odd port connections
(management and transport traffic) to the access switches instead of the aggregation switches. The last two
connections that are used for external edge traffic always reside on the aggregation switches. Follow steps 7
(a, b, and d) and 8 (a, b, and d) of the previous section, Update aggregation switches if the connections are
connected to access switches.

Update border leaf switches


Perform this procedure to manually configure the border leaf switches and configure switch ports for the NSX-T Edge nodes.
Each NSX-T Edge node is connected to the border leaf switches by default. However, if port capacity or cable distance is
an issue, the NSX-T Edge nodes can connect both the management and NSX-T transport traffic ports to the leaf switches
instead of the border leaf switches. See the Dell PowerFlex Appliance with PowerFlex 4.x Expansion Guide for the port map
connectivity.

Prerequisites
● Determine which switch ports to use and install the cabling.
● Confirm that there are no hardware issues and all cablings are in place.
● Be sure to look up the following SVIs within the Enterprise Management Platform (EMP). These SVIs must be created on the
border leaf switches depending on which networking topology is being used within the build.
● There is an option to configure either RAID 1+0 (local storage) or vSAN on the NSX-T Edge nodes. If configuring a
vSAN scenario, create the vSAN SVI and add the VLAN ID to the appropriate trunk ports.

VLAN name                              VLAN ID   IP address   Subnet mask   Gateway   Switch side
nsx-edge-node-mgmt-105                 105       See EMP      See EMP       See EMP   Switch A, Switch B
nsx-edge-vsan-116 (only if required)   116       See EMP      See EMP       See EMP   Switch A, Switch B
nsx-edge-transport-121                 121       See EMP      See EMP       See EMP   Switch A, Switch B
nsx-edge-external-122                  122       See EMP      See EMP       See EMP   Switch A
nsx-edge-external-123                  123       See EMP      See EMP       See EMP   Switch B


Steps
1. Configure the vSAN networking configuration on both switch sides for the border leaf switches as follows:
NOTE: Perform this step only if configuring vSAN on NSX-T Edge cluster.

● Configure VLAN for the vSAN traffic on border leaf Switch A and border leaf Switch B:

vlan 116 (Provided in EMP)


name nsx-edge-vsan-116
vn-segment 800116

interface nve1
member vni 800116
suppress-arp
ingress-replication protocol bgp

evpn
vni 800116
rd auto
route-target import auto
route-target export auto
exit

● Configure VLAN for the vMotion traffic on border leaf Switch A and border leaf Switch B:

vlan 113 (Provided in EMP)


name nsx-edge-vmotion-113
vn-segment 800113

interface nve1
member vni 800113
suppress-arp
ingress-replication protocol bgp

evpn
vni 800113
rd auto
route-target import auto
route-target export auto
exit

2. Configure the transport VLAN and SVI on both switch sides for the border leaf switches as follows:
● Configure transport VLAN and SVI on border leaf Switch A and border leaf Switch B:

vlan 121 (Provided in EMP)


name nsx-edge-transport-121
vn-segment 800121

interface nve1
member vni 800121
suppress-arp
ingress-replication protocol bgp

evpn
vni 800121 121
rd auto
route-target import auto
route-target export auto
exit

interface vlan121 (Provided in EMP)


description nsx-edge-transport
vrf member VxFLEX_Management_VRF
fabric forwarding mode anycast-gateway
mtu 9216
ip address 192.168.121.1/24 (Provided in EMP)

3. Configure two Edge external SVIs on the appropriate border leaf switch as follows.
NOTE: Do not create both on each switch.


a. Configure Edge external 1 VLAN and SVI only to border leaf Switch A as follows:

vlan 122 (Provided in EMP)


name nsx-edge-external-122
vn-segment 800122

interface port-channel 50 ## This is the default peer-link where all VLANs pass
through between switches.
switchport trunk allowed vlan remove 122 # Remove VLAN from peer-link to prevent
alerts that VLAN ID do not match.

interface nve1
member vni 800122
suppress-arp
ingress-replication protocol bgp

evpn
vni 800122 122
rd auto
route-target import auto
route-target export auto

interface vlan122 (Provided in EMP)


description nsx-edge-external
vrf member VxFLEX_Management_VRF
fabric forwarding mode anycast-gateway
mtu 9216
ip address 192.168.122.1/24 (Provided in EMP)

b. Configure Edge external 2 VLAN and SVI only to border leaf Switch B as follows:

vlan 123 (Provided in EMP)


name nsx-edge-external-123
vn-segment 800123

interface port-channel 50 ## This is the default peer-link where all VLANs pass
through between switches.
switchport trunk allowed vlan remove 123 # Remove VLAN from peer-link to prevent
alerts that VLAN ID do not match.

interface nve1
member vni 800123
suppress-arp
ingress-replication protocol bgp

evpn
vni 800123 123
rd auto
route-target import auto
route-target export auto

interface vlan123 (Provided in EMP)


description nsx-edge-external
vrf member VxFLEX_Management_VRF
fabric forwarding mode anycast-gateway
mtu 9216
ip address 192.168.123.1/24 (Provided in EMP)

4. Configure BGP on border leaf Switch A as follows:

router bgp 65100 (Provided in EMP)


router-id 2.2.2.2 (Provided in EMP)
vrf VxFLEX_Management_VRF
address-family ipv4 unicast
maximum-paths 10
maximum-paths ibgp 10
neighbor 192.168.122.4 (Provided in EMP)
description edge-01-vm1-uplink1
remote-as 65001 (Provided in EMP)
timers 1 3
neighbor 192.168.122.5 (Provided in EMP)
description edge-02-vm1-uplink1
remote-as 65001 (Provided in EMP)
timers 1 3
neighbor <IP to customer peer> (Provided in EMP)
description peering to customer network
remote-as 65200 (Provided in EMP)
timers 1 3

5. Configure BGP on border leaf Switch B as follows:

router bgp 65100 (Provided in EMP)


router-id 2.2.2.3 (Provided in EMP)
vrf VxFLEX_Management_VRF
address-family ipv4 unicast
maximum-paths 10
maximum-paths ibgp 10
neighbor 192.168.123.4 (Provided in EMP)
description edge-01-vm1-uplink2
remote-as 65001 (Provided in EMP)
timers 1 3
neighbor 192.168.123.5 (Provided in EMP)
description edge-02-vm1-uplink2
remote-as 65001 (Provided in EMP)
timers 1 3
neighbor <IP to customer peer> (Provided in EMP)
description peering to customer network
remote-as 65200 (Provided in EMP)
timers 1 3

6. Configure port-channel (LACP) on border leaf Switch A and border leaf Switch B for each NSX-T Edge node as follows:
NOTE: Add VLAN 116 (vSAN) to the port-channel only if vSAN is being configured. If RAID 1+0 is configured instead, do
not add VLAN 116 vSAN.

interface port-channel60
description to NSX-Edge-1
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 60

interface port-channel61
description to NSX-Edge-2
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 61

7. Configure VMware ESXi access ports on border leaf Switch A for each NSX-T Edge node as follows:
a. Configure the switch port for the ESXi management traffic on border leaf switch A. If vSAN is required, then this
configuration includes vSAN traffic that shares the same two interfaces. The example configuration below provides two
NSX-T Edge nodes:
NOTE: Add VLAN 116 (vSAN) to the appropriate trunk port only if vSAN is being configured. If RAID 1+0 is
configured instead, do not add VLAN 116 (vSAN).


WARNING: By default, the NSX-T Edge nodes do not connect to leaf switches. However, if port capacity
or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections (management
and transport traffic) to the leaf switches instead of the border leaf switches. The last two connections
used for external edge always reside on the border leaf switches. The configuration below is identical if
configured on the leaf switches.

interface e1/29/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 60 mode active
no shutdown

interface e1/29/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 61 mode active
no shutdown

b. Configure the switch ports for the NSX-T transport traffic on border leaf Switch A. The example configuration below
provides two NSX-T Edge nodes:
WARNING: By default, the NSX-T Edge nodes do not connect to leaf switches. However, if port capacity
or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections (management
and transport traffic) to the leaf switches instead of the border leaf switches. The last two connections
used for external edge always reside on the border leaf switches. The configuration below is identical if
configured on the leaf switches.

interface e1/30/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown

interface e1/30/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown

c. Configure the switch port for the NSX-T external edge traffic on border leaf Switch A. The example configuration below
provides two NSX-T Edge nodes:

interface e1/28/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 122 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown

interface e1/28/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 122 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown

d. From the Cisco NX-OS switch CLI, type the following to save the configuration on all the switches:

copy running-config startup-config

8. Configure the port channels (LACP) on border leaf Switch B for each NSX-T Edge node as follows:
NOTE: Add VLAN 116 (vSAN) to the port-channel only if vSAN is being configured. If RAID 1+0 is configured instead,
do not add VLAN 116 (vSAN).
The sample port-channel configuration below configures two NSX-T Edge nodes.

interface port-channel60
description to NSX-Edge-1
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 60

interface port-channel61
description to NSX-Edge-2
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 61

9. Configure ESXi access ports on border leaf Switch B for each NSX-T Edge node as follows:
a. Configure the switch port for the ESXi management traffic on border leaf Switch B. If vSAN is required, this
configuration also includes vSAN traffic that shares the same two interfaces. The example configuration below provides
two NSX-T Edge nodes.
NOTE: Add VLAN 116 (vSAN) to the appropriate trunk port only if vSAN is being configured. If RAID 1+0 is
configured instead, do not add VLAN 116 (vSAN).

WARNING: By default, the NSX-T Edge nodes do not connect to leaf switches. However, if port capacity
or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections (management
and transport traffic) to the leaf switches instead of the border leaf switches. The last two connections
used for external edge always reside on the border leaf switches. The configuration below is identical if
configured on the leaf switches.

interface e1/29/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 60 mode active
no shutdown

interface e1/29/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 61 mode active
no shutdown

b. Configure the switch ports for the NSX-T transport traffic on border leaf Switch B. The example configuration below
provides two NSX-T Edge nodes:
WARNING: By default, the NSX-T Edge nodes do not connect to leaf switches. However, if port capacity
or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections (management
and transport traffic) to the leaf switches instead of the border leaf switches. The last two connections
used for external edge always reside on the border leaf switches. The configuration below is identical if
configured on the leaf switches.

interface e1/30/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown

interface e1/30/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown

c. Configure the switch port for the NSX-T external edge traffic on border leaf Switch B. The example configuration below
provides two NSX-T Edge nodes:

interface e1/28/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 123 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown

interface e1/28/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 123 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown

d. From the Cisco NX-OS switch CLI, type the following to save the configuration on all the switches:

copy running-config startup-config
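For the leaf-spine topology, a similar verification pass applies, plus the VXLAN and EVPN state for the new VNIs. This is a
sketch only; command availability depends on the Cisco NX-OS release and the fabric configuration:

show vlan brief
show port-channel summary
show nve vni
show nve peers
show bgp l2vpn evpn summary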

Update leaf switches


CAUTION: By default, the NSX-T Edge nodes do not connect to the leaf switches. However, if port capacity
or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections (management and
transport traffic) to the leaf switches instead of the border leaf switches. The last two connections that
are used for external edge always reside on the border leaf switches. Follow steps 6, 7 (a, b, and d), 8, and 9
(a, b, and d) in the previous section, Update border leaf switches, if the connections are connected to leaf
switches.

Configuring the VMware NSX-T Edge hosts for VMware ESXi
This section describes how to fully configure the VMware ESXi environment for the VMware NSX-T Edge nodes.

About this task


NOTE: If the customer provides the NSX-T Edge nodes, use this section as a recommendation and not as a customer
requirement.

CAUTION: VMware vSphere configuration steps are based on VMware vSphere 7.0.

Configure iDRAC network settings


Perform this procedure to configure iDRAC network settings on the VMware NSX-T Edge nodes.

Prerequisites
Before assigning any IP addresses, perform a ping test to validate that no duplicate IP addresses are being assigned to the
new node's iDRAC.

Steps
1. Use a keyboard and monitor, a KVM, or a crash cart and connect to the new node.
2. Power on the node and press F2 to open the BIOS setup. Use the password emcbios.
3. From the System Setup main menu, select iDRAC Settings > Network.
4. Confirm Enabled is set to Enable NIC, and NIC Selection is set to Dedicated.
5. Under IPv4 Settings, configure the settings using the details that were recorded for the following fields:
● Clear the DHCP Enable option to ensure that DHCP is not enabled.
● Static IP address
● Static Subnet Mask
● Static Gateway
6. Under IPv6 Settings, ensure IPv6 is disabled.
7. Click Back.
8. Change the iDRAC password to match the ones configured on the other PowerFlex nodes, as follows:


a. Click User Configuration. The username should be root.


b. Enter the new iDRAC password in the Change Password field.
c. When prompted, retype the new password and click OK.
d. Click Back > Finish.
e. When prompted, click Yes to confirm.
9. When a dialog box appears, click Yes.
You are returned to the System Setup main menu.

Update the BIOS and system firmware


Update BIOS and system firmware as needed on the VMware NSX-T Edge nodes.

Prerequisites
● Obtain access to the iDRAC web interface.
● Verify access to the upgrade files:
○ BIOS installer: IC location/BIOS
○ iDRAC installer: IC location/iDRAC
○ Backplane Expander firmware installer: IC location/Backplane
○ Network firmware installer: IC location/Intel NIC Firmware
○ SAS firmware installer: IC location/PERC H755 Firmware
○ BOSS controller firmware installer: IC location/BOSS controller firmware

Steps
1. From the iDRAC web interface, click Maintenance > System Update.
2. Click Choose File, browse to and select the BIOS file, and click Upload.
3. Click Choose File, browse to and select the iDRAC file, and click Upload.
4. Click Choose File, browse to and select the network update files, and click Upload.
5. Click Choose File, browse to and select the SAS update file, and click Upload.
6. Click Choose File, browse to and select the backplane expander firmware file, and click Upload.
7. Click Choose File, browse to and select the BOSS controller firmware file, and click Upload.
8. Under Update Details, select all updates.
9. Click Install and Reboot or Install Next Reboot.
The following message appears: Updating Job Queue.
10. Click Job Queue to monitor the progress of the install.
11. Wait for the Firmware Update: BIOS job to complete its Downloading state.
When the job reaches a Scheduled state, a Pending Reboot task appears.
12. Click Reboot. The node boots and updates the BIOS.
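If several NSX-T Edge nodes must be updated, the same update packages can usually be pushed with remote racadm instead of
the iDRAC web interface. A minimal sketch, assuming remote racadm is installed on the jump server; the file name is a
placeholder for the Dell Update Package from the IC location:

racadm -r <iDRAC IP> -u root -p <iDRAC password> update -f BIOS_xxx.EXE
racadm -r <iDRAC IP> -u root -p <iDRAC password> jobqueue view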

Disable the hot spare power supply


To avoid power issues on a fully populated PowerFlex appliance, you must disable the hot spare power supply. This procedure
ensures that the power is balanced across both power supply units (PSUs).

Prerequisites
Ensure you have access to the iDRAC.

Steps
1. Log in to the iDRAC Web Console (username: root, password: P@ssw0rd!).
2. For PowerFlex appliance R640/R740xd/R840, click iDRAC > Configuration > Power Management > Power
Configuration > Hot Spare > Disabled.
3. Click Apply.
4. Repeat the steps for the remaining ESXi nodes.


Configure system monitoring


Use these steps to configure system monitoring for the VMware NSX-T Edge nodes.

Prerequisites
Verify that you have access to the iDRAC.

Steps
1. Log in to the iDRAC interface of the node.
2. For PowerFlex appliance R640 (NSX-T Edge node), perform the following steps:
a. Click Configure > System Settings > Alert Configuration > SNMP Traps Configuration. Enter the IP address into
the Alert Destination IP field and select the State checkbox. Click Apply.
NOTE: The read-only community string is already populated. Do not remove this entry.

b. To add the destination IP address for an existing customer monitoring system, enter the PowerFlex Manager IP address
into the Alert Destination2 IP field, select the State checkbox, and click Apply.
NOTE: Customer monitoring must support SNMP v2 and use the community string already configured.

3. Click Apply.

Enable UEFI and configure data protection for the BOSS card
Use these steps to manually configure the data protection (RAID 1) for the BOSS card and enable UEFI on the VMware NSX-T
Edge nodes.

Steps
1. From the iDRAC Dashboard, launch the virtual console and select BIOS setup from the Boot menu to enter system BIOS.
2. Power cycle the server and enter BIOS setup.
3. From the System Setup main menu, select Device Settings.
4. Select AHCI Controller in Slot1: BOSS-1 Configuration Utility.
5. Select Create RAID Configuration.
6. Select both devices and click Next.
7. Enter VD_R1_1 for the name and leave the default values.
8. Click Yes to create the virtual disk and click OK to apply the new configuration.
9. Click Next > OK.
10. Select the VD_R1_1 that was created in the previous step and click Back > Back > Finish.
11. Select System BIOS.
12. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled
● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
13. Click Back > Finish.
14. Click Finish to reboot the node.
15. Boot the node into BIOS mode by pressing F2 during boot.
16. Select System BIOS > Boot Settings > UEFI Boot Settings.
17. Select UEFI Boot Sequence to change the order.
18. Click AHCI Controller in Slot 1: EFI Fixed Disk Boot Device 1, and select + to move to the top.
19. Click OK.
20. Click Back > Back.


21. Click Finish again to reboot the node.

Disabling IPMI for NSX-T Edge nodes


Perform the appropriate procedure to disable IPMI on the NSX-T Edge nodes.

Disable IPMI using a Windows-based jump server


Use these steps to disable IPMI for NSX-T Edge nodes using a Windows-based jump server.

Prerequisites
Ensure that iDRAC command-line tools are installed on the system jump server.

Steps
1. For a single NSX-T Edge node:
a. From the jump server, open a PowerShell session.
b. Enter racadm -r x.x.x.x -u root -p yyyyy config -g cfgIpmiLan -o cfgIpmiLanEnable 0.
Where x.x.x.x is the IP address of the iDRAC node and yyyyy is the iDRAC password.
2. For multiple NSX-T Edge nodes:
a. From the jump server, at the root of the C: drive, create a folder that is named ipmi.
b. From the File Explorer, go to View and select the File Name extensions check box.
c. Open a notepad file, and paste this text into the file: powershell -noprofile -executionpolicy bypass
-file ".\disableIPMI.ps1"
d. Save the file, and rename it runme.cmd in C:\ipmi.
e. Open a notepad file, and paste this text into the file: import-csv $pwd\hosts.csv -Header:"Hosts" |
Select-Object -ExpandProperty hosts | % {racadm -r $_ -u root -p XXXXXX config -g
cfgIpmiLan -o cfgIpmiLanEnable 0}
Where XXXXXX is the customer password that must be changed.
f. Save the file, and rename it disableIPMI.ps1 in C:\ipmi.
g. Open a notepad file, and list all the iDRAC IP addresses that must be included, one per line.

h. Save the file, and rename it hosts.csv in C:\ipmi.


i. Open a PowerShell session, and go to C:\ipmi.
j. Enter .\runme.cmd.
Example output:
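The exact output depends on the environment and racadm version; for each address listed in hosts.csv, it generally resembles the following confirmation:

Object value modified successfully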


Disable IPMI using an embedded operating system-based jump server


Use these steps to disable IPMI for NSX-T Edge nodes using an embedded operating system-based jump server.

Prerequisites
Ensure that the iDRAC command-line tools are installed on the system jump server.

Steps
1. For a single NSX-T Edge node:
a. From the jump server, open a terminal session.
b. Enter racadm -r x.x.x.x -u root -p yyyyy config -g cfgIpmiLan -o cfgIpmiLanEnable 0,
where x.x.x.x is the IP address of the iDRAC node and yyyyy is the iDRAC password.
2. For multiple NSX-T Edge nodes:
a. From the jump server, open a terminal window.
b. Create (or edit) a text file named idracs and enter the IP address of each iDRAC, one per line.
c. Save the file.
d. From the prompt, enter while read line; do echo "$line"; racadm -r $line -u root -p yyyyy
config -g cfgIpmiLan -o cfgIpmiLanEnable 0; done < idracs, where yyyyy is the iDRAC password.
This output displays the IP address for each iDRAC, and the output from the racadm command:
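The exact output depends on the environment and racadm version; for each address in the idracs file, the echoed IP address is generally followed by a confirmation similar to the following (the address shown is a placeholder):

192.168.105.11
Object value modified successfully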


Configure data protection for the PERC Mini Controller


Use these steps only if local storage with data protection (RAID 1+0) is configured on the ESXi NSX-T Edge hosts.

Prerequisites
iDRAC must be configured and reachable.

About this task


CAUTION: By default, the VMware NSX-T Edge nodes are configured using local storage with RAID 1+0 enabled
and come with eight SSD hard drives. VMware recommends using local storage with RAID 1+0 enabled as the
NSX-T Edge gateway VMs have their own method of providing availability at the services level. However, if
VMware services recommends vSAN, skip these steps.

Steps
1. Launch the virtual console and, from the Boot menu, set the next boot device to BIOS Setup to enter the system BIOS.
2. Power cycle the server and wait for the boot menu to appear.
3. From the System Setup main menu, select Device Settings > Integrated RAID Controller 1: Dell <PERC H755P Mini>
Configuration Utility.
4. Select Main Menu > Configuration Management > Create Virtual Disk.
5. Set the RAID Level option to RAID10 and click Select Physical Disks.
6. Set Select Media Type to SSD to view all the disk drives.
7. Select only the first four disk drives and click Apply Changes.
NOTE: RAID10 requires an even number of disks and does not work with the default five disks that come with the
VMware NSX-T Edge nodes.

8. Click OK.
9. Click Create Virtual Disk and select Confirm box before selecting Yes.
10. Click Yes > OK.
11. Click Back > Back to the main menu.
12. Click Virtual Disk Management to view the initialization process.
NOTE: The initialization process can occur in the background while the ESXi is installed.

13. Click Back > Back to the configuration utility.


14. Click Finish > Finish and click Yes when prompted to restart.
15. Repeat steps 1 through 14 for each new node.


Install and configure VMware ESXi


Use this procedure to install VMware ESXi on the VMware NSX-T Edge node.

Prerequisites
Verify that the customer VMware ESXi ISO is available and is located in the Intelligent Catalog (IC) code directory.

Steps
1. Configure the iDRAC:
a. Connect to the iDRAC interface, go to the Dashboard, and click Launch Virtual Console.
b. Select Connect Virtual Media > Map CD/DVD.
c. Browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm the boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes to confirm power action.
2. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the installation location and press Enter.
d. Select US Default as the keyboard layout and press Enter to continue.
e. At the prompt, type the customer-provided root password or use the default password VMwar3!!. Press Enter.
f. When the Confirm Install screen is displayed, press F11.
g. Press Enter to reboot the node.
3. Configure the host:
a. Press F2 to access the System Customization menu.
b. Enter the password for the root user.
c. Go to Direct Console User Interface (DCUI) > Configure Management Network.
d. Set the following options under Configure Management Network:
● Network Adapters: Select vmnic2 and vmnic6.
● VLAN: See Enterprise Management Platform (EMP) for VLAN. The standard VLAN is 105.
● IPv4 Configuration: Set static IPv4 address and network configuration. See EMP for the IPv4 address, subnet
mask, and the default gateway.
● DNS Configuration: See EMP for the primary DNS server and alternate DNS server.
○ Custom DNS Suffixes: See EMP.
● IPv6 Configuration: Disable IPv6.
e. Press ESC to return to DCUI.
f. Type Y to commit the changes and the node restarts.
4. Use the command line to set the IP hash:
a. From the DCUI, press F2 to customize the system.
b. Enter the password for the root user.
c. Select Troubleshooting Options and press Enter.
d. From the Troubleshooting Mode Options menu, enable the following:
● ESXi Shell
● Enable SSH
e. Press Enter to enable the service.
f. Press <Alt>+F1 and log in.
g. To enable the VMware ESXi host to work on the port channel, type the following commands (a verification sketch
follows these steps):
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash

esxcli network vswitch standard portgroup policy failover set -p "Management Network"
-l iphash


h. Press <Alt>+F2 to return to the DCUI.

i. Press ESC to exit the DCUI.
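
As referenced in step 4g, you can optionally confirm that the IP hash policy was applied before exiting the shell. A minimal verification sketch from the same VMware ESXi shell session:

esxcli network vswitch standard policy failover get -v vSwitch0
esxcli network vswitch standard portgroup policy failover get -p "Management Network"

The Load Balancing value in both outputs should report iphash.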

Create a vSphere cluster and add NSX-T Edge hosts to VMware vCenter


Use these steps to add the NSX-T Edge nodes to VMware vCenter and configure NTP. This procedure also contains steps to
configure vSAN (only if it is required).

Prerequisites
Ensure you have access to the VMware vSphere Client.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Create an NSX-T vSphere cluster:
a. Right-click PowerFlex Customer-Datacenter and select New Cluster to open the wizard.
NOTE: If more than one customer vCenter datacenter exists, the NSX-T Edge nodes can be deployed in any customer
datacenter as long as they are not deployed within the management datacenter.

b. Enter PFNC for the name.


c. Leave vSphere DRS, vSphere HA and vSAN turned off (default).
NOTE: If required, vSAN is configured later.

d. Click Next > Finish.


4. Access the PowerFlex Customer-Datacenter and right-click the PFNC cluster.
5. Click Add Hosts.
6. On the Add Hosts screen, enter the hostname, username as root, and the root password for the host, and click Next.
NOTE: You can add multiple hosts by clicking Add Host.

7. Click Yes to replace the host certificate and click Next.


NOTE: If you have multiple ESXi hosts, select the ESXi hosts and click OK to accept multiple certificates.

8. On the Ready to complete screen, review the host summary and click Finish.
This step forces each ESXi host to enter maintenance mode.
9. For each ESXi host, right-click the Edge ESXi node and select Maintenance Mode > Exit Maintenance Mode.
NOTE: vCLS VMs deploy automatically when a host is added to the vCenter cluster. Each cluster has a maximum of
three vCLS VMs.

Add the new VMware ESXi local datastore and rename the
operating system datastore (RAID local storage only)
Use this procedure only if the existing production VMware NSX-T Edge nodes do not have vSAN configured. This procedure
manually adds the new local datastore that was created from the RAID utility to VMware ESXi. By default, the VMware NSX-T
Edge nodes are configured using the local storage with RAID1+0 enabled and come with eight SSD hard drives. Using the local
storage with RAID1+0 enabled is the preferred method recommended by VMware because the NSX-T Edge gateway VMs have
their own method of providing availability at the services level. However, if VMware professional services recommends vSAN,
then skip this procedure.

Prerequisites
Ensure that you have access to the VMware vSphere Client.


Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select Edge-Cluster.
4. Rename the local operating system datastore to BOSS card:
a. Select an NSX-T Edge ESXi host.
b. Click Datastores.
c. Right-click the smaller size datastore (OS) and click Rename.
d. To name the datastore, type Edge-Cluster-<nsx-t edge host shortname>-DASOS.
5. Right-click the third NSX Edge ESXi server and select Storage > New Datastore to open the wizard. Perform the
following:
a. Verify that VMFS is selected and click Next.
b. Name the datastore using Edge-Cluster_DAS01.
c. Click the LUN that has disks created in RAID 10.
d. Click Next > Finish.
6. Repeat steps 1 through 5 for the remaining VMware NSX-T Edge nodes.

Enable and configure vSAN on the NSX-T Edge cluster (vSAN storage option)


Perform this procedure only if enabling and configuring vSAN for the NSX-T Edge cluster.

Prerequisites
● Ensure that you have access to the management VMware vSphere Client.
● VMware ESXi must be installed with hosts added to the vCenter.

About this task


CAUTION: The NSX-T Edge nodes by default are configured using local storage with RAID 1+0 enabled and come
with eight SSD hard drives. VMware recommends using local storage with RAID 1+0 enabled as the NSX-T Edge
gateway VMs have their own method of providing availability at the services level. However, if VMware services
recommends vSAN, then proceed with the following steps.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select EDGE-CLUSTER cluster.
4. Rename local OS Datastore:
a. Select an NSX-T Edge ESXi host.
b. Click Datastores.
c. Right-click the smaller size datastore (operating system) and select Rename.
d. Name the datastore using the <nsx-t edge host short name>-DASOS.
5. Repeat steps 1 through 4 for the remaining NSX-T Edge nodes.
6. Select the EDGE-CLUSTER cluster, and click Configure > vSAN > Services.
7. Click Configure vSAN to open the wizard:
a. Leave default, Single site cluster and click Next.
b. Leave default and click Next.
c. For NSX-T Edge ready nodes within the NSX-T cluster, select a VMware ESXi host, then select the disks and claim them
as cache or capacity tier using the Claim For icon as follows:
● Identify one SSD disk to be used for the cache tier (generally 1-2 disks of the same model). Select the disk and then select
cache tier from the drop-down.
● Identify the remaining four capacity drives. Select the remaining disks and select capacity tier from the drop-
down, and then click Next > Finish.


NOTE: Sometimes, two separate disk groups per host are needed. To do this, be sure that two disks are tagged as
cache tier and the remaining disks as capacity tier. A new disk group is created for each disk that is tagged as cache
tier.

Configure NTP settings


Use this procedure to configure the NTP and scratch partition settings for each VMware NSX-T Edge host.

Prerequisites
Ensure that the NSX-T Edge ESXi hosts are added to VMware vCenter server. VMware ESXi must be installed with hosts added
to the VMware vCenter.

Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Click Datacenter > Edge-Cluster.
4. Configure NTP on VMware ESXi NSX-T Edge host as follows:
a. Select a VMware ESXi NSX-T Edge host.
b. Click Configure > System > Time Configuration and click Edit from Network Time Protocol.
c. Select the Enable check box.
d. Enter the NTP servers as recorded in the Enterprise Management Platform (EMP). Set the NTP service startup policy as
Start and stop with host, and select Start NTP service.
e. Click OK.
5. Repeat for each VMware NSX-T Edge host.
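
Optionally, you can confirm time synchronization from an SSH session to each host. This is a sketch that assumes the esxcli system ntp namespace is available in your VMware ESXi 7.0 build:

esxcli system ntp get

The output should show the service enabled and list the NTP servers recorded in the EMP.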

Configuring virtual networking for NSX-T Edge nodes


Use these procedures to create and configure the distributed virtual switches within the customer vCenter Server.

Create and configure the NSX-T Edge distributed switches


Perform this procedure to create and configure the distributed virtual switches within the customer vCenter Server.

Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client (HTML5) are accessible.

Create and configure the two distributed virtual switches

Steps
1. Create and configure edge-dvswitch0 as follows:
a. Log in to the VMware vSphere Client HTML5.
b. Click Networking.
c. Right-click the data center (Workbook default name).
d. Create edge_dvswitch0 as follows:
i. Click Distributed Switch > New Distributed Switch.
ii. Update the name to Edge_Dvswitch0 and click Next.
iii. Choose the version 7.0.2-ESXi-7.0.2 and later and click Next.
iv. Select 2 for the number of Uplinks.
v. Select Enabled from the Network I/O Control menu.
vi. Clear the Create default port group option.
vii. Click Next > Finish.


e. Configure LLDP as follows:


i. Right-click Edge_Dvswitch0 > Settings > Edit Settings to open wizard.
ii. Click Advanced.
iii. Change Type to Link Layer Discovery Protocol
iv. Change Operation to Both.
v. Click OK.
f. Configure LAG (LACP) on Edge_Dvswitch0 as follows:
i. Click Edge_Dvswitch0 > Configure > LACP.
ii. Click + New to open the wizard.
iii. Verify that name is lag1.
iv. Verify that number of ports is 2.
v. Verify that mode is Active.
vi. Change Load Balancing mode to Source and Destination IP address and TCP/UDP Port .
vii. Click OK.
2. Create and configure edge-dvswitch1 as follows:
a. Log in to the VMware vSphere Client HTML5.
b. Click Networking.
c. Right-click the data center (Workbook default name).
d. Create edge_dvswitch1 as follows:
i. Click Distributed Switch > New Distributed Switch.
ii. Update the name to Edge_Dvswitch1 and click Next.
iii. Choose the version 7.0.2-ESXi-7.0.2 and later and click Next.
iv. Select 4 for the number of Uplinks.
v. Select Enabled from the Network I/O Control menu.
vi. Clear the Create default port group option.
vii. Click Next > Finish.
e. Configure LLDP as follows:
i. Right-click Edge_Dvswitch1 > Settings > Edit Settings to open the wizard.
ii. Click Advanced.
iii. Enter 9000 for the MTU.
iv. Change Type to Link Layer Discovery Protocol.
v. Change Operation to Both.
vi. Click OK.

Create and configure the two distributed port groups

Steps
1. Create and configure edge-cluster-node-mgmt-105 distributed port group:
a. Right-click the edge_dvswitch0 (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to edge-cluster-node-mgmt-105 and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (Default is 8) .
g. Select the default VLAN as VLAN Type .
h. Set the VLAN ID to 105.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the edge-cluster-node-mgmt-105 and click Edit Settings....
l. Click Teaming and failover.
m. Change Load Balancing mode to Route based on IP hash.
n. Verify that lag1 is active, and the Uplink1 and Uplink2 are unused.
o. Click OK.
2. Create and configure edge-cluster-nsx-vsan-116 distributed port group:


WARNING: There are two vSAN options for each build: full vSAN configuration and partial vSAN
configuration. If partial vSAN is configured, skip this step because partial configuration only means configuring
the VLAN on the physical switches. vSAN at the virtualization layer is then configured onsite.

NOTE: This step is required only if vSAN is used.

a. Right-click the edge_dvswitch0 (Workbook default name).


b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to edge-cluster-vsan-116 and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (Default is 8) .
g. Select the default VLAN as VLAN Type .
h. Set the VLAN ID to 116.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the edge-cluster-vsan-116 and click Edit Settings....
l. Click Teaming and failover.
m. Change Load Balancing mode to Route based on IP hash.
n. Verify that lag1 is active, and the Uplink1 and Uplink2 are unused.
o. Click OK.

Create the distributed port groups for dvswitch1

Steps
1. Create and configure edge-cluster-nsx-transport-121 distributed port group:
a. Right-click the edge_dvswitch1 (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to edge-cluster-nsx-transport-121 and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (Default is 8).
g. Select the default VLAN as VLAN Type .
h. Set the VLAN ID to 121.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the edge-cluster-nsx-transport-121 and click Edit Settings....
l. Click Teaming and failover.
NOTE: Verify that the load balance policy is configured as Route based on originating virtual port.

m. Click the down arrow to move Uplink3 and Uplink4 to Unused.


NOTE: Only Uplink1 and Uplink2 must be active for this port group.

n. Click OK.
2. Create and configure edge-cluster-nsx-edge1-122 distributed port group:
a. Right-click the edge_dvswitch1 (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to edge-cluster-nsx-edge1-122 and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (Default is 8) .
g. Select the default VLAN as VLAN Type .
h. Set the VLAN ID to 122.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.


k. Right-click the edge-cluster-nsx-edge1-122 and click Edit Settings....


l. Click Teaming and failover.
m. Click the down arrow to move Uplink1, Uplink2, and Uplink4 to Unused.
NOTE: Only Uplink3 must be active for this port group.

n. Click OK.
3. Create and configure edge-cluster-nsx-edge2-123 distributed port group:
a. Right-click the edge_dvswitch1 (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to edge-cluster-nsx-edge2-123 and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (Default is 8) .
g. Select the default VLAN as VLAN Type .
h. Set the VLAN ID to 123.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the edge-cluster-nsx-edge2-123 and click Edit Settings....
l. Click Teaming and failover.
m. Click the down arrow to move Uplink1, Uplink2, and Uplink3 to Unused.
NOTE: Only Uplink4 must be active for this port group.

n. Click OK.

Add the VMware NSX-T Edge node to edge_dvswitch0


Use this procedure to add the new VMware NSX-T Edge node to the edge_dvswitch0 and configure the VMkernel networking.

Prerequisites
Ensure you have access to the management VMware vSphere Client.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand PowerFlex Customer - Datacenter.
4. Right-click edge_dvswitch0 and select Add and Manage Hosts.
5. Select Add hosts and click Next.
6. In Manage Physical Adapters, perform the following:
a. Click Assign Uplink for vmnic0.
b. Select lag1-0.
c. Click Assign Uplink for vmnic2.
d. Select lag1-1.
e. Click Next.
7. In Manage VMkernel Adapters, perform the following:
a. Select vmk0 and click Assign portgroup.
b. Select Edge-Cluster-node-mgmt-105 and click X to close.
c. Click Next > Next > Next.
8. In the Ready to Complete screen, review the details, and click Finish.
9. If VMware vSAN is required, create and configure the edge-cluster-vsan-114 VMkernel network adapter distributed port
group:
NOTE: The vMotion VMkernel network adapter is not configured by default. Availability depends on the NSX-T Edge
Gateway VM service level.


a. Log in to the VMware vSphere Client.


b. Click Hosts and Clusters.
c. Expand Datacenter and Edge-Cluster to view the VMware ESXi edge hosts.
d. Click ESXi edge host > Configure > VMkernel adapters.
e. Click Add Networking... to open the wizard.
f. At the VMkernel Network Adapter screen, leave the default values and click Next.
g. At the Select an existing network screen, leave the default values and click Browse....
h. Select edge-cluster-vsan-114 and click OK.
i. Click Enable vSAN and click Next.
j. Select Use static IPv4 settings and enter the IPv4 address and Subnet mask.
k. Click Next > Finish.

Add the VMware NSX-T Edge node to edge_dvswitch1


Use this procedure to add the new VMware NSX-T Edge node to the edge_dvswitch1.

Prerequisites
Ensure that you have access to the management VMware vSphere Client.

Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand PowerFlex Customer - Datacenter.
4. Right-click edge_dvswitch1 and select Add and Manage Hosts.
5. Select Add hosts and click Next.
6. In Manage Physical Adapters, perform the following:
a. Click Assign Uplink for vmnic5.
b. Select Uplink 1.
c. Click Assign Uplink for vmnic3.
d. Select Uplink 2.
e. Click Assign Uplink for vmnic7.
f. Select Uplink 3.
g. Click Assign Uplink for vmnic4.
h. Select Uplink 4.
7. Click Next > Next > Next.
8. In the Ready to Complete screen, review the details, and click Finish.

Patch and install drivers for VMware ESXi


Use this procedure if VMware ESXi drivers differ from the current Intelligent Catalog (IC). Patch and install the VMware ESXi
drivers using the VMware vSphere Client.

Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog (IC) level.

Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi NSX-T Edge host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.


6. Select the Upload icon (to upload file to the datastore).


7. Browse to the ESXi folder or downloaded current solution Intelligent Catalog (IC) files.
8. Select the VMware ESXi patch zip files according to the current solution Intelligent Catalog (IC) and node type and click OK
to upload.
9. Select the driver and vib files according to the current Intelligent Catalog (IC) and node type and click OK to upload.
10. Click Hosts and Clusters.
11. Locate the VMware ESXi host, right-click, and select Enter Maintenance Mode.
12. Open an SSH session with the VMware ESXi host using PuTTY or a similar SSH client.
13. Log in as root.
14. Type cd /vmfs/volumes/Edge-Cluster-<nsx-t edge host shortname>-DASXX, where XX identifies the local
datastore that is assigned to the VMware ESXi server.
15. To display the contents of the directory, type ls.
16. Type esxcli software vib install -v /vmfs/volumes/Edge-Cluster-<nsx-t edge host shortname>-
DASXX/patchname.vib to install the vib.
17. Type esxcli software vib update -d /vmfs/volumes/DAS<name>/VMware-ESXi-7.0<version>-
depot.zip.
18. Type reboot to reboot the host.
19. Once the host completes rebooting, open an SSH session with the VMware ESXi host, and type esxcli software vib
list |grep net-i to verify that the correct drivers loaded.
20. Select the host and click Exit Maintenance Mode.
21. Update the test plan and host tracker with the results.

Configuring the hyperconverged or compute-only transport nodes


This section describes how to configure the PowerFlex hyperconverged or compute-only nodes as part of getting the PowerFlex
appliance ready for NSX-T.

About this task


CAUTION: vSphere configuration steps are based on vSphere 7.0. Steps for a version prior to vSphere 7.0 may
differ.

Configure the NSX-T overlay distributed virtual port group


Use these steps to create and configure the NSX-T overlay distributed virtual port group on cust_dvswitch.

Prerequisites
● Ensure that the VMware vSphere vCenter Server and the vSphere Client are accessible.
● Allow VLAN 121 on server facing ports in both switches.

Steps
1. Log in to the VMware vSphere Client HTML5.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click flex_dvswitch0 (Workbook default name).
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to edge-cluster-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8) .


10. Select the default VLAN as VLAN Type.


11. Set the VLAN ID to 121.
12. Clear the Customize default policies configuration and click Next.
13. Click Finish.
14. Right-click the edge-cluster-nsx-transport-121 and click Edit Settings....
15. Click Teaming and failover.
16. Verify that Uplink1 and Uplink2 are moved to Active.
17. Click OK.

Convert trunk access to LACP-enabled switch ports for flex_dvswitch (option 1)


Perform this procedure to convert the physical NICs from trunk to LACP without losing network connectivity. Use this option
only if flex_dvswitch is configured as trunk. LACP is the default configuration.

Prerequisites
Both Cisco Nexus access switch ports for the compute ESXi hosts are configured as trunk access. These ports will be
configured as LACP enabled after the physical adapter is removed from each VMware ESXi host.
NOTE: Since the VMK0 VMware (ESXi management) is not configured on flex_dvswitch, both the vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is lost until
the port channels are brought online with both vmnic interfaces connected to LAGs.

About this task


This procedure includes reconfiguring one port at a time as an LACP without requiring any migration of the VMkernel network
adapters.

Steps
1. Log in to the VMware vSphere Client.
2. Look at vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute ESXi host, record the physical switch ports to which vmnic3 (switch-B) and vmnic5 (switch-A) are
connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand flex_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic3, and select View Settings.
f. Click the LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat step 3 for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on the flex_dvswitch within vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click flex_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
f. Verify that the mode is Active.
g. Change Load Balancing mode to Source and destination IP address and TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic3 to lag1-1 on flex_dvswitch for the compute ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click flex_dvSwitch and select Manage host networking to open wizard.
c. Select Add hosts... and click Next.


d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. Click Assign uplink for vmnic5.
g. Click lag1-0
h. Click Assign uplink for vmnic3.
i. Click lag1-1.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for the compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. SSH to switch-A switch.
b. Create a port channel on switch-A for each compute VMware ESXi host as follows:

interface port-channel40
Description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154(Provided in Workbook)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
mtu 9216
lacp vpc-convergence
no lacp suspend-individual
vpc 40

7. Configure channel-group (LACP) on the switch-A access port (vmnic5) for each compute ESXi host.
The following switch port configuration is an example of a single compute ESXi host.
a. SSH to switch-A switch.
b. Configure the port on switch-A as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154 (Provided in Workbook)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

8. Create port-channel (LACP) on switch-B for compute ESXi host.


The following switch configuration is an example of a single compute ESXi host.
a. SSH to switch-B switch.
b. Create a port channel on switch-B for each compute ESXi host as follows:

interface port-channel40
Description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154(Provided in Workbook)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
mtu 9216
lacp vpc-convergence
no lacp suspend-individual
vpc 40

9. Configure the channel-group (LACP) on switch-B access port (vmnic3) for each compute ESXi host.
The following switch port configuration is an example of a single compute ESXi host.
a. SSH to switch-B switch.


b. Create the port on switch-B as follows:

int e1/1/1
description to flex-compute-esxi-host01 – vmnic3
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154 (Provided in Workbook)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active

10. Update the teaming and failover policy for each port group within flex_dvswitch:
a. Click Home and select Networking.
b. Expand flex_dvSwitch to display all port groups.
c. Right-click flex-data-1 and select Edit Settings.
d. Click Teaming and failover.
e. Move lag1 to be Active and both Uplink1 and Uplink2 to Unused.
f. Change Load Balancing mode to Route based on IP hash.
g. Repeat steps 10b to 10f for each remaining port group.
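
After the port channels and channel groups from steps 6 through 9 are configured on both switches, you can optionally confirm that the LACP bundles came up. A minimal verification sketch from either Cisco Nexus access switch (port-channel 40 follows the example above):

show port-channel summary
show vpc 40

The member interfaces should show the P (up in port-channel) flag and the vPC should report an up status.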

Convert LACP to trunk access enabled switch ports for cust_dvswitch (option 2)


Perform this procedure to convert the physical NICs from LACP to trunk without losing connectivity. Use this option only if
cust_dvswitch is configured as LACP. VMware does not recommend using LACP on the physical NICs that carry the overlay traffic type.

Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured with LACP enabled. These ports will
be configured as trunk access after the removal of the physical adapter from each VMware ESXi host.

About this task


This procedure includes reconfiguring a single port at a time as a trunk without requiring any migration of the VMkernel network
adapters.

Steps
1. Log in to the VMware vSphere Client.
2. Look at the vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch port to which vmnic4 (switch-B) and vmnic6 (switch-A)
connect.
a. Click Home > Hosts and Clusters and expand the compute cluster.
b. Select the first compute VMware ESXi host in the left pane, and then select Configure tab in the right pane.
c. Select Virtual switches under Networking.
d. Expand Cust_DvSwitch.
e. Expand lag1, click the ellipsis (…) for vmnic4, and select View Settings.
f. Click the LLDP tab.
g. Record the port ID (switch port) and system name (switch).
h. Repeat step 3 for vmnic6 on lag1-1.
4. Repeat steps 2 and 3 for each additional compute VMware ESXi host.
5. Create a management distributed port group for cust_dvswitch as follows:
a. Right-click Cust_DvSwitch.
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to hyperconverged-node-mgmt-105-new and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.


f. Select the default # of ports (default is 8).


g. Select the default VLAN as VLAN Type.
h. Set the VLAN ID to 105.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the hyperconverged-node-mgmt-105-new and click Edit Settings...
l. Click Teaming and failover.
m. Change Load Balancing mode to Route based on originating virtual port.
n. Verify that Uplink1 and Uplink2 are listed under active and LAG is unused.
o. Click OK.
6. Remove channel-group from the port interface (vmnic9) on switch-B for each compute VMware ESXi host as follows:
CAUTION: This step must be done before removing the physical NICs from the VDS. Otherwise, only one
physical NIC gets removed successfully. The other physical NIC fails to remove from the LAG because both
ports are bonded to a port channel.

a. SSH to switch-B switch.


b. Enter the following switch commands to configure trunk access for the VMware ESXi host:

config t
interface ethernet 1/x
no channel-group

c. Repeat steps 6a and 6b for each switch port for the remaining compute VMware ESXi hosts.
7. Delete vmnic6 from lag1:
a. Click Home > Hosts and Clusters and expand the PowerFlex data center.
b. Select the PowerFlex hyperconverged node or PowerFlex compute-only node and click Configure and Virtual
Switches.
c. Select cust_DvSwitch and click Manage Physical Adapters.
d. Select vmnic6 and click X to delete.
e. Click OK.
8. Migrate vmnic9 to Uplink2 and VMK0 to hyperconverged-node-mgmt-105-new on cust_dvswitch for each compute VMware
ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click Cust_DvSwitch and select Add and Manage hosts to open the wizard.
● Select Manage host networking and click Next.
● Click Attached hosts..., select all the compute VMware ESXi hosts, and click OK.
● Click Next.
● Select Uplink2 for vmnic0 and click OK.
● Click Next.
● Select vmk0 (esxi-management) and click Assign port group.
● Select hyperconverged-node-mgmt-105-new and click OK.
● Click Next > Next > Next > Finish.
9. Remove channel-group from the port interface (vmnic4) on switch-A for each compute VMware ESXi host as follows:
a. SSH to switch-A switch.
b. Enter the following switch commands to configure trunk access for the VMware ESXi host:

config t
interface ethernet 1/x
no channel-group

c. Repeat steps 9a and 9b for each switch port for the remaining compute VMware ESXi hosts.
10. Add vmnic4 to Uplink1 on cust_dvswitch for each compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click Cust_DvSwitch and select Add and Manage Hosts to open the wizard.
● Select Manage host networking and click Next.
● Click Attached hosts..., select all the compute VMware ESXi hosts, and click OK.
● Click Next.


● For each VMware ESXi host, select vmnic4 and click Assign uplink.
● Select Uplink1 and click OK.
● Click Next > Next > Next > Finish.
11. Delete the port group hyperconverged-node-mgmt-105 on cust_dvswitch:
a. Click Home, select Networking, and expand the PowerFlex data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click hyperconverged-node-mgmt-105 and click Delete.
d. Click Yes to confirm deletion of the distributed port group.
12. Delete vmnic2 from lag1:
a. Click Home > Hosts and Clusters and expand the PowerFlex data center.
b. Select the PowerFlex hyperconverged node or PowerFlex compute-only node and click Configure and Virtual
Switches.
c. Select cust_DvSwitch and click Manage Physical Adapters.
d. Select vmnic2 and click X to delete.
e. Click OK.
13. Rename the port group hyperconverged-node-mgmt-105-new on cust_dvswitch:
a. Click Home, select Networking, and expand the PowerFlex data center.
b. Expand Cust_DvSwitch to view the distributed port groups.
c. Right-click hyperconverged-node-mgmt-105-new and click Rename.
d. Enter hyperconverged-node-mgmt-105 and click OK.
14. Update teaming and policy to be route based on physical NIC load for port group flex-vmotion-106:
a. Click Home, select Networking, and expand the PowerFlex compute data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click pfcc-vmotion-106 and click Edit Settings....
d. Click Teaming and failover.
e. Move both Uplink1 and Uplink2 to be Active and lag1 to Unused.
f. Change Load Balancing mode to Route based on originating virtual port.
g. Repeat steps 14c through 14f for the remaining port groups on cust_dvswitch.
h. Select the cust_dvswitch and go toConfigure > LACP.
i. Select the required lag and click Remove.
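
After the channel groups are removed in steps 6 and 9, you can optionally confirm on each Cisco Nexus access switch that the server-facing interfaces reverted to standalone trunk ports, for example:

show running-config interface ethernet 1/x
show port-channel summary

The interface configuration should no longer contain a channel-group statement.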

Add the VMware NSX-T nodes using PowerFlex Manager


Use this procedure only if the PowerFlex nodes are added to the NSX-T environment.

Prerequisites
● Before adding this service in PowerFlex Manager, verify that the NSX-T Data Center is configured on the PowerFlex
hyperconverged or compute-only nodes.
● Ensure that the iDRAC of nodes, vCenter, and switches (applicable for full networking) are discovered in PowerFlex
Manager.
● Before adding a VMware NSX-T service, remove (do not delete) the PowerFlex hyperconverged service being used for
NSX-T.
● After adding an NSX-T node, if you are using PowerFlex Manager, run Update Service Details to represent the appropriate
environment. If you are using VMware NSX-T in a PowerFlex Manager service, the service goes into lifecycle mode.

Steps
1. Log in to PowerFlex Manager.
2. From Getting Started, click Define Networks.
a. Click + Define and do the following:

NSX-T information Values


Name Type NSX-T Transport

Description Type Used for east-west traffic


NSX-T information Values


Network Type Select General Purpose LAN
VLAN ID Type 121. See Workbook

b. Click Save > Close.


3. From Getting Started, click Add Existing Service and do the following:
a. On the Welcome page, click Next.
b. On the Service Information page, enter the following details:

Service Information Details


Name Type NSX-T Service

Description Type Transport Nodes

Type Select Hyperconverged or Compute-only

Firmware and software compliance Select the Intelligent Catalog (IC) version
Who should have access to the service deployed Leave as default
from this template?

c. Click Next.
d. On the Network Information page, select Full Network Automation/Partial Network Automation, and click Next.
NOTE: For partial network automation, you must finish the complete network configuration required for NSX-T.
Consider the configuration given in this document as a reference.

e. On the Cluster Information page, enter the following details:

Cluster Information Details


Target Virtual Machine Manager Select VCSA name
Data Center Name Select data center name
Cluster Name Select cluster name
Target PowerFlex gateway Select PowerFlex gateway name
Target Protection Domain Select PD-1
OS Image Select the ESXi image

f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Networking Mapping page, verify that the networks are aligned with the correct dvswitch.
j. On the Summary page, review the summary and click Finish.
4. Verify that PowerFlex Manager recognizes that NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab, notifying that NSX-T is configured on a node and
is preventing some features from being used. If you do not see this banner, check if you have selected
the wrong service or NSX-T is not configured on the hyperconverged or compute-only nodes.


12
Optional deployment tasks
This section contains miscellaneous deployment activities that may not be required for your deployment.

Configuring replication on PowerFlex nodes


PowerFlex replication provides data protection by mirroring volumes in one system to a remote system asynchronously.
A volume and its remote mirror are called replication consistency groups. A replication consistency group can consist of one or
several volumes in a single protection domain that replicate to a remote protection domain. PowerFlex 4.0 supports multi-site
replication of up to five systems.

Requirements
● PowerFlex Manager must be deployed and configured.
● Replication VLANs must be created on the switches and defined in PowerFlex Manager.

Workflow summary
● Create, publish and deploy storage with replication or hyperconverged template (local and remote)
● Create and copy certificates
● Add peer systems
● Create replication consistency groups

Clone the storage replication template


Use this procedure to clone the storage replication templates.

Steps
1. Log in to PowerFlex Manager.
2. Click Lifecycle > Templates > Create.
3. Click Clone an existing PowerFlex Manager template.
4. Click Sample Templates.
5. From the Template to be cloned field, click Storage - Replication and click Next.
6. Enter a template name.
7. Select or create a new category and enter a description.
8. Select the appropriate compliance version and the appropriate security group and click Next.
9. Select the matching customer networks for each category.
10. Under OS Settings:
a. Select or create (+) the OS credential for the root user.
b. Under Use Compliance File Linux Image, select Use Compliance File Linux Image (or custom if requested).
11. Under PowerFlex Gateway Settings, select the appropriate PowerFlex gateway. The default is block-legacy-gateway.
12. Under Hardware Settings/Node Pool Settings, select the pool that contains the Replication nodes. The default is Global.
Click Finish.
13. Under Node Settings:
a. Click Node > Modify and change node count as necessary and select Continue.
b. Add NTP and time zone information and click Save.


14. Under Network Settings > Static Routes:


a. If routing will be required on the nodes, click Enabled.
b. Click Add New Static Route and select the Source Network, Destination Network, and enter the gateway to be
used for that route.
c. Click Finish.
15. Click Publish Template.
16. Click Yes on the confirmation dialog.

Deploy storage with replication template


Use this procedure to deploy storage with replication templates.

Steps
1. Click Lifecycle > Templates.
2. Select the template created in the previous section.
3. Click Deploy Resource Group.
4. Enter the resource group name and a brief description.
5. Select the IC version.
6. Select the administration group for this resource.
7. Click Next.
8. Under Deployment Settings:
a. Auto generate or fill out the following fields:
● Protection domain name
● Protection domain name template
● Storage pool name
● Number of storage pools
● Storage pool name template
b. Let PowerFlex select the IP addresses or manually provide the MDM virtual IP addresses.
c. Let PowerFlex select the IP addresses or manually provide the storage-only nodes OS IP addresses.
d. Manually select each storage-only node by serial number or iDRAC IP address, or let PowerFlex select the nodes
automatically from the selected node pool.
e. Click Next.
9. Click Deploy Now > Next.
10. Review the summary screen and click Finish.
Deployment activity can be monitored on right panel under Recent Activity.

Clone the hyperconverged replication template


Use this procedure to clone the hyperconverged replication templates.

Steps
1. Log in to PowerFlex Manager.
2. Click Lifecycle > Templates > Create.
3. Click Clone an existing PowerFlex Manager template.
4. Click Sample Templates.
5. From the Template to be cloned field, click Hyperconverged - Replication and click Next.
6. Enter a template name.
7. Select or create a new category and enter a description.
8. Select the appropriate compliance version and the appropriate security group and click Next.
9. Select the matching customer networks for each category.
10. Under OS Settings:


a. Select or create (+) the OS credential for the root user.


b. Select or create (+) the SVM OS credential for the root user.
c. Under Use Compliance File ESXi Image, select Use Compliance File ESXi Image (or custom if requested).
11. Under Cluster Settings, select the target VMware vCenter.
12. Under PowerFlex Gateway Settings, select the appropriate PowerFlex gateway. The default is block-legacy-gateway.
13. Under Hardware Settings/Node Pool Settings, select the pool that contains the Replication nodes. The default is Global.
14. Under Network Settings > Static Routes:
a. If routing will be required on the nodes, click Enabled.
b. Click Add New Static Route and select the Source Network, Destination Network, and enter the gateway to be
used for that route.
c. Click Finish.
15. Under Node Settings:
a. Click Node > Modify and change node count as necessary and select Continue.
b. Add NTP and time zone information and click Save.
16. Under VMware Cluster Settings:
a. Select the VMware cluster and click Modify.
b. Click Continue.
c. Select or create a new target data center. If it is new, enter a name.
d. Select or create a new target cluster. If it is new, enter a name.
e. Click Configure VDS Settings.
f. To create custom port groups, click User Entered Port Groups or click Auto Create All Port Groups to let PowerFlex
Manager provide them.
g. Click Next.
h. Add the VDS name for VDS1, cust_dvswitch.
i. Add the VDS name for VDS2, flex_dvswitch.
j. Click Next.
k. Verify network, VLAN ID and portgroup names are as expected and click Next.
l. Select the MTU size as configured on customer network (or LCS) and click Next > Finish.
m. In the confirmation dialogue, click Yes > Save.
17. Click Publish Template and click Yes on the confirmation dialog.

Deploy hyperconverged nodes with replication template


Use this procedure to deploy hyperconverged nodes with replication templates.

Steps
1. Click Lifecycle > Templates.
2. Select the template created in the previous section.
3. Click Deploy Resource Group.
4. Enter the resource group name and a brief description.
5. Select the IC version.
6. Select the administration group for this resource.
7. Click Next.
8. Under VMware cluster settings, auto generate or fill out the following fields:
● Data center name
● Cluster name
● Storage pool name
● Number of storage pools
● Storage pool name template
9. Under PowerFlex Cluster Settings:
a. Auto generate or fill out the following fields:
● Protection domain name
● Protection domain name template


● Storage pool name


● Number of storage pools
● Storage pool name template
● Set default journal capacity (10%) unless directed differently by Enterprise Management Platform (EMP) or the
customer
b. Let PowerFlex select the IP addresses or manually provide the MDM virtual IP addresses.
c. Let PowerFlex select the IP addresses or manually provide the hyperconverged nodes OS IP addresses.
d. Let PowerFlex select the IP addresses or manually provide the SVM OS IP addresses.
e. Manually select each hyperconverged node by serial number or iDRAC IP address, or let PowerFlex select the nodes
automatically from the selected Node Pool.
f. Repeat these steps for each additional node.
g. Click Next.
10. Click Deploy Now > Next.
11. Review the summary screen and click Finish.
Deployment activity can be monitored on right panel under Recent Activity.

Create and copy certificates


Use this procedure to create and copy certificates.

About this task


You can locate the system ID by logging into the primary MDM by using scli --login_certificate --
p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12.

Prerequisites
● Deployed storage-only or hyperconverged with replication resource groups at each participating site
● System ID of each participating system

Steps
1. Log in to the primary MDM for each site using SSH to generate, copy and add certificates.
2. Type scli --login_certificate --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12, and
after the password prompt, enter the certificate password.
3. Extract the certificate for each site. For each site (source and destination), type: scli --extract_root_ca
--certificate_file /tmp/site-x.crt.
4. Copy the extracted certificate of the source (primary MDM /tmp folder) to destination (primary MDM /tmp folder) using
SCP.
5. Copy the extracted certificate of the destination (primary MDM /tmp folder) to source (primary MDM /tmp folder) using
SCP.
6. To add the copied certificate to the source and each destination, type scli --add_trusted_ca --
certificate_file /tmp/site-x.crt --comment site-x_crt.
7. To verify the new certificate, type scli --list_trusted_ca.
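
For steps 4 and 5, a minimal copy sketch using SCP from the source primary MDM is shown below; the file names and the destination address are placeholders for your sites:

scp /tmp/site-a.crt root@<destination primary MDM IP>:/tmp/
scp root@<destination primary MDM IP>:/tmp/site-b.crt /tmp/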

Create remote consistency groups (RCG)


Use this procedure to create remote consistency groups.

Prerequisites
● Peer system must be configured
● Source volumes to be replicated

Steps
1. Log in to PowerFlex Manager.


2. Select Protection > RCGs .


3. Click + Add RCG.
4. Enter the RCG name.
5. Enter the Recovery Point Objective (60 seconds default).
6. Select the source system protection domain.
7. Select the target system and protection domain and click Next.
8. Select auto or manual provisioning

Option Description
Auto Provisioning (default) This option is relevant if there are no volumes at the target system.
Select the source volumes to protect.
The target volumes are automatically created.

Manual Provisioning This option is relevant if there are volumes at the target system.
Select the source volumes to protect.
Select the same size volume at the target system to create a pair between the volumes.

9. Click Next.
10. Select the source volumes.
11. Select Target Volume as thin (default) or thick.
12. Select the target storage pool.
13. Click Add Pair.
14. Click Next.
15. Optionally, to map a host on the target side:
a. Select the target volume.
b. Select the target host.
c. Click Map.
16. Click Next.
17. Select Add and Activate or Add (and activate separately)
NOTE: Add and Activate begins replication immediately.

The Add function will create the RCG but not start replication. Replication can be deferred until manually activated.

After the volumes begin replication, the final status should be OK and the consistency state will be Consistent after the
initial volume copy completes.

Add peer replication systems


Use this procedure to add peer replication systems.

Prerequisites
● Deployed PowerFlex storage nodes or PowerFlex hyperconverged nodes with replication resource group at each site
● Certificates generated and copied to each participating system

Steps
1. Log in to PowerFlex Manager.
2. Select Protection > Peer System.
3. Click + Add Peer System.
4. Enter the following:
● Peer system name
● ID
● IP addresses


5. Click Add IP for each additional replication IP in the target Replication Group and click Add.
After a few moments the target system should show the state as Connected.

Storage data client authentication


Authentication and authorization can be enabled for all storage data clients connected to a cluster.

Prepare for SDC authentication


Prepare the SDCs for authentication.

Prerequisites
Ensure that you have the following information:
● Primary and secondary MDM IP address
● PowerFlex cluster credentials

Steps
1. Log in to the primary MDM: scli --login_certificate --p12_path /opt/emc/scaleio/mdm/cfg/
cli_certificate.p12
2. Authenticate with the PowerFlex cluster using the credentials provided.
3. Type scli --query_all_sdc and record all the connected SDCs (any of the identifiers: NAME, GUID, ID, or IP).
4. For each SDC in your list, use the recorded identifier to generate and record a CHAP password. Type scli --
generate_sdc_password --sdc_id <id> or --sdc_ip <ip> or --sdc_name <name> or --sdc_guid
<guid> --reason "CHAP setup".
This password is specific to that SDC and cannot be reused for subsequent SDC entries.

For example:
scli --generate_sdc_password --sdc_ip 172.16.151.36 --reason "CHAP setup"
Sample output:

[root@svm1 ~]# scli --generate_sdc_password --sdc_ip 172.16.151.36 --reason "CHAP setup"
Successfully generated SDC with IP 172.16.151.36 password:
AQAAAAAAAAAAAAA8UKVYp0LHCDFD59BrnEXNPVKSlGfLrwAk

Configure storage data client to use authentication


Perform this procedure to configure the storage data clients for authentication.

About this task


For each storage data client, populate the generated CHAP password. On a VMware ESXi host, this requires setting a new
scini parameter through the esxcli tool. Use the procedure to perform this configuration change. For Windows and Linux SDC
hosts, the included drv_cfg utility is used to update the driver and configuration file in real time.
NOTE: Reboot the VMware ESXi hosts for the new parameter to take effect.

Prerequisites
● Generate the pre-shared passwords for all the storage data clients to be configured.
● Ensure that you have the following information:
○ Primary and secondary MDM IP addresses or names
○ Credentials to access all VMware ESXi hosts running storage data clients


Steps
1. SSH into the VMware ESXi host using the provided credentials.
2. Type esxcli system module parameters list -m scini | grep Ioctl to list the host's current scini
parameters:

   IoctlIniGuidStr     string  d30ff770-b64c-40b5-a341-58d18927e523
                               Ini Guid, for example: 12345678-90AB-CDEF-1234-567890ABCDEF
   IoctlMdmIPStr       string  192.168.151.20,192.168.152.20,192.168.153.20,192.168.154.20
                               Mdms IPs. IPs for MDMs in the same cluster should be comma separated.
                               To configure more than one cluster, use '+' to separate the IPs.
                               For example: 10.20.30.40,50.60.70.80+11.22.33.44. Max 1024 characters.
   IoctlMdmPasswordStr string
                               Mdms passwords. Each value is <ip>-<password>. Multiple passwords are separated
                               by ';'. For example: 10.20.30.40-AQAAAAAAAACS1pIywyOoC5t;11.22.33.44-tppW0eap4cSjsKIc
                               Max 1024 characters.

NOTE: The third parameter, IoctlMdmPasswordStr, is empty.

3. Using esxcli, configure the driver with the existing and new parameters. Each IoctlMdmPasswordStr entry takes the form
<ip>-<password>; separate multiple entries with a semicolon (;), as shown in the following example. Include additional data
IP addresses (data3 and data4) if required.

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=10cb8ba6-5107-47bc-8373-5bb1dbe6efa3
IoctlMdmIPStr=192.168.151.20,192.168.152.20,192.168.153.20,192.168.154.20
IoctlMdmPasswordStr=192.168.151.20-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk;192.168.152.20-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk;192.168.153.20-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk;192.168.154.20-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk"

NOTE: The Ioctl parameter fields inside the quotes are separated by spaces. The example is entered as a single
command line.

4. Reboot the VMware ESXi nodes.


The SDC configuration is applied.
If the SDC is a PowerFlex hyperconverged node, go to the next step. For other nodes, continue to Step 8.
5. For PowerFlex hyperconverged nodes, use the presentation manager or scli tool to place the corresponding SDS into
maintenance mode.
6. If the SDS is also the cluster primary MDM, switch cluster ownership to a secondary MDM and verify cluster state before
proceeding, type scli --switch_mdm_ownership --mdm_name <secondary MDM name>.
7. Power off the SVM once the cluster ownership is switched (if needed) and the SDS is in maintenance mode.
8. Manually migrate the workloads to the other hosts if required, and place the VMware ESXi host in maintenance mode.
9. Reboot the VMware ESXi host.
10. Once the host has completed rebooting, remove it from maintenance mode and power on the SVM (if present).
11. Take the SDS out of the maintenance mode (if present).
12. Repeat this procedure for each VMware ESXi SDC host.
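
After each VMware ESXi host is reconfigured and rebooted, you can confirm that the password parameter was applied by
listing the scini parameters again, reusing the command from step 2 of this procedure (the IoctlMdmPasswordStr line
should now be populated):

# On each VMware ESXi host: confirm the scini parameters now include the CHAP passwords
esxcli system module parameters list -m scini | grep Ioctl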
Examples - Windows and Linux SDC nodes
Windows and Linux hosts have access to the drv_cfg utility, which allows driver modification and configuration in real time.
The --file option allows for persistent configuration to be written to the driver's configuration file (so that the SDC remains
configured after a reload or reboot).


NOTE: Only one IP address is needed for the command to identify the MDM to modify.

Windows (from within a PowerShell prompt):

& "C:\Program Files\EMC\scaleio\sdc\bin\drv_cfg" --set_mdm_password --ip <MDM IP> --port 6611 --password <secret>

Linux:

/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password <secret> --file /etc/emc/scaleio/drv_cfg.txt

Iterate through the relevant SDCs, using the command examples along with the recorded information.

Enable storage data client authentication


Perform this procedure to enable storage data client authentication.

Prerequisites
● Make sure that all storage data clients are running PowerFlex, and are configured with their appropriate CHAP password.
Any older or unconfigured storage data client will be disconnected from the system when authentication is turned on.
● Ensure that you have the following information:
○ Primary MDM IP address
○ Credentials to access the PowerFlex cluster

Steps
1. SSH into the primary MDM.
2. Type scli --login --p12_path <P12_PATH> --p12_password <P12_PASS> to log in to the PowerFlex cluster
using the provided credentials.
3. Type scli --set_sdc_authentication --enable to enable storage data client authentication feature.
4. Type scli --check_sdc_authentication_status to verify that the storage data client authentication and
authorization is on, and that the storage data clients are connected with passwords.
Sample output:

[root@svm1 ~]# scli --check_sdc_authentication_status


SDC authentication and authorization is enabled.
Found 4 SDCs.
The number of SDCs with generated password: 4
The number of SDCs with updated password set: 4

5. If the number of storage data clients does not match, or any storage data clients are disconnected, list the disconnected
storage data clients and then disable storage data client authentication by typing the following commands:
scli --query_all_sdc | grep "State: Disconnected"

scli --set_sdc_authentication --disable

6. Recheck the disconnected storage data clients to make sure that they have the proper configuration applied. If necessary,
regenerate their shared password and reconfigure the storage data client. If you are unable to resolve the storage data client
disconnection, leave the feature disabled and contact Dell Technologies support as needed.

Installing a Windows compute-only node with LACP bonding NIC port design


PowerFlex Manager does not support deployment of Windows-based compute-only nodes with an LACP bonding NIC port
design. To install these nodes without PowerFlex Manager, perform the steps in the following sections.


NOTE: PowerFlex Manager does not provide installation or management support for Windows compute-only nodes.

Mount the Windows Server 2016 or 2019 ISO


Use this procedure to mount the Windows Server ISO.

Steps
1. Connect to the iDRAC, and launch a virtual remote console.
2. Click Menu > Virtual Media > Connect Virtual Media > Map Device > Map CD/DVD.
3. Click Choose File, browse to and select the customer-provided Windows Server 2016 or 2019 DVD ISO, and click Open.
4. Click Map Device.
5. Click Close.
6. Click Boot and select Virtual CD/DVD/ISO. Click Yes.
7. Click Power > Reset System (warm boot) to reboot the server.
The host boots from the attached Windows Server 2016 or 2019 virtual media.

Install Windows Server 2016 or 2019 on a PowerFlex compute-only node


Use this procedure to install Windows Server on a PowerFlex compute-only node.

Steps
1. Select the desired values for the Windows Setup page, and click Next.
NOTE: The default values are US-based settings.

2. Click Install now.


3. Enter the product key, and click Next.
4. Select the operating system version with Desktop experience (For example, Windows Server 2019 Datacenter (Desktop
Experience)), and click Next.
5. Select the check box next to the license terms, and click Next.
6. Select the Custom option, and click Next.
7. To install the operating system, select the available drive with a minimum of 60 GB space on the bootable disk and click
Next.
NOTE: Wait until the operating system installation is complete.

8. Enter the password according to the standard password policy.


9. Click Finish.
10. Install or upgrade the network driver using these steps:
NOTE: Use this procedure if the driver is not updated or discovered by Windows automatically.

a. Download DELL EMC Server Update Utility, Windows 64 bit Format, v.x.x.x.iso file from Dell
Technologies Support.
b. Map the driver CD/DVD/ISO through iDRAC, if the installation requires it.
c. Connect to the server as the administrator.
d. Open and run the mapped disk with elevated permission.
e. Select Install, and click Next.
f. Select I accept the license terms and click Next.
g. Select the check box beside the device drives, and click Next.
h. Click Install, and Finish.
i. Close the window to exit.


Download and install drivers


Perform these steps if you need to download and install drivers for the target Dell server model. Use this procedure if the driver
is not updated or discovered automatically by Windows.

Steps
1. Log in to the Dell Technologies Support site, and click Product Support under the Support tab.
2. Find the target server model by looking up the service tag, product ID, or the model (for example, PowerEdge R740).
3. Click the Drivers & Downloads tab and select Drivers for OS Deployment for the category.
4. Download the Dell OS Driver Pack.
5. Copy the downloaded driver pack to the new Windows host (or download on the host itself).
6. Open the folder where the driver pack is downloaded and execute the file.

Configure networks
Perform this procedure to configure the networks by creating new teams and interfaces.

Steps
1. Create a new team and assign the name as Team0:
a. Open the server manager, and click Local Server > NIC teaming.
b. In the NIC teaming window, click Tasks > New Team.
c. Enter name as Team0, and select the appropriate network adapters.
d. Expand the Additional properties, and select LACP in teaming mode and set load-balancing mode as Dynamic, and
standby adapter as None (all adapters active).
e. Click OK to save the changes.
2. Create a new interface in Team0:
a. Select your existing NIC Team Team0 in the Teams list box, and select the Team Interfaces tab in the Adapters and
Interfaces list box.
b. Click Tasks, and click Add Interface.
c. In the New team interface dialog box, type the name flex-node-mgmt-<vlanid>.
d. Assign VLAN ID (105) to the new interface in the VLAN field, and click OK.
e. From the network management console, right-click the newly created network interface controller, and click Properties
> Internet Protocol Version 4 (TCP/IPv4).
3. If the customer is using Microsoft Cluster and wants to use live migration, repeat step 2 for flex-livemigration.
4. Create a new team and assign the name as Team1:
a. Open the server manager, and click NIC teaming.
b. In the NIC teaming window, click Tasks > New Team.
c. Enter name as Team1, and select the appropriate network adapters.
d. Expand the Additional properties, and select LACP in teaming mode and set load-balancing mode as Dynamic, and
standby adapter as None (all adapters active).
e. Click OK to save the changes.
5. Create a new interface in Team1
a. Select your existing NIC Team Team1 in the Teams list box, and select the Team Interfaces tab in the Adapters and
Interfaces list box.
b. Click Tasks, and click Add Interface.
c. In the New team interface dialog box, type the name flex-data1-<vlanid>.
d. Assign VLAN ID (151) to the new interface in the VLAN field, and click OK.
e. From the network management console, right-click the newly created network interface controller, and click Properties
> Internet Protocol Version 4 (TCP/IPv4).
6. Repeat step 5 for flex-data2-<vlanid>, flex-data3-<vlanid>, and flex-data4-<vlanid> with VLANs 152, 153, and 154
respectively.
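
The teams and VLAN interfaces above can also be created from an elevated Windows PowerShell prompt using the built-in
NIC teaming cmdlets. The following is a minimal sketch only; the physical adapter names (NIC1 through NIC4) are
placeholders that must be replaced with the adapter names in your environment, and the VLAN IDs must match your
network layout:

# Minimal sketch: LACP/Dynamic teams with tagged team interfaces (adapter names are placeholders)
New-NetLbfoTeam -Name Team0 -TeamMembers "NIC1","NIC2" -TeamingMode LACP -LoadBalancingAlgorithm Dynamic -Confirm:$false
Add-NetLbfoTeamNic -Team Team0 -Name "flex-node-mgmt-105" -VlanID 105 -Confirm:$false
New-NetLbfoTeam -Name Team1 -TeamMembers "NIC3","NIC4" -TeamingMode LACP -LoadBalancingAlgorithm Dynamic -Confirm:$false
Add-NetLbfoTeamNic -Team Team1 -Name "flex-data1-151" -VlanID 151 -Confirm:$false
# Repeat Add-NetLbfoTeamNic for flex-data2-152, flex-data3-153, and flex-data4-154 as required.

IP addresses are then assigned to each team interface, as described in steps 2e and 5e above.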


Disable Windows Firewall


Use this procedure to disable Windows Firewall through the Windows Server 2016 or 2019 or Microsoft PowerShell.

Steps
1. Windows Server 2016 or 2019:
a. Press Windows key+R on your keyboard, type control and click OK.
The All Control Panel Items window opens.
b. Click System and Security > Windows Firewall .
c. Click Turn Windows Defender Firewall on or off.
d. Turn off Windows Firewall for both private and public network settings, and click OK.
2. Windows PowerShell:
a. Click Start, type Windows PowerShell.
b. Right-click Windows PowerShell, click More > Run as Administrator.
c. Type Set-NetFirewallProfile -profile Domain, Public, Private -enabled false in the Windows
PowerShell console.

Enable the Hyper-V role through Windows Server 2016 or 2019


Use this procedure to enable the Hyper-V role through Windows Server 2016 or 2019.

About this task


This is an optional procedure and is recommended only when you want to enable the Hyper-V role on a specified server.

Steps
1. Click Start > Server Manager.
2. In Server Manager, on the Manage menu, click Add Roles and Features.
3. On the Before you begin page, click Next.
4. On the Select installation type page, select Role-based or feature-based installation, and click Next.
5. On the Select destination server page, click Select a server from the server pool, and click Next.
6. On the Select server roles page, select Hyper-V.
An Add Roles and Features Wizard page opens, prompting you to add features to Hyper-V.
7. Click Add Features. On the Features page, click Next.
8. Retain the default selections/locations on the following pages, and click Next:
● Create Virtual Switches
● Virtual Machine Migration
● Default stores
9. On the Confirm installation selections page, verify your selections, and click Restart the destination server
automatically if required, and click Install.
10. Click Yes to confirm automatic restart.

Enable the Hyper-V role through Windows PowerShell


Enable the Hyper-V role using Windows PowerShell.

About this task


This is an optional procedure and is recommended only when you want to enable the Hyper-V role on a specified server.

Steps
1. Click Start, type Windows PowerShell.
2. Right-click Windows PowerShell, and select Run as Administrator.


3. Type Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart in the Windows PowerShell console.

Enable Remote Desktop access


Use this procedure to enable Remote Desktop access.

Steps
1. Right-click Start > Run.
2. Enter SystemPropertiesRemote.exe and click OK.
3. Select Allow remote connections to this computer.
4. Click Apply > OK.
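
Remote Desktop can also be enabled from an elevated Windows PowerShell prompt. The following is a minimal sketch that
sets the standard registry value and enables the built-in Remote Desktop firewall rule group; verify the result afterward in
System Properties:

# Minimal sketch: allow Remote Desktop connections and open the Remote Desktop firewall rule group
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name fDenyTSConnections -Value 0
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"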

Install and configure SDC


Perform this procedure to install and configure SDC on a Windows-based compute-only node for mapping the volume.

Steps
1. Get the Windows *.msi files from the Intelligent Catalog. The Intelligent Catalog is available at the Dell Technologies
Support.
2. Log in to the Windows compute-only node with the administrative account.
3. Install and configure SDC:
NOTE: Make note of the MDM VIPs before installing the SDC component.

a. Open a command prompt and enter msiexec /i <SDC_PATH>.msi MDM_IP=<LIST_VIP_MDM_IPS>, where
<SDC_PATH> is the path to the SDC installation package and <LIST_VIP_MDM_IPS> is a comma-separated list of the
MDM IP addresses or the MDM virtual IP address.
b. Accept the terms in the license agreement and click Install.
c. Click Finish.
d. Permit the Windows server reboot to load the SDC driver on the server.
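
For example, a hypothetical invocation might look like the following; the MSI file name and the MDM IP addresses are
placeholders and must be replaced with the package name from the Intelligent Catalog and the MDM virtual IP addresses
for your system:

rem Example only: substitute the actual SDC MSI name and your MDM VIP addresses
msiexec /i EMC-ScaleIO-sdc.msi MDM_IP=192.168.151.20,192.168.152.20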

Map volumes
Perform this procedure to map a PowerFlex volume to a Windows-based compute-only node.

Steps
1. On the menu bar, click Block > Volumes.
2. In the list of volumes, select one or more volumes, and click Mapping > Map.
3. A list of the hosts that can be mapped to the selected volumes is displayed. If a volume is already mapped to a host, only
hosts of the same type, NVMe or SDC, are listed. If the volume is not mapped to a host, click NVMe or SDC to set the type
of hosts to be listed.
4. In the Map Volume dialog box, select one or more hosts to which you want to map to the volumes.
5. Click Map.
6. Verify that the operation has finished successfully, and click Dismiss.
7. Log in to the Windows Server compute-only node with the administrative account.
8. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter .
9. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.


10. Right-click the disks selected in the previous step, and click Initialize disk > OK.
After initialization, the disk appears online.
11. Right-click Unallocated, and select New Simple Volume.
12. Select default, and click Next.
13. Assign the drive letter.
14. Select default, and click Next.
15. Click Finish.
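
As an alternative to the UI mapping in steps 1 through 6, a volume can also be mapped from the primary MDM with scli. A
minimal sketch, assuming you are already logged in with scli; the volume name and SDC IP address shown are placeholders:

# Map a volume to the Windows SDC by IP address, then confirm the mapping (name and IP are placeholders)
scli --map_volume_to_sdc --volume_name win-co-vol01 --sdc_ip 172.16.151.36
scli --query_volume --volume_name win-co-vol01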

Activate the license


Use this procedure to activate the license online or offline.

Prerequisites
If you do not have Internet connectivity, you might need to activate by phone.

Steps
1. To activate the license online:
a. Using the administrator credentials, log in to the target Windows Server.
b. When the main desktop view appears, click Start and type Run.
c. Type slui 3 and press Enter.
d. Enter the customer provided Product key and click Next.
If the key is valid, Windows Server xxxx is successfully activated.
If the key is invalid, verify that the Product key entered is correct and try the procedure again.

NOTE: If the key is still invalid, try activating offline.

2. To activate the license offline:


a. Using the administrator credentials, log in to the target Windows Server VM (jump server).
b. When the main desktop view appears, click Start and select Command Prompt (Admin) from the option list.
c. At the command prompt, use the slmgr command to change the current product key to the newly entered key.
C:\Windows\System32> slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
d. At the command prompt, use the slui command to initiate the phone activation wizard. For example: C:\Windows\System32> slui 4.
e. From the pulldown menu, select the geographic location that you are calling and click Next.
f. Call the displayed number, and follow the automated prompts.
After the process completes, the system provides a confirmation ID.
g. Click Enter Confirmation ID and enter the codes provided. Click Activate Windows.
Successful activation can be validated using the slmgr command.
C:\Windows\System32> slmgr /dlv

Enable PowerFlex file on an existing PowerFlex appliance


Use this procedure to enable PowerFlex file on an existing PowerFlex appliance.

Steps
The supported scenarios are:

● If an existing PowerFlex hyperconverged or PowerFlex storage-only service is available: Deploy the PowerFlex file
cluster, provided the resource group has a dedicated storage pool with 5.5 TB of space available.
● If the existing resource group (PowerFlex hyperconverged or PowerFlex storage-only) is already in use: You might need
to migrate the existing data or create a new storage pool with 5.5 TB of space for deploying the PowerFlex file cluster.
If there are multiple protection domains and storage pools, select a protection domain and storage pool without volumes
and with 5.5 TB of space available.

NOTE:
● Migrate data using any traditional migration method; the migration is the customer's responsibility.
● PowerFlex file deployment is supported only using PowerFlex Manager, and a minimum of two PowerFlex file nodes is
required.
For performing PowerFlex file deployment, see PowerFlex file deployment.

Configure VMware vCenter high availability


Use this procedure to enable VMware vCenter high availability.

About this task


VMware vCenter high availability (vCenter HA) protects the VMware vCenter Server against host and hardware failures. The
active-passive architecture of the solution can also help reduce downtime significantly when you patch the vCenter Server.

Steps
VMware vCenter HA creates a three-node cluster that contains active, passive, and witness nodes. Different configuration
paths are available; your selection depends on your existing configuration.
VMware vCenter HA requirements:
● A recommended minimum of three VMware ESXi hosts
● Validate that the flex-vcsa-ha networking and VMware vCenter port groups have been configured
See the VMware vSphere Product Documentation for additional requirements and configuration of VMware vCenter HA.


13
Post-deployment tasks

Enabling SupportAssist
● There are two options to configure events and alerts:
○ Connect directly
○ Connect using Secure Connect Gateway
● If you connect directly, only the call home option is available
● If you connect through Secure Connect Gateway, all options through Secure Connect Gateway are enabled
● You do not need to deploy and configure Secure Connect Gateway if you choose ESE direct

Related information
Enable SupportAssist

Deploy or configure Secure Connect Gateway


Secure Connect Gateway is an enterprise monitoring technology that monitors your devices and proactively detects hardware
issues that may occur.

Prerequisites
● Download the required version of secure connect gateway from the Dell support site.
● You must have VMware vCenter Server running on the virtual machine on which you want to deploy secure connect
gateway. Deploying secure connect gateway directly on a server running VMware vSphere ESXi is not supported.

Steps
1. Download and extract the OVF file to a location accessible by the VMware vSphere Client.
2. On the right pane, click Create/Register VM.
3. On the Select Creation Type page, select Deploy a virtual machine from an OVF or an OVA file and click Next.
4. On the Select OVF and VMDK files page, enter a name for the virtual machine, select the OVF and VMDK files, and click
Next.
NOTE: If there is more than one datastore on the host, the datastores are displayed on the Select storage page.

5. Select the location to store the virtual machine (VM) files and click Next.
6. On the License agreements page, read the license agreement, click I agree, and click Next.
7. On the Deployment options page, perform the following steps:
a. From the Network mappings list, select the network that the deployment template must use.
b. Select a disk provisioning type.
c. Click Next.
8. On the Additional settings page, enter the following details and click Next.
● Domain name server
● Hostname
● Default gateway
● Network IPv4 and IPv6
● Time zone
● Root password


NOTE: Ensure that the root password consists of eight characters with at least one uppercase and one lowercase
letter, one number, and one special character. Use this root password to log in to secure connect gateway for the first
time after the deployment.

9. On the Ready to complete page, verify the details that are displayed, and click Finish.
A message is displayed after the deployment is complete and the virtual machine is powered on.

NOTE: Wait 15 minutes before you log in to the secure connect gateway user interface.

10. After installation, power on the Secure Connect Gateway.


11. Go to https://localhost:5700/ and log in using the root credentials to check the user interface access.

Configuring the initial setup and generating the access key and pin
Use this section to generate the access key and PIN required to register with Secure Connect Gateway and the Dell Support site.
Use this link to create the Dell Support account and generate the access key and PIN: https://www.dell.com/support/kbdoc/en-us/000180688/generate-access-key-and-pin-for-dell-products?lang=en.
Customers should work with field engineer support to obtain the site ID that is required when generating the access key and PIN.

Log in to the secure connect gateway user interface


Use this procedure to log in to the secure connect gateway user interface.

Steps
1. Go to https://<hostname (FQDN) or IP address>:5700.
2. Enter the username root and the password created while deploying the VM.
3. Create the admin password:
a. Enter a new password.
b. Confirm the password.
4. Accept the terms and conditions.
5. Provide the access key and pin generated in Configuring the initial setup and generating the access key and pin.
6. Enter the Primary Support Contacts information.

Configuring SupportAssist on PowerFlex Manager


Use the following procedures to configure either the connect directly option or the connect using Secure Connect Gateway
option, depending on the customer's requirement.

Configure SupportAssist using the connect directly mode


Use this procedure to enable SupportAssist using the connect directly mode.

Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Events and alerts.
3. Click Notification Policies.
4. From the policies tab on the grayed out part, click Configure Now.
5. Accept the license and telemetry agreement on the connect support assist page and click Next.
6. Choose the connection type Connect Directly.
NOTE: This helps us directly connect to SupportAssist direct. Call to home feature works on connect direct. The proxy
setting is not supported.


7. Click Connect to CloudIQ.
This enables PowerFlex Manager to transport telemetry data, alerts, and analytics to assist Dell Technologies in providing
support.
8. On the Authentication details page, provide the following details:
9. Enter the access key and PIN generated in Configuring the initial setup and generating the access key and pin.
You need the software ID to generate the access key and PIN.
10. Choose the Device type to be registered, such as rack, appliance, or software.
11. In the Enterprise License Management Systems file field, enter the software ID that was used while generating the
access key and PIN.
12. Enter the Solution serial number, which must be provided by the customer.
13. In the Site ID field, provide the site ID for the location. If you do not have one, contact Dell Technologies Support to generate one.
14. Click Next, provide the contact details for the customer, and click Finish.
A pop-up appears at the bottom of the screen while SupportAssist is being configured, and another pop-up appears once it is
successfully configured.
15. To activate the policy now, click Configure Now and enable the policy by making it active.
After the policy is active, it moves from grayed-out mode to available and active mode.

Connect SupportAssist using the secure connect gateway


Use this procedure to enable SupportAssist using the secure connect gateway.

Prerequisites
Configure the secure connect gateway.

Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Events and alerts.
3. Click Notification Policies.
4. On the Policies tab, in the grayed-out section, click Configure Now.
5. Accept the license and telemetry agreement on the Connect SupportAssist page and click Next.
6. Choose the connection type Connect via Gateway.
NOTE: Connecting through the gateway registers PowerFlex Manager with secure connect gateway and
SupportAssist. The proxy setting can be enabled from here.

7. Provide the SCG IP address and port number.
8. Click Connect to CloudIQ.
This enables PowerFlex Manager to transport telemetry data, alerts, and analytics to assist Dell Technologies in providing
support.
9. Enable the Remote Support button and click Next.
10. On the Authentication Details page, provide the following details:
11. Enter the access key and PIN.
12. Choose the Device type to register, such as rack, appliance, or software.
13. In the Enterprise License Management Systems file field, enter the software ID that was used while generating the
access key and PIN.
14. Enter the Solution serial number, which must be provided by the customer.
15. In the Site ID field, provide the site ID for the location. If you do not have one, contact Dell Technologies Support to generate one.
16. Click Connect to CloudIQ.
This enables PowerFlex Manager to transport telemetry data, alerts, and analytics to assist Dell Technologies in providing
support.
17. Click Next, provide the contact details for the customer, and click Finish.
A pop-up appears at the bottom of the screen while SupportAssist is being configured, and another pop-up appears once it is
successfully configured.


18. To activate the policy now, click Configure Now and enable the policy by making it active.
Once the policy is active, it moves from grayed-out mode to available and active mode.

Events and alerts


A source is used to configure the receiving of external events and syslog content.
A destination is used to configure the sending of events and alerts information. A destination is always external.
SupportAssist, email, SNMP, and remote syslog are considered destinations.
Notification policies define what information is sent to each destination. Events and alerts exist irrespective of whether
notification policies are created.
SNMP sources are not automatically discovered and must be configured before events about these sources can be received.
PowerFlex Manager is preconfigured, and its events and alerts are automatically available. Resources in the PowerFlex
appliance are automatically discovered. Any future resources, for example, switch replacements or additional nodes, are
considered external and must be added manually as sources.

Configure an external source


You must define a source to enable PowerFlex Manager to receive an external event.

About this task


A syslog source can only go to a syslog destination and does not display in events. An SNMP source, either V2 or V3, displays in
events even without a defined notification policy.

Steps
1. Go to Settings > Events and Alerts > Notification Policies.
2. From the Sources pane, click Add.
3. Enter a source name and description.
4. Configure either SNMP or syslog forwarding and click Submit > Dismiss:
● For SNMPv2c:
a. Enter the community string by which the source forwards traps to destinations.
b. Enter the same community string for the configured resource. During discovery, if you selected PowerFlex Manager
to automatically configure iDRAC nodes to send alerts to PowerFlex Manager, enter the community string that is used
in that credential here.
● For SNMPv3:
a. Enter the username, which identifies the ID where traps are forwarded on the network management system.
b. Select a security level from the following:

Security level   Description    authPassword   privPassword

Minimum          noAuthNoPriv   Not required   Not required
Moderate         authNoPriv     Required       Not required
Maximum          authPriv       Required       Required
● If you select Syslog, click Enable Syslog.

Configure a destination
Define a location where event and alert data that has been processed by PowerFlex Manager should be sent.

Steps
1. Click Settings > Events and Alerts > Notification Policies.
2. From the Destinations pane, click Add.
3. From the Destinations page:


a. Enter the destination name and description.
b. From the Destination Type menu, select to configure either SNMP, Syslog, or Email (SMTP) forwarding.
c. Click Next.
d. Depending on the destination type, enter the following information:

Destination Type: SNMP V2c
● Network name/IP address
● Port
● Community string

Destination Type: SNMP V3
● Network name/IP address
● Port
● Username
● Security level:
○ Minimal - no more information required
○ Moderate - MD5 authentication password required
○ Maximum - MD5 authentication and DES privacy passwords required

Destination Type: Syslog
● Network name/IP address
● Port
● Protocol:
○ UDP
○ TCP
● Facility:
○ All
○ Authentication
○ Security and authentication

Destination Type: Email (SMTP)
● Destination name
● Description
● Destination type:
○ Server type:
■ SMTP
■ SMTP over SSL
■ SMTPS STARTTLS
○ Server IP or FQDN
○ Port
○ Sender address and up to five recipient addresses
○ If you choose credentials, enter:
■ Username, password, sender address, and up to five recipient addresses
● Send test email
● Test email server connection

4. Click Finish.

Redistribute the MDM cluster

About this task


PowerFlex Manager enables you to change the MDM role for a node in a PowerFlex cluster by switching the MDM role from
one node to another.

Steps
1. To access the wizard from the Resource Groups page:


a. On the menu bar, click Lifecycle > Resource Groups.
b. Select the resource group that contains the node whose MDM role is to be reconfigured.
c. In the right pane, click View Details.
The Resource Group Details page is displayed.
d. On the Resource Group Details page, under More Actions, click Reconfigure MDM Roles.
2. The Reconfigure MDM Role page is displayed, showing the current node that holds each MDM role.
To reassign a role, choose the new hostname or IP address from the Select New Node for MDM Role drop-down.
You can reassign multiple roles at one time.

3. Click Next. The Summary page is displayed.
A warning message pops up, which states:

   The MDM cluster will be reconfigured to have the selected servers acting as the MDM roles
   defined in this wizard. This could include installing or removing the PowerFlex MDM role
   packages on the selected servers as required.

4. Type CHANGE MDM ROLES to confirm your changes.
5. Click Finish.
The MDM role change begins, and you can view the details in the Recent Activity section.
After it completes, verify that the role change has been made on the node in the resource group.

Verify the PowerFlex Manager resource group


Verify the resource group status is Healthy and compliant.

Steps
1. Log in to PowerFlex Manager using the credentials.
2. On the menu bar, click Lifecycle > Resource Groups.
3. Select the resource group that you are looking for, and verify the following in the Resource Group Information section on
the right.
4. Verify the status:

   Overall Resource Group Health   Healthy
   Resource Health                 Healthy
   Compliance                      Compliant
   Deployment                      Deployed

The system health is verified as Healthy and Compliant.

Verify PowerFlex status

About this task


After the deployment is successful, verify the PowerFlex storage details on the Block tab in PowerFlex Manager.

Steps
1. Log in to PowerFlex Manager using the credentials.


2. Select the Block tab on the menu, and click the appropriate tabs to view and verify the details:
● Protection domain
● Fault sets
● SDS
● Storage pools
● Acceleration pools
● Devices
● Volumes
● NVMe targets
● Hosts
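
The same block objects can also be checked from the command line. A minimal sketch, assuming you have already logged
in to the primary MDM with scli as described in Log in to PowerFlex using scli:

# Summarize protection domains, storage pools, SDSs, devices, and volumes
scli --query_all
# List connected SDCs/hosts
scli --query_all_sdc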

Export a compliance report

About this task


The report lists compliance details for all resources and can be downloaded as a CSV or PDF file. The CSV report includes all
information and can be imported into a database for querying. The PDF format contains a subset of information to make it
easier to read. The report measures each resource against a specified compliance file version. For each resource, it lists the
components, the current and expected software or firmware version, and whether the component is compliant.

Prerequisites
Ensure that you have UI access and the appropriate user permission to export the report.

Steps
1. On the menu bar, click Resources.
2. Click Export Report.
3. Select either Export Compliance PDF Report or Export Compliance CSV Report from the drop-down list.
The compliance report is downloaded.

Export a configuration report

About this task


The configuration report shows the result of various configuration checks that PowerFlex Manager performs against the
system. You can use the report to troubleshoot your resources and resource groups. You can download a PDF report that lists
configuration details for all resources and resource groups.

Prerequisites
Ensure that you have UI access and the appropriate user permission to export the report.

Steps
1. On the menu bar, click Resources.
2. Click Export Report.
3. Select Export Configuration PDF Report from the drop-down list.
The configuration report downloads. The report shows the following kinds of information:

Column name          Description

Resource Name        Provides the name of the resource for which the checks have been run. This column
                     corresponds to the Resource Name shown on the Resources page.
Asset/Service Tag    Provides the asset or service tag of the resource for which the checks have been run. This
                     column corresponds to the Asset/Service Tag shown on the Resources page.
Management IP        Specifies the management IP of the resource for which the check is run. For a PowerFlex
                     cluster, the management IP address is the MDM cluster IP address.
Result               Shows the result of the check (PASS or FAIL).
Severity             Indicates the severity of the check. The severity is based on the result. The severity levels
                     are INFO, WARNING, HIGH, and CRITICAL. If the result is PASS, the severity is INFO. If the
                     result is FAIL, the severity depends on the type of check. PowerFlex Manager supports only
                     CRITICAL checks.
Details              Provides a description of the check that was run.
Affected Resources   Gives a list of the IP addresses or unique identifiers of resources that are impacted by the
                     check. The list of affected resources helps with troubleshooting.

Back up using PowerFlex Manager


Use this procedure to back up PowerFlex Manager manually.

About this task


PowerFlex Manager backup files include the following information:
● Activity logs
● Credentials
● Deployments
● Resource inventory and status
● Events
● Initial setup
● IP addresses
● Jobs
● Licensing
● Networks
● Templates
● Users and roles
● Resource module configuration files
● Performance metrics

Steps
1. The Backup and Restore page displays information about the last backup operation that was performed on the PowerFlex
Manager virtual appliance. Information in the Settings and Details section applies to both manual and automatically
scheduled backups and includes the following:
● Last backup date
● Last backup status
● Back up directory path to an NFS or a CIFS share
● Back up directory username


2. The Backup and Restore page also displays information about the status of automatically scheduled backups (enabled or
disabled). On this page, you can:
● Manually start an immediate backup
● Restore an earlier configuration
● Edit general backup settings
● Edit automatically scheduled backup settings

Back up the networking switch configuration


Use this procedure to back up and restore running configuration of the Cisco Nexus and Dell PowerSwitch switches.

About this task


If the switches are owned by the customer, advise the customer to take a switch backup immediately after a successful
PowerFlex appliance deployment.
NOTE: Dell Technologies recommends storing the backup in a separate shared location, not on the jump server. The
customer is responsible for providing the backup location and maintaining the backup.

Steps
1. To back up the running configuration, connect to the Cisco Nexus or Dell PowerSwitch switch through a console cable,
Telnet, or SSH using admin credentials, and type copy running-config scheme://server/[url/]filename.
For the scheme argument, you can enter tftp:, ftp:, scp:, or sftp:.
The server argument is the address or name of the remote server, and the URL argument is the path to the destination file
on the remote server. The server, URL, and filename arguments are case sensitive.
For example:

switch# copy running-config tftp://10.10.10.1/sw1-run-config.bak

switch# copy running-configuration scp://root:calvin@10.11.10.12/tmp/backup.txt

2. To restore the network configuration, connect to the Cisco Nexus or Dell PowerSwitch switch through a console cable,
Telnet, or SSH using admin credentials, and type copy scheme://server/[url/]filename running-config.
For the scheme argument, you can enter tftp:, ftp:, scp:, or sftp:.
The server argument is the address or name of the remote server, and the URL argument is the path to the source file on
the remote server. The server, URL, and filename arguments are case sensitive.
For example:

switch# copy tftp://10.10.10.1/my-config running-config

switch# copy scp://root:calvin@10.11.10.12/tmp/backup.txt running-configuration

Backing up the VMware vCenter


VMware vCenter Server supports file-based backup and restore options that help recover your VMware vCenter Server after a
failure. You can manually initiate or schedule the backup using the VMware vCenter Server management interface.
NOTE: The customer is responsible for providing the backup location and maintaining the backup. Dell Technologies
recommends storing the backup in a separate shared location, not on the jump server.

For more information, see the following VMware documentation:

● The backup and restore solution:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vcenter.install.doc/GUID-3EAED005-B0A3-40CF-B40D-85AD247D7EA4.html
● Backing up and restoring networking configurations. VMware vCenter Server provides the ability to back up and restore
the configuration of a vSphere Distributed Switch and all port groups in case of database or upgrade failure. You can also
use a saved distributed switch configuration as a template to create a copy of the switch:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-140C6A52-F4C1-4B13-B2A3-9FFCF6000991.html

Log in to PowerFlex using scli


The PowerFlex MDM cluster uses Mutual Transport Layer Security (mTLS) authentication instead of legacy TLS authentication
with username and password.

About this task


mTLS is a method for mutual authentication. mTLS ensures that the parties at each end of a network connection are who they
claim to be by verifying that they both have the correct private key.

Steps
1. To copy the management certificate to the root location, type: cp /opt/emc/scaleio/mdm/cfg/mgmt_ca.pem /
2. To generate the login certificate, type: scli --generate_login_certificate --management_system_ip <MNO_IP> --username <USER> --password <PASS> --p12_path <P12_PATH> --p12_password <P12_PASS> --insecure
Where:
● --management_system_ip is the IP address of PowerFlex Manager.
● --username is the username used to log in to PowerFlex Manager.
● --password is the password used to log in to PowerFlex Manager.
● --p12_path <P12_PATH> is optional. If it is not provided, the file is created in the user's home directory.
● --p12_password is the password for the p12 bundle. The same password must be provided for the certificate generation
and for the login operation.
3. To add the certificate, type: cd /opt/emc/scaleio/mdm/cfg; scli --add_certificate --certificate_file mgmt_ca.pem
4. To log in to PowerFlex using the certificate, type: scli --login --p12_path <P12_PATH> --p12_password <P12_PASS>
