Dell PowerFlex Appliance with PowerFlex 4.x Deployment Guide
January 2023
Rev. 1.1
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2022 - 2023 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Chapter 1: Introduction
Configure individual trunk with per NIC VLAN setup for storage-only nodes with a bonded management interface
Verify resource group status
Supported modes for a new deployment
Adding the PowerFlex management service to PowerFlex Manager
Gather PowerFlex system information
Add the PowerFlex system as a resource
Add as an existing resource group
Upload a management data store license
Enable the Hyper-V role through Windows Server 2016 or 2019
Enable the Hyper-V role through Windows PowerShell
Enable Remote Desktop access
Install and configure SDC
Map volumes
Activate the license
Enable PowerFlex file on an existing PowerFlex appliance
Configure VMware vCenter high availability
1
Introduction
The Dell PowerFlex Appliance with PowerFlex 4.x Deployment Guide provides the steps required to deploy the software applications
and hardware components and to configure the PowerFlex appliance with PowerFlex Manager.
The target audience for this guide is Dell Technologies Services deploying a PowerFlex appliance and configuring it with
PowerFlex Manager.
The PowerFlex appliance deployment workflow is as follows:
1. Check the prerequisites
2. Complete the node cabling
3. Configure the networking
4. Configure iDRAC
5. Configure the PowerFlex management controller
6. Configure the PowerFlex management platform
7. Deploy PowerFlex appliance
8. Verify the deployment status
See the Dell PowerFlex 4.0.x Administration Guide for additional documentation about using PowerFlex Manager.
See Dell Support to search the knowledge base for FAQs, Tech Alerts, and Tutorials.
2
Revision history
January 2023 (document revision 1.1):
● Added support for Broadcom 57414 and 57508 network adapters
● Added support for CloudLink 7.1.5
● Updated support for the PowerFlex management platform
August 2022 (document revision 1.0): Initial release
3
Deployment requirements
This section lists the hardware and software required to build a PowerFlex appliance.
For a complete list of supported hardware, refer to the Dell PowerFlex Appliance with PowerFlex 4.x Support Matrix.
Related information
Deploying and configuring the PowerFlex management platform installer VM using VMware vSphere
Deploying and configuring the PowerFlex management platform installer using Linux KVM
Deploying and configuring the PowerFlex management platform using VMware vSphere
Deploying and configuring the PowerFlex management platform using Linux KVM
Software requirements
Download the Intelligent Catalog (IC) before starting the deployment. The following operating systems, software, and
packages are required as part of the IC, in addition to the Dell CloudLink and secure connect gateway (SCG) images.
● VMware vSphere vCenter and ESXi 7.x
● Dell embedded operating system
● PowerFlex 4.x packages
● PowerFlex management platform packages
● Jump server image
● Dell CloudLink (optional)
● Secure connect gateway (optional)
Other requirements
● Enterprise Management Platform (EMP): prepare before starting the deployment
● Licenses:
○ PowerFlex Manager
○ CloudLink
Hardware requirements
Before deploying a PowerFlex appliance, the hardware requirements must be met.
Ensure you have:
● A minimum of four PowerFlex appliance nodes.
● PowerFlex appliance management controller nodes (PowerFlex R650).
● Supported PowerFlex appliance management controller configurations:
○ Single node (Dell provided or customer provided)
○ Multiple nodes (minimum of three nodes)
● A minimum of two PowerFlex Manager-supported access/leaf switches.
● SFP28 25 Gb direct attach copper cables (four for each PowerFlex appliance node and four for each PowerFlex management controller node).
● QSFP28 100 Gb direct attach copper cables (two for access switch uplinks and two for access switch VLT or VPC interconnects).
● CAT5/CAT6 1 Gb cables (one for each node for iDRAC connectivity and one for each access switch for management connectivity).
Resource requirements
Resource requirements must be met before you deploy.
For all the examples in this document, the following conventions are used:
● The third octets of the example IP addresses match the VLAN of the interface.
● All networks in the example have a subnet mask of 255.255.255.0.
The following table lists the minimum resource requirements for the infrastructure virtual machines:
Jump server
The jump server is an embedded operating system-based VM available for PowerFlex appliance to access and manage all the
devices in the system.
The embedded operating system-based jump server is marked as internal-only and can be downloaded only by the professional
services or manufacturing team. The embedded operating system-based jump server does not provide the DNS or NTP services
that are needed for full PowerFlex appliance functionality.
Steps
1. On the Dell Support site, hover over the question mark (?) next to the File Description to see the SHA2 hash value.
2. In Windows File Explorer, right-click the downloaded file and select CRC SHA > SHA-256. The CRC SHA option is available only if the 7-Zip application is installed.
The SHA-256 value is calculated.
3. The SHA2 value that is shown on the Dell Technologies Support site and the SHA-256 value that is generated by Microsoft
Windows must match. If the values do not match, the file is corrupted. Download the file again.
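If the 7-Zip context menu is not available, the hash can also be generated from a Windows command prompt with the built-in certutil tool (a minimal sketch; the file path is a placeholder):
certutil -hashfile C:\Downloads\<downloaded-file-name> SHA256
Compare the reported value with the SHA2 hash shown on the Dell Technologies Support site.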
PowerFlex ports
For information about the ports and protocols used by the components, see the Dell PowerFlex Rack with PowerFlex 4.x
Security Configuration Guide.
PowerFlex Manager
Port TCP Service
20 Yes FTP
21 Yes FTP
22 Yes SSH
80 Yes HTTP
443 Yes HTTPS
Networking pre-requisites
Configure the customer network for routing and layer-2 access for the various networks before PowerFlex Manager deploys the
PowerFlex appliance cluster.
The pre-deployment customer network requirements are as follows:
● Redundant connections to access switches using virtual link trunking (VLT) or virtual port channel (VPC).
● MTU=9216 on all ports or link aggregation interfaces carrying PowerFlex data VLANs.
● MTU=9216 as default on VMware vMotion and PowerFlex data interfaces.
The following table lists customer network pre-deployment VLAN configuration options:
● Example VLAN: Lists the VLANs that are used in the PowerFlex appliance deployment.
● Network Name: Network names and/or VLANs defined by PowerFlex Manager.
● Description: Describes each network or VLAN.
NOTE: The VLAN numbers in the table are examples; they may change depending on customer requirements.
In a default PowerFlex setup, two data networks are standard. Four data networks are required only for specific customer
requirements, for example, high performance or the use of trunk ports. For more information, contact your Dell Sales Engineer.
CAUTION: All defined data networks must be accessible from all storage data clients (SDCs). If you have
implemented a solution with four data networks, all four must be assigned and accessible from each storage
data client. Using fewer than the configured number of networks results in an error in PowerFlex and can lead
to path failures and other issues if the networks are not properly configured.
Untagged VLAN
251 | NAS file data 1 | For accessing PowerFlex file data from Layer 2/Layer 3; client MTU=1500/9000
252 | NAS file data 2 | For accessing PowerFlex file data from Layer 2/Layer 3; client MTU=1500/9000
Related information
Partial network automation
Configuring Cisco Nexus switches
Configure the Dell PowerSwitch access switches
Related information
Configuring Cisco Nexus switches
Configuring Dell PowerSwitch switches
Network configuration
If a switch is not configured correctly, the deployment may fail, and PowerFlex Manager is not able to provide information about why
the deployment failed.
If you select the partial networking template, configure the switches before deploying the service. For example
configurations, see the Configure the Dell PowerSwitch access switches or Configuring Cisco Nexus switches sections.
The pre-deployment access switch requirements are as follows:
● Management interfaces IP addresses configured.
● Switches and interconnect link aggregation interfaces configured to support VLT or VPC.
● MTU=9216 on redundant uplinks with link aggregation (VLT or VPC) to customer data center network.
● MTU=9216 on VLT or VPC interconnect link aggregation interfaces.
● LLDP enabled on switch ports that are connected to PowerFlex appliance node ports.
● SNMP enabled, community string set (public) and trap destination set to PowerFlex Manager.
● All uplink, VLT or VPC, and PowerFlex appliance connected ports are not shut down.
● Interface port configuration for the downlink to each PowerFlex node (only applicable for partial network automation).
Dell recommends that only PowerFlex appliance nodes (including the PowerFlex management node, if present) be connected to the access
switches.
NOTE: VLANs 140 through 143 are only required for PowerFlex management controller 2.0.
See the table in Configuration data for the VLAN and network information.
Related information
Configuration data
Networking pre-requisites
NOTE: The PowerFlex appliance iDRAC NIC is connected to a separate customer-provided or Dell-provided switch, which serves as the
out-of-band management switch.
The following example shows the physical network connectivity details between PowerFlex hyperconverged nodes, PowerFlex
storage-only nodes, PowerFlex compute-only nodes, PowerFlex controller nodes, the access switches, and the management switch.
Node number | Node type | NIC X, port 1 | NIC X, port 2 | NIC Y, port 1 | NIC Y, port 2 | NIC Z, port 1 | M0/iDRAC
1 | PowerFlex | Access A, port 1 | Access B, port 2 | Access B, port 1 | Access A, port 2 | NA | OOB_switch
2 | PowerFlex | Access A, port 3 | Access B, port 4 | Access B, port 3 | Access A, port 4 | NA | OOB_switch
3 | PowerFlex | Access A, port 5 | Access B, port 6 | Access B, port 5 | Access A, port 6 | NA | OOB_switch
4 | PowerFlex | Access A, port 7 | Access B, port 8 | Access B, port 7 | Access A, port 8 | NA | OOB_switch
1 | Controller | Access A, port 23 | Access B, port 24 | Access B, port 23 | Access A, port 24 | OOB_switch | OOB_switch
2 | Controller | Access A, port 21 | Access B, port 22 | Access B, port 21 | Access A, port 22 | OOB_switch | OOB_switch
3 | Controller | Access A, port 19 | Access B, port 20 | Access B, port 19 | Access A, port 20 | OOB_switch | OOB_switch
4 | Controller | Access A, port 17 | Access B, port 18 | Access B, port 17 | Access A, port 18 | OOB_switch | OOB_switch
On some servers, the NIC cards in certain PCIe slots are inverted, which means that NIC port 1 is on the right and NIC port 2 is
on the left. Consider the NIC port positions during cabling and deployment. The port numbers might also be marked on the card itself.
Related information
Configure storage data client on the PowerFlex management controller
4
Network configuration
PowerFlex appliance is available in two standard network architectures: access and aggregation (Cisco Nexus or Dell
PowerSwitch) or leaf-spine (Cisco Nexus). For most PowerFlex appliance deployments, the access-aggregation network
configuration provides the simplest integration; however, when customer scale or east-west bandwidth requirements exceed
the capabilities of the access-aggregation design, the leaf-spine architecture is used instead.
Related information
Network requirements for a PowerFlex appliance deployment
Configuration data
This section provides information about the supported networking configurations for PowerFlex appliance and applies to both Dell
PowerSwitch and Cisco Nexus switches.
PowerFlex appliance supports the following node connectivity network configurations:
● Port-channel - All PowerFlex nodes are connected to access or leaf pair switches.
● Port-channel with link aggregation control protocol (LACP) - All PowerFlex nodes are connected to access and leaf pair
switches.
● Individual trunk - All PowerFlex nodes are connected using trunk configuration.
Access/leaf switch ports connected to nodes require different configuration parameters for PowerFlex appliance deployment. If
partial network automation is used, the customer is responsible for the access/leaf switch configuration.
The following table shows the different configuration parameters required on the access or leaf switch ports connected to the
nodes for the deployment of PowerFlex appliance:
NOTE: The VLANs in the table are examples; they may change depending on customer requirements.
Supported networking: Port-channel with LACP (manual build)
● PowerFlex management controller 2.0
  ○ fe_dvSwitch: Trunk, 25 Gb, LACP mode Active, VLANs 105, 140, 150
  ○ be_dvSwitch: Trunk, 25 Gb, LACP mode Active, VLANs 103, 141, 142, 143, 151, 152 (if required: 153 and 154)
  ○ Node load balancing: LAG-Active, source and destination IP and TCP/UDP
● PowerFlex storage-only nodes / PowerFlex file nodes: NA

Supported networking: Port-channel with Link Aggregation Control Protocol (LACP) for full network automation/partial network automation
● PowerFlex compute-only nodes (VMware ESXi based)
  ○ cust_dvSwitch: Trunk, 25/100 Gb, LACP mode Active, VLANs 105-106
  ○ flex_dvSwitch: Trunk, 25/100 Gb, LACP mode Active, VLANs 151, 152 (if required: 153 and 154)
  ○ Node load balancing: LAG-Active, source and destination IP and TCP/UDP
● PowerFlex hyperconverged nodes
  ○ cust_dvSwitch: Trunk, 25/100 Gb, LACP mode Active, VLANs 105-106, 150
  ○ flex_dvSwitch: Trunk, 25/100 Gb, LACP mode Active, VLANs 151, 152 (if required: 153, 154, 161, and 162)
  ○ Node load balancing: LAG-Active, source and destination IP and TCP/UDP

Supported networking: Individual trunk for full network automation/partial network automation
● flex_dvSwitch: Trunk, 25/100 Gb, VLANs 151-152 (153, 154 if required); node load balancing: NIC load, Source MAC hash
● PowerFlex hyperconverged nodes
  ○ cust_dvSwitch: Trunk, 25/100 Gb, VLANs 105-106, 150
  ○ flex_dvSwitch: Trunk, 25/100 Gb
  ○ Node load balancing: Originating virtual port (recommended), Physical NIC load, Source MAC hash
● PowerFlex storage-only nodes (option 1)
  ○ Bond0: Trunk, 25/100 Gb, VLANs 150, 151 (if required: 153 and 161)
  ○ Bond1: Trunk, 25/100 Gb, VLAN 152 (if required: 154 and 162)
  ○ Node load balancing: Mode0-RR, Mode1-Active backup, Mode6-Adaptive LB (recommended)
● PowerFlex storage-only nodes (option 2)
  ○ Per NIC VLAN: Trunk, 25/100 Gb, VLANs 151, 152 bonded (if required: 150, 153, 154, 161, 162)
  ○ Node load balancing: Mode0-RR, Mode1-Active backup, Mode6-Adaptive LB (recommended)
Related information
Partial network automation
Configure the Dell PowerSwitch access switches
Create new dvSwitches
Create distributed port groups on dvswitches
Create LAG on dvSwitches
Add hosts to dvSwitches
Assign LAG as an active uplink for the dvSwitch
Set load balancing for dvSwitch
Related information
Network requirements for a PowerFlex appliance deployment
See the Dell PowerFlex Appliance with PowerFlex 4.x Support Matrix and the Intelligent Catalog for the supported switch models and
software versions in the current release.
NOTE: If the access switches are provided and configured by the customer, the following configuration is for reference only.
See Configuration data for more details about supported full network automation options.
NOTE: VLANs 140 through 143 are only required for PowerFlex management controller 2.0.
Ensure that you are in configuration mode before running the commands in the following procedures.
● Type configure terminal to enter configuration mode in the CLI.
● Type end to fully exit configuration mode.
Steps
1. Configure the hardware, by completing the following:
a. Turn on both switches.
b. Connect a serial cable to the serial port of the first switch.
c. Use a terminal utility to open the terminal emulator and configure it to use the serial port. The serial port is usually COM1,
but this may vary depending on your system. Configure serial communications for 115200, 8, N, 1 and no flow control.
d. Connect the switches by connecting port 53 on switch 1 to port 53 on switch 2 and port 54 on switch 1 to port 54 on
switch 2.
2. Configure the IP address for the management ports, enter the following:
interface mgmt 1/1/1
no shutdown
no ip address dhcp
ip address <ipaddress>/<mask>
exit
3. Set a global username and password and an enable mode password, enter the following:
username <admin> password <admin> role <sysadmin>
4. Enable SSH, complete the following:
a. Regenerate keys for the SSH server in EXEC mode, enter the following:
crypto ssh-key generate {rsa {2048}}
b. To overwrite an existing key, enter the following:
Host key already exists. Overwrite [confirm yes/no]:yes
Generated 2048-bit RSA key
c. Display the SSH public keys in EXEC mode, enter the following:
show crypto ssh-key rsa
d. Save the configuration, enter the following:
copy running-config startup-config
5. Set the SSH login attempts, enter the following:
password-attributes max-retry 5 lockout-period 30
6. Configure SNMP, enter the following:
snmp-server community <snmpCommunityString> ro
7. Set SNMP destinations, enter the following:
snmp-server host <PowerFlex Manager IP> traps version 2c stringtest entity lldp snmp envmon
8. Enable LLDP, enter the following:
lldp enable
Related information
Networking pre-requisites
Configuration data
Prerequisites
● Ensure that the primary MDM does not reside on the same PowerFlex appliance nodes that are connected to the switch being upgraded.
● The primary MDM usually resides on R01S01. When upgrading the access switches, the secondary MDM (which usually resides on R02S01)
is promoted to primary. The primary MDM is moved back after the upgrade of the primary switch completes.
● To switch ownership between the primary and secondary MDM, type the following on the primary MDM: scli
--switch_mdm_ownership --new_master_mdm_id <MDM_ID>
Steps
1. Check the current version of switch operating system:
a. Using PuTTY, log in to the Dell-OS CLI as admin and enter the password admin.
b. Check the operating system version by entering the following command: show version.
The screen displays an output similar to the following:
The OS Version field shows the operating system version, which should be 10.x.x or later.
2. Save the license file and the configuration:
a. In the Dell-OS CLI, type show license status to get the license path.
The screen displays an output similar to the following:
You can find the license path in the license location row.
b. Get the switch address (IP address configured by DHCP) and hostname, by entering the following command: show
interface mgmt.
The screen displays an output similar to the following:
Steps
1. Sign in to Dell SmartFabric OS10 using your account credentials.
2. Locate your entitlement ID and order number sent by email, and select the product name.
3. On the Product page, the Assigned To: field on the Product tab is blank. Click Key Available for Download.
4. Enter the device service tag you purchased the OS10 Enterprise Edition for in the Bind to: and Re-enter ID: fields.
This step binds the software entitlement to the service tag of the switch.
5. Select how to receive the license key — by email or downloaded to your local device.
6. Click Submit to download the License.zip file.
7. Select the Available Downloads tab.
8. Select the OS10 Enterprise Edition release to download, and click Download.
9. Read the Dell End User License Agreement. Scroll to the end of the agreement, and click Yes, I agree.
10. Select how to download the software files, and click Download Now.
11. After you download the OS10 Enterprise Edition image, unpack the TAR file and store the OS10 binary image on a local
server. To unpack the TAR file, follow these guidelines:
● Extract the OS10 binary file from the TAR file. For example, to unpack a TAR file on a Linux server or from the ONIE
prompt, enter:
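tar -xf <OS10_package_filename>.tar
(The file name above is a placeholder shown for illustration; substitute the name of the downloaded OS10 TAR package.)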
12. Some Windows unzip applications insert extra carriage returns (CR) or line feeds (LF) when extracting the contents of
a .TAR file. The additional CRs or LFs may corrupt the downloaded OS10 binary image. Turn this option off if you use a
Windows-based tool to untar an OS10 binary file.
13. Generate a checksum for the downloaded OS10 binary image by running the md5sum command on the image file. Ensure
that the generated checksum matches the checksum extracted from the TAR file.
md5sum image_filename
14. Use the copy command to copy the OS10 image file to a local server.
Connect to a switch
Use this procedure to connect to the switch.
Steps
Use one of the following methods to verify that the system is properly connected before starting installation:
● Connect a serial cable and terminal emulator to the console serial port on the switch. The serial port settings can be found
in the Installation Guide for your particular switch model. For example, the S4100-ON serial port settings are 115200, 8 data
bits, and no parity.
● Connect the management port to the network if you prefer downloading the image over the network. Use the Installation
Guide for your particular switch model for more information about setting up the management port.
Steps
1. Extract the TAR file, and copy the contents to a FAT32 formatted USB flash drive.
2. Plug the USB flash drive into the USB port on the switch.
3. From the ONIE menu, select ONIE: Install OS, then press the Ctrl + C key sequence to cancel.
4. From the ONIE:/ # command prompt, type:
ONIE:/ # onie-discovery-stop (this optional command stops the scrolling)
ONIE:/ # mkdir /mnt/usb
ONIE:/ # cd /mnt
ONIE:/mnt # fdisk -l (this command shows the device the USB is using)
The switch's storage devices and partitions are displayed.
5. Use the device or partition that is formatted FAT32 (example: /dev/sdb1 ) in the next command.
ONIE:/mnt # mount -t vfat /dev/sdb1 /mnt/usb
ONIE:/mnt # mount -a
The USB is now available for installing OS10 onto the switch.
Steps
1. Use the output of the following command to copy/paste the BIN filename into the install command below.
ONIE:/ # ls /mnt/usb
ONIE:/ # cd /mnt/usb
3. Manually install using the onie-nos-install command. If installing version 10.x.x, the command is:
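ONIE:/mnt/usb # onie-nos-install PKGS_OS10-Enterprise-10.x.x.xxxx.BIN
(The BIN file name above is a placeholder for illustration; use the exact file name listed by the ls command in step 1.)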
The OS10 update takes approximately 10 minutes to complete and boots to the OS10 login: prompt when done. Several
messages display during the installation process.
4. Log in to OS10 and run the show version command to verify that the update was successful.
Steps
1. Once you download the OS10 Enterprise Edition image, extract the TAR file.
● Some Windows unzip applications insert extra carriage returns (CR) or line feeds (LF) when they extract the contents of
a TAR file, which may corrupt the downloaded OS10 binary image. Turn OFF this option if you use a Windows-based tool
to untar an OS10 binary file.
● For example, in WinRAR under the Advanced Options tab de-select the TAR file smart CR/LF conversion feature.
2. Save the current configuration on the switch, and backup the startup configuration.
Command Parameter
OS10#write memory Write the current configuration to startup-config
3. Format a USB as VFAT/FAT32 and add the BIN file, or move the BIN file to a TFTP/FTP Server.
● Use the native Windows tool, or equivalent, to format as VFAT/FAT32.
● Starting with OS10.4, OS10 will auto-mount a new USB key after a reboot.
4. Save the BIN file in EXEC mode, and view the status. Update file name to match your firmware version.
The image download command only downloads the software image - it does not install the software on your device. The
image install command installs the downloaded image to the standby partition.
Command | Parameter
OS10# image download usb://PKGS_OS10-Enterprise-10.version-info-here.BIN | Update via USB
OS10# dir image

Command | Parameter
OS10# image install image://PKGS_OS10-Enterprise-10.version-info-here.bin | Installs OS
NOTE: On older versions of OS10, the image install command appears frozen, without showing the current status.
Opening a second SSH or Telnet session allows you to run show image status to see the current status.
6. View the status of the current software install in EXEC mode. If the install status shows FAILED, check to make sure the
TAR file is extracted correctly.
Command Parameter
OS10#show image status Verify OS was updated
7. Change the next boot partition to the standby partition in EXEC mode.
Command Parameter
OS10#boot system standby Changes next boot partition
8. Check whether the next boot partition has changed to standby in EXEC mode.
Command Parameter
OS10#show boot detail Verify next boot partition is new firmware
Command Parameter
OS10#reload Reboots the switch
Steps
1. Download the ONIE software from support.dell.com and place it on the TFTP server.
NOTE: In this example, the file name is onie-updater-x86_64-dellemc_s5200_c3538-r0.3.40.1.1-6.
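A typical way to apply the updater from the ONIE prompt is shown below (a sketch; it assumes the switch management interface can reach the TFTP server, and the server IP is a placeholder):
ONIE:/ # onie-discovery-stop
ONIE:/ # onie-self-update tftp://<tftp-server-ip>/onie-updater-x86_64-dellemc_s5200_c3538-r0.3.40.1.1-6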
NOTE: Before you begin, go to www.dell.com/support and download the diagnostic package.
Steps
1. Enter the onie-discovery-stop command to stop ONIE Discovery mode.
2. Assign an IP address to the management interface and verify the network connectivity.
OK.
Diag-OS Installer: platform: x86_64-dell_<platform>_c2538-r0
INSTALLER DONE...
Removing /tmp/tmp.qlnVIY
ONIE: NOS install successful: tftp://<tftp-server ip>/users/<user>/<platform>/diag-
installer-x86_64-dell_<platform>_c2538-r0-2016-08-12.bin
ONIE: Rebooting...
ONIE:/ # discover: installer mode detected.
Stopping: discover...start-stop-daemon: warning: killing process 2605: No such process
done.
Stopping: dropbear ssh daemon... done.
Stopping: telnetd... done.
Stopping: syslogd... done.
Info: Unmounting kernel filesystems
umount: can't umount /: Invalid argument
The system is going down NOW!
Sent SIGTERM to all processes
Sent SIGKILL tosd 4:0:0:0: [sda] Synchronizing SCSI cache
reboot: Restarting system
reboot: machine restart
POST Configuration
CPU Signature 406D8
CPU FamilyID=6, Model=4D, SteppingId=8, Processor=0
Microcode Revision 125
Platform ID: 0x10041A43
PMG_CST_CFG_CTL: 0x40006
BBL_CR_CTL3: 0x7E2801FF
Misc EN: 0x840081
BIOS initializations...
Booting `EDA-DIAG'
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
4. Start diagnostics.
To start the ONIE diagnostics, use the EDA-DIAG option from the GRUB menu.
a. Boot into the EDA Diags.
b. Log in as root.
Password: calvin.
c. Install the EDA-DIAG tools package.
Next steps
NOTE: To return to your networking operating software, enter the reboot command.
Steps
1. Download the diagnostic tools from support.dell.com and unzip.
2. Using SCP, copy the dn-diags-sssss-DiagOS-vvvvvv-ddddd.deb file to the switch. For example:
root@dellemc-diag-os:~# ls dn-diags-S4100-DiagOS-3.33.4.1-6-2018-01-21.deb
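The package can then be installed from the diagnostics OS with dpkg (a sketch; the package file name follows the example above):
root@dellemc-diag-os:~# dpkg -i dn-diags-S4100-DiagOS-3.33.4.1-6-2018-01-21.deb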
Firmware requirements
CAUTION: The minimum required ONIE version is 3.40.1.1-6. If your switch has an ONIE version lower than
3.40.1.1-6, you must first upgrade the switch to this minimum version before using the ONIE firmware updater.
NOTE: Boot the switch and choose ONIE: Rescue mode to perform firmware upgrade.
To upgrade the ONIE version, use the ONIE discovery-stop command, as shown:
# onie-discovery-stop
# onie-self-update onie-updater-x86_64-dellemc__c3538-r0.3.40.1.1-6
After you upgrade your switch to the minimum ONIE version requirement, you can use the ONIE firmware updater, as shown:
# onie-discovery-stop
# onie-fwpkg add onie-firmware-x86_64-dellemc__c3538-r0.3.40.5.1-9.bin
# onie-discovery-start
NOTE: During a firmware update, if there is an efivars duplicate issue, the BIOS configuration is reset to the default
and the efivars duplicate issue is resolved.
Steps
1. In the command prompt, type:
# system "/mnt/onie-boot/onie/tools/bin/onie-fwpkg show-log |
grep Firmware | grep version"
A message is displayed:
Node Id : 1
MAC : 50:9a:4c:e2:21:00
Number of MACs : 256
Up Time : 00:28:17
-- Unit 1 --
Status : up
System Identifier : 1
Down Reason : user-triggered
Digital Optical Monitoring : disable
System Location LED : off
Required Type : S4148T
Current Type : S4148T
Hardware Revision : A02
Software Version : 10.5.2.3
Physical Ports : 48x10GbE, 2x40GbE, 4x100GbE
BIOS : 3.33.0.1-11
System CPLD : 1.3
Master CPLD : 1.2
-- Power Supplies --
2 up AC REVERSE 1 13936 up
-- Fan Status --
2 up REVERSE 1 9590 up
2 9590 up
3 up REVERSE 1 9567 up
2 9637 up
4 up REVERSE 1 9590 up
2 9567 up
For the BIOS and CPLD versions that correspond to each firmware release, see the firmware release notes on the Dell Support site.
Prerequisites
By default, DHCP is enabled in ONIE. If your network has DHCP configured, ONIE gets the valid IP address for the management
port using DHCP, as shown.
Steps
1. Wait for ONIE to complete a DHCP timeout and return to the prompt.
2. Wait for ONIE to assign a random default IP address. This address may not be valid for your network.
3. Enter the ifconfig command to assign a valid IP address.
This command is not persistent. After you reboot, you must reconfigure the IP address.
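For example (a sketch; the interface name, address, and mask are placeholders for your environment):
ONIE:/ # ifconfig eth0 <ip-address> netmask <subnet-mask> up
ONIE:/ # ping <tftp-server-ip>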
Steps
To configure the VLANs, enter the following command:
Interface vlan <vlan number>
name <vlan-name>
no shutdown
NOTE:
● To remove the created VLAN, enter the following command: Dell(config)# no interface vlan <VLAN ID>
● To describe the created VLAN, enter the interface mode and enter the following command: Dell(config VLAN
ID)# description <enter the description>
NOTE: This procedure is optional. If a customer is not planning to configure VLT, skip this step.
Steps
1. For switch A, enter the following commands:
vlt-domain 10
backup destination <ip address of second access switch>
discovery-interface ethernet<slot>/<port>-<slot>/<port+1>
peer-routing
primary-priority 1
vlt-mac <VLT mac address>
2. For switch B, enter the following commands:
vlt-domain 10
backup destination <ip address of second access switch>
discovery-interface ethernet<slot>/<port>-<slot>/<port+1>
peer-routing
primary-priority 8192
vlt-mac <VLT mac address>
3. Configure the VLTi interfaces (use 2 x 100 Gb interfaces as VLTi), and enter the following commands:
interface range eth 1/1/X-1/1/X
description "VLTi interfaces"
no switchport
no shutdown
exit
NOTE: The starting and ending values of the command should match the ports.
Steps
At the command prompt, type:
no ip telnet server enable
Steps
1. Enable the REST API service on the switch, type:
OS10(config)# rest api restconf
2. Limit the ciphers to encrypt and decrypt the REST HTTPS data, type:
OS10(config)#rest https cipher-suite <encryption-suite>
Where <encryption-suite> needs to be predetermined by the customer in order to match the communication through
the REST methods.
Steps
1. To enable the telemetry, type:
OS10(config)# telemetry
OS10(conf-telemetry)# enable
2. Configure a destination group, type:
OS10(conf-telemetry)# destination-group dest1
OS10(conf-telemetry-dg-dest1)# destination <PowerFlex Manager IP> <PowerFlex Manager
port>
3. Return to telemetry mode, type:
OS10(conf-telemetry-dg-dest1)# exit
4. Configure a subscription profile, type:
OS10(conf-telemetry)# subscription-profile subscription-1
OS10(conf-telemetry-sp-subscription-1)# sensor-group bgp 300000
OS10(conf-telemetry-sp-subscription-1)# sensor-group bgp-peer 0
OS10(conf-telemetry-sp-subscription-1)# sensor-group buffer 15000
OS10(conf-telemetry-sp-subscription-1)# sensor-group device 300000
OS10(conf-telemetry-sp-subscription-1)# sensor-group environment 300000
OS10(conf-telemetry-sp-subscription-1)# sensor-group interface 180000
OS10(conf-telemetry-sp-subscription-1)# sensor-group lag 0
OS10(conf-telemetry-sp-subscription-1)# sensor-group system 300000
OS10(conf-telemetry-sp-subscription-1)# destination-group dest1
OS10(conf-telemetry-sp-subscription-1)# encoding gpb
OS10(conf-telemetry-sp-subscription-1)# transport grpc no-tls
OS10(conf-telemetry-sp-subscription-1)# source-interface ethernet 1/1/1
OS10(conf-telemetry-sp-subscription-1)# end
Steps
Configure the port channels uplink to the customer network, type:
interface port-channel 101
description <Uplink-Port-Channel-to-customer network>
no shutdown
switchport mode trunk
switchport trunk allowed vlan 105, 150, 161, 162
mtu 9216
vlt-port-channel 101
Add interfaces to the newly created port channel for the customer network
Use this procedure to add interfaces to the newly created port channel for the customer network.
Steps
Add ethernet interfaces to newly created port channels, type:
interface Ethernet <ID> (change the ethernet ID based on the interface)
description <description>
no shutdown
channel-group 101 mode active
no switchport
mtu 9216
speed 25000
flowcontrol receive off
Before you can deploy a PowerFlex appliance through PowerFlex Manager, the Dell access switches running Dell SmartFabric
OS10.x need specific configuration. The requirements are listed in Configuration data.
Steps
To configure the port channels, enter the following commands:
interface port-channel <port-channel number>
Description "Port Channel to <node info>"
switchport trunk allowed vlan <vlan list>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
switchport mode trunk
lacp fallback enable # applicable only for port-channel with LACP
speed <speed>
Steps
Configure the interface depending on the interface type, and enter the following commands:
If the interface type is... | Run the following command using the command prompt...
Port channel
interface <interface number>
Description “Connected to <connectivity info>"
channel-group <channel-group> mode <mode>
no shutdown
Access
interface <interface number> # applicable only for access interface
switchport mode access
switchport access vlan <vlan number>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>
Trunk
interface <interface number>
switchport mode trunk
switchport trunk allowed vlan <vlan-list>
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed <speed>
Steps
To save the configuration, enter the following command: Dell#copy running-config startup-config.
Related information
Network requirements for a PowerFlex appliance deployment
Networking pre-requisites
Prerequisites
For correct functionality, the switch must have a supported switch firmware or software version that is available in the Intelligent
Catalog (IC). Using firmware or software other than the versions that are specified in the IC may have unpredictable results.
NOTE: VLANs 140 through 143 are required only for PowerFlex management controller 2.0.
Steps
1. Turn on both switches.
2. Connect a serial cable to the serial port of the first switch.
3. Use a terminal utility to open the terminal emulator and configure it to use the serial port (usually COM1, but this may vary
depending on your system). Configure serial communications for 9600, 8, N, 1 and no flow control.
4. Connect the switches by connecting port 53 on switch 1 to port 53 on switch 2 and port 54 on switch 1 to port 54 on switch
2.
5. Delete the startup configuration using the following commands
NOTE: This example assumes a switch at its default configuration settings. Using the write erase command sets
the startup configuration file to its default settings. You should always back up your configuration settings prior to
performing any configuration changes.
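A typical sequence (a sketch; this erases the startup configuration and reboots the switch into the setup dialog shown below) is:
switch# write erase
switch# reload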
Abort Power On Auto Provisioning and continue with normal setup ?(yes/no)[n]: yes
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no): yes
Enter the password for "admin":
Confirm the password for "admin":
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of the system.
Setup configures only enough connectivity for management of the system.
Please register Cisco Nexus9000 Family devices promptly with your supplier.
Failure to register may affect response times for initial service calls.
Nexus9000 devices must be registered to receive entitled support services.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no):yes
Create another login account (yes/no) [n]: no
Configure read-only SNMP community string (yes/no) [n]: no
Configure read-write SNMP community string (yes/no) [n]: no
Enter the switch name : Cisco_Access-A
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: yes
Mgmt0 IPv4 address : 192.168.101.45
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: yes
IPv4 address of the default gateway : 192.168.101.254
Configure advanced IP options? (yes/no) [n]: no
Enable the telnet service? (yes/no) [n]: no
Enable the ssh service? (yes/no) [y]: yes
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <1024-2048> [1024]: 1024
Configure the ntp server? (yes/no) [n]: no
Configure default interface layer (L3/L2) [L2]: L2
Configure default switchport interface state (shut/noshut) [noshut]: noshut
Configure CoPP system profile (strict/moderate/lenient/dense) [strict]: strict
Steps
1. Start an SSH session to the switch.
2. Enter the following command to commit to persistent storage. In addition, copy the configuration to a remote server or jump
server:
copy running-config startup-config
3. Type the show version command to determine the current running version.
NOTE: The output from the command displays a running firmware version. Depending on your switch model, near the
bottom of the display, the previous running version might display and should not be confused with the current running
version.
4. Check the contents of the bootflash directory to verify that enough free space is available for the new Cisco NX-OS
software image.
a. Enter the following command to check the free space on the flash:
dir bootflash:
NOTE: The Cisco Nexus 3000 and Cisco Nexus 9000 switches do not provide a confirmation prompt before deleting files.
delete bootflash:nxos.7.0.2.I7.6.bin
5. If upgrading a Cisco Nexus 3000 switch, enter the following command to compact the current running image file:
switch# install all nxos bootflash:nxos.7.0.3.I7.bin compact
6. From an SCP, FTP, or TFTP server, enter one of the following commands to copy the firmware file to local storage on
the Cisco Nexus switches.
Use the TFTP command to copy the image:
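A typical copy command (a sketch; the image file name and server address are placeholders, and the switch prompts for the VRF as shown in the output below):
switch# copy tftp://<tftp-server-ip>/nxos.9.3.3.bin bootflash: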
NOTE: The firmware files are hardware model-specific. The firmware follows the same naming convention as the
current, running firmware files that are displayed in the show version command. If you receive warnings of insufficient
space to copy files, you must perform an SCP copy with the compact option to compact the file while it is copied.
Doing this might result in encountering Cisco defect CSCvg51567. The workaround for this defect requires
cabling the management port and configuring its IP address on a shared network with the SCP server, allowing the copy
to take place across that management port. After the process is complete, go to Step 7.
Enter vrf (If no input, current vrf 'default' is considered): management Trying to
connect to tftp server..... Connecting to Server Established. TFTP get operation was
successful
Copy complete, now saving to disk (please wait)..
7. Enter the show install all impact command to identify the upgrade impact.
switch# show install all impact nxos bootflash:nxos.9.3.3.bin
NOTE: If you receive errors regarding free space on the bootflash, go to Step 3 to ensure that you have removed older
firmware files to free additional disk space for the upgrade to complete. Check all subdirectories on bootflash when
searching for older bootflash files.
NOTE: After the upgrade, the switch reboot can take 5 to 10 minutes. Use a continuous ping command from the jump
server to validate when the switch is online.
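The upgrade itself is started with the install all command (a sketch; the image file name must match the file copied in step 6):
switch# install all nxos bootflash:nxos.9.3.3.bin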
Installer will perform compatibility check first. Please wait. Installer is forced
disruptive
Sample output:
NOTE: The screen captures below are examples. Versions might vary, based on the Intelligent Catalog (IC).
Steps
1. Start an SSH session to the switch.
2. Enter the following command to commit to persistent storage. In addition, copy the configuration to a remote server or jump
server:
copy running-config startup-config
3. Enter the show version module <number> epld command to determine the current running version.
4. Check the contents of the bootflash directory to verify that enough free space is available for the software image.
a. Enter the following command to check the free space on the flash:
dir bootflash:
5. From the SCP, FTP, or TFTP server, enter the following command to copy the firmware file to local storage on the Cisco
Nexus switches:
Use the following TFTP command to copy the image:
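A typical copy command (a sketch; the image name and server address are placeholders, and the management VRF is assumed):
switch# copy tftp://<tftp-server-ip>/n9000-epld.9.3.3.img bootflash: vrf management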
6. To determine if you must upgrade, use the show install all impact epld bootflash: n9000-
epld.9.3.3.img command.
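The EPLD images are then applied with the install epld command (a sketch; the image name matches the file referenced above, and module selection depends on the platform):
switch# install epld bootflash:n9000-epld.9.3.3.img module all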
NOTE: After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump
server to validate when the switch is back online.
9. Using SSH, log back in to the switch with username and password.
10. Enter the following command to verify that the switch is running the correct version:
switch# show install epld status
Steps
At the command prompt, type:
Cisco_Access-A(config)# vlan 100,104,105,106,150,151,152,153,154,161,162
Cisco_Access-A(config-vlan)# exit
Prerequisites
Confirm with the network administrator that enabling spanning tree is appropriate for the network and discuss any specific
spanning tree mode/feature configuration options.
Steps
At the command prompt, type:
Cisco_Access-A(config)# spanning-tree vlan 1-3967
Cisco_Access-A(config)# spanning-tree port type edge bpduguard default
Cisco_Access-A(config)# spanning-tree port type edge bpdufilter default
NOTE: This is an optional procedure. If you are not planning to configure vPC, skip this step.
Steps
At the command prompt, for the first access switch, type:
vpc domain 60
peer-switch
role priority 8192
system-priority 8192
peer-keepalive destination <oob mgmt ip> source <oob mgmt ip>
delay restore 300
auto-recovery reload-delay 360
ip arp synchronize
NOTE: The role priority should be different on the two switches, and the system priority should be the same on both switches.
NOTE: This is an optional procedure. If you are not planning to configure vPC, skip this step.
Steps
At the command prompt, type:
interface port-channel 100
description "virtual port-channel vpc-peer-link"
switchport mode trunk
spanning-tree port type network
vpc peer-link
NOTE: This is an optional procedure. If you are not planning to configure vPC, skip this step.
Steps
At the command prompt, type:
interface <interface>
Description "Peerlink to Peer Switch "
channel-group 100 mode active
no shutdown
Prerequisites
For PowerFlex Manager, the switch ports must be up (no shutdown), and unconfigured.
Steps
At the command prompt, type:
interface range eth 1/1/1-1/1/X
no shutdown
exit
Where X is the number of ports used by the PowerFlex node.
Steps
At the command prompt, type:
Cisco_Access-A(config)# interface port-channel 101
Cisco_Access-A(config-if)# switchport mode trunk
Cisco_Access-A(config-if)# switchport trunk allowed vlan 105,150,161,162
Cisco_Access-A(config-if)# spanning-tree port type network
Cisco_Access-A(config-if)# mtu 9216
Cisco_Access-A(config-if)# vpc 101
Steps
At the command prompt, enter the following commands:
Cisco_Access-A(config)# interface ethernet 1/49
Cisco_Access -A(config-if)# switchport mode trunk
Cisco_Access-A(config-if)# switchport trunk allowed vlan 105,150,161,162
Cisco_Access-A(config-if)# spanning-tree port type network
Cisco_Access-A(config-if)# mtu 9216
Cisco_Access-A(config-if)# channel-group 101 mode active
Cisco_Access-A(config-if)# no shutdown
Cisco_Access-A(config-if)# exit
Steps
To configure the port channels, enter the following commands:
interface port-channel <port-channel number>
Description "Port Channel to <node info>"
switchport trunk allowed vlan <vlan list>
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
switchport mode trunk
no lacp suspend-individual
lacp vpc-convergence #only for LACP based network
speed <speed>
vpc <vpc number same as port-channel number>
Steps
Configure the interface depending on the interface type:
If the interface type is... Run the following command at the command prompt...
Port channel interface <interface number>
no shutdown
speed <speed>
Steps
To save the configuration, type:
Cisco_Access-A# copy running-config startup-config
[########################################] 100%
Copy complete.
Cisco_Access-A#
5
Configuring the iDRAC
Related information
Deploying the PowerFlex file nodes
Prerequisites
For console operations, ensure that you have a crash cart. A crash cart enables a keyboard, mouse, and monitor (KVM)
connection to the node.
Steps
1. Connect the KVM to the node.
2. During boot, to access the Main Menu, press F2.
3. From System Setup Main Menu, select the iDRAC Settings menu option. To configure the network settings, do the
following:
a. From the iDRAC Settings pane, select Network.
b. From the iDRAC Settings-Network pane, verify the following parameter values:
● Enable NIC = Enabled
● NIC Selection = Dedicated
c. From the IPv4 Settings pane, configure the IPv4 parameter values for the iDRAC port as follows:
● Enable IPv4 = Enabled
● Enable DHCP = Disabled
● Static IP Address = <ip address > # select the IP address from this range for each node (192.168.101.21 to
192.168.101.24)
● Static Gateway = 192.168.101.254
● Static Subnet Mask = 255.255.255.0
● Static Preferred DNS Server = 192.168.200.101
4. After configuring the parameters, click Back to display the iDRAC Settings pane.
5. From the iDRAC Settings pane, select User Configuration and configure the following:
a. Enter a user name in the User name field.
b. LAN User privilege = Administrator
c. Enter a new password in the Change Password field.
d. In the Re-enter password dialog box, type the password again and press Enter twice.
e. Click Back.
6. From the iDRAC Settings pane, click Finish > Yes. Click OK to return to the System Setup Main Menu pane.
7. To exit the BIOS and apply all settings post boot, select Finish.
8. Reboot the node and confirm iDRAC settings by accessing the iDRAC using the web interface.
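If the iDRAC already has network connectivity, the same settings can alternatively be applied with the racadm CLI (a sketch using standard iDRAC9 attribute names; the values repeat the examples above):
racadm set iDRAC.IPv4.DHCPEnable 0
racadm set iDRAC.IPv4.Address 192.168.101.21
racadm set iDRAC.IPv4.Netmask 255.255.255.0
racadm set iDRAC.IPv4.Gateway 192.168.101.254
racadm set iDRAC.IPv4.DNS1 192.168.200.101
racadm set iDRAC.NIC.Selection Dedicated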
6
Installing and configuring PowerFlex
management controller 2.0
Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure in the right pane.
3. Under the Networking tab, select VMkernel adapters.
4. Click Add.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. In port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion. For any other networks, retain the default service.
NOTE: The MTU for pfmc-vmotion is 1500.
8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat steps 2 through 9 to create the VMkernel adapters for the VLANs referenced in the configuration data as port
groups.
Steps
1. Log in to VMware vCenter.
2. Click Networking.
3. Right-click the data center and select Distributed Switch > New Distributed Switch.
4. On the name and location page, enter the dvSwitch name for the new distributed switch and click Next.
5. On the Select Version tab, select the latest VMware ESXi version, and click Next.
6. On the Configure tab, select 2 for the number of uplinks and click Next.
7. Click Finish.
8. Repeat steps 3 through 7 to create additional dvSwitches for the PowerFlex node.
Related information
Configuration data
Steps
1. Log in to the VMware vSphere client and select the Networking inventory view.
2. Select Inventory, right-click the dvswitch, and select New Port Group.
3. Enter the dvswitch port group name and click Next. See Configuration data for more information on the VLANs.
4. From the VLAN type, select VLAN and enter 105 as the VLAN ID.
5. Click Next > Finish.
6. Repeat steps 2 to 4 to create the additional port groups.
Related information
Configuration data
Steps
1. Log in to the VMware vSphere client and select Networking inventory.
2. Select Inventory, right-click the dvswitch, and select Configure.
3. In Settings, select LACP.
4. Click New, and enter FE-LAG or BE-LAG as the name.
The default number of ports is 2.
5. Select mode as active.
6. Select the load balancing option. See Configuration data for more information.
7. Click OK to create LAG.
Repeat steps 1 through 6 to create LAG on additional dvswitches.
Related information
Configuration data
Steps
1. Select the dvSwitch.
2. Click Configure and from Settings, select LACP.
3. Click Migrating network traffic to LAGs.
4. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
5. Select all port groups, and click Next.
6. Select LAG and move it to Standby Uplinks.
7. Click Finish.
Prerequisites
See Configuration data for naming information of the dvSwitches.
Steps
1. Select the dvSwitch.
NOTE: If you are not using LACP, right-click and skip to step 4.
Related information
Configuration data
Prerequisites
See Configuration data for naming information of the dvSwitches and loadbalancing options.
Steps
1. Select the dvSwitch.
2. Click Configure and from Settings, select LACP.
3. Click Migrating network traffic to LAGs.
4. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
5. Select all port groups, and click Next.
6. Select a load balancing option.
7. Select LAG and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.
Related information
Configuration data
Prerequisites
See Configuration data for naming information of the dvSwitches and loadbalancing options.
Steps
1. Select the dvSwitch.
2. Right-click the dvSwitch, select Distributed Portgroup > Manage distributed portgroups.
3. Select teaming and failover and select all the port groups, and click Next.
4. Select load balancing.
Related information
Configuration data
Steps
1. Log in to VMware vSphere client.
2. From Home, click Networking and expand the data center.
Steps
1. Log in to the VMware vSphere client and click Networking.
2. Right-click oob_dvswitch and select Distributed Port Group > New Distributed Port Group.
3. Retain the default values for the following port related options:
● Port binding
● Port allocation
● # of ports
4. Select VLAN as the VLAN type.
5. Enter flex-oob-mgmt-<vlanID> and click Next.
Steps
1. Log in to the VMware vSphere client.
2. Click Networking and select oob_dvswitch.
3. Right-click and select Add and Manage Hosts.
4. Select Add Hosts and click Next.
5. Click New Host, select the host in maintenance mode, and click OK.
6. Click Next.
7. Select vmnic4 and click Assign Uplink.
8. Select Uplink 1, and click OK.
9. Click Next > Next > Next.
10. Click Finish.
Steps
1. Log in to VMware vSphere Client.
2. On Menu, click Host and Cluster.
3. Select Host.
4. Click Configure > Networking > Virtual Switches.
5. Right-click Standard Switch: vSwitch0 and click ...> Remove.
6. On the Remove Standard Switch window, click Yes.
Prerequisites
Ensure that iDRAC is configured and connected to the management network.
Steps
1. Connect to the iDRAC web interface.
2. Click Storage > Overview > Controllers.
3. In the Actions drop down for the PERC H755 Front (embedded), select Reset Configurations > OK > Apply now.
4. Click Job Queue and wait for the task to complete.
5. Select Storage > Overview > Controller.
6. In the Actions drop down for the PERC H755 Front (embedded), select Create Virtual Disk.
7. For Setup Virtual Disk:
● Name: Leave blank for auto-name
● Layout: Raid-5
● Media type: SSD
● Physical disk selection: New Group
8. For Advanced Settings:
● Security: Disabled
● Stripe element size: 256 KB
● Read policy: Read Ahead
● Write policy: Write Back
9. Click Next.
10. For the Select Physical Disk, select All SSDs and click Next.
11. For Virtual Disk Settings:
● Leave Defaults
12. Click Next.
13. For Confirmation and confirm Settings, select Add to Pending.
14. Select Apply Now.
15. Click Job Queue and wait for the task to complete.
16. Click Storage > Overview > Virtual disks.
17. Confirm the PERC-01 virtual disk.
Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate Intelligent Catalog folder and select the appropriate files.
Required firmware:
● Dell iDRAC or Lifecycle Controller firmware
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell Mellanox ConnectX-5 EN or Broadcom NetXtreme firmware
● PERC H755P controller firmware
4. Click Upload.
5. Click Install and Reboot.
Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. Press F2 to enter the System Setup main menu, select Device Settings.
5. Select AHCI Controller in SlotX: BOSS-X Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled
● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
● Clean all Sysprep order and variables: None
14. Click Back > Finish > Finish and click Yes to reboot the node.
Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, press Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.
Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to DCUI and select Troubleshooting Options.
4. Select Enable SSH.
5. Select Enable ESXi Shell.
6. Press ESC to exit from troubleshooting mode options.
7. Go to Direct Console User Interface (DCUI) > Configure Management Network.
8. Set Network Adapter to VMNIC2.
9. Set the ESXi Management VLAN ID to the required VLAN value.
10. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
11. Select IPV6 Configuration > Disable IPV6 and press Enter.
12. Go to DNS Configuration and set the customer provided value.
13. Go to Custom DNS Suffixes and set the customer provided value.
14. Press ESC to exit the network configuration and enter Y to apply the changes.
15. Enter Y to commit the changes and the node restarts.
16. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.
Prerequisites
Download the latest supported version from Dell iDRAC Service Module.
Steps
1. Copy ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip to the /vmfs/volumes/<datastore>/ folder on
the PowerFlex management node running VMware ESXi.
2. Start an SSH session with the new appliance management host running VMware ESXi using PuTTY.
3. To install VMware vSphere 7.x Dell iDRAC service module, type esxcli software vib install -d /vmfs/
volumes/<datastore>/ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip.
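Optionally, verify that the service module registered on the host before closing the SSH session. This is a general check, not a documented step, and the VIB name shown in the output depends on the iSM release:
esxcli software vib list | grep -i ism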
NOTE: Modify the VM network on the PowerFlex controller node planned for VMware vCenter deployment
Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Networking.
3. Right-click VM Network and click Edit Settings.
4. Change VLAN ID to flex-node-mgmt-<vlanid> and click Save.
Steps
1. Log in to VMware ESXi host client as root.
2. In the left pane, click Manage.
3. Click System and Time & Date.
4. Click Edit NTP Settings.
5. Select Use Network Time Protocol (enable NTP client).
6. Select Start and Stop with host from the drop-down list.
7. Enter NTP IP Addresses.
8. Click Save.
9. Click Services > ntpd.
10. Click Start.
Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Storage.
3. Click Datastores.
4. Right-click datastore1 and click Rename.
5. Enter PFMC_DS<last_ip_octet>.
6. Click Save.
Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Storage.
3. Click New Datastore.
4. Select Create new VMFS datastore and click Next.
5. Enter PERC-01 in the Name box and select Local Dell Disk.
6. Click Next.
7. Make sure that Use Full Disk and VMFS 6 are selected and click Next.
8. Verify Summary and click Finish.
9. Click Yes on content warning.
Steps
NOTE: VMware vCSA 7.0 installation fails if an FQDN is not specified or DNS records are not created for the corresponding
assigned FQDN during installation. Ensure that the correct forward and reverse records are created in DNS for this service. It is
assumed that the customer provides DNS and creates the required records.
1. Deploy a new appliance to the target VMware vCenter server or ESXi host:
a. Mount the ISO and open the VMware vCSA 7.x installer from \vcsa-ui-installer\win32\installer.exe.
b. Select Install from the VMware vCSA 7.x installer.
c. Click Next in Stage 1: Deploy vCenter Server wizard.
d. Select I accept the terms of the License Agreement and click Next.
e. Type the host FQDN of the PowerFlex management controller:
i. Provide the login credentials.
ii. Click Next and click Yes.
f. Enter the vCenter VM name (FQDN), set the root password, and confirm the root password. Click Next.
g. Set the deployment size to Large and leave storage as the default. Click Next.
h. Select Install on an existing datastore accessible from the target host, select PERC-01 (that was created
previously), and select Enable Thin Disk Mode. Click Next.
i. In the Configure network settings page, do the following:
Select the following:
● VM network from network
● IPv4 from IP version
● Static from IP assignment
Enter the following:
● FQDN
● IP address
● Subnet
● Default gateway
● DNS server information
j. Click Next.
k. Review the summary and click Finish.
2. Copy data from the source appliance to the VMware vCenter Server Appliance:
a. After selecting Continue from stage 1, select Next from the stage 2 introduction page.
b. Select Synchronize time with NTP Server and enter NTP Server IP Address(es) and select Disabled for SSH
access. Click Next.
c. Enter Single Sign-On domain name and password for SSO. Click Next.
d. Clear the Customer Experience Improvement Program (CEIP) check box. Click Next.
e. Review the summary information and click Finish > OK to continue.
f. Click Close on completion.
g. Log in to validate that the new controller vCSA is operational using SSO credentials.
Steps
1. Create a data center:
a. Log in to the VMware vSphere Client.
b. Right-click vCenter and click New Datacenter.
c. Enter data center name as PowerFlex Management and click OK.
2. Add a host to the data center:
NOTE: The vCLS VMs are deployed on the local datastore when the node is added to the cluster from vCSA 7.0 Ux.
These VMs are deployed automatically by VMware vCenter. When you add hosts to the cluster, they are used to manage
the HA and DRS services on the cluster.
Steps
1. In the VMware vSphere client, log in to the vCSA, on the Administration tab, select Licensing.
2. Click Add to open the New Licenses wizard.
3. Enter or paste the license keys for VMware vSphere and vCenter. Click Next.
4. Optionally, provide an identifying name for each license. Click Next.
5. Click Finish to complete the addition of licenses to the system inventory.
6. Select the vCenter license from the list and click OK.
7. Click Assets.
8. Click vCenter Server Systems and select the vCenter server and click Assign License.
9. In the Licenses view, the added licenses should be visible. Click the Assets tab.
10. Click Hosts.
11. Select the controller nodes.
12. Click Assign License.
13. In the Assign License dialog box, select the vSphere license from the list and click OK.
Related information
Deploying the PowerFlex management platform
Steps
1. In the web browser, enter https://<ip-address-of-idrac>.
2. From the iDRAC dashboard, click Maintenance > System Update > Manual Update.
3. Click Choose File. Browse to the release appropriate IC folder and select the appropriate files.
Required firmware:
● Dell BIOS firmware
● Dell BOSS Controller firmware
● Dell iDRAC or Lifecycle Controller firmware
● Dell Mellanox ConnectX-5 EN or Broadcom NetXtreme firmware
● HBA 355i (multi-node) controller firmware
4. Click Upload.
5. Click Install and Reboot.
Steps
1. Launch the virtual console, select Boot from the menu, and select BIOS setup from Boot Controls to enter the system
BIOS.
2. Power cycle the server and enter the BIOS setup.
3. From the menu, click Power > Reset System (Warm Boot).
4. Press F2 to enter the System Setup main menu, select Device Settings.
5. Select AHCI Controller in SlotX: BOSS-X Configuration Utility.
6. Select Create RAID Configuration.
7. Select both the devices and click Next.
8. Enter VD_R1_1 for name and retain the default values.
9. Click Yes to create the virtual disk and then click OK to apply the new configuration.
10. Click Next > OK.
11. Select VD_R1_1 that was created and click Back > Finish > Yes > OK.
12. Select System BIOS.
13. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
Steps
1. Log in to iDRAC and perform the following steps:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Virtual Media > Connect Virtual Media > Map CD/DVD.
c. Click Choose File and browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device > Close.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes.
2. Perform the following steps to install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select ATA DELLBOSS VD as the installation location. If prompted, press Enter.
d. Select US Default as the keyboard layout and press Enter.
e. At the prompt, type the root password, and press Enter.
f. At the Confirm Install screen, press F11.
g. In Virtual Console, click Virtual Media > Disconnect Virtual Media.
h. Click Yes to un-map all devices.
i. Press Enter to reboot the PowerFlex management controller when the installation completes.
Steps
1. Press F2 to customize the system.
2. Enter the root password and press Enter.
3. Go to DCUI and select Troubleshooting Options.
4. Select Enable SSH.
5. Select Enable ESXi Shell.
6. Press ESC to exit from troubleshooting mode options.
7. Go to Direct Console User Interface (DCUI) > Configure Management Network.
8. Set Network Adapter to VMNIC2.
9. Set the ESXi Management VLAN ID to the required VLAN value.
10. Set the IPv4 ADDRESS, SUBNET MASK, and DEFAULT GATEWAY.
11. Select IPV6 Configuration > Disable IPV6 and press Enter.
12. Go to DNS Configuration and set the customer provided value.
13. Go to Custom DNS Suffixes and set the customer provided value.
14. Press ESC to exit the network configuration and enter Y to apply the changes.
15. Enter Y to commit the changes and the node restarts.
16. Verify the host connectivity by pinging the IP address from the jump server using the command prompt.
Prerequisites
Download the latest supported version from Dell iDRAC Service Module.
Steps
1. Copy ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip to the /vmfs/volumes/<datastore>/ folder on
the PowerFlex management node running VMware ESXi.
2. Start an SSH session with the new appliance management host running VMware ESXi using PuTTY.
3. To install VMware vSphere 7.x Dell iDRAC service module, type esxcli software vib install -d /vmfs/
volumes/<datastore>/ISM-Dell-Web-X.X.X-XXXX.VIB-ESX7i-Live_AXX.zip.
NOTE: Modify the VM network on the PowerFlex controller node planned for VMware vCenter deployment
Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Networking.
3. Right-click VM Network and click Edit Settings.
4. Change VLAN ID to flex-node-mgmt-<vlanid> and click Save.
Steps
1. Log in to VMware ESXi host client as root.
2. In the left pane, click Manage.
3. Click System and Time & Date.
4. Click Edit NTP Settings.
5. Select Use Network Time Protocol (enable NTP client).
6. Select Start and Stop with host from the drop-down list.
7. Enter NTP IP Addresses.
8. Click Save.
9. Click Services > ntpd.
10. Click Start.
Steps
1. Log in to VMware ESXi host client as root.
2. On the left pane, click Storage.
3. Click Datastores.
4. Right-click datastore1 and click Rename.
5. Enter PFMC_DS<last_ip_octet>.
6. Click Save.
Enable PCI passthrough for the HBA 355 on the PowerFlex controller nodes
Use this procedure to enable PCI passthrough on the PowerFlex management controller.
Steps
1. Log in to the VMware ESXi host.
2. Select Manage > Hardware > PCI Devices.
3. Select Broadcom / LSI HBA H355i Front Device > Toggle passthrough.
4. A reboot is required after the storage data client (SDC) is installed.
NOTE: Ignore the VMware popup warning: Failed to configure passthrough devices.
Steps
1. Copy the storage data client file to the local datastore on the VMware ESXi server.
2. Use SSH to log in to each VMware ESXi host as root.
3. Type the following command to install the storage data client (SDC): esxcli software component apply -d /
vmfs/volumes/PFMC_DS<last ip octet>/sdc.zip.
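Optionally, confirm that the SDC component is present on the host. This is a general check, not a documented step; the component name in the output depends on the SDC release:
esxcli software component list | grep -i -E 'sdc|scaleio'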
Prerequisites
Each storage data client requires a unique UUID.
Steps
1. To configure the storage data client, generate one UUID per server (https://www.guidgenerator.com/online-
guid-generator.aspx).
3. Substitute the new UUID in the following command with the pfmc-data1-vip and pfmc-data2-vip:
4. Type the following command to verify scini configuration: esxcli system module parameters list -m scini |
head
5. Reboot the PowerFlex management controller 2.0.
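The configuration command referenced in step 3 is not reproduced here. As a hedged sketch only (confirm the exact parameter names and syntax against the SDC documentation for your release), setting the scini module parameters on VMware ESXi generally takes a form similar to the following, with the generated UUID and the pfmc-data1-vip and pfmc-data2-vip values substituted:
esxcli system module parameters set -m scini -p "IoctlIniGuidStr=<uuid> IoctlMdmIPStr=<pfmc-data1-vip>,<pfmc-data2-vip>"
The verification in step 4 should then show these values before the reboot in step 5.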
Related information
PowerFlex appliance node cabling
Steps
1. Deploy a new appliance to the target VMware vCenter server or VMware ESXi host:
a. Mount the ISO and open the VMware vCSA 7.x installer from \vcsa-ui-installer\win32\installer.exe.
b. Select Install from the VMware vCSA 7.x installer.
c. Click Next in Stage 1: Deploy vCenter Server wizard.
d. Accept the End User License Agreement and click Next.
e. Type the host FQDN of the PowerFlex management controller 2.0 (install on the node with the modified VM network):
i. Provide all the log in credentials.
ii. Click Next and click Yes.
f. Enter the vCenter VM name (FQDN), set the root password, and confirm the root password. Click Next.
g. Select the deployment size to Large and leave storage as Default and click Next.
h. In the Select Datastore page, select the following:
● Select PFMC_DSxxx.
● Select Enable Thin Disk Mode
i. Click Next.
j. In Configure network settings page, do the following:
Select the following:
● VM network from network
● IPv4 from IP version
● Static from IP assignment
Enter the following:
● FQDN
● IP address
● Subnet
● Default gateway
● DNS server information
k. Click Next.
l. Review the summary and click Finish.
2. Copy data from the source appliance to the VMware vCenter Server Appliance (vCSA):
a. Click Continue to continue from Stage 1 and select Next from the Stage 2 Introduction page.
b. Select Synchronize time with NTP Server and enter the NTP Server IP addresses and select Disabled for
SSH access. Click Next.
c. Enter the Single Sign-On domain name and password for SSO, and click Next.
d. Clear the Customer Experience Improvement Program (CEIP) check box, and click Next.
e. Review the summary information and click Finish > OK to continue.
f. Click Close when it completes.
g. Log in to validate that the new controller vCSA is operational using SSO credentials.
Create a datacenter
Use this procedure to create a datacenter. This will be the container for all the PowerFlex management controller inventory.
Steps
1. Log in to the VMware vSphere Client.
2. Right-click vCenter and click New Datacenter.
3. Enter data center name as PFMC-Datacenter and click OK.
Create a cluster
Use this procedure to create a cluster.
Steps
1. Right-click Datacenter and click New Cluster.
2. Enter the cluster name as PFMC-Management-Cluster and retain the default for DRS, and HA and click OK.
3. Verify the summary and click Finish.
Steps
1. Log in to the VMware vSphere Client.
2. In the left pane, click vCenter > Hosts and Clusters.
3. Right-click the PowerFlex Management Cluster and click Add Host.
4. Enter the FQDN.
5. Enter root username and password for the host and click Next.
6. Repeat steps 2 through 5 for all the controller hosts.
7. On the security alert popup, select All Hosts and click OK.
8. Verify the summary and click Finish.
NOTE: If the node goes into maintenance mode, right-click the VMware ESXi host and click Maintenance Mode > Exit
Maintenance Mode. vCLS VMs are migrated using PowerFlex Manager after takeover.
Steps
1. In the VMware vSphere client, log in to the vCSA, on the Administration tab, select Licensing.
2. Click Add to open the New Licenses wizard.
3. Enter or paste the license keys for VMware vSphere and vCenter. Click Next.
4. Optionally, provide an identifying name for each license. Click Next.
5. Click Finish to complete the addition of licenses to the system inventory.
6. Select the vCenter license from the list and click OK.
7. Click Assets.
8. Click vCenter Server Systems and select the vCenter server and click Assign License.
9. In the Licenses view, the added licenses should be visible. Click the Assets tab.
10. Click Hosts.
11. Select the controller nodes.
12. Click Assign License.
13. In the Assign License dialog box, select the vSphere license from the list and click OK.
Supported networking: Port-channel with LACP

Virtual Switch name | Switch mode | Speed (G) | LACP mode | Required VLANs | Node load balancing
fe_dvSwitch | Trunk | 25 | Active | 105, 140, 150 | LAG-Active-Src and dest IP and TCP/UDP
be_dvSwitch | Trunk | 25 | Active | 103, 141, 142, 143, 151, 152, 153 (if required), 154 (if required) | LAG-Active-Src and dest IP and TCP/UDP
oob_dvSwitch | Access | 10 / 25 | N/A | 101 | N/A
Steps
1. Log in to the VMware vSphere Client.
2. From Home, click Networking and expand the data center.
3. Right-click data center:
a. Click Distributed Switch > New Distributed Switch.
b. Update the name to FE_dvSwitch and click Next.
c. On the Select version page, select 7.0.3 - ESXi 7.0.3 and later and click Next.
d. Under Configure Settings, select 2 for Number of uplinks.
e. Select Enabled from the Network I/O Control menu.
f. Clear the Create default port group option.
g. Click Next.
h. In Ready to complete, click Finish.
4. Right-click FE_dvSwitch and click Settings > Edit Settings.
5. Select Advanced.
6. Set MTU to 9000.
7. Under Discovery Protocol, set the type to Link Layer Discovery Protocol and the operation to Both, and click OK.
Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click FE_dvSwitch and select Distributed Port Group > New Distributed Port Group.
3. Enter flex-node-mgmt-<vlanid> and click Next.
4. Leave the port related options (port binding, allocation, and number of ports) as the default values.
5. Select VLAN as the VLAN type.
6. Set the VLAN ID to the appropriate VLAN number and click Next.
7. In the Ready to complete screen, verify the details and click Finish.
8. Repeat steps 2 through 7 to create the following port groups:
● pfmc-sds-mgmt-<vlanid>
● flex-stor-mgmt-<vlanid>
Steps
1. Log in to the VMware vSphere Client.
2. From Home, click Networking and expand the data center.
3. Right-click the data center:
a. Click Distributed Switch > New Distributed Switch.
b. Update the name to BE_dvSwitch and click Next.
c. On the Select version page, select 7.0.3 - ESXi 7.0.3 and later and click Next.
d. Under Configure Settings, select 2 for Number of uplinks.
e. Select Enabled from the Network I/O Control menu.
f. Clear the Create default port group option.
g. Click Next.
h. In Ready to complete, click Finish.
4. Right-click BE_dvSwitch and click Settings > Edit Settings.
5. Select Advanced.
6. Set MTU to 9000.
7. Under Discovery Protocol, set the type to Link Layer Discovery Protocol and the operation to Both, and click OK.
Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click BE_dvSwitch and select Distributed Port Group > New Distributed Port Group.
3. Enter flex-vcsa-ha-<vlanid> and click Next.
4. Leave the port-related options (Port binding, Port allocation, and # of ports) as the default values.
5. Select VLAN as the VLAN type.
6. Set the VLAN ID to the appropriate VLAN number.
7. Clear the Customize default policies configuration and click Next > Finish.
8. Repeat steps 2 through 7 to create the following port groups:
● pfmc-vmotion-<vlanid>
● pfmc-sds-data1-<vlanid>
● pfmc-sds-data2-<vlanid>
● flex-data1-<vlanid>
● flex-data2-<vlanid>
● flex-data3-<vlanid> (if required)
● flex-data4-<vlanid> (if required)
Steps
1. Log in to the VMware vSphere client.
2. Select FE_dvSwitch.
3. Click Configure. In Settings, select LACP.
4. Click New and enter the name LAG-FE. The default number of ports is 2.
5. Select the mode as active.
6. Select Load Balancing Mode as Source and Destination IP address and TCP/UDP Port.
7. Set the timeout mode to Slow.
8. Click OK to create LAG.
Steps
1. Log in to the VMware vSphere client.
2. Select BE_dvSwitch.
3. Click Configure. In Settings, select LACP.
4. Click New and enter the name LAG-BE. The default number of ports is 2.
5. Select mode as active.
6. Select Load Balancing Mode as Source and Destination IP address and TCP/UDP Port.
7. Set the timeout mode to Slow.
8. Click OK to create LAG.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure, and from Settings, select LACP.
4. Click Migrating network traffic to LAGs.
5. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
6. Select All port groups and click Next.
7. Select LAG-FE and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select BE_dvSwitch.
3. Click Configure, and from Settings, select LACP.
4. Click Migrating network traffic to LAGs.
5. Click Manage Distributed Port Groups, click Teaming and Failover, and click Next.
6. Select All port groups and click Next.
7. Select LAG-BE and move it to Active Uplinks.
8. Move Uplink1 and Uplink2 to Unused Uplinks and click Next.
9. Click Finish.
Steps
1. Log in to the VMware vSphere Client.
2. Select BE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Select Add Hosts and click Next.
6. Click New Hosts, select All Hosts, and click OK > Next.
7. Select vmnic3 and click Assign Uplink.
8. Select LAG-BE-0 and select Apply this uplink assignment to the rest of the hosts, and click OK.
9. Select vmnic7 and click Assign Uplink.
10. Select LAG-BE-1 and select Apply this uplink assignment to the rest of the hosts, and click OK.
11. Click Next > Next > Next > Finish.
Add the first uplink of the vCenter PowerFlex management controller to the
FE_dvSwitch
Use this procedure to migrate the first uplink on the PowerFlex management controller with VMware vCenter.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Click Add Hosts and click Next.
6. Select the host with the vCSA.
7. On the Manage physical adapters page, select the Adapters on all hosts tab.
8. For vmnic6, using the drop-down menu for assign uplink, select LAG-FE-1.
9. Click Next > Next > Next > Finish.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Click Manage host networking and click Next.
6. Select the host with the vCSA, and click OK > Next > Next > Next.
7. On the Migrate VM networking page, select the Configure per virtual machine tab.
8. For the vCSA, under the Destination port group column, click Assign Port Group.
9. On the Select network page, under Actions, click Assign for the flex-node-mgmt network.
10. Click Next > Finish.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Click Manage host networking and click Next.
6. Select the host with the vCSA and click Next.
7. On the Manage physical adapters page, select the Adapters on all hosts tab.
8. For vmnic2, using the drop-down menu for assign uplink, select LAG-FE-0.
9. Click Next.
10. On the Manage VMkernel adapters page, for vmk0, under the Destination port group column, click Assign Port Group.
11. On the Select network page, under Actions, click Assign for the flex-node-mgmt network.
12. Click Next > Next > Finish.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select FE_dvSwitch.
3. Click Configure and in Settings, select LACP.
4. Click Migrating network traffic to LAGs > Add and Manage Hosts.
5. Click Add Hosts and click Next.
6. Click Select All and click Next.
7. On the Manage physical adapters page, select the Adapters on all hosts tab.
8. For vmnic2, using the drop-down menu for assign uplink, select LAG-FE-0.
9. For vmnic6, using the drop down menu for assign uplink, select LAG-FE-1.
10. Click Next.
11. On the Manage VMkernel adapters page, for vmk0, under the Destination port group column, click Assign Port Group.
12. On the Select network page, under Actions, click Assign for the flex-node-mgmt network.
13. Click Next > Next > Finish.
Steps
1. Log in to the VMware vSphere Client.
2. From Home, click Networking and expand the data center.
3. Right-click the data center:
a. Click Distributed Switch > New Distributed Switch.
b. Update the name to oob_dvSwitch and click Next.
c. On the Select version page, select 7.0.3 - ESXi 7.0.3 and later, and click Next.
d. Under Edit Settings, select 1 for Number of uplinks.
e. Select Enabled from the Network I/O Control menu.
f. Clear the Create default port group option.
g. Click Next.
h. On Ready to complete page, click Finish.
Steps
1. Log in to the VMware vSphere Client and click Networking.
2. Right-click oob_dvSwitch and select Distributed Port Group > New Distributed Port Group.
3. Leave the port-related options (Port binding, Port allocation, and # of ports) as the default values.
4. Select VLAN as the VLAN type.
5. Enter flex-oob-mgmt-<vlanid> and click Next.
6. Click Next > Finish.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking and select oob_dvSwitch.
3. Right-click the dvSwitch and select Add and Manage Hosts.
4. Select Add Hosts and click Next.
5. Click Select All and click Next.
6. For vmnic4, using the drop-down menu for assign uplink, select Uplink 1.
7. Click Next > Next > Next > Finish.
Steps
1. Log in to the VMware vSphere Client.
2. Select the host and click Configure on the right pane.
3. Under Networking tab, select the VMkernel adapter.
4. Click Add Networking.
5. Select Connection type as VMkernel network adapter and click Next.
6. Select Target device as Existing network and click Browse to select the appropriate port group.
7. On the port properties, select Enable services, select the appropriate service, and click Next.
For example, for vMotion, select vMotion. For any other networks, retain the default service.
8. In IPV4 Settings, select Use static IPV4 settings, provide the appropriate IP address and subnet details, and click Next.
9. Verify the details on Ready to Complete and click Finish.
10. Repeat the steps 2 through 9 to create the VMkernel adapters for the following port groups:
● pfmc-vmotion-<vlanid>
● pfmc-sds-data1-<vlanid>
● pfmc-sds-data2-<vlanid>
Deploying PowerFlex
NOTE: Manually deploy the PowerFlex SVM on each PowerFlex controller node. The SVM on the PowerFlex management
controller 2.0 is installed on local storage. The SVM on the PowerFlex management controller is installed on the PERC-01
storage.
Steps
1. Log in to the VMware vCSA.
2. Select Hosts and Clusters.
3. Right-click the ESXi host > Select Deploy OVF Template.
4. Select Local file > Upload file > Browse to the SVM OVA template.
5. Click Open > Next.
6. Enter pfmc-<svm-ip-address> for VM name.
7. Click Next.
8. Identify the cluster and select the node that you are deploying. Verify that there are no compatibility warnings and click
Next.
9. Click Next.
10. Review details and click Next.
11. Select the local datastore, select Thin Provision and Disable Storage DRS for this VM, and click Next.
12. Select pfmc-sds-mgmt-<vlanid> for VM network and click Next.
13. Click Finish.
NOTE: The deployed OVA has five interfaces configured. Remove the two unused interfaces from the OVA.
Steps
1. Right-click each SVM, and click Edit Settings.
a. Set CPU to 12 CPUs with 12 cores per socket.
b. Select Reservation and enter the GHz value: 17.4.
c. Set Memory to 18 GB and check Reserve all guest memory (all locked).
d. Set Network Adapter 1 to the pfmc-sds-mgmt-<vlanid>.
e. Set Network Adapter 2 to the pfmc-sds-data1-<vlanid>.
f. Set Network Adapter 3 to the pfmc-sds-data2-<vlanid>.
g. Click Add New Device and select PCI Device (new single PowerFlex management controller).
h. Enable Toggle DirectPath IO.
i. Set PCI Device to HBA 355i Front Broadcom / LSI.
j. Click OK.
2. Power on the SVM and open a console.
3. Log in using the following credentials:
● Username: root
● Password: admin
4. To change the root password, type passwd and enter the new SVM root password twice.
5. To set the hostname, type: hostnamectl set-hostname <hostname>.
Steps
1. Configure the PowerFlex management controller 2.0 network. Type: vi /etc/sysconfig/network/ifcfg-eth0 and
enter the following information:
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<netmask>
For example:
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.11
NETMASK=255.255.255.224
3. Configure DNS search and DNS servers. Type: vi /etc/sysconfig/network/config and modify the following:
NETCONFIG_DNS_STATIC_SEARCHLIST="<search domain>"
NETCONFIG_DNS_STATIC_SERVERS="<dns ip>"
For example:
NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
NETCONFIG_DNS_STATIC_SERVERS="10.10.10.240"
Steps
1. Configure the PowerFlex management controller 2.0 network. Type: vi /etc/sysconfig/network/ifcfg-eth1 and
enter the following information:
DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<netmask>
MTU=<mtu>
For example:
DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.11.11
NETMASK=255.255.255.224
MTU=9000
Steps
1. Configure the PowerFlex management controller 2.0 network. Type: vi /etc/sysconfig/network/ifcfg-eth2 and
enter the following information:
DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<netmask>
MTU=<mtu>
For example:
DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.12.11
NETMASK=255.255.255.224
MTU=9000
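After the interface files for eth0, eth1, and eth2 are populated, restart networking on the SVM so the changes take effect and confirm that the addresses are applied. This is a general SLES-based check, not a release-specific requirement:
systemctl restart network
ip addr show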
Steps
1. Log in to the VMware vCSA.
2. From Home, select PFMC-Datacenter.
3. Select Hosts and Cluster and expand PFMC-Management-Cluster.
4. Select the SVM and on the VM summary page, select Launch Web Console.
5. Log in to the SVM as root.
6. Run the following commands to verify connectivity between the SVMs:
● For the pfmc-sds-mgmt interface, run: ping [destination pfmc-sds-mgmt-ip]
● For the pfmc-sds-data interfaces, run: ping -M do -s 8972 [pfmc-sds-data-ip]
7. Confirm connectivity for all interfaces to all SVMs.
Steps
1. On all PowerFlex controller nodes perform the following:
a. Install LIA on all the PowerFlex management controllers, enter the following command:
TOKEN='<TOKEN-PASSWORD>' rpm -ivh /root/install/EMC-ScaleIO-lia-x.x-
x.sles15.3.x86_64.rpm
b. Install the SDS on all PowerFlex management controllers, enter the following command:
rpm -ivh /root/install/EMC-ScaleIO-sds-x.xxx.xxx.sles15.3.x86_64.rpm
c. To verify that Java is installed, enter the following command: java -version
Example output if Java is installed:
java -version
openjdk version "11.0.13" 2021-10-19
OpenJDK Runtime Environment (build 11.0.13+8-suse-3.68.1-x8664)
OpenJDK 64-Bit Server VM (build 11.0.13+8-suse-3.68.1-x8664, mixed mode)
If it is not installed, install OpenJDK on all the PowerFlex management controllers by entering the following command:
rpm -ivh /root/install/java-11-openjdk-headless-x.xxx.xxx.x86_64.rpm
d. Install ActiveMq on all the PowerFlex management controllers, type:
rpm -ivh /root/install/EMC-ScaleIO-activemq-x.xxx.xxx.noarch.rpm
2. On the MDM PowerFlex controller nodes, perform the following:
a. Install MDM on the SVM1 and SVM2 by running the following command:
MDM_ROLE_IS_MANAGER=1 rpm -ivh /root/install/EMC-ScaleIO-mdm-x.x-
xxxx.xxx.el7.x86_64.rpm
3. On the tiebreaker PowerFlex controller nodes, perform the following:
a. Install MDM on SVM3 by running the following command:
MDM_ROLE_IS_MANAGER=0 rpm -ivh /root/install/EMC-ScaleIO-mdm-x.x-
xxxx.xxx.el7.x86_64.rpm
b. To reboot, type reboot.
4. Reboot all SVMs.
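Optionally, after the SVMs come back up, confirm that the packages installed in the preceding steps are registered. This is a general check, not a documented step; the exact package names depend on the build in the /root/install folder:
rpm -qa | grep -i -E 'scaleio|activemq|openjdk'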
Steps
1. Log in as a root user to the primary MDM.
2. Go to the config folder, type: cd /opt/emc/scaleio/mdm/cfg
3. Generate CA certificate, type: python3 certificate_generator_MDM_USER.py --generate_ca
mgmt_ca.pem.
4. Create a CLI certificate, type: python3 certificate_generator_MDM_USER.py --generate_cli
cli_certificate.p12 -CA mgmt_ca.pem --password <password>.
Steps
1. Create the MDM cluster in the SVM, type: scli --create_mdm_cluster --master_mdm_ip <data1
ip address,data2 ip address> --master_mdm_management_ip <mdm mgmt ip address>
--cluster_virtual_ip <vip 1,vip 2> --master_mdm_virtual_ip_interface eth1,eth2 --
master_mdm_name <pfmc-svm-last ip octet> --accept_license --approve_certificate.
2. Log in, type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --
p12_password <password>.
3. Query the cluster, type: scli --query_cluster.
4. Add a secondary MDM to the cluster, type: scli --add_standby_mdm --new_mdm_ip <data1 ip
address,data2 ip address> --new_mdm_virtual_ip_interface eth1,eth2 --mdm_role manager --
new_mdm_management_ip <mdm mgmt ip address> --new_mdm_name <pfmc-svm-last ip octet> --
i_am_sure.
5. Add the Tiebreaker MDM to the cluster, type: scli --add_standby_mdm --mdm_role tb --new_mdm_ip <data1
ip address,data2 ip address> --new_mdm_name <pfmc-svm-last ip octet> --i_am_sure.
Steps
1. To log in as root to the primary MDM, type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/
cli_certificate.p12 --p12_password password.
2. To verify cluster status (cluster mode is 1_node), type: scli --query_cluster.
Example:
Cluster: Mode: 1_node
3. To convert a single node cluster to a three node cluster, type: scli --switch_cluster_mode --cluster_mode
3_node --add_slave_mdm_name <standby-mdm-name> --add_tb_name <tiebreaker-mdm-name>.
Example:
scli --switch_cluster_mode --cluster_mode 3_node --add_slave_mdm_name pfmc-svm-39 --
add_tb_name pfmc-svm-40
Steps
1. Log in to the MDM. Type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12
--p12_password <password>
NOTE: After discovering the MDM on PowerFlex Manager, the login will be as follows:
Steps
1. Log in to the MDM. Type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12
--p12_password <password>.
2. To create the storage pool, type: scli --add_storage_pool --protection_domain_name PFMC --
dont_use_rmcache --media_type SSD --data_layout medium_granularity --storage_pool_name
PFMC-Pool.
Set the spare capacity for the medium granularity storage pool
Use this procedure to set the spare capacity for the medium granularity storage pool.
Steps
1. Log in to the primary MDM, type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/
cli_certificate.p12 --p12_password <password>.
2. To modify the capacity pool, type scli --modify_spare_policy --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --spare_percentage <percentage>.
NOTE: Spare percentage is 1/n (where n is the number of nodes in the cluster). For example, the spare percentage for
a three-node cluster is 34%.
3. Type Y to proceed.
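For example, applying the note above to a three-node cluster:
scli --modify_spare_policy --protection_domain_name PFMC --storage_pool_name PFMC-Pool --spare_percentage 34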
Steps
1. Log in to the MDM. Type: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12
--p12_password <password>.
2. To add storage data servers, type scli --add_sds --sds_ip <pfmc-sds-data1-ip,pfmc-sds-data2-ip>
--protection_domain_name PFMC --storage_pool_name PFMC-Pool --disable_rmcache --sds_name
PFMC-SDS-<last ip octet>.
3. Repeat for each PowerFlex management controller.
Steps
1. Log in as root to each of the storage VMs.
2. To identify all available disks on the SVM, type lsblk.
3. Repeat for all PowerFlex controller nodes SVMs.
Steps
1. Log in to the MDM: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --
p12_password <password>.
2. To add SDS storage devices, type scli --add_sds_device --sds_name <sds name> --storage_pool_name
<storage pool name> --device_path /dev/sd(x).
3. Repeat for all storage devices and for all PowerFlex management controller SVMs.
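For example, to add one of the SSD devices identified with lsblk to the SDS created earlier (the device path /dev/sdb is a placeholder; use the paths reported on each SVM):
scli --add_sds_device --sds_name PFMC-SDS-<last ip octet> --storage_pool_name PFMC-Pool --device_path /dev/sdb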
Create datastores
Use this procedure to create datastores and add volumes.
Steps
1. Log in to the MDM: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --
p12_password <password>.
2. To create the vCSA datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 3000 --volume_name vcsa --dont_use_rmcache.
3. To create the general datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 1500 --volume_name general --dont_use_rmcache.
4. To create the PowerFlex Manager datastore, type scli --add_volume --protection_domain_name PFMC --
storage_pool_name PFMC-Pool --size_gb 3000 --volume_name PFMP --dont_use_rmcache.
Steps
1. Log in to the MDM: scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --
p12_password <password>.
2. To query all storage data clients (SDC) to capture the SDC IDs, type scli --query_all_sdc.
3. To query all the volumes to capture volume names, type scli --query_all_volumes.
4. To map volumes to SDCs, type scli --map_volume_to_sdc --volume_name <volume name> --sdc_id <sdc
id> --allow_multi_map.
5. Repeat steps 2 through 4 for all volumes and SDCs.
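For example, to map the vcsa volume created earlier to one SDC, using an SDC ID captured in step 2:
scli --map_volume_to_sdc --volume_name vcsa --sdc_id <sdc id> --allow_multi_map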
Steps
1. Log in to the VMware vCSA.
2. Right-click the new PowerFlex management controller node in the Hosts and Clusters view.
3. Select Storage > New Datastore.
4. Select VMFS and click Next.
5. Enter a name for the datastore, select an available LUN, and click Next.
6. Select VMFS 6 and click Next.
7. For partition configuration, retain the default settings and click Next.
8. Click Finish to start creating the datastore.
9. Repeat for all additional volumes created in the PowerFlex cluster.
Delete vSwitch0
Use this procedure to delete the standard switch (vSwitch0) for all PowerFlex management controllers.
Steps
1. Log in to the VMware vSphere Client.
2. On the Menu, click Host and Cluster.
3. Select Controller A node.
4. Click Configure > Networking > Virtual Switches.
5. Expand Standard Switch: vSwitch0.
6. Click ... > Remove.
7. On the Remove Standard Switch window, click Yes.
Steps
1. Log in to the VMware vSphere Client.
2. Click VMware vCenter > Hosts and Clusters > Cluster Name.
3. To enable vSphere HA, click vSphere Availability under Services, and click Edit.
4. Select Turn ON VMware vSphere HA.
Prerequisites
● Ensure VMware ESXi is installed on all the PowerFlex controller nodes.
● Copy the IC code repository to the /home/admin/share path of the jump server.
● Confirm the availability of the virtual machine template: Embedded-JumpSrv-YYYYMMDD.ova, as specified in the
appropriate IC.
● Obtain an IP address from flex-node-mgmt-<vlanid> for the jump server main interface.
Steps
1. Deploy the OVA:
a. Log in to VMware vSphere Client using credentials.
b. Select vSphere Client and select Host and Clusters.
c. Right-click the controller cluster PFMC Cluster and select Deploy OVF Template. The Deploy OVF Template wizard
opens.
d. On the Select an OVF Template page, upload the OVF template using either the URL or Local file option, and click Next.
e. On the Select a name and folder page, enter the name of the virtual machine according to the Enterprise Management
Platform (EMP), select the location for the virtual machine, and click Next.
f. On the Select a compute resource page, select the node where you want the jump server to be hosted and click Next.
g. On the Review details page, verify the template details and click Next.
h. On the Select storage page, choose the datastore as per the EMP, set the virtual disk format to Thin Provision, and click
Next.
i. On the Select networks page, assign the destination networks to the VM:
● Primary NIC for management access (flex-node-mgmt-<vlanid>)
● Secondary NIC for iDRAC access (flex-oob-mgmt-<vlanid>)
● Third NIC for initial deployment support access (optional)
j. Click Next.
k. On the Ready to Complete page, review the settings and click Finish.
2. On the first boot:
a. Right-click the VM and power it on. Wait to complete the initial boot.
b. Log in as admin.
c. Set up the networking:
i. Use the yast command to configure the management.
ii. On yast controller center page, select System and select Network Settings.
iii. On the Global Options tab, disable IPv6 in the network settings.
iv. Clear the Enable IPv6 check box.
g. Press F10 to save all the changes and press F9 to exit the YaST control center.
h. Use ip addr s to verify that the IP addresses are configured properly and the interfaces are up.
i. Edit the /etc/exports file and add the flex-node-mgmt-<vlanid> subnet for the NFS shares (an example export entry
follows these steps).
j. To change the default password, run the command sudo passwd and, at the prompt, provide the password as per the
EMP.
k. Power off the VM.
3. Upgrade the VM hardware version:
a. Select Upgrade.
b. Check Schedule VM Compatibility Upgrade.
c. Expand Upgrade.
d. Select Compatible with (*): ESXi 7.x and later.
e. Click OK.
4. Power on the VM.
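The /etc/exports entry added in step 2 is typically a single line. The following is a hedged example only: the share path matches the IC repository location on the jump server, and the subnet and prefix are placeholders for the flex-node-mgmt-<vlanid> network:
/home/admin/share <flex-node-mgmt-subnet>/<prefix>(ro,no_subtree_check)
If the NFS server service is already running, apply the change with sudo exportfs -ra.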
7
Deploying the PowerFlex management platform
This section describes how to install and configure the PowerFlex management platform.
This includes the deployment and configuration of the temporary PowerFlex management platform installer virtual machine.
This PowerFlex management platform installer VM is used to deploy the containerized services required for the PowerFlex
management platform. Remove the installer VM after the deployment of the PowerFlex management platform. PowerFlex
management platform deployment types are:
● PowerFlex controller node - A single node VMware ESXi system with local (RAID) storage.
● PowerFlex management controller - A multi-node highly available cluster based on PowerFlex storage and VMware ESXi.
● Customer-provided hypervisor based on kernel-based VM (KVM) - The customer deploys the Dell eSLES VMs on their
hypervisor to run the management platform.
The PowerFlex management controller (single node or multi-node) needs to be configured before installing the PowerFlex
management platform. Verify the PowerFlex management controller has the recommended resource requirements needed
before proceeding.
NOTE:
● The PowerFlex management platform installer VM is removed after installation of the PowerFlex management platform
cluster.
● The PowerFlex management platform cluster requires three VMs to be deployed.
● Ensure the network VLAN requirements are met:
○ VLAN flex-node-mgmt (105) and flex-stor-mgmt (150) must be routable to each other
○ VLAN flex-node-mgmt (105) and pfmc-sds-mgmt (140) must be routable to each other
○ VLAN pfmc-sds-mgmt (140) and flex-stor-mgmt (150) must not route to each other
○ If VLAN 150 and 105 are not routed to each other, contact Dell support.
● Ensure the NTP is configured for correct time synchronization for all hosts and VMs.
● Ensure the DNS and PTR records are set up and properly configured.
Related information
Add VMware vSphere licenses
Related information
Deployment requirements
Steps
1. Log in to the VMware vCSA.
2. Click Menu > Shortcuts > Hosts and Clusters.
3. Right-click the ESXi Host > Select Deploy OVF Template.
4. Select Local File > Upload Files > Browse to the PowerFlex management platform OVA Template
5. Click Open > Next.
6. Enter pfmp-installer for the VM name.
7. Click Next.
8. Verify that there are no compatibility warnings and click Next.
9. Click Next.
10. Review details and click Next.
11. Select Virtual disk format > Thin provision > Next.
12. Select the datastore.
13. Select flex-node-mgmt-<vlanid> > OK > Next.
14. Click Finish.
15. Right-click the VM and select Power > Power On.
Steps
1. Launch the web console from vCSA and log in as delladmin.
2. To configure the <flex-node-mgmt> interface, type sudo vi /etc/sysconfig/network/ifcfg-eth0
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
Example:
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.12
NETMASK=255.255.255.0
a. To configure the default route, type sudo vi /etc/sysconfig/network/routes and add: default <gateway ip>
- <interface>
Example: default 10.10.10.1 - eth0
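Once networking restarts (for example, after the reboot performed in the NTP steps that follow), the route can be confirmed with ip route show. For the example above it should list a line similar to: default via 10.10.10.1 dev eth0.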
Steps
1. To configure the NTP server, type sudo vi /etc/chrony.conf and add the following: server <ntp ip address>
iburst.
For example: server 10.10.10.240 iburst
2. To enable chronyd, type sudo systemctl enable chronyd.
3. To reboot, type sudo reboot.
4. Log in as delladmin.
5. To check that the server is synced to NTP, type chronyc tracking.
NOTE: This may take a few minutes to sync. In the chronyc tracking output, confirm that the Reference ID shows the
configured NTP server and that Leap status is Normal.
Related information
Deployment requirements
Steps
1. Log in to the KVM server.
2. Copy the management eSLES QCOW image to the KVM server.
3. Open terminal and type virt-manager.
4. Click File > New Virtual Machine.
5. Select Import existing disk image and click Forward.
6. Click Browse and select the eSLES QCOW image from the saved path.
7. Select the operating system as Generic OS and click Forward.
8. Complete necessary changes to the CPU and RAM as per requirements and click Forward.
9. Enter the VM name, and in the network selection, select Bridge device, enter the device name, and click Finish.
Steps
1. Launch the web console from vCSA and log in as delladmin.
2. To configure the <flex-node-mgmt> interface, type sudo vi /etc/sysconfig/network/ifcfg-eth0
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
Example:
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.12
NETMASK=255.255.255.0
a. To configure the default route, type sudo vi /etc/sysconfig/network/routes and add: default <gateway ip>
- <interface>
Example: default 10.10.10.1 - eth0
b. To configure DNS search and DNS servers, type sudo vi /etc/sysconfig/network/
config and modify the following: NETCONFIG_DNS_STATIC_SEARCHLIST="<search domain>"
NETCONFIG_DNS_STATIC_SERVERS="< dns_ip1 dns_ip2>"
Example: NETCONFIG_DNS_STATIC_SEARCHLIST="example.com" NETCONFIG_DNS_STATIC_SERVERS="
10.10.10.240 10.10.10.241"
Steps
1. To configure the NTP server, type sudo vi /etc/chrony.conf and add the following: server <ntp ip address>
iburst.
For example: server 10.10.10.240 iburst
2. To enable chronyd, type sudo systemctl enable chronyd.
3. To reboot, type sudo reboot.
4. Log in as delladmin.
5. To check that the server is synced to NTP, type chronyc tracking.
NOTE: This may take a few minutes to sync. In the chronyc tracking output, confirm that the Reference ID shows the
configured NTP server and that Leap status is Normal.
Related information
Deployment requirements
Prerequisites
The PowerFlex management platform cluster requires three VMs to be deployed.
Steps
1. Log in to the VMware vCSA.
2. Click Menu > Shortcuts > Hosts and Clusters.
3. Right-click the ESXi Host > Select Deploy OVF Template.
4. Select Local File > Upload Files > Browse to the PowerFlex management platform OVA
5. Click Open > Next.
6. Enter pfmp-mvm-<number> for the VM name.
7. Click Next.
8. Verify that there are no compatibility warnings and click Next.
9. Click Next.
10. Review details and click Next.
11. Select Virtual disk format > Thin provision > Next.
12. Select the datastore.
13. Select Desired Destination Network > OK > Next.
14. Click Finish.
15. Repeat the above for the three management VMs.
NOTE: The management VMs and the installer VM use the flex-node-mgmt network. They must be on the same
network.
Prerequisites
PowerFlex manager requires these interfaces for alerting, upgrade, management, and other services.
Steps
1. Log in to the VMware vCSA.
2. Right-click a management virtual machine and click Edit Settings.
3. Select Virtual Hardware > Add New Device > Network Adapter.
4. For the new network adapter created, click the drop-down menu and select the appropriate <vlan id> network.
5. Repeat for all required network adapters.
6. Click OK.
7. Repeat for all management virtual machines.
8. Power on the management virtual machine.
Steps
1. Launch the web console from the Virtual Machine Manager. Log in as delladmin.
2. Configure the flex-node-mgmt-<vlanid> eth0 interface:
a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth0
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
For example:
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.12
NETMASK=255.255.255.0
DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
For example:
DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.9.11
NETMASK=255.255.255.0
DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
For example:
DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=192.168.151.10
NETMASK=255.255.255.0
DEVICE=eth3
NAME=eth3
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
For example:
DEVICE=eth3
NAME=eth3
STARTMODE=auto
BOOTPROTO=static
IPADDR=192.168.152.10
NETMASK=255.255.255.0
Steps
1. To configure the NTP server, type sudo vi /etc/chrony.conf and add the following: server <ntp ip address>
iburst.
For example: server 10.10.10.240 iburst
2. To enable chronyd, type sudo systemctl enable chronyd.
3. To reboot, type sudo reboot.
4. Log in as delladmin.
5. To check that the server is synced to NTP, type chronyc tracking.
NOTE: This may take a few minutes to sync. In the chronyc tracking output, confirm that the Reference ID shows the
configured NTP server and that Leap status is Normal.
Prerequisites
The following locations contain log files for troubleshooting:
● PowerFlex management platform installer logs: /opt/dell/pfmp/PFMP_Installer/logs
● Platform installer logs: /opt/dell/pfmp/atlantic/logs/bedrock.log
This table describes the PFMP_Config.json and its configuration parameters:
Steps
1. To SSH as a non-root user to the PowerFlex management platform installer, run the following command: ssh
delladmin@<pfmp installer ip>.
2. To navigate to the config directory, run the following command: cd /opt/dell/pfmp/PFMP_Installer/config
For example: cd /opt/dell/pfmp/PFMP_Installer/config/
3. To configure the PFMP_Config.json, run the following command: sudo vi PFMP_Config.json and update the
configuration parameters.
For example:
{
"Nodes":
[
{
"hostname": "pfmp-mgmt-01",
"ipaddress": "10.10.10.01"
},
{
"hostname": "pfmp-mgmt-02",
"ipaddress": "10.10.10.02"
},
{
"hostname": "pfmp-mgmt-03",
"ipaddress": "10.10.10.03"
}
],
"ClusterReservedIPPoolCIDR" : "10.42.0.0/23",
"ServiceReservedIPPoolCIDR" : "10.43.0.0/23",
"RoutableIPPoolCIDR" : [{"flex-node-mgmt-<vlanid>":"10.10.10.20-10.10.10.24"},
{"flex-oob-mgmt-<vlanid>":"10.10.20.20-10.10.20.24"},
{"flex-data1-<vlanid>”:”192.168.151.20-192.169.151.24"},
{"flex-data2-<vlanid>”:”192.168.152.20-192.168.152.24"}],
"PFMPHostname" : "dellpowerflex.com",
“PFMPHostIP” : “10.10.10.20”
NOTE: If the customer is planning for four data networks, add flex-data3-<vlanid> and flex-data4-<vlanid> also in
PFMP_Config.json file.
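Before running the installer, the edited file can optionally be checked for JSON syntax errors. This is a general validation step, assuming python3 is available on the installer VM; it is not part of the documented procedure:
sudo python3 -m json.tool PFMP_Config.json
If the file parses, the formatted JSON is printed; otherwise the line with the syntax error is reported.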
Steps
1. Log in to the VMware vCSA.
2. Select Hosts and Clusters.
3. Right-click PowerFlex management platform installer VM > Power > Power Off.
4. Once powered off, right-click the VM and select Delete from Disk.
Steps
1. Log in to the VMware vSphere Client.
2. Browse to the cluster in the VMware vSphere Client and click the Configure tab.
3. Under Configuration, select VM/Host Groups and click Add.
4. From the Create VM/Host Group window, type MVM Host Group for the group.
5. Select Host Group from the Type list and click Add.
6. Ensure that all PowerFlex management controller hosts are selected and click OK > OK.
Steps
1. Log in to the VMware vSphere Client.
2. Browse to the cluster in the VMware vSphere Client and click the Configure tab.
3. Under Configuration, select VM/Host Groups and click Add.
4. From the Create VM/Host Group window, type MVM VM Group for the group.
5. Select VM Group from the Type list and click Add.
6. Ensure that all PowerFlex management virtual machines are selected and click OK > OK.
Prerequisites
Ensure you have created the host and virtual machine DRS groups to which the VM-host anti-affinity rule applies.
Steps
1. Log in to the VMware vSphere Client.
2. Select Hosts and Clusters.
3. Browse to the PowerFlex management cluster in the VMware vSphere Client and click the Configure tab.
4. Under Configuration, click VM/Host Rules.
5. Click Add.
6. From the Create VM/Host Rules window, type MVM Rule for the rule.
7. From the Type menu, select Virtual Machines to Hosts.
8. Select the virtual machine DRS group (management virtual machines VM group) and the host DRS group to which the rule
applies.
9. Select the Should run on hosts in group check box.
10. Click OK to save the rule.
Related information
Deployment requirements
Steps
1. Log in to the KVM server.
2. Copy the management eSLES QCOW image to KVM server.
3. Open terminal, type: virt-manager.
4. Click File > New Virtual Machine.
5. Select Import existing disk image and click Forward.
6. Click Browse and select the eSLES QCOW image from the saved path.
7. Select operating system as Generic OS and click Forward.
8. Complete necessary changes to CPU and RAM as per requirements and click Forward.
9. Enter the VM name, and in the network selection, select Bridge device, enter the device name, and click Finish.
10. Repeat Steps 4 through 9 for all management VMs.
Steps
1. Launch the web console from the Virtual Machine Manager. Log in as delladmin.
2. Configure the flex-node-mgmt-<vlanid> eth0 interface:
a. Type: sudo vi /etc/sysconfig/network/ifcfg-eth0
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
For example:
DEVICE=eth0
NAME=eth0
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.10.12
NETMASK=255.255.255.0
DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
For example:
DEVICE=eth1
NAME=eth1
STARTMODE=auto
BOOTPROTO=static
IPADDR=10.10.9.11
NETMASK=255.255.255.0
DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
For example:
DEVICE=eth2
NAME=eth2
STARTMODE=auto
BOOTPROTO=static
IPADDR=192.168.151.10
NETMASK=255.255.255.0
DEVICE=eth3
NAME=eth3
STARTMODE=auto
BOOTPROTO=static
IPADDR=<ip address>
NETMASK=<mask>
For example:
DEVICE=eth3
NAME=eth3
STARTMODE=auto
BOOTPROTO=static
IPADDR=192.168.152.10
NETMASK=255.255.255.0
Steps
1. To configure the NTP server, type sudo vi /etc/chrony.conf and add the following: server <ntp ip address>
iburst.
For example: server 10.10.10.240 iburst
2. To enable chronyd, type sudo systemctl enable chronyd.
3. To reboot, type sudo reboot.
4. Log in as delladmin.
5. To check that the server is synced to NTP, type chronyc tracking.
NOTE: This may take a few minutes to sync. In the chronyc tracking output, confirm that the Reference ID shows the
configured NTP server and that Leap status is Normal.
Prerequisites
The following locations contain log files for troubleshooting:
● PowerFlex management platform installer logs: /opt/dell/pfmp/PFMP_Installer/logs
● Platform installer logs: /opt/dell/pfmp/atlantic/logs/bedrock.log
This table describes the PFMP_Config.json and its configuration parameters:
Steps
1. To SSH as non-root user to the PowerFlex management platform Installer, run the following command: ssh
delladmin@<pfmp installer ip>.
2. To navigate to the config directory, run the following command: cd /opt/dell/pfmp/PFMP_Installer/config
For example: cd /opt/dell/pfmp/PFMP_Installer/config/
3. To configure the PFMP_Config.json, run the following command: sudo vi PFMP_Config.json and update the
configuration parameters.
For example:
{
"Nodes":
[
{
"hostname": "pfmp-mgmt-01",
"ipaddress": "10.10.10.01"
},
{
"hostname": "pfmp-mgmt-02",
"ipaddress": "10.10.10.02"
},
{
"hostname": "pfmp-mgmt-03",
"ipaddress": "10.10.10.03"
}
],
"ClusterReservedIPPoolCIDR" : "10.42.0.0/23",
"ServiceReservedIPPoolCIDR" : "10.43.0.0/23",
"RoutableIPPoolCIDR" : [{"flex-node-mgmt-<vlanid>":"10.10.10.20-10.10.10.24"},
{"flex-oob-mgmt-<vlanid>":"10.10.20.20-10.10.20.24"},
{"flex-data1-<vlanid>”:”192.168.151.20-192.169.151.24"},
{"flex-data2-<vlanid>”:”192.168.152.20-192.168.152.24"}],
"PFMPHostname" : "dellpowerflex.com",
“PFMPHostIP” : “10.10.10.20”
NOTE: If the customer is planning for four data networks, also add flex-data3-<vlanid> and flex-data4-<vlanid> entries to the
PFMP_Config.json file, as shown in the example below.
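For illustration only, a RoutableIPPoolCIDR entry covering four data networks follows the same pattern; the flex-data3 and flex-data4 ranges below are placeholder values that must match your own network plan:
"RoutableIPPoolCIDR" : [{"flex-node-mgmt-<vlanid>":"10.10.10.20-10.10.10.24"},
{"flex-oob-mgmt-<vlanid>":"10.10.20.20-10.10.20.24"},
{"flex-data1-<vlanid>":"192.168.151.20-192.168.151.24"},
{"flex-data2-<vlanid>":"192.168.152.20-192.168.152.24"},
{"flex-data3-<vlanid>":"192.168.153.20-192.168.153.24"},
{"flex-data4-<vlanid>":"192.168.154.20-192.168.154.24"}],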
Steps
1. Log in to the KVM server.
2. Connect to the virt-manager.
3. Select the installer VM to be deleted.
4. Right-click and select Delete.
8
Configuring PowerFlex Manager
Use the procedures in this section to configure PowerFlex Manager.
Prerequisites
● Ensure that you have access to a web browser that has network connectivity with PowerFlex Manager.
● Ensure that you know the address that was configured for accessing PowerFlex Manager. This address was configured for
PFMPHostname in the JSON file, during PowerFlex Manager installation. The default address is dellpowerflex.com.
● Prepare a new password for accessing PowerFlex Manager. Admin123! cannot be used. Password rules are:
○ Contains less than 32 characters
○ Contains only alphanumeric and punctuation characters
Steps
1. Point your browser to the address configured for PowerFlex Manager.
The PowerFlex Manager login page is displayed.
2. Enter your new password in the New Password box, and enter it again in the Confirm Password box.
3. Make a note of the password for future use.
This password is for the SuperUser who is performing the initial configuration activities. Additional users and passwords can
be configured later in the process.
4. Click Submit.
You are now logged into the system. The initial setup wizard is displayed. Proceed with initial configuration activities, guided
by this wizard.
5. On the Summary page, verify all settings for SupportAssist, compliance, and installation type. Click Finish to complete the
initial setup.
After completing the initial setup, you can begin to configure PowerFlex Manager and deploy resources from the Getting
Started page.
Enable SupportAssist
SupportAssist is a secure support technology for the data center. You can enable SupportAssist as part of the initial
configuration wizard. Alternatively, you can enable it later by adding it as a destination to a notification policy in Events and
Alerts.
Prerequisites
Ensure you have the details about your SupportAssist configuration.
Steps
1. Click Enable SupportAssist.
2. From the Connection Type tab, there are two options:
Related information
Enabling SupportAssist
Steps
1. Click I use RCM or IC to manage other components in my system if you want to enable the full compliance features of
PowerFlex Manager.
If you choose this option, PowerFlex Manager allows you to upload a compliance file on the Getting Started page.
2. Click I only manage PowerFlex if you only want to use PowerFlex Manager to manage PowerFlex software.
If you choose this option, PowerFlex Manager does not allow you to upload a compliance file on the Getting Started page.
If you are unsure about which option you want, select this option.
Prerequisites
If you are importing an existing PowerFlex deployment that was not managed by PowerFlex Manager, make sure you have the IP
address, username, and password for the primary and secondary MDMs. If you are importing an existing PowerFlex deployment
that was managed by PowerFlex Manager, make sure you have the IP address, username, and password for the PowerFlex
Manager virtual appliance.
Steps
1. Click one of the following options:
Option Description
I want to deploy a new instance of PowerFlex If you do not have an existing PowerFlex deployment and would like to bypass the import step.
I have a PowerFlex instance to import If you would like to import an existing PowerFlex instance that was not managed by PowerFlex Manager.
Provide the following details about the existing PowerFlex instance:
● IP addresses for the primary and secondary MDMs (separated by a comma with no spaces)
● Admin username and password for the primary MDM
● Operating system username and password for the primary MDM
● LIA password
I have a PowerFlex instance managed by PowerFlex Manager to import If you would like to import an existing PowerFlex instance directly from an existing PowerFlex Manager virtual appliance.
Provide the following details about the existing PowerFlex Manager virtual appliance:
● IP address or DNS name for the virtual appliance
● Username and password for the virtual appliance
Results
For a full PowerFlex Manager migration, the import process backs up and restores information from the old PowerFlex Manager
virtual appliance. The migration process for the full PowerFlex Manager workflow imports all resources, templates, and services
from a previous instance of PowerFlex Manager. The migration also connects the legacy PowerFlex gateway to the MDM
cluster, thereby enabling the Block tab in the user interface to function.
The migrated environment includes a PowerFlex gateway instance called "block-legacy-gateway". It will not include the gateway
instance for the Management Data Store ("block-legacy-gateway-mds") until you discover the PowerFlex System resource on
the Resources page.
For a software-only PowerFlex system, there will be no PowerFlex Manager information available after the migration completes.
The migrated environment will not include resources, templates, and services.
Steps
1. On the Summary page, verify the settings that you configured on the previous pages.
2. To edit any information, click Back or click the corresponding page name in the left pane.
3. If you are importing an existing PowerFlex instance from PowerFlex Manager, type IMPORT POWERFLEX MANAGER.
4. If the information is correct, click Finish to complete the initial setup.
If you are importing an existing environment, PowerFlex Manager displays a message indicating that the import operation is
in progress. When the import operation is complete, PowerFlex Manager displays the Getting Started page. The steps you
can perform after the initial setup vary depending on the compliance option you selected. If you indicated that you would
only like to manage PowerFlex, the Getting Started page displays steps that are suitable for a software-only environment
that does not require the compliance step. Otherwise, the Getting Started page displays steps that are suitable for a
full-featured installation that includes the compliance step.
Next steps
If you did not migrate an existing PowerFlex environment, you now have the option to deploy a new instance of PowerFlex.
After completing the migration wizard for a full PowerFlex Manager import, you need to perform these steps:
1. On the Settings page, upload the compatibility matrix file and upload the latest repository catalog (IC), or use the software-
only catalog.
The software-only catalog is new in this release. This catalog only includes the components required for an upgrade of
PowerFlex.
2. On the Resources page, select the PowerFlex entry, and perform a non-disruptive update.
3. On the Resource Groups page, perform an IC upgrade on any migrated service that needs to be upgraded.
The migrated resource groups are initially non-compliant, because PowerFlex Manager 4.0 is running a later IC that includes
PowerFlex 4.0. These resource groups must be upgraded to the latest IC before they can be expanded or managed with
automation operations.
4. Power down the old PowerFlex Manager VM, the old PowerFlex gateway VM, and the presentation server VM.
The upgrade of the cluster to version 4.0 will cause the old PowerFlex Manager virtual appliances to stop working.
5. After validating the upgrade to version 4.0, decommission the old instances of PowerFlex Manager, the PowerFlex gateway,
and the presentation server.
Do not delete the old instances until you have had a chance to review the initial setup and confirm that the old environment
was migrated successfully.
After completing the migration wizard for a PowerFlex (software only) import, you need to perform these steps:
1. On the Settings page, upload the compatibility matrix file and confirm that you have the latest software-only catalog.
The software-only catalog is new in this release. This catalog only includes the components required for an upgrade of
PowerFlex.
2. On the Resources page, select the PowerFlex entry, and perform a non-disruptive update.
You do not need a resource group (service) to perform an upgrade of the PowerFlex environment. In addition, PowerFlex
Manager does not support Add Existing Resource Group operations for a software-only migration. If you want to be able
to perform any deployments, you will need a new resource group. Therefore, you need to create a new template (or clone a
sample template), and then deploy a new resource group from the template.
Getting started
The Getting Started page guides you through the common configurations that are required to prepare a new PowerFlex
Manager environment. A green check mark on a step indicates that you have completed the step. Only super users have access
to the Getting Started page.
The following table describes each step:
Step Description
Upload Compliance File Provide the compliance file location and authentication information for use within PowerFlex Manager. The compliance file defines the specific hardware components and software version combinations that are tested and certified by Dell for hyperconverged infrastructure and other Dell products. This step enables you to choose a default compliance version for compliance or add new compliance versions.
NOTE: Use http as the preferred loading method for the RCM.
This step is enabled after you complete the initial setup if you selected I use RCM or IC to manage other components in my system. Otherwise, this step is not available on the Getting Started page.
You can also click Settings > Repositories > Compliance Versions.
NOTE: Before you make an RCM or IC the default compliance version, you first need to upload a suitable compatibility management file under Settings > Repositories > Compatibility Management.
Define Networks Enter detailed information about the available networks in the environment.
This information is used later during deployments to configure nodes and
switches to have the right network connectivity. PowerFlex Manager uses
the defined networks in templates to specify the networks or VLANs that are
configured on nodes and switches for your resource groups.
This step is enabled immediately after you perform an initial setup for
PowerFlex Manager.
You can also click Settings > Networking > Networks.
Discover Resources Grant PowerFlex Manager access to resources (nodes, switches, virtual
machine managers) in the environment by providing the management IP and
credential for the resources to be discovered.
This step is not enabled until you define your networks.
You can also click Resources > Discover Resources.
Manage Deployed Resources (Optional) Add existing resource group for a cluster that is already deployed and manage
the resources within PowerFlex Manager.
This step is not enabled until you define your networks.
You can also click Lifecycle > Resource Groups > Add Existing Resource
Group.
Deploy Resources Create a template with requirements that must be followed during a
deployment. Templates enable you to automate the process of configuring
and deploying infrastructure and workloads. For most environments, you can
clone one of the sample templates that are provided with PowerFlex Manager
and make modifications as needed. Choose the sample template that is most
appropriate for your environment.
For example, for a hyperconverged deployment, clone one of the
hyperconverged templates.
For a two-layer deployment, clone the compute-only templates. Then clone one
of the storage templates.
This step is not enabled until you define your networks.
You can also click Lifecycle > Templates.
To revisit the Getting Started page, click Getting Started on the help menu.
Steps
1. Click the user icon in the upper right corner of PowerFlex Manager.
2. Click Change password.
3. Type the password in the New Password field.
4. Type the password again in the Verify Password field.
5. Click Apply.
Configure repositories
Use this section to configure the repositories.
Steps
1. On the menu bar, select Settings and choose Repositories.
2. Select Compliance Version and click Add.
3. In the Add Compliance File dialog, select one of the following options:
a. Download from Secure Connect Gateway (SCG) - Select this option to import the compliance file that contains the
firmware bundles you need. (SupportAssist)
b. Download from local network path - Select this option to download the compliance file from an NFS or CIFS file share.
4. Optionally, set this compliance file as the default by choosing Make this default version for compliance checking, and
click Save.
5. Wait while PowerFlex Manager unpacks the packages from the compliance bundle.
a. If you attempt to add an unsigned compliance file, the compliance file state displays as Needs Approval. You can choose to
do either of the following from the Available Actions drop-down menu:
● Allow Unsigned File - Select this option to allow PowerFlex Manager to use the unsigned compliance file. The
compliance file then moves to an Available state.
● Delete - Select this option to remove the unsigned compliance file.
Steps
1. From the menu, select Settings > Repositories > Compatibility management.
2. Click Edit Settings.
3. In the Compatibility Management dialog, select one of the following options:
a. Download from configured Dell Technologies Support Assist.
b. Upload from the local system. Click Choose File to select the GPG file.
4. Click Save.
Prerequisites
Upload OS images only for deploying ESXi-based hyperconverged and compute-only resource groups. The customer can also
upload supported Linux-based OS images for deploying compute-only resource groups.
Steps
1. From the menu, click Settings > Repositories > OS Images.
2. Click Add.
3. In the Add OS image Repository dialog, enter the following:
a. For Repository Name, enter the name of the repository.
The repository name must be unique and case insensitive.
b. For Image Type, enter the image type.
c. For Source Path and Filename, enter the path of the OS image file name in a file share.
● To enter the CIFS share, use the following format example: \\host\lab\isos\filename.iso
● To enter the NFS share, use the following format example: Host:/var/nfs/filename.iso
d. If you are using the CIFS share, enter the Username and Password to access the share.
4. Click Add.
Configuring networking
Adding the details of an existing network enables PowerFlex Manager to automatically configure nodes that are connected to
the network.
Define a network
Steps
1. On the menu bar, click Settings > Networking and click Networks.
2. Click Define.
3. In the Name field, enter the name of the network. Optionally, in the Description field, enter a description for the network.
4. From the Network Type drop-down, select one of the following network types:
● General purpose LAN
● Hypervisor management
● Hypervisor migration
● Hardware management
● PowerFlex data
● PowerFlex data (client traffic only)
● PowerFlex data (server traffic only)
● PowerFlex replication
● PowerFlex management
NOTE:
● For a PowerFlex configuration that uses a hyperconverged architecture with two/four data networks, you typically
have two or four networks that are defined with the PowerFlex data network type.
● The PowerFlex data network type supports both client and server communications and is used with hyperconverged
resource groups.
● For a PowerFlex configuration that uses a two-layer architecture with four dedicated data networks, you typically
have two PowerFlex data (client traffic only) VLANs and two PowerFlex data (server traffic only) VLANs. These network
types are used with storage-only and compute-only resource groups.
5. In the VLAN ID field, enter a VLAN ID between 1 and 4094.
6. Optionally, select the Configure Static IP Address Ranges check box, and do the following:
a. In the Subnet box, enter the IP address for the subnet. The subnet is used to support static routes for data and
replication networks.
b. In the Subnet Mask box, enter the subnet mask.
c. In the Gateway box, enter the default gateway IP address for routing network traffic.
d. Optionally, in the Primary DNS and Secondary DNS fields, enter the IP addresses of primary DNS and secondary DNS.
e. Optionally, in the DNS Suffix field, enter the DNS suffix to append for hostname resolution.
f. To add an IP address range, click Add IP Address Range. In the row, indicate the role in PowerFlex nodes for the IP
address range and then specify a starting and ending IP address for the range. For the Role, select either:
● Server or Client: Default; range is assigned to the server and client roles.
● Client Only: Range is assigned to the client role on PowerFlex hyperconverged nodes and PowerFlex compute-only
nodes.
● Server Only: Range is assigned to the server role on PowerFlex hyperconverged nodes and PowerFlex storage-only
nodes.
NOTE: The Configure Static IP Address Ranges check box is not available for all network types. For example,
you cannot configure a static IP address range for the operating system Installation network type. You cannot select
or clear this check box to configure static IP address pools after a network is created.
7. Click Save.
8. If replicating the network, repeat steps 1 through 7 to add the remote replication networks.
Edit a network
If a network is not associated with a template or resource group, you can edit the network name, the VLAN ID, or the IP address
range.
Steps
1. On the menu bar, click Settings > Networking and click Networks.
2. Select the network that you want to modify, and click Modify.
3. Edit the information in any of the following fields: Name, VLAN ID, IP Address Range.
For a PowerFlex data or replication network, you can specify a subnet IP address for a static route configuration. The subnet
is used to support static routes for data and replication networks.
4. Click Save.
Delete a network
You cannot delete a network that is associated with a template or resource group.
Steps
1. On the menu bar, click Settings > Networking and click Networks.
2. Click the network that you want to delete, and click Delete.
3. Click Yes when the confirmation message is displayed.
NOTE: The new license capacity is the aggregate of the old capacity and the newly purchased capacity.
Steps
1. To upload the PowerFlex license:
a. Click Settings and select License Management from the left pane.
b. Select PowerFlex License. Under Production License, click Choose File.
c. Browse and select the license to upload and click Open.
d. Click Save.
2. To upload CloudLink license:
a. Click Other Software Licenses.
b. Click Add. The Add Software License dialog box opens.
c. Under Upload License, select Choose License.
d. Browse and select the license to upload and click Open.
e. Select the Type as CloudLink and click Save.
Discover resources
Use this procedure to discover and allow PowerFlex Manager access to resources in the environment. Provide the management
IP address or hostname and credential for each discoverable resource.
NOTE: The powerflex-mds (PowerFlex system) will not be available until you complete the Add the PowerFlex system as
a resource section.
During node discovery, you can configure the iDRAC nodes to automatically send alerts to PowerFlex Manager. If the PowerFlex
nodes are not configured for alert connector, SupportAssist does not receive critical or error alerts for those resources.
The following table describes how to configure resources in managed mode:
Prerequisites
Ensure you gather the IP addresses and credentials that are associated with the resources.
NOTE: PowerFlex Manager also allows you to use name-based searches to discover a range of nodes whose iDRACs were
assigned IP addresses through DHCP. For more information about this feature, see Dell PowerFlex 4.0.x
Administration Guide.
Steps
1. On the PowerFlex Manager Getting Started page, click Discover Resources.
2. On the Welcome page of the Discovery wizard, read the instructions and click Next.
3. On the Identify Resources page, click Add Resource Type. From the Resource Type list, select the resource that you
want to discover.
4. Select the IP address or Hostname option. Enter the IP address or hostname of the resource in the IP address/hostname
range field.
● To discover one or more nodes by IP address, select IP address and provide a starting and ending IP address.
● To discover one or more nodes by hostname, select hostname and identify the nodes to discover in one of the following
ways:
○ Enter the fully qualified domain name (FQDN) with a domain suffix.
○ Enter the FQDN without a domain suffix
If you use a variable, you must provide a start number and end number for the hostname search.
5. In the Resource State list, select Managed, Unmanaged or Reserved.
Option Description
Managed ● Select this option to monitor the firmware version compliance, upgrade firmware,
and deploy resource groups on the discovered resources. A managed state is the
default option for the switch, VMware vCenter, element manager, and PowerFlex
Gateway resource types.
● Resource state must be set to Managed for PowerFlex Manager to send alerts to
Secure Connect Gateway.
Unmanaged ● Select this option to monitor the health status of a device and the firmware version
compliance only. The discovered resources are not available for a firmware upgrade
or deploying resource groups by PowerFlex Manager. This is the default option for
the node resource type.
● If you did not upload a license in the Initial Setup wizard, PowerFlex Manager is
configured for monitoring and alerting only. In this case, Unmanaged is the only
option available.
Reserved ● Select this option to monitor firmware version compliance and upgrade firmware. The
discovered resources are not available for deploying resource groups by PowerFlex
Manager.
6. For a PowerFlex node, to discover resources into a selected node pool instead of the global (default), select the node pool
from the Discover into Node Pool list. To create a node pool, click + to the right of the Discover into Node Pool list.
7. Select the appropriate credential from the Credentials list. To create a credential, click + to the right of Credentials.
PowerFlex Manager maps the credential type to the type of resource you are discovering.
8. For a PowerFlex node, if you want PowerFlex Manager to automatically reconfigure the iDRAC IP addresses of the nodes it
discovers, select the Reconfigure discovered nodes with new management IP and credentials check box. This option
is not selected by default, because it is faster to discover the nodes if you bypass the reconfiguration.
9. For a PowerFlex node, select the Auto configure nodes to send alerts to PowerFlex Manager check box to have
PowerFlex Manager automatically configure nodes to send alerts to PowerFlex Manager.
10. Click Next to start discovery.
11. On the Discovered Resources page, select the resources from which you want to collect inventory data and click Finish.
The discovered resources are listed on the Resources page.
NOTE: The gateway is container-based. It is automatically discovered on the Resources page:
● Powerflex - PowerFlex Gateway
● powerflex-mds - PowerFlex system
● powerflex-file - PowerFlex file
Steps
1. If your system will be configured with CloudLink encryption, deploy CloudLink Center VM.
NOTE: Before deploying the PowerFlex hyperconverged node or PowerFlex storage-only node template, you must
deploy the CloudLink VM if you choose an encryption-enabled service.
9. Click Finish.
Create a template
Create a template with requirements to follow during deployment.
Steps
1. On the menu bar, click Lifecycle > Templates.
2. On the Templates page, click Create.
3. In the Create dialog box, select one of the following options:
● Clone an existing PowerFlex Manager template
● Upload External Template
● Create a new template
If you select Clone an existing PowerFlex Manager template, select the Category and the Template to be cloned.
The components of the selected template are included in the new template.
● For software-only block storage, ensure that you select a template that includes "SW Only" in the name.
● For software-only file storage, ensure that you select a template that includes "File-SW Only" in the name.
8. Specify the resource group permissions for this template under Who should have access to the resource group deployed
from this template by performing one of the following actions:
● To restrict access to super users, select Only PowerFlex SuperUser.
● To grant access to super users and some specific lifecycle administrators and drive replacers, select the PowerFlex
SuperUser and specific LifecycleAdmin and DriveReplacer option, and perform the following steps:
○ Click Add User(s) to add one or more LifecycleAdmin or DriveReplacer users.
Publish a template
Use this procedure to publish the template.
Steps
1. On the Templates page, perform the following steps to modify a component type to the template:
a. Select Node Component and click Modify.
If you select a template from Sample templates, PowerFlex Manager selects the default number of PowerFlex nodes
for deployment.
b. Under Related Components, perform one of the following actions:
● To associate the component with all existing components, click Associate All.
● To associate the component with selected components, click Associate Selected and then select the components to
associate.
Based on the component type, specific required settings and properties appear automatically. You can edit components
as needed.
c. Click Continue. Ensure that appropriate values are available for all the settings.
d. Click Validate Settings.
PowerFlex Manager lists the identified resources that are valid or invalid (if any) based on the settings specified in the
template.
e. Click Save.
The cluster component is available only for HC and CO ESXi-based templates. If applicable, complete Step 2.
2. Select Cluster Component and click Modify.
If you select a template from Sample templates, PowerFlex Manager selects the default component name VMware Cluster.
a. Under Related Components, perform one of the following actions:
● To associate the component with all existing components, click Associate All.
● To associate the component with selected components, click Associate Selected and then select the components
to associate.
Based on the component type, specific required settings and properties appear automatically. You can edit components
as needed.
b. Ensure that the Cluster settings are populated with the correct details.
c. To configure the vSphere distributed switch settings, click Configure VDS Settings.
Under VDS Port Group Configuration, perform one of the following actions:
i. Click User Entered Port Groups and click Next:
● Under VDS Naming, provide the name for each VDS. For each VDS, click Create VDS and type the VDS name
and click Next.
● On the Port Group Select page, for each VDS, click Create Port Group and type the port group name. Initially,
the port group name defaults to the name of the network, but you can type over the default to suit your
requirements. Alternatively, you can click Select and choose an existing port group.
● Click Next.
ii. Click Auto Create All Port Groups and click Next.
NOTE: PowerFlex Manager determines the VDS order based on the following criteria: PowerFlex Manager first
considers the number of port groups on each VDS. Then, PowerFlex Manager considers whether a management
port group is present on a particular VDS. PowerFlex Manager considers the network type for port groups on a
VDS when performing lifecycle operations for a resource group. PowerFlex Manager also considers the network name
for port groups on a VDS.
d. On the VDS Naming page, provide the name for each VDS.
e. For each VDS, click Create VDS and type the VDS name. Click Next.
f. On the Port Group Select page, review the port group names automatically assigned for the networks
g. Click Next.
h. On Advanced Networking, select the MTU Selection as per the LCS. Click Next.
i. On the Summary Page, verify all the details and click Finish.
j. Click Save.
3. The components should not have any warnings or errors.
4. Click Publish Template.
Ensure that there are no warnings in the template. The template remains in a draft state until it is published, and a template
must be published before it can be deployed.
After publishing a template, you can use the template to deploy a service. For more information, see the PowerFlex Manager
online help.
NOTE: Skip this task if the deployment type is without CloudLink (encryption).
Prerequisites
● Ensure hypervisor management or PowerFlex Manager management networks are added on the PowerFlex Manager
Networks page.
● The latest, valid release IC should be uploaded and be in the Available state.
● A VMware vCenter with a valid data center, cluster, network (matching with the network from the first item), and datastore
should be discovered in the Resources page.
Steps
1. For a CloudLink Center deployment, clone the Management - CloudLink Center from the sample template.
2. Select View Details > More Actions > Clone.
3. In the Clone Template wizard, complete the following:
a. Enter a Template name.
b. From the Template Category list, select a template category.
To create a category, select Create New Category from the list.
c. Optionally, enter a Template Description.
d. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
e. Specify the resource group permissions for this template under Who should have access to the resource group
deployed from this template:
i. To restrict access to administrators, select Only PowerFlex SuperUser.
ii. To grant access to administrators and specific standard users, select PowerFlex SuperUser and Specific Lifecycle
Admin and Drive Replacer and perform the following steps:
● Click Add User(s) to add one or more standard or operator users to the list.
● To remove a standard or operator user from the list, select the user and click Remove User(s).
● After adding the standard and/or operator users, select or clear the check box next to the users to grant or block
access to this template.
● To grant access to administrators and all standard users, select PowerFlex SuperUser and Specific Lifecycle
Admin and Drive Replacer.
f. Click Next.
4. From the Additional Settings page:
a. Under Network Settings, select Hypervisor Network (PowerFlex management network).
b. Under OS Settings, select CLC credential or create a credential with root or CloudLink user by clicking +.
c. Under Cloudlink Settings, select the Secadmin credential from the list or create a secadmin credential by clicking +
and do the following:
i. Enter Credential Name
ii. Enter Username as secadmin
iii. Leave the Domain empty.
iv. Enter the password for secadmin in Password and Confirm Password.
v. Select V2 in SNMP Type and click Save.
d. Select a License File from the list based on the types of drives or select + to upload a license through the Add
Software License page.
NOTE: For SSD/NVMe drives, upload a capacity-based license. For SED drives, upload an SED-based license.
Steps
1. Log in to the vSphere Web Client and access the cluster.
2. Click the Configure tab.
3. Under Configuration, select VM/Host Rules and click Add.
4. In Create VM/Host Rule, enter a rule name.
5. From the Type menu, select Separate Virtual Machines and click Add.
6. Select both CloudLink Center VMs to which the rule will apply, and click OK.
Prerequisites
Ensure the following:
● The compliance version and compatibility management file are uploaded.
● The template to be used is in a published state.
● The networks are defined.
● CloudLink Center is deployed if this is a CloudLink-based PowerFlex hyperconverged or PowerFlex storage-only deployment.
Steps
1. On the menu bar, click Lifecycle > Resource Groups and click Deploy New Resource Group.
2. The Deploy Resource Group wizard opens. On the Deploy Resource Group page, perform the following steps:
a. From the Select Published Template list, select the template to deploy a Resource Group.
b. Enter the Resource Group Name and Resource Group Description (optional) that identifies the Resource Group.
c. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
NOTE: Changing the firmware repository might update the firmware level on nodes for this resource group. The
global default firmware repository maintains the firmware on the shared devices.
d. Indicate Who should have access to the service deployed from this template by selecting one of the available options.
Click Next.
i. Grant access to Only PowerFlex Manager Administrators.
ii. To grant access to administrators and specific standard and operator users, select the PowerFlex Manager
Administrators and Specific Standard and Operator Users option, and perform the following steps:
● Click Add User(s) to add one or more standard or operator users to the list displayed.
● Select which users will have access to this resource group
● To delete a standard and or operator user from the list, select the user and click Remove User(s).
● After adding the standard and/or operator users, select or clear the check box next to the users to
grant or block access to use this template
iii. Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
3. On the Deployment Settings page, configure the required settings. You can override many of the settings that are
specified in the template. You must specify other settings that are not part of the template:
If you are deploying a resource group with CloudLink, ensure that the correct CloudLink Center is displayed under the
CloudLink Center settings.
a. Under PowerFlex Settings, choose one of the following options for PowerFlex MDM virtual IP address source:
i. PowerFlex Manager Selected IP instructs PowerFlex Manager to select the virtual IP addresses.
ii. User Entered IP enables you to specify the IP address manually for each PowerFlex data network that is part of the
node definition in the resource group template.
b. Under PowerFlex Cluster, to configure OS Settings, select an IP address source. To manually enter the IP address,
select User Entered IP.
c. From the IP Source list, select Manual Entry. Then enter the IP address in the Static IP Address field.
d. To configure Hardware Settings, select the node source from the Node Source list.
i. If you select Node Pool, you can view all user-defined node pools and the global pool. Standard users can see
only the pools for which they have permission. Select Retry on Failure to ensure that PowerFlex Manager selects
another node from the node pool for deployment if any node fails. Each node can be retried up to five times.
ii. If you select Manual Entry, the Choose Node list is displayed. Select the node for deployment from the list by its
Service Tag.
e. Click Next.
f. On the Schedule Deployment page, select one of the following options and click Next.
i. Deploy Now - Select this option to deploy the resource group immediately.
ii. Deploy Later - Select this option and enter the date and time to deploy the service.
g. Review the Summary page.
The Summary page gives you a preview of what the Resource group will look like after the deployment.
h. Click Finish when you are ready to begin the deployment.
NOTE: This configuration removes some of the redundancy from the PowerFlex system in exchange for raw speed.
This document assumes that the switches are already set up as trunks, allowing the management VLAN on each port and the
data VLANs on their individual ports.
Steps
1. Log in as root.
2. Change directory into /etc/sysconfig/network-scripts, enter the following command: cd /etc/sysconfig/
network-scripts.
3. Gather the interface configuration file name, enter the following command: grep -H <data1 IP> ifcfg-p*
a. Note the file name that matches.
b. Perform the same command with the other data network IP addresses and note the filenames.
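For example, if the data1 IP address is 192.168.151.50, the command and a possible match might look like the following; the interface name p1p1 and the IP address shown here are illustrative only:
grep -H 192.168.151.50 ifcfg-p*
ifcfg-p1p1:IPADDR=192.168.151.50
The portion before the colon (ifcfg-p1p1) is the interface configuration file name to note.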
4. Set up the data networks using the data gathered in step 3.
5. Create ifcfg-<interface name>.<vlan id>, enter the following command: vi ifcfg-<interface
name>.<vlan id> and add the following:
VLAN=yes
TYPE=Vlan
PHYSDEV=<interface name>
VLAN_ID=<vlan id>
REORDER_HDR=yes
GVRP=no
MVRP=no
MTU=9000
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=<ip address>
PREFIX=<subnet in CIDR notation>
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=<interface name>.<vlan id>
DEVICE=<interface name>.<vlan id>
ONBOOT=yes
NM_CONTROLLED=no
For example, the data1 network is:
VLAN=yes
TYPE=Vlan
PHYSDEV=em1
VLAN_ID=151
REORDER_HDR=yes
GVRP=no
MVRP=no
MTU=9000
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.10.151.246
PREFIX=24
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=em1.151
DEVICE=em1.151
ONBOOT=yes
NM_CONTROLLED=no
6. Repeat step 5 for data2 network and if required, repeat for data3 and data4 networks.
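After the interfaces are brought up (for example, following a network service restart), each VLAN sub-interface can be checked with ip -d link, which reports the 802.1Q VLAN ID. For example, for the data1 interface created above:
ip -d link show em1.151
The output should include "vlan protocol 802.1Q id 151".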
7. Create the bond sub-interfaces, enter the following command: vi ifcfg-<interface name>-bond and add the
following:
MTU=9000
TYPE=Ethernet
NAME=<interface name>-bond
DEVICE=<interface name>
ONBOOT=yes
PRIMARY=bond0.<mgmt.vlan id>
SECONDARY=yes
For example, the data1 bond primary is:
MTU=9000
TYPE=Ethernet
NAME=em1-bond
DEVICE=em1
ONBOOT=yes
PRIMARY=bond0.150
SECONDARY=yes
8. Repeat step 7 for data2 network and if required, repeat for data3 and data4 networks.
9. Create the bond interfaces for management, enter the following command: vi ifcfg-bond0.<mgmt.vlan id> and add
the following:
BONDING_OPTS="ad_select=stable all_seconday_active=0 arp_all_targets=any downdelay=0
fail_over_mac=none lp_interval=1 miimon=100 min_links=0 mode=balance-alb num_grat_arp=1
num_unsol_na=1 primary_reselect=always resend_igmp=1 updelay=0 use_carrier=1
xmit_hash_policy=layer2"
TYPE=Bond
BONDING_PRIMARY=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=<mgmt.ip>
PREFIX=<subnet mask in CIDR notation>
GATEWAY=<mgmt.gateway ip>
DNS1=<dns1>
DNS2=<dns2>
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=bond0.<mgmt.vlan id>
DEVICE=bond0.<mgmt.vlan id>
ONBOOT=yes
10. Save the file.
For example:
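(The values below are an illustrative sketch only, using the management VLAN 150 from the earlier bond example; the IP address, prefix, gateway, and DNS values are placeholders that must match your management network.)
BONDING_OPTS="ad_select=stable all_seconday_active=0 arp_all_targets=any downdelay=0
fail_over_mac=none lp_interval=1 miimon=100 min_links=0 mode=balance-alb num_grat_arp=1
num_unsol_na=1 primary_reselect=always resend_igmp=1 updelay=0 use_carrier=1
xmit_hash_policy=layer2"
TYPE=Bond
BONDING_PRIMARY=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.10.150.246
PREFIX=24
GATEWAY=10.10.150.1
DNS1=10.10.150.250
DNS2=10.10.150.251
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=bond0.150
DEVICE=bond0.150
ONBOOT=yes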
Steps
1. On the menu bar, click Lifecycle > Resource Groups.
2. On the Resource Groups page, ensure Status and Deployment state shows as Healthy.
See the following table for the various states of a resource group:
State Description
Healthy The resource group is successfully deployed and healthy.
Warning One or more resources in the resource group requires corrective action.
Critical The resource group is in a severely degraded or nonfunctional state and requires attention.
Pending The deployment is scheduled for a later time or date.
In Progress The resource group deployment is in progress, or has other actions currently in process, such as a node expansion or removal.
Cancelled The resource group deployment has been stopped. You can update the resources or retry the deployment, if necessary.
Incomplete The resource group is not fully functional because it has no volumes that are associated with it. Click Add Resources to add volumes.
Service Mode The resource group is in service mode.
Lifecycle Mode The resource group is in lifecycle mode. Resource groups in lifecycle mode are enabled with health and compliance monitoring and non-disruptive upgrade features only.
Managed Mode The resource group is in managed mode. Resource groups in managed mode are enabled with health and compliance monitoring, non-disruptive upgrade, automated resource addition, and automated resource replacement features.
The Resource Groups page displays the resource groups that are in the following states in both Tile and List views:
● Incomplete - The resource group is not fully functional because it has no volumes that are associated with it. Click Add Resources to add volumes.
● Cancelled - The resource group deployment has been stopped. You can update the resources or retry the deployment, if necessary.
● Healthy - The resource group is successfully deployed and is healthy.
● Warning - One or more resources in the resource group requires corrective action.
● Critical - The resource group is in a severely degraded or nonfunctional state and requires attention.
● In Progress - The resource group deployment is in progress, or has other actions currently in process, such as a node expansion or removal.
Managed mode The service supports health and compliance monitoring, non-disruptive upgrades, automated resource addition, and automated resource replacement features. Apart from a VMware NSX-T environment, all other supported deployments are in managed mode, regardless of full network automation or partial network automation.
Full network automation PowerFlex Manager configures the required interface port configuration on supported access or leaf switches for downlink to the PowerFlex appliance node.
Partial network automation Requires a manual interface port configuration on the customer-managed access or leaf switches for downlink to the PowerFlex appliance node. Partial network automation uses iDRAC virtual media for installing the operating system.
Steps
1. Start an SSH session to the primary MDM as a non-root user.
2. Type scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --p12_password
password to capture the system ID used to discover the PowerFlex system.
Example output:
Cluster:
Mode: 3_node, State: Normal, Active: 3/3, Replicas: 2/2
Virtual IP Addresses: 192.168.109.250, 192.168.110.250
Master MDM:
Name: pfmc-svm-38, ID: 0x13ca7e24633b9200
IP Addresses: 192.168.109.138, 192.168.110.138, Port: 9011, Virtual IP
interfaces: eth1, eth2
Management IP Addresses: 10.10.10.38, Port: 8611
Status: Normal, Version: 4.0.9999
Slave MDMs:
Name: pfmc-svm-39, ID: 0x7741eb2c255c6101
IP Addresses: 192.168.109.139, 192.168.110.139, Port: 9011, Virtual IP
interfaces: eth1, eth2
Management IP Addresses: 10.10.10.39, Port: 8611
Status: Normal, Version: 4.0.9999
Tie-Breakers:
Name: pfmc-svm-40, ID: 0x089bab052efed002
IP Addresses: 192.168.109.140, 192.168.110.140, Port: 9011
Status: Normal, Version: 4.0.9999
Steps
1. Log in to PowerFlex Manager.
2. Navigate to the Resources tab and click Discover Resources > Next.
3. Click Add Resource Type.
4. For Resource, select the PowerFlex system.
5. For the MDM cluster IP address, enter all the PowerFlex cluster management IP addresses.
Enter the management IP addresses of the LIA nodes in the MDM cluster IP address field. You need to provide the IP
addresses for all of the nodes in a comma-separated list. The list should include a minimum of three nodes and a maximum of
five nodes.
If you forget to add a node, the node will not be reachable after discovery. To fix this, you can rerun the discovery later to
provide the missing node. You can enter just the one missing node, or all of the nodes again. If you enter IP addresses for any
nodes that were previously discovered, these will be ignored on the second run.
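For example, using the management IP addresses shown in the earlier cluster query output, the MDM cluster IP address field would contain 10.10.10.38,10.10.10.39,10.10.10.40 (comma-separated, with no spaces).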
Prerequisites
Ensure the following before you add an existing service:
● The VMware vCenter, switches, and hosts are discovered in the resource list.
● Ensure that Add the PowerFlex system as a resource has been completed.
● The oob-mgmt and vcsa-ha networks must be of the General Purpose LAN type for PowerFlex Manager to run without error.
Steps
1. In PowerFlex Manager, on the menu bar, click Lifecycle > Resource Groups > + Add Existing Resource Group > Next.
2. On the Service Information page, enter a service name in the Name field.
3. Enter a description in the Description field.
4. For Type, select Hyperconverged.
5. From Firmware and Software Compliance, select the applicable IC version.
6. Specify the resource group permissions for this template under Who should have access to the resource group
deployed from this template:
a. To restrict access to administrators, select Only PowerFlex SuperUser.
b. To grant access to administrators and specific standard users, select PowerFlex SuperUser and Specific Lifecycle
Admin and Drive Replacer and perform the following steps:
● Click Add User(s) to add one or more standard or operator users to the list.
● To remove a standard or operator user from the list, select the user and click Remove User(s).
● After adding the standard and/or operator users, select or clear the check box next to the users to grant or block
access to this template.
● To grant access to administrators and all standard users, select PowerFlex SuperUser and Specific Lifecycle
Admin and Drive Replacer.
7. Click Next.
8. Select the network automation type: Full network automation (FNA).
9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.
10. Select values for the cluster settings:
Prerequisites
You need to deploy the MDM cluster before uploading a PowerFlex license. You need to discover an MDS gateway before
uploading an MDS license.
Steps
1. On the menu bar, click Settings and click License Management.
2. Click PowerFlex License.
3. To upload an MDS license, click Choose File in the Management Data Store (MDS) License section and select the
license file. Click Save.
4. To upload a production license for PowerFlex, click Choose File in the Production License section and select the license
file. Click Save.
Results
When you upload a license file, PowerFlex Manager checks the license file to ensure that it is valid.
After the upload is complete, PowerFlex Manager stores the license details and displays them on the PowerFlex Manager
License page. You can see the Installation ID, System Name, and SWID for the PowerFlex. In addition, you can see the Total
Licensed Capacity, as well as the License Capacity Left. You can upload a second license, as long as the license is equal to or
more than the Total System Capacity.
9
Deploying the PowerFlex file nodes
Use this chapter to deploy PowerFlex file nodes.
File storage
File storage is managed through NAS servers, which must be created prior to creating file systems. NAS servers can be created
to support SMB protocol, NFS protocol, or both. Once NAS servers are created, you can create file systems as containers for
your SMB shares for Windows users, or NFS exports for UNIX users.
Term Definition
File system A storage resource that can be accessed through file sharing protocols such as SMB or NFS.
PowerFlex file services A virtualized network-attached storage server that uses the SMB, NFS, FTP, and SFTP protocols
to catalog, organize, and transfer files within file system shares and exports. A NAS server,
the basis for multitenancy, must be created before you can create file-level storage resources.
PowerFlex file services is responsible for the configuration parameters on the set of file systems
that it serves.
Network file system (NFS) An access protocol that enables users to access files and folders on a network. NFS is typically
used by Linux/UNIX hosts.
PowerFlex Manager An HTML5 user interface used to manage PowerFlex appliance.
Server message block An access protocol that allows remote file data access from clients to hosts on a network. SMB is
(SMB) typically used in Microsoft Windows environments.
Snapshot A point-in-time view of data stored on a storage resource. A user can recover files from a
snapshot or restore a storage resource from a snapshot.
● A single PowerFlex file cluster supports a minimum of two nodes and a maximum of 16 nodes.
● The cluster can be expanded by one or more nodes at a time, up to a maximum of 16 nodes.
● Each PowerFlex R650 node in a cluster must have the same CPU, memory, and NIC configuration.
Node configurations
Config Cores RAM (GB) NICs (GbE) Local storage (GB)
Small 2 x 12 (24) 128 4 x 25 480 BOSS M.2
Medium 16 x 2 (32) 256 4 x 25 480 BOSS M.2
Large 28 x 2 (56) 256 4 x 25 or 4 x 100 480 BOSS M.2
Configure the iDRAC before starting the deployment. See Configuring the iDRAC.
Related information
Configuring the iDRAC
Networking pre-requisites
● Create the required VLANs in the access switches
Define networks
Adding the details of an existing network enables PowerFlex Manager to automatically configure nodes that are connected to
the network.
Steps
1. On the menu bar, click Settings and click Networks.
The Networks page opens.
2. Click Define. The Define Network page opens.
3. In the Name field, enter the name of the network. Optionally, in the Description field, enter a description for the network.
4. From the Network Type drop-down list, select one of the following network types:
● PowerFlex Management - Depends on the number of nodes in the PowerFlex file cluster.
● PowerFlex Data (Client Traffic Only) - The number of data networks to define depends on the number of data networks
configured on the PowerFlex block storage.
● NAS File Management - Always define one additional IP address for the NAS cluster, which means that if you have three
PowerFlex file nodes in the cluster, you define four IP addresses (three IP addresses for the nodes and one IP address for the
cluster). Ensure that you configure an untagged VLAN on the switch side for the NAS File Management network if the
deployment is in PNA mode.
● NAS File Data - The range can also be used without being defined. The number of IP addresses depends on the number of NAS
servers that you want to create.
5. In the VLAN ID field, enter a VLAN ID between 1 and 4094.
NOTE: PowerFlex Manager uses the VLAN ID to configure I/O modules to enable network traffic to flow from the node
to configured networks during deployment.
6. Optionally, select the Configure Static IP Address Ranges check box, and then do the following:
a. In the Subnet box, enter the IP address for the subnet. The subnet is used to support static routes for data and
replication networks.
b. In the Subnet Mask box, enter the subnet mask.
c. In the Gateway box, enter the default gateway IP address for routing network traffic.
d. Optionally, in the Primary DNS and Secondary DNS fields, enter the IP addresses of primary DNS and secondary DNS.
e. Optionally, in the DNS Suffix field, enter the DNS suffix to append for hostname resolution.
f. To add an IP address range, click Add IP Address Range. In the row, indicate the role in PowerFlex nodes for the IP
address range and then specify a starting and ending IP address for the range. For the Role, select Client Only. The
range is assigned to the client role on PowerFlex file nodes.
NOTE: IP address ranges cannot overlap. For example, you cannot create an IP address range of 10.10.10.1–
10.10.10.100 and another range of 10.10.10.50–10.10.10.150.
7. Click Save.
Edit a network
If a network is not associated with a template or resource group, you can edit the network name, the VLAN ID, or the IP address
range.
Steps
1. On the menu bar, click Settings and click Networks. The Networks page opens.
2. Select the network that you want to modify, and click Edit. The Edit Network page opens.
3. Edit the information in any of the following fields: Name, VLAN ID, IP Address Range.
For a PowerFlex data or replication network, you can specify a subnet IP address for a static route configuration. The subnet
is used to support static routes for data and replication networks.
4. Click Save.
Delete a network
You cannot delete a network that is associated with a template or resource group.
Steps
1. On the menu bar, click Settings and click Networks. The Networks page is displayed.
2. Click the network that you want to delete, and click Delete.
3. Click OK when the confirmation message is displayed.
Discover resources
A resource is a physical and virtual data center object that PowerFlex Manager interacts with, including but not limited to nodes,
network switches, VM managers (for example, VMware vCenter), and element managers (for example, CloudLink Center,
PowerFlex file gateway).
Prerequisites
Before you start discovering a resource, complete the following:
NOTE: In this case, the resources are PowerFlex file nodes. The PowerFlex file gateway is automatically deployed and discovered as
part of the PowerFlex management platform deployment.
● Gather the IP addresses and credentials that are associated with the resources.
● Ensure that both the resources and the PowerFlex Manager are available on the network.
Steps
1. Access the Discovery Wizard by performing either of the following actions:
a. On the Getting Started page, click Discover Resources.
b. On the menu bar, click Resources. On the Resources page, click Discover on the All Resources tab.
2. On the Welcome page of the Discovery Wizard, read the instructions, and click Next.
3. On the Identify Resources page, click Add Resource Type, and perform the following steps:
a. From the Resource Type list, select a resource that you want to discover.
● Element Manager, for example, CloudLink Center.
● Node (Hardware / Software Management)
● Switch
● VM Manager
● PowerFlex Gateway
● Node (Software Management): For PowerFlex, click Node (Software Management).
● PowerFlex System
The PowerFlex system resource type is used to discover an MDS gateway.
b. Enter the management IP address (or hostname) of the resources that you want to discover in the IP/Hostname Range
field.
To discover one or more nodes by IP address, select IP Address and provide a starting and ending IP address.
To discover one or more nodes by hostname, select Hostname and identify the nodes to discover in one of the following
ways:
● Enter the fully qualified domain name (FQDN) with a domain suffix.
● Enter the FQDN without a domain suffix.
● Enter a hostname search string that includes one of the following variables:
Variable Description
$(num) Produces an automatically generated unique number.
$(num_2d) Produces an automatically generated unique number that
has two digits.
$(num_3d) Produces an automatically generated unique number that
has three digits.
If you use a variable, you must provide a start number and end number for the hostname search.
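For example (illustrative hostnames only), a search string of pfmc-node-$(num_2d) with a start number of 1 and an end number of 3 searches for the hostnames pfmc-node-01, pfmc-node-02, and pfmc-node-03.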
Option Description
Managed Select this option to monitor the firmware version compliance, upgrade
firmware, and deploy resource groups on the discovered resources. A managed
state is the default option for the switch, vCenter, element manager, and
PowerFlex gateway resource types.
Resource state must be set to Managed for PowerFlex Manager to send alerts
to secure connect gateway (SCG).
For PowerFlex file nodes, select the Managed option.
Unmanaged Select this option to monitor the health status of a device and the firmware
version compliance only. The discovered resources are not available for a
firmware upgrade or deploying resource groups by PowerFlex Manager. This
is the default option for the node resource type.
If you did not upload a license in the Initial Setup wizard, PowerFlex Manager
is configured for monitoring and alerting only. In this case, Unmanaged is the
only option available.
Reserved Select this option to monitor firmware version compliance and upgrade
firmware. The discovered resources are not available for deploying resource
groups by PowerFlex Manager.
d. To discover resources into a selected node pool instead of the global pool (default), select an existing node pool or create
a node pool from the Discover into Node Pool list. To create a node pool, click the + sign to the right of the Discover into
Node Pool box.
e. Select an existing credential or create a credential from the Credentials list to discover the resource types. To create a credential,
click the + sign to the right of the Credentials box. PowerFlex Manager maps the credential type to the type of
resource you are discovering. The default node credential type is Dell EMC PowerEdge iDRAC Default.
f. If you want PowerFlex Manager to automatically reconfigure the iDRAC nodes it finds, select the Reconfigure
discovered nodes with new management IP and credentials check box. This option is not selected by default,
because it is faster to discover the nodes if you bypass the reconfiguration.
g. To have PowerFlex Manager automatically configure iDRAC nodes to send alerts to PowerFlex Manager, select the Auto
configure nodes to send alerts to PowerFlex Manager check box.
4. Click Next.
You might have to wait while PowerFlex Manager locates and displays all the resources that are connected to the managed
networks.
To discover multiple resources with different IP address ranges, repeat steps 2 and 3.
5. On the Discovered Resources page, select the resources from which you want to collect inventory data and click Finish.
The discovered resources are listed on the Resources page.
NOTE: PowerFlex file cluster deployment also uses the same compliance version and compatibility management files
that are used for PowerFlex hyperconverged or storage-only deployments (back-end block storage).
Component types
Components (physical, virtual, or application) are the main building blocks of a template.
PowerFlex Manager has the following component types:
● Node
● Cluster
● VM
Specific to the PowerFlex file template, PowerFlex Manager has three component types:
● PowerFlex cluster
● PowerFlex file cluster
● Nodes
Node settings
This reference table describes the following node settings: hardware, BIOS, operating system, and network.
Setting Description
Component name Indicates the node component name. For a PowerFlex appliance, this is the node (software/
hardware) component.
NOTE: This is applicable only when you manually build the template.
Full network automation (FNA) Allows you to perform deployments with full network automation. This feature works with
supported switches and requires less manual configuration. Full network automation also
provides better error handling, since PowerFlex Manager can communicate with the
switches and identify any problems that may exist with the switch configurations.
NOTE: Applicable only when you manually build the template.
Partial network automation (PNA) Allows you to perform switchless deployments with partial network automation. This feature
allows you to work with unsupported switches, but requires more manual configuration before
a deployment can proceed successfully. If you choose partial network automation,
you give up the error handling and network automation features that are available with a full
network configuration that includes supported switches. For a partial network deployment,
the switches are not discovered, so PowerFlex Manager does not have access to switch
configuration information. You must ensure that the switches are configured correctly,
since PowerFlex Manager cannot configure the switches for you. If
a switch is not configured correctly, the deployment may fail and PowerFlex Manager
cannot provide information about why the deployment failed. For a partial network
deployment, you must add all the interfaces and ports, as you would when deploying with full
network automation. The Switch Port Configuration must be set to Port Channel (LACP
enabled). In addition, the LACP fallback or LACP ungroup option must be configured on the
port channels.
NOTE: In this release, PowerFlex Manager supports PowerFlex file deployment only with Port Channel (LACP enabled).
Number of instances Enter the number of instances that you want to add. If you select more than one instance,
a single component representing multiple instances of an identically configured component
is created. Edit the component to add extra instances. If you require different configuration
settings, you can create multiple components.
Related components Select Associate All or Associate Selected to associate all or specific components to the
new component.
Import configuration from reference node Click this option to import an existing node configuration and use it for the node
component settings. On the Select Reference Node page, select the node from which you want to
import the settings and click Select.
OS Settings
Host name selection If you choose Specify At Deployment Time, you must type the name for the host at
deployment time. If you choose Auto Generate, PowerFlex Manager displays the Host Name
Template field to enable you to specify a macro that includes variables that produce a unique
hostname. For details on which variables are supported, see the context-sensitive help for
the field. If you choose Reverse DNS Lookup, PowerFlex Manager assigns the hostname by
performing a reverse DNS lookup of the host IP address at deployment time.
OS Image Specifies the location of the operating system image installation files. You must choose Use
Compliance File Linux Image (provided with the target compliance file) for deploying a
PowerFlex file cluster.
NOTE: The IC contains the operating system image (an embedded operating system based on SUSE
Linux) required for deploying a PowerFlex file resource group. Choose the image that is
part of the IC.
OS Credential Select the credential that you created on the Credentials Management page. Alternatively,
you can create a credential while you are editing a template. If you select a credential that was
created on the Credentials Management page, you do not need to type the username and
password, since they are part of the credential definition. For nodes running Linux, the user is
root.
NTP Server Specifies the IP address of the NTP server for time synchronization. If adding more than one
NTP server in the operating system section of a node component, be sure to separate the IP
addresses with commas.
Use Node For Dell PowerFlex Indicates that this node component is used for a PowerFlex deployment. When this option is
selected, the deployment installs the SDC components, as required for a PowerFlex file cluster
to access the PowerFlex volume in a Linux environment. To deploy a PowerFlex file cluster
successfully, include at least two nodes in the template.
PowerFlex Role Specifies the deployment type for PowerFlex. Compute Only indicates that the
node is only used for compute resources. For a PowerFlex file template, be sure to select
Compute Only as the role and add a Node, a PowerFlex File Cluster, and a PowerFlex
Cluster component to the template.
Enable PowerFlex File Enables PowerFlex file capabilities on the node. This option is only available if you choose Use
Compliance File Linux Image as the OS Image and then choose Compute Only as the
PowerFlex Role. If Enable PowerFlex File is selected, you must ensure that the template
includes the necessary NAS File Management network. NAS File Data network is optional.
If you do not configure NAS File Management on the template, the template validation will
fail.
Switch Port Configuration Specifies whether Cisco virtual PortChannel (vPC) or Dell Virtual Link Trunking (VLT) is
enabled or disabled for the switch port. For PowerFlex file templates that use a Linux operating
system image, the only available option is Port Channel (LACP enabled), which turns on vPC or VLT
with the Link Aggregation Control Protocol enabled.
Teaming And Bonding Configuration For a PowerFlex file template, if you choose Port Channel (LACP enabled) as the switch port
configuration, the only teaming and bonding option is Mode 4 (IEEE 802.3ad policy).
Hardware Settings
Target Boot Device Specifies the target boot device. Local Flash storage for Dell EMC PowerFlex: Installs the
operating system to the BOSS flash storage device that is present in the node and configures
the node to support PowerFlex file. If you select the option to Use Node for Dell EMC
PowerFlex under OS Settings, the Local Flash storage for Dell EMC PowerFlex option is
automatically selected as the target boot device.
Node Pool Specifies the pool from which nodes are selected for the deployment.
BIOS Settings
System Profile Select the system power and performance profile for the node. Default selection is
Performance.
User Accessible USB Ports Enables or disables the user-accessible USB ports. Default selection is All Ports On.
Number of Cores per Processor Specifies the number of enabled cores per processor. Default selection is All.
Virtualization Technology Enables the additional hardware capabilities of virtualization technology. Default selection is
Enabled.
Logical Processor Each processor core supports up to two logical processors. If enabled, the BIOS reports all
logical processors. If disabled, the BIOS reports only one logical processor per core. Default
selection is Enabled.
Execute Disable Enables or disables execute disable memory protection. Default selection is Enabled.
Node Interleaving Enable or disable the interleaving of allocated memory across nodes.
● If enabled, only nodes that support interleaving and have the read/write attribute for node
interleaving set to enabled are displayed. Node interleaving is automatically set to enabled
when a resource group is deployed on a node.
● If disabled, any nodes that support interleaving are displayed. Node interleaving is
automatically set to disabled when a resource group is deployed on a node. Node
interleaving is also disabled for a resource group with NVDIMM compression.
● If not applicable is selected, all nodes are displayed irrespective of whether interleaving is
enabled or disabled. This setting is the default. Default selection is Disabled.
Network Settings
Add New Interface Click Add New Interface to create a network interface in a template component. Under
this interface, all network settings are specified for a node. This interface is used to find a
compatible node in the inventory. For example, if you add Two Port, 100 gigabit to the
template, when the template is deployed PowerFlex Manager matches a node with a two
port 100-gigabit network card as its first interface. To add one or more networks to the port,
select Add Networks to this Port. Then, choose the networks to add, or mirror network
settings defined on another port. To see network changes that were previously made to a
template, you can click View/Edit under Interfaces. Or you can click View All Settings
on the template, and then click View Networks. To see network changes at resource group
deployment time, click View Networks under Interfaces.
NOTE: If you used the sample template, standard configuration ports are selected by
default. Verify the selected ports before validating the settings.
Add New Static Route Click Add New Static Route to create a static route in a template. To add a static route, you
must first select Enabled under Static Routes. A static route allows nodes to communicate
across different networks. A static route requires a Source Network and a Destination
Network, as well as a Gateway. The source and destination network must each be a
PowerFlex data network or replication network that has the Subnet field defined. If you add
or remove a network for one of the ports, the Source Network drop-down list is not
updated and still shows the old networks. To see the changes, save the node settings
and edit the node again. (See the static route example after this table.)
Validate Settings Click Validate Settings to determine what can be chosen for a deployment with this
template component. The Validate Settings wizard displays a banner when one or more
resources in the template do not match the configuration settings that are specified in the
template. The wizard displays the following tabs:
● Valid (number) lists the resources that match the configuration settings.
● Invalid (number) lists the resources that do not match the configuration settings. The
reason for the mismatch is shown at the bottom of the wizard.
For example, you might see Network Configuration Mismatch as the reason for the mismatch
if you set the port layout to use a 100-Gb network architecture, but one of the nodes is using
a 25-Gb architecture.
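As an illustration of the Add New Static Route setting described above (the network names, subnets, and gateway are hypothetical and would come from your environment):
Source Network: flex-data1 (Subnet 192.168.151.0, mask 255.255.255.0)
Destination Network: flex-rep1 (Subnet 192.168.161.0, mask 255.255.255.0)
Gateway: 192.168.151.1
With this route, nodes on the flex-data1 network can reach the flex-rep1 replication network through the 192.168.151.1 gateway.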
Create a template
The Create feature allows you to create a template, clone the components of an existing template into a new template, or
import a pre-existing template.
Steps
1. On the menu bar, click Lifecycle > Templates.
2. On the Templates page, click Create.
3. In the Create dialog box, select one of the following options:
● Clone an existing PowerFlex Manager template
● Upload External Template
● Create a new template
If you select Clone an existing PowerFlex Manager template, select the Category and the Template to be Cloned.
The components of the selected template are copied into the new template. You can clone one of your own templates or a
sample template.
8. Specify the resource group permissions for this template under Who should have access to the resource group
deployed from this template? by performing one of the following actions:
● To restrict access to administrators, select Only PowerFlex SuperUser.
● To grant access to administrators and specific standard users, select PowerFlex SuperUser and Specific Lifecycle
Admin and Drive Replacer and perform the following steps:
a. Click Add User(s) to add one or more standard or operator users to the list.
b. To remove a standard or operator user from the list, select the user and click Remove User(s).
c. After adding the standard or operator users, select or clear the check box next to each user
to grant or block access to this template.
● To grant access to administrators and all standard users, select PowerFlex SuperUser and All Lifecycle Admin
and Drive Replacer.
9. Click Save.
Clone a template
Steps
1. On the menu bar, click Lifecycle > Templates.
2. Open a PowerFlex File template from Sample Templates, and then click More Actions > Clone in the right pane.
You can also click Create > Clone an existing PowerFlex Manager template on the My Templates page if you want to
clone one of your own templates or the sample templates.
3. In the Clone Template dialog box, enter a template name in the Template Name box.
4. Select a template category from the Template Category list. To create a template category, select Create New Category.
5. In the Template Description box, enter a description for the template.
6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version for a template, since it only includes server firmware updates. The
compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does
not show any minimal compliance versions in the Firmware and Software Compliance list.
NOTE: Changing the compliance version might update the firmware level on nodes for this resource group. Firmware on
shared devices is still maintained by the global default firmware repository.
7. Indicate Who should have access to the resource group deployed from this template by selecting one of the following
options:
● Grant access to Only PowerFlex Manager Administrators.
● Grant access to PowerFlex Manager Administrators and Specific Standard and Operator Users. Click Add
User(s) to add one or more standard or operator users to the list. Click Remove User(s) to remove users from the
list.
● Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
8. Click Next.
9. On the Additional Settings page, provide new values for the Network Settings, OS Settings, PowerFlex Gateway
Settings, and Node Pool Settings.
10. Click Finish.
11. After you click Finish, you are redirected to the Templates page. Add or modify the values of each component (PowerFlex
cluster, PowerFlex file cluster, node) based on the details in the preceding table, and ensure that there is no warning
on any of the three components. Then click Publish Template to publish the template. After publishing a template, you
can use the template to deploy a resource group on the Resource Groups page.
Steps
1. Click Modify Template.
2. To add a component type to the template, click Add Node and Add Cluster (PowerFlex Cluster and PowerFlex File Cluster)
at the top of the template builder.
The corresponding <component type> component dialog box appears.
3. If you are adding a node, choose one of the following network automation types:
● Full Network Automation
● Partial Network Automation
When you choose Partial Network Automation, PowerFlex Manager skips the switch configuration step, which is normally
performed for a resource group with Full Network Automation. Partial network automation allows you to work with
unsupported switches. However, it also requires more manual configuration before deployments can proceed successfully.
If you choose to use partial network automation, you give up the error handling and network automation features that
are available with a full network configuration that includes supported switches. For more information about the manual
configuration steps needed for partial network automation, see the networking section of this document. In the Number
of Instances box, provide the number of component instances that you want to include in the template.
NOTE: Minimum is two nodes and maximum is 16 nodes for PowerFlex File deployment.
4. Click Continue.
5. On the Node page, provide new values for the OS Settings, Hardware Settings, BIOS Settings, and Network Settings.
6. Click Validate Settings to determine what can be chosen for a deployment with this template component. The Validate
Settings wizard displays a banner when one or more resources in the template do not match the configuration settings
that are specified in the template.
7. Click Save.
8. If you are adding a cluster, in the Select a Component box, choose one of the following cluster types:
● PowerFlex cluster
● PowerFlex File cluster
9. Under Related Components, perform one of the following actions:
● To associate the component with all existing components, click Associate All.
● To associate the component with only selected components, click Associate Selected and then select the components
to associate.
Based on the component type, specific settings and properties appear automatically that are required and can be edited.
10. Click Save to add the component to the template builder.
11. Repeat steps 1 through 6 to add additional components.
12. After you finish adding components to your template, click Publish Template.
A template must be published to be deployed. It remains in draft state until published.
After publishing a template, you can use the template to deploy a resource group on the Resource Groups page.
Steps
1. On the menu bar, click Lifecycle > Templates.
2. On the Templates page, click the template that you want to edit and click Modify Template in the right pane.
3. On the template builder page, in the right pane, click Modify.
4. In the Modify Template Information dialog box, enter a template name in the Template Name box.
5. Select a template category from the Template Category list. To create a template category, select Create New Category.
6. In the Template Description box, enter a description for the template.
7. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
8. Indicate Who should have access to the resource group deployed from this template by selecting one of the following
options:
● Grant access to Only PowerFlex Manager Administrators
● Grant access to PowerFlex Manager Administrators and Specific Standard and Operator Users. Click Add
User(s) to add one or more standard or operator users to the list. Click Remove User(s) to remove users from the
list.
● Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
9. Click Save.
Edit a template
You can edit an existing template to change its draft state to published for deployment, or to modify its components and their
properties.
Steps
1. On the menu bar, click Lifecycle > Templates.
2. Open a template, and click Modify Template.
3. Make changes as needed to the settings for components within the template. Based on the component type, required
settings and properties are displayed automatically and can be edited.
a. To edit PowerFlex cluster settings, select the PowerFlex Cluster component and click Modify. Make the necessary
changes, and click Save.
b. To edit PowerFlex File cluster settings, select the PowerFlex File cluster component and click Modify. Make the
necessary changes, and click Save.
c. To edit node settings, select the Node component and click Modify. Make the necessary changes, and click Save.
4. Optionally, click Publish Template to make the template ready for deployment.
Prerequisites
Ensure LLDP is enabled on the switches, and update the inventory in PowerFlex Manager.
Steps
1. On the menu bar, click one of the following:
● Lifecycle > Resource Groups and click Deploy New Resource Group.
● Lifecycle > Templates and click Deploy.
The Deploy Resource Group wizard opens.
2. On the Deploy Resource Group page, perform the following steps:
a. From the Select Published Template list, select the template to deploy a resource group.
b. Enter the Resource Group Name (required) and Resource Group Description (optional) that identifies the resource
group.
c. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or
choose Use PowerFlex Manager appliance default catalog.
You cannot select a minimal compliance version when you deploy a new resource group, since it only includes server
firmware updates. The compliance version for a new resource group must include the full set of compliance update
capabilities. PowerFlex Manager does not show any minimal compliance versions in the Firmware and Software
Compliance list.
NOTE: Changing the firmware repository might update the firmware level on nodes for this resource group. The
global default firmware repository maintains the firmware on the shared devices.
d. Indicate Who should have access to the resource group deployed from this template by selecting one of the
following options:
● Grant access to Only PowerFlex Manager Administrators.
● To grant access to administrators and specific standard and operator users, select the PowerFlex Manager
Administrators and Specific Standard and Operator Users option, and perform the following steps:
i. Click Add User(s) to add one or more standard or operator users to the list displayed.
ii. Select which users will have access to this resource group.
iii. To delete a standard or operator user from the list, select the user and click Remove User(s).
iv. After adding the standard or operator users, select or clear the check box next to each user to
grant or block access to use this template.
● Grant access to PowerFlex Manager Administrators and All Standard and Operator Users.
3. Click Next.
4. On the screens that follow the Deployment Settings page, configure the settings, as needed for your deployment.
5. Click Next.
6. On the Schedule Deployment page, select one of the following options and click Next:
● Deploy Now - Select this option to deploy the resource group immediately.
● Deploy Later - Select this option and enter the date and time to deploy the resource group.
7. Review the Summary page.
The Summary page gives you a preview of what the resource group will look like after the deployment.
8. Click Finish when you are ready to begin the deployment. If you want to edit the resource group, click Back.
Steps
1. On the menu bar, click Lifecycle > Resource Groups.
2. On the Resource Groups page, ensure the Status and Deployment State show as Healthy.
A resource group can be in one of the following states:
● Healthy: The resource group is successfully deployed and healthy.
● Warning: One or more resources in the resource group require corrective action.
● Critical: The resource group is in a severely degraded or nonfunctional state and requires attention.
● Pending: The deployment is scheduled for a later time or date.
● In Progress: The resource group deployment is in progress, or has other actions currently in process, such as a node
expansion or removal.
● Cancelled: The resource group deployment has been stopped. You can update the resources or retry the deployment, if
necessary.
● Incomplete: The resource group is not fully functional because it has no volumes associated with it. Click Add
Resources to add volumes.
● Service Mode: The resource group is in service mode.
● Lifecycle Mode: The resource group is in lifecycle mode. Resource groups in lifecycle mode are enabled with health and
compliance monitoring and non-disruptive upgrade features only.
● Managed Mode: The resource group is in managed mode. Resource groups in managed mode are enabled with health and
compliance monitoring, non-disruptive upgrade, automated resource addition, and automated resource
replacement features.
The Resource Groups page displays the resource groups that are in these states in both Tile and List views.
Chapter 10: Deploying PowerFlex NVMe over TCP
Nonvolatile Memory Express (NVMe) is a high-speed storage protocol designed specifically to take advantage of solid-state
drive performance and bandwidth. NVMe over fabrics allows hosts to use existing network architectures such as Fibre Channel
and Ethernet to access NVMe devices at greater speeds and lower latency than legacy storage protocols.
Requirements:
● PowerFlex Manager 4.0 must be deployed and configured
● Four storage-only nodes (with standard SSD or NVMe disks)
d. Manually select each storage-only node by the serial number or the iDRAC IP address, or allow PowerFlex to select the
nodes automatically from the selected node pool.
e. Click Next.
f. Click Deploy Now > Next.
g. Review the summary screen and click Next.
Monitor deployment activity on the right panel under Recent Activity.
Steps
1. Log in to the VMware vSphere Client.
2. Click Home/Inventory and select the host.
3. Select Configure > VMkernel adapters.
4. Edit PowerFlex-Data 1.
5. Select the NVMe over TCP check box and click OK.
6. Repeat these steps for the remaining PowerFlex data networks.
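If you prefer the command line, the same NVMe over TCP tag can typically be applied to a VMkernel adapter with esxcli; the adapter name below is an example, and the exact tag name should be confirmed for your ESXi release:
esxcli network ip interface tag add -i vmk2 -t NVMeTCP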
Steps
1. Log in to the VMware vSphere Client.
2. Click Home > Inventory > Hosts and Clusters.
3. In the VMware vSphere console, browse to the customer data center, compute-only cluster, and select the added host.
4. From the right pane, click Configure > Storage Adapters.
5. From the right pane, click Add Software Adapter.
6. Click Add NVMe over TCP adapter.
Steps
1. Log in to VMware vSphere Client.
2. Select the first VMware NVMe over TCP storage adapter. For example, vmhba6x.
3. On the right pane, select Configure > Storage Adapters.
4. From the pane, select Controllers/Add Controller.
The host NQN is listed at the top of the form.
5. Click COPY and place the host NQN in the copy buffer.
6. Click CANCEL.
Steps
1. Log in to PowerFlex Manager.
2. Click Block > Hosts.
3. Click +Add Host.
4. Enter the hostname and paste the host NQN from the copy buffer. The default number of paths is four.
5. Click Add.
Create a volume
Use this procedure to create a volume.
Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Click +Create Volume.
3. Enter the number of volumes and the name of the volumes.
4. Select Thick or Thin. Thin is the default.
5. Enter the required volume size in GB, in 8 GB increments.
6. Select the NVMe storage pool and click Create.
Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Select the Volume check box and click Mapping > Map.
3. Select the check box on the host that you are mapping the volume to.
4. Click Map > Apply.
Prerequisites
Ensure that a volume is mapped to the host to be able to connect the SDT paths.
Steps
1. From PowerFlex Manager, click Block > NVMe Targets.
2. Select any one of the listed SDTs.
3. Record the IP addresses and discovery port in the lower right corner.
Example output:
Prerequisites
If the host is not connected to the embedded operating system 15 SPx repository, perform the following steps:
1. Run zypper ar http://<customer-repository-address>/pub/suse/sles/15/dell-sles15.repo
2. Type zypper in nvme-cli to install the NVMe command line.
3. Type Y to confirm if additional modules are required.
4. Type cat /etc/nvme/hostnqn to find the host's NQN address. For example,
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-4e10-8034-b7c04f463333.
Steps
1. Log in to PowerFlex Manager.
2. Click Block > Hosts.
3. Click +Add Host.
4. Enter the hostname and paste the host NQN.
5. The default number of paths is four.
6. Click Add.
Create a volume
Use this procedure to create a volume.
Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Click +Create Volume.
3. Enter the number of volumes and the name of the volumes.
4. Select Thick or Thin. Thin is the default.
5. Enter the required volume size in GB, in 8 GB increments.
6. Select the NVMe storage pool and click Create.
Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Select the Volume check box and click Mapping > Map.
3. Select the check box on the host that you are mapping the volume to.
4. Click Map > Apply.
Prerequisites
Ensure a volume is mapped to the host to be able to connect the SDT paths.
Steps
1. From PowerFlex Manager, click Block > NVMe Targets.
2. Select any one of the listed SDTs.
3. Record the IP addresses and discovery port in the lower right corner.
Example output:
4. Type echo "nvme-tcp" | tee -a /etc/modules-load.d/nvme-tcp.conf to load the NVMe kernel module on
startup and add it to the nvme-tcp.conf file.
5. Reboot the host. After the system returns to operation, type lsmod |grep nvme to verify if the modules are loaded.
Example output:
nvme_tcp 36864 0
nvme_fabrics 28672 1 nvme_tcp
nvme_core 135168 2 nvme_tcp,nvme_fabrics
t10_pi 16384 2 sd_mod,nvme_core
6. Type nvme discover -t tcp -a <SDT IP ADDRESS> -s 4420 to discover the PowerFlex NVMe SDT interfaces.
Use one of the SDT IP addresses gathered in step 2. If discovery fails, use the next IP address in the list and try again.
Example output from a successful discovery:
10. (Optional) To enable the NVMe path and storage persistence beyond a reboot, type:
echo "-t tcp -a <SDT IP ADDRESS> -s 4420" | tee -a /etc/nvme/discovery.conf
systemctl enable nvmf-autoconnect.service
11. Reboot the host and verify paths and volumes persist.
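One way to verify after the reboot (a minimal sketch, assuming nvme-cli is installed):
lsblk                 # the PowerFlex volumes should still appear as block devices
nvme list-subsys      # shows the connected NVMe subsystems and their TCP paths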
Steps
1. The following kernel modules are required for NVMe over TCP connectivity: nvme, nvme_fabrics, and nvme_tcp. Use the
lsmod command to confirm that the modules are loaded.
The following output is for example purpose only. The output may vary depending on the deployment and node type.
2. If nvme_tcp and nvme_fabrics are not listed, use the following command to add lines to the nvme_tcp.conf file. This
forces the modules to load on boot:
The following output is for example purpose only. The output may vary depending on the deployment and node type.
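A sketch of such a command, modeled on the SLES step earlier in this chapter (the file name is an assumption):
echo "nvme_tcp" | tee -a /etc/modules-load.d/nvme_tcp.conf
echo "nvme_fabrics" | tee -a /etc/modules-load.d/nvme_tcp.conf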
3. Re-run lsmod |grep nvme to confirm nvme_tcp and nvme_fabrics are now listed:
5. If the command in step 4 returns no value, enter the following command to generate an nvme hostnqn:
7. If nvme hostid does not exist, enter the following command to generate an nvme hostid:
NOTE: After completing the nvme gen-hostnqn command, edit the newly created file: /etc/nvme/hostid and
remove nqn.2014-08.org.nvmexpress:uuid: from the beginning of the line.
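A sketch of steps 5 through 7, assuming nvme-cli's gen-hostnqn subcommand and that the host ID is the UUID portion of the host NQN, as the note above implies:
nvme gen-hostnqn | tee /etc/nvme/hostnqn
sed 's/^nqn\.2014-08\.org\.nvmexpress:uuid://' /etc/nvme/hostnqn > /etc/nvme/hostid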
Create a volume
Use this procedure to create a volume.
Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Click +Create Volume.
3. Enter the number of volumes and the name of the volumes.
4. Select Thick or Thin. Thin is the default.
5. Enter the required volume size in GB, in 8 GB increments.
6. Select the NVMe storage pool and click Create.
Steps
1. From PowerFlex Manager, click Block > Volumes.
2. Select the Volume check box and click Mapping > Map.
3. Select the check box on the host that you are mapping the volume to.
4. Click Map > Apply.
Steps
1. From PowerFlex Manager, click Block > NVMe Targets.
2. Select any one of the listed SDTs.
3. Record the IP addresses and discovery port in the lower right corner.
Example output:
Steps
1. Use SSH to log in to the primary MDM and enter the scli command that includes --host_name.
NOTE: The value for host_name should correspond to the abbreviated hostname for the system to be added.
chargers-pfmp-deployer:~ # hostname
chargers-pfmp-deployer
The following output is for example purpose only. The output may vary depending on the deployment and node type.
Example output:
2. To confirm that the compute-only host is added to PowerFlex, enter the scli command --query_host.
The following output is for example purpose only. The output may vary depending on the deployment and node type.
Example output:
3. Enter the scli command --map_volume_to_host to map the volume to the host.
The following output is for example purpose only. The output may vary depending on the deployment and node type.
Example output:
4. Using the NVMe command line, connect the compute-only node to the volume. Replace the IP address listed with one of the SDT IP
addresses gathered in Discover target IP addresses.
If discovery fails, use the next IP address in the list and try again.
Example output:
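The connection command and its output are environment-specific; as a sketch, using nvme-cli's connect-all subcommand (assumed here) with one of the recorded SDT addresses:
nvme connect-all -t tcp -a <SDT IP ADDRESS> -s 4420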
5. Type lsblk to verify the connection to the PowerFlex system on the compute-only host.
Example output:
6. (Optional) To enable the NVMe path and storage persistence beyond a reboot, type:
echo "-t tcp -a <SDT IP ADDRESS> -s 4420" | tee -a /etc/nvme/discovery.conf
systemctl enable nvmf-autoconnect.service
7. Reboot the host and verify that paths and volumes persist.
Chapter 11: Deploying the VMware NSX-T Ready nodes
Use this chapter to deploy the VMware NSX-T Ready nodes.
After the PowerFlex appliance is configured at the customer location by Dell, VMware services perform the NSX-T data center
installation. The NSX-T Edge hardware, physical switch configuration, and virtualization components are pre-configured for the
VMware engineer to perform the NSX-T data center installation.
If the PowerFlex management controllers (four) and/or NSX-T Edge nodes are provided by the customer, use all instructions
for these node types as a recommended customer configuration.
Refer to Dell PowerFlex Appliance and PowerFlex Rack with PowerFlex 4.x Cabling and Connectivity Guide for information
about setting up VMware NSX-T in your environment.
Prerequisites
● Determine which switch ports to use and install the cabling.
● Confirm that there are no hardware issues and all cabling is in place.
Steps
1. Configure the out-of-band (OOB) management on the management switch ports for the VMware NSX-T Edge nodes (the
port examples are for the Cisco Nexus 92348GC-X switch), as follows:
interface e1/31
description edge-01 (00:M0) m0 – nsx-edge-01
switchport access vlan 101 (Provided in Enterprise Management Platform)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
no shutdown
interface e1/32
description edge-02 (00:M0) m0 – nsx-edge-02
switchport access vlan 101 (Provided in Enterprise Management Platform)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
no shutdown
2. From the Cisco Nexus NX-OS switch CLI, type the following to save the configuration on all the switches:
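On Cisco NX-OS, the configuration is typically saved with the following command; the same command applies wherever this chapter instructs you to save the switch configuration:
copy running-config startup-config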
Prerequisites
● Determine which switch ports to use and install the cabling.
● Confirm that there are no hardware issues and all cabling is in place.
● Be sure to look up the following SVIs within the Enterprise Management Platform (EMP). The SVIs must be created on the
aggregation switches depending on which networking topology is being used within the build.
● There is an option to configure either RAID 1+0 (local storage) or vSAN on the VMware NSX-T Edge nodes. If configuring a
vSAN scenario, create the vSAN VLAN and add the VLAN ID to the appropriate trunk ports.
● nsx-edge-vmotion (only if required): VLAN 113; see the EMP for details; Switch A and Switch B
● nsx-edge-vsan (only if required): VLAN 116; see the EMP for details; Switch A and Switch B
● nsx-edge-transport-1, nsx-edge-transport-2: VLAN 121; see the EMP for details; Switch A and Switch B
● nsx-edge-external-1: VLAN 122; see the EMP for details; Switch A
● nsx-edge-external-2: VLAN 123; see the EMP for details; Switch B
Steps
1. Perform this step only if configuring vSAN on the NSX-T Edge cluster and the links are connecting to the aggregation
switches. If vSAN is not being configured or the management cables are connecting to the access switches instead of the
aggregation switches, then skip this step. The vMotion traffic is not configured on the NSX-T Edge nodes since VMware
does not recommend that NSX-T Gateway VMs be migrated between VMware ESXi hosts. Configure only the vSAN VLANs
on both switch sides for the aggregation switches as follows:
CAUTION: By default, the VMware NSX-T Edge nodes do not connect to the access switches. However, if
port capacity or cable distance is an issue, the VMware NSX-T Edge nodes can connect the two management
ports to the access switches instead of the aggregation switches. The configuration below is identical if
configured on the access switches.
● Configure VLAN for the vSAN traffic on aggregation Switch A and aggregation Switch B:
● Configure VLAN for the vMotion traffic on aggregation Switch A and aggregation Switch B:
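The VLAN definitions themselves come from the EMP; as a minimal sketch (VLAN IDs from the table above, names assumed):
vlan 116
  name nsx-edge-vsan
vlan 113
  name nsx-edge-vmotion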
2. Configure the transport VLAN and SVI on both switch sides for the aggregation switches as follows:
a. Configure transport VLAN and SVI on aggregation Switch A:
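A minimal sketch of such a configuration (the VLAN ID comes from the table above; the SVI address comes from the EMP):
vlan 121
  name nsx-edge-transport-1
interface vlan 121
  description nsx-edge-transport-1
  ip address <see the EMP>
  no shutdown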
3. Configure two NSX-T Edge external VLANs, and SVIs on the appropriate aggregation switch as follows.
NOTE: Do not create both on the same switch.
a. Configure NSX-T Edge external 1 VLAN and SVI only to aggregation Switch A as follows:
interface port-channel 50 # This is the default peer-link where all VLANs pass
through between switches
description switch peerlink port-channel
switchport trunk allowed vlan remove 122
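In addition to removing the VLAN from the peer-link as shown above, the external 1 VLAN and SVI would be created on aggregation Switch A; a sketch (addressing from the EMP):
vlan 122
  name nsx-edge-external-1
interface vlan 122
  description nsx-edge-external-1
  ip address <see the EMP>
  no shutdown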
b. Configure NSX-T Edge external 2 VLAN and SVI only to aggregation Switch B as follows:
interface port-channel 50 # This is the default peer-link where all VLANs pass
through between switches
description switch peerlink port-channel
switchport trunk allowed vlan remove 123
feature bgp
feature bgp
6. Configure port-channel (LACP) on aggregation Switch A and aggregation Switch B for each Edge as follows:
NOTE: Add VLAN 116 (vSAN) to the port channel only if vSAN is being configured. If RAID 1+0 is configured instead, do
not add VLAN 116 vSAN.
The provided configuration accounts only for two VMware NSX-T Edge nodes, which is the default number when configuring
the local storage with RAID 1+0. If vSAN is being configured, ensure that a port channel is also configured for the third and
fourth VMware NSX-T Edge node.
interface port-channel60
description to NSX-Edge-1
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 60
interface port-channel61
description to NSX-Edge-2
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 61
7. Configure the VMware ESXi access ports on aggregation Switch A for each Edge as follows:
a. Configure the switch ports with LACP enabled for the VMware ESXi management traffic on aggregation switch A. If
vSAN is required, this configuration must also include vSAN traffic that shares the same two interfaces. The example
configuration below provides two NSX-T Edge nodes:
CAUTION: By default, the NSX-T Edge nodes do not connect to the access switches. However, if port
capacity or cable distance is an issue, the NSX-T Edge servers can connect the odd port connections
(management and transport traffic) to the access switches instead of the aggregation switches. The
configuration below is identical if configured on the access switches. The last two connections that are
used for external edge always reside on the aggregation switches.
NOTE: Add VLAN 116 (vSAN) to the appropriate trunk port only if vSAN is being configured. If RAID 1+0 is
configured instead, do not add VLAN 116 (vSAN) to the switches.
interface e1/29/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 60 mode active
no shutdown
interface e1/29/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 61 mode active
no shutdown
b. Configure the switch ports as trunks for the NSX-T transport traffic on aggregation Switch A. The example configuration
below provides two NSX-T Edge nodes:
CAUTION: By default, the NSX-T Edge nodes do not connect to the access switches. However, if port
capacity or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections
(management and transport traffic) to the access switches instead of the aggregation switches. The
configuration below is identical if configured on the access switches. The last two connections that are
used for external edge always reside on the aggregation switches.
interface e1/30/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown
interface e1/30/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown
c. Configure the switch port as trunks for the NSX-T external edge traffic on aggregation switch A. The example
configuration below provides two NSX-T Edge nodes:
interface e1/28/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 122 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown
interface e1/28/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 122 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown
d. From the Cisco NX-OS switch CLI, type the following to save the configuration on all the switches:
8. Configure ESXi access ports on aggregation Switch B for each Edge as follows:
a. Configure the switch port with LACP enabled for the ESXi management traffic on aggregation switch B. If vSAN is
required, then this configuration must also include vSAN traffic that shares the same two interfaces. The example
configuration below provides two NSX-T Edge nodes:
CAUTION: By default, the NSX-T Edge nodes do not connect to the access switches. However, if port
capacity or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections
(management and transport traffic) to the access switches instead of the aggregation switches. The
last two connections that are used for external edge always reside on the aggregation switches. The
configuration below is identical if configured on the access switches.
NOTE: Add VLAN 116 (vSAN) to the appropriate switch port only if vSAN is being configured. If RAID 1+0 is
configured instead, do not add VLAN 116 (vSAN) to the switches.
interface e1/29/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 60 mode active
no shutdown
interface e1/29/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 61 mode active
no shutdown
b. Configure the switch ports as trunks for the NSX-T transport traffic on aggregation Switch B. The example configuration
below provides two NSX-T Edge nodes:
CAUTION: By default, the NSX-T Edge nodes do not connect to the access switches. However, if port
capacity or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections
(management and transport traffic) to the access switches instead of the aggregation switches. The
last two connections that are used for external edge always reside on the aggregation switches. The
configuration below is identical if configured on the access switches.
interface e1/30/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown
interface e1/30/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown
c. Configure the switch port as trunk for the NSX-T external edge traffic on aggregation Switch B. The example
configuration below provides two NSX-T Edge nodes:
interface e1/28/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 123 (Provided in EMP)
spanning-tree port type edge
interface e1/28/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 123 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown
d. From the Cisco NX-OS switch CLI, type the following to save the configuration on all the switches:
Prerequisites
● Determine which switch ports to use and install the cabling.
● Confirm that there are no hardware issues and all cabling is in place.
● Be sure to look up the following SVIs within the Enterprise Management Platform (EMP). These SVIs must be created on the
border leaf switches depending on which networking topology is being used within the build.
● There is an option to configure either RAID 1+0 (local storage) or vSAN on the NSX-T Edge nodes. If configuring a
vSAN scenario, create the vSAN SVI and add the VLAN ID to the appropriate trunk ports.
● nsx-edge-vsan (only if required): VLAN 116; see the EMP for details; Switch A and Switch B
Steps
1. Configure the vSAN networking configuration on both switch sides for the border leaf switches as follows:
NOTE: Perform this step only if configuring vSAN on NSX-T Edge cluster.
● Configure VLAN for the vSAN traffic on border leaf Switch A and border leaf Switch B:
interface nve1
member vni 800116
suppress-arp
ingress-replication protocol bgp
evpn
vni 800116
rd auto
route-target import auto
route-target export auto
exit
● Configure VLAN for the vMotion traffic on border leaf Switch A and border leaf Switch B:
interface nve1
member vni 800113
suppress-arp
ingress-replication protocol bgp
evpn
vni 800113
rd auto
route-target import auto
route-target export auto
exit
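The nve1 and evpn stanzas above assume that the VLANs are mapped to the VNIs shown; a sketch of that mapping (VLAN IDs and VNIs taken from this section):
vlan 116
  vn-segment 800116
vlan 113
  vn-segment 800113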
2. Configure the transport VLAN and SVI on both switch sides for the border leaf switches as follows:
● Configure transport VLAN and SVI on border leaf Switch A and border leaf Switch B:
interface nve1
member vni 800121
suppress-arp
ingress-replication protocol bgp
evpn
vni 800121
rd auto
route-target import auto
route-target export auto
exit
3. Configure two Edge external SVIs on the appropriate border leaf switch as follows.
NOTE: Do not create both on each switch.
a. Configure Edge external 1 VLAN and SVI only to border leaf Switch A as follows:
interface port-channel 50 ## This is the default peer-link where all VLANs pass
through between switches.
switchport trunk allowed vlan remove 122 # Remove VLAN from peer-link to prevent
alerts that VLAN IDs do not match.
interface nve1
member vni 800122
suppress-arp
ingress-replication protocol bgp
evpn
vni 800122
rd auto
route-target import auto
route-target export auto
b. Configure Edge external 2 VLAN and SVI only to border leaf Switch B as follows:
interface port-channel 50 ## This is the default peer-link where all VLANs pass
through between switches.
switchport trunk allowed vlan remove 123 # Remove VLAN from peer-link to prevent
alerts that VLAN IDs do not match.
interface nve1
member vni 800123
suppress-arp
ingress-replication protocol bgp
evpn
vni 800123
rd auto
route-target import auto
route-target export auto
description edge-02-vm1-uplink1
remote-as 65001 (Provided in EMP)
timers 1 3
neighbor <IP to customer peer> (Provided in EMP)
description peering to customer network
remote-as 65200 (Provided in EMP)
timers 1 3
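The neighbor statements above are fragments; on NX-OS they would typically sit inside a router bgp stanza, for example (the local AS number and neighbor addresses are placeholders provided in the EMP):
router bgp <local AS> (Provided in EMP)
  neighbor <edge uplink IP> (Provided in EMP)
    description edge-02-vm1-uplink1
    remote-as 65001 (Provided in EMP)
    timers 1 3
  neighbor <IP to customer peer> (Provided in EMP)
    description peering to customer network
    remote-as 65200 (Provided in EMP)
    timers 1 3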
6. Configure port-channel (LACP) on border leaf Switch A and border leaf Switch B for each NSX-T Edge node as follows:
NOTE: Add VLAN 116 (vSAN) to the port-channel only if vSAN is being configured. If RAID 1+0 is configured instead, do
not add VLAN 116 vSAN.
interface port-channel60
description to NSX-Edge-1
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 60
interface port-channel61
description to NSX-Edge-2
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 61
7. Configure VMware ESXi access ports on border leaf Switch A for each NSX-T Edge node as follows:
a. Configure the switch port for the ESXi management traffic on border leaf switch A. If vSAN is required, then this
configuration includes vSAN traffic that shares the same two interfaces. The example configuration below provides two
NSX-T Edge nodes:
NOTE: Add VLAN 116 (vSAN) to the appropriate trunk port only if vSAN is being configured. If RAID 1+0 is
configured instead, do not add VLAN 116 (vSAN).
WARNING: By default, the NSX-T Edge nodes do not connect to leaf switches. However, if port capacity
or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections (management
and transport traffic) to the leaf switches instead of the border leaf switches. The last two connections
used for external edge always reside on the border leaf switches. The configuration below is identical if
configured on the leaf switches.
interface e1/29/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 60 mode active
no shutdown
interface e1/29/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 61 mode active
no shutdown
b. Configure the switch ports for the NSX-T transport traffic on border leaf Switch A. The example configuration below
provides two NSX-T Edge nodes:
WARNING: By default, the NSX-T Edge nodes do not connect to leaf switches. However, if port capacity
or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections (management
and transport traffic) to the leaf switches instead of the border leaf switches. The last two connections
used for external edge always reside on the border leaf switches. The configuration below is identical if
configured on the leaf switches.
interface e1/30/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown
interface e1/30/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown
c. Configure the switch port for the NSX-T external edge traffic on border leaf Switch A. The example configuration below
provides two NSX-T Edge nodes:
interface e1/28/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 122 (Provided in EMP)
interface e1/28/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 122 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown
d. From the Cisco NX-OS switch CLI, type the following to save the configuration on all the switches:
8. Configure the port channels (LACP) on border leaf Switch B for each NSX-T Edge node as follows:
NOTE: Add VLAN 116 (vSAN) to the port-channel only if vSAN is being configured. If RAID 1+0 is configured instead,
do not add VLAN 116 (vSAN).
The sample switch port configuration below configures two NSX-T Edge nodes.
interface port-channel60
description to NSX-Edge-1
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 60
interface port-channel61
description to NSX-Edge-2
switchport
switchport mode trunk
switchport trunk allowed vlan 105,116(Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
lacp vpc-convergence
no lacp suspend-individual
vpc 61
9. Configure ESXi access ports on border leaf Switch B for each NSX-T Edge node as follows:
a. Configure the switch port for the ESXi management traffic on border leaf Switch B. If vSAN is required, this
configuration also includes vSAN traffic that shares the same two interfaces. The example configuration below provides
two NSX-T Edge nodes.
NOTE: Add VLAN 116 (vSAN) to the appropriate trunk port only if vSAN is being configured. If RAID 1+0 is
configured instead, do not add VLAN 116 (vSAN).
WARNING: By default, the NSX-T Edge nodes do not connect to leaf switches. However, if port capacity
or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections (management
and transport traffic) to the leaf switches instead of the border leaf switches. The last two connections
used for external edge always reside on the border leaf switches. The configuration below is identical if
configured on the leaf switches.
interface e1/29/1
description nsx-edge-1 (xx:xx) -nicX
switchport
interface e1/29/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport mode trunk
switchport trunk allowed vlan 105,113,116 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 61 mode active
no shutdown
b. Configure the switch ports for the NSX-T transport traffic on border leaf Switch B. The example configuration below
provides two NSX-T Edge nodes:
WARNING: By default, the NSX-T Edge nodes do not connect to leaf switches. However, if port capacity
or cable distance is an issue, the NSX-T Edge nodes can connect the odd port connections (management
and transport traffic) to the leaf switches instead of the border leaf switches. The last two connections
used for external edge always reside on the border leaf switches. The configuration below is identical if
configured on the leaf switches.
interface e1/30/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown
interface e1/30/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 121 (Provided in EMP)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
mtu 9216
speed 25000
no shutdown
c. Configure the switch port for the NSX-T external edge traffic on border leaf Switch B. The example configuration below
provides two NSX-T Edge nodes:
interface e1/28/1
description nsx-edge-1 (xx:xx) -nicX
switchport
switchport access vlan 123 (Provided in EMP)
spanning-tree port type edge
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
no shutdown
interface e1/28/2
description nsx-edge-2 (xx:xx) -nicX
switchport
switchport access vlan 123 (Provided in EMP)
spanning-tree port type edge
d. From the Cisco NX-OS switch CLI, type the following to save the configuration on all the switches:
CAUTION: VMware vSphere configuration steps are based on VMware vSphere 7.0.
Prerequisites
Before assigning any IP addresses, perform a ping test to validate that there are no duplicate IP addresses being assigned to the
new node's iDRAC.
Steps
1. Use a keyboard and monitor, a KVM, or a crash cart and connect to the new node.
2. Power on the node and press F2 to open the BIOS setup. Use the password emcbios.
3. From the System Setup main menu, select iDRAC Settings > Network.
4. Confirm that Enable NIC is set to Enabled, and NIC Selection is set to Dedicated.
5. Under IPv4 Settings, configure the settings using the details that were recorded for the following fields:
● Clear the DHCP Enable option to ensure that DHCP is not enabled.
● Static IP address
● Static Subnet Mask
● Static Gateway
6. Under IPv6 Settings, ensure IPv6 is disabled.
7. Click Back.
8. Change the iDRAC password to match the password configured on the other PowerFlex nodes, as follows:
Prerequisites
● Obtain access to the iDRAC web interface.
● Verify access to the upgrade files:
○ BIOS installer: IC location/BIOS
○ iDRAC installer: IC location/iDRAC
○ Backplane Expander firmware installer: IC location/Backplane
○ Network firmware installer: IC location/Intel NIC Firmware
○ SAS firmware installer: IC location/PERC H755 Firmware
○ BOSS controller firmware installer: IC location/BOSS controller firmware
Steps
1. From the iDRAC Web Browser section, click Maintenance > System Update.
2. Click Choose File, browse to and select the BIOS file, and click Upload.
3. Click Choose File, browse to and select the iDRAC file, and click Upload.
4. Click Choose File, browse to and select the network update files, and click Upload.
5. Click Choose File, browse to and select the SAS update file, and click Upload.
6. Click Choose File, browse to and select the backplane expander file, and click Upload.
7. Click Choose File, browse to and select the BOSS controller file, and click Upload.
8. Under Update Details, select all updates.
9. Click Install and Reboot or Install Next Reboot.
The following message appears: Updating Job Queue.
10. Click Job Queue to monitor the progress of the install.
11. Wait for the Firmware Update: BIOS job to complete its Downloading state.
When the job reaches a Scheduled state, a Pending Reboot task appears.
12. Click Reboot. The node boots and updates the BIOS.
Prerequisites
Ensure you have access to the iDRAC.
Steps
1. Log in to the iDRAC Web Console (username: root, password: P@ssw0rd!).
2. For PowerFlex appliance R640/R740xd/R840, click iDRAC > Configuration > Power Management > Power
Configuration > Hot Spare > Disabled.
3. Click Apply.
4. Repeat the steps for the remaining ESXi nodes.
Prerequisites
Verify that you have access to the iDRAC.
Steps
1. Log in to the iDRAC interface of the node.
2. For PowerFlex appliance R640 (NSX-T Edge node), perform the following steps:
a. Click Configure > System Settings > Alert Configuration > SNMP Traps Configuration. Enter the IP address into
the Alert Destination IP field and select the State checkbox. Click Apply.
NOTE: The read-only community string is already populated. Do not remove this entry.
b. To add the destination IP address for an existing customer monitoring system, enter the PowerFlex Manager IP address
into the Alert Destination2 IP field, select the State checkbox, and click Apply.
NOTE: Customer monitoring must support SNMP v2 and use the community string already configured.
3. Click Apply.
Enable UEFI and configure data protection for the BOSS card
Use these steps to manually configure the data protection (RAID 1) for the BOSS card and enable UEFI on the VMware NSX-T
Edge nodes.
Steps
1. From the iDRAC Dashboard, launch the virtual console and select BIOS setup from the Boot menu to enter system BIOS.
2. Power cycle the server and enter BIOS setup.
3. From the System Setup main menu, select Device Settings.
4. Select AHCI Controller in Slot1: BOSS-1 Configuration Utility.
5. Select Create RAID Configuration.
6. Select both devices and click Next.
7. Enter VD_R1_1 for the name and leave the default values.
8. Click Yes to create the virtual disk and click OK to apply the new configuration.
9. Click Next > OK.
10. Select the VD_R1_1 that was created in the above step and click Back > Back > Finish.
11. Select System BIOS.
12. Select Boot Settings and enter the following settings:
● Boot Mode: UEFI
● Boot Sequence Retry: Enabled
● Hard Disk Failover: Disabled
● Generic USB Boot: Disabled
● Hard-disk Drive Placement: Disabled
13. Click Back > Finish.
14. Click Finish to reboot the node.
15. Boot the node into BIOS mode by pressing F2 during boot.
16. Select System BIOS > Boot Settings > UEFI Boot Settings.
17. Select UEFI Boot Sequence to change the order.
18. Click AHCI Controller in Slot 1: EFI Fixed Disk Boot Device 1, and select + to move to the top.
19. Click OK.
20. Click Back > Back.
Prerequisites
Ensure that iDRAC command-line tools are installed on the system jump server.
Steps
1. For a single NSX-T Edge node:
a. From the jump server, open a PowerShell session.
b. Enter racadm -r x.x.x.x -u root -p yyyyy config -g cfgIpmiLan -o cfgIpmiLanEnable 0.
Where x.x.x.x is the IP address of the iDRAC node and yyyyy is the iDRAC password.
2. For multiple NSX-T Edge nodes:
a. From the jump server, at the root of the C: drive, create a folder that is named ipmi.
b. From the File Explorer, go to View and select the File Name extensions check box.
c. Open a notepad file, and paste this text into the file: powershell -noprofile -executionpolicy bypass
-file ".\disableIPMI.ps1"
d. Save the file, and rename it runme.cmd in C:\ipmi.
e. Open a notepad file, and paste this text into the file: import-csv $pwd\hosts.csv -Header:"Hosts" |
Select-Object -ExpandProperty hosts | % {racadm -r $_ -u root -p XXXXXX config -g
cfgIpmiLan -o cfgIpmiLanEnable 0},
Where XXXXXX is the customer password that must be changed.
f. Save the file, and rename it disableIPMI.ps1 in C:\ipmi.
g. Open a notepad file, and list all the iDRAC IP addresses that must be included, one per line. Save the file as hosts.csv in C:\ipmi.
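Once the three files are in place, the batch can be run from the folder that contains them (a usage sketch, assuming the IP address list is saved as hosts.csv in C:\ipmi):
cd C:\ipmi
runme.cmd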
Prerequisites
Ensure that the iDRAC command-line tools are installed on the system jump server.
Steps
1. For a single NSX-T Edge node:
a. From the jump server, open a terminal session.
b. Enter racadm -r x.x.x.x -u root -p yyyyy config -g cfgIpmiLan -o cfgIpmiLanEnable 0,
where x.x.x.x is the IP address of the iDRAC node and yyyyy is the iDRAC password.
2. For multiple NSX-T Edge nodes:
a. From the jump server, open a terminal window.
b. Edit the idracs text file and enter IP addresses for each iDRAC, one per line.
c. Save the file.
d. From the prompt, enter while read line; do echo "$line"; racadm -r $line -u root -p yyyyy
config -g cfgIpmiLan -o cfgIpmiLanEnable 0; done < idracs, where yyyyy is the iDRAC password.
The output displays the IP address for each iDRAC, followed by the output from the racadm command.
Prerequisites
iDRAC must be configured and reachable.
Steps
1. Launch the virtual console and, from the Boot menu, select BIOS Setup as the next boot option to enter the system BIOS.
2. Power cycle the server and wait for the boot menu to appear.
3. From the System setup main menu, select Device Settings > Integrated RAID Controller 1: Dell <PERC H755P Mini>
Configure Utility.
4. Select Main Menu > Configuration Management > Create Virtual Disk.
5. Set the RAID Level option to RAID10 and click Select Physical Disks.
6. Select SSD for Select Media Type to view all the disk drives.
7. Select only the first four disk drives and click Apply Changes.
NOTE: RAID10 works only with an even number of disks and does not work with the default five disks that come with the
VMware NSX-T Edge nodes.
8. Click OK.
9. Click Create Virtual Disk, select the Confirm check box, and then select Yes.
10. Click Yes > OK.
11. Click Back > Back to the main menu.
12. Click Virtual Disk Management to view the initialization process.
NOTE: The initialization process can occur in the background while the ESXi is installed.
Prerequisites
Verify that the customer VMware ESXi ISO is available and is located in the Intelligent Catalog (IC) code directory.
Steps
1. Configure the iDRAC:
a. Connect to the iDRAC interface and launch a virtual remote console from Dashboard and click Launch Virtual Console.
b. Select Connect Virtual Media > Map CD/DVD.
c. Browse to the folder where the ISO file is saved, select it, and click Open.
d. Click Map Device.
e. Click Boot > Virtual CD/DVD/ISO.
f. Click Yes to confirm the boot action.
g. Click Power > Reset System (warm boot).
h. Click Yes to confirm power action.
2. Install VMware ESXi:
a. On the VMware ESXi installer screen, press Enter to continue.
b. Press F11 to accept the license agreement.
c. Under Local, select DELLBOSS VD as the installation location and press Enter.
d. Select US Default as the keyboard layout and press Enter to continue.
e. At the prompt, type the customer-provided root password or use the default password VMwar3!!, and press Enter.
f. When the Confirm Install screen is displayed, press F11.
g. Press Enter to reboot the node.
3. Configure the host:
a. Press F2 to access the System Customization menu.
b. Enter the password for the root user.
c. Go to Direct Console User Interface (DCUI) > Configure Management Network.
d. Set the following options under Configure Management Network:
● Network Adapters: Select vmnic2 and vmnic6.
● VLAN: See Enterprise Management Platform (EMP) for VLAN. The standard VLAN is 105.
● IPv4 Configuration: Set static IPv4 address and network configuration. See EMP for the IPv4 address, subnet
mask, and the default gateway.
● DNS Configuration: See EMP for the primary DNS server and alternate DNS server.
○ Custom DNS Suffixes: See EMP.
● IPv6 Configuration: Disable IPv6.
e. Press ESC to return to DCUI.
f. Press Y to apply the changes and restart the management network.
4. Use the command line to set the IP hash:
a. From the DCUI, press F2 to customize the system.
b. Enter the password for the root user.
c. Select Troubleshooting Options and press Enter.
d. From the Troubleshooting Mode Options menu, enable the following:
● ESXi Shell
● Enable SSH
e. Press Enter to enable the service.
f. Press <Alt>+F1 and log in.
g. To enable the VMware ESXi host to work on the port channel, type the following commands:
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -l iphash
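To confirm that the load-balancing policy is now set to iphash, the matching get commands can be used as an optional check:
esxcli network vswitch standard policy failover get -v vSwitch0
esxcli network vswitch standard portgroup policy failover get -p "Management Network"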
Prerequisites
Ensure you have access to the VMware vSphere Client.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Create an NSX-T vSphere cluster:
a. Right-click PowerFlex Customer-Datacenter and select New Cluster to open the wizard.
NOTE: If more than one customer vCenter datacenter exists, it does not matter in which customer datacenter you deploy
the NSX-T Edge nodes, as long as they are not deployed within the management datacenter.
8. On the Ready to complete screen, review the host summary and click Finish.
This step places each ESXi host in maintenance mode.
9. For each ESXi host, right-click the Edge ESXi node and select Maintenance Mode > Exit Maintenance Mode.
NOTE: vCLS VMs deploy automatically when a host is added to the vCenter cluster. Each cluster has a maximum of
three vCLS VMs.
Add the new VMware ESXi local datastore and rename the
operating system datastore (RAID local storage only)
Use this procedure only if the existing production VMware NSX-T Edge nodes do not have vSAN configured. This procedure
manually adds the new local datastore that was created from the RAID utility to VMware ESXi. By default, the VMware NSX-T
Edge nodes are configured using the local storage with RAID1+0 enabled and come with eight SSD hard drives. Using the local
storage with RAID1+0 enabled is the preferred method recommended by VMware because the NSX-T Edge gateway VMs have
their own method of providing availability at the services level. However, if VMware professional services recommends vSAN,
then skip this procedure.
Prerequisites
Ensure that you have access to the VMware vSphere Client.
Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select Edge-Cluster.
4. Rename the local operating system datastore to BOSS card:
a. Select an NSX-T Edge ESXi host.
b. Click Datastores.
c. Right-click the smaller size datastore (OS) and click Rename.
d. To name the datastore, type Edge-Cluster-<nsx-t edge host shortname>-DASOS.
5. Right-click the third NSX Edge ESXi server and select Storage > New Datastore to open the wizard. Perform the
following:
a. Verify that VMFS is selected and click Next.
b. Name the datastore using Edge-Cluster_DAS01.
c. Click the LUN that has disks created in RAID 10.
d. Click Next > Finish.
6. Repeat steps 1 through 5 for the remaining VMware NSX-T Edge nodes.
Prerequisites
● Ensure that you have access to the management VMware vSphere Client.
● VMware ESXi must be installed with hosts added to the vCenter.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Expand PowerFlex Customer-Datacenter and select EDGE-CLUSTER cluster.
4. Rename local OS Datastore:
a. Select an NSX-T Edge ESXi host.
b. Click Datastores.
c. Right-click the smaller size datastore (operating system) and select Rename.
d. Name the datastore using the <nsx-t edge host short name>-DASOS.
5. Repeat steps 1 through 4 for the remaining NSX-T Edge nodes.
6. Select the EDGE-CLUSTER cluster, and click Configure > vSAN > Services.
7. Click Configure vSAN to open the wizard:
a. Leave default, Single site cluster and click Next.
b. Leave default and click Next.
c. For NSX-T Edge ready nodes within the NSX-T cluster, select a VMware ESXi host, click the disks, and claim the disks
as cache or capacity tier using the Claim For icon as follows:
● Identify one SSD disk to be used for the cache tier (generally 1-2 disks of the same model). Select the disk and then select
cache tier from the drop-down.
● Identify the remaining four capacity drives. Select the remaining disks, select capacity tier from the drop-down, and then
click Next > Finish.
NOTE: Sometimes, two separate disk groups per host are needed. To do this, be sure that two disks are tagged as
cache tier and the remaining disks as capacity tier. A new disk group is created for each disk that is tagged as cache
tier.
Prerequisites
Ensure that the NSX-T Edge ESXi hosts are added to VMware vCenter server. VMware ESXi must be installed with hosts added
to the VMware vCenter.
Steps
1. Log in to the VMware vSphere Client.
2. Click the home icon at the top of the screen and select Hosts and Clusters.
3. Click Datacenter > Edge-Cluster.
4. Configure NTP on VMware ESXi NSX-T Edge host as follows:
a. Select a VMware ESXi NSX-T Edge host.
b. Click Configure > System > Time Configuration and click Edit from Network Time Protocol.
c. Select the Enable check box.
d. Enter the NTP servers as recorded in the Enterprise Management Platform (EMP). Set the NTP service startup policy as
Start and stop with host, and select Start NTP service.
e. Click OK.
5. Repeat for each controller host.
Prerequisites
Ensure that the VMware vSphere vCenter Server and the VMware vSphere Client (HTML5) are accessible.
Steps
1. Create and configure edge-dvswitch0 as follows:
a. Log in to the VMware vSphere Client HTML5.
b. Click Networking.
c. Right-click the data center (Workbook default name).
d. Create edge_dvswitch0 as follows:
i. Click Distributed Switch > New Distributed Switch.
ii. Update the name to Edge_Dvswitch0 and click Next.
iii. Choose the version 7.0.2-ESXi-7.0.2 and later and click Next.
iv. Select 2 for the number of Uplinks.
v. Select Enabled from the Network I/O Control menu.
vi. Clear the Create default port group option.
vii. Click Next > Finish.
Steps
1. Create and configure edge-cluster-node-mgmt-105 distributed port group:
a. Right-click the edge_dvswitch0 (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to edge-cluster-node-mgmt-105 and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (default is 8).
g. Select VLAN as the VLAN type.
h. Set the VLAN ID to 105.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the edge-cluster-node-mgmt-105 and click Edit Settings....
l. Click Teaming and failover.
m. Change Load Balancing mode to Route based on IP hash.
n. Verify that lag1 is active, and the Uplink1 and Uplink2 are unused.
o. Click OK.
2. Create and configure edge-cluster-nsx-vsan-116 distributed port group:
WARNING: There are two vSAN options for each build: full vSAN configuration and partial vSAN
configuration. If partial vSAN is configured, then skip this step because partial only means to configure
the VLAN on the physical switches. vSAN at the virtualization layer is then configured onsite.
Steps
1. Create and configure edge-cluster-nsx-transport-121 distributed port group:
a. Right-click the edge_dvswitch1 (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to edge-cluster-nsx-transport-121 and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (default is 8).
g. Select VLAN as the VLAN type.
h. Set the VLAN ID to 121.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the edge-cluster-nsx-transport-121 and click Edit Settings....
l. Click Teaming and failover.
NOTE: Verify that the load balance policy is configured as Route based on originating virtual port.
n. Click OK.
2. Create and configure edge-cluster-nsx-edge1-122 distributed port group:
a. Right-click the edge_dvswitch1 (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to edge-cluster-nsx-edge1-122 and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (default is 8).
g. Select VLAN as the VLAN type.
h. Set the VLAN ID to 122.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
n. Click OK.
3. Create and configure edge-cluster-nsx-edge2-123 distributed port group:
a. Right-click the edge_dvswitch1 (Workbook default name).
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to edge-cluster-nsx-edge2-123 and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
f. Select the default # of ports (default is 8).
g. Select VLAN as the VLAN type.
h. Set the VLAN ID to 123.
i. Clear the Customize default policies configuration and click Next.
j. Click Finish.
k. Right-click the edge-cluster-nsx-edge2-123 and click Edit Settings....
l. Click Teaming and failover.
m. Click the down arrow to move Uplink1, Uplink2, and Uplink3 to Unused.
NOTE: Only Uplink4 must be active for this port group.
n. Click OK.
Prerequisites
Ensure you have access to the management VMware vSphere Client.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand PowerFlex Customer - Datacenter.
4. Right-click edge_dvswitch0 and select Add and Manage Hosts.
5. Select Add hosts and click Next.
6. In Manage Physical Adapters, perform the following:
a. Click Assign Uplink for vmnic0.
b. Select lag1-0.
c. Click Assign Uplink for vmnic2.
d. Select lag1-1.
e. Click Next.
7. In Manage VMkernel Adapters, perform the following:
a. Select vmk0 and click Assign portgroup.
b. Select Edge-Cluster-node-mgmt-105 and click X to close.
c. Click Next > Next > Next.
8. In the Ready to Complete screen, review the details, and click Finish.
9. If VMware vSAN is required, create and configure the edge-cluster-vsan-114 VMkernel network adapter distributed port
group:
NOTE: The vMotion VMkernel network adapter is not configured by default. Availability depends on the NSX-T Edge
Gateway VM service level.
Prerequisites
Ensure that you have access to the management VMware vSphere Client.
Steps
1. Log in to the VMware vSphere Client.
2. Click Networking.
3. Expand PowerFlex Customer - Datacenter.
4. Right-click edge_dvswitch1 and select Add and Manage Hosts.
5. Select Add hosts and click Next.
6. In Manage Physical Adapters, perform the following:
a. Click Assign Uplink for vmnic5.
b. Select Uplink 1.
c. Click Assign Uplink for vmnic3.
d. Select Uplink 2.
e. Click Assign Uplink for vmnic7.
f. Select Uplink 3.
g. Click Assign Uplink for vmnic4.
h. Select Uplink 4.
7. Click Next > Next > Next.
8. In the Ready to Complete screen, review the details, and click Finish.
Prerequisites
Apply all VMware ESXi updates before installing or loading hardware drivers.
NOTE: This procedure is required only if the ISO drivers are not at the proper Intelligent Catalog (IC) level.
Steps
1. Log in to the VMware vSphere Client.
2. Click Hosts and Clusters.
3. Locate and select the VMware ESXi NSX-T Edge host that you installed.
4. Select Datastores.
5. Right-click the datastore name and select Browse Files.
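The remaining steps typically copy the driver offline bundle to this datastore and install it from the VMware ESXi shell; a sketch with placeholder paths, not the exact Intelligent Catalog procedure, is:
esxcli software vib install -d /vmfs/volumes/<datastore>/<driver-offline-bundle>.zip
Reboot the host after the installation completes if the driver requires it.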
Prerequisites
● Ensure that the VMware vSphere vCenter Server and the vSphere Client are accessible.
● Allow VLAN 121 on server facing ports in both switches.
Steps
1. Log in to the VMware vSphere Client HTML5.
2. Click Networking.
3. Expand the PowerFlex Customer-Datacenter.
4. Right-click flex_dvswitch0 (Workbook default name).
5. Click Distributed Port Group > New Distributed Port Group.
6. Update the name to edge-cluster-nsx-transport-121 and click Next.
7. Select the default Port binding.
8. Select the default Port allocation.
9. Select the default # of ports (default is 8).
Prerequisites
Both Cisco Nexus access switch ports for the compute ESXi hosts are configured as trunk access. These ports will be
configured as LACP enabled after the physical adapter is removed from each VMware ESXi host.
NOTE: Since VMK0 (VMware ESXi management) is not configured on flex_dvswitch, both vmnics are first
migrated to the LAGs simultaneously and then the port channel is configured. Data connectivity to PowerFlex is lost until
the port channels are brought online with both vmnic interfaces connected to LAGs.
Steps
1. Log in to the VMware vSphere Client.
2. Look at vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute ESXi host, record the physical switch ports to which vmnic3 (switch-B) and vmnic5 (switch-A) are
connected.
a. Click Home, then select Hosts and Clusters and expand the compute cluster.
b. Select the first compute ESXi host in left pane, and then select Configure tab in right pane.
c. Select Virtual switches under Networking.
d. Expand flex_dvswitch.
e. Expand Uplink1, click the ellipsis (…) for vmnic3, and select View Settings.
f. Click the LLDP tab.
g. Record the Port ID (switch port) and System Name (switch).
h. Repeat step 3 for vmnic5 on Uplink 2.
4. Configure LAG (LACP) on the flex_dvswitch within vCenter Server:
a. Click Home, then select Networking.
b. Expand the compute cluster and click flex_dvswitch > Configure > LACP.
c. Click +New to open wizard.
d. Verify that the name is lag1.
e. Verify that the number of ports is 2.
f. Verify that the mode is Active.
g. Change Load Balancing mode to Source and destination IP address and TCP/UDP port.
h. Click OK.
5. Migrate vmnic5 to lag1-0 and vmnic3 to lag1-1 on flex_dvswitch for the compute ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click flex_dvSwitch and select Manage host networking to open wizard.
c. Select Add hosts... and click Next.
d. Click Attached hosts..., select all the compute ESXi hosts, and click OK.
e. Click Next.
f. Click Assign uplink for vmnic5.
g. Click lag1-0.
h. Click Assign uplink for vmnic3.
i. Click lag1-1.
j. Click Next > Next > Next > Finish.
6. Create port-channel (LACP) on switch-A for the compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. SSH to switch-A switch.
b. Create a port channel on switch-A for each compute VMware ESXi host as follows:
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154 (Provided in Workbook)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
mtu 9216
lacp vpc-convergence
no lacp suspend-individual
vpc 40
7. Configure channel-group (LACP) on the switch-A access port (vmnic5) for each compute ESXi host.
The following switch port configuration is an example of a single compute ESXi host.
a. SSH to switch-A switch.
b. Configure the port on switch-A as follows:
int e1/1/1
description to flex-compute-esxi-host01 – vmnic5
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154 (Provided in Workbook)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
8. Create the port-channel (LACP) on switch-B for the compute VMware ESXi host.
The following switch configuration is an example of a single compute VMware ESXi host.
a. SSH to switch-B switch.
b. Create a port channel on switch-B for each compute VMware ESXi host as follows:
interface port-channel40
description to flex-compute-esxi-host01
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154 (Provided in Workbook)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
speed 25000
mtu 9216
lacp vpc-convergence
no lacp suspend-individual
vpc 40
9. Configure the channel-group (LACP) on switch-B access port (vmnic3) for each compute ESXi host.
The following switch port configuration is an example of a single compute ESXi host.
a. SSH to switch-B switch.
b. Configure the port on switch-B as follows:
int e1/1/1
description to flex-compute-esxi-host01 - vmnic3
switchport
switchport mode trunk
switchport trunk allowed vlan 151,152,153,154 (Provided in Workbook)
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
speed 25000
channel-group 40 mode active
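After both switches are configured, the port channel and vPC state can be verified from the NX-OS CLI as an optional check:
show port-channel summary
show vpc brief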
10. Update teaming and policy to be route based on physical NIC load for each port group within flex_dvswitch:
a. Click Home and select Networking.
b. Expand flex_dvSwitch to display all port groups.
c. Right-click flex-data-1 and select Edit Settings.
d. Click Teaming and failover.
e. Move lag1 to be Active and both Uplink1 and Uplink2 to Unused.
f. Change Load Balancing mode to Route based on IP hash.
g. Repeat steps 10b to 10f for each remaining port group.
Prerequisites
Both Cisco Nexus access switch ports for the compute VMware ESXi hosts are configured with LACP enabled. These ports will
be configured as trunk access after the removal of the physical adapter from each VMware ESXi host.
Steps
1. Log in to the VMware vSphere Client.
2. Look at the vCenter and physical switches to ensure that both ports across all hosts are up.
3. For each compute VMware ESXi host, record the physical switch port to which vmnic4 (switch-B) and vmnic6 (switch-A)
connect.
a. Click Home > Hosts and Clusters and expand the compute cluster.
b. Select the first compute VMware ESXi host in the left pane, and then select Configure tab in the right pane.
c. Select Virtual switches under Networking.
d. Expand Cust_DvSwitch.
e. Expand lag1, click the ellipsis (…) for vmnic4, and select View Settings.
f. Click the LLDP tab.
g. Record the port ID (switch port) and system name (switch).
h. Repeat step 3 for vmnic6 on lag1-1.
4. Repeat steps 2 and 3 for each additional compute VMware ESXi host.
5. Create a management distributed port group for cust_dvswitch as follows:
a. Right-click Cust_DvSwitch.
b. Click Distributed Port Group > New Distributed Port Group.
c. Update the name to hyperconverged-node-mgmt-105-new and click Next.
d. Select the default Port binding.
e. Select the default Port allocation.
config t
interface ethernet 1/x
no channel-group
c. Repeat steps 6a and 6b for each switch port for the remaining compute VMware ESXi hosts.
7. Delete vmnic6 from lag1:
a. Click Home > Hosts and Clusters and expand the PowerFlex data center.
b. Select the PowerFlex hyperconverged node or PowerFlex compute-only node and click Configure and Virtual
Switches.
c. Select cust_DvSwitch and click Manage Physical Adapters.
d. Select vmnic6 and click X to delete.
e. Click OK.
8. Migrate vmnic9 to Uplink2 and VMK0 to hyperconverged-node-mgmt-105-new on cust_dvswitch for each compute VMware
ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click Cust_DvSwitch and select Add and Manage hosts to open the wizard.
● Select Manage host networking and click Next.
● Click Attached hosts..., select all the compute VMware ESXi hosts, and click OK.
● Click Next.
● Select Uplink2 for vmnic0 and click OK.
● Click Next.
● Select vmk0 (esxi-management) and click Assign port group.
● Select hyperconverged-node-mgmt-105-new and click OK.
● Click Next > Next > Next > Finish.
9. Remove channel-group from the port interface (vmnic4) on switch-A for each compute VMware ESXi host as follows:
a. SSH to switch-A switch.
b. Enter the following switch commands to configure trunk access for the VMware ESXi host:
config t
interface ethernet 1/x
no channel-group
c. Repeat steps 9a and 9b for each switch port for the remaining compute VMware ESXi hosts.
10. Add vmnic4 to Uplink1 on cust_dvswitch for each compute VMware ESXi host as follows:
a. Click Home, then select Networking and expand the PowerFlex data center.
b. Right-click Cust_DvSwitch and select Add and Manage Hosts to open the wizard.
● Select Manage host networking and click Next.
● Click Attached hosts..., select all the compute VMware ESXi hosts, and click OK.
● Click Next.
● For each VMware ESXi host, select vmnic4 and click Assign uplink.
● Select Uplink1 and click OK.
● Click Next > Next > Next > Finish.
11. Delete the port group hyperconverged-node-mgmt-105 on cust_dvswitch:
a. Click Home, select Networking, and expand the PowerFlex data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click hyperconverged-node-mgmt-105 and click Delete.
d. Click Yes to confirm deletion of the distributed port group.
12. Delete vmnic2 from lag1:
a. Click Home > Hosts and Clusters and expand the PowerFlex data center.
b. Select the PowerFlex hyperconverged node or PowerFlex compute-only node and click Configure and Virtual
Switches.
c. Select cust_DvSwitch and click Manage Physical Adapters.
d. Select vmnic2 and click X to delete.
e. Click OK.
13. Rename the port group hyperconverged-node-mgmt-105-new on cust_dvswitch:
a. Click Home, select Networking, and expand the PowerFlex data center.
b. Expand Cust_DvSwitch to view the distributed port groups.
c. Right-click hyperconverged-node-mgmt-105-new and click Rename.
d. Enter hyperconverged-node-mgmt-105 and click OK.
14. Update teaming and policy to be route based on physical NIC load for port group flex-vmotion-106:
a. Click Home, select Networking, and expand the PowerFlex compute data center.
b. Expand cust_dvswitch to view the distributed port groups.
c. Right-click pfcc-vmotion-106 and click Edit Settings....
d. Click Teaming and failover.
e. Move both Uplink1 and Uplink2 to be Active and lag1 to Unused.
f. Change Load Balancing mode to Route based on originating virtual port.
g. Repeat steps 14c through 14f for the remaining port groups on cust_dvswitch.
h. Select the cust_dvswitch and go to Configure > LACP.
i. Select the required lag and click Remove.
Prerequisites
● Before adding this service in PowerFlex Manager, verify that the NSX-T Data Center is configured on the PowerFlex
hyperconverged or compute-only nodes.
● Ensure that the iDRAC of nodes, vCenter, and switches (applicable for full networking) are discovered in PowerFlex
Manager.
● Before adding a VMware NSX-T service, remove (do not delete) the PowerFlex hyperconverged service being used for
NSX-T.
● After adding an NSX-T node, if you are using PowerFlex Manager, run Update Service Details to represent the appropriate
environment. If you are using VMware NSX-T in a PowerFlex Manager service, the service goes into lifecycle mode.
Steps
1. Log in to PowerFlex Manager.
2. From Getting Started, click Define Networks.
a. Click + Define and do the following:
● Firmware and software compliance: Select the Intelligent Catalog (IC) version.
● Who should have access to the service deployed from this template?: Leave as default.
c. Click Next.
d. On the Network Information page, select Full Network Automation/Partial Network Automation, and click Next.
NOTE: For partial network automation, you must finish the complete network configuration required for NSX-T.
Consider the configuration given in this document as a reference.
f. Click Next.
g. On the OS Credentials page, select the OS credentials for each node, and click Next.
h. On the Inventory Summary page, review the summary and click Next.
i. On the Networking Mapping page, verify that the networks are aligned with the correct dvswitch.
j. On the Summary page, review the summary and click Finish.
4. Verify that PowerFlex Manager recognizes that NSX-T is configured on the nodes:
a. Click Services.
b. Select the hyperconverged or compute-only service.
c. Verify that a banner appears under the Service Details tab, notifying that NSX-T is configured on a node and
is preventing some features from being used. If the banner does not appear, check whether you selected the
wrong service or whether NSX-T is not configured on the hyperconverged or compute-only nodes.
12
Optional deployment tasks
This section contains miscellaneous deployment activities that may not be required for your deployment.
Requirements
● PowerFlex Manager must be deployed and configured.
● Replication VLANs must be created on the switches and defined in PowerFlex Manager.
Workflow summary
● Create, publish and deploy storage with replication or hyperconverged template (local and remote)
● Create and copy certificates
● Add peer systems
● Create replication consistency groups
Steps
1. Log in to PowerFlex Manager.
2. Click Lifecycle > Templates > Create.
3. Click Clone an existing PowerFlex Manager template.
4. Click Sample Templates.
5. From the Template to be cloned field, click Storage - Replication and click Next.
6. Enter a template name.
7. Select or create a new category and enter a description.
8. Select the appropriate compliance version and the appropriate security group and click Next.
9. Select the matching customer networks for each category.
10. Under OS Settings:
a. Select or create (+) the OS credential for the root user.
b. Under Use Compliance File Linux Image, select Use Compliance File Linux Image (or custom if requested).
11. Under PowerFlex Gateway Settings, select the appropriate PowerFlex gateway. The default is block-legacy-gateway.
12. Under Hardware Settings/Node Pool Settings, select the pool that contains the Replication nodes. The default is Global.
Click Finish.
13. Under Node Settings:
a. Click Node > Modify and change node count as necessary and select Continue.
b. Add NTP and time zone information and click Save.
Steps
1. Click Lifecycle > Templates.
2. Select the template created in the previous section.
3. Click Deploy Resource Group.
4. Enter the resource group name and a brief description.
5. Select the IC version.
6. Select the administration group for this resource.
7. Click Next.
8. Under Deployment Settings:
a. Auto generate or fill out the following fields:
● Protection domain name
● Protection domain name template
● Storage pool name
● Number of storage pools
● Storage pool name template
b. Let PowerFlex select the IP addresses or manually provide the MDM virtual IP addresses.
c. Let PowerFlex select the IP addresses or manually provide the storage-only nodes OS IP addresses.
d. Manually select each storage-only node by serial number or iDRAC IP address, or let PowerFlex select the nodes
automatically from the selected node pool.
e. Click Next.
9. Click Deploy Now > Next.
10. Review the summary screen and click Finish.
Deployment activity can be monitored on right panel under Recent Activity.
Steps
1. Log in to PowerFlex Manager.
2. Click Lifecycle > Templates > Create.
3. Click Clone an existing PowerFlex Manager template.
4. Click Sample Templates.
5. From the Template to be cloned field, click Hyperconverged - Replication and click Next.
6. Enter a template name.
7. Select or create a new category and enter a description.
8. Select the appropriate compliance version and the appropriate security group and click Next.
9. Select the matching customer networks for each category.
10. Under OS Settings:
Steps
1. Click Lifecycle > Templates.
2. Select the template created in the previous section.
3. Click Deploy Resource Group.
4. Enter the resource group name and a brief description.
5. Select the IC version.
6. Select the administration group for this resource.
7. Click Next.
8. Under VMware cluster settings, auto generate or fill out the following fields:
● Data center name
● Cluster name
● Storage pool name
● Number of storage pools
● Storage pool name template
9. Under PowerFlex Cluster Settings:
a. Auto generate or fill out the following fields:
● Protection domain name
● Protection domain name template
Prerequisites
● Deployed storage-only or hyperconverged with replication resource groups at each participating site
● System ID of each participating system
Steps
1. Log in to the primary MDM for each site using SSH to generate, copy and add certificates.
2. Type scli --login_certificate --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12, and
after the password prompt, enter the certificate password.
3. To extract the certificate for each site (source and destination), type the following at each site: scli --extract_root_ca
--certificate_file /tmp/site-x.crt.
4. Copy the extracted certificate of the source (primary MDM /tmp folder) to destination (primary MDM /tmp folder) using
SCP.
5. Copy the extracted certificate of the destination (primary MDM /tmp folder) to source (primary MDM /tmp folder) using
SCP.
6. To add the copied certificate to the source and each destination, type scli --add_trusted_ca --
certificate_file /tmp/site-b.crt --comment site-x_crt.
7. To verify the new certificate, type scli --list_trusted_ca.
Prerequisites
● Peer system must be configured
● Source volumes to be replicated
Steps
1. Log in to PowerFlex Manager.
● Auto Provisioning (default): This option is relevant if there are no volumes at the target system. Select the source volumes
to protect. The target volumes are automatically created.
● Manual Provisioning: This option is relevant if there are volumes at the target system. Select the source volumes to protect.
Select the same size volume at the target system to create a pair between the volumes.
9. Click Next.
10. Select the source volumes.
11. Select Target Volume as thin (default) or thick.
12. Select the target storage pool.
13. Click Add Pair.
14. Click Next.
15. Optionally, to map a host on the target side:
a. Select the target volume.
b. Select the target host.
c. Click Map.
16. Click Next.
17. Select Add and Activate or Add (and activate separately).
NOTE: Add and Activate begins replication immediately.
The Add function creates the RCG but does not start replication. Replication can be deferred until it is manually activated.
After the volumes begin replication, the final status should be OK, and the consistency state changes to Consistent after the
initial volume copy completes.
Prerequisites
● Deployed PowerFlex storage nodes or PowerFlex hyperconverged nodes with replication resource group at each site
● Certificates generated and copied to each participating system
Steps
1. Log in to PowerFlex Manager.
2. Select Protection > Peer System.
3. Click + Add Peer System.
4. Enter the following:
● Peer system name
● ID
● IP addresses
5. Click Add IP for each additional replication IP in the target Replication Group and click Add.
After a few moments the target system should show the state as Connected.
Prerequisites
Ensure that you have the following information:
● Primary and secondary MDM IP address
● PowerFlex cluster credentials
Steps
1. Log in to the primary MDM: scli --login_certificate --p12_path /opt/emc/scaleio/mdm/cfg/
cli_certificate.p12
2. Authenticate with the PowerFlex cluster using the credentials provided.
3. Type scli --query_all_sdc and record all the connected SDCs (any of the identifiers: NAME, GUID, ID, or IP).
4. For each SDC in your list, use the identifier recorded to generate and record a CHAP password. Type scli --
generate_sdc_password --sdc_id <id> or --sdc_ip <ip> or --sdc_name <name> or --sdc_guid
<guid> --reason "CHAP setup".
This password is specific to that SDC and cannot be reused for subsequent SDC entries.
For example:
scli --generate_sdc_password --sdc_ip 172.16.151.36 --reason "CHAP setup"
Sample output:
Prerequisites
● Generate the pre-shared passwords for all the storage data clients to be configured.
● Ensure that you have the following information:
○ Primary and secondary MDM IP addresses or names
○ Credentials to access all VMware ESXi hosts running storage data clients
Steps
1. SSH into the VMware ESXi host using the provided credentials.
2. Type esxcli system module parameters list -m scini | grep Ioctl to list the host's current scini
parameters.
3. Using ESXCLI, configure the driver with the existing and new parameters. To specify multiple IP addresses, use a semicolon
(;) between the entries, as shown in the following example. Add the additional data IP addresses (data3 and data4) if required.
NOTE: There are spaces between the Ioctl parameter fields and the opening quotes. The example is entered on a single
line.
NOTE: Only one IP address is needed for the command to identify the MDM to modify.
Linux:
/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM IP> --port 6611 --password
<secret> --file /etc/emc/scaleio/drv_cfg.txt
Iterate through the relevant SDCs, using the command examples along with the recorded information.
Prerequisites
● Make sure that all storage data clients are running PowerFlex, and are configured with their appropriate CHAP password.
Any older or unconfigured storage data client will be disconnected from the system when authentication is turned on.
● Ensure that you have the following information:
○ Primary MDM IP address
○ Credentials to access the PowerFlex cluster
Steps
1. SSH into the primary MDM.
2. Type scli --login --p12_path <P12_PATH> --p12_password <P12_PASS> to log in to the PowerFlex cluster
using the provided credentials.
3. Type scli --set_sdc_authentication --enable to enable storage data client authentication feature.
4. Type scli --check_sdc_authentication_status to verify that the storage data client authentication and
authorization is on, and that the storage data clients are connected with passwords.
Sample output:
5. If the number of storage data clients does not match, or any storage data clients are disconnected, list the disconnected
storage data clients and then disable storage data client authentication by typing the following commands:
scli --query_all_sdc | grep "State: Disconnected"
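If authentication must be turned back off while the disconnected storage data clients are investigated, the counterpart of the enable command in step 3 can be used (assuming the standard --disable form of the option):
scli --set_sdc_authentication --disable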
6. Recheck the disconnected storage data clients to make sure that they have the proper configuration applied. If necessary,
regenerate their shared password and reconfigure the storage data client. If you are unable to resolve the storage data client
disconnection, leave the feature disabled and contact Dell Technologies support as needed.
NOTE: PowerFlex Manager does not provide installation or management support for Windows compute-only nodes.
Steps
1. Connect to the iDRAC, and launch a virtual remote console.
2. Click Menu > Virtual Media > Connect Virtual Media > Map Device > Map CD/DVD.
3. Click Choose File and browse and select the customer provided Windows Server 2016 or 2019 DVD ISO and click Open.
4. Click Map Device.
5. Click Close.
6. Click Boot and select Virtual CD/DVD/ISO. Click Yes.
7. Click Power > Reset System (warm boot) to reboot the server.
The host boots from the attached Windows Server 2016 or 2019 virtual media.
Steps
1. Select the desired values for the Windows Setup page, and click Next.
NOTE: The default values are US-based settings.
a. Download DELL EMC Server Update Utility, Windows 64 bit Format, v.x.x.x.iso file from Dell
Technologies Support.
b. Map the driver CD/DVD/ISO through iDRAC, if the installation requires it.
c. Connect to the server as the administrator.
d. Open and run the mapped disk with elevated permission.
e. Select Install, and click Next.
f. Select I accept the license terms and click Next.
g. Select the check box beside the device drives, and click Next.
h. Click Install, and Finish.
i. Close the window to exit.
Steps
1. Log in to the Dell Technologies Support site, and click Product Support under the Support tab.
2. Find the target server model by looking up the service tag, product ID, or the model (for example, PowerEdge R740).
3. Click the Drivers & Downloads tab and select Drivers for OS Deployment for the category.
4. Download the Dell OS Driver Pack.
5. Copy the downloaded driver pack to the new Windows host (or download on the host itself).
6. Open the folder where the driver pack is downloaded and execute the file.
Configure networks
Perform this procedure to configure the networks by creating new teams and interfaces.
Steps
1. Create a new team and assign the name as Team0:
a. Open the server manager, and click Local Server > NIC teaming.
b. In the NIC teaming window, click Tasks > New Team.
c. Enter name as Team0, and select the appropriate network adapters.
d. Expand the Additional properties, and select LACP in teaming mode and set load-balancing mode as Dynamic, and
standby adapter as None (all adapters active).
e. Click OK to save the changes.
2. Create a new interface in Team0:
a. Select your existing NIC Team Team0 in the Teams list box, and select the Team Interfaces tab in the Adapters and
Interfaces list box.
b. Click Tasks, and click Add Interface.
c. In the New team interface dialog box, type the name as flex-node-mgmt-<vlanid>.
d. Assign VLAN ID (105) to the new interface in the VLAN field, and click OK.
e. From the network management console, right-click the newly created network interface controller, and click Properties
> Internet Protocol Version 4 (TCP/IPv4).
3. If the customer is using Microsoft Cluster and wants to use live migration, repeat step 2 for flex-livemigration.
4. Create a new team and assign the name as Team1:
a. Open the server manager, and click NIC teaming.
b. In the NIC teaming window, click Tasks > New Team.
c. Enter name as Team1, and select the appropriate network adapters.
d. Expand the Additional properties, and select LACP in teaming mode and set load-balancing mode as Dynamic, and
standby adapter as None (all adapters active).
e. Click OK to save the changes.
5. Create a new interface in Team1
a. Select your existing NIC Team Team1 in the Teams list box, and select the Team Interfaces tab in the Adapters and
Interfaces list box.
b. Click Tasks, and click Add Interface.
c. In the New team interface dialog box, type the name as flex-data1-<vlanid>.
d. Assign VLAN ID (151) to the new interface in the VLAN field, and click OK.
e. From the network management console, right-click the newly created network interface controller, and click Properties
> Internet Protocol Version 4 (TCP/IPv4).
6. Repeat step 5 for flex-data2-<vlanid>, flex-data3-<vlanid>, and flex-data4-<vlanid> with VLANs 152, 153, and 154
respectively.
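If a scripted alternative to Server Manager is preferred, the same team and VLAN interface can be sketched in Windows PowerShell; the adapter names below are placeholders for the environment:
New-NetLbfoTeam -Name Team1 -TeamMembers "NIC3","NIC4" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
Add-NetLbfoTeamNic -Team Team1 -VlanID 151 -Name "flex-data1-151"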
Steps
1. Windows Server 2016 or 2019:
a. Press Windows key+R on your keyboard, type control and click OK.
The All Control Panel Items window opens.
b. Click System and Security > Windows Firewall .
c. Click Turn Windows Defender Firewall on or off.
d. Turn off Windows Firewall for both private and public network settings, and click OK.
2. Windows PowerShell:
a. Click Start, type Windows PowerShell.
b. Right-click Windows PowerShell, click More > Run as Administrator.
c. Type Set-NetFirewallProfile -profile Domain, Public, Private -enabled false in the Windows
PowerShell console.
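To confirm that all profiles are disabled, an optional check is:
Get-NetFirewallProfile | Select-Object Name, Enabled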
Steps
1. Click Start > Server Manager.
2. In Server Manager, on the Manage menu, click Add Roles and Features.
3. On the Before you begin page, click Next.
4. On the Select installation type page, select Role-based or feature-based installation, and click Next.
5. On the Select destination server page, click Select a server from the server pool, and click Next.
6. On the Select server roles page, select Hyper-V.
An Add Roles and Features Wizard page opens, prompting you to add features to Hyper-V.
7. Click Add Features. On the Features page, click Next.
8. Retain the default selections/locations on the following pages, and click Next:
● Create Virtual Switches
● Virtual Machine Migration
● Default stores
9. On the Confirm installation selections page, verify your selections, and click Restart the destination server
automatically if required, and click Install.
10. Click Yes to confirm automatic restart.
Steps
1. Click Start, type Windows PowerShell.
2. Right-click Windows PowerShell, and select Run as Administrator.
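The role is typically enabled with the following command (a sketch; the node restarts when the installation completes):
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart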
Steps
1. Right-click Start > Run.
2. Enter SystemPropertiesRemote.exe and click OK.
3. Select Allow remote connection to this computer.
4. Click Apply > OK.
Steps
1. Get the Windows *.msi files from the Intelligent Catalog. The Intelligent Catalog is available at the Dell Technologies
Support.
2. Log in to the Windows compute-only node with the administrative account.
3. Install and configure SDC:
NOTE: Make note of the MDM VIPs before installing the SDC component.
Map volumes
Perform this procedure to map a PowerFlex volume to a Windows-based compute-only node.
Steps
1. On the menu bar, click Block > Volumes.
2. In the list of volumes, select one or more volumes, and click Mapping > Map.
3. A list of the hosts that can be mapped to the selected volumes is displayed. If a volume is already mapped to a host, only
hosts of the same type, NVMe or SDC, are listed. If the volume is not mapped to a host, click NVMe or SDC to set the type
of hosts to be listed.
4. In the Map Volume dialog box, select one or more hosts to which you want to map to the volumes.
5. Click Map.
6. Verify that the operation finished successfully, and click Dismiss.
7. Log in to the Windows Server compute-only node with the administrative account.
8. To open the disk management console, perform the following steps:
a. Press Windows+R.
b. Enter diskmgmt.msc and press Enter .
9. Rescan the disk and set the disks online:
a. Click Action > Rescan Disks.
b. Right-click each Offline disk, and click Online.
10. Right-click the disks selected in the previous step, and click Initialize disk > OK.
After initialization, the disk appears online.
11. Right-click Unallocated, and select New Simple Volume.
12. Select default, and click Next.
13. Assign the drive letter.
14. Select default, and click Next.
15. Click Finish.
Prerequisites
If you do not have Internet connectivity, you might need to activate by phone.
Steps
1. To activate the license online:
a. Using the administrator credentials, log in to the target Windows Server
b. When the main desktop view appears, click Start and type Run.
c. Type slui 3 and press Enter .
d. Enter the customer provided Product key and click Next.
If the key is valid, Windows Server xxxx is successfully activated.
If the key is invalid, verify that the Product key entered is correct and try the procedure again.
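If online activation is not possible, phone activation can be started the same way; slui 4 opens the phone activation wizard:
slui 4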
Steps
The supported scenarios are:
● An existing PowerFlex hyperconverged or PowerFlex storage-only service is available: Deploy the PowerFlex file cluster,
provided the resource group has a dedicated storage pool with 5.5 TB of space available.
● The existing resource group (PowerFlex hyperconverged or PowerFlex storage-only) is already in use: Migrate the existing
data or create a new storage pool with 5.5 TB of space for deploying the PowerFlex file cluster. If there are multiple
protection domains and storage pools, select a protection domain and storage pool without volumes and with 5.5 TB of space
available on the storage pool.
NOTE:
● Migrate data using any traditional migration method; it is the customer's responsibility to perform the migration.
● PowerFlex file deployment is supported only using PowerFlex Manager, and a minimum of two PowerFlex file nodes is
required.
To perform PowerFlex file deployment, see PowerFlex file deployment.
Steps
After you create a three-node PowerFlex cluster that contains active, passive, and witness nodes, different configuration paths
are available. Your selection depends on your existing configuration.
VMware vCenter HA requirements:
● Recommended minimum of three VMware ESXi hosts
● Validate the flex-vcsa-ha networking and VMware vCenter port groups have been configured
See the VMware vSphere Product Documentation (link) for additional requirements and configuration of VMware vCenter HA.
13
Post-deployment tasks
Enabling SupportAssist
● There are two options to configure events and alerts:
○ Connect directly
○ Connect using Secure Connect Gateway
● If you connect directly, only the call home option is available
● If you connect through Secure Connect Gateway, all options through Secure Connect Gateway are enabled
● You do not need to deploy and configure Secure Connect Gateway if you choose ESE direct
Related information
Enable SupportAssist
Prerequisites
● Download the required version of secure connect gateway from the Dell support site.
● You must have VMware vCenter Server running on the virtual machine on which you want to deploy secure connect
gateway. Deploying secure connect gateway directly on a server running VMware vSphere ESXi is not supported.
Steps
1. Download and extract the OVF file to a location accessible by the VMware vSphere Client.
2. On the right pane, click Create/Register VM.
3. On the Select Creation Type page, select Deploy a virtual machine from an OVF or an OVA file and click Next.
4. On the Select OVF and VMDK files page, enter a name for the virtual machine, select the OVF and VMDK files, and click
Next.
NOTE: If there is more than one datastore on the host, the datastores are displayed on the Select storage page.
5. Select the location to store the virtual machine (VM) files and click Next.
6. On the License agreements page, read the license agreement, click I agree, and click Next.
7. On the Deployment options page, perform the following steps:
a. From the Network mappings list, select the network that the deployment template must use.
b. Select a disk provisioning type.
c. Click Next.
8. On the Additional settings page, enter the following details and click Next.
● Domain name server
● Hostname
● Default gateway
● Network IPv4 and IPv6
● Time zone
● Root password
NOTE: Ensure that the root password consists of eight characters with at least one uppercase and one lowercase
letter, one number, and one special character. Use this root password to log in to secure connect gateway for the first
time after the deployment.
9. On the Ready to complete page, verify the details that are displayed, and click Finish.
A message is displayed after the deployment is complete and the virtual machine is powered on.
NOTE: Wait 15 minutes before you log in to the secure connect gateway user interface.
Configuring the initial setup and generating the access key and pin
Use this section to generate the access key and pin to register with Secure Connect Gateway and the Dell Support site.
Use this link to generate the Dell Support account and access key and pin: https://www.dell.com/support/kbdoc/en-us/
000180688/generate-access-key-and-pin-for-dell-products?lang=en.
Customers should work with field engineer support to get the SITE ID that is required while generating the access key and pin.
Steps
1. Go to https://<hostname (FQDN) or IP address>:5700.
2. Enter the username root and the password that was created while deploying the VM.
3. Create the admin password:
a. Enter a new password.
b. Confirm the password.
4. Accept the terms and conditions.
5. Provide the access key and pin generated in Configuring the initial setup and generating the access key and pin.
6. Enter the Primary Support Contacts information.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Events and alerts.
3. Click Notification Policies.
4. On the Policies tab, in the grayed-out section, click Configure Now.
5. Accept the license and telemetry agreement on the Connect SupportAssist page and click Next.
6. Choose the connection type Connect Directly.
NOTE: This option connects PowerFlex Manager directly to SupportAssist. The call home feature works with a direct
connection. The proxy setting is not supported.
Prerequisites
Configure the secure connect gateway.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Events and alerts.
3. Click Notification Policies.
4. On the Policies tab, in the grayed-out section, click Configure Now.
5. Accept the license and telemetry agreement on the Connect SupportAssist page and click Next.
6. Choose the connection type Connect via Gateway.
NOTE: Connecting through the gateway registers PowerFlex Manager with secure connect gateway and SupportAssist.
The proxy setting can be enabled from here.
18. To activate the policy now, click Configure Now and enable the policy by making it active.
Once the policy is active, it will remove from grayed out mode to available and active mode.
Steps
1. Go to Settings > Events and Alerts > Notification Policies.
2. From the Sources pane, click Add.
3. Enter a source name and description.
4. Configure either SNMP or syslog forwarding and click Submit > Dismiss:
● For SNMPv2c:
a. Enter the community string by which the source forwards traps to destinations.
b. Enter the same community string for the configured resource. During discovery, if you selected PowerFlex Manager to automatically configure iDRAC nodes to send alerts to PowerFlex Manager, enter the community string that is used in that credential here (a trap test example follows this procedure).
● For SNMPv3:
a. Enter the username, which identifies the ID to which traps are forwarded on the network management system.
b. Select a security level from the following:
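For the SNMPv2c option in step 4, you can confirm that traps sent with the configured community string reach PowerFlex Manager by sending a test trap from any host that has the Net-SNMP tools installed. This is an optional sketch; 192.168.100.50 and flexsnmp are placeholders for your PowerFlex Manager IP address and community string, and the OID sent is the standard coldStart trap.
# Send a coldStart test trap to PowerFlex Manager over SNMPv2c
snmptrap -v 2c -c flexsnmp 192.168.100.50:162 '' 1.3.6.1.6.3.1.1.5.1
If the trap does not appear in the PowerFlex Manager alert views, verify that the community string matches on both the source and the configured resource credential.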
Configure a destination
Define a location where event and alert data that has been processed by PowerFlex Manager should be sent.
Steps
1. Click Settings > Events and Alerts > Notification Policies.
2. From the Destinations pane, click Add.
3. From the Destinations page:
4. Click Finish.
Steps
1. To access the wizard from the Resource Groups page:
Steps
1. Log in to PowerFlex Manager using your credentials.
2. On the menu bar, click Lifecycle > Resource Groups.
3. Select the resource group that you are looking for and verify the following in the Resource Group Information section on the right.
4. Verify the status:
Steps
1. Log in to PowerFlex Manager using your credentials.
2. Select the Block tab on the menu, and then click the appropriate tabs to view and verify the following details (a CLI spot check follows this list):
● Protection domain
● Fault sets
● SDS
● Storage pools
● Acceleration pools
● Devices
● Volumes
● NVMe targets
● Hosts
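If you also have command-line access to the primary MDM, the same block components can be spot-checked with the PowerFlex scli utility. This is an optional sketch and assumes that you are already logged in to scli; the commands below are standard queries and take no environment-specific values.
# Query all protection domains, SDSs, storage pools, and devices
scli --query_all
# Query all volumes
scli --query_all_volumes
Compare the protection domains, SDS counts, storage pools, and volumes that scli reports with what the Block tab shows.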
Prerequisites
Ensure that you have UI access and the appropriate user permissions to export the report.
Steps
1. On the menu bar, click Resources.
2. Click Export Report.
3. Select either Export Compliance PDF Report or Export Compliance CSV Report from the drop-down list.
The compliance report is downloaded.
Prerequisites
Ensure that you have UI access and the appropriate user permissions to export the report.
Steps
1. On the menu bar, click Resources.
2. Click Export Report.
3. Select Export Configuration PDF Report from the drop-down list.
The configuration report downloads. The report shows the following kinds of information:
Steps
1. The Backup and Restore page displays information about the last backup operation that was performed on the PowerFlex
Manager virtual appliance. Information in the Settings and Details section applies to both manual and automatically
scheduled backups and includes the following:
● Last backup date
● Last backup status
● Backup directory path to an NFS or a CIFS share (a reachability check example follows step 2)
● Backup directory username
2. The Backup and Restore page also displays information about the status of automatically scheduled backups (enabled or disabled). On this page, you can:
● Manually start an immediate backup
● Restore an earlier configuration
● Edit general backup settings
● Edit automatically scheduled backup settings
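Before pointing backups at an NFS or CIFS share, it can help to confirm that the share is reachable from the network where PowerFlex Manager runs. This is an optional sketch using standard Linux tools; backup-nfs.example.com, backup-cifs.example.com, and svc_backup are placeholders for your backup server names and service account.
# List the NFS exports offered by the backup server
showmount -e backup-nfs.example.com
# List the CIFS shares offered by the backup server (prompts for the account password)
smbclient -L //backup-cifs.example.com -U svc_backup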
Steps
1. Connect to the Cisco Nexus or Dell PowerSwitch switch through a console cable, Telnet, or SSH using admin credentials, and type: copy running-config scheme://server/[url/]filename
For the scheme argument, you can enter tftp:, ftp:, scp:, or sftp:.
The server argument is the address or name of the remote server, and the URL argument is the path to the source file on
the remote server. The server, URL, and filename arguments are case sensitive.
For example:
switch# copy running-config
tftp://10.10.10.1/sw1-run-config.bak
2. To restore the network configuration, connect to the Cisco Nexus or Dell PowerSwitch switch through a console cable, Telnet, or SSH using admin credentials, and type: copy scheme://server/[url/]filename running-config
For the scheme argument, you can enter tftp:, ftp:, scp:, or sftp:.
The server argument is the address or name of the remote server, and the URL argument is the path to the source file on
the remote server. The server, URL, and filename arguments are case sensitive.
For example:
switch# copy tftp://10.10.10.1/sw1-run-config.bak running-config
Description: The backup and restore solution
Link: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vcenter.install.doc/GUID-3EAED005-B0A3-40CF-B40D-85AD247D7EA4.html
Steps
1. Copy the management certificate to the root location; type: cp /opt/emc/scaleio/mdm/cfg/mgmt_ca.pem /
2. Generate the login certificate; type: scli --generate_login_certificate --management_system_ip <MNO_IP> --username <USER> --password <PASS> --p12_path <P12_PATH> --p12_password <P12_PASS> --insecure
Where:
● management_system_ip is the IP address of PowerFlex Manager.
● username is the username that is used to log in to PowerFlex Manager.
● password is the password that is used to log in to PowerFlex Manager.
● p12_path <P12_PATH> is optional. If it is not provided, the file is created in the user's home directory.
● p12_password is the password for the p12 bundle. The same password must be provided when generating the certificate and when logging in.
3. Add the certificate; type: cd /opt/emc/scaleio/mdm/cfg; scli --add_certificate --certificate_file mgmt_ca.pem
4. Log in to PowerFlex using the certificate; type: scli --login --p12_path <P12_PATH> --p12_password <P12_PASS>
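The following is a worked example of steps 1 through 4 with placeholder values; 192.168.100.10 is a hypothetical PowerFlex Manager IP address, and the username, passwords, and p12 path are illustrative only. Substitute your own values.
# Copy the management certificate to the root location
cp /opt/emc/scaleio/mdm/cfg/mgmt_ca.pem /
# Generate the login certificate against PowerFlex Manager
scli --generate_login_certificate --management_system_ip 192.168.100.10 --username admin --password 'Admin_Passw0rd!' --p12_path /root/pfxm_login.p12 --p12_password 'P12_Passw0rd!' --insecure
# Add the management certificate and log in with the p12 bundle
cd /opt/emc/scaleio/mdm/cfg; scli --add_certificate --certificate_file mgmt_ca.pem
scli --login --p12_path /root/pfxm_login.p12 --p12_password 'P12_Passw0rd!'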