Flex Rack Field Implementation 4x
January 2023
Rev. 1.2
Internal Use - Confidential
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2022 - 2023 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Chapter 1: Introduction
PowerFlex VLANs
PowerFlex management controller datastore and virtual machine details
Chapter 12: Connecting to the customer network using Cisco Nexus switches
Cisco Nexus management switch networking overview
Cisco Nexus management switch network connectivity scenarios
PowerFlex rack scenario A: Layer 2 network connectivity - Cisco Nexus management switch
PowerFlex rack scenario B: Layer 2 network connectivity - Cisco Nexus management aggregation switch
PowerFlex rack scenario C: Layer 3 network connectivity - Cisco Nexus management aggregation switch
Cisco Nexus access switch networking overview in a hyperconverged deployment
Cisco Nexus aggregation switch networking overview for a storage-only PowerFlex rack
PowerFlex rack Layer 2 network connectivity — Cisco Nexus access switch
Verifying NTP
Post-network activity setup
Clear the temporary ports and SVIs
Cisco Smart Account communication
Configuring Smart Call Home for Cisco Nexus switches
Modifying a destination
Add a notification policy
Modify a notification policy
Delete a notification policy
1
Introduction
This document describes how to bring up the system after it is shipped from the factory to the customer location. It also provides details about configuring alerts, verifying the PowerFlex rack setup, and updating the licenses at a customer site.
The target audience for this document is Dell Technologies Services and Dell partners who are installing and configuring
PowerFlex rack at a customer site.
The person running these procedures must be familiar with the relevant virtualization, storage, and networking fundamentals,
and with the major components of PowerFlex rack.
CAUTION: This guide contains Dell proprietary and confidential intellectual property. Protect any copies that you print. Printed copies can quickly become out of date.
PowerFlex VLANs
Specific networks are required for deployment. Each network requires enough IP addresses allocated for the deployment and future expansion. If the access switches are supported by PowerFlex Manager, PowerFlex Manager configures the switch ports. Manually configure the switch ports for the PowerFlex management controller and for services discovered in lifecycle mode.
The following table lists VLAN descriptions:
● Example VLAN: Lists the VLANs that are used in the deployment.
● Networks or VLANs: Network names or VLANs defined by PowerFlex Manager.
● Description: Describes each network or VLAN.
● Where configured: Indicates which resources have interfaces that are configured on the network or VLAN. The resource
definitions are:
○ PowerFlex node: PowerFlex hyperconverged node
○ PowerFlex Manager: Deploys and manages the PowerFlex rack.
○ Access switches: PowerFlex Manager configures the node facing ports of these switches. You configure the other ports
on the switch (management, uplinks, interconnects, and the switch ports for the PowerFlex management node).
○ Cloudlink Center: Provides the key management and encryption for PowerFlex.
NOTE: For PowerFlex management controller 2.0, verify the capacity before adding additional VMs to the general volume.
If there is not enough capacity, expand the volume before proceeding. For more information on expanding a volume, see Dell
PowerFlex Rack with PowerFlex 4.x Administration Guide.
2
Revision history
Date            Document revision   Description of changes
January 2023    1.2                 Updated the Cisco Smart Account licensing information; editorial changes
September 2022  1.1                 Added Cisco Smart Account licensing information
August 2022     1.0                 Initial release
3
Powering on and off
NOTE: The Technology Extension for PowerScale must be powered on before the PowerFlex rack is powered on.
Steps
1. Power on the switches in the Isilon cabinet.
2. Power on node 1 first by pressing the Power button on the back of the node labeled node 1.
3. Using a serial connection, connect to the console port of node 1 with a laptop and a HyperTerminal or similar connection.
4. Monitor the status of the boot process for node 1. When node 1 has completed booting, it displays the login prompt. Note
any error codes or amber lights on the node. Resolve any issues before moving to node 2.
5. Move to node 2, power on, and monitor the boot process.
6. Repeat the procedure for each node in the cluster in sequential order.
When all nodes have completed booting, the entire cluster is powered on.
Next steps
See the relevant procedure to power on the PowerFlex rack.
NOTE: If asynchronous replication is enabled, activate both the source and destination protection domains.
○ Power on the PowerFlex compute-only nodes
○ Power on the PowerFlex compute-only nodes with Windows server (if applicable)
○ Power on the VMware NSX-T Edge nodes (if applicable)
○ Power on all VMs on the customer or single VMware vCenter (customer cluster VMs)
● Check PowerFlex health and rebuild status
NOTE: Powering on must be completed in this order for the components that you have in your environment. Prioritize
and power on the PowerFlex storage-only nodes or PowerFlex hyperconverged nodes with PowerFlex metadata manager
(MDM) first.
Prerequisites
● Confirm that the servers are not damaged from the shipment.
● Verify that all connections are seated properly and check the manufacturing handoff notes for any items that must be
completed.
● Verify that the following items are available:
○ Customer-provided services such as Active Directory, DNS, NTP
○ Physical infrastructure such as reliable power, adequate cooling, and core network access
See Cisco documentation for information about the LED indicators.
See Dell PowerSwitch S5200 Series Installation Guide for information about the LED indicators for Dell PowerSwitch switches.
Steps
1. Verify that the PDU breakers are in the OPEN (OFF) positions. If the breakers are not OPEN, use the small enclosed tool to
press the small white tab below each switch for the circuit to open. These switches are located below the ON/OFF breaker.
2. Connect the external AC feeds to the PDUs.
3. Verify that power is available to the PDUs and a number is displayed on the LEDs of each PDU.
4. Close the PDU circuit breakers on all PDUs for Zone A by pressing the side of the switch that is labeled ON. This action causes the switch to lie flat. Verify that all the components that are connected to the PDUs on Zone A light up.
5. Close the PDU circuit breakers on all PDUs for Zone B by pressing the side of the switch that is labeled ON. Verify that all the components that are connected to the PDUs on Zone B light up.
6. Power on the network components in the following order:
NOTE: Network components take about 10 minutes to power on.
● Management switches - Wait until the PS1 and PS2 LEDs are solid green before proceeding.
● Cisco Nexus aggregation or leaf-spine switches - Wait until the system status LED is green before proceeding.
● Dell PowerSwitch switches - Wait until the system status LED is solid green before proceeding.
Next steps
Power on the PowerFlex management controller.
Steps
1. Power on the PowerFlex management controller:
a. Log in to the iDRACs of each PowerFlex management controller 2.0.
b. Power on the PowerFlex management controller 2.0.
c. Verify that VMware ESXi boots and that you can ping the management IP address.
Allow up to 20 minutes for the PowerFlex management controller 2.0 to boot after VMware ESXi loads.
2. Exit maintenance mode on all PowerFlex management controller 2.0 nodes:
a. Log in to the VMware ESXi hosts on the PowerFlex management controller 2.0.
Steps
1. Power on the VMware NSX-T Edge nodes.
2. Verify that VMware ESXi has booted and you can ping the management IP address.
3. Power on the VMware NSX-T Edge VMs.
Steps
1. From iDRAC, power on all PowerFlex storage-only nodes and allow them time to boot completely.
NOTE: Perform steps 2 to 6 only for a PowerFlex storage-only node cluster where the MDM is part of the PowerFlex storage-only node. Do not perform steps 2 to 7 when the PowerFlex storage-only node is part of a hyperconverged environment. Protection domain activation is included in the power-on procedure for PowerFlex hyperconverged nodes.
6. Do the following:
a. Log in to the jump server on the controller stack.
b. At the command prompt or the terminal, run nslookup against the correct DNS server and verify that the record resolves. For example: nslookup eagles-r640-f-158.lab.vce.com 10.234.134.100, where the first argument is the node hostname and the second is the DNS server IP address.
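When many node records must be checked, the lookup can be scripted. A minimal sketch using the system resolver follows; the hostname below is an illustrative placeholder for the node hostnames to verify:

```shell
# getent consults the configured resolver, so this exercises the same
# DNS path that the nodes depend on.
check_dns() {
    if getent hosts "$1" > /dev/null; then
        echo "$1 resolves"
    else
        echo "$1 does not resolve" >&2
        return 1
    fi
}

# localhost always resolves; substitute the node hostnames here.
check_dns localhost
```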
Steps
1. From iDRAC, power on all PowerFlex file nodes and allow them time to boot completely.
2. Log in to PowerFlex Manager and verify that the resource group is healthy.
3. Ensure that the back-end PowerFlex storage is up and running before powering on the PowerFlex file cluster.
4. Power on the PowerFlex file cluster by logging in to each PowerFlex file node and typing:
svc_nas_ctl --enable_ha_monitoring
svc_nas_ctl --start_nas_container
In rare cases, NAS volumes may require recovery after bring-up, which needs a service engagement.
Steps
1. From iDRAC, power on all PowerFlex hyperconverged nodes with VMware ESXi and allow them time to boot completely.
2. Log in to the VMware vSphere Client if the PowerFlex rack includes VMware vSphere.
3. Take each PowerFlex hyperconverged node with VMware ESXi out of maintenance mode.
4. If the PowerFlex rack is a full VMware ESXi deployment, power on the MDM cluster PowerFlex VMs: the primary, two secondaries, and two tiebreakers.
5. Log in to PowerFlex Manager.
a. On the Block menu, click SDSs. Verify that all the SDSs are healthy.
b. On the Block menu, click Devices, and verify that the devices are online.
c. Verify that asynchronous replication is enabled:
● Under the Protection menu, click SDRs. Verify that the SDRs are healthy.
● Under the Protection menu, click Journal Capacity. Ensure that the journal capacity has already been added.
d. On the Block menu, click Protection Domains. Select each protection domain, and under More Actions, select Activate.
e. In the Activate Protection Domain dialog box, click Yes for Force activate and click Activate to enable access to the data on the protection domain.
f. Verify that the operation has successfully completed and click Dismiss.
Steps
1. From iDRAC, power on all PowerFlex compute-only nodes and allow them time to boot completely.
2. For PowerFlex compute-only nodes with VMware ESXi:
a. Log in to the VMware vSphere Client if the PowerFlex rack includes VMware vSphere.
b. Take each PowerFlex compute-only node with VMware ESXi out of maintenance mode.
3. For PowerFlex compute-only nodes with Windows Server 2016 or 2019:
a. After the Windows compute-only node boots successfully, log in to the Windows Server 2016 or 2019 system from
Remote Desktop with administrator privilege.
b. Confirm that the mapped PowerFlex volumes are online and accessible using the Disk Management tool: press Windows+R to open Run, type diskmgmt.msc in the box, and press Enter.
c. Confirm that all the critical services are up and running: press Windows+R to open Run, type services.msc in the box, and press Enter.
Steps
1. From vCenter, power on the remaining VMs of all PowerFlex compute-only nodes with VMware ESXi.
2. From the VMware vSphere Client:
a. Rescan to rediscover datastores.
b. Mount the previously unmounted datastores, and add any missing VMs to the inventory.
c. Power on the remaining VMs.
3. For VMware vSphere, enable HA, DRS, and affinity rules.
4. Delete expired or unused CloudLink Center licenses from PowerFlex Manager using the following steps:
a. Log in to PowerFlex Manager.
b. Click Settings > License Management > Other Software Licenses.
c. Select the license to delete and click Remove.
d. Go to the Resources page, select the CloudLink VMs, click Run Inventory, and then click Close.
Prerequisites
To facilitate powering on the PowerFlex rack later, document the location of the management infrastructure VMs on their
respective hosts. Also, verify that all startup configurations for the Cisco and Dell devices are saved.
See the Dell VxBlock™ System 1000 and PowerFlex Rack Physical Planning Guide for information about power specifications.
Steps
1. Check PowerFlex health and rebuild status:
a. Log in to the PowerFlex Manager and check the dashboard.
b. Confirm that there are no errors and that no rebuild or rebalance is running.
2. Shut down all VMs on the vCenter:
a. Using the VMware vSphere Client, log in to the customer VMware vCenter or a single VMware vCenter (customer
cluster).
b. Expand the customer clusters.
c. Shut down all VMs, except for the PowerFlex storage VMs (SVM).
CAUTION: Do not shut down the SVMs. Shutting them down now can result in data loss.
Steps
1. Log in to PowerFlex Manager.
2. Select Block > Protection Domain.
3. For each protection domain, click More Actions > Inactivate.
4. Click OK and type the administrator password when prompted.
5. Repeat for each protection domain and verify that each is inactivated.
6. Log in to the iDRAC to power off the PowerFlex storage-only nodes.
Steps
1. Log in to the VMware vCenter, click Home, and click Inventory.
2. Disable DRS and HA on the customer cluster.
3. Place the PowerFlex compute-only nodes with VMware ESXi into maintenance mode.
4. Power off the PowerFlex compute-only nodes with VMware ESXi.
Prerequisites
Shut down all applications that use the NAS filesystem before shutting down the PowerFlex file cluster. If you do not shut them down, there is a risk of data unavailability or data loss (DU/DL).
Steps
1. Ensure that all the applications that use NAS filesystems are shut down.
2. Ensure that all compute-only or hyperconverged nodes are shut down.
3. Shut down the PowerFlex file cluster by logging in to each PowerFlex file node and typing:
svc_nas_ctl --disable_ha_monitoring
svc_nas_ctl --stop_nas_container
4. From iDRAC, shut down all PowerFlex file nodes and allow them time to power off completely.
5. Log in to PowerFlex Manager and verify the resource group is healthy.
Steps
1. Log in to the VMware vSphere Client.
2. From VMware vCenter, click Home > Inventory.
3. Verify that DRS and HA on the customer cluster are disabled. If they are not disabled, disable them.
4. Shut down all PowerFlex SVMs.
5. Place the PowerFlex hyperconverged nodes into maintenance mode.
6. Power off the PowerFlex hyperconverged nodes with VMware ESXi.
Steps
1. Connect to the Windows Server 2016 or 2019 system from Remote Desktop using an account with administrator privileges.
2. Power off through any of the following modes:
● GUI: Click Start > Power > Shut down.
● Command line using PowerShell: Run the Stop-Computer cmdlet.
Steps
1. In the management VMware vCenter, right-click the NSX-T Edge VMs and click Power > Shut Down Guest OS.
2. In the management VMware vCenter, right-click the NSX-T Edge nodes and click Power > Shut Down.
Steps
1. Determine the primary MDM IP address and the protection domain name:
a. Log in to PowerFlex Manager to determine the primary MDM.
b. To view the details of a resource group, click Lifecycle > Resource Groups > PowerFlex management controller 2.0 resource group. Scroll down on the Service Details page; the following information is displayed based on the resource types in the resource group:
● Primary MDM IP
● Protection Domain
2. Power off the VMs except for the PowerFlex SVMs:
a. Log in to the PowerFlex management controller 2.0 ESXi hosts.
b. Click Virtual Machines.
c. Power off all the VMs, except the PowerFlex SVMs.
3. Inactivate the protection domain:
a. Log in to the primary MDM by typing scli --login --p12_path /opt/emc/scaleio/mdm/cfg/cli_certificate.p12 --p12_password <password>
NOTE: After discovering MDS on PowerFlex Manager, the login is as follows: scli --login --username admin --password <PFxM_password> --management_system_ip <PFxM IP> --insecure
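The inactivation itself is then done with scli after logging in. A hedged sketch follows; verify the exact flag spelling against the installed scli version, and note that the protection domain name is a placeholder:

```shell
scli --inactivate_protection_domain --protection_domain_name <protection domain name>
```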
Prerequisites
Steps
1. Connect to all the switches using SSH:
● For Cisco Nexus switches, type copy running-config startup-config
● For Dell PowerSwitch switches, type copy running-config tftp://hostip/filepath.
2. On Zone B (BLUE), turn off all PDU power breakers (OPEN position).
3. On Zone A (RED), turn off all PDU power breakers (OPEN position).
4. To verify that there is no power beyond the PDUs, disconnect the AC feeds to all PDUs.
Steps
1. Open OneFS and log in as root.
2. Click CLUSTER MANAGEMENT > Hardware Configuration > Shutdown & Reboot Controls.
3. Select Shut Down.
4. Click Submit.
5. Power off the switches in the Technology Extension for PowerScale cabinet.
Results
Verify that all nodes have shut down by looking at the power indicators on each node.
If nodes do not power off:
1. SSH to the node.
2. Log in as root and type the following to enter the configuration console, then shut down a specific node by number or all nodes:
isi config
shutdown <node number>
shutdown all
If the node still does not power off, you can force the node to power off by pressing and holding the multifunction/power
button on the back of the node.
If the node still does not respond, press the Power button of the node three times, and wait five minutes. If the node still does not shut down, press and hold the Power button until the node powers off.
NOTE: Perform a forced shutdown only with a failed and unresponsive node. Never force a shutdown with a healthy node.
Do not attempt any hardware operations until the shutdown process is complete. The process is complete when the node
LEDs are no longer illuminated.
4
Changing PowerFlex rack passwords
PowerFlex rack is deployed with factory default passwords. The customer must replace these default passwords with secure, system-generated passwords.
For specific steps to change passwords for a PowerFlex rack, see the Password management section in the Dell PowerFlex
Rack with PowerFlex 4.x Administration Guide.
5
Removing the temporary DNS servers
After the customer DNS is configured, remove the temporary DNS servers.
Prerequisites
Verify the customer DNS is configured for PowerFlex rack.
Steps
1. Log in to the controller PowerFlex rack vCSA.
2. Power off the DNS1 VM.
3. Remove the DNS1 VM from the disk.
6
Manually deploy UCC Edge 2.0
Deploy the UCC Edge.
Manually deploy the UCC Edge using the UCC Edge Getting Started Guide.
7
Configuring the licenses
Complete the PowerFlex rack configuration by applying licenses.
The following software that is deployed on PowerFlex rack requires a license:
● VMware vSphere
● VMware vCenter Server
● PowerFlex
● CloudLink
Prerequisites
The PowerFlex management controller requires valid licenses for:
● VMware vSphere (Enterprise Plus)
● VMware vCenter Server Standard
● PowerFlex (only applicable for PowerFlex management controller 2.0)
Steps
1. Log in to the single vCSA.
2. In the Administration section, click Licensing.
3. At the top of the display, click Licenses.
4. Click ADD to open the New Licenses wizard.
5. Optionally, provide an identifying name for each license. Click Next.
6. Click Finish to complete the addition of licenses to the system inventory.
7. In the Licenses view, the added licenses are visible. Click the Assets tab.
8. Click Hosts.
9. To check the controller nodes, hold the Ctrl key and left-click each listed controller host until they are all highlighted.
10. Click Assign License and click Yes.
11. In the dialog box, select the newly-added vSphere license from the list and click OK.
12. Click vCenter Service systems.
13. To check vCenter, left-click VXMA VCSA to select it, if not already selected.
14. Click Assign License.
15. In the dialog box, select the newly-added vCenter license from the list and click OK.
16. Select the License tab and verify that usage appears for each listed license.
Prerequisites
You need to deploy the MDM cluster before uploading a PowerFlex license. You need to discover an MDS gateway before
uploading an MDS license.
Steps
1. On the menu bar, click Settings and click License Management.
2. Click PowerFlex License.
3. To upload an MDS license, click Choose File in the Management Data Store (MDS) License section and select the
license file. Click Save.
4. To upload a production license for PowerFlex, click Choose File in the Production License section and select the license
file. Click Save.
Results
When you upload a license file, PowerFlex Manager checks the license file to ensure that it is valid.
After the upload is complete, PowerFlex Manager stores the license details and displays them on the PowerFlex Manager
License page. You can see the Installation ID, System Name, and SWID for the PowerFlex system. In addition, you can see the Total Licensed Capacity, as well as the License Capacity Left. You can upload a second license, as long as the combined licensed capacity is equal to or greater than the Total System Capacity.
Licensing CloudLink
The CloudLink license is applied during CloudLink Center deployment. Perform this procedure if you need to manually replace the
CloudLink license.
Steps
1. Open a browser, and provide the CloudLink VM IP address.
2. Log in using secadmin credentials.
3. Click System > License > Upload License.
NOTE: CloudLink license files determine the number of machine instances, CPU sockets, encrypted storage capacity,
or physical machines with self-encrypting drives (SEDs) that your organization can manage using CloudLink Center. For
environments with SED drives, a CloudLink license for SED must be applied to manage SED drives from CloudLink. To
verify that a drive is SED, do the following:
a. Log in to iDRAC of the node and navigate to Storage > Overview > Physical Disks.
b. Select the SSD drive and expand the details.
c. Verify that Encryption Capable option shows as Capable for the Toshiba SSD Drive.
4. For CloudLink environments managed by PowerFlex Manager, after you update the license manually, log in to PowerFlex Manager and go to the Resources page. Select the CloudLink VMs and click Run Inventory.
Steps
1. To download the OS10 image and license, sign in to Dell Digital Locker using your account credentials.
2. Locate the entry for your entitlement ID and order number that is sent by email, then select the product name.
3. On the Product page, the Assigned To: field on the Product tab is blank. Click Key Available for Download.
4. Enter the Service Tag of the device for which you purchased the OS10 Enterprise Edition in the Bind to: and Re-enter ID:
fields. This step binds the software entitlement to the service tag of the switch.
5. Select how you want to receive the license key - by email or downloaded to your local device.
6. Click Submit to download the License.zip file.
7. Click the Available Downloads tab.
8. Select the OS10 Enterprise Edition release to download, then click Download.
9. Read the Dell End User License Agreement. Scroll to the end of the agreement, then click Yes, I agree.
10. Select how you want to download the software files, then click Download Now.
After you download the OS10 Enterprise Edition image, extract the TAR file by following these guidelines:
● Extract the OS10 binary file from the .tar file using any file archiver/compressor software. For example, to extract a TAR file on a Linux server or from the ONIE prompt, enter tar -xf tar_filename.
● On a Windows server, some Windows extract applications insert extra carriage returns (CR) or line feeds (LF) when they
extract the contents of a TAR file. The additional CRs or LFs may corrupt the downloaded OS10 binary image. Turn off
this option if you use a Windows-based tool to untar an OS10 binary file.
● Generate a checksum for the downloaded OS10 binary image by running the md5sum command on the image file (md5sum image_filename). Ensure that the generated checksum matches the checksum that is extracted from the TAR file.
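The checksum comparison can also be scripted. The sketch below builds a stand-in image and checksum file, since the real OS10 file names vary by release:

```shell
# Stand-ins for the downloaded TAR contents: an image plus its shipped md5 file.
printf 'example image contents' > os10-image.bin
md5sum os10-image.bin > os10-image.bin.md5

# Compare the checksum generated from the image against the shipped checksum.
GENERATED=$(md5sum os10-image.bin | awk '{print $1}')
EXPECTED=$(awk '{print $1}' os10-image.bin.md5)

if [ "$GENERATED" = "$EXPECTED" ]; then
    echo "checksum OK"
else
    echo "checksum mismatch - re-download the image" >&2
fi
```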
11. Open the ZIP file, and locate the license file in the Dell folder. Copy the license file to a local or remote workstation.
12. Install the license file from the workstation in EXEC mode. Type the following:
● usb://filepath - Install from a file directory on a storage device that is connected to the USB storage port on the
switch.
● filepath/filename - Enter the directory path where the license file is stored.
13. Install the license XML file. Type the following: OS10# license install scp://user:userpwd@10.1.1.10/CFNNX42-NOSEnterprise-License.xml
If the license installation is successful, a success message appears.
14. To verify the license installation, type OS10# show license status. Output similar to the following appears:
System Information
------------------------------------------
Vendor Name : DELL
Product Name : S4048-ON
Hardware Version: A00
Platform Name : S4048-ON
PPID : CN0M68YC2829855M0133
Service Tag : CFNNX42
License Details
----------------
Software : OS10-Enterprise
Version : 10.3.0E
License Type : PERPETUAL License
Duration: Unlimited License
Status : Active
License location: /mnt/license/CFNNX42.lic
Steps
1. Verify the installation path to the local or remote location you tried to download the license from.
2. Check the log in the remote server to see why the FTP or TFTP file transfer failed.
3. Ping the remote server from the switch - use the ping and traceroute commands to test network connectivity. If the
ping fails:
● Check if a management route is configured on the switch. If not, use the management route command to configure a
route to the server network.
● Install the server with the license file on the same subnet as the switch.
4. Check if the server is up and running.
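The reachability checks in steps 3 and 4 can be run from the switch CLI. An illustrative session follows; the server address is a placeholder, and management-route syntax can vary by OS10 release, so verify it against the OS10 CLI reference:

```shell
OS10# ping 10.1.1.10
OS10# traceroute 10.1.1.10
OS10# configure terminal
OS10(config)# management route 10.1.1.0/24 managementethernet
```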
NOTE: For additional information on return material authorization (RMA), see Replacing a Dell PowerSwitch Series or Cisco Nexus Switch.
8
Authenticating the VMware vCenter Server
in a hyperconverged deployment
Use the VMware vCenter Web Client to join VMware vCenter to the domain.
Prerequisites
Verify you have network access to the VMware vCenter Web Client and Microsoft Windows AD domain with administrator
privileges.
If the VMware vCenter Server and SSO are installed in separate systems from a custom installation, join both systems to the
domain.
Ensure the system is online and services are started.
Steps
1. To add the AD identity source to SSO, log in to the vSphere Web Client as the SSO administrator.
2. Select Administration.
3. Expand the Single Sign-On entry.
4. Select Configuration > Identity Sources.
5. From the Options menu, click Add Identity Source.
6. Click Active Directory. If the domain name field is not automatically populated with the correct Windows DNS domain, type it manually.
7. Select Use machine account and click OK.
After the AD identity source is configured, you can add users from that domain to the VMware vCenter Server.
9
Authenticating the VMware vCenter Server
in a two-layer deployment
Use the VMware vCenter Client to join VMware vCenter to the domain.
Prerequisites
Verify you have network access to the VMware vCenter Web Client and Microsoft Windows AD domain with administrator
privileges.
Ensure the system is online and services are started.
Steps
1. Add the AD identity source to SSO. Log in to the vSphere Web Client as the SSO administrator.
2. Select Home/Administration.
3. Expand the Single Sign-On entry.
4. Select Configuration > Identity Sources.
5. From the Options menu, click Add Identity Source.
6. Click Active Directory. If the domain name field is not automatically populated with the correct Windows DNS domain, type it manually.
7. Select Use machine account and click OK.
After the AD identity source is configured, you can add users from that domain to the VMware vCenter Server.
10
Enabling user access to the VMware vCenter
server
Use the VMware vSphere Web Client to enable user access to the VMware vCenter Server Appliance (vCSA).
Prerequisites
Verify that the user you use to log in to the VMware vCenter Server instance is a member of the
SystemConfiguration.Administrators group in the VMware vCenter Single Sign-On domain.
Steps
1. Use the VMware vSphere Web Client to log in as administrator@vsphere.local to the VMware vCenter Server instance in the
VMware vCenter Server Appliance.
2. Click Administration.
3. Under Single Sign-On, click Users and Groups.
4. On the Groups tab, select the SystemConfiguration.BashShellAdministrators group.
5. In the Group Members window, click Add member.
6. Double-click users from the list or type names in the Users text box.
7. Click OK.
11
Configuring the Cisco Nexus management
switch
Configure the PowerFlex rack service port
Use this procedure to configure the management switch to create the IP address access list and configure the interface.
Prerequisites
● Access and credentials to the flex-oob-mgmt-<vlanid> switch (default is VLAN 101) using the network or console
● Access and credentials to the single VMware vCenter
● Access and credentials to the controller jump server
● One labeled CAT6 patch cable to provide a permanent service port connection to port 48 of the first management switch
Use the following default network settings for the service port configuration.
Steps
1. To enter global configuration mode, type: conf t.
2. To create the access list, type: ip access-list flex-support-access permit ip 172.16.255.249 255.255.255.252 172.16.255.250 255.255.255.252.
3. To configure the interface, type:
interface Ethernet1/48
switchport
switchport access vlan 101
ip port access-group flex-support-access in
no shutdown
Steps
1. Log in to the single VMware vCenter.
2. Expand the management cluster.
3. Right-click the jump server and select Edit Settings.
4. Click the Select menu, select Network, and click Add.
Steps
1. Log in to the jump server VM as admin using VNC or the VMware vCenter console (Ctrl+Alt+F2 for text console).
2. Set up the networking by typing sudo nmtui. Enter the account password when prompted.
3. Select Edit a connection and choose ens256, which should be the third interface.
4. Set the IPv4 configuration to Manual and click Show.
5. Select Add and set the IP address and subnet mask to 172.16.255.250/30. There is no default gateway.
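As a sanity check on the addressing above, the switch service-port address (172.16.255.249) and the jump server address (172.16.255.250) are the only two usable hosts in the 172.16.255.248/30 subnet, which is why no default gateway is configured. A quick verification sketch using Python's ipaddress module:

```python
import ipaddress

# Service-port point-to-point link: a /30 leaves exactly two usable hosts.
subnet = ipaddress.ip_network("172.16.255.248/30")
switch_ip = ipaddress.ip_address("172.16.255.249")  # management switch side
jump_ip = ipaddress.ip_address("172.16.255.250")    # jump server (ens256)

hosts = list(subnet.hosts())
assert hosts == [switch_ip, jump_ip]  # the only two usable addresses
print(subnet.netmask)  # 255.255.255.252
```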
12
Connecting to the customer network using Cisco Nexus switches
Cisco Nexus management switch networking overview
To handle management traffic, network connectivity between PowerFlex rack and the customer network requires a
management IP interface for the Cisco Nexus management switch.
Cisco Nexus management switch connectivity in a PowerFlex rack supports a single upstream switch or two upstream switches, for example, two fiber uplinks (Ethernet1/49 and Ethernet1/50).
Depending on the customer configuration, the ports can be combined into a port channel or remain separate.
PowerFlex rack port channel connectivity supports static/on (no protocol) and LACP port aggregation protocols.
Prerequisites
● Identify the port-channel protocol from the Logical Configuration Survey (LCS) and verify the configuration with the
customer.
● Verify the physical interfaces match the port channel settings.
Steps
1. Access the console interface on the Cisco Nexus management switch.
2. To change the management IP address for VLAN 101, type: ip address <ip address> <subnet mask>.
3. Enter a command to establish the default route. In this example, the default gateway is 192.168.101.1. Refer to the LCS for the default gateway IP address.
Prerequisites
● Identify the port channel protocol from the Logical Configuration Survey (LCS) and verify the configuration with the
customer.
● Verify the physical interfaces match the port channel settings.
Steps
1. Access the console interface on the Cisco Nexus management aggregation switch.
2. To change the management IP address for VLAN 101, type: ip address <ip address> <subnet mask>.
3. Enter a command to establish the default route. In this example, the default gateway is 192.168.101.1. See the LCS for the default gateway IP address.
Prerequisites
● Identify the routing protocol from the Logical Configuration Survey (LCS) and verify the configuration with the customer.
● Verify that the physical interfaces match the port channel settings.
Steps
1. Access the console interface on the Cisco Nexus management aggregation switch.
2. Type the following to configure HSRP VIP on interface VLAN 101:
hsrp version 2
hsrp 103
  authentication text <string>
  preempt
  priority <num>
  ip <ip address>
See the LCS for the default gateway IP address and the routing protocol peering with the customer network.
Prerequisites
See the Logical Configuration Survey (LCS) and verify the Layer 2 configuration with the customer.
Steps
1. Access the console interface on the Cisco Nexus access switch.
2. Use SSH to log in to the access switch.
3. Configure the uplink interfaces to match the customer network configuration.
NOTE: To obtain configurations, communicate with the customer's networking team.
4. Confirm that the appropriate VLANs from the LCS are allowed over the links and the correct SVIs are configured on the
customer side.
Verifying NTP
Use this procedure to verify NTP, after booting the hosts and VMs.
Steps
1. Verify all management VMs.
2. Verify SVMs.
3. Verify VMware ESXi hosts and vApps.
4. Verify network switches.
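On the Linux-based management VMs, the NTP check can be scripted. The sketch below is illustrative only (the sample output, peer addresses, and 100 ms threshold are assumptions, not values from this guide); it parses ntpq -p style output and flags peers whose offset drifts too far:

```python
# Hypothetical helper: parse `ntpq -p` output from a management VM and flag
# peers whose clock offset exceeds a threshold (offset column is in ms).
SAMPLE = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.168.101.10  .GPS.            1 u   32   64  377    0.412    1.205   0.310
+192.168.101.11  192.168.101.10   2 u   28   64  377    0.501  120.700   0.420
"""

def offsets(ntpq_output, limit_ms=100.0):
    bad = []
    for line in ntpq_output.splitlines()[2:]:  # skip the two header lines
        fields = line.split()
        if not fields:
            continue
        remote = fields[0].lstrip("*+-#x")     # strip the peer-status tally code
        offset = float(fields[-2])             # second-to-last column is offset
        if abs(offset) > limit_ms:
            bad.append(remote)
    return bad

print(offsets(SAMPLE))  # ['192.168.101.11'] -> peer drifting beyond 100 ms
```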
Steps
1. Clean up management or aggregation switch temporary ports.
2. Remove temporary SVIs from the management switches.
3. Complete these cleanup steps before activating the customer uplinks.
CAUTION: Failure to complete this step before activating the customer uplinks might take down the
production networks.
Steps
1. Enable the Call Home IOS feature:
a. In global configuration mode, type service call-home.
b. Type call-home to enter call-home configuration mode.
Sample output:
Hostname#configure terminal
Hostname(config)#service call-home
Hostname(config)#call-home
Hostname(cfg-call-home)#contact-email-addr username@domain-name
3. Activate the default Cisco TAC-1 profile and set the transport option to HTTP:
Hostname(cfg-call-home)#profile CiscoTAC-1
Hostname(cfg-call-home-profile)#active
Hostname(cfg-call-home-profile)#destination transport-method http
4. Install a security certificate: obtain the Cisco server certificate and install it on the switch.
5. Configure a trust point and prepare to enroll the certificate through the terminal using copy and paste:
Hostname(config-cert-chain)#end
Hostname#copy running-config startup-config
8. After you receive an email from Cisco, follow the link to complete the Smart Call Home registration.
Steps
1. In global configuration mode, activate the call home feature and enter the call home configuration mode:
Hostname#configure terminal
Hostname(config)#service call-home
Hostname(config)#call-home
Hostname(cfg-call-home)#contact-email-addr username@domain-name
The mail-server <address> is an IP address or domain name of an SMTP server that receives messages from Call Home. If more than one mail-server <address> is configured, the mail-server priority identifies the active primary server. Call Home sends messages to the active primary server, which is the configured server with the lowest priority number.
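The priority-based selection described above can be modeled as picking the configured server with the lowest priority number. This is a hypothetical sketch of the behavior, not Cisco code:

```python
# Hypothetical model of Call Home mail-server selection: the active primary
# is the configured server with the lowest priority number.
def active_primary(mail_servers):
    """mail_servers: list of (address, priority) tuples; lower number wins."""
    return min(mail_servers, key=lambda s: s[1])[0]

servers = [("smtp-b.example.com", 20), ("smtp-a.example.com", 10)]
print(active_primary(servers))  # smtp-a.example.com
```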
4. Activate the default Cisco TAC-1 profile and set the transport option to email:
Hostname(cfg-call-home)#profile CiscoTAC-1
Hostname(cfg-call-home-profile)#active
Hostname(cfg-call-home-profile)#destination transport-method email
Hostname(config-cert-chain)#end
Hostname# copy running-config startup-config
7. After you receive an email from Cisco, follow the link to complete the Smart Call Home registration.
For additional information about registering the Cisco switch alerting, see the KB article.
13
Redistribute the MDM cluster
About this task
PowerFlex Manager enables you to change the MDM role for a node in a PowerFlex cluster by switching the MDM role from one node to another.
Steps
1. To access the wizard from the Resource Groups page:
a. On the menu bar, click Lifecycle > Resource Groups.
b. Select the resource group that contains the node with the MDM role to reconfigure.
c. In the right pane, click View Details.
The Resource Group Details page is displayed.
d. On the Resource Group Details page, under More Actions click Reconfigure MDM Roles.
2. On the Reconfigure MDM Role page, review the nodes that currently hold the MDM roles.
3. To reassign a role, select the new hostname or IP address from the Select New Node for MDM Role drop-down.
You can reassign multiple roles at one time.
14
Verify PowerFlex spare capacity
The spare capacity for each storage pool needs to be equivalent to the largest amount of storage that a single Storage Data
Server (SDS) provides.
Prerequisites
Calculate the spare capacity percentage value that must be set.
Spare capacity should be configured to a minimum of (1/N)*100 percent, where N is the number of nodes in the system. For example, for a ten-node system, spare capacity should be set to (1/10)*100 = 10%. For a three-node system, the recommended setting is (1/3)*100 = 33.33%. This recommendation applies equally to fault sets. Spare capacity is implemented at the storage pool level; each storage pool has a separately provisioned spare capacity.
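The recommended spare percentage can be computed directly from the node (or fault set) count. A small helper illustrating the (1/N)*100 guideline:

```python
# Spare capacity guideline: at least (1/N) * 100 percent, where N is the
# number of nodes (or fault sets) in the system.
def spare_capacity_pct(n_nodes):
    if n_nodes < 1:
        raise ValueError("need at least one node")
    return round(100.0 / n_nodes, 2)

print(spare_capacity_pct(10))  # 10.0  -> ten-node system
print(spare_capacity_pct(3))   # 33.33 -> three-node system
```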
Steps
1. Log in to PowerFlex Manager.
2. Click Block > Storage Pools.
3. Select the storage pool you want to verify.
4. Click Modify > Capacity.
The Storage Pool Capacity Setting window appears.
5. Verify that the Spare Percentage Policy is configured based on the equation mentioned above.
6. Repeat this procedure on each storage pool.
15
Finalizing the system
Complete these steps to finalize the PowerFlex rack.
Mode      Description
Managed   Management and orchestration. PowerFlex Manager deployed services or imported existing services for elements.
Prerequisites
Determine the PowerFlex Manager deployment mode:
1. Log in to PowerFlex Manager and open the Dashboard. Look at the Resource Groups section.
2. On the menu bar, click Lifecycle > Resource Groups.
3. If there is at least one service, click Resources on the menu bar. Note whether any switches are discovered.
4. Determine the deployment mode:
a. If there are no services, the deployment is in Alerting mode.
b. If there is at least one service and no switches are listed on the Resources page, the deployment is in Lifecycle mode.
c. If there is at least one service and switches are listed on the Resources page, the deployment is in Managed mode.
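The decision steps above reduce to a simple rule. The following sketch models that logic (function and argument names are illustrative, not PowerFlex Manager code):

```python
# Deployment-mode decision: no services -> Alerting; services but no
# discovered switches -> Lifecycle; services plus switches -> Managed.
def deployment_mode(service_count, switches_discovered):
    if service_count == 0:
        return "Alerting"
    if not switches_discovered:
        return "Lifecycle"
    return "Managed"

print(deployment_mode(0, False))  # Alerting
print(deployment_mode(2, False))  # Lifecycle
print(deployment_mode(2, True))   # Managed
```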
Steps
1. Run the System Configuration Reporter (SCR) and RVTools (a Windows .NET 4.6.1 application).
2. Proceed with the DAC and the test plan, and coordinate the knowledge transfer to the customer resources.
3. Create a ServiceNow ticket to update the Dell install base with the PowerFlex Manager deployment type.
a. Go to inside.dell.com.
b. Click My IT.
c. Click Order Something.
d. Under Browse Categories, expand Professional Services and click VCE.
e. Click MACD (Move/Add/Change/Delete).
f. Complete the form requirements:
● Which components will this affect? - Select VMware or Multiple from the list. Either selection is applicable.
● Opportunity ID - Enter NA if not available.
● Serial Number - Provide the valid system serial number requiring the update. You can enter one system serial
number per ticket.
● Description - Provide a valid PowerFlex Manager deployment mode: Managed Mode, Alerting Mode, or Lifecycle
Mode.
g. Click Submit.
An SSET team member will respond to the ticket after the updates are complete in the install base. The standard SLA is 3 to 5 business days. After the updates are made, they are live and visible in the install base. The requestor receives an email notification when the ticket is submitted and after the ticket is resolved or closed.
16
Configure events and alerts
Before you begin
Before configuring the alerts and events, verify that the Secure connect gateway is installed and configured to connect with
PowerFlex Manager.
For information on installing and configuring Secure Connect Gateway, see the Download Secure Connect Gateway section in the Secure Connect Gateway User's Guide.
NOTE: Port 8443 is not required for functionality; however, if it is not open, remote support performance decreases significantly.
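Whether a port such as 8443 is reachable toward the gateway can be checked with a simple TCP connect. A minimal sketch (the helper and hostnames are illustrative, not part of the product tooling):

```python
import socket

# Hypothetical reachability check: attempt a TCP connection to a host/port
# and report whether the connection succeeds within the timeout.
def port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is an assumption): port_open("scg-gateway.example.local", 8443)
```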
The following table lists the firewall port requirements for PowerFlex rack functionality:
Steps
1. Go to https://cpsd-mfg.ins.dell.com/Neptune/.
2. In the search box, enter P# and press Enter to search for the LAC.
For example, searching for P#3596408 displays the matching LAC results.
Enabling SupportAssist
● There are two options to configure events and alerts:
○ Connect directly
○ Connect using secure connect gateway
● If you connect directly, only the call home option is available
● If you connect through secure connect gateway, all options through secure connect gateway are enabled
● You do not need to deploy and configure secure connect gateway if you choose ESE direct
Prerequisites
● Download the required version of secure connect gateway from the Dell support site.
● You must have VMware vCenter Server running on the virtual machine on which you want to deploy secure connect
gateway. Deploying secure connect gateway directly on a server running VMware vSphere ESXi is not supported.
Steps
1. Download and extract the OVF file to a location accessible by the VMware vSphere Client.
2. On the right pane, click Create/Register VM.
3. On the Select Creation Type page, select Deploy a virtual machine from an OVF or an OVA file and click Next.
4. On the Select OVF and VMDK files page, enter a name for the virtual machine, select the OVF and VMDK files, and click
Next.
NOTE: If there is more than one datastore on the host, the datastores are displayed on the Select storage page.
5. Select the location to store the virtual machine (VM) files and click Next.
6. On the License agreements page, read the license agreement, click I agree, and click Next.
7. On the Deployment options page, perform the following steps:
a. From the Network mappings list, select the network that the deployment template must use.
b. Select a disk provisioning type.
c. Click Next.
8. On the Additional settings page, enter the following details and click Next.
● Domain name server
● Hostname
● Default gateway
● Network IPv4 and IPv6
● Time zone
● Root password
NOTE: Ensure that the root password consists of eight characters with at least one uppercase and one lowercase
letter, one number, and one special character. Use this root password to log in to secure connect gateway for the first
time after the deployment.
9. On the Ready to complete page, verify the details that are displayed, and click Finish.
A message is displayed after the deployment is complete and the virtual machine is powered on.
NOTE: Wait 15 minutes before you log in to the secure connect gateway user interface.
Configuring the initial setup and generating the access key and pin
Use this section to generate the access key and pin to register with Secure Connect Gateway and the Dell Support site.
Use this link to generate the Dell Support account and access key and pin: https://www.dell.com/support/kbdoc/en-us/000180688/generate-access-key-and-pin-for-dell-products?lang=en.
Customers should work with field engineer support to get the site ID that is required while generating the access key and pin.
Steps
1. Go to https://<hostname (FQDN) or IP address>:5700.
2. Enter the username as root and the password created while deploying the VM.
3. Create the admin password:
a. Enter a new password.
b. Confirm the password.
4. Accept the terms and conditions.
5. Provide the access key and pin that you generated earlier.
6. Enter the Primary Support Contacts information.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Events and alerts.
3. Click Notification Policies.
4. On the Policies tab, in the grayed-out area, click Configure Now.
5. Accept the license and telemetry agreement on the Connect SupportAssist page and click Next.
6. Choose the connection type Connect Directly.
NOTE: This option connects PowerFlex Manager directly to SupportAssist. The call home feature works with a direct connection. The proxy setting is not supported.
Prerequisites
Configure the secure connect gateway.
Steps
1. Log in to PowerFlex Manager.
2. Click Settings > Events and alerts.
3. Click Notification Policies.
4. On the Policies tab, in the grayed-out area, click Configure Now.
5. Accept the license and telemetry agreement on the Connect SupportAssist page and click Next.
6. Choose the connection type Connect via Gateway.
NOTE: Connecting through the gateway registers PowerFlex Manager with secure connect gateway and SupportAssist. From here, you can enable the proxy setting.
Steps
1. Using a web browser, access Secure connect gateway by typing the following URL: https://<scg-gateway_ip>:5700
2. On the menu bar, select Devices > Managed Device. Initially the device displays as Offline or Missing.
3. After 10 to 20 minutes, the device displays as Online and Managed.
Steps
1. Access ServiceLink by going to the following URL: https://servicelink.emc.com/searchdevice.
2. Log in with your corporate NT ID and password.
3. Search for the SWID from the gateway.
The green connected status indicates the registration was successful.
Steps
1. Go to https://licensing.emc.com.
2. On the menu bar, click Licenses and then click View Certificates.
3. In the License Authorization Code box, enter the license authorization code. Click Search.
4. In the View a Certificate section, click View in the row containing the device machine name.
5. Click Ownership. Note the site ID, which displays under Sites associated with the activated products, in the Site
column.
NOTE: You can also use the License Activation Code (LAC) or the SWID to get the site ID.
Prerequisites
Ensure you have the following information:
● Secure connect gateway IP address
● Site ID, which you located here: Locate the site ID
Steps
1. Using a web browser, access Secure connect gateway by typing the following URL: https://<scg-gateway_ip>:5700
2. Ensure the gateway contains the site ID to which the SWID is registered.
3. If the site IDs do not match, add the site ID to the gateway. On the menu bar, select Device Management > Manage
Devices > Devices. The new site ID is added to the list at the top of the page.
Steps
1. Go to Settings > Events and Alerts > Notification Policies.
2. From the Sources pane, click Add.
The Add Source window opens.
3. Enter a source name and description.
4. Configure either SNMP or syslog forwarding:
● If you select SNMPV2c:
a. Enter the community string by which the source forwards traps to destinations.
b. Enter the same community string for the configured resource. During discovery, if you selected PowerFlex Manager
to automatically configure iDRAC nodes to send alerts to PowerFlex Manager, enter the community string that is used
in that credential here.
● If you select SNMP V3:
a. Enter the username, which identifies the ID where traps are forwarded on the network management system.
NOTE: The username must be at most 16 characters.
b. Select a security level from the following:
Steps
1. Go to Settings > Events and Alerts > Notification Policies.
2. From the Sources pane, click the source that you want to modify.
The Edit Source window opens.
3. Edit the information and click Submit.
Modifying a destination
You can edit the information about where event and alert data that is processed by PowerFlex Manager should be sent.
Steps
1. Go to Settings > Events and Alerts > Notification Policies.
2. From the Destinations pane, click the destination whose information you want to modify.
The Edit Destination window opens.
3. Edit the information and click Submit.
Steps
1. Go to Settings > Events and Alerts > Notification Policies.
2. Click Create New Policy.
3. Enter a name and a description for the notification policy.
4. From the Resource Domain menu, select the resource domain that you want to add a notification policy to. The resource
domain options are:
● All
● Management
● Block (Storage)
● File (Storage)
● Compute (Servers, Operating Systems, virtualization)
Steps
1. Go to Settings > Events and Alerts > Notification Policies.
2. Select the notification policy that you want to modify.
3. You can choose to modify the notification policy in the following ways:
● To activate or deactivate the policy, click Active.
● To modify the policy, click Modify. The Edit Notification Policy window opens.
4. Click Submit.
Steps
1. Go to Settings > Events and Alerts > Notification Policies.
2. Select the notification policy that you want to delete.
3. Click Delete.
You receive an information message asking if you are sure that you want to delete the policy.
4. Click Submit and click Dismiss.