PowerStore - Hardware Configuration Guide
July 2022
Rev. A10
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2020 - 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Additional Resources.....................................................................................................................7
Chapter 1: Introduction................................................................................................................. 8
Purpose.................................................................................................................................................................................. 8
Setting up NVMe HBAs............................................................................................................................................. 26
iSCSI Configuration...........................................................................................................................................................27
Pre-Requisites.............................................................................................................................................................. 27
Network Configuration for iSCSI............................................................................................................................. 27
iSCSI Software Adapter Configuration................................................................................................................... 31
Jumbo Frames...............................................................................................................................................................31
Delayed ACK................................................................................................................................................................. 32
Login Timeout...............................................................................................................................................................32
No-Op Interval..............................................................................................................................................................32
Known Issues................................................................................................................................................................ 33
NVMe/TCP Configuration.............................................................................................................................................. 33
Pre-Requisites.............................................................................................................................................................. 33
Setting the ESXi Host NVMe Qualified Name......................................................................................................33
Network Configuration for NVMe/TCP.................................................................................................................34
NVMe/TCP Software Adapter Configuration...................................................................................................... 36
Using CLI........................................................................................................................................................................37
Known Issues................................................................................................................................................................38
vStorage API for System Integration (VAAI) Settings.............................................................................................38
Confirming that VAAI is Enabled on the ESXi Host............................................................................................ 38
Setting the Maximum I/O............................................................................................................................................... 39
Confirming UNMAP Priority........................................................................................................................................... 39
Configuring VMware vSphere with PowerStore Storage in a Multiple Cluster Configuration...................... 40
Multipathing Software Configuration............................................................................................................................41
Configuring Native Multipathing (NMP) with SCSI.............................................................................................41
Configuring High Performance Multipathing (HPP) with NVMe.....................................................................43
Configuring PowerPath Multipathing..................................................................................................................... 45
PowerStore Considerations............................................................................................................................................ 45
Presenting PowerStore Volumes to the ESXi Host............................................................................................ 45
Disk Formatting............................................................................................................................................................45
Virtual Volumes............................................................................................................................................................ 46
AppsOn: Virtual Machine Compute and Storage Collocation Rules for PowerStore X Clusters............. 46
vSphere Considerations...................................................................................................................................................46
VMware Paravirtual SCSI Controllers.................................................................................................................... 46
Virtual Disk Provisioning............................................................................................................................................ 46
Virtual Machine Guest Operating System Settings.............................................................................................47
Creating a File System................................................................................................................................................47
Post-Configuration Steps - Using the PowerStore system....................................................................................53
Presenting PowerStore Volumes to the Windows Host.................................................................................... 53
Creating a File System............................................................................................................................................... 53
Recommended Configuration Values Summary......................................................................................................... 76
Boot from SAN................................................................................................................................................................... 77
Fibre Channel Configuration........................................................................................................................................... 77
Pre-Requisites.............................................................................................................................................................. 78
Queue Depth.................................................................................................................................................................78
Solaris Host Parameter Settings................................................................................................................................... 78
Configuring Solaris native multipathing..................................................................................................................78
PowerPath Configuration with PowerStore Volumes........................................................................................ 79
Host storage tuning parameters..............................................................................................................................80
Post configuration steps - using the PowerStore system...................................................................................... 83
Partition alignment in Solaris.................................................................................................................................... 84
Appendix B: Troubleshooting....................................................................................................... 91
View Configured Storage Networks for NVMe/TCP................................................................................................91
View Configured Storage Networks for iSCSI............................................................................................................91
View NVMe/FC and SCSI/FC Target Ports...............................................................................................................92
View Physical Ethernet Ports Status........................................................................................................................... 92
View Discovered Initiators...............................................................................................................................................93
View Active Sessions........................................................................................................................................................93
Preface
As part of an improvement effort, revisions of the software and hardware are periodically released. Some functions that are
described in this document are not supported by all versions of the software or hardware currently in use. The product release
notes provide the most up-to-date information about product features. Contact your service provider if a product does not
function properly or does not function as described in this document.
1
Introduction
Topics:
• Purpose
Purpose
This document provides guidelines and best practices for attaching and configuring external hosts to PowerStore systems, either on their own or in conjunction with other storage systems. It includes information on topics such as multipathing, zoning, and timeouts. This document may also reference issues found in the field and notify you of known issues.
Regarding ESXi hosts, this document provides guidelines only for configuring ESXi hosts that are connected externally to
PowerStore. For configuring an internal ESXi host on PowerStore X models, refer to the PowerStore Virtualization Guide.
For further host connectivity best practices in conjunction with other Dell EMC storage systems, also refer to the E-Lab Host
Connectivity Guides. For details, refer to the E-Lab Interoperability Navigator at https://elabnavigator.dell.com.
2
Best Practices for Storage Connectivity
This chapter contains the following topics:
Topics:
• General SAN Guidelines
• Fibre Channel SAN Guidelines
• NVMe/FC SAN Guidelines
• iSCSI SAN Guidelines
• NVMe over TCP (NVMe/TCP) SAN Guidelines
• NVMe-oF General Guidelines
• SAN Connectivity Best Practices
NOTE: In hosts running a hypervisor, such as VMware ESXi, Microsoft Hyper-V or any clustering software, it is important
to ensure that the logical unit numbers of PowerStore volumes are consistent across all hosts in the hypervisor cluster.
Inconsistent LUNs may affect operations such as VM online migration or VM power-up.
Recommended Configuration
Consider the following recommendations when setting up a Fibre Channel SAN infrastructure.
● Use two separate fabrics. Each fabric should be on a different physical FC switch for resiliency.
○ Keep a consistent link speed and duplex across all paths to the PowerStore cluster per single host or a cluster of hosts.
● Balance the hosts between the two nodes of the appliance.
○ The PowerStore cluster can be shipped with various extension modules for Fibre Channel. If your PowerStore cluster
contains more than one extension I/O module per node, distribute the zoning among all I/O modules for highest
availability and performance.
○ The optimal number of paths depends on the operating system and server information. To avoid multipathing
performance degradation, do not use more than eight paths per device per host. It is recommended to use four paths.
○ With a multi-appliance cluster, it is highly advised to zone the host to as many appliances as possible, to achieve optimal
load distribution across the cluster. Be sure to keep the minimum/optimal zoning recommendation for each appliance.
NOTE: A multi-appliance cluster is not designed to provide better resiliency, but to provide better load balance. To
perform volume migration between appliances, a host must be zoned to both appliances.
● Use single initiator zoning scheme, using port WWN: Utilize single-initiator per multiple-target (1 : many) zoning scheme
when configuring zoning with a PowerStore cluster.
NOTE: Avoid using zoning based on switch port. Use only port WWN for zoning.
● Host I/O latency can be severely affected by FC SAN congestion. Minimize the use of ISLs by placing the host and storage
ports on the same physical switch. When this is not possible, ensure that there is sufficient ISL bandwidth and that both the
host and PowerStore cluster interfaces are separated by no more than two ISL hops.
● For more information about zoning best practices, see Fibre Channel SAN Topologies.
Prerequisites
Starting with PowerStore operating system version 2.0, NVMe/FC is supported.
PowerStore exposes two WWNs, one for the FC (SCSI WWN) and one for NVMe (NVMe WWN).
Steps
1. Using the PSTCLI fc_port show command or the WebUI Fibre Channel Ports screen (Hardware > Appliance > Ports
> Fibre Channel), find the corresponding SCSI WWN for each target port.
2. When using Fibre Channel, use the SCSI WWN when zoning a target port.
Example
Using the PSTCLI fc_port show command to locate the SCSI WWN of a target port:
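The following is a minimal sketch of such a query; the management address, user name, and password are placeholders, and the exact output columns can vary by PowerStoreOS version:
$ pstcli -d <cluster_mgmt_ip> -u admin -p <password> fc_port show
The output lists each Fibre Channel port together with its SCSI WWN and, on PowerStoreOS 2.0 and later, its NVMe WWN; use the SCSI WWN values when zoning for Fibre Channel and the NVMe WWN values when zoning for NVMe/FC.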
Recommended Configuration
Consider the following recommendations when setting up an NVMe/FC infrastructure.
● Use two separate fabrics. Each fabric should be on a different physical FC switch for resiliency.
○ Keep a consistent link speed and duplex across all paths to the PowerStore cluster per single host or a cluster of hosts.
● Balance the hosts between the two nodes of the appliance.
○ The PowerStore cluster can be shipped with various extension modules for Fibre Channel. If your PowerStore cluster
contains more than one extension I/O module per node, distribute the zoning among all I/O modules for highest
availability and performance.
○ The optimal number of paths depends on the operating system and server information. To avoid multipathing
performance degradation, do not use more than eight paths per device per host. It is recommended to use four paths.
○ With a multi-appliance cluster, it is highly advised to zone the host to as many appliances as possible, to achieve optimal
load distribution across the cluster. Be sure to keep the minimum/optimal zoning recommendation for each appliance.
NOTE: A multi-appliance cluster is not designed to provide better resiliency, but to provide better load balance. To
perform volume migration between appliances, a host must be zoned to both appliances.
● Use single initiator zoning scheme, using port WWN: Utilize single-initiator per multiple-target (1 : many) zoning scheme
when configuring zoning with a PowerStore cluster.
NOTE: Avoid using zoning based on switch port. Use only port WWN for zoning.
● Host I/O latency can be severely affected by FC SAN congestion. Minimize the use of ISLs by placing the host and storage
ports on the same physical switch. When this is not possible, ensure that there is sufficient ISL bandwidth and that both the
host and PowerStore cluster interfaces are separated by no more than two ISL hops.
● For more information about zoning best practices, see Fibre Channel SAN Topologies.
Additional Considerations
Review the following considerations when configuring hosts with PowerStore storage using NVMe/FC:
● NVMe/FC is supported with PowerStore operating system 2.0 and later.
Prerequisites
● Starting with PowerStore operating system 2.0, NVMe/FC is supported.
● PowerStore exposes two WWNs, one for the FC (SCSI WWN) and one for NVMe (NVMe WWN).
Steps
1. Using the PSTCLI fc_port show command or the WebUI Fibre Channel Ports screen (Hardware > Appliance > Ports
> Fibre Channel), find the corresponding SCSI WWN and NVMe WWN for each target port.
2. When zoning a target port for NVMe/FC, use the NVMe WWN.
Example
Using the PSTCLI fc_port show command to locate the NVMe WWN of a target port:
Recommended Configuration
Consider the following recommendations when setting up an iSCSI SAN infrastructure:
● Use two separate fabrics. Each fabric should be on a different physical switch for resiliency.
● The optimal number of paths depends on the operating system and server information. To avoid multipathing performance
degradation, do not use more than eight paths per device per host. It is recommended to use four paths.
● Keep a consistent link speed and duplex across all paths to the PowerStore cluster per single host or a cluster of hosts.
● With a multi-appliance cluster, it is highly advised to zone the host to as many appliances as possible, to achieve best load
distribution across the cluster. Be sure to keep the minimum/optimal zoning recommendations for each appliance.
NOTE: A multi-appliance cluster is not designed to provide better resiliency, but to provide better load balance. To
perform volume migration between appliances, a host must be zoned to both appliances.
● External hosts can be attached to a PowerStore cluster by either the embedded 4-port card or by a SLIC:
○ A host can be connected to 1-4 appliances. It is recommended to connect the host to as many appliances as possible to
allow volume migration to and from all appliances.
○ Hosts that are connected over the first two ports of the 4-port card are connected using ToR switches (also used for
PowerStore internal communication). With this configuration, it is recommended to use a dedicated VLAN, and if not
possible, use a separate subnet/network.
○ For hosts connected using any other port (that is, not the first two ports), use either dedicated switches or a dedicated
VLAN for iSCSI storage.
NOTE: VMware requires setting Jumbo Frames at the virtual switch (vSS or vDS) and VMkernel level.
● See your Ethernet switch user manual for implementation instructions.
● For detailed information about connecting the PowerStore appliance to the ToR switch, see the PowerStore Network
Planning Guide and the Network Configuration Guide for Dell PowerSwitch Series.
Additional Considerations
Review the following considerations when configuring hosts with PowerStore storage using iSCSI:
● Maximum number of network subnets supported with PowerStore over iSCSI SAN:
○ With PowerStore operating system 2.0 (or later), up to 32 subnets are supported, but only up to eight subnets are
supported per physical port.
○ With PowerStore operating system 1.x, only a single subnet is supported.
● See the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for iSCSI SAN support limitations regarding HBAs,
operating systems, and Direct-Attach.
Recommended Configuration
Consider the following recommendations with PowerStore storage using NVMe/TCP.
● Use two separate fabrics. Each fabric should be on a different physical switch for resiliency.
● The optimal number of paths depends on the operating system and server information. To avoid multipathing performance
degradation, do not use more than eight paths per device per host. It is recommended to use four paths.
● Keep a consistent link speed and duplex across all paths to the PowerStore cluster per a single host or a cluster of hosts.
● With a multi-appliance cluster, it is highly advised to zone the host to as many appliances as possible, to achieve best load
distribution across the cluster. Be sure to keep the minimum/optimal zoning recommendations for each appliance.
NOTE: A multi-appliance cluster is not designed to provide better resiliency, but rather to provide better load balance.
To perform volume migration between appliances, a host must be zoned to both appliances.
● External hosts can be attached using NVMe/TCP to a PowerStore cluster by either the embedded 4-port card or by a SLIC:
○ A host can be connected to 1-4 appliances. It is recommended to connect the host to as many appliances as possible to
allow volume migration to and from all appliances.
○ Hosts that are connected over the first two ports of the 4-port card are connected using ToR switches (also used for
PowerStore internal communication). With this configuration, it is recommended to use a dedicated VLAN, and if not
possible, use a separate subnet/network.
○ For hosts connected using any other port (that is, not the first two ports), use either dedicated Ethernet switch or a
dedicated VLAN.
○ The PowerStore cluster can be shipped with various extension modules. If your PowerStore cluster contains more
than one extension I/O module per node, distribute the connections among all I/O modules for highest availability and
performance.
● Ethernet switch recommendations:
○ Use non-blocking switches.
○ Use enterprise-grade switches.
○ Utilize at minimum 10 GbE interfaces.
● It is recommended to use dedicated NICs or iSCSI HBAs for PowerStore cluster and not to partition the interface (that is,
disable NIC Partitioning - NPAR).
● Enable the TCP Offloading Engine (TOE) on the host interfaces, to offload the TCP packet encapsulation from the CPU of
the host to the NIC or iSCSI HBA, and free up CPU cycles.
● It is recommended to use interfaces individually rather than using NIC Teaming (Link Aggregation), to combine multiple
interfaces into a single virtual interface.
● If Jumbo Frames are required, ensure that all ports (servers, switches, and system) are configured with the same MTU
value.
Additional Considerations
Review the following additional considerations when configuring hosts with PowerStore using NVMe/TCP.
● NVMe/TCP requires ports 8009 and 4420 to be open between PowerStore storage networks and each NVMe/TCP initiator.
● See the NVMe/TCP Host/Storage Interoperability Simple Support Matrix for supported NICs/HBA models and drivers with
NVMe/TCP and known limits.
● NVMe/TCP with vSphere ESXi requires vDS 7.0.3 (or later) or a VSS.
● For customers deploying NVMe/TCP environments at scale, consider leveraging SmartFabric Storage Software to automate
host and subsystem connectivity. For more information, see the SmartFabric Storage Software Deployment Guide.
Direct Attach
● A host must be connected at minimum with one path to each node for redundancy.
● See E-Lab for details of supported configuration with Direct Attach:
○ See the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC and iSCSI configurations.
○ See the E-Lab Dell EMC 32G FC-NVMe Simple Support Matrix for supported NVMe/FC configurations with Direct
Attach.
○ For a host that is directly attached to a PowerStore appliance, disable NVMe/FC support on the HBA. For details
on potential issues when directly connecting a host to PowerStore, see Dell EMC Knowledge Article 000200588
(PowerStore: After an upgrade...) and Dell EMC Knowledge Article 000193380 (PowerStoreOS 2.0: ESXi hosts do not
detect...).
○ See the E-Lab NVMe/TCP Host/Storage Interoperability Simple Support Matrix for supported NVMe/TCP configurations
with Direct Attach.
● The following diagram describes minimum connectivity with a single PowerStore appliance:
1. PowerStore appliance
2. Node
3. Host
NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.
● A host must be connected at minimum with one path to each node for redundancy.
● The following diagram describes a minimum connectivity with a single PowerStore appliance.
NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.
● It is recommended that a host is connected with two paths to each node for redundancy.
● The following diagram describes simple connectivity with a single PowerStore appliance.
NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.
● It is recommended that a host is connected with two paths to each node for redundancy.
● The following diagram describes simple connectivity with two (2) PowerStore appliances.
1. PowerStore appliance
2. Node
3. ToR/iSCSI Switch
4. Host
NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.
● It is recommended that a host is connected with two paths to each node on each appliance for redundancy.
● The following diagram describes simple connectivity with three (3) PowerStore appliances.
NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.
● It is recommended that a host is connected with two paths to each node on each appliance for redundancy.
● The following diagram describes simple connectivity with four (4) PowerStore appliances.
Chapter Scope
This chapter provides guidelines only for configuring ESXi hosts that are connected externally to PowerStore. For configuring
an internal ESXi host on PowerStore X models, see the Dell EMC PowerStore Virtualization Infrastructure Guide document at
https://dell.com/support.
NOTE: This document includes links to external documents. These links may change. If you cannot open a link, contact the
vendor for information.
The following recommendations are listed with their impact category, severity, and the section where they are described:
● ESXi configuration: Keep the UNMAP priority for the host at the lowest possible value (the default value for ESXi 6.5). (Stability & Performance; Mandatory) See Confirming UNMAP Priority.
● Specify ESXi as the operating system for each defined host. (Serviceability; Mandatory) See Presenting PowerStore Volumes to the ESXi Host.
● Path selection policy for SCSI: VMW_PSP_RR. (Stability & Performance; Mandatory) See Configuring vSphere Native Multipathing.
● Path selection policy for NVMe: LB-IOPS. (Performance; Recommended) See Configuring High Performance Multipathing (HPP) with NVMe.
● Alignment: Guest OS virtual machines should be aligned. (Storage efficiency & Performance; Warning) See Disk Formatting.
● iSCSI configuration: Configure end-to-end Jumbo Frames. (Performance; Recommended) See Jumbo Frames.
● iSCSI configuration: Disable Delayed ACK on ESXi. (Stability; Recommended) See Delayed ACK.
● iSCSI configuration: Adjust LoginTimeOut to 30. (Stability; Recommended) See Login Timeout.
● Path switching: Switch for every I/O. (Performance; Recommended) See Configuring vSphere Native Multipathing.
● Virtual disk provisioning: Use thin provisioned virtual disks. (Performance; Recommended) See Virtual Disk Provisioning.
● Virtual machine configuration: Configure virtual machines with Paravirtualized SCSI controllers. (Stability & Performance; Recommended) See VMware Paravirtual SCSI Controllers.
● RDM volumes: In the Guest OS, span RDM volumes used by the virtual machine across SCSI controllers. (Performance; Recommended) See Virtual Machine Guest Operating System Settings.
NOTE: For information about virtualization and Virtual Volumes, see the following white papers:
● Dell EMC PowerStore Virtualization Infrastructure Guide at https://dell.com/support
● Dell PowerStore: VMware vSphere Best Practices
NOTE: As noted in Dell EMC Knowledge Article 000126731 (PowerStore - Best practices for VMFS datastores...), when
using vSphere v6.7 there is a known issue relating to VMFS deadlock. To resolve the issue, install the latest vSphere version.
Pre-Requisites
When attaching a host to a PowerStore cluster using Fibre Channel, ensure that the following pre-requisites are met:
● Review Fibre Channel SAN Guidelines before you proceed.
● Ensure that you are using PowerStore operating system 2.0 (or later).
● See the E-Lab Dell EMC 32G FC-NVMe Simple Support Matrix for supported Fibre Channel HBA models and drivers with
NVMe/FC and known limits.
● Verify that all HBAs have supported driver and firmware versions according to the Support Matrix at the E-Lab Navigator
(https://elabnavigator.dell.com).
● Verify that all HBAs BIOS settings are configured according to Dell EMC E-Lab Navigator recommendations (https://
elabnavigator.dell.com).
● It is highly recommended to install the nvme-cli package.
● Locate your Fibre Channel HBA information:
systool -c fc_host -v
Known Issues
For a host directly attached to the PowerStore appliance, disable NVMe/FC support on the HBA. For details on potential issues
when directly connecting a host to PowerStore, see Dell EMC Knowledge Article 000200588 (PowerStore: After an upgrade...)
and Dell EMC Knowledge Article 000193380 (PowerStoreOS 2.0: ESXi hosts do not detect...).
Prerequisites
You can configure the host NVMe Qualified Name (NQN) using either Hostname or UUID. For visibility and simplicity, it is
recommended to use Hostname.
Steps
1. Connect to the ESXi host as root.
2. Run the following esxcli command for the Host NQN to be based on the hostname and verify that the setting is changed.
3. Ensure that the value complies with NVMe Base Specification, Chapter 4.5 (NVMe Qualified Names).
4. Reboot the host.
5. Run the esxcli nvme info get command to confirm that the Host NQN was modified correctly.
Steps
1. Connect to the host as root.
2. Run the following esxcli command:
Steps
1. Connect to the host as root.
2. Run the following esxcli command:
NOTE: lpfc_enable_fc4_type=3 enables both FCP and NVMe/FC, and lpfc_enable_fc4_type=1 enables
only FCP.
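A minimal sketch of the command referenced above, assuming the Emulex lpfc driver module (verify the parameter against your HBA documentation before applying it):
# esxcli system module parameters set -m lpfc -p lpfc_enable_fc4_type=3
A host reboot is required for the module parameter change to take effect.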
Known Issues
If you are using NVMe/FC, it is highly recommended to upgrade to PowerStore operating system 2.1.1. For information, see Dell
EMC Knowledge Article 000196492 (PowerStore: IO on ESXi VMs...).
iSCSI Configuration
This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster
using iSCSI.
NOTE: This section applies only for iSCSI. If you are using any other protocol with ESX, see the relevant configuration
section.
Pre-Requisites
The following pre-requisites should be met before attaching hosts to a PowerStore cluster using iSCSI:
● Review iSCSI SAN Guidelines before you proceed.
● See the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported NIC/iSCSI HBA models and drivers.
● Verify that all HBAs have supported driver, firmware, and BIOS versions.
● Follow the operating system recommendations for installation and setup of the appropriate NIC/iSCSI HBA for your host.
● It is recommended to install the latest driver version (patch), as described in the VMware support site for each specific
NIC/iSCSI HBA.
● Review the VMware vSphere Storage document for the vSphere version running on the ESXi hosts for a requirement list,
limitations, and other configuration considerations. For example, for vSphere 7.0u3, see VMware vSphere Storage.
Steps
1. Dell Technologies recommends creating four target iSCSI IP addresses (two per node) on the same subnet/VLAN.
2. Create a single vSwitch (or vDS) consisting of two uplink physical ports (each connected to a different switch).
3. Create two VMkernel ports on the same subnet as the storage cluster iSCSI portals (the communication must not be
routable).
Example:
● iSCSI-A-port0 1.1.1.1/24
● iSCSI-A-port1 1.1.1.2/24
● iSCSI-B-port0 1.1.1.3/24
● iSCSI-B-port1 1.1.1.4/24
● vmk1 1.1.1.10/24
● vmk2 1.1.1.11/24
4. Ensure that both VMkernel interfaces are attached to the same vSwitch.
5. Override the default Network Policy for iSCSI. For details, see VMware vSphere documentation.
For example, with ESXi 7.0, see Multiple Network Adapters in iSCSI or iSER Configuration.
Steps
1. Dell Technologies recommends creating four target iSCSI IP addresses (two per node) on two different subnets/VLANs.
2. Create a single vSwitch (or vDS) consisting of two uplink physical ports (each connected to a different switch).
3. Create two VMkernel ports: One on VLAN-A and another on VLAN-B as the storage iSCSI portals.
NOTE: It is highly recommended not to use routing on iSCSI.
Example:
4. Ensure that both VMkernel interfaces are attached to the same vSwitch.
5. Override the default Network Policy for iSCSI. For details, see the VMware vSphere documentation.
For example, with ESXi 7.0, see Multiple Network Adapters in iSCSI or iSER Configuration.
Steps
1. List current switches.
Verify that the vSwitch name that you intend to use is not in use.
4. Configure uplinks.
5. Create port groups for each VMkernel interface (repeat the steps for the secondary VMkernel interface).
The example below is for the first VMkernel interface. Use the same procedure for the second VMkernel interface.
$ esxcli network vswitch standard portgroup policy failover set -a vmnic1 -p vlan801
$ esxcli network vswitch standard portgroup policy failover set -a vmnic2 -p vlan802
10. Verify that TCP ports are open (-s specifies source IP).
Repeat on both vmk1 and vmk2, and for all storage IPs visible from each VMkernel port.
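As an illustration, assuming the example addresses above (vmk1 at 1.1.1.10 and an iSCSI portal at 1.1.1.1), the check can be run from the ESXi shell:
# nc -s 1.1.1.10 -z 1.1.1.1 3260
A successful result confirms that TCP port 3260 is reachable from that VMkernel interface.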
Steps
1. Activate the Software iSCSI adapter.
NOTE: You can activate only one software iSCSI adapter.
Jumbo Frames
Configure end-to-end Jumbo Frames for optimal performance.
When using iSCSI with ESXi hosts and PowerStore, it is recommended to configure end-to-end Jumbo Frames (MTU=9000) for
optimal performance. Jumbo Frames are Ethernet frames that are larger than the standard 1500-byte MTU.
For information about configuring Jumbo Frames with iSCSI on ESXi, see VMware Knowledge Article 1007654 (iSCSI and
Jumbo Frames...).
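A minimal sketch of the corresponding MTU settings on a standard vSwitch and its VMkernel ports (the vSwitch and VMkernel names are placeholders; for a vDS, set the MTU through the vSphere Client):
$ esxcli network vswitch standard set -v vSwitch1 -m 9000
$ esxcli network ip interface set -i vmk1 -m 9000
$ esxcli network ip interface set -i vmk2 -m 9000
The physical switch ports and the PowerStore iSCSI interfaces must be configured with a matching MTU value.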
Login Timeout
Follow these steps to set the iSCSI login timeout.
Steps
1. Connect to the host as root.
2. Run the following command:
Example
Replacing VMHBA number with the iSCSI vmhba:
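A minimal sketch of the command, assuming the software iSCSI adapter is vmhba64:
# esxcli iscsi adapter param set -A vmhba64 -k LoginTimeout -v 30
# esxcli iscsi adapter param get -A vmhba64 | grep LoginTimeout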
No-Op Interval
Follow these steps to set the iSCSI No-Op interval.
Steps
1. Connect to the host as root.
Example
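A hedged sketch of the corresponding command, assuming the software iSCSI adapter is vmhba64 and the standard ESXi parameter key for the No-Op-Out interval; confirm the recommended interval value for your environment:
# esxcli iscsi adapter param set -A vmhba64 -k NoopOutInterval -v <interval>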
Known Issues
● When using Jumbo Frames, ensure that all ports (Virtual Switch, VMkernel port, Switch Ports, and PowerStore iSCSI
interfaces) are configured with the correct MTU value. For information, see Dell EMC Knowledge Article 000196316
(PowerStore: After increasing the MTU...).
● When using iSCSI software initiator with ESXi and PowerStore storage, it is recommended to use only lower case characters
in the IQN to correctly present the PowerStore volumes to ESXi. For information, see VMware Knowledge Article 2017582
(Recommended characters in the...).
NVMe/TCP Configuration
This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster
using NVMe/TCP.
NOTE: This section applies only to NVMe/TCP. If you are using any other protocol with ESX, see the relevant configuration
section.
Pre-Requisites
The following pre-requisites should be met before attaching hosts to a PowerStore cluster using NVMe/TCP:
● Review NVMe over TCP (NVMe/TCP) SAN Guidelines before you proceed.
● See the E-Lab NVMe/TCP Host/Storage Interoperability Simple Support Matrix for supported NIC/HBA models and drivers
with NVMe/TCP and known limits.
● Verify that all HBAs have supported driver, firmware, and BIOS versions.
● Follow the operating system recommendations for installation and setup of the appropriate NIC/iSCSI HBA for your host.
● It is recommended to install the latest driver version (patch), as described in the VMware support site for each specific
NIC/iSCSI HBA.
● TCP ports 4420 and 8009 are open between each host interface and PowerStore subsystem port. These ports should be
open on the interfaces where NVMe/TCP is running.
● Review the VMware vSphere Storage document for the vSphere version running on the ESXi hosts for a requirement list,
limitations, and other configuration considerations. For example, for vSphere 7.0u3, see VMware vSphere Storage.
Prerequisites
You can configure the host NVMe Qualified Name (NQN) using either Hostname or UUID. For visibility and simplicity, it is
recommended to use Hostname.
Steps
1. Connect to the ESXi host as root.
2. Run the following esxcli command for the Host NQN to be based on the hostname and verify that the setting is changed.
3. Ensure that the value complies with NVMe Base Specification, Chapter 4.5 (NVMe Qualified Names).
4. Reboot the host.
5. Run the esxcli nvme info get command to confirm that the Host NQN was modified correctly.
Steps
1. Dell Technologies recommends creating four target NVMe/TCP IP addresses (two per node) on two different subnets/
VLANs.
2. Create a single vSwitch (or vDS) consisting of two uplink physical ports (each connected to a different switch).
3. Create two VMkernel ports, one on VLAN-A and another on VLAN-B, as the storage NVMe/TCP portals.
NOTE: It is highly recommended not to use routing on NVMe/TCP.
4. Ensure that both VMkernel interfaces are attached to the same vSwitch.
5. Override the default Network Policy for iSCSI. For details, see the VMware vSphere documentation.
For example, with ESXi 7.0, see Multiple Network Adapters in iSCSI or iSER Configuration.
The following example demonstrates configuring networking for a PowerStore cluster running PowerStore operating system 2.1,
using a single virtual standard switch and the CLI.
1. List current vSwitches and verify that the vSwitch name is not in use.
2. Create a new Virtual Standard Switch (ensure that the name is unique).
4. Configure uplinks (In the example below, 10 Gb interfaces are used. These interfaces must not be used by any other vSwitch
or vDS).
5. Create port groups for each VMkernel interface. Repeat the steps for the secondary VMkernel interfaces.
$ esxcli network vswitch standard portgroup policy failover set -a vmnic1 -p vlan801
$ esxcli network vswitch standard portgroup policy failover set -a vmnic2 -p vlan802
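When configuring through the CLI, each VMkernel port used for NVMe/TCP should also be enabled for the NVMe/TCP service; a minimal sketch, assuming vmk1 and vmk2 are the VMkernel ports created for the two port groups above:
$ esxcli network ip interface tag add -i vmk1 -t NVMeTCP
$ esxcli network ip interface tag add -i vmk2 -t NVMeTCP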
Steps
1. Activate the software NVMe/TCP adapter.
a. In the vSphere Client, go to the ESXi host.
b. Click the Configure tab.
c. Under Storage, click Storage Adapters and then click the Add icon.
d. Select the NVMe/TCP adapter for each vmnic on the NVMe/TCP virtual switch.
In the example, there should be two NVMe/TCP adapters, one enabled on vmnic1 and the other enabled on vmnic2.
2. Verify that TCP ports 4420 and 8009 are open (-s specifies source IP). Repeat on both vmk1 and vmk2, and for all storage
IPs visible from each VMkernel port.
c. On the Add Controller window, enter any of the PowerStore NVMe/TCP enabled ports IP addresses, and select port
8009 (discovery controller).
d. From the list, select the subsystem ports that you want to connect to, and click OK.
These NVMe subsystem ports must be on the same VLAN/subnet that the vmhba is attached to.
e. Repeat these steps for the other NVMe/TCP adapter.
Using CLI
Steps
1. Use the following command to view the configured NVMe adapters:
2. Verify that the ESXi can see the target subsystem controller/s.
3. Use the following commands to discover and connect to the NVMe subsystem:
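A minimal sketch of these esxcli nvme commands; the adapter name, storage IP address, and subsystem NQN are placeholders:
$ esxcli nvme adapter list
$ esxcli nvme controller list
$ esxcli nvme fabrics discover -a vmhba65 -i <storage_ip> -p 8009
$ esxcli nvme fabrics connect -a vmhba65 -i <storage_ip> -p 4420 -s <subsystem_nqn>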
Known Issues
If you are using NVMe/TCP, it is highly recommended to upgrade to PowerStore operating system 2.1.1. For details, see Dell
EMC Knowledge Article 000196492 (PowerStore: IO on ESXi VMs...).
Steps
1. Verify that the following parameters are enabled (that is, set to 1):
● DataMover.HardwareAcceleratedMove
● DataMover.HardwareAcceleratedInit
● VMFS3.HardwareAcceleratedLocking
2. If any of the above parameters are not enabled, click the Edit icon and then click OK to adjust them.
Example
The examples below can be used to query for VAAI status and to enable VAAI using CLI.
NOTE: These settings enable ATS-only on supported VMFS Datastores, as noted in VMware Knowledge Article 1021976
(Frequently Asked Questions...).
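For reference, a minimal sketch of querying and enabling one of these parameters from the ESXi shell (repeat for each of the three parameters listed above):
# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1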
NOTE: When setting Disk.DiskMaxIOSize to 1 MB on ESXi hosts connected to arrays other than PowerStore, performance
on large I/Os may be impacted.
NOTE: Setting the maximum I/O size is only required for ESXi versions earlier than 7.0, unless the ESXi version used is not
exposed to the issue covered in VMware Knowledge Article 2137402 (Virtual machines using EFI firmware...).
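Where the setting is required, a minimal sketch of limiting the maximum I/O size to 1 MB (the value is specified in KB):
# esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 1024
# esxcli system settings advanced list -o /Disk/DiskMaxIOSize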
Prerequisites
NOTE: Provisioning Virtual Disks with UNMAP set to a non-default priority on a DataStore provisioned on PowerStore may
result in an increased amount of write I/Os to the storage subsystem. It is therefore highly recommended to verify that
UNMAP is set to Low priority.
NOTE: See Dell EMC Knowledge Article 000126731 (Best practices for VMFS datastores...) for further unmap-related
recommendations when doing Virtual Machine File System (VMFS) bootstorm or failover with VMware Site Recovery
Manager (SRM) on VMFS datastores from ESXi hosts connected to PowerStore.
To set UNMAP priority on a datastore:
Steps
1. On most ESXi hosts, the default UNMAP priority is set to Low. It is recommended to verify, using ESX CLI, that the
datastores are configured with Low priority.
2. To verify that a datastore is set to Low priority:
a. List the file systems:
3. If required, run the following ESX CLI command to modify the UNMAP priority to Low:
[~] esxcli storage vmfs reclaim config set --volume-label VMFS1 -p low
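A minimal sketch of the verification commands referenced above, assuming a datastore labeled VMFS1:
[~] esxcli storage filesystem list
[~] esxcli storage vmfs reclaim config get --volume-label VMFS1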
● UCS FC Adapter Policy - The total number of I/O requests that can be outstanding on a per-virtual Host Bus Adapter
(vHBA) in UCS.
● Cisco nfnic lun_queue_depth_per_path - Cisco nfnic driver setting to set the LUN queue depth per path. The default value
for this setting is 32 (recommended). For details on Cisco nfnic settings, see the Cisco nfnic driver documentation on the
Cisco website.
● Disk.SchedNumReqOutstanding - The total number of outstanding commands that are permitted from all virtual machines
collectively on the host to a LUN. For details, see VMware vSphere documentation.
● Disk.SchedQuantum - The maximum number of consecutive "sequential" I/Os allowed from one VM before forcing a switch
to another VM. For details, see VMware vSphere documentation.
● Disk.DiskMaxIOSize - The maximum I/O size ESX allows before splitting I/O requests. For details, see Setting the Maximum
I/O.
● XCOPY (/DataMover/MaxHWTransferSize) - The maximum number of blocks used for XCOPY operations. For details, see
VMware vSphere documentation.
Configuring NMP Round Robin as the Default Pathing Policy for All
PowerStore Volumes
Follow this method to configure NMP Round Robin as the default pathing policy for all PowerStore volumes using the ESXi
command line.
NOTE: Use this method when no PowerStore volume is presented to the host. PowerStore volumes already presented to
the host are not affected by this method (unless they are unmapped from the host).
NOTE: With ESXi 6.7 hosts that are connected to PowerStore, it is recommended to disable action_OnRetryErrors.
For details on this ESXi parameter, see VMware Knowledge Article 67006 (Active/Passive or ALUA based...).
NOTE: Using this method does not impact any non-PowerStore volume that is presented to the ESXi host.
Steps
1. Open an SSH session to the host as root.
2. Run the following command to configure the default pathing policy for newly defined PowerStore volumes to Round Robin
with path switching after each I/O packet:
NOTE: Use the disable_action_OnRetryErrors parameter only with ESXi 6.7 hosts.
This command also sets the NMP Round Robin path switching frequency for newly defined PowerStore volumes to switch
every I/O.
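A hedged sketch of such a claim rule; the DellEMC vendor string, the PowerStore model string, and the exact option set are assumptions to verify against your environment and ESXi version:
# esxcli storage nmp satp rule add -c tpgs_on -e "PowerStore" -t vendor -V DellEMC -M PowerStore -s VMW_SATP_ALUA -P VMW_PSP_RR -O iops=1
On ESXi 6.7 hosts only, append -o disable_action_OnRetryErrors to the same command, as noted above.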
NOTE: Using this method does not impact any non-PowerStore volumes that are presented to the ESXi host.
For details, see VMware Knowledge Article 1017760 (Changing the default pathing...) and VMware Knowledge Article 2069356
(Adjusting Round Robin IOPS...) on the VMware website.
Steps
1. Open an SSH session to the host as root.
2. Run the following command to obtain the NAA of PowerStore LUNs presented to the ESXi host:
The following example demonstrates issuing the esxcli storage nmp path list command to obtain the NAA of all
PowerStore LUNs presented to the ESXi host:
3. Run the following command to modify the path selection policy on the PowerStore volume to Round Robin:
For example:
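A minimal sketch, using a placeholder NAA ID:
# esxcli storage nmp device set --device="<NAA ID>" --psp=VMW_PSP_RR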
4. Run the following command to set the NMP Round Robin path switching frequency on PowerStore volumes from the default
value (1000 I/O packets) to 1:
# esxcli storage nmp psp roundrobin deviceconfig set --device="<NAA ID>" --iops=1 --type=iops
For example:
5. Run the following command to validate that changes were applied to all PowerStore LUNs:
Each listed PowerStore LUN should have the following NMP settings:
Configuring HPP Round Robin as the Default Pathing Policy for All
PowerStore Volumes
Follow this method to configure HPP Round Robin as the default pathing policy for all PowerStore volumes, using the ESXi
command line.
NOTE: Using this method does not impact any non-PowerStore volumes that are presented to the ESXi host, or SCSI
(FC/iSCSI) volumes.
Steps
1. Open an SSH session to the host as root.
$ esxcli storage hpp device list | grep "Device Display Name: NVMe\|Path Selection"
Device Display Name: NVMe TCP Disk (eui.b635f9c20e1cb3658ccf096800ce9565)
Path Selection Scheme: LB-IOPS
Path Selection Scheme Config: {iops=1;}
Steps
1. Open an SSH session to the host as root.
2. Run the following command to retrieve the list of namespaces (in the example, there are three namespaces: NSID 50, 51,
and 52):
3. Run the following command to view the information for each of the devices listed in the previous step (in the example,
information is displayed for NSID 50):
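A hedged sketch of the commands behind these steps; the device identifier is a placeholder, and the exact sub-commands can vary by ESXi version:
$ esxcli nvme namespace list
$ esxcli storage hpp device list -d <device_identifier>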
PowerStore Considerations
When host configuration is completed, you can use the PowerStore storage from the host.
NOTE: When connecting an ESXi host to PowerStore, LUN IDs 254 and 255 may have a dead status. These LUNs
represent the Virtual Volume Protocol Endpoints (PE).
You can create, present, and manage volumes that are accessed from the host using PowerStore Manager, CLI, or REST API.
See the PowerStore Manager Online Help, CLI Reference Guide, or REST API Reference Guide for additional information.
The Dell EMC Virtual Storage Integrator (VSI) version 8.4 and later plug-in can be used to provision from within Virtual Machine
File System (VMFS) datastores and Raw Device Mapping volumes on PowerStore. Furthermore, the Dell EMC VSI Storage
Viewer version 8.4 and later plug-in extends the vSphere Client to facilitate the discovery and identification of PowerStore
storage devices that are allocated to VMware ESXi hosts and virtual machines.
For information about using these two vSphere Client plug-ins, see the VSI Unified Storage Management Product Guide and the
VSI Storage Viewer Product Guide.
NOTE: Using data reduction and/or encryption software on the host side affects the PowerStore cluster data reduction.
Disk Formatting
Review the following considerations when you create volumes in PowerStore for a vSphere ESXi host:
● Disk logical block size - The only logical block (LB) size that is supported by vSphere ESXi for presenting volumes is 512
bytes.
Virtual Volumes
On PowerStore operating system versions earlier than 2.1.1, it is recommended to avoid creating a single host group containing all ESXi
hosts when multiple Virtual Volumes are mapped to these hosts. For information, see Dell EMC Knowledge Article 000193872
(PowerStore: Intermittent vVol bind...).
It is recommended to create a dedicated host object for each ESXi host and mount the Virtual Volume datastore on all ESXi hosts in the
cluster.
If you require access to regular VMFS datastores in addition to Virtual Volumes, map each of the volumes to each of the ESXi
hosts.
vSphere Considerations
NOTE: For details on SCSI-3 Persistent Reservations (SCSI3-PRs) on a virtual disk (VMDK) support with PowerStore
storage, see Dell EMC Knowledge Article 000191117 (PowerStore: SCSI-3 Persistent Reservations Support).
NOTE: File system configuration and management are out of the scope of this document.
It is recommended to create the file system using its default block size (using a nondefault block size may lead to unexpected
behavior). See your operating system and file system documentation.
● To re-enable UNMAP on the host (after file system creation):
fsutil behavior set DisableDeleteNotify 0
NOTE: Before you proceed, review Fibre Channel and NVMe over Fibre Channel SAN Guidelines.
Pre-Requisites
This section describes the pre-requisites for FC HBA configuration.
● Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC HBA models and drivers.
● Verify all HBAs are at the supported driver, firmware and BIOS versions.
● Verify all HBAs BIOS settings are configured according to E-Lab recommendations. Follow the procedures in one of the
following documents according to the FC HBA type:
○ For Qlogic HBAs, refer to Dell EMC Host Connectivity with Qlogic Fibre Channel and iSCSI HBAs and Converged
Network Adapters (CNAs) for the Windows Environment.
○ For Emulex HBAs, refer to Dell EMC Host Connectivity with Emulex Fibre Channel and iSCSI HBAs and Converged
Network Adapters (CNAs) for the Windows Environment.
○ For Cisco UCS fNIC HBAs, refer to the Cisco UCS Virtual Interface Card Drivers for Windows Installation Guide for
complete driver installation instructions.
iSCSI Configuration
This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster using
iSCSI.
NOTE: This section applies only to iSCSI. If you are using only Fibre Channel with Windows and PowerStore, go to Fibre
Channel HBA Configuration.
Steps
1. Dell Technologies recommends creating four target iSCSI IP addresses (two per node) on the same subnet/VLAN.
2. Configure two iSCSI interfaces on the same subnet as the storage cluster iSCSI portals.
Example:
● iSCSI-A-port0 1.1.1.1/24
● iSCSI-A-port1 1.1.1.2/24
● iSCSI-B-port0 1.1.1.3/24
● iSCSI-B-port1 1.1.1.4/24
● NIC0 1.1.1.10/24
● NIC1 1.1.1.11/24
Next steps
NOTE: The Microsoft iSCSI Initiator default configuration ignores multiple NICs on the same subnet. When multiple NICs
are on the same subnet, use the Advanced button in the Log On to Target dialog box of the Microsoft iSCSI Software
Initiator UI to associate a specific NIC with a specific SP port.
Steps
1. Dell recommends creating four target iSCSI IP addresses (two per node), on two different subnets/VLANs.
2. Configure two iSCSI interfaces, one on each subnet, matching the storage cluster iSCSI portals.
Example:
● iSCSI-A-port0 1.1.1.1/24
● iSCSI-A-port1 1.1.2.1/24
● iSCSI-B-port0 1.1.1.2/24
● iSCSI-B-port1 1.1.2.2/24
● NIC0 1.1.1.10/24
● NIC1 1.1.2.10/24
Steps
1. Open PowerShell on the host.
2. Run the following commands to install MPIO if it is not already installed:
4. Run one of the following commands to set RoundRobin failover policy or Least Queue Depth failover policy, respectively:
● Round-Robin
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
Get-MPIOSetting
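For reference, a hedged sketch of the surrounding PowerShell steps; the Windows feature name and the DellEMC/PowerStore hardware ID strings are assumptions to verify for your Windows version:
# Install the MPIO feature (Windows Server)
Install-WindowsFeature -Name Multipath-IO
# Add the PowerStore hardware ID so the Microsoft DSM claims PowerStore devices
New-MSDSMSupportedHW -VendorId "DellEMC" -ProductId "PowerStore"
# Least Queue Depth as the alternative load-balance policy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD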
NOTE: Refer to the PowerStore Configuring Volumes Guide for additional information.
NOTE: Creating a file system with UNMAP enabled on a host connected to PowerStore may result in an
increased amount of write I/Os to the storage subsystem. It is highly recommended to disable UNMAP during file system
creation.
To disable UNMAP during file system creation:
1. Open a Windows CMD window on the host.
2. Run the following fsutil command to temporarily disable UNMAP on the host (before creating the file system):
3. Once file system creation is complete, reenable UNMAP by running the following command:
NOTE: To verify the current setting of the file system, run the following fsutil command:
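A minimal sketch of the fsutil commands referenced in the steps above:
fsutil behavior set DisableDeleteNotify 1
fsutil behavior set DisableDeleteNotify 0
fsutil behavior query DisableDeleteNotify
The first command temporarily disables UNMAP before the file system is created, the second re-enables it afterward, and the third queries the current setting.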
● Temporarily disable UNMAP during file system creation. (Performance; Recommended) See Creating a File System:
○ When creating a file system using the mke2fs command, use the "-E nodiscard" parameter.
○ When creating a file system using the mkfs.xfs command, use the "-K" parameter.
Pre-Requisites
When attaching a host to a PowerStore cluster using Fibre Channel, ensure that the following pre-requisites are met:
● Review Fibre Channel SAN Guidelines before you proceed.
● See the Dell EMC E-Lab Navigator (https://elabnavigator.dell.com) for supported Fibre Channel HBA models and drivers.
● Verify that all HBAs have supported driver and firmware versions according to the Support Matrix at Dell EMC E-Lab
Navigator (https://elabnavigator.dell.com).
● Verify that all HBAs BIOS settings are configured according to Dell EMC E-Lab recommendations.
● Locate your Fibre Channel HBA information:
systool -c fc_host -v
Pre-Requisites
When attaching a host to a PowerStore cluster using NVMe/FC, ensure that the following pre-requisites are met:
● Review NVMe/FC SAN Guidelines before you proceed.
● PowerStore operating system 2.0 (or later) is required.
● See the E-Lab Dell EMC 32G FC-NVMe Simple Support Matrix for supported Fibre Channel HBA models and drivers with
NVMe/FC and known limits.
● Verify that all HBAs have supported driver and firmware versions according to the Support Matrix at Dell EMC E-Lab
Navigator (https://elabnavigator.dell.com).
● Verify that all HBAs BIOS settings are configured according to Dell EMC E-Lab recommendations.
● It is highly recommended to install the nvme-cli package.
● Locate your Fibre Channel HBA information:
systool -c fc_host -v
Known Issues
For a host directly attached to the PowerStore appliance, disable NVMe/FC support on the HBA. For details on potential issues
when directly connecting a host to PowerStore, see Dell EMC Knowledge Article 000200588 (PowerStore: After an upgrade...)
and Dell EMC Knowledge Article 000193380 (PowerStoreOS 2.0: ESXi hosts do not detect...).
Steps
1. Connect to the ESXi host as root.
2. Edit the /etc/nvme/hostnqn file and modify the UUID format to Hostname format.
Before:
# nvme show-hostnqn
nqn.2014-08.org.nvmexpress:uuid:daa45a0b-d371-45f6-b071-213787ff0917
After:
# nvme show-hostnqn
nqn.2014-08.org.nvmexpress:Linux-Host1
3. The value must comply with NVMe Express Base Specification, Chapter 4.5 (NVMe Qualified Names).
4. If you want to revert back to UUID format, run the following command to create a new NQN and update the /etc/nvme/
hostnqn file:
# nvme gen-hostnqn
nqn.2014-08.org.nvmexpress:uuid:51dc3c11-35b6-e311-bcdd-001e67a3bceb
Steps
1. Access the Linux host as root.
2. Edit the /etc/modprobe.d/qla2xxx.conf configuration file with the following data:
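A hedged sketch of the expected file contents; the ql2xnvmeenable parameter name is an assumption based on the upstream qla2xxx driver and should be verified against your HBA documentation:
options qla2xxx ql2xnvmeenable=1
As with the Emulex flow below, rebuild the initramfs (dracut --force) and reboot for the change to take effect.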
Steps
1. Access the Linux host as root.
NOTE: lpfc_enable_fc4_type=3 enables both FCP and NVMe/FC, and lpfc_enable_fc4_type=1 enables
only FCP.
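A hedged sketch of the corresponding module option, assuming it is placed in /etc/modprobe.d/lpfc.conf (the file name is an assumption; the parameter value is taken from the note above):
options lpfc lpfc_enable_fc4_type=3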
# dracut --force
# systemctl reboot
iSCSI Configuration
This section provides an introduction to the recommended configuration to be applied when attaching hosts to a PowerStore
cluster using iSCSI.
NOTE: This section applies only to iSCSI. If you are using any other protocol with Linux, see the relevant configuration
section.
NOTE: Be sure to review the iSCSI SAN Guidelines before you proceed.
Pre-Requisites
Before configuring iSCSI, the following pre-requisites should be met:
● Follow the operating system recommendations for installation and setup of the appropriate NIC/iSCSI HBA for your system.
● It is recommended to install the latest driver version (patch), as described in the operating system support site for each
specific NIC/iSCSI HBA.
● Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported NIC/iSCSI HBA models and
drivers.
● Configure networking according to PowerStore best practices:
○ If you are using a PowerStore T model and utilizing only the two bonded ports (the first two ports on the Mezz card), it is
recommended to configure them as an LACP port channel across the two switches and configure proper MC-LAG (VLTi
or VPC configuration between the switches).
NOTE: If a port channel is not properly configured on the switch side, the bond operates as active/passive, and the
appliance bandwidth cannot be fully utilized.
○ If you are using a PowerStore T model and utilizing any other port (not bonded), there is no need to configure any port
channel.
○ For information, see the Dell EMC PowerStore Networking Guide for PowerStore T Models on the support site
(https://www.dell.com/support).
Steps
1. Dell recommends creating four target iSCSI IP addresses (two per node) on the same subnet/VLAN.
2. Configure two iSCSI interfaces on the same subnet as the storage cluster iSCSI portals.
Example:
Description IP Address
Host (NIC-0) 1.1.1.10/24
Host (NIC-1) 1.1.1.11/24
Node-A-Port0 1.1.1.1/24
Node-A-Port1 1.1.1.2/24
Node-B-Port0 1.1.1.3/24
Node-B-Port1 1.1.1.4/24
Policy-Based Routing
This topic outlines policy-based routing as a solution to the single network subnet limitation (recommended solution).
This solution is based on adding routing tables and rules, binding source IP address for each route, and adding those as default
gateways for each network interface.
With this solution, a dedicated routing table is defined for each interface, so the default routing table is no longer needed for those interfaces.
For additional technical information on Policy-Based Routing, see RedHat Knowledge Article 30564 (How to connect...).
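A minimal sketch of such a configuration using iproute2 commands, assuming the interface names (p2p1, p2p2), the addresses from the earlier example (1.1.1.10 and 1.1.1.11 on 1.1.1.0/24), and arbitrary table IDs (100, 101); persist the equivalent rules in your distribution's network configuration as described in the referenced RedHat Knowledge Article:
## Per-interface routing tables (table IDs 100/101 are examples)
# ip route add 1.1.1.0/24 dev p2p1 src 1.1.1.10 table 100
# ip route add 1.1.1.0/24 dev p2p2 src 1.1.1.11 table 101
## Route traffic sourced from each address through its own table
# ip rule add from 1.1.1.10 table 100
# ip rule add from 1.1.1.11 table 101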
Bonding/Teaming
Use bonding/teaming as a solution to the single network subnet limitation.
NOTE: This section does not apply to hosts directly attached to the PowerStore appliances.
For a comparison between Bonding and Network Teaming implementations, see Networking Guide: Comparison of Network
Teaming to Bonding.
net.ipv4.conf.p2p1.rp_filter = 2
net.ipv4.conf.p2p2.rp_filter = 2
NOTE: In this example, p2p1 and p2p2 are the network interfaces used for iSCSI. Be sure to change them to the interfaces relevant to your host.
To reload the configuration:
sysctl -p
Steps
1. Dell recommends creating four target iSCSI IP addresses (two per node), on two different subnets/VLANs.
2. Configure two iSCSI interfaces on the same subnet as each of the storage cluster iSCSI portals.
NOTE: It is highly recommended not to use routing on iSCSI.
Example:
Configuration Sample
The sample below uses a Red Hat Enterprise Linux host. Details may vary depending on your host configuration.
Steps
1. List the available adapters:
In this case, ports p514p1 and p514p2 are the PCIe ports connected to the iSCSI network.
$ nmcli connection add type vlan con-name vlan11 ifname vlan11 vlan.parent p514p2
vlan.id 11
** Reconnect on boot
$ nmcli connection modify vlan11 connection.autoconnect yes
6. Verify configuration:
NOTE: 1.1.1.1 represents the iSCSI portal IP address of a PowerStore Storage Network.
# nc -z -v 1.1.1.1 3260
Connection to 1.1.1.1 3260 port [tcp/*] succeeded!
NOTE: If you are using the physical interface (and not the VLAN interfaces), specify the interface that contains the IP
address (the device eth1, p2p1, and so on).
NOTE: Some operating system releases may require configuring additional parameters (in addition to iface.net_ifacename) to properly identify the interface.
● Perform a discovery and login from the first subnet (if multiple subnets exist):
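A hedged sketch of the discovery and login commands, assuming an iSCSI interface named iface0 has already been created (iscsiadm -m iface) and that 1.1.1.1 is a PowerStore iSCSI portal on the first subnet:
## Discover targets on the first subnet, bound to the iSCSI interface
# iscsiadm -m discovery -t sendtargets -p 1.1.1.1:3260 -I iface0
## Log in to the discovered targets
# iscsiadm -m node -L all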
NOTE: The command logs in only to the target ports on the same VLAN as the iSCSI interface.
NOTE: The configurations in this example may differ based on your host and PowerStore configuration.
Using these settings prevents commands from being split by the iSCSI initiator and enables instantaneous mapping from the
host to the volume.
NOTE: If a previous iSCSI target is discovered on the Linux host, delete the iSCSI database and rerun the iSCSI target
discovery procedure with the iscsid.conf settings that are described above.
Pre-Requisites
Steps
1. Verify that DM-MPIO is installed:
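For example, on a Red Hat Enterprise Linux based host (a sketch; the package and service names may differ on other distributions):
# rpm -q device-mapper-multipath
# systemctl status multipathd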
NOTE: If the host is connected to a cluster other than PowerStore, the configuration file may include additional devices.
NOTE: If the multipath.conf file includes a blacklist section, it should come before the devices section. For more information, see the Importing External Storage to PowerStore Guide.
NOTE: To resolve a known issue described in RedHat Knowledge Article 6298681 (multipathd crashes when...), it is highly
recommended to update the device-mapper-multipath package to version 0.4.9-135.el7_9 (or later).
devices {
        device {
                vendor DellEMC
                product PowerStore
                path_selector "queue-length 0"
                path_grouping_policy group_by_prio
                path_checker tur
                detect_prio yes
                failback immediate
                no_path_retry 3
                rr_min_io_rq 1
                fast_io_fail_tmo 15
                max_sectors_kb 1024 ## only for RHEL 6.9 (or later 6.x versions) and RHEL 7.4 (or later)
        }
}
fast_io_fail_tmo (value: 15)
    Specifies the number of seconds the SCSI layer waits after a problem has been detected on an FC remote port before failing I/O to devices on that remote port. This value should be smaller than dev_loss_tmo. Setting this parameter to off disables the timeout.
max_sectors_kb (value: 1024)
    Applies to Red Hat Enterprise Linux Release 6.9 (or later 6.x versions) and Red Hat Enterprise Linux Release 7.4 (or later 7.x versions).
    Sets the max_sectors_kb device queue parameter to the specified value on all underlying paths of a multipath device before the multipath device is first activated. When a multipath device is created, the device inherits the max_sectors_kb value from the path devices. Manually raising this value for the multipath device or lowering it for the path devices can cause multipath to create I/O operations larger than the path devices allow. Using the max_sectors_kb parameter is an easy way to set these values before a multipath device is created on top of the path devices and prevent invalid-sized I/O operations from being passed. If this parameter is not set by the user, the path devices have it set by their device driver, and the multipath device inherits it from the path devices.
    NOTE: In a PowerStore cluster, the maximum I/O size is 1 MB. PowerStore does not set an optimal transfer size.
devices {
device {
vendor .*
product dellemc-powerstore
uid_attribute ID_WWN
prio ana
failback immediate
path_grouping_policy "group_by_prio"
# path_checker directio
path_selector "queue-length 0"
detect_prio "yes"
fast_io_fail_tmo 15
no_path_retry 3
rr_min_io_rq 1
}
## other devices
}
NOTE: Ensure that the multipath.conf file includes the max_sectors_kb setting if working with iSCSI or Fibre Channel.
Prerequisites
For a PowerStore cluster to function properly with Linux hosts that are using the Oracle ASM volume management software
with ASMLib driver, follow these steps to configure the /etc/sysconfig/oracleasm settings file:
● When DM-MPIO multipathing is used on the Linux host, edit these lines as follows:
● When PowerPath multipathing is used on the Linux host, edit these lines as follows:
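A hedged sketch of the relevant /etc/sysconfig/oracleasm lines; the scan-order values below are assumptions based on common ASMLib guidance for multipathed devices and should be verified against your Oracle documentation:
## With DM-MPIO: scan multipath (dm) devices and exclude the underlying sd devices
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
## With PowerPath: scan emcpower pseudo devices and exclude the underlying sd devices
ORACLEASM_SCANORDER="emcpower"
ORACLEASM_SCANEXCLUDE="sd"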
2. Shut down the Oracle instance running on the specific host, and run the following commands to restart Oracle ASM:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start
In this mode, rather than using "cylinders" for creating partitions, the fdisk command uses sectors, which map directly to the LBA space of the cluster. Thus, to verify that the partition is aligned, simply verify that the starting sector number is a multiple of 16 (16 sectors, at 512 bytes each, is 8 KB). The fdisk command defaults to a starting sector of 2048 for the first partition, which is divisible by 16 and thus correctly aligned.
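As a quick check (a sketch; /dev/mapper/mpatha is a placeholder device name), list the partition table in sectors and confirm that the Start value is a multiple of 16:
# fdisk -l -u /dev/mapper/mpatha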
NOTE: File system configuration and management are out of the scope of this document.
● If you are not using LVM, edit the /etc/fstab file to mount the file systems automatically when the system boots.
● On Red Hat Enterprise Linux, the _netdev option should be used to indicate that the file system resides on a network device, so that it is mounted automatically only after the network (and the iSCSI service) is available.
The example below demonstrates a configuration entry with the _netdev option:
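A hedged example of such an entry; the label, mount point, and file system type are placeholders:
LABEL=pstore_vol1  /mnt/pstore_vol1  ext4  _netdev  0 0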
If the file system being mounted exists directly on the device (does not use LVM), it is recommended to use labels, as shown
in the example above. For information, see RedHat Knowledge Article 3889 (How can I mount iSCSI devices...). If you still
experience issues, see RedHat Knowledge Article 22993 (Why aren't remote filesystems...) for additional troubleshooting
steps.
● On SUSE Linux 11 and later, the nofail option should be used so that the file system is mounted automatically when the device is available, without failing the boot if it is not.
The example below demonstrates a configuration entry with the nofail option:
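A hedged example of such an entry; the label, mount point, and file system type are placeholders:
LABEL=pstore_vol1  /mnt/pstore_vol1  ext3  nofail  0 0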
To enable Fast I/O Failure for all fscsi devices, set the fscsi device attribute fc_err_recov to fast_fail
    Category: Stability and Performance | Severity: Warning | See: Fast I/O Failure for Fibre Channel Devices
PowerStore operating systems earlier than 2.1.0 do not support volumes larger than 2 TB with AIX.
    Category: Stability | Severity: Mandatory | See: 2 TB LUN Size Support
NOTE: In general, no more than eight (8) paths per LUN should be used with an AIX host that is connected to PowerStore. If more paths are needed, an RPQ is required.
Pre-Requisites
Before you install HBAs on an AIX host, the following pre-requisites should be met.
Follow the IBM recommendations for installation and setup of the appropriate HBA for your system. It is recommended to install
the latest driver version (patch), as described on the IBM support site for each specific FC HBA.
Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC HBA models and drivers.
Steps
1. Run the chdev command for each HBA in the AIX host to set the HBA firmware level queue depth:
2. Reboot the AIX host to apply the HBA queue depth settings.
Run the following command to verify that the setting was enabled in the ODM:
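A hedged sketch of the commands referenced in these steps, assuming fcs0 is one of the HBAs and that num_cmd_elems is the queue-depth attribute to set (the attribute name and value should be confirmed against Dell EMC recommendations for your HBA):
# chdev -l fcs0 -a num_cmd_elems=<value> -P
# lsattr -El fcs0 -a num_cmd_elems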
Fast fail logic is applied when the switch sends a Registered State Change Notification (RSCN) to the adapter driver, indicating
a link event with a remote storage device port.
Fast I/O Failure is useful when multipathing software is used. Setting the fc_err_recov attribute to fast_fail can
decrease I/O failure due to link loss between the storage device and switch, by supporting faster failover to alternate paths.
Dynamic Tracking
This topic describes the dynamic tracking logic for FC devices and details the setting recommendations.
Dynamic tracking logic is applied when the adapter driver receives an indication from the switch that a link event with a remote
storage device port has occurred.
If dynamic tracking of FC devices is enabled, the FC adapter driver detects when the Fibre Channel N_Port ID of a device
changes. The FC adapter driver then reroutes the traffic that is destined for that device to the new address, while the devices
are still online.
Events that can cause an N_Port ID to change include:
● Moving a cable that connects a switch to a storage device from one switch port to another.
● Connecting two separate switches using an Inter-Switch Link (ISL).
● Rebooting a switch.
The fscsi device attribute dyntrk controls dynamic tracking of FC devices (default value is no for non-NPIV configurations).
It is recommended to enable dynamic tracking for PowerStore volumes.
To enable dynamic tracking for FC devices, change all fscsi device attributes to dyntrk=yes, as shown in the following
example:
Run the following command to verify that the setting was enabled in the ODM:
NOTE: The -P flag only modifies the setting in the ODM and requires a system reboot for the changes to apply.
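A hedged sketch using fscsi0 as an example device; repeat for each fscsi device connected to PowerStore, and reboot afterward because of the -P flag:
# chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P
# lsattr -El fscsi0 -a dyntrk -a fc_err_recov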
Prerequisites
The max_xfer_size FC HBA adapter device driver attribute for the fscsi device controls the maximum I/O size that the
adapter device driver can handle. This attribute also controls a memory area that the adapter uses for data transfers.
For optimal AIX host operation over FC with PowerStore, perform the following steps:
Steps
1. Run the following command on all FC adapters that are connected to PowerStore:
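A hedged sketch using fcs0 as an example adapter; the max_xfer_size value shown (0x200000) is an assumption, not taken from this guide, and should be confirmed against Dell EMC recommendations:
# chdev -l fcs0 -a max_xfer_size=0x200000 -P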
uncompress DellEMC.AIX.6.2.0.1.tar.Z
tar -xvf DellEMC.AIX.6.2.0.1.tar.Z
inutoc .
4. Run the following command to install the following filesets to support native MPIO:
5. Run the following command to install the following filesets to support PowerPath (an RPQ for PowerPath is required for this
configuration):
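A hedged sketch of the installp invocation from the extracted directory; the fileset names are intentionally left as a placeholder because they are not listed here, so install the MPIO or PowerPath filesets named in the package's documentation:
# installp -ac -gX -d . <MPIO or PowerPath fileset names>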
NOTE: Solaris OS can use two types of disk drivers to manage disk storage. The driver type depends on the platform
architecture (x86 or SPARC) and the version of Solaris installed on the platform.
All versions of the Solaris x86 OS use the SD disk driver to manage all disk storage.
On SPARC platforms running releases prior to 11.4, the SSD driver type is used to manage all disk storage.
To simplify configuration and disk storage management, as of release 11.4 both platforms use the SD driver.
If a SPARC system is upgraded to Solaris 11.4 from one of the earlier releases, the system continues to use the SSD driver. All new installations of Solaris 11.4 are configured to use the SD driver for disk management.
Make sure that you update the tuning settings in the correct disk driver configuration file.
set zfs:zfs_log_unmap_ignore_size=256
Fibre Channel path failover tuning: fp_offline_ticker = 20;
    File: fp.conf | Category: Stability | Severity: Recommended | See: Updating fp.conf configuration file
Fibre Channel path failover tuning: fcp_offline_delay = 20;
    File: fcp.conf | Category: Stability | Severity: Recommended | See: Updating fcp.conf configuration file
Maximum I/O size for the ssd driver, Solaris 10 and 11-11.3 (SPARC): ssd_max_xfer_size=0x100000;
    File: ssd.conf | Category: Stability | Severity: Mandatory | See: Updating ssd.conf configuration file
Maximum I/O size for the sd driver, Solaris 11.4 (SPARC) and 11.x (x86): sd_max_xfer_size=0x100000;
    File: sd.conf | Category: Stability | Severity: Mandatory | See: Updating sd.conf configuration file
Solaris ssd driver tuning for Solaris 10 and 11-11.3 (SPARC):
    ssd-config-list = "DellEMC PowerStore","throttle-max:64, physical-block-size:4096, disksort:false, cache-nonvolatile:true";
    File: ssd.conf | Category: Stability | Severity: Recommended | See: Updating ssd.conf configuration file
Solaris sd driver tuning for Solaris 11.4 (SPARC) and 11.x (x86):
    sd-config-list = "DellEMC PowerStore","throttle-max:64, physical-block-size:4096, disksort:false, cache-nonvolatile:true";
    File: sd.conf | Category: Stability | Severity: Recommended | See: Updating sd.conf configuration file
Solaris MPxIO multi-path driver tuning: load-balance="round-robin"; auto-failback="enable";
    File: scsi_vhci.conf | Category: Stability | Severity: Mandatory | See: Updating scsi_vhci.conf configuration file
Solaris MPxIO multi-path driver tuning: scsi-vhci-update-pathstate-on-reset = "DellEMC PowerStore", "yes";
    File: scsi_vhci.conf | Category: Stability | Severity: Mandatory | See: Updating scsi_vhci.conf configuration file
NOTE: Before you proceed, review Fibre Channel and NVMe over Fibre Channel SAN Guidelines.
Pre-Requisites
Before installing HBAs in a Solaris host, the following pre-requisites should be met:
● Follow Oracle's recommendations for installation and setup of the appropriate HBA for your system.
● It is recommended to install the latest driver version (patch), as described on the Oracle support site for each specific FC
HBA.
● Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC HBA models and drivers.
Queue Depth
Queue depth is the number of SCSI commands (including I/O requests) that can be handled by a storage device at a given time.
A queue depth can be set on either of the following:
● Initiator level - HBA queue depth
● LUN level - LUN queue depth
The LUN queue depth setting controls the number of outstanding I/O requests per single path. The HBA queue depth (also referred to as execution throttle) setting controls the number of outstanding I/O requests per HBA port.
With PowerStore and Solaris, the HBA queue depth setting should retain its default value, and the initial LUN queue depth setting should be modified to 64. This is a good starting point that typically provides good I/O response times. The specific value can be adjusted based on the particular infrastructure configuration, application performance, and I/O profile details.
NOTE: If the host is connected to a cluster other than PowerStore, the configuration file may include additional devices.
NOTE: Currently, PowerStore clusters are only supported with native Solaris multipathing (MPxIO).
To enable management of storage LUNs that are presented to the host with MPxIO, use the following command:
# stmsboot -e
NOTE: The host must be rebooted immediately after the command execution is complete. It is recommended to update all
storage-related host configuration files before rebooting.
Steps
1. Run the following command to verify the scsi_vhci.conf file location:
# ls /etc/driver/drv/
2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:
# cp /kernel/drv/scsi_vhci.conf /etc/driver/drv
3. Run the following commands to create a backup copy of the scsi_vhci.conf file:
# cp -p /etc/driver/drv/scsi_vhci.conf /etc/driver/drv/scsi_vhci.conf_ORIG
4. Modify the scsi_vhci.conf file by adding the following recommended entries for PowerStore storage:
load-balance="round-robin";
auto-failback="enable";
scsi-vhci-update-pathstate-on-reset="DellEMC PowerStore", "yes";
scsi-vhci-failover-override="DellEMC PowerStore", "f_tpgs";
scsi-vhci-failover-override (value: f_tpgs, Target Port Groups / ALUA)
    Adds a third-party (non-Sun) storage device to run under scsi_vhci (and thereby take advantage of scsi_vhci multipathing), matching the vendor/product string "DellEMC PowerStore" and using the "f_tpgs" failover module for the Dell EMC PowerStore device.
Steps
1. Run the following command to verify the fp.conf file location:
# ls /etc/driver/drv/
2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:
# cp /kernel/drv/fp.conf /etc/driver/drv
3. Run the following commands to create a backup copy and modify the file:
# cp -p /etc/driver/drv/fp.conf /etc/driver/drv/fp.conf_ORIG
# vi /etc/driver/drv/fp.conf
Example
Below are the entries that are recommended for PowerStore storage.
mpxio-disable="no";
fp_offline_ticker=20;
Steps
1. Run the following command to change directory to the /etc/system.d directory:
# cd /etc/system.d
2. Run the following command to create a new PowerStore.conf file with the recommended Solaris kernel tuning settings for
PowerStore storage:
# vi PowerStore.conf
Example
Below are Solaris kernel tuning setting entries recommended for PowerStore storage.
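Based on the Solaris settings summary earlier in this chapter, the PowerStore.conf file would contain the kernel tuning entry shown below (confirm the value against that summary):
set zfs:zfs_log_unmap_ignore_size=256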
Steps
1. Run the following command to verify the fcp.conf file location:
# ls /etc/driver/drv/
2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:
# cp /kernel/drv/fcp.conf /etc/driver/drv
3. Run the following commands to create a backup copy and modify the file:
# cp -p /etc/driver/drv/fcp.conf /etc/driver/drv/fcp.conf_ORIG
# vi /etc/driver/drv/fcp.conf
fcp_offline_delay = 20;
Steps
1. Run the following command to verify the ssd.conf file location:
# ls /etc/driver/drv/
2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:
# cp /kernel/drv/ssd.conf /etc/driver/drv
3. Run the following commands to create a backup copy and modify the file:
# cp -p /etc/driver/drv/ssd.conf /etc/driver/drv/ssd.conf_ORIG
# vi /etc/driver/drv/ssd.conf
Example
Below are the entries that are recommended for PowerStore storage.
ssd_max_xfer_size=0x100000;
ssd-config-list = "DellEMC PowerStore", "throttle-max:64, physical-block-size:4096,
disksort:false, cache-nonvolatile:true";
Updating sd.conf configuration file (Solaris 11.x x86 and 11.4 SPARC)
Steps
1. Run the following command to verify the sd.conf file location:
# ls /etc/driver/drv/
2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:
# cp /kernel/drv/sd.conf /etc/driver/drv
3. Run the following commands to create a backup copy and modify the file:
# cp -p /etc/driver/drv/sd.conf /etc/driver/drv/sd.conf_ORIG
# vi /etc/driver/drv/sd.conf
Example
Below are the entries that are recommended for PowerStore storage.
sd_max_xfer_size=0x100000;
sd-config-list = "DellEMC PowerStore", "throttle-max:64, physical-block-size:4096,
disksort:false, cache-nonvolatile:true";
NOTE: Refer to the PowerStore Configuring Volumes Guide for additional information.
NOTE: If a volume that was already discovered and configured by a host is presented to that host, then a subsequent
change to the escsi_maxphys parameter does not take effect until a host reboot. Volumes attached after the parameter
change inherit the parameter change automatically and require no further host reboot.
scsimgr save_attr -a escsi_maxphys=256
Load balancing: keep the following HP-UX native multipathing parameters at their default values:
    ● load_bal_policy - set to "round_robin"
    ● path_fail_secs - set to 120 seconds
    Category: Performance | Severity: Mandatory | See: Configuring Native Multipathing using HP-UX Multipath (MPIO)
Temporarily disable UNMAP during file system creation (only when using Veritas Volume Manager):
    ● To temporarily disable UNMAP for the targeted device on the host (before file system creation):
      # vxdisk set reclaim=off "disk name"
    Category: Performance | Severity: Recommended | See: Creating File System
NOTE: PowerStore supports only FC-SW FCP connections. GigE iSCSI and FC direct connections from HP-UX initiators to
PowerStore target ports are not supported.
NOTE: Before you proceed, review Fibre Channel and NVMe over Fibre Channel SAN Guidelines.
Pre-Requisites
This section describes the pre-requisites for FC HBA configuration:
● Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC HBA models and drivers.
● Verify all HBAs are at the supported driver, firmware and BIOS versions.
● For instructions about installing the FC HBA and upgrading the drivers or the firmware, see HP documentation.
The set value is defined in 4 KB increments. To support the PowerStore devices in HP-UX, change the escsi_maxphys value
to 256 using the following commands:
● 1 MB maximum transfer length (resets to the default of 2 MB during a host reboot):
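A hedged sketch of the corresponding scsimgr commands; the set_attr form parallels the save_attr command shown in the settings summary above and applies the value only until the next reboot:
## Temporary (resets to the default during a host reboot)
# scsimgr set_attr -a escsi_maxphys=256
## Persistent across reboots
# scsimgr save_attr -a escsi_maxphys=256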
NOTE: You can configure the escsi_maxphys attribute only on a global basis, and it applies to all FC fcp block devices that are connected to the host.
NOTE: Refer to the PowerStore Configuring Volumes Guide for additional information.
NOTE: Formatting a file system with UNMAP enabled on a host connected to PowerStore may result in an increased amount of write I/O to the storage subsystem. When possible, it is highly recommended to disable UNMAP during file system creation. Disabling UNMAP is possible when using the Veritas Volume Manager on the HP-UX host. However, when using the HP-UX native volume manager, this recommendation is not applicable because UNMAP is not supported in this case.
To disable UNMAP during file system creation (only when using Veritas Volume Manager):
Steps
1. Access the HP-UX host using SSH as root.
2. Run the following vxdisk command to temporarily disable UNMAP for the targeted device on the host (before creating the
file system):
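As listed in the HP-UX settings summary above:
# vxdisk set reclaim=off "disk name"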
NOTE: To verify the current setting of a specific device using its corresponding disk group, run the following vxprint
command:
Example: Using the vxprint command to verify the current UNMAP setting of a specific device:
# vxprint -z -g testdg
...
dm testdg02 3pardata0_55 - 2031232 - - - -
NOTE: The current PowerStore OS release does not support mapping individual LUNs to host under a host group.
Topics:
• View Configured Storage Networks for NVMe/TCP
• View Configured Storage Networks for iSCSI
• View NVMe/FC and SCSI/FC Target Ports
• View Physical Ethernet Ports Status
• View Discovered Initiators
• View Active Sessions
| | | fd4b:8a14:c03b::201:4413:5d31:d6ff
| | | fd41:3062:7f9a::201:4480:39ac:7db3
| | |
4 | IP_PORT4 | ISCSI | 172.28.2.205
| | NVMe_TCP | 172.28.1.205
| | | fd4b:8a14:c03b::201:445f:9cc4:d91e
| | | fd41:3062:7f9a::201:4463:c850:b827
View Discovered Initiators
Use the following PSTCLI command to view all discovered initiators that are not part of any initiator group.