
Dell EMC PowerStore

Host Configuration Guide

July 2022
Rev. A10
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2020 - 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents

Additional Resources.....................................................................................................................7

Chapter 1: Introduction................................................................................................................. 8
Purpose.................................................................................................................................................................................. 8

Chapter 2: Best Practices for Storage Connectivity...................................................................... 9


General SAN Guidelines..................................................................................................................................................... 9
Using LUN 0 with PowerStore................................................................................................................................... 9
Using LUNs 254 and 255 with PowerStore............................................................................................................ 9
Fibre Channel SAN Guidelines........................................................................................................................................ 10
Recommended Configuration Values Summary....................................................................................................10
Recommended Configuration....................................................................................................................................10
Additional Considerations............................................................................................................................................11
Zoning using SCSI WWN.............................................................................................................................................11
NVMe/FC SAN Guidelines................................................................................................................................................11
Recommended Configuration Values Summary....................................................................................................12
Recommended Configuration....................................................................................................................................12
Additional Considerations........................................................................................................................................... 12
Zoning using NVMe WWN......................................................................................................................................... 13
iSCSI SAN Guidelines........................................................................................................................................................ 13
Recommended Configuration Values Summary....................................................................................................13
Recommended Configuration....................................................................................................................................14
Additional Considerations...........................................................................................................................................15
NVMe over TCP (NVMe/TCP) SAN Guidelines........................................................................................................ 15
Recommended Configuration Values Summary....................................................................................................15
Recommended Configuration....................................................................................................................................16
Additional Considerations........................................................................................................................................... 17
NVMe-oF General Guidelines.......................................................................................................................................... 17
SAN Connectivity Best Practices.................................................................................................................................. 18
Direct Attach................................................................................................................................................................. 18
SAN - Minimal Configuration.....................................................................................................................................18
SAN - Recommended Configuration....................................................................................................................... 19

Chapter 3: Host Configuration for VMware vSphere ESXi............................................................ 23


Related E-Lab Host Connectivity Guide...................................................................................................................... 23
Chapter Scope................................................................................................................................................................... 23
Recommended Configuration Values Summary......................................................................................................... 23
Boot from SAN.................................................................................................................................................................. 25
Fibre Channel (FC) Configuration.................................................................................................................................25
Pre-Requisites..............................................................................................................................................................25
Known Issues................................................................................................................................................................25
NVMe over Fibre Channel Configuration.................................................................................................................... 25
Pre-Requisites.............................................................................................................................................................. 26
Setting the ESXi Host NVMe Qualified Name......................................................................................................26

Setting up NVMe HBAs............................................................................................................................................. 26
iSCSI Configuration...........................................................................................................................................................27
Pre-Requisites.............................................................................................................................................................. 27
Network Configuration for iSCSI............................................................................................................................. 27
iSCSI Software Adapter Configuration................................................................................................................... 31
Jumbo Frames...............................................................................................................................................................31
Delayed ACK................................................................................................................................................................. 32
Login Timeout...............................................................................................................................................................32
No-Op Interval..............................................................................................................................................................32
Known Issues................................................................................................................................................................ 33
NVMe/TCP Configuration.............................................................................................................................................. 33
Pre-Requisites.............................................................................................................................................................. 33
Setting the ESXi Host NVMe Qualified Name......................................................................................................33
Network Configuration for NVMe/TCP.................................................................................................................34
NVMe/TCP Software Adapter Configuration...................................................................................................... 36
Using CLI........................................................................................................................................................................37
Known Issues................................................................................................................................................................38
vStorage API for System Integration (VAAI) Settings.............................................................................................38
Confirming that VAAI is Enabled on the ESXi Host............................................................................................ 38
Setting the Maximum I/O............................................................................................................................................... 39
Confirming UNMAP Priority........................................................................................................................................... 39
Configuring VMware vSphere with PowerStore Storage in a Multiple Cluster Configuration...................... 40
Multipathing Software Configuration............................................................................................................................41
Configuring Native Multipathing (NMP) with SCSI.............................................................................................41
Configuring High Performance Multipathing (HPP) with NVMe.....................................................................43
Configuring PowerPath Multipathing..................................................................................................................... 45
PowerStore Considerations............................................................................................................................................ 45
Presenting PowerStore Volumes to the ESXi Host............................................................................................ 45
Disk Formatting............................................................................................................................................................45
Virtual Volumes............................................................................................................................................................ 46
AppsOn: Virtual Machine Compute and Storage Collocation Rules for PowerStore X Clusters............. 46
vSphere Considerations...................................................................................................................................................46
VMware Paravirtual SCSI Controllers.................................................................................................................... 46
Virtual Disk Provisioning............................................................................................................................................ 46
Virtual Machine Guest Operating System Settings.............................................................................................47
Creating a File System................................................................................................................................................47

Chapter 4: Host Configuration for Microsoft Windows................................................................ 48


Related E-Lab Host Connectivity Guide...................................................................................................................... 48
Recommended Configuration Values Summary.........................................................................................................48
Boot from SAN...................................................................................................................................................................49
Fibre Channel Configuration...........................................................................................................................................49
Pre-Requisites.............................................................................................................................................................. 49
iSCSI Configuration...........................................................................................................................................................49
Pre-Requisites..............................................................................................................................................................50
PowerStore Operating System 1.x Only - Single Subnet...................................................................................50
PowerStore Operating System 2.x and Above - Multi Subnet......................................................................... 51
Multipathing Software Configuration............................................................................................................................51
Configuring Native Multipathing Using Microsoft Multipath I/O (MPIO)......................................................51
PowerPath Configuration with PowerStore Volumes........................................................................................ 52

Post-Configuration Steps - Using the PowerStore system....................................................................................53
Presenting PowerStore Volumes to the Windows Host.................................................................................... 53
Creating a File System............................................................................................................................................... 53

Chapter 5: Host Configuration for Linux......................................................................................54


Related E-Lab Host Connectivity Guide......................................................................................................................54
Recommended Configuration Values Summary.........................................................................................................54
Boot from SAN.................................................................................................................................................................. 55
Fibre Channel (FC) Configuration.................................................................................................................................56
Pre-Requisites..............................................................................................................................................................56
NVMe over Fibre Channel Configuration.................................................................................................................... 56
Pre-Requisites..............................................................................................................................................................56
Known Issues................................................................................................................................................................56
NVMe/FC Configuration on Linux Hosts...............................................................................................................57
Setting the Linux Host NVMe Qualified Name.....................................................................................................57
iSCSI Configuration.......................................................................................................................................................... 58
Pre-Requisites..............................................................................................................................................................58
PowerStore Operating System 1.x Only - Single Subnet...................................................................................58
PowerStore Operating System 2.x and Later - Multi Subnet.......................................................................... 60
iSCSI Session Configuration..................................................................................................................................... 63
Updating iSCSI Configuration File........................................................................................................................... 64
Multipathing Software Configuration...........................................................................................................................65
Pre-Requisites..............................................................................................................................................................65
Configuration with Device Mapper Multipathing for SCSI................................................................................65
Configuring with Device Mapper Multipathing for NVMe................................................................................. 67
Configuration with PowerPath.................................................................................................................................67
Configuring Oracle ASM............................................................................................................................................ 67
Post-Configuration Steps - Using the PowerStore system....................................................................................68
Presenting PowerStore Cluster Volumes to the Linux Host.............................................................................68
Partition Alignment in Linux...................................................................................................................................... 68
Creating a File System............................................................................................................................................... 69
Mounting iSCSI File Systems....................................................................................................................................69

Chapter 6: Host Configuration for AIX......................................................................................... 71


Related E-Lab Host Connectivity Guide.......................................................................................................................71
Recommended Configuration Values Summary..........................................................................................................71
2 TB LUN Size Support....................................................................................................................................................72
Boot from SAN...................................................................................................................................................................72
Fibre Channel Configuration........................................................................................................................................... 72
Pre-Requisites.............................................................................................................................................................. 72
Queue Depth.................................................................................................................................................................73
Fast I/O Failure for Fibre Channel Devices........................................................................................................... 73
Dynamic Tracking........................................................................................................................................................ 74
Fibre Channel Adapter Device Driver Maximum I/O Size..................................................................................74
Dell EMC AIX ODM Installation...................................................................................................................................... 75
Dell EMC AIX ODM Installation Requirements..................................................................................................... 75

Chapter 7: Host Configuration for Solaris.................................................................................... 76


Related E-Lab Host Connectivity Guide...................................................................................................................... 76

Recommended Configuration Values Summary......................................................................................................... 76
Boot from SAN................................................................................................................................................................... 77
Fibre Channel Configuration........................................................................................................................................... 77
Pre-Requisites.............................................................................................................................................................. 78
Queue Depth.................................................................................................................................................................78
Solaris Host Parameter Settings................................................................................................................................... 78
Configuring Solaris native multipathing..................................................................................................................78
PowerPath Configuration with PowerStore Volumes........................................................................................ 79
Host storage tuning parameters..............................................................................................................................80
Post configuration steps - using the PowerStore system...................................................................................... 83
Partition alignment in Solaris.................................................................................................................................... 84

Chapter 8: Host Configuration for HP-UX................................................................................... 85


Related E-Lab Host Connectivity Guide......................................................................................................................85
Recommended Configuration Values Summary.........................................................................................................85
Boot from SAN.................................................................................................................................................................. 86
Fibre Channel Configuration...........................................................................................................................................86
Pre-Requisites..............................................................................................................................................................86
HP-UX Host Parameter Settings...................................................................................................................................87
Maximum Transfer Length........................................................................................................................................ 87
Multipathing Software Configuration........................................................................................................................... 87
Configuring Native Multipathing Using HP-UX Multipath I/O (MPIO).......................................................... 87
Post-Configuration Steps - Using the PowerStore System................................................................................... 88
Presenting PowerStore Volumes to the HP-UX Host........................................................................................88
Creating a file system.................................................................................................................................................88

Appendix A: Considerations for Boot from SAN with PowerStore................................................ 90


Consideration for Boot from SAN with PowerStore................................................................................................ 90

Appendix B: Troubleshooting....................................................................................................... 91
View Configured Storage Networks for NVMe/TCP................................................................................................91
View Configured Storage Networks for iSCSI............................................................................................................91
View NVMe/FC and SCSI/FC Target Ports...............................................................................................................92
View Physical Ethernet Ports Status........................................................................................................................... 92
View Discovered Initiators...............................................................................................................................................93
View Active Sessions........................................................................................................................................................93

Preface

As part of an improvement effort, revisions of the software and hardware are periodically released. Some functions that are
described in this document are not supported by all versions of the software or hardware currently in use. The product release
notes provide the most up-to-date information about product features. Contact your service provider if a product does not
function properly or does not function as described in this document.

Where to get help


Support, product, and licensing information can be obtained as follows:
● Product information
For product and feature documentation or release notes, go to the PowerStore Documentation page at https://www.dell.com/powerstoredocs.
● Troubleshooting
For information about products, software updates, licensing, and service, go to https://www.dell.com/support and locate
the appropriate product support page.
● Technical support
For technical support and service requests, go to https://www.dell.com/support and locate the Service Requests page.
To open a service request, you must have a valid support agreement. Contact your Sales Representative for details about
obtaining a valid support agreement or to answer any questions about your account.

1
Introduction
Topics:
• Purpose

Purpose
This document provides guidelines and best practices for attaching and configuring external hosts to PowerStore systems, either on their own or in conjunction with other storage systems. It includes information on topics such as multipathing, zoning, and timeouts. This document may also reference issues found in the field and notify you of known issues.
For ESXi hosts, this document provides guidelines only for configuring ESXi hosts that are connected externally to PowerStore. For configuring an internal ESXi host in a PowerStore X model appliance, refer to the PowerStore Virtualization Guide.
For further host connectivity best practices in conjunction with other Dell EMC storage systems, also refer to the E-Lab Host
Connectivity Guides. For details, refer to the E-Lab Interoperability Navigator at https://elabnavigator.dell.com.

2
Best Practices for Storage Connectivity
This chapter contains the following topics:
Topics:
• General SAN Guidelines
• Fibre Channel SAN Guidelines
• NVMe/FC SAN Guidelines
• iSCSI SAN Guidelines
• NVMe over TCP (NVMe/TCP) SAN Guidelines
• NVMe-oF General Guidelines
• SAN Connectivity Best Practices

General SAN Guidelines


This section provides general guidelines for storage connectivity.
NOTE: This document mainly describes the storage-specific recommendations for PowerStore. It is recommended to
always consult the operating system documentation for up-to-date guidelines specific to the operating system in use.

NOTE: On hosts running a hypervisor (such as VMware ESXi or Microsoft Hyper-V) or any clustering software, it is important
to ensure that the logical unit numbers of PowerStore volumes are consistent across all hosts in the cluster.
Inconsistent LUN numbering may affect operations such as VM online migration or VM power-up.

Using LUN 0 with PowerStore


The PowerStore system exposes a Storage Array Controller Device (SACD) by default. The device is exposed with LUN ID 0.
● A user may choose to override the SACD with a real storage device (such as a volume or a clone), by setting the device LUN
ID to 0.
● Doing so may require the user to manually force a host rescan to discover that device. For instructions on forcing a host
rescan, see the operating system vendor documentation.
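NOTE: The exact rescan method is operating-system specific. As an illustration only, on many Linux hosts a SCSI rescan that discovers a volume mapped with LUN ID 0 can be triggered as follows (assumes the sg3_utils package is installed; host0 is an example SCSI host):

# Illustration only (Linux): rescan all SCSI hosts so that a volume mapped at LUN 0
# is discovered without a reboot (rescan-scsi-bus.sh is provided by sg3_utils).
rescan-scsi-bus.sh -a

# Alternatively, trigger a rescan of a single SCSI host manually:
echo "- - -" > /sys/class/scsi_host/host0/scan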

Using LUNs 254 and 255 with PowerStore


● PowerStore appliances expose LUNs 254 and 255 for vSphere Virtual Volumes.
● These LUNs are not 'real' LUNs and cannot be used for any volume mapping.
● These LUNs represent the Virtual Volume Protocol Endpoint (PE).
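NOTE: As an illustration only, on a vSphere ESXi host the protocol endpoints reported at LUNs 254 and 255 can be listed with the following command (verify availability on your ESXi version):

# Illustration only (ESXi): list the vVol protocol endpoints visible to this host.
esxcli storage vvol protocolendpoint list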



Fibre Channel SAN Guidelines
This section describes the best practices for attaching hosts to a PowerStore cluster in a highly available, resilient, and optimal
Fibre Channel SAN.

Recommended Configuration Values Summary


The following table summarizes the recommended configuration values that are related to Fibre Channel SAN.

Validation | Impact | Severity | Refer to Section
Use two separate fabrics. | Redundancy | Mandatory | Recommended Configuration
Each host should be zoned to both nodes of each appliance. | Redundancy | Mandatory | Recommended Configuration
Zoning must be done using the appropriate WWN: for Fibre Channel SAN, use the PowerStore SCSI WWN. | Redundancy | Warning | Zoning using SCSI WWN
Maximum number of paths per appliance per volume per host: 8 | Performance | Warning | Recommended Configuration
Recommended number of paths per volume per host: 4 | Performance | Warning | Recommended Configuration
Link speed should be consistent across all paths to the PowerStore cluster per single host or a cluster of hosts. | Performance | Warning | Recommended Configuration
Balance the hosts between the nodes of the appliance to provide a distributed load across all target ports. | Performance | Recommended | Recommended Configuration
Maximum ISL Hops: 2 | Performance | Recommended | Recommended Configuration

Recommended Configuration
Consider the following recommendations when setting up a Fibre Channel SAN infrastructure.
● Use two separate fabrics. Each fabric should be on a different physical FC switch for resiliency.
○ Keep a consistent link speed and duplex across all paths to the PowerStore cluster per single host or a cluster of hosts.
● Balance the hosts between the two nodes of the appliance.
○ The PowerStore cluster can be shipped with various extension modules for Fibre Channel. If your PowerStore cluster
contains more than one extension I/O module per node, distribute the zoning among all I/O modules for highest
availability and performance.
○ The optimal number of paths depends on the operating system and server information. To avoid multipathing
performance degradation, do not use more than eight paths per device per host. It is recommended to use four paths.
○ With a multi appliance cluster, it is highly advised to zone the host to as many appliances as possible, to achieve optimal
load distribution across the cluster. Be sure to keep the minimum/optimal zoning recommendation for each appliance.
NOTE: A multi-appliance cluster is not designed to provide better resiliency, but to provide better load balance. To
perform volume migration between appliances, a host must be zoned to both appliances.
● Use a single-initiator zoning scheme based on port WWN: configure single-initiator per multiple-target (1 : many) zones
when zoning hosts to a PowerStore cluster. A minimal zoning sketch is shown after this list.
NOTE: Avoid zoning based on switch port. Use only port WWN for zoning.
● Host I/O latency can be severely affected by FC SAN congestion. Minimize the use of ISLs by placing the host and storage
ports on the same physical switch. When this is not possible, ensure that there is sufficient ISL bandwidth and that both the
host and PowerStore cluster interfaces are separated by no more than two ISL hops.
● For more information about zoning best practices, see Fibre Channel SAN Topologies.
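The following is a minimal single-initiator zoning sketch for a Brocade FOS fabric. The host alias, host WWPN, and configuration name are hypothetical placeholders; the target WWNs reuse the SCSI WWNs from the example in Zoning using SCSI WWN. Other switch vendors have equivalent commands.

# Hypothetical sketch (Brocade FOS): one zone per host initiator, containing that
# initiator and the PowerStore SCSI WWN targets reachable on this fabric.
alicreate "host1_hba0", "10:00:00:10:9b:aa:bb:01"
alicreate "pstore_a1_nodea_p0", "58:cc:f0:90:49:20:07:7b"
alicreate "pstore_a1_nodeb_p0", "58:cc:f0:98:49:20:07:7b"
zonecreate "z_host1_hba0_pstore", "host1_hba0; pstore_a1_nodea_p0; pstore_a1_nodeb_p0"
cfgadd "fabric_a_cfg", "z_host1_hba0_pstore"    # use cfgcreate if the configuration does not exist yet
cfgenable "fabric_a_cfg"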



Additional Considerations
Review the following additional considerations when configuring hosts with PowerStore storage using FC:
● See the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for additional support limitations regarding HBAs,
operating systems, switches, and Direct-Attach.

Zoning using SCSI WWN

Prerequisites
Starting with PowerStore operating system version 2.0, NVMe/FC is supported.
PowerStore exposes two WWNs, one for the FC (SCSI WWN) and one for NVMe (NVMe WWN).

About this task


Zoning should be performed using the correct WWN:
● For Fibre Channel, use SCSI WWN.
To locate the correct WWN:

Steps
1. Using the PSTCLI fc_port show command or the WebUI Fibre Channel Ports screen (Hardware > Appliance > Ports
> Fibre Channel), find the corresponding SCSI WWN for each target port.
2. When using Fibre Channel, use the SCSI WWN when zoning a target port.
Example
Using the PSTCLI fc_port show command to locate the SCSI WWN of a target port:

NOTE: In the example below, the output text may be wrapped.

PS C:\> pstcli -d <IP> -u admin -p <password> fc_port show -select name,wwn,port_index,current_speed,appliance_id,is_link_up -query "is_link_up is yes"

  # | name                                  | wwn                     | port_index | current_speed | appliance_id | is_link_up
----+---------------------------------------+-------------------------+------------+---------------+--------------+-----------
  1 | BaseEnclosure-NodeA-IoModule0-FEPort1 | 58:cc:f0:90:49:21:07:7b | 1          | 32_Gbps       | A1           | yes
  2 | BaseEnclosure-NodeB-IoModule0-FEPort0 | 58:cc:f0:98:49:20:07:7b | 0          | 32_Gbps       | A1           | yes
  3 | BaseEnclosure-NodeB-IoModule0-FEPort1 | 58:cc:f0:98:49:21:07:7b | 1          | 32_Gbps       | A1           | yes
  4 | BaseEnclosure-NodeA-IoModule0-FEPort0 | 58:cc:f0:90:49:20:07:7b | 0          | 32_Gbps       | A1           | yes

NVMe/FC SAN Guidelines


This section describes the best practices for attaching hosts to a PowerStore cluster in a highly available resilient and optimal
NVMe/FC SAN.



Recommended Configuration Values Summary
The following table summarizes the recommended configuration values that are related to NVMe/FC SAN.

Validation | Impact | Severity | Refer to Section
Use two separate fabrics. | Redundancy | Mandatory | Recommended Configuration
Each host should be zoned to both nodes of each appliance. | Redundancy | Mandatory | SAN Connectivity Best Practices
Zoning must be done using the appropriate WWN: for NVMe/FC, use the PowerStore NVMe WWN. | Redundancy | Mandatory | Zoning using NVMe WWN
Maximum number of paths per appliance per volume per host: 8 | Performance | Warning | Recommended Configuration
Recommended number of paths per volume per host: 4 | Performance | Warning | Recommended Configuration
Link speed should be consistent across all paths to the PowerStore cluster per single host or a cluster of hosts. | Performance | Warning | Recommended Configuration
Balance the hosts between the nodes of the appliance to provide a distributed load across all target ports. | Performance | Recommended | Recommended Configuration
Maximum ISL Hops: 2 | Performance | Recommended | Recommended Configuration

Recommended Configuration
Consider the following recommendations when setting up an NVMe/FC infrastructure.
● Use two separate fabrics. Each fabric should be on a different physical FC switch for resiliency.
○ Keep a consistent link speed and duplex across all paths to the PowerStore cluster per single host or a cluster of hosts.
● Balance the hosts between the two nodes of the appliance.
○ The PowerStore cluster can be shipped with various extension modules for Fibre Channel. If your PowerStore cluster
contains more than one extension I/O module per node, distribute the zoning among all I/O modules for highest
availability and performance.
○ The optimal number of paths depends on the operating system and server information. To avoid multipathing
performance degradation, do not use more than eight paths per device per host. It is recommended to use four paths.
○ With a multi-appliance cluster, it is highly advised to zone the host to as many appliances as possible, to achieve optimal
load distribution across the cluster. Be sure to keep the minimum/optimal zoning recommendation for each appliance.
NOTE: A multi-appliance cluster is not designed to provide better resiliency, but to provide better load balance. To
perform volume migration between appliances, a host must be zoned to both appliances.
● Use single initiator zoning scheme, using port WWN: Utilize single-initiator per multiple-target (1 : many) zoning scheme
when configuring zoning with a PowerStore cluster.
NOTE: Avoid using zoning based on switch port. Use only port WWN for zoning.
● Host I/O latency can be severely affected by FC SAN congestion. Minimize the use of ISLs by placing the host and storage
ports on the same physical switch. When this is not possible, ensure that there is sufficient ISL bandwidth and that both the
host and PowerStore cluster interfaces are separated by no more than two ISL hops.
● For more information about zoning best practices, see Fibre Channel SAN Topologies.

Additional Considerations
Review the following considerations when configuring hosts with PowerStore storage using NVMe/FC:
● NVMe/FC is supported with PowerStore operating system 2.0 and later.



● NVMe/FC requires NPIV to be enabled at the switch level (NPIV is enabled by default on PowerStore FC ports). If NPIV
is disabled on the switch and must be enabled on a port for NVMe/FC to work, disable and then re-enable that port at the
switch. A hedged switch-side example is shown after this list.
● See the Dell EMC 32G FC-NVMe Simple Support Matrix for supported NVMe/FC configurations and other up-to-date
limitations.
● See NVMe-oF General Guidelines for guidelines specific to NVMe-oF.
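The following is a hedged sketch of checking and enabling NPIV on a Brocade FOS switch port. Port 12 is a placeholder, and the exact command syntax should be verified against your switch documentation; other switch vendors have equivalent commands.

# Hypothetical sketch (Brocade FOS): verify NPIV on the switch port that connects
# to the PowerStore FC port, and enable it if needed.
portcfgshow 12            # look for "NPIV capability: ON"
portdisable 12            # bounce the port so the PowerStore FC port logs in again with NPIV
portcfgnpivport 12, 1     # 1 = enable NPIV (legacy syntax assumed; newer FOS releases also offer --enable)
portenable 12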

Zoning using NVMe WWN

Prerequisites
● Starting with PowerStore operating system 2.0, NVMe/FC is supported.
● PowerStore exposes two WWNs, one for the FC (SCSI WWN) and one for NVMe (NVMe WWN).

About this task


Zoning should be performed using the correct WWN:
● For NVMe/FC, use NVMe WWN.
To locate the correct WWN:

Steps
1. Using the PSTCLI fc_port show command or the WebUI Fibre Channel Ports screen (Hardware > Appliance > Ports
> Fibre Channel), find the corresponding SCSI WWN and NVMe WWN for each target port.
2. When using NVMe/FC, use the NVMe WWN when zoning a target port.
Example
Using the PSTCLI fc_port show command to locate the NVMe WWN of a target port:

NOTE: In the example below, the output text may be wrapped.

PS C:\> pstcli -d 10.55.34.127 -u admin -p <password> fc_port show -select name,wwn_nvme,port_index,current_speed,appliance_id,is_link_up -query "is_link_up is yes"

  # | name                                  | wwn_nvme                | port_index | current_speed | appliance_id | is_link_up
----+---------------------------------------+-------------------------+------------+---------------+--------------+-----------
  1 | BaseEnclosure-NodeA-IoModule0-FEPort1 | 58:cc:f0:90:49:29:07:7b | 1          | 32_Gbps       | A1           | yes
  2 | BaseEnclosure-NodeB-IoModule0-FEPort0 | 58:cc:f0:98:49:28:07:7b | 0          | 32_Gbps       | A1           | yes
  3 | BaseEnclosure-NodeB-IoModule0-FEPort1 | 58:cc:f0:98:49:29:07:7b | 1          | 32_Gbps       | A1           | yes
  4 | BaseEnclosure-NodeA-IoModule0-FEPort0 | 58:cc:f0:90:49:28:07:7b | 0          | 32_Gbps       | A1           | yes
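For reference, the host-side initiator WWPNs that are zoned against these NVMe WWN targets are typically the physical FC HBA port names. As an illustration only, on a Linux host they can be listed from sysfs:

# Illustration only (Linux): list the FC HBA port WWNs (WWPNs) of this host.
# These initiator WWPNs are zoned against the PowerStore NVMe WWN target ports.
cat /sys/class/fc_host/host*/port_name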

iSCSI SAN Guidelines


This section details the best practices for attaching hosts to a PowerStore cluster in a highly-available, resilient and optimal
iSCSI SAN.

Recommended Configuration Values Summary


The following table summarizes the recommended variables related to iSCSI SAN:

Validation | Impact | Severity | Refer to Section
Use two separate fabrics. | Redundancy | Mandatory | Recommended Configuration
Each host should be connected to both nodes of each appliance. | Redundancy | Mandatory | SAN Connectivity Best Practices
If Jumbo Frames are required, make sure that all ports (servers, switches, and system) are configured with the same MTU value. | Redundancy | Mandatory | Recommended Configuration
Maximum number of paths per volume per host: 8 | Performance | Warning | Recommended Configuration
Recommended number of paths per volume per host: 4 | Performance | Warning | Recommended Configuration
Link speed should be consistent across all paths to the PowerStore cluster. | Performance | Warning | Recommended Configuration
Duplex setting should be consistent across all paths to the PowerStore cluster per single host or a cluster of hosts. | Performance | Warning | Recommended Configuration
Enable the TCP Offloading Engine (TOE) on the host interfaces. | Performance | Warning | Recommended Configuration
Balance the hosts across the target ports of the appliances to provide a distributed load across all target ports. | Performance | Recommended | Recommended Configuration
Use dedicated NICs or iSCSI HBAs for the PowerStore cluster iSCSI connection. NOTE: Avoid partitioning the interface. | Performance | Recommended | Recommended Configuration
Maximum number of network subnets supported with PowerStore over iSCSI SAN: with PowerStore OS 2.0 (or later), up to 32 subnets are supported; with PowerStore OS 1.x, only a single subnet is supported. | Performance | Normal | Additional Considerations

Recommended Configuration
Consider the following recommendations when setting up an iSCSI SAN infrastructure:
● Use two separate fabrics. Each fabric should be on a different physical switch for resiliency.
● The optimal number of paths depends on the operating system and server information. To avoid multipathing performance
degradation, do not use more than eight paths per device per host. It is recommended to use four paths.
● Keep a consistent link speed and duplex across all paths to the PowerStore cluster per single host or a cluster of hosts.
● With a multi-appliance cluster, it is highly advised to zone the host to as many appliances as possible, to achieve best load
distribution across the cluster. Be sure to keep the minimum/optimal zoning recommendations for each appliance.
NOTE: A multi-appliance cluster is not designed to provide better resiliency, but to provide better load balance. To
perform volume migration between appliances, a host must be zoned to both appliances.
● External hosts can be attached to a PowerStore cluster by either the embedded 4-port card or by a SLIC:
○ A host can be connected to 1-4 appliances. It is recommended to connect the host to as many appliances as possible to
allow volume migration to and from all appliances.
○ Hosts that are connected over the first two ports of the 4-port card are connected using ToR switches (also used for
PowerStore internal communication). With this configuration, it is recommended to use a dedicated VLAN, and if not
possible, use a separate subnet/network.
○ For hosts connected using any other port (that is, not the first two ports), use either dedicated switches or a dedicated
VLAN for iSCSI storage.



○ The PowerStore cluster can be shipped with various extension modules. If your PowerStore cluster contains more than
one extension I/O module per node, distribute the connections among all I/O modules for maximum availability and
performance.
● Ethernet switch recommendations:
○ Use nonblocking switches.
○ Use enterprise-grade switches.
○ Utilize at minimum 10 GbE interfaces.
● It is recommended to use dedicated NICs or iSCSI HBAs for PowerStore cluster iSCSI and not to partition the interface (that
is, disable NIC Partitioning - NPAR).
● Enable the TCP Offloading Engine (TOE) on the host interfaces to offload the TCP packet encapsulation from the CPU of
the host to the NIC or iSCSI HBA, and free up CPU cycles.
● It is recommended to use interfaces individually rather than using NIC Teaming (Link Aggregation), to combine multiple
interfaces into a single virtual interface.
● If Jumbo Frames are required, ensure that all ports (servers, switches, and system) are configured with the same MTU
value (see the sketch after this list).
NOTE: Failure to configure a consistent MTU end to end may result in PowerStore node failures. For details, see Dell
EMC Knowledge Article 000196316 (PowerStore: After increasing the MTU...).

NOTE: VMware requires setting Jumbo Frames at the virtual switch (vSS or vDS) and VMkernel level.
● See your Ethernet switch user manual for instructions on the implementations.
● For detailed information about connecting the PowerStore appliance to the ToR switch, see the PowerStore Network
Planning Guide and the Network Configuration Guide for Dell PowerSwitch Series.
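The following is a hedged sketch of configuring and verifying Jumbo Frames on a vSphere ESXi host with a standard vSwitch. vSwitch1, vmk1, and the target IP are example names only; for a vDS, the MTU is set in vCenter instead.

# Hypothetical sketch (ESXi, vSS): set MTU 9000 on the virtual switch and on the
# iSCSI VMkernel port. The physical switch ports and the PowerStore storage
# network must be configured with the same MTU.
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify that jumbo frames pass end to end without fragmentation
# (8972 = 9000 bytes minus IP and ICMP headers):
vmkping -I vmk1 -d -s 8972 <PowerStore_iSCSI_target_IP>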

Additional Considerations
Review the following considerations when configuring hosts with PowerStore storage using iSCSI:
● Maximum number of network subnets supported with PowerStore over iSCSI SAN:
○ With PowerStore operating system 2.0 (or later), up to 32 subnets are supported, but only up to eight subnets are
supported per physical port.
○ With PowerStore operating system 1.x, only a single subnet is supported.
● See the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for iSCSI SAN support limitations regarding HBAs,
operating systems, and Direct-Attach.

NVMe over TCP (NVMe/TCP) SAN Guidelines


This section details the best practices for attaching hosts to a PowerStore cluster in a highly-available, resilient and optimal
NVMe/TCP SAN.

Recommended Configuration Values Summary


The following table summarizes the recommended variables related to NVMe/TCP SAN:

Validation | Impact | Severity | Refer to Section
Use two separate fabrics. | Redundancy | Mandatory | Recommended Configuration
Each host should be connected to both nodes of each appliance. | Redundancy | Mandatory | SAN Connectivity Best Practices
Configure two PowerStore storage networks on two separate VLANs. | Redundancy | Recommended | Recommended Configuration
If Jumbo Frames are required, make sure that all ports (servers, switches, and system) are configured with the same MTU value. | Redundancy | Mandatory | Recommended Configuration
Maximum number of paths per volume per host: 8 | Performance | Warning | Recommended Configuration
Recommended number of paths per volume per host: 4 | Performance | Warning | Recommended Configuration
Link speed should be consistent across all paths to the PowerStore cluster. | Performance | Warning | Recommended Configuration
Duplex setting should be consistent across all paths to the PowerStore cluster per single host or a cluster of hosts. | Performance | Warning | Recommended Configuration
Enable the TCP Offloading Engine (TOE) on the host interfaces. | Performance | Warning | Recommended Configuration
Balance the hosts across the target ports of the appliances to provide a distributed load across all target ports. | Performance | Recommended | Recommended Configuration
Use dedicated NICs or iSCSI HBAs for the PowerStore cluster connection. NOTE: Avoid partitioning the interface. | Performance | Recommended | Recommended Configuration

Recommended Configuration
Consider the following recommendations with PowerStore storage using NVMe/TCP.
● Use two separate fabrics. Each fabric should be on a different physical switch for resiliency.
● The optimal number of paths depends on the operating system and server information. To avoid multipathing performance
degradation, do not use more than eight paths per device per host. It is recommended to use four paths.
● Keep a consistent link speed and duplex across all paths to the PowerStore cluster per a single host or a cluster of hosts.
● With a multi-appliance cluster, it is highly advised to zone the host to as many appliances as possible, to achieve best load
distribution across the cluster. Be sure to keep the minimum/optimal zoning recommendations for each appliance.
NOTE: A multi-appliance cluster is not designed to provide better resiliency, but rather to provide better load balance.
To perform volume migration between appliances, a host must be zoned to both appliances.
● External hosts can be attached using NVMe/TCP to a PowerStore cluster by either the embedded 4-port card or by a SLIC:
○ A host can be connected to 1-4 appliances. It is recommended to connect the host to as many appliances as possible to
allow volume migration to and from all appliances.
○ Hosts that are connected over the first two ports of the 4-port card are connected using ToR switches (also used for
PowerStore internal communication). With this configuration, it is recommended to use a dedicated VLAN, and if not
possible, use a separate subnet/network.
○ For hosts connected using any other port (that is, not the first two ports), use either dedicated Ethernet switch or a
dedicated VLAN.
○ The PowerStore cluster can be shipped with various extension modules. If your PowerStore cluster contains more
than one extension I/O module per node, distribute the connections among all I/O modules for highest availability and
performance.
● Ethernet switch recommendations:
○ Use non-blocking switches.
○ Use enterprise-grade switches.
○ Utilize at minimum 10 GbE interfaces.
● It is recommended to use dedicated NICs or iSCSI HBAs for PowerStore cluster and not to partition the interface (that is,
disable NIC Partitioning - NPAR).
● Enable the TCP Offloading Engine (TOE) on the host interfaces, to offload the TCP packet encapsulation from the CPU of
the host to the NIC or iSCSI HBA, and free up CPU cycles.
● It is recommended to use interfaces individually rather than using NIC Teaming (Link Aggregation), to combine multiple
interfaces into a single virtual interface.
● If Jumbo Frames are required, ensure that all ports (servers, switches, and system) are configured with the same MTU
value.



NOTE: Failure to configure a consistent MTU end to end may result in PowerStore node failures. For details, see Dell
EMC Knowledge Article 000196316 (PowerStore: After increasing the MTU...).
● See your Ethernet switch user manual for instructions on the implementations.
● For detailed information about connecting the PowerStore appliance to the ToR switch, see the PowerStore Network
Planning Guide and the Network Configuration Guide for Dell PowerSwitch Series.

Additional Considerations
Review the following additional considerations when configuring hosts with PowerStore using NVMe/TCP.
● NVMe/TCP requires ports 8009 and 4420 to be open between PowerStore storage networks and each NVMe/TCP initiator (see the example after this list).
● See the NVMe/TCP Host/Storage Interoperability Simple Support Matrix for supported NICs/HBA models and drivers with
NVMe/TCP and known limits.
● NVMe/TCP with vSphere ESXi requires vDS 7.0.3 (or later) or a VSS.
● For customers deploying NVMe/TCP environments at scale, consider leveraging SmartFabric Storage Software to automate
host and subsystem connectivity. For more information, see the SmartFabric Storage Software Deployment Guide.
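As an illustration only, the discovery and connection flow can be exercised from a Linux host with nvme-cli; the IP address is a placeholder for a PowerStore storage network address, and vSphere ESXi hosts have equivalent esxcli commands.

# Illustration only (Linux, nvme-cli): query the NVMe discovery service (TCP port 8009)
# and connect to the discovered NVMe/TCP I/O controllers (TCP port 4420).
nvme discover -t tcp -a 192.168.10.10 -s 8009
nvme connect-all -t tcp -a 192.168.10.10 -s 8009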

NVMe-oF General Guidelines


PowerStore operating system 2.0 introduces support for NVMe/FC, and PowerStore operating system 2.1 introduces support for
NVMe/TCP.
The following is a high-level list of concepts and key features of PowerStore with NVMe-oF:
● NVMe Subsystem
○ An NVMe Subsystem usually represents a storage array (except for a discovery subsystem).
○ A PowerStore cluster (Federation) is considered a single NVMe Subsystem.
○ NVMe-oF is supported on both PowerStore T model and PowerStore X model appliances.
● NVMe Front-End Ports
○ NVMe Front-End ports are the target ports capable of NVMe-oF.
○ On PowerStore, all front-end (FE) FC and Ethernet ports are capable of NVMe-oF.
○ When you create a storage network, the NVMe/TCP purpose must be manually selected to work with NVMe/TCP.
○ When you upgrade to PowerStore operating system 2.0, all FC ports automatically support NVMe/FC and NPIV is
enabled on those ports.
○ When you upgrade to PowerStore operating system 2.1, all iSCSI storage networks are automatically assigned the
purpose of NVMe/TCP.
● NVMe Qualified Name (NQN)
○ Uniquely describes a Host or NVMe subsystem for identification and authentication.
○ This value can be modified (depending on the operating system) to a UUID-based or hostname-based value; a Linux example of viewing and generating the host NQN is shown after this list.
○ The value must comply with NVMe Express Base Specification, chapter 4.5 (NVMe Qualified Names).
● Namespace
○ A Namespace is equivalent to a Logical Unit (LU) in the SCSI world, and represents the data written to a PowerStore volume.
○ Mapping a volume to a host (or host group) designates that volume as either SCSI (iSCSI or FC) or NVMe (NVMe/FC or
NVMe/TCP).
○ A volume can only be mapped to either an NVMe host (or host group) or to a SCSI host (or host group).
● Namespace ID (NSID)
○ An NSID is equivalent to a Logical Unit Number (LUN) in the SCSI world, and represents the identifier of a namespace
(volume).
○ An NSID on a PowerStore cluster is unique across the NVMe subsystem.
○ With SCSI, there is a distinction between the Array LUN ID (ALU) and the Host LUN ID (HLU). For example, a SCSI LUN
may have an ALU of 10 on the array while being presented to a host with an HLU of 250. In addition, a LUN created on one
appliance can have the same ALU and HLU as another LUN created on a different appliance in the same PowerStore cluster.
○ With the PowerStore implementation of NVMe-oF, since an NVMe subsystem is a PowerStore cluster, a namespace (LU)
has a single NSID (for example, a volume that is created with NSID 10 has the same ID across all appliances, both internally
on the array (ALU) and externally to the hosts (HLU)). With NVMe-oF, there is no distinction between HLU and ALU.
● For a deep dive theory on NVMe-oF, see NVMe, NVMe/TCP, and Dell SmartFabric Storage Software Overview IP SAN
Solution Primer.
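For example, on a Linux host with nvme-cli installed, the host NQN can be viewed, or regenerated as a UUID-based value, as follows (a sketch only; see the operating system chapters for platform-specific instructions):

# Illustration only (Linux, nvme-cli): view the host NQN presented to PowerStore,
# or generate a new UUID-based NQN if one does not exist.
cat /etc/nvme/hostnqn
nvme gen-hostnqn > /etc/nvme/hostnqn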



SAN Connectivity Best Practices
This section provides general guidelines for physically connecting hosts with PowerStore cluster.
NOTE: The diagrams throughout this section illustrate possible implementations of these guidelines. Other possible
implementations are not illustrated.
● A PowerStore appliance contains two nodes.
● To prevent a host path failure due to a single node failure, ensure that redundancy is maintained (connect each initiator at
minimum to each node).

Direct Attach
● A host must be connected at minimum with one path to each node for redundancy.
● See E-Lab for details of supported configuration with Direct Attach:
○ See the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC and iSCSI configurations.
○ See the E-Lab Dell EMC 32G FC-NVMe Simple Support Matrix for supported NVMe/FC configurations with Direct
Attach.
○ For a host that is directly attached to a PowerStore appliance, disable NVMe/FC support on the HBA. For details
on potential issues when directly connecting a host to PowerStore, see Dell EMC Knowledge Article 000200588
(PowerStore: After an upgrade...) and Dell EMC Knowledge Article 000193380 (PowerStoreOS 2.0: ESXi hosts do not
detect...).
○ See the E-Lab NVMe/TCP Host/Storage Interoperability Simple Support Matrix for supported NVMe/TCP configurations
with Direct Attach.
● The following diagram describes minimum connectivity with a single PowerStore appliance:

1. PowerStore appliance
2. Node
3. Host

SAN - Minimal Configuration

NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.

● A host must be connected at minimum with one path to each node for redundancy.
● The following diagram describes a minimum connectivity with a single PowerStore appliance.



1. PowerStore appliance
2. Node
3. Fibre Channel Switch
4. Host

SAN - Recommended Configuration


Consider the following recommendations when setting up a SAN infrastructure.

Single Appliance Cluster

NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.

● It is recommended that a host is connected with two paths to each node for redundancy.
● The following diagram describes simple connectivity with a single PowerStore appliance.



1. PowerStore appliance
2. Node
3. ToR/iSCSI Switch
4. Host

Two Appliance Cluster

NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.

● It is recommended that a host is connected with two paths to each node for redundancy.
● The following diagram describes simple connectivity with two (2) PowerStore appliances.

1. PowerStore appliance
2. Node
3. ToR/iSCSI Switch
4. Host

Three Appliance Cluster

NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.

● It is recommended that a host is connected with two paths to each node on each appliance for redundancy.
● The following diagram describes simple connectivity with three (3) PowerStore appliances.



1. PowerStore appliance
2. Node
3. ToR/iSCSI Switch
4. Host

Four Appliance Cluster

NOTE: Be sure to configure proper zoning for FC and NVMe/FC or proper subnetting with iSCSI and NVMe/TCP.

● It is recommended that a host is connected with two paths to each node on each appliance for redundancy.
● The following diagram describes simple connectivity with four (4) PowerStore appliances.



1. PowerStore appliance
2. Node
3. ToR/iSCSI Switch
4. Host



Chapter 3: Host Configuration for VMware vSphere ESXi
This chapter contains the following topics:
• Related E-Lab Host Connectivity Guide
• Chapter Scope
• Recommended Configuration Values Summary
• Boot from SAN
• Fibre Channel (FC) Configuration
• NVMe over Fibre Channel Configuration
• iSCSI Configuration
• NVMe/TCP Configuration
• vStorage API for System Integration (VAAI) Settings
• Setting the Maximum I/O
• Confirming UNMAP Priority
• Configuring VMware vSphere with PowerStore Storage in a Multiple Cluster Configuration
• Multipathing Software Configuration
• PowerStore Considerations
• vSphere Considerations

Related E-Lab Host Connectivity Guide


The topics in this chapter detail specific caveats and configuration parameters that must be present when configuring a
VMware vSphere ESXi host to access a PowerStore storage. These caveats and parameters should be applied with the
configuration steps that are detailed on the E-Lab Host Connectivity Guide for VMware vSphere ESXi (see the E-Lab
Interoperability Navigator at https://elabnavigator.dell.com).

Chapter Scope
This chapter provides guidelines only for configuring ESXi hosts that are connected externally to PowerStore. For configuring
an internal ESXi host on PowerStore X, see the Dell EMC PowerStore Virtualization Infrastructure Guide document at https://dell.com/support.
NOTE: This document includes links to external documents. These links may change. If you cannot open a link, contact the
vendor for information.

Recommended Configuration Values Summary


The following table summarizes all used and recommended variables and their values when configuring hosts for VMware
vSphere.

NOTE: Unless indicated otherwise, use the default parameter values.

The following validations are listed with their impact, severity, and the section to refer to for details.

● Validation: Unless stated otherwise in this chapter (see the note above), use the default parameter settings; in particular, ensure that the LUN and HBA queue depth and the HBA timeout are set per the default operating system setting.
  Impact: Stability & Performance. Severity: Recommended. Refer to: the operating system and HBA documentation for further details.
● Validation: ESXi configuration: Disk.DiskMaxIOSize = 1024.
  Impact: Stability & Performance. Severity: Mandatory with ESXi versions earlier than 7.0; not required with ESXi version 7.0 or later. Refer to: Setting the Maximum I/O.
  NOTE: Mandatory for ESXi versions earlier than 7.0, unless the ESXi version is not exposed to the issue covered in VMware Knowledge Article 2137402 (Virtual machines using EFI firmware fails...).
● Validation: ESXi configuration: Keep the UNMAP priority for the host at the lowest possible value (the default value for ESXi 6.5).
  Impact: Stability & Performance. Severity: Mandatory. Refer to: Confirming UNMAP Priority.
● Validation: Specify ESXi as the operating system for each defined host.
  Impact: Serviceability. Severity: Mandatory. Refer to: Presenting PowerStore Volumes to the ESXi Host.
● Validation: Path selection policy for SCSI: VMW_PSP_RR.
  Impact: Stability & Performance. Severity: Mandatory. Refer to: Configuring vSphere Native Multipathing.
● Validation: Path selection policy for NVMe: LB-IOPS.
  Impact: Performance. Severity: Recommended. Refer to: Configuring High Performance Multipathing (HPP) with NVMe.
● Validation: Alignment: Guest OS virtual machines should be aligned.
  Impact: Storage efficiency & Performance. Severity: Warning. Refer to: Disk Formatting.
● Validation: iSCSI configuration: Configure end-to-end Jumbo Frames.
  Impact: Performance. Severity: Recommended. Refer to: Jumbo Frames.
● Validation: iSCSI configuration: Disable Delayed ACK on ESXi.
  Impact: Stability. Severity: Recommended. Refer to: Delayed ACK.
● Validation: iSCSI configuration: Adjust LoginTimeOut to 30.
  Impact: Stability. Severity: Recommended. Refer to: Login Timeout.
● Validation: iSCSI configuration: Adjust NoopInterval to 5.
  Impact: Stability. Severity: Recommended. Refer to: No-Op Interval.
● Validation: Path switching: Switch for every I/O.
  Impact: Performance. Severity: Recommended. Refer to: Configuring vSphere Native Multipathing.
● Validation: Virtual Disk Provisioning: Use thin provisioned virtual disks.
  Impact: Performance. Severity: Recommended. Refer to: Virtual Machine Formatting.
● Validation: Virtual machine configuration: Configure virtual machines with Paravirtualized SCSI controllers.
  Impact: Stability & Performance. Severity: Recommended. Refer to: VMware Paravirtual SCSI controllers.
● Validation: RDM volumes: In the Guest OS, span RDM volumes used by the virtual machine across SCSI controllers.
  Impact: Performance. Severity: Recommended. Refer to: Virtual Machine Guest OS Settings.

NOTE: For information about virtualization and Virtual Volumes, see the following white papers:
● Dell EMC PowerStore Virtualization Infrastructure Guide at https://dell.com/support
● Dell PowerStore: VMware vSphere Best Practices

NOTE: As noted in Dell EMC Knowledge Article 000126731 (PowerStore - Best practices for VMFS datastores...), when
using vSphere v6.7 there is a known issue relating to VMFS deadlock. To resolve, install the latest vSphere version



(v6.7 P02) that includes a fix for this issue. For further details, see fix PR 2512739 in VMware ESXi 6.7, Patch Release
ESXi670-202004002.

Boot from SAN


For guidelines and recommendations for boot from SAN with vSphere ESXi hosts and PowerStore, refer to the Considerations
for Boot from SAN with PowerStore appendix.

Fibre Channel (FC) Configuration


This section describes the recommended configuration that should be applied when attaching hosts to PowerStore cluster using
Fibre Channel.
NOTE: This section applies only to Fibre Channel. If you are using any other protocol with vSphere ESXi, see the relevant
configuration section.

Pre-Requisites
When attaching a host to a PowerStore cluster using Fibre Channel, ensure that the following pre-requisites are met:
● Review Fibre Channel SAN Guidelines before you proceed.
● Ensure that you are using PowerStore operating system 2.0 (or later).
● See the E-Lab Dell EMC 32G FC-NVMe Simple Support Matrix for supported Fibre Channel HBA models and drivers with
NVMe/FC and known limits.
● Verify that all HBAs have supported driver and firmware versions according to the Support Matrix at the E-Lab Navigator
(https://elabnavigator.dell.com).
● Verify that all HBAs BIOS settings are configured according to Dell EMC E-Lab Navigator recommendations (https://
elabnavigator.dell.com).
● It is highly recommended to install the nvme-cli package:

yum install nvme-cli

● Locate your FC HBA information:

systool -c fc_host -v
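On an ESXi host, HBA and driver details can also be collected with esxcli. The following is a hedged sketch (the qlnativefc driver name is only an example; adapter and driver names vary by host):

# List the storage adapters and the driver that claims each of them
esxcli storage core adapter list
# Show Fibre Channel port details (WWNN/WWPN, speed, and port state)
esxcli storage san fc list
# Check the installed VIB version of a specific HBA driver
esxcli software vib list | grep -i qlnativefc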

Known Issues
For a host directly attached to the PowerStore appliance, disable NVMe/FC support on the HBA. For details on potential issues
when directly connecting a host to PowerStore, see Dell EMC Knowledge Article 000200588 (PowerStore: After an upgrade...)
and Dell EMC Knowledge Article 000193380 (PowerStoreOS 2.0: ESXi hosts do not detect...).
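As a hedged example that is based on the HBA module parameters described later in this chapter, NVMe/FC support can be disabled at the driver level as follows (confirm the exact parameter with the HBA vendor documentation and the knowledge articles above, and reboot the host afterward):

# Marvell (QLogic) qlnativefc driver: 0 disables NVMe/FC support
esxcli system module parameters set -m qlnativefc -p ql2xnvmesupport=0
# Emulex lpfc driver: 1 enables FCP only (NVMe/FC disabled)
esxcli system module parameters set -m lpfc -p lpfc_enable_fc4_type=1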

NVMe over Fibre Channel Configuration


This section describes the recommended configuration that should be applied when attaching hosts to PowerStore cluster using
NVMe over Fibre Channel.
NOTE: This section applies only to NVMe/FC. If you are using any other protocol with vSphere ESXi, see the relevant
configuration section



Pre-Requisites
When attaching a host to a PowerStore cluster using NVMe over Fibre Channel, ensure that the following pre-requisites are met:
● Review NVMe/FC SAN Guidelines before you proceed.
● PowerStore operating system 2.0 (or later) is required.
● See the E-Lab Dell EMC 32G FC-NVMe Simple Support Matrix for supported Fibre Channel HBA models and drivers with
NVMe/FC and known limits.
● Verify that all HBAs have supported driver and firmware versions.
● Verify that all HBAs BIOS settings are configured according to E-Lab recommendations.
● Review the VMware vSphere Storage document for the vSphere version running on the ESXi hosts, a requirement list,
limitations, and other configuration considerations. For example, for vSphere 7.0u3, see chapter 16 (on VMware NVMe
Storage).

Setting the ESXi Host NVMe Qualified Name

Prerequisites
You can configure the host NVMe Qualified Name (NQN) using either Hostname or UUID. For visibility and simplicity, it is
recommended to use Hostname.

Steps
1. Connect to the ESXi host as root.
2. Run the following esxcli command for the Host NQN to be based on the hostname and verify that the setting is changed.

$ esxcli system module parameters set -m vmknvme -p vmknvme_hostnqn_format=1

$ esxcli system module parameters list -m vmknvme | grep vmknvme_hostnqn_format


vmknvme_hostnqn_format uint 1 HostNQN format, UUID: 0,
HostName: 1

3. Ensure that the value complies with NVMe Base Specification, Chapter 4.5 (NVMe Qualified Names).
4. Reboot the host.
5. Run the esxcli nvme info get command to confirm that the Host NQN was modified correctly.

# esxcli nvme info get


Host NQN: nqn.2014-08.com.emc.test:nvme:esx1

Setting up NVMe HBAs


For further details on HBA setup, see NVMe HBA documentation.

Setting Up Marvell (QLogic) HBAs


Follow these steps to set up Marvell (QLogic) NVMe HBAs for ESXi:

Steps
1. Connect to the host as root.
2. Run the following esxcli command:

esxcli system module parameters set -p ql2xnvmesupport=1 -m qlnativefc

NOTE: ql2xnvmesupport=1 enables NVMe/FC, and ql2xnvmesupport=0 disables NVMe/FC.



3. Reboot the host.
4. Be sure to zone against PowerStore NVMe Target WWNs.

Setting Up Emulex HBAs


Follow these steps to set up Emulex LPe3200x or LPe3500x HBAs for ESXi:

Steps
1. Connect to the host as root.
2. Run the following esxcli command:

esxcli system module parameters set -m lpfc -p lpfc_enable_fc4_type=3

NOTE: lpfc_enable_fc4_type=3 enables both FCP and NVMe/FC, and lpfc_enable_fc4_type=1 enables
only FCP.

3. Reboot the host.


4. Be sure to zone against PowerStore NVMe Target WWNs.

Known Issues

If you are using NVMe/FC, it is highly recommended to upgrade to PowerStore operating system 2.1.1. For information, see Dell
EMC Knowledge Article 000196492 (PowerStore: IO on ESXi VMs...).

iSCSI Configuration
This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster
using iSCSI.
NOTE: This section applies only to iSCSI. If you are using any other protocol with vSphere ESXi, see the relevant configuration section.

Pre-Requisites
The following pre-requisites should be met before attaching hosts to a PowerStore cluster using iSCSI:
● Review iSCSI SAN Guidelines before you proceed.
● See the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported NIC/iSCSI HBA models and drivers.
● Verify that all HBAs have supported driver, firmware, and BIOS versions.
● Follow the operating system recommendations for installation and setup of the appropriate NIC/iSCSI HBA for your host.
● It is recommended to install the latest driver version (patch), as described in the VMware support site for each specific
NIC/iSCSI HBA.
● Review the VMware vSphere Storage document for the vSphere version running on the ESXi hosts for the list of requirements, limitations, and other configuration considerations. For example, for vSphere 7.0u3, see VMware vSphere Storage.
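As a hedged sketch, the NIC model, driver, and firmware versions can be checked from the ESXi CLI before comparing them against the support matrix (vmnic1 is an example name):

# List the physical NICs and the driver each one uses
esxcli network nic list
# Show driver and firmware details for a specific NIC
esxcli network nic get -n vmnic1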

Network Configuration for iSCSI


Use these high-level procedures for configuring the iSCSI adapter on ESXi hosts connected to PowerStore.



PowerStore Operating System 1.x

About this task


Configure iSCSI networking on ESXi hosts connected to PowerStore running PowerStore operating system 1.x, in which only a single subnet is supported.

Steps
1. Dell Technologies recommends creating four target iSCSI IP addresses (two per node) on the same subnet/VLAN.
2. Create a single vSwitch (or vDS) consisting of two uplink physical ports (each connected to a different switch).
3. Create two VMkernel ports on the same subnet as the storage cluster iSCSI portals (the communication must not be
routable).
Example:
● iSCSI-A-port0 1.1.1.1/24
● iSCSI-A-port1 1.1.1.2/24
● iSCSI-B-port0 1.1.1.3/24
● iSCSI-B-port1 1.1.1.4/24
● vmk1 1.1.1.10/24
● vmk2 1.1.1.11/24

4. Ensure that both VMkernel interfaces are attached to the same vSwitch.
5. Override the default Network Policy for iSCSI. For details, see VMware vSphere documentation.
For example, with ESXi 7.0, see Multiple Network Adapters in iSCSI or iSER Configuration.

PowerStore Operating System 2.x and Above

About this task


Set up iSCSI networking on ESXi hosts connected to PowerStore running PowerStore operating system 2.x (or later), in which up to 32 network subnets are supported.
NOTE: Configuring a single subnet is also supported (but is less preferable). For information, see Network Configuration
for iSCSI - PowerStore Operating System 1.x.

Steps
1. Dell Technologies recommends creating four target iSCSI IP addresses (two per node) on two different subnets/VLANs.
2. Create a single vSwitch (or vDS) consisting of two uplink physical ports (each connected to a different switch).
3. Create two VMkernel ports: One on VLAN-A and another on VLAN-B as the storage iSCSI portals.
NOTE: It is highly recommended not to use routing on iSCSI.
Example:



● iSCSI-A-port0 1.1.1.1/24
● iSCSI-A-port1 1.1.2.1/24
● iSCSI-B-port0 1.1.1.2/24
● iSCSI-B-port1 1.1.2.2/24
● vmk1 1.1.1.10/24
● vmk2 1.1.2.10/24

4. Ensure that both VMkernel interfaces are attached to the same vSwitch.
5. Override the default Network Policy for iSCSI. For details, see the VMware vSphere documentation.
For example, with ESXi 7.0, see Multiple Network Adapters in iSCSI or iSER Configuration.

Example - Configuring iSCSI Networking

About this task


The steps below demonstrate configuring networking for a PowerStore operating system 2.x with a single virtual standard
switch and CLI.
NOTE: For a detailed procedure for configuring networking, see VMware documentation.

NOTE: Some of the output texts may be wrapped.

Steps
1. List current switches.
Verify that the vSwitch name that you intend to use is not in use.

$ esxcli network vswitch standard list | grep Name

2. Create a new virtual Standard Switch.

$ esxcli network vswitch standard add -v iSCSI

3. List current VMkernel interfaces.


Use a VMK number that is not currently in use.

$ esxcli network ip interface ipv4 address list


Name IPv4 Address IPv4 Netmask IPv4 Broadcast Address Type Gateway DHCP DNS
---- ------------ ------------ -------------- ------------ ------- --------
vmk0 17.17.17.17 255.255.255.0 17.17.17.255 DHCP 17.17.17.1 true

4. Configure uplinks.



In the example, 10 Gb interfaces are used (these interfaces must not be used by any other vSwitch or vDS).

$ esxcli network nic list


Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address
MTU Description
---- ---------- ------ ------------ ----------- ----- ------ -----------
--- -----------
vmnic0 0000:07:00.0 igbn Up Up 1000 Full
00:1e:67:bd:26:70 1500 Intel Corporation I350 Gigabit Network Connection
vmnic1 0000:04:00.0 ixgben Up Up 10000 Full
00:1e:67:fd:4b:94 1500 Intel(R) 82599 10 Gigabit Dual PortNetwork Connection
vmnic2 0000:04:00.1 ixgben Up Up 10000 Full
00:1e:67:fd:4b:95 1500 Intel(R) 82599 10 Gigabit Dual PortNetwork Connection
vmnic3 0000:07:00.1 igbn Up Up 1000 Full
00:1e:67:bd:26:71 1500 Intel Corporation I350 Gigabit Network Connection

5. Create port groups for each VMkernel interface. The example below is for the first VMkernel interface; use the same procedure for the second VMkernel interface.

$ esxcli network vswitch standard portgroup add --portgroup-name="vlan801" --vswitch-name=iSCSI
$ esxcli network vswitch standard portgroup set -p vlan801 --vlan-id 801
$ esxcli network ip interface add --interface-name=vmk1 --portgroup-name="vlan801"
$ esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=1.1.1.10 --netmask=255.255.255.0 --type=static

6. Verify that VMkernel ports are created.

$ esxcli network ip interface ipv4 address list


Name IPv4 Address IPv4 Netmask IPv4 Broadcast Address Type Gateway DHCP DNS
---- ------------ ------------ -------------- ------------ ------- --------
vmk0 17.17.17.17 255.255.255.0 17.17.17.255 DHCP 17.17.17.1 true
vmk1 1.1.1.10 255.255.255.0 1.1.1.255 STATIC 1.1.1.1 false
vmk2 1.1.2.10 255.255.255.0 1.1.1.255 STATIC 1.1.1.1 false

7. Set port group policy to override default settings.

$ esxcli network vswitch standard portgroup policy failover set -a vmnic1 -p vlan801
$ esxcli network vswitch standard portgroup policy failover set -a vmnic2 -p vlan802

8. Verify settings from both user interface and CLI.


9. Run the following command to verify access to the array storage ports.
Repeat on both vmk1 and vmk2, and for all Storage IPs visible from each VMkernel port.

$ vmkping -4 -c1 -Ivmk1 1.1.1.1


PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=64 time=0.319 ms
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.319/0.319/0.319 ms

10. Verify that TCP ports are open (-s specifies the source IP).
Repeat on both vmk1 and vmk2, and for all Storage IPs visible from each VMkernel port.

$ nc -z -v -s 1.1.1.10 1.1.1.1 3260


Connection to 1.1.1.1 3260 port [tcp/*] succeeded!



iSCSI Software Adapter Configuration

About this task


These steps provide a high-level outline for configuring the software iSCSI adapter.

Steps
1. Activate the Software iSCSI adapter.
NOTE: You can activate only one software iSCSI adapter.

a. In the vSphere Client, go to the ESXi host.


b. Click the Configure tab.
c. Under Storage, click Storage Adapters, and then click the Add icon.
d. Select Software iSCSI Adapter and confirm that you want to add the adapter.
The software iSCSI adapter (vmhba#) is enabled and appears on the Storage Adapters list. After the adapter is enabled, the
host assigns it with the default iSCSI name.
2. Configure Dynamic Discovery (also known as SendTargets discovery). Each time the initiator contacts a specified iSCSI server, the server responds by supplying a list of available targets to the initiator. It is possible to use either the Global Storage Discovery IP (GSIP) or any of the iSCSI target IPs for discovery. (A CLI alternative is sketched after this procedure.)
a. In the vSphere Client, go to the ESXi host.
b. Click the Configure tab.
c. Under Storage, click Storage Adapters, and select the adapter that you want to configure.
d. Click Dynamic Discovery, and then click Add.
e. Select Scan Adapter and ensure that all Target IPs are listed under Static Discovery.
3. Port Bindings.
● For additional information about Port Binding for iSCSI, see VMware Knowledge Article 2038869 (Considerations for
using software...).
● PowerStore operating system 1.x
Configure port binding for each VMkernel interface as described in the VMware vSphere documentation. See Best
Practices for Configuring Networking with Software iSCSI. For instructions on how to configure Port Binding, see
VMware Knowledge Article 2045040 (Configuring iSCSI port binding...).
a. Select your iSCSI Software Adapter vmhba.
b. Right-click the iSCSI Software Adapter vmhba and select Properties.
c. On the Network Configuration tab, click Add for the VMkernel port that you want to bind and then click OK.
d. Repeat the previous step for all iSCSI VMkernel ports that you want to bind.
e. On the iSCSI Initiator Properties window, click Close.
f. Rescan the iSCSI Software Adapter.
● PowerStore operating system 2.x
○ If you are using a single subnet, follow the instructions for PowerStore operating system 1.x (above).
○ If you are using multiple subnets, do not configure port binding as described in the VMware vSphere documentation.
4. On the PowerStore UI, create a host object and set the initiator type to VMware.
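As referenced in step 2 above, Dynamic Discovery can also be configured from the ESXi CLI. The following is a hedged sketch (the adapter name and the discovery IP address are examples):

# Add a SendTargets (Dynamic Discovery) address; any PowerStore iSCSI target IP or the GSIP can be used
esxcli iscsi adapter discovery sendtarget add -A vmhba65 -a 1.1.1.1:3260
# Rescan the adapter so that the discovered targets appear under Static Discovery
esxcli storage core adapter rescan -A vmhba65
# List the configured SendTargets addresses
esxcli iscsi adapter discovery sendtarget list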

Jumbo Frames
Configure end-to-end Jumbo Frames for optimal performance.
When using iSCSI with ESXi hosts and PowerStore, it is recommended to configure end-to-end Jumbo Frames (MTU=9000) for optimal performance. Jumbo Frames are Ethernet frames that are larger than the standard MTU of 1500 bytes (for IPv4; the minimum MTU for IPv6 is 1280 bytes).
For information about configuring Jumbo Frames with iSCSI on ESXi, see VMware Knowledge Article 1007654 (iSCSI and Jumbo Frames...).
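As a hedged sketch, the MTU can be set on the vSwitch and the iSCSI VMkernel ports from the ESXi CLI (the vSwitch and VMkernel names follow the earlier example and may differ in your environment); the physical switch ports and the PowerStore interfaces must also be configured for MTU 9000 end to end:

# Set MTU 9000 on the standard vSwitch used for iSCSI
esxcli network vswitch standard set -v iSCSI -m 9000
# Set MTU 9000 on each iSCSI VMkernel port
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000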



Delayed ACK
For optimal traffic, it is recommended to disable Delayed ACK on ESXi.
For optimal iSCSI traffic between the ESXi hosts and PowerStore, especially during periods of network congestion, it is recommended to disable Delayed ACK on ESXi. When Delayed ACK is disabled, the ESXi host sends an acknowledgment (ACK) segment for every received data segment, rather than delaying ACKs while receiving a stream of TCP data segments.
For information about the Delayed ACK parameter and how to disable it using the vSphere Client, see VMware Knowledge Article 1002598 (ESX/ESXi hosts might experience...).
NOTE: The recommended method for configuring the Delayed ACK setting is per discovered iSCSI target. As a result,
Delayed ACK can be disabled only for PowerStore iSCSI targets.

Login Timeout
Follow these steps to set the iSCSI login timeout.

About this task


When establishing an iSCSI session between the initiator and target, the login timeout setting controls how long the ESXi host attempts to log in to the iSCSI target before failing the login and retrying. The default setting for LoginTimeOut is 5 (seconds). By default, an iSCSI session therefore ceases retries after 20 seconds (the 5-second LoginTimeOut multiplied by the LoginRetryMax setting, which is set by default to four).
To optimize the iSCSI session behavior with PowerStore and to better handle periods of network disruption, it is recommended to adjust LoginTimeOut to 30.
The following steps describe how to adjust LoginTimeOut, using command line.

Steps
1. Connect to the host as root.
2. Run the following command:

esxcli iscsi adapter param set -A adapter_name -k LoginTimeout -v value_in_sec

Example
Replacing VMHBA number with the iSCSI vmhba:

esxcli iscsi adapter param set -A vmhba66 -k LoginTimeout -v 30
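To confirm the change, the adapter parameters can be listed (a hedged sketch; replace the vmhba name with your iSCSI adapter):

esxcli iscsi adapter param get -A vmhba66 | grep -i LoginTimeout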

No-Op Interval
Follow these steps to set the iSCSI No-Op interval.

About this task


The iSCSI No-Op settings (NoopInterval and NoopTimeout) are used to determine whether a path is dead when it is not the active path. iSCSI passively checks nonactive paths every NoopInterval, and if no response is received within NoopTimeout, the path is marked as DEAD.
The default setting for NoopInterval is 10. To optimize the iSCSI session behavior with PowerStore, it is recommended to adjust NoopInterval to 5. This adjustment triggers an iSCSI path failover following a network disconnection before the command times out.
The following steps describe how to adjust NoopInterval using command line:

Steps
1. Connect to the host as root.



2. Run the following command:

esxcli iscsi adapter param set -A adapter_name -k NoopInterval -v value_in_sec

Example

esxcli iscsi adapter param set -A vmhba66 -k NoopInterval -v 5

Known Issues
● When using Jumbo Frames, ensure that all ports (virtual switch, VMkernel ports, switch ports, and PowerStore iSCSI interfaces) are configured with the correct MTU value. For information, see Dell EMC Knowledge Article 000196316 (PowerStore: After increasing the MTU...).
● When using iSCSI software initiator with ESXi and PowerStore storage, it is recommended to use only lower case characters
in the IQN to correctly present the PowerStore volumes to ESXi. For information, see VMware Knowledge Article 2017582
(Recommended characters in the...).
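As a hedged sketch, the initiator IQN that is configured on the software iSCSI adapter can be reviewed from the CLI (the adapter name is an example); if it contains uppercase characters, change it as described in the VMware Knowledge Article above:

esxcli iscsi adapter get -A vmhba65 | grep -i iqn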

NVMe/TCP Configuration
This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster
using NVMe/TCP.
NOTE: This section applies only to NVMe/TCP. If you are using any other protocol with ESX, see the relevant configuration
section.

Pre-Requisites
The following pre-requisites should be met before attaching hosts to a PowerStore cluster using NVMe/TCP:
● Review NVMe over TCP (NVMe/TCP) SAN Guidelines before you proceed.
● See the E-Lab NVMe/TCP Host/Storage Interoperability Simple Support Matrix for supported NIC/HBA models and drivers
with NVMe/TCP and known limits.
● Verify that all HBAs have supported driver, firmware, and BIOS versions.
● Follow the operating system recommendations for installation and setup of the appropriate NIC/iSCSI HBA for your host.
● It is recommended to install the latest driver version (patch), as described in the VMware support site for each specific
NIC/iSCSI HBA.
● TCP ports 4420 and 8009 are open between each host interface and PowerStore subsystem port. These ports should be
open on the interfaces where NVMe/TCP is running.
● Review the VMware vSphere Storage document for the vSphere version running on the ESXi hosts for the list of requirements, limitations, and other configuration considerations. For example, for vSphere 7.0u3, see VMware vSphere Storage.

Setting the ESXi Host NVMe Qualified Name

Prerequisites
You can configure the host NVMe Qualified Name (NQN) using either Hostname or UUID. For visibility and simplicity, it is
recommended to use Hostname.

Steps
1. Connect to the ESXi host as root.
2. Run the following esxcli command for the Host NQN to be based on the hostname and verify that the setting is changed.

$ esxcli system module parameters set -m vmknvme -p vmknvme_hostnqn_format=1



$ esxcli system module parameters list -m vmknvme | grep vmknvme_hostnqn_format
vmknvme_hostnqn_format uint 1 HostNQN format, UUID: 0,
HostName: 1

3. Ensure that the value complies with NVMe Base Specification, Chapter 4.5 (NVMe Qualified Names).
4. Reboot the host.
5. Run the esxcli nvme info get command to confirm that the Host NQN was modified correctly.

# esxcli nvme info get


Host NQN: nqn.2014-08.com.emc.test:nvme:esx1

Network Configuration for NVMe/TCP

PowerStore Operating System 2.x and Above

About this task


Use these high-level procedures for preparing the networking stack for NVMe/TCP on ESXi hosts connected to PowerStore
with a PowerStore operating system 2.1 (or later), where up to 32 network subnets are supported.

Steps
1. Dell Technologies recommends creating four target NVMe/TCP IP addresses (two per node) on two different subnets/VLANs.
2. Create a single vSwitch (or vDS) consisting of two uplink physical ports (each connected to a different switch).
3. Create two VMkernel ports, one on VLAN-A and another on VLAN-B, as the storage NVMe/TCP portals.
NOTE: It is highly recommended not to use routing on NVMe/TCP.

● NVMe/TCP-A-port0 1.1.1.1/24 (vlan801)


● NVMe/TCP-A-port1 1.1.2.1/24 (vlan802)
● NVMe/TCP-B-port0 1.1.1.2/24 (vlan801)
● NVMe/TCP-B-port1 1.1.2.2/24 (vlan802)
● vmk1 1.1.1.10/24 (vlan801)
● vmk2 1.1.2.10/24 (vlan802)

4. Ensure that both VMkernel interfaces are attached to the same vSwitch.
5. Override the default Network Policy for iSCSI. For details, see the VMware vSphere documentation.
For example, with ESXi 7.0, see Multiple Network Adapters in iSCSI or iSER Configuration.



Example - Configuring NVMe/TCP Networking

The following example demonstrates configuring networking for PowerStore operating system 2.1 with a single virtual standard switch and the CLI.

NOTE: Some of the output texts may be wrapped.

1. List current vSwitches and verify that the vSwitch name is not in use.

$ esxcli network vswitch standard list | grep Name

2. Create a new Virtual Standard Switch (ensure that the name is unique).

$ esxcli network vswitch standard add -v NVMeTCP

3. List current VMkernel interfaces (use an unused VMK number).

$ esxcli network ip interface ipv4 address list


Name IPv4 Address IPv4 Netmask IPv4 Broadcast Address Type Gateway DHCP DNS
---- ------------ ------------ -------------- ------------ ------- --------
vmk0 17.17.17.17 255.255.255.0 17.17.17.255 DHCP 17.17.17.1 true

4. Configure uplinks (In the example below, 10 Gb interfaces are used. These interfaces must not be used by any other vSwitch
or vDS).

$ esxcli network nic list


Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address
MTU Description
---- --- ------ ------ ------------ ----------- ----- ------ -----------
--- -----------
vmnic0 0000:07:00.0 igbn Up Up 1000 Full
00:1e:67:bd:26:70 1500 Intel Corporation I350 Gigabit Network Connection
vmnic1 0000:04:00.0 ixgben Up Up 10000 Full
00:1e:67:fd:4b:94 1500 Intel (R) 82599 10 Gigabit Dual Port Network Connection
vmnic2 0000:04:00.1 ixgben Up Up 10000 Full
00:1e:67:fd:4b:95 1500 Intel (R) 82599 10 Gigabit Dual Port Network Connection
vmnic3 0000:07:00.1 igbn Up Up 1000 Full
00:1e:67:bd:26:71 1500 Intel Corporation I350 Gigabit Network Connection

[root@esx1:~] esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=NVMeTCP
[root@esx1:~] esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=NVMeTCP

5. Create port groups for each VMkernel interface. Repeat the steps for the secondary VMkernel interfaces.

$ esxcli network vswitch standard portgroup add --portgroup-name="vlan801" --vswitch-name=NVMeTCP
$ esxcli network vswitch standard portgroup set -p vlan801 --vlan-id 801
$ esxcli network ip interface add --interface-name=vmk1 --portgroup-name="vlan801"
$ esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=1.1.1.10 --netmask=255.255.255.0 --type=static
$ esxcli network ip interface tag add -i vmk1 -t NVMeTCP

6. Verify that VMkernel ports are created.

$ esxcli network ip interface ipv4 address list


Name IPv4 Address IPv4 Netmask IPv4 Broadcast Address Type Gateway DHCP DNS
---- ------------ ------------ -------------- ------------ ------- --------
vmk0 17.17.17.17 255.255.255.0 17.17.17.255 DHCP 17.17.17.1 true
vmk1 1.1.1.10 255.255.255.0 1.1.1.255 STATIC 1.1.1.1 false
vmk2 1.1.2.10 255.255.255.0 1.1.1.255 STATIC 1.1.1.1 false

7. Set port group policy to override default settings.

$ esxcli network vswitch standard portgroup policy failover set -a vmnic1 -p vlan801
$ esxcli network vswitch standard portgroup policy failover set -a vmnic2 -p vlan802

8. Verify settings from both UI and CLI.



9. Run the following command to verify access to the array storage ports. Repeat for both vmk1 and vmk2, and for all storage
IPs visible from each VMkernel port.

$ vmkping -4 -c1 -Ivmk1 1.1.1.1


PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=64 time=0.319 ms
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.319/0.319/0.319 ms

NVMe/TCP Software Adapter Configuration

About this task


The following steps provide a high-level description of the software NVMe/TCP adapter configuration.

Steps
1. Activate the software NVMe/TCP adapter.
a. In the vSphere Client, go to the ESXi host.
b. Click the Configure tab.
c. Under Storage, click Storage Adapters and then click the Add icon.
d. Select the NVMe/TCP adapter for each vmnic on the NVMe/TCP virtual switch.
In the example, there should be two NVMe/TCP adapters, one enabled on vmnic1 and the other enabled on vmnic2.

2. Verify that TCP ports 4420 and 8009 are open (-s specifies source IP). Repeat on both vmk1 and vmk2, and for all storage
IPs visible from each VMkernel port.

$ nc -z -v -s 1.1.1.10 1.1.1.1 4420


Connection to 1.1.1.1 4420 port [tcp/*] succeeded!
$ nc -z -v -s 1.1.1.10 1.1.1.1 8009
Connection to 1.1.1.1 8009 port [tcp/*] succeeded!

3. Discover and connect to the PowerStore array NVMe/TCP subsystem.


a. On the Storage Adapters window, select one of the vmhba representing an NVMe/TCP adapter.



b. Select Controllers > Add Controller

c. On the Add Controller window, enter any of the PowerStore NVMe/TCP enabled ports IP addresses, and select port
8009 (discovery controller).
d. From the list, select the subsystem ports that you want to connect to, and click OK.
These NVMe subsystem ports must be on the same VLAN/subnet that the vmhba is attached to.
e. Repeat these steps for the other NVMe/TCP adapter.

Using CLI

About this task

NOTE: Some of the output texts may be wrapped.

Steps
1. Use the following command to view the configured NVMe adapters:

[root@lgsup1:~] esxcli nvme adapter list


Adapter Adapter Qualified Name Transport Type Driver Associated
Devices
------- ---------------------- -------------- ------
------------------
vmhba64 aqn:qlnativefc:21000024ff16b650 FC qlnativefc
vmhba65 aqn:qlnativefc:21000024ff16b651 FC qlnativefc
vmhba67 aqn:nvmetcp:00-1e-67-fd-4b-95-T TCP nvmetcp vmnic2
vmhba69 aqn:nvmetcp:00-1e-67-fd-4b-94-T TCP nvmetcp vmnic1

2. Verify that the ESXi can see the target subsystem controller/s.

$ esxcli nvme controller list


Name
Controller Number Adapter Transport Type Is Online

----
----------------- ------- -------------- ---------
nqn.1988-11.com.dell:powerstore:00:a25a740af431A946CBF6#vmhba67#172.28.1.223:4420
281 vmhba67 TCP true
nqn.1988-11.com.dell:powerstore:00:a25a740af431A946CBF6#vmhba67#172.28.1.222:4420
280 vmhba67 TCP true
nqn.1988-11.com.dell:powerstore:00:a25a740af431A946CBF6#vmhba69#172.28.2.223:4420
287 vmhba69 TCP true
nqn.1988-11.com.dell:powerstore:00:a25a740af431A946CBF6#vmhba69#172.28.2.222:4420
288 vmhba69 TCP true

NOTE: A controller should be created against each NVMe/TCP storage IP.

3. Use the following commands to discover and connect to the NVMe subsystem:

$ esxcli nvme fabrics discover -a vmhba67 -i 1.1.1.1 -c
$ esxcli nvme fabrics discover -a vmhba69 -i 1.1.2.1 -c

Known Issues
If you are using NVMe/TCP, it is highly recommended to upgrade to PowerStore operating system 2.1.1. For details, see Dell
EMC Knowledge Article 000196492 (PowerStore: IO on ESXi VMs...).

vStorage API for System Integration (VAAI) Settings


The PowerStore storage cluster fully supports VAAI. VAAI is an API that offloads operations such as virtual machine provisioning, storage cloning, and space reclamation to storage clusters that support VAAI.
To ensure optimal performance of PowerStore storage from vSphere, VAAI must be enabled on the ESXi host before using PowerStore storage. Failure to do so may result in PowerStore datastores becoming inaccessible to the host.
This section describes the necessary settings for configuring VAAI for PowerStore storage.

Confirming that VAAI is Enabled on the ESXi Host


Follow the steps below to confirm that VAAI is enabled on the ESXi host.

About this task


When you are using vSphere ESXi version 6.5 and above, VAAI is enabled by default. Before using the PowerStore storage,
confirm that VAAI features are enabled on the ESXi host.

Steps
1. Verify that the following parameters are enabled (that is, set to 1):
● DataMover.HardwareAcceleratedMove
● DataMover.HardwareAcceleratedInit
● VMFS3.HardwareAcceleratedLocking
2. If any of the above parameters are not enabled, click the Edit icon and then click OK to adjust them.

Example
The examples below can be used to query for VAAI status and to enable VAAI using CLI.



Query for VAAI status:

# esxcli system settings advanced list --option=/DataMover/HardwareAcceleratedInit


# esxcli system settings advanced list --option=/DataMover/HardwareAcceleratedMove
# esxcli system settings advanced list --option=/VMFS3/HardwareAcceleratedLocking

Verify the Int Value equals 1 (enabled).


Enable each of the settings:

# esxcli system settings advanced set --int-value 1 --option /DataMover/HardwareAcceleratedInit
# esxcli system settings advanced set --int-value 1 --option /DataMover/HardwareAcceleratedMove
# esxcli system settings advanced set --int-value 1 --option /VMFS3/HardwareAcceleratedLocking

NOTE: These settings enable ATS-only on supported VMFS Datastores, as noted in VMware Knowledge Article 1021976
(Frequently Asked Questions...).

Setting the Maximum I/O


Follow these guidelines to set the maximum I/O request size for storage devices.
Disk.DiskMaxIOSize determines the maximum I/O request size that is passed to storage devices. With PowerStore and ESXi releases earlier than 7.0, it is required to change this parameter from 32767 (the default setting, equivalent to 32 MB) to 1024 (1 MB).
Example: Setting Disk.DiskMaxIOSize to 1024 (1 MB).

esxcli system settings advanced set -o "/Disk/DiskMaxIOSize" --int-value 1024

NOTE: When setting Disk.DiskMaxIOSize to 1 MB on ESXi hosts connected to arrays other than PowerStore, performance
on large I/Os may be impacted.

NOTE: Setting the maximum I/O size is only required for ESXi versions earlier than 7.0, unless the ESXi version used is not
exposed to the issue covered in VMware Knowledge Article 2137402 (Virtual machines using EFI firmware...).
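To confirm the current value before or after the change, the setting can be queried (a hedged sketch):

esxcli system settings advanced list -o "/Disk/DiskMaxIOSize"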

Confirming UNMAP Priority


This topic provides steps for setting UNMAP priority on a DataStore.

Prerequisites
NOTE: Provisioning Virtual Disks with UNMAP set to a non-default priority on a DataStore provisioned on PowerStore may
result in an increased amount of write I/Os to the storage subsystem. It is therefore highly recommended to verify that
UNMAP is set to Low priority.

NOTE: See Dell EMC Knowledge Article 000126731 (Best practices for VMFS datastores...) for further unmap-related
recommendations when doing Virtual Machine File System (VMFS) bootstorm or failover with VMware Site Recovery
Manager (SRM) on VMFS datastores from ESXi hosts connected to PowerStore.
To set UNMAP priority on a datastore:

Steps
1. On most ESXi hosts, the default UNMAP priority is set to Low. It is recommended to verify, using ESX CLI, that the
datastores are configured with Low priority.
2. To verify that a datastore is set to Low priority:
a. List the file systems:

[~] esxcli storage filesystem list


Mount Point Volume Name

------------------------------------------------- -------------
/vmfs/volumes/6297861b-b77a1e70-93d5-588a5ae062f1 VMFS1
/vmfs/volumes/62978cd2-58d2b498-9317-588a5ae062f1 VMFS2

b. Verify that the file system is configured as low (Reclaim Priority):

[~] esxcli storage vmfs reclaim config get --volume-label VMFS1


Reclaim Granularity: 1048576 Bytes
Reclaim Priority: low <<<<<<<<<<<< priority Low
Reclaim Method: priority <<<<<<<<<<<< use priority and not fixed
Reclaim Bandwidth: 26 MB/s

3. If required, run the following ESX CLI command to modify the UNMAP priority to Low:

[~] esxcli storage vmfs reclaim config set --volume-label VMFS1 -p low

Configuring VMware vSphere with PowerStore Storage in a Multiple Cluster Configuration
Use the listed recommended values when multiple clusters are connected to vSphere.
The following table lists the recommended vSphere settings when multiple storage clusters are connected to vSphere (in
addition to PowerStore). Follow these recommendations instead of other recommendations in this chapter.
For reference, this table also includes the corresponding recommendations for settings when vSphere is connected to
PowerStore storage only.

For each setting, the scope/granularity, the multi-storage setting, and the PowerStore-only setting are listed.

● Setting: Cisco UCS: FC Adapter Policy. Scope/Granularity: Per vHBA. Multi-Storage Setting: default. PowerStore Only Setting: default.
● Setting: Cisco UCS nfnic: lun_queue_depth_per_path. Scope/Granularity: Global. Multi-Storage Setting: default (32). PowerStore Only Setting: default (32).
● Setting: Disk.SchedNumReqOutstanding. Scope/Granularity: LUN. Multi-Storage Setting: default. PowerStore Only Setting: default.
● Setting: Disk.SchedQuantum. Scope/Granularity: Global. Multi-Storage Setting: default. PowerStore Only Setting: default.
● Setting: Disk.DiskMaxIOSize. Scope/Granularity: Global. Multi-Storage Setting: 1 MB (unless the ESXi version is 7.x). PowerStore Only Setting: 1 MB.
● Setting: XCOPY (/DataMover/MaxHWTransferSize). Scope/Granularity: Global. Multi-Storage Setting: default (4 MB). PowerStore Only Setting: default (4 MB).
● Setting: XCOPY (Claim Rule). Scope/Granularity: N/A. Multi-Storage Setting: No Guidance. PowerStore Only Setting: No Guidance.
● Setting: vCenter Concurrent Clones (config.vpxd.ResourceManager.maxCostPerHost). Scope/Granularity: vCenter. Multi-Storage Setting: default (8). PowerStore Only Setting: default (8).

● UCS FC Adapter Policy - The total number of I/O requests that can be outstanding on a per-virtual Host Bus Adapter
(vHBA) in UCS.
● Cisco nfnic lun_queue_depth_per_path - Cisco nfnic driver setting to set the LUN queue depth per path. The default value
for this setting is 32 (recommended). For details on Cisco nfnic settings, see the Cisco nfnic driver documentation on the
Cisco website.
● Disk.SchedNumReqOutstanding - The total number of outstanding commands that are permitted from all virtual machines collectively on the host to a LUN. For details, see VMware vSphere documentation.
● Disk.SchedQuantum - The maximum number of consecutive "sequential" I/Os allowed from one VM before forcing a switch
to another VM. For details, see VMware vSphere documentation.
● Disk.DiskMaxIOSize - The maximum I/O size ESX allows before splitting I/O requests. For details, see Setting the Maximum
I/O.
● XCOPY (/DataMover/MaxHWTransferSize) - The maximum number of blocks used for XCOPY operations. For details, see
VMware vSphere documentation.



● vCenter Concurrent Clones (config.vpxd.ResourceManager.maxCostPerHost) - The maximum number of concurrent full
clone operations allowed (the default value is 8). For details, see VMware vSphere documentation.
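As a hedged sketch, the current values of the global advanced settings referenced above can be reviewed with esxcli before deciding whether any change is needed:

# XCOPY transfer size (keep the 4 MB default per the table above)
esxcli system settings advanced list -o /DataMover/MaxHWTransferSize
# Disk scheduler quantum (keep the default)
esxcli system settings advanced list -o /Disk/SchedQuantum
# Maximum I/O size passed to storage devices
esxcli system settings advanced list -o /Disk/DiskMaxIOSize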

Multipathing Software Configuration

Configuring Native Multipathing (NMP) with SCSI


PowerStore with iSCSI or FC supports the Native Multipathing Plugin (NMP).
This section describes the procedure that is required for configuring native multipathing for PowerStore volumes.
NOTE: Configuring native multipathing for PowerStore volumes applies only to iSCSI or FC, and is not relevant for
NVMe-oF.
For best performance, follow these recommendations:
● Set the NMP Round Robin path selection policy on PowerStore volumes that are presented to the ESXi host.
● Set the NMP Round Robin path switching frequency to PowerStore volumes from the default value (1000 I/O packets) to 1.
These settings ensure optimal distribution and availability of load between I/O paths to the PowerStore storage.

Configuring NMP Round Robin as the Default Pathing Policy for All
PowerStore Volumes
Follow this method to configure NMP Round Robin as the default pathing policy for all PowerStore volumes using the ESXi
command line.

About this task


NOTE: As of VMware ESXi version 6.7, Patch Release ESXi670-201912001, the SATP rule in this method is already
integrated into the ESXi kernel.

NOTE: Use this method when no PowerStore volume is presented to the host. PowerStore volumes already presented to
the host are not affected by this method (unless they are unmapped from the host).

NOTE: With ESXi 6.7 hosts that are connected to PowerStore, it is recommended to disable action_OnRetryErrors.
For details on this ESXi parameter, see VMware Knowledge Article 67006 (Active/Passive or ALUA based...).

NOTE: Using this method does not impact any non-PowerStore volume that is presented to the ESXi host.

Steps
1. Open an SSH session to the host as root.
2. Run the following command to configure the default pathing policy for newly defined PowerStore volumes to Round Robin
with path switching after each I/O packet:

esxcli storage nmp satp rule add -c tpgs_on -e "PowerStore" -M PowerStore -P VMW_PSP_RR -O iops=1 -s VMW_SATP_ALUA -t vendor -V DellEMC -o disable_action_OnRetryErrors

NOTE: Use the disable_action_OnRetryErrors parameter only with ESXi 6.7 hosts.

This command also sets the NMP Round Robin path switching frequency for newly defined PowerStore volumes to switch
every I/O.



Configuring NMP Round Robin on a PowerStore Volume Already Presented
to the ESXi Host
Follow this method to configure NMP Round Robin on a PowerStore volume that is already presented to the ESXi host, using
ESXi command line:

About this task


NOTE: Use this method only for PowerStore volumes that are already presented to the host. For volumes not yet presented to the host, see Configuring NMP Round Robin as the Default Pathing Policy for All PowerStore Volumes.

NOTE: Using this method does not impact any non-PowerStore volumes that are presented to the ESXi host.

For details, see VMware Knowledge Article 1017760 (Changing the default pathing...) and VMware Knowledge Article 2069356
(Adjusting Round Robin IOPS...) on the VMware website.

Steps
1. Open an SSH session to the host as root.
2. Run the following command to obtain the NAA of PowerStore LUNs presented to the ESXi host:

#esxcli storage nmp path list | grep DellEMC -B1

The following example demonstrates issuing the esxcli storage nmp path list command to obtain the NAA of all
PowerStore LUNs presented to the ESXi host:

#esxcli storage nmp path list | grep DellEMC -B1


Device: naa.68ccf09800e8fa24ea37a1bc49d9f6b8
Device Display Name: DellEMC Fibre Channel Disk
(naa.68ccf09800e8fa24ea37a1bc49d9f6b8)
--
Device: naa.68ccf098003a54f16d2eddc3217da922
Device Display Name: DellEMC Fibre Channel Disk
(naa.68ccf098003a54f16d2eddc3217da922)
--
Device: naa.68ccf09000000000c9f6d1acda1e4567
Device Display Name: DellEMC Fibre Channel RAID Ctlr
(naa.68ccf09000000000c9f6d1acda1e4567)

3. Run the following command to modify the path selection policy on the PowerStore volume to Round Robin:

#esxcli storage nmp device set --device="<NAA ID>" --psp=VMW_PSP_RR

For example:

#esxcli storage nmp device set --device="naa.68ccf098003f1461569ea4750e9dac50" --psp=VMW_PSP_RR

4. Run the following command to set the NMP Round Robin path switching frequency on PowerStore volumes from the default
value (1000 I/O packets) to 1:

#esxcli storage nmp psp roundrobin deviceconfig set --device="<NAA ID>" --iops=1 --type=iops

For example:

#esxcli storage nmp psp roundrobin deviceconfig set --device="naa.68ccf098003f1461569ea4750e9dac50" --iops=1 --type=iops

5. Run the following command to validate that changes were applied to all PowerStore LUNs:

#esxcli storage nmp device list | grep -B1 -A4 DellEMC

Each listed PowerStore LUN should have the following NMP settings:



● Path Selection Policy: VMW_PSP_RR
● Path Selection Policy Device Config: policy=rr, iops=1
The following example demonstrates issuing the esxcli storage nmp device list command to validate that
changes were applied to all PowerStore LUNs:
NOTE: The first LUN is the SACD device. If a LUN is mapped as LUN-0, it does not appear.

#esxcli storage nmp device list |grep -B1 -A4 DellEMC


naa.68ccf09000000000c9f6d1acda1e4567
Device Display Name: DellEMC Fibre Channel RAID Ctlr
(naa.68ccf09000000000c9f6d1acda1e4567)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: {action_OnRetryErrors=off}
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config:
{preferred=vmhba2:C0:T0:L0;current=vmhba2:C0:T0:L0}
--
naa.68ccf098003a54f16d2eddc3217da922
Device Display Name: DellEMC Fibre Channel Disk
(naa.68ccf098003a54f16d2eddc3217da922)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config:
{implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on;
action_OnRetryErrors=off; {TPG_id=1,TPG_state=ANO}}
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1,bytes=10485760,useANO=0;
lastPathIndex=0: NumIOsPending=0,numBytesPending=0}

Configuring High Performance Multipathing (HPP) with NVMe


PowerStore with NVMe/TCP or NVMe/FC supports the High Performance Multipathing Plugin (HPP). This section describes
the method that is required for configuring high-performance multipathing for PowerStore volumes.

NOTE: This method applies only to NVMe-oF.

For best performance, follow these recommendations:


● Set the HPP Path Selection Scheme (PSS) LB-IOPS on PowerStore volumes that are presented to the ESXi host.
● Set the LB-IOPS path switching frequency for PowerStore volumes from the default value (1000 I/O packets) to 1.
These settings ensure optimal distribution and availability of load between I/O paths to the PowerStore storage.

Configuring HPP Round Robin as the Default Pathing Policy for All
PowerStore Volumes
Follow this method to configure HPP Round Robin as the default pathing policy for all PowerStore volumes, using the ESXi
command line.

About this task


NOTE: Use this method when no PowerStore volume is presented to the host. PowerStore volumes that are already
presented to the host are not affected by this method (unless they are unmapped from the host).

NOTE: Using this method does not impact any non-PowerStore volumes that are presented to the ESXi host, or SCSI
(FC/iSCSI) volumes.

Steps
1. Open an SSH session to the host as root.



2. Run the following command to configure the default pathing policy for newly defined PowerStore volumes to Round Robin
with path switching after each I/O packet:

$ esxcli storage core claimrule add -u -t vendor --nvme-controller-model "dellemc-powerstore" -P HPP -g "pss=LB-IOPS,iops=1"

3. Reboot the host.


4. Verify that all volumes deriving from NVMe-oF are properly claimed.

$ esxcli storage hpp device list | grep "Device Display Name: NVMe\|Path Selection"
Device Display Name: NVMe TCP Disk (eui.b635f9c20e1cb3658ccf096800ce9565)
Path Selection Scheme: LB-IOPS
Path Selection Scheme Config: {iops=1;}

Device Display Name: NVMe TCP Disk (eui.e99e9d7f23a70e698ccf096800426a6d)
Path Selection Scheme: LB-IOPS
Path Selection Scheme Config: {iops=1;}

Configuring HPP Round Robin on a PowerStore Volume Already Presented to the ESXi Host
Follow this method to configure HPP Round Robin on a PowerStore volume that is already presented to the ESXi host, using
ESXi command line.

About this task


NOTE: Use this method only for PowerStore volumes that are already presented to the host. For volumes that are not yet
presented to the host, see Configuring HPP Round Robin as the Default Pathing Policy for all PowerStore Volumes.

Steps
1. Open an SSH session to the host as root.
2. Run the following command to retrieve the list of namespaces (in the example, there are three namespaces: NSID 50, 51,
and 52):

$ esxcli nvme namespace list


Name Controller Number Namespace ID Block Size
Capacity in MB
---- ----------------- ------------ ----------
--------------
eui.e99e9d7f23a70e698ccf096800426a6d 256 50 512
1048576
eui.2fbd2ea5e4aa92d78ccf09680000d5a7 256 51 512
102400
eui.b635f9c20e1cb3658ccf096800ce9565 256 52 512
22528

3. Run the following command to view the information for each of the devices listed in the previous step (in the example,
information is displayed for NSID 50):

$ esxcli storage hpp device list -d eui.e99e9d7f23a70e698ccf096800426a6d


eui.e99e9d7f23a70e698ccf096800426a6d
Device Display Name: NVMe TCP Disk (eui.e99e9d7f23a70e698ccf096800426a6d)
Path Selection Scheme: LB-IOPS
Path Selection Scheme Config: {iops=1000;}
Current Path: vmhba67:C0:T1:L49
Working Path Set: vmhba67:C0:T1:L49, vmhba68:C0:T1:L49
Is SSD: true
Is Local: false
Paths: vmhba67:C0:T1:L49, vmhba68:C0:T0:L49, vmhba68:C0:T1:L49, vmhba67:C0:T0:L49
Use ANO: false



4. Run the following command to change the policy for the specific volume:

$ esxcli storage hpp device set -d eui.e99e9d7f23a70e698ccf096800426a6d -P "LB-IOPS" --iops 1

5. Run the following command to verify the policy change:

$ esxcli storage hpp device list -d eui.e99e9d7f23a70e698ccf096800426a6d


eui.e99e9d7f23a70e698ccf096800426a6d
Device Display Name: NVMe TCP Disk (eui.e99e9d7f23a70e698ccf096800426a6d)
Path Selection Scheme: LB-IOPS
Path Selection Scheme Config: {iops=1;}
Current Path: vmhba67:C0:T1:L49
Working Path Set: vmhba67:C0:T1:L49, vmhba68:C0:T1:L49
Is SSD: true
Is Local: false
Paths: vmhba67:C0:T1:L49, vmhba68:C0:T0:L49, vmhba68:C0:T1:L49, vmhba67:C0:T0:L49
Use ANO: false

Configuring PowerPath Multipathing


PowerStore supports multipathing using Dell EMC PowerPath/VE on a VMware ESXi host.
For the most updated information about PowerPath/VE support with PowerStore, see the PowerStore Simple Support Matrix.
For details on installing and configuring PowerPath/VE with PowerStore on your host, see the Dell EMC PowerPath on VMware vSphere Installation and Administration Guide for the PowerPath/VE version you are installing.

PowerStore Considerations
When host configuration is completed, you can use the PowerStore storage from the host.
NOTE: When connecting an ESXi host to PowerStore, LUN IDs 254 and 255 may have a dead status. These LUNs represent the Virtual Volume Protocol Endpoints (PE).
You can create, present, and manage volumes that are accessed from the host using PowerStore Manager, CLI, or REST API.
See the PowerStore Manager Online Help, CLI Reference Guide, or REST API Reference Guide for additional information.
The Dell EMC Virtual Storage Integrator (VSI) version 8.4 and later plug-in can be used to provision Virtual Machine File System (VMFS) datastores and Raw Device Mapping volumes on PowerStore from within the vSphere Client. Furthermore, the Dell EMC VSI Storage Viewer version 8.4 and later plug-in extends the vSphere Client to facilitate the discovery and identification of PowerStore storage devices that are allocated to VMware ESXi hosts and virtual machines.
For information about using these two vSphere Client plug-ins, see the VSI Unified Storage Management Product Guide and the
VSI Storage Viewer Product Guide.

Presenting PowerStore Volumes to the ESXi Host


Specify ESXi as the operating system when presenting PowerStore volumes to the ESXi host.

NOTE: Using data reduction and/or encryption software on the host side affects the PowerStore cluster data reduction.

Disk Formatting
Review the following considerations when you create volumes in PowerStore for a vSphere ESXi host:
● Disk logical block size - The only logical block (LB) size that is supported by vSphere ESXi for presenting volumes is 512
bytes.



NOTE: For details on formatting a newly created volume, see the PowerStore Configuring Volumes guide that matches
the version running on your PowerStore cluster.
● Disk alignment - Unaligned disk partitions may substantially impact I/O to the disk.
With vSphere ESXi, datastores and virtual disks are aligned by default when they are created. No further action is required
to align datastores and virtual disks in ESXi.
With virtual machine disk partitions within the virtual disk, the guest operating system determines the alignment.

Virtual Volumes
On PowerStore operating system versions earlier than 2.1.1, it is recommended to avoid creating a single host group containing all ESXi
hosts when multiple Virtual Volumes are mapped to these hosts. For more information, see Dell EMC Knowledge Article 000193872
(PowerStore: Intermittent vVol bind...).
It is recommended to create a dedicated host for each ESXi and mount the Virtual Volume datastore on all ESXi hosts in the
cluster.
If you require access to regular VMFS datastores in addition to Virtual Volumes, map each of the volumes to each of the ESXi
hosts.

AppsOn: Virtual Machine Compute and Storage Collocation Rules for PowerStore X Clusters
NOTE: The following is applicable only to PowerStore X multi-appliance clusters with operating system version 2.0 (or
later).
To ensure compute and storage resource collocation for optimal VM performance, you can use the predefined VM and host rules
in vCenter server.
To tie a user VM to a host group, add that VM to the predefined VM group in vCenter server.
For more details, see PowerStore Virtualization Infrastructure Guide.

vSphere Considerations

VMware Paravirtual SCSI Controllers


Configure virtual machines with paravirtual SCSI controllers to achieve higher throughput and lower CPU usage.
For optimal resource utilization of virtual machines with PowerStore, it is recommended to configure virtual machines with
paravirtualized SCSI controllers. VMware paravirtual SCSI controllers are high-performance storage controllers that can provide
higher throughput and lower CPU usage. These controllers are best suited for high-performance storage environments.
For further details on configuring virtual machines with paravirtualized SCSI controllers, see the vSphere Virtual Machine
Administration Guide in the VMware vSphere documentation.

Virtual Disk Provisioning


Follow these recommendations for provisioning virtual disks on the PowerStore cluster.
For optimal space utilization with vSphere ESXi 6.x and above, it is recommended to provision virtual disks on the PowerStore
cluster, using Thin Provisioning.
In Thin Provisioning format, in-guest space reclamation is available, provided the following minimum requirements are fulfilled:
● ESXi 6.x
● Thin virtual disks



● VM hardware version 11
● EnableBlockDelete set to 1
● Guest operating system support of UNMAP
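
For the EnableBlockDelete requirement above, the ESXi advanced setting can be checked and, if needed, enabled from the ESXi shell. This is a minimal example using standard esxcli commands:

$ esxcli system settings advanced list -o /VMFS3/EnableBlockDelete
$ esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1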
NOTE: See the corresponding guest operating system chapter within this document for instructions on how to efficiently
create a file system.

NOTE: For details on SCSI-3 Persistent Reservations (SCSI3-PRs) on a virtual disk (VMDK) support with PowerStore
storage, see Dell EMC Knowledge Article 000191117 (PowerStore: SCSI-3 Persistent Reservations Support).

Virtual Machine Guest Operating System Settings


Follow these recommendations for configuring a virtual machine's guest operating system.
● LUN Queue Depth - For optimal virtual machine operation, configure the virtual machine guest operating system to use the
default queue depth of the virtual SCSI controller. For details on adjusting the guest operating system LUN queue depth, see
VMware Knowledge Article 2053145 (Large-scale workloads...).
● RDM volumes in guest operating system - Span RDM volumes, which are used by the virtual machine, across SCSI
controllers to prevent a bottleneck on a single SCSI controller.
● RDM volumes in guest operating system used for Microsoft Cluster (MSCS) - ESXi hosts with visibility to RDM volumes
that are used by Microsoft Cluster (MSCS) may take a long time to start or to perform LUN rescan.
For the required settings on the RDM volumes, see VMware Knowledge Article 1016106 (ESXi host takes a long time...).

Creating a File System


It is recommended to create the file system using its default block size.

NOTE: File system configuration and management are out of the scope of this document.

It is recommended to create the file system using its default block size (using a nondefault block size may lead to unexpected
behavior). See your operating system and file system documentation.



4
Host Configuration for Microsoft Windows
This chapter contains the following topics:
Topics:
• Related E-Lab Host Connectivity Guide
• Recommended Configuration Values Summary
• Boot from SAN
• Fibre Channel Configuration
• iSCSI Configuration
• Multipathing Software Configuration
• Post-Configuration Steps - Using the PowerStore system

Related E-Lab Host Connectivity Guide


The topics in this chapter detail specific caveats and configuration parameters that must be present when configuring
a Microsoft Windows host to access PowerStore storage. These caveats and parameters should be applied with the
configuration steps that are detailed in the E-Lab Host Connectivity Guide for Microsoft Windows (see the E-Lab
Interoperability Navigator at https://elabnavigator.dell.com).

Recommended Configuration Values Summary


The following table summarizes all used variables and their values when configuring hosts for Microsoft Windows.

NOTE: Unless indicated otherwise, use the default parameters values.

Validation: To clarify the above note for using default parameter settings, unless stated otherwise in this chapter, make sure that the following are set per the default OS setting:
● LUN and HBA queue depth
● HBA timeout
Impact: Stability & Performance
Severity: Recommended
Refer to Section: For further details, refer to OS and HBA documentation.

Validation: Specify Windows as the operating system for each defined host.
Impact: Serviceability
Severity: Mandatory
Refer to Section: Presenting PowerStore Volumes to the Windows Host

Validation: Load balancing - use Round Robin or Least Queue Depth for Microsoft Multipath I/O (MPIO) with Windows Server 2012/R2 and above.
Impact: Performance
Severity: Warning
Refer to Section: Configuring Native Multipathing Using Microsoft Multipath I/O (MPIO)

Validation: Temporarily disable UNMAP during file system creation:
● To temporarily disable UNMAP on the host (prior to file system creation): fsutil behavior set DisableDeleteNotify 1
● To re-enable UNMAP on the host (after file system creation): fsutil behavior set DisableDeleteNotify 0
Impact: Performance
Severity: Recommended
Refer to Section: Creating a File System

Boot from SAN


For guidelines and recommendations for boot from SAN with Microsoft Windows hosts and PowerStore, refer to the
Considerations for Boot from SAN with PowerStore appendix.

Fibre Channel Configuration


This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster using
Fibre Channel.
NOTE: This section applies only to FC. If you are using only iSCSI with Windows, go to iSCSI Configuration.

NOTE: Before you proceed, review Fibre Channel and NVMe over Fibre Channel SAN Guidelines.

Pre-Requisites
This section describes the pre-requisites for FC HBA configuration.
● Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC HBA models and drivers.
● Verify all HBAs are at the supported driver, firmware and BIOS versions.
● Verify all HBAs BIOS settings are configured according to E-Lab recommendations. Follow the procedures in one of the
following documents according to the FC HBA type:
○ For Qlogic HBAs, refer to Dell EMC Host Connectivity with Qlogic Fibre Channel and iSCSI HBAs and Converged
Network Adapters (CNAs) for the Windows Environment.
○ For Emulex HBAs, refer to Dell EMC Host Connectivity with Emulex Fibre Channel and iSCSI HBAs and Converged
Network Adapters (CNAs) for the Windows Environment.
○ For Cisco UCS fNIC HBAs, refer to the Cisco UCS Virtual Interface Card Drivers for Windows Installation Guide for
complete driver installation instructions.
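
As a quick check of the FC initiators that the host presents to the PowerStore cluster, the WWPNs can be listed with the Windows Storage module. This is a generic PowerShell sketch, not a PowerStore-specific requirement:

# List FC initiator node and port WWNs on the Windows host
Get-InitiatorPort | Where-Object ConnectionType -eq "Fibre Channel" | Format-Table NodeAddress, PortAddress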

iSCSI Configuration
This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster using
iSCSI.
NOTE: This section applies only to iSCSI. If you are using only Fibre Channel with Windows and PowerStore, go to Fibre
Channel Configuration.

NOTE: Be sure to review iSCSI SAN Guidelines before you proceed.



Pre-Requisites
This section describes the pre-requisites when attaching hosts to PowerStore cluster using iSCSI.
Before configuring iSCSI, the following pre-requisites should be met:
● Follow the operating system recommendations for installation and setup of the appropriate NIC/iSCSI HBA for your system.
● It is recommended to install the latest driver version (patch), as described in the operating system support site for each
specific NIC/iSCSI HBA.
● Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported NIC/iSCSI HBA models and
drivers.

PowerStore Operating System 1.x Only - Single Subnet

About this task


Use this method for configuring the iSCSI adapter on Windows hosts connected to PowerStore with a PowerStore operating
system version 1 in which only a single subnet is supported. For further details, see the Microsoft Windows documentation for
the Windows version that is installed on the Windows hosts.

Steps
1. Dell Technologies recommends creating four target iSCSI IP addresses (two per node) on the same subnet/VLAN.
2. Configure two iSCSI interfaces on the same subnet as the storage cluster iSCSI portals.
Example:
● iSCSI-A-port0 1.1.1.1/24
● iSCSI-A-port1 1.1.1.2/24
● iSCSI-B-port0 1.1.1.3/24
● iSCSI-B-port1 1.1.1.4/24
● NIC0 1.1.1.10/24
● NIC1 1.1.1.11/24

3. Proceed to configure Microsoft iSCSI adapter.

Next steps
NOTE: The Microsoft iSCSI Initiator default configuration ignores multiple NICs on the same subnet. When multiple NICs
are on the same subnet, use the Advanced button in the Log On to Target dialog box of the Microsoft iSCSI Software
Initiator UI to associate a specific NIC with a specific SP port.



PowerStore Operating System 2.x and Above - Multi Subnet

About this task


Use this procedure for configuring the iSCSI adapter on Windows hosts connected to PowerStore with a PowerStore operating
system version 2 (or later) in which up to 32 network subnets are supported. For further details, see the Microsoft Windows
documentation for the Windows version that is installed on the Windows hosts.

Steps
1. Dell recommends creating four target iSCSI IP addresses (two per node), on two different subnets/VLANS.
2. Configure two iSCSI interfaces on the same subnet as the storage cluster iSCSI portals.
Example:
● iSCSI-A-port0 1.1.1.1/24
● iSCSI-A-port1 1.1.2.1/24
● iSCSI-B-port0 1.1.1.2/24
● iSCSI-B-port1 1.1.2.2/24
● NIC0 1.1.1.10/24
● NIC1 1.1.2.10/24

3. Proceed to configure Microsoft iSCSI adapter.
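
The Microsoft iSCSI initiator can also be configured from PowerShell instead of the UI. The following is a minimal sketch for one portal/initiator pair from the example above; repeat it for the remaining portal and NIC combinations on the matching subnets. The target IQN shown is a placeholder:

New-IscsiTargetPortal -TargetPortalAddress 1.1.1.1 -InitiatorPortalAddress 1.1.1.10
Get-IscsiTarget
# Replace the NodeAddress placeholder with the target IQN reported by Get-IscsiTarget
Connect-IscsiTarget -NodeAddress "iqn.2015-10.com.dell:dellemc-powerstore-example-a" -TargetPortalAddress 1.1.1.1 -InitiatorPortalAddress 1.1.1.10 -IsMultipathEnabled $true -IsPersistent $true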

Multipathing Software Configuration


This topic introduces multipathing software configuration for Microsoft Windows.
PowerStore supports native multipathing using Microsoft Multipath I/O (MPIO) with Windows Server 2012/R2
and above, or multipathing using PowerPath.

Configuring Native Multipathing Using Microsoft Multipath I/O (MPIO)
This topic describes configuring native multipathing using Microsoft Multipath I/O (MPIO).
For optimal operation with PowerStore storage, configure the Round-Robin (RR) policy or the Least Queue Depth policy for
MPIO for devices presented from PowerStore. Using these policies, I/O operations are balanced across all available paths.
To configure the native multipathing, using Microsoft Multipath I/O (MPIO), see:
Enabling MPIO on the Windows Host and Configuring MPIO for PowerStore Volumes Presented to the Host



Enabling MPIO on the Windows Host and Configuring MPIO for PowerStore
Volumes Presented to the Host
This topic describes enabling and configuring MPIO on the Windows host.

About this task


Before configuring the native multipathing, you should enable MPIO on the server by adding the MPIO feature to Windows.

Steps
1. Open PowerShell on the host.
2. Run the following commands to install MPIO if it is not already installed:

Get-WindowsOptionalFeature -Online -FeatureName MultiPathIO


Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

3. Run the following command to set vid/pid:

New-MSDSMSupportedHW -VendorId DellEMC -ProductId PowerStore

4. Run one of the following commands to set RoundRobin failover policy or Least Queue Depth failover policy, respectively:
● Round-Robin

Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

● Least Queue Depth

Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD

5. Run the following commands to set MPIO timeout values:

Set-MPIOSetting -NewPathVerificationState Enabled


Set-MPIOSetting -NewPathVerificationPeriod 30
Set-MPIOSetting -NewPDORemovePeriod 20
Set-MPIOSetting -NewRetryCount 3
Set-MPIOSetting -NewRetryInterval 3
Set-MPIOSetting -custompathrecovery enabled
Set-MPIOSetting -newpathrecoveryinterval 10
Set-MPIOSetting -NewDiskTimeout 30

6. To verify MPIO settings on the host, run the following command:

Get-MPIOSetting
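
To confirm that the DellEMC/PowerStore hardware identifier is registered and that presented disks are claimed with the expected load-balance policy, the following checks can be used (a verification sketch only; mpclaim output format varies by Windows version):

Get-MSDSMSupportedHW
mpclaim -s -d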

PowerPath Configuration with PowerStore Volumes


PowerStore supports multipathing using Dell EMC PowerPath on a Microsoft Windows host.
For the most updated information about PowerPath support with PowerStore, see the PowerStore Simple Support Matrix.
For details on installing and configuring PowerPath with PowerStore on your host, see Dell EMC PowerPath on Microsoft
Windows Installation and Administration Guide for the PowerPath version you are installing.



Post-Configuration Steps - Using the PowerStore system
This topic describes the post-configuration steps using the PowerStore system.
After the host configuration is completed, you can use the PowerStore storage from the host.
You can create, present, and manage volumes accessed from the host via PowerStore Manager, CLI, or REST API. Refer to the
PowerStore Manager Online Help, CLI Reference Guide, or REST API Reference Guide for additional information.

Presenting PowerStore Volumes to the Windows Host


This topic discusses presenting PowerStore Volumes to the Windows host.
When adding host groups and hosts to allow Windows hosts to access PowerStore volumes, specify Windows as the operating
system for the newly-created hosts.
NOTE: Setting the host’s operating system is required for optimal interoperability and stability of the host with PowerStore
storage. You can adjust the setting while the host is online and connected to the PowerStore cluster with no I/O impact.

NOTE: Refer to the PowerStore Configuring Volumes Guide for additional information.
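
After volumes are mapped and a disk rescan completes, the PowerStore devices can be verified from PowerShell. This is an illustrative check only; the exact friendly-name string reported for PowerStore devices may differ in your environment:

Update-HostStorageCache
Get-Disk | Where-Object FriendlyName -like "DellEMC PowerStore*" | Format-Table Number, FriendlyName, Size, OperationalStatus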

Creating a File System


This topic discusses creating a file system.
File system configuration and management are out of the scope of this document.
NOTE: Some file systems may require you to properly align the file system on the PowerStore volume. It is recommended
to use the appropriate alignment tools to optimally match your host with application requirements.

NOTE: Creating a file system with UNMAP enabled on a host connected to PowerStore may result in an
increased amount of write I/Os to the storage subsystem. It is highly recommended to disable UNMAP during file system
creation.
To disable UNMAP during file system creation:
1. Open a Windows CMD window on the host.
2. Run the following fsutil command to temporarily disable UNMAP on the host (before creating the file system):

fsutil behavior set DisableDeleteNotify 1

3. Once file system creation is complete, reenable UNMAP by running the following command:

fsutil behavior set DisableDeleteNotify 0

NOTE: To verify the current setting of the file system, run the following fsutil command:

fsutil behavior query DisableDeleteNotify

● DisableDeleteNotify=0 - Indicates that the 'Trim and Unmap' feature is on (enabled).


● DisableDeleteNotify=1 - Indicates that the 'Trim and Unmap' feature is off (disabled).



5
Host Configuration for Linux
This chapter contains the following topics:
Topics:
• Related E-Lab Host Connectivity Guide
• Recommended Configuration Values Summary
• Boot from SAN
• Fibre Channel (FC) Configuration
• NVMe over Fibre Channel Configuration
• iSCSI Configuration
• Multipathing Software Configuration
• Post-Configuration Steps - Using the PowerStore system

Related E-Lab Host Connectivity Guide


The topics in this chapter detail specific caveats and configuration parameters that must be present when configuring a
Linux host to access PowerStore storage. These caveats and parameters should be applied with the configuration steps
that are detailed in the E-Lab Host Connectivity Guide for Linux (see the E-Lab Interoperability Navigator at https://
elabnavigator.dell.com).

Recommended Configuration Values Summary


The following table summarizes all used and recommended variables and their values when configuring hosts for Linux.

NOTE: Unless indicated otherwise, use the default parameters values.

Validation: To clarify the above note for using default parameter settings, unless stated otherwise in this chapter, make sure that the following are set per the default operating system setting:
● LUN and HBA queue depth
● HBA timeout
Impact: Stability & Performance
Severity: Recommended
Refer to Section: For further details, refer to operating system and HBA documentation.

Validation: Specify Linux as the operating system for each defined host.
Impact: Serviceability
Severity: Mandatory
Refer to Section: Presenting PowerStore Volumes to the Linux Host

Validation: SCSI Device Mapper Multipathing - modify the /etc/multipath.conf file as follows:
● vendor - "DellEMC"
● product - "PowerStore"
● path_selector - "queue-length 0"
● path_grouping_policy - "group_by_prio"
● path_checker - "tur"
● detect_prio - "yes"
● failback - "immediate"
● no_path_retry - "3"
● rr_min_io_rq - "1"
● fast_io_fail_tmo - "15"
● max_sectors_kb - "1024"
Impact: Performance
Severity: Recommended
Refer to Section: Configuring Linux Native Multipathing

Validation: NVMe Device Mapper Multipathing - modify the /etc/multipath.conf file as follows:
● vendor - ".*"
● product - "dellemc-powerstore"
● prio - "ana"
● uid_attribute - "ID_WWN"
● path_selector - "queue-length 0"
● path_grouping_policy - "group_by_prio"
● path_checker - "tur"
● detect_prio - "yes"
● failback - "immediate"
● no_path_retry - "3"
● rr_min_io_rq - "1"
● fast_io_fail_tmo - "15"
Impact: Performance
Severity: Recommended
Refer to Section: Configuring with Device Mapper Multipathing for NVMe

Validation: iSCSI configuration - when using PowerStore operating system 2.x (or later), configure two subnets.
Impact: Stability
Severity: Recommended
Refer to Section: PowerStore Operating System 2.x and Later - Multi Subnet

Validation: iSCSI configuration - modify the /etc/iscsi/iscsid.conf file as follows:
● node.session.timeo.replacement_timeout = 15
● node.startup = automatic
Impact: Performance
Severity: Recommended
Refer to Section: Updating iSCSI Configuration File

Validation: iSCSI configuration - use the _netdev or nofail option in /etc/fstab.
Impact: Stability
Severity: Mandatory
Refer to Section: Mounting iSCSI File Systems

Validation: iSCSI configuration - use iSCSI interfaces.
Impact: Performance
Severity: Recommended
Refer to Section: iSCSI Session Configuration

Validation: Temporarily disable UNMAP during file system creation:
● When creating a file system using the mke2fs command, use the "-E nodiscard" parameter.
● When creating a file system using the mkfs.xfs command, use the "-K" parameter.
Impact: Performance
Severity: Recommended
Refer to Section: Creating a File System

Boot from SAN


For guidelines and recommendations for boot from SAN with Linux hosts and PowerStore, refer to the Considerations for Boot
from SAN with PowerStore appendix.



Fibre Channel (FC) Configuration
This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster using
Fibre Channel.
NOTE: This section applies only to Fibre Channel. If you are using any other protocol with Linux, see the relevant
configuration section.

Pre-Requisites
When attaching a host to PowerStore cluster using Fibre Channel, ensure that the following pre-requisites are met:
● Review Fibre Channel SAN Guidelines before you proceed.
● See the Dell EMC E-Lab Navigator (https://elabnavigator.dell.com) for supported Fibre Channel HBA models and drivers.
● Verify that all HBAs have supported driver and firmware versions according to the Support Matrix at Dell EMC E-Lab
Navigator (https://elabnavigator.dell.com).
● Verify that all HBAs BIOS settings are configured according to Dell EMC E-Lab recommendations.
● Locate your Fibre Channel HBA information:

systool -c fc_host -v

NVMe over Fibre Channel Configuration


This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster using
NVMe over Fibre Channel.
NOTE: This section applies only to NVMe/FC. If you are using any other protocol with Linux, see the relevant configuration
section.

Pre-Requisites
When attaching a host to PowerStore cluster using NVMe/FC, ensure that the following pre-requisites are met:
● Review NVMe/FC SAN Guidelines before you proceed.
● PowerStore operating system 2.0 (or later) is required.
● See the E-Lab Dell EMC 32G FC-NVMe Simple Support Matrix for supported Fibre Channel HBA models and drivers with
NVMe/FC and known limits.
● Verify that all HBAs have supported driver and firmware versions according to the Support Matrix at Dell EMC E-Lab
Navigator (https://elabnavigator.dell.com).
● Verify that all HBAs BIOS settings are configured according to Dell EMC E-Lab recommendations.
● It is highly recommended to install the nvme-cli package:

yum install nvme-cli

● Locate your Fibre Channel HBA information:

systool -c fc_host -v

Known Issues
For a host directly attached to the PowerStore appliance, disable NVMe/FC support on the HBA. For details on potential issues
when directly connecting a host to PowerStore, see Dell EMC Knowledge Article 000200588 (PowerStore: After an upgrade...)
and Dell EMC Knowledge Article 000193380 (PowerStoreOS 2.0: ESXi hosts do not detect...).



NVMe/FC Configuration on Linux Hosts
For details on NVMe/FC configuration for Red Hat Enterprise Linux hosts, see Red Hat Knowledge Article 4706181 (Is NVMe
over Fibre (NVMeoF) supported...).
For details on NVMe/FC configuration for SUSE, see, for example, NVMe over Fabric.
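
Once NVMe/FC connectivity has been established per the references above, the nvme-cli package (see the pre-requisites) can be used to confirm that the PowerStore subsystem and its namespaces are visible. This is a generic verification sketch, not a PowerStore-specific procedure:

# List NVMe subsystems and their paths, then the discovered namespaces
# nvme list-subsys
# nvme list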

Setting the Linux Host NVMe Qualified Name

About this task


You can configure the host NVMe Qualified Name (NQN) using either Hostname or UUID. For visibility and simplicity, it is
recommended to use Hostname.

Steps
1. Connect to the Linux host as root.
2. Edit the /etc/nvme/hostnqn file and modify the UUID format to Hostname format.
Before:

# nvme show-hostnqn
nqn.2014-08.org.nvmexpress:uuid:daa45a0b-d371-45f6-b071-213787ff0917

After:

# nvme show-hostnqn
nqn.2014-08.org.nvmexpress:Linux-Host1

3. The value must comply with NVMe Express Base Specification, Chapter 4.5 (NVMe Qualified Names).
4. If you want to revert back to UUID format, run the following command to create a new NQN and update the /etc/nvme/
hostnqn file:

# nvme gen-hostnqn
nqn.2014-08.org.nvmexpress:uuid:51dc3c11-35b6-e311-bcdd-001e67a3bceb
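
As a supplement to step 2, one way to switch the file content to the Hostname format is to overwrite it directly; this is a minimal sketch that reuses the example values above (any text editor works equally well):

# cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:daa45a0b-d371-45f6-b071-213787ff0917
# echo "nqn.2014-08.org.nvmexpress:Linux-Host1" > /etc/nvme/hostnqn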

Setting Up Marvell Qlogic HBAs


Follow these steps to set up Marvell Qlogic HBAs.

Steps
1. Access the Linux host as root.
2. Edit the /etc/modprobe.d/qla2xxx.conf configuration file with the following data:

options qla2xxx ql2xextended_error_logging=1 ql2xfwloadbin=2 ql2xnvmeenable=1

NOTE: ql2xnvmeenable=1 enables NVMe-oF, and ql2xnvmeenable=0 disables it.

Setting Up Emulex HBAs


Follow these steps to set up Emulex HBAs.

Steps
1. Access the Linux host as root.



2. Edit the /etc/modprobe.d/lpfc.conf configuration file with the following data:

options lpfc lpfc_lun_queue_depth=128 lpfc_sg_seg_cnt=256 lpfc_max_luns=65535 lpfc_enable_fc4_type=3

NOTE: lpfc_enable_fc4_type=3 enables both FCP and NVMe/FC, and lpfc_enable_fc4_type=1 enables
only FCP.

3. Rebuild the initramfs image:

# dracut --force

4. Reboot the host system to reconfigure the lpfc driver:

# systemctl reboot

iSCSI Configuration
This section provides an introduction to the recommended configuration to be applied when attaching hosts to a PowerStore
cluster using iSCSI.
NOTE: This section applies only to iSCSI. If you are using any other protocol with Linux, see the relevant configuration
section.

NOTE: Be sure to review the iSCSI SAN Guidelines before you proceed.

Pre-Requisites
Before configuring iSCSI, the following pre-requisites should be met:
● Follow the operating system recommendations for installation and setup of the appropriate NIC/iSCSI HBA for your system.
● It is recommended to install the latest driver version (patch), as described in the operating system support site for each
specific NIC/iSCSI HBA.
● Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported NIC/iSCSI HBA models and
drivers.
● Configure networking according to PowerStore best practices:
○ If you are using PowerStore T model and utilizing only the two bonded ports (the first two ports on the Mezz card), it is
recommended to configure them as a LACP port channel across the two switches and configure proper MC-LAG (VLTi
or VPC configuration between the switches).
NOTE: If a port channel is not properly configured on the switch side, the bond operates as active/passive, and the
appliance bandwidth cannot be fully utilized.
○ If you are using PowerStore T model and utilizing any other port (not bonded), there is no need to configure any port
channel.
○ For information, see the Dell EMC PowerStore Networking Guide for PowerStore T Models on the support site (https://
www.dell.com/support)

PowerStore Operating System 1.x Only - Single Subnet

About this task


Use this procedure for configuring the iSCSI adapter on Linux hosts connected to PowerStore with a PowerStore operating
system version 1.x in which only a single subnet is supported.
By design, on various Linux distributions, only two network interfaces can be configured on the same network subnet. For
details, see RedHat Knowledge Article 30564 (How to connect...) and RedHat Knowledge Article 53031 (When RHEL has
multiple IPs...).



In light of this limitation, use one of the following solutions to make both network interfaces accessible with hosts that are
connected to PowerStore storage with PowerStore operating system 1.x:
● Policy-Based Routing
● Bonding/Teaming
● Disable Reverse Path Filtering

Steps
1. Dell recommends creating four target iSCSI IP addresses (two per node) on the same subnet/VLAN.
2. Configure two iSCSI interfaces on the same subnet as the storage cluster iSCSI portals.
Example:

Description IP Address
Host (NIC-0) 1.1.1.10/24
Host (NIC-1) 1.1.1.11/24
Node-A-Port0 1.1.1.1/24
Node-A-Port1 1.1.1.2/24
Node-B-Port0 1.1.1.3/24
Node-B-Port1 1.1.1.4/24

Policy-Based Routing
This topic outlines policy-based routing as a solution to the single network subnet limitation (recommended solution).
This solution is based on adding routing tables and rules, binding source IP address for each route, and adding those as default
gateways for each network interface.
Using this solution, a routing table is defined for each interface, thus the default routing table is redundant for those interfaces.
For additional technical information on Policy-Based Routing, see RedHat Knowledge Article 30564 (How to connect...).
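
As an illustration only (interface names, routing-table names, and addresses are placeholders based on the single-subnet example above; the authoritative procedure is in the referenced RedHat article), a policy-based routing setup typically looks like this:

# Create one routing table per iSCSI interface
# echo "200 iscsi_a" >> /etc/iproute2/rt_tables
# echo "201 iscsi_b" >> /etc/iproute2/rt_tables
# Bind the subnet route and source IP of each interface to its table
# ip route add 1.1.1.0/24 dev p2p1 src 1.1.1.10 table iscsi_a
# ip route add 1.1.1.0/24 dev p2p2 src 1.1.1.11 table iscsi_b
# Select the table based on the source address of outgoing traffic
# ip rule add from 1.1.1.10/32 table iscsi_a
# ip rule add from 1.1.1.11/32 table iscsi_b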

Bonding/Teaming
Use bonding/teaming as a solution to the single network subnet limitation.

NOTE: This section does not apply to hosts directly attached to the PowerStore appliances.

This solution is based on the Bond and Network teaming configuration.


● Bond - Binding multiple network interfaces into a single-bonded channel enables them to act as one virtual interface.
That way, only a single network address is defined and the said limitation does not apply. For technical information about
configuring network bond on Red Hat Enterprise Linux version 7, see Networking Guide: Configure Network Bonding.



● Network Teaming - With Red Hat Enterprise Linux version 7, Network Teaming is offered as a new implementation of the
bonding concept. The existing bonding driver is unaffected.
Network Teaming is offered as an alternative and does not replace bonding in Red Hat Enterprise Linux version 7. For
technical information about configuring Network Teaming, see Networking Guide: Configure Network Teaming.

For a comparison between Bonding and Network Teaming implementations, see Networking Guide: Comparison of Network
Teaming to Bonding.

Disabling Reverse Path Filtering


Use disabling reverse path filtering as a solution to the single network subnet limitation.
Red Hat Enterprise Linux versions 6 and later are configured by default to apply Strict Reverse Path Forwarding filtering
recommended in RFC 3704 - Ingress Filtering for Multihomed Networks.
NOTE: Before making this change, see the Root Cause section of this article to understand what it does and review
alternative solutions as explained in RedHat Knowledge Article 53031 (When RHEL has multiple IPs...).
Setting the Reverse Path Filtering to 2 (loose) on the relevant network interfaces makes them both accessible and routable.
To apply this change, add the following lines to /etc/sysctl.conf:

net.ipv4.conf.p2p1.rp_filter = 2
net.ipv4.conf.p2p2.rp_filter = 2

NOTE: In this example, p2p1 and p2p2 are the network interfaces used for iSCSI. Be sure to change them to the relevant
interfaces.
To reload the configuration:

sysctl -p

To view the current Reverse Path Filtering configuration on the system:

# sysctl -ar "\.rp_filter"


net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.em1.rp_filter = 0
net.ipv4.conf.em2.rp_filter = 0
net.ipv4.conf.em3.rp_filter = 0
net.ipv4.conf.em4.rp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.p2p1.rp_filter = 0
net.ipv4.conf.p2p2.rp_filter = 0

PowerStore Operating System 2.x and Later - Multi Subnet

About this task


Use this procedure for configuring the iSCSI adapter on Linux hosts connected to PowerStore with a PowerStore operating
system version 2.0 (or later), in which up to 32 network subnets are supported. For further details, see the Linux documentation
for the Linux version that is installed on the Linux hosts.

Steps
1. Dell recommends creating four target iSCSI IP addresses (two per node), on two different subnets/VLANs.
2. Configure two iSCSI interfaces on the same subnet as each of the storage cluster iSCSI portals.
NOTE: It is highly recommended not to use routing on iSCSI.

Example:



Description VLAN IP Address
Host (NIC-0) 11 1.1.1.10/24
Host (NIC-1) 12 1.1.2.10/24
Node-A-Port0 11 1.1.1.1/24
Node-A-Port1 12 1.1.2.1/24
Node-B-Port0 11 1.1.1.2/24
Node-B-Port1 12 1.1.2.2/24

Configuration Sample

About this task


The steps below provide a configuration sample with PowerStore operating system 2.x when host interfaces on the switch are
configured as Trunk, and VLAN is required.
NOTE: Consult your IT support for instructions that suit your environment.

The sample below uses a Red Hat Enterprise Linux host. Details may vary depending on your host configuration.

Steps
1. List the available adapters:

$ lshw -class network -short


H/W path Device Class Description
===============================================================
/0/100/3.2/0 p514p1 network 82599ES 10-Gigabit SFI/SFP+
Network Connection
/0/100/3.2/0.1 p514p2 network 82599ES 10-Gigabit SFI/SFP+
Network Connection
/0/100/1c/0 eth0 network I350 Gigabit Network Connection
/0/100/1c/0.1 eno1 network I350 Gigabit Network Connection

In this case, ports p514p1 and p514p2 are the PCIe interfaces connected to the iSCSI network.

2. Verify that network speed is 10 Gb/s at minimum:

$ ethtool p514p1 | grep Speed


Speed: 10000Mb/s



3. Verify that no IP addresses are assigned to the above interfaces:

$ ip -4 -br addr show


lo UNKNOWN 127.0.0.1/8
eth0 UP x.x.x.x/24

4. Enable Network Manager (if not enabled):

$ systemctl enable NetworkManager


$ systemctl start NetworkManager
$ systemctl status NetworkManager

5. Configure VLAN interfaces:


Two VLANs are configured (one for each subnet). In the example below, VLAN 11 is used. Repeat the steps to configure the
other VLAN interface.

$ nmcli connection add type vlan con-name vlan11 ifname vlan11 vlan.parent p514p1 vlan.id 11

** Set IP address and subnet


$ nmcli connection modify vlan11 ipv4.addresses 1.1.1.10/24

** Set to manual (not DHCP)


$ nmcli connection modify vlan11 ipv4.method manual

** Disable Default Gateway


$ nmcli connection modify vlan11 ipv4.never-default yes

** Reconnect on boot
$ nmcli connection modify vlan11 connection.autoconnect yes

** Disable IPv6, unless you wish to configure using IPv6


$ nmcli con mod vlan11 ipv6.method ignore
$ nmcli connection down vlan11
$ nmcli connection up vlan11

6. Verify configuration:

$ nmcli dev status


DEVICE TYPE STATE CONNECTION
eth0 ethernet connected Wired connection 1
vlan11 vlan connected vlan11
vlan12 vlan connected vlan12
p514p1 ethernet connected p514p1
eno1 ethernet disconnected --
p514p2 ethernet disconnected --
lo loopback unmanaged --

$ ip -4 -br addr show


lo UNKNOWN 127.0.0.1/8
eth0 UP x.x.x.x
vlan11@p514p1 UP 1.1.1.10/24
vlan12@p514p2 UP 1.1.2.10/24

7. Test network connectivity from the host:


The example below demonstrates a single target port. Ping from each interface to each target port.

$ ping -4 -w 3 -I vlan11 1.1.1.1

NOTE: 1.1.1.1 represents the iSCSI portal IP address of a PowerStore Storage Network.

8. Verify that TCP port 3260 is active on each target port:



The example below demonstrates a single target port. Repeat for all target ports.

# nc -z -v 1.1.1.1 3260
Connection to 1.1.1.1 3260 port [tcp/*] succeeded!

iSCSI Session Configuration


To fully use iSCSI with PowerStore, it is recommended to use iSCSI interfaces (IFACE) mainly when using a single subnet.
For up-to-date information, see the relevant operating system documentation about using iscsiadm to properly configure iSCSI.
The following provides an example for configuring iSCSI interfaces for a multi-subnet environment.
Prerequisites:
● Properly configured networking on the host and PowerStore appliance.
● In the example, IP addresses 1.1.1.1-2/24 and 1.1.2.1-2/24 are configured on the PowerStore appliance.
● The host is configured with two VLANs (11 and 12) with IP addresses 1.1.1.10 and 1.1.2.10.
The example demonstrates a host with two network cards, one configured with VLAN 11, and one with VLAN 12.
● Create two iSCSI interfaces on the host (one for each VLAN):

# iscsiadm --m iface -I vlan11 --o new


# iscsiadm --m iface -I vlan12 --o new

● Configure each IFACE to reflect the correct network interface:

# iscsiadm --mode iface -I vlan11 -o update -n iface.net_ifacename -v vlan11 updated


# iscsiadm --mode iface -I vlan12 -o update -n iface.net_ifacename -v vlan12 updated

NOTE: If you are using the physical interface (and not the VLAN interfaces), specify the interface that contains the IP
address (the device eth1, p2p1, and so on).

NOTE: Some operating system releases may require to configure additional parameters (in addition to
iface.net_ifacename) to properly identify the interface.
● Perform a discovery and login from the first subnet (if multiple subnets exist):

# iscsiadm -m discovery -t st -p 1.1.1.1 -I vlan11 -l


1.1.1.1:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-a-2ab6c956
1.1.2.1:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-a-2ab6c956
1.1.1.2:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-b-2e098984
1.1.2.2:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-b-2e098984
Logging in to [iface: vlan11, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-a-2ab6c956, portal: 1.1.1.1,3260]
Logging in to [iface: vlan11, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-a-2ab6c956, portal: 1.1.2.1,3260]
Logging in to [iface: vlan11, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-b-2e098984, portal: 1.1.1.2,3260]
Logging in to [iface: vlan11, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-b-2e098984, portal: 1.1.2.2,3260]
iscsiadm: Could not login to [iface: vlan11, target: iqn.2015-10.com.dell:dellemc-
powerstore-fnm00191800733-a-2ab6c956, portal: 1.1.2.1,3260].
iscsiadm: initiator reported error (8 - connection timed out)
Login to [iface: vlan11, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-a-2ab6c956, portal: 1.1.1.1,3260] successful.
iscsiadm: Could not login to [iface: vlan11, target: iqn.2015-10.com.dell:dellemc-
powerstore-fnm00191800733-b-2e098984, portal: 1.1.2.2,3260].
iscsiadm: initiator reported error (8 - connection timed out)
Login to [iface: vlan11, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-b-2e098984, portal: 1.1.1.2,3260] successful.

NOTE: The command logs in only to the target ports on the same VLAN as the iSCSI interface.



● Repeat the above to connect from the second subnet:

# iscsiadm -m discovery -t st -p 1.1.2.1 -I vlan12 -l


1.1.1.1:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-a-2ab6c956
1.1.2.1:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-a-2ab6c956
1.1.1.2:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-b-2e098984
1.1.2.2:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-b-2e098984
Logging in to [iface: vlan12, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-a-2ab6c956, portal: 1.1.1.1,3260]
Logging in to [iface: vlan12, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-a-2ab6c956, portal: 1.1.2.1,3260]
Logging in to [iface: vlan12, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-b-2e098984, portal: 1.1.1.2,3260]
Logging in to [iface: vlan12, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-b-2e098984, portal: 1.1.2.2,3260]
iscsiadm: Could not login to [iface: vlan12, target: iqn.2015-10.com.dell:dellemc-
powerstore-fnm00191800733-a-2ab6c956, portal: 1.1.1.1,3260].
iscsiadm: initiator reported error (8 - connection timed out)
Login to [iface: vlan12, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-a-2ab6c956, portal: 1.1.2.1,3260] successful.
iscsiadm: Could not login to [iface: vlan12, target: iqn.2015-10.com.dell:dellemc-
powerstore-fnm00191800733-b-2e098984, portal: 1.1.1.2,3260].
iscsiadm: initiator reported error (8 - connection timed out)
Login to [iface: vlan12, target: iqn.2015-10.com.dell:dellemc-powerstore-
fnm00191800733-b-2e098984, portal: 1.1.2.2,3260] successful.

● Verify that the sessions are properly configured:

# iscsiadm --m session


tcp: [1] 1.1.1.1:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-
b-2e098984 (non-flash)
tcp: [2] 1.1.2.1:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-
b-2e098984 (non-flash)
tcp: [3] 1.1.1.2:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-
a-2ab6c956 (non-flash)
tcp: [4] 1.1.2.2:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-
a-2ab6c956 (non-flash)

NOTE: The configurations in this example may differ based on your host and PowerStore configuration.

Updating iSCSI Configuration File


The configuration file for the multipath daemon is multipath.conf. It is used to overwrite the integrated configuration table of
the multipath daemon. When iSCSI is used with PowerStore, the /etc/iscsi/iscsid.conf file is used to overwrite iSCSI-specific settings.
To configure the PowerStore disk device, modify the /etc/iscsi/iscsid.conf file, using the following parameters:
NOTE: The example below is based on RedHat. For details about configuring PowerStore disk device with iSCSI, see the
specific instructions of your operating system.

Parameter: node.session.timeo.replacement_timeout
Description: Specifies the number of seconds the iSCSI layer waits for a timed-out path/session to reestablish before failing any commands on that path/session. The default value is 120.
Value: 15

Parameter: node.startup
Description: Defines whether the session should be established manually or automatically when the system starts from a power-off state or when rebooted. Default value for RedHat: automatic; default value for SUSE: manual.
Value: automatic

Using these settings prevents commands from being split by the iSCSI initiator and enables instantaneous mapping from the
host to the volume.
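
For reference, the resulting entries in /etc/iscsi/iscsid.conf would look like the following (an excerpt only; leave the remaining defaults in the file unchanged):

node.session.timeo.replacement_timeout = 15
node.startup = automatic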



To apply the adjusted iscsid.conf settings, run the following command on the Linux host:

systemctl restart iscsi

NOTE: If a previous iSCSI target is discovered on the Linux host, delete the iSCSI database and rerun the iSCSI target
discovery procedure with the iscsid.conf settings that are described above.

Multipathing Software Configuration

Pre-Requisites

Steps
1. Verify that DM-MPIO is installed:

$ rpm -qa | grep device-mapper-multipath

2. If not installed, install Device Mapper:

$ dnf install device-mapper-multipath

3. Verify that Device Mapper is enabled:

$ systemctl enable --now multipathd.service

Configuration with Device Mapper Multipathing for SCSI


For a PowerStore cluster to function properly with Linux hosts, configure the multipath settings file /etc/multipath.conf:

NOTE: If the host is connected to a cluster other than PowerStore, the configuration file may include additional devices.

NOTE: If the multipath.conf file includes a blacklist section, it should come before the devices section. For information, see
the Importing External Storage to PowerStore Guide.

NOTE: To resolve a known issue described in RedHat Knowledge Article 6298681 (multipathd crashes when...), it is highly
recommended to update the device-mapper-multipath package to version 0.4.9-135.el7_9 (or later).

## Use user friendly names, instead of using WWIDs as names.


defaults {
user_friendly_names yes
disable_changed_wwids yes
}

devices {
device {
vendor DellEMC
product PowerStore
path_selector "queue-length 0"
path_grouping_policy group_by_prio
path_checker tur
detect_prio yes
failback immediate
no_path_retry 3
rr_min_io_rq 1
fast_io_fail_tmo 15
max_sectors_kb 1024 ## only for RHEL 6.9 (or later 6.x versions) and RHEL 7.4 (or later)
}



## other devices
}

Parameter: disable_changed_wwids
Description: If set to yes, and the WWID of a path device changes while it is part of a multipath device, multipath disables access to the path device until the WWID of the path is restored to the WWID of the multipath device. The default value is no (does not check if the WWID of the path has changed).
Value: yes

Parameter: vendor
Description: Specifies the vendor name.
Value: DellEMC

Parameter: product
Description: The configuration in this device section applies only to PowerStore volumes.
Value: PowerStore

Parameter: path_selector
Description: Sends the next group of I/Os to the path with the least number of outstanding I/O requests.
Value: "queue-length 0"

Parameter: path_grouping_policy
Description: Specifies the default path grouping policy to apply to PowerStore volumes. Paths are grouped by priorities that are assigned by the cluster. A higher priority (50) is set as Active/Optimized; a lower priority (10) is set as Active/Non-Optimized.
Value: group_by_prio

Parameter: path_checker
Description: Specifies TEST UNIT READY as the default method used to determine the state of the paths.
Value: tur

Parameter: detect_prio
Description: If set to yes, multipath tries to detect whether the device supports ALUA. If so, the device automatically uses the alua prioritizer. Otherwise, the prioritizer is selected as usual. Default value is no.
Value: yes

Parameter: failback
Description: Manages the path group failback. Immediate refers to immediate failback to the highest priority path group that contains active paths.
Value: immediate

Parameter: no_path_retry
Description: Specifies the number of times the system should attempt to use a failed path before disabling queuing.
Value: 3

Parameter: rr_min_io_rq
Description: Specifies the number of I/O requests to route to a path before switching to the next path in the same path group, using request-based device-mapper-multipath. This setting should be used on systems running current kernels. On systems running kernels older than 2.6.31, use rr_min_io. Default value is 1.
Value: 1

Parameter: fast_io_fail_tmo
Description: Specifies the number of seconds the SCSI layer waits after a problem has been detected on an FC remote port before failing I/O to devices on that remote port. This value should be smaller than dev_loss_tmo. Setting this parameter to off disables the timeout.
Value: 15

Parameter: max_sectors_kb
Description: Applies to Red Hat Enterprise Linux Release 6.9 (or later 6.x versions) and Red Hat Enterprise Linux Release 7.4 (or later 7.x versions). Sets the max_sectors_kb device queue parameter to the specified value on all underlying paths of a multipath device before the multipath device is first activated. When a multipath device is created, the device inherits the max_sectors_kb value from the path devices. Manually raising this value for the multipath device or lowering it for the path devices can cause multipath to create I/O operations larger than the path devices allow. Using the max_sectors_kb parameter is an easy way to set these values before a multipath device is created on top of the path devices and prevent invalid-sized I/O operations from being passed. If this parameter is not set by the user, the path devices have it set by their device driver, and the multipath device inherits it from the path devices.
NOTE: In a PowerStore cluster, the maximum I/O size is 1 MB. PowerStore does not set an optimal transfer size.
Value: 1024

Configuring with Device Mapper Multipathing for NVMe


When configuring NVMe/FC on a Linux host that is connected to PowerStore, also configure DM-multipathing to setup multiple
I/O paths between the Linux host and the PowerStore array into a single device over NVMe/FC.
When configuring DM-multipathing for PowerStore NVMe/FC devices on the Linux host, configure the multipath settings
file /etc/multipath.conf:
NOTE: To resolve a known issue described in RedHat Knowledge Article 6298681 (multipathd crashes when...), it is highly
recommended to update the device-mapper-multipath package to version 0.4.9-135.el7_9 (or later).

## Use user friendly names, instead of using WWIDs as names.


defaults {
user_friendly_names yes
disable_changed_wwids yes
}

devices {
device {
vendor .*
product dellemc-powerstore
uid_attribute ID_WWN
prio ana
failback immediate
path_grouping_policy "group_by_prio"
# path_checker directio
path_selector "queue-length 0"
detect_prio "yes"
fast_io_fail_tmo 15
no_path_retry 3
rr_min_io_rq 1
}
## other devices
}

Configuration with PowerPath


PowerStore supports multipathing using Dell EMC PowerPath on a Linux host.
For the most updated information about PowerPath support with PowerStore, see the PowerStore Simple Support Matrix.
For details on installing and configuring PowerPath with PowerStore on your host, see Dell EMC PowerPath on Linux Installation
and Administration Guide for the PowerPath version you are installing.

NOTE: Ensure that the multipath.conf file includes the max_sectors_kb setting if working with iSCSI or Fibre Channel.

Configuring Oracle ASM


For proper functioning with Linux hosts using Oracle ASM with PowerStore, configure the Oracle ASM settings file.

Prerequisites
For a PowerStore cluster to function properly with Linux hosts that are using the Oracle ASM volume management software
with ASMLib driver, follow these steps to configure the /etc/sysconfig/oracleasm settings file:



Steps
1. Modify the following lines in the /etc/sysconfig/oracleasm file according to the multipathing used on the host:

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning


ORACLEASM_SCANORDER=""
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

● When DM-MPIO multipathing is used on the Linux host, edit these lines as follows:

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning


ORACLEASM_SCANORDER="dm"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"

● When PowerPath multipathing is used on the Linux host, edit these lines as follows:

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning


ORACLEASM_SCANORDER="emcpower"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"

2. Shut down the Oracle instance running on the specific host, and run the following commands to restart Oracle ASM:

/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start

Post-Configuration Steps - Using the PowerStore system
After the host configuration is completed, you can access the PowerStore system from the host.
You can create, present, and manage volumes accessed from the host using PowerStore Manager, CLI, or REST API. See the
PowerStore Manager Online Help, CLI Reference Guide, or REST API Reference Guide for additional information.

Presenting PowerStore Cluster Volumes to the Linux Host


Specify Linux as the operating system when presenting PowerStore cluster volumes to the Linux host.
● When adding host groups and hosts to allow Linux hosts to access PowerStore cluster volumes, specify Linux as the
operating system for the newly created hosts.
● Setting the operating system of the host is required for optimal interoperability and stability of the host with PowerStore
cluster storage. You can adjust the setting while the host is online and connected to the PowerStore cluster with no I/O
impact.

Partition Alignment in Linux


When using disk partitions with a Linux host attached to a PowerStore cluster, alignment is recommended. Follow these
guidelines to align disk partitions.
To align partitions on PowerStore cluster volumes that are presented to Linux hosts, use the default value (2048). Then, create
a partition using the fdisk command to ensure that the file system is aligned.
When you perform partition alignment, the logical device (/dev/mapper/) should be used rather than the physical device
(/dev/). When multipathing is not used (for example in a virtual machine), the physical device should be used.
The following example demonstrates using the fdisk command to create an aligned partition on a PowerStore cluster volume.

[root@lg114 ~]# fdisk -c -u /dev/mapper/368ccf098003f1461569ea4750e9dac50


Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x12d4e90c. Changes will remain
in memory only, until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/mapper/3514f0c5b12a00004: 1649.3 GB, 1649267441664 bytes


255 heads, 63 sectors/track, 200512 cylinders, total 3221225472 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes I/O size
(minimum/optimal): 16384 bytes / 65536 bytes Disk identifier: 0x12d4e90c
Device Boot Start End Blocks Id System

In this mode, rather than using "cylinders" for creating partitions, the fdisk command uses sectors, which are a direct mapping
to the LBA space of the cluster. Thus, to verify that the partition is aligned, simply verify that the starting sector number is a
multiple of 16 (16 sectors, at 512 bytes each, is 8 KB). The fdisk command defaults to a starting sector for the first partition
of 2048, which is divisible by 16, and thus is correctly aligned.
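
As an alternative to the interactive fdisk session above, a partition can be created and its alignment verified non-interactively. This is an illustrative sketch only: the device name is a placeholder for your multipath device, and it uses a GPT label (adjust if you require an MBR/DOS label):

# parted -s /dev/mapper/mpatha mklabel gpt mkpart primary 2048s 100%
# parted /dev/mapper/mpatha align-check optimal 1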

Creating a File System


When creating a file system with PowerStore cluster storage, use its default block size, and disable UNMAP during creation.
It is recommended to create the file system using its default block size (using a non-default block size may lead to unexpected
behavior). See your operating system and file system documentation.
NOTE: Creating a file system with UNMAP enabled on a host connected to PowerStore may result in an increased amount
of write I/Os to the storage subsystem. It is highly recommended to disable UNMAP during file system creation.
To disable UNMAP during file system creation:
● When creating a file system using the mke2fs command - Use the "-E nodiscard" parameter.
● When creating a file system using the mkfs.xfs command - Use the "-K" parameter.
For more efficient data utilization and better performance, use the Ext4 file system with PowerStore cluster storage instead of
Ext3. For details about converting to the Ext4 file system (from either Ext3 or Ext2), see Upgrade to Ext4.
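
For example, the UNMAP-disabling flags called out above would be used as follows (device names are placeholders):

# mke2fs -t ext4 -E nodiscard /dev/mapper/mpatha1
# mkfs.xfs -K /dev/mapper/mpathb1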

NOTE: File system configuration and management are out of the scope of this document.

Mounting iSCSI File Systems


When configuring /etc/fstab to automatically mount iSCSI file systems, verify the following:
● Enable Netfs
NOTE: The command may change depending on the Linux release that you are using.

# systemctl enable remote-fs.target


# systemctl start remote-fs.target

● If you are not using LVM, edit the /etc/fstab file to mount the file systems automatically when the system boots.
● On Red Hat Enterprise Linux, the _netdev option should be used to indicate that the file system must mount automatically.
The example below demonstrates a configuration entry with the _netdev option:

#device mount point FS Options Backup fsck


/dev/mapper/diskname /mnt/vol1 ext4 _netdev 0 2

If the file system being mounted exists directly on the device (does not use LVM), it is recommended to use labels, as shown
in the example above. For information, see RedHat Knowledge Article 3889 (How can I mount iSCSI devices...). If you still
experience issues, see RedHat Knowledge Article 22993 (Why aren't remote filesystems...) for additional troubleshooting
steps.
● On SUSE Linux 11 and later, the nofail option should be used to indicate that the file system must mount automatically.
The example below demonstrates a configuration entry with the nofail option:

/dev/mapper/diskname /mnt/vol1 ext4 rw,nofail 0 2



Replace diskname with the iSCSI disk name and /mnt/vol1 with the mount point of the partition. For information,
see SUSE Knowledge Article 000017130 (/etc/fstab entry does not mount...).



6
Host Configuration for AIX
This chapter contains the following topics:
Topics:
• Related E-Lab Host Connectivity Guide
• Recommended Configuration Values Summary
• 2 TB LUN Size Support
• Boot from SAN
• Fibre Channel Configuration
• Dell EMC AIX ODM Installation

Related E-Lab Host Connectivity Guide


The topics in this chapter detail specific caveats and configuration parameters that must be present when configuring
an AIX host to access PowerStore storage. These caveats and parameters should be applied with the configuration
steps that are detailed in the E-Lab Host Connectivity Guide for AIX (see the E-Lab Interoperability Navigator at https://
elabnavigator.dell.com).

Recommended Configuration Values Summary


The following table summarizes all used and recommended variables and their values when configuring hosts for AIX.

NOTE: Unless indicated otherwise, use the default parameters values.

Validation: To clarify the above note for using default parameter settings, unless stated otherwise in this chapter, make sure that the following are set per the default OS setting:
● LUN and HBA queue depth
● HBA timeout
Impact: Stability & Performance
Severity: Recommended
Refer to Section: For further details, refer to OS and HBA documentation.

Validation: Fibre Channel configuration - no more than eight (8) paths per LUN should be used with an AIX host that is connected to PowerStore.
Impact: Stability & Performance
Severity: Mandatory
Refer to Section: Fibre Channel Configuration

Validation: LUN queue depth - if I/O throttling is required, the default queue depth value of 256 should be modified to a lower value.
Impact: Performance
Severity: Recommended
Refer to Section: Queue Depth

Validation: HBA FC max I/O size - max_xfer_size should be set to 1 MB.
Impact: Performance
Severity: Recommended
Refer to Section: Fibre Channel Adapter Device Driver Maximum I/O Size

Validation: ODM minimum version - DellEMC.AIX.6.2.0.1.tar.Z.
NOTE: When upgrading the connected PowerStore system to PowerStore OS version 1.0.2.0.5.003 (or later), AIX ODM version 6.2.0.1 is required when using PowerStore.
Impact: Stability
Severity: Mandatory
Refer to Section: Dell EMC AIX ODM Installation

Validation: To enable Fast I/O Failure for all fscsi devices, set the fscsi device attribute fc_err_recov to fast_fail.
Impact: Stability and Performance
Severity: Warning
Refer to Section: Fast I/O Failure for Fibre Channel Devices

Validation: To enable dynamic tracking of FC devices, set dyntrk = yes.
Impact: Stability and Performance
Severity: Warning
Refer to Section: Dynamic Tracking

Validation: PowerStore operating systems earlier than 2.1.0 do not support volumes larger than 2 TB with AIX.
Impact: Stability
Severity: Mandatory
Refer to Section: 2 TB LUN Size Support

2 TB LUN Size Support


PowerStore operating system versions earlier than 2.1.0 do not support the SCSI WRITE AND VERIFY (16) command (opcode 0x8E,
16-byte CDB), which is used for volumes larger than 2 TB. If you must use volumes that are larger than 2 TB with AIX, it is highly
recommended to upgrade to PowerStore operating system version 2.1.0 (or later). Otherwise, use volumes that are no larger
than 2 TB.

Boot from SAN


For guidelines and recommendations for boot from SAN with AIX hosts and PowerStore, refer to the Considerations for Boot
from SAN with PowerStore appendix.

Fibre Channel Configuration


This section describes the recommended configuration that should be applied when attaching AIX hosts to a PowerStore cluster
using Fibre Channel.
NOTE: When using Fibre Channel with PowerStore, the FC Host Bus Adapter (HBA) issues that are described in this
section should be addressed for optimal performance.

NOTE: In general, no more than eight (8) paths per LUN should be used with an AIX host that is connected to PowerStore.
If more paths are needed, an RPQ is required.

Pre-Requisites
Before you install HBAs on an AIX host, the following pre-requisites should be met.
Follow the IBM recommendations for installation and setup of the appropriate HBA for your system. It is recommended to install
the latest driver version (patch), as described on the IBM support site for each specific FC HBA.
Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC HBA models and drivers.



Queue Depth
Follow these recommendations when setting queue depth.
NOTE: Changing queue depth settings is designed for advanced users. Increasing the queue depth may cause the host to
overstress other clusters that are connected to the AIX host, resulting in performance degradation while communicating
with them. Therefore, especially in mixed environments with multiple cluster types that are connected to the AIX host,
compare the PowerStore recommendations for queue depth with those of other platforms before applying them.
Queue depth is the number of SCSI commands (including I/O requests) that a storage device can handle at a given time. Queue
depth can be set on either of the following levels:
● Initiator level - HBA queue depth
● LUN level - LUN queue depth
The LUN queue depth setting controls the number of outstanding I/O requests for a single path.
HBA queue depth (also referred to as execution throttle) setting controls the number of outstanding requests per HBA port.
For optimal operation with PowerStore, it is recommended to adjust the HBA queue depth setting of the FC HBA.
The driver module for the card controls LUN queue depth settings at the operating system level. Change the LUN queue depth
default value (256) to a lower value only if I/O throttling is required.

Setting the Queue Depth


Follow these steps to set the HBA queue depth.

About this task


To set the HBA queue depth:

Steps
1. Run the chdev command for each HBA in the AIX host to set the HBA firmware level queue depth:

chdev -l fcs# -a num_cmd_elems=2048 -P

2. Reboot the AIX host to apply the HBA queue depth settings.
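
After the reboot, you can confirm the adapter setting with lsattr (a quick check; fcs0 is used here as an example adapter instance):

lsattr -El fcs0 -a num_cmd_elems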

Fast I/O Failure for Fibre Channel Devices


This topic describes the Fast I/O Failure feature for FC devices and details the setting recommendations.
AIX supports Fast I/O Failure for Fibre Channel devices after link events in a switched environment.
When the FC adapter driver detects a link event, such as a lost link between a storage device and a switch, it waits for the
fabric to stabilize (approximately 15 s). If the device is not on the fabric, the FC adapter driver fails all new I/Os or future
retries of the failed I/Os, until the device rejoins the fabric. The fscsi device attribute fc_err_recov controls Fast I/O Failure
(default value is delayed_fail).
It is recommended to enable Fast I/O Failure for FC adapters that are connected to PowerStore storage.
To enable Fast I/O Failure for all fscsi devices, set the fc_err_recov attribute to fast_fail, as shown in the following
example:

NOTE: In the example, the fscsi device instance is fscsi0.

chdev -l fscsi0 -a fc_err_recov=fast_fail -P

Run the following command to verify that the setting was enabled in the ODM:

lsattr -El fscsi0



NOTE: The -P flag only modifies the setting in the ODM and requires a system reboot for the changes to apply.

Fast fail logic is applied when the switch sends a Registered State Change Notification (RSCN) to the adapter driver, indicating
a link event with a remote storage device port.
Fast I/O Failure is useful when multipathing software is used. Setting the fc_err_recov attribute to fast_fail can
decrease I/O failure due to link loss between the storage device and switch, by supporting faster failover to alternate paths.

Dynamic Tracking
This topic describes the dynamic tracking logic for FC devices and details the setting recommendations.
Dynamic tracking logic is applied when the adapter driver receives an indication from the switch that a link event with a remote
storage device port has occurred.
If dynamic tracking of FC devices is enabled, the FC adapter driver detects when the Fibre Channel N_Port ID of a device
changes. The FC adapter driver then reroutes the traffic that is destined for that device to the new address, while the devices
are still online.
Events that can cause an N_Port ID to change include:
● Moving a cable that connects a switch to a storage device from one switch port to another.
● Connecting two separate switches using an Inter-Switch Link (ISL).
● Rebooting a switch.
The fscsi device attribute dyntrk controls dynamic tracking of FC devices (default value is no for non-NPIV configurations).
It is recommended to enable dynamic tracking for PowerStore volumes.
To enable dynamic tracking for FC devices, change all fscsi device attributes to dyntrk=yes, as shown in the following
example:

NOTE: In the example, the fscsi device instance is fscsi0.

chdev -l fscsi0 -a dyntrk=yes -P

Run the following command to verify that the setting was enabled in the ODM:

lsattr -El fscsi0

NOTE: The -P flag only modifies the setting in the ODM and requires a system reboot for the changes to apply.
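
Because both fc_err_recov and dyntrk must be set on every fscsi instance on the host, a short shell loop similar to the following can apply them in one pass (a sketch only; review the device list that lsdev returns before running it, and reboot afterward because the -P flag only updates the ODM):

for f in $(lsdev -C -F name | grep "^fscsi"); do
    chdev -l $f -a fc_err_recov=fast_fail -a dyntrk=yes -P
done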

Fibre Channel Adapter Device Driver Maximum I/O Size


Set the max_xfer_size attribute for optimal AIX host operation over FC with PowerStore.

Prerequisites
The max_xfer_size FC HBA adapter device driver attribute for the fcs device controls the maximum I/O size that the
adapter device driver can handle. This attribute also controls a memory area that the adapter uses for data transfers.
For optimal AIX host operation over FC with PowerStore, perform the following steps:

Steps
1. Run the following command on all FC adapters that are connected to PowerStore:

chdev -l fcs0 -a max_xfer_size=0x100000 -P

2. Reboot the AIX host to apply the max_xfer_size setting adjustments.


NOTE: For virtualized AIX hosts, make sure to apply the max_xfer_size setting adjustments on all LPARs of the host
that are connected to PowerStore storage.
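
To verify the setting after the reboot, the adapter attribute can be queried (fcs0 is an example instance):

lsattr -El fcs0 -a max_xfer_size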



Dell EMC AIX ODM Installation
This topic provides an introduction to Dell EMC ODM.
The Object Data Manager (ODM) is a database of system and device configuration information that is integrated into the AIX
operating system. Information is stored and maintained as objects with associated characteristics. The Dell EMC ODM support
package contains a series of installable filesets. These filesets are used to update the AIX ODM with customized Dell EMC
storage device configuration attributes.

Dell EMC AIX ODM Installation Requirements


This section outlines the requirements for Dell EMC AIX ODM installation.
To meet the Dell EMC storage cluster requirements, you must install the correct Dell EMC ODM filesets to support Fibre
Channel attachment to the PowerStore cluster.
The minimum ODM and AIX operating system versions that are supported with PowerStore and native MPIO are:
DellEMC.AIX.6.2.0.1.tar.Z -> For AIX 7.1, 7.2, 7.3, and VIOS versions 3.1.0 or later.
PowerStore AIX ODM software package must be updated to version 6.2.0.1 before running PowerStore operating system
1.0.2.0.5.003 (or later) with PowerPath. An RPQ for PowerPath is required for this configuration. PowerStore operating system
SP2 contains a change to the PowerStore cluster serial number that requires this new ODM package version. A reboot of the
connected AIX host is required for this change to take effect.
To install the Dell EMC fileset:
1. Download the correct Dell EMC ODM fileset version and place it in the /tmp/ODM directory.
2. Untar the DellEMC.AIX.6.2.0.1.tar.Z file, using the following commands:

uncompress DellEMC.AIX.6.2.0.1.tar.Z
tar -xvf DellEMC.AIX.6.2.0.1.tar

3. Run the following command to create a table of contents file:

inutoc .

4. Run the following command to install the filesets that support native MPIO:

installp -ad . EMC.PowerStore.aix.rte EMC.PowerStore.fcp.MPIO.rte


Installation Summary
------------------------
Name Level Part Event Result
--------------------------------------------------------------------
EMC.PowerStore.aix.rte 6.2.0.1 USR APPLY SUCCESS
EMC.PowerStore.fcp.MPIO.rte 6.2.0.1 USR APPLY SUCCESS

5. Run the following command to install the filesets that support PowerPath (an RPQ for PowerPath is required for this
configuration):

installp -ad . EMC.PowerStore.aix.rte EMC.PowerStore.fcp.rte


Installation Summary
-------------------------
Name Level Part Event Result
----------------------------------------------------------------
EMC.PowerStore.aix.rte 6.2.0.1 USR APPLY SUCCESS
EMC.PowerStore.fcp.rte 6.2.0.1 USR APPLY SUCCESS
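
To confirm the installed fileset levels afterward, you can list them with lslpp (a minimal check; the grep pattern assumes the fileset names shown above):

lslpp -l | grep EMC.PowerStore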



7
Host Configuration for Solaris
This chapter contains the following topics:
Topics:
• Related E-Lab Host Connectivity Guide
• Recommended Configuration Values Summary
• Boot from SAN
• Fibre Channel Configuration
• Solaris Host Parameter Settings
• Post configuration steps - using the PowerStore system

Related E-Lab Host Connectivity Guide


The topics in this chapter detail specific caveats and configuration parameters that must be present when configuring a
Solaris host to access PowerStore storage. Apply these caveats and parameters together with the configuration steps
that are detailed in the E-Lab Host Connectivity Guide for Solaris (see the E-Lab Interoperability Navigator at https://
elabnavigator.dell.com).

Recommended Configuration Values Summary


The following table summarizes all used and recommended variables and their values when configuring hosts for the Solaris
operating system.

NOTE: Unless indicated otherwise, use the default parameter values.

NOTE: Solaris can use two types of disk drivers to manage disk storage. The driver type depends on the platform
architecture (x86 or SPARC) and the version of Solaris installed on the platform.

All versions of Solaris x86 use the sd disk driver to manage all disk storage.

SPARC platforms running Solaris releases prior to 11.4 use the ssd driver type to manage all disk storage.

To simplify configuration and disk storage management, as of Solaris 11.4 both platforms use the sd driver.

If a SPARC system is upgraded to Solaris 11.4 from an earlier version, the system continues to use the ssd driver.
All new installations of Solaris 11.4 are configured to use the sd driver for disk management.

Make sure that you update the tuning settings in the correct disk driver configuration file.

Validation: Set the maximum I/O size to 1 MB:
set maxphys = 0x100000
Config File: /etc/system. Impact: Stability. Severity: Mandatory. Refer to Section: Updating ssd.conf configuration file, Updating sd.conf configuration file.

Validation: Configure ZFS space reclamation:
set zfs:zfs_unmap_ignore_size=256
set zfs:zfs_log_unmap_ignore_size=256
Config File: /etc/system. Impact: Efficiency. Severity: Recommended. Refer to Section: Creating PowerStore System Configuration File.

Validation: Enable Solaris MPxIO multipathing:
mpxio-disable="no";
Config File: fp.conf. Impact: Stability. Severity: Recommended. Refer to Section: Updating fp.conf configuration file.

Validation: Fibre Channel path failover tuning:
fp_offline_ticker = 20;
Config File: fp.conf. Impact: Stability. Severity: Recommended. Refer to Section: Updating fp.conf configuration file.

Validation: Fibre Channel path failover tuning:
fcp_offline_delay = 20;
Config File: fcp.conf. Impact: Stability. Severity: Recommended. Refer to Section: Updating fcp.conf configuration file.

Validation: Maximum I/O size for the ssd driver, for Solaris 10 and 11-11.3 (SPARC):
ssd_max_xfer_size=0x100000;
Config File: ssd.conf. Impact: Stability. Severity: Mandatory. Refer to Section: Updating ssd.conf configuration file.

Validation: Maximum I/O size for the sd driver, for Solaris 11.4 (SPARC) and 11.x (x86):
sd_max_xfer_size=0x100000;
Config File: sd.conf. Impact: Stability. Severity: Mandatory. Refer to Section: Updating sd.conf configuration file.

Validation: Solaris ssd driver tuning for Solaris 10 and 11-11.3 (SPARC):
ssd-config-list = "DellEMC PowerStore","throttle-max:64, physical-block-size:4096, disksort:false, cache-nonvolatile:true";
Config File: ssd.conf. Impact: Stability. Severity: Recommended. Refer to Section: Updating ssd.conf configuration file.

Validation: Solaris sd driver tuning for Solaris 11.4 (SPARC) and 11.x (x86):
sd-config-list = "DellEMC PowerStore","throttle-max:64, physical-block-size:4096, disksort:false, cache-nonvolatile:true";
Config File: sd.conf. Impact: Stability. Severity: Recommended. Refer to Section: Updating sd.conf configuration file.

Validation: Solaris MPxIO multipath driver tuning:
load-balance="round-robin";
auto-failback="enable";
Config File: scsi_vhci.conf. Impact: Stability. Severity: Mandatory. Refer to Section: Updating scsi_vhci.conf configuration file.

Validation: Solaris MPxIO multipath driver tuning:
scsi-vhci-update-pathstate-on-reset = "DellEMC PowerStore", "yes";
Config File: scsi_vhci.conf. Impact: Stability. Severity: Mandatory. Refer to Section: Updating scsi_vhci.conf configuration file.

Boot from SAN


For guidelines and recommendations for boot from SAN with Solaris hosts and PowerStore, refer to the Considerations for Boot
from SAN with PowerStore appendix.

Fibre Channel Configuration


This section describes the recommended configuration to apply when attaching a host to the PowerStore cluster using host
Fibre Channel HBAs.



NOTE: When using Fibre Channel with PowerStore, the FC Host Bus Adapter (HBA) issues described in this section
should be addressed for optimal performance.

NOTE: Before you proceed, review Fibre Channel and NVMe over Fibre Channel SAN Guidelines.

Pre-Requisites
Before installing HBAs in a Solaris host, the following pre-requisites should be met:
● Follow Oracle's recommendations for installation and setup of the appropriate HBA for your system.
● It is recommended to install the latest driver version (patch), as described on the Oracle support site for each specific FC
HBA.
● Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC HBA models and drivers.

Queue Depth
Queue depth is the number of SCSI commands (including I/O requests) that a storage device can handle at a given time.
Queue depth can be set on either of the following levels:
● Initiator level - HBA queue depth
● LUN level - LUN queue depth
The LUN queue depth setting controls the number of outstanding I/O requests for a single path. The HBA queue depth (also
referred to as execution throttle) setting controls the number of outstanding I/O requests per HBA port.
With PowerStore and Solaris, the HBA queue depth setting should retain its default value, and the initial LUN queue depth
setting should be modified to 64. This is a good starting point, provided that I/O response times remain good. The specific value
can be adjusted based on the particular infrastructure configuration, application performance, and I/O profile.

Solaris Host Parameter Settings


This section describes the Solaris host parameter settings required for optimal configuration when using Dell Technologies
PowerStore storage.

Configuring Solaris native multipathing


For a PowerStore cluster to properly function with Oracle Solaris hosts, configure the multipath settings as described in the
following sections.

NOTE: If the host is connected to a cluster other than PowerStore, the configuration file may include additional devices.

NOTE: Currently, PowerStore clusters are only supported with native Solaris multipathing (MPxIO).

Enable Solaris native multipathing on Solaris 10 and 11.0-11.4 hosts (SPARC and x86)

To enable management of storage LUNs that are presented to the host with MPxIO, use the following command:

# stmsboot -e

NOTE: The host must be rebooted immediately after the command execution is complete. It is recommended to update all
storage-related host configuration files before rebooting.
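
After the reboot, you can confirm that MPxIO is managing the presented LUNs, for example with mpathadm (a quick check; the output depends on your configuration):

# mpathadm list lu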



Updating scsi_vhci.conf configuration file

About this task


The scsi_vhci.conf file is used to configure third-party storage multipathing parameters on Solaris 11 hosts, based on SCSI
inquiry responses. The host sends SCSI inquiry commands and, based on the returned data, MPxIO driver activates the
corresponding multipathing module. The load-balancing and failover policies are also configured, based on the settings in the
scsi_vhci.conf file.

Steps
1. Run the following command to verify the scsi_vhci.conf file location:

# ls /etc/driver/drv/

2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:

# cp /kernel/drv/scsi_vhci.conf /etc/driver/drv

3. Run the following commands to create a backup copy of the scsi_vhci.conf file:

# cp -p /etc/driver/drv/scsi_vhci.conf /etc/driver/drv/scsi_vhci.conf_ORIG

4. Modify the scsi_vhci.conf file by adding the following recommended entries for PowerStore storage:

load-balance="round-robin";
auto-failback="enable";
scsi-vhci-update-pathstate-on-reset="DellEMC PowerStore", "yes";
scsi-vhci-failover-override="DellEMC PowerStore", "f_tpgs";

Parameter: load-balance
Description: Specifies the default load-balancing policy. Possible values are none, logical-block, and round-robin.
Value: round-robin

Parameter: auto-failback
Description: Specifies whether LUN access should be restored when the path is restored. Possible values are enable and disable.
Value: enable

Parameter: scsi-vhci-update-pathstate-on-reset
Description: Enables path status update on reset. Possible values are yes and no.
Value: yes

Parameter: scsi-vhci-failover-override
Description: Adds a third-party (non-Sun) storage device to run under scsi_vhci (and thereby take advantage of scsi_vhci multipathing). The string "DellEMC PowerStore" is the vendor/product ID corresponding to Dell EMC PowerStore devices, and f_tpgs selects the Target Port Groups (ALUA) failover module.
Value: f_tpgs (Target Port Groups, ALUA)

PowerPath Configuration with PowerStore Volumes


PowerStore supports multipathing using Dell EMC PowerPath on a Solaris host.
For the most updated information about PowerPath support with PowerStore, see the PowerStore Simple Support Matrix.
For details on installing and configuring PowerPath with PowerStore on your host, see Dell EMC PowerPath on Solaris
Installation and Administration Guide for the PowerPath version you are installing.



Host storage tuning parameters
Configure host storage tuning parameters as described in the following sections.

Updating fp.conf configuration file

About this task


The fp.conf host file is used to control options for Fibre Channel storage. The MPxIO settings in the fp.conf file should match the
settings in scsi_vhci.conf.

Steps
1. Run the following command to verify the fp.conf file location:

# ls /etc/driver/drv/

2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:

# cp /kernel/drv/fp.conf /etc/driver/drv

3. Run the following commands to create a backup copy and modify the file:

# cp -p /etc/driver/drv/fp.conf /etc/driver/drv/fp.conf_ORIG
# vi /etc/driver/drv/fp.conf

Example
Below are the entries that are recommended for PowerStore storage.

mpxio-disable="no";
fp_offline_ticker=20;

Parameter: mpxio-disable
Description: Specifies whether MPxIO is disabled. MPxIO can be enabled for Fibre Channel storage, or it can be disabled for a particular HBA.
Value: no

Parameter: fp_offline_ticker
Description: Used to prevent errors from being generated immediately for transient/brief connection interruptions. If the connections are restored before the fcp and fp delays expire, this should prevent any errors.
Value: 20

Run the following commands to verify these fp.conf parameter settings:

root@support-sparc1:~# echo "mpxio-disable/U"|mdb -k


mpxio-disable:
mpxio-disable: no
root@support-sparc1:~# echo "fp_offline_ticker/U"|mdb -k
fp_offline_ticker:
fp_offline_ticker: 20



Creating PowerStore System Configuration File

About this task


The /etc/system.d directory holds files that are used to control Solaris kernel tuning settings.

Steps
1. Run the following command to change directory to the /etc/system.d directory:

# cd /etc/system.d

2. Run the following command to create a new PowerStore.conf file with the recommended Solaris kernel tuning settings for
PowerStore storage:

# vi PowerStore.conf

Example
Below are Solaris kernel tuning setting entries recommended for PowerStore storage.

set maxphys = 0x100000


set zfs:zfs_unmap_ignore_size=256
set zfs:zfs_log_unmap_ignore_size=256

Parameter: set maxphys
Description: Sets the maximum size of a single I/O request. For PowerStore it must be set to no more than 1 MB.
Value: 0x100000

Parameter: set zfs:zfs_unmap_ignore_size
Description: ZFS TRIM setting.
Value: 256

Parameter: set zfs:zfs_log_unmap_ignore_size
Description: ZFS TRIM setting.
Value: 256
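
After a reboot, the active maxphys value can be verified with mdb, similar to the fp.conf and fcp.conf checks in this chapter (a sketch, assuming the standard kernel symbol name; the expected value is 1048576, that is 0x100000):

# echo "maxphys/D" | mdb -k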

Updating fcp.conf configuration file

About this task


The fcp.conf host file is used to control options for Fibre Channel storage.

Steps
1. Run the following command to verify the fcp.conf file location:

# ls /etc/driver/drv/

2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:

# cp /kernel/drv/fcp.conf /etc/driver/drv

3. Run the following commands to create a backup copy and modify the file:

# cp -p /etc/driver/drv/fcp.conf /etc/driver/drv/fcp.conf_ORIG
# vi /etc/driver/drv/fcp.conf



Example
Below are the entries that are recommended for PowerStore storage.

fcp_offline_delay = 20;

Parameter: fcp_offline_delay
Description: This tuning setting is designed to prevent errors from being generated immediately for transient/brief connection interruptions. If the connections are restored before the fcp and fp delays expire, this should prevent any errors.
Value: 20

Run the following commands to verify this fcp.conf parameter setting:

root@support-sparc1:~# echo "fcp_offline_delay/U"|mdb -k


fcp_offline_delay:
fcp_offline_delay: 20

Updating ssd.conf configuration file (Solaris 10 and 11.0-11.3 SPARC)

About this task


The ssd.conf host file is used to control options for SCSI disk storage devices.

Steps
1. Run the following command to verify the ssd.conf file location:

# ls /etc/driver/drv/

2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:

# cp /kernel/drv/ssd.conf /etc/driver/drv

3. Run the following commands to create a backup copy and modify the file:

# cp -p /etc/driver/drv/ssd.conf /etc/driver/drv/ssd.conf_ORIG
# vi /etc/driver/drv/ssd.conf

Example
Below are the entries that are recommended for PowerStore storage.

ssd_max_xfer_size=0x100000;
ssd-config-list = "DellEMC PowerStore", "throttle-max:64, physical-block-size:4096,
disksort:false, cache-nonvolatile:true";

Parameter: ssd_max_xfer_size
Description: Restricts the SCSI disk driver maximum I/O size to 1 MB.
Value: 0x100000

Parameter: ssd-config-list
Description: SCSI inquiry storage response, the VPD for PowerStore LUNs.
Value: DellEMC PowerStore

Parameter: throttle-max
Description: Maximum SCSI queue depth setting.
Value: 64

Parameter: physical-block-size
Description: Optimal LUN block size in bytes.
Value: 4096

Parameter: disksort
Description: SCSI device command optimization.
Value: false

Parameter: cache-nonvolatile
Description: Indicates whether the storage has a nonvolatile cache.
Value: true

Updating sd.conf configuration file (Solaris 11.x x86 and 11.4 SPARC)

About this task


The sd.conf host file is used to control options for SCSI disk storage devices.

Steps
1. Run the following command to verify the sd.conf file location:

# ls /etc/driver/drv/

2. If the file is not in the expected location, run the following command to copy it from /kernel/drv:

# cp /kernel/drv/sd.conf /etc/driver/drv

3. Run the following commands to create a backup copy and modify the file:

# cp -p /etc/driver/drv/sd.conf /etc/driver/drv/sd.conf_ORIG
# vi /etc/driver/drv/sd.conf

Example
Below are the entries that are recommended for PowerStore storage.

sd_max_xfer_size=0x100000;
sd-config-list = "DellEMC PowerStore", "throttle-max:64, physical-block-size:4096,
disksort:false, cache-nonvolatile:true";

Parameter: sd_max_xfer_size
Description: Restricts the SCSI disk driver maximum I/O size to 1 MB.
Value: 0x100000

Parameter: sd-config-list
Description: SCSI inquiry storage response, the VPD for PowerStore LUNs.
Value: DellEMC PowerStore

Parameter: throttle-max
Description: Maximum SCSI queue depth setting.
Value: 64

Parameter: physical-block-size
Description: Optimal LUN block size in bytes.
Value: 4096

Parameter: disksort
Description: SCSI device command optimization.
Value: false

Parameter: cache-nonvolatile
Description: Indicates whether the storage has a nonvolatile cache.
Value: true

Post configuration steps - using the PowerStore system
When host configuration is complete, you can access the PowerStore system from the host.
You can create, present, and manage volumes accessed from the host via PowerStore Manager, CLI, or REST API. Refer to the
PowerStore Manager Online Help, CLI Reference Guide, or REST API Reference Guide for additional information.



When adding host groups and hosts to allow Solaris hosts to access PowerStore volumes, specify Solaris as the operating
system for the newly-created hosts.
NOTE: Setting the host’s operating system is required for optimal interoperability and stability of the host with PowerStore
storage. You can adjust the setting while the host is online and connected to the PowerStore cluster with no I/O impact.

NOTE: Refer to the PowerStore Configuring Volumes Guide for additional information.

Partition alignment in Solaris


Use the Solaris format command to create partitions aligned to 4 KB on PowerStore cluster LUNs for use as raw or UFS devices.
When a PowerStore LUN is added to a ZFS pool, ZFS automatically creates aligned partitions.
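
To confirm the alignment of an existing label, prtvtoc can be used to inspect the starting sector of each partition (a quick check; the device path below is a placeholder, substitute the raw device of your PowerStore LUN). With 512-byte sectors, a partition whose first sector is a multiple of 8 starts on a 4 KB boundary.

# prtvtoc /dev/rdsk/cXtYdZs2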



8
Host Configuration for HP-UX
This chapter contains the following topics:
Topics:
• Related E-Lab Host Connectivity Guide
• Recommended Configuration Values Summary
• Boot from SAN
• Fibre Channel Configuration
• HP-UX Host Parameter Settings
• Multipathing Software Configuration
• Post-Configuration Steps - Using the PowerStore System

Related E-Lab Host Connectivity Guide


The topics in this chapter detail specific caveats and configuration parameters that must be present when configuring an
HP-UX host to access PowerStore storage. Apply these caveats and parameters together with the configuration steps
that are detailed in the E-Lab Host Connectivity Guide for HP-UX (see the E-Lab Interoperability Navigator at https://
elabnavigator.dell.com).

Recommended Configuration Values Summary


The following table summarizes all used variables and their values when configuring hosts for HP-UX.

NOTE: Unless indicated otherwise, use the default parameter values.

NOTE: If a volume that was already discovered and configured by a host is presented to that host, then a subsequent
change to the escsi_maxphys parameter does not take effect until a host reboot. Volumes attached after the parameter
change inherit the parameter change automatically and require no further host reboot.

Validation: Unless stated otherwise in this chapter, ensure that the following are set per the default operating system setting:
● LUN and HBA queue depth
● HBA timeout
Impact: Stability & Performance. Severity: Recommended. See Section: For further details, see the operating system and HBA documentation.

Validation: Specify HP-UX as the operating system for each defined host.
Impact: Serviceability. Severity: Mandatory. See Section: Presenting PowerStore Volumes to the HP-UX Host.

Validation: Maximum transfer length - change the escsi_maxphys value to 1 MB (256 increments of 4 KB):
scsimgr save_attr -a escsi_maxphys=256
Impact: Performance. Severity: Mandatory. See Section: Maximum Transfer Length.

Validation: Load balancing - keep the following HP-UX native multipathing parameters at their default values:
● load_bal_policy - set to "round_robin"
● path_fail_secs - set to 120 seconds
Impact: Performance. Severity: Mandatory. See Section: Configuring Native Multipathing Using HP-UX Multipath I/O (MPIO).

Validation: Temporarily disable UNMAP during file system creation (only when using Veritas Volume Manager):
● To temporarily disable UNMAP for the targeted device on the host (before file system creation):
# vxdisk set reclaim=off "disk name"
● To re-enable UNMAP for the targeted device on the host (after file system creation):
# vxdisk reclaim "disk name"
Impact: Performance. Severity: Recommended. See Section: Creating a file system.

Boot from SAN


For guidelines and recommendations for boot from SAN with HP-UX hosts and PowerStore, refer to the Considerations for Boot
from SAN with PowerStore appendix.

Fibre Channel Configuration


This section describes the recommended configuration that should be applied when attaching hosts to a PowerStore cluster using
Fibre Channel.
NOTE: This section applies only to Fibre Channel.

NOTE: PowerStore supports only FC-SW FCP connections. GigE iSCSI and FC direct connections from HP-UX initiators to
PowerStore target ports are not supported.

NOTE: Before you proceed, review Fibre Channel and NVMe over Fibre Channel SAN Guidelines.

Pre-Requisites
This section describes the pre-requisites for FC HBA configuration:
● Refer to the E-Lab Interoperability Navigator (https://elabnavigator.dell.com) for supported FC HBA models and drivers.
● Verify all HBAs are at the supported driver, firmware and BIOS versions.
● For instructions about installing the FC HBA and upgrading the drivers or the firmware, see HP documentation.



HP-UX Host Parameter Settings
This section describes the HP-UX host parameter settings required for optimal configuration when using Dell Technologies
PowerStore storage.
NOTE: PowerStore is supported only with HP-UX version 11i v3. HP-UX versions 11i v2 and 11i v1 are only supported with the
volume set addressing method, which currently is not available with PowerStore.
To configure HP-UX 11i v3 with PowerStore, make sure that the following requirements are met:
● PowerStore supports only native MPIO on HP-UX 11i v3. For further details, refer to the Multipathing Software
Configuration section.
● The maximum request transfer length of the PowerStore array is 2048 blocks (512-byte blocks), for a maximum transfer length of 1 MB.

Maximum Transfer Length


HP-UX 11i v3 implements a default maximum transfer length of 2 MB and does not support VPD page B0h. A tunable
parameter, named escsi_maxphys, enables the modification of the FC fcp maximum transfer length. You can configure
escsi_maxphys as follows:
● Reset to default during host reboot:

scsimgr set_attr -a escsi_maxphys=<value>

● Persistent through reboot

scsimgr save_attr -a escsi_maxphys=<value>

The set value is defined in 4 KB increments. To support the PowerStore devices in HP-UX, change the escsi_maxphys value
to 256 using the following commands:
● 1 MB max transfer length, reset to default 2 MB during host reboot:

scsimgr set_attr -a escsi_maxphys=256

● 1 MB max transfer length persistent through reboot

scsimgr save_attr -a escsi_maxphys=256

NOTE: You can configure the escsi_maxphys attribute only on a global basis, and it applies to all FC fcp block devices
that are connected to the host.
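
To display the escsi_maxphys value that is currently in effect, scsimgr can be queried (shown as a sketch; confirm the exact syntax against the scsimgr(1M) man page for your HP-UX release):

scsimgr get_attr -a escsi_maxphys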

Multipathing Software Configuration


This topic introduces multipathing software configuration for HP-UX.
PowerStore supports only native multipathing using multipath I/O (MPIO) with HP-UX 11i v3.
NOTE: Other non-native multipathing software, such as PowerPath or Veritas Dynamic Multi-Pathing (DMP), is not
supported.

Configuring Native Multipathing Using HP-UX Multipath I/O (MPIO)
This topic describes configuring native multipathing using HP-UX Multipath I/O (MPIO).
For optimal operation with PowerStore storage, configure the following HP-UX native multipathing parameters at their default
values:



● load_bal_policy - I/O load balancing policy: This parameter must be set with the default value of "round-robin" to
set the Round-Robin (RR) policy for MPIO for devices presented from PowerStore. Using this policy, I/O operations are
balanced across all available paths.
● path_fail_secs - Timeout in seconds before declaring a LUN path offline: This parameter must be set with the default
value of 120 seconds.
NOTE: The man page of scsimgr_esdisk(7) provides a list of parameters related to HP-UX native multipathing.
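
To check the effective values on a specific PowerStore disk, the esdisk attributes can be queried per device (a sketch only; /dev/rdisk/disk10 is an example device special file, substitute your own):

scsimgr get_attr -D /dev/rdisk/disk10 -a load_bal_policy
scsimgr get_attr -D /dev/rdisk/disk10 -a path_fail_secs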

Post-Configuration Steps - Using the PowerStore System
This topic describes the post-configuration steps using the PowerStore system.
After the host configuration is completed, you can use the PowerStore storage from the host.
You can create, present, and manage volumes accessed from the host via PowerStore Manager, CLI, or REST API. Refer to the
PowerStore Manager Online Help, CLI Reference Guide, or REST API Reference Guide for additional information.

Presenting PowerStore Volumes to the HP-UX Host


This topic describes presenting PowerStore volumes to the HP-UX host.
When adding host groups and hosts to allow HP-UX hosts to access PowerStore volumes, specify HP-UX as the operating
system for the newly-created hosts.
NOTE: Setting the host's operating system is required for optimal interoperability and stability of the host with PowerStore
storage. You can adjust the setting while the host is online and connected to the PowerStore cluster with no I/O impact.

NOTE: Refer to the PowerStore Configuring Volumes Guide for additional information.

Creating a file system

About this task


File system configuration and management are out of the scope of this document.
NOTE: Some file systems may require you to properly align the file system on the PowerStore volume. It is recommended
to use specified tools to optimally match your host with application requirements.

NOTE: Creating a file system with UNMAP enabled on a host connected to PowerStore may result in an
increased amount of write I/O to the storage subsystem. When possible, it is highly recommended to disable UNMAP
during file system creation. Disabling UNMAP is possible when using the Veritas Volume Manager on the HP-UX host.
However, when using the HP-UX native volume manager, this recommendation is not applicable, as UNMAP is not supported
in this case.
To disable UNMAP during file system creation (only when using Veritas Volume Manager):

Steps
1. Access the HP-UX host using SSH as root.

2. Run the following vxdisk command to temporarily disable UNMAP for the targeted device on the host (before creating the
file system):

# vxdisk set reclaim=off "disk name"



3. Once file system creation is complete, reenable UNMAP for the targeted device on the host, by running the following
command:

# vxdisk reclaim "disk name"

NOTE: To verify the current setting of a specific device using its corresponding disk group, run the following vxprint
command:

# vxprint -z -g "disk group name"

Example: Using the vxprint command to verify the current UNMAP setting of a specific device:

# vxprint -z -g testdg
...
dm testdg02 3pardata0_55 - 2031232 - - - -

sd testdg02-01 - ENABLED 409600 - RECLAIM - -


sd testdg02-02 - ENABLED 67840 - RECLAIM - -



A
Considerations for Boot from SAN with
PowerStore
This appendix provides considerations for configuring boot from SAN with PowerStore.
Topics:
• Considerations for Boot from SAN with PowerStore

Considerations for Boot from SAN with PowerStore


NOTE: See your operating system documentation for general boot from SAN configuration guidelines.

NOTE: The current PowerStore OS release does not support mapping individual LUNs to a host under a host group.

Follow these guidelines when configuring boot from SAN:


● Manually register your HBAs with PowerStore and create a host.
● Create the boot volume.
● Map the created boot volume to the host.
● Use the path that owns the volume (Node A or Node B).
○ The lowest-numbered path to the boot LUN must be the active path.
● It is recommended to assign the boot LUN with Host LUN ID 0. During the installation procedure, it is recommended that
only one LUN be mapped to a host for ease of use.
○ Once the installation is complete, additional LUNs may be mapped to the host.
Follow these guidelines for clustered boot from SAN mapping:
● For every physical host, there should be one host in the PowerStore manager. Do not create a host group.
● Boot LUNs are mapped only for the specific host.
● Shared LUNs are mapped to all hosts in the cluster. The user must keep the same LUN ID across the cluster for shared
LUNs.
When migrating a boot volume, using the PowerStore native migration tool, follow these guidelines:
● Power off the connected host before the migration; after the boot LUN ID is successfully changed, power it back on.
● It is recommended to assign the boot LUN with HOST LUN ID 0.
● When a volume is migrated from one PowerStore appliance to another appliance in the same cluster using the native
migration tool, the LUN ID number changes automatically.
● After migrating a boot from SAN volume, the LUN ID number can be changed back to 0.
● Perform the following steps to change the boot LUN ID:
1. On the PowerStore Manager of the destination appliance, select Storage > Volumes.
2. Click the name of the boot volume and select the Host Mapping tab.
3. Click the checkbox next to the boot volume name and select MORE ACTIONS > Edit Logical Unit Number.
NOTE: Changing the LUN ID number is a disruptive operation. The following message is displayed before changing the
number: Changing the Logical Unit Number of the host will disrupt its access to the volume until a host-side rescan is
performed.



B
Troubleshooting
NOTE: In some of the examples in this appendix, the output text may be wrapped.

Topics:
• View Configured Storage Networks for NVMe/TCP
• View Configured Storage Networks for iSCSI
• View NVMe/FC and SCSI/FC Target Ports
• View Physical Ethernet Ports Status
• View Discovered Initiators
• View Active Sessions

View Configured Storage Networks for NVMe/TCP


Use the following PSTCLI to view all configured storage networks for NVMe/TCP.

PS C:\WINDOWS\system32> pstcli -d 10.55.34.127 -u admin -p <password> ip_port show
-select id,current_usages,ip_pool_addresses.address -query "current_usages contains NVMe_TCP"
# | id | current_usages | ip_pool_addresses.address
----+-----------+----------------------+------------------------------------
1 | IP_PORT1 | ISCSI | 172.16.5.204
| | External_Replication |
| | NVMe_TCP |
2 | IP_PORT12 | ISCSI | 172.16.5.205
| | External_Replication |
| | NVMe_TCP |
3 | IP_PORT16 | ISCSI | 172.28.2.204
| | NVMe_TCP | 172.28.1.204
| | | fd4b:8a14:c03b::201:4413:5d31:d6ff
| | | fd41:3062:7f9a::201:4480:39ac:7db3
| | |
4 | IP_PORT4 | ISCSI | 172.28.2.205
| | NVMe_TCP | 172.28.1.205
| | | fd4b:8a14:c03b::201:445f:9cc4:d91e
| | | fd41:3062:7f9a::201:4463:c850:b827

View Configured Storage Networks for iSCSI


Use the following PSTCLI to view all configured storage networks for iSCSI.

PS C:\WINDOWS\system32> pstcli -d 10.55.34.127 -u admin -p <password> ip_port show
-select id,current_usages,ip_pool_addresses.address -query "current_usages contains ISCSI"
# | id | current_usages | ip_pool_addresses.address
----+-----------+----------------------+------------------------------------
1 | IP_PORT1 | ISCSI | 172.16.5.204
| | External_Replication |
| | NVMe_TCP |
2 | IP_PORT12 | ISCSI | 172.16.5.205
| | External_Replication |
| | NVMe_TCP |
3 | IP_PORT16 | ISCSI | 172.28.2.204
| | NVMe_TCP | 172.28.1.204

| | | fd4b:8a14:c03b::201:4413:5d31:d6ff
| | | fd41:3062:7f9a::201:4480:39ac:7db3
| | |
4 | IP_PORT4 | ISCSI | 172.28.2.205
| | NVMe_TCP | 172.28.1.205
| | | fd4b:8a14:c03b::201:445f:9cc4:d91e
| | | fd41:3062:7f9a::201:4463:c850:b827

View NVMe/FC and SCSI/FC Target Ports


Use the following PSTCLI to view all NVMe/FC and SCSI/FC target ports.

PS C:\WINDOWS\system32> pstcli -d 10.55.34.127 -u admin -p <password> fc_port show
-select name,wwn,wwn_nvme,port_index,current_speed,appliance_id,is_link_up -query "is_link_up is yes"
# | name | wwn | wwn_nvme
| port_index | current_speed | appliance_id | is_link_up
----+---------------------------------------+-------------------------
+-------------------------+------------+---------------+--------------+------------
1 | BaseEnclosure-NodeA-IoModule0-FEPort1 | 58:cc:f0:90:49:21:07:7b
| 58:cc:f0:90:49:29:07:7b | 1 | 32_Gbps | A1 | yes
2 | BaseEnclosure-NodeB-IoModule0-FEPort0 | 58:cc:f0:98:49:20:07:7b
| 58:cc:f0:98:49:28:07:7b | 0 | 32_Gbps | A1 | yes
3 | BaseEnclosure-NodeB-IoModule0-FEPort1 | 58:cc:f0:98:49:21:07:7b
| 58:cc:f0:98:49:29:07:7b | 1 | 32_Gbps | A1 | yes
4 | BaseEnclosure-NodeA-IoModule0-FEPort0 | 58:cc:f0:90:49:20:07:7b
| 58:cc:f0:90:49:28:07:7b | 0 | 32_Gbps | A1 | yes

View Physical Ethernet Ports Status


Use the following PSTCLI to view the physical ethernet ports status.

PS C:\WINDOWS\system32> pstcli -d 10.55.34.127 -u admin -p <> eth_port show -select
appliance_id,name,mac_address,current_speed,current_mtu -q "is_link_up is true"
# | appliance_id | name | mac_address |
current_speed | current_mtu
----+--------------+---------------------------------------------+--------------
+---------------+-------------
1 | A1 |
BaseEnclosure-NodeB-EmbeddedModule-MgmtPort | 006016ac2400 | 1_Gbps | 1500
2 | A1 |
BaseEnclosure-NodeA-IoModule1-FEPort0 | 006016a22ff8 | 10_Gbps | 1500
3 | A1 |
BaseEnclosure-NodeA-IoModule1-FEPort1 | 006016a22ff9 | 10_Gbps | 1500
4 | A1 |
BaseEnclosure-NodeB-4PortCard-FEPort1 | 00e0ec8d8893 | 10_Gbps | 1500
5 | A1 |
BaseEnclosure-NodeA-EmbeddedModule-MgmtPort | 006016ab43ce | 1_Gbps | 1500
6 | A1 |
BaseEnclosure-NodeB-IoModule1-FEPort0 | 006016a0f854 | 10_Gbps | 1500
7 | A1 |
BaseEnclosure-NodeA-4PortCard-FEPort0 | 00e0ec8ac269 | 10_Gbps | 1500
8 | A1 |
BaseEnclosure-NodeB-IoModule1-FEPort1 | 006016a0f855 | 10_Gbps | 1500
9 | A1 |
BaseEnclosure-NodeA-4PortCard-FEPort1 | 00e0ec8ac269 | 10_Gbps | 1500
10 | A1 | BaseEnclosure-
NodeB-4PortCard-FEPort0 | 00e0ec8d8893 | 10_Gbps | 1500

View Discovered Initiators
Use the following PSTCLI to view all discovered initiators, which are not part of any initiator group.

PS C:\WINDOWS\system32> pstcli -d 10.55.34.127 -u admin -p <> discovered_initiator show


# | name | protocol_type
----+---------------------------------------------------------------------+--------------
1 | iqn.1998-01.com.vmware:dell-r640-1.xiolab.lab.emc.com:1362223105:66 | iSCSI

View Active Sessions


Use the following PSTCLI to view all active sessions.
1. View the list of hosts

PS C:\WINDOWS\system32> pstcli -d 10.55.34.127 -u admin -p <> host show


# | id | name | description | os_type |
host_group.name
----+--------------------------------------+---------------+-------------+---------
+-----------------
1 | 1b163a5b-2d69-412c-bd46-061441fe40e3 | ESX-NVMeFC | | ESXi |
3 | dffcdb74-835d-48b8-b04f-402d90e12030 | ESX-FC | | ESXi |

2. View all active sessions for a Fibre Channel host.

PS C:\WINDOWS\system32> pstcli -d 10.55.34.127 -u admin -p <> host show -q "name = ESX-FC" -select host_initiators
# | host_initiators.port_name |
host_initiators.port_type| host_initiators.active_sessions.port_name |
host_initiators.active_sessions.appliance_id
----+---------------------------+--------------------------
+-------------------------------------------
+----------------------------------------------
1 | 10:00:00:90:fa:a0:a2:bf
| FC | 58:cc:f0:90:49:21:07:7b | A1
|
| | 58:cc:f0:98:49:21:07:7b | A1
|
| | |
| 10:00:00:90:fa:a0:a2:be
| FC | 58:cc:f0:90:49:20:07:7b | A1
|
| | 58:cc:f0:98:49:20:07:7b | A1

3. View all active sessions for an NVMe host.

PS C:\WINDOWS\system32> pstcli -d 10.55.34.127 -u admin -p <> host show -q "name = ESX-NVMeFC" -select host_initiators
# | host_initiators.port_name |
host_initiators.port_ty~| host_initiators.active_sessions.port_name |
host_initiators.active_sessions.appli~
----+------------------------------------------+-------------------------
+----------------------------------------------------------
+----------------------------------------
1 |nqn.2014-08.com.emc.lab.xiolab:nvme:dell~
|NVMe |iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-~|A1
|
| |iqn.2015-10.com.dell:dellemc-powerstore-fnm00191800733-~|A1
