vSphere Storage
24 MAY 2018
VMware vSphere 6.7
VMware ESXi 6.7
vCenter Server 6.7
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
If you have comments about this documentation, submit your feedback to
docfeedback@vmware.com
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
Copyright © 2009–2018 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
Updated Information
1 Introduction to Storage
  Traditional Storage Virtualization Models
  Software-Defined Storage Models
  vSphere Storage APIs
About vSphere Storage
vSphere Storage describes virtualized and software-defined storage technologies that VMware ESXi™ and VMware vCenter Server® offer, and explains how to configure and use these technologies.
Intended Audience
This information is for experienced system administrators who are familiar with virtual machine and storage virtualization technologies, data center operations, and SAN storage concepts.
For workflows that differ significantly between the vSphere Client and the vSphere Web Client, vSphere Storage provides duplicate procedures. Each procedure indicates when it is intended exclusively for the vSphere Client or the vSphere Web Client.
Note In vSphere 6.7, most of the vSphere Web Client functionality is implemented in the vSphere Client.
For an up-to-date list of the unsupported functionality, see Functionality Updates for the vSphere Client.
Updated Information
This vSphere Storage documentation is updated with each release of the product or when necessary.
Revision: 24 MAY 2018
- Checking Metadata Consistency with VOMA has been updated.

Revision: 07 MAY 2018
- Use the ESXCLI Command to Change Space Reclamation Parameters has been updated with additional details.
- Space Reclamation Requests from Guest Operating Systems now reflects that for VMs with snapshots, space reclamation works only when the VM is powered on.
- Virtual Volumes now includes a statement about MSCS support with Virtual Volumes.
- Change Datastore Name now states that a datastore managed by vCenter Server cannot be renamed when the host is directly accessed from the VMware Host Client.
- Increase VMFS Datastore Capacity has been updated to clarify that the VMs can continue to run while you increase the datastore capacity.
Chapter 1 Introduction to Storage
vSphere supports various storage options and functionalities in traditional and software-defined storage
environments. A high-level overview of vSphere storage elements and aspects helps you plan a proper
storage strategy for your virtual data center.
In the vSphere environment, a traditional model is built around the following storage technologies and ESXi and vCenter Server virtualization functionalities.
Local and Networked Storage
In traditional storage environments, the ESXi storage management process starts with storage space that your storage administrator preallocates on different storage systems. ESXi supports local storage and networked storage. See Types of Physical Storage.

Storage Area Networks
A storage area network (SAN) is a specialized high-speed network that connects computer systems, or ESXi hosts, to high-performance storage systems. ESXi can use Fibre Channel or iSCSI protocols to connect to storage systems. See Chapter 3 Overview of Using ESXi with a SAN.

Fibre Channel
Fibre Channel (FC) is a storage protocol that the SAN uses to transfer data traffic from ESXi host servers to shared storage. The protocol packages SCSI commands into FC frames. To connect to the FC SAN, your host uses Fibre Channel host bus adapters (HBAs). See Chapter 4 Using ESXi with Fibre Channel SAN.
Internet SCSI (iSCSI)
Internet SCSI (iSCSI) is a SAN transport that can use Ethernet connections between computer systems, or ESXi hosts, and high-performance storage systems. To connect to the storage systems, your hosts use hardware iSCSI adapters or software iSCSI initiators with standard network adapters. See Chapter 10 Using ESXi with iSCSI SAN.
Storage Device or LUN
In the ESXi context, the terms device and LUN are used interchangeably. Typically, both terms mean a storage volume that is presented to the host from a block storage system and is available for formatting. See Target and Device Representations and Chapter 14 Managing Storage Devices.

Virtual Disks
A virtual machine on an ESXi host uses a virtual disk to store its operating system, application files, and other data associated with its activities. Virtual disks are large physical files, or sets of files, that can be copied, moved, archived, and backed up as any other files. You can configure virtual machines with multiple virtual disks.
To access virtual disks, a virtual machine uses virtual SCSI controllers. These virtual controllers include BusLogic Parallel, LSI Logic Parallel, LSI Logic SAS, and VMware Paravirtual. These controllers are the only types of SCSI controllers that a virtual machine can see and access.
Each virtual disk resides on a datastore that is deployed on physical storage. From the standpoint of the virtual machine, each virtual disk appears as if it were a SCSI drive connected to a SCSI controller. Whether the physical storage is accessed through storage or network adapters on the host is typically transparent to the VM guest operating system and applications.
VMware vSphere® VMFS
The datastores that you deploy on block storage devices use the native vSphere Virtual Machine File System (VMFS) format. It is a special high-performance file system format that is optimized for storing virtual machines. See Understanding VMFS Datastores.
NFS
An NFS client built into ESXi uses the Network File System (NFS) protocol over TCP/IP to access an NFS volume that is located on a NAS server. The ESXi host can mount the volume and use it as an NFS datastore. See Understanding Network File System Datastores.

Raw Device Mapping
In addition to virtual disks, vSphere offers a mechanism called raw device mapping (RDM). RDM is useful when a guest operating system inside a virtual machine requires direct access to a storage device. For information about RDMs, see Chapter 19 Raw Device Mapping.
With the software-defined storage model, a virtual machine becomes a unit of storage provisioning and
can be managed through a flexible policy-based mechanism. The model involves the following vSphere
technologies.
Storage Policy Based Management
Storage Policy Based Management (SPBM) is a framework that provides a single control panel across various data services and storage solutions, including vSAN and Virtual Volumes. Using storage policies, the framework aligns application demands of your virtual machines with capabilities provided by storage entities. See Chapter 20 Storage Policy Based Management.

VMware vSphere® Virtual Volumes™
The Virtual Volumes functionality changes the storage management paradigm from managing space inside datastores to managing abstract storage objects handled by storage arrays. With Virtual Volumes, an individual virtual machine, not the datastore, becomes a unit of storage management. And storage hardware gains complete control over virtual disk content, layout, and management. See Chapter 22 Working with Virtual Volumes.

VMware vSAN
vSAN is a distributed layer of software that runs natively as a part of the hypervisor. vSAN aggregates local or direct-attached capacity devices of an ESXi host cluster and creates a single storage pool shared across all hosts in the vSAN cluster. See Administering VMware vSAN.

I/O Filters
I/O filters are software components that can be installed on ESXi hosts and can offer additional data services to virtual machines. Depending on implementation, the services might include replication, encryption, caching, and so on. See Chapter 23 Filtering Virtual Machine I/O.
This Storage publication describes several Storage APIs that contribute to your storage environment. For
information about other APIs from this family, including vSphere APIs - Data Protection, see the VMware
website.
- vSphere APIs for Storage Awareness (VASA). VASA becomes essential when you work with Virtual Volumes, vSAN, vSphere APIs for I/O Filtering (VAIO), and storage VM policies. See Chapter 21 Using Storage Providers.
- Hardware Acceleration APIs. Help arrays to integrate with vSphere, so that vSphere can offload certain storage operations to the array. This integration significantly reduces CPU overhead on the host. See Chapter 24 Storage Hardware Acceleration.
- Array Thin Provisioning APIs. Help to monitor space use on thin-provisioned storage arrays to prevent out-of-space conditions, and to perform space reclamation. See ESXi and Array Thin Provisioning.
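If you work from the ESXi Shell, one way to check whether a device supports these APIs is to query its hardware acceleration (VAAI) status. This is a minimal sketch; the device identifier naa.xxxxxxxxxxxxxxxx is a placeholder for an actual ID reported by your host.

    # List block devices; the output includes a VAAI Status field per device
    esxcli storage core device list

    # Show which VAAI primitives (ATS, Clone, Zero, Delete) one device reports
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx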
Chapter 2 Getting Started with a Traditional Storage Model
Setting up your ESXi storage in traditional environments includes configuring your storage systems and devices, enabling storage adapters, and creating datastores.
Local Storage
Local storage can be internal hard disks located inside your ESXi host. It can also include external
storage systems located outside and connected to the host directly through protocols such as SAS or
SATA.
Local storage does not require a storage network to communicate with your host. You need a cable
connected to the storage unit and, when required, a compatible HBA in your host.
The following illustration depicts a virtual machine using local SCSI storage.
[Figure: Local Storage. An ESXi host uses a single connection to a local SCSI device that holds a VMFS datastore with a virtual disk (vmdk).]
In this example of a local storage topology, the ESXi host uses a single connection to a storage device.
On that device, you can create a VMFS datastore, which you use to store virtual machine disk files.
Although this storage configuration is possible, it is not a best practice. Using single connections between
storage devices and hosts creates single points of failure (SPOF) that can cause interruptions when a
connection becomes unreliable or fails. However, because most local storage devices do not support
multiple connections, you cannot use multiple paths to access local storage.
ESXi supports various local storage devices, including SCSI, IDE, SATA, USB, SAS, flash, and NVMe
devices.
Note You cannot use IDE/ATA or USB drives to store virtual machines.
Local storage does not support sharing across multiple hosts. Only one host has access to a datastore on
a local storage device. As a result, although you can use local storage to create VMs, you cannot use
VMware features that require shared storage, such as HA and vMotion.
However, if you use a cluster of hosts that have just local storage devices, you can implement vSAN.
vSAN transforms local storage resources into software-defined shared storage. With vSAN, you can use
features that require shared storage. For details, see the Administering VMware vSAN documentation.
Networked Storage
Networked storage consists of external storage systems that your ESXi host uses to store virtual machine
files remotely. Typically, the host accesses these systems over a high-speed storage network.
Networked storage devices are shared. Datastores on networked storage devices can be accessed by
multiple hosts concurrently. ESXi supports multiple networked storage technologies.
In addition to traditional networked storage that this topic covers, VMware supports virtualized shared
storage, such as vSAN. vSAN transforms internal storage resources of your ESXi hosts into shared
storage that provides such capabilities as High Availability and vMotion for virtual machines. For details,
see the Administering VMware vSAN documentation.
Note The same LUN cannot be presented to an ESXi host or multiple hosts through different storage
protocols. To access the LUN, hosts must always use a single protocol, for example, either Fibre Channel
only or iSCSI only.
To connect to the FC SAN, your host should be equipped with Fibre Channel host bus adapters (HBAs).
Unless you use Fibre Channel direct connect storage, you need Fibre Channel switches to route storage
traffic. If your host contains FCoE (Fibre Channel over Ethernet) adapters, you can connect to your
shared Fibre Channel devices by using an Ethernet network.
Fibre Channel Storage depicts virtual machines using Fibre Channel storage.
[Figure: Fibre Channel Storage. An ESXi host with a Fibre Channel HBA connects through the SAN to a Fibre Channel array that holds a VMFS datastore with a virtual disk (vmdk).]
In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches and
storage arrays, using a Fibre Channel adapter. LUNs from a storage array become available to the host.
You can access the LUNs and create datastores for your storage needs. The datastores use the VMFS
format.
For specific information on setting up the Fibre Channel SAN, see Chapter 4 Using ESXi with Fibre
Channel SAN.
Hardware iSCSI
Your host connects to storage through a third-party adapter capable of offloading the iSCSI and network processing. Hardware adapters can be dependent and independent.

Software iSCSI
Your host uses a software-based iSCSI initiator in the VMkernel to connect to storage. With this type of iSCSI connection, your host needs only a standard network adapter for network connectivity.
You must configure iSCSI initiators for the host to access and display iSCSI storage devices.
[Figure: iSCSI Storage. On the left, an ESXi host uses a hardware iSCSI HBA; on the right, a host uses a software iSCSI adapter with an Ethernet NIC. Both connect over the LAN to iSCSI storage that holds VMFS datastores.]
In the left example, the host uses the hardware iSCSI adapter to connect to the iSCSI storage system.
In the right example, the host uses a software iSCSI adapter and an Ethernet NIC to connect to the iSCSI
storage.
iSCSI storage devices from the storage system become available to the host. You can access the storage
devices and create VMFS datastores for your storage needs.
For specific information on setting up the iSCSI SAN, see Chapter 10 Using ESXi with iSCSI SAN.
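As a rough sketch of that initiator configuration from the ESXi Shell, assuming the software iSCSI initiator is used; the adapter name vmhba65 and the target address 10.0.0.10 are placeholders for values on your host:

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # Confirm that the initiator appears as an iSCSI adapter (for example, vmhba65)
    esxcli iscsi adapter list

    # Add a dynamic (Send Targets) discovery address for the storage system
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=10.0.0.10:3260

    # Rescan the adapter so that discovered devices and datastores appear
    esxcli storage core adapter rescan --adapter=vmhba65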
You can mount an NFS volume directly on the ESXi host. You then use the NFS datastore to store and
manage virtual machines in the same way that you use the VMFS datastores.
NFS Storage depicts a virtual machine using the NFS datastore to store its files. In this configuration, the
host connects to the NAS server, which stores the virtual disk files, through a regular network adapter.
[Figure: NFS Storage. An ESXi host with an Ethernet NIC connects over the LAN to a NAS appliance that provides an NFS datastore with a virtual disk (vmdk).]
For specific information on setting up NFS storage, see Understanding Network File System Datastores.
Different storage vendors present the storage systems to ESXi hosts in different ways. Some vendors
present a single target with multiple storage devices or LUNs on it, while others present multiple targets
with one LUN each.
In this illustration, three LUNs are available in each configuration. In one case, the host connects to one
target, but that target has three LUNs that can be used. Each LUN represents an individual storage
volume. In the other example, the host detects three different targets, each having one LUN.
Targets that are accessed through the network have unique names that are provided by the storage
systems. The iSCSI targets use iSCSI names. Fibre Channel targets use World Wide Names (WWNs).
Note ESXi does not support accessing the same LUN through different transport protocols, such as
iSCSI and Fibre Channel.
A device, or LUN, is identified by its UUID name. If a LUN is shared by multiple hosts, it must be
presented to all hosts with the same UUID.
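For example, from the ESXi Shell you can list the devices and their UUID-based names to verify that a shared LUN reports the same identifier on every host. The identifier naa.xxxxxxxxxxxxxxxx below is a placeholder.

    # List all storage devices; each entry shows the UUID-based device name
    # (for example, naa.6000...) and the display name
    esxcli storage core device list

    # Show details for a single device, identified by its UUID name
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx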
ESXi supports Fibre Channel (FC), Internet SCSI (iSCSI), Fibre Channel over Ethernet (FCoE), and NFS
protocols. Regardless of the type of storage device your host uses, the virtual disk always appears to the
virtual machine as a mounted SCSI device. The virtual disk hides a physical storage layer from the virtual
machine’s operating system. This allows you to run operating systems that are not certified for specific
storage equipment, such as SAN, inside the virtual machine.
The following graphic depicts five virtual machines using different types of storage to illustrate the
differences between each type.
[Figure: Virtual machines accessing different types of storage. The depicted hosts use a local SCSI device with VMFS, a Fibre Channel HBA, a hardware iSCSI HBA, a software iSCSI adapter with an Ethernet NIC, and an Ethernet NIC for networked storage.]
Note This diagram is for conceptual purposes only. It is not a recommended configuration.
After the devices get registered with your host, you can display all available local and networked devices
and review their information. If you use third-party multipathing plug-ins, the storage devices available
through the plug-ins also appear on the list.
Note If an array supports implicit asymmetric logical unit access (ALUA) and has only standby paths, the
registration of the device fails. The device can register with the host after the target activates a standby
path and the host detects it as active. The advanced system /Disk/FailDiskRegistration parameter
controls this behavior of the host.
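A minimal, read-only way to inspect that parameter from the ESXi Shell is shown below; consult the VMware documentation for the values supported in your release before changing it.

    # Display the current value of the /Disk/FailDiskRegistration advanced option
    esxcli system settings advanced list -o /Disk/FailDiskRegistration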
For each storage adapter, you can display a separate list of storage devices available for this adapter.
Generally, when you review storage devices, you see the following information.
Name
Also called Display Name. It is a name that the ESXi host assigns to the device based on the storage type and manufacturer. You can change this name to a name of your choice.

Operational State
Indicates whether the device is attached or detached. For details, see Detach Storage Devices.
LUN
Logical Unit Number (LUN) within the SCSI target. The LUN number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0).

Drive Type
Information about whether the device is a flash drive or a regular HDD drive. For information about flash drives and NVMe devices, see Chapter 15 Working with Flash Devices.

Transport
Transportation protocol your host uses to access the device. The protocol depends on the type of storage being used. See Types of Physical Storage.

Owner
The plug-in, such as the NMP or a third-party plug-in, that the host uses to manage paths to the storage device. For details, see Pluggable Storage Architecture and Path Management.

Hardware Acceleration
Information about whether the storage device assists the host with virtual machine management operations. The status can be Supported, Not Supported, or Unknown. For details, see Chapter 24 Storage Hardware Acceleration.

Sector Format
Indicates whether the device uses a traditional, 512n, or advanced sector format, such as 512e or 4Kn. For more information, see Device Sector Formats.

Partition Format
A partition scheme used by the storage device. It can be of a master boot record (MBR) or GUID partition table (GPT) format. The GPT devices can support datastores greater than 2 TB. For more information, see Device Sector Formats.

Multipathing Policies
Path Selection Policy and Storage Array Type Policy the host uses to manage paths to storage. For more information, see Chapter 18 Understanding Multipathing and Failover.
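The Owner and Multipathing Policies values can also be read from the ESXi Shell. A minimal sketch, assuming the device is claimed by the VMware Native Multipathing Plug-in (NMP); naa.xxxxxxxxxxxxxxxx is a placeholder:

    # Show the owning plug-in details, Storage Array Type Plug-in (SATP), and
    # Path Selection Policy (PSP) for NMP-claimed devices
    esxcli storage nmp device list

    # Limit the output to a single device
    esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx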
The Storage Devices view allows you to list the host's storage devices, analyze their information, and modify properties.
Procedure
1 Navigate to the host.
2 Click the Configure tab.
3 Under Storage, click Storage Devices.
All storage devices available to the host are listed in the Storage Devices table.
4 To view details for a specific device, select the device from the list.
5 Use the icons to perform storage management tasks.
- Refresh. Refresh information about storage adapters, topology, and file systems.
- Rescan. Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores.
- Turn On LED. Turn on the locator LED for the selected devices.
- Turn Off LED. Turn off the locator LED for the selected devices.
- Mark as Local. Mark the selected devices as local for the host.
- Mark as Remote. Mark the selected devices as remote for the host.
6 Use tabs under Device Details to access additional information and modify properties for the selected device.
- Properties. View device properties and characteristics. View and modify multipathing policies for the device.
- Paths. Display paths available for the device. Disable or enable a selected path.
Procedure
1 Navigate to the host.
2 Click the Configure tab.
3 Under Storage, click Storage Adapters.
All storage adapters installed on the host are listed in the Storage Adapters table.
4 Select the adapter from the list and click the Devices tab.
Storage devices that the host can access through the adapter are displayed.
5 Use the icons to perform storage management tasks.
- Refresh. Refresh information about storage adapters, topology, and file systems.
- Rescan. Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores.
- Turn On LED. Turn on the locator LED for the selected devices.
- Turn Off LED. Turn off the locator LED for the selected devices.
- Mark as Local. Mark the selected devices as local for the host.
- Mark as Remote. Mark the selected devices as remote for the host.
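Comparable adapter and device information is available from the ESXi Shell. This is a sketch, not a replacement for the client workflow:

    # List all storage adapters (vmhba names, drivers, and link state)
    esxcli storage core adapter list

    # List all paths; each path entry names its adapter, target, and device,
    # which ties devices back to a specific adapter
    esxcli storage core path list

    # Rescan every adapter for newly added devices and VMFS datastores
    esxcli storage core adapter rescan --all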
The following table compares networked storage technologies that ESXi supports.
- Fibre Channel over Ethernet (FCoE/SCSI). Block access of data/LUN. Interface: Converged Network Adapter (hardware FCoE) or NIC with FCoE support (software FCoE).
- iSCSI (IP/SCSI). Block access of data/LUN. Interface: iSCSI HBA or iSCSI-enabled NIC (hardware iSCSI), or network adapter (software iSCSI).
The following table compares the vSphere features that different types of storage support.
Storage Type: NAS over NFS. Boot VM: Yes. vMotion: Yes. Datastore: NFS 3 and NFS 4.1. RDM: No. VM Cluster: No. VMware HA and DRS: Yes. Storage APIs - Data Protection: Yes.
Note Local storage supports a cluster of virtual machines on a single host (also known as a cluster in a
box). A shared virtual disk is required. For more information about this configuration, see the vSphere
Resource Management documentation.
ESXi supports different classes of adapters, including SCSI, iSCSI, RAID, Fibre Channel, Fibre Channel
over Ethernet (FCoE), and Ethernet. ESXi accesses the adapters directly through device drivers in the
VMkernel.
Depending on the type of storage you use, you might need to enable and configure a storage adapter on
your host.
For information on setting up software FCoE adapters, see Chapter 6 Configuring Fibre Channel over
Ethernet.
For information on configuring different types of iSCSI adapters, see Chapter 11 Configuring iSCSI
Adapters and Storage.
Prerequisites
You must enable certain adapters, for example software iSCSI or FCoE, before you can view their information. To configure adapters, see Chapter 6 Configuring Fibre Channel over Ethernet and Chapter 11 Configuring iSCSI Adapters and Storage.
Procedure
1 Navigate to the host.
2 Click the Configure tab.
3 Under Storage, click Storage Adapters.
4 Use the icons to perform storage adapter tasks.
- Add Software Adapter. Add a storage adapter. Applies to software iSCSI and software FCoE.
- Refresh. Refresh information about storage adapters, topology, and file systems on the host.
- Rescan Storage. Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores.
- Rescan Adapter. Rescan the selected adapter to discover newly added storage devices.
5 To view details for a specific adapter, select the adapter from the list.
6 Use tabs under Adapter Details to access additional information and modify properties for the selected adapter.
- Properties. Review general adapter properties that typically include a name and model of the adapter and unique identifiers formed according to specific storage standards. For iSCSI and FCoE adapters, use this tab to configure additional properties, for example, authentication.
- Devices. View storage devices the adapter can access. Use the tab to perform basic device management tasks. See Display Storage Devices for an Adapter.
- Paths. List and manage all paths the adapter uses to access storage devices.
- Targets (Fibre Channel and iSCSI). Review and manage targets accessed through the adapter.
- Network Port Binding (iSCSI only). Configure port binding for software and dependent hardware iSCSI adapters.
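For software and dependent hardware iSCSI adapters, the bindings that the Network Port Binding tab manages can also be listed and changed from the ESXi Shell. A sketch with placeholder names (vmhba65 for the adapter, vmk1 for the VMkernel adapter):

    # List the VMkernel adapters currently bound to the iSCSI adapter
    esxcli iscsi networkportal list --adapter=vmhba65

    # Bind an additional VMkernel adapter to the iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1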
Datastore Characteristics
Datastores are logical containers, analogous to file systems, that hide specifics of each storage device
and provide a uniform model for storing virtual machine files. You can display all datastores available to
your hosts and analyze their properties.
- You can create a VMFS datastore, an NFS version 3 or 4.1 datastore, or a Virtual Volumes datastore using the New Datastore wizard. A vSAN datastore is automatically created when you enable vSAN.
- When you add an ESXi host to vCenter Server, all datastores on the host are added to vCenter Server.
The following table describes datastore details that you can see when you review datastores through the vSphere Client. Certain characteristics might not be available or applicable to all types of datastores.
Type (VMFS, NFS, vSAN, Virtual Volumes)
File system that the datastore uses. For information about VMFS and NFS datastores and how to manage them, see Chapter 17 Working with Datastores. For information about vSAN datastores, see the Administering VMware vSAN documentation. For information about Virtual Volumes, see Chapter 22 Working with Virtual Volumes.

Protocol Endpoints (Virtual Volumes)
Information about corresponding protocol endpoints. See Protocol Endpoints.

Extents (VMFS)
Individual extents that the datastore spans and their capacity.

Drive Type (VMFS)
Type of the underlying storage device, such as a flash drive or a regular HDD drive. For details, see Chapter 15 Working with Flash Devices.

Capacity (VMFS, NFS, vSAN, Virtual Volumes)
Includes total capacity, provisioned space, and free space.

Capability Sets (VMFS, NFS, vSAN, Virtual Volumes)
Information about storage data services that the underlying storage entity provides. You cannot modify them.
Note A multi-extent VMFS datastore assumes capabilities of only one of its extents.
Tags (VMFS, NFS, vSAN, Virtual Volumes)
Datastore capabilities that you define and associate with datastores in a form of tags. For information, see Assign Tags to Datastores.

Multipathing (VMFS, Virtual Volumes)
Path selection policy the host uses to access storage. For more information, see Chapter 18 Understanding Multipathing and Failover.
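Some of these characteristics, such as the file system type, capacity, and the extents of a VMFS datastore, can also be read from the ESXi Shell:

    # List mounted file systems (VMFS, NFS, and others) with type, capacity, and free space
    esxcli storage filesystem list

    # List the device extents that back each VMFS datastore
    esxcli storage vmfs extent list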
Use the Datastores view to list all datastores available in the vSphere infrastructure inventory, analyze the
information, and modify properties.
Procedure
1 Navigate to any inventory object that is a valid parent object of a datastore, such as a host, a cluster,
or a data center, and click the Datastores tab.
Datastores that are available in the inventory appear in the center panel.
2 Use the icons from a datastore right-click menu to perform basic tasks for a selected datastore.
Availability of specific icons depends on the type of the datastore and its configuration.
- Remove a datastore.
3 Use the tabs to access additional information about the datastore.
- Monitor. View alarms, performance data, resource allocation, events, and other status information for the datastore.
- Configure. View and modify datastore properties. Menu items that you can see depend on the datastore type.
Virtual machines that require high bandwidth, low latency, and persistence can benefit from persistent memory (PMem) technology. Examples include VMs with acceleration databases and analytics workloads.
To use persistent memory with your ESXi host, you must be familiar with the following concepts.
PMem Datastore
After you add persistent memory to your ESXi host, the host detects the hardware, and then formats and mounts it as a local PMem datastore. ESXi uses VMFS-L as a file system format. Only one local PMem datastore per host is supported.
PMem Access Modes
ESXi exposes persistent memory to a VM in two different modes. PMem-aware VMs can have direct access to persistent memory. Traditional VMs can use fast virtual disks stored on the PMem datastore.

Direct-Access Mode
In this mode, a PMem region can be presented to a VM as a virtual non-volatile dual in-line memory module (NVDIMM) module. The VM uses the NVDIMM module as a standard byte-addressable memory that can persist across power cycles.
You can add one or several NVDIMM modules when provisioning the VM. The VMs must be of the hardware version ESXi 6.7 and have a PMem-aware guest OS. The NVDIMM device is compatible with latest guest OSes that support persistent memory, for example, Windows 2016.
Each NVDIMM device is automatically stored on the PMem datastore.

Virtual Disk Mode
This mode is available to any traditional VM and supports any hardware version, including all legacy versions. VMs are not required to be PMem-aware. When you use this mode, you create a regular SCSI virtual disk and attach a PMem VM storage policy to the disk. The policy automatically places the disk on the PMem datastore.

PMem Storage Policy
To place the virtual disk on the PMem datastore, you must apply the host-local PMem default storage policy to the disk. The policy is not editable. The policy can be applied only to virtual disks. Because the VM home directory does not reside on the PMem datastore, make sure to place it on any standard datastore.
After you assign the PMem storage policy to the virtual disk, you cannot change the policy through the VM Edit Setting dialog box. To change the policy, migrate or clone the VM.
The following graphic illustrates how the persistent memory components interact.
[Figure: Persistent memory components. A PMem-aware VM accesses the PMem datastore directly, while a traditional VM uses a virtual disk that the PMem storage policy places on the PMem datastore. The PMem datastore resides on the host's persistent memory.]
For information about how to configure and manage VMs with NVDIMMs or virtual persistent memory
disks, see the vSphere Resource Management documentation.
However, unlike regular datastores, such as VMFS or VVols, the PMem datastore does not appear in the
Datastores view of the vSphere Client. Regular datastore administrative tasks do not apply to it.
Procedure
- esxcli command. Use the esxcli storage filesystem list command to list the PMem datastore.
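For example, run from the ESXi Shell; the grep filter assumes that the PMem datastore name or type contains the string pmem, so adjust the pattern for your host:

    # List all mounted file systems; the PMem datastore appears alongside
    # VMFS and NFS volumes
    esxcli storage filesystem list

    # Narrow the output to the PMem datastore
    esxcli storage filesystem list | grep -i pmem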
Chapter 3 Overview of Using ESXi with a SAN
Using ESXi with a SAN improves flexibility, efficiency, and reliability. Using ESXi with a SAN also supports
centralized management, failover, and load balancing technologies.
- You can store data securely and configure multiple paths to your storage, eliminating a single point of failure.
- Using a SAN with ESXi systems extends failure resistance to the server. When you use SAN storage, all applications can instantly be restarted on another host after the failure of the original host.
- You can perform live migration of virtual machines using VMware vMotion.
- Use VMware High Availability (HA) in conjunction with a SAN to restart virtual machines in their last known state on a different server if their host fails.
- Use VMware Fault Tolerance (FT) to replicate protected virtual machines on two different hosts. Virtual machines continue to function without interruption on the secondary host if the primary one fails.
- Use VMware Distributed Resource Scheduler (DRS) to migrate virtual machines from one host to another for load balancing. Because storage is on a shared SAN array, applications continue running seamlessly.
- If you use VMware DRS clusters, put an ESXi host into maintenance mode to have the system migrate all running virtual machines to other ESXi hosts. You can then perform upgrades or other maintenance operations on the original host.
The portability and encapsulation of VMware virtual machines complement the shared nature of this
storage. When virtual machines are located on SAN-based storage, you can quickly shut down a virtual
machine on one server and power it up on another server, or suspend it on one server and resume
operation on another server on the same network. This ability allows you to migrate computing resources
while maintaining consistent shared access.
Storage consolidation and simplification of storage layout
If you are working with multiple hosts, and each host is running multiple virtual machines, the storage on the hosts is no longer sufficient. You might need to use external storage. The SAN can provide a simple system architecture and other benefits.

Maintenance with zero downtime
When performing ESXi host or infrastructure maintenance, use vMotion to migrate virtual machines to other hosts. If shared storage is on the SAN, you can perform maintenance without interruptions to the users of the virtual machines. Virtual machine working processes continue throughout a migration.

Load balancing
You can add a host to a DRS cluster, and the host's resources become part of the cluster's resources. The distribution and use of CPU and memory resources for all hosts and virtual machines in the cluster are continuously monitored. DRS compares these metrics to an ideal resource use. The ideal use considers the attributes of the cluster's resource pools and virtual machines, the current demand, and the imbalance target. If needed, DRS performs or recommends virtual machine migrations.

Disaster recovery
You can use VMware High Availability to configure multiple ESXi hosts as a cluster. The cluster provides rapid recovery from outages and cost-effective high availability for applications running in virtual machines.

Simplified array migrations and storage upgrades
When you purchase new storage systems, use Storage vMotion to perform live migrations of virtual machines from existing storage to their new destinations. You can perform the migrations without interruptions of the virtual machines.
When you use SAN storage with ESXi, the following considerations apply:
- You cannot use SAN administration tools to access operating systems of virtual machines that reside on the storage. With traditional tools, you can monitor only the VMware ESXi operating system. You use the vSphere Client to monitor virtual machines.
- The HBA visible to the SAN administration tools is part of the ESXi system, not part of the virtual machine.
When you use multiple arrays from different vendors, the following considerations apply:
- If your host uses the same SATP for multiple arrays, be careful when you change the default PSP for that SATP. The change applies to all arrays. For information on SATPs and PSPs, see Chapter 18 Understanding Multipathing and Failover.
- Some storage arrays make recommendations on queue depth and other settings. Typically, these settings are configured globally at the ESXi host level. Changing settings for one array impacts other arrays that present LUNs to the host. For information on changing queue depth, see the VMware knowledge base article at http://kb.vmware.com/kb/1267. A read-only check of the current per-device values is sketched after this list.
- Use single-initiator-single-target zoning when zoning ESXi hosts to Fibre Channel arrays. With this type of configuration, fabric-related events that occur on one array do not impact other arrays. For more information about zoning, see Using Zoning with Fibre Channel SANs.
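The following read-only check, run from the ESXi Shell, shows the queue-related values currently in effect for one device before you change anything globally; naa.xxxxxxxxxxxxxxxx is a placeholder, and the actual tuning parameters are driver specific and covered in the referenced knowledge base article.

    # Inspect one device; the output includes Device Max Queue Depth and
    # No of outstanding IOs with competing worlds
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx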
When you make your LUN decision, the following considerations apply:
- Each LUN must have the correct RAID level and storage characteristic for the applications running in virtual machines that use the LUN.
- If multiple virtual machines access the same VMFS, use disk shares to prioritize virtual machines.
You might want fewer, larger LUNs for the following reasons:
- More flexibility to create virtual machines without asking the storage administrator for more space.
- More flexibility for resizing virtual disks, doing snapshots, and so on.
You might want more, smaller LUNs for the following reasons:
- More flexibility, as the multipathing policy and disk shares are set per LUN.
- Use of Microsoft Cluster Service requires that each cluster disk resource is in its own LUN.
When the storage characterization for a virtual machine is unavailable, it might not be easy to determine
the number and size of LUNs to provision. You can experiment using either a predictive or adaptive
scheme.
Procedure
1 Provision several LUNs with different storage characteristics.
2 Create a VMFS datastore on each LUN, labeling each datastore according to its characteristics.
3 Create virtual disks to contain the data for virtual machine applications in the VMFS datastores
created on LUNs with the appropriate RAID level for the applications' requirements.
Note Disk shares are relevant only within a given host. The shares assigned to virtual machines on
one host have no effect on virtual machines on other hosts.
Procedure
1 Provision a large LUN (RAID 1+0 or RAID 5), with write caching enabled.
If performance is acceptable, you can place additional virtual disks on the VMFS. If performance is not
acceptable, create a new, large LUN, possibly with a different RAID level, and repeat the process. Use
migration so that you do not lose virtual machine data when you recreate the LUN.
- High Tier. Offers high performance and high availability. Might offer built-in snapshots to facilitate backups and point-in-time (PiT) restorations. Supports replication, full storage processor redundancy, and SAS drives. Uses high-cost spindles.
- Mid Tier. Offers mid-range performance, lower availability, some storage processor redundancy, and SCSI or SAS drives. Might offer snapshots. Uses medium-cost spindles.
- Lower Tier. Offers low performance, little internal storage redundancy. Uses low-end SCSI drives or SATA.
Not all VMs must be on the highest-performance and most-available storage throughout their entire life
cycle.
When you decide where to place a virtual machine, the following considerations apply:
- Criticality of the VM
A virtual machine might change tiers throughout its life cycle because of changes in criticality or changes
in technology. Criticality is relative and might change for various reasons, including changes in the
organization, operational processes, regulatory requirements, disaster planning, and so on.
Most SAN hardware is packaged with storage management software. In many cases, this software is a
Web application that can be used with any Web browser connected to your network. In other cases, this
software typically runs on the storage system or on a single server, independent of the servers that use
the SAN for storage.
- Storage array management, including LUN creation, array cache management, LUN mapping, and LUN security.
If you run the SAN management software on a virtual machine, you gain the benefits of a virtual machine,
including failover with vMotion and VMware HA. Because of the additional level of indirection, however,
the management software might not see the SAN. In this case, you can use an RDM.
Note Whether a virtual machine can run management software successfully depends on the particular
storage system.
- Identification of critical applications that require more frequent backup cycles within a given period.
- Recovery point and recovery time goals. Consider how precise your recovery point must be, and how long you are willing to wait for it.
- The rate of change (RoC) associated with the data. For example, if you are using synchronous/asynchronous replication, the RoC affects the amount of bandwidth required between the primary and secondary storage devices.
- Identification of peak traffic periods on the SAN. Backups scheduled during those peak periods can slow the applications and the backup process.
Include a recovery-time objective for each application when you design your backup strategy. That is,
consider the time and resources necessary to perform a backup. For example, if a scheduled backup
stores so much data that recovery requires a considerable amount of time, examine the scheduled
backup. Perform the backup more frequently, so that less data is backed up at a time and the recovery
time decreases.
If an application requires recovery within a certain time frame, the backup process must provide a time
schedule and specific data processing to meet the requirement. Fast recovery can require the use of
recovery volumes that reside on online storage. This process helps to minimize or eliminate the need to
access slow offline media for missing data components.
The Storage APIs - Data Protection that VMware offers can work with third-party products. When using
the APIs, third-party software can perform backups without loading ESXi hosts with the processing of
backup tasks.
The third-party products using the Storage APIs - Data Protection can perform the following backup tasks:
- Perform a full, differential, and incremental image backup and restore of virtual machines.
- Perform a file-level backup of virtual machines that use supported Windows and Linux operating systems.
- Ensure data consistency by using Microsoft Volume Shadow Copy Services (VSS) for virtual machines that run supported Microsoft Windows operating systems.
Because the Storage APIs - Data Protection use the snapshot capabilities of VMFS, backups do not
require that you stop virtual machines. These backups are nondisruptive, can be performed at any time,
and do not need extended backup windows.
For information about the Storage APIs - Data Protection and integration with backup products, see the
VMware website or contact your vendor.
Chapter 4 Using ESXi with Fibre Channel SAN
When you set up ESXi hosts to use FC SAN storage arrays, special considerations are necessary. This
section provides introductory information about how to use ESXi with an FC SAN array.
If you are new to SAN technology, familiarize yourself with the basic terminology.
A storage area network (SAN) is a specialized high-speed network that connects host servers to high-
performance storage subsystems. The SAN components include host bus adapters (HBAs) in the host
servers, switches that help route storage traffic, cables, storage processors (SPs), and storage disk
arrays.
A SAN topology with at least one switch present on the network forms a SAN fabric.
To transfer traffic from host servers to shared storage, the SAN uses the Fibre Channel (FC) protocol that
packages SCSI commands into Fibre Channel frames.
To restrict server access to storage arrays not allocated to that server, the SAN uses zoning. Typically,
zones are created for each group of servers that access a shared group of storage devices and LUNs.
Zones define which HBAs can connect to which SPs. Devices outside a zone are not visible to the
devices inside the zone.
Zoning is similar to LUN masking, which is commonly used for permission management. LUN masking is
a process that makes a LUN available to some hosts and unavailable to other hosts.
When transferring data between the host server and storage, the SAN uses a technique known as
multipathing. Multipathing allows you to have more than one physical path from the ESXi host to a LUN
on a storage system.
Generally, a single path from a host to a LUN consists of an HBA, switch ports, connecting cables, and
the storage controller port. If any component of the path fails, the host selects another available path for
I/O. The process of detecting a failed path and switching to another is called path failover.
WWPN (World Wide Port Name)
A globally unique identifier for a port that allows certain applications to access the port. The FC switches discover the WWPN of a device or host and assign a port address to the device.

Port_ID (or port address)
Within a SAN, each port has a unique port ID that serves as the FC address for the port. This unique ID enables routing of data through the SAN to that port. The FC switches assign the port ID when the device logs in to the fabric. The port ID is valid only while the device is logged on.
When N-Port ID Virtualization (NPIV) is used, a single FC HBA port (N-port) can register with the fabric by
using several WWPNs. This method allows an N-port to claim multiple fabric addresses, each of which
appears as a unique entity. When ESXi hosts use a SAN, these multiple, unique identifiers allow the
assignment of WWNs to individual virtual machines as part of their configuration.
The types of storage that your host supports include active-active, active-passive, and ALUA-compliant.
Active-active storage system
Supports access to the LUNs simultaneously through all the storage ports that are available without significant performance degradation. All the paths are active, unless a path fails.

Active-passive storage system
A system in which one storage processor is actively providing access to a given LUN. The other processors act as a backup for the LUN and can be actively providing access to other LUN I/O. I/O can be successfully sent only to an active port for a given LUN. If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.

Asymmetrical storage system
Supports Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide different levels of access per port. With ALUA, the host can determine the states of target ports and prioritize paths. The host uses some of the active paths as primary, and uses others as secondary.
- Can prevent non-ESXi systems from accessing a particular storage system, and from possibly destroying VMFS data.
- Can be used to separate different environments, for example, a test from a production environment.
With ESXi hosts, use single-initiator zoning or single-initiator-single-target zoning. The latter is a preferred zoning practice. Using the more restrictive zoning prevents problems and misconfigurations that can occur on the SAN.
For detailed instructions and best zoning practices, contact storage array or switch vendors.
When a virtual machine interacts with its virtual disk stored on a SAN, the following process takes place:
1 When the guest operating system in a virtual machine reads or writes to a SCSI disk, it sends SCSI
commands to the virtual disk.
2 Device drivers in the virtual machine’s operating system communicate with the virtual SCSI
controllers.
b Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
c Sends the modified I/O request from the device driver in the VMkernel to the physical HBA.
6 Depending on a port the HBA uses to connect to the fabric, one of the SAN switches receives the
request. The switch routes the request to the appropriate storage device.
Chapter 5 Configuring Fibre Channel Storage
When you use ESXi systems with SAN storage, specific hardware and system requirements exist.
- Make sure that ESXi systems support the SAN storage hardware and firmware combinations you use. For an up-to-date list, see the VMware Compatibility Guide.
- Configure your system to have only one VMFS volume per LUN.
- Unless you are using diskless servers, do not set up the diagnostic partition on a SAN LUN. If you use diskless servers that boot from a SAN, a shared diagnostic partition is appropriate.
- Use RDMs to access raw disks. For information, see Chapter 19 Raw Device Mapping.
- For multipathing to work properly, each LUN must present the same LUN ID number to all ESXi hosts.
- Make sure that the storage device driver specifies a large enough queue. You can set the queue depth for the physical HBA during a system setup.
- On virtual machines running Microsoft Windows, increase the value of the SCSI TimeoutValue parameter to 60. With this increase, Windows can tolerate delayed I/O resulting from a path failover. For information, see Set Timeout on Windows Guest OS.
- You cannot use multipathing software inside a virtual machine to perform I/O load balancing to a single physical LUN. However, when your Microsoft Windows virtual machine uses dynamic disks, this restriction does not apply. For information about configuring dynamic disks, see Set Up Dynamic Disk Mirroring.
Storage provisioning
To ensure that the ESXi system recognizes the LUNs at startup time, provision all LUNs to the appropriate HBAs before you connect the SAN to the ESXi system.
Provision all LUNs to all ESXi HBAs at the same time. HBA failover works only if all HBAs see the same LUNs.
For LUNs that are shared among multiple hosts, make sure that LUN IDs are consistent across all hosts.

vMotion and VMware DRS
When you use vCenter Server and vMotion or DRS, make sure that the LUNs for the virtual machines are provisioned to all ESXi hosts. This action provides the most ability to move virtual machines.

Active-active compared to active-passive arrays
When you use vMotion or DRS with an active-passive SAN storage device, make sure that all ESXi systems have consistent paths to all storage processors. Not doing so can cause path thrashing when a vMotion migration occurs.
You should follow the configuration guidelines provided by your storage array vendor. During FC HBA
setup, consider the following issues.
- Do not mix FC HBAs from different vendors in a single host. Having different models of the same HBA is supported, but a single LUN cannot be accessed through two different HBA types, only through the same type.
- Set the timeout value for detecting a failover. To ensure optimal performance, do not change the default value.
1 Design your SAN if it is not already configured. Most existing SANs require only minor modification to
work with ESXi.
Most vendors have vendor-specific documentation for setting up a SAN to work with VMware ESXi.
4 Set up the HBAs for the hosts you have connected to the SAN.
7 (Optional) Set up your system for VMware HA failover or for using Microsoft Clustering Services.
N-Port ID Virtualization
N-Port ID Virtualization (NPIV) is an ANSI T11 standard that describes how a single Fibre Channel HBA
port can register with the fabric using several worldwide port names (WWPNs). This allows a fabric-
attached N-port to claim multiple fabric addresses. Each address appears as a unique entity on the Fibre
Channel fabric.
SAN objects, such as switches, HBAs, storage devices, or virtual machines can be assigned World Wide
Name (WWN) identifiers. WWNs uniquely identify such objects in the Fibre Channel fabric.
When virtual machines have WWN assignments, they use them for all RDM traffic. The LUNs pointed to
by any of the RDMs on the virtual machine must not be masked against its WWNs. When virtual
machines do not have WWN assignments, they access storage LUNs with the WWNs of their host’s
physical HBAs. By using NPIV, a SAN administrator can monitor and route storage access on a per virtual machine basis.
When a virtual machine has a WWN assigned to it, the virtual machine’s configuration file (.vmx) is
updated to include a WWN pair. The WWN pair consists of a World Wide Port Name (WWPN) and a
World Wide Node Name (WWNN). When that virtual machine is powered on, the VMkernel instantiates a
virtual port (VPORT) on the physical HBA which is used to access the LUN. The VPORT is a virtual HBA
that appears to the FC fabric as a physical HBA. The VPORT has its own unique identifier, the WWN pair
that was assigned to the virtual machine.
Each VPORT is specific to the virtual machine. The VPORT is destroyed on the host and it no longer
appears to the FC fabric when the virtual machine is powered off. When a virtual machine is migrated
from one host to another, the VPORT closes on the first host and opens on the destination host.
If NPIV is enabled, WWN pairs (WWPN & WWNN) are specified for each virtual machine at creation time.
When a virtual machine using NPIV is powered on, it uses each of these WWN pairs in sequence to
discover an access path to the storage. The number of VPORTs that are instantiated equals the number
of physical HBAs present on the host. A VPORT is created on each physical HBA that a physical path is
found on. Each physical path determines the virtual path that is used to access the LUN. HBAs that are
not NPIV-aware are skipped in this discovery process because VPORTs cannot be instantiated on them.
- NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host's physical HBAs.
- For information, see the VMware Compatibility Guide and refer to your vendor documentation.
- Use HBAs of the same type, either all QLogic or all Emulex. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs.
- If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual machine. This is required to support multipathing even though only one path at a time will be active.
- Make sure that physical HBAs on the host have access to all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host.
- When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and NPIV target ID match the physical LUN and Target ID.
- NPIV supports vMotion. When you use vMotion to migrate a virtual machine, it retains the assigned WWN. If you migrate an NPIV-enabled virtual machine to a host that does not support NPIV, VMkernel reverts to using a physical HBA to route the I/O.
- If your FC SAN environment supports concurrent I/O on the disks from an active-active array, the concurrent I/O to two different NPIV ports is also supported.
When you use ESXi with NPIV, the following limitations apply:
- Because the NPIV technology is an extension to the FC protocol, it requires an FC switch and does not work on the direct attached FC disks.
- When you clone a virtual machine or template with a WWN assigned to it, the clones do not retain the WWN.
- Disabling and then re-enabling the NPIV capability on an FC switch while virtual machines are running can cause an FC link to fail and I/O to stop.
You can create from 1 to 16 WWN pairs, which can be mapped to the first 1 to 16 physical FC HBAs on
the host.
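To see the WWNs of the physical FC HBAs that these WWN pairs map to, you can list the host's Fibre Channel adapters from the ESXi Shell:

    # List Fibre Channel adapters with their World Wide Node Names,
    # World Wide Port Names, port type, state, and speed
    esxcli storage san fc list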
Typically, you do not need to change existing WWN assignments on your virtual machine. In certain
circumstances, for example, when manually assigned WWNs are causing conflicts on the SAN, you might
need to change or remove WWNs.
Prerequisites
- Before configuring WWN, ensure that the ESXi host can access the storage LUN access control list (ACL) configured on the array side.
- If you want to edit the existing WWNs, power off the virtual machine.
Procedure
1 Right-click the virtual machine in the inventory and select Edit Settings.
3 Create or edit the WWN assignments by selecting one of the following options:
- Temporarily disable NPIV for this virtual machine. Disable but do not remove the existing WWN assignments for the virtual machine.
- Leave unchanged. Retain the existing WWN assignments. The read-only WWN assignments section displays the node and port values of any existing WWN assignments.
- Generate new WWNs. Generate new WWNs, overwriting any existing WWNs. The WWNs of the HBA are not affected. Specify the number of WWNNs and WWPNs. A minimum of two WWPNs are required to support failover with NPIV. Typically only one WWNN is created for each virtual machine.
- Remove WWN assignment. Remove the WWNs assigned to the virtual machine. The virtual machine uses the HBA WWNs to access the storage LUN.
What to do next
Chapter 6 Configuring Fibre Channel over Ethernet
To access Fibre Channel storage, an ESXi host can use the Fibre Channel over Ethernet (FCoE)
protocol.
The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does
not need special Fibre Channel links to connect to Fibre Channel storage. The host can use 10 Gbit
lossless Ethernet to deliver Fibre Channel traffic.
The adapters that VMware supports generally fall into two categories, hardware FCoE adapters and
software FCoE adapters that use the native FCoE stack in ESXi.
For information on adapters that can be used with VMware FCoE, see the VMware Compatibility Guide.
When such an adapter is installed, your host detects and can use both CNA components. In the vSphere Client, the networking component appears as a standard network adapter (vmnic) and the Fibre Channel component as an FCoE adapter (vmhba). You do not need to configure the hardware FCoE adapter to use it.
VMware supports two categories of NICs with the software FCoE adapters.
NICs With Partial FCoE Offload
The extent of the offload capabilities might depend on the type of the NIC. Generally, the NICs offer Data Center Bridging (DCB) and I/O offload capabilities.

NICs Without FCoE Offload
Any NICs that offer Data Center Bridging (DCB) and have a minimum speed of 10 Gbps. The network adapters are not required to support any FCoE offload capabilities.
Unlike the hardware FCoE adapter, the software adapter must be activated. Before you activate the
adapter, you must properly configure networking.
Note The number of software FCoE adapters you activate corresponds to the number of physical NIC
ports. ESXi supports a maximum of four software FCoE adapters on one host.
n On the ports that communicate with your ESXi host, disable the Spanning Tree Protocol (STP).
Having the STP enabled might delay the FCoE Initialization Protocol (FIP) response at the switch and
cause an all paths down (APD) condition.
The FIP is a protocol that FCoE uses to discover and initialize FCoE entities on the Ethernet.
n Make sure that you have a compatible firmware version on the FCoE switch.
n Whether you use a partially offloaded NIC or a non-FCoE capable NIC, make sure that the latest
microcode is installed on the network adapter.
n If you use the non-FCoE capable NIC, make sure that it has the DCB capability for software FCoE
enablement.
n If the network adapter has multiple ports, when configuring networking, add each port to a separate
vSwitch. This practice helps you to avoid an APD condition when a disruptive event, such as an MTU
change, occurs.
n Do not move a network adapter port from one vSwitch to another when FCoE traffic is active. If you
make this change, reboot your host afterwards.
n If you changed the vSwitch for a network adapter port and caused a failure, moving the port back to
the original vSwitch resolves the problem.
This procedure explains how to create a single VMkernel network adapter connected to a single FCoE
physical network adapter through a vSphere Standard switch. If your host has multiple network adapters
or multiple ports on the adapter, connect each FCoE NIC to a separate standard switch. For more
information, see the vSphere Networking documentation.
Procedure
5 To enable Jumbo Frames, change MTU (Bytes) to a value of 2500 or more, and click Next.
6 Click the Add adapters icon, and select the network adapter (vmnic#) that supports FCoE.
A network label is a friendly name that identifies the VMkernel adapter that you are creating, for example, FCoE.
FCoE traffic requires an isolated network. Make sure that the VLAN ID you enter is different from the
one used for regular networking on your host. For more information, see the vSphere Networking
documentation.
9 After you finish configuration, review the information and click Finish.
You have created the virtual VMkernel adapter for the physical FCoE network adapter installed on your
host.
Note To avoid FCoE traffic disruptions, do not remove the FCoE network adapter (vmnic#) from the
vSphere Standard switch after you set up FCoE networking.
The number of software FCoE adapters you can activate corresponds to the number of physical FCoE NIC ports on your host. ESXi supports a maximum of four software FCoE adapters on one host.
Prerequisites
Procedure
3 Under Storage, click Storage Adapters, and click the Add icon ( ).
5 On the Add Software FCoE Adapter dialog box, select an appropriate vmnic from the drop-down list
of physical network adapters.
Only those adapters that are not yet used for FCoE traffic are listed.
6 Click OK.
After you activate the software FCoE adapter, you can view its properties. If you do not use the adapter,
you can remove it from the list of adapters.
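As an alternative to the vSphere Client workflow, the software FCoE adapter can also be activated from the ESXi Shell. The following is a minimal sketch, not an exact procedure: the NIC name vmnic4 is a placeholder, and the available esxcli fcoe subcommands can vary by ESXi build, so confirm them with esxcli fcoe nic --help first.

# List FCoE-capable physical NICs and their current status
esxcli fcoe nic list
# Initiate FCoE discovery on a NIC (vmnic4 is a placeholder), which activates the corresponding software FCoE adapter
esxcli fcoe nic discover -n vmnic4
# Confirm that the new FCoE adapter (vmhba#) is listed
esxcli fcoe adapter list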
Booting ESXi from Fibre Channel SAN 7
When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in
the SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its
local disk.
ESXi supports booting through a Fibre Channel host bus adapter (HBA) or a Fibre Channel over Ethernet
(FCoE) converged network adapter (CNA).
Caution When you use boot from SAN with multiple ESXi hosts, each host must have its own boot LUN.
If you configure multiple hosts to share the boot LUN, ESXi image corruption might occur.
If you use boot from SAN, the benefits for your environment include the following:
n Cheaper servers. Servers can be more dense and run cooler without internal storage.
n Easier server replacement. You can replace servers and have the new server point to the old boot
location.
n Less wasted space. Servers without local disks often take up less space.
n Easier backup processes. You can back up the system boot images in the SAN as part of the overall
SAN backup procedures. Also, you can use advanced array features such as snapshots on the boot
image.
n Improved management. Creating and managing the operating system image is easier and more
efficient.
n Better reliability. You can access the boot disk through multiple paths, which protects the disk from
being a single point of failure.
ESXi system requirements: Follow vendor recommendations for the server booting from a SAN.
Adapter requirements: Configure the adapter, so it can access the boot LUN. See your vendor documentation.
Access control:
n Each host must have access to its own boot LUN only, not the boot LUNs of other hosts. Use storage system software to make sure that the host accesses only the designated LUNs.
n Multiple servers can share a diagnostic partition. You can use array-specific LUN masking to achieve this configuration.
Multipathing support: Multipathing to a boot LUN on active-passive arrays is not supported because the BIOS does not support multipathing and is unable to activate a standby path.
SAN considerations: If the array is not certified for a direct connect topology, the SAN connections must be through a switched topology. If the array is certified for the direct connect topology, the SAN connections can be made directly to the array. Boot from SAN is supported for both switched topology and direct connect topology.
Hardware-specific considerations: If you are running an IBM eServer BladeCenter and use boot from SAN, you must disable IDE drives on the blades.
This section describes the generic boot-from-SAN enablement process on the rack-mounted servers. For
information on enabling the boot from SAN option on Cisco Unified Computing System FCoE blade
servers, refer to Cisco documentation.
Because configuring the SAN components is vendor-specific, refer to the product documentation for each
item.
Procedure
1 Connect network cable, referring to any cabling guide that applies to your setup.
a From the SAN storage array, make the ESXi host visible to the SAN. This process is often called
creating an object.
b From the SAN storage array, set up the host to have the WWPNs of the host’s adapters as port
names or node names.
c Create LUNs.
d Assign LUNs.
Caution If you use a scripted installation process to install ESXi in boot from SAN mode, take
special steps to avoid unintended data loss.
Prerequisites
Procedure
Because changing the boot sequence in the BIOS is vendor-specific, refer to vendor documentation for
instructions. The following procedure explains how to change the boot sequence on an IBM host.
Procedure
1 Power on your system and enter the system BIOS Configuration/Setup Utility.
Procedure
Procedure
1 Run lputil.
3 Select an adapter.
Procedure
2 To configure the adapter parameters, press ALT+E at the Emulex prompt and follow these steps.
3 To configure the boot device, follow these steps from the Emulex main menu.
4 Boot into the system BIOS and move Emulex first in the boot controller sequence.
Procedure
1 While booting the server, press Ctrl+Q to enter the Fast!UTIL configuration utility.
Option Description
One HBA: If you have only one HBA, the Fast!UTIL Options page appears. Skip to Step 3.
Multiple HBAs: If you have more than one HBA, select the HBA manually.
a In the Select Host Adapter page, use the arrow keys to position the pointer on the appropriate HBA.
b Press Enter.
3 In the Fast!UTIL Options page, select Configuration Settings and press Enter.
4 In the Configuration Settings page, select Adapter Settings and press Enter.
7 Select the Boot Port Name entry in the list of storage processors (SPs) and press Enter.
If you are using an active-passive storage array, the selected SP must be on the preferred (active)
path to the boot LUN. If you are not sure which SP is on the active path, use your storage array
management software to find out. The target IDs are created by the BIOS and might change with
each reboot.
9 Perform the appropriate action depending on the number of LUNs attached to the SP.
Option Description
One LUN: The LUN is selected as the boot LUN. You do not need to enter the Select LUN page.
Multiple LUNs: The Select LUN page opens. Use the pointer to select the boot LUN, then press Enter.
10 If any remaining storage processors show in the list, press C to clear the data.
11 Press Esc twice to exit and press Enter to save the setting.
Booting ESXi with Software FCoE 8
ESXi supports booting from FCoE capable network adapters.
Only NICs with partial FCoE offload support the boot capabilities with software FCoE. If you use NICs without FCoE offload, software FCoE boot is not supported.
When you install and boot ESXi from an FCoE LUN, the host can use a VMware software FCoE adapter
and a network adapter with FCoE capabilities. The host does not require a dedicated FCoE HBA.
You perform most configurations through the option ROM of your network adapter. The network adapters
must support one of the following formats, which communicate parameters about an FCoE boot device to
VMkernel.
n FCoE Boot Firmware Table (FBFT). FBFT is proprietary to Intel.
n FCoE Boot Parameter Table (FBPT). FBPT is defined by VMware for third-party vendors to
implement a software FCoE boot.
The configuration parameters are set in the option ROM of your adapter. During an ESXi installation or a subsequent boot, these parameters are exported into system memory in either FBFT format or FBPT format. The VMkernel can read the configuration settings and use them to access the boot LUN.
Requirements
n Use ESXi of a compatible version.
n Use a network adapter that is FCoE capable.
n Use a network adapter that contains FCoE boot firmware which can export boot information in FBFT format or FBPT format.
Considerations
n You cannot change software FCoE boot configuration from within ESXi.
n Coredump is not supported on any software FCoE LUNs, including the boot LUN.
n The boot LUN cannot be shared with other hosts, even on shared storage. Make sure that the host has access to the entire boot LUN.
When you configure your host for a software FCoE boot, you perform several tasks.
Prerequisites
n Contain either an FCoE Boot Firmware Table (FBFT) or an FCoE Boot Parameter Table (FBPT).
For information about network adapters that support software FCoE boot, see the VMware Compatibility
Guide.
Procedure
Procedure
u In the option ROM of the network adapter, specify software FCoE boot parameters.
These parameters include a boot target, boot LUN, VLAN ID, and so on.
Because configuring the network adapter is vendor-specific, review your vendor documentation for
instructions.
Prerequisites
n Configure the option ROM of the network adapter, so that it points to a target boot LUN. Make sure
that you have information about the bootable LUN.
n Change the boot order in the system BIOS to the following sequence:
a The network adapter that you use for the software FCoE boot.
Procedure
The ESXi installer verifies that FCoE boot is enabled in the BIOS and, if needed, creates a standard
virtual switch for the FCoE capable network adapter. The name of the vSwitch is
VMware_FCoE_vSwitch. The installer then uses preconfigured FCoE boot parameters to discover
and display all available FCoE LUNs.
2 On the Select a Disk page, select the software FCoE LUN that you specified in the boot parameter
setting.
If the boot LUN does not appear in this menu, make sure that you correctly configured boot
parameters in the option ROM of the network adapter.
5 Change the boot order in the system BIOS so that the FCoE boot LUN is the first bootable device.
ESXi continues booting from the software FCoE LUN until it is ready to be used.
What to do next
If needed, you can rename and modify the VMware_FCoE_vSwitch that the installer automatically
created. Make sure that the Cisco Discovery Protocol (CDP) mode is set to Listen or Both.
Problem
When you install or boot ESXi from FCoE storage, the installation or the boot process fails. The FCoE
setup that you use includes a VMware software FCoE adapter and a network adapter with partial FCoE
offload capabilities.
Solution
n Make sure that you correctly configured boot parameters in the option ROM of the FCoE network
adapter.
n During installation, monitor the BIOS of the FCoE network adapter for any errors.
n Use the esxcli command to verify whether the boot LUN is present, as sketched below.
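A minimal sketch of that check from the ESXi Shell follows; it assumes nothing beyond the standard esxcli storage namespaces, and the adapter naming on your host might differ.

# List all SCSI devices the host currently sees; the FCoE boot LUN should appear here
esxcli storage core device list
# List the storage adapters to confirm that the software FCoE adapter (vmhba#) is present
esxcli storage core adapter list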
Best Practices for Fibre Channel Storage 9
When using ESXi with Fibre Channel SAN, follow recommendations to avoid performance problems.
The vSphere Client offers extensive facilities for collecting performance information. The information is
graphically displayed and frequently updated.
You can also use the resxtop or esxtop command-line utilities. The utilities provide a detailed look at
how ESXi uses resources. For more information, see the vSphere Resource Management
documentation.
Check with your storage representative if your storage system supports Storage API - Array Integration
hardware acceleration features. If it does, refer to your vendor documentation to enable hardware
acceleration support on the storage system side. For more information, see Chapter 24 Storage
Hardware Acceleration.
n Do not change the path policy the system sets for you unless you understand the implications of
making such a change.
n Document everything. Include information about zoning, access control, storage, switch, server and
FC HBA configuration, software and firmware versions, and storage cable plan.
n Verify different links, switches, HBAs, and other elements to ensure that you did not miss a critical
failure point in your design.
n Ensure that the Fibre Channel HBAs are installed in the correct slots in the host, based on slot and
bus speed. Balance PCI bus load among the available buses in the server.
n Become familiar with the various monitor points in your storage network, at all visibility points,
including host's performance charts, FC switch statistics, and storage performance statistics.
n Be cautious when changing IDs of the LUNs that have VMFS datastores being used by your ESXi
host. If you change the ID, the datastore becomes inactive and its virtual machines fail. Resignature
the datastore to make it active again. See Managing Duplicate VMFS Datastores.
After you change the ID of the LUN, rescan the storage to reset the ID on your host. For information
on using the rescan, see Storage Rescan Operations.
Procedure
4 Under Advanced System Settings, select the Disk.EnableNaviReg parameter and click the Edit icon.
This operation disables the automatic host registration that is enabled by default.
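The same setting can also be changed from the ESXi Shell. This is a sketch under the assumption that you want automatic host registration turned off (value 0):

# Show the current value of the automatic host registration setting
esxcli system settings advanced list -o /Disk/EnableNaviReg
# Disable automatic host registration (0 = disabled, 1 = enabled)
esxcli system settings advanced set -o /Disk/EnableNaviReg -i 0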
If the environment is properly configured, the SAN fabric components (particularly the SAN switches) are
only minor contributors because of their low latencies relative to servers and storage arrays. Make sure
that the paths through the switch fabric are not saturated, that is, that the switch fabric is running at the
highest throughput.
If you encounter any problems with storage array performance, consult your storage array vendor
documentation for any relevant information.
To improve the array performance in the vSphere environment, follow these general guidelines:
n When assigning LUNs, remember that several hosts might access the LUN, and that several virtual
machines can run on each host. One LUN used by a host can service I/O from many different
applications running on different operating systems. Because of this diverse workload, the RAID
group containing the ESXi LUNs typically does not include LUNs used by other servers that are not
running ESXi.
n SAN storage arrays require continual redesign and tuning to ensure that I/O is load-balanced across
all storage array paths. To meet this requirement, distribute the paths to the LUNs among all the SPs
to provide optimal load-balancing. Close monitoring indicates when it is necessary to rebalance the
LUN distribution.
Tuning statically balanced storage arrays is a matter of monitoring the specific performance statistics, such as I/O operations per second, blocks per second, and response time. Distributing the LUN workload across all the SPs is also important.
Each server application must have access to its designated storage with the following conditions:
Because each application has different requirements, you can meet these goals by selecting an
appropriate RAID group on the storage array.
n Place each LUN on a RAID group that provides the necessary performance levels. Monitor the
activities and resource use of other LUNs in the assigned RAID group. A high-performance RAID
group that has too many applications doing I/O to it might not meet performance goals required by an
application running on the ESXi host.
n Ensure that each host has enough HBAs to increase throughput for the applications on the host for
the peak period. I/O spread across multiple HBAs provides faster throughput and less latency for
each application.
n To provide redundancy for a potential HBA failure, make sure that the host is connected to a dual
redundant fabric.
n When allocating LUNs or RAID groups for ESXi systems, remember that multiple operating systems
use and share that resource. The LUN performance required by the ESXi host might be much higher
than when you use regular physical machines. For example, if you expect to run four I/O intensive
applications, allocate four times the performance capacity for the ESXi LUNs.
n When you use multiple ESXi systems with vCenter Server, the performance requirements for the storage subsystem increase correspondingly.
n The number of outstanding I/Os needed by applications running on the ESXi system must match the
number of I/Os the HBA and storage array can handle.
Using ESXi with iSCSI SAN 10
ESXi can connect to external SAN storage using the Internet SCSI (iSCSI) protocol. In addition to
traditional iSCSI, ESXi also supports iSCSI Extensions for RDMA (iSER).
When the iSER protocol is enabled, the host can use the same iSCSI framework, but replaces the TCP/IP
transport with the Remote Direct Memory Access (RDMA) transport.
On the host side, the iSCSI SAN components include iSCSI host bus adapters (HBAs) or Network
Interface Cards (NICs). The iSCSI network also includes switches and routers that transport the storage
traffic, cables, storage processors (SPs), and storage disk systems.
Figure: An ESXi host connects to iSCSI storage either through an iSCSI HBA or through a software adapter with an Ethernet NIC, across the LAN, to VMFS datastores on the storage system.
The client, called iSCSI initiator, operates on your ESXi host. It initiates iSCSI sessions by issuing SCSI
commands and transmitting them, encapsulated into the iSCSI protocol, to an iSCSI server. The server is
known as an iSCSI target. Typically, the iSCSI target represents a physical storage system on the
network.
The target can also be a virtual iSCSI SAN, for example, an iSCSI target emulator running in a virtual
machine. The iSCSI target responds to the initiator's commands by transmitting required iSCSI data.
iSCSI Multipathing
When transferring data between the host server and storage, the SAN uses a technique known as
multipathing. With multipathing, your ESXi host can have more than one physical path to a LUN on a
storage system.
Generally, a single path from a host to a LUN consists of an iSCSI adapter or NIC, switch ports,
connecting cables, and the storage controller port. If any component of the path fails, the host selects
another available path for I/O. The process of detecting a failed path and switching to another is called
path failover.
For more information on multipathing, see Chapter 18 Understanding Multipathing and Failover.
Each node has a node name. ESXi uses several methods to identify a node.
IP Address Each iSCSI node can have an IP address associated with it so that routing
and switching equipment on your network can establish the connection
between the host and storage. This address is like the IP address that you
assign to your computer to get access to your company's network or the
Internet.
iSCSI Name A worldwide unique name for identifying the node. iSCSI uses the iSCSI
Qualified Name (IQN) and Extended Unique Identifier (EUI).
By default, ESXi generates unique iSCSI names for your iSCSI initiators,
for example, iqn.1998-01.com.vmware:iscsitestox-68158ef2.
Usually, you do not have to change the default value, but if you do, make
sure that the new iSCSI name you enter is worldwide unique.
iSCSI Alias A more manageable name for an iSCSI device or port used instead of the
iSCSI name. iSCSI aliases are not unique and are intended to be a friendly
name to associate with a port.
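As an illustration of these identifiers, you can inspect, and if necessary change, the iSCSI name and alias of an adapter from the ESXi Shell. The adapter name vmhba65 and the example IQN and alias are placeholders, and the exact options of the set command are an assumption, so confirm them with esxcli iscsi adapter set --help.

# Show the current iSCSI name (IQN) and alias of an adapter (vmhba65 is a placeholder)
esxcli iscsi adapter get -A vmhba65
# Assign a new worldwide-unique IQN (example value only)
esxcli iscsi adapter set -A vmhba65 -n iqn.1998-01.com.vmware:esx01-example
# Assign a friendly alias (example value only)
esxcli iscsi adapter set -A vmhba65 -a esx01-swiscsi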
Each node has one or more ports that connect it to the SAN. iSCSI ports are end-points of an iSCSI
session.
iSCSI names are formatted in two different ways. The most common is the IQN format.
For more details on iSCSI naming requirements and string profiles, see RFC 3721 and RFC 3722 on the
IETF website.
The IQN format takes the form iqn.yyyy-mm.naming-authority:unique name, where:
n yyyy-mm is the year and month when the naming authority was established.
n naming-authority is the reverse syntax of the Internet domain name of the naming authority. For example, the iscsi.vmware.com naming authority can have the iSCSI qualified name form of iqn.1998-01.com.vmware.iscsi. The name indicates that the vmware.com domain name was registered in January of 1998, and iscsi is a subdomain, maintained by vmware.com.
n unique name is any name you want to use, for example, the name of your host. The naming authority
must make sure that any names assigned following the colon are unique, such as:
n iqn.1998-01.com.vmware.iscsi:name1
n iqn.1998-01.com.vmware.iscsi:name2
n iqn.1998-01.com.vmware.iscsi:name999
The less common EUI format consists of the eui. prefix followed by 16 hexadecimal digits. The 16 hexadecimal digits are text representations of a 64-bit number in the IEEE EUI (extended unique identifier) format. The top 24 bits are a company ID that IEEE registers with a particular company. The remaining 40 bits are assigned by the entity holding that company ID and must be unique.
iSCSI Initiators
To access iSCSI targets, your ESXi host uses iSCSI initiators.
The initiator is a software or hardware installed on your ESXi host. The iSCSI initiator originates
communication between your host and an external iSCSI storage system and sends data to the storage
system.
In the ESXi environment, iSCSI adapters configured on your host play the role of initiators. ESXi supports
several types of iSCSI adapters.
For information on configuring and using iSCSI adapters, see Chapter 11 Configuring iSCSI Adapters and
Storage.
Independent Hardware iSCSI Adapter: Implements its own networking and iSCSI configuration and management interfaces. Typically, an independent hardware iSCSI adapter is a card that either presents only iSCSI offload functionality or iSCSI offload functionality and standard NIC functionality. The iSCSI offload functionality has independent configuration management that assigns the IP, MAC, and other parameters used for the iSCSI sessions. An example of an independent adapter is the QLogic QLA4052 adapter.
Hardware iSCSI adapters might need to be licensed. Otherwise, they might not appear in the client or
vSphere CLI. Contact your vendor for licensing information.
The traditional iSCSI protocol carries SCSI commands over a TCP/IP network between an iSCSI initiator on a host and an iSCSI target on a storage device. The iSCSI protocol encapsulates the commands and assembles the data into packets for the TCP/IP layer. When the data arrives, the iSCSI protocol disassembles the TCP/IP packets, so that the SCSI commands can be differentiated and delivered to the storage device.
iSER differs from traditional iSCSI as it replaces the TCP/IP data transfer model with the Remote Direct
Memory Access (RDMA) transport. Using the direct data placement technology of the RDMA, the iSER
protocol can transfer data directly between the memory buffers of the ESXi host and storage devices.
This method eliminates unnecessary TCP/IP processing and data copying, and can also reduce latency and the CPU load on the storage device.
In the iSER environment, iSCSI works exactly as before, but uses an underlying RDMA fabric interface
instead of the TCP/IP-based interface.
Because the iSER protocol preserves the compatibility with iSCSI infrastructure, the process of enabling
iSER on the ESXi host is similar to the iSCSI process. See Configure iSER Adapters.
Different iSCSI storage vendors present storage to hosts in different ways. Some vendors present
multiple LUNs on a single target. Others present multiple targets with one LUN each.
In these examples, three LUNs are available in each of these configurations. In the first case, the host
detects one target but that target has three LUNs that can be used. Each of the LUNs represents
individual storage volume. In the second case, the host detects three different targets, each having one
LUN.
Host-based iSCSI initiators establish connections to each target. Storage systems with a single target
containing multiple LUNs have traffic to all the LUNs on a single connection. With a system that has three
targets with one LUN each, the host uses separate connections to the three LUNs.
This information is useful when you are trying to aggregate storage traffic on multiple connections from
the host with multiple iSCSI adapters. You can set the traffic for one target to a particular adapter, and use
a different adapter for the traffic to another target.
The types of storage that your host supports include active-active, active-passive, and ALUA-compliant.
Active-active storage system: Supports access to the LUNs simultaneously through all the storage ports that are available without significant performance degradation. All the paths are always active, unless a path fails.
Active-passive storage system: A system in which one storage processor is actively providing access to a given LUN. The other processors act as a backup for the LUN and can be actively providing access to other LUN I/O. I/O can be successfully sent only to an active port for a given LUN. If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.
Asymmetrical storage system: Supports Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide different levels of access per port. With ALUA, hosts can determine the states of target ports and prioritize paths. The host uses some of the active paths as primary and others as secondary.
Virtual port storage system: Supports access to all available LUNs through a single virtual port. Virtual port storage systems are active-active storage devices, but hide their multiple connections through a single port. ESXi multipathing does not make multiple connections from a specific port to the storage by default. Some storage vendors supply session managers to establish and manage multiple connections to their storage. These storage systems handle port failovers and connection balancing transparently. This capability is often called transparent failover.
You must configure your host and the iSCSI storage system to support your storage access control policy.
Discovery
A discovery session is part of the iSCSI protocol. It returns the set of targets you can access on an iSCSI
storage system. The two types of discovery available on ESXi are dynamic and static. Dynamic discovery
obtains a list of accessible targets from the iSCSI storage system. Static discovery can access only a
particular target by target name and address.
For more information, see Configuring Discovery Addresses for iSCSI Adapters.
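A command-line sketch of both discovery methods is shown below. The adapter name vmhba67, the portal address, and the target IQN are placeholders; the option names are believed to match the esxcli iscsi adapter discovery namespace, so confirm them with --help on your host.

# Dynamic (SendTargets) discovery: add the storage system's portal address
esxcli iscsi adapter discovery sendtarget add -A vmhba67 -a 10.115.155.10:3260
# Static discovery: add one specific target by portal address and target name
esxcli iscsi adapter discovery statictarget add -A vmhba67 -a 10.115.155.10:3260 -n iqn.2003-01.com.example:storage.array1
# Rescan the adapter so that newly discovered targets and LUNs appear
esxcli storage core adapter rescan -A vmhba67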
Authentication
iSCSI storage systems authenticate an initiator by a name and key pair. ESXi supports the CHAP
authentication protocol. To use CHAP authentication, the ESXi host and the iSCSI storage system must
have CHAP enabled and have common credentials.
For information on enabling CHAP, see Configuring CHAP Parameters for iSCSI Adapters.
Access Control
Access control is a policy set up on the iSCSI storage system. Most implementations support one or more
of three types of access control:
n By initiator name
n By IP address
n By the CHAP protocol
Only initiators that meet all rules can access the iSCSI volume.
Using only CHAP for access control can slow down rescans because the ESXi host can discover all
targets, but then fails at the authentication step. iSCSI rescans work faster if the host discovers only the
targets it can authenticate.
Error Correction
To protect the integrity of iSCSI headers and data, the iSCSI protocol defines error correction methods
known as header digests and data digests.
Both parameters are disabled by default, but you can enable them. These digests pertain to, respectively,
the header and SCSI data being transferred between iSCSI initiators and targets, in both directions.
Header and data digests check the noncryptographic data integrity beyond the integrity checks that other
networking layers provide, such as TCP and Ethernet. They check the entire communication path,
including all elements that can change the network-level traffic, such as routers, switches, and proxies.
The existence and type of the digests are negotiated when an iSCSI connection is established. When the
initiator and target agree on a digest configuration, this digest must be used for all traffic between them.
Enabling header and data digests does require additional processing for both the initiator and the target and can affect throughput and CPU performance.
Note Systems that use Intel Nehalem processors offload the iSCSI digest calculations, thus reducing the impact on performance.
For information on enabling header and data digests, see Configuring Advanced Parameters for iSCSI.
When a virtual machine interacts with its virtual disk stored on a SAN, the following process takes place:
1 When the guest operating system in a virtual machine reads or writes to a SCSI disk, it sends SCSI commands to the virtual disk.
2 Device drivers in the virtual machine’s operating system communicate with the virtual SCSI
controllers.
b Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
c Sends the modified I/O request from the device driver in the VMkernel to the iSCSI initiator,
hardware or software.
5 If the iSCSI initiator is a hardware iSCSI adapter, independent or dependent, the adapter performs
the following tasks.
6 If the iSCSI initiator is a software iSCSI adapter, the following takes place.
d The physical NIC sends IP packets over Ethernet to the iSCSI storage system.
7 Ethernet switches and routers on the network carry the request to the appropriate storage device.
Configuring iSCSI Adapters and Storage 11
Before ESXi can work with iSCSI SAN, you must set up your iSCSI environment.
The process of preparing your iSCSI environment involves the following steps:
Step Details
Set up iSCSI storage For information, see your storage vendor documentation. In addition, follow these
recommendations:
n ESXi iSCSI SAN Recommendations and Restrictions
n Chapter 13 Best Practices for iSCSI Storage
n To ensure that the host recognizes LUNs at startup time, configure all iSCSI storage targets so that
your host can access them and use them. Configure your host so that it can discover all available
iSCSI targets.
n Unless you are using diskless servers, set up a diagnostic partition on local storage. If you have
diskless servers that boot from iSCSI SAN, see General Recommendations for Boot from iSCSI SAN
for information about diagnostic partitions with iSCSI.
n Set the SCSI controller driver in the guest operating system to a large enough queue depth.
n On virtual machines running Microsoft Windows, increase the value of the SCSI TimeoutValue
parameter. With this parameter set up, the Windows VMs can better tolerate delayed I/O resulting
from a path failover. For information, see Set Timeout on Windows Guest OS.
n Configure your environment to have only one VMFS datastore for each LUN.
n You cannot use virtual-machine multipathing software to perform I/O load balancing to a single
physical LUN.
n ESXi does not support multipathing when you combine independent hardware adapters with either
software or dependent hardware adapters.
iSCSI Networking
For certain types of iSCSI adapters, you must configure VMkernel networking.
You can verify the network configuration by using the vmkping utility.
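For example, a quick connectivity check from the ESXi Shell might look like the following. The VMkernel interface name and target portal address are placeholders, and the jumbo-frame test assumes an MTU of 9000 configured end to end.

# Ping the iSCSI target portal from a specific VMkernel interface
vmkping -I vmk1 10.115.155.10
# Optionally test jumbo frames: 8972-byte payload with the don't-fragment bit set
vmkping -I vmk1 -d -s 8972 10.115.155.10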
The independent hardware iSCSI adapter does not require VMkernel networking. You can configure
network parameters, such as an IP address, subnet mask, and default gateway on the independent
hardware iSCSI adapter.
Independent Hardware iSCSI Adapter: Third-party adapter that offloads the iSCSI and network processing and management from your host. VMkernel networking is not required. For information, see Edit Network Settings for Hardware iSCSI.
Discovery Methods
For all types of iSCSI adapters, you must set the dynamic discovery address or static discovery address.
In addition, you must provide a target name of the storage system. For software iSCSI and dependent
hardware iSCSI, the address is pingable using vmkping.
CHAP Authentication
Enable the CHAP parameter on the initiator and the storage system side. After authentication is enabled,
it applies to all targets that are not yet discovered. It does not apply to targets that are already discovered.
Prerequisites
For information about licensing, installation, and firmware updates, see vendor documentation.
The process of setting up the independent hardware iSCSI adapter includes these steps.
Step Description
View Independent Hardware iSCSI Adapters: View an independent hardware iSCSI adapter and verify that it is correctly installed and ready for configuration.
Modify General Properties for iSCSI or iSER Adapters: If needed, change the default iSCSI name and alias assigned to your iSCSI adapters. For the independent hardware iSCSI adapters, you can also change the default IP settings.
Edit Network Settings for Hardware iSCSI: Change default network settings so that the adapter is configured properly for the iSCSI SAN.
Configure Dynamic or Static Discovery for iSCSI and iSER: Set up dynamic discovery. With dynamic discovery, each time the initiator contacts a specified iSCSI storage system, it sends the SendTargets request to the system. The iSCSI system responds by supplying a list of available targets to the initiator. In addition to the dynamic discovery method, you can use static discovery and manually enter information for the targets.
Set Up CHAP for iSCSI or iSER Adapter: If your iSCSI environment uses the Challenge Handshake Authentication Protocol (CHAP), configure it for your adapter.
Enable Jumbo Frames for Independent Hardware iSCSI: If your iSCSI environment supports Jumbo Frames, enable them for the adapter.
After you install an independent hardware iSCSI adapter on a host, it appears on the list of storage
adapters available for configuration. You can view its properties.
Prerequisites
Procedure
If installed, the hardware iSCSI adapter appears on the list of storage adapters.
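You can perform a similar check from the ESXi Shell; this sketch simply lists the adapters and assumes no particular adapter names.

# List all iSCSI adapters (independent hardware, dependent hardware, and software)
esxcli iscsi adapter list
# List all storage adapters known to the host, including non-iSCSI HBAs
esxcli storage core adapter list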
iSCSI Name Unique name formed according to iSCSI standards that identifies the iSCSI adapter. You
can edit the iSCSI name.
iSCSI Alias A friendly name used instead of the iSCSI name. You can edit the iSCSI alias.
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
4 Under Adapter Details, click the Network Settings tab and click Edit.
5 In the IPv4 settings section, disable IPv4 or select the method to obtain IP addresses.
Note The automatic DHCP option and static option are mutually exclusive.
Option Description
Use static IPv4 settings Enter the IPv4 IP address, subnet mask, and default gateway for the iSCSI
adapter.
6 In the IPv6 settings section, disable IPv6 or select an appropriate option for obtaining IPv6 addresses.
Note Automatic options and the static option are mutually exclusive.
Option Description
Obtain IPv6 addresses automatically through Router Advertisement: Use router advertisement to obtain IPv6 addresses.
Override Link-local address for IPv6: Override the link-local IP address by configuring a static IP address.
7 In the DNS settings section, provide IP addresses for a preferred DNS server and an alternate DNS
server.
An example of a dependent iSCSI adapter is a Broadcom 5709 NIC. When installed on a host, it presents
its two components, a standard network adapter and an iSCSI engine, to the same port. The iSCSI
engine appears on the list of storage adapters as an iSCSI adapter (vmhba).
The iSCSI adapter is enabled by default. To make it functional, you must connect it, through a virtual
VMkernel adapter (vmk), to a physical network adapter (vmnic) associated with it. You can then configure
the iSCSI adapter.
After you configure the dependent hardware iSCSI adapter, the discovery and authentication data is passed through the network connection, while the iSCSI traffic goes through the iSCSI engine, bypassing the regular networking stack.
The entire setup and configuration process for the dependent hardware iSCSI adapters involves several
steps.
Step Description
View Dependent Hardware iSCSI Adapters: View a dependent hardware iSCSI adapter to verify that it is correctly loaded.
Modify General Properties for iSCSI or iSER Adapters: If needed, change the default iSCSI name and alias assigned to your adapter.
Determine Association Between iSCSI and Network Adapters: You must create network connections to bind dependent iSCSI and physical network adapters. To create the connections correctly, determine the name of the physical NIC with which the dependent hardware iSCSI adapter is associated.
Configure Port Binding for iSCSI or iSER: Configure connections for the traffic between the iSCSI component and the physical network adapters. The process of configuring these connections is called port binding.
Configure Dynamic or Static Discovery for iSCSI and iSER: Set up dynamic discovery. With dynamic discovery, each time the initiator contacts a specified iSCSI storage system, it sends the SendTargets request to the system. The iSCSI system responds by supplying a list of available targets to the initiator. In addition to the dynamic discovery method, you can use static discovery and manually enter information for the targets.
Set Up CHAP for iSCSI or iSER Adapter: If your iSCSI environment uses the Challenge Handshake Authentication Protocol (CHAP), configure it for your adapter.
Set Up CHAP for Target: You can also configure different CHAP credentials for each discovery address or static target.
Enable Jumbo Frames for Networking: If your iSCSI environment supports Jumbo Frames, enable them for the adapter.
n When you use any dependent hardware iSCSI adapter, performance reporting for a NIC associated
with the adapter might show little or no activity, even when iSCSI traffic is heavy. This behavior occurs
because the iSCSI traffic bypasses the regular networking stack.
n If you use a third-party virtual switch, for example Cisco Nexus 1000V DVS, disable automatic
pinning. Use manual pinning instead, making sure to connect a VMkernel adapter (vmk) to an
appropriate physical NIC (vmnic). For information, refer to your virtual switch vendor documentation.
n The Broadcom iSCSI adapter performs data reassembly in hardware, which has a limited buffer
space. When you use the Broadcom iSCSI adapter in a congested network or under heavy load,
enable flow control to avoid performance degradation.
Flow control manages the rate of data transmission between two nodes to prevent a fast sender from
overrunning a slow receiver. For best results, enable flow control at the end points of the I/O path, at
the hosts and iSCSI storage systems.
To enable flow control for the host, use the esxcli system module parameters command, as sketched below. For details, see the VMware knowledge base article at http://kb.vmware.com/kb/1013413.
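A generic sketch of that command sequence follows. The module name and parameter string are hypothetical placeholders, because the real names depend on the NIC driver; take the actual values from the knowledge base article or your NIC vendor documentation.

# Inspect the parameters currently set on the NIC driver module (module name is a placeholder)
esxcli system module parameters list -m <driver_module>
# Set the driver's flow control parameters (parameter string is a placeholder), then reboot the host
esxcli system module parameters set -m <driver_module> -p "<flow_control_parameters>"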
If installed, the dependent hardware iSCSI adapter (vmhba#) appears on the list of storage adapters under a category such as Broadcom iSCSI Adapter. If the dependent hardware adapter does not appear on the list of storage adapters, check whether it needs to be licensed. See your vendor documentation.
Procedure
The default details for the adapter appear, including the iSCSI name, iSCSI alias, and the status.
What to do next
Although the dependent iSCSI adapter is enabled by default, to make it functional, you must set up
networking for the iSCSI traffic and bind the adapter to the appropriate VMkernel iSCSI port. You then
configure discovery addresses and CHAP parameters.
Procedure
4 Select the iSCSI adapter (vmhba#) and click the Network Port Binding tab under adapter details.
5 Click Add.
The network adapter (vmnic#) that corresponds to the dependent iSCSI adapter is listed in the
Physical Network Adapter column.
What to do next
If the VMkernel Adapter column is empty, create a VMkernel adapter (vmk#) for the physical network
adapter (vmnic#) and then bind them to the associated dependent hardware iSCSI. See Setting Up
Network for iSCSI and iSER.
When you use the software iSCSI adapters, consider the following:
n Designate a separate network adapter for iSCSI. Do not use iSCSI on 100 Mbps or slower adapters.
n Avoid hard coding the name of the software adapter, vmhbaXX, in the scripts. It is possible for the
name to change from one ESXi release to another. The change might cause failures of your existing
scripts if they use the hardcoded old name. The name change does not affect the behavior of the
iSCSI software adapter.
The process of configuring the software iSCSI adapter involves several steps.
Step Description
Activate or Disable the Software iSCSI Adapter: Activate your software iSCSI adapter so that your host can use it to access iSCSI storage.
Modify General Properties for iSCSI or iSER Adapters: If needed, change the default iSCSI name and alias assigned to your adapter.
Configure Port Binding for iSCSI or iSER: Configure connections for the traffic between the iSCSI component and the physical network adapters. The process of configuring these connections is called port binding.
Configure Dynamic or Static Discovery for iSCSI and iSER: Set up dynamic discovery. With dynamic discovery, each time the initiator contacts a specified iSCSI storage system, it sends the SendTargets request to the system. The iSCSI system responds by supplying a list of available targets to the initiator. In addition to the dynamic discovery method, you can use static discovery and manually enter information for the targets.
Set Up CHAP for iSCSI or iSER Adapter: If your iSCSI environment uses the Challenge Handshake Authentication Protocol (CHAP), configure it for your adapter.
Set Up CHAP for Target: You can also configure different CHAP credentials for each discovery address or static target.
Enable Jumbo Frames for Networking: If your iSCSI environment supports Jumbo Frames, enable them for the adapter.
Prerequisites
Note If you boot from iSCSI using the software iSCSI adapter, the adapter is enabled and the network
configuration is created at the first boot. If you disable the adapter, it is reenabled each time you boot the
host.
Procedure
Option Description
Enable the software iSCSI adapter:
a Under Storage, click Storage Adapters, and click the Add icon.
b Select Software iSCSI Adapter and confirm that you want to add the adapter.
The software iSCSI adapter (vmhba#) is enabled and appears on the list of storage adapters. After enabling the adapter, the host assigns the default iSCSI name to it. You can now complete the adapter configuration.
Disable the software iSCSI adapter:
a Under Storage, click Storage Adapters, and select the adapter (vmhba#) to disable.
b Click the Properties tab.
c Click Disable and confirm that you want to disable the adapter.
After the reboot, the adapter no longer appears on the list of storage adapters. The storage devices associated with the adapter become inaccessible. You can later activate the adapter.
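If you prefer the command line, a minimal sketch of the same enable and disable operations from the ESXi Shell follows; no adapter name is needed because a host has a single software iSCSI adapter.

# Check whether the software iSCSI adapter is currently enabled
esxcli iscsi software get
# Enable the software iSCSI adapter; the adapter (vmhba#) then appears in the storage adapter list
esxcli iscsi software set --enabled=true
# Disabling works the same way; as noted above, a reboot completes the removal
esxcli iscsi software set --enabled=false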
When installed on the host, the RDMA-capable adapter appears in vCenter Server as a network adapter
(vmnic).
To make the adapter functional, you must enable the VMware iSER component, and then connect the
iSER adapter to the RDMA-capable vmnic. You can then configure typical properties, such as targets and
CHAP, for the iSER adapter.
The entire setup and configuration process for the iSER adapters involves several steps.
Step Description
Enable the VMware iSER Adapter: Use the esxcli command to enable the VMware iSER adapter.
Modify General Properties for iSCSI or iSER Adapters: If needed, change the default name and alias assigned to your adapter.
Configure Port Binding for iSCSI or iSER: You must create network connections to bind the iSER engine and the RDMA-capable network adapter. The process of configuring these connections is called port binding. Note iSER does not support NIC teaming. When configuring port binding, use only one RDMA adapter per vSwitch.
Configure Dynamic or Static Discovery for iSCSI and iSER: Set up dynamic discovery. With dynamic discovery, each time the initiator contacts a specified iSER storage system, it sends the SendTargets request to the system. The iSER system responds by supplying a list of available targets to the initiator. In addition to the dynamic discovery method, you can use static discovery and manually enter information for the targets.
Set Up CHAP for iSCSI or iSER Adapter: If your environment uses the Challenge Handshake Authentication Protocol (CHAP), configure it for your adapter.
Set Up CHAP for Target: You can also configure different CHAP credentials for each discovery address or static target.
Enable Jumbo Frames for Networking: If your environment supports Jumbo Frames, enable them for the adapter.
Prerequisites
n Make sure that your iSCSI storage supports the iSER protocol.
n Enable flow control on the ESXi host. To enable flow control for the host, use the esxcli system
module parameters command. For details, see the VMware knowledge base article at
http://kb.vmware.com/kb/1013413.
n Make sure to configure RDMA switch ports to create lossless connections between the iSER initiator
and target.
Procedure
1 Use the ESXi Shell or vSphere CLI to enable the iSER adapter:
c Under Storage, click Storage Adapters, and review the list of adapters.
If you enabled the adapter, the adapter (vmhba#) appears on the list under the VMware iSCSI
over RDMA (iSER) Adapter category.
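A sketch of the commands commonly used for this step follows, on the assumption that esxcli rdma iser add is the enabling command in your ESXi 6.7 build; verify it with esxcli rdma iser --help, and discover the RDMA-capable vmnic names on your own host.

# List RDMA-capable devices and the vmnics they are associated with
esxcli rdma device list
# Enable the VMware iSER adapter; a new vmhba appears under the iSER category after this step
esxcli rdma iser add
# Confirm that the iSER adapter is listed together with the other iSCSI adapters
esxcli iscsi adapter list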
Important When you modify any default properties for your adapters, make sure to use correct formats
for their names and IP addresses.
Prerequisites
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
4 Click the Properties tab, and click Edit in the General panel.
Option Description
iSCSI Name Unique name formed according to iSCSI standards that identifies the iSCSI
adapter. If you change the name, make sure that the name you enter is worldwide
unique and properly formatted. Otherwise, certain storage devices might not
recognize the iSCSI adapter.
iSCSI Alias A friendly name you use instead of the iSCSI name.
If you change the iSCSI name, it is used for new iSCSI sessions. For existing sessions, the new settings
are not used until you log out and log in again.
Configuring the network connection involves creating a virtual VMkernel adapter for each physical network adapter. You use a 1:1 mapping between each virtual and physical network adapter. You then associate the VMkernel adapter with an appropriate iSCSI or iSER adapter. This process is called port binding.
Figure: On the host, a VMkernel adapter (vmk) on a vSwitch is bound to a physical NIC (vmnic) that connects to the IP network.
n You can connect the software iSCSI adapter with any physical NICs available on your host.
n The dependent iSCSI adapters must be connected only to their own physical NICs.
n You must connect the iSER adapter only to the RDMA-capable network adapter.
For specific considerations on when and how to use network connections with software iSCSI, see the
VMware knowledge base article at http://kb.vmware.com/kb/2038869.
You can use multiple physical adapters in a single or multiple switch configurations.
In the multiple switch configuration, you designate a separate vSphere switch for each virtual-to-physical
adapter pair.
Figure: 1:1 mapping on separate vSphere switches. The iSCSI1 port group with vmk1 is paired with vmnic1 on one vSwitch, and the iSCSI2 port group with vmk2 is paired with vmnic2 on vSwitch2.
An alternative is to add all NICs and VMkernel adapters to the single vSphere switch. In this case, you
must override the default network setup and make sure that each VMkernel adapter maps to only one
corresponding active physical adapter.
Figure: 1:1 mapping on a single vSphere switch. The iSCSI1 port group with vmk1 is paired with vmnic1, and the iSCSI2 port group with vmk2 is paired with vmnic2.
The examples show configurations that use vSphere standard switches, but you can use distributed
switches as well. For more information about vSphere distributed switches, see the vSphere Networking
documentation.
The following considerations apply when you use multiple physical adapters:
n Physical network adapters must be on the same subnet as the storage system they connect to.
n If you use separate vSphere switches, you must connect them to different IP subnets. Otherwise,
VMkernel adapters might experience connectivity problems and the host fails to discover the LUNs.
n The single switch configuration is not appropriate for iSER because iSER does not support NIC
teaming.
Do not use port binding when any of the following conditions exist:
n Array target iSCSI ports are in a different broadcast domain and IP subnet.
n VMkernel adapters used for iSCSI connectivity exist in different broadcast domains, IP subnets, or
use different virtual switches.
Note Make sure that all target portals are reachable from all VMkernel ports when port binding is used.
Otherwise, iSCSI sessions might fail to create. As a result, the rescan operation might take longer than
expected.
No Port Binding
If you do not use port binding, the ESXi networking layer selects the best VMkernel port based on its
routing table. The host uses the port to create an iSCSI session with the target portal. Without port binding, only one session per target portal is created.
If your target has only one network portal, you can create multiple paths to the target by adding multiple
VMkernel ports on your ESXi host and binding them to the iSCSI initiator.
Figure: Multiple paths in a single subnet. Four VMkernel ports (vmk1 at 192.168.0.1/24 on vmnic1, vmk2 at 192.168.0.2/24 on vmnic2, vmk3 at 192.168.0.3/24 on vmnic3, and vmk4 at 192.168.0.4/24 on vmnic4) reach a single target at 192.168.0.10/24 across the network.
In this example, all initiator ports and the target portal are configured in the same subnet. The target is reachable through all bound ports. You have four VMkernel ports and one target portal, so a total of four paths are created.
You can create multiple paths by configuring multiple ports and target portals on different IP subnets. By
keeping initiator and target ports in different subnets, you can force ESXi to create paths through specific
ports. In this configuration, you do not use port binding because port binding requires that all initiator and
target ports are on the same subnet.
Figure: Two VMkernel ports, vmk1 and vmk2, connect across the IP network to ports on SP/Controller A and SP/Controller B.
ESXi selects vmk1 when connecting to Port 0 of Controller A and Controller B because all three ports are on the same subnet. Similarly, vmk2 is selected when connecting to Port 1 of Controller A and B. You can use NIC teaming in this configuration.
In this example, you keep all bound VMkernel ports in one subnet (N1) and configure all target portals in another subnet (N2). You can then add a static route for the target subnet (N2).
Figure: Routed configuration. In subnet N1, vmk1 (192.168.1.1/24 on vmnic1) and vmk2 (192.168.1.2/24 on vmnic2) reach, through the IP network, Port 0 of SP/Controller A (10.115.179.1/24) and Port 0 of SP/Controller B (10.115.179.2/24) in subnet N2.
In this configuration, you use static routing when using different subnets. You cannot use the port binding
with this configuration.
Figure: Static routing with separate subnets. vmk1 (192.168.1.1/24 on vmnic1) reaches Port 0 of SP/Controller A (10.115.155.1/24), and vmk2 (192.168.2.1/24 on vmnic2) reaches Port 0 of SP/Controller A (10.115.179.1/24), through the IP network.
You configure vmk1 and vmk2 in separate subnets, 192.168.1.0 and 192.168.2.0. Your target portals are also in separate subnets, 10.115.155.0 and 10.115.179.0.
You can add the static route for 10.115.155.0 from vmk1. Make sure that the gateway is reachable from
vmk1.
You then add static route for 10.115.179.0 from vmk2. Make sure that the gateway is reachable from
vmk2.
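A sketch of adding those routes from the ESXi Shell follows; the gateway addresses are placeholders for routers that are reachable from vmk1 and vmk2 respectively.

# Route to the first target subnet through a gateway reachable from vmk1 (gateway address is a placeholder)
esxcli network ip route ipv4 add --network 10.115.155.0/24 --gateway 192.168.1.253
# Route to the second target subnet through a gateway reachable from vmk2 (gateway address is a placeholder)
esxcli network ip route ipv4 add --network 10.115.179.0/24 --gateway 192.168.2.253
# Verify the routing table
esxcli network ip route ipv4 list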
Starting with vSphere 6.5, you can configure a separate gateway per VMkernel port. If you use DHCP to
obtain IP configuration for a VMkernel port, gateway information can also be obtained using DHCP.
To see gateway information per VMkernel port, use the following command:
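A command of the following form produces the per-interface gateway output shown below; treat its exact syntax as an assumption and verify it with esxcli network ip interface ipv4 --help on your host.

esxcli network ip interface ipv4 get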
Name IPv4 Address IPv4 Netmask IPv4 Broadcast Address Type Gateway DHCP DNS
---- -------------- ------------- -------------- ------------ -------------- --------
vmk0 10.115.155.122 255.255.252.0 10.115.155.255 DHCP 10.115.155.253 true
vmk1 10.115.179.209 255.255.252.0 10.115.179.255 DHCP 10.115.179.253 true
vmk2 10.115.179.146 255.255.252.0 10.115.179.255 DHCP 10.115.179.253 true
With separate gateways per VMkernel port, you use port binding to reach targets in different subnets.
The following tasks discuss the network configuration with a vSphere Standard switch.
You can also use the VMware vSphere Distributed Switch™ and VMware NSX Virtual Switch™ in the port binding configuration. For information about NSX virtual switches, see the VMware NSX documentation.
Note iSER does not support NIC teaming. When configuring port binding for iSER, use only one RDMA
adapter per vSwitch.
If you use a vSphere distributed switch with multiple uplink ports, for port binding, create a separate distributed port group per physical NIC. Then set the team policy so that each distributed port group has only one active uplink port. For detailed information on distributed switches, see the vSphere Networking documentation.
Procedure
Procedure
5 Click the Add adapters icon, and select the network adapter (vmnic#) to use for iSCSI.
Important If you are creating a VMkernel adapter for dependent hardware iSCSI, select the vmnic
that corresponds to the iSCSI component. See Determine Association Between iSCSI and Network
Adapters. With the iSER adapter, make sure to use an appropriate RDMA-capable vmnic.
A network label is a friendly name that identifies the VMkernel adapter that you are creating, for
example, iSCSI.
You created the virtual VMkernel adapter (vmk#) for a physical network adapter (vmnic#) on your host.
What to do next
If your host has one physical network adapter for iSCSI traffic, you must bind the virtual adapter that you
created to the iSCSI adapter.
If you have multiple network adapters, create additional VMkernel adapters and then perform iSCSI
binding. The number of virtual adapters must correspond to the number of physical adapters on the host.
Prerequisites
Create a vSphere standard switch that maps an iSCSI VMkernel adapter to a single physical network
adapter designated for iSCSI traffic.
Procedure
3 Under Networking, click Virtual switches, and select the vSphere switch that you want to modify
from the list.
c Make sure that you are using the existing switch, and click Next.
d Click the Add adapters icon, and select one or more network adapters (vmnic#) to use for iSCSI.
With the dependent hardware iSCSI adapter, select the vmnic that has a corresponding iSCSI
component. With the iSER adapter, make sure to use an appropriate RDMA-capable vmnic.
5 Create iSCSI VMkernel adapters for all physical network adapters that you added.
The number of VMkernel interfaces must correspond to the number of physical network adapters on
the vSphere standard switch.
c Make sure that you are using the existing switch, and click Next.
What to do next
Change the network policy for all VMkernel adapters, so that only one physical network adapter is active
for each VMkernel adapter. You can then bind the VMkernel adapters to the appropriate iSCSI adapters.
By default, for each VMkernel adapter on the vSphere Standard switch, all network adapters appear as
active. You must override this setup, so that each VMkernel adapter maps to only one corresponding
active physical adapter. For example, vmk1 maps to vmnic1, vmk2 maps to vmnic2, and so on.
Prerequisites
Create a vSphere Standard switch that connects VMkernel with physical network adapters designated for
iSCSI traffic. The number of VMkernel adapters must correspond to the number of physical adapters on
the vSphere Standard switch.
Procedure
3 Under Networking, click Virtual switches, and select the vSphere switch that you want to modify
from the list.
4 Select the VMkernel adapter and click the Edit Settings icon.
5 On the Edit Settings wizard, click Teaming and Failover and select Override under Failover Order.
6 Designate only one physical adapter as active and move all remaining adapters to the Unused
Adapters category.
7 Repeat Step 4 through Step 6 for each iSCSI VMkernel interface on the vSphere Standard switch.
The following table illustrates the proper iSCSI mapping where only one physical network adapter is
active for each VMkernel adapter.
What to do next
After you perform this task, bind the VMkernel adapters to the appropriate iSCSI adapters.
Prerequisites
Create a virtual VMkernel adapter for each physical network adapter on your host. If you use multiple
VMkernel adapters, set up the correct network policy.
Procedure
3 Under Storage, click Storage Adapters, and select the appropriate iSCSI adapter from the list.
4 Click the Network Port Binding tab and click the Add icon.
Note Make sure that the network policy for the VMkernel adapter is compliant with the binding
requirements.
You can bind the software iSCSI adapter to one or more VMkernel adapters. For a dependent
hardware iSCSI adapter or the iSER adapter, only one VMkernel adapter associated with the correct
physical NIC is available.
6 Click OK.
The network connection appears on the list of VMkernel port bindings for the iSCSI adapter.
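The same binding can also be created and checked from the ESXi Shell. A minimal sketch; vmhba65 and
vmk1 are placeholder names for the iSCSI adapter and the bound VMkernel adapter:
# Bind the VMkernel adapter to the iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1
# List the current port bindings for the adapter
esxcli iscsi networkportal list --adapter=vmhba65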
Procedure
3 Under Storage, click Storage Adapters, and select the appropriate iSCSI adapter from the list.
4 Click the Network Port Binding tab and click the View Details icon.
After you create network connections for iSCSI, an iSCSI indicator becomes enabled in the
vSphere Client. The indicator shows that a particular virtual or physical network adapter is iSCSI-bound.
To avoid disruptions in iSCSI traffic, follow these guidelines and considerations when managing
iSCSI-bound virtual and physical network adapters:
n Make sure that the VMkernel network adapters are assigned addresses on the same subnet as the
iSCSI storage portal they connect to.
n iSCSI adapters using VMkernel adapters cannot connect to iSCSI ports on different subnets, even if
the iSCSI adapters discover those ports.
n When using separate vSphere switches to connect physical network adapters and VMkernel
adapters, make sure that the vSphere switches connect to different IP subnets.
n If VMkernel adapters are on the same subnet, they must connect to a single vSwitch.
n If you migrate VMkernel adapters to a different vSphere switch, move associated physical adapters.
n Do not make configuration changes to iSCSI-bound VMkernel adapters or physical network adapters.
n Do not make changes that might break association of VMkernel adapters and physical network
adapters. You can break the association if you remove one of the adapters or the vSphere switch that
connects them. Or if you change the 1:1 network policy for their connection.
Problem
The VMkernel adapter's port group policy is considered non-compliant in the following cases:
n The VMkernel adapter is connected to more than one physical network adapter.
Solution
Follow the steps in Change Network Policy for iSCSI to set up the correct network policy for the
iSCSI-bound VMkernel adapter.
Jumbo Frames are Ethernet frames with a size that exceeds 1500 bytes. The maximum transmission
unit (MTU) parameter is typically used to measure the size of Jumbo Frames.
When you use Jumbo Frames for iSCSI traffic, the following considerations apply:
n Check with your vendors to ensure your physical NICs and iSCSI adapters support Jumbo Frames.
n To set up and verify physical network switches for Jumbo Frames, consult your vendor
documentation.
The following table explains the level of support that ESXi provides for Jumbo Frames.
To enable Jumbo Frames, change the default value of the maximum transmission units (MTU) parameter.
You change the MTU parameter on the vSphere switch that you use for iSCSI traffic. For more
information, see the vSphere Networking documentation.
Procedure
3 Under Networking, click Virtual switches, and select the vSphere switch that you want to modify
from the list.
This step sets the MTU for all physical NICs on that standard switch. Set the MTU value to the largest
MTU size among all NICs connected to the standard switch. ESXi supports an MTU size of up to
9000 bytes.
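For reference, the MTU can also be changed from the ESXi Shell; a sketch assuming a standard switch
named vSwitch1 and an iSCSI VMkernel adapter vmk1, both placeholders:
# Enable jumbo frames on the standard switch used for iSCSI
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# Match the MTU on the iSCSI VMkernel adapter
esxcli network ip interface set --interface-name=vmk1 --mtu=9000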
Use the Advanced Options settings to change the MTU parameter for the iSCSI HBA.
Procedure
3 Under Storage, click Storage Adapters, and select the independent hardware iSCSI adapter from
the list of adapters.
Dynamic Discovery Also known as SendTargets discovery. Each time the initiator contacts a
specified iSCSI server, the initiator sends the SendTargets request to the
server. The server responds by supplying a list of available targets to the
initiator. The names and IP addresses of these targets appear on the Static
Discovery tab. If you remove a static target added by dynamic discovery,
the target might be returned to the list the next time a rescan happens, the
iSCSI adapter is reset, or the host is rebooted.
Note With software and dependent hardware iSCSI, ESXi filters target
addresses based on the IP family of the iSCSI server address specified. If
the address is IPv4, IPv6 addresses that might come in the SendTargets
response from the iSCSI server are filtered out. When DNS names are
used to specify an iSCSI server, or when the SendTargets response from
the iSCSI server has DNS names, ESXi relies on the IP family of the first
resolved entry from DNS lookup.
Static Discovery In addition to the dynamic discovery method, you can use static discovery
and manually enter information for the targets. The iSCSI adapter uses a
list of targets that you provide to contact and communicate with the iSCSI
servers.
When you set up static or dynamic discovery, you can only add new iSCSI targets. You cannot change
any parameters of an existing target. To make changes, remove the existing target and add a new one.
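For example, dynamic and static targets can be added with esxcli. This is a sketch; the adapter name,
portal address, and IQN are placeholders:
# Add a SendTargets (dynamic discovery) address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=10.115.155.10:3260
# Add a static target
esxcli iscsi adapter discovery statictarget add --adapter=vmhba65 --address=10.115.155.10:3260 --name=iqn.2000-01.com.example:target01
# Rescan the adapter to discover the new targets
esxcli storage core adapter rescan --adapter=vmhba65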
Prerequisites
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
Procedure
3 Under Storage, click Storage Adapters, and select the iSCSI adapter to modify from the list.
If you are removing the static target that was dynamically discovered, you need to remove it from the
storage system before performing the rescan. Otherwise, your host will automatically discover and
add the target to the list of static targets when you rescan the adapter.
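The equivalent removal from the ESXi Shell looks like the following sketch; the adapter name, portal
address, and target name are placeholders:
# Remove a dynamic (SendTargets) discovery address
esxcli iscsi adapter discovery sendtarget remove --adapter=vmhba65 --address=10.115.155.10:3260
# Remove a static target entry
esxcli iscsi adapter discovery statictarget remove --adapter=vmhba65 --address=10.115.155.10:3260 --name=iqn.2000-01.com.example:target01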
CHAP uses a three-way handshake algorithm to verify the identity of your host and, if applicable, of the
iSCSI target when the host and target establish a connection. The verification is based on a predefined
private value, or CHAP secret, that the initiator and target share.
ESXi supports CHAP authentication at the adapter level. In this case, all targets receive the same CHAP
name and secret from the iSCSI initiator. For software and dependent hardware iSCSI adapters, ESXi
also supports per-target CHAP authentication, which allows you to configure different credentials for each
target to achieve a greater level of security.
Before configuring CHAP, check whether CHAP is enabled at the iSCSI storage system. Also, obtain
information about the CHAP authentication method the system supports. If CHAP is enabled, configure it
for your initiators, making sure that the CHAP authentication credentials match the credentials on the
iSCSI storage.
Unidirectional CHAP In unidirectional CHAP authentication, the target authenticates the initiator,
but the initiator does not authenticate the target.
Bidirectional CHAP The bidirectional CHAP authentication adds an extra level of security. With
this method, the initiator can also authenticate the target. VMware supports
this method for software and dependent hardware iSCSI adapters only.
For software and dependent hardware iSCSI adapters, you can set unidirectional CHAP and bidirectional
CHAP for each adapter or at the target level. Independent hardware iSCSI supports CHAP only at the
adapter level.
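For reference, adapter-level CHAP can also be inspected and configured with esxcli. This is a hedged
sketch; the exact options of esxcli iscsi adapter auth chap can vary by release, and the adapter name,
CHAP name, and level shown are placeholders:
# Review the current CHAP settings for the adapter
esxcli iscsi adapter auth chap get --adapter=vmhba65
# Require unidirectional CHAP at the adapter level
# (the CHAP secret can typically be supplied with --secret or entered interactively)
esxcli iscsi adapter auth chap set --adapter=vmhba65 --direction=uni --level=required --authname=chapuser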
When you set the CHAP parameters, specify a security level for CHAP.
Note When you specify the CHAP security level, how the storage array responds depends on the array’s
CHAP implementation and is vendor-specific. For information on CHAP authentication behavior in
different initiator and target configurations, consult the array documentation.
None The host does not use CHAP authentication. If authentication is enabled, use this option to
disable it. Supported with software iSCSI, dependent hardware iSCSI, and independent hardware iSCSI.
Use unidirectional CHAP if required by target The host prefers a non-CHAP connection, but can use a
CHAP connection if required by the target. Supported with software iSCSI and dependent hardware iSCSI.
Use unidirectional CHAP unless prohibited by target The host prefers CHAP, but can use non-CHAP
connections if the target does not support CHAP. Supported with software iSCSI, dependent hardware
iSCSI, and independent hardware iSCSI.
Use unidirectional CHAP The host requires successful CHAP authentication. The connection fails if
CHAP negotiation fails. Supported with software iSCSI, dependent hardware iSCSI, and independent
hardware iSCSI.
Use bidirectional CHAP The host and the target support bidirectional CHAP. Supported with software
iSCSI and dependent hardware iSCSI.
The CHAP name cannot exceed 511 alphanumeric characters and the CHAP secret cannot exceed 255
alphanumeric characters. Some adapters, for example the QLogic adapter, might have lower limits, 255
for the CHAP name and 100 for the CHAP secret.
Prerequisites
n Before setting up CHAP parameters for software or dependent hardware iSCSI, determine whether to
configure unidirectional or bidirectional CHAP. Independent hardware iSCSI adapters do not support
bidirectional CHAP.
n Verify CHAP parameters configured on the storage side. Parameters that you configure must match
the ones on the storage side.
Procedure
2 Under Adapter Details, click the Properties tab and click Edit in the Authentication panel.
n None
n Use bidirectional CHAP. To configure bidirectional CHAP, you must select this option.
Make sure that the name you specify matches the name configured on the storage side.
n To set the CHAP name to the iSCSI adapter name, select Use initiator name.
n To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator
name and enter a name in the Name text box.
5 Enter an outgoing CHAP secret to be used as part of authentication. Use the same secret that you
enter on the storage side.
Make sure to use different secrets for the outgoing and incoming CHAP.
7 Click OK.
If you change the CHAP parameters, they are used for new iSCSI sessions. For existing sessions, new
settings are not used until you log out and log in again.
The CHAP name cannot exceed 511 alphanumeric characters and the CHAP secret cannot exceed 255
alphanumeric characters.
Prerequisites
n Before setting up CHAP parameters for software or dependent hardware iSCSI, determine whether to
configure unidirectional or bidirectional CHAP.
n Verify CHAP parameters configured on the storage side. Parameters that you configure must match
the ones on the storage side.
Procedure
1 Select the iSCSI adapter to configure, and click the Targets tab under Adapter Details.
3 From the list of available targets, select a target to configure and click Authentication.
n None
n Use bidirectional CHAP. To configure bidirectional CHAP, you must select this option.
Make sure that the name you specify matches the name configured on the storage side.
n To set the CHAP name to the iSCSI adapter name, select Use initiator name.
n To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator
name and enter a name in the Name text box.
6 Enter an outgoing CHAP secret to be used as part of authentication. Use the same secret that you
enter on the storage side.
Make sure to use different secrets for the outgoing and incoming CHAP.
8 Click OK.
If you change the CHAP parameters, they are used for new iSCSI sessions. For existing sessions, new
settings are not used until you log out and log in again.
The following table lists advanced iSCSI parameters that you can configure using the vSphere Client. In
addition, you can use the vSphere CLI commands to configure some of the advanced parameters. For
information, see the Getting Started with vSphere Command-Line Interfaces documentation.
Depending on the type of your adapters, certain parameters might not be available.
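For example, advanced parameters can be inspected from the ESXi Shell; a sketch with a placeholder
adapter name, and with DelayedAck assumed as the key name for the Delayed ACK setting:
# List the advanced parameters and their current values for the adapter
esxcli iscsi adapter param get --adapter=vmhba65
# Example of changing a parameter; do this only when directed by VMware support or your storage vendor
esxcli iscsi adapter param set --adapter=vmhba65 --key=DelayedAck --value=false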
Important Do not change the advanced iSCSI settings unless VMware support or your storage vendor
directs you to change them.
Header Digest Increases data integrity. When the header digest parameter is enabled, the system performs a
checksum over each header part of the iSCSI Protocol Data Unit (PDU). The system verifies
the data using the CRC32C algorithm.
Data Digest Increases data integrity. When the data digest parameter is enabled, the system performs a
checksum over each PDU data part. The system verifies the data using the CRC32C algorithm.
Note Systems that use the Intel Nehalem processors offload the iSCSI digest calculations for
software iSCSI. This offload helps to reduce the impact on performance.
ErrorRecoveryLevel iSCSI Error Recovery Level (ERL) value that the iSCSI initiator on the host negotiates during a
login.
LoginRetryMax Maximum number of times the ESXi iSCSI initiator attempts to log into a target before ending
the attempts.
MaxOutstandingR2T Defines the R2T (Ready to Transfer) PDUs that can be in transition before an acknowledge
PDU is received.
FirstBurstLength Specifies the maximum amount of unsolicited data an iSCSI initiator can send to the target
during the execution of a single SCSI command, in bytes.
MaxBurstLength Maximum SCSI data payload in a Data-In or a solicited Data-Out iSCSI sequence, in bytes.
MaxRecvDataSegLength Maximum data segment length, in bytes, that can be received in an iSCSI PDU.
MaxCommands Maximum SCSI commands that can be queued on the iSCSI adapter.
DefaultTimeToWait Minimum time in seconds to wait before attempting a logout or an active task reassignment
after an unexpected connection termination or reset.
DefaultTimeToRetain Maximum time in seconds, during which reassigning the active task is still possible after a
connection termination or reset.
LoginTimeout Time, in seconds, that the initiator waits for the login response to finish.
LogoutTimeout Time, in seconds, that the initiator waits for a response to the Logout request PDU.
RecoveryTimeout Specifies the amount of time, in seconds, that can lapse while a session recovery is performed.
If the timeout exceeds its limit, the iSCSI initiator ends the session.
No-Op Interval Specifies the time interval, in seconds, between NOP-Out requests sent from your iSCSI
initiator to an iSCSI target. The NOP-Out requests serve as the ping mechanism to verify that a
connection between the iSCSI initiator and the iSCSI target is active.
No-Op Timeout Specifies the amount of time, in seconds, that can lapse before your host receives a NOP-In
message. The iSCSI target sends the message in response to the NOP-Out request. When the
no-op timeout limit is exceeded, the initiator ends the current session and starts a new one.
ARP Redirect With this parameter enabled, storage systems can move iSCSI traffic dynamically from one
port to another. Storage systems that perform array-based failovers require the ARP parameter.
Delayed ACK With this parameter enabled, storage systems can delay an acknowledgment of received data
packets.
Caution Do not make any changes to the advanced iSCSI settings unless you are working with the
VMware support team or otherwise have thorough information about the values to provide for the
settings.
Prerequisites
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
n To configure advanced parameters at the adapter level, under Adapter Details, click the
Advanced Options tab and click Edit.
a Click the Targets tab and click either Dynamic Discovery or Static Discovery.
b From the list of available targets, select a target to configure and click Advanced Options.
5 Enter any required values for the advanced parameters you want to modify.
By default, software iSCSI and dependent hardware iSCSI initiators start one iSCSI session between
each initiator port and each target port. If your iSCSI initiator or target has more than one port, your host
can have multiple sessions established. The default number of sessions for each target equals the
number of ports on the iSCSI adapter times the number of target ports.
Using vSphere CLI, you can display all current sessions to analyze and debug them. To create more
paths to storage systems, you can increase the default number of sessions by duplicating existing
sessions between the iSCSI adapter and target ports.
You can also establish a session to a specific target port. This capability is useful if your host connects to
a single-port storage system that presents only one target port to your initiator. The system then redirects
additional sessions to a different target port. Establishing a new session between your iSCSI initiator and
another target port creates an additional path to the storage system.
n Some storage systems do not support multiple sessions from the same initiator name or endpoint.
Attempts to create multiple sessions to such targets can result in an unpredictable behavior of your
iSCSI environment.
n Storage vendors can provide automatic session managers. Using the automatic session manager to
add or delete sessions does not guarantee lasting results and can interfere with the storage
performance.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
Option Description
-A|--adapter=str The iSCSI adapter name, for example, vmhba34.
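A minimal sketch of listing sessions; vmhba34 follows the placeholder adapter name used in the option
description above:
# List all iSCSI sessions on the host
esxcli iscsi session list
# List only the sessions of one adapter
esxcli iscsi session list --adapter=vmhba34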
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
Option Description
-A|--adapter=str The iSCSI adapter name, for example, vmhba34. This option is required.
-s|--isid=str The ISID of a session to duplicate. You can find it by listing all sessions.
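A sketch of duplicating a session using the options above; the adapter name and ISID value are
placeholders taken from the session list output:
# Duplicate an existing session to create an additional path
esxcli iscsi session add --adapter=vmhba34 --isid=00023d000001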
What to do next
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
Option Description
-A|--adapter=str The iSCSI adapter name, for example, vmhba34. This option is required.
-s|--isid=str The ISID of a session to remove. You can find it by listing all sessions.
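A sketch of removing a duplicated session; the adapter name and ISID value are placeholders:
# Remove the session identified by its ISID
esxcli iscsi session remove --adapter=vmhba34 --isid=00023d000001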
What to do next
You can use boot from SAN if you do not want to handle maintenance of local storage or if you have
diskless hardware configurations, such as blade systems.
Independent hardware iSCSI: Configure the iSCSI HBA to boot from the SAN. For information on
configuring the HBA, see Configure Independent Hardware iSCSI Adapter for SAN Boot.
Software and dependent hardware iSCSI: Use the network adapter that supports the iBFT. For
information, see iBFT iSCSI Boot Overview.
The following guidelines apply to booting from the independent hardware iSCSI and iBFT.
n Review any vendor recommendations for the hardware you use in your boot configuration.
n For installation prerequisites and requirements, review vSphere Installation and Setup.
n The boot LUN must be visible only to the host that uses the LUN. No other host on the SAN is
permitted to see that boot LUN.
n If a LUN is used for a VMFS datastore, multiple hosts can share the LUN.
n With the independent hardware iSCSI only, you can place the diagnostic partition on the boot
LUN. If you configure the diagnostic partition in the boot LUN, this LUN cannot be shared across
multiple hosts. If a separate LUN is used for the diagnostic partition, multiple hosts can share the
LUN.
n If you boot from SAN using iBFT, you cannot set up a diagnostic partition on a SAN LUN. To
collect your host's diagnostic information, use the vSphere ESXi Dump Collector on a remote
server. For information about the ESXi Dump Collector, see vCenter Server Installation and Setup
and vSphere Networking.
Caution If you use scripted installation to install ESXi when booting from a SAN, you must take special
steps to avoid unintended data loss.
Procedure
1 Connect network cables, referring to any cabling guide that applies to your setup.
Verify configuration of any routers or switches on your storage network. Storage systems must be
able to ping the iSCSI adapters in your hosts.
a Create a volume (or LUN) on the storage system for your host to boot from.
b Configure the storage system so that your host has access to the assigned LUN.
This step might involve updating ACLs with the IP addresses, iSCSI names, and the CHAP
authentication parameter you use on your host. On some storage systems, in addition to
providing access information for the ESXi host, you must also explicitly associate the assigned
LUN with the host.
e Record the iSCSI name and IP addresses of the targets assigned to the host.
This procedure discusses how to enable the QLogic iSCSI HBA to boot from the SAN. For more
information and more up-to-date details about QLogic adapter configuration settings, see the QLogic
website.
Prerequisites
Because you start with booting from the VMware installation media, set up your host to boot from
CD/DVD-ROM.
Procedure
1 Insert the installation CD/DVD in the CD/DVD-ROM drive and reboot the host.
2 Use the BIOS to set the host to boot from the CD/DVD-ROM drive first.
3 During server POST, press Ctrl+Q to enter the QLogic iSCSI HBA configuration menu.
a From the Fast!UTIL Options menu, select Configuration Settings > Host Adapter Settings.
b (Optional) Configure the following settings for your host adapter: initiator IP address, subnet
mask, gateway, initiator iSCSI name, and CHAP.
Procedure
1 From the Fast!UTIL Options menu, select Configuration Settings > iSCSI Boot Settings.
2 Before you can set SendTargets, set Adapter Boot mode to Manual.
n If only one iSCSI target and one LUN are available at the target address, leave Boot LUN
and iSCSI Name blank.
After your host reaches the target storage system, these text boxes are populated with
appropriate information.
n If more than one iSCSI target and LUN are available, supply values for Boot LUN and iSCSI
Name.
c Save changes.
4 From the iSCSI Boot Settings menu, select the primary boot device.
If more than one LUN exists within the target, you can select a specific LUN ID by pressing Enter
after you locate the iSCSI device.
6 Return to the Primary Boot Device Setting menu. After the rescan, Boot LUN and iSCSI Name are
populated. Change the value of Boot LUN to the appropriate LUN ID.
To deploy ESXi and boot from the iSCSI SAN, the host must have an iSCSI boot capable network
adapter. The adapter must support the iSCSI Boot Firmware Table (iBFT) format, a method of
communicating parameters about the iSCSI boot device to an operating system.
Before installing ESXi and booting from the iSCSI SAN, configure the networking and iSCSI boot
parameters on the network adapter. Because configuring the network adapter is vendor-specific, review
your vendor documentation for instructions.
When you boot from iSCSI for the first time, the iSCSI boot firmware on your system connects to an iSCSI
target. If a login is successful, the firmware saves the networking and iSCSI boot parameters in the iBFT
and stores the table in the system's memory. The system uses this table to configure its own iSCSI
connection and networking and to start up.
1 When restarted, the system BIOS detects the iSCSI boot firmware on the network adapter.
2 The iSCSI boot firmware uses the preconfigured boot parameters to connect with the specified iSCSI
target.
3 After the successful connection, the iSCSI boot firmware writes the networking and iSCSI boot
parameters in to the iBFT. The firmware stores the table in the system memory.
Note The system uses this table to configure its own iSCSI connection and networking and to start
up.
5 The VMkernel starts loading and takes over the boot operation.
6 Using the boot parameters from the iBFT, the VMkernel connects to the iSCSI target.
n Update your NIC's boot code and iBFT firmware using vendor-supplied tools before trying to install
and boot VMware ESXi. Consult vendor documentation and the VMware HCL for supported boot code
and iBFT firmware versions for VMware ESXi iBFT boot.
n The iBFT iSCSI boot does not support failover for the iBFT-enabled network adapters.
n After you set up your host to boot from iBFT iSCSI, the following restrictions apply:
n You cannot disable the software iSCSI adapter. If the iBFT configuration is present in the BIOS,
the host re-enables the software iSCSI adapter during each reboot.
Note If you do not use the iBFT-enabled network adapter for the iSCSI boot and do not want the
software iSCSI adapter to be always enabled, remove the iBFT configuration from the network
adapter.
n You cannot remove the iBFT iSCSI boot target using the vSphere Client. The target appears on
the list of adapter static targets.
When you set up your host to boot with iBFT, you perform a number of tasks.
Configuration on the network adapter can be dynamic or static. If you use the dynamic configuration, you
indicate that all target and initiator boot parameters are acquired using DHCP. For the static configuration,
you manually enter data that includes your host's IP address and initiator IQN, and the target parameters.
Procedure
u On the network adapter that you use for the boot from iSCSI, specify networking and iSCSI
parameters.
Because configuring the network adapter is vendor-specific, review your vendor documentation for
instructions.
n iSCSI
n DVD-ROM
Because changing the boot sequence in the BIOS is vendor-specific, refer to vendor documentation for
instructions. The following sample procedure explains how to change the boot sequence on a Dell host
with a Broadcom network adapter.
Procedure
4 In the Boot Sequence menu, arrange the bootable items so that iSCSI precedes the DVD-ROM.
7 Select Save Changes and click Exit to exit the BIOS Setup menu.
Prerequisites
n Configure iSCSI boot firmware on your boot NIC to point to the target LUN that you want to use as
the boot LUN.
n Change the boot sequence in the BIOS so that iSCSI precedes the DVD-ROM.
Procedure
1 Insert the installation media in the CD/DVD-ROM drive and restart the host.
The installer copies the ESXi boot image to the iSCSI LUN.
Prerequisites
n Configure the iSCSI boot firmware on your boot NIC to point to the boot LUN.
n Change the boot sequence in the BIOS so that iSCSI precedes the boot device.
Procedure
The host boots from the iSCSI LUN using iBFT data. During the first boot, the iSCSI initialization
script sets up default networking. The network setup is persistent after subsequent reboots.
To achieve greater security and better performance, have redundant network adapters on the host.
How you set up all the network adapters depends on whether your environment uses shared or isolated
networks for the iSCSI traffic and host management traffic.
n If you use VLANs to isolate the networks, they must have different subnets to ensure that routing
tables are properly set up.
n VMware recommends that you configure the iSCSI adapter and target to be on the same subnet. If
you set up the iSCSI adapter and target on different subnets, the following restrictions apply:
n The default VMkernel gateway must be able to route both the management and iSCSI traffic.
n After you boot your host, you can use the iBFT-enabled network adapter only for iBFT. You
cannot use the adapter for other iSCSI traffic.
n Use the first physical network adapter for the management network.
n Use the second physical network adapter for the iSCSI network. Make sure to configure the iBFT.
n After the host boots, you can add secondary network adapters to both the management and iSCSI
networks.
Procedure
The host boots using the new information stored in the iBFT.
Problem
Cause
When you specify a gateway in the iBFT-enabled network adapter during ESXi installation, this gateway
becomes the system's default gateway. If you delete the port group associated with the network adapter,
the system's default gateway is lost. This action causes the loss of network connectivity.
Solution
Do not set an iBFT gateway unless it is required. If the gateway is required, after installation, manually set
the system's default gateway to the one that the management network uses.
Problem
If you change the iSCSI boot parameters on the network adapter after the first ESXi boot from iSCSI, the
host will boot in a stateless mode.
Cause
The firmware uses the updated boot configuration to connect to the iSCSI target and load the ESXi
image. However, when loaded, the system does not pick up the new parameters, but continues to use
persistent networking and iSCSI parameters from the previous boot. As a result, the host cannot connect
to the target and boots in the stateless mode.
Solution
2 Reconfigure the iSCSI and networking parameters on the host, so that they match the iBFT
parameters.
3 Perform a rescan.
Check with your storage representative if your storage system supports Storage API - Array Integration
hardware acceleration features. If it does, refer to your vendor documentation to enable hardware
acceleration support on the storage system side. For more information, see Chapter 24 Storage
Hardware Acceleration.
n Do not change the path policy the system sets for you unless you understand the implications of
making such a change.
n Document everything. Include information about configuration, access control, storage, switch, server
and iSCSI HBA configuration, software and firmware versions, and storage cable plan.
n Cross off different links, switches, HBAs, and other elements to ensure that you did not miss a
critical failure point in your design.
n Ensure that the iSCSI HBAs are installed in the correct slots in the ESXi host, based on slot and bus
speed. Balance PCI bus load among the available buses in the server.
n Become familiar with the various monitor points in your storage network, at all visibility points,
including ESXi performance charts, Ethernet switch statistics, and storage performance statistics.
n Change LUN IDs only when VMFS datastores deployed on the LUNs have no running virtual
machines. If you change the ID, virtual machines running on the VMFS datastore might fail.
After you change the ID of the LUN, you must rescan your storage to reset the ID on your host. For
information on using the rescan, see Storage Rescan Operations.
n If you change the default iSCSI name of your iSCSI adapter, make sure that the name you enter is
worldwide unique and properly formatted. To avoid storage access problems, never assign the same
iSCSI name to different adapters, even on different hosts.
If the network environment is properly configured, the iSCSI components provide adequate throughput
and low enough latency for iSCSI initiators and targets. If the network is congested and links, switches or
routers are saturated, iSCSI performance suffers and might not be adequate for ESXi environments.
If issues occur with storage system performance, consult your storage system vendor’s documentation for
any relevant information.
When you assign LUNs, remember that you can access each shared LUN through a number of hosts,
and that a number of virtual machines can run on each host. One LUN used by the ESXi host can service
I/O from many different applications running on different operating systems. Because of this diverse
workload, the RAID group that contains the ESXi LUNs should not include LUNs used by other hosts that
are not running ESXi for I/O-intensive applications.
Load balancing is the process of spreading server I/O requests across all available SPs and their
associated host server paths. The goal is to optimize performance in terms of throughput (I/O per second,
megabytes per second, or response times).
SAN storage systems require continual redesign and tuning to ensure that I/O is load balanced across all
storage system paths. To meet this requirement, distribute the paths to the LUNs among all the SPs to
provide optimal load balancing. Close monitoring indicates when it is necessary to manually rebalance
the LUN distribution.
Tuning statically balanced storage systems is a matter of monitoring the specific performance statistics
(such as I/O operations per second, blocks per second, and response time) and distributing the LUN
workload to spread the workload across all the SPs.
Each server application must have access to its designated storage with the following conditions:
Because each application has different requirements, you can meet these goals by selecting an
appropriate RAID group on the storage system.
n Place each LUN on a RAID group that provides the necessary performance levels. Monitor the
activities and resource use of other LUNS in the assigned RAID group. A high-performance RAID
group that has too many applications doing I/O to it might not meet performance goals required by an
application running on the ESXi host.
n To achieve maximum throughput for all the applications on the host during the peak period, install
enough network adapters or iSCSI hardware adapters. I/O spread across multiple ports provides
faster throughput and less latency for each application.
n To provide redundancy for software iSCSI, make sure that the initiator is connected to all network
adapters used for iSCSI connectivity.
n When allocating LUNs or RAID groups for ESXi systems, remember that multiple operating systems
use and share that resource. The LUN performance required by the ESXi host might be much higher
than when you use regular physical machines. For example, if you expect to run four I/O intensive
applications, allocate four times the performance capacity for the ESXi LUNs.
n When you use multiple ESXi systems with vCenter Server, the storage performance requirements
increase.
n The number of outstanding I/Os needed by applications running on an ESXi system must match the
number of I/Os the SAN can handle.
Network Performance
A typical SAN consists of a collection of computers connected to a collection of storage systems through
a network of switches. Several computers often access the same storage.
The following graphic shows several computer systems connected to a storage system through an
Ethernet switch. In this configuration, each system is connected through a single Ethernet link to the
switch. The switch is connected to the storage system through a single Ethernet link.
When systems read data from storage, the storage responds by sending enough data to fill the link
between the storage systems and the Ethernet switch. It is unlikely that any single system or virtual
machine gets full use of the network speed. However, this situation can be expected when many systems
share one storage device.
When writing data to storage, multiple systems or virtual machines might attempt to fill their links. As a
result, the switch between the systems and the storage system might drop network packets. The data
drop might occur because the switch has more traffic to send to the storage system than a single link can
carry. The amount of data the switch can transmit is limited by the speed of the link between it and the
storage system.
Recovering from dropped network packets results in large performance degradation. In addition to time
spent determining that data was dropped, the retransmission uses network bandwidth that can otherwise
be used for current transactions.
iSCSI traffic is carried on the network by the Transmission Control Protocol (TCP). TCP is a reliable
transmission protocol that ensures that dropped packets are retried and eventually reach their
destination. TCP is designed to recover from dropped packets and retransmits them quickly and
seamlessly. However, when the switch discards packets with any regularity, network throughput suffers.
The network becomes congested with requests to resend data and with the resent packets. Less data is
transferred than in a network without congestion.
Most Ethernet switches can buffer, or store, data. This technique gives every device attempting to send
data an equal chance to get to the destination. The ability to buffer some transmissions, combined with
many systems limiting the number of outstanding commands, reduces transmissions to small bursts. The
bursts from several systems can be sent to a storage system in turn.
If the transactions are large and multiple servers are sending data through a single switch port, the ability
to buffer can be exceeded. In this case, the switch drops the data it cannot send, and the storage system
must request a retransmission of the dropped packet. For example, if an Ethernet switch can buffer 32
KB, but the server sends 256 KB to the storage device, some of the data is dropped.
Most managed switches provide information on dropped packets, similar to the following:
*: interface is up
IHQ: pkts in input hold queue IQD: pkts dropped from input queue
OHQ: pkts in output hold queue OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec) RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec) TXPS: tx rate (pkts/sec)
TRTL: throttle count
In this example from a Cisco switch, the bandwidth used is 476303000 bits/second, which is less than
half of wire speed. The port is buffering incoming packets, but has dropped several packets. The final line
of this interface summary indicates that this port has already dropped almost 10,000 inbound packets in
the IQD column.
Configuration changes to avoid this problem involve making sure several input Ethernet links are not
funneled into one output link, resulting in an oversubscribed link. When several links transmitting near
capacity are switched to a smaller number of links, oversubscription becomes possible.
Generally, applications or systems that write much data to storage must avoid sharing Ethernet links to a
storage device. These types of applications perform best with multiple connections to storage devices.
Multiple Connections from Switch to Storage shows multiple connections from the switch to the storage.
Using VLANs or VPNs does not provide a suitable solution to the problem of link oversubscription in
shared configurations. VLANs and other virtual partitioning of a network provide a way of logically
designing a network. However, they do not change the physical capabilities of links and trunks between
switches. When storage traffic and other network traffic share physical connections, oversubscription and
lost packets might become possible. The same is true of VLANs that share interswitch trunks.
Performance design for a SAN must consider the physical limitations of the network, not logical
allocations.
Switches that have ports operating near maximum throughput much of the time do not provide optimum
performance. If you have ports in your iSCSI SAN running near the maximum, reduce the load. If the port
is connected to an ESXi system or iSCSI storage, you can reduce the load by using manual load
balancing.
If the port is connected between multiple switches or routers, consider installing additional links between
these components to handle more load. Ethernet switches also commonly provide information about
transmission errors, queued packets, and dropped Ethernet packets. If the switch regularly reports any of
these conditions on ports being used for iSCSI traffic, performance of the iSCSI SAN will be poor.
After the devices get registered with your host, you can display all available local and networked devices
and review their information. If you use third-party multipathing plug-ins, the storage devices available
through the plug-ins also appear on the list.
Note If an array supports implicit asymmetric logical unit access (ALUA) and has only standby paths, the
registration of the device fails. The device can register with the host after the target activates a standby
path and the host detects it as active. The advanced system /Disk/FailDiskRegistration parameter
controls this behavior of the host.
For each storage adapter, you can display a separate list of storage devices available for this adapter.
Generally, when you review storage devices, you see the following information.
Name Also called Display Name. It is a name that the ESXi host assigns to the device based on the
storage type and manufacturer. You can change this name to a name of your choice.
Operational State Indicates whether the device is attached or detached. For details, see Detach Storage
Devices.
LUN Logical Unit Number (LUN) within the SCSI target. The LUN number is provided by the
storage system. If a target has only one LUN, the LUN number is always zero (0).
Drive Type Information about whether the device is a flash drive or a regular HDD drive. For information
about flash drives and NVMe devices, see Chapter 15 Working with Flash Devices.
Transport Transportation protocol your host uses to access the device. The protocol depends on the
type of storage being used. See Types of Physical Storage.
Owner The plug-in, such as the NMP or a third-party plug-in, that the host uses to manage paths to
the storage device. For details, see Pluggable Storage Architecture and Path Management.
Hardware Acceleration Information about whether the storage device assists the host with virtual machine
management operations. The status can be Supported, Not Supported, or Unknown. For
details, see Chapter 24 Storage Hardware Acceleration.
Sector Format Indicates whether the device uses a traditional, 512n, or advanced sector format, such as
512e or 4Kn. For more information, see Device Sector Formats.
Partition Format A partition scheme used by the storage device. It can be of a master boot record (MBR) or
GUID partition table (GPT) format. The GPT devices can support datastores greater than 2
TB. For more information, see Device Sector Formats.
Multipathing Policies Path Selection Policy and Storage Array Type Policy the host uses to manage paths to
storage. For more information, see Chapter 18 Understanding Multipathing and Failover.
The Storage Devices view allows you to list the host's storage devices, analyze their information, and
modify properties.
Procedure
All storage devices available to the host are listed in the Storage Devices table.
4 To view details for a specific device, select the device from the list.
Icon Description
Refresh Refresh information about storage adapters, topology, and file systems.
Rescan Rescan all storage adapters on the host to discover newly added storage devices or VMFS
datastores.
Turn On LED Turn on the locator LED for the selected devices.
Turn Off LED Turn off the locator LED for the selected devices.
Mark as Local Mark the selected devices as local for the host.
Mark as Remote Mark the selected devices as remote for the host.
6 Use tabs under Device Details to access additional information and modify properties for the selected
device.
Tab Description
Properties View device properties and characteristics. View and modify multipathing policies
for the device.
Paths Display paths available for the device. Disable or enable a selected path.
Procedure
All storage adapters installed on the host are listed in the Storage Adapters table.
4 Select the adapter from the list and click the Devices tab.
Storage devices that the host can access through the adapter are displayed.
Icon Description
Refresh Refresh information about storage adapters, topology, and file systems.
Rescan Rescan all storage adapters on the host to discover newly added storage devices or VMFS
datastores.
Turn On LED Turn on the locator LED for the selected devices.
Turn Off LED Turn off the locator LED for the selected devices.
Mark as Local Mark the selected devices as local for the host.
Mark as Remote Mark the selected devices as remote for the host.
This table introduces different storage device formats that ESXi supports.
ESXi detects and registers the 4Kn devices and automatically emulates them as 512e. The device is
presented to upper layers in ESXi as 512e. But the guest operating systems always see it as a 512n
device. You can continue using existing VMs with legacy guest OSes and applications on the host with
the 4Kn devices.
n ESXi does not support 4Kn SSD and NVMe devices, or 4Kn devices as RDMs.
n You can use the 4Kn device to configure a coredump partition and coredump file.
n Only the NMP plug-in can claim the 4Kn devices. You cannot use the HPP to claim these devices.
n With vSAN, you can use only the 4Kn capacity HDDs for vSAN Hybrid Arrays. For information, see
the Administering VMware vSAN documentation.
n Due to the software emulation layer, the performance of the 4Kn devices depends on the alignment of
the I/Os. For best performance, run workloads that issue mostly 4K aligned I/Os.
n Workloads accessing the emulated 4Kn device directly using scatter-gather I/O (SGIO) must issue
I/Os compatible with the 512e disk.
Device Identifiers
Depending on the type of storage, the ESXi host uses different algorithms and conventions to generate
an identifier for each storage device.
SCSI INQUIRY identifiers The host uses the SCSI INQUIRY command to query a storage device. The
host uses the resulting data, in particular the Page 83 information, to
generate a unique identifier. Device identifiers that are based on Page 83
are unique across all hosts, persistent, and have one of the following
formats:
n naa.number
n t10.number
n eui.number
These formats follow the T10 committee standards. See the SCSI-3
documentation on the T10 committee website.
Path-based identifier When the device does not provide the Page 83 information, the host
generates an mpx.path name, where path represents the first path to the
device, for example, mpx.vmhba1:C0:T1:L3. This identifier can be used in
the same way as the SCSI INQUIRY identifiers.
The mpx.path identifier is created for local devices on the assumption that
their path names are unique. However, this identifier is not unique or
persistent, and can change after every system restart.
vmhbaAdapter:CChannel:TTarget:LLUN
n LLUN is the LUN number that shows the position of the LUN within the
target. The LUN number is provided by the storage system. If a target
has only one LUN, the LUN number is always zero (0).
Legacy Identifier
In addition to the SCSI INQUIRY or mpx.path identifiers, ESXi generates an alternative legacy name for
each device. The identifier has the following format:
vml.number
The legacy identifier includes a series of digits that are unique to the device. The identifier can be derived
in part from the Page 83 information. For nonlocal devices that do not support Page 83 information, the
vml. name is used as the only available unique identifier.
Procedure
When you perform VMFS datastore management operations, such as creating a VMFS datastore or
RDM, adding an extent, and increasing or deleting a VMFS datastore, your host or the vCenter Server
automatically rescans and updates your storage. You can disable the automatic rescan feature by turning
off the Host Rescan Filter. See Turn Off Storage Filters.
In certain cases, you need to perform a manual rescan. You can rescan all storage available to your host
or to all hosts in a folder, cluster, and data center.
If the changes you make are isolated to storage connected through a specific adapter, perform a rescan
for this adapter.
Perform the manual rescan each time you make one of the following changes.
n Reconnect a cable.
n Add a single host to the vCenter Server after you have edited or removed from the vCenter Server a
datastore shared by the vCenter Server hosts and the single host.
Important If you rescan when a path is unavailable, the host removes the path from the list of paths to
the device. The path reappears on the list as soon as it becomes available and starts working again.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, a data center, or a folder that
contains hosts.
Option Description
Scan for New Storage Devices Rescan all adapters to discover new storage devices. If new devices are
discovered, they appear in the device list.
Scan for New VMFS Volumes Rescan all storage devices to discover new datastores that have been added
since the last scan. Any new datastores appear in the datastore list.
Procedure
3 Under Storage, click Storage Adapters, and select the adapter to rescan from the list.
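The same rescan can also be triggered from the ESXi Shell; a minimal sketch with a placeholder adapter
name:
# Rescan a single adapter
esxcli storage core adapter rescan --adapter=vmhba65
# Or rescan all adapters on the host
esxcli storage core adapter rescan --all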
The Disk.MaxLUN parameter also determines how many LUNs the SCSI scan code attempts to discover
using individual INQUIRY commands if the SCSI target does not support direct discovery using
REPORT_LUNS.
You can modify the Disk.MaxLUN parameter depending on your needs. For example, if your environment
has a smaller number of storage devices with LUN IDs from 1 through 100, set the value to 101. As a
result, you can improve device discovery speed on targets that do not support REPORT_LUNS. Lowering
the value can shorten the rescan time and boot time. However, the time to rescan storage devices might
also depend on other factors, including the type of the storage system and the load on the storage
system.
In other cases, you might need to increase the value if your environment uses LUN IDs that are greater
than 1023.
Procedure
4 In the Advanced System Settings table, select Disk.MaxLUN and click the Edit icon.
5 Change the existing value to the value of your choice, and click OK.
The value you enter specifies the LUN ID that is after the last one you want to discover.
For example, to discover LUN IDs from 1 through 100, set Disk.MaxLUN to 101.
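For reference, the same setting can be changed with esxcli; a minimal sketch:
# Check the current value
esxcli system settings advanced list --option=/Disk/MaxLUN
# Discover LUN IDs 1 through 100
esxcli system settings advanced set --option=/Disk/MaxLUN --int-value=101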
Storage connectivity problems can have a variety of causes. Although ESXi cannot always determine
the reason for a storage device or its paths being unavailable, the host differentiates between a
permanent device loss (PDL) state of the device and a transient all paths down (APD) state of storage.
Permanent Device Loss (PDL) A condition that occurs when a storage device permanently fails or is
administratively removed or excluded. It is not expected to become
available. When the device becomes permanently unavailable, ESXi
receives appropriate sense codes or a login rejection from storage arrays,
and is able to recognize that the device is permanently lost.
All Paths Down (APD) A condition that occurs when a storage device becomes inaccessible to the
host and no paths to the device are available. ESXi treats this as a
transient condition because typically the problems with the device are
temporary and the device is expected to become available again.
Typically, the PDL condition occurs when a device is unintentionally removed, or its unique ID changes, or
when the device experiences an unrecoverable hardware error.
When the storage array determines that the device is permanently unavailable, it sends SCSI sense
codes to the ESXi host. After receiving the sense codes, your host recognizes the device as failed and
registers the state of the device as PDL. For the device to be considered permanently lost, the sense
codes must be received on all its paths.
After registering the PDL state of the device, the host stops attempts to reestablish connectivity or to send
commands to the device.
The vSphere Client displays the following information for the device:
n The operational state of the device changes to Lost Communication.
If no open connections to the device exist, or after the last connection closes, the host removes the PDL
device and all paths to the device. You can disable the automatic removal of paths by setting the
advanced host parameter Disk.AutoremoveOnPDL to 0.
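For example, the parameter can be set from the ESXi Shell; a minimal sketch:
# Disable automatic removal of PDL devices and their paths
esxcli system settings advanced set --option=/Disk/AutoremoveOnPDL --int-value=0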
If the device returns from the PDL condition, the host can discover it, but treats it as a new device. Data
consistency for virtual machines on the recovered device is not guaranteed.
Note When a device fails without sending appropriate SCSI sense codes or an iSCSI login rejection, the
host cannot detect PDL conditions. In this case, the host continues to treat the device connectivity
problems as APD even when the device fails permanently.
The following example of a SCSI sense code indicates that the device is in the PDL state:
H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x25 0x0 or Logical Unit Not Supported
Planned device removal is an intentional disconnection of a storage device. You might also plan to
remove a device for such reasons as upgrading your hardware or reconfiguring your storage devices.
When you perform an orderly removal and reconnection of a storage device, you complete a number of
tasks.
Task Description
Migrate virtual machines from the device you plan to detach. See vCenter Server and Host Management.
For an iSCSI device with a single LUN per target, delete the static target entry from each iSCSI HBA that
has a path to the storage device. See Remove Dynamic or Static iSCSI Targets.
Perform any necessary reconfiguration of the storage device by using the array console. See your vendor
documentation.
Mount the datastore and restart the virtual machines. See Mount Datastores.
You might need to detach the device to make it inaccessible to your host, when, for example, you perform
a hardware upgrade on the storage side.
Prerequisites
Procedure
The device becomes inaccessible. The operational state of the device changes to Unmounted.
What to do next
If multiple hosts share the device, detach the device from each host.
Procedure
4 Select the detached storage device and click the Attach icon.
The following items in the vSphere Client indicate that the device is in the PDL state:
n A warning about the device being permanently inaccessible appears in the VMkernel log file.
To recover from the unplanned PDL condition and remove the unavailable device from the host, perform
the following tasks.
Task Description
Power off and unregister all virtual machines that are running on the datastore affected by the PDL
condition. See vSphere Virtual Machine Administration.
Rescan all ESXi hosts that had access to the device. See Perform Storage Rescan.
Note If the rescan is not successful and the host continues to list the device, some pending I/O
or active references to the device might still exist. Check for any items that might still have an
active reference to the device or datastore. The items include virtual machines, templates, ISO
images, raw device mappings, and so on.
The reasons for an APD state can be, for example, a failed switch or a disconnected storage cable.
In contrast with the permanent device loss (PDL) state, the host treats the APD state as transient and
expects the device to be available again.
The host continues to retry issued commands in an attempt to reestablish connectivity with the device. If
the host's commands fail the retries for a prolonged period, the host might be at risk of having
performance problems. Potentially, the host and its virtual machines might become unresponsive.
To avoid these problems, your host uses a default APD handling feature. When a device enters the APD
state, the host turns on a timer. With the timer on, the host continues to retry non-virtual machine
commands for a limited time period only.
By default, the APD timeout is set to 140 seconds. This value is typically longer than most devices require
to recover from a connection loss. If the device becomes available within this time, the host and its virtual
machine continue to run without experiencing any problems.
If the device does not recover and the timeout ends, the host stops its attempts at retries and stops any
non-virtual machine I/O. Virtual machine I/O continues retrying. The vSphere Client displays the following
information for the device with the expired APD timeout:
Even though the device and datastores are unavailable, virtual machines remain responsive. You can
power off the virtual machines or migrate them to a different datastore or host.
If later the device paths become operational, the host can resume I/O to the device and end the special
APD treatment.
If you disable the APD handling, the host will indefinitely continue to retry issued commands in an attempt
to reconnect to the APD device. This behavior might cause virtual machines on the host to exceed their
internal I/O timeout and become unresponsive or fail. The host might become disconnected from vCenter
Server.
Procedure
4 In the Advanced System Settings table, select the Misc.APDHandlingEnable parameter and click
the Edit icon.
If you disabled the APD handling, you can reenable it and set its value to 1 when a device enters the APD
state. The internal APD handling feature turns on immediately and the timer starts with the current
timeout value for each device in APD.
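If you prefer the command line, the same advanced option can be read and changed with esxcli. The following is only a sketch of the equivalent commands; it assumes shell or vCLI access to the host and uses the Misc.APDHandlingEnable option described above.
# Check the current value of the APD handling option
esxcli system settings advanced list -o /Misc/APDHandlingEnable
# Disable APD handling (0) or reenable it (1)
esxcli system settings advanced set -o /Misc/APDHandlingEnable -i 0
esxcli system settings advanced set -o /Misc/APDHandlingEnable -i 1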
The timeout period begins immediately after the device enters the APD state. After the timeout ends, the
host marks the APD device as unreachable. The host stops its attempts to retry any I/O that is not coming
from virtual machines. The host continues to retry virtual machine I/O.
By default, the timeout parameter on your host is set to 140 seconds. You can increase the value of the
timeout if, for example, storage devices connected to your ESXi host take longer than 140 seconds to
recover from a connection loss.
Note If you change the timeout parameter after the device becomes unavailable, the change does not
take effect for that particular APD incident.
Procedure
4 In the Advanced System Settings table, select the Misc.APDTimeout parameter and click the Edit
icon.
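As a sketch of the command-line alternative, the Misc.APDTimeout option can also be inspected and changed with esxcli. The value 180 below is only an example.
# Display the current APD timeout value, in seconds
esxcli system settings advanced list -o /Misc/APDTimeout
# Increase the timeout, for example to 180 seconds
esxcli system settings advanced set -o /Misc/APDTimeout -i 180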
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
n on - Device is connected.
n dead - Device has entered the APD state. The APD timer starts.
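To check the connectivity state of a particular device from the command line, you can list the device and review its Status field. The device identifier below is a placeholder.
# The Status field reports on, dead, or other states for the device
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx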
vSphere HA uses VM Component Protection (VMCP) to protect virtual machines running on a host in a
vSphere HA cluster against accessibility failures. For more information about VMCP and how to configure
responses for datastores and virtual machines when an APD or PDL condition occurs, see the vSphere
Availability documentation.
Procedure
4 From the list of storage devices, select one or more disks and enable or disable the locator LED
indicator.
Prerequisites
n Verify that the devices you plan to erase are not in use.
Procedure
4 Select one or more devices and click All Actions > Erase Partitions.
If you are erasing a single device, a dialog box with partition information opens.
5 For a single device, verify that the partition information you are erasing is not critical.
Unlike the regular HDDs that are electromechanical devices containing moving parts, the flash devices
use semiconductors as their storage medium and have no moving parts. Typically, the flash devices are
resilient and provide faster access to data.
To detect flash devices, ESXi uses an inquiry mechanism based on T10 standards. Check with your
vendor whether your storage array supports the ESXi mechanism of flash device detection.
After the host detects the flash devices, you can use them for several tasks and functionalities.
If you use NVMe local flash storage, enable the high-performance plug-in (HPP) to improve your storage
performance. See VMware High Performance Plug-In.
vSAN vSAN requires flash devices. For more information, see the Administering VMware vSAN
documentation.
VMFS Datastores You can create VMFS datastores on flash devices. Use the datastores for the following
purposes:
n Store virtual machines. Certain guest operating systems can identify virtual disks
stored on these datastores as flash virtual disks. See Identifying Flash Virtual Disks.
n Allocate datastore space for the ESXi host swap cache. See Configuring Host Swap
Cache
Virtual Flash Resource (VFFS) Set up a virtual flash resource and use it for the following functionalities:
n Use as Virtual Flash Read Cache for your virtual machines. See Chapter 16 About
VMware vSphere Flash Read Cache.
n Allocate the virtual flash resource for the ESXi host swap cache. This method is an
alternative way of host cache configuration that uses VFFS volumes instead of
VMFS datastores. See Configure Host Swap Cache with Virtual Flash Resource.
n If required by your vendor, use the virtual flash resource for I/O caching filters. See
Chapter 23 Filtering Virtual Machine I/O.
To verify whether a virtual disk is presented as a flash disk, guest operating systems can use standard inquiry commands such as
SCSI VPD Page (B1h) for SCSI devices and ATA IDENTIFY DEVICE (Word 217) for IDE devices.
For linked clones, native snapshots, and delta-disks, the inquiry commands report the virtual flash status
of the base disk.
Operating systems can detect that a virtual disk is a flash disk under the following conditions:
n Detection of flash virtual disks is supported on ESXi 5.x and later hosts and virtual hardware version 8
or later.
n If virtual disks are located on shared VMFS datastores with flash device extents, the device must be
marked as flash on all hosts.
n For a virtual disk to be identified as virtual flash, all underlying physical extents should be flash-
backed.
When you configure vSAN or set up a virtual flash resource, your storage environment must include local
flash devices.
However, ESXi might not recognize certain storage devices as flash devices when their vendors do not
support automatic flash device detection. In other cases, certain devices might not be detected as local,
and ESXi marks them as remote. When devices are not recognized as the local flash devices, they are
excluded from the list of devices offered for vSAN or virtual flash resource. Marking these devices as local
flash makes them available for vSAN and virtual flash resource.
ESXi does not recognize certain devices as flash when their vendors do not support automatic flash disk
detection. The Drive Type column for the devices shows HDD as their type.
Caution Marking the HDD devices as flash might deteriorate the performance of datastores and
services that use them. Mark the devices only if you are certain that they are flash devices.
Prerequisites
Procedure
4 From the list of storage devices, select one or several HDD devices and click the Mark as Flash Disk icon.
What to do next
If the flash device that you mark is shared among multiple hosts, make sure that you mark the device
from all hosts that share the device.
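A command-line alternative for marking a device as flash is to add a claim rule with the enable_ssd option and then reclaim the device. The following is a sketch that assumes the device is claimed by the VMW_SATP_LOCAL plug-in and uses a placeholder device identifier; verify which SATP claims your device before adding the rule.
# Tag the device as flash through a SATP claim rule, then reclaim it
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.xxxxxxxxxxxxxxxx --option="enable_ssd"
esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
# Verify that the device now reports Is SSD: true
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx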
Prerequisites
n Power off virtual machines that reside on the device and unmount an associated datastore.
Procedure
4 From the list of storage devices, select one or several remote devices and click the Mark as Local
icon.
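Similarly, a hedged command-line sketch for marking a remote device as local uses the enable_local option with a placeholder device identifier:
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.xxxxxxxxxxxxxxxx --option="enable_local"
esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx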
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
n Make sure to use the latest firmware with flash devices. Frequently check with your storage vendors
for any updates.
n Carefully monitor how intensively you use the flash device and calculate its estimated lifetime. The
lifetime expectancy depends on how actively you continue to use the flash device.
Typically, storage vendors provide reliable lifetime estimates for a flash device under ideal conditions. For
example, a vendor might guarantee a lifetime of 5 years under the condition of 20 GB writes per day.
However, the more realistic life expectancy of the device depends on how many writes per day your ESXi
host actually generates. Follow these steps to calculate the lifetime of the flash device.
Prerequisites
Note the number of days passed since the last reboot of your ESXi host. For example, ten days.
Procedure
1 Obtain the total number of blocks written to the flash device since the last reboot.
Run the esxcli storage core device stats get -d=device_ID command. For example:
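The following abbreviated sample shows the command and the output line that this calculation uses. The device identifier is a placeholder, and real output contains additional counters.
esxcli storage core device stats get -d=naa.xxxxxxxxxxxxxxxx
   Blocks Written: 629145600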
The Blocks Written item in the output shows the number of blocks written to the device since the last
reboot. In this example, the value is 629,145,600. After each reboot, it resets to 0.
One block is 512 bytes. To calculate the total number of writes, multiply the Blocks Written value by
512, and convert the resulting value to GB.
In this example, the total number of writes since the last reboot is approximately 322 GB.
Divide the total number of writes by the number of days since the last reboot.
If the last reboot was ten days ago, you get 32 GB of writes per day. You can average this number
over the time period.
estimated life span = (vendor-provided writes per day × vendor-provided life span) / actual average writes per day
For example, if your vendor guarantees a lifetime of 5 years under the condition of 20 GB writes per
day, and the actual number of writes per day is 30 GB, the life span of your flash device will be
approximately 3.3 years.
When you set up the virtual flash resource, you create a new file system, Virtual Flash File System
(VFFS). VFFS is a derivative of VMFS, which is optimized for flash devices and is used to group the
physical flash devices into a single caching resource pool. As a non-persistent resource, it cannot be
used to store virtual machines.
n Virtual machine read cache. See Chapter 16 About VMware vSphere Flash Read Cache.
n Host swap cache. See Configure Host Swap Cache with Virtual Flash Resource.
n I/O caching filters, if required by your vendors. See Chapter 23 Filtering Virtual Machine I/O.
Before setting up the virtual flash resource, make sure that you use devices approved by the VMware
Compatibility Guide.
n You can have only one virtual flash resource, also called a VFFS volume, on a single ESXi host. The
virtual flash resource is managed only at the host's level.
n You cannot use the virtual flash resource to store virtual machines. Virtual flash resource is a caching
layer only.
n You can use only local flash devices for the virtual flash resource.
n You can create the virtual flash resource from mixed flash devices. All device types are treated the
same and no distinction is made between SAS, SATA, or PCI express connectivity. When creating the
resource from mixed flash devices, make sure to group similar performing devices together to
maximize performance.
n You cannot use the same flash devices for the virtual flash resource and vSAN. Each requires its own
exclusive and dedicated flash device.
n The total available capacity of the virtual flash resource can be used by ESXi hosts as host swap
cache and by virtual machines as read cache.
n You cannot select individual flash devices for swap cache or read cache. All flash devices are
combined into a single flash resource entity.
To set up a virtual flash resource, you use local flash devices connected to your host. To increase the
capacity of your virtual flash resource, you can add more devices, up to the maximum number indicated
in the Configuration Maximums documentation. An individual flash device must be exclusively allocated to
the virtual flash resource. No other vSphere service, such as vSAN or VMFS, can share the device with
the virtual flash resource.
Procedure
3 Under Virtual Flash, select Virtual Flash Resource Management and click Add Capacity.
4 From the list of available flash devices, select one or more devices to use for the virtual flash
resource and click OK.
Under certain circumstances, you might not be able to see flash devices on the list. For more
information, see Marking Storage Devices.
The virtual flash resource is created. The Device Backing area lists all devices that you use for the virtual
flash resource.
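If some devices do not appear in the wizard, you can cross-check which local flash devices the host considers for the virtual flash resource from the command line. This sketch assumes that the esxcli storage vflash namespace is available on your build.
# List flash devices and whether they are eligible for virtual flash
esxcli storage vflash device list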
What to do next
You can use the virtual flash resource for cache configuration on the host and Flash Read Cache
configuration on virtual disks. In addition, I/O caching filters developed through vSphere APIs for I/O
Filtering might require the virtual flash resource.
You can increase the capacity by adding more flash devices to the virtual flash resource.
Prerequisites
n Verify that the virtual flash resource is not configured with host swap cache.
n Verify that no virtual machines configured with Flash Read Cache are powered on the host.
Procedure
3 Under Virtual Flash, select Virtual Flash Resource Management and click Remove All.
After you remove the virtual flash resource and erase the flash device, the device is available for other
operations.
Procedure
Parameter Description
VFLASH.VFlashResourceUsageThresh The system triggers the Host vFlash resource usage alarm when a virtual
old flash resource use exceeds the threshold. The default threshold is 80%. You can
change the threshold to an appropriate value. The alarm is cleared when the
virtual flash resource use drops below the threshold.
VFLASH.MaxResourceGBForVmCache An ESXi host stores Flash Read Cache metadata in RAM. The default limit of
total virtual machine cache size on the host is 2 TB. You can reconfigure this
setting. You must restart the host for the new setting to take effect.
5 Click OK.
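These parameters can also be read from the command line with esxcli. The following sketch assumes that the advanced option paths mirror the parameter names shown above; confirm the exact paths with esxcli system settings advanced list before changing anything.
# Inspect the current values (option paths assumed from the parameter names)
esxcli system settings advanced list -o /VFLASH/VFlashResourceUsageThreshold
esxcli system settings advanced list -o /VFLASH/MaxResourceGBForVmCache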
The host-level cache is made up of files on a low-latency disk that ESXi uses as a write-back cache for
virtual machine swap files. All virtual machines running on the host share the cache. Host-level swapping
of virtual machine pages makes the best use of potentially limited flash device space.
Depending on your environment and licensing package, the following methods of configuring the host-
level swap cache are available. Both methods provide similar results.
n You can create a VMFS datastore on a flash device, and then use the datastore to allocate space for
the host cache. The host reserves a certain amount of space for swapping to the host cache.
n If you have a vSphere license to set up a virtual flash resource, you can use the resource to configure
the swap cache on the host. The host swap cache is allocated from a portion of the virtual flash
resource.
Use this task if you do not have an appropriate license that allows you to set up and manage a virtual
flash resource. If you have the license, use the virtual flash resource for host cache configuration.
Prerequisites
Procedure
4 Select the flash datastore in the list and click the Edit virtual flash host swap cache properties icon.
6 Click OK.
Prerequisites
2 If your host is in maintenance mode, exit the maintenance mode before you configure a host swap cache.
Procedure
3 Under Virtual Flash, select Virtual Flash Host Swap Cache Configuration and click Edit.
4 Select the Enable virtual flash host swap cache check box.
5 Specify the amount of virtual flash resource to reserve for host swap cache.
6 Click OK.
You can reserve a Flash Read Cache for any individual virtual disk. The Flash Read Cache is created
only when a virtual machine is powered on. It is discarded when a virtual machine is suspended or
powered off.
When you migrate a virtual machine, you can migrate the cache. By default the cache is migrated if the
virtual flash modules on the source and destination hosts are compatible. If you do not migrate the cache,
the cache is rewarmed on the destination host.
You can change the size of the cache while a virtual machine is powered on. In this instance, the existing
cache is discarded and a new write-through cache is created, which results in a cache warm-up period.
The advantage of creating a cache is that the cache size can better match the application's active data.
Flash Read Cache supports write-through, or read, caching. Write-back, or write, caching is not supported.
Data reads are satisfied from the cache, if present. Data writes are dispatched to the backing storage,
such as a SAN or NAS. All data that is read from or written to the backing storage is unconditionally
stored in the cache.
Flash Read Cache does not support RDMs in physical compatibility. Virtual compatibility RDMs are
supported with Flash Read Cache.
Note Not all workloads benefit with a Flash Read Cache. The performance boost depends on your
workload pattern and working set size. Read-intensive workloads with working sets that fit into the cache
can benefit from a Flash Read Cache configuration. By configuring Flash Read Cache for your read-
intensive workloads, additional I/O resources become available on your shared storage. As a result,
performance might increase for other workloads even though they are not configured to use Flash Read
Cache.
DRS manages virtual machines with Flash Read Cache reservations. Every time DRS runs, it displays
the available virtual flash capacity reported by the ESXi host. Each host supports one virtual flash
resource. DRS selects a host that has sufficient available virtual flash capacity to start a virtual machine.
DRS treats powered-on virtual machines with a Flash Read Cache as soft affined to their current host and
moves them only for mandatory reasons or if necessary to correct host over-utilization.
vSphere HA restarts a virtual machine with Flash Read Cache on a target host that meets the Flash Read
Cache, CPU, Memory, and overhead reservations. If unreserved flash is insufficient to meet the virtual
flash reservation, vSphere HA does not restart a virtual machine. If the target host does not have
sufficient virtual flash resource available, reconfigure the virtual machine to reduce or drop the Flash
Read Cache.
When you enable Flash Read Cache, you can specify the block size and cache size reservation.
Block size is the minimum number of contiguous bytes that can be stored in the cache. This block size
can be larger than the nominal disk block size of 512 bytes, between 4 KB and 1024 KB. If a guest
operating system writes a single 512-byte disk block, the surrounding cache block size bytes are cached.
Do not confuse the cache block size with the disk block size.
Reservation is a reservation size for cache blocks. There is a minimum number of 256 cache blocks. If
the cache block size is 1 MB, then the minimum cache size is 256 MB. If the cache block size is 4 K, then
the minimum cache size is 1 MB.
For more information about sizing guidelines, search for the Performance of vSphere Flash Read Cache
in VMware vSphere white paper on the VMware website.
Prerequisites
Procedure
3 On the Virtual Hardware tab, expand Hard disk to view the disk menu items.
4 To enable Flash Read Cache for the virtual machine, enter a value in the Virtual Flash Read Cache
text box.
6 Click OK.
Prerequisites
If you plan to migrate Flash Read Cache contents, configure a sufficient virtual flash resource on the
destination host.
Procedure
Option Description
Change compute resource only Migrate the virtual machines to another host or cluster.
Change both compute resource and storage Migrate the virtual machines to a specific host or cluster and their storage to a specific datastore or datastore cluster.
4 Specify a migration setting for all virtual disks configured with virtual Flash Read Cache. This
migration parameter does not appear when you do not change the host, but only change the
datastore.
Always migrate the cache contents Virtual machine migration proceeds only if all of the cache contents can be
migrated to the destination host. This option is useful when the cache is small or
the cache size closely matches the application's active data.
Do not migrate the cache contents Deletes the write-through cache. Cache is recreated on the destination host. This
option is useful when the cache size is large or the cache size is larger than the
application's active data.
5 If you have multiple virtual disks with Flash Read Cache, you can adjust the migration setting for each
individual disk.
a Click Advanced.
b Select a virtual disk for which you want to modify the migration setting.
c From the drop-down menu in the Virtual Flash Read Cache Migration Setting column, select
an appropriate option.
What to do next
Verify the successful migration by looking at the Summary tab of the virtual machine:
n Make sure that the tab displays the correct IP address of the destination host.
n Make sure that the VM Hardware panel displays correct Virtual Flash Read Cache information for
each virtual disk.
Types of Datastores
Depending on the storage you use, datastores can be of different types.
VMFS (version 5 and 6) Datastores that you deploy on block storage devices use the vSphere Virtual Machine File System (VMFS) format. VMFS is a special high-performance file system format that is optimized for storing virtual machines. See Understanding VMFS Datastores.
NFS (version 3 and 4.1) An NFS client built into ESXi uses the Network File System (NFS) protocol over TCP/IP to access a designated NFS volume. The volume is located on a NAS server. The ESXi host mounts the volume as an NFS datastore, and uses it for storage needs. ESXi supports versions 3 and 4.1 of the NFS protocol. See Understanding Network File System Datastores.
Depending on your storage type, some of the following tasks are available for the datastores.
n Create datastores. You can use the vSphere Client to create certain types of datastores.
n Organize the datastores. For example, you can group them into folders according to business
practices. After you group the datastores, you can assign the same permissions and alarms on the
datastores in the group at one time.
n Add the datastores to datastore clusters. A datastore cluster is a collection of datastores with shared
resources and a shared management interface. When you create the datastore cluster, you can use
Storage DRS to manage storage resources. For information about datastore clusters, see the
vSphere Resource Management documentation.
Use the vSphere Client to set up the VMFS datastore in advance on the block-based storage device that
your ESXi host discovers. The VMFS datastore can be extended to span over several physical storage
devices that include SAN LUNs and local storage. This feature allows you to pool storage and gives you
flexibility in creating the datastore necessary for your virtual machines.
You can increase the capacity of the datastore while the virtual machines are running on the datastore.
This ability lets you add new space to your VMFS datastores as your virtual machine requires it. VMFS is
designed for concurrent access from multiple physical machines and enforces the appropriate access
controls on the virtual machine files.
For all supported VMFS versions, ESXi offers complete read and write support. On the supported VMFS
datastores, you can create and power on virtual machines.
VMFS3 is not supported. For information about upgrades to VMFS5, see Upgrading VMFS Datastores.
The following table compares major characteristics of VMFS5 and VMFS6. For additional information, see
Configuration Maximums .
Both VMFS5 and VMFS6 support the following features and functionalities:
n Access for ESXi hosts version 6.5 and later.
n Manual space reclamation through the esxcli command. See Manually Reclaim Accumulated Storage Space.
n Storage devices greater than 2 TB for each VMFS extent.
n Support for virtual machines with large capacity virtual disks, or disks greater than 2 TB.
n Datastore Extents. A spanned VMFS datastore must use only homogeneous storage devices, either
512n, 512e, or 4Kn. The spanned datastore cannot extend over devices of different formats.
n Block Size. The block size on a VMFS datastore defines the maximum file size and the amount of
space a file occupies. VMFS5 and VMFS6 datastores support the block size of 1 MB.
n Storage vMotion. Storage vMotion supports migration across VMFS, vSAN, and Virtual Volumes
datastores. vCenter Server performs compatibility checks to validate Storage vMotion across different
types of datastores.
n Storage DRS. VMFS5 and VMFS6 can coexist in the same datastore cluster. However, all datastores
in the cluster must use homogeneous storage devices. Do not mix devices of different formats within
the same datastore cluster.
n Device Partition Formats. Any new VMFS5 or VMFS6 datastore uses GUID partition table (GPT) to
format the storage device. The GPT format enables you to create datastores larger than 2 TB. If your
VMFS5 datastore has been previously upgraded from VMFS3, it continues to use the master boot
record (MBR) partition format, which is characteristic for VMFS3. Conversion to GPT happens only
after you expand the datastore to a size larger than 2 TB.
Note Always have only one VMFS datastore for each LUN.
You can store multiple virtual machines on the same VMFS datastore. Each virtual machine,
encapsulated in a set of files, occupies a separate single directory. For the operating system inside the
virtual machine, VMFS preserves the internal file system semantics, which ensures correct application
behavior and data integrity for applications running in virtual machines.
When you run multiple virtual machines, VMFS provides specific locking mechanisms for the virtual
machine files. As a result, the virtual machines can operate safely in a SAN environment where multiple
ESXi hosts share the same VMFS datastore.
In addition to the virtual machines, the VMFS datastores can store other files, such as the virtual machine
templates and ISO images.
Figure: A VMFS volume that stores the virtual disk files for disk1, disk2, and disk3.
For information on the maximum number of hosts that can connect to a single VMFS datastore, see the
Configuration Maximums document.
To ensure that multiple hosts do not access the same virtual machine at the same time, VMFS provides
on-disk locking.
Sharing the VMFS volume across multiple hosts offers several advantages, for example, the following:
n You can use VMware Distributed Resource Scheduling (DRS) and VMware High Availability (HA).
You can distribute virtual machines across different physical servers. That means you run a mix of
virtual machines on each server, so that not all experience high demand in the same area at the
same time. If a server fails, you can restart virtual machines on another physical server. If the failure
occurs, the on-disk lock for each virtual machine is released. For more information about VMware
DRS, see the vSphere Resource Management documentation. For information about VMware HA,
see the vSphere Availability documentation.
n You can use vMotion to migrate running virtual machines from one physical server to another. For
information about migrating virtual machines, see the vCenter Server and Host Management
documentation.
To create a shared datastore, mount the datastore on those ESXi hosts that require the datastore access.
Metadata is updated each time you perform datastore or virtual machine management operations.
Examples of operations requiring metadata updates include the following:
n Creating a template
When metadata changes are made in a shared storage environment, VMFS uses special locking
mechanisms to protect its data and prevent multiple hosts from concurrently writing to the metadata.
Depending on its configuration and the type of underlying storage, a VMFS datastore can use different
types of locking mechanisms. It can exclusively use the atomic test and set locking mechanism (ATS-
only), or use a combination of ATS and SCSI reservations (ATS+SCSI).
ATS-Only Mechanism
For storage devices that support T10 standard-based VAAI specifications, VMFS provides ATS locking,
also called hardware assisted locking. The ATS algorithm supports discrete locking per disk sector. All
newly formatted VMFS5 and VMFS6 datastores use the ATS-only mechanism if the underlying storage
supports it, and never use SCSI reservations.
When you create a multi-extent datastore where ATS is used, vCenter Server filters out non-ATS devices.
This filtering allows you to use only those devices that support the ATS primitive.
In certain cases, you might need to turn off the ATS-only setting for a VMFS5 or VMFS6 datastore. For
information, see Change Locking Mechanism to ATS+SCSI.
ATS+SCSI Mechanism
A VMFS datastore that supports the ATS+SCSI mechanism is configured to use ATS and attempts to use
it when possible. If ATS fails, the VMFS datastore reverts to SCSI reservations. In contrast with the ATS
locking, the SCSI reservations lock an entire storage device while an operation that requires metadata
protection is performed. After the operation completes, VMFS releases the reservation and other
operations can continue.
Datastores that use the ATS+SCSI mechanism include VMFS5 datastores that were upgraded from
VMFS3. In addition, new VMFS5 or VMFS6 datastores on storage devices that do not support ATS use
the ATS+SCSI mechanism.
If the VMFS datastore reverts to SCSI reservations, you might notice performance degradation caused by
excessive SCSI reservations.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
u To display information related to VMFS locking mechanisms, run the esxcli storage vmfs lockmode list command.
The table lists items that the output of the command might include.
Typically, VMFS5 datastores that were previously upgraded from VMFS3 continue using the ATS+SCSI
locking mechanism. If the datastores are deployed on ATS-enabled hardware, they are eligible for an
upgrade to ATS-only locking. Depending on your vSphere environment, you can use one of the following
upgrade modes:
n The online upgrade to the ATS-only mechanism is available for most single-extent VMFS5
datastores. While you perform the online upgrade on one of the hosts, other hosts can continue using
the datastore.
n The offline upgrade to ATS-only must be used for VMFS5 datastores that span multiple physical
extents. Datastores composed of multiple extents are not eligible for the online upgrade. These
datastores require that no hosts actively use the datastores at the time of the upgrade request.
Procedure
1 Upgrade all hosts that access the VMFS5 datastore to the newest version of vSphere.
2 Determine whether the datastore is eligible for an upgrade of its current locking mechanism by
running the esxcli storage vmfs lockmode list command.
The following sample output indicates that the datastore is eligible for an upgrade. It also shows the
current locking mechanism and the upgrade mode available for the datastore.
3 Depending on the upgrade mode available for the datastore, perform one of the following actions:
Online Verify that all hosts have consistent storage connectivity to the VMFS datastore.
Most datastores that do not span multiple extents are eligible for an online upgrade. While you perform
the online upgrade on one of the ESXi hosts, other hosts can continue using the datastore. The online
upgrade completes only after all hosts have closed the datastore.
Prerequisites
If you plan to complete the upgrade of the locking mechanism by putting the datastore into maintenance
mode, disable Storage DRS. This prerequisite applies only to an online upgrade.
Procedure
esxcli storage vmfs lockmode set -a|--ats -l|--volume-label=VMFS_label -u|--volume-uuid=VMFS_UUID
a Close the datastore on all hosts that have access to the datastore, so that the hosts can
recognize the change.
n Put the datastore into maintenance mode and exit maintenance mode.
b Verify that the Locking Mode status for the datastore changed to ATS-only by running the esxcli storage vmfs lockmode list command.
c If the Locking Mode displays any other status, for example ATS UPGRADE PENDING, check
which host has not yet processed the upgrade by running:
You might need to switch to the ATS+SCSI locking mechanism when, for example, your storage device is
downgraded. Or when firmware updates fail and the device no longer supports ATS.
The downgrade process is similar to the ATS-only upgrade. As with the upgrade, depending on your
storage configuration, you can perform the downgrade in online or offline mode.
Procedure
esxcli storage vmfs lockmode set -s|--scsi -l|--volume-label=VMFS_label -u|--volume-uuid=VMFS_UUID
2 For an online mode, close the datastore on all hosts that have access to the datastore, so that the
hosts can recognize the change.
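Putting the commands from the preceding topics together, a typical locking-mechanism change from the command line looks like the following sketch. The datastore label is a placeholder.
# Check the current locking mode and the available upgrade mode
esxcli storage vmfs lockmode list
# Switch the datastore to ATS-only locking
esxcli storage vmfs lockmode set --ats --volume-label=my_vmfs_datastore
# Or revert the datastore to ATS+SCSI locking
esxcli storage vmfs lockmode set --scsi --volume-label=my_vmfs_datastore
# Verify the result after the hosts close and reopen the datastore
esxcli storage vmfs lockmode list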
Sparse disks use the copy-on-write mechanism, in which the virtual disk contains no data, until the data is
copied there by a write operation. This optimization saves storage space.
Depending on the type of your datastore, delta disks use different sparse formats.
VMFSsparse VMFS5 uses the VMFSsparse format for virtual disks smaller than 2 TB.
SEsparse SEsparse is the default format for all delta disks on the VMFS6 datastores. On VMFS5, SEsparse is used for virtual disks of the size 2 TB and larger.
Snapshot Migration
You can migrate VMs with snapshots across different datastores. The following considerations apply:
n If you migrate a VM with the VMFSsparse snapshot to VMFS6, the snapshot format changes to
SEsparse.
n When a VM with a vmdk of the size smaller than 2 TB is migrated to VMFS5, the snapshot format
changes to VMFSsparse.
n You cannot mix VMFSsparse redo-logs with SEsparse redo-logs in the same hierarchy.
VMFS5 Datastores
You cannot upgrade a VMFS5 datastore to VMFS6. If you have a VMFS5 datastore in your environment,
create a VMFS6 datastore and migrate virtual machines from the VMFS5 datastore to VMFS6.
VMFS3 Datastores
ESXi no longer supports VMFS3 datastores. The ESXi host automatically upgrades VMFS3 to VMFS5
when mounting existing datastores. The host performs the upgrade operation in the following
circumstances:
n At the first boot after an upgrade to ESXi 6.7, when the host mounts all discovered VMFS3
datastores.
n When you manually mount the VMFS3 datastores that are discovered after the boot, or mount
persistently unmounted datastores.
When you have VMFS3 datastores in your environment, the following considerations apply:
n If you use an ESXi .iso image to upgrade your legacy host through vSphere Update Manager, and
the upgrade is not successful, the VMFS3 datastore is upgraded to VMFS5 if the installation process
passes the mount phase.
n In the mixed environment of the 6.5 and 6.7 ESXi hosts, the VMFS3 datastore upgrades when the 6.7
host attempts to mount it. The 6.5 host can continue to access the datastore even when the upgrade
is unsuccessful.
n When you mount the VMFS3 datastore after its resignaturing, it does not upgrade to VMFS5. You
must perform the upgrade manually.
Note If the upgrade fails in the mixed environment of the 6.5 and 6.7 ESXi hosts, the VMFS3 datastore
remains unmounted in the Failed to Upgrade status on the 6.7 host. However, the datastore is accessible
from the 6.5 host.
Procedure
Because the datastore is in the Failed to Upgrade state, the ESXi host does not attempt an upgrade.
After a successful mount, the Failed to Upgrade status is cleared.
For example, if the previous upgrade failed due to lack of space, delete the files from the datastore to
make space.
Procedure
5 Perform a rescan on all hosts that are associated with the datastore.
The datastore is upgraded to VMFS5 and is available to all hosts that are associated with the datastore.
Typically, the NFS volume or directory is created by a storage administrator and is exported from the NFS
server. You do not need to format the NFS volume with a local file system, such as VMFS. Instead, you
mount the volume directly on the ESXi hosts and use it to store and boot virtual machines in the same
way that you use the VMFS datastores.
In addition to storing virtual disks on NFS datastores, you can use NFS as a central repository for ISO
images, virtual machine templates, and so on. If you use the datastore for the ISO images, you can
connect the CD-ROM device of the virtual machine to an ISO file on the datastore. You then can install a
guest operating system from the ISO file.
Virtual machines on NFS v4.1 do not support the old, legacy Fault Tolerance mechanism.
In vSphere 6.0, the newer Fault Tolerance mechanism can accommodate symmetric multiprocessor
(SMP) virtual machines with up to four vCPUs. Earlier versions of vSphere used a different technology for
Fault Tolerance, with different requirements and characteristics.
NFS Upgrades
When you upgrade ESXi from a version earlier than 6.5, existing NFS 4.1 datastores automatically begin
supporting functionalities that were not available in the previous ESXi release. These functionalities
include Virtual Volumes, hardware acceleration, and so on.
ESXi does not support automatic datastore conversions from NFS version 3 to NFS 4.1.
If you want to upgrade your NFS 3 datastore, the following options are available:
n Create the NFS 4.1 datastore, and then use Storage vMotion to migrate virtual machines from the old
datastore to the new one.
n Use conversion methods provided by your NFS storage server. For more information, contact your
storage vendor.
n Unmount the NFS 3 datastore, and then mount it as an NFS 4.1 datastore.
Caution If you use this option, make sure to unmount the datastore from all hosts that have access
to the datastore. The datastore can never be mounted by using both protocols at the same time.
n NFS Networking
An ESXi host uses TCP/IP network connection to access a remote NAS server. Certain guidelines
and best practices exist for configuring the networking when you use NFS storage.
n NFS Security
With NFS 3 and NFS 4.1, ESXi supports the AUTH_SYS security. In addition, for NFS 4.1, the
Kerberos security mechanism is supported.
n NFS Multipathing
While NFS 3 with ESXi does not provide multipathing support, NFS 4.1 supports multiple paths.
n NFS Datastores
When you create an NFS datastore, make sure to follow specific guidelines.
n Make sure that the NAS servers you use are listed in the VMware HCL. Use the correct version for
the server firmware.
n Ensure that the NFS volume is exported using NFS over TCP.
n Make sure that the NAS server exports a particular share as either NFS 3 or NFS 4.1. The NAS
server must not provide both protocol versions for the same share. The NAS server must enforce this
policy because ESXi does not prevent mounting the same share through different NFS versions.
n NFS 3 and non-Kerberos (AUTH_SYS) NFS 4.1 do not support the delegate user functionality that
enables access to NFS volumes using nonroot credentials. If you use NFS 3 or non-Kerberos NFS
4.1, ensure that each host has root access to the volume. Different storage vendors have different
methods of enabling this functionality, but typically the NAS servers use the no_root_squash option.
If the NAS server does not grant root access, you can still mount the NFS datastore on the host.
However, you cannot create any virtual machines on the datastore.
n If the underlying NFS volume is read-only, make sure that the volume is exported as a read-only
share by the NFS server. Or mount the volume as a read-only datastore on the ESXi host. Otherwise,
the host considers the datastore to be read-write and might not open the files.
NFS Networking
An ESXi host uses TCP/IP network connection to access a remote NAS server. Certain guidelines and
best practices exist for configuring the networking when you use NFS storage.
n For network connectivity, use a standard network adapter in your ESXi host.
n ESXi supports Layer 2 and Layer 3 Network switches. If you use Layer 3 switches, ESXi hosts and
NFS storage arrays must be on different subnets and the network switch must handle the routing
information.
n Configure a VMkernel port group for NFS storage. You can create the VMkernel port group for IP
storage on an existing virtual switch (vSwitch) or on a new vSwitch. The vSwitch can be a vSphere
Standard Switch (VSS) or a vSphere Distributed Switch (VDS).
n If you use multiple ports for NFS traffic, make sure that you correctly configure your virtual switches
and physical switches.
NFS 3 locking on ESXi does not use the Network Lock Manager (NLM) protocol. Instead, VMware
provides its own locking protocol. NFS 3 locks are implemented by creating lock files on the NFS server.
Lock files are named .lck-file_id.
Because NFS 3 and NFS 4.1 clients do not use the same locking protocol, you cannot use different NFS
versions to mount the same datastore on multiple hosts. Accessing the same virtual disks from two
incompatible clients might result in incorrect behavior and cause data corruption.
NFS Security
With NFS 3 and NFS 4.1, ESXi supports the AUTH_SYS security. In addition, for NFS 4.1, the Kerberos
security mechanism is supported.
NFS 3 supports the AUTH_SYS security mechanism. With this mechanism, storage traffic is transmitted
in an unencrypted format across the LAN. Because of this limited security, use NFS storage on trusted
networks only and isolate the traffic on separate physical switches. You can also use a private VLAN.
NFS 4.1 supports the Kerberos authentication protocol to secure communications with the NFS server.
Nonroot users can access files when Kerberos is used. For more information, see Using Kerberos for
NFS 4.1.
In addition to Kerberos, NFS 4.1 supports traditional non-Kerberos mounts with the AUTH_SYS security.
In this case, use root access guidelines for NFS version 3.
Note You cannot use two security mechanisms, AUTH_SYS and Kerberos, for the same NFS 4.1
datastore shared by multiple hosts.
NFS Multipathing
While NFS 3 with ESXi does not provide multipathing support, NFS 4.1 supports multiple paths.
NFS 3 uses one TCP connection for I/O. As a result, ESXi supports I/O on only one IP address or
hostname for the NFS server, and does not support multiple paths. Depending on your network
infrastructure and configuration, you can use the network stack to configure multiple connections to the
storage targets. In this case, you must have multiple datastores, each datastore using separate network
connections between the host and the storage.
NFS 4.1 provides multipathing for servers that support the session trunking. When the trunking is
available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is not
supported.
NFS 3 and NFS 4.1 support hardware acceleration that allows your host to integrate with NAS devices
and use several hardware operations that NAS storage provides. For more information, see Hardware
Acceleration on NAS Devices.
NFS Datastores
When you create an NFS datastore, make sure to follow specific guidelines.
The NFS datastore guidelines and best practices include the following items:
n You cannot use different NFS versions to mount the same datastore on different hosts. NFS 3 and
NFS 4.1 clients are not compatible and do not use the same locking protocol. As a result, accessing
the same virtual disks from two incompatible clients might result in incorrect behavior and cause data
corruption.
n NFS 3 and NFS 4.1 datastores can coexist on the same host.
n ESXi cannot automatically upgrade NFS version 3 to version 4.1, but you can use other conversion
methods. For information, see NFS Protocols and ESXi.
n When you mount the same NFS 3 volume on different hosts, make sure that the server and folder
names are identical across the hosts. If the names do not match, the hosts see the same NFS
version 3 volume as two different datastores. This error might result in a failure of such features as
vMotion. An example of such discrepancy is entering filer as the server name on one host and
filer.domain.com on the other. This guideline does not apply to NFS version 4.1.
n If you use non-ASCII characters to name datastores and virtual machines, make sure that the
underlying NFS server offers internationalization support. If the server does not support international
characters, use only ASCII characters, or unpredictable failures might occur.
Supported services, including NFS, are described in a rule set configuration file in the ESXi firewall
directory /etc/vmware/firewall/. The file contains firewall rules and their relationships with ports and
protocols.
The behavior of the NFS Client rule set (nfsClient) is different from other rule sets.
For more information about firewall configurations, see the vSphere Security documentation.
When you add, mount, or unmount an NFS datastore, the resulting behavior depends on the version of
NFS.
When you add or mount an NFS v3 datastore, ESXi checks the state of the NFS Client (nfsClient)
firewall rule set.
n If the nfsClient rule set is disabled, ESXi enables the rule set and disables the Allow All IP
Addresses policy by setting the allowedAll flag to FALSE. The IP address of the NFS server is
added to the allowed list of outgoing IP addresses.
n If the nfsClient rule set is enabled, the state of the rule set and the allowed IP address policy are
not changed. The IP address of the NFS server is added to the allowed list of outgoing IP addresses.
Note If you manually enable the nfsClient rule set or manually set the Allow All IP Addresses policy,
either before or after you add an NFS v3 datastore to the system, your settings are overridden when the
last NFS v3 datastore is unmounted. The nfsClient rule set is disabled when all NFS v3 datastores are
unmounted.
When you remove or unmount an NFS v3 datastore, ESXi performs one of the following actions.
n If none of the remaining NFS v3 datastores are mounted from the server of the datastore being
unmounted, ESXi removes the server's IP address from the list of outgoing IP addresses.
n If no mounted NFS v3 datastores remain after the unmount operation, ESXi disables the nfsClient
firewall rule set.
When you mount the first NFS v4.1 datastore, ESXi enables the nfs41client rule set and sets its
allowedAll flag to TRUE. This action opens port 2049 for all IP addresses. Unmounting an NFS v4.1
datastore does not affect the firewall state. That is, the first NFS v4.1 mount opens port 2049 and that port
remains enabled unless you close it explicitly.
Procedure
4 Scroll down to an appropriate version of NFS to make sure that the port is open.
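You can also review the NFS-related rule sets and the allowed IP list from the command line. This is a sketch; the rule set names match the ones described above.
# List rule sets and confirm that nfsClient or nfs41client is enabled
esxcli network firewall ruleset list
# Show the outgoing IP addresses allowed for the NFS 3 client rule set
esxcli network firewall ruleset allowedip list --ruleset-id=nfsClient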
n Use Cisco's Hot Standby Router Protocol (HSRP) on the IP router. If you are using a non-Cisco router,
use Virtual Router Redundancy Protocol (VRRP) instead.
n To prioritize NFS L3 traffic on networks with limited bandwidths, or on networks that experience
congestion, use Quality of Service (QoS). See your router documentation for details.
n Follow Routed NFS L3 recommendations offered by storage vendor. Contact your storage vendor for
details.
n If you are planning to use systems with top-of-rack switches or switch-dependent I/O device
partitioning, contact your system vendor for compatibility and support.
n The environment supports only the NFS protocol. Do not use other storage protocols such as FCoE
over the same physical network.
n The NFS traffic in this environment can be routed only over a LAN. Other environments such as WAN
are not supported.
The RPCSEC_GSS Kerberos mechanism is an authentication service. It allows an NFS 4.1 client
installed on ESXi to prove its identity to an NFS server before mounting an NFS share. The Kerberos
security uses cryptography to work across an insecure network connection.
The ESXi implementation of Kerberos for NFS 4.1 provides two security models, krb5 and krb5i, that offer
different levels of security.
n Kerberos for authentication only (krb5) provides identity verification.
n Kerberos for authentication and data integrity (krb5i), in addition to identity verification, provides data
integrity services. These services help to protect the NFS traffic from tampering by checking data
packets for any potential modifications.
Kerberos supports cryptographic algorithms that prevent unauthorized users from gaining access to NFS
traffic. The NFS 4.1 client on ESXi attempts to use either the AES256-CTS-HMAC-SHA1-96 or AES128-
CTS-HMAC-SHA1-96 algorithm to access a share on the NAS server. Before using your NFS 4.1
datastores, make sure that AES256-CTS-HMAC-SHA1-96 or AES128-CTS-HMAC-SHA1-96 are enabled
on the NAS server.
The following table compares Kerberos security levels that ESXi supports.
Kerberos for authentication only (krb5): Integrity checksum for RPC header. Yes with DES; Yes with AES.
Kerberos for authentication and data integrity (krb5i): Integrity checksum for RPC header. No krb5i; Yes with AES.
n As a vSphere administrator, you specify Active Directory credentials to provide access to NFS 4.1
Kerberos datastores for an NFS user. A single set of credentials is used to access all Kerberos
datastores mounted on that host.
n When multiple ESXi hosts share the NFS 4.1 datastore, you must use the same Active Directory
credentials for all hosts that access the shared datastore. To automate the assignment process, set
the user in host profiles and apply the profile to all ESXi hosts.
n You cannot use two security mechanisms, AUTH_SYS and Kerberos, for the same NFS 4.1 datastore
shared by multiple hosts.
Prerequisites
n Familiarize yourself with the guidelines in NFS Storage Guidelines and Requirements.
n For details on configuring NFS storage, consult your storage vendor documentation.
Procedure
1 On the NFS server, configure an NFS volume and export it to be mounted on the ESXi hosts.
a Note the IP address or the DNS name of the NFS server and the full path, or folder name, for the
NFS share.
For NFS 4.1, you can collect multiple IP addresses or DNS names to use the multipathing
support that the NFS 4.1 datastore provides.
b If you plan to use Kerberos authentication with NFS 4.1, specify the Kerberos credentials to be
used by ESXi for authentication.
2 On each ESXi host, configure a VMkernel Network port for NFS traffic.
3 If you plan to use Kerberos authentication with the NFS 4.1 datastore, configure the ESXi hosts for
Kerberos authentication.
What to do next
When multiple ESXi hosts share the NFS 4.1 datastore, you must use the same Active Directory
credentials for all hosts that access the shared datastore. You can automate the assignment process by
setting the user in host profiles and applying the profile to all ESXi hosts.
Prerequisites
n Make sure that Microsoft Active Directory (AD) and NFS servers are configured to use Kerberos.
n Make sure that the NFS server exports are configured to grant full access to the Kerberos user.
Procedure
1 Configure DNS for NFS 4.1 with Kerberos
When you use NFS 4.1 with Kerberos, you must change the DNS settings on ESXi hosts. The
settings must point to the DNS server that is configured to hand out DNS records for the Kerberos
Key Distribution Center (KDC). For example, use the Active Directory server address if AD is used
as a DNS server.
What to do next
After you configure your host for Kerberos, you can create an NFS 4.1 datastore with Kerberos enabled.
Procedure
3 Under Networking, click TCP/IP configuration, and click the Edit TCP/IP stack configuration icon.
The best practice is to use the Active Directory server as the NTP server.
Procedure
5 Click OK.
Prerequisites
Set up an AD domain and a domain administrator account with the rights to add hosts to the domain.
Procedure
Files stored in all Kerberos datastores are accessed using these credentials.
Creating Datastores
You use the New Datastore wizard to create your datastores. Depending on the type of your storage and
storage needs, you can create a VMFS, NFS, or Virtual Volumes datastore.
A vSAN datastore is automatically created when you enable vSAN. For information, see the
Administering VMware vSAN documentation.
You can also use the New Datastore wizard to manage VMFS datastore copies.
Prerequisites
2 To discover newly added storage devices, perform a rescan. See Storage Rescan Operations.
3 Verify that storage devices you are planning to use for your datastores are available. See Storage
Device Characteristics.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
4 Enter the datastore name and if necessary, select the placement location for the datastore.
Important The device you select must not have any values displayed in the Snapshot Volume
column. If a value is present, the device contains a copy of an existing VMFS datastore. For
information on managing datastore copies, see Managing Duplicate VMFS Datastores.
Option Description
VMFS6 Default format on all hosts that support VMFS6. The ESXi hosts of version 6.0 or
earlier cannot recognize the VMFS6 datastore.
VMFS5 VMFS5 datastore supports access by the ESXi hosts of version 6.7 or earlier.
Option Description
Use all available partitions Dedicates the entire disk to a single VMFS datastore. If you select this option,
all file systems and data currently stored on this device are destroyed.
Use free space Deploys a VMFS datastore in the remaining free space of the disk.
b If the space allocated for the datastore is excessive for your purposes, adjust the capacity values
in the Datastore Size field.
c For VMFS6, specify the block size and define space reclamation parameters. See Space
Reclamation Requests from VMFS Datastores.
8 In the Ready to Complete page, review the datastore configuration information and click Finish.
The datastore on the SCSI-based storage device is created. It is available to all hosts that have access to
the device.
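For reference, a VMFS datastore can also be created from the command line with vmkfstools. The following is only a sketch: it assumes that a suitable partition already exists on the device, and the datastore name and device identifier are placeholders.
# Format partition 1 of the device with VMFS6 and label the datastore
vmkfstools -C vmfs6 -S my_vmfs_datastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1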
Prerequisites
n If you plan to use Kerberos authentication with the NFS 4.1 datastore, make sure to configure the
ESXi hosts for Kerberos authentication.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
n NFS 3
n NFS 4.1
Important If multiple hosts access the same datastore, you must use the same protocol on all hosts.
Option Description
Datastore name The system enforces a 42 character limit for the datastore name.
Server The server name or IP address. You can use IPv6 or IPv4 formats.
With NFS 4.1, you can add multiple IP addresses or server names if the NFS
server supports trunking. The ESXi host uses these values to achieve
multipathing to the NFS server mount point.
5 Select Mount NFS read only if the volume is exported as read-only by the NFS server.
6 To use Kerberos security with NFS 4.1, enable Kerberos and select an appropriate Kerberos model.
Option Description
Use Kerberos for authentication and data integrity (krb5i) In addition to identity verification, provides data integrity services. These services help to protect the NFS traffic from tampering by checking data packets for any potential modifications.
If you do not enable Kerberos, the datastore uses the default AUTH_SYS security.
7 If you are creating a datastore at the data center or cluster level, select hosts that mount the
datastore.
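As a command-line alternative, NFS datastores can be mounted with esxcli. The server addresses, share paths, and datastore names below are placeholders; for NFS 4.1 with Kerberos, additional security options apply.
# Mount an NFS 3 datastore
esxcli storage nfs add --host=nfs-server.example.com --share=/export/vms --volume-name=NFS3-DS
# Mount an NFS 4.1 datastore, optionally with multiple server addresses for multipathing
esxcli storage nfs41 add --hosts=192.0.2.10,192.0.2.11 --share=/export/vms --volume-name=NFS41-DS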
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
4 Enter the datastore name and select a backing storage container from the list of storage containers.
Make sure to use the name that does not duplicate another datastore name in your data center
environment.
If you mount the same Virtual Volumes datastore to several hosts, the name of the datastore must be
consistent across all hosts.
What to do next
After you create the Virtual Volumes datastore, you can perform such datastore operations as renaming
the datastore, browsing datastore files, unmounting the datastore, and so on.
Each VMFS datastore created in a storage disk has a unique signature, also called UUID, that is stored in
the file system superblock. When the storage disk is replicated or its snapshot is taken on the storage
side, the resulting disk copy is identical, byte-for-byte, with the original disk. For example, if the original
storage device contains a VMFS datastore with UUIDX, the disk copy appears to contain a datastore
copy with the same UUIDX.
In addition to LUN snapshots and replications, certain device operations, such as LUN ID changes, might
produce a copy of the original datastore.
ESXi can detect the VMFS datastore copy. You can mount the datastore copy with its original UUID or
change the UUID. The process of changing the UUID is called the datastore resignaturing.
Whether you select resignaturing or mounting without resignaturing depends on how the LUNs are
masked in the storage environment. If your hosts can see both copies of the LUN, then resignaturing is
the optimal method.
You can keep the signature if, for example, you maintain synchronized copies of virtual machines at a
secondary site as part of a disaster recovery plan. In the event of a disaster at the primary site, you mount
the datastore copy and power on the virtual machines at the secondary site.
When resignaturing a VMFS copy, ESXi assigns a new signature (UUID) to the copy, and mounts the
copy as a datastore distinct from the original. All references to the original signature in virtual machine
configuration files are updated.
n After resignaturing, the storage device replica that contained the VMFS copy is no longer treated as a
replica.
n A spanned datastore can be resignatured only if all its extents are online.
n The resignaturing process is fault tolerant. If the process is interrupted, you can resume it later.
n You can mount the new VMFS datastore without a risk of its UUID conflicting with UUIDs of any other
datastore from the hierarchy of device snapshots.
Prerequisites
n Perform a storage rescan on your host to update the view of storage devices presented to the host.
n Unmount the original VMFS datastore that has the same UUID as the copy you plan to mount. You
can mount the VMFS datastore copy only if it does not collide with the original VMFS datastore.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
4 Enter the datastore name and if necessary, select the placement location for the datastore.
5 From the list of storage devices, select the device that displays a value in the Snapshot Volume
column.
The value in the Snapshot Volume column indicates that the device contains a copy of an existing
VMFS datastore.
Option Description
Mount with resignaturing Under Mount Options, select Assign a New Signature and click Next.
Mount without resignaturing Under Mount Options, select Keep Existing Signature.
If a shared datastore has powered-on virtual machines and becomes 100% full, you can increase the
datastore capacity. You can perform this action only from the host where the powered-on virtual machines
are registered.
Depending on your storage configuration, you can use one of the following methods to increase the
datastore capacity. You do not need to power off virtual machines when using either method of increasing
the datastore capacity.
Expand an Existing Datastore
Increase the size of an expandable datastore. The datastore is considered expandable when the backing
storage device has free space immediately after the datastore extent.
Add an Extent
Increase the capacity of an existing VMFS datastore by adding new storage devices to the datastore. The
datastore can span over multiple storage devices, yet appear as a single volume. The spanned VMFS
datastore can use any or all its extents at any time. It does not need to fill up a particular extent before
using the next one.
Note Datastores that support only the hardware assisted locking, also called the atomic test and set
(ATS) mechanism, cannot span over non-ATS devices. For more information, see VMFS Locking
Mechanisms.
Prerequisites
You can increase the datastore capacity if the host storage meets one of the following conditions:
n The backing device for the existing datastore has enough free space.
Procedure
Option Description
To expand an existing datastore extent Select the device for which the Expandable column reads YES.
To add an extent Select the device for which the Expandable column reads NO.
Depending on the current layout of the disk and on your previous selections, the menu items you see
might vary.
Use free space to expand the datastore Expands an existing extent to a required capacity.
Use free space Deploys an extent in the remaining free space of the disk. This menu item is
available only when you are adding an extent.
Use all available partitions Dedicates the entire disk to a single extent. This menu item is available only when
you are adding an extent and when the disk you are formatting is not blank. The
disk is reformatted, and any datastores and data that it contains are erased.
The minimum extent size is 1.3 GB. By default, the entire free space on the storage device is
available.
7 Click Next.
8 Review the proposed layout and the new configuration of your datastore, and click Finish.
n Unmount Datastores
When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you
specify. The datastore continues to appear on other hosts, where it remains mounted.
n Mount Datastores
You can mount a datastore you previously unmounted. You can also mount a datastore on additional
hosts, so that it becomes a shared datastore.
Note If the host is managed by vCenter Server, you cannot rename the datastore by directly accessing
the host from the VMware Host Client. You must rename the datastore from vCenter Server.
Procedure
The new name appears on all hosts that have access to the datastore.
Unmount Datastores
When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you
specify. The datastore continues to appear on other hosts, where it remains mounted.
Do not perform any configuration operations that might result in I/O to the datastore while the unmounting
is in progress.
Note Make sure that the datastore is not used by vSphere HA Heartbeating. vSphere HA Heartbeating
does not prevent you from unmounting the datastore. However, if the datastore is used for heartbeating,
unmounting it might cause the host to fail and restart any active virtual machine.
Prerequisites
When appropriate, before unmounting datastores, make sure that the following prerequisites are met:
Procedure
3 If the datastore is shared, select the hosts from which to unmount the datastore.
After you unmount a VMFS datastore from all hosts, the datastore is marked as inactive. If you unmount
an NFS or a virtual volumes datastore from all hosts, the datastore disappears from the inventory. You
can mount the unmounted VMFS datastore. To mount the NFS or virtual volumes datastore that has been
removed from the inventory, use the New Datastore wizard.
What to do next
If you unmounted the VMFS datastore as a part of a storage removal procedure, you can now detach the
storage device that is backing the datastore. See Detach Storage Devices.
Mount Datastores
You can mount a datastore you previously unmounted. You can also mount a datastore on additional
hosts, so that it becomes a shared datastore.
A VMFS datastore that has been unmounted from all hosts remains in inventory, but is marked as
inaccessible. You can use this task to mount the VMFS datastore to a specified host or multiple hosts.
If you have unmounted an NFS or a Virtual Volumes datastore from all hosts, the datastore disappears
from the inventory. To mount the NFS or Virtual Volumes datastore that has been removed from the
inventory, use the New Datastore wizard.
A datastore of any type that is unmounted from some hosts while being mounted on others is shown as
active in the inventory.
Procedure
2 Right-click the datastore to mount and select one of the following options:
n Mount Datastore
Note The delete operation for the datastore permanently deletes all files associated with virtual
machines on the datastore. Although you can delete the datastore without unmounting, it is preferable
that you unmount the datastore first.
Prerequisites
n Make sure that the datastore is not used for vSphere HA heartbeating.
Procedure
Procedure
2 Explore the contents of the datastore by navigating to existing folders and files.
Copy to Copy selected folders or files to a new location, either on the same datastore or
on a different datastore.
Move to Move selected folders or files to a new location, either on the same datastore or
on a different datastore.
Inflate Convert a selected thin virtual disk to thick. This option applies only to thin-
provisioned disks.
In addition to their traditional use as storage for virtual machines files, datastores can serve to store data
or files related to virtual machines. For example, you can upload ISO images of operating systems from a
local computer to a datastore on the host. You then use these images to install guest operating systems
on the new virtual machines.
Note You cannot upload individual files directly to a Virtual Volumes datastore. You must first create a folder
on the Virtual Volumes datastore, and then upload the files into the folder. However, the Virtual Volumes
datastore supports direct uploads of folders.
Prerequisites
Procedure
Option Description
Upload a file
a Select the target folder and click Upload Files.
b Locate the item to upload on the local computer and click Open.
Upload a folder (available only in the vSphere Client)
a Select the datastore or the target folder and click Upload Folders.
b Locate the item to upload on the local computer and click Ok.
4 Refresh the datastore file browser to see the uploaded files or folders on the list.
What to do next
You might experience problems when deploying an OVF template that you previously exported and then
uploaded to a datastore. For details and a workaround, see the VMware Knowledge Base article
2117310.
Prerequisites
Procedure
Note Virtual disk files are moved or copied without format conversion. If you move a virtual disk to a
datastore that belongs to a host different from the source host, you might need to convert the virtual disk.
Otherwise, you might not be able to use the disk.
Prerequisites
Procedure
5 (Optional) Select Overwrite files and folders with matching names at the destination.
6 Click OK.
Prerequisites
Procedure
You use the datastore browser to inflate the thin virtual disk.
Prerequisites
n Make sure that the datastore where the virtual machine resides has enough space.
n Remove snapshots.
Procedure
2 Expand the virtual machine folder and browse to the virtual disk file that you want to convert.
The file has the .vmdk extension and is marked with the virtual disk icon.
Note The option might not be available if the virtual disk is thick or when the virtual machine is
running.
The inflated virtual disk occupies the entire datastore space originally provisioned to it.
Prerequisites
Before you change the device filters, consult with the VMware support team. You can turn off the filters
only if you have other methods to prevent device corruption.
Procedure
In the Name and Value text boxes at the bottom of the page, enter appropriate information.
Name Value
config.vpxd.filter.vmfsFilter False
config.vpxd.filter.rdmFilter False
config.vpxd.filter.sameHostsAndTransportsFilter False
config.vpxd.filter.hostRescanFilter False
Note If you turn off this filter, your hosts continue to perform a rescan each time
you present a new LUN to a host or a cluster.
Storage Filtering
vCenter Server provides storage filters to help you avoid storage device corruption or performance
degradation that might be caused by an unsupported use of storage devices. These filters are available
by default.
config.vpxd.filter.vmfsFilter (VMFS Filter)
Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by
vCenter Server. The LUNs do not show up as candidates to be formatted with another VMFS datastore or
to be used as an RDM.
config.vpxd.filter.rdmFilter (RDM Filter)
Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server. The
LUNs do not show up as candidates to be formatted with VMFS or to be used by a different RDM.
For your virtual machines to access the same LUN, the virtual machines must share the same RDM
mapping file. For information about this type of configuration, see the vSphere Resource Management
documentation.
config.vpxd.filter.sameHostsAndTransportsFilter (Same Hosts and Transports Filter)
Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type
incompatibility. Prevents you from adding the following LUNs as extents:
n LUNs not exposed to all hosts that share the original VMFS datastore.
n LUNs that use a storage type different from the one the original VMFS datastore uses. For example,
you cannot add a Fibre Channel extent to a VMFS datastore on a local storage device.
config.vpxd.filter.hostRescanFilter (Host Rescan Filter)
Automatically rescans and updates VMFS datastores after you perform datastore management
operations. The filter helps provide a consistent view of all VMFS datastores on all hosts managed by
vCenter Server.
Note If you present a new LUN to a host or a cluster, the hosts automatically perform a rescan no matter
whether you have the Host Rescan Filter on or off.
Prerequisites
Procedure
2 Log in to your virtual machine and configure the disks as dynamic mirrored disks.
Name Value
scsi#.returnNoConnectDuringAPD True
scsi#.returnBusyOnNoConnectStatus False
e Click OK.
Typically, a partition to collect diagnostic information, also called VMkernel core dump, is created on a
local storage device during ESXi installation. You can override this default behavior if, for example, you
use shared storage devices instead of local storage. To prevent automatic formatting of local devices,
detach the devices from the host before you install ESXi and power on the host for the first time. You can
later set up a location for collecting diagnostic information on a local or remote storage device.
When you use storage devices, you can select between two options of setting up core dump collection.
You can use a preconfigured diagnostic partition on a storage device or use a file on a VMFS datastore.
n You cannot create a diagnostic partition on an iSCSI LUN accessed through the software iSCSI or
dependent hardware iSCSI adapter. For more information about diagnostic partitions with iSCSI, see
General Recommendations for Boot from iSCSI SAN.
n You cannot create a diagnostic partition on a LUN accessed through software FCoE.
n Unless you are using diskless servers, set up a diagnostic partition on local storage.
n Each host must have a diagnostic partition of 2.5 GB. If multiple hosts share a diagnostic partition on
a SAN LUN, the partition must be large enough to accommodate core dumps of all hosts.
n If a host that uses a shared diagnostic partition fails, reboot the host and extract log files immediately
after the failure. Otherwise, if a second host fails before you collect the diagnostic data of the first
host, the second host might not be able to save its core dump.
Procedure
If you do not see this menu item, the host already has a diagnostic partition.
Private local Creates the diagnostic partition on a local disk. This partition stores fault
information only for your host.
Private SAN storage Creates the diagnostic partition on a non-shared SAN LUN. This partition stores
fault information only for your host.
Shared SAN storage Creates the diagnostic partition on a shared SAN LUN. Multiple hosts can access
this partition. It can store fault information for more than one host.
4 Click Next.
5 Select the device to use for the diagnostic partition and click Next.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
If a diagnostic partition is set, the command displays information about it. Otherwise, the command shows
that no partition is activated and configured.
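For reference, this verification can typically be performed with an esxcli command of the following form;
treat it as a sketch and confirm the exact syntax in the vSphere Command-Line Interface Reference:
esxcli system coredump partition list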
What to do next
To manage the host’s diagnostic partition, use the vCLI commands. See vSphere Command-Line
Interface Concepts and Examples.
Typically, a core dump partition of 2.5 GB is created during ESXi installation. For upgrades from ESXi 5.0
and earlier, the core dump partition is limited to 100 MB. For this type of upgrade, during the boot process
the system might create a core dump file on a VMFS datastore. If it does not create a core dump file, you
can manually create the file.
Note Software iSCSI and software FCoE are not supported for core dump file locations.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
1 Create a VMFS datastore core dump file by running the following command:
The command takes the following options, but they are not required and can be omitted:
Option Description
--datastore | -d datastore_UUID If not provided, the system selects a datastore of sufficient size.
or datastore_name
--file | -f file_name If not provided, the system specifies a unique name for the core dump file.
--size |-s file_size_MB If not provided, the system creates a file of the size appropriate for the memory
installed in the host.
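As a minimal sketch, with all of the optional parameters above omitted so that the system chooses the
datastore, file name, and size automatically, the command in step 1 typically takes the following form:
esxcli system coredump file add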
Option Description
--path | -p The path of the core dump file to use. The file must be pre-allocated.
--smart | -s This flag can be used only with --enable | -e=true. It causes the file to be
selected using the smart selection algorithm.
For example,
esxcli system coredump file set --smart --enable true
Output similar to the following indicates that the core dump file is active and configured:
What to do next
For information about other commands you can use to manage the core dump files, see the vSphere
Command-Line Interface Reference documentation.
You can temporarily deactivate the core dump file. If you do not plan to use the deactivated file, you can
remove it from the VMFS datastore. To remove the file that has not been deactivated, you can use the
system coredump file remove command with the --force | -F parameter.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
Option Description
--file | -f Enter the name of the dump file to be removed. If you do not enter the name, the
command removes the default configured core dump file.
--force | -F Deactivate and unconfigure the dump file being removed. This option is required if
the file has not been previously deactivated and is active.
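For example, to deactivate and remove the configured core dump file in one step, a command of the
following form can be used:
esxcli system coredump file remove --force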
The core dump file becomes disabled and is removed from the VMFS datastore.
Problem
You can check metadata consistency when you experience problems with a VMFS datastore or a virtual
flash resource. For example, perform a metadata check if one of the following occurs:
n You experience storage outages.
n You see metadata errors in the vmkernel.log file similar to the following:
n You see corruption being reported for a datastore in events tabs of vCenter Server.
Solution
To check metadata consistency, run VOMA from the CLI of an ESXi host. VOMA can be used to check
and fix minor inconsistency issues for a VMFS datastore or a virtual flash resource. To resolve errors
reported by VOMA, consult VMware Support.
n Power off any virtual machines that are running or migrate them to a different datastore.
The following example demonstrates how to use VOMA to check VMFS metadata consistency.
1 Obtain the name and partition number of the device that backs the VMFS datastore that you want to
check.
The Device Name and Partition columns in the output identify the device. For example:
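A listing that includes the Device Name and Partition columns can typically be obtained with the
following command, shown here as a sketch with its output omitted:
esxcli storage vmfs extent list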
Provide the absolute path to the device partition that backs the VMFS datastore, and provide a
partition number with the device name. For example:
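A check command of the following form can be used; the device identifier below is the same placeholder
value used later in this section, not a real device:
voma -m vmfs -f check -d /vmfs/devices/disks/naa.00000000000000000000000000:1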
The output lists possible errors. For example, the following output indicates that the heartbeat
address is invalid.
Phase 2: Checking VMFS heartbeat region
ON-DISK ERROR: Invalid HB address
Phase 3: Checking all file descriptors.
Phase 4: Checking pathname and connectivity.
Phase 5: Checking resource reference counts.
Command options that the VOMA tool takes include the following.
-d|--device Device or disk to be inspected. Make sure to provide the absolute path to the device
partition backing the VMFS datastore. For example,
/vmfs/devices/disks/naa.00000000000000000000000000:1.
For more details, see the VMware Knowledge Base article 2036767.
The pointer block cache is a host-wide cache that is independent from VMFS. The cache is shared
across all datastores that are accessed from the same ESXi host.
/VMFS3/MinAddressableSpaceTB
The parameter defines the minimum amount of memory that the system guarantees to the pointer block
cache. For example, 1 TB of open file space requires approximately 4 MB of memory. The default value
is 10 TB.
/VMFS3/MaxAddressableSpaceTB
The parameter defines the maximum limit of pointer blocks that can be cached in memory. The default
value is 32 TB. The maximum value is 128 TB. Typically, the default value of the
/VMFS3/MaxAddressableSpaceTB parameter is adequate. However, as the size of the open vmdk files
increases, the number of pointer blocks related to those files also increases. If the increase causes any
performance degradation, you can adjust the parameter to its maximum value to provide more space for
the pointer block cache. Base the maximum size of the pointer block cache on the working set, or the
active pointer blocks required.
Pointer Block Eviction
The /VMFS3/MaxAddressableSpaceTB parameter also controls the growth of the pointer block cache.
When the size of the pointer block cache approaches the configured maximum size, a pointer block
eviction process starts. The mechanism leaves active pointer blocks, but removes non-active or less
active blocks from the cache, so that space can be reused.
To change the values for the pointer block cache, use the Advanced System Settings dialog box of the
vSphere Client or the esxcli system settings advanced set -o command.
You can use the esxcli storage vmfs pbcache command to obtain information about the size of the
pointer block cache and other statistics. This information assists you in adjusting minimum and maximum
sizes of the pointer block cache, so that you can get maximum performance.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
u To obtain or reset the pointer block cache statistics, use the following command:
Option Description
get Get VMFS pointer block cache statistics.
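For example, to display the current statistics, the command likely takes the following form:
esxcli storage vmfs pbcache get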
Caution Changing advanced options is considered unsupported. Typically, the default settings produce
the optimum result. Change the advanced options only when you get specific instructions from VMware
technical support or a knowledge base article.
Procedure
Option Description
VMFS3.MinAddressableSpaceTB Minimum size of all open files that VMFS cache guarantees to support.
VMFS3.MaxAddressableSpaceTB Maximum size of all open files that VMFS cache supports before eviction starts.
6 Click OK.
Example: Use the esxcli Command to Change the Pointer Block Cache
You can also use the esxcli system settings advanced set -o command to modify the size of the pointer
block cache. The following example describes how to set the size to its maximum value of 128 TB.
1 To change the value of /VMFS3/MaxAddressableSpaceTB to 128 TB, enter the following command:
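Assuming the standard option name shown above, the command might look like the following:
esxcli system settings advanced set -i 128 -o /VMFS3/MaxAddressableSpaceTB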
If a failure of any element in the SAN network, such as an adapter, switch, or cable, occurs, ESXi can
switch to another viable physical path. This process of path switching to avoid failed components is
known as path failover.
In addition to path failover, multipathing provides load balancing. Load balancing is the process of
distributing I/O loads across multiple physical paths. Load balancing reduces or removes potential
bottlenecks.
Note Virtual machine I/O might be delayed for up to 60 seconds while path failover takes place. With
these delays, the SAN can stabilize its configuration after topology changes. In general, the I/O delays
might be longer on active-passive arrays and shorter on active-active arrays.
In the following illustration, multiple physical paths connect each server with the storage device. For
example, if HBA1 or the link between HBA1 and the FC switch fails, HBA2 takes over and provides the
connection. The process of one HBA taking over for another is called HBA failover.
Similarly, if SP1 fails or the links between SP1 and the switches breaks, SP2 takes over. SP2 provides the
connection between the switch and the storage device. This process is called SP failover. VMware ESXi
supports both HBA and SP failovers.
n ESXi does not support multipathing when you combine an independent hardware adapter with
software iSCSI or dependent iSCSI adapters in the same host.
n Multipathing between software and dependent adapters within the same host is supported.
n On different hosts, you can mix both dependent and independent adapters.
The following illustration shows multipathing setups possible with different types of iSCSI initiators.
On the illustration, Host1 has two hardware iSCSI adapters, HBA1 and HBA2, that provide two physical
paths to the storage system. Multipathing plug-ins on your host, whether the VMkernel NMP or any third-
party MPPs, have access to the paths by default. The plug-ins can monitor health of each physical path.
If, for example, HBA1 or the link between HBA1 and the network fails, the multipathing plug-ins can
switch the path over to HBA2.
Multipathing plug-ins do not have direct access to physical NICs on your host. As a result, for this setup,
you first must connect each physical NIC to a separate VMkernel port. You then associate all VMkernel
ports with the software iSCSI initiator using a port binding technique. Each VMkernel port connected to a
separate NIC becomes a different path that the iSCSI storage stack and its storage-aware multipathing
plug-ins can use.
For information about configuring multipathing for software iSCSI, see Setting Up Network for iSCSI and
iSER.
When using one of these storage systems, your host does not see multiple ports on the storage and
cannot choose the storage port it connects to. These systems have a single virtual port address that your
host uses to initially communicate. During this initial communication, the storage system can redirect the
host to communicate with another port on the storage system. The iSCSI initiators in the host obey this
reconnection request and connect with a different port on the system. The storage system uses this
technique to spread the load across available ports.
If the ESXi host loses connection to one of these ports, it automatically attempts to reconnect with the
virtual port of the storage system, and should be redirected to an active, usable port. This reconnection
and redirection happens quickly and generally does not disrupt running virtual machines. These storage
systems can also request that iSCSI initiators reconnect to the system, to change which storage port they
are connected to. This allows the most effective use of the multiple ports.
The Port Redirection illustration shows an example of port redirection. The host attempts to connect to
the 10.0.0.1 virtual port. The storage system redirects this request to 10.0.0.2. The host connects with
10.0.0.2 and uses this port for I/O communication.
Note The storage system does not always redirect connections. The port at 10.0.0.1 could be used for
traffic, also.
If the port on the storage system that is acting as the virtual port becomes unavailable, the storage
system reassigns the address of the virtual port to another port on the system. Port Reassignment shows
an example of this type of port reassignment. In this case, the virtual port 10.0.0.1 becomes unavailable
and the storage system reassigns the virtual port IP address to a different port. The second port responds
to both addresses.
With this form of array-based failover, you can have multiple paths to the storage only if you use multiple
ports on the ESXi host. These paths are active-active. For additional information, see iSCSI Session
Management.
When a path fails, storage I/O might pause for 30-60 seconds until your host determines that the link is
unavailable and performs the failover. If you attempt to display the host, its storage devices, or its
adapters, the operation might appear to stall. Virtual machines with their disks installed on the SAN can
appear unresponsive. After the failover, I/O resumes normally and the virtual machines continue to run.
A Windows virtual machine might interrupt the I/O and eventually fail when failovers take too long. To
avoid the failure, set the disk timeout value for the Windows virtual machine to at least 60 seconds.
This procedure explains how to change the timeout value by using the Windows registry.
Prerequisites
Procedure
4 Double-click TimeOutValue.
5 Set the value data to 0x3c (hexadecimal) or 60 (decimal) and click OK.
After you make this change, Windows waits at least 60 seconds for delayed disk operations to finish
before it generates errors.
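If you prefer to script this change inside the guest, the same value can typically be set from an elevated
command prompt. The registry path below is the standard Windows disk timeout location and is provided
as a reference sketch, not quoted from this procedure:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f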
Pluggable Storage Architecture (PSA)
To manage multipathing, ESXi uses a special VMkernel layer, Pluggable Storage Architecture (PSA).
The PSA is an open and modular framework that coordinates various software modules responsible for
multipathing operations. These modules include generic multipathing modules that VMware provides,
NMP and HPP, and third-party MPPs.
Native Multipathing Plug-in (NMP)
The NMP is the VMkernel multipathing module that ESXi provides by default. The NMP associates
physical paths with a specific storage device and provides a default path selection algorithm based on
the array type. The NMP is extensible and manages additional submodules, called Path Selection
Policies (PSPs) and Storage Array Type Policies (SATPs). PSPs and SATPs can be provided by VMware,
or by a third party.
Path Selection Plug-ins (PSPs)
The PSPs are submodules of the VMware NMP. PSPs are responsible for selecting a physical path for
I/O requests.
Storage Array Type Plug-ins (SATPs)
The SATPs are submodules of the VMware NMP. SATPs are responsible for array-specific operations.
The SATP can determine the state of a particular array-specific path, perform a path activation, and
detect any path errors.
Multipathing Plug-ins (MPPs)
The PSA offers a collection of VMkernel APIs that third parties can use to create their own multipathing
plug-ins (MPPs). The modules provide specific load balancing and failover functionalities for a particular
storage array. The MPPs can be installed on the ESXi host. They can run in addition to the VMware
native modules, or as their replacement.
VMware High-Performance Plug-in (HPP)
The HPP replaces the NMP for high-speed devices, such as NVMe PCIe flash. The HPP improves the
performance of ultra-fast flash devices that are installed locally on your ESXi host. The plug-in supports
only single-pathed devices.
Claim Rules
The PSA uses claim rules to determine whether an MPP or NMP owns the paths to a particular storage
device. The NMP has its own set of claim rules. These claim rules match the device with a specific SATP
and PSP. The MPP claim rules are ordered. Lower rule numbers have preference over higher rule
numbers. The NMP claim rules are not ordered.
VMware provides generic native multipathing modules, called VMware NMP and VMware HPP. In
addition, the PSA offers a collection of VMkernel APIs that third-party developers can use. The software
developers can create their own load balancing and failover modules for a particular storage array. These
third-party multipathing modules (MPPs) can be installed on the ESXi host and run in addition to the
VMware native modules, or as their replacement.
When coordinating the VMware native modules and any installed third-party MPPs, the PSA performs the
following tasks:
n Routes I/O requests for a specific logical device to the MPP managing that device.
As the Pluggable Storage Architecture illustration shows, multiple third-party MPPs can run in parallel with
the VMware NMP or HPP. When installed, the third-party MPPs can replace the behavior of the native
modules. The MPPs can take control of the path failover and the load-balancing operations for the
specified storage devices.
Generally, the VMware NMP supports all storage arrays listed on the VMware storage HCL and provides
a default path selection algorithm based on the array type. The NMP associates a set of physical paths
with a specific storage device, or LUN.
For additional multipathing operations, the NMP uses submodules, called SATPs and PSPs. The NMP
delegates to the SATP the specific details of handling path failover for the device. The PSP handles path
selection for the device.
n Performs actions necessary to handle path failures and I/O command retries.
ESXi automatically installs an appropriate SATP for an array you use. You do not need to obtain or
download any SATPs.
2 The PSP selects an appropriate physical path on which to issue the I/O.
3 The NMP issues the I/O request on the path selected by the PSP.
5 If the I/O operation reports an error, the NMP calls the appropriate SATP.
6 The SATP interprets the I/O command errors and, when appropriate, activates the inactive paths.
7 The PSP is called to select a new path on which to issue the I/O.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
This command typically shows the NMP and, if loaded, the HPP and the MASK_PATH module. If any
third-party MPPs have been loaded, they are listed as well.
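The command referenced above is likely of the following form; the --plugin-class=MP option, an
assumption here, restricts the output to multipathing plug-ins:
esxcli storage core plugin list --plugin-class=MP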
For more information about the command, see the vSphere Command-Line Interface Concepts and
Examples and vSphere Command-Line Interface Reference documentation.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
Use the --device | -d=device_ID parameter to filter the output of this command to show a single
device.
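The sample output below is typically produced by a command of the following form; the device filter is
optional:
esxcli storage nmp device list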
......
eui.6238666462643332
Device Display Name: SCST_BIO iSCSI Disk (eui.6238666462643332)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: {action_OnRetryErrors=off}
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: {preferred=vmhba65:C0:T0:L0;current=vmhba65:C0:T0:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba65:C0:T0:L0
Is USB: false
For more information about the command, see the vSphere Command-Line Interface Concepts and
Examples and vSphere Command-Line Interface Reference documentation.
The plug-ins are submodules of the VMware NMP. The NMP assigns a default PSP for each logical
device based on the device type. You can override the default PSP.
VMW_PSP_RR (Round Robin (VMware))
VMW_PSP_RR enables the Round Robin (VMware) policy. The policy uses an automatic path selection
algorithm rotating through the configured paths. Round Robin is the default policy for many arrays. The
policy can be used with both active-active and active-passive arrays to implement load balancing across
paths for different LUNs. With active-passive arrays, the policy uses all active paths. With active-active
arrays, the policy uses all available paths.
VMW_PSP_RR has configurable options that you can modify on the command line. To set these
parameters, use the esxcli storage nmp psp roundrobin command. For details, see the vSphere
Command-Line Interface Reference documentation.
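For example, a per-device configuration that switches the path after every I/O might look like the following
sketch; the device identifier is a hypothetical placeholder:
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1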
VMware SATPs
Storage Array Type Plug-ins (SATPs) are responsible for array-specific operations. The SATPs are
submodules of the VMware NMP.
ESXi offers an SATP for every type of array that VMware supports. ESXi also provides default SATPs that
support non-specific active-active, active-passive, ALUA, and local devices.
Each SATP accommodates special characteristics of a certain class of storage arrays. The SATP can
perform the array-specific operations required to detect path state and to activate an inactive path. As a
result, the NMP module itself can work with multiple storage arrays without having to be aware of the
storage device specifics.
Generally, the NMP determines which SATP to use for a specific storage device and associates the SATP
with the physical paths for that storage device. The SATP implements the tasks that include the following:
n Performs array-specific actions necessary for storage fail-over. For example, for active-passive
devices, it can activate passive paths.
For more information, see the VMware Compatibility Guide and the vSphere Command-Line Interface
Reference documentation.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
For each SATP, the output displays information that shows the type of storage array or system the SATP
supports. The output also shows the default PSP for any LUNs that use this SATP. Placeholder
(plugin not loaded) in the Description column indicates that the SATP is not loaded.
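The listing described above is typically produced with the following command:
esxcli storage nmp satp list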
For more information about the command, see the vSphere Command-Line Interface Concepts and
Examples and vSphere Command-Line Interface Reference documentation.
The HPP replaces the NMP for high-speed devices, such as NVMe PCIe flash, installed on your host.
The HPP uses a direct I/O submission model, also called fast path, and does not require SATPs or PSPs.
The plug-in directly submits I/Os to the local device using one path. Only single-pathed devices are
supported.
The HPP is included with vSphere. The direct submission APIs can also be included in the multipathing
plug-ins (MPPs) that third parties provide.
vSAN deployments or any standalone ESXi hosts that require faster storage performance can benefit
from the HPP.
HPP Requirements
The HPP requires the following infrastructure.
n Your ESXi host uses high-speed local flash devices for storage.
HPP Limitations
HPP does not support the following items that NMP typically supports.
n Multipathing. The HPP claims the first path to a device and rejects the rest of the paths.
n 4Kn devices with software emulation. You cannot use the HPP to claim these devices.
n Use the vSphere version that supports the HPP, such as vSphere 6.7.
n Do not activate the HPP for HDDs, slower flash devices, or remote storage. The HPP is not expected
to provide any performance benefits with devices incapable of at least 200 000 IOPS.
n Because ESXi is not expected to provide built-in claim rules for the HPP, enable the HPP using the
esxcli command.
n Configure your VMs to use VMware Paravirtual controllers. See the vSphere Virtual Machine
Administration documentation.
n If a single VM drives a significant share of the device's I/O workload, consider spreading the I/O
across multiple virtual disks. Attach the disks to separate virtual controllers in the VM.
Otherwise, I/O throughput might be limited due to saturation of the CPU core responsible for
processing I/Os on a particular virtual storage controller.
You can use the ESXi Shell or vSphere CLI to configure the HPP claim rules. For more information, see
Getting Started with vSphere Command-Line Interfaces and vSphere Command-Line Interface
Reference.
Note Enabling the HPP is not supported on PXE booted ESXi hosts.
Prerequisites
Procedure
esxcli storage core claimrule add -r 10 -t vendor -V=NVMe -M=* -P HPP --force-reserved
This sample command instructs the HPP to claim all devices with the vendor NVMe. Modify this rule
to claim the devices you specify. Make sure to follow these recommendations:
n For the rule ID parameter, use a number within the 1–49 range to make sure that the HPP claim
rule precedes the built-in NMP rules. The default NMP rules 50–54 are reserved for locally
attached storage devices.
n Use the --force-reserved option. With this option, you can add a rule into the range 0–100 that
is reserved for internal VMware use.
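After you load and run the claim rules, and reboot the host if required, you can verify which plug-in owns
the device. Output similar to the following, which can typically be produced with esxcli storage core
device list (an assumption based on the fields shown), indicates that the HPP has claimed the device: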
mpx.vmhba2:C0:T0:L0
Display Name: Local NVMe Disk (mpx.vmhba2:C0:T0:L0)
...
Multipath Plugin: HPP
...
By default, ESXi passes every I/O through the I/O scheduler. However, using the scheduler might create
internal queuing, which is not efficient with the high-speed storage devices.
You can configure the latency sensitive threshold and enable the direct submission mechanism that helps
I/O to bypass the scheduler. With this mechanism enabled, the I/O passes directly from PSA through the
HPP to the device driver.
For the direct submission to work properly, the observed average I/O latency must be lower than the
latency threshold you specify. If the I/O latency exceeds the latency threshold, the system stops the direct
submission and temporarily reverts to using the I/O scheduler. The direct submission is resumed when
the average I/O latency drops below the latency threshold again.
Procedure
1 Set the latency sensitive threshold for the device by running the following command:
3 Monitor the status of the latency sensitive threshold. Check VMkernel logs for the following entries:
See Getting Started with vSphere Command-Line Interfaces for an introduction, and the vSphere
Command-Line Interface Reference for details of esxcli command use.
esxcli storage hpp path list
Lists the paths currently claimed by the high-performance plug-in.
Options: -d|--device=device (display information for a specific device), -p|--path=path (limit the output
to a specific path).
esxcli storage hpp device usermarkedssd list
Lists the devices that were marked as SSD by the user.
Option: -d|--device=device (limit the output to a specific device).
The module that owns the device becomes responsible for managing the multipathing support for the
device. By default, the host performs a periodic path evaluation every five minutes and assigns unclaimed
paths to the appropriate module.
For the paths managed by the NMP module, a second set of claim rules is used. These rules assign
SATP and PSP modules to each storage device and determine which Storage Array Type Policy and Path
Selection Policy to apply.
Use the vSphere Client to view the Storage Array Type Policy and Path Selection Policy assigned to a
specific storage device. You can also check the status of all available paths for this storage device. If
needed, you can change the default Path Selection Policy using the client.
To change the default multipathing module or SATP, modify claim rules using the vSphere CLI.
You can find some information about modifying claim rules in Using Claim Rules.
Procedure
5 Click the Properties tab and review the module that owns the device, for example NMP. Under
Multipathing Policies, you can also see the Path Selection Policy and Storage Array Type Policy
assigned to the device.
6 Click the Paths tab to review all paths available for the storage device and the status of each path.
The following path status information can appear:
Status Description
Active (I/O) Working path or multiple paths that currently transfer data.
Standby Paths that are inactive. If the active path fails, they can become operational and
start transferring I/O.
Dead Paths that are no longer available for processing I/O. A physical medium failure or
array misconfiguration can cause this status.
If you are using the Fixed path policy, you can see which path is the preferred path. The preferred
path is marked with an asterisk (*) in the Preferred column.
Procedure
5 Under Multipathing Policies, review the module that owns the device, such as NMP. You can also see
the Path Selection Policy and Storage Array Type Policy assigned to the device.
6 Under Paths, review the device paths and the status of each path. The following path status
information can appear:
Status Description
Active (I/O) Working path or multiple paths that currently transfer data.
Standby Paths that are inactive. If the active path fails, they can become operational and
start transferring I/O.
Dead Paths that are no longer available for processing I/O. A physical medium failure or
array misconfiguration can cause this status.
If you are using the Fixed path policy, you can see which path is the preferred path. The preferred
path is marked with an asterisk (*) in the Preferred column.
Procedure
4 Select the item whose paths you want to change and click the Properties tab.
By default, VMware supports the following path selection policies. If you have a third-party PSP
installed on your host, its policy also appears on the list.
n Fixed (VMware)
8 To save your settings and exit the dialog box, click OK.
You disable a path using the Paths panel. You have several ways to access the Paths panel, from a
datastore, a storage device, an adapter, or a VVols Protocol Endpoint view.
Procedure
n Storage Adapters
n Storage Devices
n Protocol Endpoints
4 In the right pane, select the item whose paths you want to disable, an adapter, storage device, or
Protocol Endpoint, and click the Paths tab.
Core Claim Rules These claim rules determine which multipathing module, the NMP, HPP, or
a third-party MPP, claims the specific device.
SATP Claim Rules Depending on the device type, these rules assign a particular SATP
submodule that provides vendor-specific multipathing management to the
device.
You can use the esxcli commands to add or change the core and SATP claim rules. Typically, you add
the claim rules to load a third-party MPP or to hide a LUN from your host. Changing claim rules might be
necessary when default settings for a specific device are not sufficient.
For more information about commands available to manage PSA claim rules, see the Getting Started with
vSphere Command-Line Interfaces.
For a list of storage arrays and corresponding SATPs and PSPs, see the Storage/SAN section of the
vSphere Compatibility Guide.
Multipathing Considerations
Specific considerations apply when you manage storage multipathing plug-ins and claim rules.
n If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is
VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.
n When the system searches the SATP rules to locate a SATP for a given device, it searches the driver
rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules
are searched. If no match occurs, NMP selects a default SATP for the device.
n If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no
claim rule match occurs for this device. The device is claimed by the default SATP based on the
device's transport type.
n The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The
VMW_PSP_MRU selects an active/optimized path as reported by the VMW_SATP_ALUA, or an
active/unoptimized path if there is no active/optimized path. This path is used until a better path is
available (MRU). For example, if the VMW_PSP_MRU is currently using an active/unoptimized path
and an active/optimized path becomes available, the VMW_PSP_MRU will switch the current path to
the active/optimized one.
n While VMW_PSP_MRU is typically selected for ALUA arrays by default, certain ALUA storage arrays
need to use VMW_PSP_FIXED. To check whether your storage array requires VMW_PSP_FIXED,
see the VMware Compatibility Guide or contact your storage vendor. When using VMW_PSP_FIXED
with ALUA arrays, unless you explicitly specify a preferred path, the ESXi host selects the most
optimal working path and designates it as the default preferred path. If the host selected path
becomes unavailable, the host selects an alternative available path. However, if you explicitly
designate the preferred path, it will remain preferred no matter what its status is.
n By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless
you want to unmask these devices.
Claim rules indicate whether the NMP, HPP, or a third-party MPP manages a given physical path. Each
claim rule identifies a set of paths based on the following parameters:
n Vendor/model strings
Procedure
If you do not use the claimrule-class option, the MP rule class is implied.
Example: Sample Output of the esxcli storage core claimrule list Command
n The NMP claims all paths connected to storage devices that use the USB, SATA, IDE, and Block
SCSI transport types.
n The rules for HPP, MPP_1, MPP_2, and MPP_3 have been added, so that the modules can claim
specified devices. For example, the HPP claims all devices with vendor NVMe. All devices handled
by the inbox nvme driver are claimed regardless of the actual vendor. The MPP_1 module claims all
paths connected to any model of the NewVend storage array.
n You can use the MASK_PATH module to hide unused devices from your host. By default, the PSA
claim rule 101 masks Dell array pseudo devices with a vendor string DELL and a model string
Universal Xport.
n The Rule Class column in the output describes the category of a claim rule. It can be MP
(multipathing plug-in), Filter, or VAAI.
n The Class column shows which rules are defined and which are loaded. The file parameter in the
Class column indicates that the rule is defined. The runtime parameter indicates that the rule has
been loaded into your system. For a user-defined claim rule to be active, two lines with the same rule
number must exist, one line for the rule with the file parameter and another line with runtime.
Several default system-defined claim rules have only one line with the Class of runtime. You cannot
modify these rules.
n The default rule 65535 assigns all unclaimed paths to the NMP. Do not delete this rule.
Examples of when you add a PSA claim rule include the following:
n You load a new third-party MPP and must define the paths that this module claims.
You cannot create rules where two different plug-ins claim paths to the same device. Your attempts to
create these claim rules fail with a warning in vmkernel.log.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
Option Description
-A|--adapter=<adapter> Adapter of the paths to use. Valid only if --type is location.
-u|--autoassign Adds a claim rule based on its characteristics. The rule number is not required.
Option Description
-c|--claimrule-class=<cl> Claim rule class to use in this operation. You can specify MP (default), Filter, or
VAAI.
To configure hardware acceleration for a new array, add two claim rules, one for
the VAAI filter and another for the VAAI plug-in. See Add Hardware Acceleration
Claim Rules for detailed instructions.
-D|--driver=<driver> Driver for the HBA of the paths to use. Valid only if --type is driver.
-f|--force Force claim rules to ignore validity checks and install the rule anyway.
--if-unset=<str> Run this command if this advanced user variable is not set to 1.
-i|--iqn=<iscsi_name> iSCSI Qualified Name for the target. Valid only when --type is target.
-P|--plugin=<plugin> PSA plug-in to use. The values are NMP, MASK_PATH, or HPP. Third parties can
also provide their own PSA plug-ins. Required.
-r|--rule=<rule_ID> Rule ID to use. The rule ID indicates the order in which the claim rule is to be
evaluated. User-defined claim rules are evaluated in numeric order starting with
101.
You can run esxcli storage core claimrule list to determine which rule
IDs are available.
-R|--transport=<transport> Transport of the paths to use. Valid only if --type is transport. The following
values are supported.
n block — block storage
n fc — Fibre Channel
n iscsivendor — iSCSI
n iscsi — not currently used
n ide — IDE storage
n sas — SAS storage
n sata — SATA storage
n usb — USB storage
n parallel — parallel
n fcoe — FCoE
n unknown
Option Description
-t|--type=<type> Type of matching to use for the operation. Valid values are the following.
Required.
n vendor
n location
n driver
n transport
n device
n target
-a|--xcopy-use-array-values Use the array reported values to construct the XCOPY command to be sent to the
storage array. This applies to VAAI claim rules only.
-s|--xcopy-use-multi-segs Use multiple segments when issuing an XCOPY request. Valid only if --xcopy-
use-array-values is specified.
-m|--xcopy-max-transfer-size Maximum data transfer size in MB when you use a transfer size different than
array reported. Valid only if --xcopy-use-array-values is specified.
-k|--xcopy-max-transfer-size-kib Maximum transfer size in KiB for the XCOPY commands when you use a transfer
size different than array reported. Valid only if --xcopy-use-array-values is
specified.
2 To load the new claim rule into your system, use the following command:
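In its basic form, the command is likely the following; the same command also appears in the path
masking example later in this section:
esxcli storage core claimrule load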
This command loads all newly created multipathing claim rules from the esx.conf configuration file
into the VMkernel. The command has no options.
3 To apply claim rules that are loaded, use the following command:
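Without restricting the operation to specific paths, the command likely takes the following basic form; the
options below can narrow its scope:
esxcli storage core claimrule run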
Option Description
-A|--adapter=<adapter> If --type is location, name of the HBA for the paths to run the claim rules on.
To run claim rules on paths from all adapters, omit this option.
-C|--channel=<channel> If --type is location, value of the SCSI channel number for the paths to run the
claim rules on. To run claim rules on paths with any channel number, omit this
option.
-L|--lun=<lun_id> If --type is location, value of the SCSI LUN for the paths to run claim rules on.
To run claim rules on paths with any LUN, omit this option.
Option Description
-p|--path=<path_uid> If --type is path, this option indicates the unique path identifier (UID) or the
runtime name of a path to run claim rules on.
-T|--target=<target> If --type is location, value of the SCSI target number for the paths to run claim
rules on. To run claim rules on paths with any target number, omit this option.
-t|--type=<location|path|all> Type of claim to perform. By default, uses all, which means claim rules run
without restriction to specific paths or SCSI addresses. Valid values are
location, path, and all.
-w|--wait You can use this option only if you also use --type all.
If the option is included, the claim waits for paths to settle before running the
claim operation. In that case, the system does not start the claiming process until
it is likely that all paths on the system have appeared before starting the claim
process.
After the claiming process has started, the command does not return until device
registration has completed.
If you add or remove paths during the claiming or the discovery process, this
option might not work correctly.
# esxcli storage core claimrule add -r 500 -t vendor -V NewVend -M NewMod -P NMP
After you run the esxcli storage core claimrule list command, you can see the new claim rule
appearing on the list.
The following output indicates that the claim rule 500 has been loaded into the system and is active.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
Note By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule,
unless you want to unmask these devices.
Option Description
-c|--claimrule-class=<str> Indicate the claim rule class (MP, Filter, VAAI).
This step removes the claim rule from the File class.
This step removes the claim rule from the Runtime class.
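For reference, the two steps above typically correspond to commands of the following form; the rule ID
500 reuses the value from the earlier example and is a placeholder:
esxcli storage core claimrule remove -r 500
esxcli storage core claimrule load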
Mask Paths
You can prevent the host from accessing storage devices or LUNs or from using individual paths to a
LUN. Use the esxcli commands to mask the paths. When you mask paths, you create claim rules that
assign the MASK_PATH plug-in to the specified paths.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
The claim rules that you use to mask paths have rule IDs in the range from 101 through 200. If this
command shows that rules 101 and 102 exist, you can specify 103 for the rule to add.
2 Assign the MASK_PATH plug-in to a path by creating a new claim rule for the plug-in.
5 If a claim rule for the masked path exists, remove the rule.
After you assign the MASK_PATH plug-in to a path, the path state becomes irrelevant and is no longer
maintained by the host. As a result, commands that display the masked path's information might show the
path state as dead.
1 #esxcli storage core claimrule list
2 #esxcli storage core claimrule add -P MASK_PATH -r 109 -t location -A vmhba2 -C 0 -T 1 -L 20
  #esxcli storage core claimrule add -P MASK_PATH -r 110 -t location -A vmhba3 -C 0 -T 1 -L 20
  #esxcli storage core claimrule add -P MASK_PATH -r 111 -t location -A vmhba2 -C 0 -T 2 -L 20
  #esxcli storage core claimrule add -P MASK_PATH -r 112 -t location -A vmhba3 -C 0 -T 2 -L 20
3 #esxcli storage core claimrule load
4 #esxcli storage core claimrule list
5 #esxcli storage core claiming unclaim -t location -A vmhba2
  #esxcli storage core claiming unclaim -t location -A vmhba3
6 #esxcli storage core claimrule run
Unmask Paths
When you need the host to access the masked storage device, unmask the paths to the device.
Note When you run an unclaim operation using a device property, for example, device ID or vendor, the
paths claimed by the MASK_PATH plug-in are not unclaimed. The MASK_PATH plug-in does not track
any device property of the paths that it claims.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
3 Reload the path claiming rules from the configuration file into the VMkernel.
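This typically corresponds to the load command used in the masking example:
# esxcli storage core claimrule load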
4 Run the esxcli storage core claiming unclaim command for each path to the masked storage
device.
For example:
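One possible form, reusing the hypothetical adapter, channel, target, and LUN values from the masking example:
# esxcli storage core claiming unclaim -t location -A vmhba2 -C 0 -T 1 -L 20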
Your host can now access the previously masked storage device.
You might need to create a SATP claim rule when you install a third-party SATP for a specific storage array.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
1 To add a claim rule for a specific SATP, run the esxcli storage nmp satp rule add command.
The command takes the following options.
Option Description
-b|--boot This rule is a system default rule added at boot time. Do not modify esx.conf or
add to a host profile.
-c|--claim-option=string Set the claim option string when adding a SATP claim rule.
-e|--description=string Set the claim rule description when adding a SATP claim rule.
-d|--device=string Set the device when adding SATP claim rules. Device rules are mutually
exclusive with vendor/model and driver rules.
-D|--driver=string Set the driver string when adding a SATP claim rule. Driver rules are mutually
exclusive with vendor/model rules.
Option Description
-f|--force Force claim rules to ignore validity checks and install the rule anyway.
-M|--model=string Set the model string when adding a SATP claim rule. Vendor/Model rules are
mutually exclusive with driver rules.
-o|--option=string Set the option string when adding a SATP claim rule.
-P|--psp=string Set the default PSP for the SATP claim rule.
-O|--psp-option=string Set the PSP options for the SATP claim rule.
-R|--transport=string Set the claim transport type string when adding a SATP claim rule.
-t|--type=string Set the claim type when adding a SATP claim rule.
-V|--vendor=string Set the vendor string when adding SATP claim rules. Vendor/Model rules are
mutually exclusive with driver rules.
Note When searching the SATP rules to locate a SATP for a given device, the NMP searches the
driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport
rules. If there is still no match, NMP selects a default SATP for the device.
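For example, a command of the following form adds a rule that assigns the VMW_SATP_INV SATP to a matching array. The vendor string NewVend and model string NewMod are placeholders, and the -s option is assumed to name the target SATP:
# esxcli storage nmp satp rule add -V NewVend -M NewMod -s VMW_SATP_INV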
When you run the esxcli storage nmp satp list -s VMW_SATP_INV command, you can see the
new rule on the list of VMW_SATP_INV rules.
This mechanism ensures that I/O for a particular virtual machine file goes into its own separate queue
and avoids interfering with I/Os from other files.
If you turn off the per file I/O scheduling model, your host reverts to a legacy scheduling mechanism. The
legacy scheduling maintains only one I/O queue for each virtual machine and storage device pair. All I/Os
between the virtual machine and its virtual disks are moved into this queue. As a result, I/Os from different
virtual disks might interfere with each other in sharing the bandwidth and affect each other's performance.
Note Do not disable per file scheduling if you have the HPP plug-in and the latency sensitive threshold
parameter configured for high-speed local devices. Disabling per file scheduling might cause
unpredictable behavior.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
u To enable or disable per file I/O scheduling, run the following commands:
Option Description
esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE Disable per file I/O scheduling.
esxcli system settings kernel set -s isPerFileSchedModelActive -v TRUE Enable per file I/O scheduling.
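To check the current value of the setting afterward, one option, assuming the list subcommand accepts the -o filter, is:
# esxcli system settings kernel list -o isPerFileSchedModelActive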
The following topics contain information about RDMs and provide instructions on how to create and
manage RDMs.
The mapping file gives you some of the advantages of direct access to a physical device, but keeps some
advantages of a virtual disk in VMFS. As a result, it merges the VMFS manageability with the raw device
access.
Figure: Raw device mapping. The virtual machine opens, reads, and writes through a mapping file on a VMFS volume; address resolution redirects these operations to the mapped device.
Typically, you use VMFS datastores for most virtual disk storage. On certain occasions, you might use
raw LUNs or logical disks located in a SAN.
For example, you might use raw LUNs with RDMs in the following situations:
n When SAN snapshot or other layered applications run in the virtual machine. The RDM enables
backup offloading systems by using features inherent to the SAN.
n In any MSCS clustering scenario that spans physical hosts, such as virtual-to-virtual clusters and
physical-to-virtual clusters. In this case, cluster data and quorum disks are configured as RDMs rather
than as virtual disks on a shared VMFS.
Think of an RDM as a symbolic link from a VMFS volume to a raw LUN. The mapping makes LUNs
appear as files in a VMFS volume. The RDM, not the raw LUN, is referenced in the virtual machine
configuration. The RDM contains a reference to the raw LUN.
n In the virtual compatibility mode, the RDM acts like a virtual disk file. The RDM can use snapshots.
n In the physical compatibility mode, the RDM offers direct access to the SCSI device for those
applications that require lower-level control.
User-Friendly Persistent Names: Provides a user-friendly name for a mapped device. When you use an RDM, you do not need to refer to the device by its device name. You refer to it by the name of the mapping file, for example: /vmfs/volumes/myVolume/myVMDirectory/myRawDisk.vmdk
Dynamic Name Resolution: Stores unique identification information for each mapped device. VMFS associates each RDM with its current SCSI device, regardless of changes in the physical configuration of the server because of adapter hardware changes, path changes, device relocation, and so on.
Distributed File Locking: Makes it possible to use VMFS distributed locking for raw SCSI devices. Distributed locking on an RDM makes it safe to use a shared raw LUN without losing data when two virtual machines on different servers try to access the same LUN.
File Permissions: Makes file permissions possible. The permissions of the mapping file are enforced at file-open time to protect the mapped volume.
File System Operations: Makes it possible to use file system utilities to work with a mapped volume, using the mapping file as a proxy. Most operations that are valid for an ordinary file can be applied to the mapping file and are redirected to operate on the mapped device.
vMotion: Lets you migrate a virtual machine with vMotion. The mapping file acts as a proxy to allow vCenter Server to migrate the virtual machine by using the same mechanism that exists for migrating virtual disk files.
Figure: vMotion of a virtual machine using a raw device mapping. During migration from Host 1 to Host 2, both hosts resolve the same mapping file on the shared VMFS volume to the mapped device.
SAN Management Agents: Makes it possible to run some SAN management agents inside a virtual machine. Similarly, any software that needs to access a device by using hardware-specific SCSI commands can be run in a virtual machine. This kind of software is called SCSI target-based software. When you use SAN management agents, select a physical compatibility mode for the RDM.
N-Port ID Virtualization (NPIV): Makes it possible to use the NPIV technology that allows a single Fibre Channel HBA port to register with the Fibre Channel fabric using several worldwide port names (WWPNs). This ability makes the HBA port appear as multiple virtual ports, each having its own ID and virtual port name. Virtual machines can then claim each of these virtual ports and use them for all RDM traffic.
Note You can use NPIV only for virtual machines with RDM disks.
VMware works with vendors of storage management software to ensure that their software functions
correctly in environments that include ESXi. Some applications of this kind are:
n Snapshot software
n Replication software
Such software uses a physical compatibility mode for RDMs so that the software can access SCSI
devices directly.
Various management products are best run centrally (not on the ESXi machine), while others run well on
the virtual machines. VMware does not certify these applications or provide a compatibility matrix. To find
out whether a SAN management application is supported in an ESXi environment, contact the SAN
management software provider.
n The RDM is not available for direct-attached block devices or certain RAID devices. The RDM uses a
SCSI serial number to identify the mapped device. Because block devices and some direct-attach
RAID devices do not export serial numbers, they cannot be used with RDMs.
n If you are using the RDM in physical compatibility mode, you cannot use a snapshot with the disk.
Physical compatibility mode allows the virtual machine to manage its own, storage-based, snapshot
or mirroring operations.
Virtual machine snapshots are available for RDMs with virtual compatibility mode.
n You cannot map to a disk partition. RDMs require the mapped device to be a whole LUN.
n If you use vMotion to migrate virtual machines with RDMs, make sure to maintain consistent LUN IDs
for RDMs across all participating ESXi hosts.
n Flash Read Cache does not support RDMs in physical compatibility. Virtual compatibility RDMs are
supported with Flash Read Cache.
Key contents of the metadata in the mapping file include the location of the mapped device (name
resolution), the locking state of the mapped device, permissions, and so on.
In virtual mode, the VMkernel sends only READ and WRITE to the mapped device. The mapped device
appears to the guest operating system exactly the same as a virtual disk file in a VMFS volume. The real
hardware characteristics are hidden. If you are using a raw disk in virtual mode, you can realize the
benefits of VMFS such as advanced file locking for data protection and snapshots for streamlining
development processes. Virtual mode is also more portable across storage hardware than physical mode,
presenting the same behavior as a virtual disk file.
In physical mode, the VMkernel passes all SCSI commands to the device, with one exception: the
REPORT LUNs command is virtualized so that the VMkernel can isolate the LUN to the owning virtual
machine. Otherwise, all physical characteristics of the underlying hardware are exposed. Physical mode
is useful to run SAN management agents or other SCSI target-based software in the virtual machine.
Physical mode also allows virtual-to-physical clustering for cost-effective high availability.
VMFS5 and VMFS6 support greater than 2 TB disk size for RDMs in virtual and physical modes.
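Outside the client UI, the vmkfstools command can also create the mapping file from the ESXi Shell; -r creates a virtual compatibility mode RDM and -z creates a physical compatibility (pass-through) mode RDM. The device identifier and file paths below are hypothetical, so treat this as a sketch rather than a prescribed procedure:
# vmkfstools -r /vmfs/devices/disks/naa.600000000000000000000000000000aa /vmfs/volumes/myVolume/myVM/myRawDisk.vmdk
# vmkfstools -z /vmfs/devices/disks/naa.600000000000000000000000000000aa /vmfs/volumes/myVolume/myVM/myRawDiskP.vmdk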
VMFS uniquely identifies all mapped storage devices, and the identification is stored in its internal data
structures. Any change in the path to a raw device, such as a Fibre Channel switch failure or the addition
of a new HBA, can change the device name. Dynamic name resolution resolves these changes and
automatically associates the original device with its new name.
Figure: Shared access to a mapped device. Virtual machines VM3 and VM4 on different hosts share the mapped device through the same mapping file on the VMFS volume, with address resolution directing both to the device.
The following table provides a comparison of features available with the different modes.
Table 19‑1. Features Available with Virtual Disks and Raw Device Mappings
ESXi Features Virtual Disk File Virtual Mode RDM Physical Mode RDM
Use virtual disk files for the cluster-in-a-box type of clustering. If you plan to reconfigure your cluster-in-a-
box clusters as cluster-across-boxes clusters, use virtual mode RDMs for the cluster-in-a-box clusters.
Although the RDM disk file has the same .vmdk extension as a regular virtual disk file, the RDM contains
only mapping information. The actual virtual disk data is stored directly on the LUN.
This procedure assumes that you are creating a new virtual machine. For information, see the vSphere
Virtual Machine Administration documentation.
Procedure
a Right-click any inventory object that is a valid parent object of a virtual machine, such as a data
center, folder, cluster, resource pool, or host, and select New Virtual Machine.
3 (Optional) To delete the default virtual hard disk that the system created for your virtual machine,
move your cursor over the disk and click the Remove icon.
a Click Add New Devices and select RDM Disk from the list.
b From the list of LUNs, select a target raw LUN and click OK.
The system creates an RDM disk that maps your virtual machine to the target LUN. The RDM
disk is shown on the list of virtual devices as a new hard disk.
a Click the New Hard Disk triangle to expand the properties for the RDM disk.
You can place the RDM on the same datastore where your virtual machine configuration files
reside, or select a different datastore.
Note To use vMotion for virtual machines with enabled NPIV, make sure that the RDM files and
the virtual machine files are located on the same datastore. You cannot perform Storage vMotion
when NPIV is enabled.
Option Description
Physical Allows the guest operating system to access the hardware directly. Physical
compatibility is useful if you are using SAN-aware applications on the virtual
machine. However, a virtual machine with a physical compatibility RDM cannot
be cloned, made into a template, or migrated if the migration involves copying
the disk.
Virtual Allows the RDM to behave as if it were a virtual disk, so you can use such
features as taking snapshots, cloning, and so on. When you clone the disk or
make a template out of it, the contents of the LUN are copied into a .vmdk
virtual disk file. When you migrate a virtual compatibility mode RDM, you can
migrate the mapping file or copy the contents of the LUN into a virtual disk.
Disk modes are not available for RDM disks using physical compatibility mode.
Option Description
Independent - Persistent Disks in persistent mode behave like conventional disks on your physical
computer. All data written to a disk in persistent mode are written permanently
to the disk.
Independent - Nonpersistent Changes to disks in nonpersistent mode are discarded when you power off or
reset the virtual machine. With nonpersistent mode, you can restart the virtual
machine with a virtual disk in the same state every time. Changes to the disk
are written to and read from a redo log file that is deleted when you power off
or reset.
Procedure
2 Click the Virtual Hardware tab and click Hard Disk to expand the disk options menu.
4 Use the Edit Multipathing Policies dialog box to enable or disable paths, set multipathing policy, and
specify the preferred path.
For information on managing paths, see Chapter 18 Understanding Multipathing and Failover.
As an abstraction layer, SPBM abstracts storage services delivered by Virtual Volumes, vSAN, I/O filters,
or other storage entities.
Rather than integrating with each individual type of storage and data services, SPBM provides a universal
framework for different types of storage entities.
Figure: Storage Policy Based Management. SPBM is exposed through the UI, CLI, and API/SDK and spans I/O filter vendors, vSAN, Virtual Volumes, and traditional (VMFS, NFS) storage.
n Advertisement of storage capabilities and data services that storage arrays and other entities, such
as I/O filters, offer.
n Bidirectional communications between ESXi and vCenter Server on one side, and storage arrays and
entities on the other.
vSphere offers default storage policies. In addition, you can define policies and assign them to the virtual
machines.
You use the VM Storage Policies interface to create a storage policy. When you define the policy, you
specify various storage requirements for applications that run on the virtual machines. You can also use
storage policies to request specific data services, such as caching or replication, for virtual disks.
You apply the storage policy when you create, clone, or migrate the virtual machine. After you apply the
storage policy, the SPBM mechanism assists you with placing the virtual machine in a matching
datastore. In certain storage environments, SPBM determines how the virtual machine storage objects
are provisioned and allocated within the storage resource to guarantee the required level of service. The
SPBM also enables requested data services for the virtual machine and helps you to monitor policy
compliance.
Whether you must perform a specific step might depend on the type of storage or data services that your
environment offers.
Step Description
Populate the VM Storage Policies interface with appropriate data: The VM Storage Policies interface is populated with information about datastores and data services that are available in your storage environment. This information is obtained from storage providers and datastore tags.
n For entities represented by storage providers, verify that an appropriate provider is registered. Entities that use the storage provider include vSAN, Virtual Volumes, and I/O filters. Depending on the type of storage entity, some providers are self-registered. Other providers must be manually registered.
See Use Storage Providers to Populate the VM Storage Policies Interface and Register Storage Providers for Virtual Volumes.
n Tag datastores that are not represented by storage providers. You can also use tags to indicate a property that is not communicated through the storage provider, such as geographical location or administrative group.
Create predefined storage policy components: A storage policy component describes a single data service, such as replication, that must be provided for the virtual machine. You can define the component in advance and associate it with multiple VM storage policies. The components are reusable and interchangeable.
See Create Storage Policy Components.
Create VM storage policies: When you define storage policies for virtual machines, you specify storage requirements for applications that run on the virtual machines.
See Creating and Managing VM Storage Policies.
Apply the VM storage policy to the virtual machine: You can apply the storage policy when deploying the virtual machine or configuring its virtual disks.
See Assign Storage Policies to Virtual Machines.
Check compliance for the VM storage policy: Verify that the virtual machine uses the datastore that is compliant with the assigned storage policy.
See Check Compliance for a VM Storage Policy.
This information is obtained from storage providers, also called VASA providers. Another source is
datastore tags.
Storage Capabilities and Services: Certain datastores, for example, Virtual Volumes and vSAN, are represented by the storage providers. Through the storage providers, the datastores can advertise their capabilities in the VM Storage Policy interface. These datastore capabilities, data services, and other characteristics with ranges of values populate the VM Storage Policy interface.
Data Services: I/O filters on your hosts are also represented by the storage providers. The storage provider delivers information about the data services of the filters to the VM Storage Policy interface. You use this information when defining the rules for host-based data services, also called common rules. Unlike the datastore-specific rules, these rules do not define storage placement and storage requirements for the virtual machine. Instead, they activate the requested I/O filter data services for the virtual machine.
Tags: Generally, VMFS and NFS datastores are not represented by a storage provider. They do not display their capabilities and data services in the VM Storage Policies interface. You can use tags to encode information about these datastores. For example, you can tag your VMFS datastores as VMFS-Gold and VMFS-Silver to represent different levels of service.
For VVols and vSAN datastores, you can use tags to encode information that is not advertised by the storage provider, such as geographical location (Palo Alto), or administrative group (Accounting).
Entities that use the storage provider include vSAN, Virtual Volumes, and I/O filters. Depending on the
type of the entity, some providers are self-registered. Other providers, for example, the Virtual Volumes
storage provider, must be manually registered. After the storage providers are registered, they deliver the
following data to the VM Storage Policies interface:
n Storage capabilities and characteristics for such datastores as Virtual Volumes and vSAN.
Prerequisites
Register the storage providers that require manual registration. For more information, see the appropriate
documentation:
Procedure
3 In the Storage Providers list, view the storage providers registered with vCenter Server.
The list shows general information including the name of the storage provider, its URL and status,
storage entities that the provider represents, and so on.
4 To display more details, select a specific storage provider or its component from the list.
You can apply a new tag that contains general storage information to a datastore. For more details about
the tags, their categories, and how to manage the tags, see the vCenter Server and Host Management
documentation.
Prerequisites
Required privileges:
n vSphere Tagging.Create vSphere Tag Category on the root vCenter Server instance
n vSphere Tagging.Assign or Unassign vSphere Tag on the root vCenter Server instance
Procedure
e Click OK.
c Specify the properties for the tag. See the following example.
Name: Texas
d Click OK.
b Right-click the datastore, and select Tags & Custom Attributes > Assign Tag.
c From the list of tags, select an appropriate tag, for example, Texas in the Storage Location
category, and click Assign.
The new tag is assigned to the datastore and appears on the datastore Summary tab in the Tags pane.
What to do next
When creating a VM storage policy, you can reference the tag to include the tagged datastore in the list of
compatible storage resources. See Create Storage-Specific Rules for a VM Storage Policy.
Or you can exclude the tagged datastore from the VM storage policy. For example, your VM storage
policy can include Virtual Volumes datastores located in Texas and California, but exclude datastores
located in Nevada.
Rules: The rule is a basic element of the VM storage policy. Each individual rule is a statement that describes a single requirement for virtual machine storage and data services.
Rule Sets: Within a storage policy, individual rules are organized into collections of rules, or rule sets. Typically, the rule sets can be in one of the following categories: rules for host-based services and datastore-specific rules.
Datastore-Specific Rule Sets: Each rule set must include placement rules that describe requirements for virtual machine storage resources. All placement rules within a single rule set represent a single storage entity. These rules can be based on storage capabilities or tags.
In addition, the datastore-specific rule set can include optional rules or storage policy components that describe data services to provide for the virtual machine. Generally, these rules request such services as caching, replication, and other services provided by storage systems.
To define the storage policy, one datastore-specific set is required. Additional rule sets are optional. A single policy can use multiple sets of rules to define alternative storage placement parameters, often from several storage providers.
Placement Rules (Capability-Based): Placement rules specify a particular storage requirement for the VM and enable SPBM to distinguish compatible datastores among all datastores in the inventory. These rules also describe how the virtual machine storage objects are allocated within the datastore to receive the required level of service. For example, the rules can list Virtual Volumes as a destination and define the maximum recovery point objective (RPO) for the Virtual Volumes objects.
When you provision the virtual machine, these rules guide the decision that SPBM makes about the virtual machine placement. SPBM finds the Virtual Volumes datastores that can match the rules and satisfy the storage requirements of the virtual machine. See Create a VM Storage Policy for Virtual Volumes.
Placement Rules (Tag-Based): Tag-based rules reference datastore tags. These rules can define the VM placement, for example, request as a target all datastores with the VMFS-Gold tag. You can also use the tag-based rules to fine-tune your VM placement request further. For example, exclude datastores with the Palo Alto tag from the list of your Virtual Volumes datastores. See Create a VM Storage Policy for Tag-Based Placement.
Rules for Host-Based Services: This rule set activates data services provided by the host. The set for host-based services can include rules or storage policy components that describe particular data services, such as encryption or replication.
Unlike datastore-specific rules, this set does not include placement rules. Rules for host-based services are generic for all types of storage and do not depend on the datastore. See Create a VM Storage Policy for Host-Based Data Services.
Rules for host-based services: Rules or predefined storage policy components that activate data services installed on ESXi hosts, for example, replication by I/O filters.
Datastore-specific rule sets: Capability-based or tag-based placement rules that describe requirements for virtual machine storage resources, for example, Virtual Volumes placement.
If the rule set for host-based services is not present, meeting all the rules of a single datastore-specific
rule set is sufficient to satisfy the entire policy. If the rule set for host-based services is present, the policy
matches the datastore that satisfies the host services rules and all rules in one of the datastore-specific
sets.
Depending on whether you use the vSphere Web Client or the vSphere Client, the appearance of the VM
Storage Policy interface and its options might change.
A storage policy can reference storage capabilities that are advertised by a storage entity. Or it can
reference datastore tags. The policy can include components that enable data services, such as
replication or caching, provided by I/O filters, storage systems, or other entities.
Prerequisites
n Make sure that the VM Storage Policies interface is populated with information about storage entities
and data services that are available in your storage environment. See Populating the VM Storage
Policies Interface.
n Define appropriate storage policy components. See Create Storage Policy Components.
Procedure
What to do next
You can apply this storage policy to virtual machines. If you use object-based storage, such as vSAN and
Virtual Volumes, you can designate this storage policy as the default.
Procedure
1 From the vSphere Web Client Home, click Policies and Profiles > VM Storage Policies.
The data services are generic for all types of storage and do not depend on a datastore. Depending on
your environment, the data services can belong to various categories, including encryption, caching,
replication, and so on. Certain data services, such as encryption, are provided by VMware. Others are
offered by third-party I/O filters.
Prerequisites
n For information about encrypting your virtual machines, see the vSphere Security documentation.
n For information about I/O filters, see Chapter 23 Filtering Virtual Machine I/O.
n For information about storage policy components, see About Storage Policy Components.
Procedure
1 Enable common rules by selecting Use common rules in the VM storage policy.
2 Click the Add component icon and select a data service category from the drop-down menu, for
example, Replication.
3 Define rules for the data service category by specifying an appropriate provider and values for the
rules. Or select the data service from the list of predefined components.
Option Description
Component Name This option is available if you have predefined storage policy components in your
database. If you know which component to use, select it from the list to add to the
VM storage policy.
See all Review all components available for the category. To include a specific component,
select it from the list and click OK.
Custom Define custom rules for the data service category by specifying an appropriate
provider and values for the rules.
You can use only one component from the same category, for example caching, per set of common or regular rules.
5 Click Next.
Prerequisites
n If your environment includes storage entities such as vSAN or Virtual Volumes, review these
functionalities. For information, see the Administering VMware vSAN documentation and Chapter 22
Working with Virtual Volumes.
n To configure predefined storage policy components, see About Storage Policy Components.
Procedure
1 Make sure that the Use rule-sets in the storage policy check box is selected.
Placement rules request a specific storage entity as a destination for the virtual machine. They can be
capability-based or tag-based. Capability-based rules are based on data services that storage entities
such as vSAN and Virtual Volumes advertise through storage (VASA) providers. Tag-based rules
reference tags that you assign to datastores.
Option Description
Placement based on storage a From the Storage Type drop-down menu, select a target storage entity, for
capabilities example, Virtual Volumes.
b From the Add rule drop-down menu, select a capability and specify its value.
For example, you can specify the number of read operations per second for
Virtual Volumes objects. You can include as many rules as you need for the
selected storage entity. Verify that the values you provide are within the range
of values that the storage resource advertises.
c If you need to fine-tune your placement request further, add a tag-based rule.
Placement based on tags a From the Storage Type drop-down menu, select Tags based placement.
b From the Add rule drop-down menu, select Tags from category.
c Define tag-based placement criteria.
For example, you can request as a target all datastores with the VMFS-Gold
tag.
The data services that you reference on the Rule Set page are provided by the storage. The VM
storage policy that references the data services requests them for the virtual machine.
a Click the Add component icon and select a data service category from the drop-down menu,
for example, Replication.
b Define rules for the data service category by specifying an appropriate provider and values for the
rules. Or select the data service from the list of predefined components.
Option Description
Component Name This option is available if you have predefined storage policy components in
your database. If you know which component to use, select it from the list to
add to the VM storage policy.
See all Review all components available for the category. To include a specific
component, select it from the list and click OK.
Custom Define custom rules for the data service category by specifying an appropriate
provider and values for the rules.
You can use only one component from the same category, for example caching, per set of common or regular rules.
4 (Optional) To define another rule set, click Add another rule set and repeat Step 2 through Step 3.
Multiple rule sets allow a single policy to define alternative storage placement parameters, often from
several storage providers.
5 Click Next.
Procedure
1 On the Storage compatibility page, review the list of datastores that match this policy and click
Next.
To be eligible, the datastore must satisfy at least one rule set and all rules within this set.
If you need to change any settings, click Back to go back to the relevant page.
3 Click Finish.
Available data services include encryption, I/O control, caching, and so on. Certain data services, such as
encryption, are provided by VMware. Others can be offered by third-party I/O filters that you install on
your host.
The data services are usually generic for all types of storage and do not depend on a datastore. Adding
datastore-specific rules to the storage policy is optional.
If you add datastore-specific rules, and both the I/O filters on the host and storage offer the same type of
service, for example, encryption, your policy can request this service from both providers. As a result, the
virtual machine data is encrypted twice, by the I/O filter and your storage. However, replication provided
by Virtual Volumes and replication provided by the I/O filter cannot coexist in the same storage policy.
Prerequisites
n For information about encrypting your virtual machines, see the vSphere Security documentation.
n For information about I/O filters, see Chapter 23 Filtering Virtual Machine I/O.
n For information about storage policy components, see About Storage Policy Components.
Procedure
Option Action
3 On the Policy structure page under Host based services, enable host-based rules.
4 On the Host based services page, define rules to enable and configure data services provided by
your host.
a Click the tab for the data service category, for example, Replication.
b Define custom rules for the data service category or use predefined components.
Option Description
Use storage policy component Select a storage policy component from the drop-down menu. This option is
available only if you have predefined components in your database.
Custom Define custom rules for the data service category by specifying an appropriate
provider and values for the rules.
Note You can enable several data services. If you use encryption with other data services, set
the Allow I/O filters before encryption parameter to True, so that other services, such as
replication, can analyze clear text data before it is encrypted.
5 On the Storage compatibility page, review the list of datastores that match this policy.
To be compatible with the policy for host-based services, datastores must be connected to the host
that provides these services. If you add datastore-specific rule sets to the policy, the compatible
datastores must also satisfy storage requirements of the policy.
6 On the Review and finish page, review the storage policy settings and click Finish.
The new VM storage policy for host-based data services appears on the list.
The procedure assumes that you are creating the VM storage policy for Virtual Volumes. For information
about the vSAN storage policy, see the Administering VMware vSAN documentation.
Prerequisites
n Verify that the Virtual Volumes storage provider is available and active. See Register Storage
Providers for Virtual Volumes.
n Make sure that the VM Storage Policies interface is populated with information about storage entities
and data services that are available in your storage environment. See Populating the VM Storage
Policies Interface.
n Define appropriate storage policy components. See Create Storage Policy Components.
Procedure
Option Action
Name Enter the name of the storage policy, for example VVols Storage Policy.
3 On the Policy structure page under Datastore specific rules, enable rules for a target storage entity,
such as Virtual Volumes storage.
You can enable rules for several datastores. Multiple rule sets allow a single policy to define
alternative storage placement parameters, often from several storage providers.
4 On the Virtual Volumes rules page, define storage placement rules for the target VVols datastore.
b From the Add Rule drop-down menu, select an available capability and specify its value.
For example, you can specify the number of read operations per second for the Virtual Volumes
objects.
You can include as many rules as you need for the selected storage entity. Verify that the values
you provide are within the range of values that the VVols datastore advertises.
c To fine-tune your placement request further, click the Tags tab and add a tag-based rule.
Tag-based rules can filter datastores by including or excluding specific placement criteria. For
example, your VM storage policy can include Virtual Volumes datastores located in Texas and
California, but exclude datastores located in Nevada.
The data services, such as encryption, caching, or replication, are offered by the storage. The VM
storage policy that references data services requests these services for the VM when the VM is placed on the VVols datastore.
a Click the tab for the data service category, for example, Replication.
b Define custom rules for the data service category or use predefined components.
Option Description
Use storage policy component Select a storage policy component from the drop-down menu. This option is
available only if you have predefined components in your database.
Custom Define custom rules for the data service category by specifying an appropriate
provider and values for the rules.
6 On the Storage compatibility page, review the list of datastores that match this policy.
If the policy includes several rule sets, the datastore must satisfy at least one rule set and all rules
within this set.
7 On the Review and finish page, review the storage policy settings and click Finish.
The new VM storage policy compatible with Virtual Volumes appears on the list.
What to do next
You can now associate this policy with a virtual machine, or designate the policy as default.
Prerequisites
n Make sure that the VM Storage Policies interface is populated with information about storage entities
and data services that are available in your storage environment. See Populating the VM Storage
Policies Interface.
Procedure
Option Action
3 On the Policy structure page under Datastore specific rules, enable tag-based placement rules.
a Click Add Tag Rule and define tag-based placement criteria. Use the following as an example.
Option Example
Tags: Gold
All datastores with the Gold tag become compatible as the storage placement target.
5 On the Storage compatibility page, review the list of datastores that match this policy.
6 On the Review and finish page, review the storage policy settings and click Finish.
The new VM storage policy compatible with tagged datastores appears on the list.
Prerequisites
Procedure
Option Description
In the vSphere Web Client a From the Home menu, click Policies and Profiles > VM Storage Policies.
b Click the VM Storage Policies tab.
2 Select the storage policy, and click one of the following icons:
n Edit Settings
n Clone
4 If editing the storage policy that is used by a virtual machine, reapply the policy to the virtual machine.
Option Description
Manually later If you select this option, the compliance status for all virtual disks and virtual
machine home objects associated with the storage policy changes to Out of Date.
To update configuration and compliance, manually reapply the storage policy to
all associated entities. See Reapply Virtual Machine Storage Policy.
Now Update virtual machine and compliance status immediately after editing the
storage policy.
You cannot assign the predefined component directly to a virtual machine or virtual disk. Instead, you
must add the component to the VM storage policy, and assign the policy to the virtual machine.
The component describes one type of service from one service provider. The services can vary
depending on the providers that you use, but generally belong in one of the following categories.
n Compression
n Caching
n Encryption
n Replication
When you create the storage policy component, you define the rules for one specific type and grade of
service.
For example, virtual machines VM1 and VM2 might have identical placement requirements, but must have different grades of replication services. You can create the storage policy components with different replication parameters and add these components to the related storage policies.
The provider of the service can be a storage system, an I/O filter, or another entity. If the component
references an I/O filter, the component is added to the host-based rules of the storage policy.
Components that reference entities other than the I/O filters, for example, a storage system, are added to
the datastore-specific rule sets.
n Each component can include only one set of rules. All characteristics in this rule set belong to a single
provider of the data services.
n If the component is referenced in the VM storage policy, you cannot delete the component. Before
deleting the component, you must remove it from the storage policy or delete the storage policy.
n When you add components to the policy, you can use only one component from the same category, for example caching, per set of rules.
Procedure
Option Description
In the vSphere Web Client a From the vSphere Web Client Home, click Policies and Profiles > VM
Storage Policies.
b Click the Storage Policy Component tab.
4 Enter a name, for example, 4-hour Replication, and a description for the policy component.
Make sure that the name does not conflict with the names of other components or storage policies.
For example, if you are configuring 4-hour replication, set the Recovery Point Objective (RPO) value
to 4.
For encryption based on I/O filters, set the Allow I/O filters before encryption parameter. Encryption
provided by storage does not require this parameter.
Option Description
False (default) Does not allow the use of other I/O filters before the encryption filter.
True Allows the use of other I/O filters before the encryption filter. Other filters, such as
replication, can analyze clear text data before it is encrypted.
8 Click OK.
What to do next
You can add the component to the VM storage policy. If the data service that the component references is
provided by the I/O filters, you add the component to the host-based rules of the storage policy.
Components that reference entities other than the I/O filters, for example, a storage system, are added to
the datastore-specific rule sets.
Procedure
Option Description
In the vSphere Web Client a From the vSphere Web Client Home, click Policies and Profiles > VM
Storage Policies.
b Click the Storage Policy Components tab.
Option Description
Edit Settings When editing, you cannot change the category of the data service and the
provider. For example, if the original component references replication provided
by I/O filters, these settings must remain unchanged.
Clone When cloning, you can customize any settings of the original component.
4 If a VM storage policy that is assigned to a virtual machine references the policy component you edit,
reapply the storage policy to the virtual machine.
Manually later If you select this option, the compliance status for all virtual disks and virtual
machine home objects associated with the storage policy changes to Out of Date.
To update configuration and compliance, manually reapply the storage policy to
all associated entities. See Reapply Virtual Machine Storage Policy.
Now Update virtual machine and compliance status immediately after editing the
storage policy.
If you do not specify the storage policy, the system uses a default storage policy that is associated with
the datastore. If your storage requirements for the applications on the virtual machine change, you can
modify the storage policy that was originally applied to the virtual machine.
This topic describes how to assign the VM storage policy when you create a virtual machine. For
information about other deployment methods that include cloning, deployment from a template, and so
on, see the vSphere Virtual Machine Administration documentation.
You can apply the same storage policy to the virtual machine configuration file and all its virtual disks. If
storage requirements for your virtual disks and the configuration file are different, you can associate
different storage policies with the VM configuration file and the selected virtual disks.
Procedure
1 Start the virtual machine provisioning process and follow the appropriate steps.
2 Assign the same storage policy to all virtual machine files and disks.
a On the Select storage page, select a storage policy from the VM Storage Policy drop-down
menu.
Based on its configuration, the storage policy separates all datastores into compatible and
incompatible. If the policy references data services offered by a specific storage entity, for
example, Virtual Volumes, the compatible list includes datastores that represent only that type of
storage.
The datastore becomes the destination storage resource for the virtual machine configuration file
and all virtual disks.
c If you use the replication service with Virtual Volumes, specify the replication group.
Replication groups indicate which VMs and virtual disks must be replicated together to a target
site.
Option Description
Preconfigured replication group Replication groups that are configured in advance on the storage side.
vCenter Server and ESXi discover the replication groups, but do not manage
their life cycle.
Automatic replication group Virtual Volumes creates a replication group and assigns all VM objects to this
group.
Use this option if requirements for storage placement are different for virtual disks. You can also use
this option to enable I/O filter services, such as caching and replication, for your virtual disks.
a On the Customize hardware page, expand the New hard disk pane.
b From the VM storage policy drop-down menu, select the storage policy to assign to the virtual
disk.
Use this option to store the virtual disk on a datastore other than the datastore where the VM
configuration file resides.
After you create the virtual machine, the Summary tab displays the assigned storage policies and their
compliance status.
What to do next
If storage placement requirements for the configuration file or the virtual disks change, you can later
modify the virtual policy assignment.
You can edit the storage policy for a powered-off or powered-on virtual machine.
When changing the VM storage policy assignment, you can apply the same storage policy to the virtual
machine configuration file and all its virtual disks. You can also associate different storage policies with
the VM configuration file and the virtual disks. You might apply different policies when, for example,
storage requirements for your virtual disks and the configuration file are different.
Procedure
Option Actions
In the vSphere Web Client a From the vSphere Web Client Home, click Policies and Profiles > VM
Storage Policies.
b On the VM Storage Policies tab, click the storage policy you want to change.
c Click the VMs tab and click Virtual Machines.
You can see the list of virtual machines that use this storage policy.
d Click the virtual machine whose policy you want to modify.
Option Actions
Apply the same storage policy to all virtual machine objects (in the vSphere Web Client): a Select the policy from the VM storage policy drop-down menu. b Click Apply to all.
Apply different storage policies to the VM home object and virtual disks (in the vSphere Web Client): a Select the object, for example, VM home. b In the VM Storage Policy column, select the policy from the drop-down menu.
Apply the same storage policy to all virtual machine objects (in the vSphere Client): Select the policy from the VM storage policy drop-down menu.
Apply different storage policies to the VM home object and virtual disks (in the vSphere Client): a Turn on the Configure per disk option. b Select the object, for example, VM home. c In the VM Storage Policy column, select the policy from the drop-down menu.
5 If you use Virtual Volumes policy with replication, configure the replication group.
Replication groups indicate which VMs and virtual disks must be replicated together to a target site.
You can select a common replication group for all objects or select different replication groups for
each storage object.
The storage policy is assigned to the virtual machine and its disks.
Prerequisites
Verify that the virtual machine has a storage policy that is associated with it.
Procedure
Compliant The datastore that the virtual machine or virtual disk uses has the storage capabilities compatible
with the policy requirements.
Noncompliant The datastore that the virtual machine or virtual disk uses does not have the storage capabilities
compatible with the policy requirements. You can migrate the virtual machine files and virtual disks to
compliant datastores.
Out of Date The status indicates that the policy has been edited, but the new requirements have not been
communicated to the datastore where the virtual machine objects reside. To communicate the
changes, reapply the policy to the objects that are out of date.
Not Applicable This storage policy references datastore capabilities that are not supported by the datastore where
the virtual machine resides.
What to do next
When you cannot bring the noncompliant datastore into compliance, migrate the files or virtual disks to a
compatible datastore. See Find Compatible Storage Resource for Noncompliant Virtual Machine.
If the status is Out of Date, reapply the policy to the objects. See Reapply Virtual Machine Storage Policy.
Occasionally, a storage policy that is assigned to a virtual machine can be in the noncompliant status.
This status indicates that the virtual machine or its disks use datastores that are incompatible with the
policy. You can migrate the virtual machine files and virtual disks to compatible datastores.
Use this task to determine which datastores satisfy the requirements of the policy.
Procedure
1 Verify that the storage policy for the virtual machine is in the noncompliant state.
The VM Storage Policy Compliance panel on the VM Storage Policies pane shows the
Noncompliant status.
Option Description
In the vSphere Web Client a From the vSphere Web Client Home, click Policies and Profiles > VM
Storage Policies.
b Click the Storage Policy tab.
3 Display the list of compatible datastores for the noncompliant storage policy.
Option Description
The list of datastores that match the requirements of the policy appears.
What to do next
You can migrate the virtual machine or its disks to one of the datastores in the list.
Prerequisites
The compliance status for a virtual machine is Out of Date. The status indicates that the policy has been
edited, but the new requirements have not been communicated to the datastore.
Procedure
Compliant The datastore that the virtual machine or virtual disk uses has the storage capabilities that the policy
requires.
Noncompliant The datastore that the virtual machine or virtual disk uses does not have the storage capabilities that
the policy requires.
When you cannot bring the noncompliant datastore into compliance, migrate the files or virtual disks
to a compatible datastore. See Find Compatible Storage Resource for Noncompliant Virtual Machine.
Not Applicable This storage service level references datastore capabilities that are not supported by the datastore
where the virtual machine resides.
VMware-Provided Default Storage Policy: The generic default storage policy that ESXi provides applies to all datastores and does not include rules specific to any storage type.
For information about the default storage policy for VVols, see Virtual Volumes and VM Storage Policies.
VMFS and NFS datastores do not have specific default policies and can use the generic default policy or a custom policy you define for them.
User-Defined Default Storage Policies: You can create a VM storage policy that is compatible with vSAN or Virtual Volumes. You can then designate this policy as the default for vSAN and Virtual Volumes datastores. The user-defined default policy replaces the default storage policy that VMware provides.
Each vSAN and Virtual Volumes datastore can have only one default policy at a time. However, you can create a single storage policy with multiple placement rule sets, so that it matches multiple vSAN and Virtual Volumes datastores. You can designate this policy as the default policy for all datastores.
When the VM storage policy becomes the default policy for a datastore, you cannot delete the policy unless you disassociate it from the datastore.
Note A storage policy that contains replication rules should not be specified as a default storage policy.
Otherwise, the policy prevents you from selecting replication groups.
Prerequisites
Create a storage policy that is compatible with Virtual Volumes or vSAN. You can create a policy that
matches both types of storage.
Procedure
5 From the list of available storage policies, select a policy to designate as the default and click OK.
The selected storage policy becomes the default policy for the datastore. vSphere assigns this policy to
any virtual machine objects that you provision on the datastore when no other policy is selected.
Persistence Storage Providers: Storage providers that manage arrays and storage abstractions are called persistence storage providers. Providers that support Virtual Volumes or vSAN belong to this category. In addition to storage, persistence providers can provide other data services, such as replication.
Data Service Providers: Another category of providers is I/O filter storage providers, or data service providers. These providers offer data services that include host-based caching, compression, and encryption.
Both persistence storage and data service providers can belong to one of these categories.
Built-in Storage Providers: Built-in storage providers are offered by VMware. Typically, they do not require registration. For example, the storage providers that support vSAN or I/O filters are built-in and become registered automatically.
Third-Party Storage Providers: When a third party offers a storage provider, you typically must register the provider. An example of such a provider is the Virtual Volumes provider. You use the vSphere Client to register and manage each storage provider component.
The following graphic illustrates how different types of storage providers facilitate communications
between vCenter Server and ESXi and other components of your storage environment. For example, the
components might include storage arrays, Virtual Volumes storage, and I/O filters.
Figure: Storage providers in the vSphere environment. vCenter Server (SPBM) and the ESXi hosts (ESXi-1, ESXi-2) communicate with I/O filters through an I/O filter storage provider, with the X100 and X200 arrays through a multi-array storage provider, and with Virtual Volumes storage through a VVols storage provider.
Information that the storage provider supplies can be divided into the following categories:
n Storage data services and capabilities. This type of information is essential for such functionalities as
vSAN, Virtual Volumes, and I/O filters. The storage provider that represents these functionalities
integrates with the Storage Policy Based Management (SPBM) mechanism. The storage provider
collects information about data services that are offered by underlying storage entities or available I/O
filters.
You reference these data services when you define storage requirements for virtual machines and
virtual disks in a storage policy. Depending on your environment, the SPBM mechanism ensures
appropriate storage placement for a virtual machine or enables specific data services for virtual disks.
For details, see Creating and Managing VM Storage Policies.
n Storage status. This category includes reporting on the status of various storage entities. It also includes alarms and events that notify you about configuration changes.
This type of information can help you troubleshoot storage connectivity and performance problems. It
can also help you to correlate array-generated events and alarms to corresponding performance and
load changes on the array.
n Storage DRS information for the distributed resource scheduling on block devices or file systems.
This information helps to ensure that decisions made by Storage DRS are compatible with resource
management decisions internal to the storage systems.
Typically, vendors are responsible for supplying storage providers. The VMware VASA program defines
an architecture that integrates third-party storage providers into the vSphere environment, so that
vCenter Server and ESXi hosts can communicate with the storage providers.
n Make sure that every storage provider you use is certified by VMware and properly deployed. For
information about deploying the storage providers, contact your storage vendor.
n Make sure that the storage provider is compatible with the vCenter Server and ESXi versions. See
VMware Compatibility Guide.
n Do not install the VASA provider on the same system as vCenter Server.
n If your environment contains older versions of storage providers, existing functionality continues to
work. However, to use new features, upgrade your storage provider to a new version.
n When you upgrade a storage provider to a later VASA version, you must unregister and reregister the
provider. After registration, vCenter Server can detect and use the functionality of the new VASA
version.
Note If you use vSAN, the storage providers for vSAN are registered and appear on the list of storage
providers automatically. vSAN does not support manual registration of storage providers. See the
Administering VMware vSAN documentation.
Prerequisites
Verify that the storage provider component is installed on the storage side and obtain its credentials from
your storage administrator.
Procedure
4 Enter connection information for the storage provider, including the name, URL, and credentials.
Action: Direct vCenter Server to the storage provider certificate
Description: Select the Use storage provider certificate option and specify the certificate's location.
Action: Use a thumbprint of the storage provider certificate
Description: If you do not guide vCenter Server to the provider certificate, the certificate thumbprint is displayed. You can check the thumbprint and approve it. vCenter Server adds the certificate to the truststore and proceeds with the connection.
The storage provider adds the vCenter Server certificate to its truststore when vCenter Server first
connects to the provider.
6 Click OK.
vCenter Server registers the storage provider and establishes a secure SSL connection with it.
What to do next
If your storage provider fails to register, see the VMware Knowledge Base article
http://kb.vmware.com/kb/2079087.
View general storage provider information and details for each storage component.
Procedure
3 In the Storage Providers list, view the storage providers registered with vCenter Server.
The list shows general information including the name of the storage provider, its URL and status,
version of VASA APIs, storage entities the provider represents, and so on.
4 To display additional details, select a specific storage provider or its component from the list.
Note A single storage provider can support storage systems from multiple different vendors.
Procedure
3 From the list of storage providers, select a storage provider and click one of the following icons.
Option: Synchronize Storage Providers
Description: Synchronize all storage providers with the current state of the environment.
Option: Remove
Description: Unregister storage providers that you do not use. After this operation, vCenter Server closes the connection and removes the storage provider from its configuration. This option is also useful when you upgrade a storage provider to a later VASA version. In this case, you must unregister and then reregister the provider. After registration, vCenter Server can detect and use the functionality of the later VASA version.
Option: Refresh certificate
Description: vCenter Server warns you when a certificate assigned to a storage provider is about to expire. You can refresh the certificate to continue using the provider. If you fail to refresh the certificate before it expires, vCenter Server discontinues using the provider.
vCenter Server closes the connection and removes the storage provider from its configuration.
Historically, vSphere storage management used a datastore-centric approach. With this approach,
storage administrators and vSphere administrators discuss in advance the underlying storage
requirements for virtual machines. The storage administrator then sets up LUNs or NFS shares and
presents them to ESXi hosts. The vSphere administrator creates datastores based on LUNs or NFS, and
uses these datastores as virtual machine storage. Typically, the datastore is the lowest granularity level at
which data management occurs from a storage perspective. However, a single datastore contains
multiple virtual machines, which might have different requirements. With the traditional approach, it is
difficult to meet the requirements of an individual virtual machine.
The Virtual Volumes functionality helps to improve granularity. It helps you to differentiate virtual machine
services on a per application level by offering a new approach to storage management. Rather than
arranging storage around features of a storage system, Virtual Volumes arranges storage around the
needs of individual virtual machines, making storage virtual-machine centric.
Virtual Volumes maps virtual disks and their derivatives, clones, snapshots, and replicas, directly to
objects, called virtual volumes, on a storage system. This mapping allows vSphere to offload intensive
storage operations such as snapshot, cloning, and replication to the storage system.
By creating a volume for each virtual disk, you can set policies at the optimum level. You can decide in
advance what the storage requirements of an application are, and communicate these requirements to
the storage system. The storage system creates an appropriate virtual disk based on these requirements.
For example, if your virtual machine requires an active-active storage array, you no longer must select a datastore that supports the active-active model. Instead, you create an individual virtual volume that is automatically placed on the active-active array.
Watch the video to learn more about different components of the Virtual Volumes functionality.
n Virtual Volumes
Virtual volumes are encapsulations of virtual machine files, virtual disks, and their derivatives.
n Storage Containers
Unlike traditional LUN and NFS-based storage, the Virtual Volumes functionality does not require
preconfigured volumes on a storage side. Instead, Virtual Volumes uses a storage container. It is a
pool of raw storage capacity or an aggregation of storage capabilities that a storage system can
provide to virtual volumes.
n Protocol Endpoints
Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access
to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the
protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes
encapsulate. ESXi uses protocol endpoints to establish a data path on demand from virtual
machines to their respective virtual volumes.
Virtual Volumes
Virtual volumes are encapsulations of virtual machine files, virtual disks, and their derivatives.
Virtual volumes are stored natively inside a storage system that is connected to your ESXi hosts through
Ethernet or SAN. They are exported as objects by a compliant storage system and are managed entirely
by hardware on the storage side. Typically, a unique GUID identifies a virtual volume. Virtual volumes are
not preprovisioned, but created automatically when you perform virtual machine management operations.
These operations include a VM creation, cloning, and snapshotting. ESXi and vCenter Server associate
one or more virtual volumes to a virtual machine.
Config-VVol
A configuration virtual volume, or VM home directory, that contains metadata files for a virtual machine, such as the .vmx file, virtual disk descriptor files, and log files.
Data-VVol
A data virtual volume that corresponds directly to each virtual disk .vmdk file. As virtual disk files on traditional datastores, virtual volumes are presented to virtual machines as SCSI disks. Data-VVols can be either thick or thin-provisioned.
Swap-VVol
Created when a VM is first powered on. It is a virtual volume that holds copies of VM memory pages that cannot be retained in memory. Its size is determined by the VM's memory size. It is thick-provisioned by default.
Snapshot-VVol
A virtual memory volume that holds the contents of virtual machine memory for a snapshot. Thick-provisioned.
Other
A virtual volume for specific features. For example, a digest virtual volume is created for Content-Based Read Cache (CBRC).
Typically, a VM creates a minimum of three virtual volumes, data-VVol, config-VVol, and swap-VVol. The
maximum depends on how many virtual disks and snapshots reside on the VM.
For example, the following SQL server has six virtual volumes:
n Config-VVol
n Data-VVol for the operating system
n Data-VVol for the database
n Data-VVol for the log
n Swap-VVol
n Snapshot-VVol
By using different virtual volumes for different VM components, you can apply and manipulate storage
policies at the finest granularity level. For example, a virtual volume that contains a virtual disk can have a
richer set of services than the virtual volume for the VM boot disk. Similarly, a snapshot virtual volume can
use a different storage tier compared to a current virtual volume.
You select the thin or thick type for your virtual disk at the VM creation time. If your disk is thin and resides
on a VVols datastore, you cannot change its type later by inflating the disk.
The storage provider is implemented through VMware APIs for Storage Awareness (VASA) and is used to
manage all aspects of Virtual Volumes storage. The storage provider integrates with the Storage
Monitoring Service (SMS), shipped with vSphere, to communicate with vCenter Server and ESXi hosts.
The storage provider delivers information from the underlying storage container. The storage container
capabilities appear in vCenter Server and the vSphere Client. Then, in turn, the storage provider
communicates virtual machine storage requirements, which you can define in the form of a storage policy,
to the storage layer. This integration process ensures that a virtual volume created in the storage layer
meets the requirements outlined in the policy.
Typically, vendors are responsible for supplying storage providers that can integrate with vSphere and
provide support to Virtual Volumes. Every storage provider must be certified by VMware and properly
deployed. For information about deploying and upgrading the Virtual Volumes storage provider to a
version compatible with current ESXi release, contact your storage vendor.
After you deploy the storage provider, you must register it in vCenter Server, so that it can communicate
with vSphere through the SMS.
Storage Containers
Unlike traditional LUN and NFS-based storage, the Virtual Volumes functionality does not require
preconfigured volumes on a storage side. Instead, Virtual Volumes uses a storage container. It is a pool of
raw storage capacity or an aggregation of storage capabilities that a storage system can provide to virtual
volumes.
A storage container is a part of the logical storage fabric and is a logical unit of the underlying hardware.
The storage container logically groups virtual volumes based on management and administrative needs.
For example, the storage container can contain all virtual volumes created for a tenant in a multitenant
deployment, or a department in an enterprise deployment. Each storage container serves as a virtual
volume store and virtual volumes are allocated out of the storage container capacity.
Typically, a storage administrator on the storage side defines storage containers. The number of storage
containers, their capacity, and their size depend on a vendor-specific implementation. At least one
container for each storage system is required.
After you register a storage provider associated with the storage system, vCenter Server discovers all
configured storage containers along with their storage capability profiles, protocol endpoints, and other
attributes. A single storage container can export multiple capability profiles. As a result, virtual machines
with diverse needs and different storage policy settings can be a part of the same storage container.
Initially, all discovered storage containers are not connected to any specific host, and you cannot see
them in the vSphere Client. To mount a storage container, you must map it to a Virtual Volumes datastore.
Protocol Endpoints
Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to
virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol
endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate.
ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their
respective virtual volumes.
Each virtual volume is bound to a specific protocol endpoint. When a virtual machine on the host performs
an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. Typically, a
storage system requires just a few protocol endpoints. A single protocol endpoint can connect to
hundreds or thousands of virtual volumes.
On the storage side, a storage administrator configures protocol endpoints, one or several per storage
container. The protocol endpoints are a part of the physical storage fabric. The storage system exports
the protocol endpoints with associated storage containers through the storage provider. After you map the
storage container to a Virtual Volumes datastore, the ESXi host discovers the protocol endpoints and they
become visible in the vSphere Client. The protocol endpoints can also be discovered during a storage
rescan. Multiple hosts can discover and mount the protocol endpoints.
In the vSphere Client, the list of available protocol endpoints looks similar to the host storage devices list.
Different storage transports can be used to expose the protocol endpoints to ESXi. When the SCSI-based
transport is used, the protocol endpoint represents a proxy LUN defined by a T10-based LUN WWN. For
the NFS protocol, the protocol endpoint is a mount point, such as an IP address and a share name. You
can configure multipathing on the SCSI-based protocol endpoint, but not on the NFS-based protocol
endpoint. No matter which protocol you use, the storage array can provide multiple protocol endpoints for
availability purposes.
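If you want to confirm from the ESXi Shell which protocol endpoints the host has discovered and how their paths are claimed, a quick check might look like the following sketch. The naa identifier is a placeholder; use the identifier reported for your protocol endpoint.
# List the protocol endpoints known to this host
esxcli storage vvol protocolendpoint list
# Inspect the paths and the path selection policy for a SCSI protocol endpoint
esxcli storage core path list -d naa.600a098038303053743f497047375530
esxcli storage nmp device list -d naa.600a098038303053743f497047375530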
Protocol endpoints are managed per array. ESXi and vCenter Server assume that all protocol endpoints
reported for an array are associated with all containers on that array. For example, if an array has two
containers and three protocol endpoints, ESXi assumes that virtual volumes on both containers can be
bound to all three protocol endpoints.
When ESXi needs access to a virtual volume, for example when the VM that owns it is powered on, the host sends a bind request to the storage system. The storage system replies with a protocol endpoint ID that becomes an access point to the virtual volume. The protocol endpoint accepts all I/O requests to the virtual volume. This binding exists until ESXi sends an unbind request for the virtual volume.
For later bind requests on the same virtual volume, the storage system can return different protocol
endpoint IDs.
When receiving concurrent bind requests to a virtual volume from multiple ESXi hosts, the storage system
can return the same or different endpoint bindings to each requesting ESXi host. In other words, the
storage system can bind different concurrent hosts to the same virtual volume through different endpoints.
The unbind operation removes the I/O access point for the virtual volume. The storage system might
unbind the virtual volume from its protocol endpoint immediately, or after a delay, or take some other
action. A bound virtual volume cannot be deleted until it is unbound.
After vCenter Server discovers storage containers exported by storage systems, you must mount them as
Virtual Volumes datastores. The Virtual Volumes datastores are not formatted in a traditional way like, for
example, VMFS datastores. You must still create them because all vSphere functionalities, including FT,
HA, DRS, and so on, require the datastore construct to function properly.
You use the datastore creation wizard in the vSphere Client to map a storage container to a Virtual
Volumes datastore. The Virtual Volumes datastore that you create corresponds directly to the specific
storage container.
From a vSphere administrator perspective, the Virtual Volumes datastore is similar to any other datastore
and is used to hold virtual machines. Like other datastores, the Virtual Volumes datastore can be browsed
and lists virtual volumes by virtual machine name. Like traditional datastores, the Virtual Volumes
datastore supports unmounting and mounting. However, such operations as upgrade and resize are not
applicable to the Virtual Volumes datastore. The Virtual Volumes datastore capacity is configurable by the
storage administrator outside of vSphere.
You can use the Virtual Volumes datastores with traditional VMFS and NFS datastores and with vSAN.
Note The size of a virtual volume must be a multiple of 1 MB, with a minimum size of 1 MB. As a result,
all virtual disks that you provision on a Virtual Volumes datastore must be an even multiple of 1 MB. If the
virtual disk you migrate to the Virtual Volumes datastore is not an even multiple of 1 MB, extend the disk
to the nearest even multiple of 1 MB.
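For example, if the disk you plan to migrate is 10240.5 MB, extend it to the next whole megabyte before the migration. The following vmkfstools command is only a sketch; the size and the datastore path are placeholders.
# Extend the virtual disk to the nearest whole-MB size before migrating it to a Virtual Volumes datastore
vmkfstools -X 10241m /vmfs/volumes/datastore1/vm1/vm1.vmdk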
A VM storage policy is a set of rules that contains placement and quality-of-service requirements for a
virtual machine. The policy enforces appropriate placement of the virtual machine within Virtual Volumes
storage and guarantees that storage can satisfy virtual machine requirements.
You use the VM Storage Policies interface to create a Virtual Volumes storage policy. When you assign
the new policy to the virtual machine, the policy enforces that the Virtual Volumes storage meets the
requirements.
The default No Requirements policy that VMware provides has the following characteristics:
n You can create a VM storage policy for Virtual Volumes and designate it as the default.
Virtual Volumes supports NFS versions 3 and 4.1, iSCSI, Fibre Channel, and FCoE.
No matter which storage protocol is used, protocol endpoints provide uniform access to both SAN and NAS storage. A virtual volume, like a file on any traditional datastore, is presented to a virtual machine as a SCSI disk.
Note A storage container is dedicated to SCSI or NAS and cannot be shared across those protocol
types. An array can present one storage container with SCSI protocol endpoints and a different container
with NFS protocol endpoints. The container cannot use a combination of SCSI and NFS protocol
endpoints.
When the SCSI-based protocol is used, the protocol endpoint represents a proxy LUN defined by a T10-
based LUN WWN.
Like any block-based LUNs, the protocol endpoints are discovered using standard LUN discovery commands. The ESXi host periodically rescans for new devices and asynchronously discovers block-based protocol endpoints. The protocol endpoint can be accessible by multiple paths. Traffic on these paths follows well-known path selection policies, as is typical for LUNs.
On SCSI-based disk arrays, ESXi creates a small virtual volume at VM creation time and formats it as VMFS. This small virtual volume stores all VM metadata files and is called the config-VVol. The config-VVol functions as a VM storage locator for vSphere.
Virtual volumes on disk arrays support the same set of SCSI commands as VMFS and use ATS as a
locking mechanism.
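To confirm that a SCSI protocol endpoint device reports ATS (hardware-assisted locking) support, you can query its VAAI status from the ESXi Shell. The device identifier below is a placeholder.
# Show the VAAI primitives, including ATS, that the protocol endpoint device reports
esxcli storage core device vaai status get -d naa.600a098038303053743f497047375530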
No matter which version you use, a storage array can provide multiple protocol endpoints for availability
purposes.
Virtual volumes on NAS devices support the same NFS Remote Procedure Calls (RPCs) that ESXi hosts
use when connecting to NFS mount points.
On NAS devices, a config‐VVol is a directory subtree that corresponds to a config‐VVolID. The config‐
VVol must support directories and other operations that are necessary for NFS.
[Figure: Virtual Volumes architecture. In the data center, VM storage policies and the Storage Monitoring Service in VMware vSphere communicate through the VASA APIs with the VASA provider on the storage array, while protocol endpoints connect hosts to the virtual volumes.]
Virtual volumes are objects exported by a compliant storage system and typically correspond one-to-one
with a virtual machine disk and other VM-related files. A virtual volume is created and manipulated out-of-
band, not in the data path, by a VASA provider.
A VASA provider, or a storage provider, is developed through vSphere APIs for Storage Awareness. The
storage provider enables communication between the ESXi hosts, vCenter Server, and the
vSphere Client on one side, and the storage system on the other. The VASA provider runs on the storage
side and integrates with the vSphere Storage Monitoring Service (SMS) to manage all aspects of Virtual
Volumes storage. The VASA provider maps virtual disk objects and their derivatives, such as clones,
snapshots, and replicas, directly to the virtual volumes on the storage system.
The ESXi hosts have no direct access to the virtual volumes storage. Instead, the hosts access the virtual
volumes through an intermediate point in the data path, called the protocol endpoint. The protocol
endpoints establish a data path on demand from the virtual machines to their respective virtual volumes.
The protocol endpoints serve as a gateway for direct in-band I/O between ESXi hosts and the storage
system. ESXi can use Fibre Channel, FCoE, iSCSI, and NFS protocols for in-band communication.
The virtual volumes reside inside storage containers that logically represent a pool of physical disks on
the storage system. On the vCenter Server and ESXi side, storage containers are presented as Virtual
Volumes datastores. A single storage container can export multiple storage capability sets and provide
different levels of service to different virtual volumes.
Communication with the VASA provider is protected by SSL certificates. These certificates can come from
the VASA provider or from the VMCA.
n Certificates can be directly provided by the VASA provider for long-term use. They can be either self-
generated and self-signed, or derived from an external Certificate Authority.
n Certificates can be generated by the VMCA for use by the VASA provider.
When a host or VASA provider is registered, VMCA follows these steps automatically, without
involvement from the vSphere administrator.
1 When a VASA provider is first added to the vCenter Server storage management service (SMS), it
produces a self‐signed certificate.
2 After verifying the certificate, the SMS requests a Certificate Signing Request (CSR) from the VASA
provider.
3 After receiving and validating the CSR, the SMS presents it to the VMCA on behalf of the VASA
provider, requesting a CA signed certificate.
4 The signed certificate with the root certificate is passed to the VASA provider. The VASA provider can
authenticate all future secure connections originating from the SMS on vCenter Server and on ESXi
hosts.
In the Virtual Volumes environment, snapshots are managed by ESXi and vCenter Server, but are performed by the storage array.
Each snapshot creates an extra virtual volume object, the snapshot (or memory) virtual volume, that holds the contents of virtual machine memory. Original VM data is copied to this object, and it remains read-only, which prevents the guest operating system from writing to the snapshot. You cannot resize the snapshot virtual volume, and it can be read only when the VM is reverted to a snapshot. Typically, when you replicate the VM, its snapshot virtual volume is also replicated.
The base virtual volume remains active, or read-write. When another snapshot is created, it preserves the
new state and data of the virtual machine at the time you take the snapshot.
Deleting snapshots leaves the base virtual volume that represents the most current state of the virtual
machine. Snapshot virtual volumes are discarded. Unlike snapshots on the traditional datastores, virtual
volumes snapshots do not need to commit their contents to the base virtual volume.
For information about creating and managing snapshots, see the vSphere Virtual Machine Administration
documentation.
n The storage system or storage array that you use must support Virtual Volumes and integrate with the
vSphere components through vSphere APIs for Storage Awareness (VASA). The storage array must
support thin provisioning and snapshotting.
n The following components must be configured on the storage side:
n Protocol endpoints
n Storage containers
n Storage profiles
n Replication configurations if you plan to use Virtual Volumes with replication. See Requirements for Replication with Virtual Volumes.
n If you use iSCSI, activate the software iSCSI adapters on your ESXi hosts. Configure Dynamic Discovery and enter the IP address of your Virtual Volumes storage system, as shown in the sample commands after this list. See Configure the Software iSCSI Adapter.
n Synchronize all components in the storage array with vCenter Server and all ESXi hosts. Use
Network Time Protocol (NTP) to do this synchronization.
For more information, contact your vendor and see VMware Compatibility Guide.
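The following ESXi Shell commands sketch the iSCSI preparation described in the list above. The adapter name vmhba65 and the address 192.0.2.10 are placeholders for your environment.
# Enable the software iSCSI adapter on the host
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
# Add the Virtual Volumes storage system as a Dynamic Discovery (Send Targets) address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.0.2.10:3260
# Rescan the adapter so that the host detects the iSCSI protocol endpoints
esxcli storage core adapter rescan --adapter=vmhba65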
Procedure
5 Click OK.
Prerequisites
Procedure
What to do next
You can now provision virtual machines on the Virtual Volumes datastore. For information on creating
virtual machines, see Provision Virtual Machines on Virtual Volumes Datastores and the vSphere Virtual
Machine Administration documentation.
After registration, the Virtual Volumes provider communicates with vCenter Server. The provider reports
characteristics of underlying storage and data services, such as replication, that the storage system
provides. The characteristics appear in the VM Storage Policies interface and can be used to create a VM
storage policy compatible with the Virtual Volumes datastore. After you apply this storage policy to a
virtual machine, the policy is pushed to Virtual Volumes storage. The policy enforces optimal placement of
the virtual machine within Virtual Volumes storage and guarantees that storage can satisfy virtual
machine requirements. If your storage provides extra services, such as caching or replication, the policy
enables these services for the virtual machine.
Prerequisites
Verify that an appropriate version of the Virtual Volumes storage provider is installed on the storage side.
Obtain credentials of the storage provider.
Procedure
4 Enter connection information for the storage provider, including the name, URL, and credentials.
Action: Direct vCenter Server to the storage provider certificate
Description: Select the Use storage provider certificate option and specify the certificate's location.
Action: Use a thumbprint of the storage provider certificate
Description: If you do not guide vCenter Server to the provider certificate, the certificate thumbprint is displayed. You can check the thumbprint and approve it. vCenter Server adds the certificate to the truststore and proceeds with the connection.
The storage provider adds the vCenter Server certificate to its truststore when vCenter Server first
connects to the provider.
vCenter Server discovers and registers the Virtual Volumes storage provider.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
4 Enter the datastore name and select a backing storage container from the list of storage containers.
Make sure to use a name that does not duplicate another datastore name in your data center environment.
If you mount the same Virtual Volumes datastore to several hosts, the name of the datastore must be
consistent across all hosts.
What to do next
After you create the Virtual Volumes datastore, you can perform such datastore operations as renaming
the datastore, browsing datastore files, unmounting the datastore, and so on.
Procedure
4 To view details for a specific item, select this item from the list.
5 Use tabs under Protocol Endpoint Details to access additional information and modify properties for
the selected protocol endpoint.
Tab Description
Properties View the item properties and characteristics. For SCSI (block) items, view and
edit multipathing policies.
Paths (SCSI protocol endpoints only) Display paths available for the protocol endpoint. Disable or enable a selected
path. Change the Path Selection Policy.
Procedure
4 Select the protocol endpoint whose paths you want to change and click the Properties tab.
The path policies available for your selection depend on the storage vendor support.
n Fixed (VMware)
8 To save your settings and exit the dialog box, click OK.
Note All virtual disks that you provision on a Virtual Volumes datastore must be an even multiple of 1
MB.
A virtual machine that runs on a Virtual Volumes datastore requires an appropriate VM storage policy.
After you provision the virtual machine, you can perform typical VM management tasks. For information,
see the vSphere Virtual Machine Administration documentation.
Procedure
VMware provides a default No Requirements storage policy for Virtual Volumes. If needed, you can create a custom storage policy compatible with Virtual Volumes.
To guarantee that the Virtual Volumes datastore fulfills specific storage requirements when allocating
a virtual machine, associate the Virtual Volumes storage policy with the virtual machine.
For virtual machines provisioned on Virtual Volumes datastores, VMware provides a default No
Requirements policy. You cannot edit this policy, but you can designate a newly created policy as
default.
Array-based replication is policy driven. After you configure your Virtual Volumes storage for replication,
information about replication capabilities and replication groups is delivered from the array by the storage
provider. This information shows in the VM Storage Policy interface of vCenter Server.
You use the VM storage policy to describe replication requirements for your virtual machines. The
parameters that you specify in the storage policy depend on how your array implements replication. For
example, your VM storage policy might include such parameters as the replication schedule, replication
frequency, or recovery point objective (RPO). The policy might also indicate the replication target, a
secondary site where your virtual machines are replicated, or specify whether replicas must be deleted.
By assigning the replication policy during VM provisioning, you request replication services for your virtual
machine. After that, the array takes over the management of all replication schedules and processes.
[Figure: Array-based replication. The config, swap, data, DB1, DB2, and DB3 virtual volumes at Site 1 are replicated by the array to corresponding virtual volumes at Site 2.]
For general Virtual Volumes requirements, see Before You Enable Virtual Volumes.
Storage Requirements
Implementation of Virtual Volumes replication depends on your array and might be different for storage
vendors. Generally, the following requirements apply to all vendors.
n The storage arrays that you use to implement replication must be compatible with Virtual Volumes.
n The arrays must integrate with the version of the storage (VASA) provider compatible with Virtual
Volumes replication.
n The storage arrays must be replication capable and configured to use vendor-provided replication
mechanisms. Typical configurations usually involve one or two replication targets. Any required
configurations, such as pairing of the replicated site and the target site, must be also performed on
the storage side.
n When applicable, replication groups and fault domains for Virtual Volumes must be preconfigured on
the storage side.
For more information, contact your vendor and see VMware Compatibility Guide.
vSphere Requirements
n Use the vCenter Server and ESXi versions that support Virtual Volumes storage replication.
vCenter Server and ESXi hosts that are older than the 6.5 release do not support replicated Virtual
Volumes storage. Any attempts to create a replicated VM on an incompatible host fail with an error.
For information, see VMware Compatibility Guide.
n If you plan to migrate a virtual machine, make sure that target resources, such as the ESXi hosts and
Virtual Volumes datastores, support storage replication.
vCenter Server and ESXi can discover replication groups, but do not manage their life cycle. Replication
groups, also called consistency groups, indicate which VMs and virtual disks must be replicated together
to a target site. You can assign components of the same virtual machine, such as the VM configuration
file and virtual disks, to different preconfigured replication groups. Or exclude certain VM components
from replication.
If no preconfigured groups are available, Virtual Volumes can use an automatic method. With the
automatic method, Virtual Volumes creates a replication group on demand and associates this group with
a Virtual Volumes object being provisioned. If you use the automatic replication group, all components of
a virtual machine are assigned to the group. You cannot mix preconfigured and automatic replication
groups for components of the same virtual machine.
Fault domains are configured and reported by the storage array, and are not exposed in the
vSphere Client. The Storage Policy Based Management (SPBM) mechanism discovers fault domains and
uses them for validation purposes during a virtual machine creation.
For example, provision a VM with two disks, one associated with replication group Anaheim:B, the second associated with replication group Anaheim:C. SPBM validates the provisioning because both disks are replicated to the same target fault domains.
[Figure: Valid Configuration. Both source replication groups replicate to the same set of target fault domains, which include the New-York replication groups.]
Now provision a VM with two disks, one associated with replication group Anaheim:B, the second associated with replication group Anaheim:D. This configuration is invalid. Both replication groups replicate to the New-York fault domain; however, only one replicates to the Boulder fault domain.
[Figure: Invalid Configuration. The two source replication groups do not replicate to the same set of target fault domains.]
The workflow to activate replication for your virtual machines includes steps typical for the virtual machine
provisioning on Virtual Volumes storage.
1 Define the VM storage policy compatible with replication storage. The datastore-based rules of the
policy must include the replication component. See Create a VM Storage Policy for Virtual Volumes.
After you configure the storage policy that includes replication, vCenter Server discovers available
replication groups.
2 Assign the replication policy to your virtual machine. If configured, select a compatible replication
group, or use the automatic assignment. See Assign Storage Policies to Virtual Machines.
n You can apply the replication storage policy only to a configuration virtual volume and a data virtual
volume. Other VM objects inherit the replication policy in the following way:
n The memory virtual volume inherits the policy of the configuration virtual volume.
n The digest virtual volume inherits the policy of the data virtual volume.
n The swap virtual volume, which exists while a virtual machine is powered on, is excluded from
replication.
n If you do not apply the replication policy to a VM disk, the disk is not replicated.
n The replication storage policy should not be used as a default storage policy for a datastore.
Otherwise, the policy prevents you from selecting replication groups.
n Replication preserves snapshot history. If a snapshot was created and replicated, you can recover to
the application consistent snapshot.
n You can replicate a linked clone. If a linked clone is replicated without its parent, it becomes a full
clone.
n If a descriptor file belongs to a virtual disk of one VM, but resides in the VM home of another VM, both
VMs must be in the same replication group. If the VMs are located in different replication groups, both
of these replication groups must be failed over at the same time. Otherwise, the descriptor might
become unavailable after the failover. As a result, the VM might fail to power on.
n In your Virtual Volumes with replication environment, you might periodically run a test failover
workflow to ensure that the recovered workloads are functional after a failover.
The resulting test VMs that are created during the test failover are fully functional and suitable for
general administrative operations. However, certain considerations apply:
n All VMs created during the test failover must be deleted before the test failover stops. The
deletion ensures that any snapshots or snapshot-related virtual volumes that are part of the VM,
such as the snapshot virtual volume, do not interfere with stopping of the test failover.
n You can create fast clones only if the policy applied to the new VM contains the same replication
group ID as the VM being cloned. Attempts to place the child VM outside of the replication group
of the parent VM fail.
Virtual Volumes supports the following capabilities, features, and VMware products:
n With Virtual Volumes, you can use advanced storage services that include replication, encryption,
deduplication, and compression on individual virtual disks. Contact your storage vendor for
information about services they support with Virtual Volumes.
n Virtual Volumes functionality supports backup software that uses vSphere APIs - Data Protection.
Virtual volumes are modeled on virtual disks. Backup products that use vSphere APIs - Data
Protection are as fully supported on virtual volumes as they are on VMDK files on a LUN. Snapshots that the backup software creates using vSphere APIs - Data Protection appear as non-VVols snapshots to vSphere and the backup software.
Note vSphere Virtual Volumes does not support SAN transport mode. vSphere APIs - Data
Protection automatically selects an alternative data transfer method.
For more information about integration with the vSphere Storage APIs - Data Protection, consult your
backup software vendor.
n Virtual Volumes supports such vSphere features as vSphere vMotion, Storage vMotion, snapshots,
linked clones, Flash Read Cache, and DRS.
n You can use clustering products, such as Oracle Real Application Clusters, with Virtual Volumes. To
use these products, you activate the multiwrite setting for a virtual disk stored on the VVol datastore.
For more details, see the knowledge base article at http://kb.vmware.com/kb/2112039. For a list of
features and products that Virtual Volumes functionality supports, see VMware Product Interoperability
Matrixes.
n Because the Virtual Volumes environment requires vCenter Server, you cannot use Virtual Volumes
with a standalone host.
n A Virtual Volumes storage container cannot span multiple physical arrays. Some vendors present
multiple physical arrays as a single array. In such cases, you still technically use one logical array.
n Host profiles that contain Virtual Volumes datastores are vCenter Server specific. After you extract
this type of host profile, you can attach it only to hosts and clusters managed by the same
vCenter Server as the reference host.
Examples might include a container created for a tenant in a multitenant deployment, or a container for a
department in an enterprise deployment.
Changing storage profiles must be an array-side operation, not a storage migration to another container.
When you use block storage, the PE represents a proxy LUN defined by a T10-based LUN WWN. For
NFS storage, the PE is a mount point, such as an IP address or DNS name, and a share name.
Typically, configuration of PEs is array-specific. When you configure PEs, you might need to associate
them with specific storage processors, or with certain hosts. To avoid errors when creating PEs, do not
configure them manually. Instead, when possible, use storage-specific management tools.
If your environment uses LUN IDs that are greater than 1023, change the number of scanned LUNs
through the Disk.MaxLUN parameter. See Change the Number of Scanned Storage Devices.
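For example, from the ESXi Shell you can check the current value and raise it. The value 16384 is only an illustration; set a value that covers your highest LUN ID.
# Display the current Disk.MaxLUN value
esxcli system settings advanced list -o /Disk/MaxLUN
# Raise the number of scanned LUN IDs so that IDs greater than 1023 are discovered
esxcli system settings advanced set -o /Disk/MaxLUN -i 16384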
When you use vSphere Client, you cannot change the VM storage policy assignment for swap-VVol,
memory-VVol, or snapshot-VVol.
n On block storage, ESXi uses a large queue depth for I/O because of the potentially high number of virtual volumes. The Scsi.ScsiVVolPESNRO parameter controls the number of I/Os that can be queued for protocol endpoints. You can configure the parameter on the Advanced System Settings page of the vSphere Client.
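As a sketch, assuming the parameter is exposed under the /Scsi path of the advanced settings, you can also view or change it from the ESXi Shell. The value 128 is only an example; follow your storage vendor's guidance.
# View the current number of outstanding I/Os allowed for protocol endpoints
esxcli system settings advanced list -o /Scsi/ScsiVVolPESNRO
# Change the value (example only)
esxcli system settings advanced set -o /Scsi/ScsiVVolPESNRO -i 128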
Suppose that your VM has two virtual disks, and you take two snapshots with memory. Your VM might
occupy up to 10 VVol objects: a config-VVol, a swap-VVol, two data-VVols, four snapshot-VVols, and two
memory snapshot-VVols.
n When appropriate, use vSphere HA or Site Recovery Manager to protect the storage provider VM.
n Failed Attempts to Migrate VMs with Memory Snapshots to and from Virtual Datastores
When you attempt to migrate a VM with hardware version 10 or earlier to and from a vSphere Virtual
Volumes datastore, failures occur if the VM has memory snapshots.
esxcli storage vvol daemon unbindall   Unbind all virtual volumes from all VASA providers known to the ESXi host.
esxcli storage vvol protocolendpoint list   List all protocol endpoints that your host can access.
esxcli storage vvol vasacontext get   Show the VASA context (VC UUID) associated with the host.
esxcli storage vvol vasaprovider list   List all storage (VASA) providers associated with the host.
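For example, a basic health check of the Virtual Volumes environment from the ESXi Shell might combine these commands as follows.
# Confirm that the host sees the expected VASA providers and protocol endpoints
esxcli storage vvol vasaprovider list
esxcli storage vvol protocolendpoint list
# Show the vCenter Server (VASA context) that the host is associated with
esxcli storage vvol vasacontext get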
Problem
The vSphere Client shows the datastore as inaccessible. You cannot use the datastore for virtual
machine provisioning.
Cause
This problem might occur when you fail to configure protocol endpoints for the SCSI-based storage
container that is mapped to the virtual datastore. Like traditional LUNs, SCSI protocol endpoints need to
be configured so that an ESXi host can detect them.
Solution
Before creating virtual datastores for SCSI-based containers, make sure to configure protocol endpoints
on the storage side.
Problem
An OVF template or a VM being migrated from a nonvirtual datastore might include additional large files,
such as ISO disk images, DVD images, and image files. If these additional files cause the configuration
virtual volume to exceed its 4-GB limit, migration or deployment to a virtual datastore fails.
Cause
The configuration virtual volume, or config-VVol, contains various VM-related files. On traditional
nonvirtual datastores, these files are stored in the VM home directory. Similar to the VM home directory,
the config-VVol typically includes the VM configuration file, virtual disk and snapshot descriptor files, log
files, lock files, and so on.
On virtual datastores, all other large-sized files, such as virtual disks, memory snapshots, swap, and
digest, are stored as separate virtual volumes.
Config-VVols are created as 4-GB virtual volumes. Generic content of the config-VVol usually consumes
only a fraction of this 4-GB allocation, so config-VVols are typically thin-provisioned to conserve backing
space. Any additional large files, such as ISO disk images, DVD images, and image files, might cause the
config-VVol to exceed its 4-GB limit. If such files are included in an OVF template, deployment of the VM
OVF to vSphere Virtual Volumes storage fails. If these files are part of an existing VM, migration of that
VM from a traditional datastore to vSphere Virtual Volumes storage also fails.
Solution
n For VM migration. Before migrating a VM from a traditional datastore to a virtual datastore, remove
excess content from the VM home directory to keep the config-VVol under the 4-GB limit.
n For OVF deployment. Because you cannot deploy an OVF template that contains excess files directly
to a virtual datastore, first deploy the VM to a nonvirtual datastore. Remove any excess content from
the VM home directory, and migrate the resulting VM to vSphere Virtual Volumes storage.
Problem
The following problems occur when you migrate a version 10 or earlier VM with memory snapshots:
n Migration of a version 10 or earlier VM with memory snapshots to a virtual datastore is not supported
and causes a failure.
n Migration of a version 10 or earlier VM with memory snapshots from a virtual datastore to a nonvirtual
datastore, such as VMFS, can succeed. If you later make additional snapshots and attempt to
migrate this VM back to vSphere Virtual Volumes storage, your attempt fails.
Cause
vSphere Virtual Volumes storage does not require that you use a particular hardware version for your
virtual machines. Typically, you can move a virtual machine with any hardware version to vSphere Virtual
Volumes storage. However, if you have a VM with memory snapshots, and plan to migrate this VM
between a virtual datastore and a nonvirtual datastore, use hardware version 11 or later.
Non-VVols virtual machines of hardware version 11 or later use separate files to store their memory
snapshots. This usage is consistent with VMs on vSphere Virtual Volumes storage, where memory
snapshots are created as separate VVols instead of being stored as part of a .vmsn file in the VM home
directory. In contrast, non-VVols VMs with hardware version 10 continue to store their memory snapshots
as part of the .vmsn file in the VM home directory. As a result, you might experience problems or failures
when attempting to migrate these VMs between virtual and nonvirtual datastores.
Solution
To avoid problems when migrating VMs with memory snapshots across virtual and nonvirtual datastores,
use hardware version 11. Follow these guidelines when migrating version 10 or earlier VMs with memory
snapshots:
n Migrating a version 10 or earlier VM with memory snapshots to a virtual datastore is not supported.
The only workaround is to remove all snapshots. Upgrading the hardware version does not solve this
problem.
n Migrating a version 10 or earlier VM with memory snapshots from a virtual datastore to a nonvirtual
datastore, such as VMFS, can succeed. However, the migration might put the VM in an inconsistent
state. The snapshots that were taken on the virtual datastore use the vmem object. Any memory
snapshots taken after migrating to VMFS are stored in the .vmsn file. If you later attempt to migrate
this VM back to vSphere Virtual Volumes storage, your attempt fails. As with the previous case,
remove all snapshots to work around this problem.
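If you prefer to remove the snapshots from the ESXi Shell rather than from the vSphere Client, a sketch using vim-cmd follows. The VM ID 42 is a placeholder; obtain the actual ID from the first command.
# Find the VM ID of the virtual machine
vim-cmd vmsvc/getallvms
# Remove all snapshots from the VM before migrating it
vim-cmd vmsvc/snapshot.removeall 42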
The I/O filters can be offered by VMware or created by third parties through vSphere APIs for I/O Filtering
(VAIO).
VMware offers certain categories of I/O filters. In addition, third-party vendors can create the I/O filters.
Typically, they are distributed as packages that provide an installer to deploy the filter components on
vCenter Server and ESXi host clusters.
After the I/O filters are deployed, vCenter Server configures and registers an I/O filter storage provider,
also called a VASA provider, for each host in the cluster. The storage providers communicate with
vCenter Server and make data services offered by the I/O filter visible in the VM Storage Policies
interface. You can reference these data services when defining common rules for a VM policy. After you
associate virtual disks with this policy, the I/O filters are enabled on the virtual disks.
Datastore Support
I/O filters can support all datastore types including the following:
n VMFS
n NFS 3
n NFS 4.1
n Virtual Volumes (VVol)
n vSAN
I/O filters can serve the following purposes:
n Replication. Replicates all write I/O operations to an external target location, such as another host or
cluster.
n Encryption. Offered by VMware. Provides encryption mechanisms for virtual machines. For more
information, see the vSphere Security documentation.
n Caching. Implements a cache for virtual disk data. The filter can use a local flash storage device to
cache the data and increase the IOPS and hardware utilization rates for the virtual disk. If you use the
caching filter, you might need to configure a Virtual Flash Resource.
n Storage I/O control. Offered by VMware. Throttles the I/O load towards a datastore and controls the
amount of storage I/O that is allocated to virtual machines during periods of I/O congestion. For more
information, see the vSphere Resource Management documentation.
Note You can install several filters from the same category, such as caching, on your ESXi host.
However, you can have only one filter from the same category per virtual disk.
VAIO Filter Framework
A combination of user world and VMkernel infrastructure provided by ESXi. With the framework, you can add filter plug-ins to the I/O path to and from virtual disks. The infrastructure includes an I/O filter storage provider (VASA provider). The provider integrates with the Storage Policy Based Management (SPBM) system and exports filter capabilities to vCenter Server.
The following figure illustrates the components of I/O filtering and the flow of I/O between the guest OS
and the virtual disk.
[Figure: I/O from the guest OS in the virtual machine passes through Filter 1, Filter 2, through Filter N on the I/O path before reaching the virtual disk.]
Each Virtual Machine Executable (VMX) component of a virtual machine contains a Filter Framework that
manages the I/O filter plug-ins attached to the virtual disk. The Filter Framework invokes filters when the
I/O requests move between the guest operating system and the virtual disk. Also, the filter intercepts any
I/O access towards the virtual disk that happens outside of a running VM.
The filters run sequentially in a specific order. For example, a replication filter executes before a cache
filter. More than one filter can operate on the virtual disk, but only one for each category.
Once all filters for the particular disk verify the I/O request, the request moves to its destination, either the
VM or the virtual disk.
Because the filters run in user space, any filter failures impact only the VM, but do not affect the ESXi
host.
Storage providers for I/O filtering are software components that are offered by vSphere. They integrate
with I/O filters and report data service capabilities that I/O filters support to vCenter Server.
The capabilities populate the VM Storage Policies interface and can be referenced in a VM storage policy.
You then apply this policy to virtual disks, so that the I/O filters can process I/O for the disks.
If your caching I/O filter uses local flash devices, you must configure a virtual flash resource, also known as a VFFS volume. You configure the resource on your ESXi host before activating the filter. While processing the virtual machine read I/Os, the filter creates a virtual machine cache and places it on the VFFS volume.
[Figure: The cache filter on the VM I/O path places the virtual machine cache on the flash storage devices that back the virtual flash resource on the ESXi host.]
To set up a virtual flash resource, you use flash devices that are connected to your host. To increase the
capacity of your virtual flash resource, you can add more flash drives. An individual flash drive must be
exclusively allocated to a virtual flash resource and cannot be shared with any other vSphere service,
such as vSAN or VMFS.
Flash Read Cache and caching I/O filters are mutually exclusive because both functionalities use the
virtual flash resource on the host. You cannot enable Flash Read Cache on a virtual disk with the cache
I/O filters. Similarly, if a virtual machine has Flash Read Cache configured, it cannot use the cache I/O
filters.
n Use the latest version of ESXi and vCenter Server compatible with I/O filters. Older versions might
not support I/O filters, or provide only partial support.
n Check for any additional requirements that individual partner solutions might have. In specific cases,
your environment might need flash devices, extra physical memory, or network connectivity and
bandwidth. For information, contact your vendor or your VMware representative.
n Web server to host partner packages for filter installation. The server must remain available after
initial installation. When a new host joins the cluster, the server pushes appropriate I/O filter
components to the host.
Prerequisites
n For information about I/O filters provided by third parties, contact your vendor or your VMware
representative.
Procedure
VMware partners create I/O filters through the vSphere APIs for I/O Filtering (VAIO) developer program.
The filter packages are typically distributed as vSphere Installation Bundles (VIBs). The VIB package can
include I/O filter daemons, CIM providers, and other associated components.
Typically, to deploy the filters, you run installers provided by vendors. Installation is performed at the ESXi cluster level. You cannot install the filters on selected hosts.
Prerequisites
n Verify that the I/O filter solution integrates with vSphere ESX Agent Manager and is certified by
VMware.
Procedure
The installer deploys the appropriate I/O filter extension on vCenter Server and the filter components
on all hosts within a cluster.
A storage provider, also called a VASA provider, is automatically registered for every ESXi host in the
cluster. Successful auto-registration of the I/O filter storage providers triggers an event at the host
level. If the storage providers fail to auto-register, the system raises alarms on the hosts.
When you install a third-party I/O filter, a storage provider, also called VASA provider, is automatically
registered for every ESXi host in the cluster. Successful auto-registration of the I/O filter storage providers
triggers an event at the host level. If the storage providers fail to auto-register, the system raises alarms
on the hosts.
Procedure
1 Verify that the I/O filter storage providers appear as expected and are active.
When the I/O filter providers are properly registered, capabilities and data services that the filters offer
populate the VM Storage Policies interface.
2 Verify that the I/O filter components are listed on your cluster and ESXi hosts.
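One way to verify the filter components on an individual ESXi host is to list the installed VIBs from the ESXi Shell. The package name vendor-iofilter in the second command is a placeholder for the name that your vendor uses.
# List installed VIBs and look for the partner I/O filter package
esxcli software vib list | grep -i filter
# Show details for a specific filter VIB
esxcli software vib get -n vendor-iofilter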
Prerequisites
To determine whether the virtual flash resource must be enabled, check with your I/O filter vendor.
Procedure
3 Under Virtual Flash, select Virtual Flash Resource Management and click Add Capacity.
4 From the list of available flash drives, select one or more drives to use for the virtual flash resource
and click OK.
The virtual flash resource is created. The Device Backing area lists all the drives that you use for the
virtual flash resource.
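To confirm the backing devices from the ESXi Shell, you can list the flash devices that the host considers for the virtual flash resource. This assumes that the esxcli storage vflash namespace is available in your release.
# List flash devices and whether they are used by, or eligible for, the virtual flash resource
esxcli storage vflash device list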
Prerequisites
For the caching I/O filters, configure the virtual flash resource on your ESXi host.
Procedure
You must first create a virtual machine policy that lists data services provided by the I/O filters.
To activate data services that the I/O filter provides, associate the I/O filter policy with virtual disks.
You can assign the policy when you provision the virtual machine.
What to do next
If you later want to disable the I/O filter for a virtual machine, you can remove the filter rules from the VM
storage policy and re-apply the policy. See Edit or Clone a VM Storage Policy. Or you can edit the
settings of the virtual machine and select a different storage policy that does not include the filter.
You can assign the I/O filter policy during an initial deployment of a virtual machine. This topic describes
how to assign the policy when you create a new virtual machine. For information about other deployment
methods, see the vSphere Virtual Machine Administration documentation.
Note You cannot change or assign the I/O filter policy when migrating or cloning a virtual machine.
Prerequisites
Verify that the I/O filter is installed on the ESXi host where the virtual machine runs.
Procedure
1 Start the virtual machine provisioning process and follow the appropriate steps.
2 Assign the same storage policy to all virtual machine files and disks.
a On the Select storage page, select a storage policy from the VM Storage Policy drop-down
menu.
b Select the datastore from the list of compatible datastores and click Next.
The datastore becomes the destination storage resource for the virtual machine configuration file
and all virtual disks. The policy also activates I/O filter services for the virtual disks.
Use this option to enable I/O filters just for your virtual disks.
a On the Customize hardware page, expand the New hard disk pane.
b From the VM storage policy drop-down menu, select the storage policy to assign to the virtual
disk.
Use this option to store the virtual disk on a datastore other than the datastore where the VM
configuration file resides.
After you create the virtual machine, the Summary tab displays the assigned storage policies and their
compliance status.
What to do next
You can later change the virtual policy assignment. See Change Storage Policy Assignment for Virtual
Machine Files and Disks.
When you work with I/O filters, the following considerations apply:
n vCenter Server uses ESX Agent Manager (EAM) to install and uninstall I/O filters. As an
administrator, never invoke EAM APIs directly for EAM agencies that are created or used by
vCenter Server. All operations related to I/O filters must go through VIM APIs. If you accidentally
modify an EAM agency that was created by vCenter Server, you must revert the changes. If you
accidentally destroy an EAM agency that is used by I/O filters, you must call
Vim.IoFilterManager#uninstallIoFilter to uninstall the affected I/O filters. After uninstalling,
perform a fresh reinstall.
n When a new host joins the cluster that has I/O filters, the filters installed on the cluster are deployed
on the host. vCenter Server registers the I/O filter storage provider for the host. Any cluster changes
become visible in the VM Storage Policies interface of the vSphere Client.
n When you move a host out of a cluster or remove it from vCenter Server, the I/O filters are uninstalled
from the host. vCenter Server unregisters the I/O filter storage provider.
n If you use a stateless ESXi host, it might lose its I/O filter VIBs during a reboot. vCenter Server
checks the bundles installed on the host after it reboots, and pushes the I/O filter VIBs to the host if
necessary.
Prerequisites
Procedure
1 Uninstall the I/O filter by running the installer that your vendor provides.
During uninstallation, vSphere ESX Agent Manager automatically places the hosts into maintenance
mode.
If the uninstallation is successful, the filter and any related components are removed from the hosts.
2 Verify that the I/O filter components are properly uninstalled from your ESXi hosts:
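The verification command is not reproduced above. A minimal sketch, assuming you run it in the ESXi Shell or through vCLI and that filter_vendor is a placeholder for the VIB name your vendor uses:
# esxcli software vib list | grep filter_vendor
If the uninstallation succeeded, the filter VIB no longer appears in the list.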
An upgrade consists of uninstalling the old filter components and replacing them with the new filter
components. To determine whether an installation is an upgrade, vCenter Server checks the names and
versions of existing filters. If the existing filter names match the names of the new filters but have different
versions, the installation is considered an upgrade.
Prerequisites
n Required privileges: Host.Config.Patch.
Procedure
1 Upgrade the I/O filters by running the installer that your vendor provides.
During the upgrade, vSphere ESX Agent Manager automatically places the hosts into maintenance mode.
The installer identifies any existing filter components and removes them before installing the new filter components.
2 Verify that the upgraded I/O filter components are properly installed on your ESXi hosts:
After the upgrade, vSphere ESX Agent Manager places the hosts back into operational mode.
n Because I/O filters are datastore-agnostic, all types of datastores, including VMFS, NFS, Virtual
Volumes, and vSAN, are compatible with I/O filters.
n I/O filters support RDMs in virtual compatibility mode. No support is provided to RDMs in physical
compatibility mode.
n Flash Read Cache and caching I/O filters are mutually exclusive because both functionalities use the
virtual flash resource on the host. You cannot enable Flash Read Cache on a virtual disk with the
cache I/O filters. Similarly, if a virtual machine has Flash Read Cache configured, it cannot use the
cache I/O filters.
n You cannot change or assign the I/O filter policy while migrating or cloning a virtual machine. You can
change the policy after you complete the migration or cloning.
n When you clone or migrate a virtual machine with I/O filter policy from one host to another, make sure
that the destination host has a compatible filter installed. This requirement applies to migrations
initiated by an administrator or by such functionalities as HA or DRS.
n When you convert a template to a virtual machine, and the template is configured with I/O filter policy,
the destination host must have the compatible I/O filter installed.
n If you use vCenter Site Recovery Manager to replicate virtual disks, the resulting disks on the
recovery site do not have the I/O filter policies. You must create the I/O filter policies in the recovery
site and reattach them to the replicated disks.
n You can attach an encryption I/O filter to a new virtual disk when you create a virtual machine. You
cannot attach the encryption filter to an existing virtual disk.
n If your virtual machine has a snapshot tree associated with it, you cannot add, change, or remove the
I/O filter policy for the virtual machine.
If you use Storage vMotion to migrate a virtual machine with I/O filters, a destination datastore must be
connected to hosts with compatible I/O filters installed.
You might need to migrate a virtual machine with I/O filters across different types of datastores, for
example between VMFS and Virtual Volumes. If you do so, make sure that the VM storage policy includes
rule sets for every type of datastore you are planning to use. For example, if you migrate your virtual
machine between the VMFS and Virtual Volumes datastores, create a mixed VM storage policy that
includes the following rules:
n Rule Set 1 for the VMFS datastore. Because Storage Policy Based Management does not offer an
explicit VMFS policy, the rule set must include tag-based rules for the VMFS datastore.
n Rule Set 2 for the Virtual Volumes datastore.
When Storage vMotion migrates the virtual machine, the correct rule set that corresponds to the target
datastore is selected. The I/O filter rules remain unchanged.
If you do not specify rules for datastores and define only Common Rules for the I/O filters, the system
applies default storage policies for the datastores.
If an I/O filter installation fails on a host, the system generates events that report the failure. In addition,
an alarm on the host shows the reason for the failure. Examples of failures include the following:
n The VIB requires the host to be in maintenance mode for an upgrade or uninstallation.
n The VIB requires the host to reboot after the installation or uninstallation.
n Attempts to put the host in maintenance mode fail because the virtual machine cannot be evacuated
from the host.
vCenter Server can resolve some failures. You might have to intervene for other failures. For example,
you might need to edit the VIB URL, manually evacuate or power off virtual machines, or manually install
or uninstall VIBs.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
Options for the install command allow you to perform a dry run, specify a specific VIB, bypass
acceptance-level verification, and so on. Do not bypass verification on production systems. See the
vSphere Command-Line Interface Reference documentation.
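As one hedged illustration of a dry run, with a placeholder URL rather than a real package:
# esxcli software vib install -v http://web_server/vibs/io_filter_vendor.vib --dry-run
The --dry-run option reports what the installation would change without modifying the host.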
Block storage devices, Fibre Channel and iSCSI, and NAS devices support the hardware acceleration.
For additional details, see the VMware knowledge base article at http://kb.vmware.com/kb/1021976.
n VMFS clustered locking and metadata operations for virtual machine files
ESXi requirements for hardware acceleration:
n Block storage devices: support the T10 SCSI standard, or block storage plug-ins for array integration (VAAI).
n NAS devices: support NAS plug-ins for array integration.
Note If your SAN or NAS storage fabric uses an intermediate appliance in front of a storage system that
supports hardware acceleration, the intermediate appliance must also support hardware acceleration and
be properly certified. The intermediate appliance might be a storage virtualization appliance, I/O
acceleration appliance, encryption appliance, and so on.
The status values are Unknown, Supported, and Not Supported. The initial value is Unknown.
For block devices, the status changes to Supported after the host successfully performs the offload
operation. If the offload operation fails, the status changes to Not Supported. The status remains
Unknown if the device provides partial hardware acceleration support.
With NAS, the status becomes Supported when the storage can perform at least one hardware offload
operation.
When storage devices do not support or provide partial support for the host operations, your host reverts
to its native methods to perform unsupported operations.
n Full copy, also called clone blocks or copy offload. Enables the storage arrays to make full copies of
data within the array without having the host read and write the data. This operation reduces the time
and network load when cloning virtual machines, provisioning from a template, or migrating with
vMotion.
n Block zeroing, also called write same. Enables storage arrays to zero out a large number of blocks to
provide newly allocated storage, free of previously written data. This operation reduces the time and
network load when creating virtual machines and formatting virtual disks.
n Hardware assisted locking, also called atomic test and set (ATS). Supports discrete virtual machine
locking without use of SCSI reservations. This operation allows disk locking per sector, instead of the
entire LUN as with SCSI reservations.
Check with your vendor for the hardware acceleration support. Certain storage arrays require that you
activate the support on the storage side.
On your host, the hardware acceleration is enabled by default. If your storage does not support the
hardware acceleration, you can disable it.
In addition to hardware acceleration support, ESXi includes support for array thin provisioning. For
information, see ESXi and Array Thin Provisioning.
As with any advanced settings, before you disable the hardware acceleration, consult with the VMware
support team.
Procedure
n VMFS3.HardwareAcceleratedLocking
n DataMover.HardwareAcceleratedMove
n DataMover.HardwareAcceleratedInit
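A minimal command-line sketch for changing these settings, assuming you have already confirmed the change with VMware support; setting a value back to 1 re-enables that acceleration function:
# esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
# esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
# esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0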
In the vSphere 5.x and later releases, these extensions are implemented as the T10 SCSI commands. As
a result, with the devices that support the T10 SCSI standard, your ESXi host can communicate directly
and does not require the VAAI plug-ins.
If the device does not support T10 SCSI or provides partial support, ESXi reverts to using the VAAI plug-ins installed on your host. The host can also use a combination of the T10 SCSI commands and plug-ins.
The VAAI plug-ins are vendor-specific and can be either VMware or partner developed. To manage the
VAAI capable device, your host attaches the VAAI filter and vendor-specific VAAI plug-in to the device.
For information about whether your storage requires VAAI plug-ins or supports hardware acceleration
through T10 SCSI commands, see the VMware Compatibility Guide or contact your storage vendor.
You can use several esxcli commands to query storage devices for the hardware acceleration support
information. For the devices that require the VAAI plug-ins, the claim rule commands are also available.
For information about esxcli commands, see Getting Started with vSphere Command-Line Interfaces.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
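The procedure steps are not reproduced above. A hedged sketch of listing the VAAI plug-ins and the VAAI filter registered on the host:
# esxcli storage core plugin list --plugin-class=VAAI
# esxcli storage core plugin list --plugin-class=Filter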
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
The output shows the hardware acceleration, or VAAI, status that can be unknown, supported, or
unsupported.
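One way to produce this output, assuming naa.XXXXXXXXXXXX stands in for a real device identifier, is to query the device and check its VAAI Status field:
# esxcli storage core device list -d naa.XXXXXXXXXXXX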
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
u Run the esxcli storage core device vaai status get -d=device_ID command.
If a VAAI plug-in manages the device, the output shows the name of the plug-in attached to the
device. The output also shows the support status for each T10 SCSI based primitive, if available.
Output appears in the following example:
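The original output sample is not reproduced here; the following is an illustrative sketch with hypothetical device and status values:
naa.XXXXXXXXXXXX
   VAAI Plugin Name: VMW_VAAIP_SYMM
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported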
Procedure
In this example, the filter claim rules specify devices that the VAAI_FILTER filter claims.
In this example, the VAAI claim rules specify devices that the VAAI plug-in claims.
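The listing commands behind these two examples are not shown above; a hedged sketch of how to produce both listings:
# esxcli storage core claimrule list --claimrule-class=Filter
# esxcli storage core claimrule list --claimrule-class=VAAI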
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
1 Define a new claim rule for the VAAI filter by running the
esxcli storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER
command.
2 Define a new claim rule for the VAAI plug-in by running the
esxcli storage core claimrule add --claimrule-class=VAAI command.
Note Only the filter-class rules must be run. When the VAAI filter claims a device, it automatically
finds the proper VAAI plug-in to attach.
This example shows how to configure the hardware acceleration for IBM arrays using the
VMW_VAAIP_T10 plug-in. Use the following sequence of commands. For information about the options
that the command takes, see Add Multipathing Claim Rules.
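The command sequence itself is not reproduced above; the following is a hedged sketch that assumes IBM is the vendor string the array reports:
# esxcli storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER --type=vendor --vendor=IBM --autoassign
# esxcli storage core claimrule add --claimrule-class=VAAI --plugin=VMW_VAAIP_T10 --type=vendor --vendor=IBM --autoassign
# esxcli storage core claimrule load --claimrule-class=Filter
# esxcli storage core claimrule load --claimrule-class=VAAI
# esxcli storage core claimrule run --claimrule-class=Filter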
You can use the XCOPY mechanism with all storage arrays that support the SCSI T10 based
VMW_VAAIP_T10 plug-in developed by VMware. To enable the XCOPY mechanism, create a claim rule
of the VAAI class.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
For information about the options that the command takes, see Add Multipathing Claim Rules.
Option Description
-s|--xcopy-use-multi-segs Use multiple segments for XCOPY commands. Valid only when --xcopy-use-array-values is specified.
-m|--xcopy-max-transfer-size Maximum transfer size in MB for the XCOPY commands when you use a transfer size different from the size the array reports. Valid only when --xcopy-use-array-values is specified.
-k|--xcopy-max-transfer-size-kib Maximum transfer size in KiB for the XCOPY commands when you use a transfer size different from the size the array reports. Valid only when --xcopy-use-array-values is specified.
n # esxcli storage core claimrule add -r 914 -t vendor -V XtremIO -M XtremApp -P VMW_VAAIP_T10 -c VAAI -a -s -k 64
n # esxcli storage core claimrule add -r 65430 -t vendor -V EMC -M SYMMETRIX -P VMW_VAAIP_SYMM -c VAAI -a -s -m 200
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
The VAAI NAS framework supports both versions of NFS storage, NFS 3 and NFS 4.1.
The VAAI NAS uses a set of storage primitives to offload storage operations from the host to the array.
The following list shows the supported NAS operations:
n Full File Clone. Supports the ability of a NAS device to clone virtual disk files. This operation is similar to
the VMFS block cloning, except that NAS devices clone entire files instead of file segments.
n Reserve Space. Supports the ability of storage arrays to allocate space for a virtual disk file in the
thick format.
Typically, when you create a virtual disk on an NFS datastore, the NAS server determines the
allocation policy. The default allocation policy on most NAS servers is thin and does not guarantee
backing storage to the file. However, the reserve space operation can instruct the NAS device to use
vendor-specific mechanisms to reserve space for a virtual disk. As a result, you can create thick
virtual disks on the NFS datastore.
n Native Snapshot Support. Creation of virtual machine snapshots can be offloaded to the array.
n Extended Statistics. Supports visibility into space use on NAS devices. This functionality is useful for
thin provisioning.
With NAS storage devices, the hardware acceleration integration is implemented through vendor-specific
NAS plug-ins. These plug-ins are typically created by vendors and are distributed as VIB packages
through a website. No claim rules are required for the NAS plug-ins to function.
Several tools for installing and upgrading VIB packages are available. They include the esxcli
commands and vSphere Update Manager. For more information, see the vSphere Upgrade and Installing
and Administering VMware vSphere Update Manager documentation.
This topic provides an example for a VIB package installation using the esxcli command. For more
details, see the vSphere Upgrade documentation.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
The host acceptance level must be the same or less restrictive than the acceptance level of any VIB
you want to add to the host. The value can be one of the following:
n VMwareCertified
n VMwareAccepted
n PartnerSupported
n CommunitySupported
The URL specifies the URL to the VIB package to install. http:, https:, ftp:, and file: are
supported.
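A minimal installation sketch, assuming the PartnerSupported acceptance level and a placeholder URL for the NAS plug-in VIB:
# esxcli software acceptance set --level=PartnerSupported
# esxcli software vib install -v http://web_server/vibs/nas_plugin_vendor.vib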
This topic discusses how to uninstall a VIB package using the esxcli command. For more details, see
the vSphere Upgrade documentation.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
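A hedged uninstallation sketch, where nas_plugin_vendor is a placeholder for the VIB name your vendor uses; reboot the host if the VIB requires it for the removal to take effect:
# esxcli software vib remove -n nas_plugin_vendor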
Prerequisites
This topic discusses how to update a VIB package using the esxcli command. For more details, see the
vSphere Upgrade documentation.
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
The URL specifies the URL to the VIB package to update. http:, https:, ftp:, and file: are
supported.
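A hedged update sketch with a placeholder URL for the newer plug-in package:
# esxcli software vib update -v http://web_server/vibs/nas_plugin_vendor_v2.vib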
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
The Hardware Acceleration column in the output shows whether hardware acceleration is supported.
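A minimal sketch for producing that output for the NFS volumes mounted on the host; check the Hardware Acceleration column for each volume:
# esxcli storage nfs list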
For any primitive that the array does not implement, the array returns an error. The error triggers the ESXi
host to attempt the operation using its native methods.
The VMFS data mover does not leverage hardware offloads and instead uses software data movement
when one of the following occurs:
n The source and destination VMFS datastores have different block sizes.
n The source file type is RDM and the destination file type is non-RDM (regular file).
n The source VMDK type is eagerzeroedthick and the destination VMDK type is thin.
n The logical address and transfer length in the requested operation are not aligned to the minimum
alignment required by the storage device. All datastores created with the vSphere Client are aligned
automatically.
n The VMFS has multiple LUNs or extents, and they are on different arrays.
Hardware cloning between arrays, even within the same VMFS datastore, does not work.
Thick provisioning This is the traditional model of storage provisioning. With thick
provisioning, a large amount of storage space is provided in advance in
anticipation of future storage needs. However, the space might remain
unused, causing underutilization of storage capacity.
Thin provisioning This method contrasts with thick provisioning and helps you eliminate
storage underutilization problems by allocating storage space in a flexible,
on-demand manner. With ESXi, you can use two models of thin
provisioning, array-level and virtual disk-level.
Thin provisioning allows you to report more virtual storage space than there
is real physical capacity. This discrepancy can lead to storage over-subscription,
also called over-provisioning. When you use thin provisioning,
monitor actual storage usage to avoid conditions when you run out of
physical storage space.
By default, ESXi offers a traditional storage provisioning method for virtual machines. With this method,
you first estimate how much storage the virtual machine might need for its entire life cycle. You then
provision a fixed amount of storage space to the VM virtual disk in advance, for example, 40 GB. The
entire provisioned space is committed to the virtual disk. A virtual disk that immediately occupies the
entire provisioned space is a thick disk.
ESXi supports thin provisioning for virtual disks. With the disk-level thin provisioning feature, you can
create virtual disks in a thin format. For a thin virtual disk, ESXi provisions the entire space required for
the disk’s current and future activities, for example 40 GB. However, the thin disk uses only as much
storage space as the disk needs for its initial operations. In this example, the thin-provisioned disk
occupies only 20 GB of storage. If the disk requires more space, it can expand into its entire 40 GB of
provisioned space.
[Figure: Thick and thin virtual disks on a datastore. VM 1 uses a thick disk whose provisioned capacity is fully committed on the datastore. VM 2 uses a thin disk with a larger provisioned capacity but a smaller used capacity, so the datastore holds only the space the thin disk actually uses.]
NFS datastores with Hardware Acceleration and VMFS datastores support the following disk provisioning
policies. On NFS datastores that do not support Hardware Acceleration, only thin format is available.
You can use Storage vMotion or cross-host Storage vMotion to transform virtual disks from one format to
another.
Thick Provision Lazy Zeroed Creates a virtual disk in a default thick format. Space required for the virtual
disk is allocated when the disk is created. Data remaining on the physical
device is not erased during creation, but is zeroed out on demand later on
first write from the virtual machine. Virtual machines do not read stale data
from the physical device.
Thick Provision Eager Zeroed A type of thick virtual disk that supports clustering features such as Fault
Tolerance. Space required for the virtual disk is allocated at creation time.
In contrast to the thick provision lazy zeroed format, the data remaining on
the physical device is zeroed out when the virtual disk is created. It might
take longer to create virtual disks in this format than to create other types of
disks. Increasing the size of an Eager Zeroed Thick virtual disk causes a
significant stun time for the virtual machine.
Thin Provision Use this format to save storage space. For the thin disk, you provision as
much datastore space as the disk would require based on the value that
you enter for the virtual disk size. However, the thin disk starts small and at
first, uses only as much datastore space as the disk needs for its initial
operations. If the thin disk needs more space later, it can grow to its
maximum capacity and occupy the entire datastore space provisioned to it.
Thin provisioning is the fastest method to create a virtual disk because it
creates a disk with just the header information. It does not allocate or zero
out storage blocks. Storage blocks are allocated and zeroed out when they
are first accessed.
This procedure assumes that you are creating a new virtual machine. For information, see the vSphere
Virtual Machine Administration documentation.
Procedure
a Right-click any inventory object that is a valid parent object of a virtual machine, such as a data
center, folder, cluster, resource pool, or host, and select New Virtual Machine.
b Click the New Hard disk triangle to expand the hard disk options.
With a thin virtual disk, the disk size value shows how much space is provisioned and guaranteed
to the disk. At the beginning, the virtual disk might not use the entire provisioned space. The
actual storage use value can be less than the size of the virtual disk.
What to do next
If you created a virtual disk in the thin format, you can later inflate it to its full size.
Procedure
3 Review the storage use information in the upper right area of the Summary tab.
Storage Usage shows how much datastore space is occupied by virtual machine files, including
configuration and log files, snapshots, virtual disks, and so on. When the virtual machine is running, the
used storage space also includes swap files.
For virtual machines with thin disks, the actual storage use value might be less than the size of the virtual
disk.
Procedure
3 Click the Hard Disk triangle to expand the hard disk options.
The Type text box shows the format of your virtual disk.
What to do next
If your virtual disk is in the thin format, you can inflate it to its full size.
You use the datastore browser to inflate the thin virtual disk.
Prerequisites
n Make sure that the datastore where the virtual machine resides has enough space.
n Remove snapshots.
Procedure
2 Expand the virtual machine folder and browse to the virtual disk file that you want to convert.
The file has the .vmdk extension and is marked with the virtual disk ( ) icon.
Note The option might not be available if the virtual disk is thick or when the virtual machine is
running.
The inflated virtual disk occupies the entire datastore space originally provisioned to it.
Over-subscription can be possible because usually not all virtual machines with thin disks need the entire
provisioned datastore space simultaneously. However, if you want to avoid over-subscribing the
datastore, you can set up an alarm that notifies you when the provisioned space reaches a certain
threshold.
For information on setting alarms, see the vCenter Server and Host Management documentation.
If your virtual machines require more space, the datastore space is allocated on a first come first served
basis. When the datastore runs out of space, you can add more physical storage and increase the
datastore.
The ESXi host integrates with block-based storage and performs these tasks:
n The host can recognize underlying thin-provisioned LUNs and monitor their space use to avoid
running out of physical space. The LUN space might change if, for example, your VMFS datastore
expands or if you use Storage vMotion to migrate virtual machines to the thin-provisioned LUN. The
host warns you about breaches in physical LUN space and about out-of-space conditions.
n The host can run the automatic T10 unmap command from VMFS6 and VM guest operating systems
to reclaim unused space from the array. VMFS5 supports a manual space reclamation method.
Note ESXi does not support enabling and disabling of thin provisioning on a storage device.
Requirements
To use the thin provisioning reporting and space reclamation features, follow these requirements:
Unmap command originating from VMFS: Manual for VMFS5 (use the esxcli storage vmfs unmap command). Automatic for VMFS6.
Unmap command originating from guest OS: Yes, with limited support, for VMFS5. Yes for VMFS6.
n Use storage systems that support T10-based vSphere Storage APIs - Array Integration (VAAI),
including thin provisioning and space reclamation. For information, contact your storage provider and
check the VMware Compatibility Guide.
The following sample flow demonstrates how the ESXi host and the storage array interact to generate
breach of space and out-of-space warnings for a thin-provisioned LUN. The same mechanism applies
when you use Storage vMotion to migrate virtual machines to the thin-provisioned LUN.
1 Using storage-specific tools, your storage administrator provisions a thin LUN and sets a soft
threshold limit that, when reached, triggers an alert. This step is vendor-specific.
2 Using the vSphere Client, you create a VMFS datastore on the thin-provisioned LUN. The datastore
spans the entire logical size that the LUN reports.
3 As the space used by the datastore increases and reaches the set soft threshold, the following
actions take place:
You can contact the storage administrator to request more physical space. Alternatively, you can
use Storage vMotion to evacuate your virtual machines before the LUN runs out of capacity.
4 If no space is left to allocate to the thin-provisioned LUN, the following actions take place:
Caution In certain cases, when a LUN becomes full, it might go offline or get unmapped from
the host.
You can resolve the permanent out-of-space condition by requesting more physical space from
the storage administrator.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
The following thin provisioning status indicates that the storage device is thin-provisioned.
Note Some storage systems present all devices as thin-provisioned no matter whether the devices are
thin or thick. Their thin provisioning status is always yes. For details, check with your storage vendor.
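A hedged sketch of checking the status for a single device, where naa.XXXXXXXXXXXX is a placeholder device identifier; look for the Thin Provisioning Status field in the output:
# esxcli storage core device list -d naa.XXXXXXXXXXXX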
You free storage space inside the VMFS datastore when you delete or migrate the VM, consolidate a
snapshot, and so on. Inside the virtual machine, storage space is freed when you delete files on the thin
virtual disk. These operations leave blocks of unused space on the storage array. However, when the
array is not aware that the data was deleted from the blocks, the blocks remain allocated by the array
until the datastore releases them. VMFS uses the SCSI unmap command to indicate to the array that the
storage blocks contain deleted data, so that the array can unallocate these blocks.
ESXi host
Storage Array
VMFS Datastore
VMs
Physical Disk
Blocks
The command can also originate directly from the guest operating system. Both VMFS5 and VMFS6
datastores can provide support to the unmap command that proceeds from the guest operating system.
However, the level of support is limited on VMFS5.
Depending on the type of your VMFS datastore, you use different methods to configure space
reclamation for the datastore and your virtual machines.
The operation helps the storage array to reclaim unused free space. Unmapped space can be then used
for other storage allocation requests and needs.
n Unmap requests are sent at a constant rate, which helps to avoid any instant load on the backing
array.
n Unmap processing and truncate I/O paths are disconnected, so I/O performance is not impacted.
For VMFS6 datastores, you can configure the following space reclamation parameters.
Space reclamation granularity Granularity defines the minimum size of a released space sector that
underlying storage can reclaim. Storage cannot reclaim those sectors that
are smaller in size than the specified granularity.
For VMFS6, reclamation granularity equals the block size. When you
specify the block size as 1 MB, the granularity is also 1 MB. Storage
sectors of a size smaller than 1 MB are not reclaimed.
Space reclamation method The method can be either priority or fixed. When the method you use is
priority, you configure the priority rate. For the fixed method, you must
indicate the bandwidth in MB per second.
Space reclamation priority This parameter defines the rate at which the space reclamation operation is
performed when you use the priority reclamation method. Typically, VMFS6
can send the unmap commands either in bursts or sporadically depending
on the workload and configuration. For VMFS6, you can specify one of the
following options.
Space Reclamation Priority Description Configuration
Low (default) Sends the unmap command at a less frequent rate, 25–50 MB per second. vSphere Client, esxcli command
Medium Sends the command at a rate twice as fast as the low rate, 50–100 MB per second. esxcli command
High Sends the command at a rate three times as fast as the low rate, over 100 MB per second. esxcli command
Note The ESXi host of version 6.5 does not recognize the medium and
high priority rates. If you migrate the VMs to the host version 6.5, the rate
defaults to low.
After you enable space reclamation, the VMFS6 datastore can start releasing the blocks of unused space
only when it has at least one open file. This condition can be fulfilled when, for example, you power on
one of the VMs on the datastore.
At the VMFS6 datastore creation time, the only available method for the space reclamation is priority. To
use the fixed method, edit the space reclamation settings of the existing datastore.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
The parameters define granularity and the priority rate at which space reclamation operations are
performed. You can also use this page to disable space reclamation for the datastore.
Option Description
Block size The block size on a VMFS datastore defines the maximum file size and the amount of space the file occupies. VMFS6 supports the block size of 1 MB.
Space reclamation granularity Specify granularity for the unmap operation. Unmap granularity equals the block size, which is 1 MB. Storage sectors of a size smaller than 1 MB are not reclaimed.
Note In the vSphere Client, the only available settings for the space reclamation priority are Low and
None. To change the settings to Medium or High, use the esxcli command. See Use the ESXCLI
Command to Change Space Reclamation Parameters.
After you enable space reclamation, the VMFS6 datastore can start releasing the blocks of unused space
only when it has at least one open file. This condition can be fulfilled when, for example, you power on
one of the VMs on the datastore.
Procedure
Option Description
Enable automatic space reclamation at fixed rate Use the fixed method for space reclamation. Specify reclamation bandwidth in MB per second.
Disable automatic space reclamation Deleted or unmapped blocks are not reclaimed.
The modified value for the space reclamation priority appears on the General page for the datastore.
Procedure
Option Description
Use the following example to set the reclamation method to fixed and the rate to 100 MB per second.
esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-method fixed -b 100
Procedure
a Under Properties, expand File system and review the value for the space reclamation
granularity.
b Under Space Reclamation, review the setting for the space reclamation priority.
If you configured any values through the esxcli command, for example, Medium or High for the
space reclamation priority, these values also appear in the vSphere Client.
You can also use the esxcli storage vmfs reclaim config get -l=VMFS_label|-u=VMFS_uuid
command to obtain information for the space reclamation configuration.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
u To reclaim unused storage blocks on the thin-provisioned device, run the esxcli storage vmfs unmap command with the following options (a usage sketch follows the options table):
Option Description
-l|--volume-label=volume_label The label of the VMFS volume to unmap. A mandatory argument. If you specify this argument, do not use -u|--volume-uuid=volume_uuid.
-u|--volume-uuid=volume_uuid The UUID of the VMFS volume to unmap. A mandatory argument. If you specify this argument, do not use -l|--volume-label=volume_label.
-n|--reclaim-unit=number Number of VMFS blocks to unmap per iteration. An optional argument. If it is not specified, the command uses the default value of 200.
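A minimal usage sketch, assuming a datastore labeled my_datastore and the default reclaim unit:
# esxcli storage vmfs unmap -l my_datastore -n 200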
What to do next
Important For additional details, see the VMware knowledge base article at
http://kb.vmware.com/kb/2014849.
Inside a virtual machine, storage space is freed when, for example, you delete files on the thin virtual
disk. The guest operating system notifies VMFS about freed space by sending the unmap command. The
unmap command sent from the guest operating system releases space within the VMFS datastore. The
command then proceeds to the array, so that the array can reclaim the freed blocks of space.
Generally, the guest operating systems send the unmap commands based on the unmap granularity they
advertise. For details, see documentation provided with your guest operating system.
The following considerations apply when you use space reclamation with VMFS6:
n VMFS6 processes the unmap request from the guest OS only when the space to reclaim equals 1
MB or is a multiple of 1 MB. If the space is less than 1 MB or is not aligned to 1 MB, the unmap
requests are not processed.
n For VMs with snapshots in the default SEsparse format, VMFS6 supports the automatic space
reclamation only on ESXi hosts version 6.7 or later. If you migrate VMs to ESXi hosts version 6.5 or
earlier, the automatic space reclamation stops working for the VMs with snapshots.
Space reclamation affects only the top snapshot and works when the VM is powered on.
However, for a limited number of the guest operating systems, VMFS5 supports the automatic space
reclamation requests.
To send the unmap requests from the guest operating system to the array, the virtual machine must meet
the following prerequisites:
n The guest operating system must be able to identify the virtual disk as thin.
Note After you make a change using the vmkfstools, the vSphere Client might not be updated
immediately. Use a refresh or rescan operation from the client.
For more information on the ESXi Shell, see Getting Started with vSphere Command-Line Interfaces.
Target specifies a partition, device, or path to apply the command option to.
options One or more command-line options and associated arguments that you use to
specify the activity for vmkfstools to perform. For example, selecting the disk format
when creating a new virtual disk.
After entering the option, specify a target on which to perform the operation. Target
can indicate a partition, device, or path.
partition Specifies disk partitions. This argument uses a disk_ID:P format, where disk_ID is the
device ID returned by the storage array and P is an integer that represents the
partition number. The partition digit must be greater than zero (0) and must
correspond to a valid VMFS partition.
device Specifies devices or logical volumes. This argument uses a path name in the ESXi
device file system. The path name begins with /vmfs/devices, which is the mount
point of the device file system.
Use the following formats when you specify different types of devices:
n /vmfs/devices/disks for local or SAN-based disks.
n /vmfs/devices/lvm for ESXi logical volumes.
n /vmfs/devices/generic for generic SCSI devices.
path Specifies a VMFS file system or file. This argument is an absolute or relative path
that names a directory symbolic link, a raw device mapping, or a file under /vmfs.
n To specify a VMFS file system, use this format:
/vmfs/volumes/file_system_UUID
or
/vmfs/volumes/file_system_label
n To specify a file on a VMFS datastore, use this format:
/vmfs/volumes/file_system_label|file_system_UUID/[dir]/myDisk.vmdk
The long and single-letter forms of the options are equivalent. For example, the following commands are
identical.
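As one illustration, using the -P and -h options documented later in this section:
vmkfstools -P -h /vmfs/volumes/my_vmfs
vmkfstools --queryfs --humanreadable /vmfs/volumes/my_vmfs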
-v Suboption
The -v suboption indicates the verbosity level of the command output.
-v --verbose number
You can specify the -v suboption with any vmkfstools option. If the output of the option is not suitable
for use with the -v suboption, vmkfstools ignores -v.
Note Because you can include the -v suboption in any vmkfstools command line, -v is not included as
a suboption in the option descriptions.
-P|--queryfs
-h|--humanreadable
When you use this option on any file or directory that resides on a VMFS datastore, the option lists the
attributes of the specified datastore. The listed attributes typically include the file system label, the
number of extents for the datastore, the UUID, and a list of the devices where each extent resides.
Note If any device backing the VMFS file system goes offline, the number of extents and available space
change accordingly.
You can specify the -h|--humanreadable suboption with the -P option. If you do so, vmkfstools lists
the capacity of the volume in a more readable form.
~ vmkfstools -P -h /vmfs/volumes/my_vmfs
VMFS-5.81 (Raw Major Version: 14) file system spanning 1 partitions.
File system label (if any): my_vmfs
Mode: public
Capacity 99.8 GB, 97.5 GB available, file block size 1 MB, max supported file size 62.9 TB
UUID: 571fe2fb-ec4b8d6c-d375-XXXXXXXXXXXX
Partitions spanned (on "lvm"):
eui.3863316131XXXXXX:1
Is Native Snapshot Capable: YES
-C|--createfs [vmfs5|vmfs6|vfat]
This option creates the VMFS datastore on the specified SCSI partition, such as disk_ID:P. The partition
becomes the head partition of the datastore. For VMFS5 and VMFS6, the only available block size is 1
MB.
n -S|--setfsname - Define the volume label of the VMFS datastore you are creating. Use this
suboption only with the -C option. The label you specify can be up to 128 characters long and cannot
contain any leading or trailing blank spaces.
Note vCenter Server supports the 80 character limit for all its entities. If a datastore name exceeds
this limit, the name gets shortened when you add this datastore to vCenter Server.
After you define a volume label, you can use it whenever you specify the VMFS datastore for the
vmkfstools command. The volume label appears in listings generated for the ls -l command and
as a symbolic link to the VMFS volume under the /vmfs/volumes directory.
To change the VMFS volume label, create a new symbolic link with the ln -sf command, where datastore
is the new volume label to use for the UUID VMFS volume.
Note If your host is registered with vCenter Server, any changes you make to the VMFS volume
label get overwritten by vCenter Server. This operation guarantees that the VMFS label is consistent
across all vCenter Server hosts.
This example illustrates creating a VMFS6 datastore named my_vmfs on the naa.ID:1 partition.
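A hedged sketch of that command, assuming naa.ID:1 stands in for the actual device partition:
vmkfstools -C vmfs6 -S my_vmfs /vmfs/devices/disks/naa.ID:1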
When you add an extent, you span the VMFS datastore from the head partition across the partition
specified by span_partition.
You must specify the full path name for the head and span partitions, for
example /vmfs/devices/disks/disk_ID:1. Each time you use this option, you add an extent to the
VMFS datastore, so that the datastore spans multiple partitions.
Caution When you run this option, you lose all data that previously existed on the SCSI device you
specified in span_partition.
In this example, you extend the existing head partition of the VMFS datastore over a new partition.
The extended datastore spans two partitions, naa.disk_ID_1:1 and naa.disk_ID_2:1. In this example,
naa.disk_ID_1:1 is the name of the head partition.
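A hedged sketch of the spanning command, assuming the -Z|--spanfs option takes the span partition first and the head partition second:
vmkfstools -Z /vmfs/devices/disks/naa.disk_ID_2:1 /vmfs/devices/disks/naa.disk_ID_1:1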
You might increase the datastore size after the capacity of the underlying storage has been increased.
This option expands the VMFS datastore or its specific extent, for example as sketched below.
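A hedged sketch, assuming the extended device partition is passed twice, as the datastore device and as the extent that was expanded:
vmkfstools --growfs /vmfs/devices/disks/disk_ID:1 /vmfs/devices/disks/disk_ID:1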
-T|--upgradevmfs /vmfs/volumes/UUID
The upgrade is a one-way process. After you have converted a VMFS3 datastore to VMFS5, you cannot
revert it back.
n zeroedthick (default) – Space required for the virtual disk is allocated during creation. Any data
remaining on the physical device is not erased during creation, but is zeroed out on demand on first
write from the virtual machine. The virtual machine does not read stale data from disk.
n eagerzeroedthick – Space required for the virtual disk is allocated at creation time. In contrast to
zeroedthick format, the data remaining on the physical device is zeroed out during creation. It might
take much longer to create disks in this format than to create other types of disks.
n thin – Thin-provisioned virtual disk. Unlike with the thick format, space required for the virtual disk
is not allocated during creation, but is supplied, zeroed out, on demand.
n 2gbsparse – A sparse disk with the maximum extent size of 2 GB. You can use disks in this format
with hosted VMware products, such as VMware Fusion. However, you cannot power on the sparse
disk on an ESXi host unless you first re-import the disk with vmkfstools in a compatible format, such
as thick or thin.
The only disk formats you can use for NFS are thin, thick, zeroedthick, and 2gbsparse.
Thick, zeroedthick, and thin formats usually behave the same because the NFS server and not the
ESXi host determines the allocation policy. The default allocation policy on most NFS servers is thin.
However, on NFS servers that support Storage APIs - Array Integration, you can create virtual disks in
zeroedthick format. The reserve space operation enables NFS servers to allocate and guarantee
space.
For more information on array integration APIs, see Chapter 24 Storage Hardware Acceleration.
-c|--createvirtualdisk size[bB|sS|kK|mM|gG]
-d|--diskformat [thin|zeroedthick|eagerzeroedthick]
-W|--objecttype [file|vsan|vvol]
--policyFile fileName
This option creates a virtual disk at the specified path on a datastore. Specify the size of the virtual disk.
When you enter the value for size, you can indicate the unit type by adding a suffix of k (kilobytes), m
(megabytes), or g (gigabytes). The unit type is not case-sensitive. vmkfstools interprets either k or K to
mean kilobytes. If you do not specify a unit type, vmkfstools defaults to bytes.
n -W|--objecttype specifies whether the virtual disk is a file on a VMFS or NFS datastore, or an
object on a vSAN or Virtual Volumes datastore.
This example shows how to create a two-gigabyte virtual disk file named disk.vmdk. You create the disk
on the VMFS datastore named myVMFS. The disk file represents an empty virtual disk that virtual
machines can access.
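A minimal sketch of that command; the 2g size suffix and the datastore path follow the conventions described above:
vmkfstools -c 2g /vmfs/volumes/myVMFS/disk.vmdk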
-w|--writezeros
This option cleans the virtual disk by writing zeros over all its data. Depending on the size of your virtual
disk and the I/O bandwidth to the device hosting the virtual disk, completing this command might take a
long time.
Caution When you use this command, you lose any existing data on the virtual disk.
-j|--inflatedisk
This option converts a thin virtual disk to eagerzeroedthick, preserving all existing data. The option
allocates and zeroes out any blocks that are not already allocated.
-k|--eagerzero
This option converts the virtual disk to the eagerzeroedthick format. While performing the conversion, this
option preserves any data on the virtual disk.
-K|--punchzero
This option deallocates all zeroed out blocks and leaves only those blocks that were allocated previously
and contain valid data. The resulting virtual disk is in thin format.
-U|--deletevirtualdisk
You must specify the original filename or file path oldName and the new filename or file path newName.
A non-root user cannot clone a virtual disk or an RDM. You must specify the original filename or file path
oldName and the new filename or file path newName.
Use the following suboptions to change corresponding parameters for the copy you create.
n -W|--objecttype specifies whether the virtual disk is a file on a VMFS or NFS datastore, or an
object on a vSAN or Virtual Volumes datastore.
By default, ESXi uses its native methods to perform the cloning operations. If your array supports the
cloning technologies, you can off-load the operations to the array. To avoid the ESXi native cloning,
specify the -N|--avoidnativeclone option.
This example illustrates cloning the contents of a master virtual disk from the templates repository to a
virtual disk file named myOS.vmdk on the myVMFS file system.
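A hedged sketch, assuming the master disk resides at /vmfs/volumes/templates/gold-master.vmdk (a hypothetical path):
vmkfstools -i /vmfs/volumes/templates/gold-master.vmdk /vmfs/volumes/myVMFS/myOS.vmdk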
You can configure a virtual machine to use this virtual disk by adding lines to the virtual machine
configuration file, as in the following example:
scsi0:0.present = TRUE
scsi0:0.fileName = /vmfs/volumes/myVMFS/myOS.vmdk
If you want to convert the format of the disk, use the -d|--diskformat suboption.
This suboption is useful when you import virtual disks in a format not compatible with ESXi, for example
2gbsparse format. After you convert the disk, you can attach this disk to a new virtual machine you create
in ESXi.
For example:
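A hedged sketch, assuming a 2gbsparse source disk named legacy_disk.vmdk (a hypothetical name) that is converted to the thin format during the clone:
vmkfstools -i legacy_disk.vmdk /vmfs/volumes/myVMFS/myOS.vmdk -d thin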
-X|--extendvirtualdisk newSize[bBsSkKmMgGtT]
Specify the newSize parameter adding an appropriate unit suffix. The unit type is not case-sensitive.
vmkfstools interprets either k or K to mean kilobytes. If you do not specify the unit type, vmkfstools
defaults to kilobytes.
The newSize parameter defines the entire new size, not just the increment you add to the disk.
For example, to extend a 4-GB virtual disk by 1 GB, enter: vmkfstools -X 5g disk_name.vmdk.
You can extend the virtual disk to the eagerzeroedthick format by using the -d eagerzeroedthick
option.
n Do not extend the base disk of a virtual machine that has snapshots associated with it. If you do, you
can no longer commit the snapshot or revert the base disk to its original size.
n After you extend the disk, you might need to update the file system on the disk. As a result, the guest
operating system recognizes the new size of the disk and can use it.
Use this option to convert virtual disks of type LEGACYSPARSE, LEGACYPLAIN, LEGACYVMFS,
LEGACYVMFS_SPARSE, and LEGACYVMFS_RDM.
-M|--migratevirtualdisk
-r|--createrdm device
/vmfs/devices/disks/disk_ID:P
In this example, you create an RDM file named my_rdm.vmdk and map the disk_ID raw disk to that file.
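A hedged sketch of the mapping command, where disk_ID stands in for the raw device identifier:
vmkfstools -r /vmfs/devices/disks/disk_ID /vmfs/volumes/myVMFS/my_rdm.vmdk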
You can configure a virtual machine to use the my_rdm.vmdk mapping file by adding the following lines to
the virtual machine configuration file:
scsi0:0.present = TRUE
scsi0:0.fileName = /vmfs/volumes/myVMFS/my_rdm.vmdk
After you establish this type of mapping, you can use it to access the raw disk as you access any other
VMFS virtual disk.
/vmfs/devices/disks/device_ID
For the .vmdk name, use this format. Make sure to create the datastore before using the command.
/vmfs/volumes/datastore_name/example.vmdk
For example,
-q|--queryrdm my_rdm.vmdk
This option prints the name of the raw disk RDM. The option also prints other identification information,
like the disk ID, for the raw disk.
# vmkfstools -q /vmfs/volumes/VMFS/my_vm/my_rdm.vmdk
-g|--geometry
The output is in the form: Geometry information C/H/S, where C represents the number of cylinders, H
represents the number of heads, and S represents the number of sectors.
Note When you import virtual disks from hosted VMware products to the ESXi host, you might see a
disk geometry mismatch error message. A disk geometry mismatch might also trigger problems when you
load a guest operating system or run a newly created virtual machine.
-x|--fix [check|repair]
For example,
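A hedged sketch, assuming a virtual disk at a hypothetical path; run check first, then repair only if problems are reported:
vmkfstools -x check /vmfs/volumes/my_datastore/my_vm/my_disk.vmdk
vmkfstools -x repair /vmfs/volumes/my_datastore/my_vm/my_disk.vmdk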
-e|--chainConsistent
Caution Using the -L option can interrupt the operations of other servers on a SAN. Use the -L option
only when troubleshooting clustering setups.
Unless advised by VMware, never use this option on a LUN hosting a VMFS volume.
n -L reserve – Reserves the specified LUN. After the reservation, only the server that reserved that
LUN can access it. If other servers attempt to access that LUN, a reservation error appears.
n -L release – Releases the reservation on the specified LUN. Other servers can access the LUN
again.
n -L lunreset – Resets the specified LUN by clearing any reservation on the LUN and making the
LUN available to all servers again. The reset does not affect any of the other LUNs on the device. If
another LUN on the device is reserved, it remains reserved.
n -L targetreset – Resets the entire target. The reset clears any reservations on all the LUNs
associated with that target and makes the LUNs available to all servers again.
n -L busreset – Resets all accessible targets on the bus. The reset clears any reservation on all the
LUNs accessible through the bus and makes them available to all servers again.
n -L readkeys – Reads the reservation keys registered with a LUN. Applies to SCSI-III persistent
group reservation functionality.
n -L readresv – Reads the reservation state on a LUN. Applies to SCSI-III persistent group
reservation functionality.
/vmfs/devices/disks/disk_ID:P
-B|--breaklock device
/vmfs/devices/disks/disk_ID:P
You can use this command when a host fails in the middle of a datastore operation, such as expanding the
datastore, adding an extent, or resignaturing. When you run this command, make sure that no other host is
holding the lock.