vSAN API Cookbook For Python
Table Of Contents
1. Introduction
1.1. Expectations
1.2. vSAN Management API
1.3. vSAN SDKs
2. vSAN Recipes
2.1. Deploying vSAN
2.2. Configuring vSAN Stretched Clusters or 2 Node
2.3. Performing a vSAN On-Disk Upgrade
3. References
3.1. Additional Documentation
3.2. VMware Contact Information
3.3. About the Author
1. Introduction
Typically, vSAN management is performed through the Web Client. Tasks include initial
configuration, ongoing maintenance, and reporting on the capacity, performance, or health of vSAN.
1.1 Expectations
While most element management is easily accomplished with the Web Client user interface,
performing many repeatable tasks is a manual process. Some aspects of vSAN management are
automated, such as disk claiming, periodic health checks, and error and other reporting. These
automated tasks are specific to each individual vSAN cluster, and often have to be repeated many
times when managing multiple independent vSAN clusters.
Consistency and repeatability are a challenge when performing tasks manually. It is quite common to
leverage tools such as an Application Programming Interface (API) along with code to execute tasks in
a consistent and repeatable fashion.
This document is intended to assist you with understanding the types of tasks that can be managed
programmatically through the vSAN Management API and SDK.
It is neither comprehensive in showing all possible actions nor prescriptive in showing the only way to
accomplish these tasks. This document focuses on the use of Python; other languages may be chosen,
but they are not covered here.
Throughout the document we will alternate between showing what types of tasks can be done through
the standard Web Client user interface, and how to achieve the same result through Python. None of
the included code samples are supported by VMware; they are merely representative of possible ways
to accomplish tasks.
1.2 vSAN Management API
vSAN 6.2 introduced a new vSAN Management API that extends the existing vSphere API. This API is
exposed by both vCenter Server managing vSAN and VMware ESXi hosts. Setup and configuration of
all aspects of vSAN, as well as runtime state, are available by leveraging the vSAN Management API.
There are a variety of vSphere Managed Objects exposed by the vSAN Management API, each
providing functionality specific to vCenter Server, ESXi, or both.
1.3 vSAN SDKs
Software Development Kits (SDKs) were also provided with the release of vSAN 6.2 for several
popular programming languages. By providing SDKs in multiple languages, VMware has made it easier
to use the vSAN API. Languages that have SDKs for the vSAN API include Python, Ruby,
Java, C#, and Perl. Each includes the respective language bindings to the vSphere Management SDK,
as well as documentation that details the usage of each API.
These are all extensions of the vSphere API for their respective programming language. While these
are languages that are supported today, additional languages may be supported in the future.
To get started, the vSAN Management SDK for one of the previously mentioned languages needs to be
downloaded from the VMware Developer site. The starting page for SDKs can be found here:
http://code.vmware.com/sdks
This URL is a common launching point for SDKs for different VMware products including vSAN. The
vSAN Management SDKs can be found in the Storage section. Each of the different vSAN
Management SDK links leads to dedicated content for that SDK including documentation and
reference material and a download link.
While the vSAN Management SDK content is publicly available, a My VMware account is required to download it.
Anyone can register for a My VMware account for free. Existing customers may already have accounts
if they have performed tasks such as license management, downloaded software, or worked with
VMware support.
From the download page, download the appropriate vSAN Management SDK for the language you
wish to use.
The vSAN Management SDKs are made available as a compressed .zip file. To use the SDK, the
contents of the .zip file will need to be extracted. While some Operating Systems provide native
support for extracting contents from .zip files, others do not.
To be able to use the vSAN Management SDK for Python, it is important to have Python installed on
the system that will be used to execute scripts that use the SDK.
Python can be easily installed on Windows, Linux, and Mac operating systems. This provides flexibility
for developers to use the platform of their choice, while providing portability of scripts across these
very different operating systems.
For the purpose of this document, Python for Windows 10 will be used. Python is available for free
from Python.org.
Python 2.7.12 was chosen for this paper. Python defaults to installing to C:\Python27.
After Python has been downloaded and installed, it is important to add pyVmomi. pyVmomi is the
Python SDK for the VMware vSphere API, allowing developers to access ESXi and vCenter using
Python. The vSAN Management API is built as an extension of the vSphere API; therefore, pyVmomi is
also used to allow Python access to the vSAN Management API.
An easy way to add pyVmomi to the installation is through pip, the Python package
manager. To install pyVmomi, simply type “pip install pyvmomi”.
Once pyVmomi has been installed, the bindings included in the vSAN Management SDK for Python
can be extracted from the vsan-sdk-python.zip file.
Copy vsanmgmtObjects.py from the bindings folder to Python’s Scripts folder (C:\Python27\Scripts).
Sample code scripts can also be copied to the same folder.
Once the bindings and sample code have been copied to the Scripts folder, additional scripts can be
easily run from this location.
2. vSAN Recipes
Sample ‘recipes’ are provided to detail the process of putting together scripts that use the vSAN
Management API for Python.
2.1 Deploying vSAN
Deploying vSAN can be accomplished easily in the Web Client using the Enable vSAN wizard. The
wizard prompts the administrator for items such as whether disks will be claimed manually or
automatically, whether Deduplication and Compression are to be enabled, which disks will be used, and more.
Specific actions are required to be performed across hosts in the cluster, in a particular order. For
example, disks cannot be added to the vSAN datastore until vSAN has been enabled, and hosts have a
dedicated network to communicate between each other. Once hosts have been added to a vSphere
cluster, and settings like vSphere HA and DRS are configured, a vSAN cluster can be configured.
These tasks are easy individually, but when deploying many vSAN clusters very rapidly, the consistency
and repeatability of these tasks rely on the administrator. Manually executing these tasks is time
consuming, and is potentially subject to variance if performed inconsistently.
vSAN is configured per cluster. Enabling vSAN using the vSphere Web Client is a very simple process.
In the Hosts and Clusters view, simply click the cluster that will have vSAN enabled, and go to Settings
under the Manage tab.
Because vCenter Server does not have a global option to enable vSAN on specific clusters, or all
clusters, the process is manual. Each cluster must be individually selected, and the vSAN Configuration
Wizard must be run.
When using the vSphere Web Client, authentication is taken care of automatically, providing a
secure session for performing tasks. When using code, the mechanism of connecting to a vCenter
Server, passing authentication, and establishing a session must be created.
In the code below, a function called getClusterInstance is called with the cluster name, which is
passed as a script argument, along with the mechanism to securely connect to vCenter Server with
credentials, which are also passed as arguments.
Functions are valuable snippets of code because they operate as smaller programs
within a larger program, performing a specific action. Functions such as getClusterInstance below
could likely be reused across many scripts to connect to a specific cluster.
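A minimal sketch of such a connection and function follows, assuming pyVmomi 6.x; the argument names (--host, --user, --password, --clusterName) are assumptions for illustration:

import argparse
import atexit
import ssl
from pyVim.connect import SmartConnect, Disconnect

parser = argparse.ArgumentParser()
for opt in ('--host', '--user', '--password', '--clusterName'):
    parser.add_argument(opt, required=True)
args = parser.parse_args()

def getClusterInstance(clusterName, serviceInstance):
    # Walk each datacenter and look the cluster up by name in its host folder
    content = serviceInstance.RetrieveContent()
    for datacenter in content.rootFolder.childEntity:
        cluster = content.searchIndex.FindChild(datacenter.hostFolder, clusterName)
        if cluster is not None:
            return cluster
    return None

# Skipping certificate verification is for lab use only
context = ssl._create_unverified_context()
si = SmartConnect(host=args.host, user=args.user, pwd=args.password,
                  port=443, sslContext=context)
atexit.register(Disconnect, si)
cluster = getClusterInstance(args.clusterName, si)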
The vSAN Configuration Wizard is the starting point for enabling vSAN on a vSphere cluster. A lot of
work has been done by VMware Engineering to make deploying vSAN very easy. The wizard exposes
the steps of choosing vSAN capabilities, ensuring that VMkernel adapters are configured, choosing
storage devices, and completing the setup.
Depending on which capabilities are selected, additional tasks can be performed using the wizard, like
claiming disks automatically, enabling Deduplication and Compression and configuring Fault Domains,
Stretched Clusters, or 2 Node configurations.
11
The vSAN Configuration Wizard is a launching point for configuring vSAN. For the purpose of this
document, we will break down the individual sections as they pertain to different functions.
vSAN can be configured to claim disks either automatically or manually. The default method is to
claim disks manually. This setting is valid both for the initial configuration and for normal operation
of vSAN, so it is important to understand its behavior when deploying vSAN.
When disk claiming is set to automatic, the vSAN wizard makes a best effort to choose the
appropriate device types for the Cache tier and the Capacity tier. This is easy for Hybrid
architectures, but can be problematic in an All-Flash configuration if the disks intended for the Cache
tier are similar in capacity to those intended for the Capacity tier. In some cases it may be preferable
to claim disks manually.
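The claiming mode can also be set programmatically. A hedged sketch follows, using the vSAN Management SDK's vsanapiutils helper; the 'vsan-cluster-config-system' key and the VsanClusterReconfig method follow the SDK samples and should be treated as assumptions:

import vsanapiutils
vcMos = vsanapiutils.GetVsanVcMos(si._stub)
vccs = vcMos['vsan-cluster-config-system']
vsanReconfigSpec = vim.VimVsanReconfigSpec(
    modify=True,
    vsanClusterConfig=vim.VsanClusterConfigInfo(
        enabled=True,
        defaultConfig=vim.VsanClusterConfigInfoHostDefaultInfo(
            autoClaimStorage=False)))  # claim disks manually (the default)
task = vccs.VsanClusterReconfig(cluster, vsanReconfigSpec)
vsanapiutils.WaitForTasks([vsanapiutils.ConvertVsanTaskToVcTask(task, si._stub)], si)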
Deduplication and Compression can be enabled either by a checkbox when initially creating a
vSAN cluster, or after the cluster has been created.
If a vSAN cluster is created without enabling Deduplication and Compression, enabling it later
performs a rolling upgrade, which can be time consuming and in some cases requires reduced
availability. Choosing to enable Deduplication and Compression at creation time avoids that rolling
upgrade.
This can easily be done by including code to enable Deduplication and Compression:
if isallFlash:
    print 'Enable deduplication and compression for VSAN'
    vsanReconfigSpec.dataEfficiencyConfig = vim.VsanDataEfficiencyConfig(
        compressionEnabled=args.enabledc,
        deduplicationEnabled=args.enabledc)
*Note: Deduplication and Compression cannot be enabled independently using the vSphere Web Client.
There is no significant performance benefit to enabling one and not the other, and enabling both is the
only supported configuration; the vSphere Web Client enables both simultaneously.
vSAN requires Multicast to deliver metadata traffic among cluster nodes. The vSAN wizard always
sets the default Multicast addressing and does not expose a way to change it.
In some situations, such as one where two independent vSAN clusters share the same Layer 2 network,
it is recommended to change the Multicast address for one of the two clusters. This can be
accomplished manually using the steps outlined in KB Article 2075451.
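The same change could be scripted per host. The following is a hedged sketch using the host's vsanSystem; the property names follow vim.vsan.host.IpConfig, and the device name vmk0 and the example addresses are assumptions:

from pyVim.task import WaitForTask

ipConfig = vim.vsan.host.IpConfig(
    upstreamIpAddress='224.1.2.4',    # master group multicast address (example)
    downstreamIpAddress='224.2.3.5')  # agent group multicast address (example)
vsanConfig = vim.vsan.host.ConfigInfo(
    networkInfo=vim.vsan.host.ConfigInfo.NetworkInfo(
        port=[vim.vsan.host.ConfigInfo.NetworkInfo.PortConfig(
            device='vmk0', ipConfig=ipConfig)]))
WaitForTask(host.configManager.vsanSystem.UpdateVsan_Task(vsanConfig))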
The sketch could be modified to accept addresses as arguments passed at runtime, rather than hard-coded values.
Hosts must have a VMkernel interface tagged for vSAN traffic to be able to access a vSAN datastore.
This is relatively easy to perform on a single or few hosts, but can be challenging at scale.
To tag a VMkernel interface, a host must be selected, and then the Networking menu from the
Management tab selected. Once there, the VMkernel interface that will be used for vSAN traffic must
be edited and tagged to include vSAN traffic.
While this is an easy task, it must be accomplished for each host in the cluster before vSAN is enabled.
Once each host has a VMkernel interface tagged for vSAN traffic, during setup the Wizard will indicate
that all hosts are properly configured.
Today, if hosts do not already have a VMkernel interface tagged for vSAN traffic, the Wizard must be
closed, VMkernel interfaces tagged, and the Wizard must be run again.
A script can select VMkernel NICs and enable vSAN traffic on them, ensuring proper configuration at
execution time.
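A minimal sketch, assuming vmk0 as the VMkernel device to tag on every host:

for host in hosts:
    # Tag the chosen VMkernel NIC for vSAN traffic on this host
    host.configManager.virtualNicManager.SelectVnicForNicType('vsan', 'vmk0')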
vSAN leverages storage devices of several types, depending on the architecture, for the purpose of
backing the vSAN datastore. There are different types of devices that vSAN uses, like storage
controllers, Solid-State Drives (SSDs), and traditional spinning Hard Disk Drives (HDDs).
Disks can be added to a vSAN cluster upon creation of the cluster, or can be added manually later
through Disk Management.
When using the vSphere Web Client, drives that have existing partitions are not listed as eligible to add
to the pool of disks used by vSAN. The task of listing eligible disks in the Web Client is done on the
backend, and not exposed to the administrator. Once selected, they will be claimed and added to the
vSAN cluster.
We can query each host in the cluster for eligible disks, and include the option to clear partitions on
disks that are not eligible. Once all eligible disks have been identified, they can be claimed.
# For each disk, interactively ask the admin whether to wipe ineligible disks
for disk in disks:
    if yes('Do you want to wipe disk {}?\nPlease always check the partition table and the data stored'
           ' on those disks before doing any wipe! (yes/no)?'.format(disk.displayName)):
        hostProps[host]['configManager.storageSystem'].UpdateDiskPartitions(
            disk.deviceName, vim.HostDiskPartitionSpec())
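The list of disks to evaluate can be gathered per host. A hedged sketch using the host's vsanSystem follows; the state strings come from vim.vsan.host.DiskResult:

results = host.configManager.vsanSystem.QueryDisksForVsan()
eligible = [r.disk for r in results if r.state == 'eligible']
disks = [r.disk for r in results if r.state == 'ineligible']  # candidates to wipe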
Also notice in the wizard above that all disks are flash devices. How do we determine which
devices to use for cache, and which for capacity? Because cache devices are normally
smaller than capacity devices, the wizard assigns the smaller devices to the Cache tier and the larger
devices to the Capacity tier. In a Hybrid configuration, flash devices would default to the Cache tier and
traditional HDDs would be listed as Capacity tier devices.
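A script can mimic that size-based selection. The sketch below is illustrative only; one cache device per disk group is an assumption, not the wizard's actual algorithm:

# Sort eligible flash devices by raw size (blocks * block size)
bySize = sorted(eligible, key=lambda d: d.capacity.block * d.capacity.blockSize)
cacheDisks = bySize[:1]     # smallest device for the Cache tier
capacityDisks = bySize[1:]  # remaining devices for the Capacity tier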
vSAN 6 introduced the ability to logically group hosts into Fault Domains. This feature gives
administrators the ability to logically separate vSAN hosts much in the same way that hosts are
physically separated.
Grouping one or more hosts into a Fault Domain using the vSphere Web Client can be done upon
vSAN cluster creation, or after a cluster has been created.
Fault Domains and assigned hosts can easily be passed as script arguments.
9. Finishing up
When completing a vSAN cluster setup using the vSAN Configuration Wizard, a nice summary screen
provides one last review of the proposed configuration before enabling vSAN with the selections a
Virtualization Administrator has made.
This review screen is the last opportunity to make sure that the cluster is configured properly. This is a
necessary step when manually configuring vSAN through the vSphere Web Client.
vSAN 6.2 added a new Performance Service that maintains a database of performance metrics on the
vSAN datastore. It must be manually enabled for each vSAN cluster because it is not enabled during
initial setup.
To set up the Performance Service, select Health and Performance from the Manage tab of a vSAN
cluster.
vSAN license assignment isn’t handled through the vSAN Configuration Wizard; vSAN clusters are
licensed through the vSphere Web Client. Multiple clusters can be licensed simultaneously through
the Web Client, but it is still a manual process.
The vSAN Management API includes the ability to assign licenses upon deployment of a vSAN
cluster.
if args.vsanlicense:
    print 'Assign VSAN license'
    lm = si.content.licenseManager
    lam = lm.licenseAssignmentManager
    lam.UpdateAssignedLicense(entity=cluster._moId, licenseKey=args.vsanlicense)
Using code to assign licenses to different clusters provides flexibility when assigning licenses to vSAN.
In cases where licensed features are upgraded across a large environment, using code to assign new
licensing could be significantly easier than manual license allocation.
Recipe Summary
Deploying vSAN manually is a fairly simple process. While some additional settings change the
deployment options, it remains very simple. Even so, there are always opportunities for error when
manually performing processes at scale.
The above code snippets show how easy it is to automate the deployment of vSAN using code
consistently and repeatedly at both large and small scale.
The individual code snippets are available as a single Python script from the VMware Developer Center
at the following URL.
https://code.vmware.com/samples?id=1133
2.2 Configuring vSAN Stretched Clusters or 2 Node
Stretched Clusters and 2 Node configurations have a few characteristics distinct from traditional vSAN
clusters: they require a Witness Host to make up the third site for metadata content, and this Witness
Host can be either a physical host or the freely available vSAN Witness Appliance.
The first step in supporting vSAN Stretched Clusters or 2 Node configurations is to configure either a
physical host or a vSAN Witness Appliance to perform the Witness responsibilities.
The vSAN Witness Appliance is a great alternative to using a physical ESXi host because it requires
neither a dedicated license nor dedicated physical disks for vSAN metadata. Virtual Appliances are often
provided as an OVA file, or as an OVF file along with additional components such as a manifest and
virtual disk files.
There are several methods to import Virtual Appliances into a vSphere environment. These methods
include options to import Virtual Appliances as a menu item in the vSphere Web Client (with the help
of the vSphere Client Integration Plugin), or the legacy vSphere Client. There is also an OVF Tool,
provided for various operating systems, that allows for importing Virtual Appliances from a command
line.
To import a Virtual Appliance from the vSphere Web Client, an administrator only needs to right click
on the cluster the appliance will be deployed to, designate the Virtual Appliance source, select
additional items (such as host, datastore, network, etc.), as well as any prompts for settings specific to
the Virtual Appliance.
Remember that previously mentioned tools can natively, or through the use of a plugin, upload Virtual
Appliances. Deploying the vSAN Witness Appliance can also be accomplished as part of a Python
script, but requires code to handle the upload process.
The vSAN Witness Appliance is available as an OVA. An OVA is a self-contained file comprising
several files, each of which must be uploaded to vCenter Server during the OVF deployment
process.
bufSize = 1048576  # 1 MB
total = 0
progress = minProgress
if log:
    # If args.log is available, then log to it
    log = log.info
else:
    log = sys.stdout.write
log("%s: %s: Start: srcURL=%s dstURL=%s\n" % (time.asctime(time.localtime()), vmName, srcURL, dstURL))
log("%s: %s: progress=%d total=%d length=%d\n" % (time.asctime(time.localtime()), vmName, progress, total, length))
while True:
    data = srcData.read(bufSize)
    if lease.state != vim.HttpNfcLease.State.ready:
        break
    # Push the chunk to the destination and report progress on the NFC lease
    dstHttpConn.send(data)
    total = total + len(data)
    progress = (int)(total * progressIncrement / length)
    progress += minProgress
    lease.Progress(progress)
    if len(data) == 0:
        break
log("%s: %s: Finished: srcURL=%s dstURL=%s\n" % (time.asctime(time.localtime()), vmName, srcURL, dstURL))
log("%s: %s: progress=%d total=%d length=%d\n" % (time.asctime(time.localtime()), vmName, progress, total, length))
log("%s: %s: Lease State: %s\n" % (time.asctime(time.localtime()), vmName, lease.state))
if lease.state == vim.HttpNfcLease.State.error:
    raise lease.error
dstHttpConn.getresponse()
return progress
Once the function is defined to upload a single file, another can be created to upload multiple files.
progress = 5
increment = (int)(90 / len(fileItems))
for file in fileItems:
    ovfDevId = file.deviceId
    srcDiskURL = urlparse.urljoin(ovfURL, file.path)
    (viDevId, url) = uploadUrlMap[ovfDevId]
    if lease.state == vim.HttpNfcLease.State.error:
        raise lease.error
    elif lease.state != vim.HttpNfcLease.State.ready:
        raise Exception("%s: file upload aborted, lease state=%s" % (vmName, lease.state))
    progress = uploadFile(srcDiskURL, url, file.create, lease, progress, increment, vmName, log)
Uploading files is only a small portion of deploying a Virtual Appliance. Additional tasks such as
cluster, host, network, and datastore placement are required, as well as passing any parameters that
the Virtual Appliance requires.
f = urllib.urlopen(ovfURL)
ovfData = f.read()
import xml.etree.ElementTree as ET
params.networkMapping = []
if vmPassword:
    # Pass the root password into the appliance as a vApp property
    params.propertyMapping = [vim.KeyValue(key='vsan.witness.root.passwd', value=vmPassword)]
ovf_tree = ET.fromstring(ovfData)
if lease.state == vim.HttpNfcLease.State.error:
    raise lease.error
# Upload files
uploadFiles(res.fileItem, lease, ovfURL, vmName, log)
lease.Complete()
return lease.info.entity
The DeployWitnessOVF function takes care of setting networking, configuring the supplied password
as one of the vApp options, and proper placement of the appliance on a specific host or resource pool.
Fortunately, the vSAN Witness Appliance only requires a password as an additional argument.
The DeployWitnessOVF function will parse the contents of an OVF, but does not have the capability
to parse an OVA, which is the format the Witness Appliance is downloaded as. The OVA will therefore
have to be extracted to a folder containing the OVF and the other required files.
The OVA file is essentially a .tar archive that can be extracted easily using a wide variety of tools. For
the purpose of this paper, TarTool will be used due to its simplicity and command line availability. *No
specific recommendation of this tool should be inferred by the reader, as many tools provide this
functionality.
After deploying the vSAN Witness Appliance, the VM must be powered on and some additional tasks
need to take place. Putting these together, the process of deploying a Witness Appliance could look
something like this:
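A hedged sketch of tying the pieces together; the DeployWitnessOVF argument list shown here is an assumption for illustration:

from pyVim.task import WaitForTask

witnessVm = DeployWitnessOVF(ovfURL, si, cluster, vmName, vmPassword, log)
WaitForTask(witnessVm.PowerOnVM_Task())  # power the appliance on and wait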
The vSAN Witness Host, either in physical or Virtual Appliance form, must be added to the vCenter
environment where the vSAN Stretched Cluster or 2 Node configuration is running. It is important to
remember that the Witness Host cannot be a member of the vSAN cluster.
Hosts can be manually added to a vSphere Datacenter, or clusters within that datacenter, using the
vSphere Web Client, vSphere Client, or other scripting tools. It is important to note that when using the
legacy vSphere Client, adding a vSAN Witness Appliance as a host will not properly assign the vSAN
Witness license. As a result, when manually adding a Witness, the vSphere Web Client is preferred
over the legacy vSphere Client.
A function is required to facilitate the process of adding the Witness Appliance as a host in vCenter.
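A hedged sketch of such a function follows; the connect spec's force flag and the omission of SSL thumbprint verification are simplifying assumptions, and production code should handle the thumbprint:

def AddHost(datacenter, hostname, user, pwd, licenseKey=None):
    # Build a connect spec for the new standalone host
    cnxSpec = vim.host.ConnectSpec(
        hostName=hostname, userName=user, password=pwd, force=True)
    task = datacenter.hostFolder.AddStandaloneHost_Task(
        spec=cnxSpec, addConnected=True, license=licenseKey)
    WaitForTask(task)
    # The task returns a ComputeResource; return its single host
    return task.info.result.host[0]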
Adding the host becomes relatively easy using the AddHost function created above.
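For example (the argument names are assumed):

witnessHost = AddHost(datacenter, args.witnessAddress, args.witnessUser, args.witnessPassword)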
The process of configuring a vSAN cluster as either a Stretched Cluster or a 2 Node configuration can
be done upon initial creation, or after the vSAN cluster has been set up.
Reusing much of the code from the deployment recipe, a few more tasks are still required, including
setting up Fault Domains, designating a Witness Host, and selecting the Witness Host disks that will be used.
To set up Fault Domains, we can enumerate the hosts in the cluster, answer yes or no to choose which
Fault Domain each host is placed into, and save that information in lists.
preferedFd = args.preferdomain
secondaryFd = args.seconddomain
firstFdHosts = []
secondFdHosts = []
for host in hosts:
    if yes('Add host {} to preferred fault domain ? (yes/no)'.format(hostProps[host]['name'])):
        firstFdHosts.append(host)
for host in set(hosts) - set(firstFdHosts):
    if yes('Add host {} to second fault domain ? (yes/no)'.format(hostProps[host]['name'])):
        secondFdHosts.append(host)
faultDomainConfig = vim.VimClusterVSANStretchedClusterFaultDomainConfig(
    firstFdHosts=firstFdHosts,
    firstFdName=preferedFd,
    secondFdHosts=secondFdHosts,
    secondFdName=secondaryFd)
As in the previous recipe, only eligible disks can be used as vSAN devices. Putting these in a list will
allow them to be claimed at the time of cluster configuration.
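A hedged sketch of gathering the Witness Host's eligible disks into a disk mapping; using the smallest flash device as cache is an assumption for illustration:

results = witnessHost.configManager.vsanSystem.QueryDisksForVsan()
eligible = sorted([r.disk for r in results if r.state == 'eligible'],
                  key=lambda d: d.capacity.block * d.capacity.blockSize)
diskMapping = vim.vsan.host.DiskMapping(ssd=eligible[0], nonSsd=eligible[1:])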
Once hosts have been put into two Fault Domain arrays, and eligible disks have been determined for
the Witness host, reconfiguring the Cluster can occur.
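A hedged sketch of the reconfiguration, using the stretched cluster system managed object from the SDK's vsanapiutils helper; the MO key and method name follow the vSAN 6.2 SDK samples and are assumptions here:

vcMos = vsanapiutils.GetVsanVcMos(si._stub)
vscs = vcMos['vsan-stretched-cluster-system']
task = vscs.VSANVcConvertToStretchedCluster(
    cluster=cluster,
    faultDomainConfig=faultDomainConfig,
    witnessHost=witnessHost,
    preferredFd=preferedFd,
    diskMapping=diskMapping)
vsanapiutils.WaitForTasks([vsanapiutils.ConvertVsanTaskToVcTask(task, si._stub)], si)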
Recipe Summary
Just as with deploying vSAN in the first recipe, Stretched Cluster and 2 Node configurations can be
easily and repeatably automated with code.
The above code snippets show how easy it is to automate Stretched Cluster and 2 Node configurations
of vSAN using code consistently and repeatedly at both large and small scale.
The individual code snippets are available as a single Python script from the VMware Developer Center
at the following URL.
https://code.vmware.com/samples?id=1134
2.3 Performing a vSAN On-Disk Upgrade
The vSAN On-Disk format initially introduced in vSAN 5.5 is version 1.0. vSAN 6 introduced version
2.0, which brought about changes allowing for configurations such as All-Flash architectures,
Stretched Clusters, support of the vsanSparse snapshot format, and more. vSAN 6.2 introduced
version 3.0, which supports even more functionality: Deduplication and Compression as well as
Erasure Coding for All-Flash architectures, and features including software checksums and IOPS
limits for all architectures.
Depending on the configuration and state of a vSAN cluster, performing an On-Disk Format upgrade
can have different requirements. This is because the On-Disk format upgrade process essentially
evacuates data from each Disk Group in a host, removes and recreates the Disk Group in the new
format, and migrates data back, all in a rolling upgrade process.
This recipe will cover some of the things to consider when programmatically updating the On-Disk
format of vSAN.
Before attempting to perform a vSAN On-Disk upgrade, it is important to know what the current
format is, as well as the latest supported format for the ESXi build that vSAN cluster is running.
This is automatically visible within the vSphere Web Client from the Manage tab under Settings for a
vSphere cluster running vSAN.
Notice in the graphic that the Disk format version is 2.0, yet upgradeable to 3.0. Additionally, all the
disks, 6 of 6, are running the outdated version.
The vSphere Web Client easily shows this information, but requires a Virtualization Admin to manually
check the status of the On-Disk format for the cluster.
We’ll connect to the cluster, and determine the highest On-Disk Format supported by the cluster.
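A hedged sketch follows: fetch the vSAN managed objects and ask the upgrade system for the highest on-disk format version the cluster supports. The 'vsan-upgrade-systemex' key and method name follow the SDK's vsanapiutils helper and samples, and should be treated as assumptions:

import vsanapiutils
vcMos = vsanapiutils.GetVsanVcMos(si._stub)
upgradeSystem = vcMos['vsan-upgrade-systemex']
supportedVersion = upgradeSystem.RetrieveSupportedVsanFormatVersion(cluster)
print 'Highest supported on-disk format version: %d' % supportedVersion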
A function provides the ability to compare the existing On-Disk version to the latest supported
version.
We can gather each disk group’s member devices into diskMappings, then pass them into the
hasOlderVersionDisks function (sketched below) to determine whether an upgrade is necessary.
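A minimal sketch of such a comparison function; the property names follow vim.host.ScsiDisk.vsanDiskInfo and this is illustrative, not the SDK sample's exact implementation:

def hasOlderVersionDisks(diskMappings, supportedVersion):
    # Return True if any claimed disk reports an older on-disk format version
    for mapping in diskMappings:
        for disk in [mapping.ssd] + list(mapping.nonSsd):
            if disk.vsanDiskInfo.formatVersion < supportedVersion:
                return True
    return False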
In cases where a cluster is already at the highest supported level, there is obviously no need to perform
an upgrade. In cases where an upgrade is necessary, there are some additional tasks that need to be
performed, depending on the cluster’s configuration.
If vSAN is configured to automatically claim disks, an On-Disk format upgrade cannot occur. In the
vSphere Web Client, if this is the case, an error will occur.
A Virtualization Admin will have to change the disk claiming method from automatic to manual.
Having to make this change across more than one cluster can be a very time-consuming process. It is
better accomplished by having the upgrade script check for automatic disk claiming and change the
cluster’s disk claiming method to manual.
autoClaimChanged = False
if vsanConfig.defaultConfig.autoClaimStorage:
    print 'autoClaimStorage should be set to false before upgrading VSAN disks'
    autoClaimChanged = True
    vsanReconfigSpec = vim.VimVsanReconfigSpec(
        modify=True,
        vsanClusterConfig=vim.VsanClusterConfigInfo(
            defaultConfig=vim.VsanClusterConfigInfoHostDefaultInfo(
                autoClaimStorage=False)))
When an upgrade is initiated from the vSphere Web Client, a “preflight” check occurs. While this is
handled automatically by the Web Client, it must be done explicitly in a script.
If the preflight check reports any issues, they must be resolved beforehand. We’ll need to surface them
in our script so the Virtualization Admin can address them accordingly.
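A hedged sketch of the preflight check; the method name follows the vSAN 6.2 VsanUpgradeSystemEx managed object and is an assumption, as is the shape of the returned issues:

result = upgradeSystem.PerformUpgradePreflightCheckEx(cluster)
if result.issues:
    # Report each blocking issue so the admin can resolve it before retrying
    for issue in result.issues:
        print 'Preflight issue: %s' % issue.msg
    sys.exit(1)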
vSAN On-Disk format upgrades require the existing VM storage policies to be satisfied during the
upgrade process.
In a 3 node cluster, a Failures to Tolerate = 1 policy requires 3 nodes, so bringing a node offline to
perform an upgrade creates a situation of reduced redundancy.
This doesn’t mean that 3 node clusters can never be upgraded; it means they can only be upgraded
with reduced redundancy.
By default, the upgrade process does not allow reduced redundancy, and attempting to perform an
On-Disk format upgrade without sufficient spare resources will fail.
In cases where there are not enough vSAN resources to satisfy a VM storage policy, such as a 3
node cluster with FTT=1 using mirroring, a reduced redundancy flag must be set. There is no way to
accomplish this through the vSphere Web Client, and Virtualization Admins are required to perform
the upgrade from the Ruby vSphere Console (RVC). This process is detailed in KB 2113221 for the
Version 1 to Version 2 upgrade; the Version 3 upgrade is similar.
When performing an On-Disk format upgrade using a Python script, the reduced redundancy flag can
be set as part of initiating the upgrade.
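A hedged sketch: the method and parameter names follow the vSAN 6.2 VsanUpgradeSystemEx managed object and the SDK samples, and args.reducedredundancy is an assumed script flag:

task = upgradeSystem.PerformUpgradeEx(
    cluster=cluster,
    allowReducedRedundancy=args.reducedredundancy)
vsanapiutils.WaitForTasks([vsanapiutils.ConvertVsanTaskToVcTask(task, si._stub)], si)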
When vSAN 6.2 hosts continue to operate with the Version 2.0 On-Disk format, changing the
deduplication and compression setting to enabled will automatically initiate the upgrade process to
the Version 3.0 On-Disk format.
In other words, when upgrading from the Version 2.0 format to Version 3.0, deduplication and
compression may optionally be enabled at the same time. If an administrator instead chooses only to
upgrade to the Version 3.0 format, enabling deduplication and compression afterwards requires a
separate rolling upgrade.
This combined process is not explicitly exposed in the Web Client, but a script can easily accommodate
both upgrading and enabling deduplication and compression simultaneously.
Recipe Summary
In some cases, it may be significantly easier to execute a simple script that accomplishes multiple
tasks, such as upgrading the On-Disk format with reduced redundancy while enabling deduplication
and compression simultaneously, than to manually execute the equivalent tasks from the vSphere
Web Client.
The above code snippets show how easy it is to upgrade the On-Disk format of a vSAN cluster using a
simple script along with specific parameters like “--enabledc” and “--reduced-redundancy”.
The individual code snippets are available as a single Python script from the VMware Developer Center
at the following URL.
https://code.vmware.com/samples?id=1135
3. References
3.1 Additional Documentation
For more information about VMware vSAN, please visit the product pages at
http://www.vmware.com/products/virtual-san
Product Overview
Product Documentation
3.2 VMware Contact Information
For additional information or to purchase VMware vSAN, VMware’s global network of solutions
providers is ready to assist. If you would like to contact VMware directly, you can reach a sales
representative at 1-877-4VMWARE (650-475-5000 outside North America) or email
sdssales@vmware.com. When emailing, please include the state, country, and company name from
which you are inquiring.
This cookbook was put together using content from various resources from vSAN Engineering.
3.3 About the Author
Jase McCarty is a Staff Technical Marketing Architect at VMware with a focus on storage solutions. He
has been in the Information Technology field for over 25 years, with roles on both the customer and
vendor side. Jase has co-authored two books on VMware virtualization, routinely speaks at
technology-focused user group meetings, and has presented at VMworld and EMC World.
Follow Jase on Twitter: @jasemccarty