VxRail Technical FAQ
This VxRail Technical FAQ describes technical details related to VxRail features and
functionality and should be used as a companion to the VxRail General FAQ.
Table of Contents
GPU
Security
Networking
SmartFabric Services for VxRail
Deployment Options
VxRail satellite nodes
VxRail dynamic nodes
Dynamic nodes with external storage array as primary storage
VxRail Dynamic AppsON
Dynamic nodes with VMware vSAN cross-cluster capacity sharing as primary storage
VxRail with vSAN ESA
VCF on VxRail
2-node vSAN Cluster
Stretched Cluster
Customer-deployable VxRail
Ecosystem support
External storage
VxRail Management Pack for Aria Operations
Delivery Options
Integrated Rack
Sales
Licensing
Tools
Training
End of Sales Life (EOL)
End of Sales Life (EOL) for 14th Generation Nodes
Support Services
Deploy Services
Solutions
Competition
Question: What happens to support contracts that exceed the EOSS dates?
Answer: Once EOSS dates are coded, entitlements quoted past the EOSS date are
terminated, and the unused portion of the standard support contract quoted beyond
the EOSS date is credited back to the customer automatically.
VxRail Lot 9 Compliance & End of Sales Life (EOL) for VxRail E665/F/N in EMEA
Question: What is Lot 9, and what steps are taken to ensure VxRail is compliant?
Answer: The ErP Lot 9 regulation introduces requirements for servers with one or two
processor sockets to limit power consumption in the idle state and sets minimum
requirements for power supply efficiency. Starting January 1, 2024, the ErP Lot 9
regulation restricts the sale of Platinum PSUs and requires Titanium power
supplies in all VxRail products shipping into CE countries. More
information, including a list of affected countries, can be found here.
Question: How are the Platinum PSUs affected by the Lot 9 regulation?
Answer: Beginning November 6th, 2023, all Platinum PSU offerings will EOL across sales
and ordering tools, in affected EMEA CE countries only.
Answer: The Lot 9 requirements apply to VxRail nodes, not to APOS upgrades. If an
affected node was shipped before January 1, 2024, the customer can upgrade the
existing PSU with another Platinum PSU, at any time.
Answer: Service spares are unaffected. PSUs will be replaced with the same level of power
efficiency. If the PSU to be replaced is a Platinum PSU, the service part will be a
Platinum PSU, even after January 1, 2024.
Question: How are the VxRail E665/F/N models affected by the Lot 9 regulation?
Answer: There are no Titanium PSUs that are compatible with the VxRail E665/F/N.
Answer: The E665/F/N will EOL across EMEA sales and ordering tools on November 6th, 2023.
Question: Does the Lot 9 regulation affect the selling or shipment of Platinum PSUs or
VxRail E665/F/N in non-CE countries?
Answer: No, Titanium PSUs are not mandated in non-CE countries.
Answer: Platinum PSUs will continue to ship normally in non-CE countries (Americas/APJ,
and EMEA regions where CE is not required).
Answer: E665/F/N nodes will continue to ship normally in the Americas and APJ regions.
Question: How can I identify VxRail feature releases and patch releases?
Answer: Feature releases, such as VxRail 7.0.520, are VxRail releases that introduce new
capabilities and hardware support. Moving forward, release numbers that end in
zero identify feature releases, while release numbers that end in one through
nine identify patch releases. For example, a future patch release based on the
VxRail 7.0.520 feature release can have a release number of VxRail 7.0.52[1-9].
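Expressed as a quick sketch (an illustrative helper, not a Dell tool), the numbering convention looks like this:

```python
# Illustrative helper (not a Dell tool): classify a VxRail release number
# by the convention above, where a build number ending in zero marks a
# feature release and one ending in 1-9 marks a patch release.

def classify_release(version: str) -> str:
    last_digit = version.split(".")[-1][-1]
    return "feature" if last_digit == "0" else "patch"

assert classify_release("7.0.520") == "feature"  # feature release
assert classify_release("7.0.521") == "patch"    # patch of 7.0.520
```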
Question: What are the serviceability improvements that have been added to this
release?
Answer: The service request ticket creation capability in the VxRail Manager UI has been
enhanced to simplify the customer experience and speed up the time to resolve a
service request ticket. There is a new input field, Issue Type, that allows a user to
select from a set list which product area requires attention. Based on the issue
type, VxRail Manager automatically collects the relevant logs and packages them
into a bundle. When Dell Support picks up the ticket and requests log information,
the customer can send the prepared log bundle instead of gathering the logs
themselves. This feature helps capture log data while the issue may still be
present and reduces the turnaround time for ticket resolution by automating the
log collection.
Answer: VxRail event handling now includes part numbers for memory and disk/drive errors
and slot number information for battery errors. By adding this information, dial
home events can provide actionable data back to Dell Support or service providers
who can auto-dispatch parts in response.
Answer: VxRail event handling now supports scenarios where the VxRail Manager certificate
is within 30 days of expiring or has already expired. These scenarios trigger an
alarm, a vCenter Server event, and a dial home event that is sent to Dell
Support.
Question: Which VxRail 8.x features have been backported to VxRail 7.0.520?
Answer: Password management from VxRail Manager has been backported to VxRail
7.0.520. Introduced in VxRail 8.0.210, this feature streamlines password
management for iDRAC root and vCenter Server management accounts so that
password updates can be done in a single workflow from the UI or API.
Answer: The USB-based version of the node imaging management tool is available in
VxRail 7.x starting with VxRail 7.0.520. The USB option provides a solution for
users who do not want to connect a laptop to the local network to reimage a node,
or who are not familiar with using the VxRail API to perform the operation. The
USB option is also the only way to reimage a VD-4000 witness because it lacks an
iDRAC interface.
Question: What are the details of the reduced node image package?
Answer: The reduced node image package includes all the contents of the full node image
package minus the VxRail Manager and vCenter Server installation files. For
VxRail 7.0.520, the reduced node image package is less than 6GB compared to the
full package of almost 21GB.
Question: How can a user reimage a node with the smaller node image package?
Answer: Users can use the node image management utility or the VxRail API with a
selectable parameter to skip copying the VxRail Manager and vCenter Server
installation files when transferring the image to the target node. This use case
applies to customers who already have a full image in their repository; either
method can transfer the reduced image to the target node.
The other option is to download a reduced node image package from the Dell
Support website. Starting with VxRail 7.0.520, a reduced node image package is
posted along with the other release contents on the Dell Support website. This
package can be used to build the node image ISO for the USB-based version of
the node image management tool.
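For readers automating this over the API, here is a minimal, hypothetical sketch: the endpoint path and parameter name are illustrative only (they are not taken from the VxRail API documentation), so verify the actual request contract on the Dell Technologies Developer Portal.

```python
# Hypothetical sketch only: the endpoint path and the flag name below are
# illustrative, not the documented VxRail API. Verify the real contract at
# https://developer.dell.com/apis/5538/ before use.
import requests

VXM = "https://vxrail-manager.example.local"  # placeholder host

def transfer_reduced_image(session: requests.Session, node_serial: str) -> None:
    # The FAQ describes a selectable parameter that skips copying the
    # VxRail Manager and vCenter Server installation files; it is modeled
    # here as a boolean field in the request body.
    resp = session.post(
        f"{VXM}/rest/vxm/v1/node-image/transfer",  # illustrative path
        json={"node_serial_number": node_serial,
              "include_vxm_and_vcenter_files": False},
    )
    resp.raise_for_status()
```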
Question: What are the key features and highlights of this release?
Answer: VxRail 7.0.510 is a hardware-only release, introducing 16th generation VxRail
AMD-based all-flash and all-NVMe platforms with support for vSphere 7.0 Update
3o.
Answer: These new VxRail VE-6615 and VP-7625 nodes are powered by 4th Generation
AMD EPYC processors (known as Genoa) and include the following features:
• AMD 4th Generation EPYC processors, with up to 96 cores per socket, up to
two sockets total, representing a 50% increase in core count over 15G.
• The VE-6615 supports a single AMD EPYC processor with up to 84 cores. The
VP-7625 supports up to two AMD EPYC processors with up to 96 cores per
socket.
• These new platforms support both all-flash and all-NVMe storage options with
vSAN OSA.
• All-flash VE-6615 and VP-7625 support RI and MU SAS/vSAS/SATA drives in
sizes of up to 7.68TB. The all-flash VE-6615 can achieve up to 61.44 TB of total
storage per node, and the all-flash VP-7625 can achieve up to 161.28TB of
storage per node.
• All-NVMe VE-6615 and VP-7625 support RI NVMe drives in sizes of up to
15.36TB. The all-NVMe VE-6615 can achieve up to 122.88 TB of total storage
per node, and the all-NVMe VP-7625 can achieve up to 322.56TB of storage
per node.
• The VE-6615 supports up to 12 DIMMs of DDR5 memory, in DIMM sizes up to
256GB. The VP-7625 supports up to 24 DIMMs of DDR5 memory, in DIMM
sizes up to 128GB. A per node capacity of 3TB at speeds of up to 4800MT/s is
achievable with both the VE-6615 and VP-7625.
• The VP-7625 does not support use of a 256GB DIMM due to server thermal
capacity limitations.
• PCIe Gen 5, which provides additional PCIe lanes (up to 128 total) and double
the throughput of PCIe Gen 4.
• These platforms sport the new BOSS-N1, which brings two main improvements
over the BOSS-S2 card introduced on 15G. First, it has been upgraded to a
mirrored pair of 960GB M.2 drives. Second, the M.2 interface has been
upgraded to NVMe. It retains the hot-pluggability of the BOSS-S2, which, as a
reminder, refers to the ability to disconnect and replace a failed M.2 drive
without needing to power off and open the server.
• Support for up to six single-wide and two double-wide GPUs.
• These 16th generation VxRail AMD-based VE-6615 and VP-7625 platforms will
be supported for Greenfield deployments only at launch.
Question: Which storage type is supported with the VE-6615 and VP-7625?
Answer: Both all-flash and all-NVMe storage options are available with the VE-6615 and VP-
7625 platforms.
Answer: All-Flash is available on both the VE-6615 and VP-7625. NVMe storage is available
on the VE-6615 and is also available with the dual CPU configuration of the VP-
7625. NVMe is not supported on the single CPU configuration of the VP-7625.
Question: Can I upgrade from a Single to a Dual CPU configuration with the VP-7625
APOS?
Answer: No. Upgrading from a single to a dual CPU configuration is not supported APOS.
Question: Can I purchase VMware perpetual licensing with the VE-6615 or VP-7625 at
RTS?
Answer: No. Refer to the VxRail Ordering and Licensing Guide for up-to-date
licensing information.
Question: What about the rest of the VxRail portfolio? Are additional 16G platforms
forthcoming?
Answer: The VxRail platform strategies are reviewed on the 6 Month Roadmap.
Question: What are the key features of the 4th Generation AMD EPYC processor?
Answer: This new generation of processor delivers up to 96 Zen4 cores and twelve
4800MT/s memory channels per AMD processor. The introduction of PCIe Gen 5
provides twice the bandwidth of PCIe Gen 4.
Question: Can I upgrade the processors in my current VxRail cluster (14G or 15G
hardware) to 4th Gen AMD EPYC?
Answer: No. It is not possible to upgrade existing VxRail AMD nodes with 2nd or 3rd
Generation AMD processors to the new 4th Generation AMD EPYC processors.
Question: What are the memory configuration rules for this new AMD chipset?
Answer: Each processor socket can be configured with 4, 6, 8, 10 or 12 DIMMs of equal
size. For best performance, populate all 12 DIMM slots in each processor (1 DPC
only) to achieve a memory speed of 4800 MT/s. Mixed DIMMs are not supported.
Refer to the Memory Configuration Rules slide in the Technical Reference Deck
and Ordering Configurations guide for details.
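As a quick illustration of these rules, the sketch below (an informal helper based only on the statements above, not a Dell validation tool) checks a proposed per-socket DIMM population:

```python
# Informal sketch of the 16G AMD per-socket rules stated above: 4, 6, 8,
# 10, or 12 equal-size DIMMs, with all 12 slots (1 DPC) populated to
# reach 4800 MT/s. Mixed DIMM sizes are not supported.

def check_population(dimm_sizes_gb: list[int]) -> str:
    if len(set(dimm_sizes_gb)) > 1:
        return "invalid: mixed DIMMs are not supported"
    if len(dimm_sizes_gb) not in {4, 6, 8, 10, 12}:
        return "invalid: populate 4, 6, 8, 10, or 12 DIMMs per socket"
    if len(dimm_sizes_gb) == 12:
        return "valid: all channels populated, 4800 MT/s"
    return "valid: supported, but below peak memory bandwidth"

print(check_population([64] * 12))      # valid: all channels populated, 4800 MT/s
print(check_population([64, 64, 128]))  # invalid: mixed DIMMs are not supported
```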
Question: Does this release support all the features of the 7.0.480 release?
Answer: Yes, it supports all the features of the 7.0.480 release, with an update path to
vSphere 8.x planned for future release. Refer to the 6 Month Roadmap for the
latest information.
VxRail Design
Question: Are VxRail systems achieving six 9’s of availability?
Answer: VxRail 2- to 4-node clusters configured with N + 1 redundancy, and 4- to 16-node
clusters configured with N + 2 redundancy are designed for 99.9999% hardware
availability, which equates to less than 1 minute of unplanned downtime per year.
When used with additional included software features that provide further high
availability, like fault domains or stretched cluster, VxRail can achieve greater than
6 x 9’s availability at the per VM level.
Question: How does VxRail load balance storage when a node is added?
Answer: VxRail can rebalance storage, assuming there is available slack space to do so.
While DRS (if licensed) will handle moving VMs, vSAN will not rebalance data to
the drives of the newly added node unless a capacity drive has reached 80% full.
If any capacity drive in the cluster has reached 80% full, vSAN automatically
rebalances the cluster until the space available on all capacity drives is below the
80% threshold. You can also manually start a rebalance from the storage
perspective, which may be beneficial because its timing can be controlled. See
the vSAN documentation for additional details.
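As a minimal sketch of the 80% rule just described (an illustration only, not how vSAN is implemented):

```python
# Minimal sketch of the reactive rebalance rule above: vSAN OSA starts
# rebalancing when any capacity drive reaches 80% full and continues
# until every drive is back under the threshold.

def needs_rebalance(drive_used_fractions: list[float], threshold: float = 0.80) -> bool:
    return any(used >= threshold for used in drive_used_fractions)

print(needs_rebalance([0.55, 0.82, 0.61]))  # True: one drive is over 80%
print(needs_rebalance([0.55, 0.62, 0.61]))  # False: no rebalance triggered
```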
Question: What are the supported VxRail high availability configuration options when
using 3-node and 4-node deployments?
Answer: For vSAN OSA, a 3-node vSAN cluster configuration has three physical hosts,
and the data and witness components are distributed across all three nodes.
This configuration provides a lower entry point for customers; however, there are
trade-offs with respect to functionality and data protection. This configuration can
only support failures to tolerate (FTT) = 1 with RAID-1, and it does not have spare
resources to self-heal. If a customer wants the resiliency of automatic self-healing
from a component failure, then a 4-node minimum is required. In a four-node
configuration, should one node fail, the minimum three nodes required to meet
FTT = 1 with RAID-1 remain. A four-node configuration also supports FTT = 1
with RAID-5 (erasure coding); however, the self-healing resilience is lost, because
four nodes is the minimum required to support RAID-5 (erasure coding). Five
nodes would be required to retain this self-healing resilience.
Answer: For vSAN ESA, customers should use the RAID-5 storage policy for better space
efficiency with performance. vSAN ESA RAID-5 can be configured with a 2+1
scheme on 3- or 4-node clusters.
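The host-count arithmetic above can be summarized in a short sketch (RAID-1 mirroring with FTT = n needs 2n + 1 hosts; one extra host provides self-healing headroom):

```python
# Worked sketch of the vSAN OSA host-count reasoning above: RAID-1 with
# failures to tolerate (FTT) = n needs 2n + 1 hosts, plus one more host
# to retain automatic self-healing after a failure.

def min_hosts_raid1(ftt: int, self_healing: bool = False) -> int:
    return 2 * ftt + 1 + (1 if self_healing else 0)

print(min_hosts_raid1(1))                     # 3: the 3-node entry point
print(min_hosts_raid1(1, self_healing=True))  # 4: minimum to self-heal
# RAID-5 erasure coding (OSA, 3+1) needs 4 hosts, or 5 with self-healing;
# vSAN ESA RAID-5 can use a 2+1 scheme on 3- or 4-node clusters.
```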
Question: Which VxRail software components are not included in a full stack update?
Answer: VxRail LCM ensures a continuously validated version set of VMware software,
VxRail software, drivers, and firmware, but does not include GPU drivers and FC
HBA firmware. However, a user can consolidate GPU drivers and FC HBA
firmware into the same cluster update for a faster update. Customer-installable
software such as vRealize Log Insight and RecoverPoint for VMs is also updated
separately; refer to SolVe procedures and VMware documentation for update
instructions. Some update paths are only available between certain VxRail software
versions; refer to the target version release notes to ensure valid update paths.
VMware Technology
Question: Do VxRail systems offer data reduction capabilities?
Answer: Yes. Only all-flash VxRail system configurations offer a variety of data efficiency
services, including deduplication, compression, and RAID 5/6 erasure coding.
Hybrid configurations are not supported. These data reduction capabilities require
vSAN Advanced, Enterprise, or Enterprise Plus licensing. Customers with a
subscription license will have the vSAN Enterprise license included in the package.
Question: What is the best practice for when to enable data efficiency services?
Answer: If customers plan to use deduplication and compression, or compression only, it is
best to activate them at the time of deployment, because enabling them later
requires each disk group on each host to be rebuilt. See VMware documentation
for additional guidance.
Question: Are data reduction services usable with both vSphere and vSAN encryption?
Answer: There is no impact to vSAN encryption when using data services, including
deduplication and compression, as encryption occurs after dedupe and
compression. vSphere Encryption will significantly limit any benefits of dedupe and
compression, as in this instance, encryption occurs before dedupe and
compression.
Question: Is any of the VMware software running on the VxRail node transferrable?
Answer: If the VMware software was purchased via Bring Your Own License/Subscription
(BYOS) option from Broadcom, the software is transferrable. However, eOEM
licenses are non-transferrable from the hardware they were purchased with.
Question: What are the recommended VMware configuration limits VxRail can support?
Answer: Refer to the VMware Configuration Limits to obtain information on ESXi host
maximums and other details.
Question: How does vSAN ESA provide better performance for data services?
Answer: The data services are applied at the time of ingest, which avoids the IO
amplification penalties of compressing and decompressing data when processing
between the cache and capacity layers in vSAN OSA. With vSAN ESA,
data is immediately compressed and, from then on, it is designed to process
compressed data down the stack, which reduces CPU cycles and network
bandwidth usage.
Answer: When encryption is enabled, the compressed data is encrypted, which lessens
the impact to CPU cycles and network bandwidth usage. vSAN ESA also avoids
the need to decrypt and re-encrypt data when it is passed between the cache and
capacity layers, which is required in vSAN OSA.
Question: How does the introduction of vSAN ESA impact the value of vSAN OSA?
Answer: Both the vSAN ESA and OSA architectures co-exist in vSAN 8.0. The flexibility of
two architectures allows customers to use vSAN in even more use cases.
There are many applications that still perform well using vSAN OSA, and there
may be applications that are not yet qualified to run on ESXi 8.0.
For customers with no near-future plans to refresh hardware, there is no rush to
switch over to vSAN ESA.
For customers who are considering a tech refresh, it can make sense to position
vSAN ESA to begin their exploration into identifying benefits of running their
applications in this new architecture.
Question: Are there vSAN OSA features that vSAN ESA does not support?
Answer: vSAN ESA does not support the following features:
• Granular storage policies per VMDK. ESA policies are applied at the VM level.
• Deduplication
Question: Do the scalable, high performance native snapshots in vSAN ESA have any
negative impact to existing tools that use VMware snapshots?
Answer: No. This new snapshot architecture does not change the way in which 3rd-party
VADP backup solutions, SRM, or vSphere Replication interact with snapshots.
These solutions should all see improved performance.
Question: Doesn’t all mixed-use NVMe make vSAN ESA expensive? Will it deliver
sufficient additional performance to justify extra cost of mixed-use NVMe?
Answer: The changes in ESA will impact how storage is consumed. ESA with erasure
coding is expected to perform the same as, if not better than, mirroring on OSA.
This delivers significant capacity savings, in addition to increased capacity savings
from improved compression (which is part of the default storage policy) and
reduced acquisition costs, as no cache drives are needed.
System Management
Question: How are the VxRail systems managed?
Answer: All VxRail hardware maintenance and LCM activities can be managed from within
vCenter with VxRail Manager. For day-to-day VM management, customers manage
the VMware stack on VxRail directly from the vSphere Web Client.
Question: Is there a management interface that ties in all VMware and storage
management into one portal across all VxRail clusters a customer might
have?
Answer: vRealize Automation (optional) allows for management and orchestration of
workloads across VxRail clusters. It also provides a unified service catalog that
gives consumers an App Store ordering experience to make requests from a
personalized collection of application and infrastructure services. All physical
system management requires VxRail Manager.
Lifecycle Management
Question: What is synchronous release (commonly referred to as simultaneous
shipment or SimShip)?
Answer: There is an agreement between Dell VxRail and VMware that for every express
patch, quarterly patch, and major update of ESXi and vSAN software, VxRail will
deliver a supporting software release within 30 days of VMware GA. Best effort is
given to express patches to deliver even more quickly (sometimes in a few
business days).This objective is to provide customers confidence that they can
invest in VxRail while knowing they can quickly reap the benefits of the latest
software features and promptly address security vulnerabilities identified and fixed
by VMware. Refer to this synchronous release commitment KB article for more
information.
Question: What are some common things that would cause a delay to the agreed upon
commitment?
Answer: Holidays, factory shutdowns, and most often engineering findings during validation
might impact the 30-day commitment. Rather than release against a software
version with critical issues still present, engineering may choose to defer to a
subsequent software release/version with proper fixes and often assists our
partners to deliver those fixes faster. You can reach out to your regional Storage
Center of Competence (CoC) Product Line Manager (PLM) to get updates when
there is a delay.
Question: What are the different elements in VxRail LCM update process that makes it
unique and differentiated for customers?
Answer: Refer to the VxRail Techbook for a detailed explanation of the update process, its
features, and enhancements introduced over time. For a presentable overview of
the VxRail LCM experience, refer to the VxRail Customer Presentation or the
VxRail Technical Overview Presentation.
Answer: Yes, but any request to do so must be verified with VxRail engineering on a case-
by-case basis. Refer to KB article 000020460 for additional guidance.
Question: Should customers use software other than VxRail Manager to perform
updates?
Answer: No, VxRail Manager is the sole source for VxRail lifecycle management, cluster
compatibility, software updates, and version control. VMware software tools such
as vSAN Config Assist and Updates, vSphere Update Manager (VUM) or Dell EMC
OpenManage are not supported for performing VxRail updates.
Question: What are the recent improvements made to reduce cluster update failures
and increase efficiency of performing a cluster update?
Answer: In 7.0.480, there are improvements to the user experience for the LCM pre-check
and the update advisor report for unconnected clusters. Automation capabilities
remove the need for users to upload the LCM pre-checks file via the VxRail
Manager CLI. The VxRail Manager UI has been enhanced to allow users to upload
the LCM pre-checks file and installer metadata file to generate the update advisor
report, which embeds the LCM pre-check report. VxRail 7.0.480 also introduced the
capability for users to optionally reboot nodes in sequential order to further verify
the nodes are in good standing before a cluster update.
Question: How does vLCM compatibility impact the VxRail advantage in LCM over vSAN
ReadyNodes?
Answer: At its core, the VxRail advantage remains. VxRail's Continuously Validated States
are what provide the operational simplicity and certainty our VxRail users value to
confidently evolve their clusters through hardware and software changes over time.
The practice of Continuously Validated States helps offload customers' IT
resources and decision-making responsibilities. vLCM compatibility changes how
the cluster update is executed. However, it is the planning and preparation LCM
features that differentiate VxRail from vSAN ReadyNodes: a VxRail-driven
experience vs. a customer-driven experience.
Answer: vLCM compatibility can also add to the VxRail advantage, as its process can now
be more clearly differentiated from that of vSAN ReadyNodes. During baseline
image creation, a vSAN ReadyNode user would need to go through several manual
steps to build the image: deploying and configuring the hardware support manager
plugin to vLCM, deploying the driver and firmware depot, identifying each
component firmware and driver package in the stack, exporting the packages to
vCenter Server, and creating the cluster profile to establish the baseline image. For
a VxRail user, VxRail already has the baseline image in the form of the
Continuously Validated State. It's a 3-step wizard. The VxRail Manager VM acts as
the hardware support manager plugin and can automatically port the Continuously
Validated State already on its VM into the vLCM framework. Similarly, building the
desired state image is just as manual a process for a vSAN ReadyNode user,
while the VxRail user has a much more streamlined, automated process because
of Continuously Validated States. The use of vLCM APIs in VxRail's implementation
of vLCM compatibility allows the user experience to be an automated one within
VxRail Manager.
Question: Can the NVIDIA GPU VIB be added to the cluster image?
Answer: Users can include the NVIDIA GPU VIB when customizing their cluster update.
Customers are still responsible for acquiring the files and checking compatibility,
as this is not part of the VxRail Continuously Validated State. Clusters running
VxRail 7.0.350 or later support this feature with vLCM mode enabled. Starting with
VxRail 7.0.450, clusters using legacy LCM mode also have this feature.
Question: What considerations should be taken before using the sequential node
reboot feature?
Answer: The node reboot sequence can be scheduled to run at a later time.
Answer: Supported cluster types are standard (3+ nodes), dynamic node, and stretched
clusters.
Answer: The node must have a vSphere license that supports vSphere DRS. Unlike a
VxRail cluster update, VxRail cannot temporarily enable DRS for nodes that are not
licensed for it.
Question: How does VxRail HCI System Software provide multi-cluster management
capabilities for CloudIQ users?
Answer: A microservice, called adaptive data collector, runs on VxRail HCI System Software
to aggregate metrics from the vSAN cluster and VxRail system. The metrics are
packaged and sent to the VxRail repository in the Dell Technologies cloud via the
connectivity agent. Within the CloudIQ platform in the Dell Technologies cloud,
infrastructure machine learning is used to produce reporting and insight to enable
users to improve serviceability and operational efficiencies. LCM operations are
available via the web portal and the operation requests are sent to the clusters via
the same connectivity agent so that the tasks are executed locally by VxRail HCI
System Software.
Question: What else should I know about the cluster update operation?
Answer: Running the cluster update operation first requires setup work. Customers need to
configure role-based access control and store VxRail infrastructure credentials for
cluster updates. Role-based access control allows a customer to permit select
individuals to perform lifecycle management operations by leveraging roles and
privileges. VxRail infrastructure credentials management is a supporting feature
that further streamlines cluster updates at scale. Like a cluster update via VxRail
Manager, root account credentials for vCenter Server, Platform Services Controller,
and VxRail Manager are required. CloudIQ users can save credentials at the initial
setup and they can automatically populate during a cluster update operation. This
benefit is magnified when performing multi-cluster updates.
Answer: CloudIQ initiates the cluster update operation but the actual update operation is
performed locally on the cluster itself. CloudIQ is responsible for tracking and
reporting the operation.
Answer: Cluster update is only supported for standard clusters (3 or more nodes), dynamic
node clusters, and management clusters for satellite nodes. It is not available for 2-
node clusters, stretched cluster configurations, and satellite nodes.
Question: Is this available for dark sites that have no access to the internet?
Answer: VxRail uses a connectivity agent for secure data transfer and requires an internet
connection. VxRail clusters that do not have internet access will not be able to use
CloudIQ.
Question: Do we collect customer data and is the customer cluster data secure?
Answer: Dell Technologies does not collect customer data through CloudIQ.
While the data collector service does aggregate machine data relative to the
cluster, software, hardware topology, and performance, users can be assured that
there is no customer or personal data collected.
The Dell Technologies remote connectivity mechanism ensures that data transfer
between the customer site and Dell Technologies is secure. For more information,
refer to this VxRail security white paper.
Though the customer cluster metadata is not anonymized when it is stored in the
data lake, the data cannot be mapped to a customer account without Dell
Technologies support services. If the customer does have concerns that the
topology data may contain sensitive VM names, they have the option of not using
the multi-cluster management functionality by turning off the collection service.
Access to the cluster metadata is restricted to the VxRail engineering team.
Question: How are these multi-cluster management features in VxRail HCI System
Software licensed for use in CloudIQ?
Answer: Except for the cluster update feature, the capability set is licensed as part of VxRail
HCI System Software which comes standard for every VxRail node. To enable the
cluster update feature, there is an add-on license on top of the standard software
license which is called VxRail HCI System Software SaaS active multi-cluster
management.
Question: Can a customer evaluate cluster update feature in their own environment
before purchasing the add-on license?
Answer: Yes, trial licenses are available for the customer to evaluate the add-on license
functionality on a cluster for a limited period of time. Their sales account
representative would need to submit an RPQ. Contact the VxRail Product
Management team to understand the restrictions that come with the evaluation
license.
Question: What is the process of ordering and applying the add-on license?
Answer: The VxRail HCI System Software add-on license is available for purchase via Dell
sales tools. The add-on license is applied on a per-node basis. In order to execute
a cluster update from CloudIQ, all nodes in the cluster must have this add-on
license. The licenses are associated with the service tags of each node at the time
of order, and the license entitlements for each node are stored internally at Dell.
CloudIQ gathers information from this database to enable the appropriate
functionality for each node.
The add-on license is not transferable to another VxRail node. Once the node is
entitled with the add-on license, it cannot be disassociated from the node.
Question: When purchasing the add-on license after point of sale, which term-based
option should I choose?
Answer: The term-based option should align with the remaining term length of the hardware
support contract for the node, rounded up to the next year. One of the tools to
find this information is https://quality.dell.com/search/. For example, if there are 30
months remaining in the support contract, select the 3-year term add-on license.
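The rounding rule in that example is simple ceiling arithmetic:

```python
# The rounding from the example above: divide the remaining support
# months by 12 and round up to choose the add-on license term.
import math

def addon_term_years(months_remaining: int) -> int:
    return math.ceil(months_remaining / 12)

print(addon_term_years(30))  # 3 -> select the 3-year term add-on license
```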
Question: What VxRail functionality is available to a cluster if the nodes within have
mixed software licenses?
Answer: Unless all nodes in the VxRail cluster have the add-on licenses, CloudIQ defaults to
the functionality provided by the standard VxRail HCI System Software license for
that cluster.
RESTful API
Question: What are the VxRail RESTful API capabilities?
Answer: The VxRail API provides customers and partners/system integrators a full set of
capabilities to automate “Day 1” (cluster deployment), “Day 2” (cluster operations
and LCM updates) and collection of system information. The latest API
documentation is available at the Dell Technologies Developer Portal:
https://developer.dell.com/apis/5538/.
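As a minimal illustration of consuming the API with Python, the sketch below assumes the v1 system information endpoint published on the developer portal; verify the exact path, authentication, and payload there before relying on it. The host and credentials are placeholders.

```python
# Minimal sketch of a VxRail API call with Python 'requests'. It assumes
# the v1 system endpoint from the developer portal; the host name and
# credentials below are placeholders for your environment.
import requests

VXM = "https://vxrail-manager.example.local"

resp = requests.get(
    f"{VXM}/rest/vxm/v1/system",
    auth=("administrator@vsphere.local", "changeme"),  # placeholder creds
    verify=True,  # keep TLS certificate verification on in production
)
resp.raise_for_status()
print(resp.json())  # e.g., installed VxRail software version details
```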
Question: Does VxRail support cover custom-built automation (e.g., PowerShell scripts,
Ansible playbooks) using VxRail API?
Answer: VxRail support covers the public API built into VxRail (VxRail API), the official
VxRail API PowerShell Modules, and VxRail script packages created by VxRail
Engineering available for download from the Dell Technologies Support site.
Support for any custom-built automation leveraging VxRail API should be provided
by the party developing and implementing this solution for the Customer (e.g., Dell
Services/Consulting, VMware PSO, 3rd party system integrator, Customer) – it is
not covered by VxRail support.
Answer: VxRail 7.0.010 offers an API-driven deployment option for a VxRail system as a
part of public VxRail API. Like the GUI-driven deployment, the use of this API
requires professional services engagement from either Dell Services or an
authorized/certified partner. With VxRail 7.0.240 or later, this requirement can be
waved via RPQ.
Question: VxRail can be deployed via a “Day 1” REST API. Can customers use this, and
is there an RPQ required?
Answer: “Day 1” cluster deployment capabilities were introduced in VxRail API in VxRail
7.0.010. API-driven deployment is another way of deploying the VxRail cluster,
providing customers with more choice. The deployment restrictions are the same
as for the standard, GUI-driven deployment. API-driven deployment does not
remove the need for professional services to provide customers with the best
experience (the same applies to using this API from the VxRail API PowerShell
Modules and Ansible Modules). Dell Technologies and certified partners deliver
professional services for the cluster deployment.
For more information about the customer-deployable option and requirements,
please check the following section of this Technical FAQ: Customer-Deployable
VxRail.
VxRail Hardware
VxRail on PowerEdge Servers
Question: Which VxRail nodes are available on 16th Generation PowerEdge servers?
Answer: The Intel-based VxRail nodes available on 16G include the VE-660 and VP-760,
based on the PowerEdge R660 and R760 platforms. Note that VxRail platform
model numbers are now aligned to PowerEdge model numbers to easily identify
the underlying parent platform. The storage-type suffix has been removed
from the platform model number; the storage type can be selected in the ordering
path.
Question: If a customer purchases VxRail nodes without TPM, can it be added APOS?
Answer: Yes. However, the VxRail APOS ordering path does not contain a TPM part (the
VxRail APOS component list is not an exhaustive list) so it is recommended to use
the PowerEdge APOS part for TPM in this situation.
Platform models
VD-4000
Question: Does a VD-4000 satellite node provide RAID or storage device redundancy?
Answer: The VD-4000 does not support a PERC, and therefore, there is no storage
redundancy. It is an army of one. Note that PERCs are optional, and not a
requirement, for any VxRail to deploy as a satellite node.
Question: My customer has 1GbE switches. Can they deploy VD-4000 in 1GbE
environments?
Answer: In a 2-node configuration, the 10GbE or 25GbE ports can be connected back to
back to handle vSAN traffic. SFP transceivers can be used to auto-negotiate all
other traffic down to 1GbE.
Processors
Question: What processors are available on VxRail?
Answer: 4th Generation Intel Xeon Scalable processors, single or dual, from 8 to 56 cores
each.
3rd Generation Intel Xeon Scalable processors, single or dual, from 8 to 40 cores
each.
Single 2nd or 3rd Generation AMD EPYC processors with up to 64 cores.
Intel and AMD processors cannot be placed in the same vSphere cluster.
Drives
Question: What cache drive options are available?
Answer: In order of overall performance, from highest to lowest, the following cache drive
types are available: Optane, NVMe, MU NVMe, WI SAS, and MU SAS.
Answer: Generally, higher-performing drives cost more; however, the performance gains
from NVMe relative to the increased cost of the overall solution can make them
very attractive. Note that pricing changes frequently.
Question: Will larger cache drives have a greater portion of the drive usable for write
cache?
Answer: For clusters running vSAN OSA, the write cache buffer size is 600GB, though
larger capacity drives will extend drive life. vSAN 8.0 OSA increases the buffer
capacity to 1.6TB for all-flash clusters.
Question: Can a customer re-use drives from a decommissioned node in a newer node?
Answer: No, this is not supported.
Question: How does the implementation of 24Gbps SAS drives affect VxRail clusters?
Answer: The 24Gbps drives have a 14G VxRail software dependency of 7.0.370 or newer,
and a 15G VxRail software dependency of 7.0.405 or newer. Customers that do not
update to these software versions will not be able to add these drives or nodes with
these drives to their clusters.
Answer: The drive industry is shifting to standardize production of 24Gbps SAS drives,
hence the decision to EOL 12Gbps and replace them with 24Gbps variants. There
will not be 24Gbps SAS WI drives on offer as the industry has shifted to NVMe for
this segment. In situations where this changeover would introduce mixed cache
speeds in a node or cluster, know that this is permitted, but adjust performance
expectations to that of the slowest cache drive. See the Mixing disk guidelines in a
node for VxRail slide in the Technical Reference Deck for additional information.
Answer: VxRail 15G and 16G leverage 12Gbps SAS controllers, so no performance gains
from these faster drives should be expected. Customers whose performance needs
exceed this limitation should be encouraged to explore NVMe configurations.
Question: My customer is ready to buy, has 10GbE networking, but plans on upgrading
to 25GbE. How do I best position them for this?
Answer: 25GbE SFP28 is compatible with 10GbE SFP+, and will negotiate down to 10GbE,
with the correct optics. This would enable your customer to refresh their VxRail
environment today, configuring them with 25GbE network cards connected to their
existing 10GbE SFP+ network switches. Then in the future, upgrade the switches
to 25GbE, and gain the additional bandwidth. This does not apply to 10GbE BaseT.
Answer: The inverse is also true. They can upgrade the switches to 25GbE first. The 10GbE
SFP+ network cards in their VxRail node can use this new switch fabric, but at the
slower 10GbE speed.
using VxRail LCM or vLCM. However, customers remain responsible for testing
and validation as the FC HBA is still not part of the Continuously Validated State
provided by VxRail. In addition, customers are responsible for managing,
upgrading, and supporting their external storage arrays.
Answer: Customers may install VM/VIB/Drivers to operationalize the use of the external
storage as required.
Question: What are the considerations when using 1GbE networking on VxRail?
Answer: As the network is the backplane for all vSAN storage traffic, the reduced bandwidth
impacts performance and scaling. Because of this, the following limitations are imposed:
• Single processor configurations only
• Maximum cluster size of 8 nodes
• Hybrid configurations only; all-flash or NVMe nodes are not supported
• Four network ports required on each node
• Requires the use of four 10GbE BaseT ports, which will negotiate down to 1GbE.
• With today's increasingly powerful and dense hardware, some customers'
needs may be met with a heavily configured 2-node cluster.
Memory
Different processor architectures have different memory rules and configurations to achieve
optimal performance. These are summarized here, and covered in more detail in the Technical
Reference deck and Orderability Guide.
Question: What are the memory rules for Ice Lake / Intel 3rd Generation?
Answer: Eight memory channels, with support for two DIMMs per memory channel. For
maximum 3200 MT/s performance, populate all channels, e.g., 8 or 16 DIMMs per
processor. Populating with 4 DIMMs is supported but provides reduced memory
performance. Capacities range from 64GB to 2TB per processor.
Answer: Near-balanced memory configurations, where mixed DIMM capacities are used to
more closely match requirements, are supported, providing 384GB, 640GB, and
768GB per-processor options.
Answer: Support for Persistent Memory 200 Series, with capacities ranging from 256GB to
4TB per processor depending on mode.
Answer: Note that RDIMMs and LRDIMMs cannot be mixed. Persistent Memory can mix
with either.
Question: What are the memory rules for AMD 2nd or 3rd Gen EPYC?
Answer: Eight memory channels, with support for two DIMMs per memory channel (DPC).
For best performance, configure one DIMM per memory channel for a maximum
of 8 DIMMs, resulting in 3200 MT/s bandwidth. For maximum capacity, configure
two DIMMs per memory channel for a maximum of 16 DIMMs, doubling capacity
but at a reduced bandwidth of 2933 MT/s. Capacities range from 64GB to 2TB per
processor.
Answer: The E665 has a maximum capacity of 1TB as it is not configurable with the 128GB
LRDIMM. The E665N is limited to eight 64GB DIMMs for a maximum of 512GB.
Answer: Configuring with 4 DIMMs is supported but recommended only with 32 cores or
fewer.
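The capacity/speed trade-off above reduces to simple arithmetic (a sketch for illustration, covering only the rules stated here):

```python
# Illustrative arithmetic for the 2nd/3rd Gen AMD EPYC rules above:
# eight channels, one or two DIMMs per channel (DPC); 1 DPC runs at
# 3200 MT/s, 2 DPC doubles capacity at a reduced 2933 MT/s.

def amd_memory(dimm_size_gb: int, dpc: int) -> tuple[int, int]:
    """Return (capacity in GB per processor, speed in MT/s)."""
    channels = 8
    speed_mts = 3200 if dpc == 1 else 2933
    return dimm_size_gb * channels * dpc, speed_mts

print(amd_memory(64, 1))   # (512, 3200): e.g., the E665N maximum
print(amd_memory(128, 2))  # (2048, 2933): 2TB per processor maximum
```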
Question: What should I know about Intel Optane Persistent Memory (PMem) on
VxRail?
Answer: PMem is Intel’s storage class memory product, and is used in combination with
traditional DRAM. It can be used in Memory Mode to provide a larger amount of
system memory, up to 4TB per processor (3TB on P580N). It can be used in
Application Mode to provide non-volatile RAM or block storage with RAM like
performance. See the Technical Reference deck and DfD - Intel Optane Persistent
Memory for additional details.
Answer: Only supported with 2nd or 3rd Gen Intel Xeon Scalable processors.
Answer: Only available at POS; not available via APOS.
Answer: vSphere Memory Monitoring and Remediation (VMMR) can be used to
troubleshoot memory bottlenecks between system memory and Intel Optane
Persistent Memory at the host and VM level.
Question: Does VxRail support PMem in both memory mode and app-direct?
Answer: Both Memory Mode and App-Direct are supported, but only in particular
configurations. These are documented in the hardware-config spreadsheet.
Answer: Memory Mode and App-Direct mode cannot be used at the same time, only one
mode or the other can be used.
GPU
Graphics processing units (GPUs) are compute accelerators that are beneficial to all types of
VDI environments and AI/ML data science workloads. The VP and V Series continue to be the
primary platforms for GPUs in the VxRail family. They support the broadest choice of GPUs
and the most GPUs per node. Some of these GPUs are available on other VxRail platforms.
Security
Question: What are DISA STIGs, and do I need them?
Answer: Defense Information Systems Agency (DISA) Security Technical Implementation
Guides (STIGs) are the configuration standards for Dept of Defense (DOD)
Information Assurance (IA) and IA-enabled devices/systems. The STIGs contain
technical guidance to “lock down” information systems/software that might
otherwise be vulnerable to a malicious computer attack. To receive Approval to
Operate (ATO), a VxRail customer must first lock down (or harden) in accordance
with applicable DISA STIGs.
Question: Does VxRail provide DISA STIG compliant hardening guidelines and scripts?
Answer: Yes, VxRail provides the VxRail STIG Hardening Package to harden VxRail
systems to comply with DISA STIG requirements, in support of the NIST
Cybersecurity Framework. In addition, VxRail provides security configuration
guidance for protecting the system post-deployment.
Answer: The VxRail STIG Hardening Package includes scripts and the VxRail STIG
Hardening Guide provides manual steps to harden VxRail systems in compliance
with relevant Department of Defense (DoD) Security Technical Implementation
Guidelines (STIG) requirements. The package supports standard VxRail clusters
running 7.0.131 or later.
Question: Can customers pay for Dell to help with the VxRail STIG hardening?
Answer: Yes, there is an optional ProDeploy for VxRail Security STIG Hardening Add-on
offer available in the sales tools. Note – the STIG add-on offer is not available in all
regions and is not currently available for VCF on VxRail.
Question: Where can I find detailed information about VxRail security design and
assurances?
Answer: Please see the VxRail Comprehensive Security by Design white paper that covers
best practices, integrated and optional security features, and proven techniques.
Answer: Every VxRail provides the highest levels of security to enable customers to build
and sustain a compliant and cost-effective cybersecurity solution for Federal,
financial services, healthcare, cloud computing, and other industry sectors.
Question: What are the differences between vSAN encryption and virtual machine
encryption?
Answer: Virtual machine encryption, also known as vSphere Encryption, is enabled on a
per-VM basis, whereas vSAN encryption is enabled for the entire cluster datastore.
vSAN data-at-rest encryption is the better option for concerns about media theft
and allows data reduction to be applied. VM Encryption is the better option for
concerns about a rogue administrator but eliminates the benefit of data reduction
because it randomizes the data. Both forms of encryption require an external KMS.
Read https://kb.vmware.com/s/article/2148947 for more details.
Question: Would a customer ever utilize both vSphere and vSAN encryption?
Answer: There are a few scenarios where customers may choose to encrypt some critical
VMs with vSphere Encryption for the benefits provided above, primarily to protect
against network intrusion or rogue administrator. Using both encryption methods
will increase CPU overhead as now data is encrypted (and decrypted) twice.
Question: Is a Key Management Server (KMS) required when using encryption with
VxRail?
Answer: Yes. A KMIP-compliant KMS is required for either vSphere or vSAN encryption.
vSphere Native Key Provider, HyTrust, or any other vSphere-compatible KMS is
recommended. The KMS should never be hosted on the same cluster for which it
manages the encryption keys.
Answer: For vSAN data-in-transit encryption, a KMS is not needed.
Question: Does VxRail transmit management traffic securely over the network?
Answer: Yes, VxRail requires management traffic to be transmitted over HTTPS using TLS
1.2. VxRail Manager, vCenter, and iDRAC all disable the HTTP interface, thus
preventing management traffic from being transmitted in the clear.
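To illustrate, here is a short sketch (the host name is a placeholder) that confirms an HTTPS management endpoint negotiates TLS 1.2 or later:

```python
# Minimal sketch: confirm that a management endpoint (VxRail Manager,
# vCenter, or iDRAC) negotiates TLS 1.2 or later. The host below is a
# placeholder for your environment.
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocols
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g., 'TLSv1.2' or 'TLSv1.3'

print(negotiated_tls_version("vxrail-manager.example.local"))
```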
Question: How do VxRail features align with the NIST Cybersecurity Framework?
Answer: Refer to the VxRail Comprehensive Security by Design paper for details.
Networking
Question: Where can I find detailed information about VxRail network configuration?
Answer: Refer to the VxRail Network Planning Guide.
Question: What use cases are suitable for link aggregation on VxRail?
Answer: Based on performance testing, read-intensive applications with large block sizes
such as video streaming and Oracle SAS benefit most from link aggregation.
Question: What are the requirements for configuring VxRail system traffic across two
VDSs?
Answer: A minimum of 4 physical Ethernet ports is required. They can be from one NIC or
spread across two NICs for NIC-level redundancy. You can use the Configuration
Portal to build the JSON file. In this release, this feature can only be deployed using
VxRail API.
Answer: For Day 1 deployment, it can be implemented on a newly created VxRail-provided
VDS. It can also be implemented on a newly or existing customer-managed VDS.
Answer: There is also support for a Day 2 conversion from one VDS to two VDSs.
Question: Do VxRail systems configured for SFP+, SFP28 interfaces ship with
compatible cables or transceivers?
Answer: No, VxRail systems do not ship with SFP+/SFP28 cables or transceivers – they are
specific to each switch type and are best provided separately so that they match. If
the customer is using SFP+/SFP28/Twinax/optic cables and transceivers, purchase
those compliant to the switch vendor specifications and specifications for the Intel
NICs on VxRail. Additional information on optics and what is included in the
ordering path can be found in the Ordering and Licensing Guide.
interconnect and uplinks to the customer network. See the latest VxRail Networking
Guide for further details on supported VxRail networking configuration options.
Question: What are the options to deploy a network fabric based on SmartFabric
Services?
Answer: Any type of network planning, design or configuration effort is out of scope for a
VxRail deployment services engagement. Professional services are recommended,
but not required, for a deployment of a SmartFabric-based network. In order to
provide a predictable, high-quality customer experience for the entire Dell solution,
professional services are encouraged for both VxRail and SmartFabric. Dell
publishes a deployment guide for SmartFabric Services for customers who opt not
to utilize Dell professional services.
Question: Where can I find more information about SFS for VxRail?
Answer: There are several resources where you can find more information:
• VxRail General FAQ and VxRail Technical FAQ (this document) – both contain
a section dedicated to SFS
• Dell EMC Networking SmartFabric Services for VxRail Solution Brief
• Deployment Guide: Dell EMC Networking SmartFabric Services Deployment
with VxRail
• VxRail Appliance Technical Overview deck
Question: Are there any limitations in terms of number of switches, racks, VxRail
versions, clusters supported by SFS multi-rack?
Answer: VxRail 7.0 releases are supported by SFS versions 1.3 and later, while VxRail 8.0
releases are supported by SFS versions 3.2 and later. Up to 64 nodes in a single
cluster are supported (a vSphere limitation), expandable to 6 physical racks. A
switch fabric based on SFS can support a single managed fabric consisting of up
to 20 switches in 9 racks (if 2 spines and 18 leaves are used). Any Dell switch
supported with SmartFabric Services for VxRail can be used. A pair of leaf
switches must be deployed in every rack in the switch fabric, and two spine
switches are required for expansion outside of a single rack. SFS currently does
not support VCF on VxRail, NSX, 2-node, and stretched VxRail/vSAN clusters.
Question: How does SFS Layer 3 (L3) Fabric personality (SFS for multi-rack VxRail)
work?
Answer: SFS uses BGP-EVPN to stretch L2 networks across the L3 leaf-spine fabric,
leveraging hardware VTEP functionality. This allows for the scalability of L3
networks with the VM mobility benefits of an L2 network. For example, the nodes in
a VxRail cluster can reside on any rack within the SmartFabric network, and VMs
can be migrated from a VxRail cluster node in one rack to another without manual
network configuration.
Question: Where can the minimum requirements, features, deployment options, and
other details for VxRail deployments with SFS be found?
Answer: Please consult the deployment guide for the up-to-date list of minimum
requirements.
Question: Can I enable the SFS personality on a supported switch that wasn't configured
with SFS?
Answer: Yes, you can change the switch operating mode to Smart Fabric Mode, but it will
erase most of the switch/fabric configuration. Only basic settings, such as
management IP address, management route, hostname, NTP server and IP of the
name server are retained. Similarly, switches can be reconfigured to a Full Switch
Mode (“manual” / no SFS), but this operation deletes the existing switch
configuration.
Question: How can I update the SmartFabric OS10? Is it a part of VxRail LCM?
Answer: SmartFabric OS10 updates on switches can be done using the OMNI vCenter
plug-in; they are not part of automated VxRail LCM today.
Deployment Options
VxRail satellite nodes
Question: What are VxRail satellite nodes?
Answer: VxRail satellite nodes are a low-cost, single-node extension for existing VxRail
customers. These customers have benefited from the simplicity, scalability,
and automation of VxRail. They want to extend that benefit beyond the core data
center, but with a smaller footprint and lower cost than a 2-node cluster can
deliver, and are willing to accept a lower level of resiliency.
Answer: VxRail satellite nodes run the same VxRail HCI System Software as VxRail with
vSAN and VxRail dynamic nodes, providing a common operating model from core
to edge. Like VxRail dynamic nodes, satellite nodes do not use vSAN.
Question: What are the key use cases for satellite nodes?
Answer: Key use cases are customer locations that have no high-availability
requirements, have less strict SLAs than the core data center, run application
workloads that are not compute, memory, or storage intensive, or where high
availability and SLA requirements can be met by other means, e.g., at the
application layer.
Answer: Typical use cases would be:
• Retail and ROBO customers with distributed edge sites
• Telco 5G far edge sites at cell towers
• Test/Dev and Legacy Application Workloads
Question: What is the minimum node count for a satellite node cluster?
Answer: Satellite nodes are not deployed as part of a cluster, and cannot be converted for
use in a cluster. Satellite nodes are standalone hosts which are remotely managed
from a VxRail management cluster.
Question: How are virtual machines and application availability handled in the event of
node failure or node LCM?
Answer: Satellite nodes are intended for use cases where planned or unplanned downtime
is acceptable. In use cases where downtime cannot be tolerated, availability must
be handled at the application layer. VxRail includes vSphere Replication, which
can be used to protect virtual machines on satellite nodes.
VxRail dynamic nodes
Question: What are VxRail dynamic nodes?
Answer: VxRail dynamic nodes are compute-only nodes that do not run vSAN locally.
They consume external storage as primary storage, either a VxRail cluster sharing
capacity through VMware vSAN cross-cluster capacity sharing (formerly HCI
Mesh) or a Dell storage product. They run VxRail HCI System Software so that
VxRail clusters running vSAN and dynamic node clusters have a consistent
VMware and LCM experience.
Question: What are the external storage options for dynamic nodes?
Answer: Dynamic nodes have two external storage options: Dell storage product or VxRail
cluster using VMware vSAN cross-cluster capacity sharing for primary storage. The
primary datastore must have at least 900GB of storage capacity to host the VxRail
Manager VM.
Question: Which Dell storage products are supported with VxRail dynamic nodes?
Answer: Dynamic nodes require Dell storage. PowerFlex, PowerStore-T, PowerMax, Unity
XT, and VMAX are supported.
Question: Are third party storage array offerings supported with VxRail dynamic nodes
for primary storage?
Answer: No.
Question: What are the key use cases for dynamic nodes?
Answer: For customers looking to improve the economics of their HCI deployment, VMware
vSAN cross-cluster capacity sharing allows them to scale compute and storage
asymmetrically to better meet their companies’ IT demands while saving on vSAN
license costs where possible. Deploying VxRail dynamic nodes with vSAN cross-
cluster capacity sharing as the primary storage ensures customers have the same
LCM experience in their client clusters as they do with their server clusters running
VxRail HCI System Software. Dynamic nodes enable customers to lower
subscription costs by avoiding additional vSAN capacity subscription licensing.
Answer: Customers can better address data-centric workloads, for example in financial
services and medical verticals, that may still run on traditional three-tier
infrastructure by tightly coupling external storage arrays with dynamic nodes in their
VCF on VxRail environment. Users can add dynamic nodes, creating new workload
domains and utilizing external storage as primary storage with PowerFlex,
PowerStore-T, PowerMax, Unity XT, and VMAX in a VCF on VxRail environment.
Answer: Dynamic nodes can use external storage arrays as primary storage. This provides
flexibility to take advantage of Dell storage arrays’ strong feature set while providing
the same VxRail operational model in the compute layer to address more
workloads.
Question: Are VxRail dynamic nodes supported for SAP HANA?
Answer: Yes, provided that the applicable SAP HANA certification requirements and
support are all followed. VMware vSAN cross-cluster capacity sharing is not
supported for SAP HANA; therefore, VxRail dynamic nodes are supported for SAP
HANA deployments only with certified external storage. VxRail dynamic nodes are
not supported under the SAP HCI program.
Question: What is the minimum node count for a dynamic node cluster?
Answer: The minimum node count to deploy a dynamic node cluster is two.
Question: Can a cluster have a mix of VxRail dynamic nodes and VxRail nodes running
vSAN?
Answer: No. A VxRail cluster running vSAN cannot add dynamic nodes.
Question: Can dynamic nodes be configured to use external storage array and VMware
vSAN cross-cluster capacity sharing?
Answer: Yes. However, one has to be primary storage while the other one is secondary
storage. The primary storage hosts the VxRail Manager VM.
Question: What protocols are supported for external storage array connectivity?
Answer: FC, FC-NVMe, iSCSI, NFS, NVMe-oF, and NVMe-oF/TCP are available currently.
Please refer to the VxRail Roadmap (https://vxrail.is/6monthroadmap) for future
connectivity support.
Question: Does VxRail also lifecycle manage the external storage array?
Answer: Unless VxRail is in a Dynamic AppsON configuration, management of the external
storage array is done separately.
Dynamic nodes with external storage array as primary storage
Question: What is the deployment process to configure dynamic nodes with external
storage array as primary storage?
Answer: For FC-attached storage, the deployment process includes the VxRail Day 1
bring-up as well as setup done on the storage array and FC switch.
Similar to setting up FC-attached storage for ESXi clusters, zoning must be done in
advance of the VxRail Day 1 bring-up, and storage needs to be provisioned from
the storage array to the nodes beforehand. A VMFS datastore of at least 900GB in
capacity must also be created and zoned to each node in the cluster ahead of time.
If there are multiple VMFS datastores, VxRail will choose the largest one.
Once those elements are in place, the user can run the VxRail Day 1 bring-up. The
wizard is largely the same as for VxRail HCI clusters except for a few areas: the
user needs to select the new cluster type (dynamic node cluster) and specify that
the cluster will connect to an FC storage array for primary storage, and there is no
setup of vSAN traffic. The Day 1 bring-up will migrate the VxRail Manager VM from
the internal BOSS card to the datastore.
Answer: For NFS or iSCSI-attached storage, setup requires post-Day 1 activity because IP
connectivity needs to be established on the cluster before storage can be mounted
and the VxRail Manager VM can be migrated onto the primary datastore.
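As a concrete companion to the prerequisites above, here is a minimal sketch, assuming the open-source pyVmomi SDK, that checks what VxRail would see at bring-up: the VMFS datastores visible to each node, applying the same selection rule described above (the largest datastore, at least 900GB). It is a hypothetical helper, not a Dell tool, and the hostnames and credentials are placeholders.

```python
# Hypothetical pre-check with pyVmomi: confirm each node already sees a VMFS
# datastore of at least 900 GB, and report the largest one (the one VxRail
# would choose). Not a Dell tool; connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

MIN_BYTES = 900 * 1024**3  # 900 GB minimum for the VxRail Manager VM

def largest_vmfs_datastore(host_ip, user, pwd):
    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host=host_ip, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        vmfs = [ds for ds in view.view if ds.summary.type == "VMFS"]
        view.Destroy()
        if not vmfs:
            return None  # no VMFS datastore zoned/provisioned yet
        best = max(vmfs, key=lambda ds: ds.summary.capacity)
        # VxRail picks the largest VMFS datastore; flag it if under 900 GB.
        return best.summary.name if best.summary.capacity >= MIN_BYTES else None
    finally:
        Disconnect(si)

for node in ("esxi-01.example.com", "esxi-02.example.com"):
    print(node, largest_vmfs_datastore(node, "root", "changeme"))
```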
Question: Can an external storage array be connected to multiple VxRail dynamic node
clusters?
Answer: Yes. An external storage array can be the primary storage resource for multiple
dynamic node clusters, which is an example of scaling compute and storage
independently.
Question: What storage features of a Dell storage array are supported when used as
primary storage for dynamic nodes or as secondary storage for VxRail vSAN
nodes?
Answer: Provided the storage array OS/firmware is at the appropriate level published
in the E-Lab support matrix to match the corresponding ESXi level on the VxRail
nodes, the features of the storage array are supported. For specific storage
array features, follow the recommended procedures and best practices published
by the storage platform and/or VMware. The exception is that there must be more
than one node per site; i.e., two-node stretched metro clusters are not
recommended and require an RPQ.
Question: Is there external storage array management integrated into VxRail Manager?
Answer: Starting with VxRail 7.0.480, VxRail Manager UI can report primary datastore
capacity (total, used, available, utilization), system serial number, and storage
protocol used on PowerStore, PowerMax, PowerFlex, and Unity XT. The feature is
restricted to storage using FC, NVMe/FC, or iSCSI protocols. This feature is
dependent on Virtual Storage Integrator (VSI) plugin v10.2 or later to acquire the
information from the storage system.
VxRail Dynamic AppsON
Question: Is PowerStore MetroVolume supported with VxRail dynamic nodes?
Answer: While users can deploy a vSphere Metro Stretched Cluster with VxRail dynamic
nodes, VxRail does not provide any automation or specialized procedures for
configuring dynamic nodes with PowerStore MetroVolume. Users should follow the
standard manual configuration steps required to connect the ESXi hosts, provision
the storage, and configure the PowerStore MetroVolume. There is no integrated
reporting or alerting within VxRail Manager for MetroVolume operations or status;
the user needs to use the native vSphere and PowerStore tools for monitoring and
managing the metro cluster/metro volume. Also, the PowerStore LCM capability
within VxRail Manager is limited to one array. For the second array, the user needs
to use PowerStore Manager, the CLI, or the VSI plug-in for LCM.
Question: How is the PowerStore update feature for Dynamic AppsON implemented?
Answer: Performing PowerStore update from VxRail Manager requires communication with
the Virtual Storage Integrator (VSI) plugin on vCenter. In VSI v10.2, a private API
has been created for the exclusive use by VxRail Manager to perform PowerStore
lifecycle management via VSI. VxRail Manager uses the VSI private API to run the
PowerStore LCM pre-check and update and retrieve PowerStore version
information.
Question: What are the PowerStore LCM operations that can be run from VxRail
Manager?
Answer: From VxRail Manager, VxRail users can upload a PowerStore bundle from a local
client onto PowerStore Manager. VxRail users can then run the pre-check that is
packaged in the bundle. When ready, they can initiate the PowerStore LCM update
operation and monitor its progress. At any point in time, VxRail users can view the
PowerStore version from the cluster system page and physical views on VxRail
Manager.
Answer: A VxRail user must also enter PowerStore administrator credentials to run the
PowerStore update workflow, from uploading the bundle and running the pre-check
to initiating the update.
Question: Why are there storage protocol limitations for the PowerStore LCM
integration?
Answer: There is a limitation in the VSI API for NFS datastores that prevents VxRail from
retrieving the requisite array information to initiate the PowerStore LCM operations.
Answer: For the PowerStore version reporting limitation, you can submit an RPQ for other
storage protocols besides NFS.
Dynamic nodes with VMware vSAN cross-cluster capacity sharing as primary storage
Question: What is the deployment process to configure dynamic nodes with VMware
vSAN cross-cluster capacity sharing as primary storage?
Answer: VxRail cluster deployment automates the Day 1 operations: cluster build, vSAN
network setup, mounting of the remote datastore, and migration of the VxRail
Manager VM to the remote datastore. After the VxRail cluster deployment is
completed, the dynamic node cluster is ready for Day 2 operations.
Answer: For the automated Day 1 operations, the user is required to use the VxRail API.
Storage parameters need to be populated in the JSON cluster deployment file
to identify the server cluster and the remote datastore. For networking, the vSAN
VMK IP for each node, the portgroup, the VLAN ID, and optionally a vSAN gateway
need to be provided in the same JSON file.
VxRail dynamic node clusters with VMware cross-cluster capacity sharing are a
good example of where vSAN gateways can be very beneficial, as the Layer 3
networks of server clusters and client clusters can get very complex.
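To make those JSON parameters concrete, the sketch below shows the general shape such a deployment file fragment could take, rendered as a Python dictionary. Every field name and value here is an illustrative assumption, not the actual schema; consult the VxRail API documentation for the authoritative format.

```python
# Illustrative fragment of the storage and network parameters a JSON cluster
# deployment file for a dynamic node cluster with vSAN cross-cluster capacity
# sharing might carry. All field names and values are assumptions.
import json

deployment_fragment = {
    "cluster_type": "DYNAMIC",
    "primary_storage": {
        "type": "VSAN_REMOTE",                  # vSAN cross-cluster capacity sharing
        "server_cluster": "server-cluster-01",  # cluster sharing its vSAN datastore
        "remote_datastore": "vsanDatastore-01",
    },
    "vsan_network": {
        "portgroup": "vSAN-PG",
        "vlan_id": 100,
        "gateway": "172.16.10.1",               # optional vSAN gateway (Layer 3)
        "vmk_ips": {                            # one vSAN VMK IP per node
            "node-01": "172.16.10.11",
            "node-02": "172.16.10.12",
            "node-03": "172.16.10.13",
        },
    },
}

print(json.dumps(deployment_fragment, indent=2))
```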
Question: What is the maximum distance for shared storage between client and server
clusters?
Answer: Distance is bounded by network latency: the maximum supported latency
between client and server clusters is 5ms round-trip time.
Question: Do the vSAN cross-cluster capacity sharing client and server clusters have to
be managed by a common vCenter Server?
Answer: No. As of VMware ESXi 8.0 U1, client and server VxRail clusters can be managed
by separate vCenter Servers, regardless of whether they are VxRail-managed or
customer-managed. The vCenter Servers can be linked via VMware Enhanced
Linked Mode or exist as standalone instances. However, vSAN stretched clusters
and 2-node vSAN clusters still require a common vCenter Server.
VxRail with vSAN ESA
Question: What are the configuration limitations for VxRail with vSAN ESA?
Answer: VxRail with vSAN ESA does not support the following configuration options:
• Mixing of NVMe drive sizes, endurance ratings, or vendors
Question: What are the deployment options for VxRail with vSAN ESA?
Answer: VxRail with vSAN ESA is only supported for greenfield opportunities. Brownfield
scenarios that involve repurposing VxRail nodes running vSAN OSA to run vSAN
ESA require reimaging and redeployment. This is a disruptive process that may
require migration of workloads to another cluster before decommissioning the
VxRail nodes running vSAN OSA. Nodes may need to be reconfigured to meet the
vSAN ESA hardware requirements before they are reimaged and ultimately
redeployed into a VxRail cluster running vSAN ESA. Due to all these factors, an
approved RPQ is required to carefully evaluate each customer's situation and
ability to successfully perform this conversion.
Answer: VxRail clusters with vSAN ESA can be managed by either VxRail-managed or
customer-managed vCenter Server 8.0. There is flexibility such that the vCenter
Server 8.0 instance can also manage VxRail clusters running vSAN OSA 7.0 or
OSA 8.0.
Answer: VxRail with vSAN ESA can be deployed as a standard vSAN cluster (3+ nodes), 2-
node vSAN cluster, or a stretched cluster.
Question: What are the deployment limitations for VxRail with vSAN ESA?
Answer: At this time, the following deployment options are not available for VxRail with
vSAN ESA:
• Mixing VxRail 15G and 16G nodes in the same cluster
• Re-purposing existing VxRail nodes
• Mixing vSAN ESA and OSA nodes in the same cluster
• Using vSAN cross-cluster capacity sharing on 2-node clusters
• 2-node vSAN cluster that shares a witness with a 2-node vSAN OSA cluster
Question: Can virtual machines and workloads be moved between vSAN ESA and OSA
clusters?
Answer: Yes, virtual machines and their workloads can be migrated between vSAN ESA
and OSA clusters using shared-nothing vMotion, as is the case between any
vSphere clusters.
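As an illustration of what such a migration looks like programmatically, here is a minimal sketch using the open-source pyVmomi SDK. It assumes the virtual machine, destination host, datastore, and resource pool managed objects have already been retrieved from the vCenter inventory; it is generic vSphere code, not VxRail tooling.

```python
# Minimal shared-nothing vMotion sketch with pyVmomi: the RelocateSpec carries
# both a destination host and a destination datastore, so compute and storage
# move together (for example, from a vSAN OSA cluster to a vSAN ESA cluster).
from pyVmomi import vim

def cross_cluster_migrate(vm, dst_host, dst_datastore, dst_pool):
    spec = vim.vm.RelocateSpec()
    spec.host = dst_host            # compute destination: a host in the ESA cluster
    spec.datastore = dst_datastore  # storage destination: the ESA vSAN datastore
    spec.pool = dst_pool            # resource pool in the destination cluster
    return vm.RelocateVM_Task(spec=spec)  # returns a live-migration task
```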
VCF on VxRail
Question: Where can I learn more about VCF on VxRail?
Answer: Find more information in the VCF on VxRail Technical FAQ and white paper.
2-node vSAN Cluster
Question: My customer would like to remove one node from a VxRail 3-node cluster to
make a 2-node cluster. Is this possible?
Answer: No. A 2-node cluster must be a newly defined configuration. Therefore, removing a
single node from a cluster to form a 2-node configuration is not supported.
Question: What are the vSAN licensing options for the 2-node cluster?
Answer: See the VxRail Licensing section for details.
Stretched Cluster
Question: What should I know about Stretched Clusters in VxRail?
Answer: Please see the VxRail Architecture Overview Guide.
Customer-deployable VxRail
Question: How is VxRail customer-deployable?
Answer: For existing customers with experienced technical resources, VxRail has
introduced capabilities to enable customers to pre-configure their VxRail clusters
with a web-based configuration portal and to self-deploy their clusters with these
configurations using the VxRail deployment wizard, RESTful API, or the Offline
Deployment Tool.
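For API-driven self-deployments, the general pattern is to submit the exported cluster configuration to VxRail Manager and then poll the resulting long-running request. The Python sketch below illustrates only that pattern; the endpoint paths, response fields, and credentials are assumptions, so consult the VxRail API reference for the actual contract.

```python
# Hypothetical self-deploy pattern against the VxRail Manager REST API:
# submit the configuration exported from the configuration portal, then poll
# the long-running request. Paths, fields, and credentials are placeholders.
import json
import time
import requests

VXM = "https://vxrail-manager.example.com"
AUTH = ("administrator@vsphere.local", "changeme")

with open("cluster-config.json") as f:  # exported from the configuration portal
    config = json.load(f)

resp = requests.post(f"{VXM}/rest/vxm/v1/cluster", json=config,
                     auth=AUTH, verify=False)
resp.raise_for_status()
request_id = resp.json()["request_id"]  # assumed response field

while True:
    status = requests.get(f"{VXM}/rest/vxm/v1/requests/{request_id}",
                          auth=AUTH, verify=False).json()
    print(status.get("state"), status.get("progress"))
    if status.get("state") in ("COMPLETED", "FAILED"):
        break
    time.sleep(30)
```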
Question: Can customers self-deploy a VxRail cluster with any VxRail node?
Answer: All VxRail 14th generation models, VxRail 15th generation AMD-based and
Intel-based models, and VxRail 16th generation models are customer self-
deployable. VxRail nodes must be running a minimum VxRail software version of
7.0.410. Expect new platforms going forward to support the self-deploy option,
unless expressly stated otherwise.
Answer: Customers can repurpose supported nodes that are already deployed in a cluster.
Customers can re-image the nodes using the node image management tool with a
supported version. Then they can self-deploy the re-imaged nodes into a new
cluster.
Ecosystem support
External storage
Question: Can I use VxRail systems to access external storage?
Answer: Yes, Dell storage arrays are supported as primary storage for VxRail dynamic
nodes where the VxRail Manager VM will run from an external datastore.
PowerStore, PowerMax, Unity XT, VMAX, and PowerFlex are the supported Dell
storage arrays. Third-party storage arrays are not supported.
Answer: For secondary storage use cases, VxRail systems can utilize external iSCSI and
NFS datastores, in addition to Fibre Channel storage.
Question: Can VxRail external storage include any vendor storage array or type?
Answer: Yes, but only as secondary storage. External storage can be connected via FC. It
is up to the customer to verify that the FC HBA card, driver, and firmware are
qualified with their storage array.
Answer: vSAN 8.0 file services support NFS and SMB. With vSAN 8.0 U2, vSAN ESA
fully supports vSAN file services.
VxRail Management Pack for Aria Operations
Question: How are Aria Operations and SaaS multi-cluster management different?
Answer: Both products have visibility into the health status, health events, and topology and
provide resource metrics charting, anomaly detection, and capacity forecasting of
the VxRail clusters. However, Aria Operations and SaaS multi-cluster management
are designed to solve different customer problems. Aria Operations focuses on the
management and optimization of the virtual application infrastructure for the
complete SDDC stack as well as hybrid/public cloud. SaaS multi-cluster
management focuses on active multi-cluster management of customer’s entire
VxRail footprint from a centralized point. It does not manage the virtualized
workload running on the VxRail clusters.
Answer: Aria Operations can be installed on-premises or consumed as a cloud-based
service. SaaS multi-cluster management is a cloud-based service.
Question: When should I position VxRail Management Pack for Aria Operations vs.
SaaS multi-cluster management?
Answer: The Management Pack itself is free of charge but requires that the customer
purchase Aria Operations licensing entitlements in order to use it. Customers
already using Aria Operations can benefit by adding VxRail context into their
existing monitoring, troubleshooting, and optimization activities of their virtual
infrastructure.
Answer: SaaS multi-cluster management is part of VxRail HCI System Software and does
not require an additional license for its feature set, except for active management.
It is suitable for the majority of the VxRail customer base. It requires VxRail clusters to
be able to connect to Dell cloud via the connectivity agent. Customers looking to
more efficiently manage their VxRail clusters at scale and leverage operational
intelligence to simplify administration can benefit from SaaS multi-cluster
management.
Question: Where can I find more information on VxRail Management Pack for Aria
Operations?
Answer: Please refer to the VxRail Management Pack for vRealize Operations User Guide.
Delivery Options
Integrated Rack
Question: How does the delivery option called VxRail Integrated Rack Services differ
from the existing delivery option?
Answer: Customers who choose the existing delivery option are looking for flexibility in
networking and racking options. Customers who choose the VxRail Integrated
Rack deployment option have Dell Technologies rack and stack the VxRail
appliances, and optionally other customer-selected networking and desired
infrastructure, in a Dell Technologies 2nd Touch Facility prior to shipping. For non-
Dell-supplied 3rd party products, the customer is responsible for procuring and
shipping the products to a Dell Technologies 2nd Touch Facility for rack integration.
There are new ProDeploy Rack Integration for VxRail offers available in the sales
tool for easy quoting of factory rack integration work (previously it required a
customized quote).
Question: What rack design configurations are available for VxRail Integrated Rack?
Answer: With flexible Dell Technologies 2nd Touch Facility factory services, customers
have options for the rack and networking components they would like used.
Customers can purchase a rack from Dell Technologies or from our Dell
Technologies partner, APC, or supply their own consigned 3rd party rack.
Customers also have options for network switches: they can purchase Dell EMC
PowerSwitch switches with OS10 EE from Dell Technologies, or they can supply
their own consigned 3rd party switches. Any 3rd party consigned items supplied by
the customer must be purchased separately by the customer outside of Dell
Technologies, and support for those components is provided by the component
vendor, not Dell Technologies. So, depending on which components are used for
the system, customers have a choice of what support experience they would like
for their infrastructure.
Note: Rack Delivery services are only orderable through DSA/Gii Ordering tools.
Question: Are the fixed rack design configuration service templates only for VCF on
VxRail? Could they also be used for VVD and/or other VxRail use cases?
Answer: Fixed rack design configurations are no longer being offered and are EOL. All rack
integration services for VxRail must now be purchased as custom rack integration
services engagements. These can be quoted by working with your local Dell EMC
services sales specialist just as you would for other professional services offerings.
Answer: The GIS services support individual rack configurations only; however,
customers can order multiple racks. Racks will include ToR switches and
management switches (when required).
Question: Can Dell sell Panduit racks for integration in the 2nd Touch Facility?
Answer: Panduit is not in the Dell price book catalog nor is it orderable in DSA/Gii. It must
be customer consigned material if a customer desires to use Panduit racks.
Question: Are deployment/installation services still required with the order of VxRail
Integrated Rack services?
Answer: Once the custom rack arrives in a customer datacenter, the typical onsite VxRail
installation and deployment services engagement begins. These VxRail installation
and deployment services are required regardless of whether a customer chooses
to have their physical infrastructure pre-racked and stacked at the Dell
Technologies 2nd Touch Facility before it arrives at their datacenter. The standard
VxRail or VCF on VxRail deployment and installation services would be performed
by Dell Technologies or partner professional services teams to configure the
environment per the designed physical site survey requirements.
Sales
Licensing
Question: What is the general licensing guidance now that perpetual licensing is end of
life?
Answer: VxRail clustered nodes, such as vSAN and dynamic nodes, require VMware
vSphere Foundation subscription at a minimum. VMware Cloud Foundation
subscription could also be used.
VxRail non-clustered hardware, such as satellite nodes and the embedded witness
in the VD-4000, requires vSphere Standard subscription at a minimum. VMware
vSphere Foundation or VMware Cloud Foundation subscriptions could also be
used.
VMware Cloud Foundation on VxRail requires VMware Cloud Foundation
subscription.
Answer: vSphere, vSAN, vCenter Server, vSphere with Tanzu, and Aria Operations licenses
are included in vSphere Foundation and VCF subscriptions.
Question: Where can I find more information about ordering and licensing for VxRail?
Answer: Refer to Dell Sales Tool Ordering and Licensing guide.
Question: Are RecoverPoint for Virtual Machines (RP4VM) licenses included with the
purchase of a VxRail?
Answer: Yes, except for VxRail satellite nodes. Standard support includes 5 licenses per
node (E, P, V, D, and S Series) and 15 licenses per VxRail G Series chassis. There
is a limitation with RP4VM that prevents support for standalone hosts such as
VxRail satellite nodes.
Question: Do we support shutting off cores in the BIOS to help customers stay in
compliance with software licensing?
Answer: Open an SR or request an RPQ to ensure we can properly support BIOS changes.
Tools
Question: Can the EIPT (Enterprise Infrastructure Planning Tool) be used for VxRail?
Answer: Yes, for specific power, cooling, weight, dimensions, etc., refer to the EIPT Tool.
Answer: Security requests and questions can be submitted in the Security & Customer Trust
portal.
Training
Question: What technical resources can I use to learn about VxRail?
Answer: VxRail Bootcamp series
Answer: VCF on VxRail bootcamp series
Question: What should I know about VxRail 14th Generation nodes End of Sales Life?
Answer: VxRail 14th Generation nodes are no longer for sale as of May 9th, 2023. Below are
other relevant dates for existing VxRail 14th Generation nodes.
• End of Expansion (EOE): April 28th, 2028
• End of Standard Support (EOSS): April 30th, 2028
Answer: The P580N dates are as follows:
• End of Life (EOL): February 5th, 2024
• End of Expansion (EOE): January 31st, 2029
• End of Standard Support (EOSS): January 31st, 2029
Question: What happens to support contracts that exceed the EOSS dates?
Answer: Once EOSS dates are coded, entitlements quoted past the EOSS date are
terminated, and the unused portion of the standard support contract quoted beyond
the EOSS date is credited back to the customer automatically.
Support Services
Question: Does the purchase of VMware Extended Support impact the support length of
the associated VxRail hardware?
Answer: No, the support length of the associated VxRail hardware remains unchanged.
Notably, hardware support for VxRail nodes running on Quanta platforms ended on
September 30, 2022, and for VxRail nodes running on PowerEdge 13G platforms
on May 31, 2023.
Question: Does ProSupport Suite provide code upgrades by Dell Support for the
customer?
Answer: Yes, but it depends on the ProSupport Suite level purchased by the customer. If the
customer purchases ProSupport Next Business Day, then code upgrades by Dell
Support are not available. All other ProSupport offers include the code upgrades by
Dell Support. The customer can perform their own code upgrades. Note – VCF on
VxRail is different from VxRail in that ALL ProSupport Suite for VCF on VxRail
offers include code upgrades by Dell Support.
Deploy Services
Question: Are ProDeploy Suite offers mandatory?
Answer: No, but they are highly recommended to ensure the best deployment experience
for the customer. Customers can deploy their own VxRail nodes but should only do
so if they have experience with the installation. ProDeploy Suite for VxRail offers
are sold by the node. They can be sold with onsite or guided hardware deployment
and with remote or onsite configuration. ProDeploy Plus for VxRail is the highest
level of deployment providing a ‘white-glove’ onsite hardware installation and onsite
configuration experience. It is the default option for VxRail in the sales tools.
Solutions
Question: Is VxRail SAP HANA certified?
Answer: Yes. As outlined in the VxRail SAP HANA Design Guide, SAP HANA is fully
validated and supported on vSphere 8.0 with vSAN 8.0, and on vSphere 7.0 U3
with vSAN 7.0 U3 or VxRail dynamic nodes, based on the following platforms:
• All-flash dual-socket VP-760 and VE-660
• All-NVMe quad-socket P580N
• All-NVMe dual-socket P670N, E660N, and E560N
Competition
Question: Where do I get additional information about positioning VxRail systems
against the competition?
Answer: See the VxRail system battle cards in the VxRail Enablement Center.
Answer: Visit the competitive Klue site.