
Best Practices Guide

Data Center | Hybrid Cloud

Deploying a Cost-Efficient, High-Performance vSAN Cluster

Discover how to improve storage performance, lower latency, increase scalability, and reduce total costs with a VMware vSAN-based solution optimized for Intel® Optane™ persistent memory and Intel® Optane™ SSDs

Authors
Flavio Fomin, Solutions Architect, Intel Corporation
John Hubbard, Solutions Architect, Intel Corporation

Table of Contents
Introduction
Solution Overview
vSAN Architecture Overview
How Storage Works in vSAN
Business Value
Consistently High Performance
Predictably Low Latency
Lower Acquisition Costs and Footprint
Higher Reliability and Availability
Solution and System Requirements
Memory and I/O Layout
Memory Configurations
Required BIOS Settings
Optimizing Latency
Recommended Hardware
Benchmarking Your Cluster
Installation and Configuration
Cache Tier Considerations
Capacity Tier Considerations
Setup Summary
Summary

Introduction

Software-defined storage (SDS) abstracts storage software from the storage hardware. By providing a shared pool of storage capacity that can be used across service offerings, SDS eliminates storage silos and helps improve utilization ratios. Intelligent, automated orchestration can reduce operating costs and speed provisioning from several weeks to a few minutes.

IT organizations can no longer afford legacy storage architecture that limits their ability to scale cost-effectively. Intel hardware coupled with VMware software is an effective alternative for modernizing and future-proofing storage. Intel® Optane™ SSDs combined with 3D NAND SSDs are a key differentiator for this architecture, delivering scalability, predictability, and beneficial cost/performance.

VMware vSAN is a vSphere-native SDS solution that powers industry-leading hyperconverged infrastructure (HCI) solutions in the hybrid cloud. Intel and VMware have worked closely to develop reference architectures and best practices for deploying a vSAN platform that is optimized to run on technology from Intel. Examples include the latest generation of Intel® Xeon® Scalable processors, Intel Optane SSDs, and Intel® Optane™ persistent memory (Intel® Optane™ PMem). The steps and recommendations in this best practices guide can help you build a storage platform that can keep up with current needs as well as scale into the future.

Solution Overview

A VMware vSAN-based deployment consists of the vSAN software, cache, and capacity drives organized into disk groups.

vSAN Architecture Overview

vSAN abstracts and aggregates locally attached disks in a vSphere cluster to create a storage solution that can be provisioned and managed from vCenter and the vSphere Web Client. Each host in a vSAN cluster can contribute storage capacity to the cluster, as well as consume data from the cluster. All the storage devices combine to create a single vSAN datastore.

vSAN is an HCI solution, where VM storage and compute resources are delivered from the same x86 server platform running the hypervisor. As such, vSAN reduces the need for external shared storage, and simplifies storage configuration and VM provisioning. It integrates with the entire VMware software stack—vSAN is a distributed layer of software included in the VMware ESXi hypervisor. You can use VM-level policies to control VM storage provisioning and storage service-level agreements (SLAs). And you can set these policies per VM and modify them dynamically.

vSAN is a common solution for hybrid cloud deployments that need scale-out and virtualized compute, storage, and networking resources.

How Storage Works in vSAN

VMware vSAN is a powerful HCI platform that serves as a critical building block for the software-defined data center. Organizations can deploy vSAN to take advantage of the solution's distinctive scalability, security, and performance features for today's most demanding, storage-intensive data center workloads.

vSAN aggregates all local capacity devices into a single datastore shared by all hosts in the vSAN cluster. The datastore can be expanded by adding capacity devices or hosts with capacity devices to the cluster. vSAN works best when all ESXi hosts in the cluster share similar or identical configurations across all cluster members.

Figure 1 shows how data moves into and out of vSAN storage.

Figure 1. Data moving from memory into the vSAN cache tier and capacity tier. [Diagram: VMs running on VMware vSphere and VMware vSAN, managed by vCenter. Memory layer: DRAM as the write cache tier, Intel® Optane™ Persistent Memory as the capacity tier (expanded memory). Storage layer: Intel® Optane™ SSDs as the write cache tier, 3D NAND SSDs as the read capacity tier.]

Business Value

Intel Optane SSDs deliver excellent business value to vSAN deployments by offering high performance, low latency, low acquisition costs and footprint, and high reliability and availability.

Consistently High Performance

Because Intel Optane SSDs are a write-in-place media, they provide a significantly higher write throughput (measured in MB per second) than NAND flash SSDs. This technology difference enables up to 80 percent higher write throughput compared to typical NVMe NAND drives,1 even though both the Intel Optane and Intel NVMe drives use PCIe Gen 4 (see upper portion of Figure 2). Higher throughput enables higher scalability. An SSD with a higher write throughput adds meaningful value to the vSAN two-tier architecture because all writes must go through the cache device.

Predictably Low Latency

Intel Optane SSDs use bit-level (not page-level) write operations, which significantly reduces time-consuming garbage collection and drives down latency (see lower portion of Figure 2).

Figure 2. Intel® Optane™ SSD P5800X sustains performance under write pressure, enabling greater scalability and predictable performance.1 [Chart: "Higher Predictability and Performance with Intel® Optane™ SSD P5800X," 100% sequential write, 64 KB. Throughput in MB/s (higher is better) and latency in ms (lower is better) for the Intel® Optane™ SSD P5800X 400 GB versus the Intel® SSD D7-P5600 1.6 TB. The P5800X delivers up to 80 percent higher write throughput, and its latency remains stable as I/O demand increases.]

Response time remains low even when workload I/O demands spike unexpectedly. The predictable response time of Intel Optane SSDs means that even with volatile workloads, or more workloads per VM, organizations can continue to meet SLAs (Figure 3).

Figure 3. For 64 KB 70/30 read/write workloads, the Intel® Optane™ SSD P5800X can keep up with steadily growing IOPS demand—long after the Intel® SSD D7-P5600 and even the Intel Optane SSD DC P4800X response times increase exponentially.2 [Chart: "Excellent Response Time as IOPS Demand Increases with Intel® Optane™ SSD P5800X," 70/30 read/write, 64 KB. Response time (ms) versus IOPS from 90K to 360K at 16, 32, 64, and 128 outstanding I/O requests (OIOs) per host; lower and longer is better. Drives tested with the Intel® Xeon® Gold 6348 processor: Intel® SSD D7-P5600, Intel® Optane™ SSD DC P4800X, and Intel® Optane™ SSD P5800X. The P5800X scales beyond 64 OIOs and gracefully sustains workload demand at 128 OIOs per host.]

Lower Acquisition Costs and Footprint

Using Intel Optane SSDs as cache enables a higher VM density per host. Fewer hosts per cluster help reduce hardware and software acquisition costs and consumed footprint. Intel Optane SSDs can reduce node count by up to 30 percent, when comparing the combination of a 3rd Generation Intel® Xeon® Scalable processor and the latest-generation Intel Optane SSD to the previous-generation processor and SSD.3

Higher Reliability and Availability

With an endurance of 100 DWPD, Intel Optane media does not run the risk of wearing out before the 5-year warranty, avoiding the exposure to data loss and unavailability related to cache drive replacements.

Solution and System Requirements

Table 1 provides the performance specifications for the Intel Optane SSD P5800X Series.

Table 1. Intel® Optane™ SSD P5800X Series Performance Specifications4

Feature                                      Specification
Capacity                                     400/800/1600 GB
Interface                                    PCIe x4
DWPD                                         100
Throughput
  Sequential Read                            Up to 7.4 GB/sec
  Sequential Write                           Up to 7.4 GB/sec
  Random 4K Read (IOPS)                      Up to 1.55 million
  Random 4K Write (IOPS)                     Up to 1.6 million
  Random 4K 70/30 (IOPS)                     Up to 2.0 million
  Random 512B Read (IOPS for metadata)       Up to 5.0 million
QoS
  4K random read, QD=1                       99% < 6 μs
  4K random read/write, mixed, QD=1          99.999% < 66 μs

Memory and I/O Layout

Each 3rd Gen Intel Xeon Scalable processor supports eight memory channels. Each memory channel supports up to two DIMMs. Based on NUMA settings and memory population, these channels will be interleaved channels of a two-socket system. The system can have access to a maximum of 6 TB of memory, combining DRAM and Intel Optane PMem at 3200 MHz. The PCIe subsystem provides up to 128 lanes of high-speed I/O.

Memory Configurations

In an HCI environment, having enough memory is important to increase CPU utilization and drive up VM density. Memory capacity can be configured as DRAM-only or can be a tiered memory configuration, which is a combination of DRAM and Intel Optane PMem, where the DRAM capacity serves as a cache and the Intel Optane PMem provides the capacity consumed by the VMs. Intel Optane PMem 200 series in Memory Mode can provide up to a 20 percent platform-level cost reduction5 and can provide comparable performance to DRAM-only6 when the host's active memory fits in the DRAM capacity.

For optimal performance, populate all eight memory slots of the processors with DRAM DIMMs, and four or eight slots with Intel Optane PMem, according to the desired capacity. Refer to the following for memory population guidelines provided by Intel, VMware, and current-generation OEM solutions (a quick sizing sketch follows this list):
• Intel: Boost VMware vSphere Efficiency with Intel Optane Persistent Memory
• Dell: Dell EMC PowerEdge R650 Installation and Service Manual
• HPE: Intel Optane Persistent Memory 100 Series for HPE Guide
• Lenovo: Intel Optane Persistent Memory 200 Series Product Guide
• Cisco: Configuring and Managing Intel Optane Data Center Persistent Memory Modules
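The population guidance above translates into a simple capacity check. The following Python sketch is illustrative only and is not part of the original guide: it computes host-level DRAM and PMem capacity for a two-socket server and reports the resulting DRAM-to-PMem ratio, using example DIMM and module sizes.

```python
# Illustrative sizing check for a tiered-memory vSAN host (not from the
# original guide). Assumes a two-socket server populated per the guidance
# above: eight DRAM DIMMs per socket plus four or eight Intel Optane PMem
# modules per socket. DIMM and module sizes are example values.

def tiered_memory_summary(sockets=2, dram_dimms_per_socket=8, dram_gb=16,
                          pmem_modules_per_socket=4, pmem_gb=128):
    """Return host-level DRAM capacity, PMem capacity, and the PMem:DRAM factor."""
    dram_total = sockets * dram_dimms_per_socket * dram_gb    # DRAM acts as cache in Memory Mode
    pmem_total = sockets * pmem_modules_per_socket * pmem_gb  # PMem is the capacity seen by VMs
    return dram_total, pmem_total, pmem_total / dram_total

if __name__ == "__main__":
    dram, pmem, factor = tiered_memory_summary()
    # 16x 16 GB DRAM + 8x 128 GB PMem per host -> 256 GB : 1024 GB, i.e. 1:4.
    print(f"DRAM: {dram} GB, PMem: {pmem} GB, DRAM:PMem ratio = 1:{factor:g}")
```

With eight 16 GB DRAM DIMMs and four 128 GB PMem modules per socket, the ratio works out to 1:4, matching the rule of thumb and the recommended host configuration discussed later in this guide.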

Required BIOS Settings

Intel® Virtualization Technology (Intel® VT). Intel VT-x extensions add migration, priority, and memory-handling capabilities to a wide range of Intel® processors and provide virtualization features that allow for more efficient execution of VMs. Intel VT-x also allows for the creation of 64-bit guest systems under VMware.

Intel® Virtualization Technology for Directed I/O (Intel® VT‑d). This technology makes it possible for guest systems to directly access a PCIe device, with help from the provided Input/Output Memory Management Unit (IOMMU). This allows a local area network (LAN) card to be dedicated to a guest system, which makes it possible to attain increased network performance beyond that of an emulated LAN card. Once such a direct access system has been implemented, live migration of the guest system is no longer possible. VMware vSphere can be configured for use with an activated Intel VT-d system using VMware VMDirectPath for direct access to PCIe cards.

BIOS Settings for Best Performance
Modern data centers face growing concerns about power consumption, both from the perspective of total cost of ownership and total cost to the environment. Intel processors support BIOS settings that help conserve energy. However, in certain use cases, higher performance may be deemed more important than saving energy. The following BIOS options provide optimal performance for a variety of virtual applications on vSphere installations. However, they may not achieve additional objectives, such as optimal power consumption.

• P-State. If enabled, the CPU (all cores on a specific NUMA node) will go to "sleep" mode if there is no activity. This mode is similar to C-State, but it applies to the whole NUMA node. In most cases, it saves power when the CPU is idle. However, for performance-oriented systems, when power consumption is not an issue, it is recommended that P-State be disabled.
• C-State. To save energy, it is possible to lower the CPU power when the CPU is idle. Each CPU has several power modes called C-states. If the BIOS is set to a performance profile, these operations are not suitable when aiming for optimal performance. Therefore, they should be disabled.
• Turbo Mode. Intel® Turbo Boost Technology automatically runs the processor core faster than the rated base frequency. The processor must be working within the power, temperature, and specification limits of the TDP. Both single- and multi-threaded application performance is increased.
• Hyper-threading. Intel® Hyper-Threading Technology allows a CPU to work on multiple streams of data simultaneously to improve performance and efficiency. In some cases, turning hyper-threading off can result in higher performance with single-threaded tasks. Typically, Intel Hyper-Threading Technology should be enabled. In cases where the CPU is close to 100 percent utilization, hyper-threading might not help and can even harm performance. Therefore, in such cases, hyper-threading should be disabled.
• VMware ESXi. The hypervisor operating system can influence power profiles at an operating system level only if the BIOS power profile is set to "OS control mode" or equivalent. The exact options available will depend on the server platform used and the options exposed in the BIOS. Using alternative power profiles in the BIOS, such as "Performance," prevents ESXi from influencing power profiles within the operating system.

Other BIOS Considerations
• OEM BIOS software sometimes has profiles that initialize several of the BIOS settings to match application requirements. To improve performance, choose the profile that favors performance in virtualized environments.
• Platforms often present a trade-off between power savings and performance. Turning off power-saving schemes in the BIOS may result in improved performance of certain workloads at the expense of additional power consumed.
• Providing additional cooling to the CPU helps drive higher workloads more effectively. Set BIOS options that enhance cooling, such as changing the fan settings from Acoustic (low) mode to Performance (high) mode. A side effect may be increased power consumed by the server.

Optimizing Latency

vSAN is a latency-sensitive application. You can reduce latency on a vSAN cluster with the following settings:
• Use the highest frequency DIMM speeds that the processor supports:
  – 2666 MHz for most 1st Gen Intel Xeon Scalable processors
  – 2933 MHz for most 2nd Gen Intel Xeon Scalable processors
  – 3200 MHz for most 3rd Gen Intel Xeon Scalable processors
• Choose a BIOS performance profile that optimizes performance.
• Use a 100 GbE network interface card (NIC), or at least a 25 GbE NIC, such as the Intel® Ethernet Network Adapter E810, to help avoid any networking I/O bottlenecks. Ensure that the NIC supports RDMA and RDMA over Converged Ethernet (RoCE) v2. For more information, visit Intel's Ethernet Adapters webpage.
  – Enable priority flow control for traffic class 3 on the physical switch.
  – Consider enabling RDMA to offload some of the vSAN CPU consumption to the NIC. In recent tests, enabling RDMA resulted in up to a 10 percent increase in IOPS and up to 8 percent lower CPU utilization when using the latest RDMA drivers with a 3rd Gen Intel Xeon Scalable processor.7 For more information, read Network Requirements for RDMA over Converged Ethernet and Configure Remote Direct Memory Access Network Adapters.
• We recommend using a Maximum Transmission Unit (MTU) of 9000 to increase overall vSAN throughput. Follow the best practices regarding latency in the vSAN Planning and Deployment guide.

• Always provide adequate power to the server by plugging in all the redundant power supply units on the server. This also helps keep the server running if one of the power supply units fails.
• Evenly divide cache-tier storage devices and capacity-tier storage devices between CPUs. This is not always possible with configure-to-order OEM servers, but do this where possible. We have seen best results when storage is divided evenly between CPU Socket 1 and Socket 2. This may require specific placement of storage controllers in PCIe slots and an understanding of the platform's interconnects between PCIe slots and CPU sockets. For more information, check the technical product specification guide: Intel® Server Board M50CYP2SB1US2600WF Product Family Technical Product Specification.

Recommended Hardware

CPU
• Intel Xeon Scalable processor
• Performance is driven by frequency
• VM density is increased with core count
• For more information: Intel Xeon Scalable Processors

Memory
Memory can be configured with a single DRAM tier or with a DRAM tier plus an Intel Optane PMem tier.
• For best performance, we recommend using all DRAM channels in a socket (eight slots). For instance, for 128 GB DRAM capacity per socket, use 8x 16 GB DDR4 DIMMs.
• For tiered memory, we recommend the same eight DRAM DIMMs plus four or eight Intel Optane PMem DIMMs per socket.
We also recommend obtaining the vSphere active memory data for the workloads that will be running on the host when determining the DRAM-to-PMem ratio. When no data is available, a best practice has been to use a 1:4 ratio.
For more details, please refer to Boost VMware vSphere Efficiency with Intel Optane Persistent Memory.

Network
• For small deployments, we recommend a minimum of 10 GbE.
• For most workloads, we recommend 25 GbE or 100 GbE. With a 100 GbE NIC, customers can expect lower packet latency and more than double the read throughput on a vSAN cluster, when compared to previous-generation NICs.8

Storage
Cache tier: As discussed in the Solution Overview section, the performance characteristics of the cache drive make a significant difference to vSAN I/O performance. But drive endurance is another critical factor to be considered. Use Intel Optane SSDs for higher write throughput and endurance. A NAND-based 800 GB drive with an endurance rating of three DWPD will last five years only if the sustained write workload is lower than 2,400 GB per day, or 28.4 MB/s written.9 Put another way, even a light sustained I/O load such as 450 32-KB write IOPS would fill the drive three times per day, assuming the use of RAID 1 as data protection. In contrast, an Intel Optane SSD P5800X 400 GB drive has an endurance rating of 100 DWPD. It can sustain 40,000 GB written per day, or about 470 MB/s, for five years.10 That is a 16X increase in endurance. If customers use a three-DWPD drive for the cache tier, they may increase their risk of costly downtime or may need to add more cache drives to the cluster to increase longevity, which can increase configuration costs.

For more details about sizing the cache tier, refer to the following VMware blogs: Extending All Flash vSAN Cache Tier Sizing Requirement for Different Endurance Level Flash Device; Do I Need a Bigger Write Buffer?

Capacity tier: Choose between SATA SSDs (lower cost) or NVMe SSDs (higher performance) like the Intel® SSD D7‑P5510 Series.

Benchmarking Your Cluster

vSAN uses HCIBench as the benchmark to measure the IOPS and latency of different I/O workloads. The settings suggested in this guide have used HCIBench to provide optimal results on Intel Xeon Scalable processors. Note that HCIBench is a synthetic workload. Hence, settings that result in improved results in the benchmarks may not necessarily translate to similar gains on real-world workloads. For more details on performance using the settings in this guide, refer to the vSAN Planning and Deployment guide.

Experience the Difference
Intel® Optane™ technology can be used for delivering large memory pools or in applications requiring fast caching or fast storage. Intel Optane technology is available in a variety of products and solutions, including VMware vSAN and many others. For more information, go to Intel Optane Technology for Data Centers.

Installation and Configuration

Cache Tier Considerations
As mentioned earlier, it is recommended that all ESXi hosts in the cluster share similar or identical configurations across all cluster members, including similar or identical storage configurations. This consistent configuration balances VM storage components across all devices and hosts in the cluster.

In all-flash disk groups, vSAN uses the cache tier SSD as a write buffer. One-hundred percent of a cache device's capacity, up to a maximum of 600 GB, is used for the cache tier, while any remaining capacity is used for endurance.
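The endurance and write-buffer figures in this section are straightforward arithmetic, and reproducing them is a useful sanity check when comparing candidate cache drives. The following Python sketch is illustrative only and is not part of the original guide; it derives the daily write budget and sustained write rate implied by a drive's DWPD rating, the five-year TBW figure, and the usable vSAN write buffer given the 600 GB per-device cap described above. The two example drives mirror the 800 GB, 3-DWPD NAND device and the 400 GB, 100-DWPD Intel Optane SSD P5800X discussed under Recommended Hardware.

```python
# Reproduce the cache-tier endurance arithmetic used in this guide:
#   GB written per day = capacity (GB) x DWPD
#   sustained MB/s     = GB/day x 1000 / 86,400 seconds
#   TBW over warranty  = capacity (TB) x DWPD x 365 x warranty years
#   vSAN write buffer  = min(capacity, 600 GB) per cache device (all-flash)
# Small differences from the figures quoted in the guide (28.4 and 470 MB/s)
# come from rounding direction only.

def cache_drive_budget(capacity_gb, dwpd, warranty_years=5):
    gb_per_day = capacity_gb * dwpd
    mb_per_sec = gb_per_day * 1000 / 86_400
    tbw = (capacity_gb / 1000) * dwpd * 365 * warranty_years
    write_buffer_gb = min(capacity_gb, 600)
    return gb_per_day, mb_per_sec, tbw, write_buffer_gb

for name, capacity_gb, dwpd in [
        ("800 GB NAND SSD, 3 DWPD", 800, 3),
        ("Intel Optane SSD P5800X 400 GB, 100 DWPD", 400, 100)]:
    gb_day, mbs, tbw, buffer_gb = cache_drive_budget(capacity_gb, dwpd)
    print(f"{name}: {gb_day:,} GB/day, ~{mbs:.0f} MB/s sustained, "
          f"{tbw:,.0f} TBW over 5 years, {buffer_gb} GB usable write buffer")
```

For the 400 GB P5800X this reproduces the 73,000 TBW figure cited in the next paragraphs, roughly 16 times the daily write budget of the 3-DWPD NAND example.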

When choosing a cache tier device, look for four key characteristics:
• High random and sequential write throughput capability, to enable predictable, consistent I/O performance even when there are sudden variations in the aggregated cluster workload.
  – The Intel Optane SSD P5800X provides nearly 8x higher write throughput per GB than NAND flash SSDs.11
• Ability to sustain a low read latency even when processing high concurrent write activity.
  – The Intel Optane SSD P5800X can maintain a low read latency with almost any concurrent write load.12
  – The Intel Optane SSD P5800X provides a significant advantage over NAND when the workload shifts to a 50/50 mixed read/write, which is typical for a cache tier.13
• High read and write I/O capabilities, to keep up with today's demand and still provide sufficient headroom for future growth.
• High endurance, so that the SSD can handle heavy write activity without wearing out too fast.
  – The Intel Optane SSD P5800X provides 100 DWPD, compared to three DWPD for a NAND SSD.14
  – The Intel Optane SSD P5800X supports a 73,000 TB written (TBW) rating, compared to the Intel® SSD D7-P5600 drive's 8,760 TBW rating.15
Intel Optane SSDs are ideal for the cache tier because they satisfy all four characteristics. For more information about caching with vSAN, read the VMware blog.

Capacity Tier Considerations
When writes are being performed, the data is initially staged in the cache tier. As the cache tier fills, the data is de-staged to the disks in the capacity tier. This means that:
• The cache tier must be capable of dealing with both the incoming writes from the VMs and the reads from the de-staging activity.
• The capacity tier must be capable of processing the write requests from the de-staging activity.
The VM read requests in an all-flash vSAN cluster are performed directly on the capacity tier—most of the time—so the speed and throughput of the capacity tier are important to properly serve the cache de-staging activity and the VM read requests. This capacity tier throughput and speed depend on the characteristics of the drives used and on the number of drives.

While a configuration with just one capacity drive is possible, two or more are a better option for performance and cost. Similarly, SATA drives can satisfy several use cases, but as more workloads are aggregated in the vSphere/vSAN cluster and new data-intensive workloads are added, the throughput of the individual drives increases in relevance, affecting both individual VMs and overall cluster throughput. To maximize cluster performance, we recommend using PCIe 4.0 x4 NVMe drives, such as the Intel SSD D7-P5510 Series.

vSphere Sizing Recommendations
Appropriately configuring your VMware vSphere cluster, including the vSAN cache and capacity tiers, is crucial to achieving optimal performance (see Table 2).

Table 2. High-Performance Host Configuration Recommendation

Component              Recommendation
CPU                    2x Intel® Xeon® Gold 6348 processor (28 cores, 2.6 GHz)
Memory                 256 GB (16x 16 GB)
Intel® Optane™ PMem    1024 GB (8x 128 GB)
Storage Capacity       8x Intel® SSD D7-P5510 3.84 TB
Cache Drives           2x Intel® Optane™ SSD P5800X 400 GB
Network                1x Intel® Ethernet Adapter E810C 100 GbE NIC (for some deployments, 25 GbE may be sufficient)

Setup Summary
At a high level, setting up a vSAN environment consists of three steps, detailed below.

Install ESXi
1. Download VMware vSphere 7.0 (use 7.0U2 or higher) and create bootable installation media.
2. Install ESXi 7.0 to the boot device:
  – Enable SSH if planning to benchmark the vSAN cluster.
  – Customize networking settings.

Install vCenter
1. Download the VMware vCenter Server Appliance (VCSA) 7.0.
2. Mount the .iso and execute the installer:
  – Provide ESXi host information to deploy VCSA.
  – Provide networking/subnet details if using static IP addressing.
  – Select vCenter sizing dependent on data center needs.
3. Navigate to the URL associated with your vCenter Server Appliance for further configuration, if required.

Configure vSAN
1. Log in to the vCenter instance.
2. Create a data center, if required.
3. Create a cluster, if required.
4. Create a distributed virtual switch:
  – Assign hosts to the distributed virtual switch and assign uplinks.
  – Create port groups for vSAN, vMotion, and VM traffic.
  – Create vmkernel adapters for the vSAN and vMotion port groups and apply them to all hosts that will be a part of the vSAN cluster.
5. Enable the vSAN service on the cluster (a scripted example follows this list):
  – Choose data-efficiency features such as deduplication and compression.
  – Select cache tier devices and capacity tier devices.
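For teams that script their rollouts, the cluster-level portion of step 5 can also be performed through the vSphere API. The following pyVmomi sketch is a minimal, illustrative example and is not part of the original guide; it assumes a recent pyVmomi release, the vCenter address, credentials, and cluster name are placeholders, and disk-group claiming plus data-efficiency features (deduplication and compression) are still configured through the vSphere Client or the vSAN management SDK.

```python
# Minimal, illustrative sketch: enable the vSAN service on an existing cluster.
# Requires pyVmomi (pip install pyvmomi); all connection details are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  disableSslCertValidation=True)  # lab only; validate certificates in production
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "vsan-cluster")

    # Turn on vSAN but leave disk claiming manual, so cache-tier and
    # capacity-tier devices can be selected explicitly per this guide.
    spec = vim.cluster.ConfigSpecEx(
        vsanConfig=vim.vsan.cluster.ConfigInfo(
            enabled=True,
            defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
                autoClaimStorage=False)))
    task = cluster.ReconfigureComputeResource_Task(spec, True)
    print("vSAN enable task submitted:", task.info.key)
finally:
    Disconnect(si)
```

After the task completes, disk groups can be created and cache versus capacity devices claimed from the vSphere Client or via the vSAN management SDK.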

Summary

The significant operational benefits continue to drive the consolidation of workloads on HCI/hybrid cloud platforms. And an increase in VM density continues to be the go-to option to control costs, energy use, and footprint. But as the number of VMs per cluster increases, so does the demand for I/O to the storage systems and network, requiring data center architects to rethink the components that they use in these HCI platforms.

Using Intel Optane SSDs as cache tier devices, along with higher-performance capacity drives and a wider network bandwidth, enables a significant increase in the overall cluster I/O throughput. This makes the configuration more scalable and enables higher VM density, creating good opportunities for cost and footprint reduction.16 In addition, the higher write throughput of Intel Optane SSDs provides a higher level of predictability; in other words, it enables the vSAN system to sustain consistent I/O latency even when some of the VMs that share the cluster resources cause sudden variations in the I/O load. This facilitates the management of workloads with stricter service-level objectives.

As the size of the hosts used in the vSphere and vSAN clusters continues to increase, the cost of memory takes a larger portion of the total host cost. The use of Intel Optane PMem in Memory Mode creates significant opportunities to reduce infrastructure costs.

By using the recommendations contained in this best practices guide, you can create a vSAN storage environment that can scale as storage needs increase, while meeting today's performance and operational efficiency requirements.

1. Testing by Intel as of May 10, 2021.
Intel® Optane™ SSD Configuration: 4 nodes, 2x Intel® Xeon® Gold 6348 processor (28 cores, 2.6 GHz), total memory = 256 GB (16 slots/16 GB/3200 MT/s), Intel® Hyper-Threading Technology = ON,
Intel® Turbo Boost Technology = ON, 2x Intel® Optane™ SSD P5800X (cache) 400 GB and 8x Intel® SSD D7-P5510 3.84 TB (capacity), 1x Intel® Ethernet Adapter E810C 100 GbE, BIOS = 2.1 (ucode =
05003003), VMware vSphere 7.0U2, HCIbench 2.5.3.
Intel® SSD D7-5600 Configuration: 4 nodes, 2x Intel® Xeon® Gold 6348 processor (28 cores, 2.6 GHz), total memory = 256 GB (16 slots/16 GB/3200 MT/s), Intel Hyper-Threading Technology = ON,
Intel Turbo Boost Technology = ON, 2x Intel® SSD D7-P5600 (cache) 1.6 TB and 8x Intel SSD D7-P5510 3.84 TB (capacity), 1x Intel Ethernet Adapter E810C 100 GbE, BIOS = 2.1 (ucode =
0x8d055260), VMware vSphere 7.0U2, HCIbench 2.5.3.
2. Testing by Intel as of May 10, 2021.
Intel® Optane™ SSD P5800X Configuration: 4 nodes, 2x Intel® Xeon® Gold 6348 processor (28 cores, 2.6 GHz), total memory = 256 GB (16 slots/16 GB/3200 MT/s), Intel® Hyper-Threading
Technology = ON, Intel® Turbo Boost Technology = ON, 2x Intel® Optane™ SSD P5800X (cache) 400 GB and 8x Intel® SSD D7-P5510 3.84 TB (capacity), 1x Intel® Ethernet Adapter E810C 100
GbE, BIOS = 2.1 (ucode = 05003003), VMware vSphere 7.0U2, HCIbench 2.5.3.
Intel Optane SSD DC P4800X Configuration: 4 nodes, 2x Intel® Xeon® Gold 6348 processor (28 cores, 2.6 GHz), total memory = 256 GB (16 slots/16 GB/3200 MT/s), Intel® Hyper-Threading
Technology = ON, Intel® Turbo Boost Technology = ON, 2x Intel® Optane™ SSD DC P4800X (cache) 375 GB and 8x Intel® SSD D7-P5510 3.84 TB (capacity), 1x Intel® Ethernet Adapter E810C
100 GbE, BIOS = 2.1 (ucode = 05003003), VMware vSphere 7.0U2, HCIbench 2.5.3.
Intel® SSD D7-5600: 4 nodes, 2x Intel® Xeon® Gold 6348 processor (28 cores, 2.6 GHz), total memory = 256 GB (16 slots/16 GB/3200 MT/s), Intel® Hyper-Threading Technology = ON,
Intel® Turbo Boost Technology = ON, 2x Intel® SSD D7-P5600 (cache) 1.6 TB and 8x Intel® SSD D7-P5510 3.84 TB (capacity), 1x Intel® Ethernet Adapter E810C 100 GbE, BIOS = 2.1 (ucode =
05003003), VMware vSphere 7.0U2, HCIbench 2.5.3.
3. Testing by Intel as of May 10, 2021. Based on 280 VMs - 4 vCPUs per VM, 8 GB MEM, 125 GB usable storage capacity, up to 1,500 IOPS per VM running a 70/30 32 KB I/O load. Overheads and
optimal utilization levels were considered in calculations. Results may vary.
New Configuration: 4 nodes, 2x Intel® Xeon® Gold 6348 processor, (28 cores, 2.6 GHz), total memory = 256 GB (16 slots/32 GB/3200 MT/s), Intel® Hyper-Threading Technology = ON, Intel®
Turbo Boost Technology = ON, 2x Intel® Optane™ SSD P5800X (cache) 400 GB and 8x Intel® SSD D7-P5510 3.84 TB (capacity), 1x Intel® Ethernet Adapter E810C 100 GbE, BIOS = 2.1 (ucode =
0x8d055260), VMware vSphere 7.0U2, vSAN 7.0U2, HCIbench 2.5.3, 8x VMs per host, 2x 150 GB vDisks per VM, 100% WSS.
Baseline Configuration: 4 nodes, 2x Intel® Xeon® Gold 6248 processor (20 cores, 2.5 GHz), total memory = 384 GB (12 slots/32 GB/2933 MT/s), Intel® Hyper-Threading Technology = ON,
Intel® Turbo Boost Technology = ON, 2x Intel® Optane™ SSD DC P4800X (cache) 375 GB and 8x Intel SSD D7-P5510 3.84 TB (capacity), 1x Intel Ethernet Adapter E810C 100 GbE, BIOS = 2.1
(ucode = 05003003), VMware vSphere 7.0U2, vSAN 7.0U2, HCIbench 2.5.3, 8x VMs per host, 2x 150 GB vDisks per VM, 100% WSS.
4. Intel® Optane™ SSD P5800X Series: intel.com/content/www/us/en/products/docs/memory-storage/solid-state-drives/data-center-ssds/optane-ssd-p5800x-p5801x-brief.html
5. CPU cost was estimated. Pricing varies over time. dell.com/en-us/work/shop/cty/pdp/spd/poweredge-r750/pe_r750_14794_vi_vp?configurationid=b605e5ac-c8b9-4578-b0e2-7d9b15772b04.
6. Claim [3] at Intel® Optane™ Persistent Memory 200 Series - 1 - ID:615781 | Performance Index
7. Testing by Intel as of May 10, 2021. Workloads shown: Random Read 4KB, Random Write 64KB, 70/30 R/W 32KB. Drivers: ICEN: 1.5.5.0-1OEM.700.1.0.15843807; Irdman: 1.3.3.0-1OEM.700.1.0.15843807.
2nd Generation Intel® Xeon® Scalable Processor Configuration (no RDMA): 4 nodes, 2x Intel® Xeon® Gold 6248 processor (20 cores, 2.5 GHz), total memory = 384 GB (12 slots/32 GB/2933
MT/s), Intel® Hyper-Threading Technology = ON, Intel® Turbo Boost Technology = ON, 2x Intel® Optane™ SSD DC P4800X (cache) 375 GB and 8x Intel SSD D7-P5510 3.84 TB (capacity), 1x Intel
Ethernet Adapter E810C 100 GbE, BIOS = 2.1 (ucode = 05003003), VMware vSphere 7.0U2, vSAN 7.0U2, HCIbench 2.5.3.
3rd Generation Intel® Xeon® Scalable Processor Configuration (no RDMA): 4 nodes, 2x Intel® Xeon® Gold 6348 processor, (28 cores, 2.6 GHz), total memory = 256 GB (16 slots/32 GB/3200
MT/s), Intel® Hyper-Threading Technology = ON, Intel® Turbo Boost Technology = ON, 2x Intel® Optane™ SSD P5800X (cache) 400 GB and 8x Intel® SSD D7-P5510 3.84 TB (capacity), 1x Intel®
Ethernet Adapter E810C 100 GbE, BIOS = 2.1 (ucode = 0x8d055260), VMware vSphere 7.0U2, vSAN 7.0U2, HCIbench 2.5.3.
3rd Generation Intel® Xeon® Scalable Processor Configuration (with RDMA): 4 nodes, 2x Intel® Xeon® Gold 6348 processor, (28 cores, 2.6 GHz), total memory = 256 GB (16 slots/32 GB/3200
MT/s), Intel® Hyper-Threading Technology = ON, Intel® Turbo Boost Technology = ON, 2x Intel® Optane™ SSD P5800X (cache) 400 GB and 8x Intel® SSD D7-P5510 3.84 TB (capacity), 1x Intel®
Ethernet Adapter E810C 100 GbE, BIOS = 2.1 (ucode = 0x8d055260), VMware vSphere 7.0U2, vSAN 7.0U2, HCIbench 2.5.3.
8. Testing by Intel as of May 10, 2021.
New Configuration: 4 nodes, 2x Intel® Xeon® Gold 6248 processor (20 cores, 2.5 GHz), total memory = 384 GB (12 slots/32 GB/2933 MT/s), Intel® Hyper-Threading Technology = ON, Intel®
Turbo Boost Technology = ON, 2x Intel® Optane™ SSD DC P4800X (cache) 375 GB and 8x Intel SSD D7-P5510 3.84 TB (capacity), 1x Intel Ethernet Adapter E810C at 100 GbE data rate, BIOS =
2.1 (ucode = 05003003), VMware vSphere 7.0U2, vSAN 7.0U2, HCIbench 2.5.3.
Baseline Configuration: 4 nodes, 2x Intel® Xeon® Gold 6248 processor (20 cores, 2.5 GHz), total memory = 384 GB (12 slots/32 GB/2933 MT/s), Intel® Hyper-Threading Technology = ON,
Intel® Turbo Boost Technology = ON, 2x Intel® Optane™ SSD DC P4800X (cache) 375 GB and 8x Intel SSD D7-P5510 3.84 TB (capacity), 1x Intel Ethernet Adapter E810C limited to 25 GbE data
rate, BIOS = 2.1 (ucode = 05003003), VMware vSphere 7.0U2, vSAN 7.0U2, HCIbench 2.5.3.
9. Calculation: 28.44 MB/s * 3,600 seconds * 24 hours = ~2,400 GB.
10. Calculation: 470 MB/s * 3,600 seconds * 24 hours = ~40,000 GB.
11. See endnote 1.
12. See various claims at https://edc.intel.com/content/www/us/en/products/performance/benchmarks/intel-optane-ssd-p5800x-series.
13. See endnote 1.
14. Intel® Optane™ SSD P5800X: https://ark.intel.com/content/www/us/en/ark/products/201859/intel-optane-ssd-dc-p5800x-series-1-6tb-2-5in-pcie-x4-3d-xpoint.html and
Intel® NAND SSD: https://www.intel.com/content/www/us/en/products/sku/202707/intel-ssd-d7p5600-series-1-6tb-2-5in-pcie-4-0-x4-3d3-tlc/specifications.html
15. TBW rating calculation: Size of drive in TB x DWPD x 365 days x 5 years.
16. See endnotes 1-3.
Performance varies by use, configuration and other factors. Learn more at intel.com/PerformanceIndex. Performance results are based on testing as of dates shown in configurations and
may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies
may require enabled hardware, software or service activation. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other
names and brands may be claimed as the property of others. 1221/ACHO/KC/PDF 337370-002US
