
Understanding vSphere Storage

Matt Allford
DevOps Engineer

@mattallford www.mattallford.com
Storage Types

Local Storage
- Connected directly to the server
- Could be internal or external to the server chassis
- Introduces a single point of failure
- Typically not shared across multiple servers (exception: a distributed storage system such as vSAN)

Network Storage
- A storage device accessible over a storage network
- Allows multiple hosts to connect to the same storage volumes
- Highly performant, resilient, and redundant
Fibre Channel

- High speed data transfer protocol for raw block data
- Uses the Fibre Channel network to carry SCSI traffic
- Connects ESXi hosts to the storage device(s)
- ESXi hosts need Fibre Channel Host Bus Adapters (HBAs)
- Fibre Channel over Ethernet (FCoE) is a variant that carries the traffic over Ethernet networks
- After connecting hosts to storage, you would create a datastore
iSCSI

- Internet Protocol based storage network solution for raw block data
- Connects ESXi hosts to the storage device(s) using TCP/IP networking
- Uses the TCP/IP network to carry SCSI commands
- Adapter types:
  - Software iSCSI adapter
  - Dependent hardware iSCSI adapter
  - Independent hardware iSCSI adapter
- After connecting hosts to storage, you would create a datastore
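To make the adapter types concrete, here is a minimal sketch using pyVmomi (VMware's open-source Python SDK for the vSphere API) that lists the iSCSI adapters on a host. The vCenter address, credentials, and host selection are hypothetical placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical lab vCenter and credentials - replace with your own.
ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first ESXi host in the inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]

# Software, dependent, and independent hardware iSCSI adapters all
# derive from vim.host.InternetScsiHba.
for hba in host.config.storageDevice.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba):
        kind = "software" if hba.isSoftwareBased else "hardware"
        print(f"{hba.device}: {hba.iScsiName} ({kind})")

Disconnect(si)
```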
NFS

- Network File System protocol over TCP/IP to access an NFS volume for file-level access
- Connects ESXi hosts to the storage device(s) using TCP/IP networking
- Uses a standard network adapter to connect to the storage device
- ESXi supports NFS protocol versions 3 and 4.1
- You don't format the NFS volume with a local file system such as VMFS
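As a sketch of how an NFS datastore gets mounted programmatically, reusing the pyVmomi connection and `host` object from the earlier example; the NFS server, export path, and datastore name here are hypothetical:

```python
from pyVmomi import vim

# 'host' is a vim.HostSystem obtained as in the earlier sketch.
spec = vim.host.NasVolume.Specification(
    remoteHost="nfs01.lab.local",   # hypothetical NFS server
    remotePath="/export/vmware",    # hypothetical export
    localPath="nfs-datastore-01",   # datastore name as seen in vSphere
    accessMode="readWrite",
    type="NFS",                     # "NFS" = version 3, "NFS41" = version 4.1
)
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted:", ds.name)
```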
Storage Datastore Types for vSphere

- Virtual Machine File System (VMFS)
- Network File System (NFS)
- vSAN
- Virtual Volume (vVol)
VMFS

- Virtual Machine File System (versions 5 and 6)
- A specialized high-performance file system optimized for storing virtual machines
- VMFS datastores are deployed on block storage devices
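A short pyVmomi sketch (same assumed connection as above) that lists the VMFS datastores in the inventory and their on-disk version, showing where the version 5/6 distinction surfaces in the API:

```python
from pyVmomi import vim

# 'content' is the ServiceInstance content from the earlier sketch.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # Only VMFS datastores carry a VmfsDatastoreInfo; NFS/vSAN/vVol do not.
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        gb = ds.summary.capacity / (1024 ** 3)
        print(f"{ds.name}: VMFS {ds.info.vmfs.majorVersion}, {gb:.0f} GiB")
```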
NFS

- ESXi has a built-in NFS client
- Supports NFS versions 3 and 4.1
- ESXi hosts mount shares located on a Network Attached Storage (NAS) server as an NFS datastore
vSAN

- VMware's solution for software defined storage
- VMware vSAN is a distributed layer of software that runs natively as a part of the ESXi hypervisor
- Eliminates the need for external shared storage
- Requires hosts to have local storage devices
- Aggregates all local capacity on the hosts into a single logical datastore shared by all hosts in the cluster
- A cluster has one vSAN datastore

vVol

- Virtualizes SAN and NAS devices by abstracting physical hardware resources into logical pools of capacity
- More granular than a VMFS datastore or NFS datastore, providing greater control and configuration for the virtual machine
- A virtual volume gets created for each component of a virtual machine
- Requires the use of VM storage policies: a set of rules that contains placement and QoS requirements for a virtual machine
- vVols supports NFS 3 and 4.1, iSCSI, Fibre Channel, and FCoE (requires storage vendor support)
vSphere APIs for Storage Awareness (VASA)

- Enables communication between vCenter Server and the underlying storage
- Storage devices can inform vCenter about their configurations, capabilities, and health
- VASA can deliver VM storage requirements from vCenter to the storage devices
- VASA is essential for vVols, vSAN, and storage policies

vSphere APIs for Array Integration (VAAI)

- The goal of VAAI is to help storage vendors provide hardware assistance to speed up I/O operations
- Allows vSphere to offload storage operations to the device
- Reduces overhead on ESXi, storage network traffic, and latency
- VAAI primitives:
  - Hardware assisted locking (ATS)
  - XCOPY (Extended copy)
  - Write Same (Zero)
  - Dead space reclamation (UNMAP)
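One way to see whether a device accepts these offloads is the per-disk vStorage (VAAI) status that ESXi exposes. A minimal pyVmomi sketch, reusing the assumed `host` object from the earlier examples:

```python
from pyVmomi import vim

# Each SCSI disk reports whether hardware acceleration (VAAI) is supported:
# "vStorageSupported", "vStorageUnsupported", or "vStorageUnknown".
for lun in host.config.storageDevice.scsiLun:
    if isinstance(lun, vim.host.ScsiDisk):
        print(f"{lun.canonicalName}: {lun.vStorageSupport}")
```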
Multipathing and Failover

[Diagram: An ESXi host with multiple paths to a storage array presenting Volume 1 (3 TB) and Volume 2 (3 TB)]

Path Selection Policies
- Fixed
- Most Recently Used (MRU)
- Round Robin
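The active path selection policy is visible, and changeable, per LUN through the host storage system. A sketch assuming the `host` object from the earlier examples; the Round Robin change is commented out because it modifies live multipathing state:

```python
from pyVmomi import vim

storage = host.configManager.storageSystem

# Each multipath-capable LUN reports its current PSP, e.g. VMW_PSP_FIXED,
# VMW_PSP_MRU, or VMW_PSP_RR.
for lun in storage.storageDeviceInfo.multipathInfo.lun:
    print(f"{lun.id}: {lun.policy.policy}")

    # To switch a LUN to Round Robin (uncomment to apply):
    # storage.SetMultipathLunPolicy(
    #     lunId=lun.id,
    #     policy=vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR"))
```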
Pluggable Storage Architecture

[Diagram: Within the VMkernel, the Pluggable Storage Architecture contains the VMware NMP (with its PSPs and SATPs), alongside third-party MPPs and the VMware HPP]
vSphere Storage Policies

Storage Policy Based Management (SPBM) plays a major role in the SDDC by helping to align storage with application demands.

- Uses the VASA provider to present the characteristics and capabilities of storage devices to the policy engine
- Can also use tags on datastores within a storage policy rule
- A policy contains a number of rules defining availability, performance, replication, compression, and deduplication
- Monitors compliance of virtual machines and disks
Datastore Clusters

- A collection of datastores with shared resources
- When you add a datastore to a cluster, the datastore's resources become part of the cluster's resources
- You can use vSphere Storage DRS to manage storage resources
Storage DRS

- Load balancing for storage across datastores
- Initial placement and ongoing balancing
- Space utilization balancing
- I/O latency load balancing
- Anti-affinity rules
- Datastore maintenance mode
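A small pyVmomi sketch (same assumed connection) that lists datastore clusters, their aggregate capacity, and whether Storage DRS is enabled; names and sizes will differ per environment:

```python
from pyVmomi import vim

# Datastore clusters appear in the API as vim.StoragePod objects.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.StoragePod], True)
for pod in view.view:
    cfg = pod.podStorageDrsEntry.storageDrsConfig.podConfig
    gb = pod.summary.capacity / (1024 ** 3)
    print(f"{pod.name}: {gb:.0f} GiB, Storage DRS enabled: {cfg.enabled}")
    for ds in pod.childEntity:
        print(f"  member datastore: {ds.name}")
```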


Datastore Cluster Overview

[Diagram: An ESXi host connects over the storage network to a storage array presenting Volume 1, Volume 2, and Volume 3 (3 TB each, formatted as vmfs1, vmfs2, and vmfs3); the three datastores are grouped into a 9 TB datastore cluster]
Storage I/O Control (SIOC)

- Provides a way to prioritize and limit the use of storage I/O from the datastore
- Can be enabled on a datastore
- Set the SIOC threshold value; SIOC only activates when the threshold value is exceeded
- Storage shares and limits are configured on virtual disks in VM settings
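Shares and limits live on the virtual disk's storage I/O allocation. A sketch that caps a hypothetical VM's first disk at 1000 IOPS with custom shares, reusing the assumed connection; the VM name and values are illustrative:

```python
from pyVmomi import vim

# Find a VM by name (hypothetical name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")

# Locate the first virtual disk on the VM.
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

# Apply a 1000 IOPS limit and custom shares to that disk.
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    limit=1000,
    shares=vim.SharesInfo(level="custom", shares=2000))

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(operation="edit", device=disk)])
vm.ReconfigVM_Task(spec=spec)
```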
vSAN Overview

- Can be configured as hybrid or all flash
- Storage traffic replicates over the TCP/IP network to other ESXi hosts
- Driven by storage policies
- Supports vSphere features such as HA, vMotion, and DRS
Basics of vSAN

[Diagram: A vSAN cluster in which a VM's VMDK is placed on the vSAN datastore according to the VM's assigned storage policy]
vSAN Capabilities

- Fault domains
- Stretched clustering
- iSCSI target service
- Deduplication and compression
- Data at rest encryption
vSAN Hardware Requirements

- All capacity devices, drivers, and firmware versions in your vSAN configuration must be certified and listed in the vSAN section of the VMware Compatibility Guide
- One SAS or SATA host bus adapter (HBA), or a RAID controller that is in passthrough mode or RAID 0 mode
- One cache disk and one capacity disk
vSAN Cluster Requirements

- A standard vSAN cluster must contain a minimum of three hosts that contribute capacity
- A two-host cluster consists of two data hosts and an external witness
- A host that resides in a vSAN cluster must not participate in other clusters
vSAN Disk Group Overview

[Diagram: An ESXi host with five disk groups, each containing one cache device (1 per group) and one to seven capacity devices (1-7 per group)]
vSAN Networking

[Diagram: ESXi Host 1 and ESXi Host 2 each have a vSAN VMkernel adapter attached to a vSAN port group on a Distributed Switch, with physical NICs uplinked to a physical switch]
vSAN Networking Requirements

- Each host must have minimum bandwidth dedicated to vSAN: 1 Gbps for hybrid, 10 Gbps for all flash
- Each host in the vSAN cluster must have a VMkernel network adapter for vSAN traffic
- All hosts in the cluster must be connected to a vSAN layer 2 or layer 3 network
- vSAN supports both IPv4 and IPv6
- Network latency: 1 ms RTT for standard clusters, 5 ms RTT between the two sites for stretched clusters, and 200 ms RTT from a site to the vSAN witness
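To verify the VMkernel adapter requirement, the host's virtual NIC manager can be queried for the "vsan" traffic type. A sketch against the assumed `host` object from the earlier examples:

```python
# Query which VMkernel adapters are candidates for, and selected to carry,
# vSAN traffic on this host.
cfg = host.configManager.virtualNicManager.QueryNetConfig("vsan")
selected = set(cfg.selectedVnic or [])
for vnic in cfg.candidateVnic:
    tag = "vSAN traffic enabled" if vnic.key in selected else "not enabled"
    print(f"{vnic.device} ({vnic.spec.ip.ipAddress}): {tag}")
```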
You must disable vSphere HA before you enable vSAN on the cluster.

On a vSAN enabled cluster, the vSAN storage network is used by vSphere HA.
VM Clone Operations

- Full Clone: a child VM that shares nothing with the parent VM after creation
- Linked Clone: a child VM that shares virtual disks with the parent VM
- Instant Clone: a destination VM that shares virtual disks and memory with the source VM
Instant Clone

- Creates a powered-on VM (destination) from the state of another VM (source)
- The state of the processor, virtual devices, memory, and disk of the destination VM is identical to the source
- Can be created from either a running or frozen source VM
- Uses a copy-on-write architecture, leveraging delta disks when creating a destination VM
- Changes to the destination VM are isolated to only that destination machine
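The instant clone operation is exposed directly in the vSphere API (vSphere 6.7 and later). A minimal pyVmomi sketch that instant-clones a hypothetical running source VM, reusing the assumed connection; the VM names are placeholders:

```python
from pyVmomi import vim

# Find the running source VM by name (hypothetical).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
source_vm = next(v for v in view.view if v.name == "source-vm")

# An empty RelocateSpec keeps the clone on the same host and datastore.
spec = vim.vm.InstantCloneSpec(
    name="source-vm-ic01",
    location=vim.vm.RelocateSpec())

# The destination VM is created powered on, sharing memory and disks
# (via delta disks) with the source.
task = source_vm.InstantClone_Task(spec=spec)
```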
Instant Clone Overview

[Diagram, running source: the source VM and each instant cloned VM share the flat VMDK, with changes captured in per-VM delta disks]

Instant Clone Overview – Frozen Source

[Diagram, frozen source: the instant cloned VMs share the frozen source VM's flat VMDK, each writing to its own delta disk]
Instant Clone Use Cases

- Virtual Desktop Infrastructure (VDI)
- Application testing / automation
VMFS Datastore

[Diagram: VM1 and VM2 each keep their files (VMDK, VMX, NVRAM, LOG) on the same VMFS volume / LUN presented by the storage array]
Virtual Volumes Components

- Config
- Data
- Memory
- Swap
- Other
vVols Architecture

[Diagram: VMs reside on a vVols datastore; ESXi reaches the individual vVols in the storage container on the storage array through a Protocol Endpoint over FC/FCoE/iSCSI/NFS, while the VASA Provider handles the management path]
Cloud Native Storage Components

[Diagram: Pods in a Kubernetes cluster consume Persistent Volumes through a Kubernetes storage class; the Container Storage Interface (CSI) driver passes requests to the CNS control plane in vCenter, which provisions FCD virtual disks and vSAN file shares, governed by storage policies, on vSAN/VMFS/NFS/vVols]
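On the Kubernetes side, the link into this stack is a StorageClass that names the vSphere CSI provisioner and, optionally, an SPBM policy. A sketch using the official Kubernetes Python client; the class name "vsan-gold" and policy name "Gold" are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the target cluster

# StorageClass backed by the vSphere CSI driver; PVCs that reference it are
# provisioned as FCD virtual disks placed per the named SPBM storage policy.
sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vsan-gold"),  # hypothetical class name
    provisioner="csi.vsphere.vmware.com",
    parameters={"storagepolicyname": "Gold"},        # hypothetical policy name
)
client.StorageV1Api().create_storage_class(sc)
```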
Up Next:
Understanding VMware Products and Solutions
