VPLEX Administration Student Guide
VPLEX Ecosystem
Storage
Storage Arrays
RecoverPoint
Applications
o Many applications that are supported on Block Storage can run on VPLEX volumes
VPLEX Architecture
VPLEX at a Glance
• Continuous availability
• Data mobility without host disruption
VPLEX Solution
Dell EMC VPLEX is an important part of the Data Protection and Availability
continuum. It delivers data availability and non-disruptive mobility across arrays in a
single data center or across data centers separated by distance. VPLEX takes
technologies like VMware and other clusters that assume a single storage instance
and enables them to function across arrays and across distance.
Availability Challenges
Availability challenges arise from tech refresh and migrations, data center changes, class of service changes, maintenance, and workload balancing. Traditionally, each of these means:
• Application downtime
VPLEX Metro
Distributed Virtual Volumes have mirror legs at more than one cluster. Some of the benefits of
implementing a distributed active-active data center include:
Increased availability - both data centers can serve production workloads while
providing high availability backup for the other data center.
Increased asset utilization - passive data centers leave resources idle, while
active-active data centers make the most use of resources.
[Diagram: Site A and Site B, synchronous latency of up to 10 ms]
VPLEX Metro
The VPLEX product family is composed of VPLEX Local and VPLEX Metro
systems.
A VPLEX Local provides a seamless ability to manage and mirror data between multiple
heterogeneous arrays from a single interface. A VPLEX Local configuration consists of a single
VPLEX cluster. A VPLEX cluster consists of one, two, or four engines.
A VPLEX Metro enables active/active, block-level access to data between two sites within
synchronous distances. The distance is limited not only by physical distance but also by host
and application requirements. Depending on the application, VPLEX clusters can be installed
with inter-cluster links that have up to 10 ms round trip time (RTT). The combination of virtual
storage with VPLEX Metro and virtual servers enables the transparent movement of virtual
machines and storage across synchronous distances. This technology provides improved
utilization and availability across heterogeneous arrays and multiple sites.
A VPLEX appliance model packages VPLEX VS6 with Dell Technologies
all-flash systems:
XtremIO
Unity All Flash
Dell EMC PowerStore
VMAX AF
PowerMax
Zero RTO
Zero RPO
Always on, no matter what
No host disruption
[Diagram: data moving from an old array to a new array]
[Diagram: transparent mobility - a virtual volume remains presented to the host while its underlying physical device changes]
Simple DR testing
Active assets in both sites
No complex failover procedures
Reduces planned and unplanned downtime
Reduces TCO
[Diagram: Site A and Site B within ≤ 10 ms RTT (application specific), plus a third site, Site C]
A VPLEX Metro supports VMware HA and FT. It also provides VMware DRS and vMotion
integration. For the VMs and applications to fail over transparently, the data must be shared
across cluster nodes. VMware ESXi clustering requires shared storage to provide non-
disruptive movement of virtual machines. VPLEX Metro allows storage to span multiple data
centers, allowing ESXi servers in different failure domains to share access to Datastores
created on VPLEX Distributed Storage. VPLEX Metro fits perfectly with VPLEX distributed
cache coherence for automatic sharing and load balancing.
VPLEX supports VMware vSphere® Storage APIs - Array Integration (VAAI), also referred to
as hardware acceleration or hardware offload APIs. The APIs define a set of "storage
primitives" that enable the ESXi host to offload certain storage operations to the array (VPLEX),
which reduces resource overhead on the ESXi hosts and can significantly improve
performance for storage-intensive operations such as storage cloning, zeroing, and so on.
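For a quick host-side check of these offload primitives, you can query a device's VAAI status from an ESXi host. This is an illustrative sketch: the naa ID is just an example, and the reported statuses are placeholders that vary by environment and VPLEX version.
esxcli storage core device vaai status get -d naa.6000144000000010e00c2ecb3c5914fb
naa.6000144000000010e00c2ecb3c5914fb
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported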
[Diagram: RecoverPoint local and remote protection - sync/async replication over an IP or FC WAN, with a journal at each site]
VPLEX and RecoverPoint products together offer continuous availability and operational and
disaster recovery. For customers requiring even further levels of availability, VPLEX can be
used with RecoverPoint to enable a third site for disaster recovery. This site can be located at
any supported distance, ensuring operational resilience in the event of a regional outage.
MetroPoint Solution
[Diagram: MetroPoint - production sites A and B in a VPLEX Metro provide continuous availability with automatic switchover; RecoverPoint at each site provides operational recovery, and a single DR copy is maintained at DR site C for remote and local protection]
Using RecoverPoint with a VPLEX Metro configuration allows for a unique topology
referred to as MetroPoint. The MetroPoint topology provides a three- or four-site
solution, allowing protection to continue if one of the VPLEX Metro clusters fails.
VPLEX Architecture
o Virtual volume writes are cached and kept consistent and coherent
across all directors in a VPLEX system. Per-volume caching
implements local and global cache, and maintains consistency
between all directors in a VPLEX.
Device Virtualization
VPLEX Constructs
[Diagram: the VPLEX storage stack - an array LUN is claimed as a Storage Volume, carved into Extents, built into Devices, and presented as a Virtual Volume]
Virtual Volume - A VPLEX Virtual Volume is created from the top-level Device. It is the storage
or Logical Unit presented to one or more hosts as part of a Storage View.
Device - A VPLEX Device is the application of a RAID topology to one or more extents. A Device
can be either a Local Device or a Distributed Device (VPLEX Metro topology is required). A
Device may use another device as an extent. Devices can be made up of any combination of
local devices and extents as appropriate for a particular RAID topology/geometry.
Extent - A VPLEX Extent is a slice or portion of the available space from a Storage Volume.
With VPLEX, you can create an extent that uses the entire capacity of the underlying Storage
Volume, or just a portion of the space. Extents provide a convenient means of allocating what is
needed while taking advantage of the dynamic thin allocation capabilities of the back-end array.
Storage Volume - Storage Volumes are the Logical Units (LU) presented from an array to the
VPLEX BE ports. These are initially unclaimed but can be claimed as part of the provisioning
process used by VPLEX. This can be done using the CLI or the VPLEX UI.
[Diagram: a distributed virtual volume spanning Site 1 and Site 2 over an IP or FC WAN]
VS2 Engine
[Diagram: a VS2 engine with two directors, A and B. Each director has four 8 Gbps FC front-end ports (0-3) connecting to hosts, four 8 Gbps FC back-end ports (0-3) connecting to storage volumes, inter-cluster COM ports, intra-cluster COM ports, CPU cores, and distributed cache]
Front-End (FE) ports - Fibre Channel ports which are zoned to host HBA
ports. They provide host connectivity to VPLEX Virtual Volumes.
CPUs and Distributed Cache - key components for data
processing and storage virtualization.
COM and WAN ports - COM ports provide communication with other
directors in the local VPLEX cluster. WAN ports, both FC and IP, provide
communication with remote directors in a VPLEX Metro environment.
A VPLEX cluster can have up to four engines (or eight directors). Each director adds redundancy
and additional cache and processing power. VPLEX architecture is fully redundant to survive any
single point of failure. In any cluster, the fully redundant hardware can tolerate failure down to a
single director remaining with no Data Unavailability or Data Loss condition.
VS6 Engine
[Diagram: a VS6 engine with two directors, A and B. Each director has four 16 Gbps FC front-end ports (0-3) connecting to hosts, four 16 Gbps FC back-end ports (0-3) connecting to storage volumes, inter-cluster COM ports, intra-cluster COM ports, CPU cores, and distributed cache]
VS6 Details
VS6 architectural design is the same as the VS2 architecture. There is still the
same number of FE ports, BE ports, and COM ports.
VS6 engines are much higher performing because they are built from
higher-performing hardware. VS6 directors get their performance boost by using
dual six-core processors, faster FE and BE ports, more cache, and InfiniBand for
intra-cluster communications.
VS2 Technology
[Photos: VS2 engine front view and rear view, showing directors A and B and the SPS units]
VS6 Technology
[Photos: VS6 engine front view and rear view, showing directors A and B]
Comparison
CPUs per director: VS2 has 1 quad-core processor (4 CPU cores); VS6 has 2 six-core processors (12 CPU cores).
VS2
[Diagram: a VS2 VPLEX cluster - directors 1A/1B through 4A/4B connected through redundant FC switches A and B]
VS2 uses Fibre Channel (8 Gbps) for Local COM connections.
VS6
Local COM
[Diagram: a VS6 VPLEX cluster - directors 1A/1B through 4A/4B connected through redundant InfiniBand switches A and B]
VS6 uses InfiniBand (40 Gbps) for Local COM connections.
VS6 engines not only have more cache per director, but the communication paths
(Local COM) between directors in the same cluster use InfiniBand. InfiniBand
provides 40 Gbps data paths between directors.
[Diagram: a quad-engine VPLEX rack - engines 1 through 4, each with its SPS pair and cable management, plus FC switches A and B, UPS A and B, and the management server]
VPLEX is configured as a single, dual, or quad engine cluster. A VPLEX Local has
one cluster, and a Metro has two clusters connected together. A VPLEX Local
cluster consists of a single rack. Most installations use EMC factory-installed racks,
although VPLEX may be deployed in customer racks in the field. The rack contains
1, 2, or 4 VPLEX engines, and each engine consists of 2 directors: A and B. Clusters
may be upgraded from one to two engines, or from two to four engines,
without disruption. Adding engines allows VPLEX to scale for greater performance
and redundancy.
VS2
A VS2 Engine contains two types of I/O modules: a 4-port 8 Gbps Fibre Channel
module and a 2-port 10 GbE module. Here is where they can be used:
VS6
With VS6 technology, there are three types of I/O modules, a 4-port 16 Gbps Fibre
Channel module, a 10 GbE I/O module, and a 2-Port InfiniBand module.
Here are their functions and locations:
Slot 0 has a 4-port 16 Gbps Fibre Channel module for FE connections to hosts.
Slot 1 contains a 4-port 16 Gbps Fibre Channel module for BE connections to
storage arrays.
Slot 2 has either a 16 Gbps Fibre Channel module or a 10 GbE I/O module for WAN
COM between VPLEX Clusters in a VPLEX Metro; two of its ports are used. In a
VPLEX Local, the ports are not used.
Slot 3 is a 2-Port InfiniBand module and is used for Local communications with
other directors in the same VPLEX Cluster.
VS6: 16 Gb/s FC
Fibre Channel vs. IP - Choose a protocol for your VPLEX Metro WAN COM
connections. VPLEX Metro can be ordered with either 8 Gb/s Fibre Channel
(VS2), 16 Gb/s Fibre Channel (VS6), or 10 Gb/s Ethernet (VS2 and VS6) for the
WAN COM connections.
Round-trip-time or WAN delay - The time to exchange data between clusters
directly impacts the distributed-device write time, since the write mirroring is
synchronous. The maximum WAN round-trip-time (up to 10 ms) is largely
dependent upon what the applications can tolerate.
Bandwidth or WAN link capacity - Ensure your inter-cluster WAN pipes are
sufficiently sized to handle the bandwidth required. Size appropriately for peak
capacity, but also to tolerate a single link failure. Ideally, VPLEX performance
should not suffer if a single link fails or is down for maintenance. Insufficient
WAN COM bandwidth during times of WAN saturation directly impacts latency
and thus the host Metro write performance. The amount of WAN bandwidth
required depends primarily upon the write rate for distributed devices. Be aware
of potential high-bandwidth users such as inter-cluster rebuilds, or reads when a
storage array has failed.
Quality and reliability - Details like packet loss, dropped or corrupted frames, or
lack of buffer credits can negatively impact the host Metro write latency.
[Diagram: engine power - each director has power supplies PS 0 and PS 1]
Independent power zones in the data center feed each VPLEX power zone,
providing redundant high availability. VS2 Technology: There are two power
supplies with fans per director. Both must be removed to pull the director out. To
ensure the power supply is completely inserted into the engine, the yellow at the
top of the power supply should not be visible. VS6 Technology: The Power Supply
Units (PSU) are N+1 technology. Each has enough wattage (1100 W) to run the
director and MMCS or MM (management module).
VS2 - Each engine is connected to 2 standby power supplies (SPS) that provide a
battery backup for cache vaulting in the event of transient site power failure. In
single-engine clusters, the management server draws its power directly from the
cabinet PDU.
In dual- and quad-engine clusters, the management server draws power from
UPS-A. Each VPLEX engine is supported by a pair of standby power supplies (SPS) that
provide a hold-up time of five minutes, allowing the system to ride through a
transient power loss. A single standby power supply provides enough power for the
attached engine.
Each standby power supply is a FRU and can be replaced with no disruption to the
services provided by the system. The recharge time for a standby power supply is
up to 5.5 hours. The batteries in the standby power supply can support two
sequential five-minute outages.
Management Server
VPLEX VS2 clusters have a separate management server that manages both A-
side and B-side directors. It has separate network connections for the customer
management network, and the A-Side and B-side internal subnets. It also has an IP
port for a service laptop connection. Either subnet can access any director. VPLEX
VS2 management server provides a management interface to the public network
for cluster management. It also has interfaces to other VPLEX components. All
VPLEX event logging, as a service, takes place on the management server.
VS6 does not have an external management server. Instead, VPLEX VS6
technology clusters have an integrated Management Module Control Station
(MMCS) that is part of engine 1 director A (MMCS-A). The MMCS also has all the
same network connections that the VS2 management server has.
IP Connections
[Diagram: MMCS IP connections, including the .253 subnet (eth0) on the B-side]
The MMCS in engine 1 director A is used as the management server for the
cluster. There is a service port for connecting a service laptop (or the laptop that
comes with the rack) and an MRJ21 cable assembly that provides three more IP
ports:
- lime-green cable is for the .252 VPLEX internal subnet
- pink/violet cable is for the .253 VPLEX internal subnet
- black cable is used to connect to the data center management network
Only engine 1 has an MMCS in each director, and only the MMCS in director A
(MMCS-A) functions as a management server. The MMCS in engine 1 director B is
inactive for VPLEX CLI commands or the VPLEX Unisphere application.
An MMCS has its own CPU and 80 GB SSD which contains system code, logs,
and limited space for vaulting in case of power failure. There is no failover from one
MMCS to the other. If MMCS-A fails, it must be replaced.
The code on MMCS-B can be used to copy firmware to a new MMCS-A. MMCS-A
supports one public IP for admin cluster management and is used during EZ-Setup
and other VPLEX management operations.
MM - Management Module
o 252 subnet
o 253 subnet
VS2
Cable Connections
[Diagram: VS2 management cabling - Ethernet from the management server to the A-side and B-side management ports of each director (engines 1-4) and of FC switches A and B; the management server also connects to the management LAN and a management client]
Shown are the Ethernet cables that connect the management ports from the VPLEX
management server to the management ports of each director and the Fibre Channel COM
switches. Note that there are no internal VPLEX IP switches; the directors are in fact
daisy-chained together on the A side and the B side.
The management server is the only VPLEX component that is configured with a public IP on
the data center management network. From the data center management network, the
management server can be accessed via SSH or HTTPS.
VS6
Cable Connections
[Diagram: VS6 management cabling - MMCS-A in engine 1 connects to the A-side and B-side director management ports (engines 1-4) and the IB switches, as well as to the management LAN and a management client]
The VS6 Management IP infrastructure is similar to the VS2. The major difference is that
instead of a separate management server, there is an embedded MMCS-A within engine 1.
VPLEX IO Operations
VPLEX’s distributed cache coherency handling enables superior availability. It enables any
director to service any I/O for any volume while participating in a global cache with all the other
directors in the cluster. Each director contributes cache to the global cache. If a director is
added to the cluster, the global cache increases in size. If a director fails and is removed from
the cluster, the global cache shrinks but access is maintained through all the remaining
directors. To illustrate how this works:
When a write comes into director A for a particular block, a small piece of
metadata is updated to indicate that director A now has that block in cache.
VPLEX does not provide acknowledgment to the host until data is stored on the
storage array.
Should a read later come in for that same block on a different director, it will
look in the directory and see that the block is available in A’s cache. It will fetch
it from there and return it to the host.
If the data is not in the local cache, a lookup is performed in the global cache.
If another director, in that cluster only, has the data in its cache, then it is sent
via the local COM to the servicing director and then to the host.
The servicing director now has the requested data in its local cache, in order to
satisfy future potential reads.
Read Miss
If the data the director is looking for is not in global cache, it is called a global cache miss. Here
is a description:
On a miss from Global Cache, the requested data is read from the Storage
Volume into the local cache. The requested data is returned from the local
cache to the host.
The servicing director now has the requested data in its local cache, in order to
satisfy future potential reads.
When a Write is being committed to the array, the cache must first invalidate any existing
copies of that data in the global and local cache. During a write hit:
The Write is acknowledged, first from the storage array, then VPLEX sends the
acknowledgement to the host.
Both VPLEX Clusters are Active-Active with a VPLEX Metro. Here are the steps for a Write to a
Distributed Device:
The director identifies the blocks to be written and signals the other directors
that it now owns those blocks in cache.
All directors update their private copy of the cache coherence table.
o Noting which blocks will now be invalid within their own caches.
Both directors write the blocks to cache and through to Back-End storage.
An acknowledgement is sent back from the "remote" cluster to the local cluster.
Path Redundancy
[Diagram: single-engine example - host paths to FE ports on both directors]
Virtual volumes presented from VPLEX to a host can tolerate path failures by
connecting the host to multiple directors and by utilizing multi-pathing software on
the host to control the paths. A virtual volume is presented out of multiple VPLEX
front-end ports on different directors. This yields continuous data availability in the
presence of port or director failure.
Engine Redundancy
[Diagram: dual-engine example - host connected to FE ports on different engines]
Virtual volumes presented from VPLEX to a host can tolerate entire VPLEX engine failures by
connecting the host to VPLEX Front-End ports on different engines. An engine could fail and
the host would still be able to access its volumes. It is still best practice to connect the host to
one A director and one B director.
[Diagram: VPLEX Witness in a third failure domain asks "Is there a failure?" and "Where should I/O continue?" when the clusters lose contact]
VPLEX Witness
VPLEX Witness can be deployed (best practice) at a third location to improve data availability
in the presence of cluster failures and inter-cluster communication loss. The VPLEX Witness is
implemented as a virtual machine in a separate failure domain. This eliminates the possibility of
a single fault affecting both a VPLEX Cluster and VPLEX Witness.
VPLEX Witness connects to both VPLEX clusters over the management IP network using VPN.
VPLEX Witness observes the state of the clusters and thus can distinguish between an outage
of the inter-cluster link and a cluster failure.
VPLEX Witness uses this information to guide the clusters to either resume or suspend I/O.
VPLEX Management Options
Management Options
A single VPLEX cluster is managed by logging into the management server on a VS2 system
or logging into MMCS-A on a VS6 system. In VPLEX Metro configurations, you can manage
both clusters from a single management connection.
The management server or MMCS-A coordinates data collection, VPLEX software upgrades,
configuration interfaces, diagnostics, event notifications, and some director-to-director
communication.
Both a VPLEX CLI and a GUI called EMC Unisphere for VPLEX are used to configure,
upgrade, manage, and monitor the VPLEX. The VPLEX CLI supports all VPLEX operations.
CLI commands are divided across a hierarchical context tree structure.
Role - Description
vplexuser - VPLEX management role used for provisioning, migrations, and so on. Cannot perform account management.
VPLEX accounts are password protected. Each account is assigned a role. There are four
roles: service, securityadmin, vplexuser, and readonly. Most users will be assigned either
the vplexuser or readonly role. Only Dell Technologies service personnel (or partners
performing service) should use the account with the service role.
The securityadmin user should be restricted to administrators that manage VPLEX user
accounts. Additional security options include using an LDAPS server and creating a
Certification Authority (CA) on the VPLEX management server for the purposes of signing
management server certificates.
Command Information
[Diagram: the VPLEX CLI context tree. Root-level contexts include clusters, data-migrations (device-migrations, extent-migrations), distributed-storage (bindings, distributed-devices, rule-sets), engines (engine-<n-n>, each with directors <x-y> A/B containing hardware, ports, and firmware, plus fans, mgmt-modules, power-supplies, and stand-by-power-supplies), management server (ports Eth0-Eth3), monitoring (directors with monitors such as director_A_diskReportMonitor, director_A_portReportMonitor, and director_A_volumeReportMonitor), notifications (call-home, snmp-traps), and system-defaults]
The VPLEX CLI is based on a tree structure like the structure of a Linux file system.
Fundamental to the VPLEX CLI is the notion of object context. The object context is determined
by the current location, or pwd (Linux print working directory command), within the directory tree
of managed objects.
The CLI is divided into command contexts. Some commands are accessible
from all contexts and are referred to as global commands. The remaining commands are
arranged in a hierarchical context tree. These commands can only be executed from the
appropriate location in the context tree.
Except for system-defaults, each of the sub-contexts contains one or more sub-contexts to
configure, manage, and display sub-components. Many VPLEX CLI operations can be
performed from the current context. However, some commands may require the user to change
to a different directory before running the command.
Sub-Context - Functionality
clusters/ - Create and manage links between clusters, devices, extents, system volumes, and virtual volumes. Register initiator ports, export target ports, and storage views.
connectivity/ - Configure connectivity between back-end storage arrays, front-end hosts, local directors, port-groups, and inter-cluster WANs.
data-migrations/ - Create, verify, start, pause, cancel, and resume data migrations of extents or devices.
The CLI is divided into command contexts. Command contexts contain commands
that can be accessed only from within that context. The commands under each
context are arranged in a hierarchical context tree. These commands can only be
executed from the appropriate location in the context tree. Understanding the
command context tree is critical to using the VPLEX command-line interface
effectively.
The topmost context is the root context, or “/”. Although there are more, shown
here are the key root-level sub-contexts where commands can be accessed to
configure, manage, and monitor VPLEX clusters, storage, and host connectivity.
Some commands are accessible from all contexts. These are referred to as global
commands.
VPlexcli:/> cd /clusters
VPlexcli:/clusters> cd cluster-1
VPlexcli:/clusters/cluster-1> cd connectivity
VPlexcli:/clusters/cluster-1/connectivity> cd back-end
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups> cd fc-port-group-0
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/fc-port-group-0> ll member-ports
/clusters/cluster-1/connectivity/back-end/port-groups/fc-port-group-0/member-ports:
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/fc-port-group-0>
- Use the cd command to navigate
- Find the current context from the CLI prompt
- Use the ll command to display the sub-contexts
The CLI includes several features to help locate your current position in the context
tree and determine which contexts and commands are accessible:
The ls -l (list long) command displays more information about the current
sub-contexts.
The tree command displays the immediate sub-contexts in the tree using the
current context as the root.
The tree -e command displays immediate sub-contexts in the tree and any
sub-contexts under them.
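For example, running tree from an engine context produces output like the following sketch (the sub-context names match the context-tree overview shown earlier; the engine name is hypothetical):
VPlexcli:/engines/engine-1-1> tree
/engines/engine-1-1:
  directors
  fans
  mgmt-modules
  power-supplies
  stand-by-power-supplies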
Health Check
Unisphere for VPLEX provides many of the features that the VPLEX CLI provides, in a
graphical user interface (GUI) format. The GUI is very easy to navigate and requires no
knowledge of VPLEX CLI commands.
Operations are accomplished by clicking on VPLEX icons and selecting desired values. System
Status on the navigation bar shows a graphical representation of your system. It allows you to
quickly view the status of your system and some of its major components such as Directors,
Storage Arrays, and Storage Views.
System Status is the default screen when you log into the GUI. Also shown is the Monitoring
menu. Here, you can monitor VPLEX cluster performance, provisioning jobs status, and
general system health details.
The Support page, located in the settings menu, provides links to various online
functions. These include VPLEX documentation, Help, and SolVe Desktop.
VPLEX Connectivity
[Diagram: VPLEX back-end connectivity - director BE ports (A0-A3, B0-B3) on a VS2/VS6 engine cross-connected through Fabric A and Fabric B to array SP A and SP B. Example zones: on Fabric A, E1_A1_FC00 and E1_B1_FC00 each zoned with Array_SPA_0 and Array_SPB_0; on Fabric B, E1_A1_FC01 and E1_B1_FC01 each zoned with Array_SPA_0 and Array_SPB_0]
• Dual fabrics
• Minimum of 2 active paths per LUN
• Prefer 4 active paths per LUN
• Distribute across engines
Ensure that you have a SAN implementation design that is consistent with the recommended
best practices. Consider each array allocating storage to hosts and their applications through
VPLEX. Here are a few best practice considerations when connecting VPLEX to back-end
arrays:
Each VPLEX director must have at least two active paths to every back-end
array storage volume presented to the cluster.
Best Practices
This illustration shows the physical connectivity to a Dell EMC VMAX array. Similar
considerations should apply to other active/active arrays. Follow the array best practices for all
arrays including third party arrays.
The VMAX Volumes should be provisioned for access through specific FA ports
and VPLEX ports.
The VMAX Volumes within this grouping should restrict access to four specific
FA ports for each VPLEX Director ITL group.
The XtremIO Storage Array is an all-flash system based on a scale-out architecture. An XtremIO
storage array can include a single X-Brick or a cluster of multiple X-Bricks. X-Brick clusters
scale from 2 to 16 active controllers simply by increasing the number of X-Bricks.
When connected to hosts through VPLEX, it is recommended to balance host access through
VPLEX between the X-Brick Storage Controllers to provide a distributed load across all target
ports.
WWPNs
Determine the VPLEX front-end and WAN-COM port WWNs for use in configuring SAN
connectivity and zoning to support VPLEX-to-host and VPLEX cluster-to-cluster
communications. Use the VPLEX CLI ls -l command to list the contents of /engines/**/ports of
all VPLEX engines and directors.
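An illustrative sketch of that command follows; the port names and WWN addresses shown are hypothetical examples, not values from a real system:
VPlexcli:/> ls -l /engines/**/ports
/engines/engine-1-1/directors/director-1-1-A/hardware/ports:
Name     Address             Role        Port Status
-------  ------------------  ----------  -----------
A0-FC00  0x5000144260037500  front-end   up
A1-FC00  0x5000144260037510  back-end    up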
Zone Example
[Diagram: zone example - on Fabric-B, Zone_1 contains host port HBA-2 and VPLEX FE port E1_A0_FC00; each director exposes FE ports FC00 through FC03]
In a dual-SAN Fabric, best practice is to cross-connect each director’s FE ports into SAN
Fabric A and SAN Fabric B.
Best Practices:
[Diagram: dual-engine FE zoning - host HBAs zoned to FE ports on both engines, for example E2_B0_FC00 and E2_B0_FC01 on engine 2, alongside the engine 1 FE ports]
Most host connectivity for hosts running load-balancing software should follow the
recommendations for a dual-engine cluster. The hosts should be configured across two
engines, and subsequent hosts should alternate between pairs of engines, effectively load
balancing the I/O across all engines.
Zoning Example:
VPLEX Storage Provisioning Concepts
Storage View
[Diagram: a Storage View combines registered host initiators, VPLEX FE ports, and Virtual Volumes]
A Storage View is a combination of registered initiators, VPLEX Front-End ports, and Virtual
Volumes. It is used to control how a single host or clustered hosts access VPLEX Virtual
Volumes. It is the VPLEX method of LUN Masking.
To export VPLEX storage, you must first create a storage view for the host. Next add VPLEX
front-end ports and VPLEX Virtual Volumes to the view. Virtual volumes are not visible to hosts
until they are in a storage view with associated ports and initiators.
A registered initiator can be in more than one storage view and a VPLEX FE port can be in
more than one storage view, while the unique/particular combination of a specific
<initiator><FE_port> pair can only be in one storage view.
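A storage view can also be assembled from the CLI. The sketch below is a hedged example: the view name esx_21 echoes the encapsulation example later in this guide, while the port, initiator, and WWN values are hypothetical; check the CLI guide for the exact argument forms.
VPlexcli:/> export storage-view create -n esx_21 -c cluster-1 -p P000000003CA00147-A0-FC00
VPlexcli:/> export initiator-port register -i esx_21_hba0 -p 0x10000000c97b1a2c -c cluster-1
VPlexcli:/> export storage-view addinitiatorport -v esx_21 -i esx_21_hba0
VPlexcli:/> export storage-view addvirtualvolume -v esx_21 -o Exchange__1_vol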
[Diagram: a host accessing a top-level device]
Back-end storage arrays are configured to present LUNs to VPLEX through the SAN. Each
presented back-end LUN maps to one VPLEX storage volume. Storage volumes are initially in the
‘unclaimed’ state. Unclaimed storage volumes may not be used for any purpose within VPLEX
other than to create meta volumes that are for system internal use only.
Once a storage volume has been claimed within VPLEX, it may be split into one or more contiguous
extents. A single extent may map to an entire storage volume. However, it cannot span multiple
storage volumes. A VPLEX device is the entity that enables RAID implementation across one or
more extents or other devices. VPLEX supports RAID-0, RAID-1, RAID-C, as well as 1-1 mapping.
RAID-0 can stripe data across multiple extent/device constructs. When creating a RAID-0 device, if
more than one extent is chosen, VPLEX creates a RAID-0 device that is striped across the selected
extents. The RAID-0 device is the sum of the sizes of the extents. For example, if three 2 GB
extents were selected, the RAID-0 device would be 6 GB. VPLEX stripes data across the
selected extents. The stripe depth specifies how much data is written to an extent before moving to
the next extent.
RAID-1 mirrors two extent/device constructs; the top-level device is the size of a single mirror leg. A storage
view is the masking construct that controls how one or more VPLEX virtual volumes are exposed
through VPLEX front-end ports to host initiators.
Once a storage view is properly configured and operational, the host should be able to detect and
use virtual volumes. A host discovers virtual volumes presented by VPLEX after initiating a bus-scan
on its HBAs. Every front-end path to a virtual volume is an active path in the current version
of VPLEX. The host requires multi-pathing software for a high-availability implementation.
VPD83T3:6000144000000010e00c2ecb3c5914fb Exchange__1_vol
VPD83T3:6000144000000010e00c2ecb3c59152a Dev_14_vol
VPD83T3:6000144000000010e00c2ecb3c591530 Dev_8_vol
VPD83T3:6000144000000010e00c2ecb3c591549 Dev_10_vol
Use the export storage-view summary command from the context shown to see a
summary of storage views that are configured in VPLEX.
A unique VPD (vital product data) ID is assigned to each VPLEX virtual volume. We can view
this ID by entering the VPLEX CLI command export storage-view map
<storage_view>. This is the same logical device ID that is seen in all host operating systems
to identify a LUN from a Storage System. This VPD number will not change even if the
underlying storage is moved.
Here we see an example of a PowerPath CLI command. Notice the logical device ID is the
same as the VPD of a VPLEX Virtual Volume.
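As an illustrative, non-verbatim sketch of that output, the key line is the Logical device ID, which matches the virtual volume's VPD ID (the pseudo device name here is hypothetical):
# powermt display dev=emcpowera
Pseudo name=emcpowera
Logical device ID=6000144000000010E00C2ECB3C5914FB
state=alive; policy=ADaptive; queued-IOs=0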
Register Initiators
The first step when creating a Storage View is to discover and verify the
connectivity of the VPLEX Front End ports.
Select Ports
Select FE Ports
Storage Provisioning Methods
To begin using VPLEX, you must provision storage so that hosts can access that
storage. There are three provisioning methods:
Advanced provisioning
EZ provisioning
Integrated array service-based provisioning (VIAS)
1. View and claim available storage volumes
2. Create Extents from Storage Volumes
3. Create RAID-0, RAID-1, RAID-C, or 1:1 mapping of Extents to Devices
4. Create Virtual Volumes
5. Place Virtual Volumes into a Storage View
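The same flow can be driven from the CLI. The following condensed sketch uses hypothetical object names (sv_1, dev_1, and the view esx_21) and abbreviated option forms; consult the CLI guide for the complete syntax.
VPlexcli:/> storage-volume claim -d sv_1
VPlexcli:/> extent create -d sv_1
VPlexcli:/> local-device create -n dev_1 -g raid-0 -e extent_sv_1_1
VPlexcli:/> virtual-volume create -r dev_1
VPlexcli:/> export storage-view addvirtualvolume -v esx_21 -o dev_1_vol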
Claim Storage
From the desired Cluster, in the Storage Volumes view, select unclaimed Storage Volumes and choose Provision Storage.
Step One
Step Two
Storage Volume selection can be altered here; the arrows move volumes into the right column. Default names can be edited.
A view of the Storage Volumes will display all that have been claimed.
The CLI command claimingwizard finds unclaimed storage volumes, claims them, and
names them appropriately. This command can be used to claim and name many Storage
Volumes with a single command.
Storage volumes must be claimed, and optionally named, before they can be used in a VPLEX
cluster. Storage tiers allow the administrator to manage arrays based on price, performance,
capacity, and other attributes. If a tier ID is assigned, the storage with a specified tier ID can be
managed as a single unit. Storage Volumes without a tier assignment are assigned a value of
‘no tier’.
Optional arguments
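A hedged usage sketch follows, assuming a naming hints file at /tmp/array_hints.txt and a tier flag (both the file and the option spellings shown are assumptions; see the CLI guide for the supported arguments):
VPlexcli:/> claimingwizard --cluster cluster-1 --file /tmp/array_hints.txt --set-tier 1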
We will now create the next level of volume construct, an extent. Select the desired Cluster,
then change the view to Storage Volumes. Use the drop-down menu to select Create Extents.
The Storage Volumes previously selected appear automatically. This can be altered using the
arrows. Only claimed Storage Volumes will be displayed.
Review
Verify
The results of Extent creation are displayed. They can be seen by changing the
"View By" to Extents for the specific VPLEX Cluster.
Select Extents for the View By window. Click Create Devices to launch the Wizard.
Select Extents
Step One
Device Type
Here, we can select a RAID protection or performance attribute or simply specify a one-to-one
mapping of one extent used to create a single device.
Step Two
Select the extents to be used for the new device. Since we previously selected RAID-1, we must
select a minimum of two extents. Click the Add Device button to create the device. Data will be
copied from Source to Target.
Virtual Volumes can be created here. If multiple Devices are created, a base name can be
given. Click Finish to complete the wizard.
New Device
Mirror Leg Syncing
Select Devices
When creating Virtual Volumes, there is an option to make them thin enabled, provided that:
Storage volumes are provisioned from storage arrays that VPLEX supports as thin-capable.
All the mirrors are created from the same storage-array family that VPLEX supports (for a RAID-1 configuration).
This allows host-based storage reclamation using the unmap feature of VMware ESXi hosts. For example, after
deleting a VM from a datastore, you may want to reclaim the storage for other VMs. VMware VAAI (vStorage API for Array
Integration) supports this feature.
If the virtual volumes cannot be created as thin, the operation will succeed, but the volumes will be thick instead.
Refer to the Dell EMC VPLEX GeoSynchrony Administration Guide for additional information.
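Since thin reporting is governed by the volume's thin-enabled attribute (described again with EZ provisioning later in this section), an existing volume can also be switched from the CLI using the generic set command. A minimal sketch, assuming a volume named Dev_8_vol:
VPlexcli:/> cd /clusters/cluster-1/virtual-volumes/Dev_8_vol
VPlexcli:/clusters/cluster-1/virtual-volumes/Dev_8_vol> set thin-enabled true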
To add Virtual Volumes to an existing Storage View, select the VPLEX Cluster, and change the
view to Storage Views. Click Add to begin.
Verify the Virtual Volumes have been added to the Storage View. Change the View By to
Storage Views. Select Virtual Volumes in the Storage View Properties window.
EZ Provisioning Overview
EZ Provisioning
Consistency Group
The first step is to select an existing Consistency Group or create a new one.
VPLEX Consistency Groups aggregate volumes to enable the application of a
common set of properties to the entire group. Consistency Groups are explained in
detail later in the course.
Volume Options
Descriptive Name
Select the source VPLEX cluster that will provide the storage capacity from the back-end
arrays. Also, select the appropriate protection and data synchronization attributes for the new
storage capacity.
Storage Volumes
Select a back-end storage array and LUN connected to the source VPLEX cluster. The LUN
data of the selected physical array will be copied onto the Storage Volume in the selected
VPLEX cluster. Back-end array LUNs in either the claimed or unclaimed state may be used.
VPLEX does not report a volume as thin to host initiators until its thin-enabled option is set to
true. This can be set here.
Final Steps
Storage View
VIAS details
[Diagram: VIAS - an administrator running Unisphere for VPLEX provisions Virtual Volumes from a storage pool through an Array Management Provider (AMP); registered initiators, VPLEX FE ports, and Virtual Volumes make up the Storage Views]
The VPLEX Integrated Array Services (VIAS) feature enables VPLEX to provision storage for
Dell EMC VMAX, VNX, and XtremIO storage arrays directly from the VPLEX CLI, UI, and
REST API. VPLEX uses Array Management Providers (AMPs) to streamline provisioning and
allows you to provision a VPLEX Virtual Volume from a pool on the storage array.
The VIAS feature uses the Storage Management Initiative-Specification (SMI-S) provider to
communicate with the arrays that support integrated services to enable provisioning. The SMI-
S provider is used for VMAX and VNX.
After the SMI-S provider is configured, you can register the SMI-S provider with VPLEX as the
Array Management Provider (AMP). When the registration is complete, the managed arrays,
pools, and storage groups are visible in VPLEX, and you can provision Virtual Volumes from
those pools. The pools used for provisioning must have been previously created on the storage
array, as VIAS does not create the pools for provisioning.
VIAS also supports a REST AMP used with XtremIO arrays. The REST AMP does not require
additional software. The provider is on the XtremIO array itself.
Each XtremIO array is registered for VIAS in a 1-to-1 relationship with a VPLEX cluster.
Multiple XtremIO arrays need to be individually registered in VPLEX. This is different from SMI-
S AMPs where multiple storage arrays are managed by one SMI-S provider, then the SMI-S
provider is registered with VPLEX.
Before provisioning storage using VIAS, we must register the Array Management
Provider (AMP).
Once the AMP is registered, we can see which arrays it manages and the free
space on the storage pools for each array.
Using the VPLEX Integrated Array Services (VIAS) feature, you can create virtual
volumes from pre-defined storage pools.
Here are the steps to configure a Virtual Volume using the Provision from Pools
Wizard.
Select the Consistency Group within the appropriate VPLEX Cluster(s) to use or
create a new one.
Step Three
Storage Pools - This step selects the back-end storage. The array selection list is
based on the arrays added to the SMI-S server, or the number of XtremIO arrays
added. The Create Thin Virtual Volumes option also appears here.
Storage Views - The Virtual Volumes can optionally be added to a Storage View. The
Storage View must already exist.
Review - Review the selections made.
Step Six
The final results are displayed. Provisioning may take some time; view the status
from the Jobs view.
Provision-Job Rollback
VIAS Rollback
Undo operations, or rollback steps, are added in case the VIAS provisioning fails. Not
all steps in the VIAS process will be rolled back. Here are the main create and rollback
steps. The pre-check does not need a rollback if it fails. The volume creation is
rolled back in the VIAS process, as is the volume exposure to VPLEX. Steps
2 and 3 create 90% of the possible issues. Steps 4 and 5 do not have rollback
steps. Note: if an error occurs in step 4 or 5, provisioning artifacts will remain and
need to be deleted by the user.
Storage Volume Encapsulation
[Diagram: a database server with Boot, Data, and Log volumes on the SAN - each storage volume to be encapsulated - and a management station]
Steps for Storage Volume Encapsulation - VPLEX provides the ability to claim back-
end Storage Volumes already in use under its control. The process of claiming a storage
volume while saving existing user data is called storage encapsulation. Any existing back-end
user volume may be encapsulated, including non-bootable and host boot image volumes.
-v /clusters/cluster-1/exports/storage-views/esx_21
Argument Definitions:
-v [Storage View name] specifies the VPLEX storage-view(s) to receive the new
virtual-volume. In this example, the Encapsulated Volume is being exported to the
host esx_21 Storage View.
VPLEX Distributed Device Concepts
Distributed Devices - VPLEX Metro storage objects having a RAID-1 geometry with a mirror
leg in each VPLEX Cluster.
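From the CLI, a distributed device is built from one local device at each cluster. A hedged sketch with hypothetical device names (ds is the distributed-storage context; the exact option forms are in the CLI guide):
VPlexcli:/> ds dd create -n dd_exchange -d dev_c1_exchange,dev_c2_exchange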
Rule-sets are predefined rules that determine which cluster continues servicing I/O when
connectivity between clusters is lost. If the VPLEX Metro clusters lose contact with one
another, or if one cluster fails, Rule-sets define which cluster continues operation. This cluster
is referred to as the "preferred cluster". The remaining cluster, the non-preferred, suspends I/O.
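The built-in rule-sets live under the distributed-storage context and can be listed there; the output below is an illustrative sketch:
VPlexcli:/> ll /distributed-storage/rule-sets
/distributed-storage/rule-sets:
Name                PotentialConflict  UsedBy
------------------  -----------------  -----------
cluster-1-detaches  false              dd_exchange
cluster-2-detaches  false              -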
Logging Volume details - Logging volumes are required at each cluster before a Distributed
Device can be created. Logging volumes are used to keep track of blocks written during an
inter-cluster link failure, or when one leg of a distributed RAID-1 becomes unreachable and
then recovers.
After a WAN link failure is restored or an unreachable leg recovers, VPLEX uses the
information in the logging volumes to synchronize the mirrors by sending only changed blocks
across the link. Logging volumes also track changes during the loss of a volume when that
volume is one mirror in a Distributed Device.
During and after link outages, logging volumes are subject to high levels of I/O. Thus, logging
volumes must be able to service I/O quickly and efficiently. For more information about logging
volume requirements and configuration, please see the VPLEX Administration Guide.
• Keeps track of write I/Os during an inter-cluster link outage or loss of access for a Distributed Device.
• Required at each VPLEX Cluster before creating a Distributed Device.
• Log information is used to synchronize mirrors after access is restored.
Consistency Group Details - VPLEX Consistency Groups aggregate volumes to enable the application of a common set of properties to the entire group. Consistency Groups ensure the same winning cluster for all the Virtual Volumes within the group during an inter-cluster communication failure. Consistency group detach rules define on which cluster I/O continues during cluster or inter-cluster link failures. The groups work together with VPLEX Cluster Witness. The properties of a consistency group are applied to all the virtual volumes in the consistency group. Here is a summarized list of the properties that can be applied (a CLI sketch follows the list):
• Cache mode
• Visibility
• Storage at cluster
• Detach Rule
• RecoverPoint enabled
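As a minimal CLI sketch of creating and populating such a group (group, cluster, and volume names are illustrative; option spellings are assumptions to verify against the CLI guide):

VPlexcli:/> consistency-group create --name my-CG --cluster cluster-1
VPlexcli:/> consistency-group add-virtual-volumes --virtual-volumes new-vv --consistency-group my-CG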
Visibility Details
[Diagram: a Virtual Volume in a Consistency Group visible to both VPLEX Cluster-1 and Cluster-2; W = Write, A = Access (read).]
Visibility controls which clusters know about a Consistency Group. By default, the visibility
property of a Consistency Group is set only to the cluster where the group was created. This is
referred to as local visibility. This means only hosts attached to the local cluster have
read/write access to the volumes in the consistency group. For global visibility, set the visibility to both cluster-1 and cluster-2. With global visibility, hosts on both clusters have read/write access to the volumes in the consistency group.
The visibility of the volumes within the consistency group must match the visibility of the
consistency group. Local Consistency Groups with global visibility will always be synchronous.
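A hedged sketch of setting global visibility, assuming the generic set command from the consistency-group context (group name illustrative):

VPlexcli:/> cd /clusters/cluster-1/consistency-groups/my-CG
VPlexcli:/clusters/cluster-1/consistency-groups/my-CG> set visibility cluster-1,cluster-2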
[Diagram: the Cluster Witness server communicates with each cluster over the IP management network, independent of inter-cluster networks A and B.]
[Table: Detach Rules and the action each rule takes.]
The Consistency Group Detach Rule designates which cluster detaches if clusters lose connectivity. The possible values, no-automatic-winner and winner, are described later in this module. If a consistency group has a detach-rule configured, the rule applies to all volumes in the Consistency Group and overrides any rule-sets applied to individual volumes.
[Diagram: failure of Cluster-1 with the rule "Cluster-1 Detaches" (static bias). Cluster-2, the non-preferred cluster, loses communication with Cluster-1 and, by rule, suspends I/O to the Distributed Device, so data is unavailable.]
For a VPLEX Metro without Cluster Witness, there must be a method to avoid split-brain scenarios. Each Distributed Device has a Rule-set applied to it.
As discussed, Rule-sets are predefined rules that determine which cluster continues I/O when connectivity between clusters is lost. When a loss of connectivity occurs, VPLEX starts a delay timer (default 5 seconds) and suspends I/O to all Distributed Devices on both clusters. If connectivity is not restored before the timer expires, the rule is enforced.
Without VPLEX Cluster Witness, we may have the scenario shown. The Distributed Device has the detach rule set to "Cluster-1 detaches". If cluster-1 fails, then cluster-2 follows the rule and suspends I/O. Data is now unavailable until the problem is resolved.
[Diagram: Cluster Witness failure scenarios. If the Cluster Witness itself fails, Cluster-1 and Cluster-2 both continue I/O and a call home is issued. If Cluster-1 loses its link to the Witness, both clusters continue I/O and Cluster-1 issues the call home; likewise for Cluster-2. In all three cases there is no data unavailability.]
VPLEX Cluster Witness helps VPLEX Metro systems with consistency groups respond to
cluster failures and inter-cluster (WAN COM) link outages. Presented here are the various
inter-site failure scenarios and how they are handled when using VPLEX Cluster Witness.
• If the Cluster Witness host fails, each cluster loses communication with Cluster Witness and calls home. I/O continues normally.
• If the management link between a cluster and the Cluster Witness host fails, the cluster that detects this calls home.
• If the WAN COM link between clusters fails, each cluster suspends I/O until it receives guidance from Cluster Witness. Cluster Witness directs each cluster to default to the rule set (the preferred cluster continues I/O and the non-preferred cluster suspends I/O).
VPLEX Distributed Device Configuration
Device Selection
Before launching the wizard, verify which existing Devices on each cluster are to be used.
Select Devices
Select Mirror
The next step is to select the target device. Possible candidates are based on the size of the source Device.
Synchronize
Consistency Group
Create Device
Review selections
Result
Verify results
Verify
Command Details
Command options
Example
The storage-tool compose command can be used as a simple way to create a Virtual
Volume. This one command creates the virtual volume on top of the specified storage-volumes,
building all intermediate extents, local, and distributed devices as necessary. Storage-volumes
from each cluster may be claimed but must be unused.
The command allows the user to add the virtual volumes created to both a consistency group
and a storage view. They must exist already. The example displays adding a virtual volume
named new-vv to a storage view named my-view and a consistency group named my-CG.
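The example screenshot is not reproduced here; a hedged reconstruction of such a command might look like this (storage-volume names are hypothetical, and option spellings are assumptions to verify against the CLI guide):

VPlexcli:/> storage-tool compose --name new-vv --geometry raid-1 --storage-volumes sv_c1_1,sv_c2_1 --storage-views my-view --consistency-group my-CG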
Create Extents
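The wizard screenshots are not reproduced here; the equivalent CLI step is sketched below (claimed storage-volume name hypothetical):

VPlexcli:/> extent create -d claimed_sv_1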
View Extents
This VPLEX CLI command displays all the Devices located at each cluster.
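As a sketch, using the CLI's context wildcards (the exact command in the original screenshot may differ):

VPlexcli:/> ll /clusters/**/devices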
Displayed are the properties of a Distributed Device. Notice that the rebuilding
process is still ongoing.
The best practice for creating a Distributed Device with the CLI is to specify the source-leg. When this option is used, the mirror legs synchronize automatically.
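A hedged sketch, assuming the ds dd create command with the --source-leg option (device names hypothetical):

VPlexcli:/> ds dd create --name dd_01 --devices device_c1,device_c2 --source-leg device_c1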
Here are the steps to create a Consistency Group using Unisphere for VPLEX.
In step one, enter a name for the new Consistency Group. Step two allows the selection of a Rule Set for the group.
The next steps in the wizard add existing Virtual Volumes to the Consistency Group. A review of the previous selections is then performed.
Step Five
Step five displays the results. These include the group name, detach rule, and the
Virtual Volumes added.
Distributed Device Failure Scenarios
Detach Rules
Every Consistency Group has a detach rule that applies to all members in the Consistency
Group. If a distributed device is a member of a Consistency Group, the detach rule of the
Consistency Group overrides the detach rule configured for the device. Here are the detach
rules:
no-automatic-winner - The consistency group does not select the preferred cluster. The detach rules of the member devices determine the preferred cluster for that device.
winner <cluster-name> - I/O continues at the cluster named as the winner after the configured delay; the other cluster suspends I/O.
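A hedged sketch of applying a detach rule from the group's context (cluster and delay are illustrative; option spellings are assumptions):

VPlexcli:/clusters/cluster-1/consistency-groups/my-CG> consistency-group set-detach-rule winner -c cluster-1 -d 5s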
Cluster Witness
When VPLEX Witness is not deployed, detach rules determine at which cluster I/O
continues during a cluster failure or inter-cluster link outage. VPLEX Cluster
Witness does not guide Consistency Groups with the no-automatic-winner detach
rule.
An inter-cluster WAN communication failure breaks the ability of Distributed Devices to keep I/O on their mirror legs synchronized across clusters.
connectivity: NONE
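This fragment is from connectivity status output. One hedged way to check inter-cluster connectivity from the CLI is the cluster status command:

VPlexcli:/> cluster status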
System Status
From the dashboard, we can see the WAN failure and the Storage View error at cluster-2.
WAN Restore
Command details - The WAN connectivity has been restored (after the delay timer expired). From the VPLEX CLI, we notice that the Distributed Device needs a "resume" at the losing cluster.
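A hedged sketch of that resume, assuming the consistency-group resume-at-loser command (group name illustrative):

VPlexcli:/> consistency-group resume-at-loser -c cluster-2 -g my-CG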
Check Status
Storage Views - Here we see the Storage Views for cluster-1 and cluster-2. Cluster-1 is the
winning cluster in this example. Notice the operational status of the two Storage Views.
Device Status
Use the ll command to view the status and attributes of the Distributed Devices.
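For example:

VPlexcli:/> ll /distributed-storage/distributed-devices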
Volume Expansion and Protection
Expansion methods: Storage-Volume and Concatenation (RAID-C)
Expansion Types
A VPLEX Virtual Volume can be expanded by two methods: Storage Volume expansion or concatenation. If the volume type supports expansion, VPLEX detects the capacity gained by the expansion, and the expansion-method attribute identifies the available expansion method. The Storage Volume method is always preferred. Possible values for the expansion-method attribute are:
• Storage Volume
• Concatenation
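To see which method applies to a given volume, list its attributes and look for expansion-method and expandable-capacity (volume name hypothetical):

VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/app_vol_1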
When expanding a Virtual Volume, the first step is to determine the volume expansion method. The method available is determined by the underlying Device.
[Diagram: geometries that support storage-volume expansion - a 1:1 Virtual Volume to Storage Volume mapping in a single cluster; a dual-legged RAID-1 device; and a Distributed RAID-1 device spanning Cluster-1 and Cluster-2.]
The VPLEX Virtual Volume geometry must meet one of the following criteria:
• A Virtual Volume mapped 1:1 to the underlying Storage Volume.
• A dual-legged RAID-1 Device where each mirror leg is mapped 1:1 to an underlying Storage Volume.
• A Distributed RAID-1 Device where the mirror leg at each cluster is mapped 1:1 to the underlying Storage Volume.
There is a maximum number of initialization processes that can run concurrently per cluster. See the Release Notes for the current limit.
To begin, list the expandable-capacity attribute (in the CLI) or the Expandable By field (in the GUI) to plan the capacity of your back-end storage. When using Unisphere, click the Virtual Volume name to display the properties of the Virtual Volume you want to expand. For Virtual Volumes that can be expanded using the Storage Volume method, the Expandable By attribute is the capacity added to the back-end storage volume but not yet exposed to the host by the Virtual Volume. A value of zero indicates that there is no expandable capacity for the volume. A non-zero value indicates the capacity available to expand. Here are the steps to perform Storage Volume expansion:
1. Identify the underlying Storage Volume for the Virtual Volume to be expanded.
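Once the back-end LUN has been grown on the array, a hedged sketch of the expansion itself is (volume name hypothetical):

VPlexcli:/> virtual-volume expand --virtual-volume app_vol_1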
Starting Capacity
Concatenation Expansion
[Diagram: after concatenation expansion, the Virtual Volume sits on a RAID-C device that concatenates the original 1 GB device with an appended 1 GB device, for 2 GB in total.]
Some devices do not support the storage volume method of expansion. In this case, use the concatenation method, which expands the virtual volume by adding only specified extents or devices.
A top-level Device can be expanded without disruption to a host: another Device is added to the first, creating a new top-level device of RAID-C type. The device to be appended can be any type of device, provided it is not mapped to a Virtual Volume. It is best practice to use the same geometry as the original device.
After the expansion is complete, the original mapped Device has been converted into type RAID-C. This contains the original Device with the date appended to the Device name.
CLI Expansion Method - The expansion begins with a 4 GB virtual volume; a 4 GB device is then concatenated to expand the virtual volume to 8 GB using the virtual-volume expand command. Confirm the expansion with ll.
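A hedged sketch of that sequence (assuming -e names the device or extent to append; names hypothetical):

VPlexcli:/> virtual-volume expand -v my_vol_1 -e exp_dev_1
VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/my_vol_1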
The first step when adding local mirror protection is to select a Virtual Volume and
launch the wizard.
Select Devices
Based on the Virtual Volumes selected, the devices to mirror are automatically
selected.
Select Mirrors
Step two will select the target Device. It must be on the same cluster as the source
and the same size or larger.
Complete
After a review step (not shown), the job results are displayed.
The device attach-mirror command is used to add a local device as a mirror leg.
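For example (device names hypothetical; option spellings assumed):

VPlexcli:/> device attach-mirror --device dev_app_1 --mirror dev_app_1_m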
The steps for adding remote RAID-1 protection to a Virtual Volume are similar to adding a local mirror. The local Device may have any type of geometry, but the size of the source must be equal to or less than that of the target.
The Device to mirror is selected automatically, based on the previously selected Virtual Volume. Then select a Device located in the remote cluster. The best practice is to select a target that is the same size as the source.
Because the new Device is now a Distributed Device, a Consistency Group or Rule-set must be selected. Then review the selections.
Completion
Device Map
Here is an example of the Device Map. Notice the new top-level Device. There is also a new Virtual Volume in Cluster-1, which contains the added Device.
Data Protection with RecoverPoint
Here is an overview of the steps required to add RecoverPoint protection to VPLEX Virtual Volumes. Each step indicates whether it is performed on RecoverPoint or on VPLEX.
Register Initiators
A Storage View for the RecoverPoint Cluster must contain the following:
Use the rp rpa-cluster add command on the VPLEX to add the RPA clusters that are local to the VPLEX Metro clusters. Ensure that you specify which VPLEX cluster is the local cluster.
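A hedged sketch (the option letters, RPA management IP, and credentials are illustrative assumptions):

VPlexcli:/> rp rpa-cluster add -o 10.10.0.75 -u admin -c cluster-1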
Create a RecoverPoint enabled Consistency Group for all the Virtual Volumes
required by the RecoverPoint Cluster.
Use separate Consistency Groups for Local and Distributed Virtual Volumes.
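A minimal sketch, assuming the recoverpoint-enabled property is set with the generic set command from the group's context (group name hypothetical):

VPlexcli:/clusters/cluster-1/consistency-groups/rp_CG> set recoverpoint-enabled true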
VPLEX Data Mobility
Without the storage virtualization that VPLEX offers, migrating data from one array
to another is difficult. This procedure would require professional services to plan
and implement the migration.
VPLEX can handle many data mobility needs and migrate data from one array to
another with minimal effort and no planned downtime.
IT needs to move data. VPLEX gets your data where you want it with no planned downtime.
• Load balancing
VPLEX Data Mobility moves data from one Extent or Device to another. The Virtual
Volume, which can be in a Storage View, remains unchanged. The "volume
identifier" remains unchanged. This allows moving data without host or application
disruption.
[Diagram: the Virtual Volume stays in place while data moves at the Extent level (within a cluster) or at the Device level.]
VPLEX does not remove the data from the old Storage Volume.
Extent Mobility
[Diagram: Extent mobility - data moves from a source Extent to a target Extent under the same Device and Virtual Volume; each Extent resides on a Storage Volume in its own Storage Array.]
Extent mobility is a VPLEX mechanism to move all data from a source Extent to a
target Extent.
Device Mobility
[Diagram: Device mobility - a temporary RAID-1 device is created with the source and target devices as mirror legs, each built on its own Extent and Storage Volume.]
Target device must be the same size or larger than the source device or extent
Extent Mobility
Here is a map of the components of a Virtual Volume prior to migration. Note the names of the Storage Volumes and Extents.
Using the Mobility menu, select Move Data Within Cluster for Extent Mobility.
Select the desired cluster. Then select a Storage Volume; this step is optional and acts as a filter. The Extent on this Storage Volume will be selected in the next step.
Extent Selection
Select Target
Based on the previous selections, a target Extent is presented for selection. The Auto-Generate Mappings option can be selected to choose the target automatically.
Review
In step 5, configure the job name and set the rate to perform the migration.
Once the transfer is complete, the job can be committed. At this time it can also be canceled; committed jobs cannot be undone. Here is a map example for the completed migration. Compare it to the previous map.
Select Create
Select Cluster
Review
Commit
Once the job transfer is complete it can be committed. After the migration is
complete, the commit step detaches the source leg of the RAID 1 and removes the
RAID 1. The Virtual Volume, Device, or Extent is identical to the one before the
migration except that the source Device/extent is replaced with the target
Device/Extent.
VPlexcli:/data-migrations/device-migrations> dm migration start --name migrate_012 --from device_012 --to device_012a --transfer-size 12M
...
The start operation first creates a RAID 1 device on top of the source Device, specifying the source device as one of its legs and the destination Device as the other. It then copies the source Device's data to the destination Device or Extent. This operation can be canceled as long as it is not committed.
The commit operation removes the pointer to the source leg. At this point in time the
destination Device is the only Device accessible through the Virtual Volume.
The clean operation breaks the source Device down all the way to the Storage Volume level.
The Storage Volume is unclaimed after this operation if there are no other Extents configured
for this Storage Volume. Data mobility operations can also be paused and resumed before the
commit operation. It may be beneficial to pause mobility operations during daytime hours.
The remove operation will remove the record of canceled or committed data migrations.
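A hedged sketch of the remaining lifecycle for the migration started above (the --force flag on commit is an assumption):

VPlexcli:/> dm migration commit -m migrate_012 --force
VPlexcli:/> dm migration clean -m migrate_012
VPlexcli:/> dm migration remove -m migrate_012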
Batched Mobility
Batch migrations are run as batch jobs from reusable batch migration plan files. Migration plan
files are created using the create-plan command. A single batch migration plan can be either
for Devices or Extents, but not both. Batched mobility provides the ability to script large-scale
mobility operations without having to specify individual extent-by-extent or device-by-device
mobility jobs. Batched mobility can only be performed in the CLI. Batch migrations must follow
the same rules as individual migrations.
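A hedged sketch of the plan-based flow (file and device names hypothetical; check-plan validates the plan before it is started):

VPlexcli:/> batch-migrate create-plan migrate.txt --sources device_01,device_02 --targets device_01a,device_02a
VPlexcli:/> batch-migrate check-plan --file migrate.txt
VPlexcli:/> batch-migrate start --file migrate.txt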
Job Start
Started 3 of 3 migrations.

Job Cancel
VPlexcli:/data-migrations/device-migrations> batch-migrate cancel --file migrate.txt

• A larger transfer-size results in higher performance for the migration, but may negatively impact front-end I/O.
Pause an active batch migration to release bandwidth for host I/O during periods of peak traffic. Resume the batch migration during periods of low I/O.

Job Pause
WARNING: Failed to pause migration BR0_1 : Evaluation of <<dm migration pause -m /data-migrations/device-migrations/BR0_1>> failed.

Job Resume
VPlexcli:/clusters/cluster-1/devices> batch-migrate resume --file=migrate.txt
WARNING: Failed to resume migration BR0_1 : Evaluation of <<dm migration resume -m /data-migrations/device-migrations/BR0_1>> failed.
Resumed 2 of 3 migrations
After the migration is complete, the commit step detaches the source leg of the RAID 1 and then removes it. The Virtual Volume, Device, or Extent is identical to the one before the migration, except that the source Device/Extent is replaced with the target Device/Extent. A migration must be committed before it can be cleaned.
When the batch migration is 100% complete, use batch-migrate commit <filename>. Next,
run the clean command to dismantle the source device down to its Storage Volume.
Remove the migration record only if the migration has been committed or canceled. Migration
records are in the /data-migrations/device-migrations context.
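A hedged sketch of those closing steps for the plan used earlier:

VPlexcli:/> batch-migrate commit --file migrate.txt
VPlexcli:/> batch-migrate clean --file migrate.txt
VPlexcli:/> batch-migrate remove --file migrate.txt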
Role-Based-Access-Control
The VPLEX roles are:
• service - for Dell Technologies Service personnel only
• admin - account management and provisioning
• vplexuser - provisioning; cannot do any configuration (service tasks) or account management
• readonly - no changes allowed; CLI or REST monitoring scripts can use this account
The vplexuser role is for accounts created by the admin or accessed via LDAP.
[Diagram: a standard VPLEX user and automated tasks use the vplexuser role for provisioning, monitoring, and mobility operations across hosts and storage arrays.]
[Diagram: a VPLEX monitoring user and automated tasks access hosts and storage arrays through VPLEX with the readonly role.]
The readonly role allows automated monitoring tools read-only access to VPLEX.
The VPLEX administrator can change the role and shell access for any account with a vplexuser or readonly role. Currently, only accounts with these two roles can be changed; the roles of the service and admin accounts cannot be changed.
[Screenshots: logging in with a service or admin account versus an account with a vplexuser or readonly role. A user with shell access = false is placed directly into the VPLEX CLI (VPlexcli:/>) after entering the password.]
These two example screenshots show the difference between logging in with a role that has shell access and logging in with a role that does not. The example on the left has shell access. Notice in the example on the right that newuser1 does not have shell access, so after entering the password, newuser1 is placed directly into the VPLEX CLI.
Shell access can be enabled or disabled for any user assigned the vplexuser role. When restricted users exit the VPLEX CLI by any method, they also exit the shell. There is no way for a restricted user to exit the VPLEX CLI and get to a shell prompt.
VPLEX Support Integration
Secure Remote Services (SRS) is a two-way remote connection between Dell EMC
Customer Service and supported products and solutions.
[Diagram: VPLEX reaches Dell EMC Customer Service through the SRS Gateway; the connection is identified by the Site ID.]
SNMP Overview
VPLEX CLI
• snmp-agent configure - configures the SNMP agent; a one-time configuration executed in the VPLEX CLI
SNMP Management Station
• SNMPGET - gets the most recent statistics for the specified OID from each director
• SNMPGETNEXT - gets the most recent statistics for the next OID from each director
• SNMPGETBULK - gets all SNMP statistics from each director
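The one-time agent setup named above is run from the VPLEX CLI; as a minimal sketch:

VPlexcli:/> snmp-agent configure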
[Diagram: VPLEX connects over IP to the LDAPS server, secured with a certificate (CERT).]
--server-name="linux-72.s3.site" -b "dc=emc,dc=com" \
-r "ou=vplex,dc=emc,dc=com" -n "cn=Administrator,dc=emc,dc=com" \
-l "/opt/emc/VPlex/cert.pem" -p
...
VPlexcli:/>
Argument definitions:
• -d for the directory type: 1 = OpenLDAP, 2 = Active Directory
• -r for the user search path: the distinguished name of the node at which to begin user searches in the directory server
• -p for the password of the Bind Distinguished Name; you are prompted for the password
VPLEX Monitoring Concepts
Monitor Clusters
Clusters:
Islands:
Island ID Clusters
--------- --------------------
1 cluster-1, cluster-2
Clusters in a VPLEX Metro will share the same Island ID.
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: ok
health-indications:
local-com: ok
...
VPLEX Monitoring
This command allows the user to check the status of VPLEX Front-End ports.
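One hedged way to list front-end port status from the CLI is via the exports context:

VPlexcli:/> ll /clusters/cluster-1/exports/ports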
Director Connectivity
The director uptime command displays the amount of time a director has
been online. The connectivity director <director name> command will
display all ports and storage array LUNs masked to the specified director.
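For example (director name hypothetical):

VPlexcli:/> director uptime
VPlexcli:/> connectivity director director-1-1-A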
Monitor Users
The sessions command displays information on users who are logged into the
VPLEX Management Console.
VPLEX Monitoring
This monitor generates a capacity report for all the storage in a VPLEX system, grouped by storage arrays. It requires that all Storage Volumes in a storage array have the same tier value. Tier IDs are required to determine the tier of a Storage Volume/storage array. Storage Volumes that do not contain any of the specified IDs are given the tier value 'no-tier'. The report is separated into two parts:
• Local storage - Storage Volumes where the data is physically located at one site only.
• Distributed storage - Storage Volumes that back Distributed Devices, with data at both sites.
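This description matches the report capacity-arrays command; as a minimal sketch:

VPlexcli:/> report capacity-arrays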
Information generated by this monitor includes the number of views, total exported
capacity in GB, and the number of exported virtual volumes per cluster.
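These figures correspond to a per-cluster capacity report; assuming the report capacity-clusters command, a sketch:

VPlexcli:/> report capacity-clusters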
Details - The command will detect sliced elements, drill up through all slices, and indicate in
the output that slices were detected. The original target is highlighted in the output. You can
specify meta, logging, and virtual volumes, local and distributed devices, extents, storage-
volumes, or logical-units on a single command line.
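The behavior described matches a use-hierarchy listing; assuming the show-use-hierarchy command (target path hypothetical):

VPlexcli:/> show-use-hierarchy /clusters/cluster-1/storage-elements/storage-volumes/sv_1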
Extent Details
VPLEX Monitoring
The export storage-view map <view> will display virtual volumes that are
part of the Storage View and their corresponding local device IDs.
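For example, reusing the esx_21 view from earlier:

VPlexcli:/> export storage-view map esx_21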
VPLEX Performance Monitoring
The VPLEX Performance Monitor tool allows for the collection of various statistics from the VPLEX cluster.
[Screenshot: the VPLEX Performance dashboard graphs statistics for Cluster-1 and Cluster-2.]
There are three categories of performance monitors that can be viewed through
VPLEX CLI. All of the VPLEX CLI performance monitors will gather data during a
polling cycle and save it to a File Sink.
Monitor Data
Monitor data is written to /var/log/VPlex/cli.
• Perpetual Monitors - basic data; always running
• Pre-configured Monitors - three per director; must be created
• Custom Monitors
Here we see the contents of the folder where the perpetual monitors are written.
Notice the naming convention of the monitor files. Perpetual monitor files are
collected as part of collect-diagnostics.
The Perpetual Performance Monitors are always on; they start at system setup and cannot be modified, disabled, or deleted. The currently open monitor file is capped at 10 MB per director, and up to 10 files are stored (.log, .log.1, .log.2, and so on).
o Naming: …PERPETUAL_vplex_sys_perf_mon.log
Pre-Configured Monitors
Use the ll command, as shown here, to verify the running monitors on each
director. Notice that the pre-configured monitors have a period of 0s (zero
seconds). This means automatic polling is disabled. Use the report poll-monitors
command to force the monitors to poll for data and send the data to the associated
file sink.
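For example (director name hypothetical):

VPlexcli:/> ll /monitoring/directors/director-1-1-A/monitors
VPlexcli:/> report poll-monitors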
To automate polling for the pre-configured monitors, we can schedule a job to run at specified times. The example shown runs the report poll-monitors command at 1 AM every day. The crontab schedule fields are:
1. Minute
2. Hour
3. Day of month
4. Month
5. Day of week
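A sketch of such a crontab entry on the management server (the wrapper script path is hypothetical):

0 1 * * * /home/service/run-poll-monitors.sh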
Before creating a monitor, first use the monitor stat-list command to display the
available statistics. There are high-level categories each with subcategories.
Monitoring has no impact on host performance.
Many statistics require a target port or volume to be specified. Output of the monitor
stat-list command identifies which statistics need a target defined.
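For example, a custom front-end monitor might be created like this (a hedged sketch: statistic names and option spellings are illustrative, not verified):

VPlexcli:/> monitor stat-list
VPlexcli:/> monitor create --name fe_mon --director director-1-1-A --stats fe-lu.read,fe-lu.write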
Statistics Details