VPLEX Administration Student Guide

Table of Contents

VPLEX Ecosystem
  VPLEX at a Glance
  Data Mobility Challenges
  Active-Active Data Center with VPLEX
  Availability with VPLEX
  VPLEX Product Family
  VPLEX for ALL Flash
  Where and When to Use VPLEX
  VPLEX Use Case - Online Tech Refresh
  VPLEX Use Case - Storage Pooling
  VPLEX Continuous Availability
  Stretched VMware Clusters
  VPLEX with RecoverPoint
  MetroPoint Solution

VPLEX Architecture
  Dell EMC VPLEX Logical Architecture - Director I/O Stack
  VPLEX Constructs
  VPLEX Local Architecture
  VPLEX Metro Architecture Builds on VPLEX Local
  VPLEX Physical Architecture
  External Comparison of VS2 and VS6 Engines
  Intra-Cluster Communications (Local COM)
  VPLEX Local Rack Configuration
  VPLEX Engine I/O Modules
  WAN COM Performance Considerations
  VPLEX Power Subsystem
  VPLEX Engine Battery Backup
  VPLEX Management Server
  VPLEX VS6 MMCS IP Connections
  VPLEX VS6 Management – MMCS

  VPLEX VS6 Management - MM
  VPLEX Management IP Infrastructure

VPLEX IO Operations
  Distributed Cache Coherency
  VPLEX I/O Operations
  Path Redundancy
  VPLEX Cluster Witness

VPLEX Management Options
  VPLEX Management Tools
  VPLEX Management Security
  VPLEX CLI Context Tree (partial listing)
  Key Management CLI Root (/) Sub-contexts
  Navigating the CLI Context Tree
  Locating in the Context Tree
  Dell EMC Unisphere for VPLEX
  VPLEX Online Support
  VPLEX Connectivity
  Back-End Array Zoning Best Practices - Unity
  Back-End Zoning - Quad VPLEX with VMAX
  Dell EMC XtremIO with VPLEX Configuration
  Determine WWNs of VPLEX FE Ports
  Host Connectivity to VPLEX Front-End Ports
  FE Connectivity Best Practices
  VPLEX Front End Connectivity

VPLEX Storage Provisioning Concepts
  Storage View: Establish Host-to-Virtual-Storage Connectivity
  VPLEX Virtual Storage Constructs – Complex Device Example
  View Virtual Volume ID Presented to Host
  Storage View Creation

Storage Provisioning Methods

  VPLEX Storage Provisioning Overview
  Advanced Provisioning Overview
  Advanced Provisioning – Claim Storage
  Claiming Storage Volumes using VPLEX CLI
  Advanced Provisioning - Create Extents
  Advanced Provisioning - Create Devices
  Advanced Provisioning - Create Virtual Volumes
  Advanced Provisioning – Map Virtual Volume
  EZ Provisioning Overview
  EZ Provisioning
  VIAS Provisioning Overview
  Register AMP and View Managed Arrays
  VIAS Storage Provisioning
  Provision-Job Rollback

Storage Volume Encapsulation
  Storage Volume Encapsulation Overview
  Using Storage Volume Encapsulation
  Encapsulation Method using CLI

VPLEX Distributed Device Concepts
  Distributed Devices Overview
  Rule-Sets Handle Inter-Cluster Failures
  Distributed Device Detach Rule Options
  VPLEX Logging Volumes
  VPLEX Consistency Groups
  Consistency Groups: Visibility for Local Volumes
  VPLEX Cluster Witness Benefit
  Consistency Group Detach Rule Options
  Detach Rules - What if no Cluster Witness?
  Continuous Availability with Cluster Witness

VPLEX Distributed Device Configuration

  Creating Distributed Devices using Dell EMC Unisphere
  Creating Distributed Devices from Existing Devices
  Create a Distributed Device with the storage-tool compose Command
  Create Distributed Devices using VPLEX CLI
  VPLEX Consistency Group Creation

Distributed Device Failure Scenarios
  Distributed Device and Consistency Group Failure Properties
  Distributed Device Failure Scenarios

Volume Expansion and Protection
  Virtual Volume Expansion
  Procedure for Volume Expansion
  Storage Volume Expansion Topologies
  Storage-volume Expansion Unisphere
  Storage-volume Expansion Procedure with CLI
  Concatenation (RAID-C) Expansion Method
  Concatenation Expansion – with CLI
  Graphic View of Concatenation Expansion
  Volume Protection – Add a Local Mirror
  Volume Protection – Attach a Local Mirror with the CLI
  Adding a Remote Mirror

Data Protection with RecoverPoint
  VPLEX with RecoverPoint Protection
  Procedure for Adding RecoverPoint Protection
  Importing RecoverPoint Certificate
  Create VPLEX Storage View for RecoverPoint
  Adding RecoverPoint Cluster to VPLEX
  Create VPLEX Consistency Group

VPLEX Data Mobility
  Data Mobility Use Cases

  Data Mobility Overview
  Extent Mobility
  Device Mobility
  General Procedure to Perform Data Migration
  VPLEX Data Migration Considerations
  VPLEX Batch Migrations
  Monitor Extent Mobility Jobs
  Device Mobility Wizard
  Complete Device Mobility
  Mobility Operations Through CLI
  Batched Mobility
  Create and Check the Batched Mobility Plan
  Start and Cancel Batched Jobs
  Pause and Resume Batched Jobs
  Monitor Batched Mobility Jobs
  Commit, Clean and Remove Batch Jobs

Role-Based-Access-Control
  Supported RBAC Roles
  Accounts with "vplexuser" Role
  Accounts with "readonly" Role
  RBAC - View Account Role Information
  RBAC - Change Role for User
  Shell Access Control - Enabled/Disabled
  If Restricted Shell Access - SCP Using "share" Folder

VPLEX Support Integration
  Configuring SRS Gateway
  SNMP Overview
  Supported SNMP Polling Commands
  Supported SNMP Trap Commands
  Using LDAPS for User Accounts

  LDAPS Configuration on VPLEX

VPLEX Monitoring Concepts
  Monitor Clusters
  VPLEX Monitoring

VPLEX Performance Monitoring
  Overview of VPLEX Performance Monitoring
  VPLEX CLI Performance Monitors
  Perpetual Performance Monitor Files
  Create Pre-configured Monitors
  Verify Running Monitors
  Manually Poll for Pre-configured Monitors
  Add Manual Polling to the Scheduled Tasks
  Custom Monitor Configuration Steps
  Determine the Type of Statistics to Collect
  Types of Statistics
  Example: How to Create a Custom Monitor


VPLEX Ecosystem

VPLEX is an important element in the Dell Technologies ecosystem. It is the core
of data protection when integrating with a wide range of storage arrays.

Storage

 Storage Arrays

   o VPLEX is an important element in the EMC ecosystem and is the core of
     data protection when integrating with storage platforms such as VNX, Unity,
     VMAX, XtremIO, and other third-party storage.

 RecoverPoint

   o RecoverPoint and RecoverPoint for VMs can provide continuous data
     protection for VPLEX volumes.

 Applications

   o Many applications that are supported on block storage can run on
     VPLEX volumes.

VPLEX Administration-SSP

Page
Internal Use - Confidential 8 © Copyright 2020 Dell Inc.
VPLEX Architecture

VPLEX at a Glance

Application clusters stretched across data centers

• Continuous availability
• Data mobility without host disruption
• 70+ non-Dell EMC storage platforms supported

VPLEX Solution

Dell EMC VPLEX is an important part of the Data Protection and Availability
continuum. It delivers data availability and non-disruptive mobility across arrays in a
single data center or across data centers separated by distance. VPLEX takes
technologies like VMware and other clusters that assume a single storage instance
and enables them to function across arrays and across distance.


Availability Challenges

In a worst-case scenario, what is the best possible outcome IT could expect?

IT continues as if nothing happened.


Data Mobility Challenges

• Tech Refresh and Migrations
• Data Center Changes
• Change Class of Service
• Maintenance
• Workload Balancing

Traditional methods result in…

• Application downtime

• Poor IT resource utilization

• Migration service costs

• Months for Tech Refresh

• Risk of things going wrong

Over time, change to any infrastructure is unavoidable, especially in a cloud
environment and big data world. For example, with the advance and evolution of
technologies, data must be migrated to new storage arrays during a tech refresh. IT
resources may need to be reallocated for workload balancing purposes. Other
changes, like maintenance and changing the class of service (storage tiering), also
may be necessary.


Active-Active Data Center with VPLEX

Active Site 1 (VPLEX Cluster 1) and Active Site 2 (VPLEX Cluster 2), joined by
VPLEX Metro:

• Continuous data availability, delivering zero RPO/RTO
• Stretched host clusters
• Simultaneous R/W at both sites
• Non-disruptive data migration across arrays or sites

VPLEX storage virtualization enables an active-active data center by leveraging
VPLEX's availability and mobility features. Host clusters and storage can stretch
across data centers. The second data center is no longer sitting idle and can
provide continuous availability for mission-critical applications. A VPLEX Metro
configuration places a VPLEX cluster at each of the two data centers.


Availability with VPLEX

The key component used to enable an active-active data center is the VPLEX
Distributed Virtual Volume.

Distributed Virtual Volumes have mirror legs at more than one cluster. Some of the
benefits of implementing a distributed active-active data center include:

 Increased availability - both data centers can serve production workloads while
  providing high-availability backup for the other data center.

 Increased asset utilization - passive data centers leave resources idle;
  active-active data centers make the most use of resources.

 Increased performance/locality of data access - data need not be read from
  the production site, as the same data is read/write accessible at both sites.

Site A and Site B each present the same Distributed Virtual Volume, whose mirror
legs are kept synchronous at a latency of up to 10 ms.


VPLEX Product Family

A VPLEX Local is a single cluster; a VPLEX Metro joins Cluster 1 and Cluster 2.

The VPLEX product family is composed of VPLEX Local and VPLEX Metro
systems.

A VPLEX Local provides a seamless ability to manage and mirror data between multiple
heterogeneous arrays from a single interface. A VPLEX Local configuration consists of a single
VPLEX cluster. A VPLEX cluster comprises one, two, or four engines.

A VPLEX Metro enables active/active, block-level access to data between two sites within
synchronous distances. The distance is limited not only by physical distance but also by host
and application requirements. Depending on the application, VPLEX clusters can be installed
with inter-cluster links that have up to 10 ms round trip time (RTT). The combination of virtual
storage with VPLEX Metro and virtual servers enables the transparent movement of virtual
machines and storage across synchronous distances. This technology provides improved
utilization and availability across heterogeneous arrays and multiple sites.


VPLEX for ALL Flash

 A VPLEX appliance model that packages VPLEX VS6 with Dell Technologies
all-flash systems

 XtremIO
 Unity All Flash
 Dell EMC PowerStore
 VMAX AF
 PowerMax


Where and When to Use VPLEX

Continuous Availability: VPLEX keeps mission-critical applications running even in
the face of unplanned disasters, protecting data in the event of disaster or failure.
Zero RTO, zero RPO. Always on, no matter what.

Data Mobility: IT needs to move data, and VPLEX moves it without disruption,
getting your data wherever you want with no planned downtime. Collaborate over
distance.

Use VPLEX to:

 Protect data in the event of disasters or failure of components in your data
  centers.
 Move data non-disruptively between Dell EMC and other third-party storage
  arrays without any downtime.
 Collaborate over distance. Access Anywhere provides cache-consistent active-
  active access to data across VPLEX clusters. Multiple users at different sites
  can work on the same data while maintaining consistency of the dataset.


VPLEX Use Case - Online Tech Refresh

No host disruption: data moves from the old array to the new array while both are
connected.

1. The new array is connected to the SAN, with zoning and LUN masking
   configured.
2. VPLEX discovers the new array, and the admin creates migration target devices.
3. The VPLEX administrator sets up and starts a data migration job for each device.
4. The admin monitors the progress of the migrations. Host I/Os continue.
5. Once volumes on the new array are fully synchronized, the admin commits the
   migration, which disconnects the mirror legs on the old array.
6. The old array can be removed without disruption.
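As a sketch of what steps 3 through 5 look like from the VPLEX CLI, the session
below migrates one device and then commits and cleans up the job. The device
and job names are hypothetical, and exact option spellings vary by VPLEX release;
treat this as illustrative rather than definitive.

    VPlexcli:/> dm migration start --name refresh_1 --from device_old_1 --to device_new_1
    VPlexcli:/> ll /data-migrations/device-migrations/refresh_1
    VPlexcli:/> dm migration commit --migrations refresh_1
    VPlexcli:/> dm migration clean --migrations refresh_1
    VPlexcli:/> dm migration remove --migrations refresh_1

The ll step is repeated until the job shows complete; commit makes the new-array
leg authoritative, and clean/remove release the old leg and the job record.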


VPLEX Use Case - Storage Pooling

 Pool arrays together
 Reduce cost without increasing risk
 Move devices 100% online between different tiers
 Mirror across arrays for enhanced availability

Transparent mobility: the virtual volume stays presented to the host while its
physical device moves between pooled arrays.


VPLEX Continuous Availability

Continuous Availability = High Availability + Disaster Recovery

Stretched host clustering with a Distributed Virtual Volume spanning two active
sites.

Here are some key benefits of VPLEX continuous availability:

 Simple DR testing
 Active assets in both sites
 No complex failover procedures
 Reduces planned and unplanned downtime
 Reduces TCO


Stretched VMware Clusters

Site A and Site B are connected with up to 10 ms round-trip time (the supportable
delay is application specific), with a third site, Site C, completing the topology.

VPLEX Metro provides automatic load balancing between clusters.

A VPLEX Metro supports VMware HA and FT. It also provides VMware DRS and vMotion
integration. For the VMs and applications to fail over transparently, the data must be shared
across cluster nodes. VMware ESXi clustering requires shared storage to provide non-
disruptive movement of virtual machines. VPLEX Metro allows storage to span multiple data
centers, allowing ESXi servers in different failure domains to share access to datastores
created on VPLEX distributed storage. VPLEX Metro fits perfectly with VPLEX distributed
cache coherence for automatic sharing and load balancing.

VPLEX supports VMware vSphere® Storage APIs - Array Integration (VAAI), also referred to
as hardware acceleration or hardware offload APIs. The APIs define a set of "storage
primitives" that enable the ESXi host to offload certain storage operations to the array (here,
VPLEX), which reduces resource overhead on the ESXi hosts and can significantly improve
performance for storage-intensive operations such as storage cloning, zeroing, and so on.
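On an ESXi host, the VAAI status of a VPLEX device can be checked with the
standard esxcli command below; the naa identifier is hypothetical:

    esxcli storage core device vaai status get -d naa.6000144000000010...

The output reports the ATS, Clone, Zero, and Delete primitive status for the device.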


VPLEX with RecoverPoint

Local protection and remote protection (synchronous or asynchronous over a WAN,
IP or FC): each side maintains a journal, with local copies at one site and remote
copies at the other.

RecoverPoint with VPLEX

VPLEX and RecoverPoint products together offer continuous availability and operational and
disaster recovery. For customers requiring even further levels of availability, VPLEX can be
used with RecoverPoint to enable a third site for disaster recovery. This site can be located at
any supported distance, ensuring operational resilience in the event of a regional outage.


MetroPoint Solution
Sites A and B are production sites joined by VPLEX Metro (continuous availability
with automatic switchover), each protected by RecoverPoint for operational
recovery, giving CDP on both sides of the Metro. Site C is a DR site holding a
single DR copy, so the solution provides both remote and local protection.

VPLEX MetroPoint Configuration

Using RecoverPoint with a VPLEX Metro configuration allows for a unique topology referred to
as MetroPoint. MetroPoint provides a three- or four-site solution and allows protection to
continue if one of the VPLEX Metro clusters fails.


VPLEX Architecture

Dell EMC VPLEX Logical Architecture - Director I/O Stack

 VPLEX Front End

   o Virtual Volumes are exposed to front-end hosts in storage views. Host
     I/O and SCSI task management are done at this layer of the I/O stack.
     The front end of VPLEX functions like a storage array from the host
     point of view. A VPLEX Storage View, which consists of FE ports, host
     initiators, and Virtual Volumes, is the VPLEX method of LUN masking.


 Distributed Coherent Cache

o Virtual volume writes are cached and kept consistent and coherent
across all directors in a VPLEX system. Per-volume caching
implements local and global cache, and maintains consistency
between all directors in a VPLEX.

 Device Virtualization

o Device virtualization creates composite devices from storage
  volumes. This allows for creating local and distributed (VPLEX Metro)
  mirroring. It also allows for Virtual Volume creation.

 VPLEX Back End

o The Back-end ports on VPLEX Directors function in the role of host


initiators to all storage arrays. The storage array ports are zoned to
VPLEX BE ports. Storage Volumes represent the devices masked to
the VPLEX BE ports from the back-end arrays. These Storage
Volumes are claimed by VPLEX and can be used to create Virtual
Volumes.


VPLEX Constructs

From bottom to top: a LUN on the storage array is presented to VPLEX as a
Storage Volume, which is claimed, carved into one or more Extents, built into a
Device, and presented as a Virtual Volume.

Here is an explanation of the VPLEX Constructs:

 Virtual Volume A VPLEX Virtual Volume is created from the top-level Device. It is the storage
or Logical Unit presented to one or more hosts as part of a Storage View.

 Device A VPLEX Device is the application of a RAID topology to one or more extents. A Device
can be either a Local Device or a Distributed Device (VPLEX Metro topology is required). A
Device may use another device as an extent. Devices can be made up of any combination of
local devices and extents as appropriate for a particular RAID topology/geometry.

 Extent A VPLEX Extent is a slice or portion of the available space from a Storage Volume.
With VPLEX, you can create an extent that uses the entire capacity of the underlying Storage
Volume, or just a portion of the space. Extents provide a convenient means of allocating what is
needed while taking advantage of the dynamic thin allocation capabilities of the back-end array.

 Storage Volume Storage Volumes are the Logical Units (LU) presented from an array to the
VPLEX BE ports. These are initially unclaimed but can be claimed as part of the provisioning
process used by VPLEX. This can be done using the CLI or the VPLEX UI.
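As a concrete, hedged illustration of this bottom-up chain, the VPlexcli session
below claims a storage volume, carves an extent, builds a RAID-0 local device,
creates the virtual volume, and adds it to an existing storage view. All object names
and the VPD identifier are hypothetical, and option spellings vary by release.

    VPlexcli:/> storage-volume claim -d VPD83T3:600... -n sv_lun_1
    VPlexcli:/> extent create -d sv_lun_1
    VPlexcli:/> local-device create --name dev_lun_1 --geometry raid-0 --extents extent_sv_lun_1_1
    VPlexcli:/> virtual-volume create --device dev_lun_1
    VPlexcli:/> export storage-view addvirtualvolume --view host1_view --virtual-volumes dev_lun_1_vol

The default names follow the chain: the extent is named after its storage volume,
and the virtual volume after its top-level device with a _vol suffix.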


VPLEX Local Architecture

Here is an example of the different layers of a VPLEX Local architecture.


VPLEX Metro Architecture Builds on VPLEX Local

VPLEX Metro provides a unique solution to data center problems.

APPLICATION MOBILITY NOW POSSIBLE

A stretched host cluster spans Site 1 and Site 2 over an IP or FC WAN, with both
sites presenting the same virtual volume.

Displayed is a VPLEX Metro VMware HA solution. VPLEX Access Anywhere
technology allows you to export a single virtual volume from both VPLEX clusters
simultaneously.

In this solution, the physical hosts are only connected to their local VPLEX cluster,
and no SAN extension between sites is necessary. This helps to isolate the data
centers and prevent failures in one data center from affecting the other.

There are many interesting solutions that can be built atop VPLEX Metro: VMware
vMotion, VMware HA, and Oracle RAC are examples.


VPLEX Physical Architecture

VS2 Engine

Block Diagram: VS2 Engine

A VS2 engine contains two directors, A and B. Each director presents four 8 Gbps
Fibre Channel front-end ports (0-3) to hosts and four 8 Gbps Fibre Channel back-
end ports (0-3) to storage volumes, and carries inter-cluster COM ports, intra-
cluster COM ports, CPU cores, and distributed cache.

VS2 Engine Details

The VPLEX architecture consists of the following components:

 Front-End (FE) ports - Fibre Channel ports that are zoned to host HBA
  ports. They provide host connectivity to VPLEX Virtual Volumes.

 CPUs and Distributed Cache - the key components for data processing
  and storage virtualization.

 Back-End (BE) ports - VPLEX BE ports allow Fibre Channel connectivity to
  any supported storage array.

 COM and WAN ports - COM ports provide communication with other
  directors in the local VPLEX cluster. WAN ports, both FC and IP, provide
  communication with remote directors in a VPLEX Metro environment.

A VPLEX cluster can have up to four engines (or eight directors). Each director adds redundancy
and additional cache and processing power. VPLEX architecture is fully redundant to survive any
single point of failure. In any cluster, the fully redundant hardware can tolerate failure down to a
single director remaining with no Data Unavailability or Data Loss condition.


VS6 Engine

Block Diagram: VS6 Engine

A VS6 engine contains two directors, A and B. Each director presents four 16 Gbps
Fibre Channel front-end ports (0-3) to hosts and four 16 Gbps Fibre Channel back-
end ports (0-3) to storage volumes, and carries inter-cluster COM ports, 40 Gbps
InfiniBand intra-cluster COM ports, CPU cores, and distributed cache.

VS6 Details

The VS6 architectural design is the same as the VS2 architecture: there is still the
same number of FE ports, BE ports, and COM ports.

VS6 engines are much higher performing because they are built from higher-
performing hardware. VS6 directors get their performance boost by using dual six-
core processors, faster FE and BE ports, more cache, and InfiniBand for intra-
cluster communications.


External Comparison of VS2 and VS6 Engines

VS2 Technology

Front view: Director A and Director B with their SPS units. Rear view: Director B
and Director A with their SPS units.


VS6 Technology

Front view: Director B and Director A, each with its BBU. Rear view: Director B and
Director A.


Comparison

Compare VS2 and VS6 Hardware

                        VS2                         VS6
CPUs per director       1 quad-core processor       2 six-core processors
                        (4 CPU cores)               (12 CPU cores)
Memory per director     36 GB (~24 GB for cache)    128 GB (~113 GB for cache)
FE and BE connectivity  8 Gbps Fibre Channel        16 Gbps Fibre Channel
LOCAL COM connectivity  Dual 8 Gbps FC fabrics      Dual 40 Gbps IB fabrics
WAN COM connectivity    Dual 8 Gbps FC or 10 GbE    Dual 16 Gbps FC or 10 GbE


Intra-Cluster Communications (Local COM)

VS2

VS2 Local COM

In a VPLEX cluster, directors 1A-4A and 1B-4B interconnect through dual Fibre
Channel COM switches (FC SW-A and FC SW-B). VS2 uses Fibre Channel
(8 Gbps) for Local COM connections.

VS2 technology uses 8 Gbps Fibre Channel to communicate between directors.
These intra-cluster communication paths are used frequently when data is
requested at one director but resides in the cache of another director.


VS6

VS6 Local COM

In a VPLEX cluster, directors 1A-4A and 1B-4B interconnect through dual
InfiniBand switches (IB SW-A and IB SW-B). VS6 uses InfiniBand (40 Gbps) for
Local COM connections.

VS6 engines not only have more cache per director, but the communication paths
(Local COM) between directors in the same cluster use InfiniBand. InfiniBand
provides 40 Gbps data paths between directors.


VPLEX Local Rack Configuration

VPLEX Cluster Rack Configuration

VS2 quad-engine cluster: the rack holds the management server, UPS A and
UPS B, FC COM switches A and B, a cable management and laptop tray, and
Engines 1-4, each paired with its SPS units (SPS 1-4).

VS6 quad-engine cluster: the rack holds InfiniBand switches A and B and Engines
1-4, with cable management between the engines.

VPLEX is configured as a single-, dual-, or quad-engine cluster. A VPLEX Local has
one cluster, and a Metro has two clusters connected together. A VPLEX Local
cluster consists of a single rack. Most installations use EMC factory-installed racks,
although VPLEX may be deployed in customer racks in the field. The rack contains
1, 2, or 4 VPLEX engines, and each engine consists of two directors: A and B.
Clusters may be upgraded from one to two engines, or from two to four engines,
without disruption. Adding engines allows VPLEX to scale for greater performance
and redundancy.


VPLEX Engine I/O Modules

VS2

I/O Module details

A VS2 engine contains two types of I/O modules: a 4-port 8 Gbps Fibre Channel
module and a 2-port 10 GbE module. Here is where they can be used:

 Slot 0 is 8 Gbps Fibre Channel, used for FE connections to hosts.
 Slot 1 is 8 Gbps Fibre Channel, used for BE connections to storage arrays.
 Slot 2 can be either 8 Gbps Fibre Channel or 10 GbE, used for WAN
  communications in a VPLEX Metro.
 Slot 3 is 8 Gbps Fibre Channel, used for local communication between
  directors.


VS6

I/O Module details

With VS6 technology, there are three types of I/O modules: a 4-port 16 Gbps Fibre
Channel module, a 10 GbE I/O module, and a 2-port InfiniBand module.
Here are their functions and locations:

 Slot 0 has a 4-port 16 Gbps Fibre Channel module for FE connections to hosts.
 Slot 1 contains a 4-port 16 Gbps Fibre Channel module for BE connections to
  storage arrays.
 Slot 2 has either a 16 Gbps Fibre Channel module or a 10 GbE I/O module for
  WAN COM between VPLEX clusters in a VPLEX Metro. Two ports are used in
  a VPLEX Metro; in a VPLEX Local the ports are not used.
 Slot 3 is a 2-port InfiniBand module used for Local COM with other directors in
  the same VPLEX cluster.


WAN COM Performance Considerations

In a VPLEX Metro environment, four WAN COM factors matter: the choice of FC
vs. IP, round-trip time, bandwidth or capacity, and quality and reliability. (VS6
supports 16 Gb/s FC.)

 Fibre Channel vs. IP - Choose a protocol for your VPLEX Metro WAN COM
  connections. VPLEX Metro can be ordered with either 8 Gb/s Fibre Channel
  (VS2), 16 Gb/s Fibre Channel (VS6), or 10 Gb/s Ethernet (VS2 and VS6) for
  the WAN COM connections.
 Round-trip time or WAN delay - The time to exchange data between clusters
  directly impacts the distributed-device write time, since the write mirroring is
  synchronous. The maximum WAN round-trip time (up to 10 ms) is largely
  dependent upon what the applications can tolerate.
 Bandwidth or WAN link capacity - Ensure your inter-cluster WAN pipes are
  sufficiently sized to handle the bandwidth required. Size appropriately for peak
  capacity, but also to tolerate a single link failure: ideally, VPLEX performance
  should not suffer if one link fails or is down for maintenance (see the sizing
  sketch after this list). Insufficient WAN COM bandwidth during times of WAN
  saturation directly impacts latency and thus the host Metro write performance.
  The amount of WAN bandwidth required depends primarily upon the write rate
  for distributed devices. Be aware of potential high-bandwidth users such as
  inter-cluster rebuilds, or reads in situations of a failed storage array.
 Quality and reliability - Details like packet loss, dropped or corrupted frames,
  or lack of buffer credits can negatively impact the host Metro write latency.
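As a hypothetical sizing sketch (the numbers are illustrative, not a recommendation):
suppose the peak distributed-device write rate is 400 MB/s. Since every distributed
write crosses the inter-cluster link once, the WAN must carry roughly

    400 MB/s x 8 bits/byte = 3,200 Mb/s, or about 3.2 Gb/s

To tolerate a single link failure without degradation, each of two WAN links would
be sized to carry the full 3.2 Gb/s plus headroom for rebuild traffic, rather than
splitting the requirement across both links.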


VPLEX Power Subsystem

Comparison of Power Subsystems

VS2 (front view): each director has two power supplies, PS 0 and PS 1. VS6 (rear
view): each director has two 1100 W Power Supply Units (PSU_A0 and PSU_A1
for Director A; PSU_B0 and PSU_B1 for Director B). Each director has two power
supplies for redundancy (N+1).

Independent power zones in the data center feed each VPLEX power zone,
providing redundant high availability.

VS2 technology: There are two power supplies with fans per director. Both must be
removed to pull the director out. To ensure the power supply is completely inserted
into the engine, the yellow at the top of the power supply should not be visible.

VS6 technology: The Power Supply Units (PSU) are N+1 technology. Each has
enough wattage (1100 W) to run the director and the MMCS or MM (management
module).


VPLEX Engine Battery Backup

VPLEX Power Backup System

VS2 (rear view): each engine is backed by Standby Power Supplies (SPS), which
include battery backup for both Directors A and B. VS6 (front view): each director
has its own Battery Backup Unit (BBU).

VS2 - Each engine is connected to two standby power supplies (SPS) that provide
battery backup for cache vaulting in the event of a transient site power failure. In
single-engine clusters, the management server draws its power directly from the
cabinet PDU.

In dual- and quad-engine clusters, the management server draws power from UPS-
A. Each VPLEX engine is supported by a pair of standby power supplies (SPS) that
provide a hold-up time of five minutes, allowing the system to ride through a
transient power loss. A single standby power supply provides enough power for the
attached engine.

Each standby power supply is a FRU and can be replaced with no disruption to the
services provided by the system. The recharge time for a standby power supply is
up to 5.5 hours. The batteries in the standby power supply can support two
sequential five-minute outages.

VS6 - For VS6, there is no independent management server that needs to be
powered. The BBU (battery backup unit) pair provides two minutes of power to the
director and the MMCS or MM (management module). The dual BBU is not an
N+1 configuration, as both units are required to provide the needed power.


VPLEX Management Server

Management Server

VPLEX VS2 clusters have a separate management server that manages both the
A-side and B-side directors. It has separate network connections for the customer
management network and for the A-side and B-side internal subnets, plus an IP
port for a service laptop connection. Either subnet can access any director. The
VPLEX VS2 management server provides a management interface to the public
network for cluster management, and it also has interfaces to the other VPLEX
components. All VPLEX event logging, as a service, takes place on the
management server.


VS6 Management Module Control Station

Management Module Control Station

VS6 does not have an external management server. Instead, VPLEX VS6
technology clusters have an integrated Management Module Control Station
(MMCS) that is part of engine 1 director A (MMCS-A). The MMCS also has all the
same network connections that the VS2 management server has.


VPLEX VS6 MMCS IP Connections

IP Connections

The MMCS in engine 1 director A is used as the management server for the
cluster. There is a service port (eth1, 128.221.252.2) for connecting a service
laptop (or the laptop that comes with the rack), and an MRJ21 cable assembly that
provides three more IP ports:

• The lime-colored cable (eth2) carries the .252 VPLEX internal subnet, A-side.
• The violet-colored cable (eth0) carries the .253 VPLEX internal subnet, B-side.
• The black cable (eth3) connects to the customer management network.


VPLEX VS6 Management – MMCS

Management Control Station - MMCS

Only engine 1 has an MMCS in each director, and only the MMCS in director A
(MMCS-A) functions as a management server. The MMCS in engine 1 director B is
inactive for VPLEX CLI commands and the VPLEX Unisphere application.

An MMCS has its own CPU and an 80 GB SSD, which contains system code, logs,
and limited space for vaulting in case of power failure. There is no failover from one
MMCS to the other. If MMCS-A fails, it must be replaced. The code on MMCS-B
can be used to copy firmware to a new MMCS-A.

MMCS-A:

 Supports one public IP for admin cluster management
 Is used during EZ-Setup and other VPLEX management operations
 Provides .252 subnet access to A-directors
 Provides .253 subnet access to B-directors


VPLEX VS6 Management - MM

MM - Management Module

 Installed in engines 2, 3, and 4
 One per director
 Provides IP connectivity to the internal VPLEX management subnets

   o .252 subnet
   o .253 subnet


VPLEX Management IP Infrastructure

VS2

Cable Connections

Ethernet cables run from the management server's A-side and B-side ports to the
director and FC COM switch management ports of Engines 1-4. The management
server also connects to the management LAN, from which a management client
reaches it.

Shown are the Ethernet cables that connect the management ports of the VPLEX
management server to the management ports of each director and the Fibre
Channel COM switches. Note that there are no internal VPLEX IP switches; the
directors are in fact daisy-chained together on the "A" side and the "B" side.

The management server is the only VPLEX component that is configured with a
public IP on the data center management network. From the data center
management network, the management server can be accessed via SSH or
HTTPS.
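For example, a management session typically starts with SSH to the management
server, followed by the vplexcli shell command, which opens the VPLEX CLI. The
IP address below is hypothetical:

    $ ssh service@10.6.11.240
    service@ManagementServer:~> vplexcli
    VPlexcli:/>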


VS6

Cable Connections

Ethernet cables run from MMCS-A in engine 1 to the director management ports of
Engines 1-4, on the A-side and B-side, and to InfiniBand switches A and B.
MMCS-A also connects to the management LAN, from which a management client
reaches it.

The VS6 management IP infrastructure is similar to the VS2. The major difference is that
instead of a separate management server, there is an embedded MMCS-A within engine 1.


VPLEX IO Operations

Distributed Cache Coherency


VPLEX’s distributed cache coherency handling enables superior availability. It enables any
director to service any I/O for any volume while participating in a global cache with all the other
directors in the cluster. Each director contributes cache to the global cache. If a director is
added to the cluster, the global cache increases in size. If a director fails and is removed from
the cluster, the global cache shrinks but access is maintained through all the remaining
directors. To illustrate how this works:

 When a write comes into director A for a particular block, a small piece of
metadata is updated to indicate that director A now has that block in cache.

 VPLEX does not provide acknowledgment to the host until data is stored on the
storage array.

 Should a read later come in for that same block on a different director, it will
look in the directory and see that the block is available in A’s cache. It will fetch
it from there and return it to the host.


VPLEX I/O Operations

Read Hit - Distributed Device

In the read-hit diagram, a stretched host cluster issues a read against a distributed
device; the request is satisfied from global cache, above the virtualization layer and
the heterogeneous arrays in Failure Domain A and Failure Domain B.

Host read request - read hit:

 A host read request is serviced by a director.

 The director performs a lookup in its local cache.

 If the data is there it is sent to the host.

 If the data is not in the local cache a lookup is performed in the global cache.

 If another director, in that cluster only, has the data in its cache, then it is sent
via the local com to the servicing director and then to the host.

 The servicing director now has the requested data in its local cache, in order to
satisfy future potential reads.


Read Miss - Distributed Device

In the read-miss diagram, the requested data is not in global cache, so it is read
from back-end storage beneath the virtualization layer.

If the data the director is looking for is not in global cache, it is called a global cache miss. Here
is a description:

 A Read request is issued to a Virtual Volume

 A lookup is performed in local cache by the servicing director.

 A miss requires a lookup in Global Cache.

 On a miss from Global Cache, the requested data is read from the Storage
Volume into the local cache. The requested data is returned from the local
cache to the host.

 The servicing director now has the requested data in its local cache, in order to
satisfy future potential reads.


Write Local Device

In the write-local diagram, a stretched host cluster writes data to a director; the
write passes through global cache and the virtualization layer to back-end storage,
and an acknowledgement returns to the host.

When a Write is being committed to the array, the cache must first invalidate any existing
copies of that data in the global and local cache. During a write hit:

 A Write request is issued to the Virtual Volume.

 A lookup is performed in the local cache of the receiving director.

 Global cache updates the new location for the data.

 Any prior data is invalidated in all cache locations.

 The receiving director transfers the data to the local cache.

 The new data is written through to the back-end storage.

 The Write is acknowledged, first from the storage array, then VPLEX sends the
acknowledgement to the host.


Write Distributed Device

In the write-distributed diagram, a stretched host cluster writes data at one cluster;
the data is mirrored through global cache to both failure domains, and
acknowledgements return from the remote cluster and the back-end arrays before
the host is acknowledged.

Both VPLEX Clusters are Active-Active with a VPLEX Metro. Here are the steps for a Write to a
Distributed Device:

 Write is received by a director.

 The director identifies the blocks to be written and signals the other directors
that it now owns those blocks in cache.

o Both Local and Remote

 All directors update their private copy of the cache coherence table.

o Noting which blocks will now be invalid within their own caches.

 Both directors write the blocks to cache and through to Back-End storage.

 An acknowledgement is sent back from the "remote" cluster to the local cluster.

o At the same time the Local Cluster receives an acknowledgement from


the Back-End storage array.

 The Local Cluster acknowledges the host.


Path Redundancy

Single Path Failure

[Figure: Single-engine example; a virtual volume is presented out of FE ports on both Director A and Director B across redundant SANs, so a single path failure does not interrupt access.]

Virtual volumes presented from VPLEX to a host can tolerate path failures when the host is connected to multiple directors and uses multi-pathing software to control the paths. A virtual volume is presented out of multiple VPLEX front-end ports on different directors. This yields continuous data availability in the presence of a port or director failure.


Failure Across Engines

Engine Redundancy

[Figure: Dual-engine example; a virtual volume is presented through FE ports on directors in both Engine 1 and Engine 2, each connected to redundant SANs.]

Virtual volumes presented from VPLEX to a host can tolerate entire VPLEX engine failures by
connecting the host to VPLEX Front-End ports on different engines. An engine could fail and
the host would still be able to access its volumes. It is still best practice to connect the host to
one A director and one B director.


VPLEX Cluster Witness

[Figure: VPLEX Witness in failure domain #3 asks "Is there a failure? Where should I/O continue?" It connects over the IP management network, using VPN, to VPLEX Cluster-1 in failure domain #1 and VPLEX Cluster-2 in failure domain #2.]

VPLEX Witness can be deployed (best practice) at a third location to improve data availability
in the presence of cluster failures and inter-cluster communication loss. The VPLEX Witness is
implemented as a virtual machine in a separate failure domain. This eliminates the possibility of
a single fault affecting both a VPLEX Cluster and VPLEX Witness.

VPLEX Witness connects to both VPLEX clusters over the management IP network using VPN.
VPLEX Witness observes the state of the clusters and thus can distinguish between an outage
of the inter-cluster link and a cluster failure.

VPLEX Witness uses this information to guide the clusters to either resume or suspend I/O.


VPLEX Management Options

VPLEX Management Tools

[Screenshot: Management options - VPLEX HTML 5 Unisphere]

A single VPLEX cluster is managed by logging into the management server on a VS2 system
or logging into MMCS-A on a VS6 system. In VPLEX Metro configurations, you can manage
both clusters from a single management connection.

The management server or MMCS-A coordinates data collection, VPLEX software upgrades,
configuration interfaces, diagnostics, event notifications, and some director-to-director
communication.

Both a VPLEX CLI and a GUI called EMC Unisphere for VPLEX are used to configure,
upgrade, manage, and monitor the VPLEX. The VPLEX CLI supports all VPLEX operations.
CLI commands are divided across a hierarchical context tree structure.
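For orientation, here is a minimal, illustrative login sequence; the address 10.0.0.10 is a placeholder and prompt details vary by release. You connect to the management server (or MMCS-A) over SSH as the service account and start the CLI shell:

ssh service@10.0.0.10
service@ManagementServer:~> vplexcli
VPlexcli:/>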


VPLEX Management Security

Password Protected Accounts (role-based access control)

Role           Description
service        Used to configure VPLEX (service personnel only)
securityadmin  Admin user - for user account management
vplexuser      VPLEX management role used for provisioning, migrations, and so on; cannot perform account management
readonly       Monitoring only

VPLEX accounts are password protected. Each account is assigned a role. There are four roles: service, securityadmin, vplexuser, and readonly. Most users will be assigned either the vplexuser or readonly role. Only Dell Technologies service personnel (or partners performing service) should use the account with the service role.

The securityadmin user should be restricted to administrators who manage VPLEX user accounts. Additional security options include using an LDAPS server and creating a Certification Authority (CA) on the VPLEX management server for the purpose of signing management server certificates.

The VPlexcli command security create-ca-certificate creates a CA certificate file and a private key protected by a passphrase.


VPLEX CLI Context Tree (partial listing)

[Figure: Partial CLI context tree. Root-level contexts include clusters, data-migrations (device-migrations, extent-migrations), distributed-storage (distributed-devices, rule-sets), engines (engine-<n-n> with directors, hardware, ports, fans, mgmt-modules, power-supplies, and stand-by power supplies), the management server (ports Eth0-Eth3), monitoring (per-director monitors such as director_A_diskReportMonitor, director_A_portReportMonitor, and director_A_volumeReportMonitor), notifications (call-home, snmp-traps), and system-defaults.]

The VPLEX CLI is based on a tree structure like that of a Linux file system. Fundamental to the VPLEX CLI is the notion of object context. The object context is determined by the current location, or pwd (the Linux print-working-directory command), within the directory tree of managed objects.

The CLI is divided into command contexts. Some commands are accessible from all contexts and are referred to as global commands. The remaining commands are arranged in a hierarchical context tree and can only be executed from the appropriate location in the tree.

Except for system-defaults, each of the sub-contexts contains one or more sub-contexts to configure, manage, and display sub-components. Many VPLEX CLI operations can be performed from the current context. However, some commands may require the user to change to a different context before running the command.


Key Management CLI Root (/) Sub-contexts

Sub-Context           Functionality
clusters/             Create and manage links between clusters, devices, extents, system volumes, and virtual volumes. Register initiator ports, export target ports, and storage views.
connectivity/         Configure connectivity between back-end storage arrays, front-end hosts, local directors, port-groups, and inter-cluster WANs.
distributed-storage/  Create and manage distributed devices and rule sets.
data-migrations/      Create, verify, start, pause, cancel, and resume data migrations of extents or devices.
monitoring/           Create and manage performance monitors.
recoverpoint/         Manage RecoverPoint options.

The CLI is divided into command contexts. Command contexts contain commands that can be accessed only from within that context, and the commands under each context are arranged in a hierarchical context tree. These commands can only be executed from the appropriate location in the context tree. Understanding the command context tree is critical to using the VPLEX command-line interface effectively.

Except for the 'system-defaults/' sub-context, each of the sub-contexts contains one or more sub-contexts to configure, manage, and display sub-components.

The topmost context is the root context, or "/". Although there are more, shown here are the key root-level sub-contexts where commands can be accessed to configure, manage, and monitor VPLEX clusters, storage, and host connectivity.

Some commands are accessible from all contexts. These are referred to as global commands.


Navigating the CLI Context Tree

- Use the cd command to navigate.
- Find the current context from the CLI prompt.
- Use the ll command to display the sub-contexts.
- Return to the root context with cd /.

VPlexcli:/> cd /clusters
VPlexcli:/clusters> cd cluster-1
VPlexcli:/clusters/cluster-1> cd connectivity
VPlexcli:/clusters/cluster-1/connectivity> cd back-end
VPlexcli:/clusters/cluster-1/connectivity/back-end> cd port-groups
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups> cd fc-port-group-0
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/fc-port-group-0> ll member-ports

/clusters/cluster-1/connectivity/back-end/port-groups/fc-port-group-0/member-ports:

Director        Port     Enabled  Address
--------------  -------  -------  ------------------
director-1-1-A  A1-FC00  enabled  0x50001442c00ef210
director-1-1-B  B1-FC00  enabled  0x50001442d00ef210

VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/fc-port-group-0>


Locating in the Context Tree

Context Tree Details

The CLI includes several features to help locate your current position in the context
tree and determine what contexts and/or commands are accessible:

 The ls (list) command displays the sub-contexts immediately accessible from the current context.
 The ls -l (list long) command displays more information about the current sub-contexts.
 The cd command followed by a <Tab> displays the same information as ls at the context level.
 The tree command displays the immediate sub-contexts in the tree, using the current context as the root.
 The tree -e command displays the immediate sub-contexts in the tree and any sub-contexts under them, as in the sketch below.
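For illustration, here is a sketch of what tree might display from a cluster context; the exact sub-contexts vary by configuration and code level:

VPlexcli:/clusters/cluster-1> tree
/clusters/cluster-1:
  connectivity
  consistency-groups
  devices
  exports
  storage-elements
  system-volumes
  virtual-volumes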


Dell EMC Unisphere for VPLEX

Health Check

Unisphere for VPLEX provides many of the features that the VPLEX CLI provides, in a
graphical user interface (GUI) format. The GUI is very easy to navigate and requires no
knowledge of VPLEX CLI commands.

Operations are accomplished by clicking on VPLEX icons and selecting desired values. System
Status on the navigation bar shows a graphical representation of your system. It allows you to
quickly view the status of your system and some of its major components such as Directors,
Storage Arrays, and Storage Views.

System Status is the default screen when you log into the GUI. Also shown is the Monitoring
menu. Here, you can monitor VPLEX cluster performance, provisioning jobs status, and
general system health details.


VPLEX Online Support

The Support page, located in the settings menu, provides links to various online
functions. These include VPLEX documentation, Help, and Solve Desktop.


VPLEX Connectivity

Back-End Array Zoning Best Practices - Unity

Zoning Best Practices

[Figure: Minimally configured back-end SAN. A VS2/VS6 VPLEX engine connects through back-end Fabric A and Fabric B to a Unity array (SP A and SP B). Fabric-A zones pair each director's FC00 back-end port (E1_A1_FC00, E1_B1_FC00) with Array_SPA_0 and Array_SPB_0; Fabric-B zones do the same for the FC01 ports. Callouts: dual fabrics; minimum of 2 active paths per LUN; prefer 4 active paths per LUN; distribute across engines.]

Ensure that you have a SAN implementation design that is consistent with the recommended best practices. Consider each array allocating storage to hosts and their applications through VPLEX. Here are a few best-practice considerations when connecting VPLEX to back-end arrays (a connectivity check follows the list):

 Dual SAN fabrics should be used for redundancy.
 Each VPLEX director must have at least two active paths to every back-end array storage volume presented to the cluster.
 Use a maximum of four active paths per storage volume per VPLEX director.
 Back-end connections should be distributed across multiple engines if possible.
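One way to confirm that the back-end zoning meets these rules is the connectivity validate-be command, which checks that every director sees the expected paths to each storage volume. The output below is an illustrative sketch; the exact wording varies by release:

VPlexcli:/> connectivity validate-be
Summary
  Cluster cluster-1
    0 storage-volumes which are dead or unreachable.
    0 storage-volumes which do not meet the high availability requirement for storage volume paths.
    0 storage-volumes which are not visible from all directors.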


Back-End Zoning - Quad VPLEX with VMAX

Best Practices

This illustration shows the physical connectivity to a Dell EMC VMAX array. Similar considerations apply to other active/active arrays. Follow the array best practices for all arrays, including third-party arrays.

 VPLEX initiators (back-end ports) on a single director should be spread across engines to increase high availability and redundancy.
 The VMAX volumes should be provisioned for access through specific FA ports and VPLEX ports.
 The VMAX volumes within this grouping should restrict access to four specific FA ports for each VPLEX director ITL group.


Dell EMC XtremIO with VPLEX Configuration

VPLEX and XtremIO Details

[Figure: XtremIO X-Brick building block. With VPLEX, balance the load across all target ports.]

The XtremIO storage array is an all-flash system based on a scale-out architecture. An XtremIO storage array can include a single X-Brick or a cluster of multiple X-Bricks. An X-Brick cluster scales from 2 to 16 active controllers simply by increasing the number of X-Bricks.

When hosts are connected through VPLEX, it is recommended to balance host access through VPLEX between the X-Brick storage controllers to provide a distributed load across all target ports.


Determine WWNs of VPLEX FE Ports

Copy VPLEX FE WWPNs for Zoning to Host Initiators

Port information output is grouped by VPLEX engine and director:

VPlexcli:/> ls -l /engines/**/ports/

/engines/engine-1-1/directors/director-1-1-A/hardware/ports:

Name     Address             Role       Port Status
-------  ------------------  ---------  -----------
A0-FC00  0x50001442a0633000  front-end  up
A0-FC01  0x50001442a0633001  front-end  up
A0-FC02  0x50001442a0633002  front-end  up
A0-FC03  0x50001442a0633003  front-end  up

Determine the VPLEX front-end and WAN-COM port WWNs for use in configuring SAN connectivity and zoning to support VPLEX-to-host and VPLEX cluster-to-cluster communications. Use the VPLEX CLI ls -l command to list the contents of /engines/**/ports for all VPLEX engines and directors.


Host Connectivity to VPLEX Front-End Ports

Host Connectivity Details

[Figure: Zone example. Fabric-B Zone_1 contains host port HBA-2 plus VPLEX FE ports E1_A0_FC00 and E1_B0_FC00. The host connects through SAN Fabric A and SAN Fabric B to FE ports on both Director A and Director B.]

In a dual-SAN Fabric, best practice is to cross-connect each director’s FE ports into SAN
Fabric A and SAN Fabric B.


FE Connectivity Best Practices

Best Practices:

 Each host has four logical paths: one path to Director-A on each fabric and one path to Director-B on each fabric.
 Host-based multi-pathing software is installed.
 Dual fabrics are used.
 The VPLEX FE ports per director have a minimum of two connections, one per fabric.
 Zoning provides redundant access to each virtual volume from both an A and a B director in each fabric.


VPLEX Front End Connectivity

Balance Between Directors

[Figure: Dual-engine zoning example. Fabric-A Zone_1 contains HBA-1 with E1_A0_FC01 and E2_B0_FC01; Fabric-B Zone_1 contains HBA-2 with E1_A0_FC00 and E2_B0_FC00. Each host is spread across VPLEX Engine-1 and Engine-2.]

Host connectivity for hosts running load-balancing software should follow the recommendations for a dual-engine cluster. The hosts should be configured across two engines, and subsequent hosts should alternate between pairs of engines, effectively load balancing the I/O across all engines.

Zoning example:

 Four logical paths
 Even ports to Fabric B
 Odd ports to Fabric A
 Each HBA port zoned to multiple engines


VPLEX Storage Provisioning Concepts

Storage View: Establish Host-to-Virtual-Storage Connectivity

Storage View Details

[Figure: A storage view combines registered host initiators, VPLEX FE ports, and virtual volumes.]

A storage view is a combination of registered initiators, VPLEX front-end ports, and virtual volumes. It is used to control how a single host or a clustered host accesses VPLEX virtual volumes. It is the VPLEX method of LUN masking.

To export VPLEX storage, you must first create a storage view for the host, then add VPLEX front-end ports and VPLEX virtual volumes to the view. Virtual volumes are not visible to hosts until they are in a storage view with associated ports and initiators.

A registered initiator can be in more than one storage view, and a VPLEX FE port can be in more than one storage view, while any particular <initiator><FE_port> pair can only be in one storage view.
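As an illustrative sketch of the equivalent CLI workflow (the view name esx_21, the initiator names, the volume name, and the port name are placeholders), a storage view can be created and populated with the export storage-view commands:

VPlexcli:/> export storage-view create --cluster cluster-1 --name esx_21 --ports P000000003CA00147-A0-FC00
VPlexcli:/> export storage-view addinitiatorport --view esx_21 --initiator-ports esx_21_hba0,esx_21_hba1
VPlexcli:/> export storage-view addvirtualvolume --view esx_21 --virtual-volumes Dev_8_vol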


VPLEX Virtual Storage Constructs – Complex Device Example

Complex Device Details

[Figure: A complex device. A host's registered initiators access a virtual volume built on a top-level device.]

Back-end storage arrays are configured to present LUNs to VPLEX through the SAN. Each presented back-end LUN maps to one VPLEX storage volume. Storage volumes are initially in the 'unclaimed' state. Unclaimed storage volumes may not be used for any purpose within VPLEX other than to create meta volumes, which are for system internal use only.

Once a storage volume has been claimed within VPLEX, it may be split into one or more contiguous extents. A single extent may map to an entire storage volume; however, it cannot span multiple storage volumes. A VPLEX device is the entity that enables RAID implementation across one or more extents or other devices. VPLEX supports RAID-0, RAID-1, and RAID-C, as well as 1:1 mapping.

RAID-0 can stripe data across multiple extent/device constructs. When creating a RAID-0 device, if more than one extent is chosen, VPLEX creates a RAID-0 device that is striped across the selected extents. The RAID-0 device is the sum of the sizes of the extents. For example, if three 2 GB extents were selected, the RAID-0 device would be 6 GB, and VPLEX would stripe data across the selected extents. The stripe depth specifies how much data is written to an extent before moving to the next extent.

RAID-C concatenates (appends) extents to provide a larger address space.

RAID-1 mirrors two extent/device constructs; the top-level device is the size of the smaller leg. A storage view is the masking construct that controls how one or more VPLEX virtual volumes are exposed through VPLEX front-end ports to host initiators.

Once a storage view is properly configured and operational, the host should be able to detect and use virtual volumes. A host discovers virtual volumes presented by VPLEX after initiating a bus scan on its HBAs. Every front-end path to a virtual volume is an active path. The host requires multi-pathing software for a high-availability implementation.


View Virtual Volume ID Presented to Host

export storage-view summary

VPlexcli:/clusters/cluster-1/exports/storage-views> export storage-view summary

View health summary (cluster-1):

view name            | health-state | exported volumes | ports | registered initiators
-------------------- | ------------ | ---------------- | ----- | ---------------------
Student5_StorageView | healthy      | 4                | 4     | 2

Total 1 views, 0 unhealthy.

VPlexcli:/clusters/cluster-1/exports/storage-views> export storage-view map -v *

VPD83T3:6000144000000010e00c2ecb3c5914fb Exchange__1_vol
VPD83T3:6000144000000010e00c2ecb3c59152a Dev_14_vol
VPD83T3:6000144000000010e00c2ecb3c591530 Dev_8_vol
VPD83T3:6000144000000010e00c2ecb3c591549 Dev_10_vol

(The left column is the ID of the virtual volume presented to the host; the right column is the virtual volume name.)

Use the export storage-view summary command from the context shown to see a summary of the storage views that are configured in VPLEX.

A unique VPD (vital product data) ID is assigned to each VPLEX virtual volume. We can view this ID by entering the VPLEX CLI command export storage-view map <storage_view>. This is the same logical device ID that is seen in all host operating systems to identify a LUN from a storage system. This VPD number does not change even if the underlying storage is moved.

Here is an example of a PowerPath CLI command; notice that the logical device ID is the same as the VPD of a VPLEX virtual volume.
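The PowerPath screenshot is not reproduced here; the following powermt output is a hand-written sketch with placeholder values (the logical device ID reuses the Dev_8_vol VPD from the listing above), showing where the matching ID would appear:

# powermt display dev=emcpowera
Pseudo name=emcpowera
VPLEX ID=<system-id>
Logical device ID=6000144000000010E00C2ECB3C591530
state=alive; policy=ADaptive; queued-IOs=0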


Storage View Creation

Register Initiators

The first step when creating a Storage View is to discover and verify the
connectivity of the VPLEX Front End ports.

Initiators can be given a text name.

Select Ports

VPLEX FE ports are added to the storage view. The ports selected must be zoned to the HBA ports.


Select FE Ports


Add Virtual Volumes

Volumes added here will be exported to the host.


Review and Complete

Click Finish to create the Storage View.


Storage Provisioning Methods

VPLEX Storage Provisioning Overview

To begin using VPLEX, you must provision storage so that hosts can access that
storage.

There are three ways to provision storage on VPLEX:

 Advanced provisioning
 EZ provisioning
 Integrated array service-based provisioning (VIAS)


Advanced Provisioning Overview

Advanced Provisioning Details

Advanced provisioning workflow:

1. View and claim available storage from storage volumes.
2. Create extents.
3. Create RAID-0, RAID-1, RAID-C, or a 1:1 mapping of extents to devices.
4. Create virtual volumes.
5. Place virtual volumes into a storage view.


Advanced Provisioning – Claim Storage

Claim Storage

Select Cluster and Storage Volumes

Select unclaimed Storage Volumes.

To claim a Storage Volume, select:

 Provision Storage
 Desired Cluster and Storage Volume view
 Unclaimed Storage Volumes


Launch the Wizard

 Use the "More" drop-down menu.
 Select Claim All Storage on Supporting Array.


Step One

Select the desired array from the list.


Step Two

Mapping File Selection (Optional)


Step Three and Four

Storage Volume selection can be altered here; the arrows move volumes into the right column. Default names can be edited.


Storage Volume Claimed

A view of the Storage Volumes will display all that have been claimed.


Claiming Storage Volumes using VPLEX CLI

Storage-volume claimingwizard Command

The CLI command claimingwizard finds unclaimed storage volumes, claims them, and names them appropriately. This command can be used to claim and name many storage volumes with a single command.

Storage volumes must be claimed, and optionally named, before they can be used in a VPLEX cluster. Storage tiers allow the administrator to manage arrays based on price, performance, capacity, and other attributes. If a tier ID is assigned, the storage with a specified tier ID can be managed as a single unit. Storage volumes without a tier assignment are assigned a value of 'no tier'.
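For illustration only, a minimal run against one cluster might look like the following; optional arguments (such as a hints file for naming) vary by release, so treat this as a sketch:

VPlexcli:/> claimingwizard --cluster cluster-1
Found unclaimed storage-volumes, claiming and naming them.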


Advanced Provisioning - Create Extents

Extent Creation Wizard

We will now create the next level of volume construct, an extent. Select the desired Cluster,
then change the view to Storage Volumes. Use the drop-down menu to select Create Extents.


Select Storage Volumes

Select Claimed Storage Volumes

Storage Volumes selected automatically appear. This can be altered using the arrows. Only
claimed Storage Volumes will be displayed.


Review

Selected Storage Volumes can be verified before Extents are created.


Verify

The results of Extent creation are displayed. They can be seen by changing the
"View By" to Extents for the specific VPLEX Cluster.


Advanced Provisioning - Create Devices

Launch Device Creation Wizard

Select Extents for the View By window. Click Create Devices to launch the Wizard.

Select Extents


Step One

Device Type

Here, we can select a RAID protection or performance attribute or simply specify a one-to-one
mapping of one extent used to create a single device.


Step Two

Select the extents to be used for the new device. Since we previously selected RAID-1, we must select a minimum of two extents; data will be copied from the source to the target. Click the Add Device button to create the device.

[Screenshots: Extent selection is determined by the previous selection (RAID-1: select source and target). The new device can now be named.]


Step Three and Four

Virtual Volumes can be created here. If multiple Devices are created, a base name can be
given. Click Finish to complete the wizard.


View Created Devices

Devices can be viewed by Cluster.

[Screenshot callouts: New Device; Mirror Leg Syncing]


Advanced Provisioning - Create Virtual Volumes

Create Virtual Volumes

Virtual Volume Creation

[Screenshots: Select Devices; Create Thin Virtual Volumes]


When creating virtual volumes, there is an option to make them thin enabled.

To make a virtual volume thin enabled, several characteristics must be true:

 The storage volumes are provisioned from storage arrays that VPLEX supports as thin-capable.
 All the mirrors are created from the same storage-array family that VPLEX supports (for a RAID-1 configuration).
 The storage volumes display thin properties.

This allows us to use host-based storage reclamation using the unmap feature of VMware ESXi hosts. For example, after deleting a VM from a datastore, it is desirable to reclaim the storage for use by other VMs. VMware VAAI (vStorage API for Array Integration) supports this feature.

If the virtual volumes cannot be created as thin, the operation still succeeds, but the volumes are thick instead.

Refer to the Dell EMC VPLEX GeoSynchrony Administration Guide for additional information.
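As an illustrative CLI sketch (the device name is a placeholder, and the --thin-enabled option is shown as an assumption based on recent GeoSynchrony releases), a thin virtual volume could be created with:

VPlexcli:/> virtual-volume create --device /clusters/cluster-1/devices/dev_thin_1 --thin-enabled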


Virtual Volume Wizard

[Screenshots: Virtual volume created; virtual volumes displayed - not yet in a storage view.]


Advanced Provisioning – Map Virtual Volume

Virtual Volume Mapping

To add Virtual Volumes to an existing Storage View, select the VPLEX Cluster, and change the
view to Storage Views. Click Add to begin.


Storage View Editing

Storage View Wizard Complete


Verify Storage View

Storage View Properties

Verify the Virtual Volumes have been added to the Storage View. Change the View By to
Storage Views. Select Virtual Volumes in the Storage View Properties window.


EZ Provisioning Overview

EZ provisioning is a simple method of provisioning that makes provisioning tasks easy.

EZ provisioning automatically creates a virtual volume with a one-to-one mapping to a selected unclaimed storage volume. Use EZ provisioning to quickly create a virtual volume that uses the entire capacity of the storage volume. In EZ provisioning, you select storage arrays and define how you want them to be used, protected, and presented to hosts.

Steps for EZ provisioning:

1. Select or create a consistency group for the volumes.
2. Select the type of volume to create (distributed or local).
3. Select protection options and storage, and optionally expose the virtual volume to hosts.
4. Add hosts.


EZ Provisioning

Consistency Group

The first step is to select an existing Consistency Group or create a new one.
VPLEX Consistency Groups aggregate volumes to enable the application of a
common set of properties to the entire group. Consistency Groups are explained in
detail later in the course.


Volume Options

Volume Option Details

Descriptive Name

Select the source VPLEX cluster that will provide the storage capacity from the back-end
arrays. Also, select the appropriate protection and data synchronization attributes for the new
storage capacity.

VPLEX Administration-SSP

Page
Internal Use - Confidential 104 © Copyright 2020 Dell Inc.
Storage Provisioning Methods

Storage Volumes

Storage Volume Selection Thin Attribute

Select a back-end storage array and LUN connected to the source VPLEX cluster. The LUN
data of the selected physical array will be copied onto the Storage Volume in the selected
VPLEX cluster. Back-end array LUNs in either the claimed or unclaimed state may be used.

VPLEX does not report a volume as thin to host initiators until its thin-enabled option is set to
true. This can be set here.


Final Steps

Add the new virtual volume to a storage view, or create it as unexported. Then review the selections and view the results.


Storage View

View the new Virtual Volume.


VIAS Provisioning Overview

VIAS details

[Figure: VIAS architecture. An administrator running Unisphere for VPLEX on the management server provisions virtual volumes - through registered initiators, storage views, and VPLEX FE ports - from storage pools on XtremIO, VMAX, and VNX arrays via Array Management Providers (AMPs). VIAS = VPLEX Integrated Array Services.]

The VPLEX Integrated Array Services (VIAS) feature enables VPLEX to provision storage for
Dell EMC VMAX, VNX, and XtremIO storage arrays directly from the VPLEX CLI, UI, and
REST API. VPLEX uses Array Management Providers (AMPs) to streamline provisioning and
allows you to provision a VPLEX Virtual Volume from a pool on the storage array.

The VIAS feature uses a Storage Management Initiative-Specification (SMI-S) provider to communicate with the arrays that support integrated services to enable provisioning. The SMI-S provider is used for VMAX and VNX.

After the SMI-S provider is configured, you can register the SMI-S provider with VPLEX as the
Array Management Provider (AMP). When the registration is complete, the managed arrays,
pools, and storage groups are visible in VPLEX, and you can provision Virtual Volumes from
those pools. The pools used for provisioning must have been previously created on the storage
array, as VIAS does not create the pools for provisioning.

VIAS also supports a REST AMP used with XtremIO arrays. The REST AMP does not require
additional software. The provider is on the XtremIO array itself.

You need to register the AMP as type REST within VPLEX.

Each XtremIO array is registered for VIAS in a 1-to-1 relationship with a VPLEX cluster. Multiple XtremIO arrays need to be individually registered in VPLEX. This is different from SMI-S AMPs, where multiple storage arrays are managed by one SMI-S provider and the SMI-S provider is then registered with VPLEX.


Register AMP and View Managed Arrays

Before provisioning storage using VIAS, we must register the Array Management
Provider (AMP).

Once the AMP is registered, we can see which arrays it manages and the free
space on the storage pools for each array.

[Screenshot callout: REST for XtremIO; SMI-S for all other arrays]

Here is an explanation of the fields:

 Provider Type - REST for XtremIO; SMI-S for all other AMPs
 Name - identifies the XtremIO array or AMP host
 IP Address - address of the XtremIO array or SMI-S host
 User/Password - determined by the SMI-S host or XtremIO array


VIAS Storage Provisioning

Using the VPLEX Integrated Array Services (VIAS) feature, you can create virtual
volumes from pre-defined storage pools.

To launch the VIAS wizard:

 Select the cluster


 View by Virtual Volumes
 Select Create>Provision from Pools


VIAS Storage Provisioning

Here are the steps to configure a Virtual Volume using the Provision from Pools
Wizard.

Steps One and Two

Select the Consistency Group within the appropriate VPLEX Cluster(s) to use or
create a new one.

[Screenshots: Consistency Group; Volume Options]


Step Three

Storage Pools - This step selects the back-end storage. The array selection list is based on the arrays added to the SMI-S server or the number of XtremIO arrays added. Thin virtual volumes can be created here.


Steps Four and Five

Storage Views - The virtual volumes can optionally be added to a storage view; the storage view must already exist. Then review the selections made.


Step Six

The final results are displayed. Provisioning may take longer; view the status from the Jobs view.


Provision-Job Rollback

VIAS Rollback

Undo or rollback steps are added in case the VIAS provisioning fails. Not all steps in the VIAS process are rolled back. Here are the main create and rollback steps: the pre-check does not need a rollback if it fails; the volume creation is rolled back in the VIAS process, as is the volume exposure to VPLEX. Steps 2 and 3 create 90% of the possible issues. Steps 4 and 5 do not have rollback steps. Note: if an error occurs in step 4 or 5, provisioning artifacts will remain and need to be deleted by the user.


Storage Volume Encapsulation

Storage Volume Encapsulation Overview

[Figure: A database server with Log, Data, and Boot volumes on the SAN; each storage volume is to be encapsulated, managed from a management station.]

 Encapsulation claims storage volumes while retaining all existing data.
 Use encapsulation to import existing non-VPLEX volumes into VPLEX.
 The feature can be implemented using the VPLEX CLI or Unisphere.
 The CLI allows for scripting.


Using Storage Volume Encapsulation

1. Write down the UID of the back-end array LUN.
2. Un-mount the existing back-end storage volume from the host, then detach it.
3. Claim and encapsulate the storage volume using the VPLEX CLI.
4. Provision the new virtual volume back to the host and note the new UID under VPLEX.
5. Remount the new virtual volume to the host.

Steps for Storage Volume Encapsulation - VPLEX provides the ability to bring back-end storage volumes that are already in use under its control. The process of claiming a storage volume while preserving existing user data is called storage encapsulation. Any existing back-end user volume may be encapsulated, including non-bootable and host boot-image volumes.


Encapsulation Method using CLI

Use the storage-tool compose CLI command for encapsulation:

storage-tool compose -n new_Encap -g raid-0 -d VPD83T3:60000970000195901096533030313832 -v /clusters/cluster-1/exports/storage-views/esx_21

Argument definitions:

-n new_Encap - specifies the name of the virtual volume.
-g raid-0 - specifies the desired geometry to use.
-d [storage-volumes] - specifies a list of storage volumes to be used to build the virtual volume.
-v [storage-view name] - specifies the VPLEX storage view(s) to receive the new virtual volume. In this example, the encapsulated volume is being exported to the host esx_21 storage view.


VPLEX Distributed Device Concepts

Distributed Devices Overview

[Figure: A distributed RAID-1 device with a mirror leg at each cluster.]

Distributed Devices - VPLEX Metro storage objects having a RAID-1 geometry with a mirror leg in each VPLEX cluster.

 Distributed devices support virtual volumes that are presented to hosts through a storage view on each cluster.
 Writes issued to a distributed device are synchronously mirrored to both clusters.
 Distributed coherent cache preserves data integrity.


Rule-Sets Handle Inter-Cluster Failures


Rule-sets are predefined rules that determine which cluster continues servicing I/O when connectivity between clusters is lost. If the VPLEX Metro clusters lose contact with one another, or if one cluster fails, rule-sets define which cluster continues operation. This cluster is referred to as the "preferred cluster". The remaining, non-preferred cluster suspends I/O.

 Rule-sets determine which cluster continues I/O after a failure.
 A failure can be an inter-cluster link failure or a VPLEX cluster failure.
 A failure starts a delay timer (5 seconds by default).


Distributed Device Detach Rule Options

Detach Rule                                   Cluster-1      Cluster-2
cluster-1-detaches (Cluster-1 is preferred)   Services I/O   Suspends I/O
cluster-2-detaches (Cluster-2 is preferred)   Suspends I/O   Services I/O

 A distributed device rule-set may be overridden by consistency group detach rules.
 The auto-resume setting determines behavior after connectivity is restored.


VPLEX Logging Volumes

[Figure: A logging volume at each cluster.]

Logging Volume details - Logging volumes are required at each cluster before a distributed device can be created. Logging volumes are used to keep track of blocks written during an inter-cluster link failure, or when one leg of a distributed RAID-1 becomes unreachable and then recovers.

After the inter-cluster link or the unreachable leg is restored, VPLEX uses the information in the logging volumes to synchronize the mirrors by sending only changed blocks across the link. Logging volumes also track changes during the loss of a volume when that volume is one mirror in a distributed device.

During and after link outages, logging volumes are subject to high levels of I/O. Thus, logging volumes must be able to service I/O quickly and efficiently. For more information about logging volume requirements and configuration, see the VPLEX Administration Guide.

 Keep track of write I/Os during an inter-cluster link outage or loss of access for a distributed device.
 Required at each VPLEX cluster before creating a distributed device; a CLI sketch follows.
 Log information is used to synchronize the mirrors after access is restored.
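As an illustrative sketch (the volume and extent names are placeholders), a logging volume can be created from claimed extents with the logging-volume create command:

VPlexcli:/clusters/cluster-1/system-volumes> logging-volume create --name c1_log_vol --geometry raid-1 --extents extent_log_1,extent_log_2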


VPLEX Consistency Groups

Consistency Group Details - VPLEX Consistency Groups aggregate volumes to enable the
application of a common set of properties to the entire group. Consistency Groups ensure the
same winning cluster for all the Virtual Volumes within the group during an inter-cluster
communication failure. Consistency group detach rules define on which cluster I/O continues
during cluster or inter-cluster link failures. The groups work together with VPLEX Cluster
Witness. The properties of a consistency group are applied to all the virtual volumes in the
consistency group. Here is a summarized list of the properties that can be applied:

 Cache mode

 Visibility

 Storage at cluster

 Local read override

 Detach Rule

 Auto resume at loser

 RecoverPoint enabled

[Figure: Application A and Application B each use a consistency group; each group contains several virtual volumes spanning Cluster-1 and Cluster-2.]


Consistency Groups: Visibility for Local Volumes

Visibility Details

[Figure: A local consistency group at VPLEX Cluster-1 with global visibility; a host at Cluster-2 has write (W) and read access (A) to the virtual volume.]

Visibility controls which clusters know about a consistency group. By default, the visibility property of a consistency group is set only to the cluster where the group was created. This is referred to as local visibility, meaning only hosts attached to the local cluster have read/write access to the volumes in the consistency group. For global visibility, set the visibility to both cluster-1 and cluster-2. With global visibility, hosts on both clusters have read/write access to the volumes in the consistency group.

The visibility of the volumes within the consistency group must match the visibility of the consistency group. Local consistency groups with global visibility are always synchronous.
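For illustration, visibility is a property of the consistency-group context; a sketch of setting global visibility from the CLI (the group name TestCG is a placeholder) might look like this:

VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set visibility cluster-1,cluster-2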


VPLEX Cluster Witness Benefit

Maintains Distributed Consistency Group Availability - VPLEX Cluster Witness is an optional component installed as a virtual machine. Cluster Witness helps VPLEX Metro configurations automate the response to cluster failures and inter-cluster link outages.

[Figure: Cluster Witness connects to both clusters over the IP management network; the clusters connect to each other over inter-cluster networks A and B.]

Witness benefits:

 Automatically distinguishes between failure types and provides guidance to the clusters on how to react to a cluster or inter-cluster communication failure.
 Allows VPLEX Metro to provide continuous availability in case of a failure.
 The VPLEX Cluster Witness VM must be in a third failure domain relative to the VPLEX cluster sites to be effective.
 Works only in conjunction with distributed consistency groups.


Consistency Group Detach Rule Options

Detach Rule Option Details

Detach Rule          Action
winner cluster-name  The cluster specified by cluster-name is declared the preferred cluster
delay seconds        if a failure lasts more than the number of seconds specified. Detach
                     rules apply to all volumes of a consistency group and override rules
                     applied to individual volumes. The preferred cluster may be guided by
                     VPLEX Cluster Witness (if available).

no-automatic-winner  NOT guided by VPLEX Cluster Witness. The detach rules of the member
                     devices determine the preferred cluster for each device.


The consistency group detach rule designates which cluster detaches if the clusters lose connectivity. Possible values:

 No-Automatic-Winner - The consistency group does not select a winning cluster.
 Winner (cluster-name) (delay) - The cluster specified by cluster-name is declared the winner after an inter-cluster link outage lasts more than the number of seconds specified by delay. For a consistency group that is participating in RecoverPoint, this value must be set to the cluster at which the RecoverPoint splitter is running.

If a consistency group has a detach rule configured, the rule applies to all volumes in the consistency group and overrides any rule-sets applied to individual volumes.
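As an illustrative sketch (the group name and the 5-second delay are placeholders), the detach rule of a consistency group can be changed with the consistency-group set-detach-rule commands:

VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> consistency-group set-detach-rule winner --cluster cluster-1 --delay 5s
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> consistency-group set-detach-rule no-automatic-winner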


Detach Rules - What if no Cluster Witness?

Detach Rules Details

[Figure: Failure of Cluster-1 with the rule "Cluster-1 detaches" (static bias). Cluster-2 loses communication with Cluster-1 and, by rule, suspends I/O as the non-preferred cluster. The distributed RAID-1 device becomes unavailable: data unavailability.]

For a VPLEX Metro without Cluster Witness, there has to be a method to avoid split-brain scenarios. Each distributed device has a rule-set applied to it.

As discussed, rule-sets are predefined rules that determine which cluster continues I/O when connectivity between clusters is lost. When a loss of connectivity occurs, VPLEX starts a delay timer (default 5 seconds) and suspends I/O to all distributed devices on both clusters. If connectivity is not restored before the timer expires, the rule is enforced.

Without VPLEX Cluster Witness, we may have a scenario as shown. The distributed device has the detach rule set to "Cluster-1 detaches". If Cluster-1 fails, then Cluster-2 follows the rule and suspends I/O. Data is now unavailable until the problem is resolved.


Continuous Availability with Cluster Witness

Continuous Availability Details

[Figure: Six failure scenarios with Cluster Witness. Cluster Witness failure: Cluster-1 and Cluster-2 both continue I/O and a call home is issued. Loss of a management link between a cluster and the Witness: both clusters continue I/O and the affected cluster issues the call home. WAN COM failure: the cluster with preferred status continues I/O and the other cluster suspends I/O. Failure of Cluster-1: the Witness guides Cluster-2 to continue I/O. Failure of Cluster-2: the Witness guides Cluster-1 to continue I/O. No data unavailability.]

VPLEX Cluster Witness helps VPLEX Metro systems with consistency groups respond to cluster failures and inter-cluster (WAN COM) link outages. Presented here are the various inter-site failure scenarios and how they are handled when using VPLEX Cluster Witness:

 If the Cluster Witness host fails, each cluster loses communication with Cluster Witness and calls home. I/O continues normally.
 If a management link between a cluster and the Cluster Witness host fails, the cluster that detects this calls home.
 If the WAN COM link between clusters fails, each cluster suspends I/O until receiving guidance from Cluster Witness. Cluster Witness guides each cluster to default to the rule set (the preferred cluster continues I/O and the non-preferred cluster suspends I/O).
 If Cluster-1 fails, Cluster-2 marks its cluster-witness operational state as Unknown upon seeing the failure. It takes about 4 seconds after that to receive the guidance decision from the Cluster Witness Server. Cluster Witness guides Cluster-2 to continue I/O.
 If Cluster-2 fails, Cluster-1 marks its cluster-witness operational state as Unknown upon seeing the failure. It takes about 4 seconds after that to receive the guidance decision from the Cluster Witness Server. Cluster Witness guides Cluster-1 to continue I/O.


VPLEX Distributed Device Configuration

Creating Distributed Devices using Dell EMC Unisphere

Distributed devices are mirrored between clusters in a VPLEX Metro. They support virtual volumes that are presented to a host through a storage view. All distributed devices must be associated with a logging volume. During a link outage, the logging volume is used to map the differences between the mirrored legs. All distributed devices must have a detach rule-set to determine which cluster continues I/O when connectivity between clusters is lost. Using the wizard requires adding the devices to a consistency group. A consistency group has its own rule-set configured, which overrides the device rule-set.

To create distributed devices from storage volumes, navigate to Provision Storage > Distributed Storage > Distributed Devices and choose the distributed device type. Select Create. This launches the wizard.

Creating Distributed Devices from Storage Volumes


Creating Distributed Devices from Existing Devices

Device Selection

Verify before launching the Wizard which existing Devices on each cluster are to be
used.

Considerations for selection:

 Source and target selection - data is synchronized from source to target.
 Device size - the target device must be equal to or greater in size than the source device.
 The devices may not already have virtual volumes created on them.


Launch the Wizard

Select Provision Storage and View By Devices.


Select Devices

Select the source cluster, then choose the Device.


Select Mirror

The next step is to select the target device. Possible candidates are based on the size of the source device. Click Add Mirror after the devices are selected.


Synchronize

Best practice is to synchronize.


Consistency Group

Create a new group, or select an existing one.


Create Device

Review selections


Result

Verify results


Verify

Verify Distributed Device creation

Not in a Storage View


Create a Distributed Device with the storage-tool compose Command

The storage-tool compose command is a single command that creates a virtual volume and exports it to a selected host.

Command Details

[Screenshots: Command options; example]

The storage-tool compose command can be used as a simple way to create a virtual volume. This one command creates the virtual volume on top of the specified storage volumes, building all intermediate extents, local devices, and distributed devices as necessary. Storage volumes from each cluster may be claimed but must be unused.

The command allows the user to add the virtual volumes created to both a consistency group and a storage view; these must already exist. The example displays adding a virtual volume named new-vv to a storage view named my-view and a consistency group named my-CG.
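The example screenshot is not reproduced here; a hand-written sketch of such an invocation, with placeholder storage-volume IDs and assuming a --consistency-group option alongside the documented -n, -g, -d, and -v options, might look like:

storage-tool compose -n new-vv -g raid-1 -d VPD83T3:<storage-volume-1>,VPD83T3:<storage-volume-2> -v /clusters/cluster-1/exports/storage-views/my-view --consistency-group /clusters/cluster-1/consistency-groups/my-CG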


Create Distributed Devices using VPLEX CLI

Identify Storage Volumes

The ll -p /**/storage-volumes command lists the storage volumes by VPLEX cluster. Identify storage volumes that are claimed.


Create Extents

Here we have created an Extent.
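The screenshot is omitted here; as an illustrative sketch (the storage-volume name is a placeholder), an extent is created from a claimed storage volume with:

VPlexcli:/> extent create --storage-volumes Symm1254_7BF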


View Extents

The ll -p /**/extents command lists the extents on both VPLEX Clusters.


Create Local Devices

Here a local Device has been created. Notice it is a RAID-1 Device.
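Again the screenshot is omitted; an illustrative sketch (all names are placeholders) of creating a RAID-1 local device from two extents:

VPlexcli:/> local-device create --name dev_c1 --geometry raid-1 --extents extent_sv1_1,extent_sv2_1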


Identify Local Devices

A similar listing, such as ll -p /**/devices, displays all the devices located at each cluster.


Create Distributed Devices

The CLI command ds dd create is used to create a Distributed Device. The
Devices at each cluster, along with the source, are defined.
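A sketch of the command, assuming one local RAID-1 device at each cluster (names are placeholders; --source-leg marks the device that holds the data to copy):

VPlexcli:/> ds dd create --name DD_Student_1 --devices /clusters/cluster-1/devices/dev_Student_1,/clusters/cluster-2/devices/dev_Student_2 --source-leg /clusters/cluster-1/devices/dev_Student_1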


View Distributed Devices

Displayed are the properties of a Distributed Device. Notice that the rebuilding
process is still ongoing.


Virtual Volume Create

Once a Distributed Device is created, a Virtual Volume can be added. Also
displayed is the show-use-hierarchy command.

The best practice for creating a Distributed Device with the CLI is to
specify the source-leg. When this option is used, mirror legs
automatically synchronize.
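As a sketch, the Virtual Volume step and the verification might look like this (names are placeholders):

VPlexcli:/> virtual-volume create --device DD_Student_1
VPlexcli:/> show-use-hierarchy /clusters/cluster-1/virtual-volumes/DD_Student_1_vol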


VPLEX Consistency Group Creation

Here are the steps to create a Consistency Group using Unisphere for VPLEX.

Launch the Wizard

Navigate to Provision Storage>Distributed Storage and view by Consistency Groups.


Steps One and Two

In step one, enter a name for the new Consistency Group. Step two allows the
selection of a Rule Set for the group.


Steps Three and Four

These steps in the wizard will add existing Virtual Volumes to the Consistency
Group. Next, a review of the previous selections is performed.


Step Five

Step five displays the results. These include the group name, detach rule, and the
Virtual Volumes added.


View Consistency Group

Here are the attributes for the Consistency Group.
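The same result can be achieved from the VPlexcli. A sketch with placeholder names (verify flag spellings against the CLI guide for your release):

VPlexcli:/> consistency-group create --name my-CG --cluster cluster-1
VPlexcli:/> consistency-group add-virtual-volumes --virtual-volumes DD_Student_1_vol --consistency-group my-CG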


Distributed Device Failure Scenarios

Distributed Device and Consistency Group Failure Properties

Detach Rules

Two levels of detach rules:

 Detach rules associated with individual Distributed Devices

Every Distributed Device must have a detach rule. There are two device-level detach rules:

 cluster-1-detaches - Cluster-1 is the preferred cluster. During a cluster failure or inter-cluster link outage, cluster-1 continues I/O and cluster-2 suspends I/O. If Cluster-1 fails, there will be no access.

 cluster-2-detaches - Cluster-2 is the preferred cluster. During a cluster failure or inter-cluster link outage, cluster-2 continues I/O and cluster-1 suspends I/O. If Cluster-2 fails, there will be no access.

 Detach rules associated with Consistency Groups

Every Consistency Group has a detach rule that applies to all members in the Consistency
Group. If a distributed device is a member of a Consistency Group, the detach rule of the
Consistency Group overrides the detach rule configured for the device. Here are the detach
rules:

 winner cluster-name delay seconds - The cluster specified by cluster-name is declared the preferred cluster if a failure lasts more than the number of seconds specified by seconds.

 no-automatic-winner - The consistency group does not select the preferred cluster.
The detach rules of the member devices determine the preferred cluster for that device.
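As a sketch, both Consistency Group detach rules can be set from the VPlexcli (the group name and delay value are placeholders):

VPlexcli:/> consistency-group set-detach-rule winner --cluster cluster-1 --delay 5s --consistency-groups my-CG
VPlexcli:/> consistency-group set-detach-rule no-automatic-winner --consistency-groups my-CG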


Cluster Witness

VPLEX Witness helps VPLEX Metro configurations automate the response to
cluster failures and inter-cluster link outages. VPLEX Witness connects to both
VPLEX clusters over the management IP network.

When VPLEX Witness is not deployed, detach rules determine at which cluster I/O
continues during a cluster failure or inter-cluster link outage. VPLEX Cluster
Witness does not guide Consistency Groups with the no-automatic-winner detach
rule.


Inter-cluster WAN Failure

An inter-cluster WAN communication failure breaks the ability of Distributed
Devices to keep their local devices synchronized across clusters.

[Diagram: Cluster-1 and Cluster-2 are partitioned by a loss of WAN links.]

 WAN failure will cause all distributed devices to partition.


WAN Connectivity Outage

The connectivity validate-wan-com command shows that there is no WAN
connectivity from cluster-1 to cluster-2. All WAN connections are down.

VPlexcli:/> connectivity validate-wan-com

connectivity: NONE

fc-port-group-3 - FAIL - No connectivity was found from any com port.

fc-port-group-0 - FAIL - No connectivity was found from any com port.

fc-port-group-2 - FAIL - No connectivity was found from any com port.


fc-port-group-1 - FAIL - No connectivity was found from any com port.


System Status

From the dashboard we can see the WAN failure and Storage View error at cluster-
2.

The error appears because Cluster-1 is the Winning Cluster.


WAN Restore

Command details - The WAN Connectivity has been restored (after the delay timer expired).
From the VPLEX CLI, we notice that the Distributed Device needs a “resume” at the losing
cluster.


Check Status

Storage Views - Here we see the Storage Views for cluster-1 and cluster-2. Cluster-1 is the
winning cluster in this example. Notice the operational status of the two Storage Views.

Winning Cluster Losing Cluster


Device Status

Use the ll command to view the status and attributes of the Distributed Devices.


Resume Distributed Device

Here we see the device resume-link-up command used to resume the Virtual
Volume. This can also be done by specifying the device at the losing cluster. This
command only works once WAN connectivity is restored.

device resume-link-up --virtual-volumes Student_1_VV --force

ll


Volume Expansion and Protection

Virtual Volume Expansion

There are two expansion methods:

 Storage-Volume - VPLEX expands the underlying storage-volume. (Prior to this step, the LUN presented to VPLEX must be expanded on the array.)

 Concatenation (RAID-C) - The virtual volume is expanded by adding only specified extents or devices, as required.


Expansion Types

A VPLEX Virtual Volume can be expanded by two methods: Storage Volume expansion or
concatenation. If the volume type supports expansion, VPLEX detects the capacity gained by
the expansion, and you can then identify the available expansion method. The Storage Volume
method is always preferred. Possible values for the expansion-method attribute are:

 Storage Volume

o VPLEX makes use of the underlying storage-volume expansion

 Concatenation

o A Virtual Volume is expanded by adding only specified Extents or Devices


Procedure for Volume Expansion

When expanding a Virtual Volume, the first step is to determine the volume expansion
method. The available method is determined by the underlying Device.

The expansion-method attribute can be listed using the CLI or Unisphere.

[Flowchart: 1. Determine the volume expansion method: storage-volume, or concatenation (RAID-C expansion). 2. Check the expansion prerequisites: migration or rebuilds in progress? Non-disruptive upgrade in progress? Health-check command reports problems? Metadata volumes? RecoverPoint-enabled consistency group? 3. Expand the virtual volume, with the VPlexcli or Unisphere for VPLEX.]

Virtual Volume expansion prerequisites:

Volumes cannot be expanded if any of the following conditions are true:

 Migration or rebuilding is occurring

 Upgrade is in progress

 health-check has errors

 Volume to expand belongs to a RecoverPoint-enabled Consistency Group


Storage Volume Expansion Topologies

The storage volume method of expansion supports simple expansion on a variety
of device geometries.

[Diagram: three supported geometries. (1) Single cluster, 1:1 Virtual Volume to Storage Volume: the virtual volume sits on one extent, one storage volume, and one LUN on a single array. (2) Dual-legged RAID-1: a RAID-1 device with two extents, each mapped 1:1 to a storage volume on a different array. (3) Distributed RAID-1: a distributed device with a mirror leg at cluster-1 and at cluster-2, each leg mapped 1:1 to a storage volume.]

Storage Volume expansion criteria

The VPLEX Virtual Volume geometry must meet one of the following criteria:

 Mapped 1:1 to the underlying Storage Volume

 A multi-legged RAID-1 or RAID-0 volume where each of its smallest extents is mapped 1:1 to a back-end Storage Volume

 A Distributed Raid-1 Device where the mirror leg at each cluster is mapped 1:1
to the underlying Storage Volume.

There is a maximum number of initialization processes that can run concurrently per cluster.
See the Release Notes for the current limit.


Storage-volume Expansion Unisphere

This is an example of volume expansion through Unisphere for VPLEX.


To begin, you can list the expandable-capacity attribute (in the CLI) or the Expandable By
field (in the GUI) to plan capacity of your back-end storage. When using Unisphere, click on the
Virtual Volume name to display the properties of the Virtual Volume you want to expand. For
Virtual Volumes that can be expanded using the Storage-volume method, the Expandable By
attribute is the capacity added to the back-end storage volume, but not yet exposed to the host
by the Virtual Volume. A value of zero indicates that there is no expandable-capacity for the
volume. A non-zero value indicates the capacity available to expand. Here are the steps to
perform Storage Volume expansion:

1. Identify the underlying Storage Volume for the Virtual Volume to be expanded.

2. Connect to the array and expand the volume presented to VPLEX.

3. Rediscover the array from VPLEX.

4. Verify the Virtual Volume now has expandable capacity.


Storage-volume Expansion Procedure with CLI

The commands to be used to expand a volume using the storage-volume method:

Starting Capacity

Steps with the CLI for Storage Volume Expansion:

 Use ll <virtual volume> to determine the method and capacity

 Use the virtual-volume expand command to perform the expansion.
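A sketch of the sequence (the volume name and attribute values are placeholders):

VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/Student_1_VV
...
expandable-capacity 10G
expansion-method storage-volume
...
VPlexcli:/> virtual-volume expand --virtual-volume Student_1_VV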


Concatenation (RAID-C) Expansion Method

Concatenation Expansion

[Diagram: before expansion, the Virtual Volume sits on a 1 GB Device 1. After expansion, a new 2 GB top-level RAID-C device concatenates Device 1 (renamed with a date suffix) and the new 1 GB Device 2.]

Some devices do not support the storage volume method of expansion. In this case, use the
concatenation method. The concatenation method expands the virtual volume by adding only
specified extents or devices.

Top-level Devices can be expanded without disruption to a host. A top-level Device can be
expanded using this method, which will add another Device to the first. This creates a new
top-level device of RAID-C type. The device to be appended can be any type of device provided it
is not mapped to a Virtual Volume. It is best practice to use the same geometry as the original
device.

After the expansion is complete, the original mapped Device has been converted into type
RAID-C. This contains the original Device with the date appended to the Device name.


Concatenation Expansion – with CLI

CLI Expansion Method - The expansion begins with a 4 GB virtual volume; a 4 GB device is
then concatenated to expand the virtual volume to 8 GB using the virtual-volume
expand command. Confirm the expansion with ll.
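A sketch of the concatenation expansion (names are placeholders; per the rules above, the appended device must not be mapped to a Virtual Volume, and the --extent argument is assumed to accept a device):

VPlexcli:/> virtual-volume expand --virtual-volume Student_2_VV --extent device_expand_4GB
VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/Student_2_VV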


Graphic View of Concatenation Expansion

This figure is a graphical representation of the virtual volume shown on the
previous page.

This is the new 8 GB RAID-C Device. It contains the original device, which was built from a
4 GB Extent (which came from a 4 GB slice of an 8 GB Storage Volume), plus the new 4 GB
device that was added.



Volume Protection – Add a Local Mirror

Choose Virtual Volume

The first step when adding local mirror protection is to select a Virtual Volume and
launch the wizard.

Select Virtual Volume


Select Devices

Based on the Virtual Volumes selected, the devices to mirror are automatically
selected.


Select Mirrors

In step two, select the target Device. It must be on the same cluster as the source
and be the same size or larger.


Complete

After a review step (not shown), the job results are displayed.


Volume Protection – Attach a Local Mirror with the CLI

The device attach-mirror command is used to add a local device as a mirror
leg.

The show-use-hierarchy command displays the complete usage hierarchy for
a storage element. This view is from the top-level element down to the storage-array.

Note: This same command can be used to create a Distributed Device when the
source and target Devices are located on different VPLEX Clusters.
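A sketch of the two commands, with placeholder names:

VPlexcli:/> device attach-mirror --device dev_Student_1 --mirror dev_mirror_1
VPlexcli:/> show-use-hierarchy /clusters/cluster-1/virtual-volumes/dev_Student_1_vol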


Adding a Remote Mirror

Virtual Volume Selection

The steps for adding remote RAID-1 protection to a Virtual Volume are similar to
adding a local mirror.

The local Device may have any geometry, but the size of the source must be
equal to or less than the target.

Select a Virtual Volume


Add Remote Mirror Wizard

The Device to mirror is selected automatically; the selection is based on the previously
selected Virtual Volume. Then select a Device located in the remote cluster. The best practice
is to select a target that is the same size as the source.


Steps Three and Four

Because the new Device is now a Distributed Device, a Consistency Group or
Rule-set must be selected. Then review the selections.


Completion

Here we see that adding a remote mirror was successful.


Device Map

Here is an example of the Device Map. Notice the new Top-level Device. There is
also a new Virtual Volume in Cluster-1, which contains the added Device.


Data Protection with RecoverPoint

VPLEX with RecoverPoint Protection

Sample VPLEX and RecoverPoint Configuration

[Diagram: an RPA Cluster at each site, with VPLEX at the local site and a Dell EMC Unity array at the remote site.]

This example is a VPLEX Local configuration with RecoverPoint providing both
local and remote Virtual Volume protection. RecoverPoint provides a local copy,
which is a Virtual Volume on the VPLEX. This configuration also provides a remote
copy. In this example, the data is stored on a Dell EMC Unity array.

RecoverPoint Details - RecoverPoint provides a software-embedded “Write-Splitter”. This
software is built into the VPLEX code and is enabled when RecoverPoint is implemented. The
splitter is also part of other Dell EMC supported arrays. The splitter copies incoming write I/Os
from the host for selected Virtual Volumes. These changes are sent to the RecoverPoint
Appliance (RPA). The RPA updates a Journal which stores all the changes made. Each copy
requires a Journal. In this example, Journals are Virtual Volumes on the VPLEX. These Virtual
Volumes must be in a Storage View along with the RecoverPoint Appliances. They are also
required to be in a RecoverPoint enabled Consistency Group. In this example there is a local
copy, which is a Virtual Volume on the VPLEX. This Virtual Volume, and all others used as
copies, are required to be in a Storage View and a Consistency Group. This includes the Virtual
Volumes that are Production volumes seen by hosts. The Storage View must only include the
RPA initiator ports. No other host should be part of this view. The Consistency Group must be
RecoverPoint enabled. It is required to have separate groups for Journals and copies.


Procedure for Adding RecoverPoint Protection

Here is an overview of the steps that are required to add RecoverPoint protection
to VPLEX Virtual Volumes. Each step is labeled with where it is performed:
RecoverPoint or VPLEX.

1. Import RecoverPoint certificate in VPLEX (on VPLEX)
2. Add RecoverPoint cluster to VPLEX (on VPLEX)
3. Register RecoverPoint (on VPLEX)
4. Create RecoverPoint storage view (on VPLEX)
5. Create VPLEX consistency groups (on VPLEX)
6. Add the VPLEX Splitter to RecoverPoint (on RecoverPoint)
7. Create RecoverPoint Consistency Groups (on RecoverPoint)
8. Validate RecoverPoint in VPLEX (on VPLEX)


Importing RecoverPoint Certificate

Certificates are required for VPLEX and RecoverPoint to obtain management
information about volume types. The RecoverPoint certificate is imported into
VPLEX, and the VPLEX certificate is imported into RecoverPoint.


Create VPLEX Storage View for RecoverPoint

Register Initiators

A RecoverPoint Cluster contains two to eight RecoverPoint Appliances (RPA).
Each RPA has 4 initiator ports.

Set the Host Type to recoverpoint for each initiator.


Create Storage View

A Storage View for the RecoverPoint Cluster must contain the following:

 All RPA ports


– A RecoverPoint Cluster has 2-8 Appliances
– Each appliance has 4 ports
 VPLEX Front-End ports used by the RecoverPoint cluster
 All Virtual Volumes required by the RecoverPoint Cluster


Adding RecoverPoint Cluster to VPLEX

Use the rp rpa-cluster add command on the VPLEX and add the clusters
that are local to VPLEX Metro to the VPLEX system. Ensure that you specify which
VPLEX cluster is the local cluster.

Add RecoverPoint Cluster to each VPLEX Cluster (Metro)
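A sketch of the command (the RPA management IP and credentials are placeholders; verify the flag spellings against the CLI guide for your release):

VPlexcli:/> rp rpa-cluster add --host 10.10.0.50 --admin-username admin --cluster cluster-1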


Create VPLEX Consistency Group

Create a RecoverPoint enabled Consistency Group for all the Virtual Volumes
required by the RecoverPoint Cluster.

Use separate Consistency Groups for Local and Distributed Virtual Volumes.

Consistency Groups for replication volumes:

 Separate local and Distributed Virtual Volumes
 Separate local copy volumes from the production volume if they reside on the same clusters

Consistency group for journals and repository:

 All Journals must be in the same Consistency Group
 Add the RecoverPoint Repository Volume (if required) to the same group as the Journals


VPLEX Data Mobility

Data Mobility Use Cases

Without the storage virtualization that VPLEX offers, migrating data from one array
to another is difficult. This procedure would require professional services to plan
and implement the migration.

VPLEX can handle many data mobility needs and migrate data from one array to
another with minimal effort and no planned downtime.

IT needs to move data. VPLEX gets your data wherever you want with no planned downtime:

 Tech refresh
 Datacenter moves
 Consolidation
 Load balancing


Data Mobility Overview

VPLEX Data Mobility moves data from one Extent or Device to another. The Virtual
Volume, which can be in a Storage View, remains unchanged. The "volume
identifier" remains unchanged. This allows moving data without host or application
disruption.

[Diagram: the Virtual Volume stays presented to the host while VPLEX changes the back-end Device; mobility operations are transparent to the host.]

Data Mobility Types:

 Extent
– Within a Cluster
 Device

– Within a Cluster or between Clusters


Data Mobility Overview

WARNING: Device migrations are not recommended between clusters. All
device migrations are synchronous. If there is I/O to the devices being
migrated, and latency to the target cluster is equal to or greater than 5 ms,
then significant performance degradation could occur.

VPLEX does not remove the Data from the old Storage Volume.


Extent Mobility

[Diagram: under the same Virtual Volume and Device, data moves from a source Extent to a target Extent, each backed by its own Storage Volume and Storage Array.]

Extent mobility is a VPLEX mechanism to move all data from a source Extent to a
target Extent.

 Moves data from a source Extent to a target Extent


 Non-disruptive to the host
 Within a cluster only
 When committed, frees the original Extent for reuse
 Foundation of non-disruptive data mobility across or within storage arrays


Device Mobility

[Diagram: a temporary RAID-1 device is created under the Virtual Volume. The source Device and the target Device become its mirror legs, each built on its own Extent and Storage Volume, while the data moves from source to target.]


Device Mobility Details:

 Moves data from a source device to a target device


 Non-disruptive for host
 When committed, frees the source device and underlying extent(s)
 Local or remote
 Local - within a cluster
 Remote - across clusters
 Best method for array migrations


General Procedure to Perform Data Migration

1. Create and check a migration plan (batch migration only)
2. Start the migration
3. Monitor the migration progress
4. Pause, resume, or cancel the migration (optional)
5. Commit the migration
6. Clean up
7. Remove the record of the migration (optional)

Both GUI and CLI commands have options to:
start/pause/resume/cancel/commit/clean/remove


VPLEX Data Migration Considerations

 Device migrations between distributed devices are not supported
 Devices must be removed from consistency groups before being migrated
 The target device must be the same size or larger than the source device or extent
 Target devices must not have existing virtual volumes
 Migration is supported for “Thin Enabled” virtual volumes


VPLEX Batch Migrations

Batch migrations migrate multiple Extents or Devices. Create batch migrations to
automate repetitive tasks.

Batched extent migrations:

 Migrate arrays within the same cluster
 Same number of LUNs in the source and destination
 Identical capacities in the source and destination

Batched device migrations:

 Migrate to dissimilar arrays
 Migrate devices between clusters in a VPLEX Metro


Extent Mobility

Starting the Wizard

Here is a map of the components of a Virtual Volume prior to migration. Note the names of the Storage Volumes and Extents.


Using the Mobility menu, select Move Data Within Cluster for Extent Mobility.


Extent Mobility Wizard

Select the desired cluster. Then select a Storage Volume; this step is optional and acts
as a filter. The Extent on this Storage Volume will be selected in the next step.


Extent Selection

Here we select an Extent to migrate.

Select Target

Based on the previous selections, a target Extent is presented for selection.
Auto-Generate Mappings can be selected to automatically choose the target.


Review

In step 5, configure the job name and set the rate to perform the migration.


Monitor Extent Mobility Jobs

Once the transfer is complete, the job can be committed; at this time it can also be
canceled. Committed jobs cannot be undone. Here is a map example for the completed
migration; compare it to the previous map.


Device Mobility Wizard

Launch the Wizard

Devices can be migrated either locally or to a remote cluster.

Select Create


Select Cluster

Select the cluster, then select the Virtual Volume.


Select Source and Target


Review


Commit

Once the job transfer is complete it can be committed.


Complete Device Mobility

Once the job transfer is complete, it can be committed. After the migration is
complete, the commit step detaches the source leg of the RAID 1 and removes the
RAID 1. The Virtual Volume, Device, or Extent is identical to the one before the
migration, except that the source Device/Extent is replaced with the target
Device/Extent.


Mobility Operations Through CLI

Four Data Mobility Operations

VPlexcli:/data-migrations/device-migrations> dm migration start --name migrate_012 --from device_012 --to device_012a --transfer-size 12M

VPlexcli:/> ls data-migrations/device-migrations/migrate_012
...
start-time Fri April 8 13:32:23 MDT 2016
status in-progress
...

Use the ls command to display the migration's status.

VPlexcli:/data-migrations/device-migrations> dm migration commit --force --migrations migrate_012


Committed 1 data migration(s) out of 1 requested migration(s).

VPlexcli:/data-migrations/device-migrations> dm migration clean --force --migrations migrate_012


Cleaned 1 data migration(s) out of 1 requested migration(s).

VPlexcli:/data-migrations/device-migrations> dm migration remove --force --migrations migrate_012

Removed 1 data migration(s) out of 1 requested migration(s).

[Diagram: a RAID-1 mirrors the source Device or Extent to the destination Device or Extent while the data moves.]
There are four basic operations involved in moving Extents or Devices:

The start operation first creates a RAID 1 device on top of the source Device. It specifies the
source Device as one of its legs and the destination Device as the other leg. It then copies the
source Device’s data to the destination Device or Extent. This operation can be canceled as
long as it is not committed.

The commit operation removes the pointer to the source leg. At this point in time the
destination Device is the only Device accessible through the Virtual Volume.

The clean operation breaks the source Device down all the way to the Storage Volume level.
The Storage Volume is unclaimed after this operation if there are no other Extents configured
for this Storage Volume. Data mobility operations can also be paused and resumed before the
commit operation. It may be beneficial to pause mobility operations during daytime hours.

The remove operation will remove the record of canceled or committed data migrations.


Batched Mobility

Batched Mobility Details

 Allows the batching of extent or device mobility jobs
 Enables convenient large-scale mobility operations
 Makes use of a Plan File
 Up to 25 jobs at a time

Batch migrations are run as batch jobs from reusable batch migration plan files. Migration plan
files are created using the create-plan command. A single batch migration plan can be either
for Devices or Extents, but not both. Batched mobility provides the ability to script large-scale
mobility operations without having to specify individual extent-by-extent or device-by-device
mobility jobs. Batched mobility can only be performed in the CLI. Batch migrations must follow
the same rules as individual migrations.

Use batch migrations to:

 Retire storage arrays and bring new ones online

 Migrate devices to a different class of storage array

There are two additional steps to prepare for a batch migration:

 Create a batch migration plan file using batch-migrate create-plan

 Test the batch migration plan file using batch-migrate check-plan.


Create and Check the Batched Mobility Plan

The batch-migrate create-plan command creates a migration plan using
the specified sources and targets.
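A sketch of plan creation and validation (the file name and device patterns are placeholders):

VPlexcli:/> batch-migrate create-plan migrate.txt --sources /clusters/cluster-1/devices/dev_* --targets /clusters/cluster-2/devices/tgt_*
VPlexcli:/> batch-migrate check-plan --file migrate.txt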


Start and Cancel Batched Jobs

Use the batch-migrate start --transfer-size [40 KB-128 MB]
<filename> command to start the specified batch migration job.

Use the batch-migrate cancel --file <filename> command to cancel.

VPlexcli:/clusters/cluster-1/devices> batch-migrate start --file=migrate.txt --transfer-size 2M

Started device migration 'BR0_0'.

Started device migration 'BR0_1'.

Started device migration 'BR0_2'.

Started 3 of 3 migrations.
Job Start

Job Cancel
VPlexcli:/data-migrations/device-migrations> batch-migrate cancel --file migrate.txt

• Transfer-size can be as small as 40 KB, as large as 128 MB, and must be a multiple of
4 KB. The default recommended value is 128 KB.

• A larger transfer-size results in higher performance for the migration, but may
negatively impact front-end I/O.


Pause and Resume Batched Jobs

Pause an active batch migration to release bandwidth for host I/O during periods of
peak traffic. Resume the batch migration during periods of low I/O.

VPlexcli:/clusters/cluster-1/devices> batch-migrate pause --file=migrate.txt

WARNING: Failed to pause migration BR0_1 : Evaluation of <<dm migration pause -m /data-
migrations/device-migrations/BR0_1>> failed.

Unable to pause the given data migration(s).

Could not pause migration 'BR0_1' because it is not in-progress.

Paused 2 of 3 migrations Job Pause

Job Resume
VPlexcli:/clusters/cluster-1/devices> batch-migrate resume --file=migrate.txt

WARNING: Failed to resume migration BR0_1 : Evaluation of <<dm migration resume - m /data-
migrations/device-migrations/BR0_1>> failed.

Unable to resume the given data migration(s).

Could not resume migration 'BR0_1' because it is not paused.

Resumed 2 of 3 migrations


Monitor Batched Mobility Jobs

Use the batch-migrate summary <filename> --verbose command to
monitor the progress of the specified batch migration. If more than 25 migrations are
active at the same time, the remainder are queued and their status is displayed as in-progress.


Commit, Clean and Remove Batch Jobs

After the migration is complete, the commit step detaches the source leg of the
RAID 1 and then removes it.

VPlexcli:/> batch-migrate commit --file migrate.txt

VPlexcli:/> batch-migrate clean --rename-targets --file migrate.txt


Using migration plan file /temp/migration_plans/migrate.txt for cleanup phase.
0: Deleted source extent /clusters/cluster-1/devices/R20061115_Symm2264_010, unclaimed its disks Symm2264_010

1: Deleted source extent /clusters/cluster-1/extents/R20061115_Symm2264_011, unclaimed its disks Symm2264_011

VPlexcli:/> batch-migrate remove /data-migrations/device-migrations --file migrate.txt

After the migration is complete, the commit step detaches the source leg of the RAID 1 and
then removes it. The Virtual Volume, Device, or Extent is identical to the one before the
migration except that the source Device/Extent is replaced with the target Device/Extent. A
migration must be committed in order to be cleaned.

When the batch migration is 100% complete, use batch-migrate commit <filename>. Next,
run the clean command to dismantle the source device down to its Storage Volume.

Remove the migration record only if the migration has been committed or canceled. Migration
records are in the /data-migrations/device-migrations context.


Role-Based-Access-Control

Supported RBAC Roles

Role-name / Description / Shell Access

service
 Used to configure VPLEX (Dell Technologies Service personnel only)
Shell access: Always

securityadmin
 User management (only admins should use this)
 Can also perform vplexuser tasks
Shell access: Always

vplexuser
 Standard VPLEX management - provisioning
 Cannot do any configuration (service tasks) or account management
Shell access: Controlled by admin

readonly
 Monitoring only - no changes allowed
 CLI or REST monitoring scripts can use this account
Shell access: Never


Accounts with “vplexuser” Role

The vplexuser role is for accounts created by the admin or accessed via LDAP.


Standard VPLEX management tasks:

 Provision
 Monitoring
 Mobility

Cannot create accounts or perform service tasks.


Accounts with “readonly” Role


The readonly role allows automated monitoring tools read only access to VPLEX.

Accounts with the readonly role can be created.


RBAC - View Account Role Information

Navigate to the /management-server/users/local context to view the user-name
and role-name for any user. You can also see whether or not a particular user is
allowed shell access.


RBAC - Change Role for User

The VPLEX administrator can change the role and shell access for any account
with a vplexuser or readonly role.

 Currently, only accounts with the readonly and vplexuser roles can be changed.
 The role for the service and admin accounts cannot be changed.


Shell Access Control - Enabled/Disabled

Shell Access Details

Service and admin accounts (shell access):

login as: admin
Using keyboard-interactive authentication.
Password:
service@ManagementServer:~> vplexcli
Trying ::1...
Connected to localhost.
Escape character is '^]'.
creating logfile:/var/log/VPlex/cli/session.log
VPlexcli:/>

Accounts with vplexuser and readonly Roles (no shell access):

login as: newuser1
Using keyboard-interactive authentication.
Password:
Trying ::1...
Connected to localhost.
Escape character is '^]'.
creating logfile:/var/log/VPlex/cli/session.log
VPlexcli:/>

A user with shell access lands in the Linux shell and runs vplexcli to enter the VPLEX CLI; a user with shell access = false logs directly into the VPLEX CLI.

These two example screenshots show the difference when logging in with a role that has shell
access, and logging in with a role that does not have shell access. The example on the left has
shell access. Notice in the example on the right that newuser1 does not have shell access. So
after entering the password, newuser1 is put directly into Vplexcli.

Shell access can be enabled or disabled for any user assigned the vplexuser role.

When restricted users exit the VPLEX CLI by any method, they will also exit from the shell.
There is no way for a restricted user to exit VPLEX CLI and get to the shell prompt.


If Restricted Shell Access - SCP Using "share" Folder

There is a restricted “share” folder accessible through the VPLEX CLI.

 Use the ll command in the /management-server/share/out/collect-diagnostics context to see the files that are stored.

 To copy a diagnostics file from the share folder, use the command: scp newuser@<ipaddress>:/collect-diagnostics/FNMXXXXXXX-c1-diag-xxxx.tar.gz.


VPLEX Support Integration

Configuring SRS Gateway

Secure Remote Services (SRS) is a two-way remote connection between Dell EMC
Customer Service and supported products and solutions.

SRS Virtual Edition (VE) runs on a customer-supplied Enterprise VMware or Hyper-V
instance.

Required Information from VPLEX

• IP address of the management server

• VPLEX serial number (Top Level Assembly #)

• Site ID

[Diagram: the administrator and Dell Support connect through the SRS Gateway for proactive remote monitoring. The connection is an SSL tunnel: TLS with RSA key exchange, AES-256 with SHA1 encryption.]


SNMP Overview

 SNMP can retrieve performance statistics for a single cluster
 One-time SNMP agent configuration is necessary
 Call-home event notifications sent
 Third-party management station often used


SNMP Management
Station

SNMP Polling Polling Information

VPLEX Management Server or


SNMP Traps
VPLEX Engines
SN Cluster-1

SNMP version 2c supported


Supported SNMP Polling Commands

The relevant commands for SNMP polling are displayed here:

 VPLEX CLI
 snmp-agent configure
 Configures the SNMP Agent
 One time configuration
 Executed in the VPLEX CLI
 SNMP Management Station

 SNMPGET
 gets the most recent statistics for the OID specified from each director
 SNMPGETNEXT
 gets the most recent statistics for the next OID from each director
 SNMPGETBULK
 gets all SNMP statistics from each director
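As a sketch: configure the agent once on the VPLEX, then poll from the management station. The community string, management-server IP, and OID are placeholders, and snmpget here is the standard net-snmp client rather than a VPLEX command:

VPlexcli:/> snmp-agent configure

$ snmpget -v 2c -c public 10.10.0.10 <OID>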


Supported SNMP Trap Commands

SNMP Traps are configured in the VPLEX CLI.

 notifications snmp-trap create

 Creates an SNMP trap sink for call-home events


 set remote-host

 Configures the trap destination


 set started true

 Begins SNMP trap monitor


 notifications snmp-trap destroy

 Deletes one or multiple SNMP traps
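A minimal sketch of the trap setup (the trap name and destination IP are placeholders; the snmp-traps context path is assumed from recent releases):

VPlexcli:/> notifications snmp-trap create myTrap
VPlexcli:/> cd /notifications/call-home/snmp-traps/myTrap
VPlexcli:/notifications/call-home/snmp-traps/myTrap> set remote-host 10.10.10.20
VPlexcli:/notifications/call-home/snmp-traps/myTrap> set started true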


Using LDAPS for User Accounts

Configure user accounts one of two ways:

 Use an external LDAPS/AD (Lightweight Directory Access Protocol/Active


Directory)
 Configure locally on VPLEX

[Diagram: the admin connects over IP to the VPLEX Management Server or MMCS-A, which stores a certificate (CERT) from the LDAPS server.]


LDAPS Configuration on VPLEX

VPLEX provides the option to configure Lightweight Directory Access Protocol
Secure (or LDAPS) certificates. LDAPS is the more commonly used form of LDAP
and is more secure. The certificates are files that come from the LDAPS server and
are stored on the VPLEX, which creates a trust mechanism between the two
systems.

VPlexcli:/> authentication directory-service configure -d 1 -t 2 -i 10.31.50.59 \
--server-name="linux-72.s3.site" -b "dc=emc,dc=com" \
-r "ou=vplex,dc=emc,dc=com" -n "cn=Administrator,dc=emc,dc=com" \
-l "/opt/emc/VPlex/cert.pem" -p
...

VPlexcli:/>
-d for directory type, 1 = OpenLDAP, 2 = Active Directory

-t for connection type, 1 = LDAP, 2 = LDAPS

-i for IP address of the certificate server

--server-name for the name of the LDAPS server

-b for the base distinguished name of the directory server

-r for the user search path, the distinguished name of the node at which to begin
user searches in the directory server

-n for the bind distinguished name of the directory server

-l the path to the certificate file

-p for Password of Bind Distinguished Name. You are prompted for the
password


VPLEX Monitoring Concepts

Monitor Clusters

The cluster summary command is a good command to run to monitor VPLEX
clusters.

VPlexcli:/> cluster summary

Clusters:

Name Cluster ID TLA Connected Expelled Operational Status Health State

--------- ---------- -------------- --------- -------- ------------------ ------------

cluster-1 1 FNM00161800086 true false ok ok

cluster-2 2 FNM00161800085 true false ok ok

Islands:

Island ID Clusters
--------- --------------------

1 cluster-1, cluster-2
Clusters in a VPLEX Metro will share the same Island ID.

VPlexcli:/> cluster status

operational-status: ok

transitioning-indications:

transitioning-progress:

health-state: ok

health-indications:

local-com: ok
...


VPLEX Monitoring

Validate VPN Status

The vpn status command verifies the VPN connection between management
servers.


Export Port Summary

The export port summary command allows the user to check the status of VPLEX Front-End ports.


Director Connectivity

The director uptime command displays the amount of time a director has
been online. The connectivity director <director name> command will
display all ports and storage array LUNs masked to the specified director.


Monitor Users

The sessions command displays information on users who are logged into the
VPLEX Management Console.


Monitor System Volumes

A metadata volume can be monitored by listing the system-volumes directory. If the
metadata volume is having trouble, it will be shown here. Note that a cluster will not
allow configuration changes to the metadata volume if both mirror legs of a
configured metadata volume have failed. I/O to existing Virtual Volumes will not be
affected.


VPLEX Monitoring

Default Report – capacity-arrays

This monitor generates a capacity report for all the storage in a VPLEX system, grouped by
storage arrays. It requires:

 All Storage Volumes in a storage array have the same tier value

 The tier is indicated in the Storage Volume name

Tier IDs are required to determine the tier of a Storage Volume/storage array. Storage Volumes
that do not contain any of the specified IDs are given the tier value 'no-tier'. The report is
separated into two parts:

 Local storage - Storage Volumes where the data is physically located at one
site only.

 Shared storage - Distributed and remote Virtual Volumes


Default Report – capacity-clusters

Information generated from the report capacity-clusters monitor.


Default Report – capacity-hosts

Information generated by this monitor includes the number of views, total exported
capacity in GB, and the number of exported virtual volumes per cluster.


Monitor VPLEX Object Constructs

The show-use-hierarchy command drills from the specified target up to the
top-level volume and down to the storage-array.

Details - The command will detect sliced elements, drill up through all slices, and indicate in
the output that slices were detected. The original target is highlighted in the output. You can
specify meta, logging, and virtual volumes, local and distributed devices, extents, storage-
volumes, or logical-units on a single command line.


Monitor Storage Volume Details

This command provides additional details on storage volumes.


VPLEX Monitoring

Monitor Storage Volumes

The storage-volume summary command provides information on the number
of used, claimed, and unhealthy storage volumes. It also displays the number of
meta-data volumes, the storage volume vendor, and the total capacity of storage
volumes.


Extent Details

The ll /**/extents command provides detailed information on the extents in a
particular cluster. Among other details, it displays the use attribute, which provides
information on how an Extent is being used.


Local Device Details

This command provides additional details on Devices, including the operational
status and health state, block count, block size, geometry, and the Virtual Volume
using the Device.


Virtual Volume Details

This command provides additional details on Virtual Volumes. These details
include Virtual Volume operational status and health state, block count and block
size, Virtual Volume capacity, locality, and Virtual Volume supporting Devices.


Virtual Volume Summary

The virtual-volume summary command displays information about the Virtual
Volumes in a cluster. This information includes the Virtual Volume health state,
locality, cache mode, capacity, and total capacity.


VPLEX Monitoring

Storage View Details

The ll storage-views command displays detailed information on Storage
Views. This includes the operational status of a view, initiator ports, Virtual
Volumes, and VPLEX Front-end ports that are part of the view. It also displays the
external LUN ID of a Virtual Volume.

Storage View find

The export storage-view find-unmapped-volumes <cluster>
command will display the virtual volumes that are un-exported (not placed in a
storage view).

The export storage-view map <view> will display virtual volumes that are
part of the Storage View and their corresponding local device IDs.


Monitor Distributed Devices

These commands provide information about Distributed Devices within a local
cluster or a VPLEX Metro configuration.

VPLEX Performance Monitoring

Overview of VPLEX Performance Monitoring

The VPLEX Performance Monitor tool allows for the collection of various statistics from
the VPLEX cluster.

 Unisphere Performance Monitoring Dashboard


 VPLEX CLI commands collect performance statistics
 SNMP (SNMP agent runs on local cluster management server)

[Diagram: the performance monitoring tools produce a VPLEX performance graph covering Cluster-1 and Cluster-2.]



VPLEX CLI Performance Monitors

There are three categories of performance monitors that can be viewed through
VPLEX CLI. All of the VPLEX CLI performance monitors will gather data during a
polling cycle and save it to a File Sink.

[Diagram: monitor data from VPLEX Cluster 1 is written to /var/log/VPlex/cli on the management server.]

Three Types of Performance Monitors:

 Perpetual Monitors
– Basic Data - Always Running
 Pre-configured Monitors
– Three per Director
– Must be created
 Custom Monitors
– Created and polling interval set
– Time running is short



Perpetual Performance Monitor Files

Here we see the contents of the folder where the perpetual monitors are written.
Notice the naming convention of the monitor files. Perpetual monitor files are
collected as part of collect-diagnostics.

Perpetual Performance Monitors details

The Perpetual Performance Monitors are always on, started from system setup, cannot be
modified, disabled, or deleted. The currently open monitor file is capped at 10 MB per director
and up to 10 files are stored (.log, .log.1, .log.2, and so on).

 Always on - cannot be modified, disabled, or deleted

 Records a default set of performance categories every 30 seconds

 One file per director, local to a cluster

 Files written to /var/log/VPlex/cli/

o Naming: …PERPETUAL_vplex_sys_perf_mon.log

 Files capped at 10 MB and rotated up to 10 times

 Files can be copied for offline analysis



Create Pre-configured Monitors

Shown here is an example of the report create-monitors command. Notice
that each pre-configured monitor has one file sink. By default, output files are
located in /var/log/VPlex/cli/reports/ on the management server.

Pre-Configured Monitors



Verify Running Monitors

Use the ll command, as shown here, to verify the running monitors on each
director. Notice that the pre-configured monitors have a period of 0s (zero
seconds). This means automatic polling is disabled. Use the report poll-monitors
command to force the monitors to poll for data and send the data to the associated
file sink.



Manually Poll for Pre-configured Monitors

The monitors created by the report create-monitors command have their
period attribute set to 0 seconds, so automatic polling is disabled. Use the report
poll-monitors command to force an immediate poll and collection of
performance data.

Output is written to files located in /var/log/VPlex/cli/reports/ on the management
server.



Add Manual Polling to the Scheduled Tasks

To automate polling for our pre-configured monitors, we can schedule a job to run
at specified time(s). The example shown will run the report poll-monitors command
at 1 AM every day.

VPlexcli:/> schedule add -t "0 1 * * *" -c "report poll-monitors"

The five fields of the time specification are: 1. Minute, 2. Hour, 3. Day of Month, 4. Month, 5. Day of Week.



Custom Monitor Configuration Steps

Here are the general steps to create a Custom Monitor:

1. Determine the type of statistic to collect (monitor stat-list)
2. Determine how often the monitor should collect statistics
3. Create a monitor (monitor create)
4. Add one or more sinks to the monitor (monitor add-file-sink)
5. Update/collect statistics
6. Find the monitor output in /var/log/VPlex/cli/<file name>
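A minimal sketch of steps 1 through 4 (the director path, statistic, monitor name, and file name are placeholders; verify the flag spellings against the CLI guide for your release):

VPlexcli:/> monitor stat-list
VPlexcli:/> monitor create --name dirA-fe --period 10s --director /engines/engine-1-1/directors/director-1-1-A --stats director.fe-ops
VPlexcli:/> monitor add-file-sink --monitor dirA-fe --file /var/log/VPlex/cli/dirA-fe.csv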



Determine the Type of Statistics to Collect

Before creating a monitor, first use the monitor stat-list command to display the
available statistics. There are high-level categories each with subcategories.
Monitoring has no impact on host performance.

Many statistics require a target port or volume to be specified. Output of the monitor
stat-list command identifies which statistics need a target defined.




Types of Statistics

Statistics Details

 counters - monotonically increasing value (analogous to a car odometer). Counters are used to count bytes, operations, and errors. Often reported as a rate such as counts/second or KB/second.

 readings - instantaneous value (analogous to a car speedometer). Readings are used to display CPU utilization and memory utilization. Value can change every sample.

 period-average - average of a series calculated over the last sample period.

 buckets - histogram of 'bucketized' counts. Buckets are used to track latencies, determine median, mode, percentiles, minimums and maximums.



Example: How to Create a Custom Monitor

Custom Monitor Details
