Dell EMC PowerMax Family Product Guide
PowerMaxOS
Revision 08
November 2019
Copyright © 2018-2019 Dell Inc. or its subsidiaries. All rights reserved.
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property
of their respective owners. Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.DellEMC.com
Preface
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not be
supported by all versions of the software or hardware currently in use. The product release notes
provide the most up-to-date information on product features.
Contact your Dell EMC representative if a product does not function properly or does not function
as described in this document.
Note: This document was accurate at publication time. New versions of this document might
be released on Dell EMC Online Support (https://www.dell.com/support/home). Check to
ensure that you are using the latest version of this document.
Purpose
This document introduces the features of the Dell EMC PowerMax arrays running PowerMaxOS
5978. The descriptions of the software capabilities also apply to VMAX All Flash arrays running
PowerMaxOS 5978, except where noted.
Audience
This document is intended for use by customers and Dell EMC representatives.
Related documentation
The following documentation portfolios contain documents related to the hardware platform and
manuals needed to manage your software and storage system configuration. Also listed are
documents for external components that interact with the PowerMax array.
Hardware platform documents:
Dell EMC PowerMax Family Site Planning Guide
Provides planning information regarding the purchase and installation of a PowerMax 2000,
8000 with PowerMaxOS.
Dell EMC Best Practices Guide for AC Power Connections for PowerMax 2000, 8000 with
PowerMaxOS
Describes the best practices to assure fault-tolerant power to a PowerMax 2000 or
PowerMax 8000 array.
PowerMaxOS 5978.221.221 Release Notes for Dell EMC PowerMax and All Flash
Describes new features and any limitations.
Unisphere documents:
Dell EMC Unisphere for PowerMax Release Notes
Describes new features and any known limitations for Unisphere for PowerMax.
Dell EMC Unisphere for PowerMax REST API Concepts and Programmer's Guide
Describes the Unisphere for PowerMax REST API concepts and functions.
Dell EMC Solutions Enabler Array Controls and Management CLI User Guide
Describes how to configure array control, management, and migration operations using
SYMCLI commands for arrays running HYPERMAX OS and PowerMaxOS.
Dell EMC Solutions Enabler Array Controls and Management CLI User Guide
Describes how to configure array control, management, and migration operations using
SYMCLI commands for arrays running Enginuity.
Dell EMC Events and Alerts for PowerMax and VMAX User Guide
Documents the SYMAPI daemon messages, asynchronous errors and message events,
SYMCLI return codes, and how to configure event logging.
PowerPath documents:
ProtectPoint documents:
Dell EMC ProtectPoint Solutions Guide
Provides ProtectPoint information related to various data objects and data handling facilities.
Note: ProtectPoint has been renamed to Storage Direct and it is included in PowerProtect,
Data Protection Suite for Apps, or Data Protection Suite Enterprise Software Edition.
Mainframe Enablers documents:
Dell EMC Mainframe Enablers Installation and Customization Guide
Describes how to install and configure Mainframe Enablers software.
Dell EMC Mainframe Enablers ResourcePak Base for z/OS Product Guide
Describes how to configure VMAX system control and management using the EMC Symmetrix
Control Facility (EMCSCF).
Dell EMC Mainframe Enablers Consistency Groups for z/OS Product Guide
Describes how to use Consistency Groups for z/OS (ConGroup) to ensure the consistency of
data remotely copied by SRDF in the event of a rolling disaster.
Dell EMC Mainframe Enablers SRDF Host Component for z/OS Product Guide
Describes how to use SRDF Host Component to control and monitor remote data replication
processes.
Dell EMC Mainframe Enablers TimeFinder SnapVX and zDP Product Guide
Describes how to use TimeFinder SnapVX and zDP to create and manage space-efficient
targetless snaps.
Dell EMC Mainframe Enablers TimeFinder/Clone Mainframe Snap Facility Product Guide
Describes how to use TimeFinder/Clone, TimeFinder/Snap, and TimeFinder/CG to control
and monitor local data replication processes.
Dell EMC Mainframe Enablers TimeFinder Utility for z/OS Product Guide
Describes how to use the TimeFinder Utility to condition volumes and devices.
z/TPF documents:
Dell EMC ResourcePak for z/TPF Product Guide
Describes how to configure VMAX system control and management in the z/TPF operating
environment.
Typographical conventions
Dell EMC uses the following type style conventions in this document:
Technical support
To open a service request through the Dell EMC Online Support (https://www.dell.com/
support/home) site, you must have a valid support agreement. Contact your Dell EMC sales
representative for details about obtaining a valid support agreement or to answer any
questions about your account.
eLicensing support
To activate your entitlements and obtain your VMAX license files, visit the Service Center on
Dell EMC Online Support (https://www.dell.com/support/home), as directed on your License
Authorization Code (LAC) letter emailed to you.
l For help with missing or incorrect entitlements after activation (that is, expected
functionality remains unavailable because it is not licensed), contact your Dell EMC
Account Representative or Authorized Reseller.
l For help with any errors applying license files through Solutions Enabler, contact the Dell
EMC Customer Support Center.
l If you are missing a LAC letter, or require further instructions on activating your licenses
through the Online Support site, contact Dell EMC's worldwide Licensing team at
licensing@emc.com or call:
n North America, Latin America, APJK, Australia, New Zealand: SVC4EMC
(800-782-4362) and follow the voice prompts.
Your comments
Your suggestions help us improve the accuracy, organization, and overall quality of the
documentation. Send your comments and feedback to: VMAXContentFeedback@emc.com
This chapter introduces PowerMax systems and the PowerMaxOS operating environment.
PowerMax arrays
The PowerMax family of arrays has two models:
l PowerMax 2000 with a maximum capacity of 1 PBe (Petabytes effective) that can operate in
open systems environments
l PowerMax 8000 with a maximum capacity of 4 PBe that can operate in open systems,
mainframe, or mixed open systems and mainframe environments
PowerMax systems are modular, enabling them to expand to meet the future needs of the
customer.
System building blocks
Each PowerMax array is made up of one or more building blocks each known as a PowerMax Brick
in an open systems array or a PowerMax zBrick in a mainframe array. A PowerMax Brick or
PowerMax zBrick consists of:
l An engine with two directors (the redundant data storage processing unit)
l Flash storage in two Drive Array Enclosures (DAEs) each with 24 slots
l Minimum storage capacity:
n PowerMax 2000: 13 TBu (Terabytes usable)
n PowerMax 8000 in an open systems environment: 53 TBu
n PowerMax 8000 in a mainframe environment: 13 TBu
n PowerMax 8000 in a mixed open systems and mainframe environment: 66 TBu
Hardware expansion
Customers can increase the initial storage capacity in 13 TBu units each known as a Flash Capacity
Pack (in an open systems environment) or a zFlash Capacity Pack (in a mainframe environment).
The addition of Flash Capacity Packs or zFlash Capacity Packs to an array is known as scaling up.
Also, customers can add further PowerMax Bricks or PowerMax zBricks to increase the capacity
and capability of the system. A PowerMax 2000 array can have a maximum of two PowerMax
Bricks. A PowerMax 8000 can have a maximum of eight PowerMax Bricks or PowerMax zBricks.
The addition of bricks to an array is known as scaling out.
Finally, customers can increase the internal memory of the system. A PowerMax 2000 system can
have 512 GB, 1 TB, or 2 TB of memory on each engine. A PowerMax 8000 system can have 1 TB or
2 TB of memory on each engine.
Storage devices
Starting with PowerMaxOS 5978.444.444 there are two types of storage device available for a
PowerMax array:
l Storage Class Memory (SCM) drive
l NVMe flash drive
SCM drives are available with PowerMaxOS 5978.444.444 and later. Previous versions of
PowerMaxOS 5978 work with NVMe flash drives only.
SCM drives are new, high-performance drives that have a significantly lower latency than the
NVMe flash drives. An eligible array can have any mix of SCM drives and NVMe drives.
In SCM-based systems:
l Customers can increase the capacity of SCM drives in increments of 5.25 TBu.
l The minimum starting capacity of a SCM-based system is 21 TBu.
System specifications
Detailed specifications of the PowerMax arrays are available from the Dell EMC website.
Software packages
There are four software packages for PowerMax arrays. The Essentials and Pro software packages
are for open system arrays while the zEssentials and zPro software packages are for mainframe
arrays.
Optional features
The optional features in the Essentials software package are:
Note: The Pro software package contains 75 PowerPath licenses. Extra licenses are available
separately.
Optional features
The optional features of the Pro software package are:
Optional features
The optional features in the zEssentials software package are:
Optional features
The optional features in the zPro software package are:
Package availability
The availability of the PowerMaxOS software packages on the PowerMax platforms is:
PowerMaxOS
This section summarizes the main features of PowerMaxOS.
PowerMaxOS emulations
PowerMaxOS provides emulations (executables) that perform specific data service and control
functions in the PowerMaxOS environment. The available emulations are:
Host connectivity - Front-end emulations that receive data from the host or network and commit it
to the array, and that send data from the array to the host or network:
l FA - Fibre Channel (FC - 16 Gb/s a and 32 Gb/s)
l SE - iSCSI (SE - 10 Gb/s)
l EF - FICON b (EF - 16 Gb/s)
l FN - FC-NVMe (FN - 32 Gb/s c d)
Remote replication - Emulations that interconnect arrays for SRDF:
l RF - Fibre Channel SRDF (16 Gb/s and 32 Gb/s)
l RE - GbE SRDF (10 GbE)
a. The 16 Gb/s module autonegotiates to 16/8/4 Gb/s using optical SFP and OM2/OM3/OM4 cabling.
b. Only on PowerMax 8000 arrays.
c. Available on PowerMax arrays only.
d. The 32 Gb/s module autonegotiates to 32/16/8 Gb/s.
Container applications
PowerMaxOS provides an open application platform for running data services. It includes a
lightweight hypervisor that enables multiple operating environments to run as virtual machines on
the storage array.
Application containers are virtual machines that provide embedded applications on the storage
array. Each container virtualizes the hardware resources that are required by the embedded
application, including:
l Hardware needed to run the software and embedded application (processor, memory, PCI
devices, power management)
l VM ports, to which LUNs are provisioned
l Access to necessary drives (boot, root, swap, persist, shared)
Embedded Management
The eManagement container application embeds management software (Solutions Enabler, SMI-S,
Unisphere for PowerMax) on the storage array, enabling you to manage the array without
requiring a dedicated management host.
With eManagement, you can manage a single storage array and any SRDF attached arrays. To
manage multiple storage arrays with a single control pane, use the traditional host-based
management interfaces: Unisphere and Solutions Enabler. To this end, eManagement allows you to
link-and-launch a host-based instance of Unisphere.
eManagement is typically preconfigured and enabled at the factory. However, eManagement can
be added to arrays in the field. Contact your support representative for more information.
Embedded applications require system memory. The following table lists the amount of memory
unavailable to other data services.
eNAS configurations
The storage capacity required for arrays supporting eNAS is at least 680 GB. This table lists eNAS
configurations and front-end I/O modules.
a. Data Movers are added in pairs and must have the same configuration.
b. The PowerMax 8000 can be configured through Sizer with a maximum of four Data Movers.
However, six and eight Data Movers can be ordered by RPQ. As the number of Data Movers
increases, the maximum number of I/O cards, logical cores, memory, and maximum
capacity also increases.
c. For 2, 4, 6, and 8 Data Movers, respectively.
d. A single 2-port 10GbE Optical I/O module is required by each Data Mover for initial
PowerMax configurations. However, that I/O module can be replaced with a different I/O
module (such as a 4-port 1GbE or 2-port 10GbE copper) using the normal replacement
capability that exists with any eNAS Data Mover I/O module. Also, additional I/O modules
can be configured through an I/O module upgrade/add as long as standard rules are followed
(no more than three I/O modules per Data Mover, all I/O modules must occupy the same
slot on each director on which a Data Mover resides).
RAID levels
PowerMax arrays can use the following RAID levels:
l PowerMax 2000: RAID 5 (7+1) (Default), RAID 5 (3+1) and RAID 6 (6+2)
l PowerMax 8000: RAID 5 (7+1) and RAID 6 (6+2)
Enabling D@RE
D@RE is a licensed feature that is installed and configured at the factory. Upgrading an existing
array to use D@RE is possible, but is disruptive. The upgrade requires re-installing the array, and
may involve a full data back up and restore. Before upgrading, plan how to manage any data
already on the array. Dell EMC Professional Services offers services to help you implement D@RE.
D@RE components
Embedded D@RE (Figure 1 on page 31) uses the following components, all of which reside on the
primary Management Module Control Station (MMCS):
l RSA Embedded Data Protection Manager (eDPM)— Embedded key management platform,
which provides onboard encryption key management functions, such as secure key generation,
storage, distribution, and audit.
l RSA BSAFE® cryptographic libraries— Provides security functionality for RSA eDPM Server
(embedded key management) and the Dell EMC KTP client (external key management).
l Common Security Toolkit (CST) Lockbox— Hardware- and software-specific encrypted
repository that securely stores passwords and other sensitive key manager configuration
information. The lockbox binds to a specific MMCS.
External D@RE (Figure 2 on page 31) uses the same components as embedded D@RE, and adds
the following:
l Dell EMC Key Trust Platform (KTP)— Also known as the KMIP Client, this component resides
on the MMCS and communicates with external key managers using the OASIS Key
Management Interoperability Protocol (KMIP) to manage encryption keys.
l External Key Manager— Provides centralized encryption key management capabilities such as
secure key generation, storage, distribution, audit, and enabling Federal Information
Processing Standard (FIPS) 140-2 level 3 validation with High Security Module (HSM).
l Cluster/Replication Group— Multiple external key managers sharing configuration settings
and encryption keys. Configuration and key lifecycle changes made to one node are replicated
to all members within the same cluster or replication group.
Figure 1 Embedded D@RE (diagram: RSA eDPM server and client on the array directors, host
storage configuration and management traffic over SAN/IP, unencrypted data on the host side,
encrypted data on the back end, and a unique key per physical drive)
Figure 2 External D@RE (diagram: the Key Trust Platform (KTP) on the array exchanging
TLS-authenticated KMIP traffic with external key managers, unencrypted data on the host side,
encrypted data on the back end, and a unique key per physical drive)
Encryption keys must be highly available when they are needed, and tightly secured. Keys, and the
information required to use keys (during decryption), must be preserved for the lifetime of the
data. This is critical for encrypted data that is kept for many years.
Key accessibility is vital in high-availability environments. D@RE caches the keys locally. So
connection to the Key Manager is necessary only for operations such as the initial installation of
the array, replacement of a drive, or drive upgrades.
Lifecycle events involving keys (generation and destruction) are recorded in the array's Audit Log.
Key protection
The local keystore file is encrypted with a 256-bit AES key derived from a randomly generated
password file. This password file is secured in the Common Security Toolkit (CST) Lockbox, which
uses RSA BSAFE technology. The Lockbox is protected using MMCS-specific stable system values
(SSVs) of the primary MMCS. These are the same SSVs that protect Secure Service Credentials
(SSC).
Compromising the MMCS’s drive or copying Lockbox/keystore files off the array causes the SSV
tests to fail. Compromising the entire MMCS only gives an attacker access if they also
successfully compromise SSC.
There are no backdoor keys or passwords to bypass D@RE security.
Key operations
D@RE provides a separate, unique Data Encryption Key (DEK) for each physical drive in the array,
including spare drives. To ensure that D@RE uses the correct key for a given drive:
l DEKs stored in the array include a unique key tag and key metadata. This information is
included with the key material when the DEK is wrapped (encrypted) for use by the array.
l During encryption I/O, the expected key tag associated with the drive is supplied separately
from the wrapped key.
l During key unwrap, the encryption hardware checks that the key unwrapped correctly and that
it matches the supplied key tag.
l Information in a reserved system LBA (Physical Information Block, or PHIB) verifies the key
used to encrypt the drive and ensures the drive is in the correct location.
l During initialization, the hardware performs self-tests to ensure that the encryption/
decryption logic is intact.
The self-test prevents silent data corruption due to encryption hardware failures.
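The key-tag check described above can be illustrated with a short, purely conceptual sketch. It is
not the D@RE implementation: the names, the XOR-based wrapping, and the HMAC tag are
illustrative assumptions standing in for the array's internal key-wrap hardware.

```python
# Conceptual sketch only: pair a wrapped DEK with a key tag and verify the tag
# at unwrap time. Names and constructions are illustrative assumptions.
import hmac
import hashlib
import os

WRAP_KEY = os.urandom(32)  # stand-in for the keystore wrapping key

def wrap_dek(dek: bytes, drive_id: str) -> dict:
    """Return a 'wrapped' DEK record carrying a key tag and drive metadata."""
    key_tag = hmac.new(WRAP_KEY, dek + drive_id.encode(), hashlib.sha256).hexdigest()
    # XOR with a keystream stands in for real key wrapping (e.g. AES key wrap).
    keystream = hashlib.sha256(WRAP_KEY + drive_id.encode()).digest()
    wrapped = bytes(a ^ b for a, b in zip(dek, keystream))
    return {"drive_id": drive_id, "wrapped_dek": wrapped, "key_tag": key_tag}

def unwrap_dek(record: dict, expected_tag: str) -> bytes:
    """Unwrap the DEK and confirm it matches the expected key tag."""
    keystream = hashlib.sha256(WRAP_KEY + record["drive_id"].encode()).digest()
    dek = bytes(a ^ b for a, b in zip(record["wrapped_dek"], keystream))
    check = hmac.new(WRAP_KEY, dek + record["drive_id"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(check, expected_tag):
        raise ValueError("key tag mismatch: wrong key or wrong drive")
    return dek

record = wrap_dek(os.urandom(32), drive_id="drive-0042")
dek = unwrap_dek(record, expected_tag=record["key_tag"])  # succeeds
```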
Audit logs
The audit log records major activities on an array, including:
l Host-initiated actions
l Physical component changes
l Actions on the MMCS
l D@RE key management events
l Attempts blocked by security controls (Access Controls)
The Audit Log is secure and tamper-proof so event contents cannot be altered. Users with
Auditor access can view, but not modify, the log.
Data erasure
Dell EMC Data Erasure uses specialized software to erase information on arrays. It mitigates the
risk of information dissemination, and helps secure information at the end of the information
lifecycle. Data erasure:
Vault to flash
PowerMax arrays initiate a vault operation when the system is powered down or goes offline, or
when adverse environmental conditions occur, such as the loss of a data center due to an air
conditioning failure.
Each array comes with Standby Power Supply (SPS) modules. On a power loss, the array uses the
SPS power to write the system mirrored cache to flash storage. Vaulted images are fully
redundant; the contents of the system mirrored cache are saved twice to independent flash
storage.
During the restore part of the operation, the array's startup program initializes the hardware
and the environmental system, and restores the system mirrored cache contents from the
saved data (while checking data integrity).
The system resumes normal operation when the SPS modules have sufficient charge to complete
another vault operation, if required. If any condition is not safe, the system does not resume
operation and notifies Customer Support for diagnosis and repair. This allows Customer Support to
communicate with the array and restore normal system operations.
Data efficiency
Data efficiency is a feature of PowerMax systems that is designed to make the best available use
of the storage space on a storage system. Data efficiency has two elements:
l Inline compression
l Deduplication
They work together to reduce the amount of storage that an individual storage group requires. The
space savings achieved through data efficiency is measured as the Data Reduction Ratio (DRR).
Data efficiency operates on individual storage groups so that a system can have a mix of storage
groups that use data efficiency and those that don't.
Inline compression
Inline compression is a feature of storage groups. When enabled (this is the default setting), new
I/O to a storage group is compressed when written to disk, while existing data on the storage
group starts to compress in the background. After turning off compression, new I/O is no longer
compressed, and existing data remains compressed until it is written again, at which time it
decompresses.
Inline compression, deduplication, and over-subscription complement each other. Over-
subscription allows presenting larger than needed devices to hosts without having the physical
drives to fully allocate the space represented by the thin devices (Thin device oversubscription on
page 67 has more information on over-subscription). Inline compression further reduces the data
footprint by increasing the effective capacity of the array.
The example in Figure 3 on page 36 shows this. Here, 1.3 PB of host-attached devices (TDEVs) is
over-provisioned to 1.0 PB of back-end devices (TDATs), which reside on 1.0 PB of Flash drives.
Following data compression, the data blocks are compressed by a ratio of 2:1, reducing the number
of Flash drives by half. Basically, with compression enabled, the array requires half as many drives
to support a given front-end capacity.
Figure 3 Inline compression and over-subscription
l Activity Based Compression: the most active tracks are held in cache and not compressed until
they move from cache to disk. This feature helps improve the overall performance of the array
while reducing wear on the flash drives.
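The relationship between over-subscription and compression in the Figure 3 example comes down
to simple arithmetic. The sketch below only reproduces the numbers quoted above (1.3 PB of
TDEVs, 1.0 PB of TDATs, a 2:1 compression ratio); it is an illustration, not a sizing tool.

```python
# Worked example of the Figure 3 numbers: over-subscription plus 2:1 compression.
front_end_pb = 1.3       # host-addressable TDEV capacity (PB)
back_end_pb = 1.0        # provisioned back-end TDAT capacity (PB)
compression_ratio = 2.0  # data reduction ratio (DRR) from inline compression

oversubscription = front_end_pb / back_end_pb          # 1.3:1
physical_flash_pb = back_end_pb / compression_ratio    # 0.5 PB of flash needed
front_end_per_raw = front_end_pb / physical_flash_pb   # 2.6:1 front-end per raw flash

print(f"Over-subscription ratio : {oversubscription:.2f}:1")
print(f"Flash required          : {physical_flash_pb:.2f} PB")
print(f"Front-end per raw flash : {front_end_per_raw:.2f}:1")
```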
Software compression
PowerMaxOS 5978 introduces software compression for PowerMax arrays. Software compression
is an extension of regular, inline compression and is available on PowerMax systems only. It
operates on data that was previously compressed but has not been accessed for 35 days or more.
Software compression recompresses this data using an algorithm that may produce a much
greater DRR. The amount of extra compression that can be achieved depends on the nature of the
data.
The criteria that software compression uses to select a data extent for recompression are:
l The extent is in a storage group that is enabled for compression
l The extent has not already been recompressed by software compression
l The extent has not been accessed in the previous 35 days
Software compression runs in the background, using CPU cycles that would otherwise be free.
Therefore, it does not impact the performance of the storage system. Also, software compression
does not require any user intervention as it automatically selects and recompresses idle data.
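As a restatement of the three selection criteria listed above, the following sketch expresses them
as a simple predicate. The field names are invented for illustration only; PowerMaxOS applies these
rules internally and exposes no such interface.

```python
# Conceptual restatement of the software-compression selection criteria.
# Field names are illustrative; there is no user-visible API for this.
from dataclasses import dataclass
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(days=35)

@dataclass
class Extent:
    storage_group_compression_enabled: bool
    already_recompressed: bool
    last_access: datetime

def eligible_for_software_compression(extent: Extent, now: datetime) -> bool:
    return (
        extent.storage_group_compression_enabled        # SG enabled for compression
        and not extent.already_recompressed              # not already recompressed
        and now - extent.last_access >= IDLE_THRESHOLD   # idle for 35 days or more
    )

now = datetime(2019, 11, 1)
idle = Extent(True, False, last_access=now - timedelta(days=40))
busy = Extent(True, False, last_access=now - timedelta(days=2))
print(eligible_for_software_compression(idle, now))   # True
print(eligible_for_software_compression(busy, now))   # False
```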
Inline deduplication
Deduplication works in conjunction with inline compression to further improve efficiency in the use
of storage space. It reduces the number of copies of identical tracks that are stored on back-end
devices. Depending on the nature of the data, deduplication can provide additional data reduction
over and above the reduction that compression provides.
The storage group is the unit that deduplication works on. When it detects a duplicated track in a
group, deduplication replaces it with a pointer to the track that already resides on back-end
storage.
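The following sketch illustrates the general idea of replacing duplicate tracks with pointers. It is
purely conceptual; the fingerprinting method and data structures that PowerMaxOS uses
internally are not documented here.

```python
# Conceptual illustration of track-level deduplication within a storage group:
# identical tracks are stored once and subsequent copies become pointers.
import hashlib

backend_tracks = {}   # fingerprint -> track data (one physical copy per fingerprint)
track_map = {}        # (device, track_number) -> fingerprint (the "pointer")

def write_track(device: str, track_number: int, data: bytes) -> None:
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint not in backend_tracks:
        backend_tracks[fingerprint] = data           # first copy: store the track
    track_map[(device, track_number)] = fingerprint  # duplicates: pointer only

def read_track(device: str, track_number: int) -> bytes:
    return backend_tracks[track_map[(device, track_number)]]

write_track("TDEV_001", 0, b"A" * 128 * 1024)
write_track("TDEV_002", 7, b"A" * 128 * 1024)   # duplicate track, no new storage
print(len(backend_tracks))                       # 1 physical copy for 2 logical tracks
```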
Availability
Deduplication is available only on PowerMax arrays that run PowerMaxOS. In addition,
deduplication works on FBA data only. A system with a mix of FBA and CKD devices can use
deduplication, even when the FBA and CKD devices occupy separate SRPs.
Relationship with inline compression
Deduplication works hand-in-hand with inline compression. Enabling deduplication also enables
compression. Deduplication cannot operate independently of compression.
In addition, deduplication operates across an entire system. It is not possible to use compression
only on some storage groups and compression with deduplication on others.
Compatibility
Deduplication is compatible with the Dell EMC Live Optics performance analyzer. An array with
deduplication can participate in a performance study of an IT environment.
User management
Solutions Enabler and Unisphere for PowerMax have facilities to manage deduplication, including:
l Selecting the storage groups to use deduplication
l Monitoring the performance of the system
Management Interfaces on page 39 contains an overview of Solutions Enabler and Unisphere for
PowerMax.
Home: View and manage functions such as array usage, alert settings, authentication options,
system preferences, user authorizations, and link and launch client registrations.
Hosts: View and manage initiators, masking views, initiator groups, array host aliases, and port
groups.
Data Protection: View and manage local replication, monitor and manage replication pools, create
and view device groups, and monitor and manage migration sessions.
Performance: Monitor and manage array dashboards, perform trend analysis for future capacity
planning, and analyze data.
System: View and display dashboards, active jobs, alerts, array attributes, and licenses.
Events: View alerts, the job list, and the audit log.
Unisphere also has a Representational State Transfer (REST) API. With this API you can access
performance and configuration information, and provision storage arrays. You can use the API in
any programming environment that supports standard REST clients, such as web browsers and
programming platforms that can issue HTTP requests.
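For example, a REST client might retrieve the list of arrays that a Unisphere instance manages, as
in the sketch below. The host name, credentials, port, and the version segment of the URL are
assumptions that follow common Unisphere for PowerMax REST conventions; consult the Dell EMC
Unisphere for PowerMax REST API Concepts and Programmer's Guide for the exact resource paths in
your release.

```python
# Minimal sketch of calling the Unisphere for PowerMax REST API with the
# requests library. Host, credentials, port, and the "90" version segment of
# the URL are placeholders/assumptions -- check the REST API guide.
import requests

UNISPHERE = "https://unisphere.example.com:8443"   # hypothetical management host
AUTH = ("smc_user", "smc_password")                # hypothetical credentials

def list_arrays():
    url = f"{UNISPHERE}/univmax/restapi/90/system/symmetrix"
    resp = requests.get(url, auth=AUTH, verify=False)  # verify SSL in production
    resp.raise_for_status()
    return resp.json().get("symmetrixId", [])

if __name__ == "__main__":
    for array_id in list_arrays():
        print(array_id)
```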
Workload Planner
Workload Planner displays performance metrics for applications. Use Workload Planner to:
l Model the impact of migrating a workload from one storage system to another.
l Model proposed new workloads.
l Assess the impact of moving one or more workloads off of a given array running PowerMaxOS.
l Determine current and future resource shortfalls that require action to maintain the requested
workloads.
Unisphere 360
Unisphere 360 is an on-premise management solution that provides a single window across arrays
running PowerMaxOS at a single site. Use Unisphere 360 to:
l Add a Unisphere server to Unisphere 360 to allow for data collection and reporting of
Unisphere management storage system data.
l View the system health, capacity, alerts and capacity trends for your Data Center.
l View all storage systems from all enrolled Unisphere instances in one place.
l View details on performance and capacity.
l Link and launch to Unisphere instances running V8.2 or higher.
l Manage Unisphere 360 users and configure authentication and authorization rules.
l View details of visible storage arrays, including current and target storage.
CloudIQ
CloudIQ is a web-based application for monitoring multiple PowerMax arrays simultaneously.
However, CloudIQ is more than a passive monitor. It uses predictive analytics to help with:
l Visualizing trends in capacity usage
l Predicting potential shortcomings in capacity and performance so that early action can be
taken to avoid them
l Troubleshooting performance issues
CloudIQ is available with PowerMaxOS 5978.221.221 and later, and with Unisphere for PowerMax
V9.0.1 and later.
Periodically, a data collector runs that gathers and packages data about the arrays that Unisphere
manages and their performance. The collector then sends the packaged data to CloudIQ. On
receiving the data, CloudIQ unpacks it, processes it, and makes it available to view in a GUI.
CloudIQ is hosted on Dell EMC infrastructure that is secure, highly available, and fault tolerant. In
addition, the infrastructure provides a guaranteed, 4-hour disaster recovery window.
The rest of this section contains more information on CloudIQ and how it interacts with a
PowerMax array.
Connectivity
The data collector communicates with CloudIQ through a Secure Remote Services (SRS)
gateway. SRS uses an encrypted connection running over HTTPS to exchange data with CloudIQ.
The connection to the Secure Remote Services gateway is either through the secondary
Management Modules Control Station (MMCS) within a PowerMax array, or through a direct
connection from the management host that runs Unisphere. Connection through the MMCS
requires that the array runs PowerMaxOS 5978.444.444.
The data collector is a component of Unisphere for PowerMax. So, it is installed along with
Unisphere and you manage it with Unisphere.
Registration
Before you can monitor an array, you register it with SRS using the Settings dialog in Unisphere for
PowerMax. To be able to register an array you need a current support contract with Dell EMC.
Once an array is registered, data collection can begin. If you wish, you can exclude any array from
data collection and hence from being monitored by CloudIQ.
Data collection
The data collector gathers four categories of data and uses a different collection frequency for
each category:
l Alerts: 5 minutes
l Performance: 5 minutes
l Health: 5 minutes
l Configuration: 1 hour
In the Performance category, CloudIQ displays bandwidth, latency, and IOPS (I/O operations per
second). The values are calculated from these data items, collected from the array (see the sketch
after this list):
l Throughput read
l Throughput write
l Latency read
l Latency write
l IOPS read
l IOPS write
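A minimal sketch of how the displayed values could be derived from the collected read and write
items is shown below. The aggregation rules (simple sums and an IOPS-weighted latency) are
assumptions used for illustration; CloudIQ's actual calculations are not documented here.

```python
# Illustrative aggregation of the collected read/write items into the
# bandwidth, IOPS, and latency figures CloudIQ displays. The exact formulas
# CloudIQ uses are not documented here; these are assumptions.
def aggregate(sample: dict) -> dict:
    total_iops = sample["iops_read"] + sample["iops_write"]
    bandwidth_mbps = sample["throughput_read"] + sample["throughput_write"]
    # Weight each latency by its share of the I/O operations.
    if total_iops:
        latency_ms = (
            sample["latency_read"] * sample["iops_read"]
            + sample["latency_write"] * sample["iops_write"]
        ) / total_iops
    else:
        latency_ms = 0.0
    return {"iops": total_iops, "bandwidth_mbps": bandwidth_mbps, "latency_ms": latency_ms}

sample = {
    "throughput_read": 800.0, "throughput_write": 400.0,   # MB/s
    "latency_read": 0.4, "latency_write": 0.9,             # ms
    "iops_read": 90_000, "iops_write": 30_000,
}
print(aggregate(sample))
```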
The Configuration category contains information on configuration, capacity, and efficiency for the
overall array, each SRP (Storage Resource Pool), and each storage group.
CloudIQ provides the collector with configuration data that defines the data items to collect and
their collection frequency. CloudIQ sends this configuration data once a day (at most). As CloudIQ
gets new features, or enhancements to existing features, the data it requires changes accordingly.
It communicates this to the data collector in each registered array in the form of revised
configuration data.
Monitor facilities
CloudIQ has a comprehensive set of facilities for monitoring a storage array:
l A summary page gives an overview of the health of all the arrays.
l The systems page gives a summary of the state of each individual array.
l The details page gives information about an individual array, its configuration, storage
capacity, performance, and health.
l The health center provides details of the alerts that individual arrays have raised.
The differentiator for CloudIQ, however, is its use of predictive analytics. CloudIQ analyzes the
data it receives from each array to determine the normal range of values for various metrics.
Using this, it can highlight when a metric goes outside its normal range.
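The idea of learning a normal range and flagging departures from it can be sketched as follows.
The mean-plus-or-minus-two-standard-deviations rule is only an illustrative stand-in for CloudIQ's
proprietary analytics.

```python
# Illustrative "normal range" check: learn a band from history and flag
# samples outside it. CloudIQ's analytics are proprietary; the
# mean +/- 2 sigma rule here is just a stand-in.
from statistics import mean, stdev

def normal_range(history):
    mu, sigma = mean(history), stdev(history)
    return mu - 2 * sigma, mu + 2 * sigma

def is_anomalous(value, history):
    low, high = normal_range(history)
    return value < low or value > high

latency_history_ms = [0.42, 0.45, 0.40, 0.44, 0.43, 0.41, 0.46, 0.44]
print(is_anomalous(0.43, latency_history_ms))  # False: within the learned band
print(is_anomalous(1.60, latency_history_ms))  # True: well outside the band
```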
Support services
SRS provides more facilities than simply sending data from an array to CloudIQ:
l An array can automatically open service requests for critical issues that arise.
l Dell EMC support staff can access the array to troubleshoot critical issues and to obtain
diagnostic information such as log and dump files.
Security
Each customer with access to CloudIQ has a dedicated access portal through which they can view
their own arrays only. A customer does not have access to any other customer's arrays or data. In
addition, SRS uses point-to-point encryption over a dedicated VPN, multi-factor authentication,
customer-controlled access policies, and RSA digital certificates to ensure that all customer data is
securely transported to Dell EMC.
The infrastructure that CloudIQ uses is regularly scanned for vulnerabilities with remediation taking
place as a result of these scans. This helps to maintain the security and privacy of all customer
data.
Solutions Enabler
Solutions Enabler provides a comprehensive command line interface (SYMCLI) to manage your
storage environment.
SYMCLI commands are invoked from a management host, either interactively on the command
line, or using scripts.
SYMCLI is built on functions that use system calls to generate low-level I/O SCSI commands.
Configuration and status information is maintained in a host database file, reducing the number of
enquiries from the host to the arrays.
Use SYMCLI to:
l Configure array software (For example, TimeFinder, SRDF, Open Replicator)
l Monitor device configuration and status
l Perform control operations on devices and data objects
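As a simple illustration of driving SYMCLI from a script, the sketch below wraps two read-only
commands (symcfg list and symdev list) with Python's subprocess module. The array ID is a
placeholder, and the options available depend on your Solutions Enabler release; see the Dell EMC
Solutions Enabler Array Controls and Management CLI User Guide for authoritative syntax.

```python
# Sketch of scripting read-only SYMCLI commands from a management host.
# The array ID is a placeholder; verify command options against the
# Solutions Enabler Array Controls and Management CLI User Guide.
import subprocess

SID = "000197900123"  # hypothetical array ID

def run_symcli(*args: str) -> str:
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout

# List the arrays visible to this management host.
print(run_symcli("symcfg", "list"))

# List devices on one array.
print(run_symcli("symdev", "-sid", SID, "list"))
```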
Solutions Enabler also has a Representational State Transfer (REST) API. Use this API to access
performance and configuration information, and provision storage arrays. It can be used in any
programming environments that supports standard REST clients, such as web browsers and
programming platforms that can issue HTTP requests.
Mainframe Enablers
The Dell EMC Mainframe Enablers are software components that allow you to monitor and manage
arrays running PowerMaxOS in a mainframe environment:
l ResourcePak Base for z/OS
Enables communication between mainframe-based applications (provided by Dell EMC or
independent software vendors) and PowerMax/VMAX arrays.
l SRDF Host Component for z/OS
Monitors and controls SRDF processes through commands executed from a host. SRDF
maintains a real-time copy of data at the logical volume level in multiple arrays located in
physically separate sites.
l Dell EMC Consistency Groups for z/OS
Ensures the consistency of data remotely copied by SRDF in the event of a rolling
disaster.
l AutoSwap for z/OS
Handles automatic workload swaps between arrays when an unplanned outage or problem is
detected.
l TimeFinder SnapVX
With Mainframe Enablers V8.0 and higher, SnapVX creates point-in-time copies directly in the
Storage Resource Pool (SRP) of the source device, eliminating the concepts of target devices
and source/target pairing. SnapVX point-in-time copies are accessible to the host through a
link mechanism that presents the copy on another device. TimeFinder SnapVX and
PowerMaxOS support backward compatibility to traditional TimeFinder products, including
TimeFinder/Clone, TimeFinder VP Snap, and TimeFinder/Mirror.
l Data Protector for z Systems (zDP™)
With Mainframe Enablers V8.0 and higher, zDP is deployed on top of SnapVX. zDP provides a
granular level of application recovery from unintended changes to data. zDP achieves this by
providing automated, consistent point-in-time copies of data from which an application-level
recovery can be conducted.
l TimeFinder/Clone Mainframe Snap Facility
Produces point-in-time copies of full volumes or of individual datasets. TimeFinder/Clone
operations involve full volumes or datasets where the amount of data at the source is the same
as the amount of data at the target. TimeFinder VP Snap leverages clone technology to create
space-efficient snaps for thin devices.
l TimeFinder/Mirror for z/OS
Allows the creation of Business Continuance Volumes (BCVs) and provides the ability to
ESTABLISH, SPLIT, RE-ESTABLISH and RESTORE from the source logical volumes.
l TimeFinder Utility
Conditions SPLIT BCVs by relabeling volumes and (optionally) renaming and recataloging
datasets. This allows BCVs to be mounted and used.
required for continuous operations or business restart. GDDR facilitates business continuity by
generating scripts that can be run on demand. For example, scripts to restart business applications
following a major data center incident, or resume replication following unplanned link outages.
Scripts are customized when invoked by an expert system that tailors the steps based on the
configuration and the event that GDDR is managing. Through automatic event detection and end-
to-end automation of managed technologies, GDDR removes human error from the recovery
process and allows it to complete in the shortest time possible.
The GDDR expert system is also invoked to automatically generate planned procedures, such as
moving compute operations from one data center to another. This is the gold standard for high
availability compute operations, to be able to move from scheduled DR test weekend activities to
regularly scheduled data center swaps without disrupting application workloads.
SMI-S Provider
Dell EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI
standard for storage management. This initiative has developed a standard management interface
that resulted in a comprehensive specification (SMI-Specification or SMI-S).
SMI-S defines the open storage management interface, to enable the interoperability of storage
management technologies from multiple vendors. These technologies are used to monitor and
control storage resources in multivendor or SAN topologies.
Solutions Enabler components required for SMI-S Provider operations are included as part of the
SMI-S Provider installation.
VASA Provider
The VASA Provider enables PowerMax management software to inform vCenter of how VMFS
storage, including vVols, is configured and protected. These capabilities are defined by Dell EMC
and include characteristics such as disk type, type of provisioning, storage tiering and remote
replication status. This allows vSphere administrators to make quick and informed decisions about
virtual machine placement. VASA offers the ability for vSphere administrators to complement their
use of plugins and other tools to track how devices hosting VMFS volumes are configured to meet
performance and availability needs.
SRM
Virtualization enables businesses to simplify management, control costs, and guarantee uptime.
However, virtualized environments also add layers of complexity to the IT infrastructure that
reduce visibility and can complicate the management of storage resources. SRM addresses these
layers by providing visibility into the physical and virtual relationships to ensure consistent service
levels.
As you build out a cloud infrastructure, SRM helps you ensure storage service levels while
optimizing IT resources — both key attributes of a successful cloud deployment.
SRM is designed for use in heterogeneous environments containing multi-vendor networks, hosts,
and storage devices. The information it collects and the functionality it manages can reside on
technologically disparate devices in geographically diverse locations. SRM moves a step beyond
storage management and provides a platform for cross-domain correlation of device information
and resource topology, and enables a broader view of your storage environment and enterprise
data center.
SRM provides a dashboard view of the storage capacity at an enterprise level through Watch4net.
The Watch4net dashboard view displays information to support decisions regarding storage
capacity.
The Watch4net dashboard consolidates data from multiple ProSphere instances spread across
multiple locations. It gives a quick overview of the overall capacity status in the environment, raw
capacity usage, usable capacity, used capacity by purpose, usable capacity by pools, and service
levels.
SRDF/Cluster Enabler
Cluster Enabler (CE) for Microsoft Failover Clusters is a software extension of failover clusters
functionality. Cluster Enabler enables Windows Server 2012 (including R2) Standard and
Datacenter editions running Microsoft Failover Clusters to operate across multiple connected
storage arrays in geographically distributed clusters.
SRDF/Cluster Enabler (SRDF/CE) is a software plug-in module to Dell EMC Cluster Enabler for
Microsoft Failover Clusters software. The Cluster Enabler plug-in architecture consists of a CE
base module component and separately available plug-in modules, which provide your chosen
storage replication technology.
SRDF/CE supports:
l Synchronous and asynchronous mode (SRDF modes of operation on page 95 summarizes
these modes)
l Concurrent and cascaded SRDF configurations (SRDF multi-site solutions on page 89
summarizes these configurations)
AppSync
Dell EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring,
and cloning critical Microsoft and Oracle applications and VMware environments. After defining
service plans, application owners can protect, restore, and clone production data quickly with
item-level granularity by using the underlying Dell EMC replication technologies. AppSync also
provides an application protection monitoring service that generates alerts when the SLAs are not
met.
AppSync supports the following applications and storage arrays:
l Applications — Oracle, Microsoft SQL Server, Microsoft Exchange, and VMware VMFS and
NFS datastores and File systems.
l Replication Technologies—SRDF, SnapVX, RecoverPoint, XtremIO Snapshot, VNX Advanced
Snapshots, VNXe Unified Snapshot, and ViPR Snapshot.
On PowerMax arrays:
l The Essentials software package contains AppSync in a starter bundle. The AppSync Starter
Bundle provides the license for a scale-limited, yet fully functional version of AppSync. For
more information, refer to the AppSync Starter Bundle with PowerMax Product Brief available on
the Dell EMC Online Support Website.
l The Pro software package contains the AppSync Full Suite.
PowerPath
PowerPath runs on an application host and manages data paths between the host and LUNs on a
storage array. PowerPath is available for various operating systems including AIX, Microsoft
Windows, Linux, and VMware.
This section is a high-level summary of the PowerPath capabilities for PowerMax arrays. It also
shows where to get detailed information including instructions on how to install, configure, and
manage PowerPath.
Operational overview
A data path is a physical connection between an application host and a LUN on a PowerMax array.
The path has several components, including:
l Host bus adapter (HBA) port
l Cables
l Switches
l PowerMax port
l The LUN
PowerPath manages the use of the paths between a host and a LUN to optimize their use and to
take corrective action should an error occur.
There can be multiple paths to a LUN enabling PowerPath to:
l Balance the I/O load across the available paths. In turn, this:
n Optimizes the use of the paths
n Improves overall I/O performance
n Reduces management intervention
n Eliminates the need to configure paths manually
l Automatically fail over should a path become unavailable due to the failure of one or more of
its components. That is, if a path becomes unavailable, PowerPath reroutes I/O traffic to
alternative paths without manual intervention.
Host registration
Each host that uses PowerPath to access an array registers itself with the array. The information
that PowerPath sends to the array is:
l Host name
l Operating system and version
l Hardware
l PowerPath version
l Name of the cluster the host is part of and the host's cluster name (if applicable)
l WWN of the host
l Name of each VM on the host and the operating system that each runs
The array stores this information in memory.
PowerPath repeats the registration process every 24 hours. In addition, it checks the host
information at hourly intervals. If the name or IP address of the host has changed, PowerPath
repeats the registration process with the array immediately.
Rather than wait for the next registration check to occur, a system administrator can register a
change immediately using the PowerPath CLI. If necessary a site can control whether automatic
registration occurs both for an individual host and for an entire array.
In addition, the array deletes information on any host that has not registered over the last 72
hours. This prevents a build-up of out-of-date host data.
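The registration cadence described above (hourly change checks, re-registration every 24 hours,
and array-side removal of hosts silent for 72 hours) can be summarized in a conceptual sketch.
This is not PowerPath code; it only restates the documented behavior.

```python
# Conceptual timeline of the PowerPath registration cadence described above.
CHECK_INTERVAL_H = 1        # PowerPath checks host details hourly
REREGISTER_INTERVAL_H = 24  # and re-registers every 24 hours regardless
ARRAY_PURGE_H = 72          # the array drops hosts silent for 72 hours

def should_register(hours_since_last_registration: int, host_details_changed: bool) -> bool:
    """Register immediately on a detected change, otherwise every 24 hours."""
    return host_details_changed or hours_since_last_registration >= REREGISTER_INTERVAL_H

def array_keeps_host(hours_since_last_registration: int) -> bool:
    return hours_since_last_registration < ARRAY_PURGE_H

print(should_register(5, host_details_changed=True))    # True: IP or name changed
print(should_register(5, host_details_changed=False))   # False: wait for the 24 h cycle
print(array_keeps_host(80))                              # False: purged after 72 h
```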
Device status
In addition to host information, PowerPath sends device information to the array. The device
information includes:
l Date of last usage
l Mount status
l Name of the process that owns the device
l PowerPath I/O statistics (these are in addition to the I/O statistics that the array itself
gathers)
The array stores this information in memory.
Benefits of the device information include:
l Early identification of potential I/O problems
l Better long-term planning of array and host usage
l Recovery and redeployment of unused storage assets
Management
Solutions Enabler and Unisphere have facilities to:
l View host information
l View device information
l View PowerPath performance data
l Register PowerPath hosts with an array
l Control automatic registration of host systems
More information
There is more information about PowerPath, how to configure it, and manage it in:
l PowerPath Installation and Administration Guide
l PowerPath Release Notes
l PowerPath Family Product Guide
l PowerPath Family CLI and System Messages Reference
l PowerPath Management Appliance Installation and Configuration Guide
l PowerPath Management Appliance Release Notes
There are Installation and Administration Guide and Release Notes documents for each supported
operating system.
Backup
A LUN is the basic unit of backup in ProtectPoint. For each LUN, ProtectPoint creates a backup
image on the Data Domain array. You can group backup images to create a backup set. One use of
the backup set is to capture all the data for an application as a point-in-time image.
Backup process
To create a backup of a LUN, ProtectPoint:
1. Uses SnapVX to create a local snapshot of the LUN on the PowerMax array (the primary
storage array).
Once the snapshot is created, ProtectPoint and the application can proceed independently of each
other, and the backup process has no further impact on the application.
2. Copies the snapshot to a vdisk on the Data Domain array where it is deduplicated and
cataloged.
On the primary storage array the vdisk appears as a FAST.X encapsulated LUN. The copy of
the snapshot to the vdisk uses existing SnapVX link copy and PowerMax destaging
technologies.
Once the vdisk contains all the data for the LUN, Data Domain converts the data into a static
image. This image then has metadata added to it and Data Domain catalogs the resultant backup
image.
Figure 4 Data flow during a backup operation to Data Domain
Restore
ProtectPoint provides two forms of data restore:
l Object level restore from a selected backup image
l Full application rollback restore
ProtectPoint agents
ProtectPoint has three agents, each responsible for backing up and restoring a specific type of
data:
File system agent
Provides facilities to back up, manage, and restore application LUNs.
More information
There is more information about ProtectPoint, its components, how to configure them, and how to
use them in:
l ProtectPoint Solutions Guide
l File System Agent Installation and Administration Guide
l Database Application Agent Installation and Administration Guide
l Microsoft Application Agent Installation and Administration Guide
vVol components
To support management capabilities of vVols, the storage/vCenter environment requires the
following:
l Dell EMC PowerMax VASA Provider – The VASA Provider (VP) is a software plug-in that uses
a set of out-of-band management APIs (VASA version 2.0). The VASA Provider exports
storage array capabilities and presents them to vSphere through the VASA APIs. vVols are
managed by way of vSphere through the VASA Provider APIs (create/delete) and not with the
Unisphere for PowerMax user interface or Solutions Enabler CLI. After vVols are set up on the
array, Unisphere and Solutions Enabler only support vVol monitoring and reporting.
l Storage Containers (SC)—Storage containers are chunks of physical storage used to logically
group vVols. SCs are based on the grouping of Virtual Machine Disks (VMDKs) into specific
Service Levels. SC capacity is limited only by hardware capacity. At least one SC per storage
system is required, but multiple SCs per array are allowed. SCs are created and managed on
the array by the Storage Administrator. Unisphere and Solutions Enabler CLI support
management of SCs.
l Protocol Endpoints (PE)—Protocol endpoints are the access points from the hosts to the
array. PEs are compliant with FC and replace the use of LUNs
and mount points. vVols are "bound" to a PE, and the bind and unbind operations are managed
through the VP APIs, not with the Solutions Enabler CLI. Existing multi-path policies and NFS
topology requirements can be applied to the PE. PEs are created and managed on the array by
the Storage Administrator. Unisphere and Solutions Enabler CLI support management of PEs.
Functionality and component (table excerpt):
l vVol device management (create, delete): VASA Provider APIs / Solutions Enabler APIs
vVol scalability
The vVol scalability limits are:
l Number of vCenters per VASA Provider (VP): 2
a. vVol Snapshots are managed through vSphere only. You cannot use Unisphere or Solutions
Enabler to create them.
vVol workflow
Requirements
Install and configure these applications:
l Unisphere for PowerMax V9.0 or later
l Solutions Enabler CLI V9.0 or later
l VASA Provider V9.0 or later
Instructions for installing Unisphere and Solutions Enabler are in their respective installation
guides. Instructions on installing the VASA Provider are in the Dell EMC PowerMax VASA Provider
Release Notes.
Procedure
The creation of a vVol-based virtual machine involves both the storage administrator and the
VMware administrator:
Storage administrator
The storage administrator uses Unisphere or Solutions Enabler to create the storage and
present it to the VMware environment:
1. Create one or more storage containers on the storage array.
This step defines how much storage and from which service level the VMware user can
provision.
2. Create Protocol Endpoints and provision them to the ESXi hosts.
VMware administrator
The VMware administrator uses the vSphere Web Client to deploy the VM on the storage
array:
1. Add the VASA Provider to the vCenter.
This allows vCenter to communicate with the storage array.
2. Create a vVol datastore from the storage container.
3. Create the VM storage policies.
4. Create the VM in the vVol datastore, selecting one of the VM storage policies.
l SuperPAV
l PDS Search Assist
l Modified Indirect Data Address Word (MIDAW)
l Multiple Allegiance (MA)
l Sequential Data Striping
l Multi-Path Lock Facility
l Product Suite for z/TPF
l HyperSwap
l Secure Snapsets in SnapVX for zDP
Note: A PowerMax array can participate in a z/OS Global Mirror (XRC) configuration only as a
secondary.
LCUs per director slice (or port): 255 (within the range of 00 to FE)
a. A split is a logical partition of the storage array, identified by unique devices, SSIDs, and
host serial number. The maximum storage array host address per array is inclusive of all
splits.
The following table lists the maximum LPARs per port based on the number of LCUs with active
paths:
LCUs with active paths per port, maximum volumes supported per port, and array maximum
LPARs per port:
l 16 LCUs: 4K volumes, 128 LPARs
l 32 LCUs: 8K volumes, 64 LPARs
l 64 LCUs: 16K volumes, 32 LPARs
l 128 LCUs: 32K volumes, 16 LPARs
l 255 LCUs: 64K volumes, 8 LPARs
Cascading configurations
Cascading configurations greatly enhance FICON connectivity between local and remote sites by
using switch-to-switch extensions of the CPU to the FICON network. These cascaded switches
communicate over long distances using a small number of high-speed lines called interswitch links
(ISLs). A maximum of two switches may be connected together within a path between the CPU
and the storage array.
Use of the same switch vendor is required for a cascaded configuration. To support cascading,
each switch vendor requires specific models, hardware features, software features, configuration
settings, and restrictions. Specific IBM CPU models, operating system release levels, host
hardware, and PowerMaxOS levels are also required.
The Dell EMC Support Matrix, available through E-Lab Interoperability Navigator (ELN) at
http://elabnavigator.emc.com, has the most up-to-date information on switch support.
Thin provisioning
PowerMax arrays are configured in the factory with thin provisioning pools ready for use. Thin
provisioning improves capacity utilization and simplifies storage management. It also enables
storage to be allocated and accessed on demand from a pool of storage that services one or many
applications. LUNs can be “grown” over time as space is added to the data pool with no impact to
the host or application. Data is widely striped across physical storage (drives) to deliver better
performance than standard provisioning.
Note: Data devices (TDATs) are provisioned/pre-configured/created while the host
addressable storage devices TDEVs are created by either the customer or customer support,
depending on the environment.
Thin provisioning increases capacity utilization and simplifies storage management by:
l Enabling more storage to be presented to a host than is physically consumed
l Allocating storage only as needed from a shared thin provisioning pool
l Making data layout easier through automated wide striping
l Reducing the steps required to accommodate growth
Thin provisioning allows you to:
l Create host-addressable thin devices (TDEVs) using Unisphere or Solutions Enabler
l Add the TDEVs to a storage group
l Run application workloads on the storage groups
When hosts write to TDEVs, the physical storage is automatically allocated from the default
Storage Resource Pool.
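For illustration, the following Solutions Enabler sketch creates four thin devices and adds one of them to a storage group. The array ID (001), device size, storage group name App1_SG, and device number 001A are assumptions for this example only, and the exact symconfigure/symsg syntax can vary by Solutions Enabler release:
# Values below (array ID, size, names, device numbers) are hypothetical examples
symconfigure -sid 001 -cmd "create dev count=4, size=100 GB, emulation=FBA, config=TDEV;" commit
symsg -sid 001 create App1_SG
symsg -sid 001 -sg App1_SG add dev 001A
Unisphere provides equivalent provisioning wizards for the same workflow.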
l Storage Resource Pools — one (default) Storage Resource Pool is pre-configured on the
array. This process is automatic and requires no setup. You cannot modify Storage Resource
Pools, but you can list and display their configuration. You can also generate reports detailing
the demand storage groups are placing on the Storage Resource Pools.
Thin devices (TDEVs) have no storage allocated until the first write is issued to the device. At that point, the array allocates only a minimum allotment of physical storage from the pool and maps that storage to the region of the thin device that includes the area targeted by the write.
These initial minimum allocations are performed in units called thin device extents. Each extent for
a thin device is 1 track (128 KB).
When a read is performed on a device, the data being read is retrieved from the appropriate data
device to which the thin device extent is allocated. Reading an area of a thin device that has not
been mapped does not trigger allocation operations. Reading an unmapped block returns a block in
which each byte is equal to zero.
When more storage is required to service existing or future thin devices, data devices can be
added to existing thin storage groups.
Oversubscription allows devices that are larger than needed to be presented to hosts and applications without having enough physical drives to fully allocate the space that the thin devices represent.
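As a simple worked example (the numbers are assumed for illustration only): presenting ten 10 TB TDEVs (100 TB of host-visible capacity) from a Storage Resource Pool with 40 TB of usable physical capacity gives an oversubscription ratio of 100/40 = 2.5:1. Physical storage is consumed only as the hosts actually write data.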
[Figure: Masking view — virtual machines on an ESX host access storage through HBAs grouped into an initiator group, front-end ports grouped into a port group, and thin devices grouped into a storage group; the masking view associates the three groups]
Initiator group
A logical grouping of Fibre Channel initiators. An initiator group is either a parent group, which can contain other initiator groups, or a child group, which contains host initiators. Mixing initiators and child initiator groups in the same group is not supported.
Port group
A logical grouping of Fibre Channel front-end director ports. A port group can contain up to 32
ports.
Storage group
A logical grouping of thin devices. LUN addresses are assigned to the devices within the storage group when the view is created, whether the group is cascaded or standalone. Often
there is a correlation between a storage group and a host application. One or more storage
groups may be assigned to an application to simplify management of the system. Storage
groups can also be shared among applications.
Masking view
An association between one initiator group, one port group, and one storage group. When a masking view is created, if a group within the view is a parent, the contents of its children are used: the initiators from the child initiator groups and the devices from the child storage groups. Depending on the server and application requirements, each server or group of servers may have one or more masking views that associate a set of thin devices to an application, server, or cluster of servers.
Service level | Target response time | Delay introduced
Platinum | 0.8 ms | No
Gold | 1 ms | No
Service level availability by device emulation:
Service level | FBA | CKD
Diamond | ✓ | ✓
Platinum | ✓ | ✗
Gold | ✓ | ✗
Silver | ✓ | ✗
Bronze | ✓ | ✗
Optimized | ✓ | ✓
Usage examples
Here are three examples of using service levels:
l Protected application
l Service provider
l Relative application priority
Protected application
A storage administrator wants to ensure that a set of SGs is protected from the performance
impact of other, noncritical applications that use the storage array. In this case, the administrator
assigns the Diamond service level to the critical SGs and sets lower-priority service levels on all
other SGs.
For instance:
l An enterprise-critical OLTP application requires almost immediate response to each I/O
operation. The storage administrator may assign the Diamond level to its SGs.
l A batch program that runs overnight has less stringent requirements. So, the storage
administrator may assign the Bronze level to its SGs.
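As a hedged Solutions Enabler sketch of this example (the array ID and storage group names OLTP_SG and Batch_SG are assumptions, and the option syntax may differ between releases), the administrator might assign the service levels with:
# Array ID and storage group names below are hypothetical
symsg -sid 001 -sg OLTP_SG set -slo Diamond
symsg -sid 001 -sg Batch_SG set -slo Bronze
The same assignment can be made from the storage group properties in Unisphere.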
Service provider
A provider of storage for external customers has a range of prices. Storage with lower response
times is more costly than that with longer response times. In this case, the provider uses service
levels to establish SGs that provide the required range of performance. An important part of this
strategy is the use of the Silver and Bronze service levels to introduce delays even though the
storage array could provide a shorter response time.
Relative application priority
A site wants to have the best possible performance for all applications. However, there is a relative
priority among the protected applications. To achieve this, the storage administrator can assign
Diamond, Platinum, and Gold to the SGs that the applications use. The SGs for the higher priority
applications have the Diamond service level. The Platinum and Gold service levels are assigned to
the remaining SGs depending on the relative priority of the applications.
In normal conditions, there is no delay to any SG because the array has the capacity to handle all
I/O requests. However, should the workload increase and it is not possible to service all I/O
requests immediately, the SGs with Platinum and Gold service levels begin to experience delay.
This delay, however, is in proportion to the service level allocated to each SG.
l Environment.......................................................................................................................... 78
l Operation.............................................................................................................................. 78
l Service level biasing.............................................................................................................. 78
l Compression and deduplication............................................................................................. 78
l Availability............................................................................................................................. 78
Environment
The performance of SCM drives is an order of magnitude better than that of NVMe drives. So an array that contains both types of drive effectively has two storage tiers: the higher-performance SCM drives and the NVMe drives.
Automated data placement takes advantage of the performance difference to optimize access to
data that is frequently accessed. The feature can also help to optimize access to storage groups
that have higher priority service levels.
An array that contains only SCM drives or NVMe drives has only one tier of storage. So that type
of array cannot use automated data placement.
Operation
Automated data placement monitors how frequently the application host accesses data in the array. As a piece of data is accessed more frequently, automated data placement promotes that data to the SCM drives. Similarly, when a piece of data is accessed less frequently, automated data placement relegates it to the NVMe drives. Should more data need to be promoted but
there is no available space in the SCM drives, automated data placement relegates data that has
been accessed least frequently. This algorithm ensures that the SCM drives contain the most
frequently accessed data.
Availability
Automated data placement is available for arrays that contain any combination of FBA and CKD
devices.
l About TimeFinder..................................................................................................................80
l Mainframe SnapVX and zDP................................................................................................. 83
About TimeFinder
Dell EMC TimeFinder delivers point-in-time copies of volumes that can be used for backups,
decision support, data warehouse refreshes, or any other process that requires parallel access to
production data.
Previous VMAX families offered multiple TimeFinder products, each with their own characteristics
and use cases. These traditional products required a target volume to retain snapshot or clone
data.
PowerMaxOS and HYPERMAX OS introduce TimeFinder SnapVX which provides the best aspects
of the traditional TimeFinder offerings combined with increased scalability and ease-of-use.
TimeFinder SnapVX emulates the following legacy replication products:
l FBA devices:
n TimeFinder/Clone
n TimeFinder/Mirror
n TimeFinder VP Snap
l Mainframe (CKD) devices:
n TimeFinder/Clone
n TimeFinder/Mirror
n TimeFinder/Snap
n Dell EMC Dataset Snap
n IBM FlashCopy (Full Volume and Extent Level)
TimeFinder SnapVX dramatically decreases the impact of snapshots and clones:
l For snapshots, this is done by using redirect on write technology (ROW).
l For clones, this is done by storing changed tracks (deltas) directly in the Storage Resource
Pool of the source device - sharing tracks between snapshot versions and also with the source
device, where possible.
There is no need to specify a target device and source/target pairs. SnapVX supports up to 256
snapshots per volume. Each snapshot can have a name and an automatic expiration date.
Access to snapshots
With SnapVX, a snapshot can be accessed by linking it to a host accessible volume (known as a
target volume). Target volumes are standard PowerMax TDEVs. Up to 1024 target volumes can be
linked to the snapshots of the source volumes. The 1024 links can all be to the same snapshot of
the source volume, or they can be multiple target volumes linked to multiple snapshots from the
same source volume. However, a target volume may be linked only to one snapshot at a time.
Snapshots can be cascaded from linked targets, and targets can be linked to snapshots of linked
targets. There is no limit to the number of levels of cascading, and the cascade can be broken.
SnapVX links to targets in the following modes:
l Nocopy Mode (Default): SnapVX does not copy data to the linked target volume but still
makes the point-in-time image accessible through pointers to the snapshot. The target device
is modifiable and retains the full image in a space-efficient manner even after unlinking from
the point-in-time.
l Copy Mode: SnapVX copies all relevant tracks from the snapshot's point-in-time image to the
linked target volume. This creates a complete copy of the point-in-time image that remains
available after the target is unlinked.
If an application needs to find a particular point-in-time copy among a large set of snapshots,
SnapVX enables you to link and relink until the correct snapshot is located.
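For example, here is a hedged Solutions Enabler sketch of linking and then relinking a snapshot to a target storage group. The array ID, the group names StorageGroup1 and StorageGroup1_TGT, and the snapshot name are assumptions; verify the option names against your Solutions Enabler release:
# Group and snapshot names below are hypothetical
symsnapvx -sid 001 -sg StorageGroup1 -lnsg StorageGroup1_TGT -snapshot_name sg1_snap link -copy
symsnapvx -sid 001 -sg StorageGroup1 -lnsg StorageGroup1_TGT -snapshot_name sg1_snap relink
Omitting -copy leaves the target in the default nocopy mode.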
Online device expansion
PowerMaxOS provides facilities for the online expansion of devices in a TimeFinder configuration. Online Device Expansion on page 127 has more information.
Targetless snapshots
With the TimeFinder SnapVX management interfaces you can take a snapshot of an entire PowerMax storage group using a single command. To support this, PowerMax supports up to 64K storage groups, which is enough to provide one for each application even in the most demanding environments. The storage group construct already exists in most cases, as storage groups are created for masking views. TimeFinder SnapVX uses this existing structure, reducing the administration required to maintain the application and its replication environment.
Creation of SnapVX snapshots does not require preconfiguration of extra volumes. In turn, this
reduces the amount of cache that SnapVX snapshots use and simplifies implementation. Snapshot
creation and automatic termination can easily be scripted.
The following Solutions Enabler example creates a snapshot with a 2-day retention period. The
command can be scheduled to run as part of a script to create multiple versions of the snapshot.
Each snapshot shares tracks where possible with the other snapshots and the source devices. Use
a cron job or scheduler to run the snapshot script on a schedule to create up to 256 snapshots of
the source volumes; enough for a snapshot every 15 minutes with 2 days of retention:
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_snap establish -ttl -delta 2
If a restore operation is required, any of the snapshots created by this example can be specified.
When the storage group transitions to a restored state, the restore session can be terminated. The
snapshot data is preserved during the restore process and can be used again should the snapshot
data be required for a future restore.
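A hedged sketch of a restore from one of these snapshots, followed by termination of the restore session (the names and array ID are the same assumptions as the example above; confirm the syntax for your Solutions Enabler release):
# Group and snapshot names below are hypothetical
symsnapvx -sid 001 -sg StorageGroup1 -snapshot_name sg1_snap restore
symsnapvx -sid 001 -sg StorageGroup1 -snapshot_name sg1_snap terminate -restored
Terminating only the restore session leaves the snapshot itself intact for future use.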
Secure snaps
Secure snaps prevent administrators or other high-level users from deleting snapshot data, intentionally or not. In addition, secure snaps are immune to automatic failure resulting from running out of Storage Resource Pool (SRP) or Replication Data Pointer (RDP) space on the array.
When the administrator creates a secure snapshot, they assign it an expiration date and time. The
administrator can express the expiration either as a delta from the current date or as an absolute
date. Once the expiration date passes, and if the snapshot has no links, PowerMaxOS
automatically deletes the snapshot. Before its expiration, administrators can only extend the
expiration date; they cannot shorten the date or delete the snapshot. If a secure snapshot expires,
and it has a volume linked to it, or an active restore session, the snapshot is not deleted. However,
it is no longer considered secure.
Note: Secure snapshots may only be terminated after they expire or by customer-authorized
Dell EMC support. Refer to Knowledgebase article 498316 for more information.
Note: Unmount target volumes before issuing the relink command to ensure that the host
operating system does not cache any filesystem data. If accessing through VPLEX, ensure that
you follow the procedure outlined in the technical note VPLEX: Leveraging Array Based and
Native Copy Technologies, available on the Dell EMC support website.
Once the relink is complete, volumes can be remounted.
Snapshot data is unchanged by the linked targets, so the snapshots can also be used to restore
production data.
Cascading snapshots
Presenting sensitive data to test or development environments often requires that the source of
the data be disguised beforehand. Cascaded snapshots provide this separation and disguise, as
shown in the following image.
Figure 7 SnapVX cascaded snapshots
If no change to the data is required before presenting it to the test or development environments,
there is no need to create a cascaded relationship.
recovery procedure. zDP results in minimal data loss compared to other methods such as restoring
data from daily or weekly backups.
As shown in Figure 8 on page 84, you can use zDP to create and manage multiple point-in-time
snapshots of volumes. Each snapshot is a pointer-based, point-in-time image of a single volume.
These images are created using the SnapVX feature of PowerMaxOS. SnapVX is a space-efficient
method for making snapshots of thin devices and consuming additional storage capacity only when
changes are made to the source volume.
There is no need to copy each snapshot to a target volume as SnapVX separates the capturing of a
point-in-time copy from its usage. Capturing a point-in-time copy does not require a target
volume. Using a point-in-time copy from a host requires linking the snapshot to a target volume.
From PowerMaxOS 5978.444.444 onwards, there can be up to 1024 snapshots of each source
volume. On earlier versions of PowerMaxOS, HYPERMAX OS, and Enginuity there can be up to
256 snapshots for each source volume. PowerMaxOS 5978.444.444 also provides facilities for
creating a snapshot on demand.
Figure 8 zDP operation
These snapshots share allocations to the same track image whenever possible while ensuring they
each continue to represent a unique point-in-time image of the source volume. Despite the space
efficiency achieved through shared allocation to unchanged data, additional capacity is required to
preserve the pre-update images of changed tracks captured by each point-in-time snapshot.
zDP includes the secure snap facility (see Secure snaps on page 82).
The process of implementing zDP has two phases — the planning phase and the implementation
phase.
l The planning phase is done in conjunction with your Dell EMC representative who has access
to tools that can help size the capacity needed for zDP if you are currently a PowerMax or
VMAX All Flash user.
l The implementation phase uses the following methods for z/OS:
n A batch interface that allows you to submit jobs to define and manage zDP.
n A zDP run-time environment that executes under SCF to create snapsets.
For details on zDP usage, refer to the TimeFinder SnapVX and zDP Product Guide. For details on
zDP usage in z/TPF, refer to the TimeFinder Controls for z/TPF Product Guide.
[Figure: 2-site SRDF configuration — R1 at Site A replicated to R2 at Site B, with a TimeFinder background copy]
[Figure: SRDF 2-node cluster — Host 1 and Host 2 connected to Site A and Site B over SRDF/S or SRDF/A links]
SRDF and VMware Site Recovery Manager
Completely automates storage-based disaster restart operations for VMware environments in SRDF topologies.
l The Dell EMC SRDF Adapter enables VMware Site Recovery Manager to automate storage-based disaster restart operations in SRDF solutions.
l Requires that the adapter is installed on each array to facilitate the discovery of arrays and to initiate failover operations.
l Implemented with:
n SRDF/S
n SRDF/A
n SRDF/Star
n TimeFinder
[Figure: Protection side and recovery side — a vCenter and SRM Server with Solutions Enabler software at each site, connected over an IP network; an ESX Server with Solutions Enabler software configured as a SYMAPI server; SRDF groups and SRDF mirroring between Site A (primary) and Site B (secondary)]
a. In some circumstances, using SRDF/S over distances greater than 200 km may be feasible. Contact your Dell EMC representative for more information.
SRDF/Automated Replication (SRDF/AR)
l Combines SRDF and TimeFinder to optimize bandwidth requirements and provide a long-distance disaster restart solution.
l Operates in a 3-site environment that uses a combination of SRDF/S, SRDF/DM, and TimeFinder.
[Figure: SRDF/AR 3-site configuration — SRDF/S from R1 at Site A to Site B, then SRDF adaptive copy and a TimeFinder copy to Site C]
Concurrent SRDF
3-site disaster recovery and advanced multi-site business continuity protection.
[Figure: Concurrent SRDF — R11 at Site A replicated over SRDF/S and SRDF/A to R2 devices at Site B and Site C]
Cascaded SRDF
3-site disaster recovery and advanced multi-site business continuity protection. Data on the primary site (Site A) is synchronously mirrored to a secondary site (Site B), and then asynchronously mirrored from the secondary site to a tertiary site (Site C).
[Figure: Cascaded SRDF — SRDF/S from R1 at Site A to R21 at Site B, then SRDF/A to R2 at Site C]
Interfamily compatibility
SRDF supports connectivity between different operating environments and arrays. Arrays running
PowerMaxOS can connect to legacy arrays running older operating environments. In mixed
configurations where arrays are running different versions, SRDF features of the lowest version
are supported.
PowerMax arrays can connect to:
l PowerMax arrays running PowerMaxOS
l VMAX 250F, 450F, 850F, and 950F arrays running HYPERMAX OS
l VMAX 100K, 200K, and 400K arrays running HYPERMAX OS
l VMAX 10K, 20K, and 40K arrays running Enginuity 5876 with an Enginuity ePack
Note: When you connect between arrays running different operating environments, limitations
may apply. Information about which SRDF features are supported, and applicable limitations
for 2-site and 3-site solutions is in the SRDF Interfamily Connectivity Information.
This interfamily connectivity allows you to add the latest hardware platform/operating
environment to an existing SRDF solution, enabling technology refreshes.
Note: ProtectPoint has been renamed to Storage Direct and it is included in the PowerProtect,
Data Protection Suite for Apps, or Data Protection Suite Enterprise Edition software.
R1 and R2 devices
An R1 device is the member of the device pair at the source (production) site. R1 devices are
generally Read/Write accessible to the application host.
An R2 device is the member of the device pair at the target (remote) site. During normal
operations, host I/O writes to the R1 device are mirrored over the SRDF links to the R2 device. In
general, data on R2 devices is not available to the application host while the SRDF relationship is
active. In SRDF synchronous mode, however, an R2 device can be in Read Only mode that allows a
host to read from the R2.
In a typical environment:
l The application production host has Read/Write access to the R1 device.
l An application host connected to the R2 device has Read Only (Write Disabled) access to the
R2 device.
Figure 9 R1 and R2 devices
[R1 (Read/Write) copies data over the SRDF links to R2 (Read Only)]
R11 devices
R11 devices operate as the R1 device for two R2 devices. Links to both R2 devices are active.
R11 devices are typically used in 3-site concurrent configurations where data on the R11 site is
mirrored to two secondary (R2) arrays:
Figure 10 R11 device in concurrent SRDF
[R11 source device at Site A mirrored to target R2 devices at Site B and Site C]
R21 devices
R21 devices have a dual role and are used in cascaded 3-site configurations where:
l Data on the R1 site is synchronously mirrored to a secondary (R21) site, and then
l Asynchronously mirrored from the secondary (R21) site to a tertiary (R2) site:
Figure 11 R21 device in cascaded SRDF
[Production host writes to R1, which is mirrored over SRDF links to R21 and then to R2]
The R21 device acts as a R2 device that receives updates from the R1 device, and as a R1 device
that sends updates to the R2 device.
When the R1->R21->R2 SRDF relationship is established, no host has write access to the R21
device.
In arrays that run Enginuity, the R21 device can be diskless. That is, it consists solely of cache
memory and does not have any associated storage device. It acts purely to relay changes in the R1
device to the R2 device. This capability requires the use of thick devices. Systems that run
PowerMaxOS or HYPERMAX OS contain thin devices only, so setting up a diskless R21 device is
not possible on arrays running those environments.
R22 devices
R22 devices:
l Have two R1 devices, only one of which is active at a time.
l Are typically used in cascaded SRDF/Star and concurrent SRDF/Star configurations to
decrease the complexity and time required to complete failover and failback operations.
l Let you recover without removing old SRDF pairs and creating new ones.
Figure 12 R22 devices in cascaded and concurrent SRDF/Star
Synchronous mode
Synchronous mode maintains a real-time mirror image of data between the R1 and R2 devices over
distances up to 200 km (125 miles). Host data is written to both arrays in real time. The application
host does not receive the acknowledgment until the data has been stored in the cache of both
arrays.
Asynchronous mode
Asynchronous mode maintains a dependent-write consistent copy between the R1 and R2 device
over unlimited distances. On receiving data from the application host, SRDF on the R1 side of the
link writes that data to its cache and batches the received data into delta sets. Delta sets are transferred to the R2 device in timed cycles. The application host receives the acknowledgment once the data is successfully written to the cache on the R1 side.
SRDF groups
An SRDF group defines the logical relationship between SRDF devices and directors on both sides
of an SRDF link.
Group properties
The properties of an SRDF group are:
l Label (name)
l Set of ports on the local array used to communicate over the SRDF links
l Set of ports on the remote array used to communicate over the SRDF links
l Local group number
l Remote group number
l One or more pairs of devices
The devices in the group share the ports and associated CPU resources of the port's directors.
Types of group
There are two types of SRDF group:
l Static: defined in the local array's configuration file.
l Dynamic: defined using SRDF management tools, with their properties stored in the array's cache memory.
On arrays running PowerMaxOS or HYPERMAX OS all SRDF groups are dynamic.
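For illustration, a hedged Solutions Enabler sketch of creating a dynamic SRDF group and its device pairs. The array IDs, director ports, group numbers, label, and pairs file are assumptions for this example; check the symrdf syntax for your release:
# Array IDs, directors, group numbers, and the pairs file below are hypothetical
symrdf addgrp -label AppGrp -sid 001 -rdfg 10 -dir 1E:28 -remote_sid 002 -remote_rdfg 10 -remote_dir 1E:28
symrdf createpair -sid 001 -rdfg 10 -f device_pairs.txt -type RDF1 -establish
The device_pairs.txt file lists the local and remote device numbers to pair, one pair per line.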
SRDF consistency
Many applications, especially database systems, use dependent write logic to ensure data
integrity. That is, each write operation must complete successfully before the next can begin.
Without write dependency, write operations could get out of sequence resulting in irrecoverable
data loss.
SRDF implements write dependency using the consistency group (also known as SRDF/CG). A
consistency group consists of a set of SRDF devices that use write dependency. For each device
in the group, SRDF ensures that write operations propagate to the corresponding R2 devices in
the correct order.
However, if the propagation of any write operation to any R2 device in the group cannot complete,
SRDF suspends propagation to all of the group's R2 devices. This suspension maintains the integrity of
the data on the R2 devices. While the R2 devices are unavailable, SRDF continues to store write
operations on the R1 devices. It also maintains a list of those write operations in their time order.
When all R2 devices in the group become available, SRDF propagates the outstanding write
operations, in the correct order, for each device in the group.
SRDF/CG is available for both SRDF/S and SRDF/A.
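A hedged sketch of creating and enabling a consistency group with Solutions Enabler (the group name, array ID, and device number are assumptions, and the symcg syntax may vary by release):
# Group name, array ID, and device number below are hypothetical
symcg create AppCG -type RDF1
symcg -cg AppCG -sid 001 add dev 00123
symcg -cg AppCG enable
Once enabled, SRDF/CG suspends propagation for the whole group if any member pair cannot propagate its writes.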
Data migration
Data migration is the one-time movement of data from one array to another. Once the movement
is complete, the data is accessed from the secondary array. A common use of migration is to
replace an older array with a new one.
Dell EMC support personnel can assist with the planning and implementation of migration projects.
SRDF multisite configurations enable migration to occur in any of these ways:
l Replace R2 devices.
l Replace R1 devices.
l Replace both R1 and R2 devices simultaneously.
For example, this diagram shows the use of concurrent SRDF to replace the secondary (R2) array
in a 2-site configuration:
Figure 13 Migrating data and removing a secondary (R2) array
[Diagram: original 2-site configuration (R1 at Site A, R2 at Site B); interim 3-site configuration (R11 at Site A with SRDF migration to a new R2 at Site C); final 2-site configuration (R1 at Site A, R2 at Site C)]
Here:
l The top section of the diagram shows the original, 2-site configuration.
l The lower left section of the diagram shows the interim, 3-site configuration with data being
copied to two secondary arrays.
l The lower right section of the diagram shows the final, 2-site configuration where the new
secondary array has replaced the original one.
The Dell EMC SRDF Introduction contains more information about using SRDF to migrate data. See
also Data migration on page 111.
More information
Here are other Dell EMC documents that contain more information about the use of SRDF in
replication and migration:
SRDF Introduction
SRDF and NDM Interfamily Connectivity Information
SRDF/Cluster Enabler Plug-in Product Guide
Using the Dell EMC Adapter for VMware Site Recovery Manager Technical Book
Dell EMC SRDF Adapter for VMware Site Recovery Manager Release Notes
SRDF/Metro
In traditional SRDF configurations, only the R1 devices are Read/Write accessible to the
application hosts. The R2 devices are Read Only and Write Disabled.
In SRDF/Metro configurations, however:
l Both the R1 and R2 devices are Read/Write accessible to the application hosts.
l Application hosts can write to both the R1 and R2 side of the device pair.
l R2 devices assume the same external device identity as the R1 devices. The identity includes
the device geometry and device WWN.
This shared identity means that R1 and R2 devices appear to application hosts as a single, virtual
device across two arrays.
Deployment options
SRDF/Metro can be deployed in either a single, multipathed host environment or in a clustered
host environment:
Figure 14 SRDF/Metro
[Multi-path deployment: a single host with Read/Write access to both sides; cluster deployment: clustered hosts with Read/Write access to both sides]
SRDF/Metro Resilience
If either of the devices in an SRDF/Metro configuration becomes Not Ready, or connectivity between the devices is lost, SRDF/Metro must decide which side remains available to the application host. There are two mechanisms that SRDF/Metro can use: Device Bias and Witness.
Device Bias
Device pairs for SRDF/Metro are created with a bias attribute. By default, the create pair
operation sets the bias to the R1 side of the pair. That is, if a device pair becomes Not Ready (NR)
on the SRDF link, the R1 (bias side) remains accessible to the hosts, and the R2 (nonbias side)
becomes inaccessible. However, if there is a failure on the R1 side, the host loses all connectivity
to the device pair. The Device Bias method cannot make the R2 device available to the host.
Witness
A witness is a third party that mediates between the two sides of a SRDF/Metro pair to help:
l Decide which side remains available to the host
l Avoid a "split brain" scenario when both sides attempt to remain accessible to the host despite
the failure
The Witness method intelligently chooses the side on which to continue operations when the bias-only method would not result in continued host availability to a surviving, nonbiased array.
There are two forms of the Witness mechanism:
l Array Witness: The operating environment of a third array is the mediator.
l Virtual Witness (vWitness): A daemon running on a separate, virtual machine is the mediator.
When both sides run PowerMaxOS 5978, SRDF/Metro takes these criteria into account when
selecting the side to remain available to the hosts (in priority order):
1. The side that has connectivity to the application host (requires PowerMaxOS 5978.444.444)
2. The side that has a SRDF/A DR leg
3. Whether the SRDF/A DR leg is synchronized
4. The side that has more than 50% of the RA or FA directors that are available
5. The side that is currently the bias side
The selection process stops at the first criterion that one side meets and the other does not. The side that meets that criterion is the preferred winner.
[Figure: SRDF/Metro with single-sided disaster recovery — an R11–R2 or R1–R21 SRDF/Metro pair, with SRDF/A or Adaptive Copy Disk replication from one side to an R2 device at Site C]
Double-sided replication
[Figure: Double-sided replication — both sides of the SRDF/Metro pair replicate, using SRDF/A or Adaptive Copy Disk, to R2 devices at disaster recovery sites]
Note that the device names differ from a standard SRDF/Metro configuration. This reflects the
change in the devices' function when disaster recovery facilities are in place. For instance, when
the R2 side is replicated to a disaster recovery site, its name changes to R21 because it is both the:
l R2 device in the SRDF/Metro configuration
l R1 device in the disaster-recovery configuration
More information
Here are other Dell EMC documents that contain more information on SRDF/Metro:
SRDF Introduction
SRDF/Metro vWitness Configuration Guide
SRDF Interfamily Connectivity Information
RecoverPoint
RecoverPoint is a comprehensive data protection solution designed to provide production data
integrity at local and remote sites. RecoverPoint also provides the ability to recover data from a
point in time using journaling technology.
The primary reasons for using RecoverPoint are:
l Remote replication to heterogeneous arrays
l Protection against local and remote data corruption
l Disaster recovery
l Secondary device repurposing
l Data migrations
RecoverPoint systems support local and remote replication of data that applications are writing to
SAN-attached storage. The systems use existing Fibre Channel infrastructure to integrate
seamlessly with existing host applications and data storage subsystems. For remote replication,
the systems use existing Fibre Channel connections to send the replicated data over a WAN, or use Fibre Channel infrastructure to replicate data asynchronously. The systems provide failover of
operations to a secondary site in the event of a disaster at the primary site.
Previous implementations of RecoverPoint relied on a splitter to track changes made to protected
volumes. The current implementation relies on a cluster of RecoverPoint nodes, provisioned with
one or more RecoverPoint storage groups, leveraging SnapVX technology, on the storage array.
Volumes in the RecoverPoint storage groups are visible to all the nodes in the cluster, and available
for replication to other storage arrays.
RecoverPoint allows data replication of up to 8,000 LUNs for each RecoverPoint cluster and up to
eight different RecoverPoint clusters attached to one array. Supported array types include
PowerMax, VMAX All Flash, VMAX3, VMAX, VNX, VPLEX, and XtremIO.
RecoverPoint is licensed and sold separately. For more information about RecoverPoint and its
capabilities see the Dell EMC RecoverPoint Product Guide.
SRDF/AR
SRDF/AR combines SRDF and TimeFinder to provide a long-distance disaster restart solution.
SRDF/AR can be deployed over 2 or 3 sites:
l In 2-site configurations, SRDF/DM is deployed with TimeFinder.
l In 3-site configurations, SRDF/DM is deployed with a combination of SRDF/S and TimeFinder.
The time to create the new replicated consistent image is determined by the time that it takes to
replicate the deltas.
[Figure: SRDF/AR 2-site configuration — hosts at Site A and Site B; a TimeFinder copy of R1 at Site A is replicated over the SRDF links to R2 at Site B, where a TimeFinder background copy is made]
In this configuration, data on the SRDF R1/TimeFinder target device is replicated across the SRDF
links to the SRDF R2 device.
The SRDF R2 device is also a TimeFinder source device. TimeFinder replicates this device to a
TimeFinder target device. You can map the TimeFinder target device to the host connected to the
secondary array at Site B.
In a 2-site configuration, SRDF operations are independent of production processing on both the
primary and secondary arrays. You can utilize resources at the secondary site without interrupting
SRDF operations.
Use SRDF/AR 2-site configurations to:
l Reduce required network bandwidth using incremental resynchronization between the SRDF
target sites.
l Reduce network cost and improve resynchronization time for long-distance SRDF
implementations.
[Figure: SRDF/AR 3-site configuration — SRDF/S from R1 at Site A to R2 at Site B, then SRDF adaptive copy and a TimeFinder copy to Site C]
If Site A (primary site) fails, the R2 device at Site B provides a restartable copy with zero data
loss. Site C provides an asynchronous restartable copy.
If both Site A and Site B fail, the device at Site C provides a restartable copy with controlled data
loss. The amount of data loss is a function of the replication cycle time between Site B and Site C.
SRDF and TimeFinder control commands to R1 and R2 devices for all sites can be issued from Site
A. No controlling host is required at Site B.
Use SRDF/AR 3-site configurations to:
l Reduce required network bandwidth using incremental resynchronization between the
secondary SRDF target site and the tertiary SRDF target site.
l Reduce network cost and improve resynchronization time for long-distance SRDF
implementations.
l Provide disaster recovery testing, point-in-time backups, decision support operations, third-
party software testing, and application upgrade testing or the testing of new applications.
Requirements/restrictions
In a 3-site SRDF/AR multi-hop configuration, SRDF/S host I/O to Site A is not acknowledged until
Site B has acknowledged it. This can cause a delay in host response time.
l Overview.............................................................................................................................. 112
l Data migration for open systems.......................................................................................... 113
l Data migration for IBM System i.......................................................................................... 124
l Data migration for mainframe.............................................................................................. 124
Overview
Data migration is a one-time movement of data from one array (the source) to another array (the
target). Typical examples are data center refreshes where data is moved from an old array after
which the array is retired or re-purposed. Data migration is not data movement due to replication
(where the source data is accessible after the target is created) or data mobility (where the target
is continually updated).
After a data migration operation, applications that access the data reference it at the new location.
To plan a data migration, consider the potential impact on your business, including the:
l Type of data to be migrated
l Site location(s)
l Number of systems and applications
l Amount of data to be moved
l Business needs and schedules
PowerMaxOS provides migration facilities for:
l Open systems
l IBM System i
l Mainframe
Non-Disruptive Migration
Non-Disruptive Migration (NDM) is a method for migrating data without application downtime.
The migration takes place over a metro distance, typically within a data center.
NDM Updates is a variant of NDM introduced in PowerMaxOS 5978.444.444. NDM Updates requires that the application associated with the migrated data is shut down for part of the migration process. This is because NDM depends heavily on the behavior of multipathing software to detect, enable, and disable paths, none of which is under the control of Dell EMC (except for supported products such as PowerPath). NDM is the term that covers both non-disruptive and disruptive migration.
Starting with PowerMaxOS 5978 there are two implementations of NDM, each for a different type
of source array:
l Either:
n PowerMax array running PowerMaxOS 5978
n VMAX3 or VMAX All Flash array running HYPERMAX OS 5977.1125.1125 or later with an
ePack
l VMAX array running Enginuity 5876 with an ePack
When migrating to a PowerMax array, these are the only configurations for the source array.
The SRDF Interfamily Connectivity Information lists the Service Packs and ePacks required for
HYPERMAX OS 5977 and Enginuity 5876. In addition, the NDM support matrix has information on
array operating systems support, host support, and multipathing support for NDM operations. The
support matrix is available on the eLab Navigator.
Regulatory or business requirements for disaster recovery may require the use of replication to
other arrays attached to source array, the target array, or both using SRDF/S, during the
migration. In this case, refer to the SRDF Interfamily Connectivity Information for information on the
Service Packs and the ePacks required for the SRDF/S configuration.
Process
Normal flow
The steps in the migration process that is normally followed are:
1. Set up the migration environment – configure the infrastructure of the source and target
array, in preparation for data migration.
2. On the source array, select a storage group to migrate.
3. If using NDM Updates, shut down the application associated with the storage group.
4. Create the migration session, optionally specifying whether to move the identity of the LUNs in the storage group to the target array. This copies the content of the storage group to the target array using SRDF/Metro.
During this time the source and target arrays are both accessible to the application host.
5. When the data copy is complete:
a. if the migration session did not move the identity of the LUNs, reconfigure the application
to access the new LUNs on the target array.
b. Commit the migration session – remove resources from the source array and those used in
the migration itself.
6. If using NDM Updates, restart the application.
7. To migrate further storage groups, repeat steps 2 on page 114 to 6 on page 114.
8. After migrating all the required storage groups, remove the migration environment.
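As a hedged illustration only, the Solutions Enabler NDM (symdm) commands typically used for this flow resemble the following. The array IDs, the storage group name, and the exact option names are assumptions and should be checked against the Solutions Enabler documentation for your release:
# Array IDs and storage group name below are hypothetical
symdm environment -src_sid 001 -tgt_sid 002 -setup
symdm -src_sid 001 -tgt_sid 002 -sg App1_SG create
symdm -src_sid 001 -tgt_sid 002 -sg App1_SG commit
symdm environment -src_sid 001 -tgt_sid 002 -remove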
Alternate flow
There is an alternative process that pre-copies the data to the target array before making it
available to the application host. The steps in this process are:
1. Set up the migration environment – configure the infrastructure of the source and target
array, in preparation for data migration.
2. On the source array, select a storage group to migrate.
3. Use the precopy facility of NDM to copy the selected data to the target array.
Optionally, specify whether to move the identity of the LUNs in the storage group to the
target array.
While the data copy takes place, the source array is available to the application host, but the
target array is unavailable.
4. When the copying of the data is complete, use the Ready Target facility in NDM to make the target array available to the application host as well.
a. If the migration session did not move the identity of the LUNs, reconfigure the application
to access the new LUNs on the target array.
b. If using NDM Updates, restart the application.
c. Commit the migration session – remove resources from the source array and those used in
the migration itself. The application now uses the target array only.
5. To migrate further storage groups, repeat steps 2 on page 115 to 4 on page 115.
6. After migrating all the required storage groups, remove the migration environment.
Other functions
Other NDM facilities that are available for exceptional circumstances are:
l Cancel – to cancel a migration that has not yet been committed.
l Sync – to stop or start the synchronization of writes to the target array back to source array.
When stopped, the application runs on the target array only. Used for testing.
l Recover – to recover a migration process following an error.
Other features
Other features of migrating from VMAX3, VMAX All Flash or PowerMax to PowerMax are:
l Data can be compressed during migration to the PowerMax array
l Allows for non-disruptive revert to the source array
l There can be up to 50 migration sessions in progress simultaneously
l Does not require an additional license as NDM is part of PowerMaxOS
l The connections between the application host and the arrays use FC; the SRDF connection
between the arrays uses FC or GigE
Devices and components that cannot be part of an NDM process are:
l CKD devices
l eNAS data
l ProtectPoint and FAST.X relationships along with their associated data
Process
The steps in the migration process are:
1. Set up the environment – configure the infrastructure of the source and target array, in
preparation for data migration.
2. On the source array, select a storage group to migrate.
3. If using NDM Updates, shut down the application associated with the storage group.
4. Create the migration session – copy the content of the storage group to the target array using
SRDF.
When creating the session, optionally specify whether to move the identity of the LUNs in the storage group to the target array.
5. When the data copy is complete:
a. If the migration session did not move the identity of the LUNs, reconfigure the application
to access the new LUNs on the target array.
b. Cutover the storage group to the PowerMax array.
c. Commit the migration session – remove resources from the source array and those used in
the migration itself. The application now uses the target array only.
6. If using NDM Updates, restart the application.
7. To migrate further storage groups, repeat steps 2 on page 116 to 6 on page 116.
8. After migrating all the required storage groups, remove the migration environment.
Other features
Other features of migrating from VMAX to PowerMax are:
l Data can be compressed during migration to the PowerMax array
l Allows for non-disruptive revert to the source array
l There can be up to 50 migration sessions in progress simultaneously
l NDM does not require an additional license as it is part of PowerMaxOS
l The connections between the application host and the arrays use FC; the SRDF connection
between the arrays uses FC or GigE
Devices and components that cannot be part of an NDM process are:
l CKD devices
l eNAS data
l ProtectPoint and FAST.X relationships along with their associated data
Source and target arrays
l If SRDF is not normally used in the migration environment, it may be necessary to install and
configure RDF directors and ports on both the source and target arrays and physically
configure SAN connectivity.
Management host
l Wherever possible, use a host system separate from the application host to initiate and control
the migration (the control host).
l The control host requires visibility of and access to both the source and target arrays.
l The names of masking groups to migrate must not exist on the target array.
l The names of initiator groups to migrate may exist on the target array. However, the
aggregate set of host initiators in the initiator groups that the masking groups use must be the
same. Also, the effective ports flags on the host initiators must have the same setting on both
arrays.
l The names of port groups to migrate may exist on the target array, as long as the groups on
the target array are in the logging history table for at least one port.
l The status of the target array must be as follows:
n If a target-side Storage Resource Pool (SRP) is specified for the migration, that SRP must
exist on the target array.
n The SRP to be used for target-side storage must have enough free capacity to support the
migration.
n The target side must be able to support the additional devices required to receive the
source-side data.
n All initiators provisioned to an application on the source array must also be logged into ports
on the target array.
Open Replicator
Open Replicator enables copying data (full or incremental copies) from qualified arrays within a
storage area network (SAN) infrastructure to or from arrays running PowerMaxOS. Open
Replicator uses the Solutions Enabler SYMCLI symrcopy command.
Use Open Replicator to migrate and back up/archive existing data between arrays running PowerMaxOS and third-party storage arrays within the SAN infrastructure without interfering with host applications and ongoing business operations.
Use Open Replicator to:
l Pull from source volumes on qualified remote arrays to a volume on an array running PowerMaxOS.
l Perform online data migrations from qualified storage to an array running PowerMaxOS with minimal disruption to host applications.
NOTICE Open Replicator cannot copy a volume that is in use by TimeFinder.
Remote
The donor Dell EMC arrays or third-party arrays on the SAN are referred to as the remote
array/devices.
Hot
The Control device is Read/Write online to the host while the copy operation is in progress.
Note: Hot push operations are not supported on arrays running PowerMaxOS.
Cold
The Control device is Not Ready (offline) to the host while the copy operation is in progress.
Pull
A pull operation copies data to the control device from the remote device(s).
Push
A push operation copies data from the control device to the remote device(s).
Pull operations
On arrays running PowerMaxOS, Open Replicator supports up to 4096 pull sessions.
For pull operations, the volume can be in a live state during the copy process. The local hosts and
applications can begin to access the data as soon as the session begins, even before the data copy
process has completed.
These features enable rapid and efficient restoration of remotely vaulted volumes and migration
from other storage platforms.
Copy on First Access ensures the appropriate data is available to a host operation when it is
needed. The following image shows an Open Replicator hot pull.
Figure 20 Open Replicator hot (or live) pull
The pull can also be performed in cold mode to a static volume. The following image shows an
Open Replicator cold pull.
Figure 21 Open Replicator cold (or point-in-time) pull
Disaster Recovery
When the control array runs PowerMaxOS, it can also be the R1 side of an SRDF configuration. That
configuration can use SRDF/A, SRDF/S, or Adaptive Copy Mode to provide data protection during
and after the data migration.
Volume-level data migration facilities move logical volumes in their entirety. z/OS Migrator volume migration is performed on a track-for-track basis without regard to the logical contents of the volumes involved. Volume migrations end in a volume swap, which is entirely non-disruptive to any
applications using the data on the volumes.
Volume migrator
Volume migration provides host-based services for data migration at the volume level on
mainframe systems. It provides migration from third-party devices to devices on Dell EMC arrays
as well as migration between devices on Dell EMC arrays.
Volume mirror
Volume mirroring provides mainframe installations with volume-level mirroring from one device on
a Dell EMC array to another. It uses host resources (UCBs, CPU, and channels) to monitor channel
programs scheduled to write to a specified primary volume and clones them to also write to a
specified target volume (called a mirror volume).
After achieving a state of synchronization between the primary and mirror volumes, Volume Mirror
maintains the volumes in a fully synchronized state indefinitely, unless interrupted by an operator
command or by an I/O failure to a Volume Mirror device. Mirroring is controlled by the volume
group. Mirroring may be suspended consistently for all volumes in the group.
Online device expansion (ODE) is a mechanism to increase the capacity of a device without taking
it offline. This is an overview of the ODE capabilities:
l Introduction......................................................................................................................... 128
l General features.................................................................................................................. 128
l Standalone devices.............................................................................................................. 129
l SRDF devices.......................................................................................................................129
l LREP devices.......................................................................................................................130
l Management facilities...........................................................................................................131
Introduction
ODE enables a storage administrator to provide more capacity on a storage device while it remains
online to its application. This particularly benefits organizations where applications need to remain
available permanently. If a device associated with an application runs low on space, the
administrator can increase its capacity without affecting the availability and performance of the
application.
Standalone devices, devices in a SRDF configuration and those in an LREP configuration can all be
expanded using ODE.
General features
Features of ODE that are applicable to stand-alone, SRDF, and LREP devices are:
l ODE is available for both FBA and CKD devices.
l ODE operates on thin devices (TDEVs).
l A device can be expanded to a maximum capacity of 64 TB (1,182,006 cylinders for a CKD
device).
l A device can only be expanded; there are no facilities for reducing the capacity of a device.
l During expansion, a device is locked.
This prevents operations such as adding a device to an SRDF configuration until the expansion
is complete.
l An administrator can expand the capacity of multiple devices using one management operation.
A thin device presents a given capacity to the host, but consumes only the physical storage
necessary to hold the data that the host has written to the device (Thin devices (TDEVs) on page
67 has more information). Increasing the capacity of a device using ODE does not allocate any
additional physical storage. Only the configured capacity of the device as seen by the host
increases.
Failure of an expansion operation for a stand-alone, SRDF, or LREP device may occur because:
l The device does not exist.
l The device is not a TDEV.
l The requested capacity is less than the current capacity.
l The requested capacity is greater than 64 TB.
l There is insufficient space in the storage system for expansion.
l There are insufficient PowerMax internal resources to accommodate the expanded device.
l Expanding the device to the requested capacity would exceed the oversubscription ratio of the
physical storage.
l A reclaim, deallocation, or free-all operation is in progress on the device.
There are other reasons specific to each type of device. These are listed in the description of
device expansion for that type of device.
Standalone devices
The most basic form of device expansion is of a device that is associated with a host application
and is not part of a SRDF or LREP configuration. Additional features of ODE in this environment
are:
l ODE can expand vVols in addition to TDEVs.
vVols are treated as a special type of TDEV.
l ODE for a standalone device is available in PowerMaxOS 5978, HYPERMAX OS 5977.691.684
or later (for FBA devices), and HYPERMAX OS 5977.1125.1125 or later (for CKD devices).
Each expansion operation returns a status that indicates whether the operation succeeded or not.
The status of an operation to expand multiple devices can indicate a partial success. In this case at
least one of the devices was successfully expanded but one or more others failed.
Another reason why an expansion operation might fail is if the device is not a vVol.
SRDF devices
PowerMaxOS 5978 introduces online device expansion for SRDF configurations. The administrator
can expand the capacity of thin devices in an SRDF relationship without any service disruption in a
similar way to expanding stand-alone devices.
Devices in an asynchronous, synchronous, Adaptive Copy Mode, SRDF/Metro, SRDF/Star (mainframe only), or SRDF/SQAR (mainframe only) configuration are all eligible for expansion. However, this feature is not available in RecoverPoint, ProtectPoint, NDM, or MDM configurations.
Also, device expansion is available only on storage arrays in an SRDF configuration that run
PowerMaxOS (PowerMaxOS 5978.444.444 for SRDF/Metro) on both sides. Any attempt to
expand an SRDF device in a system that runs an older operating environment fails.
Other features of ODE in an SRDF environment are for expanding:
l An individual device on either the R1 or R2 side
l An R1 device and its corresponding device on the R2 side in one operation
l A range of devices on either the R1 or R2 side
l A range of devices on the R1 side and their corresponding devices on the R2 side in one
operation
l A storage group on either the R1 or R2 side
l A storage group on the R1 side and its corresponding group on the R2 side in one operation
Note: An SRDF/Metro configuration does not allow the expansion of devices on one side only.
Both sides, whether it is a device, a range of devices, or a storage group, need to be expanded
in one operation.
Basic rules of device expansion are:
l The R1 side of an SRDF pair cannot be larger than the R2 side.
l In an SRDF/Metro configuration, both sides must be the same size.
When both sides are available on the SRDF link, Solutions Enabler, Mainframe Enablers, and
Unisphere (the tools for managing ODE) enforce these rules. When either device is not available
on the SRDF link, the management tools allow you to make the R1 larger than the R2. However,
before the devices can be made available on the link, the capacity of the R2 must increase to at
least the capacity of the R1 device.
Similar considerations apply to multiple site configurations:
l Cascaded SRDF: The size of R1 must be less than or equal to the size of R21. The size of R21
must be less than or equal to the size of R2.
l Concurrent SRDF: The size of R11 must be less than or equal to the size of both R2 devices.
Other reasons why an expansion operation may fail in an SRDF environment are:
l One or more of the devices is on a storage system that does not run PowerMaxOS 5978 (or
PowerMaxOS 5978.444.444 for SRDF/Metro).
l One or more of the devices is a vVol.
l One or more devices are part of a ProtectPoint, RecoverPoint, NDM, or MDM configuration.
l The operation would result in an R1 device being larger than its R2 device.
LREP devices
PowerMaxOS 5978 also introduces online device expansion for LREP (local replication)
configurations. As with standalone and SRDF devices, this means an administrator can increase
the capacity of thin devices that are part of an LREP relationship without any service disruption.
Devices eligible for expansion are those that are part of:
l SnapVX sessions
l Legacy sessions that use CCOPY, SDDF, or Extent
ODE is not available for:
l SnapVX emulations such as Clone, TimeFinder Clone, TimeFinder Mirror, TimeFinder Snap, and
VP Snap
l RecoverPoint and ProtectPoint devices
l vVols
l PPRC
This is to maintain compatibility with the limitations that IBM place on expanding PPRC
devices.
By extension, ODE is not available for a product that uses any of these technologies. For example,
it is not available for Remote Pair FlashCopy since that uses PPRC.
Additional ODE features in an LREP environment are:
l Expand SnapVX source or target devices.
l Snapshot data remains the same size.
l The ability to restore a smaller snapshot to an expanded source device.
l Target link and relink operations depend on the size of the source device when the snapshot was taken, not its size after expansion.
There are additional reasons for an ODE operation to fail in an LREP environment; for instance,
when the LREP configuration uses one of the excluded technologies.
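For example, expanding a SnapVX source device uses the same symdev modify operation (described under Management facilities below) as a standalone device, and existing snapshots of the device keep their original size. In this sketch, device 00200 on array 005 and the 500 GB target capacity are placeholders, and additional options may be required as described in the CLI guide:
symdev modify -sid 005 -devs 00200 -cap 500 -captype gb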
Management facilities
Solutions Enabler, Unisphere, and Mainframe Enablers all provide facilities for managing ODE. With
any of these tools you can:
l Expand a single device
l Expand multiple devices in one operation
l Expand both sides of an SRDF pair in one operation
Solutions Enabler
Use the symdev modify command in Solutions Enabler to expand one or more devices. Some
features of this command are:
l Use the -cap option to specify the new capacity for the devices.
Use the -captype option with -cap to specify the units of the new capacity. The available
units are cylinders, MB, GB, and TB.
l Use the -devs option to define the devices to expand. The argument for this option consists
of a single device identifier, a range of device identifiers, or a list of identifiers. Each element
in the list can be a single device identifier or a range of device identifiers.
l Use the -rdfg option to specify the SRDF group of the devices to be expanded. Inclusion of
this option indicates that both sides of the SRDF pair associated with the group are to be
expanded in a single operation.
The Dell EMC Solutions Enabler Array Controls and Management CLI User Guide has details of the
symdev modify command, its syntax and its options.
Examples:
l Expand a single device on array 005 to a capacity of 4TB:
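symdev modify -sid 005 -devs 00345 -cap 4 -captype tb
The device number 00345 is a placeholder; substitute the device that you want to expand. Additional options may be required for your configuration; see the CLI guide referenced above.
l Expand the same device and its SRDF partner in one operation, using a hypothetical SRDF group 10:
symdev modify -sid 005 -devs 00345 -cap 4 -captype tb -rdfg 10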
Unisphere
Unisphere provides facilities to increase the capacity of a device, a range of devices (FBA only),
and SRDF pairs. The available units for specifying the new device capacity are cylinders, GB, and
TB. The Unisphere Online Help has details on how to select and expand devices.
For example, Unisphere provides a dialog for expanding a standalone device.
Mainframe Enablers
Mainframe Enablers provides the DEV,EXPAND command in the Symmetrix Control Facility (SCF)
to increase the capacity of a device. Some features of this command are:
l Use the DEVice parameter to specify a single device or a range of devices to expand.
l Use the CYLinders parameter to specify the new capacity of the devices, in cylinders.
l Use the RDFG parameter to specify the SRDF group associated with the devices and so expand
the R1 and R2 devices in a single operation.
The Dell EMC Mainframe Enablers ResourcePak Base for z/OS Product Guide has details of the
DEV,EXPAND command and its parameters.
Example:
Expand device 8013 to 1150 cylinders:
DEV,EXPAND,DEV(8013),CYL(1150)
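A hypothetical variant that expands both sides of the SRDF pair in one operation, assuming SRDF group 10 and that the RDFG parameter takes its value in parentheses like the other parameters:
DEV,EXPAND,DEV(8013),CYL(1150),RDFG(10)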
This is an overview of some of the security features of PowerMaxOS. For more detailed
information, see the Dell EMC PowerMax Family Security Configuration Guide.
The higher a role is in the hierarchy, the more permissions, and hence capabilities, it has.
Monitor
The Monitor role allows a user to use show, list, and view operations to monitor a system.
Allowed operations
Examples of the operations that the Monitor role allows are:
l View array information
l View masking objects (storage groups, initiator groups, port groups, and masking views)
l View device information
l View the RBAC rules defined on this array.
This is available only when the Secure Reads policy is not in effect. Secure Reads policy on
page 138 has more information about the Secure Reads policy and its management.
Prevented operations
The Monitor role does not allow the user to view:
l Security-related data such as array ACLs and the array's Audit Log file
l The RBAC roles defined on this system, when the Secure Reads policy is in effect
PerfMonitor
The PerfMonitor role allows a user to configure performance alerts and thresholds in Unisphere.
The PerfMonitor role also has the permissions of the Monitor role.
Auditor
The Auditor role allows a user to view the security settings on a system.
Allowed operations
Examples of operations that the Auditor role allows are:
l View the array's ACL settings
l View RBAC rules and settings
DeviceManage
The DeviceManage role allows a user to configure and manage devices.
Allowed operations
Examples of operations that the DeviceManage role allows are:
l Control operations on devices, such as Ready, Not-Ready, Free
l Configuration operations on devices, such as setting names, or setting flags
l Link, Unlink, Relink, Set-Copy, and Set-NoCopy operations on SnapVX link devices
l Restore operations to SnapVX source devices
This is available only when the user also has the LocalRep role.
When the role is restricted to one or more storage groups, it allows these operations on the
devices in those groups only.
The DeviceManage role also has the permissions of the Monitor role.
Prevented operations
The DeviceManage role does not allow the user to create, expand, or delete devices. However, if
the role is associated with a storage group, those operations are allowed on the devices within the
group.
LocalRep
The LocalRep role allows the user to carry out local replication using SnapVX, or the legacy
operations of Snapshot, Clone, and BCV.
Allowed operations
Examples of operations that the LocalRep role allows are:
l Create, manage, and delete SnapVX snapshots
For operations that result in changes to the contents of any device, the user may also need the
DeviceManage role:
l SnapVX restore operations require both the LocalRep and DeviceManage roles.
l SnapVX Link, Unlink, Relink, Set-Copy, and Set-NoCopy operations require the
DeviceManage role on the link devices and the LocalRep role on the source devices.
When the role is restricted to one or more storage groups, it allows all these operations on the
devices within those groups only.
The LocalRep role also has the permissions of the Monitor role.
Prevented operations
The LocalRep role does not allow the user to create Secure SnapVX snapshots.
RemoteRep
The RemoteRep role allows a user to carry out remote replication using SRDF.
Allowed operations
Examples of operations that the RemoteRep role allows are:
l Create, manage, and delete SRDF device pairs
When the role is restricted to storage groups, it allows these operations on devices within
those groups only.
l Set attributes that are not associated with SRDF/A on an SRDF group
This is available only if the role is applied to the entire array.
When the role is restricted to one or more storage groups, it allows these operations on the
devices in those groups only.
The RemoteRep role also has the permissions of the Monitor role.
Prevented operations
The RemoteRep role does not allow the user to:
l Create and delete SRDF groups
l Set attributes that are not associated with SRDF/A on an SRDF group when the role is
restricted to a set of storage groups
StorageAdmin
The StorageAdmin role allows a user to perform any storage operation, except those related to
security.
Allowed operations
Examples of operations that the StorageAdmin role allows are:
l Perform array configuration operations
l Provision storage
l Delete storage
l Create, modify, and delete masking objects (storage groups, initiator groups, port groups, and
masking views)
l Create and delete Secure SnapVX Snapshots
l Any operation allowed for the LocalRep, RemoteRep, and DeviceManage roles
This role also has the permissions of the LocalRep, RemoteRep, DeviceManage, and Monitor roles.
SecurityAdmin
The SecurityAdmin role allows a user to view and modify the system security settings.
Allowed operations
Operations that the SecurityAdmin role allows are:
l Modify the array's ACL settings
l Modify the RBAC rules and settings
The SecurityAdmin role also has the permissions of the Auditor and Monitor roles.
Admin
The Admin role allows a user to carry out any operation on the array. It has the permissions of the
StorageAdmin and SecurityAdmin roles.
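Roles are assigned to users through the management tools. As an illustration only, the following Solutions Enabler sketch assumes the symauth command-file syntax; the user name, domain, and file name are hypothetical, and the Dell EMC Solutions Enabler Array Controls and Management CLI User Guide has the authoritative syntax:
Contents of a command file, assign_role.txt:
assign user D:CORP\jsmith to role StorageAdmin;
Commit the assignment on array 005:
symauth -sid 005 -file assign_role.txt commit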
Lockbox
Solutions Enabler uses a Lockbox to store and protect sensitive information. The Lockbox is
associated with a particular host. This association prevents the Lockbox from being copied to a
second host and used to obtain access.
The Lockbox is created at installation. During installation, the installer prompts the user to provide
a password for the Lockbox; if no password is provided, a default password is generated and used
together with the Stable System Values (SSVs, a fingerprint that uniquely identifies the host
system). For more information about the default password, see Default Lockbox password on
page 139.
Lockbox passwords
If you create the Lockbox using the default password during installation, change the password
immediately after installation to best protect the contents in the Lockbox.
For maximum security, select a password that is hard to guess. It is very important to remember
the password.
WARNING Loss of this password can lead to situations where the data stored in the Lockbox is
unrecoverable. Dell EMC cannot recover a lost lockbox password.
Passwords must meet the following requirements:
l 8 - 256 characters in length
l Include at least one numeric character
l Include at least one uppercase and one lowercase character
l Include at least one of these non-alphanumeric characters: ! @ # % &
Lockbox passwords may include any character that can be typed from a US standard
keyboard.
l The new password must not be the same as the previous password.
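For example, a password such as Pmax#2019pw meets all of these requirements; it is shown only as an illustration and should not be used as an actual password.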
Client/server communications
All communications between clients and hosts use SSL to help ensure data security.
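For example, a Solutions Enabler client typically identifies its SYMAPI server connection in the netcnfg file. The entry below is a sketch only: the host name, IP address, and port number are placeholders, and the exact format and security levels are defined in the Solutions Enabler Installation and Configuration Guide:
SYMAPI_SERVER - TCPIP se_host 198.51.100.10 2707 SECURE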
Severity: REMOTE FAILED
Description: The Service Processor cannot communicate with the Dell EMC Customer Support Center.
Environmental errors
The following table lists the environmental errors in SIM format for PowerMaxOS 5978 or higher.
Operator messages
Error messages
On z/OS, SIM messages are displayed as IEA480E Service Alert Error messages. They are
formatted as shown below:
Figure 25 z/OS IEA480E acute alert error message format (call home failure)
Figure 26 z/OS IEA480E service alert error message format (Disk Adapter failure)
Figure 27 z/OS IEA480E service alert error message format (SRDF Group lost/SIM presented against
unrelated resource)
Event messages
The storage array also reports events to the host and to the service processor. These events are:
l The mirror-2 volume has synchronized with the source volume.
l The mirror-1 volume has synchronized with the target volume.
l Device resynchronization process has begun.
On z/OS, these events are displayed as IEA480E Service Alert Error messages. They are
formatted as shown below:
Figure 28 z/OS IEA480E service alert error message format (mirror-2 resynchronization)
Figure 29 z/OS IEA480E service alert error message format (mirror-1 resynchronization)
l eLicensing............................................................................................................................150
l Open systems licenses......................................................................................................... 152
eLicensing
Arrays running PowerMaxOS use Electronic Licenses (eLicenses).
Note: For more information on eLicensing, refer to Dell EMC Knowledgebase article 335235 on
the Dell EMC Online Support website.
You obtain license files from Dell EMC Online Support, copy them to a Solutions Enabler or a
Unisphere host, and push them out to your arrays. The following figure illustrates the process of
requesting and obtaining your eLicense.
Figure 30 eLicensing process
1. New software purchase, either as part of a new array or as an additional purchase to an
existing system.
2. EMC generates a single license file for the array and posts it on support.emc.com for
download.
Note: To install array licenses, follow the procedure described in the Solutions Enabler
Installation Guide and Unisphere Online Help.
Each license file fully defines all of the entitlements for a specific system, including the license
type and the licensed capacity. To add a feature or increase the licensed capacity, obtain and
install a new license file.
Most array licenses are array-based, meaning that they are stored internally in the system feature
registration database on the array. However, there are a number of licenses that are host-based.
Array-based eLicenses are available in the following forms:
l An individual license enables a single feature.
l A license suite is a single license that enables multiple features. License suites are available only
if all features are enabled.
l A license pack is a collection of license suites that fit a particular purpose.
To view effective licenses and detailed usage reports, use Solutions Enabler, Unisphere, Mainframe
Enablers, Transaction Processing Facility (TPF), or IBM i platform console.
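For example, one way to list the eLicenses registered on an array with Solutions Enabler is sketched below; the symlmf options shown (including the emclm license type) should be confirmed against the Solutions Enabler documentation, and array 005 is a placeholder:
symlmf list -type emclm -sid 005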
Capacity measurements
Array-based licenses include a capacity licensed value that defines the scope of the license. The
method for measuring this value depends on the license's capacity type (Usable or Registered).
Not all product titles are available in all capacity types, as shown below.
Usable capacity
Usable Capacity is defined as the amount of storage available for use on an array. The usable
capacity is calculated as the sum of all Storage Resource Pool (SRP) capacities available for use.
This capacity does not include any external storage capacity.
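For example, if an array has two SRPs with 150 TB and 50 TB of capacity available for use (illustrative figures only), its usable capacity for licensing purposes is 200 TB.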
Registered capacity
Registered capacity is the amount of user data managed or protected by each particular product
title. It is independent of the type or size of the disks in the array.
The method for measuring registered capacity depends on whether the licenses are part of a
bundle or are individual.
License packages
This table lists the license packages available in an open systems environment.
Table 16 PowerMax license packages
l Duplicate existing sessions
l Associating an RDFA-DSE pool with an SRDF group
l DSE Threshold
l DSE Autostart
l Write Pacing attributes, including:
n Write Pacing Threshold
n Write Pacing Autostart
n Device Write Pacing exemption
n TimeFinder Write Pacing Autostart
Individual licenses
These items are available for arrays running PowerMaxOS and are not included in any of the
license suites:
Table 17 Individual licenses for open systems environment
Ecosystem licenses
These licenses do not apply to arrays:
Table 18 Ecosystem licenses