Implementing the IBM Storwize V7000 Gen2
Jon Tate
Morten Dannemand
Nancy Kinney
Massimo Rosati
Lev Sturmer
ibm.com/redbooks
International Technical Support Organization
January 2015
SG24-8244-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
This edition applies to the IBM Storwize V7000 Gen2 running software version 7.3.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Now you can become a published author, too . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Chapter 6. IBM Real-time Compression and the IBM Storwize V7000 Gen2 . . . . . . . 117
6.1 Real-time Compression background, overview, and value proposition. . . . . . . . . . . . 118
6.1.1 The solution: IBM Real-time Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.1.2 Common use cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.2 IBM Real-time Compression technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.2.1 Random Access Compression Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.2.2 RACE in Storwize V7000 Gen2 software stack . . . . . . . . . . . . . . . . . . . . . . . . . 126
6.2.3 Data write flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.2.4 Data read flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.2.5 Compression of existing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.3 Storwize V7000 Gen2 software and hardware updates that enhance Real-time
Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Chapter 9. IBM Storwize V7000 Gen2 operations using the GUI . . . . . . . . . . . . . . . . 169
9.1 Introduction to IBM Storwize V7000 GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
9.1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
9.1.2 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
9.1.3 Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
9.1.4 Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
9.1.5 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
9.1.6 Copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
9.1.7 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
9.1.8 Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
9.2 IBM Storage Tier Advisor Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features described in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, DB2®, DS4000®, DS5000™, DS8000®, Easy Tier®, FlashCopy®, Global Technology Services®,
GPFS™, IBM®, IBM Elastic Storage™, Real-time Compression™, Redbooks®, Redbooks (logo)®,
Storwize®, System Storage®, System z®, Tivoli®, XIV®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Data is the new currency of business, the most critical asset of the modern organization. In
fact, enterprises that can gain business insights from their data are twice as likely to
outperform their competitors. Nevertheless, 72% of them have not started, or are only
planning, big data activities. In addition, organizations often spend too much money and time
managing where their data is stored. The average firm purchases 24% more storage every
year, but uses less than half of the capacity that it already has.
The IBM® Storwize® family, including the IBM SAN Volume Controller Data Platform, is a
storage virtualization system that enables a single point of control for storage resources. This
functionality helps support improved business application availability and greater resource
use. The following list describes the business objectives of this system:
To manage storage resources in your information technology (IT) infrastructure
To make sure that those resources are used to the advantage of your business
To do it quickly, efficiently, and in real time, while avoiding increases in administrative costs
Virtualizing storage with Storwize helps make new and existing storage more effective.
Storwize includes many functions traditionally deployed separately in disk systems. By
including these functions in a virtualization system, Storwize standardizes them across
virtualized storage for greater flexibility and potentially lower costs.
Storwize functions benefit all virtualized storage. For example, IBM Easy Tier® optimizes use
of flash memory. In addition, IBM Real-time Compression™ enhances efficiency even further
by enabling the storage of up to five times as much active primary data in the same physical
disk space. Finally, high-performance thin provisioning helps automate provisioning. These
benefits can help extend the useful life of existing storage assets, reducing costs.
Integrating these functions into Storwize also means that they are designed to operate
smoothly together, reducing management effort.
This IBM Redbooks® publication provides information about the latest features and functions
of the Storwize V7000 Gen2 and software version 7.3 implementation, architectural
improvements, and Easy Tier.
Alan Dawson
Steven White
Chris Canto
Barry Whyte
Evelyn Perez
Lee Sanders
Katja Gebuhr
Paul Merrison
Gareth Nicholls
Ian Boden
John Fairhurst
IBM Hursley, UK
Special thanks to the Brocade Communications Systems staff in San Jose, California for their
unparalleled support of this residency in terms of equipment and support in many areas:
Madu Amajor
Silviano Gaona
Sangam Racherla
Brian Steffler
Marcus Thordal
Jim Baldyga
Brocade Communications Systems
Now you can become a published author, too
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time. Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
can help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Obtain more information about the residency program, browse the residency index, and apply
online:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us.
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form:
ibm.com/redbooks
Send your comments in an email:
redbooks@us.ibm.com
Mail your comments:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
The focus of this book is virtualization at the disk layer, which is referred to as block-level
virtualization, or the block aggregation layer. A description of file system virtualization is
beyond the scope of this book.
However, if you are interested in file system virtualization, see the following information about
IBM General Parallel File System (GPFS™) or IBM Scale Out Network Attached Storage
(SONAS), which is based on GPFS.
To obtain more information and an overview of GPFS and IBM Elastic Storage™, see the
following website:
http://www.ibm.com/systems/technicalcomputing/platformcomputing/products/gpfs/
The Storage Networking Industry Association’s (SNIA) block aggregation model (Figure 1-1
on page 3) provides a useful overview of the storage domain and its layers. The figure shows
the three layers of a storage domain:
The file
The block aggregation
The block subsystem layers
The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage
controllers), or in storage devices (intelligent disk arrays).
One of the IBM implementations of a block aggregation solution is IBM Storwize V7000 Gen2.
Storwize V7000 Gen2 is implemented as a clustered appliance in the storage network layer.
The key concept of virtualization is to decouple the storage from the storage functions that
are required in the storage area network (SAN) environment. Decoupling means abstracting
the physical location of data from the logical representation of that data. The virtualization
engine presents logical entities to the user, and internally manages the process of mapping
these entities to the actual location of the physical storage.
The actual mapping that is performed depends on the specific implementation, such as the
granularity of the mapping, which can range from a small fraction of a physical disk, up to the
full capacity of a physical disk.
A single block of information in this environment is identified by its logical unit number (LUN),
which is the physical disk, and an offset within that LUN, which is known as a logical block
address (LBA).
The term physical disk is used in this context to describe a piece of storage that might be
carved out of a Redundant Array of Independent Disks (RAID) in the underlying disk
subsystem.
Specific to the Storwize V7000 Gen2 implementation, the logical entity whose address space is
mapped is referred to as a volume, and the physical disk is referred to as a managed
disk (MDisk).
The server and application are only aware of the logical entities, and they access these
entities using a consistent interface that is provided by the virtualization layer.
The functionality of a volume that is presented to a server, such as expanding or reducing the
size of a volume, mirroring a volume, creating an IBM FlashCopy®, and thin provisioning, is
implemented in the virtualization layer. It does not rely in any way on the functionality that is
provided by the underlying disk subsystem. Data that is stored in a virtualized environment is
stored in a location-independent way, which enables a user to move or migrate data between
physical locations (referred to as storage pools).
Storwize V7000 Gen2 delivers these functions in a homogeneous way on a scalable and
highly available platform, over any attached storage, and to any attached server.
You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs relate to managing the storage system.
However, how much of the management of multiple systems, with separate interfaces, can be
handled as a single entity? In a non-virtualized storage environment, every system is an
“island” that needs to be managed separately.
In addition, because Storwize V7000 Gen2 provides advanced functions, such as mirroring
and FlashCopy, there is no need to purchase them again for each new virtualized disk
subsystem.
Today, it is typical that open systems run at less than 50% of the usable capacity that is
provided by the RAID subsystems. Measured against the installed raw capacity in the disk
subsystems, and depending on the RAID level that is used, usage is often less than 35%.
A block-level virtualization solution, such as Storwize V7000 Gen2, can enable capacity
usage to increase to approximately 75 - 80%. With Storwize V7000 Gen2, free space does
not need to be maintained and managed within each storage subsystem, which further
increases capacity use.
There are two major approaches in use today to consider for the implementation of block-level
aggregation and virtualization:
Symmetric: In-band appliance
The device is a SAN appliance that sits in the data path, and all I/O flows through the
device. This implementation is referred to as symmetric virtualization or in-band.
The device is both target and initiator. It is the target of I/O requests from the host
perspective, and the initiator of I/O requests from the storage perspective. The redirection
is performed by issuing new I/O requests to the storage. Storwize V7000 Gen2 uses
symmetric virtualization.
Asymmetric: Out-of-band or controller-based
The device is usually a storage controller that provides an internal switch for external
storage attachment. In this approach, the storage controller intercepts and redirects I/O
requests to the external storage as it does for internal storage. The actual I/O requests are
themselves redirected. This implementation is referred to as asymmetric virtualization or
out-of-band.
The Storwize V7000 Gen2 solution provides a modular storage system that includes the
capability to virtualize both external SAN-attached storage and its own internal storage. The
Storwize V7000 Gen2 solution is built upon the IBM SAN Volume Controller technology base,
and uses technology from the IBM System Storage DS8000® family.
A Storwize V7000 Gen2 system provides several configuration options that are aimed at
simplifying the implementation process. It also provides automated wizards, called Directed
Maintenance Procedures (DMP), to help resolve any events that might occur. A Storwize
V7000 Gen2 system is a midrange, clustered, scalable, and external virtualization device.
Included with a Storwize V7000 Gen2 system is a graphical user interface (GUI) that enables
storage to be deployed quickly and efficiently. The GUI runs on the Storwize V7000 Gen2
system, so there is no need for a separate console. The management GUI contains a series
of preestablished configuration options that are called presets, and that use common settings
to quickly configure objects on the system. Presets are available for creating volumes and
FlashCopy mappings, and for setting up a RAID configuration.
The Storwize V7000 Gen2 solution provides a choice of up to 1056 serial-attached SCSI (SAS)
drives for the internal storage in a clustered system. It uses SAS cables and connectors to
attach to the optional expansion enclosures. In
a clustered system, the Storwize V7000 Gen2 can provide about 4 pebibytes (PiB) of internal
raw capacity.
When virtualizing external storage arrays, a Storwize V7000 Gen2 system can provide up to
32 PiB of usable capacity. A Storwize V7000 Gen2 system supports a range of external disk
systems, similar to what the IBM SAN Volume Controller supports today.
The Storwize V7000 Gen2 subsystem consists of a set of drive enclosures. Control
enclosures contain disk drives and two nodes (an I/O Group), which are attached to the SAN
fabric. Expansion enclosures contain drives, and are attached to control enclosures.
The simplest use of Storwize V7000 Gen2 is as a traditional RAID subsystem. The internal
drives are configured into RAID arrays and virtual disks created from those arrays. Storwize
V7000 Gen2 can also be used to virtualize other storage controllers.
Storwize V7000 Gen2 supports regular and solid-state drives (SSDs) and uses IBM System
Storage Easy Tier to automatically place volume hot spots on better-performing storage. In
this section, we briefly explain the basic architecture components of Storwize V7000 Gen2.
Nodes
Each Storwize V7000 Gen2 hardware controller is called a node or node canister. The node
provides the virtualization for a set of volumes, cache, and copy services functions. Nodes are
deployed in pairs, and multiple pairs make up a clustered system or system. A system can
consist of 1 - 4 Storwize V7000 Gen2 node pairs.
One of the nodes within the system is known as the configuration node. The configuration
node manages the configuration activity for the system. If this node fails, the system chooses
a new node to become the configuration node.
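You can identify the configuration node from the command-line interface (CLI). The following is a minimal sketch; the commands are part of the Storwize CLI, but the exact output columns (such as config_node) are given as an assumption and vary by software version:

   # List the node canisters; the config_node column shows which canister
   # currently holds the configuration role.
   lsnodecanister
   # Display system-wide properties, including the management IP address.
   lssystem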
Because the nodes are installed in pairs, each node provides a failover function to its partner
node in the event of a node failure.
I/O Groups
In Storwize V7000 Gen2, there are 1 - 4 pairs of node canisters known as I/O Groups.
Storwize V7000 Gen2 supports eight node canisters in the clustered system, which provides
four I/O Groups.
When a host server performs I/O to one of its volumes, all of the I/Os for a specific volume are
directed to the I/O Group. Also, under normal conditions, the I/Os for that specific volume are
always processed by the same node within the I/O Group.
Both nodes of the I/O Group act as preferred nodes for their own specific subset of the total
number of volumes that the I/O Group presents to the host servers (a maximum of 2048
volumes per I/O Group). However, each node also acts as a failover node for its partner node
within the I/O Group, so a node takes over the I/O workload from its partner node, if required,
with no effect to the server’s application.
The Storwize V7000 Gen2 I/O Groups are connected to the SAN so that all application
servers accessing volumes from the I/O Group have access to them. Up to 2048 host server
objects can be defined in four I/O Groups.
If required, host servers can be mapped to more than one I/O Group in the Storwize V7000
Gen2 system. Therefore, they can access volumes from separate I/O Groups. You can move
volumes between I/O Groups to redistribute the load between the I/O Groups.
However, moving volumes between I/O Groups cannot always be done concurrently with host
I/O, and in some cases requires a brief interruption to remap the host. You can check the
compatibility of the Storwize V7000 Gen2 non-disruptive volume move (NDVM) function with
your hosts on the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004622
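If your hosts support NDVM, the move itself is a single CLI operation. The following is a minimal sketch, with an illustrative volume name (vol01) and target I/O Group (io_grp1):

   # Move the volume to another I/O Group. With NDVM-capable hosts this is
   # concurrent with host I/O; otherwise, plan a brief outage to remap the host.
   movevdisk -iogrp io_grp1 vol01
   # Confirm the new owning I/O Group and preferred node.
   lsvdisk vol01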
Important: The active/active architecture provides availability to process I/Os for both
controller nodes, and enables the application to continue running smoothly, even if the
server has only one access route or path to the storage controller. This type of architecture
eliminates the path and LUN thrashing typical of an active/passive architecture.
System
The system or clustered system consists of 1 - 4 I/O Groups. Certain configuration limitations
are then set for the individual system. For example, the maximum number of volumes
supported per system is 8192 (having a maximum of 2048 volumes per I/O Group), or the
maximum managed disk supported is 32 petabytes (PB) per system.
All configuration, monitoring, and service tasks are performed at the system level.
Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a
management Internet Protocol (IP) address is set for the system.
A process is provided to back up the system configuration data onto disk so that it can be
restored in the event of a disaster. Note that this method does not back up application data.
Only Storwize V7000 Gen2 system configuration information is backed up. For the purposes
of remote data mirroring, two or more systems must form a partnership before creating
relationships between mirrored volumes.
System configuration backup: After backing up the system configuration, save the
backup data on your hard disk (or at least outside of the SAN). If you are unable to
access the Storwize V7000 Gen2, you do not have access to the backup data if it is on
the SAN.
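The backup itself is driven from the CLI over Secure Shell (SSH). The following is a minimal sketch; the management IP address is illustrative, and the directory that holds the generated files can vary by software version (commonly /dumps or /tmp on the configuration node):

   # Create the configuration backup files on the configuration node.
   svcconfig backup
   # From a workstation, copy the resulting XML file off the system.
   scp superuser@192.168.1.100:/dumps/svc.config.backup.xml* .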
For details about the maximum configurations that are applicable to the system, I/O Group,
and nodes, see the following link:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004628
Array
The internal drives of a Storwize V7000 Gen2 system are configured into RAID arrays. These
drives are referred to as members of the array. Each array has a RAID level. RAID
levels provide various degrees of redundancy and performance, and have various restrictions
regarding the number of members in the array.
Storwize V7000 Gen2 supports hot spare drives. When an array member drive fails, the
system automatically replaces the failed member with a hot spare drive and rebuilds the array
to restore its redundancy. Candidate and spare drives can be manually exchanged with
array members.
Each array has a set of goals that describe the location and performance of each array. A
sequence of drive failures and hot spare takeovers can leave an array unbalanced (with
members that do not match these goals). The system automatically rebalances such arrays
when the appropriate drives are available.
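Arrays and hot spares are managed from the CLI. The following is a minimal sketch; the drive IDs and pool name are illustrative, and the exact parameters should be verified against your software version:

   # List internal drives and their current use (member, candidate, spare, and so on).
   lsdrive
   # Create a RAID 5 array from five candidate drives and add it to Pool0 as an MDisk.
   mkarray -level raid5 -drive 0:1:2:3:4 Pool0
   # Designate another candidate drive as a hot spare.
   chdrive -use spare 5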
MDisks
A managed disk (MDisk) is the unit of storage that Storwize V7000 Gen2 virtualizes. This unit
could be a logical volume on an external storage array presented to Storwize V7000 Gen2, or
a RAID array consisting of internal drives. Storwize V7000 Gen2 can then allocate these
MDisks into various storage pools. An MDisk is not visible to a host system on the SAN,
because it is internal or zoned only to the Storwize V7000 Gen2 system.
A volume is host-accessible storage that has been provisioned out of one storage pool or, if it
is a mirrored volume, out of two storage pools. The maximum size of an MDisk is 1 PB. A
Storwize V7000 Gen2 system supports up to 4096 MDisks (including internal RAID arrays).
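External LUNs that are zoned to the system appear as unmanaged MDisks and can be listed from the CLI. A minimal sketch:

   # Rescan the Fibre Channel network for newly presented back-end LUNs.
   detectmdisk
   # List MDisks; the mode column shows unmanaged, managed, or image, and the
   # tier column shows the assigned storage tier (enterprise by default).
   lsmdisk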
Disk tier
It is likely that the MDisks (LUNs) presented to the Storwize V7000 Gen2 system have
various performance attributes due to the type of disk or RAID on which they reside. The
MDisks can be on 15,000 revolutions per minute (RPM) Fibre Channel or SAS disks,
Nearline SAS or Serial Advanced Technology Attachment (SATA) disks, or even flash drives.
Therefore, a storage tier attribute is assigned to each MDisk, with the default being
enterprise. A tier 0 (zero)-level disk attribute (ssd) is available for flash drives, and a tier
2-level disk attribute (nearline) is available for nl-sas.
Storage pool
A storage pool is a collection of up to 128 MDisks that provides the pool of storage from which
volumes are provisioned. A single system can manage up to 128 storage pools. The size of
these pools can be changed (expanded or shrunk) at run time by adding or removing MDisks,
without taking the storage pool or the volumes offline.
At any point in time, an MDisk can only be a member in one storage pool, except for image
mode volumes.
Each MDisk in the storage pool is divided into several extents. The size of the extent is
selected by the administrator when the storage pool is created, and cannot be changed later.
The size of the extent ranges from 16 MB - 8192 MB.
It is a leading practice to use the same extent size for all storage pools in a system. This
approach is a prerequisite for supporting volume migration between two storage pools. If the
storage pool extent sizes are not the same, you must use volume mirroring.
Storwize V7000 Gen2 limits the number of extents in a system to 2^22 (approximately 4 million). Because
the number of addressable extents is limited, the total capacity of a Storwize V7000 Gen2
system depends on the extent size that is chosen by the Storwize V7000 Gen2 administrator.
The capacity numbers that are specified in Table 1-1 for a Storwize V7000 Gen2 system
assume that all of the defined storage pools have been created with the same extent size.
Table 1-1   Extent size and maximum system capacity
Extent size    Maximum system capacity
32 MB          128 TB
64 MB          256 TB
256 MB         1 PB
1024 MB        4 PB
2048 MB        8 PB
8192 MB        32 PB
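Because the extent size is fixed at creation time, it is specified on the mkmdiskgrp command. The following is a minimal sketch with illustrative pool, MDisk, and volume names; as noted above, migratevdisk works only between pools with the same extent size, and volume mirroring (addvdiskcopy) is the alternative when the extent sizes differ:

   # Create a storage pool with a 1024 MB extent size (maximum system capacity of 4 PB).
   mkmdiskgrp -name Pool0 -ext 1024
   # Add MDisks to the pool.
   addmdisk -mdisk mdisk0:mdisk1 Pool0
   # Migrate a volume to another pool that uses the same extent size.
   migratevdisk -vdisk vol01 -mdiskgrp Pool1
   # If the extent sizes differ, add a mirrored copy in the target pool instead.
   addvdiskcopy -mdiskgrp Pool1 vol01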
Volumes
Volumes are logical disks that are presented to the host or application servers by Storwize
V7000 Gen2. The hosts cannot see the MDisks. They can only see the logical volumes
created from combining extents from a storage pool.
Using striped mode is the best method to use for most cases. However, sequential extent
allocation mode can slightly increase the sequential performance for certain workloads.
Figure 1-5 shows the striped volume mode and sequential volume mode, and it illustrates
how the extent allocation from the storage pool differs.
You can allocate the extents for a volume in many ways. The process is under full user control
at volume creation time, and can be changed at any time by migrating single extents of a
volume to another MDisk within the storage pool.
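The allocation policy is selected when the volume is created. The following is a minimal sketch with an illustrative pool name; verify the -vtype values against your software version:

   # Striped volume (default): extents are spread across all MDisks in the pool.
   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -vtype striped -name vol_striped
   # Sequential volume: extents are allocated in order from a single MDisk.
   mkvdisk -mdiskgrp Pool0 -iogrp 0 -mdisk mdisk0 -size 100 -unit gb -vtype seq -name vol_seq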
Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A
host in Storwize V7000 Gen2 is a collection of host bus adapter (HBA) worldwide port names
(WWPNs) or Internet SCSI (iSCSI) qualified names (IQNs) defined on the specific server.
Note that iSCSI names are internally identified by “fake” WWPNs, or WWPNs that are
generated by the Storwize V7000 Gen2.
Volumes can be mapped to multiple hosts, for example, a volume that is accessed by multiple
hosts of a server system. iSCSI is an alternative means of attaching hosts. However, all
communication with back-end storage subsystems, and with other Storwize V7000 Gen2
systems, is still through FC.
Volumes are LUN-masked to the host’s HBA WWPNs by a process called host mapping.
Mapping a volume to the host makes it accessible to the WWPNs or IQNs that are configured
on the host object. For a SCSI over Ethernet connection, the IQN identifies the iSCSI target
(destination) adapter. Host objects can have both IQNs and WWPNs.
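Host objects and volume mappings are created in two steps from the CLI. The following is a minimal sketch; the WWPNs, IQN, and names are illustrative:

   # Define a Fibre Channel host object from its HBA WWPNs.
   mkhost -name aixhost01 -fcwwpn 10000000C9609A22:10000000C9609A23
   # Define an iSCSI host object from its IQN.
   mkhost -name linuxhost01 -iscsiname iqn.1994-05.com.redhat:linuxhost01
   # Map a volume to the host (LUN masking); -scsi sets the SCSI LUN ID seen by the host.
   mkvdiskhostmap -host aixhost01 -scsi 0 vol01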
Easy Tier
Easy Tier is a performance function that automatically migrates or moves extents of a volume
to, or from, one MDisk storage tier to another MDisk storage tier. In Storwize V7000 Gen2,
Easy Tier also automatically moves extents between highly used and less-used MDisks within
the same storage tier. This function is called Storage Pool Balancing, and it is enabled by
default without any need for licensing.
It cannot be disabled by the user. Easy Tier monitors the host I/O activity and latency on the
extents of all volumes with the Easy Tier function turned on in a multitier storage pool, over a
24-hour period.
New in Storwize family software V7.3: Easy Tier V3 integrates the automatic
functionality to balance the workloads between highly used and less-used MDisks within
the same tier. It is enabled by default, cannot be disabled by the user, and does not need
an Easy Tier license.
Next, it creates an extent migration plan based on this activity, and then dynamically
moves high-activity (or hot) extents to a higher disk tier in the storage pool. It also moves
extents whose activity has dropped off (or cooled) from the high-tier MDisks back to a
lower-tiered MDisk.
Easy Tier: The Easy Tier function can be turned on or off at the storage pool level and the
volume level. It supports any combination of three tiers within the system. Flash drives are
always marked as Tier 0. Turning off Easy Tier does not disable Storage Pool Balancing.
To experience the potential benefits of using Easy Tier in your environment before installing
expensive flash drives, you can turn on the Easy Tier function for a single-level storage pool.
Next, turn on the Easy Tier function for the volumes within that pool. Easy Tier then starts
monitoring activity on the volume extents in the pool.
Easy Tier creates a report every 24 hours, providing information about how Easy Tier would
behave if the pool were a multitiered storage pool. So, even though Easy Tier extent migration
is not possible within a single-tiered pool, the Easy Tier statistical measurement function
is available.
The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes.
The usage statistics file can be off-loaded from Storwize V7000 Gen2. Then, you can use the
IBM Storage Tier Advisor Tool to create a summary report.
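Easy Tier is controlled per storage pool and per volume, and the heat data can be copied off the system for the IBM Storage Tier Advisor Tool. The following is a minimal sketch; the heat-file name pattern and location are given as an assumption, so check the /dumps directory on your system for the actual file:

   # Turn on Easy Tier for a pool and for a volume within that pool.
   chmdiskgrp -easytier on Pool0
   chvdisk -easytier on vol01
   # From a workstation, copy the Easy Tier heat file for analysis with the
   # IBM Storage Tier Advisor Tool.
   scp superuser@192.168.1.100:/dumps/dpa_heat.*.data .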
Thin-provisioned volumes
When a thin-provisioned volume is created, the user specifies two capacities: the real
physical capacity that is allocated from the storage pool, and the virtual capacity that is
available to the host. The real capacity determines the quantity of MDisk extents that is initially allocated
to the volume. The virtual capacity is the capacity of the volume reported to all other Storwize
V7000 Gen2 components (for example, FlashCopy, Cache, and remote copy), and to the
host servers. The real capacity is used to store both the user data and the metadata for the
thin-provisioned volume. The real capacity can be specified as an absolute value, or a
percentage of the virtual capacity.
Write I/Os to grains of the thin volume that were not previously written to cause grains of the
real capacity to be used to store metadata and the actual user data. Write I/Os to grains that
were previously written to update the grain where data was previously written. The grain size
is defined when the volume is created, and can be 32 kilobytes (KB), 64 KB, 128 KB, or 256
KB. The default grain size is 256 KB, which is the strongly suggested option. If you select 32
KB for the grain size, the volume size cannot exceed 260,000 GB.
The grain size cannot be changed after the thin-provisioned volume has been created.
Generally, smaller grain sizes save space but require more metadata access, which can
adversely affect performance. If you are not going to use the thin-provisioned volume as a
FlashCopy source or target volume, use 256 KB to maximize performance. If you are going to
use the thin-provisioned volume as a FlashCopy source or target volume, specify the same
grain size for the volume and for the FlashCopy function.
Thin-provisioned volumes store both user data and metadata. Each grain of data requires
metadata to be stored. Therefore, the I/O rates that are obtained from thin-provisioned
volumes are less than the I/O rates that are obtained from fully allocated volumes.
The fixed metadata storage overhead is never greater than 0.1% of the user data, and this
fixed resource use is independent of the virtual capacity of the volume. If you are using
thin-provisioned volumes in a FlashCopy map, for the best performance, use the same grain
size as the map grain size. If you are using the thin-provisioned volume directly with a host
system, use a small grain size.
The real capacity of a thin volume can be changed if the volume is not in image mode.
Increasing the real capacity enables a larger amount of data and metadata to be stored on
the volume. Thin-provisioned volumes use the real capacity that is provided in ascending
order as new data is written to the volume. If the user initially assigns too much real capacity
to the volume, the real capacity can be reduced to free storage for other uses.
The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and the real capacity. A volume that is created without
the autoexpand feature, and therefore has a zero contingency capacity, goes offline as soon
as the real capacity is fully used and needs to expand.
Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity is recalculated.
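The virtual capacity, real capacity, grain size, and autoexpand setting are all parameters of volume creation. The following is a minimal sketch; the pool name, sizes, and the 2% real capacity are illustrative only:

   # Thin-provisioned volume: 500 GB virtual capacity, 2% real capacity,
   # autoexpand enabled, and the default 256 KB grain size.
   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -grainsize 256 -name thinvol01
   # Later, manually grow the real capacity by 50 GB if needed.
   expandvdisksize -rsize 50 -unit gb thinvol01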
Real-time Compression
Compressed volumes are a special type of volume where data is compressed as it is written
to disk, saving additional space. To use the compression function, you must obtain the IBM
Real-time Compression license. The IBM Storwize V7000 Gen2 model (2076-524) includes one
compression acceleration adapter in the base product, and you can optionally add a
second one.
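With the license in place, a compressed volume is created in the same way as a thin-provisioned volume, with the -compressed flag added. A minimal sketch with illustrative names and sizes:

   # Create a compressed volume; data is compressed as it is written to disk.
   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -compressed -name compvol01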
Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive suffer from both seek and latency time at the drive level, which can result
in 1 millisecond (ms) to 10 ms of response time (for an enterprise-class disk).
When data is written by the host, the preferred node saves the data in its cache. Before the
cache returns completion to the host, the write must be mirrored to the partner node, or
copied into the cache of its partner node, for availability reasons. After having a copy of the
written data, the cache returns completion to the host. A volume that has not received a write
update during the last two minutes will automatically have all modified data destaged to disk.
Note: The optional 32 GB cache upgrade on the Storwize V7000 Gen2 is reserved for
Real-time Compression (RtC), and it is not used when RtC is disabled.
Starting with the Storwize V7000 Gen2, the cache architecture has changed. The system now
distinguishes between an upper cache and a lower cache, which enables it to be more
scalable. The new cache architecture is:
Required for support beyond 8192 volumes
Required for support beyond 8 node clusters
Required for 64-bit addressing beyond 28 GB
Required for larger memory in nodes
Required for more processor cores
Required for improved performance and stability
Write cache is partitioned by storage pool. This feature restricts the maximum amount of write
cache that a single storage pool can allocate in a system. Table 1-2 shows the upper limit of
write-cache data that a single storage pool in a system can occupy.
Storwize V7000 Gen2 will treat part of its physical memory as non-volatile. Non-volatile
means that its contents are preserved across power losses and resets. Bitmaps for
FlashCopy and Remote Mirroring relationships, the virtualization table, and the write cache
are items in the non-volatile memory.
In the event of a disruption or external power loss, the physical memory is copied to a file in
the file system on the node’s internal disk drive, so that the contents can be recovered when
external power is restored. The functionality of uninterruptible power supply units is provided
by internal batteries, which are delivered with each node’s hardware.
They ensure that there is sufficient internal power to keep a node operational to perform this
dump when the external power is removed. After dumping the content of the non-volatile part
of the memory to disk, Storwize V7000 Gen2 shuts down.
To meet these objectives, the base hardware configuration of the Storwize V7000 Gen2 was
improved substantially to support more advanced processors, more memory and faster
interconnects. This is also the first time that the IBM storage area network (SAN) Volume
Controller platform and the Storwize platform share the same processors.
To learn more about the changes to the new Storwize V7000 Gen2, see Table 2-1.
Table 2-1 (excerpt)
Feature         Previous generation    Storwize V7000 Gen2
Compression     No                     Yes
The following items provide details that help explain the changes that were made across
both platforms to meet these goals:
Processors
Both the IBM SAN Volume Controller DH8 and the Storwize V7000 Gen2 use eight-core
Intel Ivy Bridge processors.
Memory
32 GB for cache and compression, with an option to add another 32 GB for
Real-time Compression workloads.
Peripheral Component Interconnect (PCI) Express (PCIe) technology
Both platforms have multiple PCIe Gen3 slots, compared to the dual PCIe Gen2 slots
in previous versions. This shift to PCIe Gen3 enables each PCIe lane to reach a maximum
speed of 1000 megabytes per second (MBps).
Optional adapters
In previous models, the only option that customers had was to add a dual port 10 Gbps
converged network adapter. The Storwize V7000 Gen1 base model has dual port
1 Gbps adapters, plus quad port 8 Gbps FC adapters. In both of the new platforms, the
base models come with three 1 Gbps Ethernet ports onboard. However, customers have
an option to select multiple add-on adapters for driving host input/output (I/O) and
off-loading compression workloads.
The increase in the performance of input/output operations per second (IOPS) is due to the
two-fold increase in the disks that can be attached behind a Gen 2 Control Enclosure.
Storwize V7000 Gen1 supported a maximum of nine Expansion Enclosures, each with 24
Small Form Factor (SFF) drives, which allowed a maximum of 240 drives per control
enclosure (including the 24 drives in the Control Enclosure itself).
The design of the Storwize V7000 Gen2 is geared toward making the platform more scalable
and flexible, and toward delivering higher performance without using more space in customers'
data centers. Each Storwize V7000 Gen2 can handle a maximum capacity of up to 4 PB, can
virtualize external storage, and enables storage administrators to provide more bandwidth for
applications by adding more I/O adapters, memory, and Quick Assist (compression acceleration) cards.
Storwize V7000 Gen2 SFF Control Enclosure Model 524 includes the following components:
Two node canisters, each with an eight-core processor and integrated hardware-assisted
compression acceleration
64 GB cache (32 GB per canister) with optional 128 GB cache (64 GB per canister)
8 Gb FC, 10 Gb Ethernet, and 1 Gb Ethernet ports for FC, iSCSI, and FCoE connectivity
The front and back views of the two-node cluster based on the 2076-524 are shown in
Figure 2-1.
The Storwize V7000 Gen2 brings with it several significant changes and enhancements over
the previous generation hardware. The IBM Storwize V7000 Gen2 2076-524 includes
preinstalled V7.3 software.
Be aware that downgrading the software to version 7.2 or earlier is not supported; the
Storwize V7000 Gen2 rejects any attempt to install a version earlier than 7.3. See the
following links for integration with existing clustered systems, compatibility, and
interoperability with installed nodes and other system components:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003850
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003705
All models are delivered in a 2U, 19-inch rack mount Enclosure and include a three-year
warranty with customer-replaceable unit (CRU) and onsite service. Optional warranty service
upgrades are available for enhanced levels of warranty service.
With the migration to the 2U platform and the integration of previously external components into
a single chassis, no additional rack space is required. The Storwize V7000 Gen2 integrates a
redundant battery backup system, therefore eliminating the need for external rack-mount
uninterruptible power supply (UPS), optional power switch, and related cabling.
The IBM Storwize V7000 Gen2 seamlessly integrates into the existing infrastructure, and
enables nondisruptive hardware upgrades from previous generations. We start with the base
configuration of the following components:
Two node canisters, each with the following components:
– Three 1 Gbps Ethernet ports (iSCSI and management)
– 1 Gbps technician Ethernet port for immediate availability and emergency access
– 8-core Intel processor
– 32 GB RAM
– 2 x 12 Gbps SAS drive expansion ports
– Hardware compression
– Three expansion slots (2 x Host Interface, 1 x for Compression)
– Two Universal Serial Bus (USB) ports for debug and emergency access
– Battery
SAS and Nearline-SAS hard disk drives (HDDs), and Flash drives
The base card contains the following major components of Storwize V7000 Gen2:
Intel Ivy Bridge 64-bit processor (see Table 2-2)
Lower-level cache: 20 MB
Power consumption: 70 W
One dual inline memory module (DIMM) slot per channel for registered error correction
code (ECC) DDR3L DIMMs running at 1600 megatransfers per second (MT/s):
– Support for 32 GB total using commodity memory DIMMs of 8 GB
– Support for 64 GB total using 16 GB DIMMs
– Memory support follows the processor-based model designations:
• DIMM sizes supported: 8 GB DDR3, 16 GB DDR3
• Base memory fitted: 16 GB, 32 GB
• Optional additional memory: +16 GB, +32 GB
Intel Platform Controller Hub (PCH)
Coleto Creek 8926 stock keeping unit (SKU) with hardware assists for compression
Serial peripheral interface (SPI) flash memory for power-on self-test (POST)
Onboard SSD for boot, Linux file system, and Fire-Hose Dump (FHD):
– An SSD capacity of 64 GB, with a sustained bandwidth of 50 MBps for sequential
reads and writes, to support the FHD
– Multi-level cell (MLC) flash technology
Definition: The node Enclosure, which contains a battery, is able to power the
Enclosure while it stores cache and system data to an internal drive in the event of a
power failure. This process is referred to as a Fire-Hose Dump.
Dividing the attached expansion enclosures into two separate chains helps to reduce the
resources required to set up a connection and improves network stability.
Internally, the SPCve uses two RISC cores. Each one is responsible for a group of eight SAS
lanes. To provide a balanced network, four lanes from a RISC group must be routed to an
external port, and the other four lanes to the internal expander.
Enclosure and 12V power supplies to the HIC card are turned off during FHD to preserve
battery life. However, during the 5-second ride-through period, the supply remains within
normal PCIe tolerances.
After the FHD is complete, the node powers off. The node restarts automatically when
input power returns. If this is after the ride-through period but before the end of the FHD,
the node still shuts down but immediately restarts.
For more information, see the IBM System Storage Interoperation Center (SSIC):
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
IBM Storwize V7000 Gen2 uses the IBM SES firmware. IBM requires the Object Data
Manager (ODM) to provide details of the hardware interfaces and fan control algorithms and
assist with resolving integration problems.
Storwize V7000 SFF Expansion Enclosure Model 24F includes the following components:
Two expansion canisters
12 Gb SAS ports for control enclosure and expansion enclosure attachment
Twenty-four slots for 2.5-inch SAS drives
2U, 19-inch rack mount enclosure with AC power supplies
In the event of a power failure, each node performs an independent FHD from memory to the
on-board SSD under software control. This method is compatible with the existing Storwize
code, and it retains the data indefinitely. The memory is not required to be persistent across a
node reset.
The drives do not require battery backup, so the control enclosure removes power from the
drive slots immediately before the early power-off warning (EPOW) expires (within approximately
5 ms after an AC failure). This function is implemented in hardware. Battery packs are not required in
Expansion Enclosures.
The battery pack powers the processor and memory for a few minutes while the Storwize
code copies the memory contents to the onboard SSD. The FHD code runs on a single
processor core to minimize power requirements. The I/O chips and HIC slot are powered
down to save energy. The fans run with the node components remaining within thermal limits.
Each node switches from AC power to battery and back again without interruption. The battery
supports a 5-second ride-through delay with all node electronics active before the dump starts, in
case power comes back quickly. When started, the dump always runs to completion. To allow
the system to be brought online immediately after a longer power outage, the total energy
stored in each battery pack supports two consecutive cycles without an intervening recharge.
Each cycle incorporates a ride-through delay plus a 16 GB FHD.
The onboard SSD has a high write bandwidth to allow the dump to complete quickly. For
example, if the SSD has a sustained write bandwidth of 100 MBps, each battery pack would
need to power its node canister for just under three minutes.
The operational status of batteries and their vital product data (VPD) are available from the
IBM Storwize V7000 Gen2 command-line interface (CLI) using the sainfo lsservicestatus command.
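The following is a minimal sketch of checking battery state from the CLI; the lsenclosurebattery command and its output fields are given as commonly documented values, so verify them against your software version:

   # Service-level node status, which includes battery information, as noted above.
   sainfo lsservicestatus
   # Cluster-level view of the batteries in each enclosure (charging state, percentage charged).
   lsenclosurebattery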
The high-discharge-rate battery solution uses Lithium Nickel Manganese Cobalt Oxide
(NMC) cells (Sony US18650VTC4), in a configuration of three cells in series, each made up of
two cells connected in parallel (3S2P). The nominal battery capacity is derated to allow
for operating temperature and degradation at end of life. The batteries are tested for
conformance with IBM and agency standards for safety.
The battery packs recharge from flat to 98% capacity in one hour or less. Provisions are
made to monitor their state with an electronic “gas gauge,” and their condition is verified by
periodically discharging one battery pack at a time.
Each battery pack can be maintained concurrently with system operation. The battery packs
should be replaced as required, with an average life expectancy of at least five years,
assuming one FHD cycle per month.
The Storwize V7000 Gen2 has fan cooling modules, which are not housed within the power
supplies. Rather, they sit between the Node Canisters and the midplane. The Fan Modules are
separate so that, when a canister is removed, the fans continue to cool the drives. There are
two Fan Modules per enclosure, which means that to service a fan assembly you must first
remove the relevant Node Canister.
There are two cam levers to eject the Fan Module, which are accessible when the Node
Canister is removed. The Fan Module must be reinserted into the Storwize V7000 Gen2
within 3 minutes of removal to maintain adequate system cooling.
Each Storwize V7000 Gen2 control enclosure contains two Fan Modules for cooling. Each
Fan Module contains eight individual fans in four banks of two, as shown in Figure 2-6.
Storwize V7000 Gen2 nodes must have two processors, 64 GB of memory, and at least one
Compression Accelerator card installed to use compression. Enabling compression on
Storwize V7000 Gen2 nodes does not affect non-compressed host-to-disk I/O performance.
The compression card is shown in Figure 2-7.
The Compression Accelerator card conforms to the form-factor for a half length, low-profile
PCIe card. It uses a 16-lane PCIe edge finger connector, although the usage of this is
non-standard. The card has two chips:
An Intel Coleto Creek (8926 SKU without encryption support) used for compression
functions only.
A 24-lane PLX PCIe switch. This is configured as three 8-lane busses as follows:
– A Gen3 bus from the Intel CPU
– A Gen2 bus to the on-card Coleto Creek chip
– A Gen2 bus passed back through the rear 8 lanes of the PCIe socket
In Storwize V7000 Gen2, we support 1 x 10 GbE adapter in each of the platforms. Only
IBM-supported 10 Gb SFPs are used. Each adapter port has amber and green colored LEDs
to indicate port status.
iSCSI is an alternative means of attaching hosts to the Storwize V7000 Gen2. All
communications with back-end storage subsystems, and with other Storwize V7000 Gen2,
only occur through an FC connection.
The iSCSI function is provided by the Storwize software, not by hardware. For more
information about iSCSI, see the IBM Knowledge Center for Storwize V7000 7.3:
http://www.ibm.com/support/knowledgecenter/ST3FR7_7.3.0/com.ibm.storwize.v7000.730
.doc/fab1_hic_installing.html?lang=en
2.4 LEDs
LED indicators on the system show the status of the system. The indicators have
changed from previous generations of the Control Enclosure models. Table 2-3 describes the
LEDs of the Storwize V7000 Gen2, which refers to the newer generation of enclosures, and
their respective meanings.
If power to a node canister is lost, saving critical data starts after a 5-second wait. (If the
outage is shorter than five seconds, the battery continues to support the node and critical
data is not saved.) The node canister stops handling I/O requests from host applications. The
saving of critical data runs to completion, even if power is restored during this time. The loss
of power might be because the input power to the enclosure is lost, or because the node
canister is removed from the enclosure.
When power is restored to the node canister, the system restarts without operator
intervention. How quickly it restarts depends on whether there is a history of previous power
failures. The system restarts only when the battery has sufficient charge for the node canister
to save the cache and state data again. A node canister with multiple power failures might not
have sufficient charge to save critical data. In such a case, the system starts in service state
and waits to start I/O operations until the battery has sufficient charge.
Figure 2-8 shows the Control Enclosure LEDs that indicate battery status.
A detailed view of the system state is provided in the Monitoring sections of the management
GUI, and by the service assistant. If neither the management GUI nor the service assistant is
accessible, use this table to determine the system status using the LED indicators on the
Control Enclosures.
The system status LEDs visible at the rear of each control enclosure can show one of several
states, as described in Table 2-4.
Figure 2-10 shows the rear Control Enclosure LEDs and their meanings.
Table 2-6   LED indicators for SAS ports 1 and 2 on the Control Enclosure
Name               Callout   Symbol   Color/State   Meaning
SAS Port 1 Link    1         None     Green/OFF     No link connection on any PHYs. The connection is down.
SAS Port 1 Fault   2         None     Amber/OFF     No fault. All four PHYs have a link connection.
SAS Port 2 Link    3         None     Green/OFF     No link connection on any PHYs. The connection is down.
SAS Port 2 Fault   4         None     Amber/OFF     No fault. All four PHYs have a link connection.
Table 2-7 Storwize V7000 2076-524 node canister system status LEDs
Name Callout Color/State Meaning
To understand in more detail the status of the I/O port at the rear of a control enclosure, refer
to the topic about Storwize V7000 2076-524 node canister ports and indicators that is linked
at the IBM Knowledge Center on the following website:
http://www.ibm.com/support/knowledgecenter/api/content/ST3FR7_7.3.0/com.ibm.storwi
ze.v7000.730.doc/tbrd_sysstsleds.html
Figure 2-11 SAS ports and Power LEDs at rear of Expansion Enclosure
Three LEDs are in a horizontal row on the right side (when viewed from the rear) of the
expansion canister, and there are two SAS Link ports on either side of the center.
Two SAS ports are in the rear of the Storwize V7000 Gen2 Expansion Enclosure.
SAS ports are identified at the bottom of the port, with 1 on the left and 2 on the right, as
shown in Figure 2-11 on page 42. Use of port 1 is required. Use of port 2 is optional. Each
port connects four data channels.
Name               Color    State   Meaning
SAS Port 1 Link    Green    OFF     No link connection on any PHYs. The connection is down.
SAS Port 1 Fault   Amber    OFF     No fault. All four PHYs have a link connection.
SAS Port 2 Link    Green    OFF     No link connection on any PHYs (lanes). The connection is down.
SAS Port 2 Fault   Amber    OFF     No fault. All four PHYs have a link connection.
More detailed information is available from the Storwize IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/api/content/ST3FR7_7.3.0/com.ibm.storwi
ze.v7000.730.doc/fab1_system_leds.html
To use the technician port, directly connect a computer that has web browsing software and
is configured for Dynamic Host Configuration Protocol (DHCP), using a standard 1 Gbps
Ethernet cable. On uninitialized systems, the technician port provides access to the
system initialization wizard instead of the service assistant.
After a system has been initialized, the technician port provides access to the following
components:
The service assistant
The password reset facility (if enabled)
The Init tool is not displayed if there is a problem that prevents the system from clustering. For
example, this occurs if the node canister is in service state because of an error, or if there is a
stored system ID (the system was set up before, and the user forgot to remove the ID using
the chenclosurevpd -resetclusterid command). If there is a problem, the Service
Assistant GUI is shown, where the customer can log on and check the node canister status.
7. Follow the instructions that are presented by the initialization tool to configure the system with a management IP address. After you complete the initialization process, the system can be reached by opening a supported web browser and entering the following address:
http://<management_IP_address>
Figure 2-13 shows the setup flow for the technician port.
8. After the system is set up and the user connects to the Technician port, they are directed
to the Service GUI.
9. Only the Technician port makes the Password Reset option of the Service Assistant available. The sainfo lsservicestatus command displays the current status of the node, and includes a new information field that indicates whether password reset is enabled (see the sketch that follows).
For more information on CLI commands, see Chapter 8, “IBM Storwize V7000 Gen2
command-line interface” on page 143.
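As a hedged illustration of step 9 (the panel, cluster, and node values shown here are hypothetical, and the exact set of output fields depends on the code level), the command can be run from the service CLI as follows; the full output also includes the information field that reports whether password reset is enabled:
sainfo lsservicestatus
panel_name 01-1
cluster_name ITSO_V7000Gen2
cluster_status Active
node_name node1
node_status Active
config_node Yes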
Important: At the time of writing, the statements made in this book are correct, but they
might change over time. Always verify any statements that have been made with the
Storwize V7000 Gen2 supported hardware list, device driver, firmware, and suggested
software levels on the following website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004622
To achieve the most benefit from the Storwize V7000 Gen2, pre-installation planning must include several important steps. These steps ensure that the Storwize V7000 Gen2 provides the best possible performance, reliability, and ease of management for your application needs. Proper configuration also helps minimize downtime by reducing the need for later changes to the Storwize V7000 Gen2 and the storage area network (SAN) environment to meet future growth needs.
Tip: For comprehensive information about the topics that are described here, see IBM
Storwize V7000 Gen2 Product Manuals on the following website:
http://www.ibm.com/support/docview.wss?uid=ssg1S7003318
Follow these steps when planning for the Storwize V7000 Gen2:
1. Collect and document the number of hosts (application servers) to attach to the Storwize
V7000 Gen2, the traffic profile activity (read or write, sequential or random), and the
performance requirements in terms of input/output (I/O) operations per second (IOPS).
2. Collect and document the storage requirements and capacities:
– The total existing back-end storage to be provisioned on Storwize V7000 Gen2 (if any)
– The total new back-end storage to be provisioned on the Storwize V7000 Gen2 (if any)
– The required storage capacity for local mirror copy (volume mirroring)
– The required storage capacity for point-in-time copy (IBM FlashCopy)
– The required storage capacity for remote copy (Metro Mirror and Global Mirror)
– The required storage capacity for compressed volumes
– Per host:
• Storage capacity
• Host logical unit number (LUN) quantity and sizes
– The required virtual storage capacity that is used as a fully managed volume, and used
as a thin-provisioned volume
3. Define the local and remote SAN fabrics and clustered systems, if a remote copy or a
secondary site is needed.
4. Define the number of clustered systems, and the number of pairs of nodes for each system. Each pair of nodes (an I/O Group) is the container for the volumes. The number of necessary I/O Groups depends on the overall performance requirements.
5. Design the SAN according to the requirement for high availability (HA) and best
performance. Consider the total number of ports. Also consider the bandwidth that is
needed between the host and the Storwize V7000 Gen2, the Storwize V7000 Gen2 and
the disk subsystem, between the Storwize V7000 Gen2 nodes, and for the inter-switch link
(ISL) between the local and remote fabric.
6. Design the Internet Small Computer System Interface (iSCSI) network according to the
requirements for HA and best performance. Consider the total number of ports and
bandwidth that is needed between the host and the Storwize V7000 Gen2.
Although Storwize V7000 Gen2 used to provide additional licenses for purchase to enable
additional functionality, the new structure and pricing model ties the software licenses closely
to the enclosure, and provides the capability to enable additional functionality by purchasing
feature codes under that license. The Storwize V7000 Gen2 base software includes the
following functions:
Software Redundant Array of Independent Disks (RAID) (0/1/5/6/10 with global spares and rebalancing)
Thin provisioning
Volume mirroring
Read/write cache
Unlimited IBM FlashCopy
Automatic pool balancing
Volume and host limits per Storwize V7000: 2048 volumes and 512 host objects per
Control Enclosure, and so on
Embedded management and service graphical user interfaces (GUIs), Storage
Networking Industry Association (SNIA) Storage Management Initiative Specification
(SMI-S)-compliant Common Information Model (CIM) Object Manager (CIMOM)
Management command-line interface (CLI) over Secure Shell (SSH)
Four-way system clustering
Environmental statistics reporting for Energy Star compliance
We have a solution to address those complexities and challenges, and we are happy to have
arrived at an intuitive and straightforward way to order Storwize V7000 Gen2 licenses, and to
maintain them going forward. In this section, we review the licensing structure, as shown in
Figure 3-1.
IBM Storwize family software V7.3 introduces new software licenses for Storwize V7000
Gen2. This new license and pricing structure provides intuitive licensing based on the
functions customers want to enable and use the most.
Storwize V7000 Gen2 used to provide additional licenses for purchase to enable additional
functionality. The new structure and pricing model ties the software licenses closely to the
enclosure, and provides the capability to enable additional functionality by purchasing feature
codes under that license.
Each Storwize Family Software for Storwize V7000 Gen2 5639-CB7, 5639-XB7, and
5639-EB7 license has the following feature codes:
Base software
Full Feature Set
Easy Tier
FlashCopy
Remote Mirroring
Compression
The following sections describe each new software license in detail, explain when and where it applies, and provide examples.
For example, if you are running a Storwize V7000 Gen2 system and want to improve
performance efficiencies with Easy Tier, you might purchase that feature code to use Easy
Tier across all of the Control Enclosures, Expansion Enclosures, and externally virtualized
enclosures configured with that system. You can then use Easy Tier for that Storwize V7000
Gen2 system. Alternatively, you can enable Easy Tier and all other optional advanced
functions available by purchasing the single feature code labeled Full Feature Set.
Consult an IBM sales representative with any questions regarding storage controllers. For
example, adding an IBM System Storage DS5020 consisting of two enclosures to an IBM
Storwize V7000 Gen2 consisting of one Control Enclosure and three Expansion Enclosures
requires the purchase of the external virtualization license with a feature code quantity of
two enclosures.
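As a hedged sketch of how this entitlement is reflected on the system (the enclosure count matches the DS5020 example above, and you should verify the exact parameters for your code level), the external virtualization license can be set and then checked from the CLI:
chlicense -virtualization 2
lslicense
The lslicense output then reports the licensed external virtualization enclosure count.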
Adding a new Storwize V7000 Gen2 system to an existing Storwize V7000 Gen1 in the same cluster
In this second scenario, at the primary site, you are managing a clustered Storwize V7000
Gen2 system consisting of one 2076-124 Disk Control Enclosure that has two 2076-224
Expansion Enclosures attached, and a 2076-524 Disk Control Enclosure that has four
2076-24F Expansion Enclosures attached.
In addition, this Storwize V7000 Gen2 system is managing a System Storage DS5020
consisting of four enclosures. You want to use the Remote Mirroring features in this
configuration. At the secondary site, you have the exact same configuration set up.
One of the first factors to consider is whether you are building a brand new cluster of Storwize
V7000 Gen2 with only 2076-524s in it, or if you are adding the 2076-524 to an existing cluster
having older model Storwize V7000 Gen1 or IBM SAN Volume Controller nodes in it. A
second factor is, if it is a brand new Storwize V7000 Gen2 cluster, you need to determine if
you are racking your Storwize V7000 Gen2s in a single cabinet layout or a dual cabinet
layout.
Additionally, when using the optional 2076-24F flash arrays as part of your Storwize V7000
Gen2 cluster implementation, the distance that you can separate the 2076-524 nodes in the
I/O Group away from their shared 2076-24F flash array is limited by the maximum length of
the 6-meter serial-attached SCSI (SAS) cable used to attach the array to the Storwize V7000
Gen2 units.
Important: You must consider the maximum power rating of the rack. Do not exceed it. For
more information about the power requirements, see the following website:
http://www.ibm.com/support/knowledgecenter/api/content/ST3FR7_7.3.0/com.ibm.sto
rwize.v7000.730.doc/tbrd_physicalconfig.html
Figure 3-3 shows the rear view of a Storwize V7000 Gen2 Control Enclosure with the power
supplies.
A rear view of a Storwize V7000 Gen2 Expansion Enclosure is shown in Figure 3-4.
Each Control Enclosure contains two node canisters, forming an I/O Group. The following guidelines apply on an I/O Group by I/O Group basis:
Control Enclosure only
The Control Enclosure requires two standard rack units of space in a rack. If you plan to
add Expansion Enclosures in the future, follow the guidelines for a Control Enclosure plus
one or more Expansion Enclosures.
Control Enclosure plus one or more Expansion Enclosures
If you have one or more Expansion Enclosures, position the Control Enclosure in the
center of the rack to make cabling easier. Balance the number of Expansion Enclosures
above and below the Control Enclosure.
Storwize V7000 Gen2 has the following requirements:
– Each enclosure requires two standard rack units of space in a rack.
– Attach no more than 10 Expansion Enclosures to port 1 of the Control Enclosure.
– Attach no more than 10 Expansion Enclosures to port 2 of the Control Enclosure.
There is support for 10 expansions per chain, so a total of 21 enclosures can be included per I/O Group. Also follow these placement guidelines:
– Position the enclosures together. Avoid adding other equipment between enclosures.
– Position the enclosures in the rack so that you can easily view them and access them
for servicing. This action also enables the rack to remain stable, and enables two or
more people to install and remove the enclosures.
Many data centers today are at an Uptime Tier 3 or higher level, so power redundancy
concerns that would require a dual cabinet Storwize V7000 Gen2 implementation are no
longer an issue.
However, the type of fire protection system, such as an overhead wet pipe sprinkler system, should be considered. In association with these items, you should also consider the physical separation and location of other key storage environment components.
If you are implementing your entire storage environment with multiple redundant devices
physically separated across multiple cabinets and strings, you need to provide sufficient
physical distance to ensure that your redundant components are in different fire protection
zones and different power-sourced zones. Otherwise, your end-to-end storage environment
can be compromised in case of a zonal facilities failure.
If the data center does not have a robust enough power redundancy infrastructure, or your
storage environment design strategy does not have fully redundant components placed at
sufficient distances apart, the investment in a dual cabinet implementation is justified in
furthering the level of HA and redundancy for your overall storage environment.
Another consideration: if you anticipate adding another Storwize V7000 Gen2 cluster to your storage environment in the future, implementing a dual cabinet approach from the start, and reserving the remaining space in each cabinet for nodes from the second cluster, accomplishes both objectives.
Figure 3-5 shows the rear view of a 2076-524 Node with the two PCIe adapter slots identified.
The Storwize V7000 Gen2 2076-524 node introduces a new feature called a Technician Port.
Ethernet port 4 is allocated as the Technician service port, and is marked with a T. All initial
configuration for each node is performed through the Technician Port.
After the cluster configuration has been completed, the Technician Port automatically routes
the connected user directly to the service GUI.
Information: The default IP address for the Technician Port on a 2076-524 Node is
192.168.0.1. If the Technician Port is connected to a switch, it is disabled and an error is
logged.
Each Storwize V7000 Gen2 node requires one Ethernet cable to connect it to an Ethernet switch or hub. The cable must be connected to port 1. A 10/100/1000 megabits per second (Mbps) Ethernet connection is required for each cable. Both Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) are supported.
To ensure system failover operations, Ethernet port 1 on all nodes must be connected to the
same set of subnets. Each Storwize V7000 Gen2 cluster has a Cluster Management IP
address and a Service IP address for each node in the cluster. See Example 3-1 for details.
Each node in a Storwize V7000 Gen2 clustered system needs to have at least one Ethernet
connection.
Support for iSCSI provides one additional IPv4 and one additional IPv6 address for each
Ethernet port on every node. These IP addresses are independent of the clustered system
configuration IP addresses.
When accessing the Storwize V7000 Gen2 through the GUI or SSH, choose one of the
available IP addresses to which to connect. No automatic failover capability is available. If one
network is down, use an IP address on the alternative network. Clients might be able to use
the intelligence in domain name servers (DNS) to provide partial failover.
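As a hedged sketch (all addresses, the subnet mask, and the node name are hypothetical), the cluster management IP address and an iSCSI port IP address can be set from the CLI:
chsystemip -clusterip 10.18.228.70 -gw 10.18.228.1 -mask 255.255.255.0 -port 1
cfgportip -node node1 -ip 10.18.228.81 -gw 10.18.228.1 -mask 255.255.255.0 2
The chsystemip command changes the system management address that is presented on Ethernet port 1, and cfgportip assigns an iSCSI address to Ethernet port 2 of the specified node.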
The zoning capabilities of the SAN switch are used to create three distinct zones. Storwize V7000 Gen2 7.3 supports 2 gigabits per second (Gbps), 4 Gbps, or 8 Gbps FC fabrics (a 16 Gbps switch connects, but only the 8 Gbps speed is used by default), depending on the hardware platform and on the switch where the Storwize V7000 Gen2 is connected. In an environment where you have a fabric with multiple-speed switches, the preferred practice is to connect the Storwize V7000 Gen2 and the disk subsystem to the switch operating at the highest speed.
All Storwize V7000 Gen2 nodes in the Storwize V7000 Gen2 clustered system are connected
to the same SANs, and they present volumes to the hosts. These volumes are created from
storage pools that are composed of MDisks presented by the disk subsystems.
SAN configurations that use inter-cluster Metro Mirror and Global Mirror relationships require
the following additional switch zoning considerations:
For each node in a clustered system, it is preferred to zone two FC ports from the source
system to two FC ports on the target system.
If dual-redundant ISLs are available, split the two ports from each node evenly between
the two ISLs.
Local clustered system zoning continues to follow the standard requirement for all ports on
all nodes in a clustered system to be zoned to one another.
If an inter-cluster link becomes severely and abruptly overloaded, the local FC fabric
can become congested to the extent that no FC ports on the local Storwize V7000
Gen2 nodes are able to perform local intra-cluster heartbeat communication. This
situation can, in turn, result in the nodes experiencing lease expiry events.
Configure your SAN so that FC traffic can be passed between the two clustered systems.
To configure the SAN this way, you can connect the clustered systems to the same SAN,
merge the SANs, or use routing technologies.
Configure zoning to enable all of the nodes in the local fabric to communicate with all of
the nodes in the remote fabric.
You can use the lsfabric command to generate a report that displays the connectivity
between nodes and other controllers and hosts. This report is helpful for diagnosing SAN
problems.
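For example (the host object name used here is hypothetical), you can list all FC logins, or restrict the report to a single host object:
lsfabric -delim :
lsfabric -host ESX_HOST_01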
For more information about zoning and configuration, see the following website:
http://www.ibm.com/support/knowledgecenter/api/content/ST3FR7_7.3.0/com.ibm.storwi
ze.v7000.730.doc/svc_configrulessummary_02171530.html
Configuration of the system is straightforward: Storwize family systems can normally find
each other in the network, and can be selected from the GUI. IP replication includes
Bridgeworks SANSlide network optimization technology, and is available at no additional
charge. Remote mirror is a chargeable option, but the price does not change with IP
replication. Existing remote mirror users can access the new function at no additional charge.
Information: Full details of how to set up and configure IP replication are available in the
IBM SAN Volume Controller and Storwize Family Native IP Replication publication:
http://www.redbooks.ibm.com/abstracts/redp5103.html
See the following website for a list of currently supported storage subsystems:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004622
Apply the following general guidelines for back-end storage subsystem configuration
planning:
In the SAN, storage controllers that are used by the Storwize V7000 Gen2 clustered
system must be connected through SAN switches. Direct connection between the
Storwize V7000 Gen2 and the storage controller is not supported.
Multiple connections are enabled from the redundant controllers in the disk subsystem to improve data bandwidth performance. It is not mandatory to have a connection from each redundant controller in the disk subsystem to each counterpart SAN, but it is a preferred practice. For example, ports 1 and 3 of both canisters in a Storwize V3700 subsystem can be connected to SAN A, and ports 2 and 4 to SAN B.
All Storwize V7000 Gen2 nodes in a Storwize V7000 Gen2 clustered system must be able to see the same set of ports from each storage subsystem controller. Violating this guideline causes the paths to become degraded. This degradation can occur as a result of applying inappropriate zoning and LUN masking. This guideline has important implications for disk subsystems, such as DS3000, Storwize V3700, Storwize V5000, or Storwize V7000, which impose exclusivity rules regarding the host bus adapter (HBA) worldwide port names (WWPNs) to which a storage partition can be mapped.
If you do not have a storage subsystem that supports the Storwize V7000 Gen2 round-robin
algorithm, make the number of MDisks per storage pool a multiple of the number of storage
ports that are available. This approach ensures sufficient bandwidth to the storage controller
and an even balance across storage controller ports.
Note: Active Data Workload is typically 5 - 8% of the total managed capacity. In a single
I/O Group, 8 TB of active data equates to approximately 160 TB managed. In an
eight-node cluster, this equates to 32 TB of active data (8 TB per I/O Group).
Figure 3-7 shows the basic layout of how Easy Tier works. With Storwize V7000 Gen2, the user must manually identify flash or Nearline MDisks presented by external storage, because all external MDisks are classed as Enterprise by default.
Figure 3-7 The three tiers of disk accessible by Easy Tier
For more information about Easy Tier, see Chapter 4, “IBM Storwize V7000 Gen2 Easy Tier”
on page 85.
Important: There is no fixed relationship between I/O Groups and storage pools.
Attention: Image mode disks should be imported into a storage pool of like disks,
otherwise you risk the possibility of data corruption or loss.
– When creating a managed mode volume with sequential or striped policy, you must use MDisks that contain available extents whose total size is equal to or greater than the size of the volume that you want to create. There might be sufficient extents available on the MDisk, but a contiguous block large enough to satisfy the request might not be available.
Thin-Provisioned volume considerations (a CLI sketch follows this list):
– When creating the Thin-Provisioned volume, you need to understand the use patterns
by the applications or group users accessing this volume. You must consider items,
such as the actual size of the data, the rate of creation of new data, modifying or
deleting existing data, and so on.
– Two operating modes for Thin-Provisioned volumes are available:
• Autoexpand volumes allocate storage from a storage pool on demand, with minimal
required user intervention. However, a malfunctioning application can cause a
volume to expand until it has used all of the storage in a storage pool.
• Non-autoexpand volumes have a fixed amount of assigned storage. In this case, the
user must monitor the volume and assign additional capacity when required. A
malfunctioning application can only cause the volume that it uses to fill up.
– Depending on the initial size for the real capacity, the grain size and a warning level can be set. If a volume goes offline, either through a lack of available physical storage for autoexpand, or because a volume that is marked as non-autoexpand was not expanded in time, a danger exists of data being left in the cache until storage is made available. This situation is not a data integrity or data loss issue, but you must not rely on the Storwize V7000 Gen2 cache as a backup storage mechanism.
– When you create a thin-provisioned volume, you can choose the grain size for
allocating space in 32 KB, 64 KB, 128 KB, or 256 KB chunks. The grain size that you
select affects the maximum virtual capacity for the thin-provisioned volume. The default
grain size is 256 KB, and is the strongly suggested option. If you select 32 KB for the
grain size, the volume size cannot exceed 260,000 GB. The grain size cannot be
changed after the thin-provisioned volume is created.
Generally, smaller grain sizes save space but require more metadata access, which
could adversely affect performance. If you are not going to use the thin-provisioned
volume as a FlashCopy source or target volume, use 256 KB to maximize
performance. If you are going to use the thin-provisioned volume as a FlashCopy
source or target volume, specify the same grain size for the volume and for the
FlashCopy function.
– Thin-provisioned volumes require more I/Os because of directory accesses. For truly
random workloads with 70% read and 30% write, a thin-provisioned volume requires
approximately one directory I/O for every user I/O.
– The directory is two-way write-back-cached (just like the Storwize V7000 Gen2
fast-write cache), so certain applications perform better.
– Thin-provisioned volumes require more processor processing, so the performance per
I/O Group can also be reduced.
– A thin-provisioned volume feature called zero detect provides clients with the ability to
reclaim unused allocated disk space (zeros) when converting a fully allocated volume
to a Thin-Provisioned volume using volume mirroring.
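As a hedged sketch of these options (the pool, volume name, and sizes are hypothetical), a thin-provisioned volume with autoexpand, a 256 KB grain size, and a warning threshold can be created as follows:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name TP_VOL01
The -rsize value sets the initial real capacity as a percentage of the virtual capacity.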
Volume mirroring guidelines (a CLI sketch follows this list):
– Create or identify two separate storage pools to allocate space for your mirrored volume.
– Allocate the storage pools containing the mirrors from separate storage controllers.
– If possible, use a storage pool with MDisks that share the same characteristics.
Otherwise, the volume performance can be affected by the poorer-performing MDisk.
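As a hedged sketch (the pool and volume names are hypothetical), a mirrored copy can be added to an existing volume from a second storage pool, and its synchronization progress checked:
addvdiskcopy -mdiskgrp Pool1 TP_VOL01
lsvdisksyncprogress TP_VOL01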
Note: It is a supported configuration to have eight paths to each volume, but this design
provides no performance benefit, and it does not improve reliability or availability by any
significant degree.
Hosts with four (or more) HBAs take a little more planning, because eight paths are not
an optimum number, so you must instead configure your IBM SAN Volume Controller
Host Definitions (and zoning) as though the single host is two or more separate hosts.
A pseudo-host is not a defined function or feature of the IBM SAN Volume Controller. If you need to define a pseudo-host, you are simply adding another host ID to the IBM SAN Volume Controller host configuration. Rather than creating one host ID with four WWPNs, you define two hosts with two WWPNs each. This is what the term pseudo-host refers to.
Be careful not to map a volume to more than two adapters per host, so that you do not oversubscribe the number of data paths per volume per host.
If a host has multiple HBA ports, each port must be zoned to a separate set of Storwize
V7000 Gen2 ports to maximize HA and performance.
Note: We use the term HBA port to describe the SCSI initiator. We use the term
Storwize V7000 port to describe the SCSI target.
The maximum number of host paths per volume must not exceed eight.
Apply the following guidelines when you use Storwize V7000 Gen2 Advanced Copy Services.
FlashCopy guidelines
Consider these FlashCopy guidelines:
Identify each application that must have a FlashCopy function implemented for its volume.
FlashCopy is a relationship between volumes. Those volumes can belong to separate
storage pools and separate storage subsystems.
You can use FlashCopy for backup purposes by interacting with the IBM Tivoli® Storage
Manager Agent, or for cloning a particular environment.
Define which FlashCopy best fits your requirements: No copy, Full copy, Thin-Provisioned,
or Incremental.
Define which FlashCopy rate best fits your requirements in terms of performance and the amount of time to complete the FlashCopy. Table 3-2 on page 73 shows the relationship of the background copy rate value to the attempted number of grains to be split per second. (A CLI sketch for creating a FlashCopy mapping follows this list.)
Define the grain size that you want to use. A grain is the unit of data that is represented by
a single bit in the FlashCopy bitmap table. Larger grain sizes can cause a longer
FlashCopy elapsed time, and a higher space usage in the FlashCopy target volume.
Smaller grain sizes can have the opposite effect. Remember that the data structure and
the source data location can modify those effects.
The following rows from Table 3-2 illustrate that relationship (background copy rate value, data copied per second, 256 KB grain splits per second, and 64 KB grain splits per second):
31% - 40%: 1 MB, 4, 16
41% - 50%: 2 MB, 8, 32
51% - 60%: 4 MB, 16, 64
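As a hedged sketch (the volume and mapping names are hypothetical), an incremental FlashCopy mapping with a background copy rate of 50 and a 256 KB grain size can be defined and started as follows:
mkfcmap -source DB_VOL01 -target DB_VOL01_FC -name FCMAP_DB01 -copyrate 50 -grainsize 256 -incremental
startfcmap -prep FCMAP_DB01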
Inter-cluster operation needs at least two clustered systems that are separated by several
moderately high-bandwidth links.
In a typical configuration, the nodes of each clustered system connect to local switches in redundant fabrics (fabrics 1A and 1B at the first site, and fabrics 2A and 2B at the second site), and inter-cluster links connect the switches of the two sites.
Technologies for extending the distance between two Storwize V7000 Gen2 clustered
systems can be broadly divided into two categories: FC extenders and SAN multiprotocol
routers.
Due to the more complex interactions involved, IBM explicitly tests products of this class for
interoperability with the Storwize V7000 Gen2. You can obtain the current list of supported
SAN routers in the supported hardware list on the Storwize V7000 Gen2 support website:
https://www.ibm.com/support/entry/myportal/product/system_storage/disk_systems/mid
-range_disk_systems/ibm_storwize_v7000_(2076)
IBM has tested several FC extenders and SAN router technologies with the Storwize V7000
Gen2. You must plan, install, and test FC extenders and SAN router technologies with the
Storwize V7000 Gen2 so that the following requirements are met:
The round-trip latency between sites must not exceed 80 milliseconds (ms), 40 ms one
way. For Global Mirror, this limit enables a distance between the primary and secondary
sites of up to 8000 kilometers (km), 4970.96 miles, using a planning assumption of 100 km
(62.13 miles) per 1 ms of round-trip link latency.
The latency of long-distance links depends on the technology that is used to implement
them. A point-to-point dark fiber-based link typically provides a round-trip latency of 1 ms
per 100 km (62.13 miles) or better. Other technologies provide longer round-trip latencies,
which affects the maximum supported distance.
The configuration must be tested with the expected peak workloads.
When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for
Storwize V7000 Gen2 inter-cluster heartbeat traffic. The amount of traffic depends on how
many nodes are in each of the two clustered systems.
The bandwidth between sites must, at the least, be sized to meet the peak workload
requirements, in addition to maintaining the maximum latency that has been specified
previously. You must evaluate the peak workload requirement by considering the average
write workload over a period of one minute or less, plus the required synchronization
copy bandwidth.
Determine the true bandwidth that is required for the link by considering the peak write
bandwidth to volumes participating in Metro Mirror or Global Mirror relationships, and
adding it to the peak synchronization copy bandwidth.
If the link between the sites is configured with redundancy so that it can tolerate single
failures, you must size the link so that the bandwidth and latency statements continue to
be true even during single failure conditions.
The configuration is tested to simulate the failure of the primary site (to test the recovery
capabilities and procedures), including eventual failback to the primary site from the
secondary.
The configuration must be tested to confirm that any failover mechanisms in the
inter-cluster links interoperate satisfactorily with the Storwize V7000 Gen2.
The FC extender must be treated as a normal link.
The bandwidth and latency measurements must be made by, or on behalf of, the client.
They are not part of the standard installation of the Storwize V7000 Gen2 by IBM. Make
these measurements during installation, and record the measurements. Testing must be
repeated after any significant changes to the equipment that provides the inter-cluster link.
The capabilities of the storage controllers at the secondary clustered system must be
provisioned to provide for the peak application workload to the Global Mirror volumes, plus
the client-defined level of background copy, plus any other I/O being performed at the
secondary site.
The performance of applications at the primary clustered system can be limited by the performance of the back-end storage controllers at the secondary clustered system, so provision the secondary back-end storage adequately to maximize the amount of I/O that applications can perform to Global Mirror volumes.
It is necessary to perform a complete review before using Serial Advanced Technology
Attachment (SATA) for Metro Mirror or Global Mirror secondary volumes. Using a slower
disk subsystem for the secondary volumes for high-performance primary volumes can
mean that the Storwize V7000 Gen2 cache might not be able to buffer all of the writes, and
flushing cache writes to SATA might slow I/O at the production site.
Storage controllers must be configured to support the Global Mirror workload that is
required of them:
– Dedicate storage controllers to only Global Mirror volumes.
– Configure the controller to ensure sufficient quality of service (QoS) for the disks being
used by Global Mirror.
– Ensure that physical disks are not shared between Global Mirror volumes and other I/O
(for example, by not splitting an individual RAID array).
MDisks in a Global Mirror storage pool must be similar in their characteristics, for example,
RAID level, physical disk count, and disk speed. This requirement is true of all storage
pools, but it is particularly important to maintain performance when using Global Mirror.
When a consistent relationship is stopped, for example, by a persistent I/O error on the
intercluster link, the relationship enters the consistent_stopped state. I/O at the primary
site continues, but the updates are not mirrored to the secondary site. Restarting the
relationship begins the process of synchronizing new data to the secondary disk.
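As a hedged CLI sketch of this lifecycle (the volume, relationship, and partner system names are hypothetical), a Global Mirror relationship can be created, started, and monitored; running startrcrelationship again after a stop begins the resynchronization:
mkrcrelationship -master DB_VOL01 -aux DB_VOL01_DR -cluster ITSO_V7000_DR -global -name GM_DB01
startrcrelationship GM_DB01
lsrcrelationship GM_DB01
The lsrcrelationship output shows the state, for example consistent_synchronized or consistent_stopped.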
Because multiple data migration methods are available, choose the method that best fits your
environment, your operating system platform, your kind of data, and your application’s
service-level agreement (SLA).
Tip: Technically, almost all storage controllers provide both striping (RAID 5 or RAID 10)
and a form of caching. The real benefit is the degree to which you can stripe the data
across all MDisks in a storage pool, and therefore have the maximum number of active
spindles at one time. The caching is secondary. The Storwize V7000 Gen2 provides
additional caching to the caching that midrange controllers provide (usually several GB),
but enterprise systems have much larger caches.
To ensure the wanted performance and capacity of your storage infrastructure, undertake a
performance and capacity analysis to reveal the business requirements of your storage
environment. When this analysis is done, you can use the guidelines in this chapter to design
a solution that meets the business requirements.
When considering performance for a system, always identify the bottleneck and, therefore, the limiting factor of a given system. Also consider the workload for which you identify that limiting factor, because the limiting component might not be the same for other workloads.
The Storwize V7000 Gen2 is designed to handle large quantities of multiple paths from the
back-end storage.
The Storwize V7000 Gen2 is capable of providing automated performance optimization of hot
spots by using flash drives and Easy Tier.
3.4.1 SAN
The currently available Storwize V7000 Gen2 models can connect to 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps switches. From a performance point of view, connecting the Storwize V7000 Gen2 to 8 Gbps or 16 Gbps switches is better, to maximize the benefits of the performance and I/O speed.
Correct zoning on the SAN switch brings security and performance together. Implement a
dual-HBA approach at the host to access the Storwize V7000 Gen2.
Advanced features, such as disk tiering, should be disabled on the underlying storage controller, because they skew the MDisk performance characteristics that the Storwize V7000 Gen2 expects.
Storwize family controllers that are virtualized behind the Storwize V7000 Gen2 should not use MDisk pooling; they should present a single MDisk as a single pool (as a single volume), because the Storage Pool Balancing feature affects the way that the MDisk behaves toward the Storwize V7000 Gen2.
3.4.3 Cache
The Storwize V7000 Gen2 clustered system is scalable up to eight nodes (four Control Enclosures), and the performance is nearly linear when adding more nodes into a Storwize V7000 Gen2 clustered system.
The large cache and advanced cache management algorithms in Storwize V7000 Gen2
enable it to improve on the performance of many types of underlying disk technologies. The
Storwize V7000 Gen2 capability to manage, in the background, the destaging operations that
are incurred by writes (in addition to still supporting full data integrity), assists with Storwize
V7000 Gen2’s capability in achieving good database performance.
There are several changes to how Storwize V7000 Gen2 uses its cache in the 7.3 code level.
The cache is separated into two layers, an upper cache, and a lower cache.
In the 7.3 software stack, the upper cache sits near the top of the I/O stack (below the SCSI target, forwarding, and peer communication layers, and above FlashCopy, mirroring, thin provisioning, and compression) and acts as a simple write cache. The lower cache sits below those advanced functions and above virtualization, Easy Tier, and the SCSI initiator layer that drives the Fibre Channel, iSCSI, FCoE, SAS, and PCIe connectivity. The lower cache contains the algorithm intelligence and understands MDisks, acting as an SVC-like second-level cache for the advanced functions, and the two layers share buffer space. This is the first major update to the cache since 2003, and its flexible, plug-and-play style design allows cache algorithm enhancements in the future.
The upper cache delivers the following functionality enabling Storwize V7000 Gen2 to
streamline data write performance:
Provides fast write response times to the host by being as high up in the I/O stack as
possible
Provides partitioning
Combined, the two levels of cache also deliver the following functionality:
Pin data when LUN goes offline.
Provide enhanced statistics for Tivoli Storage Productivity Center for Replication while
maintaining compatibility with an earlier version.
Provide trace for debugging.
Report medium errors.
Correctly resync cache and provide the atomic write functionality.
Ensure that other partitions continue operation when one partition becomes 100% full of pinned data.
Depending on the size, age, and technology level of the disk storage system, the total cache
available in the Storwize V7000 Gen2 can be larger, smaller, or about the same as that
associated with the disk storage. Because hits to the cache can occur in either the Storwize
V7000 Gen2 or the disk controller level of the overall system, the system as a whole can take
advantage of the larger amount of cache wherever it is located.
Therefore, if the storage controller level of the cache has the greater capacity, expect hits to
this cache to occur, in addition to hits in the Storwize V7000 Gen2 cache.
Also, regardless of their relative capacities, both levels of cache tend to play an important role
in enabling sequentially organized data to flow smoothly through the system. The Storwize
V7000 Gen2 cannot increase the throughput potential of the underlying disks in all cases,
because this increase depends on both the underlying storage technology and the degree to
which the workload exhibits hot spots or sensitivity to cache size or cache algorithms.
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, explains the IBM SAN
Volume Controller and Storwize V7000 Gen2 cache partitioning capability:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
The only exception to this is when using mixed hardware types, in which case the lowest
ports should be used for the same purposes, and the remaining ports can be allocated as
required. (The lowest ports are the lowest numbered adapter slots, or the rightmost bits in
the mask.)
Although virtualization with the Storwize V7000 Gen2 provides a great deal of flexibility, it
does not diminish the necessity to have a SAN and disk subsystems that can deliver the
wanted performance. Essentially, Storwize V7000 Gen2 performance improvements are
gained by having as many MDisks as possible, therefore creating a greater level of concurrent
I/O to the back-end without overloading a single disk or array.
Assuming that no bottlenecks exist in the SAN or on the disk subsystem, remember that you
must follow specific guidelines when you perform these tasks:
Creating a storage pool
Creating volumes
Connecting to or configuring hosts that must receive disk space from a Storwize V7000
Gen2 clustered system
You can obtain more detailed information about performance and preferred practices for the
Storwize V7000 Gen2 in IBM System Storage SAN Volume Controller and Storwize V7000
Best Practices and Performance Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
This topic is covered in more detail in IBM System Storage SAN Volume Controller and
Storwize V7000 Best Practices and Performance Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
For the Storwize V7000 Gen2, as for the other IBM storage subsystems, the official IBM
product to collect performance statistics and supply a performance report is the IBM Tivoli
Storage Productivity Center.
You can obtain more information about using the IBM Tivoli Storage Productivity Center to
monitor your storage subsystem in SAN Storage Performance Management Using Tivoli
Storage Productivity Center, SG24-7364:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
More reference links for the IBM Storwize V7000 Gen2 Support Portal, where you can download code and manuals and review current information for planning and installation, are available on the following website:
http://www.ibm.com/storage/support/storwize/v7000
IBM Storwize V7000 Gen2 Supported Hardware List, Device Driver, Firmware, and
Suggested Software Levels V7.x can be viewed on the following website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003741
IBM Storwize V7000 Gen2 Configuration Limits and Restrictions can be viewed on the
following website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003741
The IBM Storwize V7000 Gen2 Knowledge Center is on the following website:
http://www.ibm.com/support/knowledgecenter/ST3FR7/landing/V7000_welcome.html
View IBM Storwize V7000 Gen2 Power and Cooling Requirements on the following website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003711
In the following chapter, our intent is to provide only a basic technical overview, and focus on
the benefits with the new version of Easy Tier. More details for planning and configuration are
available in the following IBM Redbooks publications:
Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521
IBM DS8000 Easy Tier, REDP-4667 (this concept is similar to Storwize V7000 Gen2 Easy
Tier)
IBM Storwize family software has benefited from the software development work for the IBM
System Storage DS8000 product, in which there have been six versions of Easy Tier. Of
those versions, versions 1 and 3 have been implemented in the 7.3 IBM Storwize family
software.
The first generation of Easy Tier introduced automated storage performance management by
efficiently boosting enterprise-class performance with flash drives (SSD), and automating
storage tiering from enterprise-class drives to flash drives. These changes optimized flash
deployments with minimal costs. Easy Tier also introduced dynamic volume relocation and
dynamic extent pool merge.
The third generation of Easy Tier introduces further enhancements that provide automated
storage performance and storage economics management across all three drive tiers (flash,
enterprise, and Nearline storage tiers) as outlined in Figure 4-1. It enables you to consolidate
and efficiently manage more workloads on a single IBM Storwize V7000 Gen2 system. It also
introduces support for storage pool balancing in homogeneous pools. It is based on
performance, not capacity.
Figure 4-1 shows the supported Easy Tier pools now available in Easy Tier 3.
Figure 4-2 shows the Easy Tier process for extent migration.
The migration types shown include auto rebalance within a tier, expand or swap operations involving the flash/SSD tier, and cold demote toward the Nearline tier.
The process automatically balances existing data when new MDisks are added into an
existing pool, even if the pool only contains a single type of drive.
Note: Storage pool balancing is used to balance extents across a storage Pool with the
same performance tier. For example, when adding new drives of the same class to an
existing storage pool, storage pool balancing redistributes the extents based on
performance factors, not capacity.
If a pool contains a single type of MDisk, Easy Tier goes into balancing mode (status is
balanced). When the pool contains multiple types of MDisks, Easy Tier is automatically
turned on (status is active).
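As a hedged sketch (only the relevant lines of output are shown, and field names can vary by code level), the Easy Tier mode and status of a pool are visible in the lsmdiskgrp output:
lsmdiskgrp INT_V7KGEN2
easy_tier auto
easy_tier_status balanced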
The Storwize V7000 Gen2 does not automatically identify external flash drive MDisks. All
external MDisks are put into the enterprise tier by default. You must manually identify
external flash drive MDisks and change their tiers. Local (internal) MDisks are
automatically classified as flash, enterprise, or Nearline, and are placed in the appropriate
tier without user intervention.
However, when a new external MDisk is added to the Storwize V7000 Gen2, the system does not automatically classify the MDisk by the type of drive that the MDisk consists of. You need to manually select the MDisk and assign the correct drive type (tier) to it.
Example 4-1 The chmdisk command to change the tier of an internal MDisk
IBM_Storwize:ITSO_V7000Gen2:superuser>lsmdisk md_v7kgen2-2-001
id 0
name md_v7kgen2-2-001
status online
mode array
mdisk_grp_id 0
mdisk_grp_name INT_V7KGEN2
capacity 558.4GB
quorum_index
block_size
controller_name
ctrl_type
ctrl_WWNN
controller_id
path_count
max_path_count
ctrl_LUN_#
UID
preferred_WWPN
active_WWPN
fast_write_state empty
raid_status online
raid_level raid1
redundancy 1
strip_size 256
spare_goal 1
spare_protection_min 1
balanced exact
tier enterprise
slow_write_priority latency
fabric_type
site_id
site_name
easy_tier_load
IBM_Storwize:ITSO_V7000Gen2:superuser>chmdisk -tier nearline md_v7kgen2-2-001
IBM_Storwize:ITSO_V7000Gen2:superuser>
IBM_Storwize:ITSO_V7000Gen2:superuser>lsmdisk md_v7kgen2-2-001
id 0
name md_v7kgen2-2-001
status online
mode array
mdisk_grp_id 0
mdisk_grp_name INT_V7KGEN2
capacity 558.4GB
quorum_index
block_size
controller_name
ctrl_type
ctrl_WWNN
controller_id
path_count
max_path_count
ctrl_LUN_#
UID
preferred_WWPN
active_WWPN
fast_write_state empty
raid_status online
raid_level raid1
redundancy 1
strip_size 256
spare_goal 1
spare_protection_min 1
balanced exact
tier nearline
Figure 4-3 shows the option to select a tier for a specific external MDisk.
Figure 4-4 shows the class of drives available for Storwize V7000 Gen2 MDisks.
Figure 4-5 shows an example of the properties window for a three-tier storage pool.
Important: When virtualizing any Storwize family storage controller that supports storage
pool balancing, you must disable storage pool balancing on the virtualized Storwize family
storage controller by using the chmdiskgrp command. Failure to do so means that storage
pool balancing on MDisks within Storwize V7000 Gen2 competes with storage pool
balancing on the Storwize virtualized controller, causing performance degradation at
both levels.
Be sure that you are using the same extent size on your storage pools.
If you have any flash drives into your virtualized Storwize family storage controller, use
them for Easy Tier at the higher level (Storwize V7000 Gen2).
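As a hedged sketch of the preceding note (the pool name is hypothetical, and the -easytier parameter is assumed to be the relevant chmdiskgrp option on the virtualized controller), Easy Tier and storage pool balancing can be turned off on a pool of the lower-level Storwize controller:
chmdiskgrp -easytier off LOWER_POOL0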
Heat data files are produced approximately once a day (every 24 hours) when Easy Tier is
active on one or more storage pools, and summarizes the activity per volume since the prior
heat data file was produced. On Storwize family products, the heat data file is in the /dumps
directory on the configuration node, and is named dpa_heat.node_name.time_stamp.data.
Any existing heat data file is erased when it has existed for longer than seven days. The user
must off-load the file, and start STAT from a Windows command-line interface (CLI) with the
file specified as a parameter. The user can also specify the output directory. The STAT
creates a set of Hypertext Markup Language (HTML) files, and the user can open the
resulting index.html file in a browser to view the results.
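A hedged sketch of this procedure (the file names and addresses are hypothetical, and pscp is only one possible secure copy client): list the dump files on the configuration node, copy the heat data file to a Windows workstation, and then run the STAT with the file as a parameter:
lsdumps
pscp -unsafe superuser@9.174.161.40:/dumps/dpa_heat.*.data c:\stat\
STAT c:\stat\dpa_heat.node1.140901.120000.data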
Updates to the STAT for Storwize V7000 Gen2 have added additional capability for reporting.
As a result, when the STAT is run on a heat map file, an additional three comma-separated
values (CSV) files are created and placed in the Data_files directory.
The IBM STAT utility can be downloaded from the IBM Support website:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000935
Figure 4-6 shows the CSV files highlighted in the Data_files directory after running the STAT
over an IBM storage area network (SAN) Volume Controller heatmap.
Figure 4-6 CSV files created by the STAT for Easy Tier
In addition to the STAT, Storwize family software V7.3 now has an additional utility, a Microsoft Excel file for creating additional graphical reports of the workload that Easy Tier is performing. The IBM STAT Charting Utility takes the output of the three CSV files and turns them into graphs for simple reporting.
Workload skew
This graph shows the skew of all workloads across the system, to help clients visualize
and accurately tier configurations when adding capacity or a new system. The output is
illustrated in Figure 4-9.
The STAT Charting Utility can be downloaded from the IBM support website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5251
Figure 5-1 shows the various ways to manage the Storwize V7000 Gen2.
Information: For information about supported web browsers, see the following IBM
Knowledge Center:
http://www.ibm.com/support/knowledgecenter/ST3FR7_7.3.0/com.ibm.storwize.v7000.
730.doc/v7000_ichome_730.html
Note that you have full management control of the Storwize V7000 Gen2, regardless of which
method you choose. IBM Tivoli Storage Productivity Center is a robust software product with
various functions that needs to be purchased separately. You can learn more about IBM Tivoli
Storage Productivity Center on the following website:
http://www.ibm.com/software/products/en/tivostorprodcent
For more information about TCP/IP prerequisites, see Chapter 3, “Planning and
configuration” on page 47.
5.1.2 Prerequisites
Ensure that the Storwize V7000 Gen2 has been physically installed, and that Ethernet and
Fibre Channel (FC) connectivity has been correctly configured. For information about physical
connectivity, see Chapter 3, “Planning and configuration” on page 47.
Before configuring the Storwize V7000 Gen2, ensure that the following information is
available:
Licenses
The licenses indicate whether the client is permitted to use IBM Storwize V7000 Easy Tier,
IBM FlashCopy, IBM Storwize V7000 External Virtualization, remote copy, and IBM
Real-time Compression. For details about Licensing, see Chapter 3, “Planning and
configuration” on page 47.
IPv4 addressing:
– Cluster IPv4 address (one IP address for management)
– Service IPv4 addresses (two addresses for the service interfaces)
– IPv4 subnet mask
– Gateway IPv4 address
IPv6 addressing:
– Cluster IPv6 address (one address for management)
– Service IPv6 addresses (two addresses for the service interface, one for each node)
– IPv6 prefix
– Gateway IPv6 address
5.2.1 How to make the first connection to the Storwize V7000 Gen2
Follow these steps to connect to Storwize V7000 Gen2:
1. The first step is to connect a personal computer (PC) or notebook to the Technician Port on the rear of the Storwize V7000 Gen2 node. See Figure 5-3 for the location of the Technician Port. The Technician Port provides a Dynamic Host Configuration Protocol (DHCP) IPv4 address, so you must ensure that your PC is configured for DHCP. The default IP address for a new node is 192.168.0.1. You can, however, also use a static IP address, which should be set to 192.168.0.2 on your PC or notebook.
The Storwize V7000 Gen2 does not provide IPv6 IP addresses for the Technician Port.
Note: During the initial configuration, you will probably see certificate warnings, because these certificates are self-issued. You can accept these warnings, because they are not harmful.
2. When your PC is connected to the Technician Port, and you have validated that you have an IPv4 DHCP address, for example 192.168.0.12 (the first IP address that the Storwize V7000 Gen2 node assigns), open a supported browser.
This should automatically redirect you to 192.168.0.1, and the initial configuration of the
cluster can start.
4. This chapter focuses on setting up a new system, so we select Yes and click Next.
Remember: If you are adding a Storwize V7000 Gen2 into an existing cluster, ensure
that the existing systems are running code level 7.3 or higher, because the 2076-524
only supports code level 7.3 or higher.
5. The next window will ask you to set an IP address for the cluster. You can choose between
an IPv4 or IPv6 address. In Figure 5-5, we have set an IPv4 address.
When the initialization is successfully completed, you see the message shown in
Figure 5-7.
2. Click Log in and you are prompted to change the default password, as shown in
Figure 5-9. The new password can be any combination of 6 - 63 characters.
5. You can now enter the purchased licenses for this system, as shown in Figure 5-12.
On the next window, you can choose to leave the default name and change it later.
8. Click Apply and Next. The system configures the system name, as shown in Figure 5-15.
12.Click Apply and Next. The system configures the detected enclosures (Figure 5-19).
14.It is highly suggested to set up email notifications. However, they can be configured later. If
you choose to say No to this option now, a warning displays, as shown in Figure 5-21.
Requirement: You must have access to an Simple Mail Transfer Protocol (SMTP)
server (by IP address) to be able to configure Email Event Notifications.
19.Click Apply and Next, and you are taken to the next step, which is Email Notifications, as
shown in Figure 5-26 on page 113.
There are four types of notifications:
– Errors
The user receives email about problems, such as hardware failures, that must be resolved immediately. To run fix procedures on these events, select Monitoring → Events.
– Warnings
The user receives email about problems and unexpected conditions. Investigate the cause to determine any corrective action. To run fix procedures on these events, select Monitoring → Events.
– Information
The user receives email about expected events, for example, when a FlashCopy operation has finished. No action is required for these events.
– Inventory
The user receives inventory email that contains a summary of system status and
configuration settings.
20.The system then configures settings, as shown in Figure 5-27. This concludes the Event
Notifications setup.
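If you prefer the CLI, the same email notification settings can be configured with a hedged sketch such as the following (the SMTP server address and recipient are hypothetical, and you should verify the exact parameters for your code level):
mkemailserver -ip 9.174.161.25 -port 25
mkemailuser -address storage.admin@example.com -error on -warning on -info on -inventory on -usertype local
startemail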
You are now ready to log in to the main GUI of the IBM Storwize V7000 Gen2 with full functionality, which concludes this chapter.
For a detailed guide showing how to use the main GUI, see Chapter 9, “IBM Storwize V7000
Gen2 operations using the GUI” on page 169.
Alternatively, you can use the CLI, which is described in Chapter 8, “IBM Storwize V7000
Gen2 command-line interface” on page 143.
The Real-time Compression solution addresses the challenges listed in the previous section,
because it was designed from the ground up for primary storage. Implementing Real-time
Compression provides the following benefits:
Compression for active primary data
IBM Real-time Compression can be used with active primary data. Therefore, it supports
workloads that are not candidates for compression in other solutions. A unique in-line
compression mechanism enables data to be compressed before it is de-staged to the disk,
which reduces disk cycles and the amount of input/output (I/O) being written to the disk.
Compression for replicated or mirrored data
Remote volume copies can be compressed, in addition to the volumes at the primary
storage tier. This process reduces storage requirements in Metro Mirror and Global Mirror
destination volumes as well.
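As a hedged sketch (the pool, volume name, and size are hypothetical), a compressed volume is created in much the same way as a thin-provisioned volume, with the -compressed flag added:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -compressed -name CMP_VOL01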
General-purpose volumes
Most general-purpose volumes are used for highly compressible data types, such as home
directories, computer-aided design and computer-aided manufacturing (CAD/CAM), and oil
and gas geo-seismic data. Storing such types of data in compressed volumes provides
immediate capacity reduction to the overall used space. More space can be provided to users
without any change to the environment.
There can be many file types stored in general-purpose servers. However, for practical
information, the estimated compression ratios are based on actual field experience. Expected
compression ratios are 50% - 60%.
File systems that contain audio, video files, and compressed files are not good candidates for
compression. The overall capacity savings on these file types are minimal.
Databases
Database information is stored in table space files. It is common to observe high compression
ratios in database volumes. Examples of databases that can greatly benefit from real-time
compression are IBM DB2®, Oracle, and Microsoft SQL Server. Expected compression ratios
are 50% - 80%.
Tip: Some databases offer optional built-in compression. Generally, do not compress
already-compressed database files.
Virtualized infrastructures
The proliferation of open systems virtualization in the market has increased the use of storage
space, with more virtual server images and backups kept online. The use of compression
reduces the storage requirements at the source. Examples of virtualization solutions that can
greatly benefit from Real-time Compression are VMWare, Microsoft Hyper-V, and
Kernel-based Virtual Machine (KVM). Expected compression ratios are 45% - 75%.
Tip: Virtual machines with file systems that contain compressed files are not good
candidates for compression, as described in “General-purpose volumes” on page 119.
RACE technology makes use of over 50 patents, many of which are not about compression. Rather, they define how to make industry-standard Lempel-Ziv (LZ)-based compression of primary storage operate in real time while enabling random access. The primary intellectual property behind this capability is RACE. At a high level, the IBM RACE component compresses data written into the storage system dynamically.
This compression occurs transparently, so Fibre Channel (FC) and Internet Small Computer
System Interface (iSCSI)-connected hosts are not aware of the compression. RACE is an
in-line compression technology, so each host write is compressed as it passes through RACE
to the disks.
This has a clear benefit over other compression technologies that are post-processing in
nature. These alternative technologies do not provide immediate capacity savings, and
therefore are not a good fit for primary storage workloads, such as databases and active data
set applications.
RACE is based on the Lempel-Ziv lossless data compression algorithm, and operates in real
time. When a host sends a write request, it is acknowledged by the upper-level write cache of
the system, and then de-staged to the storage pool.
As part of its de-staging, the request passes through the compression engine, and is then
stored in compressed format onto the storage pool. Writes are therefore acknowledged
immediately after being received by the upper write cache, with compression occurring as
part of the destaging to internal or external physical storage. Capacity is saved when the data
is written by the host, because the host writes are smaller when written to the storage pool.
IBM Real-time Compression is a self-tuning solution, similar to the Storwize V7000 system
itself. It adapts to the workload that runs on the system at any particular moment.
Compression utilities
Compression is probably most known to users because of the widespread use of
compression utilities, such as Zip and Gzip. At a high level, these utilities take a file as their
input, and parse the data by using a sliding window technique. Repetitions of data are
detected within the sliding window history, most often 32 kilobytes (KB). Repetitions outside of
the window cannot be referenced. Therefore, the file cannot be reduced in size unless data is
repeated when the window “slides” to the next 32 KB slot.
Figure 6-1 shows compression that uses a sliding window, where the first two repetitions of
the string “ABCDEF” fall within the same compression window, and can therefore be
compressed using the same dictionary. Note that the third repetition of the string falls outside
of this window, and cannot, therefore, be compressed using the same compression dictionary
as the first two repetitions, reducing the overall achieved compression ratio.
Traditional data compression in storage systems
The traditional approach taken to implement data compression in storage systems is an
extension of how compression works in the compression utilities previously mentioned.
Similar to compression utilities, the incoming data is split into fixed chunks, and then each
chunk is compressed and extracted independently.
However, there are drawbacks to this approach. An update to a chunk requires a read of the
chunk followed by a recompression of the chunk to include the update. The larger the chunk
size chosen, the heavier the I/O penalty to recompress the chunk. If a small chunk size is
chosen, the compression ratio is reduced, because the repetition detection potential is
reduced.
Figure 6-2 shows an example of how the data is split into fixed-size chunks (in the upper-left
side of the figure). It also shows how each chunk gets compressed independently into
variable-length compressed chunks (in the upper-right side of the figure). The resulting
compressed chunks are stored sequentially in the compressed output.
Location-based compression
Both compression utilities and traditional storage systems-compression approaches
compress data by finding repetitions of bytes within the chunk that is being compressed. The
compression ratio of this chunk depends on how many repetitions can be detected within the
chunk. The number of repetitions is affected by how much the bytes stored in the chunk are
related to each other.
The relation between bytes is driven by the format of the object. For example, an office
document might contain textual information and an embedded drawing (such as this page).
Because the chunking of the file is arbitrary, it has no concept of how the data is laid out
within the document. Therefore, a compressed chunk can be a mixture of the textual
information and part of the drawing.
This process yields a lower compression ratio, because the different data types mixed
together cause a suboptimal dictionary of repetitions. Fewer repetitions can be detected
because a repetition of bytes in a text object is unlikely to be found in a drawing.
This traditional approach to data compression is also called location-based compression. The
data repetition detection is based on the location of data within the same chunk.
This challenge was also addressed with the predecide mechanism that was introduced in IBM
SAN Volume Controller code version 7.1.
Predecide mechanism
Some data chunks have a higher compression ratio than others. Compressing some of the
chunks saves very little space but still requires resources, such as processor and memory. To
avoid spending resources on incompressible data, and to provide the ability to use a different,
more effective (in this particular case) compression algorithm, IBM has invented a predecide
mechanism that was first introduced in version 7.1.
The chunks that are below a given compression ratio are skipped by the compression engine,
thereby saving processor time and memory. Chunks for which the main compression algorithm
is not used, but that can still be compressed well with the alternative algorithm, are
marked and flagged accordingly. The result can vary, because predecide does not check the
entire block, only a sample of it.
Temporal compression
RACE offers a technology leap beyond location-based compression, temporal compression.
When host writes arrive to RACE, they are compressed and filled up in fixed-size chunks, also
called compressed blocks. Multiple compressed writes can be aggregated into a single
compressed block.
This type of data compression is called temporal compression because the data repetition
detection is based on the time the data was written into the same compressed block.
Temporal compression adds the time dimension that is not available to other compression
algorithms. It offers a higher compression ratio, because the compressed data in a block
represents a more homogeneous set of input data.
The upper part of Figure 6-5 shows how three writes, sent one after the other by a host, end
up in different chunks. They get compressed into different chunks because their location on
the volume is not adjacent. This yields a lower compression ratio, because the same data
must be compressed by using three separate dictionaries.
When the same three writes are sent through RACE (in the lower part of the figure), the writes
are compressed together by using a single dictionary. This yields a higher compression ratio
than location-based compression.
Figure 6-5 Location-based versus temporal compression
6.2.2 RACE in Storwize V7000 Gen2 software stack
It is important to understand where the RACE technology is implemented in the Storwize
V7000 Gen2 software stack. RACE technology is implemented into the Storwize system thin
provisioning layer, and is an organic part of the stack. The Storwize V7000 Gen2 software
stack is shown in Figure 6-6. Compression is transparently integrated with existing system
management design. All of the Storwize V7000 features are supported on compressed
volumes.
You can create, delete, migrate, map (assign), and unmap (unassign) a compressed volume
as though it were a fully allocated volume. In addition, you can use Real-time Compression
with IBM Easy Tier on the same volumes. This compression method provides non-disruptive
conversion between compressed and decompressed volumes. This conversion provides a
uniform user experience, and eliminates the need for special procedures when dealing with
compressed volumes.
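For example, a compressed volume is created with the same mkvdisk command that creates a
fully allocated volume, with the -rsize and -compressed parameters added. The following is
a hedged sketch only; the pool name (Pool0), size, volume name, and the returned ID are
illustrative:
IBM_Storwize:ITSO_V7000Gen2:admin>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name COMP_VOL1
Virtual Disk, id [10], successfully created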
When the upper cache layer de-stages to the RACE, the I/Os are sent to the thin-provisioning
layer. They are then sent to RACE and, if necessary, the original host write or writes. The
metadata that holds the index of the compressed volume is updated if needed, and is
compressed as well.
This capability enables customers to regain space from the storage pool, which can then be
reused for other applications.
6.3.1 Software enhancements
Cache is the most significant software enhancement.
Cache
As mentioned in Chapter 1, “Introduction to IBM storage virtualization” on page 1, Storwize
V7000 Gen2 software version 7.3 introduces an enhanced, dual-level caching model. This
model differs from the single-level cache model of previous software versions.
In the previous model, the Real-time Compression software component sat below the
single-level read/write cache. The benefit of this model is that the upper-level read/write
cache masks from the host any latency introduced by the Real-time Compression software
component. However, in this single-level caching model, the de-staging of writes for
compressed I/Os to disk might not be optimal for certain workloads, due to the fact that the
RACE component is interacting directly with un-cached storage.
In the new, dual-level caching model, the Real-time Compression software component sits
below the upper-level, fast-write cache, and above the lower-level advanced read/write cache.
There are several advantages to this dual-level model regarding Real-time Compression:
Host writes, whether to compressed or decompressed volumes, are still serviced directly
using the upper-level write cache, preserving low host write I/O latency. Response time
can improve with this model, because the upper cache flushes less data to RACE more
frequently.
The performance of the de-staging of compressed write I/Os to storage is improved,
because these I/Os are now de-staged via the advanced, lower-level cache, as opposed
to directly to storage.
The existence of a lower-level write cache below the RtC component in the software stack
enables the coalescing of compressed writes and, as a result, a reduction in back-end
I/Os, due to the ability to perform full-stride writes for compressed data.
The existence of a lower-level read cache below the Real-time Compression component in
the software stack enables the temporal locality nature of RtC to benefit from pre-fetching
from the back-end storage.
The main (lower-level) cache now stores compressed data for compressed volumes,
increasing the effective size of the lower-level cache.
Support for larger numbers of compressed volumes.
For additional details about the new hardware specification, see Chapter 2, “IBM Storwize
V7000 Gen2 Hardware” on page 21.
However, it is beyond the intended scope of this book to provide an in-depth understanding of
performance statistics, or explain how to interpret them. For a more comprehensive look at
the performance of the IBM Storwize V7000 Gen2, see IBM System Storage SAN Volume
Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521,
which is available at the following website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
For the Storwize family, as with all other IBM storage subsystems, the official IBM tool for the
collection of performance statistics, and to supply performance reporting, is IBM Tivoli
Storage Productivity Center.
You can obtain more information about IBM Tivoli Storage Productivity Center usage and
configuration in SAN Storage Performance Management Using Tivoli Storage Productivity
Center, SG24-7364:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
In addition to the hardware upgrades, the 7.3 code level adds IBM Easy Tier 3 and storage
pool balancing, which can further optimize performance. In this topic, we look at
performance basics and best practices. Easy Tier is described in more detail in Chapter 4,
“IBM Storwize V7000 Gen2 Easy Tier” on page 85.
When using Real-time Compression, ensure that you use the Comprestimator tool to assess
the expected compression ratio for the workload that you will be delivering. With the addition
of the optional acceleration cards in the Storwize V7000 Gen2, there are additional
performance benefits for compressed workloads. For more information, see the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4001012
For more information about RtC, see Chapter 6, “IBM Real-time Compression and the IBM
Storwize V7000 Gen2” on page 117.
Figure 7-1 shows a general HDD performance comparison taken from an IBM Storwize
V7000, single-drive, Redundant Array of Independent Disks 0 (RAID 0) cache disabled
configuration.
Figure 7-1 Drive performance comparison: Values in (x) are short stroked (<25% capacity used)
RAID 0
Striping
Full capacity
No protection against drive loss
One host write equals one disk write (fastest performance)
RAID 1
Mirroring between two drives
Effective capacity of 50%
Protects against one drive loss
One host write equals two disk writes (fast performance)
RAID 5
Striping with parity
Protects against 1 drive loss
Effective capacity of total drives minus one (n-1)
One host write equals two disk reads and two disk writes
RAID 10
Mirroring between two sets of striped drives
Protects against up to 50% drive loss
Effective capacity of 50%
One write equals two disk writes (good performance)
Remember the following high-level rules when designing your SAN and Storwize layout:
Host-to-Storwize inter-switch link (ISL) oversubscription
This area is the most significant I/O load across ISLs. The suggestion is to maintain a
maximum of 7-to-1 oversubscription. Going higher is possible, but it could lead to I/O
bottlenecks. This suggestion also assumes a core-edge design, where the hosts are on
the edge and the IBM SAN Volume Controller is on the core.
Rules and guidelines are no substitution for monitoring performance. Monitoring performance
can both provide a validation that design expectations are met, and identify opportunities for
improvement.
This design provides statistics for the most recent 80-minute period if using the default
five-minute sampling interval. The Storwize V7000 Gen2 supports user-defined sampling
intervals from 1 - 60 minutes. You can define the sampling interval by using the startstats
-interval 2 command to collect statistics at 2-minute intervals.
Collection intervals: Although more frequent collection intervals provide a more detailed
view of what happens in the Storwize V7000 Gen2, they shorten the amount of time that
the historical data is available. For example, rather than an 80-minute period of data with
the default 5-minute interval, if you adjust to 2-minute intervals, you have a 32-minute
period instead.
The lsdumps -prefix /dumps/iostats command shows typical MDisk, volume, node, and
disk drive statistics file names, as shown in Example 7-1. Note that the output is truncated
and shows only part of the available statistics.
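A hedged sketch of starting a 2-minute collection interval and listing the statistics files
follows. The file names are illustrative only; they assume the typical Nn (node), Nm (MDisk),
Nv (volume), and Nd (drive) prefixes and are not actual output:
IBM_Storwize:ITSO_V7000Gen2:admin>startstats -interval 2
IBM_Storwize:ITSO_V7000Gen2:admin>lsdumps -prefix /dumps/iostats
id filename
0 Nn_stats_KD8P1BP-1_140518_174808
1 Nm_stats_KD8P1BP-1_140518_174808
2 Nv_stats_KD8P1BP-1_140518_174808
3 Nd_stats_KD8P1BP-1_140518_174808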
Tip: The performance statistics files can be copied from the Storwize V7000 Gen2 nodes
to a local drive on your workstation using the pscp.exe command (included with PuTTY)
from an MS-DOS CLI, as shown in the following example:
C:\Program Files\PuTTY>pscp -unsafe -load ITSO_V7000Gen2
admin@10.18.229.81:/dumps/iostats/* c:\statsfiles
Use the -load parameter to specify the session that is defined in PuTTY.
Each node collects various performance statistics, mostly at five-second intervals, and
the statistics are available from the config node in a clustered environment. This
information can help you determine the performance effect of a specific node. As with system
statistics, node statistics help you to evaluate whether the node is operating within normal
performance metrics.
Both the lssystemstats and lsnodecanisterstats commands list the same set of statistics,
representing either all node canisters in the cluster or a particular node canister. The
values for these statistics are calculated from the node statistics values in the following way:
Bandwidth: Sum of bandwidth of all nodes
Latency: Average latency for the cluster, which is calculated using data from the whole
cluster, not an average of the single node values
IOPS: Total IOPS of all nodes
CPU percentage: Average CPU percentage of all nodes
Table 7-1 Field name descriptions for lssystemstats and lsnodecanisterstats statistics
Field name Unit Description
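A hedged sketch of querying these statistics from the CLI follows. The field names and
values shown are assumptions for illustration rather than actual output:
IBM_Storwize:ITSO_V7000Gen2:admin>lssystemstats
stat_name stat_current stat_peak stat_peak_time
cpu_pc    2            3         140518174808
vdisk_mb  120          250       140518174738
...
IBM_Storwize:ITSO_V7000Gen2:admin>lsnodecanisterstats node1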
The window, as shown in Figure 7-3 on page 141, is divided into four sections that provide
usage views for the following resources:
CPU Use
– Shows the CPU usage for general tasks (%)
– Shows the CPU usage for compression, when enabled (%)
Volumes. This shows the overall volume use with the following fields:
– Read
– Write
– Read latency
– Write latency
Interfaces. This shows the overall statistics for each of the available interfaces:
– Fibre Channel
– iSCSI
– Serial-attached SCSI (SAS)
– IP Replication
MDisks. This shows the following overall statistics for the MDisks:
– Read
– Write
– Read latency
– Write latency
You can also select to view performance statistics for each of the available canisters of the
system, as shown in Figure 7-4.
It is also possible to change the metric between MBps or IOPS (Figure 7-5).
For each of the resources, there are various values that you can view by selecting the check
box next to a value. For example, for the MDisks view, as shown in Figure 7-7, the four
available fields are selected:
Read
Write
Read latency
Write latency
7.3.3 Performance data collection and Tivoli Storage Productivity Center for
Disk
Although you can obtain performance statistics in standard .xml files, using .xml files is a less
practical and less user-friendly method to analyze the Storwize V7000 Gen2 performance
statistics. Tivoli Storage Productivity Center for Disk is the supported IBM tool to collect and
analyze performance statistics.
For more information about using Tivoli Storage Productivity Center to monitor your storage
subsystem, see the following IBM Redbooks publications:
SAN Storage Performance Management Using Tivoli Storage Productivity Center,
SG24-7364, which is available at the following website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
Tivoli Storage Productivity Center V5.2 Release Guide, SG24-8204, which is available at
the following website:
http://www.redbooks.ibm.com/redpieces/abstracts/sg248204.html?Open
The CLI is a powerful tool offering even more functionality than the graphical user interface
(GUI). We show how to set it up, and how to manage and operate your IBM Storwize V7000
Gen2. We do not delve into the advanced functionality, because it is beyond the intended
scope of this book. If you want to learn about the advanced commands, see the following
website:
http://www.ibm.com/support/knowledgecenter/ST3FR7_7.3.0/com.ibm.storwize.v7000.730.doc/v7000_ichome_730.html
Furthermore, the IBM Storwize V7000 Gen2 shares the underlying platform with the IBM
storage area network (SAN) Volume Controller. Therefore, we also suggest the CLI chapter in
IBM SAN Volume Controller 2145-DH8 Introduction and Implementation, SG24-8229:
http://www.redbooks.ibm.com/Redbooks.nsf/RedpieceAbstracts/sg248229.html
SSH is the communication vehicle between the management workstation and the Storwize
V7000 Gen2. The SSH client provides a secure environment from which to connect to a
remote machine. It uses the principles of public and private keys for authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the clustered system, and a private key, which is kept
private on the workstation that is running the SSH client. These keys authorize specific users
to access the administration and service functions on the system. Each key pair is associated
with a user-defined ID string that can consist of up to 40 characters.
Up to 100 keys can be stored on the system. New IDs and keys can be added, and unwanted
IDs and keys can be deleted. To use the CLI, an SSH client must be installed on that system,
the SSH key pair must be generated on the client system, and the client’s SSH public key
must be stored on the IBM Storwize V7000 Gen2.
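As a hedged illustration only (the procedure in this book uses PuTTY), a workstation with an
OpenSSH client could generate the key pair and connect as follows; the key file name and the
cluster IP address are examples:
workstation$ ssh-keygen -t rsa -b 2048 -f v7000gen2_key
workstation$ ssh -i v7000gen2_key admin@10.18.229.81
The public key file (v7000gen2_key.pub) must first be uploaded to the corresponding user on
the system, as described later in this chapter.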
The SSH client used in this book is PuTTY. Also, a PuTTY key generator can be used to
generate the private and public key pair. The PuTTY client can be downloaded at no cost
from the following address:
http://www.chiark.greenend.org.uk
To generate keys: The blank area that is indicated by the message is the large blank
rectangle in the section of the GUI labeled Key. Continue to move the mouse pointer
over the blank area until the progress bar reaches the far right. This action generates
random characters to create a unique key pair.
4. You are prompted for a name (for example, pubkey) and a location for the public key (for
example, C:\Support Utils\PuTTY). Click Save.
Ensure that you record the name and location, because the name and location of this SSH
public key must be specified later.
Public key extension: By default, the PuTTY key generator saves the public key with
no extension. Use the string pub for naming the public key, for example, pubkey, to easily
differentiate the SSH public key from the SSH private key.
6. You are prompted with a warning message (Figure 8-5). Click Yes to save the private key
without a passphrase.
7. When prompted, enter a name (for example, icat), select a secure place as the location,
and click Save.
Key generator: The PuTTY key generator saves the private key with the PPK
extension.
2. Right-click the user name for which you want to upload the key and click Properties
(Figure 8-7).
2. In the right pane, select SSH as the connection type. In the Close window on exit
section, select Only on clean exit, which ensures that if any connection errors occur, they
are displayed on the user’s window.
7. In the Category pane, click Session to return to the Basic options for your PuTTY session
view (Figure 8-12 on page 154).
8. Enter the following information in the fields (Figure 8-12 on page 154) in the right pane:
– Host Name: Specify the host name or cluster IP address of the IBM Storwize V7000
Gen2.
– Saved Sessions: Enter a session name.
10. Select the new session and click Open to connect to the IBM Storwize V7000 system. A
PuTTY Security Alert opens; confirm it by clicking Yes (Figure 8-13).
11. PuTTY now connects to the system and prompts you for a user name to log in as. Enter
admin as the user name, as shown in Example 8-1, and press Enter.
8.2 Configuring the IBM Storwize V7000 Gen2 using the CLI
Now we describe how to use the CLI to configure the Storwize V7000 Gen2.
Tip: For a full listing of commands, including syntax, variables, and arguments, see the
IBM Knowledge Center. You can also use the help command followed by the command you
want to learn more about.
Note: Command syntax, arguments, and variables are not shown in this topic. Use -? or
>help {command name} to obtain more details.
2. First, we make our drives candidates with the chdrive command (Example 8-6), and then
repeat the command for the remaining drives.
3. In our case, we create one storage pool to hold our storage with the mkmdiskgrp command
(Example 8-7).
4. We then create one array in the storage pool with the mkarray command, which gives us a
storage pool consisting of 10 drives. In our setup, we would like the two remaining drives
to act as hot spares. Use the chdrive command again (Example 8-8).
5. This concludes our internal storage configuration, and we now have a managed disk, as
shown in Example 8-9 on page 159 (a consolidated command sketch follows).
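The following hedged sketch consolidates steps 2 - 4; the drive IDs, RAID level, extent size,
and pool name are assumptions for illustration:
IBM_Storwize:ITSO_V7000Gen2:admin>chdrive -use candidate 0
IBM_Storwize:ITSO_V7000Gen2:admin>mkmdiskgrp -name INT_V7KGEN2 -ext 256
MDisk Group, id [0], successfully created
IBM_Storwize:ITSO_V7000Gen2:admin>mkarray -level raid5 -drive 0:1:2:3:4:5:6:7:8:9 INT_V7KGEN2
MDisk, id [0], successfully created
IBM_Storwize:ITSO_V7000Gen2:admin>chdrive -use spare 10
IBM_Storwize:ITSO_V7000Gen2:admin>chdrive -use spare 11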
This reveals our two IBM Storwize V7000 systems, which are visible on the SAN.
Because the names do not have any meaning for us, we change them.
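A hedged sketch of this renaming follows; the listing command (lscontroller), the default
controller names, and the new names are assumptions for illustration (ITSO_EXT1A also appears
in the lsmdisk output later in this section):
IBM_Storwize:ITSO_V7000Gen2:admin>lscontroller
IBM_Storwize:ITSO_V7000Gen2:admin>chcontroller -name ITSO_EXT1A controller0
IBM_Storwize:ITSO_V7000Gen2:admin>chcontroller -name ITSO_EXT1B controller1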
Now we have set up external storage, and we are ready to start configuring it.
Remember: Create the Storwize V7000 Gen2 as a host on the external disk system
and present the volumes from the external storage to the Storwize V7000 Gen2. This
action is also referred to as host mapping and LUN mapping.
3. The first thing we do when configuring external storage is to issue the detectmdisk
command, which adds the external volumes as managed disks (MDisks). After this, we
issue the lsmdisk command to reveal all of the visible managed disks (Example 8-12).
IBM_Storwize:ITSO_V7000Gen2:admin>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
tier
0 mdisk0 online array 0 INT_V7KGEN2 558.4GB
ssd
1 mdisk1 online array 0 INT_V7KGEN2 558.4GB
nearline
2 mdisk2 online array 0 INT_V7KGEN2 558.4GB
enterprise
3 mdisk3 online array 0 INT_V7KGEN2 558.4GB
enterprise
4 mdisk4 online array 0 INT_V7KGEN2 558.4GB
enterprise
5 md_v7kgen1-1_003 online unmanaged 500.0GB
0000000000000000 ITSO_EXT1A
6005076802898002680000000000000c00000000000000000000000000000000 enterprise
6 md_v7kgen1-1_002 online unmanaged 500.0GB
0000000000000001 ITSO_EXT1A
6005076802898002680000000000000d00000000000000000000000000000000 enterprise
7 md_v7kgen1-1_005 online unmanaged 500.0GB
0000000000000002 ITSO_EXT1A
6005076802898002680000000000000e00000000000000000000000000000000 enterprise
8 md_v7kgen1-1_004 online unmanaged 500.0GB
0000000000000003 ITSO_EXT1A
We now have a 5 GB volume mirror between the IBM Storwize V7000 Gen2 and a
virtualized external storage system. This is useful in many cases, for example for backup
purposes. You can run your backup from the external storage without affecting
performance on your primary storage.
2. Create two more volumes using mkvdisk, but this time we only want them to reside on our
Storwize V7000 Gen2 (Example 8-15).
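A hedged sketch of these commands follows; the pool name, size, volume names, and returned
IDs are examples only:
IBM_Storwize:ITSO_V7000Gen2:admin>mkvdisk -mdiskgrp INT_V7KGEN2 -iogrp 0 -size 5 -unit gb -name V7KG2_VOL1
Virtual Disk, id [1], successfully created
IBM_Storwize:ITSO_V7000Gen2:admin>mkvdisk -mdiskgrp INT_V7KGEN2 -iogrp 0 -size 5 -unit gb -name V7KG2_VOL2
Virtual Disk, id [2], successfully created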
Important: Be careful when expanding volumes. You should always make sure that the
operating system accessing the volume supports volume expansion at the storage
layer. Check compatibility with your operating system vendor.
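Where the operating system supports it, a volume can be grown with the expandvdisksize
command. The following is a hedged sketch; the volume name and the added size are examples only:
IBM_Storwize:ITSO_V7000Gen2:admin>expandvdisksize -size 1 -unit gb V7KG2_VOL1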
For more information about the Copy Services functions, see the following publications:
IBM System Storage SAN Volume Controller and Storwize V7000 Replication Family
Services, SG24-7574:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247574.pdf
IBM SAN Volume Controller and Storwize Family Native IP Replication, REDP-5103:
http://www.redbooks.ibm.com/redpieces/pdfs/redp5103.pdf
Our lab setup includes a Storwize V7000 Gen2 and two Storwize V7000 systems. We show how to
create a Metro Mirror Remote Copy relationship:
1. First, check if we have some partnership candidates using the lspartnershipcandidate
command, as shown in Example 8-17.
IBM_Storwize:ITSO_V7000Gen2:admin>lspartnership
id name location partnership type
cluster_ip event_log_sequence
00000100204001E0 ITSO_V7000Gen2 local
00000200A260009A EXTSTG1 remote partially_configured_local fc
IBM_Storwize:ITSO_V7000Gen2:admin>
3. At this stage, our partnership is partially configured. This is because the mkfcpartnership
command must also be run on the secondary system (Example 8-19).
IBM_Storwize:EXTSTG1:superuser>lspartnership
id name location partnership type cluster_ip
event_log_sequence
00000200A260009A EXTSTG1 local
00000100204001E0 ITSO_V7000Gen2 remote fully_configured fc
IBM_Storwize:EXTSTG1:superuser>
4. This gives us a fully configured partnership between our two Storwize systems, and now we
just need to create the volumes to be mirrored. For this operation, we use the mkvdisk
command (Example 8-20) to create two volumes with the same characteristics, one on
each system.
5. After the volumes are created, we issue the mkrcrelationship command (Example 8-21).
This initiates a Metro Mirror relationship between the two Storwize systems, as sketched
after these steps.
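A hedged sketch of steps 4 and 5 for a Metro Mirror relationship follows. The volume names
match those used in Example 8-22; the pool names, size, relationship name, and the
startrcrelationship step are assumptions for illustration (Example 8-22 shows the Global
Mirror with change volumes variant of the mkrcrelationship command):
IBM_Storwize:ITSO_V7000Gen2:admin>mkvdisk -mdiskgrp INT_V7KGEN2 -iogrp 0 -size 10 -unit gb -name V7KG2_RCVOL
Virtual Disk, id [3], successfully created
IBM_Storwize:EXTSTG1:superuser>mkvdisk -mdiskgrp EXTSTG1_POOL -iogrp 0 -size 10 -unit gb -name EXTSTG1_RCVOL
Virtual Disk, id [0], successfully created
IBM_Storwize:ITSO_V7000Gen2:admin>mkrcrelationship -master V7KG2_RCVOL -aux EXTSTG1_RCVOL -cluster EXTSTG1 -name MM_REL1
RC Relationship, id [0], successfully created
IBM_Storwize:ITSO_V7000Gen2:admin>startrcrelationship MM_REL1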
Example 8-22 The mkrcrelationship command as Global Mirror with change volumes
IBM_Storwize:ITSO_V7000GEN2_2:admin>mkrcrelationship -aux EXTSTG1_RCVOL
-cluster 00000200A260009A -master V7KG2_RCVOL -global -cyclingmode
RC Relationship, id [0], successfully created
IBM_Storwize:ITSO_V7000GEN2_2:admin>
For our lab setup, we have two hosts running VMware, each with a dual-port host bus adapter
(HBA) and an IBM Storwize V7000 Gen2. Assuming that host installation and SAN zoning
have taken place, follow these steps to configure the host:
1. We start by issuing the lsfcportcandidate command, which gives us information about
open FC ports on the SAN (Example 8-23).
2. In our case, this reveals four open FC ports, two for each of our hosts. With this
information, we create our hosts using the mkhost command (Example 8-24), as sketched below.
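A hedged sketch of the mkhost command for the first host follows; the WWPNs match those shown
in the lshost output below, and the second host is created the same way with its own WWPNs:
IBM_Storwize:ITSO_V7000Gen2:admin>mkhost -name VMWare1 -fcwwpn 100000051EC76B91:100000051EC76B92
Host, id [0], successfully created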
IBM_Storwize:ITSO_V7000Gen2:admin>lshost 0
id 0
name VMWare1
port_count 2
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 4
status online
WWPN 100000051EC76B92
node_logged_in_count 2
state active
WWPN 100000051EC76B91
node_logged_in_count 2
state active
IBM_Storwize:ITSO_V7000Gen2:admin>
4. This shows us our two configured hosts, which are ready for LUN mapping. Use the
lsvdisk command to show available volumes, and continue with the mkvdiskhostmap
command to map them (Example 8-26), as sketched below.
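A hedged sketch follows; the volume and host names are the example names used earlier in
this chapter:
IBM_Storwize:ITSO_V7000Gen2:admin>lsvdisk
IBM_Storwize:ITSO_V7000Gen2:admin>mkvdiskhostmap -host VMWare1 V7KG2_VOL1
Virtual Disk to Host map, id [0], successfully created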
The provided storage should now be visible from the hosts, so this topic is concluded.
lsenclosurefanmodule
This command gives us a concise or detailed status of the new fan modules that are installed
in the Storwize V7000 Gen2 (Example 8-27).
lsenclosurebattery
This command gives us a concise or detailed view of the canister batteries (Example 8-28).
For more information about these and other commands, see the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/ST3FR7_7.3.0/com.ibm.storwize.v7000.730.doc/v7000_ichome_730.html
The information is presented at a high level, because this book is based on the new
hardware, and is not intended to provide in-depth coverage of every aspect of the software. For
more detailed information about using the GUI, see Implementing the IBM System Storage
SAN Volume Controller V7.2, SG24-7933.
Although the IBM Storage Tier Advisor Tool (STAT) is not part of the GUI, it is a strong and
useful tool to determine the use of your tiered storage, as the 7.3 code level now supports
three-tiered storage using the IBM Easy Tier functionality.
Important: It is possible for more than one user to be logged in to the GUI at any given
time. However, no locking mechanism exists, so be aware that if two users change the
same object at the same time, the last action entered from the GUI is the one that takes
effect.
The following steps illustrate how to start the Storwize V7000 GUI:
1. Initially, to log on to the management software, type the IP address that was set during the
initial setup process into the address line of your web browser. You can connect from any
workstation that can communicate with the system.
2. You start at the login window, as shown in Figure 9-1.
Dynamic menu
From any page inside the IBM Storwize V7000 GUI, you always have access to the dynamic
menu. The IBM Storwize V7000 GUI dynamic menu is on the left side of the IBM Storwize
V7000 GUI window. To navigate using this menu, move the mouse cursor over the various
icons, and choose a page that you want to display.
The IBM Storwize V7000 dynamic menu consists of multiple panes. These panes group
common configuration and administration objects, and present individual administrative
objects to the IBM Storwize V7000 GUI users.
9.1.2 Monitoring
Figure 9-3 shows the Monitoring menu where you can work with the following details:
Information about the code level
Hardware configuration
See installed hardware and change memory allocation (also known as bitmap allocation).
Events
See warnings and alerts, and run the maintenance procedure.
Real-time performance graphs
See central processing unit (CPU) usage and input/output operations per second (IOPS) for
volumes, managed disks (MDisks), and so on.
Storage pool balancing is introduced in code level 7.3, which means that if you add new or
additional MDisks to an existing pool, it balances the extents across all of the MDisks in a
pool. Before release 7.3, you had to do this manually, or use a script to balance extents after
adding new MDisks to an existing pool. Note that this is an automated process, and it is not
configurable.
9.1.4 Volumes
The Volumes menu contains the following administrative options:
View volumes, create volumes, and delete volumes.
See whether volumes are mapped or unmapped to a host.
See details about volumes that are mapped to a host.
9.1.6 Copy services
In the Copy Services menu, you can administer all copy services related activities:
Create partnerships with other IBM SAN Volume Controller and Storwize systems.
Create and delete Metro Mirrored volumes.
Create and delete Global Mirrored volumes.
Create and delete IBM FlashCopy volumes.
View details about the copy services configured.
9.1.8 Settings
In the Settings menu, you have access to the following activities:
Event notifications, such as call home (using email), Simple Network Management
Protocol (SNMP), Simple Mail Transfer Protocol (SMTP), and syslog
Directory Services, for enabling remote authentication of users
Network, both Fibre Channel (FC) settings and Internet Protocol (IP) settings
Support, where you can manage dumps, snaps, heatmap files, and so on
General, where you can upgrade the system, change date and time settings, and so on
Note: The STAT utility is not a part of the GUI, but can be downloaded from IBM support:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000935
STAT uses limited storage performance measurement data from a user’s operational
environment to model potential unbalanced workload (also known as skew) on disk and array
resources. It is intended to supplement and support, but not replace, detailed pre-installation
sizing and planning analysis.
The STAT.exe command creates a Hypertext Markup Language (HTML) report of the
input/output (I/O) distribution. IBM Storwize V7000 input files are found under /dumps on the
configuration node, and are named dpa_heat.<node_name>.<time_stamp>.data. The file must
be off-loaded manually using the command-line interface (CLI) or GUI.
You can install the STAT tool on any Windows-based PC or notebook, and you don’t need to
have direct access to the IBM SAN Volume Controller.
When the STAT tool is installed, it’s time to off-load or download heat files from your IBM
Storwize V7000 system.
The next few screen captures show how you can download the heat files from the GUI. The
heat files can also be downloaded using the CLI or PuTTY Secure Copy (PSCP). However,
we show how to off-load or download these files using the GUI:
1. Log in to the GUI, select Settings → Support, and click the Show full log listing link,
as shown in Figure 9-10.
2. Now you can select the heat files that you want to use for the STAT tool. Select the files,
right-click, and select Download, as shown in Figure 9-11.
3. When the files have been off-loaded or downloaded, open a command prompt and go to
the directory where you have installed the STAT tool (the default path on a 64-bit Windows
operating system is C:\Program Files (x86)\IBM\STAT).
Note: If the config node of the system reboots, asserts, and so on, note that the new
config node starts the Easy Tier heatmap cycle count from 0, which means that it takes
24 hours until you see a new heatmap file in the /dumps directory.
You might want to copy or move the off-loaded or downloaded files to the directory where you
have installed the STAT tool for ease of use. Otherwise, you have to specify the entire
input file path every time that you create a report.
4. To generate the report (in this case, we have already copied the input file to the STAT
directory), run the following command (one line):
stat.exe -o "c:\Program Files (x86)\IBM\STAT\ITSO_V7KGen2" dpa_heat.KD8P1BP.140518.174808.data
Replace the heat file name with the correct name of the file that you have off-loaded or
downloaded. For IBM Storwize V7000 systems, you can only process one file at a time.
Be aware that in this scenario, we have used the -o option, which specifies an output path
(a folder). This is useful if you are generating STAT reports for more than one system.
Tip: For detailed information about the usage of the STAT tool, see the readme file for
the tool that is contained within the same directory where you installed it.
7. Further details can be seen in the lower section of the Performance Statistics and
Improvement Recommendation page, where you can expand hyperlinks for the following
information:
– Workload Distribution Across Tiers
– Recommended NL Configuration
– Volume Heat Distribution
More details for planning and configuration are available in the following IBM Redbooks
publications:
Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
http://www.redbooks.ibm.com/abstracts/tips1072.html
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521
http://www.redbooks.ibm.com/redpieces/pdfs/sg247521.pdf
IBM DS8000 Easy Tier, REDP-4667
http://www.redbooks.ibm.com/abstracts/redp4667.html?Open
This is described in more detail in Chapter 4, “IBM Storwize V7000 Gen2 Easy Tier” on
page 85.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
description of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in
this document. Note that some publications referenced in this list might be available in
softcopy only:
Implementing the IBM System Storage SAN Volume Controller V7.2, SG24-7933
Implementing the IBM Storwize V7000 V7.2, SG24-7938
IBM b-type Gen 5 16 Gbps Switches and Network Advisor, SG24-8186
Introduction to Storage Area Networks and System Networking, SG24-5470
IBM SAN Volume Controller and IBM FlashSystem 820: Best Practices and Performance
Capabilities, REDP-5027
Implementing the IBM SAN Volume Controller and FlashSystem 820, SG24-8172
Implementing IBM FlashSystem 840, SG24-8189
IBM FlashSystem in IBM PureFlex System Environments, TIPS1042
IBM FlashSystem 840 Product Guide, TIPS1079
IBM FlashSystem 820 Running in an IBM Storwize V7000 Environment, TIPS1101
Implementing FlashSystem 840 with SAN Volume Controller, TIPS1137
IBM FlashSystem V840, TIPS1158
IBM Midrange System Storage Implementation and Best Practices Guide, SG24-6363
IBM System Storage b-type Multiprotocol Routing: An Introduction and Implementation,
SG24-7544
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
Tivoli Storage Productivity Center for Replication for Open Systems, SG24-8149
IBM Tivoli Storage Productivity Center V5.2 Release Guide, SG24-8204
Implementing an IBM b-type SAN with 8 Gbps Directors and Switches, SG24-6116
You can search for, view, download, or order these documents and other Redbooks
publications, Redpapers publications, Web Docs, drafts, and additional materials, from the
following website:
ibm.com/redbooks
Learn about the latest addition to the IBM SAN Volume Controller/Storwize family
Understand the new functions and features
Benefit from an uncomplicated implementation
Data is the new currency of business, the most critical asset of the modern organization.
In fact, enterprises that can gain business insights from their data are twice as likely to
outperform their competitors. Nevertheless, 72% of them have not started, or are only
planning, big data activities. In addition, organizations often spend too much money and
time managing where their data is stored. The average firm purchases 24% more storage
every year, but uses less than half of the capacity that it already has.
The IBM Storwize family, including the IBM SAN Volume Controller Data Platform, is a
storage virtualization system that enables a single point of control for storage resources.
This functionality helps support improved business application availability and greater
resource use. The following list describes the business objectives of this system:
To manage storage resources in your information technology (IT) infrastructure
To make sure that those resources are used to the advantage of your business
To do it quickly, efficiently, and in real time, while avoiding increases in administrative costs
Virtualizing storage with Storwize helps make new and existing storage more effective.
Storwize includes many functions traditionally deployed separately in disk systems. By
including these functions in a virtualization system, Storwize standardizes them across
virtualized storage for greater flexibility and potentially lower costs.
Storwize functions benefit all virtualized storage. For example, IBM Easy Tier optimizes
use of flash memory. In addition, IBM Real-time Compression enhances efficiency even
further by enabling the storage of up to five times as much active primary data in the
same physical disk space. Finally, high-performance thin provisioning helps automate
provisioning. These benefits can help extend the useful life of existing storage assets,
reducing costs.
Integrating these functions into Storwize also means that they are designed to operate
smoothly together, reducing management effort.
This IBM Redbooks publication provides information about the latest features and
functions of the Storwize V7000 Gen2 and software version 7.3 implementation,
architectural improvements, and Easy Tier.
IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, clients, and IBM Business Partners from around the world create
timely technical information based on realistic scenarios. Specific recommendations
are provided to help you implement IT solutions more effectively in your environment.