Unleash the Power of Flash: Getting Started with IBM Storage Virtualize Version 8.7
SG24-8561
Vasfi Gucer
Andy Harchen
Jon Herd
Hartmut Lonzer
Jonathan Wilkie
Redbooks
IBM Redbooks
October 2024
SG24-8561-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xv.
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
6.1.2 Using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.1.3 Recommended actions and fix procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.1.4 Storage Virtualize failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.1.5 Using the command-line interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.2 Collecting diagnostic data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.2.1 IBM Storage Virtualize systems data collection . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.2.2 Drive data collection: drivedumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.2.3 Host multipath software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.2.4 More data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Figures
5-14 Add a note or attachment window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5-15 Selecting a Severity Level window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5-16 Review the ticket window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5-17 Update ticket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5-18 View tickets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5-19 Adding a log package to the ticket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5-20 Confirming the log upload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5-21 Log upload completed and processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5-22 IBM Storage Insights Pro and IBM Flash Grid integration. . . . . . . . . . . . . . . . . . . . 105
6-1 Events icon in the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6-2 System Health expanded section in the dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6-3 Recommended actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6-4 Monitoring → Events window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6-5 Properties and Sense Data for an event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6-6 Upload Support Package details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Tables

Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
Trademarks

The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
Redbooks (logo)®, AIX®, DS8000®, Easy Tier®, FlashCopy®, HyperSwap®, IBM®, IBM Cloud®,
IBM FlashCore®, IBM FlashSystem®, IBM Research®, IBM Spectrum®, PowerHA®, Redbooks®
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Red Hat and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United
States and other countries.
VMware, VMware vSphere, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or
its subsidiaries in the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM® Storage Virtualize (formerly IBM Spectrum® Virtualize) can simplify infrastructure
management for block storage across diverse workloads on-premises, off-premises, or in
hybrid cloud environments. This core offering of the IBM Storage portfolio enables rapid
deployment and streamlines management for SAN Volume Controller and IBM FlashSystem®
systems, including support for hybrid multicloud deployments.
This IBM Redbooks® publication focuses on IBM Storage Virtualize Version 8.7 and guides users
through new features, upgrades, and configuration for both new and existing systems. It is
intended for pre-sales and post-sales technical support personnel and storage administrators.
Authors
This book was produced by a team of specialists from around the world.
Lucy Harris, Evelyn Perez, Chris Bulmer, Chris Canto, Daniel Dent, Bill Passingham,
Nolan Rogers, David Seager, Russell Kinmond
IBM UK
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an IBM Redbooks residency project and help write a book
in your area of expertise, while honing your experience using leading-edge technologies. Your
efforts will help to increase product acceptance and customer satisfaction, as you expand
your network of technical contacts and relationships. Residencies run from two to six weeks
in length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1
1.1.1 Overview
IBM Storage Virtualize (formerly IBM Spectrum Virtualize) can simplify managing block
storage for various workloads, on-premises or in the cloud. It runs on IBM FlashSystem and
SAN Volume Controller hardware, offering data protection, rapid cloud deployment, and
performance for analytics. IBM Storage Virtualize also provides a way to manage and protect
the huge volumes of data that mobile and social applications generate, enables rapid and
flexible cloud service deployments, and delivers the performance and scalability that is
needed to gain insights from the latest analytics technologies.
Note: For more information, see IBM Storage FlashSystem and IBM SAN Volume
Controller.
With the introduction of the IBM Storage family, the software that runs on IBM SAN Volume
Controller and on IBM Storage FlashSystem (IBM FlashSystem) products is called
IBM Storage Virtualize. The name of the underlying hardware platform is not changed.
– Enables supported storage to be deployed with Kubernetes and Docker container
environments, including Red Hat OpenShift
– Consolidates storage, regardless of the hardware vendor, for simplified management,
consistent functions, and greater efficiency
– Supports common capabilities across storage types, which provides flexibility in storage
acquisition by allowing a mix of vendors in the storage infrastructure
Note: These benefits are a subset of the list of features and functions that are available
with IBM Storage Virtualize software.
Figure 1-1 shows the current IBM FlashSystem and IBM SAN Volume Controller Family.
Note: IBM Storage Virtualize for Public Cloud is not currently supported on IBM Storage
Virtualize V8.7. This function is planned for a future release.
Note: This edition of this IBM Redbooks publication covers systems that can run IBM Storage
Virtualize V8.7. Some products that are listed in the book are no longer sold by IBM but
can still run the V8.7 software. Where this is applicable, it is mentioned in the text.
Table 1-2 shows the IBM Storage FlashSystem Family feature summary and comparison, for
currently marketed products.
Table 1-2 IBM FlashSystem current products feature summary comparison chart

SAN Volume Controller (SVC):
– Controller models: SA2 (no drives), SV3 (no drives)
– Expansion models: N/A
– Processors: 2 Intel Xeon CPUs (SV3: 24 cores each; SA2: 8 cores each)
– Height: 2U
– Connectivity (standard): N/A
– Max ports: 12
– Warranty and support: 2145 Enterprise Class Support and a one-year warranty; 2147 Enterprise Class Support and a three-year warranty

IBM FlashSystem 5015:
– Controller models: 2P2 (12-drive), 2P4 (24-drive)
– Expansion models: 12H (12-drive), 24H (24-drive), 92H (92-drive)
– Processors: 2 Intel Xeon CPUs, 2 cores each
– Height: 2U
– Connectivity (standard): 1 Gb/s iSCSI
– Max ports: 8
– Warranty and support: One year 9x5 standard; 1–5 Expert Care Basic, Advanced, or Premium

IBM FlashSystem 5045:
– Controller models: 3P2 (12-drive), 3P4 (24-drive)
– Expansion models: 12H (12-drive), 24H (24-drive), 92H (92-drive)
– Processors: 2 Intel Xeon CPUs, 6 cores each
– Height: 2U
– Connectivity (standard): 10 Gb/s iSCSI
– Max ports: 8
– Warranty and support: One year 9x5 standard; 1–5 Expert Care Basic, Advanced, or Premium

IBM FlashSystem 5300:
– Controller models: 7H2 (12-drive)
– Expansion models: 12G (12-drive), 24G (24-drive), 92G (92-drive)
– Processors: 2 Intel Xeon CPUs, 12 cores each
– Height: 1U
– Connectivity (standard): 25/10 Gb/s iSCSI or NVMe/TCP
– Max ports: 16
– Warranty and support: One year 9x5 standard; 1–5 Expert Care Basic, Advanced, or Premium

IBM FlashSystem 7300:
– Controller models: 924 (24-drive)
– Expansion models: 12G (12-drive), 24G (24-drive), 92G (92-drive)
– Processors: 2 Intel Xeon CPUs, 10 cores each
– Height: 2U
– Connectivity (standard): 10 Gb/s iSCSI
– Max ports: 24
– Warranty and support: One year 9x5 standard; 1–5 Expert Care Basic, Advanced, or Premium

IBM FlashSystem 9500:
– Controller models: AH8 (48-drive)
– Expansion models: AFF (24-drive), A9F (92-drive)
– Processors: 4 Intel Xeon CPUs, 24 cores each
– Height: 4U
– Connectivity (standard): N/A
– Max ports: 48
– Warranty and support: One year 24x7 standard; 1–5 Expert Care Advanced or Premium
However, storage administrators often lack complete visibility into volume usage. Applications
and different teams might employ volumes for diverse purposes, often resorting to cryptic
volume names that don't reflect the actual use case. This lack of clear information hinders
efficient storage management and ransomware detection strategies. IBM Support can also
benefit from understanding which file systems are in each volume in some recovery
scenarios.
IBM Storage Virtualize V8.7.0 provides the following file level awareness for ransomware
detection:
1. Every 12 hours, the file system information is automatically refreshed for each volume. The
analysis can also be triggered by the analyzevdisk or analyzevdiskbysystem CLI commands
(see the sketch after this list).
2. Background reads are sent to a volume.
3. Open-source libraries are used to determine the file system type.
4. The output is displayed in the file_system field of the lsvdiskanalysis command:
– The field is limited to 15 characters.
– Multiple file systems can be displayed.
5. The detected file system is used by the inferencing engine to improve ransomware detection.
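As an illustration, the following hedged CLI sketch shows how an analysis might be triggered for a single volume and how the result can be checked. The volume name vdisk0, the grep filtering, and the output line are assumptions for illustration only; the actual output depends on your configuration:

analyzevdisk vdisk0
lsvdiskanalysis vdisk0 | grep file_system
file_system NTFS

The analyzevdiskbysystem command can be used instead to queue the analysis for all volumes on the system.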
With these statistics, IBM Storage Insights builds a historical model of a storage system and
uses its built-in intelligence and formulas to identify when and where ransomware attacks
might be occurring. For more information about statistics, see IBM Storage Insights.
Note: For more information, see the blog post IBM Storage Virtualize 8.7.0 including Flash
Grid by Barry Whyte and Andrew Martin, which provides a good overview of the new
features in Version 8.7.
Figure 1-2 Async policy-based replication and partition base HA user experience improvements
Figure 1-3 Volume group tile and assigning ownership groups to volume groups
Figure 1-4 shows the newly designed GUI performance panel.
The current I/O group structure presents several limitations that hinder performance
scalability and flexibility:
Limited scalability. A maximum of four I/O groups restricts the overall performance
achievable by a single system.
Hardware compatibility challenges. Compatibility requirements between I/O groups
complicate hardware upgrades.
Disruptive upgrades. System-wide upgrades are needed for both software and hardware,
leading to downtime and complexity.
Nonlinear object limits. Volume, snapshot, and host counts are limited per system, not per
I/O group, hindering scalability.
Feature restrictions. Several advanced features, such as policy-based HA, vVol
replication, and storage partitions, are only available on single I/O group systems.
Flash Grid addresses these limitations by offering a more granular and flexible approach to
storage management. Storage Virtualize 8.7.0 is the first phase of this implementation and
includes the following key features:
CLI-driven Flash Grid management. Initial configuration and management are primarily
done through the command-line interface (CLI). The CLI uses AI-assisted storage partition
Note: IBM plans to include Flash Grid implementation, monitoring, and management in
the GUI in a future release. There is also a plan to integrate more closely with IBM Storage
Insights to provide AI-capable operations for storage partition migration across systems in
the Flash Grid.
A patch is a small update to a function or service that can be installed on a user’s system. A
patch install never requires a node reboot or reset.
Note: A patch installation might restart a Linux service when installed. It can be installed
on all platform types and is small in size.
A process for creating and publishing patches is already in place on Storage Virtualize 8.6.0.
When developers identify an issue, they create patches such as bug fixes and security
updates to address issues in Storage Virtualize. Patches are published on IBM Fix Central.
IBM Cloud® Call Home is used to access patches. Newer versions of IBM Storage Virtualize
code can include older patches that were released in previous Storage Virtualize versions.
Automatic Patch Updating provides the following benefits to clients' systems:
You can use the enhanced patching framework to schedule automatic patch updates for your
Storage Virtualize systems. This eliminates the need for complex full Program Temporary
Fix (PTF) or concurrent code upgrades and can save you time and effort.
It benefits users whose systems have patches that might need frequent updates.
Note: An example might be Ransomware Threat Detection, where the inference data
files might be regularly changed.
Users can configure their systems and know that updates of vital patches happen in the
background.
Automatic Patch Updating can be configured on a user’s system by using either the GUI or
CLI commands.
After it is configured, automatic patching performs daily checks on IBM Fix Central. If any
selected patches are available for download, they are automatically downloaded and applied
to your system.
Important: Automatic Patch Updating uses IBM Cloud Call Home to access patch
information and lists. Therefore, a functioning IBM Cloud Call Home is a prerequisite
before you configure automatic updates.
With Automatic Drive Firmware Download, any FCM field-replaceable unit (FRU)
replacement or additional drive that is added to your array is automatically updated so that it
is compatible with your system's firmware.
Example scenario
Consider the following scenario:
1. A user has an FCM4 array that uses firmware version 4.1.4. A drive fails and requires
replacement.
2. A FRU arrives with version 4.0.4. The user performs the Dynamic Drive Pool operation
and replaces the failed drive with the replacement FRU.
3. As the drive attempts to rejoin the array, Automatic Drive Firmware Download verifies
that version 4.1.4 is available and upgrades the drive, as sketched after this scenario.
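For illustration, a hedged way to confirm the firmware level of the replaced drive from the CLI is shown below; the drive ID 5 and the reported level are example values only:

lsdrive 5 | grep firmware_level
firmware_level 4.1.4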
Note: Remote Copy is supported on V8.7.0 if the hardware has a valid support contract.
This also includes the following functions:
Global Mirror
Global Mirror with Change Volumes
HyperSwap
Metro Mirror
Migration relationships
HyperSwap and Metro Mirror 3-site solutions
Important: Entry-level IBM FlashSystem 5015 and 5035 do not have replication capabilities if
upgraded beyond V8.7.0.
Storage Virtualize V8.7.0 and later includes the following changes to remote copy support:
Global Mirror and Global Mirror with Change Volumes are replaced by policy-based
replication. For more information, see Migrating to Safeguarded snapshots.
HyperSwap is replaced by policy-based HA.
Migration relationships are replaced by storage partition migration.
1.4 Preparation and upgrading to IBM Storage Virtualize V8.7.0
To run IBM Storage Virtualize V8.7.0 on your selected hardware, there are some tasks and
checks that need to be done before implementing this level of IBM Storage Virtualize
software.
Figure 1-5 shows the matrix of supported hardware versus the IBM Storage Virtualize
software levels.
The “from” level is your current IBM Storage Virtualize software level and the “to” level is
IBM Storage Virtualize 8.7.0.
Examine the matrix in Figure 1-5 and confirm your IBM Storage Virtualize hardware can
upgrade to the IBM Storage Virtualize 8.7.0 level.
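To confirm the "from" level, the current code level can be queried from the CLI. This is a hedged sketch; the level and build string shown are placeholders only:

lssystem | grep code_level
code_level 8.6.0.4 (build ...)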
Also, there are limits with some features in IBM Storage Virtualize 8.7.0 that might be
applicable to your configuration. Ensure that you understand these limits before upgrading to V8.7.0.
For the specific steps of the upgrade process including the prechecks and code download,
see Software update.
Also, verify that the drive firmware on your IBM FlashSystem is at the latest level. Updates to
drive firmware are not done during system updates. For more information, see Drive update.
Chapter 2
It also provides step-by-step instructions for the initial setup process and defines the baseline
system settings. These settings are typically applied during the implementation phase, which
is before volume creation and provisioning.
Note: IBM SAN Volume Controller nodes need enough time to charge their batteries.
The recharge time depends on how long the node was idle in stock before being put into
production. You cannot start the nodes without a fully charged battery.
The web browser that is used for managing the system is supported by the management
GUI. For the list of supported browsers, see Management GUI.
The required information for remote management of the system is available:
– The IPv4 (or IPv6) addresses that are assigned for the system’s management
interfaces:
• The unique cluster IP address, which is the address that is used for the
management of the system.
• Unique service IP addresses, which are used to access node service interfaces.
You need one address for each IBM SAN Volume Controller node or
IBM FlashSystem node (two per control enclosure).
• The IP subnet mask for each subnet that is used.
• The IP gateway for each subnet that is used.
– The licenses that might be required to use specific functions. Whether these licenses
are required depends on the hardware that is used. For more information, see
Licensed functions.
– Information that is used by a system when performing Call Home functions:
• The company name and system installation address.
• The name, email address, and phone number of the storage administrator whom
IBM can contact if necessary.
– The following information is optional:
• The Network Time Protocol (NTP) server IP address
• The Simple Mail Transfer Protocol (SMTP) server IP address, which is necessary if
you want to enable Call Home or want to be notified about system events through
email
• The IP addresses for Remote Support Proxy Servers, which are required only if you
want to use them with the Remote Support Assistance feature
Note: IBM FlashSystem 9500 and IBM SAN Volume Controller are installed by an
IBM System Services Representative (IBM SSR). Provide all the necessary information to
the IBM SSR by completing the following planning worksheets:
Planning worksheets for IBM FlashSystems
Planning worksheets for IBM SAN Volume Controller
After the IBM SSR completes their portion of the setup, see 2.3, “System setup” on
page 24 to continue the setup process.
You can view the following demonstration videos. Although the videos are based on
IBM Storage Virtualize V8.6, they are still applicable to V8.7.
IBM Storage Virtualize V8.6 Initial setup: SSR configuration tasks
IBM Storage Virtualize V8.6 Initial setup: Customer configuration tasks
IBM Storage Virtualize V8.6 Initial setup: Setting up a cluster from the service IP
On IBM FlashSystem 5015, the technician port is enabled initially. However, the port is
switched to internet Small Computer Systems Interface (iSCSI) host attachment mode after
the setup wizard is complete.
To re-enable an onboard Ethernet port on a system to be used as the technician port, refer to
the command shown in Example 2-1.
Example 2-1 Reenabling the onboard Ethernet port 2 as the technician port
IBM_IBM FlashSystem 9100:superuser>satask chserviceip -techport enable -force
The location of the technician port of an IBM FlashSystem 7300 is shown in Figure 2-2.
The location of the technician port of an IBM FlashSystem 5200 is shown in Figure 2-3.
The location of the technician port of an IBM FlashSystem 5300 is shown in Figure 2-4.
The location of the technician port of an IBM FlashSystem 5045 is shown in Figure 2-5.
The location of the technician port of an IBM FlashSystem 5015 is shown in Figure 2-6.
The location of a technician port on the IBM SAN Volume Controller 2145-SV3 is shown in
Figure 2-7.
The location of a technician port on the IBM SAN Volume Controller 2145-SV2 is shown in
Figure 2-8.
The technician port runs an IPv4 DHCP server, and it can assign an address to any device
that is connected to this port. Ensure that your workstation Ethernet adapter is configured to
use a DHCP client if you want the IP to be assigned automatically.
If you prefer not to use DHCP, you can set a static IP on the Ethernet port from the
192.168.0.x/24 subnet; for example, 192.168.0.2 with the netmask 255.255.255.0.
The default IP address of a technician port on a node canister is 192.168.0.1. Do not use this
IP address for your workstation.
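If you configure the address manually on a Linux workstation, the following sketch shows one way to do it with the iproute2 tools; the interface name eth0 is an assumption and differs per system:

# Assign a static address from the technician port subnet (never 192.168.0.1)
ip addr add 192.168.0.2/24 dev eth0
ip link set eth0 up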
Note: Ensure that the technician port is not connected to the organization’s network. No
Ethernet switches or hubs are supported on this port.
During initialization, the nodes within a single control enclosure are joined into a cluster. This
cluster is later configured to process data. For an IBM SAN Volume Controller system, the
cluster initially consists of only one node.
If your system has multiple control enclosures or IBM SAN Volume Controller nodes, initialize
only the first enclosure or node. The remaining enclosures or nodes can be added to the
cluster later by using the cluster management interface (GUI or CLI) after the initial setup.
During initialization, you must specify an IPv4 or IPv6 system management address. This
address is assigned to Ethernet port 1 on each node and is used to access the management
GUI and CLI. You can configure additional IP addresses after the system is initialized.
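Initialization can also be performed from the service CLI instead of the wizard. The following is a hedged sketch of the satask mkcluster command run against one node canister only; the addresses are examples, and the exact parameters should be checked against your code level:

satask mkcluster -clusterip 192.168.100.20 -gw 192.168.100.1 -mask 255.255.255.0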
Note: Do not perform the system initialization procedure on more than one node canister
of one control enclosure. After initialization is done, use the management GUI or CLI to
add control enclosures to the system.
Warnings about untrusted certificates: During system initialization, you might see
warnings about untrusted certificates. This happens because the system uses
self-signed certificates, which are not verified by a well-known authority.
However, if you are directly connected to the service interface, there is no intermediary
that might impersonate the system with a fake certificate. Therefore, you can safely
accept the certificates in this scenario.
If the system is not in a state that allows initialization, the system does not start the
System Initialization wizard, and you are redirected to the Service Assistant interface. Use
the displayed error codes to troubleshoot the problem.
3. If the system is not in a state that allows initialization, the window that is used to log in to
Service Assistant opens (Figure 2-9). Otherwise, the System Initialization wizard opens
(Figure 2-10). Enter the default superuser password of passw0rd and click Log in.
4. The System Initialization wizard shows the detected canisters, as shown in Figure 2-10.
Click Proceed to continue. This window is not shown for IBM SAN Volume Controller
nodes.
For IBM SAN Volume Controller systems, the initialization window might differ (see
Figure 2-12). You are likely to be prompted to add nodes directly, rather than enclosures.
If you select As an additional node in an existing system, you are directed to
disconnect from the technician port and use the system's GUI for further configuration.
Figure 2-12 System Initialization: Initialize the first IBM SAN Volume Controller node
6. Enter the management IP address information for the new system as shown in
Figure 2-13. Set the IP address, network mask, and gateway. Click Next.
7. A window that includes a restart timer opens (Figure 2-14). When the timeout is reached,
the window is updated to reflect success or failure. Failure occurs if the system is
disconnected from the network, which prevents the browser from updating with the
IBM FlashSystem web server.
Figure 2-14 System Initialization: Web-server restart timer counting down from 5 minutes
Follow the instructions, and direct your browser to the management IP address to access
the system GUI after you click Finish.
The System Setup wizard is available through both the management IP address and the
technician port.
The first time that you connect to the management GUI, you might be prompted to accept
untrusted certificates because the system certificates are self-signed. If your company policy
requests certificates that are signed by a trusted certificate authority (CA), you can install
them after you complete the System Setup.
Note: The default password for the superuser account is passw0rd (with the number
zero, not the uppercase letter O). The default password must be changed by using the
System Setup wizard or after the first CLI login. The new password cannot be set to the
default password.
Figure 2-16 Logging in for the first time
2. The Initial Setup starts with the Welcome page, as shown in Figure 2-17. Click Next.
On IBM FlashSystem 9500 systems and IBM SAN Volume Controller systems, an
IBM SSR configures Call Home during installation. Verify that all the entered data is
correct.
All IBM FlashSystem products and IBM SAN Volume Controller systems support the
following methods of sending Call Home notifications to IBM:
– Cloud Call Home
– Call Home with email notifications
Cloud Call Home is the default and preferred option for a system to report event
notifications to IBM Support. With this method, the system uses RESTful application
programming interfaces (APIs) to connect to an IBM centralized file repository that
contains troubleshooting information that is gathered from customers. This method
requires no extra configuration.
The system can also be configured to use email notifications for this purpose. If this
method is selected, you are prompted to enter the SMTP server IP address.
If both methods are enabled, Cloud Call Home is used, and the email notifications method
is kept as a backup.
If either of these methods is selected, the system location and contact information must be
entered. This information is used by IBM to provide technical support. All fields in the form
must be completed. In this step, the system also verifies that it can contact the Cloud Call
Home servers.
4. Click Next to enter the Transmission Type for Call Home.
5. Select which transmission types to use for Call Home. See Figure 2-19.
6. Select your choice. In the example, Send using Cloud services is selected. Click Apply
and Next to set up the Internal Proxy Server. See Figure 2-20 on page 28.
7. Enter the requested information. After you set up the Proxy Server, the system checks the
connection to the Support Center. See Figure 2-21 on page 28.
8. Enter all required information for the System Location.
9. Click Next and enter the Contact information.
Figure 2-22 shows the panel for the contact information. Use the Company contact
information to comply with privacy regulations. IBM might use the contact data if you allow
it.
10.To complete the registration, click Apply and Next.
11.Review the Summary information. If all is correct, click Finish. See Figure 2-23 on
page 30.
12.The system saves the entered information, and you are prompted to log in again. After
login, you are guided to the System Setup page. See Figure 2-24 on page 30.
13.Click Next to view the License Agreement page. Read the license agreement. Select I
agree with the terms in the license agreement if you want to continue the setup.
Otherwise, the system stops the setup. See Figure 2-25.
14.You are prompted to change the password as shown in Figure 2-26 on page 32. Enter a
new password for superuser. A valid password is 8 to 64 characters and cannot begin or
end with a space. Also, the password cannot be set to match the default password.
Note: All configuration changes that are made by using the System Setup wizard are
applied immediately, including the password change. The user sees the system running
commands during the System Setup wizard.
Note: In a 3-Site Replication solution, ensure that the system name is unique for all
three clusters when you prepare the IBM Storage Virtualize clusters at Master,
AuxNear, and AuxFar sites to work. The system names must remain different for the life
of the 3-site configuration.
For more information about 3-Site Replication, see IBM Spectrum Virtualize 3-Site
Replication, SG24-8504.
If required, the system name can be changed by running the chsystem -name
<new_system_name> command. The system can also be renamed in the management GUI
by clicking Monitoring → System Hardware and selecting System Actions → Rename
System.
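For example, a hedged CLI sketch of renaming the system and verifying the result; the name ITSO-FS9500 is illustrative only:

IBM_IBM FlashSystem:superuser>chsystem -name ITSO-FS9500
IBM_IBM FlashSystem:ITSO-FS9500:superuser>lssystem | grep ^name
name ITSO-FS9500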
Figure 2-27 System Name
Figure 2-29 DNS Server setup
Note: When encryption is enabled on the system, encrypted storage pools can be
created. If the system is a single control enclosure system where all FCM-drives should
be in the same storage pool, encryption must be enabled before creating the storage
pool. If a storage pool is created before encryption is enabled, any data in that pool
must be migrated to an encrypted storage pool, if the data must be encrypted.
If you purchased the encryption feature, you are prompted to activate your license
manually or automatically. The encryption license is key-based and required for each
control enclosure.
You can use automatic activation if the workstation that you use to connect to the GUI and
run the System Setup wizard has Internet access. If no Internet connection is available,
use manual activation and follow the instructions.
25.After the encryption license is activated, you see a green checkmark for each enclosure,
as shown in Figure 2-32 on page 37. After all the control enclosures show that encryption
is licensed, click Next.
26.If you want to modify your previously entered Call Home settings, you can do so here. See
Figure 2-33 on page 37.
Figure 2-32 Encryption Licensed
With the Support Assistance feature, you allow IBM Support to perform maintenance
tasks on your system with support personnel onsite or remotely.
If an IBM SSR is onsite, the SSR can log in locally with your permission and a special user
ID and password so that a superuser password does not need to be shared with the
IBM SSR.
You can also enable Support Assistance with remote support to allow IBM Support
personnel to log in remotely to the machine with your permission through a secure tunnel
over the Internet.
If you allow remote support, you are provided with the IP addresses and ports of the
remote support centers and an opportunity to provide proxy server details (if required) to
allow the connectivity, as shown in Figure 2-35 on page 39. Click Apply and Next.
28.You can also allow remote connectivity at any time or only after obtaining permission from
the storage administrator, as shown in Figure 2-36 on page 39.
Figure 2-35 System communicating with named IBM Support servers
For more information about how to enable Automatic configuration for IBM SAN Volume
Controller on a running system after the System Setup wizard, see 2.3.7, “Automatic
configuration for IBM SAN Volume Controller back-end storage” on page 55.
31.On the Summary page, the settings that were selected by the System Setup wizard are
shown. If corrections are needed, you can return to a previous step by clicking Back.
Otherwise, click Finish to complete the System Setup wizard, as shown in Figure 2-38 on
page 41.
Figure 2-38 Summary Page
When the system setup wizard is done, your IBM FlashSystem consists of only the control
enclosure that includes the node canister that you used to initialize the system and its partner,
and the expansion enclosures that are attached to them.
When you set up an IBM SAN Volume Controller, your system consists of only one node in
the cluster, which might see other candidate nodes in the service GUI if they are connected to
SAN and zoned together.
If you have other control and expansion enclosures or IBM SAN Volume Controller nodes, you
must add them to complete the System Setup.
For more information about how to add a control or expansion enclosure, see 2.3.2, “Adding
an enclosure in IBM FlashSystem” on page 43.
For more information about how to add a node or hot spare node, see 2.3.3, “Adding a node
or hot spare node in IBM SAN Volume Controller systems” on page 45.
If no other enclosures or nodes are to be added to this system, the System Setup process is
complete and you can click Finish to be returned to the login window of the
IBM FlashSystem.
All the required steps of the initial configuration are complete. If needed, you can configure
other global functions, such as system topology, user authentication, or local port masking
before configuring the volumes and provisioning them to hosts.
34.Clicking Close and Finish takes you to the Dashboard (see Figure 2-41).
The tasks that are described next are used to define global system configuration settings.
Often, they are performed during the System Setup process. However, they can also be
performed later, such as when the system is expanded or the system environment is
reconfigured.
Before beginning this process, ensure that the new control enclosure is correctly installed and
cabled to the system.
For FC node-to-node communication, verify that the correct SAN zoning is set.
For node-to-node communication over RDMA-capable Ethernet ports, ensure that the IP
addresses are configured and a connection between nodes can be established.
2. Click Add Enclosure, and a list of available candidate enclosures opens, as shown in
Figure 2-43. To light the Identify light-emitting diode (LED) on a selected enclosure, select
Actions → Identify. When the required enclosure (or enclosures) is chosen, click Next.
3. Review the summary in the next window and click Finish to add the expansion enclosure
or the control enclosure and all expansions that are attached to it to the system.
Note: When a new control enclosure is added, the software version that is running on
its nodes is upgraded or rolled back to match the system software version. This process
can take up to 30 minutes or more, and the enclosure is added only when this process
completes.
4. After the control enclosure is successfully added to the system, a success message
appears. Click Close to return to the System Overview window and check that the new
enclosure is visible and available for management.
IBM_IBM FlashSystem:ITSO-FS9500:superuser>lsiogrp
id name node_count vdisk_count host_count site_id site_name
0 io_grp0 2 0 0
1 io_grp1 0 0 0
2 io_grp2 0 0 0
3 io_grp3 0 0 0
4 recovery_io_grp 0 0 0
2. To list control enclosures that are available to add, run the lscontrolenclosurecandidate
command, as shown in Example 2-3. To list the expansion enclosures, run the
lsenclosure command. Expansions that have the managed parameter set to no can be
added.
4. To add an expansion enclosure, change its managed status to yes by running the
chenclosure command, as shown in Example 2-5.
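A hedged sketch of that flow, where the enclosure ID 2 is an example only and the command output is omitted:

IBM_IBM FlashSystem:ITSO-FS9500:superuser>lsenclosure
IBM_IBM FlashSystem:ITSO-FS9500:superuser>chenclosure -managed yes 2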
2.3.3 Adding a node or hot spare node in IBM SAN Volume Controller systems
This procedure is the same whether you are configuring the system for the first time or
expanding it later. The same process is used to add a node to an I/O group, or a hot spare
node.
Before beginning this process, ensure that the new control enclosure is correctly installed and
cabled to the system.
For FC node-to-node communication, verify that the correct SAN zoning is set.
For node-to-node communication over RDMA-capable Ethernet ports, ensure that the IP
addresses are configured and a connection between nodes can be established.
Note: If the Add Node button does not appear, review the installation instructions to
verify that the new node is connected and set up correctly.
2. Click Add Node. A form that you can use to assign nodes to I/O groups opens, as shown
in Figure 2-45. To illuminate the Identify LED on a node, click the LED icon that is next to a
node name. When the required node or nodes is selected, click Finish.
The Monitoring → Systems Hardware window changes and shows that the node is added,
as shown in Figure 2-46 on page 47.
Figure 2-46 IBM SAN Volume Controller is adding node to the cluster
Note: When a node is added, the software version that is running is upgraded or rolled
back to match the cluster software version. This process can take 30 minutes or more
to complete. The node is added only after this process finishes.
2. To list nodes that are available to add to the I/O group, run the lsnodecandidate command,
as shown in Example 2-7.
3. Add a node by running the addnode command. The command in Example 2-8 adds a node
as a spare. The command starts in the background and can take 30 minutes or more.
In Example 2-9 the addnode command is used to add a node to I/O group io_grp1.
4. List the nodes in the system by using CLI. As shown in Example 2-10 on page 49, the
IBM SAN Volume Controller is configured with two nodes, which forms one IO-group. A
spare node is configured for the IO-group.
Example 2-10 Single IO-group (two nodes) and one spare
IBM_2145:ITSO-SVC:superuser>lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node
UPS_unique_id hardware iscsi_name iscsi_alias panel_name
enclosure_id canister_id enclosure_serial_number site_id site_name
1 node1_78KKLC0 500507680C00D990 online 0 io_grp0 yes
SV1 iqn.1986-03.com.ibm:2145.itso-svc.node178kklc0 78KKLC0
2 node2_78KKCD0 500507680C00D982 online 0 io_grp0 no
SV1 iqn.1986-03.com.ibm:2145.itso-svc.node278kkcd0 78KKCD0
3 spare1 500507680C00D98F spare no
SV1 78KKLD0
The administrator might want to rename the nodes so that they have consistent names. This
can be done by clicking Monitoring → System Hardware → Node Actions → Rename.
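Renaming can also be done from the CLI with the chnode command. A hedged sketch, where the node ID and the new name are examples only:

IBM_2145:ITSO-SVC:superuser>chnode -name ITSO_node1 1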
From a storage perspective, business continuity involves maintaining data consistency and
availability for uninterrupted application access, achieved through two key concepts: disaster
recovery (DR) and high availability (HA). DR focuses on replicating data to remote locations
for recovery. HA prioritizes continuous data accessibility.
Disasters can range from entire site outages to data corruption or theft. Data protection relies
on local or remote backups. IBM Storage Virtualize offers functionalities to safeguard your
data against various threats, such as hardware failures, software errors, or cyberattacks.
Policy-based replication and policy-based high availability protect against site failures by
automatically failing over to a secondary site, helping ensure business continuity. Although it
is not covered here, Storage Virtualize offers additional features such as snapshots and
Safeguarded snapshots to protect against data corruption or cyberattacks.
Note: Policy-based high availability is not supported by the IBM FlashSystem 5015.
For more information about this topic, refer to IBM Redbooks Ensuring Business Continuity: A
Practical Guide to Policy-Based Replication and Policy-Based High Availability for
IBM Storage Virtualize Systems, SG24-8569.
One of these items is selected for the active quorum role, which is used to resolve failure
scenarios where half the nodes on the system become unavailable or a link between
enclosures is disrupted. The active quorum determines which nodes can continue processing
host operations. It also avoids a “split brain” condition, which occurs when both halves of the
system continue I/O processing independently of each other.
For IBM FlashSystem products with a single control enclosure and IBM SAN Volume
Controller systems with a standard topology, quorum devices are automatically selected from
the internal drives or assigned from an MDisk, respectively. No special configuration actions
are required. This function also applies for IBM FlashSystem products with multiple control
enclosures, a standard topology, and virtualizing external storage.
Without a third arbitration site (quorum server), a tie-breaker mechanism must be chosen for
the two existing sites. During a network outage between the sites, the pre-configured winner
continues operating and processing I/O requests. The loser site is unavailable until the
connection is restored. IP quorum settings, within the configuration options, determine the
preferred site for handling these scenarios. If a site outage occurs at the winning site, the
system stops processing I/O requests until this site is recovered or the manual quorum
override procedure is used.
On IBM FlashSystem products in a standard topology system with two or more control
enclosures and no external storage, none of the internal drives can be the active quorum
device. For such configurations, it is a best practice to deploy an IP-based quorum application
to avoid a “split brain” condition.
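To see which devices currently hold the quorum role and which one is active, the lsquorum command can be used. This is a hedged sketch; the column layout, IDs, and object types shown are illustrative only:

IBM_IBM FlashSystem:ITSO-FS9500:superuser>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 3 yes drive no
1 online 7 no drive no
2 online 11 no drive no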
Figure 2-48 Download IPv4 quorum button
2. Click Download... and a window opens, as shown in Figure 2-49. It provides an option to
create an IP application that is used for tie-breaking only, or an application that can be
used as a tie-breaker and to store recovery metadata.
An application that does not store recovery metadata requires less channel bandwidth for
a link between the system and the quorum app, which might be a decision-making factor
for using a multi-site HA system.
For a full list of IP quorum app requirements, see IP quorum application configuration.
3. Click OK. The ip_quorum.jar file is created. Save the file and transfer it to a supported
AIX®, Linux, or Windows host that can establish an IP connection to the service IP
address of each system node. Move it to a separate directory and start the application, as
shown in Example 2-12.
Example 2-12 Starting the IP quorum application on the Windows operating system
C:\IPQuorum>java -jar ip_quorum.jar
=== IP quorum ===
Name set to null.
Successfully parsed the configuration, found 2 nodes.
Trying to open socket
Trying to open socket
Note: Add the IP quorum application to the list of auto-started applications at each start
or restart or configure your operating system to run it as an auto-started service in the
background. The server hosting the IP quorum application must reside within the same
network subnet as the IBM FlashSystem for proper communication. Up to five IP
quorums can be deployed in your environment.
The IP quorum log file and recovery metadata are stored in the same directory with the
ip_quorum.jar file.
4. Check that the IP quorum application is successfully connected and running by verifying
its online status by selecting Settings → System → IP Quorum, as shown in Figure 2-50.
The Preferred quorum mode is supported by an IP quorum only.
To set a quorum mode, select Settings → System → IP Quorum and then click Quorum
Setting. The Quorum Setting window opens, as shown in Figure 2-51.
To set the FC port mask by using the GUI, complete the following steps:
1. Select Settings → Network → Fibre Channel Ports. In a displayed list of FC ports, the
ports are grouped by a system port ID. Each port is configured identically across all nodes
in the system. You can click the arrow next to the port ID to expand a list and see which
node ports (N_Port) belong to the selected system port ID and their worldwide port names
(WWPNs).
2. Right-click a system port ID that you want to change and select Modify Connection, as
shown in Figure 2-52 on page 54.
By default, all system ports can send and receive traffic of any kind, including the following
examples:
– Host traffic
– Traffic to virtualized back-end storage systems
– Local system traffic (node to node)
– Partner system (remote replication) traffic
The first two types are always allowed, and you can control them only with SAN zoning.
The other two types can be blocked by port masking.
3. In the Modify Connection dialog box (Figure 2-53), you can choose which type of traffic a
port can send. For example, Remote if the port is dedicated to Remote Replication traffic.
Click Modify when done.
None. Neither local nor remote system traffic is allowed, but system-to-host and
system-to-back-end storage communication still exists.
Port masks can also be set by using the CLI. Local and remote partner port masks are
internally represented as a string of zeros and ones. The last digit in the string represents port
one. The previous digits represent ports two, three, and so on.
If the digit for a port is set to 1, the port is enabled for the specific type of communication. If it
is set to 0, the system does not send or receive traffic that is controlled by a mask on the port.
To view the current port mask settings, run the lssystem command, as shown in
Example 2-13. The output shows that all system ports allow all types of traffic.
To set the localfcportmask for node to node traffic or the partnerfcportmask for remote
replication traffic, run the chsystem command. Example 2-14 shows the mask setting for a
system with four FC ports on each node and that has RC relationships. Masks are applied to
allow local node-to-node traffic only on ports 1 and 2, and replication traffic only on ports 3
and 4.
Example 2-14 Setting a local port mask by running the chsystem command
IBM_IBM FlashSystem:ITSO-FS9500:superuser>chsystem -localfcportmask 0011
IBM_IBM FlashSystem:ITSO-FS9500:superuser>chsystem -partnerfcportmask 1100
IBM_IBM FlashSystem:ITSO-FS9500:superuser>lssystem |grep mask
local_fc_port_mask 0000000000000000000000000000000000000000000000000000000000000011
partner_fc_port_mask 0000000000000000000000000000000000000000000000000000000000001100
The mask is extended with zeros, and all ports that are not set in a mask have the selected
type of traffic blocked.
Note: When replacing or upgrading your node hardware, consider that the number of FC
ports and their arrangement might be changed. If so, make sure that any configured port
masks are still valid for the new configuration.
Automatic Configuration for Virtualization is intended for a new system. If host, pool, or
volume objects are configured, all the user data must be migrated out of the system and
those objects must be deleted.
The Automatic Configuration for Virtualization wizard starts immediately after you complete
the initial setup wizard if you set Automatic Configuration to On.
2. You can add any control or expansion enclosures as part of the external storage to be
virtualized. If you do not have more enclosures to add, this part of the prerequisite steps
can be skipped.
Click Add Enclosure to add the enclosures, or click Skip to move to the next step (see
Figure 2-55).
Note: You can turn off the Automatic Configuration for Virtualization wizard at any step
by clicking the dotted symbol in the upper right corner.
3. The wizard checks whether the IBM SAN Volume Controller is correctly zoned to the
system. By default, newly installed systems run in N_Port ID Virtualization (NPIV) mode
(Target Port Mode). The system’s virtual (host) WWPNs must be zoned for IBM SAN
Volume Controller. On the IBM SAN Volume Controller side, physical WWPNs must be
zoned to a back-end system independently of the NPIV mode setting.
4. Create a host cluster object for IBM SAN Volume Controller. Each IBM SAN Volume
Controller node has its own worldwide node name (WWNN). Make sure to select all
WWNNs that belong to nodes of the same IBM SAN Volume Controller cluster.
Figure 2-56 shows that because the system detected an IBM SAN Volume Controller
cluster with dual I/O groups, four WWNNs are selected.
5. When all nodes of an IBM SAN Volume Controller cluster (including the spare cluster) are
selected, you can change the host object name for each one, as shown in Figure 2-57 on
page 58. For convenience, name the host objects to match the IBM SAN Volume
Controller node names or serial numbers.
6. Click Automatic Configuration and check the list of internal resources that are used, as
shown in Figure 2-58.
7. If the system uses compressed drives (FCM drives), you are prompted to enter your
expected compression ratio or the total capacity that is to be provisioned to IBM SAN
Volume Controller (Figure 2-59). If IBM SAN Volume Controller uses encryption or writes
data that is not compressible, set the ratio to 1:1 and then click Next.
8. Review the pool configuration (Figure 2-60) and click Proceed to apply the configuration.
10. You can export the system volume configuration data in .csv format by using this window
or anytime by selecting Settings → System → Automatic Configuration.
Chapter 3. Step-by-step configuration
This chapter describes the Storage Virtualize GUI, the steps needed for network
configuration, creating pools and assigning storage, configuring hosts, basic snapshots, and
asynchronous replication configuration.
Recommendation: It is a best practice for each user to have their own unique account.
The default user accounts can be disabled, or their passwords can be changed and kept secure for emergency use only. This approach helps to identify the personnel who are working on the systems and to track the important changes that they make. The superuser account is intended for initial configuration and for servicing the system only. For more information on user accounts, see Users.
Task menu
The IBM Storage Virtualize GUI task menu is always available on the left panel of the GUI
window. To browse by using this menu, click the action and choose a task that you want to
display.
Performance
This section provides important information about latency, bandwidth, input/output operations
per second (IOPS), and CPU usage. All this information can be viewed at the system or
canister levels. A Node comparison view shows the differences in characteristics of each
node. The performance graph is updated with new data every 5 seconds. The granularity of
the metrics can be adjusted from seconds to days. For more detailed performance charts,
select Monitoring → Performance.
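If you prefer the CLI, current performance samples can also be retrieved there. The following is an illustrative sketch only; the available statistic names, and whether the per-node command is lsnodecanisterstats or lsnodestats, depend on the platform and code level.
IBM_FlashSystem:ITSO:superuser>lssystemstats
IBM_FlashSystem:ITSO:superuser>lssystemstats -history cpu_pc
IBM_FlashSystem:ITSO:superuser>lsnodecanisterstats 1
The first command shows the latest system-wide sample, the second shows recent history for one statistic, and the third shows per-node values.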
Capacity
This section shows the current usage of attached storage. It also shows provisioned capacity
and capacity savings.
System Health
This section indicates the status of all critical system components, which are grouped in three
categories: Hardware Components, Logical Components, and Connectivity Components.
When you click Expand, each component is listed as a subgroup. You can then go directly to
the section of GUI where the component in which you are interested is managed.
You can also view node-to-node Ethernet connectivity, Fibre Channel connectivity, NVMe connectivity, and Fibre Channel ports.
For more information on configuring ports in a FlashSystem storage unit, refer to the
IBM Redpaper The Definitive Guide to FlashSystem 5300 Port Configuration, REDP-5734.
By connecting to a service IP address with a browser or SSH client, you can access the
Service Assistant Interface, which can be used for maintenance and service tasks. The
service IPs are also used for some system functions, for example to access a key server or IP
quorum or for remote support assistance.
On the next screen, select Add IP address to configure the IP address and add to a portset.
See Figure 3-3 on page 65.
Figure 3-3 Add IP address
3.2.4 Portsets
Portsets are groupings of logical addresses that are associated with the specific traffic types.
The system comes with one Fibre Channel and five Ethernet portsets defined. They are used
for host attachment, system management, remote copy, and back-end storage virtualization.
For more information, see Portsets.
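The same task can be performed from the CLI. The following minimal sketch assigns an IP address to a portset on one node port; the node, port, portset, VLAN, and address values are placeholders, and the exact mkip parameters should be verified for your code level.
IBM_FlashSystem:ITSO:superuser>lsportset
IBM_FlashSystem:ITSO:superuser>mkip -node node1 -port 5 -portset portset0 -ip 10.10.10.11 -prefix 24 -gw 10.10.10.1 -vlan 100
IBM_FlashSystem:ITSO:superuser>lsip
Running lsip afterward confirms which addresses are assigned to which portsets and ports.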
Storage pools aggregate internal and external capacity as managed disks (Mdisks) and
provide containers in which you can create volumes that can be mapped to host systems.
Storage pools help make it easier to dynamically allocate resources, maximize productivity,
and reduce costs.
A storage pool is created as an empty container with no storage assigned to it. Storage is
then added in the form of MDisks. An MDisk can be a redundant array of independent disks (RAID) that is built from internal drives, or a logical unit (LU) that is presented by an external storage system.
Arrays are assigned to storage pools at creation time. Arrays cannot exist outside of a storage pool, and they cannot be moved between storage pools. It is possible to delete an array by removing it from a pool and to re-create it within a new pool.
External MDisks can exist within or outside of a pool. The MDisk object remains on a system
if it is visible from external storage, but its access mode changes depending on whether it is
assigned to a pool.
Note: Provisioning policy does not change any parameters of volumes that already exist in
the pool when a policy is assigned. If you already have volumes in the pool, then after
assigning a provisioning policy, you might need to change the capacity savings settings of those volumes manually.
Child pools are created from capacity that is assigned to a parent pool instead of created
directly from MDisks. When a child pool is created from a standard pool, the capacity for a
child pool is reserved from the parent pool. This capacity is no longer reported as available
capacity of the parent pool. In terms of volume creation and management, child pools are
similar to parent pools. Child pools that are created from DRPs are quota-less. Their capacity
is not reserved but is shared with a parent pool.
DRPs use a set of techniques, such as compression and deduplication, that can reduce the
required amount of usable capacity to store data. Data reduction can increase storage
efficiency and performance, and reduce storage costs, especially for flash storage. These
techniques can be used in addition to compression on Flash Core Modules (FCMs).
In standard pools, there can be no compression on a pool layer, but data is still compressed
on the FCM layer if the pool contains drives with this technology. For more information, see
Pools.
The pool consists of only FCM4 drives with firmware 4.1 or higher configured in a single
DRAID6 array.
Each node contains at least 128 GB RAM.
Volumes are in a standard pool or fully allocated within a DRP.
Both alternatives open the dialog box that is shown in Figure 3-6.
2. Select the Data reduction option if you want to create a DRP. Leaving it cleared creates a standard storage pool.
The size of the extents is selected at creation time and cannot be changed later. The
extent size controls the maximum total storage capacity that is manageable per system
(across all pools). For DRPs, the extent size also controls the maximum pool stored
capacity per IO group. For more information, see V8.7.0.x Configuration Limits for
IBM FlashSystem and SAN Volume Controller.
Important: Do not create DRPs with small extent sizes. For more information, see this
IBM Support alert.
If an encryption license is installed and enabled, you can select whether the storage pool is
encrypted. The encryption setting of a storage pool is selected at creation time and cannot be
changed later. By default, if encryption is licensed and enabled, the encryption checkbox is selected.
Naming rules: When you choose a name for a pool, the following rules apply:
Names must begin with a letter.
The first character cannot be numerical.
The name can be a maximum of 63 characters.
Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9),
underscore (_), period (.), hyphen (-), and space.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a
volume that is named ABC and a storage pool that is called ABC, but not two storage
pools that are both called ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed at a later stage.
The new pool is created and is included in the list of storage pools. It has no storage in it, so
its capacity is zero. Storage in a form of disk arrays or externally-virtualized MDisks must be
assigned to the pool before volumes can be created.
Figure 3-7 Add Storage
This opens a new pane with a suggested RAID array configuration based on the installed drives. See Figure 3-8.
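For reference, the equivalent steps can be scripted with the CLI. The following is a minimal sketch that creates an encrypted data reduction pool and adds a distributed RAID 6 array to it. The pool name, extent size, drive class, and drive count are placeholders, and the available options depend on the installed drives and code level.
IBM_FlashSystem:ITSO:superuser>mkmdiskgrp -name Pool0 -ext 4096 -datareduction yes -encrypt yes
IBM_FlashSystem:ITSO:superuser>lsdriveclass
IBM_FlashSystem:ITSO:superuser>mkdistributedarray -level raid6 -driveclass 0 -drivecount 12 Pool0
The lsdriveclass output is used to pick the correct drive class ID before the array is created in the new pool.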
A key feature of the system is its ability to consolidate disk controllers from various vendors
into storage pools. The storage administrator can manage and provision storage to
applications from a single user interface and use a common set of advanced functions across
all of the storage systems under the control of the system.
This concept is called External Virtualization, which can make your storage environment
more flexible, more cost-effective, and easier to manage.
System layers
The system layer affects how the system interacts with other systems that run IBM Storage Virtualize software. To virtualize another system that runs IBM Storage Virtualize, the virtualizing system must be in the replication layer and the system that is being virtualized must be in the storage layer.
For detailed instructions on configuring an external storage system, review the External
storage documentation.
When external LUs are discovered by the IBM Storage Virtualize system, they are visible in
Pools → MDisks by pools under Unassigned MDisks. Select the MDisks that are to be added
to a pool and select Actions → Assign.
When you add MDisks to pools, you must assign them to the correct storage tiers. It is
important to set the tiers correctly if you plan to use the IBM Easy Tier® feature. The use of
an incorrect tier can mean that the Easy Tier algorithm might make wrong decisions and thus
affect system performance.
The storage tier setting can also be changed after the MDisk is assigned to the pool. For more
information, see Easy tier.
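These GUI steps map to a short CLI sequence. The following sketch discovers newly presented external LUs, assigns an unmanaged MDisk to a pool, and corrects its tier. The MDisk and pool names are placeholders, and the valid tier values depend on the code level.
IBM_FlashSystem:ITSO:superuser>detectmdisk
IBM_FlashSystem:ITSO:superuser>lsmdisk -filtervalue mode=unmanaged
IBM_FlashSystem:ITSO:superuser>addmdisk -mdisk mdisk8 Pool1
IBM_FlashSystem:ITSO:superuser>chmdisk -tier tier_enterprise mdisk8
Setting the tier immediately after assignment helps Easy Tier make correct placement decisions from the start.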
A child pool cannot be created within another child pool. Multiple child pools can be created
within a single parent pool.
Multiple child pools can be created from a single parent pool for different uses. Each child
pool can use a different provisioning policy. Child pools can also be linked to a remote pool for
policy-based replication. See Figure 3-9.
Child pools created from standard pools and child pools that are created from data reduction
pools have a significant difference:
A child pool with a standard pool as a parent has a type child_thick. Child pools of
Standard pools have a fixed capacity, which is taken, or reserved, from the parent pool.
Free capacity of a parent pool reduces when a child pool is created. Volumes in a child
pool of a standard pool cannot occupy more capacity than is assigned to the child pool.
A child pool with a DRP as a parent has the type child_quotaless. Quotaless child pools share their free and used capacity with the parent pool and do not have their own capacity limit.
Free capacity of a DRP does not change when a new quotaless child pool is created.
The capacity of a child_thick type pool is set at creation time, but can be modified later
nondisruptively. The capacity must be a multiple of the parent pool extent size and must be
smaller than the free capacity of the parent pool.
Child pools of a child_thick type can be used to implement the following configurations:
Limit the capacity that is allocated to a specific set of volumes
It can also be useful when strict control over thin-provisioned volume expansion is needed.
For example, you might create a child pool with no volumes in it to act as an emergency
set of extents so that if the parent pool uses all its free extents, you can use the ones from
the child pool.
As a container for VMware vSphere virtual volumes (VVOLs)
Data reduction pools are not supported as parent pools for VVOL storage.
Migrate volumes from a nonencrypted parent storage pool to encrypted child pools
When you create a child pool of type child_thick after encryption is enabled, an encryption
key is created for the child pool, even when the parent pool is not encrypted. You can then
use volume mirroring to migrate the volumes from the nonencrypted parent pool to the
encrypted child pool.
Encrypted child_quotaless type child pools can be created only if the parent pool is
encrypted. The data reduction child pool inherits an encryption key from the parent pool.
Select Pools → Pools. Right-click the parent pool that you want to create a child pool from
and select Create Child Pool. The Create Child Pool pane opens. See Figure 3-10.
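Child pools can also be created from the CLI with mkmdiskgrp and the -parentmdiskgrp parameter, as in the following sketch. The names and size are placeholders, and the size-related parameters should be checked against your code level. The first command creates a child_thick pool from a standard pool (a size is required); the second creates a quotaless child pool from a DRP (no size is specified).
IBM_FlashSystem:ITSO:superuser>mkmdiskgrp -name ChildPool0 -parentmdiskgrp Pool0 -size 500 -unit gb
IBM_FlashSystem:ITSO:superuser>mkmdiskgrp -name ChildDRP0 -parentmdiskgrp DRPool0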
This section describes how to create and provision volumes on IBM Storage Virtualize
systems. For more information on volumes and the various volume types, see Volumes.
Volume groups are distinct from consistency groups, although in some cases the underlying system might use a consistency group concept internally when managing volume groups.
To create a volume group select Volumes → Volume groups → Create Volume Group.
Note: If you plan to use policy-based replication, configure it and assign a replication policy to the volume group before you create volumes within the group. This way, the initial copy of replicated data to the remote site can be skipped.
On the next panel, you define the volume properties. However, if the pool has a pre-assigned
provisioning policy, the capacity savings option is locked and reflects the policy's settings.
There is a toggle on the screen for Advanced settings mode, which allows manual selection of
I/O group and preferred node parameters.
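A possible CLI equivalent is sketched below: it creates a volume group, creates a thin-provisioned volume in a pool, and then adds the volume to the group. The names and size are placeholders, and the -volumegroup parameter of chvdisk is an assumption to verify for your code level.
IBM_FlashSystem:ITSO:superuser>mkvolumegroup -name ProdVG
IBM_FlashSystem:ITSO:superuser>mkvolume -name vol001 -pool Pool0 -size 100 -unit gb -thin
IBM_FlashSystem:ITSO:superuser>chvdisk -volumegroup ProdVG vol001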
The newly created volumes automatically start formatting. This is a background process and
the volume is immediately available for host access. The default format speed is 2 MiB/s per
volume. If you want to increase the format rate for a volume, right-click the volume and select
Modify Mirror Sync Rate. Then choose the preferred rate.
It is possible to overuse the system’s resources by formatting too many volumes at too high a
rate. If you experience a system performance problem after you increase the mirror sync rate,
you can reduce it in the same manner.
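The same adjustment can be made from the CLI with chvdisk. In this sketch the volume name and rate are placeholders; the -syncrate value maps to a copy rate, so check the documented mapping before raising it on many volumes at once.
IBM_FlashSystem:ITSO:superuser>chvdisk -syncrate 80 vol001
IBM_FlashSystem:ITSO:superuser>lsvdisk vol001 |grep sync_rate
The second command confirms the new setting in the detailed volume view.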
Note: If you are using a system with IBM FlashCore® Modules, the data that is written to
the system is compressed automatically. There is no requirement to also create the
volumes as compressed.
For more information about configuring vVols with IBM Storage Virtualize, see IBM Storage
Virtualize and VMware: Integrations, Implementation and Best Practices, SG24-8549.
This section describes the processes that are required to attach a supported host system to
an IBM Storage Virtualize storage system through various supported interconnect protocols.
These hosts can connect to the storage systems through any of the following protocols:
Fibre Channel Protocol (FCP)
Fibre Channel over Ethernet (FCoE)
iSCSI
SAS
iSCSI Extensions for Remote Direct Memory Access (RDMA) (iSER)
Non-Volatile Memory Express (NVMe) over Fibre Channel (FC-NVMe)
NVMe over Remote Direct Memory Access (NVMe over RDMA)
NVMe over Transmission Control Protocol (NVMe over TCP)
To enable multiple access paths and enable correct volume presentation, a host system must
have a multipathing driver installed.
For more information about the native operating system multipath drivers that are supported
for IBM Storage Virtualize systems, see the SSIC.
For more information about how to attach specific supported host operating systems to the
storage systems, see Host attachment.
Note: If a specific host operating system is not mentioned in the SSIC, contact your IBM
representative or IBM Business Partner to submit a special request for support.
N_Port ID Virtualization
IBM Storage Virtualize systems use N_Port ID Virtualization (NPIV), which is a method for
virtualizing a physical FC port that is used for host I/O.
NPIV mode creates a virtual worldwide port name (WWPN) for every physical system FC
port. This WWPN is available for host connection only. During node maintenance, restart, or
failure, the virtual WWPN from that node is transferred to the same port of the other node in
the I/O group.
Ensure that the FC switches support the ability to create four more NPIV ports on each
physically connected system port.
When performing zoning configuration, virtual WWPNs are used for host communication only.
That is, system-to-host zones must include virtual WWPNs. Internode, intersystem, and
back-end storage zones must use the WWPNs of physical ports. Ensure that equivalent ports
with the same port ID are on the same fabric and in the same zone.
Important: IBM i Systems that are attached to FlashSystem or SVC must be converted to
use FlashSystem NPIV before upgrading to 8.7 or higher. For FlashSystem or SVC
systems that have NPIV in a state of disabled or transitional and that have any IBM i hosts,
a modified procedure must be used when enabling NPIV, to avoid loss of host access to
data. For more information, see IBM i Systems attached to FlashSystem or SVC must be
converted to use FlashSystem NPIV before upgrading to 8.7 or higher.
To view the virtual WWPNs to be used in system to host select Settings → Network → Fibre
Channel Ports. Expand the section for each port. Columns indicate WWPN, Host IO
Permitted, and Protocol type. SCSI is for Fibre Channel Protocol (FCP).
Note: The NPIV WWPNs do not become active until there is at least one online volume.
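The same information is available from the CLI through lstargetportfc, which lists both the physical and the virtualized (NPIV) WWPNs along with whether host I/O is permitted on each. This is a sketch; the filter support and column names are assumptions that can differ slightly between code levels.
IBM_FlashSystem:ITSO:superuser>lstargetportfc
IBM_FlashSystem:ITSO:superuser>lstargetportfc -filtervalue host_io_permitted=yes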
Host zones
A host must be zoned to an I/O group to access volumes that are presented by this I/O group.
The preferred zoning policy is single initiator zoning. To implement it, create a separate zone
for each host bus adapter (HBA) port, and place one port from each node in each I/O group
that the host accesses in this zone. A typical Fibre Channel host has two ports zoned to each
IO group, which creates a total of four paths. For deployments with more than 64 hosts that
are defined in the system, this host zoning scheme must be used.
Note: Cisco Smart Zoning and Brocade Peer Zoning are supported. You can use either to
insert target ports and multiple initiator ports in a single zone for ease of management.
However, either acts as though each initiator and target are configured in isolated zones.
The use of these zoning techniques is supported for host attachment and storage
virtualization. As a best practice, use normal zones when configuring ports for clustering or
for replication because these functions require the port to be an initiator and a target.
Consider the following rules for zoning hosts over SCSI or FC-NVMe:
For any volume, the number of paths through the SAN from the host to a system must not
exceed eight. For most configurations, four paths to an I/O group are sufficient.
Balance the host load across the system’s ports. For example, zone the first host with
ports 1 and 3 of each node in the I/O group, zone the second host with ports 2 and 4, and
so on. To obtain the best overall performance of the system, the load of each port must be
equal. Assuming that a similar load is generated by each host, you can achieve this
balance by zoning approximately the same number of host ports to each port.
Spread the load across all system ports. Use all ports that are available on your machine.
Balance the host load across HBA ports. If the host has more than one HBA port per
fabric, zone each host port with a separate group of system ports.
All paths must be managed by the multipath driver on the host side. Make sure that the
multipath driver on each server can handle the number of paths that is required to access all
volumes that are mapped to the host.
The same ports can be used for iSCSI and iSER host attachment concurrently. However, a single host can establish either an iSCSI or an iSER session, but not both.
Hosts connect to the system through IP addresses, which are assigned to the Ethernet ports
of the node. If the node fails, the address becomes unavailable and the host loses
communication with the system through that node.
To allow hosts to maintain access to data, the node-port IP addresses for the failed node are transferred to the partner node in the I/O group. The partner node handles requests for its own node-port IP addresses and for those of the failed node.
In addition to node-port IP addresses, the iSCSI name and iSCSI alias for the failed node are
transferred to the partner node. After the failed node recovers, the node-port IP address and
the iSCSI name and alias are returned to the original node.
iSCSI
iSCSI is a protocol that uses the Transmission Control Protocol and Internet Protocol
(TCP/IP) to encapsulate and send SCSI commands to storage devices that are connected to
a network. iSCSI is used to deliver SCSI commands from a client interface, which is called an
iSCSI Initiator, to the server interface, which is known as the iSCSI Target. The iSCSI payload
contains the SCSI CDB and optionally, data. The target carries out the SCSI commands and
sends the response back to the initiator.
RNICs can use RDMA over Ethernet by way of RoCE encapsulation. RoCE wraps standard
InfiniBand payloads with Ethernet or IP over Ethernet frames, and is sometimes called
InfiniBand over Ethernet. The following main RoCE encapsulation types are available:
RoCE V1
This type uses dedicated Ethernet Protocol Encapsulation (Ethernet packets between
source and destination MAC addresses by using EtherType 0x8915).
RoCE V2
This type uses dedicated UDP over Ethernet Protocol Encapsulation, IP UDP packets that
use port 4791 between source and destination IP addresses. UDP packets are sent over
Ethernet by using source and destination MAC addresses. This type is not compatible with
other Ethernet options, such as RoCE v1.
NVMe over TCP needs more CPU resources than protocols using RDMA. Each NVMe/TCP
port on FlashSystem supports multiple IP addresses and multiple VLANs. Generally,
NVMe-TCP runs on all switches and is routable.
For operating system support and multipathing, see IBM System Storage Interoperation
Center (SSIC).
3.5.4 Host objects
Before a host can access the storage capacity, it must be presented to the storage system as
a host object.
A host object is configured by using the GUI or command-line interface (CLI) and must
contain the necessary credentials for host-to-storage communications. After this process is
completed, storage capacity can be mapped to that host in the form of a volume.
A host cluster object groups clustered servers and treats them as a single entity. This
configuration allows multiple hosts to access the same volumes through one shared mapping.
Note: Any volume that is mapped to a host cluster is automatically assigned to all of the
members in that cluster with the same SCSI ID.
A typical use case for a host cluster object is to group multiple clustered servers with a
common operating system, such as IBM PowerHA® and Microsoft Cluster Server, and enable
them to have shared access to common volumes.
To create a host object select Hosts → Hosts → Add Host. The Add Host page opens. See
Figure 3-11 on page 78.
Tip: The Host port drop-down menu shows FCP initiator WWPNs that are currently logged
in to the system. If an expected WWPN is missing, examine switch zoning and rescan the
storage from the hosts. Some operating systems log out if no LUNs are mapped to the
host. If an expected host is not listed, then select Enter Unverified WWPN and enter the
host WWPNs manually.
Note: Usually, all volumes within a volume group are mapped to the same host or host
cluster, and the mapping can be done within the volume group view.
The purpose of the volume group snapshot management model is to simplify the
implementation of standard IBM FlashCopy® operations. It achieves this by offering a more
straightforward setup process and separating the snapshot and clone features. By using
volume group snapshots, administrators can create snapshots of volume groups with more
ease and efficiency, without the need for complex consistency group configurations.
Snapshots cannot be mapped to a host. To access the data on a snapshot, create a
thin-clone of the snapshot and map it to a host.
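On recent code levels, a volume group snapshot can also be taken manually from the CLI, as sketched below. The volume group name is a placeholder, and the exact command options (for example, for naming or retention) should be verified in IBM Documentation.
IBM_FlashSystem:ITSO:superuser>addsnapshot -volumegroup ProdVG
IBM_FlashSystem:ITSO:superuser>lsvolumegroupsnapshot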
Note: When using a snapshot policy, after the initial snapshot, snapshots are triggered
based on the frequency defined. This means that the time of day the snapshot is triggered
might shift forward and backward with Daylight Saving Time changes.
To create, view, or assign a snapshot policy select Policies → Snapshot policies. See
Figure 3-12 on page 80.
You can also suspend or unassign a policy from within the volume group. See Figure 3-13.
To configure policy-based replication between two systems, both require at least one IP
address that is created and assigned to a replication portset with at least one pool with
storage created. Multiple IP addresses can be added to a replication portset. If there is a
second independent inter-site link between the systems, a second portset can be used and
added to the partnership.
1. On the primary system, select Copy Services → Partnerships → Create Partnership. If
using IP, select IP and enter the partner IP address then select Test Connection. If the
partner meets requirements for policy-based replication the Use policy-based
replication checkbox can be selected. Enter the requested information and select Create.
Repeat these steps on the partner system. See Figure 3-14.
2. When the partnership shows a green dot and a Configured state, select Setup policy-based replication. See Figure 3-15 on page 82 and Figure 3-16 on page 82.
For more information, see Asynchronous disaster recovery replication in IBM Documentation, and see Policy-Based Replication with IBM Storage FlashSystem, IBM SAN Volume Controller and IBM Storage Virtualize, REDP-5704.
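A rough CLI outline of the same setup is shown below for orientation only: it creates an IP partnership on the primary system and later assigns a replication policy to a volume group. The IP address, bandwidth, policy, and volume group names are placeholders, additional portset or link parameters might be required for policy-based replication, and the chvolumegroup -replicationpolicy parameter is an assumption to confirm against your code level.
IBM_FlashSystem:ITSO:superuser>mkippartnership -type ipv4 -clusterip 192.168.10.20 -linkbandwidthmbits 1000 -backgroundcopyrate 50
IBM_FlashSystem:ITSO:superuser>lspartnership
IBM_FlashSystem:ITSO:superuser>chvolumegroup -replicationpolicy dr-policy ProdVG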
Chapter 4
This chapter includes discussions of the system health dashboard, verifying configuration of
objects configured in Chapter 3, “Step-by-step configuration” on page 61, system security,
getting support from IBM, and data migration.
Select IP Addresses or Partnerships for additional details. See Figure 4-3.
There is a column for Usable Capacity and a column for Written Capacity Limit. If you are using over-provisioned storage, for example FCMs, the values are different. The usable capacity represents the physical capacity available after the data is reduced because of
compression and deduplication. The Written Capacity Limit is the effective capacity of data
that is written to the system before its size is reduced.
You can change which columns are displayed by right-clicking the column titles bar. Some
useful capacity related columns can be displayed.
If a volume is thin provisioned in a standard pool, adding the columns Real Capacity and
Used Capacity can provide useful information. Used capacity is the capacity used by the data
written to the volume. Real Capacity is the Used Capacity plus a contingency capacity that is
used for new writes. These values are effective capacity.
Adding the column Compression Savings lists information on how compressible the data is.
To view the results and the date of the latest estimation cycle, under the volumes view,
right-click the volume then select Capacity Savings → Estimate Compression Savings.
To download a capacity savings report, under the volumes view, select Actions → Capacity Savings → Download Savings report.
The report is also useful for determining the physical capacity used by each volume when the
volume is compressed by FCMs or when the volumes are compressed in a Data Reduction
Pool.
A stand-alone comprestimator utility can be installed and used on host systems to estimate
savings before you move data to a Storage Virtualize system. To download the
Comprestimator that can be installed on a server, see IBM FlashSystem Comprestimator.
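The built-in comprestimator can also be run against volumes that are already on the system, as in the following sketch. The volume name is a placeholder; the analyze commands run in the background, so results appear in lsvdiskanalysis after a short delay.
IBM_FlashSystem:ITSO:superuser>analyzevdisk vol001
IBM_FlashSystem:ITSO:superuser>analyzevdiskbysystem
IBM_FlashSystem:ITSO:superuser>lsvdiskanalysis vol001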
To review the host connectivity, select Settings → Network → Fibre Channel or NVMe
Connectivity. The results can be filtered by the host. In the following example, Host1 is degraded because each WWPN is logged in to node 1 twice but to node 2 only once. See Figure 4-4.
When a host shows as degraded but there is no hardware failure on the host or storage, verify the Fibre Channel switch zoning and rescan the storage from the host.
Tip: If you are using Broadcom Fibre Channel switches, the fcping command that is run
from the switch CLI can be used to verify zoning and WWPN connectivity.
4.2 Additional settings and basic operations
The following sections discuss additional settings that an administrator can use when
implementing a new system along with basic operations.
For more information, see Security and see IBM Storage Virtualize, IBM Storage
FlashSystem, and IBM SAN Volume Controller Security Feature Checklist, REDP-5716.
Remote authentication
You can use remote authentication to authenticate to the system by using credentials that are
stored on an external authentication service. When you configure remote authentication, you
do not need to configure users on the system or assign more passwords. Instead, you can
use your existing passwords and user groups that are defined on the remote service to help
simplify user management and access, to enforce password policies, and to separate user
management from storage management.
A remote user is authenticated on a remote LDAP server. It is not required to add a remote
user to the list of users on the system, although they can be added to configure optional SSH
keys. For remote users, an equivalent user group must be created on the system with the
same name and role as the group on the remote LDAP server.
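A hedged sketch of the matching CLI step follows: it creates a user group whose name matches the LDAP group and marks it for remote authentication. The group name and role are placeholders; the LDAP server configuration itself (for example, with mkldapserver and chldap) has additional parameters that should be taken from IBM Documentation.
IBM_FlashSystem:ITSO:superuser>mkusergrp -name StorageAdmins -role Administrator -remote
IBM_FlashSystem:ITSO:superuser>lsusergrp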
Ownership groups
An ownership group defines a subset of users and objects within the system. You can create
ownership groups to further restrict access to specific resources that are defined in the
ownership group. Only users with Administrator or Security Administrator roles can configure
and manage ownership groups.
Ownership groups restrict access to only those objects that are defined within that ownership
group. An owned object can belong to one ownership group.
An owner is a user that belongs to an ownership group and can view and manipulate objects within that group.
The system supports the following resources that you assign to ownership groups:
Child pools
Volumes
Volume groups
Hosts
Host clusters
Host mappings
FlashCopy mappings
FlashCopy consistency groups
User groups
Portsets
When a user group is assigned to an ownership group, the users in that user group retain
their role, but are restricted to only those resources within the same ownership group. User
groups can define the access to operations on the system, and the ownership group can
further limit access to individual resources.
For example, you can configure a user group with the Copy Operator role, which limits access
of the user to Copy Services functions, such as FlashCopy and Remote Copy operations.
Access to individual resources, such as a specific FlashCopy consistency group, can be
further restricted by adding it to an ownership group.
When the user logs on to the management GUI, only resources that they can access through
the ownership group are displayed. Also, only events and commands that are related to the
ownership group to which a user belongs are viewable by those users.
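As a brief, hedged CLI illustration, an ownership group is created as shown below, and objects are then assigned to it through the ch commands of the respective object types. The names are placeholders, and the -ownershipgroup parameter shown for chvolumegroup is an assumption to verify for your release.
IBM_FlashSystem:ITSO:superuser>mkownershipgroup -name TenantA
IBM_FlashSystem:ITSO:superuser>chvolumegroup -ownershipgroup TenantA ProdVG
IBM_FlashSystem:ITSO:superuser>lsownershipgroup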
System certificates
SSL certificates are used to establish secure communications for many services. The system
uses a certificate to identify itself when authenticating with other devices. Depending on the
scenario, the system might be acting as either the client or the server.
The system has a root certificate authority (CA) that can be used to create internally signed
system certificates. System setup creates a certificate that is signed by the root CA to secure
connections between the management GUI and the browser. The root certificate can be
exported from the system and added to truststores on other systems, browsers, or devices to
establish trust. Internally signed certificates can be renewed automatically before they expire.
Automatic renewal can simplify the certificate renewal process and can prevent security
warnings from expired certificates. Automatic renewal is only supported by using an internally
signed certificate.
Externally signed certificates are issued and signed by a trusted third-party provider of
certificates, called an external certificate authority (CA). This CA can be a public CA or your
own organization's CA. Most web browsers trust well-known public CAs and include the root
certificate for these CAs in the device or application. Externally signed certificates cannot be
renewed automatically because they must be issued by the external CA. Externally signed
certificates must be manually updated before they expire by creating a new certificate signing
request (CSR) on the system and supplying it to the CA. The CA signs the request and issues
a certificate that must be installed on the system. The system raises a warning in the event
log 30 days before the certificate expires.
Ensure that the Certificate Authority (CA) used to sign the certificate includes these
extensions.
Security protocol levels
Security administrators can change the security protocol level for either SSL or SSH
protocols. When you change the security level for either of these security protocols, you can
control which encryption algorithms, ciphers, and version of the protocol are permitted on the
system.
The GUI gives a high-level description of each level. For a more detailed description including
the ciphers supported with each level, see Security protocol levels.
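A sketch of the corresponding CLI commands is shown below. Treat the parameter names and level values as assumptions and confirm them in the command reference for your release, because raising the level can disconnect older clients.
IBM_FlashSystem:ITSO:superuser>lssecurity
IBM_FlashSystem:ITSO:superuser>chsecurity -sslprotocol 3
IBM_FlashSystem:ITSO:superuser>chsecurity -sshprotocol 3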
The audit log tracks action commands that are issued through an SSH session, management
GUI, or Remote Support Assistance. It provides the following entries:
Identity of the user who ran the action command.
Name of the actionable command.
Timestamp of when the actionable command ran on the configuration node.
Parameters that ran with the actionable command.
The audit log is accessed by selecting Access → Audit Log (see Figure 4-5).
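The audit log can also be read from the CLI with catauditlog, which is convenient for scripted reviews. The following sketch lists the 20 most recent entries; the entry count is a placeholder.
IBM_FlashSystem:ITSO:superuser>catauditlog -first 20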
Call Home
IBM Call Home is a support function that is embedded in all IBM Storage Virtualize storage
products. By enabling call home, the health and functionality of your system is constantly
monitored by IBM. If a software or hardware error occurs, the call home function notifies IBM support of the event and then automatically opens a service request. By obtaining event and configuration information automatically, IBM Support can begin problem determination sooner.
There are two methods available for a system to call home and both can be enabled
simultaneously:
1. Cloud Services uses HTTPS to connect directly to IBM from the management IP address
assigned to the lowest physical port ID over port 443.
2. Email services require an SMTP server to forward the email to IBM. Email services can
also send alerts to local administrators.
For detailed information on call home or remote support assistance, see the white paper
IBM Storage Virtualize Products Call Home and Remote Support Overview.
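If the email method is used, the basic CLI configuration resembles the following sketch. The SMTP server address, contact details, and recipient address are placeholders, and Call Home is normally completed through the GUI wizard, so treat this only as an outline to verify against your code level.
IBM_FlashSystem:ITSO:superuser>mkemailserver -ip 192.168.1.25 -port 25
IBM_FlashSystem:ITSO:superuser>chemail -reply storage.admin@example.com -contact "Storage Admin" -primary 5550100 -location "Datacenter 1"
IBM_FlashSystem:ITSO:superuser>mkemailuser -address storage.admin@example.com -usertype local -error on -warning on
IBM_FlashSystem:ITSO:superuser>startemail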
Note: Remote support assistance uses the service IP addresses to make an outbound
connection to IBM on port 22.
The connections for both Call Home and Remote Support Assistance can be routed
through a client-supplied web proxy.
Support package
If you encounter a problem and contact the IBM Support Center, you are asked to provide a
support package, which is often referred to as a snap.
You can use two methods to collect and upload the support package from the GUI of your
Storage Virtualize system:
Upload Support Package
Use this feature if your system is connected to the internet to upload the Support Package
directly from the storage system.
Download Support Package
Use this feature if your system is not connected to the internet to upload the Support
Package manually.
The support agent advises which type of support package to collect based on the problem. For
general guidelines and the differences between the different support package types, see
What Data Should You Collect for a Problem on Spectrum Virtualize systems.
Chapter 5. IBM Storage Insights and IBM Storage Insights Pro
IBM strongly recommends that all customers install and use this no-charge, cloud-based IBM
application because it provides a single dashboard that provides a clear view of all your IBM
block storage. You can make better decisions by seeing trends in performance and capacity.
Note: IBM Storage Insights is available at no cost to clients who have IBM Storage
Systems on either IBM warranty or maintenance. The more fully featured IBM Storage
Insights Pro is a chargeable product, which can be purchased separately and can also be
included in certain levels of IBM Storage Expert Care and IBM Storage Control.
With storage health information, you can focus on areas that need attention. When IBM
support is needed, IBM Storage Insights simplifies uploading logs, speeds resolution with
online configuration data, and provides an overview of open tickets all in one place.
In addition to the no-charge version of IBM Storage Insights, IBM offers IBM Storage Insights
Pro. IBM Storage Insights Pro is a subscription service that provides longer historical views of
data, more reporting and optimization options, and supports IBM file and block storage as well as EMC VNX and VMAX.
Note: For a comparison of the features in the IBM Storage Insights and Insights Pro
editions, see IBM Storage Insights vs IBM Storage Insights Pro.
For more information about IBM Storage Insights and for registration, see the following
resources:
IBM Storage Insights Fact Sheet
IBM Storage Insights Security Guide, SC27-8774
IBM Storage Insights
Product registration, which is used to sign up and register for this no-charge service
The monitoring capabilities that IBM Storage Insights provides are useful for things like
capacity planning, workload optimization, and managing support tickets for ongoing issues.
For a live demo of IBM Storage Insights, see Storage Insights Demo (requires login).
Demonstration videos: To view videos about Storage Insights, see Videos for
IBM Storage Insights. The videos include new features and enhancements of IBM Storage
Insights.
After you add your systems to IBM Storage Insights, you see the Dashboard, where you can
select a system that you want to see the overview for.
There are two versions of the dashboard, the classic version and the new Carbon enhanced
version.
Figure 5-1 shows the classic version view of the IBM Storage Insights dashboard.
Figure 5-2 IBM Storage Insights System overview (Carbon enhanced view)
The error entries can be expanded to obtain more details by selecting the three dots at the
upper-right corner of the component that has an error and then selecting View Details. The
relevant part of the more detailed System View opens, and what you see depends on which
component has the error, as shown in Figure 5-4 on page 95.
Figure 5-4 Ports in error
In Figure 5-4, the GUI lists which components have the problem and exactly what is wrong
with them. You can use that information to open a support ticket with IBM if necessary.
Figure 5-5 Capacity area of the IBM Storage Insights system overview
In the Capacity view, the user can select the required system. Clicking any of these items
takes the user to the detailed system view for the selection option. From there, you can get a
historical view of how the system capacity changed over time, as shown in Figure 5-6 on
page 96. At any time, the user can select the timescale, resources, and metrics to be
displayed on the graph by clicking any options around the graph.
To view more detailed performance statistics, enter the system view again, as described in
5.2.2, “Capacity monitoring” on page 95.
For this performance example, select View Pools, and then select Performance from the
System View pane, as shown in Figure 5-8 on page 97.
Figure 5-8 IBM Storage Insights: Performance view
It is possible to customize what can be seen on the graph by selecting the metrics and
resources. In Figure 5-9, the Overall Response Time for one IBM FlashSystem over a
12-hour period is displayed.
Scrolling down the graph, the Performance List view is visible, as shown in Figure 5-10 on
page 98. Metrics can be selected by clicking the filter button at the right of the column
headers. If you select a row, the graph is filtered for that selection only. Multiple rows can be
selected by holding down the Shift or Ctrl keys.
A window opens where you can create a ticket or update an existing ticket, as shown in
Figure 5-12 on page 99.
Figure 5-12 Get Support window
2. Select Create Ticket, and the ticket creation wizard opens. Details of the system are
automatically populated, including the customer number, as shown in Figure 5-13. Select
Next.
4. You can select a severity for the ticket. Examples of which severity to select are shown in Figure 5-15. In this example, storage ports are offline with no impact, so severity 3 is selected because there is only minor impact.
5. Choose whether this is a hardware or a software problem. For this example, the offline
ports are likely caused by a physical layer hardware problem. Click Next.
6. Review the details of the ticket to be logged with IBM, as shown in Figure 5-16. Contact
details must be entered so that IBM Support can respond to the correct person. You must
also choose which type of logs to attach to the ticket. For more information about the types
of snaps, see Figure 5-16. Click Create Ticket.
7. A confirmation window opens, as shown in Figure 5-17 on page 102, and IBM Storage
Insights automatically uploads the snap to the ticket when it is collected. Click Close.
Figure 5-17 Update ticket
2. To quickly add logs to a ticket without having to browse to the system GUI or use
IBM ECuRep, click Get Support and Add Log Package to Ticket. A window opens that
guides you through the process, as shown in Figure 5-19. After you enter the support
ticket number, you can select which type of log package you want and add a note to the
ticket with the logs.
4. After clicking Update Ticket, a confirmation opens, as shown in Figure 5-21. You can exit
the wizard. IBM Storage Insights runs in the background to gather the logs and upload
them to the ticket.
IBM Storage Virtualize software 8.7.0 and FlashCore modules (FCMs) with firmware 4.1
include the following enhancements to ransomware threat detection:
IBM FCMs collect and analyze detailed ransomware statistics from every I/O with no
performance impact.
IBM Storage Virtualize runs an AI engine on every FlashSystem. The engine is fed machine learning (ML) models that were developed by IBM Research® and trained on real-world ransomware. The AI engine learns what is normal for the system and detects threats by using data from the FCMs.
IBM Storage Insights Pro collects threat information from connected FlashSystems. Alerts
trigger SIEM/SOAR software to initiate a response.
Statistics are fed back to IBM to improve ML models.
For more information about the IBM ransomware threat detection solutions, including those
mentioned in this book, see Ransomware protection solutions.
Also, for more information about how to mark volume snapshots as compromised after
ransomware threat detection, see Boost Your Defense with IBM Storage Insights.
IBM Storage Insights Pro works with the Flash Grid and provides an overview of your grid
with grouping of your systems and the ability to nondisruptively move workloads, also called
storage partitions, between systems in the grid. The goal is to provide a seamless integration
and interaction between the on-premises and cloud-based management portals. Figure 5-22
shows the integration of IBM Storage Insights Pro with the IBM Storage Virtualize software
GUI and the linkage to the IBM FlashSystems and IBM SAN Volume Controllers it monitors.
Figure 5-22 IBM Storage Insights Pro and IBM Flash Grid integration
Chapter 6
The following questions help define the problem for effective troubleshooting:
What are the symptoms of the problem?
– What is reporting the problem?
– Which error codes and messages were observed?
– What is the business impact of the problem?
– Where does the problem occur?
– Which exact component is affected: the whole system, or for instance certain hosts or IBM Storage Virtualize nodes?
– Is the environment and configuration supported?
When does the problem occur?
– How often does the problem happen?
– Does the problem happen only at a certain time of day or night?
– What kind of activities were ongoing at the time the problem was reported?
– Did the problem happen after a change in the environment, such as a code upgrade or
installing software or hardware?
Under which conditions does the problem occur?
– Does the problem always occur when the same task is being performed?
– Does a certain sequence of events need to occur for the problem to surface?
– Do any other applications fail at the same time?
Can the problem be reproduced?
– Can the problem be re-created, for example by running a single command, a set of
commands, or a particular application?
– Are multiple users or applications encountering the same type of problem?
– Can the problem be reproduced on any other system?
Note: For effective troubleshooting, it is crucial to collect log files as close to the incident as
possible and provide an accurate problem description with a timeline.
6.1.1 Storage Insights
As discussed in Chapter 5, “IBM Storage Insights and IBM Storage Insights Pro” on page 91,
IBM Storage Insights is an important part of monitoring to help ensure continued availability of
IBM Storage Virtualize systems.
When IBM Support is needed, IBM Storage Insights simplifies uploading logs, speeds
resolution with online configuration data, and provides an overview of open tickets all in one
place.
IBM strongly recommends that all customers install and use this no-charge, cloud-based IBM
application because it provides a single dashboard that provides a clear view of all your IBM
block storage.
For detailed information and examples, refer to Chapter 5, “IBM Storage Insights and
IBM Storage Insights Pro” on page 91.
As shown in Figure 6-1, the first icon shows IBM Storage Virtualize events, such as an error
or a warning, and the second icon shows suggested, running, or recently completed
background tasks.
The System Health section in the lower part of the dashboard provides information about the
health status of hardware, logical, and connectivity components. If you click Expand in each
of these categories, the status of the individual components is shown (see Figure 6-2).
Clicking More Details takes you to the GUI panel related to that specific component, or
shows more information about it.
Events that require attention are displayed in Monitoring → Events. The highest priority event, which is the event log entry with the lowest four-digit error code, is highlighted so that it is addressed first, as shown in Figure 6-3.
Click Run Fix to start the Fix Procedure for this particular event. Fix Procedures help resolve
a problem. In the background, a Fix Procedure analyzes the status of the system and its
components and provides further information about the nature of the problem. This is to
ensure that the actions taken do not lead to undesirable results, as, for instance, volumes
becoming inaccessible to the hosts. The Fix Procedure then automatically performs the
actions that are required to return the system to its optimal state. This can include checking for dependencies, resetting internal error counters, and applying updates to the system configuration. Whenever user interaction is required, you are shown suggested actions to take and are guided through them. If the problem is fixed, the related error in the event log is
eventually marked as fixed. Also, an associated alert in the GUI is cleared.
Error codes along with their detailed properties in the event log provide reference information
when a service action is required. The four-digit Error Code is visible in the event log. They
are accompanied by a six-digit Event ID, which provides additional details about this event.
Three-digit Node Error Codes are visible in the node status in the Service Assistant GUI. For
more information about messages and codes, see Messages and Codes.
Note: Attempt to run the Recover System Procedure only after a complete and thorough
investigation of the cause of the system failure. Try to resolve those issues by using
other service procedures first.
Selecting Monitoring → Events shows information messages, warnings, and issues about
the IBM Storage Virtualize system. Therefore, this area is a good place to check for problems
in the system.
To display the most important events that must be fixed, use the Recommended Actions
filter.
If an important issue must be fixed, look for the Run Fix button in the upper ribbon with an
error message that indicates which event must be fixed as soon as possible. This fix
procedure helps resolve problems. It analyzes the system, provides more information about
the problem, suggests actions to take with the steps to follow, and finally checks to see
whether the problem is resolved.
Always use the fix procedures to resolve errors that are reported by the system, such as
system configuration problems or hardware failures.
Note: IBM Storage Virtualize systems detect and report error messages. However, events
might be triggered by factors external to the system, for example back-end storage devices
or the storage area network (SAN).
You can safely mark events as fixed. If the error persists or reoccurs, a new event is logged.
To select multiple events in the table, press and hold the Ctrl key while clicking the events you
want to fix.
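The equivalent CLI workflow is sketched below: list the unfixed events ordered by severity, and then mark an entry as fixed by its sequence number once it is resolved. The sequence number is a placeholder, and the lseventlog filter options should be confirmed for your code level.
IBM_FlashSystem:ITSO:superuser>lseventlog -order severity -fixed no
IBM_FlashSystem:ITSO:superuser>cheventlog -fix 401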
Figure 6-4 shows the Monitoring → Events window with a recommended Run Fix action.
To obtain more information about any event, double-click or select an event in the table, and
select Actions → Properties. You can also select Run Fix Procedure and properties by
right-clicking an event.
The properties and details are displayed in a panel, as shown in Figure 6-5 on page 114.
Sense Data is available in an embedded tab. You can review and click Run Fix to run the fix
procedure.
Important: Run these commands whenever a change occurs that is related to the communication between IBM Storage Virtualize systems and the back-end storage subsystem, for example when back-end storage is reconfigured or a SAN zoning change is made. This process helps ensure that IBM Storage Virtualize recognizes the changes.
Common error recovery involves the following IBM Storage Virtualize CLI commands:
detectmdisk
Discovers changes in the SAN and back-end storage.
lscontroller and lsmdisk
Provides the status of all controllers and MDisks. Pay attention to status values other than
online, for instance offline or degraded.
lscontroller <controller_id_or_name>
Checks the controller that was causing the issue and verifies that all the worldwide port
names (WWPNs) are listed as you expect. Also check whether the path_counts are
distributed evenly across the WWPNs.
lsmdisk
Determines whether all MDisks are online.
Note: When an issue is resolved by using the CLI, verify that the error disappears by
selecting Monitoring → Events. If not, make sure to mark the error as fixed.
To do so, check the View your cases section in the IBM Let’s Troubleshoot portal.
Storage Virtualize systems that are configured to be monitored in Chapter 5, “IBM Storage
Insights and IBM Storage Insights Pro” on page 91, show associated support cases there as
well.
Alternatively, you can log in with your IBMid to IBM Call Home Connect Cloud. Call Home
Connect Cloud provides an enhanced live view of your assets, including the status of cases,
warranties, maintenance contracts, service levels, and end of service information.
Additionally, Call Home Connect Cloud offers links to other online tools such as IBM Storage
Insights.
Four different types of Snap can be collected, Snap Type 1 through Snap Type 4, colloquially
often referred to as Snap/1, Snap/2, Snap/3 or Snap/4. The Snap types vary in the amount of
diagnostic information that is contained in the package:
Snap/1 includes standard logs, including performance statistics. It is the fastest and smallest option and contains no node dumps.
Snap/2 is the same as Snap/1 plus one existing statesave, which is the most recently
created dump or livedump from the current config node. It is slightly slower than Snap/1
and can be large.
Snap/3 is the same as Snap/1 plus the most recent dump or livedump from each active
member node in the clustered system.
Snap/4 is the same as Snap/1 plus a fresh livedump from each active member node in the clustered system, which is created when the data collection is triggered.
A livedump is a binary data capture of the current state of the software. It causes only minimal
impact to I/O operations. Livedumps are preferred when the system is still operational and a
detailed snapshot of the current state is needed. The contents of a livedump are similar to the
contents of a dump with slightly less detailed information. Livedumps can be initiated
manually or automatically based on certain events.
Tip: For urgent cases, start with collecting and uploading a Snap/1 followed by a Snap/4.
This enables IBM Remote Support to more quickly begin an analysis while the more
detailed Snap/4 is being collected and uploaded.
For more information about the required support package that is most suitable to diagnose
different type of issues and their content, see What data should you collect for a problem on
IBM Storage Virtualize systems?
By default, Storage Virtualize offers two options for automatic support package upload:
Automatic upload by using the management interface
You can configure Storage Virtualize to automatically collect and upload support packages
to the IBM Support Center. This can be done through the GUI or CLI.
Download and manual upload
Alternatively, you can use Storage Virtualize to download the support package locally to
your device. You can then manually upload it to the IBM Support Center if needed.
To collect a Snap/4 using the CLI, a livedump of each active node must be generated by using
the svc_livedump command. Then, the log files and newly generated dumps are uploaded by
using the svc_snap gui3 command, as shown in Example 6-1 on page 119. To verify whether
the support package was successfully uploaded, use the sainfo lscmdstatus command
(TSXXXXXXX is the case number).
Note: The use of Service Assistant commands such as sainfo or satask requires
superuser privileges.
Example 6-1 The svc_livedump command
IBM_FlashSystem:FS9110:superuser>svc_livedump -nodes all -yes
Livedump - Fetching Node Configuration
Livedump - Checking for dependent vdisks
Livedump - Check Node status
Livedump - Preparing specified nodes - this may take some time...
Livedump - Prepare node 1
Livedump - Prepare node 2
Livedump - Trigger specified nodes
Livedump - Triggering livedump on node 1
Livedump - Triggering livedump on node 2
Livedump - Waiting for livedumps to complete dumping on nodes 1,2
Livedump - Waiting for livedumps to complete dumping on nodes 1
Livedump - Successfully captured livedumps on nodes 1,2
IBM_FlashSystem:FS9110:superuser>sainfo lscmdstatus
last_command satask supportupload -pmr TSxxxxxxxxx -filename
/dumps/snap.serial.YYMMDD.HHMMSS.tgz
last_command_status CMMVC8044E Command completed successfully.
T3_status
T3_status_data
cpfiles_status Complete
cpfiles_status_data Copied 160 of 160
snap_status Complete
snap_filename /dumps/snap.serial.YYMMDD.HHMMSS.tgz
installcanistersoftware_status
supportupload_status Active
supportupload_status_data Uploaded 267.5 MiB of 550.2 MiB
supportupload_progress_percent 48
supportupload_throughput_KBps 639
supportupload_filename /dumps/snap.serial.YYMMDD.HHMMSS.tgz
If you do not want to automatically upload the snap to IBM, omit the upload pmr=TSxxxxxxxxx
command option. When the snap creation is done, all collected files are packaged into a
gzip-compressed file with the following naming format:
/dumps/snap.<panel_id>.YYMMDD.hhmmss.tgz
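The following minimal sketch summarizes both variants. The commands follow the description
above; TSxxxxxxxxx is a placeholder for your case number, and the exact syntax can vary
slightly by code level, so verify it against the documentation for your release. To collect a
Snap/4 and upload it directly to the support case:
svc_livedump -nodes all -yes
svc_snap upload pmr=TSxxxxxxxxx gui3
To collect the same package without uploading it, so that it can be downloaded later through
the GUI:
svc_livedump -nodes all -yes
svc_snap gui3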
The creation of the Snap archive takes a few minutes to complete. Depending on the size of
the system and the configuration, it can take considerably longer, particularly if fresh
livedumps are being created.
The generated file can be retrieved from the GUI by selecting Settings → Support →
Manual Upload Instructions → Download Support Package, and then clicking Download
Existing Package. Find the exact name of the snap that was generated by the svc_snap
command that you ran earlier, select that file, and click Download.
For deeper analysis in cases where drives or FCMs are involved, drivedumps are often
useful. They are particularly valuable for troubleshooting FCM issues because they capture
the low-level state of the drive. Their data can help you understand problems with the
drive, and they do not contain any data that applications write to the drive. In some
situations, drivedumps are triggered automatically by the system.
To collect support data from a disk drive, run the triggerdrivedump drive_id command. The
output is stored in a file in the /dumps/drive directory. This directory is located on one of the
nodes that are connected to the drive.
Any snap that is taken after the trigger command contains the stored drivedumps. It is
sufficient to provide Snap/1 for drivedumps.
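As a minimal illustration, the following sequence triggers a drivedump and then checks that
it was stored; the drive ID 5 and the node name node1 are placeholders for your environment:
triggerdrivedump 5
lsdumps -prefix /dumps/drive node1
The lsdumps command with the -prefix /dumps/drive option lists the files in the drive dump
directory of the specified node. Any snap that is collected afterward, including a Snap/1,
contains these files.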
6.2.3 Host multipath software
If a problem occurs that is related to host communication with an IBM Storage Virtualize
system, collecting data from hosts and their multipath software is useful.
Example 6-4 shows the output for the command multipath -ll, including the following
information:
Name of the mpath device (mpatha / mpathb).
UUID of the mpath device.
Discovered paths for each mpath device, including the name of the sd-device, the priority,
and state information.
You can also use the multipathd interactive console for troubleshooting. The multipathd -k
command opens an interactive interface to the multipathd daemon. Within the console, you can
enter help to get a list of the available commands. To display the current configuration,
including the defaults, enter show config. To exit the console, press Ctrl+d.
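For example, a short interactive session might look like the following sketch; the available
commands can vary with the installed multipath-tools version:
multipathd -k
multipathd> show config
multipathd> show paths
multipathd> show topology
Press Ctrl+d to leave the console.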
The following lspath and lsmpio commands are useful for checking the multipath status on
AIX hosts; a short usage sketch follows the list.
lspath
    Lists all paths for all hdisks with their status and parent FSCSI (Fibre Channel SCSI)
    device information.
lspath -H -l hdisk1
    Lists all paths for the specified hdisk with their status and corresponding FSCSI device
    information. The output includes a column header.
lspath -l hdisk1 -HF "name path_id parent connection path_status status"
    Lists more detailed information about the specified hdisk, the parent FSCSI device, and
    its path status.
lspath -s disabled
    Lists all paths that have an operational status of disabled.
lspath -s failed
    Lists all paths that have an operational status of failed.
lspath -AHE -l hdisk0 -p vscsi0 -w "810000000000"
    Displays attributes for a path to a specific connection (-w). The -A flag is similar to
    lsattr for devices. If only one path exists to the parent device, the connection can be
    omitted: lspath -AHE -l hdisk0 -p vscsi0.
lsmpio
    Lists all disks and corresponding paths with state, parent, and connection information.
lsmpio -q
    Shows all disks with vendor ID, product ID, size, and volume name.
lsmpio -ar
    Lists the parent adapter and remote port information (-a: local adapter, -r: remote port).
lsmpio -are
    Lists the parent adapter and remote port error statistics (-e: error).
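As a quick health check on an AIX host, you might combine several of these commands as in
the following sketch (hdisk1 is a placeholder):
lspath -s failed
lspath -H -l hdisk1
lsmpio -are
The first command confirms whether any paths are in the failed state, the second shows all
paths of a suspect hdisk, and the third lists error statistics for each adapter and remote
port.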
Windows using MPIO
Because IBM Storage Virtualize 8.3.0 is the last version that supports Subsystem Device
Driver Device Specific Module (SDDDSM), you must use native Windows multipathing,
which is provided by the installable feature MPIO.
You can manage the multipathing configuration by using the Windows GUI, or by using the
command-line tool mpclaim.exe, which is installed by default:
mpclaim.exe -e
    Lists the storage devices that are discovered by the system.
mpclaim.exe -s -d
    Shows the load-balancing policy that your volumes are currently using.
Generic MPIO settings can be listed and modified by using Windows PowerShell cmdlets.
Table 6-4 shows the PowerShell cmdlets, which can be used to list or modify generic
Windows MPIO settings.
Get-MSDSMSupportedHW
    Lists the hardware IDs in the Microsoft Device Specific Module (MSDSM) supported
    hardware list.
Get-MPIOSetting
    Gets the Microsoft MPIO settings: PathVerificationState, PathVerificationPeriod,
    PDORemovePeriod, RetryCount, RetryInterval, UseCustomPathRecoveryTime,
    CustomPathRecoveryTime, and DiskTimeoutValue.
Set-MPIOSetting
    Changes the same set of Microsoft MPIO settings.
Command-line interface
To obtain logical unit number (LUN) multipathing information from the ESXi host CLI,
complete the following steps:
1. Log in to the ESXi host console.
2. To get detailed information about the paths, run esxcli storage core path list.
Example 6-5 shows an example for the output of the esxcli storage core path list
command.
3. To list detailed information for all the corresponding paths for a specific device, run esxcli
storage core path list -d <naaID>.
Example 6-6 shows the output for the specified device with the ID
naa.600507680185801aa000000000000972, which is attached with eight paths to the ESXi
server. The output is abridged for brevity.
fc.5001438028d02923:5001438028d02922-fc.500507680100037e:500507680130037e-naa.600507680185801aa000000000000972
Runtime Name: vmhba2:C0:T2:L9
Device: naa.600507680185801aa000000000000972
Device Display Name: IBM Fibre Channel Disk (naa.600507680185801aa000000000000972)
Adapter: vmhba2
Channel: 0
Target: 2
LUN: 9
Plugin: NMP
State: active
Transport: fc
Adapter Identifier: fc.5001438028d02923:5001438028d02922
Target Identifier: fc.500507680100037e:500507680130037e
Adapter Transport Details: WWNN: 50:01:43:80:28:d0:29:23 WWPN: 50:01:43:80:28:d0:29:22
Target Transport Details: WWNN: 50:05:07:68:01:00:03:7e WWPN: 50:05:07:68:01:30:03:7e
Maximum I/O Size: 33553920
fc.5001438028d02921:5001438028d02920-fc.500507680100037e:500507680110037e-naa.600507680185801aa000000000000972
UID:
6.2.4 More data collection
Data collection methods vary by storage platform, SAN switch, and operating system.
For an issue in a SAN environment when it is not clear where the problem is occurring, you
might need to collect data from several devices in the SAN.
The following basic information should be collected for each type of device; an example of
gathering it on a Linux host follows the list:
Hosts:
– Operating system: Version and level
– Host Bus Adapter (HBA): Driver and firmware level
– Multipathing driver level
SAN switches:
– Hardware model
– Software version
Storage subsystems:
– Hardware model
– Software version
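On a Linux host, for example, much of this basic information can be gathered with a few
commands. This is a sketch only; the exact files and package names vary by distribution and
HBA vendor:
uname -a
    Shows the kernel level.
cat /etc/os-release
    Shows the operating system version.
cat /sys/class/fc_host/host*/symbolic_name
    On many Fibre Channel HBAs, shows the HBA model and its driver and firmware levels.
rpm -q device-mapper-multipath
    Shows the multipath package level on RPM-based distributions.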
Related publications
The publications listed in this section are considered particularly suitable for a more
detailed discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only.
Implementation Guide for IBM Storage FlashSystem and IBM SAN Volume Controller
Updated for IBM Storage Virtualize Version 8.6, SG24-8542
IBM Storage Virtualize and VMware: Integrations, Implementation and Best Practices,
SG24-8549
Ensuring Business Continuity: A Practical Guide to Policy-Based Replication and
Policy-Based High Availability for IBM Storage Virtualize Systems, SG24-8569
Introduction and Implementation of Data Reduction Pools and Deduplication, SG24-8430
IBM Storage Insights Security Guide, SC27-8774
Data Resiliency Designs: A Deep Dive into IBM Storage Safeguarded Snapshots,
REDP-5737
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
IBM Storage FlashSystem information:
https://www.ibm.com/flashsystem/
IBM SAN Volume Controller information:
https://www.ibm.com/products/san-volume-controller?mhsrc=ibmsearch_a&mhq=SAN%20Volume%20Controller
IBM System Storage Interoperation Center (SSIC):
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
Back cover
SG24-8561-00
ISBN 0738461776
Printed in U.S.A.
ibm.com/redbooks