Implementing the IBM Storwize V3700
Jon Tate
Saiprasad Prabhakar Parkar
Lee Sirett
Chris Tapsell
Paulo Tomiyoshi Takeda
ibm.com/redbooks
International Technical Support Organization

Implementing the IBM Storwize V3700

October 2013
SG24-8107-01
Note: Before using this information and the product it supports, read the information in "Notices" on page ix.
Second Edition (October 2013) This edition applies to Version 7 Release 1 of IBM Storwize V3700 machine code.
© Copyright International Business Machines Corporation 2013. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices
Trademarks

Preface
Authors
Now you can become a published author, too!
Comments welcome
Stay connected to IBM Redbooks

Summary of changes
October 2013, Second Edition

Chapter 1. Overview of the IBM Storwize V3700 system
1.1 IBM Storwize V3700 overview
1.2 IBM Storwize V3700 terminology
1.3 IBM Storwize V3700 models
1.4 IBM Storwize V3700 hardware
1.4.1 Control enclosure
1.4.2 Expansion enclosure
1.4.3 Host interface cards
1.4.4 Disk drive types
1.5 IBM Storwize V3700 terms
1.5.1 Hosts
1.5.2 Node canister
1.5.3 I/O Group
1.5.4 Clustered system
1.5.5 RAID
1.5.6 Managed disks
1.5.7 Quorum disks
1.5.8 Storage pools
1.5.9 Volumes
1.5.10 iSCSI
1.5.11 SAS
1.6 IBM Storwize V3700 features
1.6.1 Volume mirroring
1.6.2 Thin provisioning
1.6.3 Easy Tier
1.6.4 Turbo Performance
1.6.5 Storage Migration
1.6.6 FlashCopy
1.6.7 Remote Copy
1.7 Problem management and support
1.7.1 IBM Assist On-site and remote service
1.7.2 Event notifications
1.7.3 SNMP traps
1.7.4 Syslog messages
1.7.5 Call Home email
1.7.6 Useful IBM Storwize V3700 websites
Chapter 2. Initial configuration
2.1 Hardware installation planning
2.2 SAN configuration planning
2.3 SAS direct attach planning
2.4 LAN configuration planning
2.4.1 Management IP address considerations
2.4.2 Service IP address considerations
2.5 Host configuration planning
2.6 Miscellaneous configuration planning
2.7 System management
2.7.1 Graphical user interface
2.7.2 Command-line interface
2.8 First-time setup
2.9 Initial configuration
2.9.1 Adding enclosures after initial configuration
2.9.2 Configure Call Home, email alert, and inventory
2.9.3 Service Assistant tool
Chapter 3. Graphical user interface overview
3.1 Getting started
3.1.1 Supported browsers
3.1.2 Access the management GUI
3.1.3 Overview panel layout
3.2 Navigation
3.2.1 Function icons navigation
3.2.2 Extended help navigation
3.2.3 Breadcrumb navigation aid
3.2.4 Suggested Tasks aid
3.2.5 Presets
3.2.6 Access actions
3.2.7 Task progress
3.2.8 Navigating panels with tables
3.3 Status Indicators menus
3.3.1 Horizontal bars
3.3.2 Allocated status bar menu
3.3.3 Running tasks bar menu
3.3.4 Health status bar menu
3.4 Function Icon menus
3.4.1 Home menu
3.4.2 Monitoring menu
3.4.3 Pools menu
3.4.4 Volumes menu
3.4.5 Hosts menu
3.4.6 Copy Services menu
3.4.7 Access menu
3.4.8 Settings menu
3.5 Management GUI help
3.5.1 IBM Storwize V3700 Information Center
3.5.2 Watching e-Learning videos
3.5.3 Learning more
3.5.4 Embedded panel help
3.5.5 Hidden question mark help
3.5.6 Hover help
3.5.7 IBM endorsed YouTube videos

Chapter 4. Host configuration
4.1 Host attachment overview
4.2 Preparing the host operating system
4.2.1 Windows 2008 R2: Preparing for Fibre Channel attachment
4.2.2 Windows 2008 R2: Preparing for iSCSI attachment
4.2.3 Windows 2008 R2: Preparing for SAS attachment
4.2.4 VMware ESX: Preparing for Fibre Channel attachment
4.2.5 VMware ESX: Preparing for iSCSI attachment
4.2.6 VMware ESX: Preparing for SAS attachment
4.3 Configuring hosts on IBM Storwize V3700
4.3.1 Considerations when creating hosts on IBM Storwize V3700
4.3.2 Creating Fibre Channel hosts
4.3.3 Creating iSCSI hosts
4.3.4 Configuring IBM Storwize V3700 for iSCSI host connectivity
4.3.5 Creating SAS hosts

Chapter 5. Basic volume configuration
5.1 Provisioning storage from IBM Storwize V3700 and making it available to the host
5.1.1 Creating a generic volume
5.1.2 Creating a thin-provisioned volume
5.1.3 Creating a mirrored volume
5.1.4 Creating a thin-mirror volume
5.2 Mapping a volume to the host
5.2.1 Mapping newly created volumes to the host by using the wizard
5.2.2 Manually mapping a volume to the host
5.3 Discovering the volumes from the host and specifying multipath settings
5.3.1 Windows 2008 Fibre Channel volume attachment
5.3.2 Windows 2008 iSCSI volume attachment
5.3.3 Windows 2008 Direct SAS volume attachment
5.3.4 VMware ESX Fibre Channel volume attachment
5.3.5 VMware ESX iSCSI volume attachment
5.3.6 VMware ESX Direct SAS volume attachment

Chapter 6. Storage migration wizard
6.1 Interoperability and compatibility
6.2 Storage migration wizard
6.2.1 External virtualization capability
6.2.2 Overview of the storage migration wizard
6.2.3 Storage migration wizard tasks
6.3 Storage migration wizard example scenario
6.3.1 Storage migration wizard example scenario description
6.3.2 Using the storage migration wizard for example scenario

Chapter 7. Storage pools
7.1 Configuration
7.2 Working with internal drives
7.2.1 Internal storage window
7.2.2 Actions on internal drives
7.3 Configuring internal storage
7.3.1 RAID configuration presets
7.3.2 Customize initial storage configuration
7.3.3 Create new MDisk and pool
7.3.4 Using the recommended configuration
7.3.5 Selecting a different configuration
7.4 Working with MDisks
7.4.1 MDisk by Pools panel
7.4.2 RAID action for MDisks
7.4.3 Other actions on MDisks
7.5 Working with Storage Pools
7.5.1 Create Pool
7.5.2 Actions on storage pools

Chapter 8. Advanced host and volume administration
8.1 Advanced host administration
8.1.1 Modifying Mappings menu
8.1.2 Unmapping volumes from a host
8.1.3 Duplicate Mappings option
8.1.4 Renaming a host
8.1.5 Deleting a host
8.1.6 Host properties
8.2 Adding and deleting host ports
8.2.1 Adding a host port
8.2.2 Adding a Fibre Channel port
8.2.3 Adding a SAS host port
8.2.4 Adding an iSCSI host port
8.2.5 Deleting a host port
8.3 Host mappings overview
8.3.1 Unmap Volumes
8.3.2 Properties (Host) option
8.3.3 Properties (Volume) option
8.4 Advanced volume administration
8.4.1 Advanced volume functions
8.4.2 Mapping a volume to a host
8.4.3 Unmapping volumes from all hosts
8.4.4 Viewing a host that is mapped to a volume
8.4.5 Duplicate Volume option
8.4.6 Renaming a volume
8.4.7 Shrinking a volume
8.4.8 Expanding a volume
8.4.9 Migrating a volume to another storage pool
8.4.10 Deleting a volume
8.5 Volume properties
8.5.1 Overview tab
8.5.2 Host Maps tab
8.5.3 Member MDisk tab
8.5.4 Adding a mirrored volume copy
8.5.5 Editing Thin-Provisioned volume properties
8.6 Advanced volume copy functions
8.6.1 Thin-provisioned
8.6.2 Splitting into a new volume
8.6.3 Validate Volume Copies option
8.6.4 Delete Volume Copy option
8.6.5 Migrating volumes by using the volume copy features
8.7 Volumes by Pool feature
8.8 Volumes by Host feature
Chapter 9. Easy Tier
9.1 Easy Tier overview
9.2 Easy Tier for IBM Storwize V3700
9.2.1 Tiered storage pools
9.3 Easy Tier process
9.3.1 I/O Monitoring
9.3.2 Data Placement Advisor
9.3.3 Data Migration Planner
9.3.4 Data Migrator
9.3.5 Easy Tier operating modes
9.3.6 Easy Tier rules
9.4 Easy Tier configuration using the GUI
9.4.1 Creating multitiered pools: Enable Easy Tier
9.4.2 Downloading Easy Tier I/O measurements
9.5 Easy Tier configuration using the CLI
9.5.1 Enabling Easy Tier evaluation mode
9.5.2 Enabling or disabling Easy Tier on single volumes
9.6 IBM Storage Tier Advisor Tool
9.6.1 Creating graphical reports
9.6.2 STAT reports
9.7 Tivoli Storage Productivity Center
9.7.1 Features of Tivoli Storage Productivity Center
9.7.2 Adding IBM Storwize V3700 in Tivoli Storage Productivity Center
9.8 Administering and reporting an IBM Storwize V3700 system through Tivoli Storage Productivity Center
9.8.1 Basic configuration and administration
9.8.2 Report Generation by using the GUI
9.8.3 Report Generation using Tivoli Storage Productivity Center web page

Chapter 10. Copy services
10.1 FlashCopy
10.1.1 Business requirements for FlashCopy
10.1.2 FlashCopy functional overview
10.1.3 Planning for FlashCopy
10.1.4 Managing FlashCopy by using the GUI
10.1.5 Managing FlashCopy mappings
10.1.6 Managing a FlashCopy consistency group
10.2 Remote Copy
10.2.1 Remote Copy license consideration
10.2.2 Remote Copy concepts
10.2.3 Global Mirror with Change Volumes
10.2.4 Remote Copy planning
10.3 Troubleshooting Remote Copy
10.3.1 1920 error
10.3.2 1720 error
10.4 Managing Remote Copy using the GUI
10.4.1 Managing cluster partnerships
10.4.2 Managing stand-alone Remote Copy relationships
10.4.3 Managing a Remote Copy consistency group

Chapter 11. RAS, monitoring, and troubleshooting
11.1 Reliability, availability, and serviceability on the IBM Storwize V3700
11.2 IBM Storwize V3700 components
11.2.1 Enclosure midplane assembly
11.2.2 Node canisters: Ports and LED
11.2.3 Node canister replaceable hardware components
11.2.4 Expansion canister: Ports and LED
11.2.5 Disk subsystem
11.2.6 Power Supply Unit
11.3 Configuration backup procedure
11.3.1 Generating a configuration backup using the CLI
11.3.2 Downloading a configuration backup using the GUI
11.4 Software upgrade
11.4.1 Upgrading software automatically
11.4.2 GUI upgrade process
11.4.3 Upgrading software manually
11.5 Event log
11.5.1 Managing the event log
11.5.2 Alert handling and recommended actions
11.6 Collecting support information
11.6.1 Support data via GUI
11.6.2 Support information via Service Assistant
11.6.3 Support Information onto USB stick
11.7 Powering on and shutting down IBM Storwize V3700
11.7.1 Shutting down the system
11.7.2 Powering on

Appendix A. Command-line interface setup and SAN Boot
Command-line interface
Basic setup
Example commands
SAN Boot
Enabling SAN Boot for Windows
Enabling SAN Boot for VMware
Windows SAN Boot migration

Related publications
IBM Redbooks
IBM Storwize V3700 publications
IBM Storwize V3700 support
Help from IBM
Index
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX
DS8000
Easy Tier
FlashCopy
IBM
Netfinity
Power Systems
Redbooks
Redbooks (logo)
Storwize
System i
System Storage
Tivoli
VIA
XIV
xSeries
The following terms are trademarks of other companies: VIA, and TEALEAF device are trademarks or registered trademarks of Tealeaf, an IBM Company. Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, or service names may be trademarks or service marks of others.
Preface
Organizations of all sizes are faced with the challenge of managing massive volumes of increasingly valuable data. But storing this data can be costly, and extracting value from the data is becoming more and more difficult. IT organizations have limited resources but must stay responsive to dynamic environments and act quickly to consolidate, simplify, and optimize their IT infrastructures. The IBM Storwize V3700 system provides a smarter solution that is affordable, easy to use, and self-optimizing, which enables organizations to overcome these storage challenges.

Storwize V3700 delivers efficient, entry-level configurations that are specifically designed to meet the needs of small and midsize businesses. Designed to provide organizations with the ability to consolidate and share data at an affordable price, Storwize V3700 offers advanced software capabilities that are usually found in more expensive systems. Built upon innovative IBM technology, Storwize V3700 addresses the block storage requirements of small and midsize organizations. Providing up to 240 TB of capacity packaged in a compact 2U, Storwize V3700 is designed to accommodate the most common storage network technologies to enable easy implementation and management.

Storwize V3700 includes the following features:
- Web-based GUI that provides point-and-click management capabilities.
- Internal disk storage virtualization that enables rapid, flexible provisioning and simple configuration changes.
- Thin provisioning that enables applications to grow dynamically but use only the space they actually need.
- Simple data migration from external storage to Storwize V3700 storage (one way from another storage device).
- Remote Mirror to create copies of data at remote locations for disaster recovery.
- IBM FlashCopy to create instant application copies for backup or application testing.

This IBM Redbooks publication is intended for pre- and post-sales technical support professionals and storage administrators. The concepts in this book also relate to the IBM Storwize V3500. This book was written at a software level of Version 7 Release 1.
Authors
This book was produced by a team of specialists from around the world working at the IBM Manchester Lab, UK.
Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he worked in the IBM Technical Support Center, providing Level 2/3 support for IBM storage products. Jon has over 27 years of experience in storage software and management, services, and support, and is an IBM Certified Consulting IT Specialist and an IBM SAN Certified Specialist. He is also the UK Chairman of the Storage Networking Industry Association.

Saiprasad Prabhakar Parkar is a senior IT Specialist for IBM at the ISTL Pune, India. He has worked for IBM for five years and provides Level 3 support for UNIX, IBM Power Systems, and storage products. Sai has 10 years of experience in UNIX, Power Systems, and storage, and is an IBM Certified Solution Specialist.
Lee Sirett is a Storage Technical Advisor for the European Storage Competency Centre in Mainz, Germany. Before joining the ESCC, he worked in IBM Technical Support Services for 10 years, providing support on a range of IBM products, including Power Systems. Lee has 24 years of experience in the IT industry. He is IBM Storage Certified and an IBM Certified XIV Administrator and Certified XIV Specialist.
Chris Tapsell is a Presales Storage Technical Specialist for IBM Systems & Technology Group. In his more than 25 years at IBM before his current role, he worked as a customer engineer covering products from golf ball typewriters up to the AS/400 (System i), as a Support Specialist for all of the IBM Intel server products (PC Server, Netfinity, xSeries, and System x), PCs, and notebooks, and as a Presales Technical Specialist for System x. Chris holds a number of IBM certifications covering System x and storage products.
Paulo Tomiyoshi Takeda is a SAN and Storage Disk specialist at IBM Brazil. He has over eight years of experience in the IT arena. He holds a bachelor's degree in Information Systems from UNIFEB (Universidade da Fundação Educacional de Barretos) and is IBM Certified for IBM DS8000 and IBM Storwize V7000. His areas of expertise include planning, configuring, and troubleshooting DS8000, SAN Volume Controller, and IBM Storwize V7000. He is involved in storage-related projects such as capacity growth planning, SAN consolidation, storage microcode upgrades, and copy services in the Open Systems environment.

Thanks to the following people for their contributions to this project:

Martyn Spink
Djihed Afifi
Karl Martin
Imran Imtiaz
Doug Neil
David Turnbull
Stephen Bailey
IBM Manchester Lab

John Fairhurst
Paul Marris
Paul Merrison
IBM Hursley

Mary Connell
IBM STG

Thanks to the following authors of the previous edition of this book:

Uwe Dubberke
Justin Heather
Andrew Hickey
Imran Imtiaz
Nancy Kinney
Dieter Utesch
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at this website: http://www.ibm.com/redbooks
- Send your comments in an email to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD, Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes that were made in this edition of the book and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-8107-01, Implementing the IBM Storwize V3700, as created or updated on October 9, 2013.
New information
FlashCopy
Remote Mirror
Easy Tier
Changed information
Screen shots updated to Version 7.1
CLI command issuance and output
Chapter 1. Overview of the IBM Storwize V3700 system
The IBM Storwize V3700 is a virtualized storage solution that groups its internal drives into RAID arrays, which are called Managed Disks (MDisks). These MDisks are then grouped into Storage Pools, and volumes are created from these Storage Pools and provisioned out to hosts. Storage Pools are normally created from MDisks with the same drive type and capacity. Volumes can be moved non-disruptively between Storage Pools with differing performance characteristics. For example, a volume can be moved from a Storage Pool made up of NL-SAS drives to a Storage Pool made up of SAS drives.

The IBM Storwize V3700 system also provides several configuration options that are aimed at simplifying the implementation process. It also provides configuration presets and automated wizards, called Directed Maintenance Procedures (DMP), to help resolve any events that might occur.

The IBM Storwize V3700 system provides a simple and easy-to-use graphical user interface (GUI) that is designed to allow storage to be deployed quickly and efficiently. The GUI runs on any supported browser. The management GUI contains a series of pre-established configuration options, called presets, that use commonly used settings to quickly configure objects on the system. Presets are available for creating volumes and IBM FlashCopy mappings and for setting up a RAID configuration. You can also use the command-line interface (CLI) to set up or control the system.
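As a brief illustration of the CLI, the following commands, issued over an SSH connection to the system management IP address, display the system properties, the configured storage pools, and the defined volumes. This is a minimal sketch: the IP address and user name are example values only, and the prompt varies with the configured system name.

   ssh superuser@192.168.1.120
   IBM_Storwize:V3700:superuser> lssystem      (displays system-wide properties)
   IBM_Storwize:V3700:superuser> lsmdiskgrp    (lists the configured storage pools)
   IBM_Storwize:V3700:superuser> lsvdisk       (lists the defined volumes)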
Table 1-1 IBM Storwize V3700 terminology

Chain: A set of enclosures that are attached to provide redundant access to the drives that are inside the enclosures.
Clone: A copy of a volume on a server at a particular point in time. The contents of the copy can be customized while the contents of the original volume are preserved.
Event: An occurrence that is significant to a task or system. Events can include completion or failure of an operation, a user action, or the change in the state of a process.
Expansion canister: A hardware unit that includes the serial-attached SCSI (SAS) interface hardware that enables the node hardware to use the drives of the expansion enclosure.
Expansion enclosure: A hardware unit that includes expansion canisters, drives, and power supply units.
Fibre Channel port: Fibre Channel ports are connections for the hosts to get access to the IBM Storwize V3700.
Host mapping: The process of controlling which hosts can access specific volumes within an IBM Storwize V3700.
iSCSI (Internet Small Computer System Interface): An Internet Protocol (IP)-based storage networking standard for linking data storage facilities.
Internal storage: Array MDisks and drives that are held in enclosures and nodes that are part of the IBM Storwize V3700.
Managed disk (MDisk): A component of a storage pool that is managed by a clustered system. An MDisk is part of a RAID array of internal storage. An MDisk is not visible to a host system on the storage area network.
Node canister: A hardware unit that includes the node hardware, fabric and service interfaces, serial-attached SCSI (SAS) expansion ports, and battery.
PHY: A single SAS lane. There are four PHYs in each SAS cable.
Power supply unit (PSU): Each enclosure has two power supply units.
Quorum disk: A disk that contains a reserved area that is used exclusively for cluster management. The quorum disk is accessed when it is necessary to determine which half of the cluster continues to read and write data.
SAS port: SAS ports are connections for the host to get direct-attached access to the IBM Storwize V3700 and expansion enclosure.
Snapshot: An image backup type that consists of a point-in-time view of a volume.
Storage pool: A collection of storage capacity that provides the capacity requirements for a volume.
Strand: The SAS connectivity of a set of drives within multiple enclosures. The enclosures can be control enclosures or expansion enclosures.
Thin provisioning: The ability to define a storage unit (full system, storage pool, or volume) with a logical capacity size that is larger than the physical capacity that is assigned to that storage unit.
Turbo Performance: A feature that increases the system maximum IOPS and maximum throughput.
Volume: A discrete unit of storage on disk, tape, or other data recording medium that supports some form of identifier and parameter list, such as a volume label or input/output control.
Worldwide port name (WWPN): Each Fibre Channel or SAS port is identified by its physical port number and worldwide port name (WWPN).
The IBM Storwize V3700 models are described in Table 1-2. C models are control enclosures, each with two node canisters; E models are expansion enclosures, each with two expansion canisters.
Table 1-2 IBM Storwize V3700 models

Model                               | Total system cache         | Drive slots
2072-12C (with two node canisters)  | 8 GB, upgradeable to 16 GB | 12 x 3.5-inch per enclosure
2072-24C (with two node canisters)  | 8 GB, upgradeable to 16 GB | 24 x 2.5-inch per enclosure
2072-12E (one expansion enclosure)  | N/A                        | 12 x 3.5-inch
2072-24E (one expansion enclosure)  | N/A                        | 24 x 2.5-inch
Figure 1-1 shows the front view of the 2072-12C and 12E enclosures.
Figure 1-1 IBM Storwize V3700 front view for 2072-12C and 12E enclosures
The drives are positioned in four columns of three horizontal-mounted drive assemblies. The drive slots are numbered 1 - 12, starting at upper left and going left to right, top to bottom.
Figure 1-2 shows the front view of the 2072-24C and 24E enclosures.
Figure 1-2 IBM Storwize V3700 front view for 2072-24C and 24E enclosure
The drives are positioned in one row of 24 vertically mounted drive assemblies. The drive slots are numbered 1 - 24, starting from the left. There is a vertical center drive bay molding between slots 12 and 13.
Figure 1-4 shows the controller rear view of IBM Storwize V3700 models 12C and 24C.
Figure 1-4 IBM Storwize V3700 controller rear view - models 12C and 24C
In Figure 1-4, you can see the two power supply slots at the bottom of the enclosure. The power supply units are identical and interchangeable. There are two canister slots at the top of the chassis. Figure 1-5 shows the rear view of an IBM Storwize V3700 expansion enclosure.
Figure 1-5 IBM Storwize V3700 expansion enclosure rear view - models 12E and 24E
You can see that the only difference between the control enclosure and the expansion enclosure is the canisters. The canisters of the expansion enclosure have only two SAS ports. For more information, see "Expansion enclosure" on page 10.
Each node canister contains the following hardware:
- Battery
- Memory: 4 GB (upgradeable to 8 GB)
- Host Interface Card slot (different options are possible)
- Four 6 Gbps SAS ports
- Two 10/100/1000 Mbps Ethernet ports
- Two USB 2.0 ports (one port is used during installation)
- System flash

The battery is used in case of power loss. The IBM Storwize V3700 system uses this battery to power the canister while the cache data is written to the internal system flash. This memory dump is called a fire hose memory dump. After the system is up again, this data is loaded back to the cache for destage to the disks.

Figure 1-6 on page 8 also shows the following ports, which are provided by the IBM Storwize V3700 node canister:
- Two 10/100/1000 Mbps Ethernet ports. Port 1 (the left port) must be configured; the second port is optional and is used for management. Both ports can be used for iSCSI traffic. For more information, see Chapter 4, "Host configuration" on page 151.
- Two USB ports. One port is used during the initial configuration or when there is a problem. They are numbered 1 on the left and 2 on the right. For more information about their usage, see Chapter 2, "Initial configuration" on page 27.
- Four serial-attached SCSI (SAS) ports. These ports are numbered 1 on the left to 4 on the right. The IBM Storwize V3700 uses ports 1, 2, and 3 for host connectivity and port 4 to connect to the optional expansion enclosures. The IBM Storwize V3700 incorporates one SAS chain, and up to four expansion enclosures can be connected to this chain.
- Service port: Do not use the port that is marked with a wrench. This port is a service port only.

The two nodes act as a single processing unit and form an I/O Group that is attached to the SAN fabric, an iSCSI infrastructure, or directly to hosts via FC or SAS. The pair of nodes is responsible for serving I/O to a volume. The two nodes provide a highly available, fault-tolerant controller so that if one node fails, the surviving node automatically takes over. Nodes are deployed in a pair that is called an I/O Group. One node is designated as the configuration node, but each node in the control enclosure holds a copy of the control enclosure state information. The IBM Storwize V3700 supports only one I/O Group in a clustered system. The terms node canister and node are used interchangeably throughout this book.
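A quick way to confirm the state of both node canisters and to identify the configuration node is the lsnodecanister CLI command. The output that is shown here is abbreviated and illustrative only; the node names depend on the configuration:

   IBM_Storwize:V3700:superuser> lsnodecanister
   id  name   status  IO_group_name  config_node
   1   node1  online  io_grp0        yes
   2   node2  online  io_grp0        no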
The expansion enclosure power supplies are the same as the control enclosure. There is a single power lead connector on each power supply unit. Figure 1-8 shows the expansion canister ports.
Each expansion canister that is shown in Figure 1-8 provides two SAS interfaces that are used to connect to the control enclosure and any optional expansion enclosures. The ports are numbered 1 on the left and 2 on the right. SAS port 1 is the IN port and SAS port 2 is the OUT port. The use of SAS connector 1 is mandatory because the expansion enclosure must be attached to a control enclosure or another expansion enclosure. SAS connector 2 is optional because it is used to attach to more expansion enclosures. Each port includes two LEDs to show the status. The first LED indicates the link status and the second LED indicates the fault status. For more information about the LEDs and ports, see this website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/topic/com.ibm.storwize.v3700.710.doc/tbrd4_expcanindi.html
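After an expansion enclosure is cabled, its presence and status can be verified from the CLI before its drives are used. This is a sketch; the enclosure and canister IDs are examples only:

   IBM_Storwize:V3700:superuser> lsenclosure                        (lists all enclosures with status and type)
   IBM_Storwize:V3700:superuser> lsenclosurecanister -canister 1 2  (shows detail for canister 1 in enclosure 2)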
1.5.1 Hosts
A host system is a server that is connected to IBM Storwize V3700 through a Fibre Channel connection, an iSCSI connection, or a SAS connection. Hosts are defined on IBM Storwize V3700 by identifying their worldwide port names (WWPNs) for Fibre Channel and SAS hosts. iSCSI hosts are identified by using their iSCSI names, which can be iSCSI qualified names (IQNs) or extended unique identifiers (EUIs). For more information, see Chapter 4, "Host configuration" on page 151. Hosts can be attached through Fibre Channel by using an existing Fibre Channel network infrastructure or by direct attachment, through iSCSI or FCoE by using an existing IP network, or directly through SAS. A significant benefit of direct attachment is that you can attach the host directly to the IBM Storwize V3700 without the need for an FC or IP network.
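As an illustration, a host object for each attachment type can be created with the mkhost CLI command by supplying the appropriate identifier. The host names, WWPNs, and IQN that are shown here are placeholders only:

   IBM_Storwize:V3700:superuser> mkhost -name FC_Host01 -fcwwpn 2100000E1E30E597
   IBM_Storwize:V3700:superuser> mkhost -name iSCSI_Host01 -iscsiname iqn.1991-05.com.microsoft:winhost01
   IBM_Storwize:V3700:superuser> mkhost -name SAS_Host01 -saswwpn 500062B200556140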
The IBM Storwize V3700 I/O Group can be connected to the SAN so that all application servers can access the volumes from the I/O Group. Up to 256 host server objects can be defined to the IBM Storwize V3700.

Important: The active/active architecture allows both control nodes to process I/Os and allows the application to continue running smoothly, even if the server has only one access route or path to the storage controller. This type of architecture eliminates the path/LUN thrashing that is typical of an active/passive architecture.
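Host objects can also be defined by using the CLI. The following commands are a minimal sketch; the host names, WWPNs, and IQN shown here are placeholders rather than values from a real configuration:

svctask mkhost -name W2K8HOST -fcwwpn 2100000E1E30B0A0:2100000E1E30B0A1
svctask mkhost -name ESXHOST -iscsiname iqn.1998-01.com.vmware:esxhost01

The first command creates a Fibre Channel host object that is identified by its WWPNs; the second creates an iSCSI host object that is identified by its IQN.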
1.5.5 RAID
The IBM Storwize V3700 contains a number of internal drives, but these drives cannot be added directly to storage pools. The drives must be included in a Redundant Array of Independent Disks (RAID) array to provide protection against the failure of individual drives. These drives are referred to as members of the array. Each array has a RAID level. RAID levels provide different degrees of redundancy and performance, and have different restrictions regarding the number of members in the array. The IBM Storwize V3700 supports hot spare drives. When an array member drive fails, the system automatically replaces the failed member with a hot spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives can be manually exchanged with array members. Each array has a set of goals that describe the required location and performance of each array member. A sequence of drive failures and hot spare takeovers can leave an array unbalanced, that is, with members that do not match these goals. The system automatically rebalances such arrays when the appropriate drives are available. The following RAID levels are available:
- RAID 0 (striping, no redundancy)
- RAID 1 (mirroring between two drives, implemented as RAID 10 with two drives)
- RAID 5 (striping, can survive one drive fault, with parity)
- RAID 6 (striping, can survive two drive faults, with parity)
- RAID 10 (RAID 0 on top of RAID 1)

RAID 0 arrays stripe data across the drives. The system supports RAID 0 arrays with one member, which is similar to a traditional JBOD attach. RAID 0 arrays have no redundancy, so they do not support hot spare takeover or immediate exchange. A RAID 0 array can be formed by one to eight drives.

RAID 1 arrays stripe data over mirrored pairs of drives. A RAID 1 array mirrored pair is rebuilt independently. A RAID 1 array can be formed by two drives only.

RAID 5 arrays stripe data over the member drives with one parity strip on every stripe. RAID 5 arrays have single redundancy: the parity algorithm means that an array can tolerate no more than one member drive failure. A RAID 5 array can be formed by 3 - 16 drives.

RAID 6 arrays stripe data over the member drives with two parity strips (known as the P-parity and the Q-parity) on every stripe. The two parity strips are calculated by using different algorithms, which gives the array double redundancy. A RAID 6 array can be formed by 5 - 16 drives.

RAID 10 arrays have single redundancy. Although they can tolerate one failure from every mirrored pair, they cannot tolerate two failures in the same mirrored pair. One member out of every pair can be rebuilding or missing at the same time. A RAID 10 array can be formed by 2 - 16 drives.
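For illustration, an array can also be created from the CLI. This is a minimal sketch; the drive IDs and the pool name are placeholders:

svctask mkarray -level raid5 -drive 0:1:2:3 Pool0

This command builds a RAID 5 array MDisk from drives 0 - 3 and adds it to the storage pool Pool0.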
If the environment has multiple storage systems, you should allocate the quorum disks on different storage systems to avoid the possibility of losing all of the quorum disks because of the failure of a single storage system. It is possible to manage the quorum disks by using the CLI.
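For example, the quorum assignments can be listed and changed with CLI commands along the following lines; the drive ID used here is a placeholder:

svcinfo lsquorum
svctask chquorum -drive 5 1

The first command lists the current quorum devices; the second assigns drive 5 as quorum device index 1.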
The effect of extent size on the maximum volume and cluster size is shown in Table 1-5. The maximum cluster capacity scales linearly with the extent size because the system can manage a fixed maximum number of extents (4,194,304); for example, 16 MB × 4,194,304 extents = 64 TB.
Table 1-5 Maximum volume and cluster capacity by extent size

Extent size (MB)   Maximum volume capacity for normal volumes (GB)   Maximum storage capacity of cluster
16                 2048 (2 TB)                                       64 TB
32                 4096 (4 TB)                                       128 TB
64                 8192 (8 TB)                                       256 TB
128                16384 (16 TB)                                     512 TB
256                32768 (32 TB)                                     1 PB
512                65536 (64 TB)                                     2 PB
1024               131072 (128 TB)                                   4 PB
2048               262144 (256 TB)                                   8 PB
4096               262144 (256 TB)                                   16 PB
8192               262144 (256 TB)                                   32 PB
Use the same extent size for all storage pools in a clustered system, which is a prerequisite if you want to migrate a volume between two storage pools. If the storage pool extent sizes are not the same, you must use volume mirroring to copy volumes between storage pools, as described in Chapter 8, Advanced host and volume administration on page 353. A storage pool can have a threshold warning set that automatically issues a warning alert when the used capacity of the storage pool exceeds the set limit.
1.5.9 Volumes
A volume is a logical disk that is presented to a host system by the clustered system. In our virtualized environment, the host system has a volume that is mapped to it by the IBM Storwize V3700. The IBM Storwize V3700 translates this volume into a number of extents, which are allocated across MDisks. The advantage of storage virtualization is that the host is decoupled from the underlying storage, so the virtualization appliance can move the extents around without impacting the host system. The host system cannot directly access the underlying MDisks in the same manner as it can access RAID arrays in a traditional storage environment. The following types of volumes are available:

Striped. A striped volume is allocated one extent in turn from each MDisk in the storage pool. This process continues until the space that is required for the volume is satisfied. It also is possible to supply a list of MDisks to use. Figure 1-9 shows how a striped volume is allocated, assuming that 10 extents are required.
Sequential A sequential volume is a volume in which the extents are allocated one after the other from one MDisk to the next MDisk, as shown in Figure 1-10.
Image mode Image mode volumes are special volumes that have a direct relationship with one MDisk. They are used to migrate existing data into and out of the clustered system to or from external FC SAN-attached storage. When the image mode volume is created, a direct mapping is made between extents that are on the MDisk and the extents that are on the volume. The logical block address (LBA) x on the MDisk is the same as the LBA x on the volume, which ensures that the data on the MDisk is preserved as it is brought into the clustered system, as shown in Figure 1-11 on page 19.
Some virtualization functions are not available for image mode volumes, so it is often useful to migrate the volume into a new storage pool. After it is migrated, the MDisk becomes a managed MDisk. If you want to migrate data from an existing storage subsystem, use the Storage Migration wizard, which guides you through the process. For more information, see Chapter 6, Storage migration wizard on page 261. If you add an MDisk containing data to a storage pool, any data on the MDisk is lost. If you are presenting externally virtualized LUNs that contain data to an IBM Storwize V3700, import them as image mode volumes to ensure data integrity or use the migration wizard.
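For illustration, a striped volume and an image mode import can both be created from the CLI. This is a minimal sketch; the pool, MDisk, and volume names are placeholders:

svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name STRIPED_VOL
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -vtype image -mdisk mdisk5 -name IMAGE_VOL

The first command creates a 100 GB striped volume; the second creates an image mode volume that maps one-to-one onto mdisk5 so that the existing data on the MDisk is preserved.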
1.5.10 iSCSI
iSCSI is an alternative method of attaching hosts to the IBM Storwize V3700. The iSCSI function is a software function that is provided by the IBM Storwize V3700 code, not hardware. In the simplest terms, iSCSI allows the transport of SCSI commands and data over an Internet Protocol network that is based on IP routers and Ethernet switches. iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and uses an existing IP network, instead of requiring FC HBAs and a SAN fabric infrastructure. The following concepts of names and addresses are carefully separated in iSCSI: An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms initiator name and target name also refer to an iSCSI name.
An iSCSI address specifies the iSCSI name of an iSCSI node and a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An IBM Storwize V3700 node represents an iSCSI node and provides statically allocated IP addresses. Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN), which can have a size of up to 255 bytes. The IQN is formed according to the rules that were adopted for Internet nodes. The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An alias can be assigned to an initiator or a target. The IBM Storwize V3700 also supports the use of the FCoE protocol, which encapsulates the native Fibre Channel frames into Ethernet frames.
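As an illustration of the convention, the IQN of an IBM Storwize node canister typically takes a form like the first line that follows, and a Windows host initiator defaults to a Microsoft-rooted IQN like the second line; the system, node, and host names are placeholders:

iqn.1986-03.com.ibm:2145.systemname.nodename
iqn.1991-05.com.microsoft:hostname.domain.com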
1.5.11 SAS
The SAS standard is an alternative method of attaching hosts to the IBM Storwize V3700. The IBM Storwize V3700 supports direct SAS host attachment, which provides an easy-to-use and affordable storage solution. Each SAS port device has a worldwide unique 64-bit SAS address.
It is possible to demonstrate the potential benefit of Easy Tier in your environment before SSDs are installed by using the IBM Storage Advisor Tool. Easy Tier is described in more detail in Chapter 10, Copy services on page 449. The IBM Easy Tier feature is licensed per Storwize V3700 storage system.
1.6.6 FlashCopy
FlashCopy copies a source volume onto a target volume. The original contents of the target volume are lost. After the copy operation starts, the target volume has the contents of the source volume as it existed at a single point in time. Although the copy operation takes time, the resulting data at the target appears as though the copy was made instantaneously. FlashCopy is sometimes described as an instance of a time-zero (T0) copy or a point-in-time (PiT) copy technology.

FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying target volumes from their respective source volumes. The IBM Storwize V3700 also permits multiple target volumes to be FlashCopied from the same source volume. This capability can be used to create images from separate points in time for the source volume, and to create multiple images from a source volume at a common point in time. Source and target volumes can be thin-provisioned volumes.

Reverse FlashCopy enables target volumes to become restore points for the source volume without breaking the FlashCopy relationship and without waiting for the original copy operation to complete. The IBM Storwize V3700 supports multiple targets and thus multiple rollback points.

The base FlashCopy feature, which requires no license, provides up to 64 mappings. An optional license is available to upgrade the number of FlashCopy mappings to up to 2,040.
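As a sketch of the equivalent CLI operations, a FlashCopy mapping can be created and started as follows; the volume names are placeholders, -copyrate 0 selects a snapshot-style mapping with no background copy, and fcmap0 is the name that the system typically assigns to the first mapping:

svctask mkfcmap -source VOL1 -target VOL1_SNAP -copyrate 0
svctask startfcmap -prep fcmap0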
These videos apply not only to the IBM Storwize V3700 but also to the IBM Storwize V7000 because the GUI, functions, and features are similar in both products.
Chapter 2.
Initial configuration
This chapter provides a description of the initial configuration steps for the IBM Storwize V3700. This chapter includes the following topics:
- Planning for IBM Storwize V3700 installation
- First-time setup
- Initial configuration steps
- Call Home, email event alert, and inventory settings
IP addresses: A fourth IP address should be used for backup configuration access. This other IP address allows a second system IP address to be configured on port 2 of either node canister, which the storage administrator can also use for management of the IBM Storwize V3700 system.

A minimum of one and up to four IPv4 or IPv6 addresses are needed if iSCSI-attached hosts access volumes from the IBM Storwize V3700.

A single 1, 3, or 6 meter SAS cable per expansion enclosure is required. The length of the cables depends on the physical rack location of the expansion enclosure relative to the control enclosure or other expansion enclosures. Locate the control enclosure in the rack so that up to four expansion enclosures can be connected, as shown in Figure 2-1. The IBM Storwize V3700 supports one external SAS chain that uses SAS port 4 on the control enclosure node canisters.
The following connections must be made: Connect SAS port 4 of the left node canister in the control enclosure to SAS port 1 of the left expansion canister in the first expansion enclosure.
Connect SAS port 4 of the right node canister in the control enclosure to SAS port 1 of the right expansion canister in the first expansion enclosure.

Connect SAS port 2 of the left expansion canister in the first expansion enclosure to SAS port 1 of the left expansion canister in the second expansion enclosure.

Connect SAS port 2 of the right expansion canister in the first expansion enclosure to SAS port 1 of the right expansion canister in the second expansion enclosure.

Continue in this fashion, adding expansion enclosures on the SAS chain that originates at port 4 on the control enclosure node canisters.

Disk drives: The disk drives that are included with the control enclosure (model 2072-12C or 2072-24C) are part of the single SAS chain. The expansion enclosures should be connected to the SAS chain as shown in Figure 2-1 on page 29 so that they can use the full bandwidth of the system.
Figure 2-2 shows how to cable devices to the SAN. Refer to this example as we describe the zoning.
Create a host/IBM Storwize V3700 zone for each server to which volumes are mapped, as shown in the following examples in Figure 2-2:
- Zone Host 1 port 1 (HBA 1) with both node canister ports 1
- Zone Host 1 port 2 (HBA 2) with both node canister ports 2
- Zone Host 2 port 1 (HBA 1) with both node canister ports 3
- Zone Host 2 port 2 (HBA 2) with both node canister ports 4

Similar zones should be created for all other hosts with volumes on the IBM Storwize V3700.

Verify the interoperability of the SAN switches or directors to which the IBM Storwize V3700 connects by starting at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004388

Ensure that the switches or directors are at firmware levels that are supported by the IBM Storwize V3700.

Important: The IBM Storwize V3700 port login maximum that is listed in the restriction document must not be exceeded. The document is available at this website:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1004380
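On Brocade switches, for example, such a zone can be created with Fabric OS commands along the following lines; the zone name, configuration name, and WWPNs are placeholders:

zonecreate "HOST1_P1_V3700", "10:00:00:00:c9:12:34:56; 50:05:07:68:03:04:aa:01; 50:05:07:68:03:04:bb:01"
cfgadd "PROD_CFG", "HOST1_P1_V3700"
cfgenable "PROD_CFG"

This zones host port 1 with one port from each node canister and then activates the updated zoning configuration.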
Connectivity issues: If you have any connectivity issues between IBM Storwize V3700 ports and Brocade SAN Switches or Directors at 8 Gbps, see this website for the correct setting of the fillword port config parameter in the Brocade operating system: http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003699
IBM Storwize V3700 can be used with a direct attach Fibre Channel host configuration. The recommended configuration for direct attachment is to have at least one Fibre Channel cable from the host connected to each node of the IBM Storwize V3700 to provide redundancy in the event one of the nodes goes offline, as shown in Figure 2-3.
Verify direct attach interoperability with the IBM Storwize V3700 and the supported server operating systems by following the requirements that are provided at this website: http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
Although it is possible to attach six hosts, one to each of the three available SAS ports on the two node canisters, the recommended configuration for direct attachment is to have at least one SAS cable from the host connected to each node of the IBM Storwize V3700. This configuration provides redundancy in the event one of the nodes goes offline, as shown in Figure 2-5.
IP management addresses: The IP management address that is shown on Node Canister 1 in Table 2-1 on page 34 is an address on the configuration node; in case of failover, this address transfers to Node Canister 2 and this node canister becomes the new configuration node. The management addresses are managed by the configuration node canister only (1 or 2; in this case by Node Canister 1).
Figure 2-6 shows a logical view of the Ethernet ports that are available for configuration of the one or two management IP addresses. These IP addresses are for the clustered system and therefore are associated with only one node, which is then considered the configuration node.
Verify that the hosts that access volumes from the IBM Storwize V3700 meet the requirements that are found at this website:
http://www-947.ibm.com/support/entry/portal/overview/hardware/system_storage/disk_systems/entry-level_disk_systems/ibm_storwize_v3700

Multiple operating systems are supported by the IBM Storwize V3700. For more information about HBA, driver, and multipath combinations, see this website:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss

As per the IBM System Storage Interoperation Center (SSIC), keep the following items under consideration:
- Host operating systems are at the levels that are supported by the IBM Storwize V3700.
- HBA BIOS, device drivers, firmware, and multipathing drivers are at the levels that are supported by the IBM Storwize V3700.
- If boot from SAN is required, ensure that it is supported for the operating systems that are deployed.
- If host clustering is required, ensure that it is supported for the operating systems that are deployed.
- All direct-connect hosts should have the HBA set to point-to-point.

For more information about host configuration, see Chapter 4, Host configuration on page 151.
IP address of the SNMP server to direct alerts to, if required (for example, operations or Help desk).

After the IBM Storwize V3700 initial configuration, you might want to add more users who can manage the system. You can create as many users as you need, but the following roles generally are configured for users:
- Security Admin
- Administrator
- CopyOperator
- Service
- Monitor

The user in the Security Admin role can perform any function on the IBM Storwize V3700. The user in the Administrator role can perform any function on the IBM Storwize V3700 system, except create users.

User creation: The create users function is allowed by the Security Admin role only and should be limited to as few users as possible.

The user in the CopyOperator role can view anything in the system, but the user can configure and manage only the copy functions of the FlashCopy capabilities. The user in the Monitor role can view object and system configuration information but cannot configure, manage, or modify any system resource. The only other role that is available is the Service role, which is used if you create a user ID for the IBM service representative. This role allows IBM service personnel to view anything on the system (as with the Monitor role) and perform service-related commands, such as adding a node back to the system after it is serviced or including disks that were excluded. For more information about creating users, see Chapter 3, Graphical user interface overview on page 73.
After the initial configuration that is described in 2.9, Initial configuration on page 50 is completed, the IBM Storwize V3700 Welcome window opens, as shown in Figure 2-8.
1 13 online online yes 15
1 14 online online yes 13
1 15 online online yes 16
1 16 online online yes 19
1 17 online online yes 1
1 18 online online yes 3
1 19 online online yes 6
1 20 online online yes 0
1 21 online online yes 4
1 22 online online yes 7
1 23 online online yes 2
1 24 online online yes 5
IBM_Storwize:mcr-atl-cluster-01:superuser>
The initial IBM Storwize V3700 system setup should be done by using the process and tools that are described in 2.8, First-time setup.
We use Windows in the following examples. Complete the following steps to perform the initial setup by using the USB key: 1. Plug the USB key into a Windows system and start the initialization tool. If the system is configured to autorun USB keys, the initialization tool starts automatically; otherwise, open My Computer and double-click the InitTool.bat file. The opening window of the tool is shown in Figure 2-9. After the tool is started, select Next and select Create a new system.
Mac OS or Linux: For Mac OS or Linux, complete the following steps:
a. Open a terminal window.
b. Locate the root directory of the USB flash drive. For Mac systems, the root directory is often in the /Volumes/ directory. For Linux systems, the root directory is often in the /media/ directory. If an automatic mount system is used, the root directory can be located by entering the mount command.
c. Change to the root directory of the flash drive.
d. Enter: sh InitTool.sh
There are other options available through the Tasks section; however, these are generally only required after initial configuration. There is the option to reset the superuser password or set the service IP of a node canister. Selecting Next (as shown in Figure 2-10) progresses through the initial configuration of the IBM Storwize V3700.
4. Click Apply and Next to show the IBM Storwize V3700 power up instructions, as shown in Figure 2-12.
Any expansion enclosures that are part of the system should be powered up and allowed to come ready before the control enclosure. Follow the instructions to power up the IBM Storwize V3700 and wait for the status LED to flash. Then insert the USB stick in one of the USB ports on the left node canister. This node becomes the control node and the other node is the partner node. The fault LED begins to flash. When it stops, return the USB stick to the Windows PC. Clustered system creation: While the clustered system is created, the amber fault LED on the node canister flashes. When this LED stops flashing, remove the USB key from IBM Storwize V3700 and insert it in your system to check the results.
The wizard then attempts to verify connectivity to the IBM Storwize V3700, as shown in Figure 2-13.
If successful, a summary page is displayed that shows the settings that were applied to the IBM Storwize V3700, as shown in Figure 2-14.
If the connectivity to the IBM Storwize V3700 cannot be verified, the wizard shows an error message, as shown in Figure 2-15.
Follow the instructions to resolve any issues. The wizard assumes that the system you are using can connect to the IBM Storwize V3700 through the network. If it cannot, you must follow step 1 from a machine that does have network access to the IBM Storwize V3700. After the initialization process completes successfully, click Finish. 5. The initial setup is now complete. If you have a network connection to the IBM Storwize system, the wizard redirects you, as shown in Figure 2-16 on page 49.
We review the initial system configuration by using the GUI in 2.9, Initial configuration on page 50.
2. After you are logged in, a welcome window opens, as shown in Figure 2-18.
Click Next to start the configuration wizard. 3. The first window that is opened is the Licensed Functions window, as shown in Figure 2-19.
The IBM Storwize V3700 includes a trial license for Easy Tier, Remote Copy, and Turbo Performance. Additionally, there is an upgrade option for FlashCopy to increase the number of FlashCopy mappings from the standard 64 to a maximum of 2,048. In our example, the trial licenses for Remote Copy and Easy Tier are active and show their expiration date. 4. Set up the system name as shown in Figure 2-20.
5. There are two options for configuring the date and time, as shown in Figure 2-21.
Select the required method and enter the date and time manually, or specify a network address for a Network Time Protocol (NTP) server. After this is complete, click Apply and Next to continue. 6. The configuration wizard continues with the hardware configuration. Verify the hardware, as shown in Figure 2-22 on page 54.
Click Apply and Next. 7. The next window in the configuration process is setting up Call Home, as shown in Figure 2-23.
It is possible to configure your system to send email reports to IBM if an issue that requires hardware replacement is detected. This function is called Call Home. When this email is received, IBM automatically opens a problem report and contacts you to verify whether replacement parts are required.

Call Home: When Call Home is configured, the IBM Storwize V3700 automatically creates a Support Contact with one of the following email addresses, depending on the country or region of installation:
- US, Canada, Latin America, and Caribbean Islands: callhome1@de.ibm.com
- All other countries or regions: callhome0@de.ibm.com

IBM Storwize V3700 can use Simple Network Management Protocol (SNMP) traps, syslog messages, and Call Home email to notify you and the IBM Support Center when significant events are detected. Any combination of these notification methods can be used simultaneously. To set up Call Home, you need the location details of the IBM Storwize V3700, the storage administrator's details, and at least one valid SMTP server IP address. If you do not want to configure Call Home now, it is possible to do it later by using the GUI option Settings → Event Notification (for more information, see 2.9.2, Configure Call Home, email alert, and inventory on page 66). If your system is under warranty or you have a hardware maintenance agreement, it is recommended that Call Home be configured to enable proactive support of the IBM Storwize V3700. Select Yes and click Next to move to the window in which you can enter the location details, as shown in Figure 2-24.
These details are included in the Call Home data so that IBM Support can correctly identify where the IBM Storwize V3700 is located.
Important: Unless the IBM Storwize V3700 is located in the US, the state or province box should be filled in with XX. Follow the instructions for correct entries for locations inside the US. The next window allows you to enter the contact details of the main storage administrator, as shown in Figure 2-25. You can choose to enter the details for a 24-hour operations desk. These details also are sent with any Call Home, which allows IBM Support to contact the correct people quickly to process any issues.
The next window shows the details of the email server. To enter more than one email server, click the green + icon, as shown in Figure 2-26 on page 57.
The IBM Storwize V3700 also has the option to configure local email alerts. These can be sent to a storage administrator or to an email alias for a team of administrators or operators. To add more than one recipient, click the green + icon, as shown in Figure 2-27.
Click Apply and Next to show the summary window for the call home options, as shown in Figure 2-28.
Click Apply and Next to move on to Configure Storage. 8. The initial configuration wizard moves on to the Configure Storage option next. This option takes all of the disks in the IBM Storwize V3700 and automatically configures them into optimal RAID arrays for use as MDisks. If you do not want to configure disks automatically now, select No and you exit the wizard to the IBM Storwize V3700 GUI, as shown in Figure 2-29 on page 59.
Select Yes and click Next to move to the summary window that shows the RAID configuration that the IBM Storwize V3700 implements, as shown in Figure 2-30.
The storage pool is created when you click Finish. Depending on the disks available, this process might take some time to complete as a background task.
Closing the task box completes the Initial Configuration wizard. From the GUI, select the Overview option, as shown in Figure 2-31.
From the Overview, you can select the Suggested Tasks, as shown in Figure 2-32.
From here, you can configure host access and iSCSI, create volumes, set up Call Home (if it was not already done through the setup wizard), and configure remote access authentication. More information about each of these tasks can be found later in this publication.
2. A message appears that informs you to check and confirm cabling and power to the new expansion enclosure. Click Next to continue, as shown in Figure 2-34.
3. A task runs and completes to discover the new hardware, as shown in Figure 2-35. Click Close to continue.
4. A window opens that shows the details of the new hardware to be added, as shown in Figure 2-36 on page 64. There is an option to identify the new enclosure by flashing the identify light and another option to view the SAS chain that relates to the enclosure.
5. To add the enclosure, highlight it and click Finish, as shown in Figure 2-37.
6. The new expansion enclosure now shows up as part of the cluster that is attached to the control enclosure, as shown in Figure 2-38.
For more information about how to provision the new storage in the expansion enclosure, see Chapter 7, Storage pools on page 313.
The wizard that is used to configure Call Home starts, as shown in Figure 2-41.
3. You are prompted to enter the system details, contact details, event notification details, and email server details, as previously shown in Figure 2-23 on page 54, Figure 2-24 on page 55, Figure 2-25 on page 56, Figure 2-26 on page 57, Figure 2-27 on page 57, and Figure 2-28 on page 58.
The Service Assistant (SA) tool is a web-based GUI that is used to service individual node canisters, primarily when a node has a fault and is in a service state. A node cannot be active as part of a clustered system while it is in a service state. The SA is available even when the management GUI is not accessible. The following information and tasks are included:
- Status information about the connections and the node canister.
- Basic configuration information, such as configuring IP addresses.
- Service tasks, such as restarting the common information model object manager (CIMOM) and updating the worldwide node name (WWNN).
- Details about node error codes and hints about what to do to fix the node error.

Important: The Service Assistant tool can be accessed only by using the superuser account.

The Service Assistant GUI is available by using a service assistant IP address on each node. The SA GUI also is accessed through the cluster IP addresses by appending /service to the cluster management URL. If the system is down, the only other method of communicating with the node canisters is through the SA IP address directly. Each node can have a single SA IP address on Ethernet port 1. It is recommended that these IP addresses are configured on all IBM Storwize V3700 node canisters. The default IP address of canister 1 is 192.168.70.121 with a subnet mask of 255.255.255.0. The default IP address of canister 2 is 192.168.70.122 with a subnet mask of 255.255.255.0. To open the SA GUI, enter one of the following URLs into any web browser:
- http(s)://<cluster IP address of your cluster>/service
- http(s)://<service IP address of a node>/service

For example:
Management address: http://1.2.3.4/service
SA access address: http://1.2.3.5/service
When you are accessing SA by using the <cluster address>/service, the configuration node canister SA GUI login window opens, as shown in Figure 2-42.
The SA interface can view status and run service actions on other nodes, in addition to the node to which the user is connected.
After you are logged in, you see the Service Assistant Home window, as shown in Figure 2-43.
The current canister node is displayed in the upper left corner of the GUI. As shown in Figure 2-43, this is node 2. To change the canister, select the relevant node in the Change Node section of the window. The details in the upper left change to reflect the new canister. The SA GUI provides access to service procedures and shows the status of the node canisters. It is recommended that these procedures be carried out only if directed to do so by IBM Support. For more information about how to use the SA tool, see this website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp?topic=%2Fcom.ibm.storwize.v3700.710.doc%2Ftbrd_sagui_1938wd.html
Chapter 3.
Graphical user interface overview
Important: The web browser requirements, recommended configuration settings to access the IBM Storwize V3700 management GUI, and the IBM Storwize V3700 Quick Installation Guide, GC27-4219 can be found in the IBM Storwize V3700 Information Center at this website: http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp
Default user name and password: Use the following information to log in to the IBM Storwize V3700 storage management: User name: superuser Password: passw0rd (a zero replaces the letter O)
A successful login shows the Overview panel by default, as shown in Figure 3-2. Alternatively, the last opened window from the previous session is displayed.
Figure 3-1 on page 75 shows the IBM Storwize V3700 login panel and the option to enable low graphics mode. This feature can be useful for remote access over narrow bandwidth links. The Function Icons no longer enlarge and list the available functions. However, navigation is achieved by clicking a Function Icon and by using the breadcrumb navigation aid. For more information about the Function Icons, see 3.1.3, Overview panel layout on page 78. For more information about the breadcrumb navigation aid, see 3.2.3, Breadcrumb navigation aid on page 84. Figure 3-3 shows the management GUI in low graphics mode.
Figure 3-4 The three main sections of the IBM Storwize V3700 overview panel
The Function Icons section shows a column of images. Each image represents a group of interface functions. The icons enlarge with mouse hover and show the following menus:
- Home
- Monitoring
- Pools
- Volumes
- Hosts
- Copy Services
- Access
- Settings

The Extended Help section has a flow diagram that shows the available system resources. The flow diagram consists of system resource images and green arrows. The images represent the physical and logical elements of the system. The green arrows show the order in which to perform storage allocation tasks and highlight the various logical layers between the physical internal disks and the logical volumes. Clicking the objects in this area shows more information. This information provides Extended Help references, such as the online version of the Information Center and e-Learning modules, and direct links to the various configuration panels that relate to the highlighted image.
The Status Indicators section shows the following horizontal status bars:
- Allocated: status that is related to the storage capacity of the system.
- Running Tasks: status of tasks that are running and of recently completed tasks.
- Health Status: status relating to system health, which is indicated by the following color codes: green (healthy), yellow (degraded), and red (unhealthy).

Hovering the mouse pointer over and clicking the horizontal bars provides more information and menus, as described in 3.3, Status Indicators menus on page 94.
3.2 Navigation
Navigating the management GUI is simple and like most systems, there are many ways to navigate. The two main methods are to use the Function Icons section or the Extended Help section of the Overview panel. For more information about these sections, see 3.1.3, Overview panel layout on page 78. This section describes the two main navigation methods and introduces the well-known breadcrumb navigation aid and the Suggested Tasks aid. Information regarding the navigation of panels with tables also is provided.
Figure 3-6 shows all of the menus with options under the Function Icons section.
To access the e-Learning modules, click Need Help. To configure the internal storage, click Pools. Figure 3-8 shows the selection of Pools in the Extended Help section, which opens the Internal Storage panel.
Figure 3-9 shows the Internal Storage panel, which is shown because Pools was selected in the information area of the Extended Help section.
3.2.5 Presets
The management GUI contains a series of pre-established configuration options, called presets, that use commonly used settings to quickly configure objects on the system. Presets are available for creating volumes and IBM FlashCopy mappings, and for setting up a RAID configuration. Figure 3-12 on page 86 shows the available internal storage presets.
Sorting columns
Columns can be sorted by clicking the column heading. Figure 3-16 on page 89 shows the result of clicking the heading of the Slot ID column. The table is now sorted and lists Slot IDs in descending order.
Reordering columns
Columns can be reordered by dragging the column to the required location. Figure 3-17 shows the location of the column with the heading Slot ID positioned between the headings MDisk Name and Enclosure ID. Dragging this heading reorders the columns in the table.
Figure 3-18 shows the column heading Slot ID being dragged to the required location.
Figure 3-19 shows the result of dragging the column heading Slot ID to the new location.
Important: Some users might encounter a problem in which right-clicking a column heading shows the Firefox browser context menu instead of the column selection menu. This issue can be fixed in Firefox by clicking Tools → Options → Content → Advanced (for the JavaScript settings) and selecting Display or replace context menus. The web browser requirements and recommended configuration settings to access the IBM Storwize V3700 management GUI can be found in the IBM Storwize V3700 Information Center at this website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp
Multiple selections
By using the management tool, you also can select multiple items in a list by using a combination of the Shift or Ctrl keys.
Figure 3-23 shows the result of the use of the Ctrl key to select multiple non-sequential items.
Filtering objects
To focus on a subset of the listed items that are shown in a panel with columns, use the filter field, which is found at the upper right side of the table. This tool shows only the items that match the entered value. In Figure 3-24, the text bvol was entered into the filter field; as a result, only volumes with the text bvol in any column are listed, and the filter word also is highlighted.
Filter by column
Click the magnifying glass that is next to the filter field to activate the filter by column feature. Figure 3-25 shows the Filter by Column drop-down menu. This feature allows the filter field value to be matched against a specific column. Figure 3-26 shows the column filter set to Host Mappings, the filter value set to Yes, and the resulting list of volumes with hosts mapped.
To change the allocated bar comparison, click the image of two arrows on the right side of the Allocated status bar. Figure 3-28 shows the new comparison, virtual capacity to real capacity.
Figure 3-28 Changing the allocated menu comparison, virtual capacity to real capacity
For an indication of task progress, browse to the Running Tasks bar menu and click the task. Figure 3-30 shows the selection of a task from the Running Tasks menu.
Figure 3-30 Selecting a task from the Running Tasks menu for an indication of task progress
In Figure 3-33, the health status bar menu shows that the system has Degraded status and provides a description of Internal Storage for the type of event that occurred. To investigate the event, open the health status bar menu and click the description of the event, as shown in Figure 3-33.
Figure 3-33 Status and description of an alert via the health status menu
Click the description of the event in the health status menu to show the Events panel (Monitoring Events), as shown in Figure 3-34. This panel lists all events and provides directed maintenance procedures (DMP) to help resolve errors. For more information, see Events panel on page 105.
Overview panel
Select Overview in the Home menu to open the panel. For more information, see 3.1.3, Overview panel layout on page 78.
System panel
Select System in the Monitoring menu to open the panel. As shown in Figure 3-38, the System panel shows capacity usage, enclosures, and all drives in the system.
Selecting the name and version of the system shows more information that is related to storage allocation. The information is presented under two tabs: Info and Manage. Figure 3-39 shows the System panel Info tab.
Select the Manage tab to show the name of the system and shutdown and upgrade actions. Figure 3-40 shows the System panel Manage tab.
Selecting a rack-mounted enclosure shows more information. Hovering over a drive shows the drive status, size, and speed details. Identify starts the blue identification LED on the front of the enclosure. Click Enclosure 1 to show the System Details panel. For more information, see System details panel on page 103. Figure 3-41 shows the System panel enclosure view.
Events panel
Select Events in the Monitoring menu to open the panel. The machine is optimal when all errors are addressed and there are no items that are found in the Events panel. Figure 3-45 shows the Events panel with no recommended actions.
Figure 3-46 Unfixed Messages and Alerts or the Show All options in the events panel
Event properties
To show actions and properties that are related to an event, or to repair an event that is not the Next Recommended Action, right-click the event to show other options. Figure 3-47 shows the selection of the Properties option.
Performance panel
Select Performance in the Monitoring menu to open the panel. This panel shows graphs that represent the last 5 minutes of performance statistics. The performance graphs include statistics that are related to CPU usage, volumes, MDisks, and interfaces. Figure 3-51 shows the Performance panel.
Figure 3-53 Peak CPU usage value over the last 5 minutes
Volume allocation
The upper right corner of the Volumes by Pool panel shows the Volume Allocation, which, in this example, shows the physical capacity (8.14 TB), the virtual capacity (21.25 TB), and the used capacity (1.65 TB, in the green portion). The red bar shows the threshold at which a warning is generated when the used capacity in the pool first exceeds the set percentage of the physical capacity of the pool. By default, this threshold is set to 80%, but it can be altered in the pool properties. Figure 3-56 shows the volume allocation information that is displayed in the Volumes by Pool panel.
Renaming pools
To rename a pool, select the pool from the pool filter and click the name of the pool. Figure 3-57 shows that pool mdiskgrp1 was renamed to Gold Pool.
Volume functions
The Volumes by Pool panel also provides access to the volume functions via the Actions menu, the New Volume option, and by right-clicking a listed volume. For more information about navigating the Volume panel, see 3.4.4, Volumes menu on page 120. Figure 3-59 shows the volume functions that are available via the Volumes by Pool panel.
Figure 3-59 Volume functions are available via the Volumes by Pool panel
Drive actions
Drive-level functions, such as identifying a drive and marking a drive as offline, unused, candidate, or spare, can be accessed here. Right-click a listed drive to show the actions menu. Alternatively, the drives can be selected and the Actions menu used. For more information, see Multiple selections on page 91. Figure 3-60 shows the drive actions menu.
Drive properties
Drive properties and dependent volumes can be displayed from the Internal Storage panel. Select Properties from the Disk Actions menu. The drive Properties panel shows drive attributes and the drive slot SAS port status. Figure 3-61 on page 115 shows the drive properties with the Show Details option selected.
By using this wizard, you can configure the RAID properties and pool allocation of the internal storage. Figure 3-63 shows Step 1 of the Configure Internal Storage wizard.
For more information about configuring internal storage, see Chapter 7, Storage pools on page 313.
Pool actions
To delete a pool or change the pool name or icon, right-click the listed pool. Alternatively, the Actions menu can be used. Figure 3-66 shows the pool actions.
RAID actions
By using the MDisks by Pool panel, you can perform MDisk RAID tasks, such as set spare goal, swap drive, and delete. To access these functions, right-click the MDisk, as shown in Figure 3-67 on page 118.
Volumes panel
Select Volumes in the Volumes menu to open the panel, as shown in Figure 3-71. The Volumes panel shows all of the volumes in the system. The information that is displayed is dependent on the columns that are selected.
Volume actions
Volume actions such as map, unmap, rename, shrink, expand, migrate to another pool, delete, and mirror can be performed from this panel.
Hosts panel
Select the Hosts item in the Hosts menu to open the panel, as shown in Figure 3-76. The Hosts panel shows all of the hosts that are defined in the system.
Host Actions
Host Actions such as Modify Mappings, Unmap All Volumes, Rename, Delete and Properties can be performed from the Hosts panel. Figure 3-76 shows the actions available from the Hosts panel. For more information about the Hosts Actions menu, see Chapter 8, Advanced host and volume administration on page 353.
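For reference, the equivalent mapping operation on the CLI is a one-line sketch such as the following, in which the host and volume names are placeholders:

svctask mkvdiskhostmap -host W2K8HOST -scsi 0 VOL1

This command maps the volume VOL1 to the host W2K8HOST with SCSI LUN ID 0.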
FlashCopy panel
Select FlashCopy in the Copy Services menu to open the panel, as shown in Figure 3-81. The FlashCopy panel displays all of the volumes in the system.
FlashCopy actions
FlashCopy Actions such as snapshot, clone, backup, target assignment, and deletion can be performed from this panel. Figure 3-81 on page 127 shows the actions that are available from the FlashCopy panel.
For more information about how to create and administer FlashCopy mappings, see Chapter 8, Advanced host and volume administration on page 353.
For more information, see Chapter 10, Copy services on page 449.
Partnerships panel
Clicking Partnerships opens the window that is shown in Figure 3-85 on page 130. This window allows you to set up a new partnership or delete an existing partnership with another IBM Storwize or SAN Volume Controller system for the purposes of remote mirroring.
From this window, you can also set the background copy rate. This rate specifies the bandwidth, in megabytes per second (MBps), that is used by the background copy process between the clusters. For more information, see Chapter 10, Copy services on page 449.
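On the CLI, a partnership is typically created and its background copy bandwidth adjusted with commands along the following lines; the remote system name and the bandwidth values (in MBps) are placeholders:

svctask mkpartnership -bandwidth 50 REMOTE_SYSTEM
svctask chpartnership -bandwidth 100 REMOTE_SYSTEM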
Users panel
Select Users in the Access menu to open the panel. The Users panel shows the defined user groups and users for the system. The users that are listed can be filtered by user group. Click New User Group to open the Create a New Group panel. Figure 3-87 shows the Users panel and User actions.
Creating a user
Click New User to define a user to the system. Figure 3-89 shows the Users panel and the New User option.
By using the New User panel, you can configure the user name, password, and authentication mode. You must enter the user name, password, group, and authentication mode; the public Secure Shell (SSH) key is optional. After the user is defined, click Create. The authentication mode can be set to local or remote. Select local if the IBM Storwize V3700 performs the authentication locally. Select remote if a remote service, such as an LDAP server, authenticates the connection. If remote is selected, the remote authentication server must be configured in the IBM Storwize V3700 by clicking Settings → Directory Services.
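As an illustration, a locally authenticated user can also be created from the CLI; the user name and password here are placeholders:

svctask mkuser -name admin2 -usergrp Administrator -password Passw0rd1

The -usergrp parameter assigns the role, such as SecurityAdmin, Administrator, CopyOperator, Service, or Monitor.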
The SSH configuration can be used to establish a more secure connection to the command-line interface. See Appendix A, Command-line interface setup and SAN Boot on page 593 for more information about how to set up SSH keys. Figure 3-90 shows the New User panel.
Network panel
Select Network in the General menu to open the panel. The Network panel, as shown in Figure 3-98, provides access to the Management IP Addresses, Service IP Addresses, iSCSI, and Fibre Channel configuration panels.
Management IP addresses
The Management IP Address is the IP address of the system and is configured during initial setup. The address can be an IPv4 address, an IPv6 address, or both. The Management IP address is logically assigned to Ethernet port 1 of each node canister, which allows for node canister failover. Another Management IP address can be logically assigned to Ethernet port 2 of each node canister for more fault tolerance. If the Management IP address is changed, use the new IP address to log in to the Management GUI again. Click Management IP Addresses and then click the port you want to configure (the corresponding port on the partner node canister is also highlighted). Figure 3-99 on page 139 shows the Management IP Addresses configuration panel.
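For example, the management IP address can also be changed from the CLI with a command along the following lines; the addresses are placeholders:

svctask chsystemip -clusterip 192.168.1.50 -gw 192.168.1.1 -mask 255.255.255.0 -port 1

After the change, log in to the management GUI again by using the new address.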
Service IP Addresses
Service IP addresses are used to access the Service Assistant. The address can be an IPv4 address, an IPv6 address, or both. The Service IP addresses are configured on Ethernet port 1 of each node canister. Click Service IP Addresses and then select the Control Enclosure and Node Canister to configure. Figure 3-100 on page 140 shows the Service IP Addresses configuration panel. For more information, see 2.9.3, Service Assistant tool on page 69.
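As a sketch, a service IP address can also be set from the service CLI; the addresses here are placeholders, and the command applies to the node canister on which it is run unless a node is specified:

satask chserviceip -serviceip 192.168.70.121 -gw 192.168.70.1 -mask 255.255.255.0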
iSCSI connectivity
The IBM Storwize V3700 supports iSCSI connections for hosts. Click iSCSI and select the node canister to configure the iSCSI IP addresses. Figure 3-101 on page 141 shows the iSCSI configuration panel.
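For example, an iSCSI port address can be assigned from the CLI with a command along the following lines; the node ID, the addresses, and the trailing Ethernet port number are placeholders:

svctask cfgportip -node 1 -ip 192.168.20.10 -mask 255.255.255.0 -gw 192.168.20.1 2

This command assigns the address to Ethernet port 2 of node 1 for iSCSI traffic.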
Support panel
Select Support in the Settings menu to open the panel. As shown in Figure 3-103, the Support panel provides access to the IBM support package, which is used by IBM to assist with problem determination. Click Download Support Package to access the wizard.
General panel
Select General in the Settings menu to open the panel. The General panel provides access to the Date and Time, Licensing, Upgrade Machine Code, and GUI Preferences configuration panels. Figure 3-106 shows the General panel.
Licensing
The Licensing panel shows the current system licensing. The IBM Storwize V3700 is licensed per system; it is not licensed per enclosure as other systems in the Storwize family are. The following optional licenses are available:
- FlashCopy upgrade to 2040 target copies
- Remote Copy
- Easy Tier
- Turbo Performance

One-time trial licenses can be enabled from the GUI for any function except the FlashCopy upgrade. The trial licenses are automatically disabled after 90 days if they are not fully licensed in the meantime. IBM ships a printed page with an authorization code when the optional license is ordered. The following options are available for enabling the licenses:
- Automatic: The client enters an authorization code into the GUI of an internet-attached IBM Storwize V3700; the system validates the code with IBM and enables the function.
- Manual: The client enters an authorization code and machine details into the Data Storage Feature Activation (DSFA) website, which validates them and creates a license key. The client enters the license key into the IBM Storwize V3700 GUI, which enables the function.

For more information, see the DSFA website at:
https://www-03.ibm.com/storage/dsfa/storwize/selectMachine.wss

The authorization code can be used only once, and the license key is specific to one machine. Licenses cannot be transferred between systems; if the system is sold, license ownership transfers with it. Figure 3-107 on page 145 shows the Licensed Functions panel within the General panel. The licenses can be activated by right-clicking the function and selecting the required activation method.
GUI Preferences
By using the GUI Preferences panel (as shown in Figure 3-109), you can refresh GUI objects, restore default browser preferences, set table selection policy, and configure the Information Center web address.
Figure 3-115 shows the information panel that is opened from the embedded panel help. The information panel includes hotlinks to various other panels, including the Information Center.
Chapter 4.
Host configuration
This chapter provides an overview of how to set up Open Systems hosts in the context of the IBM Storwize V3700. It also describes how to use the IBM Storwize V3700 GUI to create host connections for access to volumes. For more information about volume administration, see Chapter 5, Basic volume configuration on page 189. This chapter includes the following sections:
- Host attachment overview
- Preparing the host operating system
- Configuring hosts on IBM Storwize V3700
3. After the setup completes, you are prompted to restart the system. Confirm this restart by entering yes and pressing Enter, as shown in Figure 4-3.
You successfully installed IBM SDDDSM. You can check the installed driver version by selecting Start → All Programs → Subsystem Device Driver DSM → Subsystem Device Driver DSM. When a command prompt opens, run the datapath query version command to determine the version that is installed for this Microsoft Windows 2008 R2 host, as shown in Example 4-1.
Example 4-1 Datapath query version
C:\Program Files\IBM\SDDDSM>datapath.exe query version IBM SDDDSM Version 2.4.3.1-2 Microsoft MPIO Version 6.1.7601.17514
To determine the Worldwide Port Names (WWPNs) of the host (applicable to Fibre Channel adapters only), run the datapath query wwpn command (as shown in Example 4-2) and note the WWPNs for later use.
Example 4-2 Datapath query wwpn
C:\Program Files\IBM\SDDDSM>datapath query wwpn
Adapter Name PortWWN
Scsi Port3: 10008C7CFF20CFCC
Scsi Port4: 10008C7CFF20CFCD

The IBM Multipath Subsystem Device Driver User's Guide provides useful information about how to install and configure SDD on the main operating system platforms. It can be found at:
ftp://ftp.software.ibm.com/storage/subsystem/UG/1.8--3.0/SDD_1.8--3.0_User_Guide_English_version.pdf

If the SAN zone configuration is already established, the Microsoft Windows host is prepared to connect to the IBM Storwize V3700. The next step is to configure a host object and its WWPNs by using the IBM Storwize V3700 GUI. For more information, see 4.3, Configuring hosts on IBM Storwize V3700 on page 175.

Microsoft Windows operating systems can use SAN Boot implementations. SAN Boot details are beyond the intended scope of this book. For more information about supportability of Microsoft systems booting from storage area networks (SANs), check the Microsoft article at this website:
http://support.microsoft.com/kb/305547

Windows 2003: The examples focus on Windows 2008 R2, but the procedure for Windows 2003 is similar. If you use Windows 2003, you must install Microsoft Hotfix 908980 or the latest Service Pack. If you do not, preferred pathing is not available. You can download this hotfix from this website:
http://support.microsoft.com/kb/908980
In Windows 2008 R2, the Microsoft iSCSI software initiator is preinstalled. Enter iscsi in the search field of the Windows start menu (as shown in Figure 4-4) and click iSCSI Initiator.
Confirm the automatic startup of the iSCSI Service, as shown in Figure 4-5.
The iSCSI Configuration window opens. Select the Configuration tab, as shown in Figure 4-6 on page 158. Make a note of the initiator name of your Windows host for further use.
You can change the initiator name or enable advanced authentication, but these tasks are beyond the scope of our basic setup. By default, iSCSI authentication is not enabled. For more information, see the IBM Storwize V3700 V7.1 Information Center at this website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp

For more information about Microsoft iSCSI authentication and security, see the following website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp?topic=%2Fcom.ibm.storwize.v3700.710.doc%2Fsvc_iscsisancoverview_08301435.html
3. A Quick Connect window opens in which the name and status of iSCSI Target is confirmed, as shown in Figure 4-8.
4. Switch to the Volumes and Devices tab. If the IBM Storwize V3700 was configured for iSCSI connectivity and volumes were mapped to the host, a list of iSCSI disks should appear, as shown in Figure 4-9 on page 160.
5. Click the Auto Configure button to automatically configure all volumes and devices on the discovered target. 6. To assign the volume to specific drive letter on Microsoft Windows 2008 R2, open the Disk Management tool. The new volume must be brought online and initialized, as shown in Figure 4-10.
The initial configuration of iSCSI connectivity features a single path until the Microsoft MPIO feature is installed. For more information, see Installing the Microsoft MPIO multipathing software on page 154. To configure iSCSI multipathing, see 5.3.2, Windows 2008 iSCSI volume attachment on page 220. These are the basic steps to configure an iSCSI target on Windows Server 2008 R2. For a more detailed description about installing and configuring the iSCSI initiator, see the Microsoft TechNet article at this website:
http://technet.microsoft.com/en-us/library/ee338480%28v=ws.10%29.aspx
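The same discovery and login steps can also be scripted with the built-in Windows iscsicli utility; the following is a minimal sketch in which the target portal address and the target IQN are placeholders:

iscsicli QAddTargetPortal 192.168.20.10
iscsicli ListTargets
iscsicli QLoginTarget iqn.1986-03.com.ibm:2145.systemname.node1

QAddTargetPortal registers the IBM Storwize V3700 iSCSI address, ListTargets shows the discovered target IQNs, and QLoginTarget logs in to the listed target.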
2. Use the latest firmware and driver levels on your host system.
3. Install the SAS host bus adapter (HBA).
4. Connect the SAS cables to the host HBA and the IBM Storwize V3700 canister (or canisters) SAS host port (or ports).
5. Configure the HBA firmware and drivers on the Microsoft Windows host (unless you are using SAN boot).
6. Install the Subsystem Device Driver Device Specific Module (SDDDSM) multipath software.
4. Record the SAS WWPN information from the SAS Address line, as shown in Figure 4-12.
For more information about verifying the installation of SDDDSM, see Installing the Microsoft MPIO multipathing software on page 154.
Login Retry Count: 8
Link Down Timeout: 10
Command Timeout: 20
Extended event logging: Disabled (enable it for debugging only)
RIO Operation Mode: 0
Interrupt Delay Timer: 0
http://www.vmware.com/support/pubs
Complete the following steps to prepare a VMware ESXi host to connect to an IBM Storwize V3700 by using iSCSI:
1. Make sure that the latest firmware levels are applied on your host system.
2. Install VMware ESXi and load additional drivers, if required.
3. Connect the ESXi server to your network. You should use separate network interfaces for iSCSI traffic.
4. Configure your network to fulfill your security and performance requirements.
The iSCSI initiator is installed by default on the ESXi server and must only be enabled. To enable the initiator, complete the following steps:
1. Connect to your ESXi server by using the vSphere Client. Browse to the Configuration tab and select Networking, as shown in Figure 4-14 on page 167.
2. Click Add Networking to start the Add Network Wizard, as shown in Figure 4-15. Select VMkernel and click Next.
VMkernel: The VMkernel networking interface is used for VMware vMotion, IP storage, and Fault Tolerance.
3. Select one or more network interfaces that you want to use for iSCSI traffic and click Next, as shown in Figure 4-16.
4. Enter a meaningful Network Label and click Next, as shown in Figure 4-17.
Important: We intentionally did not select any of the properties; see the VMware best practices and recommendations for the use of these options.
5. Enter an IP address for your iSCSI network. You should use a dedicated network for iSCSI traffic, as shown in Figure 4-18.
6. Click Next to review the network settings and then click Finish to exit the iSCSI configuration.
7. From the Configuration tab, select Storage Adapters and scroll down to iSCSI Adapter and click Properties, as shown in Figure 4-19.
8. The iSCSI Software Adapter Properties window opens. Click Configure and the General Properties window opens. The initiator is enabled by default (this default changed as of VMware ESX 4.0); parameters can be changed here, if required. The VMware ESX iSCSI initiator is now successfully enabled, as shown in Figure 4-20. Make a note of the initiator name for future use.
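The software initiator can also be enabled from the ESXi shell by using esxcli. This is a minimal sketch for ESXi 5.x; the adapter name differs between hosts:

# Enable the software iSCSI initiator and confirm the setting
esxcli iscsi software set --enabled=true
esxcli iscsi software get
# List the iSCSI adapters to find the software adapter name (for example, vmhba33)
esxcli iscsi adapter list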
3. Click Add to enter the IBM Storwize V3700 iSCSI IP address (or addresses) or iSCSI target name (or names). Typically, the default iSCSI target port is TCP port 3260. In this example, the default port remains and the IBM Storwize V3700 iSCSI IP address (or addresses) are added, as shown in Figure 4-22.
4. Click OK to confirm. Repeat step 3 to add other IBM Storwize V3700 node canister iSCSI IP addresses.
5. After all entries are completed, click OK and then click Close. A new window should appear that explains that a rescan of the iSCSI adapter is necessary, as shown in Figure 4-23.
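The rescan can also be triggered from the ESXi shell, which is useful when you script the setup. A minimal sketch, assuming ESXi 5.x:

# Rescan all host bus adapters for new devices
esxcli storage core adapter rescan --all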
Your VMware ESX host is now prepared to connect to the IBM Storwize V3700. For more information about creating the ESX iSCSI host by using the IBM Storwize V3700 GUI, see 4.3, Configuring hosts on IBM Storwize V3700 on page 175. For more information and best practices procedures that use VMware vSphere 5.1, see the guide that is available at this website: http://pubs.vmware.com/vsphere-51/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter -server-51-storage-guide.pdf
By using the VMware vSphere client, the SAS HBA is visible by selecting Configuration → Storage Adapters, as shown in Figure 4-24.
Important: Under the WWN column in the VMware vSphere client, worldwide port names (WWPNs) are not shown. To obtain the WWPN, see Recording SAS WWPN on page 162.
The IBM Storwize V3700 supports a maximum of two nodes per system, which are arranged as a single I/O Group per clustered system. To create a host, complete the following steps:
1. From the IBM Storwize V3700 GUI, open the host configuration window by clicking Hosts, as shown in Figure 4-25.
2. In the Hosts window, click New Host to start the Create Host wizard, as shown in Figure 4-26.
The IBM Storwize V3700 V7.1 supports Fibre Channel, iSCSI, and SAS Hosts. If you want to create a Fibre Channel host object, continue with 4.3.2, Creating Fibre Channel hosts. For more information about creating iSCSI hosts, see 4.3.3, Creating iSCSI hosts on page 181. For more information about creating SAS hosts, see 4.3.5, Creating SAS hosts on page 186.
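Host objects can also be created from the Storwize V3700 CLI. The following sketch creates a Fibre Channel host by using the example WWPNs that were recorded earlier with datapath query wwpn; the host name is an example only:

svctask mkhost -name W2K8_FC -fcwwpn 10008C7CFF20CFCC:10008C7CFF20CFCD
svcinfo lshost W2K8_FC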
2. Enter a host name and click the Fibre Channel Ports drop-down menu to see a list of all known WWPNs, as shown in Figure 4-28.
The IBM Storwize V3700 shows the host port WWPNs available if you prepared the hosts, as described in 4.2, Preparing the host operating system on page 153. If they do not appear in the list, scan for new disks in your operating system and click Rescan in the configuration wizard. If they still do not appear, check your SAN zoning and repeat the scanning.
AIX hosts: AIX host WWPNs appear a few minutes after the host logs in to the fabric. You can enter the WWPN manually or run the cfgmgr command on the AIX host again. This starts a new discovery process and updates the SAN fabric. The WWPNs should then be available to the IBM Storwize V3700 GUI.
3. Select the WWPN for your host and click Add Port to List, as shown in Figure 4-29.
Creating offline hosts: If you want to create a host object in the IBM Storwize V3700 for a host that is offline (for example, one that is not connected at the moment), you can enter the WWPNs manually. Enter them in the standardized format into the Fibre Channel Ports field and add them to the list.
5. Click Advanced. If you are creating an HP/UX or TPGS host, select the required Host Type from the list, as shown in Figure 4-31.
6. Click Create Host. The created host task runs, as shown in Figure 4-32.
8. Repeat steps 1 - 7 for all of your Fibre Channel hosts. Figure 4-34 shows the All Hosts window after a second host is created.
After you complete the creation of Fibre Channel hosts, see Chapter 5, Basic volume configuration on page 189 to create volumes and map them to the created hosts.
2. Enter a host name and the iSCSI initiator name into the iSCSI Ports field and click Add Ports to List, as shown in Figure 4-36. Repeat this step if several initiator names are required for one host.
3. If you are connecting an HP/UX or TPGS host, select Advanced (as shown in Figure 4-37) and select the correct host type.
4. Click Create Host and the wizard completes, as shown in Figure 4-38. Click Close.
5. Repeat steps 1 - 4 for every iSCSI host that you want to create.
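An iSCSI host object can also be created from the Storwize V3700 CLI. A minimal sketch; the host name and the initiator IQN are placeholders for the values that you noted on the host:

svctask mkhost -name W2K8_ISCSI -iscsiname iqn.1991-05.com.microsoft:w2k8host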
2. Select iSCSI and the iSCSI configuration window opens, as shown in Figure 4-40.
In the iSCSI Configuration window, you can see all of the iSCSI settings for the IBM Storwize V3700. You can configure the iSCSI alias, iSNS addresses, CHAP authentication configuration, and iSCSI IP addresses from this window.
Important: The name of the system becomes part of the iSCSI qualified name (IQN), which typically takes the form iqn.1986-03.com.ibm:2145.<clustername>.<nodename>. If you change the cluster name after iSCSI is configured, iSCSI hosts might need to be reconfigured.
3. Click the Node Canister drop-down menu to select the canister for which you want to enter the iSCSI IP addresses, as shown in Figure 4-41. Repeat this step for each Ethernet port on both node canisters.
4. After you enter the IP addresses for each port, click Save to enable the configuration, as shown in Figure 4-42.
If you need to set up iSNS and CHAP authentication, scroll down to enter the IP address of the iSCSI Storage Name Service (iSNS). After you configure the CHAP secret for the Storwize V3700 clustered system, ensure that the clustered system CHAP secret is added to each iSCSI-attached host. Before your ESXi host can discover the IBM Storwize V3700 storage, the iSCSI initiator must be configured, and authentication might need to be configured, depending on the customer scenario.
You can verify the network configuration by using the vmkping utility. If you must authenticate the target, you might need to configure the dynamic or static discovery address and target name of the Storwize V3700 in vSphere. For more information, see VMware iSCSI Target Configuration on page 172. For more information about creating volumes and mapping them to a host, see Chapter 5, Basic volume configuration on page 189.
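For example, from the ESXi shell, the following sketch tests that a Storwize V3700 iSCSI port is reachable through the VMkernel network stack (the address is a placeholder):

# Ping an iSCSI target port through the VMkernel interface
vmkping 10.18.228.71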
2. Enter the host name and, from the drop-down menu, select the SAS WWPN (or WWPNs) that are associated with the host, as shown in Figure 4-44.
3. Click Advanced to expand the Advanced Settings options.
4. As shown in Figure 4-45 on page 187, select HP/UX or TPGS if you are creating one of these types of hosts. For more information, see 5.2.2, Manually mapping a volume to the host on page 210.
5. Click Create Host to create the SAS host object on the IBM Storwize V3700.
6. Click Close upon task completion.
The IBM Storwize V3700 shows the host port WWPNs that are available if you prepared the hosts, as described in 4.2, Preparing the host operating system on page 153. If they do not appear in the list, scan for new disks in your operating system and click Rescan in the configuration wizard. If they still do not appear, check your physical connectivity, paying particular attention to the SAS cable orientation, and repeat the scanning. For more information about hosts, see the Information Center at this website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/topic/com.ibm.storwize.v3700.710.doc/svc_over_1dcur0.html
The IBM Storwize V3700 is now configured and ready for SAS host use. For more information about advanced host and volume administration, see Chapter 8, Advanced host and volume administration on page 353.
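A SAS host object can also be created from the Storwize V3700 CLI. A minimal sketch; the host name and the SAS WWPN are placeholders for the values that you recorded from the SAS Address line:

svctask mkhost -name W2K8_SAS -saswwpn 500062B200556140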
Chapter 5. Basic volume configuration
5.1 Provisioning storage from IBM Storwize V3700 and making it available to the host
This section describes the setup process and shows how to create volumes and make them accessible from the host. The following steps are required to complete the basic setup of your environment: 1. Create volumes. 2. Map volumes to the host. 3. Discover the volumes from the host and specify multipath settings. Complete the following steps to create the volumes: 1. Open the All Volumes window of the IBM Storwize V3700 GUI to start the process of creating volumes, as shown in Figure 5-1.
Highlight and click Volumes and the window that lists all current volumes opens, as shown in Figure 5-2 on page 191.
If this is a first-time setup, no volumes are listed. Click New Volume in the upper left of the window.
2. The New Volume window opens, as shown in Figure 5-3.
By default, all volumes that you create are striped across all available MDisks in that storage pool. The GUI for the IBM Storwize V3700 provides the following preset selections for the user:
Generic: A striped volume that is fully provisioned, as described in 5.1.1, Creating a generic volume on page 192. Fully provisioned means that the volume's virtual capacity is matched by the same amount of physical disk capacity.
Thin-provisioned: A striped volume that is space efficient, which means that the volume's virtual capacity is not fully backed by the physical capacity that is allocated to the volume. Choices are available in the Advanced menu to help determine how much space is fully allocated initially and how large the volume can grow, as described in 5.1.2, Creating a thin-provisioned volume on page 195.
Mirror: A striped volume that consists of two striped copies and is synchronized to protect against loss of data if the underlying storage pool of one copy is lost, as described in 5.1.3, Creating a mirrored volume on page 198.
Thin-mirror: Two synchronized copies, both of which are thin-provisioned, as described in 5.1.4, Creating a thin-mirror volume on page 203.
3. Select which volume type you want to create. For more information, see the relevant section in this chapter.
2. Select the pool in which the volume is to be created. Select the pool by clicking it. In our example, we click the mdiskgrp0 pool, as shown in Figure 5-5.
Important: The Create and Map to Host option is disabled if no host is configured on the IBM Storwize V3700. For more information about configuring the host, see Chapter 4, Host configuration on page 151.
For Generic volumes, capacity management and mirroring do not apply. There is an option to set the preferred node within the I/O Group. The recommendation is to set Preferred Node to automatic, which allows the IBM Storwize V3700 to balance the volume I/O across the two node canisters in the I/O Group.
3. Enter a volume name and size. Click Create and Map to Host to create and map the new volume to a host or click Create to complete the task and leave mapping the volume to a later stage. The Generic Volume is created, as shown in Figure 5-7.
If you chose to map the host, click Continue and see 5.2.1, Mapping newly created volumes to the host by using the wizard on page 207. If you do not want to map the volumes now, click Close and they can be mapped later, as described in 5.2.2, Manually mapping a volume to the host on page 210.
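The same generic volume can be created from the Storwize V3700 CLI. A minimal sketch that uses the mdiskgrp0 pool from our example; the volume name and size are placeholders:

svctask mkvdisk -mdiskgrp mdiskgrp0 -iogrp io_grp0 -size 100 -unit gb -name RedbookVol1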
To create a thin-provisioned volume, complete the following steps: 1. Select Thin-Provision, as shown in Figure 5-8.
2. Select the pool in which the thin-provisioned volume should be created by clicking it and entering the volume name and size. In our example, we click the mdiskgrp0 pool. The result is shown in Figure 5-9.
Under the Volume Name field is a summary that shows that you are about to make a thin-provisioned volume: the virtual capacity to be configured (the volume size that you specified), the space that is physically allocated (real capacity), and the available physical capacity in the pool. By default, the real capacity is 2% of the virtual capacity, but you can change this setting in the Advanced options. Select Advanced and click Capacity Management, as shown in Figure 5-10.
The following advanced options are available:
Real: Specify the size of the physical capacity that is allocated during creation.
Automatically Extend: This option enables the automatic expansion of real capacity as the physical data size of the volume grows.
Warning Threshold: Enter a threshold for receiving capacity alerts. In this case, the IBM Storwize V3700 sends an alert when the physically allocated capacity reaches 80% of the virtual capacity, which is the default setting.
Thin-Provisioned Grain Size: Specify the grain size for the real capacity.
3. Make your choices, if required, and click OK to return to the New Volume window, as shown in Figure 5-9 on page 196.
4. Click Create and Map to Host to create and map the volume to a host, or click Create to complete the task and leave mapping the volume to a later stage. The volume is created, as shown in Figure 5-11.
If you chose to map the host, click Continue and see 5.2.1, Mapping newly created volumes to the host by using the wizard on page 207. If you do not want to map the volumes now, click Close; they can be mapped later, as described in 5.2.2, Manually mapping a volume to the host on page 210.
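The equivalent thin-provisioned volume can be created from the Storwize V3700 CLI. A minimal sketch that mirrors the GUI defaults that are described above (2% real capacity, automatic expansion, 80% warning threshold); the volume name, size, and grain size value are examples:

svctask mkvdisk -mdiskgrp mdiskgrp0 -iogrp io_grp0 -size 100 -unit gb -rsize 2% -autoexpand -warning 80% -grainsize 256 -name ThinVol1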
To create a mirrored volume, complete the following steps: 1. Select Mirror, as shown in Figure 5-12.
2. Select the primary pool by clicking it and the view changes to the secondary pool, as shown in Figure 5-13.
3. Select the secondary pool by clicking it, and enter a volume name and the required size, as shown in Figure 5-14.
Storage pools: Before a mirrored volume is created, it is a best practice to create at least two separate storage pools and to use different pools for the primary and secondary copy when you enter the information in the GUI to create the volume. In this way, the two mirror copies are created on different MDisks (and therefore different physical drives), which protects against a full MDisk failure in a storage pool. For more information about storage pools, see Chapter 7, Storage pools on page 313.
4. The summary shows you the capacity information about the pool. If you want to select advanced settings, click Advanced and then click the Mirroring tab, as shown in Figure 5-15 on page 202.
5. In the advanced mirroring settings, you can specify a synchronization rate. Enter a Mirror Sync Rate of 1 - 100%. With this option, you can set the importance of the copy synchronization progress, which sets the preference to synchronize more important volumes faster than other mirrored volumes. By default, the rate is set to 50% for all volumes. If for any reason the mirrors lose synchronization, this parameter governs the rate at which the mirrored volumes resynchronize. Click OK to return to the New Volume window, as shown in Figure 5-14 on page 201.
6. Click Create and Map to Host and the mirrored volume is created, as shown in Figure 5-16. If you do not want to map hosts, click Create.
If you chose to map the host, click Continue and see 5.2.1, Mapping newly created volumes to the host by using the wizard on page 207. If you do not want to map the volumes now, click Close; they can be mapped later, as described in 5.2.2, Manually mapping a volume to the host on page 210.
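A mirrored volume can also be created from the Storwize V3700 CLI by specifying two storage pools and two copies. A minimal sketch; the pool names, volume name, and size are placeholders:

svctask mkvdisk -mdiskgrp Pool1:Pool2 -iogrp io_grp0 -size 100 -unit gb -copies 2 -syncrate 50 -name MirrorVol1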
To create a thin-mirror volume, complete the following steps: 1. Select Thin Mirror, as shown in Figure 5-17.
2. Select the primary pool by clicking it and the view changes to the secondary pool, as shown in Figure 5-18.
3. Select the pool for the secondary copy and enter a name and a size for the new volume, as shown in Figure 5-19.
4. The summary shows you the capacity information and the allocated space. You can click Advanced and customize the thin-provision settings (as shown in Figure 5-10 on page 197) or the mirror synchronization rate (as shown in Figure 5-15 on page 202). If you opened the advanced settings, click OK to return to the New Volume window, as shown in Figure 5-19.
5. Click Create and Map to Host and the mirrored volume is created, as shown in Figure 5-20. If you do not want to map hosts, click Create to complete the task.
If you chose to map the host, click Continue and see 5.2.1, Mapping newly created volumes to the host by using the wizard on page 207. If you do not want to map the volumes now, click Close; they can be mapped later, as described in 5.2.2, Manually mapping a volume to the host on page 210.
5.2.1 Mapping newly created volumes to the host by using the wizard
We continue to map the volume that we created in 5.1, Provisioning storage from IBM Storwize V3700 and making it available to the host on page 190. We assume that you followed the procedure and clicked Create and Map to Host followed by Continue when the volume create task completed.
To map the volumes, complete the following steps: 1. Select the host as shown in Figure 5-21.
2. The Modify Host Mappings window opens and your host and the created volume already are selected. Click Map Volumes and the volume is mapped to the host, as shown in Figure 5-22.
The new volume to be mapped is already highlighted. To continue and complete the mapping, you can click Apply or Map Volumes. The only difference is that after the mapping task completes (as shown in Figure 5-23), the Modify Host Mappings window closes automatically if you clicked Map Volumes. Clicking Apply also completes the task, but leaves the Modify Host Mappings window open.
3. After the task completes, click Close. If you selected the Map Volumes option, the window returns to the Volumes display and the newly created volume is displayed. We see that it is already mapped to a host, as shown in Figure 5-24.
The host can access the volume and store data on it. For more information about discovering the volumes on the host and changing host settings if required, see 5.3, Discovering the volumes from the host and specifying multipath settings on page 213.
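The same mapping can be performed from the Storwize V3700 CLI, which is useful for scripted deployments. A minimal sketch; the host and volume names are the examples used earlier, and the SCSI ID is set explicitly:

svctask mkvdiskhostmap -host W2K8_FC -scsi 0 RedbookVol1
svcinfo lshostvdiskmap W2K8_FC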
You also can create multiple volumes in preparation for discovering them later. Mappings can be customized as well. For more information about advanced host configuration, see Chapter 8, Advanced host and volume administration on page 353.
2. Right-click the host to which a volume is to be mapped and select Modify Mappings, as shown in Figure 5-26.
3. The Modify Host Mappings window opens. Select the volume that you want to map from the Unmapped Volumes pane, as shown in Figure 5-27.
The volume is highlighted and the green move-to-the-right arrow becomes active, as shown in Figure 5-28.
Important: The Unmapped Volumes pane shows all of the volumes that are not mapped to the currently selected host. Some of the volumes might display a mappings icon because they might be mapped to other hosts.
4. Click the right-pointing arrow button. The volume is moved to the Volumes Mapped to the Host pane, as shown in Figure 5-29. Repeat this step for all of the volumes that you want to map. To continue and complete the mapping, you can click Apply or Map Volumes. The only difference is that after the mapping task completes (as shown in Figure 5-29), the Modify Host Mappings window closes automatically if you clicked Map Volumes. Clicking Apply also completes the task, but leaves the Modify Host Mappings window open.
5. After the task completes, click Close, as shown in Figure 5-30 on page 213. If you selected the Map Volumes option, the window returns to the Hosts display. If you clicked Apply, the GUI still displays the Modify Host Mappings window.
The volumes are now mapped and the host can access the volumes and store data on them. For more information about discovering the volumes on the host and changing host settings (if required), see 5.3, Discovering the volumes from the host and specifying multipath settings.
5.3 Discovering the volumes from the host and specifying multipath settings
This section describes how to discover the volumes that were created and mapped as described in 5.1, Provisioning storage from IBM Storwize V3700 and making it available to the host on page 190 and 5.2, Mapping a volume to the host on page 207, and how to set additional multipath settings, if required.
We assume that you completed all of the following steps that are described previously in the book so that the hosts and the IBM Storwize V3700 are prepared:
Prepare your operating systems for attachment, including installing MPIO support (see Chapter 4, Host configuration on page 151).
Create hosts by using the GUI (see Chapter 4, Host configuration on page 151).
Perform basic volume configuration and host mapping (see 5.1, Provisioning storage from IBM Storwize V3700 and making it available to the host on page 190 and 5.2, Mapping a volume to the host on page 207).
This section describes how to discover Fibre Channel, iSCSI, and SAS volumes from Windows 2008 and VMware ESX 5.x hosts.
In the IBM Storwize V3700 GUI, click Hosts, as shown in Figure 5-31.
The view that opens gives you an overview of the currently configured hosts and shows if they are mapped, as shown in Figure 5-32.
The host details show you which volumes are mapped to the host. You also see the volume UID and the SCSI ID. In our example, one volume with SCSI ID 0 is mapped to the host.
3. If your Windows 2008 host does not already have MPIO and the IBM Subsystem Device Driver installed, follow the procedure that is described in 4.2.1, Windows 2008 R2: Preparing for Fibre Channel attachment on page 153.
4. Log on to your Microsoft host and click Start → All Programs → Subsystem Device Driver DSM → Subsystem Device Driver DSM. A command-line interface (CLI) opens. Enter datapath query device and press Enter to see whether there are IBM Storwize V3700 disks that are connected to this host, as shown in Example 5-1.
Example 5-1 Datapath query device
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 600507630080009B000000000000003F
============================================================================
Path#              Adapter/Hard Disk   State  Mode    Select  Errors
    0  Scsi Port5 Bus0/Disk1 Part0     OPEN   NORMAL       0       0
    1  Scsi Port5 Bus0/Disk1 Part0     OPEN   NORMAL      23       0
    2  Scsi Port6 Bus0/Disk1 Part0     OPEN   NORMAL       0       0
    3  Scsi Port6 Bus0/Disk1 Part0     OPEN   NORMAL      21       0

5. The output provides information about the connected volumes. In our example, one disk is connected (Disk 1) and four paths to the disk are available (State = OPEN).
Important: Correct SAN switch zoning must be implemented to allow only eight paths to be visible from the host to any one volume. Volumes with more paths than this are not supported. For more information, see Chapter 2, Initial configuration on page 27.
6. Open the Windows Disk Management window (as shown in Figure 5-35) by clicking Start → Run. Enter diskmgmt.msc and click OK.
7. Right-click the disk in the left pane and select Online if the disk is not online already, as shown in Figure 5-36.
8. Right-click the disk again and then click Initialize Disk, as shown in Figure 5-37.
9. Select an initialization option and click OK. In our example, we selected MBR, as shown in Figure 5-38.
10.Right-click the pane on the right side and click New Simple Volume, as shown in Figure 5-39.
Follow the wizard and the volume is ready to use from your Windows host, as shown in Figure 5-41 on page 219. In our example, we mapped a 300 GB disk on the IBM Storwize V3700 to a Windows 2008 host using Fibre Channel connectivity.
Windows device discovery: Windows often discovers new devices, such as disks, automatically. If you completed all of the steps that are presented here and do not see any disks, click Actions → Rescan Disk in Disk Management to discover the new volumes, as shown in Figure 5-42.
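A rescan can also be performed from a command prompt by using the diskpart utility, and the new disk can be brought online there as well. A minimal sketch; the disk number is an example and differs on your host:

C:\> diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> exit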
The basic setup is now complete and the IBM Storwize V3700 is configured. The host is prepared and can access the volumes over several paths and can store data on the storage subsystem.
The host details show you which volumes are mapped to the host. You also can see the volume UID and the SCSI ID. In our example, one volume with SCSI ID 2 is mapped to the host.
2. Log on to your Windows 2008 host and click Start → Administrative Tools → iSCSI Initiator to open the iSCSI Configuration tab, as shown in Figure 5-45.
3. Enter the IP address of one of the IBM Storwize V3700 iSCSI ports in the Target field at the top of the window and click Quick Connect, as shown in Figure 5-46.
iSCSI IP addresses: The iSCSI IP addresses are separate from the cluster and canister management IP addresses. They are configured as described in 4.3.3, Creating iSCSI hosts on page 181.
The IBM Storwize V3700 target is discovered and connected, as shown in Figure 5-47.
Click Done to return to the iSCSI Initiator Properties window. The storage disk is connected to your iSCSI host, but only a single path is used so far. To enable multipathing for iSCSI targets, complete the following steps:
4. If MPIO is not already installed on your Windows 2008 host, follow the procedure that is described in 4.2.1, Windows 2008 R2: Preparing for Fibre Channel attachment on page 153. The IBM Subsystem Device Driver is not required for iSCSI connectivity.
5. Click Start → Administrative Tools → MPIO, click the Discover Multi-Paths tab, and select Add support for iSCSI devices, as shown in Figure 5-48.
Important: In some cases, the Add support for iSCSI devices option is disabled. To enable this option, you must have a connection to at least one iSCSI device. 6. Click Add and confirm the prompt to reboot your host.
7. After the reboot process is complete, log on again and click Start → Administrative Tools → iSCSI Initiator to open the iSCSI Configuration tab. Browse to the Discovery tab, as shown in Figure 5-49.
8. Click Discover Portal..., enter the IP address of another IBM Storwize V3700 iSCSI port (as shown in Figure 5-50), and click OK.
9. Return to the Targets tab (as shown in Figure 5-51) and you see that the new connection is listed there as Inactive.
10.Highlight the inactive port and click Connect. The Connect to Target window opens, as shown in Figure 5-52.
11.Select Enable Multipath and click OK. The second port is now Connected, as shown in Figure 5-53.
Repeat this step for each IBM Storwize V3700 port you want to use for iSCSI traffic. It is possible to have up to four port paths to the system.
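You can verify the resulting paths from a command prompt by using the mpclaim utility that is included with the Windows MPIO feature. A minimal sketch; the disk number is an example:

C:\> rem Show MPIO disks, their load-balance policy, and the number of paths
C:\> mpclaim -s -d
C:\> rem Show the individual paths for one MPIO disk
C:\> mpclaim -s -d 1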
12.Open the Windows Disk Management window (as shown in Figure 5-54) by clicking Start → Run. Enter diskmgmt.msc and then click OK.
13.Set the disk online, initialize it, and then create a file system on it, as described in steps 6 - 10 of 5.3.1, Windows 2008 Fibre Channel volume attachment on page 215. The disk is now ready to use, as shown in Figure 5-55. In our example, we mapped a 5 GB disk to a Windows 2008 host that uses iSCSI connectivity.
The Mapped Volumes tab shows you which volumes are mapped to the host. You also see the volume UID and the SCSI ID. In our example, one volume with SCSI ID 0 is mapped to the host.
3. If your Windows 2008 host does not already have MPIO and the IBM Subsystem Device Driver installed, follow the procedure that is described in 4.2.1, Windows 2008 R2: Preparing for Fibre Channel attachment on page 153.
4. Log on to your Microsoft host and click Start → All Programs → Subsystem Device Driver DSM → Subsystem Device Driver DSM. A CLI opens. Enter datapath query device and press Enter to see whether there are IBM Storwize V3700 disks that are connected to this host, as shown in Example 5-2.
Example 5-2 SDDDSM output SAS attached host
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 600507630080009B0000000000000042
============================================================================
Path#              Adapter/Hard Disk   State  Mode    Select  Errors
    0  Scsi Port5 Bus0/Disk1 Part0     OPEN   NORMAL      70       0
    1  Scsi Port5 Bus0/Disk1 Part0     OPEN   NORMAL       0       0

C:\Program Files\IBM\SDDDSM>

5. The output provides information about the connected volumes. In our example, one disk is connected (Disk 1) and two paths to the disk are available (State = OPEN).
6. Open the Windows Disk Management window (as shown in Figure 5-58) by clicking Start → Run. Enter diskmgmt.msc and then click OK.
7. Right-click the disk in the left pane and select Online if the disk is not online, as shown in Figure 5-59.
8. Right-click the disk again and then click Initialize Disk, as shown in Figure 5-60.
9. Select an initialization option and click OK. In our example, we selected MBR, as shown in Figure 5-61.
10.Right-click the pane on the right side and click New Simple Volume, as shown in Figure 5-62.
Follow the wizard and the volume is ready to use from your Windows host, as shown in Figure 5-64 on page 232. In our example, we mapped a 100 GB disk on the IBM Storwize V3700 to a Windows 2008 host using SAS direct attach connectivity.
In the Host Details window, two volumes are connected to the ESX FC host by using SCSI ID 0 and SCSI ID 1. The UIDs of the volumes are also displayed.
3. Connect to your VMware ESX server by using the vSphere client. Browse to the Configuration tab and select Storage Adapters, as shown in Figure 5-67.
4. Click Rescan All... in the upper right hand corner and click OK in the resulting pop-up window, as shown in Figure 5-68. This scans for new storage devices.
The mapped volumes on the IBM Storwize V3700 should now appear against the Fibre Channel adapters. 5. Select Storage and click Add Storage, as shown in Figure 5-69.
6. The Add Storage wizard opens. Click Select Disk/LUN and click Next. The IBM Storwize V3700 disks appear, as shown in Figure 5-70. In our example, they are the Fibre Channel disks. We continue with the 500 GB volume. Highlight it and click Next.
7. Select a File System version option. In our example, we selected VMFS-5, as shown in Figure 5-71.
8. Click Next to move through the wizard. A summary window of the current disk layout is shown, followed by the option to name the new Datastore. In our example, we chose RedbookTestOne, as shown in Figure 5-72.
9. Click Next and the final window is the choice of creating the datastore with the default maximum size of the volume or a proportion of it. After you click Finish, the wizard closes and you return to the storage view. In Figure 5-73, you see that the new volume was added to the configuration.
10.Highlight the new Datastore and click Properties (as shown in Figure 5-74) to see the details of the Datastore, as shown in Figure 5-75 on page 239.
11.Click Manage Paths to customize the multipath settings. Select Round Robin (as shown in Figure 5-76) and click Change.
When the change completes, click Close and the storage disk is available and ready to use with your VMware ESX server that uses Fibre Channel attachment.
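The path selection policy can also be checked and set from the ESXi shell. A minimal sketch for ESXi 5.x; the naa device identifier is a placeholder that you can read from the device list output:

# List devices that are claimed by the native multipathing plug-in and show their policy
esxcli storage nmp device list
# Set Round Robin (VMW_PSP_RR) explicitly for one device
esxcli storage nmp device set --device naa.600507630080009b000000000000003f --psp VMW_PSP_RR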
In the Host Details window, you see that there is one volume that is connected to the ESX iSCSI host that uses SCSI ID 1. The UID of the volume is also displayed. 3. Connect to your VMware ESX Server by using the vSphere Client. Browse to the Configuration tab and select Storage Adapters, as shown in Figure 5-79.
4. Highlight the iSCSI Software Adapter and click Properties. The iSCSI initiator properties window opens. Select the Dynamic Discovery tab (as shown in Figure 5-80) and click Add.
5. To add a target, enter the target IP address, as shown in Figure 5-81 on page 243. The target IP address is the iSCSI IP address of a node in the I/O Group from which you are mapping the iSCSI volume. Leave the IP port number at the default value of 3260 and click OK. The connection between the initiator and target is established.
Repeat this step for each IBM Storwize V3700 iSCSI port that you want to use for iSCSI connections.
iSCSI IP addresses: The iSCSI IP addresses are separate from the cluster and canister management IP addresses. They are configured as described in 4.3.3, Creating iSCSI hosts on page 181.
6. After you add all of the required ports, close the iSCSI Initiator properties by clicking Close, as shown in Figure 5-80 on page 242. You are prompted to rescan for new storage devices. Confirm the scan by clicking Yes, as shown in Figure 5-82 on page 244.
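The dynamic discovery address can also be added from the ESXi shell. A minimal sketch for ESXi 5.x; the software adapter name and the portal address are placeholders:

# Add a send-targets discovery address to the software iSCSI adapter
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.18.228.71:3260
# Rescan the adapter so that the new LUNs are discovered
esxcli storage core adapter rescan --adapter vmhba33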
7. Go to the storage view and click Add Storage. The Add Storage wizard opens, as shown in Figure 5-83. Select Disk/LUN and then click Next.
8. The new iSCSI LUN displays. Highlight it and click Next, as shown in Figure 5-84.
9. Select a File System version option. In our example, we selected VMFS-5, as shown in Figure 5-85.
10.Review the disk layout and click Next, as shown in Figure 5-86.
11.Enter a name for the Datastore and click Next, as shown in Figure 5-87.
12.Select the Maximum available space and click Next, as shown in Figure 5-88.
13.Click Finish to complete the wizard. The process to add the iSCSI LUN starts and can take a few minutes. After the tasks complete, the new Datastore appears in the storage view, as shown in Figure 5-90.
14.Highlight the new Datastore and click Properties to open and review the Datastore settings, as shown in Figure 5-91.
15.Click Manage Paths, select Round Robin as the multipath policy (as shown in Figure 5-92), and click Change.
16.Click Close twice to return to the storage view. The storage disk now is available and ready to use for your VMware ESX server that uses an iSCSI attachment.
In the Host Details window, one volume is connected to the ESX SAS host by using SCSI ID 0. The UID of the volume is also displayed.
3. Connect to your VMware ESX server by using the vSphere client. Browse to the Configuration tab and select Storage Adapters, as shown in Figure 5-95.
4. Click Rescan All... in the upper right corner and click OK in the resulting pop-up window, as shown in Figure 5-96. This scans for new storage devices.
The mapped volumes on the IBM Storwize V3700 should now appear against the SAS adapters. 5. Select Storage and click Add Storage, as shown in Figure 5-97.
6. The Add Storage wizard opens. Click Select Disk/LUN and click Next. The IBM Storwize V3700 disks appear, as shown in Figure 5-98. In our example, it is the SAS disk. Highlight it and click Next.
7. Select a File System version option. In our example, we selected VMFS-5, as shown in Figure 5-99.
8. Click Next to move through the wizard. A summary window of the current disk layout is shown, followed by the option to name the new Datastore. In our example, we chose RedbookTestThree, as shown in Figure 5-100.
9. Click Next and the final window is the choice of creating the datastore with the default maximum size of the volume or a proportion of it. After you click Finish, the wizard closes and you return to the storage view. In Figure 5-101, you see that the new volume was added to the configuration.
10.Highlight the new Datastore and click Properties (as shown in Figure 5-102) to see the details of the Datastore, as shown in Figure 5-103 on page 259.
11.Click Manage Paths to customize the multipath settings. Select Round Robin (as shown in Figure 5-104 on page 259) and click Change.
Figure 5-104 Select a Datastore multipath setting
When the change completes, click Close and the storage disk is available and ready to use with your VMware ESX server that uses SAS attachment.
Chapter 6. Storage migration wizard
Image mode volumes are created from the image mode MDisks. Each volume has a one-to-one mapping with an image mode MDisk. From a data perspective, the image mode volume represents the SAN-attached LU exactly as it was before the import operation. The image mode volume is on the same physical drives of the older storage system and the data remains unchanged. The Storwize V3700 is presenting active images of the SAN-attached LUs.
The hosts have the older storage system multipath device driver removed and are then configured for Storwize V3700 attachment. Further zoning changes are made for host-to-V3700 SAN connections. The Storwize V3700 hosts are defined with worldwide port names (WWPNs) and the image mode volumes are mapped. After the volumes are mapped, the hosts discover the Storwize V3700 volumes through a host rescan device or reboot operation.
Storwize V3700 volume mirror operations are then initiated. The image mode volumes are mirrored to generic Storwize V3700 volumes. The generic volumes are from user-nominated internal storage pools. The mirrors are online migration tasks, which means hosts can access and use the volumes during the mirror synchronization process.
After the mirror operations are complete, the migrations are finalized by the user. The finalization process is seamless: it removes the volume mirror relationships and the image mode volumes. The older storage system LUs are now migrated and the Storwize V3700 control of those old LUs can be removed. The older storage system can then be retired.
Click Start New Migration and the storage migration wizard is started. Figure 6-2 shows the System Migration panel.
Figure 6-3 Step 1 of the storage migration wizard, with all options selected
Restrictions
Confirm that the following restrictions apply:
I am not using the storage migration wizard to migrate cluster hosts, including clusters of VMware hosts and VIOS.
I am not using the storage migration wizard to migrate SAN Boot images.
If the restrictions cannot be selected, the migration must be performed outside of this wizard because more steps are required. For more information about this topic, see the IBM Storwize V3700 Information Center at this website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp?topic=%2Fcom.ibm.storwize.v3700.641.doc%2Fv3700_ichome_641.html
The VMware ESX Storage vMotion feature might be an alternative for migrating VMware clusters. For more information about this topic, see this website:
http://www.vmware.com/products/vmotion/overview.html
Prerequisites
Confirm that the following prerequisites apply:
Make sure that the Storwize V3700, the older storage system, the hosts, and the Fibre Channel ports are physically connected to the SAN fabrics.
If there are VMware ESX hosts involved in the data migration, make sure that the VMware ESX hosts are set to allow volume copies to be recognized. For more information, see the VMware ESX product documentation at this website:
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html
If all options can be selected, click Next to continue. If there are circumstances that prevent one or other of the options from being selected, the Next button remains inactive, the wizard does not progress, and the data must be migrated without the use of this wizard.
Table 6-1 shows an example table for capturing the information that relates to older storage system LUs.
Table 6-1 Example table for capturing external LU information
LU Name        Controller  Array     SCSI ID  Host name    Capacity
MCRPRDW2K801   DS3400_01   Array_01  0        MCRPRDW2K8   50 GB
MCRPRDW2K802   DS3400_01   Array_01  1        MCRPRDW2K8   200 GB
MCRPRDLNX01    DS3400_01   Array_02  0        MCRPRDLNX    100 GB
MCRPRDLNX02    DS3400_01   Array_02  1        MCRPRDLNX    300 GB
SCSI ID: Record the SCSI ID of the LUs to which the host is originally mapped. Some operating systems do not support changing the SCSI ID during the migration.
When all the data is captured and the volume mappings are changed, click Next to continue. The Storwize V3700 runs the discover devices task. After the task is complete, click Close to continue. Figure 6-6 shows the results of the discover devices task.
MDisk selection: Select only the MDisks that are applicable to the current migration plan. After step 8 of the current migration is complete, another migration plan can be started to migrate any remaining MDisks.
The IBM Storwize V3700 then runs the import MDisks task. After the task is complete, click Close to continue. Figure 6-8 on page 270 shows the result of the import MDisks task.
Important: It is not mandatory to select the hosts now. The actual selection of the hosts occurs in the next step, Map Volumes to Hosts. However, take this opportunity to cross-check the hosts that have data to be migrated by highlighting them in the list before you click Next.
The image mode volumes are listed and the names of the image mode volumes are assigned automatically by the Storwize V3700 storage system. The names can be changed to reflect something more meaningful to the user by selecting the volume and clicking Rename in the Actions menu.
Names: The names of the image mode volumes must begin with a letter. The name can be a maximum of 63 characters. The following valid characters can be used:
Uppercase letters (A - Z)
Lowercase letters (a - z)
Digits (0 - 9)
Underscore (_)
Period (.)
Hyphen (-)
Blank space
The names must not begin or end with a space.
A Host menu is displayed, as shown in Figure 6-11 on page 272.
Select the required host and the Modify Host Mappings panel opens, as shown in Figure 6-12. The MDisks that were highlighted in step 6 of the wizard are shown in yellow on the Modify Host Mappings panel. The yellow highlighting means that the volumes are not yet mapped to the host. Click the volume, click Edit SCSI ID, and modify it as required. The SCSI ID should reflect the same SCSI ID that was recorded in step 3. Click Map Volume to complete the mapping.
The Storwize V3700 runs the modify mappings task. After the task is complete, the volume is mapped to the host. Figure 6-13 shows the Modify Mappings task. Click Close to continue.
The Map Volumes to Hosts panel is displayed again as shown in Figure 6-14. Verify that migrated volumes now have Yes in the Host Mappings column. Click Next to continue.
Figure 6-14 Map Volumes to Hosts panel that shows Yes in the Host Mappings column
Scan for new devices on the hosts to verify the mapping. The disks are now displayed as IBM 2145 Multi-Path disk devices. This disk device type is common for the IBM Storwize disk family and the IBM SAN Volume Controller.
The Storwize V3700 runs the start migration task. After the task is complete, click Close to continue. Figure 6-16 shows the result of the Start Migration task.
The end of the storage migration wizard is not the end of the data migration. The data migration is still in progress. A percentage indication of the migration progress is displayed in the System Migration panel, as shown in Figure 6-18.
The two disks to be migrated are on the IBM DS3400 storage system. Therefore, the disk properties display the disk device type as an IBM 1726-4xx FAStT disk device. To show this disk attribute, right-click the disk to show the menu and then select Properties, as shown in Figure 6-20.
After the disk properties panel is opened, the General tab shows the disk device type. Figure 6-21 shows the General tab in the Windows 2008 Disk Properties window.
Perform this task on all disks before the migration. When you perform this same task after the Storwize V3700 mapping and host rescan, the disk device definitions change to IBM 2145 Multi-Path disk device, which confirms that the disks are under Storwize V3700 control.
b. Remove zones between the hosts and the storage system from which you are migrating; in our case, remove the Host-to-DS3400 zones on the SAN.
c. Update your host device drivers, including your multipath driver, and configure them for attachment to the IBM Storwize V3700 system. Complete the steps that are described in 4.2.1, Windows 2008 R2: Preparing for Fibre Channel attachment on page 153 to connect to the Storwize V3700 by using Fibre Channel. Pay careful attention to the following tasks:
Make sure that the latest OS service pack and test fixes are applied to your Microsoft server.
Use the latest firmware and driver levels on your host system.
Install HBAs on the Windows server by using the latest BIOS and drivers.
Configure the HBAs for hosts that are running Windows.
Set the Windows timeout value.
Install the Subsystem Device Driver Device Specific Module (SDDDSM) multipath module.
Connect the FC Host Adapter ports to the switches.
Configure the switches (zoning).
d. Create a storage system zone between the storage system that is to be migrated and the IBM Storwize V3700 system, and host zones for the hosts that are to be migrated. Pay careful attention to the following tasks:
Locate the WWPNs for the host.
Locate the WWPNs for the IBM DS3400.
Locate the WWPNs for the Storwize V3700.
Define port alias definitions on the SAN.
Add V3700-to-DS3400 zones on the SAN.
Add Host-to-V3700 zones on the SAN.
e. Create a host or host group in the external storage system with the WWPNs for this system.
Important: If you cannot restrict volume access to specific hosts by using the external storage system, all volumes on the system must be migrated.
Add the Storwize V3700 host group on the DS3400.
f. Configure the storage system for use with the IBM Storwize V3700 system. Follow the IBM Storwize V3700 Information Center for DS3400 configuration recommendations:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp?topic=%2Fcom.ibm.storwize.v3700.710.doc%2Fsvc_configdiskcontrollersovr_22n9uf.html
6. Follow Step 3 of the wizard to map storage, including the following steps:
a. Create a list of all external storage system volumes that are migrated. Create a DS3400 LU table.
b. Record the hosts that use each volume. Create a Host table.
c. Record the WWPNs that are associated with each host. Add the WWPNs to the Host table.
d. Unmap all volumes that are migrated from the hosts in the storage system and map them to the host or host group that you created when your environment was prepared.
Important: If you cannot restrict volume access to specific hosts by using the external storage system, all volumes on the system must be migrated.
Move the LUs from the Host to the Storwize V3700 Host Group on the DS3400.
e. Record the storage system LUN that is used to map each volume to this system. Update the DS3400 LU table.
7. Follow Step 4 of the wizard to migrate MDisks. Select the discovered MDisks on the IBM Storwize V3700.
8. In Step 5 of the wizard, configure hosts by completing the following steps:
a. Create the hosts on the Storwize V3700.
b. Select the hosts on the Storwize V3700.
9. In Step 6 of the wizard, map volumes to hosts by completing the following steps:
a. Map the volumes to the host on the Storwize V3700.
b. Verify that the disk device type is now 2145 on the host.
c. Run the SDDDSM datapath query commands on the host.
10.In Step 7 of the wizard, select the storage pool on the IBM Storwize V3700 in which to create the mirror copies as part of the background data migration task.
11.Finish the storage migration wizard.
12.Finalize the migrated volumes.
Detailed view of the storage migration wizard for the example scenario
The following steps provide more information about the wizard tasks for our example scenario: 1. Search the IBM SSIC for scenario compatibility. 2. Back up all of the data that is associated with the host, DS3400, and Storwize V3700. 3. Start New Migration to open the wizard on the Storwize V3700, as shown in Figure 6-23.
4. Follow step 1 of the wizard and select all of the restrictions and prerequisites, as shown in Figure 6-24. Click Next to continue.
5. Follow step 2 of the wizard, as shown in Figure 6-25. Complete all of the steps before you continue.
Pay careful attention to the following tasks:
a. Stop host operations or stop all I/O to volumes that you are migrating.
b. Remove zones between the hosts and the storage system from which you are migrating.
c. Update your host device drivers, including your multipath driver, and configure them for attachment to this system. Complete the steps that are described in 4.2.1, Windows 2008 R2: Preparing for Fibre Channel attachment on page 153 to prepare a Windows host to connect to the Storwize V3700 by using Fibre Channel. Pay careful attention to the following tasks:
Make sure that the latest OS service pack and test fixes are applied to your Microsoft server.
Use the latest firmware and driver levels on your host system.
Install host bus adapters (HBAs) on the Windows server by using the latest BIOS and drivers.
Connect the FC Host Adapter ports to the switches.
Configure the switches (zoning).
Configure the HBAs for hosts that are running Windows.
Set the Windows timeout value.
Install the multipath module.
d. Create a storage system zone between the storage system that is migrated and this system, and host zones for the hosts that are migrated. To perform this step, locate the WWPNs of the host, the IBM DS3400, and the Storwize V3700, and then create an alias for each port to simplify the zone creation steps.
Important: A WWPN is a unique identifier for each Fibre Channel port that is presented to the SAN fabric.
Figure 6-27 shows the IBM DS3400 storage manager host definition and the associated WWPNs.
Record the WWPNs for the alias, zoning, and Storwize V3700 New Host tasks.
Important: Alternatively, the QLogic SAN Surfer application for the QLogic HBAs or the SAN fabric switch reports can be used to locate the host's WWPNs.
Click the Controllers tab to show the WWPNs for each controller. Figure 6-29 shows the IBM DS3400 storage manager storage subsystem profile.
Next, locate the Storwize V3700 node canister ports and their associated WWPNs. Figure 6-30 on page 285 shows the Storwize V3700 System Details panel with the WWPNs shown for Enclosure 1 Canister 1.
WWPN: The WWPN is made up of eight bytes (two digits per byte). In Figure 6-30, the third-to-last byte in the listed WWPNs is 04, 08, 0C, or 10; these are the only bytes that differ between the WWPNs. Also, the last two bytes in the listed example, 26CE, are unique for each node canister. Noticing these types of patterns can help when you are zoning or troubleshooting SAN issues.
Figure 6-31 Example scenario Storwize V3700 and IBM DS3400 WWPN location diagram
alias= V3700_Canister_Right_Port3  wwpn= 50:05:07:68:03:0C:26:CF
Storwize V3700 ports connected to SAN Fabric B:
alias= V3700_Canister_Left_Port2   wwpn= 50:05:07:68:03:08:26:CE
alias= V3700_Canister_Left_Port4   wwpn= 50:05:07:68:03:10:26:CE
alias= V3700_Canister_Right_Port2  wwpn= 50:05:07:68:03:08:26:CF
alias= V3700_Canister_Right_Port4  wwpn= 50:05:07:68:03:10:26:CF
IBM DS3400 ports connected to SAN Fabric A:
alias= DS3400_CTRLA_FC1  wwpn= 20:26:00:A0:B8:75:DD:0E
alias= DS3400_CTRLB_FC1  wwpn= 20:27:00:A0:B8:75:DD:0E
IBM DS3400 ports connected to SAN Fabric B:
alias= DS3400_CTRLA_FC2  wwpn= 20:36:00:A0:B8:75:DD:0E
alias= DS3400_CTRLB_FC2  wwpn= 20:37:00:A0:B8:75:DD:0E
Windows 2008 HBA port connected to SAN Fabric A:
alias= W2K8_HOST_P2  wwpn= 21:00:00:24:FF:2D:0B:E9
Windows 2008 HBA port connected to SAN Fabric B:
alias= W2K8_HOST_P1  wwpn= 21:00:00:24:FF:2D:0B:E8
Click Create Host Group and create a host group that is named V3700. Figure 6-33 shows the IBM DS3400 Create Host Group panel.
Figure 6-34 IBM DS3400 storage manager configure tab: Create host
Enter a name for the host and ensure that the selected host type is IBM TS SAN VCE. The name of the host should be easily recognizable and meaningful, such as Storwize_V3700_Canister_Left and Storwize_V3700_Canister_Right. Click Next to continue. Figure 6-35 shows the IBM DS3400 storage manager configure host access (manual) panel.
Figure 6-35 IBM DS3400 storage manager configure tab: Configure host
The node canister's WWPNs are automatically discovered and must be matched to the canister's host definition. Select each of the four WWPNs for the node canister and then click Add >. The selected WWPN moves to the right side of the panel. Figure 6-36 shows the IBM DS3400 Specify HBA Host Ports panel.
Click Edit to open the Edit HBA Host Port panel, as shown in Figure 6-37.
Figure 6-37 IBM DS3400 storage manager specifying HBA host ports: Edit alias
Figure 6-38 shows the Edit HBA Host Port panel. Enter a meaningful alias for each of the WWPNs, such as V3700_Canister_Left_P1. See the previously defined SAN fabric aliases in Zoning: Define aliases on the SAN fabrics on page 286 to ensure that everything was added correctly.
After the four ports for the node canister with the meaningful aliases are added to the node canister host definition, click Next to continue. Figure 6-39 on page 293 shows the node canister WWPNs that are added to the host definition on the IBM DS3400 Specify HBA Host Ports panel.
Select Yes to allow the host to share access with other hosts for the same logical drives. Ensure that the existing Host Group is selected and shows the previously defined V3700 host group. Click Next to continue. Figure 6-40 shows the IBM DS3400 Specify Host Group panel.
A summary panel of the defined host and its associated host group is displayed. Cross-check and confirm the host definition summary, and then click Finish. Figure 6-41 shows the IBM DS3400 Confirm Host Definition panel.
A host definition must be created for the other node canister. This host definition must also be associated with the Host Group V3700. To configure the other node canister, complete the steps that are described in Creating IBM DS3400 hosts on page 289. The node canister host definitions are logically contained in the V3700 Host Group. After both node canister hosts are created, confirm the host group configuration by reviewing the IBM DS3400 host topology tree. To access the host topology tree, use the IBM DS3400 Storage Manager, click the Modify tab, and select Edit Host Topology. Figure 6-42 shows the IBM DS3400 Modify tab and the Edit Host Topology option.
Figure 6-43 shows the host topology of the defined V3700 Host Group with both of the IBM Storwize V3700 node canister hosts, as seen through the DS3400 Storage Manager software.
Figure 6-43 IBM DS3400 host group definition for the IBM Storwize V3700
For more information about the configuration of the IBM DS3400, see the IBM Storwize V3700 Information Center for DS3400 configuration recommendations, which is found at this website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp?topic=%2Fcom.ibm.storwize.v3700.710.doc%2Fsvc_configdiskcontrollersovr_22n9uf.html
Now that the environment is prepared, return to step 2 of the storage migration wizard in the IBM Storwize V3700 GUI and click Next to continue to the next stage of the migration wizard, as shown in Figure 6-44 on page 296.
Create a list of all external storage system volumes that are being migrated. Record the hosts that use each volume. Table 6-3 shows a list of the IBM DS3400 logical units that are to be migrated and the host that uses them.
Table 6-3 List of the IBM DS3400 logical units that are migrated and the host that uses them
LU Name      Controller  Array    SCSI ID  Host name  Capacity
Migration_1  DS3400      Array 1  0        W2K8_FC    50 GB
Migration_2  DS3400      Array 3  1        W2K8_FC    100 GB
Record the WWPNs that are associated with each host. The WWPNs that are associated to the host can be seen in Table 6-4 on page 297. It is also recommended to record the HBA firmware, HBA device driver version, adapter information, operating system, and V3700 multi-path software version, if possible.
Table 6-4 WWPNs associated with the host

Host Name | Adapter / Slot / Port             | WWPNs                              | HBA F/W | HBA Device Driver | Operating System | V3700 Multipath Software
W2K8_FC   | QLE2562 / 2 / 1, QLE2562 / 2 / 2  | 21000024FF2D0BE8, 21000024FF2D0BE9 | 2.10    | 9.1.9.25          | W2K8 R2 SP1      | SDDDSM 2.4.3.1-2
Unmap all volumes that are being migrated from the hosts in the storage system and map them to the host or host group that you created when your environment was prepared.
Important: If you cannot restrict volume access to specific hosts by using the external storage system, all volumes on the system must be migrated.
Figure 6-46 shows the IBM DS3400 logical drives mapping information before the change.
Figure 6-46 IBM DS3400 Logical drives mapping information before changes
To modify the mapping definition so that the LUs are accessible only by the V3700 Host Group, select Change... to open the Change Mapping panel and modify the mapping. This step ensures that the LU cannot be accessed from the Windows 2008 host. Figure 6-47 shows the Change... selection in the Modify Mapping panel of the DS3400.
Select Host Group V3700 in the menu and ensure that the Logical Unit Number (LUN) remains the same. Record the LUN number for later reference. Figure 6-48 shows the IBM DS3400 Change Mapping panel.
Repeat the steps that are described in Change IBM DS3400 LU mappings on page 297 for each of the LUs that are to be migrated. Confirm that the Accessible By column now reflects the mapping changes. Figure 6-50 on page 299 shows both logical drives are now accessible by Host Group V3700 only.
Figure 6-50 IBM DS3400 storage manager Modify panel: Edit host-to-logical drive mappings
Record the storage system LUN that is used to map each volume to this system. The LUNs that are used to map the logical drives remained unchanged and can be found in Table 6-3 on page 296. Now that step 3 of the storage migration wizard is complete, click Next to begin the Detect MDisks task. After the task is complete, click Close to move to the next step of the wizard. Figure 6-51 shows the Discover Devices task.
The next step of the storage migration wizard is entitled Migrating MDisks, as shown in Figure 6-52 on page 300. The MDisk name is allocated depending on the order of device discovery; mdisk2 in this case is LUN 0 and mdisk3 is LUN 1. There is an opportunity to change the MDisk names to something more meaningful to the user in later steps.
Select the MDisks and click Next to begin the Import MDisks task. After the Import MDisks running task is complete (as shown in Figure 6-53), select Close to move to the next stage.
The next stage of the storage migration wizard is entitled Configure Hosts, as shown in Figure 6-54.
If the Windows 2008 host is not yet defined in the Storwize V3700, select New Host to open the Create Host panel, as shown in Figure 6-55. Enter a host name and select the WWPNs that were recorded earlier from the Fibre Channel ports menu. Select Add Port to List for each WWPN. If the host is already defined, select it and click Next to move on to the next stage of the migration, as shown in Figure 6-58 on page 304.
Important: It is not mandatory to select the hosts now. The actual selection of the hosts occurs in the next step.
After all of the port definitions are added, click Create Host to start the Create Host running task. Figure 6-56 shows the Create Host panel with both of the required port definitions listed.
After the Create Host running task is complete, select Close (as shown in Figure 6-57) to return to the data migration wizard.
From the Configure Hosts stage of the data migration wizard, select the host that was configured and click Next, as shown in Figure 6-54 on page 301. The next step of the wizard is entitled Map Volumes to Hosts, as shown in Figure 6-58.
The name that is automatically given to the image mode volume includes the controller and the LUN information. At this stage, it is possible to rename the volumes to a more appropriate name. Highlight the volume and right-click or click Actions and select the rename option. After the new name is entered, click Rename from the Rename Volume panel to start the rename running task. Rename all volumes to be migrated. Figure 6-59 shows the Rename Volume panel.
After the final rename running task is complete, click Close (as shown in Figure 6-60) to return to the migration wizard.
Highlight the two MDisks and select Map to Host to open the Modify Host Mappings panel. Figure 6-61 shows the storage migration wizard with the renamed MDisks highlighted for mapping.
Figure 6-61 Storage migration wizard: Renamed MDisks highlighted for mapping
Select the host from the menu on the Modify Host Mappings panel (as shown in Figure 6-62) and click Map Volumes. The rest of the Modify Host Mappings panel opens.
The MDisks that were highlighted are shown in yellow in the Modify Host Mappings panel. The yellow highlighting means that the volumes are not yet mapped to the host. Now is the time to edit the SCSI ID, if required. (In this case, it is not necessary.) Click Map Volumes to start the Modify Mappings task and map the volumes to the host. Figure 6-63 on page 307 shows the Modify Host Mappings panel.
After the Modify Mappings running task is complete, select Close (as shown in Figure 6-64) to return to the data migration wizard.
Confirm that the MDisks are now mapped by ensuring the Host Mappings column has a Yes listed for each MDisk, as shown in Figure 6-65 on page 308.
Figure 6-66 Display the disk properties from the Windows 2008 Disk Management panel
After the disk properties panel is opened, the General tab shows the disk device type. Figure 6-67 shows the Windows 2008 disk properties General tab.
The Storwize V3700 SDDDSM can also be used to verify that the migrated disk device is connected correctly. For more information about running SDDDSM commands, see Chapter 4, Host configuration on page 151 and Chapter 5, Basic volume configuration on page 189. Use the SDDDSM output to verify that the expected number of devices, paths, and adapters are shown. From the storage migration wizard, click Next to open the next stage of the wizard, entitled Select Storage Pool, as shown in Figure 6-68. In this section, you can optionally choose an internal storage pool in which to create the mirror volumes for the data migration task. If you do not choose a pool, the data migration can be carried out at a later date.
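For reference, a quick path check from the Windows host uses the SDDDSM datapath commands; the device, path, and adapter counts in the output should match the expected configuration:

datapath query adapter
datapath query device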
Highlight an internal storage pool and click Next to begin the Start Migration task. After the Start Migration running task is complete, select Close (as shown in Figure 6-69 on page 310) to return to the storage migration wizard.
Click Next (as shown in Figure 6-68 on page 309) to move to the final stage of the data migration wizard entitled Finish the Storage Migration Wizard and click Finish, as shown in Figure 6-70.
The end of the storage migration wizard is not the end of the data migration. The System Migration panel opens, which shows the data migration in progress. A percentage indicator shows how far it has progressed, as shown in Figure 6-71.
When the volume migrations are complete, select the volume migration instance, right-click, and select Finalize to open the Finalize Volume Migrations panel. Figure 6-72 shows the System Migration panel with the completed migrations and the Finalize option.
From the Finalize Volume Migrations panel, verify the volume names and the number of migrations and click OK, as shown in Figure 6-73.
The image mode volumes are deleted and the associated image mode MDisks are removed from the migration storage pool. The status of those image mode MDisks is unmanaged. When the finalization is done, the data migration to the IBM Storwize V3700 is complete. Remove the DS3400-to-V3700 zoning and retire the older storage system.
Chapter 7. Storage pools

This chapter describes how the IBM Storwize V3700 manages physical storage resources. All storage resources that are under IBM Storwize V3700 control are managed by using storage pools. Storage pools make it easy to dynamically allocate resources, maximize productivity, and reduce costs. This chapter includes the following topics:
- Configuration
- Working with internal drives
- Configuring internal storage
- Working with MDisks
- Working with Storage Pools
7.1 Configuration
Storage pools are configured through the Easy Setup wizard when the system is first installed, as described in Chapter 2, Initial configuration on page 27. All available drives are configured based on recommended configuration preset values for the RAID level and drive class. The recommended configuration uses all of the drives to build arrays that are protected with the appropriate number of spare drives. The management GUI also provides a set of presets to help you configure for different RAID types. You can tune storage configurations slightly, based on best practices. The presets vary according to how the drives are configured. Selections include the drive class, the preset from the list that is shown, whether to configure spares, whether to optimize for performance or capacity, and the number of drives to provision.
Default extent size: The IBM Storwize V3700 GUI uses a default extent size of 1 GB when you define a new storage pool. This is a change in IBM Storwize code V7.1 (prior versions of the code used a default extent size of 256 MB). The GUI does not have an option to change the extent size. Therefore, if you want to create storage pools with a different extent size, this must be done via the command-line interface (CLI) by using the mkmdiskgrp and mkarray commands.
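A minimal sketch of that CLI flow follows; the pool name, extent size (in MB), drive IDs, and RAID level are illustrative, so substitute values that fit your configuration (here, drives 0 - 4 are assumed to be candidates):

mkmdiskgrp -name Pool_256MB -ext 256
mkarray -level raid5 -drive 0:1:2:3:4 Pool_256MB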
An alternative way to access the Internal Storage window is by clicking the Pools icon on the left side of the window, as shown in Figure 7-2.
On the right side of the Internal Storage window, the internal disk drives of the selected type are listed. By default, the following information is also listed:
- Logical drive ID
- Drive capacity
- Current type of use (unused, candidate, member, spare, or failed)
- Status (online, offline, or degraded)
- Name of the MDisk that the drive is a member of
- ID of the enclosure in which it is installed
- Physical drive slot ID of the enclosure in which it is installed
The default sort order is by enclosure ID; this can be changed to any other column by left-clicking the column header. To toggle between ascending and descending sort order, left-click the column header again. More details (for example, the drive's RPM speed or its MDisk member ID) can be shown by right-clicking the blue header bar of the table, which opens the selection panel, as shown in Figure 7-4.
In addition, you can find the internal storage capacity allocation indicator in the upper right. The Total Capacity shows the overall capacity of the internal storage that is installed in this IBM Storwize V3700 storage system. The MDisk Capacity shows the internal storage capacity that is assigned to the MDisks. The Spare Capacity shows the internal storage capacity that is used for hot spare disks.
The percentage bar that is shown in Figure 7-5 indicates how much capacity is allocated.
Fix Error
The Fix Error action starts the Directed Maintenance Procedure (DMP) for a defective drive. For more information, see Chapter 11, RAS, monitoring, and troubleshooting on page 543.
A drive must be taken offline only if a spare drive is available. If the drive fails (as shown in Figure 7-8), the MDisk of which the failed drive is a member remains online and a hot spare is automatically reassigned.
If sufficient spare drives are not available and a drive must be taken offline, the second option (taking the drive offline without maintaining redundancy) must be selected. This option results in a degraded MDisk, as shown in Figure 7-9.
The IBM Storwize V3700 storage system prevents the drive from being taken offline if there might be data loss as a result. A drive cannot be taken offline (as shown in Figure 7-10) if no suitable spare drives are available and, based on the RAID level of the MDisk, drives are already offline.
Figure 7-10 Internal drive offline not allowed because of insufficient redundancy
Example 7-1 shows how to use the chdrive command to set the drive to failed.
Example 7-1 The use of the chdrive command to set drive to failed
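A representative invocation (the drive ID of 1 is illustrative) might be:

chdrive -use failed 1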
Mark as...
The internal drives in the IBM Storwize V3700 storage system can be assigned several usage roles, which can be unused, candidate, or spare, as shown in Figure 7-11 on page 321. The roles have the following meanings:
- Unused: The drive is not in use and cannot be used as a spare.
- Candidate: The drive is available for use in an array.
- Spare: The drive can be used as a hot spare, if required.
The new role that can be assigned depends on the current drive usage role. Figure 7-12 shows these dependencies.
Identify
Use the Identify action to turn on the LED light so that you can easily identify a drive that must be replaced or that you want to troubleshoot. The panel that is shown in Figure 7-13 on page 322 appears when the LED is on.
Click Turn LED Off when you are finished. Example 7-2 shows how to use the chenclosureslot command to turn on and off the drive LED.
Example 7-2 The use of the chenclosureslot command to turn on and off drive LED
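A representative pair of invocations (slot 3 of enclosure 1 is illustrative) might be:

chenclosureslot -identify yes -slot 3 1
chenclosureslot -identify no -slot 3 1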
Figure 7-15 shows the list of dependent volumes for a drive when its underlying MDisk is in a degraded state.
Example 7-3 shows how to view dependent volumes for a specific drive by using the CLI.
Example 7-3 Command to view dependent vdisks for a specific drive
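A representative invocation (drive ID 1 is illustrative) might be:

lsdependentvdisks -drive 1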
Properties
Clicking Properties (as shown in Figure 7-16 on page 324) in the Actions menu or double-clicking the drive provides the VPD and the configuration information. The Show Details option was selected to show more details.
If the Show Details option is not selected, the technical information section is reduced, as shown in Figure 7-17.
A tab for the Drive Slot is available in the Properties panel (as shown in Figure 7-18) to show specific information about the slot of the selected drive.
Example 7-4 shows how to use the lsdrive command to display configuration information and drive VPD.
Example 7-4 The use of the lsdrive command to display configuration information and drive VPD
lsdrive driveID
The decision choices include the following considerations:
- Use the initial configuration: During system setup, all available drives can be configured based on the RAID configuration presets. The initial setup creates MDisks and pools, but does not create volumes. If this automated configuration fits your business requirements, it is recommended that this configuration is used.
- Customize the storage configuration: A storage configuration might be customized for the following reasons:
  - The automated initial configuration does not meet customer requirements.
  - More storage was attached to the IBM Storwize V3700 and must be integrated into the existing configuration.
Table 7-1 SSD RAID presets

Preset        | Purpose                                                                                                                            | RAID level | Drives per array goal | Drive count (Min - Max) | Spare drive goal
SSD RAID 5    | Protects against a single drive failure. Data and one stripe of parity are striped across all array members.                      | 5          | 8                     | 3 - 16                  | 1
SSD RAID 6    | Protects against two drive failures. Data and two stripes of parity are striped across all array members.                         | 6          | 12                    | 5 - 16                  | 1
SSD RAID 10   | Protects against at least one drive failure. All data is mirrored on two array members.                                           | 10         | 8                     | 2 - 16 (even)           | 1
SSD RAID 1    | Protects against at least one drive failure. All data is mirrored on two array members.                                           | 1          | 2                     | 2                       | 1
SSD RAID 0    | Provides no protection against drive failures.                                                                                    | 0          | 8                     | 1 - 8                   | 0
SSD Easy Tier | Mirrors data to protect against drive failure. The mirrored pairs are spread between storage pools to be used for the Easy Tier function. | 10         | 2                     | 2 - 16 (even)           | 1
Table 7-2 describes the RAID presets that are used for hard disk drives (HDDs) for the IBM Storwize V3700 storage system.

Table 7-2 HDD RAID presets

Preset           | Purpose                                                                                                                                                     | RAID level | Drives per array goal | Drive count (Min - Max) | Spare goal
Basic RAID 5     | Protects against a single drive failure. Data and one stripe of parity are striped across all array members.                                               | 5          | 8                     | 3 - 16                  | 1
Basic RAID 6     | Protects against two drive failures. Data and two stripes of parity are striped across all array members.                                                  | 6          | 12                    | 5 - 16                  | 1
Basic RAID 10    | Protects against at least one drive failure. All data is mirrored on two array members.                                                                    | 10         | 8                     | 2 - 16 (evens)          | 1
Balanced RAID 10 | Protects against at least one drive or enclosure failure. All data is mirrored on two array members. The mirrors are balanced across the two enclosure chains. | 10         | 8                     | 2 - 16 (evens)          | 1
RAID 0           | Provides no protection against drive failures.                                                                                                              | 0          | 8                     | 1 - 8                   | 0
The option for deleting the volumes, mappings, and MDisks must be selected so that all associated drives are marked as candidates for deletion, as shown in Figure 7-21. These drives can now be used for a different configuration.
Important: When a pool is deleted, data that is contained within any volume that is provisioned from this pool is deleted.
A configuration wizard opens and guides you through the process of configuring internal storage. The wizard shows all internal drives, their status, and their use. The status shows whether a drive is Online, Offline, or Degraded. The Use column shows whether a drive is Unused, a Candidate for configuration, a Spare, a Member of a current configuration, or Failed. Figure 7-23 shows an example in which 67 drives are available for configuration.
If there are internal drives with a status of unused, a window opens, which gives the option to include them in the RAID configuration, as shown in Figure 7-24.
When the decision is made to include the drives in the RAID configuration, their status is set to Candidate, which also makes them available for a new MDisk. The use of the storage configuration wizard simplifies the initial disk drive setup and offers the following options:
- Use the recommended configuration
- Select a different configuration
Selecting Use the recommended configuration guides you through the wizard that is described in Using the recommended configuration on page 330. Selecting Select a different configuration uses the wizard that is described in Selecting a different configuration on page 333.
The following recommended RAID presets for the different drive classes are available:
- SSD Easy Tier or RAID 1 for SSDs
- Basic RAID 5 for SAS drives and SSD drives
- Basic RAID 6 for Nearline SAS drives
Figure 7-25 on page 330 shows a sample configuration with 1x SSD and 14x SAS drives. The Configuration Summary shows a warning that there are insufficient SSDs installed to satisfy the RAID 1 SSD preset because two drives are required, plus a third drive for a hot spare.
By using the recommended configuration, spare drives are also automatically created to meet the spare goals according to the preset that is chosen. One spare drive is created out of every 24 disk drives of the same drive class. Spares are not created if sufficient spares are already configured. Spare drives in the IBM Storwize V3700 are global spares, which means that any spare drive that has at least the same capacity as the drive to be replaced can be used in any array. Thus, an SSD array with no SSD spare available uses an HDD spare instead.
If the proposed configuration meets your requirements, click Finish, and the system automatically creates the array MDisks with a size according to the chosen RAID level. Storage pools are also automatically created to contain the MDisks with similar performance characteristics, including the consideration of RAID level, number of member drives, and drive class.
Important: This option adds new MDisks to an existing storage pool when the characteristics match. If this is not what is required, the Select a different configuration option should be used.
After an array is created, the array MDisk members are synchronized with each other through a background initialization process. The progress of the initialization process can be monitored by clicking the icon at the left of the Running Tasks status bar and selecting the initialization task to view the status, as shown in Figure 7-26 on page 332.
Click the taskbar to open the progress window, as shown in Figure 7-27. The array is available for I/O during this process. The initialization does not affect the availability because of possible member drive failures.
2. Click Next and select the appropriate RAID preset, as shown in Figure 7-29 on page 334.
3. Define the RAID attributes. You can slightly tune RAID configurations based on best practices. Selections include the configuration of spares, optimization for performance, optimization for capacity, and the number of drives to provision. Each IBM Storwize V3700 preset has a specific goal for the number of drives per array. For more information, see the Information Center at this website: http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp Table 7-3 shows the RAID goal widths.
Table 7-3 RAID goal width

RAID level | HDD goal width | SSD goal width
0          | 8              | 8
5          | 8              | 9
6          | 12             | 10
10         | 8              | 8
Optimizing for Performance creates arrays with the same capacity and performance characteristics. The RAID goal width (as shown in Table 7-3) must be met for this target. In a performance-optimized setup, the IBM Storwize V3700 provisions eight physical disk drives in a single array MDisk, with the following exceptions:
- RAID 6 uses 12 disk drives.
- SSD Easy Tier uses two disk drives.
Therefore, creating an Optimized for Performance configuration is possible only if a sufficient number of drives is available to match your needs.
As a consequence, all arrays with similar physical disks feature the same performance characteristics. Because of the defined presets, this setup might leave drives unused. The remaining unconfigured drives can be used in another array. Figure 7-30 shows an example in which not all of the provisioned drives can be used in a performance optimized configuration. Six drives remain.
Figure 7-31 shows that the number of drives is not enough to satisfy the needs of the configuration.
Figure 7-32 shows that there are a suitable number of drives to configure performance optimized arrays.
Four RAID 5 arrays were built, and all provisioned drives are used. Optimizing for capacity creates arrays that allocate all of the drives that are specified in the Number of drives to provision field. This option results in arrays of different capacities and performance. The number of drives in each MDisk does not vary by more than one drive, as shown in Figure 7-33.
4. Storage pool assignment. Choose whether an existing pool must be expanded or whether a new pool is created for the configuration, as shown in Figure 7-34.
Complete the following steps to expand or create a pool: a. Expand an existing pool. When an existing pool is to be expanded, you can select an existing storage pool that does not contain MDisks or a pool that contains MDisks with the same performance characteristics, which are listed automatically as shown in Figure 7-35.
b. Create a pool. Alternatively, a storage pool is created by entering the required name, as shown in Figure 7-36.
An alternative way to access the MDisks window is by using the Pools function icon and selecting MDisks by Pools, as shown in Figure 7-38.
The following default information is provided:
- Name: The MDisk or storage pool name that is provided during the configuration process.
- ID: The MDisk or storage pool ID that is automatically assigned during the configuration process.
- Status: The status of the MDisk and storage pool. The following statuses are possible:
  - Online: All MDisks are online and performing optimally.
  - Degraded: One MDisk is in a degraded state (for example, a missing SAS connection to the enclosure of member drives, or a failed drive with no spare available). As shown in Figure 7-40, the pool is also degraded.
  - Offline: One or more MDisks in a pool are offline. The pool (Pool3) also changes to offline, as shown in Figure 7-41.
- Capacity: The capacity of the MDisk. For the storage pool, the total capacity of all the MDisks in the pool is shown; the usage of the storage pool is represented by a bar and a number.
- Mode: The mode of the MDisk. The following modes are available in the IBM Storwize V3700:
  - Array: The MDisk represents a set of drives from internal storage that is managed together by using RAID.
  - Image/unmanaged: An intermediate status of the migration process, as described in Chapter 6, Storage migration wizard on page 261.
- Storage Pool: The name of the storage pool to which the MDisk belongs.
The CLI command lsmdiskgrp (as shown in Example 7-5) returns a concise list or a detailed view of the storage pools that are visible to the system.
Example 7-5 CLI command lsmdiskgrp
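Run without arguments, the command returns the concise list; with a pool name or ID, it returns the detailed view. The pool name here is illustrative:

lsmdiskgrp
lsmdiskgrp Pool1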
You can choose the following RAID actions:
- Set Spare Goal: Figure 7-43 shows how to set the number of spare drives that are required to protect the array from drive failures. The equivalent CLI command is:
  charray -sparegoal mdiskID goal
  If the number of drives that are assigned as spares does not meet the configured spare goal, an error is logged in the event log that reads: Array MDisk is not protected by sufficient spares. This error can be fixed by adding drives as spares. During the internal drive configuration, spare drives are automatically assigned according to the chosen RAID preset's spare goals, as described in 7.3, Configuring internal storage on page 326.
- Swap Drive: The Swap Drive action can be used to replace a drive in the array with another drive with the status of Candidate or Spare. This action is used to replace a drive that failed, or is expected to fail soon; for example, as indicated by an error message in the event log.
Select an MDisk that contains the drive to be replaced and click RAID Actions Swap Drive. In the Swap Drive window, select the member drive that is replaced (as shown in Figure 7-44) and click Next.
In step 2 (as shown as Figure 7-45), a list of suitable drives is presented. One drive must be selected to swap into the MDisk. Click Finish.
The exchange process starts and runs in the background, and the volumes on the affected MDisk remain accessible. If for any reason the GUI process is not used, the CLI command in Example 7-7 can be run.
Example 7-7 CLI command to swap drives
charraymember -balanced -member oldDriveID -newdrive newDriveID mdiskID

Delete
An array MDisk can be deleted by clicking RAID Actions → Delete. To select more than one MDisk, use Ctrl+left-click. A confirmation is required (as shown in Figure 7-46) in which you must enter the number of MDisks that you want to delete. If there is data on the MDisk, it can be deleted only by selecting the option Delete the RAID array MDisk even if it has data on it. The system migrates the data to other MDisks in the pool.
Data that is on the MDisk is migrated to other MDisks in the pool, assuming that enough space is available on the remaining MDisks in the pool.
Available capacity: Make sure that enough available capacity is left in the storage pool for the data on the MDisks to be removed.
After an MDisk is deleted from a pool, its former member drives return to candidate mode. The alternative CLI command to delete MDisks is shown in Example 7-8.
Example 7-8 CLI command to delete MDisk
rmmdisk -mdisk list -force mdiskgrpID

If all of the MDisks of a storage pool were deleted, the pool remains as an empty pool with 0 bytes of capacity, as shown in Figure 7-47.
Rename
MDisks can be renamed by selecting the MDisk and clicking Rename from the Actions menu. Enter the new name of your MDisk (as shown in Figure 7-49) and click Rename.
Properties
The Properties action for an MDisk shows the information that you need to identify it. In the MDisks by Pools window, select the MDisk and click Properties from the Actions menu. The following tabs are available in this window: The Overview tab (as shown in Figure 7-51) contains information about the MDisk. To show more details, click Show Details.
The Dependent Volumes tab (as shown in Figure 7-52) lists all of the volumes that use extents on this MDisk.
In the Member Drives tab (as shown in Figure 7-53), you find all of the member drives of this MDisk. Also, all actions that are described in 7.2.2, Actions on internal drives on page 318 can be performed on the drives that are listed here.
An alternative path to the Pools window is to click Pools → MDisks by Pools, as shown in Figure 7-55.
The MDisks by Pools window (as shown in Figure 7-56) allows you to manage storage pools. All existing storage pools are displayed by row. The first row contains the item Not in a Pool, if any such MDisks exist. Each defined storage pool is displayed with its assigned icon and name, numerical ID, status, and a graphical indicator that shows the ratio of the pool's capacity that is allocated to volumes.
When you expand a pool's entry by clicking the plus sign (+) to the left of the pool's icon, you can access the MDisks that are associated with this pool. You can perform all actions on them, as described in 7.4, Working with MDisks on page 338.
The new pool is listed in the pool list with 0 bytes, as shown in Figure 7-58.
If it is safe to delete the pool, the option must be selected.
Important: After you delete the pool, all data that is stored in the pool is lost, except for the image mode MDisks; their volume definition is deleted, but the data on the imported MDisk remains untouched. After you delete the pool, all of the associated volumes and their host mappings are removed. All of the array mode MDisks in the pool are removed, and all of the member drives return to candidate status.
Chapter 8. Advanced host and volume administration
If you click Hosts, the Hosts window opens, as shown in Figure 8-2.
As you can see in Figure 8-2, a few hosts are created and there are volumes mapped to all of them. These hosts are used to show all of the possible modifications. If you highlight a host, you can click Actions (as shown in Figure 8-3 on page 356) or right-click the host to see all of the available tasks.
Important: Fibre Channel over Ethernet hosts are listed as FC Hosts.
As Figure 8-3 shows, there are a number of tasks that are related to host mapping. For more information, see sections 8.1.1, Modifying Mappings menu and 8.1.2, Unmapping volumes from a host on page 360.
At the upper left, there is a drop-down menu that shows the host that you selected. By selecting the host from this menu, IBM Storwize V3700 lists the volumes that are ready to be mapped to the chosen host. The left pane shows the volumes that are already mapped to this host. In our example, a single volume with SCSI ID 0 is mapped to the host Host_02, and nine more volumes are available.
Important: The unmapped volumes pane refers to volumes that are not mapped to the chosen host.
To map a volume, highlight the volume in the left pane (as shown in Figure 8-5) and click the upper arrow (pointing to the right) to move the volume to the right pane. The changes are marked in yellow, and the Map Volumes and Apply buttons are now enabled, as shown in Figure 8-6.
If you click Map Volumes, the changes are applied (as shown in Figure 8-7) and the Modify Mappings window shows the task completed successfully.
After you click Close, the Modify Mappings window closes. If you clicked Apply, the changes are submitted to the system, but the Modify Mappings window remains open for further changes. You can now choose to modify another host by selecting it from the Hosts drop-down menu or continue working with the host that is already selected. As shown in Figure 8-8, we switched to a different host.
Highlight the volume that you want to modify again and click the right arrow button to move it to the right pane. If you right-click the yellow unmapped volume, you can change the SCSI ID that is used for the host mapping, as shown in Figure 8-9.
Click Edit SCSI ID and then click OK to change the SCSI ID. Click Apply to submit the changes and complete the host volume mapping.
Important: IBM Storwize V3700 automatically assigns the lowest available SCSI ID if none is specified. However, you can set a SCSI ID for the volume. The SCSI ID cannot be changed while the volume is assigned to a host.
If you want to remove a host mapping, the required steps are the same. For more information about unmapping volumes, see 8.1.2, Unmapping volumes from a host.
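For reference, a mapping with an explicit SCSI ID can also be created from the CLI; the host name, SCSI ID, and volume name here are illustrative:

mkvdiskhostmap -host Host_02 -scsi 1 volume_001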
If you want to remove access to all volumes in your IBM Storwize V3700 from a host, you can do it by highlighting the host from Hosts window and clicking Unmap all Volumes from the Actions menu, as shown in Figure 8-11.
You are prompted to confirm the number of mappings you want to remove. Enter the number of mappings and click Unmap. In our example, we remove two mappings. Figure 8-12 shows the unmap from host Host_01.
Unmapping: By clicking Unmap, all access for this host to volumes that are controlled by the IBM Storwize V3700 system is removed. Ensure that you run the required procedures in your host operating system before the unmapping procedure.
The changes are applied to the system, as shown in Figure 8-13. Click Close after you review the output.
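A single mapping can also be removed from the CLI; the host and volume names here are illustrative:

rmvdiskhostmap -host Host_01 volume_001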
Figure 8-14 shows that the host Host_01 no longer has any volume mappings.
You are prompted to select the target host onto which you want to duplicate the volume mappings, as shown in Figure 8-16.
Click Duplicate and then Close to return to the Hosts window.
Important: Always check the operating system capabilities and requirements before you duplicate volume mappings.
Enter a new name and click Rename, as shown in Figure 8-18. If you click Reset, your changes are not saved and the host retains its original name.
After the changes are applied to the system, click Close from the task window, as shown in Figure 8-19.
You are prompted to confirm the number of hosts you want to delete then click Delete, as shown in Figure 8-21.
If you want to delete a host with volumes assigned, you must force the deletion by selecting the option in the lower part of the window, as shown in Figure 8-21. If you select this option, the host is completely removed from the IBM Storwize V3700. After the task is complete, click Close (as shown in Figure 8-22 on page 367) to return to the mappings window.
In the following example, we selected the host Host_01 to show the host properties information. When the Overview tab opens, select Show Details at the lower left of the window to see more information about the host, as shown in Figure 8-24.
This tab provides the following information:
- Host Name: Host object name.
- Host ID: Host object identification number.
- Status: The current host object status; it can be Online, Offline, or Degraded.
- # of FC Ports: Shows the number of host Fibre Channel or FCoE ports that IBM Storwize V3700 can see.
- # of iSCSI Ports: Shows the number of host iSCSI names or host IQN IDs.
- # of SAS Ports: Shows the number of host SAS ports that are connected to IBM Storwize V3700.
- iSCSI CHAP Secret: Shows the Challenge Handshake Authentication Protocol (CHAP) information, if it exists or is configured.
To change the host properties, click Edit and several fields can be edited, as shown in Figure 8-25.
The following changes can be made:
- Host Name: Change the host name.
- Host Type: Change this setting if you intend to change the host type to HP/UX, OpenVMS, or TPGS hosts.
- iSCSI CHAP Secret: Enter or change the iSCSI CHAP secret for this host.
Make any changes that are necessary and click Save to apply them. Click Close to return to the Host Properties window. The Mapped Volumes tab (as shown in Figure 8-26 on page 370) gives you an overview of which volumes are mapped to this host. The details that are shown are the SCSI ID, volume name, UID (volume ID), and the caching I/O group ID per volume.
Important: Only a single I/O group is allowed in an IBM Storwize V3700 cluster. Selecting the Show Details option does not show any more detailed information.
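These properties can also be set from the CLI with the chhost command; the host name, type, and CHAP secret values here are illustrative:

chhost -type tpgs Host_01
chhost -chapsecret my_chap_secret Host_01
chhost -name Host_01_new Host_01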
The Port Definitions tab (as shown in Figure 8-27) shows the configured host ports and provides status information about them. It also shows the WWPN numbers (for SAS and FC hosts) and the IQN (iSCSI Qualified Name) for iSCSI hosts. The Type column shows the port type information and the # Nodes Logged In column lists the number of IBM Storwize V3700 node canisters that each port (initiator port) has logged on to.
By using this window, you can also Add and Delete Host Port (or ports), as described in 8.2, Adding and deleting host ports. The Show Details option does not show any other information. Click Close to close the Host Properties section.
Hosts are listed in the left pane of the window, as shown in Figure 8-29 on page 372. The function icons show an orange cable for Fibre Channel and FCoE hosts, black for SAS hosts, and a blue cable for iSCSI hosts. The properties of the highlighted host are shown in the right pane. If you click New Host, the wizard that is described in Chapter 4, Host configuration on page 151 starts. If you click the Action drop-down menu (as shown in Figure 8-29 on page 372), the tasks that are described in the previous sections can be started from this location.
Important: A host system can have a mix of Fibre Channel, iSCSI, and SAS connections. If you must mix protocols, check your Operating System capabilities and plan carefully to avoid miscommunication or data loss.
The port appears as unverified because it is not logged on to the IBM Storwize V3700. The first time that the port logs on, the state changes to online automatically and the mapping is applied to this port. To remove one of the ports from the list, click the red X next to it. In Figure 8-31, we manually added an FC port.
Important: If you remove online or offline ports, IBM Storwize V3700 prompts you to enter the number of ports that you want to delete, but it does not warn you about mappings. Disk mapping is associated with the host object, and LUN access is lost if all ports are deleted.
Click Add Ports to Host and the changes are applied. Figure 8-32 on page 374 shows the output after the ports are added to the host. Even if a port is offline, the IBM Storwize V3700 still adds it.
Important: IBM Storwize V3700 allows the addition of an offline SAS port.
Enter the SAS WWPN in the SAS Port field and then click Add Port to List.
Select the SAS WWPN you want to add to the existing host and click Add Port to List, as shown in Figure 8-33.
The Add Port to Host task completes successfully, as shown in Figure 8-32 on page 374.
Enter the initiator name of your host and click Add Port to List. After you add the iSCSI Port, click Add Ports to Host to complete the tasks and apply the changes to the system. The iSCSI port status remains as unknown until it is added to the host and a host rescan process is completed. Figure 8-35 shows the output after an iSCSI port is added.
Click Close to return to the Ports by Host window.
Important: An error message with code CMMVC6581E is shown if one of the following conditions occurs:
- The IQNs exceed the maximum number that is allowed.
- There is a duplicated IQN.
- The IQN contains a comma or leading or trailing spaces, or is not valid in some other way.
If you press and hold the Ctrl key, you can also select several host ports to delete. Click Delete and you are prompted to enter the number of host ports that you want to delete, as shown in Figure 8-37.
Click Delete to apply the changes to the system. A task window appears that shows the results. Click Close to return to the Ports by Host window.
This window shows a list of all the hosts and volumes with the respective SCSI IDs and volume unique identifiers (UIDs). In our example in Figure 8-38, the host vmware-fc has two mapped volumes (vmware-fc and vmware-fc1); the associated SCSI IDs (0 and 1), volume names, volume unique identifiers (UIDs), and caching I/O group IDs are shown. If you highlight one line and click Actions (as shown in Figure 8-39), the following options are available:
- Unmap Volumes
- Properties (Host)
- Properties (Volume)
If multiple lines are highlighted (which are selected by pressing and holding the Ctrl key), only the Unmap Volumes option becomes available.
A task window should appear showing the status and completion of volume unmapping. Figure 8-41 shows volume windows2k8-sas being unmapped from host windows2k8-sas.
Warning: Always ensure that you run the required procedures in your host operating system before you unmap volumes in the IBM Storwize V3700 GUI.
By default, this window lists all configured volumes on the system and provides the following information:
- Name: Shows the name of the volume. A + sign next to the name means that there are two copies of this volume. Click the + sign to expand the view and list the copies, as shown in Figure 8-44 on page 381.
- Status: Gives status information about the volume, which can be online, offline, or degraded.
- Capacity: The disk capacity that is presented to the host. A blue volume icon next to the capacity means that the volume is thin-provisioned; the listed capacity is the virtual capacity, which might be more than the real capacity on the system.
- Storage Pool: Shows the storage pool in which the volume is stored. The primary copy is shown unless you expand the volume copies.
- UID: The volume unique identifier.
- Host Mappings: Shows whether a volume has host mappings: Yes (along with a small server icon) when host mappings exist, and No when there are none.
Important: If you right-click anywhere in the blue title bar, you can customize the volume attributes that are displayed. You might want to add some useful information here.
To create a volume, click New Volume and complete the steps that are described in 5.1, Provisioning storage from IBM Storwize V3700 and making it available to the host on page 190.
You can right-click or highlight a volume and select Actions to see the available actions for a volume, as shown in Figure 8-45.
Depending on which volume you highlighted, the following volume options are available:
- Map to Host
- Unmap All Hosts
- View Mapped Host
- Duplicate Volume
- Rename
- Shrink
- Expand
- Migrate to Another Pool
- Export to Image Mode
- Delete
- Properties
The following volume copy options are available:
- Add Mirror Copy
- Thin Provisioned (only available for thin-provisioned volumes):
  - Shrink
  - Expand
  - Properties
All of these options are described next.
After you select a host, the Modify Mappings window opens. In the upper left, you see the selected host. The yellow volume is the selected volume that is ready to be mapped, as shown in Figure 8-47. Click Map Volumes to apply the changes to the system.
After the changes are made, click Close to return to the All Volumes window. Modify Mappings window: For more information about the Modify Mappings window, see 8.1.1, Modifying Mappings menu on page 356.
After the task completes, click Close to return to the All Volumes window.
Warning: Always ensure that you run the required procedures in your host operating system before you unmap a volume.
If you want to remove a mapping, highlight the host and click Unmap from Host, which removes the access for the selected host (after you confirm it). If several hosts are mapped to this volume (for example, in a cluster), only the highlighted host is removed.
A Duplicate Volume window opens, and you are prompted to enter the new volume name. IBM Storwize V3700 automatically suggests a name by adding an incremental number at the end of the new volume name, as shown in Figure 8-51.
Click Duplicate and the IBM Storwize V3700 creates an independent volume with the same characteristics as the source volume.
Important: When the Duplicate Volume function is used, you cannot specify a preferred node.
If you click Reset, the name field is reset to the active name of the volume. Click Rename to apply the changes, and click Close after the task completes.
Click Shrink to start the process, and then click Close when the task completes to return to the All Volumes window. Run the required procedures on your host after the shrinking process completes.
Important: For volumes that contain more than one copy, you might receive a CMMVC6354E error; use the lsvdisksyncprogress command to view the synchronization status. Wait for the copy to synchronize. If you want the synchronization process to complete more quickly, increase the rate by running the chvdisk command. When the copy is synchronized, resubmit the shrink process.
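A sketch of that CLI sequence follows; the volume name and the sync rate value are illustrative:

lsvdisksyncprogress
chvdisk -syncrate 80 volume_001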
After the tasks complete, click Close to return to the All Volumes window. Run the required procedures in your operating system to use the available space.
Select the new target storage pool and click Migrate, as shown in Figure 8-55. The volume copy migration starts, as shown in Figure 8-56. Click Close to return to the All Volumes window.
Depending on the size of the volume, the migration process can take some time. You can monitor the status of the migration in the running tasks bar at the bottom of the window. Volume migration tasks cannot be interrupted. After the migration completes, the copy 0 from the vmware-sas volume is shown in the new storage pool, as shown in Figure 8-57 on page 390.
The volume copy was migrated without any downtime to the new storage pool. It is also possible to migrate both volume copies to other storage pools. Another way to migrate volumes to a different pool is by using the volume copy feature, as described in 8.6.5, Migrating volumes by using the volume copy features on page 406.
Click Delete and the volume is removed from the system. Click Close to return to the Volumes window.
Important: You must force the deletion if the volume has host mappings or is used in FlashCopy or RemoteCopy mappings. To be safe, always ensure the volume has no association before you delete it.
The following details are available:
- Volume Properties:
  - Volume Name: Shows the name of the volume.
  - Volume ID: Shows the ID of the volume. Every volume has a system-wide unique ID.
  - Status: Gives status information about the volume, which can be online, offline, or degraded.
  - Capacity: Shows the capacity of the volume. If the volume is thin-provisioned, this number is the virtual capacity; the real capacity is displayed for each copy.
  - # of FlashCopy Mappings: The number of existing FlashCopy relationships. For more information, see 10.1, FlashCopy on page 450.
  - Volume UID: The volume unique identifier.
  - Accessible I/O Group: Shows the I/O group.
  - Preferred Node: Specifies the ID of the preferred node for the volume.
  - I/O Throttling: It is possible to set a maximum rate at which the volume processes I/O requests. The limit can be set in I/Os or MBps. This advanced feature can be enabled only through the CLI, as described in Appendix A, Command-line interface setup and SAN Boot on page 593.
  - Mirror Sync Rate: After creation, or if a volume copy is offline, the mirror sync rate weights the synchronization process. Volumes with a high sync rate (100%) complete the synchronization faster than volumes with a lower priority. By default, the rate is set to 50% for all volumes.
  - Cache Mode: Shows whether the cache is enabled or disabled for this volume.
  - Cache State: Indicates whether open I/O requests are inside the cache that are not yet destaged to the disks.
  - UDID (OpenVMS): The unit device identifier that is used by OpenVMS hosts to access the volume.
- Copy Properties:
  - Storage Pool: Provides information about which pool the copy is in, the type of copy (generic or thin-provisioned), the status of the copy, and the Easy Tier status.
  - Capacity: Shows the allocated (used) and virtual (real) capacity from both tiers (SSD and HDD), the warning threshold, and the grain size for thin-provisioned volumes.
If you want to modify any of these changeable settings, click Edit and the window changes to modify mode. Figure 8-61 shows the Volume Details Overview tab in modify mode.
Inside the Volume Details window, the following properties can be changed:
- Volume Name
- Mirror Sync Rate
- Cache Mode
- UDID
Make any required changes and click Save.
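These editable properties map to chvdisk parameters; a sketch with illustrative values follows (-udid applies to OpenVMS hosts):

chvdisk -name volume_001_new volume_001
chvdisk -syncrate 80 volume_001_new
chvdisk -cache none volume_001_new
chvdisk -udid 1234 volume_001_new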
To unmap a host from the volume, highlight it and click Unmap from Host. Confirm the number of mappings to remove and click Unmap. Figure 8-63 shows the Unmap Host window.
The changes are applied to the system. The selected host no longer has access to this volume. Click Close to return to the Host Maps window. For more information about host mappings, see 8.3, Host mappings overview on page 377.
Highlight an MDisk and click Actions to see the available tasks, as shown in Figure 8-65 on page 396. The Show Details option on the lower left side does not provide more information. For more information about the available tasks, see Chapter 7, Storage pools on page 313.
Select the storage pool in which the new copy should be created, as shown in Figure 8-67. If the new copy should be thin-provisioned, select the Thin-Provisioned option and click Add Copy.
The copy is created after you click Add Copy and data starts to synchronize as a background task. Figure 8-68 on page 398 shows you that the volume named volume_001 holds two volume copies.
These changes are made only to the internal storage; no changes to your host are necessary.
Deallocating extents: You can deallocate only extents that do not contain stored data. If the space is allocated because there is data on the extents, you cannot shrink the allocated space, and an out-of-range warning message appears. Figure 8-70 shows the Shrink Volume window.
After the task completes, click Close. The allocated space of the thin-provisioned volume is reduced.
The new space is now allocated. Click Close after the task completes.
After the task completes, click Close to return to the All Volumes window.
If you review the volume copies that are shown in Figure 8-73 on page 400, you see that one of the copies has a star displayed next to its name, as also shown in Figure 8-74.
Each volume has a primary and a secondary copy, and the star indicates the primary copy. The two copies are always synchronized, which means that all writes are destaged to both copies, but all reads are always done from the primary copy. Two copies per volume is the maximum number that is configurable, and you can change the roles of your copies. To accomplish this task, highlight the secondary copy and then click Actions → Make Primary. Usually, it is a best practice to place the volume copies on storage pools with similar performance because the write performance is constrained if one copy is on a lower-performance pool than the other. Figure 8-75 shows the secondary copy Actions menu.
If you demand high read performance only, another possibility is to place the primary copy in an SSD pool and the secondary copy in a normal disk storage pool. This action maximizes the read performance of the volume and makes sure that you have a synchronized second copy in your less expensive disk pool. It is possible to migrate online copies between storage pools. For more information about how to select which copy you want to migrate, see 8.4.9, Migrating a volume to another storage pool on page 388. Click Make Primary and the role of the copy is changed to primary. Click Close when the task completes. The volume copy feature is also a powerful option for migrating volumes, as described in 8.6.5, Migrating volumes by using the volume copy features on page 406.
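The primary copy can also be changed from the CLI; the copy ID and volume name here are illustrative:

chvdisk -primary 1 volume_001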
8.6.1 Thin-provisioned
This menu item includes the same functions that are described in Shrinking Thin-Provisioned space on page 398, Expanding Thin-Provisioned space on page 399, and Editing Thin-Provisioned properties on page 399. You can specify the same settings for each volume copy. Figure 8-76 shows the Thin-provisioned menu item.
After the task completes, click Close to return to the All Volumes window, where the copy appears as a new volume named vdisk0 that can be mapped to a host, as shown in Figure 8-78.
Important: If you receive an error message while you are splitting a volume copy (error message code CMMVC6357E), use the lsvdisksyncprogress command to view the synchronization status, or wait for the copy to synchronize. Example 8-1 shows an output of the lsvdisksyncprogress command.
Example 8-1 Output of lsvdisksyncprogress command
IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisksyncprogress
vdisk_id  vdisk_name   copy_id  progress  estimated_completion_time
3         vmware-sas   1        3         130605014819
14        thin-volume  1        38        130606032210
25        win_vol_01   1        55        130604121159
IBM_Storwize:mcr-atl-cluster-01:superuser>
The following options are available:
- Generate Event of Differences: Use this option if you want to verify only that the mirrored volume copies are identical. If any difference is found, the command stops and logs an error that includes the logical block address (LBA) and the length of the first difference. Starting at a different LBA each time, you can use this option to count the number of differences on a volume.
- Overwrite Differences: Use this option to overwrite contents from the primary volume copy to the other volume copy. The command corrects any differing sectors by copying the sectors from the primary copy to the copies that are compared. Upon completion, the command process logs an event that indicates the number of differences that were corrected. Use this option if you are sure that the primary volume copy data is correct or that your host applications can handle incorrect data.
- Return Media Error to Host: Use this option to convert sectors on all volume copies that contain different contents into virtual medium errors. Upon completion, the command logs an event that indicates the number of differences that were found, the number that were converted into medium errors, and the number that were not converted. Use this option if you are unsure what the correct data is and you do not want an incorrect version of the data to be used.
2. Select which action to perform and click Validate to start the task. The volume is now checked. Click Close. Figure 8-80 on page 405 shows the output when the Generate Event of Differences option is chosen.
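On the CLI, these options appear to correspond to the repairvdiskcopy command parameters -validate, -resync, and -medium, respectively; a sketch with an illustrative volume name follows, with lsrepairvdiskcopyprogress used to track progress:

repairvdiskcopy -validate volume_001
lsrepairvdiskcopyprogress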
The validation process runs as a background process and can take some time, depending on the volume size. You can check the status in the Running Tasks window, as shown in Figure 8-81.
After the copy is deleted, click Close to return to the All Volumes window.
The Volumes by Pool window opens, as shown in Figure 8-84 on page 408.
The left pane is named Pool Filter, and all of your existing storage pools are shown there. For more information about storage pools, see Chapter 7, Storage pools on page 313. In the upper right, you see the following information about the pool that you selected in the pool filter:
- Pool icon: Because storage pools might have different characteristics, you can change the storage pool icon. For more information, see 7.5, Working with Storage Pools on page 348.
- Pool Name: The name that is given during the creation of the storage pool. For more information about changing the storage pool name, see Chapter 7, Storage pools on page 313.
- Pool Details: Shows information about the storage pool, such as the status, the number of managed disks, and the Easy Tier status.
- Volume allocation: Shows the amount of capacity that is allocated to volumes from this storage pool.
The lower right section (as shown in Figure 8-85 on page 409) lists all volumes that have at least one copy in the selected storage pool. The following information is provided:
- Name: The name of the volume.
- Status: The status of the volume.
- Capacity: The capacity that is presented to the host.
- UID: The volume unique identifier.
- Host Mappings: Indicates whether host mappings exist.
It is also possible to create volumes from this window. Click Create Volume to start the volume creation window. The steps are the same as those that are described in Chapter 5, Basic volume configuration on page 189. If you highlight a volume and select Actions or right-click the volume, the same options that are described in 8.4, Advanced volume administration on page 379 appear.
In the left pane of the view is the Host Filter. If you select a host, its properties appear in the right pane; for example, the host name, the number of ports, and the host type. The hosts with an orange cable represent Fibre Channel or FCoE hosts, the black cable represents SAS hosts, and the blue cable represents iSCSI hosts. The volumes that are mapped to this host are listed, as shown in Figure 8-87.
It is also possible to create a volume from this window. If you click New Volume, the same wizard that is described in 5.1, Provisioning storage from IBM Storwize V3700 and making it available to the host on page 190 opens. If you highlight the volume, the Actions button becomes available and the options are the same as described in 8.4, Advanced volume administration on page 379. 410
Chapter 9. Easy Tier
In today's storage market, solid-state drives (SSDs) are emerging as an attractive alternative to hard disk drives (HDDs). Because of their low response times, high throughput, and IOPS-energy-efficient characteristics, SSDs have the potential to allow your storage infrastructure to achieve significant savings in operational costs. However, the acquisition cost per GB for SSDs is currently much higher than for HDDs. SSD performance depends greatly on workload characteristics, so SSDs must be used together with HDDs. It is critical to choose the right mix of drives and the right data placement to achieve optimal performance at low cost. Maximum value is derived by placing hot data with high I/O density and low response time requirements on SSDs, while targeting HDDs for cooler data that is accessed more sequentially and at lower rates. Easy Tier automates the placement of data among different storage tiers and boosts your storage infrastructure performance to achieve optimal performance through a software, server, and storage solution. This chapter describes the Easy Tier disk performance optimization feature and how to activate the Easy Tier process for evaluation purposes and for automatic extent migration. Information is also included about the monitoring tool, the Storage Tier Advisor Tool (STAT), and Tivoli Storage Productivity Center for performance monitoring. This chapter includes the following topics:

Easy Tier overview
Easy Tier for IBM Storwize V3700
Easy Tier process
Easy Tier configuration using the GUI
Easy Tier configuration using the CLI
IBM Storage Tier Advisor Tool
Tivoli Storage Productivity Center
Administering and reporting an IBM Storwize V3700 system through Tivoli Storage Productivity Center
You can enable Easy Tier on a volume basis. It monitors the I/O activity and latency of the extents on all Easy Tier enabled volumes over a 24-hour period. Based on the performance log, it creates an extent migration plan and dynamically moves high-activity (hot) extents to a higher disk tier within the same storage pool, and moves extents whose activity dropped off (cooled) from higher disk tier MDisks back to a lower tier MDisk.
To enable this migration between MDisks with different tier levels, the target storage pool must contain MDisks with different characteristics. These pools are called multitiered storage pools. IBM Storwize V3700 Easy Tier is optimized to boost the performance of storage pools that contain HDDs and SSDs. To identify the potential benefits of Easy Tier in your environment before actually installing higher MDisk tiers, such as SSDs, you can enable Easy Tier monitoring on volumes in single-tiered storage pools. Although Easy Tier extent migration is not possible within a single-tiered pool, the Easy Tier statistical measurement function is. Enabling Easy Tier on a single-tiered storage pool starts the monitoring process and logs the activity of the volume extents. In this case, Easy Tier creates a migration plan file that can be used to produce a report on the number of extents that are appropriate for migration to higher-level MDisk tiers, such as SSDs. The IBM Storage Tier Advisor Tool (STAT) is a no-cost tool that helps you analyze this data. If you do not have an IBM Storwize V3700, use Disk Magic to estimate the number of SSDs that are appropriate for your workload. If you do not have any workload performance data, a good starting point is to add about 5% of net capacity as SSDs to your configuration. However, this ratio is heuristics-based and changes with different applications or different disk tier performance in each configuration. For database transactions, a ratio of fast SAS or FC drives to SSDs of about 6:1 achieves optimal performance, but this ratio depends on the environment in which it is implemented.
Also, Easy Tier is based on an algorithm with a threshold to evaluate whether an extent is cold or hot. If the activity of an extent is below this threshold, the algorithm does not consider the extent for movement to the SSD tier. The four main processes and the flow between them are described in the following sections.
Evaluation Mode
If you turn on Easy Tier in a single-tiered storage pool, it runs in Evaluation Mode, which means that it measures the I/O activity for all extents. A statistic summary file is created and can be offloaded from the IBM Storwize V3700. This file can be analyzed with the IBM Storage Tier Advisor Tool, as described in 9.6, IBM Storage Tier Advisor Tool on page 433. This analysis shows the benefits for your workload if you were to add SSDs to your pool, before any hardware acquisition.
If you want to disable Auto Data Placement Mode for single volumes inside a multitiered storage pool, it is possible to turn it off at the volume level. This action excludes the volume from Auto Data Placement Mode and measures the I/O statistics only. The statistic summary file can be offloaded for input to the advisor tool. The tool produces a report on the extents that would be moved to SSD and a prediction of the performance improvement that could be gained if SSDs were added.
Easy Tier automatic data placement is not supported for image mode or sequential volumes. I/O monitoring for such volumes is supported, but you cannot migrate extents on such volumes unless you convert image or sequential volume copies to striped volumes. If possible, IBM Storwize V3700 creates new volumes or volume expansions by using extents from HDD tier MDisks, but it uses extents from SSD tier MDisks if no HDD space is available. When a volume is migrated out of a storage pool that is managed with Easy Tier, Automatic Data Placement Mode is no longer active on that volume. Automatic Data Placement is also turned off while a volume is being migrated, even if it is between pools that both have Easy Tier Automatic Data Placement enabled. Automatic Data Placement for the volume is re-enabled when the migration is complete. SSD performance depends on block sizes, and small blocks perform much better than larger ones. Because Easy Tier is optimized to work with SSDs, it decides whether an extent is hot by measuring I/O smaller than 64 KB, but it migrates the entire extent to the appropriate disk tier. Because entire extents are migrated, the use of smaller extents makes Easy Tier more efficient. The first migration of hot data to SSD starts about one hour after Automatic Data Placement Mode is enabled, but it takes up to 24 hours to achieve optimal performance. In the current IBM Storwize V3700 Easy Tier implementation, it takes about two days before hot extents are considered for movement off SSDs, which prevents hot spots from being moved from SSDs if the workload changes over a weekend. If you run an unusual workload over a longer period, Automatic Data Placement can be turned off and on online to avoid data movement.
Depending on which storage pool and which Easy Tier configuration is set, a volume copy can have the Easy Tier states that are shown in Table 9-1.
Table 9-1 Easy Tier states

Storage pool Easy Tier setting (b)  Single-tiered or multitiered pool  Volume copy Easy Tier setting (b)  Easy Tier status
Off                                 Single-tiered                      Off                                Inactive (a)
Off                                 Single-tiered                      On                                 Inactive (a)
Off                                 Multitiered                        Off                                Inactive (a)
Off                                 Multitiered                        On                                 Inactive (a)
Auto                                Single-tiered                      Off                                Inactive (a)
Auto                                Single-tiered                      On                                 Inactive (a)
Auto                                Multitiered                        Off                                Measured (c)
Auto                                Multitiered                        On                                 Active (d)(e)
On                                  Single-tiered                      Off                                Measured (c)
On                                  Single-tiered                      On                                 Measured (c)
On                                  Multitiered                        Off                                Measured (c)
On                                  Multitiered                        On                                 Active (d)

a. When the volume copy status is inactive, no Easy Tier functions are enabled for that volume copy.
b. The default Easy Tier setting for a storage pool is auto, and the default Easy Tier setting for a volume copy is on. This scenario means that Easy Tier functions are disabled for storage pools with a single tier, and that automatic data placement mode is enabled for all striped volume copies in a storage pool with two tiers.
c. When the volume copy status is measured, the Easy Tier function collects usage statistics for the volume, but automatic data placement is not active.
d. If the volume copy is in image or sequential mode or is being migrated, the volume copy Easy Tier status is measured instead of active.
e. When the volume copy status is active, the Easy Tier function operates in automatic data placement mode for that volume.
2. Click Pools → Internal Storage. Figure 9-6 shows that one internal SSD drive is available and is in a candidate status.
3. Click Configure Storage and the Storage Configuration wizard opens. Figure 9-7 shows the first step of the configuration wizard.
The wizard recommends the use of the SSDs to enable Easy Tier. If you select Use recommended configuration, it selects the recommended RAID level and hot spare coverage for your system automatically, as shown in Figure 9-8.
If you select Select a different configuration (as shown in Figure 9-9 on page 421), you can select the preset.
4. Choose a custom RAID level, or select the SSD Easy Tier preset to review and modify the recommended configuration. Because we do not have enough drives in our configuration in this example, the SSD Easy Tier preset is unavailable from the preset selection. When this preset is available, it configures a RAID 10 array with a spare goal of one drive. In this example, we create a RAID 0 array (this is not a best practice and is not used in a production environment). Because there are not enough drives, an error message is displayed, as shown in Figure 9-10 on page 422.
This error message can be avoided if the Automatically configure spares option is cleared, as shown in Figure 9-11. A RAID 0 array with one drive and no spares is created.
5. To create a multitiered storage pool, the SSDs must be added to an existing generic HDD pool. Select Expand an existing pool (see Figure 9-12 on page 423) and select the pool you want to change to a multitiered storage pool. In our example, V3700_Pool_2 is selected. Click Finish.
6. Now the array is configured on the SSDs and added to the selected storage pool. Click Close after the task completes, as shown in Figure 9-13.
Figure 9-14 on page 424 shows that the internal SSD drives usage has now changed to Member and that the wizard created an MDisk that is named mdisk2.
In Figure 9-15, you can see that the new MDisk is now part of the V3700_Pool_2 storage pool and that the Pool icon changed, which indicates that the Easy Tier status of the pool is now Active. In this pool, Automatic Data Placement Mode is started and the Easy Tier processes start to work.
By default, Easy Tier is now active in this storage pool and all its volumes. Figure 9-16 shows an example of three volumes in the multitiered storage pool.
If you open the properties of a volume by clicking Actions → Properties, you can also see that Easy Tier is enabled on the volume by default, as shown in Figure 9-17.
If a volume has more than one copy, Easy Tier can be enabled and disabled on each copy separately. This action depends on the storage pool where the volume copy is defined. A volume with two copies is stored in two different storage pools, as shown in Figure 9-18.
If you want to enable Easy Tier on the second copy, change the storage pool of the second copy to a multitiered storage pool by repeating these steps.
This action lists all the log files that are available to download (see Figure 9-21). The Easy Tier log files are always named dpa_heat.canister_name_date.time.data.
Log file creation: Depending on your workload and configuration, it can take up to 24 hours until a new Easy Tier log file is created. If you run Easy Tier for a longer period, it generates a heat file at least every 24 hours. The time and date of the file creation is included in the file name. The heat log file always includes the measured I/O activity of the last 24 hours.

3. Right-click the dpa_heat.canister_name_date.time.data file and click Download. Select the file for Easy Tier measurement for the most representative time. You can also use the search field on the right to filter your search, as shown in Figure 9-22.
Depending on your browser settings, the file is downloaded to your default location, or you are prompted to save it to your computer. This file can be analyzed as described in 9.6, IBM Storage Tier Advisor Tool on page 433.
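As an alternative to the GUI download, the heat files can be listed with the CLI and copied with a secure copy client. A sketch, assuming the PuTTY pscp utility on a Windows workstation and a hypothetical system IP address of 9.174.156.102:

IBM_2072:admin>lsdumps
C:\>pscp -unsafe superuser@9.174.156.102:/dumps/dpa_heat.* C:\StorwizeV3700_Logs\

The -unsafe parameter allows the wildcard to be expanded on the system side.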
Before the CLI is used, you must configure CLI access, as described in Appendix A, Command-line interface setup and SAN Boot on page 593.

Readability: In most examples that are shown in this section, many lines were deleted from the command output or responses so that we can concentrate on the Easy Tier related information only.
IBM_2072:admin>lsmdiskgrp
id name            status mdisk_count ... easy_tier easy_tier_status
0  mdiskgrp0       online 3           ... auto      inactive
1  Multi_Tier_Pool online 3           ... auto      active

For a more detailed view of the single-tiered storage pool, run lsmdiskgrp storage pool name, as shown in Example 9-2.
Example 9-2 Storage Pools details - Easy Tier inactive
IBM_2072:admin>lsmdiskgrp mdiskgrp0
id 0
name mdiskgrp0
status online
mdisk_count 3
...
easy_tier auto
easy_tier_status inactive
tier generic_ssd
tier_mdisk_count 0
...
tier generic_hdd
tier_mdisk_count 3
...

To enable Easy Tier on a single-tiered storage pool, run chmdiskgrp -easytier on storage pool name, as shown in Example 9-3. Because this storage pool does not have any SSD MDisks, it is not a multitiered storage pool; only measuring is available.
Example 9-3 Enable Easy Tier on a single-tiered storage pool
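IBM_2072:admin>chmdiskgrp -easytier on mdiskgrp0
IBM_2072:admin>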
Check the status of the storage pool by running lsmdiskgrp storage pool name again, as shown in Example 9-4.
Example 9-4 Storage pool details: Easy Tier ON
IBM_2072:admin>lsmdiskgrp mdiskgrp0
id 0
name mdiskgrp0
status online
mdisk_count 3
vdisk_count 7
...
easy_tier on
easy_tier_status active
tier generic_ssd
tier_mdisk_count 0
...
tier generic_hdd
tier_mdisk_count 3
...

Run the svcinfo lsmdiskgrp command again (as shown in Example 9-5) and you see that Easy Tier is now turned on for this storage pool, but Automatic Data Placement Mode is not active because the pool is not a multitiered storage pool.
Example 9-5 Storage pool list
IBM_2072:admin>lsmdiskgrp
id name            status mdisk_count vdisk_count ... easy_tier easy_tier_status
0  mdiskgrp0       online 3           7           ... on        active
1  Multi_Tier_Pool online 3           0           ... auto      active

To get a list of all the volumes that are defined, run the lsvdisk command, as shown in Example 9-6. For this example, we are interested only in the redhat1 volume.
Example 9-6 All volumes list
IBM_2072:admin>lsvdisk
id name    IO_group_id IO_group_name status mdisk_grp_id ...
5  redhat1 0           io_grp0       online many         ...
To get a more detailed view of a volume, run the lsvdisk volume name command, as shown in Example 9-7. This output shows two copies of a volume: Copy 0 is in a multitiered storage pool, so Automatic Data Placement is active; Copy 1 is in the single-tiered storage pool, where Easy Tier evaluation mode is active, as indicated by the easy_tier_status measured line.
Example 9-7 Volume details
IBM_2072:admin>lsvdisk redhat1
id 5
name redhat1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 10.00GB
...
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
...
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
...
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 0
mdisk_grp_name mdiskgrp0
...
easy_tier on
easy_tier_status measured
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
...

These changes are also reflected in the GUI, as shown in Figure 9-23 on page 431. Select the Show Details option to view the Easy Tier details for each of the volume copies.
Easy Tier evaluation mode is now active on the single-tiered storage pool (mdiskgrp0), but only for measurement. For more information about downloading the I/O statistics and analyzing them, see 9.4.2, Downloading Easy Tier I/O measurements on page 426.
To disable Easy Tier on a volume, run the chvdisk -easytier off volume name command, as shown in Example 9-8.

Example 9-8 Disable Easy Tier on a volume

IBM_2072:admin>chvdisk -easytier off redhat1
IBM_2072:admin>

This command disables Easy Tier on all copies of this volume. Example 9-9 shows that the Easy Tier status of the copies has changed, even though Easy Tier is still enabled on the storage pool.
Example 9-9 Easy Tier disabled
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 10.00GB
...
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
...
easy_tier off
easy_tier_status measured
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
...
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 0
mdisk_grp_name mdiskgrp0
...
easy_tier off
easy_tier_status measured
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
...
To enable Easy Tier on a volume, run the chvdisk -easytier on volume name command (as shown in Example 9-10), and the Easy Tier Status changes back to Enabled, as shown in Example 9-7 on page 429.
Example 9-10 Easy Tier enabled
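IBM_2072:admin>chvdisk -easytier on redhat1
IBM_2072:admin>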
C:\EasyTier>STAT.exe -o C:\EasyTier C:\StorwizeV3700_Logs\dpa_heat.31G00KV-1.101209.131801.data
CMUA00019I The STAT.exe command has completed.
C:\EasyTier>

Browse to the directory where you directed the output file and you find a file named index.html. Open it with your browser to view the report.
Important: Because this tool was first made available for the SAN Volume Controller and IBM Storwize V7000, you can ignore that it shows IBM Storwize V7000 in the report. The STAT tool works on all SAN Volume Controller and Storwize systems.

The System Summary window provides the most important numbers. In Figure 9-24, we see that 12 volumes were monitored with a total capacity of 6000 GB. The analysis of the hot extents shows that about 160 GB (about 2%) should be migrated to the high performance disk tier. It also recommends that one SSD RAID 5 array that consists of four SSD drives (3+P) should be added as a high performance tier. The predicted performance improvement, that is, the possible response time reduction at the back end in a balanced system, is between 64% and 84%.
Click Volume Heat Distribution to change to a more detailed view, as shown in Figure 9-25.
The table in Figure 9-25 shows how the hot extents are distributed across your system. It contains the following information:

Volume ID: The unique ID of each volume on the IBM Storwize V3700.
Copy ID: If a volume owns more than one copy, the data is measured for each copy.
Pool ID: The unique ID of each pool that is configured on the IBM Storwize V3700.
Configured Size: The configured size of each volume that is presented to the host.
Capacity on SSD: The capacity of the volume on the high performance disk tier (even in evaluation mode, volumes can have extents on the high performance disk tier if they were moved there before).
Heat Distribution: The heat distribution of the data in this volume. The blue portion of the bar represents the capacity of the cold extents, and the red portion represents the capacity of the hot extents. The red hot data is a candidate to be moved to the high performance disk tier.
2. After you start Tivoli Storage Productivity Center, it starts an application download, as shown in Figure 9-27. During your first login, the required Java packages are installed to your local system.
3. Use your login credentials to access Tivoli Storage Productivity Center, as shown in Figure 9-28.
4. After successful login, you are ready to add storage devices into Tivoli Storage Productivity Center, as shown in Figure 9-29.
5. Enter the information about your IBM Storwize V3700 in Tivoli Storage Productivity Center, as shown in Figure 9-30.
Complete all of the required steps and follow the wizard. After it is completed, Tivoli Storage Productivity Center collects information from the IBM Storwize V3700. A summary of details is shown at the end of the discovery process.
9.8 Administering and reporting an IBM Storwize V3700 system through Tivoli Storage Productivity Center
This section shows examples of how to use Tivoli Storage Productivity Center to administer, configure, and generate reports for an IBM Storwize V3700 system. A detailed description of Tivoli Storage Productivity Center reporting is beyond the intended scope of this book.
When you highlight the IBM Storwize V3700 system, action buttons become available that allow you to view the device configuration or create virtual disks (see Figure 9-32):

MDisk Groups: Provides a detailed list of the configured MDisk groups, including pool space, available space, configured space, and Easy Tier configuration.
Virtual Disks: Lists all the configured volumes, with the option to filter them by MDisk group. The list includes several attributes, such as capacity, volume type, and type.

Terms used: Tivoli Storage Productivity Center and SAN Volume Controller use the following terms:
Virtual Disk: The equivalent of a volume on a Storwize device.
MDisk Group: The equivalent of a storage pool on a Storwize device.
If you click Create Virtual Disk, the Create Virtual Disk wizard window opens, as shown in Figure 9-33. Use this window to create volumes, specify several options (such as size, name, and thin provisioning), and add MDisks to an MDisk group.
Add an IBM Storwize V3700 in the probe for collecting information, as shown in Figure 9-35.
After you create the probe, you can create Subsystem Performance Monitor, as shown in Figure 9-36.
To check the managed disk performance, click Disk Manager → Reporting → Storage Subsystem Performance → By Managed Disk. You see many options to include in the wizard to check MDisk performance, as shown in Figure 9-37.
If you click the Generate Report option, you see a report, as shown in Figure 9-38.
Clicking the Pie chart icon creates a graphical chart view of the selected MDisk, as shown in Figure 9-39.
9.8.3 Report Generation using Tivoli Storage Productivity Center web page
In this section, we describe how to generate reports by using the Tivoli Storage Productivity Center web console. To connect to the console, enter the following URL in your browser:

https://tpchostname.com:9569/srm/

You see a login window, as shown in Figure 9-40 on page 444. Log in by using your Tivoli Storage Productivity Center credentials.
After login, you see the Tivoli Storage Productivity Center web dashboard, as shown in Figure 9-41. The Tivoli Storage Productivity Center web-based GUI is used to show information about the storage resources in your environment. It contains predefined and custom reports about performance and storage tiering.
You can use IBM Tivoli Common Reporting to view predefined reports and create custom reports from the web-based GUI. We show some predefined reports, starting with the report that is shown in Figure 9-42.
Figure 9-44 shows the different report options for Storage Tiering.
Figure 9-46 on page 446 shows the Report Overview in a pie chart.
Figure 9-47 shows the Easy Tier usage for volumes. To open this report in Tivoli Storage Productivity Center, click Storage Resources → Volumes.
Chapter 10. Copy services
In this chapter, we describe the copy services functions that are provided by the IBM Storwize V3700 storage system, including FlashCopy and Remote Copy. Copy services functions are useful for making data copies for backup, application test, recovery, and so on. The IBM Storwize V3700 system makes it easy to apply these functions to your environment through its intuitive GUI. This chapter includes the following topics:

FlashCopy
Remote Copy
Troubleshooting Remote Copy
Managing Remote Copy using the GUI
10.1 FlashCopy
By using the FlashCopy function of the IBM Storwize V3700 storage system, you can create a point-in-time copy of one or more volumes. In this section, we describe the structure of FlashCopy and provide details about its configuration and use. You can use FlashCopy to solve critical and challenging business needs that require the duplication of data on your source volume. Volumes can remain online and active while you create consistent copies of the data sets. Because the copy is performed at the block level, it operates below the host operating system and cache and therefore is not apparent to the host.

Flushing: Because FlashCopy operates at the block level, which is below the host operating system and cache, those levels do need to be flushed for consistent FlashCopy copies.

While the FlashCopy operation is performed, I/O to the source volume is frozen briefly to initialize the FlashCopy bitmap and then allowed to resume. Although several FlashCopy options require the data to be copied from the source to the target in the background (which can take time to complete), the resulting data on the target volume copy appears to have completed immediately. This task is accomplished through the use of a bitmap (or bit array) that tracks changes to the data after the FlashCopy is initiated, and an indirection layer, which allows data to be read from the source volume transparently.

License information: The IBM Storwize V3700 offers up to 64 FlashCopy mappings at no charge. However, other licenses can be purchased to expand to 2,040 FlashCopy mappings per system.
Rapidly creating consistent copies of production data to facilitate data movement or migration between hosts: FlashCopy can be used to facilitate the movement or migration of data between hosts while minimizing downtime for applications. FlashCopy allows application data to be copied from source volumes to new target volumes while applications remain online. After the volumes are fully copied and synchronized, the application can be stopped and then immediately started on the new server accessing the new FlashCopy target volumes. This mode of migration is faster than other migration methods that are available through the IBM Storwize V3700 because the size and the speed of the migration is not as limited.

Rapidly creating copies of production data sets for application development and testing: Under normal circumstances, to perform application development and testing, data must be restored from traditional backup media, such as tape. Depending on the amount of data and the technology in use, this process can easily take a day or more. With FlashCopy, a copy can be created and online for use in a few minutes. The time varies based on the application and the data set size.

Rapidly creating copies of production data sets for auditing purposes and data mining: Auditing or data mining normally requires the use of the production applications. This situation can cause high loads for databases that track inventories or similar data. With FlashCopy, you can create copies for your reporting and data mining activities. This feature reduces the load on your production systems, which increases their performance.

Rapidly creating copies of production data sets for quality assurance: Quality assurance is an interesting case for FlashCopy. Because traditional methods involve so much time and labor, the refresh cycle typically is extended. The reduction in the time that is required allows much more frequent refreshes of the quality assurance database.
Immediately following the FlashCopy operation, both the source and target volumes are available for use. The FlashCopy operation creates a bitmap that is referenced and maintained to direct I/O requests within the source and target relationship. This bitmap is updated to reflect the active block locations as data is copied in the background from the source to target and updates are made to the source. Figure 10-1 shows the redirection of the host I/O toward the source volume and the target volume.
When data is copied between volumes, it is copied in units of address space known as grains. Grains are units of data that are grouped together to optimize the use of the bitmap that tracks changes to the data between the source and target volume. You have the option of using 64 KB or 256 KB grain sizes (256 KB is the default). The FlashCopy bitmap contains 1 bit for each grain and is used to track whether the source grain was copied to the target. The 64 KB grain size uses bitmap space at a rate of four times the default 256 KB size. The FlashCopy bitmap dictates the following read and write behavior for the source and target volumes:

Read I/O request to source: Reads are performed from the source volume, the same as for non-FlashCopy volumes.
Write I/O request to source: Writes to the source cause the grains of the source volume to be copied to the target if they were not already copied, and then the write is performed to the source.
Read I/O request to target: Reads are performed from the target if the grains were already copied; otherwise, the read is performed from the source.
Write I/O request to target: Writes to the target cause the grain to be copied from the source to the target first, unless the entire grain is being written, and then the write completes to the target only.
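As a worked example of the bitmap sizing, assume a 1 TB source volume: with the default 256 KB grain size, the volume contains 4,194,304 grains, so the FlashCopy bitmap needs 4,194,304 bits, or 512 KB; with the 64 KB grain size, the same volume needs four times as much, a 2 MB bitmap.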
FlashCopy mappings
A FlashCopy mapping defines the relationship between a source volume and a target volume. FlashCopy mappings can be stand-alone mappings or a member of a consistency group, as described in FlashCopy consistency groups on page 456.
Background copy
The background copy rate is a property of a FlashCopy mapping that is defined as a value of 0 - 100. The background copy rate can be defined and dynamically changed for individual FlashCopy mappings. A value of 0 disables background copy. This option is also called the no-copy option, which provides pointer-based images for limited lifetime uses. With FlashCopy background copy, the source volume data is copied to the corresponding target volume in the FlashCopy mapping. If the background copy rate is set to 0, which disables the FlashCopy background copy, only data that changed on the source volume is copied to the target volume. The benefit of the use of a FlashCopy mapping with background copy enabled is that the target volume becomes a real independent clone of the FlashCopy mapping source volume after the copy is complete. When the background copy is disabled, the target volume remains a valid copy of the source data only while the FlashCopy mapping remains in place. Copying only the changed data saves your storage capacity (assuming it is thin-provisioned and -rsize was correctly set up).
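From the CLI, the background copy rate is set when the mapping is created and can be changed dynamically afterward. A minimal sketch, with hypothetical volume names; mkfcmap creates the mapping and chfcmap changes the copy rate of an existing mapping:

IBM_2072:admin>mkfcmap -source FlashVol1 -target FlashVol1_01 -copyrate 0
FlashCopy Mapping, id [0], successfully created
IBM_2072:admin>chfcmap -copyrate 50 0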
The relationship of the background copy rate value to the amount of data copied per second is shown in Table 10-1.
Table 10-1 Background copy rate

Value     Data copied per second  Grains per second (256 KB grain)  Grains per second (64 KB grain)
1 - 10    128 KB                  0.5                               2
11 - 20   256 KB                  1                                 4
21 - 30   512 KB                  2                                 8
31 - 40   1 MB                    4                                 16
41 - 50   2 MB                    8                                 32
51 - 60   4 MB                    16                                64
61 - 70   8 MB                    32                                128
71 - 80   16 MB                   64                                256
81 - 90   32 MB                   128                               512
91 - 100  64 MB                   256                               1024
Data copy rate: The data copy rate remains the same regardless of the FlashCopy grain size. The difference is the number of grains that are copied per second. The grain size can be 64 KB or 256 KB. The smaller size uses more bitmap space and thus limits the total amount of FlashCopy space possible, but it can be more efficient regarding the amount of data moved, depending on your environment.
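As a worked example based on Table 10-1, a background copy rate in the 41 - 50 band copies 2 MB per second; fully copying a 100 GB volume at this rate therefore takes about 100 x 1024 / 2 = 51,200 seconds, or roughly 14 hours, regardless of whether the grain size is 64 KB or 256 KB.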
Cleaning rate
The cleaning rate provides a method for FlashCopy copies with dependent mappings (multiple target or cascaded) to complete their background copies before their source goes offline or is deleted after a stop is issued. When you create or modify a FlashCopy mapping, you can specify a cleaning rate for the FlashCopy mapping that is independent of the background copy rate. The cleaning rate is also defined as a value of 0 - 100, which has the same relationship to data copied per second as the background copy rate (see Table 10-1 on page 454). The cleaning rate controls the rate at which the cleaning process operates. The purpose of the cleaning process is to copy (or flush) data from FlashCopy source volumes upon which there are dependent mappings. For cascaded and multiple target FlashCopy, the source may be a target for another FlashCopy mapping or a source for a chain (cascade) of FlashCopy mappings. The cleaning process must complete before the FlashCopy mapping can go to the stopped state. This feature and the distinction between the stopping and stopped states were added to prevent data access interruption for dependent mappings when their source is issued a stop.
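The cleaning rate is set with its own parameter on the same commands as the background copy rate. A minimal sketch, assuming an existing mapping with ID 0:

IBM_2072:admin>chfcmap -cleanrate 50 0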
If the mapping is incremental and the background copy is complete, the mapping records only the differences between the source and target volumes. If the connection to both nodes in the IBM Storwize V3700 storage system that the mapping is assigned to is lost, the source and target volumes go offline.

Copying: The copy is in progress. Read and write caching is enabled on the source and the target volumes.

Prepared: The mapping is ready to start. The target volume is online, but it is not accessible. The target volume cannot perform read or write caching. Read and write caching is failed by the SCSI front end as a hardware error. If the mapping is incremental and a previous mapping completed, the mapping records only the differences between the source and target volumes. If the connection to both nodes in the IBM Storwize V3700 storage system that the mapping is assigned to is lost, the source and target volumes go offline.

Preparing: The target volume is online, but not accessible. The target volume cannot perform read or write caching. Read and write caching is failed by the SCSI front end as a hardware error. Any changed write data for the source volume is flushed from the cache. Any read or write data for the target volume is discarded from the cache. If the mapping is incremental and a previous mapping completed, the mapping records only the differences between the source and target volumes. If the connection to both nodes in the IBM Storwize V3700 storage system that the mapping is assigned to is lost, the source and target volumes go offline.

Stopped: The mapping is stopped because you issued a stop command or an I/O error occurred. The target volume is offline and its data is lost. To access the target volume, you must restart or delete the mapping. The source volume is accessible and the read and write cache is enabled. If the mapping is incremental, the mapping records write operations to the source volume. If the connection to both nodes in the IBM Storwize V3700 storage system that the mapping is assigned to is lost, the source and target volumes go offline.

Stopping: The mapping is in the process of copying data to another mapping. If the background copy process is complete, the target volume is online while the stopping copy process completes. If the background copy process is not complete, data is discarded from the target volume cache. The target volume is offline while the stopping copy process runs. The source volume is accessible for I/O operations.

Suspended: The mapping started, but it did not complete. Access to the metadata is lost, which causes the source and target volumes to go offline. When access to the metadata is restored, the mapping returns to the copying or stopping state and the source and target volumes return online. The background copy process resumes. Any data that was flushed and written to the source or target volume before the suspension is in cache until the mapping leaves the suspended state.
FlashCopy mapping management: After an individual FlashCopy mapping is added to a consistency group, it can only be managed as part of the group; operations such as start and stop are no longer allowed on the individual mapping.
Dependent writes
To show why it is crucial to use consistency groups when a data set spans multiple volumes, consider the following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about to be performed.
2. A second write is run to complete the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step. However, if the database log (updates 1 and 3) and the database (update 2) are on separate volumes, it is possible for the FlashCopy of the database volume to occur before the FlashCopy of the database log. This situation can result in the target volumes seeing writes (1) and (3) but not (2), because the FlashCopy of the database volume occurred before the write completed.
In this case, if the database was restarted by using the backup that was made from the FlashCopy target volumes, the database log indicates that the transaction completed successfully when, in fact, it had not. This situation occurs because the FlashCopy of the volume with the database file was started (bitmap was created) before the write completed to the volume. Therefore, the transaction is lost and the integrity of the database is in question.

To overcome the issue of dependent writes across volumes and to create a consistent image of the client data, it is necessary to perform a FlashCopy operation on multiple volumes as an atomic operation by using consistency groups. A FlashCopy consistency group can contain up to 512 FlashCopy mappings. The more mappings you have, the more time it takes to prepare the consistency group. FlashCopy commands can then be issued to the FlashCopy consistency group and simultaneously for all of the FlashCopy mappings that are defined in the consistency group. For example, when the FlashCopy for the consistency group is started, all FlashCopy mappings in the consistency group are started at the same time, resulting in a point-in-time copy that is consistent across all FlashCopy mappings that are contained in the consistency group.

A consistency group aggregates FlashCopy mappings, not volumes. Thus, where a source volume has multiple FlashCopy mappings, they can be in the same or separate consistency groups. If a particular volume is the source volume for multiple FlashCopy mappings, you might want to create separate consistency groups to separate each mapping of the same source volume. Regardless of whether the source volume with multiple target volumes is in the same consistency group or in separate consistency groups, the resulting FlashCopy produces multiple identical copies of the source data. The consistency group can be specified when the mapping is created. You can also add the FlashCopy mapping to a consistency group or change the consistency group of a FlashCopy mapping later.

Important: Do not place stand-alone mappings into a consistency group because they become controlled as part of that consistency group.
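On the CLI, this atomic behavior is achieved by grouping the mappings and starting the group as one unit. A minimal sketch with hypothetical names; mkfcconsistgrp creates the group, chfcmap adds existing mappings to it, and startfcconsistgrp -prep prepares and starts all members together:

IBM_2072:admin>mkfcconsistgrp -name DBGroup
IBM_2072:admin>chfcmap -consistgrp DBGroup fcmap0
IBM_2072:admin>chfcmap -consistgrp DBGroup fcmap1
IBM_2072:admin>startfcconsistgrp -prep DBGroup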
Stopped: The consistency group is stopped because you issued a command or an I/O error occurred.
Suspended: At least one FlashCopy mapping in the consistency group is in the Suspended state.
Empty: The consistency group does not have any FlashCopy mappings.
Reverse FlashCopy
Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without waiting for the original copy operation to complete. It supports multiple targets and multiple rollback points. A key advantage of Reverse FlashCopy is that it does not delete the original target, thus allowing processes that use the target, such as a tape backup, to continue uninterrupted. You can also create an optional copy of the source volume that is made before you start the reverse copy operation. This copy restores the original source data, which can be useful for diagnostic purposes. Figure 10-3 shows an example of the reverse FlashCopy scenario.
To restore from a FlashCopy backup by using the GUI, complete the following steps:
1. (Optional) Create a target volume (volume Z) and run FlashCopy on the production volume (volume X) to copy data on to the new target for later problem analysis.
2. Create a FlashCopy map with the backup to be restored (volume Y or volume W) as the source volume and volume X as the target volume.
3. Start the FlashCopy map (volume Y → volume X).

The -restore option: In the CLI, you must add the -restore option to the svctask startfcmap command manually. For more information about the use of the CLI, see Appendix A, Command-line interface setup and SAN Boot on page 593.
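A minimal CLI sketch of step 3, assuming the reverse mapping has ID 2 (a hypothetical ID); the -prep flag prepares the mapping before starting it, and the -restore flag must be added manually, as noted above:

IBM_2072:admin>startfcmap -prep -restore 2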
Regardless of whether the initial FlashCopy map (volume X to volume Y) is incremental, the Reverse FlashCopy operation only copies the modified data. Consistency groups are reversed by creating a set of new reverse FlashCopy maps and adding them to a new reverse consistency group. Consistency groups cannot contain more than one FlashCopy map with the same target volume. For more information about restoring from a FlashCopy, see Restoring from a FlashCopy on page 484.
FlashCopy property                          Maximum
FlashCopy consistency groups per cluster    127
FlashCopy mappings per consistency group    512
FlashCopy presets
The IBM Storwize V3700 storage system provides three FlashCopy presets, named Snapshot, Clone, and Backup, to simplify the more common FlashCopy operations (Table 10-3).
Table 10-3 FlashCopy presets

Snapshot: Creates a point-in-time view of the production data. The snapshot is not intended to be an independent copy, but is used to maintain a view of the production data at the time the snapshot is created. This preset automatically creates a thin-provisioned target volume with 0% of the capacity allocated at the time of creation. The preset uses a FlashCopy mapping with 0% background copy so that only data written to the source or target is copied to the target volume.

Clone: Creates an exact replica of the volume, which can be changed without affecting the original volume. After the copy operation completes, the mapping that was created by the preset is automatically deleted. This preset automatically creates a volume with the same properties as the source volume and creates a FlashCopy mapping with a background copy rate of 50. The FlashCopy mapping is configured to automatically delete itself when the FlashCopy mapping reaches 100% completion.

Backup: Creates a point-in-time replica of the production data. After the copy completes, the backup view can be refreshed from the production data, with minimal copying of data from the production volume to the backup volume. This preset automatically creates a volume with the same properties as the source volume. The preset creates an incremental FlashCopy mapping with a background copy rate of 50.
Presets: All of the presets can be adjusted by using the Advanced Settings expandable section in the GUI.
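The presets correspond to plain mkfcmap parameter combinations, sketched below with hypothetical volume names (unlike the GUI, the CLI leaves the creation of the target volume to you):

Snapshot: IBM_2072:admin>mkfcmap -source Vol1 -target Vol1_snap -copyrate 0
Clone: IBM_2072:admin>mkfcmap -source Vol1 -target Vol1_clone -copyrate 50 -autodelete
Backup: IBM_2072:admin>mkfcmap -source Vol1 -target Vol1_bkp -copyrate 50 -incremental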
Most of the actions to manage the FlashCopy mapping can be done in the FlashCopy window or the FlashCopy Mappings windows, although the quick path to create FlashCopy presets can only be found in the FlashCopy window.
Click FlashCopy in the Copy Services function icon menu and the FlashCopy window opens, as shown in Figure 10-5. In the FlashCopy window, the FlashCopy mappings are organized by volumes.
Click FlashCopy Mappings in the Copy Services function icon menu and the FlashCopy Mappings window opens, as shown in Figure 10-6. In the FlashCopy Mappings window, the FlashCopy mappings are listed individually.
The Consistency Groups window is used to manage the consistency groups for FlashCopy mappings. Click Consistency Groups in the Copy Services function icon menu and the Consistency Groups window opens, as shown in Figure 10-7.
Creating a snapshot
In the FlashCopy window, choose a volume and click New Snapshot from the Actions drop-down menu, as shown in Figure 10-8. Alternatively, highlight your chosen volume and right-click to access the Actions menu.
You now have a snapshot volume for the volume that you selected.
Creating a clone
In the FlashCopy window, choose a volume and click New Clone from the Actions drop-down menu, as shown in Figure 10-9. Alternatively, highlight your chosen volume and right-click to access the Actions menu.
You now have a clone volume for the volume that you selected.
Creating a backup
In the FlashCopy window, choose a volume and click New Backup from the Actions drop-down menu, as shown in Figure 10-10. Alternatively, highlight your chosen volume and right-click to access the Actions menu.
You now have a backup volume for the volume that you selected. You can monitor the progress of the running FlashCopy operations in the FlashCopy window and in the FlashCopy Mappings window, as shown in Figure 10-11. The progress bars for each target volume indicate the copy progress in percentage. The copy progress remains 0% for snapshots; there is no change until data is written to the target volume. The copy progresses for clone and backup keep increasing until the copy process completes.
Figure 10-11 FlashCopy in progress viewed for the FlashCopy Mappings window
The copy progress can be also found under the Running Tasks status indicator, as shown in Figure 10-12.
This view is slightly different than that of the FlashCopy and FlashCopy Mappings windows, as shown in Figure 10-13.
After the copy processes complete, you find that the FlashCopy mapping with the clone preset (FlashVol2 in our example) was deleted automatically, as shown in Figure 10-14 on page 467. There are now two identical volumes that are completely independent of each other.
You have the option to create target volumes as part of the mapping process or to use existing target volumes. We describe creating volumes next. For more information about using existing volumes, see Using existing target volumes on page 473.
The following default advanced settings are available:
Background Copy: 0
Incremental: No
Auto Delete after completion: No
Cleaning Rate: 0
Figure 10-18 shows the Advanced Settings for the Clone Preset.
The following advanced settings are available:
Background Copy: 50
Incremental: No
Auto Delete after completion: Yes
Cleaning Rate: 50
Figure 10-19 shows the Advanced Settings for the Backup preset.
The following advanced settings are available:
Background Copy: 50
Incremental: Yes
Auto Delete after completion: No
Cleaning Rate: 50
Change the settings of the FlashCopy mapping according to your requirements and click Next.

2. You have the option to add your FlashCopy mapping to a consistency group, as shown in Figure 10-20 on page 471. If the consistency group is not ready, the FlashCopy mapping can be added to the consistency group afterward. Click Next to continue.
You can choose from which storage pool you want to create your target volume. As shown in Figure 10-21, you can select the same storage pool that is used by the source volume or a different one. Click Next to continue.
Figure 10-21 Choosing the same storage pool as the source volume
Next, you have the option to define how the new target volumes manage capacity. Create a generic volume is the default choice if you selected Clone or Backup as your basic preset. If you select a thin-provisioned volume, you see more options, as shown in Figure 10-22 on page 472.
3. Click Finish and a task runs to create the mappings and volume, as shown in Figure 10-23 on page 473. Close this window to see the FlashCopy mapping that was created on your volume with a new target, as shown in Figure 10-24 on page 473. The status of the created FlashCopy mapping is Idle; it can be started, as described in Starting a FlashCopy mapping on page 476.
Figure 10-24 New FlashCopy mapping has been created with a new target
2. You now must choose the target volume for the source volume you selected. Select the target volume in the drop-down menu in the right pane of the window and click Add, as shown in Figure 10-26.
3. The FlashCopy mapping is now listed, as shown in Figure 10-27 on page 475. Click the red X if the FlashCopy mapping is not the one you want to create. If the FlashCopy mapping is what you want, click Next to continue.
4. Select the preset and, if necessary, adjust the settings by using Advanced Settings, as shown in Figure 10-28. For more information about the advanced settings, see Creating target volumes on page 468. Make sure that the settings meet your requirements and click Next.
5. Now you can add the FlashCopy mapping to a consistency group if necessary, as shown in Figure 10-29. Select Yes to see a drop-down menu from which you can select a consistency group. Click Finish and the FlashCopy mapping is created with the Idle status, as shown in Figure 10-24.
A wizard opens to guide you through the creation of a FlashCopy mapping. The steps are the same as creating an Advanced FlashCopy mapping by using Existing Target Volumes, as described in Using existing target volumes on page 473.
You can start the mapping by selecting the FlashCopy target volume in the FlashCopy window and selecting the Start option from the Actions drop-down menu (see Figure 10-31) or by selecting the volume and right-clicking. The status of the FlashCopy mapping changes from Idle to Copying.
Enter your new name for the target volume, as shown in Figure 10-34. Click Rename to finish.
You must enter your new name for the FlashCopy mapping, as shown in Figure 10-36 on page 480. Click Rename to finish.
FlashCopy Mapping state: If the FlashCopy mapping is in the Copying state, it must be stopped before it is deleted.
You must confirm your action to delete FlashCopy mappings in the window that opens, as shown in Figure 10-38. Verify the number of FlashCopy mappings that you must delete. If you want to delete the FlashCopy mappings while the data on the target volume is inconsistent with the source volume, select the option to do so. Click Delete and your FlashCopy mapping is removed.
Deleting FlashCopy mapping: Deleting the FlashCopy mapping does not delete the target volume. If you must reclaim the storage space that is occupied by the target volume, you must delete the target volume manually.
Clicking either volume shows the properties of the volume, as shown in Figure 10-41.
Editing properties
The background copy rate and cleaning rate can be changed after the FlashCopy mapping is created by selecting the FlashCopy target mapping in the FlashCopy window and clicking the Edit Properties option from the Actions drop-down menu (as shown in Figure 10-42 on page 483) or by right-clicking.
You can then modify the value of the background copy rate and cleaning rate by moving the pointers on the bars, as shown in Figure 10-43. Click Save to save changes.
2. Create a mapping by using the target volume of the mapping to be restored. In our case, it is FlashVol1_01, as shown in Figure 10-45. Select Advanced FlashCopy Use Existing Target Volumes.
3. The Source Volume is preselected with the target volume that we selected in the previous step. Select the Target Volume from the drop-down menu. Select the source volume that you want to restore. In our case, it is FlashVol1, as shown in Figure 10-46.
4. Click Add. As shown in Figure 10-47, a warning appears because we are using a source as a target. Click Close.
5. Click Next and you see a Snapshot preset choice, as shown in Figure 10-48.
6. In the next window, the question is asked if the new mapping is to be part of a consistency group, as shown in Figure 10-49. In our example the new mapping is not part of the consistency group, so we click No and then Finish to create the mapping.
7. The reverse mapping is created and is shown in the Idle state, as shown in Figure 10-50.
8. To restore the original source volume FlashVol1 with the snapshot we took FlashVol1_01, we select the new mapping and right-click for the Actions menu, as shown in Figure 10-51.
9. Clicking Start results in FlashVol1 being overwritten with the data that was saved in the FlashCopy FlashVol1_01. The command completes, as shown in Figure 10-52.
Important: The underlying command that is run by the IBM Storwize V3700 appends the -restore option automatically.
10.The reverse mapping now shows as 100% copied, as shown in Figure 10-53.
The Consistency Groups window (see Figure 10-55) is where you can manage consistency groups and FlashCopy mappings.
In the left pane of the Consistency Groups window, you can list the consistency groups you need. Click Not in a Group, and then expand your selection by clicking the plus icon (+) next to it. All the FlashCopy mappings that are not in any consistency groups are shown. In the lower pane of the Consistency Groups window, you can see the properties of a consistency group and the FlashCopy mappings in it. You can also work with any consistency groups and FlashCopy mappings within the Consistency Groups window, as allowed by their state. All the actions that are allowed for the FlashCopy mapping are described in 10.1.5, Managing FlashCopy mappings on page 467.
You are prompted to enter the name of the new consistency group, as shown in Figure 10-57. Following your naming conventions, enter the name of the new consistency group in the field and click Create.
After the creation process completes, you find a new consistency group, as shown in Figure 10-58.
You can rename the Consistency Group by highlighting it and right-clicking or by using the Actions drop-down menu. Select Rename and enter the new name, as shown in Figure 10-59. Next to the name of the consistency group, the state shows that it is now an empty consistency group with no FlashCopy mapping in it.
Important: You cannot move mappings that are in the process of copying. Selecting a snapshot that is already running makes the Move to Consistency Group option unavailable.
Selections of a range are performed by highlighting a mapping, pressing and holding the Shift key, and clicking the last item in the range. Multiple selections can be made by pressing and holding the Ctrl key and clicking each mapping individually. The option is also available by right-clicking individual mappings. You are prompted to specify which consistency group you want to move the FlashCopy mapping into, as shown in Figure 10-61. Choose from the list in the drop-down menu. Click Move to Consistency Group to continue.
After the action completes, you find that the FlashCopy mappings you selected are removed from the Not In a Group list to the consistency group you chose.
After you start the consistency group, all the FlashCopy mappings in the consistency group start at the same point. The state of the consistency group and all the underlying mappings changes to Copying, as shown in Figure 10-63.
After the stop process completes, the FlashCopy mappings in the consistency group are in the Stopped state, and a red X icon appears on the function icon of this consistency group to indicate an alert, as shown in Figure 10-65.
Previously copied relationships that were added to a consistency group that was later stopped before all members of the consistency group completed synchronization do not go out of the Copied state.
The FlashCopy mappings are returned to the Not in a Group list after being removed from the consistency group.
2. Click New Consistency Group in the upper left corner (as shown in Figure 10-68) and create a consistency group. We created one that is called RedBookTest. 3. Follow the procedure that is described in Restoring from a FlashCopy on page 484 to create reverse mappings for each of the mappings that exist in the source consistency group (FlashTestGroup). When you are prompted to add to a consistency group as shown in Figure 10-49 on page 487, select Yes and from the drop-down menu, select the new reverse consistency group you created in step 2 (in our case, RedBookTest). The result is similar to that shown in Figure 10-69.
4. To restore the consistency group, highlight the reverse consistency group and click Start, as shown in Figure 10-70.
5. Clicking Start results in FlashVol1 and FlashVol5 being overwritten with the data that was saved in the FlashTestGroup FlashCopy consistency group mapping. The command completes, as shown in Figure 10-71.
Important: The IBM Storwize V3700 automatically appends the -restore option to the command. 6. Clicking Close returns to the Consistency Group window. The reverse consistency group now shows as a 100% copied and all volumes in the original FlashTestGroup were restored, as shown in Figure 10-72.
Partnership
When a partnership is created, we connect two separate IBM Storwize V3700 systems, or we connect an IBM Storwize V3700 with an IBM SAN Volume Controller, another Storwize V3700, or a Storwize V7000. After the partnership creation is configured on both systems, further communication between the node canisters in each of the storage systems is established and maintained by the SAN network. All inter-cluster communication goes through the Fibre Channel network. The partnership must be defined on both IBM Storwize V3700 systems, or on the IBM Storwize V3700 and the other IBM SAN Volume Controller, Storwize V3700, or Storwize V7000 storage system, to make the partnership fully functional.

Interconnection: Interconnects between IBM Storwize products were introduced in Version 6.3.0. Because the IBM Storwize V3700 supports only Version 7.1.0 or higher, there is no problem with support for this functionality. However, any other Storwize product must be at a minimum level of 6.3.0 to connect to the IBM Storwize V3700, and the IBM Storwize V3700 must set the replication layer by using the svctask chsystem -layer replication command, within the limitations that are described in Introduction to layers on page 499.
Introduction to layers
IBM Storwize V3700 implements the concept of layers. Layers determine how the IBM Storwize portfolio interacts with the IBM SAN Volume Controller. Currently, the following layers are available: replication and storage. The replication layer is used when you want to use the IBM Storwize V3700 with one or more IBM SAN Volume Controllers as a Remote Copy partner. The storage layer is the default mode of operation for the IBM Storwize V3700, and is used when you want to use the IBM Storwize V3700 to present storage to an IBM SAN Volume Controller. The layer for the IBM Storwize V3700 can be switched by running the svctask chsystem -layer replication command. Generally, switch the layer while your IBM Storwize V3700 system is not in production. This situation prevents potential disruptions because layer changes are not I/O-tolerant. Figure 10-73 shows the effect of layers on IBM SAN Volume Controller and IBM Storwize V3700 partnerships.
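The layer can be checked and changed from the CLI. The following lines are a minimal sketch: the chsystem command is named in this section, while the lssystem check (with its output abbreviated to the relevant field) is an assumption based on standard Storwize CLI behavior:

svcinfo lssystem
...
layer storage
...
svctask chsystem -layer replication

After the command completes, lssystem reports layer replication and the system can participate in Remote Copy partnerships with IBM SAN Volume Controller systems.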
The replication layer allows an IBM Storwize V3700 system to be a Remote Copy partner with an IBM SAN Volume Controller, while the storage layer allows an IBM Storwize V3700 system to function as back-end storage for an IBM SAN Volume Controller. An IBM Storwize V3700 system cannot be in both layers at the same time.
Partnership topologies
A partnership between up to four IBM Storwize V3700 systems is allowed. The following typical partnership topologies between multiple IBM Storwize V3700 systems are available: Daisy-chain topology, as shown in Figure 10-74.
Figure 10-74 Daisy chain partnership topology for IBM Storwize V3700
Partnerships: These partnerships are valid for configurations with SAN Volume Controllers and IBM Storwize V3700 systems if the IBM Storwize V3700 systems are using the replication layer. They are also valid for IBM Storwize V5000 and V7000 products.
Partnership states
A partnership has the following states:
Partially Configured: Indicates that only one cluster partner is defined from a local or remote cluster to the displayed cluster and is started. For the displayed cluster to be configured fully and to complete the partnership, you must define the cluster partnership from the cluster that is displayed to the corresponding local or remote cluster.
Fully Configured: Indicates that the partnership is defined on the local and remote clusters and is started.
Remote Not Present: Indicates that the remote cluster is not present for the partnership.
Partially Configured (Local Stopped): Indicates that the local cluster is only defined to the remote cluster and the local cluster is stopped.
Fully Configured (Local Stopped): Indicates that a partnership is defined on the local and remote clusters and the remote cluster is present, but the local cluster is stopped.
Fully Configured (Remote Stopped): Indicates that a partnership is defined on the local and remote clusters and the remote cluster is present, but the remote cluster is stopped.
Fully Configured (Local Excluded): Indicates that a partnership is defined between a local and remote cluster; however, the local cluster is excluded. Usually this state occurs when the fabric link between the two clusters is compromised by too many fabric errors or slow response times of the cluster partnership.
Fully Configured (Remote Excluded): Indicates that a partnership is defined between a local and remote cluster; however, the remote cluster is excluded. Usually this state occurs when the fabric link between the two clusters is compromised by too many fabric errors or slow response times of the cluster partnership.
Fully Configured (Remote Exceeded): Indicates that a partnership is defined between a local and remote cluster and the remote is available; however, the remote cluster exceeds the number of allowed clusters within a cluster network. A maximum of four clusters can be defined in a network. If the number of clusters exceeds that limit, the IBM Storwize V3700 system determines the inactive cluster or clusters by sorting all the clusters by their unique identifier in numerical order. Any inactive cluster partner that is not in the top four of the cluster unique identifiers shows Fully Configured (Remote Exceeded).
The two volumes in a relationship must be the same size. A Remote Copy relationship can be established between volumes within one IBM Storwize V3700 storage system, which is called an intra-cluster relationship. A relationship can also be established between different IBM Storwize V3700 storage systems, or between an IBM Storwize V3700 storage system and an IBM SAN Volume Controller, IBM Storwize V5000, or IBM Storwize V7000, which is called an inter-cluster relationship. Using a Remote Copy target volume as a Remote Copy source volume is not allowed. A FlashCopy target volume can be used as a Remote Copy source volume and as a Remote Copy target volume.
Metro Mirror
Metro Mirror is a type of Remote Copy that creates a synchronous copy of data from a master volume to an auxiliary volume. With synchronous copies, host applications write to the master volume but do not receive confirmation that the write operation completed until the data is written to the auxiliary volume. This action ensures that both volumes have identical data when the copy completes. After the initial copy completes, the Metro Mirror function maintains a fully synchronized copy of the source data at the target site at all times. Figure 10-78 shows how a write to the master volume is mirrored to the cache of the auxiliary volume before an acknowledgement of the write is sent back to the host that issued the write. This process ensures that the auxiliary is synchronized in real time if it is needed in a failover situation.
The Metro Mirror function supports copy operations between volumes that are separated by distances up to 300 km. For disaster recovery purposes, Metro Mirror provides the simplest way to maintain an identical copy on both the primary and secondary volumes. However, as with all synchronous copies over remote distances, there can be a performance impact to host applications. This performance impact is related to the distance between primary and secondary volumes and, depending on application requirements, its use might be limited based on the distance between sites.
Global Mirror
Global Mirror provides an asynchronous copy, which means that the secondary volume is not an exact match of the primary volume at every point in time. The Global Mirror function provides the same function as Metro Mirror Remote Copy without requiring the hosts to wait for the full round-trip delay of the long-distance link; however, some delay can be seen on the hosts in congested or overloaded environments. Make sure that you closely monitor and understand your workload. In asynchronous Remote Copy (which Global Mirror provides), write operations are completed on the primary site and the write acknowledgement is sent to the host before it is received at the secondary site. An update of this write operation is sent to the secondary site at a later stage, which provides the capability to perform Remote Copy over distances that exceed the limitations of synchronous Remote Copy. The distance of Global Mirror replication is limited primarily by the latency of the WAN link provided. Global Mirror requires a round-trip time of no more than 80 ms for data sent to the remote location. The propagation delay is roughly 8.2 microseconds per mile or 5 microseconds per kilometer for Fibre Channel connections. Each device in the path adds a delay of about 25 microseconds. Devices that use software (such as some compression devices) add much more time. The time that is added by software-assisted devices is highly variable and should be measured directly. Be sure to include these times when you are planning your Global Mirror design. You should also measure application performance that is based on the expected delays before Global Mirror is fully implemented. The IBM Storwize V3700 storage system provides an advanced feature of Global Mirror that permits you to test performance implications before Global Mirror is deployed and a long-distance link is obtained. This advanced feature is enabled by modifying the IBM Storwize V3700 storage system parameters gmintradelaysimulation and gminterdelaysimulation. These two parameters can be used to simulate the write delay to the secondary volume. The delay simulation can be enabled separately for intra-cluster or inter-cluster Global Mirror. You can use this feature to test an application before the full deployment of the Global Mirror feature. For more information about how to enable the CLI feature, see Appendix A, Command-line interface setup and SAN Boot on page 593. Figure 10-79 on page 505 shows that a write operation to the master volume is acknowledged back to the host that is issuing the write before the write operation is mirrored to the cache for the auxiliary volume.
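The delay simulation parameters can be set directly from the CLI. The following lines are a minimal sketch, assuming the values are specified in milliseconds (with 0 disabling the simulation) and that a 20 ms delay is being simulated for inter-cluster traffic only:

svctask chsystem -gminterdelaysimulation 20
svctask chsystem -gmintradelaysimulation 0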
The Global Mirror algorithms maintain a consistent image on the auxiliary volume at all times. They achieve this consistent image by identifying sets of I/Os that are active concurrently at the master, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. In a failover scenario where the secondary site must become the master source of data, certain updates might be missing at the secondary site depending on the workload pattern and the bandwidth and distance between local and remote site. Therefore, any applications that use this data must have an external mechanism for recovering the missing updates and reapplying them; for example, a transaction log replay.
To address these issues, Change Volumes were added as an option for Global Mirror relationships. Change Volumes use the FlashCopy functionality, but they cannot be manipulated as FlashCopy volumes because they are special purpose only. Change Volumes replicate point-in-time images on a cycling period (the default is 300 seconds). This means that your replication bandwidth only needs to cover the state of the data at the point in time the image was taken, instead of all the updates that occurred during the period. This can provide significant reductions in replication volume. Figure 10-80 shows a basic Global Mirror relationship without Change Volumes.
With Change Volumes, a FlashCopy mapping exists between the primary volume and the primary Change Volume. The mapping is updated during a cycling period (configurable from every 60 seconds to one day). The primary Change Volume is then replicated to the secondary Global Mirror volume at the target site, which is then captured in another Change Volume on the target site. This arrangement provides an always consistent image at the target site and protects your data from being inconsistent during resynchronization. Take a closer look at how Change Volumes might reduce replication traffic.
Figure 10-82 shows a number of I/Os on the source volume and the same number on the target volume, and in the same order. Assuming that this set is the same set of data being updated over and over, these updates are wasted network traffic and the I/O can be completed much more efficiently, as shown in Figure 10-83.
In Figure 10-83, the same data is being updated repeatedly, so Change Volumes demonstrate significant I/O transmission savings because you must send only I/O number 16, which was the last I/O before the cycling period ended.
The cycling period can be adjusted by running the chrcrelationship -cycleperiodseconds <60-86400> command. If a copy does not complete in the cycle period, the next cycle does not start until the prior one completes. For this reason, the use of Change Volumes gives you the following possibilities for recovery point objective (RPO): If your replication completes within the cycling period, your RPO is twice the cycling period. If your replication does not complete within the cycling period, your RPO is twice the completion time. The next cycling period starts immediately after the prior one is finished. Careful consideration must be put into balancing your business requirements with the performance of Global Mirror with Change Volumes. Global Mirror with Change Volumes increases the inter-cluster traffic for more frequent cycling periods, so going as short as possible is not always the answer. In most cases, the default should meet your requirements and perform reasonably well.
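For example, to set a five-minute cycling period on an existing Global Mirror relationship (the relationship name RC_REL1 is hypothetical):

svctask chrcrelationship -cycleperiodseconds 300 RC_REL1

With this setting, if each cycle completes within 300 seconds, the worst-case RPO is 2 x 300 = 600 seconds.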
Important: When Global Mirror volumes are used with Change Volumes, make sure that you remember to select the Change Volume on the auxiliary (target) site. Failure to do so leaves you exposed during a resynchronization operation. Also, the GUI automatically creates Change Volumes for you; however, it is a limitation of this initial release that they are fully provisioned volumes. To save space, create thin-provisioned volumes beforehand and use the existing volume option to select them as your Change Volumes.
A Remote Copy relationship can belong to only one consistency group, but it does not have to belong to a consistency group at all. Relationships that are not part of a consistency group are called stand-alone relationships. A consistency group can contain zero or more relationships. All relationships in a consistency group must have matching primary and secondary clusters, which are sometimes referred to as master and auxiliary clusters. All relationships in a consistency group must also have the same copy direction and state. Metro Mirror and Global Mirror relationships cannot belong to the same consistency group. A copy type is automatically assigned to a consistency group when the first relationship is added to the consistency group. After the consistency group is assigned a copy type, only relationships of that copy type can be added to the consistency group.
ConsistentDisconnected: The volumes in this half of the consistency group are all operating in the secondary role and can accept read I/O operations but not write I/O operations.
Empty: The consistency group does not contain any relationships.
Zoning recommendation
Node canister ports on each IBM Storwize V3700 must communicate with each other for the partnership creation to be performed. These ports must be visible to each other on your SAN. Proper switch zoning is critical to facilitating inter-cluster communication. Consider the following SAN zoning recommendations:
For each node canister, exactly two Fibre Channel ports should be zoned to exactly two Fibre Channel ports from each node canister in the partner cluster.
If dual-redundant inter-switch links (ISLs) are available, the two ports from each node should be split evenly between the two ISLs; that is, exactly one port from each node canister should be zoned across each ISL. For more information, see this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003634&myns=s033&mynp=familyind5329743&mync=E
Additionally, all local zoning rules should be followed. A properly configured SAN fabric is key not only to local SAN performance, but also to Remote Copy. For more information, see this website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp?topic=%2Fcom.ibm.storwize.v3700.641.doc%2Fv3700_ichome_641.html
Fabrics: When a local fabric and a remote fabric are connected for Remote Copy purposes, the ISL hop count between a local node and a remote node cannot exceed seven.
Bandwidth
The bandwidth must satisfy the following requirements:
If you are not using Change Volumes: it can sustain the peak write load for all mirrored volumes and the background copy traffic.
If you are using Change Volumes with Global Mirror: it can sustain the change rate of the source Change Volumes and the background copy traffic.
Additional background copy rate (the best practice is 10% to 20% of maximum peak load) is needed for initial synchronization and resynchronization.
Remote Copy internal communication at idle, with or without Change Volumes, is approximately 2.6 Mbps (the minimum amount).
Redundancy: If the link between two sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency requirements can be met during single-failure conditions.
FlashCopy target
This combination is supported. However, issuing a stop command with the -force parameter might cause the Remote Copy relationship to fully resynchronize.
If you are not using Global Mirror with Change Volumes, you can use the FlashCopy feature for disaster recovery purposes to create a consistent copy of an image before you restart a Global Mirror relationship. When a consistent relationship is stopped, the relationship enters the consistent_stopped state. While in this state, I/O operations at the primary site continue to run, but updates are not copied to the secondary site. When the relationship is restarted, the synchronization process for new data is started. During this process, the relationship is in the inconsistent_copying state, and the secondary volume for the relationship cannot be used until the copy process completes and the relationship returns to the consistent state. To protect against this exposure, start a FlashCopy operation for the secondary volume before you restart the relationship. Then, while the relationship is in the Copying state, the FlashCopy feature can provide a consistent copy of the data. If the relationship does not reach the synchronized state, you can use the FlashCopy target volume at the secondary site.
Figure 10-85 shows why the effective transit time should only be measured by using packets large enough to hold a Fibre Channel frame. This packet size is 2148 bytes (2112 bytes of payload and 36 bytes of header) and you should allow some more capacity to be safe because different switching vendors have optional features that might increase this size.
Figure 10-85 The effect of packet size (in bytes) versus the link size
Before you proceed, consider the second largest component of your round-trip time: serialization delay. Serialization delay is the amount of time that is required to move a packet of data of a specific size across a network link of a given bandwidth. It is based on a simple concept: the time that is required to move a specific amount of data decreases as the data transmission rate increases. In Figure 10-85, there are orders of magnitude of difference between the different link bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient, and why you should never use a TCP/IP ping to measure round-trip time for FCIP traffic. Figure 10-85 compares the amount of time, in microseconds, that is required to transmit a packet across network links of varying bandwidth capacity. The following packet sizes are used:
64 bytes: the size of the common ping packet.
1500 bytes: the size of the standard TCP/IP packet.
2148 bytes: the size of a Fibre Channel frame.
Remember that your path MTU affects the delay that a packet incurs when it causes fragmentation, or when it is too large and causes too many retransmissions after a packet is lost. After you verify your latency by using the correct packet size, proceed with normal hardware troubleshooting.
Creating a partnership
No partnership is defined in our example (see Figure 10-87), so you must create a partnership between the IBM Storwize V3700 systems. Click New Partnership in the Partnership window.
Check the zoning and the system status and make sure that the clusters can see each other. Then you can create your partnership by selecting the appropriate remote storage system (see Figure 10-89), and defining the available bandwidth between both systems.
Figure 10-89 Select the remote IBM Storwize storage system for a new partnership
The bandwidth that you enter here is used by the background copy process between the clusters in the partnership. To set the background copy bandwidth optimally, make sure that you consider all three resources (the primary storage, the inter-cluster link bandwidth, and the auxiliary storage) to avoid overloading them and affecting the foreground I/O latency. Click Create and the partnership definition is complete on the first IBM Storwize V3700 system. You can find the partnership listed in the left pane of the Partnership window. If you select the partnership, more information for this partnership is displayed on the right, as shown in Figure 10-90 on page 518.
Important: The state of the partnership is Partially Configured: Local because we did not define it on the other IBM Storwize V3700. For more information about partnership states, see Remote Copy and consistency group states on page 509. Complete the same steps on the second storage system for the partnership to become fully configured. The Remote Copy partnership is now implemented between the two IBM Storwize V3700 systems and both systems are ready for further configuration of Remote Copy relationships, as shown in Figure 10-91.
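The GUI steps map onto CLI commands that must be run on both systems. The following lines are a hedged sketch; the system names and the 200 Mbps bandwidth value are assumptions for illustration:

svctask mkpartnership -bandwidth 200 ITSO_V3700_2   (run on the first system)
svctask mkpartnership -bandwidth 200 ITSO_V3700_1   (run on the second system)
svcinfo lspartnership                               (verify the partnership state on either system)

Until the second command is run, lspartnership on the first system reports the partially configured state that is described earlier in Partnership states.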
You can also change the bandwidth setting for the partnership in the Partnerships window. Click Apply Changes to confirm your modification.
After you stop the partnership, your partnership is listed as Fully Configured: Stopped, as shown in Figure 10-93.
You can restart a stopped partnership by clicking Start Partnership from the Actions drop-down menu. The partnership returns to the fully configured status when it is restarted.
Deleting a partnership
You can delete a partnership by selecting Delete Partnership from the Actions drop-down menu, as shown in Figure 10-92.
The Remote Copy window (see Figure 10-95) is where you can manage Remote Copy relationships and Remote Copy consistency groups.
The Remote Copy window displays a list of Remote Copy consistency groups. You can also take actions on the Remote Copy relationships and Remote Copy consistency groups. Click Not in a Group and all the Remote Copy relationships that are not in any Remote Copy consistency groups are displayed. To customize the blue column heading bar and select different attributes of Remote copy relationships, right-click anywhere in the blue bar.
You must select where your auxiliary (target) volumes are: on the local system or on the already defined second storage system. In our example (see Figure 10-97), we choose another system to build an inter-cluster relationship. Click Next to continue.
The Remote Copy master and auxiliary volume must be specified. Both volumes must have the same size. As shown in Figure 10-98 on page 523, the system offers only appropriate auxiliary candidates with the same volume size as the selected master volume. After you select the volumes based on your requirement, click Add.
You can define multiple and independent relationships by clicking Add. You can remove a relationship by clicking the red cross. In our example, we create two independent Remote Copy relationships, as shown in Figure 10-99.
A window opens and asks if the volumes in the relationship are already synchronized. In most situations, the data on the master volume and on the auxiliary volume are not identical, so click No and then click Next to enable an initial copy, as shown in Figure 10-100.
If you select Yes (that is, you indicate that the volumes are already synchronized), a warning message opens, as shown in Figure 10-101. Make sure that the volumes are truly identical, and then click OK to continue.
Figure 10-101 Warning message to make sure that the volumes are synchronized
You can choose to start the initial copying progress now, or wait to start it at a later time. In our example, select Yes, start copying now and then click Finish, as shown in Figure 10-102.
After the Remote Copy relationships creation completes, two independent Remote Copy relationships are defined and displayed in the Not in a Group list, as shown in Figure 10-103.
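The same relationships can be created and started from the CLI. The following lines are a minimal sketch with hypothetical volume, system, and relationship names; add the -global parameter for Global Mirror (omit it for Metro Mirror), and use -sync only if the volumes are truly identical:

svctask mkrcrelationship -master REDBOOK_VOL1 -aux REDBOOK_VOL1_AUX -cluster ITSO_V3700_2 -name MM_REL1
svctask startrcrelationship MM_REL1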
Optionally, you can monitor the ongoing initial synchronization in the Running Tasks status indicator, as shown in Figure 10-104 on page 525. Highlight one of the operations and click it to see the progress.
A prompt appears. Select the option to allow secondary read/write access, if required, and then click Stop Relationship, as shown in Figure 10-106.
After the stop completes, the state of the Remote Copy relationship is changed from Consistent Synchronized to Idling, as shown in Figure 10-107. Read/write access to both volumes is now allowed unless you selected otherwise.
When a Remote Copy relationship is started, the most important consideration is the copy direction. Both the master and the auxiliary volume can be the primary. Make your decision based on your requirements and click Start Relationship. In our example, we choose the master volume to be the primary, as shown in Figure 10-109.
A warning message opens and shows you the consequences of this action, as shown in Figure 10-111. If you switch the Remote Copy relationship, the copy direction of the relationship becomes the opposite; that is, the current primary volume becomes the secondary, while the current secondary volume becomes the primary. Write access to the current primary volume is lost and write access to the current secondary volume is enabled. If it is not a disaster recovery situation, you must stop your host I/O to the current primary volume in advance. Make sure that you are prepared for the consequences, and if so, click OK to continue.
Figure 10-111 Warning message for switching direction of a Remote Copy relationship
After the switch completes, your Remote Copy relationship is tagged, as shown in Figure 10-112, and shows you that the primary volume in this relationship was changed.
Figure 10-112 Note the switch icon on the state of the relationship
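The direction switch can also be performed from the CLI. A sketch with a hypothetical relationship name; -primary aux makes the auxiliary volume the new primary:

svctask switchrcrelationship -primary aux MM_REL1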
Enter the new name for the Remote Copy relationship and click Rename.
You must confirm this deletion by verifying the number of relationships to be deleted, as shown in Figure 10-115. Click Delete to proceed.
You must enter a name for your new consistency group, as shown in Figure 10-117.
You are prompted for the location of auxiliary volumes, as shown in Figure 10-118 on page 532. In our case, these volumes are on another system. Select the option and from the drop-down menu, select the correct remote system. In our example, we only have one remote system defined. Click Next to continue.
Figure 10-118 Remote Copy consistency group auxiliary volume location window
You are then prompted to create an empty consistency group or add relationships to it, as shown in Figure 10-119.
If you select No and click Finish, the wizard completes and creates an empty Remote Copy Consistency Group. Selecting Yes prompts for the type of copy to create, as shown in Figure 10-120.
Choose the relevant copy type and click Next. In the following window, you can choose existing relationships to add to the new consistency group. This step is optional. Use the Ctrl and Shift keys to select multiple relationships to add. If you do not want to use any of these relationships and prefer to create relationships later, click Next without highlighting anything. However, if you already highlighted a relationship and then decide that you do not want any of them, the selection cannot be removed; you must cancel the wizard and start again, as shown in Figure 10-121 on page 534.
The next window is optional and gives the option to create relationships to add to the consistency group, as shown in Figure 10-122.
Select the relevant Master and Auxiliary volumes for the relationship you want to create and click Add. Multiple relationships can be defined by selecting another Master and Auxiliary volume and clicking Add again. When you finish, click Next. The next window prompts for whether the relationships are synchronized, as shown in Figure 10-123.
In the next window, you are prompted whether you want to start the volumes copying now, as shown in Figure 10-124.
After you select this option, click Finish to create the Remote Copy Consistency Group. Click Close to close the task window and the new consistency group is now shown in the GUI, as shown in Figure 10-125 on page 536.
In our example, we created a consistency group with a single relationship and added further Remote Copy relationships to the consistency group afterward. You can find the name and the status of the consistency group next to the Relationship function icon. It is easy to change the name of the consistency group by right-clicking the name, selecting Rename, and then entering a new name. Alternatively, you can highlight the consistency group and select Rename from the Actions drop-down menu. Similarly, you can find all the Remote Copy relationships in this consistency group below the Relationship function icon. Actions on the Remote Copy relationships can be applied here by using the Actions drop-down menu or by right-clicking the relationships, as shown in Figure 10-126.
You must choose the consistency group to which to add the Remote Copy relationships. Based on your requirements, select the appropriate consistency group and click Add to Consistency Group, as shown in Figure 10-128.
Figure 10-128 Choose the consistency group to add the remote copies to
Your Remote Copy relationships are now in the consistency group that you selected.
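From the CLI, the equivalent steps are to create the consistency group, move a relationship into it, and start it. A hedged sketch; the group and relationship names are hypothetical:

svctask mkrcconsistgrp -cluster ITSO_V3700_2 -name ITSO_CG1
svctask chrcrelationship -consistgrp ITSO_CG1 MM_REL1
svctask startrcconsistgrp -primary master ITSO_CG1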
The consistency group starts copying data from the primary to the secondary.
You can allow read/write access to secondary volumes by selecting the option (see Figure 10-131) and then clicking Stop Consistency Group.
Figure 10-131 Confirm consistency group stop and decide to allow secondary read/write access
A warning message opens, as shown in Figure 10-133 on page 540. After the switch, the primary cluster in the consistency group is changed. Write access to current master volumes is lost, while write access to the current auxiliary volumes is enabled. This affects host access, so make sure that these settings are what you need. If the settings are as you want them, click OK to continue.
You are prompted to confirm the Remote Copy relationships you want to delete from the consistency group, as shown in Figure 10-135 on page 541. Make sure the Remote Copy relationships that are shown in the field are the ones you want to remove from the consistency group, and then click Remove to proceed.
Figure 10-135 Confirm the relationships to remove from the Remote Copy consistency group
After the removal process completes, the Remote Copy relationships are deleted from the consistency group and displayed in the Not in a Group list.
You must confirm the deletion of the consistency group, as shown in Figure 10-137. Click OK if you are sure that this consistency group must be deleted.
The consistency group is deleted. Any relationships that were part of the consistency group are returned to the Not in a Group list.
Chapter 11. RAS, monitoring, and troubleshooting
For more information about replacing the control or expansion enclosure midplane, see the IBM Storwize V3700 Information Center at this website: http://pic.dhe.ibm.com/infocenter/storwize/V3700_ic/index.jsp
USB ports
There are two USB connectors side-by-side, numbered 1 on the left and 2 on the right. There are no indicators that are associated with the USB ports. Figure 11-3 shows the USB ports.
Ethernet ports
There are two 10/100/1000 Mbps Ethernet ports side-by-side on the canister, numbered 1 on the left and 2 on the right. Port 1 is required and port 2 is optional. The ports are shown in Figure 11-4.
Each port has two LEDs and their status is shown in Table 11-1.
Table 11-1 Ethernet LED status
LED          Color    Meaning
Link state   Green    On when there is an Ethernet link.
Activity     Yellow   Flashing when there is activity on the link.
SAS ports
There are four 6-Gbps Serial Attached SCSI (SAS) ports side-by-side on each canister, numbered 1 on the left to 4 on the right. The IBM Storwize V3700 uses ports 1, 2, and 3 for host connectivity and port 4 to connect optional expansion enclosures. The ports are shown in Figure 11-5.
The IBM Storwize V3700 uses SFF mini-SAS connector cables to connect expansion enclosures, as shown in Figure 11-6.
Battery status
Each node canister includes a battery, the status of which is displayed on three LEDs on the back of the unit, as shown in Figure 11-7.
Canister status
The status of each canister is displayed by three LEDs, as shown in Figure 11-8.
Of these LEDs, the middle green LED indicates system status and the amber LED indicates a fault.
Memory replacement
For more information about the memory replacement process, see the IBM Storwize V3700 Information Center at this website: http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp?topic=%2Fcom.ibm.storwize.v3700.710.doc%2Ftbrd_rmvrplparts_1955wm.html At the website, browse to Troubleshooting → Removing and replacing parts → Replacing the node canister memory (4-GB DIMM).
Complete the following steps to replace the BBU: 1. Grasp the blue touch points on each end of the battery, as shown in Figure 11-12.
2. Lift the battery vertically upwards until the connectors disconnect. Important: During a BBU change, the battery must be kept parallel to the canister system board while it is removed or replaced, as shown in Figure 11-13. Keep equal force (or pressure) on each end.
Figure 11-13 BBU replacement: Step 2
SAS ports
SAS ports are used to connect the expansion canister to the node canister or to an extra expansion in the chain. Figure 11-14 shows the SAS ports that are on the expansion canister. Port 1 is for incoming SAS cables and Port 2 for outgoing cables.
Canister status
Each expansion canister has its status displayed by three LEDs, as shown in Figure 11-15.
SAS cabling
Expansion enclosures are attached to control enclosures by using SAS cables. There is one supported SAS chain, and up to four expansion enclosures can be attached to it. The node canister uses SAS port 4 for expansion enclosures.
Important: When a SAS cable is inserted, ensure that the connector is oriented correctly by confirming that the following conditions are met:
The pull tab must be below the connector.
Insert the connector gently until it clicks into place. If you feel resistance, the connector is probably oriented the wrong way. Do not force it.
When inserted correctly, the connector can be removed only by pulling the tab.
The expansion canister has SAS port 1 for channel input and SAS port 2 for output to connect another expansion enclosure.
A strand starts with an SAS initiator chip inside an IBM Storwize V3700 node canister and progresses through SAS expanders, which connect disk drives. Each canister contains an expander. Each drive has two ports, each of which is connected to a different expander and strand. This configuration means both nodes have direct access to each drive and there is no single point of failure. At system initialization, when devices are added to or removed from strands (and at other times), the IBM Storwize V3700 Software performs a discovery process to update the state of the drive and enclosure objects.
Enclosures with 24 2.5-inch drives: control enclosure 2072-24C and expansion enclosure 2072-24E.
Array goal
Each array has a set of goals that describe the wanted location and performance of each array member. A sequence of drive failures and hot spare takeovers can leave an array unbalanced; that is, with members that do not match these goals. The system automatically rebalances such arrays when appropriate drives are available.
RAID level
The IBM Storwize V3700 supports RAID 0, RAID 1, RAID 5, RAID 6, and RAID 10. Each RAID level is described in Table 11-8.
Table 11-8 RAID levels that are supported by the IBM Storwize V3700
RAID level   How data is striped                                                      Drive count (min - max)
0            Data is striped over the member drives. Arrays have no redundancy        1 - 8
             and do not support hot-spare takeover.
1            Provides disk mirroring, which duplicates data between two drives.       2
             A RAID 1 array is internally identical to a two-member RAID 10 array.
5            Arrays stripe data over the member drives with one parity strip on       3 - 16
             every stripe. RAID 5 arrays have single redundancy with higher space
             efficiency than RAID 10 arrays, but with some performance penalty.
             RAID 5 arrays can tolerate no more than one member drive failure.
6            Arrays stripe data over the member drives with two parity strips on      5 - 16
             every stripe. A RAID 6 array can tolerate any two concurrent member
             drive failures.
10           Arrays stripe data over mirrored pairs of drives. RAID 10 arrays have    2 - 16
             single redundancy. The mirrored pairs rebuild independently; one
             member out of every pair can be rebuilding or missing at the same
             time. RAID 10 combines the features of RAID 0 and RAID 1.
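Arrays are typically created through the GUI pool wizards, but the CLI offers the same function. A hypothetical sketch; the drive IDs and pool name are assumptions:

svctask mkarray -level raid5 -drive 0:1:2:3:4 Pool0

This command creates a five-member RAID 5 array from the listed drives and adds it as an MDisk to the storage pool Pool0.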
Disk scrubbing
The scrub process runs when arrays do not have any other background processes. The process checks that the drive logical block addresses (LBAs) are readable and array parity is synchronized. Arrays are scrubbed independently and each array is entirely scrubbed every seven days.
Solid-state drives
Solid-state drives (SSDs) are treated no differently by the IBM Storwize V3700 than traditional hard disk drives (HDDs) in relation to RAID arrays or MDisks. The individual SSDs in the IBM Storwize V3700 are combined into an array, usually in RAID 10 or RAID 5 format. It is unlikely that RAID 6 SSD arrays are used because of the double parity impact, with two SSD logical drives that are used for parity only.
The left side PSU is numbered 1 and the right side PSU is numbered 2.
Run the svcconfig backup command. The progress of the command is shown by advancing dots, as shown in Example 11-2.
Example 11-2 Backup CLI command progress and output
..................................................................................
..................................................................................
....................
CMMVC6155I SVCCONFIG processing completed successfully
The svcconfig backup command creates three files that provide information about the backup process and cluster configuration. These files are created in the /tmp directory on the configuration node and are listed on the support view. The three files that are created by the backup process are described in Table 11-10.
Table 11-10 Files that are created by the backup process
File name               Description
svc.config.backup.xml   Contains your cluster configuration data.
svc.config.backup.sh    Contains the names of the commands that were issued to create the backup of the cluster.
svc.config.backup.log   Contains details about the backup, including any error information that might be reported.
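The backup file can then be copied off the configuration node with any scp client. A sketch that uses PuTTY's pscp; the key file name and the cluster IP address are hypothetical:

pscp -i privatekey.ppk superuser@192.168.1.120:/tmp/svc.config.backup.xml .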
2. Select the configuration node on the support view, as shown in Figure 11-20.
3. Select the Show full log listing... option (as shown in Figure 11-21) to list all of the available log files that are stored on the configuration node.
4. Search for a file named /dumps/svc.config.backup.xml_* (as shown in Figure 11-22). Select the file, right-click it, and select Download.
5. Save the configuration backup file on your management workstation (as shown in Figure 11-23) where it can be found easily.
Even if the configuration backup file is updated automatically, it might be of interest to verify the time stamp of the actual file. Therefore, the /dumps/svc.config.backup.xml_xx file must be opened with an editor, such as WordPad, as shown in Figure 11-24.
Open the /dumps/svc.config.backup.xml_xx file with an editor (we used WordPad) and search for the string timestamp=, which is found near the top of the file. Figure 11-25 shows the file opened and the time stamp information in it.
Important: The amount of time that it takes to perform an upgrade can vary, depending on the amount of preparation work that is required and the size of the environment. Some code levels support upgrades only from specific previous levels. If you upgrade to more than one level above your current level, you might be required to install an intermediate level.
Important: Ensure that you have no unfixed errors in the log and that the system date and time are correctly set. Start the fix procedures, and ensure that you fixed any outstanding errors before you attempt to concurrently upgrade the code.
When new nodes are added to the system, the upgrade package is automatically downloaded to the new node from the IBM Storwize V3700 system. The upgrade can be performed concurrently with normal user I/O operations. However, performance can be affected.
Multipathing requirement
Before you upgrade, ensure that the multipathing driver is fully redundant with every path available and online. You might see errors that are related to some of the paths failing, and the error count increases during the upgrade. This is a result of each node effectively going offline to I/O while it upgrades and restarts. When the node comes back, the paths become available and fully redundant again. After a 30-minute delay, the paths to the other node go down as that node begins to upgrade.
As a first step, the upgrade test utility must be downloaded from the internet. The correct link is provided within the panel. If the tool was downloaded and is stored on the management station, it can be uploaded, as shown in Figure 11-27 on page 566.
The version to which the system should be upgraded must be entered next. By default, the latest code level is shown, as shown in Figure 11-29.
Important: You must choose the correct code level because you cannot recheck this information later. The version that is selected is used throughout the rest of the process.
Figure 11-30 shows the panel that indicates the background test task is running.
The utility can be run as many times as necessary on the same system to perform a readiness check in preparation for a software upgrade. Next, the code must be downloaded to the management workstation. If the code was already downloaded to the management station, it can be directly uploaded to the IBM Storwize V3700, as shown in Figure 11-31. Verify that the correct code file is used.
The automated code upgrade can be started when the Automatic upgrade option is selected in the decision panel (as shown in Figure 11-33 on page 568), which is the default. If the upgrade is to be done manually for any reason, that selection must be made instead (an automatic upgrade is recommended).
If you choose the Service Assistant Manual upgrade option, see 11.4.3, Upgrading software manually on page 568. Selecting Finish starts the upgrade process on the nodes. Messages inform you when the nodes are upgraded. When all nodes are rebooted, the upgrade process is complete. It can take up to two hours to finish this process.
After you select manual upgrade, a warning is shown, as shown in Figure 11-35.
Both nodes are set to status Waiting for Upgrade in the Upgrade Machine Code panel, as shown in Figure 11-36.
2. In the management GUI, select System Details and select the canister (node) you want to upgrade next. As shown in Figure 11-37, select Remove Node in the Action menu, which shows you an alert in Health Status.
The non-configuration node is removed from the GUI Upgrade Machine Code panel, as shown in Figure 11-39.
In the System Details panel, the node is shown as Unconfigured, as shown in Figure 11-40.
3. In the Service Assistant panel, the node that is ready for upgrade must be selected. Select the node showing Node status as service mode and that has no available cluster information, as shown in Figure 11-41.
4. In the Service Assistant panel, select Upgrade Manually and then select the machine code version to which you want to upgrade, as shown in Figure 11-42.
5. Click Upgrade to start the upgrade process on the first node. 6. The node is automatically added to the system after upgrade. Upgrading and adding the node can take up to 30 minutes, as shown in Figure 11-43.
7. Repeat steps 2 - 4 for the remaining node (or nodes). After you remove the configuration node from the cluster for upgrade, a warning appears, as shown in Figure 11-44.
Important: The config node remains in Service State when it is added again to the cluster. Therefore, exit Service State manually. 8. To exit from service state, browse to the home panel of the Service Assistant and open the Action menu. Select Exit Service State, as shown in Figure 11-45.
Both nodes are now back in the cluster (as shown in Figure 11-46) and the system is running on the new code level.
Figure 11-46 Cluster is active again and running new code level
We describe the following fields, which are recommended at a minimum to assist you in diagnosing problems:
Event ID: This number precisely identifies the reason why the event was logged.
Error code: This number describes the service action that should be followed to resolve an error condition. Not all events have error codes that are associated with them. Many event IDs can have the same error code because the service action is the same.
Sequence number: A number that identifies the event.
Event count: The number of events that are coalesced into this event log record.
Fixed: When an alert is shown for an error condition, this field indicates whether the reason for the event was resolved. In many cases, the system automatically marks the events fixed when appropriate. Some events must be manually marked as fixed. If the event is a message, this field indicates that you read and performed the action; the message must be marked as read.
Last time: The time when the last instance of this error event was recorded in the log.
Root sequence number: If set, this number is the sequence number of an event that represents an error that probably caused this event to be reported. Resolve the root event first.
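The event log can also be inspected and maintained from the CLI. A hedged sketch; the sequence number that is passed to cheventlog -fix is hypothetical:

svcinfo lseventlog            (list the event log entries)
svctask cheventlog -fix 120   (mark the event with sequence number 120 as fixed)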
Recommended Actions (default): Only events with recommended actions (status Alert) are displayed. Warning: check this filter option if no event is listed; there might be events that are not associated with recommended actions. Figure 9-51 shows an event log with no items found, which does not necessarily mean that the event log is clear. We check whether the log is clear by using the Show all filter option.
Recommended Actions
A fix procedure is a wizard that helps you to troubleshoot and correct the cause of an error. Some fix procedures reconfigure the system and are based on your responses. Ensure that actions are carried out in the correct sequence to prevent or mitigate loss of data. For this reason, you must always run the fix procedure to fix an error, even if the fix might seem obvious.
To run the fix procedure for the error with the highest priority, go to the Recommended Action panel at the top of the Event page and click Run This Fix Procedure. When you fix higher priority events first, the system can often automatically mark lower priority events as fixed. For more information about how to run a DMP, see 11.5.2, Alert handling and recommended actions on page 577.
Review the event log for more information.
Find the alert in the event log
The default filter in the error log view is Recommended actions. This option lists the alert events only. Figure 11-53 shows the recommended action list.
Gather more information: Show all Find the events that are logged around the alert to understand what happened or find more information for better understanding and to find the original problem. Use the Show all filter to see all of the logged events, as shown in Figure 11-54.
Gather more information: Alert properties More details about the event (for example, enclosure ID and canister ID) can be found in the properties option, as shown in Figure 11-55 on page 579. This information might be of interest for problem fixing or for root cause analysis.
Run recommended action (DMP) It is highly recommended to fix alerts under the guidance of the recommended action (DMP). There are running tasks in the background that might be missed when the DMP is bypassed. Not all alerts have DMPs available. To start the DMP, right-click the alert record or click Run this fix procedure at the top of the window. The steps and panels of DMP are specific to the error that must be fixed. The following figures represent the recommended action (DMP) for the SAS cable event example.
When all of the steps of the DMP are processed successfully, the recommended action is complete and the problem should be fixed. Figure 11-64 shows that the event status changed from red to green. The system health status is green and there are no further alerts that must be addressed.
The Next Recommended Action function orders the alerts by severity and displays the events with the highest severity first. If multiple events have the same severity, they are ordered by date and the oldest event is displayed first. The following order of severity starts with the most severe condition:
Unfixed alerts (sorted by error code; the lowest error code has the highest severity)
Unfixed messages
Monitoring events (sorted by error code; the lowest error code has the highest severity)
Expired events
Fixed alerts and messages
Multiple faults are often resolved by fixing the most severe fault.
The panel that is shown in Figure 11-68 opens and you can select one of four different versions of the svc_snap support package.
The version that you download depends on the event that you are investigating. For example, if you noticed that a node was restarted in the event log, capture the snap with the latest existing statesave. The following components are included in the support package:
Standard logs: Contains the most recent logs that were collected from the system. These logs are most commonly used by Support to diagnose and solve problems.
Standard logs plus one existing statesave: Contains the standard logs from the system and the most recent statesave from any of the nodes in the system. Statesaves are also known as dumps or live dumps.
Standard logs plus most recent statesave from each node: This option is the one most used by the support team for problem analysis. It contains the standard logs from the system and the most recent statesave from each node in the system.
Standard logs plus new statesave: This option might be requested by the Support team for problem determination. It generates a new statesave (livedump) for all of the nodes and packages them with the most recent logs.
Save the resulting snap file in a directory for later use or upload it to IBM Support.
Support information can be downloaded with or without the latest statesave, as shown in Figure 11-70.
satask_result file
The satask_result.html file is the general response to the command that is issued via the USB stick. If the command did not run successfully, it is shown in this file. Otherwise, any general system information is stored here, as shown in Figure 11-72.
2. Select the root level of the system detail tree, click Actions and select Shut Down System, as shown in Figure 11-75.
The following process can be used as an alternative to step 1 and step 2, as shown in Figure 11-76: a. Browse to the Monitoring navigator and open the System view. b. Click the system that is under the system display. c. An information panel opens. Click the Manage tab. d. Click Shut Down System to shut down, as shown in Figure 11-76.
3. The Confirm System Shutdown window opens. A message opens and prompts you to confirm whether you want to shut down the cluster. Ensure that you stopped all FlashCopy mappings, data migration operations, and forced deletions before you continue. Enter Yes and click OK to begin the shutdown process, as shown in Figure 11-77.
4. Wait for the power LED on both node canisters in the control enclosure to flash at 1 Hz, which indicates that the shutdown operation completed (1 Hz is half as fast as the drive indicator LED).
Tip: When you shut down an IBM Storwize V3700, it does not automatically restart. You must manually restart the system.
stopsystem
Are you sure that you want to continue with the shut down?
# Type y to shut down the entire clustered system.
11.7.2 Powering on
Complete the following steps to power on the system: Important: This process assumes all power is removed from the enclosure. If the control enclosure is shut down but the power is not removed, the power LED on all node canisters flash at a rate of half of one second on, half of one second off. In this case, remove the power cords from both power supplies and then reinsert them. 1. Ensure that any network switches that are connected to the system are powered on. 2. Power on any expansion enclosures by connecting the power cord to both power supplies in the rear of the enclosure or turning on the power circuit. 3. Power on the control enclosure by connecting the power cords to both power supplies in the rear of the enclosure and turning on the power circuits. The system starts. The system starts successfully when all node canisters in the control enclosure have their status LED permanently on, which should take no longer than 10 minutes. 4. Start the host applications.
Appendix A. Command-line interface setup and SAN Boot
Command-line interface
The IBM Storwize V3700 system has a powerful CLI, which offers even more functions than the GUI. This section is not intended to be a detailed guide to the CLI because that topic is beyond the scope of this book. The basic configuration of the IBM Storwize V3700 CLI and some example commands are covered. The CLI commands are the same as in the IBM SAN Volume Controller, and there are more commands that are available to manage internal storage. If a task completes in the GUI, the CLI command is always displayed in the task box detail, as shown throughout this book. Detailed CLI information is available in the IBM Storwize V3700 Information Center under the Command Line section, which can be found at this website:
http://pic.dhe.ibm.com/infocenter/storwize/V3700_ic/index.jsp?topic=%2Fcom.ibm.storwize.V3700.641.doc%2Fsvc_clicommandscontainer_229g0r.html
Implementing the IBM Storwize V7000 V6.3, SG24-7938 also has information about the use of the CLI. The commands in that book also apply to the IBM Storwize V3700 system.
Basic setup
In the IBM Storwize V3700 GUI, authentication is done by using a user name and password. The CLI uses a Secure Shell (SSH) connection from the host to the IBM Storwize V3700 system, for which a private and public key pair or a user name and password is necessary. The following steps are required to enable CLI access with SSH keys:
1. A public key and a private key are generated as a pair.
2. The public key is uploaded to the IBM Storwize V3700 system by using the GUI.
3. A client SSH tool must be configured to authenticate with the private key.
4. A secure connection can be established between the client and the IBM Storwize V3700 system.
Secure Shell is the communication vehicle that is used between the management workstation and the IBM Storwize V3700 system. The SSH client provides a secure environment from which to connect to a remote machine. It uses the principles of public and private keys for authentication. SSH keys are generated by the SSH client software. The SSH keys include a public key, which is uploaded and maintained by the clustered system, and a private key, which is kept private on the workstation that is running the SSH client. These keys authorize specific users to access the administration and service functions on the system. Each key pair is associated with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored on the system. New IDs and keys can be added, and unwanted IDs and keys can be deleted. To use the CLI, an SSH client must be installed on that system, the SSH key pair must be generated on the client system, and the client's SSH public key must be stored on the IBM Storwize V3700 system. The SSH client that is used in this book is PuTTY. There is also a PuTTY key generator that can be used to generate the private and public key pair. The PuTTY client can be downloaded at no cost from the following website:
http://www.chiark.greenend.org.uk
Download the following tools:
PuTTY SSH client: putty.exe
PuTTY key generator: puttygen.exe
Make sure that the following options are selected:
SSH-2 RSA
Number of bits in a generated key: 1024
2. Click Generate and move the cursor over the blank area to generate the keys, as shown in Figure A-2.
To generate keys: The blank area that is indicated by the message is the large blank rectangle on the GUI inside the section of the GUI labeled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right side. This action generates random characters to create a unique key pair. 3. After the keys are generated, save them for later use. Click Save public key, as shown in Figure A-3.
4. You are prompted for a name (for example, pubkey) and a location for the public key (for example, C:\Support Utils\PuTTY). Click Save.
Ensure that you record the name and location of this SSH public key because they must be specified later.
Public key extension: By default, the PuTTY key generator saves the public key with no extension. Use the string pub in the name of the public key (for example, pubkey) to differentiate the SSH public key from the SSH private key.
5. Click Save private key, as shown in Figure A-4.
6. You receive a warning message, as shown in Figure A-5. Click Yes to save the private key without a passphrase.
7. When prompted, enter a name (for example, icat), select a secure place as the location, and click Save.
Key generator: The PuTTY key generator saves the private key with the PPK extension.
8. Close the PuTTY key generator.
Uploading the SSH public key to the IBM Storwize V3700
After the key pair is generated, upload the SSH public key to the IBM Storwize V3700 system:
1. In the management GUI, open the Users pane, as shown in Figure A-6.
2. Right-click the user for which you want to upload the key and click Properties, as shown in Figure A-7.
3. To upload the public key, click Browse, select your public key, and click OK, as shown in Figure A-8.
Configuring the PuTTY session
After the public key is uploaded, configure a PuTTY session for the CLI:
1. Start PuTTY. The PuTTY Configuration window opens to the Session view, as shown in Figure A-10 on page 600. In the right side pane, under the Specify the destination you want to connect to section, select SSH. Under the Close window on exit section, select Only on clean exit, which ensures that if there are any connection errors, they are displayed in the user's window.
2. From the Category pane on the left side of the PuTTY Configuration window, click Connection → SSH to open the PuTTY SSH Configuration window, as shown in Figure A-11.
3. In the right side pane, in the Preferred SSH protocol version section, select 2.
4. From the Category pane on the left side of the PuTTY Configuration window, click Connection → SSH → Auth. As shown in Figure A-12, in the right side pane, in the Private key file for authentication field under the Authentication parameters section, browse to or manually enter the fully qualified directory path and file name of the SSH private key file that was created earlier (for example, C:\Support Utils\PuTTY\icat.PPK).
5. From the Category pane on the left side of the PuTTY Configuration window, click Session to return to the Session view, as shown in Figure A-10 on page 600. 6. In the right side pane, enter the host name or system IP address of the IBM Storwize V3700 clustered system in the Host Name field, and enter a session name in the Saved Sessions field, as shown in Figure A-13.
7. Click Save to save the new session.
8. Highlight the new session and click Open to connect to the IBM Storwize V3700 system.
9. PuTTY connects to the system and prompts you for a user name. Enter superuser as the user name and press Enter (see Example A-1).
Example: A-1 Enter user name
login as: superuser Authenticating with public key "rsa-key-20130521" Last login: Tue May 21 15:21:55 2013 from 9.174.219.143 IBM_Storwize:mcr-atl-cluster-01:superuser> The CLI is now configured for IBM Storwize V3700 administration.
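An OpenSSH client can also be used to reach the same CLI. The following lines are a minimal sketch, assuming the example key pair that was generated earlier with ssh-keygen; replace <system_ip> with the management IP address of your system:

# Open an interactive CLI session as superuser
ssh -i v3700key superuser@<system_ip>
# Run a single command non-interactively, for example, list all volumes
ssh -i v3700key superuser@<system_ip> lsvdisk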
Example commands
A detailed description of all the available commands is beyond the scope of this book; this section presents the sample commands that are referenced elsewhere in this book.
The svcinfo and svctask prefixes are no longer needed in IBM Storwize V3700. If you have scripts that use these prefixes, they still run without problems. If you enter svcinfo or svctask and press the Tab key twice, all of the available subcommands are listed. Pressing the Tab key twice also auto-completes a command if the input so far is valid and unique on the system.
Enter lsvdisk, as shown in Example A-2, to list all volumes that are configured on the system. The example shows that six volumes are configured.
Example: A-2 List all volumes
IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID copy_count fast_write_state se_copy_count RC_change compressed_copy_count
0 V3700_Vol1 0 io_grp0 online 0 V3700_Pool 20.00GB striped 6005076300800 empty 1 no 0
1 V3700_Vol2 0 io_grp0 online 0 V3700_Pool 2.00GB striped 6005076300800 empty 1 no 0
2 V3700_Vol3 0 io_grp0 online 0 V3700_Pool 2.00GB striped 6005076300800 empty 1 no 0
3 V3700_Vol4 0 io_grp0 online 0 V3700_Pool 2.00GB striped 6005076300800 empty 1 no 0
4 V3700_Vol5 0 io_grp0 online 0 V3700_Pool 2.00GB striped 6005076300800 empty 1 no 0
5 V3700_Vol6 0 io_grp0 online 0 V3700_Pool 2.00GB striped 6005076300800 empty 1 no 0
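Because the prefixes are still accepted, the following two invocations are equivalent. This short sketch uses the same session as Example A-2:

IBM_Storwize:mcr-atl-cluster-01:superuser>svcinfo lsvdisk
IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisk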
Enter lshost to see a list of all configured hosts on the system, as shown in Example A-3.
Example: A-3 List hosts
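On a system with a single configured host, the concise lshost view is similar to the following sketch. The host name and ID match Example A-5; the port and I/O group counts are illustrative values only:

IBM_Storwize:mcr-atl-cluster-01:superuser>lshost
id name   port_count iogrp_count status
4  ESXi-1 1          4           online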
To map the volume to the host, enter mkvdiskhostmap, as shown in Example A-4.
Example: A-4 Map volumes to host
IBM_Storwize:mcr-atl-cluster-01:superuser>mkvdiskhostmap -host ESXi-1 -scsi 0 ESXi-Redbooks
Virtual Disk to Host map, id [0], successfully created
To verify the host mapping, enter lshostvdiskmap, as shown in Example A-5.
Example: A-5 List all volumes that are mapped to a host
IBM_Storwize:mcr-atl-cluster-01:superuser>lshostvdiskmap ESXi-1
id name SCSI_id vdisk_id vdisk_name vdisk_UID
4 ESXi-1 0 2 ESXi-Redbooks 600507680185853FF000000000000011
In the CLI, more options are available than in the GUI. All advanced settings can be set; for example, I/O throttling. To enable I/O throttling, change the properties of the volume by using the chvdisk command, as shown in Example A-6. To verify the change, run the lsvdisk command.
Example: A-6 Enable I/O throttling
IBM_Storwize:mcr-atl-cluster-01:superuser>chvdisk -rate 1200 -unit mb ESXi-Redbooks
IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisk ESXi-Redbooks
id 2
name ESXi-Redbooks
...
vdisk_UID 600507680185853FF000000000000011
virtual_disk_throttling (MB) 1200
preferred_node_id 2
...
IBM_Storwize:mcr-atl-cluster-01:superuser>
Command output: The lsvdisk command lists all available properties of a volume and its copies. To make it easier to read, lines in Example A-6 were deleted.
If you do not specify the unit parameter, the throttling is based on I/Os instead of throughput, as shown in Example A-7.
Example: A-7 Throttling based on I/O
IBM_Storwize:mcr-atl-cluster-01:superuser>chvdisk -rate 4000 ESXi-Redbooks
IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisk ESXi-Redbooks
id 2
name ESXi-Redbooks
...
vdisk_UID 600507680185853FF000000000000011
throttling 4000
preferred_node_id 2
...
IBM_Storwize:mcr-atl-cluster-01:superuser>
To disable I/O throttling, set the I/O rate to 0, as shown in Example A-8.
Example: A-8 Disable I/O throttling
IBM_Storwize:mcr-atl-cluster-01:superuser>chvdisk -rate 0 ESXi-Redbooks
IBM_Storwize:mcr-atl-cluster-01:superuser>lsvdisk ESXi-Redbooks
id 2
...
vdisk_UID 600507680185853FF000000000000011
throttling 0
preferred_node_id 2
IBM_Storwize:mcr-atl-cluster-01:superuser>
SAN Boot
IBM Storwize V3700 supports SAN Boot for Windows, VMware, and many other operating systems. SAN Boot support can change, so regularly check the IBM Storwize V3700 interoperability matrix at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004111
The IBM Storwize V3700 Information Center has more information about SAN Boot for different operating systems. For more information, see this website:
http://pic.dhe.ibm.com/infocenter/storwize/v3700_ic/index.jsp?topic=%2Fcom.ibm.storwize.v3700.710.doc%2Fsvc_hostattachmentmain.html
For more information about SAN Boot, see the IBM System Storage Multipath Subsystem Device Driver User's Guide, GC52-1309-03, which can be found at this website:
ftp://ftp.software.ibm.com/storage/subsystem/UG/1.8--3.0/SDD_1.8--3.0_User_Guide_English_version.pdf
HBAs: You might need to load another HBA device driver during installation, depending on your ESX level and the HBA type.
5. Modify your SAN zoning to allow multiple paths.
6. Check your host to see whether all paths are available, and modify the multipath policy, if required.
b. Set the BIOS settings on the host to find the boot image at the worldwide port name (WWPN) of the node that is zoned to the HBA port.
7. If SDD V1.6 or higher is installed and you ran the bootdiskmigrate command in step 1, reboot your host, update SDDDSM to the latest level, and go to step 14. If SDD V1.6 is not installed, go to step 8.
8. Modify the SAN zoning so that the host sees only one path to the IBM Storwize V3700.
9. Boot the host in single-path mode.
10. Uninstall any multipathing driver that is not supported for the IBM Storwize V3700 system.
11. Install SDDDSM.
12. Restart the host in single-path mode and ensure that SDDDSM was installed correctly.
13. Modify the SAN zoning to enable multipathing.
14. Rescan the drives on your host and check that all paths are available.
15. Reboot your host and enter the HBA BIOS.
16. Configure the HBA settings on the host. Ensure that all HBA ports are boot-enabled and can see both nodes in the IBM Storwize V3700 I/O group that contains the SAN Boot image. Configure the HBA ports for redundant paths.
17. Exit the BIOS utility and finish starting the host.
18. Map any other volumes to the host, as required.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this book. Some publications that are referenced in the following list might be available in softcopy only:
- Implementing the IBM System Storage SAN Volume Controller V6.3, SG24-7933
- Implementing the IBM Storwize V7000 V6.3, SG24-7938
- SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
- Implementing an IBM/Brocade SAN with 8 Gbps Directors and Switches, SG24-6116
You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and other materials at the following website:
http://www.ibm.com/redbooks
Back cover
Easily manage and deploy systems with embedded GUI
Experience rapid and flexible provisioning
Protect data with remote mirroring
Organizations of all sizes are faced with the challenge of managing massive volumes of increasingly valuable data. But storing this data can be costly, and extracting value from the data is becoming more and more difficult. IT organizations have limited resources but must stay responsive to dynamic environments and act quickly to consolidate, simplify, and optimize their IT infrastructures. The IBM Storwize V3700 system provides a smarter solution that is affordable, easy to use, and self-optimizing, which enables organizations to overcome these storage challenges. Storwize V3700 delivers efficient, entry-level configurations that are specifically designed to meet the needs of small and midsize businesses. Designed to provide organizations with the ability to consolidate and share data at an affordable price, Storwize V3700 offers advanced software capabilities that are usually found in more expensive systems. Built upon innovative IBM technology, Storwize V3700 addresses the block storage requirements of small and midsize organizations. Providing up to 240 TB of capacity packaged in a compact 2U, Storwize V3700 is designed to accommodate the most common storage network technologies to enable easy implementation and management. This IBM Redbooks publication is intended for pre- and post-sales technical support professionals and storage administrators. The concepts in this book also relate to the IBM Storwize V3500. This book was written at a software level of Version 7 Release 1.